Hilma af Klint and Yayoi Kusama: Tapping spirituality and visions into art
Hilma af Klint and Yayoi Kusama: Tapping spirituality and visions into art The lives, work, and legacy of these creatives who dabbled in transcendental expression Group IX/SUW №12 by Hilma af Klint Hilma af Klint Deemed the first modern artist of the Western World, Hilma af Klint was a Swedish artist who credited her creative abilities to a divine, spiritual authority. Her abstract paintings in the early 20th century reflected bold, imaginative aesthetics—carrying an essence the world had not yet seen. Because of this, she kept her work intensely private and only permitted its release twenty years after her passing: a collection totaling 1,200 paintings, 100 texts, and 26,000 pages of notes. Only in the three decades since 1986 have her paintings and writings begun to receive serious attention for their distinctive qualities.
https://uxdesign.cc/af-klint-and-kusama-tapping-spirituality-and-visions-into-art-c059e0960a75
['Michelle Chiu']
2020-12-14 00:46:43.851000+00:00
['Design', 'Culture', 'Creativity', 'Visual Design', 'Art']
Every Time Something Happens, Normal Changes Forever
Every Time Something Happens, Normal Changes Forever And we can never go back Do you remember when a loved one was flying to your city for a visit? You would drive to the airport and wait at the gate when they stepped off the plane. If you spent much time in an airport, you saw those joyous reunions at almost every gate. My wife used to travel a lot, and more often than not, I would be the first person she saw when she left the jetway. Nevermore. Think about this. Someone born on September 12, 2001, will turn 19 years old this year. Grown adults in modern society. Probably frequent flyers. And they will never have a memory of 9/11. And the things that are inconveniences to us are normal to them. People who flew before 1970 remember a time with no scanning at all. The rash of skyjackings in the late ’60s changed that forever. Skyjacking. There’s a word most people under 50 aren’t familiar with. The point is, every time something abnormal happens, normal changes. Forever. The world has been through pandemics before. A lot of them. But this one is different. The way the world is reacting to it is different. And those reactions, not the pandemic itself, are what will change normal again. In what way? That remains to be seen. But some of the things we are doing now, right or wrong, will have a lasting effect on the world going forward. And some will be the new normal. Hoarding That’s a big one, and the most unusual. It’s unusual in that it was entirely unnecessary and senseless. No one needed to hoard anything, and the resulting shortages should never have occurred. But they did. And as a result, normal will shift a bit. Pantries will always be a bit more stocked than in the past. People won’t buy just one of anything. “Let’s get extra, just in case.” I think the shelf life on food labels will become more prominent. There will be an uptick in the sale of large freezers. And bidets. No one will ever have fewer than three cases of toilet paper stored in their homes. Ever. Social Distancing Will this ever completely go away? How long before people stop feeling a little uncomfortable going into someone else’s house? Or letting others into theirs. “Thanks for coming, it was great seeing you.” Door closes. “Get the sanitizer and wipe everything down!” How long before we stop being uneasy passing someone in the grocery store? Will we ever go to the store again without seeing someone wearing a mask and gloves? I think we will see changes in new and remodeled stores. Wider aisles. More touchless checkout. Plexiglas mounted between the cashier and customers. Will checkouts soon look like the windows at a pawnshop? Cash. It’s been going out of favor for decades. This may be the death knell. Carrying around and touching little pieces of paper that have been handled by thousands, if not millions, of people. Who does that? Masks Everyone will own a few. Managing them will become part of everyday life. “Are you doing laundry today? I want to get the masks done.” They will become fashion accessories. The bedazzled fad will make a comeback. Sites like Redbubble, Etsy, and eBay will see a new growth industry in designer masks. Some of this won’t be a bad thing. I think in places like doctor’s offices and hospitals, masks will become mandatory for everyone. I don’t know who makes disposable masks, but you should buy some of their stock. Now. Travel For the travel industry to recover from this, it will have to make big changes. Normal in travel will change again. 
Somehow, cramming hordes of people together in small spaces has to end. I can see biometrics on the rise. We need to scan and identify people quickly without the bottleneck the process now causes. Just think if the ID process took 5 seconds instead of 20. I have no idea how airlines will react. Seats have gotten smaller and people closer together. We all breathe the same air for hours at a time. Air travel has always been dicey health-wise, so a new normal there is not a bad thing. And cruise ships? Personally, I see little change there. I have taken over 60 cruises, and the procedures on cruise ships have always exceeded those anywhere else. Sure, outbreaks occur. But with 4,000 passengers and crew coming together from all over the planet in small spaces, I think the results have been phenomenal. Washing and sanitizing your hands. That was old news ten years ago on cruise ships. The world changes. Constantly. What we consider normal today would have been considered bizarre by our grandparents. There will be babies born this year into the new normal. How will they view our world? “Mommy, that man touched his face.” What will that world look like?
https://medium.com/live-your-life-on-purpose/every-time-something-happens-normal-changes-forever-68a03773dabc
['Darryl Brooks']
2020-05-06 21:49:08.615000+00:00
['Life Lessons', 'Covid 19', 'Society', 'Health', 'Life']
How I Built a Full-Time Career as a Freelance Writer
Lesson Two: Find a Golden Ticket One of the biggest problems freelancers tend to have is increasing their rates. They secure a few clients, those clients become accustomed to paying x amount of money per article, and then the budding writer gets stuck. They don’t want to demand more money and lose a client, but they also don’t want to spend years writing for the same nominal sums. How, then, do you increase your rates once the money starts coming in? Well, as with any sale, if you’re expecting customers to pay big money, you’d better start demonstrating big value. You wouldn’t spend $10,000 on a fake diamond ring, so don’t expect your clients to pay triple your current rate if you still have next to no experience. It doesn’t matter how much value you claim to provide. Anybody can say they’re providing value. You have to be able to prove it. You wouldn’t pay thousands for something if you couldn’t ascertain its value, so don’t expect clients to pay you a lot of money just because you tell them you’ll provide high-quality content. In the world of writing, the proof isn’t in the pudding, but in the experience. Often, not even a degree in creative writing is enough to persuade a person to pay you. Trust me. I’ve employed many writers, and not once have I asked to see a degree. I ask where they’ve been published. In my experience, being published in reputable spaces has enabled me to ramp up my rates quickly. Last winter, I was charging a standard price of £0.10 per word. Fast-forward 12 months and I’m being paid $500 for 500 words — $1 per word. That’s an enormous jump for a year, and the only way I was able to provide it was by demonstrating value. Interestingly, the client paying me those rates approached me. I didn’t apply to work for them. They found me through my work. So what changed? Well, around a year ago, I was catching up with my mum at a local cafe over a hot mug of coffee when my phone lit up, displaying an email that seemed too good to be true. After reading the subject line, “An Invitation From Arianna Huffington’s Thrive Global,” my first thought was ‘surely this is spam’. Spoiler alert: it wasn’t. Getting published to Thrive was a huge deal for me. But most importantly, it was a golden ticket that allowed me to increase my rates. Adding to that, I had two articles of mine go semi-viral, attracting 50K views each. More recently, my publication, Mind Cafe, exceeded 100K followers and began reaching millions of monthly readers, as well as welcoming esteemed writers such as Nir Eyal, Benjamin Hardy, PhD, and Brianna Wiest to our roster. All of these things communicate one thing to my clients. That is, that I know what I’m doing. I stand out amongst the competition, and therefore they’re happy to pay more money for my work. If you want to charge more and get away from those peanut-paying clients, you need to find ways to make a name for yourself either by growing an audience or being published in a reputable space. Those are your golden tickets — your credentials. Every decent feature is like an extra dollar in your pocket where your freelancing rates are concerned. You’ll probably be rejected a few times, but that’s okay. So long as you’re taking the time to write truly engaging, high-quality content, somebody will publish you, and that somebody will become your golden ticket.
https://medium.com/the-post-grad-survival-guide/how-i-built-a-full-time-career-as-a-freelance-writer-3d66f5090773
['Adrian Drew']
2020-12-11 13:27:15.585000+00:00
['Creativity', 'Business', 'Work', 'Freelancing', 'Writing']
The Asymmetric Top: Tackling Rigid Body Dynamics
Even legends are perplexed by rigid bodies When we think of the hard topics in physics, quantum mechanics and general relativity spring to mind. Although those topics are incredibly complex and non-intuitive, I personally feel that the motion of an asymmetric top trumps both QM and GR in complexity, making it one of the hardest concepts to grasp in physics. In this article, we will explore how to analyse the motion of the asymmetric top, of course with some constraints to make our analysis tractable. Problem Setup We consider the free rotation of an asymmetric top, for which we let all three moments of inertia be different. However, for simplicity, we impose the following condition on the moments of inertia: Anyone who has dabbled in classical mechanics will be familiar with the Euler equations, a set of equations that allow us to analyse rotational systems. In the case of free rotation, in other words when the net moments about all the axes are zero, the Euler equations reduce to the following form: Our task now is to find closed-form solutions (or at least some simpler representation) of the angular velocities of the freely rotating top. If we find the angular velocities of the top, we will have a fully determined system, as we can find any other quantity using the angular velocities. Note that if the motion of the top included translation, then we would also need to know the linear velocities to fully determine the system. Approaching the solution We know from elementary physics that there are two main conservation theorems: total energy and momentum. In the case of rotation, momentum is replaced by angular momentum. Therefore, we can already produce two integrals of the equations of motion: where E and M are the total energy and angular momentum respectively. We can actually write the first equation in terms of the components of angular momentum instead to make our analysis easier. We can already draw some conclusions about the relationship between the various angular velocities and moments of inertia. The energy conservation equation produces an ellipsoid with semi-axes sqrt(2EI1), sqrt(2EI2), sqrt(2EI3), and the angular momentum conservation equation produces a sphere with radius M. So when the angular momentum vector M moves around the component space, it moves along the intersection lines between the ellipsoid and the sphere. The following figure illustrates that: Taken from Mechanics by Landau If we want to be rigorous, we can prove that intersections between the ellipsoid and sphere exist using the following inequality: We can prove this inequality by considering the initial condition we imposed on the moments of inertia and the surface equations of the ellipsoid and sphere. Let us examine the intersection curves in a bit more detail. When M² is slightly bigger than 2EI_1, the intersection curve is a small ellipse near the axis x_1, and as M² tends to 2EI_1, the curve gets smaller until it shrinks to the axis x_1 itself. When M² gets larger, the curve also expands, until M² equals 2EI_2, where the curves become plane ellipses that intersect at the pole x_2. As M² increases past 2EI_2, the curves become closed ellipses near the x_3 poles. Conversely, when M² is slightly less than 2EI_3, the intersection curve is a small ellipse near the axis x_3, and as M² tends to 2EI_3, the curve shrinks to a point on x_3.
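The formulas in this section appeared as images in the original post and did not survive extraction. As a point of reference, here is a sketch of the key equations from Landau and Lifshitz's Mechanics (Section 37), which this passage follows; the axis labels and the ordering of the moments of inertia are assumptions consistent with the discussion above. The Euler equations for free rotation (zero external torque) are

\[
I_1\dot{\Omega}_1 = (I_2 - I_3)\,\Omega_2\Omega_3, \qquad
I_2\dot{\Omega}_2 = (I_3 - I_1)\,\Omega_3\Omega_1, \qquad
I_3\dot{\Omega}_3 = (I_1 - I_2)\,\Omega_1\Omega_2 .
\]

Taking the moments of inertia ordered as \(I_1 < I_2 < I_3\), the two integrals of the motion, written in terms of the angular momentum components \(M_i = I_i\Omega_i\), are

\[
\frac{M_1^2}{2I_1} + \frac{M_2^2}{2I_2} + \frac{M_3^2}{2I_3} = E,
\qquad
M_1^2 + M_2^2 + M_3^2 = M^2 ,
\]

i.e. an ellipsoid with semi-axes \(\sqrt{2EI_1}, \sqrt{2EI_2}, \sqrt{2EI_3}\) and a sphere of radius \(M\), which intersect provided that

\[
2EI_1 \le M^2 \le 2EI_3 .
\]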
We can also note a few things by looking at the nature of the curves. Firstly, since all of the intersection curves are closed, there must be some sort of period to this precession/rotation. Secondly, if we look at the size of the curves near the axes, we get an interesting result. Near the axes x_1 and x_3, in other words near the smallest and largest moments of inertia, the intersection curves are small and lie entirely in the neighborhood of the poles. This can be interpreted as stable precession about the axes of largest and smallest moment of inertia. However, near the axis x_2, the intermediate moment of inertia, the intersection curves are non-local and large. This means that rotation around the intermediate axis is unstable to small deviations. This is consistent with the famous Tennis Racket Theorem (read more about it here), where it can be proved that a perturbation of the motion about the intermediate axis is unstable. It is quite a remarkable way to prove the tennis racket theorem, purely graphically, with minimal mathematics. Analysing Angular Velocity Now that we have an idea of how the angular momentum and energy of the asymmetric top are interrelated, we can proceed and try to understand how the angular velocity evolves over the rotation of the top. We can first represent the angular velocities in terms of one another, and in terms of the constants of the motion, namely the energy and angular momentum. We get the following set of equations: Now, we can substitute these two equations into the Euler equation component for Omega_2: This expression hints to us that the integral for the angular velocity will be some form of elliptic integral. Now we can add another condition to make our life easier: We now suggest the following change of variables to make the solution more tractable: Note that if the inequality is reversed, we can just interchange the roles of the indices 1 and 3 in the substitutions. It is also useful to define a positive parameter k²<1, as Finally, we get a familiar integral Note that the origin of time is taken to be when Omega_2 is zero. This integral cannot be evaluated in terms of elementary functions, but inverting it gives us the Jacobian elliptic function s=sn(tau). Now we can finally write our angular velocities as ‘closed-form’ solutions: Obviously these are periodic functions, and we know that the period is Where K is the complete elliptic integral of the first kind Note that at time T, the angular velocity vector returns to its original position, but that does not mean that the asymmetric top itself returns to its original position. The solutions for the angular velocities might be elegant, but they do little to help us mortals understand the actual motion (maybe Landau was smart enough to visualise it). We can attempt to understand it by converting the angular velocities into equations involving Euler angles instead. However, the mathematics to do so is quite long and tricky, so I have omitted it. A Simpler Case Since the asymmetric top doesn’t really allow us to intuitively understand how the Euler equations work and how to interpret the results, we turn to a simpler problem. The following problem was actually set in Landau’s Mechanics textbook. Reduce to quadratures the problem of the motion of a heavy symmetrical top whose lowest point is fixed. 
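The derivation that follows also refers to equations that appeared as images in the original post. For reference, here is a sketch of the standard results for this problem (Landau and Lifshitz, Mechanics, Section 35, Problem 1). The notation is an assumption chosen to match the narrative below: \(\mu\) is the mass of the top, \(l\) the distance from the fixed point to the centre of mass, and \(I_1' = I_1 + \mu l^2\) the moment of inertia about an axis through the fixed point (the parallel axis theorem mentioned below). The Lagrangian is

\[
L = \frac{I_1'}{2}\left(\dot{\theta}^2 + \dot{\varphi}^2\sin^2\theta\right)
  + \frac{I_3}{2}\left(\dot{\psi} + \dot{\varphi}\cos\theta\right)^2
  - \mu g l\cos\theta .
\]

Since \(\varphi\) and \(\psi\) are cyclic, their conjugate momenta are conserved:

\[
p_\psi = I_3\left(\dot{\psi} + \dot{\varphi}\cos\theta\right) \equiv M_3,
\qquad
p_\varphi = \left(I_1'\sin^2\theta + I_3\cos^2\theta\right)\dot{\varphi} + I_3\dot{\psi}\cos\theta \equiv M_z ,
\]

which give

\[
\dot{\varphi} = \frac{M_z - M_3\cos\theta}{I_1'\sin^2\theta},
\qquad
\dot{\psi} = \frac{M_3}{I_3} - \dot{\varphi}\cos\theta .
\]

Substituting into the conserved energy and defining \(E' = E - M_3^2/2I_3 - \mu g l\) yields

\[
E' = \frac{I_1'}{2}\dot{\theta}^2 + U_{\mathrm{eff}}(\theta),
\qquad
U_{\mathrm{eff}}(\theta) = \frac{\left(M_z - M_3\cos\theta\right)^2}{2I_1'\sin^2\theta} + \mu g l\left(\cos\theta - 1\right),
\]

and hence the quadrature

\[
t = \int \frac{d\theta}{\sqrt{\frac{2}{I_1'}\left[E' - U_{\mathrm{eff}}(\theta)\right]}} .
\]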
We can represent this using the following diagram Taken from Mechanics by Landau We know that the Lagrangian for this system is Since phi and psi are both cyclic coordinates, in other words the Lagrangian does not depend on them explicitly, we can already write down two integrals of the equations of motion Where We also know that the total conserved energy is Using our two integrals of the equations of motion, we obtain Now, substituting those into our energy conservation equation, we obtain Where Note that the energy equation now resembles the sum of a kinetic term (with the moment of inertia dictated by the parallel axis theorem) and a potential term (now called the effective potential). We know from standard analysis that this can now be represented as Evaluating this integral will give us the necessary solutions that we seek for the various angles. Note that this is also an elliptic integral. We know that E’ must be greater than or equal to the effective potential. Also, the effective potential tends to infinity when theta is equal to either 0 or pi, and has a minimum between those points. So, the equation E’=U_eff must have two roots. We denote those two roots theta_1 and theta_2. When theta changes from theta_1 to theta_2, the derivative of phi might change sign, depending on whether or not the difference M_z-M_3cos(theta) changes sign. The different scenarios result in the following types of motion: Taken from Mechanics by Landau When the derivative of phi does not change direction, we get the scenario in 49a, and this type of motion is known as nutation. Note that the curve shows the path of the axis of the top, while the center of the sphere shows the fixed point of the top. If the derivative of phi does change direction, we get 49b, where the azimuthal motion briefly reverses and the axis traces loops. Lastly, if the root of M_z-M_3cos(theta)=0 coincides with theta_1 or theta_2, then the derivatives of both phi and theta vanish there, resulting in the cusped motion of 49c. Conclusion Hopefully this article has given you some insight into how the mind-boggling field of rigid body dynamics works, in particular how asymmetric tops rotate in free space. Notice that sometimes, looking at graphical representations of quantities can give us a lot of information without delving directly into the mathematics and getting stuck. Also, it sometimes helps to tackle simpler problems to understand how we can visualise the solutions, albeit for non-realistic scenarios.
https://medium.com/engineer-quant/the-asymmetric-top-tackling-rigid-body-dynamics-79f833567d22
['Vivek Palaniappan']
2019-09-23 14:28:14.786000+00:00
['Education', 'Physics', 'Science', 'Mathematics', 'Engineering']
What Every Developer & Programmer Needs to Read in June 🔥🔥
What Every Developer & Programmer Needs to Read in June 🔥🔥 Keyul · Jun 10 Hello Readers, Staying at home gives you extra time. You should use this time in the right way. Spend as much of it as you can learning a new programming language, tech stack, or tool, building various projects, and improving your overall skills. Keep reading the amazing posts written by our developers and programmers. These are the best blog posts we have picked for you this month. Photo via Unsplash Special thanks to our contributors Alyssa Atkinson, Raouf Makhlouf, Elvina Sh, Karthick Nagarajan, Tommaso De Ponti, and Madhuresh Gupta. If you like this, please forward this email to your friends to share the QuickCode publication, or tweet about it. Help us reach our 10K-member milestone. Thank you.
https://medium.com/quick-code/what-every-developers-programmers-need-to-read-in-june-cd7f2d7da4e6
[]
2020-06-10 04:39:52.248000+00:00
['Programming', 'Development', 'Coding', 'Software Development', 'Software Engineering']
The Novelist’s Guide to Abject Failure
1. Before you even start, make sure everyone in your life is perfectly satisfied. In order to write, you’re going to need a lot of time to yourself. Seriously, a LOT of time. And, God help you if anyone interrupts you, because you’ll never get back on track. You need great, big gobs of uninterrupted time. If you’re going to write, you’ve got to do this right. Once you decide you’re going to plant yourself at your desk — and it HAS to be a monstrous slab of wood, like the desk Stephen King describes in On Writing — you need to partake in the great NASA tradition of a Go/No-Go check, to make sure that everyone in your life is willing to let you retreat into your writing. You start calling people: Kids — Go or No Go? Spouse — Go or No Go? Work — Go or No Go? In laws — Go or No Go? And so on. If you’re really going to be a writer, the universe will respect your decision, and all of the people in your life will find ways to help themselves. That’s how you know it’s meant to be. 2. You must go into isolation. YOU WILL GO TO THE PAPER TOWNS AND YOU WILL NEVER GO BACK. — Ancient Writerly Proverb. Once everybody in your life has decided that it’s okay for you to write, you must isolate yourself. As the proverb says, you must go to the paper towns. And, since there are no towns actually made out of paper, that means going to the town of the things that make paper: trees. You have to go out and get yourself a cabin in the middle of the forest. Ideally, there should be no access road, no heat, no cable, no internet, no bed. Just a slab of a desk and something to sit on. And electricity, I guess, if you’re working on a computer. Also, you can once again take the Stephen King route and hire yourself a guardian. This should be somebody who loves you, who sees your creative potential, and doesn’t mind getting their hands dirty to keep you on task. An affinity for needles and axes is a bonus. It will be their job to make sure that you stay in perfect isolation, free of any and all physical distractions, and to encourage you with daily writing goals. Yet another bonus: by the time you are finished, you’ll be excited to see your friends and family again. Their impositions on your time won’t seem so infuriating. 3. Wait for the muse to arrive. You don’t want to piss off Zeus. Whatever you do, you definitely don’t want that. In fact, I recommend not crossing anyone in the Greek pantheon. One of the first rules you learn in your creative writing MFA is that you have to wait for your muse to arrive. She has to be there before you do anything. If you start writing without filing the appropriate paperwork and waiting for your muse’s golden stamp of approval, you’ll never write anything again. Just ask Harper Lee. Unfortunately, your muse is a finicky creature with her own timetable and agenda. She usually likes to show up when you’re on the job — or squatting behind a bush in the forest, as the case may be — and she expects you to be ready and waiting. Be ready, writer. 4. Compare yourself to J.K. Rowling J.K. Rowling is not just a beloved author, but the yardstick by which all authors should compare their career. This means that you should do some serious planning. Your first book can go out into your country quietly, but it should explode in the overseas market. By the time you finish your third book, you should be a household name. By the time you finish your fifth book, Hollywood should be knocking down your door. 
Your sixth book should be enough to purchase a majestic castle outside of Edinburgh, and your seventh should make you richer than the Queen. But don’t stop where J.K. Rowling did: if there had been ten Harry Potter books (and I’m not talking about that fanfiction stageplay), she’d be worth more than The Vatican. Keep in mind that none of this was due to luck or serendipity. J.K. Rowling had a plan. J.K. Rowling had a dream. J.K. Rowling kept her eyes on the prize. J.K. Rowling got shit done. It’s totally a formula. 5. Guard Your Ideas Ferociously Your story idea is special. It is precious. Nothing like it has ever existed in the world before. And everybody is out to steal it. Everybody. Every innocent-looking writer in every innocuous-looking writer’s group is going to steal your idea. Keep it secret, keep it safe. Your first step is to register it with your country’s copyright office. This usually has a hefty fee attached to it, but it is worth every penny. Every single penny. You need to build Fort Knox around your idea, and a copyright is on your side. It didn’t hurt Mr. Disney none, now did it? You should also make sure that you breathe nothing about your novel to anyone. Not your spouse, not your siblings, not your kids. They might be spies for other writers. They might be reporting on you. They might take your precious, precious gemstone of an idea for themselves. You only get so many ideas, you know. Once you’re out, you’re out.
https://zachjpayne.medium.com/the-novelists-guide-to-abject-failure-4d5429941687
['Zach J. Payne']
2019-06-28 20:05:33.299000+00:00
['Humor', 'Learning', 'Creativity', 'Art', 'Writing']
I Want to Travel Without Killing Polar Bears
Glacier National Park, where many glaciers are melting and more have already disappeared Recently I read this New York Times article that broke down global warming, and in particular how it is affected by aviation, into digestible information that alarmed me. Did you know that one passenger on a 2,500-mile flight is responsible for melting 32 square feet (or, for those who are metrically challenged: 3 square meters) of Arctic summer sea ice cover? Now we both do. This article popped up on my Facebook feed shortly after having a conversation with a friend about this exact topic (the creepiness of that deserves its own post — I digress). In the time that has passed since the two of us last saw each other, my friend has adopted a vegan lifestyle, forsaken his car, and has now decided that after one “last hurrah” flight to hike the Annapurna Circuit in Nepal next year, he is swearing off transport by plane for good — and he encouraged me to do the same. Basically for the reasons that I’ve just read about in this NYT piece, with the thumbnail image of a sorrowful cartoon polar bear perched on an ice floe as airplanes fly overhead. I was primed to read this article after last week’s conversation with my friend (following which I of course desperately googled the topic, hoping I would find a goldmine of information telling me that flying isn’t all that bad, really. I failed to find the goldmine.) Recently I purchased a one-way flight from Sri Lanka (where I’ve been living and working for the past year) to India. In just a couple of weeks, I’ll be beginning my 5 month tour of Asia (perhaps more on that later). Like many other privileged millennials, I love to see new places, meet interesting people, eat weird food. Get out of my comfort zone. But does my desire to see the world (and my western lifestyle in general) justify that, according to this study by John Nolt, “the average American causes through his/her greenhouse gas emissions the serious suffering and/or deaths of two future people”? A while ago, I read a BBC article explaining why brain biases prevent action on climate change issues. I can see the effect of many of these biases in my own life and my attitude towards world travel — particularly hyperbolic discounting, and the bystander effect. For example, I choose to believe that the present is more important than the future and that it is the job of governments and companies to take climate action, while it is my job to travel the world now, while I’m young (and before it’s all destroyed). I asked another friend his opinion on the topic. His point was essentially that we need better technologies to mitigate these issues, but one thing is certain — humans refuse to downgrade their quality of life. The rest of the world is catching up to the standards of the west, which is a problem because we aren’t practicing sustainable living. I watched a Ted Talk called “100 Solutions to Reverse Global Warming” by Chad Frischmann of Project Drawdown, which claims to be the world’s leading source of climate solutions. “Drawdown” refers to the point when greenhouse gasses in the atmosphere level off and then start to decline, thereby reducing global warming. Through its research, Project Drawdown has identified 100 solutions that will make drawdown possible, all of which exist today and can be fully utilized with technology. By proposing real, attainable solutions, they aim to change the negative narrative surrounding the topic of climate change into one of opportunity and hope. 
Out of all the solutions to reversing global warming laid out by Project Drawdown, aviation was not at the top of the list (it’s #43). So what tops the list? While diving into this website and clicking through many other links, I realized that I didn’t know too much about global warming, what’s causing it, and what can be done to prevent it. But as I read, I began to feel much more optimistic about these proposed solutions and saw how I could be putting many into practice in my own life. A plant-rich diet and reducing food waste? Of course I can do that! Turn off the AC? I live in the tropics, but still I know I can vastly cut down here. Supporting programs to educate girls and promote family planning? Quite easy to get involved. I liked and saved this Instagram post by Sophia Bush, an activist who I admire, also generally relating to this topic: BUT, I still feel a nagging sense of guilt when I think about the damage that the flight from Sri Lanka to India will incur, or the continuing negative impact that my 5 month tour of Asia could potentially cause if I’m not intentional about the way I travel. So, what’s my point in all of this? Honestly, I’m not sure. But with the wealth of knowledge at my fingertips, I can’t use the excuse of ignorance. If I’m going to travel and see the world, I need to make sustainable travel my top priority. What I’ve realized is there are many ways that I can adapt my lifestyle, without downgrading my quality of life, that will in turn help my planet and my fellow humans. But that first requires me to be aware, to care, to learn, and to make changes. As Aziz Ansari put it…
https://sarahngottshall.medium.com/i-want-to-travel-without-killing-polar-bears-f3c711a6c91
['Sarah Gottshall']
2019-08-23 14:59:30.122000+00:00
['Climate Change', 'Environment', 'Millennials', 'Sustainability', 'Travel']
Intelligent Visual Data Discovery with Lux — A Python library
EDA with Lux: Supporting a visual dataframe workflow Image from the presentation with permission from the author df When we print out the data frame, we see the default pandas table display. We can toggle it to get a set of recommendations generated automatically by Lux. Image by Author The recommendations in Lux are organized into three different tabs, which represent potential next steps that users can take in their exploration. The Correlation Tab shows a set of pairwise relationships between quantitative attributes, ranked from the most correlated to the least correlated. Image by Author We can see that the penguin flipper length and body mass show a positive correlation. Penguins’ culmen length and depth also show some interesting patterns, and it appears that there is a negative correlation. To be specific, the culmen is the upper ridge of a bird’s bill. Image by Author The Distribution Tab shows a set of univariate distributions ranked from the most skewed to the least skewed. Image by Author The Occurrence Tab shows a set of bar charts that can be generated from the data set. Image by Author This tab shows there are three different species of penguins — Adelie, Chinstrap, and Gentoo. There are also three different islands — Torgersen, Biscoe, and Dream; and both male and female penguins are included in the dataset. Intent-based recommendations Beyond the basic recommendations, we can also specify our analysis intent. Let's say that we want to find out how the culmen length varies with the species. We can set the intent here as [‘culmen_length_mm’,’species’]. When we print out the data frame again, we can see that the recommendations are steered to what is relevant to the intent that we’ve specified. df.intent = ['culmen_length_mm','species'] df On the left-hand side in the image below, what we see is the Current Visualization corresponding to the attributes that we have selected. On the right-hand side, we have Enhance, i.e. what happens when we add an attribute to the current selection. We also have the Filter tab, which adds a filter while fixing the selected variable. Image by Author If you look closely at the correlations within species, culmen length and depth are positively correlated. This is a classic example of Simpson’s paradox. Image by Author Finally, you can get a pretty clear separation between all three species by looking at flipper length versus culmen length. Image by Author Exporting visualizations from the widget Lux also makes it pretty easy to export and share the generated visualizations. The visualizations can be exported into a static HTML file as follows: df.save_as_html('file.html') We can also access the set of recommendations generated for the data frame via the recommendation property. The output is a dictionary, keyed by the name of the recommendation category. df.recommendation Image by Author Exporting Visualizations as Code Not only can we export visualizations as HTML, but also as code. The GIF below shows how you can view the first bar chart's code in the Occurrence tab. The visualizations can then be exported to code in Altair for further edits, or as a Vega-Lite specification. More details can be found in the documentation.
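For readers who want to try the workflow described above end to end, here is a minimal sketch. It only uses the calls mentioned in the article (setting df.intent, save_as_html, and the recommendation property); the CSV path is a placeholder for wherever you keep the Palmer Penguins data, and the widget toggling itself only works inside a Jupyter notebook.

```python
# Minimal sketch of the Lux workflow described above.
# Assumes `pip install lux-api` and a local copy of the Palmer Penguins CSV
# (the path below is a placeholder).
import pandas as pd
import lux  # importing lux attaches the recommendation widget to DataFrames

df = pd.read_csv("penguins.csv")
df  # in a Jupyter notebook, toggle the widget to see the Correlation,
    # Distribution, and Occurrence tabs described above

# Steer the recommendations with an analysis intent
df.intent = ["culmen_length_mm", "species"]
df  # now shows the Current Visualization, Enhance, and Filter views

# Export the generated visualizations
df.save_as_html("file.html")   # static, shareable HTML
recs = df.recommendation       # dict keyed by recommendation category
print(list(recs.keys()))
```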
https://towardsdatascience.com/intelligent-visual-data-discovery-with-lux-a-python-library-dc36a5742b2f
['Parul Pandey']
2020-12-22 10:26:45.962000+00:00
['Exploratory Data Analysis', 'Python', 'Data Visualization', 'Data Analysis', 'Lux']
12 Ways the World Will Change When Everyone Works Remotely
Workplace studies in 2019 have reached a common conclusion — remote work is here to stay. Once people try working remotely, up to 99% want to continue, while 95% would recommend the practice to others. But that’s not all. A Zapier survey revealed that 74% of workers would quit their jobs for the ability to work from anywhere. Two in three believe that the traditional workplace will be obsolete within the next decade. Source: Buffer State of Remote Work Report They’re right. According to the U.S. Census Bureau, the number of people working remotely has been rising for the past ten years. Meanwhile, UpWork projects that the majority of the workforce will be freelancing as soon as 2027. Globally, one billion people could be working in a remote capacity by 2035. Whether people become remote employees, online entrepreneurs, freelancers, or other gig workers — one thing’s for sure — life will be nothing like the current 9–5. The world will change and reflect this new reality.
https://medium.com/swlh/12-ways-the-world-will-change-when-everyone-works-remotely-cb8927ef1853
['Kristin Wilson']
2020-04-16 18:46:39.615000+00:00
['Work', 'Freelancing', 'Business', 'Startup', 'Future']
2020 Was the Turning Point for CRISPR
2020 Was the Turning Point for CRISPR Scientists took huge strides toward using the gene-editing tool for medical treatments Photo: Yuichiro Chino/Getty Images Amid a raging global pandemic, the field of gene editing made major strides in 2020. For years, scientists have been breathlessly hopeful about the potential of the gene-editing tool CRISPR to transform medicine. In 2020, some of CRISPR’s first real achievements finally came to light — and two of CRISPR’s inventors won the Nobel Prize. The idea behind CRISPR-based medicine sounds simple: By tweaking a disease-causing gene, a disease could be treated at its source — and possibly even cured. The other allure of gene editing for medical reasons is its permanence. Instead of a lifetime of drugs, patients with rare and chronic diseases like muscular dystrophy or cystic fibrosis could instead get a one-time treatment that could have benefits for life. This idea has proven difficult to realize. For one, scientists have to figure out how to get the gene-editing molecules to the right cells in the body. Once there, the molecules need to modify enough cells in order to have an impact on the disease. Both of these things need to happen without causing unpleasant or toxic side effects that would make the treatment too risky. With advances in CRISPR technology, scientists showed this year that we’re getting closer to gene-editing cures. “2020 is the year that we have definitive proof that we are headed to a future where we as a species will genetically engineer human beings for purposes of treating disease or preventing them from developing disease,” Fyodor Urnov, PhD, a gene-editing expert and professor of molecular and cell biology at the University of California, Berkeley, tells Future Human. Here’s why 2020 was such a milestone year for CRISPR. CRISPR has eliminated symptoms of genetic blood diseases in patients In July 2019, scientists at Vertex Pharmaceuticals of Boston and CRISPR Therapeutics of Cambridge, Massachusetts used a groundbreaking approach to treat a woman with sickle cell disease, an inherited blood disorder that affects 100,000 Americans — most of them Black — and often leads to early death. They first removed the blood-producing stem cells from her bone marrow. Using CRISPR, they edited her diseased cells in the lab. They then infused the modified cells back into her bloodstream. The idea is that the edited cells will travel back to the bone marrow and start producing healthy blood cells. The complex procedure requires spending several weeks in a hospital. NPR followed the story of the first patient, named Victoria Gray. On December 5 of this year, the companies reported in the New England Journal of Medicine that the treatment has relieved Gray from the debilitating episodes known as “pain crises” that are typical of sickle cell. Another person with a related inherited blood disorder called beta-thalassemia is also symptom-free more than a year after receiving CRISPR-edited cells. Beta-thalassemia patients require blood transfusions every few weeks, but the person who received the CRISPR treatment hasn’t needed a single blood transfusion since. “This is a really dramatic change in the quality of life for these patients,” says Giuseppe Ciaramella, PhD, president and chief scientific officer of Beam Therapeutics, a gene-editing startup based in Cambridge, Massachusetts. Vertex and CRISPR Therapeutics have now treated a total of 19 patients with sickle cell disease or beta-thalassemia. 
Both diseases arise from mutations in the HBB gene, which makes an important blood protein called hemoglobin. In sickle cell patients, faulty hemoglobin distorts red blood cells, causing them to stick together and clog the blood vessels. In beta-thalassemia, meanwhile, the body doesn’t make enough hemoglobin. Scientists made a single genetic edit to patients’ cells to switch on the production of a similar protein, called fetal hemoglobin, which can compensate for the diseased or missing hemoglobin. CRISPR Therapeutics’ CEO Samarth Kulkarni, PhD, has said the treatment has the potential to be curative for people with these disorders. CRISPR was used to edit genes inside a person for the first time At an Oregon hospital in March, a patient with a type of inherited blindness became the first to receive a gene-editing injection directly into their eye. It was the first time CRISPR was used in an attempt to edit a gene inside someone’s body. A second person this year also received the experimental treatment, which is designed to snip out a genetic mutation responsible for their severe visual impairment. “In other words, this is a transition from CRISPR in hospital wards to CRISPR in a syringe,” Urnov says. Editas Medicine, the Cambridge, Massachusetts-based company behind the treatment, has yet to release any data showing how well the injection is working. In an email to Future Human, the company said the first patient’s vision remains stable. The two people treated so far have very low vision and are receiving a low dose of the CRISPR therapy. “It is unknown if these patients’ visual pathways are intact,” a company spokesperson tells Future Human. “Even if editing occurs as predicted, if the visual pathways are not intact, their vision would not improve.” The company is testing the therapy on people with a type of progressive vision loss called Leber congenital amaurosis, which often begins early in life. Editas scientists need to make sure the injection is safe and doesn’t cause any side effects before testing a higher dose in people who may have a better chance of vision correction. The company plans to inject the gene-editing treatment in up to 18 adults and eventually wants to treat younger patients, who are most likely to still have functioning visual pathways. In another trial, a person with a rare disease called transthyretin amyloidosis received an IV infusion of CRISPR in November. The disease causes abnormal deposits of protein in organs, which leads to a loss of sensation in the extremities and voluntary control of bodily functions. Developed by another Cambridge, Massachusetts biotech firm, Intellia Therapeutics, the treatment is also meant to edit a person’s genes inside the body. The trial is just getting underway in the U.K., where the company plans to enroll up to 28 patients. CRISPR got more precise Despite its versatility, CRISPR is still error-prone. For the past few years, scientists have been working on more precise versions of CRISPR that are potentially safer than the original. This year, they made notable progress in advancing these new versions to human patients. One downside of traditional CRISPR is that it breaks DNA’s double helix structure in order to delete or edit a gene. When the DNA repairs itself, it doesn’t do so perfectly, and some of the DNA letters around the edited gene get scrambled. In a newer form of CRISPR called base editing, the aim is to simply swap out one DNA letter for another rather than breaking DNA, explains Ciaramella. 
In one key test of base editing, delivered via a single injection, it successfully lowered LDL or “bad” cholesterol in 14 monkeys. The treatment acts on two genes found in the liver that help regulate cholesterol and fat. The Massachusetts company that developed the injection, Verve Therapeutics, announced the findings at a virtual meeting in June. Ciaramella’s company, Beam Therapeutics, which is also pursuing base editing, presented lab and mouse data at the American Society of Hematology annual meeting in December to support the safety of the approach for sickle cell disease. The company said it hopes to begin a clinical trial next year. Scribe Therapeutics of Alameda, California, is using yet another form of CRISPR dubbed X-editing. The company’s CEO, Ben Oakes, tells Future Human that X-editing is designed to be safer than classic CRISPR. It uses a smaller protein that more efficiently and precisely snips DNA. Co-founded by CRISPR pioneer and Nobel winner Jennifer Doudna, PhD, the company will use the new gene-editing tool to develop treatments for neurological diseases. “As cool and exciting as this technology is, it has to go in someone’s body,” Oakes says. “And it’s critical that we get it right.”
https://futurehuman.medium.com/2020-was-the-turning-point-for-crispr-5a66cb44ad0a
['Emily Mullin']
2020-12-18 20:47:45.742000+00:00
['CRISPR', 'Technology', 'Science', 'Biotech', 'Future']
The Privileged Have Entered Their Escape Pods
Now, pandemics don’t necessarily bring out our best instincts either. No matter how many mutual aid networks, school committees, food pantries, race protests, or fundraising efforts in which we participate, I feel as if many of those privileged enough to do so are still making a less public, internal calculation: How much are we allowed to use our wealth and our technologies to insulate ourselves and our families from the rest of the world? And, like a devil on our shoulder, our technology is telling us to go it alone. After all, it’s an iPad, not an usPad. The more advanced the tech, the more cocooned insularity it affords. “I finally caved and got the Oculus,” one of my best friends messaged me on Signal the other night. “Considering how little is available to do out in the real world, this is gonna be a game-changer.” Indeed, his hermetically sealed, Covid-19-inspired techno-paradise was now complete. Between VR, Amazon, FreshDirect, Netflix, and a sustainable income doing crypto trading, he was going to ride out the pandemic in style. Yet while VRporn.com is certainly a safer sexual strategy in the age of Covid-19 than meeting up with partners through Tinder, every choice to isolate and insulate has its correspondingly negative impact on others. The pool for my daughter wouldn’t have gotten here were it not for legions of Amazon workers behind the scenes, getting infected in warehouses or risking their health driving delivery trucks all summer. As with FreshDirect or Instacart, the externalized harm to people and places is kept out of sight. These apps are designed to be addictively fast and self-contained — push-button access to stuff that can be left at the front door without any human contact. The delivery people don’t even ring the bell; a photo of the package on the stoop automagically arrives in the inbox. Like with Thomas Jefferson’s ingenious dumbwaiter, there are no signs of the human labor that brought it. Many of us once swore off Amazon after learning of the way it evades taxes, engages in anti-competitive practices, or abuses labor. But here we are, reluctantly re-upping our Prime delivery memberships to get the cables, webcams, and Bluetooth headsets we need to attend the Zoom meetings that now constitute our own work. Others are reactivating their long-forgotten Facebook accounts to connect with friends, all sharing highly curated depictions of their newfound appreciation for nature, sunsets, and family. And as we do, many of us are lulled further into digital isolation — being rewarded the more we accept the logic of the fully wired home, cut off from the rest of the world. And so the New York Times is busy running photo spreads of wealthy families “retreating” to their summer homes — second residences worth well more than most of our primary ones — and stories about their successes working remotely from the beach or retrofitting extra bedrooms as offices. “It’s been great here,” one venture fund founder explained. “If I didn’t know there was absolute chaos in the world … I could do this forever.” But what if we don’t have to know about the chaos in the world? That’s the real promise of digital technology. We can choose which cable news, Twitter feeds, and YouTube channels to stream — the ones that acknowledge the virus and its impacts or the ones that don’t. We can choose to continue wrestling with the civic challenges of the moment, such as whether to send kids back to school full-time, hybrid, or remotely. 
Or — like some of the wealthiest people in my own town — we can form private “pods,” hire tutors, and offer our kids the kind of customized, elite education we could never justify otherwise. “Yes, we are in a pandemic,” one pod education provider explained to the Times. “But when it comes to education, we also feel some good may even come out of this.” I get it. And if I had younger children and could afford these things, I might even be tempted to avail myself of them. But all of these “solutions” favor those who have already accepted the promise of digital technology to provide what the real world has failed to do. Day traders, for instance, had already discovered the power of the internet to let them earn incomes safely from home using nothing but a laptop and some capital. Under the pandemic, more people are opening up online trading accounts than ever, hoping to participate in the video game version of the marketplace. Meanwhile, some of the world’s most successful social media posses are moving into luxurious “hype houses” in Los Angeles and Hawaii, where they can livestream their lifestyles, exercise routines, and sex advice — as well as the products of their sponsors — to their millions of followers. And maybe it’s these young social media enthusiasts, thriving more than ever under pandemic conditions, who most explicitly embody the original promise of digital technology to provide for our every need. I remember back around 1990, when psychedelics philosopher Timothy Leary first read Stewart Brand’s book The Media Lab, about the new digital technology center MIT had created in its architecture department. Leary devoured the book cover to cover over the course of one long day. Around sunset, just as he was finishing, he threw it across the living room in disgust. “Look at the index,” he said, “of all the names, less than 3% are women. That’ll tell you something.” He went on to explain his core problem with the Media Lab and the digital universe these technology pioneers were envisioning: “They want to recreate the womb.” As Leary the psychologist saw it, the boys building our digital future were developing technology to simulate the ideal woman — the one their mothers could never be. Unlike their human mothers, a predictive algorithm could anticipate their every need in advance and deliver it directly, removing every trace of friction and longing. These guys would be able to float in their virtual bubbles — what the Media Lab called “artificial ecology” — and never have to face the messy, harsh reality demanded of people living in a real world with women and people of color and even those with differing views. For there’s the real rub with digital isolation — the problem those billionaires identified when we were gaming out their bunker strategies. The people and things we’d be leaving behind are still out there. And the more we ask them to service our bubbles, the more oppressed and angry they’re going to get. No, no matter how far Ray Kurzweil gets with his artificial intelligence project at Google, we cannot simply rise from the chrysalis of matter as pure consciousness. There’s no Dropbox plan that will let us upload body and soul to the cloud. We are still here on the ground, with the same people and on the same planet we are being encouraged to leave behind. There’s no escape from the others. Not that people aren’t trying. The ultimate digital escape fantasy would require some seriously perverse enforcement of privilege. 
Anything to prevent the unwashed masses — the folks working in the meat processing plants, Amazon warehouses, UPS trucks, or not at all — from violating the sacred bounds of our virtual amniotic sacs. Sure, we can replace the factory workers with robots and the delivery people with drones, but then they’ll have even less at stake in maintaining our digital retreats. I can’t help but see the dismantling of the Post Office as a last-ditch attempt to keep the majority from piercing the bubbles of digital privilege through something as simple as voting. Climb to safety and then pull the ladder up after ourselves. No more voting, no more subsidized delivery of alternative journalism (that was the original constitutional purpose for a fully funded post office). So much the better for the algorithms streaming us the picture of the world we want to see, uncorrupted by imagery of what’s really happening out there. (And if it does come through, just swipe left, and the algorithms will know never to interrupt your dream state with such real news again.) No, of course we’ll never get there. Climate, poverty, disease, and famine don’t respect the “guardian boundary” play space defined by the Oculus VR’s user preferences. Just as the billionaires can never, ever truly leave humanity behind, none of us can climb back into the womb. When times are hard, sure, take what peace and comfort you can afford. Use whatever tech you can get your hands on to make your kid’s online education work a bit better. Enjoy the glut of streaming media left over from the heyday of the Netflix-Amazon-HBO wars. But don’t let this passing — yes, passing — crisis fool you into buying technology’s false promise of escaping from humanity to play video games alone in perpetuity. Our Covid-19 isolation is giving us a rare opportunity to see where this road takes us and to choose to use our technologies to take a very different one.
https://onezero.medium.com/the-privileged-have-entered-their-escape-pods-4706b4893af7
['Douglas Rushkoff']
2020-09-03 00:18:18.428000+00:00
['Society', 'Privilege', 'Digital', 'Technology', 'Future']
Lessons learnt from building reactive microservices for Canva Live
Lessons learnt from building reactive microservices for Canva Live Behind the scenes on our mission to drive the next era of presentation software. Presentations are one of the most popular formats on Canva, with everyone from small businesses to students and professionals creating stunning slide decks — averaging up to 4 new designs per second. But to truly drive the next era of presentation software, we’re empowering our community with live, interactive features. In this blog, Canva Engineer Ashwanth Fernando shares how the team launched a real-time experience through a hybrid-streaming backend solution to power the Canva Live presentation feature. With over 4 million people creating a new presentation on Canva each month, it’s no surprise this doctype consistently ranks as one of the fastest growing on our platform. But other than delighting our community with professional-looking presentation templates, we’re always on the lookout for new ways to demonstrate the magic of Canva. Throughout our research, it was clear that people invest time into creating a beautiful slideshow for one simple reason: every presenter wants maximum engagement. That’s why we’re seeing less text, and more photos, illustrations, and animations. To take engagement to the next level, we challenged ourselves to introduce real-time interactions that allow every presenter to communicate with their audience easily, effectively and instantaneously. This is how Canva Live for Presentations came to be. What’s Canva Live? Canva Live is a patent-pending presentation feature that lets audiences ask live questions via a unique URL and passcode on their mobile device. Submitted questions can then be read by presenters in a digestible interface, to support fluid audience interaction. For a visual demonstration of this, you can view the below video: As demonstrated, the audience’s questions appear in real-time on the presenter’s screen, without the page having to refresh. The traditional way to achieve this would be to poll the server at regular intervals — but the overhead of establishing a Secure Sockets Layer (SSL) connection for every poll would cause inefficiencies, potentially impacting reliability and scalability. Hence, we needed a near real-time (NRT) experience for Canva Live that still offers maximum reliability, resilience and efficiency. We achieved this with a novel approach to reactive microservices. Creating a Real-Time Experience through a Hybrid-Streaming Backend Solution Canva Live works over a hybrid streaming system. We transmit questions, question deletions, and audience count updates — from the Remote Procedure Call (RPC) servers to the presenter UI — via a WebSockets channel. Because WebSocket connections remain open, the server and client can communicate at any time, making this the ideal choice for displaying real-time updates. As the number of connections between audience members and the RPC fleet must scale in line with audience participants, we use more traditional request/response APIs for this. These connections are only required to transfer data at specific moments (e.g. when a member submits a question), and multiple instances of an always-open WebSocket channel would use unnecessary compute resources. For clarity, we have created a diagram of the technology stack below (Fig. 1). The presenter and audience connect to a gateway cluster of servers, which manages all ingress traffic to our microservice RPC backend fleet. 
The gateway manages factors such as authentication, security, request context, connection pooling and rate limiting to the backend RPCs. The Canva Live RPC fleet is an auto-scaling group of compute nodes, which in turn talk to a Redis backend (AWS Elasticache with cluster mode enabled). Though the diagram looks very similar to a traditional n-tier deployment topology, we found an incredible variety of differences when building scalable streaming services. Below, we explain how potential landmines were avoided, and the valuable lessons we learnt in building Canva Live. Consider Building For Scale From Day 1 As software engineers, we all love the ability to build incrementally, then increase industrial strength and robustness based on traffic — especially when building a new product. For Canva Live, we had to prepare for wide-scale usage from day one, with its start button being baked into one of our most popular pages. Redis Scalability Our Redis database is deployed with cluster mode enabled and has a number of shards, each with a primary node and a replica. The replicas are eventually consistent with the data in the primary nodes, and can quickly be relied on if primary nodes are down. The client side is topology-aware at all times. It knows which node is a newly elected primary and can start re-routing reads/writes when the topology changes. Adding a new shard to the cluster and scaling out storage takes only a few clicks or commands, as shown here. RPC scalability At our RPC tier, our compute nodes are bound to a scaling group that auto-scales based on CPU/memory usage. Gateway/Edge API layer scalability Our gateway tier sits at the edge of our data-center. It is the first component to intercept north-south traffic, and multiplexes many client WebSocket connections into one connection to each RPC compute node. This helps scalability, because a direct mapping of client connections to compute nodes would cause linear growth in socket descriptors at the RPC compute node (a finite resource). The flip side of multiplexing is that the gateway tier cannot use an Amazon Load Balancer (ALB) to talk to RPCs, as the ALB has no knowledge of how many virtual connections are being serviced over a physical connection. As a result, the ALB could make uninformed choices when load balancing WebSocket connections over the RPC fleet. Hence, our gateway tier uses service discovery to bypass the ALB and talk directly to the RPC nodes. Choosing The Right Datastore Choosing the optimal data store is one of the most important yet overlooked aspects of system design. Canva Live had to be scalable from the start, with the system plugging into our high-traffic, existing presentation experience. Defaulting to an RDBMS that only supports vertical scaling of writes would make it more difficult to support growth. To build an end-to-end reactive system, we required a datastore with a reactive client driver to enable end-to-end request processing of the RPC using reactive APIs. This programming model allows the service to enjoy the full benefits of reactive systems (as outlined in the reactive manifesto), prioritizing increased resilience to burst traffic and increased scalability. We also needed a publish-subscribe (pub/sub) API that implements the reactive streams spec, to help us monitor data from participant events, such as questions and question deletions. Secondly, our session data is transient with a pre-set invalidation timeout of a few hours.
We needed to expire data structures without performing housekeeping tasks in a separate worker process. Due to the temporary lifetime of our data, a file-system-based database would create the overhead of disk accesses. Finally, we have seen phenomenal year-on-year growth in our presentations product, and needed a database that scales horizontally. We chose Redis as our datastore, as it best met the above requirements. After analysing the pros and cons of the Redisson and lettuce Java clients, we opted for the latter. lettuce was better suited, as its commands map directly onto their Redis counterparts. The lettuce low-level Java client for Redis provides a PubSub API based on Reactive Streams (specifications listed here), while Redisson supports all the Redis commands but has its own naming and mapping conventions (available here). Redis also supports expiry of items for all data structures. In Redis cluster mode, we have established a topology that lets us scale from day one, without the need for code change. We host the Redis cluster in AWS Elasticache (Cluster Mode enabled), which lets us add a new shard and rebalance the keys with a few clicks. Besides all of the above benefits, Redis also doubles up as a data-structures server, and some of these data structures were suitable candidates for Canva Live out of the box — such as Redis Streams and SortedSets. It is worth mentioning that user updates could also be propagated to different users by using Kafka and/or SNS+SQS combos. We decided against either of these queuing systems, thanks to the extra data structures and K-V support offered by Redis. Consider using Redis Streams for propagating user events across different user sessions There can be hundreds of question-adding and deletion events in just one Canva Live session. To facilitate this, we use a pub-sub mechanism via Redis Streams — a log structure that allows clients to query in many different ways (more on this here). We use Redis Streams to store our participant-generated events, creating a new stream for every Canva Live session. The RPC module runs on an AWS EC2 node, and holds the presenter connection. It calls the Redis XRANGE command every second to receive any user events. The first poll in a session requests all user events in the stream, while subsequent polls only ask for events since the last retrieved entry ID. Though polling is resource-inefficient, especially when using a thread per presenter session, it is easily testable, and lends itself to Canva's vigorous unit and integration testing culture. We are now building the ability to block on a stream with an XREAD command while using the lettuce reactive API to flush the user events down to the presenter. This will allow us to build an end-to-end reactive system, which is our north star. We'll eventually move to a model where we can listen to multiple streams and then broadcast updates to different presenter view sessions. This will decouple the linear growth of connections from threads. In our Redis cluster mode topology, streams are distributed based on the hash key, which identifies an active Canva Live session. As a result, a Canva Live event stream will land on a particular shard and its replicas. This allows the cluster to scale out, as not all shards need to hold an event stream. It's hard to find a counterpart to the Redis XREAD command in other database systems. Listening capabilities that span different streams are generally only available in messaging systems like Kafka.
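To make the stream mechanics above concrete, here is a minimal sketch of the per-session event stream. The production service uses the lettuce Java client; this illustration uses redis-py simply because it maps one-to-one onto the underlying Redis commands, and the key name and event fields are hypothetical rather than Canva's actual schema.

import redis

r = redis.Redis(decode_responses=True)

# Hypothetical key: the {session-123} hash tag pins the stream to one shard.
STREAM = "canva-live:{session-123}:events"

def submit_question(participant_id, text):
    # Audience side: append a question event to the session's stream.
    return r.xadd(STREAM, {"type": "QUESTION_ADDED",
                           "participant": participant_id,
                           "text": text})

def poll_events(last_id=None):
    # Presenter side: the first poll reads the whole stream ("-"); later
    # polls read only entries after the last seen ID ("(" marks an
    # exclusive range, available from Redis 6.2).
    start = "-" if last_id is None else "(" + last_id
    entries = r.xrange(STREAM, min=start, max="+")
    new_last_id = entries[-1][0] if entries else last_id
    return entries, new_last_id

A blocking variant would swap the XRANGE poll for XREAD with a block timeout, which is the direction the post describes for the fully reactive version.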
It's wonderful to see this out of the box in an in-memory data-structures server such as Redis, with simple key/value support. AWS Elasticache provides all this goodness without the headaches of administering a multi-master plus replica Redis cluster. Minimize Network Traffic By Offloading Querying To The Datastore As Much As Possible Using a tactic taken straight from the RDBMS handbook, we have minimized network traffic by offloading querying to the datastore. As mentioned previously, our version 1 implementation used polling between the RPC and Redis to fetch new comments and manage the audience counter. However, repeated calls across several Canva Live presentations can create significant network congestion between the RPC and Redis cluster, meaning it was critical for us to minimize traffic volume. In the case of the audience counter, we only want to include new, active participants. To do this, we use a Redis SortedSet to register a participant ID, plus the current timestamp. Every time a participant polls again, the timestamp for the participant ID is refreshed by calling the Redis ZADD command (this adds the participant ID along with the current timestamp, and the set is always kept sorted). Then we need to confirm the audience count. We call the Redis ZCOUNT command (which counts the number of items within a range of scores) with the current timestamp (T) and T - 10 seconds, to calculate the number of live participants within the last ten seconds. Both ZCOUNT and ZADD have a time complexity of log(N), where N is the total number of items in the SortedSet. Imagine doing something like this in a file-system-based database. Even if the database promised log(N) time complexity, each of those log(N) operations is still disk-I/O dependent — which is far more expensive than doing it in-memory. Redis supports many more data structures like SortedSet with optimal time complexity out of the box. We recommend using these, rather than resorting to key/value storage and performing data filtering and/or manipulation at the RPC layer. The entire list of Redis commands is here: https://redis.io/commands Understand the nuances of Redis transactions The concept of a transaction in Redis is very different from its counterpart in a traditional RDBMS. A client can submit a sequence of operations to a Redis server as a transaction. The Redis server will guarantee the sequence is executed as an atomic unit, without changing context to serve other requests until the transaction is finished. However, unlike an RDBMS, if one of the steps in the sequence fails, Redis will not roll back the entire transaction. The reasoning for this behavior is listed here — https://redis.io/topics/transactions#why-redis-does-not-support-roll-backs Furthermore, a Redis cluster can only support transactions if the sequence of operations works on data structures in the same shard. We take this into account when we design the formulation of the key hash tags for our data structures. If our data structures need to participate in a transaction, we use keys that map to the same shard (see here https://redis.io/topics/cluster-spec#keys-hash-tags), to ensure these data structures live in the same shard. Ideally we'd like to do transactions across shard boundaries, but this would lead to strategies like two-phase commits, which could compromise the global availability of the Redis cluster (see CAP Theorem). Client-specified consistency requirements, as in Dynamo, would be a welcome addition for Redis transactions.
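As an illustration of the audience counter logic just described, here is a small redis-py sketch (again, the production service uses the lettuce Java client, and the key name is a hypothetical example):

import time
import redis

r = redis.Redis(decode_responses=True)

# Same hash tag as the session's other keys, so the counter can take part
# in a transaction on the same shard if it ever needs to.
PARTICIPANTS = "canva-live:{session-123}:participants"

def heartbeat(participant_id):
    # ZADD refreshes the member's score to its latest heartbeat timestamp.
    r.zadd(PARTICIPANTS, {participant_id: time.time()})

def live_audience_count(window_seconds=10):
    # ZCOUNT counts members whose score (timestamp) falls inside the window.
    now = time.time()
    return r.zcount(PARTICIPANTS, now - window_seconds, now)

Both calls are O(log N) on the server, and because the filtering happens inside Redis, only a single integer crosses the network for each count.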
Minimize streaming connections to chatty interactions It's easy to get carried away and build every API over a streaming connection. However, a full-fledged streaming API demands substantial development effort and runtime compute resources. As mentioned previously, we stuck with request and response APIs for the participant-facing Canva Live features, which has proven to be a good decision. However, a case can be made for using streaming here because of the audience counter. Instead of repeatedly polling Canva every few seconds to signal availability, a WebSockets connection would let us greatly simplify Redis storage by switching from the existing SortedSet to a simple key/value store for the participant list. This is because we can detect when the client terminates the WebSockets connection, and use that event to remove the participant from the key/value store. We voted against using participant-side WebSockets connections because our first iteration uses one JVM polling thread per streaming session. If we used the same approach for the participant, it could lead to an unbounded number of threads per RPC server, with no smart event handling system in place. We're in the design stage of replacing the polling model with a system that uses a single thread to retrieve updates across several transaction logs. This will broadcast updates to participants connected to the RPC process, to help decouple the linear growth of connections from the associated threads, and enhance scalability. Once we have this in place, it will be easier to adopt a stream-based connection for participants. De-risk complexity at every turn Canva Live is one of only two streaming services at Canva, meaning we didn't have many established patterns to guide us. Since we're working with a lot of new systems and libraries such as PubSub (Flux) for reactive streams, lettuce and Redis, we wanted to make sure we could launch to staging quickly and validate the architecture as a foundation for the final production version. Firstly, we polled Redis at one-second intervals to reduce complexity. Secondly, we decided to use request/response services for the participant side of Canva Live. Then, we implemented the data access layer using an in-memory database. Although the in-memory database implementation limited us to a single-instance deployment, it allowed us to quickly validate our assumptions and ensure that the entire streaming architecture worked as intended. To give more context on our technology stack, our in-memory database replicates the Redis streams with a Java Deque implementation, and the Redis SortedSet is a HashMap in Java. Once the Redis implementation of the data access layer was ready, we swapped the in-memory version for the Redis version. All of the above ways of de-risking complexity may seem contrary to the earlier advice to 'Build for scale from day 1'. It is worth noting that 'Building for scale from day 1' does not mean trying to achieve perfection. The goal is to avoid making technical decisions that will significantly hamper our ability to scale to millions of users. Some of these ideas were borrowed from other Canva teams that had more experience in building similar systems. Leveraging the knowledge of other exceptional engineers was essential to de-risking the complexity of this system. Move towards end-to-end reactive processing We use Project Reactor (https://projectreactor.io/) to implement Reactive Streams on the RPC side.
On the browser side, we use RxJS to receive events and React to paint the different parts of the UI in an asynchronous manner. At the beginning of our first Canva Live implementation, only our service API was using Flux and FluxSink to flush events to the browser. However, building an entirely reactive system delivers the benefits of increased resilience, responsiveness and scalability. Because of this, we are making the inner layers reactive, all the way down to the database. Our usage of lettuce, which uses the same Flux/Mono API as Reactor, is ideal, as it helps our cause in writing an end-to-end reactive system. Conclusion As of now, Canva Live is enabled for all users, and we're seeing an incredible number of sessions lighting up our backend systems. Building distributed systems that operate at scale is complex. Adding streaming support takes that complexity to a whole new level. Having said that, we truly believe we have built a streaming NRT system that scales with the demands of our users, and will help foster an interactive, seamless presentation experience. Please take a moment to try out Canva Live, and let us know your thoughts. Appendix Building the backend infrastructure for Canva Live is the result of the joint effort of Anthony Kong, James Burns, myself and innumerable other engineers who have been extremely selfless in reviewing our designs and guiding us along the way. The gateway piece of this puzzle is built and maintained by our awesome Gateway Engineering team.
https://medium.com/canva/lessons-learnt-from-building-reactive-microservices-for-canva-live-789892c58b10
['Canva Team']
2020-10-13 00:37:03.354000+00:00
['Microservices', 'Engineering', 'Software Development', 'Nodejs']
Single-binary Web Apps in Go and Vue — Part 4
Photo by David Pisnoy on Unsplash This is part 4 in a four-part series where we will demonstrate building a web application using Go and Vue, and finally bundle it all together in a single binary for super-easy deployment/distribution. Part 1 can be found here, part 2 here, and part 3 here. In part 1 we built the Go and Vue apps. In part 2 we changed the Go app to automatically start the Vue app by running Node when the version of the application is "development". In part 3 we bundled it all up into a single compiled binary. In this article, to make our lives easy, we are going to use the tool make to build everything. If you recall, we had to run these commands to make a final bundled build. $ cd app $ npm run build $ cd .. $ go generate $ go build -tags=prod -ldflags="-X 'main.Version=1.0.0'" That's a lot of steps. Let us simplify. I'll start by showing the whole Makefile, and then we'll break it down. Line 1 is simple. It states that when we run make the default action is run, meaning it will execute the script at line 17. Line 4 is a variable that identifies the path to UPX. UPX is a nifty tool that compresses executables. Why do I want this? When we bundle in our static JavaScript assets (our Vue app) it makes our final executable fairly large. UPX will make the binary as small as possible. Lines 6–10 define some variables we are going to use further down. The variable VERSION is where you can set the version of your application; it is applied to the variable Version in main.go. BUILDFLAGS are flags that will be passed to go build: the flag "-s" tells Go to omit the symbol table from the binary, the flag "-w" says to omit debug information, and the "-X" flag is what sets the version variable in the binary. PROJECTNAME is a variable holding the name of the final executable. GCVARS contains the environment variables specifying to omit cgo and to use the AMD64 architecture. GC is the final Go compile command; this is the string that will run the actual build. Lines 12–15 are a bit of script that ensures UPX is installed. If it isn't installed, the Make process will stop with an error. Lines 17 and 18 define the goal named run. This just runs the Go app with the tag "dev". This will also start the Node server for the Vue app because in main.go the variable Version is "development". The final bits define goals to build for various platforms. Notice how each depends on the goal generate-compiled-assets, which in turn depends on build-vue-app. This means that if you run make build-linux it will first build the Vue app, then run "go generate", then finally run the Go build.
https://adam-presley.medium.com/single-binary-web-apps-in-go-and-vue-part-4-2a1ab9f69fcb
['Adam Presley']
2020-12-30 05:56:27.084000+00:00
['Software Development', 'JavaScript', 'Development', 'Vuejs', 'Golang']
Apple Is Killing A Billion-Dollar Ad Industry With One Popup
Apple Is Killing A Billion-Dollar Ad Industry With One Popup The new iOS 14 privacy feature spells trouble for advertisement agencies and promises to end an era of personalized ads Photo by Tobias Moore on Unsplash When Apple's WWDC 2020 digital-only Keynote event kicked off, all eyes were on the new macOS Big Sur and the ambitious Apple Silicon chips. But, from the perspective of advertisement agencies, it was the new iOS 14 privacy features that sent shockwaves through their industry and became the major talking point. For the uninitiated, a lot of apps today use the Advertising Identifier (IDFA), which allows developers and marketers to track activity for advertising purposes. Plenty of marketing agencies backed by Google and Facebook run campaigns to record purchases, usage time and user actions, and subsequently serve personalized ads. Over 100K apps on the App Store today have the Facebook or Google SDK integrated, which tracks and sends your data to the tech giants and third-party brokers. But iOS 14 is all set to change that by being upfront and transparent with users about how their data is used for ads.
https://medium.com/macoclock/apple-is-killing-a-billion-dollar-ad-industry-with-one-popup-2f83d182837f
['Anupam Chugh']
2020-07-10 18:05:22.119000+00:00
['Technology', 'Marketing', 'Advertising', 'Apple', 'Business']
8 last-minute ideas for a healthier Valentine’s Day
This story was originally published on blog.healthtap.com on February 14, 2018. Valentine's Day is a wonderful day to celebrate all of the love in your life, whether that love is with a partner, your friends, or your family. Valentine's Day is also an amazing day to take the time to practice some self-care and to show some love for yourself. If you need some last-minute ideas to spice up your day, try these. They're perfect to do as a date with that special someone, to do with your friends, or to do by yourself if you need a little time for self-love. These healthy ideas will help make your celebration more adventurous and interesting, and healthier too! Go on a hike Wintery hike in the woods? Coastal hike by the beach? Lace up your shoes and go spend some quality time outdoors by going on a hike with your loved ones, or just for some peace of mind. You'll get fresh air, beautiful views, and some great exercise to boot. Cook dinner at home Light some candles, pop open some wine, and save some money by cooking a meal at home. It can be a collaborative effort, you'll get to show off your cooking skills, and you can cook something healthy and nourishing for you and the one you're with. You'll also be able to better control your portions according to your health goals and needs. Make a dark chocolate dessert Instead of devouring a box of chocolates, try eating some dark chocolate or making a dark chocolate dessert instead. Dark chocolate is full of antioxidants called flavonols, which promote heart health by lowering LDL cholesterol (the "bad" cholesterol) in your arteries and by improving circulation. Just make sure you stick to chocolate above 70% cacao, and watch added sugar and fat, which counteract these heart-healthy benefits. Go out dancing Whether you're going out with a date or with your friends, going to a dance class or just hitting the town can be not only an incredibly fun but also an extremely fit way to celebrate the evening. Dancing is one of the most fun ways to get in aerobic and muscle-strengthening exercise, and you can burn up to 230 calories in just 30 minutes. Go on a picnic Load up a basket of healthy goodies and head out to a nice and sunny spot outdoors. You'll get to choose exactly what you want to put in your basket while getting to explore outside, and it can be a perfect way to end a day of hiking or any other adventure! Spend some time smooching If you're with someone you care about on Valentine's Day, it's good to get in that time to smooch that special someone. Did you know that kissing has some great health benefits? Kissing helps lower blood pressure, spikes your feel-good hormones, and also burns a few calories. In fact, a good old-fashioned make-out can burn up to 6.5 calories a minute! Get a massage Whether you're booking a couples massage or you're going solo, getting a massage is a perfect way to show yourself some love on Valentine's Day. Massages help decrease anxiety, relieve chronic pain, improve circulation, and have a host of other incredible benefits for both your mind and body. It'll be a treat you'll feel great about giving yourself. Give the gift of a group fitness pass Want to show someone you care? Give the gift of working out together! A group fitness pass will be a wonderful way for you and your partner or friend to hold each other accountable in your fitness goals, and to have a lot more fun and time together doing it. Whatever your plans are for today, we wish you and your loved ones a healthy and happy Valentine's Day!
Author: Maggie Harriman
https://medium.com/healthtap/8-last-minute-ideas-for-a-healthier-valentines-day-2b29e86a6a97
[]
2018-02-14 18:30:58.406000+00:00
['Self Care', 'Health', 'Valentines Day', 'Love', 'Wellness']
Shitting Like Tim Ferriss
Enter Tim Ferriss and the Mental Ward So, if depleting my body of all animal products and saturated fat caused me to shake hands with God, Shiva, and the gang, then surely doing the exact opposite would work, right? Wrong. (Quick side note: I didn't see Elvis there. Meaning, he's either still alive or in the "other place." Or, maybe the other place is the same as being alive? I digress.) I read all the books and articles that I could find to improve my mind, body, and soul. I was all in, baby! Again. Among my pile of books, I had copies of James Clear's Atomic Habits and Tim Ferriss's books about getting a 4-hour body and a 4-hour workweek. At this point, I'd settle for any habit, atomic or not. Whatever I could do to improve! Do you know what happens when you try to tackle 27 daily habits back-to-back? You go crazy. Legit. It was 11 p.m. on Saturday, and I was calling the Veterans Crisis Line (if anyone needs it: 800–273–8255). The lady on the phone was nice; the cops that showed up, not so much. Maybe it had to do with me being a former Marine and being 6'2" that they decided to cuff me and cart me out. Don't worry, I'll write about the whole experience later (ex: "7 Life Lessons I Learned While Locked Up in the Looney Bin"). Long story short: a week after living in a locked-up hospital ward with bolted-down furniture and "Medicine time" nurses, I realized something: I was taking things too far. I needed balance. I needed self-compassion.
https://medium.com/rogues-gallery/shitting-like-tim-ferriss-c959858f9173
['Ryan Dejonghe']
2020-12-16 23:03:23.765000+00:00
['Humor', 'Creativity', 'Health', 'Self', 'Tim Ferriss']
Designing for the Discovery of Big Data
Spain is the number one country for tourism in the world. With sights like the Museo del Prado and Royal Palace of Madrid, it is hard to see how such spectacular beauty could ever be overlooked. To assist in visualising the majesty of this European gem, Vizzuality and CartoDB are proud to announce our latest release: an interactive tool to analyse tourist spending in Spain during the summer of 2014. Using BBVA Data and Analytics, you can see how tourists of different nationalities spent their time in Spain. Take a look at this UN-BBVA-LIEVABLE visualisation! With anonymised data on 5.4 million credit card transactions, we worked out how to visualise tourist spending effectively while optimising the speed and performance of the application. Equipped with our unique blend of pioneering design principles and innovative coding, we delivered "a piece of artwork" — almost fit for the Prado!
https://medium.com/vizzuality-blog/designing-for-the-discovery-of-big-data-b715afdd23ce
['Jamie Gibson']
2016-01-07 11:48:06.965000+00:00
['Design', 'Big Data', 'Data Visualization']
Animations with Matplotlib
Animations with Matplotlib Using the matplotlib library to create some interesting animations. Animations are an interesting way of demonstrating a phenomenon. We humans are more enthralled by animated and interactive charts than by static ones. Animations make even more sense when depicting time-series data like stock prices over the years, climate change over the past decade, and seasonality and trends, since we can then see how a particular parameter behaves with time. The above image is a simulation of rain, achieved with the Matplotlib library, which is fondly known as the grandfather of Python visualization packages. Matplotlib simulates raindrops on a surface by animating the scale and opacity of 50 scatter points. Today Python boasts a large number of powerful visualization tools like Plotly, Bokeh and Altair, to name a few. These libraries are able to achieve state-of-the-art animations and interactivity. Nonetheless, the aim of this article is to highlight one aspect of this library that isn't explored much, namely animations, and we are going to look at some of the ways of doing that.
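As a small taste of what the article goes on to cover, here is a minimal sketch using matplotlib.animation.FuncAnimation (an illustrative example of the API, not the rain simulation itself):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
(line,) = ax.plot(x, np.sin(x))

def update(frame):
    # Shift the sine wave a little on every frame.
    line.set_ydata(np.sin(x + frame / 10))
    return (line,)

anim = FuncAnimation(fig, update, frames=100, interval=50, blit=True)
plt.show()

FuncAnimation simply calls update once per frame and redraws only the artists it returns, which is the same pattern a rain simulation like the one above typically relies on.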
https://towardsdatascience.com/animations-with-matplotlib-d96375c5442c
['Parul Pandey']
2020-09-09 01:37:14.569000+00:00
['Data Visualization', 'Matplotlib', 'Python', 'Programming', 'Towards Data Science']
Why Engineers Cannot Estimate Time
A statistical approach to explaining bad deadlines in engineering projects Whether you are a junior, senior, project manager, or a top-level manager with 20 years of experience, software project time estimation never becomes easy. No one, no matter how experienced or brilliant they are, can claim to know for sure the exact time a software project will take. This problem is especially prevalent in software engineering, but other engineering disciplines are also known to suffer from the same downfall. So while this article focuses on software engineering, it also applies to other disciplines, to an extent. Overview Let's first have a bird's-eye view of the problem, the consequences, and the potential root causes. I will be covering most of these during this series. The Problem Software projects seldom meet the deadline. The Consequences Marketing efforts can be wasted, clients can be dissatisfied, stressed developers can write poor-quality code to meet deadlines and compromise product reliability, and ultimately, projects can outright get canceled. The Known Causes Wrong time estimates (the focus of this article). Unclear requirements at the start of the project and, later, changing requirements. Gold-plating: too much attention to details outside the scope of work. Not taking enough time in the research and architecture design phase or, conversely, taking too much time. Overlooking potential issues with third-party integrations. The desire to "get it right the first time". Working on too many projects at the same time or getting distracted (breaking the flow too often). Unbalanced quality-throughput scale. Over-optimism, Dunning-Kruger effect, pure uncertainty, or just math? Stage 5: ACCEPTANCE It's easy to dismiss the concept of over-optimism altogether just because it's common sense that no developer who ever struggled to meet a deadline will be optimistic when setting deadlines. Now if project management is not coming from an engineering background and they set deadlines without knowing what they are doing, that's a whole different issue that is outside the scope of this article. Some also attribute bad time estimation to the Dunning-Kruger effect; however, if inexperience or overestimating one's ability were behind underestimating time, then more experience should alleviate the issue, right? The biggest companies out there with almost infinite resources still have a shockingly high rate of missed deadlines, so that hypothesis is debunked. Not to mention, we have all experienced this ourselves. More experience barely helps when it comes to time estimates. Most developers, especially rather experienced ones, quickly conclude that it's just pure uncertainty. And it follows that time estimates will always be wrong, that's just a fact of life, and the only thing we can do about it is, well, try to meet client demands and tell developers to "just crunch" when things go wrong. We are all familiar with the stress, the garbage code, and the absolute mayhem that this philosophy causes. Is there a method to the madness? Is this really the best way we can get things done? Well, I didn't think so, and that's when I embarked on my journey trying to find a rational mathematical explanation as to why all those smart people are unable to estimate the time it'd take them to do something. It's just math! One day I was doing a task that should have taken 10 minutes and ended up taking 2 hours.
I started contemplating the reasons why I thought it would take 10 minutes and the root cause that pumped that number all the way up to 2 hours. My thought process was a bit interesting: I thought it would take 10 minutes because I actually knew 100% in my head the exact code that I needed to write. It actually took me around 7–10 minutes to be done with the code. Then it took 2 hours because of a bug in the framework completely unknown to me. This is what people in project management like to call "force majeure": external, uncontrollable causes of delay. Now you might be thinking that I'm just proving the uncertainty argument with that scenario. Well, yes and no. Let's zoom out a bit. Sure, uncertainty is the root cause of the delay of this particular task, because I would have never guessed that bug existed. But should it be responsible for the delay of the whole project? That's where we need to draw the distinction that a single task isn't representative of the project and vice versa. How we "normally" estimate time A normal distribution (bell curve) Normal distributions are all around us and the human brain is pretty used to them. We are experts at estimating things following a normal distribution by nature; it's the basis of gaining experience by exposure. Say you went to the nearest 7–11 almost 20 times this month and every time it took you 5 minutes, except for that time the elevator needed maintenance and you had to wait for 10 minutes, and maybe that other time you decided to wait a couple of minutes until it stopped raining. What would be your guess for the time it takes you to go there right now? 5 minutes? I mean, it doesn't make sense to say 15 because that was a rare incident, or 7 unless it's raining outside. And you'd be right, most likely. If 18 out of 20 trips took 5 minutes then certainly there's a big chance that it would just take 5 minutes (the median) this time, roughly a 90% chance (without getting into more complex algebra, of course). It's skewed! Even if you are really good at estimating the time a task will take, that doesn't mean you will be good at estimating the time the project will take! Counterintuitively, you will be more wrong. Now all the math nerds (or data scientists/statisticians) reading right now must have already recognized that tiny graph in the previous meme as a right-skewed normal distribution. Let me enlarge and clarify: The median still has a higher probability of being true than the mean, for that single task! If you were to guess the mode value, which has the highest probability, you'd be even more wrong on a larger scale. Do you see how things can go wrong here? Our "natural" guess is based on the median, which maximizes our probability of guessing right; however, when that "event" occurs enough times, the real total will always approach the mean. In other words: the more similar tasks you do, the more that error accumulates! Delay equation, based on that hypothesis: total delay ≈ N × (mean - median) for N similar tasks. Programming tasks on a project are usually pretty similar, or at least grouped into a few similar clusters! This equation also implies that the problem is scalable! While we want everything in software projects to be scalable, problems are certainly not welcome. So, how to use this knowledge? To be honest, while writing this article I didn't have any intention to give "instructions" based on this hypothesis. It's just meant as an exploratory analysis concluding with a hypothesis that's up to you, the reader, to interpret however you wish.
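One way to see the effect numerically is a quick simulation. The sketch below assumes task durations follow a right-skewed lognormal distribution (a modelling assumption for illustration, not data from the article) and compares a median-based estimate with the expected total for a 50-task project.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical right-skewed task durations (hours): most tasks finish fast,
# but a long tail of "force majeure" tasks occasionally blows out.
tasks = rng.lognormal(mean=np.log(0.5), sigma=1.0, size=100_000)

median_task = np.median(tasks)   # what our intuition estimates per task
mean_task = tasks.mean()         # what a task actually costs on average

n_tasks = 50
naive_estimate = n_tasks * median_task
expected_actual = n_tasks * mean_task

print(f"median per task: {median_task:.2f}h, mean per task: {mean_task:.2f}h")
print(f"50-task project: estimated {naive_estimate:.0f}h, "
      f"expected {expected_actual:.0f}h "
      f"({expected_actual / naive_estimate:.1f}x the estimate)")

With these parameters the per-task median is 0.5 hours while the mean is roughly 0.82 hours, so the expected project time exceeds the median-based estimate by about 65 percent, even though 0.5 hours is a perfectly reasonable guess for any single task.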
However, I do know that many will be disappointed by that open-ended conclusion, so here's what I personally make of it. It's easier to tell if task X would take more, less or the same time compared to task Y than it is to tell exactly how long they would take. This is because comparing the medians works just as well as comparing the means if the skewness of the curves is roughly the same (which is true for similar tasks). I don't recall or record every single similar task to do the math and get the mean (and couldn't find any data to torture). So I usually estimate the inevitable error (mean - median) as a percentage of the task time that goes up/down depending on how comfortable I am with the dev environment (do I like this language/framework? (40%) Do I have good debugging tools? (30%) Good IDE support? (25%) … etc). I started splitting sprints into equally sized tasks, just to create some uniformity in the time estimation process. This allows me to benefit from point 1: it should be easy to tell if two tasks are roughly equal in time. This also makes tasks even more similar, so that the hypothesis applies even more perfectly and things become more predictable. With these principles applied, you can do a "test run" if you have the resources. For example, if in X1 days with Y1 developers Z1 of the uniform tasks were completed, then we can easily solve for X2 (days) given Y2 (developers available) and Z2 (total tasks left); assuming throughput scales roughly linearly with the number of developers, X2 ≈ X1 × (Z2/Z1) × (Y1/Y2). Finally, make sure to follow if you don't want to miss the upcoming articles covering the other causes of delay.
https://medium.com/swlh/why-engineers-cannot-estimate-time-5639750df419
['Hesham Meneisi']
2020-12-26 10:43:29.157000+00:00
['Software Engineering', 'Software Development', 'Engineering', 'Time Management', 'Project Management']
Pinterest Trends: Insights into unstructured data
Stephanie Rogers | Pinterest engineer, Discovery What topics are Pinners interested in? When are they most engaged with these topics? How are they engaging with those topics? To answer these questions, we built an internal web service that visualizes unstructured data and helps us better understand timely trends we can resurface to Pinners through the product. The tool shows the most popular Pins, as well as time-series trends of keywords in Pins and searches. One of the use cases for the tool is helping us understand what topics Pinners are interested in, when that interest usually happens and how they are engaging with these topics. Specifically for when, we visualize keywords over time to more easily identify seasonality or trends of topics, but the most powerful insights come from understanding Pinner behavior through top Pins. For example, with a simple search of a holiday, like "Valentine's day," we can see that interest starts to rise about two months before February 14. But interest in the keyword wasn't enough; we wanted to determine when one should start promoting different types of products. We saw that male Pinners were looking at products towards the beginning of the peak. These were forward-thinking individuals, looking for gifts that would have to be preordered. Approximately 2–3 days before the holiday, male Pinners were primarily looking at DIY crafts and baked goods, things that didn't require much time or could be bought at the convenience store the night before. And finally, on the day of Valentine's Day, we saw a lot of humorous memes around being lonely. We were able to find these engagement trends in a matter of seconds. Male Pinning trends leading up to Valentine's Day: January 2015 — Products; Early February 2015 — DIY & Baked Goods; February 14, 2015 — Lonely Memes. Motivation A core part of any solution for keyword trends is being able to perform full-text search over attributes. While MapReduce is good for querying structured content around Pins, it's slow when answering queries that need full-text search. ElasticSearch, on the other hand, provides a distributed, full-text search engine. By indexing the unstructured data around Pins (such as description, title and interest) with ElasticSearch, we produced a tool that processes full-text queries in real-time and visualizes trends and related Pins in a user-friendly way. At a high level, the tool offers a keyword search over Pin descriptions and search queries to find the top N Pins or search queries with the given keyword, and to show and compare time-series trends, including the daily volume of repins and searches. Additionally, the tool filters keyword volume by various segments including location, gender, interests, categories and time. Implementation: extract all text associated with Pins; insert Pin text into ElasticSearch; index the text data (ElasticSearch does this for us); build a service to call the ElasticSearch API on the application backend; and visualize the data on the application frontend using Flask and ReactJS. Challenges Data Collection Gathering all of the text related to a Pin, including description, title, tagged interests, categories and timestamps, as well as Pinner demographics, requires complicated logic that can scale. We use a series of Hive and Cascading jobs (both MapReduce-based frameworks) to run a Pinball workflow nightly to extract and dump all text associated with the Pins from the previous day into our ElasticSearch clusters, which then index this text.
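To sketch what that nightly dump looks like in practice, here is a hypothetical example using the Elasticsearch Python client and its bulk helper; the index naming scheme and document fields are illustrative assumptions, not Pinterest's actual schema.

from datetime import date
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])  # placeholder hosts

def index_name(day):
    # One index per day, e.g. "pins-2015.02.13".
    return "pins-{:%Y.%m.%d}".format(day)

def bulk_index_pins(day, pins):
    # `pins` is an iterable of dicts produced by the Hive/Cascading jobs:
    # description, title, interests, category, Pinner demographics, etc.
    actions = (
        {
            "_index": index_name(day),
            "_id": pin["pin_id"],
            "_source": {
                "description": pin.get("description", ""),
                "interests": pin.get("interests", []),
                "gender": pin.get("gender"),
                "country": pin.get("country"),
                "created_at": pin["created_at"],
            },
        }
        for pin in pins
    )
    helpers.bulk(es, actions)

Writing each day's Pins into an index of their own is what makes the daily-index design discussed next possible.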
Design A major design decision was to use daily indexes (one index per day), since many high-volume time-series projects do this by default, including Logstash. Using these daily indexes had several benefits for the scalability and performance of our entire system, including: Increased flexibility in specifying time ranges. Faster reads as a result of well-distributed documents among various nodes. Minimized number of indexes involved in each query to avoid associated overhead. Bulk insertion or bulk reads through parallel calls. Easier recovery after failure. Easier tuning of properties of the cluster (# shards, replication, etc.). Smaller indices led to faster iteration on testing these immutable properties. Scalability Despite using big data technologies, we faced various scalability challenges with our workflows. There was simply too much data to run simple Hive queries, so we optimized our Hive query settings, switched to Cascading jobs and made trade-offs on implementation choices. With more than 14GB of data daily and around two years' worth of data stored thus far (around 10TB of data total), a bigger issue of scalability came from our ElasticSearch clusters. We have had to continuously scale our clusters by adding more nodes. Today we have 33 i2.2xlarge search nodes and 3 m3.2xlarge master nodes. Although replication isn't needed to gain protection against data loss, since ES isn't the primary persistent storage, we still decided to use a replication factor of 1 (meaning there are two copies of all data) to spread read load across multiple servers. Performance After launching our prototype, we saw a lot of room for improvement in application performance, especially as the number of users grew. We switched from raw HTTP requests to the ElasticSearch Python client and optimized the ElasticSearch query code in our service, which led to a 2x performance increase. We also implemented server-side and client-side caching for the added benefit of instantaneous results for more frequent queries. The end result of all of these optimizations is sub-two-second queries for users. Outcomes The innovative tool has been a tremendous success. Usage is pervasive internally to derive Pinner insights, highlight popular content and even detect spam. If you're interested in working on large-scale data processing and analytics challenges like this one, join our team! Acknowledgements: This project is a joint effort across multiple teams inside Pinterest. Various teams provided insightful feedback and suggestions. Major engineering contributors include Stephanie Rogers, Justin Mejorada-Pier, Chunyan Wang and the rest of the Data Engineering team.
https://medium.com/pinterest-engineering/pinterest-trends-insights-into-unstructured-data-b4dbb2c8fb63
['Pinterest Engineering']
2017-02-21 19:42:38.385000+00:00
['Elasticsearch', 'Analytics', 'Pinterest', 'Engineering', 'Big Data']
Software Engineer of Tomorrow Manifesto
Collaborate to apply Artificial Intelligence Methods for developing Advantageous Conditions to increase Intensity and Efficiency of Software Engineering
https://medium.com/ai-for-software-engineering/software-engineer-of-tomorrow-manifesto-70a4033d38d1
['Aiforse Community']
2017-08-31 14:47:46.752000+00:00
['Software Engineering', 'Artificial Intelligence', 'Software Development', 'Data', 'Data Science']
How to classify works of art by style in 7 lines of code
https://medium.com/metadatos/c%C3%B3mo-clasificar-obras-de-arte-por-estilo-en-7-l%C3%ADneas-de-c%C3%B3digo-335b3d11fc43
['Jaime Durán']
2019-06-06 00:39:14.165000+00:00
['Data Science', 'Computer Vision', 'Español', 'Neural Networks', 'Artificial Intelligence']
A Brief Introduction to Supervised Learning
Supervised learning is the most common subbranch of machine learning today. Typically, new machine learning practitioners will begin their journey with supervised learning algorithms. Therefore, the first post of this three-part series will be about supervised learning. Supervised machine learning algorithms are designed to learn by example. The name "supervised" learning originates from the idea that training this type of algorithm is like having a teacher supervise the whole process. When training a supervised learning algorithm, the training data will consist of inputs paired with the correct outputs. During training, the algorithm will search for patterns in the data that correlate with the desired outputs. After training, a supervised learning algorithm will take in new unseen inputs and will determine which label the new inputs will be classified as based on prior training data. The objective of a supervised learning model is to predict the correct label for newly presented input data. In its most basic form, a supervised learning algorithm can be written simply as Y = f(x), where Y is the predicted output that is determined by a mapping function that assigns a class to an input value x. The function used to connect input features to a predicted output is created by the machine learning model during training. Supervised learning can be split into two subcategories: classification and regression. Classification During training, a classification algorithm will be given data points with an assigned category. The job of a classification algorithm is to then take an input value and assign it a class, or category, that it fits into based on the training data provided. The most common example of classification is determining if an email is spam or not. With two classes to choose from (spam, or not spam), this problem is called a binary classification problem. The algorithm will be given training data with emails that are both spam and not spam. The model will find the features within the data that correlate to either class and create the mapping function mentioned earlier: Y=f(x). Then, when provided with an unseen email, the model will use this function to determine whether or not the email is spam. Classification problems can be solved with numerous algorithms. Which algorithm you choose depends on the data and the situation. Here are a few popular classification algorithms: Linear Classifiers, Support Vector Machines, Decision Trees, K-Nearest Neighbor and Random Forest. Regression Regression is a predictive statistical process where the model attempts to find the important relationship between dependent and independent variables. The goal of a regression algorithm is to predict a continuous number such as sales, income, or test scores. The equation for basic linear regression can be written as Y = w[1]x[1] + … + w[n]x[n] + b, where the x[i] are the features of the data and the w[i] and b are parameters which are developed during training. For simple linear regression models with only one feature in the data, the formula reduces to Y = wx + b, where w is the slope, x is the single feature and b is the y-intercept. Familiar? For simple regression problems such as this, the model's predictions are represented by the line of best fit. For models using two features, a plane will be used. Finally, for a model using more than two features, a hyperplane will be used. Imagine we want to determine a student's test grade based on how many hours they studied the week of the test.
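As a concrete sketch of that setup, here is a minimal example with scikit-learn on made-up hours-studied data (the numbers are invented for illustration; the article's own walkthrough below uses a different, randomly generated dataset):

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: hours studied vs. final test score.
hours = np.array([[1], [2], [3], [4], [6], [7], [8]])
scores = np.array([52, 58, 65, 70, 81, 85, 90])

reg = LinearRegression().fit(hours, scores)

print("Slope:", reg.coef_[0])
print("Intercept:", reg.intercept_)

# Predict the score for a student who studied 5 hours.
print("Predicted score for 5 hours:", reg.predict([[5]])[0])

The fitted slope and intercept define the line of best fit, and predict simply evaluates that line at the new input.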
Let's say the plotted data with a line of best fit looks like this: There is a clear positive correlation between hours studied (independent variable) and the student's final test score (dependent variable). A line of best fit can be drawn through the data points to show the model's predictions when given a new input. Say we wanted to know how well a student would do with five hours of studying. We can use the line of best fit to predict the test score based on other students' performances. There are many different types of regression algorithms. The three most common are listed below: Linear Regression, Logistic Regression and Polynomial Regression. Simple Regression Example First we will import the needed libraries and then create a random dataset with an increasing output. We can then place our line of best fit onto the plot along with all of the data points. We will then print out the slope and intercept of the regression model. print("Slope: ", reg.coef_[0]) print("Intercept:", reg.intercept_) Output: Slope: 65.54726684409927 Intercept: -1.8464500230055103 In middle school, we all learned that the equation for a line is y = mx + b. We can now create a function called "predict" that will multiply the slope (w) by the new input (x). This function will also use the intercept (b) to return an output value. After creating the function, we can predict the output values when x = 3 and when x = -1.5. Predict y For 3: 194.7953505092923 Predict y For -1.5: -100.16735028915441 Now let's plot the original data points with the line of best fit. We can then add the new points that we predicted (colored red). As expected, they fall on the line of best fit. Conclusion Supervised learning is the simplest subcategory of machine learning and serves as an introduction to machine learning for many machine learning practitioners. Supervised learning is the most commonly used form of machine learning, and has proven to be an excellent tool in many fields. This post was part one of a three-part series. Part two will cover unsupervised learning.
https://towardsdatascience.com/a-brief-introduction-to-supervised-learning-54a3e3932590
['Aidan Wilson']
2019-10-01 04:36:45.128000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Data Visualization']
Serious back door Vulnerabilities spotted in TikTok
Serious back door Vulnerabilities spotted in TikTok The security flaws were identified by the cybersecurity firm Check Point, and the company claims to have fixed them TikTok has broken all barriers of popularity, achieving 1.5 billion global users in just over two and a half years. The immense growth can be gauged from the fact that the app is available in 150 markets and used in 75 languages globally. Even more important is the niche that it serves — Generation Z, which uses the app to create short video clips, mostly lip-synced clips of 3 to 15 seconds and short looping videos of 3 to 60 seconds. Despite all these laurels, however, the application has recently come under fire from many quarters for the potential risks identified within it. The cybersecurity firm Check Point pointed to multiple vulnerabilities that its researchers uncovered. Although the security firm made TikTok aware of these security flaws on November 20, 2019, and the latter claims to have addressed them by December 15, 2019 (as confirmed by Check Point), the damage is done. The problems were brewing for TikTok even before the report of these vulnerabilities surfaced. With its strong Chinese connection (its parent company ByteDance is based in Beijing), the app was already under intense scrutiny in the United States. Although the decision by American authorities to scrutinize Chinese technology like TikTok was considered more of a trade-war by-product by some, that notion seems to have been quelled by the recent revelations.
https://medium.com/technicity/serious-back-door-vulnerabilities-spotted-in-tik-tok-e717167a1b80
['Faisal Khan']
2020-01-15 00:49:20.554000+00:00
['Privacy', 'Technology', 'Artificial Intelligence', 'Future', 'Cybersecurity']
What Developers should know about Product Management
Life as a developer I started my career working in a startup as a developer. I had a really good manager. For 2 years, he taught me to write maintainable code, create a scalable architecture and write tests to ensure quality. It would have been the best job in the world... if it weren't for product managers. Product managers were cold, terrible people. They did not care about quality, efficiency or maintenance. They just wanted us to finish a project as fast as possible. Photo by Steve Harvey on Unsplash I remember one of them skipped the design of a feature and gave it to us to deliver it faster… it was the ugliest feature I have ever seen. I really loved my job, but product managers brought all the bad: stress, deadlines, tech debt, etc. I hated writing low-quality code, so I decided to try a new company. Spoiler alert: it did not improve. Actually, it got really bad. I started to work in a team with two product managers. I am going to call them Bob and Alice. On a normal day, I would have a list of ten bugs that needed to be fixed. I would pick issue 1111. Then, Alice would come and ask me to finish issue 3003 first. Then we would have the standup, where Bob would say that ticket 6676 was the top priority… COULD THEY TALK TO EACH OTHER BEFORE TALKING TO US? That was when I moved to DevOps. Learning about business In DevOps, there are no product managers. So I went from having two to having zero! Awesome, right? Best job ever, I got to write code and enjoy life! But with the new role, I got new responsibilities too: investigate what Developers need, calculate budgets, design new tools, create and analyze metrics, perform risk analysis… We had no product managers because we were the product managers! Developers were our customers and we were trying to release new value to them. This made me change my point of view about product managers. I want to tell you the story of two projects that taught me many lessons: The first story is when we released a new feature called Parallel Deployments. We put lots of effort and thought into it. Then we rolled it out as an optional feature… and people loved it. We got feedback and it was awesome. Everyone was using it, so it became the de facto way to deploy. This taught me how satisfying it is to fix a problem that your customer is facing. It gave me a sense of self-realization. The second story is when we woke up one day and our testing cluster did not work. Most Developers were blocked in their work: releases, deadlines, and testing were all paralysed. We needed a fast fix. We worked 6h straight (no lunch break) to create a new fully working environment (did I mention that it was on a different cloud platform?). It was a huge effort! We did some manual changes and ugly patches. I'm not proud of it… but we got it working! It was then that I started to understand Product Managers and why sometimes having technical debt is not the worst thing. After that, I read some books about Product Management: Project to Product by Mik Kersten, Inspired by Marty Cagan. I also read some business books like The Lean Startup by Eric Ries. Now that I understand some of these business values, I don't see Product Managers as awful monsters anymore. They are people responsible for bringing value to customers. The problem is that most companies have a wall between business and development that hides all this valuable information. To a developer, every ticket looks the same.
I don't know what brings value to customers, so it's difficult to make decisions between quality, efficiency, scalability and time spent. And I get pissed off if they ask me to release a low-quality feature because of an unreasonable deadline. I want to share with you some of the things that I learned about business, so you can help break the wall between Engineering and Business and have more meaningful work. Business values These are the most important things that I learned from these business books and my own experience: Why do we work Success is not delivering a feature; success is learning how to solve the customer's problem — Eric Ries A Product is bigger than engineering. It starts when someone thinks about that feature. Then prioritization, planning, and design are needed. Depending on the company, many more steps may be part of this value stream. Probably, when you see a new feature ticket on your board, it has already been on many other teams' boards for months. In engineering, we make trade-offs between efficiency, quality, time spent and maintainability every day. To make the best decisions, we should aim to understand the main parts of this value stream and the customers' needs. The problem is that business people and engineers use different languages. As an engineer, I like facts: how fast do I fix a bug (maintainability), coverage in my tests (quality), how fast do I add a new feature (tech debt), error rate (quality), availability of my system (stability), … But I have no idea about business metrics! I just see priorities that have no explanations. In his book Project to Product, Mik Kersten defines a Flow Framework to correlate business results (Value, Cost, Quality and Happiness) with development metrics (Velocity, Efficiency, Time and Load). This not only helps developers make better decisions, but also helps business people define a better strategy for the company. In his book, Mik Kersten also discusses the importance of traceability on a Value Stream Network. Software development looks like an interconnected network of teams where each of them adds value to the product. The problem is when work arrives at the team more like a dump than a traceable story (who is the customer that needs the feature, when it was designed, which work has been done by which teams, …). How should we work Our highest priority is to satisfy the customer through early and continuous delivery of valuable software — First Principle of the Agile Manifesto No one knows the future. The best thing we can do is to add a small change, get feedback and LEARN. This improves the team's capacity to adapt and allows it to respond to market trends quicker than the competition. Small changes are useful everywhere: As a Developer, it is better to add small pieces of code that can be easily reviewed and tested. As a DevOps engineer, it's better to test a new tool with a small set of developers first and get feedback before using it everywhere. As a Product Manager, it helps to test hypotheses about the users. I once released a feature that took a month to build, just to see that no customer used it. The quicker we can test whether it makes sense to build a feature, the better. This is called an MVP (minimum viable product) and it's used to reduce time wasted on unneeded features. Who is responsible for the business Specialization allows us to handle ever-growing complexity, but the benefits of specialization can only be fully realized if the silos that it creates can be connected effectively.
— Mik Kersten People run companies. They define culture and are responsible for every decision made. That is why communication is highly important. In his book The Five Dysfunctions of a Team, Patrick M. Lencioni shows a pyramid of what makes a team dysfunctional. I believe these principles can be applied to fix communication problems: they can be used on something as small as a single team, for collaboration between teams, or even between two companies. From bottom to top, these dysfunctions are: Absence of TRUST. Teammates need to be vulnerable with one another. It is vital to have confidence among team members to know their peers' intentions are good and that there is no reason to be protective. Fear of CONFLICT. Conflict helps growth, producing the best possible solution in the shortest period of time. Shining a light on disagreements and forcing members to work through them is key to resolving issues. Lack of COMMITMENT. Sometimes it feels like a difficult decision gets delayed over and over. This is due to a lack of clarity and a lack of buy-in. Keep in mind that a decision is better than no decision. Even a wrong decision can help us continue and learn from our mistakes. Avoidance of ACCOUNTABILITY. Team members should be willing to call their peers on performance or behaviours that may hurt the team. If there is a specific process or requirement, everyone in the team should enforce it and ask others to follow it. Here, regular reviews and team rewards can help to achieve it. Inattention to RESULTS. This is when we avoid team status to focus on individual status. Teamwork is more important than superheroes. When I see that a single person is responsible for the efficiency of a team, I get scared. I have seen this many times: there are no discussions or communication as this person makes all the decisions alone. To avoid it, there should be a Public Declaration of Results and leaders should show that they do not value anything more than results. Thanks for reading! Maybe you are like me and have had bad experiences with Product Managers. If so, just try to talk to them and ask for metrics and information about customers. I have to say that the product managers I talked about were good at their job, but we did not know how to communicate. Leave a comment and some applause if you liked this blog post. You can also write to me on my Twitter account @Marvalcam1
https://medium.com/hacking-talent/what-developers-should-know-about-product-management-90333f5354eb
['Maria Valcam']
2019-08-16 12:09:50.824000+00:00
['Software Engineering', 'Engineering', 'Business', 'Agile', 'Product Management']
Economics of Big Data and Privacy: Exploring Netflix and Facebook
Photo by Carlos Muza on Unsplash It is the 21st century, technology is on the rise, the internet has succeeded paper texts. We live in a world that is interconnected. In this fast-paced, growing world, data is being rapidly created every second. The use of algorithms and statistical measures allows us to graph each movement in a way that is acceptable for predictive modeling. Big data refers to huge amounts of data accumulated over time through the use of internet services. Traditional econometrics methods fail when analyzing such huge amounts of data and we require a host of new algorithms that can crunch this data and provide insights. (Harding et al, 2018). Big data can be referred to all the human activity performed over the last decade and exponentially growing every second. Being interconnected has its benefits and drawbacks, one of the major drawbacks being privacy. Big data does just encompass the analysis of data but it also consists of data collection. Data collection is on the ways where personal user data can become compromised. (Kshetri, 2014). Predictive modeling will not only help us improve our services but it would have a deep impact on industries like healthcare and food. The accumulation of data cannot be stopped and we must be well aware of the benefits and drawbacks of the holy grail of technology, data. This paper aims to examine all the facts, case studies related to data, and how it affects our modern life. Data or information can be referred to as the accumulation of past behavior. Information can also be categorized as a sort of data. Typically something that we took for granted a few years back has not boomed in this decade due to a large amount of human activity and computing technologies. In the 21st century, we are surrounded by data that can be composed of two types: discrete and continuous. Discrete data consists of entries that can be used for classification whereas continuous data refers to the entries that can be used for regression. Man and Data are inseparable as it is the flow of information. Data has been a very important part of human existence from time immemorial. Once civilizations were established they could not function without data. Indus Valley Civilization had seals (a type of coins) in which data was tabulated. The Incas, another very old civilization, had the same methods for data collection. As civilization progressed, man-made data tabulation also developed. It graduated into coins that replaced the barter system. There were also numbers which have been used since biblical times. The seafarers also had a system of data that helped in their trade. Historically data collection was an important aspect of life in ancient times and around the 1950s, due to the rise of computing systems, data could be presented in the format of bits and bytes. In the 21st century data has been regarded as the new oil. Privacy is the state of freedom from intrusion and the ability of an individual to have the information only up to themselves. The person should have the freedom to share the information whenever they require it. In the 21st century, due to the boom in data and computing, companies have tried to exploit it using sophisticated algorithms and techniques known as data mining. Due to limited enforcement by the government for these privacy laws companies have exploited the data to gain more and more users by invading their privacy. (Cate, 1997). 
One reason between the disconnection of data and privacy is that many users are not aware of when their data is being collected (Acquisti et al, 2016). While we may consider data a valuable resource, we should be aware of how this data can be exploited by companies or politicians to attract a certain set of customers. Users can give out unintended personal information to these platforms in forms of text, images, preferences, and browsing time (Xu et al, 2014). These data collections pose a threat to humanity and to rectify this, new techniques to perform data mining are being explored extensively where the main aim is to study, analyze, process data in such a way in which privacy is maintained (Xu et al, 2014). In the 21st century, human-computer interface activity is at its peak. A lot of companies depend on the accumulation and processing of huge amounts of data(Oussous et al, 2018). Huge amounts of data, also known as big data, are a resource to a company’s research and development as they help companies decide on where to put the money and invest. The world economy has been changed into something called a data economy that refers to an ecosystem where data is gathered, organized, and exchanged using big data algorithms. These days data can be huge, cluttered and unstructured, an example is when different clients have different accounts on the same platform and to extract useful information the source algorithms have to first preprocess the data in such a way that manages bias, outliers, and imbalances (Tummala et al, 2018). We are surrounded by data in such a way that services like YouTube experiences a new video every 24hours with a rough estimate of 13 billion to 50 billion data parameters in a span of 5 years (Fosso Wamba et al, 2015). Harnessing human data to predict future movement is a common strategy for companies to game data, while Youtube is producing such huge amounts of data, people using the service are contributing back to the service by storing their likes and dislikes in a “big database” maintained by YouTube. Big Data and Business analytics are estimated to provide an annual revenue of about 150.8 billion dollars in the US (Tao et al, 2019). While these firms earn by providing users with a better interface using their data, some firms exploit data to influence a portion of individuals. They use computer algorithms to predict and transform user data into something usable, using data crunching and data mining techniques, they extract user data to sell or influence. Facebook, a social network company, was recently involved with Cambridge Analytica, a data-mining firm that gathered data of Facebook users using loopholes. Community profiles were built upon this data which was used to target customized ads. Due to this Facebook was on a decline as this was considered a massive data breach and personal user data consisting of images, text, posts, and likes. This scandal played a key role in the US Elections 2016 and following this GDPR (General Data Protection Regulation) was established in the EU (Tao et al, 2019). It is estimated that companies use past data to build something called as recommendation engines that can predict what sort of content a user wants to view. One such example is Netflix that asks users to rate movies on a scale of 1 to 5 to build a personalized profile for the user. 
For the Netflix recommendation engine, linear algebra or to be more precise SVD (Singular Value Decomposition) was used to a system that can predict what the user might like (Hallinan et al, 2014). To Conclude, Big data and privacy go hand in hand as they are interconnected and interdependent. For a breach of privacy, one must have access to huge amounts of data, and to build these computing engines, we need large scale distributed computing resources and techniques. We see how data became so popular and with the disruption of the right technological tools and algorithms, companies were able to harness the predictive capabilities of the system. We also see how privacy is a big part of the data economy and how data collection methods seem to differ in this fast-paced growing world. We cannot stop the flow of data but we can surely be aware of what is being collected. Companies like Cambridge Analytica used loopholes in the Facebook Platform to gather personal user data was surely a breach of privacy due to which Facebook was called upon in Congress and their market share dropped drastically. We provide a logical flow of how data, privacy arose, and how due to the huge amounts of human activity data seemed to be called “Big Data”. For the future direction of this research, we plan to analyze how we can control the flow of data by using technological techniques and we plan to discuss the effect of racial bias in such huge amounts of data, more specifically how racial biases affect data mining algorithms (Obermeyer et al, 2019). References Harding, Matthew, and Jonathan Hersh. “Big Data in Economics.” IZA World of Labor, 2018, doi:10.15185/izawol.451. Kshetri, Nir. “Big Data׳s Impact on Privacy, Security and Consumer Welfare.” Telecommunications Policy, vol. 38, no. 11, 2014, pp. 1134–1145., doi:10.1016/j.telpol.2014.10.002. Data is the new oil. Header_image. [accessed 2020 Jun 22]. https://spotlessdata.com/blog/data-new-oil Cate FH. Privacy in the information age. Washington, D.C.: Brookings Institution Press; 1997. Acquisti A, Taylor C, Wagman L. The Economics of Privacy. Journal of Economic Literature. 2016;54(2):442–492. Oussous A, Benjelloun F, Ait Lahcen A, Belfkih S. Big Data technologies: A survey. Journal of King Saud University — Computer and Information Sciences. 2018 [accessed 2020 Jun 22];30(4):431–448. Tummala Y, Kalluri D. A review on Data Mining & Big Data Analytics. International Journal of Engineering & Technology. 2018 [accessed 2020 Jun 22];7(4.24):92. Fosso Wamba S, Akter S, Edwards A, Chopin G, Gnanzou D. How ‘big data’ can make a big impact: Findings from a systematic review and a longitudinal case study. International Journal of Production Economics. 2015;165:234–246. Xu L, Jiang C, Wang J, Yuan J, Ren Y. Information Security in Big Data: Privacy and Data Mining. IEEE Access. 2014 [accessed 2020 Jun 22];2:1149–1176. Hallinan B, Striphas T. Recommended for you: The Netflix Prize and the production of algorithmic culture. 2014;18(1):117–137. Tao H, Bhuiyan M, Rahman M, Wang G, Wang T, Ahmed M, Li J. Economic perspective analysis of protecting big data security and privacy. Future Generation Computer Systems. 2019 [accessed 2020 Jun 23];98:660–671. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 [accessed 2020 Jun 23];366(6464):447–453.
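To make the SVD-based recommendation idea discussed above a little more concrete, here is a minimal Python sketch of a low-rank approximation of a toy ratings matrix. The matrix values and the chosen rank are invented for illustration; this is a sketch of the general technique, not Netflix's actual system.

# Minimal illustration of SVD-based rating prediction (toy data, not Netflix's system).
import numpy as np

# Rows = users, columns = movies, values = ratings 1-5 (0 = not yet rated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Factor the matrix and keep only the top-k singular values (a low-rank approximation).
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The reconstructed values for the unrated (zero) cells act as predicted preferences.
print(np.round(approx, 2))

Real systems train a factorization only on the observed ratings rather than treating missing entries as zeros, but the low-rank idea is the same.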
https://medium.com/towards-artificial-intelligence/economics-of-big-data-and-privacy-exploring-netflix-and-facebook-d7a2e9df05c8
['Aadit Kapoor']
2020-08-27 23:11:51.853000+00:00
['Privacy', 'Netflix', 'Software Development', 'Facebook', 'Big Data']
Matplotlib Cheat Sheet 📊
Making the bar graph horizontal is as easy as plt.barh( ). Let's add one more attribute to our graphs in order to depict the amount of variance. Within your code add the following: variance = [2,4,3,2,4] plt.barh( sectors , sector_values , xerr = variance , color = 'blue') The xerr= allows us to indicate the amount of variance per sector value. If need be, yerr= is also an option. Next we will create a stacked bar graph. It may appear that there is a lot of code for this graph but try your best to go through it slowly and remember all the steps we took while creating every graph until now. sectors = ['Sec 1','Sec 2','Sec 3','Sec 4','Sec 5'] sector_values = [ 23 , 45 , 17 , 32 , 29 ] subsector_values = [ 20 , 40 , 20 , 30 , 30 ] index = np.arange(5) width = 0.30 plt.bar(index, sector_values, width, color = 'green', label = 'sector_values') plt.bar(index + width, subsector_values, width, color = 'blue', label = 'subsector_values') plt.title('Horizontally Stacked Bars') plt.xlabel('Sectors') plt.ylabel('Sector Values') plt.xticks(index + width/2 , sectors) plt.legend(loc = 'best') plt.show() Without making much modification to our code we can stack our bar graphs one atop the other by indicating, for example, bottom = sector_values within the plt.bar() method of the plot that we want to be on top. Be sure to get rid of the index + width offset and any instance where it was used further down in our code. index = np.arange( 5 ) plt.bar( index , sector_values , width , color = 'green' , label = 'sector_values' ) plt.bar( index , subsector_values , width , color = 'blue' , label = 'subsector_values' , bottom = sector_values ) Next let's create a pie chart. This is done easily by using the pie( ) method. We will start with a simple chart then add modifying attributes to make it more unique. Again don't be overwhelmed by the amount of code that this chart requires. plt.figure( figsize=( 15 , 5 ) ) hospital_dept = [ 'Dept A' , 'Dept B' , 'Dept C' , 'Dept D' , 'Dept E' ] dept_share = [ 20 , 25 , 15 , 10 , 20 ] Explode = [ 0 , 0.1 , 0 , 0 , 0 ] # explodes the orange section of our plot plt.pie( dept_share , explode = Explode , labels = hospital_dept , shadow = True , startangle = 45 ) plt.axis( 'equal' ) plt.legend( title = "List of Departments" , loc = "upper right" ) plt.show( ) Histograms are used to plot the frequency of score occurrences in a continuous dataset that has been divided into classes called bins. In order to create our dataset we are going to use the numpy function np.random.randn. This will generate data with the properties of a normal distribution curve. x = np.random.randn( 1000 ) plt.title( 'Histogram' ) plt.xlabel( 'Random Data' ) plt.ylabel( 'Frequency' ) plt.hist( x , 10 ) # plots our randomly generated x values into 10 bins plt.show( ) Finally let's talk about scatter plots and 3D plotting. Scatter plots are very useful when dealing with a regression problem. In order to create our scatter plot we are going to create an arbitrary set of height and weight data and plot them against each other. height = np.array ( [ 192 , 142 , 187 , 149 , 153 , 193 , 155 , 178 , 191 , 177 , 182 , 179 , 185 , 158 , 158 ] ) weight = np.array ( [ 90 , 71 , 66 , 75 , 79 , 60 , 98 , 96 , 68 , 67 , 40 , 68 , 63 , 74 , 63 ] ) plt.xlim( 140 , 200 ) plt.ylim( 60 , 100 ) plt.scatter( height , weight ) plt.title( 'Scatter Plot' ) plt.xlabel( 'Height' ) plt.ylabel( 'Weight' ) plt.show( ) This same scatterplot can also be visualized in 3D.
To do this we are going to first import the mplot3d module as follows: from mpl_toolkits import mplot3d Next we need to create the variable ax that is set equal to our projection type. ax = plt.axes( projection = '3d' ) The following code is fairly repetitive of what you've seen before. ax = plt.axes( projection = '3d' ) ax.scatter3D( height , weight ) ax.set_xlabel( 'Height' ) ax.set_ylabel( 'Weight' ) plt.show( ) Well if you've made it this far you should be proud of yourself. We've only gone through the basics of what matplotlib is capable of but, as you've noticed, there is a bit of a trend in how plots are created and executed. Check out the Matplotlib Sample Plots page in order to see the many more plots Matplotlib is capable of. Next we will discuss Seaborn.
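As a final recap before moving on, here is a single self-contained script that ties several of the plot types above together using subplots; the data values are invented purely for illustration.

# Compact, runnable recap of the bar chart, histogram and scatter plot (made-up data).
import numpy as np
import matplotlib.pyplot as plt

sectors = ['Sec 1', 'Sec 2', 'Sec 3', 'Sec 4', 'Sec 5']
sector_values = [23, 45, 17, 32, 29]
x = np.random.randn(1000)
height = np.random.randint(140, 200, 50)
weight = np.random.randint(60, 100, 50)

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
axes[0].bar(sectors, sector_values, color='green')  # bar chart
axes[0].set_title('Bar Chart')
axes[1].hist(x, bins=10)                            # histogram
axes[1].set_title('Histogram')
axes[2].scatter(height, weight)                     # scatter plot
axes[2].set_title('Scatter Plot')
plt.tight_layout()
plt.show()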
https://medium.com/analytics-vidhya/matplotlib-cheat-sheet-51716f26061a
['Mulbah Kallen']
2019-10-10 05:11:00.196000+00:00
['Data Visualization', 'Python', 'Matplotlib']
What 4 Years Of Programming Taught Me About Writing Typed JavaScript
What 4 Years Of Programming Taught Me About Writing Typed JavaScript Untyped JavaScript, TypeScript, Flow, and PropTypes: Which one should you use? Photo by Kevin Canlas on Unsplash Types of types Mostly, statically typed languages are criticized for restricting developers. On the other hand, they’re loved for bringing early information about errors, documenting components (such as modules, methods, etc.), and now other more advanced functionalities such as auto-completion. A preliminary study from 2009¹ on untyped languages gives us some reference on exactly those pros and cons. Today, another type of language is also widely used: dynamically typed languages. A dynamically typed language is different from its counterpart by bringing types but at runtime. This way, you can have far more freedom than strongly typed languages while keeping their advantages. From our list, we have a single dynamically typed language: TypeScript. And that’s not completely exact, TS could also be called a soft typed language, that is between a dynamically and statically typed language. As this is not today’s subject, curious readers can have a look at the following article: Why am I saying there is only one? Of course, JavaScript is considered untyped (or weakly typed), while PropTypes is a package allowing type-checking at runtime. Flow is … neither. In practice, it looks a lot like TypeScript, and both are often compared. Inside your IDE and CLI, they are similar but their engines differ: TypeScript is a language while Flow is called a “static type checker”. In Flow, you write “annotations” to set types. At compile time, those annotations must be removed, which creates JavaScript files without any superset. It has been an argument in favor of Flow: performance. Both solutions have almost the same functionalities, but Flow removes any overhead that TypeScript has, once compiled. My experience I started my career in the JavaScript & front-end world in 2016 with Angular2 (and TypeScript). Before this front-end project, I mainly worked on C#, Java, and a bit of Vanilla JavaScript. I hated it. To me, vanilla JavaScript had no structure, type, object-oriented concepts, it was HELL. With more experience, practice with Angular2 (shipped with TypeScript) and then React, I almost started to enjoy it. That’s when I seriously considered a way to type JavaScript: I used TypeScript for a short time (which I didn’t master at the time) but I was back to untyped JavaScript with React. React with untyped JavaScript was working not-so-bad but I felt like a piece was missing: After a few weeks on a React project, code could easily get messy and difficult to understand, even more for newcomers. We needed a lot of time to read a piece of code. The number of errors after build was too high. Among the practices needed to avoid these problems, my experience told me typed JavaScript was the top priority. After some research: here I was, looking at TypeScript, Flow, and PropTypes. PropsTypes allowed me to type check my props but that was not enough. What about types outside of my React components? Can I validate that my app is type-safe as part of my CI pipeline? Can I validate it as a commit hook? Well, You can find some ways to validate types during your tests² and use it outside of component³ but PropTypes was not designed with that in mind. It was intended to give you information in real-time (at runtime). As I was not convinced by PropTypes, I was left with two choices: TypeScript and Flow. 
Functionality-wise they looked the same, with some good points on Flow's side: No overhead meant better performance. Backed by Facebook, as well as React. Those were enough to make a difference and that's why I started using Flow with React. Flow Around 2 years. That's the time I've been using Flow and only Flow. It worked with React / React Native like a charm, boosted our team productivity, was a great tool as part of our CI pipeline, and generally helped us deliver. And then, we hit a wall. Once our projects started getting bigger, the Flow server struggled to start and was getting slower and slower. At some point we were simply unable to run it, and our CI running time and its cost skyrocketed. Unfortunately, at the same time, I started getting into other projects. Among them: an embedded project with Arduino using johnny-five⁴. That's when I realized the second weakness of Flow: its community support. That's how typing systems in JavaScript work: you write your module with one solution and write an independent type definition for the other. TypeScript had a lot of support; I think that even today, the number of libraries without support that I used is less than 10. Flow was different: during the time I used it in the React ecosystem, there were a few libraries without support but I could still work without it. Outside of this ecosystem, support was a mess; I was unable to find even one library working with Flow. I found solutions to transform TypeScript definitions to Flow but in the end, it didn't work. At some point, I wondered if I would really have to use TypeScript, even for React / RN projects. How inconsistent is that: using a technology backed by a company such as Facebook, and replacing its typing system with one from Microsoft? Moreover, support for TypeScript in React must be a mess, right? TypeScript I (re)discovered a whole new world. After getting back to TypeScript on side projects with Johnny-Five or React, I couldn't believe how much I loved it. Not only did my performance and library support problems disappear, but I also came to love its syntax. Since 2016 and a lot of updates, I could not find anything to reproach TypeScript for. React / React Native support was perfect, with the whole ecosystem such as eslint with prettier, jest, etc. My next move was simple: as I had previously written templates/boilerplates for my team using Flow, they were quickly replaced with the equivalent in TypeScript. Since then, we've only been using TypeScript. Flow was great for a long time, but its performance and support problems got the best of it while TypeScript later evolved to be amazing: my choice was simple. Conclusion If you were to ask me what to choose between untyped JavaScript, PropTypes, Flow, and TypeScript for your project, I would tell you the following: Untyped JavaScript can be a good choice if you work on a project for less than a week, throw it away after, and don't wish to learn any type solution. PropTypes is a great tool if, for any reason, you cannot use Flow or TypeScript, but it is not enough for a big project. Flow was great; I have not tried it again since, but I would not bet on it. TypeScript is a great solution, in my own opinion the best out there and required for any JavaScript project with high stakes. Regarding the use of a typing library, I have encountered some objections that I would like to answer here. Loss of performance: today's solutions are great and the infinitely small loss of performance you could sustain would be balanced by the structure that typed code brings.
Lack of knowledge in TypeScript / Flow: if your project has high stakes and you don't want to use typed JavaScript because your developers don't know it, the project is bound to fail. Change your team or train them, that's the only way. Loss of time: I'll quote Clean Code: A Handbook of Agile Software Craftsmanship⁵ by Robert C. Martin⁶: "Indeed, the ratio of time spent reading vs. writing is well over 10:1." If anything, typed JavaScript will help you read code, thus increasing your productivity rather than decreasing it. If you have a different opinion about typed JavaScript or experience with Flow, feel free to contact me so I can link yours. Thanks for reading. TL;DR TypeScript.
https://medium.com/javascript-in-plain-english/what-4-years-of-programming-taught-me-about-writing-typed-javascript-2bac38b45f79
['Teddy Morin']
2020-12-04 17:41:19.730000+00:00
['Programming', 'Software Development', 'JavaScript', 'Typescript', 'React']
4 Science-Backed Ways to Get You Feeling Energetic
4 Science-Backed Ways to Get You Feeling Energetic These tactical approaches will improve concentration and alertness Photo by Mateus Campos Felipe on Unsplash Raise your hands if you struggle to get out of bed, even when you’ve technically gotten enough sleep, constantly begging your alarm to give you just 5 minutes, and you rely on several pints of coffee to get you through the morning — and probably all through the average working day? It’s common to feel tired in our fast-paced modern world. It’s never a new thing to find yourself running from one activity to another, even when you’ve planned out a day to gain balance, and soothe your soul. Whether it’s the emotional fatigue from all the weird things going on in the world, trying to juggle your passion and talent with the job you find yourself in, or having your sleep routine thrown off by a change in schedule, it seems virtually everyone is struggling with morning tiredness and wondering how to get back their energy to make their mornings more reasonable. Tiredness is one of the UK’s top health complaints — figures from Healthspan show a worrying 97% of us claim we feel tired most of the time, and doctors’ records reveal 10% of people who book an appointment are looking for a cure for their tiredness. Before I proceed, I do want to mention that I’m not talking about conditions like Chronic Fatigue Syndrome and SEID, which affect several million people here in the US alone and are very hard to cure. What I am talking about is a general state of tiredness that affects many, many more people (both children and adults) and can be prevented by evaluating your habits and changing those that are draining your energy.
https://medium.com/skilluped/4-science-backed-ways-to-get-you-feeling-energetic-1e1e02113cbe
['Benjamin Ebuka']
2020-11-27 03:43:16.044000+00:00
['Health', 'Inspiration', 'Science', 'Life Lessons', 'Self Improvement']
Presenting Your Data
What is Data Visualization? Data visualization is the process of presenting data. It is how we communicate findings from data in visually clear, concise, and often aesthetic ways. A data visualization typically focuses on a specific dataset, aiming to communicate a relationship, trend, distribution, etc. or lack thereof among variables. Visualizations help us get a grasp of the holistic view of our dataset, one we cannot have if we simply eyeball the raw data. In short, humans need visuals! Why is it Important? At the core of it, a data visualization transforms many data points into a single story. And stories can be good or bad; ideally, a good data visualization — like a good story — holds the reader’s attention and presents information in a concise, easy-to-understand way, leaving the reader with something to take away. Data visualization allows for visual literacy in the data, meaning that it allows otherwise complex data to be visually processed and understood in simpler ways. Visualizations reduce the cognitive load required to understand a dataset, provide overviews of the data, and comprise a crucial part of conducting exploratory data analysis. Example of data visualization used in exploratory data analysis. Image credit: https://en.wikipedia.org/wiki/Exploratory_data_analysis To get a better sense of why data visualization is important, we can look into existing visualizations to get a sense of their communicative intents and whether they achieve that intent. Junk Charts is a blog for data visualization critiques, run by Kaiser Fung, who picks out data visualizations in media to evaluate/critique them and provide suggestions for improvement. Learning to weigh the pros and cons of a visualization can help give a sense of why we should take the time to produce strong data visualizations (which isn’t always easy). Clear data presentation goes hand in hand with good design, and not just design in the aesthetically pleasing visual sense. Good design is equivalent to clear communication, and it facilitates an end goal such that the viewers of the visualization can describe the relationships in the data. Data visualizations should contain communicative intent. So this brings into importance the choice of data visualization, data transformations, shapes, use of text and color, and more. Types of Data Visualization The Python Graph Gallery provides a comprehensive list and guide to different data visualization types and the information they communicate. The site splits data visualization types into categories of distribution, correlation, ranking, maps, and more. There are many types of visualizations, ranging from a simple line chart to a more complex parallel plot. The type of visualization makes a big difference on what is being communicated about the data. The best choice of visualization depends on the data itself and what relationships in the data we intend to explore. Take a heat map for example: this type of visualization would best be used if you want to understand the correlation between different values for multiple variables, for instance the correlation between average gas price and geographic location. Example of a heat map. Image credit: https://www.usatoday.com/story/travel/roadwarriorvoices/2015/01/10/use-this-us-gas-price-heat-map-to-design-cheapest-possible-road-trip/83204036/ On the other hand, a simple histogram can suffice for displaying the distribution of a single variable, such as the arrival time for a certain event. Example of a histogram. 
Image credit: https://en.wikipedia.org/wiki/Histogram. For more examples of current data visualization use cases, checkout The Atlas, a collection of charts and graphs used by Quartz. Visualization Tips Choosing a visualization type The best visualization for your data depends on how much data you have, what kind of data you have, what questions you are trying to answer from the data, and often there isn’t one best visualization. It would be helpful to play around with different visualization types because two visualizations on the same data can draw attention to different attributes of the data. This visualization picker allows you to customize what you’re exploring about the data, and filters out suggested visualizations for your specific goal, but keep in mind these are just suggestions for starter visualizations. When you have one variable (univariate) If it’s numeric (e.g. histogram, box plot), the visualization should display the distribution/dispersion of the data, mode, and outliers If it’s categorical (e.g. bar chart), the visualization should display frequency distribution and skew When you have 2 (or more) variables (bivariate) The better visualization type differs depending on whether the comparison is numeric to numeric (e.g. scatterplot), numeric to categorical (e.g. multiple histograms), or categorical to categorical (e.g. side-by-side bar plot) The right choice of visualization also depends on what question you are exploring about the data. See the below diagram for a loose guide to picking a visualization type. Diagram for choosing a data visualization type. Image credit: https://www.datapine.com/blog/how-to-choose-the-right-data-visualization-types/ Transforming the data Log transform is often applied to skewed data to bring it to a less skewed and more normal-shaped curve, making it easier to visually perceive the data and perform exploratory data analysis. When you have a lot of data, smoothing helps remove noise in the data to improve visibility of the general shape and important features of the data. Use of color For qualitative/categorical data, choose distinct colors to clearly separate the categories. For quantitative data, use gradients to show comparative differences in small to large values Use of text Incorporate text to add clarity and intention, avoid clutter. Add legends, labels, and captions to point out important features or conclusions Resources and Tutorials To get a more comprehensive understanding of data visualization and its purposes, checkout this guide on Developing Visualisation Literacy. For another overview of data visualization and walkthroughs on presenting data using R, checkout the book Data Visualization: A Practical Introduction by Kieran Healy (the draft manuscript is available online). Flowing Data covers data in everyday life and provides tutorials for doing data visualization in R. For those interested in visual storytelling, The Pudding is a digital publication that uses creative data visualizations to explain ideas in popular culture. Visualizing Data is an encyclopedia for data visualization, with insights into best practices, examples, interviews with experts, and more. If you’re looking to learn more coding for data visualization, this tutorial walks you through simple data visualization using matplotlib and Pandas, covering how to filter data, and how to plot a line plot, bar chart, and box plot. 
This Medium article does a good job of showcasing the use cases for bar charts, scatterplots, and pie charts (though we recommend against using pie charts because it’s hard to accurately compare values across pie charts — histograms/bar charts are generally better options).
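To make the log-transform tip from the Visualization Tips section above concrete, here is a small Python sketch; the skewed data is simulated, so treat it as an illustration rather than a recipe.

# Comparing skewed data before and after a log transform (simulated data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=3.0, sigma=1.0, size=1000)  # heavily right-skewed values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(skewed, bins=30)
ax1.set_title('Raw (skewed)')
ax2.hist(np.log1p(skewed), bins=30)  # log1p also handles zeros gracefully
ax2.set_title('Log-transformed')
plt.tight_layout()
plt.show()

The transformed histogram is usually much closer to a symmetric bell shape, which makes the main features of the distribution easier to see.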
https://medium.com/ds3ucsd/eyeballing-the-data-ea77437ff6db
['Emily Zhao']
2019-06-04 23:19:59.716000+00:00
['Visualization', 'Data Science', 'Matplotlib', 'Data Visualization', 'Data Analysis']
Women and AI.
Statistics show only an estimated 22% of Artificial Intelligence professionals globally are female and only 20% of all computer programmers are female. Find this revelation shocking? Well so do I. Hi, my name is Malaika and I find the world of technology and computer science absolutely riveting. My fascination with technology was not only the driving factor to overcome the challenges I faced, but it also helped keep up the inquisitive and explorative attitude that invigorated my fascination in the field of AI but as a woman, I was shocked to find that in the AI-driven automated world that we are moving towards, the computer science field is still mostly dominated by men. I have always wanted to be part of the technology industry and this contrast has only motivated me further to pursue my interests. My interest and love for computer science began at a very young age. My earliest childhood memories are of being surrounded by technology and since then I’ve tried to gain knowledge in all the various fields that computer science has to offer. I’ve experimented with graphic design, explored the world of web and app development and spent hours learning the basics of different programming languages. However, one field that captivated my interest and curiosity immensely was the world of Artificial Intelligence. I was introduced to AI while using a virtual assistant on my parent’s phone and this interaction ignited a ceaseless curiosity to learn more about AI. Along with research done on my own time I was given the opportunity to be part of the Inspirit AI Scholars program which expanded my knowledge on the field of AI. My instructors in the program were Raunak Bhattacharyya and Sharon Newman and they introduced us to subjects such as natural language processing and neural networks as well as solving real-life problems using AI. I learned how AI impacts almost every industry in the world and we have the power to use it for good. For example, our instructors showed us how AI is used in medical diagnostics as it helps spot signs of certain diseases in medical scans and makes accurate predictions about patients’ future health. My teamates and I also worked on a project and our project aimed to classify tweets relating to various natural disasters into categories depending on what type of aid was required. I loved being a part of this program and I felt more alive and engaged than I ever had before learning more about AI. In this three-part blog series I would like to include the impact AI has in different industries, such as automobile, education, finance and healthcare and then further discuss the issue of privacy and security as this will help introduce and bring awareness to teenagers and young adults on the basic concepts of AI and its importance. By being exposed to cutting edge technologies and creative coding in AI, as a woman, I would love to bridge the gender gap and be part of the leading team of women in this exciting field. We all have conscious and unconscious biases when talking about women in technology and I would like to use this blog to discuss the issue of ethics in the field of artificial intelligence, including gender and racial biases as I believe Artificial Intelligence has the potential to not only overcome but also eradicate biases.
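A project like the disaster-tweet classifier mentioned above could be prototyped with a very small text-classification pipeline. The sketch below uses scikit-learn, and the tweets, labels and query are invented for illustration; it is not the code from the actual program.

# Tiny text-classification sketch for routing disaster tweets to aid categories.
# The example tweets and labels here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Families stranded on rooftops after the flood, need rescue boats",
    "No clean drinking water in the shelter since yesterday",
    "Power lines down across the city after the storm",
    "Injured people near the collapsed building need medics",
]
labels = ["rescue", "water", "infrastructure", "medical"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tweets, labels)

print(model.predict(["People trapped by rising water, send boats"]))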
https://medium.com/carre4/women-and-ai-a8389ec6334c
['Malaika N']
2020-12-08 21:28:46.140000+00:00
['Artificial Intelligence', 'AI', 'Women In Ai', 'Women In Tech']
Tug of war.
The morning coffee that awakens the mind, weakens the foundations of the shrine, that is the body. That which is forceful against the wills of the mind, the mind that remains powerless as the limbs keep pushing on, as though chasing that sun until it reaches the horizon.
https://medium.com/weeds-wildflowers/tug-of-war-6c2b64191099
[]
2020-12-24 00:38:03.991000+00:00
['Struggle', 'Mental Health', 'Mindfulness', 'Creativity', 'Poetry']
Anti Bar chart, Bar chart Club
Inspiration, knowledge, and anything about data science by Make-AI
https://medium.com/make-ai-data-stories/anti-bar-chart-bar-chart-club-56a2275b08aa
['Benedict Aryo']
2019-08-21 06:14:06.328000+00:00
['Data Science', 'Bar Chart', 'Data Visualization', 'Exploratory Data Analysis', 'Visualization']
Radio #LatentVoices. 001 — A Glimpse of Another World.
Radio #LatentVoices. 001 — A Glimpse of Another World. AI-driven musical storytelling Eerie and fantastic at the same time: Exploration of AI-driven creative tools is full of surprises and discoveries. It’s like entering an Unsupervised Machine Dream — everything is new, unique, unexpectable. For me running JukeBox for the first time was an ethereal experience. Can you remember that scene from DEVS, a brilliant mini-series about a group of scientists who were guarding a secret? I don’t want to spoil the show, just this experience to scan another dimension — bit by bit, sound by sound, where pixelated obscurity becomes a clear vision… JukeBox provides you with a similarly overwhelming experience- My first soundscape was this one. I’ve just run it per default settings. Sometimes the most usual can become the most surprising. I’ve got this noisy soundtrack with somebody speaking: After several iterations and around 8 hours of work, AI provided me this clean sound: A poetic but unknown language. A man, speaking with silent bitterness about something, vanished. About life, about death. You cannot understand this language, but compassion and empathy are inundating your soul. And then you hear heavenly musical instruments and a female voice, singing an ethnic song. You almost remember the transience of the nameless world you’ve experienced for a minute through an AI-portal. But it’s so familiar, full of past and missing future. And yet — it’s all present. The world of Unknown. Which you never will be able to recreate. Like a dream fading away.
https://medium.com/merzazine/radio-latentvoices-001-a-glimpse-of-another-world-796417f18b5c
['Vlad Alex', 'Merzmensch']
2020-12-14 22:12:45.071000+00:00
['Art', 'Artificial Intelligence', 'Latentvoices', 'Music']
Insights from our Synthetic Content and Deep Fake Technology Round Table
INSIGHTS GATHERED: CHALLENGES: The ability to recognise deep fake pictures by humans is very low, while the ability of an AI to detect them is fairly high. Facebook’s Deepfake Detection Challenge, in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI, was run through Kaggle, a platform for coding contests that is owned by Google. The best model to emerge from the contest detected deep fakes from Facebook’s collection just over 82 % of the time. It was argued that, at the current level, we won’t get to 90%. On the other hand, the percentage of deep fakes currently circulating on Facebook is in the single digits and there are many other sources of misinformation. We are easily fooled by video: Our subconsciousness has made a decision whether something is real or not before our conscious mind even starts processing the content. Authenticity and security — A cat vs. mouse race: generation of new deep fakes technologies vs. the detector of deep fakes. A significant security challenge is also socially engineered cyber attacks, not only main stream deep fakes. Increased accessibility of this technology to the mainstream public will significantly accelerate this race. A definition problem: There are various perspectives what a deep fake is and sometimes there is also confusion with terminology such as fake news. Historically, facial reenactment & face-swapping was the main deep fake use case. Now the term is used in a variety of situations and other developments such as, for example, voice synthesis add more layers to fool our senses. The last 10% is hard: Many use cases break at a certain point, especially for more horizontal use cases where the technology is simply not sophisticated enough. For example, creating avatars/assistants for gaming, customer service and AdTech. Interactive use cases are typically challenging. The quality in deep fakes, synthesis compared to other AI capabilities is not different. Even if you look at image classification — which has been democratised until now, if you try to generalise it — it will break. If you want to take an AI solution into production, it needs to be systematically structured and trained. The same is the case with synthetic data generation > if you can narrow down the use case and know the practicalities of what you are trying to produce, then it will likely work. OPPORTUNITIES: Lower costs of production with synthetic content is going to significantly accelerate high quality media production. This in turn will enable a wide set of application areas and use cases for commercial application — even by individuals. Deep Fakes are fuelling the start of the third evolutionary stage of media. Top Content: Movies, videos and post alterations: Individuals will be able to produce high quality content, even movies, with very limited resources. There will likely be a whole market for virtual actors to be customized for any purpose. Even after the final video is produced it could be altered for a different story with a new script. Digital copies and avatars: Individuals such as celebrities could scale their presence by addressing people in their local language or have their team write new scripts for presentations, talks or even commercials. Another use case is customized avatars that guide us in virtual worlds. Related to content creation — editing the video component to a virtual assistant is an interesting area. 
Synthetic product placement and fashion: Any products can be placed in media for more personal advertisement and essentially offer new marketing channels. Furthermore, clothing brands could use this technology for their advertisement and e-commerce sites. Personalization or Anonymization: Consumers can choose to personalize avatars in virtual (e.g. gaming) environments or swap their looks with alternative versions to stay anonymous. Workplace of the future: Immersive experience and interactive engagement mechanisms will enhance the accessibility of video calls, moving a step closer to semi-virtual reality. There has been advancement in video synthesis: NVIDIA recently showed a demo of how, via face-tracking with a few floating-point variables, a face can be reconstructed on the other side of a Zoom call, for example. If we push the boundaries here we will be able to reach a 3-point video format that will allow us to look at people in a 3D format (for instance getting a step closer towards holograms). Identity verification/protection and blockchain: With a vast increase of use-cases pinned on these technologies, it is ever more important to have protection over your digital identity and online reputation, especially for those whose authenticity is important to their digital reputation (politicians, influencers, etc.). Tools that let someone know whether their image has been stolen to create a deep fake are an appealing use case. A single shared source of truth (SSOT) is one way to approach this. Meaning, one piece of information put on blockchain gets replicated 10,000 times and stays there for eternity. Digital IP can be put on blockchain to track the authenticity of any content. Any modification to a picture gets shared on the blockchain, hence we know if something is real or not. Some people argue that this is the (only) most scalable and sustainable way to approach the identity problem.
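As a rough illustration of the single shared source of truth idea above, the usual first step is to compute a cryptographic fingerprint of a piece of content; registering that fingerprint on a ledger and recomputing it later is what makes modifications detectable. The sketch below covers only the fingerprinting step, and the file name is hypothetical.

# Fingerprinting content so that any later modification becomes detectable.
# Registering the digest on a ledger/blockchain is out of scope for this sketch.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: store this digest when the content is published,
# then recompute and compare it later to verify authenticity.
# print(fingerprint("original_video.mp4"))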
https://medium.com/dataseries/insights-from-our-synthetic-content-and-deep-fake-technology-round-table-609489ca26d9
['Mike Reiner']
2020-12-18 13:49:51.268000+00:00
['AI', 'Future', 'Deepfakes', 'Synthetic Data']
Learn AI with Free TPU Power — the ELI5 way
In this article, you'll learn how to use Google Colab for training a CNN on the MNIST dataset using Google's TPUs. Hold up, what's a CNN? In a regular Neural Network, you recognize patterns from labelled data ("supervised learning"), with a structure made of inputs, outputs, and neurons in between. Some of these are connected, but deciding which ones are connected manually doesn't work well for things like images, since the net doesn't understand how the pixels are related. A CNN, or Convolutional Neural Network, connects some of the neurons to pixels that are close together, to start out with some knowledge of how the pixels are related. This is a very high level overview, but if you want to dig into the architecture, check out this guide. What about MNIST? MNIST is a dataset of handwritten digits, with a training set of 60,000 examples and a testing set of 10,000 examples. It's often used in beginner problems. And what is Google Colab? Google Colaboratory was an internal research tool for data science, which was released to the public with the goal of the dissemination of AI research and education. It offers free GPU and TPU usage (with limits, of course ;)). Lastly, what is a TPU? TPU stands for "Tensor Processing Unit", an alternative to CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) that's especially designed for calculating tensors. A tensor is an alternative to multidimensional arrays (like in NumPy), and is a function you feed data into. The relationships in a neural network can be easily described and processed in tensors, so a TPU is very fast for this kind of work. Training and Testing a Model on GPUs and TPUs Sign up for Google Colaboratory. Open up this pre-made GPU vs TPU notebook (credits). When you open it up, the TPU backend should be enabled (if not, check Runtime -> Change runtime type -> Hardware Accelerator -> TPU). Run a "cell" using Shift-Enter / Command-Enter. Run all the cells in order, starting from the cell named "Download MNIST". If it's successful, the empty [] brackets should turn into [1], [2], [3], and so on. That's it! You may run into challenges if you're doing this long after I wrote the article, and cell [4] (training and testing) will take some time to run its 10 epochs. Conclusion Much of the barrier to entry in AI and data science used to be in the infrastructure, for instance getting the necessary compute to train large models. Nowadays, with tools like Google Colab, it really is as simple as opening and running a notebook in your browser, and not much different from using a Google Doc or spreadsheet. What Should I Do With This Information? Now you at least know how to run an AI model easily. If you want to practice on real-world challenges, head on over to bitgrit's competition platform, with new competitions regularly added. This will train your skills, and act as a means to build up your portfolio. This article was written by Frederik Bussler, CEO at bitgrit. Join our data scientist community or our Telegram for insights and opportunities in data science.
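For readers curious about what the training cell in such a notebook might contain, here is a minimal tf.keras CNN on MNIST. It is a generic sketch rather than the exact model from the linked notebook; it runs as-is on CPU or GPU, and on a Colab TPU you would typically build the model inside a tf.distribute.TPUStrategy scope.

# A minimal CNN on MNIST with tf.keras (generic sketch, not the notebook's exact model).
import tensorflow as tf

# Load and normalize the MNIST images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # add a channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128,
          validation_data=(x_test, y_test))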
https://medium.com/bitgrit-data-science-publication/learn-ai-with-free-tpu-power-the-eli5-way-4e5484ea0d08
[]
2019-03-16 11:29:59.619000+00:00
['Machine Learning', 'Artificial Intelligence', 'Technology', 'Google', 'Data Science']
Celery throttling — setting rate limit for queues
Writing some code Well, let’s write some code. Create the main.py file and set the basic settings: from celery import Celery from kombu import Queue ​ app = Celery('Test app', broker='amqp://guest@localhost//') ​ # 1 queue for tasks and 1 queue for tokens app.conf.task_queues = [ Queue('github'), # I limited the queue length to 2, so that tokens do not accumulate # otherwise this could lead to a breakdown of our rate limit Queue('github_tokens', max_length=2) ] ​ # this task will play the role of our token # it will never be executed, we will just pull it as a message from the queue @app.task def token(): return 1 ​ # setting up a constant issue of our token @app.on_after_configure.connect def setup_periodic_tasks(sender, **kwargs): # we will issue 1 token per second # that means rate limit for github queue is 60 tasks per minute sender.add_periodic_task(1.0, token.signature(queue='github_tokens')) Do not forget to launch Rabbit, I prefer to do this with docker: docker run -d --rm --name rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management Now let’s run celery beat - special celery worker, that is always launched and responsible for running periodic tasks. celery -A main beat --loglevel=info After that, messages will appear in the console once a second: [2020-03-22 22:49:00,992: INFO/MainProcess] Scheduler: Sending due task main.token() (main.token) Well, we have set up the issue of tokens for our ‘bucket’. Now all we have to do is to learn how to pull tokens. Let’s try to optimize the code that we wrote earlier for requests to github. Add these lines to main.py : # function for pulling tokens from queue def rate_limit(task, task_group): # acquiring broker connection from pool with task.app.connection_for_read() as conn: # getting token msg = conn.default_channel.basic_get(task_group+'_tokens', no_ack=True) # received None - queue is empty, no tokens if msg is None: # repeat task after 1 second task.retry(countdown=1) ​ # Added some prints for logging # I set max_retries=None, so that tasks will repeat until complete @app.task(bind=True) def get_github_api1(self, max_retries=None): rate_limit(self, 'github') print ('Called Api 1') ​ ​ @app.task(bind=True) def get_github_api2(self, max_retries=None): rate_limit(self, 'github') print ('Called Api 2') Now lets check how it works. In addition to the beat process, add 8 workers: celery -A main worker -c 8 -Q github And create a separate little script to run these tasks, call it producer.py : from main import get_github_api1, get_github_api2 ​ tasks = [get_github_api1, get_github_api2] ​ for i in range(100): # launching tasks one by one tasks[i % 2].apply_async(queue='github') Start it with python producer.py , and look at logs of workers: [2020-03-23 13:04:15,017: WARNING/ForkPoolWorker-3] Called Api 2 [2020-03-23 13:04:16,053: WARNING/ForkPoolWorker-8] Called Api 2 [2020-03-23 13:04:17,112: WARNING/ForkPoolWorker-1] Called Api 2 [2020-03-23 13:04:18,187: WARNING/ForkPoolWorker-1] Called Api 1 ... (96 more lines) Despite the fact that we have 8 workers, tasks are executed approximately once per second. If there was no token at the time task reached the worker, task is rescheduled. Also, I think you have already noticed, that in fact we throttle not queue, but some logical group of tasks, that can actually be located in different queues. Thus, our control becomes even more detailed and granular. Putting it all together Of course, the number of such task groups is not limited (only by capabilities of the broker). 
Putting the whole code together, expanding and 'beautifying' it: from celery import Celery from kombu import Queue from queue import Empty from functools import wraps app = Celery('hello', broker='amqp://guest@localhost//') task_queues = [ Queue('github'), Queue('google') ] # per minute rate rate_limits = { 'github': 60, 'google': 100 } # generating queues for all groups with limits that we defined in the dict above task_queues += [Queue(name+'_tokens', max_length=2) for name, limit in rate_limits.items()] app.conf.task_queues = task_queues @app.task def token(): return 1 @app.on_after_configure.connect def setup_periodic_tasks(sender, **kwargs): # generating auto issuing of tokens for all limited groups for name, limit in rate_limits.items(): sender.add_periodic_task(60 / limit, token.signature(queue=name+'_tokens')) # I really like decorators ;) def rate_limit(task_group): def decorator_func(func): @wraps(func) def function(self, *args, **kwargs): with self.app.connection_for_read() as conn: # Here I used another higher level method # We are getting the complete queue interface # but in return losing some performance because # under the hood there is additional work done with conn.SimpleQueue(task_group+'_tokens', no_ack=True, queue_opts={'max_length':2}) as queue: try: # Another advantage is that we can use a blocking call # It can be more convenient than calling retry() all the time # However, it depends on the specific case queue.get(block=True, timeout=5) return func(self, *args, **kwargs) except Empty: self.retry(countdown=1) return function return decorator_func # much more beautiful and readable with decorators, agree? @app.task(bind=True, max_retries=None) @rate_limit('github') def get_github_api1(self): print ('Called github Api 1') @app.task(bind=True, max_retries=None) @rate_limit('github') def get_github_api2(self): print ('Called github Api 2') @app.task(bind=True, max_retries=None) @rate_limit('google') def query_google_api1(self): print ('Called Google Api 1') @app.task(bind=True, max_retries=None) @rate_limit('google') def query_google_api2(self): print ('Called Google Api 2') Thus, the total task calls of the google group will not exceed 100/min, and the github group — 60/min. Note that in order to set up such throttling, it took less than 50 lines of code. Is it possible to make it even simpler?
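To exercise both rate-limited groups from this final version, the earlier producer script could be extended along these lines; this is a sketch that assumes the module is still called main.py and uses the task names defined above.

# Dispatching work to both throttled groups (sketch; task names as defined above).
from main import get_github_api1, get_github_api2, query_google_api1, query_google_api2

github_tasks = [get_github_api1, get_github_api2]
google_tasks = [query_google_api1, query_google_api2]

for i in range(100):
    # 'github' group tasks are throttled to 60/min, 'google' group tasks to 100/min
    github_tasks[i % 2].apply_async(queue='github')
    google_tasks[i % 2].apply_async(queue='google')

Workers would also need to consume from both queues, e.g. celery -A main worker -c 8 -Q github,google.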
https://medium.com/analytics-vidhya/celery-throttling-setting-rate-limit-for-queues-5b5bf16c73ce
['Magomed Aliev']
2020-05-10 15:54:26.235000+00:00
['Distributed Systems', 'Python', 'Software Development', 'Celery', 'Rabbitmq']
AWS Glue Studio—No Spark Skills-No Problem
AWS Glue Studio—No Spark Skills-No Problem Easily create Spark ETL jobs using AWS Glue Studio — no Spark experience required Image by Gerd Altmann from Pixabay AWS Glue Studio was launched recently. With AWS Glue Studio you can use a GUI to create, manage and monitor ETL jobs without the need of Spark programming skills. Users may visually create an ETL job by visually defining the source/transform/destination nodes of an ETL job that can perform operations like fetching/saving data, joining datasets, selecting fields, filtering etc. Once a user assembles the various nodes of the ETL job, AWS Glue Studio automatically generates the Spark Code for you. AWS Glue Studio supports many different types of data sources including: S3 RDS Kinesis Kafka Let us try to create a simple ETL job. This ETL job will use 3 data sets-Orders, Order Details and Products. The objective is to Join these three data sets, select a few fields, and finally filter orders where the MRSP of the product is greater than $100. Finally, we want to save the results to S3. When we are done the ETL jobs should visually look like this. Image by Author Lets start by downloading the data sets required for this tutorial. Save the data sets in S3. $ git clone https://github.com/mkukreja1/blogs $ aws s3 mb s3://glue-studio make_bucket: glue-studio $ aws s3 cp blogs/glue-studio/orders.csv s3://glue-studio/data/orders/orders.csv upload: blogs/glue-studio/orders.csv to s3://glue-studio/data/orders/orders.csv $ aws s3 cp blogs/glue-studio/orderdetails.csv s3://glue-studio/data/orderdetails/orderdetails.csv upload: blogs/glue-studio/orderdetails.csv to s3://glue-studio/data/orderdetails/orderdetails.csv $ aws s3 cp blogs/glue-studio/products.csv s3://glue-studio/data/products/products.csv upload: blogs/glue-studio/products.csv to s3://glue-studio/data/products/products.csv We will save these files to S3 and catalog them in the orders database using the Glue Crawler. $ aws glue create-database --database-input '{"Name":"orders"}' $ aws glue create-crawler --cli-input-json '{"Name": "orders","Role": "arn:aws:iam::175908995626:role/glue-role","DatabaseName": "orders","Targets": {"S3Targets": [{"Path": "s3://glue-studio/data/orders/"},{"Path": "s3://glue-studio/data/orders/"}]}}' $ aws glue start-crawler --name orders $ aws glue delete-crawler --name orders $ aws glue create-crawler --cli-input-json '{"Name": "orderdetails","Role": "arn:aws:iam::175908995626:role/glue-role","DatabaseName": "orders","Targets": {"S3Targets": [{"Path": "s3://glue-studio/data/orderdetails/"},{"Path": "s3://glue-studio/data/orderdetails/"}]}}' $ aws glue start-crawler --name orderdetails $ aws glue delete-crawler --name orderdetails $ aws glue create-crawler --cli-input-json '{"Name": "products","Role": "arn:aws:iam::175908995626:role/glue-role","DatabaseName": "orders","Targets": {"S3Targets": [{"Path": "s3://glue-studio/data/products/"},{"Path": "s3://glue-studio/data/products/"}]}}' $ aws glue start-crawler --name products $ aws glue delete-crawler --name products Using the AWS console open AWS Glue service and click on AWS Glue Studio using the left menu. Make sure you have Blank Graph selected. Click on Create. Image by Author Start by creating the first Transform Node-Fetch Orders Data Image by Author Make sure that Fetch Orders Data points to the orders table catalogued in Glue previously. 
Image by Author Using the same principles as above, create the Transform Node-Fetch OrderDetails Data as well as Fetch Products Data. Now we will create a Transform Node that will join Fetch Orders Data to Fetch OrderDetails Data. Image by Author Notice how the joining condition is defined between the two tables as below. Using the same principles, create a Transform Node that will join Join Orders to Fetch Products Data. Image by Author Since we want to select a subset of columns from the three tables, we can use the Select Fields Node. Image by Author Notice how you can check boxes for fields that should be included in the final result set. Image by Author Now we would like to filter the products whose MRSP is greater than $100. This can be achieved by creating a Filter Products MRSP>100 Node as below. Image by Author Notice how one or more filter conditions can be defined. Image by Author Finally, we want to save the result table to S3 in Parquet format. For this we create a Destination Node-Save Results. Image by Author Image by Author Use the Save button to save your ETL job. At this time you should be able to see that AWS Glue Studio has automatically generated the Spark code for you. Click on the Script menu to view the generated code. We are all set. Let’s run the job using the Run button at the top right. Clicking on Run Details should show you the status of the running job. Once the job status changes to Succeeded, you can go to S3 to check the final results of the job. Image by Author At this point there should be many Parquet files produced in the results folder. Image by Author You can check the contents of the files using the Apache Parquet Viewer. Image by Author I hope this article was helpful. AWS Glue Studio is covered as part of the AWS Big Data Analytics course offered by Datafence Cloud Academy. The course is taught online by myself on weekends.
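For readers curious about what the generated script boils down to, here is a rough plain-PySpark sketch of the same pipeline. It is not the code AWS Glue Studio actually emits (the generated script uses Glue's GlueContext and DynamicFrame APIs); it is only an approximation to make the job easier to reason about. The join keys (orderNumber, productCode), the selected columns, the MSRP column name, and the output path are assumptions based on the classic sample data set, so adjust them to match what the crawler catalogued:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# read the three CSV data sets crawled earlier
orders = spark.read.option("header", True).csv("s3://glue-studio/data/orders/")
orderdetails = spark.read.option("header", True).csv("s3://glue-studio/data/orderdetails/")
products = spark.read.option("header", True).csv("s3://glue-studio/data/products/")

# join Orders -> OrderDetails -> Products (assumed join keys)
joined = (orders
          .join(orderdetails, "orderNumber")
          .join(products, "productCode"))

# select a subset of fields and keep only products priced above $100
result = (joined
          .select("orderNumber", "orderDate", "productName", "quantityOrdered", "MSRP")
          .where(F.col("MSRP").cast("double") > 100))

# write the results to S3 in Parquet format (assumed output path)
result.write.mode("overwrite").parquet("s3://glue-studio/results/")

Running a sketch like this locally against the downloaded CSVs is also a convenient way to sanity-check the transform logic before launching the Glue job.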
https://towardsdatascience.com/aws-glue-studio-no-spark-skills-no-problem-b3204ed98aa4
['Manoj Kukreja']
2020-09-29 14:22:03.132000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'AWS', 'Data']
Rivian Has Become a Top Dog in the Electric Vehicle Battle
Rivian Has Become a Top Dog in the Electric Vehicle Battle Amazon will drive 10,000 Rivian vans in 2022, 100,000 in 2030 Photo by Amazon/Rivian While Tesla holds a massive lead in consumer electric vehicle sales, another company has a firm grip on the delivery market: Rivian. If you’ve never heard of Rivian, that’s OK: they’re rather lowkey. Founded in 2009, the company didn’t reveal its first two products — an electric pickup truck and SUV — until 2017. In 2019, Rivian received $1.55 billion in funding from three companies: Cox Automotive ($350 million), Ford ($500 million), and Amazon ($700 million). Due to factors relating to COVID-19, Ford terminated its contract with Rivian. Amazon didn’t leave, however, and will start seeing returns on its investment quite soon. Not long after Cox’s investment in September 2019, Amazon announced a purchase of 100,000 Rivian vans for its delivery fleet — the largest delivery EV order to date. Amazon plans to run on 100% renewable energy by 2030, and the Rivian fleet will play a large role in that. The company initially planned to have all 100,000 EV delivery vans on the road by 2024, with some hitting the roads in 2020, which is now unlikely with COVID-related delays. A recent announcement did say that 10,000 Rivian vans should be delivering Amazon packages by 2022. The two companies have yet to reveal specifications for the van but did say it will have 150 miles of range (which would be class-leading) and a 360-degree view of the exterior. This view will be shown on a center-stationed monitor display. Rivian is expected to announce its battery supplier for the Amazon project by the end of this year. This project will put Amazon in contention as the largest at-home delivery fleet in the world. Currently with about 30,000 vehicles, the company is behind the likes of FedEx (78,000), UPS (125,000), and United States Postal Service (200,000). The Amazon/Rivian partnership is not a unique one aside from the size of the commitment, though. UPS has ordered 10,000 EVs (with an option for 10,000 more) from startup Arrival and will begin rolling those out worldwide within the next few years. FedEx made an order for 1,000 electric vans at the end of 2018, while companies are currently battling for a large fleet contract from the USPS. Companies such as Tesla and Nikola are building electric semi-trucks meant for large-freight hauling to retailers. It would not be surprising to see either company jump into the smaller-scale delivery game down the line, however. The development of electric vehicles (specifically with batteries) comes slowly but surely, which is why prices are still high and delivery fleets aren’t already 100% electric. In an industry where things do move so slow, the latest reveal of Rivian’s van gives us some tangibility that we don’t always get. Thanks to Amazon’s commitment, Rivian has a large lead over all other hopeful EV suppliers. From a delivery standpoint, at least.
https://medium.com/swlh/rivian-has-become-a-top-dog-in-the-electric-vehicle-battle-2dcea44bebc5
['Dylan Hughes']
2020-11-24 20:42:19.860000+00:00
['Transportation', 'Climate Change', 'Environment', 'Sustainability', 'Electric Vehicles']
Committed to Success: Hai Hoang, Tech Lead at Planworth
Hai Hoang is a Commit engineer who joined Planworth, an early-stage wealth planning SaaS platform in 2019 as a technical lead. We sat down with him to hear about his journey. Tell us a bit about your background before joining Commit? I spent the first two-and-a-half years of my career at a large tech company, then did my own startup for around two years. It was an on-demand personal assistant app. We launched our product and got to market, but market conditions were bad at the time. Investments dried up and we ended up having to shut down. I went back to the first tech company I worked for, to figure out what I wanted to do next. Then I worked for a few startups, but nothing was really going anywhere. That’s when Commit came into the picture. I was one of the first engineers, part of the founding team. “You don’t really know how you get along with people until you work with them, right? To me, Commit offers a really good opportunity to get a feel for that before making a long term commitment” What drew you to Commit? The people. I had worked with some members of the Commit team at other startups. The people are really fun and they had senior engineers on board. I was attracted to that, because I knew I could learn a ton from them. It was a very good environment for me to start new projects all the time and learn from some of the best. It was clear that Commit’s goal was to minimize risk for engineers. We wanted to offer engineers the opportunity to meet with startup founders and assess what their product was and what their business strategy was before fully jumping in — that really appealed to me, because I had been with failed startups before. How did you get connected with Planworth? I actually had no intention of leaving Commit. I was there from the beginning, I was helping build the company — I thought there was no reason for me to leave. But then Planworth came around, and the product and the team got me very interested. I could see potential that I hadn’t seen with other projects. Plus, it’s in the fintech field, which I’ve always had an interest in but have never been able to dip my toes into. What attracted you to Planworth? I liked the fact that they recognized their product-market fit. That’s something many startups don’t have from the beginning. Most startups have an idea, then they build a product, then they go out and validate it with the market. Planworth built a rough proof-of-concept and got immediate validation. So by the time I joined, it was clear they had found a market, figured out a business model, and had a plan for earning revenue. It was amazing to see. Also, it was great that I had a three-month period working with the founders and the team, before I formally joined, so I really got to know them and their product. What has it been like so far? It’s been very, very good. They’ve given me the autonomy and authority to implement my vision of what I want a team to look like, which I’ve never had the opportunity to do. They gave me that trust. And not just from the management perspective — even the engineering team trusts me to make decisions. It’s a great career opportunity because previously I’ve been a team lead, where you’re running projects, but at Planworth I’m also managing people. How has your time with Commit helped you in this new role? I’ve definitely been able to transfer things I learned at Commit onto my work at Planworth. At Commit I learned about project management, especially figuring out ways to deliver work in a shorter time frame. 
A lot of the technical skill sets I picked up at Commit also set me up for success at Planworth, like devops tricks. I’m still learning a lot at Planworth. Maybe not as much on the engineering side, but on the management side and team lead side. I have Tarsem and James [Planworth’s co-founders] teaching me and coaching me. So I’m learning every day. How has it been working with two non-technical founders? It’s been good. Engineering is very different than what they’re used to, but they’re very open-minded and understanding about the process. That’s one of the characteristics I like about them. They care about scalability, and they care about having clean code and tests and stuff like that. Not all founders do. What would you tell or say to other engineers considering joining Commit? Go for it. And keep an open mind, because every single project is very different. You end up working with different teams and different people. You don’t really know how you get along with people until you work with them, right? To me, Commit offers a really good opportunity to get a feel for that. I think it’s a fantastic model. As a person who comes from the startup world, working on multiple failed startups, it really does mitigate the risk.
https://medium.com/commit-engineering/committed-to-success-hai-hoang-tech-lead-at-planworth-38f8442cebe9
['Beier Cai']
2020-06-14 21:48:45.193000+00:00
['Technology', 'Careers', 'Software Development', 'Startup', 'Entrepreneurship']
How I made the switch to AI research
In 2015, I wanted to help with AI research, but taking the first steps felt daunting. I’d graduated from MIT then spent eight years building web startups. I’d put in my 10,000 hours, gotten funding from Y Combinator and grown a company to thirty people. Moving to research felt like starting over in my career. Was it really a good idea to throw away years of work? A friend told me about South Park Commons (SPC), a new space for people who were taking the first steps on a new path, and introduced me to Ruchi, the founder. Ruchi is super impressive, she was one of the earliest Facebook engineers, and had founded and sold a successful company. She also has a high-bandwidth and disarmingly direct communication style that I found refreshing. Over lunch, Ruchi described South Park Commons as a community in which everyone is starting over. Starting over is in fact the main thing that unifies the group. For example, two current Commons members are Jason, the maintainer of a popular open-source project Quill, who’s been learning to do enterprise sales, and Malcolm, a successful infrastructure engineer who’s starting a fund Strong Atomics to invest in nuclear fusion companies. I joined South Park Commons, blocked off three months to see if I could make progress, and made a plan to teach myself machine learning. Several other SPC members were interested in the space, so we started going through a curriculum of courses and organized a paper reading group. As soon as I got over the fear and took the plunge, things got vastly easier. Six months of focused work later, I had a position at OpenAI.
https://medium.com/south-park-commons/how-i-made-the-switch-to-ai-research-b053b402608
['South Park Commons']
2017-08-03 16:49:23.029000+00:00
['AI', 'Naturallanguageprocessing', 'Artificial Intelligence', 'Machine Learning']
20 (more) Technologies that will change your life by 2050
20 (more) Technologies that will change your life by 2050 The future will be… weird ? I recently shared an article called “The “Next Big Thing” in Technology : 20 Inventions That Will Change the World”, which got a few dozen thousand hits in the past couple of weeks. This calls for a sequel. The previous 20 technologies were specifically centered on the next 20 years of technology development; but there’s a lot more to unravel when looking beyond the near future, though certainty obviously decreases with time. Below are 20 technologies that will change the world by 2050 and beyond. This date and predictions are understandably vague and arbitrary, and we all know that predictions often fall flat (check my 2020 tech predictions if you don’t believe me). Regardless, the knowledge gained through planning for potential technologies is crucial to the selection of appropriate actions as future events unfold. Above all, articles such as this one act as catalysts to steer the conversation in the right direction. 1. DNA computing Life is far, far more complex than any of the technologies humanity has ever created. As such, it could make sense to use life’s building blocks to create an entirely new type of computational power. Indeed, for all the talks of Artificial Intelligence, nothing beats our mushy insides when it comes to learning and making inferences. DNA computing is the idea that we can use biology instead of silicon to solve complex problems. As a DNA strand links to another, it creates a reaction which modifies another short DNA sequence. Action. Reaction. It’s not a silly idea : most of our computers are built to reflect the very organic way humans think (how else would we grasp computer’s inputs and outputs). Humanity is pretty far from anything usable right now : we’ve only been able to create the most basic Turing machine, which entails creating a set of rules, feeding an input, and getting a specific output based on the defined rules. In real term… well, we managed to play tic-tac-toe with DNA, which is both dispiriting and amazing. More on DNA Computing here [Encyclopaedia Britannica]. 2. Smart Dust Smart dust is a swarm of incredibly tiny sensors which would gather huge amounts of information (light, vibrations, temperature…) over a large area. Such systems can theoretically be put in communication with a server or with analysis systems via a wireless computer network to transmit said information. Potential applications include detection of corrosion in aging pipes before they leak (for example in drinking water… oh, hi Flint), tracking mass movements in cities or even monitoring climate change over large areas. Some of the issues with this technology is the ecological harm these sensors could cause, as well as their potential for being used for unethical behavior. We are also far from something that could be implemented in the near future : it’s very hard to communicate with something this small, and Smart Dust would likely be vulnerable to environmental conditions and microwaves. More on Smart Dust here [WSJ]. 3. 4D printing The name 4D printing can lead to confusion: I am not implying that humanity will be able to create and access another dimension (Only Rubik can do that). Put simply, a 4D-printed product is a 3D-printed object which can change properties when a specific stimulus is applied (submerged underwater, heated, shaken, not stirred…). 
The applications are still being discussed, but some very promising industries include healthcare (pills that activate only if the body reaches a certain temperature), fashion (clothes that become tighter in cold temperature), and home-making (furniture that becomes rigid under a certain stimulus). The key challenges of this technology is obviously finding the relevant components for all types of uses. Some work is being done in this space, but we’re not even close to being customer-ready, having yet to master reversible changes of certain materials. More on 4D Printing here [Wikipedia]. 4. Neuromorphic Hardware Now THIS is real SciFi. Taking a page from biology, physics, mathematics, computer science and electronic engineering, neuromorphic engineering aims to create hardware which copies the neurons in their response to sensory inputs. Whereas DNA Computing aims to recreate computers with organic matter, Neuromorphic Hardware aims to recreate neurons and synapses using silicon. This is especially relevant as we’re seeing an end to the exponential computing power growth predicted by Moore’s law (that’s quantum mechanics for you), and have to find new ways to calculate a bunch of things very quickly. We’re not really sure how far this idea can be taken, but exploring it is, if anything, great for theoretical AI research. Should said research go further and become actionable, you’ll find me knocking on Sarah Connor’s door. More on Neuromorphic Hardware here [Towards Data Science]. 5. Nanoscale 3D printing 3D printing is still a solution looking for a problem. That’s partly because 3D printers are still too expensive for the average Joe, and not sophisticated and quick enough for large-scale manufacturing companies. This may change over the next few decades : researchers have developed a method that uses a laser to ensure that incredibly tiny structures can be 3D-printed much, much faster (X1,000), while still ensuring a good quality of build. This method is called “femtosecond projection TPL”, but I much prefer “Nanoscale 3D printing” (because I’m a technological peasant). Use cases are currently centered around flexible electronics and micro-optics, but quick discoveries around materials (both liquid and solid) leads researchers to think that they will be able to build small but imagination-baffling structures in the near future. One might imagine the medical community could use something like this… More on Nanoscale 3D Printing here [Future Timeline]. 6. Digital Twins As opposed to some of the other techs discussed in this article, this technology may not affect you directly, and is already being implemented (and will continue to be for a long, long time). Essentially, digital twins integrate artificial intelligence, machine learning and software analytics to create a digital replica of physical assets that updates and changes as its physical counterpart changes. Digital twins provide a variety of information throughout an object’s life cycle, and can even help when testing new functionalities of a physical object. With an estimated 35 billion connected objects being installed by 2021, digital twins will exist for billions of things in the near future, if only for the potential billions of dollars of savings in maintenance and repair (that’s a lot of billions). Look out for big news on the matter coming out if the manufacturing, automotive and healthcare industries. Why would I mention this ever-present idea as a technology to look out for in 2050? 
Easy : though we are talking about objects now, the future of digital twins rests in the creation of connected cities, and connected humans. More on Digital Twins here [Forbes]. 7. Volumetric displays / Free-space displays If one cuts through the blah blah (of which there is too much in this space), volumetric displays are essentially holograms. There are currently 3 techniques to create holograms, none of which are very impressive : illuminating spinning surfaces (first seen in 1948), illuminating gases (first seen in 1914), or illuminating particles in the air (first seen in 2004). The use of volumetric displays in advertising (the primary focus for this concept) may be either greatly entertaining, or absolutely terrible because of potential impracticabilities. You can imagine which easily by watching Blade Runner 2049). I’m also dubious about the tech’s importance: computers were supposed to kill paper and I still print every single presentation I receive to read it. I don’t see hologram being anything else than a hype-tech attached to other more interesting techs (such as adaptive projectors). More on Volumetric Displays here [Optics & Photonics News]. 8. Brain-Computer interface (BCI) A brain-computer interface, sometimes called a neural-control interface, mind-machine interface, direct neural interface, or brain–machine interface, is a direct communication pathway between an enhanced or wired brain and an external device (If you start reading words like ElectroEncephaloGraphy, you’ve gone too far into the literature). If that sounds like something you’ve heard a lot about recently, it might have a lot to do with Elon Musk and a pig of his... Beyond obvious and needed work in the prosthetic space, it’s the medical aspect which would be most transformative. A chip implemented in the brain could help prevent motion sickness, could detect and diagnose cancer cells and help with the rehabilitation of stroke victims. It could also be used for marketing, entertainment and education purposes. But let’s not get ahead of ourselves : there are currently dozens, if not hundreds of technical challenges to wrestle with before getting anywhere near something the average person could use. First and foremost, we’d need to find the right material that would not corrode and/or hurt the brain after a few weeks, and get a better understanding of how the brain ACTUALLY works. More on brain-computer interface here [The Economist]. 9. Zero-knowledge proof (aka: zero-knowledge succinct non-interactive argument of knowledge) Privacy: ever heard of it? Computer scientists are perfecting a cryptographic tool for proving something without revealing the information underlying the proof. It sounds incredible but not impossible once you wrap your head around the concept and the fact that it’s a bit more complex than saying “c’mon bro, you know I’m good for it”. Allow me to simplify : Bob has a blind friend named Alice and two marbles of different colours, which are identical in shape and size. Alice puts them behind her back and shows one to Bob. She then does it again, either changing the marble or showing the same one again, asking if this is the same as the marble first shown. If Bob were guessing whether it was the same or not, he would have a 50/50 chance of getting it right, so she does it again. And again. And because Bob sees colours, he gets it right each time, and the chance that he guessed lucky diminishes. 
That way, Alice knows that Bob knows which marble is the original shown (and its colour), without her ever knowing the colour of any of the marbles. Boom, zero-knowledge proof. ZNP is this concept, applied to digitally to complex algorithms. It’s easy to come up with VERY cool use cases. For example, if an app needs to know that you have enough money to put a transaction through : your bank could communicate that yes, that is the case, without giving an amount. It could also help identify a person without a birth certificate, or allow someone to enter a restricted website without needing to display their date of birth. Yay for privacy. More on zero-knowledge proof here [Wired]. 10. Flying autonomous vehicles This one is easier to grasp as it has been part of the collective imagination for dozens, if not hundreds of years. Cars. But they fly. Obviously, there are a lot of issues with this very sci-fi idea. We’re already struggling to stop people from attacking “classical” autonomous cars, so the jury is still out on whether it will ever come to be. Another issue is the fact that much of our world is built for traditional cars. Roads, buildings, parkings, insurance, licenses… everything would need to be destroyed and remade. It is likely that such cars will never see the light of day unless society crumbles and is rebuilt (2020’s not over yet). There are currently 15 prototypes in development. I’d bet that none of these will ever come to light, except as playthings for the uber-rich. But hey, who doesn’t want to see Zuckerberg live out his mid-life crisis in the skies? More on Flying Autonomous Vehicles here [Forbes]. 11. Smart Robots / Autonomous Mobile Robots This has also been a staple of SciFi for many years, for obvious reasons: imagine mixing robotics with enough Artificial Intelligence to entertain the idea of the digital world becoming physical. Welcome to your tape. Before any of this can ever happen, we will need to improve robotics (robots don’t move so good right now) and create a new branch of AI research to explore a myriad of reactions such a technology would require to be operational. AMRs will also need nice, strong batteries, hence the current research into Lithium–silicon technologies. Though no terminators are in sight, we’re starting to see such autonomous robots in warehouses, where they pick your Amazon purchase, and in the street, where they’ve begun bringing us our groceries. More on Smart Robots here [EDx]. 12. Secure quantum internet As I’ve mentioned in previous articles, quantum computing will allow us to take leaps in the number of calculations a computer can do per second. A by-product of this is the fact that no password will be safe in a quantum world, as it should become possible to try all possible text and number combinations in record time. Modern problems require modern solutions. Researchers at the Delft University of Technology in the Netherlands are working on a quantum Internet infrastructure where communications are coded in the form of qubits and entangled in photons (yes, light) flowing in optical fibers, so as to render them impossible to decrypt without disturbing the network. In everyday word, that means that anyone listening in or hacking the network would disrupt the communication, rendering it unintelligible — data in such a state is, by nature, impossible to observe without altering it. The underlying science is fascinating, and I strongly recommend clicking on the link below to explore it. 
More on Secure Quantum Internet here [Harvard School of Engineering]. 13. Hyper-personalized medicine This is yet another technology which is burgeoning today, but has yet a long way to go. At its heart, hyper-personalised medicine is genetic medicine designed for a single patient, making it possible to treat diseases that were once incurable, or that were too rare to be worth curing. In 2019, a young girl named Mila Makovec, suffering from a rare and fatal genetic brain disease, was offered a tailor-made treatment (named Milasen — cute) to restore the function of the failed gene. Though she is not cured, her condition has stabilised, which is a big win. The development of such personalized drugs is made possible by rapid advances in sequencing and genetic editing : creating a complete human genome sequence has gone from costing $20 million or so in 2006 to less than $500 in 2020. However, creating a drug still requires major resources (a year of development in Mila’s case) and the mobilization of specialized teams. The question of cost therefore risks limiting the generalization of such treatments. More on Hyper-Personalised Medicine here [MIT Technology Review]. 14. Biotech / Cultured / Artificial tissues / bioprinting Bioprinting is the process of creating cellular structures using 3D printing techniques, where cell functions are retained throughout the printing process. Generally, 3D bioprinting uses a layer-by-layer printing method to deposit materials sometimes referred to as bio-inks to create natural biological tissue-like structures which are then used in the fields of medical engineering. A number of challenges pave the road ahead : we don’t know enough about the human body to implement these techniques safely, the price is very high, cells don’t live for very long when printed… the list goes on. And that’s without mentioning all the ethical questions such a technology raises. There are nevertheless so many potential use cases that it’s well worth solving these issues : beyond allowing us to do transplants on amputees, it could also help us create “meatless meat”, leading to a more humane and more ecological meat industry. More on Bioprinting here [Explaining the Future]. 15. Anti-aging drugs Several treatments intended to slow or reverse aging are currently in their testing phase. They block the aging of cells linked to age and reduce the inflammation responsible for the accumulation of toxic substances or degenerative pathologies, such as Alzheimer’s, cancer or cardiovascular diseases. In short, we’re not trying to “cure aging”, but instead seek to improve immune functions in older people. Many studies are ongoing. In June 2019, the American start-up Unity Biotechnology, for example, launched a knee arthritis drug test. The biotech Alkahest, on the other hand, promises to curb cognitive loss by injecting young blood components. Finally, researchers have been testing rapamycin, an immunosuppressant, as an anti-aging treatment for many, many years. The latter shows great promises, as it improves immune functions by as much as 20%. The barriers are many : beyond the scientific costs, political pressure will need to be applied to key players to change the rules of healthcare as we know it. And we know how THAT usually plays out… More on Anti-Aging Drugs here [University of Michigan]. 16. Miniature AI Because of AI’s complexity, the computing power required to train artificial intelligence algorithms and create breakthroughs doubles every 3.4 months. 
In addition, the computers dedicated to these programs require a gigantic consumption of energy. The digital giants are now working to miniaturize AI technology to make it accessible to the general public. Google Assistant and Siri thus integrate voice recognition systems holding onto a smartphone chip. AI is also used in digital cameras, capable of automatically retouching a photo by removing an annoying detail or improving the contrast, for example. Localized AI is better for privacy and would remove any latency in the transfer of information. Obviously, because this space is ever-evolving, it is very difficult to see beyond the next few years of evolution — all we know is that many technical difficulties are still in the way (mathematically, mechanically, spiritually…). More on Miniature AI here [MIT Technology Review]. 17. Hyperloop The fact that Elon Musk makes a second appearance on this list is a testament to his very specific brand of genius. His hyperloop project consists of an underground low-pressure-tube in which capsules transporting passengers and/or goods move. Beyond removing air from the tube, friction on the ground is also removed, as the capsules are lifted by an electromagnetic lift systems. The capsules are propelled by a magnetic field created by linear induction motors placed at regular intervals inside the tubes. Removing air and ground fiction would allow such a transportation method to reach insane speed : 1,102 km/h versus 885 km/h for planes at full speed (and the hyperloop can reach its top speed much faster than a plane). Other benefits include reduced pollution and noise. However, this technology would require the creation of extensive tunnels, sometimes under cities. The price is fairly prohibitive : $75M per kilometer built. Other issues include making a perfectly straight tunnel, removing ALL air from the tube, and reaching the passengers in case of accidents. This has led some transportation experts to claim that the hyperloop has no future. Regardless, the memes are hilarious. More on Hyperloop here [Tesla]. 18. Space mining / Asteroid mining Asteroids have huge mineral resources. The idea behind space mining is that we could catch these asteroids, extract the minerals (especially the rare ones!), and bring them back to earth to sell them. Planets are also considered relevant in this discussion. How hard could it be to make a lot of money from the final frontier ? Turns out, it’s fairly complex. Difficulties include the high cost of space exploration, difficulties finding the right asteroids, and the difficulty of landing on it when it’s moving at high speed (18,000 km/h on average). That’s a lot of difficulties. And that’s without discussing the potential trade and space wars that could result from two nations or companies having their eyes on the same space rock. So far, only the US and… Luxembourg (?) have passed laws in that regard. However, if resources on earth become scarily scarce, and recycling is not an option, it might just become worth it. More on Space Mining here [Financial Time]. 19. Orbital Solar Power An orbital solar power station, solar power satellite or space solar power plant would be an artificial satellite built in high orbit that would use microwave or laser power transmission to send solar energy to a very large antenna on Earth. That energy could then be used instead of conventional and polluting energy sources. 
The advantage of placing a solar power plant in orbit is that it would not be affected by day-night cycles, weather and seasons, due to its constant “view” of the Sun. This idea has been around since 1968, but we’ve still got a long way to go. Construction costs are very high, and the technology will not be able to compete with current energy sources unless a way is discovered to reduce the cost of launches (this is where Elon shines again). We could alternatively develop a space industry to build this type of power plant from materials taken from other planets or low gravity asteroids. Alternatively, we could just stop polluting the world, take one for future generation, and switch to less convenient / more expensive sources of energy… More on Orbital Solar Power here [Forbes]. 20. Teleportation of complex organic molecules Honestly, I don’t know enough science to explain this one properly. But I’ll do my best! Teleportation, or the science of disappearing in one place to immediately reappear in another, is something that’s been in the popular imagination for decades now. We’re discussing something a bit simpler here : quantum teleportation is able to share information near instantaneously from one point to another, not matter. We’re not talking about silly fish & chips recipe type of information, we’re talking about the make-up of entire molecules. In the early 2000s, scientists were able to transfer particles of light (with zero mass) over short distances. Further experiments in quantum entanglement led to successful teleportation of the first complete atom. This was followed by the first molecules, consisting of multiple atoms. Logically, then, we could expect the first complex organic molecules such as DNA and proteins to be teleported by 2050. I have no idea what to do with this information. More on Quantum teleportation here [Wikipedia]. Conclusion Technology has a tendency to hold a dark mirror to society, reflecting both what’s great and evil about its makers. It’s important to remember that technology is often value-neutral : it’s what we do with it day in, day out that defines whether or not we are dealing with the “next big thing”. Good luck out there.
https://medium.com/predict/20-more-technologies-that-will-change-your-life-by-2050-a28563a763a3
['Adrien Book']
2020-09-21 18:12:12.183000+00:00
['Next Big Thing', 'Predictions', 'Future', 'Technology', 'AI']
Build your first full-stack serverless app with Vue and AWS Amplify
Build flexible, scalable, and reliable apps with AWS Amplify In this tutorial, you will learn how to build a full-stack serverless app using Vue and AWS Amplify. You will create a new project and add a full authorisation flow using the authenticator component. This includes: Please let me know if you have any questions or want to learn more on the above at @gerardsans. Introduction to AWS Amplify Amplify makes developing, releasing and operating modern full-stack serverless apps easy and delightful. Mobile and frontend web developers are being supported throughout the app life cycle via an open source Amplify Framework (consisting of the Amplify libraries and Amplify CLI) and seamless integrations with AWS cloud services, and the AWS Amplify Console. Amplify libraries : in this article we will be using aws-amplify and @aws-amplify/ui-vue . : in this article we will be using and . Amplify CLI : command line tool for configuring and integrating cloud services. : command line tool for configuring and integrating cloud services. UI components : authenticator, photo picker, photo album and chat bot. : authenticator, photo picker, photo album and chat bot. Cloud services : authentication, storage, analytics, notifications, AWS Lambda functions, REST and GraphQL APIs, predictions, chat bots and extended reality (AR/VR). : authentication, storage, analytics, notifications, AWS Lambda functions, REST and GraphQL APIs, predictions, chat bots and extended reality (AR/VR). Offline-first support: Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for data reconciliation between offline and online scenarios. By using AWS Amplify, teams can focus on development while the Amplify team enforces best patterns and practices throughout the AWS Amplify stack. Amplify CLI The Amplify CLI provides a set of commands to help with repetitive tasks and automating cloud service setup and provision. Some commands will prompt questions and provide sensible defaults to assist you during its execution. These are some common tasks. Run: amplify init , to setup a new environment. Eg: dev, test, dist. , to setup a new environment. Eg: dev, test, dist. amplify push , to provision local resources to the cloud. , to provision local resources to the cloud. amplify status , to list local resources and their current status. The Amplify CLI uses AWS CloudFormation to manage service configuration and resource provisioning via templates. This a declarative and atomic approach to configuration. Once a template is executed, it will either fail or succeed. Setting up a new project with the Vue CLI To get started, create a new project using the Vue CLI. If you already have it, skip to the next step. If not, install it and create the app using: yarn global add @vue/cli vue create amplify-app Navigate to the new directory and check everything checks out before continuing cd amplify-app yarn serve Prerequisites Before going forward make sure you have gone through the instructions in our docs to sign up to your AWS Account and install and configure the Amplify CLI. Setting up your Amplify project AWS Amplify allows you to create different environments to define your preferences and settings. 
For any new project you need to run the command below and answer as follows: amplify init Enter a name for the project: amplify-app Enter a name for the environment: dev Choose your default editor: Visual Studio Code Please choose the type of app that you’re building javascript What javascript framework are you using vue Source Directory Path: src Distribution Directory Path: dist Build Command: npm run-script build Start Command: npm run-script serve Do you want to use an AWS profile? Yes Please choose the profile you want to use default At this point, the Amplify CLI has initialised a new project and a new folder: amplify. The files in this folder hold your project configuration. <amplify-app> |_ amplify |_ .config |_ #current-cloud-backend |_ backend team-provider-info.json Installing the AWS Amplify dependencies Install the required dependencies for AWS Amplify and Vue using: yarn add aws-amplify @aws-amplify/ui-vue Adding authentication AWS Amplify provides authentication via the auth category which gives us access to AWS Cognito. To add authentication use the following command: amplify add auth When prompted choose: Do you want to use default authentication and security configuration?: Default configuration How do you want users to be able to sign in when using your Cognito User Pool?: Username Do you want to configure advanced settings? No Pushing changes to the cloud By running the push command, the cloud resources will be provisioned and created in your AWS account. amplify push To quickly check your newly created Cognito User Pool you can run amplify status To access the AWS Cognito Console at any time, go to the dashboard at https://console.aws.amazon.com/cognito. Also be sure that your region is set correctly. Your resources have been created and you can start using them! Configuring the Vue application Reference the auto-generated aws-exports.js file that is now in your src folder. To configure the app, open main.ts and add the following code below the last import: import Vue from 'vue' import App from './App.vue' import Amplify from 'aws-amplify'; import '@aws-amplify/ui-vue'; import aws_exports from './aws-exports'; Amplify.configure(aws_exports); Vue.config.productionTip = false new Vue({ render: h => h(App), }).$mount('#app') Using the Authenticator Component AWS Amplify provides UI components that you can use in your app. Let’s add these components to the project In order to use the authenticator component add it to src/App.vue : <template> <div id="app"> <amplify-authenticator> <div> <h1>Hey, {{user.username}}!</h1> <amplify-sign-out></amplify-sign-out> </div> </amplify-authenticator> </div> </template> <script> import { AuthState, onAuthUIStateChange } from '@aws-amplify/ui-components' export default { name: 'app', data() { return { user: { }, } }, created() { // authentication state managament onAuthUIStateChange((state, user) => { // set current user and load data after login if (state === AuthState.SignedIn) { this.user = user; } }) } } </script> You can run the app and see that an authentication flow has been added in front of your app component. This flow gives users the ability to sign up and sign in. To view any users that were created, go back to the Cognito Dashboard at https://console.aws.amazon.com/cognito. Also be sure that your region is set correctly. Alternatively you can also use: amplify console auth Accessing User Data To access the user’s info using the Auth API. This will return a promise. 
import { Auth } from 'aws-amplify'; Auth.currentAuthenticatedUser().then(console.log) Publishing your app To deploy and host your app on AWS, we can use the hosting category. amplify add hosting Select the plugin module to execute: Amazon CloudFront and S3 Select the environment setup: DEV hosting bucket name YOURBUCKETNAME index doc for the website index.html error doc for the website index.html Now, everything is set up & we can publish it: amplify publish Cleaning up Services To wipe out all resources from your project and your AWS account, run: amplify delete Conclusion Congratulations! You successfully built your first full-stack serverless app using Vue and AWS Amplify. Thanks for following this tutorial. You have learnt how to provide an authentication flow, either with the authenticator component or via the service API, and how to use the Amplify CLI to execute common tasks, including adding and removing services.
https://gerard-sans.medium.com/build-your-first-full-stack-serverless-app-with-vue-and-aws-amplify-9ed7ef9e9926
['Gerard Sans']
2020-09-14 12:08:20.757000+00:00
['JavaScript', 'Aws Amplify', 'Vuejs', 'AWS']
EaaS: Everything-as-a-Service
Traditional Service The easiest way to begin defining what a service is, is to define what it is not. A service is not a product. A service, in the context of business and economics, is a transaction in which no physical goods are exchanged — it is intangible. The consumer does not receive anything tangible or theirs to own from the provider. This is what differentiates services from goods (i.e. products) — which are tangible items traded from producer to consumer during an exchange, where the consumer then owns the good. Products compared to Services Most people are familiar with goods versus services, and many businesses offer a combination of both. For example, many banks offer physical products such as credit cards, and also offer services like financial advice and planning. However, this is a very simple example, and in today’s world — the differentiation between products and services is becoming increasingly blurred. Let’s expand on the banking example by considering the following: A tech consulting firm helps to develop a digital product — a mobile banking application for iOS — for their client in the retail arm of ACME Bank. They’re providing a service (software development) and the result is a tangible product (the app). ACME’s customers pay annual fees to the bank, but in return get both products and services. The mobile app, a digital product, can be used for services — like booking an appointment for financial advice or ordering a product — like a new credit card. The credit card is a physical product, but it is associated with the bank’s lending service. This example is still fairly straightforward, but you can see how products and services are blended to create better experiences for both producers and consumers. And examples like this are happening everywhere — in both traditional and emerging businesses, creating a service ecosystem. Traditionally, the “behind the scenes” work of developing and enabling products and services was performed through many different methods, sometimes in silos: Business Process Engineering, Traditional Project Management, Software Development, ITIL, Lean Six Sigma, etc. Today, in the very customer-centric and employee-centric world, the details of what goes into creating these “experiences” — through products and services — is loosely known as service design. Service Design So we understand services in the traditional sense — a consumer is serviced by the provider. Services, like products, don’t come out of thin air. Like products, services too need to be designed in order to provide the most satisfaction to both the consumer and the provider (i.e. employees). Even for product producers, service design is valuable when considering the experience of everyone involved in that value chain. It bridges the gaps between customer experience design and product design by considering everything in between. Services often involve many moving parts, and service design uses the following monikers to describe service components: People, Processes, and Props. The 3 P’s are common terms used to describe the “building blocks” of Service Design. Service Design, as you may infer from the name, utilizes the same concepts and methodologies from similar disciplines such as User Experience Design, Human-Computer Interaction, as well as Research, Ethnography, and Anthropology. There are methods to follow to ensure that experience is actually at the centre of the design, as opposed to just thinking about customers, or revenue, or brand image, for example. 
High-Level Approach for Service Design The service design framework is an iterative process and is co-created with the individuals involved in the service delivery including customers. Each phase has inputs and outputs, for example, personas, journey maps, and service blueprints. To expand on the components of a service, let’s take a look at the high-level customer journey below — using a theoretical service for food ordering and pick-up. Very High-Level Journey: Food Order & Pick-up Service What are the building blocks (People, Processes, and Props) that make up the composition of this particular service? We know that it involves some technology, some people, and a physical place — but let’s dig a bit deeper. Building Blocks that enable the Journey above At a high-level, we can map the necessary components for each phase of the journey. Similar to the paradigm of software, services have front-end and back-end components. People, processes, and props that the customer sees, and ones they don’t. This is a very high-level view, and a service blueprint would normally contain many details of the front-end and back-end people, processes, and props involved various versions of journeys. Also, keeping in mind our “stacked dimension” model for describing the abstraction level of services — we can see that People, Props and Processes can loosely translate to things that occur in the Business, Application, and Technology/Physical layers: Business Actors like Employees, Digital Applications, Physical Structures, Hardware, Business Processes, etc. And that’s Service Design in a nutshell. Service Oriented Architecture (SOA) Around the turn of the millennium, a new systems design concept began to rise in popularity. “Service Oriented Architecture” became the craze in many organizations. Service orientation as an approach to architecture builds on similar paradigms as traditional services described above. Similar to how a large business would offer many traditional services, a large software system often has many different functions or purposes. Traditionally, before the emergence of SOA, large-scale systems were monolithic; built on a singular codebase, with shared components, and shared infrastructure, all tightly coupled. Service Orientation, on the other hand, is the concept of breaking up the app into components as “services”, each representing a specific business functionality. They are loosely coupled and working together to form the overall functionality of a large system. Picture monoliths as a large, 3-tier client-server systems: a presentation layer or client such as a graphical user interface, an application layer with lots of logic in the form of methods or functions, and a data access layer connecting the system to the underlying databases. High-Level Comparison of Architectural Styles To add more detail, “services” in an SOA model are discrete, self-contained functions, that communicate with other components over a common protocol. Integration of services with each other, with data stores, etc. is facilitated through service brokers or service buses. A presentation layer like a GUI can consume the service through this layer. Services — because of this — are accessible remotely and can be independently updated and maintained. For example, you could swap out one service for another in a larger application, without taking down the rest of the application. Talk about not putting all of your eggs in one basket! 
A primary goal of service orientation is to promote the re-use of functions, leading to efficiencies and simpler development of new applications which are modular and can consist of existing services. Despite the popularity and rise of SOA, there is not one sole industry standard for implementing them, and instead, there are principles, patterns, and approaches to the concept developed by many organizations. A few examples of commonalities in the service-oriented architecture approach include: Services act as producers and consumers . The underlying logic of producer services is abstracted from the consumer — meaning they only see the endpoint. and . The underlying logic of producer services is abstracted from the consumer — meaning they only see the endpoint. Services are both granular and composable . They are coarse-grained and represent a specific business purpose; not too granular as to reduce re-use and applicability elsewhere. Services can be combined to compose other services. and . They are coarse-grained and represent a specific business purpose; not too granular as to reduce re-use and applicability elsewhere. Services can be combined to compose other services. Services are discoverable. They produce metadata which can be accessed over the common communication protocol and then interpreted. If this all sounds familiar — or confusing — its because the concept of services and SOA are closely related to modern APIs, microservices, and integration standards. Microservices Similar to SOA, “microservices” is also an architectural style for building systems and applications that are loosely coupled. In the examples above, you’ll notice that the “functionality” in a SOA model is broken up into smaller parts. However, SOA models often share single or few underlying databases or legacy systems, and rely on a single broker layer to pass data around. To build on this concept, microservices go a step further. They are discrete instances where the entire stack of a certain function is isolated, and various microservices are combined to compose a distributed system. High-Level View of a Distributed System composed of Microservices In contrast with SOA, there is no “broker” or “service bus” layer to connect various services. Instead, the microservices communicate over a lightweight protocol (such as HTTP). Just as how service orientation provided many benefits in the early-mid 2000s, microservices also provide many benefits in the modern technology landscape. As teams shift to more Agile and DevOps practices, microservices provide a unique advantage in the fact that various smaller teams can independently build product features for a larger application. A key principle of microservices is that they perform only one thing, and perform that thing well. With cloud platforms and containers, teams are able to quickly stand up virtual or containerized infrastructure to build their independent services. This also provides benefits for operations teams: outages in a small service do not bring down the entire system. There are even not-so-tangible benefits, for example, teams having the ability to develop services in a technology stack (language, OS, database, etc.) that they’re comfortable with — independent of the larger system. However —as with all things — benefits come with trade-offs. With larger, heterogeneous solutions composed of many microservices with underlying components to meet each use case, comes greater complexity and different architectural concerns. 
For example, in traditional monolithic systems (and in some cases SOA implementations) — applications share a common database acting as a system of record. In the microservices model, each instance has its own database combining to create a system of record. This adds great flexibility for teams because a required schema change to one service’s data store doesn’t introduce changes for other teams. With that being said, you can likely imagine a scenario where services read or write data within the same domain, which can lead to challenges with data consistency and integrity. The growth in popularity of microservices and their complexity has led to many innovations in the cloud space, including service meshes or fabrics. We will leave this more advanced topic for another time. Software, Platforms, and Infrastructure delivered “as a Service” Getting back to our familiar, traditional definition of service, we have seen it being applied to the procurement of information technology. Traditionally, (as in pre-2000s) software was complicated to build and manage. And as mentioned above, it still is. But, for application teams building business software, a lot of the management of underlying technologies have been abstracted. Let’s walk through the abstraction layers one more time to illustrate how this all comes together. Abstraction model + Examples of components Think of each of these layers consisting of various components that were traditionally sold as products themselves. You would, at the very least, need to purchase a server which would power the applications you are developing and installing. You would need storage for any persistence of data. You would need hardware for communication with the network. And on it goes up the stack — operating systems, databases, etc. Moore’s Law and other advances in computing hardware have led way to extreme drops in the cost of computing hardware. And with cheaper hardware costs, companies are able to purchase large amounts of hardware and rent it as…you guessed it…a service. IaaS — Infrastructure as a Service Infrastructure as a service is a concept where providers — like Amazon, Microsoft, and Google — own large premises comprised of hardware and infrastructure. They rent virtualized access to these resources through pay-as-you-go subscription models. As mentioned earlier, this adds agility to start-ups and software teams who do not have the capital to purchase their physical resources. At the highest level, this can be compared with renting a home versus purchasing one. Using this analogy, if the start-up decides to “move” — or pivot, dissolve, sell, etc. — they can shut down their infrastructure without any long term costs or losses. The use of IaaS generally provides the same flexibility as if it was your infrastructure. This comes with the same overhead from a managerial and operational perspective — minus managing the premises, power, security, etc. For teams that need a more managed service, there are various “platforms” offered as a service. PaaS — Platform as a Service Platform-as-a-service builds on the IaaS layer, and adds abstractions which can be leveraged by development teams. PaaS hides away the complexities of dealing with operating systems, middlewares, and runtimes — and allows developers and operators to focus on writing and supporting the application, rather than infrastructure. To go further, PaaS can be run on top of any types of hardware. 
For companies not yet at public-cloud maturity, running PaaS on their own on-premise hardware is still a viable option: a platform services team manages the core PaaS infrastructure while development teams focus on building and deploying. The big cloud players offer PaaS solutions on top of their IaaS — Amazon Elastic Beanstalk, Google App Engine, Azure Apps. There are also other offerings like Heroku, Cloud Foundry, and OpenShift, which can run in environments not tied to one provider. The spectrum of responsibility: Purple = fully managed as a service. As applications grow, the value proposition of PaaS solutions begins to wear off, because scale often leads to unique requirements. Those unique requirements, in turn, often lead to infrastructure-level challenges that cannot be addressed through the abstraction of the application platform. SaaS — Software as a Service Finally, we finish off with Software-as-a-Service. SaaS refers to applications which are fully managed by the provider. This means that organizations can purchase software licenses and immediately get to work without any development, management of infrastructure, or installation — it just works over the internet. However, large SaaS applications are not that simple. Referencing the diagram above, you'll notice that there is still a bit of green in the SaaS column. Often, customers will need to configure the system to meet their business requirements, add users, add some security configurations, and so on. With that said, companies can begin realizing value very quickly, often without the involvement of IT.
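Looping back to the IaaS layer for a moment, one way to see what "renting infrastructure" means in practice is that IaaS is consumed through an API call rather than a purchase order. The sketch below uses boto3, the AWS SDK for Python, to request and then release a virtual machine; the AMI ID, region, and instance type are placeholders for illustration, not recommendations.

import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask the IaaS provider for one small virtual machine, paid for only while it runs.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")

# Terminate when done; billing stops once the instance shuts down.
# (Here we terminate immediately just to illustrate the API.)
ec2.terminate_instances(InstanceIds=[instance_id])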
https://medium.com/swlh/eaas-everything-as-a-service-5c12484b0b4e
['Ryan S.']
2019-06-25 15:33:04.354000+00:00
['Economics', 'Technology', 'Cloud Computing', 'Business', 'Design']
Quickly
Quickly Read this quickly, faster, faster… Photo by Marc-Olivier Jodoin on Unsplash Read this quickly Faster Faster Hurry up Get in Sit down Don’t think Just do Early bird Don’t break Don’t stop Keep going Faster Get it right Make room Wind it up Never slow Never stop Outta time Outta this Outta that Not enough Need more Give ’em more In a hurry Faster still Never still Can’t breathe Can’t move Can’t decide CRASH! …breathe… Slow It Down. Feel your life. Look around. Capture it with an open palm. And let it wash your spirit.
https://medium.com/passive-asset/quickly-22a105bbeb49
['Kelly Neuer']
2020-12-14 21:20:22.798000+00:00
['Self-awareness', 'Meditation', 'Poetry On Medium', 'Culture', 'Society']
Serverless ETL using Lambda and SQS
AWS recently introduced a new feature that allows you to automatically pull messages from a SQS queue and have them processed by a Lambda function. I recently started experimenting with this feature to do ETL (“Extract, Transform, Load”) on a S3 bucket. I was curious to see how fast, and at what cost I could process the data in my bucket. Let’s see how it went! Note: all the code necessary to follow along can be found at https://github.com/PokaInc/lambda-sqs-etl The goal Our objective here is to load JSON data from a S3 bucket (the “source” bucket), flatten the JSON and store it in another bucket (the “destination” bucket). “Flattening” (sometimes called “relationalizing”) will transform the following JSON object: { "a": 1, "b": { "c": 2, "d": 3, "e": { "f": 4 } } } into { "a": 1, "b.c": 2, "b.d": 3, "b.e.f": 4 } Flattening JSON objects like this makes it easier, for example, to store the resulting data in Redshift or rewrite the JSON files to CSV format. Now, here’s a look at the source bucket and the data we have to flatten. Getting to know the data Every file in the source bucket is a collection of un-flattened JSON objects:
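As a quick aside, here is a minimal Python sketch of the flattening step described above. It is an illustrative helper written for this explanation, not the code from the linked repository.

import json

def flatten(obj, parent_key=""):
    """Recursively flatten nested dictionaries, joining keys with dots."""
    flat = {}
    for key, value in obj.items():
        new_key = f"{parent_key}.{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key))
        else:
            flat[new_key] = value
    return flat

nested = {"a": 1, "b": {"c": 2, "d": 3, "e": {"f": 4}}}
print(json.dumps(flatten(nested)))
# prints: {"a": 1, "b.c": 2, "b.d": 3, "b.e.f": 4}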
https://medium.com/poka-techblog/serverless-etl-using-lambda-and-sqs-d8b4de1d1c43
['Simon-Pierre Gingras']
2018-08-13 11:51:01.250000+00:00
['Python', 'S3', 'AWS', 'AWS Lambda', 'Serverless']
Streaming Real-time data to AWS Elasticsearch using Kinesis Firehose
Explore how we can deliver real-time streaming data to the Elasticsearch service using AWS Kinesis Firehose. Elasticsearch is an open-source solution used by many companies around the world for analytics. By definition, Elasticsearch is an open-source, RESTful, distributed, indexed search and analytics engine. The first part of the definition, open-source, means it is community-driven and free to use. Next, it is RESTful, which means all communication and configuration can be done through simple REST HTTP API calls; Elasticsearch exposes a feature-rich REST API for clients to consume. The next two parts, distributed and indexed, are where Elasticsearch differs from the ad-hoc search solutions we tend to build on top of our existing databases. Elasticsearch is distributed, which means both its functionality and the data it stores are spread across multiple resources, making it very efficient while also providing high availability. And it is indexed, which significantly improves data retrieval: Elasticsearch uses Apache Lucene for indexing, an extremely fast indexing engine that makes search and analytics extremely fast. There are many other Elasticsearch concepts worth knowing, but since this article is about streaming data into Elasticsearch rather than how to use Elasticsearch itself, I will not cover them here. If you are a complete beginner at Elasticsearch, I recommend learning at least the basic concepts before continuing through this article; there are plenty of valuable posts and videos on the internet for beginners, which I am sure you will be able to find. For this article, we are going to use the AWS Elasticsearch service because it is fully managed by AWS, so we do not need to worry about deployments, security, or building a fail-proof, highly available cluster ourselves. AWS Kinesis Amazon Kinesis is an AWS service for processing data in real-time. In our scenario, data arrives at Kinesis as a stream, and Kinesis performs whatever processing we require on it. There are other stream-processing solutions in the community as well; Apache Kafka and Apache Spark are two examples in wide use today. The main reason I chose Kinesis is that our Elasticsearch cluster will also be created as an AWS service, and because the servers and failures are fully managed by AWS, we can focus on the outputs rather than on managing infrastructure. Before discussing Kinesis, let us first try to understand what data streams are and look at some examples. Data Streams Data streams are data that are generated and sent continuously from a data source; in the streaming world, that source is called a data producer. The main feature that distinguishes a data stream from other data sources is that it is generated continuously, like a river. Rather than receiving data in batches, in streaming we expect data to keep arriving as a stream. Data streams mostly surface in the Big Data world.
We might think that if we inspect the data at any given moment it looks small, so why are we talking about Big Data for data streams? But because data is being sent continuously, all the time, it adds up to a large quantity of data by the end of the day. Below are some examples of data streams found in the real world. Log files generated by a software application Financial stock market data User interaction data from a web application Device and sensor output from any kind of IoT device With these data streams we can do one of two things: perform analytics in real-time and provide analytical output, or store the data and run analytics on top of it later. The first scenario is used in many solutions where real-time analytics are required, like identifying traffic congestion or detecting patterns as they happen. In the second scenario, we store the data in an analytical platform and run analytics on top of it afterwards. Elasticsearch is exactly that kind of analytical platform. But how can we take the data stream we are getting from our data producers and deliver it to Elasticsearch? This is where Kinesis comes into our use case. As mentioned earlier, Kinesis is a service used to process data streams. So what kinds of solutions does AWS Kinesis provide to handle these data streams? At the time I am writing this article, AWS Kinesis provides four: Kinesis Data Streams — used to collect and process large streams of data records in real-time Kinesis Data Firehose — used to deliver real-time streaming data to destinations such as Amazon S3, Redshift, Elasticsearch, etc. Kinesis Data Analytics — used to process and analyze streaming data using standard SQL Kinesis Video Streams — a fully managed service used to stream live video from devices Out of these four solutions, the one that fits our use case of loading data into Elasticsearch is Kinesis Data Firehose. Amazon Kinesis Data Firehose Kinesis Data Firehose is one of the four solutions provided by the AWS Kinesis service. Like the rest of Kinesis, it is fully managed by AWS, which means we do not need to worry about deployments, availability, or security at every level. Kinesis Firehose is used to deliver real-time streaming data to pre-defined destinations. At the time of writing, AWS supports four destinations. Amazon S3 — easy-to-use object storage Amazon Redshift — a petabyte-scale data warehouse Amazon Elasticsearch Service — an open-source search and analytics engine Splunk — an operational intelligence tool for analyzing machine-generated data As the title suggests, we are going to look at how to load data into the Elasticsearch destination. Before designing and implementing our solution, let us first look at some of the basic concepts of Kinesis Firehose.
Kinesis Data Firehose delivery stream — the main component of the firehose, which is the main delivery stream which will be sent to our destinations the main component of the firehose, which is the main delivery stream which will be sent to our destinations Data producer — the entity which sends records of data to Kinesis Data Firehose. These will be the main source for our data streams — the entity which sends records of data to Kinesis Data Firehose. These will be the main source for our data streams Record — the data that our data producer sends to the Kinesis Firehose delivery stream. In s data stream these records will be sent continuously. Usually, these records will be very small with the max value as 1000KB — the data that our data producer sends to the Kinesis Firehose delivery stream. In s data stream these records will be sent continuously. Usually, these records will be very small with the max value as 1000KB Buffer size and buffer interval — the configurations which determine how much buffering is needed before delivering them to the destinations. In order to process data stream data producers should continuously send data to our Firehose. Thus now the question should be how do these data producers send data to our Firehose. There are several ways for data producers to send data to our Firehose. Kinesis Data Firehose PUT APIs — PutRecord() or PutRecordBatch() API to send source records to the delivery stream. Amazon Kinesis Agent — Kinesis Agent is a stand-alone Java software application that offers an easy way to collect and send source records. In order to load data in the Kinesis Agent, this agent should be available in our data producer system. AWS IoT — Create AWS IoT rules that send data from MQTT messages. CloudWatch Logs — If we are going to use cloudwatch logs we can use subscription filters to deliver a real-time stream of log events. CloudWatch Events — Create rules to indicate which events are of interest to your application and what automated action to take when a rule matches an event. In our solution, we are going to use the demo data stream provided by Kinesis Firehose. In demo data stream data will be sent in the following format. {"ticker_symbol":"QXZ", "sector":"HEALTHCARE", "change":-0.05, "price":84.51} Since now we have a basic understanding about the components and services we are going to use let us now begin to design our solution. Our scenario is to deliver data stream which are going to be created by data producers to our analytical platform, which is Elasticsearch service. Before implementing the solution we might be asking these following questions based on our requirements. Does the data need to be transformed before stored into Elasticsearch? If so what kind of transformation is required? Do we need to store raw data even after transformation for future purposes or as a backup? Let’s assume for our scenario answers for all three questions above is Yes. Then we need functionalities to transform our data into a different format and also save the raw data into some location. Fortunately, Kinesis Firehose already provides these functionalities by using AWS Lambda functions. According to the diagram above before sending data to the Elasticsearch service a Lambda function will transform our data according to our requirements. In the meantime, our raw data before transformation will be sent to an AWS S3 bucket. In there using lifecycle rules we can transfer them to either AWS Galcier or other S3 categories according to our requirements. 
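As a side note on the "Kinesis Data Firehose PUT APIs" option listed above, a data producer can be as simple as a few lines of boto3. The sketch below pushes one record shaped like the demo data into a delivery stream; the stream name is a placeholder, not part of this walkthrough.

import json
import boto3  # AWS SDK for Python

firehose = boto3.client("firehose")

record = {"ticker_symbol": "QXZ", "sector": "HEALTHCARE", "change": -0.05, "price": 84.51}

# PutRecord sends a single record; PutRecordBatch sends a batch of records in one call.
firehose.put_record(
    DeliveryStreamName="my-demo-delivery-stream",  # placeholder name
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)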
Now since we have the design ready let’s go ahead and try to implement our designed solution in AWS. Creating Elasticsearch service We are going to create our Elasticsearch service only for development and learning purposes so the configurations that we will select for our domain will not be the most secure configurations and should not be followed for an enterprise solution. First , Go to AWS Elasticservice service and create a new domain. Since we are using this domain for only learning select Deployment type as Development and testing and click next. Next, provide a domain name and after that select the instance type as t2.small.elasticsearch because that is the only instance type that is available for the free tier if you are already using a free tier account. You can leave all the other options as default values without any change. On the next screen we will be asked to configure security for our elasticsearch domain. The recommended is to use VPC access where our domain will be in a private network with only instances within our VPC network that will have access. But in our case make it as Public access. Below on the same page you will see where we are setting access policy for our domain. Since this is only for learning add Allow open access to the domain which will make any IP address have access. (we are using this only for learning purpose) Here we can specify either an IAM role or specific AWS user accounts as well. Now that is it for creating our testing elasticsearch domain. After a couple of minutes our elasticsearch domain will be up and running. So creating our Elasticsearch service is done now. Creating Kinesis Data Firehose Go to AWS Kinesis service and select Kinesis Firehose and create a delivery stream. In the next screen give a stream name and select the source as Direct PUT or other sources. The other option here is to select the source as a Kinesis Data Stream. In our scenario, we do not have a Kinesis data stream and we send the data to the Firehose directly from our producer. We have the option to enable server-side encryption for data we stream. If we want to encryption for data enable it and provide the necessary encryption key. But for this article let’s keep server-side encryption as disabled. Next we will be asked whether we need to transform the source records. Here enable data transformation. This will prompt us to give an Lambda function which will handle the data transformation. Since we have not created a Lambda function yet select Create new. This will open a box where AWS will prompt us with already available lambda blueprints. Select the first option General Kinesis Data Firehose Processing for our use case since we are going to do a custom function. This will guide us to the Lambda function creation page. Before creating the Lambda function first create an IAM role for our lambda function with permission for AWS Kinesis services. Here we are providing full access to Firehose, but we only need permission for “firehose:PutRecordBatch” . But since this is only for learning purposes let’s give full access. Make sure to add AWSLambdaExecute permission as well which will give execution permission for our function as well as log permission to create logs to cloudwatch service. After creating our role let’s again go to our Lambda function creation page. There give our function a name and select the IAM role we created earlier. As mentioned earlier our data stream will have the following format. 
{"ticker_symbol":"QXZ", "sector":"HEALTHCARE", "change":-0.05, "price":84.51} So for this article's purpose let’s assume that we do not need change property to be stored in our Elasticsearch service. Also let’s rename the ticker_sysmbol property to ticker_id to be stored as in Elasticsearch. Below is the lambda function that I have created using node.js in order to fulfill our transformation. As you know you can use any supported language in Lambda to create your function. exports.handler = async (event, context) => { /* Process the list of records and transform them */ const output = event.records.map((record) => { const payload =record.data; const resultPayLoad = { ticker_id : payload.ticker_symbol, sector : payload.sector, price : payload.price, }; return{ recordId: record.recordId, result: 'Ok', data: (resultPayLoad), } }); console.log(`Processing completed. Successful records ${output.length}.`); return { records: output }; }; As mentioned above we are doing a simple transformation of renaming a property and removing a unwanted property. Go ahead and create this Lambda function with the above code. Now go back to our Firehose creation page and select our created Lambda function. This will give us a warning saying that our Lambda function timeout is only 3 seconds and increase it at least for 1 minute. This warning is given because we are dealing with streaming data it may take some time to execute and complete this function and the default 3 seconds timeout will not be enough. We can do that by going to our function’s basic settings. Lambda supports up to 5 minutes of timeout. After that, we have also have an option to convert record format to either Apache Parquet format for Apache ORC format rather than using a JSON format. These converting is done using AWS Glue service by defining our schemas there. But in our scenario, we do not need that functionality. Next, we will be asked for the destination of our data stream. Select Elasticsearch from there and select our domain we created earlier and with the index that we are going to store our data with. Next we have the option of taking backups of our raw data. Here we can either select to backup only failed records or all records for future purposes. Here we need to select a S3 bucket, if you don’t have it already created we can create a new bucket using Create New. We can also append a prefix to the data stream that will be save on the S3 bucket so we can easily categorize our saved data inside the S3 bucket. Kinesis Data Firehose automatically appends the “YYYY/MM/DD/HH/” UTC prefix to delivered S3 files. Apart from that, we can add a custom prefix as well according to our requirements. The next page will display several configurations that we can modify. First is Elasticsearch buffer conditions. Firehose will buffer data before sending them to store in Elasticsearch, we can determine this buffer using two metrics, buffer size and buffer interval. So when either of these conditions fulfilled Kinesis Firehose will deliver our data to Elasticsearch. Since we are going to use Lamda functions to transform our data stream we need to comply with the AWS Lambda function payload limit, which is 6 MB. Next configuration is S3 compression and encryption. These configurations are related to the backup S3 bucket that we are going to use for raw data. Here we can either compress data in order to make them smaller or make them secure by encrypting data before strong in S3. 
Next, error logging is enabled by default, which will log errors to CloudWatch. Lastly, we need to create a new IAM role for our Firehose service; go ahead and create one. With that, all the configuration is done, so go ahead and create the stream. It will take a couple of seconds for AWS to create our Firehose stream, and its status will change to Active once it is fully created. Now, to test that the system works according to our design, we can go to the stream and choose Test with demo data. This sends demo test data continuously to our Kinesis Firehose. First go to the Elasticsearch Kibana dashboard and verify that the data has been loaded with the transformation we applied, then confirm that all the raw data is available in our S3 bucket as well. At this point we have completed the configuration and verified that it works using the demo stream. That is everything I was hoping to cover in this article, but both AWS Kinesis and Elasticsearch offer many more capabilities, so make sure to explore those services further. Thank you for reading this article. :)
https://medium.com/swlh/streaming-real-time-data-to-aws-elasticsearch-using-kinesis-firehose-74626d0d84f1
['Janitha Tennakoon']
2020-06-03 10:16:44.374000+00:00
['Big Data', 'Kinesis', 'AWS', 'Elasticsearch', 'Data']
Basics of React.Js
Once you know JavaScript, you can immediately cause changes in your browser by manipulating the DOM. You can use JavaScript or jQuery to add or remove elements or change values when things are clicked and create an interactive website. However, DOM manipulation requires some excessive steps. If you want to change a displayed number on a website with DOM manipulation, you need to manually reassign the value or text of that element to the new one. If you have a lot of elements that update on changing a single thing, then you need to manually reassign for all of them. But this is where React.js comes in! Creating A React App Create a react app. npx create-react-app appName cd appName npm start A new page should open up with a spinning atom on it. This means your app is running. Now open it up in the text editor of your choice. Open the src folder and the App.js file. This is the file mainly responsible for showing you the spinning atom page. Inside of the header the img tag is responsible for the atom, the p tag is responsible for the line of white text, and the ‘a’ tag is responsible for the blue link at the bottom. Now delete the header. Your page should now only have the following code: import React from 'react'; import logo from './logo.svg'; import './App.css'; function App() { return ( <div className="App"> </div> ); } export default App; You will see a white page. Now let’s get started. As you may have guessed from the sample template given, the display is very similar to basic HTML. The main difference is that everything is wrapped in divs. Add some paragraph’s inside the div. <div className="App"> <p>Paragraph Line</p> <p>Paragraph Line</p> <p>Paragraph Line</p> </div> Now on the page, you can see three-paragraph lines. Instead of putting in the same line three times however, you can create a component that says the line, then just re-use that component. Functional Component Create a new file called Paragraph.js and enter the following text inside. import React from "react"; const Paragraph = (props) => { return( <div> <p>Paragraph Line</p> </div> ) } export default Paragraph; This is called a functional component. It is just a function used for displaying information. It has no state or time hook (we will learn about these in a second). In our original App.js file, import the Paragraph from the Paragraph.js file. import Paragraph from './paragraph.js'; Instead, change the div to show: return ( <div className="App"> <Paragraph /> <Paragraph /> <Paragraph /> </div> ); Now you are displaying the Paragraph component three times, and it looks exactly the same as earlier. The other type of component is called a class component. It has a life cycle meaning you can make it do things at certain times, such as when the component is first being displayed or about to be removed, and a state. Class Component Create a new file called Number.js. import React from "react"; export default class Number extends React.Component { render() { return( <div> <p>1</p> <p>2</p> <p>3</p> </div> ) } } Add our new number class component to our app.js file. return ( <div className="App"> <Paragraph /> <Paragraph /> <Paragraph /> <Number /> </div> ); State To give it a state, add the following to it: state = { number: 0 } This gives the state a number value that is currently set to 0. Add a button that when clicked tells you information on the state. 
import React from "react"; export default class Number extends React.Component { state = { number: 0 } printNumber = () => { console.log(this.state) } render() { return( <div> <p>1</p> <p>2</p> <p>3</p> <button onClick={this.printNumber()}>Print State</button> </div> ) } } Firstly, you will see that your state is printed, but you can also make it print a specific variable in the state, in this case specifically the number variable. Change the print number function to: printNumber = () => { console.log(this.state.number) } Now it will print the number inside of the state. But you may now notice that the number is printed automatically without you clicking the button and that clicking the button actually doesn’t print anything in addition at all. This is because of the way the function is called inside of our button. <button onClick={this.printNumber()}>Print State</button> Right now, the function is being called the second the button is made. We want to make the button call the function on click. <button onClick={() => {this.printNumber()}}>Print State</button> The function now does just that. You can also make it so the something is passed into the function when the button is clicked, such as button information if you want a few buttons to run the same function but with different values based on the button (such as a calculator) or when entering information inside of a form. Change the button to: <button id="button" onClick={(event) => {this.printNumber(event)}}>Print State</button> Change the function to: printNumber = (event) => { console.log(this.state.number) console.log(event) console.log(event.target) console.log(event.target.id) } Now when pressed, it will print state.number, the event that called it (a click), as well as the specific thing that called the even (the button). Like in state above, you can also access the specific attribute you want, such as id or class name, by doing event.target.attribute. You can change state by the setState function. this.setState({ number: this.state.number + 1 }) This will set the number inside of state to be one more than whatever it currently is. Wrap it inside a function: addOne = () => { this.setState({ number: this.state.number + 1 }) console.log(this.state.number) } And add a new button: <button id="button" onClick={(event) => {this.addOne(event)}}>Add Number</button> If you click it multiple times, you see that it prints a number each time and that each number increases each time. However, you will notice that each time it prints the old number first, then add one and that this process is repeated each time. It looks like the function is printing first then adding to the number inside of state, but it should be doing the opposite! This is because setState takes a small amount of time, while console logging is instantaneous. You can, however, set it so that the print happens explicitly after state is finished changing. Change the addOne function to: addOne = () => { this.setState({ number: this.state.number + 1 }, () => { console.log(this.state.number) }) } Now it works as you would expect, the number is increased by 1 and the result is printed. Lifecycle You can make a component do things like run functions or change state at specific times. Add a new function that sets the state to something weird. 
funkyState = () => { this.setState({ number: 999 }) } We can make this function run as soon as the component is shown: componentDidMount(){ this.funkyState() } Now if you refresh the page and hit the Print State button, you will see that the number inside of state was instantly set to 999. Forms Create a new file called Form.js. import React from "react"; export default class Form extends React.Component { render() { return( <div> <p>Hello, Name!</p> <form> Name: <input type="text" value='' /> <input type="submit" value="Submit" /> </form> </div> ) } } Add it to our app.js file. return ( <div className="App"> <Paragraph /> <Paragraph /> <Paragraph /> <Number /> <Form /> </div> ); This is a form where you can write your name, and submit it. We want to make it so that the “Hello, Name!” line shows the name that you submit in the name bar. The value of the name bar is set to an empty string so that it doesn’t display a value by default. However, this means that no matter what you type in, nothing will display. This is where state comes in. Set up state. state = { name: '', submittedName: '' } You can change the input line so that whenever it changes, it updates the name-value inside of state, and it updates its own value to reflect the name inside of state. Add a function to handle change and update the name input line. handleChange = (event) => { this.setState({ name: event.target.value }) } Name: <input type="text" value={this.state.name} onChange={event => {this.handleChange(event)}} /> This makes it so whenever there is a change in the name box (such as when you are typing) it will call the handleChange function and pass in the typing changes you made, updating state.name. And since the value is now set to reflect state.name, whatever you type is also reflected inside of the bar. Change the hello line above to: <p>Hello, {this.state.submittedName}!</p> Right now the submittedName is blank so nothing will show up. Add the following submit function and changes to the form. submit = (event) => { this.setState({ submittedName: this.state.name }) } <form onSubmit={(event) => {this.submit(event)}}> Now enter a name hit submit. You see that your submitted name will be displayed for a fraction of a second before disappearing. This is because the page refreshes on a submit. You need to add the following line to the submit function: event.preventDefault() This stops the normal auto-refresh. submit = (event) => { this.setState({ submittedName: this.state.name }) event.preventDefault() } Now enter a name and try it again, and it should work as you would expect! Props Information can be passed from one component to another in the form of props. Create two new files called ‘sample1.js’ and ‘sample2.js’. Inside of sample1 enter the following code: import React from "react"; const Sample1 = (props) => { return( <div> <p>Sample 1</p> </div> ) } export default Sample1; This will be a functional component. Import sample1 and add it to the bottom of the div: <Sample1 /> The page will now display the line ‘Sample 1’. If you want to pass props to it from the form component, you do by doing: <Sample1 name={this.state.name} submittedName={this.state.submittedName}/> When you use the imported component, you also add: variableName={variableToPass} The variable name can be anything, although it is best to make it consistent. To access props inside a functional component do props.variableName, where variableName is the name given when passing it in. 
Change the lines inside of sample1 to: import React from "react"; const Sample1 = (props) => { return( <div> <p>props.name</p> <p>props.submittedName</p> <p>{props.name}</p> <p>{props.submittedName}</p> </div> ) } export default Sample1; You will see that the first two lines just say the words “props.name” and “props.submittedName”. But the third and fourth lines are blank. If you start typing into the bar, the third line will start reflecting whatever you are typing. This is because it shows whatever is inside the name variable that was passed down, which is the name inside of state of the first component. The second line shows submittedName, so it will only show a. value after you hit enter and actually set it to something. Notice that the lines change by themselves without you having to manually assign new values or anything to them, unlike if you were using regular HTML/DOM manipulation. Through the use of props and state, things automatically change to match the state whenever there are changes to the state. To access props inside a class component is slightly different, you use this.props.variableName, where variableName is the name given when passing it in. Change the lines inside of sample2 to: import React from "react"; export default class Sample2 extends React.Component { render() { return( <div> <p>this.props.name</p> <p>this.props.submittedName</p> <p>{this.props.name}</p> <p>{this.props.submittedName}</p> </div> ) } } Everything else is basically the same. CSS Inside your folder, you will see that you already came with an ‘App.css’ file. Inside your App.js file you will see that the file is already imported at the top via the line: import './App.css'; This means any changes to that CSS file will be applied globally. CSS is pretty standard, with the exception that the ‘class’ attribute is instead called ‘className’. If you go back to ‘App.js’ you will notice that the original div was given the className “App”. If you change is to ‘class1’ and add the following inside of the CSS file: .class1 { color: blue } You will see that now everything is no longer centered and that all of the text is blue. This is because the earlier formatting was for members of the ‘app’ class. Miscellaneous If you want to run a function or get the value of a variable inside of a normal line similar to the way you would normally use ${} for a string, here you use curly braces. Add the following function to the top: numberFunction = () => { return (9 + 9) } Open up ‘Number.js’ again add the following two lines to the bottom: <p>5 + 5</p> <p>{5 + 5}</p> <p>{this.numberFunction()}</p> The first line will just display the line ‘5 + 5’ but the second will actually display 10. The last line similar displays 18, which is the return value of the numberFunction. Congratulations, you now know the basics of React.js! A useful addition to React is Redux, which you can find the basics of in my other article on Redux: https://medium.com/future-vision/redux-in-react-7f1776f2443d. Have fun making frontend applications with your new-found knowledge!
https://medium.com/swlh/basics-of-react-js-92ba04117bc
['Nicky Liu']
2020-01-26 00:11:46.067000+00:00
['Software Engineering', 'Programming', 'Software Development', 'React', 'JavaScript']
AWS Lambda Event Validation in Python — Now with PowerTools
How can you improve on the already excellent Pydantic validation? Recently, I had the pleasure of contributing a new parser utility to an amazing and relatively new project on GitHub: AWS Lambda Powertools. This repo, which started out Python-oriented (but now supports other languages such as Java, with more to follow), provides an easy-to-use solution for Lambda logging, tracing (with CloudWatch metrics), SSM utilities, and now validation and advanced parsing of incoming AWS Lambda events. The new parser utility will help you achieve next-level validation. It is based on Pydantic and is marked as an optional utility, so to install it you need to specify it like this: pip install aws-lambda-powertools[pydantic] For the non-parsing uses of the library, such as the logger and metrics (and more), see this excellent blog post. Validation with Pydantic Currently, if you followed the guidelines in my previous blog post, you had to write (and now maintain) a handful of AWS Lambda event schemas, such as Eventbridge, DynamoDB streams, Step Functions, and more. Basically, one schema for each AWS event that a Lambda receives. You worked out how to write these Pydantic schemas either by reading the AWS documentation or by printing the event JSON. It's an important process, but it can get tedious quickly. Let's take a look at an Eventbridge event: The detail field is a dictionary which describes the user schema, the actual message that we would like to extract, validate, and parse. In order to achieve that, the following Pydantic schema can be used: The lambda handler which parses the event will look like this: It works. You have to write a little bit of code, but it works. The problem with this solution is that you need to maintain an AWS event schema which might be changed by AWS at some point. When that happens, the validation schema fails and raises a validation exception at runtime. Not ideal, to say the least. However, it can get better, much better!
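A sketch of what such a schema and handler can look like is shown below, assuming a hypothetical user schema inside the detail field. The order/amount fields and the handler's return value are illustrative assumptions for this explanation, not the article's exact code; it uses plain Pydantic (v1-style parse_obj) without the Powertools parser.

from typing import List
from pydantic import BaseModel, Field

class MyMessage(BaseModel):
    """Hypothetical user schema carried in the Eventbridge 'detail' field."""
    order_id: str
    amount: float

class EventBridgeEvent(BaseModel):
    """The Eventbridge envelope; 'detail' holds the message we actually care about."""
    version: str
    id: str
    detail_type: str = Field(alias="detail-type")
    source: str
    account: str
    time: str
    region: str
    resources: List[str]
    detail: MyMessage

def handler(event, context):
    parsed = EventBridgeEvent.parse_obj(event)  # raises ValidationError on bad input
    message = parsed.detail  # already validated and typed
    return {"order_id": message.order_id, "amount": message.amount}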
https://medium.com/cyberark-engineering/aws-lambda-event-validation-in-python-now-with-powertools-431852ac7caa
['Ran Isenberg']
2020-11-18 08:41:52.554000+00:00
['Validation', 'Python', 'AWS Lambda', 'Software Development', 'AWS']
We May Have Misunderstood Myelin
Myelin is often referred to as an insulator because it drastically increases the speed of action potential propagation; an axon ensheathed in myelin can send signals up to 300x faster than its unmyelinated counterpart. This enhanced conduction speed allows rapid communication to occur between distal parts of a spread-out body plan. For example, a nerve impulse travels a great distance — from your foot to your brain and back to your foot — in a tiny fraction of a second. And this is a highly valuable feature of a living body — your chances of survival in dangerous situations depend largely on your reaction time. For this reason, myelin is a feature found in every vertebrate and may be an evolutionary requirement for the existence of vertebrates in general. Myelin: More Than A Golden Insulator Referring to myelin as an “insulating sheath” evokes an inaccurate equivalence between the brain and a modern electronic device, as if myelin were a piece of non-conductive material that wraps around an electrical wire. But myelin does so much more than function as an inert piece of insulation. As I mentioned before, the brain’s glial cells play the vital role of providing nutrients to power-hungry neurons. This is extremely important for oligodendrocytes because the insulating nature of myelin also isolates axons from the extracellular space. Whereas many types of cells can simply collect their own food from the environment, myelinated neurons have no access to the outside world. Oligodendrocytes are in the perfect position to spoon-feed neurons that constantly need to meet the energy demands of their firing axons. Oligodendrocytes traffic RNA across microtubules (which are lit up in this GIF) to maintain myelin through local translation. They also transport energy substrates, like lactate, through myelin into actively firing axons. Image Credit: @Meng2fu The importance of this relationship between oligodendrocytes and neurons cannot be overstated — proper myelination is absolutely necessary for our health. Evidence has linked abnormalities in myelin structure and function to a wide range of diseases. For example, Multiple sclerosis is caused when the immune system selectively attacks and degrades myelin Cognitive disturbances in schizophrenia seem to be related to oligodendrocyte and myelin dysfunction Post-mortem observations have shown that people with major depression exhibit changes in myelinated regions of the brain Children with autism show an antibody response to myelin, which is similar to the etiology of multiple sclerosis Veterans with post-traumatic stress disorder (PTSD) actually show increased myelination in parts of the brain Clearly, oligodendrocytes need to maintain a delicate balance of myelination to ensure neuronal health and brain function. Even beyond disease and dysfunction, myelination is a highly dynamic process that is constantly changing in response to the growth and behavior of an organism. For example, action potential conduction delays remain the same throughout development despite a huge increase in the distance that nerve impulses are required to travel (as the body gets bigger over time).
In addition, extensive piano practice induces large-scale myelination changes in specific brain regions as a resulting of learning a new motor skill. But how can oligodendrocytes respond so dynamically to the needs of individual axons, the growth of a body, and the behavior of an organism? It had been hypothesized that a form of rapid communication between neurons and oligodendrocytes must be maintained in order to support these dynamic and responsive changes in myelin structure. And indeed, such a structure was recently discovered underneath the many layers of the myelin sheath: a hidden synapse between an axon and its myelin. The Axo-Myelinic Synapse Synapses are considered to be the major form of information processing in the brain. They are defined by Oxford Lexico as “a junction between two nerve cells, consisting of a minute gap across which impulses pass by diffusion of a neurotransmitter.” But synapses aren’t found only between nerve cells. The evidence is mounting that axo-myelinic synapses are formed within every segment of myelin and its underlying axon. These synapses seem to work in exactly the same way as a synapse between two neurons: an action potential causes the fusion of vesicles with the cell membrane, which dumps the contents of the vesicles (neurotransmitters) into the synaptic cleft. Then, protein receptors on the post-synaptic neuron sense the incoming chemical signal and respond accordingly. Action potentials induce the release of neurotransmitter at the synapse, which binds to post-synaptic receptors to induce a response in the receiving neuron. Image Credit: Wikimedia Commons In the case of neuron-to-neuron synapses, neurotransmitters will sometimes induce an action potential in the post-synaptic cell. But this doesn’t really apply to axo-myelinic synapses, because oligodendrocytes don’t produce action potentials. However, there are many other downstream effects that occur when neurotransmitters bind to receptors on any post-synaptic cell. Unlike a computer, brain cells don’t need to use electrical signals to transmit important information; cellular signaling often happens through vastly complicated networks of cascading chemical reactions. In this way, axo-myelinic synapses are a completely untapped therapeutic target that are likely involved in all myelin-related diseases. But practically nothing is known about the function of these newly discovered synapses. Despite the lack of hard evidence, we can make some educated speculations about the role of axo-myelinic communication in our brains. (I won’t dig too deeply into the details here, so if you want more information about the possible molecular mechanisms you can check out my qualifying exam and its associated figures). There are three probable ways in which axo-myelinic synapse activity could affect neurons and oligodendrocytes. We’ve already briefly touched on the first: the activity-dependent delivery of vital nutrients. As we discussed, myelin prevents axons from taking in vital nutrients, so oligodendrocytes deliver the food directly to the neuron. It seems extraordinarily likely that the amount of energy delivered to a given neuron would depend on the activity level of specific axons. And this information would be easily communicated through axo-myelinic synapses (see the top portion of the figure below). Secondly, axo-myelinic synapses are an attractive mechanism to explain how oligodendrocytes determine the thickness of their myelin wrappings. 
By using axo-myelinic synapses as a read-out of neuronal activity, an oligodendrocyte could fine tune the structure of each individual myelin sheath to modulate the speed of neuronal signaling (see the bottom portion of the figure below). This is a figure I created to visualize the hypothetical ways in which axo-myelinic synapses might function. Top part of the figure: PKC is immediately activated by low frequency action potential firing which perfuses lactate into myelin through monocarboxylate transporter 1 (MCT) where it is delivered to actively firing axons through MCT2. Bottom part of the figure: High frequency action potentials cause a delayed inhibition of MAPK through myelinic NMDA receptor activation which enhances mRNA translation in myelin. Finally, the interplay of these two mechanisms raises other fascinating possibilities. Because oligodendrocytes myelinate up to 60 different axons, they are in a perfect position to control the activity of an entire neuronal network. By receiving information from hundreds of axo-myelinic synapses, an oligodendrocyte could selectively provide nutrients to certain axons while also changing the structure of individual myelin sheaths. The combination of these abilities provides oligodendrocytes with an immense power to orchestrate the information processing of a group of neurons. An oligodendrocyte could starve out specific neurons by withholding nutrients, or it could alter the myelination on particular axons to synchronize activity across a network. We don’t really know if any of this happens, but the mere possibilities are mind-boggling. Whatever the actual functions of axo-myelinic synapses turn out to be, oligodendrocytes clearly play a much more important role in the brain than they receive credit for.
https://medium.com/medical-myths-and-models/youve-been-misled-about-myelin-d6238691704b
['Ben L. Callif']
2020-02-04 05:50:55.312000+00:00
['Neuroscience', 'The Law Of The Instrument', 'Brain', 'Myelin', 'Biology']
Kubernetes — Learn Init Container Pattern
Kubernetes — Learn Init Container Pattern Understanding the Init Container Pattern With an Example Project Photo by Judson Moore on Unsplash Kubernetes is an open-source container orchestration engine for automating deployment, scaling, and management of containerized applications. A pod is the basic building block of a Kubernetes application. Kubernetes manages pods rather than containers, and pods encapsulate containers. A pod may contain one or more containers, storage, IP addresses, and options that govern how the containers should run inside the pod. A pod that contains one container is referred to as a single-container pod, and it is the most common Kubernetes use case. A pod that contains multiple co-related containers is referred to as a multi-container pod. There are a few patterns for multi-container pods; one of them is the init container pattern. In this post, we will see this pattern in detail with an example project. What are Init Containers Other Patterns Example Project Test With Deployment Object How to Configure Resource Limits When should we use this pattern Summary Conclusion What are Init Containers Init containers are containers that should run to completion before the main container in the pod starts up. They provide a separate lifecycle for initialization, which enables separation of concerns in your applications. For example, if you need to install some specific software before you run your application, you can do that installation in the pod's init container. Init Container Pattern If you look at the above diagram, you can define any number of init containers, and your main container starts only after all the init containers have terminated successfully. All the init containers are executed sequentially, and if any init container fails, the pod is restarted, which means all the init containers are executed again. So it's better to design your init containers to be simple, quick, and idempotent.
https://medium.com/bb-tutorials-and-thoughts/kubernetes-learn-init-container-pattern-7a757742de6b
['Bhargav Bachina']
2020-09-24 20:23:20.730000+00:00
['Software Engineering', 'DevOps', 'Software Development', 'Docker', 'Kubernetes']
The Free and Easy Way to Improve School Culture
All it takes is 30 minutes and a pair of shoes. Photo by Arek Adeoye on Unsplash What’s the purpose of school? No, really. I want you to think about that question for a few minutes before you read on. Did you come up with an answer? I bet it has something to do with educating kids so that they can grow up into fully functional adults. If it is, then I would pretty much agree with you. My next question is this: are schools serving their purpose? I wish we were at a coffee shop so you could really give me your answer to this question. I know it’s a complex question, and your answer will depend on your own experiences in school, where you live and any number of other factors. I would bet you an avocado toast and an Americano that your answer to the above question is no. Schools are easy to criticize and hard to defend. Everyone feels like schools are failing, students are failing and teachers are failing. If you feel like this, it’s not that you’re wrong. The thing that bothers me though, as a high school teacher and strong proponent of public education, is that while nearly everyone agrees that schools need to do better, it’s rare for people to galvanize their support behind common sense ideas and initiatives that work for students. There are no shortage of federal programs, private software companies, professional development consulting agencies, whack-a-doo secretaries of education lining up to push products, mandates and curriculum onto schools. Probably some of them are great, or they could be if they weren’t replaced with some other, newer version before the ink has dried on the contract or the check has been cashed. I’m one of the teachers that rolls my eyes when there is a new set of acronyms to learn at the beginning of the school year — RTIs, IEPs, 504s, ESSA. Out with the old and in with the new. At their heart though, the purpose of any of these programs are for kids to feel connected and noticed while they are at school so that they are mentally and emotionally prepared and willing to learn. Walk your way to a better school Here’s my plan to get rid of acronyms, cut spending, improve teacher retention, fight childhood obesity, decrease student stress and anxiety and improve attendance. Oh, did I mention: it’s free! The plan is this: every student should go for a 30 minute walk every day. I teach at a charter school, where I am fortunate to have the freedom to try out crazy ideas like the one above. The original goal of charter schools is to pilot interesting ideas on a small scale to see if they are scalable to larger schools. For the past 30 days, I have been going for walks with a group of 13 students. I make little maps of different neighborhoods within walking distance of my school, pass them out with arrows indicating the directions, and we take off. Sounds crazy, right? Let me convince you why this is a good idea and then explain how feasible it is for any size school in any location. A Pyramid of Benefits Photo by Emma Simpson on Unsplash The first benefit of this practice is improvements to physical health. If you haven’t seen that crazy popular youtube video about the health impacts of walking for 30 minutes each day, you should first get up and go for a walk and then watch the video. Walking is a low impact way to ward off obesity, diabetes, hypertension and more. The second benefit of going for daily walks is social engagement. When I started out on this endeavor in early September, I didn’t know any of the students well at all. 
As we walked shoulder to shoulder on sidewalks, trails and parking lots, I had great conversations with each and every one of them. I learned about little snippets of their lives that would never come up in a regular class period or even a fifteen minute check in with a guidance counselor. I’m not just nosy, this information is helpful in figuring out what motivates a student or troubleshooting their lack of motivation. We also ran into lots of people from the town who interacted with my students. Too often ‘school’ means being contained within the four walls of a classroom. It was great for students to help senior citizens bump their walkers up over the curb and to step off the sidewalk to allow a mother pushing a stroller to pass. These are little things, but I’d never think to address them in the classroom. Students also benefited from socializing with each other in a safe, somewhat controlled setting. Trust me, I wasn’t keyed in listening to every conversation, but they knew that it was not an appropriate setting for certain topics. It was great to hear them talking to each other about learning to drive, getting their first jobs or even giving relationship advice. Students that wouldn’t normally talk or hang out had the opportunity to socialize in a healthy, productive way. A third benefit is the connections that students made between academic content areas and the real world. An example is the daily math I had them do to figure out how long we had to walk. I didn’t give them an algorithm or a worksheet, but I expected them to figure out what time we had to turn around if we were leaving at 1:57 and we wanted to be back at 2:42. When we hoofed it through different neighborhoods, there were great conversations about why one part of town has big mansions and another part of town has small ranch homes. We identified trees and invasive species and noticed wildlife behaviors. A fourth (but probably not final) benefit was the connection to place that occurred as a result of these walks. Our school is in a very car-centric town. We would cover 1–3 miles and students were always amazed to see that we were able to walk to the Chinese food place or past Jacob’s house. Even they grew up in the town, they hadn’t ever developed a mental map of it. I teach science, so I’m always looking out for interesting plants and one day it made me burst with pride when a student up ahead of me shouted back “Yo, do you want us to turn left at that big sycamore tree?” Every School Can Do This Photo by Randy Fath on Unsplash Before you start listing the many reasons why it would be impossible for every student in every school to go for a walk every day, let’s just pause for a moment to remember that this is the country that expanded westward on foot and also sent a man for a walk on the moon. I think we can overcome some of these obstacles. For schools where permission slips are a headache, consider this: my school sends home one permission slip at the beginning of the year. It was written by our lawyer, so I assume it’s legal. When parents sign it, they give us permission to take kids on any field trips we want to all year long, provided we give them notification. It works out great and avoids the permission slip back and forth that has happened in other schools that I’ve worked in. Maybe some schools are worried about the time intrusion. With so many scheduled classes and trying to fit in electives like band and chorus, where would the time for this come from? 
At my school, we are able to award PE credit to students for participating, so that's one avenue. I have also observed that shifting gears through six or more classes each day is overwhelming for students, which manifests as anxiety. Change the schedule, extend lunch, split a period with a study hall. There are all kinds of creative schedules out there; it is not impossible to find 30 minutes for an activity with such widespread benefits. Staffing may be another issue to consider — one that I believe can be overcome as well. I am a busy teacher with young kids at home. Having the chance to get a little exercise in during the day is a joy! While not all teachers would jump at this chance, many would. There is no preparation or grading required to go for daily walks, and the relationships I form with students make them easier to work with in my regular academic classes. There are probably other obstacles as well, but none are insurmountable, especially considering the payoffs. Get Started Today! Photo by Manasvita S on Unsplash The great thing about this plan is that any school could try it out on a small or large scale immediately. There's no cost and no risk — only rewards. So students, teachers, parents, administrators: what are you waiting for? Get walking! Don't work in a school? Don't care about schools? Weird that you're reading this article, but that's fine. Here's the great thing: You can go for a walk too, and get all of the above benefits as well. When you get back from your walk, take all of that great energy and use it to tell someone else that you really think that the way to fix schools is to start by getting kids and teachers up on their feet. And then maybe when you do bump into me in a coffee shop, and I ask you if schools are serving their purpose, you'll have a different answer!
https://medium.com/age-of-awareness/the-free-and-easy-way-to-improve-school-culture-644940d26601
['Emily Kingsley']
2020-01-24 02:34:50.374000+00:00
['Society', 'Schools', 'Culture', 'Health', 'Education']
3 Interviews — 5 Questions. Five basic things I learnt about Python…
1. Is Python a compiled or an interpreted language? The answer is 'Both'. But this answer most probably won't get anyone the job until we explain the difference between an interpreted & a compiled language. Why do we need a translator? Humans understand and hence speak human language, something closer to English. Machines talk in binary language, all 1's & 0's. That is why we need a translator in between, which takes human-readable code written in high-level programming languages such as Python and converts it into a form understandable by a machine. Now, the available translators are of two types: a compiler and an interpreter. What is a compiler? A compiler is a computer program that takes all your code at once and translates it into machine language. The resultant file is an executable that can be run as is. The pro is: this process is fast since it does all the job at once. The con is: this has to be done for every machine all over again. You cannot compile your code on one machine, generate an exe and run it on other machines regardless. What is an interpreter? On the other hand, an interpreter translates your code one instruction at a time. Con: it takes its time, since an error at line 574 means it notifies you, you fix the error, and it starts translating again from line 1. Pro: once translated, the generated bytecode file is platform independent. No matter what machine you want this code to run on, take your virtual machine with you and you are good to go, because the generated bytecode is going to run on your PVM (Python virtual machine) and not on the actual physical CPU of your machine. Compiled vs Interpreted Now, the answer that might get you the job is: Python does both. When we write Python code and run it, the compiler generates a bytecode file (with a .pyc or .pyo extension). We can then take this bytecode and our Python virtual machine and run it on any machine we want seamlessly. The PVM in this case is the interpreter that converts the bytecode to machine code. 2. Is Python Call-by-Value or Call-by-Reference? The answer again is 'Both'. This is so basic that you can even find it with your first Google search, but knowing the details is important. What is call-by-value? Call-by-value and call-by-reference are the techniques specifying how arguments are passed to a callable (more specifically, a function) by a caller. In a language that follows the call-by-value technique, when passing arguments to a function, a copy of the variable is passed. That means the value that is passed to the function is a new value stored at a new memory address, hence any changes made to the value passed to the function will only happen to the copy stored at the new address and the original value will remain intact. call-by-value in action What is call-by-reference? In the call-by-reference technique, we pass the memory address of the variable as an argument to the function. This memory address is called a reference. Hence, when a function operates on this value, it is actually operating on the original value stored at the memory address passed as an argument, so the original value is not preserved anymore but changed. call-by-reference in action Python's call-by-object-reference Python follows a combination of both of these, known as call-by-object-reference. This is a hybrid technique because what is passed is a reference, but what happens (in some cases) is more similar to an original value change.
Everything in Python is an object, which means the value is stored at a memory location and the variable we declare is only a container for that memory address. No matter how many times we create a copy of that value, all the variables will still be pointing to the same memory location. Hence, in Python there is no concept of passing a copy of a variable as an argument. In any case we end up passing the reference (memory location) as an argument to a function. So this is call-by-reference inherently. A quick example to understand this: no matter how many variables we declare to store the value of the integer 2, all of them contain the same memory address, because variables in Python are nothing else but containers for memory addresses. Hence, this point is settled: the arguments passed in Python are always references and never values. Whether the original value remains intact or not depends upon the type of data structure. Some of the data structures in Python are mutable, which means you can change their values in place, while some are immutable, which means an effort to change their value will result in a new value stored at a new location and the new reference will be stored in the variable. Examples of mutable objects in Python are list, dict, set, and bytearray, while the immutable objects are int, float, complex, string, tuple, frozenset [note: immutable version of set], and bytes. So if the reference passed to the function was pointing towards a mutable value, it will be changed in place and your container will contain the same memory address it originally had. If the reference passed to the function is of a memory location storing an immutable value, the new value after processing will be stored at a new memory location and the container will be updated to store the address of the new memory location. This is what call-by-object-reference is. As an example, when the variable 'a' is referencing an integer and we try to modify its value, 'a' starts pointing towards a new memory location since modifying the value of an integer in place is not possible.
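To make both answers above concrete, here is a small, self-contained sketch (my own illustration, not from the original article) that runs on any standard CPython installation. The first part uses the built-in dis module to show that Python source is compiled to bytecode before the PVM interprets it; the second part shows call-by-object-reference with a mutable list and an immutable integer.

```python
import dis

def add(a, b):
    return a + b

# 1. Python compiles functions to bytecode, which the PVM then interprets.
dis.dis(add)  # prints the bytecode instructions for add()

# 2. Call-by-object-reference in action.
def append_item(items):
    items.append(4)      # mutates the object the reference points to

def increment(number):
    number += 1          # rebinds the local name to a new int object
    return number

my_list = [1, 2, 3]
my_int = 10

print(id(my_list))
append_item(my_list)
print(my_list)           # [1, 2, 3, 4] -- the list was changed in place
print(id(my_list))       # same id: the same object was mutated

print(id(my_int))
print(increment(my_int)) # 11
print(my_int)            # still 10 -- the caller's binding is untouched
```

The list keeps its identity because lists are mutable, while the integer outside the function never changes: int is immutable, so the function only rebinds its own local name.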
https://medium.com/swlh/3-interviews-5-questions-55bd4cae8b9f
['Ramsha Bukhari']
2020-10-28 22:47:15.036000+00:00
['Python', 'Software Engineering', 'Programming', 'Data Visualization', 'Database']
Six Steps to a Better Deadlift
Photo by Alora Griffiths on Unsplash The barbell deadlift is a compound exercise that works almost every major muscle group. While it is phenomenal for developing full body strength, it also has the potential to be dangerous if performed improperly. For all of its benefits, it could be argued that the deadlift's risks outweigh the rewards if you don't take the time to execute it with precision technique. So in order to help you maximize those rewards while minimizing the risks, I've compiled a checklist of six steps that will walk you through every step of a properly performed deadlift — from start to finish. #1. Find Your Ideal Foot Placement Ankle mobility and limb length will be a factor in what exact stance is best for you, but here are a couple of general guidelines: As for how far underneath the bar your feet should be: if you're looking straight down, the bar should be right around the middle of the foot. A visual cue is to think about the bar "cutting your foot in half". As for how far apart your feet should be: try simply performing a few vertical jumps and pay attention to where your feet are landing. Use this stance as a reference point for how far apart you may want to space your feet underneath the bar. #2. Use the "Balloon Analogy" to Brace Your Core Correctly Being told to "brace your core" is a common coaching cue. What is not so common, however, is being told how to brace the right way. The first thing a lot of people think of doing when they're told to "brace" is to tighten up their abs by sucking in their stomach. This will cause the lower back to round under a load, and is the exact opposite of what you want to do. Instead, learn how to properly brace your core by using the "balloon analogy". How to do it: Imagine a balloon is between your hands. If you press down on it, it expands 360 degrees. Now apply this same idea to your core by doing the following: Place your hands onto your midsection. Let your thumbs wrap around your sides, going towards your lower back area. Take in a deep, nasal-only breath. You will feel expansion 360 degrees — around the stomach, the sides, and into your lower back. This is your body's "internal weightlifting belt" turning on. (Which is the same way you should be bracing to get the most out of an actual weightlifting belt.) Remember that the #1 function of the core is to stabilize the spine under load. That stability is achieved with a 360 degree brace. #3. Screw Your Feet Into the Ground Imagine there is a thick piece of paper between your feet and you want to rip it in two. Let's just assume this is one seriously thick piece of paper, and in order to rip it, you need to generate tension throughout the entire lower body. You can do this by thinking of "screwing your feet into the ground". While keeping the feet flat on the floor, try to actively pull your feet apart from each other. You'll find that the tension you're creating at the feet will also cause the knees to push outwards. This arc of tension will continue all the way up to the hips, glutes, and hamstrings. "Screwing your feet into the ground" is a quick fix to keep the lower body tight from the feet up. #4. Create Tension in the Mid Back and Lats Keep your arms completely straight and reach your hands down your body as far as possible while maintaining an upright posture.
Now reach your hands back behind your body. At this point, you should be feeling tension all throughout the lats and middle back region — this is the same tension that you want to create when you reach down to grab the barbell. Of course, the barbell will prevent you from actually reaching your hands behind your body when you’re putting this to practice during a deadlift, but understanding how to set the lats in this manner is the first step in taking the slack out of the bar (more on that soon), and this position should remain constant throughout the entire duration of the set. Keep in mind that especially in a compound move like the deadlift, if any part of the body is allowed to remain “loose”, it can cause the entire chain to collapse. So being sure to properly pretension both the lower and upper body is critical. If you feel yourself losing tightness anywhere, put the weight down and reset — or discontinue the set. #5. Take the Slack Out of the Bar If the weight on the bar is 225 pounds, think about pulling with around 220 pounds of force as you place your hands onto the bar and pull your body into position. You should hear a “click” coming from either side of the barbell — that’s the sound of the bar being pulled into the weight plates. This will allow you to leverage your bodyweight against the barbell, and from this point you can set your body back into an ideal position to pull — typically a position that has your shoulders nearly directly over the barbell. #6. Leg Press the Floor Away, Lockout With a Neutral Posture, Repeat Once you’re ready to pull, think about performing a maximum effort leg press through the floor. Drive forcefully until you reach the point of lockout, and once you’re there, don’t hyperextend the lumbar spine. You may have seen lifters arch their lower back at the top of a deadlift, but all you need to do is stand straight up. Anything beyond a neutral posture is hyperextension and can excessively compress your lower back. From the lockout, you have the option of either simply dropping the weight to the floor or lowering the bar back down in the same fashion in which you brought it up. Photo by Anastase Maragos on Unsplash In Summary Anything worth doing is worth doing right, and if the barbell deadlift is a staple in your routine, keeping these six steps in mind will help optimize your performance while minimizing your risk of injury. Find your footing, brace your core, create full-body tension, pull the slack out of the bar, and leg press the floor away until lockout: Six steps to a better — and safer — deadlift. Thanks for reading! Have a question? Want something covered in a future article? Let me know in the comments! Click here to be notified whenever a new story is published. — Zack
https://medium.com/in-fitness-and-in-health/six-steps-to-a-better-deadlift-48974962dec2
['Zack Harris']
2020-11-01 17:16:18.134000+00:00
['Health', 'Wellness', 'Fitness', 'Life', 'Self Improvement']
Learning How to Learn: Powerful mental tools to help you master tough subjects, Diffuse Mode
Learning How to Learn: Powerful mental tools to help you master tough subjects, Diffuse Mode How to make use of your superior diffuse mode a.k.a. your subconsciousness This is a follow-up to the chapter discussing the focused mode. I have a lot of ideas every day, but not enough time to write them all down, so I chose to write the other ideas down before I forgot them. I was pretty certain I wouldn't forget the ideas described in this chapter. So the diffuse mode is somewhat the opposite of the focused mode. It is active in the background, or your subconsciousness. The mistake most people and students tend to make when learning is that they don't spend a lot of time in the diffuse mode. This can cause a dramatic lack of depth in their understanding of subjects. If you teach those students about the simulation hypothesis (see the chapter: 09/14/2019 — Simulation Hypothesis and 'Good and Evil'), they will use their focused mode to learn every element related to that hypothesis, but won't spend much time in the diffuse mode. What is the consequence of spending less time in the diffuse mode? Well, they won't ask themselves those deep and abstract questions like "But how does this hypothesis relate to ethics and morals?" It is a sad trend you see in modern education: superficial understanding of material. I much prefer the old days, like in Ancient Greece or Rome, where even emperors like Marcus Aurelius knew about philosophy and had a deep understanding of things. How to enter the diffuse mode First of all, you can enter the diffuse mode simply by not focusing on anything (through meditation or mindfulness), but this alone is not effective in terms of creativity and learning new things. In order to command your diffuse mode to learn something in the background, you need to use the focused mode first. For example, say you want to know your own personal definition of the meaning of life. What you could start with is simply Googling information (and also storing it long-term), trying to answer the question and view as many perspectives as you can, and then just relaxing. Do something else like exercising or meditating; the thinking will continue and run in the background. After 15 minutes or even hours, simply return to the question and you will be surprised how many new and stronger connections were formed in your brain without the conscious 'you'. This technique can also beautifully be used when taking tests or exams, or when doing homework: https://lifehacker.com/improve-your-test-scores-with-the-hard-start-jump-to-e-1790599531 — Improve Your Test Scores With the "Hard Start-Jump to Easy" Technique Diffuse mode and working memory The diffuse mode is not limited to the working memory slots located in your prefrontal cortex, unlike the focused mode, which is. This makes your subconsciousness so much more powerful when used correctly (albeit not as powerful as depicted in those movies). The focused mode also tends to activate old neural pathways that aren't really that creative, nor are the cerebral distances very long (the distance between two activated neurons, brain regions, etc.). The diffuse mode can activate but also create new neural pathways that have a much longer cerebral distance than the focused mode can. This allows the diffuse mode to be much more creative but also to combine ideas from many different brain regions.
Again, the diffuse mode is not limited to your working memory slots located in your prefrontal cortex, so it can connect and 'think' about as many ideas simultaneously as its neural resources allow. Diffuse mode and psychedelics I would really recommend the book 'How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence' by Michael Pollan to learn more about the information I am going to discuss next. So the diffuse mode is mostly active when you don't focus (using your focused mode). What mostly happens, neurophysiologically speaking, is that the activity in the so-called default mode network increases. This allows for all kinds of connections to be much more active too, not only within one brain region but between brain regions, too. This is why someone taking psychedelics gets to see all kinds of weird hallucinations, like seeing faces in inanimate objects (this phenomenon, called pareidolia, happens even without taking psychedelics, but the increased activity from the default mode network just increases the probability of occurrence). People who take psychedelics or meditate have the feeling they have found all kinds of 'truths' never thought of before. You could say that, when taking psychedelics, you are essentially becoming aware of your diffuse mode. During sleep, you are experiencing your diffuse mode, too. Diffuse mode and Entropic Learning Model See the chapter 09/11/2019 — Entropic Learning Model for more information. The diffuse mode is just such an important part of our learning and thinking process that I made a separate phase in my learning model to remind myself that after hours of hard thinking, I need to relax to allow my diffuse mode to take over the thinking work. Can the diffuse mode run when you are activating the focused mode? Yes, but only when you switch tasks or ways of thinking. If you are thinking about psychology and get stuck somewhere, switch to a more left-hemispheric mode of thinking like physics or mathematics (the idea that the left and right brain hemispheres are separated from each other, in terms of logic and creativity respectively, is a myth, but brain lateralization or specialization certainly does exist to a certain degree). How long does it take to 'enter' the diffuse mode? I am not sure, but as far as I have read, it can take as little as 10 minutes or as long as several hours. The thing, however, is that to stay in the diffuse mode, you need to repeat to yourself the images, ideas, questions, etc., from time to time in order to make your diffuse mode think about the subject, even if it takes hours. According to research, there seems to be a correlation between having more knowledge and the duration required to switch between focused and diffuse mode effectively. The more knowledge, the faster you can switch between those two modes. Diffuse mode and exponential learning It is important, no matter how much homework you have, to try to switch between the focused mode and the diffuse mode. It may feel like it takes a lot more time to finish your homework, but in the long run you will understand the material much more deeply. This deeper understanding will make it much easier to learn new and related material. You don't want to end up studying for years and then only using and remembering less than 10%. Imagine how it feels to spend 40 hours a week studying, while knowing in the back of your head that only 4 of these hours were 'effective'.
Keep this thought alive in the background to motivate yourself to use that diffuse mode from time to time and not to rush your learning. Of course there are many more techniques to make your retention get closer to that 100%, like the method of loci, spaced repetition, interleaved practice, exercise, nutrition, reducing stress, getting enough sleep (which most students lack), etc. I personally don’t spend 40 hours a week learning (new) things, not only because I don’t have the time for it, but because I don’t really need to. My retention and understanding of material is very close to that 100% and you might even say above 100%, because of all the new ideas I am generating. Those little 20 hours a week quickly turn into the equivalent of 40 hours a week most students follow, and it grows exponentially.
https://medium.com/superintelligence/09-16-2019-learning-how-to-learn-powerful-mental-tools-to-help-you-master-tough-subjects-e99684abb8c8
['John Von Neumann Ii']
2019-11-10 20:31:05.116000+00:00
['Neuroscience', 'Learning', 'Education', 'Students', 'Brain']
How to Use Storytelling Conventions to Create Better Visualizations
Story is the best form of communication we have. To the steely-eyed analyst, it may seem superfluous and manipulative — a way to persuade with emotion instead of facts. While it is true that some have used story to mislead and befuddle, to discard it altogether is like blaming shoes for an inability to dunk a basketball. Stories aren’t the problem; false stories are. The goal of the analyst, then, is not to avoid stories, but to tell better ones. Not only is story an effective way to communicate, for the data analyst it is unavoidable, because every presentation of data tells a story whether it is intended or not. If the story isn’t made explicit, the audience will make one up. Take the ubiquitous tabular report as an example… Story: I’m not sure what any of this means but I did work really hard to collect all the data. A visualization project doesn’t succeed by accident. Behind every one is a developer who has mastered the data, the subject matter, and the user stories. No one understands the content better than she does. By comparison, the audience’s vantage point is limited. If left to their own devices, chances are good that they will miss important insights or draw incorrect conclusions. Given that, there is no better person than the visualization developer to provide a point of view on what the data means. If the audience is looking for a story, then it is incumbent on the developer to guide them to the one that is most meaningful while staying true to the data. For a visualization to succeed, the developer must own the role of storyteller. The Story Framework Stories are about ideas. A particular story might be about a detective figuring out who did it, or survivors fighting off a zombie apocalypse, but underneath the fictional facade is a point of view about life. The combination of setting, characters, events, and tension is simply a metaphor about real-world ideas that matter — and are true. The genius of story is that it doesn’t tell you an idea is important; it shows you. When done well, its outcomes seem inevitable and its conclusions are convincing. Few methods can match a great story’s ability to enlighten and persuade. To accomplish this, stories typically follow a framework, or narrative arc, that looks like this… 1. A relatable protagonist whose world is in balance 2. A crisis that knocks their world out of balance making the status quo unacceptable 3. A journey to restore balance that faces progressively escalating opposition 4. A climax where the protagonist must decide to risk everything in order find balance once again You can see how this plays out in a couple of great movies from the ‘80s… In The Karate Kid, A high schooler named Daniel moves to California with his mom and is doing reasonably well at making new friends (balance), when a bully with a cool red leather jacket and sweet karate moves decides to make Daniel his target (crisis). Daniel is determined to learn karate to defend himself and finds Mr. Miyagi to train him (journey). In the end, Daniel must overcome the bully in a final battle royale for all to witness (climax). In Back to the Future, Marty is a normal kid trying to take his girlfriend to the prom, and also happens to be friends with a mad scientist who discovers time travel (balance). A string of events leads Marty to accidentally travel back in time to when his parents first met, and threatens his future existence by allowing his mom to become enamored with him instead of his dad (crisis). 
Marty then has to orchestrate events so that his mom and dad fall in love (journey), and then get back to the present time using the power from a clock tower struck by lightning (climax). The beauty of this framework is that it takes advantage of a characteristic we all share as humans: the need for order and balance. When something threatens that need, the tension causes us to direct all of our mental, emotional, and physical capacities toward restoring that balance. Sitting idle is not an option; action must be taken. A visualization can likewise use this framework to present information in a more persuasive and compelling way. If a report states facts simply because they exist with no concern for what they mean, then a visualization shows the facts that matter, when they matter, to whom they matter, and what can be done about it. Knowing that a user will act when he believes the status quo is untenable and understands what he can do about it, an effective visualization focuses on the facts that reveal meaningful tension and provide a guided path to the appropriate actions. Let’s look at how each part of the framework applies to visualization design… Scope depth over breadth “A relatable protagonist whose world is in balance” Storytellers understand who their audience is and what they care about, which enables them to create relatable protagonists and a clear picture of what a balanced and desirable life looks like. Good storytellers go deep, not wide. They limit the number of characters and the breadth of the created world to only what can be known intimately. If visualization is a form of storytelling, then the audience is its protagonist and the setting its analytical scope. A successful visual creates a world its audience will immediately recognize as their own, with familiar terminology, organization, and concepts of favorable conditions. Its scope favors depth over breadth. It does not waste space on extraneous topics just because the data is available or previous reports included them, but instead focuses solely on the problem it set out to solve, and solves only that. Exception-based visual cues “A crisis that knocks the protagonist’s world out of balance making the status quo unacceptable” Crisis is the driving force of a story. Without it there is no action, and without action there is no story. If the protagonist lives in a world where everything is as it should be, then why would she do anything to change that? Minor annoyances or moderately-challenging setbacks might lead her to make adjustments, but that doesn’t make for a compelling story. What is compelling is when an event threatens the very essence of life as she knows it. When that happens, action is not optional; it’s a matter of survival. A visualization is likewise defined by action — consequential action, more to the point. Its aim is to convince the viewer that the status quo is unacceptable and that action is mandatory. In the same way a story uses crisis as an impetus for action, a visualization makes crises jump off the screen and compels the viewer to act. It does not allow minor issues to clutter the view, but rather it focuses squarely on the things that will dramatically damage the current state if left unaddressed. In the business world it’s common to see a report full of performance KPIs like sales this year vs the previous year, or market share of a company vs a competitor. In far too many cases, every positive and negative variation is highlighted with green or red like the left side of the chart above. 
While it succeeds in looking like a Christmas tree, it fails at helping the viewer understand what truly matters. In reality only a few KPI variances have meaningful implications for the overall health of a business, which are called exceptions. An effective visualization is clear on which exceptions impact performance the most, and displays them front and center. Progressively-revealed detail “A journey to restore balance that faces progressively escalating opposition” Every story is a journey. They are sometimes about the protagonist literally getting from point A to point B, but they are always about the protagonist’s journey of personal transformation. No good story leaves its characters how it found them. It may seem that all is well at the beginning of a story, but a major crisis exposes how vulnerable they are. The narrative arc is not about recovering what the crisis took away; it’s about the protagonist growing into a better version of themselves that they didn’t realize was possible before. And just like in real life, it doesn’t happen with one transformational event, but progressively over the course of many events with each one requiring a little more than the one before it. The heroism that’s always required in the final act would not be possible in act one. It’s the journey in the middle that makes it possible. While a visualization does not usually demand heroic acts from its users, it does concede that they need to go on a journey involving several stages of analysis before they’re ready to act. Few real-world problems are so simple that a single KPI or view could clarify the severity of a situation or the appropriate response. Decision-makers want to go through a progression that starts with high-level performance questions and then move on to increasingly-detailed questions until a specific opportunity for action is identified. The job of a visualization is to simply mirror this progression. Actionable conclusions “A climax where the protagonist must decide to risk everything in order find balance once again” In the narrative arc of a story, the protagonist’s transformation is only complete once he irreversibly turns away from who he once was and embraces his new self. Every event, character, decision, and action in the story builds to the moment at the end where he makes a final decision and takes the required action. In a well-crafted story, the end seems inevitable because every previous moment logically led to it, one step at a time. In the same way, a visualization builds toward a final, decisive action from its users. Every choice about what, how, and where to show information is made with this end in mind. Common tabular reports provide information and nothing more. A better visualization provides the necessary insight for making decisions. To do this well, a visualization designer learns what type of information her user base needs for better decision-making, and then figures out how to sequence visuals so that her users can intuitively get to that information as quickly as possible.
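To make the idea of exception-based cues concrete, here is a small, hypothetical sketch (my own illustration, not from the original article) that flags only the KPI variances large enough to matter instead of coloring every positive and negative change. The sample numbers and the 5% threshold are assumptions chosen purely for illustration.

```python
import pandas as pd

# Hypothetical KPI data: year-over-year sales variance by region, in percent.
kpis = pd.DataFrame({
    "region": ["North", "South", "East", "West", "Central"],
    "variance_pct": [1.2, -0.8, -14.5, 0.4, 9.7],
})

# Only variances beyond this threshold count as exceptions worth highlighting.
EXCEPTION_THRESHOLD = 5.0

kpis["exception"] = kpis["variance_pct"].abs() >= EXCEPTION_THRESHOLD

# Surface only the exceptions, sorted by severity, for the front-and-center view.
exceptions = kpis[kpis["exception"]].copy()
exceptions["severity"] = exceptions["variance_pct"].abs()
exceptions = exceptions.sort_values("severity", ascending=False)
print(exceptions[["region", "variance_pct"]])
```

Everything else stays unhighlighted, which keeps the view from turning into the Christmas tree described above.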
https://medium.com/nightingale/how-to-use-storytelling-conventions-to-create-better-visualizations-45177ae517ba
['Dan Gastineau']
2019-06-12 21:43:18.930000+00:00
['Topicsindv', 'Design', 'Storytelling', 'Data', 'Data Visualization']
We Need a Code of Ethics
We Need a Code of Ethics We've been moving too fast for too long and it's hurting everyone. I keep wondering what we can do to ensure we're building a better world with the products we make and if we need a code of ethics. So often I see people fall through the cracks because they're "edge cases" or "not the target audience." But at the end of the day, we're still talking about humans. Other professions that can deeply alter someone's life have a code of ethics and conduct they must adhere to. Take physicians' Hippocratic oath for example. It's got some great guiding principles, which we'll get into below. Violating this code can mean fines or losing the ability to practice medicine. We also build things that can deeply alter someone's life, so why shouldn't we have one in tech? While this isn't a new subject by any stretch, I made a mini one of my own. It's been boiled down to just one thing and is flexible enough to guide all my other decisions. I even modified it from the Hippocratic Oath: We will prevent harm whenever possible, as prevention is preferable to mitigation. We will also take into account not just the potential for harm, but the harm's impact. That's meaningful to me because for years, tech had a "move fast, break things" mentality, which screams immaturity. It's caused carelessness, metrics-obsessed growth, and worse — I don't need to belabor that here. We've moved fast for long enough; now let's grow together to be more intentional about both what and how we build. Maybe the new mantra could be "move thoughtfully and prevent harm," but maybe that isn't quite as catchy. A practical example Recently, a developer launched a de-pixelizer. Essentially, it can take a pixelated image and approximate what the person's face might look like. The results were…not great. Setting aside that the AI seems to have been trained only on white faces, we have to consider how this might go wrong and harm people. Imagine that this algorithm makes it into the hands of law enforcement, who mistakenly identify someone as a criminal. This mistake could potentially ruin someone's life, so we have to tread very carefully here. Even if the AI achieves 90% accuracy, there's still a 10% chance it could be wrong. And while the potential for false positives might be relatively low, the impact of those mistakes could have severe consequences. Remember, we aren't talking about which version of Internet Explorer we should support; we're talking about someone's life — we have to be more granular because both the potential and impact of harm is high here. Accident Theory and Preventing Harm Kaelin Burns (Fractal's Director of Product Management) has this to say about creating products ethically: "When you create something, you also create the 'accident' of it. For example, when cars were invented, everyone was excited about the positive impact of getting places faster, but it also created the negative impact of car crashes and injuries. How do you evaluate the upside of a new invention along with the possible negative consequences, especially when they have never happened before? So when you're creating new technology, I believe you have a responsibility, to the best of your ability, to think through the negative, unintended, and problematic uses of that technology, and to weigh it against the good it can do.
It becomes particularly challenging when that technology also has the potential to be extremely profitable, but it is even more important in those cases." If you're looking for an exercise you can do, try my Black Mirror Brainstorm. In closing In this post, we discussed why we need a code of ethics in our world. I shared one thing that I added to mine. We also talked about the impact of not having one. You also learned about Accident Theory and have a shiny new exercise to try. I'm curious about one thing: What's one thing you'd include if you made a Hippocratic Oath for tech? Thanks for reading.
https://medium.com/thisisartium/we-need-a-code-of-ethics-eaaba6f9394b
['Joshua Mauldin']
2020-07-23 22:54:13.381000+00:00
['Entrepreneurship', 'Design', 'Ethics']
Amazon EC2 for Dummies — Virtual Servers on the Cloud
For more information on Instance Types see the EC2 documentation. 3- Storage Amazon EC2 offers flexible, cost-effective, and easy-to-use data storage options to be used with instances, each having a unique combination of performance and durability. Each option can be used independently or in combination. These storage options are divided into 4 categories: 1- Amazon Elastic Block Store (EBS): EBS is a durable block-level storage option for persistent storage. It is recommended for data requiring granular and frequent updates, such as a database. The data persists on an EBS volume even after the instance has been stopped or terminated, unlike instance store volumes. It's a network-attached volume, so it can be attached to or detached from an instance at will. More than one EBS volume can be attached to an instance at a time. The EBS encryption feature allows the encryption of data. For backups, EBS provides the EBS snapshot feature, which stores the snapshot on Amazon S3; the snapshot can be used to create a new EBS volume that can be attached to a new instance. EBS volumes are created in a specific availability zone (AZ), are automatically replicated within the same AZ, and are available to all instances in that particular availability zone. Amazon EBS provides the following volume types: General Purpose SSD, Provisioned IOPS SSD, Throughput Optimized HDD, and Cold HDD; each type is either IOPS optimized or throughput optimized. EBS volumes provide the capability to dynamically increase size, modify the provisioned IOPS capacity, and change volume type on live production volumes. You continue to pay for the volume as long as the data persists. 2- Amazon EC2 instance store: It's a temporary block-level storage option and is available on disks physically attached to the host computer, unlike EBS volumes, which are network-attached. The data on instance store volumes only persists for the lifetime of the instance and can't be detached and attached to other instances like EBS. Data persists if the instance reboots, but stopping or terminating the instance results in permanent loss of data, since every block of storage in the instance store is reset. It is ideal for data that needs to be stored temporarily, such as caches, buffers, or frequently changing data. The size of the instance store available and the type of hardware used for the instance store volumes are determined by the instance type. Instance store volumes are included in an instance's usage cost. AMIs created from instances having instance store volumes don't preserve the data present on those volumes, and that data is not present on instances launched from such an AMI. Also, changing the instance type means the new instance won't have the previous instance store volume attached to it, and all data will be lost. 3- Amazon Elastic File System (Amazon EFS): Amazon EFS is scalable file storage that can be used to create a file system and mount it to EC2 instances. It is used as a common data source for workloads and applications running on multiple instances. 4- Amazon Simple Storage Service (Amazon S3): Amazon S3 is object storage that provides access to reliable, fast, and inexpensive data storage infrastructure. It allows you to store and retrieve any amount of data, at any time, from within Amazon EC2 or anywhere on the web. Amazon EC2 uses Amazon S3 for storing AMIs and snapshots of data volumes. To learn more about storage, visit the EC2 documentation.
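Before moving on to networking, here is a hedged sketch of the EBS workflow described above (create a volume, attach it to an instance, and snapshot it) using the boto3 SDK for Python. The region, availability zone, and instance ID are placeholders of my own, not values from the article.

```python
import boto3

# Placeholder region and instance ID -- substitute your own.
ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

# Create a 10 GiB General Purpose SSD volume in the instance's availability zone.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=10,
    VolumeType="gp3",
)
volume_id = volume["VolumeId"]

# Wait until the volume is available, then attach it as a secondary device.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId=INSTANCE_ID, Device="/dev/xvdf")

# Back it up: EBS snapshots are stored in Amazon S3 behind the scenes.
snapshot = ec2.create_snapshot(VolumeId=volume_id, Description="Nightly backup")
print(snapshot["SnapshotId"])
```

The same client also exposes calls for the dynamic changes mentioned above, such as modify_volume for resizing or retyping a live volume.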
4- Networking By default, all EC2 instances are launched in a default VPC, which is a Virtual Private Cloud that enables you to logically separate a section of the AWS cloud to launch your resources in a virtual network defined as per your needs. It consists of IPv4/IPv6 addresses, internet gateways, route tables, NAT gateways, and public and private subnets, and it helps you design your network and control access to your resources over the internet. You can place resources such as databases in a private subnet to deny any access to them over the internet, or place others like web servers in public subnets for global access. Amazon EC2 and Amazon VPC use the IPv4 addressing protocol by default, and this behavior can't be disabled. An IPv4 CIDR block must be specified when creating a VPC. A public IPv4 address is automatically assigned to an instance when launched in the default VPC. These public IP addresses are not associated with a specific AWS account, and when disassociated from an instance they are released back into the public IPv4 address pool and cannot be reused. For this purpose, AWS offers Elastic IP addresses (EIPs). An EIP is a public IPv4 address that you can allocate to your account until you choose to release it. Amazon also provides a DNS server that resolves Amazon-provided IPv4 DNS hostnames to IPv4 addresses. AWS also provides the Elastic Network Interface (ENI), which is a logical networking component in a VPC that represents a virtual network card and enables communication between different components. We can create our own network interface, which can be attached to an instance, detached from an instance, and attached to another instance as we require. Every instance in a VPC has a default network interface, called the primary network interface, which cannot be detached. To learn more about networking, visit the EC2 documentation. 5- Security For building enterprise-level applications, providing security is a must, and AWS provides state-of-the-art security features to prevent any threat to customers' applications. AWS follows the shared responsibility model, which describes security as a responsibility shared between AWS and the customer. Security of the cloud — The protection of the infrastructure that runs AWS services in the AWS Cloud falls under AWS's responsibility. AWS also provides services that you can use securely. Third-party auditors regularly test and verify the effectiveness of AWS security as part of the AWS Compliance Programs. Security in the cloud — Customer responsibility is determined by the AWS service that they use. Customers are responsible for other factors including the sensitivity of data, the company's requirements, and applicable laws and regulations. For example, securely keeping the private key used for connecting to EC2 instances falls under the customer's responsibilities.
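Tying that last example to code, here is a hedged boto3 sketch (my own illustration, not from the article) of the customer's side of that responsibility: creating a key pair and storing the private key locally with restrictive file permissions. The key name, file path, and region are placeholders.

```python
import os
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

KEY_NAME = "my-app-keypair"      # hypothetical key pair name
KEY_PATH = "my-app-keypair.pem"  # hypothetical local path

# Ask EC2 to generate the key pair; AWS keeps only the public key.
response = ec2.create_key_pair(KeyName=KEY_NAME)

# Persist the private key locally -- keeping it safe is the customer's job.
with open(KEY_PATH, "w") as key_file:
    key_file.write(response["KeyMaterial"])

# Restrict permissions so only the owner can read it (the equivalent of chmod 400).
os.chmod(KEY_PATH, 0o400)
```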
Security is an utmost priority at AWS, and there are a lot of security features provided by AWS that can help manage the security needs of EC2 and other services. These include: Infrastructure security — this includes network isolation via VPCs and subnets, physical host isolation by virtually isolating EC2 instances on the same host, network control via security groups that act as a firewall and let you define IP ranges to accept traffic from, NAT gateways to allow instances in private subnets to reach the global internet, AWS Systems Manager Session Manager to access your instances remotely, VPC Flow Logs to monitor traffic, and EC2 Instance Connect for connecting to your instances using Secure Shell (SSH) without the need to share and manage SSH keys. Interface VPC endpoint — enables you to privately access Amazon EC2 APIs by restricting all network traffic between your VPC and Amazon EC2 to the Amazon network, eliminating any need for internet gateways, NAT devices, or virtual private gateways. Resilience — The AWS global infrastructure is built around AWS Regions and Availability Zones. Regions include multiple (at least 2) Availability Zones that are physically separated and isolated, connected via low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Data protection — Data hosted on AWS's infrastructure is controlled and maintained by AWS, including the security configuration controls for handling customer content and personal data. However, customers are responsible for any personal data that they put in the AWS Cloud. For that, AWS provides encryption at rest for EBS volumes and snapshots, and encryption in transit by providing a secure communication channel for remote access to instances. Secure and private connectivity between EC2 instances of all types is provided by AWS. In addition, some instance types automatically encrypt data in transit between instances. IAM — For data protection purposes, AWS recommends that you protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each user is given only the permissions necessary to fulfill their job duties. Also, define roles and policies for services to access only the necessary features of a service. Key Pairs — A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance. The public key is stored on EC2, and you store the private key. The private key is used to securely access your instances. To learn more about security, see the EC2 documentation.
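As a final, hedged sketch of the security building blocks above (security groups acting as a firewall, with a narrowly scoped ingress rule), here is a boto3 example. The VPC ID, group name, CIDR range, and region are placeholders for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

VPC_ID = "vpc-0123456789abcdef0"  # hypothetical VPC ID
OFFICE_CIDR = "203.0.113.0/24"    # hypothetical trusted IP range

# Create a security group that acts as a virtual firewall for instances.
group = ec2.create_security_group(
    GroupName="web-ssh-restricted",
    Description="Allow SSH only from the office network",
    VpcId=VPC_ID,
)
group_id = group["GroupId"]

# Allow inbound SSH (port 22) only from the trusted CIDR range.
ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": OFFICE_CIDR, "Description": "Office network"}],
        }
    ],
)
print(group_id)
```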
https://medium.com/analytics-vidhya/amazon-ec2-for-dummies-virtual-servers-on-the-cloud-205ceeb11cd4
['Furqan Butt']
2020-08-29 15:59:16.390000+00:00
['Amazon Web Services', 'AWS', 'Cloud Services', 'Cloud Computing', 'Ec2']
Artificial Intelligence: Synergy or Sinnery?
Will the advent of A.I. allow us to embark upon a complete overhaul of traditional labor structures? This is a question that comes up less frequently than others, and one whose answer is wholly dependent on whether we'd like to take an optimistic or pessimistic view. To phrase it another way: A.I. can be seen as the harbinger of an age where humankind can, for the most part, finally unshackle itself from the toils of labor. Conversely, it can also be regarded — and is often regarded in this way — as an enormous threat to employment, set to disrupt almost every industry and cause massive-scale job loss. Assuming an optimistic perspective, it's certainly an exciting proposition, one that would have to be supplemented with some measure of a universal basic income for everyone or some completely innovative way by which resources can be accumulated by members of society. While it seems wholly unfeasible to live in a world where humans need no longer work (again, for the most part) and can be set free to pursue their individual endeavors, it is nonetheless a tantalizing prospect. Preparing for a world without work means grappling with the roles work plays in society, and finding potential substitutes. First and foremost, we rely on work to distribute purchasing power: to give us the dough to buy our bread. Eventually, in our distant Star Trek future, we might get rid of money and prices altogether, as soaring productivity allows society to provide people with all they need at near-zero cost. — Ryan Avent, The Guardian Supposing one were to take a pessimistic perspective, the threat of soaring unemployment rates is all too real. We've already seen the loss of jobs brought about by automation in the workforce, and A.I. poses the most menacing danger of all. The darkest estimates show the loss of half of all current jobs to automation and A.I. — even if this is somewhat exaggerated, it certainly seems drastic enough to warrant considering entirely new systems of wage disbursement.
https://medium.com/hackernoon/a-i-synergy-or-sinnery-3eeb2a2c8d3
['Michael Woronko']
2019-02-25 11:41:00.852000+00:00
['AI', 'Philosophy', 'Technology', 'Artificial Intelligence', 'Elon Musk']
Big Tech Regulators Are Missing the Point
Facebook's CEO Mark Zuckerberg at a 2018 Congressional hearing on privacy (Photo by Chip Somodevilla/Getty Images | Source) It has been a tragic saga, for people who are familiar with the ways that social media platforms and companies operate, to watch government regulatory sessions with Big Tech companies. For many young people, this began with U.S. lawmakers' questioning in Congressional hearings; sessions that revealed the lack of understanding of social media by, frankly, elder legislators. However, for those of us who study modern technology and the way that it has mutated capitalism into an entirely new beast, the frustrations with lawyers, government officials, and anyone who engages in mainstream regulatory discourse continue and intensify. This is primarily because regulators seem not to have an understanding of the actual imperatives guiding Big Tech. While they aim at Antitrust, they tip their hand in outlets like the New York Times and say that the case is harder to make than they expected. They fail to realize, because they are not versed in a deep understanding of the paradigms that guide Big Tech, why their case is so hard. Companies like Google, Facebook, Apple, Amazon, Microsoft, and the like, have not been motivated by user products for over a decade. They are focused on data and prediction products. The disconnect between this older understanding of how capitalism has worked and how Shoshana Zuboff's appropriately named "surveillance capitalism" currently works is ruining any chance of actually reining in Big Tech. There is an urgent need for deeper understandings of surveillance capitalism and its imperatives in order to truly reveal the danger Big Tech poses to all of us, and to move towards substantive regulation. Surveillance Capitalism? Zuboff's paradigm-shifting work, The Age of Surveillance Capitalism, is a necessary prerequisite read for anybody who dares challenge Big Tech's hegemonic influence. I'll detail a few key concepts that motivate the regulatory arguments against Big Tech and best depict why current Antitrust cases will likely fall embarrassingly flat. Facebook is not after Instagram or WhatsApp in order to improve the actual user interfaces or messaging capabilities; they are after these companies to acquire more of your behavioral data to feed into their machine learning prediction algorithms. Primary among them is the idea that companies like Google, Facebook, Amazon, and more, are not in the business of making their user products better. Zuboff calls this old cycle of product improvement the "behavioral reinvestment cycle." This argues that, in the old days, Google may have used user data on how their search bar has been used in order to improve the search bar itself — potentially adding a new feature like search suggestions. This cycle closely mirrors the cycle of capital reinvestment from industrial capitalism, where we can imagine the profits from a company like Ford Motor being reinvested back into its production lines or the cars themselves. This is not how Big Tech companies operate. This point could not be more important. Companies who are playing the surveillance capitalist game are not interested in changing their products to better serve users. The actual products these companies sell are predictions — that's why Google is in the advertising business; they predict how you are feeling, thinking, and how you may do so in the future in order to give you a perfectly timed and tailored advertisement. You are not the customer for Big Tech companies.
You are the raw materials, you generate behavioral data that they analyze, and then they sell predictions to their actual customers: advertisers. All of this motivates the true incentives that Big Tech are following, which follow what Zuboff calls the extraction imperative. Their prediction products improve as they harvest more behavioral data from you. Therefore, there is a strong incentive to extract more data from you — i.e. they want to make you use their platforms more, and in different ways. There is also an incentive under the extraction imperative to simply collect as much data as possible, and this is facilitated by acquiring diverse companies. Facebook is not after Instagram or WhatsApp in order to improve the actual user interfaces or messaging capabilities, they are after these companies to acquire more of your behavioral data to feed into their machine learning prediction algorithms. Under these incentives, companies like Facebook have spent years biding their time and taking flak from privacy scandal after privacy scandal because their entire business relies on gathering more data from you. For example, in 2014, Facebook faced intense privacy backlash after acquiring WhatsApp, and vowed to keep the data from the two apps in separate silos. Almost seven years later, however, in today’s New York Times article on the Antitrust cases, it is taken as common sense that the apps are being integrated. The article states, “In September, 18 months after the initial announcement that the apps would work together, Facebook unveiled the integration of Instagram and its Messenger services. The company anticipates that it may take even longer to complete the technical work for stitching together WhatsApp with its other apps.” Zuboff points out that this is part of a pattern that Big Tech companies have used since the early 2000’s: they do something that shocks us and raises privacy concerns, apologize and say they made a mistake and will protect privacy, and then wait long enough until everyone forgets and simply do it anyway. She calls this the “dispossession cycle,” and it is crucial to understand for any regulator trying to understand how these companies operate. How Regulators Should Proceed In light of these ideas that drastically shift how Big Tech is understood, regulators need to commensurately shift their strategies. The narratives that Facebook and Google have become expert at blasting out in blog posts will trump regulators’ narratives unless they, and the public, truly understand what these companies are after. They plainly make people think their apps are communication, entertainment, or gaming tools. But this is only what they are on the surface: they are actually tools to make behavioral prediction products for advertisers. Instead of trying to argue that product-based competition has been harmed by Big Tech snapping up would-be competitors like Instagram or WhatsApp, a better argument must emphasize that prediction product competition is monopolized by acquiring more sources of data. I should strongly note that I do not endorse in the slightest the idea that a market of prediction products is even legitimate. Nor do I wish to imply it doesn’t infringe heavily on human rights. 
However, using the language of surveillance capitalism will help regulators take the first step in the argument against Big Tech, and will lead to even stronger critiques that these prediction products — based on enormous and rich streams of behavioral data — infringe on autonomy, as they arguably “know” you so well they can manipulate you. The anti-competitive argument easily follows from recognizing that the competition lies in competing data extraction and predictions, not competing user interfaces or product features. An understanding of surveillance capitalism, and the extraction and prediction imperatives, also counters the typical narratives woven by companies like Facebook and Google. In the same New York Times article, Facebook executives are quoted saying things like, “These transactions were intended to provide better products for the people who use them, and they unquestionably did,” said Jennifer Newstead, Facebook’s general counsel… … Mr. Zuckerberg said Facebook was fighting a far larger ecosystem of competitors that went beyond social networking, including “Google, Twitter, Snapchat, iMessage, TikTok, YouTube and more consumer apps, to many others in advertising.” That is because Facebook and its other apps are used for communication and entertainment, such as streaming video and gaming. These narratives make it seem like Big Tech companies are motivated by the old-school “behavioral reinvestment cycle” described above. They plainly make people think their apps are communication, entertainment, or gaming tools. But this is only what they are on the surface: they are actually tools to make behavioral prediction products for advertisers. The line that these companies “make better products for users” is utilized over and over again. It is a diversionary tactic, and should be recognized as such. Regulators need to be crystal clear in their counter-narratives and call out these diversions. Regulators’ moves are often the only exposure the broader public receives to these issues, so regulators must do better to expose Big Tech’s charades to the general population. Regulators must finally also understand that arguments for privacy are not just based on Big Tech companies knowing where you live, or who your friends are. The true invasion of privacy is that, through prediction, they know how you feel, where you may be going, even what you may think about soon. Our thoughts and feelings are no longer private, and those are what are being fed to advertisers to make you more likely to view or click their ads. In the same way that democracy is often tied to freedom of speech, we need to deeply understand the implications of systems of consolidated power having this knowledge, so that we can move to protect freedom of behavior or freedom of thought. These ideas deserve a longer treatment, but it should suffice to say that they must be the crux of truly motivating why Big Tech is so dangerous. Urgency is Needed, with Caution These ideas scratch the surface of how the understanding of Big Tech companies needs to radically shift in order to motivate regulatory action and rhetoric that cuts at the core of the actual problems. Without such an understanding, regulators seem doomed to face frustrations and lose the trust of the public through failed action and easy counter-arguments coming from Big Tech. Regulation as an ideology has decayed in the U.S. since the Neoliberal period under Reagan, and is now a partisan issue.
Failed regulatory action will only stymie momentum towards understanding that a capitalist system can only function if it is regulated. We thus need to speak with urgency towards spreading understanding of surveillance capitalism so that everyone understands what’s at stake if Big Tech is left unchecked. Though, we must also be cautious and note that the problem is so contingent on a fairly massive ideological shift that it would likely take something along the lines of a social movement to meet the challenge — something that would take time. In the meantime, those who understand how surveillance capitalism operates must raise their voices and share these ideas as widely as possible. The power and reach of Big Tech’s behavioral extraction and manipulation will only increase with time.
https://medium.com/swlh/big-tech-regulators-are-missing-the-point-240481da2eb8
['Nick Rabb']
2020-12-11 03:53:58.772000+00:00
['Technology', 'Regulation', 'Google', 'Surveillance Capitalism', 'Facebook']
Docker: my questions from the first day
Images and Docker Hub How do I see the actual Dockerfile on Docker Hub? Amazingly, this isn’t a simple thing. Docker Hub really just hosts the images, not the actual Dockerfile used to make them (assuming they were made from a Dockerfile). You can get lucky by heading to the page for the desired image on Docker Hub, where you will often find a link to a GitHub-hosted Dockerfile. You can also get some idea about the image if you head to Tags, click on the tag you want, and look at the image history. Where is the actual image on my machine? On your machine, run docker info and look for Docker Root Dir, like mine: Docker Root Dir: /var/lib/docker Liar! I went to that directory and it doesn’t exist! Probably you are on a Mac like me. In that case, a virtual machine image is located at: ~/Library/Containers/com.docker.docker/Data/vms/0 This VM is run behind the scenes with HyperKit to run your Docker images. You can enter that VM with: screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty Then try the directory again: ls /var/lib/docker/ should do it! Press Ctrl-A then K to kill the screen session and exit. Where on my machine can I see the actual Dockerfile that was used to create the image? You are similarly out of luck! There are some tricks floating around on how to search the logs for how the image was built to find the Dockerfile, but in general the Dockerfile is not shipped with the image you pull.
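If an approximation is enough, the metadata that does ship with an image will get you most of the way there. A quick sketch using standard Docker CLI commands (swap in whatever image you are curious about):
# Show the layer-by-layer build history; each row roughly corresponds to a Dockerfile instruction
docker history --no-trunc nginx:latest
# Inspect the image config for its entrypoint, cmd, environment variables, and exposed ports
docker image inspect nginx:latest
These won’t recover the original build context (COPY’d files or multi-stage intermediates), but they are usually enough to reconstruct a close approximation of the Dockerfile by hand.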
https://medium.com/practical-coding/docker-my-questions-from-the-first-day-bc6af8d2a826
['Oliver K. Ernst']
2020-08-29 23:24:11.529000+00:00
['Coding', 'Development', 'Programming', 'Docker', 'AI']
Using Machine Learning to Predict Value of Homes On Airbnb
Introduction Data products have always been an instrumental part of Airbnb’s service. However, we have long recognized that it’s costly to make data products. For example, personalized search ranking enables guests to more easily discover homes, and smart pricing allows hosts to set more competitive prices according to supply and demand. However, these projects each required a lot of dedicated data science and engineering time and effort. Recently, advances in Airbnb’s machine learning infrastructure have significantly lowered the cost of deploying new machine learning models to production. For example, our ML Infra team built a general feature repository that allows users to leverage high-quality, vetted, reusable features in their models. Data scientists have started to incorporate several AutoML tools into their workflows to speed up model selection and performance benchmarking. Additionally, ML Infra created a new framework that will automatically translate Jupyter notebooks into Airflow pipelines. In this post, I will describe how these tools worked together to expedite the modeling process and hence lower the overall development costs for a specific use case of LTV modeling — predicting the value of homes on Airbnb. What Is LTV? Customer Lifetime Value (LTV), a popular concept among e-commerce and marketplace companies, captures the projected value of a user for a fixed time horizon, often measured in dollar terms. At e-commerce companies like Spotify or Netflix, LTV is often used to make pricing decisions like setting subscription fees. At marketplace companies like Airbnb, knowing users’ LTVs enables us to allocate budget across different marketing channels more efficiently, calculate more precise bidding prices for online marketing based on keywords, and create better listing segments. While one can use past data to calculate the historical value of existing listings, we took it one step further to predict the LTV of new listings using machine learning. Machine Learning Workflow For LTV Modeling Data scientists are typically accustomed to machine-learning-related tasks such as feature engineering, prototyping, and model selection. However, taking a model prototype to production often requires an orthogonal set of data engineering skills that data scientists might not be familiar with. Luckily, at Airbnb we have machine learning tools that abstract away the engineering work behind productionizing ML models. In fact, we could not have put our model into production without these amazing tools. The remainder of this post is organized into four topics, along with the tools we used to tackle each task:
Feature Engineering: Define relevant features
Prototyping and Training: Train a model prototype
Model Selection & Validation: Perform model selection and tuning
Productionization: Take the selected model prototype to production
Feature Engineering Tool used: Airbnb’s internal feature repository — Zipline One of the first steps of any supervised machine learning project is to define relevant features that are correlated with the chosen outcome variable, a process called feature engineering. For example, in predicting LTV, one might compute the percentage of the next 180 calendar dates that a listing is available, or a listing’s price relative to comparable listings in the same market. At Airbnb, feature engineering often means writing Hive queries to create features from scratch.
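To give a flavor of what that can look like, here is a minimal, hypothetical sketch; the table and column names below are invented for illustration and are not Airbnb’s actual schema. The availability feature mentioned above might be computed with a Hive query such as:
-- Share of the next 180 days that each listing is available (illustrative only)
SELECT
  listing_id,
  AVG(CASE WHEN is_available THEN 1.0 ELSE 0.0 END) AS pct_available_next_180d
FROM listing_calendar
WHERE ds BETWEEN CURRENT_DATE AND DATE_ADD(CURRENT_DATE, 180)
GROUP BY listing_id;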
However, this work is tedious and time consuming, as it requires specific domain knowledge and business logic, which means the feature pipelines are often not easily sharable or even reusable. To make this work more scalable, we developed Zipline — a training feature repository that provides features at different levels of granularity, such as at the host, guest, listing, or market level. The crowdsourced nature of this internal tool allows data scientists to use a wide variety of high-quality, vetted features that others have prepared for past projects. If a desired feature is not available, a user can create her own feature with a feature configuration file like the following: When multiple features are required for the construction of a training set, Zipline will automatically perform intelligent key joins and backfill the training dataset behind the scenes. For the listing LTV model, we used existing Zipline features and also added a handful of our own. In sum, there were over 150 features in our model, including:
Location: country, market, neighborhood and various geography features
Price: nightly rate, cleaning fees, price point relative to similar listings
Availability: total nights available, % of nights manually blocked
Bookability: number of bookings or nights booked in the past X days
Quality: review scores, number of reviews, and amenities
An example training dataset With our features and outcome variable defined, we can now train a model to learn from our historical data. Prototyping and Training Tool used: Machine learning library in Python — scikit-learn As in the example training dataset above, we often need to perform additional data processing before we can fit a model:
Data Imputation: We need to check if any data is missing, and whether that data is missing at random. If not, we need to investigate why and understand the root cause. If yes, we should impute the missing values.
Encoding Categorical Variables: Often we cannot use the raw categories in the model, since the model doesn’t know how to fit on strings. When the number of categories is low, we may consider using one-hot encoding. However, when the cardinality is high, we might consider using ordinal encoding, encoding by the frequency count of each category.
In this step, we don’t quite know what the best set of features to use is, so writing code that allows us to rapidly iterate is essential. The pipeline construct, commonly available in open-source tools like Scikit-Learn and Spark, is a very convenient tool for prototyping. Pipelines allow data scientists to specify high-level blueprints that describe how features should be transformed, and which models to train. To make it more concrete, below is a code snippet from our LTV model pipeline: At a high level, we use pipelines to specify data transformations for different types of features, depending on whether those features are of type binary, categorical, or numeric. FeatureUnion at the end simply combines the features column-wise to create the final training dataset.
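The original snippet embedded in the post isn’t reproduced here, so as a rough stand-in, here is a minimal scikit-learn sketch of that structure; the column names, transformers, and settings are illustrative assumptions, not Airbnb’s actual code:
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.impute import SimpleImputer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative column groups: not Airbnb's real feature names
NUMERIC_COLS = ["nightly_rate", "nights_booked_past_90d", "pct_available_next_180d"]
CATEGORICAL_COLS = ["market", "room_type"]
BINARY_COLS = ["is_superhost"]

class ColumnSelector(BaseEstimator, TransformerMixin):
    """Pick out a subset of DataFrame columns so each branch only sees its own inputs."""
    def __init__(self, columns):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X[self.columns]

# One transformation branch per feature type; FeatureUnion stitches the
# transformed outputs back together column-wise into one training matrix.
features = FeatureUnion([
    ("numeric", Pipeline([
        ("select", ColumnSelector(NUMERIC_COLS)),
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ])),
    ("categorical", Pipeline([
        ("select", ColumnSelector(CATEGORICAL_COLS)),
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ])),
    ("binary", Pipeline([
        ("select", ColumnSelector(BINARY_COLS)),
        ("impute", SimpleImputer(strategy="most_frequent")),
    ])),
])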
The advantage of writing prototypes with pipelines is that it abstracts away tedious data transformations using data transforms. Collectively, these transforms ensure that data will be transformed consistently across training and scoring, which solves a common problem of data transformation inconsistency when translating a prototype into production. Furthermore, pipelines also separates data transformations from model fitting. While not shown in the code above, data scientists can add a final step to specify an estimator for model fitting. By exploring different estimators, data scientists can perform model selection to pick the best model to improve the model’s out of sample error. Performing Model Selection Tool used: Various AutoML frameworks As mentioned in the previous section, we need to decide which candidate model is the best to put into production. To make such a decision, we need to weigh the tradeoffs between model interpretability and model complexity. For example, a sparse linear model might be very interpretable but not complex enough to generalize well. A tree based model might be flexible enough to capture non-linear patterns but not very interpretable. This is known as the Bias-Variance tradeoff. Figure referenced from Introduction to Statistical Learning with R by James, Witten, Hastie, and Tibshirani In applications such as insurance or credit screening, a model needs to be interpretable because it’s important for the model to avoid inadvertently discriminating against certain customers. In applications such as image classification, however, it is much more important to have a performant classifier than an interpretable model. Given that model selection can be quite time consuming, we experimented with using various AutoML tools to speed up the process. By exploring a wide variety of models, we found which types of models tended to perform best. For example, we learned that eXtreme gradient boosted trees (XGBoost) significantly outperformed benchmark models such as mean response models, ridge regression models, and single decision trees. Comparing RMSE allows us to perform model selection Given that our primary goal was to predict listing values, we felt comfortable productionizing our final model using XGBoost, which favors flexibility over interpretability. Taking Model Prototypes to Production Tool used: Airbnb’s notebook translation framework — ML Automator As we alluded to earlier, building a production pipeline is quite different from building a prototype on a local laptop. For example, how can we perform periodic re-training? How do we score a large number of examples efficiently? How do we build a pipeline to monitor model performance over time? At Airbnb, we built a framework called ML Automator that automagically translates a Jupyter notebook into an Airflow machine learning pipeline. This framework is designed specifically for data scientists who are already familiar with writing prototypes in Python, and want to take their model to production with limited experience in data engineering. A simplified overview of the ML Automator Framework (photo credit: Aaron Keys) First, the framework requires a user to specify a model config in the notebook. The purpose of this model config is to tell the framework where to locate the training table, how many compute resources to allocate for training, and how scores will be computed. Additionally, data scientists are required to write specific fit and transform functions. 
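ML Automator is an internal Airbnb framework, so its real interfaces aren’t public; purely to make the idea concrete, a fit/transform pair in this spirit might look roughly like the following (hypothetical names and parameters, reusing the illustrative feature pipeline sketched earlier):
from sklearn.pipeline import Pipeline
from xgboost import XGBRegressor

def fit(X_train, y_train):
    """Train the feature transformations plus an XGBoost regressor as one pipeline."""
    model = Pipeline([
        ("features", features),  # the FeatureUnion from the earlier sketch
        ("xgb", XGBRegressor(n_estimators=200, max_depth=6, learning_rate=0.1)),
    ])
    model.fit(X_train, y_train)
    return model

def transform(model, X):
    """Score new listings; in production this is the piece wrapped as a Python UDF for batch scoring."""
    return model.predict(X)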
The fit function specifies how training will be done exactly, and the transform function will be wrapped as a Python UDF for distributed scoring (if needed). Here is a code snippet demonstrating how the fit and transform functions are defined in our LTV model. The fit function tells the framework that a XGBoost model will be trained, and that data transformations will be carried out according to the pipeline we defined previously. Once the notebook is merged, ML Automator will wrap the trained model inside a Python UDF and create an Airflow pipeline like the one below. Data engineering tasks such as data serialization, scheduling of periodic re-training, and distributed scoring are all encapsulated as a part of this daily batch job. As a result, this framework significantly lowers the cost of model development for data scientists, as if there was a dedicated data engineer working alongside the data scientists to take the model into production! A graph view of our LTV Airflow DAG, running in production Note: Beyond productionization, there are other topics, such as tracking model performance over time or leveraging elastic compute environment for modeling, which we will not cover in this post. Rest assured, these are all active areas under development. Lessons Learned & Looking Ahead In the past few months, data scientists have partnered very closely with ML Infra, and many great patterns and ideas arose out of this collaboration. In fact, we believe that these tools will unlock a new paradigm for how to develop machine learning models at Airbnb. First, the cost of model development is significantly lower : by combining disparate strengths from individual tools: Zipline for feature engineering, Pipeline for model prototyping, AutoML for model selection and benchmarking, and finally ML Automator for productionization, we have shortened the development cycle tremendously. : by combining disparate strengths from individual tools: Zipline for feature engineering, Pipeline for model prototyping, AutoML for model selection and benchmarking, and finally ML Automator for productionization, we have shortened the development cycle tremendously. Second, the notebook driven design reduces barrier to entry : data scientists who are not familiar with the framework have immediate access to a plethora of real life examples. Notebooks used in production are guaranteed to be correct, self-documenting, and up-to-date. This design drives strong adoption from new users. : data scientists who are not familiar with the framework have immediate access to a plethora of real life examples. Notebooks used in production are guaranteed to be correct, self-documenting, and up-to-date. This design drives strong adoption from new users. As a result, teams are more willing to invest in ML product ideas: At the time of this post’s writing, we have several other teams exploring ML product ideas by following a similar approach: prioritizing the listing inspection queue, predicting the likelihood that listings will add cohosts, and automating flagging of low quality listings. We are very excited about the future of this framework and the new paradigm it brought along. By bridging the gap between prototyping and productionization, we can truly enable data scientists and engineers to pursue end-to-end machine learning projects and make our product better.
https://medium.com/airbnb-engineering/using-machine-learning-to-predict-value-of-homes-on-airbnb-9272d3d4739d
['Robert Chang']
2017-07-17 16:07:24.185000+00:00
['Machine Learning', 'Data Science', 'AI', 'Technology', 'Artificial Intelligence']
Tech Employees Disagree With Their Companies on BLM
Tech Employees Disagree With Their Companies on BLM Nearly 1/3 of Facebook staff members aren’t satisfied with the company’s response Photo by AJ Colores on Unsplash According to a new survey of tech professionals from data company Blind, a significant number of tech professionals at major companies disagree with their company’s response to the Black Lives Matter movement. More troubling, a large number also feel that they can’t discuss their perspectives openly at work. The survey revealed that 30% of Facebook employees disagree or strongly disagree with their company’s stance on Black Lives Matter and the death of George Floyd, as do 20% at Microsoft. More than half (56%) of Facebook staff members don’t feel comfortable raising their opinions on the situation to colleagues, and the same goes for 49% of people working at Google. This is a surprising result, especially for Google. The company usually prides itself on encouraging lively discussion and debate among its staff, using a network of Google-only private chat rooms and affinity groups. These groups often shape the company’s policies. When Google considered inking a deal with the Department of Defense to use its AI capabilities to analyze drone footage, staff members quickly organized using these internal groups and shut the efforts down. In an infamous case, a Googler also published an allegedly sexist memo on the company’s internal websites, which led to a backlash from other staff members who felt no qualms about speaking up. So it’s uncharacteristic for Googlers to feel they have to be reserved about a political and social movement, especially one that seems to fit relatively directly into Google’s “Don’t be Evil” ethos. It’s also unclear why Googlers felt uncomfortable. The company may feel that politically, it can’t comment as directly on the movement as it does on other issues. As a search engine that controls much of the world’s information, Google may feel that it has to remain neutral, even on important movements like #BLM. That’s a liability for a company with socially engaged staff members, and some may be feeling the impact of restrictions on their ability to take a strong stance. Encouragingly, a majority (62%) of African-American staff members agree with their company’s BLM stance and response. But at the same time, only 10% of Black and 20% of Latino staff members felt that their ethnicity was represented in the upper management of their tech company, versus 76% of white respondents. And nearly half of Google and Facebook employees of any ethnicity say that their personal values are represented by upper management. Diversity in tech is a challenging and important issue. Tech companies are clearly still navigating the best ways to respond to movements like Black Lives Matter, and how to find their own role in advancing these causes. Blind’s survey shows that they’re making progress, but have more work to do on this front. But even more importantly than their response to the movement, tech companies need to integrate diversity more directly into the core of the operations. Responding to a movement is one thing — ensuring representation at the upper echelons of a company is another. Tech should continue to evaluate its response to BLM, but should also consider diversity more broadly and continue working towards more inclusivity and representation of people of all ethnicities on boards and in leadership positions.
https://tomsmith585.medium.com/tech-employees-disagree-with-their-companies-on-blm-2207b9b4c4b3
['Thomas Smith']
2020-06-10 14:23:53.221000+00:00
['Google', 'Tech', 'Facebook', 'Diversity', 'Black Lives Matter']
New research shows why anyone with high blood pressure — nearly half of U.S.
New research shows why anyone with high blood pressure — nearly half of U.S. adults — should seek to lower it. High blood pressure, or hypertension, can accelerate the decline in brain function, including memory, concentration and verbal skills, scientists report today in the journal Hypertension. The cognitive decline occurs whether the hypertension starts early in life or much later. “Effectively treating high blood pressure at any age in adulthood could reduce or prevent this acceleration,” says study author Sandhi Barreto, MD, professor of medicine at the Universidade Federal de Minas Gerais in Brazil. “Collectively, the findings suggest hypertension needs to be prevented, diagnosed and effectively treated in adults of any age to preserve cognitive function.” The findings add to evidence in a feature article I wrote last year defining hypertension and revealing the rising problem:
https://robertroybritt.medium.com/new-research-shows-why-anyone-with-high-blood-pressure-nearly-half-of-u-s-7160828d3470
['Robert Roy Britt']
2020-12-14 23:58:30.427000+00:00
['Blood Pressure', 'Hypertension', 'Health', 'Brain', 'High Blood Pressure']
Correct Code
Correct Code Stephanie Weirich Designs Tools for a Safer World Stephanie Weirich By Jacob Williamson-Rea New cars are packed with helpful technology. Downward-facing cameras help drivers stay within lanes, and adaptive cruise control can brake and accelerate a vehicle based on other drivers’ speeds. Likewise, banks use encryption software that changes your banking information into code that only your bank can use and read, and bank software even analyzes financial markets to make investments. These features are based on software systems that rely on over 100 million lines of code, with separate programs for each component of each system. But as technology evolves, the software behind these systems needs to keep up. Stephanie Weirich, ENIAC President’s Distinguished Professor in Computer and Information Science, aims to make software systems more reliable, maintainable and secure. Her research improves tools that help programmers to determine the correctness of their code, which is applicable to a broad scope of software. Specifically, Weirich researches and improves Haskell, a programming language that places a lot of emphasis on correctness, thanks to its basis in logic and mathematical theories. “People might not realize how much computational power underlies our society,” Weirich says. “Cars, for example, possess very strong correctness requirements as they have become so reliant on computation. If banks mess up their code, it can cause disaster for our financial systems. The security and correctness of these programs is very important.” If a hacker goes after the software behind a less-correct (and thus less-protected) component of a car, such as the brakes, the results could be dangerous and devastating. Not only could the hacker gain control, but because each individual system is interconnected as a whole, the other programs for different components could be prone to errors as well. Similarly, if a driver is using lane-keeping assist and adaptive cruise control on the highway, a bug in a less-correct braking system might tell the adaptive cruise control that the car is braking when in fact it isn’t, which could be deadly. “Automation is everywhere. Cars today are just computers that have a steering wheel instead of a keyboard,” says Antal Spector-Zabusky, one of Weirich’s doctoral students. “It’s very important that these computers are as reliable as possible so that everything functions correctly, along with every other interlocking system, to prevent software from crashing and to ward off hackers.” TRAVELING ACROSS LANGUAGES Programs for embedded systems, like those found in cars, are typically written in a programming language called “C.” Programmers make sure that their software will use data correctly by combining relevant variables into classifications known as “data types.” Types are what allow a programmer to assign rules to all of the different components of a computer program. Weirich focuses on Haskell because she uses it to improve the type system of the language itself, which leads to even more extensive correctness for programmers. She’s making the types more expressive, and as a result, programmers can make better use of the type system to help them develop correct code. For example, you could represent a date, such as February 29, 2019, using three integers: 2, 29 and 2019. However, the non-expressive “integer” type does not capture the relationship between these numbers. 
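As a loose illustration (my own sketch, not code from Weirich’s research), even a simple Haskell smart constructor can capture that relationship by refusing to build an impossible date:
-- A date type whose only way in is a constructor that checks the calendar.
-- mkDate 2019 2 29 evaluates to Nothing; mkDate 2020 2 29 to Just (Date 2020 2 29).
data Date = Date { year :: Int, month :: Int, day :: Int }
  deriving (Show)

mkDate :: Int -> Int -> Int -> Maybe Date
mkDate y m d
  | m < 1 || m > 12         = Nothing
  | d < 1 || d > daysIn y m = Nothing
  | otherwise               = Just (Date y m d)
  where
    daysIn yr 2
      | isLeap yr = 29
      | otherwise = 28
    daysIn _ mo
      | mo `elem` [4, 6, 9, 11] = 30
      | otherwise               = 31
    isLeap yr = (yr `mod` 4 == 0 && yr `mod` 100 /= 0) || yr `mod` 400 == 0
Here the check still happens at run time; the research direction described in this article is to make types expressive enough that such constraints can be checked when the program is compiled.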
A more expressive type used for storing dates would flag this value as invalid by encoding the fact that February only has 28 days in non-leap years. While these tools and systems are not directly usable across different languages, the ideas are. For example, Weirich says Mozilla’s Rust language, a new programming language similar to C, draws from research on type systems, such as the type system research in the Haskell community. Wherever they’re implemented, the more expressive the type system, the more it can check complex, intricate relationships between components of a program. By contrast, a less expressive type system might not be able to detect when such relationships are violated when the program is compiled, resulting in errors and incorrect behavior at runtime. Stronger types and better system verification software allow programmers to ensure they’re writing code correctly. Weirich has also worked with Spector-Zabusky to improve Haskell’s compiler, which is what turns the Haskell language into a language used by computers. “Instead of getting rid of bugs afterward, you get rid of them in the first place,” Weirich says. “The idea is that since you’re ruling out bugs at the beginning by how you are defining your types, you might be shortening development time. Also, because you don’t have to implement a wrong program and then redo that program, you’re shortening the maintenance time, because the compiler can help you figure out what part of the code can be changed.” DEEPSPEC Many professors and students in the Department of Computer and Information Science collaborate in a group called Programming Languages @ Penn, or PLClub. This includes Weirich and Spector-Zabusky, who have been working on a project called DeepSpec, a National Science Foundation flagship program, more formally known as “Expeditions in Computing: The Science of Deep Specification.” The DeepSpec project is a collaboration between Penn, MIT, Princeton and Yale. “DeepSpec is examining this question: What does it really take to specify software correctly?”, says Spector-Zabusky. “We want to specify software that is used in the real world.” To specify software is a fundamental component of ensuring that a program is as correct as possible. Specifications range in intensity, all the way from simple specifications, such as ensuring that an application won’t crash when it is used, to deep specifications, which could include ensuring that a complex numerical simulation computes correctly. Weirich’s research directly informs the DeepSpec project, particularly her work with SpectorZabusky to verify the Haskell compiler. The group aims to develop computer system verification for an entire computer system, which includes the operating system, hardware code and every other component. This takes correctness properties a step further, or deeper, than types can, resulting in a higher degree of confidence in these systems. Professor Stephanie Weirich leads a spirited discussion in CIS 552: Advanced Programming as students use pair programming to work through an exercise based on the topic of the week. COMPLEXITY IN CS Computer science’s complexity is what originally attracted Weirich to the field. “Everything changes rapidly, and there’s always new stuff,” she says. “Computer science is very broad, so it would be impossible to keep up with every aspect of every field. 
It makes more sense to gain expertise in specific areas.” Weirich has been accumulating expertise in statically typed programming languages like Haskell for over twenty years. She continues to do so, and students from all corners of the University, from freshmen to doctoral candidates, benefit tremendously. This semester, Weirich is teaching CIS 552: Advanced Programming to graduate students and select undergraduates. “In Advanced Programming, I demonstrate ideas that are most expressible in the Haskell language,” Weirich says. “I take ideas from my research and get to teach them to people who want to become software developers. This gives them not only a new way to develop code, but also a new perspective on programming.” In CIS 120: Programming Languages and Techniques, which Weirich will teach in spring 2020, she introduces freshmen to computer science through program design. She says she enjoys teaching this course, partly because she sees students progress from battling the difficult content to understanding it. “Overall, undergraduates recognize that so many different fields now rely on computation,” she says. “There’s a big distribution in skill level and understanding, so throughout the semester, it’s rewarding to see that switch to understanding at different points for different students.”
https://medium.com/penn-engineering/correct-code-f41cce278ae3
['Penn Engineering']
2020-06-04 19:19:44.622000+00:00
['Coding', 'Computer Science', 'Programming Languages', 'Engineering', 'Science']
Deploying Static Websites To AWS S3 + CloudFront + Route53 Using The TypeScript AWS CDK
Deploying Static Websites To AWS S3 + CloudFront + Route53 Using The TypeScript AWS CDK In today’s post, we’re going to walk through a step-by-step deployment of a static website to an S3 bucket that has CloudFront set up as the global CDN. The post is written using the AWS TypeScript CDK. This example is used as a deployment for a static export of a NextJS 10 website. Find the blog post on how to do that here. That being said, this post is aimed at pushing any HTML to S3 to serve a static website. I simply use the NextJS content to demo the final product and the changes in steps required to get it done. Getting Started We need to set up a new npm project and install the prerequisites. We’ll also create a stacks directory to house our S3 stack and update it to take some custom props. Updating cdk.json Add the following to the cdk.json file: Setting up context Add the following context to the cdk.json file: A guide to getting your account ID can be found on the AWS website, but if you are familiar with the AWS CLI then you can use the following. Ensure that you set the account to be a string with the number returned. For more information on context, see the AWS docs. Updating the TypeScript Configuration File In tsconfig.json, add the following: This is a basic TypeScript configuration for the CDK to compile the TypeScript code to JavaScript. Handling the static site stack Open up stacks/s3-static-site-with-cloudfront/index.ts and add the following: The above was adjusted from the AWS CDK Example to convert things to run as a stack as opposed to a construct. To explain what is happening here: We have an interface StaticSiteProps which allows us to pass an object with the arguments domainName and siteSubDomain, which will allow us to demo an example. If I were to push domainName as dennisokeeffe.com and siteSubDomain as s3-cdk-deployment-example, then you would expect the website to be available at s3-cdk-deployment-example.dennisokeeffe.com. This is assigned as the variable siteDomain within the class. An ACM certificate is created and its ARN assigned to certificateArn to enable us to use HTTPS. A new CloudFront distribution is created and assigned to distribution. The certificateArn is used to configure the ACM certificate reference, and the siteDomain is used here as the name. A new alias record is created for our siteDomain value and has the target set to be the new CloudFront distribution. Finally, we deploy assets from a source ./site-contents, which expects you to have your code source in that folder relative to the stacks folder. In our case, this will not be what we want and that value will be changed. The deployment also invalidates the objects on the CDN. This may or may not be what you want depending on how your cache-busting mechanisms work. If you have hashed assets and no-cache or max-age=0 for your index.html file (which you should), then you can switch this off. Invalidation costs money. In my case, I am going to adjust the code above to import path and change the s3deploy.Source.asset('./site-contents') value to become s3deploy.Source.asset(path.resolve(__dirname, '../../../next-10-static-export/out')) (which points to my output directory with the static HTML build assets). This relates to my corresponding blog post on exporting NextJS 10 static websites directly. Note that you will need to add import path = require('path') to the top and install @types/node. Using the StaticSite Stack Back at the root directory in index.ts, let’s import the stack and put it to use.
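The original snippet isn’t shown above, so here is a minimal sketch of what that root index.ts can look like (the import path and prop values are assumptions based on the article’s description, not the author’s exact code):
import * as cdk from '@aws-cdk/core';
import { StaticSite } from './stacks/s3-static-site-with-cloudfront';

// Create the CDK app and instantiate the static site stack with our custom props
const app = new cdk.App();

new StaticSite(app, 'MyStaticSite', {
  domainName: 'dennisokeeffe.com',
  siteSubDomain: 's3-cdk-deployment-example',
});

app.synth();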
In the above, we simply import the stack, create a new app with the cdk API, then pass that app to the new instance of the StaticSite . If you recall, the constructor for the StaticSite reads constructor(parent: Construct, name: string, props: StaticSiteProps) and so it expects three arguments. The CDK app. The “name” or identifier for the stack. Props that adhere to our StaticSiteProps , so in our case an object that passes the domainName and siteSubDomain . Updating package.json Before deployment, let’s adjust package.json for some scripts to help with the deployment. Now we are ready to roll. Deploying our site Note: you must have your static folder from another project ready for this to work. Please refer to my post on a static site export of NextJS 10 if you would like to follow what I am doing here. To deploy our site, we need to transpile the TypeScript to JavaScript, then run the CDK synth and deploy commands. Note: you’ll need to make sure that your AWS credentials are configured for this to work. I personally use aws-vault. You’ll need to accept the new resources template generated before the deployment will commence. In my particular case, I used the NextJS static site example given from my post on Exporting Static NextJS 10 Websites You can see the final, live deploy at https://nextjs-10-static-example.dennisokeeffe.com. Resources Image credit: Ignat Kushanrev Originally posted on my blog.
https://medium.com/javascript-in-plain-english/deploying-static-websites-to-aws-s3-cloudfront-route53-using-the-typescript-aws-cdk-8ae66774d1b
["Dennis O'Keeffe"]
2020-11-04 07:05:20.616000+00:00
['S3', 'Nextjs', 'JavaScript', 'Typescript', 'AWS']
Building a data lake on AWS using Redshift Spectrum
Building a data lake on AWS using Redshift Spectrum In one of our earlier posts, we talked about setting up a data lake using AWS LakeFormation. Once the data lake is set up, we can use Amazon Athena to query data. Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage. With Athena, there is no need for complex ETL jobs to prepare data for analysis. Today, we will explore querying the data from a data lake in S3 using Redshift Spectrum. This use case makes sense for those organizations that already have significant exposure to using Redshift as their primary data warehouse. Amazon Redshift Spectrum is used to efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Amazon Redshift Spectrum resides on dedicated Amazon Redshift servers that are independent of your cluster. Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer. How is Amazon Athena different from Amazon Redshift Spectrum? Redshift Spectrum needs an Amazon Redshift cluster and an SQL client that’s connected to the cluster so that we can execute SQL commands, but Athena is serverless. In Redshift Spectrum the external tables are read-only; it does not support insert queries. Athena supports insert queries, which insert records into S3. Amazon Redshift cluster To use Redshift Spectrum, you need an Amazon Redshift cluster and a SQL client that’s connected to your cluster so that you can execute SQL commands. The cluster and the data files in Amazon S3 must be in the same AWS Region. The Redshift cluster needs authorization to access the external data catalog in AWS Glue or Amazon Athena and the data files in Amazon S3. Let’s kick off the steps required to get the Redshift cluster going. Create an IAM Role for Amazon Redshift
Open the IAM console and choose Roles. Then choose Create role.
Choose AWS service, and then select Redshift.
Under Select your use case, select Redshift — Customizable and then choose Next: Permissions. The Attach permissions policy page appears.
Attach the following policies: AmazonS3FullAccess, AWSGlueConsoleFullAccess and AmazonAthenaFullAccess.
For Role name, enter a name for your role, in this case redshift-spectrum-role.
Choose Create role.
Create a Sample Amazon Redshift Cluster
Open the Amazon Redshift console.
Choose the AWS Region. The cluster and the data files in Amazon S3 must be in the same AWS Region.
Select CLUSTERS and choose Create cluster.
Cluster Configuration: Based on the size and type of data (compressed/uncompressed), select the nodes. Amazon Redshift provides an option to calculate the best configuration of a cluster based on your requirements: choose Calculate the best configuration for your needs. In this case, use dc2.large with 2 nodes.
Specify Cluster details.
Cluster identifier: the name of the cluster.
Database port: port number 5439, which is the default.
Master user name: the master user of the DB instance.
Master user password: specify the password.
In the Cluster permissions section, select Available IAM roles and choose the IAM role that was created earlier, redshift-spectrum-role. Then choose Add IAM role. Select Create cluster and wait till the status is Available. Connect to Database Open the Amazon Redshift console and choose EDITOR. The database name is dev. Create an External Schema and an External Table External tables must be created in an external schema. To create an external schema, run the following command. Please replace the iam_role with the role that was created earlier.
create external schema spectrum
from data catalog
database 'spectrumdb'
iam_role 'arn:aws:iam::xxxxxxxxxxxx:role/redshift-spectrum-role'
create external database if not exists;
Copy data using the following command. The data used here is provided by AWS. Configure the aws cli on your machine and run this command:
aws s3 cp s3://awssampledbuswest2/tickit/spectrum/sales/ s3://bucket-name/data/source/ --recursive
To create an external table, please run the following command. The table is created in the spectrum schema.
create external table spectrum.table_name(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
dateid smallint,
qtysold smallint,
saletime timestamp)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://bucket-name/copied-prefix/';
Now the table is available in Redshift Spectrum. We can analyze the data using SQL queries like so:
SELECT * FROM spectrum.rs_table LIMIT 10;
Create a Table in Athena using Glue Crawler In case you are just starting out on the AWS Glue crawler, I have explained how to create one from scratch in one of my earlier articles. In this case, I created the rs_table in the spectrumdb database. Comparison between Amazon Redshift Spectrum and Amazon Athena I ran some basic queries in Athena and Redshift Spectrum as well. The query elapsed time comparison is as follows: it takes about 3 seconds on Athena compared to about 16 seconds on Redshift Spectrum. The idea behind this post was to get you up and running with a basic data lake on S3 that is queryable from Redshift Spectrum. I hope it was useful. This story is authored by PV Subbareddy. He is a Big Data Engineer specializing in AWS Big Data Services and the Apache Spark Ecosystem.
https://medium.com/zenofai/building-a-data-lake-on-aws-using-redshift-spectrum-6e306089aa04
['Engineering Zenofai']
2020-03-17 11:15:39.768000+00:00
['Software Development', 'Redshift', 'AWS', 'Cloud Computing', 'Athena']
Complete Introduction to PySpark-Part 2
Complete Introduction to PySpark-Part 2 Exploratory Data Analysis using PySpark Photo by Markus Spiske on Unsplash Exploratory Data Analysis Exploratory Data Analysis is the most crucial part to begin with whenever we are working with a dataset. It allows us to analyze the data and explore our initial findings, like how many rows and columns there are, what the different columns are, and so on. EDA is an approach where we summarize the main characteristics of the data using different methods, mainly visualization. Let’s start EDA using PySpark. Before this, if you have not yet installed PySpark, kindly visit the link below and get it configured on your local machine. Importing Required Libraries and Dataset Once we have configured PySpark on our machine, we can use a Jupyter Notebook to start exploring it. In this article, we will perform EDA operations using PySpark; for this we will be using the Boston dataset, which can be downloaded from Kaggle. Let’s start by importing the required libraries and loading the dataset.
#Importing required libraries
import findspark
findspark.init()
import pyspark  # only run after findspark.init()
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
#Creating a PySpark session
spark = SparkSession.builder.getOrCreate()
#Importing the dataset
df = spark.read.csv('Boston.csv', inferSchema=True, header=True)
df.show(5)
Boston Dataset (Source: By Author) Starting the EDA There are different functions defined under PySpark which we can use for Exploratory Data Analysis; let us explore some of these functions and see how useful they are. 1. Schema Schema is similar to the info() function of a pandas DataFrame. It shows us the information about all the columns in the dataset.
df.printSchema()
Schema (Source: By Author) 2. Describe The describe function is used to display the statistical properties of all the columns in the dataset. It shows us values like the mean, standard deviation, etc. for all the columns. In PySpark we need to call the show() function every time we need to display the information; it works just like the head() function in pandas.
df.describe().show()
Statistical Properties (Source: By Author) Similarly, we can use the describe function column-wise as well.
df.describe('AGE').show()
Describe Column Wise (Source: By Author) 3. Filter The filter function is used to filter the data using different user-defined conditions. Let us see how we can use it.
#Filtering data with INDUS=7.07
df.filter(df.INDUS==7.07).show()
Filter1 (Source: By Author) Similarly, we can use multiple filters in a single line of code.
df.filter((df.INDUS==7.07) & (df.MEDV=='High')).show()
Filter2 (Source: By Author) 4. GroupBy and Sorting PySpark’s inbuilt functions can be used to group the data according to user requirements and also sort the data as required.
df.groupBy('MEDV').count().show()
GroupBy (Source: By Author)
df.sort((df.TAX).desc()).show(5)
Sorting (Source: By Author) 5. Select & Distinct The select function is used to select different columns, while the distinct function can be used to select the distinct values of a column.
df.select('MEDV').distinct().count()
Select and Distinct (Source: By Author) 6. WithColumn The withColumn function is used to create a new column by providing an expression for its values and defining the name of the new column.
#Creating a new column with values from the AGE column divided by 2
df.withColumn('HALF_AGE', df.AGE/2.0).select('AGE','HALF_AGE').show(5)
WithColumn (Source: By Author) In this article, we covered some major functions defined under PySpark which we can use for Exploratory Data Analysis and understanding the data we are working on. Go ahead, try these functions with different datasets, and if you face any problem let me know in the response section. Before You Go Thanks for reading! If you want to get in touch with me, feel free to reach me at [email protected] or via my LinkedIn profile. You can view my GitHub profile for different data science projects and package tutorials. Also, feel free to explore my profile and read the different articles I have written related to Data Science.
https://towardsdatascience.com/complete-introduction-to-pyspark-part-2-135d2f2c13e2
['Himanshu Sharma']
2020-11-13 14:01:44.049000+00:00
['Exploratory Data Analysis', 'Data Analysis', 'Pyspark', 'Python', 'Data Science']
Getting Your Data Ready for AI
Editor’s Note: Preparing data is a crucial and unavoidable part of any data scientist’s job. In this post writer Kate Shoup takes a closer look at the data bottleneck that affects so many projects, and how to address it. Most people enter the field of data science because “they love the challenge of developing algorithms and building machine learning models that turn previously unusable data into valuable insight,” writes IBM’s Sonali Surange in a 2018 blog post. But these days, Surange notes, “most data scientists are spending up to 80 percent of their time sourcing and preparing data, leaving them very little time to focus on the more complex, interesting and valuable parts of their job.” (There’s that 80% figure again!) This bottleneck in the data-wrangling phase exists for various reasons. One is the sheer volume of data that companies collect — complicated by limited means by which to locate that data later. As organizations “focus on data capture, storage, and processing,” write Limburn and Taylor, they “have too often overlooked concerns such as data findability, classification and governance.” In this scenario, “data goes in, but there’s no safe, reliable or easy way to find out what you’re looking for and get it out again.” Unfortunately, observes Jarmul, the burden of sifting through this so-called data lake often falls on the data science team. Another reason for the data-wrangling bottleneck is the persistence of data silos. Data silos, writes AI expert Edd Wilder-James in a 2016 article for Harvard Business Review, are “isolated islands of data” that make it “prohibitively costly to extract data and put it to other uses.” Some data silos are the result of software incompatibilities — for example, when data for one department is stored on one system, and data for another department is stored on a different and incompatible system. Reconciling and integrating this data can be costly. Other data silos exist for political reasons. “Knowledge is power,” Wilder-James explains, “and groups within an organization become suspicious of others wanting to use their data.” This sense of proprietorship can undermine the interests of the organization as a whole. Finally, silos might develop because of concerns about data governance. For example, suppose that you have a dataset that might be of value to others in your organization but is sensitive in nature. Unless you know exactly who will use that data and for what, you’re more likely to cordon it off than to open it up to potential misuse. In addition to prolonging the data-wrangling phase, the existence of data lakes and data silos can severely hamper your ability to locate the best possible data for an AI project. This will likely affect the quality of your model and, by extension, the quality of the broader organizational effort that your project is meant to support. For example, suppose that your company’s broader organizational effort is to improve customer engagement, and as part of that effort it has enlisted you to design a chatbot. “If you’ve built a model to power a chatbot and it’s working against data that’s not as good as the data your competitor is able to use in their chatbot,” says Limburn, “then their chatbot — and their customer engagement — is going to be better.” Solutions One way to ease the data-wrangling bottleneck is to try to address it up front. Katharine Jarmul champions this approach. 
“Suppose you have an application,” she explains, “and you’ve decided that you want to use activity on your application to figure out how to build a useful predictive model later on to predict what the user wants to do next. If you already know you’re going to collect this data, and you already know what you might use it for, you could work with your developers to figure out how you can create transformations as you ingest the data.” Jarmul calls this prescriptive data science, which stands in contrast to the much more common approach: reactionary data science. Maybe it’s too late in the game for that. In that case, there are any number of data catalogs to help data scientists access and prepare data. A data catalog centralizes information about available data in one location, enabling users to access it in a self-service manner. “A good data catalog,” writes analytics expert Jen Underwood in a 2017 blog post, “serves as a searchable business glossary of data sources and common data definitions gathered from automated data discovery, classification, and cross-data source entity mapping.” According to a 2017 article by Gartner, “demand for data catalogs is soaring as organizations struggle to inventory distributed data assets to facilitate data monetization and conform to regulations.” Examples of data catalogs include the following: Microsoft Azure Data Catalog Alation Catalog Collibra Catalog Smart Data Catalog by Waterline Watson Knowledge Catalog In addition to data catalogs to surface data for AI projects, there are several tools to facilitate other data-science tasks, including connecting to data sources to access data, labeling data, and transforming data. These include the following: Database query tools Data scientists use tools such as SQL, Apache Hive, Apache Pig, Apache Drill, and Presto to access and, in some cases, transform data. Programming languages and software libraries To access, label, and transform data, data scientists employ tools like R, Python, Spark, Scala, and Pandas. Notebooks These programming environments, which include Jupyter, IPython, knitr, RStudio, and R Markdown, also aid data scientists in accessing, labeling, and transforming data.
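To make Jarmul’s “transform as you ingest” idea a little more concrete, here is a minimal, hypothetical pandas sketch; the event fields and file names are invented for illustration and are not from the original interview:
import pandas as pd

# Raw activity events as they arrive from the application (hypothetical schema)
events = pd.read_json("activity_events.jsonl", lines=True)

# Prescriptive step: apply the transformations agreed on with the developers at
# ingestion time, instead of untangling raw logs months later.
events["event_time"] = pd.to_datetime(events["event_time"], utc=True)
events["action"] = events["action"].str.lower().astype("category")

# Persist per-user action sequences in a shape a future predictive model can use directly
sessions = (
    events.sort_values("event_time")
          .groupby("user_id")["action"]
          .agg(list)
          .rename("action_sequence")
          .reset_index()
)
sessions.to_parquet("model_ready_sessions.parquet", index=False)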
https://medium.com/oreillymedia/getting-your-data-ready-for-ai-efdbdba6d0cf
["O'Reilly Media"]
2020-09-24 12:49:30.657000+00:00
['Artificial Intelligence', 'AI', 'Data Science', 'Data', 'Data Scientist']
Why We Spend Our Brief Lives Indoors, Alone, and Typing
Why We Spend Our Brief Lives Indoors, Alone, and Typing Or, how I justify teaching my students the dying art of writing I worry about what to tell Kate. For most of my students, knowing how to write well will just be an unfair advantage in whatever career they choose — I tell them it’s like being rich, or pretty. But Kate takes art personally: she loves the Dadaists but also thinks they’re a bit cliqueish and silly; in her emails she quotes Rilke and Hurston. She’s one of those students so passionate about writing and literature it makes me feel, briefly, younger to be around her. It also secretly breaks my heart. I once unprofessionally confessed to Kate my misgivings about training her in an art form as archaic as stained glass. She tells me she still thinks about this all the time. I know better than to blame myself for Kate’s career choice; she was already doomed to this vocation when I met her. You recognize these students less by their intrinsic talent than by their seriousness of purpose: they allow themselves no other options; they’re in it for the long haul. She recently took a year off from studying law, politics, and economics to take nothing but courses in literature: “There is something telling me that if I ever do something to save people from the misery that other people have caused them,” she wrote me, “it will be because of what Ibsen teaches me, and not a class on terrorism.” She worries that by obsessively observing and recording she’s missing out on the experience of being alive. I have assured Kate she is more alive than anyone I know. How can I justify luring guileless young people into squandering their passion and talents on a life of letters? But in this dystopian year 2019, with “the newspapers dying like huge moths,” as Ray Bradbury predicted in the ’50s, “Literature” a niche category among all the Book-Shaped Products© on sale, and “the discourse” limited to 280 characters, how can I justify luring guileless young people into squandering their passion and talents on a life of letters? It’s not just that writing is an unmonetizable profession — Kate knows that much already — I worry it’s obsolete. Late in life, the novelist James Salter despaired of the post-literate civilization he saw already arriving: “The new populations will live in hives of concrete on a diet of film, television, and the internet.” Trying to read something like To the Lighthouse with attention spans stunted by stroboscopic overdoses of Instagram/Twitter/Reddit/Imgur might as well be climbing Kilimanjaro. These very words will likely vanish from your head within hours, driven out by the latest presidential obscenity or Mueller revelation, the next episode of Atlanta, a new Maru video. “They speak about the dumbing of America as a foregone thing, already completed,” wrote Michael Herr, “but, duh, it’s a process, and we haven’t seen anything yet.” That was in 1999, long before what people are calling, with the chilling nonchalance of a fait accompli, the “post-truth era.” Consensual reality is as abandoned as Friendster; everyone now gets to curate their own truths like Spotify playlists. You can convincingly Photoshop a Parkland survivor tearing up the Constitution, or CGI “deepfake” footage of Kat Dennings having sex with you. Journalists routinely get death threats, while the two most widely trusted institutions in America are the police and the military, who can be relied upon to obediently massacre us on command. 
I’m still haunted by an essay speculating that, if the President were proved to have committed impeachable crimes, we would face “an epistemological crisis” in this country: What if his supporters simply declined to buy the evidence? Of course people have always claimed that the culture is in decline, that the age of true art is past, that each generation is more illiterate, vulgar, and stupid than the last. But the constancy of this claim throughout history can obscure, from the limited perspective of a single short lifetime, that it may be true. We lack the historical elevation to tell whether this current darkness is just a passing reactionary spasm, like the McCarthy aberration, or part of a longer, more inexorable Gibbonian decline. “I know Writing isn’t dead and I believe it’ll only be once we all are for good,” Kate wrote me. I just hope the latter date is further away than it sometimes seems. But even an apocalypse needs chroniclers. One of my colleagues says she’s writing for that (possibly brief) interval between the end of the internet and our extinction, when our grandchildren may turn to our words to try to understand what happened. I keep remembering Agnolo di Tura writing, during the Black Death: “so many died that all believed it was the end of the world.” He had buried five children by then, and had every reason to believe it was true; still, he wrote it down. Or maybe it’s not civilization that’s in decline; maybe it’s just me. Talking to Kate also makes me feel older, uncomfortably aware of the distance between her searing idealism and my own guttering disillusion. Anyone who makes the mistake of turning their passion into a vocation gets to watch it turn, like gold transmuting into lead, into a job. You start out motivated by pure, childish things: the pleasure of finding something you do well, of telling stories or making jokes. You’re driven by the same fear that drives magnates and despots: the approaching deadline of mortality, the dreadful urgency to make something to prove you were here. These motives gradually get buried under geological layers of bullshit — reputation, recognition, self-image, money — until every airport bookstore becomes a warped hall of mirrors confronting you with your own insecurity, petty jealousies, and resentment. Posterity is no less absurd an illusion than an afterlife. A friend of mine recently forwarded me a cache of letters by Raymond Chandler, in which he ruminates, like one of his own weltschmerzy heroes, on the vanity of literary striving: “Do I wish to win the Nobel Prize? […] I’d have to go to Sweden and dress up and make a speech. Is the Nobel Prize worth all that? Hell, no.” Just as courage is acting despite your fear, faith is acting despite your despair. Why, then, do we do it — spend so much of our brief time alive in this gorgeous world indoors, alone, and typing? After all my worrying about what to tell Kate, it turned out it was up to her to tell me. “Somehow most people are taught that Art is a way to distract from the terror,” she wrote me, “when in fact I think it is the only way to get through it at all.” In other words, all my arguments for writing’s futility are in fact arguments for its necessity. I was never as idealistic as Kate — or rather, I was never as hopeful; my idealism is too fragile, too easily disappointed. What she and I share is that foolish, ineradicable belief in art and the written word: That there is such a thing as truth, and that it matters when it’s spoken, even if no one listens. 
Beliefs so frail and indefensible, so easily debunked, that you’d almost have to call them articles of faith. And faith is like courage: Just as courage is acting despite your fear, faith is acting despite your despair. The last time I saw Kate we stopped, on a whim, at the Cathedral of Saint John the Divine and discovered the “American Poet’s Corner,” a chapel dedicated to writers. We stood searching its floor for the names of our favorites, the patron saints of our chosen vocation: Poe and Twain, Fitzgerald and O’Connor, Cummings and Plath. The quotation from O’Connor reads: “I can, with one eye squinted, take it all as a blessing.” I’d likened writing to stained glass, an anachronism, but stained glass is more than an artifact in itself — it’s a medium, to make the invisible manifest. The sunlight through the cathedral windows cast a warm pastel glow across the flagstones, lending to those graven words the animating blush of illumination. A few days ago Kate wrote to let me know she’d been accepted to journalism school, with a full scholarship. She wrote: “Looking forward to it all.”
https://humanparts.medium.com/why-we-spend-our-lives-indoors-alone-typing-e3b1a98e6f45
['Timothy Kreider']
2019-04-15 18:00:04.298000+00:00
['Education', 'Creativity', 'Media', 'Culture', 'Writing']
An Update to Your Fitbit Could Detect a Covid-19 Symptom
An Update to Your Fitbit Could Detect a Covid-19 Symptom You might have a full-featured pulse oximeter sitting on your wrist right now Photo: Adam Birkett/Unsplash One of the scariest things about Covid-19 is that if you get the virus, there’s not a whole lot you can do. Official guidelines say to treat it at home, much as you would a cold or flu — rest, drink fluids, separate yourself from others in your living space, and so on. That’s all well and good, except that Covid-19 has fast developed a reputation for causing otherwise stable patients to crash alarmingly quickly. Doctors tell stories of patients who battle the virus for days or weeks, seem fine, and then in a matter of hours deteriorate and need to be placed on a ventilator — or worse. If you’re treating yourself for Covid-19 at home, how can you know if you’re in the middle of a serious crash? By all accounts, Covid-19 makes many people feel terrible — how can patients outside a hospital setting know when things have gone from merely awful to life-threatening? One potential health tool that’s rapidly emerging is the use of a home pulse oximeter. These simple devices measure the oxygen level in your blood. If it drops below 92%, that’s a concern. If it falls further, you could be in big trouble — some Covid-19 patients have reportedly had levels in the 50% range. Pulse oximeters are especially appealing because Covid-19 has been reported to cause silent hypoxia. In this condition — which seems tailor-made to haunt the dreams of hypochondriacs — a person can walk around with a serious Covid-19 oxygen deficiency and not know about it until it’s too late. There’s only one problem — home pulse oximeters are fast becoming more scarce than toilet paper. I got an Innovo pulse oximeter a year ago to monitor myself during exercise. I paid $23 for the device, and it arrived in two days. This morning, I checked Amazon and could only find one pulse oximeter available to ship in less than a week. They were charging $60. Most wouldn’t ship until mid-May, or later, at any price. If you’re one of the millions of people who wear a Fitbit smartwatch, though, there’s good news. You likely have a full-featured pulse oximeter sitting on your wrist right now. And it could be one firmware update away from potentially saving your life. That Fitbit has been quietly placing pulse oximeters in their watches for years has long been a badly kept industry secret. Users and gadget reviewers alike have noticed the sensors on the back of their Fitbit devices and speculated about their presence and function. A video on my own low-budget YouTube channel speculating about the sensor has 10,000+ views and has received more viewer engagement than many of my other videos. The consensus (which Fitbit ultimately confirmed) was that the company was quietly developing a program to use the sensor for detection of sleep apnea. As early as 2017, Fitbit was hinting at this direction and copped to testing hardware for detecting apnea (a serious condition, the treatment of which will be a projected $6.7 billion industry by 2021). As Fitbit rolled out improved sleep tracking last year, a move toward tracking sleep apnea seemed just over the horizon (I was a beta tester in this program). There are several hurdles to including a pulse oximeter in a consumer wearable. First, there are the technical hurdles. The device has to actually work, and measuring oxygen levels at the wrist is a challenging problem. 
Some data also indicates that it’s especially challenging with people of color, a concerning finding, especially since Covid-19 impacts these communities disproportionately. There are also concerns that a user’s movements could impact the readings — although, silent hypoxia aside, it’s unclear how much a Covid-19-afflicted patient would be moving around. And price is always a concern — smartwatches often cost $150+, putting them out of reach of many vulnerable populations. Rather than treating blood oxygen as another vanity metric to show to life hackers and exercise fanatics, it’s gone the much-harder route of using its sensors to work toward diagnosing an actual medical condition. But beyond the technical challenges, there are also major regulatory hurdles to clear. Telling people their step count (or even their heart rate) is one thing. Diagnosing them with a disease using a consumer device is another entirely. My own best guess is that Fitbit, as an independent company, didn’t have the regulatory connections and pocketbook to stomach a move into the medical device sector. With its announced acquisition by Google, though, Fitbit suddenly has a deep-pocketed corporate parent to navigate the U.S. Food and Drug Administration (and handle the liability from potential device failures) on its behalf. Perhaps because of that backing, Fitbit quietly rolled out blood oxygen level tracking in its app in January and told Gizmodo that it “expect[s] to submit for FDA clearance soon.” Fitbit’s new blood oxygen measurement capability has potentially life-saving implications in the fight against Covid-19. Twenty-eight million people already wear Fitbits, so using them as pulse oximeters could provide monitoring capabilities to a huge swath of people at once (the capability is not available in all Fitbit models, but is present in its newer smartwatches and trackers). Blood oxygen levels are more useful in a diagnostic sense when they’re used to track a trend. What better way to see oxygen level trends than to have an always-on pulse oximeter on your wrist? At the moment, Fitbit only exposes oxygen level data in the sleep-tracking portion of its app. The levels are used to show a general summary of oxygenation status, and users can’t get a specific percentage reading. This is consistent with its original goal of tracking sleep apnea. But that’s likely a firmware and software decision, not a hardware one. A simple firmware update over the air could likely enable full blood oxygen level tracking very easily, since the hardware (and likely the algorithms for processing raw pulse oximeter readings into meaningful data) are already there. So will Fitbit enable this feature? A lot likely depends on regulatory bodies like the FDA. The FDA reportedly did not allow Apple to enable its own blood oxygen level tracking on the Apple watch, another popular device with a stealth pulse oximeter onboard. But it did hint at allowing consumer wearables to monitor for Covid-19, and researchers are forging ahead with studies to evaluate the Apple Watch, Fitbit devices, and Garmin smartwatches for this purpose. At the moment, Fitbit still says its devices shouldn’t be used for medical purposes. And there are the ongoing technical concerns about the devices’ accuracy, especially at the low oxygen levels that indicate danger. Here, though, Fitbit’s cautious approach and years of testing may serve it well. 
Rather than treating blood oxygen as another vanity metric to show to life hackers and exercise fanatics, it’s gone the much-harder route of using its sensors to work toward diagnosing an actual medical condition. That means the company has likely been laying the groundwork for medical device clearance — technically and legally — from day one. That gives it a huge advantage over its competitors, both in terms of regulatory connections and the hardware already baked into millions of its devices. That the condition Fitbit chose to treat, sleep apnea, is characterized by low oxygen levels bodes well, too. The company has likely focused on detecting oxygen accurately at low levels from the beginning. This potentially gives it another major boost over other smart devices, which work best at measuring the high oxygen levels exhibited by healthy users. It may even give Fitbit’s devices an advantage over existing, FDA-cleared pulse oximeters, which likely use more basic software and simpler algorithms to perform their measurements. And for Fitbit’s blood oxygen levels to be useful, they don’t have to be perfect. They just need to show a meaningful trend. Rather than exposing the values as a specific percentage, the company could always give a summary statistic to indicate an overall trend or trajectory — green for “You’re fine,” yellow for “Call your doctor,” and red for “go to the ER.” Critics of Fitbit’s tech also miss the point that its pulse oximeter readings wouldn’t have to stand alone. The company already has detailed knowledge of its users’ bodies, including their height, weight, age, heart rate trends, and overall activity levels, as well as their baseline blood oxygen levels. All this data could be integrated into a risk score for low oxygen levels — the pulse oximeter reading wouldn’t need to stand alone. I don’t have a window into Fitbit’s tech or regulatory teams or into the FDA. But given what I know about the company’s trajectory and hardware, it seems ideally placed to rapidly provide life-saving oxygen monitoring to millions. Doing so would likely require addressing technical and regulatory (not to mention UI and privacy) issues rapidly and taking on some unknown risks. But Fitbit has years of research under its belt. It has proven hardware, trusted by millions of users and medical industry players alike. It appears to have a relationship with the FDA, and the corporate backing, in Google and Alphabet, to intensify that relationship quickly (and address the inevitable liability concerns of fast-tracking a medical device). If any company can bring life-saving pulse oximetry to millions of people overnight, my money is on Fitbit.
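To make the traffic-light summary described above concrete, here is a minimal Python sketch of how a band could be derived from a series of readings. The specific thresholds (95 and 92), the five-reading window, and the wording of the bands are my own illustrative assumptions — this is not Fitbit's algorithm and not clinical guidance; the only figure taken from the article is that readings below 92% are a concern.

```python
from statistics import mean

# Illustrative thresholds only -- not Fitbit's (or anyone's) clinical algorithm.
GREEN_FLOOR = 95   # assumed "healthy" floor
YELLOW_FLOOR = 92  # the "that's a concern" line mentioned in the article

def spo2_band(readings, window=5):
    """Summarize recent SpO2 readings as a traffic-light band.

    Uses the mean of the last `window` readings rather than a single value,
    since the trend matters more than any one measurement.
    """
    if len(readings) < window:
        return "insufficient data"
    recent = mean(readings[-window:])
    if recent >= GREEN_FLOOR:
        return "green: you're fine"
    if recent >= YELLOW_FLOOR:
        return "yellow: call your doctor"
    return "red: go to the ER"

print(spo2_band([97, 96, 95, 93, 91, 90, 89]))  # -> "red: go to the ER"
```

The point of the sketch is simply that a rolling summary of the trend, not any single reading, would drive the recommendation.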
https://onezero.medium.com/an-update-to-your-fitbit-could-detect-a-covid-19-symptom-a14699667583
['Thomas Smith']
2020-05-13 05:31:01.310000+00:00
['Health', 'Wearables', 'Fitbit', 'Tech', 'Coronavirus']
Bulletproof Writers: Call for Submissions
Bulletproof Writers, as of now, has remained a dormant project of mine, due to my previous inability to share with others and try and do it all on my own. But now, I’ve realized my folly and am opening the publication to new writers. The easiest way to apply is through this link at Smedian, scrolling down to this publication, and then requesting to contribute there. The other way to apply is by sending me a message through Facebook, but as I’m not online there more than once or twice a day that might take longer to get accepted. *These guidelines have been updated as of June 7, 2020. Submission Requirements The only requirements I require from you as a guest poster to this publication are these: There is no required word length. Make your post as long or as short as it needs to be. Focus on being respectful with your reader’s time, not selling your product or service in the post, and providing value to the readers (including a link to your product in your bio is okay). Please keep your post centered around the theme of ‘writing’ exclusively. At the time, I’m accepting non-fiction pieces only. A short bio at the end of your post with a link to your website or opt-in offer is fine. Keep it to 2–3 sentences with 1–2 links included MAX. I’ll attach a byline of my own to every post, so if you apply to the publication, you must be okay with this. Submissions are okay, whether they are previously published or unpublished drafts. Although we prefer unpublished drafts, both are fine as long as the piece you are submitting was published less than 7 days ago. Please do not pull articles from another publication to be published in ours, though, as this is seen as bad etiquette online. Republishing your content is fine! Please change around the tags, photo, and headline to help it get more visibility on Medium, and not be penalized by Medium’s algorithm. Let your article sit for at least 6 months before republishing it. Also, if your piece has been published somewhere else before, please include a note at the bottom of your piece so I can look at it and suggest any changes that could be made to help it stand out as a unique piece to our publication. Attach a photo to your post to help it get more visibility. Use a free site like Unsplash to find these photos and make sure they are royalty-free & openly accessible to the creative commons. Please include photo credit so the editors are aware where the photo came from, and that you have the proper rights to use it. Include 3–5 relevant tags to your post. This is not necessary, but again, it will help your post get more visibility! Share it out to your followers. This can be as simple as a quick Tweet or a share on your Facebook profile page. Believe me, anything you can do to increase the visibility of your post helps! Edit your post. Please edit and polish your submission as best as you can. Make sure it’s spaced out nicely and is typo-free. The less work I have to do to publish it, the better! Focus on value first. Look at your post and ask “Is this valuable? If I was a reader, what would I be getting out of reading this post? What tangible things would I walk away from?” If you brainstorm your post from this way, it becomes a lot easier to help your readers reach the results you are promising with your post and that your audience desires. Okay, that was 11 tips. Maybe I’m a little more strict than I thought. Please, if you’re interested in applying, don’t hesitate to do so. 
I’d love to help you grow your audience and reach more writers with your words and this publication is a great way for me to do so :) Looking forward to reading your submissions! Cheers, Blake P.S. once you’re accepted as a writer to this publication, please follow this guide to submit your draft & add it to the publication’s queue.
https://medium.com/bulletproof-writers/bulletproof-writers-call-for-submissions-66d47d7f5c1e
['Blake Powell']
2020-06-07 22:26:49.684000+00:00
['Writing Tips', 'Creativity', 'Blogging', 'Art', 'Writing']
What Your Startup is Doing Wrong: Four Small Changes for Big Growth
One of the biggest rites of passage for any founder is the first time they pitch their idea / product / solution to an audience. You’ve worked hard to perfect your code and design, and your demo is debugged and polished. You’re the type of person who works wells under pressure, and you haven’t really thought about what you’re going to say specifically. Inspiration always comes to you when you need it! You approach the mic… you pull out your lucky laser pointer… and you begin to passionately talk about “your baby.” This is the dream! But, as you look out into the audience, you see it: the blank, confused expressions, the questioning looks, and the glances at phones and watches. Oh no! “Why aren’t they as excited as I am?” you think. Many early-stage startups struggle to pitch and market their product effectively and efficiently. There’s no judgement in that statement; most of these companies are founded and managed by really brilliant engineers and product-focused teams. This is the reality of young technology startups. Not everyone has the luxury of getting a business degree, hiring a growth marketer, or having access to a network of savvy advisers. But if these companies want to raise capital, make profits, and/or get good PR, they will need to be able to communicate their value succinctly. What are some quick fixes these fledgling companies can change or add to their website/pitch to bring real value? Write a unique value proposition about your solution. Photo by Thought Catalog on Unsplash. 1. Write a Unique Value Proposition. Growth marketers talk a lot about unique value propositions (UVPs), but that’s because they are so important! And while conceptually, it sounds easy to add a UVP, it’s actually a hard thing to execute. Your UVP is a statement that describes the benefits of your solution, how you meet your customers’ needs, and what distinguishes you from your competition. It also needs to be displayed prominently on your website, landing page, and marketing campaign. More deeply, in a concise yet evocative statement, you must describe the benefits of your product using The Four U’s: a) Useful: how is this product useful to a customer, and what problem is it solving for them? b) Urgency: why does a customer need your solution right this instant? c) Unique: what about your service is unlike anything else on the market? d) Ultra-specific: can you describe your company without any ambiguity or confusion, such that it leaves a reader without any hesitations or questions about what you are selling? Holy crap! A UVP needs to have A LOT of information in a super tiny space. It may seem impossible, but aim for clarity over creativity initially. As awareness for your product or service builds, then you can begin to get clever with your messaging For example, Netflix now simply states: “Watch what happens next.” This is a hat-tip to their binge watching reputation. But Netflix can be creative and vague because nearly everyone knows what Netflix does. They’ve achieved product market fit. Unfortunately, your machine learning startup with an esoteric company name and logo will probably need an incredibly ULTRA-SPECIFIC and USEFUL value proposition to keep users on your site. Do you want to quickly test your UVP? Ask strangers who are completely unaware of your website and product to view your home page with your UVP visible. Only allow them to view it for seven seconds. If they can’t tell you want your company does and how it will benefit someone immediately, keep tweaking your UVP. 
Pitch your product in a streamlined, catchy manner. Photo by Kane Reinholdtsen on Unsplash. 2. Craft a concise, catchy pitch. It’s cliché for sure, but can you pitch your company in the time it takes to ride an elevator? You may wonder why people use this analogy, but there are some useful reasons why you want to be able to pitch in 15 seconds. Those who can quickly and accurately describe their product and its benefits are seen as competent and prepared. They demonstrate they know what they’re talking about. You don’t want to lose your audience’s attention and you don’t want to lead them down a rabbit hole of misinformation and confusion. There are two types of pitches: One is this elevator pitch: a super quick overview, almost a spoken unique value proposition. The second is a meatier pitch: a longer pitch, useful for investor meetings, competitions, candidate interviews, and media events. I like to use this format to craft a one minute pitch: a) Start with a short, funny anecdote or staggering statistic that relates to why someone would use your solution. You need to get your audience’s attention, and using emotion is a great way to grab someone’s focus. b) Introduce your solution by succinctly stating what you do. c) List the key benefits of your solution. d) Highlight why your company and/or market has a competitive advantage (great team, huge market size, large waitlist of customers, large network or famous connections). e) Close with a final catchy phrase that re-summarizes your solution. Insert clear calls-to-action in your website and pitch. Photo by rawpixel.com on Unsplash. 3. Have a clear call-to-action. “Click here.” “Register.” “Join now.” We’ve all seen these vague buttons on websites. To the person who built the website, it’s obvious what clicking that button will do. But to your audience, your intrepid potential customers, they have no idea what they’re getting into. Companies need to make sure the messaging around their call-to-action (CTA) is crystal clear. Returning to the Netflix example, their sign-up button clearly states: “Join Free for a Month.” You know when you click that button, not only will you be joining Netflix, but you’ll be getting my first month free. No confusion there, right? If you’re simply selling a product online, “Buy Now,” always works well. But what about getting people to give you their email address to sign-up for a waitlist or newsletter? “Click here to be added to our Beta release launch!” or “Sign up for our otter meme-a-day email!” Make it obvious! Run A/B tests on your messaging, or perform the stranger test I mentioned above. Who are you trying to reach? Understanding this will allow you to tailor your messaging appropriately. Thanks to GIPHY and HBO. 4. Cater to your audience. There’s nothing worse than missing out on a big opportunity because you didn’t do your homework! If you’re pitching to non-technical investors, and you’re a high-tech engineering startup, make sure your presentation has the appropriate amount of technical definitions and background, as well as business-related information. This way, investors can make informed decisions, and you can appear business-savvy and empathetic. If your consumers are teenagers, but your customers are their parents, make sure your website is understandable to both generations. This will help you attract and sell to an appropriate audience. Many people hate old-school frameworks, but it could be incredibly beneficial to sit with your co-founders and conduct a STP exercise (segment. target. 
position.) a) What possible demographic segments could be interested in your solution? b) Which segment will you be targeting initially, and why? c) How will you position your solution to get that target's attention? Remember: know your audience, know your customers, and know your unique value proposition! Know your audience… and audience size. Thanks to GIPHY and HBO. This article was inspired by my first San Francisco pitch competition last week, where my company was one of the few startups to go beyond the tech. We impressed the judges, especially the ones who weren't data scientists and were looking for investments, with our emphasis on UVP and STP. Big props to my team at PipelineAI: we won the Startup Showcase at The Artificial Intelligence Conference. What is one overlooked business or marketing aspect you have found makes a big difference to focus on early in the life of a startup? Do you prefer focusing on traditional marketing frameworks or new-age growth hacks in a startup's infancy? Share your thoughts below! Thanks to Thomas Maremaa and John A. Parks for providing invaluable editing direction on this article, Mikail Gündoğdu for GIF advice, and to the Tradecraft community for supporting me in my endeavors as a growth writer.
https://medium.com/tradecraft-traction/what-your-startup-is-doing-wrong-four-small-changes-for-big-growth-e1a9409392b6
['Jessica Poteet']
2017-09-29 15:10:20.298000+00:00
['Silicon Valley', 'Growth', 'Marketing', 'Business', 'Startup']
Jellyfish Reveal More Glowing Secrets & Bacteria Make Purple Sea Snail Dye
NEWSLETTER Jellyfish Reveal More Glowing Secrets & Bacteria Make Purple Sea Snail Dye This Week in Synthetic Biology (Issue #14) Receive this newsletter every Friday morning! Sign up here: https://synbio.substack.com/ Tell me about your research on Twitter. The Crystal Jelly Unveils Its Brightest Protein Yet Aequorea victoria, the crystal jelly, hovers in the waters off the coast of California. Decades ago, Osamu Shimomura noticed that these jellies emit a faint, green light. So he took pieces from one of them, did some experiments, and found the protein responsible for the glow. That protein — GFP — is now used in thousands of labs to light up the insides of microscopic cells. Shimomura shared the 2008 Nobel Prize for that work, along with Martin Chalfie and Roger Tsien, who died in 2016. Now, it looks like the crystal jelly hasn’t given up all of its secrets just yet. In a new study, nine previously unstudied proteins, also from Aequorea victoria and a related species, were reported. Several of the new fluorescent proteins have quirky characteristics, too. One of them is “the brightest GFP homolog yet characterized”, while another protein can respond to both UV and blue light. The scientists even found a couple of purple and blue-pigmented chromoproteins. The findings are further evidence that, in the darkness of the oceans, scores of mysteries remain to be discovered. This work was published Nov. 2 in the open-access journal PLoS Biology. Link Will DNA Replace Grocery Store Barcodes? A standard barcode — think grocery store rectangle, with black-and-white lines — contains 11 digits. Mixing up those digits in every possible way gives about 100 billion possible combinations. That’s a lot, but it’s not nearly as many combinations as what a barcode made from DNA could provide. A new study, published in Nature Communications, reports a molecular, DNA tagging system that could become the future of barcodes. The DNA was dehydrated, which made it more stable, and the sequences were read out in just a few seconds with an Oxford Nanopore MinION, a small, portable DNA sequencer. To facilitate that speed, the authors came up with some clever ways to avoid complex, computational analysis of the DNA signals; they were able to read the barcodes directly from the raw sequence data. This study was published Nov. 3 and is open access. Link Bacteria Produce Tyrian Purple Dye (From Sea Snails!) As early as 1570 BC, the Phoenicians were dying fabrics with Tyrian purple. To make the dye required a process so intensive as to be nonsensical; as many as 250,000 sea snails (Bolinus brandaris) had to be smashed into goop to make just one ounce of dye. It was a color reserved for royalty, and literally worth more than its weight in gold. Thank goodness, no more snails need to be smooshed to make Tyrian purple dye. Engineered E. coli bacteria can now make the dye’s predominant chemical, called 6,6'-dibromoindigo. To achieve this, scientists from Seoul National University added several genes to the bacteria; a tryptophan 6-halogenase gene, a tryptophanase gene and a flavin-containing monooxygenase. That’s a mouth garbling sentence, but I promise the result is easier to understand: the cells were able to produce 315 mg of 6,6'-dibromoindigo per liter in flasks, using tryptophan — an animo acid — as the chemical precursor. This work was published Nov. 2 in Nature Chemical Biology. Link 79 Different Cas9 Proteins Were Tested. Some Are Wicked Cool Cas9 is maybe the most famous protein on earth. 
It’s like, the Kim Kardashian of the protein world. If there was a magazine for proteins, Cas9 would be on its cover. Oh wait, that already happened. There’s a lot of different Cas9 proteins, but not all of them have been characterized. In a new study, scientists identified, and tested, 79 different Cas9 orthologs — proteins taken from different species, but that have the same function — and figured out how they recognize and cut DNA. Intriguingly, some of the Cas9 proteins only worked at specific temperatures; Cme2 Cas9, for example, “was only robustly active from ~30 °C to 55 °C suggesting the possibility of temperature-controlled DNA search and modification.” This study was published Nov. 2 in Nature Communications, and is open access. Link CRISPR Shuts Down Fertilized Eggs I didn’t know about the birds and the bees until my parents sat me down and told me. But if you’re wondering, a typical pregnancy starts like this: a fertilized egg latches on to the endometrium in the uterus. That activates a flood of genes to turn “on”, including one called leukemia inhibitory factor, or LIF. A new study has figured out a way to cut off fertility — with CRISPR — by targeting LIF and switching it “off”. The reason this is cool is because, well, the CRISPR-Cas9 system is photoactivatable, meaning it can be switched on with an LED. The scientists, from Keio University in Tokyo, think that their work could prove useful in basic science research that probes the molecular signals underpinning this process. The study was published Nov. 2 in the journal PNAS, and is open access. Link
https://medium.com/bioeconomy-xyz/jellyfish-reveal-more-glowing-secrets-bacteria-make-purple-sea-snail-dye-37d03a3d58e9
['Niko Mccarty']
2020-11-06 13:39:00.501000+00:00
['Newsletter', 'CRISPR', 'News', 'Science', 'Future']
IBM is Recognized in the 2020 iF Design Awards
On behalf of our design team at IBM Cloud, Data and AI, we’re excited to announce that we’ve won iF Design Awards in the Communications category for IBM AutoAI and IBM Watson Studio Desktop. We are thrilled to see these two products get recognized for their outstanding design work. This year, the iF Design jury, comprised of 78 international experts, judged 7,300 products and projects submitted from 56 countries from around the world. The iF Design Award is one of the world’s oldest, most celebrated, and most competitive design competitions. This is our third year in a row being recognized by this organization, and the first time that we have seen two of our products get awarded at the same time. It’s truly an achievement and an honor for us, and I’m so proud that our team’s hard work has paid off. What is IBM AutoAI? IBM AutoAI, part of IBM Watson Studio, automates the process of building machine learning models for users such as data scientists. Businesses looking to integrate AI into their practices often struggle to establish the necessary foundation for this technology due to limited resources or a gap in skill sets. The process of understanding how to use AI and generate machine learning models from data sets can take days or weeks. With a distinct emphasis on trust and explainability, IBM AutoAI visualizes this automated machine learning process through each stage of data preparation, algorithm selection, model creation, and data enhancement. The tool is able to teach and empower users to identify and apply the best models for their data in a matter of minutes, helping businesses save time and resources. AutoAI guides users through the process of joining multiple data sources with suggestions and prompts throughout the data preparation experience. Designing for IBM AutoAI One of the primary goals for the design team was making IBM AutoAI understandable for users with varying levels of expertise. It was a challenge for the designers to understand the AI and machine learning technology behind this automated solution, and then communicating the model creation process in a comprehensive but visually appealing way. The team set to create a software product that guided the user through these complex technological processes step by step. IBM AutoAI visualizes the entire model creation process through multiple “lenses”, providing transparency to users in a way that they can understand the process to whatever extent of detail that they need. The design team worked directly with IBM Research to understand the underlying technology and user expectations for this type of tool. The team also interviewed target users and conducted competitive research to increase their domain knowledge in artificial intelligence and better inform their design decisions. Based on deep user research, the designers found that users inherently didn’t trust an automated solution. The design team wanted to avoid this perception of an automated solution as a “black box”, where it is unclear to the user how a result was generated from the information that they input. Throughout the design process, the designers placed emphasis on explaining all steps of the software tool’s process in laymen’s terms in order to build confidence and trust with the users. By leveraging the IBM Enterprise Design Thinking framework the design process also extended to development, content, and offering management teams, which helped create a product more aligned with all stakeholder goals. What is IBM Watson Studio Desktop? 
IBM Watson Studio Desktop is a data science and machine-learning software platform that provides self-service, drag-and-drop data analytics right from the user’s desktop. The software platform’s features include the ability to automate data preparation and modeling, data analysis, enhanced visualizations, and an intuitive interface that doesn’t require coding knowledge. It can integrate with on-premise, cloud, and hybrid-cloud environments. This dashboard offers users a way to explore, prepare, and model their data with simple drag and drop features, without needing coding abilities. Data analysis can be a painstaking process as users need to gather, clean, sort, and sift through the data while working with data scattered across several sources and locations. IBM Watson Studio Desktop is an end-to-end solution that helps businesses to get started with the data analysis process faster, giving data scientists all the tools they need to improve their workflow. This product is a desktop version of IBM Watson Studio, a collaborative cloud data analysis software. Designing for IBM Watson Studio Desktop The design team behind IBM Watson Studio Desktop conducted research on their target users, primarily data scientist, to understand their needs. The designers conducted interviews with sponsor users and corporations as well as on-site user testing. The team found that data-scientists primarily worked in isolation, and were looking for a more dynamic, collaborative workflow, where they had all of their tools in one place. The team aimed to design a tool and interface where data scientists were provided with an ecosystem of data analysis tools. They wanted to create a space for their users to collaborate, access all of their needed tools and information at once, and create a cohesive workflow between themselves and their peers. User Experience Journey for IBM Watson Studio Desktop users Another challenge for the UX team was to design and implement all of these capabilities that were originally designed for the cloud version of the software into the desktop version. IBM Watson Desktop Studio was created for users who wanted to work offline as well as in an interface with more narrowed and tailored machine learning capabilities. The team wanted to design a desktop tool that translated well as an extension of the cloud tool, with a user experience that was more simplified and focused, but still familiar to users from the original cloud version. The team designed an interface that used similar design principles, as well as carried over key features from the cloud version that the users wanted to see in this new environment. “IBM Watson Studio Desktop and IBM Watson AutoAI bridge gaps in skills and knowledge and make data analysis and machine learning more accessible for businesses in the modern age. We designed these products with empathy and a user-centered approach, so that our users could confidently integrate AI into their business workflows.” --Alex Swain, Design Principal at IBM Cloud, Data and AI Designing Watson Products As described above, designing software products with AI and machine learning capabilities is a challenging task that requires an in-depth understanding of the field and its challenges. AI has the power to impact businesses on a large scale, and understanding how to take advantage of these capabilities is essential for businesses to succeed and excel with their data strategy. 
Being recognized for the design work behind these products is a true testament to how much user experience can shape the way these AI technologies impact our lives. Winning Teams IBM AutoAI Design Principal: Alex Swain Design Team: Dillon Eversman, Voranouth Supadulya IBM Watson Studio Desktop
https://medium.com/design-ibm/ibm-is-recognized-in-the-2020-if-design-awards-1221123585f8
['Arin Bhowmick']
2020-02-13 05:10:31.955000+00:00
['Machine Learning', 'UX', 'Data Science', 'Design', 'AI']
Where to begin with color in 3D?
Where to begin with color in 3D? Getting started with color in Cinema4D Lite. Color, shape, and fun in C4D (created by Sarah Healy) I can vividly remember opening 3D Studio Max for the first time. It felt like I had suddenly been handed the controls of the Starship Enterprise, with zero experience of actually navigating through space. I can also remember hastily closing the software package, feeling defeated, and retreating back to the comfortable flatness of 2D space. As a designer used to creating in flat, two-dimensional space, suddenly having multiple viewports, three dimensions, and a camera to contend with is a wee bit overwhelming. In the world of 3D, everything gets a little more complicated — even color. Well, at least I did not find it very intuitive when learning 3D. Here is a starting point with color in Cinema 4D.
https://uxdesign.cc/where-to-begin-with-color-in-3d-3e81f92beb77
['Sarah Healy']
2019-08-21 23:47:24.331000+00:00
['Cinema 4d', 'Colors', 'Education', 'Design', 'Creativity']
I know what a Data Scientist is… but what the heck is a Machine Learning Engineer?!
I know what a Data Scientist is… but what the heck is a Machine Learning Engineer?! Rodney Joyce Follow Sep 6 · 6 min read “IT” This reply has got me by for the past 20 years when asked by various relatives and friends exactly what it is that I do. It does mean I have to “fix” working computers, install virus scanners, get printers working (throw it away), and fix iTunes for my mum on a regular basis and generally I am considered an authority on anything that is slightly more technical than average. However, in the last decade (and especially the last 3 years) the technical landscape has shifted exponentially with machine learning now accessible to anyone with a browser, so this answer no longer suffices as dinner parties. Out of interest, the drivers for this are things like access to more data (IoT, faster networks) and the abstraction of the AI tools used by Google et al into elastic cloud services and various others — that is a whole post in itself. Everyone seems to have heard of Data Scientists. It was even labelled “The Sexiest Job of the 21st Century” (google it — I don’t know who said it first). But when I say I am a Machine Learning Engineer I often draw blank looks. (I say “AI” Engineer to the non-technical to get a nod of recognition). So what exactly is the difference between a Data Scientist and a Data Engineer, and what is a Machine Learning Engineer? This too has been discussed to death however I read an article that summed it up perfectly. I am also currently working on a project (that shall remain nameless) that highlights the points made in this article perfectly. Do yourself a favour and read this first then come back here for a real example: https://www.oreilly.com/ideas/data-engineers-vs-data-scientists Some background: There’s a limit in statistics and maths that I hit fairly soon where I am happy to hand over to someone who specializes in it. I wish I had paid more attention at school during maths class and stopped having so much fun… trying telling that to a teenager though! I understand basic stats, I can train a Linear Regression model, I can tell Azure to run AutoML for me and I can hypertune a model using SparkML. I can build a pretty decent app end to end to identify hotdogs or not hotdogs on the edge. But I cannot tell you WHY these params worked better than those ones. WHY the Random Forest resulted in a higher accuracy or what the best metrics are to use to evaluate the outcome of 100 training runs for an X model. Fortunately, the Data Scientist can, and he loves the complex maths! Finally…. a job that is not boring! But… not everyone with a PHD knows how to train their models in parallel using distributed code (I will try not to mention Databricks yet again ;). Most Data Scientists use Pandas/numpy and don’t necessarily know (or care, to be fair) about the potential limitations when it comes to training. Nor do they necessarily care about ordering a beefy 128 Gig GPU machine to run their experiments overnight because it is taking 8 hours to train a model. Suggesting to use PySpark or Dask just gets an irritated look as it detracts from valuable experimenting time. When requested to deploy his model as an API driven by a Git commit with automatic model drift monitoring it is met with a disgruntled snort… However… I do, for example, appreciate the beauty of distributed compute and the wonderfully scaleable architecture of Spark. 
I love a good API and data pipeline as much as the next Data Engineer and can spend hours refactoring code until it passes all the definitions of “Clean Code” (Consultant Tip: If you want to meet your budget and project plan then find the healthy balance between technical perfection and the real value that the code will generate. We are doing all of this for a reason, and it’s not to get the code onto 1 line). I love the concept of CI/CD and I adore simplicity, practicality and optimizing things like cloud services, code, processes and every day life. Needless to say this does not always go down well with other humans, however it’s a common trait in Data Engineers and Programmers. So… now that we understand the personalities of the Data Scientist and Data Engineer let’s put them together, focus on their strengths and make an amazing team that can meet the business requirement as quickly as possible whilst consuming as little time and $$ as possible. Before I go further, obviously there are exceptions to the rule and lucky people (usually without kids) who are in fact able to bridge both roles… we’ll focus on the average here. Even if you can bridge both roles… should you? A healthy team is a diverse team. I’ve seen projects where a Data Engineer is given a complex Machine Learning project and a couple of days to figure it out. Whilst it is possible, I believe this is not a good idea. Data Science and Machine Learning engineering ARE NOT the same thing. I have also seen projects where a Data Scientist is put on a project which involves Big Data (whatever that is) with no data engineering support and in both cases everyone wonders why they are taking so long to get any results. The project I am on right now is a fantastic example of the article above. We have a Data Scientist (insert any number of PHDs here) and myself as the Machine Learning Engineer/Data Engineer (insert any number of Azure cloud certifications here). As a team we are approaching the problem according to our strengths and, of course, based on what we prefer to do, which is important if you want to retain your staff (did I mention that this “AI” stuff is in hot demand and everyone wants to do it but doesn’t know how?). For example, early on in the project, training a single model (we have over 40) was taking over 10 hours. One option would be to scale up and get a bigger VM which is the hammer and nail approach. These beasts are not cheap and halfway through a 10 hour training session could fail and the process needs to be repeated. This was the selected approach to get us past that blocker and is working. However, in parallel I am looking the Data Scientist’s code, rewriting it from Pandas into PySpark (Note: there’s 101 other ways to do this — I am just a Databricks fanboy) and building the system to log experiment results and deploy the models as containerized APIs microservices with an Azure function to orchestrate the results asynchronously. Put a near real-time PowerBI report and alerting to watch for model drift and an Azure function to trigger model retraining and it’s a work of art! Damn I love my job. Together we make an awesome team as the whole is greater than the sum of the parts. Our roles overlap a lot and I am improving my understanding of stats and ranking better in Kaggle contests. He is learning new ways to improve his workflow and understanding more about data engineering. 
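As a rough illustration of the kind of rewrite described above — the project's actual code isn't shown here, and the file path and column names below are hypothetical — moving a simple Pandas aggregation to PySpark might look like this:

```python
# Hypothetical feature: average purchase amount per customer.

# Pandas version (single machine, in-memory):
import pandas as pd

df = pd.read_parquet("transactions.parquet")
features = df.groupby("customer_id")["amount"].mean().reset_index()

# Equivalent PySpark version (distributed across a cluster):
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()
sdf = spark.read.parquet("transactions.parquet")
features_sdf = sdf.groupBy("customer_id").agg(F.avg("amount").alias("amount"))
```

The logic is identical; the PySpark version just lets Spark spread the work across executors instead of relying on one machine's memory.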
To summarize: The best result comes from understanding the roles and challenges unique to a Machine Learning project and planning appropriately from a time and effort point of view — anything is possible with enough time. These projects share many aspects with standard application development projects, and the approach is not too dissimilar. You wouldn't ask an API engineer to do UX, would you? (Don't get me started — this happens a lot!) Just put the Data Scientists and the Data Engineers together in a room and let the magic happen… and if you need help putting an ML Ops process in place, get in touch at https://data-driven.com/
https://medium.com/data-driven-ai/i-know-what-a-data-scientist-is-but-what-the-heck-is-a-machine-learning-engineer-7996415ce3c
['Rodney Joyce']
2020-09-06 11:55:25.329000+00:00
['Machine Learning', 'Data Engineering', 'Machine Learning Engineer', 'AI', 'Data Science']
The Vantage Point of Stars
From a safe vantage point, far from the walrus-death cliffs, the stars hang smiling in the sky. The cosmos between themselves and earth is enough to swallow up their years of smiling light. If I could rise to that expanse could I forget the plunging to rock? The blood swept into the sea — the last heaving breaths of walruses? How lucky the stars for that vast space. They do not have to leap from the sky or find themselves shoved into spaces too small, too restrictive, to do what it is that stars do; quietly they still and twinkle and hold their spot for eons, asking nothing. Walruses ask nothing but a bit of ice or shore. Koalas, ablaze and climbing trees, shrieking. They, too, only need a bit of peace and branches of green. Sure, brush fires are normal. Ice melting; normal. But not like this. The Earth spinning and changing and moving through time as time has asked it to do — forgive us our interference, our human intrusion on the norm, for we really think it is all about us, our needs, our wants from this earth, our take, our taking. When the seas rise up to meet our mistakes, what cliffs, I ask, will we leap from? Will there be trees for us to climb, as the flaming koalas? Will there be a nice lady who will rip off her shirt and snuff out the flames of our sins as they crawl up our legs or will we simply keep running and hope the wind will put them out? The stars won’t shine then. They’ll wink their “I told you so’s,” grateful to be stars and not koalas, stars and not polar bears adrift on melting ice-boats, their furs narrowing at the sides, carcasses that breathe, until, they don’t. A star paints its path across the sky, one last, vast motion of hope, recipient of wish, of prayer, far-removed hope-flung.
https://medium.com/fiddleheads-floss/the-vantage-point-of-stars-1b9ae5dee13b
['Christina M. Ward']
2019-11-25 03:47:29.868000+00:00
['Poetry', 'Environment', 'Climate Change', 'Society', 'Short Story']
Why is it So Hard to Integrate Machine Learning into Real Business Applications?
You’ve played around with machine learning, learned about the mysteries of neural networks, almost won a Kaggle competition and now you feel ready to bring all this to real world impact. It’s time to build some real AI-based applications. But time and again you face setbacks and you’re not alone. It takes time and effort to move from a decent machine learning model to the next level of incorporating it into a live business application. Why? Having a trained machine learning model is just the starting point. There are many other considerations and components that needs to be built, tested and deployed for a functioning application. In the following post I will present a real AI-based application (based on a real customer use case), explain the challenges and suggest ways to simplify development and deployment. Use Case: Online Product Recommendations Targeted product recommendations is one of the most common methods to increase revenue, computers make suggestions based on users’ historical preferences, product to product correlations and other factors like location (e.g. proximity to a store), weather and more. Building such solutions requires analyzing historical transactions and creating a model. Then when applying it to production you’ll want to incorporate fresh data such as the last transactions the customer made and re-train the model for accurate results. Machine learning models are rarely trained over raw data. Data preparation is required to form feature vectors which aggregate and combine various data sources into more meaningful datasets and identify a clear pattern. Once the data is prepared, we use one or more machine learning algorithms, conduct training and create models or new datasets which incorporate the learnings. For recommendation engines, it is best to incorporate both deep learning (e.g. TensorFlow) to identify which products are bought “together”, and machine learning (e.g. XGboost) to identify the relations between users and products based on their historical behavior. The results from both models are then combined into a single model serving application. Example pipeline: Real-time product recommendations The serving application accepts a user’s ID, brings additional context from feature and user tables, feeds it into a model and returns a set of product recommendations. Note that serving must be done in real-time while the user is still browsing in the application, so its always better to cache data and models. On the other hand, recent product purchases or locations may have significant impact on future customer product choices and you need to constantly monitor activities and update feature tables and models. An online business requires automation and a CI/CD process applied into machine learning operations, enabling continuous applications. It is important to support auto-scaling and meet demand fluctuations, sustain failures and provide data security, not to mention to take regulatory constraints into consideration. The Machine Learning Operational Flow In a typical development flow, developing code or models is just the first step. The biggest effort goes on making each element, including data collection, preparation, training, and serving production-ready, enabling them to run repeatedly with minimal user intervention. What it takes to turn code or algorithms into real application The data science and engineering team is required to package the code, address scalability, tune for performance, instrument and automate. These tasks take months today. 
Serverless helps reduce effort significantly by automating many of the above steps, as explained in my previous post Serverless: Can It Simplify Data Science Projects?. Other important tools to keep in mind are Kubernetes and KubeFlow, which bring CI/CD and openness to the machine learning world. Read more about them in my post Kubernetes: The Open and Scalable Approach to ML Pipelines. Machine Learning Code Portability and Reproducibility A key challenge is that the same code may run in different environments, including notebooks for experimentation, IDEs (e.g. PyCharm) and containers for running on a cluster or as part of an automated ML workflow engine. In each environment you might have different configurations and use different parameters, inputs or output datasets. A lot of work is spent on moving and changing code, sometimes by different people. Once you run your work, you want to be able to quickly visualize results, compare them with past results and understand which data was used to produce each model. There are vendor-specific solutions for these needs, but you can't use them if you want to achieve portability across environments. Iguazio works with leading companies to form a cross-platform standard and open implementation for machine learning environments, metadata and artifacts. This allows greater simplicity, automation and portability. Check out this video to learn how you can move from running/testing code in a local IDE to a production-grade automated machine learning pipeline in less than a minute (based on KubeFlow).
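One small pattern that illustrates the portability concern above — purely as a sketch, not the cross-platform standard or KubeFlow mechanism referenced in this post — is to keep run parameters out of the code and read them from the environment with sensible defaults, so the same script behaves consistently in a notebook, an IDE, or a container. The parameter names and defaults below are hypothetical.

```python
# Illustrative only: environment-driven parameters for a training script.
import os

def get_params():
    return {
        "data_path": os.getenv("DATA_PATH", "data/train.csv"),
        "model_dir": os.getenv("MODEL_DIR", "models/"),
        "learning_rate": float(os.getenv("LEARNING_RATE", "0.1")),
        "max_depth": int(os.getenv("MAX_DEPTH", "6")),
    }

if __name__ == "__main__":
    params = get_params()
    print(f"Training with {params}")
    # train(params) would be the same call in every environment;
    # only the environment variables (set by the notebook, IDE run
    # configuration, or container spec) change between them.
```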
https://towardsdatascience.com/why-is-it-so-hard-to-integrate-machine-learning-into-real-business-applications-69603402116a
['Yaron Haviv']
2019-07-08 20:45:27.105000+00:00
['Machine Learning', 'Data Science', 'AI', 'Kubernetes', 'Serverless']
10 Extraordinary GitHub Repos for All Developers
10 Extraordinary GitHub Repos for All Developers Interview resources, build your own X, a list of great public APIs, and more Photo by Vishnu R Nair on Unsplash GitHub is the number one platform for sharing all kinds of technologies, frameworks, libraries, and collections of all sorts. But with that sheer mass comes the problem of finding the most useful repositories. So I have decided to curate this list of ten fantastic repositories that provide great value for all software engineers. All of them have a lot of GitHub stars, underlining their relevance, popularity, and usefulness. Some of them will help you learn new things, some will help you build cool things, and all of them will help you become a better software engineer.
https://medium.com/better-programming/10-extraordinary-github-repos-for-all-developers-939cdeb28ad0
['Simon Holdorf']
2020-03-31 16:07:48.275000+00:00
['Creativity', 'JavaScript', 'Technology', 'Productivity', 'Programming']
Multi-Object tracking is hard, and maintaining privacy while doing it is even harder!
Tracking in Computer Vision is the task of estimating an object’s trajectory throughout an image sequence. To track an individual object, we need to identify the object from one image to another and recognize it among distractors. There are a number of techniques we can use to remove distractors, such as background subtraction, but we’re primarily interested here in the tracking technique known as tracking by detection. In this paradigm, we first try to detect the object in the image, and then we try to associate the objects we detect in subsequent frames. Distinguishing the target object from distractor objects is then part of an association problem — and this can get complicated! You can think of it like “connecting the dots” — which is exponentially more challenging when there are many dots representing many different objects in the same scene. For example, if we want to track a specific car in a parking lot, it’s not enough just to have a really good car detector; we need to be able to tell apart the car of interest from all the other cars in the image. To do so, we might compute some appearance features that allow us to identify the same car from image to image. Alternatively, we can try to track all the other cars, too — turning the problem into a Multi-Object Tracking task. This approach enables more accurate tracking by detection with weaker appearance models, and it allows us to track every object of a category without choosing a single target a priori. In these figures, we’re trying to track two red dots, simultaneously detected over 4 consecutive frames (t=1,2,3,4). But with only the position and time of detection as information, there are two different sets of trajectories that are acceptable solutions. The two dots may cross paths while maintaining a straight motion, like in the left image, or they may avoid each other, turning in opposite directions, like in the image on the right. If we were only interested in tracking one of the two dots, the second one would act as a distractor, potentially causing a tracking error.
What are the current approaches?
Multi-object tracking is still a largely unsolved and active area of research, and there’s an extensive literature covering different approaches to it. Since 2014, there has even been a standard benchmark in the field, called the Multiple Object Tracking (MOT) Challenge, which maintains datasets that researchers and individuals can use to benchmark their algorithms. We’ll discuss a few common approaches here and present them in a simplified way, but this is far from an exhaustive list. For more, we suggest the 2017 survey by Laura Leal-Taixé et al.
Kalman Filtering and Hungarian algorithm
One of the simplest approaches is to try matching detections between adjacent frames, which can be formulated as an assignment problem. In its simplest form, for each object detected at time t, a matching distance is computed with each object detected at time t+1. The matching distance can be a simple intersection-over-union between bounding boxes, or it could include an appearance model to be more robust. An optimization algorithm called the Hungarian algorithm is then used to find the assignment solution that minimizes the sum of all the matching distances. In addition, since most of the objects we are trying to track are moving, rather than comparing the new detection’s position to the track’s most recent known location, it works better to use the track position history to predict where the object was going.
In order to integrate the different uncertainties from this kinematic model and the noise from the detector, a filtering framework is often used, such as a Kalman Filter or Particle Filter. A more complex but straightforward extension to this approach is to search for an optimal solution over a higher number of frames. One possible way is to use a hierarchical model. For example, we can compute small tracklets between adjacent frames over short, non-overlapping segments, and then try to match tracklets between consecutive segments. The Hungarian algorithm can be used again if we can come up with a good distance-matching function between tracklets.
Multi Hypothesis
Another possible approach is to maintain, for each original detection as a starting point, a graph of possible trajectories. Detections are represented by nodes in the tree, and each path in that tree is a track hypothesis. Two hypotheses that share a detection are in conflict, and the problem can then be reformulated as finding an independent set that maximizes a confidence score. Let’s imagine the simple case above, where two objects are being detected during three consecutive frames. Each node corresponds to a detection, and the nodes are vertically aligned with the frame they have been detected in. An edge between two nodes corresponds to a possible association, and the number next to the edge measures the matching distance between detections (a lower value means the two detections are more similar). If the dissimilarity between two detections is above a threshold, it is common to consider the association completely impossible. This is why, here, there is no edge between nodes E and C. Node D, however, could be associated with either B or E in the next frame, and the decision will be made using the matching distance. Therefore, each path on this graph corresponds to a track hypothesis, and here it should be easy to see that the optimal solution is obtained with two tracks: A->B->C (ABC) and D->E->F (DEF). There is, however, another acceptable solution: AEF and DBC. The track hypothesis DBF, on the other hand, prevents any other complete trajectory starting from A, since track hypotheses must not share any node in order to be compatible, and from node E we can only go to F. The figure below is a new graph representing each track hypothesis with a node. There is also an edge between two nodes if the track hypotheses are in conflict, that is, if they share one or more detections. For example, there is an edge between nodes ABC and DBF, as they share the detection B. Hypotheses ABC and DEF, however, are not linked with an edge, and so they are compatible. The idea is to list all the independent sets in this graph, which gives us all the possible solutions to our association problem, and there are efficient algorithms in graph theory that allow us to do just that. Here the independent sets are:
{ABC, DEF}
{AEF, DBC}
We now just need to choose between these two solutions. We can sum all the matching distances in a track hypothesis to get the track hypothesis cost, and sum all the track hypothesis costs in a set to get the set’s cost.
{ABC, DEF}: ABC: Cost = 0.1 + 0.1 = 0.2; DEF: Cost = 0.1 + 0.1 = 0.2; total Cost = 0.2 + 0.2 = 0.4
{AEF, DBC}: AEF: Cost = 5 + 0.1 = 5.1; DBC: Cost = 5 + 0.1 = 5.1; total Cost = 5.1 + 5.1 = 10.2
{ABC, DEF}, with a cost of 0.4, is then retained as the optimal solution. If you want to know more about Multi-Hypothesis Tracking, a more detailed description of an implementation by Chanho et al. can be read here.
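As a concrete illustration of the frame-to-frame assignment step from the Kalman Filtering and Hungarian algorithm section, here is a small sketch using SciPy’s implementation of the Hungarian algorithm (linear_sum_assignment). The bounding boxes and the IoU-based cost are made up for illustration and are not taken from any specific tracker.

# Illustrative frame-to-frame matching with an IoU-based cost matrix and the
# Hungarian algorithm. All boxes are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    # boxes are (x1, y1, x2, y2)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

tracks_t = [(10, 10, 50, 50), (60, 60, 100, 100)]      # boxes at time t
detections_t1 = [(62, 58, 102, 98), (12, 14, 52, 54)]  # boxes at time t+1

# cost = 1 - IoU, so a perfect overlap has zero cost
cost = np.array([[1 - iou(t, d) for d in detections_t1] for t in tracks_t])
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"track {r} -> detection {c} (cost {cost[r, c]:.2f})")

A real tracker would also gate the assignment (for example, refusing matches whose cost exceeds a threshold) and feed the matched positions back into a Kalman Filter, as described above.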
Network flow formulation
The data association problem can also be formulated as a network flow. A unit of flow from the source to the sink represents a track, and the global optimal solution is the flow configuration that minimizes the overall cost. Intuitively, finding the best tracking solution as an association between objects can be seen as solving the K disjoint paths problem on a graph, where nodes are detections and edge weights are the affinity between two detections. This is another one of the well-studied optimization problems in graph theory. Going back to our previous association problem, we can add imaginary starting and destination points, respectively S and T. We are now looking for the two shortest paths from S to T that do not share any nodes. Again the solution will be SABCT and SDEFT. Another way to look at it is to imagine the node S as a source that sends a flow through the network. Each edge has a capacity (here it is 1 because we want non-overlapping trajectories) and a cost, and solving the association problem becomes equivalent to minimizing the overall cost for a given amount of flow. For instance, here we are trying to send 2 units of flow from the source (S) to the sink (T). One unit will go through ABC, the other through DEF, for a total cost of 0.4. But we could also have sent the same amount of flow (2) by sending one unit through DBC and another one through AEF, except the cost would be 10.2, and so {SABCT, SDEFT} is retained as the optimal solution. Again, for a more detailed description of an implementation of network flow for tracking, you can find an example here by Zhang et al.
Why is this so hard?
Researchers have made some incredible advances in object detection and recognition in the past few years, thanks in large part to the emergence of Deep Learning. Now it’s possible to detect and classify hundreds of different objects in a single image with very high accuracy. But multi-object tracking is still extremely challenging, due to a number of problems: Occlusions: In crowded or other complex scene settings, it’s very common that an object of interest will have its trajectory partially occluded, either by an element of the background (fixed environment/scene), like a pole or a tree, or by another object. A multi-object tracking algorithm needs to account for the possibility that an object may disappear and later reappear in an image sequence, to be able to re-associate that object to its prior trajectory.
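To make the network-flow formulation above tangible, here is a rough sketch that solves the same toy example with NetworkX’s min-cost-flow solver. The graph layout mirrors the example, but the integer scaling of costs (0.1 becomes 1, 5 becomes 50) is an assumption made because NetworkX’s min-cost-flow routines expect integer weights.

# Toy min-cost-flow version of the association example above, using NetworkX.
# Costs are scaled by 10 to integers, as the solver expects integer weights.
import networkx as nx

G = nx.DiGraph()
for first in ("A", "D"):            # source S connects to the first detections
    G.add_edge("S", first, capacity=1, weight=0)
for last in ("C", "F"):             # the last detections connect to sink T
    G.add_edge(last, "T", capacity=1, weight=0)

# association edges (cost 0.1 -> 1, cost 5 -> 50)
G.add_edge("A", "B", capacity=1, weight=1)
G.add_edge("B", "C", capacity=1, weight=1)
G.add_edge("D", "E", capacity=1, weight=1)
G.add_edge("E", "F", capacity=1, weight=1)
G.add_edge("A", "E", capacity=1, weight=50)
G.add_edge("D", "B", capacity=1, weight=50)

flow = nx.max_flow_min_cost(G, "S", "T")
print(nx.cost_of_flow(G, flow))     # 4, i.e. 0.4 once the x10 scaling is undone
for u, targets in flow.items():     # print the edges actually used
    for v, amount in targets.items():
        if amount > 0:
            print(u, "->", v)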
https://medium.com/numina/multi-object-tracking-is-hard-and-maintaining-privacy-while-doing-it-is-even-harder-c288ccbc9c40
['Raphael Viguier']
2019-11-08 17:24:11.177000+00:00
['Privacy By Design', 'Engineering', 'Tracking', 'Computer Vision', 'Algorithms']
Everything you need to know about color
Color evokes emotion, sparks excitement, and grabs attention. Color can help draw your eye where you want it on anything from a poster or billboard to an email in your inbox. Color can even influence your mood. Did you know that, according to color psychology, a red and yellow combination makes you hungry? It’s no wonder well-known fast-food chains like McDonald’s and KFC use red and yellow colors in their logos. Color theory is a set of principles for creating harmonious color combinations. It’s a mixture of science and art. Understanding the fundamentals of color theory and where color comes from is important for any designer. Once you master it, you’ll know how to create the best color combinations for your graphic and web design projects. If you don’t believe color has an impact on your design, take a look at this example. It’s the exact same illustration; the only difference is the colors. Which one is pleasing to look at and which makes your eyes want to explode?
https://uxdesign.cc/everything-you-need-to-know-about-color-d921c07c8b0b
['Monica Galvan']
2020-10-17 20:58:27.235000+00:00
['UI', 'Visual Design', 'Design', 'Creativity', 'UX']
Replicating a Human Pilot’s Ability to Visually Detect Aircraft
QUT researchers have used a complex maths model to develop an algorithm that enables unmanned aerial vehicles (UAVs) to replicate a human pilot’s ability to visually detect aircraft at a range of more than 2km. Professor Jason Ford, who was awarded the inaugural Australian Defence Industry Award of Academic of the Year in 2019, said developing the visual detection system had tackled the key barrier to fully realising the global commercial market for unmanned aerial vehicles. “We’ve been working on this problem for 10 years and over that time 50 people or more have been involved in this project,” said Ford, a chief investigator with the QUT Centre for Robotics. “We are leading the world in solving the extremely challenging problem of replicating the role of a pilot’s eye. “Imagine you’re observing something from a cockpit and it’s hidden against the clouds. If you watch it over a period of time, you build up confidence something is there. “The algorithm does the same.” The advisory for human pilots is that they will need at least 11.4 seconds to commence an avoidance manoeuvre once they visually detect another plane or other aerial vehicle. In the past decade, the system has evolved through a range of testing, including on aircraft and on UAVs. The QUT researchers developed the algorithm based on a mathematical model called the Hidden Markov Model (HMM). HMMs were developed in the 1960s and allow people to predict unknown, or hidden, variables from observed information. Professor Jason Ford completed his PhD on HMMs, and has developed techniques to work with weak measurements by using a combination of measure theory, control theory, and information theory. Image: QUT. Ford said although most people outside of the maths community would not have heard of HMMs, they would have benefited from their many applications in economics, neurobiology and telecommunications, with examples ranging from DNA sequencing to the speech recognition systems used by smartphone digital assistants. The algorithm used in the UAV object detection system was developed by Ford, Dr Tim Molloy, Dr Jasmin Martin and others. “The algorithm boosts the weak signal while reducing the surround signal noise,” Ford said. Professor Jason Ford (front) led the development of an algorithm that enables unmanned aerial vehicles (UAVs) to replicate a human pilot’s ability to visually detect aircraft at a range of more than 2km. Ford said one of the major challenges in developing the sense-and-avoid system for unmanned aerial aircraft was to make it small enough to be carried on a UAV. The breakthrough is the latest step after a series of related research projects in the past decade, including the Smart Skies Project and Project ResQu in collaboration with Boeing Australia and Insitu Pacific. Testing commenced in 2010 with flights to collect data to start working on the project, and in early 2014 a breakthrough proof-of-concept flight proved a system on a UAV was able to detect another aircraft using vision while in flight. “Boeing and Insitu Pacific have valued the ongoing collaboration with QUT and Professor Ford’s team,” said Brendan Williams, Associate Technical Fellow, Airspace Integration for The Boeing Company.
“The algorithm has been evaluated and matured in regular flight tests, with strong positive results, and we are looking to transition its use as a baseline technology in regular Beyond Visual Line of Sight operations.” Since then, the research has focussed on improving the performance, size and cost of the technology to improve the commercial feasibility of the system. Ford said the ultimate aim of this research was to enable UAVs to be more easily used in general airspace for commercial applications.
https://medium.com/thelabs/replicating-a-human-pilots-ability-to-visually-detect-aircraft-d9594913934a
['Qut Science']
2020-10-26 05:18:36.379000+00:00
['Machine Learning', 'Technology', 'Engineering', 'Business', 'AI']
How to create an irresistible offer
Know Your Audience Whether common sense or trite, I feel like I’m beating a dead horse whenever I bring this up. Which is why I sometimes gloss over it. But it must be said, because you’re not going to rack up sales with a tone-deaf offer. You must know who your offer is for. It’s like if someone came to Music Entrepreneur HQ and pitched a guest post about the environment (oh wait, this actually happened!). Sorry, though many musicians are environmentally conscious, trying to sell them your recycling services is going to prove an uphill battle. What are musicians interested in? Growing their fan base. Getting listeners for their music. Bringing a crowd to their shows. And so on. There might be an opportunity to sneak in some tips about reducing their carbon footprint in an offer that covers one or more of the topics just mentioned. But it would be best to assume no opportunity, because you want your content to be focused and targeted. Who is interested in recycling services? That’s what you’d want to figure out before pitching your offer. In like manner, if you wish to create an irresistible offer, you must know your audience and what their needs are. If you can, go and ask them now.
https://medium.com/datadriveninvestor/how-to-create-an-irresistible-offer-674a71ea7482
['David Andrew Wiebe']
2020-12-29 16:52:59.815000+00:00
['Business', 'Creativity', 'Entrepreneurship', 'Product', 'Freelancing']
5 Cities Where Man And Nature Collide
5 Cities Where Man And Nature Collide Bringing a little bit of wildness back into our most civilised spaces New York City and the last of the city’s green space, Central Park (by Jermaine Ee on Unsplash) As cities continue to expand and our urban sprawl pushes further and further into what was once forests, fields and savannah, we are increasingly coming into conflict with nature. Many animals are turning this to their advantage, and while some have roamed our cities for centuries, others are taking their first tentative steps into them. Sometimes the consequences are adorable, sometimes they can be deadly, but either way, these animals are changing how we interact with the world around us. Leopards — Mumbai, India A leopard spotted at night Mumbai, India is one of the most densely populated areas on the planet, with a population density of 32,000 people per square kilometre. It’s home to 19.75 million people and also at least 40 leopards. Bordering the Sanjay Gandhi National Park, the city of Mumbai has virtually no buffer zone between the urban sprawl and the park, and so the leopards, seeking an easy meal, are known to venture into the city at night. Lacking adequate trash infrastructure, waste piles up in the streets of Mumbai, attracting, among other things, stray dogs. There are 30 million stray dogs in India and 95,000 in Mumbai alone, and a sizeable minority of them carry the rabies virus. The leopards, however, are helping to combat both them and the virus. As the apex predator in this unique ecosystem, the leopards hunt the dogs, viewing them as a far easier meal than the deer commonly found in the park. At least 40% of their diet is thought to be dogs, and it’s been speculated that around 1,500 dogs are required per year to sustain a leopard population of this size. Each year they likely prevent 1,000 bites from stray dogs and around 90 rabies cases in a country that’s one of the worst affected in the world by the rabies virus, thanks largely to its stray dog population. While attacks on humans from the leopards are very rare, they do happen, often with fatal results. As Mumbai continues to expand, the risk will continue to increase, but efforts are being made to combat this. Initiatives aimed at educating people on how to stay safe in areas where there are known to be leopards have proven incredibly effective in reducing attacks, and those leading the charge in leopard conservation are confident that with proper care, both humans and leopards can flourish. Raccoons — Toronto, Canada A raccoon in the city (by Den Trushtin on Unsplash) Often referred to as the raccoon capital of the world, Toronto, Canada, has one of the highest raccoon populations in the world, with approximately one raccoon per twenty-nine people. Famously intelligent and nimble-fingered, raccoons will eat just about anything. It wasn’t until 2002 that raccoons colonised Toronto in such numbers, when the city introduced organic trash bins, without too much consideration as to how the raccoons living in the forests on the outskirts of the city would view this new, easy food source. Today the city still struggles with raccoons, thanks in large part to how adaptable they are to city life. All attempts to control them and stop them stealing food have so far failed. The Native Americans understood raccoons as clearly as we do today, with the word ‘raccoon’ coming from the Powhatan word ‘aroughcum’ meaning ‘animal that scratches with its hands’.
The Aztecs were a little more to the point, calling them ‘mapachitli’, which means ‘one who takes everything with its hands’. Hyenas — Harar, Ethiopia The Hyena Man of Harar (by Gill Penny from Flickr) For at least five hundred years, the people of Harar, Ethiopia, have been feeding the hyenas that live on the outskirts of the city in the caves of the Hakim Mountain. The hyenas make their home alongside the tombs of important religious leaders in the Islamic faith, and the people of Harar came to view them as a symbol of luck. They feed them porridge, butter and goat meat, and as long as the hyenas continue to eat, the city is said to have good fortune. When not being fed their annual porridge and meat, though, they roam the garbage dumps of the city, eating anything and everything they can find. As one of Africa’s largest predators, they have a huge appetite, which may be what led one family to feed them scraps of raw meat. Over the years, the hyenas have learned to come when called and for 40 years, one man fed a pack of them. He also trained his son to feed them, and his son is famous today for feeding them from his mouth. He places a stick between his teeth with a piece of meat on the end for the hyenas to eat. He, in turn, is teaching his son to do the same, while his sister has also entered the family business of feeding hyenas. The feeding of the hyenas has become one of the city’s most famous tourist attractions, and adventurous explorers can even feed the hyenas from their mouths themselves. As Harar continues to grow, however, the family worries that these visitors may be pushed out, and one of the most unlikely human-animal encounters with a rich history could come to an end. Foxes — London, UK Urban Fox — London Over 10,000 foxes live in London, accounting for 14% of the UK’s total population. Highly adaptable creatures, who can eat just about anything, foxes found a natural home amidst the gloomy streets of London. Their diet differs from that of their country-dwelling cousins, and they eat an even split of household trash and meat. Their favourite, and the most beneficial food for us, appears to be rats, and they’re noted as being a significant factor in keeping London’s rat population to a minimum. There are as many as 18 foxes per square kilometre in London, which has led to them taking up residence in some interesting places. In 2011, while the UK’s tallest building, the Shard, was being constructed, a fox nicknamed Romeo took up residence on the 72nd floor. That floor is now the open-air viewing gallery and the highest point accessible to the public; Romeo survived by eating scraps left by the construction workers. Referred to, quite aptly, as ‘a resourceful little chap’, Romeo was later caught by animal rescue workers and was released onto the streets of Bermondsey in London. Cats — Istanbul, Turkey One of many cats of Istanbul The history of cats in Istanbul goes back a long way. Originally, they came to the city as ship’s cats, which had been tasked with keeping the rat population down aboard ships during long sea voyages. When the Ottoman Empire took Istanbul (then Constantinople) in 1453, they brought with them a unique perspective on the city’s feline inhabitants. Cats have a special place in Islam, with one reportedly saving Muhammed’s life from a snake. As a reward, he blessed all cats to always land on their feet. Another story tells of Muhammed cutting the sleeve of his robe so that the cat sleeping there would not be disturbed.
This love of cats translated into a deep respect and close bond with them in Istanbul, the new capital of the Ottoman Empire, and cat populations only continued to grow as the people there fed and cared for them. Today the cats of Istanbul number at least 125,000 and are famously affectionate and tame, despite largely being street cats. As a bonus, they keep rat populations in the city in check too, which in previous centuries meant a reduced number of cases of the plague. Both the government and the residents, as well as tourists, feed the huge population and, as a result, Istanbul has become famous as a cat lover’s paradise, with virtually nowhere in the city that the felines haven’t made their own, not even the famous Hagia Sofia, which has been home to a cat called Gli for 16 years. Wild Cities Ultimately, whatever animals roam our cities, we must learn to live with them. Cities are expanding at an unprecedented rate, and for the wildlife whose homes we destroy in order to expand, there is often no choice but to move into our urban spaces. By working with wildlife organisations and learning about the creatures we share our cities with, we may be able to bring a little bit of wildness back into our most civilised spaces.
https://medium.com/age-of-awareness/5-cities-where-man-and-nature-collide-ba9c5f03a32c
['Danny Kane']
2020-08-04 00:50:02.259000+00:00
['Environment', 'Cities', 'Nature', 'Culture', 'Society']
Standard Cognition Uses Rockset to Deliver Data APIs and Real-Time Metrics for Vision AI
Standard Cognition Uses Rockset to Deliver Data APIs and Real-Time Metrics for Vision AI Walk into a store, grab the items you want, and walk out without having to interact with a cashier or even use a self-checkout system. That’s the no-hassle shopping experience of the future you’ll get at the Standard Store, a demonstration store showcasing the AI-powered checkout pioneered by Standard Cognition. The company makes use of computer vision to remove the need for checkout lines of any sort in physical retail locations. Their autonomous checkout system only requires easy-to-install overhead cameras, with no other sensors or RFID tags needed on shelves or merchandise. Standard uses the camera information in its computer vision platform to generate locations of individuals in the store (a type of in-store GPS) and track what items they pick up from the shelves. Shoppers simply exit the store with their items and get sent a receipt for their purchases. Employing computer vision to deliver a no-touch checkout experience requires that Standard efficiently handle large volumes of data from many sources. Aside from video data from each camera-equipped store, Standard deals with other data sets such as transactional data, store inventory data that arrive in different formats from different retailers, and metadata derived from the extensive video captured by their cameras. As is common with fast-growing markets, Standard’s data and analytics requirements are constantly evolving. Adding external data sources, each with a different schema, can require significant effort building and maintaining ETL pipelines. Testing new functionality on their transactional data store is costly and can impact production. Ad hoc queries to measure the accuracy of the checkout process in real time are not possible with traditional data architectures. To overcome these challenges and support rapid iteration on the product, the Standard engineering team relies on Rockset for their prototyping and internal analytics. Schemaless Ingest for Running Experiments Standard builds their production systems to access the streams of events they collect through a number of backend APIs, and the team is continually adding new API endpoints to make more data available to developers. Rockset plays a key role in prototyping APIs that will eventually be productionized and offers several advantages in this regard. When in the experimental phase, quick schema changes are required when analyzing their data. Rockset does not require schema definition for ingest, but still allows users to run fast SQL queries against the raw data using a very flexible schema-on-read approach. Using Rockset as their prototyping platform, Standard engineers can quickly experiment with different functions on the data. Standard also uses Rockset for fast prototyping because it can be readily accessed as a fully managed cloud service. Engineers simply connect to various data sources and ingest and query the data without having to manage servers or databases. Compared to the alternative of prototyping on their transactional data store, Standard’s cost of experimentation with Rockset is low. Ad Hoc Analysis of Operational Metrics Standard is constantly monitoring operational metrics from retailer partners, and their own demonstration store, to improve the efficiency and precision of their systems. Of particular importance in computer-vision-aided checkout is the accuracy of the transactions.
Were shoppers charged for the correct number of items? How accurate were the AI models compared to human-resolved events? The engineering team pulls together multiple data sets (event streams from the stores, data from vendors, store inventory information, and debug logs) to generate accuracy metrics. They stream all this data into Rockset, which allows Standard to run ad hoc queries to join across data sets and analyze metrics in real time, rather than wait for asynchronous data lake jobs. An Environment for Rapid Prototyping and Real-Time Analytics Standard incorporates Rockset into their development flow for rapid prototyping and real-time analytics purposes. They bring in transactional data and various third-party data sets, typically in CSV or Parquet format and each with its own custom schema, using the Rockset Write API for ingestion whenever new data is available. For feature prototyping, engineers build an experimental API, using the Rockset Node.js client, that is refined over multiple iterations. Once a feature is mature, it is converted to a serverless function, using Google Cloud Functions, in their online production system in order to present data as an API to developers. This flow allows the engineering team to move quickly, with no infrastructure required, when developing new functionality. Standard productionizes several endpoints a day using this methodology. In the real-time analytics scenario, data from disparate sources (structured data managed by Standard and unstructured third-party data) is loaded into Rockset. Once ingested into Rockset, engineers can immediately perform SQL queries to measure and analyze operational metrics. Rockset offers the Standard team an ideal environment for ad hoc queries, allowing engineers to bring in and query internal and external data sets in real time without having to worry about indexing the data for performance. Constantly Improving Checkout Accuracy and Product at Standard Standard’s Rockset environment allows the team greater speed and simplicity when developing new features and verifying the accuracy of their AI models. In a nascent market where correctness of the computer vision platform will be crucial in gaining adoption of its automated checkout system, the ability to constantly improve accuracy and product functionality gives Standard an important edge. “The team at Standard is always looking to increase the accuracy of the computer vision platform and add new features to the product. We need to be able to drive product improvements from conception to production rapidly, and that involves being able to run experiments and analyze real-time metrics quickly and simply,” says Tushar Dadlani, computer vision engineering manager at Standard Cognition. “Using Rockset in our development environment gives us the ability to perform ad hoc analysis without a significant investment in infrastructure and performance tuning. We have over two thirds of our technical team using Rockset for their work, helping us increase the speed and agility with which we operate.” As Standard continues to evolve its AI-powered autonomous checkout offering, the team hopes to bring even more data into its platform in the future. Standard will extend the same rapid development model, enabled by Rockset, to incorporating new types of data into its analysis. Its next project will introduce user behavior event streams into its analysis, using Rockset’s SQL engine to join across the multiple data sets being analyzed.
https://medium.com/rocksetcloud/standard-cognition-uses-rockset-to-deliver-data-apis-and-real-time-metrics-for-vision-ai-a080180352c7
['Kevin Leong']
2020-01-31 21:56:22.117000+00:00
['Real Time Analytics', 'Computer Vision', 'AI', 'Data', 'API']
Let’s Go for a Walk
Let’s Go for a Walk Why a daily walk is as important to me as brushing my teeth Photo by Elijah Hail on Unsplash I always wanted to be a runner. I grew up in Los Angeles, where there were a lot of runners in their tiny, 80s short-shorts and sweatbands. They looked so powerful and elegant. One of the most exciting moments of my young life was sitting out one night with all the neighbors at the side of the road and watching an Olympian run by with the torch just before the 1984 Los Angeles Olympic games began. I still remember that figure flying by us so elegantly, torch held high, hardly breaking a sweat. I wanna be like that, I thought. But even as a child, I had issues with running. For one thing, I had serious asthma, perpetually aggravated by the thick layer of smog that covered the city in the 80s. I also had a lot of joint issues, despite my youth, and running caused my knees to ache unbearably within minutes. I often defaulted to walking — less elegant, but far more enjoyable, I soon discovered, when my dad asked me to start joining him for his morning walks. I don’t remember us talking very much — I’ve always had a hard time finding things to talk about with my dad — and so we often walked in silence. I remember feeling safe out there in those early mornings, before anyone expected me to get dressed and organize my homework and do all the things I was supposed to do. There was a freedom out there: me, just walking around like there was nothing better to do. I also remember being entranced by the sights we passed by. I loved looking at people’s yards — especially those who had done a lot of landscaping. I found the flowers so beautiful, and wondered what kinds of little passageways were behind the hedges. I loved looking at the trees and often stopped to touch their bark. And I was in heaven when we went down the street to the beautiful park near the freeway entrance that felt like a private little woodland filled with twisting pathways.
https://medium.com/wilder-with-yael-wolfe/lets-go-for-a-walk-eb4c5b8ea541
['Yael Wolfe']
2020-11-16 17:15:23.980000+00:00
['Walking', 'Nature', 'Outdoors', 'Mental Health', 'Health']
Yearly Review: Most Read Stories of 2020
Yearly Review: Most Read Stories of 2020 Focusing on business, productivity, and writing As it’s natural, at this time of the year, to look back at some of the highlights that truly stand out, I thought I’d do the same for some of my top pieces on Medium. I truly gave a lot of energy to this platform over the past year, and do not regret any of that. For one, it has made me a better writer. It also challenged me to start new things (such as a new business, a few columns). Overall, it also has taught me a lot about the fine balance between writing for myself and for the audience. Optimising my talents and strengths, whilst also keeping creativity and fun in the picture — such a hard one to balance. I thought I’d look back at the 10 most successful pieces of 2020, and provide an honest opinion (and speculation) on what worked and why they got so much love (I am using a combination of reads and views). One thing is for certain, I am looking to come back in 2021 with a more honest approach to what I can take on, and how much I can commit to the different things I am juggling. Less is more, for real this time. In this piece, I dive deep into some stats as well as using my intuition to provide some lessons for fellow writers.
https://medium.com/the-business-of-wellness/yearly-review-most-read-stories-of-2020-aa42fe724a90
['Fab Giovanetti']
2020-12-30 09:49:53.852000+00:00
['Headlines', 'Business', 'Writing', 'Creativity', 'Writing Tips']
Data science for weather forecast: how to prove a funny theory
What are we trying to do? Before describing what this experiment is all about, I need to give you some context. My colleague Aouss Sbai (co-author of this article) and I were looking for a fun project to work on. So we asked our mentor Plamen Nedeltchev (Distinguished Engineer at Cisco) if he had anything in stock and he shared with us that he had a theory about the weather in San Jose, CA. He told us that he was able to predict if the summer was going to be hot or not solely based on the temperatures of the 19th, 20th and 21st of May. He asked us to prove it. You read that correctly: predict the average weather of 3 months based on 3 days. Frankly, we did not really believe in it and approached this task with great scepticism. Nevertheless, we got started and tried to understand how we could go about proving such an original statement. Data collection The first hurdle In any data science problem, the starting point of anything is data. What data do we want exactly? Remember our objective: predict the average summer temperature following the temperatures of 3 days in May (19th, 20th and 21st) for the city of San Jose. So we started looking for databases or archives of historical temperature data. Guess what? There were none that were either complete enough or simply available to us 😬. The only source of information we found was on a website, the old Farmer’s Almanac, which listed the average temperatures of each day since 1945. Amazing, the job is done then! Well, not exactly… This is the page corresponding to the 23rd of March, 2019. We have access to the mean temperature, which is exactly what we need. But when it comes to calculating the average temperature of the summer of each year since 1945, this becomes much more tedious (~4500 days). There was no way we would visit a different page for each day and manually gather this data in an Excel sheet… So we decided to automate this task and write a script for it! 🤖 Automation with a web scraping script Basically, the idea is that the script visits each page independently and looks for the data we want, calculates the average temperature of the summer, captures the temperatures of the 3 days in May, and repeats the process for each year starting from 1945. But how exactly can we do that? This is the URL of the page you just saw above. As you can see, the city and the date you want to access are specified in the URL. So we could tell the script which URL to visit for each day we were interested in. But here comes another issue. Once the script is on the page, how does it detect the temperature that we want? Well, as you might know, each webpage is written in HTML format, which means that each element that you see on screen belongs to a specific HTML tag. And, luckily for us, each page of that website was structured in the exact same way. So the only thing we needed to do was identify in which HTML tag the daily mean temperature was stored and tell the script to fetch that specific value. (For those interested, we used the Python library Beautiful Soup.) The script was then able to do all the nasty calculations for us and return for each year the average summer temperature and the individual temperatures of the 3 days of May, all bundled in a nice Excel sheet 📝. What our dataset looks like now (temperatures are in Fahrenheit): the first 3 columns are the 3 days of May, and the last one is the average summer temperature. But (there’s always a “but”), that was not enough.
In fact, when you think about it, each line of our Excel sheet represented 1 year (average temp of summer + 3 days of May). So, even if we went back to 1945, that represented only 73 lines… which is far too little data to pretend to do any sort of reliable analysis or prediction (a couple of hundred would be much better). So we decided to repeat the exact same process for 4 other cities of northern California around San Jose which were subject to the same type of weather but were far enough away not to have redundant data (taking San Francisco, for instance, which is by the sea, would have biased everything, and taking Milpitas, which is in San Jose’s suburbs, wouldn’t have added any relevant data). We now have 370 measurements, which is not ideal, but sufficient to start doing some analysis. Let the analysis begin! Data Transformation Now let’s try to simplify our dataset to make it easier to analyze. To start things off, we pulled the Excel file data into Alteryx, a data science tool to create end-to-end data pipelines. This will help us prepare and analyze the data all along the experiment. Ingested Data: we decided to add 2 columns which indicated the city and the year of the measurement We aimed to visualize the data using Tableau, which is one of the most commonly used Business Intelligence (BI) tools. Hence, we needed to transform the data into a format that is easily and efficiently consumed by Tableau. It is worth mentioning that we scraped the data in a format that was already structured, and, therefore, very little data cleaning was required. We merely reordered and reformatted some columns and checked that there were no null values.
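As a rough illustration of the scraping step described above, here is a hedged sketch using requests and Beautiful Soup. The URL pattern and the CSS class used to locate the mean temperature are assumptions made for the example; the real page would need to be inspected first to find the right tag, exactly as described in the article.

# Illustrative daily-temperature scraper. The URL pattern and the
# 'weather-history-mean' selector are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def mean_temperature(city_zip, year, month, day):
    url = ("https://www.almanac.com/weather/history/"
           f"zipcode/{city_zip}/{year}-{month:02d}-{day:02d}")  # assumed pattern
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    cell = soup.find("td", class_="weather-history-mean")       # assumed selector
    return float(cell.get_text(strip=True)) if cell else None

def summer_average(city_zip, year):
    # average of the daily means over June, July and August for one year
    temps = []
    for month, days in ((6, 30), (7, 31), (8, 31)):
        for day in range(1, days + 1):
            t = mean_temperature(city_zip, year, month, day)
            if t is not None:
                temps.append(t)
    return sum(temps) / len(temps) if temps else None

The real script would loop over every year since 1945 and the five cities, also grabbing the three May days, before writing the rows out to the Excel sheet mentioned above.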
https://towardsdatascience.com/data-science-for-weather-forecast-how-to-prove-a-funny-theory-f005ea2d1efe
['Julien Emery']
2019-04-06 18:29:28.919000+00:00
['Weather', 'Technology', 'Business Intelligence', 'Data Science', 'Data Visualization']
The new equation for ultimate AI energy efficiency.
The new equation for ultimate AI energy efficiency. Part V of our series, “Real Perspectives on Artificial Intelligence” features Rick Calle, AI business development lead for M12, Microsoft’s venture fund. How energy-intensive is the AI infrastructure today? And what does that mean for the future of the discipline? Rick leads AI business development for M12, Microsoft’s venture fund. He works at the intersection of AI algorithms, hardware computing efficiency, and novel AI use cases. During his time with Qualcomm’s AI Research, he worked with the team that launched Qualcomm’s AI Engine into over 100 different models of AI-enabled mobile phones. Today’s AI algorithms, software and hardware combined are 10X to 100X more energy-intensive than they should be. In light of Microsoft’s recent announcement of its carbon-negative commitment, my challenge to the industry is clear: let’s improve AI hardware and software so that we don’t overheat our planet. The computing industry is always optimizing for speed and innovation, but not necessarily considering the lifetime energy cost of that speed. I saw an inflection point around 2012 when the progression of AI hardware and algorithmic capabilities began to deviate from Moore’s law. Prior to that, most AI solutions were running on one, maybe two processors with workloads tracking to Moore’s law. A steady progression of workloads from the Perceptron in 1958 to systems like Bidirectional LSTM neural networks for speech recognition in the mid-2000s. Training AI models with multiple GPUs changed everything. After Alex Krizhevsky and team designed the AlexNet model with two GPUs in 2012, the computing power and electrical energy involved in training AI models took off at an entirely different pace: over 100X compounding every two years. Theirs was certainly not the first Convolutional Neural Network (CNN), but their “SuperVision” entry swept the field, winning the 2012 ImageNet competition by a huge margin. The next year nearly all competitors used CNNs and trained with multiple processors! Fast forward to 2019, and quickly developing innovative neural networks for Natural Language Processing may require hundreds or thousands of distributed GPUs — like self-attention encoder-decoder models that employ Neural Architecture Search (NAS) methods. According to a recent University of Massachusetts Amherst study, the amount of CO2 emitted from energy generation plants to power the computation involved in creating a new state-of-the-art AI model was the equivalent of five automobile lifetimes’ worth of CO2 emissions. If that’s what it takes to train only one new AI model, you can see that it is just not compatible with prioritization of sustainability. I believe we can incentivize the AI industry to make a change in the overall lifetime energy budget for AI workloads, and identify startups that are already committed to this cause. Where do you see the biggest opportunities for the highest impact energy savings? My colleagues and I think it’s joint optimization of three things: energy-efficient AI hardware, co-designed efficient AI algorithms and AI-aware computer networks. The challenge is that the energy consumption of AI models is likely the last thing an AI algorithm developer is thinking about (unless they’re focused on mobile phones). Usually the early optimizations are foremost around performance. AI engineers often think: “what’s my peak accuracy” and “how fast can I train the model” — both of which need faster computing and more energy.
I support a new success metric to help incentivize the AI industry and startups to reduce energy and CO2 emissions at data center scale. We need to shift the focus to higher throughput and lower lifetime total cost of ownership of a system for given computing workloads. I stress “system” because often hardware marketing metrics forget to mention the energy cost of extra processors, memory, and networks required for an AI training system.
Success Metric = Workload Throughput ÷ [ ($ Cost of System) + ($ Cost of Lifetime Energy of System) ]
Throughput measures how fast we can compute the required AI algorithms. In the phraseology of the late Harvard Business School Professor Clayton Christensen, workload throughput is the “job” that matters at the end of the day. Not peak Floating Point Operations Per Second (FLOPS), which are magical, mystical marketing numbers only loosely related to getting the computational “job” done. The denominator of this ratio is the computing hardware cost plus the lifetime energy cost of operating that hardware, including cooling and any extra networks and processors required. With this new ratio, AI designers have far more degrees of freedom to optimize software, hardware and algorithms. For example, the power consumption of an AI chip itself — whether it is 50 watts or 450 watts — doesn’t matter as much. The lifetime energy consumption of many chips to deliver a certain workload throughput is what matters most. If we can maximize this success ratio, then by definition energy and CO2 emissions are reduced as well. Why change the “performance” mindset that has been the status quo for so long? AI has an existential problem. As its models continue to get larger, more computationally complex, and more accuracy is desired to reach human performance levels, the energy required to train those models increases exponentially. At some point if things continue as they have, researchers won’t be able to get enough computers or energy to create the new AI algorithms we want. I’m really worried about that potentially stalling AI innovation. Not many research labs can string together 4,000 leading-edge processors and run them for weeks. They just don’t have the resources to deploy exascale computers. So at some point — without change — we have the potential to reach a ceiling of innovation. I’d hate to see another AI winter. If our AI industry innovates around the success metric, then we will benefit from AI that is more compatible with sustainability, yet meets performance goals with lower lifetime energy hardware, more efficient AI algorithms and lower energy infrastructure.
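As a quick numerical illustration of the success metric above, here is a small calculation comparing two hypothetical systems; all of the throughput, price and energy figures are invented for the example.

# Hypothetical comparison of two AI training systems using the success metric:
# throughput / (system cost + lifetime energy cost). All numbers are invented.
def success_metric(throughput_jobs_per_day, system_cost_usd,
                   avg_power_kw, lifetime_hours, usd_per_kwh=0.12):
    lifetime_energy_cost = avg_power_kw * lifetime_hours * usd_per_kwh
    return throughput_jobs_per_day / (system_cost_usd + lifetime_energy_cost)

# System A: faster chips with a higher power draw; System B: slower but frugal.
a = success_metric(120, system_cost_usd=300_000, avg_power_kw=45,
                   lifetime_hours=3 * 8760)
b = success_metric(100, system_cost_usd=250_000, avg_power_kw=20,
                   lifetime_hours=3 * 8760)
print(f"A: {a:.6f}  B: {b:.6f}")  # B wins the ratio despite lower peak speed

The point of the exercise is the one Calle makes: once lifetime energy sits in the denominator, the "slower" system can come out ahead, which is exactly the incentive shift the metric is meant to create.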
https://the-engine.medium.com/the-new-equation-for-ultimate-ai-energy-efficiency-119eccafb38c
['The Engine']
2020-06-23 16:19:38.710000+00:00
['AI', 'Artificial Intelligence', 'Climate Change', 'Energy', 'Computing']
How To Find Peace In The Eye Of The Stress Storm Around You
We Can Get Away Without Going Away “People try to get away from it all — to the country, to the beach, to the mountains. You always wish that you could too. Which is idiotic: you can get away from it anytime you like. By going within. Nowhere you can go is more peaceful — more free of interruptions — than your own soul…An instant’s recollection and there it is: complete tranquility.” — Marcus Aurelius, “Meditations”, Gregory Hayes Translation The words above were likely written around 180 AD, but the idea about “getting away” is often mouthed by a stressed worker in the present day. How many times have you wished you could get away? Ideas of a calm beach and umbrella drink may float in your daydreams. Likely ancient Romans did the same thing. However, one thing the emperor couldn’t do was “get away” — his chaotic times and life allowed no time for a lavish vacation. As Donald Robertson explains in his book “How to Think Like A Roman Emperor”, Marcus dealt with nonstop stressful personal and job-related issues. He lost 7 of his 13 children prematurely. One of his friends attempted to dethrone him by armed rebellion and a letter from his own wife may have started the attempt. The Antonine Plague ravaged the empire. It’s thought to have killed nearly 10% of Rome’s 75 million people at the time. A “friendly” German tribe rebelled and attacked the empire, forcing Marcus to live most of his late and sickly life at Spartan-like battle camps. Now, this is a stressful life. However, Marcus never turned into one of those horrific Roman tyrants you see portrayed in movies. So, how did he do it? As Marcus himself points out in his journal, he found a way to get away into his own mind. The journal he carried, which became Meditations, was his getaway. As he himself mentioned, it only took an instant to escape to “complete tranquility”. While he didn’t leave a detailed explanation, the emperor shows he could escape his chaotic world without physically leaving. This is the ultimate discovery for us in the present day. We don’t have to jump on a plane and go somewhere to escape the ever-present stress. A peaceful getaway and place for renewal is much closer than we can ever imagine.
https://medium.com/mind-cafe/how-to-find-peace-in-the-eye-of-the-stress-storm-around-you-f5db9fdfe298
['Erik Brown']
2020-12-26 14:54:39.962000+00:00
['Health', 'Philosophy', 'Mindfulness', 'Psychology', 'Self Improvement']
Write First with Your Head, Then Revise with Your Heart
By Caroline Donahue — THE BOOK DOCTOR After how long January feels, February seems to flash past, a quick five minutes. In the midst of this zooming month, I have begun revising my novel. So far, the daunting steps of reading the first draft over several times and considering whether the structure is really working have been completed. As I waded through scenes that took me years to write, only to make judgements on whether they are, as Heidi Klum used to say on Project Runway, “In” or “Out,” I was struck with the realization that a first draft and later ones have entirely different intentions. A first draft is for the head. We need to understand a lot of things through writing a first draft. To be clear, you may write many sections of a book many times in a first draft. But for the sake of clarity, I think of a first draft as the first time you’ve written a piece all the way through to the end and have been able to type THE END, if only to delete it immediately afterward. This first draft is to understand what happens in the book. This is true for fiction as well as memoir. Anything with a narrative. We are writing that first draft to understand the scope, to know what is part of the story, who the players are, what they are like, and what span of their lives we will witness as readers. There are a lot of decisions to make in this first draft, and our head works very hard to make them. First person or third? Multiple perspectives? Multiple timelines or chronological? Past tense or the increasingly trendy present? A first draft is often accompanied by lists of decisions to make. I have even assigned my clients these sorts of lists when they get lost in the weeds. But now that I have pushed the boat away from the dock and am floating in the middle of the water, far away from shore in my revision, I have come to understand that the second draft is no longer the draft of the head. I know what happens and who is involved, my tense, my POV and the span of time covered. So what is this new draft for? This, my friends, is when the heart gets involved. If you simply tell a story point by point but without any emotion or atmosphere or without engaging the senses, you’ll never get into your reader’s heart. My editor and mentor said when I gave him the end of my first draft that I now had the foundation and structure of a book, much like the frame and roof of a house. But now, I need to connect the electricity and decorate the book so the reader feels at home in it. And this is when you need to start re-reading what you’ve written with your feelings at the front. Try to distract your head with tasks that will keep it away from the action. Have it format the draft or make sure the line spacing is even while you get on with the real work of making the book come to life. Look at all the characters from an emotional perspective rather than a factual one. Instead of asking what the character does for a living, for example, ask how she feels about it. Does she love her work or hate it? Or, even better, does she love it even while she is going broke because it doesn’t cover all her bills. There we go… the heart has been engaged. Read through anything you write a first time to see if it makes sense, if it is logical. This is an essential part of revision; however, you’re not finished once you’ve determined that everything makes sense.
The review that puts a hand on your heart and asks if you are feeling anything when you are reading it is the one that will make the difference between a fine book that people forget about quickly and one that they text their friends in the middle of the night to tell them that they absolutely must read it. I recently read a book twice in a row because I connected so completely to the emotional level of the story, even though I found a couple of factual errors that could have been easily avoided. A reader will forgive small breaks in logic, but she won’t forgive a lack of feeling. So, as you consider your writing this month, make sure you know where your heart stands on the way it’s going. As writers, our ability with language makes us susceptible to staying in our heads, but I hope I’ve convinced you that the key to a masterpiece is in the heart. Originally published at https://thewildword.com on February 27, 2020.
https://thewildwordmagazine.medium.com/write-first-with-your-head-then-revise-with-caroline-donahue-95cd1f784109
['The Wild Word Magazine']
2020-03-06 16:27:52.294000+00:00
['Writing Tips', 'Creativity', 'Writing']
3 Web Technologies Killed by Google
AngularJS AngularJS is perhaps the first relevant JavaScript framework to appear. It was released by Google in 2010 — at a time when the most prominent JavaScript library was jQuery. Instead of just a library like jQuery, AngularJS, also known as Angular 1, is a whole framework that brought the MVVM concept to the world of front-end development. In 2016, the Angular we know today was released. According to Wappalyzer, many large websites still use AngularJS for their front-end — but support will be discontinued next year. The technology behind AngularJS is simply outdated — because modern frameworks like React, Vue, and Angular all use a CLI by now. This allows us to write code in, for example, React.js that would not work in a browser — in React’s case, it is the JSX syntax that is converted by the CLI into classic JS & HTML for the production version. AngularJS, on the other hand, reminds us very much of Vue.js when we use it without a CLI. Instead of converting the code we write for production, we write everything directly in our HTML and JS files. This includes the so-called directives, which we implement as HTML attributes: data-ng-repeat="item in items" Without the JavaScript code provided by AngularJS, the browser could not do anything with these attributes — a classic example of client-side rendering. But the trend is more and more towards server-side rendering and static pages where our JavaScript data structures are converted to HTML that can be rendered in the browser. While Angular has Angular Universal to render a page on the server side, AngularJS seems to lack this possibility. Working without a CLI and simply importing the library over a CDN and writing code like jQuery is not that complicated. Still, CLIs have become an integral part of the developer community — regardless of the framework or library, because it makes sense to have TypeScript, Linting, and transcompiling support. Without a CLI, however, this is virtually unthinkable. As of December 2021, long-term support for AngularJS will end.
https://medium.com/javascript-in-plain-english/killed-by-google-aa2c71c324cf
['Louis Petrik']
2020-11-15 12:27:39.013000+00:00
['Web Framework', 'Software Development', 'Google', 'Web Development', 'Cloud Computing']
When You Feel like You Can’t Live up to Your Own Writing
When You Feel like You Can’t Live up to Your Own Writing

Late night self-reflections of a writer on book deadline

A page from my journal that night.

I was rolling around in bed at 2 am. My mind was racing with outstanding tasks, unwritten paragraphs, and self-doubt. Together with my co-author John Fitch, I’m in the final stages of writing a book about the importance of Time Off. And we are on a rapidly approaching deadline to hand the draft over to our editor. Many authors I talked to in the past told me that they were a mental mess in the days before handing off their manuscript, but I hoped I’d be immune to this. Especially given the topic we are writing about. But there I was, unable to sleep because my mind couldn’t stop worrying about the book. And at a deeper layer, worrying about the worrying. So I did the only thing I could think of at that moment to calm my mind down. I got up and put my thoughts to paper, writing the following words in my notebook to reassure myself of what I am doing. It helped me a lot. Maybe if you are in a similar situation, it can help you too.
https://maxfrenzel.medium.com/when-you-feel-like-you-cant-live-up-to-your-own-writing-93e1d94f0cf6
['Max Frenzel']
2019-10-25 01:28:29.617000+00:00
['Personal Growth', 'Self', 'Creativity', 'Journaling', 'Writing']
Data Exploration and Analysis Using Python
Data Exploration and Analysis Using Python

Simple ways to make your data talk

Data exploration is a key aspect of data analysis and model building. Without spending significant time on understanding the data and its patterns, one cannot expect to build efficient predictive models. Data exploration, comprising data cleaning and preprocessing, takes a major chunk of the time in a data science project. In this article, I will explain the various steps involved in data exploration through simple explanations and Python code snippets.

The key steps involved in data exploration are:
> Load data
> Identify variables
> Variable analysis
> Handling missing values
> Handling outliers
> Feature engineering

Load data and Identify variables: Data sources can vary from databases to websites. The data sourced is known as raw data. Raw data cannot be directly used for model building, as it will be inconsistent and not suitable for prediction. It has to be treated for anomalies and missing values. Variables can be of different types, such as character, numeric, categorical, and continuous.

Variable Type

Identifying the predictor and target variables is also a key step in model building. The target is the dependent variable, and the predictor is the independent variable based on which the prediction is made. Categorical or discrete variables are those that cannot be mathematically manipulated. They are made up of fixed values such as 0 and 1. On the other hand, continuous variables can be interpreted using mathematical functions like finding the average or sum of all values. You can use a few lines of Python code to understand the types of variables in your dataset.

#Import required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

#Load the data
titan = pd.read_csv("../input/titan.csv")

#Get an overview of the data
titan.head()
titan.tail()
titan.sample(10)

#Identify variable types
titan.dtypes
titan.info()
titan.describe()

Variable Analysis: Variable analysis can be done in three ways: univariate analysis, bivariate analysis, and multivariate analysis.

Variable Analysis

Univariate analysis is used to highlight missing and outlier values. Here each variable is analysed on its own for range and distribution. Univariate analysis differs for categorical and continuous variables. For categorical variables, you can use a frequency table to understand the distribution of each category. For continuous variables, you have to understand the central tendency and spread of the variable. It can be measured using the mean, median, mode, etc. It can be visualized using a box plot or histogram.

#Understand various summary statistics of the data
include = ['object', 'float', 'int']
titan.describe(include=include)
titan.describe()

#Get the count of values in a categorical variable
titan.survived.value_counts()
titan.age.hist(figsize=(10,5))

Histogram

Bivariate Analysis is used to find the relationship between two variables. The analysis can be performed for any combination of categorical and continuous variables. A scatter plot is suitable for analyzing two continuous variables; it indicates the linear or non-linear relationship between the variables. Bar charts help to understand the relation between two categorical variables. Certain statistical tests are also used to effectively understand bivariate relationships. The SciPy library has extensive modules for performing these tests in Python.
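For instance, a minimal sketch of one such test (assuming the same titan dataframe loaded above) is a chi-square test of independence between two categorical variables, here passenger class and survival:

# A small illustration of a bivariate statistical test with SciPy.
# Assumes the titan dataframe from above with 'pclass' and 'survived' columns.
import pandas as pd
from scipy import stats

# Contingency table of the two categorical variables
contingency = pd.crosstab(titan['pclass'], titan['survived'])

# Chi-square test of independence: a small p-value suggests the two
# variables are related rather than independent
chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4f}")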
Bivariate Analysis

Matplotlib and Seaborn can be used to plot different relational graphs that help visualize bivariate relationships between different types of variables.

Scatter Plot

iris = sns.load_dataset("iris")
sns.relplot(x='sepal_length', y='petal_length', hue='species', data=iris)

relplot = sns.catplot(x="pclass", hue="who", col="survived", data=titan, kind="count", height=4, aspect=.7)
relplot

Handling Missing Values: Missing values in the dataset can reduce model fit. They can lead to a biased model, since the data cannot be analysed completely, the behavior and relationship with other variables cannot be deduced correctly, and the result can be wrong predictions or classifications. Missing values may occur due to problems in data extraction or data collection, and they can be categorized as MCAR, MAR, and NMAR.

Missing Values

Missing values can be treated by deletion, mean/mode/median imputation, KNN imputation, or using prediction models.

Handling Missing Values

You can visually analyse the missing data using a Python library called Missingno.

import missingno as msno
msno.bar(titan)
msno.heatmap(titan)

#Candidate values for imputation
np.mean(titan['age'])
from scipy import stats
stats.mode(titan['embarked'])

#Impute on a copy of the data
titancopy = titan.copy()
titancopy['age'].fillna(29, inplace=True)
titancopy['embarked'].fillna("S", inplace=True)

Handling Outliers: Outliers can occur naturally in the data or can be due to data entry errors. They can drastically change the results of the data analysis and statistical modeling. Outliers are easily detected by visualization methods like box plots, histograms, and scatter plots. Outliers are handled like missing values: by deleting observations, transforming them, binning or grouping them, treating them as a separate group, or imputing values.

Box Plot

import plotly.express as px
fig = px.box(titan, x='survived', y='age', color='pclass')
fig.show()
px.box(titan, y='age')
px.box(titan, x='survived', y='fare', color='pclass')

#Adding a trendline to the data
x = iris.sepal_length
y = iris.petal_width
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "y--")
plt.show()

Feature Engineering: Feature engineering is the process of extracting more information from existing data. Feature selection can also be part of it. Two common techniques of feature engineering are variable transformation and variable creation. In variable transformation, an existing variable is transformed using certain functions. For example, a number can be replaced by its logarithmic value. Another technique is to create a new variable from an existing one, for example, breaking a date field in the dd/mm/yy format into date, month, and year columns.

Variable Transformation

titancopy = titan.copy()

#Variable transformation
titancopy['alive'].replace({'no':0, 'yes':1}, inplace=True)

#Convert boolean to integer
titancopy["alone"] = titancopy["alone"].astype(int)

Two other data transformation techniques are encoding categorical variables and scaling continuous variables to normalize the data. Whether these are needed depends on the model used for evaluation, as some models accept categorical variables. Irrelevant features can decrease the accuracy of the model. Feature selection can be done automatically or manually. A correlation matrix is used to visualize how the features are related to each other or to the target variable.
Correlation Matrix

titancopy.corr()

plt.figure(figsize=(10,10))
corr = titan.corr()
ax = sns.heatmap(
    corr,
    vmin=-1, vmax=1, center=0,
    cmap=sns.diverging_palette(20, 220, n=200),
    square=True,
    annot=True
)
ax.set_xticklabels(
    ax.get_xticklabels(),
    rotation=45,
    horizontalalignment='right'
)
ax.set_yticklabels(
    ax.get_yticklabels(),
    rotation=45,
);

The scikit-learn library provides a few useful classes, such as SelectKBest, to select a specific number of features from a given dataset. The tree-based classifiers in the same library can be used to get feature importance scores.

This covers some of the key steps involved in data exploration. Each of these steps can be repeated depending on the size of the data and the requirements of the model. Data scientists spend the maximum amount of time on data preprocessing, as data quality directly impacts the success of the model. All the code snippets shown here are executed in the Exploratory Data Analysis and Visualization Kaggle notebook.
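As a small extension (a sketch of my own, not part of the original notebook), KNN imputation and SelectKBest can be combined like this; the column choice is purely illustrative and assumes those numeric columns exist in the titan dataframe:

# Sketch: KNN imputation plus automatic feature selection with scikit-learn.
# Column names are illustrative; adjust them to the columns in your dataset.
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.feature_selection import SelectKBest, f_classif

X = titan[['pclass', 'age', 'sibsp', 'parch', 'fare']]
y = titan['survived']

# Fill missing values using the 5 nearest neighbours instead of a global mean
imputer = KNNImputer(n_neighbors=5)
X_imputed = pd.DataFrame(imputer.fit_transform(X), columns=X.columns)

# Keep the 3 features most related to the target according to an ANOVA F-test
selector = SelectKBest(score_func=f_classif, k=3)
selector.fit(X_imputed, y)
print(X_imputed.columns[selector.get_support()])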
https://towardsdatascience.com/data-exploration-and-analysis-using-python-e564473d7607
['Raji Rai']
2020-06-12 18:54:51.038000+00:00
['Python', 'Data Analysis', 'Data Science', 'Data Visualization', 'Data Exploration']
The One Year Plan For Cracking Coding Interviews
The One Year Plan For Cracking Coding Interviews About my hustle before cracking interviews. It took me one year to go from a noob programmer to someone decent enough to crack coding interviews for getting internships and gaining experience. I still have a long way to go, but the first step to being a good programmer is working in the real world and getting experience, which can be best gained by internships. And if you want an internship, you have to crack the interview first. Which brings us to this blog. Photo by Jordan Whitfield on Unsplash I have broken down my one-year plan, which I diligently followed, and will hopefully help you with your planning if you are in the starting stage. Prerequisite: Knowing the basics and syntax of one programming language. Most students tend to know Java, C, or Python from their colleges/highschools. You can stick to the one you are comfortable with from these three, but if C is your preferred language, I would recommend you to switch to C++. My first language was C, which made me switch to C++. I learned Java on the side, enjoyed it more, and decided to practice competitive coding in Java, and so every interview I have ever cracked was by using Java. I had zero experience in python, but after joining Facebook, all of the code I have written as an intern is in Python. So my point is, there is no superior language amongst these three, try not to worry about which one to choose. Just pick one, crack interviews in that one, and you can learn the rest on the go depending on where you get placed. Here’s the plan: The month-specific blogs that are released so far have been linked below, and the rest are coming soon. Month 1: Big O, Arrays and Strings: Read it here Month 2: Linked Lists: Read it here Month 3: Stacks and Queues: Read it here Month 4: Trees and Tries: Read it here Month 5: Hashmap, Dictionary, HashSet Month 5: Graphs Month 6: Recursion and Dynamic Programming Month 7: Sorting and Searching Month 8: Reading(about system design, scalability, PM questions, OS, threads, locks, security basics, garbage collection, etc. basically expanding your knowledge in whatever field required, depending on your target role) Month 9, 10, 11, 12: A mix of medium and hard questions in your preferred website. Practice by participating in contests, focusing on topics that you are weak at, mock interviews, etc. Source — forbes.com Here’s how I approach every topic in each month — Let’s say you are in month 4, and focusing on trees. You need to first understand what trees are, different types of trees, and be able to define class Node and Tree. You then need to be able to perform basic operations like adding, finding, and deleting an element, pre-order, in-order, post-order, and level-by-level traversal. Lastly, you practice different tree questions available on Hackerrank, Leetcode, or a website of your choice. You should target the easy questions first, and once you are comfortable, move on to medium and hard. The last 4 months are for solving a mix of different questions, via contests or otherwise, which is necessary because when you are practicing tree questions, you know you have to use a tree. But if you are given a random question, how will you know a tree would be the best approach? Also, always look for the most optimal solution in forums after solving it yourself. 
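To make the month 4 example concrete, here is a minimal sketch of those tree basics, written in Python for brevity (I practiced in Java, and the same structure carries over to Java or C++):

# Minimal sketch of the month 4 basics: a node class, BST insert,
# in-order traversal, and level-by-level traversal.
from collections import deque

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Binary search tree insert: smaller values go left, larger go right
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def in_order(root):
    # Left subtree, node, right subtree: yields sorted order for a BST
    if root is None:
        return []
    return in_order(root.left) + [root.value] + in_order(root.right)

def level_order(root):
    # Visit nodes level by level using a queue
    result, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        result.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return result

root = None
for v in [5, 3, 8, 1, 4]:
    root = insert(root, v)
print(in_order(root))     # [1, 3, 4, 5, 8]
print(level_order(root))  # [5, 3, 8, 1, 4]

Once these basics feel automatic, pre-order and post-order traversals, deletion, and the easy questions on Hackerrank or Leetcode are the natural next drills.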
You have an entire month, and if you manage to dedicate 40–70 hours a week, you’ll be able to master trees in such a way that if a tree question is thrown at you in an interview, you’ll be able to mostly solve it since you trained your mind to think that way with intense practice. If you are a student, dedicating this much time is definitely doable, even with side projects, homework, etc. Your grades might take a hit (my As became Bs in that one semester(month 9,10, 11, 12) when I was dedicating over 8 hours a day to competitive coding) but it was worth it. You should also try to build projects or do research on the side while preparing. Some people learn better by participating in contests in CodeForces, CodeChef, etc. while others prefer practicing questions. Again, there is no benefit of one over the other, do what you personally prefer. I do not believe in practicing particular topics for a particular company, some websites claim to have a set of questions dedicated to a particular company, eg: cracking the Google interview. I think the goal should be to be a better developer overall, focusing on just a few topics that Google tends to test candidates on may not be the best way to follow. Interviewers also judge you based on your LinkedIn, Resume, past experiences, courses taken, Github, degrees and certifications, projects, research papers, etc. Practicing competitive coding does not guarantee a job, but it does guarantee you’ll be able to crack technical interview rounds most of the time, and you’ll also be a better developer overall, which might help you when you build projects. Lastly, don’t stop. It may seem easy at first when you are motivated, but that fuel dies in a month or so. Keep your goal in mind, of course, it’s going to be hard, but the only ones who make it are those who stick to the plan. You can edit the plan if you need to, but once done, stick to it, even on your lazy days, even when you have a college fest or a party to attend, even when you are sleepy. Like I said, the ones who succeed are the ones who *stick to the plan*. This sums up my schedule at a high level. I plan on digging deep, and my next blog will only focus on month 1(Big O, Arrays and strings), the one after that will be month 2, and so on. I hope this was helpful, let me know if you want me to also write about any other topic on the side, or if you have any queries. I’d appreciate it if you could ask your questions on Instagram since I prefer to keep LinkedIn for professional opportunities, but either is fine. Thanks! Signing off! Anjali Viramgama Incoming Software Developer at Microsoft LinkedIn | Instagram
https://towardsdatascience.com/the-one-year-plan-for-competitive-coding-6af53f2f719c
['Anjali Viramgama']
2020-12-13 21:58:57.485000+00:00
['Competitive Programming', 'Google', 'Facebook', 'Coding', 'Technology']
How Facebook and Google use Machine Learning at their best
“Machine learning will automate jobs that most people thought could only be done by people.” ~Dave Waters

Hello everyone, today I would like to tell you how two of the most famous companies, Facebook and Google, use Machine Learning at their best to ease their tasks and do cool stuff that was earlier thought to be impossible.

What is Machine Learning? Have you ever wondered how we learn, or how from birth our brain learns each and every bit, be it our parents’ and friends’ faces, riding bicycles, or mathematical formulas? It is because our brain observes whatever we see and makes patterns (which we call experiences), and by analysing these patterns it predicts and makes further decisions. Because of this decision-making power of our brain, which computers lack, developers thought: why not give prediction and decision-making power to computers, which would add extra stars to their speed, i.e., computers could then take decisions and make predictions as fast as they do calculations. To achieve this, they brought in the concept of Machine Learning, which is to make programs that, based on the data provided to the computer (similar to the experiences of our brain), can predict results or target values. In this way, these programs basically help machines learn.

“Machine intelligence is the last invention that humanity will ever need to make.” ~Nick Bostrom

Now, talking about the companies: Google and Facebook took great advantage of Machine Learning and not only reduced workloads that were earlier handled by humans but also proved to be among the smartest, most lucrative, and most innovative companies for their clients.

How does Google use Machine Learning? Google has declared itself a machine-learning-first company. Google is the master of all. It takes advantage of machine learning algorithms and provides customers with a valuable and personalized experience. Machine learning is already embedded in its services like Gmail, Google Search, and Google Maps. Google services such as image search and the translation tools use sophisticated machine learning. This allows the computer to see, listen, and speak in much the same way as humans do. Much wow!

Gmail: As you all know, our social, promotional, and primary mails are separated into different boxes. This filtering is done by Google as it labels each email accordingly, and this is where machine learning plays a crucial part. User intervention is used to tune its threshold: when a user marks messages in a consistent direction, Gmail performs a real-time increment to its threshold. That is how Gmail learns for the future and later uses those results for categorization.

Smart replies: This is a really smart move by Google. With the help of this feature, you can reply instantly, in a second, with the suggested replies given by Gmail. ‘Smart Reply’ and ‘Smart Compose’ are indeed among the best products that Google has given its customers. These are powered by machine learning and will offer suggestions as you type. This is also a major reason why Google stands as one of the leading companies today. Also, it is not just in English; it will bring support for four new languages: Spanish, French, Italian, and Portuguese.

Google Search and Google Maps: These also employ machine learning; as you start typing in the search box, it automatically anticipates what you are looking for. It then provides suggested search terms for the same.
These suggestions are showcased because of past searches (Recommendations), trend (which everyone is looking for), or from your present location. For example — Bus traffic delays — Hundreds of major cities around the world, thousands of people traveling, One machine that is learning and informing . Google gets all the real-time data on bus locations and forecasts it in a jiffy. So, now you don’t have to wait long hours for your bus. With the combination of time, distance traveled, and individual events as datasets, it is now possible for Google to provide predictions. Now, there is no need to rely on bus schedules provided by public transportation agencies. With the help of your location, day of the week, and time of day, your estimated time of arrival (ETA) can be understood. The best invention ever done for students is Google search and no one will deny it. Google Search and Google Maps use machine learning too and help the people in their day to day tasks. You can check the awesomeness of Google Machine Learning by just going to Google and then typing or speaking just weather and without asking anything it will automatically tell you the whole weather report for your area. This is the cause of Machine Learning Google Assistant It helps one to assist in everyday tasks, be it household chores or a deal worth crores. The Google Assistant makes it easy for you to search for nearby restaurants when it’s raining heavily, helps you to buy movie tickets while on the go and find the nearest theatre from your place. Also, helps you to navigate to the theater. In short, you don’t have to worry when you have a smartphone, because Google takes care of everything. This is all done due to strong machine learning algorithms used by google. Google Translate The world is migrating. Leave the rest of the world, at least 24 languages are spoken in India itself, with over 13 different scripts and 720 dialects. Well, if we talk about the world there are roughly 6,500 spoken languages in the world today. Can’t thank Google enough cause we’ve all used Google Translate at some point (I hope, you travel a lot too).The best is that it’s free, fast, and is generally accurate. Its translation of words, sentences, and paragraphs have helped many to decode and understand. It is true that it is not 100% accurate when it comes to larger blocks of text or for some language, but it can provide people with a general meaning to make the understanding less complex. All this is possible because of Statistical Machine Translation (SMT). So, no matter how much you hate mathematics or statistics you will have to thank and love it . This is a process where computers analyze millions of existing translated documents from the web to learn vocabulary and look for patterns in a language. After that Google translates it. It then picks the most statistically probable translation when asked to translate a new bit of text. Speech Recognition: Ok, Google. The speech recognition feature enables the user to convert audio to text by applying powerful neural network models in an easy-to-use API. Currently, the API recognizes 120 languages and its variants to support the global user base. Through this voice, command-and-control can be enabled and the audio can be transcribed from the call centers. Also, the processing of real-time data can be done. Starting from streaming to prerecorded audio, speech recognition has mastered it all and all credits can be given to Google’s machine learning technology. 
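Before moving on to image search, here is a deliberately tiny, made-up illustration of the statistical idea behind that translation approach: among the translations observed in already-translated documents, pick the most probable one. Google's real system is of course vastly more sophisticated, and the phrases and counts below are invented purely for illustration.

# Toy illustration of statistical translation: pick the most frequent
# translation observed in a corpus of already-translated documents.
# The phrases and counts are made up purely for illustration.
from collections import Counter

observed_translations = {
    "thank you": Counter({"merci": 9120, "merci beaucoup": 310, "remercier": 45}),
    "good morning": Counter({"bonjour": 8770, "bon matin": 120}),
}

def most_probable_translation(phrase):
    counts = observed_translations[phrase]
    best, best_count = counts.most_common(1)[0]
    return best, best_count / sum(counts.values())

print(most_probable_translation("thank you"))  # ('merci', ~0.96)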
Reverse Image Search Google’s Search by Image is a feature that uses reverse image search and allows users to search for related images just by uploading an image or image URL. Google accomplishes this by analyzing the submitted picture and constructing a mathematical model of it using advanced algorithms. It is then compared with billions of other images in Google’s databases before returning matching and similar results. When available, Google also uses metadata about the image such as description. Reverse image search is a content-based image retrieval (CBIR) query technique that involves providing the CBIR system with a sample image that it will then base its search upon; in terms of information retrieval, the sample image is what formulates a search query. Image search creates categories that you might be looking for. With the image search, it becomes easy to search for similar images. It also helps to find the websites that contain these images and the other sizes of the picture you searched with. Google Adsense With the help of machine learning, Google keeps track of the users’ search history. With the help of that history, it recommends the advertisement to the user as now its aware of its target market. It’s heavily based on the search history data and machine learning helps Google to achieve this. It created a win-win situation. With Google AdSense, the website owners earn money from their online content and AdSense works by matching text and display ads to the site based on the content and the visitors. There are many more examples such as Google Music, Google Photos ,Google Adwords etc. which makes great use of Machine Learning and that is the reason that Google has the most number of users in most of the field as it not only makes our work easier but also solves the problem smartly. How Facebook uses Machine Learning? Machine Learning is the vital aspect of Facebook. It would not even be possible to handle 2.4 billion users while providing them the best service without using Machine Learning! Let’s take an example. It is mind-boggling how Facebook can guess the people you might be familiar with in real life using “People You May Know”. And they are right most of the time!!! Well, this magical effect is achieved by using Machine Learning algorithms that analyze your profile, your interests, your current friends and also their friends and various other factors to calculate the people you might potentially know. That’s only one example ,other aspects are the Facebook News Feed, Facial Recognition system, Targeted Advertising on your page, etc. which we would look below. Facial Recognition Facial Recognition is among the many wonders of Machine Learning on Facebook. It might be trivial for you to recognize your friends on social media (even under that thick layer of makeup!!!) but how does Facebook manage it? Well, if you have your “tag suggestions” or “face recognition” turned on in Facebook (this means you have provided permission for Facial Recognition), then the Machine Learning System analyses the pixels of the face in the image and creates a template which is basically a string of numbers. But this template is unique for every face (sort of a facial fingerprint!) and can be used to detect that face again in another face and suggest a tag. So now the question is, What is the use of enabling Facial Recognition on Facebook? 
Well, in case any newly uploaded photo or video on Facebook includes your face but you haven’t been tagged, the Facial Recognition algorithm can recognize your template and send you a notification. Also, if another user tries to upload your picture as their Facebook profile picture (maybe to get more popular!), then you can be notified immediately. Facial Recognition in conjugation with other accessibility options can also inform people with visual impairments if they are in a photo or video. Textual Analysis While you may believe photos are the most important on Facebook (especially your photos!), the text is equally as important. And there is a lot of text on Facebook!!! To understand and manage this text in the correct manner, Facebook uses DeepText which is a text engine based on deep learning that can understand thousands of posts in a second in more than 20 languages with as much accuracy as you can! But understanding a language-based text is not that easy as you think! In order to truly understand the text, DeepText has to understand many things like grammar, idioms, slang words, context, etc. For example: If there is a sentence “I love Apple” in a post, then does the writer mean the fruit or the company? Most probably it is the company (Except for Android users!) but it really depends on the context and DeepText has to learn this. Because of these complexities, and that too in multiple languages, DeepText uses Deep Learning and therefore it handles labeled data much more efficiently than traditional Natural Language Processing models. Targeted Advertising Did you just shop for some great clothes at Myntra and then saw their ads on your Facebook page? Or did you just like a post by Lakme and then magically see their ad also? Well, this magic is done using deep neural networks that analyze your age, gender, location, page likes, interests, and even your mobile data to profile you into select categories and then show you ads specifically targeted towards these categories. Facebook also partners with different data collection companies like Epsilon, Acxiom, Datalogix, BlueKai, etc. and also uses their data about you to accurately profile you. For Example, Suppose that the data collected from your online interests, field of study, shopping history, restaurant choices, etc. profiles you in the category of young fashionista according to the Facebook deep neural networks algorithm. Then the ads you are shown will likely cater to this category so that you get the most relevant and useful ads that you are most likely to click. (So that Facebook generates more revenue of course!) In this way, Facebook hopes to maintain a competitive edge against other high-tech companies like Google who is also fighting to obtain our short attention spans!!! Language Translation Facebook is less a social networking site and more a worldwide obsession! There are people all over the world that use Facebook but many of them also don’t know English. So what should you do if you want to use Facebook but you only know Hindi? Never fear! Facebook has an in-house translator that simply converts the text from one language to another by clicking the “See Translation” button. And in case you wonder how it translates more or less accurately, well Facebook Translator uses Machine Learning of course! 
The first click on the “See Translation” button for some text (Suppose it’s Beyonce’s posts) sends a translation request to the server and then that translation is cached by the server for other users (Who also require translation for Beyonce’s posts in this example). The Facebook translator accomplishes this by analyzing millions of documents that are already translated from one language to another and then looking for the common patterns and basic vocabulary of the language. After that, it picks the most accurate translation possible based on educated guesses that mostly turn out to be correct. For now, all languages are updated monthly so that the ML system is up to date on new slangs and sayings! News Feed The Facebook News Feed was one addition that everybody hated initially but now everybody loves!!! And if you are wondering why some stories show up higher in your Facebook News Feed and some are not even displayed, well here is how it works! Different photos, videos, articles, links or updates from your friends, family or businesses you like show up in your personal Facebook News Feed according to a complex system of ranking that is managed by a Machine Learning algorithm. The rank of anything that appears in your News Feed is decided on three factors. Your friends, family, public figures or businesses that you interact with a lot are given top priority. Your feed is also customized according to the type of content you like (Movies, Books, Fashion, Video games, etc.) Also, posts that are quite popular on Facebook with lots of likes, comments and shares have a higher chance of appearing on your Facebook News Feed. So theses are some of the cool stuffs about how theses large companies are benefitted by Machine Learning. So now I will take your leave by telling you some interesting words of the co-founder of Chatables , Amy Stapleton and director of Paetoro Dr. Dave Waters. We are entering a new world. The technologies of machine learning, speech recognition, and natural language understanding are reaching a nexus of capability. The end result is that we’ll soon have artificially intelligent assistants to help us in every aspect of our lives.” ~Amy Stapleton Predicting the future isn’t magic, it’s artificial intelligence.” ~Dave Waters Thank you for reading!!
https://ushivam4u.medium.com/how-facebook-and-google-uses-machine-learning-at-their-best-f43453f6109d
['Shivam Prasad Upadhyay']
2020-10-20 08:03:31.862000+00:00
['Machine Learning', 'Speech Recognition', 'Facebook', 'Google', 'Facial Recognition']
Product Seven
Product Seven

The seven steps necessary to produce a successful product.

Seven steps are necessary to properly execute a product. This process does not guarantee success. Other variables, such as team/personnel, financials, time, and so on, must also be considered.

The first step is the idea. It is essential to move from mind to an available product as efficiently as possible. When an idea is new it has the most energy. This energy will fade over time and ultimately the idea will become obsolete. If action is not taken, it allows an opportunity for others to act. There is a thought that every idea is given to or held by at least two individuals. Be the one who acts. He who hesitates is lost.

The second step is design. Manifesting an idea into something tangible is design. Facilitation is key so as not to extinguish the spark of the idea. Ideation is also essential and should be pushed to its breaking point. There is no repercussion for pushing an idea to impossibility. Once the limits have been pushed, the design may comfortably be accepted within the confines of possibility.

The third step is the (creative/technical) handoff. Now is the time for reality to ground the dreamer. A compromise between what is ideal and what is realistic must be made. This may require going back one or even two steps. This is not a failure and should not be frowned upon. It is extremely important not to move forward to the fourth step until both creative and technical parties have an aligned vision.

The fourth step is development. The technical team must be left alone and not interrupted during this phase. Every interruption is time lost. Enough time lost is a failed product. If clarification is needed, address it immediately, but do not interrupt the flow. If the third step was adequately performed, the technical team can be allowed isolation. This is ideal.

The fifth step is testing. An unbiased third party must now examine the work of the (creative and technical) team as it stands. All observations and discoveries are welcomed in this stage. Resolve and reconcile everything that comes to light before moving forward.

The sixth step is production. The product must be consumable by the population. Production must move like the gears of a clock. Any hiccup must immediately be addressed. Reliability is a key aspect of efficient production.

The seventh step is availability. The product must be presented to the population appropriately. If it is not immediately intuitive, it must be explained eloquently. The product can and ultimately will die in this state. What is important is that the product has a successful life cycle before eventually dying. The integrity of the original idea must be retained. Do not milk the product to its dying breath. There is no honor in this. Instead, start anew with an idea, and begin again.
https://uxdesign.cc/product-seven-d4fa0b6ec131
['Daniel Soucek']
2018-06-12 05:00:37.953000+00:00
['Development', 'User Experience', 'Design', 'Software Development', 'Product Design']
iOS 14: Apple Finally Listened
What’s new in iOS 14? The main changes that have been made to the upcoming version of iOS are ‘quality of life’ improvements that help reduce clutter, and give more information at a glance. 1. Widgets For Android users, widgets have been useful feature for years. If you want to look at your reminders quickly, you can just unlock your phone and look at the reminder’s widget, rather than having to open the reminders app. If you want to see your shopping list, you can place a widget on your homepage so it’s easily accessible. The lack of a proper widget system has been a flaw in iOS for years in my opinion. The closest thing we have had to Android’s widgets in previous years is the leftmost page on your home menu. All of the widgets look the same, and you have to scroll down if you want to find the widget for a specific app. Image: Apple Newsroom. iOS 14 brings widgets to a whole new level. The leftmost page on your home-screen has transformed into a tile-based menu where all the widgets are separated into smaller, but more accessible positions. The widgets stand out from one another, and can be dragged onto your home-screen, with the ability to position them in the space of any two-by-two area of app space. You can also drag widgets on top of each other, giving you the option to scroll through widgets — a smart feature that will stop your home-screen from having too many widgets. This is known as ‘Smart Stack’. Apple have also implemented a feature that allows Smart Stacks to automatically scroll during certain times of day, based on your activity. For instance, you could wake up and find the Apple News widget is currently being displayed; but at 11am the widget has switched pages to the reminders option. In the evening, the widget may have updated to show you a summary of your exercise for the day, or perhaps a show you can watch. The ‘Widget Gallery’ is a menu that can also be used to drag widgets onto your home-screen. The gallery gives the user the option to change the size of widgets to fit different areas of the home-screen. You could have a widget that is the size of a two-by-two area of apps, a two-by-four area, or even a four-by-four area of space. I am very much looking forward to seeing how widgets will make my iPhone a more simple, informative space for when I’m on the go.
https://medium.com/swlh/ios-14-apple-finally-listened-68e2f27db47c
['Joe Mccormick']
2020-06-27 20:50:19.305000+00:00
['Design', 'Software Development', 'Business', 'iOS', 'Apple']
Why Small Data is a Big Deal
Why Small Data is a Big Deal

Here’s how small data can make a big impact.

Big Data in the News

When we hear about Artificial Intelligence in the news, it’s usually about some shiny new breakthrough built using big data — things like Tesla’s self-driving cars, OpenAI’s text generators, or Neuralink’s brain-computer interfaces.

Small Data in Real Life

However, as with much of what we read in the news, these are outliers. In reality, most AI projects look completely different. Most of us aren’t trying to create superintelligence; we’re just trying to optimize KPIs, whether it’s churn, attrition, sales, traffic, or any of a million other metrics. And to optimize KPIs, you don’t need big data.

The most common sources of data for KPI optimization are day-to-day business tools like Hubspot, Salesforce, Google Analytics, or even Typeform. Unless you’re one of the outliers, the exports from these tools likely wouldn’t qualify as big data. If you do work with big data, then all the more power to you. You’ll potentially be able to create even more accurate models, but it’s not a prerequisite to adding value to an organization.

Small Data for Object Detection

When we hear about “big data,” it’s often in the same breath as “object detection.” Indeed, the norm for object detection models has been training on massive amounts of data. Until now. Researchers at the University of Waterloo released a paper discussing ‘Less Than One’-Shot Learning, or the ability for a model to accurately recognize more objects than the number of examples it was trained on.

A popular dataset for computer vision experiments is called MNIST, which is made up of 60,000 training images of handwritten digits from 0 to 9. A previous experiment by MIT researchers showed that it was possible to “distill” this huge 60,000-image dataset down to just 10 images, carefully engineered to be equal in information to the full set, achieving almost the same accuracy.

LO-Shot Learning

In the new LO-Shot paper, researchers figured out they could create images that blend multiple digits together and feed them to the model with “soft” labels, much as a centaur shares partial features of both a man and a horse. In the context of MNIST, this refers to the idea of digits sharing some features, such as a digit of 3 looking somewhat like an 8, a tiny bit like a 0, but nothing like a 1 or a 7. Thus, training images can be used to understand more objects than are even in the training set. Astonishingly, it seems there’s virtually no limit to this concept, meaning that carefully engineered soft labels could encode any number of categories. “With two points, you can separate a thousand classes or 10,000 classes or a million classes.” (source)

More Than Theory

The paper demonstrates this concept with a kNN (k-nearest neighbors) approach, which classifies objects by drawing boundaries between groups of points plotted along feature axes, making it easy to visualize. By creating tiny synthetic datasets with carefully engineered soft labels, the kNN algorithm was able to detect more classes than there were data points.

Limitations

While this approach worked astonishingly well applied to the visual, interpretable kNN algorithm, it may not work for complicated and opaque neural networks. The previous example of “data distillation” also doesn’t work as well, since it requires that you start with a very large dataset.

The Implications

More than just a fascination, this research has important implications for the AI industry, namely in reducing data requirements.
Intense data requirements make it extremely expensive to train AI models. For instance, GPT-3 cost upwards of $4 million to train. There are also concerns that inference, or making predictions with the model, is too expensive for researchers. This is more of a reflection of the extreme size and computing requirements of GPT-3, a 175-billion parameter model, than anything else. The status quo for current State-of-the-Art models is to train on as much data as possible. GPT-3, a language model, took that to the extreme by training on essentially the entire Internet. With the new LO-Shot Learning breakthrough, it may one day be possible to accomplish SOTA with just a few data points, resulting in extremely lightweight and efficient models.
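To get a feel for how soft labels can stretch a couple of data points across several classes, here is a toy sketch of the idea (my own illustration, not the paper's actual algorithm): each prototype point carries a probability distribution over classes, and a query point combines those distributions weighted by distance.

# Toy sketch of soft-label kNN: two prototype points, each carrying a
# probability distribution over three classes instead of one hard label.
# This illustrates the idea only; it is not the paper's exact method.
import numpy as np

prototypes = np.array([[0.0], [1.0]])      # two points on a line
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # mostly class 0, partly class 1
    [0.0, 0.4, 0.6],   # mostly class 2, partly class 1
])

def predict(x):
    # Weight each prototype's label distribution by inverse distance,
    # then pick the class with the highest combined probability.
    distances = np.linalg.norm(prototypes - x, axis=1)
    weights = 1.0 / (distances + 1e-9)
    combined = (weights[:, None] * soft_labels).sum(axis=0)
    return int(np.argmax(combined))

# Three regions of the line map to three classes, even though
# there are only two labelled points.
print([predict(np.array([v])) for v in (-0.2, 0.5, 1.2)])  # [0, 1, 2]

In the actual paper, the prototype locations and soft labels are carefully engineered, which is what lets a handful of points encode far more classes.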
https://medium.com/datadriveninvestor/why-small-data-is-a-big-deal-83c17e118785
['Obviously Ai']
2020-12-27 16:18:59.046000+00:00
['Data Analysis', 'Artificial Intelligence', 'Data', 'AI', 'Data Science']