Practising Self Care And Loving The Body Temple
If you are reading this, you are seeking to improve and expand your way of life. You don’t want to continue to gather intellectual materials; you are focused on practising a new way of living. Maybe you have been building rituals into your daily life, bringing your attention to changes that will strengthen your creative forces. You may be focused on changing the planet, saving those who have been left behind. Deepak Chopra believes in the principle — to change the world we first have to change ourselves. Start by changing your focus. As you practice this new focus, you will improve and advance. Eventually, the advancements bring real-life changes into your life. Momentum replaces inertia, and then you are pulled forward by your own practice. Yoga practice by Sofie Zbořilová from Pixabay Treat your body as a temple so it contributes to your ability to share your gift with the world. You don’t have to have the perfect body to contribute your gifts. Don’t compare yourself with anyone else. We are all here with different challenges and advantages to overcome. Our gifts are different as well. Stephen Hawking, Stevie Wonder, and Michael J. Fox are all individuals who faced physical challenges or disease. They didn’t let that get in the way of sharing their gift with the world. What is getting in the way for you? The first step is to discover what it is that is preventing you from contributing. Look at your body as a functioning whole. The body is a community of parts that function together. Your blood flows to all areas of your body, and the various systems work throughout your body to connect, heal, and clear debris. The systems work to provide essential minerals and nutrients that help you function at your best level. Sit quietly and scan your body. Take an inventory, then place your attention on all of the areas that work well. Give gratitude to these functioning parts of the body temple. Be grateful for the sharpness of your brain, your functioning heart, your kidneys, your stomach. Begin at the bottoms of your feet and slowly work upwards, being grateful for each small spot. Go slowly. If you discover an area that is stuck or problematic, don’t focus on it, but make a small note of it for future reference. Move all the way up through your body, appreciating each functioning part, and giving thanks for how they work together as the ultimate machine. This is the body you are using to express your gifts, so infuse it with light and love. Insert the color pink into it and let it float gently in your love. Now take that joy and appreciation for the parts that are amazing and shift it to one of the areas that are not working. Bring that light and appreciation to the part that needs this loving nourishment and let it linger there. Don’t judge, just listen. Listen to what your body is telling you. Pay attention to small clues that might come to you. Do you need more exercise or better food, or do you need to stop abusing alcohol or other substances? The message that you hear might be that you are on the right track. Continue doing what you are doing to support healing. Your body temple is full of light and it’s capable of great healing.
https://medium.com/illumination/practicing-self-care-and-loving-the-body-temple-c63e81d1b38a
['Tree Langdon']
2020-12-27 10:40:30.501000+00:00
['Relationships', 'Meditation', 'Self Improvement', 'Self Care', 'Science']
On Trust
On Trust And all consuming worry My first time climbing outside of the gym. 15 weeks pregnant. My dad used to tell me to be like the mountain climber, “And hold on with one hand, while you reach with the next.” What I heard was, “Don't quit your job, before you have the next one lined up.” At the time, I felt like a caged animal. A lab rat. A crazy person in a straitjacket. I sat on my little bouncy ball, and wore clogs that were not patent leather. I dyed my hair with streaks that were barely in regulation. I bent the rules as much as they would budge — but this didn’t change the fact that I had to go back to that goddamn cubicle, again, tomorrow. And it made me itchy. I wanted so badly to leap. To free fall. To get the hell out of there. With wild abandon. My dad wanted me to keep my bird in the hand until I landed the next thing. And I just wanted to let my bird FLY. The only snag with my rejection of his advice is that I hadn’t actually ever been rock climbing. What I know now, having learned to climb a little, is that the rock climber doesn’t just plod along, one hand to the next, in a boring, repetitive, and linear sequence — like I thought my Dad wanted me to do with my career. If you are learning, and growing—then there will inevitably be a moment. A moment when you know well enough where your next move is, but you aren’t totally certain you can get there. A moment when you have to trust. Trust yourself — that you might just make it. Trust your partner — that they will hold the rope. Trust your rope — that it will hold you. This moment of knowing you have to leap. Even the tiniest little leap means you have to release your attachment to the earth. And accept, or maybe even enjoy, either outcome. What I’ve learned the hard way is that both outcomes are equally important. Missing the leap is just as valuable as landing it. Because we are still climbing, and learning — when we go up, and when we go down. The only point that we stop climbing is when we cling. When we freeze with worry. Deliberating. Ruminating. Debating. Blaming. Regretting. These are the things we do to avoid facing that moment. And I find myself in that moment now. I left my job, and had a baby. Should I stay home? If so, for how long? Should I do my own thing? Or go for a job with benefits? Flexibility? Still frozen. More thinking. If I stay home too long, will I be able to go back to work? How will I explain the gap? Will I lose relevance? Enter all consuming worry. And then I catch myself. And I remember that I have a choice. I can stay frozen, fueling my worry. OR I can choose — to trust. Trust, that I will find that move. Trust, that it’s there. Because, it is. Of course it’s there. I just can’t see it, yet. And when I see it, I will take it. And if I miss, I will be ok. And then, it’s fun again. The climbing is fun. Staying home with my kid is fun. Exploring my next move, is fun. Turns out, my old man was right. Martin Luther King Jr. said, “Faith is taking the first step, even when you don’t see the whole staircase.” And here we are. In the middle of a hard climb. A winding staircase that curves out of sight. And we get to choose. I am a professional coach and co-founder of the Seattle Coaching Collective. If you are interested in learning more, drop us a love note here.
https://medium.com/the-mission/on-trust-66d43ee3f6db
['Meghann Mcniff']
2020-12-23 23:14:32.080000+00:00
['Short Story', 'Trust', 'Climbing', 'Life Lessons', 'Inspiration']
Java Keywords and Concepts To Know
And Concepts! Hello friends, hope you are all doing great as the holidays approach. Today, I want to talk about some keywords and concepts that are important to understand as a beginner Java developer. The Java programming language contains 51 keywords that have a predefined meaning. We are not allowed to change their meanings, so developers are not allowed to use these keywords as names for methods, classes, variables, or as any other identifiers. Concepts in Java also describe how code blocks and programs should flow and run. The 13 keywords and concepts I will be mentioning are: Keywords Package: A namespace that organizes a set of related classes and interfaces. Import: Declares a Java class to use in the code below the import statement. Class: The blueprint from which individual objects are created. Public: Declares a member’s access as public. Public members are visible to all other classes. This means that any other class can access a public field or method. Further, other classes can modify public fields unless the field is declared as final. Void: Used in a method declaration and definition to specify that the method does not return any type; the method returns void. Concepts CamelCase: A way to format identifiers that contain more than one word; an example of CamelCase is “iPhone”. Semicolon: Shows the compiler where an instruction ends and where the next instruction begins. It allows a Java program to be written on one line or across multiple lines, by letting the compiler know where each instruction ends. Variable: A piece of memory that can contain a data value. Variable Declaration: When you create a variable, you must declare what kind of data type it will be. It can be an integer, character, boolean, float, long, String, etc. Dot (.): The object accessor. An object made from a class can access its methods by using dot notation. Object: An instance of a class that has state and behavior. Method: A block of code which only runs when it is called. Constructor: A special method that is used to initialize objects. It is called when an object of a class is created. How to Utilize Them Use all of the keywords mentioned, and most of the concepts, for good coding practice. When you first create a project in IntelliJ, you’re already utilizing a package, a public class, and a public static void main method. In the example above, I am asking for the user’s name and age in one method. Usually a method should not be longer than 5 lines of code, but for teaching purposes I am putting it all in the main method. If you take a look at all the lines of code, you can see a semicolon at the end of every line. This is an instruction that tells the compiler where the code ends and starts. If you write code in JavaScript or Ruby, you are not forced to end each statement with a semicolon, but it is highly recommended that you do for good practice. On line 10, I am declaring a variable and giving it a data type of int (integer). The variable age takes in an integer input from the user, and that input is made possible by line 8. The Scanner is one of Java’s classes and is imported for our purpose. If you also take a look at lines 9 and 10, we’re using dot notation to access a method in the Scanner class. In IntelliJ, you can easily access any class method by typing “.”, which will show all the methods that are available for that object. We are creating puppies for this class, and each puppy is going to have a name and an age. 
Then we will be able to access their age with the class methods setAge and getAge (a sketch of this Puppy example appears at the end of this post). I love puppies, which is why we are going to create some puppies for this blog. Lines 12 through 19 are class methods that we will be able to use on our puppy instance. We will be able to set an age for our puppy and have the program tell us the age of our puppy by returning the global variable puppyAge (setting it by using the setAge method). Lines 8 through 10 are an example of a constructor; when we create our puppy, the program will always print the name of our newly created object from our class. Think of constructors as the materials for a blueprint. Sometimes a method doesn’t require a return type, which is why on line 12, when we set the age of our puppy, we have void instead of int. If a method doesn’t require a return type, it is best to set it as void. If we compare it to the method getAge, we are returning an integer value, so we have to set that method’s return type as int to be consistent. To Sum It All Up We want to be consistent with placing a semicolon at the end of every line of code to tell our compiler where our code starts and ends. Then we have our packages, which help organize our files, and we can also import them into a file so that we have access to some of their methods and properties. Next we have our classes, which are like blueprints to make an object, and constructors, which are the materials to create our object, just like how we made puppies. When we created a puppy, we accessed our class methods by using dot notation, and we can use dot notation on other imported classes such as Scanner. And within all of our classes we had methods and variables that required a type. One thing to note: variables cannot be void, but methods can (look at the setAge method). Finally, for naming convention we used camel casing to help us indicate variable names, method names, and class names. You can use snake casing too, which looks like this_is_snake_casing. Whichever works better for you! With all this I hope you follow these concepts and memorize these keywords to help further increase your Java skills! Be consistent with your methods and variables and have fun.
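The code walked through above lives in screenshots that are not reproduced in this text, so the exact line numbers refer to those images. A minimal sketch of the kind of program being described (the class name, prompts, and line layout here are assumptions, not the author’s original code) might look like this:

```java
import java.util.Scanner;

public class Puppy {
    int puppyAge;          // instance ("global") variable holding the puppy's age
    String puppyName;

    // Constructor: runs whenever a new Puppy object is created
    public Puppy(String name) {
        puppyName = name;
        System.out.println("Created a puppy named " + name);
    }

    // void method: sets the age and returns nothing
    public void setAge(int age) {
        puppyAge = age;
    }

    // int method: returns an integer, so its return type is int
    public int getAge() {
        return puppyAge;
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);   // Scanner is the imported class
        System.out.print("What is your puppy's name? ");
        String name = input.nextLine();           // dot notation on a Scanner object
        System.out.print("How old is your puppy? ");
        int age = input.nextInt();                // variable declaration with the int data type

        Puppy myPuppy = new Puppy(name);          // the constructor is called here
        myPuppy.setAge(age);                      // dot notation on our own object
        System.out.println(name + " is " + myPuppy.getAge() + " years old.");
    }
}
```

Running it prompts for a name and an age, calls the constructor (which prints the new puppy’s name), sets the age with setAge, and reads it back with getAge through dot notation.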
https://littlesadtea.medium.com/java-keywords-and-concepts-to-know-b829b0111a37
['David Cha']
2020-10-30 03:21:57.658000+00:00
['Beginners Guide', 'Java', 'Concepts For Beginners', 'Keywords', 'Convention']
Is Deno the new Node
Examples 1. Basic TypeScript Create a new file helloDeno.ts in your favorite text editor and enter the following code: let str: string; str = 'hello Deno'; console.log(str); As Deno supports TypeScript out of the box, we can run this directly in our terminal. You can run it with deno run helloDeno.ts, which should output hello Deno. That’s how easy it is to get started with Deno. No TypeScript configuration required. 2. Creating a server This is an example of a simple HTTP server that accepts connections on port 8000 and responds to every request with a short message. Create a new file server.ts. import { serve } from "https://deno.land/[email protected]/http/server.ts"; const s = serve({ port: 8000 }); console.log("http://localhost:8000/"); for await (const req of s) { req.respond({ body: "Hello World\n" }); } You will notice an unusual way of importing here. As mentioned earlier, Deno uses modules referenced as URLs or file paths. Run the program with deno run server.ts You will see that this program returns an error regarding network access, so what did we do wrong? You might remember from the introduction that Deno is a runtime that is secure by default. This means that you need to explicitly give programs permission to do certain ‘privileged’ actions like network access. Try it out again with the correct permission flag: deno run --allow-net server.ts Please don’t add the --reload flag in the command. I just added it here to show that the program downloads and caches all the files from the URL specified in the first line of the program. It does this so that the next time the program runs, it doesn’t have to go to the network to fetch the files again. You can more or less think of the cache here as a replacement for node_modules (however, not strictly). Your server should be up and running now. You can now navigate to localhost:8000 in your browser to see the output.
https://medium.com/swlh/is-deno-the-new-node-5cdcc5a29154
['Akash Thakur']
2020-05-16 17:10:38.794000+00:00
['Web Development', 'Software Development', 'Nodejs', 'Programming', 'Deno']
Just like Monet
Not in an artistic sense — at all. Photo by Sasha Prasastika from Pexels When I was little, my parents took me to a university bookstore in Pittsburgh. At the shop, there was a poster print of a piece of art hanging on the wall. As the story goes, my little self pointed to the poster and correctly identified it as a work from Monet’s Water Lilies series. While I seemed like a child genius, it was just the one work of art I knew and loved. As I got older, I’d stop by the Carnegie Museum of Art frequently. While I mostly perused the natural history section, I’d always pop by the art side and check it out. They had one of the longer Water Lilies canvases hung by itself on a wall with a big couch in front. It was the perfect place to sit and wonder. Some things never change After my lattice degeneration started taking my eyesight, I developed a ritual of going to the museum before a surgery or a checkup. You know, just in case I went blind. I was like a squirrel storing nuts for the winter, only with images of my favorite exhibits and art. Of course, I always stopped to see Monet’s beautiful work. The more vision I lost, the more comfort I took in the painting. It didn’t change much for me even though everything else did. There weren’t any sharp lines that suddenly became curvy or tiny details that were wiped away by my muddied vision. It was still Water Lilies. Once I moved to Japan, I was happy to learn that the Fukuoka Museum of Art was going to have a temporary Monet exhibit. I gathered friends and excitedly waited in the long line as we crawled around from painting to painting. Seeing my favorite artist in my new city made it feel more like home — a taste of Pittsburgh and familiarity. Some things you never really knew at all As we progressed through his works, I began noticing things getting…sloppier. But also, I started relating more heavily to these paintings. They were bolder, almost angrier. They made sense to me, and I felt surprised I’d never seen these pieces before, at least not that I remembered. Then I started reading about Monet’s vision loss on the little plaques accompanying the paintings. At first, I didn’t understand what was going on around me. It’s been a few years since this happened, and I still feel nauseous and tense when I think about making this discovery. I’d never really researched him before; I didn’t know his vision was blurred from cataracts and that the light and colors he was so known for stopped making sense to him because of it. Did I cry right there in the museum? Yes. It was a mixture of feeling less alone, feeling completely shocked and feeling retroactively sad that he too went through the pain of having your passion suddenly flipped into frustration. Going back, I now know that my favorite painting was created between 1915 and 1926, which were peak vision issue years for Monet. That means that the amazing piece of art that I’ve loved my entire life was done when the artist was struggling in the same way I am now. Honestly, it feels like a pretty heavy-handed life lesson on adapting to your new situation and not giving up on what you really care about. If he could create my favorite piece of art ever while grappling with vision loss, then I should try to be more patient with my own vision complications and try harder to find new ways to adjust. What a ridiculously long and winding setup for that lesson though, right?
https://alexahuth.medium.com/just-like-monet-2982d56929e0
['Alexa Huth']
2019-11-30 10:18:07.720000+00:00
['Creativity', 'Pittsburgh', 'Frustration', 'Art', 'Artist']
Java, Ruby, and Go, Oh My!
Free Code Camp has focused 100% on full stack JavaScript since we started 17 months ago. We’ve taught JavaScript on the front end, JavaScript on the back end (thanks to the powerful Node.js framework) — and even JavaScript as a database querying language. And since the beginning, our open source community has fielded several requests each day asking us to teach other popular back end languages as well. Well today, I’m excited to announce that we will heed those thousands of requests. Free Code Camp will now teach a wide variety of web development languages. Starting today, we’re building a massive collection of Creative Commons-licensed tutorials on popular languages like Python, Java, Ruby, and PHP, along with emergent languages like Elixir and Go. And you can now complete our Back End Development Certification — and build its ten API and Dynamic Web App projects — using whichever languages and frameworks you want. An image of our Back End Development Certification challenges, with optional Git, Node.js, Express.js and MongoDB sections, and 10 required projects. Which of these languages should I learn first? The answer is the same as before: JavaScript. As virtually any professor would tell you, you should learn one language thoroughly before attempting to learn a second language. And JavaScript is by far the most popular and promising language right now. JavaScript is also a popular choice for a first programming language, and has a wide variety of free learning resources (including Free Code Camp itself). Regardless of which web development framework you use, you will need to become proficient at JavaScript. This is because JavaScript has the distinction of not only owning a near monopoly on front end web development, but also being quite competent on the back end, thanks to tools like Node.js and Express.js. So if you’re just getting started as a web developer, our advice remains the same: focus 100% of your time on mastering JavaScript. If new developers should focus on full stack JavaScript, then why will Free Code Camp teach additional back end languages? About two-thirds of our campers are new to web development. Some of them have no prior programming experience at all. Others join Free Code Camp with experience in web design, systems administration, and other related fields. This two-thirds of campers are the people for whom we specifically designed our open source curriculum. But another third of our community has already done some web development — often with languages like PHP or Ruby. And they are joining Free Code Camp to review — or build upon — existing skills. And — as you’ve probably heard — hundreds of our campers have gotten software development jobs after joining our open source community. Some of these jobs were not specifically full stack JavaScript jobs, but rather full stack web developer jobs that use alternative web development frameworks, like Python Django or Ruby on Rails. After accepting these jobs, these campers were able to parlay their knowledge of Node.js and Express.js into using these other tools. But many of these campers reported that they needed to pay for expensive learning resources in the process. This third of our community — experienced web developers and campers who just got hired — have voiced their desire for us to cover additional back end languages and frameworks. And their voice has been heard. 
Our open source community is now large enough — and we are now diverse enough in our web development expertise — that we can create extensive free resources on a wide range of web development topics. So that’s what we’re going to do. How will these languages be incorporated into Free Code Camp? One of Free Code Camp’s strengths has always been that we offer a clear, unambiguous path forward to your first software engineering job. Rather than complicate our challenge map with electives, we’ve chosen to keep our core curriculum 100% focused on full stack JavaScript. Instead, campers are building this content in Free Code Camp’s “Expanded Universe.” We’re creating a variety of tutorials and articles on these languages — everything from how to set up a development environment on different operating systems, to how to build example apps using popular libraries. And these can be written in Markdown and interlinked with one another, right on our wiki. We’re welcoming campers to live-stream web development in any language they want on our (soon-to-be 24-hour) Twitch.tv channel. We’re inviting campers to contribute articles to our Medium Publication on these languages. Here’s one we just published yesterday on the similarities between Java and Go. We’re creating videos that discuss various concepts specific to other languages, such as the Rails Asset Pipeline and the Java Virtual Machine. We’ll embed these videos in wiki articles and post them on our YouTube channel. Arijit Layek is an experienced Java and Python developer in Hyderabad, India. He joined Free Code Camp to improve his JavaScript, and now leads our effort to cover additional back end languages. Arijit Layek is actively creating tutorials for Python and Java, and coordinating the efforts of other campers who want to contribute tutorials on these and other languages. If you are a web developer with experience in one or more of these languages, you should join these relevant chat rooms and introduce yourself. Arijit and the other campers there can help you come up with ideas for tutorials, and answer any other questions you may have. Our goal is to build the most inclusive web development resource on the planet. To us, that means a rigorous core curriculum, volumes of supplementary content, and — most importantly — a vibrant, supportive community. I only write about programming and technology. If you follow me on Twitter I won’t waste your time. 👍
https://medium.com/free-code-camp/java-ruby-and-go-oh-my-6b5577ba2bc2
['Quincy Larson']
2017-12-27 22:33:28.185000+00:00
['Programming', 'Design', 'Technology', 'Education', 'Social Media']
Covid-19 Is Looking More and More Like an Autoimmune Disease
Covid-19 Is Looking More and More Like an Autoimmune Disease Autoimmunity may explain how the virus inflicts such widespread and unpredictable damage Throughout the pandemic, doctors have noticed a confounding phenomenon: A lot of people infected by the coronavirus develop myocarditis, an inflammation of the heart that can cause lasting damage and death. Even among people who have mild Covid-19 or who are asymptomatic, experts have found evidence of heart inflammation. A July study published in JAMA Cardiology found that 60% of coronavirus patients had active myocarditis two months after their initial infection. Remarkably, the study found that this inflammation was as common among people who recovered at home as it was among those who required hospitalization. (Myocarditis can often go undetected; its symptoms can be subtle and include shortness of breath, chest pain, and a fluttering heart.) “We’re still questioning why we see this inflammation in the heart,” says John Swartzberg, MD, an emeritus professor of infectious diseases and vaccinology at the UC Berkeley School of Public Health. “One of the hypotheses is that there’s an autoimmune process at work.” “It seems that Covid-19 shares a similar inflammatory immune response with autoinflammatory and autoimmune conditions.” “Autoimmunity” describes immune system activity — primarily inflammation — that is directed at healthy cells, tissues, or other inappropriate targets in the body. Autoimmune diseases, such as lupus and multiple sclerosis, are defined by this inappropriate inflammation and its resulting damage. When it comes to Covid-19 and myocarditis, Swartzberg says the autoimmune hypothesis posits that SARS-CoV-2 causes the immune system to misidentify something in the heart’s cells as dangerous. This misidentification leads to inflammation. He’s quick to add that this theory is just one of several possible explanations. The presence of inflammation, even if it lingers after the virus is wiped out, is not by itself an indicator of autoimmune disease, he says. But other researchers have made the case that Covid-19 is often driven by autoimmune processes. “It seems that Covid-19 shares a similar inflammatory immune response with autoinflammatory and autoimmune conditions,” write the authors of a recent study in the Journal of Immunology. They lay out evidence that SARS-CoV-2 may cause the body’s immune system to mistakenly attack its own cells and tissues — in the heart, in the brain, and elsewhere. Autoimmunity, they suggest, may explain how the virus inflicts such widespread and unpredictable damage. Understanding these autoimmune processes may be the key to preventing that damage and saving lives. The case for autoimmune involvement In October, a study in Nature Immunology examined the activity of immune cells and antibodies among people with severe Covid-19. It found some striking resemblances to autoimmune disease. “We observed the same type of B-cell activity we see in lupus flares, and also similar antibody activity,” says Ignacio Sanz, MD, co-author of the study and director of the Lowance Center for Human Immunology at Emory University. Sanz has also examined the immune systems of people with mild or persistent (aka long-haul) Covid-19; there again, he sees overlap with autoimmune conditions. Sanz says it’s possible that the phenomena he has documented are simply indicators of an aggressive immune response to an invading virus. 
But he says that in at least a subset of patients, elements of autoimmunity are strongly implicated in the development of severe Covid-19. How could the coronavirus cause a person’s immune system to mistakenly attack its own cells and tissues? Part of it may have to do with what biologists call molecular mimicry. “There are a number of similarities between the amino acid sequences of [coronavirus] proteins and those of human proteins,” says Timothy Icenogle, MD, a cardiothoracic surgeon and author of a recent paper on the autoimmunity elements of Covid-19, published in Frontiers in Immunology. These protein similarities may confuse the immune system and cause it to attack its own healthy cells; in some people, this attack may continue even after the true virus cells have been wiped out. Autoimmunity could explain why a robust immune response to the virus — one that includes the production of coronavirus-neutralizing antibodies — does not always correlate with mild Covid-19. It may be that in some patients, an immune response intended to eliminate the virus ends up also attacking healthy cells. These autoimmune phenomena could also explain why myocarditis and other forms of inflammation or injury show up weeks or months after a person has ostensibly recovered from a coronavirus infection. “We’re gradually coming to the realization that Covid-19 is primarily an autoimmune disease that is triggered by a virus.” How old weapons could fight a new virus Icenogle is a cardiac surgeon. “So I’m about the last doctor you’d expect to be writing a paper on Covid-19,” he says. But he spent 25 years as director of a heart transplant program, which provided him with a good understanding of immunology — something he says is uncommon for cardiologists. During the years he was running the transplant program, Icenogle occasionally encountered cases of viral myocarditis, which before the pandemic was a rare and often deadly condition. This led him to some surprising insights. “When we looked at the postmortem heart specimens from these myocarditis patients, it looked like they’d died of cardiac rejection,” he says. “We thought, gee whiz, how could this be? How could a virus cause you to reject your own heart like it was a transplant?” Icenogle combed the research and found evidence that autoimmune processes — ones triggered by a virus — could be to blame. He later had success treating myocarditis with powerful transplant drugs that essentially turned off the immune system and therefore blocked the autoimmune processes from doing more damage. Now, in the context of the pandemic, Icenogle speculates that some of these same transplant drugs may prove helpful in treating Covid-19. He mentions one in particular, rabbit antithymocyte globulin (rATG), which he has used to save the lives of some people with viral myocarditis. Not only does rATG calm the immune system, Icenogle says, but it also helps limit blood coagulation, which is another feature of severe and deadly Covid-19. So far, there is no published research on rATG or similar heavy-duty transplant drugs for the treatment of Covid-19. “These are very powerful drugs, and there’s a risk you might endanger someone,” Icenogle says. Some doctors have been willing to test out comparatively mild immunosuppressant drugs, and at least one of those—dexamethasone—has worked. But Icenogle says that many in medicine regard the notion of depleting the immune system of a virus-stricken patient as “at least paradoxical, if not crazy.” He thinks that will change. 
“We’re gradually coming to the realization that Covid-19 is primarily an autoimmune disease that is triggered by a virus,” Icenogle says. “I submit that eventually we’re going to treat these autoimmune processes with some of our big guns.” Emory University’s Sanz says he also believes that immune-suppressing drugs, including some that have not yet been tried in people with Covid-19, will prove helpful. “Without any doubt,” he says. “But I think that only some patients will benefit.” He emphasizes, again and again, that Covid-19 is a heterogeneous disease. It affects different people in different ways. Outside of a vaccine, there is unlikely to be a panacea or a single category of treatment that works for everyone. Still, Sanz says that a better understanding of the disease’s autoimmune facets, coupled with a broader deployment of immune-moderating therapies, could help improve patient outcomes — both in the short and long term. “I think autoimmunity is part of the story,” Sanz adds. “It’s not the whole story.”
https://elemental.medium.com/covid-19-is-looking-more-and-more-like-an-autoimmune-disease-b6c563f8da24
['Markham Heid']
2020-12-11 15:59:00.811000+00:00
['Pandemic', 'Covid 19', 'The Nuance', 'Body', 'Coronavirus']
AnalogFolk’s weekly innovation roundup! 27.08.19
AnalogFolk’s weekly innovation roundup! 27.08.19 From artificial illusionism to taking steps in sign language recognition, and more… By Francisco Jordão, Technical Lead at AnalogFolk Welcome to the AnalogFolk weekly innovation roundup, where our technical lead Francisco Jordão summarises the innovations of the week; from the weird to the wonderful, and everything in-between. The AI system paves the way for experiments in what researchers called artificial illusionism. Read more… Google is reporting that it has hit something of a breakthrough by developing a new hand-tracking algorithm that is capable of tracking hand gestures in real-time. The new intelligent AI-based system leverages machine learning to generate a hand map. This map can be created with the help of just a single camera, allowing for easier implementation. Read more… Marketed as a “high-end solution for VR locomotion,” Cybershoes are an accessory device for VR systems that are compatible with Steam VR. This means they can be used with Oculus, HTC, Pimax, and other headsets. Read more… Open now to the general public, Dream Corporation’s Otherworld VR arcade looks like something straight out of Tron: Legacy. Designed by architecture studio Red Deer, the futuristic space — once a simple east London railway arch — now features 14 private VR entertainment pods. Read more… What’s changing Google is introducing new spelling and grammar correction capabilities for Gmail to help you compose emails quickly with confidence. Read more…
https://medium.com/analogfolk/analogfolks-weekly-innovation-roundup-27-08-19-293a07bedae8
[]
2019-08-27 09:03:34.401000+00:00
['Innovation', 'AI', 'News', 'Virtual Reality', 'Sign Language']
Pywedge: A complete package for EDA, Data Preprocessing and Modelling
Pywedge: A complete package for EDA, Data Preprocessing and Modelling Pywedge helps in visualizing the data, preprocessing, and creating baseline models Pywedge (Source: By Author) What is Pywedge? Pywedge is an open-source Python library and a complete package that helps you visualize the data, pre-process the data, and create some baseline models which can be further tuned to make the best machine learning model for the data. It is a hassle-free package that saves the time and effort of creating different types of plots to analyze the data. It contains 8 different types of visualizations, which can be used through the user-friendly GUI of Pywedge. It takes care of all the pre-processing that the data may require: whether it is cleaning the data, normalization, or handling class imbalance, Pywedge covers it all. It works on both classification and regression problems. In this article, we will see what all we can do using Pywedge and explore its features. Before starting, kindly follow me on Medium by clicking here and stay updated about my new articles in the field of data science. Installing Pywedge Pywedge is pip installable, so we will install it using the command prompt by running the below-given command. pip install pywedge Importing the required libraries We are exploring Pywedge, so we will start by importing it. Also, we will import pandas for loading our dataset. import pandas as pd import pywedge as pw Loading the Dataset We will use different datasets for exploring Pywedge. Let’s start by importing the Iris dataset from seaborn for the visualization part. import seaborn as sns df = sns.load_dataset("iris") df Iris Dataset (Source: By Author) Visualization using Pywedge Now we will use Pywedge functions to visualize the dataset. Here we will use the Pywedge_Charts function, which requires three inputs: the data, “c”, a redundant column in the dataset that we want to remove, and the dependent variable according to the problem, i.e. classification or regression. mc = pw.Pywedge_Charts(df, c=None, y="species") # For Visualization chart = mc.make_charts() Visualization (Source: By Author) In the output above we can clearly see the interface created by Pywedge. We can select different types of charts and use different variables to visualize the data. Preprocessing the data Next, we will see how easily we can preprocess the data using Pywedge. For this, I have used the Boston dataset, which I have split into train and test datasets. We will see how Pywedge handles finding missing values, normalization, and class imbalance (for classification problems). #Loading the dataset train = pd.read_csv('bos_test.csv') test = pd.read_csv('bos_train.csv') #We are handling regression so we will use type as Regression ppd = pw.Pre_process_data(train, test, c=None, y='medv', type='Regression') #Storing the processed data into new dataframes new_X, new_y, new_test = ppd.dataframe_clean() Preprocessing (Source: By Author) In the output above we can clearly see how Pywedge identifies missing data and provides options to the user (highlighted in the image) for standardization and converting categorical to numerical data. Similarly, if we solve a classification problem, it will show whether the data is imbalanced or not. Baseline Models Pywedge also creates some baseline models according to our dataset, so that we can select the best performing model and tune it to reach the maximum accuracy. It also helps us in finding out the importance of features. 
blm = pw.baseline_model(new_X, new_y) #We will use Regression_summary, for classification we would use #classification_summary blm.Regression_summary() Baseline Models (Source: By Author) Here we can clearly analyze the different types of models and their performance, select the best performing model, and tune it further. Go ahead and try this interesting and useful library on different datasets, and in case you face any difficulty, reach out to me in the response section. This post is in collaboration with Piyush Ingale. Before You Go Thanks for reading! If you want to get in touch with me, feel free to reach me on [email protected] or my LinkedIn Profile. You can view my Github profile for different data science projects and package tutorials. Also, feel free to explore my profile and read different articles I have written related to Data Science.
https://towardsdatascience.com/pywedge-a-complete-package-for-eda-data-preprocessing-and-modelling-32171702a1e0
['Himanshu Sharma']
2020-12-09 12:58:57.066000+00:00
['Machine Learning', 'Data Science', 'Python', 'Data Analysis', 'Data Visualization']
Engine Room 2020: going virtual
The challenges and benefits of running our internal tech conference virtually This was the 6th year we’ve run an internal tech conference, Engine Room, and the first time that we’ve done it virtually. Every year the first thing we do when planning for Engine Room is work out what the key thing is we want to achieve. This helps us focus our efforts and keeps the experience fresh for people who have been at the FT for a while. For the very first Engine Room, we wanted to show people that we didn’t need to go to external conferences to learn: we had a lot of interesting and relevant experiences to share with each other. In 2019, we wanted to focus on being one team, running the conference across two locations — our software development offices in London and Sofia — with full and equal involvement from both. This year, our focus was on providing ways for our teams to connect. At the time of writing we’ve been working from home for 250 days, and while teams are finding ways to work virtually, we are missing the more casual interactions we had with people: bumping into people outside the lifts, or in the coffee queue. We’re tired of virtual quizzes but we crave connection. So, how did we approach a virtual conference differently? And did it work out? Setting the agenda Use the tools people are familiar with One thing we’ve learnt over the years is that it’s best to keep things simple. We use Google Meet all the time at the FT, we all know how it works. So that’s what we used! We have the enterprise version so we can have up to 250 people in a Meet. Using Google Meet for lockdown bingo We have Slack at the FT too, so we had channels set up there as well, to answer questions etc. Shorter, sharper, more interactive We already knew that technical conferences had changed the way they did things when they switched to online, running shorter talks with longer Q&A, and running for a shorter time overall — maybe 4 hours rather than a full 9 to 5. It made sense to learn from that — so no 45 minute long talks this year. We ran multiple talk sessions, including some in parallel tracks, since we were no longer restricted by getting space in a conference suite. And we encouraged people to chat in the sidebar and to ask questions via the new Q&A function of Google Meet — although I’m pretty sure no one actually did that. But the chat was fun and made it all feel very interactive. Space to hang out with people you don’t know yet We know there’s no easy way to hangout together and eat pizza when we’re all WFH. We definitely still wanted a space where people could meet and talk to people outside their own team, and that needed to be in a smaller group, so we set up open spaces — with several sessions running in parallel, each with a particular topic as the focus for discussion. We did a little bit more up front for the open spaces than normally happens, asking people to suggest topics they would like to cover a few days in advance. We left space for more sessions to be added on the day, and some did get added. We did something similarly parallel late in the day: multiple small groups learning from others in masterclasses that were largely not that closely connected to our day jobs. A nice decompression from the day, and I now know how to sign my name in British Sign Language, and many others are much more informed about recycling and/or are about to take up whittling. 
And we finished off with Lobster lockdown bingo — how many of these lockdown cliches can you mark on your card — which also proved that making someone wear a lobster costume never gets old. Strive to be inclusive We set up the agenda to run from 10 till 3:30 UK time, which is 12 till 5:30 Bulgaria time, so that as many people as possible in our teams could attend during normal work hours. Having things in the middle of the day meant the best chance for people in our Manila and US offices to be able to join in too. However, we also recorded all the talk sessions, for those groups and also because lots of people are working flexibly at the moment and we wanted those people to be able to watch later. We wanted to have a lineup that reflected the diversity of our organisation, and that meant doing more than sending out a call for papers. We did do that, and got a fantastic response, mostly from people who have spoken before. To get new people, and more junior people, you need to approach them directly, and you need to make it clear that they will get all the support they need. We strongly encouraged every speaker to come to practice sessions arranged for the few days before the conference, to get feedback on their talks. We have done this for a few years and people really appreciate the difference it can make (and it also provides a ‘soft’ deadline for people, which I can tell you is a great thing for a speaker!). My colleague Euan Finlay ran all of these, putting in a huge amount of effort. I got involved in a few and it was fascinating to see the way talks developed and improved — I highly recommend running through any presentation you are going to give with someone else ahead of time, you will get a much better outcome! We also offered people the chance to pre-record their talks — in the end, no one did, but it’s definitely well-supported by Meet and if people are a little nervous or have dodgy internet, it’s a good option. Finally, we made sure we had budget set aside to support our Deaf colleagues, and sorted this out as soon as we had the first outline of our agenda — the best captioners and interpreters get booked up months in advance. We asked our colleagues what would work best for them, and they said live captioning and recommended White Coat Captioning. We had live captioning for our talk sessions with transcription sent real time to a specific url. Lots of people find it helpful to have live captioning, including those who speak English as a second language: this is a benefit for everyone. What we got right We always send out a feedback form after we run an Engine Room, and this time the feedback was generally very positive. From the feedback to the question “what did you enjoy?”, we can tell we largely succeeded with making people feel more connected: “Being able to see people that aren’t in my team” “I liked the feeling of togetherness, even though it was an online conference it had a great atmosphere” “the open spaces were really great and a really nice chance to chat to people I’ve not engaged with since Feb or before” “Community, seeing people I haven’t seen in a while and feeling part of a bigger team again.” “It was great to have contact with colleagues from other teams which I don’t usually have contact with on a daily basis.” We got so much good feedback about open spaces that we plan to run more of them early in 2021. What we got wrong But obviously, we made mistakes too. We left space between each session for people to leave their screens and get a drink, maybe. 
But we didn’t get this right: many sessions ran over and left a very quick turnaround, and this was stressful for the speakers at the end of a session and left those attending multiple sessions a bit frazzled. What we learned is that talks generally only get longer as people refine them! The open spaces were very popular, meaning most groups were too large for everyone to be able to take part. One open space had more than 50 people show up! We also got a lot of feedback that they were too short at 20 minutes: when we run more of them, the sessions will be longer and we will try to make sure there aren’t too many people in any one session. Finally, as someone very kindly told us via the feedback form, we were too UK-centric on the lockdown bingo — furlough didn’t happen in Bulgaria, and no one drove to Barnard Castle. But overall Generally, people thought it was excellent. “It was brilliant, the best one I’ve been to so far.” “The quality and energy of each presenter was amazing. Reminded me that I’m surrounded by wonderfully talented people and it was inspirational.” “Thumbs up, I enjoyed it way more than I’d expected to.” “It felt on par with some of the best tech conferences out there. Extremely well organised and run. I would pay for this” We’re delighted it went so well and we’d encourage you to try something similar at your company. If you want to find out more about how to run an internal tech conference and why you should do it, check out the Internal Tech Conferences book, co-authored by our own Victoria Morgan-Smith.
https://medium.com/ft-product-technology/engine-room-2020-going-virtual-7d4f77bc2969
['Sarah Wells']
2020-12-04 09:16:29.857000+00:00
['Virtual Conference', 'Software Engineering', 'Financial Times']
Tortured and Executed — The Grand Story of Noor Inayat-Khan
In June 1943 she was sent to France, where she assumed the name Jeanne-Marie Renier, posing as a children’s nurse. Madeleine was her code name. Noor was constantly on the move in France. The Gestapo struggled to keep up with her as she swiftly moved from one safe house to another. She managed to stay one step ahead of the Germans by constantly changing her address. At one point, huddled late at night transmitting her coded messages, she could hear the German soldiers reveling in the apartment below her. Noor had taken up residence in an apartment block full of German officers. SOE twice offered her repatriation to Britain, but she refused and remained working. “Her transmissions became the only link between the agents around the Paris area and London,” Shrabani Basu wrote in her biography “Spy Princess: The Life of Noor Inayat Khan.” Eventually, for the grand sum of 100,000 francs, a French woman named Renée Garry betrayed Noor. Garry was arrested after the war and tried for treason. She was found not guilty by a single vote. Captured and taken to the Gestapo headquarters for interrogation, Noor escaped almost immediately. She made a second escape on 25 November 1943, along with fellow SOE agents John Renshaw Starr and Léon Faye, but was recaptured in the vicinity. Noor was viciously tortured as the Germans attempted to extract information from her. The chief of the German police, Heinrich Himmler, had ordered: “The agents should die, certainly, but not before torture, indignity and interrogation has drained from them the last shred of evidence that should lead us to others. Then, and only then, should the blessed release of death be granted them.” Source — Ungentlemanly Warfare Torture of women included beatings, sleep deprivation, cutting off breasts, pulling out fingernails and toenails, and near-drowning. Still, Noor refused to give up any information. Hans Kieffer, the former head of the SD in Paris, testified after the war that she did not give the Gestapo a single piece of information, but lied consistently. The Gestapo finally had enough after she refused to sign a declaration renouncing future escape attempts. On 27 November 1943, Noor was transported to Germany “for safe custody” and imprisoned at Pforzheim in solitary confinement as a “Nacht und Nebel” (“Night and Fog”: condemned to “Disappearance without Trace”) prisoner, in complete secrecy. For ten months, she was kept there, shackled at her hands and feet. Manacled and barely able to move, Noor was kept this way in solitary confinement. Her food was passed through a tiny hatch in the door. As the prison director testified after the war, she remained uncooperative and continued to refuse to give any information on her work or her fellow operatives. In despair at the appalling nature of her confinement, other prisoners could hear her crying late into the night.
https://medium.com/lessons-from-history/tortured-and-executed-the-grand-story-of-noor-inayat-khan-fc0ee8dcebae
['Reuben Salsa']
2020-08-13 23:00:24.111000+00:00
['Non Fiction Story', 'Salsa', 'War', 'History', 'Writing']
How to Profile Your Code in Python
How to Profile Your Code in Python Finding bottlenecks and optimizing performance using cProfile Photo by Anthony Maggio from Pexels If you’ve ever written a line of code (or even tens of thousands of lines), you’ve surely wondered “Why does my code take so long to run?” Answering that question isn’t always simple, but it can be easier if you search for answers the right way. Perhaps you trust your knowledge of the problem at hand, using your subject matter expertise to check certain snippets first. Maybe you time a few different modules/classes/functions to see where most of the execution time is spent. Better yet, you can profile your code to get more information about the relative time spent in different functions and sub-functions. Whatever your process is, this blog might teach you some ways to get to the answer more quickly. In this post, I’ll start off by showing you a basic profiling method. I’ll add more and more features and flavor to it, ending with a good, solid profiling decorator. For those in a hurry (or who want to refer back to this material), head over to this GitHub repository where you can find the profile decorator and an example. Time It! To profile your code, you’re gonna need to know how to time it. To do so, you can use something as simple as this: from time import time start = time() # your script here end = time() print(f'It took {end - start} seconds!') To easily time several functions/snippets (or if you just prefer to use a cleaner and more pythonic approach), you can convert the above into a timer decorator (discussed with an example here). Using a timer on any function shows us the run time of that piece in isolation. For this approach to be useful in finding the bottleneck, we need more information. At least one of the following two conditions should be true in order to profile your code effectively: We should know the total run time of the program to have a better picture of the relative run time of our desired function/section. For example, if a piece of code takes 5 minutes to execute, is that 10%, 40%, or 90% of the total run time? We should know enough about the problem at hand or the run time of other parts of the program to confidently label a given piece of code as the bottleneck. Even if a function takes 10 minutes to run (let’s assume 10 minutes is relatively high), we should only worry about its inefficiency if we’re confident that there’s no other part that takes longer. As Donald Knuth famously said: Premature optimization is the root of all evil. A profiler package like cProfile helps us find the bottlenecks in our code by satisfying both of these conditions. How to Use cProfile Basic Usage The most basic way of profiling with cProfile is using the run() function. All you need to do is to pass what you want to profile as a string statement to run() . This is an example of what the report looks like (truncated for brevity): >>> import cProfile >>> import pandas as pd >>> cProfile.run("pd.Series(list('ABCDEFG'))") 258 function calls (256 primitive calls) in 0.001 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 4 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:997(_handle_fromlist) 1 0.000 0.000 0.000 0.000 <string>:1(<module>) 1 0.000 0.000 0.000 0.000 _dtype.py:319(_name_get) .... 11/9 0.000 0.000 0.000 0.000 {built-in method builtins.len} 1 0.000 0.000 0.000 0.000 {built-in method numpy.array} 1 0.000 0.000 0.000 0.000 {built-in method numpy.empty} .... 
The first line indicates that 258 calls were monitored, out of which 256 were primitive (a primitive call is one that was not induced via recursion). The next line, Ordered by: standard name , indicates that the report is sorted based on the standard name, which is the text in the filename:lineno(function) column. The line after that is the column headings: ncalls : is the number of calls made. When there are two numbers (like 11/9 above), the function recurred. The first value is the total number of calls and the second value is the number of primitive or non-recursive calls. tottime : is the total time spent in the given function (excluding time made in calls to sub-functions). percall : is the quotient of tottime divided by ncalls . cumtime : is the cumulative time spent in this and all subfunctions. This figure is accurate even for recursive functions. percall : is the quotient of cumtime divided by primitive calls. filename:lineno(function) : provides the respective data of each function. The run() function can accept two more arguments: a filename to write the results to a file instead of the stdout, and a sort argument that specifies how the output should be sorted. You can check the documentation to learn more about valid sort values. Some of the common ones are 'cumulative' (for cumulative time), 'time' (for total time), and 'calls' (for number of calls).
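The profiling decorator that this post builds toward is not shown in this excerpt. As a minimal sketch of what a cProfile-based decorator could look like (the name, arguments, and defaults here are assumptions rather than the author’s exact implementation):

```python
import cProfile
import io
import pstats
from functools import wraps

def profile(sort_by="cumulative", lines_to_print=10):
    """Profile the wrapped function with cProfile and print the top entries."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            profiler = cProfile.Profile()
            profiler.enable()
            try:
                return func(*args, **kwargs)
            finally:
                profiler.disable()
                # Build a pstats report, sorted as requested, and print it
                stream = io.StringIO()
                stats = pstats.Stats(profiler, stream=stream).sort_stats(sort_by)
                stats.print_stats(lines_to_print)
                print(stream.getvalue())
        return wrapper
    return decorator

@profile(sort_by="time")
def slow_sum(n):
    # A toy workload just to produce something worth profiling
    return sum(i * i for i in range(n))

slow_sum(1_000_000)
```

Decorating a function this way prints a pstats report for just that call, which keeps the relative-time context that a bare timer lacks.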
https://towardsdatascience.com/how-to-profile-your-code-in-python-e70c834fad89
['Ehsan Khodabandeh']
2020-06-16 17:12:01.600000+00:00
['Data Science', 'Programming', 'Python']
What in the “Hello World” is Natural Language Processing (NLP)?
Alright. You have delivered some fancy hype and the nuts and bolts of NLP. What about the essential technology that powers NLP? In NLP, the data is normally audio or text data. Sometimes it may involve pictures or video that need to be transformed into text. The technology that powers an NLP solution depends highly on the application area and how the solution is used. For example, if an NLP solution should work in a mobile application, it would require the basic components of a traditional mobile application with frontend and backend logic. If high processing power is needed, the NLP components could reside in a cloud environment, such as AWS, MS Azure, or Google Cloud. Some of these technologies even include ready-made components for NLP. In fact, it is quite straightforward to build a prototype NLP solution using these tools from the cloud providers. Amazon Transcribe is a ready-made tool to add speech-to-text capability to applications, and Amazon Transcribe Medical is able to transform medical speech to text. MS Azure also has tools in its Cognitive Services. On the other hand, if the NLP solution is placed in a scarce environment, such as a mine or a forest, in which only rudimentary network connections are available, the NLP logic should reside in the device that reads the needed data, such as a cell phone or some other audio or Internet of Things (IoT) device. In such a case, where Machine Learning (ML) or DL is required, TensorFlow Lite is a promising solution. It can be used to deploy machine learning models on mobile and IoT devices. Python is the de facto tool for NLP. It is widely used as a general programming language and includes numerous NLP libraries. The most popular NLP library for Python is the Natural Language Toolkit (NLTK), which describes itself as the leading platform for building Python programs to work with human language data. Additionally, Apache OpenNLP is a machine learning-based toolkit for NLP. It has support for the most common NLP tasks. Photo by Kevin Ku on Unsplash However, there are other programming languages besides Python that can be used for NLP, the other two popular choices being Java and Julia. Java has been around for many years, while Julia is slowly gaining momentum and popularity among data scientists and NLP practitioners. It is nevertheless still in its infancy at the moment. The choice of programming language for NLP depends on numerous factors: what the processing power requirement is, what NLP tools are needed, what the community support for the language is, and how the solution can be deployed and functions in a production environment. Another league of its own is using NLP for the Finnish language. Yes, our very own unique language with lots of double consonants, umlaut letters, and peculiar sentence structures. At the moment there aren’t that many NLP libraries aimed specifically at Finnish. This is an area that has a lot of potential in the future. The blog post here details the steps required in building a Finnish Part of Speech tagger. But what about audio data? This deals with recognizing well-known patterns from speech. The first step is always extracting the necessary features from the audio signal and doing subsequent analysis on them. Which features to analyze depends again on the use case. Writing a synopsis from a spoken narrative may involve first transforming the audio data into text data and then writing the synopsis from that. In terms of technologies, neural networks have become handy here. 
Popular Python libraries include Keras, Sklearn, and Tensorflow. Also, pandas, numpy, and matplotlib can be used for data extraction, analysis, and visualization. Photo by metamorworks An automatic speech recognition (ASR) system takes into account the phonemes of a language. For example, Finnish has 21 phonemes, consisting of 13 consonants and 8 vowels, although the exact number may depend on the spoken dialect. After gathering all phonemes from a sound wave, the ASR software uses statistical probability analysis to deduce whole words and, from there, form complete sentences. Check out this awesome infographic for more information. Using phonemes to form speech is also common in concatenation-based speech synthesizers. So what's the conclusion? I have delivered a no-frills introduction to NLP and its various applications and technologies. NLP has a lot of potential to revolutionize industries and application areas. However, NLP practitioners are a scarce resource, and more training and development with practical real-world applications needs to be done. As a statistical and linguistic data nerd, I am looking forward to exciting things to come. I will be digging more deeply into NLP methods in upcoming posts, so stay tuned.
https://medium.com/finlabsoy/what-in-the-hello-world-is-natural-language-processing-nlp-8b1eeec04a53
[]
2020-05-13 17:32:39.473000+00:00
['Naturallanguageprocessing', 'Deep Learning', 'Software Development', 'Artificial Intelligence', 'Machine Learning']
The First Non-Fiction Book I Ever Read Changed My Life
The First Non-Fiction Book I Ever Read Changed My Life Entering the world of self-development Photo by Fabiola Peñalba on Unsplash I was only a 15-year-old boy. I saw this book advertised to me on Instagram. “Have you ever wondered why there are few people living their dream, yet others seem to be slipping further away from theirs?” I was hooked from the start. I felt an urge inside of me to read what this author had to say about life, and so I ordered it. £10 of my pocket money spent on this book, which would turn out to be a turning point in my life. You see, our life is where it is because of the little decisions that we make. If I hadn’t gone ahead and purchased that book, I might never have become fascinated with the self-development world, and I wouldn’t be writing this article right now. Every action we take today has repercussions in the future, whether positive or negative. My actions led to me reading this book and changed me into the man I am today:
https://medium.com/the-ascent/the-first-non-fiction-book-i-ever-read-changed-my-life-21d7fff8e685
['Liam Terrington']
2020-09-03 18:01:01.430000+00:00
['Self Development', 'Personal Development', 'Self Improvement', 'Reading', 'Books']
Content Marketing in 2018: 5 Trends You’ll Need to Know
Content Marketing in 2018: 5 Trends You’ll Need to Know By Heike Young As long as there’s been marketing, content has been a part of the marketing practice. That’s led some marketers to wonder: Is content marketing really a net-new strategy, or is it just another fly-by-night marketing buzzword? “It does have a buzzword factor, and I know people who absolutely hate the term. But it’s the industry standard term, so we’re just going to have to accept it, love it or hate it,” says Rebecca Lieb. In the newest episode of the Marketing Cloudcast, Salesforce’s award-winning marketing podcast, we’re talking about the state of content with Rebecca — who has published more research on content marketing and content strategy than anyone else in the field. You may know her from her reports as an analyst at Altimeter Group, or maybe her latest book: Content — the Atomic Particle of Marketing. Take a listen here. For the full conversation that’s filled with many more insights, subscribe on Apple Podcasts, Google Play Music, Stitcher, or wherever you listen to podcasts. Now a few takeaways about the state of content marketing in 2017 from Rebecca’s perspective. #1: Content marketing jobs are changing. Content jobs have changed a lot — and they’re continuing to evolve, as companies become more strategic with their content efforts. Rebecca predicts, “I see the roles of a modern content team as really changing from maybe five or six years ago when people were saying, ‘Oh, hire an out-of-work journalist. They’re cheap and they can blog, and bingo, you’ve got content marketing.’ That doesn’t tie content strategically into other marketing activities, and it also doesn’t address the changing formats of content.” Changing formats mean that content marketers should do more than just write a mean blog post. Rebecca says, “Video is very much on the rise, graphics are on the rise. Audio, like this podcast, is on the rise. Content teams very often need people with more than just writing skills, but production skills as well. You also need people who have talent in content distribution and optimization. You need strategists who can help tie content teams together with other marketing functions, be they communications or branding or social media or advertising. Analytics are crucial to calculating the effectiveness of content. It’s really become quite a strategic role, operationally, in an organization.” #2: Contextual content experiences — like smart speakers and the IoT — are lifting content off the screen. “One of my most recent projects was taking a look at what happens to content when it goes beyond screens, which is something that really fascinates me. In a world of beacons, censors, and the Internet of Things, content isn’t just about Facebook or a blog. It’s in the very air that we breathe,” Rebecca shares. Consumers are interacting directly with AI and virtual assistants through Siri, which has its own form of call-and-respond content. That means that marketers must think about content and engagement in new contexts. “Everything can become digital, and everything really is becoming digital. What happens when your clothing or devices you might wear interact with devices in a store or a location, and what kind of content emerges out of those interactions? I’m calling these contextual campaigns, because they almost go beyond personalization and the right message to the right person at the right time, and they pull in other elements, such as the right place and under the right conditions,” says Rebecca. 
If you’re not already thinking about how your content can become more contextual, the time is now. #3: Transparency is becoming just as important as the content itself. With tactics like native advertising and brand-sponsored influencer content on social media, customers are growing more skeptical of branded content. Companies must follow suit by doubling down on transparency and honesty. Rebecca explains, “Native advertising is incumbent not only on marketers to be more transparent, but also the publishers who are the bearers of those native advertising executions. These publishers need to have strong policies in place that basically come down to disclose, disclose, disclose, but also ensure that the message, the voice, the tone, the look, the feel is congruent with the publishing vehicle.” If you’re working with social media influencers, you should also ensure that any content they create is fully transparent about the paid relationship. #4: Marketers are establishing content as the building block of all marketing. Rebecca advises, “I’ve realized that no marketing is possible without content. Social media doesn’t work without content, and neither does paid advertising. If there wasn’t content … all you’d have would be empty squares and rectangles and blank videos.” Once you begin to view content as the starting point for all marketing efforts, you can more appropriately map the right content to the right stage of the buyer journey. Rebecca continues, “Without content, not a thing moves forward in the entire marketing landscape. That’s why I call it the atomic particle. It’s one of the building blocks.” #5: The lines between paid, owned, and earned media are converging. When I asked her to define content marketing today, Rebecca responded, “As a separate entity, I define content marketing in its purest form as owned media. Owned media are those channels that a marketer largely controls. It might be your website, it might be a blog, it’s even your email, because email is a content channel. Content marketing can exist in its purest state, pure owned media. But increasingly, those lines are blurring. If you ask, ‘What’s Facebook? What’s Twitter? Is it paid, owned, or earned?’ The answer is yes, it’s all three.” The truth is, content can no longer live in silos, with one team focusing on owned channels like the blog and another team worrying only about paid media. Efforts must work together in conjunction for your content strategy to actually work. PS: This podcast has a new format. Several weeks ago, we shifted the Marketing Cloudcast to an entirely new format and style (think narrative with multiple guests — more Freakonomics, less live interview), and I’d love to know what you think! Join the thousands of smart marketers who are Cloducast subscribers on Apple Podcasts, Overcast, Google Play Music, and Stitcher. Tweet @youngheike with feedback on this episode — or ideas for future guests and topics.
https://medium.com/marketing-cloudcast/content-marketing-in-2018-5-trends-youll-need-to-know-af93691f6d4e
[]
2017-08-09 17:25:37.894000+00:00
['Marketing']
My automated RKE update pipeline broke with version 0.2.x — my fault
I’m using an automated build pipeline to install, update, and destroy my Kubernetes test environments based on Rancher Kubernetes Engine (RKE). This worked perfectly until this week. Let me briefly explain the important parts of my pipeline before I talk about the details: (1) check out the cluster.yml from a git repository, (2) extract the kube_config_cluster.yml from the secured repository cache, (3) download the latest stable RKE binary from GitHub (I use this in my test environment where I would like to stay updated all the time), and (4) run “rke up/remove”. As I said, this had worked perfectly for some time. Earlier this week I pushed a new version of my cluster.yml to update my Kubernetes cluster to a newer version. The pipeline started and failed some minutes later with the following error: Failed to bring up Etcd Plane: [etcd] Etcd Cluster is not healthy. I started debugging but couldn’t find anything wrong. The whole Kubernetes cluster, including etcd, looked good. After some time I realized that Rancher had released a new RKE version, 0.2.x (until then I had used 0.1.x, but because of step 3 of my pipeline the build used the latest available stable version). So why does this even matter? With the new version of RKE, Rancher introduced a new way to store the cluster state. They moved it from a configmap entry (0.1.x) to a file called cluster.rkestate (0.2.0), which is stored next to the cluster.yml. Because I wasn’t aware of this file, my pipeline didn’t store it anywhere, and therefore a “rke up” always created a new cluster.rkestate file, which then led to the issue described above. After changing my pipeline configuration to also cache the state file, the update finished successfully without any issues. What have we learned from this? Always read the release notes. 😏
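The post doesn’t show the pipeline definition itself, so purely as an illustration of the fix, here is a rough Python sketch of the four steps with cluster.rkestate cached alongside kube_config_cluster.yml. The cache path, file layout, and the name of the RKE release asset are all assumptions, not the author’s actual setup.

import os
import shutil
import subprocess
import urllib.request

CACHE = "/cache"  # secured pipeline cache directory (assumption)

# Restore the kubeconfig AND the cluster.rkestate state file from the cache, if present.
for name in ("kube_config_cluster.yml", "cluster.rkestate"):
    cached = os.path.join(CACHE, name)
    if os.path.exists(cached):
        shutil.copy(cached, name)

# Download the latest stable RKE binary (release asset name assumed).
urllib.request.urlretrieve(
    "https://github.com/rancher/rke/releases/latest/download/rke_linux-amd64",
    "rke")
os.chmod("rke", 0o755)

# Apply the checked-out cluster.yml.
subprocess.run(["./rke", "up", "--config", "cluster.yml"], check=True)

# Push the (possibly updated) kubeconfig and state file back into the cache.
for name in ("kube_config_cluster.yml", "cluster.rkestate"):
    shutil.copy(name, os.path.join(CACHE, name))

The key point is the last loop: the state file has to travel back into the cache after every run, otherwise the next “rke up” starts from a freshly generated, empty state again.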
https://medium.com/01001101/my-automated-rke-update-pipeline-broke-with-version-0-2-x-my-fault-959a71a37c0d
['Nico Meisenzahl']
2019-04-12 07:52:53.020000+00:00
['Rke', 'Kubernetes', 'K8s', 'DevOps', 'Rancher']
Data Science, the Good, the Bad, and the… Future
How often do you think you’re touched by data science in some form or another? Finding your way to this article likely involved a whole bunch of data science (whooaa). To simplify things a bit, I’ll explain what data science means to me. “Data Science is the art of applying scientific methods of analysis to any kind of data so that we can unlock important information.” That’s a mouthful. If we unpack that, all data science really means is to answer questions by using math and science to go through data that’s too much for our brains to process. Data Science covers… Machine learning Artificial Intelligence Deep Learning Predictive analysis … and all the buzzwords we hear today, like data visualization, voice assistants, etc. To understand how data science was used to find this article, I’ll ask you to think of the steps you used to get here. For the sake of this explanation, let’s assume that most of you were online looking at pictures of kittens and puppies when you suddenly came across a fancy word related to data science and wanted to know what it was all about. You turned to Google hoping to find the meaning of it all, and you typed “What is *fill in your data science related buzzword*.” You would have noticed that Google was kind enough to offer suggestions to refine your search terms — that’s predictive text generation. Once the search results came up, you would have noticed a box on the right that summarizes your search results — that’s Google’s knowledge graph. Using insights from SEO (Search Engine Optimization) I’m able to make sure my article reaches you easily, which is a good data science use case in and of itself. All of these are tiny ways that data science is involved in the things we do every day. To be clear, going forward I’m going to use data science as an umbrella term that covers artificial intelligence, deep learning and anything else you might hear that’s relevant to data and science. Positives: astrophysics, biology, and sports Data science made a huge positive impact on the way technology influences our lives. Some of these impacts have been nice and some have been otherwise. *looks at Facebook* But, technology can’t inherently be good or bad, technology is… technology. It’s the way we use it that has good or bad outcomes. We recently had a breakthrough in astrophysics with the first ever picture of a black hole. This helps physicists confirm more than a century of purely theoretical work around black holes and the theory of relativity. To capture this image, scientists used a telescope as big as the earth (Event Horizon Telescope or EHT) by combining data from an array of eight ground-based radio telescopes and making sense of it all to construct an image. Analyzing data and then visualizing that data — sounds like some data science right here. A cool side note on this point: a standard Python library of functions for EHT Imaging was developed by Andrew Chael from Harvard to simulate and manipulate VLBI ( Very-long-baseline interferometry) data helping the process of creating the black hole image. Olivier Elemento at Cornell uses Big Data Analytics to help identify mutations in genomes that result in tumor cells spreading so that they can be killed earlier — this is a huge positive impact data science has on human life. You can read more about his incredible research here. Python is used by researchers in his lab while testing statistical and machine learning models. Keras, NumPy, Scipy, and Scikit-learn are some top-notch Python libraries for this. 
If you’re a fan of the English Premier League, you’ll appreciate the example of Leicester City winning the title in the 2015–2016 season. At the start of the season, bookmakers put the likelihood of Leicester City winning the EPL at 10 times less than the odds of finding the Loch Ness monster. For a more detailed attempt at describing the significance of this story, read this. Everyone wanted to know how Leicester was able to do this, and it turns out that data science played a big part! Thanks to their investment in analytics and technology, the club was able to measure players’ fitness levels and body condition while they were training to help prevent injuries, all while assessing the best tactics to use in a game based on the players’ energy levels. All training sessions had plans backed by real data about the players, and as a result Leicester City suffered the fewest player injuries of all clubs that season. Many top teams use data analytics to help with player performance, scouting talent, and understanding how to plan for certain opponents. Here’s an example of Python being used to help with some football analysis. I certainly wish Chelsea F.C. would use some of these techniques to improve their woeful form and make my life as a fan better. You don’t need analytics to see that Kante is in the wrong position, and Jorginho shouldn’t be in that team and… Okay, I’m digressing — back to the topic now! Now that we’ve covered some of the amazing things data science has uncovered, I’m going to touch on some of the negatives as well — it’s important to think critically about technology and how it impacts us. The amount that technology impacts our lives will undeniably increase with time, and we shouldn’t limit our understanding of it without being aware of the positive and negative implications it can have. Some of the concerns I have around this ecosystem are data privacy (I’m sure we all have many examples that come to mind), biases in predictions and classifications, and the impact of personalization and advertising on society. Negatives: gender bias and more This paper published in NIPS talks about how to counter gender biases in word embeddings used frequently in data science. For those who aren’t familiar with the term, word embeddings are a clever way of representing words so that neural networks and other computer algorithms can process them. The data used to create Word2Vec (a model for word embeddings created by Google) has resulted in gender biases that show close relations between “men” and words like “computer scientist”, “architect”, “captain”, etc. while showing “women” to be closely related to “homemaker”, “nanny”, “nurse”, etc. Here’s the Python code used by the researchers who published this paper, and a small illustrative probe of these associations is sketched at the end of this passage. Python’s ease of use makes it a good choice for quickly going from idea to implementation. It isn’t always easy to prevent biases like these from influencing our models. We may not even be aware that such biases exist in the data we collect. It is imperative that an equal focus is placed on curating, verifying, cleaning, and to some extent de-biasing data. I will concede that it isn’t always feasible to make all our datasets fair and unbiased. Lucky for us, there is some good research published that can help us understand our neural networks and other algorithms to the extent that we can uncover these latent biases. When it comes to data science, always remember - “Garbage in, garbage out.” The data we train our algorithms with influences the results they produce.
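As a small, hand-rolled illustration of the associations described above: pretrained vectors can be probed with gensim. This is not the researchers’ code, and the specific model name and word choices are assumptions made for demonstration.

import gensim.downloader as api

# Pretrained Google News word2vec vectors (a large download on first use).
vectors = api.load("word2vec-google-news-300")

# Compare how strongly occupation words associate with "he" versus "she".
# If a token is missing from the vocabulary, substitute a similar word.
for word in ["computer_programmer", "architect", "homemaker", "nurse"]:
    print(word,
          round(vectors.similarity(word, "he"), 3),
          round(vectors.similarity(word, "she"), 3))

Skews in these similarity scores are exactly the kind of latent bias the debiasing work tries to measure and remove.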
The results our algorithms produce are often seen by us and can have a lasting influence. We must be aware of the impact social media and content suggestions have on us. Today, we’re entering a loop where we consume content that reinforces our ideas and puts people in information silos. Research projects that fight disinformation and help people break out of the cycle of reinforcement are critical to our future. If you were trying to come up with a solution to this fake news problem, what would you need to do? You would first need to come up with an accurate estimate of what constitutes “fake” news. This means comparing an article with reputable news sources, tracing the origins of a story, and verifying that the article’s publisher is a credible source. You’d need to build models that tag information not corroborated by other sources. To do this accurately, one would need a ton of not “fake” news to train the model on. Once the model knows how to identify if something is true (to a tolerable degree of confidence), then the model can begin to flag news that’s “fake.” Crowdsourced truth is also a great way to tackle this problem, letting the wisdom of the crowd determine what the “truth” is. Blockchain technology fits in well here by allowing data to flow from people all over the world and arrive at consensus on some shared truth. Python is the fabric that allows all these technologies and concepts to come together and build creative solutions. Python, a data science toolset I’ve talked about data science, what it means, how it helps us, and how it may have negative impacts on us. You’ve seen through a few examples how Python is a versatile tool that can be used across different domains, in industry and academia, and even by people without a degree in Computer Science. Python is a tool that makes solving difficult problems a little bit easier. Whether you’re a social scientist, a financial analyst, a medical researcher, a teacher or anyone that needs to make sense of data, Python is one thing you need in your toolbox. Since Python is open source, anyone can contribute to the community by adding cool functionalities to the language in the form of Python libraries. Data visualization libraries like Matplotlib and Seaborn are great for representing data in simple-to-understand ways. NumPy and Pandas are the best libraries around for manipulating data. SciPy is full of scientific methods for data analysis. Whether you want to help fight climate change, analyze your favorite sports team, or just learn more about data science, artificial intelligence, or your next favorite buzzword — you’ll find the task at hand much easier if you know some basic Python. Here are some great Python libraries to equip yourself with: NumPy, Pandas, Scikit-Learn, Keras, and Matplotlib. I’ll illustrate an example of how easy it is to get started with data science using Python. Here’s a simple example of how you can use Scikit-Learn for some meaningful data analysis. Python example with Scikit-learn This code is available at the Kite Blog github repository. I’ve used one of Scikit-Learn’s datasets called Iris, which is a dataset that consists of 3 different types of irises’ (Setosa, Versicolour, and Virginica) petal and sepal lengths, stored in a 150×4 numpy.ndarray. The rows are the samples and the columns are: Sepal Length, Sepal Width, Petal Length, and Petal Width. I’m going to run a simple linear regression to display the correlation between petal length and petal width.
The only libraries used here are scikit-learn (for the regression and data set) and matplotlib for the plotting.

from sklearn import datasets, linear_model
import matplotlib.pyplot as plt

iris = datasets.load_iris()
# Data and features are both numpy arrays
data = iris.data
features = iris.feature_names

Now, we’ll plot a linear regression between the length and width of the petals to see how they correlate.

# Create the regression model
regression = linear_model.LinearRegression()

# Reshape the Numpy arrays so that they are columnar
x_data = data[:, 2].reshape(-1, 1)
y_data = data[:, 3].reshape(-1, 1)

# Train the regression model to fit the data from iris (comparing the petal width)
regression.fit(x_data, y_data)

# Display chart
plt.plot(x_data, regression.predict(x_data), color='black', linewidth=3)
plt.scatter(x_data, y_data)
plt.show()

Here’s a tutorial I created to learn NumPy, and here’s a notebook that shows how Keras can be used to easily create a neural network. Just this much will allow you to build some pretty cool models. Concluding thoughts Before I end, I’d like to share some of my own ideas of what I think the future of data science looks like. I’m excited to see how concerns over personal data privacy shape the evolution of data science. As a society, it’s imperative that we take these concerns seriously and have policies in place that prevent our data from accumulating in the hands of commercial actors. When I go for walks around San Francisco, I’m amazed at the number of cars I see with 500 cameras and sensors on them, all trying to capture as much information as they possibly can so that they can become self-driving cars. All of this data is being collected, it’s being stored, and it’s being used. We are a part of that data. As we come closer to a future where self-driving cars become a bigger part of our life, do we want all of that data to be up in the cloud? Do we want data about the things we do inside our car available to Tesla, Cruise or Alphabet (Waymo)? It’s definitely a good thing that these algorithms are being trained with as much data as possible. Why would we trust a car that hasn’t been trained enough? But that shouldn’t come at the cost of our privacy. Instead of hoarding people’s personal data in “secure” cloud servers, data analysis will be done at the edge itself. This means that instead of personal data leaving the user’s device, it will remain on the device and the algorithm will run on each device. Lots of development is happening in the field of Zero Knowledge Analytics, which allows data to be analyzed without needing to see what that data is. Federated Learning allows people to contribute to the training of Neural Networks without their data leaving their device. The convergence of blockchain technology and data science will lead to some other exciting developments. By networking people and devices across the globe, the blockchain can provide an excellent platform for distributed computation, data sharing, and data verification. Instead of operating on information in silos, it can be shared and opened up to everyone. Golem is one example of this. Hypernet is a project born out of Stanford to solve a big problem for scientists — how to get enough compute power to run computationally and data intensive simulations.
Instead of waiting for the only computer in the university with the bandwidth to solve the task and going through the process of getting permission to use it, Hypernet allows the user to leverage the blockchain and the large community of people with spare compute resources by pooling them together to provide the platform needed for intensive tasks. Neural networks have for a long time felt like magic. They do a good job, but we’re not really sure why. They give us the right answer, but we can’t really tell how. We need to understand the algorithms that our future will be built on. According to DARPA, the “third wave” of AI will depend on artificial intelligence models being able to explain their decisions to us. I agree, we should not be at the mercy of decisions made by AI. I’m excited about what the future holds for us. Privacy, truth, fairness, and cooperation will be the pillars on which the future of data science is built.
https://medium.com/kitepython/data-science-the-good-the-bad-and-the-future-kite-blog-cfac9ba130ce
['Kirit Thadaka']
2019-08-07 17:15:53.980000+00:00
['Privacy', 'Responsible Data Science', 'Python', 'Data Science', 'Machine Learning']
Brexit: EU’s Epitaph and Take-aways for Pakistan
Brexit: EU’s Epitaph and Take-aways for Pakistan Tomorrow Britain goes to its historic Brexit referendum where it decides whether it wants to stay in EU or not. After Stolz Germany signed a humiliating Versailles Treaty in 1919, only 100 years later it invaded Brussels this time through economic and diplomatic means some would call EU. With far-right populism on the rise in Europe, UK wants independence from this French-German dominance. In 1919 Treaty of Versailles was heaped upon an already defeated and subjugated Germany after a devastating World War I. Its allies humiliated, its resources depleted and economy on its knees, Germany signed arguably one of the most ignominious peace deals in world politics. The terms were so brutal that the reparations included stallions, mares, cows, rams, sheep and bulls along with coal mines and literally everything worth anything. Historians have argued that this was the very blow that drove Stolz German nation to give into Nazi nationalism and die for the mother land. Nietzsche’s works were reinterpreted and Uber alles spirit infested the Teutonic tribe. World War II ended with similar results but la République and Union Jack’s aspirations were only beginning to materialize. The only problem was to prevent itself from wars and create a combined front to counter future aggressions of Russia, Germany, Japan and anything that rose above its heights. Soft power emerged in the form of United Nations and its international organizations. War-torn economy stood upon its feet by generous Marshal Plan of a paranoid US fearing Cold War escalation. Alliances and Treaties like SEATO and CENTO were forged to keep Kremlin away from the liberal economic world order after Bretton Woods Agreement in 1944. Most important of all was to create a strong neighborhood that ensured mutual cooperation. 1950 brought 6 powerful post-war nations together in sharing their steel resources, the first step towards what we know as the European Union today. Treaty of Rome created European Economic Community half a century later actually institutionalizing the dream. Today EU’s budget for 2014–2020 is 960 billion Euro with Germany being the largest contributor, a remarkable feat from the war-torn country that could barely survive without the help of its enemies. Brexit, many say, is the psychological alarm that triggered precisely because of this German dominance in Brussels. Nigel Farage, the leader of UK Independence party, has been the most ferocious of the EU critics. Uncontrolled EU immigration laws, economic dependence, inability to legislate independently, and islamophobia seems to be his key arguments. Even though the UK has opted out from Euro and border control agreements, pro-brexiters argue for complete independence. Populist slogans like more jobs, Brits first and lower tax and welfare benefits for immigrants are working like they always do. The situation has polarized public opinion to the verge of violence and Jo Cox, a British MP has already lost her life to right-wing extremists. This seething national fervor is not without its opposition. Liberal voices have locked horns with the conservatives on literally every issue from statistics to politics and morality to culture. David Cameron, the prime minister, stands against it with all the forces he could summon. Even US has warned the UK of ‘consequences’ while Putin calls it Cameron blackmailing EU. 
The UK will certainly lose its negotiation powers in international trade deals, skilled labor attraction, freedom of navigation and a lot more. IMF has already flagged UK’s departing decision as a precursor to British recession and global ‘contagion effect’. Does Pakistan have a lesson or two to learn from it? Winston Churchill’s famous quote ‘History is written by victors’ appears to be a self-fulfilling prophecy. While the crimes of Soviet Union are chronicled by pro-western writers and Japanese imperialism is lacerated by American historians the problem doesn’t lie in the history itself. Historical revisionism is what makes it impossible to learn from history and repeat it unwisely. Years ago my visit to Auschwitz camp in Poland was a painful reminder of this. One of the memories I have from the camp was the famous quote form George Santayana that goes like this. ‘Those who cannot learn from the history are doomed to repeat it.’ This seems to be the case for Pakistan at least. Only a generation ago we were told that economic liberalization and free markets are the way forward to prosperity, economic independence, and globalization. Generous US support through Pakistan during Soviet-Afghan war ensured our allegiance to allies in their new world order. Failed coup of General Akbar Khan with Faiz Ahmed Faiz and other Marxist actors was first thwarted attempt to socialize the country that later resurrected in its short-lived Bhutto regime where nationalization of industries destroyed country’s economy beyond repair. PPPs policies of economic centralization were very similar to Mao’s Great Leap Forward. Marxists in Pakistan, however, didn’t have what it took to overthrow the Czar. Hide and seek between army generals and so-called democratic regimes continued. War on terror electrified public opinion against foreign influence in country’s affairs. Political arena saw a new player named Imran Khan, former cricket player become an emerging star and nuisance for the status quo political parties. More or less a tripartite democracy, free market, and free media, Pakistan has joined the camp of allies into the globalist world. The question is, does Britain feel the same way about globalization? Brexit is a lesson to the developing world as if it wasn’t clear enough until now. Rules of the game can change anytime its major players find them unfair. Alliances are broken, treaties nullified, promises reinterpreted and arsenals revamped. It was the UK, France, Britain and US that lead the world into surplus economics and multinational corporations. Margaret Thatcher’s vehement TV performances of pitching this new world order with her charming ‘breaking free from the cage’, spread like an epidemic among the masses. Decades later, that same Britain wants to get back into its nationalistic cage because it looks safer, stronger and to be fair familiar. Whatever happened to fighting for freedom, democratic values, and openness? With China’s rise as an alternative power, US annihilation in Afghanistan and Europe’s pre-occupation with its migrant crisis, the global vacuum of power is emerging. Is it a good chance for Pakistan to break free of its colonial chains? Do we have anything to lose but the yokes around our necks? The jury is still out on it. We certainly need to set our priorities right and choose our partners wisely. This could be a turning point in Pakistan’s history. Originally published at www.lhrtimes.com on June 22, 2016.
https://medium.com/minhaajmusings/brexit-eus-epitaph-and-take-aways-for-pakistan-5b413b3f618c
['Minhaaj Rehman']
2016-08-08 08:50:10.303000+00:00
['Brexit', 'Politics', 'European Union', 'Colonialism', 'Pakistan']
Detection of Surface Cracks in Concrete Structures using Deep Learning
Detection of Surface Cracks in Concrete Structures using Deep Learning Doing Cool things with data! Crack in Concrete Building Introduction Detection of surface cracks is an important task in monitoring the structural health of concrete structures. If cracks develop and continue to propagate, they reduce the effective load-bearing surface area and can over time cause failure of the structure. The manual process of crack detection is painstakingly time-consuming and suffers from the subjective judgments of inspectors. Manual inspection can also be difficult to perform in the case of high-rise buildings and bridges. In this blog, we use deep learning to build a simple yet very accurate model for crack detection. Furthermore, we test the model on real-world data and see that it is accurate in detecting surface cracks in both concrete and non-concrete structures, for example roads. The code is open sourced on my Github at link. Original full story published on my website here. Data set For this blog, we are using the publicly available Concrete Crack Images data set, which was made public through the paper by Ozgenel and Gonenc. The data set consists of 20,000 images of concrete structures with cracks and 20,000 images without cracks, generated from 458 high-resolution images (4032x3024 pixels). Each image in the data set is a 227 x 227 pixel RGB image. Some sample images with cracks and without cracks are shown below: Sample images with cracks Sample images without cracks As can be seen, the data set has a wide variety of images — slabs of different colours, cracks of different intensities and shapes. Model Build For this problem, let's build a Convolutional Neural Network (CNN) in Pytorch. Since we have a limited number of images, we will use a pretrained network as a starting point and use image augmentations to further improve accuracy. Image augmentations allow us to apply transformations like vertical and horizontal flips, rotation, and brightness changes, significantly increasing the effective sample size and helping the model generalize. For the steps below, follow along with my code on Github. Shuffle and Split input data into Train and Val The data downloaded will have 2 folders, one for Positive and one for Negative. We need to split this into train and val. The code snippet below will create new folders for train and val and randomly shuffle 85% of the data into train and the rest into val. Split into train and val Apply Transformations Pytorch makes it easy to apply data transformations which can augment training data and help the model generalize. The transformations I chose were random rotation, random horizontal and vertical flips, as well as random color jitter. Also, each channel is divided by 255 and then normalized. This helps with the neural network training. Transforms Pretrained Model We are using a Resnet 50 model pretrained on ImageNet to jump-start the model. To learn more about ResNet models please read this blog from me. As shown below, the ResNet50 model consists of 5 stages, each with a convolution and Identity block. Each convolution block has 3 convolution layers and each identity block also has 3 convolution layers. The ResNet-50 has over 23 million trainable parameters. We are going to freeze all these weights and add 2 more fully connected layers: the first layer has 128 neurons in the output and the second layer has 2 neurons in the output, which are the final predictions.
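To make the setup above concrete, here is a minimal PyTorch/torchvision sketch of a frozen ResNet-50 with the new two-layer head and the listed augmentations. It is my own paraphrase of the described approach rather than the author's notebook, and the folder path and hyperparameters are placeholder assumptions.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentations roughly matching the ones described: rotation, flips, color jitter.
train_tfms = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),                        # scales pixel values to [0, 1]
    transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet channel statistics
                         [0.229, 0.224, 0.225]),
])
train_data = datasets.ImageFolder("data/train", transform=train_tfms)  # placeholder path
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet50(pretrained=True)
for param in model.parameters():        # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Sequential(               # new trainable head: 2048 -> 128 -> 2 classes
    nn.Linear(model.fc.in_features, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)

optimizer = torch.optim.Adam(model.fc.parameters())
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:           # one pass over the training data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

Because only the small head is trained while the backbone stays frozen, a pass or two over the 40,000 images is enough for the kind of accuracy reported below.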
ResNet50 Model ResNet model layers As shown by the model summary, this model has 23 million non-trainable parameters and 262K trainable parameters. Model Parameters We used Adam as the optimizer and trained the model for 6 epochs. Model Training and Prediction on Real Images We use transfer learning to train the model on the training data set while measuring loss and accuracy on the validation set. As shown by the loss and accuracy numbers below, the model trains very quickly. After the 1st epoch, train accuracy is 87% and validation accuracy is 97%! This is the power of transfer learning. Our final model has a validation accuracy of 98.4%. Model Training Stats Testing the model on Real World Images Now comes the most interesting part. Yes, the model works on the validation data, but we want to make sure it also works on unseen data from the internet. To test this, we take random images of cracked concrete structures and cracks in road surfaces. These images are much bigger than our training images. Remember, the model was trained on crops of 227 x 227 pixels. We now break the input image into small patches and run the prediction on each of them. If the model predicts a crack, we color the patch red (cracked); otherwise we color the patch green. The following code snippet does this. Prediction on crops The model does very well on images that it has not seen before. As shown in the image below, the model is able to detect a very long crack in concrete by processing hundreds of patches on the image. Crack Detection on Concrete. Left-Original Image. Right-Red regions are predictions with crack and green regions are predictions of no crack Furthermore, I tested the model on road cracks too. This model was not trained on road surfaces, but it does very well at picking up road cracks too! Crack Detection on Roads. Left-Original Image. Right-Red regions are predictions with crack and green regions are predictions of no crack More real-world images and model predictions on them are shared on the github link for this project. Conclusion This blog shows how easy it has become to build real-world applications using deep learning and open source data. This entire work took half a day and produced a practical solution. I hope you try the code for yourself and test it on more real-world images. I am extremely passionate about computer vision and deep learning in general. I have my own deep learning consultancy and love to work on interesting problems. I have helped many startups deploy innovative AI-based solutions. Check us out at http://deeplearninganalytics.org/. You can also see my other writings at: https://medium.com/@priya.dwivedi If you have a project that we can collaborate on, then please contact me through my website or at [email protected]
https://towardsdatascience.com/detection-of-surface-cracks-in-concrete-structures-using-deep-learning-f8f85cd8ac8b
['Priya Dwivedi']
2019-12-31 22:16:34.952000+00:00
['Deep Learning', 'Manufacturing', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
A Year of Hard Times
A Year of Hard Times 2020 — Smoke and Ashes (Sunflowers in our Front Yard in a Sky of Yellow Haze) Author’s Photo I brewed a strong cup of coffee and called my high school English teacher today. Her name is Claire, Ms. Claire, to be precise. She is 87 years old. Claire is recovering from a major stroke. Her daughter attributes the stroke to this year’s sharp increase in chain-smoking that rose steadily each morning as she read the paper. During our video chat, Claire mentioned that she hasn’t seen my hair this long since I was seventeen. I haven’t brushed it. She raises a balding right eyebrow. Her speech is still slurred from the stroke and she has to pause from time to time to recall a lost word but I understand her just fine. Her sense of humor has remained sharp and steady as a blunt rock. I promise to ask my wife to cut my hair. I rub my hand over the top of my head in quick circles and lament in dramatic jest, but where will the birds nest? We let out a much-needed laugh. Claire reaches for an invisible cigarette. It’s hard to break a seventy-year habit. I tell her I am proud of her for quitting. Claire’s expression shifts to a serious look, like I might be suspended and assigned summer school if I am not completely honest. She asks if the armed “Pro-Trump” militias have left the Oregon State Capitol Building. I adjust my smartphone so she can see the yellow haze outside my window to assure her that the smoke from the wildfires is much too thick for anyone to be marching around in. Her daughter jumps in on the conversation and reminds us that Claire is on restriction from all political discussions while she is in recovery. She hands Claire a glass of orange juice, quickly, as if it were a valium but it doesn’t even come with a splash of vodka. Claire rolls her eyes and I wink back at the screen — she chucks her foam “stress ball” at the couch. Her aim is still impressive with that good arm. We all chuckle. Claire is a tough old bird. She survived polio in her late teens, left an alcoholic husband and raised three daughters as a single mom. She beat the odds and attended night school to become an English teacher in her early thirties. I ask if COVID-19 reminded her of polio in the 1950s. She bellows, she has never in her life seen anything as bad as COVID-19, this President and 2020. Fuck 2020! I echo, Fuck 2020! Her daughter gives us a sideways look. Claire playfully shoos her away from her wheelchair. I smile remembering the first time I heard Claire cuss. It was after graduation. I didn’t win any awards but Claire invited me to go out to coffee, said I could stop calling her Ms. Rhodes, lit up a cigarette and extended the pack toward me. I passed on smoking but took up profanity. Damnit. I am glad that we have stayed in touch all of these years. I was such a pain in the ass as a student and spent a lot of time in detention. No one knew anything about learning disabilities back then but she wouldn’t give up on me and I sure as hell will not give up on her, not now, not ever. Claire asks how things are going. I tell her about all the animals that B. and I have adopted since this year turned sour with people losing their jobs, packing up and leaving their pets behind. I tell her about the barn cats: Thelma and Louise and the Muscovy Ducks: Hilda, Gilda, Matilda and Geraldine Jr. (I point out that Geraldine is named after my mother and Claire nods, smiling). Finally, I list off the newest addition of rabbits: Hazel, Maude, Gracie and Nutless George. 
Claire asks if the rabbits bite and I confirm her suspicion by showing the chewed up cuff of my shirt sleeve. Claire inquires about how our old poodle is taking all of this and I promise that, while he is not amused, he is still King. Claire asks about the garden and I try to prove that I know a little about science. I straighten up in my chair to explain that we are going to try our hand at phytoremediation on the ash contaminated soil. I tell her about the benefits of growing mushrooms and sunflowers and that we will be switching to raised beds. Claire wonders if we were able to harvest anything before the ash fell and I assure her that we sent a carload of squash to the shelter. I pan the phone’s camera to show off cupboards stocked with pickles and kraut and then, I open the fridge so she can see that it is filled with vegetables and jars of kimchi. She requests that we send her some of my wife’s pickles and that makes me happy. Claire’s eyes begin to grow heavy, looking about ready for a nap. We exchange an I love you, turn our phones off, gently . . . and then, it begins to rain.
https://medium.com/a-cornered-gurl/a-year-of-hard-times-63e4fd4f449
['Noe', 'Lisa Arana']
2020-09-18 21:31:51.487000+00:00
['Short Story', 'Wildfires', 'Nonfiction', 'Friendship', 'A Cornered Gurl']
Science in the System: Fluent Design and Material
Science in the System: Fluent Design and Material Using physical elements to create a visual hierarchy and organize content in a way that’s easy for the user to process Making an origami crane out of a sheet of acetate was a bit trickier than I expected. Material science is a growing field full of ground-breaking discoveries and inventions that seem to defy the laws of physics. For example, take carbon nanotubes. Each nanotube is about 10,000 times smaller than a human hair. When rolled into sheets, carbon nanotubes have 100 times the strength of steel — but only 1/6 of the weight. In addition to being small, they’re semiconductors that can be used to create chips that are smaller, more efficient, and faster than their silicon counterparts. MIT is creating fluorescent polymer gels that change color when disturbed by heating, shaking, or exposing them to acid. This new color-changing material can be used to detect structural failures, which could help make vehicles and buildings safer. At least one theoretical physicist believes that the right exotic materials could generate a warp bubble that would enable faster-than-light travel… On a more practical note, material has come to the Windows user interface. Before material: Glass in the UI We have experimented with material-like visual effects in UI for years. In the late ‘90s and early 2000s, “glass” and “plastic” buttons were so common that popular image-editing programs, like Adobe Elements, provided presets for creating them. Eventually, mainstream operating systems embraced transparency effects: Apple introduced translucent, glass-like effects to the macOS in 2000, and Microsoft added the Aero Glass theme to Windows Vista in 2005. The right materials can help you distinguish otherwise similar elements. It works in origami, and it works in the UI, too. With the Windows 10 Fall Creators Update, released last year, we’ve added material to the OS. Our first material, Acrylic, is a translucent surface. You can customize Acrylic by giving it a colored tint or changing its opacity. More than a special effect Early glass and plastic effects typically used one or two linear gradients and a bevel. An old-school glass effect. Acrylic material is more sophisticated. On a PC, it’s a two-dimensional simulation of a 3D, real-world object that uses light, blur, noise, and color to replicate a physical, acrylic material. The recipe for Acrylic uses several layers, each with its own subtle touch. Simulating an actual material, rather than applying a simple 2D special effect, means we can do some interesting things with Acrylic in Mixed Reality environments… Material benefits Usability studies show that people prefer attractive UI. Regardless of whether an attractive UI actually demonstrates good design, users find it more enjoyable to use. But in addition to enhancing the look of an app, acrylic material serves a purpose; like other components of the Fluent Design system, material is more than just a special effect. As I’ve mentioned in my other articles, one of the biggest challenges in UI design is organizing content and presenting it in a way that’s easy for the user to process. Some apps provide so much functionality that the user is bombarded with menus and buttons and context menus and text. It can be overwhelming. We can reduce information overload and make content easier to process by providing cues that help the user mentally group it into chunks. Creating a strong visual hierarchy can help.
Techniques for creating visual hierarchy The concept of visual hierarchy in layout design has been around for a long time, long before computers and graphical UI — even before typesetting and printing. Thanks to centuries of experimentation, we have a solid set of techniques for creating a strong visual hierarchy. Contrast: The differences between elements are what establish the visual hierarchy. For example, using different background colors or different fonts creates contrast between two otherwise similar elements. Size: Use size to create relationships between elements. For example, make dominant elements (such as section headers) larger than their child elements. The Microsoft type ramp for UWP apps Proximity: Users assume that elements that are next to each other are related. Take advantage of that assumption by putting actions and content that are related next to each other. Negative space: This is the opposite of proximity. Use negative space to separate elements that aren’t related to each other. Repetition: The opposite of contrast, repetition makes elements look similar to each other, which encourages users to mentally group them together. Effective use of repetition creates a feeling of consistency and predictability. Material: Acrylic material is another technique for creating a visual hierarchy. Use it as the background of a region to make that region stand out. Acrylic creates a contrast that’s more subtle than a color background change — and it feels more open, too, because its transparent quality lets light shine through. When to use Acrylic Acrylic is a lightweight mechanism for creating a contrast between a set of elements and the rest of the UI. The more carefully you use it, the more powerful the effect. In general, we recommend using acrylic for supporting or secondary elements, such as navigation and commanding elements. This example uses acrylic to distinguish the navigation menu from the app’s primary content. Another version of the same app, this example uses acrylic to separate the horizontal navigation menu from the app’s primary content. Acrylic’s translucent nature makes it particularly useful for dialogs and flyouts because the user can see past the dialog and be reminded of the UI that triggered it. How to use Acrylic in your UWP app In UWP terms, Acrylic is a type of brush you can use to paint the background of any element. You can create your own and customize it, or you can use one of the 20+ predefined Acrylic brushes we provide, as shown in this example: <Grid Background="{ThemeResource SystemControlAcrylicElementBrush}"> Some controls, such as the Navigation View, automatically use Acrylic. Find out more For a complete list of Acrylic brushes and detailed instructions (with code examples), check out our Acrylic Material article on docs.microsoft.com. Follow me on Instagram.
https://medium.com/microsoft-design/science-in-the-system-fluent-design-and-material-42b4dc532c14
['Mike Jacobs']
2019-08-27 17:26:54.747000+00:00
['Design', 'Fluent Design System', 'UX Design', '3d Design', 'Microsoft']
TypeScript Generic Rest Parameters and Tuple Types in Practice
Authorization In this article we’ll focus on how we handle step 4, the authorization, and how generic rest parameter types provide full type support for it. The framework facilitates our developers in the process by providing an authorize method, which takes a higher-order auth function and some arguments for it. The auth function is called with those arguments. If it returns false, a specific UnauthorizedError will be thrown. This is then caught by the framework and a 403 response is returned to the client. The method itself is straightforward: If we look more closely at the auth function, it is called with: (1) the instantiation of the controller (which allows the function to perform db operations, logging, etc. in the context of the request), and (2) the other arguments given to the authorize function. These arguments are gathered using the rest operator and then applied to the auth function call using the spread syntax. Basically, we are passing all the parameters of the authorize function to the auth function, except for the first one. An example auth function that checks whether the job belongs to a given user could be defined as: Types Now, we could type the authorize function saying that both rest arguments are of type unknown[]: But then we are missing out on a lot of type checks when calling authorize: the number of arguments required, because array types don’t have a fixed length, and the type of the arguments required, because the elements are homogeneously typed as unknown (meaning all of them have the same type, unknown). In other words, there would be no proper type checking on the rest arguments, so the following calls are allowed:

await this.authorize(isJobOf, job)
await this.authorize(isJobOf, job, "id")
await this.authorize(isJobOf)

What we actually want to tell the type system is that the arguments gathered in authorize’s rest args are exactly the same as those of the auth function. We can try this by using generic types, saying that the authorize function’s rest parameter is of type P and the auth function’s rest parameter is of type T. Then, we instantiate the auth function’s generic type with P so that T = P: The type system will complain that args needs to be of an array type, because that’s the only allowed type when using a generic type on rest arguments. This is solved by telling it that the generic types P and T both extend the Array type unknown[] (you could also use any). Although it feels natural to type the rest arguments as an array, it is actually a special construct where the type system will use tuple types instead of homogeneous array types. Tuple types solve exactly the problem we had, namely they type an array with a fixed length and a specific type for each element. When calling authorize with isJobOf, the type system will: Match the signature type of isJobOf with AuthFunction<T>, setting T to the tuple [number, number] to match the types of the 2nd and 3rd arguments of isJobOf. As a result, the generic type P of authorize is also set to [number, number]. So now the type system enforces the fact that the rest parameter of authorize needs to be of type [number, number] and consequently that the 2nd and 3rd parameters need to be of type number. So now the type system does exactly what we wanted it to do.
Furthermore, amazingly enough when using VSCode it will even copy the right argument names of the auth function to show in the type signature of authorize when calling it (job: number, customer: number): This seemingly simple result is all made possible by the intrinsic combination of rest arguments, the spread syntax and tuple types with generic rest arguments. How cool is that! We are so happy with the amazing work by the TypeScript team to finally come up with a type system that works with us and not against us!
https://medium.com/javascript-in-plain-english/typescript-generic-rest-parameters-and-tuple-types-in-practice-edc2bb0bdcb9
['Tim Coppieters']
2020-12-18 07:55:47.594000+00:00
['Web Development', 'Nodejs', 'Typescript', 'Programming', 'JavaScript']
The Fitness Challenge Helping Me Socialize During COVID
The Fitness Challenge Helping Me Socialize During COVID Our problem during the pandemic is a Sisyphean struggle that doesn’t seem to end. Photo by Jenny Hill on Unsplash In popular anime, One Punch Man, protagonist Saitama becomes the most powerful anime hero in the world. How? He had a rigorous workout regime of the following: 100 push-ups, 100 sit-ups, 100 squats, and a 10-kilometer run. Of course, when Saitama describes his workout regime to other heroes, they are utterly baffled. In the world of super serious anime characters who have preposterous workout regimes, Saitama’s daily workout is, well, nothing. They think he must have some secret to have gotten to become the strongest hero in the world, but there is no secret — by doing what is popularly known on YouTube as the One Punch Man challenge, Saitama is able to defeat his enemies with only one punch with nearly God-like power. Yesterday, I started the One Punch Man challenge. The goal is to do 100 push-ups, 100 sit-ups, 100 squats, and 10 kilometers for 100 days. It doesn’t seem like it’s hard for an anime superhero, but yesterday and today, my whole body has been in pain. I thought the most difficult part of the workout was the run, and the least difficult was the squats. However, I failed to recognize that the squats were the most difficult because I simply haven’t been very used to them. As a runner who runs every day, the run has actually been the easiest part, but I dread the soreness I feel through my whole lower and upper body every day. The challenge, for me, started off as a joke. My roommate mentioned the One Punch Man challenge while we were watching the show on TV, and I mentioned the challenge to a group chat of my friends from college. We decided that seven of us were going to do the challenge and see the results. However, I must note that I’m the only person in the group doing the 10 km run every day. Everyone else is only running 20 miles a week or replacing the running with pull-ups, and we were all runners during college! Needless to say, I feel a bit cheated. But I will endure. This morning, I did a nine-mile run, and I still have squats, push-ups, and sit-ups to do. Yesterday, I did a seven-mile run, and this is only the beginning. There are only pros and no cons to being in the fitness challenge. First of all, it’s a great way to, well, grow fitness. As a runner, I have always been very slim. My family tells me often that they worry that I’m too skinny because I run so much, so putting in core, pecs, and shoulder exercises will help me work out other parts of my body besides just cardio. During COVID, it’s also a great way to socialize and build camaraderie with friends. I wanted to go to Atlanta and other cities to see my friends this year and especially this summer, but with a pandemic, travel has been mostly restricted. According to Stephanie Mansour at NBC News, 30-day challenges have been the talk of the town across social media for quite some time now. People do 30-day plank challenges, 30-day cardio challenges, and 30-day squat challenges. They have their pros and cons in terms of fitness and health, but what often goes unstated in terms of these challenges is the social camaraderie aspect. According to the American Psychological Association, loneliness has negative effects on health that parallel smoking 15 cigarettes a day or having alcohol use disorder. I consider myself an extrovert, and the lack of social interaction outside three or four people in my daily life is getting to me. 
With the rise of COVID cases in my state and all over the United States, it's clear we're in this for the long haul, and there is no way I will be out on the streets meeting dozens of new people in public gatherings any time soon. Sure, a humorously brutal fitness challenge probably isn't most people's idea of socializing, but it is a change from the norm. I don't expect an eight-pack and a huge chest by the end of the challenge, but I do expect to be more fit than I was at the beginning and to connect not only with my friends, but with a litany of One Punch Man fans all over the Internet and social media. And I can't help but feel a Sisyphean, existential struggle in the challenge as well. When Sisyphus was cursed by the Gods in Greek mythology, he was given the task of pushing a rock up a mountain, only for it to roll back down every day. It would be akin to torture, with Sisyphus never seeming to make progress or headway, never being able to escape his punishment. Perhaps it isn't always punishment. Think about it — you finish the workout on a given day, only to have the rock roll down the mountain the next day, all over again. The rock doesn't change. The mountain doesn't change. But you do. You get stronger. You build the support network you need to get through it. You develop the tools and wisdom to find ways to succeed. And that's our long struggle during this pandemic in short — a Sisyphean struggle that doesn't seem to end, where the rock doesn't change, where the mountain doesn't change. The pandemic doesn't seem to end any time soon. The only thing that changes is us, and the One Punch Man challenge is a larger analogy for the fact that life doesn't get easier. We just get stronger.
https://medium.com/the-apeiron-blog/the-fun-fitness-thats-helping-me-socialize-during-covid-42f2c9538028
['Ryan Fan']
2020-11-12 11:24:20.760000+00:00
['Fitness', 'Philosophy', 'Spirituality', 'Culture', 'Coronavirus']
The OBEY sign from They Live
The OBEY sign from They Live The COVID-5G tower conspiracy theory and the typography in John Carpenter's movie It is a testament to the seriousness of the COVID crisis that both Facebook and Youtube have banned conspiracy theorist David Icke, despite his immense popularity. For platforms like Facebook and Youtube, engagement (the depth and time an average user spends viewing their content) means more advertising $$$. Mark Zuckerberg, in particular, has stated that Facebook should not fact-check politicians' claims. While this is seemingly in the service of free speech, there is a slight conflict of interest here. Fake, outlandish clickbait and outright lying are a far better source of revenue than boring facts. Our brains are hard-wired to engage more with nonsense than with real information — the same way junk food tastes better than a salad. Google's Youtube is equally guilty of preferring popular content over social purpose. The platforms, as a consequence, are awash with all kinds of false information, xenophobia and hate speech. The only God here is money. David Icke had almost a million Youtube subscribers, and his videos discussing various conspiracy theories are estimated to have crossed 30 million views across social media. His last video, about getting kicked off Facebook, had received 120,000 hits before Google pulled the plug on him, too. Icke's theory fell afoul of new rules which specifically disallow content that claims the virus does not exist or offers false, medically unsubstantiated advice about the virus. Icke is responsible, along with other odd-balls and cranks, for spreading the idea that coronavirus symptoms were somehow caused by radiation from 5G telecommunication towers. In the UK, Icke's home country, there have been over 77 arson attacks on phone tower masts and counting. This theory is now spreading in the United States and other parts of the world. Icke, probably the world's premier conspiracy theorist, is no stranger to controversy. His book, The Truth Shall Set You Free, was even recommended by Pulitzer-winning author Alice Walker in The New York Times. The problem, however, is that Icke cites The Protocols of the Elders of Zion repeatedly throughout his work, and with approval. Unfortunately, no matter how amusing or even insightful Icke's conspiracy-mongering can be, The Protocols of the Elders of Zion is the primary text of anti-semitism. More important still, it is a completely fraudulent, forged document. Its principal purpose is the targeted defamation of Jews in order to prejudice public opinion against them. From Russia to Germany, this "book" has helped fan the flames that have led to murders, pogroms and genocide. In fact, The Protocols is the prescribed textbook of anti-semitic feeling. It has inspired everyone from Adolf Hitler to the grass-roots skinhead on his way to his local synagogue with a petrol bomb in hand. Icke's ban, along with his work, is generally discussed in the context of free speech and its limits. However, the matter, as we will see in this essay, is much more complex than a simple disagreement over what kind of material should be allowed to circulate in the public domain. The vulgarisation of high art The Protocols of the Elders of Zion is plagiarised from an earlier work by a French writer named Maurice Joly. The best book on this subject is a graphic novel by Will Eisner. © Will Eisner Maurice Joly was a 19th-century lawyer and polemicist who was particularly angry with the government of Napoleon III.
Despite a ban on publications critical of the monarch, Joly liked to write exactly those kinds of pamphlets. While a monarchist and a conservative, Joly felt Napoleon III was a tyrant who was not respecting the limits placed on his powers by the French constitution of that time. Most of his books, put out by subversive editors, were destroyed. In a review of Umberto Eco’s The Prague Cemetery, Rebecca Newberger Goldstein writes: The story of the “Protocols” is rendered even stranger by the labyrinthine history of plagiarisms and hoaxes that went into its making, and it is this astounding back story that Eco fictionalizes. One of the plagiarized sources is an 1864 French political pamphlet, satirizing Napoleon III, entitled “Dialogue in Hell Between Machiavelli and Montesquieu.” The author, Maurice Joly, who spent 15 months in jail for his efforts, attacks the legitimacy of the emperor by showing plotters in hell undermining a rightful regime. Roughly two-fifths of the “Protocols” so closely parrots Joly’s wording that there is little doubt of the borrowing. Joly, in turn, had plagiarized a popular novel by Eugène Sue, “The Mysteries of a People,” which presented the schemers as Jesuits. These sources are predated by a late-18th-­century best seller, “Memoirs Illustrating the History of Jacobinism,” by the French cleric Augustin Barruel, who charged that behind the French Revolution lurked a conspiracy of Freemasons. In other words, conspiracy theory is not a new phenomenon. The idea of who is conspiring has changed over time: Jesuits, Freemasons, Jews or lizard-people aliens, depending on the political goals of the writer. Joly’s narrative of a conspiracy targeting Napoleon III’s administration was gutted and filleted by Russian propagandists, and then deployed at the Jewish community. The point here is the vulgarisation of intellectual criticism. Joly was trying to hold the government of his day to account, and promote a just society. Instead, his work helped motivate pogroms and genocide. The medium is cable television Similary, Icke’s ideas about an alien-lizard race invading and living amongst humans as their rules is hardly original. To my mind, the plot seems a straight lift from the John Carpenter film, They Live. In fact, Icke has spoken in glowing terms about the film. Obviously, when I first saw They Live as a teenager, all I saw was a fun action-movie. Proceedings involved a large Caucasian male. He had a blonde mullet, wore a pair of prominent shades and a lumberjack shirt. He proceeded to blow out the brains of the alien-monsters with a pump-action shotgun. Moreover, I had found my beloved Duke Nukem 3D’s source for the line: “here to kickass and chew bubble gum, and I’m all out of bubble gum.” However, almost a decade later, I discovered, to my considerable delight that Slavoj Zizek uses They Live as a principal text to explore the concept of ideology and how it shapes politics and culture in his documentary, The Pervert’s Guide to Ideology. Ideology is an important concept in this documentary because the subject is, literally, the concept of ideology (look carefully, it’s in the title). 
Zizek summarises the plot of They Live and explains his theory of ideology (directed by Sophie Fiennes) Those interested in political theory that describes how elites use ideology to justify social inequality can consult Thomas Piketty's latest book: The signage typefaces in They Live: A background As Toshi Omagari puts it: "They Live is among the best films that use typography for storytelling." Many films, particularly of the socialist realism genre, attempt to educate us about the silly but effective tricks elites use to con the masses. These movies can be boring as hell. Where Carpenter succeeds, and the empanelled writers of Pravda failed, is the inventive trick he uses to talk about what Karl Marx would call "class consciousness". When the magic sunglasses are worn, the signal broadcast by the aliens ceases to have an effect. This reveals not just the aliens, who in this world are ordinarily camouflaged by the broadcast and walk freely amongst humans without being noticed, but also the propaganda they use to keep humans blinded and compliant. With the sunglasses on, money is now seen to be simply a piece of paper that says: "THIS IS YOUR GOD" Similarly, signage and billboards issue other commands. A set of directions states: "NO INDEPENDENT THOUGHT". A sign that you would normally expect to say "job vacancy" says "CONSUME". The most iconic of these commands is now the "OBEY" advert in a magazine. John Carpenter's signature typeface for credit titles in his movies is Albertus. Rumsey Taylor's linked article refers to an amazing coincidence. There is a road in London that has a sign that says "John Carpenter Street". It is in Albertus because that is the official typeface of the City of London council. As the article puts it: "John Carpenter was a 14th century figure and has no connection with the director, and the films precede the Albertus branding of the Corporation." A curious story emerges from the message boards and comments on articles discussing the typefaces in which the commands are printed. STAY ASLEEP and THIS IS YOUR GOD are set in Tempo. CONSUME is set in Twentieth Century, as is NO INDEPENDENT THOUGHT (but in an ultrabold weight). But what of OBEY? OBEY is now the most famous of all of They Live's alien command advertising and signage. The sign is now ubiquitous on the clothing of skateboarding teenagers in city squares around the planet. They Live: a brilliant piece of print art from Roughtrade Books OBEY is seen widely principally because the sign has been expropriated into a clothing brand by street artist Shepard Fairey. OBEY clothing claims to be "manufacturing dissent" (a wink, scholars trained in Critical Theory 101 will recognise, to Noam Chomsky). Fairey is also responsible for disseminating the "Andre the Giant Has a Posse" sticker and logo into the streetscape, where it too is seen in many places. Artists like Fairey are "subvertisers" — artists who are interested in hijacking the messaging idioms and platforms of mainstream advertising, and conducting media experiments, to transmit political ideas. One of the most creative examples is "Led by Donkeys", a group of British subvertisers who regularly punk their government. Toshi Omagari's article remains inconclusive on the exact typeface used to make the OBEY sign. Commenters have suggested that Classroom JNL by Jeff Levine is a good fit. However, Levine only released the typeface in 2009. They Live was made in 1988.
A commentator, Florian Hardwig, offers the most plausible solution, quoting Levine: “A set of old die-cut cardboard letters and numbers used by teachers directly on bulletin boards or for tracing was the inspiration for Classroom JNL. In turn, these letters take their cue from typefaces such as Franklin and earlier wood type designs.” So, this clears up the mystery. The prop makers of They Live, film school graduates, must have used a fairly standard and commonly available set of film school stencils to make the OBEY sign — the same ones Jeff Levine used to make his typeface. It is easy to forget, in the age of computers, that typefaces can be hand-drawn or made from stencils. Propaganda and technology To offer an attractive service to advertisers, Google and Facebook build up a psychological profile of their users. In order to do so, they collect giant amounts of data about each individual. Apart from the damage done to an individual, by stealing their attention and focus, this would not be a wide social problem. However, this psychometric profile has become of utmost interest to a very specific kind of major advertiser: political strategists. Elections tend to be decided by thin edges. Political parties are mostly firmly established in certain heartlands. What decides an election are “bell weather” places: places which where a floating population of voters can shift the overall result. Political strategists are always on the lookout to unlock issues that can motivate people to vote (or, in Southern America, prevent people from voting). This is why they advertise heavily. But, as Brexit and Donald Trump’s successful run to be elected to the White House show us, a very new kind of vote-bank has been created: the conspiracy theorist. The psychometric data that Facebook has collected allows special interest groups to identify, for instance, people who believe (because of the tabloids) that Europe wants to prevent the English from having bananas with curves. They have weaponised this group into a vote-bank. Social Media and Big Data’s neat tricks have facilitated dangerous ideas to influence policy. This includes preventing children from being vaccinated and inoculation or climate change denial. We are seeing a serious deterioration of political rights and basic freedoms. In America, women are on the cusp of losing a once constitutionally guaranteed right to privacy and maternal healthcare (including, but not restricted to, family planning and abortion), while the British have lost their freedom to work and settle in Europe. Conspiracy theory was once a harmless, somewhat niche activity. An almost genteel pastime. This group, the tin-foil hat wearers, were a kind of helpless people who belonged to a Philip K. Dick milieu, and spent their time fixated on things like cryptozoology and crop-circles made by aliens. Before a professor at the notoriously puritan St. Xavier’s Mumbai filed a successful petition asking for a legal ban, there was actually nudity to be found on late-night TV in India. One of these programs for which I would stay awake was on TV6 Mockba — a Moscow-channel that would broadcast Playboy’s Red Shoe diaries starring David Duchovny (dubbed over in thick Russian). I had no idea when the broadcast would begin or if it would happen at all. So, often I ended up switching channels to Star Movies (beamed up from Singapore). Sandwiched between Shaw Brothers kung fu movies, They Live seemed to be on late-night loop. 
That most ultra of ultra conservative-capitalist businesses, the Rupert Murdoch media conglomerate was broadcasting a film about how broadcasting and the media use technologies of mass hypnosis so that the conservative-capitalist-media axis can dominate and rule. Some humble, low-level Murdoch cassette jockey was using his corporate gig to broadcast John Carpenter’s message to the freethinkers and prospective delinquents of Asia. Our man, a mole embedded deep within the structures of enemy’s bureaucracy, was getting out a message to us. Icke will not be the last false advertiser who guts and fillets powerful political criticism — whether of Maurice Joly or John Carpenter — for his own political message. That is why Slavoj Zizek argues that ideology can never be beaten, it can only be changed.
https://medium.com/fan-fare/the-obey-sign-from-they-live-bea1aa107a27
['Neel Dozome']
2020-10-02 16:20:54.401000+00:00
['Design', 'Technology', 'Typography', 'Visual Design', 'Film']
7 Steps to Take If Your Social Following Is Inactive
Photo by Merakist on Unsplash In social media marketing, bigger isn’t necessarily better. It’s certainly tempting to chase after fluff metrics like follower counts — and who wouldn’t want a million followers? — but this rarely leads to meaningful results, such as increased site traffic, more conversions, and a more substantive brand reputation. Instead, you can measure your true effectiveness in terms of the engagement of your social media followers. How often are these people commenting on your posts? How often do they ask you questions, engage you in conversation, or mention your brand in their own posts? If your social media following is inactive — even if it’s large in size — you’ll need to take action to keep seeing a positive ROI. These seven steps should take you in the right direction: 1. Research the competition. Before you do anything else, research your competitors on social media. Note what they’re doing the same, what they’re doing differently, and how much engagement they see on a regular basis. You might find that a difference in tactics is responsible for an increase or decrease in social engagement. For example, do they use a different platform than you do? Do they rely on more images and videos? Do they have a sharper, more identifiable tone? Do they have longer, more detailed posts? Take note of anything interesting here, and apply it to other areas as you see fit. 2. Identify any perceivable patterns and adjust. Take a look at some of your past engagement metrics, and see if there are any perceivable patterns on which you can base your adjustments. For example, track down the most popular post you can find on your social media accounts. What is it about this post that led to such an outlying course of popularity? Does this post feature especially detailed content? Visual content? Or is it the nature of the topic that stands apart from the rest. Similarly, you can look to your least popular posts to try and track down qualities that are especially unpopular. Then, adjust your editorial calendar to add or remove those qualities, respectively. 3. Respond more consistently. Someday, we might have suitably advanced AI to respond on behalf of your brand automatically (and in a way that passes the Turing test), but until then, you’re responsible for responding to your audience personally. In a personal conversation, if one person stops responding to the other’s comments, the other person will eventually grow bored and leave — the same principle holds true on social media. If you never respond to your followers or acknowledge their comments, they aren’t going to keep engaging with you. Make an effort to respond to every poster you can, and do so in a friendly, accessible way. You’ll be amazed at the difference this one addition could make. 4. Post more often. It could be that you aren’t giving your audience enough “meat” to elicit a reaction. One way around this is to develop more content for them. Though each platform has different standards when it comes to posting frequency, as a general rule, you’ll want to make at least one new post every day. Just don’t use this as an excuse to allow the quality of your content to slip — even though you’ll be posting more frequently, it’s still vital to maintain the level of detail, insight, practicality, and uniqueness that your audience has grown used to (or increase that level to new heights). 5. Invite participation. 
Maybe your followers aren’t participating because you haven’t given them a reason to — after all, nobody dances without a dance floor. Encourage your followers to participate with different kinds of content that naturally invite responses. For example, you could ask an open question to your users or start a debate or discussion. You could also use a contest or competition that demands some form of response to bribe your users into taking more action. 6. Experiment with new content. It’s free to post content on your brand’s page, so post as much of it as you’d like. If your audience isn’t biting with your usual lineup of posts, consider trying an alternative angle. There are dozens of different formats and mediums to tinker with; for example, you could start an interview series or podcast, you could create new infographics, or you could use whiteboard drawings in an engaging video series. There’s nothing right or wrong here, so try as many different angles as you can to get a big data set and find out what works best. 7. Build and use personal brands. People are far more likely to trust and engage with human beings than they are with corporate brands. That’s why personal brands are such a strong tactic for engagement. Use leaders within your company (or even other employees) to circulate your corporate branded posts and engage with users directly — you’ll see far higher levels of engagement almost immediately. Every brand is different, and your target audience won’t respond to these tactics in the same way that someone else’s audience might. Accordingly, you’ll need to temper your expectations and treat all of these steps as small experiments; if they work, increase your efforts in that area. If they don’t, move onto a new strategy. Between these steps, you should find at least one tactic that works to improve your brand engagement; if you don’t, it may be time to seek professional counsel.
https://medium.com/swlh/7-steps-to-take-if-your-social-following-is-inactive-249e38de1095
['Jayson Demers']
2020-05-08 09:47:12.786000+00:00
['Marketing', 'Content Marketing', 'Social Media Strategy', 'Online Marketing', 'Social Media Marketing']
How to remove Multicollinearity in dataset using PCA?
How to remove Multicollinearity in dataset using PCA? Address Multicollinearity using Principal Component Analysis Photo by Alvaro Reyes on Unsplash Multicollinearity refers to a condition in which the independent variables are correlated with each other. Multicollinearity can cause problems when you fit the model and interpret the results. The variables of the dataset should be independent of each other to avoid the problem of multicollinearity. In this article, you can read why multicollinearity is a problem and how to remove multicollinearity from a dataset using Principal Component Analysis (PCA). Why is Multicollinearity a Potential Problem? Multicollinearity inflates the variance of the estimated regression coefficients and can also affect the interpretation of the model, as it undermines the statistical significance of the independent variables. For a dataset, if some of the independent variables are highly correlated with each other, it results in multicollinearity. A small change in any of the features can then affect the model performance to a great extent. In other words, the coefficients of the model become very sensitive to small changes in the independent variables. How to handle Multicollinearity in data? To handle or remove multicollinearity in the dataset, we first need to confirm whether the dataset is multicollinear in nature. There are various techniques to detect the presence of multicollinearity in the data, some of them being: very high standard errors for the regression coefficients; an overall model that is significant even though none of the individual coefficients are; large changes in the coefficients when predictors are added; and a high Variance Inflation Factor (VIF) together with low Tolerance. In this article, we will see how to find multicollinearity in the data using a correlation matrix and PCA, and remove it using PCA. The basic idea is to run a PCA on all predictors; the ratio of the largest to the smallest eigenvalue, known as the Condition Index, will be high if multicollinearity is present. About the Data: For further analysis, the dataset used is the Diamonds dataset downloaded from Kaggle. This classic dataset contains the price (target variable) and 9 other independent variables for almost 54,000 diamonds. Preprocessing of the dataset: The dataset has 9 independent features and 'price' is the target class label. Before proceeding to statistical correlation analysis, we need to encode the categorical variables such as 'cut', 'color', and 'clarity'. (Image by Author), Left: Dataset before preprocessing, Right: Dataset after preprocessing Correlation Analysis: To find the Pearson correlation coefficient between all the variables in the dataset: data.corr(method='pearson') Methods of correlation: pearson (default), kendall, spearman. (Image by Author), Correlation heatmap of data From the above correlation heatmap, we can observe that the independent variables 'x', 'y', 'z', and 'carat' are highly correlated (Pearson coefficient > 0.9) with each other, hence we conclude the presence of multicollinearity in the data. We could also drop a few of the highly correlated features to remove multicollinearity from the data, but that may result in loss of information and is also not a feasible technique for data with high dimensionality. The idea is instead to reduce the dimensionality of the data using the PCA algorithm and drop the components with low variance.
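To make the detection step concrete, here is a minimal Python sketch, assuming the Kaggle diamonds CSV has been loaded into a DataFrame called data with the columns the article mentions; the file name, the LabelEncoder choice, and the VIF rule of thumb are illustrative assumptions, not code taken from the post.

import pandas as pd
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Load the diamonds data (file name assumed) and encode the categorical columns
data = pd.read_csv("diamonds.csv")
for col in ["cut", "color", "clarity"]:
    data[col] = LabelEncoder().fit_transform(data[col])

X = data.drop(columns="price")  # independent variables only

# 1) Pearson correlation heatmap: pairs with |r| > 0.9 flag multicollinearity
sns.heatmap(X.corr(method="pearson"), annot=True, cmap="coolwarm")

# 2) Variance Inflation Factor: values above roughly 10 are the usual warning sign
vif = pd.DataFrame({
    "feature": X.columns,
    "VIF": [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
})
print(vif.sort_values("VIF", ascending=False))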
Handling Multicollinearity using PCA: Principal Component Analysis (PCA) is a common feature extraction technique in data science that employs matrix factorization to project the data into a lower-dimensional space. To extract features from the dataset using the PCA technique, we first need to find the percentage of variance explained as the dimensionality decreases. Notation: λ is an eigenvalue, d is the number of dimensions of the original dataset, and k is the number of dimensions of the new feature space. (Image by Author), Plot of % cumulative variance explained vs the number of dimensions (Image by Author) From the above plot of np.cumsum(pca.explained_variance_ratio_), the total variance captured by the 1st principal component is 0.46, by the first two components 0.62, and by the first 6 components 0.986. Looking at the individual variances (eigenvalues), the 1st component captures 4.21, the 2nd 1.41, the 3rd 1.22, and the last 0.0156. Since 98.6% of the total variance is captured by the first 6 components alone, we keep only 6 PCA components and compute a correlation heatmap to observe the multicollinearity. (Image by Author), Correlation heatmap of the PCA-transformed data From the above correlation heatmap, it can now be observed that the new features are uncorrelated with each other; the strong correlations seen earlier between 'x', 'y', 'z', and 'carat' have disappeared. Hence, by reducing the dimensionality of the data using PCA, 98.6% of the variance is preserved and the multicollinearity in the data is removed. Implementation: Click on the Google Colaboratory below to get the full code implementation. Conclusion: There are various methods to remove multicollinearity from a dataset. In this article, we have discussed the PCA dimensionality reduction technique, which removes multicollinearity from the dataset while preserving the maximum variance. One disadvantage of this technique is that the interpretability of the original features is lost. References: [1] Multicollinearity in Regression Analysis: Problems, Detection, and Solutions: https://statisticsbyjim.com/ [2] Eight ways to detect multicollinearity: https://www.theanalysisfactor.com/
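And a matching sketch of the removal step, continuing from the encoded feature DataFrame X in the detection sketch above; the six-component cut-off simply mirrors the explained-variance numbers reported in the article, not a universal rule.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# X is the encoded feature DataFrame from the detection sketch above
X_scaled = StandardScaler().fit_transform(X)  # PCA is sensitive to scale

# Inspect how much variance the components explain, cumulatively
pca_full = PCA().fit(X_scaled)
print(np.cumsum(pca_full.explained_variance_ratio_))

# Keep the first 6 components (about 98.6% of the variance in the article's run)
X_pca = PCA(n_components=6).fit_transform(X_scaled)
components = pd.DataFrame(X_pca, columns=[f"PC{i + 1}" for i in range(6)])

# Principal components are orthogonal, so their pairwise correlations are ~0
print(components.corr().round(3))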
https://towardsdatascience.com/how-to-remove-multicollinearity-in-dataset-using-pca-4b4561c28d0b
['Satyam Kumar']
2020-12-19 19:23:24.321000+00:00
['Machine Learning', 'Artificial Intelligence', 'Education', 'Data Science', 'Data Scientist']
You’re In the Way
It sucks to feel like life isn’t working in our favor or that we aren’t able to achieve our goals. It’s easy to find people and things to blame when we find ourselves falling short. Of course we all have barriers in life that interfere with our progress (some more than others, privilege check). We may be slowed down, we may be blocked, and we we may even be stopped by these barriers. Acknowledging them is one thing, creating excuses out of these barriers is another. For example; you left for work at the last minute you could before the possibility of being late but now theres traffic and the traffic ends up making you 15 minutes late. You could say it was the traffic because in reality, the traffic did slow you down! But if you really want to be honest with yourself, you’ll admit you should’ve left earlier. More times than not, it’s that simple, we’re in our own way. Shifting blame and making excuses are much easier alternatives than accepting we may have played a hand in our own shortcomings (ouch). Putting forth effort in understanding this concept allows us to take our power back. It allows us to reframe our experience. Taking accountability for what we did or did not do when it comes to the course of our lives can be tough. It takes integrity to admit we are in our own way. It is an enlightening realization but more importantly a humbling one. We shift our language from “you, they, it, or that” to “me or I”. Circling back to ourselves puts the ball in our court. It becomes our move, our go. Accountability to self means acknowledging the role we play(ed). We can focus on our own actions this way and assess what needs to change. Accountability encourages change but to change we need discipline. This has been a word I’ve struggled with for a long time, unfortunately. This year it is an action I am committing to no matter how often I may struggle with it. Discipline creates stability. We need it to be consistent in getting from one destination to the next. Imagine playing a video game and its time to pick your fighter, you have the option to choose discipline or motivation (mind you these are characters lol). Which do you choose? Motivation sounds great because we think of enthusiasm, desire, and initiative. But discipline will take the win. Why? Because discipline reminds us that slow and steady wins the race. Discipline remains consistent, unwavering, and dedicated. To bring it back to ourselves, we aren’t always going to feel motivated or inspired but we can always be disciplined despite that. Just think to yourself for a moment, what do I want? Am I living a life I love? We need to know the answers to these questions. How do we view our goals and desires in life? What do we think of them? If the answers aren’t what we want them to be we need a plan to change that. Thinking and wishing of change is just that, thinking and wishing. Creating a plan means taking steps forward, taking action. One step forward means being one-step closer than we were. Planning makes our thoughts real. It creates possibility. Once we figure out those answers and create our plans we need to promise ourselves not to settle. If we’ve decided, why settle on anything less? Commit to the decision, to the dream, to the goal. We’re not accepting ‘almost, close, or this will do’. Selling ourselves short by settling for less leads to mediocrity (*shrugs*). I don’t know too many people who enjoy being mediocre by choice. Doubt has a way of bullying us into the settling corner though. 
It has a way of making us believe we don’t deserve, like we’re asking for too much, or aiming too high. It usually brings it’s other bully friend fear along too. But 10/10 doubt and fear aren’t as big and bad as they convince us they are. They’re wrong. Faith will prove doubt wrong every time. If we believe in ourselves, our abilities, and capacity to create that which we desire; doubt shrinks. Fear shrinks. They may even disappear. Then we’re only left with optimism and opportunity. Life has a funny way of reminding us of what’s important, or what should be, rather. It’s honestly too short to do anything less than what we absolutely love. We deserve to do what fulfills us and to live our best lives! But, we have to get out of our own way. We have to take accountability, create plans, be disciplined, and be diligent in creating the realities we desire. It’s our world. We take our power back when we shift our perspective and really get honest with ourselves. Don’t let life pass you by and have you pointing every which way. The happenings of life will be out of our control often, no doubt, but choosing to take initiative in how we maneuver it is always in our power.
https://dashayna-b.medium.com/youre-in-the-way-96a4d676cc96
['Dashayna Brown']
2020-02-13 02:17:34.023000+00:00
['Self-awareness', 'Discipline', 'Change Your Life', 'Accountability', 'Goal Setting']
Effective Kotlin: Item 23 — Prefer class hierarchies to tagged classes
Tagged classes, as discussed in item 23 of Joshua Bloch's famous book, Effective Java, are classes that contain a tag field indicating the flavour of the instance. For example, a Shape class may have a tag field to denote whether it is a Rectangle or a Circle, with methods such as area implemented using switch statements. Often these kinds of classes can be re-written as a class hierarchy, with an abstract parent containing any shared functionality and abstract methods. Kotlin is no different to Java in this respect, and the above Shape and Rectangle classes may look as follows:

abstract class Shape {
    abstract fun area(): Double
}

class Rectangle(val length: Double, val breadth: Double) : Shape() {
    override fun area() = length * breadth
}

Of course, there is more to creating a class in Kotlin. One thing to consider is whether to use functions or properties. Functions are good for denoting side effects, but with the above code the area function exposes our current state and so would be better as a property. Igor Wojda 👋🤖 talks about "should I define Function or Property" in more detail. Given we aim to minimise mutability (item 17), in an immutable class the calculated area value would never change. If the calculation were costly, it could also be delayed until the value is first used, rather than happening at object creation, with the lazy delegate behaviour:

abstract class Shape {
    abstract val area: Double
}

class Rectangle(length: Double, breadth: Double) : Shape() {
    override val area by lazy { length * breadth }
}

Another thing to consider in Kotlin is whether to use a sealed class, which represents a restricted class hierarchy; they explicitly prohibit inheritance (item 19). As all subclasses are known, when statements used as expressions become exhaustive without the need for an else clause, and indeed the compiler will complain about missing branches should you change the hierarchy later:

fun hasFourSides(shape: Shape) = when (shape) {
    is Rectangle -> true
    is Circle -> false
}

Note that in many instances code with when statements would be better written using polymorphism. One caveat of using a sealed class is that you must declare all subclasses of a sealed class in the same file as the sealed class itself — not ideal for a complex class. Also, avoid sealed classes if you are designing for inheritance (item 19).
https://appmattus.medium.com/effective-kotlin-item-23-prefer-class-hierarchies-to-tagged-classes-de99d37f815a
['Matthew Dolan']
2018-09-03 19:26:49.476000+00:00
['Android', 'Effective Java', 'Android App Development', 'Design Patterns', 'Kotlin']
Presenting: The Donut Kids of California
In partnership with the Federal Reserve Bank of San Francisco, Pink Box Stories presents The Donut Kids of California, a narrative photo series highlighting the stories of children of independently-owned Cambodian donut shop owners who grew up in and around donut shops. These stories are more than a snippet of their — and for some of us, our — lives. They are stories of growing and running a small business, intergenerational support, social mobility, and the unique contributions of the Cambodian-American community to California’s economic fabric. Read the reflections of these eight donut kids on the Federal Reserve Bank of San Francisco’s website. Add your reactions here or on our Instagram @pinkboxstories and share this with someone who should know about this!
https://medium.com/pinkboxstories/pink-box-stories-presents-donut-kids-of-california-105477f507b4
['Michelle Sou']
2020-10-16 04:40:16.674000+00:00
['Cambodian', 'Photovoice', 'Donuts', 'California', 'Entrepreneurship']
Visualization with pydot for beginners
How about drawing graphs and trees programmatically? Yes, that's what we are going to learn in this post today. This post is for beginners who are very much interested in visualizations. So, today we'll be attempting to draw the above recursion tree programmatically with pydot. But before that, we need some basic information: i) pydot: an interface to Graphviz that can parse and dump the DOT language used by Graphviz, and is written in pure Python. ii) Graph: a non-linear data structure consisting of nodes and edges. The nodes are sometimes also referred to as vertices, and the edges are lines or arcs that connect any two nodes in the graph. More formally, a graph can be defined as "a finite set of vertices (or nodes) and a set of edges which connect pairs of nodes". iii) Tree Data Structure: a tree is a nonlinear hierarchical data structure that consists of nodes connected by edges. Tree data structure iv) Recursion Tree Method: a pictorial representation of the iteration method in the form of a tree where nodes are expanded at each level. In general, we consider the second term of the recurrence as the root. It is useful when a divide and conquer algorithm is used. Now, we'll set up the environment for Graphviz first. We can download the Graphviz binary from here. Then we have to add the Graphviz bin folder to the PATH environment variable. If you are having trouble, here is the installation guide. Install pydot: pip install pydot After installing pydot, let's try to draw some simple graphs. For that we can use any text editor of our choice, but I'll be using VS Code. The above code will create a simple graph like: Looks cool!!! Now, let's say we want to show direction from node1 to node2 and from node1 to node3. We just need to change a single line and it should work fine. And the output looks like: Now, we want to have multiple edges between the same pair of nodes. It can easily be done with the help of the following single line of code. The output looks like: How do we prevent another edge from being drawn if it is already drawn? Here is the output: Till now we have got some basic idea of drawing edges; it's time to explore how we can add nodes to a graph without edges. Here is the output of the above code: Now, is it possible to create two nodes with the same name? Here is the output of the above code: We've seen that we're only able to get a single node with the name "1". So, our nodes always need unique names, otherwise we cannot identify them uniquely to attach edges between them. However, we can give each node a label, which is what is displayed when rendered. So, what exactly is a label? It is the value that appears on the node, whereas the name is the actual value that identifies the node. Here is the output of the above code: Adding different colors to nodes and edges makes our graphs more attractive: Here is the output of the above code: Now it's time to wrap up the tutorial by drawing some really cool graphs. We'll try to draw some graphs found in Google image results: https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2018/09/Screenshot-from-2018-09-05-16-36-16.png Here is the output of the above code: The output looks like: At last, we'll try to visualize the recursion tree for the nth Fibonacci number. Updating the above code: And the output looks like: Looks cool!!! We'll be adding more nodes in upcoming posts. Keep following for more updates.
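Since the code gists embedded in the original post don't come through in this text version, here is a minimal sketch of the kind of pydot calls the walkthrough describes; the output file names, the add_edge_once helper, and the fib_tree helper are illustrative choices rather than code taken from the article, and rendering to PNG assumes the Graphviz binaries are on your PATH.

import pydot

# A simple undirected graph with a single edge between two nodes
graph = pydot.Dot(graph_type="graph")
graph.add_edge(pydot.Edge("node1", "node2"))
graph.write_png("simple_graph.png")

# Switching graph_type to "digraph" draws directed edges instead
digraph = pydot.Dot(graph_type="digraph")
digraph.add_edge(pydot.Edge("node1", "node2"))
digraph.add_edge(pydot.Edge("node1", "node3"))
digraph.write_png("directed_graph.png")

# Avoid drawing a duplicate edge by remembering which pairs were already added
seen = set()
def add_edge_once(g, a, b):
    if (a, b) not in seen:
        seen.add((a, b))
        g.add_edge(pydot.Edge(a, b))

# Node names must be unique, but labels control what is displayed;
# style/fillcolor/color attributes are passed straight through to Graphviz
colored = pydot.Dot(graph_type="digraph")
colored.add_node(pydot.Node("a", label="1", style="filled", fillcolor="lightblue"))
colored.add_node(pydot.Node("b", label="1", style="filled", fillcolor="lightgreen"))
colored.add_edge(pydot.Edge("a", "b", color="red"))
colored.write_png("labeled_graph.png")

# Recursion tree for the nth Fibonacci number: every call gets a uniquely
# named node, while the label shows which fib(k) that call computes
def fib_tree(n):
    tree = pydot.Dot(graph_type="digraph")
    counter = {"calls": 0}

    def fib(k):
        counter["calls"] += 1
        name = f"node{counter['calls']}"
        tree.add_node(pydot.Node(name, label=f"fib({k})"))
        if k < 2:
            return name, k
        left_name, left_val = fib(k - 1)
        right_name, right_val = fib(k - 2)
        tree.add_edge(pydot.Edge(name, left_name))
        tree.add_edge(pydot.Edge(name, right_name))
        return name, left_val + right_val

    fib(n)
    return tree

fib_tree(5).write_png("fib_recursion_tree.png")

The dictionary counter is just a simple way to give every recursive call its own node name, since pydot identifies nodes by name rather than by the label that gets rendered.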
https://tmilan0604.medium.com/visualisation-with-pydot-for-beginners-ca99c9dc530b
['Milan Thapa']
2020-10-16 16:30:50.732000+00:00
['Trees', 'Visualization', 'Data Structures', 'Pydot', 'Graph']
One-liner lambda expressions as function decorators: (ab)using Python 3.9’s new PEP 614 feature
The sole goal of this is to use (or abuse) one new feature that comes with Python 3.9: Relaxing Grammar Restrictions On Decorators (PEP 614). Since almost no one has mentioned it before, I decided to write a little article here. Of course, the way I use it is very probably not "Pythonic", and maybe shouldn't be used at all in your work, but we can still have some fun here, right? Apparently, PEP 614 is mainly designed for PyQt5, with which you can attach a button-clicking method as a decorator to another function to create button events. But the PEP also says "The decision to allow any valid expression". What does that mean? It turns out "any expression" should still return a decorator wrapper. It's just that you don't have to assign the decorator to a name anymore. I'll assume that you have the basic idea of what decorators are, and how to write them normally in Python. If not, you can take a look at Primer on Python Decorators on RealPython.com. Two Lambs, One Func Photo by Matt Seymour on Unsplash Is it possible to write a one-line decorator expression? The first thing I've tried is using two lambda functions:

@lambda func: (lambda *para: func(*para).upper())
def greet(name):
    return f'Hello, {name}!'

print(greet('Arthur Dent'))

Result:

HELLO, ARTHUR DENT!

This is a very simple decorator: it converts the result text to all upper case. The second lambda is the wrapper or closure that would be returned to replace the function greet. To reuse the decorator:

@shout := lambda func: (lambda *para: func(*para).upper())
def greet(name):
    return f'Hello, {name}!'

@shout
def reminder(name, thing):
    return f'Don\'t forget your {thing}, {name}!'

print(greet('Arthur Dent'))
print(reminder('Arthur Dent', 'towel'))

See? This is a good place to use the walrus operator (:=) added in Python 3.8. Result (you can see *para can cope with a different number of parameters too):

HELLO, ARTHUR DENT!
DON'T FORGET YOUR TOWEL, ARTHUR DENT!

Build a Logger Cabin Photo by Geran de Klerk on Unsplash You can write only one statement in a lambda, or can you? Let's abuse Python a bit more by using the or operator to add a print() function:

@lambda func: lambda *para: \
    print('Func called:', func) or func(*para).upper()
def greet(name):
    return f'Hello, {name}!'

print(greet('Arthur Dent'))

It returns

Func called: <function greet at 0x00000212EEFC38B0>
HELLO, ARTHUR DENT!

Since print() returns None (all Python functions without explicit return statements do so), the expression None or value still returns the value itself. This way, you can do some logging before running the decorated function. The one problem is: if you put print() behind the or operator, it won't be executed at all, because the first part of the logical expression, func(), would return a non-empty string (which is truthy), so there is no need for Python to check the second part. So it looks like you can only do things before the decorated function. Or…is it? Once Upon a Timer Photo by Fabrizio Verrecchia on Unsplash I really do want to do something after the decorated function as well, like timing the code. And, indeed, it is possible after all. This is what I came up with: use a list as the return value of the wrapper to execute a bunch of stuff. The list comprehension outside of it will filter out the only value we need.

import time

@lambda func: lambda *para: \
    [_ for _ in [
        print('Func called:', func),
        print('Start:', time.time()),
        func(*para),
        print('End:', time.time()),
    ] if _][0]
def calculate(n):
    x = 0
    for i in range(n):
        x += i ** n
    return x

print(calculate(3000))

Result:

Func called: <function calculate at 0x00000254CA9638B0>
Start: 1602050421.943357
End: 1602050422.5030253
13440703023871524924619199858289162761099130089897931730777...

However, I can't calculate the time duration in the decorator, because using walrus operators inside the iterable part of a list comprehension is illegal. So for the second version, I use the filter() function instead:

import time

@lambda func: lambda *para: \
    list(filter(lambda _: _, [
        print(f'Func called: {func}'),
        (start := time.monotonic_ns()) and False,
        func(*para),
        (end := time.monotonic_ns()) and False,
        print(f'Duration: {(end - start) / 1000000} ms')
    ]))[0]
def calculate(n):
    x = 0
    for i in range(n):
        x += i ** n
    return x

print(calculate(3000))

What a Frankenstein! But it works, and technically it's still one (very long) line. Now there are also two expressions in the list that don't use print(). After using := to assign the time data to the variables start and end, these expressions are combined with False so that their results (now Falses) will be dropped by filter(). Now we can calculate the time in the decorator! Result:

Func called: <function calculate at 0x0000024F9EE048B0>
Duration: 531.0 ms
13440703023871524924619199858289162761099130089897931730777...

Wrappering Up Photo by Nick Bolton on Unsplash Actually, if you just want to decorate a function or two with something very simple, using this lambda decorator expression may not be a really bad idea. After all, you otherwise need to write at least 4–5 lines to implement a basic decorator. RealPython.com also demonstrated that you can wrap up different decorators in a dictionary and select one of them with a key (see here). Although, you'll have to input the key before defining the function, and the function will be forever changed unless you redefine it again.
https://alankrantas.medium.com/one-liner-lambda-expressions-as-function-decorators-ab-using-python-3-9s-new-pep-614-feature-3b8e2603bdff
['Alan Wang']
2020-10-17 09:34:04.694000+00:00
['Decorators', 'Lambda Expressions', 'Walrus Operator', 'Python', 'Python Tricks']
Of Sunlight and Hope
Of Sunlight and Hope a sonnet Photo by Max Rovensky on Unsplash Bring me the sunlight’s kindly, warm embrace, Its mother-touch that meets me day by day To soothe the long and weary night away And wake me with its greeting on my face. Bring me its noontime lullaby, and trace Its gracious fingers past the fearful gray That haunts me day and night; and ray by ray Return my heart’s lost hope to its own place. Remember that my life is not so long As is the sun’s, that hope will fail indeed Some bitter night unless the sun and I Make cause together, find the daylit song Within its rays. Meanwhile my simple creed: That as the sun returns, hope shall not die.
https://medium.com/sonnetry/of-sunlight-and-hope-a6df7769c33d
['A. Christine Myers']
2020-03-12 02:02:04.044000+00:00
['Mental Health', 'Sonnet', 'Poem', 'Poetry', 'Hope']
Dark Matter and the Frontier of EUV Astronomy
Dark Matter and the Frontier of EUV Astronomy How the discovery of dark hydrogen provides a mundane (and profound) resolution to the Dark Matter problem. Dark matter detection from gravitational lensing. Illustration by Matt Schmidt. In 1933, Caltech astronomer Fritz Zwicky noticed that the galaxies within the Coma cluster were orbiting one another too quickly. Much too quickly. In the solar system, the orbit of an object is related to its mass and its velocity. If it is moving too fast, like the asteroid Oumuamua, the orbit becomes hyperbolic and the object leaves the solar system. If it is moving too slowly, it falls into the sun. The same rules apply for stars in a galaxy, and galaxies within a galactic cluster. Based on the visible luminosity of the Coma galaxies, they should have been orbiting one another at about 80 km/s. Instead, they were moving over 1–2,000 km/s. He was led to speculate: “dark matter is present in much greater amount than luminous matter.” There was more stuff out there, and it was ‘dark.’ (Or, if the six-year-old Calvin had naming rights, it could have been the “invisible omnipresent lurking mass of doom.”) Sinclair Smith found the same thing after studying the Virgo cluster. He theorized that the missing mass was contained in large clouds of gas that formed halos around galaxies. While small inconsistencies between theory and experiment is acceptable and even encouraging, huge inconsistencies make us groan. It means there are likely a number of different possible ways we are wrong; or a combination of ways that we are wrong. Most astronomers completely ignored the problem. Fifty years went by before there was serious discussion in the scientific community. Meanwhile the evidence continued to accumulate. J. H. Ort, studying the rotation of the Spindle galaxy (NGC 3115) found that if you look above the equatorial plane, the starlight tapers off dramatically (by a factor of ten) but the mass, judging from the velocity of stars, holds about constant. At the outer edge of the galaxy, the mass was 250 times what he was seeing from luminous matter. In 1964, Vera Rubin and W. Ford began studying Andromeda, our close neighbor in space (only 2.5 million light years). Ford had invented a new spectrograph that allowed them to measure velocity with a resolution 10 times better than before. By 1968, they were able to plot the angular velocity of the stars, from the core to the galactic edge of Andromeda. They expected to find a diminishing curve. Instead, the line was mostly flat. The stars in the metropolis of Andromeda’s central bulge, those in the suburbs, and those in the rural outskirts were all moving at the same velocity. Even those stars at the rim of the visible disk didn’t show any sign of slowing down. Radio observations confirmed the data. Perhaps it was the same problem Zwicky had seen in the Coma Cluster, Smith with the Virgo Cluster, and Oort with the Spindle Galaxy. Over the next decade, every galaxy studied had a flat rotation curve. Astrophysicists ran simulations of galaxies to study their motion. The early conclusion: if our theory of gravity is correct over scales vastly larger than our own solar system, the whole Andromeda Galaxy is surrounded, submerged, and stabilized by enormous halos of unseen matter, and these get more dense as you move outward from the galactic core. This dark matter represents not just some of what is out there, but the overwhelming majority of matter in the universe — perhaps 10 times what can be seen. 
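To see why a flat rotation curve points to unseen mass, it helps to write out the Newtonian relation behind the argument; this is the standard textbook result, not a formula given in the article. For a star of mass m in a circular orbit of radius r around an enclosed mass M(r),

\frac{G\,M(r)\,m}{r^{2}} = \frac{m\,v^{2}}{r}
\quad\Longrightarrow\quad
v(r) = \sqrt{\frac{G\,M(r)}{r}}

If the luminous matter were all there is, M(r) would stop growing at the edge of the visible disk and v would fall off as 1/\sqrt{r}. A flat curve, v(r) \approx \text{const}, instead forces M(r) \propto r: the enclosed mass keeps growing with radius, which is exactly the dark-halo inference described above.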
Astronomers received this possibility with incredible hostility. Some thought that Rubin and Ford were ruining their careers by pursuing the problem. Dark matter became the source of arguments at conferences. But eventually astronomers had no choice but to accept the overwhelming evidence that there was a serious problem. With a sigh, they began to theorize.
https://medium.com/discourse/dark-matter-and-the-frontier-of-euv-astronomy-460f92d6ca84
['Brett Holverstott']
2019-11-13 21:56:29.987000+00:00
['Astrophysics', 'Dark Matter', 'Physics', 'Hydrogen', 'Science']
How to Be More Stable, Flexible and in Control of Your Life
Agency refers to the feeling of control over actions and their consequences. It’s the feeling that we’re in the driver’s seat. (Dr. James W. Moore, University of London) Successful people make decisions, even in difficult times, because they have what’s called a high sense of agency. You, too, can cultivate this valuable and life-changing skill. I get that it’s hard to make any decision in 2020. Like most people, you feel out of control because you don’t know what the future holds. We’re all now staring 2021 in the face and asking, what will 2021 be like? But is that the right question to be asking? Shouldn’t we instead be asking how can I make 2021 a great year for me? As I wrote in How to Use Strategic Ignorance to Increase Focus and Win in Life, 2020 was one of my most productive years ever because I: Decided what was essential to me; Focused on what I could control; and, Ignored self-defeating distractions. You also can achieve this level of success in 2021. How? Employ what psychologists call a high sense of agency. Let me tell you how it works. Focus on the good, throw out the trash A wise person is hungry for knowledge, while the fool feeds on trash. (Proverbs 15:14, NLT) Gosh, isn’t that proverb true… there’s an endless abundance of trash to fill our tummies. We’re continuously bombarded by people and organizations fighting for our attention, seeking to advance their agendas and their version of the truth. These forces want you to believe that you need them and that without their precious help, you can’t succeed. Do you know what they’re doing? They’re stealing your sense of agency. They’re robbing you of your sense of control. They’re taking your confidence to make decisions about your future. Don’t let this happen! Albert Bandura, a Professor at Standard University and a social cognitive theory pioneer, said there is a socially embedded interplay between personal agency and environmental influences. Dr. Bandura then goes on to say, “Humans…are producers of their life circumstances, not just products of them.” There it is, plain as day. You can produce your life’s circumstances, or you can let life’s circumstances produce you. But how do you manage the dynamic tension between your environment and what you can control? Control is an illusion Studies have shown that the healthiest people have agency because they’ve developed confidence and competence, states Mary C. Lamia, Ph.D., in Your Sense of Agency, are You in Control of Your Life? Successful people with a high sense of agency focus on what they know. They understand that they cannot control external events, and they cannot control other people. Think about it; you usually feel defeated when you try to control something you have no business controlling. Perhaps you’ve tried to influence people or events, with no success. As a result, you end up feeling discouraged and depressed. The truth is, you can’t control situations outside of yourself, so don’t do it! Your efforts will simply result in misery. Instead, says Dr. Lamia, you increase your sense of autonomy and control by: Setting reasonable goals, Increasing interactions with friends, Building a community to help you, Improving your health, and Accomplishing new challenges. Just picture for a moment the compounding effect on your life if you sat down for five minutes every morning and committed to accomplishing something from Dr. Lamia’s list above. I’m not talking a laundry list of to-dos, that’s also self-defeating, just one or two actions in each of the above areas. 
You want to cultivate agency in your life. Research shows that agency will make you more: Psychologically flexible and stable, and Competent and confident in uncertain or difficult situations. Your life is your choice. Make decisions to influence your future. It’s up to you. Be wise and successful Identify a reasonable goal you want to achieve this week and do the following:
https://medium.com/illumination/how-to-be-more-stable-flexible-and-in-control-of-your-life-e87719d029df
['Greg Longoria']
2020-11-19 14:18:20.866000+00:00
['Personal Growth', 'Focus', 'Self Improvement', 'Self', 'Productivity']
Music as Social Fabric
Music as Social Fabric Designing Social Tools that Make Us More Expressive People gather at a park in Beijing to sing together. Music-making is something of an universal social medium— it has been a part of every society in human history. (image: Laurel F. | Creative Commons) “People who make music together cannot be enemies. At least while the music lasts.” — Paul Hindemith Let’s face it. The “social” part of tech-mediated social media still sucks. Developers tend to reduce the functions of social tools to news feeds, photos, video sharing, likes, and comments. It’s much harder to consider the lasting impact of the activities, or to design for the sheer experience they foster. A 2017 study (from Mediakix) found the average person, at the current rate, will spend more than five years of their lives on social media. That is literally more time than it takes to go to college—more time than you will spend eating in your E.N.T.I.R.E. L.I.F.E.! This sheer amount of time—of life!—demands that we hold the design of social tools to a higher standard. I mean, shouldn’t such tools elevate us? Make us really feel—more expressive, truthful, and appreciative of our human connections? Right?! But how do we design more virtuous social tools, tools that make us better? Perhaps we can take a cue from music-making — the kind of activity that does not require you to know who you are playing music with, but will leave you feeling expressive, connected, and, well, social. If our social tools were made more like instruments and contexts for music-making, what might they look like? What would we value in such tools? No identity? No problem! It’s about feeling connected. Identity is not that important: Music goes beyond names, age, gender, culture, and location to create meaningful, authentic connection between people. For example, in Ocarina (designed 10 years ago, which is like a bazillion in app-years) had this globe feature that let’s you listen to strangers blowing into their phones from around the world (see video above). This is an anonymous social medium that is not about who is playing and how good they are, but that there are people out there making music, for the fun of it. Participation is a value (dammit!): With social media there’s a lot of looking at and scrolling through but not a lot of doing things together with other people (I have a separate diatribe about “likes” and comments). Social activities like music create an opportunity for meaningful shared experiences. It puts you in the zone, and gets you out of your funk (sometimes by employing funk). As composer Paul Hindemith once said, “People who make music together, cannot be enemies. At least while the music lasts.” Expression is a virtue: Social media has interactive components but wasn’t designed to go much farther than looking at and liking things. You can’t quite express yourself fully with five emojis — if we could, that would actually be super sad :(. To make something is a unique kind of joy that can only come from within. Yes, you can make a post, but when was the last time a post moved you the same way as listening to a favorite song? Or more broadly, why don’t our social tools make us feel more? Or feel more like ourselves? This is what people are doing in Ocarina — blowing into their phones. It is time the tools we use reflect our desire to express and to connect. Here endeth our first rant. For more, follow me or the Artful Design series on Medium, and check out my book, Artful Design: Technology in Search of the Sublime. 
See you down the rabbit hole!
https://medium.com/artful-design/music-as-social-fabric-14dd17275ce1
['Ge Wang']
2019-01-04 03:37:00.392000+00:00
['Craft', 'Technology And Design', 'Music', 'Social Network', 'Ethics']
Guided Agile and the changing role of the Information Architect
A part of “Structuralism and a Pattern Language for Online Environments”

In the past, if a farmer wanted a sweeter strawberry, they needed to search through the strawberries in their field to find one that was sweeter than the others. The next year they would plant their strawberry crop using those seeds, and with each cycle they would get slightly sweeter strawberries. This approach takes a long time, and at the end a farmer would still not be able to say why their strawberries are now sweeter. They just are.

This process of doing, measuring, adjusting, and doing again is akin to the process of agile development conducted in businesses today. Agile development is a useful tool to get people to talk to customers, experiment, and shift course quickly. But agile can also be highly inefficient because it can take dozens of iterations to reach a design that works for your given market. Agile development is like a blindfolded archer who locates the target of product-market fit through a game of Marco Polo. They can shoot and adjust quickly, but this experimentation takes a lot of time. They also lose a lot of arrows.

Product-Market fit Marco Polo

The discovery of DNA opened a new realm of exploration through the insight that small base pairs of guanine–cytosine and adenine–thymine combine in complex sequences to form different genes. This understanding changed how new fruits and vegetables come to market. Suddenly scientists could understand which genes create sweetness in a strawberry, and use this understanding to engineer sweeter fruits in fewer iterations.

Objects and modules are the digital equivalent of the natural world’s base pairs and genes. The patterns information architects uncover are like the understanding scientists cultivate of how genes are expressed. Understanding interaction patterns can help businesses see why certain website structures fit certain markets, or how adding something small, like a Like or Comment button, can create ripple effects throughout a system. In the archer-playing-Marco-Polo metaphor, this is like a person who knows the target is likely to be on one half of the field rather than the other. This person only needs to make two or three guesses to get to the target instead of five or six. In a business, this is like a design team that creates a great product in the third or fourth iteration instead of the sixth or eighth. They still need to iterate to understand which patterns match their specific circumstances, but their actions are better directed. They aren’t shooting as blindly.

The archer knows the target is to their left.

Guided Agile

Currently a lot of information architecture work is done unconsciously, hidden in product design, UX research, and business management. A business person might see that people like Tinder, so they build their app around swipe cards. They see that people like Karma on Reddit, so they put similar incentives on their websites as well. They might be able to test and redirect their course of action quickly if these things turn out to be bad choices. But since they never understood the underlying architecture, they cannot articulate why they were unsuccessful, and their next iteration is not guaranteed to be more successful. The patterns developed in information architecture are not meant to supersede an agile approach. These patterns are meant to make agile experimentation more efficient by guiding design decisions, making design closer to engineering than intuition.
If a company wants to build a place where their users can be vulnerable, information architects know that an important element to create vulnerability is to have clarity in who can access a user’s information. If a company wants to build a place where work happens, information architects can align what kind of work (research work, editing, or collaborative work) with user permissions. If a company wants to build a community, information architects can understand the community’s values to see if users should interact primarily in constant streams or separate pools of different discussions. The ideas of objects, blocks, modules, and channels, as well as the Patterns of Work, Play, Education, and Home are the very first steps to make online designs more conscious and effective. This increase in exactness of design will likely make the Information Architect a more valued member within design teams and organizations as they are better able to move from system goals to the design of these systems.
https://uxdesign.cc/guided-agile-and-the-changing-role-of-the-information-architect-1f639f848d6d
['Rachel Aliana']
2019-12-04 00:25:29.753000+00:00
['Information Architecture', 'Software Engineering', 'Agile', 'Product Management', 'UX']
Investing with Python:
Combining Technical and Fundamental Analysis to Kick Start Your Investing

Photo by Ishant Mishra on Unsplash

Introduction

Having little experience in investing, I’m somewhat intimidated when deciding to buy a stock. Stocks come with a plethora of data points, making it easy to experience information overload. Amidst all of this data, it’s natural to ask which metrics to consider, and whether there is a fast and easy way to make an investment decision (Warren Buffett is rolling his eyes right about now). There are two schools of thought on investing, technical and fundamental analysis, and each uses a variety of metrics to assess whether to invest in a stock. In this article, I’ll briefly describe each approach and the merits of combining the two, along with outlining several of the metrics that each camp uses. Finally, I’ll show how to combine both fundamental and technical analysis in Python, making for an easy-to-use tool to kick start your analysis.

A Brief Background on Technical and Fundamental Analysis

Technical and fundamental analysis go about investing in separate ways. Technical analysis uses past price and trading history to assess whether a stock is a sound investment. One of the key tenets of this approach is that the future will resemble the past. Fundamental analysis, on the other hand, eschews past trading history and focuses on financial and economic variables that affect a stock’s price. Adherents of both schools usually argue that the two sets of principles run counter to each other. For example, the technical analyst believes that the stock’s price incorporates all publicly available information. There is no need to look at other factors because they are already included in the price. Therefore, historical trading activity and price movements are key indicators of future movements. Fundamental analysts would counter that past price movements and activity, dictated by supply and demand for a security, aren’t necessarily indicative of future performance. Enron is a classic example. Being a stock market darling for a period of time, its demise seemed unfathomable. Yet there were red flags along the way (questionable accounting methods such as mark-to-market, and management’s questionable conduct), and focusing on these aspects would have given a more nuanced picture of the stock’s future movements.

Marrying Fundamental and Technical Analysis

While philosophically opposed, the two camps can be unified on a practical level. The two can very well lead to the same conclusion of a stock being a buy or a sell. For example, say stock XYZ’s short-term moving average is above its long-term moving average (a buy signal to the technical analyst), and it has posted strong financial growth over the past few years, leading to strong cash flow and a strong balance sheet, and is poised to continue its growth because management is successfully executing on its business strategy (all very good signs to fundamental analysts). Conversely, the two camps can yield opposing conclusions on whether to buy or sell a stock. A stock’s past trading history can yield one signal, whereas its financials tell a different story. This does not necessarily mean that one technique is more reliable than the other. The point is that using the two in conjunction with each other can offer a more complete picture.

Some Key Indicators

Before describing how exactly to combine fundamental and technical analysis, I’d like to give a brief overview of several key indicators from each camp, all of which can be found on Investopedia.

One technique that technical analysts employ is comparing a stock’s short-term moving average with its long-term moving average. Short-term is usually defined as a 50-day moving average and long-term as a 200-day moving average, although these numbers are not set in stone. When the short-term moving average exceeds the long-term moving average, it signals that the stock is a “buy.” Conversely, when the short-term average falls below the long-term average, that signals that it’s time to sell the stock.

Fundamental analysts, on the other hand, consider a variety of financial ratios such as earnings per share (EPS), price-to-earnings (PE), the current ratio, debt-to-equity (DE), etc. They also take into consideration the management of a company as well as the economy at large.

Earnings per share (EPS) is “a company’s net profit divided by the number of common shares it has outstanding.” It “indicates how much money a company makes for each share of its stock and is a widely used metric for corporate profits.” A “higher EPS indicates more value because investors will pay more for a company with higher profits.”

The price-to-earnings (PE) ratio “relates a company’s share price to its earnings per share” and is used to assess future earnings and growth. Depending on the situation, a stock with a high PE ratio can indicate that it’s poised for growth or that it is overvalued.

The current ratio measures a company’s ability to pay its short-term debt, usually debt that is due within a year. A ratio below 1 can potentially signal trouble. For example, a startup may take on a significant amount of debt to help it grow and achieve profitability in the future. While the company could be a success, having a low current ratio may raise legitimate concern about whether the company will survive long enough to reach its goal.

Like the current ratio, the debt-to-equity (DE) ratio measures a company’s liquidity (how well a company can pay its debts). It measures how much a company is borrowing to move its business forward, as opposed to using its own funds. A company with a high DE ratio, like our startup, may be seen as a risky investment because in the event of a downturn where funds dry up, it may not be able to cover its debts.
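The article's own code is not reproduced here, but as a rough illustration of how the two camps can be combined in Python, here is a minimal, self-contained sketch. The threshold values and the synthetic price series are purely illustrative assumptions, not recommendations; a real screen would pull prices and ratios from market data and company filings.

import numpy as np
import pandas as pd

def technical_signal(close, short_window=50, long_window=200):
    # "Buy" when the short-term moving average sits above the long-term one.
    short_ma = close.rolling(short_window).mean().iloc[-1]
    long_ma = close.rolling(long_window).mean().iloc[-1]
    return "buy" if short_ma > long_ma else "sell"

def fundamental_screen(eps, pe, current_ratio, de):
    # Crude screen over the ratios discussed above; thresholds are illustrative only.
    healthy = eps > 0 and pe < 25 and current_ratio >= 1.0 and de <= 2.0
    return "buy" if healthy else "avoid"

if __name__ == "__main__":
    # Synthetic daily closes standing in for real data (e.g., a CSV export or an API download).
    rng = np.random.default_rng(42)
    close = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 400)))

    tech = technical_signal(close)
    fund = fundamental_screen(eps=3.2, pe=18.0, current_ratio=1.6, de=0.8)
    print(f"technical says {tech}, fundamentals say {fund}")
    if tech == "buy" and fund == "buy":
        print("both camps agree: worth a closer look")
    else:
        print("the signals disagree: dig deeper before acting")

The moving-average windows mirror the 50-day and 200-day convention described above, while the fundamental thresholds would in practice be tuned to the sector being analyzed.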
https://medium.com/swlh/investing-with-python-ea5da7a4a5c4
['Curt Beck']
2020-06-28 20:39:02.329000+00:00
['Investing', 'Data Science', 'Python']
On Replacing Self-Pity with Self Compassion
The pandemic has made us feel isolated, alone, disturbed, anxious among many other feelings. In the last seven months of isolation, I’ve experienced it all. In difficult times, it’s easy to get trapped in the cycle of self-pity. It’s easy to start feeling sorry for yourself and ruminate endlessly about your problems. It’s easy to start bearing resentment in situations that feel unfair. Sadly, as easy as this feeling of self-pity comes, it doesn’t help us in any way. In fact, it only leaves us feeling worse. “I don’t want it to end, and so, as every therapist knows, the ego does not want an end to its “problems” because they are part of its identity. If no one will listen to my sad story, I can tell it to myself in my head, over and over, and feel sorry for myself, and so have an identity as someone who is being treated unfairly by life or other people, fate or God. It gives definition to my self-image, makes me into someone, and that is all that matters to the ego.” - Eckhart Tolle Self-pity is nothing like self-compassion, which on the other hand, is a superpower that can help us navigate through difficult times. The more I read about self-compassion, the more I am convinced that each and every person needs to start practicing it. Especially at this time. There’s so much awareness about mental health but not enough about the tools that can help us navigate through difficult times. Self-compassion is one such tool that can help us change internal dialogue, thereby improving our mental health. Here are 5 ways to practice better self-compassion — 1. Replace Feeling Sorry For Yourself With Self-Kindness When the official lockdown ended, almost everyone I knew started moving around and meeting people. I stayed indoors as I didn’t want to put my parents at risk, especially my mother who suffers from Asthma. It didn’t bother me at first. But after a few months, I started to feel disconnected from the outside world. Though I stayed indoors by my own choice, I started to feel resentment at my friends and everyone else who was getting to live a more normal life. But here’s the thing — Resentment is a feeling that only harms the person who is harboring it. And if it manifests in the form of anger towards the other person, it spoils relationships. Behind all feeling of resentment, lies a belief that something unfair happened to you. In my case, the resentment came from the feeling that it was unfair that I had to be indoors while others were getting to have the fun that I so badly missed. But that’s how it is many times. Life is unfair and it’ll be unfair to you in some ways. During such times, your internal dialogue will decide how you deal with the situation. It’ll determine whether you remain stuck or pull through. Instead of feeling sorry for yourself, try another approach of just being more kind towards yourself. Start practicing self-care seriously. Track your sleep and plan your meals. Start giving yourself more breaks, more playtime, more treats, more time, and more space to recover from feeling the way you do. Allow yourself to feel sad for all the things that feel unfair to you. And then process the pain and let it go.
https://medium.com/change-your-mind/on-replacing-self-pity-with-self-compassion-b0c80abdcc6e
['Shreya Dalela']
2020-11-21 18:19:00.564000+00:00
['Mental Health', 'Kindness', 'Self Love', 'Compassion', 'Self Care']
How to Hack Daylight Saving Time and Gain 7 Extra Hours Every Week
On Being a Dissident I got my inspiration from a little girl I heard about, who happily went along with her First Communion, until she got to the part when the priest held out the consecrated wafer and said, “Body of Christ.” At that, she reconsidered and said, ever so politely, “No, thank you.” A brief discussion ensued, and after a few minutes, to the relief of her parents and everyone else gathered, the child acquiesced. As for me, I’m not so compliant. No, thank you, I will not change the sleeping schedule to which I rigidly adhere because health experts say we’re healthier when we establish a sleep routine and stick to it. (No Ambien for me, no thank you, not needed.) No, thank you, I will not dutifully go around the house and change my clocks twice a year for reasons most people can’t even explain. No, thank you, I will not participate in a scheme that most people loathe but persists because national lawmakers can’t bring themselves to enact change. A Standard Time strike is a small, hopeful act of rebellion that actually has plenty of precedences. As David Prerau explained in his 2005 book Seize the Daylight, people have been ignoring state-ordered time changes for most of Daylight Saving Time history. King Edward VII, for example, was a supporter of the first Daylight Saving Time act proposed in the UK because he was already observing his own version of it, having moved the clocks ahead 30 minutes at his castle and palaces. And of course, the United States has a checkered history of observance, with many states and municipalities opting in or out at will, most comically in 1965 when St. Paul, Minnesota, and neighboring Minneapolis began Daylight Saving Time on different schedules, throwing the metropolis into dysfunction that required first responders to wear two watches in order to navigate the Twin Cities. Benjamin Franklin was an early proponent of changing clocks to save money. He hated to see people asleep when it was light and later burning expensive candles and lamp oil in order so they could be awake and do things when it was dark. Later, an English home builder by the name of William Willett campaigned for a national Daylight Saving Time plan because he believed being outside in light after the workday made people healthier in body and spirit. “While daylight surrounds us, cheerfulness reigns, anxieties press less heavily, and courage is bred for the struggle of life,” Willett wrote in 1907. That remains true today, possibly even more now than then. As someone who lives in New England, where it is fully and cursedly dark at 5 p.m. during winter, I’d be fine with changing our clocks every week if it meant that it was light until eight. But that won’t happen, and our current system sucks and everyone hates it. The only benefit of having twice-annual time changes is that we have a national reminder to install new smoke alarm batteries. But we could do that on Arbor Day. The nation will one day abandon the madness, but until then, there’s nothing to stop us from doing it on our own. Go forth and ignore the time change. You’ll thank me in March, if not before. There’s one more thing I should warn you about. You might still dread the day in March when the rest of America (save Hawaii and Arizona) “springs forward,” not because you are “losing” an hour of sleep, but because you are losing your productivity edge. 
Yes, if you join me in a Standard Time strike, there will come the day, specifically March 14, 2021, when you will have to operate on the same clock as everyone else again. You will have to rejoin the herd. You will lose seven hours of productivity, not one hour of sleep. This may be disappointing, as you likely will have enjoyed the sense of moral superiority that comes from getting up an hour earlier, not to mention all the wonderful things you’ve been accomplishing during that extra hour. Fear not: November is coming. And I, for one, can’t wait.
https://medium.com/better-humans/how-to-hack-daylight-saving-time-and-gain-7-extra-hours-instead-of-1-7120340dadb0
['Jennifer Graham']
2020-10-07 19:57:45.882000+00:00
['Daylight Saving Time', 'Morning Routines', 'Morning', 'Life Hacking', 'Productivity']
Top Five Reasons to Take the Certified Enterprise Blockchain Architect (CEBA) Certification Now!
Joseph Holbrook · Nov 2 · 5 min read

As a seasoned test taker, pre-sales engineer, and technical trainer, I wanted to write a short review of the 101Blockchains course and certification, and of how you could become a Certified Enterprise Blockchain Architect (CEBA). This post covers the following topics about the CEBA certification:

- What is the Certified Enterprise Blockchain Architect (CEBA)
- What you will learn
- What to expect of the CEBA certification content
- Top five reasons to take the CEBA certification now
- Taking the certification exam

Let's get started!

Certified Enterprise Blockchain Architect (CEBA)

Whether you’re a CIO, pre-sales engineer, solution engineer, or IT-focused customer-facing expert, knowing how to speak blockchain is going to be a required skill, especially for large VARs, vendors, integrators, etc. This certification course is perfect for developers, engineers, and IT professionals who want to broaden their skill set, and it is ideal for technology-focused engineers, application developers, IT administrators, or anyone wanting to obtain the Certified Enterprise Blockchain Architect (CEBA) certification. The link for the certification course is here on 101Blockchains: https://academy.101blockchains.com/courses/certified-enterprise-blockchain-architect?ref=497691

WHAT YOU WILL LEARN

What’s covered to enable you to become a Certified Enterprise Blockchain Architect (CEBA):

- Blockchain architecture basics and advanced concepts, such as development
- Choosing appropriate blockchain systems for various use cases
- Understanding customer requirements and how to use a blockchain decision tree
- Working effectively with both public and permissioned blockchain systems such as Ethereum and Hyperledger
- Resources to help study for the exam (practice exam and review)
- Objectives of the Certified Enterprise Blockchain Architect (CEBA) exam

Certified Enterprise Blockchain Architect (CEBA): https://academy.101blockchains.com/courses/certified-enterprise-blockchain-architect?ref=497691

The demand for blockchain expertise is manifold, and the myth that the demand is only for “developers” is false. If you’re a developer and know Node.js, PHP, etc., then you’re in great shape to hop over to the blockchain world. However, if you’re not a developer, there is still plenty of room for you in blockchain, for example as a blockchain architect or blockchain analyst. Companies have increasingly made some level of blockchain knowledge part of their job requirements.

What to Expect of the CEBA Certification Content

Below is a short review of the course materials and what you need to know to pass the certification.

1. I was tested on the basics of blockchain, such as how blocks are written, what a nonce is, permissioned and permissionless blockchains, etc.

2. I was tested on blockchain use cases and how to successfully put together a whiteboard/proof of concept for a prospective customer. Review use cases to help understand why a smart contract could be used.

3. I was expected to know the basics of Bitcoin, Ethereum, and Hyperledger. I did not see anything on R3 Corda, Quantum, etc. Know the discussion points for using Hyperledger over Ethereum and vice versa. Use cases and technical merits are important to know for the exam.

4. The exam matched what you should expect a “pre-sales” architect role to need to know. As one who previously held pre-sales roles selling millions in data storage solutions, this is exactly the level an integrator or vendor would need. As a pre-sales engineer you need to be able to understand the customer’s business requirements and translate them into a technical solution. For example, should the customer use a permissioned or permissionless blockchain based on their budget, infrastructure, or performance requirements?

6. Know some of the consensus algorithms deeply. Be able to identify Proof of Work, Proof of Stake, etc., and be able to compare and contrast PoW vs. PoS. Be really good at this.

7. Understand BFT. No, it’s not a dance: Byzantine Fault Tolerance, and why it was an important problem to solve.

8. Know that tokens and coins can add value to a blockchain but are not critical to one, especially if it’s permissioned. Hyperledger does not have a native coin or token, but you need to know how Hyperledger can still be used as a use case.

9. Smart contracts: ERC20 and ERC721 are must-knows. Study this area if you’re not confident, so you know what is fungible and what is not.

10. Terminology: know your terms, such as ledgers, centralized vs. decentralized, immutable and mutable, distributed, etc.

11. Get to know the Ethereum-specific IDEs, programs, browsers, testnets, etc.: Mist, Remix, Ganache, Truffle.

12. When it comes to development, know what a persona is, what a guiding principle is, and what user stories are.

TOP FIVE REASONS TO TAKE THE CEBA CERTIFICATION NOW!

- This is the first-to-market, professionally developed blockchain exam. It clearly had funding and effort put into it, unlike other blockchain certifications I have seen.
- 101Blockchains has several training options and certifications that could complement this certification.
- It’s written by experts who live, eat, and breathe blockchain. Kris Bennett is a great example of this expertise.
- If you’re a pre-sales architect, solutions engineer, or even in technical sales, then this course and certification are for you. The growth of blockchain is clearly going to change the dynamics in specific verticals such as finance and government.
- Get your first blockchain certification before your coworkers do, to help distinguish your level of expertise. The fact is, your job is going to be more competitive to keep. Demand is hot. Check out this Hackernoon article on blockchain jobs and salaries: https://hackernoon.com/blockchain-jobs-and-salaries-2018-report-45d3e7741c19?gi=e4622ae02823

For those interested in the Certified Blockchain Developer — Hyperledger Certification (CBDH), please check this link.

TAKING THE EXAM

Below is the link for the exam info. It’s $299 and includes the full content and the exam. The link for the certification course is here on 101Blockchains: https://academy.101blockchains.com/courses/certified-enterprise-blockchain-architect?ref=497691

Once you pass, you will receive the certification! Congratulations!

Joseph Holbrook, CLO

TechCommanders is an online training platform for both aspiring and veteran IT professionals interested in next-generation IT skills. TechCommanders is led by Joseph Holbrook, a highly sought-after technology industry veteran. TechCommanders offers blended learning, which allows students to learn on demand but with live training. The courses offered prepare students to take certification exams in Cloud, DevOps, IT Security, and Blockchain. TechCommanders was established in Jacksonville, Florida in 2020 by Joseph Holbrook, both a US Navy veteran and a technology industry veteran.
Techcommanders, Advancing your NextGen Technology Skills.
https://medium.com/cryptolinks/top-five-reasons-to-take-the-certified-enterprise-blockchain-architect-ceba-certification-now-b2bebc50fddb
['Joseph Holbrook']
2020-12-08 18:38:11.316000+00:00
['Presales', 'Blockchain', 'Blockchain Technology', 'Blockchain Development', 'Development']
Package Design for Increasing Sales
A package of goods is not just an outer shell. It’s an essential component of a successful brand and a medium for consumer communication. It’s a marketing tool that helps you sell instantly and increase your sales by over 30%. Some manufacturers don’t understand how important packaging is and are trying to save money on it. This is not just a mistake, but a misunderstanding of market mechanisms and consumer psychology, leading to financial losses. Investing in effective custom package design yields big dividends for the business. A product in an apposite package sells itself, allowing to save advertising costs. A proper design of the package is a guarantee that the buyer will notice the product on the shelf or the website page, considers it to be attractive, high-quality, and trustworthy. Only 6–8 seconds — and the buyer decides whether to purchase the product or not. Good package design makes a brand special, recognizable, and loved, attracts new clients. Many people collect beautiful boxes and bottles, some cannot throw away the bags and wrappers, and somebody collects labels! What businessman would not enjoy this attitude towards their products? How to make a package that the customer feels even sorry to throw away? Rules for creating attractive package design 1. The idea. Who should develop it? The biggest mistake is when an entrepreneur dictates to a designer how their product should look like, what images and inscriptions should be on the package (label). They sincerely believe that they know best what’s needed. The designer dutifully fulfills the order. The result is sad: the packaging doesn’t meet expectations and doesn’t fulfill its functions. Another mistake is that an entrepreneur completely relies on the designer, giving them a wide room for creativity. The result is the same: people don’t hurry to snap up the goods. Why? It’s simple. An important link is missing. The manufacturer knows everything about the product, the designer knows everything about the design. Who will take care of customers and competitors? Without a marketing specialist, the chances of attracting customers with even the most beautiful packaging tend to zero. Product packaging is a bridge on three pillars: entrepreneur — marketing specialist — designer. Without knowing the goals and objectives of the brand, product features, the needs of the target audience, without a concept, the designer has no right to start working. 2. Informativeness The design of the package should convey to the consumer the essence of the brand and its positioning, build communication with buyers, and reflect their expectations. First of all, it’s clear and understandable information about the product. In a few seconds, the user should understand that in front of him is exactly the product that is necessary and useful for them (it’s especially important). The product is definitely better than others and they like it at first sight! 3. Matching the brand style The design must match the fundamentals of the brand’s corporate style, work for its recognition, memorability, and popularity. A color palette, unique icons, packaging architecture, visuals, typography, trademark, the texture of the package — all this together creates an authentic, unique packaging. Such a package arouses interest and motivates them to buy. 4. Emotional appeal Emotional design is the key to high sales. This is a design that acts on the emotions more than on logic — simple, clean, bright (contrasting) in key details. 
Components of the emotional design: Expressive images with which users can identify themselves; Positive emotions that are transmitted to users (delight, admiration); Personalization, creating a sense of belonging among users; Honesty, empathy; Humor, laughter, joy; Storytelling; Microinteractions, positive experiences. Emotional design is not only a delight at the sight of aesthetic and convenient packaging. This is the experience of interacting with it and thinking about it. The best package is the one that not caught the fancy of the users, but evoked certain associations and feelings. This is the package that users think of and remember! 5. Harmony of package and product Effective packaging always conforms to the product and is its visual and logical follow-up as its natural part. There are also distinctive features of the category. In an effort to stand out among competitors, you shouldn’t violate the principle of visual correspondence of packaging to the features familiar to this category. That is, a ketchup jar should not resemble shampoo, and a chocolate bar shouldn’t look like a drug package. By breaking this principle, you’ll attract one nosy buyer but lose ten. 6. Color scheme Color attracts people in the first instance. Researches show that 60 to 90% of people make hasty judgments about products based on color alone. The color of the package is also a tool for information communication. Different categories of products have their own predictable color, which evokes a certain emotional response in buyers. Standard color schemes: Natural and organic products — unbleached carton, brown, green, light green tones. Children’s toys — bright basic colors: red, blue, yellow, as well as pink and purple. Premium brands — black, white, gold, silver. Products for women — lavender, pink, pastel colors. High-tech products — black, white. On the one hand, you need to be predictable and not go out of the given color palette, and on the other hand, you need to stand out from competitors. How can this be achieved? Designers have two ways: Use unusual shades within the required color palette. Combine the primary color of the brand with the secondary one. This will help differentiate the brand and create a unique solution. All new palettes and color combinations need to be tested to understand user reactions. 7. Shape The shape of a good package combines beauty and functionality. The shape can be quite original, but at the same time, it must remain recognizable. Buyers also care about its strength, lightness, compactness, and ease of transportation. The package design should be in harmony with the shape and take into account all its features. 8. Simplicity People have no time to look at the details of an image closely. The fewer unnecessary elements, the better. The minimalistic and expressive design immediately attracts attention and is better remembered. 9. Uniqueness The unique packaging design is not the figment of a designer’s wild imagination, but a deliberate solution based on a design concept. The concept is created on the basis of marketing research and proven through testing. The more revolutionary a design idea is, the more important it’s to test it. 10. Aesthetics and creativity You can follow all of the rules indicated above when developing the package design, but end up with not the most effective result. Why? Not all designs are created equal. A true design is not only functional but also aesthetic, unique, tasteful, and made with love. 
Package, that’s a shame to throw away, can only be created by seasoned professionals. If you need packaging that won’t allow your customers to stay indifferent, seek professional designers` help.
https://medium.com/outcrowd/package-design-for-sales-increasing-b3230d8b51d9
[]
2020-11-30 08:31:59.484000+00:00
['Packaging', 'Design', 'Sales', 'Internet Marketing', 'Packaging Design']
The Phenomenology of Feeling Part 1
Phenomenology of Feeling (photo by mjboyce) The idea is to tune-in. This is more than feeling. But it is feeling, too. Feeling is sensing and emotion. They are distinct experiences, yet captured by the same word. Feeling, as in sensation, is experienced through, on, by the whole body. Emotion is experienced by the mind, the heart, the stomach. When I say “experienced” I mean located. When I say “mind” I mean me. Experience is me locating feeling. Feeling is itself and is also of something (including itself). When “of” is removed from a description — for example, a feeling of love; a feeling of fear; a feeling of cold; a feeling of feeling; a feeling of feeling feelings (emotions) — then there is no separation between experience and the one having the experience. Experience already, as a word, is predicated by separation. Separation is built into my (English) language. It is built into the syntax: Subject Verb Noun (I touch the flower; I feel calm). An emotion becomes a thing (noun) in (my) language: (I feel love). But emotions, to me, feel transient and dynamic like verbs. And sensations likewise. Emotions are named and thereby have (social) currency. Sensations are not named — in fact, they are positioned as ineffable (unnameable). It is important to note that what is unnameable is (still) positioned. To be positioned is to be named (still) in a more abstract way. If I feel the sensation of my hand, my right hand as it writes this sentence using/with a pen, that has meaning. Meaning and value are not the same. Value is more specific, usually, and named, as well as positioned. Meaning is more abstract in it being more ineffable sometimes, but can be profound for all that. Meaning can be (more) meaningful. Something can feel profoundly meaningful without being nameable. Other than to say that it is or it feels meaningful. Meaning can be meaningful. Meaning can be not meaningful. Because meaning, like value, can be a thing, a noun, an identified something. This order of abstraction is registered through social syntax as named and meaningful. That is a paradox. Social usage — its language and grammar — can confound and reverse the specificity of meaning — of names (nouns). It can also transform names (nouns) into actions (verbs), and verbs (actions) into nouns (names). This is magic. What is can be positioned as, and hence turned into, what it isn’t. The war, any war, is a contradiction — it is a saying against another saying. All war is a challenge between perceived realities. No reality exists without perception (other than one that is positioned — that is imagined — that is perceived as such). It is a paradox to say “this is an unperceived reality.” All elections are wars fought in the interest of the sound of a voice. You want to hear someone’s voice speaking, and you got for that voice. You believe that voice will say the things you want to hear, sounding meaningful to you in the way you want it to sound. You will always be disappointed by the results, and you know this, because you always have been disappointed before. But regardless, you will still be glad to hear the voice you want to hear. If that changes, if the voice changes, and you no more want to hear it, if the speaker loses their voice (so to speak), then you will not give your vote again to that speaker. You will not identify with them any more. There are people. There are groups. They are not the same. There is reality. There are realities. They are not the same. There is the way it is and the way it goes. 
How it is and how it goes. They are not the same.
https://medium.com/writing-thinking-saying/the-phenomenology-of-feeling-part-1-d06de555fe1e
[]
2019-04-23 18:35:12.471000+00:00
['Emotional Intelligence', 'Philosophy', 'Phenomenology', 'Writing', 'Feelings']
What Large Corporations Can Learn from Startups
1. Keeping Things Simple

In startups, there is always a scarcity of human resources, so they need to find a way to get things done with those limited resources. What do they do? They keep things simple. They have reinvented collaboration-based teamwork. Here, no one is designated for anything specific (though they have designations on their business cards); rather, everyone is designated for everything. There are short email loops, fewer departments, and a flat hierarchy to keep things simple. This actually helps startups get in touch with their end customers very easily and make more out of the available resources.

2. Working Fast

Working fast is another important lesson for corporations. What happens in a corporation when a problem is found? Here's the process: you identify the problem, have a group discussion, consult with the supervisor, get approval from the department head, execute the solution, report on its effectiveness, and so on. But in startup culture, doing these things faster is very much appreciated. The basic reason for this culture is a lack of investment. As most startups have limited investment, they are bound to work fast and skip unnecessary email loops. Through this, the maximum output is ensured in the minimum amount of time.

3. Hyper-Collaboration

Many of you might already know the terms "hyper-collaboration" and "experience partners." Hyper-collaboration describes a certain form of collaboration among multiple ventures that support each other in certain activities so that they can grow together. An experience partnership is a strategic arrangement between two organizations that share their resources systematically to ensure the optimum output. For example, I knew a SaaS product company and an image-based AI company working together in a shared office space. One shared its technical resources with the other to keep technical support needs minimal, while the other provided branding support to the AI-based startup. The concept here is to find solutions through multiple partnerships and to ask the wider community to share expertise whenever the firm lacks experience.

4. Being Brave in the Face of Uncertainty

If you ask me for one single strength that startups have, I will say it is the ability to deal with uncertainty and failure. These lean businesses constantly face uncertainty and failure in financial, operational, and resource-related aspects, and they become stronger while developing that survival instinct. In a corporation, most of the time there are plenty of resources to get a job done, and the financial backbone is usually strong. So if an uncertain situation occurs or a failure shows no mercy, it doesn't hurt the corporation much; or, I would say, in most cases it doesn't even feel the discomfort. But it does have an effect in the long run. Renowned corporations have ended up being merged or selling off majority shares to another corporation because of this lack of familiarity with discomfort in uncertain situations. The lesson of not being afraid is a very important one. Google develops hundreds of products over time, and a huge number of these products are thrown away just because they didn't work. If you don't experience failure, how can you embrace it? So being afraid of failure and uncertainty is not an option; learning from it is.
https://medium.com/swlh/4-things-corporations-can-learn-from-startups-fbafddccb4b5
['Khan Tanjeel Ahmed']
2017-12-04 14:30:16.471000+00:00
['Corporations', 'Startup', 'Learning', 'Collaboration', 'Failure']
“You Better Tell Him to Stay Away”
Once my grad school roommate Mark moved to Mexico to begin his new consulting firm, “LOS — In Service of the Imagination,” I had to scour the want ads for someone else to help me cover the $220 monthly rent in our cereal box apartment complex on Kingston Pike in Knoxville. Those were the days when sharing a two-bedroom apartment with another guy didn’t give me the creeps, and so I never thought about being too selective. I had already put up with a joker, a smoker, and all-day toker — a guy whose idea of cleanliness was setting up roach motels throughout the kitchen, which made early morning discoveries before coffee enough to put into perspective later day seminars on early literary Modernists like Woolf and Joyce…and Kafka. So when I saw an ad from a guy who had just returned from the Peace Corps in Africa and who was entering the second degree program in architecture, I thought, “Might as well be him as anybody.” I already knew several people in that program, knew how difficult it was and how hard they had to work as the semester drew to a close, pulling all-nighters for at least the last ten days of every term. So I figured this guy would be working steadily and would also be gone from our place enough to give me a certain freedom. And on some level, I was right about both. Still, new roommates, unknown ex-volunteers, can’t be buttonholed or categorized, and this couldn’t have been more true for the guy who answered my ad early one September evening. His name was Sean Mackie, and, as his name might lead you to conclude, he was red-haired and of Irish descent. I seem to remember his saying that he came from Boston, but that could be my mind reaching for greater stereotypes. Yet he had a northern accent, and even on that first night, was friendly, suggesting that we get a beer while we talk. It took me maybe thirty seconds to say, “Okay, you want to move in now?” And he did want to, and by the next night, he had the front bedroom all made up in his image, though I hardly remember anything about that image except for the African flag he flew from whichever country he had lived in. I’m going to guess Mali, but it’s just a guess at this point. Sean was an able guy, and there were nights that we cooked for each other, but mostly, our lives passed only late at night. Sean didn’t smoke tobacco, so we often shared a late-night joint and discussed who we were, or at least who we had been that day. I took him to English Department/grad school parties, and most people thought that at the very least, he was an unusual guy. Most said they liked him in that way many of us say we “like” corned beef tongue. Nothing bad happened really until it sort of did. But before I get to the semi-bad story, here are a few other moments from the year we spent together: I can be a selfish asshole sometimes, and so I decided to make for my supper one night kielbasa and sauerkraut. I mean, a whole package of kielbasa, one pound, and two cans of kraut. It was enough for three people, but I ate the entire batch. And when Sean came home and declared that “Something smells mighty good,” all I could say was, “Yeah, it was.” And I deserved to receive the look on his face, too. I hate being an asshole, but sometimes my feeling doesn’t stop me, which, I guess, firmly situates me in the middle of the human experience. One of my other friends — a guy who deserves to be featured in this series, and whom I also assholed myself to — liked Sean pretty well. 
They must have bonded over something like sci-fi movies because one night Sean told me that the friend was coming over and that they were, “…going to drop some acid and head over to the university center to watch 2001: A Space Odyssey.” They invited me, but I had seen the film and didn’t want to see it again, especially while tripping. I had given that up after watching The Rocky Horror Picture Show for the first time as an undergrad. Some experiences leave a mark. I suppose they had a good time, but Sean’s eyes when he returned — eyes that were borderline crossed under normal circumstances — had turned in on themselves and back out and back in again. He rambled loudly, and when I went to bed, he was mumbling something about women and cars, and how one can tell the two apart. Another time — and here’s the scene I hope I prepared you for — I took Sean to one of our parties, and he found a woman there he liked. She was redheaded, too, and her name was Joanna. That she happened to be dating another guy in our program didn’t deter Sean, and I suppose that consenting adults consent to whatever they want to. And I was just a roommate and had no right to express any moral outrage even if I actually had any. It was none of my business, though, but I often think that twenty-five-year-old guys should express their moral outrage more than they do, which might also help prevent them from making similar mistakes. Anyway, I left the party at some point and next morning saw Joanna leaving our apartment. Which again, didn’t exactly concern or bother me, except a couple of weeks later, her boyfriend confronted me in the department mailroom, where all the throwdowns, of course, occur. “Listen, you better tell your roommate to stay away from my woman.” I looked at him, a guy I had casually known for three years, and said, “Hey Gary, tell him yourself. I’m no one’s messenger.” And I’m not. Poor Gary. He looked just as Sean did when he heard about the engulfed kielbasa. But he took care of his own somehow, for a year later, he and Joanna married each other, and lived, well…I don’t know how they lived, but I do know that they gained and lost weight together. Sean was non-plussed by the affair, and as far as I can remember, the rest of the year passed without incident. But as spring came, I decided to move further off campus, and I found a basement apartment just one street over from where my future wife’s family would one day move. Knoxville both is and isn’t that large. So I said goodbye to Sean, who moved out, too. We kept in touch for a few months, and the last time I saw him was at another department party at an apartment complex on the other end of town. Sean showed up with a guy whom we met while camping out for Police tickets the previous spring. A troubled guy who for some reason was looking for a fight with me. But I had neither stolen his girl nor made him trip in any way. Sean just shrugged when he saw me giving him the stink eye for bringing this asshole. The asshole looked at my semi-Mohawk haircut and said, “It’s not even,” in a semi-threatening way. My friend Steve sidled over at that moment and asked if I needed any help. “No, don’t think so, but I’m glad you’re here.” Who knew grad school in English could be so threatening? Nothing happened, though, and even Gary and Joanna weren’t bothered by the night. But, it was the last night I ever saw or heard from Sean. I neither know what happened to him, if he finished his degree, nor if he ever took acid again. 
It seems a shame, but that’s what happens to old roommates and lost friends. They look at you so meaningfully, until one day, they don’t.
https://medium.com/weeds-wildflowers/you-better-tell-him-to-stay-away-70d2bddceb4
['Terry Barr']
2020-12-20 02:12:10.337000+00:00
['Food', 'Weeds And Wildflowers', 'Nonfiction', 'Friendship', 'Roommates']
How to Solve Super Egg Drop Problem with Dynamic Programming
NMTechBytes · Aug 6 · 7 min read

Find the minimum number of egg drops needed to identify the highest floor in a building from which an egg won’t break.

Problem Description

There are E eggs (i.e., allowed egg breaks), a building with F floors [F ≥ 1], and a special floor S [0 ≤ S ≤ F]: any egg dropped from a floor higher than S will break, and any egg dropped at or below this floor will not break. Given that an unbroken egg can be dropped from any floor, what is the minimum number of egg drops D [1 ≤ D ≤ F] needed in order to find S in the worst case?

Input1: E=1, F=1 | Output1: D=1
Input2: E=1, F=2 | Output2: D=2
Input3: E=1, F=7 | Output3: D=7
Input4: E=2, F=7 | Output4: D=4
Input5: E=3, F=7 | Output5: D=3

Common confusion points:

What is the “worst case”? It means the strategy must work no matter where S happens to be, so we count the drops needed for the unluckiest sequence of outcomes.

What is “floor 0” in the range for the special floor? Treat it as a basement where an egg drop isn’t allowed but which can be used for reference. Example: S=0 in case the egg breaks on floor 1, which would mean any drop above floor 0 results in a break.

🍳 Solutions

Linear Search

Take an egg to floor 1 and start dropping it, going up 1 floor at a time, until it breaks or we go beyond the top floor. The worst case is going through all floors [O(F)] to find S=F.

Linear Search in a Building

Unfortunately, this algorithm will not work, because we have to find the minimum number of egg drops needed to find S, and in many cases that minimum is far smaller than the number of drops a linear scan uses.

Modified Binary Search

A typical binary search gets us O(log2(F)) egg drops. Here we repeatedly do an egg drop from the middle floor of a range (bottom to top floor) to check whether it breaks. In the illustration below, you can see bottom=1, top=7, and middle=4 for a building in the first iteration.

Binary Search in a Building

If the egg breaks at floor 4, then this floor is above S, because an egg will break on all floors above S. So bottom stays at 1 but top changes to middle-1 = 4-1 = 3. If the egg doesn’t break at floor 4, then this floor is at or below S, because an egg will not break on any floor at or below S. So top stays at 7 but bottom changes to middle+1 = 4+1 = 5. After the range change, we repeat this process until we find S. The worst case is going through log2(F) floors to find that S=F or S=0.

This works great in a world with unlimited eggs (= egg breaks) but is much less efficient with an egg-break limit. In the scenario above, what if the first egg breaks from floor 4 and then the second egg also breaks from floor 2? Since there are no additional eggs left to continue the algorithm, it is impossible to find whether S=1 or S=0. To make this work, instead of dropping the second egg at floor 2 we can take the second egg to the bottom (floor 1) and do a linear search by dropping it as we go up 1 floor at a time. Even with this modification, imagine a case with 1000 floors and 2 eggs: we end up going through 500 floors anyway, which is F/2 ~ O(F) in this case. Can we do better?

O(ExF) Complexity

Two Egg Breaks

Let’s revisit the problem of F floors with a limit of 2 eggs for now, along with the modified binary search algorithm we just saw, to understand the basic theme of the algorithm. Assume that we already know that X egg drops need to be performed to find S. Given these constraints, the first egg has to be dropped at floor X (jump to X directly). This is necessary to keep the egg drop count equal to X at the end.

When the first egg breaks, we start the linear search with the second egg from floor 1.

X = 1 (drop at floor X with first egg) + X-1 (drops from floor 1 to floor X-1 with second egg)

When the first egg doesn’t break at floor X, we can jump up by X-1 and drop it on floor (X+(X-1)) = 2X-1. In case the first egg breaks on this second drop, we start the linear search at X+1; otherwise we continue to jump up.

X = 1 (drop at floor X with first egg) + 1 (drop at floor 2X-1 with first egg) + X-2 (drops from floor X+1 to floor 2X-2 with second egg)

We keep reducing the jumps by one each time we jump up, until we only have one floor left. To generalize for 2 eggs, in the worst case the sum of the jumps should be greater than or equal to the number of floors:

X + (X-1) + (X-2) + … + 2 + 1 >= F

Limited E Egg Breaks

Let’s recall a few things before we dive into the actual algorithm:

Total number of floors in a building = current floor + floors explored when the egg breaks at current (bottom to current-1) + floors explored when the egg doesn’t break at current (current+1 to top).

It is possible to find the number of egg drops from the number of floors, and vice versa (as per the equation), under a constraint on egg breaks.

We have a 2-D matrix floors, which is D x E, and floors[d][e] is the number of floors that we can find S for with d egg drops and e egg breaks (basically the number of eggs) allowed.

floors[d][e] = 1 (new floor against new egg drop) + floors[d-1][e-1] (floors with 1 less egg drop and 1 less egg break) + floors[d-1][e] (floors with 1 less egg drop)

As we drop a new egg, if the egg breaks then S is below the current floor; otherwise it is above the current floor. The total number of floors is the sum of three values:

- current floor and the floors below, towards the bottom: 1 + floors[d-1][e-1] is the total number of floors that we can cover from the bottom, given that the egg breaks on the new drop.
- floors above, towards the top: floors[d-1][e] is the number of floors that we can cover from the new floor to the top, given that the egg survives.

Code

Language: Java. Time complexity: O(ExF) because of the nested loop.

public int superEggDrop(int E, int F) {
    // floors[d][e] = number of floors we can fully resolve with d drops and e eggs.
    int[][] floors = new int[F+1][E+1];
    for (int d = 1; d <= F; d++) {
        for (int e = 1; e <= E; e++) {
            floors[d][e] = 1 + floors[d-1][e-1] + floors[d-1][e];
            if (floors[d][e] >= F) {
                return d; // d drops are enough to cover all F floors
            }
        }
    }
    return -1;
}

Let’s discuss an example in detail: floors[4][2] = 10 for D=4 and E=2. The visualization of the floors matrix (for more dimensions than what the code will calculate) is as follows:

Floors Matrix 7x7

Notice how the E=1 column values correspond to a linear search of the floors, since only 1 egg break is allowed, and how the E=2 column values follow the logic described in the Two Egg Breaks section. The algorithm steps for a 10-floor building are:

Step 1: Drop the egg at floor = floors[3][1]+1 = 3+1 = 4. We have 3 egg drops left. If it breaks, linear search from floor 1 to floor 3; else go to Step 2.
Step 2: Drop the egg at floor = 4+(4-1) = 4+3 = 7. We have 2 egg drops left. If it breaks, linear search from floor 5 to floor 6; else go to Step 3.
Step 3: Drop the egg at floor = 7+(4-2) = 9. We have 1 egg drop left. If it breaks, drop the egg at floor 8; else drop it at floor 10.
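As a quick sanity check on the recurrence (this is not part of the original article), here is a minimal Python sketch of the same bottom-up idea; the function name super_egg_drop is just an illustrative choice, and the assertions reuse the sample inputs from the problem description above.

def super_egg_drop(eggs, total_floors):
    # floors[d][e] = number of floors we can fully resolve with d drops and e eggs.
    floors = [[0] * (eggs + 1) for _ in range(total_floors + 1)]
    for d in range(1, total_floors + 1):
        for e in range(1, eggs + 1):
            floors[d][e] = 1 + floors[d - 1][e - 1] + floors[d - 1][e]
            if floors[d][e] >= total_floors:
                return d
    return -1

# Sample cases from the problem description.
assert super_egg_drop(1, 1) == 1
assert super_egg_drop(1, 7) == 7
assert super_egg_drop(2, 7) == 4
assert super_egg_drop(3, 7) == 3
print("all sample cases pass")

As in the Java version, the loop returns as soon as d drops can cover all the floors, so the table is only filled as far as it needs to be.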
https://medium.com/javarevisited/super-egg-drop-problem-ac42a4b4b09a
[]
2020-09-05 02:57:27.777000+00:00
['Algorithms', 'Coding', 'Java', 'Dynamic Programming', 'Leetcode']
Dear Julie: Voice
Dear Julie: Voice How do you find your distinctive writing voice? Photo by Jason Rosewell on Unsplash Dear Julie, I’ve been writing a novel for about a year with a few stops and starts and I’m nearing the end (finally!). Reading back through, I feel as though the voice and tone changes as the story progresses. Perhaps this is because of what I was reading at the time of writing certain chapters or maybe it has more to do with my style improving during those weeks and months when I was able to write with more regularity. Is this something all authors deal with when they’re writing over a long period of time? In some parts, there are whole perfect chapters, which read as though a stranger wrote them, whereas in other places I’m mortified at how stunted and awkward things seem. Should I go back and rewrite or is this something a good editor will be able to help fix? Alix Dear Alix, First of all: congratulations on nearly finishing your novel! I need you to do something right now. Put either a bottle of champagne, or a box of chocolates, on your shopping list, and the next time you shop, buy them. Put them away someplace safe, in readiness for the day that you do finish your novel. Because when you finish, it is compulsory to celebrate. Go put them on your shopping list now. Okay? I’ll wait. All right, you’re back. So: your question. You feel that your writing voice is uneven because you’ve been writing your novel in fits and starts. This is actually is a good sign, for several reasons. As a new writer, I think it’s important that you try out different styles, voices, and tones — even if they aren’t your own. You’re learning a skill, and a great deal of learning is imitation. What’s even better, is that you recognise that your writing voice is uneven. That shows that you have a good ‘ear’ for writing voice, and it shows that you have an instinctive feel for when something seems like the correct sort of voice and tone for you. This skill is actually quite difficult to acquire if you haven’t got it already. I think we learn it mostly through reading and absorbing others’ voices, and we then apply it to our own writing. You ask if this is something that all authors deal with. I’d say that most new authors deal with it. But with experience, writers settle into their own voice. The more you write, and the more regularly you write, the more you will find that there’s a way with words that feels comfortable to you. This voice will be flexible enough to be applied to different types of stories, but it will be distinctly yours. It’s hard to pinpoint exactly what this writer’s voice consists of, without a great deal of intricate analysis of the type done on Creative Writing Masters degree courses. But as an author, when you’ve found your own voice — you will know. It will feel like pulling on your favourite pair of jeans: not clumsy, not too stiff, not tight in all the wrong places and loose in others. Just right. The good news is, you will find this voice. The bad news is, it takes time. Time, and a lot of writing, and a lot of trying out things and getting it wrong. Reading drafts of first novels for my consultancy work, I often see how a writer settles into their voice as the book progresses. The beginning might be awkward or too literary or too casual or too overworked or too much like Helen Fielding. But then as the chapters roll on, something very special begins to emerge from the words, as the writer finds their feet and their voice. Maybe when you reread your draft, you’ll see this happening. 
Then again, I’m a little concerned that you say that some chapters sound perfect, ‘as though a stranger wrote them’. While clumsy writing isn’t good, you should be able to feel that you are part of what you’ve written. It shouldn’t be too perfect; it should just be you. If you’ve found your voice, there should be a little tingle of recognition when you read your own work. (As an example of this, I was on Twitter recently and someone posted a photograph of a page of the book they were reading. I didn’t remember the dialogue, but there was something about the writing that made me think ‘…Hmm. Familiar.’ I tweeted the person — and yes, it was one of my backlist novels they were reading.) You ask if an editor can help you. As publishers won’t generally take on a manuscript where the voice and tone is uneven, I assume you mean an independent editor whom you hire. And yes, I think that a good, sensitive, very experienced editor can help guide you in finding your voice…but what I really think is that voice is something that you have to discover yourself. You have to do the work, and write and write and write. That little voice inside you that says: ‘Maybe I have to rewrite parts of this manuscript’? That’s your writer’s voice talking. Listen to it. My advice is this: Finish the book first. Drink champagne and/or eat chocolate. Then, when you’re ready to revise your manuscript, choose some scenes to rewrite — but do them completely afresh. Don’t look at your original version at all. Write them with what you’ve learned about yourself and your story…and see if what you write feels like a favourite pair of jeans. Love, Julie x
https://medium.com/novel-gazing/dear-julie-voice-4d461ee300dd
['Julie Cohen']
2020-05-22 10:52:12.218000+00:00
['Writing Voice', 'Writing', 'Editing', 'Writing Advice', 'Writing Tips']
Roman Terror
Imagine waking up in a far-flung future where people literally pray to models of the electric chair. The veneration of the crucifix by millions of Christians would be as puzzling to the Romans as electric chair veneration would be to you. Crucifixion is one of the nastiest forms of execution devised, and yet, it’s one of the most ubiquitous symbols of spiritual “comfort” in the modern world. In the Roman world, the crucifix was a sight that would send a chill down your spine. The public punishment was so humiliating and excruciating that Roman citizens were spared from the horror (beheading was the preferred punishment for citizens). Crucifixion was reserved for the people who mattered the least to Romans: slaves and the conquered mobs of the empire. Crucifixion was the Roman Empire’s Death Star. It was a weapon of terror, an example-making deterrent to sedition. Surprisingly little is actually known about crucifixion. Wooden structures rot away over time, and the nails were taken as amulets and keep-sakes. Nobody knows for sure if the “cruciform” shape of the cross was used consistently. Nobody really knows how Jesus was crucified. The chances are that his “cross” was actually T-shaped (“Tau” cross). The written evidence shows that it is more likely that crucifixions occurred on many shaped structures, from X shaped crosses to single beams of wood (“crux simplex”). The consistent aspect of crucifixion was simply that people were nailed or tied to wooden structures (or trees) and left to die in public view. The Jewish historian Josephus tells us that soldiers amused themselves by nailing their victims to the posts in odd positions. Some of what we know about crucifixion comes from a Roman joke relayed to us by the novelist Gaius Petronius (not to be confused with the biblical Petronius), writing in the first century C.E., a few decades after the execution of Jesus of Nazareth. The joke is told by a character in the novel The Satyricon during a voyage at sea, which was probably common as sea voyages were both boring and dangerous. Long-winded jokes are a great way to pass the time and calm the nerves. Here’s a summary of the joke: There was once a lady that was famed for fidelity to her husband. When he died, she was not content to merely display her grief at the funeral. She took up residence at his tomb, mourning day and night without eating. Despite pleading, her family and the authorities could not separate her from her husband’s resting place. Meanwhile, the governor had sentenced some thieves to be crucified near the burial ground where the widow had taken residence. That evening, a soldier tasked with guarding the crucified bodies noticed a light in one of the tombs. Overcome with curiosity he left his post and discovered the beautiful woman mourning her husband. He tried to tempt her with food, but she refused, grieving all the more. Eventually, the soldier persuaded the woman to eat and drink. They got talking. After getting to know the woman, the soldier eventually seduced her. The new lovers slept together three nights, closing the vault so that anybody who came to visit would think the woman died of starvation in her grief for her dead husband. But while the soldier was sleeping with his new lover, the parents of one of the crucified criminals — seeing the coast was clear — took his body away for a proper burial. When the soldier realised the body was gone he was terrified, he’d surely be executed himself for dereliction of duty. 
To save her new lover, the widow offered the body of her own husband to take the place of the crucified criminal. The soldier duly nailed the body up on the cross, knowing it was his only hope. The next day the terrified villagers wondered how on earth the dead man had managed to get up on the cross.
https://stevengambardella.medium.com/roman-terror-505cf28b91ea
['Steven Gambardella']
2020-12-20 17:45:49.093000+00:00
['Literature', 'History', 'Culture', 'Psychology']
How Can We Create a Better Literary Landscape?
In the past decade, we’ve seen the rise of the slow food movement and the buy-local movement. Both of these represent ways that consumers can use their purchasing power to make a stand for the type of world they wish to see. Readers can make an equally big impact by learning more about how books are made and sold, by making deliberate choices about how they interact with the literary ecosystem (from reviewing books online to attending readings and literary events), and by supporting independent book publishers, bookstores, and media whenever possible. On May 11, Kickstarter will host The Next Page, a digital conference exploring future of publishing through four live-streamed panels (open to anyone with an internet connection). Ahead of the conference, we asked a few of our panelists to share their suggestions for actions anyone can take to help create a more vibrant reading and writing landscape. — Margot Atwell, Director of Publishing at Kickstarter
https://medium.com/kickstarter/how-can-we-create-a-better-literary-landscape-fc588ab6aa7d
[]
2019-05-09 14:01:00.852000+00:00
['Kickstarter', 'Publishing', 'Creator Toolkit', 'Media', 'Books']
Scaling a Mature Data Pipeline — Managing Overhead
Background

There is often a natural evolution in the tooling, organization, and technical underpinning of data pipelines. Most data teams and data pipelines are born from a monolithic collection of queries. As the pipeline grows in complexity, it becomes sensible to leverage the Java or Python Spark libraries and implement your map-reduce logic in code, rather than in raw queries. The monolith is broken down, and you trade complexity in orchestration for simplicity in logic. Your one monolithic job becomes a dozen beautiful, tightly scoped steps structured into some sort of dependency graph. However, orchestration complexity has a cost: overhead. Broadly, overhead is everything your pipeline is doing other than computing: performing IO, waiting for resources to be allocated to your job, waiting for your job to get scheduled, and so on. Overhead is insidious: it grows slowly with your pipeline, and generally only becomes problematic when you have dozens of tasks to manage. At that point, there are many variables affecting pipeline performance, and observing and isolating sources of overhead can be extremely difficult, especially when your pipeline legitimately spends a large amount of time computing. The Payments team operates a number of time-sensitive Spark pipelines, and central to our team’s goals is the timely delivery of data from these pipelines. However, as Airbnb has matured and grown, our pipeline has had to continually rise to meet the challenges presented by the scale and scope of our operations. There was a time when running our pipeline once a day was sufficient to both meet our delivery SLA and manage the risk of data integrity issues occurring over the following 24 hours. Increasingly, however, processing data daily was not meeting our needs. So, we began to investigate the technical feasibility of running hourly data pipelines. And so we became aware of our overhead problem: due to a number of choices we had made regarding how we structured our business and orchestration logic, we discovered that we were totally blocked from implementing hourly pipelines until we got our overhead under control.

Technical Stack

Before delving into our specifics, I want to take a moment to discuss the technical stack backing our pipeline. Our platform uses a mixture of Spark and Hive jobs. Our core pipeline is primarily implemented in Scala; however, we leverage Spark SQL in certain contexts. We leverage YARN for job scheduling and resource management, and execute our jobs on Amazon EMR. We use Airflow as our task orchestration system, and it takes care of the orchestration logic. For a data pipeline, we define the orchestration logic as the logic that facilitates the execution of your tasks. It includes the logic that you use to define your dependency graph, your configuration system, your Spark job runner, and so on. In other words, anything required to run your pipeline that is not a map-reduce job or other business logic tends to be orchestration logic. In total, our pipelines are made up of a little over a thousand tasks.

A Case Study: The Integration Test Pipeline

Our journey of discovery began in an unlikely place: one of the integration test pipelines. This is a special set of pipelines managed by the Payments team, which take events emitted from unit tests and run them through our entire pipeline to detect issues in the Payments team’s code before it even gets merged to the master branch.
In terms of structure, scope, and configuration, the integration test pipeline is identical to our production pipeline. The only difference is the integration tests handle data volume on the order of a few hundred records per run, which is an incredibly small amount of data: even an hourly pipeline in production will process several orders of magnitude more data than this. That said, the performance of an hourly pipeline is likely closer to the integration test pipeline than the current daily pipeline. The core segment of our pipeline, running at full scale, is expected to take 6 hours to complete each day. Assuming the same resource allocation, an hourly pipeline, in theory, should take about 1/24th of 6 hours, or 15 minutes. The integration test pipeline, in contrast, should take close to no time at all, due to the tiny data load. However, the execution time is roughly 2 hours. Adjusting the Spark configuration to account for the smaller data load has nearly no effect. When we investigated further, we found that the time spent doing ETL operations, or any sort of Spark compute was close to 0. Clearly, the pipeline was spending its time doing something else. This was undesirable for our team. Because accounting logic is inherently stateful, for a new pipeline run to start, it requires the previous run to complete. Obviously, we were blocked from even starting our experiments with an hourly pipeline, unless we could get our execution time under an hour. The Overhead: A Silent Killer We expected that our pipeline had some overhead, but we did not know how much. When we test our jobs, we generally don’t use the same orchestration tools as we use in our full production pipeline — we use unit tests, or run jobs individually. In addition, we run tests with different resource allocations, and on a different map-reduce cluster than we do for our production pipelines. All of this served to obfuscate the impact the overhead was causing in our pipelines. So we took a step back, and analyzed our pipeline holistically. While exact sources of the overhead will vary wildly based on the pipeline structure and stack, we identified some common sources: Scheduler delay: Be it a crontab, Airflow, a task queue, or something else, every complex data pipeline has some system that manages dependencies and job scheduling. Generally, there is some delay between a job completing and the next job starting. Pre-execution delay: Great, the job has been scheduled — now what? In many cases, before the job is started, there is some pre-work to perform. For example, we have to push our JAR onto the machine that is executing our task, and perform some sanity checks to guarantee that data from previous tasks has landed. Lastly, there is the initialization of the application code, loading any configuration or system library dependencies, and so on. Spark Session instantiation and resource allocation: Spark Sessions take time to setup, so the more often you start and stop your session, the longer everything will take. In addition, your job has to acquire all necessary resources from your cluster. Spark features such as dynamic allocation can make this faster, but there will always be some ramp-up time. This can also impact your pipeline at the cluster level. When using an autoscaling cluster like Amazon EMR, while you may theoretically have access to a large pool of resources, it may take significant time to assign them to your particular cluster via a scaling operation. 
Saving and loading data: In a complex pipeline, you often have intermediate results you need to persist for other pipelines or jobs to consume. It’s often a good idea to break your pipeline into discrete steps, to both manage logical complexity, and limit the impact of job failures on your pipeline’s run time. However, if you make your steps too granular, you end up spending a lot of unnecessary time serializing and deserializing records to HDFS. However, all of these impacts were small, on the order of minutes — they did not explain a 2-hour delay. Sizing Up Your Pipeline What we realized was that the impact of the overhead depends on the size and shape of the pipeline. The relationships between dependent tasks in data pipelines tend to be structured as a Directed Acyclic Graph, or DAG. A helpful exercise here is to physically draw the structure of your pipeline, if you don’t have a tool with a UI (like Airflow) that does it for you. Make each distinct task a node in your DAG, and you will come up with an image like the following one. The shape and size of a DAG can be measured in two factors: the depth and the width. The depth is the measure of how many links there are between any given task, and its closest edge node. The width is the number of nodes with some given depth. The overhead tends to scale with the depth of your graph, particularly in “linear” segments of the graph, where tasks execute in series. However, wide graphs are not immune to the overhead — you must be cautious about the time spent when saving and loading records from HDFS. IO has fixed overhead, so while the landing time of your tasks may not suffer, the total compute time will, and this can lead to increased compute costs. (Managing cost is outside the scope of this post, so I’ll leave things there, but try to avoid trading a time problem for a cost problem.) So, with this framework, we realized the structure of our pipeline was the root cause of our overhead problem. So what did we decide to do? Simple — scale it down! Phenomenal Data Processing, Itty Bitty Pipeline While ideally, you can simply remove or reduce sources of the overhead directly, often, this is infeasible or would cost an unreasonable amount of development time. So, the only remaining solution is to shrink the size of your pipeline. This does not mean to cut the scope of your pipeline, or, really, change it at all. It really comes down to the difference between business logic, and orchestration logic. Often, data pipelines reflect the structure of the underlying application. It is really easy to end up in a situation, where the common way to submit jobs to a cluster is via the spark-submit script provided by Spark. The script asks you to provide a class, so you tend to make a class that maps to a single task in your pipeline. Classes are generally designed by traditional software engineering principles, and thus, perform a single tightly scoped transformation operation. This is problematic for a bunch of reasons — often orchestration logic lives separately from the business logic, so you get into a situation where making a change to your pipeline requires you to synchronize the deploy of changes to multiple systems, which is never a good place to be. And of course, doing this can cause massive overhead if you don’t consider the orchestration implications of your logical abstractions. 
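As an aside on the sizing exercise described above: the original post illustrated the DAG with a figure, so as a purely illustrative sketch of my own, a small helper like the one below can compute the depth and width of a task graph written as an adjacency mapping. The example graph and task names are hypothetical.

# Minimal sketch: measure the depth and width of a pipeline DAG.
# The DAG is a dict mapping each task to the tasks that depend on it.
from collections import Counter

def dag_depths(dag):
    """Return {task: depth}, where root tasks (no upstream tasks) have depth 0."""
    downstream = {t for deps in dag.values() for t in deps}
    roots = [t for t in dag if t not in downstream]
    depths = {}
    def visit(task, depth):
        # Keep the longest path to each task, since that is what gates landing time.
        if depths.get(task, -1) < depth:
            depths[task] = depth
            for child in dag.get(task, []):
                visit(child, depth + 1)
    for root in roots:
        visit(root, 0)
    return depths

def dag_shape(dag):
    depths = dag_depths(dag)
    width = Counter(depths.values())        # number of tasks at each level
    return max(depths.values()) + 1, width  # number of levels, width per level

# Hypothetical example: a linear segment (A -> B -> C) plus a wide fan-out from A.
example = {"A": ["B", "D", "E"], "B": ["C"], "C": [], "D": [], "E": []}
print(dag_shape(example))  # 3 levels; one task at level 0, three at level 1, one at level 2

Deep linear segments show up as a large first number; wide fan-outs show up as large counts at a single level, which is where the IO overhead discussed above tends to accumulate.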
We came to the understanding that our business logic should be designed in a maintainable and scalable way, and our orchestration logic should be designed to maximize performance, visibility, and reliability of our pipeline. While this is a fairly obvious statement, what was less clear is that when business and orchestration constructs, i.e., the actual classes that define your business and orchestration logic, are coupled together, often the aims of the systems are in opposition to each other. As we have discussed, tightly scoped logical abstractions, when mapped directly to a DAG, lead to a pipeline that is wide or deep enough to cause significant overhead. What we decided to do was build a layer in our application between our business logic and orchestration logic that would allow us to meet the needs of each system. Consider the following data pipeline: Traditionally, your application code would look something like: In this setup, each class maps to a single logical transformative step. However, we have architected a Single Entry Point that defines a contract between the orchestration system and our business logic that has no inherent correlation to our application logic. The basic usage of it looks something like: This example, functionally, is pretty similar to the initial setup — we are still mapping tasks in our pipeline to classes in our code on a 1:1 basis. However, this gives us a few benefits. For example, we can restructure our underlying classes however we want without making any changes to the orchestration system. We could do something like: The critical goal to achieve here is to completely decouple the orchestration of your pipeline from the structure of your business logic, so that your orchestration-related components can be architected to minimize overhead, and your business logic can be structured to minimize its complexity. Put another way, you want to architect your application so the structure of your DAG is totally independent from the structure of your business logic. Leveraging the system above, we could easily restructure our pipeline into: In doing so, we have significantly reduced the overhead, without any changes to the structure of our business logic, or the delivery order of our tasks. An Aside on Fault Tolerance A final consideration here is fault tolerance. Broadly, the longer a job runs in a distributed computing environment, the more likely it will fail. Generally, data jobs are easy to retry in an idempotent manner, so the cost of a job that fails is generally just the time spent up to the failure, plus the time spent to retry. This is something to consider when trying to fight the overhead — theoretically, combining all your jobs into one huge task would get rid of all the overhead, but massively increase the risk of failure, which could eat up all the time savings you get by removing sources of overhead. Overall, this is a balance. In our case, for large segments of our pipeline, the cost and risk of retries was much, much lower than the overhead incurred by splitting the tasks up, but this won’t always be true. A good rule of thumb is if your overhead is equal to or greater than 10% of your job’s execution time, merging them is likely a safe bet. Conclusion So what did we learn? The natural evolution of data pipelines, from monolithic collections of scripts to Spark applications, naturally pushes you to encode your application structure in your pipeline. The overhead is everything your pipeline is doing other than computation. 
It’s caused by orchestration complexity, and scales with the depth of your pipeline. Encoding your application structure in your pipeline means you intrinsically couple your application logic to your orchestration logic. This means you are often inviting overhead by making your map-reduce tasks too granular. By decoupling your orchestration logic from your application logic you gain tools to fight the overhead, without compromising the quality of your application. When attempting to reduce the run time of a data pipeline, be careful not to miss the forest for the trees. Analyze your whole pipeline’s execution time, not just the obvious factors, like map-reduce computation time. You can’t neglect fault tolerance considerations. Make sure you don’t lose all the time you saved lowering overhead by spending it retrying tasks that fail frequently. While our execution of our overhead reduction measures are still ongoing, early tests show us that we will be able to decrease our overhead from 2 hours to 15–30 minutes. Not only will this improve the delivery time of our pipeline, this will allow us to pursue an hourly pipeline in the future.
https://medium.com/airbnb-engineering/scaling-a-mature-data-pipeline-managing-overhead-f34835cbc866
['Zachary Ennenga']
2019-09-24 17:27:00.979000+00:00
['Airflow', 'Infrastructure', 'Spark', 'Data Engineering', 'Data']
Books That Foster Personal Growth: Range
Do we need to specialize to succeed? Range: Why Generalists Triumph in a Specialized World by David Epstein throws cold water on the specialization movement we are witnessing in the sports world. This does not mean deliberate practice and honing specific skills are unnecessary or a waste of time; they are necessary to excel in a given arena. But there is great value in possessing a range of skills. When starting a new endeavor, whether it be organized sports or a new career, we benefit from a “sampling period.” Range uses research to show the benefits of developing a range of skills and experiences to excel in your career endeavors. “Highly credentialed experts can become so narrow-minded that they actually get worse with experience, even while becoming more confident — a dangerous combination.” Find more recommendations at zacharywalston.com *Book link is an Amazon Affiliate Link
https://medium.com/curious/books-that-foster-personal-growth-range-cd660d677453
['Zachary Walston']
2020-12-22 17:21:16.455000+00:00
['Growth', 'Personal Development', 'Personal Grow', 'Reading', 'Books']
A Non-Volatile INDIRECT Alternative in Excel using the Pub/Sub Pattern
A Non-Volatile INDIRECT Alternative in Excel using the Pub/Sub Pattern Dramatically improve spreadsheet performance and decouple your workbooks. The INDIRECT function in Excel is a tricky beast. On the one hand it can be incredibly useful, but on the other hand, it is responsible for crippling the performance of many spreadsheets. In this article, we’ll look at what the INDIRECT function is, why it is so bad for performance, and an interesting alternative that is superior in almost every way.

A brief explanation of the INDIRECT function

Using INDIRECT to get a value from another workbook

The INDIRECT function takes a cell address and returns the value contained within the cell. When designing a spreadsheet or set of spreadsheets it pays off to plan ahead and keep them well organized. Conceptually, using INDIRECT combined with Named Ranges can seem like a great way to do that. You can keep one area of functionality in one workbook and share key results with other dependent workbooks by looking up those values with INDIRECT. Using named ranges avoids hard-coding explicit address references and allows us to refactor or restructure the referenced workbook later.

Why using INDIRECT is terrible for performance

The INDIRECT function is a volatile function. This means that every time anything in your workbook changes, or any time you press F9 to calculate, the INDIRECT function will be called. On its own this may not be such a big deal, but because the INDIRECT function is called repeatedly, any calculations that take its result as an input will also be recalculated repeatedly. If the result of the INDIRECT call is an input to some complex calculation or slow function then your spreadsheet will crawl. Every time you change anything the entire calculation will be re-done, even when the change you’ve made has nothing to do with that part of the spreadsheet. Excel maintains a dependency graph which enables it to know what cells need recalculating after any changes have been made. This allows it to do the minimal number of computations when recalculating a worksheet after a change has been made, and it is a very efficient way of minimizing the work that needs to be done so that spreadsheets can update quickly. Using INDIRECT ruins this, as anything that is dependent (directly or indirectly) will end up being recalculated every time Excel recalculates. The developers of Excel have not done this by accident. The INDIRECT function retrieves the value of the address specified, but it is not dependent on the cell pointed to by that address. You can see this if you use Trace Precedents from the Formula tab in Excel. This means that it is not sensitive to the referenced cell changing. It doesn’t know whether the referenced cell has changed or not, so it has to be recalculated every time, and this is why it is a volatile function.

Introducing an alternative to INDIRECT

Using INDIRECT as above is a common solution to the problem of referencing values in one spreadsheet from another. It decouples the two spreadsheets so that calculations from one (we’ll call it the producer) can be used by the other (the consumer). There doesn’t have to be just one consumer; there can be multiple consumers for a single producer. This problem of needing to decouple producers from consumers is not unique to spreadsheets. In fact, in software engineering it is very well known and there are patterns for doing exactly that.
The pub/sub or publisher/subscriber pattern is one such pattern that is commonly used to decouple producers from consumers. In this pattern, messages are published and subscribers are notified of those messages. The delivery of messages between publishers and subscribers is handled by a message broker. So that a single message broker can be used for different types of messages it is usual to split messages into topics. A topic is just a string that is known to both the publisher and the subscriber. Messages are published on a specific topic and subscribers subscribe to a topic. The subscribers will only receive messages published to the topic they are subscribed to. Using a message broker to publish and subscribe to messages In our spreadsheet rather than using INDIRECT to pull values from another workbook we can use this pub/sub pattern. The producer workbook will publish values to the message broker whenever a change is made, and the consumer workbook will subscribe to those messages and update only when a message is received. We will implement this in the next section. Implementing the Pub/Sub pattern in Python We will use Python to implement the pub/sub pattern. Later we will call use this from Excel using PyXLL, the Python Excel Add-In. PyXLL is a commercial product that enables us to use Python code in Excel instead of VBA. Crucially for this article it can also be used to write RTD, or Real Time Data, functions. We will use an RTD function in the consumer workbook to update the value whenever a message is published from the producer workbook. PyXLL can be downloaded from https://www.pyxll.com and there is a free 30 day trial. The same technique presented here could be achieved in another language so long as it is possible to write Excel worksheet functions and RTD functions in that language (for example, using Excel-DNA and C# or Jinx and Java, Scala, Kotlin or Clojure). Often when using the Pub/Sub pattern some messaging middleware like Kafka, RabbitMQ or ApacheMQ is used. This is really useful in situations where we are messaging between applications or even between servers. In our case everything will be running inside Excel in a single application so using a messaging service like these is overkill. All we need is a way to pass messages from producers to consumers that are all in the same process. We’ll start with a MessageBroker class with three methods: publish, subscribe and unsubscribe. Our producer will publish messages using the publish method, and our consumers will subscribe using the subscribe method. When they are no longer interested they can use the unsubscribe method. The messages themselves will simply be Python objects, and the consumers will be Python functions accepting these Python object messages as a single argument. Our MessageBroker will maintain a dictionary of topics to subscribers. There we have it! Using this we can subscribe to a topic and receive a call-back whenever a message is published to that topic. Hopefully this shows that the pub/sub pattern doesn’t need to be complicated in order to be useful :) There are a few more things we can do to improve on this. In our case of passing values between Excel sheets it would be useful if when subscribing we got the last published value. That way if the consumer subscribes after the producer has already published something it will get the latest value, rather than have to wait until the next one. 
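The article’s own code listing is not reproduced in this text, so here is a minimal sketch along the lines described above, including the last-value improvement; names are illustrative, and the full version lives in the PyXLL examples repository mentioned just below.

# Minimal sketch (not the article's exact code) of the MessageBroker described above.
class MessageBroker:
    def __init__(self):
        self.__subscribers = {}   # topic -> set of callback functions
        self.__last_values = {}   # topic -> last published message

    def publish(self, topic, message):
        self.__last_values[topic] = message
        for callback in self.__subscribers.get(topic, set()):
            callback(message)

    def subscribe(self, topic, callback):
        self.__subscribers.setdefault(topic, set()).add(callback)
        # Improvement discussed above: replay the last value to late subscribers.
        if topic in self.__last_values:
            callback(self.__last_values[topic])

    def unsubscribe(self, topic, callback):
        self.__subscribers.get(topic, set()).discard(callback)

# Quick check of the behaviour:
broker = MessageBroker()
broker.publish("revenue_total", 123.45)
broker.subscribe("revenue_total", print)   # prints 123.45 immediately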
Additionally Excel functions can (optionally) be called from multiple threads and so if that is something we would want to do then we need to be careful about multiple threads accessing the MessageBroker at the same time. The complete code with these additional improvements can be found in the “pubsub” folder of the PyXLL Examples repo on github. Putting it all together in Excel As a reminder, the reason we went down this pub/sub path was to find an alternative to INDIRECT in Excel and now we’ll get back to that! We need two new Excel functions, “publish” and “subscribe”. The publish function will be called from our producer workbook with a topic name and the value we want to publish. The subscribe function will be called from the consumer workbook where we want to receive the value. The subscribe method will be an RTD, or Real Time Data, function. That’s a special type of function that can update its value even after it’s been called. If you’ve not already downloaded PyXLL then you’ll need to now, as that’s what we’re going to use to call our MessageBroker class from the previous section from Excel. You can download a 30 day trial of PyXLL from https://www.pyxll.com. We’ll use the MessageBroker class from above and create a single global instance of it. We’ll also add some convenience functions so we can call publish, subscribe and unsubscribe on our global MessageBroker instance. Next, using PyXLL we can write the “publish” Excel function so it can be called from an Excel workbook. If you’ve not used PyXLL before you might be surprised at how easy this is! We write a normal Python function and simply add the @xl_func decorator to it. This is what tells PyXLL to expose our Python function as an Excel function. To keep things clean I’ve put the MessageBroker class and the publish, subscribe and unsubscribe functions into a single module, pubsub.py. The function above is in a new module “pubsub_example.py” and imports the pubsub module as well as the @xl_func decorator. You can find the complete code in the “pubsub” folder of the PyXLL Examples repo on github. To call this function from Excel you will need to install the PyXLL add-in if you’ve not done so already, and add your new pubsub_example.py module to the PyXLL config file, pyxll.cfg Publishing a value from an Excel function Now we’re ready to add the “subscribe” function. To write an RTD function using PyXLL we create a class derived from PyXLL’s RTD class. You can read more about this in the user guide. The RTD class has two methods, connect and disconnect. These are called when Excel is ready to start receiving updates and when it no longer needs them, respectively. We will override these in our class to subscribe to and unsubscribe from the message broker. When a new message is received we set the “value” property on the RTD object which updates the value in Excel. To create the “subscribe” function in Excel we use the @xl_func decorator as before, except this time we return a SubscriberRTD object. We also need to provide a bit more information to PyXLL when calling the @xl_func decorator so it knows to treat the returned value as an RTD object. And that’s all there is to writing an RTD function in Python with PyXLL! We can now call this new subscribe function from another workbook with the same topic, and each time the producer sheet publishes a value it will be updated in the consumer sheet. We can have multiple consumers subscribing to the same topic, and we can have multiple producers publishing on different topics. 
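Again, the original listings are not reproduced here, so the sketch below shows roughly what the two Excel functions could look like. The @xl_func decorator and the RTD base class (with its connect/disconnect methods and value property) are real PyXLL features described above, but the exact signature strings, the pubsub module name, and the class layout are my assumptions; check the PyXLL documentation and the pubsub example on GitHub for the working code.

# Rough sketch of pubsub_example.py, assuming a pubsub module like the broker above.
from pyxll import xl_func, RTD
import pubsub

@xl_func("string topic, var value: var")
def publish(topic, value):
    # Called from the producer workbook, e.g. =publish("revenue_total", B2)
    pubsub.publish(topic, value)
    return value

class SubscriberRTD(RTD):
    def __init__(self, topic):
        super().__init__(value="Waiting...")
        self.__topic = topic

    def connect(self):
        # Excel is ready to receive updates: start listening for published messages.
        pubsub.subscribe(self.__topic, self.__on_message)

    def disconnect(self):
        pubsub.unsubscribe(self.__topic, self.__on_message)

    def __on_message(self, message):
        # Setting .value pushes the new value into the calling cell.
        self.value = message

@xl_func("string topic: rtd<var>")
def subscribe(topic):
    # Called from the consumer workbook, e.g. =subscribe("revenue_total")
    return SubscriberRTD(topic)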
Whenever a published value updates, the “publish” Excel function will be called with the topic and that new value. That will cause all of the results of the “subscribe” function subscribed to the same topic to update automatically. As the RTD “subscribe” function is non-volatile, any dependencies will only be recalculated when the value actually changes. Using PyXLL we’re not just limited to passing numbers or strings between sheets. We can return complete Python objects from Excel functions and publish those in exactly the same way.

Recap: What have we done?

We started off looking for an alternative to Excel’s INDIRECT function without the performance implications of using a volatile function. The reason for using the INDIRECT function was to decouple results produced in one spreadsheet that were used as inputs in another. Named ranges were identified as a way to avoid hard-coding specific cell references. Using the pub/sub pattern we can now publish results from any workbook and subscribe to those results in another. Using topic strings we can publish and subscribe to multiple named values at the same time. Using an RTD function for subscribing to a topic allows us to update values in Excel as new values are published, without having to resort to making our function volatile. We have achieved our aim of decoupling multiple spreadsheets, and by using named topics we protect ourselves from referencing cells in another workbook directly. By not using a volatile function we have ensured that our workbooks only need to calculate the minimum required when values change. References
https://towardsdatascience.com/a-non-volatile-indirect-alternative-in-excel-using-the-pub-sub-pattern-15cea21272a3
['Tony Roberts']
2020-07-16 08:08:24.106000+00:00
['Computer Science', 'Python', 'Excel']
Run State of the Art NLP Workloads at Scale with RAPIDS, HuggingFace, and Dask
TLDR: Learn how to use RAPIDS, HuggingFace, and Dask for high-performance NLP. See how to build end-to-end NLP pipelines in a fast and scalable way on GPUs. This covers feature engineering, deep learning inference, and post-inference processing. Introduction Modern natural language processing (NLP) mixes modeling, feature engineering, and general text processing. Deep learning NLP models can provide fantastic performance for tasks like named-entity recognition (NER), sentiment classification, and text summarization. However, end-to-end workflow pipelines with these models often struggle with performance at scale, especially when the pipelines involve extensive pre- and post-inference processing. In our previous blog post, we covered how RAPIDS accelerates string processing and feature engineering. This post explains how to leverage RAPIDS for feature engineering and string processing, HuggingFace for deep learning inference, and Dask for scaling out for end-to-end acceleration on GPUs. An NLP pipeline often involves the following steps: Pre-processing Tokenization Inference Post Inference Processing NLP workflow using Rapids and HuggingFace Pre-Processing: Pre-Processing for NLP pipelines involves general data ingestion, filtration, and general reformatting. With the RAPIDS ecosystem, each piece of the workflow is accelerated on GPUs. Check out our recent blog where we showcased these capabilities in more detail. Once we have pre-processed our data, we need to tokenize it so that the appropriate machine learning model can ingest it. Subword Tokenization: Tokenization is the process of breaking down the text into standard units that a machine can understand. It is a fundamental step across NLP methods from traditional like CountVectorizer to advanced deep learning methods like Transformers . One approach to tokenization is breaking a sentence into words. For example, the sentence, “I love apples” can be broken down into, “I,” “love,” “apples”. But this delimiter based tokenization runs into problems like: Needing a large vocabulary as you will need to store all words in the dictionary. Uncertainty of combined words like “check-in,” i.e., what exactly constitutes a word, is often ambiguous. Some languages don’t segment by spaces. To solve these problems, we use a subword tokenization. Subword tokenization is a recent strategy from machine translation that breaks into subword units, strings of characters like “ing”, “any”, “place”. For example, the word “anyplace” can be broken down into “any” and “place” so you don’t need an entry for each word in your vocabulary. When BERT(Bidirectional Encoder Representations from Transformers) was released in 2018, it included a new subword algorithm called WordPiece. This tokenization is used to create input for NLP DL models like BERT, Electra, DistilBert, and more. GPU Subword Tokenization We first introduced the GPU BERT subword tokenizer in a previous blog as part of CLX for cybersecurity applications. Since then, we migrated the implementation into RAPIDS cuDF and exposed it as a string function, subword tokenization , making it easier to use in typical DataFrame workflows. This tokenizer takes a series of strings and returns tokenized cupy arrays: Example of using cudf.str.subword_tokenize Advantages of cuDF’s GPU subword Tokenizer: The advantages of using cudf.str.subword_tokenize include: The tokenizer itself is up to 483x faster than HuggingFace’s Fast RUST tokenizer BertTokeizerFast.batch_encode_plus . 
Tokens are extracted and kept in GPU memory and then used in subsequent tensors, all without leaving the GPU and without expensive CPU copies. Once our inputs are tokenized using the subword tokenizer, they can be fed into NLP DL models like BERT for inference.

HuggingFace Overview: HuggingFace provides access to several pre-trained transformer model architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with 32+ pre-trained models in 100+ languages. In our workflow, we used BERT and DistilBERT from HuggingFace to do named entity recognition. Example of NER in action from https://huggingface.co/dslim/bert-base-NER

Combining RAPIDS, HuggingFace, and Dask: This section covers how we put RAPIDS, HuggingFace, and Dask together to achieve 5x better performance than the leading Apache Spark and OpenNLP implementation of the TPCx-BB query 27 equivalent pipeline at the 10TB scale factor with 136 V100 GPUs, while using a near-state-of-the-art NER model. We expect to see even better results with A100, as A100’s BERT inference speed is up to 6x faster than V100’s. In this workflow, we are given 26 million synthetic reviews, and the task is to find the competitor company names in the product reviews for a given product. We then return the review id, product id, competitor company name, and the related sentence from the online review. To get a competitor’s name, we need to do NER on the reviews and find all the tokens in the review labeled as an organization. Our previous implementation relied on spaCy for NER, but spaCy currently needs its inputs on the CPU and was thus slow, as it required a copy to CPU memory and back to GPU memory. With the new cudf.str.subword_tokenize, we can go from a cudf string series to subword tensors without leaving the GPU, unlocking many new SOTA language models. In this task, we experimented with two of HuggingFace’s models for NER fine-tuned on CoNLL 2003 (English): Bert-base-model: This model gets an f1 of 91.95 and achieves a speedup of 1.7x over spaCy. Distil-bert-cased model: This model gets an f1 of 89.6 (97% of the accuracy of BERT) and achieves a speedup of 2.5x over spaCy. Research by Zhu, Mengdi et al. (2019) showcased that BERT-based model architectures achieve near-state-of-the-art performance, significantly improving on the performance of existing public NER toolkits like spaCy, NLTK, and StanfordNER. For example, the bert-base model on average across datasets achieves a 13.63% better F1 than spaCy, so not only did we get faster, we also reached near-state-of-the-art performance. Check out the workflow code here.

Conclusion: This workflow is just one example of leveraging GPUs for end-to-end accelerated natural language processing. With cudf.str.subword_tokenize, most NLP tasks such as question answering, text classification, summarization, translation, and token classification are now within reach of end-to-end GPU acceleration leveraging RAPIDS and HuggingFace. Stay tuned for more examples and, in the meantime, try out RAPIDS in your NLP work on Google Colab or blazingsql notebooks, see our documentation docs page, and if you see something missing, we welcome feature requests on GitHub!
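For reference, since the tokenizer example image does not survive in this text, here is a rough sketch of the tokenization step discussed above. The argument names and the vocabulary hash file are assumptions based on the cuDF API of that era (the string method has since been superseded by a SubwordTokenizer class), so treat this as an illustration rather than the definitive signature.

# Rough sketch: GPU subword tokenization with cuDF (argument names are assumptions).
import cudf

seq_len = 64
reviews = cudf.Series(["I love apples", "The competitor shipped late"])

# 'bert_hash_table.txt' is a hypothetical precomputed vocabulary hash file
# built from the BERT vocabulary.
tokens, attention_masks, metadata = reviews.str.subword_tokenize(
    "bert_hash_table.txt",
    max_length=seq_len,
    stride=seq_len,
    do_lower=True,
    do_truncate=True,
)

# The outputs are flat cupy arrays living in GPU memory; reshape them to
# (num_rows, seq_len) and hand them to the PyTorch model (for example via
# DLPack) without ever copying back to the CPU.
input_ids = tokens.reshape(-1, seq_len)
masks = attention_masks.reshape(-1, seq_len)
print(input_ids.shape, masks.shape)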
https://medium.com/rapids-ai/state-of-the-art-nlp-at-scale-with-rapids-huggingface-and-dask-a885c19ce87b
['Vibhu Jawa']
2020-09-10 17:24:39.611000+00:00
['NLP', 'Python', 'Data Science', 'Text Processing', 'Machine Learning']
The Haunted Amusement Park
You felt an icy chill lick your skin as the sense of familiarity crawled from the depths of your unconscious. You hadn’t wanted to come back as the guilt from way back then manifested itself in the form of dreams and spontaneous thoughts in your mind. “Please… don’t leave.” The whisper from that dreadful night was lost amongst the howling of the wind, but it was as though she spoke through a loudspeaker as those words were forever ingrained in the depths of your soul. “We shouldn’t be here,” you said to the friend a few feet ahead. Your friend turned and met your nervous countenance with one of sadness and guilt and said reassuringly, “Nothing is going to happen this time. We’ll be extra careful.” “No.” The word left your lips before you had a chance to stop it. You hadn’t meant to sound as harsh as you did, but you couldn’t hide the growing fear spreading like wildfire inside. You were simply concerned for the safety of your friend and yourself. “We’ll be in and out. I promise.” You watched your friend smile at you and disappear behind the iron gates of the eerily quiet amusement park. Rumours of this strange and abandoned carnival had been the talk of the town as sightings of deceased children and adults had been reported to be still wandering the premises and enjoying its festivities. That alone had ignited a spark of curiosity in you and your friends at the time, and what started as a bit of teenage fun ended in horrific tragedy. You slipped through the gates and called your friend’s name, but there was no response. When you felt the familiar chill, you folded your arms and ran deeper into the carnival. The murmurs and whispers you heard were all reminiscent of that night, but you trod on as the farther you went, the more you began to believe that facing your guilt and fears would provide some closure. You saw your friend standing in front of a dark rollercoaster, its seats extremely tattered and safety bars in their upward positions. Your eyes closed for a brief moment, but it only took that moment for it to all come rushing back. “Rock…” “Paper…” “SCISSORS!” There was laughter, squeals, and low hoots as the winners nudged the losers closer and closer to the eerie roller coaster. “You promised! Losers have to do it,” one said. “Are you chickening out?” another one chimed in. “I’m not,” a girl said annoyingly. Her male counterpart had already begun approaching the ride. He hopped over the few obstacles in his way and stared for a few seconds at the ripped seats. “Five seconds,” he said, then turned to his friends, “Right?” “I don’t want to do it,” she said sternly. “You’ll be fine,” you said. “It’s only for five seconds.” Your friend sighed softly, then followed the boy. The two friends looked at each other and took a seat in separate compartments of the ride. “Pull down the lap bar,” your friend encouraged. “I don’t think that’s a good idea,” you said. “The worst thing that’ll happen is that it’ll break,” another countered. You heard the loud creak of the safety bars coming down, which was followed by a similar sound. Your male friend sat comfortably in his seat, smiling widely at your friends. However, the other looked extremely uncomfortable as her eyes kept wandering and hands rubbing together constantly. “Okay, five seconds is up,” you shouted. “You got the pictures, right?” you heard a friend asked another. “Yeah, I did. I’m posting it to Instagram right now!” You were the first to sense that something was wrong. 
The boy had his lap bar up and was about to stand until an ear-splitting groan pierced the still, cold air. Your friends had stopped talking amongst themselves, their amused countenance instantly replaced with one of sheer terror as the wheels spun and spun and spun. You didn’t know who screamed. Perhaps it was you, but that was the least of your concerns. The ride was eagerly following its designated course on its tracks and increasing in speed at an exponential rate. You felt your blood run cold as you saw the shadow of a falling figure once the rollercoaster followed the giant, impending circle. The ride showed no signs of stopping. A loud, sickening crack rang in your eardrums. You weren’t sure if your friends had heard it, but whatever happened was confirmed once the ride came to a full stop in front of the terrified crowd. It was as if the ride hadn’t moved at all. Her body was beyond repair. There were multiple areas with skin completely ripped from her being, exposing the raw meat that your friend now was. Parts of her bone were exposed with dry and fresh blood trailing its surface. Her eyes had been completely rolled inwards, and her mouth hung open as if she was a vegetable. It took a moment for you to realize that your male friend was nowhere to be seen. “W-w-we have to get out of here.” “Oh my God,” you heard another whisper loudly. You felt the robust and adamant pull on your arm, but your feet stayed rooted to the spot. The image of your deceased friend was forever burned in your memory as tears came like waterworks from your red eyes. “Let’s go… WE NEED TO GO.” Another was screaming for an ambulance. Your feet had just begun to move alongside your friends but had stopped when you heard a calming whisper. “Please… don’t leave.” It had come from behind you, but there was no one behind. No one alive at least. A familiar tune made your eyes snap open. Laughter blended conversation and screams polluted your surroundings. You spun around in confusion as adults with children and snacks were walking in all directions. None were paying any attention to you. Your gaze landed on the active rollercoaster, it’s macabre decor exposed by the glowing lights from all directions. “Please… don’t leave.” You recognized the male voice. “Stay…” a familiar female voice chimed in. “Stay… and play my game.”
https://medium.com/lit-up/the-haunted-amusement-park-b2084f61ddb3
[]
2019-06-09 19:43:49.123000+00:00
['Short Story', 'Fiction', 'Horror', 'Writing', 'Thriller']
Analysis of the crypto surge pt.1
More importantly, how did our A.I. system react to these surges? As BTC’s price was going up and reached $7600, our AI system kept telling us that the price was about to drop again (back to $7400 and lower): But with each new hour it kept improving its output, as for many hours the price kept going up and stayed above $7500. Right now it’s well over $8000 — so the system has adapted itself and is making new predictions: Hourly prediction (from an hour ago): These are 10-min interval predictions (so that’s very short-term): Hourly predictions: As you can see, in the short term the predictions indicate a drop; that is, in the coming few hours the price may drop to ±$7800. But over the course of the day, that is, the coming 12–24 hours, the price may reach a new high of $9000. I’m eager to see how our predictions evolve over the next 12–24 hours and how accurate their forecasts turn out to be. As mentioned in the introduction, this crypto surge is not limited to BTC; almost every market is affected by this pump: Stay tuned for part 2! :) - Ilya Nevolin
https://medium.com/coinmonks/analysis-of-the-crypto-surge-pt-1-4dec51b642cf
[]
2018-04-13 12:04:33.583000+00:00
['Bitcoin', 'Investing', 'Trading', 'Artificial Intelligence', 'Cryptocurrency']
PEGASUS Simple Abstractive Summarization using 🤗 Hugging Face & Python
PEGASUS Simple Abstractive Summarization using 🤗 Hugging Face & Python Demo for State-of-the-art Abstractive Summarization with Google’s PEGASUS Photo by Mati Mango from Pexels Google AI recently released PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization which achieved state-of-the-art results on 12 diverse summarization datasets. Since the released of BERT, pretrained Transformers with self-supervised objectives on large text corpora is the go-to way to solve nearly every NLP task. Also, it has been found that the closer the pre-training self-supervised objective is to the final down-stream task, the better the fine-tuning performance. Therefore in PEGASUS important sentences are masked from the input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. To read more you can check the Google AI blog.😁( It’s an amazing blog). Below is a python demo using Huggigface Library. Install Library !pip install transformers==3.5.0 !pip install torch==1.7.0 Text input: About 749 words input text from Wikipedia Titanic Page. I have taken the first four paragraphs as input. Text Output The sinking of the passenger liner Titanic 100 years ago on 15 April 1912 remains one of the world's worst maritime disasters, one of the most infamous tragedies in human history, and one of the most famous tragedies in the history of the film and theatre world, with a reputation as one of the greatest disasters in the history of the entertainment industry, with more than 1,500 people lost when the Titanic hit an iceberg on its maiden voyage from England to the United States, but also more than 200 people who died when the ship capsized off the coast of Newfoundland, Canada. The code : from transformers import PegasusTokenizer,PegasusForConditionalGeneration mname = "google/pegasus-xsum" titanic_text = '''RMS Titanic was a British passenger liner operated by the White Star Line that sank in the North Atlantic Ocean in the early morning hours of 15 April 1912, after striking an iceberg during her maiden voyage from Southampton to New York City. Of the estimated 2,224 passengers and crew aboard, more than 1,500 died, making the sinking at the time the deadliest of a single ship in the West and the deadliest peacetime sinking of a superliner or cruise ship to date.With much public attention in the aftermath the disaster has since been the material of many artistic works and a founding material of the disaster film genre.RMS Titanic was the largest ship afloat at the time she entered service and was the second of three Olympic-class ocean liners operated by the White Star Line. She was built by the Harland and Wolff shipyard in Belfast. Thomas Andrews, chief naval architect of the shipyard at the time, died in the disaster.Titanic was under the command of Captain Edward Smith, who also went down with the ship. The ocean liner carried some of the wealthiest people in the world, as well as hundreds of emigrants from Great Britain and Ireland, Scandinavia and elsewhere throughout Europe, who were seeking a new life in the United States. The first-class accommodation was designed to be the pinnacle of comfort and luxury, with a gymnasium, swimming pool, libraries, high-class restaurants, and opulent cabins. 
A high-powered radiotelegraph transmitter was available for sending passenger "marconigrams" and for the ship's operational use.The Titanic had advanced safety features, such as watertight compartments and remotely activated watertight doors. The ship carried 16 lifeboat davits which could lower three lifeboats each, for a total of 48 boats. However, Titanic carried only a total of 20 lifeboats, four of which were collapsible and proved hard to launch during the sinking.The carried lifeboats were enough for 1,178 people—about half the number on board, and one third of her total capacity—due to the maritime safety regulations of those days. Though at the time of the sinking the lowered lifeboats were only filled about half.After leaving Southampton on 10 April 1912, Titanic called at Cherbourg in France and Queenstown (now Cobh) in Ireland, before heading west to New York.On 14 April, four days into the crossing and about 375 miles (600 km) south of Newfoundland, she hit an iceberg at 11:40 p.m. ship's time. The collision caused the hull plates to buckle inwards along her starboard (right) side and opened five of her sixteen watertight compartments to the sea; she could only survive four flooding. Meanwhile, passengers and some crew members were evacuated in lifeboats, many of which were launched only partially loaded. A disproportionate number of men were left aboard because of a "women and children first" protocol for loading lifeboats.At 2:20 a.m., she broke apart and foundered with well over one thousand people still aboard. Just under two hours after Titanic sank, the Cunard liner RMS Carpathia arrived and brought aboard an estimated 705 survivors.The disaster was met with worldwide shock and outrage at the huge loss of life, as well as the regulatory and operational failures that led to it. Public inquiries in Britain and the United States led to major improvements in maritime safety. One of their most important legacies was the establishment of the International Convention for the Safety of Life at Sea (SOLAS) in 1914, which still governs maritime safety. Several new wireless regulations were passed around the world in an effort to learn from the many missteps in wireless communications—which could have saved many more passengers.The wreck of Titanic was discovered in 1985 (73 years after the disaster) during a Franco-American expedition and United States Military mission.The ship was split in two and is gradually disintegrating at a depth of 12,415 feet (2,069.2 fathoms; 3,784 m). Thousands of artefacts have been recovered and displayed at museums around the world. Titanic has become one of the most famous ships in history, depicted in numerous works of popular culture, including books, folk songs, films, exhibits, and memorials. Titanic is the second largest ocean liner wreck in the world, only being surpassed by her sister ship HMHS Britannic, however, she is the largest sunk while in service as a liner, as Britannic was in use as a hospital ship at the time of her sinking. 
The final survivor of the sinking, Millvina Dean, aged two months at the time, died in 2009 at the age of 97.'''

# download model
model = PegasusForConditionalGeneration.from_pretrained(mname)

# download tokenizer
tok = PegasusTokenizer.from_pretrained(mname)

batch = tok.prepare_seq2seq_batch(src_texts=[titanic_text])

# Hyperparameter Tuning
gen = model.generate(
    **batch,
    max_length=200,         # max length of summary
    min_length=100,         # min length of summary
    do_sample=True,
    temperature=3.0,
    top_k=30,
    top_p=0.70,
    repetition_penalty=1.2,
    length_penalty=5,       # if more than 1, encourage model to generate larger sequences
    num_return_sequences=1  # no of summaries you want to generate
)

# for forward pass: model(**batch)
summary = tok.batch_decode(gen, skip_special_tokens=True)
print(summary)

The output produced is

The sinking of the passenger liner Titanic 100 years ago on 15 April 1912 remains one of the world's worst maritime disasters, one of the most infamous tragedies in human history, and one of the most famous tragedies in the history of the film and theatre world, with a reputation as one of the greatest disasters in the history of the entertainment industry, with more than 1,500 people lost when the Titanic hit an iceberg on its maiden voyage from England to the United States, but also more than 200 people who died when the ship capsized off the coast of Newfoundland, Canada.

References
https://medium.com/datadriveninvestor/pegasus-simple-abstractive-summarization-using-hugging-face-python-7bfaaf534087
['Parth Chokhra']
2020-11-16 07:01:30.907000+00:00
['Programming', 'Artificial Intelligence', 'Data Science', 'Technology', 'Machine Learning']
I Feel like I Am Watching the World Burn
I have good days and I have bad days but most days have been grey. A 6.5/10. Olympic bronze medal. Not bad but not great. I sometimes feel like I am watching myself live my life. I am numb to most things. It's like a part of me died a while ago and though I feel almost like myself, there is something not quite right. I watch moments slip by like a silent film montage and think, "Is this what happiness feels like?" "Should I be happy by now?" I question if I ever experience anything. Surely I shouldn't be questioning it, I should live. But I don't feel alive. I feel like I'm here, having to do what I have to do to survive. I don't enjoy it nor do I hate it. I just do it. I imagine this is what people who work a dead-end job feel like Monday to Friday. Barely present yet functional. I wonder if I have ever been happy. Do I even remember what happiness feels like? To be carefree in joy. Trusting the world to catch me. This world that I didn't yet know had inhabitants who wanted to bury me. Then I'm back to thinking about dying. The feel of a wooden coffin. The taste of damp, cold air. I think about the taste of soil. Worms feeding on my body to grow. Do they feel happy or do they do what they have to do? I wonder if I lost the possibility of attaining joy when I found out I was Black. The realization that everything is built on structures meant to stifle me. That the vast majority of people will hate me before they know me. Will judge, feel intimidated, feel threatened by my existence. Yet will never say it with their chest. They will choose to gaslight instead of admitting the discomfort of my presence.
https://medium.com/an-injustice/i-feel-like-i-am-watching-the-world-burn-b5a3606c82aa
[]
2019-11-14 10:18:33.463000+00:00
['Mental Health', 'Equality', 'Life', 'Race', 'Zuva']
We’ve updated our submission guidelines
We know the new year is coming up, but we didn't want to wait to make changes. We've made some updates to our submission guidelines that will be important for all of our contributors. Here are the highlights: Subject matter Previously, all content needed to be consistent in either inspiring or educating our audience within the writing or general creative industry. We have broadened this scope to include emotional stories or situations that you have experienced. It doesn't have to be specifically about creativity, but it must be personal in describing your emotional condition. How to submit If you are new to CRY and not already one of our writers, we ask that you email your draft to [email protected]. Please make sure it is a draft within Medium and not already published. To send a draft, simply click on the three dots beside "publish" in the top right corner of your story. You will then have the option to copy and paste the link, which you can then email to the above address. If you're already a writer for CRY, simply submit your story in draft form. If you have not heard back from us within three days, it means we will not be publishing your piece. No more writing prompts We've made a name for ourselves with our regular writing prompts, but we're giving that a bit of a rest, at least for this upcoming year. Instead, submissions will be open all year round. Feel free to submit whatever you like as long as it fits with what we've described in the subject matter heading.
https://medium.com/cry-mag/weve-updated-our-submission-guidelines-7c117ae849b7
['Kern Carter']
2020-12-14 12:02:40.875000+00:00
['Newsletter', 'Writing']
Work for Yourself
Work for Yourself The prospect of leaving stable employment and doing your own thing might be scary. But there is no greater tool in the growth of self. After you work for yourself for a while, you won’t go back. If you are of a certain ilk, that is. You’re not born that way, working for yourself makes you that. You might go back in body for a brief period, but it’s impossible to go back in mind. Once you know, you know. The cat’s out of the bag, the can of worms is open, the horse has bolted, and there’s no putting them back. Krishnamurti said governments want technicians. Corporations do too. They might present all the right optics, say that they want free thinking, creative people in their organisation. But they don’t. Work for yourself. At first it will be exciting. Then it will be hard. After that, both combine. If you stick it out you’ll be a happier, more fulfilled, more contented human being. Anything less than that and you forgo your right to liberty.
https://medium.com/the-reflectionist/work-for-yourself-277522e44ab7
['Larry G. Maguire']
2020-05-16 22:07:57.397000+00:00
['Work', 'Freelancers', 'Self', 'Psychology', 'Working']
matmul() is eating software
As an open-source project, TensorFlow has been incredibly successful, garnering over 20,000 commits since November 2015. The main TensorFlow GitHub repository is synced bidirectionally at least once a week with Google’s internal mirror and has received major and minor contributions from engineering teams at Intel, Microsoft, IBM, RStudio, Minds.ai, and other companies. In terms of reach, TensorFlow Lite will increase the efficiency of TensorFlow models on mobile and embedded devices later this year, and projects like XLA are even more ambitious: XLA supports ahead-of-time and just-in-time compilation of the linear algebra primitives underlying deep learning to generate accelerated machine code for any target backend system. The promise of XLA is a quantum leap for hierarchical optimization not just on GPGPUs, but on any arbitrary architectures that can parallelize linear algebra primitives. Internally at Google, TensorFlow is used in a staggering number of projects following Sundar Pichai’s call to become a true “AI-first” company. And the trend towards machine learning-based software is accelerating not just at Google: Amazon, Apple, Baidu, Facebook, Microsoft, Salesforce, Uber, Lyft — nearly every major tech company — has hired dedicated research teams to help push machine learning into production. And along with these major players, platforms for deep learning are also proliferating: PyTorch and Caffe2 from Facebook, CNTK from Microsoft, Core ML from Apple and MXNet from Amazon, just to name a few. What does Software Engineering look like in 10 years? With the rise of machine learning frameworks, the clean abstractions and modular design patterns are being replaced by high-dimensional floating-point tensors and efficient matrix multiplication. As this trend continues, it is fundamentally altering the practice of software engineering. In “Machine Learning: The High-Interest Credit Card of Technical Debt”, D Sculley maps out the myriad of ways that machine learning systems encourage (or worse, necessitate) poor software design choices. These systems “have all the basic code complexity issues as normal code, but also have a larger system-level complexity that can create hidden debt.” Machine learning systems erode model boundaries and abstraction by tightly coupling all system inputs: the desired behavioral invariants flow not from software logic, but from the specific external data driving them. Although tools exist to identify dependencies in code via static analysis and linkage graphs, in general such tools are not yet available for analyzing data dependencies. D et al. 
discuss several system-design anti-patterns that will resonate with machine learning practitioners: Glue code system design pattern, “in which a massive amount of supporting code is written to get data into and out of general-purpose packages.” Pipeline jungles, which evolve organically over time where the data preparation system “may become a jungle of scrapes, joins, and sampling steps, often with intermediate files output.” Configuration debt, which accumulates as systems and pipelines mature, collecting “a wide range of configurable options, including which features are used, how data is selected, a wide variety of algorithm-specific learning settings, potential pre- or post-processing, verification methods, etc.” And even in smaller, less complicated projects, practitioners struggle with issues related to: Model architecture and weights versioning during experimentation — particularly when models are partially pre-trained with a different regime or weights are borrowed from other runs; Data source and feature versioning; Domain-shifts between the experimentation environment and production deployment; Monitoring inference quality in production. One answer may be found in TFX, an internal platform developed at Google for distributing and serving machine learning models in production: Creating and maintaining a platform for reliably producing and deploying machine learning models requires careful orchestration of many components — a learner for generating models based on training data, modules for analyzing and validating both data as well as models, and finally infrastructure for serving models in production. This becomes particularly challenging when data changes over time and fresh models need to be produced continuously. Unfortunately, such orchestration is often done ad hoc using glue code and custom scripts developed by individual teams for specific use cases, leading to duplicated effort and fragile systems with high technical debt. TFX standardizes these patterns and components and integrates them into a single platform that simplifies the platform configuration, and reduces the time to production from the order of months to weeks, while providing platform stability that minimizes service disruptions. Some parts of TFX have already been open-sourced, including TensorFlow Serving and tf.Transform. What does hardware look like in 10 years? Moore’s Law is slowing down and we’re poised to re-enter the Golden Age of Architecture, seeing rapid development across a wider variety of chips and instruction sets. Companies like Nervana (Intel), NVIDIA, Cerebras, and Google are all working on next-gen hardware architectures to accelerate linear algebra for machine learning. And by default, each of these architectures would typically require its own low-level, hand-optimized primitive libraries a la cuDNN. Combatting this trend will require enormous community effort around more general compiler frameworks such as XLA. Google’s Tensor Processing Units (TPUs) are perhaps the farthest along in becoming a generally available alternative to the current GPGPU hegemony. Each Cloud TPU provides up to 180 teraflops of floating-point performance, 64 GB of ultra-high-bandwidth memory and can be connected together. Unlike previous supercomputer architectures, TPUs are designed from the ground up to realize peak-performance on the linear algebra workloads that are common in machine learning. 
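The talk itself did not include code, but as a rough sketch of the kind of linear algebra workload being discussed, current TensorFlow 2.x releases expose XLA's just-in-time compilation directly through tf.function; the function, shapes, and names below are arbitrary illustrations, not material from the talk:

import tensorflow as tf

# jit_compile=True asks TensorFlow to lower this function through XLA, which can
# fuse the matmul, bias add, and activation into optimized kernels for the
# target backend (CPU, GPU, or TPU).
@tf.function(jit_compile=True)
def dense_layer(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([128, 512])  # batch of activations (illustrative shapes)
w = tf.random.normal([512, 256])  # weight matrix
b = tf.zeros([256])               # bias vector

y = dense_layer(x, w, b)          # first call traces and XLA-compiles the graph
print(y.shape)                    # (128, 256)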
TPUs are integrated with TensorFlow, and Google provides both a paid hosted infrastructure option (Cloud TPU) as well as a grant program for ML experts who want early access to the hardware and are willing to share their research with the world via publications and open-source software: To accelerate the pace of open machine-learning research, we are introducing the TensorFlow Research Cloud (TFRC), a cluster of 1,000 Cloud TPUs that will be made available free of charge to support a broad range of computationally-intensive research projects that might not be possible otherwise. Coda Graph computation and deep learning libraries such as TensorFlow are a major driving force behind the future of computing, requiring us to rethink systems architecture, from hardware to compilers to higher level programming languages and design patterns. It is incredibly humbling to see the sheer amount of work ahead of us as software architects, engineers, researchers and practitioners, but it is also an incredibly exciting time to be working in AI. As Zak summarized this hope: When I was in grad school, most of these amazing new applications weren’t even possible — what will it be like when people can take this machine learning technology for granted and start doing things we can’t even envision now? What will the first wave of TensorFlow-native products be? This is a summary of a talk Zak Stone gave at the South Park Commons AI Speaker Series titled “TensorFlow, Cloud TPUs, and ML Progress.” Selected slides from the talk provided by Google and used with permission.
https://medium.com/south-park-commons/matmul-is-eating-software-afebccda1745
['Joseph Reisinger']
2017-08-17 16:33:12.145000+00:00
['Deep Learning', 'Software Development', 'Artificial Intelligence', 'TensorFlow', 'Machine Learning']
What To Do When You Miss A Writing Or Publishing Deadline
There are two kinds of deadlines in a writer’s life: those you set for yourself, and those literary agents or editors set for you. If you ignore your own self-imposed deadlines in favor of sleeping in a few extra minutes every day, you only have yourself to answer to. But if you miss a writing or publishing deadline set by an agent or editor, it could derail your writing career. So what should you do when you suddenly glance at your watch, gasp, and realize you’re late? 7 Tips For Handling A Missed Deadline With Grace And Professionalism Take the blame. You might be tempted to point to your editor’s unreasonable expectations or unexpected occurrences in your life, but editors and literary agents will appreciate it when you take responsibility. Offer an explanation. Do you have a good reason for missing your deadline? Literary agents and editors want to be supportive of your writing: grant them the chance by giving them a legitimate explanation. Without passing blame or making excuses, briefly state the reason for missing your deadline. Ask for a specific extension. Editors and agents have printers, coworkers, and bosses to answer to, so give them hard facts they can pass along. Let your editor and/or agent know exactly how much more time you need to meet your writing obligations. Demonstrate understanding of the larger issues in play. Your deadline lapse may have a ripple effect and make other people’s jobs more difficult. So you may want to apologize for making your agent or editor’s life unnecessarily complicated. Make the call. While your go-to means of communicating about your manuscript may be email — in this case, a phone call may be required to smooth ruffled feathers. Be willing to take unconventional steps. Let your editor or agent know you are open to suggestions about how you might ameliorate any damage caused by you bungling the deadline. Perhaps your editor would be willing to start on the first half of your manuscript while you keep working on the second half. Say thank you. Offer a sincere expression of gratitude to both your editor and agent for understanding. The Best Way To Handle Deadline Drama…Is To Avoid It Entirely If you take on a deadline, be sure you have realistic expectations of your own writing goals. Don’t bite off more than you can chew. Calculate how many words you’ll need to write or revise every day to arrive at a rough estimate for a feasible timeline. Then, give yourself 15% padding — to allow for unforeseeable complications. If your projected numbers don’t match your editor’s deadline expectations, negotiate your due date ASAP — before it’s too late.
https://writersrelief.medium.com/what-to-do-when-you-miss-a-writing-or-publishing-deadline-4d7fb0d5e4ac
['Writer S Relief']
2019-04-26 20:04:48.103000+00:00
['Writing', 'Writing Life', 'Writing Tips', 'Time Management', 'Publishing']
Sepia Memories
Sepia Memories A reflection about traditions Photo by Amit Srivastava on Unsplash Yesterday was Sharad Purnima, or maybe the day before yesterday. I know because pictures of kheer (Indian rice pudding) were floating around in the family WhatsApp group. I was reminded by my mother-in-law to light a diya near the Tulsi plant for a month, because this month I won't be travelling anywhere. I remember my grandmother doing the same. She would get the diya from the market and prepare cotton wicks for the whole month, and for Diwali too. I believe traditions make us hopeful about life. I am not religious like my mother, but I do believe in following these simple traditions. They take me back to those memories which are slowly fading. I don't have any memory of Halloween, the new craze in India, and I don't have any inclination to celebrate it, so yesterday while people dressed up in black and flooded social media with Halloween pics, I lit that diya and placed it near the Tulsi plant. That October breeze caressed me quietly, and for a minute I thought I was in my home. The memories of those old days again littered my mind.
https://medium.com/spiritual-secrets/sepia-memories-ba5a60b0cb51
['Priyanka Srivastava']
2020-11-01 14:43:09.142000+00:00
['Childhood', 'India', 'Memories', 'Writing', 'Tradition']
How To Get Past Writing That First Chapter
How To Get Past Writing That First Chapter Hint: Stop rewriting! Photo by Hannah Skelly on Unsplash If you’re like me and have multiple draft manuscripts in your writing folder that never get past that first chapter, here are a few helpful hints I’ve picked up along the way to get you to that final chapter. Plot Your Storyline I’m fairly organised with my writing schedule, but I consider myself to be a plantser in regards to my writing style. That’s smack-bang in between a plotter and a pantser. If you’d like to read more about plotters and pantsers, here’s an article I wrote about it: Having a plot outline is easy if you’re a plotter. It comes naturally. But if you’re a pantser or a plantser, like me, then sometimes going with the flow means nothing gets done. Creating a plot outline gives you the confidence to know where you’re writing towards so you don’t keep rewriting that first chapter because your storyline keeps changing every second. It doesn’t have to be as intricate as Joseph Heller’s plot outline for Catch-22. Joseph Heller’s detailed plot outline A plot outline can be as simple as filling in the blanks. Here’s a simple plot outline that I created: Fill-in-the-blanks simple plot outline template Having a plot outline of where your manuscript is headed is one way to ensure you get past writing (or rewriting) that first chapter. Put Perfectionism In Its Place I don’t know about you but perfectionism plays a big part in my struggle to get past that first chapter. Just when you think you’ve written the ‘perfect’ chapter, you re-read it and find major flaws. The structure just isn’t working. Or you realise you’ve written the word ‘that’ 128 times. You do a quick edit and let the manuscript ‘rest’. You come back a week later (because you’re too impatient to leave it on the shelf any longer) and realise there are more things that need fixing. You change the perspective to first-person, but that doesn’t work, so you change it back to third-person again. After another edit, you realise the entire first chapter is probably ‘back story,’ and you need to start all over again… If any of this sounds familiar, then you too may be suffering from perfectionism. It’s not all bad news though. Perfectionists get a lot done. But we are also in constant fear of not getting a lot done. We procrastinate because we haven’t got around to doing everything on our list. We’re anxious. Fearful of criticism. The list goes on. If you want to read more about perfection, here’s my take on it: Just Keep Writing It’s a simple tip but can be the most difficult to follow. Having the guts to keep going even when you think you’re failing (or you have failed) is a hallmark of success. When you don’t have enough hours in the day to write, then do it tomorrow. Just don’t stop writing. Don’t beat yourself up about not keeping to a strict writing schedule, because life is unpredictable and we are only human, after all. The important lesson to take away is to keep at it. Just keep writing and you will succeed.
https://medium.com/mama-write/how-to-get-past-writing-that-first-chapter-be1e9d06bca9
['Lana Graham']
2019-11-02 11:37:20.274000+00:00
['Perfectionism', 'Plot', 'Writing', 'Writer', 'Writing Tips']
The Top 6 JavaScript Features From ES2020
1. Dynamic import() Dynamic import in JavaScript enables you to import JS files only when you need them. This is known as code splitting. Previously, this was possible in your project by using Webpack. But with this new version, ES2020, the feature is handled natively.
https://medium.com/better-programming/the-top-6-javascript-features-from-es2020-9beba927a9ae
['Joan Saum']
2020-06-01 14:19:52.066000+00:00
['Nodejs', 'React', 'Programming', 'JavaScript', 'Angular']
Feature Selection Techniques
Image source: kdnuggets What is feature selection? You all have seen datasets. Sometimes they are small, but often they are tremendously large in size. It becomes very challenging to process datasets which are very large, at least large enough to cause a processing bottleneck. The training time and performance of a machine learning algorithm depend heavily on the features in the dataset. Ideally, we should only retain those features in the dataset that actually help our machine learning model learn something. Having too many features poses a problem well known as the curse of dimensionality. Unnecessary and redundant features not only slow down the training time of an algorithm, but they also affect its performance. The process of selecting the most suitable features for training the machine learning model is called "feature selection". Before performing feature selection we need to do data pre-processing; you can check that out first. Benefits of performing feature selection: There are several advantages of performing feature selection before training machine learning models, some of which are listed below: Models with fewer features have higher explainability It is easier to implement machine learning models with reduced features Fewer features lead to enhanced generalization which in turn reduces overfitting Feature selection removes data redundancy Training time of models with fewer features is significantly lower Models with fewer features are less prone to errors Feature Selection Techniques: Several methods have been developed to select the most optimal features for a machine learning algorithm. Note: In this article we will discuss the methods which are widely preferred. All the techniques will be implemented independently of each other, not in succession: Filter Method. Wrapper Method. Embedded Method (Shrinkage). Filter Methods: Filter methods can be broadly categorized into two categories: univariate filter methods and multivariate filter methods. The univariate filter methods are the type of methods where individual features are ranked according to specific criteria. The top N features are then selected. Statistical tests can be used to select those features that have the strongest relationship with the output variable. Mutual information, the ANOVA F-test and chi-square are some of the most popular methods of univariate feature selection. The scikit-learn library provides: SelectKBest: keeps the top-k scoring features. SelectPercentile: keeps the top features which fall within a percentage specified by the user. It must be noted that you can use chi² only for data which is non-negative in nature. The example below uses the chi² statistical test for non-negative features to select 10 of the best features from the Mobile Price Range Prediction dataset. You can download the dataset and follow along. Now we will see how we can remove features with very low variance and correlated features from our dataset with the help of Python. If the features have a very low variance (i.e. very close to 0), they are close to being constant and thus do not add any value to our model at all. It would just be nice to get rid of them and hence lower the complexity. Please note that variance also depends on the scaling of the data. Scikit-learn has an implementation, VarianceThreshold, that does this precisely.
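The code snippets from the original post did not survive this copy, so here is a minimal sketch of the two filter steps just described, assuming a pandas DataFrame df holding the Mobile Price Range data with a price_range target column (the variable and column names are illustrative assumptions):

import pandas as pd
from sklearn.feature_selection import SelectKBest, VarianceThreshold, chi2

# df is assumed to have been loaded elsewhere, e.g. df = pd.read_csv("train.csv")
X = df.drop(columns=["price_range"])  # candidate features (all non-negative)
y = df["price_range"]                 # target variable

# Univariate filter: keep the 10 features with the highest chi-squared scores.
skb = SelectKBest(score_func=chi2, k=10)
X_top10 = skb.fit_transform(X, y)
print(X.columns[skb.get_support()])   # names of the 10 selected features

# Variance filter: drop near-constant features.
vt = VarianceThreshold(threshold=0.1)
X_high_variance = vt.fit_transform(X)
print(X.shape, "->", X_high_variance.shape)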
All columns with variance less than 0.1 will be removed. Correlation between the output observations and the input features is very important, and such features should be retained. However, if two or more features are mutually correlated, they convey redundant information to the model. We can remove features which have a high correlation with each other. Please note we will be using Pearson correlation for calculating the correlation between different numerical features. A heatmap makes it easy to identify which features are most related to the target variable, so we will plot a heatmap of correlated features using the seaborn library. We see that the feature MedInc_Sqrt has a very high correlation with MedInc. We can thus remove/drop one of them. Now, you might say, why not remove irrelevant features by intuition or by just looking at the heatmap? In general it's advisable not to be influenced by one's bias or intuition. In a real-life situation, we would have to deal with more than 3 features (from some hundreds to many thousands, typically). Thus, it would be unfeasible to go through each of them and decide whether to keep it or not. Moreover, there might be relationships among variables that are not easily spotted by the human eye, not even with accurate analysis. However, in some scenarios, you may want to use a specific machine learning algorithm to train your model. In such cases, features selected through filter methods may not be the most optimal set of features for that specific algorithm. There is another category of feature selection methods that select the most optimal features for the specified algorithm. Such methods are called wrapper methods. Wrapper Methods: Wrapper methods use combinations of variables to determine predictive power. They are based on greedy search algorithms. The wrapper method will find the best combination of variables. The wrapper method actually tests each feature against test models that it builds with them to evaluate the results. Out of all three methods, this is the most computationally intensive. It is not recommended that this method be used on a high number of features, and if you do not use this feature selection properly, then you might even end up overfitting the model. Common wrapper methods include: Stepwise/Subset Selection, Forward Stepwise, and Backward Stepwise (RFE). Here I have mentioned the basic steps to be followed: Train a baseline model. Identify the most important features using a feature selection technique Create a new 'limited features' dataset containing only those features Train a second model on this new dataset Compare the accuracy of the 'full featured' (baseline) model to the accuracy of the 'limited featured' (new) model Forward Selection: Identify the best variable (e.g. based on model accuracy) Add the next variable into the model And so on until some predefined criterion is satisfied Stepwise/Subset Selection: Similar to the forward selection process, but a variable can also be dropped if it is deemed no longer useful after a certain number of steps. Now let's implement the various feature selection techniques. 1. Backward Stepwise (Recursive Feature Elimination (RFE)) Recursive = something that happens repeatedly. As the name suggests, Recursive Feature Elimination works by recursively (repeatedly) removing features and building a model on the features that remain. The example below uses RFE with the linear regression algorithm to select the top 3 features.
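A minimal sketch of that RFE example (the original post's code images are not reproduced here); the California housing data is used to match the MedInc columns mentioned above, and the estimator choice is an assumption consistent with the text:

from sklearn.datasets import fetch_california_housing
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Illustrative dataset; swap in your own DataFrame if you are following along.
data = fetch_california_housing(as_frame=True)
X, y = data.data, data.target

# Recursively drop the weakest feature until only 3 remain.
rfe = RFE(estimator=LinearRegression(), n_features_to_select=3)
rfe.fit(X, y)

print(X.columns[rfe.support_])  # the 3 selected features
print(rfe.ranking_)             # rank 1 = selected, higher = eliminated earlier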
The choice of algorithm does not matter; instead of linear regression we can use any other algorithm. We use the feature_selection module from the sklearn library to apply Recursive Feature Elimination (RFE). Scikit-learn also offers SelectFromModel, which helps you choose features directly from a given model. You can also specify the threshold for coefficients or feature importances if you want, and the maximum number of features you want to select. 3. Embedded Method (Shrinkage). The embedded method is an inbuilt variable selection method. We don't select or reject the predictors or variables in this method. Instead, it controls the value of the parameters, i.e. not-so-important predictors are given very low weight (close to zero); this is also known as regularization. Feature selection can use models that have L1 (Lasso) penalization: when we have L1 penalization for regularization, most coefficients will be 0 (or close to 0), and we select the features with non-zero coefficients. L2 (Ridge) penalization adds a penalty equal to the square of the magnitude of the coefficients. All coefficients are shrunk by the same factor (so none of the predictors are eliminated). In the end, I would like to say feature selection is a decisive part of a machine learning pipeline: being too conservative means introducing unnecessary noise, while being too aggressive means throwing away useful information. If you are curious to learn about missing values treatment, then check this out. If you found this article useful, give it a clap and share it with others. — Happy Learning — Thank You
https://medium.com/datadriveninvestor/feature-selection-techniques-1a99e61da222
['Nishant Shah']
2020-11-07 03:20:09.860000+00:00
['Python', 'Data Sc', 'Feature Selection', 'Feature Engineering', 'Machine Learning']
What’s New in Hadoop 3.0 ?— Enhancements in Apache Hadoop 3
What’s new in Hadoop 3 - Edureka This “What’s New in Hadoop 3.0” article focuses on the changes that are expected in Hadoop 3, as it is still in the alpha phase. The Apache community has incorporated many changes and is still working on some of them. So, we will be taking a broader look at the expected changes. The major changes that we will be discussing are: Minimum Required Java Version in Hadoop 3 is 8 Support for Erasure Encoding in HDFS YARN Timeline Service v.2 Shell Script Rewrite Shaded Client Jars Support for Opportunistic Containers MapReduce Task-Level Native Optimization Support for More than 2 NameNodes Default Ports of Multiple Services have been Changed Support for Filesystem Connector Intra-DataNode Balancer Reworked Daemon and Task Heap Management Apache Hadoop 3 is going to incorporate a number of enhancements over Hadoop 2.x. So, let us move ahead and look at each of the enhancements. 1. Minimum Required Java Version in Hadoop 3 is Increased from 7 to 8 In Hadoop 3, all Hadoop JARs are compiled targeting a runtime version of Java 8. So, users who are still using Java 7 or below have to upgrade to Java 8 when they start working with Hadoop 3. Now let us discuss one of the important enhancements of Hadoop 3, i.e. Erasure Encoding, which will reduce the storage overhead while providing the same level of fault tolerance as earlier. 2. Support for Erasure Encoding in HDFS So now let us first understand what Erasure Encoding is. Generally, in storage systems, Erasure Coding is mostly used in Redundant Array of Inexpensive Disks (RAID). As you can see in the above image, RAID implements EC through striping, in which the logically sequential data (such as a file) is divided into smaller units (such as bit, byte, or block) and consecutive units are stored on different disks. Then for each stripe of original data cells, a certain number of parity cells are calculated and stored. This process is called encoding. The error on any striping cell can be recovered through decoding calculation based on surviving data cells and parity cells. Now that we have an idea of erasure coding, let us go through the earlier scenario of replication in Hadoop 2.x. The default replication factor in HDFS is 3, in which one is the original data block and the other 2 are replicas, which require 100% storage overhead each. So that makes 200% storage overhead and it consumes other resources like network bandwidth. However, the replicas of cold datasets which have low I/O activities are rarely accessed during normal operations, but still consume the same amount of resources as the original dataset. Erasure coding stores the data and provides fault tolerance with less space overhead as compared to HDFS replication. Erasure Coding (EC) can be used in place of replication, which will provide the same level of fault-tolerance with less storage overhead. Integrating EC with HDFS can maintain the same fault-tolerance with improved storage efficiency. As an example, a 3x replicated file with 6 blocks will consume 6*3 = 18 blocks of disk space. But with EC (6 data, 3 parity) deployment, it will only consume 9 blocks (6 data blocks + 3 parity blocks) of disk space. This requires a storage overhead of only 50%. Since erasure coding requires additional overhead in the reconstruction of the data due to performing remote reads, it is generally used for storing less frequently accessed data.
Before deploying erasure coding, users should consider all of its overheads, like the storage, network and CPU overheads. Now, to support Erasure Coding effectively in HDFS, they made some changes in the architecture. Let us take a look at the architectural changes. HDFS Erasure Encoding: Architecture NameNode Extensions – The HDFS files are striped into block groups, which have a certain number of internal blocks. Now to reduce NameNode memory consumption from these additional blocks, a new hierarchical block naming protocol was introduced. The ID of a block group can be deduced from the ID of any of its internal blocks. This allows management at the level of the block group rather than the block. Client Extensions – After implementing Erasure Encoding in HDFS, the NameNode works at the block group level & the client read and write paths were enhanced to work on multiple internal blocks in a block group in parallel. On the output/write path, DFSStripedOutputStream manages a set of data streamers, one for each DataNode storing an internal block in the current block group. A coordinator takes charge of operations on the entire block group, including ending the current block group, allocating a new block group, etc. On the input/read path, DFSStripedInputStream translates a requested logical byte range of data into ranges of internal blocks stored on DataNodes. It then issues read requests in parallel. Upon failures, it issues additional read requests for decoding. DataNode Extensions – The DataNode runs an additional ErasureCodingWorker (ECWorker) task for background recovery of failed erasure coded blocks. Failed EC blocks are detected by the NameNode, which then chooses a DataNode to do the recovery work. Reconstruction performs three key tasks: Read the data from source nodes, reading only the minimum number of input blocks & parity blocks needed for reconstruction. New data and parity blocks are decoded from the input data. All missing data and parity blocks are decoded together. Once decoding is finished, the recovered blocks are transferred to target DataNodes. ErasureCoding policy – To accommodate heterogeneous workloads, we allow files and directories in an HDFS cluster to have different replication and EC policies. Information about encoding & decoding files is encapsulated in an ErasureCodingPolicy class. It contains 2 pieces of information, i.e. the ECSchema & the size of a striping cell. The second most important enhancement in Hadoop 3 is YARN Timeline Service version 2, up from YARN version 1 (in Hadoop 2.x). They are trying to make many upbeat changes in YARN Version 2. 3. YARN Timeline Service v.2 Hadoop is introducing a major revision of the YARN Timeline Service, i.e. YARN Timeline Service v.2. It is developed to address two major challenges: Improving scalability and reliability of Timeline Service Enhancing usability by introducing flows and aggregation YARN Timeline Service v.2 can be tested by the developers to provide feedback and suggestions. It should be exploited only in a test capacity. Security is not enabled in YARN Timeline Service v.2.
So, let us first discuss scalability and then we will discuss flows and aggregations. YARN Timeline Service v.2: Scalability YARN version 1 is limited to a single instance of writer/reader and does not scale well beyond small clusters. Version 2 uses a more scalable distributed writer architecture and a scalable backend storage. It separates the collection (writes) of data from the serving (reads) of data. It uses distributed collectors, essentially one collector for each YARN application. The readers are separate instances that are dedicated to serving queries via REST API. YARN Timeline Service v.2 chooses Apache HBase as the primary backing storage, as Apache HBase scales well to a large size while maintaining good response times for reads and writes. YARN Timeline Service v.2: Usability Improvements Now talking about usability improvements, in many cases, users are interested in the information at the level of “flows” or logical groups of YARN applications. It is much more common to launch a set or series of YARN applications to complete a logical application. Timeline Service v.2 supports the notion of flows explicitly. In addition, it supports aggregating metrics at the flow level, as you can see in the below diagram. Now let us look at the architectural level, how YARN version 2 works. YARN Timeline Service v.2: Architecture YARN Timeline Service v.2 uses a set of collectors (writers) to write data to the backend storage. The collectors are distributed and co-located with the application masters to which they are dedicated, as you can see in the below image. All data that belong to that application are sent to the application level timeline collectors, with the exception of the resource manager timeline collector. For a given application, the application master can write data for the application to the co-located timeline collectors. In addition, node managers of other nodes that are running the containers for the application also write data to the timeline collector on the node that is running the application master. The resource manager also maintains its own timeline collector. It emits only YARN-generic life cycle events to keep its volume of writes reasonable. The timeline readers are daemons separate from the timeline collectors, and they are dedicated to serving queries via REST API. 4. Shell Script Rewrite The Hadoop shell scripts have been rewritten to fix many bugs, resolve compatibility issues and change some existing installation behavior. They also incorporate some new features. So I will list some of the important ones: All Hadoop shell script subsystems now execute hadoop-env.sh, which allows for all of the environment variables to be in one location. Daemonization has been moved from *-daemon.sh to the bin commands via the --daemon option. In Hadoop 3 we can simply use --daemon start to start a daemon, --daemon stop to stop a daemon, and --daemon status to set $? to the daemon’s status. For example, ‘hdfs --daemon start namenode’. Operations which trigger ssh connections can now use pdsh if installed. ${HADOOP_CONF_DIR} is now properly honored everywhere, without requiring symlinking and other such tricks.
Scripts now test and report better error messages for various states of the log and pid dirs on daemon startup. Before, unprotected shell errors would be displayed to the user. There are many more features that you will come to know when Hadoop 3 reaches the beta phase. Now let us discuss the shaded client jars and their benefits. 5. Shaded Client Jars The hadoop-client available in Hadoop 2.x releases pulls Hadoop’s transitive dependencies onto a Hadoop application’s classpath. This can create a problem if the versions of these transitive dependencies conflict with the versions used by the application. So in Hadoop 3, we have new hadoop-client-api and hadoop-client-runtime artifacts that shade Hadoop’s dependencies into a single jar. hadoop-client-api is compile scope & hadoop-client-runtime is runtime scope, which contains relocated third party dependencies from hadoop-client. So you can bundle the dependencies into a jar and test the whole jar for version conflicts. This avoids leaking Hadoop’s dependencies onto the application’s classpath. For example, HBase can use it to talk to a Hadoop cluster without seeing any of the implementation dependencies. Now let us move ahead and understand one more new feature, which has been introduced in Hadoop 3, i.e. opportunistic containers. 6. Support for Opportunistic Containers and Distributed Scheduling A new ExecutionType has been introduced, i.e. Opportunistic containers, which can be dispatched for execution at a NodeManager even if there are no resources available at the moment of scheduling. In such a case, these containers will be queued at the NM, waiting for resources to become available for them to start. Opportunistic containers are of lower priority than the default Guaranteed containers and are therefore preempted, if needed, to make room for Guaranteed containers. This should improve cluster utilization. Guaranteed containers correspond to the existing YARN containers. They are allocated by the Capacity Scheduler, and once dispatched to a node, it is guaranteed that there are available resources for their execution to start immediately. Moreover, these containers run to completion as long as there are no failures. Opportunistic containers are by default allocated by the central RM, but support has also been added to allow opportunistic containers to be allocated by a distributed scheduler which is implemented as an AMRMProtocol interceptor. Now moving ahead, let us take a look at how MapReduce performance has been optimized. 7. MapReduce Task-Level Native Optimization In Hadoop 3, a native implementation of the map output collector has been added in MapReduce. For shuffle-intensive jobs, this may provide speed-ups of 30% or more. They are working on native optimization for MapTask based on JNI. The basic idea is to add a NativeMapOutputCollector to handle key-value pairs emitted by the mapper, so that sort, spill, and IFile serialization can all be done in native code. They are still working on the Merge code. Now let us take a look at how the Apache community is trying to make Hadoop 3 more fault tolerant. 8. Support for More than 2 NameNodes In Hadoop 2.x, the HDFS NameNode high-availability architecture has a single active NameNode and a single Standby NameNode.
By replicating edits to a quorum of three JournalNodes, this architecture is able to tolerate the failure of any one NameNode. However, business-critical deployments require higher degrees of fault-tolerance. So, Hadoop 3 allows users to run multiple standby NameNodes. For instance, by configuring three NameNodes (1 active and 2 passive) and five JournalNodes, the cluster can tolerate the failure of two nodes. Next, we will look at default ports of Hadoop services that have been changed in Hadoop 3. 9. Default Ports of Multiple Services have been Changed Earlier, the default ports of multiple Hadoop services were in the Linux ephemeral port range (32768-61000). Unless a client program explicitly requests a specific port number, the port number used is an ephemeral port number. So at startup, services would sometimes fail to bind to the port due to a conflict with another application. Thus the ports conflicting with the ephemeral range have been moved out of that range, affecting port numbers of multiple services, i.e. the NameNode, Secondary NameNode, DataNode, etc. Some of the important ones are the NameNode, Secondary NameNode and DataNode ports, and there are a few more which are expected. Now moving on, let us see what the new Hadoop 3 filesystem connectors are. 10. Support for Filesystem Connector Hadoop now supports integration with Microsoft Azure Data Lake and Aliyun Object Storage System. Each can be used as an alternative Hadoop-compatible filesystem. First Microsoft Azure Data Lake was added and then they added Aliyun Object Storage System as well. You might expect some more. Let us understand how the Balancer has been improved to work across multiple disks in a DataNode. 11. Intra-DataNode Balancer A single DataNode manages multiple disks. During a normal write operation, data is divided evenly, and thus, disks are filled up evenly. But adding or replacing disks leads to skew within a DataNode. This situation was earlier not handled by the existing HDFS balancer. This concerns intra-DataNode skew. Now Hadoop 3 handles this situation with the new intra-DataNode balancing functionality, which is invoked via the hdfs diskbalancer CLI. Now let us take a look at the various memory management changes that have taken place. 12. Reworked Daemon and Task Heap Management A series of changes have been made to heap management for Hadoop daemons as well as MapReduce tasks. New methods for configuring daemon heap sizes. Notably, auto-tuning is now possible based on the memory size of the host, and the HADOOP_HEAPSIZE variable has been deprecated. In its place, HADOOP_HEAPSIZE_MAX and HADOOP_HEAPSIZE_MIN have been introduced to set Xmx and Xms, respectively. All global and daemon-specific heap size variables now support units. If the variable is only a number, the size is assumed to be in megabytes. Simplification of the configuration of the map and reduce task heap sizes, so the desired heap size no longer needs to be specified in both the task configuration and as a Java option. Existing configs that already specify both are not affected by this change. I hope this article was informative and added value to you. The Apache community is still working on multiple enhancements, which might land by the beta phase.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site. Do look out for other articles in this series which will explain the various other aspects of Big data.
https://medium.com/edureka/hadoop-3-35e7fec607a
['Shubham Sinha']
2020-09-10 10:04:36.047000+00:00
['Big Data', 'Hadoop', 'Hadoop 3', 'Mapreduce', 'Hadoop Yarn']
Why I’ve Made a Writing Schedule
Why I’ve Made a Writing Schedule And, why you should, too. Photo by Estée Janssens on Unsplash Every morning, I make a cup of coffee, take a minute or 30 for myself, then I sit down in front of my computer and write. On the weekends, that minute for myself gets longer and longer, but I still sit down and write. I write different things, fun projects, things just for myself, things that don’t have deadlines. For the longest time I never held a writing schedule; instead, I wrote only when I felt like it, only when the inspiration struck. Which meant that I was writing maybe 1,000 words a week. Then, I started a blog and had to write twice a week. Then, I started writing on Medium and had to write 4–5 times a week, combined with my blog. Which may not be a lot for some writers here on Medium who churn out work daily, but for me, it’s the perfect amount. Is it making me rich? No. Maybe, eventually, I’ll make better money off of my writing, but for now it’s all I need. While writing when you feel inspired and excited about something is the best time to write (the words just flow and everything feels perfect), it’s not going to get you very far if that’s the only way you write. Sure, you may finish that book in 30 years and you may publish a really great article or two, but you won’t be consistent, and consistency is key with everything, especially writing. I’ve now started to write even when I don’t want to. I’ve made myself this little schedule and I carved out time to get my writing down in the morning hours. I chose the morning because that’s when I work the best. I’m a morning person and love to get up and start the day and start banging away on any project I have in front of me. In the evenings I’m a lazy slob who doesn’t want to move from the couch even to get more snacks. Sure, I deviate from the schedule every so often, and have some weeks where I barely write at all. But, I’ve learned that by keeping to an actual schedule, I’m treating my passion more like actual work, and allowing myself to take it a little more seriously. It helps keep me focused on the tasks at hand, at my immediate goals, and those so far off in the future they’re shaking hands with Chewbacca. I may not be writing the best stuff every day (that’s what editing is for), but I’m writing. I’m getting what I need to be doing done and continuing on with my life. It helps keep me focused on my writing and lets me relax the rest of the day. Because once it’s time to leave for my short shift at work, I’m done writing for the day. If the mood strikes at 8:00pm, I’ll sit down and write, or if I’m on a deadline, I’ll sit down and write, but I won’t feel guilty about not writing in the evening because I’ve already done what needed to be done. Find a time that works for you. Stick to it, no matter what. Obviously illness and emergencies and long-lost friends showing up at your door don’t count. Keep doing it until it becomes a routine. Eventually, you’ll start seeing results, even if it has nothing to do with monetary gain.
https://medium.com/bulletproof-writers/why-ive-made-a-writing-schedule-92df4c8e09bd
['Michelle Lee-Ann']
2020-12-11 20:26:12.969000+00:00
['Writing Life', 'Writers On Writing', 'Writing', 'Write', 'Writing Tips']
null
In High Output Management, the legendary management book by Andy Grove, the idea of output is summed up concisely as… A manager’s output = the output of his organization + the output of the neighboring organizations under his influence.
https://medium.com/skooldio/%E0%B8%84%E0%B9%88%E0%B8%B2%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B8%84%E0%B8%99%E0%B8%AD%E0%B8%A2%E0%B8%B9%E0%B9%88%E0%B8%97%E0%B8%B5%E0%B9%88%E0%B8%9C%E0%B8%A5%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B8%87%E0%B8%B2%E0%B8%99-%E0%B9%81%E0%B8%A5%E0%B9%89%E0%B8%A7%E0%B8%9C%E0%B8%A5%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B8%87%E0%B8%B2%E0%B8%99%E0%B8%99%E0%B8%B5%E0%B9%88%E0%B8%A7%E0%B8%B1%E0%B8%94%E0%B8%A2%E0%B8%B1%E0%B8%87%E0%B9%84%E0%B8%87-c270244da244
['ตง', 'วรพล ร ตนพ นธ']
2019-02-19 00:57:27.024000+00:00
['New Ways Of Working', 'Management', 'Productivity']
Our Editorial Team
Our Editorial Team Dialogue & Discourse is made possible through the efforts of our remarkable editorial team, like-minded people seeking to expand empirically-focused coverage and share interesting ideas worthy of discourse. Eric Song [Founder, Head Editor, Submissions, Recruitment] worked as a researcher at the University of Michigan, conducting simulations of diffusion impedance within complex microstructures. He is the creator of Raphson, a physical modeling platform, among other independent projects, including Sector. He enjoys volunteering in teaching positions and reading about economics, public policy, and history in his free time. Kevin Gosztola [Submissions] is a journalist and documentary filmmaker serving as the managing editor of Shadowproof and the co-host of Unauthorized Disclosure, a weekly podcast. He is co-author of “Truth and Consequences: The US vs. Bradley Manning” and has been interviewed on Democracy Now!, BBC Radio, PBS Frontline, The Real News, and CounterSpin. His written work has been featured in The Nation, Salon, OpEdNews, Consortium News, and Common Dreams. Wilson da Silva [Submissions] is a science journalist, feature writer, and the co-founder and long-time editor of COSMOS magazine. The winner of 33 awards for journalism, publishing and filmmaking, including the AFI Award for Best Documentary for “The Diplomat”, he’s also served as editor of Newton, Science Spectra, 21C Magazine, and ABC Online. Among his writing credits are Reuters, Nature, New Scientist, Australian Geographic, The Sydney Morning Herald, The South China Morning Post, and The Australian Financial Review Magazine. Brai Odion-Esene [Submissions] is the founder of SW4 Insights Inc., an advisory firm analyzing public policy in the District of Columbia. He is a contributor to Emerging Market Media and formerly a Senior Director at Hamilton Place Strategies. Brai is a frequent guest on CGTN America. Jonathan D. [Business, Licensing, Publicity] is a lead manager at a large financial advisory firm on the East Coast. He is an executive within several political interest groups and an active council member of his city’s government. When not at work, Jonathan enjoys playing hockey, learning American history, fishing, and organizing community events in his area. Meziechi Nwogu [Submissions] is an analyst and contract negotiator in the gas and oil sector. He is a passionate follower and writer of geopolitics, international relations, and the global economy. Nathaniel E. [Recruitment, Newsletters, Inquires] is a student completing his Master of Business Administration. He works part-time as a finance consultant and is raising a young family. In his very limited free time, Nathaniel enjoys organizing and participating in political events in his community. He hopes to adopt a Border Collie puppy in the near future. James Holley [Submissions] is a recent graduate from Ohio State University, where he dual majored in political science and philosophy. He has previously worked for Manley Deas Kochalski, LLC and enjoys writing about public policy, elections, and foreign affairs. James aspires to pursue a career in the political, administrative, and legal sectors.
https://medium.com/discourse/our-team-70189a5f9109
['D D Editorial Team']
2020-12-08 19:26:28.258000+00:00
['Publishing', 'Publication', 'Writing', 'Editor', 'Politics']
The Path to Better Pollution Forecasting Goes Through Nested JSON
Think about the steel industry in the US, and you’ll likely think of Pittsburgh. Known as the “Steel City” for leading the nation in steel production in the first half of the 20th century, Pittsburgh also went by the moniker “the Smoky City,” due to the air pollution from steel and other heavy industries. With increased regulation and the decline of the steel industry, Pittsburgh has gotten much cleaner since its darkest, smokiest days in the 1940s, but it still hasn’t shed all the vestiges of steel-related pollution. Coke, one of the raw materials in steelmaking, is manufactured by heating coal at high temperatures. The largest coke plant in North America resides in Allegheny County, which includes Pittsburgh. During the coke production process, the facility emits a mixture of particulate and gas pollutants that can aggravate existing respiratory ailments, such as asthma and emphysema. This is where Pittsburgh resident Doug Balog, a data engineer for a large retailer by day and civic hacker by night, comes into the picture. He aims to use his technical skills to bring about a greater recognition of the impact of pollution in his Pittsburgh community. Gaining Greater Visibility into Pollution Doug is particularly interested in tracking temperature inversions, so called because the normal decrease in temperature with altitude is inverted. During an inversion, a layer of warmer air traps cooler air close to the ground. This phenomenon also prevents smoke and air pollution from escaping, and exacerbates the poor air quality in the areas surrounding the coke plant. Doug has been collecting National Weather Service (NWS) data on inversions for more than a year. He hopes to combine this weather data with crowdsourced pollution data (occurrences of pollution odors logged through a self-reporting app) for analysis. His goal is to reliably forecast periods of heavier pollution to provide adequate warning to sensitive populations, so that they may take appropriate precautionary measures. He also hopes to use the collected data to support calls for stricter enforcement of air pollution regulations by the county. Taming Complex Weather Data Using Rockset Doug has developed tools that scrape NWS forecasts hourly for about a hundred points within Allegheny County. The NWS data is represented in nested JSON format, which is difficult to handle in a relational database. The data either has to be converted into SQL columns, requiring a fixed schema along with considerable ETL, or stored in JSON columns that support limited indexing, neither of which is an ideal solution. Instead, using Rockset, Doug never has to specify any schema, and is able to run fast SQL queries directly on fully indexed JSON. Doug also encounters unexpected situations with field types and values from the NWS data. To indicate gusting wind, the NWS data shows a value like “20G30,” for example, instead of a numeric value. With Rockset, Doug can ingest and analyze unanticipated data types and values without errors and without any additional data cleaning. Accelerating the Path from Data to Insight For a solo developer attempting to use data to help the community tackle pollution, Rockset has proven particularly useful, saving Doug significant time and effort compared to alternative approaches. “There is a lot of data we can gather that can provide pieces of the answer to the problem of pollution in Pittsburgh, but it’s a difficult job to bring it together for analysis because the data quality is lacking.
There’s always going to be something unexpected in the data that trips you up,” says Doug. “With Rockset, I don’t have to worry about data being typed or formatted in a way I didn’t anticipate, and I don’t have to modify my code every time the schema changes. Rockset just sucks in all the raw data and makes it accessible using SQL, so it’s faster and easier to develop on the data.” Having spent much of his career around data management, Doug is well aware of the true cost of standing up a SQL database to store his data. Using Rockset’s cloud service, he has been able to get a reliable SQL API into all his data, while avoiding the challenges associated with setting up and managing a database. In Doug’s words, Rockset required no setup on his part, and creating Rockset collections for the NWS data was very easy: simply point Rockset to the data, with no data preparation required. Doug’s next steps will be to find more uses for the data he has gathered. He is working to provide pollution researchers with an interface to query the NWS data he has collected in Rockset. He also intends to train machine learning models on the data to predict pollution levels in the community.
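To make the schema headache concrete, here is a small illustrative sketch in TypeScript of the kind of defensive parsing that would otherwise be needed before loading such values into a rigidly typed table. The helper and its field handling are assumptions for illustration, based only on the "20G30" example above; they are not part of Doug's actual pipeline or of Rockset's API.

// Hypothetical helper: a wind speed field may arrive as a plain number (e.g. 15)
// or as a gust string such as "20G30" (sustained 20, gusting to 30).
function parseWindSpeed(raw: number | string): { sustained: number; gust?: number } {
  if (typeof raw === "number") {
    return { sustained: raw };
  }
  const gustMatch = raw.match(/^(\d+)G(\d+)$/);
  if (gustMatch) {
    return { sustained: Number(gustMatch[1]), gust: Number(gustMatch[2]) };
  }
  return { sustained: Number(raw) }; // fall back to a simple numeric string
}

console.log(parseWindSpeed("20G30")); // { sustained: 20, gust: 30 }

Because Rockset ingests the raw JSON as-is and indexes it for SQL, this kind of cleanup code, and the maintenance it needs every time the feed changes, is exactly what Doug gets to skip.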
https://medium.com/rocksetcloud/case-study-the-path-to-better-pollution-forecasting-goes-through-nested-json-bd88c57b20e3
['Kevin Leong']
2019-08-14 21:07:32.474000+00:00
['Json', 'Data', 'IoT', 'Sql', 'Weather']
Chaos Engineering comes to Ruby
In a micro-service environment, an unexpected service outage often comes as a surprise, and when it happens it’s stressful for engineers, managers, and clients. If you want to prepare your Ruby application for the next surprise and catch problems early, then this post is for you. I will show you how you can simulate a connection failure with a timeout in your Ruby application and check whether it can handle a service outage and recover from it. Chaos Engineering is the discipline of experimenting with injecting harmful behaviors into the software to prepare the system and the engineering team for unknown situations. The idea behind chaos engineering is similar to flu prevention. Doctors inject a weaker form of the flu virus into the human body to prepare its defense mechanism for the real virus, so it can memorize and practice the recovery. In a multi-service environment, the viruses are network blips, network latency, memory and CPU spikes, and other unlikely situations. Injecting these sorts of issues into a production environment in a controlled way helps to set up and test backup plans, minimize the downtime of your service and also put less stress on you in the future. Flu Shot Flu Shot is an open-source Ruby gem that allows you to inject harmful behaviors into your application and control the behaviors externally, like a control panel for a train layout. Using Flu Shot you can emit and simulate unusual events in your production environment in a controlled way and test your app’s resiliency. You can find the project on GitHub. Setup First, install the gem by running gem install flu_shot , or add it to the Gemfile of your application and run bundle install . # Gemfile gem 'flu_shot' Injection FluShot.inject defines a point in the execution flow where the harmful behaviors can be added later. The following example adds the user_controller_show injection point into the UsersController#show method right before the user is fetched from the Users Service. It doesn’t do anything at the beginning; we will configure it later. class UsersController < ApplicationController def show FluShot.inject(:user_controller_show) # injection point user = UsersService.find(params[:user_id]) # ... end end Vaccine Basically, we will inject vaccines (weakened harmful behaviors) into the system. FluShot::Vaccine classes define the behaviors that can be executed at the injection points. Every vaccine needs to be labeled by using label :label_name . The behavior that the vaccine contains goes in the constructor method, and additional parameters can be passed in a hash argument. In my example, the Latency vaccine adds a random-length sleep in the [min..max] range, simulating a slow service. The min and max values of the range must be passed in the params hash. class Latency < FluShot::Vaccine label :latency def initialize(params = {}) sleep(rand(params[:max] - params[:min]) + params[:min]) end end You can also raise and simulate exceptions in a vaccine by using FluShot::Sneeze , which encapsulates an exception. The reason we need to use the Sneeze object is that Flu Shot catches every exception raised in the vaccines to make sure it does not abort your app. In order to raise exceptions from vaccines, the exception needs to be wrapped into a Sneeze object. The following vaccine will raise a Faraday::Error::ConnectionFailed exception.
class ConnectionError < FluShot::Vaccine label :connection_error def initialize raise FluShot::Sneeze.new( Faraday::Error::ConnectionFailed .new('Connection Failed') ) end end Prescription FluShot::Prescription associates vaccines with injection points. The Prescription class provides a .for method that takes the injection point and a block, yielding a prescription local variable. You can specify which vaccine needs to be executed and with what parameters by using the .add method on this local variable. The next example simulates a Faraday connection failure with a random timeout in UsersController#show . The prescription is written for the user_controller_show injection point that we have already defined before. Using the prescription object, we can add a latency vaccine with a [1..3] seconds random timeout and raise a Faraday::Error::ConnectionFailed exception via the connection_error vaccine. FluShot::Prescription.for(:user_controller_show) do |prescription| prescription.add(:latency, {min: 1000, max: 3000}) prescription.add(:connection_error) end Now if you call the UsersController#show method, it will raise a connection failure error and simulate the Users service being unreachable. If you have some test accounts, you can add some filters to execute the vaccine only for test accounts; in this case, your clients are not affected at all. Leaving the block empty will reset the prescription to a no-op. FluShot::Prescription.for(:user_controller_show) do |prescription| end Storage By default, Flu Shot stores the prescriptions in memory, which works fine for single-process applications. Multi-process apps require shared memory to allow the processes to communicate with each other; therefore, a service like Memcached or Redis is necessary. If you use Redis, you can pass your Redis connection instance to FluShot::Config.storage by wrapping it in FluShot::Storage::Redis . If you put the following lines into your initializers, new prescriptions will be applied automatically in each process of your app. require 'redis' FluShot::Config.storage = FluShot::Storage::Redis.new(Redis.new) If you use the Redis storage, you can control your application directly from a Ruby console by writing prescriptions there; otherwise, you need to prepare your application to receive commands through HTTP and define new prescription configurations there. Contribution Flu Shot is looking for contributors to write vaccines for popular third-party libraries like Faraday and Rails, and to create a “pharmacy” for the vaccines. This project is in an early phase; your ideas and recommendations are welcome. If you like the project but you don’t know how to contribute, please star it on GitHub. You can find me with questions on Twitter or leave a comment below this post.
https://medium.com/zendesk-engineering/chaos-engineering-comes-to-ruby-8273333eff6c
['Laszlo Papp']
2020-03-31 03:32:41.140000+00:00
['Microservices', 'Ruby', 'Chaos Engineering', 'Reliability', 'Ruby on Rails']
FROM Pre-trained Word Embeddings TO Pre-trained Language Models — Focus on BERT
Here is the result: Dropout, Add & Norm https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1184/lectures/lecture12.pdf Before this layer, there is always a layer for which inputs and outputs have the same dimensions (Multi-Head Attention or Feed-Forward). We will call that layer Sublayer and its input x. After each Sublayer, dropout is applied with 10% probability. Call this result Dropout(Sublayer(x)). This result is added to the Sublayer’s input x, and we get x + Dropout(Sublayer(x)). Observe that in the context of a Multi-Head Attention layer, this means adding the original representation of a token x to the representation based on the relationship with other tokens. It is like telling the token: “Learn the relationship with the rest of the tokens, but don’t forget what we already learned about yourself!” Finally, a token-wise/row-wise normalization is computed with the mean and standard deviation of each row. This improves the stability of the network. We compute the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Position-wise Feed-Forward Network In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel=512, and the inner-layer has dimensionality dff=2048. These are layers of formal neurons with a ReLU as the activation function, formalized by FFN(x) = max(0, xW1 + b1)W2 + b2. Here W1 has dimensions [embedding dimension] x [inner dimension of the FFN, chosen freely] and W2 has dimensions [inner dimension of the FFN, chosen freely] x [embedding dimension]. Source here. 3.→Decoder Block Each decoder layer consists of sublayers: Masked multi-head attention (with look-ahead mask and padding mask) Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer. Point-wise feed-forward networks Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)) . There are N decoder layers in the transformer. As Q receives the output from the decoder’s first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder’s input based on the encoder’s output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. The Decoder consists of: Output Embedding Positional Encoding N decoder layers The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer. What are the inputs of the Transformer? We feed it both the input and output sentences at the same time. The outputs initially can be filled with anything; the model ignores whatever you fill in. It uses the entire input sentence and output sentence to predict the next word in a single go.
Once we predict the word, we replace it in the output sequence, and the model only considers the output up to that point, ignoring what is ahead of it. We continue to do that till we have a complete sentence. Multi-Head Masked Self-Attention In the encoder, self-attention layers process input queries, keys and values that come from the output of the previous layer. Each position in the encoder can attend to all positions in the previous layer of the encoder. In the decoder, the self-attention layer enables each position to attend to all previous positions in the decoder, including the current position. Positions must therefore be prevented from attending to subsequent positions (http://www.peterbloem.nl/blog/transformers). In other words, the self-attention layer is only allowed to attend to earlier positions in the output sequence. Masking in multi-head attention is done by masking future positions (setting them to -∞) before the softmax step in the self-attention calculation. This step ensures that the predictions for position i can depend only on the known outputs at positions less than i. Since we want these elements to be zero after the softmax, we set them to −∞. With RNNs there is no issue like that, since they cannot look forward into the input sequence: output i depends only on inputs 0 to i. With a transformer, the output depends on the entire input sequence, so prediction of the next words/characters becomes vacuously easy: just retrieve it from the input. To use self-attention as an autoregressive model, we’ll need to ensure that it cannot look forward into the sequence. We do this by applying a mask to the matrix of dot products, before the softmax is applied. This mask disables all elements above the diagonal of the matrix. After we’ve handicapped the self-attention module like this, the model can no longer look forward in the sequence. The “Decoder Attention” layer works just like multi-headed self-attention, except it creates its Queries matrix from the layer below it, and takes the Keys and Values matrices from the output of the encoder stack. Self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. 4.→The Final Linear and Softmax Layer The decoder stack outputs a vector of floats. How do we turn that into a word? That’s the job of the final Linear layer, which is followed by a Softmax layer. The Linear layer is a simple fully connected neural network that projects the vector produced by the stack of decoders into a much, much larger vector called a logits vector. This space is the size of the vocabulary (all words). We just project the output matrix (provided by the decoder block) into a “vocabulary space”. Mathematically speaking, what does it mean? Let’s call S the matrix output by the decoder. We multiply it by a learnable weight matrix W1. This is a fully connected layer that simply projects the previous output into a space the size of our vocabulary. W1 is the matrix that lets us extract a word from our vocabulary dictionary. It will therefore have dimensions [embedding dimension, i.e. dmodel] x [number of words in our vocabulary].
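Restating the two mechanisms just described in symbols, using the standard formulation from the original Transformer paper (this is a summary of the text above, not additional machinery introduced by this post):

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}} + M\right)V,
\qquad
M_{ij} =
\begin{cases}
0 & \text{if } j \le i \\
-\infty & \text{if } j > i
\end{cases}
\]

\[
P = \mathrm{softmax}(S\,W_1),
\qquad
S \in \mathbb{R}^{n \times d_{\mathrm{model}}},\;
W_1 \in \mathbb{R}^{d_{\mathrm{model}} \times |V|}
\]

Here M is the look-ahead mask that blocks positions j > i before the softmax, n is the sequence length, |V| is the vocabulary size, the softmax in the second equation is applied row-wise, and bias terms are omitted for brevity.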
Let’s assume that our model knows 10,000 unique English words (our model’s “output vocabulary”) that it’s learned from its training dataset. This would make the logits vector 10,000 cells wide — each cell corresponding to the score of a unique word. That is how we interpret the output of the model followed by the Linear layer. The softmax layer then turns those scores into probabilities (all positive, all adding up to 1.0). The cell with the highest probability is chosen, and the word associated with it is produced as the output for this time step. Softmax provides us the most likely word to predict (we take the word of the column which gives us the highest probability). This figure starts from the bottom with the vector produced as the output of the decoder stack. It is then turned into an output word. 5.→ Residual connection A residual connection is basically just taking the input and adding it to the output of the sub-network, a technique that has made training deep networks easier in the field of computer vision. 6.→ Layer normalization Layer normalization is a normalization method in deep learning that is similar to batch normalization. In layer normalization, the statistics are computed across each feature and are independent of other examples. The independence between inputs means that each input has a different normalization operation. 7.→ Model Training — How is BERT trained? A — Masked Language Modeling (MLM) “The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective allows the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer.” The Google AI researchers masked 15% of the words in each sequence at random. The task? To predict these masked words. A caveat here: the masked words were not always replaced by the masked token [MASK], because the [MASK] token would never appear during fine-tuning. So, the researchers used the below technique: 80% of the time the words were replaced with the masked token [MASK]; 10% of the time the words were replaced with random words; 10% of the time the words were left unchanged. B- Next Sentence Prediction Generally, language models do not capture the relationship between consecutive sentences. BERT was pre-trained on this task as well. For language model pre-training, BERT uses pairs of sentences as its training data. The selection of sentences for each pair is quite interesting. Let’s try to understand it with the help of an example. Imagine we have a text dataset of 100,000 sentences and we want to pre-train a BERT language model using this dataset. So, there will be 50,000 training examples or pairs of sentences as the training data. For 50% of the pairs, the second sentence would actually be the next sentence to the first sentence. For the remaining 50% of the pairs, the second sentence would be a random sentence from the corpus. The labels would be ‘IsNext’ for the first case and ‘NotNext’ for the second case. Applications? A- Real Applications: Pre-trained vs. Fine-tuned A review of BERT based models (https://towardsdatascience.com/a-review-of-bert-based-models-4ffdc0f15d58) Models pretrained on a domain- or application-specific corpus are pre-trained models. Training on a domain-specific corpus has been shown to yield better performance when fine-tuning them on downstream NLP tasks (NER, etc.)
for those domains, in comparison to fine-tuning BERT (which was trained on BooksCorpus and Wikipedia). BioBERT (biomedical text) SciBERT (scientific publications) ClinicalBERT (clinical notes) G-BERT (medical/diagnostic code representation and recommendation) M-BERT (trained on 104 languages) for zero-shot cross-lingual model transfer (task-specific annotations in one language are used to fine-tune the model for evaluation in another language) ERNIE (knowledge graph) + ERNIE (2) incorporates knowledge into pre-training, but by masking entities and phrases using a KG. TransBERT — unsupervised, followed by two supervised steps, for a story-ending prediction task. VideoBERT (a model that jointly learns video and language representations) by representing video frames as special descriptor tokens along with text for pretraining. This is used for video captioning. Fine-tuned models: models fine-tuned for a specific task using a pre-trained model. B- Case study BONUS: 1. Why divide by the square root of dk (√dk)? 2. Fine-tuning From https://yashuseth.blog/2019/06/12/bert-explained-faqs-understand-bert-working/ What is the fine-tuning procedure for sequence classification tasks? What is the fine-tuning procedure for sentence pair classification tasks? What is the fine-tuning procedure for Question Answering tasks? What is the fine-tuning procedure for single sentence tagging tasks? 3. BERT as a service The final hidden states (the transformer outputs) of the input tokens can be concatenated and/or pooled together to get the encoded representation of a sentence. bert-as-service is an open-source project that provides BERT sentence embeddings optimized for production, serving Google BERT in production using TensorFlow and ZeroMQ. 4. Kullback-Leibler divergence
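On bonus question 1, the standard answer from the original Transformer paper is that for large values of dk the dot products grow large in magnitude and push the softmax into regions with extremely small gradients, so the scores are scaled by 1/√dk. A compact version of the usual argument (a textbook derivation, not something specific to this post):

\[
q \cdot k = \sum_{i=1}^{d_k} q_i k_i,
\qquad
\mathrm{Var}(q \cdot k) = \sum_{i=1}^{d_k} \mathrm{Var}(q_i k_i) = d_k
\]

assuming the components q_i and k_i are independent with zero mean and unit variance; dividing the scores by √dk brings their variance back to 1 before the softmax is applied.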
https://towardsdatascience.com/from-pre-trained-word-embeddings-to-pre-trained-language-models-focus-on-bert-343815627598
['Adrien Sieg']
2020-01-21 14:50:32.059000+00:00
['Deep Learning', 'NLP', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
How I Became a Top Writer in 3 Topics in One Month
Develop Your Routine No one will give you the magic writing routine that will let you shit out masterpieces. You can learn from other writers and test out their methods. But ultimately, your routine will be of your creation. I read a ton of articles about the tools and routines of other writers, trying to emulate their strategies. It increased my productivity, to an extent. However, creating my own strategy increased my output by at least 5x. When you’re focusing on other writers’ routines, you put yourself in a do-not-have mindset. You think there’s some tip or trick out there that is waiting to take you to the next level. If you let that feeling fester, it will come back to haunt you when you’re vulnerable. In my case, on days when I was feeling uninspired, I would read “How to Become a Better Writer on Medium” articles instead of focusing on writing. This desire to copy another writer’s routine made me look at what I don’t have. There is a successful routine inside of you. You just need to find it. Here’s what I do, in case you’re wondering. When I sit down for a dedicated writing period, I make sure to have all my essentials within reach. My essentials include my laptop and charger, a water bottle, a blank sheet of paper and a pen to jot down notes, a pair of socks in case my feet get cold, and a bowl of healthy snacks. Once my essentials are covered, I turn my phone on silent and stash it in my desk. I turn on some chill music; I type ‘Chillhop study playlist’ into YouTube. I blacklist sites like YouTube or Instagram on my laptop. Then I sit my ass down and don’t get up until I’ve worked through my writing period. I may have taken bits and pieces of other people’s processes. But I didn’t inherit my method from a Medium article or YouTube video explaining how to become a better writer. I discovered a process that worked for me, and I made it my own. Find what works for you.
https://medium.com/mind-cafe/how-i-became-a-top-writer-in-3-topics-in-one-month-1e064c3dfd9f
['Tenzin Ozaki']
2020-09-23 20:41:26.066000+00:00
['Writer', 'Self', 'Life Lessons', 'Productivity', 'Self Improvement']
How Conspiracy Theories are Influencing our Minds and Beliefs
The Goofy but Scarier Part So, the intentions of human originators have a lot to do with the destructiveness of fantastic theories. But there is also a fantastical theory that might better explain the “why now?” as well as the “what next?” of our flood of falsehoods. Since at least the 1990s, people have asked, “Will the Internet ever become conscious?” That is to say, without anyone ever intending it to happen. It’s arguably a silly question. But when Christof Koch, the computational neuroscientist, was asked, he said that theoretically, it could happen. His reasoning was based on his Integrated Information Theory of consciousness. Now here’s the scary part. “if Koch and Tononi’s theory is correct, then at some point the growing complexity of the internet will force human brains to become absorbed into the collective mind.” “’Brains would cease to be conscious in their own right,’ (philosopher Philip) Goff writes, ‘and would instead become cogs in the mega-conscious entity that is the society including its internet-based connectivity.’” — Is the Internet Conscious?, Meghan O’Gieblyn This is one way to answer the question so many of us have asked: “How can they possibly believe all that (stupid, stupid) stuff?” A person could believe if they were in the early stages of a process that, if it continued, would absorb them into global consciousness. Yes, there are plenty of other available psychological explanations of why people believe in nonsense theories. I have touched on some of these elsewhere. But the current global spike seems unprecedented and might call for a new level of understanding. I have a friend who appears to just mindlessly drift from one conspiracy theory to another. He has bought into stories that are idiotic and odious, even though he is intelligent and widely read. It seems that he can only believe in things that he supposes are secret lore, suppressed by the establishment. Perhaps his spaciness is a symptom. Consider O’Gieblyn’s vision of a sliding surrender into the global consciousness. “You may occasionally succumb to the delusion that everyone you know sounds the same, as though their individual minds, filtered through the familiar syntax of tweets and memes, have fused into a single voice. You might find yourself engaging in behaviors that are not in your self-interest, mechanically following the dictate to spread and share personal information, even though the real beneficiary is not you or your friends, but the system itself.” — O’Gieblyn, op. cit. This sounds a lot like the people who are self-segregating on their own ‘trusted’ social media to join in a groundswell of truth-telling about their vile, conspiring enemies. The Point Our fascination with stories of secret collusion can be, and now often is, hijacked for bad purposes. The whole trend might also be a bad omen. Maybe our invention of hyper-massive communication is about to lead us into a phase change of consciousness; an eventual erosion of our individual identities into a collective mush. That would be an ironic turn for the people who most believe in American individualism, and who are today’s conspiracy theory fanatics.
https://medium.com/the-apeiron-blog/how-conspiracy-theories-are-influencing-our-minds-and-beliefs-f3c48a55cab7
['Ted Wade']
2020-12-22 17:02:14.463000+00:00
['Consciousness', 'History', 'Culture', 'Psychology']
A Superpower I Wish I Had
On days when, despite all the medication, sleep would tease me from a distance for several days in a row, I would feel like an extremely unsatisfied customer of the Indian mental health industry. The thing about taking anti-depressants, sleep medication and therapy is that it’s all very expensive business. And before any sociologists / social sectorists pull out (s)words, I mean relatively expensive. Now, I consider myself ‘middle-class’ with respect to how I think and how the term has been popularly used as a way of communicating miserliness in someone’s purchasing habits. The way I use it also refers to an extended-family-nurtured mindset of not wanting to spend on anything unless it’s an item of necessity that one may die without. Fortunately, or unfortunately — this is also how I reasoned with myself when I gave in to psychiatric treatment. I didn’t mind dying. But wanting to die, according to a vast majority of educated adults, I realized, was a wrong thing to want. And while I didn’t care about this majority, I realized I might actually kill myself if I didn’t resort to treatment. And that, I thought, would hurt a very very small subset of this majority viz. people I cared for. Worth the expense, I told myself, before agreeing to expensive prescription after prescription of medication and therapy. But on days when, despite all the medication, sleep would tease me from a distance for several days in a row, I would feel like an extremely unsatisfied customer of the Indian mental health industry. Today has been one such day. I am in Pondicherry to attend the annual staff retreat of the organization I work for. With my routine disturbed and no sleep for three days in a row now, I was contemplating between overdosing and distress-calling my psychiatrist when a colleague sitting next to me caught the zoned-out fatigue on my face. “Why don’t you take a nap? We have a 30-minute break after this session”, she said with concern. I took a long breath in. Nodded and smiled. Now, for those of you who have struggled with falling asleep (without having to fight for it), you might also be extremely familiar with the urge to throw water in the faces of people who manage to fall asleep in all kinds of places and conditions — e.g. while sitting or standing at busy railway stations, during long bus rides or all throughout meetings with their heads hanging low. Maybe the urge to splash water on their faces is just me — but the sharp twinge of jealousy can’t be denied. A nap, I think to myself, before responding to this question. A nap, as per my definition, sounds like a slot of timed sleep in which you tell your mind to sleep for exactly as much time as you have and then wake up feeling fresh and rested. A nap, I think, is something I’m yet to experience in my life. I’m still smiling. Because as grateful as I am for kindness — it also tires me. How am I to explain that sleep, for me, has been a long three-stage process for as long as I can remember. The first stage involves making sure there are no sounds (an extremely difficult exercise if you’ve grown up in the Indian ‘joint family’ where you’ve to wait till the last person is done watching TV placed strategically in the middle of the house from where everyone in the house can hear it), the mattress is laid out properly (because a separate room with doors shut is a tabooed arrangement and allowing teenage kids to explore sexuality did not feature on the list of my family’s troubles) and finally, waiting for the lights to be put out.
The second stage and the real stage — is trying to fall asleep. And just when you think you’re tired enough to fall asleep as soon as you hit the pillow — you’ll wake up. Or at least I would. I would wake up and suddenly become aware of every tiny detail around me. The leaking water taps in the bathroom. My parents’ whispering their opinions on family matters and related politics. My youngest uncle waking up to use the computer in the hall, to watch porn on mute. (Poor thing — this was still early 2000s and he didn’t know his adolescent niece knew how to check browsing history on a single computer being used by the entire family for functions ranging from accounting to school work and of course, entertainment). The second and longest stage, would, on most days last till early next morning, when I’d be shaken awake by my father yelling around loudly about how my brother and I were running late for the school bus, yet again. This is when I think I’m finally approaching the third stage of actual restful sleep. But missing school because of missing the school bus was a punishable offence in our household. It meant my father having to borrow the family car, pay for fuel he didn’t want to afford, to drive us to our private school situated 20 kilometers from where we lived. Yes, we were middle class about everything but private education. A scarring private education, which fared excellently on the matter of imparting effective English communication skills — but caused a big blow to my then sensitive self-esteem. My father paid the school fees, which for him, was the only dream I think he was still working for back then. But he could not afford all the extra costs of the latest in-style school bag, water bottle, a fresh set of textbooks and uniforms which the others had and teased me and my brother for not having. These remain unresolved matters, needing a chapter of their own, but I’ve strayed too far from the matter of sleep now. As it often happens with me, I’m still smiling lost in my own trail of thoughts. I’d learnt the art of nodding and smiling — without meaning it — a while back in my career. I snap out of my thoughts as I see people getting up from the meeting, to take their respective tea/ smoke breaks. I look at my colleague who’s still looking at me — both confused and worried now. A nap, I say quite dramatically with sudden wisdom, is a super power. It’s a super power I don’t have. She smiles and shrugs in an ‘as-you-wish’ sort of way. It’s a 30-minute break and neither of us could wait to step out from the windowless, air-conditioned conference room we’d been stuck in since the morning. So, we got up to leave and I switched back to my ‘smile and nod’ mode. I braced myself for some intelligent — this time over tea and therefore casual — conversation around the organization’s strategy and goals.
https://akankshababbar2.medium.com/a-superpower-i-wish-i-had-440cd9741de9
[]
2020-01-13 09:34:20.150000+00:00
['Sleep', 'Women', 'Anxiety', 'Mental Health']
Filtering Fruits in JavaScript and React
Photo by Brenda Godinez on Unsplash The filter() method in JavaScript is a useful method that loops through an array and grabs all elements for which the provided condition is true. filter() does not mutate the original array on which it is called. A new array will be created and filled with the elements of the original array for which the callback function returns true. I will go through examples of how to filter information in JavaScript and React. Writing a filter function vs. using the filter() method We can filter an array without using the filter() method, but we would have to write much more code. We have this array of fruits: let fruits = ['strawberry', 'banana', 'apple', 'blueberry', 'orange', 'grape'] We want all the fruits that are longer than five letters put into a new array. Without using the filter() method, we can create a function that takes in an array. This function will iterate through each element of the array using a for-loop and see if that element is longer than five letters. If the element is longer than five letters, it will be put into a new array. This function returned: ['strawberry', 'banana', 'blueberry', 'orange'] The filter() method can shrink all that code into a single line. This code is much cleaner and easier to read. let filteredFruits = fruits.filter(fruit => fruit.length > 5) Using filter() in a React app The filter() method is a programmer's best friend for rendering or grabbing specific data. We will use filter() to grab an object, delete it from one array, and add it to another. We have the fruits from above, and when you click on a fruit in the Fruits section, you have successfully eaten the fruit. We want the fruit clicked on to be deleted from the fruits array and added to the fruits eaten array. Initial state. Apple was clicked on in the Fruits section; it is deleted from the Fruits array and moved into the Fruits Eaten array. To see how the components communicate with each other, you can find this project's GitHub repository here. The fruits array is iterated over with the map() method, which creates a Fruit component for each fruit. When the fruit is clicked, that fruit is deleted from the fruits array and added to the fruits eaten array. When we click on the fruit, it will grab that fruit object and trigger the handleClick function.
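The handler itself lives in the linked repository, but here is a minimal sketch of how such an update might use filter(); the function name, state shape, and the fruitsEaten key are assumptions for illustration, not the repository's actual code.

// Move a clicked fruit from the fruits list to the eaten list.
function eatFruit(state, clickedFruit) {
  return {
    // keep every fruit except the one that was clicked
    fruits: state.fruits.filter((fruit) => fruit !== clickedFruit),
    // add the clicked fruit to the eaten list
    fruitsEaten: [...state.fruitsEaten, clickedFruit],
  };
}

// Example: eating 'apple'
const next = eatFruit(
  { fruits: ['strawberry', 'banana', 'apple'], fruitsEaten: [] },
  'apple'
);
console.log(next); // { fruits: ['strawberry', 'banana'], fruitsEaten: ['apple'] }

In a React component, handleClick would pass exactly this kind of update to setState (or to a useState setter), and because filter() returns a new array instead of mutating the old one, it fits React's expectation that state updates produce new objects.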
https://medium.com/swlh/filtering-fruits-in-javascript-and-react-127b3506a890
['Chandler Hanson']
2020-11-24 09:58:15.973000+00:00
['React', 'Javascript Tips', 'Codingbootcamp', 'Coding', 'JavaScript']
The New Gratitude Prayer
Photo by Love Your Neighbour on Unsplash August 13, 2017: This has been a rather scary week. I’m sure not just for me. I have tried very hard since I brokered my deal with The Universe to not watch the news, to remain calm, to listen to my heart, to trust that She’s got this. To know that on the other side of whatever is going on we will have learned the necessary lessons, people will be changed for the better, and Love will eventually win. Other outcomes are not options. But news still floats into my world. Never the good stuff either. And I ponder how we got here. I was concerned we might end up standing in this very place, an unstable personality at the helm — with the best toy a cowboy could ever want right there at his fingertips to prove he had the biggest dick in the world! Especially if the pissing contest involved the other most unstable world leader who also liked to show off his junk. But these are not little boys on a playground, as much as they are acting like it. I’m not an “I told you so” sort of person. I’m really not. So I won’t say it out loud. We’ll just leave it at that. And my deal with The Universe means I need to write from a place of Love and Compassion about this. For all of us. First, there can be no blame placing for how we got here. We, The People, elected this person and worse — all those members of Congress long before him. The system was broken before 45 decided to throw his hat in the ring and run for the nomination way back when. Many of those senators and representatives have not done their jobs to run this country and represent the best interests of their constituents for most of their careers. Yet, they remain comfortably in their jobs because we put them there and leave them there. We should have taken a page from a popular TV show and yelled “You’re Fired!” at the lot of them. This is on us. All of us. Not one party or the other. No finger pointing. Second, we fix it the same way. Together. We do not let hate or the media or one set of values or anyone else’s thoughts deter us from what we know to be right in our hearts. Marching with torches in your hands makes you a mob, just like the one that marched on the castle to kill Frankenstein’s monster. An ignorant, bloodthirsty mass of mindlessness. Bombing innocent civilians does not make you a hero or a conqueror; it makes you a murderer. Unleashing a nuclear weapon which humanity has kept safely inside missile silos for over seventy years makes you a psychopath. Even threatening to do so puts you on the other side of crazy town. Humanity as a pack needs to recognize the sick animals among us and separate them from their power bases, not keep giving them more power. There still remains more Love in this world than hate. We fix this by not letting hate and fear win. We calmly, with Love and Compassion, take back our country. Third, we start with each other. Every day. In our own world. The parts we can reach. I cannot make peace with North Korea, but I can repair relationships in my immediate circle. I can begin a ripple of Love and Kindness and send it out into this world and pray it reaches Washington. I can refuse to live in Fear. Fourth, I will love and cherish each moment of my life, because who knows where this madness leads us — but mostly because that is the way we should live our lives. We only have Now; it is the only moment of our lives we should be concerned with. The future is never promised. There was a saying I heard a long time ago.
The gist of it is this — never answer an angry word with an angry word. It is the response that makes the fight. Never has that philosophy been truer than in what is going on in our world today. I woke up this morning and the world is still here. For that — I will repeat a new mantra. Thank you, thank you, thank you. Note: I wrote this story in August 2017. Oddly — the world has not changed, but the good news is, it hasn’t ended either. This week the issues were different, but our emotions still roll and churn. May Love lead us all to Peace. Namaste.
https://medium.com/recycled/the-new-gratitude-prayer-91a28597c283
['Ann Litts']
2018-10-06 15:53:49.410000+00:00
['Self-awareness', 'Life Lessons', 'Gratitude', 'Life', 'Living In The Present']
Automate Your Freelance Income and Expense Tracking with Google Sheets
How to Track Your Freelance Writing Income and Expenses As you complete projects for clients and have different expenses come in, enter the information in the appropriate sheets. Then you can start entering the different formulas: To calculate your total freelance writing income, go to the Freelance Writing Income worksheet and type “Total” in one of the cells below the freelance writing income information you’ve started entering. In the cell below the amounts you’ve entered, type “=sum(“ without the quotes. Google Sheets may automatically highlight the totals you’ve entered. If it does, just type the closing parenthesis. Now you should see the total you’ve earned from freelance work so far. As you finish more work, you can go to the “Insert” menu and choose “Row above” to enter additional rows above the Total row. As you do, the total will automatically update. Screenshot taken by the author. You’ll use the same process to figure out how much you’ve spent on freelance writing expenses. Once you’ve got these two totals, enter them in the Main Worksheet. The process is easy. Enter them so they automatically update as you put in new information. The only extra thing you may want to add to your freelance writing expenses worksheet is a column called Category. I’ll explain how to do that in the next section. Here are the steps: Go to the main worksheet. In the cell next to “Freelance Writing Income,” type just an equals sign. Now go to the Freelance Writing Income worksheet and click on the cell that has the total. It should automatically add the cell reference to the Main worksheet in the space where you typed the equals sign. Press Enter to complete the cell reference. Once you leave that cell, the total automatically shows up there and updates every time you update the Freelance Writing Income worksheet. Do the same with the Freelance Writing Expenses worksheet. In the space for the total, type an equals sign, then highlight the cells that have the income and expenses totals, if they’re not already highlighted. If they are, close the formula. The difference between income and expenses should automatically show up. It will change as you enter new information. Here’s a screenshot of what your expenses worksheet would look like before you enter any information: Screenshot taken by the author Optional Step If you chose to add a Category column in your freelance writing expenses sheet, here’s how to set it up: Go to the Data menu and choose “Data Validation.” Choose “List of Items,” and make sure “Show Dropdown List in Cell” is checked. Type the categories of your different expenses. Some ideas could be Website Hosting, Social Media, Research Organization, and Freelance Worker. As you enter different expenses, choose the appropriate category from the dropdown list. How to figure out the totals for your different categories: List your categories in a column on your spreadsheet. In the next column, type this formula: =sumif([range of cells listing at least one instance of your expense], [cell that contains at least one instance of your category], [range of cells containing the total]). In the square brackets, you’re going to add the cell ranges or cell references mentioned.
Here’s an example of one of mine: Screenshot taken by the author In this example, the formula tells Google Sheets to look through the range B2 to B30 for cells that match the value in B2 (the first instance of that expense), and then add up the corresponding amounts in the range C2 to C30. Here’s what it returned for the amount it found: Screenshot taken by the author The range B2 to B30 contains the name of the website hosting company I use, 1 and 1. Cell B2 contains the first instance of that name. And the range C2 to C30 contains the amount of that charge, $14. Since the charge occurred twice in the example data I put in, Google Sheets added the two charges together to get $28. Here’s a screenshot showing what your spreadsheet would look like after you entered the information: Screenshot taken by the author. That’s how you set up a Google Sheets workbook that keeps track of your freelance writing income and expenses. If you do this for the entire year, you’ll have all the information you need next year about how much you made from different sources and how much you spent on different freelance writing business expenses. This makes it easier to itemize your deductions if you spend enough to deduct your business expenses on your taxes.
https://medium.com/the-post-grad-survival-guide/automate-your-freelance-income-and-expense-tracking-with-google-sheets-762c32c776c8
['Erica Martin']
2020-12-09 15:17:30.285000+00:00
['Work', 'Analytics', 'Freelancing', 'Writing', 'Money']
US Reaches 200,000 Covid Deaths
https://lizadonnelly.medium.com/us-reaches-200-000-covid-deaths-bf776c4804f2
['Liza Donnelly']
2020-09-23 19:48:06.630000+00:00
['Covid 19', 'Politics', 'Death', 'Trump', 'Coronavirus']
The worst way to get the first row from a table
There are a lot of ways to retrieve a specific row from a table in a database. During an incident I recently discovered one of the worst. Take a look at this TypeScript and Knex-esque code. Let's say each row has two unique identifiers. The caller can supply either of them. async function getSingleRow(id?: number, otherId?: number) { const query = getTable() if (id) { query.where('id', id) } if (otherId) { query.where('otherid', otherId) } return await query.select().first() } But what happens if you supply neither parameter? What happens if, due to a bug, you supply a falsy value for both parameters? The compiler doesn't complain, as that satisfies the types you have specified. The code even runs without error and a row is returned as expected. It appears all is well; however, you have just run a select statement with no where condition and then grabbed the first row in the table. Probably not what you want, and possibly something you may not even notice unless you are really paying attention. A Simple Fix Doubtless there are multiple ways to avoid this subtly broken behaviour. Below is a simple way to make this implicit error condition very explicit. This greatly increases the chances that this will be caught during local development or by automated tests.
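The fix was shown as an embedded snippet that isn't reproduced in this text, so the following is only a sketch of one way such a guard might look (reusing getTable() from the snippet above; it is not necessarily the author's exact code):

async function getSingleRow(id?: number, otherId?: number) {
  // Fail fast instead of silently selecting the first row in the table.
  // Checking against null/undefined (rather than truthiness) also keeps a
  // legitimate id of 0 from being treated as "not supplied".
  if (id == null && otherId == null) {
    throw new Error('getSingleRow requires either id or otherId')
  }
  const query = getTable()
  if (id != null) {
    query.where('id', id)
  }
  if (otherId != null) {
    query.where('otherid', otherId)
  }
  return await query.select().first()
}

A thrown error like this surfaces immediately in local development and in automated tests, which is exactly the behaviour the post is after.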
https://medium.com/pixel-and-ink/the-worst-way-to-get-the-first-row-from-a-table-bc3650fdf976
['Andrew Davis']
2020-04-07 08:03:11.459000+00:00
['Database Development', 'Typescript', 'Knexjs', 'Web Development']
Dear Julie: The Fear
Dear Julie: The Fear Does it ever get any easier to write? Photo by Tonik on Unsplash Dear Julie, How do you get over that feeling of not being good enough when you’re writing? I love writing, but I have so much self-doubt. Does every author go through this or is it a sign that it’s not meant to be? Linda (Warning: the following answer makes copious use of all caps.) Dear Linda, Here’s the real answer: YOU DON’T EVER GET OVER IT. I have never, ever, EVER met a successful author who feels that they are good enough. There may be some out there, but I haven’t met them yet. When a group of authors get together, if the coffee or wine is flowing, sooner or later they are going to start talking about how rubbish they are, how their book is going badly, how they don’t think anyone cares, how it used to be easy but now it’s not, how they’re pretty sure all the negative reviews are true and that they really might as well give up now… I call it THE FEAR. You have only to mention THE FEAR to an author to see a gleam of recognition in their eyes. I once took wine with two of the most successful commercial fiction authors in the UK. I won’t say their names, but believe me, you’ve read their books. And when I said, ‘But surely by now you two must have got over THE FEAR,’ both of them turned to me and said ‘Oh my God, no! It’s worse than ever!’ It’s not surprising, I suppose: writing, like any creative pursuit, is very personal. You feel that you’re not only putting your work out there for scrutiny, but also yourself. It’s bound to be scary. Also, writers often work alone, and it’s very difficult to trust your own judgment, especially when you’re starting out. Personally I think that THE FEAR is not only natural, but necessary. Having THE FEAR means that you are challenging yourself, trying to break out of your comfort zones. It means that you are putting your work out there to be seen. It means that you’re serious about what you’re doing and that you care about it. Often the hardest, and the scariest things to do are the most worthwhile and rewarding. So…how do you get over it? If I’m right, you can’t. It will follow you your entire career. However, I do think it helps to know that everyone has THE FEAR, and that it’s normal, and actually even maybe a good thing. (If you want a good book to read on this very subject, try The Courage to Write by Ralph Keyes, which talks about the ways that some very famous authors have tried to overcome their fear. I also love the idea behind Sarah Painter’s charming and helpful Worried Writer podcast; check that out and hear how other authors get through their fears. http://www.worriedwriter.com) Having writer friends who know what THE FEAR is like can help, too. They can remind you that you’ve overcome it in the past, that no, this attack is not worse than any other time, that yes, you’re always this miserable, and of course it’s okay to have a bit of chocolate to get over it. (Non-writers don’t tend to understand. They look at you like you’re a crazy person when you tell them that the book you loved yesterday is the WORST THING IN THE WORLD today. The best thing non-writers can do is nod indulgently and supply you frequently and copiously with cake.) Mostly though, Linda, I’m afraid you’re just going to have to suck it up and deal with it. Feel THE FEAR and write anyway. If you’re scared, you’re doing it right. Hopefully it will help you to know that this is absolutely normal.
When I teach, I spend a lot of time talking about THE FEAR, and the most common response when I mention it to new writers is utter relief. Being afraid isn’t a sign that there’s something wrong. Everyone feels the same way. Of course sometimes, THE FEAR is a sign of something more serious. If you have mental health issues, it’s often much harder to write. I have suffered from anxiety and depression in the past, and they do make THE FEAR almost impossible to overcome. Please do look carefully at your life and feelings as a whole, and if you feel this is part of a greater mental health problem, call your GP. But in general, if THE FEAR is unattached to deeper issues: it’s normal. It’s expected. It never, ever goes away. Sorry to be the bearer of bad news. But no one ever said this was going to be easy. Thank God for chocolate. Love, Julie x
https://medium.com/novel-gazing/dear-julie-the-fear-bd6c03f47859
['Julie Cohen']
2020-05-15 14:28:09.351000+00:00
['Writing Tips', 'Self Confidence', 'Writing', 'Writing Advice', 'The Fear']
What does coronavirus mean for property investment?
Photo by Fusion Medical Animation on Unsplash In light of the recent COVID-19 pandemic, there is an understandable uncertainty among the British public and those who invest in property about what the future holds. Despite many questioning whether property prices will continue to rise and how capital growth will be affected long-term, the property investment market remains strong throughout it all. Since the current coronavirus pandemic started to impact countries worldwide, the global stock market has also plummeted, which is cause for concern for those who invest in UK property and wonder what it means for their investment. However, the UK property market remains one of the most robust and lucrative investment journeys for any investor who wants to make returns despite global happenings. As an investor or someone looking to invest in property soon, you may be curious to see how coronavirus will impact the current property market in both the long and short term. Plus, it’s interesting to see how the market has held up over time despite economic uncertainty… The Current Property Market As it goes, the property market in the UK right now is a stable and secure investment. This is because, unlike other investment types, bricks and mortar are one of the safest places to keep your savings and cash. While the property market may have had an initial decline (which is only minute compared to the likes of the stock market), investing in property is about more than a quick cash grab. Essentially, property investment will give an investor returns over 5/10/15 years (even longer if you’re willing to wait). However, COVID-19 and other economic uncertainties are only for a short period. The evidence of the property market continuing to soar in the long term can be seen by looking at property price growth over the last decade and at predictions for the future. For example, property prices in Liverpool have increased from around £45,000 in January of 1995 to £149,500 in January of 2020. This 25-year period has seen a change of +227%, which is a huge profit and gain for anyone who has had a property investment during this time. Additionally, these prices are set to continue to rise in Liverpool, with predictions that North West property prices will grow by 24% by 2024. Investing in UK Property The one aspect you must understand about UK property and investing during coronavirus is that it’s still essential you look at the best places to invest. For a long time, people have considered London the main UK city for investment, but now investors are flooding up North to get better property prices and capital growth, yet still the same tenant demand. Cities like Manchester and Liverpool have seen a surge of investors wanting properties there and making a success of this. In Times of Uncertainty How do we know that the property market will continue to be a long-standing and stable investment for an investor despite uncertainty? While predictions show a huge increase in property prices, looking back over the last decade we can see how robust the UK property market has been and how it has continued to thrive and blossom into what it is today. An example similar to the COVID-19 pandemic is the global outbreak of swine flu in 2009. This pandemic was first identified in Mexico in April 2009 but rapidly spread from country to country. During this time, property prices in the UK continued to rise by over 10% (between 2009 and 2010), which highlights the strength and resilience of the UK market.
In times of uncertainty, the property market may decline, and property prices do tend to reduce in value for a short amount of time. This is understandable, given how much the economy is impacted. Yet, despite these small decreases and reductions, the property market always comes out on top. Unlike the stocks and shares market, in which people lose thousands of pounds every day due to declines, property investment is a lot more trustworthy and worthwhile.
https://medium.com/the-property-talk/what-does-coronavirus-mean-for-property-investment-43baeae32142
['Sarah Roberts']
2020-04-03 08:27:28.626000+00:00
['Covid 19', 'Property Market', 'Property Investment', 'Coronavirus', 'Economy']
Announcing Okio 2: Our fast + simple I/O library, Okio, has a new release that supports Kotlin.
Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog At Square, we’re excited about Kotlin. It’s a capable language to build Java libraries and applications with. We love to write code that is both compact and efficient. We’re also eager to adopt Kotlin’s powerful new features. Kotlin/Native will allow us to share code between iOS and Android. Coroutines make concurrent programs easier to create and maintain. Today we’re releasing Okio 2.0. In this release we’ve converted the project’s source code from .java to .kt . The conversion lets us use Kotlin in the library and offer APIs that feel right when the calling code is in Kotlin. It also gives us the opportunity to support multiplatform and coroutines in the future. Compatibility The new release is binary-compatible with Okio 1.15.0. You can replace the old .jar file with the new one and your applications and libraries should just work. The update is also Java source-compatible. When you configure your pom.xml or build.gradle to use the new version you won’t have to change any .java code to get a clean build. (Our changelog notes one minor exception to this, unrelated to the Kotlin transition.) But the update is not Kotlin source-compatible; it adopts Kotlin idioms where they apply. For example, if you’re using ByteString.decodeHex("f00d") you’ll now need "f00d".decodeHex() . We’re using Kotlin’s @Deprecated annotation for a smooth upgrade process. IntelliJ makes fixing deprecations easy Dependencies Okio is core to many of Square’s open source projects. We use it for fast I/O in OkHttp, Retrofit, Moshi, and Wire. When these projects upgrade to Okio 2 they will gain a transitive dependency on Kotlin’s standard library. Kotlin is the best language for Android application development. We expect many Android teams to already be using Kotlin. For their projects Kotlin is already among the applications’ dependencies. Those that don’t can use R8 or ProGuard to shrink the Kotlin library dependency. When we measured, the net increase was negligible: just 7 KiB. For server applications we expect the dependency size to be inconsequential. But we worry about potential diamond-dependency versioning problems. To minimize the chance of such problems we will avoid experimental and deprecated APIs. For libraries and SDKs that use Okio or its sibling libraries we assume the ultimate deployment target will be to Android or a server. In either case we believe the dependency is acceptable. Good Stuff Kotlin is a beautiful language with broad applications. We hope Okio 2 helps to further extend Kotlin’s capabilities. Get the new Okio with Gradle: compile 'com.squareup.okio:okio:2.0.0' Or Maven: <dependency> <groupId>com.squareup.okio</groupId> <artifactId>okio</artifactId> <version>2.0.0</version> </dependency> Full release details are in the changelog. The readme has an API guide.
https://medium.com/square-corner-blog/okio-2-6f6c35149525
['Jesse Wilson']
2019-04-18 21:36:13.869000+00:00
['Android', 'Engineering', 'Open Source', 'Coding', 'Kotlin']
Exploring ways to export clean .svg icons with Sketch…the correct way
Let me walk you through this particular example, which was both confusing and eye-opening. Recently, I had to make an icon set for our new product at Lucidworks. The workflow was pretty straightforward, but once I started to export these bad boys, everything went downhill. Once you open the .svg file in Sublime Text, you see all of the clutter that Sketch generated. Most of it is pretty useless and starts to mess with the way the .svg behaves if you want to manipulate it in code. In our case, we wanted that full control. To speed up the workflow on my end, I needed to export cleaner code for the team so they could continue on the project, rather than go into each .svg file and delete all of the clutter themselves. When you export from Sketch, it generates a lot of messy code. In this example, I designed a simple right-arrow icon. This was the outcome: Most of this code is worthless and we don’t need it. In fact, all we really want is the <viewbox> and <path d>. The rest of the junk can be tossed because we can control all of this in CSS globally for the whole set, rather than per icon. The real headache was the <rect id>, <transform> & <translate>. No matter what I did, Sketch would export these attributes. This was a real problem because once you open the .svg in HTML, the icon was all messed up. Especially if you want to manipulate the icon like we need to.
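For teams who hit the same wall, one way to automate the cleanup rather than hand-editing every exported file is a small script. This is a hedged sketch, not the workflow described above: it assumes Python's standard xml.etree library, and the list of attributes to strip is an assumption you would tune to your own exports.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # keep the default namespace clean on output

# Attributes Sketch tends to add that we'd rather control globally in CSS.
# This list is an assumption; adjust it to what your exports actually contain.
STRIP_ATTRS = ("id", "transform", "fill", "stroke", "fill-rule")

def clean_svg(path_in: str, path_out: str) -> None:
    tree = ET.parse(path_in)
    for element in tree.getroot().iter():
        for attr in STRIP_ATTRS:
            element.attrib.pop(attr, None)  # drop the attribute if present
    tree.write(path_out)

clean_svg("arrow-right.svg", "arrow-right.clean.svg")  # hypothetical filenames
```

This doesn't replicate everything a hand edit would do (it won't remove whole junk elements), but it does cover the repetitive attribute clutter across a full icon set.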
https://medium.com/sketch-app-sources/exploring-ways-to-export-clean-svg-icons-with-sketch-the-correct-way-752e73ec4694
['Sean Kesterson']
2016-06-01 15:51:50.978000+00:00
['Design', 'Sketch', 'UX']
“The Penis Is Too Jarring And Ugly to Be Seen With No Warning.”
“The Penis Is Too Jarring And Ugly to Be Seen With No Warning.” Ozzy Etomi, a supposed feminist, once said. And this is the message she tried to pass. Photo by Dainis Graveris on Unsplash On Friday, 27th of November, at precisely 04:38 GMT, a supposed feminist and founding member of the Nigerian feminist organization Feminist Coalition (@feminist.co), Ozzy Etomi, tweeted, “The penis is too jarring and ugly to be seen with no warning.” Photo from Twitter The message, worded rather clumsily, ended up being seen by many thanks to Twitter’s algorithms. Her tweet went viral within three hours, the main reason being the huge number of comments the tweet received — most were from men. The statement, “The penis is too jarring and ugly,” is the only line a lot of people saw. As a matter of fact, the rest of the tweet was entirely disregarded by no less than 90% of the men in the comment section. The tweet went viral, but no message was passed — all because of one poorly chosen word. Although most women got the message and related to it, the message was particularly meant for men. These are some of the replies the tweet received: Photo from Twitter Photo from Twitter Photo from Twitter Photo from Twitter A lot of men went out of their way to shame the vagina, speaking very badly of how it looks — forgetting the message of the tweet. Ozzy’s tweet had one very specific message for men: don’t send nude photos of your penis to people without notifying the one you aim to send them to. Why? Because it could be a turn-off. Men have failed to understand that the penis is not the only part of the body which is attractive; there are other parts women would love to see — not just the penis. We’ve been seeing it for years in our DMs. So, no. Your penis is not ugly. It’s just better when it’s not in pictures, and no one wants to kill the excitement of seeing the penis live and being able to touch it, except men. Understand that before you send a dick photo, it is important to forewarn your partner or whoever is in question. It’s respectful and attractive. More from the author:
https://medium.com/sex-and-satire/the-penis-is-too-jarring-and-ugly-to-be-seen-with-no-warning-299b16bd2190
['Jada Mikel']
2020-12-05 10:54:44.846000+00:00
['Social Media', 'Beauty', 'Science', 'Sexuality', 'Sex']
Incentivizing Death
Incentivizing Death How the modern GOP was designed to fail us Jared Kushner, his father-in-law, and state TV pundits are declaring the COVID-19 response a success. In real life, we face a possible depression and 170,000+ deaths from it this year. The Administration’s management has been disastrous. But it makes sense from a cynically cutthroat perspective. Below is a list of party priorities and policies that hurt all but a few of us, likely by design. This guy wants to see death certificates FIRST RULE: REPRESENT YOUR FUNDERS Put yourself in Mitch McConnell’s shell. He and other Republican politicians must protect the interests of their biggest donors by maximizing corporate profits. That means cutting taxes for the top 1% and corporations plus protecting loopholes and use of offshore tax havens. And while certain Democratic politicians deserve to be held accountable for similarly selling out, here we’ll prioritize our most corrupt “leaders.” Other GOP methods with immediate relevance: Paying workers as little as possible and minimizing regulations on worker safety. Most Americans prefer working for a livable wage without being put in immediate danger. For now Mitch McConnell refuses to pass another relief package. His demand: protection for companies against lawsuits from employees forced to return to work in unsafe conditions. Notably only 31% of Americans making under $10.80 per hour have paid sick leave. Rubber stamping outsourcing of manufacturing to countries where workers make even less. This allows cutting corners on environmental and human rights. It has also created PPE scarcity and test material supply chain issues. After four months most states still haven’t been able to procure enough tests to meet basic benchmarks for reopening responsibly. Shrinking unprofitable parts of government. Pandemic preparedness like testing and contact tracing abilities seems important. Congressional Republicans and neoliberals have repeatedly cut CDC funding, and the federal pandemic response team was disbanded in 2018. We also like having a Postal Service. The POTUS is currently withholding disaster aid from it. These are parts of government Grover Norquist has long sought to cut in order to be able to “drown it in the bathtub.” Using public-funded industry bail-outs in bad times (a.k.a. socializing costs). The airline industry received $50 billion in the CARES Act. We similarly bailed out big financial institutions like Bank of America in 2009 with a Democratic administration. Privatizing healthcare, infrastructure, war, education, etc. In social democracies like Denmark, taxpayers fund the healthcare system and in return get guaranteed care. In a pandemic universal access to quality care makes us all safer. In the U.S. millions of laid-off workers and others currently lack it, since our system is mostly private and linked to employment. In addition, the Trump Administration has joined Republican-led states in a suit that has reached the Supreme Court to “invalidate the entire Affordable Care Act.” If successful and absent miraculously better alternative legislation, that would further reduce coverage.
https://tysonvictorweems.medium.com/incentivizing-death-8f85ea604ff9
['Tyson Victor Weems']
2020-05-12 19:28:28.405000+00:00
['Donald Trump', 'Mitch Mcconnell', 'Politics', 'Coronavirus', 'Republican Party']
Change Blindness & UI Design
The test results are in and I’m afraid you have Change Blindness Blindness. It’s a terrible condition. Victims suffer from an exaggerated sense of their ability to detect changes in their field of view. The good news: You are not alone. 99.999% of the world’s population suffers with you. However, if you design websites or software for a living, Change Blindness Blindness can have serious side-effects including… Positioning error messages where users won’t see them. Inadequately communicating state changes. Depressed conversion rates. Loss of income, trouble sleeping. With a little time and attention, you can learn to manage your condition and, more importantly, help your users better manage theirs. Change Blindness Experimental designs used to demonstrate change blindness have eerie similarities to common, online user experiences. First, study participants are shown a scene. Then, some kind of flash or other noise interrupts the visual field. Finally, a second scene is shown, identical to the first except for one (often glaring) change. People are surprisingly bad at detecting even drastic changes, like entire objects (dis)appearing from the scene. From Supplementary Information for ‘Change blindness as a result of mudsplashes.” Used here without permission. This handy demo lets you try it for yourself… http://www.gocognitive.net/demo/change-blindness Why this happens isn’t entirely understood, but it seems to be a result of how our brains construct meaning from visual information. The interruption is crucial, whether it’s a total flash or just a few dots splashed across the screen. It’s very much like what happens when you refresh a web page. State Changes & User Experience Consider what happens when you fill out a typical web form: You hit a button and some invisible process is kicked off. Your browser thinks for a few seconds, then its screen flashes white. If anything failed validation, the same page probably reappears with a new element: an error message. Something like this. As a designer, that all looks pretty obvious. You have the luxury of already knowing what’s going on, already understanding the state change you’re trying to communicate. You’re also looking at the whole page, you’re not in the middle of trying to complete task, and you didn’t have to look at it with a flash of page refresh in-between. What your users see or, more accurately, don’t see. Even without the refresh flash, having one’s attention focused on the bottom of the page, near the submit button, will make that error message as invisible as a ninja on your ceiling. (Look up right now. If you don’t see a ninja on your ceiling… it’s already too late.) Slow feedback on button press, long load times, and extraneous things competing for your user’s attention can all make them blind to seemingly obvious changes in state. Fortunately, change blindness has its own blind spots. Facilitating Change Detection The flash is essential to causing change blindness. If the state change is instantaneous, change detection is pretty much guaranteed. To prevent the refresh flash, don’t wait until form submit to perform field validations. Whenever possible, validate client-side as the user types. If you’ve used the internet since 2005, this should feel familiar. Validations that can’t be performed client-side can still be done in the background. Keep attention focused on the button location while validation is running, then display any state changes in the same general area. 
Transforming the button into a progress indicator is ideal. A less conventional approach. Participants in change blindness studies are better (still not great) at detecting changes in the center of the visual field, where most of their attention is focused. Summary Change blindness is a serious epidemic. Changes in state that occur during a page refresh are particularly likely to escape users’ notice. To clearly communicate state changes in web forms…
https://medium.com/user-experience-behavior-design/change-blindness-ui-design-687a8abf5ab6
['Dan Bayn']
2015-05-12 19:51:01.906000+00:00
['Usability', 'Psychology', 'User Experience']
The case for and against pro-rata rights
The case for and against pro-rata rights What can we learn from nuclear physics to make pro-rata discussions less radioactive? One question that inevitably comes up in every investment round (except for a startup’s very first one) is whether existing investors participate in the financing, and if so, to what extent. As Fred Wilson wrote a little while ago, it’s become an increasingly controversial question in recent years and has led to many arguments between founders, early-stage investors, and later-stage investors. Pro-rata 101 If you’re not familiar with the topic, here’s a quick primer. If you know the basics of pro-ratas, you may want to skip the next few paragraphs. If a company raises capital by issuing new shares to a new investor, the total share count of the company increases, and consequently, the ownership percentages of existing shareholders decrease. That process is called “dilution”, a term that, before raising my first VC round in 1998, I only knew in the context of homeopathy. Homeopathic dilutions are typically so extreme that not a single molecule from the original substance remains in the solution, which means that homeopathy is a $5 billion business of selling nothing (but water and alcohol). The amount of dilution in a VC round (or any equity financing) depends on the valuation of the company and the investment amount and is typically in the 15–30% range. As a general rule, all shareholders in a company have the right to participate in a new financing round on a pro-rata basis. So every shareholder can decide to invest more money to partially or completely offset the dilution or not invest and be diluted accordingly. The amount that a shareholder needs to invest to completely offset the dilution can be calculated by multiplying the total volume of the round with the shareholder’s ownership percentage before the financing. An early version of pro-rata rights from the Prussian Commercial Code as of 1862 (!) That amount is the shareholder’s “pro-rata amount” or just “pro-rata”. Note that if the company’s ESOP is increased as part of the financing round, as is often the case, the shareholder will still end up with a lower ownership percentage after the round. Pro-rata rights usually don’t protect from this type of dilution. The right to participate in new financing rounds on a pro-rata basis, the “pro-rata right”, is a fundamental right that protects the interests of minority shareholders e.g. in highly dilutive financing rounds. In many countries, pro-rata rights are enshrined in the law, i.e. shareholders, by default, have a pro-rata right in these legislations. The problem(s) with pro-ratas That, in a nutshell, is what a pro-rata right is. So what’s the problem? There are two very different scenarios: Scenario #1: The company is doing OK but not great. Fundraising is somewhat difficult, and to avoid negative signaling, the founders want existing investors to participate. This can put existing investors in a situation where they would prefer not to invest more money, but doing so might send such a bad signal to prospective new investors that it could jeopardize the financing. As you may know, we’ve addressed this issue with our “Series A pledge”. Whenever we make a seed investment, we commit to participating in the Series A to avoid any potential concerns around signaling from the get-go. Scenario #2: In this scenario, the company is doing well and is becoming “hot”. In these cases, there is usually “too much money on the table”, i.e. 
the new investor(s) want to invest more rather than less in order to reach their ownership targets, and existing investors would like to participate, too. This is the scenario Fred Wilson wrote about: “In the last ten or so years, companies, lawyers, boards, management teams, founders, and in particular late stage investors have been disrespecting the pro-rata right by asking early stage VCs to cut back or waive their pro-rata rights in later stage financings. […] I think this is bad behavior as it disrespects the early and critical capital that angels, seed investors, and early stage VCs put into the business to allow it to get to where it is. If the company agrees to a pro-rata right in an early round, it really ought to commit to live up to that bargain.” Here’s an example: A new investor, which the company is keen on getting on board, insists on getting a certain percentage of the company, say, 20%. The founders don’t want to get diluted by more than, say, 23%. As a result, only 13% of the round (3/23) is available to existing investors, even if their pro-rata right amounts to much more, say 30% of the round. In this situation, existing investors are often asked (sometimes urged, and occasionally more or less forced) to give up their pro-rata rights, or large parts of them, to make space for the new investors. As Fred says, this can be extremely frustrating for existing investors for whom the pro-rata right might have been an important part of the initial deal. Things get particularly nasty if existing investors are treated differently, especially if larger existing investors use their voting powers to waive pro-rata rights for all existing investors while securing allocations for themselves. Giving up pro-rata rights in your best-performing portfolio companies is particularly bad if you consider that investors often end up participating in financings of companies that aren’t doing so well in order to support them (see Scenario #1 above). This effectively means adverse selection — you have to participate in companies that aren’t doing well, and you cannot participate in those that do well. At the same time, it’s rational that founders want to give a larger allocation to new investors: Existing investors will continue to support the company whether their stake gets diluted or not. Therefore, founders would rather use a larger allocation to get new investors on board and incentivize them. In a way, allocations can become a currency to get value-add from investors. If a company raises several rounds of funding, pro-rata rights can become a real burden. Imagine that at some point, investors own 60% of a company. If that company wants to raise, say, $30 million and all investors take their full pro-rata, $18 million will come from existing investors, and only $12 million will be available for new investors. That might not be enough for the type of investor which the company wants to bring in at that stage of the journey. What could a fair solution look like? It’s a situation in which all stakeholders — founders, existing investors, new investors — have legitimate interests that can’t be fully aligned. What could a fair solution look like? There is a spectrum of opinions: On one end of the spectrum, there is the view that pro-rata rights are sacred because they are, well, rights. Pacta sunt servanda, “agreements must be kept”. ;-) I can relate to that view, but I don’t think it can be applied categorically. 
As explained above, ever-increasing pro-rata rights can become a massive burden for companies. On the other end of the spectrum is the view that pro-rata rights can basically be ignored and must be waived whenever there’s not enough space in a financing round. This is the most convenient solution for the company and maximizes the founders’ ability to use pro-ratas as a currency for getting value-add, so it has some merits. However, if a company doesn’t want to deal with pro-rata rights, it would be more honest and straightforward not to grant them in the first place and have the hard conversation before taking the investor’s money. What if pro-rata rights had a decay rate built-in? Here’s an idea. What if participation rights faded out over time? What if investors had a full pro-rata right in the round following their initial investment, but in each round thereafter, their participation right was cut in half? Like a radioactive element that loses 50% of its energy in each half-life. So a seed investor, for example, could do 100% of their pro-rata in the Series A, 50% in the Series B, 25% in the Series C, 12.5% in the Series D, and so on. Likewise, a Series A investor could do 100% in the Series B, 50% in the Series C, and so on. To a certain extent, this is how things often play out naturally anyway. But I’m wondering if agreeing on it upfront could make it more predictable for everyone, and make the sometimes explosive pro-rata discussions less radioactive. :-) Would a built-in decay rate make pro-rata discussions less radioactive? When I shared this idea with Tilman Langer, he rightly pointed out that there would be various practical difficulties. But the current standard — investors keep their pro-rata rights (almost) forever, but frequently it’s a right that exists only on paper, while companies are burdened with an ever-increasing stack of pro-ratas — doesn’t look particularly smart. So maybe the radioactive approach is worth a shot? Don’t miss out on any future P9 content by signing up to our ICYMI: newsletter!
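To make the numbers concrete, here is a small sketch of both the basic pro-rata calculation described earlier and the proposed half-life decay. All figures are hypothetical, and the decay schedule simply encodes the idea above: a full right in the round following the investment, then half of the remaining right in each subsequent round.

```python
def pro_rata_amount(round_size: float, ownership_before: float) -> float:
    """Amount needed to fully offset dilution: round size times current ownership."""
    return round_size * ownership_before

def decayed_fraction(rounds_since_investment: int) -> float:
    """Share of the pro-rata right that survives: 100% in the next round,
    then halved in each round after that (the 'half-life' idea)."""
    return 0.5 ** max(rounds_since_investment - 1, 0)

# Hypothetical numbers: a seed investor holding 12% before a $30M round,
# two rounds after their initial investment.
full = pro_rata_amount(30_000_000, 0.12)     # $3.6M to fully offset dilution
allowed = full * decayed_fraction(2)         # 50% in the second round after entry
print(f"${allowed:,.0f}")                    # $1,800,000
```

The same schedule gives 100%, 50%, 25%, 12.5% of each round's full pro-rata as the company moves from Series A to Series D after a seed investment, matching the progression sketched above.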
https://medium.com/point-nine-news/the-case-for-and-against-pro-rata-rights-4e7760ad1feb
['Christoph Janz']
2020-12-01 13:35:44.431000+00:00
['Venture Capital', 'Startup']
The Difference Between Living in New York and San Francisco
This originally appeared on TheCooperReview.com. After living in New York for 5 years, I recently moved to San Francisco. Neither city is clearly superior, but there are some distinct differences… Read more funnies at TheCooperReview.com.
https://medium.com/conquering-corporate-america/the-difference-between-living-in-new-york-and-san-francisco-3e8ae58832a5
['Sarah Cooper']
2016-04-18 17:45:23.912000+00:00
['Humor', 'San Francisco', 'New York']
Hate Speech Has No Place In The World, Even Online
New Zealand is a country often associated with postcard picturesque beauty, brimming with spectacular mountain ranges, mischievous parrots and locals with unfathomable accents. That temporarily changed this week after the abhorrent acts of a single coward, armed with a hoard of weapons and a brain infected with the virus of extreme right-wing ideology, perpetuated in part by online forum 8chan, a place where like-minded individuals come together and discuss which cross-sections of society should be slaughtered, for the betterment of our race. A natural period of enquiry usually follows such a tragic event, in an effort to prevent similar occurrences, and given that it is exceptionally difficult to identify potential mass murderers, our attention turns to factors that we can control. Gun reform is already being discussed by the New Zealand cabinet, just four days after the attack occurred, testament to their progressive government and laudable prime minister Jacinda Ardern. The terrorist’s mental health is another consideration. In his rambling, racist manifesto he claims to be an ordinary white man, as though everyday, mentally-healthy people harbour urges of puncturing the organs of innocent people with bullets. As a native Australian, the shooter had access to discounted mental health programs via their Medicare system, providing him with a limited number of appointments with a mental health professional, though it’s unclear whether these were ever utilised, or how effective they would have been in steering him away from extreme ideology. The third major consideration, and much murkier problem, is how to moderate hate-filled discussion boards on websites like 8chan. These are hotbeds of righteous discontent, loaded with reclusive figures whose pitiful anger can develop into violent, unbridled extremism, occasionally forming a character of such severity as the Christchurch shooter, so psychologically disturbed and miseducated that he considers his actions enough to prevent Muslims from migrating to predominantly white countries such as New Zealand. The United States, UK, Australia, and many other countries fall under the United Nations’ International Covenant on Civil and Political Rights treaty, which includes the prohibition of certain types of hate speech, such as inciting violence against an ethnic group. The problem is one of enforcement — given that there’s no such thing as an internet police force (thank god), is it possible to systematically and efficiently censor lunatics like the Christchurch shooter, so that their violence-inciting ideology is eliminated before it reaches more gullible and mentally-unhealthy minds? The web is enormous — over 1.5 billion sites and growing. For this reason, websites are expected to moderate their own content in an effort to keep things in accordance with international law, often through the use of self-written codes of conduct. This method is useless for websites like 8chan, which was created as a place for people to share whatever content they wanted, regardless of its illegality. It even had chat boards dedicated to child rape. Though Google does have the power to remove illegal content from its directories (it removed 8chan after child porn was discovered), the company is understandably reluctant to ban websites that host content that isn’t categorically illegal, such as right-wing ideology. 
It’s up to the creators of discussion-based websites to moderate their content, which includes having the financial resources needed to overcome the potentially gargantuan challenges that accompany moderation. Diligent human and algorithmic moderation of content, along with constant refining of the rules, is needed to reduce illegal and hateful content on large websites, a mammoth, ongoing task that Facebook is gloomily familiar with. For 8chan — a website created with the purpose of allowing the most vile opinions to be shared and discussed freely — moderation is unimportant. 8chan’s owner Jim Watkins claimed that he doesn’t have a problem with white supremacists talking on his site, despite it encouraging mass murder in far-flung, usually peaceful cities such as Christchurch. With the failure of self-moderation, one might expect the responsibility of regulating hateful content to fall to a government-appointed regulatory board in the country where the website is hosted, one that reviews the content of questionable sites such as 8chan, with the power to take them offline if necessary. 8chan is infamous for hosting illegal content, making it a prime target for such a regulatory board. Surely a government cannot stand by while a public, highly popular website hosted in its country openly discusses child rape, or advocates the destruction of the Muslim faith? While this kind of moderation would be challenging beyond belief, and would probably require much free assistance from the general public, the alternative is allowing destructive, hateful ideas to spread among the most depressed and disillusioned minds in the human race. Freedom of speech is essential for a democratic, fair society in which ideas can be discussed without fear of consequence. The ICCPR tells us that the right to freedom of expression is not an absolute right. This means that platforms such as 8chan cannot have free rein to host disgusting, violence-promoting content. The ICCPR exists for this very reason. The problem with freedom of speech is that it is also the freedom to be evil. It is possible to protect freedom of speech and still censor websites that repeatedly violate hate speech laws. The difficult part is working out how to do so. Figuring out how to regulate echo chambers of mentally deranged hate such as 8chan is an absurdly challenging task, but also an incredibly important one, worthy of the extensive time and investment needed to remove the soapboxes of senseless would-be terrorists.
https://medium.com/antidotes-for-chimps/hate-speech-has-no-place-in-the-world-even-online-cb1ed9c080fe
['Rob Marchant']
2019-03-20 02:46:54.188000+00:00
['Technology', 'Politics', 'Mental Health', 'Government', 'Life']
Startup Spotlight: VComply
VComply is a cloud SaaS product that intertwines responsibility mapping with the management of compliance and risk. Its governance suite lets organizations manage policies, contracts, documents, compliance evidence, etc. Read on to learn more about how VComply’s CEO, Harshvardhan Kariwal, manages his team and the advice he’d give every founder. — In a single sentence, what does VComply do? VComply helps businesses automate compliance and risk management. — How did VComply come to be? What was the problem you found and the ‘aha’ moment? When we were running another edtech startup, our compliance function was outsourced to a corporate secretarial firm. We had little oversight of what was going on, and as a result we missed an important regulatory compliance deadline and ended up having to pay a $4,000 penalty. That was it! That’s when we decided there had to be a better way of managing compliance, and VComply was born. — What sets VComply apart in the market? Simplicity: the ease of use of the product and the flexibility it offers customers is one of the biggest factors in VComply’s success. While the product is simple to use, it is robust enough to handle complex workflows and can cater to a variety of use cases. — Have you pursued funding and if so, what steps did you take? VComply has been funded by Accel Partners. — What KPIs are you tracking that you think will lead to revenue generation/growth? Monthly Recurring Revenue, churn rate, NPS, and CLTV (customer lifetime value) are top of mind. The most important metric to track from a growth perspective is growth in MRR month over month. — How do you build and develop talent? We have implemented OKRs at the company, which keeps our team focused and result-oriented. We have frequent one-on-one meetings with team members to identify the key challenges they are facing. A large part of our budget is earmarked for our in-company learning and development program. We have two days every month dedicated purely to learning and trying out new ideas. — What’s been the biggest success for the team? While focusing on growth is extremely important for any startup, it should not come at the cost of unsustainable growth hacks. Over 90% of startups fail because of this. For us at VComply, it is extremely important that we prioritize sustainable growth. What we mean by that is that the business needs to be cashflow positive while having steady, exponential revenue growth and tight control over the burn rate. — What are the biggest challenges for the team? The biggest challenges for us today are growing the team fast enough to meet growing customer demand and ensuring that we continue to stay relevant and establish ourselves as thought leaders in the industry. — What milestone are you most proud of so far? Partnering and working with large enterprise customers and banks very early on and winning their trust and support for our product. — What advice would you give to other founders? While there are many exciting things you could be working on within your startup, it is extremely important that you focus on building the right team! The right team makes all the difference in the growth journey of your startup and is the single most important factor underpinning its success.
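Since month-over-month MRR growth is called out above as the key growth metric, here is the arithmetic in a couple of lines. The figures are made up purely for illustration; nothing here comes from VComply.

```python
def mom_mrr_growth(previous_mrr: float, current_mrr: float) -> float:
    """Month-over-month growth expressed as a fraction of last month's MRR."""
    return (current_mrr - previous_mrr) / previous_mrr

print(f"{mom_mrr_growth(42_000, 46_200):.1%}")  # 10.0% growth on hypothetical figures
```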
https://medium.com/startup-grind/startup-spotlight-vcomply-5d52c9afd609
['The Startup Grind Team']
2020-06-03 18:25:22.281000+00:00
['Tech', 'Startup', 'Startup Lessons', 'SaaS', 'Startup Spotlight']
How to Expose Your Services With Kubernetes Ingress
How to Expose Your Services With Kubernetes Ingress Allow your applications to talk to outsiders Photo by Dima Pechurin on Unsplash In my previous posts about Kubernetes, in order to expose my services to my home network, I used a load balancer service called MetalLB to expose the service to the VM bridge and an instance of Nginx on the main host as a reverse proxy to my home network. But there are some limitations to that. Kubernetes is not aware of Nginx, and can’t control its configuration, so you have to manually configure it. Also, MetalLB steals away some IP addresses from the VM bridge, and the number of IPs you configure it with, you have to guess at. If you need more than allocated, you’d have to go back in and configure MetalLB again. It would be nice to have services that are defined as ingress routes directly exposed to your home network. In order to do that, you need an ingress controller and a route from the main network to the Kubernetes bridge network. We’ll start with the ingress controller. To follow along with this article, you’ll need a setup similar to the one I created with my article Kubernetes from Scratch. In it, I created a bare-bones Kubernetes system on VMs running on bare-metal. My intention for this article is to help understand what’s going on behind the scenes by not using a pre-packaged system or one offered by a cloud provider. I’m also trying to be cloud provider agnostic, so knowledge gleaned from this article will help you better understand Kubernetes no matter what cloud providers you are using. You will also need an understanding of Linux systems, especially networking. First, we need something to ingress to. I happen to have a simple service that listens for HTTP get requests and responds with “hello world”. Create a file called simple-service.yaml and add the following: apiVersion: apps/v1 kind: Deployment metadata: name: hellok8s-deployment labels: app: hellok8s spec: selector: matchLabels: app: hellok8s template: metadata: labels: app: hellok8s spec: containers: - name: hellok8s image: docker.io/rlkamradt/hellok8s:latest ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: hellok8s-service spec: type: ClusterIP selector: app: hellok8s ports: - port: 8080 targetPort: 8080 That’s about as simple as you get in Kubernetes. Now, run kubectl apply -f simple-service.yaml That will start up the service, and expose port 8080. But expose where? If you run curl http://localhost:8080/ on the main host, you’ll get nothing. You can look at the service: rkamradt@beast:~/scratch$ kubectl get service hellok8s-service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hellok8s-service ClusterIP 10.111.98.155 <none> 8080/TCP 80s Notice that it has a cluster IP but not an external IP. You can try curling with the cluster IP, but you won’t get anything. Let’s try one other option. If you have access to your nodes (my nodes are called kube1–4) you can ssh to them. Now run the curl with your cluster IP: Hello World rkamradt@kube1:~$ curl http://10.111.98.155:8080/ Hello World You can try that on each node, and you’ll get the same answer. So one of the functions of the ingress controller is to expose the service back out to the main host. Now we must pick an ingress controller, of which there are several. If you are running with a cloud provider, they will normally have a preferred one (or only one). But since I’m my own cloud provider, I have to pick one. I’m picking the semi-official Nginx ingress controller. 
You can install it straight from the Kubernetes GitHub page: This creates a namespace called ingress-nginx . You can see all the objects it creates with kubectl get all -n ingress-nginx . rkamradt@beast:~/scratch$ kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/ingress-nginx-admission-create-lfc2b 0/1 Completed 0 89s pod/ingress-nginx-admission-patch-rmcvq 0/1 Completed 0 89s pod/ingress-nginx-controller-5f98fb55b8-7pnjt 1/1 Running 0 99s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ingress-nginx-controller NodePort 10.96.223.195 <none> 80:32688/TCP,443:32569/TCP 99s service/ingress-nginx-controller-admission ClusterIP 10.100.112.59 <none> 443/TCP 99s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ingress-nginx-controller 1/1 1 1 99s NAME DESIRED CURRENT READY AGE replicaset.apps/ingress-nginx-controller-5f98fb55b8 1 1 1 99s NAME COMPLETIONS DURATION AGE job.batch/ingress-nginx-admission-create 1/1 8s 99s job.batch/ingress-nginx-admission-patch 1/1 8s 99s A couple of things to note: first, it has two jobs that run once and then leave their pods around, perhaps for diagnostic purposes. They may have a TTL, or we may just need to get rid of them after a while. I’ll have to look into that when I get sufficiently annoyed seeing them hanging around. The other thing to note is that it created a NodePort service, which would require you to forward with kubectl port-forward . I suppose it does this because it can’t expect that a load balancer would be configured, but in our case, we have MetalLB. We need to rectify that by editing the service/ingress-nginx-controller service. Run kubectl edit service ingress-nginx-controller -n ingress-nginx . Find the spec.type field and change it from NodePort to LoadBalancer . Now when we run kubectl get all -n ingress-nginx we can see the type has changed to LoadBalancer and that it has an external IP address. service/ingress-nginx-controller LoadBalancer 10.96.223.195 192.168.122.240 80:32688/TCP,443:32569/TCP 4m36s The IP address was allocated from MetalLB, yours will probably be different. If we curl that address, we’ll get: <html> <head><title>404 Not Found</title></head> <body> <center><h1>404 Not Found</h1></center> <hr><center>nginx/1.17.8</center> </body> </html> rkamradt@beast:~/scratch$ curl http://192.168.122.240 404 Not Found 404 Not Found nginx/1.17.8 Now we can configure our ingress point. There are a lot of different options including virtual hosts, path redirect, and https termination. But we’ll start very simple indeed. Create a file called simple-ingress.yaml and add the following to it: apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /testpath pathType: Prefix backend: serviceName: hellok8s-service servicePort: 8080 Start it up with kubectl apply -f simple-ingress.yaml . This will configure the ingestion service to forward all requests that have /testpath as the path to the hellok8s-service . 
Try it out: Hello World rkamradt@beast:~/scratch$ curl http://192.168.122.240/testpath Hello World But curling again without the path /testpath it’s still the 404 page: <html> <head><title>404 Not Found</title></head> <body> <center><h1>404 Not Found</h1></center> <hr><center>nginx/1.17.8</center> </body> </html> rkamradt@beast:~/scratch$ curl http://192.168.122.240 404 Not Found 404 Not Found nginx/1.17.8 We could change the ingress rules and have different paths go to different services, or use virtual hosts that go to different services based on the Host header. You can also define a default ingress in case you don’t like the standard Nginx 404 page. But there’s still a problem: if I use the browser on my laptop to go to http://192.168.122.240/testpath , I’ll get an error. We need to somehow route to the 192.168.122.0/24 network from our home network. That’s going to require a bit of skill in Linux networking. Where does the 192.168.122.0/24 network come from? That’s KVM’s default bridge network. All the VMs we create get IP addresses assigned to that range. MetalLB also uses that network to assign addresses within a certain pool (my configuration is 192.168.122.240-192.168.122.250 ) You can find the addresses that KVM assigns from by running sudo virsh net-edit default . You can see the output and ensure that it doesn’t interfere with the range used by MetalLB: <network> <name>default</name> <uuid>dc658641-aded-465d-b472-1cc427c76626</uuid> <forward mode='nat'/> <bridge name='virbr0' stp='on' delay='0'/> <mac address='52:54:00:1d:5b:25'/> <ip address='192.168.122.1' netmask='255.255.255.0'> <dhcp> <range start='192.168.122.2' end='192.168.122.232'/> </dhcp> </ip> </network> I set my dhcp range to end at 192.168.122.232 so it wouldn’t interfere with the range from MetalLB. You’ll also see that the forward mode is nat so it acts similar to your home router, traffic can go out, but the outside world can’t initiate a connection. But the main host has an ‘in’ to this network, it can access the bridge without going through NAT. If the main host can see it, I should be able to play around with iptables and port forward. Be sure to shut off any service on your main host that might be listening on port 80 or 443. Here are the commands that I use to accomplish that: You’ll have to use sudo /bin/bash to run all these commands as root (or prefix each with sudo ). Test it out by opening http://<mainhost>/testpath in your browser. It worked the first time for me! Then you can make your changes permanent. I was able to use apt-get install iptables-persistent which saved the rules to /etc/iptables/rules.v4 which are automatically loaded on reboot. Your particular distribution of Linux may require different ways to save. You might be tempted to open up additional ports as well, but by opening up just the two, you can keep your ingress security simple. I might even remove the ingress to port 80 and force people to use 443. That is, as soon as I get HTTPS set up. Which brings me to my next section: setting up HTTPS. Now, HTTPS, SSL, and TLS have given great security to the internet. But they have also caused many programmers to pull out their hair. Even if the applications you write don’t terminate HTTPS, eventually you’ll be called upon to make calls to an HTTPS service with a funky certificate. Why are certificates so hard to deal with? First, they need to come from official certificate authorities (CA). 
Second, the common name (CN) value in the certificate must match the hostname to ensure that the hostname you typed in your browser matches the certificate offered by the server. This is true not just for browser access, but for application access. One way to alleviate this situation is to have one server provide all ingress and terminate HTTPS. When I say “terminate HTTPS”, that means that the ingress server provides the certificate and then forwards requests to the application in plain HTTP. This is what I like to call “exoskeleton security,” because your security is provided by the fact that you have only one ingress point, and it can be locked down pretty tightly. The fact that we can’t route to the KVM bridge network — except by making specific rules in the IP tables of the main host — gives us that security. It’s a little harder when you have multiple hosts or cloud hosts where you don’t know exactly where they are, but the subnet isolation is still there and should provide the exoskeleton you need. It also alleviates the need to have HTTPS handling in your application, and that’s a security risk in and of itself. If we browse to our new service using HTTPS on Chrome, we get this screen: Other browsers may present different screens. It’s meant to look scary; if this was a real website on the internet, I would definitely be hitting the “back to safety” button. When Nginx is installed, it will create a self-signed certificate. Clicking the “Advanced” button gives us more information: Now you can set up Chrome to trust your certificate, but because there’s no standard place for trusted certificates to be stored, each browser you might use (and each programming language that might read from this URL) needs to be configured to trust this certificate. By clicking on the “Proceed to beast (unsafe)” link, you’ll get your “Hello World”. If you click on the “Not Secure” next to the URL, it will show you why it thinks it’s not secure. Click on the “Certificate (Invalid)” link and you’ll see the certificate Nginx created for you. So, not only is this certificate not verified by a third party (a CA), the Common Name, which should be the hostname, is “Kubernetes Ingress Controller Fake Certificate.” As stated above, this isn’t a tutorial for creating a production system. It’s for learning the nuts-and-bolts of Kubernetes, so you’ll be better prepared to create a production system. We could just leave it as is and make you click through the warnings on every access. (I’ve been in many workplaces that made you do that). But let’s go one step further and install a certificate manager into Kubernetes that will allow us to create certificates from a CA for each ingress. We’re going to use the originally named cert-manager for our certificate manager. To install all the artifacts needed for the basic manager system, use the following: That will create the namespace call cert-manager . You can see all the artifacts with kubectl get all -n cert-manager . 
rkamradt@beast:~/scratch$ kubectl get all -n cert-manager NAME READY STATUS RESTARTS AGE pod/cert-manager-74d6c4d49b-md4lf 1/1 Running 1 22h pod/cert-manager-cainjector-77bc84779-7vhgj 1/1 Running 1 22h pod/cert-manager-webhook-5b5485577f-6csgf 1/1 Running 1 22h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/cert-manager ClusterIP 10.100.138.27 <none> 9402/TCP 22h service/cert-manager-webhook ClusterIP 10.97.218.179 <none> 443/TCP 22h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/cert-manager 1/1 1 1 22h deployment.apps/cert-manager-cainjector 1/1 1 1 22h deployment.apps/cert-manager-webhook 1/1 1 1 22h NAME DESIRED CURRENT READY AGE replicaset.apps/cert-manager-74d6c4d49b 1 1 1 22h replicaset.apps/cert-manager-cainjector-77bc84779 1 1 1 22h replicaset.apps/cert-manager-webhook-5b5485577f 1 1 1 22h Now we should have a certificate from a CA. We’re going to create our own CA and produce a root certificate that the certificate manager can use to sign individual certificates. They still won’t be trusted by your browser, but you only have to install the one root certificate on the browser to get any certificate created from it trusted and it’s only one step away from using a real CA. This will create two files, ca.key and ca.crt. The key file is your private key which will need to be secret. The crt file is your public root certificate, which you can install in your browser. The key file is needed to create certificates on behalf of your new CA, hence the need to keep it absolutely secret. You have to let anyone that you want to trust you have the crt file. The crt file is base-64 encoded, so it’s just text, you can print it out if you want. The last two lines of the above snippet will allow it to be used as a secret. The third line removes the passphrase, and the fourth line creates the secrets in Kubernetes. This secret will be used by cert-manager to create an Issuer . Create a file called issuer.yaml and insert the following: apiVersion: cert-manager.io/v1alpha2 kind: Issuer metadata: name: ca-issuer spec: ca: secretName: ca-key-pair This creates an Issuer named ca-issuer and associates the CA certificate secret we just created. Now the Issuer can be called upon to issue certificates as needed. Let’s create an ingress that will use a virtual hostname and issue a certificate for it. Create a file called vhost-ingress.yaml and add the following: apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: test-ingress annotations: cert-manager.io/issuer: ca-issuer spec: rules: - host: helloworld.local http: paths: - backend: serviceName: hellok8s-service servicePort: 8080 path: / tls: - hosts: - helloworld.local secretName: myingress-cert In this file, the cert-manager.io/issuer annotation tells it which issuer to use to issue certificates, in this case, the one we just created. Under spec.rules we set the host to helloworld.local . Finally, we add the tls section that says to create a certificate for helloworld.local and store it in the secret myingress-cert for Nginx ingress to use. Now we can add them to the system in the normal fashion: kubectl apply -f issuer.yaml kubectl apply -f vhost-ingress.yaml Run kubectl get secrets myingress-cert to see the generated certificate. Back on your development machine, you can add helloworld.local as an alias for the main host in the /etc/hosts file. Then browse to https://helloworld.local. You should get the same warning because our system doesn’t trust the new root certificate yet. 
But click through the warning, and have another look at the certificate. It should be a new certificate, not the dummy Nginx certificate. If you’re running Chrome on macOS, it turns out you can add it to the trusted certificates on the Mac Keychain. Open up the Keychain tool and select the system tab. Then, under the File menu, choose Import Item. Select the certificate (ca.crt) you copied over from your main host. Once imported, it should appear in your list of certificates, but it should have a red X indicating it’s not yet trusted. Double click on it, and set trusted to all. Now if you refresh https://helloworld.local, it should come up as trusted. If you click the lock icon next to the URL, you’ll see the details of the certificate and it’s trusted root. It’s set to expire in three months, but if I understand cert-manager, it will create a new one as needed. Pretty nifty! If you’re running another browser or another OS, I’m sure you’ll be able to find instructions on installing root certificates. It’s probably overkill to create a CA and install a certificate manager just for one virtual host. But now I can add services to my system with virtual hosting and https just by adding another ingress and updating my /etc/hosts file. While we’re still very far from a production system, we’ve come a long way towards it and solved a lot of problems along the way. If you had a domain name, you could easily use a service like LetsEncrypt. The cert-manager component has a configuration for LetsEncrypt built-in. This has been a long and complicated article, and I hope you stuck with it to the end. We solved a lot of problems and I know I’ve learned a lot. I hope you have too and thank you for sharing this journey with me. All of the scripts used can be found on my GitHub page.
https://medium.com/better-programming/how-to-expose-your-services-with-kubernetes-ingress-7f34eb6c9b5a
['Randal Kamradt Sr']
2020-10-25 13:28:34.591000+00:00
['Kubernetes', 'Cert Manager', 'DevOps', 'Programming', 'Nginx']
Six Croatian Innovators Won 11 Medals at the British Innovation Fair
https://medium.com/sidrome-freelance/%C5%A1est-hrvatskih-inovatora-osvojilo-je-11-odli%C4%8Dja-na-britanskom-sajmu-inovacija-f96248ac713
[]
2017-01-11 10:09:28.025000+00:00
['Startup', 'Poslodavac']
Shopify & Dynamic Remarketing
Shopify & Dynamic Remarketing The “what, why & how” guide to winning back your non-converting customers iStock Photo by Getty Images The holiday season is here and due to the special circumstances we live in (COVID-19), merchants should think about investing more in digital marketing strategies. The most common pain point of online merchants around the world is non-converting customers: people who visit your e-store, search around a bit, and then leave without purchasing. Well, not for long — dynamic remarketing ads are the answer to all your problems. By setting up dynamic remarketing campaigns, you will be able to personalize the ads displayed to your customers and stop guessing what they would be interested in. Trust me on this: the results in sales are outstanding. Unfortunately, this is not always an easy task to complete, especially when we talk about Shopify stores. The implementation is often complicated and the results are not always to be trusted. What you will need:
https://medium.com/better-marketing/shopify-dynamic-remarketing-613111c3b9af
['Alexandra Poulopoulou']
2020-12-07 16:02:48.177000+00:00
['Shopify', 'Dynamic Remarketing', 'Marketing', 'Google Ads', 'Digital Marketing']
The Importance of Feeling to Heal
The Importance of Feeling to Heal Is there a shortcut to healing? Photo by Luis Galvez on Unsplash I couldn’t heal because I kept pretending I wasn’t hurt. I came upon this quote today and it got me thinking. When things hurt, we try to pretend they don't. We convince ourselves that we are fine, but we are well aware the thoughts are as loud as ever. And we beat ourselves up about it. Why can’t I just get over it? I’m much stronger than this. This is not even that important, why am I upset? What is wrong with me? We keep flooding our minds with self-deprecating thoughts until we reach a point where the problem is no longer what happened to us. Now the problem is within us. The problem seems to be us. Well, actually it’s our minds. And it’s not really a problem, at least it doesn't intend to be. We are human, and humans have different responses to different stimuli. One very important physiological reaction to stressful or frightening events is to fight or flee. This is our fight-or-flight response. When you sense danger, you might choose to fight it, or, as many of us do, you choose to run away from it. And running away can come in all forms. Some focus on their work and forget about their well-being, others insist that nothing is wrong, and others simply don't let themselves think about it. We try to run away from the truth of things. We run away from the truth of us. From the truth of our hearts and minds. Why? It comes from the fear of facing reality, because reality might hurt. If you refuse to acknowledge the damage your experience is causing you, it will find ways to make itself known. You might start getting mad easily. You might cry. You might withdraw from society. You might get physically sick. If something hurts, you should not ignore it. For the pain to heal, you must first accept the fact that the pain exists. You must feel it. It is there and it hurts. But guess what, it’s nothing you can’t handle. Because if you start to feel, you start to heal. And healing does not mean no pain. But healing pain is different from non-healing pain. Healing pain will help you grow in the long run. Keep in mind that healing is different for each person. It is crucial to find out how you can heal efficiently. Going to therapy, turning to religion, working out, and finding what makes you happy are all ways people heal. It’s not one thing or another; it could be a combination. With this, people move into their fight response. They are fighting against their negativity. They are growing stronger even though it hurts temporarily. If you keep pretending you're not hurting and refuse to get help when it is needed, you're delaying your healing process.
https://medium.com/illumination/the-importance-of-feeling-to-heal-3c52a18e9318
['Tina S']
2020-12-28 14:06:02.907000+00:00
['Mental Health', 'Self', 'Feelings', 'Healing', 'Self Improvement']
1 Codebase. 3 Platforms. Here’s How.
Which Cross-Platform Framework Should I Use? There are a lot of different solutions for building cross-platform apps. You have quite a few popular options such as React-Native, Cordova, Ionic, Capacitor, and Flutter, so the decision isn’t an easy one. I’m not going to discuss the tradeoffs between these in this article, but here are some good articles that compare the differences between these alternatives: Instead, today I’m going to talk through the framework that I believe is the best, and why it’s been a pretty amazing stack to work in. My Cross-Platform Tech Stack A lot of React developers would opt for something like React-Native. I decided against this because I didn’t want to write any native code at all. I also didn’t want to deal with compatibility issues with other React libraries that I might want to include, so I went with Ionic. Ionic released the version that officially supported React back in October 2019. I was an early adopter of this release and started using it a month after it was released. I’ve now been on the technology for 9 months, and although it’s had some issues, I think the technology has come a long way. Here are some of the things I love about Ionic + React: It’s deployable to everything. iOS. Android. Electron (desktop). Web. Etc. I’ve never had to write any native code. I can test my code and develop it in the browser, then when I’m ready to test how it works on mobile, I can build the code and test it directly in Xcode/Android Studio. Ionic is now framework agnostic. There’s a repository of Ionic React hooks that gives you an easy interface to a lot of native capabilities. Here’s a rundown of the other pieces in my cross-platform tech stack. Ionic Ionic is basically a UI kit of generic components that you can use in your web code to get native-looking features in your app. It offers generic components like an <IonButton> that might look like this on iOS: and this on Android: The difference between these may look small, but it’s meant to make your application look and feel native no matter which device you’re on. I think this has its tradeoffs because your UI won’t always be consistent across platforms. For example, on iOS those buttons are slightly bigger, which means your UI will look different on the two platforms. This is annoying at first, but over time you’ll learn the oddities of each component and what they look like on each platform, and then it’s much easier to work with them. I’ll also note that these Ionic components are generic, so they can be paired with React, Angular, Vue, or even no framework at all. This is very powerful because it means you won’t be locked in to a frontend framework either if you choose this technology. In my opinion, this is a large benefit over something like React-Native, because it future-proofs the technology much better. React React is the most popular web framework, pulling ahead of Angular by a greater margin every single day. I first started pairing Ionic + React in 2019, within a month of Ionic being supported on React. Through this process things have become far more stable, so now I feel comfortable recommending the two technologies together. Capacitor The React code is what gets displayed in the “WebView”, and it uses Ionic components to mimic the look and feel of native components. Then Capacitor comes in and acts as the “build step” that takes that web code, bundles it into an app, and then tosses it into Android Studio or Xcode for you to configure further.
Firebase Cloud Messaging

On the backend, I like to use FCM because it gives you the same kind of cross-platform capability, this time for your push notifications. If you didn't know, Android and Apple each have their own push notification service. If you wanted to send a notification to all your users, you'd have to write the API calls twice, once for each service. With something like FCM, you have a single API to send push notifications to, and they get forwarded to the notification services for whichever platforms your users are on. (A minimal server-side sketch in Python follows at the end of this article.)

Ionic Appflow for CI/CD

I'll admit I haven't used Ionic Appflow yet, but I must say the technology looks attractive. One of the most interesting features to me is their idea of "live deployments". The idea is to let you push new web code to your mobile apps without having to push a new release to the app stores. That means you can use their CI/CD platform to push new frontend code once and have it automatically update the website and every single user's app. That's insanely powerful if you think about it: not only do you only have to write code once, you only have to deploy it once too. If Appflow were free, I'd give it a try right now, because that's a very attractive service for small development teams.

Conclusion

In the end, there are quite a few tradeoffs you'll have to make with a cross-platform solution, but the downsides can be minimized by choosing a proper tech stack. Something like Capacitor gives you a really solid way to embed your web app into native shells. Combine that with a cross-platform CI/CD platform like Appflow and a nice UI kit like Ionic, and you can end up with a solid app. Enjoy!
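To make the "one API for every push service" point concrete, here is a minimal server-side sketch using the firebase-admin Python SDK. The service-account filename and the "all-users" topic are assumptions for illustration only, not details taken from the article.

import firebase_admin
from firebase_admin import credentials, messaging

# Assumed for the example: a service-account key exported from the Firebase console
# and a topic that the mobile/web clients have subscribed to.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred)

message = messaging.Message(
    notification=messaging.Notification(
        title="New feature released",
        body="Open the app to check it out.",
    ),
    topic="all-users",
)

# FCM forwards this single request to the platform-specific push services
# (APNs for iOS, FCM transport for Android, web push) as appropriate.
message_id = messaging.send(message)
print("Sent:", message_id)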
https://medium.com/swlh/1-codebase-3-platforms-heres-how-c524e0be8c63
[]
2020-10-06 08:14:10.773000+00:00
['React', 'Programming', 'JavaScript', 'Software Development', 'Ionic']
Data Science job search: Using NLP and LDA in Python
2. Analysing the data

Volume of posts from companies / recruiters

Unsurprisingly, most of the frequently posting companies are recruiters, although in Greater London it seems Harnham and Datatech Analytics have the largest number of roles. It may be worth speaking directly with these two to understand more about the roles they handle.

Removing duplicate postings

Sometimes recruiters or companies post the same advert for a job, which results in duplicate data. These can simply be removed based on the job description; however, if competing recruiters post adverts for the same job, there can be slight differences between them. The easiest way to remove these closely matched jobs seemed to be to compare cosine similarities based on their word counts. Any pair with a score close to 1, without equalling 1, would almost certainly be a repetition of the same job.

Removing similar job descriptions with cosine similarity

Cosine similarity is a metric used to determine how similar documents are, irrespective of their size. It measures the cosine of the angle between two vectors (here, of word counts) projected in a multi-dimensional space, where each dimension corresponds to a word in the document. Cosine similarity captures the orientation (the angle) of the documents rather than their magnitude. This is an advantage here: two similar documents that are far apart by Euclidean distance because of their different lengths can still have a small angle between them, which indicates they are similar.

The job adverts are transformed into vectors of word counts before their cosine similarities are calculated. This method will certainly catch documents that are almost exactly the same, but it should also catch adverts that have had an extra section added, changing the length of the post (for example, another recruiter posting the same job description plus some information about their recruitment company). I made a few checks by inserting a reasonably long, unrelated block of recruiter contact information into adverts that already had a cosine similarity close to 1 with another advert. With this "noise" the cosine similarity dropped but still came back above 0.99, which suggests such similar job descriptions would still be picked up. The Euclidean distance, however, increased noticeably, which shows why a distance measure is not the best choice for removing similar documents. After some exploration of closely matched scores, I settled on a cosine similarity above 0.98 as the cutoff for removal. This cutoff captures adverts that have slight variations but are essentially the same job posting. (A minimal Python sketch of this de-duplication step appears at the end of this section.)

Job titles

Data scientist and machine learning engineer are the most common job titles returned. However, of the roughly 600 results returned from the search, there is a lot of variation in the titles used, and approximately 400 job titles appear only once. For now I will class each job title into the following categories:

Lead: any title containing lead, chief, head or manager
Senior: any title containing senior or principal
Graduate: any title containing graduate
Regular: anything else

I'm sure some roles will end up classed as regular when in reality they are more senior or more junior. However, this currently seems the best way to narrow down jobs based on their titles.
At the very least, I know for certain that if a role is Senior, Lead or Graduate it does not belong in the set of results I need for narrowing down my job search.

Contract type

Indeed contains a mixture of contract and permanent roles, and 40% of roles have a contract type associated with them. It may be possible to investigate the ones without a contract type further and find out more from their descriptions. For now I will class contract types as follows (a short sketch of the title and contract-type classification is included in the Python snippet below):

Contract: anything that has 'contract' or 'temporary' associated with it
Apprenticeship
Internship
Part-Time
Permanent: any contract that is classed as 'Full-time' or 'Permanent'

Python vs R

Looking into whether Python or R is specified in the job description is more out of curiosity. Python is overwhelmingly the language of choice in the job descriptions. This share is higher than I expected and strengthens the case for using Python over R for data science analysis and models.

Salary

On Indeed, permanent roles have a salary per year associated with them, but for contractor roles the rate (hourly, daily, weekly, monthly) is specified.
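To make the cleaning steps above concrete, here is a minimal Python sketch of the cosine-similarity de-duplication and the title and contract-type classification. The dataframe and column names (df, 'job_description', 'job_title', 'contract_raw') and the use of scikit-learn are assumptions for illustration; the article's own code may well differ.

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def drop_near_duplicates(df, text_col="job_description", cutoff=0.98):
    """Drop adverts whose word-count vectors have cosine similarity above the cutoff."""
    counts = CountVectorizer().fit_transform(df[text_col].fillna(""))
    sim = cosine_similarity(counts)
    to_drop = set()
    for i in range(sim.shape[0]):
        for j in range(i + 1, sim.shape[0]):
            if sim[i, j] > cutoff:  # keep the first advert, drop later near-copies
                to_drop.add(j)
    return df.drop(df.index[list(to_drop)])

def seniority(title):
    """Bucket a job title into Lead / Senior / Graduate / Regular as described above."""
    t = str(title).lower()
    if any(w in t for w in ("lead", "chief", "head", "manager")):
        return "Lead"
    if "senior" in t or "principal" in t:
        return "Senior"
    if "graduate" in t:
        return "Graduate"
    return "Regular"

def contract_type(raw):
    """Bucket the raw contract field into the categories described above."""
    r = str(raw).lower()
    if "contract" in r or "temporary" in r:
        return "Contract"
    if "apprenticeship" in r:
        return "Apprenticeship"
    if "internship" in r:
        return "Internship"
    if "part-time" in r:
        return "Part-Time"
    if "full-time" in r or "permanent" in r:
        return "Permanent"
    return None

# Example usage on a hypothetical dataframe of scraped adverts:
# df = drop_near_duplicates(df)
# df["seniority"] = df["job_title"].apply(seniority)
# df["contract"] = df["contract_raw"].apply(contract_type)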
https://medium.com/analytics-vidhya/data-science-job-search-using-nlp-and-lda-in-python-12ecbfac79f9
['Thomas Caffrey']
2020-05-14 14:11:09.190000+00:00
['NLP', 'Python', 'Topic Modeling', 'Data Science', 'Machine Learning']
A year in review — what I learned about UX and work in 2020
But this just makes your weaknesses more salient to the interviewer, increasing the chances that they'll remember them after the interview. A strength-based approach is a better way to go. Instead of bringing up your weaknesses, ask the interviewer what characteristics and skills would make someone successful in this role. If they highlight anything that you have not mentioned or that might be a weakness of yours, you can bring it up then. This way the interview finishes with a demonstration of how closely you measure up to their dream candidate rather than focusing on all the ways you might fall short.

Reflection #2: Getting a Head Start on The New Job

For most of my career, I always assumed that my first day was the first day that I showed up at the office. However, if you really want to make a great impression on your new boss and hit the ground running, your first day is actually at least a week before that. When starting my latest job, as soon as I signed the paperwork, I asked my boss what research or tasks I could do to get a head start on things, even if it was something as small as having lunch with my future coworkers or familiarizing myself with the systems, tools, and features that I would be working on.

Reflection #3: The Power of Research Repos and Democratizing UX

Research repositories and democratizing research across team members have been a huge focus of mine this year, and during the pandemic I had the chance to attend one of the best webinars of the whole year: Dscout's How to Democratize Your User Research Practice (Even Now). The webinar had some great insights on how to increase the impact of user research, and I highly recommend you watch the recording through the link above! I'll list some of my favorite points below.

#1) Most employees are not attuned to the process and pace at which companies make decisions, and there is a mismatch between the speed of research and the speed of decision-making at companies.

#2) As UXRs, we should have a more holistic view of ourselves as researchers: not just as people who talk to users, but as people who help stakeholders make good decisions. In general, there is a lack of good problem definition on projects and a lack of desire to fix that issue, which is a space UXRs need to get more involved in solving.

#3) UXRs need to frame our work in terms of the values that the company is interested in, such as money and monetization. The only way we will be able to convince people of the value of our work is to speak their language and link our research findings to the things the company cares about.

Reflection #4: The Bollywood Technique

Photo by Rajesh Rajput on Unsplash

One of my favorite books that I read this year was A Book Apart's Cross Cultural Design. It's a great book that's a quick read, and I strongly recommend it to anyone who's new to cross-cultural research and design, as there are many tips and insights about the impact of culture on UX and design. One of the most insightful concepts for me from the book was how to elicit feedback from participants who come from cultures where direct critiques are culturally inappropriate. Performing usability tests with users from these cultures can obviously be challenging, as participants will feel that it is taboo to be negative about the designs.

Photo by Jakob Owens on Unsplash

A way around this cultural taboo is to use the Bollywood Technique.
This technique involves reframing the usability test scenario as a larger dramatic Bollywood-style story which stars the participant as the main character and positions the website being tested as a crucial tool the participant needs to complete the main goal(s) of the story. For example, if testing a travel website, the story could be that the participant has just learned their sister’s fiancé is an evil mastermind, and they have to use the site to quickly book a plane trip to the upcoming wedding to stop it. This set-up allows the participant to use the cover of the Bollywood narrative to be critical of the site without it being offensive as it is just part of the drama of the story.
https://uxdesign.cc/a-year-in-review-what-i-learned-about-ux-and-work-in-2020-fe8bf2cb4007
['Adam Engstrom']
2020-12-29 01:32:15.312000+00:00
['Product Design', 'Design', 'Research', 'User Experience', '2020']
Did I Just Succeed In Detecting Breast Cancer From A Single Image With Python And Machine Learning?
Did I Just Succeed In Detecting Breast Cancer From A Single Image With Python And Machine Learning?

The complete guide on how to combine Python and ML to detect whether a person suffers from breast cancer with 98.24% accuracy.

Photo by National Cancer Institute on Unsplash

Breast cancer is a type of cancer that develops from breast tissue. After skin cancer, breast cancer is the most commonly diagnosed cancer in women in the United States, as well as the most common form of cancer in women over the age of 50 in the United Kingdom. Although symptoms differ from person to person due to the many variables involved, according to experts the signs and symptoms of breast cancer include, but are not limited to:

Change in the size, shape or appearance of a breast
Changes to the skin over the breast
A newly inverted nipple
Redness or pitting of the skin over one's breast
A breast lump or thickening that feels different from the surrounding tissue
Peeling, scaling, crusting or flaking of the pigmented area of skin surrounding the nipple (areola) or of the breast skin

These are just examples from the plethora of possible symptoms a woman (or, on rarer occasions, a man) may experience. The problem is that women tend not to pay much attention to such symptoms. They pass them off as something random that will simply stop by itself, failing to grasp that it will not just fade away. Many women make the flawed, non-factual assumption that breast cancer is something rare and that it is unlikely they will ever become one of its victims. Unfortunately, data and scientific observation simply do not support this. On the contrary, around 1 in 8 women are diagnosed with breast cancer during their lifetime.

Researchers have identified hormonal, lifestyle and environmental factors as indicators of a person's risk of developing breast cancer. Despite this, there are many cases where people with no risk factors develop the cancer, while people identified with several risk factors never do. The conclusion is that breast cancer is caused by a complex interaction between one's genetic makeup and environment.

Nipple changes observed in breast cancer victims

The Problem

It must have become apparent that breast cancer is an issue that concerns a great number of people around the world. The issue is that doctors are not always reliable when detecting this type of cancer. From personal experience, my grandmother had to visit a dozen different radiologists across different continents in order for a consensus to be reached concerning her condition. Even then, there was much uncertainty about whether the final diagnosis was the correct one. Years later, doctors kept giving her different opinions and diagnoses. In a different setting, perhaps such a situation would be acceptable. When lives are on the line, however, professionals failing to reach common ground on an ideal course of action is a phenomenon that should be averted at all costs. I am by no means blaming the doctors for this. I believe that doctors are doing the best they can with their pre-existing knowledge and skills. But accepting the situation the way it is, is simply not possible.

The Solution

An ideal and realistic solution would be one that would in no case remove the doctors from the equation. On the contrary, a third set of eyes should be established to assist doctors in confirming their diagnoses.
This same set of eyes could also be used by women at home who are interested in getting a first opinion without physically going to a doctor. Remember, prevention is the optimal way to tackle any problem. With this thought process in mind, in this article I will attempt to build and compare different models that can successfully detect whether a person has breast cancer. (A rough modelling sketch follows below.)
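For a rough sense of what "building and comparing different models" can look like in Python, here is a minimal scikit-learn sketch. The dataset (scikit-learn's built-in Wisconsin Diagnostic Breast Cancer data, whose features are computed from digitized images of fine-needle aspirates), the train/test split and the two candidate models are assumptions for illustration; they are not necessarily the author's setup or the source of the 98.24% figure in the subtitle, and the accuracy you get will depend on these choices.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Assumed dataset: 569 samples, 30 features derived from digitized FNA images.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardise features so both models see comparable scales.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Compare two candidate classifiers on the held-out test set.
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("SVM (RBF kernel)", SVC())]:
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(name, "accuracy:", accuracy_score(y_test, preds))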
https://medium.com/swlh/did-i-just-succeed-in-detecting-breast-cancer-from-a-single-image-with-python-and-machine-learning-3ed24780e354
['Filippos Dounis']
2020-04-25 10:18:55.319000+00:00
['Cancer', 'Python', 'Medicine', 'Data Science', 'Machine Learning']
Rejection City, Here I Come: Trying to Accrue 100 Rejections in 2018
The best way to get your poems published is to send them out. That being said, you'll end up receiving more rejections than acceptances, no matter who you are. The whole thing is just a numbers game. So why not reframe and welcome the reality of those inescapable rejections? One way to do this is to aim for 100 rejections a year. You can read more here about this approach, one taken by numerous writers to remove the inevitable sting of the submissions process. In 2018, I continued submitting poems, essays, and writing projects. Sure, I welcomed the publications and acceptances, but what I was really aiming for was 100 rejections.

Where I Started

I ended the previous year, 2017, with 13 rejections out of 27 total submissions. Why the high acceptance rate? Many of the submissions were to extremely tiny presses in my hometown, projects run by friends who asked me to submit, or community initiatives where nearly everyone who wanted to participate got published. I really wasn't sending my work out to places where I didn't have a direct connection. At the end of 2017, I was happy about what I had published, but I knew that I was missing a ton of opportunities by not submitting more widely.

My Self-Imposed Rules

I entered 2018 already in graduate school. I was producing longer works, and I also believed that I was producing better-quality writing. In my quest for 100 rejections, I decided to set a monthly submissions quota: I would submit to 12 venues per month. All writing-related submissions would count towards this quota: journals and magazines, publishers, grants, residencies, writing contests, application-based reading series, and more. All genres counted towards my single goal: poems, book reviews, essays, etc. I would not count rejections from venues I had submitted to before the year began; I gave myself a clean slate.

How Did It Go?

In 2018, I accrued 86 rejections and 18 acceptances. I still have 45 pending submissions, for a total of 149 submissions for the year. I maintained the 12-submissions-per-month rule pretty well. The few months I lagged (four submissions in June and three in November), I made up for in other months (21 submissions each in September and October).

Some Notes About Strategy

I discovered calls for submission through reading, social media posts, and the CRWOPPS Listserv. I compiled these opportunities in what is now a 294-row spreadsheet with far more venues than I've had time to read, research, and submit to. It's a good problem to have!

If a journal sent me a "friendly rejection" (i.e. "We can't publish your work now, but we hope that you send us more work soon"), I made sure to resubmit…and resubmit…and resubmit.

I submitted "packets", i.e. the same set of poems, to, say, 25 venues. I sent different packets of poems to contests than I did to literary journals.

Analyzing the Acceptances

With 149 submissions, I didn't get that many more acceptances than when I submitted to only 27 places. Was all the extra work worth it? Even after reframing the value of a rejection, the true goal is to produce good writing that garners readers. Here's my break-down of the 18 acceptances.
I wanted to see if analyzing these would give me any more insights about strategy:

Two acceptances at journals where I knew the editors and was asked to submit (this is customary)
One acceptance for a feature in a daily online poetry series for which my work had been solicited
Seven journal acceptances from the "slush pile"
One acceptance into a Milwaukee-themed anthology
One acceptance to attend a residency
One accepted guest blog post
One article in a regional LGBTQ news magazine
One publication associated with being a poetry contest semi-finalist
One acceptance for inclusion in a collaborative poetry sculpture
Two acceptances for inclusion in a poetry festival and at a zine fair

I also published non-submissions work, most notably a few book reviews from when, later in the year, I became a staff book reviewer for South Florida Poetry Journal. I also did some fun projects with The Drunken Odyssey. 2018 gave me the chance to conduct some great interviews as well, and, of course, to co-edit The Politics of Shelter. I also started this blog!

Analyzing the Rejections

Out of 86 rejections in 2018, 16 were friendly, "send us more work, please" rejections. This gives me some valuable information about where to prioritize submitting next year. One of the early friendly rejections was from a journal that has multiple submissions periods every year. I sent them another round of poems and received word in December that they wanted to accept one. This is a journal I've admired for a while, so it was a satisfying accomplishment.

The Numbers Game

I spent $629 on submissions in 2018. But really, looking at the numbers, it's not that bad: just $52 and some change each month. Not all that high for a self-employed artist. And I made the money back. I won a contest at Cutbank (from a 2017 submission) that came with a $500 prize (I wrote a longer post about the economics of contests on this blog a few months ago). Two journals paid me $10 and $25, respectively, for publishing poems. I was paid $100 for a poetry performance in November, and $100 in April. My submissions have also led to some paid event-planning work. No doubt there are a few other gigs I'm forgetting at the moment. I can look at it like I made $735 and netted about $100, or I can look at it holistically: poetry is my contribution to the universe, and like any artist, I'm finding ways to make it work. My teaching and web writing gigs (most of my income) are also part of my poetry life.

There is no doubt that my ability to pay for submission fees relies on class privilege. To even have $52 a month in the first place to spend on something like this is a huge privilege. Journals and contests need to have more options for writers who can't afford the fees; otherwise, people move farther along in their "careers" and gain more positive recognition because they have the money. At the same time, I know plenty of people who spend $52 a month on pizza, or beer, or nails. It's true that one can prioritize writing over other things: I use my bicycle to get to school and to do local errands, spend a lot of time cooking from scratch, don't own a car (though my partner does), don't buy makeup, wear used clothes, cut my hair at home, etc. What can I say? Some people gamble; I submit to writing contests.
Out of the $629 I spent on submissions in 2018, I allocated:

$303 to sixteen poetry contests (I won Cutbank and was a semi-finalist in Nimrod)
$42 to four residency applications (I was accepted to one)
$15 to consideration of my poetry for inclusion in a sculpture in NYC (I got it!)
$75 to four chapbook contests (still seeking a publisher)
The rest to the $2 and $3 fees that many journals charge all submitters to fund their online platforms

My Submission Goals for 2019

I got a lot closer to 100 rejections in 2018 than I'd expected, just 14 away from my goal. In 2019, I'm planning to submit to 20 venues a month instead of 12, though I don't want to submit work simply to reach a certain number. It's important to thoroughly research every venue so that you don't accidentally send a journal work you should already know they wouldn't be interested in. After submitting 149 times, I have a lot more information than I did this time last year about who likes my work, who has no interest in my work, and which journals would be the best fits for me.

I didn't receive that many more acceptances in 2018 than in 2017, but my acceptances represent a hugely broader and more vibrant range of venues. I definitely gained a wider audience and readership. And I know about (and have read the work in) a ton more venues than I had previously. My goal for 2019 is to go bigger and be more selective at the same time, if that makes any sense. I want to continue to expand what I believe to be possible for my poetry, while at the same time making more informed decisions about who would enjoy and connect with my work.

Submitting widely helped me feel brave when approaching intimidating opportunities. So what if I didn't feel like I had a chance? At the very least, it would be another rejection towards my 100. I am going to continue to experiment with entering writing contests. I've been in the contest game for less than two years and would like to see what happens if I continue to submit. I'm also planning to continue to read and research new potential venues from the backlog on my spreadsheet, which may include spending some money on subscriptions.

How do you get your work out into the world? Do you enter contests, submit to journals, post your work online? Do you aim for 100 rejections? Where do you go to find readers?

CC IMAGE COURTESY OF WAFERBOARD ON FLICKR
https://freesiamckee.medium.com/rejection-city-here-i-come-trying-to-accrue-100-rejections-in-2018-90e2d8f3b334
['Freesia Mckee']
2019-01-12 20:27:31.841000+00:00
['Publishing', 'Advice', 'Writers On Writing', 'Writing', 'Poetry']
What’s the average rental price of a 2-bed in Dubai these days? (PART 2)
In more detail, these are the operations defined in the rentals function:

1. Declare the results dataframe:

results = pd.DataFrame()

2. Send requests to the Property Finder website, 10 at a time:

rs = (grequests.get(url) for url in alist)
responses = grequests.imap(rs, size=10)

3. Create a for loop to go through each batch of responses, parse the content with BeautifulSoup and find all the tags identifying the listed properties:

for response in responses:
    soup = BeautifulSoup(response.text, 'lxml')
    div_tag = soup.find_all('div', {'class': 'card-list__item'})

To identify the property tag, we go to the Propertyfinder page -> right-click -> Inspect -> click the element-picker button at the top left of the panel that opens at the bottom -> select a property card. We copy the class of the div tag, namely 'card-list__item', and paste it into the script. You can check: there are 25 such card-list items.

4. Next, we iterate through each of these cards to extract the data from all of them. A very important aspect that took me one day to figure out is how to deal with missing data. My solution was to use a try-except scheme for each variable extracted and to assign the value None in case the variable's tag does not exist or is empty. For instance, the script looks for the h2 tag of class card__title card__title-link. If the tag exists and is not empty, the script extracts its text attribute and deletes any extra empty spaces with the strip() function. If the tag does not exist, it assigns the value None to the title variable. The process is similar for all the other data points.

for div in div_tag:
    try:
        title = div.find('h2', {'class': 'card__title card__title-link'}).text.strip()
    except:
        title = None
    try:
        ttype = div.find('p', {'class': 'card__property-amenity card__property-amenity--property-type'}).text.strip()
    except:
        ttype = None
    try:
        bedrooms = div.find('p', {'class': 'card__property-amenity card__property-amenity--bedrooms'}).text.strip()
    except:
        bedrooms = None
    try:
        bathrooms = div.find('p', {'class': 'card__property-amenity--bathrooms'}).text.strip()
    except:
        bathrooms = None
    try:
        area = div.find('p', {'class': 'card__property-amenity card__property-amenity--area'}).text.strip()
    except:
        area = None
    try:
        price = div.find('span', {'class': 'card__price-value'}).text.strip()
    except:
        price = None
    try:
        # note: this reuses the bathrooms selector; the rental frequency most likely
        # needs its own class name, which is not given in the text
        frequency = div.find('p', {'class': 'card__property-amenity--bathrooms'}).text.strip()
    except:
        frequency = None
    try:
        location = div.find('span', {'class': 'card__location-text'}).text.strip()
    except:
        location = None
    try:
        link = div.find('a', {'class': 'card card--clickable'})['href']
        link = 'www.propertyfinder.ae' + link
    except:
        link = None

A special mention for the last variable, which is the direct link to the listed property's page. Note that we extract the 'href' attribute rather than the text and, because we may want to use these links later to extract more data from the individual property pages, we concatenate the relative link of each property with the base website address, www.propertyfinder.ae.

5. The next step is to declare a temporary dataframe, temp_df, to store the output from each iteration, specifying the variables to store and the names of the columns in the dataframe. Finally, we append the output of each iteration to the results dataframe.
temp_df = pd.DataFrame([[title, ttype, bedrooms, bathrooms, area, price, location, link]],
                       columns=['title', 'type', 'bedrooms', 'bathrooms', 'area', 'price', 'location', 'link'])
# note: DataFrame.append was removed in pandas 2.0; results = pd.concat([results, temp_df]) is the modern equivalent
results = results.append(temp_df, sort=False).reset_index(drop=True)

6. Scraping can be tricky. Different websites have different limits on the number of requests they accept, and it will be your job to find the best way to deal with them. For the purpose of this exercise, I am using a lag of 2-3 seconds between requests: a randomly generated series of 2- and 3-second pauses that the script adds between iterations. The important thing to remember is that if you notice your requests no longer go through, it may be because the website has blocked you for making too many requests in a very short period of time. In this script, one can try reducing the number of pages requested each time, for instance changing 10 to 5 in the line above: responses = grequests.imap(rs, size=5). In addition, you may want to increase the time between requests, i.e. change the sleep call to something like sleep(randint(4, 12)) or bigger numbers. However, this will increase the time needed for the scraping.
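The pause itself is not shown at this point, so here is a small sketch of what the 2-3 second random lag between batches could look like. The batches variable and the scrape_batch function are placeholders standing in for the grequests and BeautifulSoup logic above; they are not names from the original script.

from random import randint
from time import sleep

def scrape_with_lag(batches, scrape_batch):
    # 'batches' would be successive lists of page URLs; 'scrape_batch' would run
    # the request/parsing steps shown earlier and return a dataframe of results.
    all_results = []
    for batch in batches:
        all_results.append(scrape_batch(batch))
        sleep(randint(2, 3))  # wait 2-3 seconds before the next batch of requests
    return all_results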
https://medium.com/the-innovation/whats-the-average-rental-price-of-a-2-bed-in-dubai-these-days-part-2-b12d3337df4b
['Silviu Matei']
2020-09-28 18:39:13.001000+00:00
['Dubai Real Estate', 'Python', 'Pandas', 'Beautifulsoup', 'Web Scraping Series']