Should I Bother With Structure in Python?
Formatting and structuring your Python project when needed

Structure can surely help! (Image by Johnny Gutierrez from Pixabay)

In the last three months I wrote a Python course in 10-minute bits a day called Learn Python 10 Minutes a Day. It was fun to write and I got many positive responses through LinkedIn (which I really appreciate). I also get questions regularly, which I always try to answer. Last week I got an interesting question from Domenico: "Is there an industry-standard format that you need to structure your Python programs?" I think this is a great question which can open a great discussion. As usual, the answer starts with 'it depends'. Here is my personal view: Python is a multi-purpose programming language and it has incredibly many use cases. Sometimes it is just used as glue to connect two different services. Service A produces some weird binary format as a result and service B only accepts JSON. Python can solve this problem and 'glue' these services together. Even if no changes are needed, services are easily linked into one master service. The use cases for Python do not have to be simple, though, and it can be a full-fledged service itself. There are many examples, such as Dropbox, Quora, Spotify, and Netflix. There are even complete GUI applications created using Python! As there are so many different use cases, there is definitely not one industry-standard format, project layout, or structure. However, each field (and sometimes sub-field) certainly has its own layouts, structures, and ways of working.

"The joy of coding Python should be in seeing short, concise, readable classes that express a lot of action in a small amount of clear code — not in reams of trivial code that bores the reader to death." — Guido van Rossum

Python is flexible and, by itself, does not really care about structure.
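Such a glue script can be only a few lines of Python. As a minimal sketch (the binary layout here is entirely hypothetical: a 4-byte little-endian integer id followed by an 8-byte float, standing in for whatever service A actually produces):

```python
# Python as "glue": decode a fixed binary record and re-emit it as JSON.
import json
import struct

def binary_to_json(payload: bytes) -> str:
    """Unpack a hypothetical record (int32 id + float64 value) into JSON."""
    record_id, value = struct.unpack("<id", payload)
    return json.dumps({"id": record_id, "value": value})

# Simulate service A's output, then convert it for service B.
blob = struct.pack("<id", 7, 3.5)
print(binary_to_json(blob))  # -> {"id": 7, "value": 3.5}
```

Real glue code would read from a socket or file instead of packing its own test data, but the shape is the same: decode, transform, re-encode.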
Structure is mostly useful when you collaborate with other people (teams), when a project gets large, or when you want to re-use certain parts of a project. If you are unsure about structure, you probably do not need it (much) at the moment. In my early career I mostly practiced functional programming, all in a single file. For small projects this is perfectly fine. When projects get larger, you will probably feel a need to split the code, for several reasons: maintainability, or simply getting tired of scrolling through the file. One way of restructuring is to split certain functions into separate modules that can be imported later. However, don't worry too much about it in the beginning. Your understanding of these topics will grow with experience and when you see the need!

It becomes a bit different when you are working with particular frameworks. For example, if you want to work with web frameworks such as Flask or Django, they come with their own (somewhat) fixed project layout and way of working. These structures can be confusing in the beginning but have big benefits in the long run. At first it feels like you almost have to do bookkeeping on where you wrote which piece of code. After a short while, though, you get used to the structure and it is actually very helpful. Due to the standardization, all your websites or web apps have the same layout, making it super easy to find bits of code you want to reuse. Collaborating is also easier, as all your collaborators work in the same layout. So while it feels like a burden, it can be picked up quickly and has large benefits in the long run!

A possible structure of a Flask project

When collaborating with other people, it also makes sense to have some structure. For example, in a data science project it makes sense to structure all the different steps (data collection, EDA, etc.). There are a couple of "standard" so-called cookiecutters that help you set up a project layout.
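That first restructuring step, splitting functions into a module you import, can be sketched in a few lines. The `helpers` module and its `mean` function below are made up for illustration; the temporary directory only exists so the two-file layout runs as a single self-contained script:

```python
# Sketch: move a helper function into its own module, then import it back.
# In a real project, helpers.py would simply live next to your main script.
import pathlib
import sys
import tempfile

HELPER_SOURCE = '''
def mean(values):
    """Average of a list of numbers."""
    return sum(values) / len(values)
'''

with tempfile.TemporaryDirectory() as tmp:
    # Write the "split off" module to disk, as if it were a separate file.
    module_path = pathlib.Path(tmp) / "helpers.py"
    module_path.write_text(HELPER_SOURCE)

    sys.path.insert(0, tmp)   # make the directory importable
    import helpers            # equivalent to `import helpers` in your project
    print(helpers.mean([1, 2, 3]))  # -> 2.0
    sys.path.pop(0)
```

The payoff is that `helpers` can now be imported from any other script in the project instead of being copy-pasted around.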
These are merely suggestions on how you can structure your project. There are many different project types available as cookiecutters; definitely search for them if you would like some structure. If you are alone and the project is simple, it can be a bit much. Still, getting used to some structure in the beginning can help in the long run.

We mentioned a cookiecutter, which is not the actual layout but a tool to create projects (and file/folder layouts) in a standardized way. It is a Python package that creates the directories and files for a new project after asking a couple of questions. The tool itself uses predefined templates that are published on GitHub specifically for cookiecutter. To install cookiecutter, install it in a new environment (not sure how to do that? Here is a great guide!): pip install cookiecutter

After installing the cookiecutter tool you can create a new project from the command line. For this you need a cookiecutter template. A great one for data science is from drivendata and is called cookiecutter-data-science. The tool will ask you a couple of questions and afterwards create a new project folder with all the structure inside. To initiate the process, type: cookiecutter https://github.com/drivendata/cookiecutter-data-science

The first question is the project name, which will also be used for the project folder. Next it asks for a repository name; you can use the default value. Then come the author and a brief description. For a license, if you are not sure, choose the MIT license, which lets everyone use the code freely. I think this is better than no license. The last two questions are about Amazon services, which you can leave empty (the default). When you are done, it has created a new folder with all the files in place. Pretty easy!

Well, this was a much longer answer than I anticipated. To summarize all the points: If you are just starting with Python, do not worry too much about it.
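For reference, the drivendata template generates roughly the following top-level layout (abridged from memory; the repository's README shows the full, current tree):

```text
├── LICENSE
├── Makefile
├── README.md
├── data
│   ├── external
│   ├── interim
│   ├── processed
│   └── raw
├── models
├── notebooks
├── references
├── reports
│   └── figures
├── requirements.txt
└── src
```

The separation of raw, interim, and processed data is the part most data science teams end up grateful for.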
You will feel the need for structure and layout when your projects get bigger. Certain frameworks come with their own formatting (i.e. project layout); there is generally not much choice and you have to adapt, but it is almost certainly worth the investment. In collaborations it also makes sense to apply structure: you can come up with your own layout or use a predefined cookiecutter to set it up. These are just my two cents and I am curious what others think. Feel free to comment below this post or connect on LinkedIn.
https://medium.com/swlh/should-i-bother-with-structure-in-python-7ad7ef668a1a
['Dennis Bakhuis']
2020-09-29 09:41:00.513000+00:00
['Machine Learning', 'Python', 'Programming', 'Data Science', 'Planning']
A newbie experience with a web application
Hello readers!!✨👋 Want to know about a newbie's experience on a real web application project? You are in the right place!!💖

These last four weeks have been challenging: learning from zero how to be part of a real work team, and learning React. It was difficult, but not impossible. I went through rough days of being in front of the computer the whole day, searching and studying, and still not being able to solve the problem I had. But the key was working as a team and with a lot of communication. I trusted my team enough to go to them for help and fix things together. I fell multiple times, but I got back on my feet and kept pushing harder.

You need to fail to get to success. -Nassim Taleb

When I was having these blockers of not being able to fix something as fast as I wanted, I started lowering my expectations of what I was able to contribute, so I would not feel like I was failing my team or myself. It was a big mistake to do that; after I realized I was taking the wrong path, I changed my game plan and raised my expectations again. The trick was trying to balance optimism and realism.

Low expectations don't mean not getting disappointed when good things happen. -Tali Sharot

Once I set these new goals and a new mindset, I started understanding the way of working as a team and how important it is to stay organized around every single task in the project. We made a board on Asana with all the tasks divided between front-end and back-end, and we took on new tasks as the old ones were completed. We mapped out everything we needed to do, along with due dates. Of course, we had some problems and blockers, but having this organization helped us get back on our feet. Also, we started the project by implementing continuous integration. It was one of the best decisions we could have made.
Checklists are tools to make experts better. -Atul Gawande

"Anything fragile eventually breaks, and anything robust would survive." -Nassim Nicholas Taleb

To conclude this post, I want to emphasize how much pair programming helped me. I am so thankful for the people I had the opportunity to share this experience with, because each of them taught me something, both technical and personal.
https://medium.com/@marianayazp/my-experience-in-a-web-application-project-cfeceaca1641
['Mariana Yañez']
2020-11-24 04:48:37.103000+00:00
['Team Collaboration', 'Agile Methodology', 'Web App Development', 'Organization', 'Pair Programming']
Baby Sleep at 2 Months — How to Help the Baby Get Better Sleep?
Baby sleep at 2 months is still a fragile phase in which you cannot run a sleep training program. Still, there are some tips for you and your little one that will help you both sleep better! It is the right moment to start replacing a sleep schedule with a sleep routine. How to do that? I say more about it in my book How to Teach a Baby to Fall Asleep Alone, but here are some tips for you.

Waking and feeding. At 2 months a baby should sleep about 18 hours a day, staying awake for only about 2 hours at a time in between. Remember that your baby's stomach is small and breastmilk is digested relatively quickly. Even so, if your baby doesn't wake up for longer than three hours, there is no need to wake them for feeding.

Sleep environment. Blackout shades, a white noise machine, and a proper room temperature: make sure you have those provided! Whether the baby is co-sleeping in your room or you are away at the grandparents' house, always keep the right sleeping conditions in mind.

Wrapping. Some children sleep better when their movements are restricted; one of the methods is wrapping the baby. It is exactly what it sounds like: wrap your little one in a swaddle, like a burrito! The pressure will comfort your baby and deepen their sleep. You will find exact graphic instructions in my book.

Drowsy but awake. At 2 months, sleep should ideally start on the baby's own: not while rocking, not during feeding. Try to put your kid in the crib drowsy but awake. If you succeed now, sleep training may not even be necessary! It is also helpful to establish a bedtime routine; a repeated scheme every evening will give your baby a sense of security at some point.

Remember, baby sleep at 2 months is halfway to sleep training at 4 months old. You can do it!

Susan Urban
https://medium.com/@contact_96166/baby-sleep-at-2-months-how-to-help-the-baby-get-better-sleep-1e0ad5b00c4b
['Susan Urban']
2019-06-17 07:28:40.209000+00:00
['Moms', 'Baby Sleep', 'Sleep Training', 'Parenting', 'Sleep']
Introducing the Slept On Sports Podcast
Lesser-known sports stories told by Medill graduate students.

I'm thrilled to announce that the first episode of my new podcast, Slept On Sports, is now live! Join me and other Medill students as we dive into the lesser-known stories in sports history — stories you could say have been Slept On. New episodes are currently scheduled to come out on Mondays. Slept On Sports is available on most podcast platforms. I've linked to Apple Podcasts below. I hope you enjoy it!
https://medium.com/top-level-sports/introducing-the-slept-on-sports-podcast-25b27f1dd55f
['Connor Groel']
2020-10-20 21:45:13.717000+00:00
['Sports', 'Podcast']
I Dare You to Watch These Movies and Still Tell Me You Hate Musicals
4 musicals that will change the way you think about movies. Photo by Liam McGarry on Unsplash

I found a movie the other day on Netflix called The Prom. I hadn't heard any reviews, but it had some pretty great actors and actresses in it, such as Meryl Streep, Nicole Kidman and James Corden, so I decided to give it a shot. One of my favourite things in the world is when I begin a movie and then, five minutes in, unbeknownst to me, they start singing. I know it sounds like I'm being sarcastic, because people typically hate musicals in general, but not me. As I watched this movie, out of nowhere, Meryl was singing and whirling, complete with choreographed dancers in the damn background. I was basically in Broadway heaven. My toes started involuntarily tapping on the floor, and a light and fluffy feeling expanded through my chest. I immediately knew that this was going to be a great movie.

I love musicals. I'm not even ashamed of it. There's something about the idea that in the magical universe where every musical takes place, the normality, or downright casualness, of stopping everything and singing your woes to the world happens at least a handful of times throughout the day. If only real life could work that way! As the cast of The Prom sang about the injustice of bigotry in the small Illinois town in which they live, their ear-to-ear smiles and big voices uplifted me the same way I am heartened when my dog nuzzles my forehead because she wants more attention from me. Whenever I find myself in the brilliant grip of a good musical, I am transported into the bright, sequin-clad scene and pretend that I am right there with my favourites, singing and dancing on my living room carpet while my children look on in horror.
If you are one of those people who don't enjoy musicals, I challenge you to watch the following movies on this list and still try to tell me that "no musicals are good."

Rent (2005 film). I first watched Rent while living in a small bachelor apartment across from the community college I attended in 2006. At the time, I was only doing upgrading at the college and a Theatre Studies course. Instead of studying and working hard at my school work, I'd rush home every day, bust out my Rent soundtrack and sing out my balcony window until the bitchy old broad in 217 would scream at me to shut the hell up. Then, I'd sing louder. I didn't get a lot accomplished in school that year. Still, I did learn invaluable lessons in the ways of surviving as an impoverished artist in Lower Manhattan's East Village, and how to achieve a captivating rendition of La Vie Boheme without gasping for air.

Cabaret (1972 film). I recently tried singing to my children some of my favourite songs from Cabaret's 1972 film adaptation, where the great Liza Minnelli plays Sally Bowles. As soon as I hit "Life is a Cabaret, old chum, come to the Cabaret," they fell to the floor, holding their ears and screaming for me to stop. I'm actually pretty good at singing Liza songs because I've been doing it my entire life. Plus, I have that deep-set smoker's voice that really suits the cause. But those little brats didn't appreciate my talent at all! They were all like, what kind of music is this?! Who would even listen to something that weird? Ingrates. Maybe 10- and 12-year-olds wouldn't like this one. Still, with the plethora of weird and wonderful characters, catchy songs and a riveting storyline built around sex, deception and the precarious Kit Kat Klub, it's hard to refuse the awesomeness of this story.

Chicago (2002 film). I could listen to Chicago's soundtrack on repeat all day. Actually, I have done exactly that on multiple occasions.
A story about a women's cell block where multiple women have been accused of killing their lovers, and a fierce rivalry breaks out between two accomplished performers in prison? Yes, please! If Cabaret is too old and stodgy for you, then I highly recommend Chicago. It still incorporates the excitement of the jazz era but has some pretty modern themes as well.

Rocky Horror Picture Show (1975 film). RHPS is my all-time favourite musical ever written. My earliest memory of this masterpiece is sitting in my best friend Ashley's living room when we were a mere 7 years old, anxiously waiting for Janet, Ashley's mom, to get off the phone with my mom to see if I was allowed to watch this risqué movie. Some of you are probably gasping right now at the thought of a seven-year-old watching Rocky Horror Picture Show, but I must remind you that this was the 90s, and we covered our eyes during the sex scene, so it was fine. This movie changed my life. I had never seen such flair in cinema before. Dr. Frank-N-Furter became my idol and Rocky the love interest of my life. As I got older, I maintained my obsession with Rocky Horror, and much to my disappointment, it was usually only Ashley who shared my enthusiasm. Anyone I tried to show the movie to immediately turned their nose up at it as soon as the iconic singing lips hit the screen. "Ugh, a musical?" they'd say, rolling their eyes to show me how disgusted they were at the prospect. Not me, though. I believe RHPS set me on this obsessive track to strive to live in a more musical-like world. If only we all had the vocal cords of angels and could maintain a huge shit-eating grin on our faces while we sang-narrated the happenings of our day. I, for one, think this would be a much happier and healthier world if we all appreciated the musicals in our lives a little bit more.
https://medium.com/illumination-curated/i-dare-you-to-watch-these-movies-and-still-tell-me-you-hate-musicals-c7465e743653
['Lindsay Brown']
2020-12-27 18:06:28.995000+00:00
['Lifestyle', 'Entertainment', 'Musicals', 'Life', 'Movies']
AI mental health app taps into smartphone use
Australian startup Svelte Ventures has created an app called Frank that uses AI to scan smartphone use and monitor mental health.

Svelte Ventures co-founders Dave Chetcuti, Danny Connery and Joey Cassar; Briony Schadegg not pictured.

The new 'mood technology' is currently undergoing trials in Adelaide, South Australia, and is set to launch in mid-2021. The AI technology behind Frank measures the user's cognitive health through language recognition, similar to how a Fitbit measures physical health. Svelte Ventures co-founder Dave Chetcuti said it does this by supplying a keyboard for your smartphone, much like Grammarly or Gboard does for word processing.

"When people are typing, whether that be messages or emails, on whatever application they are using on their phone, the language input is analysed and then compared against our models," Chetcuti said. "What differentiates Frank is that people continue to utilise their phone as they normally would and receive continuous monitoring of their mental health," Chetcuti said. "Frank's keyboard will become invisible and users will forget it is even there."

Danny Connery, another co-founder of Frank, said the software has both consumer and commercial markets. He said the software can give businesses the opportunity to collect anonymous data from employees in order to make informed decisions. "A lot of people are afraid to come forward with their struggles with mental health, especially in emergency services, and Frank is able to provide businesses with an aggregate of what their population is going through," Connery said.

A 2014 PwC Australia report into mental health in the workplace indicated that although 91 per cent of Australians believe mental health in the workplace is important, only 52 per cent of employees believe their workplace is mentally healthy.
The report found that untreated mental health conditions cost Australian workplaces approximately $10.9 billion per year, comprising $4.7 billion in absenteeism, $6.1 billion in presenteeism and $146 million in compensation claims. One in five Australians have taken time off work in the past year because of mental health issues, according to the report, and this number is twice as high among those who consider their workplace mentally unhealthy.

Based in South Australia's Lot Fourteen innovation neighbourhood, Svelte Ventures was created under the maxim 'what you can measure, you can manage' and with the aim of using technology to help rather than damage mental health. Frank is able to link with other smartphone applications, such as the Health app and Screen Time, and draw conclusions about the user's mental wellbeing.

"Because Frank monitors language on the whole phone, it can extract topics that are meaningful to the user," Chetcuti said. "If they are talking a lot about their sports team or their work, we can pull that out and measure their emotional content against that topic." If Frank detects issues, the artificial intelligence is able to suggest user-specific self-care practices via a notification. "Frank can deliver unique insights to the user that would not be possible any other way."
https://medium.com/@newsleads/ai-mental-health-app-taps-into-smartphone-use-fae8c50b9b3c
['Solstice Media']
2020-12-17 23:05:02.768000+00:00
['Startup', 'AI', 'Australia', 'Smartphones', 'Mental Health']
Top 20 benefits of having a .edu Email Address
During this pandemic, many institutions and universities have been forced to conduct their sessions online. If you are a student or a faculty member, it is very likely that your institution or university provided you with an educational email address on the .edu domain.

Photo by Micheile Henderson on Unsplash

The benefits of having a .edu email account are tremendous. It can get you a lot of free stuff, and where not free, in most cases it can get you a heavy discount on things you may need regularly. In this article, we will help you cut down your monthly expenses on those lavish licenses and let you focus more on education.

1. GitHub Student Developer Pack. The GitHub Student Developer Pack is one of the best resources you may come across in the world of software development. GitHub knows the best way to learn is to actually try things out, and hence they give access to more than 100 development tools so that you get complete hands-on experience. Check out all the tools GitHub has to offer here.

2. Autodesk. Autodesk provides industry-leading software and services for 3D design and modelling of literally anything under the sun. As an engineering student, it's impossible to miss out on a popular application like AutoCAD. These applications usually cost around Rs 20,000 per year, but Autodesk provides them for free to students. Click here to avail their industry-grade software for free.

3. Microsoft Office 365 Education. Microsoft's flagship products include Word, PowerPoint, Excel, OneNote, OneDrive and Teams. All of these productivity tools are included in Office 365 Education, along with other tools to assist classroom learning. Get started with your university/institution email address here.

4. Adobe Creative Cloud for Education. Adobe no longer sells its products separately; instead, all of its applications come under a subscription-based model with cloud storage called Creative Cloud.
It consists of Photoshop, Illustrator, Premiere Pro, After Effects, XD and over 20 other applications. At the time of writing, Adobe priced this subscription at Rs 4,230 per month, but students can avail it at a 60% discount for Rs 1,596 per month. Click here to get your Adobe suite at the discounted price.

5. JetBrains Free Educational Licenses. JetBrains is the maker of widely used developer tools like IntelliJ IDEA and PyCharm. Be it web development, game development or a side Python project, it's hard to skip their tools if you want a productive and enjoyable experience at the same time. They give free access to all their products for educational and personal use. Head to this page to learn more about their Free Educational Licenses program.

6. Microsoft Azure Student Developer Resources. Microsoft Azure is a cloud platform that provides more than 200 products and services for developing cloud solutions. Microsoft provides free resources for students to gain skills and jumpstart their careers, along with $100 worth of Azure credits, internship opportunities at Microsoft and expert talks. Get started with Microsoft Azure here.

7. Canva. Got anything related to graphics? Canva has your back: it's easy to learn, with drag-and-drop features and professional layouts. It has a library of millions of images, icons, fonts and ready-to-use templates. Canva Pro adds further customisation of fonts and logos, 100 GB of cloud storage and the ability to schedule your posts to 7 social media platforms. They have collaborated with the GitHub Student Developer Pack to provide Canva Pro free to students. Check out Canva Pro for Students here.

8. LucidChart. LucidChart provides end-to-end visual flowcharts to help teams of any size collaborate and see where they are today and where they need to go in the future. It is completely free for students and teachers. Create a free account here and click on 'Get your free educational upgrade' to avail the offer.

9.
LastPass. LastPass is a password management tool that stores your passwords in an encrypted environment. This lets you skip the burden of keeping passwords somewhere risky like Notepad or sticky notes, and access them all with a single handy master password. Get 6 months of a LastPass Premium account free of cost here with your educational account.

10. RoboForm. Another similar password management tool is RoboForm. It is very similar to LastPass, except that you get all the premium features for free for as long as you are at your institution or university, instead of only 6 months. Get RoboForm Premium for Education.

11. Amazon Prime Student Pack. Amazon Prime comes with many benefits, such as confirmed 2-day delivery on Amazon products without shipping charges, unlimited movies and TV shows on Prime Video, unlimited ad-free music streaming on Prime Music, and much more. Amazon offers students a 6-month trial of Amazon Prime plus other student-exclusive deals free of cost. Avail the Amazon Prime Student Pack here.

12. Evernote Premium. Track your college due dates, make notes or scan your notebooks with Evernote. Evernote is a cloud-based note-taking application that syncs all your notes across devices and provides features to help you make and study notes more easily. Get a 50% student discount on Evernote Premium here.

13. Squarespace. Want to host a website for your college project? Or do you need a portfolio or a blog? Then Squarespace is a perfect place for you, with award-winning designs that can be personalised to fit your needs. Squarespace provides a 50% discount to students with an accredited university email address. Start building your Squarespace website here.

14. Wix. Another website builder is wix.com. Whether it is an online portfolio, a small store, a blog, or you just need a logo for your side project, Wix has you covered.
Along with building websites, Wix helps you manage SEO and grow your business online, cutting down your student expenses. Get a 50% student discount on Wix Premium plans here.

15. Spotify Premium. Spotify streams millions of songs and music tracks for free with ads. The Premium version removes these ads and adds features like offline streaming, better quality and mood-based playlists. Students get a 50% discount by signing up with their .edu account. Get the 50% student discount on Spotify Premium here.

16. YouTube Premium Student Plan. YouTube launched its Premium service this year with ad-free streaming on YouTube and YouTube Music, plus other mobile-app features like background streaming and Picture-in-Picture mode. They have an exclusive plan for students as well, at a discounted price. Avail the YouTube Premium Student Plan here.

17. Ableton. Ableton makes software and hardware for music creation and performance. Their flagship products Live, Push and Link have helped communities deliver amazing performances across the world since 1999. Ableton gives students and teachers a 40% discount on Live. Get Live at a 40% discount here.

18. Lenovo Student Discounts. Lenovo makes smartphones and laptops across a wide range, for both office use and enthusiasts. Students get exclusive discounts and zero-interest EMI on Lenovo laptops. Register here to avail the offer.

19. HP Students Store. HP is giving students 50% off printers, accessories and monitors when buying an HP PC. Register here at the HP Students Store.

20. Samsung Education Discount Program. Get a 10% education discount on almost all Samsung products and up to 40% off smartphones, laptops, tablets and other accessories. See all the discounted Samsung products here.

Being a student is difficult with all the expenses; I hope this list of benefits helps you manage your budget better. Next time you are buying a subscription or a product, make sure to check for student benefits.
That's all for now, folks. This is Vinayak Tekade from Coder's Capsule signing off. Follow me on LinkedIn and Twitter. Looking forward to learning, teaching and growing together. Check us out on Instagram for more similar content.
https://medium.com/coders-capsule/top-20-benefits-of-having-an-educational-email-address-91a09795e05
['Vinayak Tekade']
2020-12-22 09:33:10.495000+00:00
['Microsoft', 'Benefits', 'Lifestyle', 'Github Student Pack', 'Students']
Remove
Sometimes we put ourselves into a position from which it seems difficult, or at times impossible, to get away, almost forgetting that we were the ones who created it in the first place. That's what happened to me when I decided to keep writing every day for a year after the conclusion of my 100-day project. It was great, I was motivated, but suddenly I felt trapped. I had trapped myself in the commitment and the routine instead of the habit, and there is a very subtle difference between these, the one that makes the practice bearable and attainable. A commitment is about getting it done no matter what: whether you like it or not, easy or hard, it has to be completed. A habit, on the other hand, is about embedding something into our daily lives to the point that we don't think about it and it becomes natural. We don't question it, we just do it.

How to know the difference? Remove it. Stop doing it.

As my daily writing started to become a burden and I started to question the real purpose and value of my decision, only two paths became available. The first: just keep doing it, no questions asked, no dwelling, just sticking to the plan. The second: hit pause for a couple of days and see how it goes, how I feel and what comes to mind. I chose number 2.

For me, removing is always the best way to realise whether something is really important and how much value it adds to my life, if any. Removing is the best way to learn whether you need something. It is also difficult, because it can make us feel that we might be wasting time, that we are going backwards or nowhere at all; especially in the present day, where everything is presented to us as urgent, everything is immediate, everyone needs it no matter what. The real question is: is it truly important? In my case, yes, it is important. Through removing myself from the daily commitment of writing, I learned that I like it, I enjoy it, and most importantly, that I missed it.
I missed creating on a daily basis, having something tangible at the end of the day that I can relate to and that over time, slowly and consistently, will become an important body of work. I also learned that I want to keep doing it on a regular basis, daily as a matter of fact; but for this to happen, I need to do other things as well: organise, balance, and some days let go, or even remove something else if necessary.
https://medium.com/thoughts-on-the-go-journal/remove-ae4c60304022
['Joseph Emmi']
2018-08-08 08:05:27.737000+00:00
['Habits', 'Self Improvement', 'Journal', 'Commitment', 'Writing']
Honored to serve as California’s next U.S. Senator
Everything I know about work and opportunity, I learned from my parents. They risked everything to ensure my family and I could have a shot at the American Dream. It’s a dream that I know well because I’ve lived it, and I’m committed to making it possible for all Americans as California’s next U.S. Senator. I was born March 22, 1973 at Kaiser Hospital in Panorama City in the San Fernando Valley. My parents, Santos and Lupe Padilla, immigrated separately from Mexico and met in Los Angeles in the late 1960s. It was love at first sight and the young couple decided to get married, apply for green cards, and start a family. Growing up, my mom and dad relentlessly emphasized hard work and a good education as key to a better future. With just an elementary school education, my father worked as a short order cook for forty years before retirement. He liked to boast that his kitchen “never failed an inspection.” For the same forty years, my mother worked tirelessly as a housekeeper for a group of families in the affluent communities of Studio City and Sherman Oaks. Santos and Lupe raised my sister Julie, my brother Ackley, and me in a modest home in Pacoima. In the 1980s, the neighborhood became one of the more violent areas of Los Angeles and gang activity, prostitution and open-air drug dealing were rampant. Going to sleep to the sound of police helicopters was not uncommon. I attended local public schools, keeping my focus on books and baseball. I worked my way into the starting rotation at San Fernando High as a senior. The same year, my countless hours of study paid off and I won admission to the Massachusetts Institute of Technology, where I earned a Bachelor of Science degree in Mechanical Engineering. I worked my way through college doing a variety of janitorial and administrative jobs while mentoring younger students back home to follow the same path.
It was the conditions in my neighborhood growing up and the feeling that the Northeast San Fernando Valley wasn’t adequately served by government that awakened my interest in political activism. When I was a teenager, my family helped organize neighbors to take back the streets from crime. My mother and I would periodically join community leaders to protest environmental injustice and demand the closure of the Lopez Canyon Landfill. In 1994, after California voters passed Proposition 187, the sweeping anti-immigrant measure, my parents finally applied for citizenship and I, now a recent MIT graduate, resolved to put an engineering career aside and dedicate my life to public service. Demanding a fair share of opportunity and resources for the people of the Northeast San Fernando Valley, I was elected to the Los Angeles City Council as a political outsider at the age of 26. As a member of the City Council, I worked to expand after-school programs to serve 16 schools in my district, worked to reduce class sizes, and built state-of-the-art libraries and a children’s museum. I worked to retain and create more local job opportunities through industrial, commercial, and residential development and community reinvestment. And I championed citywide measures to improve air and water quality while directing the Los Angeles Department of Water and Power to dramatically increase procurement of renewable energy sources. In 2001, my colleagues elected me the youngest Council President in Los Angeles history. As President, I provided citywide leadership at critical times. I was Acting Mayor during the tragedy of September 11, 2001. I assisted in the interview and selection of William Bratton as Chief of the Los Angeles Police Department and helped negotiate the approval of LA Live and the modernization of Los Angeles International Airport. In 2005, my colleagues throughout the state elected me President of the California League of Cities.
In 2006, I was elected to the State Senate to represent the more than 1 million people in the San Fernando Valley. As a State Senator, I authored more than 70 bills signed into law by both Republican and Democratic governors. Around the Capitol named me one of Sacramento’s “Most Effective Legislators” for my ability to “cross ideological lines, take on big bills and keep warring parties within the caucus.” Over two terms, I passed major legislation:

Fighting climate change: I passed landmark legislation increasing renewable energy standards, expanding green manufacturing and solar power, developing clean fuels and modernizing the electrical grid.

Expanding educational opportunity: I passed bills bridging the digital divide and expanding college access, helping English language learners and protecting student athletes.

Fostering healthier communities: I fought for universal health care, stopping tobacco sales to minors, fighting diabetes and obesity, expanding patient protections and improving food safety.

Increasing gun safety: I passed common-sense gun safety measures like tracking stolen guns and stopping felons from possessing body armor.

Harnessing innovation: As an engineer, I fought for the ethical advancement of science and technology.
I authored legislation protecting Californians from discrimination based on genetic information and wrote the bill creating a statewide Earthquake Early Warning System. I was sworn in as California’s first Latino Secretary of State on January 5, 2015 and pledged to bring more Californians into the democratic process as the state’s top elections official. With President Trump attacking immigrants and democracy, I fought for voting rights and the American Dream. I was re-elected in 2018 and received the most votes of any Latino elected official in the United States. Since taking office, I have worked to make our elections more accessible and inclusive, while fighting to protect the integrity of our voting systems:

Registered over 22 million voters: Voter registration is at an all-time high — over 22 million Californians are registered to vote (an increase of more than four million from the day I took office) and the highest rate in nearly seven decades.

Expanded access to the ballot: I implemented innovations like same-day registration, online pre-registration for 16- and 17-year-olds and automatic voter registration, also known as “California Motor Voter.”

Protected our elections: I oversaw the upgrade or replacement of voting systems in all 58 counties in the state to systems that meet California’s newer, higher security standards.

Today, I live with my wife Angela, a mental health advocate, and our three sons in the San Fernando Valley. As the proud son of immigrants, as a public servant, and as a husband and father, I understand the urgency needed in the Senate.
We must fight for families as the Covid-19 pandemic rages on, protect access to health care, and get our economy working again so that everyone can share in the American dream. Will you join me?
https://medium.com/@alexpadilla4ca/honored-to-serve-as-californias-next-us-senator-f4b5178f87de
['Alex Padilla']
2020-12-22 20:08:20.267000+00:00
['California', 'Senate']
Tips to Quickly Sell Your Products and Services This Festive Season through Digital Marketing
Tips to Quickly Sell Your Products and Services This Festive Season through Digital Marketing Being an auspicious occasion, people look forward to this festival, especially for big purchases. There will be a huge digital acceleration for brands as customers gear up for Diwali and the festive season beyond it. Brands across the shopping and food and beverage industries are expected to gain from the boom in digital adoption and the expanding mobile market. This is where a digital marketing agency in Chandigarh, and in other parts of the country, comes into action! To be a part of this highly competitive scene during Diwali, you should implement certain steps and digital marketing tips that can help you stay alive in the market.

Timing is Crucial. The timing and messaging of marketing are always significant. You should first examine: the products or services you are planning to promote, the methodology behind them, and your customers’ preferences. Ask these questions, and only then can you figure out whether your product is being marketed at the right time or not. Doing this will also help you come up with the creativity that can justify your product launch at this seasonal time.

Assess Your Product’s Impact. Your product or service area matters a great deal before you promote it. During festival time, people are more likely to buy new clothes, gifts, lighting, fireworks, greeting cards, gadgets, and of course, auspicious items. If you think your product can appeal to customers during the Diwali season, you can confidently go ahead with marketing it. Don’t simply assume that anything you do will work during the festive season.

Know Your Customers. Before you begin selling your product, understand the behavior of your target customers.
Their reaction towards your product, and their medium for buying it, for example whether through your website, your app, or in person at the market. Gathering this information will help you think through your plan and decide where to focus to get sales. If your customers are more likely to buy from an online platform, then you can use different marketing channels, such as Google AdWords, Facebook Ads, social media, email marketing, and pay-per-click (PPC) advertising.

Understand Your Competitors. During festivals, you will always see that your rivals are constantly at your heels selling their own products and services. Further, with the availability of various kinds of analytics tools like Google Analytics, any claim to exclusivity fades away. Consequently, to get an edge over your rivals, you should do thorough research on the market and consider the latest trends.

Make an Effective Strategy. After gaining good knowledge of all the related possibilities, you are ready to structure a plan that can help you stay in the market during the cutthroat Diwali season. However, you have to be careful, since you cannot afford to make any mistake at the last moment. Use social media marketing on Facebook, Twitter, or Instagram, and run the best social media campaigns this Diwali. Analyze the market, plan your vision, focus on the entry point, find your audience, and simply shoot the arrow on target.
https://medium.com/@pihusingh1322/tips-to-quickly-sell-your-products-and-services-this-festive-season-through-digital-marketing-a57b6edbc805
['Pihu Singh']
2021-10-29 06:40:06.348000+00:00
['Online Business', 'Social Media Marketing', 'Digital Marketing', 'Seo Services', 'PPC Marketing']
Will Smith, Apology and the Power of Being Forgiven
Will Smith, Apology and the Power of Being Forgiven Photo credit: Gage Skidmore By Dyanne Brown A few weeks ago, The Fresh Prince of Bel-Air aired a Reunion Special on HBO Max. During the show, the cast revisited what it was like to be a part of the show as well as behind-the-scenes stories of working together. There was a touching tribute to actor James Avery, who played Uncle Phil, and who passed away years after the end of the show. The surprising part of the Reunion show was when Will Smith sat down with Janet Hubert, the actress who played the original Aunt Vivian in the first season of the show. During the time that I was watching the show as a young person, she disappeared from the show without a lot of explanation. There were rumblings that there was a problem between Janet and Will, but back then, coverage of even very popular shows wasn’t in-depth. It seemed like a contract dispute. The special revealed that there was much more tension between the two on the set, with Will treating the set as a party atmosphere while Janet, who is Juilliard-trained, preferred solitude before a performance. Their discussion uncovered what led Hubert to create a 2018 YouTube video calling Will Smith out on his role in her eventually quitting the show after being offered a substantial pay cut. Over the years, there were rumors that Janet Hubert felt acting with a rapper was beneath her training or that she should be the star of the show, but it’s been revealed that wasn’t the truth of what happened. In the discussion, Will Smith explained that he was twenty-one at the time and he didn’t really understand what was going on for Janet Hubert. Hubert explained that during that time in her life, she was involved in an abusive relationship with an unemployed spouse and had a young child. She felt ostracized by the staff once Will let it be known that they weren’t getting along.
When she was offered a significantly lower contract, she declined because her life couldn’t support the reduced salary. They hired another actress, Daphne Maxwell Reid, to take over the role. However, the damage was done for Janet Hubert, and she was unable to find another role while Will Smith’s success soared and he became a superstar. What was remarkable about this exchange was that Will Smith let Janet Hubert share her entire story from her perspective, and he listened to her pain, even when it was hard for him. He didn’t defend his views or his intention. He understood that despite what he intended or how he saw their interactions, his behavior ultimately damaged her. He apologized and he asked for her forgiveness. At the end of it, they embraced and you could see a change in Janet Hubert. Where she looked guarded and in pain at the beginning of the exchange, she had opened up by the end. She pulled Will Smith to her and called him her “baby boy”. It was as if their 27-year feud was gone. And Janet Hubert returned to sit with the rest of the cast, taking her place as a part of Fresh Prince of Bel-Air history. It was a powerful exchange and an example of what give-and-take is required to achieve true forgiveness. Both had to be vulnerable and both had to receive. Will Smith took it a step further and used the Smith family platform, Red Table Talk on Facebook Watch, to further explore the discussion with Janet Hubert, but this time aided by a psychologist. During this session, Will explored how he felt during the discussion and how he achieved turning a very difficult conversation into a moment of redemption for both of them. He had to process that he could cause someone a level of pain that he never wanted to produce in another human being. He also had to see the lie in his own image if he allowed himself to be beloved by everyone else while someone was made bitter by his existence because he couldn’t apologize.
With the psychologist’s help, Will made the connection that his childhood household was chaotic and his way of feeling safe came through his humor. In his child’s mind, he affected the mood in the house by being funny or performing. He continued to think that as long as people were laughing, he was safe. Janet Hubert didn’t appreciate the partying or the joking prior to going onstage because she needed the opposite. Will Smith admitted that he respected Janet Hubert and James Avery so much that they represented parental figures to him, and he fell into his family dynamic. Janet Hubert’s rejection felt unsafe to him and so he lashed out at her in an attempt to control the situation. What I found monumental about the Red Table Talk episode was that here was Will Smith, not only showing how you reach forgiveness within a conversation, but also showing the benefit of therapy in making the connection to the underlying traumas affecting your ability to properly assess a situation. I do believe that Janet Hubert was healed by the apology and public redemption that was orchestrated by Will Smith. But I also think Will Smith was healed by seeing how his inner child was approval-seeking and by admitting that how he handled the situation affected the outcome. I think we were all healed by seeing that having a deeper conversation can lead to reconciliation when both are ready and accepting.
https://medium.com/change-becomes-you/will-smith-apology-and-the-power-of-being-forgiven-137258446f37
['The Good Men Project']
2020-12-09 12:12:29.389000+00:00
['Acceptance', 'Life Lessons', 'Forgiveness', 'Apology', 'Will Smith']
How To Grow On Instagram in 2022?
For Instagram growth in 2022, there are a few things that you need to keep in mind. Instagram has rolled out some new Reels features, like replying with Reels, and it’s constantly pushing new effects for Reels videos while asking people to show up on video. With this constant push towards the IG Live and Reels formats, making videos will remain the top way to grow on IG in 2022. For 2022, I believe the ability to create quality videos, either by yourself or through software, is going to be very important. Use Reels and IGTV the most. For a new account starting in 2022, here are the things you can follow. First, learn what types of videos are already trending in your niche. To do this, analyze some top influencers and their content. Use Viralspy to check out your competitors’ posts and accounts; you’ll get the engagement rates and the top hashtags they use, and you can use this to figure out what content to create to grow faster. Keep in mind that the content you create must be similar to theirs. Apart from the niche, how you create content also matters. If you will create dancing videos like influencers do, check influencers’ accounts on Viralspy. If you will create videos through video apps, check out accounts that are creating content like that. Because hashtags rank video content that is similar to each other, you need to create similar videos to get a higher reach. That’s how you can hop on trends. Once you know what video content is working best for you, focus on carousels and single-slide posts to get a mix of content. Carousels are a great way to share valuable content. If you’re in the educational field, carousels will help you convert the new people who come across your account into followers. Here again, try to make the best design along with sharing valuable content. Try to maintain a brand identity for the long term.
https://medium.com/@llohani1996/how-to-grow-on-instagram-in-2022-36bf53d9cfd3
['Lokesh Lohani']
2021-12-30 07:43:30.306000+00:00
['Instagram Expert', 'Instagram Marketing', 'Instagram Growth']
Utilizing Web Accessibility Evaluation Tools
There are many different web accessibility tools that you can use for testing the usability of a website. Here are the different ones you can test with: — Internet Explorer Web Inspector. This is a very effective tool to test the cross-browser compatibility of websites. This feature can help you detect any differences in the behavior of a web page across different versions of the browser. Besides, it also helps you find missing features and functions on the web page. o IE Contrast. Similar to Web Inspector, the IE web browser also has a Contrast toolbar. For this, just install the Microsoft web accessibility evaluation tools for IE. These tools will also enable you to test pages that use JavaScript. If there are differences between these two browser tools, then your website must be made compatible with both of them. o W3C. For people with vision disabilities, it is a must that you make your website accessible. To do this, you can use WAAP (Web Accessibility for the disabled), a web accessibility evaluation tool developed by the World Wide Web Consortium. Although this tool was previously known as “the one-size-fits-all” universal web browser, it is now a much-anticipated and powerful tool, which is ideal for testing your website. o WAAP. As mentioned earlier, WAAP is a portable, browser-based software program. This means that it can evaluate your website’s cross-browser compatibility. In addition, it comes with a number of other features such as screen-reader detection, text-to-speech capability, a spell-checker, tables for content positioning, and forms layout, to mention only some of its many benefits. o WCAG. Another way to get a feel for the web accessibility evaluation tool is by using the WCAG 2.0 tool. It is an HTML editor that comes with over 80 icons. These icons provide you with handy tools to help you maintain the readability and functionality of your website.
You can even customize your icons depending on your preference. o Arial. This font is widely used worldwide. If you are using the Arial font, you will notice that you can read web content without eye strain. For this reason, the Arial font is considered one of the most important font types for people who have eye problems. To make your web content more readable, you can use Arial, which is available in over 90 different styles. o Microsoft Internet Explorer. Even though this tool does not really qualify as a screen reader, it is useful when you want to look at various links and web content on your computer screen. With the proper keyboard combination, you can even select words that appear on your screen and highlight them so you can read them. For example, if you are using Microsoft Word to write an essay, you can set your cursor on the word “bibliography” and then tap on the “highlighted text” option, so you can easily see all the information on that particular word. Web Accessibility Evaluation Tool o Color Contrast. Aside from the above-mentioned tools, there is also a great color contrast tool available in the Accessibility Evaluation Tool. This tool analyzes your web content based on color contrast. There are basically four levels in this feature, namely: normal, best, ideal and worst. So as long as you are using a site or page that has a lot of contrast, you should be able to enhance the usability or readability of your site. o Colour Frequency. As long as you are using a web page or a site that has text, you can change the colour scheme, so you can get the most out of the site. However, if you are using accessibility tools for web page layout, you have to remember that each color has its own distinct frequency in the palette. For example, cyan has a shorter duration and a greater contrast ratio than red.
In addition, this frequency may vary according to the type of color used, for example when a site is made of text or pictures. o WCAG 2.0. Another helpful feature of the Web Accessibility Evaluation Tool is the WCAG 2.0 tool, which is designed to compare two different web pages using just one criterion. If the site contains only text, then it will display the text contrast and the text dimension simultaneously. You can use either the text comparison tool or the two-dimension comparison tool from Access, which work differently. The following graphic shows some of the main features of the Web Accessibility Evaluation Tool that you should check every time you find an accessible site. You will need to use a small soft, white mouse, or a dark gray pen, to turn on the WCAG testing tool, and you should enter a site address or domain name. The left part of the screen will show you a visual comparison of your current configuration and the one provided by the website. The right side will provide a summary of the results, so that you will know how to proceed. A red dotted line indicates that a site is inaccessible, while a green rectangle indicates that the particular page is well designed and offers a good user experience. The next time you need to evaluate the accessibility of a site, you should use the WCAG testing tools, as they are quite effective in finding out whether or not a web page is usable.
https://medium.com/@davidsamith23/utilizing-web-accessibility-evaluation-tools-8fbd901bc7ee
[]
2020-12-27 06:10:49.370000+00:00
['Website', 'Tool Website']
Gradient Descent Machine Learning Optimization Algorithm from Scratch in Python
Let's discover the fundamentals of Gradient Descent

Introduction: Gradient descent is an optimization algorithm used in machine learning to find a local minimum of a differentiable function. It can be used in linear regression as well as in neural networks. In the realm of machine learning, it is used to find the values of the parameters of a differentiable function such that the loss is minimized. Let us understand the gradient descent algorithm with a simple practical example. Imagine a blind hiker trying to get down a hill to its lowest point, as shown in the image above. There is no way for the hiker to see which direction to go, as he is blind. However, there is one thing the hiker understands clearly: if he is going down, it is the right progress, and if he is going up, it is wrong progress. Therefore, if he keeps taking small steps that take him downwards, he will be able to reach the lowest point on the hill. Here, taking small steps can be thought of as the learning rate, and the height above the lowest point can be thought of as the loss. Reaching the lowest point of the hill can be considered convergence, which indicates no further possibility of going down, so the loss is at its minimum. Note that this is one of the posts in the series Machine Learning from Scratch. You may like to read other similar posts like Linear Regression from Scratch, Logistic Regression from Scratch, Decision Tree from Scratch, and Neural Network from Scratch. You may like to watch a more detailed video version of this article as below:

Implementation: Let's implement gradient descent from scratch using Python. We will start by importing the required libraries.

# Import the required Libraries
import pandas as pd
import numpy as np

Then let's define the function we want to optimize.
The function we are considering is y = (x - 5) * (x - 5).

# Gradient of y with respect to x
gradient_of_y = lambda x: 2 * (x - 5)

Initialize the value of x:

# Initial value of x, close to zero
current_x_value = 0.1

Define the maximum number of iterations that can be done:

# maximum number of iterations that can be done
maximum_iterations = 500

Define the learning rate, the current iteration, and the previous step size:

# Learning Rate
learning_rate = 0.01
current_iteration = 0
previous_step_size = 0.5

Finally, you need to run a while loop to optimize the loss. Note that the minimum value of x found is 4.99, which is very close to 5, as expected.

End Notes: In this article, we implemented the Gradient Descent Algorithm from scratch in Python. Note that when control comes out of the while loop, we print the value of x, which is the minimum found by the gradient descent algorithm. As you can see, the minimum value is 4.99, quite close to 5, as expected. Please note that the total number of iterations taken is around 342. Hence, the Gradient Descent optimization Algorithm converges and finds the expected minimum. The associated code can be found here. Happy Learning !!
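The while loop referenced in the implementation above is not reproduced in the text. Here is a minimal sketch consistent with the parameters defined there; the stopping tolerance `precision` is an assumption not given in the article, chosen so that the loop converges in roughly the 342 iterations the article reports:

```python
# Gradient of y = (x - 5) * (x - 5) with respect to x
gradient_of_y = lambda x: 2 * (x - 5)

# Parameters as defined in the article
current_x_value = 0.1       # initial value of x, close to zero
maximum_iterations = 500    # maximum number of iterations
learning_rate = 0.01
current_iteration = 0
previous_step_size = 0.5
precision = 0.0001          # assumed stopping tolerance (not given in the article)

# Keep stepping opposite to the gradient until the steps become tiny
# or the iteration budget runs out
while previous_step_size > precision and current_iteration < maximum_iterations:
    previous_x_value = current_x_value
    current_x_value = current_x_value - learning_rate * gradient_of_y(previous_x_value)
    previous_step_size = abs(current_x_value - previous_x_value)
    current_iteration += 1

print("Minimum of y found at x =", current_x_value)   # close to 5
print("Number of iterations:", current_iteration)     # around 342
```

Each pass moves x a little in the direction opposite to the gradient, so the error to the true minimum at x = 5 shrinks by a constant factor (here 0.98) per step, which is why convergence takes a few hundred iterations.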
https://medium.com/@dhirajkumarblog/gradient-descent-from-scratch-in-python-2a2348d48d56
['Dhiraj K']
2020-12-01 08:31:21.252000+00:00
['Python', 'Gradient Descent', 'Numpy', 'Machine Learning', 'Optimization']
The rise of Stablecoins: is the Future that bright?
by Gintautas Scerbavicius Ever since Bitcoin appeared, many things have been said about it. Some of them were good, others were bad, but most of them were controversial. In the end, after all the commotion, Bitcoin hasn’t yet achieved what many enthusiasts expected it to. One of the reasons why Bitcoin is still not used as a mainstream currency today is that its value is very unstable. This is why it’s used only in closed systems for large to medium-sized transactions. The fact that the price of the cryptocurrency can change drastically in a single day makes it hard to use day to day. The second most popular cryptocurrency by volume is Tether. It has gained broad prominence both as a medium of exchange for other cryptocurrencies and as a coin that has failed to provide audits of its reserves while simultaneously printing millions of new tokens. Tether (USDT) and other coins are in the news again because of a magical buzzword spreading across the crypto economy these days: “stablecoins”. What are Stablecoins? Most Bitcoin is traded on crypto-to-crypto-only exchanges, and stablecoins provide essential infrastructure for trading bitcoin and other cryptocurrencies there. A lot of people recognized the volatility problem and decided to create cryptocurrencies that won’t suffer from it, and this is how stablecoins first appeared. To put it simply, “stablecoins” are cryptocurrencies whose values are linked to a real-world asset. Many stablecoins have complex mechanics behind them, but the main thing is that these are crypto coins designed to have stable values. Stablecoins can broadly be divided into two main stability mechanism categories: algorithmic and asset-backed. Blockchain Luxembourg S.A., the company behind the first major empirical research study of the sector, counts at least 57 launched or pre-launch stablecoins: 23 stablecoins (40%) that are live and 34 stablecoins (60%) that are in the pre-launch phase.
Gintautas Scerbavicius, CTO at HODL Finance Some of the best-known stablecoins are TUSD, DAI, Bridge Coin, BITUSD, and of course Tether. There are two main types of stablecoins: asset-backed (or reserve-backed) and algorithmic. Asset-backed stablecoins Asset-backed stablecoins are backed one-for-one by reserves of the currencies they are pegged to. In theory, stablecoins could be linked to anything, but the majority are linked to currencies such as the US dollar or the euro. Just as the dollar is the major reserve currency in the world, it is also the most popular currency backing stablecoins. The most popular asset-backed stablecoin, Tether, is based on a blockchain and designed to hold a stable value of $1. It currently dominates the market with over 90% of all stablecoin market value. Tether claims to have dollar reserves for which its tokens can be redeemed, while some other stablecoins use lending or hedging systems and crypto assets to hold their value. Algorithmic stablecoins Algorithmic stablecoins, just as their name suggests, are not backed by any reserves but are instead controlled by an algorithm. “They’re really using software rules to try and match supply with demand to maintain a peg to something like the US dollar”, as Garrick Hileman, a blockchain researcher, told Business Insider recently. “As demand for an algorithmic stablecoin increases, supply also has to increase to make sure there’s not an appreciation in the value of the stablecoin. At the same time, as the value decreases, there needs to be a mechanism by which supply can be reduced again to try and bring the price of the stablecoin back to the peg”, Mr. Hileman explains. To name a few algorithmic stablecoins currently in development, we should mention Terra, Carbon, Basis, and Fragments. Many uses & even more concerns There are numerous uses for stablecoins. Among the biggest arguments for using them is their ability to serve as a medium of exchange.
In this case, a consumer could protect himself from the volatility of the market by holding stablecoins rather than highly volatile cryptocurrencies. Or a trader could trade bitcoin against a stablecoin on a crypto-to-crypto exchange. Stablecoins can also be used as a unit of account (the measure by which goods and services are priced) or as a store of value (a commodity, asset, or money that retains its purchasing power or value into the future), and much more. Most importantly, stablecoins could help create a tipping point for much broader crypto asset adoption by addressing one of the focal concerns in this relatively new sort of economic activity — volatility. Indeed, volatility is often named as one of the key reasons why traditional financial institutions and private citizens have refrained from the crypto economy. Above all, creators of stablecoins have to deal with exactly the same challenge we are all struggling with every day — ensuring that investors, clients, and the broader community trust them. Needless to say, legal clarity is a mandatory starting point. HODL Finance is a European digital lending company. HODL Finance issues loans backed by cryptocurrency and other digital assets. Founded by the shareholders of the peer-to-peer lending platform Savy, HODL Finance now serves clients around the world.
https://medium.com/hodl-finance/the-rise-of-stablecoins-is-the-future-that-bright-dad4074e098c
['Hodl Finance']
2018-12-18 15:13:59.579000+00:00
['Bitcoin', 'Stablecoin', 'Cryptocurrency', 'Blockchain', 'Tether']
Why You Shouldn’t Go to Casinos (3 Statistical Concepts)
Why You Shouldn’t Go to Casinos (3 Statistical Concepts) The house always wins. We all know this phrase. But this is more than a phrase. This is a simple, mathematically proven fact. And you’ll only have to know three statistical concepts to see why the house always wins. Tomi Mester Sep 21, 2020·7 min read Photo by Kay on Unsplash You are at the casino. The roulette wheel is spinning and the ball is bouncing. Bounce, bounce, bounce, you smile: “it’s red!” And then it bounces one more time. No, it’s black! You lose everything again and go home with empty pockets. Well, I hope you won’t — because you don’t go to casinos, you don’t buy scratch tickets, you don’t play the lottery or any gambling game in general. Why? Because these games are designed to make you lose money. And in this article I’ll tell you why. (Check out the podcast or video version of it, too!) These three statistical concepts come up often in data science projects, too. So if you are wondering why I’m talking about gambling on a data science channel, rest assured, you’ll be able to take advantage of this knowledge in your data science career, too. Anyways, three statistical concepts. These are: survivorship bias, expected value, and the hot-hand fallacy. Let’s start with the first one. Survivorship bias Everyone loves good stories! A good story sticks. And I bet that you, too, have a friend — or a friend of a friend — who won big on a sports bet, or came home with 10,000 bucks from Vegas, or won the dream trip to Malta on a scratch ticket… In short, won something big. The trick is that in gambling the good stories are always the ones that end with winning big. It makes sense. 
My grandma never talks about how she played the family numbers on the lottery last week and won nothing, again, for the 200th time. But she never forgets to mention when she won $6,000 on it in 2003. Why is that? Because losing is boring. It’s ordinary. It happens to everyone. Winning is exciting; it’s a fun-to-tell story, even after years. The story of winning big survives the filter of boredom. This is why this statistical concept is called survivorship bias. In this case, the story of winning is the thing that survives. And why is it a bias? Because of what happens here: my brain hears a winning story. That’s one data point. Then it hears another one, then another one, then another one. Sometimes it hears losing stories, too… but far fewer than there are in reality. So my poor brain will have a disproportionately big sample of winning stories and a relatively small number of losing stories. And it unconsciously creates false statistics from the skewed data — and so it thinks that I have a much bigger chance of winning than I really do. This is how my silly brain works. Well, okay, the bad news is that it’s not just my brain, it’s yours, too. In fact, it’s everyone’s brain: this is how humans are wired. We instinctively believe that we have a bigger chance of winning in games than we really do. Because of survivorship bias. Oh, and of course, almost all casinos and online betting companies amplify this effect as much as they can. Anyways, if there were no survivorship bias, we’d see our chances at gambling more rationally and probably none of us would ever go to casinos. So if you hear a good winning story, you should always remember that it’s not the full picture… and that on the full scale, the house always wins. I keep saying this, by the way: the house always wins. But I haven’t yet explained the math behind it. So let’s continue with that and head over to the second statistical concept. 
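Before moving on, the survivorship-bias effect is easy to demonstrate with a quick simulation (my own sketch, not from the original article): give a crowd of gamblers a negative-expected-value game, then average only the stories you would actually hear, i.e. the winners’.

```python
import random
from statistics import mean

random.seed(42)

def night_at_the_casino(spins=100, bet=10):
    """Net result of betting $10 on black for 100 spins of
    European roulette (18 winning pockets out of 37)."""
    return sum(bet if random.random() < 18 / 37 else -bet
               for _ in range(spins))

# 10,000 gamblers each play one night.
results = [night_at_the_casino() for _ in range(10_000)]

# The stories that "survive" are the winners' stories.
surviving_stories = [r for r in results if r > 0]

print(f"true average result:    ${mean(results):.2f}")
print(f"average story you hear: ${mean(surviving_stories):.2f}")
```

With these (assumed) parameters the true average is around minus $27 a night, while the average surviving story is a healthy win; your brain only ever samples the second number.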
Expected value Here, I won’t go into the details of the expected value calculation itself; check out this article to learn more: Expected Value Formula. But let me talk a little bit about expected value. Expected value shows what result you would get on average if you made the very same bet an infinite number of times. I know, this sounds a bit tricky, so let me give you a very simple example to bring this home. Flipping a coin. Flipping a coin is usually a fair game. When you flip a coin, there’s a 50%-50% chance of tails or heads. Let’s say that you bet, and when it’s tails you double your money, while when it’s heads you lose your money. If you do this over and over again, let’s say for 1,000 rounds, your wins and losses will balance each other out. Your average profit will be 0 dollars. That means that the expected value of this game is exactly $0. expected value — coin flip simulation (Image by author) In roulette, there’s a bet pretty similar to flipping a coin: betting red versus black. But in roulette your winning chances are a tiny bit lower compared to flipping a coin. When you put $10 on black, your expected value is not 0. It’s minus $0.27 per round. Again, I won’t go into the math here; check out the article I mentioned. But the point is that in every round you play, you lose an average of 27 cents. It seems like a very small amount of money. But over 1,000 rounds, it adds up and your losses will be around $270. expected value — roulette simulation (Image by author) I mean sure, expected value is a theoretical value, but it always shows itself in the long term. In other words: the more you play, the more you lose. The point is: roulette is a game where the expected value is negative — because the probabilities in it are designed in a way that you’ll lose in the long term. And it’s not that big of a secret that every single game in a casino is designed with a negative expected value. And that’s why the house always wins. 
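Those numbers are easy to verify yourself. A short sketch (mine, not code from the article) computes the expected value of both games directly from the probabilities:

```python
def expected_value(outcomes):
    """Expected value = sum of (probability * payoff) over all outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Fair coin flip, $10 stake: win $10 on tails, lose $10 on heads.
ev_coin = expected_value([(1 / 2, +10), (1 / 2, -10)])

# European roulette, $10 on black: 18 black pockets,
# 19 losing pockets (18 red + the zero) out of 37.
ev_roulette = expected_value([(18 / 37, +10), (19 / 37, -10)])

print(f"coin flip: {ev_coin:+.2f} $ per round")     # exactly 0
print(f"roulette:  {ev_roulette:+.2f} $ per round")  # about -0.27
print(f"1,000 rounds of roulette: {1000 * ev_roulette:+.0f} $")
```

The single-zero (European) wheel is assumed here; an American wheel with a double zero has an even worse expected value for the player.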
So that was the second statistical concept, expected value. Let’s talk about the third statistical concept: Hot hand fallacy This is another bias, and it explains why people don’t get out of a game when they are on a winning streak. First off, you have to know that probability is tricky. It works in a way that’s really hard for the human brain to interpret. There are events that are extremely unlikely to happen, like the chance that you’ll get 10 heads in a row when flipping a coin. The probability of that is less than 0.1%. Still, in a big enough sample, let’s say when you toss the coin 100,000 times, it’ll inevitably happen, even multiple times. And the same thing can happen to you. If you play 1,000 rounds of roulette, for instance, it’s actually pretty likely that you’ll have lucky runs. expected value — roulette simulation — there are lucky runs! (Image by author) And when one’s in the middle of a lucky run, it’s easy to feel that she has a hot hand. So she raises the bar, plays with bigger bets — in the hope of getting the most out of this winning streak. But the thing is that these winning streaks are nothing but blind luck, and statistically speaking they happen to everyone every now and then. In gambling, there’s no such thing as a hot hand. In the casino, just as fast as you win something, that’s how fast you can lose it. Again, you can’t escape the laws of statistics — and the more you play, the bigger the chance that you’ll lose. Remember, the house always wins. So don’t fall for the hot hand fallacy. If you go to the casino and play (against common sense) and you win (against the odds), the best thing you can do is get out immediately and be happy for being lucky! Conclusion So why shouldn’t you go to casinos? Because of 3 simple statistical concepts: survivorship bias, expected value, and the hot-hand fallacy. And don’t get me wrong, it’s your choice whether you gamble or not. I get it. 
It’s fun to play sometimes, it’s fun to get lucky, and it’s fun to win. I just wanted you to understand the math behind gambling — and to give you a more realistic picture of your chances and of why the house always wins. Tomi Mester, data36.com
https://towardsdatascience.com/why-you-shouldnt-go-to-casinos-3-statistical-concepts-a3b600086463
['Tomi Mester']
2020-09-23 20:19:01.837000+00:00
['Gambling', 'Data Science', 'Bias', 'Editors Pick', 'Statistics']
Where were you when I needed you the most?
What were you thinking, when you first saw my face? I didn’t ask for it to be born in this cruel, egotistical world Aren’t you supposed to be with me, when I needed you the most? What were you thinking, when you were counting the stacks of money, while I was teaching myself how to count in numbers? You should be proud of me, the person I have become, without your guidance and life lessons. Couldn’t you at least listen to me, when I reached out to you? Could you at least shed a tear for me, when I painfully gathered up all my courage, to tell you my deepest secrets? What were you thinking, when I came into your life? [41/100]
https://medium.com/100-naked-words/where-were-you-when-i-needed-you-the-most-f2244b2c4c86
['Kimberley Chung']
2017-03-26 05:02:01.446000+00:00
['100 Naked Words', 'Poetry']
A Look Ahead: 2019
RootProject uses blockchain to bring the power of markets to nonprofit fundraising. Our crowdfunding platform facilitates a new form of campaign design that maximizes rewards to supporters and beneficiaries, creating replicable and recurrent paths to funding. A Look Forward for RootProject in 2019 Focus = Scaling Up After four successful small campaigns in 2018, we are spending much of early 2019 focused on assisting organizations running large campaigns: bringing stakeholders and outside organizations together to form deep-rooted coalitions capable of real change. On a tactical level, this must be our core competency — one that will be amplified as more parts of the system are decentralized. The campaigns in the first half of the year will be larger in several ways: scope, dollars raised, stakeholder engagement, and tokens burned. Each campaign contributes to building and testing parts of our core competency, while each of our campaign partners, of course, meaningfully contributes to larger anti-poverty efforts. Initial Campaigns -> Test Campaigns -> Large Campaigns With Decentralized Governance -> Large Campaigns AND Projects Governed by the RootProject Ecosystem As we build on our core competency, we are building tech tools that will extend decentralized control not just to the campaigns themselves but to the projects they fund. The campaign for a blockchain school in Lagos, Nigeria, for example, will be our first real-world test of crowdfunding forks. Civilytics will continue building data tracking of police behavior, an issue intimately tied to urban poverty in the United States. Research A serious attempt to build a community governed by a decentralized network of stakeholders requires a disciplined approach to the data environment that is created. In crypto, price data is vital. An attempt to build an incentive system that revolves around an economic asset must be serious about how markets operate. 
In the next month we will be releasing to the public a 30-page research report that sums up some of our initial findings. We hope it will be of use to our backers, and to crypto holders more generally. Big Picture We have learned some hard lessons this year. It is a difficult time for many projects whose underlying models are far easier to scale and more capable of attracting revenue than RootProject’s. We did not hit our hard cap, forcing us to rethink how to build such an ambitious project. Ecosystems based on market economies are difficult to build while people question the very value of the asset class (tokens) at the center of those economies. Yet RootShop and Judge Research are allowing RootProject to survive the crypto winter. New hires are coming in Q1 of 2019 for RootProject, even while vastly larger crypto projects are folding.
https://medium.com/rootproject/a-look-ahead-2019-3e8b03c47b30
['Nicholas Adams Judge']
2019-02-08 03:58:08.016000+00:00
['Cryptocurrency', 'Blockchain', 'Crowdfunding']
Rhythm
Rhythm My sister and I playing, singing and composing The beat is strong and regular I feel it in my bones I cannot live without the cadence and its tones. I feel it in my heart and soul my feet move to the beat the beauty of the rhythm the excitement and the heat. I can feel it up the street and down the avenue the swing pulse throb and tempo a cadence tried and true. Give me a little latitude put on my dancing shoes play some twelve-string guitar and more rhythm and blues. Hear the words that resonate and flow in verse and prose from the tip of my head right down to my toes. The music moves and resonates the words smoother than honey patterns and arrangements that’s funny where’s the money. When the rhythm reoccurs smooth groovy and symphonic please and ease each other both mnemonic and harmonic. © David Rudder 22nd November 2020 Thanks for reading.
https://medium.com/illumination/rhythm-f1979030590f
['David Rudder']
2020-11-21 20:31:45.765000+00:00
['Rhythm', 'Lyrics', 'Singing', 'Poetry', 'Guitar']
Too Many Signs
After a couple of hours patrolling the streets, turning in his badge sounded better than ever. Not that he wanted a crime wave, but some nights passed too slowly. He stopped at a corner and saw something two blocks down. A dark figure loitering near a street sign. Frank turned the corner and drove down the street, his headlights soon illuminating the person. The man looked up as Frank parked, then turned back to the pole. He plunked a three-step ladder against the base of it and climbed up. Frank called in his location on the radio and stepped out of the car. “Evening, sir. May I ask what you’re doing?” “What does it look like I’m doing? I’m taking down this sign.” The man rummaged in the backpack he had slung over his shoulder and withdrew a hammer. “Sir, could you come down from there for a moment?” “What for?” The man peered over his shoulder at Frank. “I’d like to talk to you.” “I don’t have time for idle chatter. I have a lot of work to do.” “Humor me, please. Come down so we can talk.” The man muttered about young people having no sense of humor so why should he, but he stepped down from the ladder. “Do you mind if I hold on to the hammer, sir?” “I’m going to need it back. These signs don’t fall down by themselves.” “I understand that, sir.” Frank held out his hand and the man placed the hammer in it. Frank looked the man over. He wore a faded green military jacket over a t-shirt stained with grime. His eyes were rimmed with wrinkles and a straggly gray-white beard covered his chin. “What’s your name?” “Wayne Merrick. What’s yours?” “Do you have any ID, Mr. Merrick?” “Why? Don’t believe me?” “Standard procedure.” The old man grimaced and withdrew a grimy wallet from the pocket of his coat. He handed over a worn driver’s license. “This license expired six years ago.” “Don’t see me driving, do ya?” Frank handed it back. “What are you doing tonight, sir?” “Talking to you, of course. 
Proves my point, proves it exactly.” “What point?” The old man waved a hand at him, dismissing the question. “What are you doing with the sign, sir?” “Trying to take it down ’til you wanted to gab.” “Have you taken down any other signs?” “As many as I can. Some of the bolts are pretty rusty. Those take a lot of work.” Frank blinked. Not many people straight up confessed to committing a crime. “I imagine so. Why are you taking down the signs, Mr. Merrick?” “They’re useless. No point in having them around, cluttering up the city.” “The signs tell people where to go and what to do.” “Hogwash.” “Excuse me?” The old man waved a hand through the air again and Frank stepped back to avoid getting hit by the sweeping gesture. “Look around you! Nobody knows where to go or what to do. The world’s full of people stumbling around in the dark. Everyone’s ignorant and confused. The signs aren’t doing any good!” Frank shook his head. Another crazy person. Why did he get so many of the whackos? “The signs are for public safety. By removing them, you increase the chance for accidents.” “Maybe an accident is just the wakeup call somebody needs.” “I’m going to have to take you to the station, Mr. Merrick.” “What for? I’m just doing a public service.” “It’s illegal to remove the signs.” “The signs aren’t helping. If we leave them up, people will stay lost and confused.” “People will have to figure things out on their own.” Frank placed the old man in handcuffs and escorted him to the car. “Watch your head when you get in.” They drove for a few minutes before Merrick spoke again. “You think I’m crazy, don’t you?” Frank looked at him in the rear-view mirror. The man’s face was shadowed despite the numerous streetlights. “I’ve seen crazy and confused,” Merrick said. “I was in ‘Nam.” That explained a lot. “I’ve seen enough crazy to know that I’m not it.” That’s what they all said. “You know what I’m talking about. You’re a cop. You’ve seen it.” “Seen what?” “People. 
Lost, searching, reacting. Not thinking.” “I see crime, that’s what I see.” “People have no hope, no direction.” “Taking down street signs is going to give them direction?” “The signs aren’t helping where they are. Maybe if all the signs are gone, people will figure out their own direction in life.” “They’ll have to take their chances with the signs up.” Merrick sighed and shook his head. As Frank drove to the station, he started to notice all the different types of signs that lined the streets, covered the buildings, cluttered the sidewalks. Words and pictures filled the space around them. He wondered why he’d never really noticed all of the signs before.
https://medium.com/fiction-foundry/too-many-signs-1d54a2c7cfaa
['Cheryl Corbin']
2019-12-19 22:18:35.414000+00:00
['Short Story', 'Fiction', 'Life Lessons', 'Contemporary', 'Short Fiction']
Classictic.com
Getting to know the users The first thing I did was get to know their customers better. We are talking about people who want to go to a classical music concert, and this portal offers tickets worldwide, in 8 languages: English, German, French, Italian, Spanish, Chinese, Japanese and Russian. Why so many, and why these? Because they cover most of the tourists in the world. Basically, the website has three types of users, in this order: people who are already in a foreign country, people who are planning a trip, and people who just want to go to a concert in their hometown. Interestingly, most classical music concert halls can’t provide a website with translated content, assuming that they actually have one! So, I dove into the users’ stats and knowledge base, and interviewed the customer service and marketing teams to narrow their users down to 6 personas. An example of Classictic’s personas. Each persona came with: A name, age, and defined lifestyle and work style (symbolized by an image) A catchphrase that distinguishes the persona from others Key attributes that affect use and expectations of the product, service, or website (payment preferences) Frequently performed tasks Tools and resources used Pain points relevant to their goals UX design Apart from making the new website responsive and mobile-first, which meant thinking about 4 given dimensions at the same time, I came up with all the UX patterns that I could muster. Everything comes down to the little big details, right?! Main principles The idea was to stay within the big conventions of the web and e-commerce sites so as not to disturb our customers’ experience, as they were considered to have little browsing experience. Therefore, the portal is divided into 3 big parts, consistent across the user’s path: the header, with a navigation divided in two: first, the most used ways to browse the offer — with high contrast on the search button; second, the main tools used throughout the entire website (login, help, etc.) 
the main block, which has the content, divided into 2 columns except for the home page; and the footer, which hosts a sitemap for the more organizational part of the portal. To keep the reading and grasp of the website smooth and easy, I kept most features simple and consistent: only two different fonts are used, with a vertical rhythm implemented in most cases; 3 complementary colors are used throughout the site, with some variations of lightness; 3 levels for forms (especially used in the checkout process); 4 different types of panels, with background color and depth used to symbolize their importance; 4 levels of buttons, the first of which is dedicated to the main action of the page, like “Buy tickets”; and animations to guide the user, with a duration that is long enough to be understood. Expandable boxes were used to avoid cluttered pages and were favored over tabs to let the users choose whether they want to have all content available at the same time. Interesting problems faced and their solutions Smart scroll As I said earlier, expandable boxes were favored over tabs as a main pattern of the website. Trouble is, what if a box opens below the fold of the page? You can’t really tell something happened. That’s why every expandable box on the site, when opened, tests the possibilities and scrolls smoothly to show you the best option available: always at least the beginning of the box, and its end at the bottom of the viewport if the box is not taller than the viewport. Of course, this motion will be canceled if the user scrolls manually. Date of the event The users were often confused about the time and place of their purchases. 
So we added the information on the website at strategic places: on the event page, in the main information box right after the title of the event and in the ticket box where they choose their seats; on the cart page, in the first step where they can review their cart and in the third step as a recap before they choose their payment method. The change category panel We also added the possibility to book different rates (full rate, children’s discount fare, etc.) of an event at the same time. The problem was that, for back-end reasons, I was asked to find a solution so that if the customer wants to change the category on the cart page, he is redirected to the event page where he can make the change, while his previous choice is removed. The solution involved a panel that appears on top of the page, triggered on the select-box change. One-page checkout process To reduce the loss of transactions, the checkout process has been redesigned as a single page. Each step is displayed from the start, but only the first one is activated and open. The user has to complete each step to open the next one, but can open previous steps to review his/her inputs. Inputs in forms The inputs on a form To help the understanding of inputs and keep a clear look, labels have been placed inside their corresponding inputs as a default value. On focus, they are placed on top of the input in a small panel so they are always visible, even on a smartphone. Some inputs, like country selection, were enhanced with the technique described by Christian Holst: we replaced the very long select box with an input field with smart auto-complete when JavaScript is available. Language choice The former website used images of countries’ flags next to the language options, but to avoid offending sensibilities, they have been removed, since the same language can be used in various countries. 
Visual design The main principles guiding the redesign were: no changes to the logo, except for the main color, which was redefined as a lighter gold; classy and classical, but also warm and not intimidating; different from the competitors; and keeping in mind that most customers are women. I chose to follow two main inspirations: the look and feel of paper sheets, as something that everybody will recognize and understand, and the Material Design guidelines released by Google, for a more modern approach. The result is what I consider to be a clean and modern design, colorful and dynamic but not confusing for the target audience, with some elegant 3D effects brought by discreet shadows on blocks. Sketch for the upcoming portal of classictic.com Live style guide Classictic’s style guide I also delivered to the Classictic team a live style guide so they could maintain and expand the new portal. After some research, I chose the Hologram plug-in to compile the style guide. The main reasons for my choice were that, despite its relative novelty, it was stable enough and very easy to integrate into my workflow with Grunt, all the more so because Hologram compiles comments written directly in the source files using the Markdown syntax. So it’s easy to read for the developers, both in the files AND in the resulting style guide.
https://medium.com/ccoutzoukis/classictic-com-4c2dd94a7405
[]
2017-06-28 21:22:31.419000+00:00
['Design', 'Ux Por', 'Portfolio', 'Front End Development', 'UX']
Hadoop Tutorial - A Comprehensive Guide To Hadoop
Hadoop Tutorial - Edureka If you are looking to learn Hadoop, you have landed at the perfect place. In this Hadoop tutorial blog, you will learn from basic to advanced Hadoop concepts in very simple steps. Alternatively, you can also watch the video below from our Hadoop expert, discussing Hadoop concepts along with practical examples. In this Hadoop tutorial blog, we will be covering the following topics: How it all started What is Big Data? Big Data and Hadoop: Restaurant Analogy What is Hadoop? Hadoop-as-a-Solution Hadoop Features Hadoop Core Components Hadoop Last.fm Case Study How It All Started? Before getting into the technicalities in this Hadoop tutorial blog, let me begin with an interesting story about how Hadoop came into existence and why it is so popular in the industry nowadays. So, it all started with two people, Mike Cafarella and Doug Cutting, who were in the process of building a search engine system that could index 1 billion pages. After their research, they estimated that such a system would cost around half a million dollars in hardware, with a monthly running cost of $30,000, which is quite expensive. However, they soon realized that their architecture would not be capable of working with billions of pages on the web. They came across a paper, published in 2003, that described the architecture of Google’s distributed file system, called GFS, which was being used in production at Google. This paper on GFS proved to be just what they were looking for, and soon they realized that it would solve their problem of storing the very large files generated as a part of the web crawl and indexing process. Later, in 2004, Google published one more paper that introduced MapReduce to the world. Finally, these two papers led to the foundation of the framework called “Hadoop“. 
Doug said of Google’s contribution to the development of the Hadoop framework: “Google is living a few years in the future and sending the rest of us messages.” So, by now you will have realized how powerful Hadoop is. Now, before moving on to Hadoop, let us start the discussion with Big Data, which led to the development of Hadoop. What is Big Data? Have you ever wondered how technologies evolve to fulfill emerging needs? For example, earlier we had landline phones, but now we have shifted to smartphones. Similarly, how many of you remember floppy drives, which were extensively used back in the ’90s? Floppy drives have been replaced by hard disks because they had very low storage capacity and transfer speed, which made them insufficient for handling the amount of data we are dealing with today. In fact, now we can store terabytes of data in the cloud without being bothered about size constraints. Now, let us talk about the various drivers that contribute to the generation of data. Have you heard about IoT? IoT connects your physical devices to the internet and makes them smarter. Nowadays, we have smart air conditioners, televisions, etc. Your smart air conditioner constantly monitors your room temperature along with the outside temperature and accordingly decides what the temperature of the room should be. Now imagine how much data would be generated in a year by smart air conditioners installed in tens of thousands of houses. From this, you can understand how IoT contributes a major share of Big Data. Now, let us talk about the largest contributor to Big Data, which is none other than social media. Social media is one of the most important factors in the evolution of Big Data, as it provides information about people’s behavior. 
You can look at the figure below and get an idea of how much data is generated every minute: Social Media Data Generation Stats - Hadoop Tutorial Apart from the rate at which the data is generated, the second factor is the lack of proper format or structure in these data sets, which makes processing a challenge. Big Data & Hadoop — Restaurant Analogy Let us take the analogy of a restaurant to understand the problems associated with Big Data and how Hadoop solved them. Bob is a businessman who has opened a small restaurant. Initially, he used to receive two orders per hour, and he had one chef and one food shelf in his restaurant, which was sufficient to handle all the orders. Traditional Restaurant Scenario - Hadoop Tutorial Now let us compare the restaurant example with the traditional scenario, where data was generated at a steady rate and our traditional systems, like an RDBMS, were capable enough to handle it, just like Bob’s chef. Here, you can relate the data storage to the restaurant’s food shelf and the traditional processing unit to the chef, as shown in the figure above. Traditional Scenario — Hadoop Tutorial After a few months, Bob thought of expanding his business, and therefore he started taking online orders and added a few more cuisines to the restaurant’s menu in order to engage a larger audience. Because of this transition, the rate at which orders came in rose to an alarming figure of 10 orders per hour, and it became quite difficult for a single cook to cope with the situation. Aware of the problem in processing the orders, Bob started thinking about a solution. Distributed Processing Scenario - Hadoop Tutorial Similarly, in the Big Data scenario, data started being generated at an alarming rate because of the introduction of various data growth drivers such as social media, smartphones, etc. 
Now, the traditional system, just like the cook in Bob's restaurant, was not efficient enough to handle this sudden change. Thus, there was a need for a different kind of strategy to cope with this problem.

After a lot of research, Bob came up with a solution: he hired four more chefs to tackle the huge rate of incoming orders. Everything was going quite well, but this solution led to one more problem. Since the chefs were sharing the same food shelf, that very food shelf became the bottleneck of the whole process. Hence, the solution was not as efficient as Bob thought.

Distributed Processing Scenario Failure - Hadoop Tutorial

Similarly, to tackle the problem of processing huge datasets, multiple processing units were installed to process the data in parallel (just as Bob hired four more chefs). But even in this case, bringing in multiple processing units was not an effective solution, because the centralized storage unit became the bottleneck. In other words, the performance of the whole system is driven by the performance of the central storage unit: the moment the central storage goes down, the whole system is compromised. Hence, there was again a need to resolve this single point of failure.

Solution to Restaurant Problem - Hadoop Tutorial

Bob came up with another, more efficient solution: he divided the chefs into two hierarchies, junior chefs and a head chef, and assigned each junior chef a food shelf. Let us assume that the dish is meat sauce. According to Bob's plan, one junior chef will prepare the meat and the other junior chef will prepare the sauce. They will then pass both the meat and the sauce to the head chef, who will combine the two ingredients to prepare the meat sauce, which is then delivered as the final order.

Hadoop in Restaurant Analogy - Hadoop Tutorial

Hadoop functions in a similar fashion to Bob's restaurant.
Just as the food shelves are distributed in Bob's restaurant, in Hadoop the data is stored in a distributed fashion, with replication, to provide fault tolerance. For parallel processing, the data is first processed by the slave nodes, where it is stored, to produce intermediate results; those intermediate results are then merged by the master node, which sends back the final result.

By now you should have an idea of why Big Data is a problem statement and how Hadoop solves it. As we just discussed, there were three major challenges with Big Data:

1. Storing a colossal amount of data. Storing huge data in a traditional system is not possible. The reason is obvious: the storage is limited to one system, while the data is increasing at a tremendous rate.

2. Storing heterogeneous data. Storing is only one part of the problem. The data is not only huge, but it is also present in various formats: unstructured, semi-structured, and structured. So you need to make sure that you have a system that can store the different types of data generated from various sources.

3. Processing speed. The time taken to process this huge amount of data is quite high, because the data to be processed is so large.

To solve the storage issue and the processing issue, two core components were created in Hadoop: HDFS and YARN.
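The two-level chef hierarchy above is, in essence, the process-and-merge pattern Hadoop uses: each worker processes only its local partition of the data, and a master combines the intermediate results. Here is a toy Python sketch of that idea (an illustration of the pattern only, not actual Hadoop code; the function names are invented for this example):

```python
from concurrent.futures import ThreadPoolExecutor

# Each "junior chef" (slave node) works only on its own food shelf
# (its local data partition) and produces an intermediate result.
def process_partition(partition):
    return sum(partition)  # stand-in for any per-partition computation

# The "head chef" (master node) merges the intermediate results.
def merge(intermediate_results):
    return sum(intermediate_results)

def distributed_sum(partitions):
    with ThreadPoolExecutor() as pool:
        intermediate = list(pool.map(process_partition, partitions))
    return merge(intermediate)

# Four partitions, as if stored on four different shelves/nodes
print(distributed_sum([[1, 2], [3, 4], [5, 6], [7, 8]]))  # 36
```

Because each `process_partition` call touches only its own partition (its own "food shelf"), the partitions can be worked on independently and in parallel, and only the small intermediate results travel to the merge step.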
HDFS solves the storage issue, as it stores data in a distributed fashion and is easily scalable. YARN solves the processing issue by reducing the processing time drastically. Moving ahead, let us understand what Hadoop is.

What is Hadoop?

Hadoop is an open-source software framework used for storing and processing Big Data in a distributed manner on large clusters of commodity hardware. Hadoop is licensed under the Apache v2 license. It was developed by Doug Cutting and Michael J. Cafarella, based on the paper written by Google on the MapReduce system, and it applies concepts of functional programming. Hadoop is written in the Java programming language and ranks among the top-level Apache projects.

Hadoop-as-a-Solution

Let's understand how Hadoop provides a solution to the Big Data problems that we have discussed so far.

Hadoop as a Solution - Hadoop Tutorial

The first problem is storing a huge amount of data. As you can see in the image above, HDFS provides a distributed way to store Big Data. Your data is stored in blocks across DataNodes, and you specify the size of each block. Suppose you have 512 MB of data and have configured HDFS to create data blocks of 128 MB. HDFS will then divide the data into 4 blocks (512/128 = 4) and store them across different DataNodes. While storing these data blocks in DataNodes, the blocks are also replicated on different DataNodes to provide fault tolerance.

Hadoop follows horizontal scaling instead of vertical scaling. In horizontal scaling, you can add new nodes to an HDFS cluster on the fly as required, instead of increasing the hardware stack of each node.

The next problem was storing a variety of data. As you can see in the image above, HDFS can store all kinds of data, whether structured, semi-structured, or unstructured; there is no pre-dumping schema validation. HDFS also follows a write-once, read-many model.
Because of this, you can write any kind of data once and read it multiple times to find insights.

The third challenge was processing the data faster. To solve this, we move the processing unit to the data instead of moving the data to the processing unit. What does moving the computation unit to the data mean? It means that instead of moving data from different nodes to a single master node for processing, the processing logic is sent to the nodes where the data is stored, so that each node can process its part of the data in parallel. Finally, all of the intermediate output produced by the nodes is merged together, and the final response is sent back to the client.

Hadoop Features

Hadoop Features - Hadoop Tutorial

Reliability: When machines work in tandem and one of them fails, another machine takes over its responsibility and works in a reliable, fault-tolerant fashion. Hadoop's infrastructure has these fault-tolerance features built in, and hence Hadoop is highly reliable.

Economical: Hadoop uses commodity hardware (like your PC or laptop). For example, in a small Hadoop cluster, all your DataNodes can have normal configurations like 8–16 GB of RAM, a 5–10 TB hard disk, and Xeon processors. Using hardware-based RAID with Oracle for the same purpose would cost at least five times as much, so the cost of ownership of a Hadoop-based project is quite low. The Hadoop environment is also easier to maintain, and since Hadoop is open-source software, there is no licensing cost.

Scalability: Hadoop has the inbuilt capability of integrating seamlessly with cloud-based services. So, if you are installing Hadoop in the cloud, you don't need to worry about scalability: you can procure more hardware and expand your setup within minutes whenever required.

Flexibility: Hadoop is very flexible in terms of its ability to deal with all kinds of data.
We discussed "Variety" in our previous blog on Big Data, where data can be of any kind; Hadoop can store and process it all, whether structured, semi-structured, or unstructured.

These four characteristics make Hadoop a front-runner as a solution to Big Data challenges. Now that we know what Hadoop is, let us explore its core components.

Hadoop Core Components

While setting up a Hadoop cluster, you have the option of choosing from a lot of services as part of your Hadoop platform, but two services are always mandatory for setting up Hadoop. One is HDFS (storage) and the other is YARN (processing). HDFS stands for Hadoop Distributed File System, the scalable storage unit of Hadoop, whereas YARN is used to process the data stored in HDFS in a distributed and parallel fashion.

HDFS

Let us go ahead with HDFS first. The main components of HDFS are the NameNode and the DataNode. Let us talk about the roles of these two components in detail.

HDFS - Hadoop Tutorial

NameNode

- It is the master daemon that maintains and manages the DataNodes (slave nodes)
- It records the metadata of all the blocks stored in the cluster, e.g. the location of stored blocks, the size of the files, permissions, hierarchy, etc.
- It records each and every change that takes place to the file system metadata; for example, if a file is deleted in HDFS, the NameNode immediately records this in the EditLog
- It regularly receives a heartbeat and a block report from every DataNode in the cluster, to ensure that the DataNodes are live
- It keeps a record of all the blocks in HDFS and the DataNodes on which they are stored
- It has high availability and federation features
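To make the block arithmetic and the NameNode's bookkeeping concrete, here is a toy model in Python (purely illustrative: the block size and replication factor are configurable in real HDFS, real placement is rack-aware rather than round-robin, and all names below are invented for this sketch):

```python
import math

BLOCK_SIZE_MB = 128   # assumed block size; configurable in real HDFS
REPLICATION = 3       # assumed replication factor; the HDFS default is 3

def split_into_blocks(file_name, size_mb):
    """Split a file of size_mb into fixed-size block IDs."""
    n_blocks = math.ceil(size_mb / BLOCK_SIZE_MB)
    return [f"{file_name}_blk{i}" for i in range(n_blocks)]

def place_replicas(blocks, datanodes):
    """Toy NameNode table: block ID -> DataNodes holding a replica."""
    return {
        block: [datanodes[(i + r) % len(datanodes)] for r in range(REPLICATION)]
        for i, block in enumerate(blocks)
    }

datanodes = ["dn1", "dn2", "dn3", "dn4"]
blocks = split_into_blocks("logs.txt", 512)   # 512/128 = 4 blocks
print(len(blocks))                            # 4
print(place_replicas(blocks, datanodes)["logs.txt_blk0"])  # ['dn1', 'dn2', 'dn3']
```

The mapping returned by `place_replicas` is the kind of metadata the NameNode maintains: it never holds the file contents itself, only which DataNodes hold each block, so losing any single DataNode still leaves two replicas of every block in this sketch.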
DataNode

- It is the slave daemon which runs on each slave machine
- The actual data is stored on the DataNodes
- It is responsible for serving read and write requests from the clients
- It is also responsible for creating, deleting, and replicating blocks, based on the decisions taken by the NameNode
- It sends heartbeats to the NameNode periodically to report the overall health of HDFS; by default, this frequency is set to 3 seconds

So, this was HDFS in a nutshell. Now, let us move on to the second fundamental unit of Hadoop, i.e. YARN.

YARN

YARN comprises two major components: the ResourceManager and the NodeManager.

YARN - Hadoop Tutorial

ResourceManager

- It is a cluster-level component (one per cluster) and runs on the master machine
- It manages resources and schedules the applications running on top of YARN
- It has two components: the Scheduler and the ApplicationManager
- The Scheduler is responsible for allocating resources to the various running applications
- The ApplicationManager is responsible for accepting job submissions and negotiating the first container for executing each application
- It keeps track of the heartbeats from the NodeManagers

NodeManager

- It is a node-level component (one on each node) and runs on each slave machine
- It is responsible for managing containers and monitoring resource utilization in each container
- It also keeps track of node health and log management
- It continuously communicates with the ResourceManager to remain up to date

Hadoop Ecosystem

By now you will have figured out that Hadoop is neither a programming language nor a service; it is a platform or framework which solves Big Data problems. You can consider it a suite which encompasses a number of services for ingesting, storing, and analyzing huge data sets, along with tools for configuration management.

Hadoop Ecosystem - Hadoop Tutorial

Now, let us look at how Last.fm used Hadoop as part of its solution strategy.
Last.fm Case Study

Last.fm is an internet radio and community-driven music discovery service founded in 2002. Users transmit information to Last.fm's servers indicating which songs they are listening to. The received data is processed and stored so that users can access it in the form of charts, and Last.fm can make intelligent taste and compatibility decisions for generating recommendations. The data is obtained from one of the two sources stated below:

- Scrobble: when a user plays a track of his or her own choice and sends the information to Last.fm through a client application.
- Radio listen: when the user tunes into a Last.fm radio station and streams a song.

Last.fm applications allow users to love, skip, or ban each track they listen to, and this track-listening data is also transmitted to the server.

1. Over 40M unique visitors and 500M page views each month
2. Scrobble stats: up to 800 scrobbles per second; more than 40 million scrobbles per day; over 75 billion scrobbles so far
3. Radio stats: over 10 million streaming hours per month; over 400 thousand unique stations per day
4. Each scrobble and radio listen generates at least one log line

Hadoop at Last.fm

- 100 nodes
- 8 cores per node (dual quad-core)
- 24 GB memory per node
- 8 TB storage per node (4 disks of 2 TB each)
- Hive integration to run optimized SQL queries for analysis

Last.fm started using Hadoop in 2006 because of the growth in users from thousands to millions. With the help of Hadoop, they processed hundreds of daily, weekly, and monthly jobs, including website stats and metrics, chart generation (i.e. track statistics), metadata corrections (e.g. misspellings of artists), indexing for search, combining/formatting data for recommendations, data insights, and evaluation & reporting. This helped Last.fm grow tremendously and figure out their users' tastes, based on which they started recommending music.
I hope this blog was informative and added value to your knowledge. If you wish to check out more articles on the market's most trending technologies, such as Artificial Intelligence, Python, and Ethical Hacking, you can refer to Edureka's official site. Do look out for other articles in this series, which will explain various other aspects of Big Data.
https://medium.com/edureka/hadoop-tutorial-24c48fbf62f6
['Shubham Sinha']
2020-09-10 09:39:54.454000+00:00
['Big Data', 'Hadoop', 'Hadoop Training', 'Hdfs', 'Mapreduce']
So he was sexually assaulting women.
So he was sexually assaulting women. Can’t say I feel bad about his death. At least the women in that vicinity no longer need to worry about being sexually assaulted by him. I have a zero-sympathy approach when it comes to this type of man. He should’ve been in prison a long time ago.
https://medium.com/@r-h/so-he-was-sexually-assaulting-women-798650bd7f6b
[]
2020-12-27 16:12:44.868000+00:00
['Sexual Assault', 'Life']
9.25 Presidential Social Intelligence Battleground Tracker — Language Analysis: NC v. FL Data
9.25 Presidential Social Intelligence Battleground Tracker — Language Analysis: NC v. FL Data

A pulse of how potential voters in battleground states are discussing the Presidential candidates

Please check out the background post on this project if it is your first time reading.
— Our initial dataset from the week of 7.3 was detailed here.
— Data from the week of 7.11 was detailed here.
— Data from the week of 7.19 was detailed here.
— Aggregate data to date through 8.14 was detailed here.
— Language data from the weeks of 8.07 & 8.14 was detailed here.
— Aggregate data through 8.25 was detailed here.
— Language data from 8.21–8.28 was detailed here.
— Language data from 8.28–9.04 was detailed here.
— Aggregate data through 9.5 was detailed here.
— Language data from 9.04–9.11 was detailed here.
— Aggregate daily support gained/lost data through 9.17 was detailed here.
— A first look at Supreme Court data was detailed here.
— A deeper dive into Supreme Court data was detailed here.

Battleground States in our Analysis: AZ, CO, FL, IA, ME, MI, MN, NV, NH, NC, PA, TX, WI

Data Source: All data is publicly available and anonymized for our analysis from Twitter, Facebook, Online Blogs, & Message Boards.

Technology Partners: Eyesover & Relative Insight
https://medium.com/listening-for-secrets-searching-for-sounds/9-25-presidential-social-intelligence-battleground-tracker-language-analysis-nc-v-fl-data-a6d4cccc0780
['Adam Meldrum']
2020-10-09 00:57:45.973000+00:00
['Language Analysis', 'Social Listening', 'Joe Biden', '2020 Presidential Race', 'Donald Trump']
7 Creative and Easy Ideas for Decorating Your Room
Your room is where you escape from the din and clutter at the end of a tiresome day. It is the place that tells a lot about you and your favorite things. And most of all, it is your happy place, one which deserves a pretty decor. We all have our unique taste in decorative home décor, but there are some elements that will delight every homeowner with their presence and impression. Here are home decor ideas to help you transform your room and achieve that aesthetic look with ease:

1. Mason Jars

Mason jars can be attractive additions to your table as vases or pen holders. You can also make a DIY shelf, for which all you need is four jars, a wooden plank, adjustable hose clamps, a drill, and some screws (which can easily be found on Amazon or at your local plumbing store). Watch this tutorial on how to actually make a Mason Jar Organizer.

2. Ladder Hanger/Shelf

Do you struggle to find a place to keep your blankets and scarves? Well, who knew you could use an old ladder as a hanger! This not only gives a modern-meets-country look to your room but also scores high on functionality. Learn how you can use a ladder as a shelf or bookcase here.

3. Polaroid String and Donut Garlands

You can also make customized hangings for your room wall. All you have to do is print a few of your favourite memories as Polaroid-style photos and attach them to a string or rope with paper clips. Another way is to cut a colourful sheet of paper in the shape of a doughnut and paste these on any waste CDs that you might have lying around. Then paste these cute doughnuts on a string, and you're good to go! You can also check out these donut printables by Amy Moss, which are completely free for your personal use.

4. Succulents

Plants are always a nice inclusion in the room decor. Succulents give off a natural and lively vibe and don't even come with the added responsibility of remembering to water them every day.
You can go for matte ceramic vases for an urbane look, or tiny earthen pots to achieve a kitschy look.

5. Lamps and Candles

Light up your favourite corner with a scented candle and decorate your study table with a unique night lamp, and you will be amazed to see how much of a difference these small additions make. Your room is your sanctum, and it deserves a redolent and resplendent vibe. Try fairy lights or bauble lights to perk up the overall room decor.

6. Rugs and Throw Pillows

You can adorn your room with beautiful rugs, especially ones with unique and striking patterns. Experiment and intersperse your bed or couch with colourful throw pillows, which come in all sorts of quirky prints nowadays. You can easily find all of this online.

7. Character Walls

A colourful, fun front-wall décor can lend a lot of character to your room. Try painting a DIY mural, or simply hang a big, offbeat artwork or a cute wall clock on a pastel-coloured wall. When you have numerous character pieces in your room, the classic neutral wall works best. Check out this YouTube video for more inspiration.

You can use these ideas for bedroom decor and even living-room decor. These budget home decorating ideas should be your anytime savior. If you are a fan of art, crafts, and all things DIY, watch this space for more creative inspiration, tips, and tricks. Subscribe to MillionCenters.com and never miss out on learning something new!

Originally published at https://www.millioncenters.com.
https://medium.com/@betterthinkingsolutions/7-creative-and-easy-ideas-for-decorating-your-room-53581d99a158
['Better Thinking Solutions']
2020-11-24 14:19:49.980000+00:00
['House Decoration', 'Gardening', 'Decor', 'Home Decor']
Reflections on a starry night
‘Verily, We have decorated the nearest sky with an adornment, the stars,’ 37:6

I stood out to stare at the sky after many, many years. The brightest star always gets your attention first, and then, as you keep staring into the universe, many more twinkling wonders make themselves visible to you. It’s a sight to behold, and it’s a sight that was made for me by my Creator. One can never offer enough praise nor reach the optimal appreciation for Allah. Striving to live that life, however, is what will make us the best versions of ourselves. As I kept staring into the sky, the verse quoted above kept ringing in my ears. This is that stretch of the sky that we have not even begun to explore, and as far as our science can see, there are stars. The lowest heaven engulfs our reality as we know it, yet this magnanimity was created for my wonderment…for our wonderment…so that we could find Him. You don't have to be a scientist to realize the sheer distance, power, and order required to keep the universe in balance. Black holes, we are told, have a pull that doesn't even let light escape; yet there are millions and billions of them in this universe, controlled by the Almighty. Our sun alone, which is one of the weakest stars in comparison to what the universe holds, is and will always be beyond any human control and exploration. Instability in our sun alone would extinguish life on earth in a matter of moments. Yet it is the giver of life, the source of our sustenance, and what our life revolves around — literally.

Photo by Guillermo Ferla on Unsplash

Come to think of it, our solar system was enough to keep us occupied in exploration for a lifetime. This earth is enough to keep us indulged in awe. The human body was enough to keep us bowed down before His greatness. Yet here we are: thankless and ungrateful, unhappy and bored, self-righteous and arrogant.
Isn’t this Allah’s love and respect for His creation, that this entire universe twinkles for us at night so we could find it beautiful? Your eyes were made with the capability to see the sky and find peace, tranquillity, and Him in it. But we chose to use the night to indulge in sin, abhorrence, and crime. The universe, the lowest heaven, was made with the consideration that to your eyes these gigantic forces of gases and emissions look like stars, and that they are placed far enough away that all you see them as is mere ‘stars’. As I kept my neck arched towards the sky, the overwhelming thought that this entire vision has the capacity to fit in my eyes, just so I can find it in my heart to appreciate my Lord, left me stunned. Oh, what unthinkable mercy is this? There is so much happening around me that has been filtered away from my eyes by a few wavelengths here and there. Animals see the same colours differently, the same reality differently. Those with weaker eyesight see the world differently, infants see it differently, the elderly see it differently. Positivity sees life differently; negativity sees it differently. The greatest perception is shaped by our thoughts. Blind is not the one who can’t see, but the one who can’t see Him. Look around: can you really not see Him?

Photo by v2osk on Unsplash

In this sky, mankind has been exploring life, searching for more home grounds like the one we are on. The fascination with finding life elsewhere is as old as time. At first, the expeditions drove us around the earth, made us leap into deep waters, travel the land, climb mountains…travel to the moon. Now mankind is going beyond. There is something inside of us that is driving this search. There is another destination out there: not in the lowest heaven, but in the highest heaven. There is an eternal abode that we are living for. Is it possible that our collective efforts as people to keep exploring our reality are a push coming from our souls?
The soul was never made for this earth. It wants to return to the beautiful reality it has seen and known up there. The soul is in an unsettled state, and it keeps longing to nourish itself in Allah’s signs. But, you, the human, you are doing it grave injustice by keeping it away from that pleasure. Your thoughts and your ability to think is what, ironically enough, is keeping you from seeing Him. There is an unmistakable coherence in everything Allah has created. The balance, the harmony is humbling. The solar system is around a central nucleus — the sun. The orbits are calculated, timed and have no room for error. On earth, the Holy Kaaba unites the believers with that same construct. Furthermore, the atom is made in the same way. The deeper you go in any creation that you put your hands on, there is a depth that we have only recently begun to understand how to explore. The cell has only made sense to us in the last few decades. The quantum reality is diverse in the same way on both ends of the spectrum. The choice is yours. Go explore what you want to, you will only find more and more and more. Photo by Jongsun Lee on Unsplash Like the system of an atom forms you, is it not possible that the solar system forms something greater, and the system of the universe forms something even far greater, and the billions of these centres orbit around something Supreme? Everything revolves around a nucleus. That is the basic unit of our reality. Then, how are we living a life without the acknowledgement of The Nucleus? There is no room in this perfection for an alteration by a Planck length. Science tells us that the universe will collapse if that were to happen, yet, it doesn’t move us? It doesn’t put us in submission? It doesn’t make us think? May we be saved from the curse of ignorance. There is so much we don’t understand, yet the mind-numbing overconfidence with which we walk the earth is stupefying. 
We have not nurtured our minds in a way that lets us see the marvels of what surrounds us, yet we are egotistic and proud of our nonsensical thoughts and views. What a disappointing lot we are. Go travel the universe, the multiverse, and more. Develop telescopes, technology, and whatever else you can create — all you will find are stars; all you will see is the lowest heaven. To travel beyond, you need to travel within. As I lowered my gaze and bowed down before my Lord, I prayed for guidance. Because this is a journey that can’t begin without it, and this is the only journey worth taking.
https://medium.com/alt-op/reflections-on-a-starry-night-1ebc6a1458f4
[]
2020-05-24 12:21:05.687000+00:00
['God', 'Universe', 'Faith', 'Stars', 'Reflections']
I journaled consistently throughout the month of May
I journaled consistently throughout the month of May Photo: Oroma Elewa I strongly believe that if I were to sit with something long enough, it would reveal itself to me. That’s what I’m hoping would happen with this piece I’m writing. On a serious note, I cannot deny that May was a special month. I went skinny dipping. No, seriously. I said yes to what I later found out was going to be the best trip I’ve ever been on. I threw caution to the wind and allowed myself to be mesmerized. Every day felt like I was in a trance. I journaled consistently throughout the month of May and I really wonder why. For the first time in my life, I made it up to 20 days of journaling. Pretty impressive. These were my first and last sentences; I had the time of my life. I’m ready to start again from ground zero. What I realized was that I was really letting myself write what I felt. Short or long. I felt content with writing only one sentence and I didn’t make a conscious effort to stay consistent. I just wrote when I felt and I wrote what I felt. I counted, 23 days. That’s not so bad.
https://medium.com/@ginikachi/i-journaled-consistently-throughout-the-month-of-may-5888231d95c9
['Kachi Eloka']
2021-06-01 20:24:58.459000+00:00
['Writing Challenge', 'Self-awareness', 'Life Experience', 'Journaling', 'Self Love']
3 Ways To Get Featured In Forbes, Business Insider, & More
Photo by CoWomen on Unsplash

How do you get featured in places like Forbes, Business Insider, etc.? This is one of the most common non-job questions in my inbox. If you want to get featured, there are three ways to do it:

1. Pitch The Publication

Many sites have submission forms, or you can find an editor and pitch your piece. This is a BIG waste of time. As with applying for jobs online, these people get thousands of pitches, and most don't get read.

2. Relationships

Building relationships with people who write for these sites is a great way in. That's how I got my first in at Forbes. I made a list of 20 people who had columns on the site and worked to add value. I shared their pieces, left positive comments, etc. It took time, but they started asking to interview me and feature my writing.

3. Go Omni-Channel With Your Content

If you're creating content, re-purpose it! Editors at these companies are always scouring sites like Medium, Quora, LinkedIn, etc. to see which pieces are gaining traction. I landed my Inc. and Fast Co. features because an editor saw my answers on Quora. I landed in Business Insider because they saw my posts on Medium.

Lastly, be patient. Each of these strategies takes time. Rather than focusing on the feature, focus on the quality of your content. That's what will get noticed.
https://medium.com/@austin-belcak/3-ways-to-get-featured-in-forbes-business-insider-more-9158b0e516e6
['Austin Belcak']
2020-10-26 12:22:57.542000+00:00
['Marketing', 'Business', 'Personal Branding', 'Branding', 'Content Marketing']
Helping Students with Autism Adjust When School Re-Opens
by Coleen Vanderbeek, Psy.D., LPC, Director of Autism Services, Effective School Solutions

Students and teachers across the United States are finishing up the 2019–20 academic year via remote instruction after a demanding and anxiety-provoking four months under quarantine. The combination of fear, cabin fever, social isolation, and “Zoom fatigue” has left all members of the school community more vulnerable. As states move to contain the further spread of COVID-19 and to re-open schools and businesses, experts in multiple fields fear not only a second wave of infection but also an epidemic of mental health symptoms, including anxiety, depression, and post-traumatic stress (PTSD). The COVID-19 pandemic has given rise to a collective, mass trauma. People with reasonably good coping skills and the ability to self-regulate are struggling, so perhaps it goes without saying that individuals who were already more vulnerable, including people with autism spectrum disorder (ASD), are particularly impacted by this crisis. National statistics show that 1 in 59 individuals is affected by ASD, and in New Jersey the ratio is a staggering 1 in 32. Children with ASD are at increased risk both of encountering traumatic events and of developing traumatic sequelae, and although the topic is understudied, it is commonly believed that 100% of individuals with ASD will experience at least one traumatic event prior to age 18. The very experience of navigating the world with the social and communication deficits that are common to autism is, for many, a trauma in and of itself. Educators are painfully aware that there has been a significant increase in mental health symptoms in the general student population in recent years. One in five students ages 13 to 18 has or will have a serious mental illness. About 11 percent have a mood disorder, and 10 percent have a behavior disorder.
At the same time, the prevalence of autism spectrum disorder is increasing, and in many cases mental health and autism challenges overlap. Experts estimate that 75–80% of individuals with autism are dually diagnosed with a mental health or psychiatric disorder; the most common psychiatric diagnoses are depression, anxiety, and PTSD. Psychology professor and best-selling author Dr. Jean Twenge conducted a survey in April to assess the impact of the pandemic on U.S. adults, and found what she labeled a “devastating effect on mental health”. The 2020 survey revealed that 70% of participants met criteria for moderate to serious mental distress, compared with 22% in a similar survey conducted in 2018. Younger adults, ages 18–44, and parents with children under 18 at home have been particularly hard hit. And it is not much of a leap to assume that parents of special-needs children are struggling the most, attempting to manage Individualized Education Programs (IEPs) at home while working from home, potentially worrying about the loss of employment, and managing their own stressors. So, what can school professionals expect when students with ASD return in the fall, either in person or with some blended version of remote and in-person learning? Preliminary data suggest that these students have been regressing during the pandemic, despite the best efforts of districts to provide some version of IEP-mandated services and of parents struggling to facilitate their children’s remote learning. The full impact of disruptions in speech and occupational therapy, skill development programs, and other services is not yet known, and will depend on each student’s age, developmental stage, academic skill, the severity of ASD symptoms, family environment, and a myriad of other contributing variables. Mental health symptoms have increased in ASD-affected children, along with perseverative and self-soothing behaviors, and the quality of social relationships has declined.
Many students with ASD already feel socially isolated and ostracized, and when school resumes, might feel even more anxious and detached because of the quarantine. Stable, supportive relationships with teachers and members of their care team have been disrupted, and may take a while to re-establish. Because of the need for social distancing, special and enriching relationships with grandparents and other family members have been affected, potentially causing students to feel unsafe and insecure. While some students with ASD back away from physical contact due to sensory and/or social issues, others favor hugging and physical connection, and may struggle to respect boundaries and maintain social distance. Sensory issues might also affect students’ comfort with wearing masks, with hand washing, or with the use of hand sanitizer. Students on the spectrum need consistency and predictability, and almost all home, school, and recreational routines have been turned upside down over the last few months. Due to difficulty adjusting to change, students with autism will likely need more time to acclimate to a more typical school schedule. This will become even more complicated if a district needs to combine in-school and remote learning in order to safely reopen, as a student’s schedule might vary from day to day. A predominant emotion of parents and students alike upon the return to school will be ambivalence — a welcoming of a return to some version of normal, coupled with fear about venturing out into a newly unsafe world. After the trauma of Hurricane Katrina, child specialists observed that there was an increase in students’ externalizing behaviors. Many students with ASD already have challenges communicating their wants and needs, so an increase in behavioral expression can be expected. 
Since trauma causes difficulty with self-regulation, hyper-vigilance, and changes in cognitive ability and introspection (already a challenge for many students on the spectrum), teachers can expect more emotional outbursts, more fearfulness, increased engagement in restricted interests, increased perseveration, and increased problems with verbalizing emotions and needs. On the other hand, some students may respond with hypo-arousal, appearing passive and detached from learning and social interactions. Other changes brought about by the COVID-19 crisis may also impact students. Many have been less physically active, have spent long hours on computers and other devices, and have had sleep and eating patterns disrupted, and these disruptions of routine can have significant negative effects. Parental stressors, such as unemployment and financial hardship, can also impact a child’s well-being and sense of safety. When school reopens, teachers may find that students with more stable and supportive family environments have greater difficulty separating from parents, while those in high-conflict families may have been subject to emotional and/or physical abuse because of the heightened tension associated with quarantine. Some parents will be mourning the deaths of friends and family members taken by the virus, and although the loss of a person does have some impact on students with ASD, the loss of routine, the loss of tangible objects, and even the loss of a pet are shown to be greater contributors to their sense of grief. If all of this sounds very daunting, it is. But there are many things that school professionals can do for students with autism to ease them back into school. There are a few key areas to consider in planning for re-opening: Remember that ASD is a spectrum disorder; there are many subtypes, and each student can have unique strengths and challenges.
Be cautious about discounting the difficulties of students with milder forms of ASD, as students with higher developmental capacities tend to be overlooked when it comes to supports. Since they do not complain, they do not get attention. The other contributing factor to the lack of support is the misconception that because a student has higher capacities, he should “know better”, or have the ability to problem solve on his own. Sadly, this is not true: if it were, these higher functioning individuals would not carry the ASD diagnosis. Encourage parents to regularly document their children’s behavioral and mental health symptoms over the summer in order to track regression and provide a baseline for the beginning of in-person instruction and intervention in the fall. Work with parents to identify rewards and incentives that can help motivate their children to re-engage in academic and therapy activities. And be sure to inquire about tangible losses such as family deaths or job loss so as to fully support students who are in mourning, or whose families are struggling financially. If your district is proposing some form of blended instruction, with some students attending on some days and the rest on others, consider an exception for students with autism, e.g. daily attendance, or 3–4 school days in a row rather than alternating days. Plan a school, classroom, and schedule “walk-through” right before school re-opens so that students can be prepared for the new flow of their days. Create visual supports — label lockers and cubbies, desks and chairs, the bathrooms and closets; create task checklists and visual daily schedule cards, etc. to help orient students. One of the universal losses during the COVID-19 crisis has been the loss of the sense of control over our lives. Where possible, offer students choices, e.g. which cubby they want, or what side of the classroom they want to sit on.
Prepare students for new procedures or practices that might trigger sensory issues, such as frequent hand washing, wearing masks, or needing to refrain from hugging. If a blended instructional model will be used, make sure to share your screens and include a lot of visual material during remote video instruction since individuals with ASD tend to be more visual than auditory with regards to learning. Consider sensitivity to volume and noise when adjusting computer sound settings. Complete an environmental check of the classroom and learning environments, go through each sense and modify the environment as needed based on the results (e.g., adjust the lighting, temperature, sound). Maintain consistency as students with ASD struggle with unexpected changes and transitions. The #1 reason that students with ASD present in psychiatric crisis is a disruption of their routines. Take the time to be proactive and create a variety of options for routines and structure throughout the school day. Remember to include visual supports when possible. Go back to the basics of working with ASD affected students. Find a common interest from which to build rapport/relationships. Create a list of each student’s personal preferences (e.g., video games, TV shows, music etc.) with the student/family, then do some research on the student’s interest and regularly spend time touching base on these interests. Communicate frequently with members of each student’s support team, including community providers, psychotherapists, psychiatrists, primary care physicians, neurologists, OT, PT, ABA, speech therapy providers, and family caregivers. Such coordination is necessary in order for skills to generalize outside of the educational setting. Consider the function of typical ASD behaviors, and examine your responses. For example, tantrums and repetitive behaviors may occur when students feel threatened and are unable to verbally express distress or soothe themselves. 
See yourself as an important part of the equation in each teacher-student relationship; your stress level and reactivity cannot be dismissed. It is most helpful if school professionals can adopt and maintain a trauma-informed stance toward students. Essentially, this means asking yourself “what is going on here? what happened to this child? what does this child need?” rather than “what is wrong with this child?” A trauma-informed stance will favor the conceptualization of “problem” behaviors as fight-flight, trauma-related survival mechanisms, rather than viewing them as oppositional or “bad” behaviors. Where possible, help the student focus on internal feelings and sensations when feeling threatened, coaching them on how to attach words to feelings, and ask them what would help them feel safe. As in-person classes resume, don’t act like nothing happened, but don’t talk endlessly about the crisis either. Make room for students to express their experiences and distress, but redirect students to the educational tasks at hand, and toward hopeful planning and preparing for the future. Institutionalize classroom practices such as mindfulness, movement breaks, singing, or art tasks. The ESS Autism toolkit offers some suggestions about communicating more effectively with students with autism. Check in often with parents and caregivers who may remain highly stressed, especially if the district implements a blended educational model that incorporates in-home remote instruction. As school re-opens, some may greet you with a greater sense of respect and cooperation, while others may be more irritable and demanding. Try not to take it personally. It is also critically important to remember that the COVID-19 crisis has disproportionately affected minorities and poorer communities, so don’t assume that all families have the level of support and resources needed to keep food on the table, much less to supervise in-home instruction and interventions. 
In the aftermath of Hurricane Katrina, traumatized students looked to teachers to provide personal affirmation and hope. It is important to remember that even one strong, supportive relationship with a school professional can go a long way to help a student heal from trauma and grief. But remember: you are grieving many tangible and intangible losses as well. Both staff and students will need to “name and claim” this grief in order to move forward from losses of all kinds. Self-care is more important than ever, and ESS staff members are here to help with resources and referrals for both you and your students. Resources: Twenge, J.M., Cooper, A.B., Joiner, T.E., Duffy, M.E., & Binau, S.G. (2019). Age, Period, and Cohort Trends in Mood Disorder Indicators and Suicide Related Outcomes in a Nationally Representative Dataset, 2005–2017. Journal of Abnormal Psychology, Vol. 128, №3, 185–199.
https://effectiveschoolsolutions.medium.com/helping-students-with-autism-adjust-when-school-re-opens-f40040f095ce
['Effective School Solutions']
2020-06-17 20:40:30.018000+00:00
['Mental Health', 'Education', 'Autism Spectrum Disorder', 'K12 Education', 'Autism']
A Kind of Trust
A Kind of Trust A Poem About Leaning Into the Unknown eyes open palms turning up, even in darkness the air, every breath given and then let go, before it came into this body it was exchanged in the complex interior labyrinth of countless other beings for eons this heart, beating my-o-cardium raise this cup and drink it may be that to be truly alive in this uncertain world is. . . yes
https://berrywoman08.medium.com/a-kind-of-trust-83f2fd376fcb
['Michelle Berry Lane']
2020-11-23 20:02:02.957000+00:00
['Poetry', 'Spiritual Tree', 'Faith', 'Vulnerability', 'Spiritual']
How to turn misjudgement into education
How to turn misjudgement into education Has there ever been a time in your life when you have been misunderstood or misjudged because of an inaccurate perception of your situation? You then go into an explanation only to be told not to seek approval or validation. You're not going to get what you want, but what we think you need to hear from us. Few people go out of their way to help you unless there is something in it for them. Giving people permission to misunderstand you can cause confusion, because media today can misrepresent and skew the situation with negative slants to the story, and so can onlookers who don't really know you or the heart behind what you are saying. This level of public scrutiny can be intense, especially if you get on the wrong side of it. When this happens, things don't work in your favour and it can feel like you are going against the grain. Not everyone will get you or understand what you are about, even with the best intentions or when coming from a place of serving and honour. When you are in the music industry, for example, you can base everything on recognition, or affirmation that a person or their feelings or opinions are valid or worthwhile. But really, even if you know you are worthwhile and have high self-esteem, you still need supporters who come around you and are part of your tribe and community. No one person can exist on an island. We all need companionship, love and support. If you have a business you need people to become fans and buyers of your work, otherwise what is the point? You're just creating work for you alone. Support is a two-way street and should be genuine. Imagine you had a concert in a large concert hall and only you showed up; how lonely would that be. We are social beings who need connection and seek acceptance from each other, even if it's only a few people that really get us. So the next time you are misunderstood, go out of your way to teach and show them the road to empathy by the way you communicate effectively. 
It's great for you and for them as well. If you found this inspirational, recommend, follow and yes, do applaud, because no one wants a concert of none.
https://medium.com/@angeliichoo77/how-to-turn-misjudgement-into-education-a1a6b7f0ce51
[]
2020-12-20 12:52:18.402000+00:00
['Validation', 'Media Criticism', 'Education', 'Celebrity', 'Misunderstanding']
LINQ in c# : concise guide. LINQ (Language Integrated Query) is a…
LINQ (Language Integrated Query) is a uniform query syntax in C# for retrieving data from different sources and formats. LINQ generalizes the query interface by removing most of the hurdles of making raw data query-ready, and as a result it returns an object for easy access to the results.

Required Namespaces

using System;
using System.Linq;
using System.Collections.Generic; // for method-syntax LINQ

Three Parts of a Query Operation

The query does not run when it is defined; it runs at the time of actual query execution, when dataQuery executes and creates an IEnumerable<T> result set, which is what the foreach loop enumerates in place of dataQuery.

// 1. Obtain the data source
int[] dataList = new int[7] { 0, 1, 2, 3, 4, 5, 6 };

// 2. Query creation
var dataQuery = from num in dataList
                where (num % 2) == 0
                select num;

// 3. Query execution
foreach (int num in dataQuery)
{
    Console.Write("{0,1} ", num);
}

Two ways to write LINQ: query syntax, shown above, and method syntax, which expresses the same query as chained extension-method calls, e.g. dataList.Where(num => (num % 2) == 0).
https://medium.com/@shubham-nikam/linq-in-c-concise-guide-31a3cee2f31a
['Shubham Nikam']
2021-01-09 06:45:49.958000+00:00
['Net Core', 'Lambda', 'Linq', 'Dotnet Core', 'C Sharp Programming']
Assessing Burnout — A Case (and Tool) for Change
Written by Arief Kartolo & Christine Yip, edited by Lillie Sun They say knowledge is power — the sooner you can recognize the symptoms of burnout, the quicker you can take steps to resolve it. At Organizations for Impact, we believe that awareness is the critical first step to preventing and addressing the risks of burnout. In this post, we’re sharing a statistically validated survey tool to help individuals identify and understand the symptoms associated with burnout, and provide leaders and managers with a tool to use to help manage a team that might be at high risk for burnout. Awareness is the first step Experiences of burnout across workplaces are becoming increasingly common. In a 2018 survey of working Canadians, Morneau Shepell found that 1 in 3 Canadians were more stressed from work than they were 5 years ago. Deloitte’s marketplace survey of over 1,000 professionals found that 77% had experienced burnout at their current job. Despite this, there is still very little that is being done to address burnout. Deloitte’s leadership survey found that 69% of professionals feel that their employers aren’t doing enough to address the problem. While there are many reasons why this is the case, we know many people still suffer in silence, unaware that they are on a path to burnout, or unsure of where or who to go to for help. Burnout still carries a harmful stigma, leading many to ignore the warning signs out of fear that they will be seen as lazy, incompetent or simply “not cut out for the job.” This prevents individuals from speaking up and seeking the appropriate help, leading to serious burnout along with other mental and physical health problems. See our blog post “Demystifying burnout and what to do about it” for a more detailed discussion on the impacts of burnout to individuals and organizations.
While we still have a long way to go before burnout is seen as a legitimate management issue, increasing our awareness of the signs and symptoms of burnout and having the tools to track and monitor burnout in our teams is an important first step. Assessing your signs and symptoms Listed below is the survey tool developed by organizational researchers to measure and better understand experiences of burnout. An online version that allows individuals to self-assess and analyze their scores immediately is available here. The tool is a statistically validated 10-item scale that assesses the extent to which an individual is experiencing symptoms of burnout by assessing their levels of apathy and exhaustion towards work — the two primary symptoms of burnout. Apathy is associated with having a negative attitude toward work, and exhaustion is related to the depletion of both mental and physical energy at work. The purpose of sharing this scale is to help individuals identify and understand the symptoms associated with burnout, and provide leaders and managers with a tool to use to help manage a team that might be at high risk for burnout. It should be noted that this is not a diagnostic tool. This scale cannot diagnose whether someone is burnt out or not. However, it can be used to help increase awareness of the symptoms of burnout and inspire action to create healthier work habits and work environments.
Response Scale: Never (1), not often (2), some of the time (3), very often (4), and all of the time (5)

Survey Items:

Employment Apathy Items:
· I feel my work is meaningless
· I am not motivated at my job
· Work is not fulfilling for me
· I feel as though I am hitting a wall at work
· I feel dissatisfied with work

Employment Exhaustion Items:
· I think my job is making me sick
· I do not have the energy to complete my work
· I feel mentally exhausted from work
· I feel physically exhausted from work
· Thinking about my job makes me tired

What Makes This Scale Different

While employee surveys are a dime a dozen, many do not meet the psychometric standards required to ensure their validity and reliability. This means that there is no way of proving that the survey is measuring what it intends to measure, or that the results are generalizable and reliably capture the same behaviors and attitudes across different groups being measured. The scale above has been developed using the appropriate statistical techniques to ensure both the validity and reliability of the measure. For more details, please refer to the academic article the authors published here.

How to use the scale

This burnout assessment tool can be administered in multiple ways.

1. Individual employees can administer, assess, and score this burnout assessment themselves. This can help individuals to become more aware of their own mental health by identifying their own symptoms associated with burnout.

2. Managers, leaders, and HR professionals/consultants can administer the assessment across their teams. This can help identify patterns of high stress and burnout at the organizational level, which can be used to build a case for making changes to certain management processes or behaviours.
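The article itself does not prescribe a scoring rule for the 10-item scale. A common convention for Likert scales like this one is to average the items in each subscale, so here is a minimal sketch under that assumption (the function name, example responses, and the averaging rule are all illustrative, not part of the published instrument):

```python
# Hypothetical scoring sketch for the 10-item burnout scale above.
# Assumption: each subscale score is the mean of its five 1-5 responses;
# the original article does not specify a scoring rule.

def subscale_score(responses):
    """Average a list of 1-5 Likert responses into a subscale score."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on the 1-5 scale")
    return sum(responses) / len(responses)

# Example: one respondent's (made-up) answers to the five apathy items
# and the five exhaustion items.
apathy = [2, 3, 2, 4, 3]
exhaustion = [4, 4, 5, 3, 4]

print(f"apathy:     {subscale_score(apathy):.1f}")      # 2.8
print(f"exhaustion: {subscale_score(exhaustion):.1f}")  # 4.0
```

A higher mean on either subscale would suggest more frequent symptoms, but, as the article stresses, this is an awareness aid, not a diagnostic cut-off.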
However, given that burnout is a very sensitive topic that can carry harsh stigmas, there are some important factors to be mindful of when you are administering the burnout assessment tool as a manager:

1. The tool is intended only for developmental purposes, not performance appraisal. Showing symptoms of burnout does not mean that the employee is weak or incompetent. It is important to communicate that the results from this assessment are not indicative of their performance level, and will not be used for performance appraisal. Employees may be reluctant to provide the full picture of their mental health if they are afraid that their responses are “incorrect” or that they would be reprimanded.

2. Respect the privacy and anonymity of responses. Burnout is an extremely sensitive topic. It is important to respect the privacy and anonymity of those who responded to the assessment to ensure the accuracy and validity of the results. To ensure anonymity and privacy of the data, managers should administer the measure to a group of employees, rather than singling out an individual team member, while ensuring no identifying information (e.g., names, gender, role) is collected during the process. Engaging a third party to administer the tool and analyze the results can also ensure that employees’ privacy is protected and confidentiality is maintained.

3. Follow up with your employees — but only use the aggregated results. “Why am I doing this if nothing ever changes around here?” An assessment without action can discredit efforts and undermine change initiatives in the future. Employees’ morale may suffer as a result of false expectations. It is important to follow up with your employees to communicate the results and discuss the patterns that might have emerged across the team or business unit. Ask questions and invite collaborative conversations to devise appropriate and meaningful strategies for burnout prevention.
The Key Takeaways An important step to minimizing the risk of burnout is to increase awareness of its signs, symptoms, and negative implications for the health and wellness of individuals and organizations. In this post, we have provided an evidence-based assessment tool to help individuals and managers determine the extent to which they or their teams are experiencing symptoms of burnout. While there are many benefits to assessing symptoms of burnout, there are also risks. Today, burnout still carries a negative stigma, meaning many people are afraid to share their experiences out of fear that it might reflect poorly on their job performance. This means that any attempt to assess or address burnout has to do so with care, respecting the privacy and sensitivity of the data being collected. At Organizations for Impact, we believe that awareness is the first step towards making important changes to the work habits and management practices that contribute to burnout. These changes will play an important role in encouraging efforts to better address high pressure, stressful working conditions, leading to healthier and more productive work environments.
https://medium.com/organizations-for-impact/assessing-burnout-a-case-and-tool-for-change-b3a5eeb0b338
['Organizations For Impact']
2020-05-19 22:24:36.208000+00:00
['Management', 'Surveys', 'Leadership', 'Hr Talent Management', 'Burnout']
The Need the Church Doesn’t Fulfill
I’ve never felt very at home in Christianity. It’s not that I’ve struggled with existential doubt or felt excluded from the church community — it’s just that I never connected much with the Christian message. It didn’t spark my imagination or inspire an emotional response, and hearing about the beauty of the Church made me roll my eyes. Something always niggled at me. I blamed the asymmetry of the faith. Other religions look round and complete: Christianity is jagged, broken in two, disproportionate, unaesthetic. I blamed the asexual fabric of the doctrine — the Trinity in its cheery homeostasis, generating and being generated but never longing, colliding, melding, the focus on celibacy. The disconnect didn’t bother me much until I was introduced to the writings of Luigi Giussani, who emphasized time and time again that Christ and the Church correspond to the needs of our heart. This was best explained to me at a discussion group— I held a guitar close to my ear and someone struck a tuning fork, and I was meant to listen to hear the string that corresponded to that note vibrate in response. Christ is the tuning fork, we are the guitar strings, and whenever we start resounding, when something attracts us, it is a sign of our ‘religious sense’, a course we can follow to God. I never came closer to losing my religion entirely. I knew that my heart had never vibrated like a guitar string in response to the faith I had followed intentionally for my whole life. Was I that out of tune? Or was there nothing to listen for?
https://inpurpledurance.medium.com/the-need-the-church-doesnt-fulfill-85ff930f4439
['Cecily Lawless']
2020-11-10 14:07:14.450000+00:00
['Nature', 'Religion', 'Spirituality', 'Christianity', 'Catholic']
How to approach layout design for medical illustration
Understand the messaging As with any other project, the first things you need to define are: What do you want to show? What do you want your audience to know? What are the important key players of this story? If you haven’t defined these things then you will find it challenging to move forward in an optimal direction when it comes to the final layout design. So here’s an example: What do you want to show? I want to show atherosclerosis What do you want your audience to know? They need to understand what an artery looks like and what atherosclerosis looks like. They should also understand that this is a gradual buildup What are the important key players of this story? Artery, red blood cells, fibrin, platelets, fatty deposits, and some associated text. Do some market research Next, you want to see if anyone has done something similar to this in your field. I do this so that I have an understanding of what’s available in the market as well as what good or bad trends are being repeated. I can then use these examples as a baseline to try and do something a little different to what’s already available. Personally, I don’t think there’s any value in copying or repeating something that’s already been done — you’re only going to shortchange yourself and not grow creatively. Do some inspiration research Once I’ve done my market research the next step for me is to go and look at other parts of the creative industry, such as graphic design or web design, for layout inspiration. I usually save these as links, take screen grabs or pin them to a private Pinterest mood board. When I’m doing this I look for designs that catch my eye or designs that might be on-trend. Translate what you like into design thinking Now, this is probably the hardest part for most people and that’s breaking down what you like about a piece of inspiration into simple design thinking.
Once I’ve gathered about 5 to 10 pieces of inspiration I’ll have them opened together on my Pinterest board or on Adobe Bridge so that I can see them all at once. This way I can begin to pinpoint any repeating design ideas that I like, notice any immediate trends between these selected pieces, and also identify what design ideas could be translated into my own layout. When I was first starting out I wrote this stuff down, like the example image below, but now that I have more practice I can list and remember these in my head. If you want to practice putting design ideas into words then it might also be useful reading this article as well as reading up on design fundamentals. Speed layout Once I have all of this information in place I’ll dive into Photoshop or even pen and paper to start creating multiple layout ideas for my project. To make sure that I create multiple layout options and ideas with speed I’ve adopted Aaron Draplin’s approach to logo design because I feel that his methodology translates well to creating layouts. The approach that he takes with creating concepts for one logo is to duplicate and retain various copies of each logo design change. If he finds that he’s going down a not-so-great design path, he can look back, pinpoint where he went wrong and revert to a previous design. By the end of his logo design process, he has a wonderful overview of the various design iterations he made to reach the final product. Before I adopted this method I would do multiple layouts within the one canvas and delete layouts that I didn’t quite like, which meant that they were lost throughout my development process. After a couple of hours of pixel pushing I would have no record of what I had achieved and would come out feeling pretty disheartened. Once I embraced Aaron’s method I found my process sped up and felt like I could gain a good overview of my process and ideation.
The goal for speed layouts is to do it quickly, do as many as you can and not overthink things. At the end of it you’ll have a couple of layout examples where you can pull elements of what’s working into a final layout and drop what’s not working. Speed layouts take practice; at first you might only come up with 3 variations, but with more practice that number will increase. Choose the best parts and refine After I’ve done all this I can then select the best layout design choices and go deeper to refine the overall layout. This is where I can hone in on certain areas without losing the overall messaging because I’ve already established that. I’ll still follow the iteration method as I refine the layout, making sure not to lose a record of any changes made in case I need to revert back to a previous version.
https://medium.com/@annie-campbell/how-to-approach-layout-design-for-medical-illustration-edec907f7473
['Annie Campbell']
2020-12-22 16:34:21.572000+00:00
['Design Thinking', 'Design Process', 'Sciart', 'Medical Illustration', 'Layout']
Creator Hangout: Catching up with Green Pea Cookie Creator Larissa Russell
More food tips from Terry And while we were on the subject of food, we asked Terry for advice she’d share with anyone considering launching a project in that category: Make sure your food photos are clear, in-focus, and delicious looking, always. Instagram is overflowing with inspiration when it comes to food photography on a budget (and done with a cellphone!). The Brigadeiro Bakery team had some great photos on their project page. And on that note, learn to love Instagram, Facebook, Twitter, and Pinterest. There are huge food communities on these platforms. Make sure what you’re posting is sharable. Stumped for video content ideas? A voice-over in combination with some clips of your ingredients or cooking process stitched together into a short (two minutes or less) video is a fantastic recipe to build on. Check out the Subzero Superfood video for inspiration. Want to minimize local shipping (especially great for delicate or chilled products!)? Host a “pick-up” party for backers who opt-in (surveys are your friend). Not only will you cut down on shipping costs, but you’ll have the chance to meet some of your biggest supporters. Do you have experience shipping your product? Wondering how it will survive transit? Test your shipping and handling! Send test packages to friends or relatives in a few major cities, remote areas, and even internationally. Ask them to report back on how long it took them to receive the package, and the quality of the packaging and food once it arrived. We hope you enjoyed our month of food chats, tweets, and tips! Thanks to everyone that joined February’s Creator Hangouts, including our friends from New Food Economy. If you’re interested in joining our next Creator Hangout, check the schedule here or subscribe to our Facebook page to see who we’re talking to next. Peas out! ✌
https://medium.com/kickstarter/catching-up-with-green-pea-cookie-d1b7b103455f
[]
2018-12-17 20:41:54.449000+00:00
['Food', 'Kickstarter', 'Creator To Creator']
PACKET SWITCHING- The Big Picture
Packet switching

Packet switching transmits data across digital networks by breaking it down into blocks, or packets, for more efficient transfer across network devices. Each time one device sends a file to another, it breaks the file down into packets so that the network can determine the most efficient route for sending the data at that time. The network devices then route the packets to the destination, where the receiving device reassembles them for use.

Delays in packet switching:

1. Transmission delay
2. Propagation delay
3. Queuing delay
4. Processing delay

Transmission delay: the time taken to put a packet onto the link; in other words, the time required to put the data bits on the wire or communication medium. It depends on the length of the packet and the bandwidth of the network.

Transmission delay = data size / bandwidth = L/B seconds

Propagation delay: the time taken by the first bit to travel from the sender end to the receiver end of the link; in other words, the time required for a bit to reach the destination from the start point. It depends on the distance and the propagation speed.

Propagation delay = distance / propagation speed = d/s

Queuing delay: the time a packet waits in a queue until it can be transmitted. It depends on congestion: it is the difference between when the packet arrives at a node and when it is actually processed or transmitted. It may be caused mainly by three things: originating switches, intermediate switches, or call-receiver servicing switches.

Average queuing delay = (N - 1)L / (2R), where N = number of packets, L = size of a packet, and R = bandwidth

Processing delay: the time it takes routers to process the packet header. Processing a packet helps detect bit-level errors that occurred during its transmission to the destination. Processing delays in high-speed routers are typically on the order of microseconds or less.
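The three delay formulas above (L/B, d/s, and (N-1)L/(2R)) can be sketched in Python. The link speed, packet size, and distance used in the example are illustrative values, not taken from the article:

```python
def transmission_delay(packet_bits, bandwidth_bps):
    """Time to push all bits of a packet onto the link: L / B."""
    return packet_bits / bandwidth_bps

def propagation_delay(distance_m, speed_mps):
    """Time for the first bit to travel the link: d / s."""
    return distance_m / speed_mps

def avg_queuing_delay(num_packets, packet_bits, bandwidth_bps):
    """Average queuing delay for a burst of N packets: (N - 1) * L / (2 * R)."""
    return (num_packets - 1) * packet_bits / (2 * bandwidth_bps)

# Example: a 12,000-bit (1500-byte) packet on a 1 Mbps link,
# over 2,500 km of fibre (signal speed roughly 2e8 m/s).
L, B = 12_000, 1_000_000
d, s = 2_500_000, 2.0e8

print(transmission_delay(L, B))      # 0.012 s
print(propagation_delay(d, s))       # 0.0125 s
print(avg_queuing_delay(10, L, B))   # 0.054 s for a burst of 10 packets
```

Note how, for this link, the transmission and propagation delays are of the same order, while the queuing delay dominates as soon as a burst of packets arrives.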
In simple words, it is just the time taken to process a packet.

Reliability: Packet switching is reliable in that the destination can identify any missing packets. Circuit-switched networks, by contrast, deliver packets in order along the same route and are therefore less likely to experience missing packets in the first place.

Data transmission: A packet-switched network follows networking protocols that divide messages into packets before sending them. Packet-switching technologies are part of the basis for most modern Wide Area Network (WAN) protocols, including Frame Relay, X.25, and TCP/IP. Compare this to standard telephone landline service, which is based on circuit-switching technology. Circuit-switched networks are ideal for most real-time data transmission, while packet-switched networks are both effective and more efficient for data that can tolerate some transmission delay, such as site data and e-mail messages.

Network performance: Improved efficiency means less wasted network bandwidth. Because there is no need to reserve a circuit even when it is not in use, the system is more efficient: a constantly reserved circuit wastes bandwidth, so network efficiency tends to increase with the use of packet switching.

Implementation: Packet switching is implemented using the store-and-forward and pipelining concepts. Every node independently decides the next leg of a packet's journey. Unlike in circuit switching, each packet carries a header, which contains the source and destination IP addresses along with a sequence number. Packet switching is used mostly for small units of data, because for large data the delay increases significantly.

Error handling: Occasionally, packets might bounce from router to router many times before reaching their destination IP address. Enough of these kinds of "lost" data packets can congest the network, leading to poor performance, and data packets that bounce around the network too many times may get lost.
The hop count addresses this problem by setting a maximum number of bounces per packet. "Bouncing" simply refers to the inability to locate the final destination IP address, and the resulting transfer from one router to another instead. If a packet reaches its maximum hop count, the maximum number of hops it is permitted before reaching its destination, the router it is bouncing from deletes it. This causes packet loss.

Applications:

LTE technology in mobile phones
E-mail services used in daily life

Does the Internet use packet switching as its default transmission technique?

The Commission has defined the Internet as "a global, packet switched network that enables interconnection between networks using the Internet Protocol." While this is true, it is only part of the story. The IP is not used just for interconnecting existing provider networks; it extends all the way to the end users of these networks. Put differently, end-user devices use the Internet as a packet-switched network directly: unless blocked by a firewall or similar device, every Internet-connected computer can send messages to every other Internet-connected computer in the same native IP format. More technically, the Internet is the packet-switched network that receives IP-formatted packets from connected users and delivers them, immediately and unmodified, to the destination specified by the user in the corresponding field of the IP packet header.

The genius of the IP, and the reason for its success, is that it enforces standardization only at a single, abstract network layer. This allows considerable flexibility in the lower transport layers and provides a consistent and transparent interface to higher application layers. Thus, an IP network can be built using any physical-layer technology, so long as that technology can accurately deliver IP packets to the next switch in the network.
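The hop-count mechanism can be sketched in a few lines of Python. This is a toy model, not real router code: a counter is decremented at each router, and the packet is dropped once the limit is reached:

```python
def forward(packet_ttl, route_hops):
    """Decrement the hop count at each router along a route of route_hops hops.

    Returns True if the packet is delivered within its hop limit,
    False if some router along the way deletes it (packet loss).
    """
    ttl = packet_ttl
    for _ in range(route_hops):
        ttl -= 1
        if ttl <= 0:
            return False  # maximum hop count exceeded: the router drops the packet
    return True

print(forward(packet_ttl=64, route_hops=12))   # True: delivered
print(forward(packet_ttl=5, route_hops=30))    # False: dropped mid-route
```

This is essentially how the TTL (time-to-live) field in the IP header works: a packet that keeps "bouncing" eventually exhausts its hop budget instead of circulating forever.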
The advantage of this approach is that it allows the creation of a large, interconnected network using existing networks as transport. This flexibility greatly reduced adoption costs, which was crucial to initial adoption.

A newer technique used by the Internet: Multiprotocol Label Switching (MPLS)

Multiprotocol Label Switching (MPLS) is a data-forwarding technology that increases the speed and controls the flow of network traffic. With MPLS, data is directed through a path via labels instead of requiring complex lookups in a routing table at every stop. Scalable and protocol-independent, this technique works with Internet Protocol (IP) and Asynchronous Transfer Mode (ATM). When data enters a traditional IP network, it moves among network nodes based on long network addresses. With this method, each router on which a data packet lands must make its own decision, based on routing tables, about the packet's next stop on the network. MPLS, on the other hand, assigns a label to each packet to send it along a predetermined path.

How does Multiprotocol Label Switching work?

Label Switched Paths (LSPs) are predetermined, unidirectional paths between pairs of routers across an MPLS network.

1. When a packet enters the network through a Label Edge Router (also known as an "ingress node"), it is assigned to a Forwarding Equivalence Class (FEC), depending on the type of data and its intended destination. FECs are used to identify packets with similar or identical characteristics.
2. Based on the FEC, the ingress node applies a label to the packet and encapsulates it inside an LSP.
3. As the packet moves through the network's "transit nodes" (also known as Label Switch Routers), those routers continue to direct the data according to the instructions in the packet label. These in-between stops are based on the packet label, not additional IP lookups.
4. At the "egress node," the final router at the end of the LSP, the label is removed and the packet is delivered via normal IP routing.
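The steps above can be sketched as a tiny label-forwarding simulation. The node names, label numbers, and the dict-based table are all made up for illustration; each transit node does only a label lookup, never an IP routing decision:

```python
# Each entry maps (current node, incoming label) -> (next node, outgoing label).
# A None outgoing label means the egress node pops the label.
LSP = {
    ("ingress", 17): ("transit-1", 42),   # ingress pushes label 17, swapped to 42
    ("transit-1", 42): ("transit-2", 7),  # label swap at each transit node
    ("transit-2", 7): ("egress", None),   # egress pops the label
}

def mpls_forward(node, label):
    """Follow the label-switched path until the label is popped at the egress."""
    path = [node]
    while label is not None:
        node, label = LSP[(node, label)]
        path.append(node)
    return path

print(mpls_forward("ingress", 17))
# ['ingress', 'transit-1', 'transit-2', 'egress']
```

The point of the sketch: once the ingress node has classified the packet, every subsequent hop is a constant-time table lookup on the label, which is what makes MPLS forwarding fast and predictable.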
https://medium.com/@atharvnhavkar5/packet-switching-the-big-picture-72d4c275c580
[]
2020-12-18 16:37:03.204000+00:00
['Packet Switching', 'Thebigpicture', 'Internet', 'Analysis']
Objects, Services, and Dependencies
ARTICLE

From the Object Design Style Guide by Matthias Noback

In this article we'll discuss all the relevant aspects of instantiating a service. You'll learn how to deal with its dependencies, what you can and can't do inside its constructor, and how to instantiate it once and make it reusable many times.

________________________________________________________________

Take 37% off the Object Design Style Guide by entering fccnoback into the discount code box at checkout at manning.com.

________________________________________________________________

Two types of objects

In an application there are typically two types of objects:

Service objects, which either perform a task or return a piece of information.
Objects that hold some data, and optionally expose some behavior for manipulating or retrieving that data.

Objects of the first type are created once, then used any number of times, but nothing can be changed about them. They have a simple lifecycle: once they've been created, they can run forever, like little machines with specific tasks. These objects are called services.

The second type of object is used by the first type to complete its tasks. These objects are the materials the services work with. For instance, a service may retrieve such an object from another service, manipulate it, and hand it over to yet another service for further processing (Figure 1). The lifecycle of a material object may be more complicated than that of a service: after it has been created, it can optionally be manipulated, and it may even keep an internal event log of everything that has happened to it.

Figure 1. This UML-style sequence diagram shows how services call other services, passing along other types of objects as method arguments or return values. Inside a service method, such an object may be manipulated, or a service may retrieve data from it.

Objects that perform a task are often called "services".
These objects are doers, and they often have names that indicate that: controller, renderer, calculator, etc. Service objects can be constructed by using the new keyword to instantiate their class, e.g. new FileLogger().

Inject dependencies and configuration values as constructor arguments

Services usually need other services to do their job. These are their dependencies, and they should be injected as constructor arguments. An example of a service with a dependency is the FileLogger class in Listing 1.

Listing 1. The FileLogger service.

interface Logger
{
    public function log(string message): void;
}

final class FileLogger implements Logger
{
    private Formatter formatter;

    public function __construct(Formatter formatter) ❶
    {
        this.formatter = formatter;
    }

    public function log(string message): void
    {
        formattedMessage = this.formatter.format(message);
        // ...
    }
}

logger = new FileLogger(new DefaultFormatter());
logger.log('A message');

❶ Formatter is a dependency of FileLogger.

Making every dependency available as a constructor argument makes the service ready for use immediately after instantiation. No further setup is required, and no mistakes can be made with it.

Sometimes a service needs some configuration values, like a location for storing files, or credentials for connecting to an external service. Inject such configuration values as constructor arguments too, as is done in Listing 2.

Listing 2. Besides a dependency, FileLogger also requires a configuration value.

final class FileLogger implements Logger
{
    // ...
    private string logFilePath;

    public function __construct( ❶
        Formatter formatter,
        string logFilePath
    ) {
        // ...
        this.logFilePath = logFilePath;
    }

    public function log(string message): void
    {
        // ...
        file_put_contents(
            this.logFilePath,
            formattedMessage,
            FILE_APPEND
        );
    }
}

❶ logFilePath is a configuration value that tells the FileLogger which file the messages should be written to.
These configuration values may be globally available in your application, in some kind of parameter bag, settings object, or other large data structure containing all the other configuration values too. Instead of injecting the whole configuration object, make sure you inject only the values the service should have access to; in fact, inject only the values it needs.

Keeping together configuration values that belong together

A service shouldn't get the entire global configuration object injected, only the values it needs. Some of these values are always used together, and injecting them separately breaks their natural cohesion. Take a look at the following example, where an API client gets the credentials for connecting to the API injected as separate constructor arguments (Listing 3).

Listing 3. The ApiClient class with separate constructor arguments for username and password.

final class ApiClient
{
    private string username;
    private string password;

    public function __construct(string username, string password)
    {
        this.username = username;
        this.password = password;
    }
}

To keep these values together, you can introduce a dedicated configuration object. Instead of injecting the username and password separately, you can inject a Credentials object which contains both (see Listing 4).

Listing 4. Username and password now reside together in a Credentials object.
final class Credentials
{
    private string username;
    private string password;

    public function __construct(string username, string password)
    {
        this.username = username;
        this.password = password;
    }

    public function username(): string
    {
        return this.username;
    }

    public function password(): string
    {
        return this.password;
    }
}

final class ApiClient
{
    private Credentials credentials;

    public function __construct(Credentials credentials)
    {
        this.credentials = credentials;
    }
}

Exercises

4) Rewrite the constructor of the MySQLTableGateway class in such a way that the connection information can be passed as an object:

final class MySQLTableGateway
{
    public function __construct(
        string host,
        int port,
        string username,
        string password,
        string database,
        string table
    ) {
        // ...
    }
}

Inject what you need, not where you can get it from

If a framework or library is complicated enough, it offers you a special kind of object which holds every service and configuration value you could ever want to use. Common names for such a thing are: service locator, manager, registry, or container.

What is a service locator?

A service locator is itself a service, from which you can retrieve other services. The following example shows a service locator with a get() method. When called, the locator returns the service with the given identifier, or throws an exception if the identifier is invalid (Listing 5).

Listing 5. A simplified implementation of a service locator.

final class ServiceLocator
{
    private array services;

    public function __construct()
    {
        this.services = [
            'logger' => new FileLogger(/* ... */) ❶
        ];
    }

    public function get(string identifier): object
    {
        if (!isset(this.services[identifier])) {
            throw new LogicException(
                'Unknown service: ' . identifier
            );
        }

        return this.services[identifier];
    }
}

❶ We can have any number of services here.

In this sense, a service locator is like a map; you can retrieve services from it, as long as you know the correct key.
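A minimal Python sketch of the locator from Listing 5 can make the "map with a get() method" idea concrete. The plain dict and the lambda standing in for a logger service are assumptions of this sketch, not part of the book's code:

```python
class ServiceLocator:
    """Map-like locator: retrieve a service by its string identifier."""

    def __init__(self, services):
        self._services = dict(services)

    def get(self, identifier):
        if identifier not in self._services:
            raise LookupError("Unknown service: " + identifier)
        return self._services[identifier]

# Any object can stand in for a service; here a plain function plays "logger".
locator = ServiceLocator({"logger": lambda msg: "logged: " + msg})

print(locator.get("logger")("A message"))   # logged: A message
try:
    locator.get("mailer")
except LookupError as e:
    print(e)                                # Unknown service: mailer
```

As the article goes on to argue, the fact that this object hands out *any* service by key is exactly what makes injecting it wholesale so tempting, and so harmful to a class's declared dependencies.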
In practice, this key is often the name of the service class or interface that you want to retrieve. Most often the implementation of a service locator is more advanced than the one we just saw. A service locator often knows how to instantiate all the services of an application, and it takes care of providing the right constructor arguments when doing so. It also reuses already-instantiated services, which can improve runtime performance.

Because a service locator gives you access to all of the available services in an application, it may be tempting to inject a service locator as a constructor argument and be done with it, as in Listing 6.

Listing 6. HomepageController uses a ServiceLocator to get its dependencies.

final class HomepageController
{
    private ServiceLocator locator;

    public function __construct(ServiceLocator locator) ❶
    {
        this.locator = locator;
    }

    public function execute(Request request): Response
    {
        user = this.locator.get(EntityManager.className)
            .getRepository(User.className)
            .getById(request.get('userId'));

        return this.locator.get(ResponseFactory.className)
            .create()
            .withContent(
                this.locator.get(TemplateRenderer.className)
                    .render(
                        'homepage.html.twig',
                        [
                            'user' => user
                        ]
                    ),
                'text/html'
            );
    }
}

❶ Instead of injecting the dependencies we need, we inject the whole ServiceLocator, from which we can later retrieve any specific dependency the moment we need it.

This results in a lot of extra function calls in the code, which obscures what the service does. Furthermore, the service needs to know how to retrieve its dependencies. This is the opposite of the inversion of control we're looking for when we use dependency injection: we don't want our service to bother with fetching its dependencies; we want them to be provided to it. Besides, this service now has access to many more services that can potentially be retrieved from the service locator.
Eventually this service ends up fetching all kinds of unrelated things from the service locator ad hoc, because the locator doesn't push the programmer to look for a better design alternative. Whenever a service needs another service to perform its task, it has to declare the latter explicitly as a dependency and get it injected as a constructor argument. The ServiceLocator in this example isn't a true dependency of HomepageController; it's only used to retrieve the dependencies. Instead of declaring the ServiceLocator as a dependency, the controller should declare the dependencies it needs as constructor arguments and expect them to be injected, as shown in Listing 7.

Listing 7. HomepageController with its dependencies injected as constructor arguments.

final class HomepageController
{
    private EntityManager entityManager;
    private ResponseFactory responseFactory;
    private TemplateRenderer templateRenderer;

    public function __construct(
        EntityManager entityManager,
        ResponseFactory responseFactory,
        TemplateRenderer templateRenderer
    ) {
        this.entityManager = entityManager;
        this.responseFactory = responseFactory;
        this.templateRenderer = templateRenderer;
    }

    public function execute(Request request): Response
    {
        user = this.entityManager.getRepository(User.className)
            .getById(request.get('userId'));

        return this.responseFactory
            .create()
            .withContent(
                this.templateRenderer.render(
                    'homepage.html.twig',
                    [
                        'user' => user
                    ]
                ),
                'text/html'
            );
    }
}

The resulting dependency graph is much more honest about the dependencies of the class (see Figure 2).

Figure 2. In the initial version, HomepageController only seemed to have one dependency. After we get rid of the ServiceLocator dependency, it's clear that it has three dependencies.

We should make another iteration here. In the example we only need the EntityManager because we have to fetch the user repository from it. We should make the repository an explicit dependency instead, as shown in Listing 8.

Listing 8.
Instead of the EntityManager, HomepageController needs a UserRepository.

final class HomepageController
{
    private UserRepository userRepository;
    // ...

    public function __construct(
        UserRepository userRepository,
        /* ... */
    ) {
        this.userRepository = userRepository;
        // ...
    }

    public function execute(Request request): Response
    {
        user = this.userRepository
            .getById(request.get('userId'));
        // ...
    }
}

What if I need the service and the service I retrieve from it?

Consider the following code, which needs both the EntityManager and the UserRepository dependency:

user = this.entityManager
    .getRepository(User.className)
    .getById(request.get('userId'));

user.changePassword(newPassword);

this.entityManager.flush();

If we follow the advice to inject the UserRepository instead of the EntityManager, we end up with an extra dependency, because we'll still need the EntityManager for flushing (i.e. persisting) the entity. Situations like this usually require a redistribution of responsibilities: the object which can retrieve a User entity should also be able to persist any changes made to it. In fact, such an object follows an established pattern, the Repository pattern. Because we already have a UserRepository class, it makes sense to add a flush(), or (now that we have the opportunity to choose another name) save(), method to it:

user = this.userRepository.getById(request.get('userId'));

user.changePassword(newPassword);

this.userRepository.save(user);

All constructor arguments should be required

Sometimes you may feel like a dependency is optional; the object could function well without it. An example of such an optional dependency could be the Logger we saw earlier. You may consider logging to be a secondary concern for the task at hand. To make it an optional dependency of a service, make it an optional constructor argument, as is done in Listing 9.

Listing 9. Logger is an optional constructor argument of BankStatementImporter.

final class BankStatementImporter
{
    private Logger?
logger;

    public function __construct(Logger? logger = null)
    {
        this.logger = logger; ❶
    }

    public function import(string bankStatementFilePath): void
    {
        // Import the bank statement file
        // Every now and then log some information for debugging...
    }
}

importer = new BankStatementImporter(); ❷

❶ logger can be null or an instance of Logger.
❷ BankStatementImporter can be instantiated without a Logger instance.

This unnecessarily complicates the code inside the BankStatementImporter class. Whenever you want to log something, you first have to check whether a Logger instance has been provided (if you don't, you'll get a fatal error):

public function import(string bankStatementFilePath): void
{
    // ...
    if (this.logger instanceof Logger) {
        this.logger.log('A message');
    }
}

To prevent this kind of workaround for optional dependencies, every dependency should be a required one.

The same goes for configuration values. You may feel like the user of a FileLogger doesn't need to provide a path to write the log messages to because a sensible default path exists, so you add a default value for the corresponding constructor argument (Listing 10).

Listing 10. The client doesn't have to provide a value for logFilePath.

final class FileLogger implements Logger
{
    public function __construct(
        string logFilePath = '/tmp/app.log'
    ) {
        // ...
    }
}

logger = new FileLogger(); ❶

❶ If the user omits the logFilePath argument, '/tmp/app.log' is used.

When someone instantiates this FileLogger class, it isn't immediately clear which file the log messages are written to. The situation gets worse if the default value is buried deeper in the code, as in Listing 11.

Listing 11. The default value for logFilePath is hidden in the log() method.

final class FileLogger implements Logger
{
    private string? logFilePath;

    public function __construct(string? logFilePath = null)
    {
        this.logFilePath = logFilePath;
    }

    public function log(string message): void
    {
        // ...
        file_put_contents(
            this.logFilePath != null
                ? this.logFilePath
                : '/tmp/app.log',
            formattedMessage,
            FILE_APPEND
        );
    }
}

To figure out which file path a FileLogger uses, the user is forced to dive into the code of the FileLogger class. Also, the default path is now an implementation detail which could easily change without the user noticing. Instead, you should always let the user of the class provide any configuration value the object needs. If you do this for all classes, it's easy to find out how an object has been configured by looking at how it's being instantiated.

In summary: whether constructor arguments are used to inject dependencies or to provide configuration values, constructor arguments should always be required and should not have default values.

That's all for now. If you want to learn more about the book, check it out on our browser-based liveBook reader here.
https://medium.com/swlh/objects-services-and-dependencies-58106df2ac2b
['Manning Publications']
2020-07-03 08:27:39.633000+00:00
['Oop', 'Software Engineering', 'Programming', 'Object Oriented', 'Software Development']
Walls Pt. 3, Day 15
December 21, 2020

Time for a cameo! I'm Marissa (Missy), Pat's sister, here to give you the inside scoop on VAN LIFE. As Pat said, I couldn't let the rest of my family have all the fun. I mean, how much damage can I do in 4 days? ;)

(Mobile) home sweet home

My first assignment was to finish drilling those ventilation holes on the top of the bed/benches, which Mom then sanded and cleaned to prep them for a paint job.

Mom prepping the seat boards for Plinko

It sounds like the interior of the van will be mostly white and natural wood, with clean lines and simple design. I told my brother this was very "Scandinavian chic", which he promptly googled and then agreed. Meanwhile, Pat and Dad were figuring out how exactly the windows will be trimmed. Answer: scrap mahogany they cut to line the inside of the windows and sit flush against the white shiplap. I've observed that it takes about 2 hours of deliberation and planning for every 1 hour of actual work.

Busting out my Woodworking 101 Karate chop!

Behind the scenes, I can tell you that Dad is like the Producer of this project and Pat is the Director. One has the vision and provides direction, and the other thinks through the details of execution. Key supporting cast and set design is Mom, with special talents in sewing and attention to detail. But let's be honest, she's the real boss. Cabinets on both sides or just one? Commit to having a loo in the van or not? Will it be the toilet that composts with sawdust (very rustic and romantic) or the one that vacuum-seals your #2 like you're on a spaceship? Mom will decide.

We were able to do most of the wall on one side of the interior by the end of my first day, and it's looking tight. Lots of measuring and sawing of the boards, but it was a pretty simple install, screwing the shiplap to the wall studs. Watch out, Joanna Gaines! Tomorrow, we are finishing the other wall and cutting into the ceiling to mount the roof vent.
https://medium.com/@the-magno-mobile/walls-pt-3-day-15-2f1a3dbc392e
['The Magno-Mobile']
2020-12-24 04:08:06.915000+00:00
['DIY', 'Conversion Van', 'Vanlife', 'Campervan', 'Projects']
When it Comes to Crypto-Assets, Asking the Right Questions is More Important than Finding Answers
One of my mentors in artificial intelligence (AI) always says that with modern machine learning technologies you can find almost any answer, but the hard thing is to ask the right questions. That principle certainly applies to crypto-assets. As a new financial asset class, crypto-tokens are, more often than not, evaluated using traditional metrics based on price and volume, but we can do so much more. In a data-rich universe where blockchains and exchange data generate billions of data points, we can certainly find all sorts of fascinating patterns and factors that explain behaviors in crypto-assets. The hard thing is to know what to look for. How would you describe Ethereum? Based on the price? As a blockchain? Based on the decentralized applications (DApps) built on top of it? Understanding the behavior of crypto-assets starts by asking the right questions. A lot of our questions about crypto-assets aim to determine relationships with price movements (what else 😉), but the analysis could go far beyond that. Very often, non-obvious correlations provide the greatest insights.

A Lesson from Planes and War

During World War II, the Pentagon assembled a team of the country's most renowned mathematicians in order to develop statistical models that could assist the Allied troops during the war. The talent was astonishing. Frederick Mosteller, who would later found Harvard's statistics department, was there. So was Leonard Jimmie Savage, the pioneer of decision theory and great advocate of the field that came to be called Bayesian statistics. Norbert Wiener, the MIT mathematician and creator of cybernetics, and Milton Friedman, future Nobel prize winner in economics, were also part of the group. One of the group's first assignments consisted of estimating the level of extra protection that should be added to US planes in order to survive battles with the German air force.
Like good statisticians, the team collected data on the damage to planes returning from encounters with the Nazis. For each plane, the mathematicians computed the number of bullet holes across different parts of the plane (doors, wings, motor, etc.). The group then proceeded to make recommendations about which areas of the planes should have additional protection. Not surprisingly, the vast majority of the recommendations focused on the areas that had more bullet holes, assuming those were the areas targeted by the German planes. There was one exception in the group, a young statistician named Abraham Wald, who recommended focusing the extra protection on the areas that hadn't shown any damage in the inventoried planes. Why? Very simply, the young mathematician argued that the input data set (planes) only included planes that had survived the battles with the Germans. Although severe, the damage suffered by those planes was not catastrophic enough that they couldn't return to base. Therefore, he concluded that the planes that didn't return were likely to have suffered impacts in other areas. Very clever, huh? What Wald's story teaches us is that, no matter how sophisticated the mechanism for analyzing data, asking the wrong questions will get us nowhere. When it comes to an unknown universe such as crypto-assets, that premise holds truer than ever.

A Poor Man's Methodology for Asking the Right Questions About Crypto-Assets

We know the price and "semi-fake" volume of any crypto-asset, but what other factors do we need in order to understand the behavior of this new asset class? If we use price as the driving factor for most of our questions, then it is key to understand the types of relationships we can extrapolate between prices and other factors. At a high level, there are some characteristics of the relationships between two variables that are important.

· Correlation vs.
Causality: When we see a factor that fluctuates similarly to price, we tend to assume that it can be used as a predictor of price movements. While many factors might have obvious correlations, that seldom means there is a causal relationship between the two. Take a time in the market in which the prices of gold and Bitcoin are both trending upwards. While the correlation might be useful for making headlines on CNBC or Bloomberg, it might be far from explaining a causal relationship between the two. A very simple explanation could be that higher volatility in US equities is causing investors to move some of their positions to gold, while a deceleration in the Chinese economy is increasing investments in Bitcoin. If the causal factors change, then the "apparent" correlation will disappear.

· Linear vs. Non-Linear: When we think about relationships between price and other factors, we visualize them as linear correlations that "co-move" together. However, some of the most fascinating patterns in financial asset investments are based on polynomial, exponential, and many other non-linear representations. For instance, a movement in the price of Bitcoin could be attributed to the expiration of many futures contracts a few months down the road and increased optimism in the guidance of tech companies during earnings season.

· Uni-Factor vs. Multi-Factor: We are constantly tempted to find a one-to-one correlation between a given factor and price. However, many price movements can be explained by complex linear and non-linear combinations of different factors that are far from obvious to the human eye. For instance, a price fluctuation in Bitcoin could be a combination of an increase in new investors joining the network and the fact that the price is moving into areas closer to the entry positions of a large number of investors.
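The correlation-vs-causality point can be seen in a toy simulation. Everything below is made up for illustration: two independent random walks, labeled "gold" and "bitcoin" only for the sake of the example, will often show substantial measured correlation even though neither series influences the other:

```python
import random

random.seed(0)  # make the toy example reproducible

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two independent random walks: by construction there is no causal link.
a = b = 0.0
gold, bitcoin = [], []
for _ in range(100):
    a += random.gauss(0, 1)
    b += random.gauss(0, 1)
    gold.append(a)
    bitcoin.append(b)

print(round(pearson(gold, bitcoin), 2))  # often far from 0 despite independence
```

This is exactly the trap described above: a headline-worthy correlation between two trending series says nothing, by itself, about one driving the other.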
Now that we understand the types of relationships we can extrapolate between price and other factors, we can start thinking about the types of relevant questions we can ask in order to understand the behavior of a crypto-asset. In our analysis, we believe that most of the relevant questions about crypto-assets can be grouped into five main categories: Financial Financial questions attempt to describe the financial behavior of an asset. Some non-trivial questions in this area: · Are investors making money or losing money? · How up or down are investors in their respective positions? · Is the number of large transactions increasing or decreasing? · Is trading happening among new investors or historical investors? · ….. Network Crypto-assets operate in networks whose dynamics are incredibly relevant to price movements. Relationships in crypto-networks describe growth patterns as well as unexpected movements of crypto-assets. Some relevant questions in this area: · How many addresses are capitulating? · Are new addresses joining the network? · …. Ownership Understanding counterparties has been an elusive goal across all financial asset classes, but blockchains bring a unique dimension to this problem. For the first time in the history of finance, the behavior of individual investors is recorded in public ledgers. Some relevant questions related to ownership could be: · Is a crypto-asset over-concentrated? · Are large investors buying or selling? · Are minority investors buying or selling? · Are investors long-term holders or active traders? · …. Behavioral Complementing the previous point, crypto-assets offer a wide canvas for understanding the psychology of investors. While we can’t predict patterns for individual investors, there is enough information to extrapolate relevant trends at the token level. Some interesting questions in this area: · Are token holders overly confident? · Do they follow trends? · Are they averse to loss and unlikely to liquidate their positions? · ….. 
Macro-Economic Regardless of the non-obvious correlations, crypto-assets are part of the broader financial markets and are influenced by macro-economic factors like other asset classes. Understanding the relationship between crypto-assets and macro-economic factors can generate questions such as the following: · Is money moving from Chinese equities into Bitcoin? · Are there visible correlations between micro-cap stocks such as the Russell 2000 index and crypto-assets? · …. Understanding and predicting the behavior of crypto-assets is a fascinating and data-intensive exercise. As you start digging deeper into the behavior of crypto-assets you will confront a puzzling fact: asking the right questions is more important than getting the right answers.
https://medium.com/intotheblock/when-comes-to-crypto-assets-asking-the-right-questions-is-more-important-than-finding-answers-7797f04b0eb5
['Jesus Rodriguez']
2019-07-25 15:33:31.366000+00:00
['Machine Learning', 'Invector Labs', 'Ethereum', 'Cryptocurrency', 'Bitcoin']
Plutus Weekly Report — July 1st 2018
In situations where users compete for value, it is inevitable that eventually someone will attempt to seize any advantage they can, even resorting to actively malicious means to gain access. Developers know that every time a vulnerability is patched, a new one may be discovered elsewhere. Malicious actors may use social engineering, code injection, overflow exploits, race conditions, and more esoteric exploits to gain access and wreak havoc. This is why we have come to the conclusion that the best way forward is to offer rewards to anyone who can help us create a more robust software ecosystem. As such, we will be rewarding users who manage to find exploits in our software and report them in a private, timely, and discreet manner. Additionally, the structure of our upcoming bounty program will provide numerous incentives that we believe ethical hackers will find quite enticing. The Plutus Security Bounty Program will officially launch July 11th, and will include a comprehensive list of instructions and rewards.
https://medium.com/plutus/plutus-weekly-report-july-1st-2018-b48ab375d405
[]
2018-07-01 13:37:53.851000+00:00
['Bitcoin', 'Mobile Payments', 'Ethereum', 'Cryptocurrency', 'Report']
Skaffolder UX Case study — How to create a user experience on a platform you don’t know how to use yourself
1. Overview The problem Every time a developer starts a new project, he or she needs to write the scaffolding of the code from scratch, copying and pasting bits and pieces of code from other projects, before being able to concentrate on the actual “fun” part. This process is boring and time consuming, as well as a big waste of money for companies. What is Skaffolder? Skaffolder is a web app that helps developers build custom web and mobile applications faster, building them through an interface instead of writing lines and lines of code. Users and audience Freelance developers, software houses, big companies that need internal software and require a standard Roles and responsibilities Skaffolder was born as a start-up, and I was the only UX designer on the team. I worked closely with Luca, the CEO of Skaffolder and the developer who came up with the concept of Skaffolder in the first place (he developed it as the thesis of his master’s degree) 2. The process The Research In a way, it was a good thing that I was so far outside the target of this project: this way I could stay completely unbiased toward the research results and really focus on what users needed rather than what I thought they needed. I started by analyzing the other platforms that were doing a similar job and pointed out what worked and what didn’t. Then we interviewed developers to understand their attitudes, needs, and experiences, both through focus groups and one-on-one interviews. 
Data collected Protopersonas Challenges ・Developers are a tough crowd; they like to feel like they are in control of what they do, they trust their code, and they are “know-it-alls” ・There is no limit to how big a project can be, so the platform needs to stay functional even when there are 300 pages to navigate through, and developers need to be able to find what they need easily ・Software for developers looks boring. It usually looks like it was designed in the past century Goals ・Easy to use for all kinds of developers, from students to big companies’ CTOs ・Create a platform that developers can trust and that makes them feel like they are still in control ・Make it as tidy as possible so it doesn’t become a mess after working on it for a few hours ・Make it look fun, like it actually belongs in 2018 Userflow Prototypes We tested a lot of different prototypes, adjusting according to the feedback, before finally landing on the one we thought was the best solution Prototype 1: This is the first prototype we tested, which worked on small projects and received fairly positive feedback from the developers we tested it on, but once we released the beta version we started getting the complaint that, as projects grew bigger, it was getting very hard to navigate through the models. We had to work on the usability. Prototype 2: We asked developers a simple question: “Would you rather have a ‘UML-like’ view of your project, or a list of the items in your project always in sight?” 79% of those interviewed opted for the second answer, which led us to this new prototype 3. The Solution
https://medium.muz.li/skaffolder-ux-case-study-how-to-create-a-user-experince-on-a-platform-you-dont-know-how-to-use-4b78429bb00
['Gloria Nervi']
2020-03-11 08:45:42.504000+00:00
['Ux Case Study', 'Platform', 'UX Research', 'UX Design', 'Uxui Design']
Death Grips Are A Little Scary — But Their Music Is Unstoppable
Death Grips Are A Little Scary — But Their Music Is Unstoppable Death Grips are angry. No one knows why. The experimental hip-hop trio from Sacramento, California, is loud, abrasive, and in an odd way, pure. Death Grips is from the gut, as frontman MC Ride puts it. “Lyrically, Death Grips represent the glorification of the gut…the id..summoned, tapped, and channeled before being imprisoned and raped by the laws of reason…” If it sounds scary, it is. Nothing can prepare you for the first time you listen to the group. You will almost certainly hate it. This is why Death Grips fans have memefied the first listening experience; they call it getting noided. Abrasive Ear-Candy The rest of the group comprises Zach Hill on drums, who once said: “[He] wants to create a drone-like sound from his drums by playing as rapidly but simply as possible.” Hill plays like a gorilla who grew up listening to Keith Moon and Sonic Youth. Lastly, there’s Andy Morin, who creates horrifying beats from another planet. But for some reason or another, they are just as catchy as some of the best pop beats you’ll ever hear. Popular internet music critic Anthony Fantano calls their music ‘Abrasive Ear-Candy.’ This kind of sound shouldn’t work. It won’t work the first seven or eight times you listen to it. Then something happens. No one can explain it. Death Grips starts to sound normal. The beats start to sound dope. The lyrics from manic psychopath MC Ride make sense. Death Grips becomes your favorite band. This will almost certainly happen to you if you push past the first few uncomfortable experiences.
https://medium.com/@mccallisaiah/death-grips-are-a-little-scary-but-their-music-is-unstoppable-56007ea3754e
['Isaiah Mccall']
2020-12-20 14:57:23.438000+00:00
['Experimental', 'Innovation', 'Music', 'Culture', 'Hip Hop']
How to Prevent Cyber Attacks from Ruining Your Business
If you’re wondering how to prevent cyber attacks, you’re not alone. With 3.8 million records stolen every single day, a cyber attack occurs about every 39 seconds, and the Ponemon Institute cites the chances of experiencing a data breach as being as high as 1 in 4. When looking to take precautions against cybercrime — something even the World Economic Forum recommends for businesses — there are a few things you can do to ensure you know how to protect your company from cyber attacks, which we’ll cover below. What Is Cybersecurity Risk? Cyber risk is the fastest growing threat to businesses and organizations today. Ranked as a top-5 priority by 79% of global organizations, common cybersecurity risks include: Data breaches Phishing or social engineering attacks IoT-based attacks (using WiFi-enabled devices) Ransomware Malware DDoS attacks (distributed denial-of-service attacks) Internal employee attacks There are a number of ways your company can fall victim to a cyber attack. Someone may have access to more company information or databases than they should, an unknowing employee may accidentally download malware, or a teammate may abuse their internal access for personal profit. Any or all of these can occur in an organization, and the bigger the place of employment, the more cybersecurity risks present themselves. And with 2021 continuing to see an uptick in remote work, remote workers continue to be a target for cybercriminals. This is an unsettling statistic, given that 95% of cybersecurity breaches are already caused by internal human error. How Cybersecurity Risk Management Can Help Risk management is a concept that’s been around for as long as companies have had assets they wish to protect. Cybersecurity risk management works like other forms of insurance, applied to the cyber world. 
By identifying your assets and vulnerabilities and applying solutions to make sure your company is adequately protected and prepared, cyber risk management helps you take the necessary precautions against cyber crime. When looking for the right cyber risk management plan, it’s helpful to use a risk analysis equation to calculate what you’ll need: Breaking Down the Analysis Your consequence of attack is how much of an impact a cyber attack will have on your business. For instance, if your business maintains sensitive personal information such as your customers’ addresses and credit card info, there’s more at stake than just your business’ information (which is bad enough). The ramifications of a data breach stem far and wide, so it’s imperative you’re on guard. The likelihood of an attack helps you understand how much of a target your business may be. Roughly 43% of all cyber attacks are aimed at small businesses, and it’s precisely these smaller organizations that are underprepared and overconfident in their limited cybersecurity capabilities. While a risk analysis isn’t going to help you create the perfect plan to halt all future cyber attacks, it will help you define a more disciplined approach to setting up a security strategy, which will: Help you take the necessary precautions against falling victim Help you recover as soon as possible if a cyber attack were to occur Remember, any plan is better than no plan at all, and finding an effective cybersecurity risk management plan can save you a lot of time, money, and headaches. Insurance Ensures a Healthy Strategy When looking for general liability insurance to cover your business, we suggest something that protects sensitive data (like customer info). It’s also important to choose a plan that notifies customers about a breach in security and helps restore the affected customers immediately. We also recommend looking for a plan that offers cyber protection — not just a reactive plan. 
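The risk analysis equation referenced above appears to have been an image in the original. Its standard form is risk = likelihood of attack × consequence of attack, which can be sketched as follows; the 1-5 scoring scale is a hypothetical illustration, not from the article:

```python
# Standard risk analysis equation:
#     risk = likelihood of attack * consequence of attack
# The 1-5 scale for each factor below is an illustrative assumption.

def risk_score(likelihood: int, consequence: int) -> int:
    """Rank a threat by multiplying its likelihood (1-5) by its consequence (1-5)."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be between 1 and 5")
    return likelihood * consequence

# Example: a small business holding customers' credit card data might rate
# likelihood 4 (small businesses are frequent targets) and consequence 5,
# putting this threat near the top of its priority list.
print(risk_score(4, 5))  # prints 20
```

Scoring each threat this way turns the qualitative discussion of consequence and likelihood into a simple priority ranking for your security strategy.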
That way, you’re ensuring your business is as safe as possible from a virus or cyber attack from the get-go. How to Prevent Cyber Attacks Targeting Your Business While we certainly recognize the importance of something like general liability or cyber liability insurance, there are a few practical steps you can implement to keep your company safe from cyber attacks: Ensure software and firmware are up to date Make sure security software and firewalls are installed Use a full-service Internet security system Set up appropriate internal access for employees Encrypt data and back it up Hire ethical hackers to test your systems’ vulnerability Educate your employees on responsible workplace cyber protocol In the end, what matters most is that you’re taking the necessary precautions against cyber crime and working to preserve the integrity of your business, your employees, and your customers. Finding a quality, comprehensive cyber risk management service and implementing some basic security protocols will help you understand how to prevent cyber attacks. It will also keep your business from becoming another statistic. Finding the right insurance to help you prevent cyber attacks can be tricky, and the digital sphere is costly if you don’t know how to navigate it. For some help in the process, contact us; our experienced insurance agents would love to sit down with you and see how we can help.
https://medium.com/@robertsoninsurance/how-to-prevent-cyber-attacks-from-ruining-your-business-5a74dc2b2b7a
['Robertson Insurance', 'Risk Management']
2021-03-15 20:16:38.232000+00:00
['Cybercrime', 'Cybersecurity', 'Insurance', 'Risk Management', 'Robertson Insurance']
NULS and YVS Finance Form Partnership
The NULS team is delighted to announce a new partnership with YVS Finance, an innovative project offering a unique DeFi solution. We would like to offer a warm welcome to our new partner, and we look forward to a promising collaboration. What is YVS Finance? YVS Finance offers the first and only yield-farming, vaults, and staking deflationary token with no admin control. It is an innovative, decentralized finance project combining the best features for creating a unique, transparent, and secure yield-farming platform. What will YVS Finance achieve with NULS? Our vision is in line with the YVS ideals: we both strive to build and contribute to a truly open, transparent, and secure DeFi ecosystem. YVS will combine its offerings with the many advantages of the NULS ecosystem and use Nerve Network’s cross-chain DeFi infrastructure to interact with structurally different blockchains, enjoying fast and cheap transactions. In the first phase, YVS will introduce and use one of NULS’ DeFi solutions, POCM. This mechanism will allow NULS holders to stake NULS and earn YVS. On the other hand, YVS will offer yield-farming, staking, vaults, and many other interesting features planned for future release to our users. Learn more about YVS Website: yvs.finance Whitepaper: Whitepaper.pdf Twitter: twitter.com/YVSFinance Telegram: t.me/YVSFinance Blog: blog.yvs.finance Repository: github.com/yvs-finance/yvs-protocol ERC-20 token contract: 0xec681f28f4561c2a9534799aa38e0d36a83cf478
https://medium.com/@nuls/nuls-and-yvs-finance-form-partnership-30c92226ae78
[]
2020-12-15 22:49:33.220000+00:00
['Cryptocurrency', 'Yvs Finance', 'Partnerships', 'Defi', 'Nuls Blockchain']
Using Python to Scrape NFL Stats and Compare Quarterback Efficiencies
I have always been apprehensive about trying to scrape my own data, and the fact that websites like Kaggle aggregate such high-quality datasets has made learning this skill less of a need. However, the abundance of educational articles on data science on this platform has helped me make progress towards collecting my own datasets. A lot of the inspiration and methods for scraping data came from here. In this article, I will pull quarterback stats from the 2019–20 NFL season from Pro Football Reference, and use them to create radar charts to assess QB efficiency. Load Packages To open the webpage and scrape the data, we will use two modules, urllib.request to open the URL, and BeautifulSoup to parse through the HTML. # Import scraping modules from urllib.request import urlopen from bs4 import BeautifulSoup In addition to these packages, we will need some packages to manipulate data, numpy and pandas , and plot our data, matplotlib . # Import data manipulation modules import pandas as pd import numpy as np # Import data visualization modules import matplotlib as mpl import matplotlib.pyplot as plt Scrape Data The data we are going to import is the NFL passing data from the 2019 season, which can be found here. We open the site and pass it to BeautifulSoup with the following: # URL of page url = 'https://www.pro-football-reference.com/years/2019/passing.htm' # Open URL and pass to BeautifulSoup html = urlopen(url) stats_page = BeautifulSoup(html) Note that we can easily adapt all of this analysis to previous years by changing the 2019 in the URL to the year of your choosing. The two BeautifulSoup functions we will use to scrape the page are findAll() and getText() , which return values based on the HTML of the page we are scraping. I present simplified use cases below — for all possibilities you should refer to the documentation. 
findAll(name) Parameters name -- HTML tags to use to parse webpage Returns array of all matches to name tag getText() Returns text from HTML For these to be effective, we have to determine patterns in the page source. In our case, the data is nicely formatted in a table, so we can find all the table rows ( tr ) and columns ( td ) and extract the text directly from the cells. First, we need to collect the column headers so we can use them later in our DataFrame. To do this, we find the first tr element in the page and collect the text from all the table headers ( th ): # Collect table headers column_headers = stats_page.findAll('tr')[0] column_headers = [i.getText() for i in column_headers.findAll('th')] We index the first element because this is the row that contains the column headers. We can view our result: print(column_headers) >>> ['Rk', 'Player', 'Tm', 'Age', 'Pos', 'G', 'GS', 'QBrec', 'Cmp', 'Att', 'Cmp%', 'Yds', 'TD', 'TD%', 'Int', 'Int%', '1D', 'Lng', 'Y/A', 'AY/A', 'Y/C', 'Y/G', 'Rate', 'QBR', 'Sk', 'Yds', 'NY/A', 'ANY/A', 'Sk%', '4QC', 'GWD'] To collect the actual data, we will first collect all the table rows ( tr ) and store them in an array. Then, we iterate through each row and collect the text in each column ( td ) with getText() : # Collect table rows rows = stats_page.findAll('tr')[1:] # Get stats from each row qb_stats = [] for i in range(len(rows)): qb_stats.append([col.getText() for col in rows[i].findAll('td')]) We skip the first row because these are the column headers that we just collected previously. When we examine the first row of our qb_stats list: print(qb_stats[0]) >>> ['Jared Goff', 'LAR', '25', 'QB', '16', '16', '9-7-0', '394', '626', '62.9', '4638', '22', '3.5', '16', '2.6', '220', '66', '7.4', '7.0', '11.8', '289.9', '86.5', '', '22', '170', '6.90', '6.46', '3.4', '1', '2'] Now we can combine our headers and stats into a pandas DataFrame. 
If you haven’t used pandas before, it’s essentially an Excel spreadsheet that you can manipulate programmatically, so it’s great for handling larger datasets. We notice above that the Rk column does not have corresponding data in our qb_stats list — this is because those values were all table headers, so our findAll() function did not pull their data. Regardless — this data is superfluous for our analysis, so we will omit it when creating the DataFrame. # Create DataFrame from our scraped data data = pd.DataFrame(qb_stats, columns=column_headers[1:]) columns — the column titles for our DataFrame (we are omitting Rk ) Now, let’s take a look at the first five rows of our DataFrame: # Examine first five rows of data data.head() Success! Manipulating and Cleaning Data Now, we can manipulate the data in this DataFrame to get what we need to make our radar charts. First, let’s look at all the columns in the set: # View columns in data data.columns >>> Index(['Player', 'Tm', 'Age', 'Pos', 'G', 'GS', 'QBrec', 'Cmp', 'Att', 'Cmp%', 'Yds', 'TD', 'TD%', 'Int', 'Int%', '1D', 'Lng', 'Y/A', 'AY/A', 'Y/C', 'Y/G', 'Rate', 'QBR', 'Sk', 'Yds', 'NY/A', 'ANY/A', 'Sk%', '4QC', 'GWD'], dtype='object') We see one issue immediately — there are two columns titled Yds ; one is for passing yards, and the other is yards lost due to being sacked. We can easily remedy this by renaming the latter: # Rename sack yards column to `Yds_Sack` new_columns = data.columns.values new_columns[-6] = 'Yds_Sack' data.columns = new_columns Now, let’s view our columns again: # View columns in data data.columns >>> Index(['Player', 'Tm', 'Age', 'Pos', 'G', 'GS', 'QBrec', 'Cmp', 'Att', 'Cmp%', 'Yds', 'TD', 'TD%', 'Int', 'Int%', '1D', 'Lng', 'Y/A', 'AY/A', 'Y/C', 'Y/G', 'Rate', 'QBR', 'Sk', 'Yds_Sack', 'NY/A', 'ANY/A', 'Sk%', '4QC', 'GWD'], dtype='object') We have successfully renamed the column to Yds_Sack and no longer have a conflict in our column names! 
Next, let’s identify which statistical categories we are interested in for our visualization. We will choose: (1) Completion percentage — Cmp% (2) Passing yards — Yds (3) Passing touchdowns — TD (4) Interceptions — Int (5) Yards per attempt — Y/A (6) Passer rating — Rate # Select stat categories categories = ['Cmp%', 'Yds', 'TD', 'Int', 'Y/A', 'Rate'] Now let’s create a new DataFrame as a subset of our original data, only with the data from our chosen categories. Additionally, we will add the player name and team. # Create data subset for radar chart data_radar = data[['Player', 'Tm'] + categories] data_radar.head() Now that we have our subset of data, let’s check the data type, because we scraped the values as text from the URL: # Check data types data_radar.dtypes >>> Player object Tm object Cmp% object Yds object TD object Int object Y/A object Rate object dtype: object All of our numerical data have been stored as objects, so we can’t manipulate them. So before we proceed further, we must convert all this data to numerical values. To do so, we use a function called pandas.to_numeric . # Convert data to numerical values for i in categories: data_radar[i] = pd.to_numeric(data[i]) Now, let’s check our data types once again: # Check data types data_radar.dtypes >>> Player object Tm object Cmp% float64 Yds float64 TD float64 Int float64 Y/A float64 Rate float64 dtype: object We have one last piece of data cleaning to do. On the original website, they put ornamental characters next to players that had end-of-season achievements, such as a Pro Bowl (*) or All-Pro (+) selection. 
We will remove these using str.replace() : # Remove ornamental characters for achievements data_radar['Player'] = data_radar['Player'].str.replace('*', '') data_radar['Player'] = data_radar['Player'].str.replace('+', '') Let’s filter our data down to only the quarterbacks who threw for more than 1500 yards: # Filter by passing yards data_radar_filtered = data_radar[data_radar['Yds'] > 1500] Now, for our radar chart, we want to calculate each quarterback’s statistical rank by percentile, which is easily done in pandas with DataFrame.rank(pct=True) . The rank() function can take other arguments to rank based on other parameters, for which you can consult the online documentation. Additionally, we want to flip our interceptions rank, as we don’t want the QB with the most interceptions to have the highest percentile! # Create columns with percentile rank for i in categories: data_radar_filtered[i + '_Rank'] = data_radar_filtered[i].rank(pct=True) # We need to flip the rank for interceptions data_radar_filtered['Int_Rank'] = 1 - data_radar_filtered['Int_Rank'] Now we examine our new data: # Examine data data_radar_filtered.head() Great! We can now make our visualizations! Generating Radar Charts We will start by editing some general plot parameters. We are going to be generating a polar plot, so the x-ticks correspond to the angle around the circle — we are increasing the padding between the axis and tick labels: # General plot parameters mpl.rcParams['font.family'] = 'Avenir' mpl.rcParams['font.size'] = 16 mpl.rcParams['axes.linewidth'] = 0 mpl.rcParams['xtick.major.pad'] = 15 For colors in our chart, we will use the HEX codes of the NFL team colors, which were collected from this link. 
team_colors = {'ARI':'#97233f', 'ATL':'#a71930', 'BAL':'#241773', 'BUF':'#00338d', 'CAR':'#0085ca', 'CHI':'#0b162a', 'CIN':'#fb4f14', 'CLE':'#311d00', 'DAL':'#041e42', 'DEN':'#002244', 'DET':'#0076b6', 'GNB':'#203731', 'HOU':'#03202f', 'IND':'#002c5f', 'JAX':'#006778', 'KAN':'#e31837', 'LAC':'#002a5e', 'LAR':'#003594', 'MIA':'#008e97', 'MIN':'#4f2683', 'NWE':'#002244', 'NOR':'#d3bc8d', 'NYG':'#0b2265', 'NYJ':'#125740', 'OAK':'#000000', 'PHI':'#004c54', 'PIT':'#ffb612', 'SFO':'#aa0000', 'SEA':'#002244', 'TAM':'#d50a0a', 'TEN':'#0c2340', 'WAS':'#773141'} The angles at which we plot all our points will be dependent on the number of statistical categories we have. In our example, we have 6 categories, so we will plot our points every 2π/6 radians, or 60 degrees. We can calculate this as follows, using numpy.linspace() : # Calculate angles for radar chart offset = np.pi/6 angles = np.linspace(0, 2*np.pi, len(categories) + 1) + offset We are adding the offset term to adjust where the labels appear on the circle (instead of the first label at 0 radians, it now appears at π/6). When we plot the data, we need to actually duplicate the first category, so that the shape closes itself. If you notice when we calculated the angles, we have 7 data points, and the first and last angles correspond to the same point on the circle. So when we plot the data, we use the following (we will turn this into a function, so player_data represents a row of player-specific data that we will pass): # Plot data and fill with team color ax.plot(angles, np.append(player_data[-(len(angles)-1):], player_data[-(len(angles)-1)]), color=color, linewidth=2) ax.fill(angles, np.append(player_data[-(len(angles)-1):], player_data[-(len(angles)-1)]), color=color, alpha=0.2) The above code works because our categories are the last 6 columns of our DataFrame, so len(angles) — 1 corresponds to the first of these categories (since angles has an extra element). 
We then append the value of the first category onto the end of this array so that we can close the shape. Now we can set the labels for the category names (since we have one less category than angles, we omit the last element): # Set category labels ax.set_xticks(angles[:-1]) ax.set_xticklabels(categories) Finally, we will add the player name on top of the radar chart — we place the text object at (π/2, 1.7), in absolute plot coordinates so that it appears above the axis: # Add player name ax.text(np.pi/2, 1.7, player_data[0], ha='center', va='center', size=18, color=color) We can put this all together to make a helper function to generate our radar charts as follows: Let’s also make another helper function to return a numpy array of QB data when given a team input: # Function to get QB data def get_qb_data(data, team): return np.asarray(data[data['Tm'] == team])[0] Visualizing Data NFC West Since I’m a huge 49ers fan, let’s start by looking at the radar charts for QBs in the NFC West: # Create figure fig = plt.figure(figsize=(8, 8), facecolor='white') # Add subplots ax1 = fig.add_subplot(221, projection='polar', facecolor='#ededed') ax2 = fig.add_subplot(222, projection='polar', facecolor='#ededed') ax3 = fig.add_subplot(223, projection='polar', facecolor='#ededed') ax4 = fig.add_subplot(224, projection='polar', facecolor='#ededed') # Adjust space between subplots plt.subplots_adjust(hspace=0.8, wspace=0.5) # Get QB data sf_data = get_qb_data(data_radar_filtered, 'SFO') sea_data = get_qb_data(data_radar_filtered, 'SEA') ari_data = get_qb_data(data_radar_filtered, 'ARI') lar_data = get_qb_data(data_radar_filtered, 'LAR') # Plot QB data ax1 = create_radar_chart(ax1, angles, lar_data, team_colors['LAR']) ax2 = create_radar_chart(ax2, angles, ari_data, team_colors['ARI']) ax3 = create_radar_chart(ax3, angles, sea_data, team_colors['SEA']) ax4 = create_radar_chart(ax4, angles, sf_data, team_colors['SFO']) plt.show() The data seems to support what we see from the eye 
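The create_radar_chart helper that the text says it puts "all together" appears to have been an embedded gist that did not survive extraction. A minimal sketch assembling the snippets above, matching the signature used in the later calls (the float cast, the 0-1 radial limit, and the hidden radial ticks are my assumptions about what the original did), might look like:

```python
import numpy as np

# Same category list defined earlier in the article
categories = ['Cmp%', 'Yds', 'TD', 'Int', 'Y/A', 'Rate']

def create_radar_chart(ax, angles, player_data, color='blue'):
    """Plot one QB's percentile ranks on a polar axis.

    player_data is a DataFrame row as a numpy array: name and team first,
    with the six percentile-rank columns at the end.
    """
    # Percentile ranks are the last len(angles)-1 entries; cast to float
    # since np.asarray on a mixed row yields an object array
    values = np.asarray(player_data[-(len(angles) - 1):], dtype=float)
    # Duplicate the first value so the polygon closes on itself
    values = np.append(values, values[0])

    # Plot data and fill with team color
    ax.plot(angles, values, color=color, linewidth=2)
    ax.fill(angles, values, color=color, alpha=0.2)

    # Set category labels (one fewer category than angles)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(categories)

    # Keep the radial axis on the 0-1 percentile scale, without tick marks
    ax.set_ylim(0, 1)
    ax.set_yticks([])

    # Add player name above the chart
    ax.text(np.pi / 2, 1.7, player_data[0], ha='center', va='center',
            size=18, color=color)
    return ax
```

Returning the axis lets the later cells reassign `ax1 = create_radar_chart(ax1, ...)` exactly as the article does.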
test: Russell Wilson is on an elite level, reaching at least the 75th percentile in all statistical categories. This analysis also doesn’t take into account what he brings with his scrambling ability. Jimmy Garoppolo is generally a solid QB, but throws a few boneheaded interceptions from time to time. Jared Goff gets a lot of yards, but it does not seem to translate to a lot of production (i.e. “empty yards”). Kyler Murray is hard to judge because he was a rookie, but his efficiency looks to be the lowest of the four. Like Wilson, his mobility is not accounted for in this plot. MVP Race The MVP race (at QB) came down to Russell Wilson and Lamar Jackson (who eventually won): # MVP Race # Create figure fig = plt.figure(figsize=(8, 4), facecolor='white') # Add subplots ax1 = fig.add_subplot(121, projection='polar', facecolor='#ededed') ax2 = fig.add_subplot(122, projection='polar', facecolor='#ededed') # Adjust space between subplots plt.subplots_adjust(hspace=0.8, wspace=0.5) # Get QB data bal_data = get_qb_data(data_radar_filtered, 'BAL') sea_data = get_qb_data(data_radar_filtered, 'SEA') # Plot QB data ax1 = create_radar_chart(ax1, angles, sea_data, team_colors['SEA']) ax2 = create_radar_chart(ax2, angles, bal_data, team_colors['BAL']) plt.show() From a pure passer standpoint, Russell Wilson has more balanced stats, but Lamar Jackson clearly had the more “bang-for-buck” passing yards, as shown by his number of passing TDs and quarterback rating. Additionally, Lamar Jackson set the QB rushing record for a season, which is not captured in this chart. Teams who drafted 1st round QBs Four quarterbacks were taken in the 1st round of the 2020 NFL Draft: Joe Burrow (CIN), Tua Tagovailoa (MIA), Justin Herbert (LAC), and (surprisingly) Jordan Love (GB). Let’s visualize the stats of the quarterbacks that these draft picks would replace. 
# 1st Round Draft Picks # Create figure fig = plt.figure(figsize=(8, 8), facecolor='white') # Add subplots ax1 = fig.add_subplot(221, projection='polar', facecolor='#ededed') ax2 = fig.add_subplot(222, projection='polar', facecolor='#ededed') ax3 = fig.add_subplot(223, projection='polar', facecolor='#ededed') ax4 = fig.add_subplot(224, projection='polar', facecolor='#ededed') # Adjust space between subplots plt.subplots_adjust(hspace=0.8, wspace=0.5) # Get QB data cin_data = get_qb_data(data_radar_filtered, 'CIN') mia_data = get_qb_data(data_radar_filtered, 'MIA') lac_data = get_qb_data(data_radar_filtered, 'LAC') gnb_data = get_qb_data(data_radar_filtered, 'GNB') # Plot QB data ax1 = create_radar_chart(ax1, angles, cin_data, team_colors['CIN']) ax2 = create_radar_chart(ax2, angles, mia_data, team_colors['MIA']) ax3 = create_radar_chart(ax3, angles, lac_data, team_colors['LAC']) ax4 = create_radar_chart(ax4, angles, gnb_data, team_colors['GNB']) plt.show() Andy Dalton had a terrible season by all metrics — it clearly appears that it was time to move on. Ryan Fitzpatrick was below average in all six statistical metrics and had a fairly mediocre season. Considering Josh Rosen was benched for Fitzpatrick, Miami needed to invest in Tua for the future. Philip Rivers actually performed average to above average in most categories, but clearly had some bad decision-making, judging by the number of interceptions he threw. Aaron Rodgers had what appears to be an above average season, but still below the lofty expectations we have for him. The Jordan Love pick was a shock for everyone. Other Divisions NFC North NFC East NFC South AFC West AFC North AFC East AFC South Final Remarks As you can see, the possibilities are endless — and this framework can be easily adapted for other stats such as rushing and receiving metrics. The Jupyter notebook used for this article can be found at this Github repository. Thank you for reading! 
I appreciate any feedback, and you can find me on Twitter and connect with me on LinkedIn for more updates and articles.
https://towardsdatascience.com/scraping-nfl-stats-to-compare-quarterback-efficiencies-4989642e02fe
['Naveen Venkatesan']
2020-05-19 00:53:50.543000+00:00
['Python', 'Data Science', 'Statistics', 'Data Visualization', 'NFL']
Right-Wing Media Warns of ‘Left-Wing Mob’ As Neo-Fascist Proud Boys Rampages Across the Nation
The Proud Boys after their New York rampage (Twitter)

FOX News is the worst TV station in media history. It’s actually anti-news, because it works in reverse. News is supposed to inform people, but FOX keeps people ignorant. FOX, also known as state TV, has been pushing the Republicans’ latest lie about angry left-wing mobs. However, like most of the “news” FOX puts out, this is the complete opposite of the truth.

This weekend, the Proud Boys, a violent white supremacist gang, rampaged through the streets of New York. They also attacked random strangers and yelled homophobic slurs. Although there were police on hand, they initially failed to arrest anyone. (The police later made some arrests.) The Proud Boys also joined with another neo-fascist group, Patriot Prayer, and brawled in the streets of Portland, Ore. According to local media reports, this fight could have gotten much worse. It was later discovered that Patriot Prayer had stashed weapons on rooftops and had set up sniper positions.

The Proud Boys are a little-known group to most Americans. But they’ve been active in far-right circles for a while. The group was created by Gavin McInnes, one of the founders of VICE magazine, which has grown into a major left-wing media outlet. (The Southern Poverty Law Center classifies the Proud Boys as a hate group.) The group describes itself as “Western chauvinists.” They bill themselves as men who won’t apologize for creating Western civilization. They also advocate for a return to 1950s values. And strangely enough, in spite of the fact that some of their members spout white supremacist rhetoric, they have non-white members.

Although McInnes got his start with VICE, he gradually started moving to the right. It started with so-called “ironic racism,” and eventually grew into openly anti-Semitic comments. As VICE started to blow up, they decided they had to cut McInnes loose. But McInnes didn’t go away.
He made regular guest appearances on FOX News and white supremacist media. Now he’s found a home on CRTV, a neo-fascist propaganda outlet that also provides a platform for Eric Bolling, who made racist comments about President Barack Obama.

Apart from being openly racist and homophobic (he once complained about having non-whites as neighbors in New York), McInnes is also violent. Media Matters recently posted a video featuring some of his bizarre antics, including advocating street brawls and a Proud Boys jumping-in ceremony.

The Proud Boys have also been involved in several other disturbing incidents. They’ve partnered with skinhead groups, former Proud Boy Jason Kessler helped organize the Charlottesville, Va. neo-Nazi march, and a group member threatened comedian Vic Berger at his home. According to an internal memo leaked to Berger, the group instructed members on how to go after their critics. “Let’s get the social media profiles, phone numbers and addresses for their bosses, mothers, fathers, brothers, sisters, boyfriends, friends and get to work,” said the memo.

You would think that with McInnes’ reputation for violence, his organization would have been shunned by the GOP. But he was invited to New York to give a speech at the Metropolitan Republican Club. (He also received a police escort to the event.) McInnes also said the Proud Boys had wide support from local police. According to the Daily Beast, the Proud Boys have been pictured with Reps. Devin Nunes and Mario Diaz-Balart. They’ve also provided security for Trump advisor Roger Stone.

“I think some Republicans appreciate the Proud Boys because they understand what we actually stand for: love of country, small government, freedom, and fun,” said Jason Van Dyke, a Proud Boy, according to the Daily Beast. Van Dyke was kicked out of his college for a gun offense. A campus security officer later found a book on starting a race war in his dorm.
Not surprisingly, these kinds of views are still welcome on FOX News. Laura Ingraham, who has already been exposed as a white supremacist, recently hosted Joey Gibson, leader of Patriot Prayer, in a softball interview. “No, we never have any intention to get violent. For us, it’s about challenging the mayor, challenging these protest groups, and just being able to march,” said Gibson.

After the Saturday night brawl, some Proud Boys have been arrested and prosecutors are looking at charges. But why has this taken so long? I also hope that federal authorities are looking at charging the Proud Boys, which is effectively a domestic terrorist group. What’s it going to take, an actual murder?
https://eaafolabi.medium.com/right-wing-media-warns-of-left-wing-mob-as-neo-fascist-proud-boys-rampages-across-the-nation-c3d1a8f4050
['Manny Otiko']
2018-10-16 17:10:02.523000+00:00
['Alt Right', 'Fascism', 'Racism', 'Republican Party', 'Nationalism']
Advocacy and Spirituality are at the Core of Désirée Sprauve’s Doula Work
Advocacy and Spirituality are at the Core of Désirée Sprauve’s Doula Work

Catherine Morrison, Jun 17

Désirée Sprauve in Central Park, NYC. (Photo by Catherine Morrison)

Désirée Sprauve has always been of the nurturing kind, always the one to cheer people on. “I have always been the mother,” Sprauve said. “Mothering and just loving on people without the desire of wanting something back.” However, it’s only once Sprauve turned 27 that she realized her nurturing spirit could turn into a profession.

After earning her degree from Howard University, Sprauve worked at an aerospace company doing global supply chain management. But she didn’t feel fulfilled by the work and realized it wasn’t her calling, and she says that her subconscious worked to encourage her to take on a new challenge as a doula. Doulas are non-medical professionals who support individuals through childbirth, miscarriage, abortion, stillbirth, or death.

For almost a year, the idea of becoming a doula came to Sprauve through her dreams at night. She believes the idea was placed in her spirit because she wasn’t looking for a career change and didn’t know much about being a doula. But the idea continued to appear in her dreams, in her daydreams, and even on social media.

While the vast majority (98.4%) of births in the United States occur in hospitals, 0.99% occur at home and 0.52% occur at birth centers, facilities staffed by midwives, obstetricians, doulas, or birthing coaches. In recent years, doulas have become increasingly popular because of the emotional and physical support they offer individuals at all stages of their pregnancy, according to 2019 research.

Sprauve said it took her so long to seriously consider a shift to becoming a doula because she already had a great job, she had never been a mother, and she wasn’t looking to pursue a new career. But she couldn’t deny how she felt every time the idea popped into her head.
Sprauve explains that, after waking up from a dream during which she was a doula, she remembers feeling happy, realizing she had, in some sense, witnessed a very spiritual experience. In 2018, after a year of not listening to the thoughts, Sprauve decided it was time to learn more about becoming a doula and purchase in-person training, readings, and essays. A spiritual person, Sprauve explained that all dreams have some sort of tie to the real world, a mentality that eventually led her to listen to her subconscious and try out the doula career. Sprauve was raised in Canarsie, Brooklyn, which she describes as a “mini Caribbean island,” because of its abundance of Caribbean-owned businesses, restaurants, music, culture, and dense amount of residents of Caribbean descent. Sprauve says the neighborhood suits her family perfectly, as her mom’s side of the family is from Saint Thomas and her father is Jamaican. Sprauve is her mother’s only child and she has two brothers from her father. Growing up, she didn’t always see her nurturing side as a benefit as she didn’t feel that people would reciprocate her feelings and support her. “It definitely felt like a burden,” Sprauve said. “Like, ‘ugh, why am I so good to people? People aren’t like this to me.’ And so growing up, I was like, ‘this is terrible.’ And then after a while, it’s like, ‘Oh, it’s actually my power.’ My power is to serve people. And so that mind shift really had to occur.” Spirituality is an essential component of Sprauve’s everyday life and her work, she said. She believes, and tries to teach her clients, that holistic wellness, an approach to health that considers a person’s body, mind, and spirit, is the foundation needed to have a successful conception, pregnancy, birth, and postpartum period. Though Sprauve has learned a lot about spirituality from her experiences as an adult, she was also raised in a household and by a family where spirituality was always a presence. 
“I grew up in that space,” Sprauve said. “There were always rumblings [about spirituality] or herbs [like ginger, echinacea, hibiscus, ginkgo biloba] were around. And my grandma was praying or fasting or reading these books that none of my other friends knew anything about… I never really paid attention to it or knew it was different or really anything. It wasn’t until college that I was like, ‘that stuff was weird.’ And then after college, I started coming back to who I originally was.”

As a doula, Sprauve offers various services including pregnancy and postpartum support, meditation, yoga, and holistic wellness. According to a 2014 report by Choices in Childbirth, a nonprofit organization, the average cost of doula services in New York City was $1,200. However, the report found that prices could range from $150 to $2,800, depending on the doula’s experience.

During the pandemic, Sprauve has provided sessions remotely and has been able to accompany those giving birth to hospitals, a process she said has been difficult during the pandemic due to testing requirements. However, Sprauve explains that the shift to online work has been mostly easy, as she says she’s able to create the same energy with clients as she would in person. “How I show up within the space is a part of my spirituality that I’m inviting my clients into,” Sprauve said. “It’s very important to set the tone, set the mood, shift the energy if necessary.”

In her free time, Sprauve loves to do yoga, make homemade skincare products, spend time with her friends, and watch stand-up comedy, particularly her favorite comedian, Dave Chappelle. Sprauve likes to study when comedians deliver their punchlines and how they engage the audience, a combination she says leads to amazing storytelling.
Sprauve says that she got her personality from her mom, Daisy Lee, with whom she now makes YouTube videos about “motherhood, friendship, love, and laughter.” Daisy Lee worked as a nurse, which Sprauve believes is a major reason she inherited so many nurturing characteristics. “I would definitely say just her personality is why I’m a successful doula because I naturally know how to care for people,” Sprauve said.

She put this into practice when, within only a year of starting her career, Sprauve experienced something that many doulas never do: a live birth with little support. Aronda Sparks-Bowman was Sprauve’s company’s first client. When Sparks-Bowman first met Sprauve at a birthday brunch for her sorority sister, she knew she would be a great professional match. “My connection with Desiree was just very natural… So I definitely just felt like we just had a higher connection and that she would be somebody who I would want in my corner during the birthing process,” Sparks-Bowman said.

The day before the birth, Sparks-Bowman was having what she didn’t know at the time were contractions. Thinking she just had an upset stomach from some tacos she ate, she went on with her day. She went to church, went grocery shopping, visited a friend, and went to bed, though the contractions increased in intensity. Because it was around 1 a.m., Sparks-Bowman worried about being an inconvenience, but texted Sprauve, thinking that if Sprauve replied, it would be a sign that the birth was near. Sprauve rushed over to Sparks-Bowman’s home in the Bronx immediately, and soon after, the contractions grew stronger and more painful.

A few hours later, as Sparks-Bowman prepared to give birth, she called her midwife, who had planned to help deliver the baby. But the midwife hit standstill traffic on her way from Brooklyn, so Sprauve was left to help with the delivery, something doulas usually don’t do, as they are not medically trained to assist with birth like midwives.
“I knew she was about to give birth, and the midwife was still in traffic,” Sprauve said. “So I got the water in the tub, I went into the tub with her, and within one push [her son] was born into my arms. And that’s an experience most doulas don’t have.” The experience was a bonding moment for the two women. “Right after [the baby was born] I was like, ‘Wow, like you just delivered a baby.’ and she was like, ‘You just gave birth to one!’ and we just laughed,” Sparks-Bowman said. Since the birth, Sparks-Bowman and Sprauve have become friends, celebrating her son’s birthday and texting to stay in touch during the pandemic. It was Sprauve’s first birth experience since starting her company, and it didn’t go as planned, but it reaffirmed her passion for the career. After experiencing this birth, one of the things that stood out to Sprauve the most was how sacred the birthing space is because so few people have the opportunity to be in the physical and emotional space of someone giving birth. Doulas all have different tolerances and beliefs, which determine what kinds of birthing practices they support, and which services they provide. While some doulas might only support home births without any forms of medication like epidurals to be used during birth, some doulas are more accepting of this. Sprauve supports all types of birthing practices, excluding unassisted births, meaning births where the client is not accompanied by any medical provider, such as a doctor or a midwife, as it could be a liability. “If they come in and go straight for a C-section, that’s their decision and I’ll support them either way. They are absolutely still getting all the education on the pros and cons of a Cesarean or an epidural… They are 100% still going to get that,” Sprauve said. Sprauve maintains this mentality because she identifies advocacy as a major component of doulaism. 
According to DONA International, a major doula certifying organization, doulas play a key role in advocating for their clients’ wishes to medical professionals with regard to their birthing plan as well as their prenatal and postpartum care. Sprauve notes that she has had to advocate for clients who have been mistreated or ignored by hospital staff.

While Sprauve argues that doctors can work to improve their practices to make them more holistic, the issue of obstetric violence — when a person in labor or birth experiences mistreatment — is a systemic issue that is ingrained in the western medical system. Sprauve notes that, as a doula, it’s crucial to recognize obstetric violence, a combination of institutional violence and violence against women during the pregnancy, childbirth, and postpartum stages. Obstetric violence can come in the form of physical, sexual, and verbal abuse and can include coercion or assault.

This violence has a particularly long history among people of color in the United States, particularly the Black community. Before the mid-1900s, almost all births occurred outside of hospitals, with Granny Midwives playing a major role in supporting individuals in childbirth. Sprauve explains that Granny Midwives were women who arrived in America on slave ships, grew up in the country, and had to help birth and raise white people’s babies. “Granny Midwives were women who literally birthed America,” Sprauve said.

During the 20th century, births slowly moved to hospitals so that those who could afford it could have medicine during birth, a process described as “Twilight sleep,” the predecessor to today’s epidurals. This shift was also made due to racial bias against Granny Midwives. “Over time, it was decided that uncleanliness was occurring because of the Black Granny Midwives,” Sprauve said. “It was also said that babies were dying because of the Black Granny Midwives.
And none of these things were actual.” Sprauve says that, because of these biases, people stopped seeking out Black Granny Midwives and, without clients, they quickly disappeared. The history of these midwives is intrinsically linked to the racial injustice that continues to exist in medicine today, she says, leading to discrimination and higher mortality rates of Black women while giving birth. “We have perception, and people should acknowledge that we perceive people differently, and what is the consequence of that?” Sprauve asked. “That is something that needs to occur to counter this horrendous system that is really not made for Black and brown bodies.” Sprauve hopes that, through her work and on her Instagram, she can help educate people about systemic issues to combat injustices. “I am teaching people so that they can learn how to speak up for themselves and I have a lot of doctor friends, so I am planting seeds in their minds of how to support our people better,” Sprauve said. “A lot of people can speak, but do they have a voice?”
https://medium.com/@cathmorrison/advocacy-and-spirituality-are-at-the-core-of-d%C3%A9sir%C3%A9e-sprauves-doula-work-e8d3a821f60
['Catherine Morrison']
2021-06-17 15:18:10.833000+00:00
['Spirituality', 'New York City', 'Advocacy', 'Doula', 'Health']
3 Steps To Finding Your New Job
Gone are the days when we saw professionals joining an organization and retiring from the same organization 25 or 30 years later. Today, you have aspirations to get a wider variety of experience that your current organization may not be able to give at the time you want it. So you choose to look for another job. But wait: last I checked on LinkedIn, there were at least 200 applicants for the Director role that you want. And if you are in a middle management role, the competition is even higher.

You know, many years ago, I lost my job because the startup I was working for shut shop. Every role I applied for had around 2,000 applicants. Are you serious? I thought. “I need a different strategy.” While I did not stop applying for jobs online, I started reaching out to my networks and landed three interviews within a week. This was the time when LinkedIn did not have recruitment solutions. And I landed my job as an Executive Recruiter (which I loved). Click here to get the three steps and get interviewing.
https://medium.com/@preeti-khorana/3-steps-to-finding-your-new-job-1439e0c305af
['Preeti Khorana']
2020-12-17 02:15:30.535000+00:00
['Resume Writing', 'Job Hunting', 'Job Search', 'Career Advice', 'Careers']
For Jeremiah
For Jeremiah The First Jeremiah smiling, why? Auntie had some of his favorites shipped to him for his 7th birthday. My phone dings a Tune divvied out to my Brother who has been exceptional At fathering his son. I watch the images load One-by-one And the smiling face that appears Before me is my oldest nephew Happy about some little things Received for his seventh year on Earth. I am engrossed in the blips Of life — caravaning their way Into my Sunday morning. I am not quick enough with my hand And the tears begin to fall. Distance is a powerful thing that Keeps us in different States while he Rises above the many woes Of childhood. I look at him and see my Brother smiling back at me When he was his age. I am thinking — *Where has the time gotten off to?* *You just got here little one.* But it is I who is stuck in the past, Afraid to move from the day His birth was announced. I am surrounded by boys, Little men — cemented foundations That shake when storms come Because life is a tricky thing And magic isn’t something they Were taught in school. They are not spinning webs Of reflections shared with wanderers of Forgotten faith and mercenaries Welding their shielded souls. No one is rescuing them from tattered halls Of misunderstandings and silencers catapulting Bullets of fire into their chests. Protection — My brother offers it to him and I find myself overwhelmed with joy Knowing that he is doing, Has done What he needs to for A son. He is doing what I pray Others will do For their sons who are Poisoned before purity Can find its space for settling. Emotion strikes me and the only Thing I can muster up to type Among all other things is: “I am so proud of you. You are a great father, Joshua.” My nephew is a walking reminder That life is cyclical. A turn of phases that no Time machine can stop. I can sulk in the potential Demands of a Nation too focused on Drowning his beginnings, but I choose to find solace instead. After all, hope is in his smile.
https://medium.com/a-cornered-gurl/for-jeremiah-26bac6bf6703
['Tre L. Loadholt']
2017-11-19 17:42:30.940000+00:00
['Family', 'Poetry', 'Love', 'Children', 'A Cornered Gurl']
Detecting Small Objects from Images/Videos using AI
Figure 1. Detecting “small objects”: a ship in a satellite image

Object detection is a technique in computer vision and image processing that deals with detecting instances of semantic objects of a certain class (such as humans, buildings, or cars) in digital images and videos. As one of the most fundamental and challenging problems in computer vision, object detection has received great attention in recent years. Well-researched domains of object detection include face detection and pedestrian detection. Object detection has applications in many areas of computer vision, including image retrieval and video surveillance. It is also used in tracking objects, for example tracking a ball during a football match, tracking the movement of a cricket bat, or tracking a person in a video.

While generic object detectors perform well on medium and large objects, they perform poorly on the overall task of recognizing small objects. A few examples of small objects are ships seen in satellite images (as shown in Fig. 1) or traffic signs seen in far-away drone imagery. Small object detection is a challenging task in computer vision because of the limited resolution and information such objects provide. In this article we will explore Feature Pyramid Networks for small object detection and Super Resolution GANs for data augmentation and performance improvements.

Feature Pyramid Networks [1]

Figure 2. Feature Pyramid Network

Feature pyramids [2] are a basic component of recognition systems for detecting objects at different scales, but recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. Instead, a top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, shown in Fig. 2 and called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications.
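The core of the FPN top-down pathway is simple to sketch: starting from the coarsest feature map, repeatedly upsample by 2x and add a 1x1-convolved (“lateral”) projection of the next finer bottom-up map. The NumPy sketch below is a minimal illustration of that merge step only, not the full FPN — it omits the 3x3 smoothing convolutions and uses random weights in place of learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def lateral_1x1(x, w):
    """1x1 convolution: project a (C_in, H, W) map to (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', w, x)

def upsample2x(x):
    """Nearest-neighbor 2x spatial upsampling of a (C, H, W) map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fpn_merge(bottom_up, w_lateral):
    """Top-down pathway: start from the coarsest map, then repeatedly
    upsample by 2x and add the lateral projection of the next finer map."""
    p = lateral_1x1(bottom_up[-1], w_lateral[-1])
    pyramid = [p]
    for feat, w in zip(reversed(bottom_up[:-1]), reversed(w_lateral[:-1])):
        p = upsample2x(p) + lateral_1x1(feat, w)
        pyramid.append(p)
    return pyramid[::-1]  # finest level first

# Toy bottom-up pyramid: 3 levels, channels double as resolution halves
feats = [rng.standard_normal((c, s, s)) for c, s in [(8, 32), (16, 16), (32, 8)]]
ws = [rng.standard_normal((4, c)) for c in (8, 16, 32)]  # project to 4 channels
p_maps = fpn_merge(feats, ws)
```

Because every level ends up with the same channel count, a single detection head can be run on each output map, which is what lets the FPN pick up small objects on the high-resolution levels.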
When implemented on the Airbus ship dataset [3], a collection of satellite images of ships in the ocean, a recall of 0.954 and an mAP of 0.911 were achieved. Sample results are shown in Fig. 3 and Fig. 4.

Figure 3. Small objects (ships) detected by FPN

Figure 4. Small objects (ships) detected by FPN

Super Resolution [4]

Super resolution (SR) is the process of recovering a high-resolution (HR) image from a given low-resolution (LR) image. An image may have a lower resolution due to a smaller spatial resolution (i.e. size) or as a result of degradation (such as blurring). SR has received substantial attention from the computer vision research community and has a wide range of applications.

As one of the main issues with small object detection is the lack of picture clarity and resolution, it was thought that performing super resolution on the images might come in handy. For this, SRGAN [5] was used. During training, a high-resolution (HR) image is downsampled to a low-resolution (LR) image. A GAN generator upsamples LR images to super-resolution (SR) images. A discriminator is used to distinguish the HR images, and the GAN loss is backpropagated to train both the discriminator and the generator, as shown in Fig. 5. SRGAN uses a perceptual loss that measures the MSE of features extracted by a VGG-19 network: for a specific layer within VGG-19, we want the features of the SR and HR images to match (minimum MSE between features).

Figure 5. Basic SRGAN architecture

However, with the Airbus dataset, using super resolution showed no improvement in performance, most likely because image quality was not the issue for this dataset. The comparison table is shown in Fig. 6.

Figure 6. Comparison Table

Small object detection is a challenging problem in computer vision. Showcased here is one of the many ways that we can continue working on it.
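To make the perceptual-loss idea concrete: instead of comparing raw pixels, the super-resolved and ground-truth images are passed through a fixed feature extractor (a VGG-19 layer in SRGAN) and the MSE is taken in feature space. In the minimal sketch below, the extractor phi is a toy stand-in, not the real VGG-19:

```python
import numpy as np

def perceptual_loss(phi, sr_img, hr_img):
    """SRGAN-style content loss: MSE between feature maps of the
    super-resolved (SR) and ground-truth (HR) images, computed in
    feature space rather than pixel space."""
    f_sr, f_hr = phi(sr_img), phi(hr_img)
    return float(np.mean((f_sr - f_hr) ** 2))

def phi(img):
    """Toy 'feature extractor' standing in for a VGG-19 layer
    (hypothetical): per-pixel mean and std across channels."""
    return np.stack([img.mean(axis=0), img.std(axis=0)])

rng = np.random.default_rng(1)
hr = rng.random((3, 16, 16))                     # ground-truth HR image
sr = hr + 0.1 * rng.standard_normal(hr.shape)    # imperfect reconstruction

loss_same = perceptual_loss(phi, hr, hr)  # identical images -> zero loss
loss_diff = perceptual_loss(phi, sr, hr)  # degraded reconstruction -> positive
```

Training the generator against this loss (plus the adversarial term) pushes it toward reconstructions that are perceptually similar, rather than merely pixel-wise close, to the ground truth.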
Feature Pyramid Networks show significant improvement over more popular object detection methods such as YOLOv3 and thus show promise in the domain of small object detection. Small object detection has been widely applied in defense, the military, transportation, industry, and more. It is used extensively in self-driving cars to recognize street signs and pedestrians from far away and avoid accidents. Another major application is in manufacturing, where detecting a small defect early during assembly can save the cost of the repairs or replacement that would be needed if the defect were found at a later stage of the assembly process.

SRGAN may not have helped improve performance on the Airbus dataset, but it should not be dismissed when working on detecting small objects in lower-quality images. We, at AlgoAnalytics, have used innovative techniques for small object detection in satellite imaging using Feature Pyramid Networks and created a demo for the same.

Demos and Contact Information

To check our demos, please visit: https://onestop.ai

For further information, please contact: [email protected]

References

[1] https://github.com/DetectionTeamUCAS/FPN_Tensorflow
[2] https://arxiv.org/pdf/1612.03144.pdf
[3] https://www.kaggle.com/c/airbus-ship-detection/data
[4] https://github.com/krasserm/super-resolution
[5] https://arxiv.org/pdf/1609.04802.pdf
https://medium.com/algoanalytics/small-object-detection-828cf373461
[]
2020-12-08 14:25:39.325000+00:00
['Artificial Intelligence', 'Feature Pyramid Network', 'Computer Vision', 'Object Detection', 'Super Resolution']
Exceptional Blockchain Projects
Exceptional Blockchain Projects

For those of you just starting to pay attention to the different “altcoin” projects, I am going to attempt to summarize what I think are the more technically interesting ones out there. While fundamentals do matter, I am in no way insinuating that any of these projects are “good investments” or not. So if you are a crypto trader looking for tips on the next 100x altcoin in the next two months, you are probably reading the wrong post. However, if you are interested in the cliff notes of what I think is technically exciting in this space, or looking for a post that will kick off further research into a long-term investment with strong fundamentals, this post might just be for you.

Before I begin, I will note that I am skipping the trending and technically interesting projects that are already super well known, like Ethereum, Cardano, and Polkadot, to keep my list at a manageable size. So without further ado, here are 12 of my favorites:

Cosmos Network

Cosmos is simply amazing for building your own proof-of-stake blockchain/dapp. To get started with a full blockchain template with tokens, accounts, and wallets ready for you to start developing with, all you need to do is execute two commands after installing Starport. Voila! Everything is ready to go, and you can check out the localhost link it spits out and see what’s already there.

Pretty amazing, right? It’s no wonder some of the biggest projects out there are built with Cosmos. It scaffolds everything that would otherwise be reinventing the wheel and lets you focus on the real innovations of your project. You will also notice that transactions are super fast, and IBC allows for robust cross-chain compatibility, which is extremely important for building successful dapps. For those who require name-dropping to be convinced, Binance Chain and Binance Smart Chain were built with Cosmos.
You can also see the rest of the projects built on Cosmos here. There’s so much more to Cosmos than would fit in this post, so I suggest you start with their website and documentation and see what the hype is all about yourself.

Akash Network

Speaking of building on Cosmos, Akash Network, a censorship-resistant, permissionless, and self-sovereign open-source cloud, was built on Cosmos too! If you are using Kubernetes or containers at work or for fun, you will love this project. It essentially allows you to host your containers on some of the most robust data centers and compute resources around the world for a fraction of the cost of existing cloud services like AWS, Google, and Azure. You can learn more about this project from one of my previous posts here or simply visit their website.

Skynet

Skynet, AKA SiaSky, is a really cool solution for building dapps, but it is perhaps best known for its storage capabilities and how fast its content delivery is. It even has a dapp store where you can easily monetize your app through them. There also seems to be a lot of interest in the Akash community in using Akash for compute and Skynet for storage.

Storj

Are you building a dapp and coming from a lot of experience on AWS with S3 or other cloud providers’ object storage services? Well, Storj is probably your answer. It’s a decentralized object storage solution that’s backwards compatible with S3.

IPFS + Filecoin

The IPFS and Filecoin combo has been one of the earliest leaders in building a decentralized internet. It’s also a great solution for storing and archiving files. Although many similar solutions have popped up over time, this project, with strong ties to Stanford University, has remained strong and prevailed against all odds.

Solana

Solana has been one of the most talked-about small-cap gems lately.
Some of my crypto investor contacts are all in on Solana’s promise of being the blockchain that is super fast and low cost per transaction. By design, Solana can handle up to 65,000 transactions per second. Its homepage even has a constantly moving live counter of how many transactions have already occurred on chain. Akash also chose Solana to handle smart contracts, since the CEO of Akash deemed Solana to be simply the best solution available. The side effect of doing super-fast transactions at low cost is that Solana’s blockchain is extremely large. Understanding this downside, Solana decided to partner with Arweave to handle permanent data storage, which seems like a great move on their part.

Theta

Theta is looking to disrupt the video delivery space with blockchain, just like YouTube and Twitch did in the Web 2.0 days. It already has a working product with a lot of things done right. The advisory team includes some heavy hitters too, like Steve Chen (YouTube), Justin Kan (Twitch), and Fan Zhang (Sequoia Capital).

NuCypher

Security and encryption always seem to be boring topics to the average person, and yet they are such a pertinent part of all technologies used in the real world. As has been proven in the past, while blockchains are conceptually more secure than their predecessors, they are not invincible. NuCypher is looking to provide additional cryptography infrastructure (secrets management and dynamic access control) so that users are provided with more secure dapps that are also privacy friendly.

Brave Rewards

What can I say… Brave is hands down my favorite browser. Aside from being a kick-ass browser to begin with, it also comes with a privacy-focused, blockchain-based advertising mechanism called Brave Rewards. You can literally earn money just from using your browser. There’s really nothing else out there that comes close to it.

Mysterium

Mysterium is not the only player out there in the dVPN space.
However, it's the only cross-platform solution that just worked when I tried it. The best part is that you get to try things out for free before you start buying MYST coins to pay for their VPN service.

The Graph

The Graph provides a way for developers to create subgraphs (custom APIs) to access the open data indexed from Ethereum and IPFS. Need I say more? It is definitely a crucial piece of the dapp equation.

The Internet Computer

I have mixed feelings about this project due to its partially closed-source nature and certain aspects of its governance (permissioned, and requiring standardized hardware), but it is nonetheless worth a mention. The project is launched by a non-profit out of Switzerland that has been around for quite some time, so it has earned a lot of confidence as a legitimate project from both retail investors and institutions. Recently, it got listed on Coinbase and Binance, and within just 24–48 hours it went from US$0 to US$750. What a crazy bull run, or what some of my other friends might call a "pump and dump". So what is the Internet Computer? Think Ethereum, but not just covering the application layer: it plans to replace the entire internet stack, networking included, to supplant our existing internet altogether. In the past, they have run a decentralized version of LinkedIn that is just as fast as the centralized one. They have also been criticized in the past for their Network Nervous System (NNS) component. It has been said that this component has negative implications for privacy, and that rather than solving the problem of the existing internet being ruled by a small number of tyrants, it will install a new, consolidated tyrant instead. Does this give one organization too much power? I leave that up to you to judge.
Needless to say, since the scope of the project is so large, I suggest that you look at their website yourself to learn more, as this post would turn into a book if I had to cover all the underlying concepts of what the Internet Computer is about. That's it! I hope this list of 12 technically interesting projects will be a good stepping stone for your research on your next big dapp project or long-term investment.
https://levelup.gitconnected.com/exceptional-blockchain-projects-7a1836d549af
['Jeremy Cheng']
2021-06-24 03:18:59.759000+00:00
['Dapps', 'Web3', 'Blockchain', 'Cryptocurrency', 'Technology']
William Henry Johnson A.K.A. Zip the Pinhead
Welcome to my latest blog post on disability as entertainment, specifically the freak show of the 19th century. So far, I have discussed P.T. Barnum, General Tom Thumb, as well as Chang and Eng Bunker, the original Siamese twins. This week it is the turn of Zip the Pinhead. As with everything I cover, it is far more complicated than I initially anticipated.

What is a Pinhead?

Pinheads were a common feature in 19th and early 20th century freak shows. Pinhead was the term used to refer to a person born with the condition microcephaly. This is a neurological disorder which means that the head fails to grow at the same rate as the face. This results in the top of the head being much narrower than the face. As the person grows older, the size difference becomes more apparent. In a majority of instances, the size of the person's body is also smaller than average. Furthermore, mental capacity can be reduced too, delaying motor functions and speech. As you may have guessed, the term Pinhead is not an acceptable one anymore. I ran (well, not literally) into several problems when researching 'Zip the Pinhead'. You see, there was more than one of them. The man I was looking for, and the one I will be discussing later, was William Henry Johnson. However, he was not the only one, but perhaps he was the most famous. To further complicate things, he was also known as 'What is It?'. The idea was that he was so strange looking, it was impossible to tell whether he was human or an animal. This name arose when Charles Dickens attended one of Barnum's exhibits. Perplexed at what he saw, he leaned over and asked Barnum "What is it?" There had been other iterations of 'What is it?' before William Johnson arrived on the scene. The most famous of these was a legless man in London by the name of Harvey Leech. Unfortunately for Barnum, Leech was an actor who had appeared in plays, and the people of London easily identified him. Johnson was a more successful attempt at duping the public.
The life of William Henry Johnson

William Henry Johnson was a black man born in the 1840s in Liberty Corners, near Bound Brook, New Jersey. He was the son of former slaves, William and Mahalia Johnson. He was believed to be between four and five feet tall, and his head was an unusual shape: the top was narrower than the bottom. This attracted some attention, and the nearby Van Emburgh's Circus came calling to add Johnson to their sideshow. In 1860, P.T. Barnum came across Johnson and decided to take him on. As can be expected, Barnum had an elaborate backstory for his new performer. He shaved Johnson's head, leaving only a small tuft of hair at the top. He also changed his name to Zip the Pinhead. It was only when I started to do my research that I discovered this topic has more to do with race than disability. The origin of the name Zip is most likely "Zip Coon", an early minstrel show character who, for white people, came to personify the stereotypically dumb but dapper black person. Thinking along these lines, Barnum dressed Johnson in a fur suit and stated that he had been found along the Gambia River in West Africa. He was thought to be part of a tribe that was the missing link between monkey and man. As well as being called Zip the Pinhead and "What is it?", he was given the names "The Monkey-Man" and "The Missing Link". Charles Darwin's On the Origin of Species was published in 1859, so evolution was the hot topic of the day. For several years, Johnson was presented on stage in a cage (hey, that rhymes). His job was to act like a monkey to entertain people. So I guess you could say that he was in a rage, in a cage, on a stage! Barnum paid him $1 a day not to speak, which Johnson happily agreed to, becoming rich in the process. Instead of speaking, he would grunt every time he was addressed. I know it sounds pretty bad, but Johnson seems to have enjoyed performing. He did so until he was in his 80s.
In fact, when he got bronchitis and influenza in 1926, he ignored the advice of doctors, his manager, and his sister to rest. He insisted on finishing the run of the play he was in. He died a few weeks later, on 9 April 1926, in Bellevue Hospital, New York. His funeral was attended by several of his fellow performers, and he was the longest-serving freak show attraction.

How Intelligent was Zip?

There has been some debate as to how intelligent William Henry Johnson was. Robert Bogdan, in his 1988 book, Freak Show: Presenting Human Oddities for Amusement and Profit, thinks that he had reduced intelligence due to a diagnosis of microcephaly. However, opinions have since changed. I for one feel that Johnson was not mentally deficient. Here's why. While his head was an unusual shape, it was not as pronounced as in others with microcephaly. In fact, by shaving his head, Barnum made it look much worse than it actually was. Johnson also showed several signs of intelligence. For instance, he kept up the act of not speaking for over 60 years, even though his sister reported that he was capable of holding conversations. If he had not been able to speak, Barnum would not have had to pay him $1 a day. I think he had to be intelligent not to break character for such a long period. Furthermore, he acquired a fiddle on his travels, which he played for several years. Apparently, he was so bad at playing it that observers and even fellow performers would pay him to stop. He made $14,000 by doing this. If he was not intelligent, he might have either stopped playing entirely or kept playing and not taken the money. Near the end of his life, Johnson performed at the Coney Island freak show. One Sunday afternoon in 1925, while out walking on his break, Johnson heard a girl screaming. It was a seven-year-old who had fallen into the water. Johnson (in his 80s) quickly jumped in and rescued the girl, immediately running away as a crowd gathered around her.
Finally, his last words to his sister on his deathbed were reported to be "Well, we fooled 'em for a long time". This suggests that he may have been an (incredibly wealthy) actor playing a role for most of his life. To keep up to date with my latest blog posts, you can like my Facebook page, or follow me on Twitter. You can find them by clicking the relevant icons in the sidebar of my blog on Blogger. Next week I will be examining the life of 'The Elephant Man', Joseph Merrick. The Wheelchair Historian

Further Reading

Bogdan, Robert, Freak Show: Presenting Human Oddities for Amusement and Profit (University of Chicago Press, 2014).
CandyGuy, 'ZIP THE PINHEAD — What is it?', https://www.thehumanmarvels.com/zip-the-pinhead-what-is-it/. Accessed: 27th November 2020.
Fact Index, 'Zip the Pinhead', http://www.fact-index.com/z/zi/zip_the_pinhead.html. Accessed: 27th November 2020.
Gerber, David A., 'Volition and Valorization in the Analysis of the "Careers" of People Exhibited in Freak Shows', Disability, Handicap & Society, Vol. 7, No. 1, 1992.
Newspapers.com, 'Zip the Pinhead ("Barnum's What-Is-It") saves a life', https://www.newspapers.com/clip/17123705/zip-the-pinhead-barnums-what-is-it/. Accessed: 27th November 2020.
WeirdNJ.com, 'Zip The "What Is It?"', https://weirdnj.com/stories/local-heroes-and-villains/zip-the-what-is-it/. Accessed: 27th November 2020.
https://medium.com/@wheelchairhistory/william-henry-johnson-a-k-a-zip-the-pinhead-53daf5999f1b
['The Wheelchair Historian']
2020-12-12 12:42:07.301000+00:00
['Freak Show', 'History', 'Entertainment', 'Disability', 'American History']
How to Customize Matplotlib and Seaborn Plots with .mplstyle Sheets
How to Customize Matplotlib and Seaborn Plots with .mplstyle Sheets

If you are new to creating data visualizations with Python, you have likely run into the struggle of delving deep into the catacombs of the Matplotlib and Seaborn documentation pages to try to change the smallest details of your plots away from the defaults. While both Matplotlib and Seaborn do an excellent job of providing built-in styles to work with, I always found myself wanting to make my graphs a little more unique with some personal flair. I warn you now: there is no substitute for reading through the documentation and getting a basic understanding of the nature of the vast collection of objects and plotting functions available in these libraries. But if you are a tactile learner like myself, you'll probably learn ten times as much by getting your hands dirty and practicing the skills yourself. Continue reading if you would like to learn how to:

- Create custom styles by setting parameters with an .mplstyle file
- Exercise greater control over the fonts and colors used in your plots
- Decide when to use Seaborn vs. Matplotlib

For this demonstration, I'll use the following imports, so it's important that they're all installed in your environment. You can download them using pip install <library_name>, replacing <library_name> with pandas, numpy, matplotlib, and seaborn if they are not already installed. Then you can import them with the standard aliases, and if you're using Jupyter, you can add %matplotlib inline to avoid having to execute plt.show() for each plot. For those new to any of these Python libraries, I suggest first reading the documentation or watching some YouTube tutorials (which I'll link below) to get some basic working knowledge. We'll start with some Matplotlib plots, using NumPy to generate the points.
# Arrays and DataFrames
import numpy as np
import pandas as pd

# Visualizations
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns

With matplotlib

The default settings that Matplotlib visualizations use when other parameters are not passed to plotting functions are contained in the library's attribute rcParams, which can be thought of as a sort of global dictionary that can be accessed and altered just like any user-defined dictionary in Python. It can be accessed with matplotlib.rcParams or, in this case, since we are using an alias, mpl.rcParams. During a Python or IPython session, these values can be changed dynamically using dictionary subscripting, but by using an .mplstyle file (an example is included further below and in this article's repository) we can avoid having to set these parameters manually for every session. First, we'll plot three points and see what we get. You'll notice that the line does not include markers for the actual points by default when plotting a line. This is because the lines.marker key in rcParams is assigned to None. To change this for just one plot, we pass in marker='o', for example, if we want dots. We'll also give it a title and axis labels.

# Defining Points
x = [1, 2, 3]
y = [2, 4, 6]

# Plotting x vs. y
plt.plot(x, y)

# Assigning Labels
plt.xlabel('X-Axis Label')
plt.ylabel('Y-Axis Label')
plt.title('Plot Title')

# Display the current figure
plt.show()

And now with markers…

plt.plot(x, y, marker='o')
plt.xlabel('X-Axis Label')
plt.ylabel('Y-Axis Label')
plt.title('Plot Title')
plt.show()

If we want all plots we create in a given context to have markers, we can set this directly in rcParams. To read the current value, we access the lines.marker key just as we would with any Python dictionary. As we can see, it is None by default. If we assign this key the value 'o', then all plots made after that line's execution will include circle markers by default.
mpl.rcParams['lines.marker']
Output: 'None'

However, say that you want some plots to use certain parameters while using the defaults (or another custom style) for other types of plots. Manually setting these parameters for each plot somewhat defeats the purpose, since they are global and would need to be changed each time, unless you know from the outset that you want all plots to use the same parameters. These parameters can be set in an external file with the extension .mplstyle, which at the most basic level can be used in two ways:

- Setting it for an entire script/notebook with plt.style.use('<file_path>')
- Using it only for a certain code block with plt.style.context('<file_path>'): followed by indented code

To demonstrate this, I've created a sample style sheet called custom.mplstyle that we'll use via the second method above. First, let's plot a simple family of functions, f(x) = x² + c, where x is between -5 and 5 and c ranges from 0 to 30 in multiples of 3, giving 11 curves. When plotting these lines on the same matplotlib.Axes object, Matplotlib will automatically give them different colors to identify them, using a default cycler. We'll define a function to plot them for us, along with our x-values and function, f(x), defined as x and f respectively.

# Values for the x-axis, 41 points from -5 to 5
x = np.linspace(-5, 5, 41)

# Define function to plot: x^2
f = lambda x: x ** 2

# Define values for c: 0, 3, 6, 9, ... 30
c = np.arange(0, 31, 3)

def plot_family(x_vals, f, c_vals):
    for c in c_vals:
        plt.plot(x_vals, f(x_vals) + c)
    plt.title('Plot Title')
    plt.xlabel('X-Axis Label')
    plt.ylabel('Y-Axis Label')
    plt.show()

First we'll call the function without the style sheet for comparison, which will use the default color cycler, font, font sizes, and grid parameters that are set in rcParams.
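The article's custom.mplstyle file itself isn't reproduced in this excerpt. As a rough sketch of what such a file can contain, here is a minimal hypothetical style sheet; the specific keys and values below are my own assumptions for illustration, not the author's actual file, but every key mirrors an rcParams entry and follows matplotlibrc syntax:

```
# custom.mplstyle -- hypothetical example; any key valid in matplotlibrc works here
lines.marker     : o
lines.linewidth  : 2.0
font.family      : sans-serif
font.size        : 12
axes.titlesize   : 16
axes.grid        : True
grid.linestyle   : --
grid.alpha       : 0.4
figure.figsize   : 8, 5
```

Dropping a file like this next to your notebook and loading it with plt.style.use or plt.style.context applies all of these settings at once, instead of mutating rcParams key by key.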
plot_family(x, f, c)

Now, using the style sheet…

with plt.style.context('./custom.mplstyle'):
    plot_family(x, f, c)

The style sheet even allows you to customize the order of colors used for the line objects on the axes. Matplotlib recognizes many CSS colors by name, and you can find a page that lists them all here and in a link at the bottom of this page.

With seaborn

Since the Seaborn library is built on top of Matplotlib, it too can make use of an .mplstyle sheet to customize your plots. To demonstrate this, we'll take a look at the Iris dataset and generate some simple plots. For context, this dataset contains 150 records with petal and sepal measurements of three species of Iris flowers, along with their labels: setosa, virginica, and versicolor. The Seaborn library conveniently includes this dataset, which saves us some work.

# Load the dataset
iris = sns.load_dataset('iris')

# Display the first 5 rows
iris.head()

Using Seaborn's scatterplot function, we can construct a figure plotting any of these measurements against another in a 2D plot while coloring each species of flower with a different hue. Seaborn also has several other default styles that can be used. We'll show the default style, then the 'darkgrid' style, and then compare with the custom style defined in custom.mplstyle.

def petal_length_v_width(data):
    sns.scatterplot(x='petal_length', y='petal_width', hue='species', data=data)
    plt.title('Petal Length vs. Petal Width in Iris Flowers')
    plt.xlabel('Petal Length')
    plt.ylabel('Petal Width')
    plt.show()

# Default
petal_length_v_width(iris)

# Using 'darkgrid' seaborn style
with sns.axes_style('darkgrid'):
    petal_length_v_width(iris)

# Using the custom style sheet
with plt.style.context('custom.mplstyle'):
    petal_length_v_width(iris)

As you can see, a lot of information can be wrapped into a style sheet to avoid repeated formatting of plots, especially if you know that certain parameters should be uniform across all visualizations you create.
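On the color-cycle point above: the same override can also be done from Python by assigning a Cycler object to rcParams, which is sometimes handier than editing the style sheet. A minimal self-contained sketch (the three hex colors are arbitrary choices of mine, not from the article):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
from cycler import cycler  # cycler ships as a matplotlib dependency

# Override the default color cycle for all subsequently created axes
plt.rcParams["axes.prop_cycle"] = cycler(color=["#1b9e77", "#d95f02", "#7570b3"])

fig, ax = plt.subplots()
# Each new line automatically takes the next color in the custom cycle
lines = [ax.plot([0, 1], [i, i + 1])[0] for i in range(3)]
colors = [line.get_color() for line in lines]
print(colors)  # → ['#1b9e77', '#d95f02', '#7570b3']
```

The equivalent style-sheet line would be something like axes.prop_cycle: cycler('color', ['1b9e77', 'd95f02', '7570b3']), so the choice between the two is purely about whether you want the setting per session or per project.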
While using a style sheet may be slightly more labor intensive at the beginning, it can definitely help keep your code D.R.Y. (i.e. Don’t Repeat Yourself). Also, writing a style sheet at the beginning of a project can save you a lot of work in the future and helps to keep your plots looking consistent, and overall much more interesting and attention-grabbing than the default plot settings. For further information check out the following documents, articles, and videos which I have found very helpful for all aspects of data visualization. Also, feel free to clone the repository for this notebook and let me know if there’s any other topics I should cover.
https://medium.com/@ian-sharff/custom-mpl-ebb2c8bb91d7
['Ian Sharff']
2021-08-10 00:21:38.310000+00:00
['Pandas', 'Numpy', 'Matplotlib', 'Python', 'Seaborn']
The Lysenko Lesson: Ninety-One Years Later
The revolution of 1917 was a breeze of relief that put an end to the pallid Tsarist servitude. Little did anyone know about the servitude yet to come. Scientists were not the first of the Russians to succumb to the new regime, but their day had to come. It was inevitable. But, why do we need to remember this? This is 2020. Amidst the rise of climate-change denial and flat-earth theories, with an occasional rejection of the theory of evolution, it is indeed a perfect time to look ninety years back. 1928, the year a crackpot scientist presented his theory which, by Stalin’s grace, ascended to the peak of Soviet science. In 1934, octogenarian physiologist Ivan Pavlov wrote a letter to the Sovnarkom (The Council of People’s Commissars), criticizing the Bolsheviks. Instead of setting the stage for a “world revolution”, complained Pavlov, the Bolsheviks had instigated the rise of fascism. Having been a recipient of the Nobel Prize, Pavlov was somehow immune from the harsh treatment dissidents used to face in the Soviet regime. However, five volumes of reports from the informants were prepared on him, thanks to the mighty OGPU. Sovnarkom chairman Vyacheslav Molotov replied to Pavlov in 1935, in a manner somewhat softer than what one could expect from a Soviet authority, indicating his disapproval of Pavlov’s unsolicited interference in political matters. Further, Molotov assured Pavlov that the political leadership of the Soviet Union would never interfere in the study of science: “I can only express my surprise that you tried to make categorical conclusions on principal political questions in a scientific area which you, apparently, have no knowledge of. I can only add that the political leaders of the USSR would never allow themselves to use such ardor in questions of physiology, the field in which your scientific authority is without question.” The promise, however, was false. The authority kept poking its nose into science. 
Stalin's tendency to exert authority over science should not be seen in isolation. It is the very principle of totalitarianism to influence every aspect of life. The exuberant achievements of Soviet science cannot excuse its leadership's countenancing of pseudoscience of the worst kind. For example, the rejection of Mendelian inheritance. What? In 1928, Soviet biologist Trofim Lysenko came up with his revolutionary "innovation" in the field of agricultural sciences. Lysenko rejected the concept of genes and held Lamarckism as the gospel truth. Tweaking the method of vernalization, he proposed a theory based on the heritability of acquired traits. His rejection of Mendelian genetics won the favour of Stalin. Soon, Lysenko rose to a position powerful enough to suppress conflicting opinions from other scientists. Mendelian genetics was rejected primarily to reaffirm the view that human nature is infinitely malleable. Anything not in accordance with the ruling political philosophy was to be rejected. Science, sadly, does not care about ideology. Lysenkoism was a disaster. Its fabricated nonsense resulted in the deaths of millions. The August 1948 session of the Agricultural Academy wreaked havoc on true science — genetics in particular. More than 3,000 biologists lost their jobs because of their staunch opposition to Lysenko's spurious ideas. Academician Orbeli refused to sack geneticists Rose Manzing and Ivan Kanaev from the Institute of Evolutionary Physiology and Pathology of the Highest Nervous Activity within the Medical Academy. Interestingly, in open defiance of the regime, he hired another geneticist named Mikhail Lobashov, who had just been sacked from Leningrad University. Orbeli could no longer hold his position as secretary academician; Oparin replaced him. Long before this, geneticist Nikolai Vavilov's strong opposition to Lysenkoism had ignited a bonfire of criticism in the Stalinist intelligentsia. He was eventually arrested.
He starved to death in 1943. The prosecution of Vavilov, a sad reminder of the fate of Anaxagoras, was only the tip of the iceberg. For many in the Stalinist USSR, voluntary human sacrifice was an honor. People were ready to submit themselves to the government. American journalist Anna Louise Strong, in her book The Stalin Era, gave a particularly sordid and scary account: "Other Russian friends took an even more ruthless view. I recall one who maintained that if the political police held one hundred suspects and knew that one was a dangerous traitor but could not determine which one, they should execute them all, and the ninety-nine innocent ones should be willing to die rather than let a traitor live." Needless to say, no totalitarian regime can prosper without such sclerotic disregard for human life. Under the carapace of moral-sounding intentions, Lysenkoism demolished Soviet genetics. The problem, however, was not Lysenko's espousal of pseudoscience as such. Pseudoscientific ideas are discarded by rigorous analysis; pseudoscience runs its course and eventually withers away. But Lysenkoism was protected by the state because of its supposed adherence to Marxism, not because of its scientific value. Lysenko's critics, however accurate their scientific methods, were denounced as counter-revolutionaries. Dr. Vadim Birstein's book The Perversion of Knowledge: The True Story of Soviet Science recounts these events in chilling detail. The Soviet government's relationship with pseudoscience did not stop there. After Stalin's death in 1953, the influence of Lysenkoism started to wane. In 1962, Soviet physicists Yakov Zel'dovich, Vitaly Ginzburg, and Pyotr Kapitsa denounced Lysenko's ideas, and Lysenko was eventually removed from his position. However, the authorities found a new product to market — the ancient astronaut hypothesis. This hypothesis, often called the "ancient astronaut theory", suggests that our ancient ancestors were visited by extraterrestrials.
The primitive humans, unable to understand alien technology, mistook those aliens for gods. However, the proponents of this "theory" have not been able to put forward any conclusive evidence, apart from claims that have already been debunked. The ancient astronaut hypothesis owes its popularity mostly to Erich von Daniken's bestseller Chariots of the Gods? (1968). Nearly a decade before its publication, Soviet mathematician Matest Agrest proposed the so-called "theory". He suggested that the tale of Sodom and Gomorrah might have been the description of a nuclear war. As researcher Jason Colavito rightly points out, Agrest's conjectures might have influenced Jacques Bergier and Louis Pauwels to write Morning of the Magicians (1960), which later served as Daniken's (possible) primary reference. Things did not remain the same. Once the US jumped on the "sky-gods" bandwagon, it became inevitable for the Soviets to abandon ship. This points to an important feature of totalitarian systems, at least of the USSR kind. They did not embrace pseudoscience like cults usually do; they merely utilized it. This can help us distinguish directly theocratic systems from ones that are structurally theocratic but ideologically secular. While a theocracy mingles with pseudoscience and embraces it as a part of its religious agenda, secular totalitarian governments use it as a tool. Sociobiologist Edward O. Wilson was heckled multiple times for his differences with the dominant left-wing tendencies. Apart from being accused of propagating racism, Wilson had a pitcher of water thrown at him. The backlash Dr. Wilson received from a section of the "left" exposes how nascent totalitarian tendencies try to influence science. This is an ominous sign. These signs appear everywhere in the world. All sorts of pseudoscience (airplanes in ancient India, plastic surgery in ancient times) spring up.
Misinterpreting science to fit a racist agenda is fairly common, even in today's world. So, why do we need to remember the Lysenko episode? The answer can be found in the growing trend of pseudoscientific ideas, fueled by guided mass delusions and a collective persecution complex ("The Majority Is In Danger!"). It has been more than ninety years since Lysenko presented his mumbo-jumbo. Last year, Lysenko turned 121. To avoid a possible reincarnation of Lysenko (or his shadow), it is extremely important to remember this particular piece of history — a lesson, a textbook case of institutionalized fake news. References:
https://medium.com/@aleftwanderer/the-lysenko-lesson-ninety-one-years-later-825fdbeaff69
['A Left Wanderer']
2020-06-09 22:36:28.896000+00:00
['Communism', 'Soviet', 'Ussr', 'Stalin', 'Lysenko']
Machine Learning for Earth Observation Market Map
Machine Learning for Earth Observation Market Map

Meet the 100+ organizations that focus on machine learning applications with satellite data

By Louisa Nakanuku-Diggs, Marketing & Communications Manager, Radiant Earth Foundation, and Gracie Pearsall, former Science Communications Intern, now a Ph.D. student at UC Santa Cruz

Building geospatial machine learning applications involves many interdependent moving parts, from accessing Earth observation (EO) data, labeling imagery, and generating training data to creating and developing models and running analytics. A growing list of organizations from various sectors are providing solutions and services to advance these applications. Who can help you build machine learning applications, identify patterns in your data, or run your crowdsourcing campaign? What organizations provide software or a platform that you can use to develop your machine learning model? To help you navigate these questions, we curated a list of organizations focusing on different aspects of the machine learning with satellite data pipeline. The organizations are divided into two groups, Commercial and Non-Commercial, and five categories:

- Data Analysis and Services — Organizations that build machine learning models and analytical solutions for various geospatial applications using EO data.
- Analytics Platforms — Organizations that provide software or a platform to process EO data and build customized applications.
- Labeling Platforms — Organizations that provide a solution for generating labels on satellite imagery and creating training data catalogs.
- Competition Platforms — Organizations that help you run challenges on your training dataset to outsource modeling solutions.
- Data Access and Storage — Organizations that provide access to satellite imagery and geospatial training data to facilitate machine learning analytics pipelines.

What organizations are we missing?
Please follow this link to submit the names of organizations we might have missed: http://bit.ly/MissingML4EOOrgs.
https://medium.com/radiant-earth-insights/machine-learning-for-earth-observation-market-map-d3a1f3936cb3
['Radiant Earth Foundation']
2020-10-21 16:57:52.667000+00:00
['Geospatial Industry', 'Infographics', 'Satellite Data', 'Earth Observation', 'Training Data']
HOW TO ATTRACT INVESTORS WITH COOKIES
Amsterdam trip

Introduction

A few of our team made a wonderful journey to the magical country of the Netherlands to participate in the world Blockchain Expo. I have since received a lot of questions about how it feels to participate in such a huge event, so I decided to set down my memories below.

Preparing for the storm

Preparation in advance, checklists, and other useful things are not our cup of tea. Of course, it's not about being badly organized, but about being creative minds who get inspiration on the fly, sometimes at the very last moment. Our biggest challenge was to present our small project as a serious and yet interesting one on the world stage. How could we attract hundreds and thousands of people to our platform while competing with experienced players who invest heavily in their launch pads? Brainstorming for a while, we dismissed thousands of ideas and focused on the core issue: a human approach. That became our strategy: transparency and openness. The first thing we decided to make was a presentation that would create a WOW effect on the reader. A lot of mock-ups were burned without any right of restoration. Desperate designers started drawing prototypes on whatever was at hand, and presto! A sheet of tracing paper, which had apparently been lying on the table since the days of typewriters, waiting for its moment of triumph, came into the hands of our designer Olga. Tracing paper. It was exactly what we needed: moderately transparent and thick. You can write, draw, and apply effects on it while preserving a feeling of transparency, cleanliness, and depth. Immediately, an endorphin bomb caused the artists' neurons to fire with renewed vigor to implement a concept that, once realized, could be summed up as WOW. Resembling little hobbits with the One Ring, they fussed over the project, not showing it to anyone before the trial version, so great was the fear that the result would not match their mental image.
Finally, when our font manager Vladimir came in with the ready-made version, he could not hide his emotions, almost crying (although that could have been from the tough negotiations with the print workers), as he presented the release version. We were so amazed that beautiful words of foul language filled the office. The presentation was ready. The only thing left was to distribute it, and it could already be said that our "face" was ready and waiting for its display. But there would be a lot of people at the Expo. How would we get their attention from afar? Again, the brainstorming of our creative minds started rasping in the dark corners of the office. The tension was in the air, and even our artist, who created our first mascot, Alphy the Bull, accidentally got involved in the discussion. Just a quick look at Katerina dispelled any doubts: our Alphy the Bull had to be on the banner. Neither an infographic nor a windy talk; it should be our Alphy who would compel every passer-by to come to our stand. Two of the three important components of the engagement chain were ready. We could attract people. We could give information. Yet we still needed to inspire and express a human attitude. Here, the honor was mine. I have always thought that the art of calligraphy is something magical and very interesting. And when I started thinking about how to express our attitude to visitors, I quickly realized that if our calligrapher signed personal cards for everyone who wanted to know more about our company, it would be stronger than even an expensive promo. And the main thing: people would remember it, because it was written for them; it was personal and unusual. And that's life. Sometimes you need to sweat to give birth to an idea, and some concepts come out of thin air. But the main thing is that even if everything done before is scrapped, it is not a failure. All of it was a bridge that led to the shore of a successful concept.
Teleportation The day of the trip was, of course, exciting. Firstly, the printing office had to send us the finished batch of presentations. Secondly, there was the trip itself. In the best traditions of a good journey, not everything went smoothly. The first stumbling block was the printing house, which did not manage to deliver our "children" on time. Vladimir, as the person responsible for this front, rushed to pick them up personally. Only on arriving there did he realize he had underestimated their weight. These were truly "weighty arguments" for our event: 100 kg in total. The question of transportation immediately came up; we needed to find space to pack the volumes in a short time. Even this small fragment of Alphateca life shows how splendidly abstract and creative vision can be applied to any aspect of our lives. Amsterdam met us with sunshine and laid-back people. We went to the hotel to get ready for the next day: the first day of the Expo. Expo Day 1 The first day of any event is very important. Firstly, you give a taste of the quality, and secondly, you learn what's in store for you. A tender-hearted organizer had informed us that the way from our hotel to the exhibition would take no more than a 20-minute walk. She forgot to mention such interesting details as the absence of pedestrian zones on some roads; if you avoid those roads, the walk turns out to take at least an hour. No matter: there are taxis, and there is Uber, an excellent international system that is so useful in many countries. Fun fact: an Amsterdam taxi cannot stop wherever you want, only in special parking zones, which is extremely inconvenient for the likes of us.
Uber quickly brought us to the Expo, where we saw a huge building, essentially a modern barn, gathering hundreds of exhibitors from different countries under one roof. And that wonderful smell of ozone from the freshly printed entrance tickets for Expo participants. An international crowd. A Russian voice. Wait, what??? We thought we would be the only Russians at the event, but no: there turned out to be plenty of Russian-speaking people from Russia, Kazakhstan, Ukraine and Belarus. Walking through the internal spaces of the Expo, I began to understand that some companies had invested nearly the annual budget of a small firm in the event. Miracles of design and detail. Every stand was full of various goodies for attracting visitors. Everyone was friendly, good-looking and full of interactive offers. Would we be able to beat their approach, grab at least some piece of the crowd and interest them? We were full of this panicky feeling while walking to our stand. So many strong and experienced teams. Obviously, I tried to keep a poker face. When we reached the stand and began to unpack our roll-up, arrange the furniture and lay out the printed materials, we realized two things: the roll-up should not be placed at the back like everyone else's; it should be the first net in the path of passing people. The printed materials should also lie on the table, pushed slightly forward, so that guests could see from afar that there was something unusual and interesting to look at. We had a wonderful quartet: me, our artist Violetta, account manager Ksenya and font manager Vladimir. Initially, we planned that Violetta and I would sit in front of the stand while Vladimir and Ksenya beat a path to target clients and partners. In the end, it turned out that the perfect scheme was different: me with Vladimir, and Ksenya with Vita.
Our "customer funnel" worked the following way: first, people saw the roll-up with the bull, which clearly stood out among all the other stands; then they saw Vladimir making personal calligraphic cards, and while they waited I gave them information about the project. Unlike everyone else, we had no luxury banners or interactive stands. We kept it simple and showed that we could work with a completely different approach: with humanity, openness and a readiness to demonstrate our skills right then and there. The day was emotional, productive and telling. There were a lot of pleasant moments, especially when strangers came up to us and said they had heard at the other end of the hall that there were creative and interesting guys presenting an unusual project. Or when people smiled as their postcard was signed and immediately started taking photos with you and signing up for Alphateca. It was also great when they came over already knowing our names and talked about how they would like to partner with us, even if they were from a different sphere. Of course, there were moments when we knew we had gone the wrong way: for example, we needed a plasma screen to show our promo. Or the presentations were cool but too heavy to carry. Or the business cards were a total failure because the concept was not implemented properly. But mistakes are important and useful, as each one is another step toward your dream. Expo Day 2 The second day is like the second day of a wedding party: everyone has already gotten used to each other and everything goes on in a more relaxed atmosphere. Nevertheless, it was the most important day for us. If on the first day we simply presented the project, the second day was the day of investors, which implied a more formalized approach. It is important not only to show that you are creative, but also to demonstrate the seriousness of your approach to the process. The day started later and ended earlier.
We also knew that people would come in waves, tied to the beginnings and endings of lectures. On this day, everyone brought out their gimmicks: FOOD. Yes! You would not believe it, but people began luring investors with cookies and cupcakes! What a shock! We had not even imagined this could happen at a European summit. Fine, so be it. We had already laid out our own gimmicks on the first day, and it would have been too late and silly to rush to a candy shop; this was not the "Hell's Kitchen" show. Others tried to attract people with everything they could: glasses, caps, gadgets, bags. We had only souvenirs, stickers and postcards. Then again, in their case everything was mass-produced, while in our case part of the gift material was created right in front of a person, which definitely increased the loyalty level. On this day there were quite serious technical and economic talks. Serious-looking men brought along their auditors, who, like tiny microscopes, examined the projects at maximum zoom to assess whether they were worth considering more deeply. A pleasant moment came when, after a long dialogue, your interlocutor opened his notebook and started writing a brief summary of the Alphateca project. The end. Amsterdam and other thoughts To sum up, an Expo like this is a great opportunity to show yourself and your project, and to prove that you are not a cheat and it is not a scam; that the team is not afraid to answer questions, and that you are creative and able to meet the expectations placed on you. It is a great opportunity to gain partners, to convey information about your project to the target audience, and to get coverage in the media. Beyond that, it is simply a huge charge of energy and emotion, a chance to learn, to pick up information about current trends and possible competitors from the top people themselves. Moreover, it is a live and active target audience asking questions, which is, in fact, a huge stress-test of your platform.
I have managed to visit different countries in my lifetime: France, Norway, Sweden and others. But Amsterdam is a different kind of city. It is a strange mix of contradictions: cozy yet not very clean, open yet sometimes dark. It has marvelous narrow streets with crowded bars and small shops of all kinds. It is certainly not the cheapest city, though that point is debatable: Oslo is more expensive. It would be silly to judge a country only by the residents of its capital, as there are many tourists and most of the business is focused on working with them; every big city in every country has this sin. Anyway, I can tell you one thing for sure: one day I will come back to Amsterdam and examine it under my own microscope.
https://medium.com/alphateca/how-to-attract-investors-with-cookies-d7d783c805e
[]
2018-08-02 11:15:05.762000+00:00
['Startup', 'Cryptocurrency', 'Alphateca', 'Becryptoone']
Simplifying the ROC and AUC metrics.
Logistic Regression Now if we fit a logistic regression curve to the data, the Y-axis is converted to the probability of a person having heart disease based on their cholesterol level. The white dot represents a person with a lower probability of heart disease than the person represented by the black dot. However, if we want to classify people into the two categories, we need a way to turn probabilities into classifications. One way is to set a threshold at 0.5: classify the people with a probability of heart disease > 0.5 as "having heart disease" and the people with a probability < 0.5 as "not having heart disease". Let us now evaluate the effectiveness of this logistic regression, with the classification threshold set to 0.5, on some new people about whom we already know whether they have heart disease or not. Our logistic regression model correctly classifies all people except persons 1 and 2.
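The thresholding step described above can be sketched in a few lines of Python. The probabilities and true labels below are made-up illustrative values (the article's actual data is not given); the point is only the mechanics of turning predicted probabilities into classifications at a 0.5 cutoff.

```python
import numpy as np

# Hypothetical predicted probabilities of heart disease for six new
# people (illustrative values only; not the article's data).
probs = np.array([0.2, 0.4, 0.45, 0.55, 0.8, 0.9])
# Known true labels: 1 = has heart disease, 0 = does not.
truth = np.array([1, 1, 0, 1, 1, 1])

# Threshold at 0.5: probability > 0.5 -> classified as "heart disease".
preds = (probs > 0.5).astype(int)

# Fraction of people the threshold classifies correctly.
accuracy = (preds == truth).mean()
print(preds.tolist())   # [0, 0, 0, 1, 1, 1]
print(accuracy)
```

With these made-up numbers, persons 1 and 2 (the first two entries) are misclassified, mirroring the example in the text. Moving the threshold up or down trades one kind of error for the other, which is exactly what the ROC curve summarizes across all possible thresholds.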
https://towardsdatascience.com/understanding-the-roc-and-auc-curves-a05b68550b69
['Parul Pandey']
2020-09-10 01:33:22.317000+00:00
['Machine Learning', 'Statistics', 'Classification', 'Metrics', 'Data Science']
Swoogle
Swoogle is a crawler-based indexing and retrieval system for Semantic Web documents written in languages such as RDF or OWL. It was a search engine for Semantic Web ontologies, documents, terms and data published on the Web. Swoogle provided services to human users through a browser interface and to software agents via RESTful web services. Several techniques were used to rank query results, inspired by the PageRank algorithm developed at Google but adapted to the semantics and use patterns found in semantic web documents. Swoogle was developed and hosted by the University of Maryland, Baltimore County with funding from the US DARPA and National Science Foundation agencies. It was the PhD thesis work of Li Ding, advised by Professor Tim Finin. Swoogle interface Swoogle's architecture can be broken into four major components: SWD discovery, metadata creation, data analysis, and interface. This architecture is data-centric and extensible: different components work on different tasks independently. The SWD discovery component is responsible for discovering potential SWDs throughout the Web and keeping information about SWDs up to date. The metadata creation component caches a snapshot of each SWD and generates objective metadata about SWDs at both the syntax level and the semantic level. The data analysis component uses the cached SWDs and the created metadata to derive analytical reports, such as the classification of SWOs and SWDBs, the rank of SWDs, and the IR index of SWDs. The interface component focuses on providing data services to the Semantic Web community. The architecture of Swoogle Conventional web search engines such as Google do not work well with documents encoded in the semantic web languages RDF and OWL. These retrieval systems are designed to work with natural languages and expect documents to contain unstructured text composed of words. They do a poor job of tokenizing semantic web documents and do not understand conventions such as those involving XML namespaces.
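The ranking mentioned above was inspired by PageRank but adapted to semantic web documents; the exact adaptation (how links are weighted by semantic relation type) is not described here. As a purely illustrative sketch of the underlying idea, the following is a plain PageRank power iteration over a toy graph of documents. The graph, function name and parameters are all assumptions for illustration, not Swoogle's actual algorithm.

```python
# Illustrative only: plain PageRank power iteration over a toy graph.
# Swoogle's real ranking weights links by semantic relation type,
# which is not reproduced here.
def pagerank(links, damping=0.85, iters=50):
    nodes = sorted(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Base rank every node receives from random jumps.
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if not outs:  # dangling node: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:         # split rank equally among outgoing links
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

# Toy graph: document A links to B and C, B links to C, C links to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
print(ranks)
```

Here C ends up ranked above B because it receives links from both A and B, the same intuition that lets a search engine surface widely referenced ontologies first.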
Moreover, they do not understand the structural information encoded in the documents and are thus unable to take advantage of it. Semantic web researchers need search and retrieval systems today to help them find and analyze semantic web documents on the web. Swoogle is the research project that provided a solution for this scenario. Such systems can be used to support the tools being developed by researchers, such as annotation editors, as well as software agents whose knowledge comes from the semantic web. Swoogle also helps researchers avoid creating new ontologies by encouraging reuse: a Swoogle search finds suitable existing ontologies in the relevant domain that match the user's needs. Swoogle reasons about the semantic web documents on the Web and their constituent parts and records meaningful metadata about them. It provides a web-scale semantic web data access service, which helps human users and software systems find relevant documents, terms and triples via its search and navigation services. Swoogle is capable of searching over 10,000 ontologies and indexes more than 1.3 million web documents. It also computes the importance of each Semantic Web document. Swoogle advanced query and Swoogle query result Being a research project with a non-commercial motive, there is not much hype around Swoogle. However, its approach to indexing Semantic Web documents is one that most engines will have to adopt at some point. When the Internet debuted, there were no engines available specifically for indexing or searching; the search domain only picked up as more and more content became available. One fundamental question remains: given that search engines return very relevant results for a query, how do we ascertain that those documents are indeed the most relevant ones available? There is always an inherent delay in the indexing of documents, and it is here that the new semantic document search engines can close that gap.
Experimenting with the concept of search in the semantic web can only bode well for the future of search technology.
https://medium.com/@sangeevan/swoogle-3d10611f9aa3
['Sangeevan Siventhirarajah']
2020-12-24 17:05:15.674000+00:00
['Rdf', 'Web', 'Ontology', 'Semanticweb', 'Search Engines']
Is Your Promotional Merchandise Being Trashed?
For the amount of money you spend on each promotional piece, do you see your customers keeping the item? You do not want to spend money on promo material just because you are "supposed" to. Is It Useless Crap? I just went to an expo with my brothers. Fortunately, I walked away with absolutely nothing. They, however, came home with loads of what I perceived as crap. Yes, I said it…crap, garbage, bunk and a waste of good resources for the companies that gave it away. They didn't gain a customer or even an advocate from their giveaways. What I've found is that these items get passed down to the kids in the house, who don't make buying decisions or have a car, bank account or job. Promotional Merchandise Smarts Be smart about where and how you spend your marketing dollars when it comes to events you exhibit at or participate in. Here are some things to think about when buying promotional merchandise: Is It Cost-Effective? Price, Price, Price! For the amount of money you spend on each promotional piece, do you see your customers keeping the item? You do not want to spend money on promo material just because you are "supposed" to. You need to spend wisely on this just like everything else in your business. What's My End Game? Promotional merchandise can differ based on what your end game is. Do you simply want your business contact info on the swag? Do you want to create an experience? Both? Asking yourself these questions can help you sort it out and get the most bang for your promotional buck. Do You Have Brand Recognition? Yes? If you have brand recognition, putting your contact info on your promotional merchandise is unnecessary. People know you! Great job! But don't stop here. Do You Have Brand Recognition? No? "I don't have brand recognition," you say. Then be smart about it. Use promotional materials as an opportunity to create brand recognition, even if it's only in your local market.
Depending on the merchandise you order, you can put your contact info, logo and even a tagline on your item. Do It Right Make it catchy, fun, quirky. Basically, make it memorable. If you are not creative with words, find someone around you who is, or hire a branding expert to help you in this area. It’s worth the money to do it right the first time. Walking by the garbage can at the event and seeing your hard-earned money in it can be completely demoralizing. Give this some real thought, planning, and effort. Will Your Customers Love It or Leave It? Of all the giveaways my brothers brought home from this event, their favorite was a stack of bumper stickers that were relevant to what they like. They didn’t fight over them, but there were definitely some side-eyes thrown around about who was going to get which bumper stickers. You want people to WANT what you have. People love free crap; really, they do. If it’s free, even if it’s a condom, they want it. So find something worth wanting. Over the years of working with large brands, I’ve had a lot of fun ordering promotional material. There is some crazy stuff you can order! It can be challenging to come up with the right promotional merchandise for your brand at the right price, but done right, it’s worth every penny. #LanySaidSo My favorite resource for promotional merchandise is Bamko. I have worked with the owner personally for over five years. Not only do they provide exceptional service and competitive pricing, but they give back to their community. To me, corporate social responsibility is important in addition to pricing and service. Bamko does a wonderful job with all three. Got questions regarding promotional merchandise for your upcoming event? I can help you sift through the innumerable possibilities that are available to you, so your swag doesn’t end up in the trash.
https://medium.com/lany-sullivan/is-your-promotional-merchandise-being-trashed-c395beae4343
['Lany Sullivan']
2020-12-18 20:25:47.973000+00:00
['Promotion', 'Marketing', 'Branding', 'Brand Strategy']
Blockchain, GraphQL, and More — In One Database
"FlureeDB is a database purpose-built to fit the requirements of modern enterprise applications while providing blockchain capabilities for data security, workflow efficiency, and industry interoperability." Sound intriguing? I thought the same, and if the team behind Fluree delivers everything it's promising, the results could be staggering. To find out more about Fluree, I spoke with its Co-CEO, Brian Platz. Hear the full interview below. For over two decades, Brian Platz and Flip Filipowski have been building software companies together. They oversaw two IPOs — one of which involved the 8th largest company in the world — and secured the largest cash sale of a software company ever. Suffice it to say, they have experience working with software. Fluree is Brian and Flip's latest project, conceived just over four years ago. It's a new type of data platform for modern apps, and they created it because they frequently found themselves struggling with database limitations. Brian and Flip felt that while software and software delivery (think SaaS) have moved in leaps and bounds over the years, the DBs that underpin software haven't evolved, despite the increased importance of data. One of Fluree's most significant differentiating factors is that it decouples the processes involved in updating data and querying it. Furthermore, a blockchain records every single DB change ever made, allowing for limitless querying of a potentially infinite number of versions. Fluree has the concept of "Fuel," which is similar to Ethereum's Gas and is metered for every DB query performed, and it offers three interfaces: GraphQL, FlureeQL (a JSON query interface), and SPARQL. Fluree released a licensed version last December and, so far, early adopters have tended to be other tech start-ups: blockchain-based apps that are using Fluree as their foundational underpinning.
That's because, with Fluree, you can write custom blockchain logic without having to fork another blockchain, i.e., you can get your project off the ground in a significantly shorter time. Two such early adopters are IdeaBlock, which is looking to disrupt the digital patent system, and Fabric, which is challenging the traditional advertiser ecosystem and looking to help people monetize their own data (rather than having it sold by the likes of Facebook). Brian also mentions that the biggest beacon on the six-month Fluree roadmap is that it will be fully open-sourced this quarter; its APIs are stabilized and good to go. All in all, there's no question that Fluree is jam-packed with features and heaps of potential, so let's take it for a test drive. Hands-On You can download and unpack a hosted zip file, then run the following command to start a Fluree instance: ./fluree_start.sh There are also Homebrew taps and Docker images available. Once started, Fluree runs on port 8080 and has a GUI and REST endpoints for most operations you need. After installing and starting it, I followed the "Examples" section of the documentation, which walks you through creating a cryptocurrency. With Fluree, you always have the choice of using FlureeQL, GraphQL, SPARQL, or curl. For example, to create a schema with curl, use the command below: curl \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $FLUREE_TOKEN" \ -d '[{ "_id": "_collection", "name": "wallet" }, { "_id": "_attribute", "name": "wallet/balance", "type": "int" }, { "_id": "_attribute", "name": "wallet/user", "type": "ref", "restrictCollection": "_user" }, { "_id": "_attribute", "name": "wallet/name", "type": "string", "unique": true }]' \ [HOST]/api/db/transact The authorization token is one of the interesting parts of Fluree, as it is tied to a keypair, something familiar to any blockchain user.
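To complement the schema transaction above, here is a sketch of how the new wallet collection might be read back. The {"select": ..., "from": ...} shape follows FlureeQL's basic JSON query format as I understand it from the docs, and the /api/db/query endpoint is an assumption mirroring the transact endpoint shown above; check the Fluree documentation before relying on either.

```python
import json

# FlureeQL query for every predicate of every subject in the "wallet"
# collection created by the curl transaction above. Both the query
# shape and the endpoint in the comment below are assumptions; verify
# them against the Fluree docs.
query = {
    "select": ["*"],   # all predicates on each matched subject
    "from": "wallet",  # the collection to query
}

payload = json.dumps(query)
print(payload)

# Sending it to a running local instance would look something like:
#   curl -H "Content-Type: application/json" \
#        -H "Authorization: Bearer $FLUREE_TOKEN" \
#        -d '{"select": ["*"], "from": "wallet"}' \
#        http://localhost:8080/api/db/query
```

The same query could equally be issued through the GraphQL or SPARQL interfaces; the JSON form is shown because it is the one unique to Fluree.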
Read the documentation for more details, but I used ./fluree_start.sh :keygen to get started with an autogenerated keypair and user id, and derived a token from that. You might have noticed that Fluree is not a NoSQL or schemaless database, which means you need to cope with schema changes; I couldn't find any official mention in the documentation of specific functionality for handling such changes. Next, you add sample data, again with any of the four methods available to you. As Fluree is a somewhat relational database, you can add "relations" using what Fluree calls "predicates." Fluree also bundles a set of predicate types to define what data type a relationship is, or you can use functions to define the predicate, which is where Fluree gets interesting. For instance, with the cryptocurrency example from the docs, you can define predicates that work somewhat like Solidity (the Ethereum smart contract language) functions, for checking balances or protecting against double spends. Final Thoughts Fluree is fascinating, but the multitude of bundled features is overwhelming; sometimes too much choice can be a little daunting and confusing. It's something like a database engine plus a semblance of an application layer bundled into one. I know that many older, relational databases have packed in these sorts of features in the past, but it's been a while since I have used a relational database, and I have gotten used to the simplicity of NoSQL offerings. The different interface options are welcome, but I wonder if picking and sticking to one might have been a better engineering decision, especially FlureeQL, which is unique to Fluree. Adding "blockchain" to the tech stack is a choice I am unsure about. I covered BigchainDB before, which attempted to do the same, albeit in a different way.
I'm unsure whether Fluree's blockchain features comprise an actual blockchain or just blockchain-like features, but that's fine: if you have a use case for them, it doesn't matter what you call them. I was also unable to thoroughly test performance or reliability metrics for Fluree, so I'm unsure whether all of these features add much overhead. All in all, I strongly suggest you test Fluree and see how it may work for your application's use case.
https://medium.com/hackernoon/fluree-blockchain-graphql-and-more-all-in-one-database-e17752f42da8
['Chris Chinchilla']
2019-06-12 01:50:38.572000+00:00
['GraphQL', 'Blockchain', 'Open Source', 'Database']
[EBOOK]-Figuring Out Fibromyalgia: Current Science and the Most Effective Treatments
DOWNLOAD>>>>>> http://co.readingbooks.host??book=0982833970 Figuring Out Fibromyalgia: Current Science and the Most Effective Treatments READ>>>>> http://co.readingbooks.host??book=0982833970 Fibromyalgia is a medical condition characterized by widespread muscle pain and fatigue that affects 6–10 million people in the United States. Huge progress in research over the past decade has established dysfunction in sleep, pain, and the stress response in fibromyalgia. Current research suggests that the muscle pain of fibromyalgia may be generated from the fascia, the connective tissue surrounding each muscle of the body. As medical understanding of fibromyalgia has increased, so have our treatment options. With the unique perspective of a physician studying fibromyalgia "from the inside," Dr. Liptan explains the most up-to-date science and guides you to the most effective treatments from both conventional and alternative medicine. ABOUT THE AUTHOR: Ginevra Liptan, MD, is a graduate of Tufts University School of Medicine, and is board-certified in internal medicine. After developing fibromyalgia as a medical student, she spent many years using herself as a guinea pig in a search for effective treatments. She is now medical director of the Frida Center for Fibromyalgia, and an associate professor at Oregon Health and Science University. Books are a valuable source of knowledge that affects society in different ways. Whether you are reading a masterpiece by an award-winning author or narrating a bedtime story to children, the significance of books cannot be overemphasized. Human beings need to learn and stay informed, which are crucial needs that books can fulfill. They are also essential for entertainment and enable individuals to develop wholesome mindsets throughout their lives. Millions of books have been published over the years and they continue to be an integral aspect of people's lives around the globe.
From making it easier to understand different aspects of life to serving as worthwhile companions that take you through challenging times, books have proven to be precious commodities. Books are essential in a variety of ways that go beyond enriching your mind or entertaining you. They have stood the test of time as reliable references for centuries. They stimulate your senses and promote good mental health. Other benefits include enhancing your vocabulary, allowing you to travel through words, and inspiring positivity through motivational literature. While the internet and television are useful in their own ways, nothing can compare to a great book. Books ignite your imagination, give you new ideas, challenge your perspectives, provide solutions, and share wisdom. At every stage of your existence, you can find a relevant book that will add value to your personal and professional life. Books are filled with knowledge and they teach you valuable lessons about life. They give you insight into how to navigate aspects of fear, love, challenges, and virtually every part of life. Books have been in existence since time immemorial and they hold secrets of the past while providing a glimpse of the cultures of previous civilizations. A book has the power to change or reinforce how you feel about your surroundings. It is a therapeutic resource that can equip you with the tools you need to stay on track and maintain a good attitude. Whether you want to learn a new language or delve into the intrigues of nature, there is a book for every situation. There are numerous reasons why books are important. Reading books is a popular hobby as people around the world rely on them for relief and entertainment. Books contain records of history and are used to spread vital information. Reading books helps to improve your communication skills and learn new things. It can be useful for easing anxiety among students and professionals.
Other reasons that highlight the importance of books include their positive impact on intelligence, writing ability, and analytical skills. Books give people a great way to escape into another dimension. They are packed with endless possibilities for adventure and experiences that would be difficult to access in reality. It is essential for people to strive to include books in their daily lives aside from using them for academic or professional purposes. They aid emotional and mental growth, boost confidence, and sharpen your memory. It is natural for people to be curious and want to learn more, which is why books are still significant today.
https://medium.com/@fennema4452/ebook-figuring-out-fibromyalgia-current-science-and-the-most-effective-treatments-221dc8e677a0
[]
2021-12-13 02:03:36.033000+00:00
['Download', 'Reading', 'Book', 'Read', 'eBook']
How to Avoid a Mountain Lion Attack
Whether you call them puma, cougar, mountain lion, catamount, mountain screamer, panther, ghost cat, or one of its 40-odd other names, the species Puma concolor ranges across the Americas, from the west coast of Canada down to the southern regions of Argentina. Even with such a wide range, these cats are rarely ever seen. They are secretive, and you are incredibly lucky to ever see one; chances are the cat will have seen you long before you get a glimpse of it. While they are rare and elusive cats, as humans encroach on their territory, human-wildlife conflict grows. They will take dogs left outside at night and have attacked hikers. These cats are not evil; this is a result of us encroaching on their land. So what should you do if a cat starts approaching you? How can you avoid becoming a target? There are several steps you can take to avoid becoming a target, and actions you can take if you come face to face with one of the most successful predators in all of the Americas. Preventing the attack: Mountain lions will attack for two reasons. They may attack if they feel you are a threat to their territory, their kill, or their cubs. A mother mountain lion will defend her cubs with her life, and if she feels that you are a threat she will neutralize that threat as soon as possible. They may also attack if they see you as a meal. There are ways to prevent both types of attack. First, when in mountain lion territory, make yourself a less appealing meal. Mountain lions size up their meals from the shadows; a cat will judge whether you are worth the potential risk of a hunt. A good way to avoid this is to travel with a buddy and not separate: predators like to pick targets that are alone or have strayed away from the group. Also try to avoid hiking during their feeding times; they mainly hunt at dawn and dusk, when the cat will be out stalking prey.
While the chances are low that one will find you and start sizing you up, if you are going alone, avoid going at dawn or dusk. Always carry a hiking pole or another device that can be used as a weapon; not only will it help protect you from an aggressive mountain lion, but if the cat starts approaching you, it can help make you look bigger. As for territorial disputes, simply avoid areas where mountain lions are known to keep their cubs. If you see any signs of mountain lion activity near a cave or den, back up and avoid going near it again. What if you see the mountain lion? This is what a mountain lion will generally look like when stalking prey, except it would be behind cover like bushes, logs, or tall grass. If somehow you have spotted a mountain lion, you need to read its behavior before you stop to take photos. If it is low to the ground in tall grass and staring at you, it is likely stalking you. If it is still moving, or sitting still in that position, it likely doesn't know you have already seen it. Make it apparent that you have seen it: cats are generally very reluctant to hunt prey that has already spotted them. You can do this by yelling in its direction and making loud noises, which will discourage further stalking. Whatever happens, never run from a mountain lion. This should be the rule for any predator, but especially for mountain lions. You cannot outrun a mountain lion; these cats can go 45–50 mph (72–80 kilometres per hour). Cats have a very strong prey drive, meaning that if you run, they automatically assume you are scared of them, and the only reasons to be scared of them are that you are weaker than them (which you generally are) or a potential prey item, so they will automatically start chasing you. If you run, you also expose the back of your neck to the cat; they often target the throat, so running gives the cat even more incentive to give chase.
If the cat is acting aggressively (hissing and growling, trying to intimidate you), simply start backing away, walking backwards with your eyes on the cat in case it charges. If it is not doing either of these things, you may stop and take pictures of an animal you may never see in the wild again, then quietly back away; but no matter what, do not take your eyes off the cat, and always walk backwards. Mountain lions are solitary and have massive territories (males can hold territories upwards of around 390 square miles), so you are likely seeing the only one in the area unless it is a mother with cubs. No matter what, do not give the cat an opportunity to get at your neck, so do not crouch down or bend over; it makes you an easy target. If it advances, what do you do? If the cat advances, make yourself look as big as possible. You want to look intimidating. While still backing away, put your child on your shoulders, or anyone in the group you can carry on your shoulders. Grab branches and sticks and wave them around while yelling. It may sound dumb, but mountain lions are intelligent: when stalking prey, they assess whether it is worth the risk to attempt a kill. The bigger you look, the less likely it is that the cat continues its advance. If it keeps advancing and gets far too close, start throwing rocks, logs, branches, pebbles, sand, and any other small objects you have at the cat. Just be careful not to bend over or crouch at all; grab things you can reach without bending down. If you are hiking, grab a branch, or if you have a rock face to one side, grab a loose rock. This should discourage it and warn it that you are not going down without a fight. If it does attack, prepare to fight back. Aim for its eyes, whiskers, ears, and nose. If it attempts to restrain you, kick it while covering your neck with your hands. 
Remember, its target is your neck, and if it bites you there you will likely die. Mountain lions are powerful, but after sustaining a few injuries they will realize it is not worth it and retreat. After the mountain lion is fought off, go to a hospital immediately to get your wounds treated.
https://medium.com/@rayyanibrahim16/how-to-avoid-a-mountain-lion-attack-fc51b3d973ba
['Rayyan Ibrahim']
2020-12-12 23:23:42.181000+00:00
['Survival', 'Hiking', 'Wildlife', 'Mountain Lion', 'Outdoors']
Happiness Has Left Us
Happiness Has Left Us Photo by Arnaud Mesureur on Unsplash Happiness has left us, In this era of technology Everything is artificial and robotic, This modern world is so tiresome and monotonous. Trees are being cut down everywhere, Because of technology and urbanization; Seems like we forgot how trees bring us happiness. Trees are silent while they are alive, They are beautiful by nature, Trees are strong, Because they can recover even after days without sun. Trees can inspire us, Because they live by giving life to others. We forgot these teachings that trees teach us, Because we don’t see enough trees in our surroundings anymore. We are so busy We don’t plant trees anymore, Our love for trees is no more That’s why happiness is nowhere to be found. We should plant more trees And feel the happiness of breathing fresh air. We should be happy Cause we can still breathe, Amidst the trees that remain in this beautiful world. O mankind! Please don’t destroy trees anymore!
https://medium.com/illumination/happiness-has-left-us-180c676d090a
['Tamjid Hossain']
2020-12-26 20:49:07.841000+00:00
['Poetry', 'Happiness', 'Life', 'Trees', 'Poetry On Medium']
Joey Krug Co-founder of Augur Explains the Value of Origin
Origin is backed by some of the most incredible people in the entire crypto space. Joey Krug co-founded Augur, the first Ethereum ICO, and is the co-investment officer at Pantera Capital, one of the earliest crypto funds. We are proud to have Joey as an advisor and Pantera as our lead investor. Check out our interview of Joey below, part of Origin’s video testimonial series. Joey talks about the problems with existing so-called peer-to-peer marketplaces and how Origin, powered by blockchain technology, can offer an alternative vision for the world where users are not unnecessarily taken advantage of. He also discusses why Pantera Capital decided to back Origin after their first meeting with the Origin team. Learn more about Origin:
https://medium.com/originprotocol/joey-krug-co-founder-of-augur-explains-the-value-of-origin-b2358d3488db
['Coleman Maher']
2020-01-17 19:39:58.291000+00:00
['Prediction Markets', 'Venture Capital', 'Videos', 'Sharing Economy', 'Cryptocurrency']
Taking Responsibility For Recycling During COVID-19 — @OlumideClimate
I’ve been staying at home for the past few weeks with very good reason: preventing the spread of COVID-19 and taking necessary precautions. During this time, we are all probably generating more waste, particularly food and beverage packaging along with PET bottles, takeout containers, and cardboard boxes. As household waste increases, it’s more important than ever to be mindful of the right things to put in your recycling bin. When it comes to recycling, it’s super important to know exactly what and how to recycle. One contaminated item in your recycling bin can ruin the entire batch, sending it to the landfill instead of the recycling centre; and, just like that, all your good recycling intentions go to waste.
https://medium.com/climatewed/taking-responsibility-for-recycling-during-covid-19-olumideclimate-333007403cb9
['Iccdi Africa']
2020-05-08 08:37:18.783000+00:00
['Recycling Services', 'Women', 'Covid 19', 'Climate Change', 'Recycling']
Remembering Wii Fit
Remembering Wii Fit The pandemic has created a dearth of workout options. Perhaps the Wii Balance Board would be handy right now? During the coronavirus pandemic, finding creative ways to get some exercise has been one of our most persistent struggles. Gyms open and then close back down. The weather allowed for outdoor activity in the summer, then put us back indoors for the winter. Yet where there’s a will to stay fit, there’s a way. Still, home equipment and workout space come in short supply for many. As a gamer, I instantly thought about how Nintendo’s now (in)famous Wii Fit would have been selling like hotcakes if released at the peak of the pandemic. As it is, Ring Fit Adventure, the company’s current workout title for the Switch, was extremely hard to come by back in the spring, with relatively little marketing compared to Wii Fit. The game embodied Nintendo’s innovative, casual-game approach as well as any other title on the shelf, and the Wii Balance Board looked like something out of a sci-fi Richard Simmons aerobics clip. Families had enormous fun with it, and the game sold millions of copies. Still, did it actually help you get any exercise? And did the negative aspects of the experience outweigh the positive ones? Time to dive deep and find out. The super wonky BMI scale The most glaringly obvious screw-up Nintendo made in developing this game was the decision to include a body mass index measurement for each player. BMI uses a ratio of weight to height to determine whether someone is underweight, overweight, or obese. Wii Fit. Source: UK Resistance. This scale has always been considered just a baseline to be taken with many grains of salt; basketball legend LeBron James has a BMI of nearly 27, a number which would push him into the “overweight” category. Anyway, getting told that you are a tub of lard by Wii Fit was a rite of passage in the gaming community in the late 2000s. 
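The LeBron figure is easy to sanity-check with a quick sketch. The height and weight used below are rough public figures, not numbers from this article:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Rough figures for LeBron James: about 2.06 m tall, about 113 kg.
print(round(bmi(113, 2.06), 1))  # roughly 26.6, i.e. "nearly 27"
```

A value between 25 and 30 lands in the standard "overweight" band, which is exactly why BMI alone is a poor fit for heavily muscled athletes.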
Some children would shrug it off and move on with their day; for others, it was devastating. Normal-sized kids were being told to drop weight they didn’t need to lose, and the fun that had filled the living room before the scale took hold was suddenly absent. Public criticism forced Nintendo to apologize for the controversy around their strategy of weighing players, but they never put a warning on the game about it, as many parents wanted them to do. All in all, this is a huge knock against the experience. If Miyamoto and company really wanted to create a realm in which fitness could be accurately simulated in the living room, they needed to come up with a better way to classify a healthy weight, or drop that part of the experience completely. Age diversity in players As people get older, bones get brittle and joints get sorer than they should after simply getting up off the couch. Fitness for this age group is always a task that requires the perfect balance of exertion and restraint. One study published in JAMA Facial Plastic Surgery found that facial fractures in adults older than 55 increased by more than 45 percent from 2011 to 2015. Older people are encouraged to exercise, but it often comes with a price. Wii Fit was perfect for grandparents and their little tykes to get a little healthy activity in without hurting themselves or being put into a dangerous situation in terms of exhaustion. The scene described above fits right in with what Nintendo’s marketing strategy has been for eons: appeal to the largest audience possible. Researcher Ayesha Afridi, among others, found in an experiment with 16 adults averaging 67 years of age that Wii Fit Plus (the original’s sequel) improved dynamic balance and mobility in this group after six weeks of using the game. 
While it may have been fair game to mock the chances of improved fitness for an adult in their 20s using this experience, it would be foolish to do the same for older groups. These low-risk activities should have been aimed even more at this demographic. As it is, Nintendo once again was able to entice people to use their products who normally wouldn’t know what a video game does or is capable of doing. Age diversity was a huge boon for Wii Fit. Wii Fit had a sequel on the Wii U console, aptly titled “Wii Fit U”. Source: Nintendo. Setting an example for other fitness games This is the game’s lasting legacy. While it is now outdated over a decade later, Wii Fit set a standard in the industry to strive for fitness and healthy content. As I already mentioned, Nintendo improved upon their own experience with Wii Fit Plus and Ring Fit Adventure. Other companies followed suit, with Ubisoft’s Xbox One exclusive Shape Up, which used the Kinect, and their multi-console hit series Just Dance. The latter’s use of rhythm and music to get you moving has been a much better strategy for long-term success than Wii Fit’s rigid dedication to replicating a traditional yoga or gym workout. The fitness genre is an evolving industry. Developers are still trying to get down just the right formula that combines a broad appeal with actual fitness results. It also calls into question the definition of a video game. Is a simulated workout actually a game? And what needs to be a part of the experience to classify it as such? Until that fine balance is reached, hardcore gamers will view fitness games as fads and gym nuts won’t take them seriously as real healthy alternatives to the health club. Wii Fit directly encouraged players to take care of their health. It demonstrated a unique avenue for video games to pursue. Source: Nintendo. Despite these questions, it’s undeniable that Wii Fit is the main reason that there is any desire for working out with a game showing you the way. 
Technology has evolved, and Nintendo was a pioneer in these innovative ideas for expanding what a video game can provide. I’d say this definitely makes the game an overall success despite its technical shortcomings. Guinea pig concepts are always going to be critiqued, but the legacy of Wii Fit should be positive. Do you remember playing Wii Fit back in the day? If so, tell me some of your best or worst experiences with the title and whether it inspired you to try any other exercise games on your home console. Thanks for reading.
https://medium.com/super-jump/remembering-wii-fit-2d4d96554d9f
['Shawn Laib']
2020-12-19 08:00:58.537000+00:00
['Gaming', 'Health', 'Features', 'Fitness', 'Videogames']
5 Simple Steps to Lose Weight
If you’re carrying a few extra pounds (I use the term loosely) like most of us, you already know all the reasons why we should lose weight: it will reduce the stress on our heart, lowering the risk of heart disease, cancer, and diabetes; we’ll feel better, look more attractive, and so on. There are 101 reasons, but it can be a real pain to finally lose weight. You push really hard; you progress a bit; you veer off; and bam, you’re heavier than when you started. It’s maddening. Enough with that hassle. I’ve put together 5 simple steps to lose weight and keep it off. These steps can be applied to anything, but here we focus on weight loss. 1. Know What You Want 2. Know Where You Are 3. Track Your Results 4. Make Heading Corrections 5. Be Responsible Now all your extra weight will effortlessly melt away. Yeah, sure! A simple 5-item list is not enough, so let me explain the intricacies of these 5 items and where the blockage typically occurs when they’re applied to weight loss. 1. Know What You Want This one is absolutely simple. Most people who want to lose weight have an idea of how many pounds they would like to lose or how they would like their body to look. I think we can safely say that if your body started looking the way you wanted it to, you would know it, even if you hadn’t fully mapped it out in detail. 2. Know Where You Are This is where the real journey begins. It is almost always overlooked and is the number one reason for the up-and-down yo-yo effect we experience when losing weight. We are not happy with how our body feels and looks, and it is painful to zoom in and get an accurate picture of where we are. Unfortunately, we have to know where we are in order to accurately judge the results we are getting. Suppose you’re on a trip to San Diego, California. You would like to start your journey from Phoenix, Arizona, but you really don’t know where you are. No problem. 
You get a map of Arizona/California and discover that a simple 6-hour drive west on Interstate 8 will get you to San Diego. 6 hours later, you have no idea where you are, and it’s definitely not sunny San Diego. You feel defeated and want to give up! Do you blame the car? The stupid maps? San Diego? Yourself? Now, what if I told you that you were actually in New York City, not Phoenix, Arizona? A 6-hour drive west from New York City will never get you to San Diego. If you had taken the time to find out precisely where you were, you could have chosen the appropriate means of getting to San Diego and had a realistic expectation of how long it would take. The same goes for weight loss and our personal fitness. In our minds, we think we are starting out in a different physical condition than we really are. When the results don’t match the illusion in our minds, we get angry. 3. Track Your Results This is conceptually simple. If we have no idea what we are doing, how do we know what is working? It may be easy to do, but it is also easy not to do. In the end, most people never keep track of their results. 4. Make Heading Corrections We love doing this one; we’ve all done it. It’s like the first step in knowing what you want. We tried the new diet, the super abs machine, and the yoga-cardio-hip-hop-power-energy fat-burning class. We may be great at changing things up, but without knowing where we’re starting from and a history of recorded results, we have no idea what we need to change or even which direction we need to go. 5. Be Responsible No, not responsibility! That is a bad word. I don’t want anyone to know what I’m doing or what I’m not doing. So why do we resist responsibility, your secret weight-loss weapon? Because it works! If you know someone is going to be watching, you will follow through, or at least feel very uncomfortable not doing so. Accountability, when used wisely, is the best turbo booster for weight loss. 
Since you will be held accountable anyway, you might as well get something out of it. By using accountability early on, you can achieve the body of your dreams. If you let your body hold you accountable instead, it will accumulate fat in places you don’t want it to be. The biggest obstacle to losing weight is not taking any action. No matter how good the plan is, it is of no use if you don’t follow it. So what do I do now? First of all, make yourself accountable. Send this article to someone you respect and care about, and tell them that you want to be held accountable for finally shedding the extra pounds. When they agree to support you, start working on Steps 1 to 5. Gustavo Woltmann.
https://medium.com/@gustavo-woltmann/5-simple-steps-to-lose-weight-2ce6a03042c2
['Gustavo Woltmann']
2020-12-26 04:59:46.914000+00:00
['Weight Loss Tips', 'Lose Weight', 'Weight Loss', 'Gustavo Woltmann']
Utility Tokens
Initial coin offerings have been revolutionizing the world of traditional investing and have brought large amounts of capital into the cryptocurrency and blockchain industries. When a project goes through with an ICO, it issues tokens that run on a blockchain to the investors who help it raise capital. It’s important to understand that not all tokens are the same, which brings us to the topic of this piece: utility tokens. Let’s go over what these are and the role they play in the cryptocurrency industry. This is not financial investment advice. This article touches on key aspects of what utility tokens are and how they function as a whole. In this article Terminology Token: Crypto tokens are a special kind of virtual currency token that resides on a blockchain and represents an asset or utility. ICO: ICOs act as fundraisers of sorts; a company looking to create a new coin, app, or service launches an ICO. Next, interested investors buy into the offering, either with fiat currency or with preexisting digital tokens like ether. In exchange for their support, investors receive a new cryptocurrency token specific to the ICO. Blockchain: The easiest way to understand blockchain is to think of it as a fully transparent and continuously updated record of the exchange of information through a network of personal computers, a system which nobody fully owns. This makes it decentralized and extremely difficult for anyone to single-handedly hack or corrupt, pretty much guaranteeing full validity and trust in each exchange of information. Crowdsale: Unlike traditional crowdfunding, a crowdsale doesn’t pre-sell a widget or promise to put your name in the credits of a movie. Instead, it sells you something that you might not know what to do with unless you are clued in: a token. Familiarize yourself with these key terms to better understand what utility tokens are and why they’re so important. What are Utility Tokens? 
Utility tokens are one type of token given out during crowdsales as a project executes an ICO. What makes a utility token special is that it represents future access to a company’s product or service, giving it some value but not guaranteeing anything. They are not meant to be used as investments, and they can be exempt from applicable federal securities laws if they’re properly set up. If a company creates a utility token, it is essentially creating a kind of digital coupon that can be redeemed in the future for special access or discounted fees for its service or product. A good analogy is a pre-order, which is usually made for products that have not yet been developed or produced. Usually there is a lot of hype surrounding the project, and utility tokens serve to facilitate the investment process early on. Utility tokens are a type of token that companies issue during crowdsales when they execute an ICO. What makes these special is that they represent access to that company’s product or service in the future. How Do They Work? Utility tokens aren’t supposed to be digital asset investments. Yet many people contribute to utility token ICOs in the hope that the value of the tokens will increase as demand for the company’s product or service increases. Based on fundamental supply-and-demand models, token price fluctuations can be compared to those of sporting event tickets. The value of a ticket to a future sporting event may increase if one or both of the teams win a significant number of games and become a contender for the championship. On the other hand, that same ticket may decrease in value if that team starts losing or no longer has a star player people want to see. Simply put, utility tokens, or any token for that matter, are not a guaranteed investment and should be understood as digital coupons that can give you access to a company’s product or service in the future. 
That being said, ICOs have been famous for producing some of the largest returns imaginable, while also being known to carry the risk of losing your entire investment, since tokens do not guarantee any returns. Tokens can be used and redeemed to access a company’s product or service in the future. They can also be sold if the company starts to gain popularity and the demand for its utility tokens goes up. Other Types of Tokens Tokens can be split into three categories: utility, security, and payment. As we already discussed, utility tokens give someone access to a company’s product or service in the future while not guaranteeing any returns on their investment. Security tokens, on the other hand, represent assets such as participation in real physical underlyings, companies, or earnings streams, or an entitlement to dividends or interest payments. In terms of their economic function, these tokens are essentially the same as equities, bonds, or derivatives. Lastly, there are payment tokens, which are intended to provide many of the same functions as long-established currencies such as the U.S. dollar, euro, or Japanese yen but do not have the backing of a government or other body. Tokens can be categorized as utility, security, or payment tokens. Each has different use cases and value. Conclusion ICOs have quickly gained legitimacy as one of the best ways for new startups to raise capital, especially in the blockchain industry. If you’re interested in investing in cryptocurrency, it’s important to know the difference between a coin and a token, which is why you should be able to tell a utility token apart from other kinds of tokens. Having a general understanding of how these assets work will suffice, but what separates a smart investor from an average one is understanding the intricate details that distinguish different tokens and coins. As always, happy investing!
https://medium.com/coinbundle/utility-tokens-978d117290cd
['Coinbundle Team']
2018-10-16 10:00:52.843000+00:00
['Beginnerscoinbundle', 'Beginner', 'Cryptocurrency', 'Token', 'Blockchain']
Break the Tension at Work with a Bit of Humor
Break the Tension at Work with a Bit of Humor In all seriousness, humor belongs in the workplace, too! Photo by Jud Mackrill on Unsplash There seems to be an unspoken rule about seriousness in the workplace that tends to suck the life out of employees: conversations are kept to a minimum, eyes are glued to the screen all day, and absolutely no laughter is heard. Contrary to the belief that the workplace should be strictly professional, I think it’s safe to say that a little humor can earn its place in the office, too. In fact, when it comes to working with others, humor is the best way to liven the mood while simultaneously boosting the morale of you and your team. With that said, consider these eight benefits of bringing (appropriate) humor into your workplace: 1. Helps Make Connections One of the easiest ways to make friends and build connections is through humor! Humor helps make you and your coworkers more relaxed and comfortable with one another. Whether you share the same sense of humor or share a joke that opens doors to further conversation, sharing a laugh is oftentimes the best way to create bonds, even in the workplace. 2. Breaks Down Barriers Humor helps bring out the humanity in even the most daunting of superiors, making them feel much more approachable. If your humor is really that good, maybe you’ll get a good laugh or two out of your boss, which can help you realize that they’ve got a good sense of humor just like you. 3. Makes Working More Fun Humor helps demolish the quiet, dreary stereotype of the workplace by filling it with laughter and lively conversation. It can make your days at work feel more exciting by giving you something to look forward to, replacing the old, monotonous workday. 4. Relieves Stress Feeling stressed out by work? It turns out laughter is a good natural stress reliever! 
Try making the best of your situation by using some humor to turn the workplace from a source of stress into a place where you can de-stress with a bit of laughter. 5. Creates a Relaxed Atmosphere Slipping a bit of humor into your workspace helps alleviate the tension in the room. It dissipates the expectation of pristine perfection and rectitude, replacing it with slightly more casual jokes and conversations. 6. Improves Customer Relations Having some humor in the workplace, and in how your workplace approaches customers, helps strengthen relationships by making you, as a business, appear much more approachable and trustworthy. Don’t hesitate to foster good relations with good humor! 7. Sparks Creativity With something as spontaneous as humor, it’s no wonder that it helps foster the creative thinking process. When thinking of things in a lighthearted way, your brain has so many new avenues to explore compared to banal, straightforward thinking. 8. Boosts Productivity Taking all these benefits into consideration, having humor in the workplace does wonders for building a sense of community and, thus, boosting productivity. Once we are able to feel comfortable and relaxed at work, we can work more freely, with fewer worries about judgment and less stress. All in all, a lively office gives its workers the potential for growth and happiness. Yes, respect is part of the job, but don’t let that cage your friendly and funny nature. The livelier the office, the more likely our happiness will improve as well, so don’t be afraid to be friendly and funny, yet respectful, at work!
https://medium.com/joincurio/break-the-tension-at-work-with-a-bit-of-humor-2877bd884c1f
['Madison Estrella']
2020-08-24 17:43:38.904000+00:00
['Workplace', 'Humor', 'Work', 'Relationships', 'Creativity']
ABOUT SLEEP
Neuro-signaling chemicals called neurotransmitters control whether we are asleep or awake by acting on different groups of nerve cells, or neurons, in the brain. Neurons in the brainstem, which connects the brain with the spinal cord, produce neurotransmitters such as serotonin and norepinephrine that keep parts of the brain active while we are awake. When we are asleep, other neurons at the base of the brain begin signaling. Research also shows that while we are awake, a chemical called adenosine builds up in our blood and causes drowsiness; this chemical breaks down slowly during sleep. Because sleep and wakefulness are influenced by different neurotransmitter signals in the brain, foods and medications that change the balance of these signals affect whether we feel alert or drowsy and how well we sleep. Caffeinated beverages such as coffee, and drugs such as diet pills and decongestants, can stimulate parts of the brain and cause sleeplessness, or insomnia. Many antidepressants suppress REM sleep. Heavy smokers often sleep very lightly and have reduced amounts of REM sleep; they may also wake after only 3 or 4 hours of sleep due to nicotine withdrawal. Many people who suffer from insomnia try to solve the problem with alcohol. Alcohol does help people fall into light sleep more easily, but it also robs them of REM and the deeper stages of sleep; instead, it keeps them in the light stages of sleep, from which they wake easily. The amount of sleep each person needs depends on many factors, including age. Babies generally need 16 hours a day, and teenagers need an average of 9 hours. Although some people need as few as 5 hours or as many as 10 hours of sleep each night, for most adults 7 to 8 hours a night is best. During the first 3 months of pregnancy, women often need several more hours of sleep than usual. People sleep more lightly and for shorter periods as they get older, although they generally need about as much sleep as they did in early adulthood. 
About half of all people over the age of 65 have frequent sleep problems, such as insomnia, and in many older people the deep-sleep stages often become very short or stop completely. This change may be a normal part of aging, or it may result from medical problems that are common in the elderly and from the medications and other treatments for those problems. The amount of sleep a person needs also increases if they have been deprived of sleep in previous days. Getting too little sleep creates a “sleep debt,” which is much like being overdrawn at a bank: eventually, your body will demand that the debt be repaid. We don’t seem to adapt to getting less sleep than we need; even as we get used to a sleep-depriving schedule, our judgment, reaction times, and other functions are still impaired. Although scientists are still trying to figure out exactly why people need sleep, animal studies show that sleep is necessary for survival. For example, while rats normally live two to three years, rats deprived of REM sleep survive an average of only 5 weeks, and rats deprived of all sleep live only about 3 weeks. Sleep-deprived rats also develop abnormally low body temperatures and sores on their tails and paws. The sores develop because the rats’ immune systems become impaired; indeed, some studies suggest that lack of sleep adversely affects the immune system. To maintain the functioning of the nervous system Sleep is essential for our nervous system to function properly. Too little sleep leaves us drowsy and unable to concentrate the next day. It also leads to impaired memory and physical performance and a reduced ability to do math calculations. If sleep deprivation continues, hallucinations and mood swings may develop. Some experts believe that sleep gives the neurons used while we are awake a chance to shut down and repair themselves. Without sleep, neurons may become so depleted of energy or so polluted with byproducts of normal cellular activity that they begin to malfunction. 
Sleep may also give the brain a chance to exercise important neural connections that might otherwise deteriorate from lack of activity. To help us grow Deep sleep coincides with the release of growth hormone in children and young adults. Many of the body’s cells also show increased production and reduced breakdown of proteins during deep sleep. Since proteins are the building blocks needed for cell growth and for repairing damage from factors like stress and ultraviolet rays, deep sleep may truly be “beauty sleep.” During deep sleep, activity in the parts of the brain that control emotions, decision-making processes, and social interactions is drastically reduced, suggesting that this type of sleep may help people maintain proper emotional and social functioning while they are awake. A study in rats showed that certain nerve-signaling patterns that the rats generated during the day were repeated during deep sleep; this pattern repetition may help encode memories and improve learning. Sleep and mental health Sleep and sleep-related problems play a role in a large number of human disorders and affect almost every field of medicine. For example, problems such as stroke and asthma attacks are more likely to occur at night and in the early morning, possibly due to changes in hormones, heart rate, and other characteristics associated with sleep. Sleep also affects some types of seizures in complex ways, and sleep deprivation can trigger seizures in people with certain types of epilepsy. Nearly everyone with a mental disorder, including depression and schizophrenia, has sleep problems. For example, people with depression may wake in the early hours of the morning and find themselves unable to get back to sleep. The amount of sleep a person gets can strongly influence the symptoms of mental disorders. Sleep deprivation is an effective treatment for some types of depression, but it can also cause depression in other people. 
Extreme sleep deprivation can lead to a seemingly psychotic state of paranoia and hallucinations in otherwise healthy people, and disrupted sleep can trigger episodes of mania (agitation and hyperactivity) in people with manic depression.
https://medium.com/@adhikoli2010/about-sleep-713cd822b393
['Adhiraj Koli']
2020-12-24 06:04:09.648000+00:00
['Sleep', 'Healthcare', 'Better Sleep At Night', 'Health', 'Good Sleep']
Over time, the right people will find you. They’ll present themselves to you, becoming more evident and visible throughout your life journey. The higher power will place the right ones in your path…
Photo by Hassan OUAJBIR via Unsplash “Everything works itself out in the end. If it didn’t work out, it’s not the end!” Tina Joy — The Business Guru Over time, the right people will find you. They’ll present themselves to you, becoming more evident and visible throughout your life journey. The higher power will place the right ones in your path, for you to take notice of and discover. Don’t worry. Stress less. Pay attention. Believe and you shall receive. Let life play out in your favor, the way it should!
https://medium.com/mega-mantras/tina-joy-the-business-guru-8323ac458b3
['Tina Joy']
2020-12-22 02:18:28.906000+00:00
['Purpose', 'Believe In Yourself', 'Mega Mantras', 'Monday Motivation', 'Inspiration']
Using And Getting Results Of Facebook Hashtags In 3 Steps.
Photo by Solen Feyissa on Unsplash The hashtag is one of the common ways you can make your content more engaging and promote it for free. Let’s find out more about how to use Facebook hashtags for your brand and content. On Facebook, too, we can add hashtags to promote our product or service and make it stand out in people’s eyes. Some brands ignore them and go straight to paid advertisements, while other brands and users use both. That’s fine, because Facebook ads have more options for interacting with users than hashtags do, but they also cost money. Using hashtags, by contrast, is a cost-free method to promote your content to a wide audience. Read Next: Using the Instagram Hashtags More Suitable Way. Effect Of Hashtags Using hashtags in your Facebook content makes it more searchable. If someone searches for a hashtag (e.g. #Marketing) and your content is included under that hashtag, you have a chance to appear in those search results. But your post needs to be recent: within a hashtag, the most recent items are displayed on top. Compared to a normal Facebook post, posts that include hashtags have a better chance of engaging the audience. As we said before, users can search hashtags as well as save them, so you have a chance of being found whenever they search. What You Need To Do When Using Hashtags. You can add your hashtags using the “#” symbol, for example #Love. Don’t add any spaces or other symbols when writing a hashtag. Use a mix of lowercase and capital letters to keep multi-word hashtags readable (#ContentMarketing). Try to include popular and branded hashtags where they fit, but don’t overthink it. You need to add hashtags that are relevant to your content; adding irrelevant hashtags can decrease your user engagement. Best Hashtags.com Most Popular Hashtags A single tag is better, but adding several won’t be a big deal. Add only a few relevant hashtags so they don’t make your reader uncomfortable.
Use special days and events as your hashtags when you create your video or post. For example, if you give your customers an offer during the Black Friday season, add a hashtag like #Blackfriday or #BlackfridayOffer. Use actionable hashtags (Don’t, Do, Method, etc.), depending on what you need to say or explain through your content. As an example, if you create a blog post about “Things you need to avoid while using a smartphone,” you could use #Dont in your promotion post on Facebook. Finally, be mindful with hashtags when you repost and schedule your Facebook posts, because a hashtag’s value can change depending on the time and the occasion. For example, in November and December adding the hashtag #Christmas may be useful for some content, but after January there is no point in adding this tag to your content unless your post is related to Christmas. After Adding Facebook Hashtags. We add hashtags with a purpose, and now it’s time to check whether they fulfill that purpose. It is better if you can track your hashtags and do a little research into how they work with your content. Based on those results, you can decide what needs to change and how. You can use a tool like Hashtagify.me to find relevant and trending hashtags for your content. After deciding on and adding your hashtags, you need to track them and measure your hashtag activity. Sprout Social can help you track these hashtags on your social media accounts and show how they perform. Finally,
https://medium.com/@Dexter-D/using-and-getting-results-of-facebook-hashtags-in-3-steps-3fab7889141b
[]
2021-11-27 10:09:50.402000+00:00
['Hashtag', 'Facebook Marketing']
My Publish0x stats and why you should consider Joining.
30 days statistics overview Alright, I’ll do a very quick one. If you are reading this post on Publish0x then it’s definitely a lazy repost and you can ignore it — you’re safe already…lol. If you’re reading this on one of my other blogs, then this is specifically for you. Curiosity landed me on Publish0x and it’s been a very cool experience since then. Despite joining in the first quarter of 2019, I actually published my first article on Publish0x in the last quarter of 2019 and started writing frequently from the first quarter of 2020. For knowledge’s sake: Publish0x is a sleek blogging platform built to serve curious minds. I’ve seen a couple of articles call it ‘an alternative to Medium’, but this is not really accurate. Despite a few similarities, Publish0x is very much different from Medium and presents its own unique features, which make it an amazing platform. It is arguably the best platform for cryptocurrency bloggers who hunger for visibility, untamed engagement and a simplified reward scheme. Although cryptocurrency bloggers dominate the platform, Publish0x is meant for all categories of blogging, and all other categories get good attention on the platform too. Publish0x provides a good environment for both individual writers and project teams. With an appropriate tagging system, articles are categorized to simplify readers’ surfing. Don’t take more than 20%, lol Publish0x rewards you for reading as well as writing on the platform. Users earn cryptocurrency tips for their efforts, and readers who tip a post also earn rewards in cryptocurrencies. A couple of cryptocurrencies have been integrated on the platform, and users can currently earn rewards in Ethereum, Ampleforth, and Basic Attention Token (BAT). Loopring, DAI, Bounty0x and Hydro Protocol have been integrated on the platform in the past. My earnings breakdown as of 4th December 2020 In contrast to Medium, Publish0x gives writers very good visibility and makes it easier for them to gain a sizeable audience on the platform.
Despite being centralized, its administrators are very professional and hospitable. Publish0x provides writers with great infrastructure to work with, a sleek user interface and a smooth user experience. Users are allowed to modify their experience to suit their preferences. Yeah, follow me when you get there! Frequent writing contests with great rewards, writers’ promotion, consistency… Publish0x gives you almost everything on a platter. It’s been an amazing experience and I wish to share these experiences. If that entices you, then you should consider getting a Publish0x account and extending your writing adventure. Ready? Click here to get a Publish0x account
https://medium.com/@joelagbo/my-publish0x-stats-and-why-you-should-join-d653ef82e90a
['Agbo Joel']
2020-12-21 02:57:11.924000+00:00
['Publishing', 'Cryptocurrency', 'Writing Life', 'Publish0x', 'Writing']
“Portrait of a Lady on Fire” Restored My Faith in Love Stories
I’m not one for typical love stories. The insipid platitudes, the over-the-top drama, and the inevitable end where everything works out perfectly all feel like a boring waste of time. Certainly, part of it is my own bitterness. My love and my relationships will never be celebrated or respected in the same way as these love stories, because dominant narratives don’t have room for me. I’m not going to have hundreds (if not thousands) of dollars thrown at me for getting married, because I’m estranged from nearly my entire family for being queer and trans. It shouldn’t come as a surprise that queer love stories mean so much more to me than mainstream dramas like The Notebook. While I like looking at Ryan Gosling for hours just as much as the next dude, the storyline isn’t one that I see myself in. The same is true about gay romance movies like Happiest Season starring Kristen Stewart and Mackenzie Davis, where the only issue is that someone had a hard time coming out — and once they did, everything was magically wonderful. The feel of that movie was almost identical to a “straight” love story with the neat bow that tied up the conclusion, and everyone was fine and great and lived happily ever after. By contrast, Portrait of a Lady on Fire, a period drama set in the 1770s, is a queer love story that is as enthralling as it is heartbreaking, not to mention one of the most beautiful films I’ve ever seen. Best of all, the film’s director, Céline Sciamma, made the intentional decision to not spend half of the narrative having the leads, Héloïse and Marianne, obsessing about how to come out or what it meant to be queer (unlike Happiest Season). They simply existed together as lovers for just a few days, falling for each other hard despite knowing that they could never be together. It was love for love’s sake — intoxicating, consuming, pure and unrestrained.
The actors playing the two leads, Adèle Haenel (Héloïse) and Noémie Merlant (Marianne), are actually queer women. This is an oddity in a world where mainstream entertainment will do anything it can to profit off of queer lives and stories while making sure that cisgender and heterosexual people stay squarely in the middle of the narrative. To add to the authenticity of the film, Adèle Haenel is Sciamma’s ex-girlfriend in real life. Sciamma, Merlant and Haenel attend NEON Celebrates Portrait of a Lady on Fire and Parasite — TIFF 2019 at Soho House Toronto in Canada. Getty Images, North America. It was evident that queer minds created this story and brought it to life. The romance wasn’t ham fisted or forced. There was no suitor who came back again and again despite the object of his affection rebuffing him. Instead, there were lingering, meaningful looks. There was genuine interest to understand one another. And there were both subtle and intense ways that Marianne and Héloïse expressed their devotion. Even the sex scenes invited the casually intimate; the two women weren’t concerned about their hair looking perfect or proving that they were good in bed. While one scene featuring a close-up of a recreational substance being rubbed into Marianne’s armpit was pretty cheesy, there was also an element of vulnerability present in that moment that represented a theme that was woven throughout the film. Initially, Marianne followed the instructions of Héloïse’s mother to pretend she was merely a companion while gathering all the visual information she needed to compose Héloïse’s portrait. Héloïse refused to sit for painters in the past as a protest to the life that would be forced upon her — marrying a man chosen by her family for reasons of status. Finally, Marianne revealed her motives and showed Héloïse the portrait she was creating. Héloïse felt deep pain, asking “Is this how you see me?” Marianne, overwhelmed with defeat and frustration, ruined the painting. 
Héloïse convinced her upset mother to give Marianne another chance. This time, she would sit for the portrait while her mother left their homestead for five days. Adèle has publicly opposed the trope of the passive muse, and this stereotypical concept was turned on its head in the film. What does it say about a muse when the artist who finds her so inspiring is also a woman? When the exploitation of the muse is revealed and that muse dares to critique the piece her image created, what does that intentional act mean? Furthermore, when Héloïse acquiesced to sitting for the portrait that Marianne was tasked to paint in secret, how does that relate to her small opportunity for agency in a lifetime of coercion? Toward the end of their time together, Marianne lashed out with frustration and pain that Héloïse was not fighting the upcoming nuptials. Upon finding out that Héloïse’s mother was returning, Marianne raised her voice and projected her heartbreak onto her. Héloïse fled to the beach on which they spent so much time together. Marianne ran up to Héloïse from behind, embracing her and sobbing into her neck. “Forgive me,” she cried. “Forgive me.” Still from “Portrait of a Lady on Fire,” Lilies Films, fair use Marianne and Héloïse were not given a proper opportunity to say goodbye, as Héloïse’s mother was hurrying her into her new husband’s arms. Marianne ran down the stairs of the home before Héloïse appeared behind her. “Turn around,” she said. She was referencing the myth of Orpheus that came up during a game of cards one night. In this story, Orpheus (son of a muse) married his love, Eurydice, who died soon after from a snake bite. Orpheus traveled to the land of the dead to ask Hades to allow him to bring her back. Hades agreed, on one condition: they would not be permitted to look back. Once he saw the sun again, Orpheus turned around to share his delight with Eurydice, breaking his promise to Hades. Eurydice disappeared.
During their conversation over cards, Marianne was baffled at Orpheus’ decision to look back. Héloïse posed the possibility that it was Eurydice who told him to turn around. In the stilted farewell between Héloïse and Marianne, there was no other option besides Héloïse disappearing. Perhaps it felt like they had one last moment of agency, a final opportunity to rebel against all that was keeping them away from each other. I loved this film because it invited us to get swept away in Marianne and Héloïse’s whirlwind romance, to feel the devastation of the moment when it was stolen from them. As the saying goes, “it’s better to have loved and lost than to have never loved at all.” American culture is often obsessed with the idea of a happy ending, and I found it so refreshing that this French film focused on the pureness of love and its inherent worth — what one may be willing to endure just to luxuriate in the incomparable feeling of love. Still from “Call Me By Your Name,” Sony Pictures, fair use The mood of the final shot of this film felt eerily similar to the last scene of Call Me By Your Name when Elio, portrayed by Timothée Chalamet, quietly cried in front of the fireplace after receiving news that his fleeting summer love was engaged to be married. Elio’s face was the picture of classic heartbreak, raw and shocked, inconsolable and numb. Sufjan Stevens’ “Visions of Gideon” played softly against the crackling of the fire and the footsteps of Elio’s mother setting the table behind him. In the final shot of Portrait of a Lady on Fire, Héloïse was watching an orchestral performance, unaccompanied by the man that she had been forced to marry. The orchestra began to play Vivaldi’s “The Four Seasons — Summer in G Minor,” a song she recognized from a conversation she had had with Marianne years earlier. When Marianne had played an amateur version of the song on a harpsichord, Héloïse smirked and couldn’t take her eyes off of her. 
Héloïse’s facial expressions as she experienced the music were more dynamic and fluid than Elio’s. You could see the entire love affair flash across her face. She had brief moments of laughter, held her breath in grief, cried at what she would never get to have with Marianne. The complex layers, intensity, passion and depth of their love were truly reflected in the way she experienced the piece of music. I couldn’t help but cry myself while watching her travel through every heavy feeling she had, as if her unled life was flashing before her eyes. The song ended abruptly with an epic undertone that lingered long after the musicians set down their instruments. Marianne spotted Héloïse from across the theater, watching her as her chest heaved and tears ran down her face. Marianne had already seen a new portrait of Héloïse, holding hands with a little girl. In the painting, Héloïse held a book bookmarked by her finger — the page with a drawing of Marianne in bed. Héloïse did not notice Marianne in the theater. The two never saw each other again, but several years later, Marianne’s painting called Portrait of a Lady on Fire was revealed when she was working with a class of women who were perfecting their painting skills. It referenced the night on the beach during which it became clear that the two were enamored with one another, the night that would change their lives forever. The story as told through the painting highlighted the idea that both Héloïse and Marianne would carry their love in their hearts for as long as they lived. These women did not have the option to live happily ever after with one another, and they knew that from the beginning of their love affair. Each night that they lay together, watched the ocean while sitting on the beach, and had dinner together, they knew their time together was passing swiftly. Even when anticipating deep pain, they focused on each other and the way they felt together.
I cried thinking of the life that they could have had if they were in a different time or place. I thought about Héloïse being forced to spend her days with a man she didn’t love, perhaps never having a romantic or sexual relationship with a woman again. And I thought about Marianne, whose life circumstances could have allowed her to be with Héloïse, and who was tragically unable to break Héloïse out of the prison of her life’s circumstances. Héloïse and Marianne never forgot about each other. Their pain was difficult to bear, and they had to move on with their lives. Portrait of a Lady on Fire was tragic, unforgettable, and has taken up station in my mind ever since I watched it. Their story is about living when one’s heart aches and cries out; it’s about the writhing and timeless pain that underscores life’s struggles; and it’s knowing that you can’t always get what you want, or even deserve. This was the queer love story I’d been searching for, even when it ended in a way that tore me apart.
https://medium.com/prismnpen/portrait-of-a-lady-on-fire-restored-my-faith-in-love-stories-d20871cb4f88
['The Transgender Therapist']
2020-12-30 13:14:54.605000+00:00
['LGBTQ', 'Love', 'Review', 'Opinion', 'Film']
5 Best Productivity Apps For 2021
5 Best Productivity Apps For 2021 In this day and age, we are surrounded by technology but sometimes it is not used to its fullest potential. There are so many great ways that devices like phones and iPads can be used to improve your productivity. In almost all app stores there are many productivity apps but knowing which ones are the best and will help you the most can become a challenge. In this article, we are going to look at the 5 best productivity apps that you can use to boost your productivity in 2021! Photo by Andreas Klassen on Unsplash 1. Notion Notion is an all-in-one productivity tool that will fit into almost anyone’s workflow whether you are a student, a business owner or you work a 9–5 job. Notion is at the top of our list of productivity apps because it has everything that you could possibly need all in one place and there is a great free plan if you just want to try it out. Notion has many great tools that can increase your productivity. You may currently have many different productivity apps like a calendar, todo list, note-taking app and many more but Notion has all of these plus more built-in. A Notion workspace is fully customisable and comes with many free templates that you can use to get started. There are way too many features that Notion offers for us to cover in this article but think of it this way: any productivity apps that you currently use probably have the same functionality that Notion has, just all in one space that you can access from any device. Notion has four different plans that you can choose from: Personal — Free Forever Personal Pro — $4 per month (Billed Yearly) Team — $8 Per Month (Billed Yearly) Enterprise — Custom Pricing As you can see, Notion is an incredibly cheap productivity app that has all the features that you could possibly need. One thing to watch out for in Notion is that there is a larger learning curve than other platforms because there is so much that you can do with it.
If you spend a few hours in Notion, you will have no problem getting used to the software and it can therefore become your all-in-one productivity app! Photo by Sigmund on Unsplash 2. Microsoft OneNote OneNote is an excellent note-taking app by Microsoft that is 100% free forever so anyone can access it. OneNote is a great productivity app because it is simple and has many useful but simple features that anyone can use. OneNote has a very small learning curve making it great for people who need to quickly set up and integrate an app into their workflow. I know so many students that like to colour code their notes and OneNote is perfect for those people. They have so many different colours that you can use in your notebooks. There are so many different options that they have to design your notes. Because it is free, all you need to get started is a Microsoft account and you are good to go. All your notes are saved and synced to the cloud so you can access them and edit them from all of your devices. If you need to transfer your notes to another program like Word or Google Docs then you can export all your notes to multiple different formats instead of copying and pasting everything. If you want a simple note-taking app then you have to check out OneNote. Its simple user interface and small learning curve allow anyone to create detailed and comprehensive notes. OneNote is one of the best productivity apps because anyone can get started for free and it is packed full of features. Photo by Kaleidico on Unsplash 3. Google Calendar There is no doubt that you have heard of Google, in fact, you probably use Google as your main search engine. You most likely also have a Google account because there is so much that you can do with it. Everyone that has a Google account also gets access to Google Calendar which is a great multi-device calendar as well as a great productivity app.
Google Calendar has many features that you can use but if you just want a basic calendar then you can do that. If you need a productivity app that can give you many alerts in advance, be colour coded or have many different settings that can be personalised for each event then Google Calendar is for you. As well as having a fully-featured calendar, you also get a todo list and goal builder built right in. The reminder section can be very useful but I don’t think that it will become your main reminders app as it doesn’t have enough features. However, the calendar makes up for this as it has everything that you could possibly need. I have used Google Calendar for the longest time and I can wholeheartedly say that this is probably one of my favourite productivity apps of all time. I really enjoy the way that the app is laid out and I think that this app will work well for anyone, whether you are a student, a business owner or someone who needs an app to become more organised. The app is free forever and all you need is a Google account. All your events will be synced through the cloud and will immediately appear on all of your devices. This is by far the easiest and simplest calendar that you can use and that is why it is one of the best productivity apps for 2021! Photo by Waldemar Brandt on Unsplash 4. LastPass LastPass is an all-in-one password storage solution that is extremely secure and trustworthy. I have been using LastPass for over 3 years now and I can tell you that it is more than just a password storage solution, it has many extra features that turn it into a productivity app. The chances are that you are signed up to many different websites and you are also probably using the same password everywhere, which is very insecure. This can become a problem because while remembering one password isn’t a problem, it means that if your password is leaked then anyone can have access to all of your accounts.
LastPass makes this easy as it has a built-in password generator that will create a very strong password for you that will be hard to remember and hard to breach. LastPass will then store those passwords for you so that you never have to remember a password again! They also offer a very useful app and extension that will auto-fill passwords for you. If you are on a phone or tablet then it will auto-fill by using your face or fingerprint ID and if you are on a computer it will auto-fill without a password or authentication. LastPass has many security features that you can use to make your account very secure, including 2-factor authentication. If you decide to upgrade to the paid plan you can get more features like dark web monitoring. I have always used the free version of LastPass but there is a paid version that costs around $50 per year that will get you more security features. I think that most people will be fine with the free version, as the extra features in the premium version are quite advanced so most people won’t use them. I have put LastPass on this list because it can save you a lot of time and headaches: you don’t have to remember any more passwords and LastPass will auto-fill them for you. This in my opinion makes LastPass a great productivity app for 2021! Photo by Micah Williams on Unsplash 5. Google Drive Google Drive is Google’s all-in-one cloud storage solution that is not only cheap but also easy to use. Anyone with a Google account gets 15GB of free storage that has no limits on it and if you need more then you can pay a very small amount to get an extra 100GB or all the way up to many terabytes. Google Drive is a great solution if you are constantly running out of space on your computer or phone because you can keep expanding when you need to and it is backed up to the cloud so you won’t lose your data.
You mainly access this service through your web browser but there is also an app for iOS and Android, and it can integrate with Windows File Explorer and Mac Finder. It is the most versatile cloud solution because it supports almost all file formats and there are extra add-ons that you can get to improve your overall experience. Google Drive also works great with all the Google services like Docs and Sheets because it can do things like autosave so you never have to save a document again! You can also make backups of your various devices if you don’t want to use the standard one that comes with them or if you want an extra backup. I think that everyone should be using Google Drive because it is very inexpensive and it is packed full of features that anyone can use. No matter who you are, I can guarantee that you have a use for Google Drive unless you are already using another cloud storage solution. Because you don’t have to save documents and everything is very easy to access, I have put this on this list of the best productivity apps for 2021! Photo by Elle Cartier on Unsplash Conclusion If you are looking to become more productive in 2021 then there are loads of apps that can help you get there. Some apps only do one thing but that one thing could end up saving you a lot of time in the long run.
https://medium.com/illumination/5-best-productivity-apps-for-2021-be8fb3764331
['Ewan Mcbride']
2020-12-23 09:05:21.417000+00:00
['Self Improvement', 'Productivity', 'Entrepreneurship', 'Business', 'Self Development']
Getting started with PostCSS: A quick guide for Sass users
You may have heard about PostCSS and how it is almost 2x faster than libsass (and 28x faster than Ruby Sass); or about its cssnext and CSS Modules support and extensible functionality. But have you had a chance to try it out? PostCSS’s biggest strength — its modularity and plugin-oriented architecture—is also a downside. If you have been using Sass (which is the majority of designers and front-end developers) for your projects, you never had to configure anything—Sass comes with all functionality included, out-of-the-box. PostCSS, however, requires you to do some work. You have to choose from a seemingly endless list of plugins and put all the pieces together yourself. This guide provides (what I think is) a good base configuration for Sass users, so you can easily try out PostCSS and dive into the details later. Hope you find it useful. For suggestions and comments, please drop a tweet to @svileng — thanks! Note: there are PostCSS projects that attempt to give you Sass-like functionality in a single plugin. I would personally avoid them and pick the plugins individually when I need a specific feature—this gives you more flexibility, and you can also use some new plugins that are even more powerful than their Sass equivalents. Running PostCSS There are a number of ways to run PostCSS. You can easily plug it into your Gulp or Webpack build process; for this guide, however, to keep things as simple as possible, we’re going to use PostCSS’s CLI. The majority of people would probably install it globally like so: npm install -g postcss-cli However, I recommend installing the runnable locally, so that it resides within the project you’re working on: npm install --save-dev postcss-cli And run it like so (from the main project directory): ./node_modules/.bin/postcss [options] I find this approach better than managing versions of globally installed modules across projects.
To make it even easier, you can add the following line to your “scripts” section in package.json: { "name": "mysite", "version": "0.0.0", "private": true, "scripts": { "start": "node app.js", "postcss": "postcss --config postcss.json" }, "dependencies": { "conveyor-belt": "0.0.5", "express": "~4.9.0", "express-handlebars": "^2.0.1", "morgan": "~1.3.0" }, "devDependencies": { "postcss-cli": "^2.5.1", } } Turns out you can omit “./node_modules/.bin” and just call postcss here — thanks to vectorsize for the tip! So from now on you can just run npm run postcss You have probably noticed the “--config postcss.json” argument — this is going to contain our PostCSS configuration. Rather than passing lots of arguments in the command line/package.json file, we can specify everything within a single JSON file. Here’s the basic structure: { "use": [], "input": "css/main.css", "output": "public/main.css", "local-plugins": true, "watch": true } While this is a valid example, it actually doesn’t do anything at all! Notice the empty “use” array — this is where we specify our PostCSS plugins that help us transform the input CSS and add functionality. Example PostCSS configuration If you’re coming from a Sass project, you’ll likely want to have: CSS @imports CSS @extends $variables Nested classes Mixins Autoprefixing To get all that, you need to install the relevant modules: Note: the plugins provide almost identical syntax to Sass — some are slightly different (and in the case of postcss-mixins—much more powerful), so make sure to check the pages above for more info. 
And to install them in one line: npm install --save-dev postcss-import postcss-simple-vars postcss-extend postcss-nested postcss-mixins autoprefixer Then update your postcss.json: { "use": [ "autoprefixer", "postcss-import", "postcss-simple-vars", "postcss-extend", "postcss-nested", "postcss-mixins" ], "input": "css/main.css", "output": "public/main.css", "local-plugins": true, "watch": true, "autoprefixer": { "browsers": "> 5%" } } Notice we added an extra key for autoprefixer — you can also use the JSON to configure individual plugins! Now you can just do npm run postcss (there is currently no output in the Terminal, sadly, so you’ll just get a blank line) and it will automatically transform and watch the code for changes. Further reading Now that you have most of the things you need to get started using PostCSS, you may want to have a look at cssnext to start using CSS4 today, or browse the long list of language extensions, linters and optimisers currently available as plugins. — I’m Svilen — a full-stack web developer and co-founder at Heresy. We’re always looking for engineers who enjoy working with the latest technologies and solving challenging problems. If you’re curious, check out our jobs page!
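To make the configuration concrete, here is a small hedged sketch of what two of these plugins do (the selector and variable names are made up for illustration): input CSS written with postcss-simple-vars and postcss-nested, followed by the approximate plain CSS that npm run postcss would emit.

```css
/* css/main.css — input using $variables (postcss-simple-vars)
   and nested rules with & (postcss-nested) */
$brand: #0366d6;

.nav {
  background: $brand;
  a {
    color: white;
    &:hover { text-decoration: underline; }
  }
}

/* public/main.css — approximate output after `npm run postcss`:
   variables are inlined and nested rules are unwrapped */
.nav { background: #0366d6; }
.nav a { color: white; }
.nav a:hover { text-decoration: underline; }
```

On top of this, autoprefixer would add vendor prefixes to any properties that need them for browsers matching the "> 5%" query, and postcss-import would inline any @import-ed files before the other plugins run.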
https://medium.com/heresy-dev/getting-started-with-postcss-a-quick-guide-for-sass-users-90c8b675d5f4
['Svilen Gospodinov']
2017-08-31 15:02:10.899000+00:00
['Web Development', 'Sass', 'CSS']
IBM is Recognized in the 2020 iF Design Awards
On behalf of our design team at IBM Cloud, Data and AI, we’re excited to announce that we’ve won iF Design Awards in the Communications category for IBM AutoAI and IBM Watson Studio Desktop. We are thrilled to see these two products get recognized for their outstanding design work. This year, the iF Design jury, comprised of 78 international experts, judged 7,300 products and projects submitted from 56 countries around the world. The iF Design Award is one of the world’s oldest, most celebrated, and most competitive design competitions. This is our third year in a row being recognized by this organization, and the first time that we have seen two of our products get awarded at the same time. It’s truly an achievement and an honor for us, and I’m so proud that our team’s hard work has paid off. What is IBM AutoAI? IBM AutoAI, part of IBM Watson Studio, automates the process of building machine learning models for users such as data scientists. Businesses looking to integrate AI into their practices often struggle to establish the necessary foundation for this technology due to limited resources or a gap in skill sets. The process of understanding how to use AI and generate machine learning models from data sets can take days or weeks. With a distinct emphasis on trust and explainability, IBM AutoAI visualizes this automated machine learning process through each stage of data preparation, algorithm selection, model creation, and data enhancement. The tool is able to teach and empower users to identify and apply the best models for their data in a matter of minutes, helping businesses save time and resources. AutoAI guides users through the process of joining multiple data sources with suggestions and prompts throughout the data preparation experience. Designing for IBM AutoAI One of the primary goals for the design team was making IBM AutoAI understandable for users with varying levels of expertise.
It was a challenge for the designers to understand the AI and machine learning technology behind this automated solution, and then to communicate the model creation process in a comprehensive but visually appealing way. The team set out to create a software product that guided the user through these complex technological processes step by step. IBM AutoAI visualizes the entire model creation process through multiple “lenses”, providing transparency to users in a way that lets them understand the process to whatever level of detail they need. The design team worked directly with IBM Research to understand the underlying technology and user expectations for this type of tool. The team also interviewed target users and conducted competitive research to increase their domain knowledge in artificial intelligence and better inform their design decisions. Based on deep user research, the designers found that users inherently didn’t trust an automated solution. The design team wanted to avoid this perception of an automated solution as a “black box”, where it is unclear to the user how a result was generated from the information that they input. Throughout the design process, the designers placed emphasis on explaining all steps of the software tool’s process in layman’s terms in order to build confidence and trust with the users. By leveraging the IBM Enterprise Design Thinking framework, the design process also extended to development, content, and offering management teams, which helped create a product more aligned with all stakeholder goals. What is IBM Watson Studio Desktop? IBM Watson Studio Desktop is a data science and machine-learning software platform that provides self-service, drag-and-drop data analytics right from the user’s desktop. The software platform’s features include the ability to automate data preparation and modeling, data analysis, enhanced visualizations, and an intuitive interface that doesn’t require coding knowledge.
It can integrate with on-premise, cloud, and hybrid-cloud environments. This dashboard offers users a way to explore, prepare, and model their data with simple drag-and-drop features, without needing coding abilities. Data analysis can be a painstaking process, as users need to gather, clean, sort, and sift through data scattered across several sources and locations. IBM Watson Studio Desktop is an end-to-end solution that helps businesses get started with the data analysis process faster, giving data scientists all the tools they need to improve their workflow. This product is a desktop version of IBM Watson Studio, a collaborative cloud data analysis software. Designing for IBM Watson Studio Desktop The design team behind IBM Watson Studio Desktop conducted research on their target users, primarily data scientists, to understand their needs. The designers conducted interviews with sponsor users and corporations as well as on-site user testing. The team found that data scientists primarily worked in isolation, and were looking for a more dynamic, collaborative workflow, where they had all of their tools in one place. The team aimed to design a tool and interface where data scientists were provided with an ecosystem of data analysis tools. They wanted to create a space for their users to collaborate, access all of their needed tools and information at once, and create a cohesive workflow between themselves and their peers. User Experience Journey for IBM Watson Studio Desktop users Another challenge for the UX team was to bring all of the capabilities originally designed for the cloud version of the software into the desktop version. IBM Watson Studio Desktop was created for users who wanted to work offline, as well as in an interface with more narrowed and tailored machine learning capabilities.
The team wanted to design a desktop tool that translated well as an extension of the cloud tool, with a user experience that was more simplified and focused, but still familiar to users from the original cloud version. The team designed an interface that used similar design principles and carried over key features from the cloud version that the users wanted to see in this new environment. “IBM Watson Studio Desktop and IBM Watson AutoAI bridge gaps in skills and knowledge and make data analysis and machine learning more accessible for businesses in the modern age. We designed these products with empathy and a user-centered approach, so that our users could confidently integrate AI into their business workflows.” --Alex Swain, Design Principal at IBM Cloud, Data and AI Designing Watson Products As described above, designing software products with AI and machine learning capabilities is a challenging task that requires an in-depth understanding of the field and its challenges. AI has the power to impact businesses on a large scale, and understanding how to take advantage of these capabilities is essential for businesses to succeed and excel with their data strategy. Being recognized for the design work behind these products is a true testament to how much user experience can play a role in shaping how these AI technologies impact our lives. Winning Teams IBM AutoAI Design Principal: Alex Swain Design Team: Dillon Eversman, Voranouth Supadulya IBM Watson Studio Desktop
https://medium.com/design-ibm/ibm-is-recognized-in-the-2020-if-design-awards-1221123585f8
['Arin Bhowmick']
2020-02-13 05:10:31.955000+00:00
['Machine Learning', 'UX', 'Data Science', 'Design', 'AI']
Database Transaction and ACID Revisit
Traditional DBMS processing is built around transactions. A transaction is a crucial and useful abstraction provided by the DBMS to model a communication procedure. Database users depend on standard sets of properties that guarantee database transactions are processed reliably. These standard properties are known by the widely used acronym ACID, which stands for Atomicity, Consistency, Isolation and Durability. Inspired by Martin Kleppmann’s book “Designing Data-Intensive Applications”, I want to share some of my thoughts on transactions and ACID properties. Transaction A transaction is a coherent and reliable unit of work performed on a database and should be independent of other transactions. The purpose of the transaction abstraction is to deal with two problems: To provide a reliable way to recover from errors and keep system integrity in case of a system fault. To provide isolation among programs accessing the database concurrently. Without isolation, concurrency usually causes erroneous, sometimes nondeterministic outcomes. A transaction is only a logical abstraction, but it can be made up of multiple operations. For example, in a MySQL system, we can use BEGIN/END to compound multiple statements into a single commit. If there are multiple operations within one transaction, they must be executed or cancelled together. In a database system, a transaction should be executed in an atomic, consistent, isolated and durable manner. Atomicity Atomicity is designed for error-handling purposes. Atomicity guarantees that a transaction can be fully committed, aborted or rolled back. In other words, the processing of a transaction cannot be partially accomplished. Additionally, atomicity also makes sure that the system can roll back a transaction without side effects, which can be considered abortability.
If the execution of a transaction has some side effect that cannot be reverted, such as sending an external request to a downstream application, the transaction is not considered atomic since it does not guarantee abortability. Consistency In the context of ACID, consistency makes sure that a transaction can only bring the database from one valid state to another, maintaining database invariants. The legitimacy of data is usually not a database concern; it is usually defined by the application logic. In other words, consistency should be considered a handy feature offered by some databases rather than a hard requirement for database transactions. For example, when people create their user profile on a social network, they usually need to choose a user name that is not already used by others. The uniqueness of the user name is not a concern for the database, but it is necessary for the social network application. In fact, there are quite a few common use cases, such as uniqueness, anti-orphan data, etc., among database users. Therefore, databases provide some tools, such as primary keys, foreign keys, etc., to enforce some constraints on transactions. The consistency feature therefore frees users from low-level integrity checks. Isolation The isolation property guarantees that the processing of a transaction will not be interrupted by other transactions. With a strong isolation level, every transaction can consider itself the only one in the system. In other words, each transaction is independent and even invisible to others. If there are multiple transactions running concurrently, the database system should handle them as if they were running sequentially. This isolation level is called serializability, which is usually considered the strongest isolation level. However, serializability comes with an expensive performance penalty, so some systems also provide weak isolation levels to achieve better performance. Some weak isolation levels are read committed and snapshot isolation.
Designing a proper isolation level and handling concurrency is one of the most complicated challenges in database design, especially for a distributed database with a multi-leader or leaderless architecture. Most databases build their solution on two-phase locking (2PL). In the design of 2PL, there are two kinds of locks: shared locks for read operations and exclusive locks for write operations. An operation must acquire the lock for the object before processing the transaction. Depending on the access operation type, acquiring the lock may be blocked and postponed if another transaction is holding a lock for that object. Durability Durability means committed transactions will survive permanently. We can consider it a guarantee of fault tolerance, from two perspectives. From a hardware perspective, it usually means the transaction has been stored on persistent storage, such as a hard disk. On the other hand, people also look at the durability problem from a system perspective and claim the transaction is durable if it is properly replicated. There are pros and cons to either solution. For example, the annual failure rate for hard disk drives (HDD) is around 1%, while 20% of SSDs suffer various reliability issues over a four-year period. Therefore, it is possible for a system to lose some transaction data due to hardware failure. On the other hand, during a network partition, the replicated data may be inaccessible as well. In reality, system architectures usually combine both solutions to achieve an SLA for durability. Example Let us review these concepts through a real-life example. In a banking application, a common use case is fund transfer. Assume Bob wants to transfer $100 from his checking account to his saving account. Before the transfer, there is a $500 balance in his checking account and $300 in his saving account. After he issues the fund transfer in the portal, a transaction is issued to the underlying database.
There is a fund subtraction from the checking account and a fund addition to the saving account. The two operations must either succeed or fail together. Otherwise, there may be some funds missing. This is guaranteed by atomicity. At any time during the transaction, there is $800 of total funds belonging to Bob. This is guaranteed by consistency. If Bob checks the balances of his two accounts while the transaction is being executed, he can see either $500/$300 or $400/$400, since the transfer operation is independent and will not be interleaved with the balance-checking operation. This is guaranteed by isolation. After the transaction is committed, there will be a $400/$400 balance in Bob’s two accounts until other transactions are committed. A system fault will not roll it back. This is called durability. Some banking systems send email or SMS notifications after the transaction is committed. However, we cannot consider the notification part of the transaction since: The execution is not in the context of the database, and therefore it is not within the semantics of a database transaction. Even if we extend the transaction to the overall banking system, the SMS and notification cannot be retracted easily. Therefore it breaks abortability. What is next This wraps up our initial discussion about ACID. Since the 2000s, large-scale distributed databases have gained popularity in the industry. It is not easy to achieve ACID with performance in consideration, so some relaxed alternatives have been proposed. In the next article, we will discuss some other popular concepts in the distributed data system community. References 1. Designing Data-Intensive Applications, Martin Kleppmann, O’Reilly, 2017 2. https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2020/
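The fund-transfer example above can be sketched in a few lines of Python using the standard sqlite3 module. This is a minimal illustration of atomicity, not a banking system; the table and account names are made up for the demo:

```python
import sqlite3

# In-memory database with Bob's two accounts, mirroring the article's numbers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("checking", 500), ("saving", 300)])
conn.commit()

def transfer(amount):
    try:
        with conn:  # opens a transaction: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ?"
                         " WHERE name = 'checking'", (amount,))
            # Simulate a fault between the two operations of the transaction.
            if amount > 500:
                raise RuntimeError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ?"
                         " WHERE name = 'saving'", (amount,))
    except RuntimeError:
        pass  # the partial subtraction was rolled back: atomicity

transfer(100)   # succeeds: $400 / $400
transfer(9999)  # fails mid-way: balances must be left unchanged
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
# balances == {'checking': 400, 'saving': 400} -- the failed transfer left no trace
```

The `with conn:` block plays the role of BEGIN/COMMIT from the article: either both UPDATE statements take effect or neither does.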
https://medium.com/@humbo/database-transaction-and-acid-revisit-192d350d590e
[]
2020-11-15 00:07:44.955000+00:00
['Distributed Systems', 'Software Engineering', 'Database']
A Year in Review: Moka Highlights of 2018
As a young start-up, last year was all about sustainability for us. This was true for our operations in the USA as well as in Cameroon, Africa, where we faced tremendous operational complexities due to socio-political unrest. Our work in Cameroon is needed now more than ever, and despite a long road ahead, we are tremendously proud of the strides we made in 2018. As we excitedly look forward to 2019, we take a moment to recognize a few accomplishments of last year and honor those who made them possible. We’ve hit some major milestones in 2018 and even surpassed our own expectations. These accomplishments were made possible by all of our supporters and customers, as every bar of chocolate and every bag of coffee we sell directly impacts our mission around the globe. It’s never been more rewarding to enjoy chocolate and coffee like this. Our Impact at origin In total, we’ve planted 30,320 trees, produced 6,492 days of employment, harvested 5,808 pounds of food, donated 480 trees into our Farmer Field School, and invested $75,000 into the communities. On our own farm in Cameroon, we also added 7,000 additional trees in just this year alone, while increasing job opportunities along the way. Even more enriching is our expanding community of cacao farmers who value having Moka as an on-the-ground partner. We are seeing more people affected by the growing success every day, which only fuels our enthusiasm to keep pushing harder than ever. Follow our impact at MokaOrigins.com/Impact Meanwhile in the USA Our coffee roastery and chocolate factory have been seeing consistent growth. We doubled our retail partnerships last year, won 5 awards, and expanded our production output. We poured, hand-wrapped and shipped you 18,000 bars of chocolate and 10,000 lbs of coffee last year. These past few months have been the biggest we’ve ever seen, and if you visit our factory now, you’ll see signs of expansion throughout our space.
Moka Box monthly subscription In June, we officially launched our subscription product called Moka Box, which allows subscribers to receive a unique experience each month and enjoy free shipping. These subscription boxes highlight special products that pair together for a unique tasting experience. We’ve even released some super limited batch bars, such as our coveted Cameroon 68% bar, our Strawberry White Chocolate Bar, and our Single-Origin Brazil Coffee. Subscribers in December found themselves with our legendary Organic Peppermint Bark Dark Chocolate Bar. Origin Trips To top it all off, we’ve just announced our Origin Trips. These expeditions to our farmers’ countries of origin are designed to make the chocolate & coffee supply chain much more transparent, while also providing service to the communities that we source from. These trips signify our commitment to transparency, quality, and beyond-fair-trade relationships. This year, in June of 2019, we will be leading a group of 20 chocolate lovers to the Dominican Republic for our 2019 Rainforest Chocolate Trip: it’ll be the ultimate “farm to bar” experience. You’ll get a chance to meet the cacao farmers, help process cacao, and support their communities, all while enjoying the beauty of the Dominican Republic, including forests, beaches, food and more! Thanks to you all who have made Moka Origins a part of your family. We could not be more thankful for the support and growth we have seen, which is only fueling our efforts to continue to make great chocolate and coffee — all while creating a better world. Empowerment comes in many forms; ours just happens to be delicious. Jeff Abella Co-Founder, Moka Origins
https://medium.com/@mokaorigins/a-year-in-review-moka-highlights-of-2018-b58f77da8c22
['Moka Origins']
2021-02-01 21:26:05.903000+00:00
['Social Enterprise', 'Craft Chocolate', 'Sustainable Agriculture', 'Impact', 'Tours And Travels']
What’s Their Endgame?
Invariably, in a conversation about environmental destruction, war in the Middle East, or the pandemic, someone eventually asks the question: “Yes, but what’s their endgame?” Behind this question is the assumption that an elite cabal of capitalist overseers controls everything and foresees the outcome of all their decisions — that they possess an unassailable plan that can never be defeated. Behind this question also lurks capitalist realism, characterized by Zizek and Jameson as the mental state in which “it’s easier to imagine the end of the world than it is to imagine the end of capitalism.” “We find ourselves at the notorious ‘end of history’ trumpeted by Francis Fukuyama after the fall of the Berlin Wall,” wrote Mark Fisher in Capitalist Realism: Is There No Alternative? “Fukuyama’s thesis that history has climaxed with liberal capitalism may have been widely derided, but it is accepted, even assumed, at the level of the cultural unconscious.” This mindset is so common that many of us don’t think twice before asking “What’s their endgame?” We believe that in posing the question we’re making a meta-critique of capitalism. The mere formulation of the question supposes that there’s such a thing as organization — a system — a concept that in itself is truly revolutionary for many of us, who don’t even realize that we live within a system, and that alternatives are possible. The question “What’s their endgame?” is often posed in the context of the pandemic. In this case, the assumption behind the question is that capitalism, if it wanted to, could have reacted better to the pandemic, and could have saved more lives. Therefore, the pseudo-intellect wonders, was there perhaps not a master plan behind letting hundreds of thousands die? Perhaps the US colluded with the leaders of Brazil, Britain, China, Cuba, and the United Nations, and they all agreed on a plan to usher in a police state? 
The theory falls apart when we accept that various communities had divergent responses to COVID-19. It’s more likely that the agenda behind allowing mass death was the same that capitalists always have: to make as much money as possible with little thought for the consequences. On the topic of environmental destruction, we aver that billionaires are building spaceships to colonize Mars; that they’ve already planned for the destruction of planet earth and its ecosystems. Whether it’s the singularity, or life on Mars as imagined in books and film, we’ve internalized the idea that the contamination of earth and humanity’s exile into space — or at least an elite fraction of humanity — is actually an ingenious plan devised by the super-intelligent billionaires, not the inevitable result of an economic system that ignores the most basic principles of nature in order to line the pockets of the oligarchy. Global war? We’re supposed to believe that the imperialist wars in Syria, Afghanistan, or Yemen, are going exactly as planned; that the US withdrew from Iraq in 2011 strategically — the US actually won the war. We’ve internalized the fallacy that Vietnam’s Resistance War Against America went according to US plans; that powers beyond our control act unilaterally on our beings, that we’re not actors in history. I’m reminded of a comment someone posted on my Facebook wall this week: “if one wishes an answer, there is none, to war and chaos. When it all spills over it becomes a world war, and history repeats itself, over and over.” There are elements of truth to these ideas, which is why they’re so appealing. “This malaise, the feeling that there is nothing new, is itself nothing new of course,” wrote Fisher. 
History repeats itself: an appealing idea Perhaps the root of this idea is the oft-repeated saying that “those who do not learn from history are doomed to repeat it.” It turns out that this is really an altered version, or a misquote, of the original written by Spanish philosopher George Santayana in 1905: “those who cannot remember the past are condemned to repeat it.” From this deeply ingrained idea we extrapolate the notion that history repeats itself — but that’s not at all what Santayana meant. Neither of these maxims is meant to teach us that history repeats itself. Santayana’s statement warns us that if we don’t learn from history, we won’t be involved in the making of the future. While there are certainly cyclical elements in both nature and human history, to believe that either nature or history truly repeats itself is simply to bow out of the game — to quit before we’ve even played. A similar logic is at play when we ask “What’s their endgame?” The reasonable alternative is to recognize that reality is always changing, and will continue to change; to recognize that humans play an active role in creating the future. Our realities are not determined by forces entirely outside of our control. This recognition gives us both strength and optimism. “For revolutionary hope to come into being, we need to discard determinism,” writes Yanis Iqbal. “Instead of dialectically locating an individual in the interconnected economic, political and cultural systems, institutions and structures, determinism considers him/her to be unilaterally influenced by it. A determinist conception is based on the dichotomous division of existence into an ‘external world’ and ‘human consciousness.’ In this conception, the external world and consciousness are two different components of human existence.” In truth, humans are not separate from the world around us. It’s self-evident that we’re part of it. We require air, water, and sustenance to live and to think.
Conversely, we alter and metabolize the world around us by our existence. The oligarchy and the elite, while they may live in ivory towers, are subject to the same forces of nature as we are, with the same powers and limits of agency. They may have various plans, and diverse strategies, but none of this ensured that their plans worked perfectly in the past, nor will in the future. Despite its popularity in academia — particularly in philosophy, cultural studies and postmodernism — it’s easy to demonstrate that capitalist realism is incorrect. One only has to imagine other mental states or cultural tropes such as apartheid realism, feudalist realism, or hunter-gatherer realism. Understanding and analyzing the plans of our opponents or enemies is important. The assumption that we are powerless before them is fallacious and futile. Often, what lies beneath the question “What’s their endgame?” is conspiracy theory. Yes, many elements of conspiracy theory are certainly true and yes, groupings of like-minded people marshal increased power to guide events. However, blind adherence to conspiracy theory ignores the self-evidence of the greatest conspiracy of all: that we, as humans, all conspire together to create the future. [originally published by Orinoco Tribune, March 9, 2021] **** Please feel free to repost, translate, republish, use, reuse, in part or in whole, any of the content found on this particular page. You have my permission to do so, free of charge. If you have questions or comments please send me a message through my Twitter account, or on Facebook. Thanks to the following outlets for sharing this item: Hampton Institute Internationalist 360° Orinoco Tribune Revolutionary Strategic Studies
https://medium.com/@stevelalla/whats-their-endgame-4769656829e
['Steve Lalla']
2021-03-23 01:12:06.819000+00:00
['History', 'Capitalist Realism', 'Conspiracy Theories', 'Marxist Theory', 'Dialectical Materialism']
SUCH BONDS
When you envy someone, what is it that you really envy? Is it simply the way she is, the way she talks, her skills and manners? Or is it rather her interaction with others, her participation in her surroundings? I wish I had such bonds.
https://medium.com/@staffancarle/such-bonds-b593fbf14e09
['Staffan Carle']
2020-12-20 13:43:08.174000+00:00
['Unity', 'Human Behavior', 'Social']
Tech Series #11: An Overview of Vite Storage Layer
The storage layer is responsible for the persistence of various data on the Vite chain, and for providing key functions such as caching and sophisticated queries. The data stored in the Vite storage layer includes transactions (Transaction/AccountBlock), snapshots (SnapshotBlock), account states (AccountState), and virtual machine states (VmState). Business Requirements Each type of data satisfies a special use case with unique data reading/writing requirements: Transaction In Vite’s DAG ledger, a transaction (Tx/Transaction) is represented as an AccountBlock. Except in some rare situations, a transaction is always initiated by a FromAddress and sent to a ToAddress, and is called a SendTx. When the account of the ToAddress receives the transaction, a ReceiveTx will be generated and linked to the SendTx. Similar to other blockchains, it is possible to use the Tx hash to query the transaction, or to find all transactions belonging to a certain account by address on Vite. Additionally, querying based on the relationship between a SendTx and its ReceiveTx must also be supported. Snapshot The snapshot chain is a special blockchain on Vite. Each SnapshotBlock in the snapshot chain stores the basic information of all transactions in the snapshot, and a link relationship is established between the SnapshotBlock and each Tx that has been snapshotted. Therefore, in addition to the query and traversal demands on SnapshotBlocks, SnapshotBlocks and Txs must also be queried based on the indexed relationship between them. Account state and virtual machine state The account state and the virtual machine state are very similar. They are both storage data structures used to indicate the state of an address. The difference is that the account state is associated with a common account, while the virtual machine state belongs to a contract. The state keeps changing during the execution of a transaction, so the state storage must be versioned to support update, traversal and rollback.
In addition, in order to let external applications become aware of state changes in time, Event/VmLog should be supported for convenient tracing of changes. System Design Conceptual design The general design purpose is to provide support for upper-layer business modules and to improve reusability and system reliability by following the guidelines below. Small file storage Using small files of fixed size for both temporary storage and permanent storage of blocks. The performance of incremental writing and batch reading of small files is highly efficient, and it also takes into account the needs of random reading. It is very suitable for storing massive numbers of blockchain transactions. Indexing Using LevelDB to implement index storage. LevelDB has excellent performance in batch incremental writing, which is very suitable for blockchains that have more writes but fewer updates. LevelDB stores keys in sorted byte order, which facilitates the read/write of multi-version states through customized keys (e.g., with big-endian encoded version numbers). It can also perform K-V based random reading and writing. Cache Using a cache to store hot data to make full use of the performance advantage of memory to speed up reads. The cache can be implemented according to configurable strategies. Asynchronous flush In order to improve I/O performance, the persistent storage of data is flushed asynchronously. Two-phase commit is introduced to ensure data consistency, while a redo log is used to avoid the loss of uncommitted data. Data compression Using data compression to reduce the amount of data stored. Engineering implementation The Vite storage layer is implemented as the following three modules. blockDB blockDB realizes the storage of AccountBlock and SnapshotBlock. Since the data format is fixed and most blocks have a fixed size, they are stored in small files. Multiple blocks are allowed to be stored in one file to reduce fragmentation and facilitate indexing.
indexDB indexDB is used to index the locations of blocks, and it also stores the relationships between various blocks. stateDB stateDB is used to store account states and virtual machine states. By carefully designing the byte positions of the LevelDB key, it can support multiple versions of data. Summary This article briefly introduced a high-level perspective on how the Vite storage layer has been designed in alignment with actual business needs. In the next article, we will introduce the design details of the three modules mentioned above. Stay tuned!
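The idea of "carefully designing the byte positions of the LevelDB key" for versioned state can be illustrated with a small sketch. The one-byte prefix, address format, and 8-byte version field below are assumptions for illustration, not Vite's actual on-disk layout; the point is that big-endian version numbers make LevelDB's lexicographic key order coincide with numeric version order, so all versions of one state sit adjacent and the latest is at the end of the range:

```python
import struct

# Hypothetical key layout: 1-byte table prefix | account address | 8-byte
# big-endian version. Big-endian matters: comparing the key bytes
# lexicographically then matches comparing the versions numerically.
STATE_PREFIX = b"\x01"

def state_key(address: bytes, version: int) -> bytes:
    """Build a versioned state key for a hypothetical stateDB."""
    return STATE_PREFIX + address + struct.pack(">Q", version)

addr = b"vite_0001"
keys = [state_key(addr, v) for v in (1, 2, 10, 255, 256)]
# Sorted byte order equals version order, so a LevelDB range scan over
# STATE_PREFIX + addr visits the versions oldest-to-newest; with
# little-endian encoding, version 256 would sort before version 255.
assert keys == sorted(keys)
```

Reverting to an earlier snapshot then becomes a range delete of all keys whose version field exceeds the target version.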
https://medium.com/vitelabs/tech-series-11-an-overview-of-vite-storage-layer-30841aed2253
['Allen Liu']
2020-11-23 08:18:10.869000+00:00
['Software Development', 'Blockchain Development', 'Blockchain Storage', 'Tech', 'Vite']
Who Is Lark Davis?
Lark Davis is a channel that provides content on topics such as Bitcoin, other cryptocurrencies, and anything else related to blockchain technology, as well as daily news and current events. This usually comes in the form of videos on the channel, posted daily, covering what is happening in the markets across all industries. Now let's look at who Lark Davis is in more depth.
https://medium.com/@newcodelogic/who-is-lark-davis-cd136cb3354d
['Exciting World Cryptos']
2020-12-24 21:44:34.777000+00:00
['Investing', 'Dialog', 'Banking', 'Mining Industry', 'Bitcoin']
Is being a Product Owner a Full Time Job?
Have you ever been part of an agile team full of rockstars who can commit to a ton of work per sprint, and still noticed that something is not quite right? Maybe you see lots of development but no concrete results? As you may already know, a Product Owner is one of the many key pieces that makes an Agile team run smoothly. This person provides the heartbeat of the team and feeds the backlog with all sorts of requirements for new work and improvement ideas. But why is it so hard to find a dedicated PO who can keep up with the task of providing a backlog that keeps a team busy sprint after sprint? There are several answers to that.

PO positions on agile teams are generally filled by business representatives who have sufficient experience in a specific area of interest (those famous Subject Matter Experts). This doesn't always mean that this person fully understands how an agile process works. When teams are formed, it is usual to see very strict filters for hiring good Developers, Scrum Masters, QAs and other specialists. But what about POs? Well, the answer usually is: "We found a guy who knows a lot about the business process, but he's never been a PO. Oh, and by the way, he has many other things to do." Another answer may be: "We found a person who knows a bit about Agile; however, she is new to the company and doesn't understand the business process very well. Oh, and by the way, she'll be PO for more than one team at a time."

Now let's take a step back and analyze why the PO position is a full-time job. Prioritizing and building a backlog that keeps a team (sometimes an oversized one) busy through several sprints, while keeping it as tidy and logically ordered as possible, is sometimes an astronomical, almost unreachable goal for some POs. It requires hours and hours of stakeholder management, business insight, and an incredible talent for documenting in a fast, concise, and easy-to-understand style.
On a high-profile Agile team, work items (often called User Stories) are usually taken as literally as they are written by the Product Owner. These work units must be fully groomed (once or twice) prior to bringing them into sprint planning, and hopefully written in an easy-to-read, hard-to-get-lost style (e.g. Behavior-Driven Development). This implies that there is always room for business-rule and functionality gaps between User Stories; if individual features are not documented in a way that covers these gaps, the team will start noticing that something is missing in between. This can cause multiple setbacks (including increased tech debt), which will increase uncertainty and undermine confidence that what is being delivered is a quality product that will last.

But that's only the first part! A PO's job involves a holistic view of the picture; it means not only defining the path to follow but also staying on top of what's being done and reacting fast to the team's discoveries along the way. If you are a Product Owner, remember that this is your product (no need to say that the role's name is self-explanatory) and you are responsible for what the team is doing and how the team spends its time; at the end of the day, everything gets translated to $$$. It is also necessary to say that priority changes are always welcome in Agile; however, these changes have to make sense and be cohesive with business goals.

That being said, here are some signals that may help you identify whether your Product Owner is having trouble:

User story acceptance issues. Poorly documented acceptance criteria cause confusion, misunderstandings, and wasted time.

No logical order of priorities. The team might get blocked by non-existent context or architecture to develop on top of what's needed, and tech debt increases.
Not enough backlog. Team members can stall and resources are wasted; this is a time-management problem.

Lack of business acumen. Knowledge gaps cause misdirected efforts and overhead when issues need to be fixed.

Constant priority shifts. Agile welcomes changes; however, these are accepted only to a certain extent, and common sense needs to be applied.

Team frustration. Team members are the perfect thermometer for finding out that something is wrong. Ridiculous priority shifts, direction issues, and a lack of work items can frustrate a team.

Increased tech debt. Frequently caused by last-minute requirements, which increase overhead and affect the team's velocity.

Team image issues. Delays or errors in value delivery can change how a team is perceived by upper management, and these are not always provoked by the team itself.

As a final thought, the Product Owner role is not to be taken lightly. Our job is to make it visible in our organizations: these people are in charge of sailing the boat, and in their hands lies the responsibility of ensuring that final customers' needs are fulfilled to the maximum. Let's give this role the importance it deserves. — Luis Carlos Arias
https://medium.com/@luiskcr/is-being-a-product-owner-a-full-time-job-91c59d677ce6
['Luis Carlos Arias Leiva']
2020-03-03 06:51:26.583000+00:00
['Agile', 'Agile Methodology', 'Product Owner', 'Scrum', 'Agile Coaching']
Anna Szypszak | Diversity boosts creativity and development.
What does IT have in common with Alice in Wonderland? Once you fall into the rabbit hole, it starts to pull you in. Inside, you may be in for a real adventure that will let you change the world for the better. A heroine of such a story is Anna Szypszak, a Scrum Master and Agile Coach at KMD Poland, who often stresses in our conversation the importance of diversity at work, in every sense of the word.

Anna Szypszak, a Scrum Master and Agile Coach at KMD Poland.

When did you join KMD?

I joined the team in January 2020, in the middle of the pandemic and the related staff scattering, remote work and team support from a distance. But despite the fact I have been here for merely a few months, it feels like it has been much, much longer. I was welcomed with huge openness, understanding and willingness of my new colleagues to provide support and assistance.

Why did you decide to join the team?

Because of the people. Seriously! I met a few brilliant people at the consecutive stages of my recruitment and this convinced me. It may sound funny, but the organisational structures, equipment, computers and projects or products are, in a sense, similar in all IT companies. It is the commitment, talent and openness of the people that make certain organisations simply better than others and pleasant to work in.

What do you do? What is your role at KMD?

I am a Scrum Master. I work with two developer teams that I support in streamlining the process of software delivery, communication, problem solving, and their development as a group. I also take part in a number of interesting initiatives within our Agile Community, such as, for example, co-teaching training courses for new employees on the fundamentals of working methods, such as Scrum.

What made you decide on a career in IT? Was it a well-thought-out decision or just pure coincidence?

They say that nothing happens by chance in life and I firmly believe that.
However, I must admit that it is a very interesting story, full of extraordinary “coincidences” and wonderful people who helped me find my way in a place I had never even looked for. I joined the IT world through an internship competition, but my previous internships and work placements were in the field of quality and production management. IT wanted me, but without reciprocity — I preferred to work closer to production and was fascinated by modern factories and manufacturing systems. However, I said to myself that three months is not an eternity — I will manage somehow. Unfortunately, (of course, I say it jokingly) I came across a passionate manager thanks to whom I not only completed my internship but also accepted the offer of further cooperation, and eventually moved from Łódź to Warsaw. And I stayed, both in the capital and in IT.

How were your beginnings in IT?

You might expect that once I ended up in IT, like Alice in the rabbit hole in the fairy tale, it was downhill from there. But it was quite on the contrary! After all, IT is a huge area full of secret nooks and crannies, specialisations, winding paths and miscellaneous professional opportunities. Surely, there are also straightforward paths. However, I wasn’t lucky enough to follow them. I think that I am generally a type of adventurous explorer and therefore those simple and straightforward paths are always miraculously closed for me. I started out as a project analyst. I liked it for some time, but then it was no longer enough for me. I was transferred to the position of project manager. Very traditional projects, with a steering committee, a budget and strict schedule. It then seemed to me that the only career path is to become a manager. And again, thanks to my supervisor and the mentoring scheme, I understood that this was not the way and there were millions of other options. I began to search for my own.
In the meantime, I completed a lot of courses and training sessions, I moved from one department to another, and read tonnes of books. Finally, I came across Agile and I have been developing myself in this area ever since.

You mentioned that before entering the IT world, your professional experiences were completely different? Do you make any use of them in your daily work?

Yes! I am strongly convinced that we limit our potential very much if we think that in a given sector we can only use our related experiences. What often comes in handy in my case is the fact that I spent many years preparing for my studies at the Academy of Fine Arts, by learning how to draw, paint, compose and apply various artistic techniques, which I still use today while preparing workshop materials. Then I chose the Technical University, but this is again a different story. I very often use what I learned from coaching courses — for example how to help people go through a conflict or how to work with a group. And what I learned while working in quality management at a factory, meaning how to streamline processes based on given measurements. My trainer experience is also very helpful — this was also an episode in my career path. Perhaps the only professional experience I do not actively use is the time I spent working at a florist’s. But I am almost sure that I will be able to apply these skills to IT sooner or later.

You say you were going to attend the Academy of Fine Arts, but then you chose a completely different path. What is your educational background?

I graduated from the Technical University of Łódź, inter-faculty studies in the International Faculty of Engineering. I still believe that these are the best studies in the world, though — interestingly enough — I had very little contact with information technology there. As I said, it took me a long time to prepare for the studies at the Academy of Fine Arts. I changed my mind in the first grade of secondary school.
As you can see, it is never too late in your life to change the focus of your interests. It is important to have an idea for yourself and be able to pursue it consistently.

There are numerous stereotypes in IT. Some even say that this is a male-dominated sector. What is your opinion on this? Do you agree with it or not at all?

It depends on what we understand by “male-dominated sector”. If you ask whether it has more male than female employees, then I do agree. Anyway, it is reflected in the statistics. If you mean that male employees perform better in IT — in my opinion, it is absolutely not true. The only thing women, especially young ones, lack is the self‑confidence and belief that they will manage. When I started out, I was an absolute mouseburger, timid and silent, without self-confidence, but with loads of persistence and commitment. I came across a superior who recognised the potential I had failed to see in myself and it was largely thanks to him that I got the opportunity and support, so I am where I am now. When I was employed in IT, the whole department had merely a few female employees, with only one holding a managerial position. It was tempting to sing silently: “It’s a man’s, man’s world”. Now, it is changing, but still women are in the minority. I think it’s great that various sectors, previously quite homogeneous, are becoming increasingly diverse, vivid, multidimensional, and full of people with different experiences — it contributes greatly to development!

Yes, indeed, development. What do you believe women bring to IT? How do they enrich the sector?

First of all, they bring in exactly the same things as their male colleagues — technical competences and soft skills, such as good team-working skills or problem-solving abilities. Do they bring in anything particular? Over the years I have worked with many teams and I see that diversity really boosts creativity and development.
On a team where everybody is somehow similar to one another, has similar opinions on every single subject and similar experiences, break-through solutions are rather difficult to expect. But, on the other hand, if we are forced to juxtapose various points of view and different ideas, we can work out something really exceptional — of course, if we treat differences in a constructive manner, unless they become the source of a destructive conflict. It does not seem to me that women, as a group, have any particular features in common, just like men have different personality traits, beliefs and values. However, I am convinced that mixed teams in terms of sex, age and professional experience simply work, learn and develop better and faster, because they do not work on a one-track basis, but have to open up to a much broader range of possibilities.

What advice would you give to girls who are thinking about a career in IT? Should they focus on any specific aspects? Or, on the contrary, should they give up?

If you are drawn to something never give up! And if you’re still not sure if it’s for you, the only way to make sure is by trying it yourself. I not only work in a “male-dominated sector”, but I also have a typical “male” hobby — I ride a mountain bike and take part in mountain bike races. Sure, I have heard many times that I am doing something pointless, elbowing my way to somewhere I shouldn’t be, that I had better quit or else I might bruise and get myself dirty (if you have ever watched a broadcast of MTB races, you know what I mean). But it only gives me more energy to do it. I love what I do. Besides, nowadays you do not choose a job for your whole life — it is not a mortgage loan. If you fail to try, you may never learn if you like something or are good at it. The worst that can happen is that you make a more conscious decision that you would rather do something else. Now that doesn’t sound scary, does it?
Moreover, IT is really diverse and offers various roles, not only strictly programming positions. And there are really fantastic people.

How does it feel to be a woman at KMD?

Great! Let’s generalise it even more, it feels really great to be a girl in IT. In the “old IT” in which I started out, it was sometimes more difficult and you had to struggle for your position by not allowing some of your colleagues to dominate you. Back then, many people still remembered those old times and, as a result, believed that a woman in IT must be the CEO’s assistant. And now there are female programmers, testers, design or UX experts, or girls in the roles of Scrum Masters, Coaches or managers. It is good to see that at KMD too. I don’t feel that anybody treats us differently here. I hope this trend will continue.

What brings you the most joy at work — both at KMD and in IT itself?

There are two things. First, a feeling of having an impact, that my work really has the power to change something for the better within the organisation, that its effects are visible and positively affect other people, their performance and satisfaction level. Second, the privilege to take part in a number of miscellaneous initiatives or projects and work with many people, to learn from them, to continuously gain new experiences and broaden my professional horizons.

You mentioned mountain biking, which definitely makes your heart beat faster after work. What else are you passionate about? Do you use your passions to face the challenges of professional life?

As I said, I am a kind of a seeker-explorer. I have lots of passions. I ride an MTB, paint, dance tango, and love cooking. If a day had 48 hours, I would certainly find additional hobbies. I owe all this to my parents who encouraged me to try my hand at various things ever since I was a child.
And this desire to “experiment” is still deeply rooted in myself today, helping me at work when I need to consider a given issue from different perspectives, think up and try out some abstract solution, and find a new one when the first one fails. But experimenting alone wouldn’t be worth much without persistence, discipline and patience. I learned these three things from mountain biking. And, perhaps, it taught me one lesson more that results should not be expected to come overnight, but once they come, they bring a lot of satisfaction. A good result in a race requires months, or sometimes years, of hard work. In your professional career, it’s the same — if you are just a beginner, do not expect spectacular results tomorrow, or else you will keep being dissatisfied with yourself and lose your motivation. Compete with yourself rather than with your colleagues who may be doing better in a given moment. Give yourself time to fall, make mistakes, work hard, and try the same thing a few times until it goes well. These are all extremely important and valuable experiences! The process of learning itself is wonderful. The rule is not to give up and know what you want. And, of course, to surround yourself with people who emanate positive energy and support you in your difficult moments, as a team cheering on you during competitions. Then it will all be downhill from there, and once you overcome the first one, there will surely be an even bigger hill to face.
https://medium.com/kmd-blog/diversity-boosts-creativity-and-development-b9e91a30d426
['Kmd Poland']
2020-12-17 15:59:43.480000+00:00
['Kmd', 'Industry']
Classification from scratch — Mammographic Mass Classification
In our previous article, we discussed the classification technique in theory. It's time to play with the code 😉

Before we can start coding, the following libraries need to be installed on our system:

Pandas: pip install pandas
Numpy: pip install numpy
scikit-learn: pip install scikit-learn

The task here is to classify mammographic masses as benign or malignant using different classification algorithms, including SVM, Logistic Regression and Decision Trees. A benign tumor doesn't invade other tissues, whereas a malignant one does spread. Mammography is the most effective method for breast cancer screening available today.

Dataset

The dataset used in this project is "Mammographic masses", a public dataset from the UCI repository (https://archive.ics.uci.edu/ml/datasets/Mammographic+Mass). It can be used to predict the severity (benign or malignant) of a mammographic mass from BI-RADS attributes and the patient's age.

Number of Attributes: 6 (1 goal field: severity, 1 non-predictive: BI-RADS, 4 predictive attributes)

Attribute Information:

BI-RADS assessment: 1 to 5 (ordinal)
Age: patient's age in years (integer)
Shape (mass shape): round=1, oval=2, lobular=3, irregular=4 (nominal)
Margin (mass margin): circumscribed=1, microlobulated=2, obscured=3, ill-defined=4, spiculated=5 (nominal)
Density (mass density): high=1, iso=2, low=3, fat-containing=4 (ordinal)
Severity: benign=0 or malignant=1 (binomial)

Screenshot of top 10 rows of the dataset

So we talked a lot about the theory behind it. It's fairly simple to build a classification model. Follow the steps below and get your own model in an hour 😃 So let's get started!

Approach

Create a new IPython Notebook and insert the code below to import the necessary modules. In case you get any error, do install the necessary packages using pip.
import numpy as np
import pandas as pd
from sklearn import model_selection
from sklearn.preprocessing import StandardScaler
from sklearn import tree
from sklearn import svm
from sklearn import linear_model

Read the data into a dataframe using pandas. To check the top 5 rows of the dataset, use df.head() . You can pass the number of rows as an argument to this function in case you want to check a different number of rows. The BI-RADS attribute is marked as non-predictive in the dataset, so it won't be taken into consideration.

input_file = 'mammographic_masses.data.txt'
masses_data = pd.read_csv(input_file, names=['BI-RADS','Age','Shape','Margin','Density','Severity'], usecols=['Age','Shape','Margin','Density','Severity'], na_values='?')
masses_data.head(10)

You can get a description of the data, such as the count, mean, standard deviation etc., with masses_data.describe()

As you might have observed, there are missing values in the dataset. Handling missing data is very important in data preprocessing. We can fill in the empty values using the mean or mode of the column, depending on the data analysis. For simplicity, for now, you can drop the null values from the data.

masses_data = masses_data.dropna()
features = list(masses_data.columns[:4])
X = masses_data[features].values
print(X)
labels = list(masses_data.columns[4:])
y = masses_data[labels].values
y = y.ravel()
print(y)

The matrix X contains the input features from columns 1 to 4, excluding the target variable. Their values will be used for training. The target variable, i.e. Severity, is stored in the vector y .

Scale the input features to normalize the data within a particular range. Here we use StandardScaler() , which transforms the data to have a mean of 0 and a standard deviation of 1.

scaler = StandardScaler()
X = scaler.fit_transform(X)
print(X)

Create training and testing sets using train_test_split . 25% of the data is used for testing and 75% for training.
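As mentioned, instead of dropping rows you could fill missing values with the mean (or mode) of each column. A minimal sketch using scikit-learn's SimpleImputer, shown here on a small toy array (standing in for the real dataset's columns) rather than masses_data itself:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy feature matrix with missing values (np.nan), standing in for
# columns like Age/Shape/Margin/Density of the real dataset.
X_toy = np.array([[35.0, 1.0],
                  [np.nan, 2.0],
                  [45.0, np.nan]])

# strategy="mean" fills each NaN with its column mean;
# strategy="most_frequent" would use the mode instead.
imputer = SimpleImputer(strategy="mean")
X_filled = imputer.fit_transform(X_toy)
print(X_filled)  # NaNs replaced by 40.0 and 1.5 respectively
```

This keeps all rows, which matters on small medical datasets where dropping records can cost a meaningful fraction of the data.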
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25, random_state=0)

To build a Decision Tree classifier from the training set, we just need the DecisionTreeClassifier() class. It has a number of parameters, which you can read about in the scikit-learn documentation. For now, we will just use the default value of each parameter. Use predict() on the test input features X_test to get the predicted values y_pred . The function score() can be used directly to compute the accuracy of the predictions on the test samples.

clf = tree.DecisionTreeClassifier(random_state=0)
clf = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(y_pred)
clf.score(X_test, y_test)

The DecisionTreeClassifier() without any tuning gives a result of around 77%, which we can say is not the worst.

To build an SVM classifier, the classes provided by scikit-learn include SVC , NuSVC and LinearSVC . We will build a classifier using the SVC class and a linear kernel. (To learn the difference between SVC with a linear kernel and LinearSVC, see https://stackoverflow.com/questions/45384185/what-is-the-difference-between-linearsvc-and-svckernel-linear/45390526)

svc = svm.SVC(kernel='linear', C=1)
scores = model_selection.cross_val_score(svc, X, y, cv=10)
print(scores)
print(scores.mean())

In this section, I am trying to show you a different approach to creating a classifier. The svc classifier object is created using the SVC class. The cross_val_score() function evaluates the score using cross-validation, which is used to avoid any kind of overfitting. k-fold cross-validation means k-1 folds of data are used for training and 1 fold for testing, repeated so that each fold serves as the test set once. The score obtained using this is around 79.5%.

Similar to the Decision Tree classifier, we can also create a Logistic Regression classifier using the LogisticRegression() class. The classifier is fitted on the training set and similarly used to predict target values for the test set.
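To make the k-fold idea concrete, here is a small sketch (on toy data, not the mammographic dataset) showing how scikit-learn's KFold partitions the samples; with 10 samples and 5 splits, each iteration trains on 8 samples (k-1 folds) and tests on 2 (1 fold):

```python
import numpy as np
from sklearn.model_selection import KFold

X_demo = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features each
kf = KFold(n_splits=5)                 # 5 folds of 2 samples each

fold_sizes = []
for train_idx, test_idx in kf.split(X_demo):
    # each split: k-1 folds train (8 samples), 1 fold tests (2 samples)
    fold_sizes.append((train_idx.size, test_idx.size))
print(fold_sizes)  # → [(8, 2), (8, 2), (8, 2), (8, 2), (8, 2)]
```

cross_val_score(svc, X, y, cv=10) does exactly this partitioning with 10 folds, fitting and scoring the model once per split and returning the 10 scores.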
It gives a mean score of 80.5%.

clf = linear_model.LogisticRegression(C=1e5)
clf = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
scores = model_selection.cross_val_score(clf, X, y, cv=10)
print(scores)
print(scores.mean())

Thus, if we want to build a single classifier, we can do it in just 10 lines of code 😄. And with little effort, we achieved an accuracy of 80%. You can create your own classification models (there are plenty of options) or fine-tune any of these 😛. Also, if you are interested, you can give Artificial Neural Networks a shot as well 😍. For me, I got the best accuracy of 84% with ANNs. You can find the code for this on my GitHub account.

If you liked the article do show some ❤ Stay tuned for more! Till then happy learning 😸
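If you want to try the fine-tuning route mentioned above, scikit-learn's GridSearchCV automates it. A hedged sketch on a synthetic stand-in dataset (the parameter grid values are illustrative, not tuned for the mammographic data):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the mammographic data: 4 features, binary target.
X_demo, y_demo = make_classification(n_samples=200, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo,
                                          test_size=0.25, random_state=0)

# Illustrative grid: GridSearchCV tries every combination with 5-fold CV
# and keeps the one with the best mean validation score.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_tr, y_tr)

print(search.best_params_)
print(search.score(X_te, y_te))  # accuracy of the refitted best model
```

The same pattern applies to DecisionTreeClassifier (e.g. max_depth) or LogisticRegression (e.g. C); only the estimator and the grid change.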
https://medium.com/datadriveninvestor/classification-from-scratch-mammographic-mass-classification-a0b5f53fb5
['Apoorva Dave']
2019-03-13 11:52:21.197000+00:00
['Machine Learning', 'Python', 'Classification', 'Data Science', 'Beginners Guide']
Being a First-Year Teacher During a Global Pandemic: a Survival Guide
This story has been published on The Educator’s Room

When I decided to go back to school to obtain my teaching license, I could never have dreamed my first year of teaching would be during a global pandemic. Who could have dreamed up a year like this? I began my journey after having and raising children, so I am quite a bit older than most first-year teachers. I got my undergraduate degree in Child Development while I ran a preschool in my town. Finally, my children were old enough to allow me to go back to school and pursue a Master’s in Elementary Education. It has always been my dream to teach elementary-age students. I received my Master’s in December 2019 and started a job as a small-group interventionist in a local school district in my state in January 2020. I was hoping to get a contract in the school district once the hiring season began. I could never have predicted what would happen next.

March 13, 2020, will live forever in my memory. I was getting ready to accept a long-term substitute position in the same school I had been working in as an interventionist. Suddenly, we were told to prepare to send the children home with work and computers. I was going to hiring conventions when the country went into lockdown in an effort to slow the spread of a virus called Covid-19. No one knew that we would not return for the remainder of the year. While in lockdown, I continued applying for jobs, taking interviews in local school districts via Zoom, and failing miserably to feel professional while I sat in front of my laptop camera at my kitchen table. By some small miracle, I was able to secure an interview which eventually led to a job offer at a small school in a rural mountain town about 40 minutes from my home. I accepted the position with excitement and began to plan my first year as a third-grade teacher!

The year began in the strangest of ways. I never set foot in the building and was hired from my home before meeting the staff or administration.
I was allowed to enter the school in July, just before school began, so I could look at my classroom and set it up before the students were due to arrive in August. I worked in the building for about 4 days before I met a single member of my teaching team. When I imagined my first year as a teacher, I never dreamed I would experience having my entire class go into medical quarantine twice, having to split classes with other teachers on my team because of a huge substitute shortage, or having to plan lessons completely on my own because my planning partner was sick for 3 weeks with the virus. I never dreamed that we would have to stay in our rooms for specials classes, eat breakfast and lunch in our classrooms, and cancel holiday parties due to a high number of cases in our classrooms. I never thought I would ever feel so lost, alone, and mentally exhausted.

We are being asked to balance the safety of our students and social distancing while still maintaining state testing as if this were a normal year. Our students' scores are all over the place because many of our students have missed a month or more of class due to being exposed to the virus. Yet we are responsible for collecting data, and that data is being counted toward judging our effectiveness as educators.

I am officially halfway through my first year of teaching, and I have somehow found a way to make room for my mental health and survival. In addition to being a first-year teacher, I am also getting my Doctorate of Education and have spent much of my free time in class online as well as doing research for my upcoming dissertation. I began this journey in February, just one month before the pandemic changed our lives forever. This year has been full of stress, reading, crying, comforting children, reassuring the families of my students, and balancing work life and family life to the best of my ability.
My students, my co-workers, my husband, and my own children have all had various struggles with maintaining good mental health during this very scary time. As a first-year teacher, I have stumbled, fallen, and finally found my footing this year in my own way. I discovered many things about myself during this time, and I would like to share them with anyone who could use support after starting a career as a teacher during a global pandemic.

1. Take time to do the things you love: It is so easy to let work and stress consume you even in a normal year. Do not forget to read the types of books you love, do the activities you enjoy, and take time to reflect in whatever ways you did before you began your new teaching career.

2. Relax: Remember to relax and let go of whatever stress you experienced at work each day. Take a hot bath, meditate, or pray. Whatever you need to do at work, it can wait until tomorrow. Your students need you to be renewed and healthy so that they can get the support they need from you each day.

3. Do not forget to call your friends and family: It is so easy to let work consume us in even the most normal of school years. Take the time each week to reach out to friends and family and talk about anything other than work.

4. Leave work at work: I know this is a difficult one, but it is perhaps the most important. Use your planning period and your early mornings before school to get all grading completed, record data, and send emails. Do not take this home with you. When you get home, make sure to focus on your family and your mental health, and leave all school matters at the school where they belong.

I know these things seem simple, but they will take practice and hard work to master. Teaching is a calling, and we have been called to do it during a very hard time. Take care of your mental health. This will not be easy; teaching was never easy. It is perhaps the most important job in the world.
Take care of yourself and have a great first year!
https://medium.com/@mrsgammon/being-a-first-year-teacher-during-a-global-pandemic-a-survival-guide-bf88f0125097
['Crystal Gammon']
2020-12-28 13:41:40.967000+00:00
['Teaching', 'Teachers', 'Covid 19', 'Education']
Java Bean Mapping — Usage of Mapping Libraries, Part II
MapStruct As mentioned, MapStruct generates mapping code at compile time. Therefore, its performance is the same as code we would write by hand. So how does MapStruct generate this code at compile time? 🤔 Annotation processing! Annotation processing is integrated into the Java compiler. By passing the processor flag to javac, we can invoke annotation processors. A processor can only generate new source files, not modify existing sources. If you are interested in the topic, I suggest checking this book: “Core Java Volume 2, Advanced Features”, Chapter 8.6.3, “Using Annotations to Generate Source Code”. MapStruct uses this approach to generate our mapping code. Now let’s see some code. I would like to recap our Request DTO and the Entity object. These are the objects we will map. Basically, to define our mapping logic, we declare an interface. MapStruct implements this interface and generates the concrete methods. We define our mappings with abstract methods. As can be seen, we should also annotate our interface with “Mapper”, and a component model should be added for dependency injection. After compilation, there will be concrete classes under target/generated-sources/annotations/… This code is generated by MapStruct according to our definitions. The library can infer the source and destination from the parameter and the return value. Besides, we can also implement list mapping easily. List conversion applies the one-to-one mapping inside a for-loop. It also adds a component annotation for Spring, because we explicitly declared that this component belongs to Spring: componentModel = “spring”. Now, let’s add more functionality to our mapper. As can be seen in the object, I added a new field called Status. The purpose of this field is to keep the user status. When the user registers, we should initialize the user with active status. You may want to achieve this during the mapping process. 
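Since the generated implementation is ordinary Java, it helps to see roughly what such a mapper and its generated class look like. The sketch below is hand-written for illustration only; the UserRequest and User names here are hypothetical stand-ins for the Request DTO and Entity, not the exact classes from the project:

```java
// Hypothetical DTO and entity, standing in for the article's Request DTO and Entity.
class UserRequest {
    private final String name;
    private final String email;
    UserRequest(String name, String email) { this.name = name; this.email = email; }
    String getName() { return name; }
    String getEmail() { return email; }
}

class User {
    private String name;
    private String email;
    void setName(String name) { this.name = name; }
    void setEmail(String email) { this.email = email; }
    String getName() { return name; }
    String getEmail() { return email; }
}

// The contract we would declare as a @Mapper interface with an abstract method.
interface UserMapper {
    User toEntity(UserRequest request);
}

// Roughly what MapStruct emits under target/generated-sources:
// plain, null-safe getter-to-setter calls, nothing more.
class UserMapperImpl implements UserMapper {
    @Override
    public User toEntity(UserRequest request) {
        if (request == null) {
            return null; // generated implementations are null-safe by default
        }
        User user = new User();
        user.setName(request.getName());
        user.setEmail(request.getEmail());
        return user;
    }
}
```

Because the result is this kind of straightforward code, there is no reflection overhead at runtime, which is where the hand-written-level performance comes from.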
After mapping all parts, we also want to add some custom logic. For this, MapStruct offers an annotation. As of Java 8, we can add concrete methods to our interfaces as static or default methods. With the AfterMapping annotation, such a method is called at the end of the mapping process from the concrete class. We should also add the MappingTarget annotation to indicate which object is targeted. Therefore, after the mapping, we can initialize the Status field with its default value. Let’s look at another example. Suppose that we should save our companies to the database with the “Company” keyword appended. How can we achieve this kind of mapping in MapStruct? 🤔 We add the Mappings annotation, which can consist of many mappings. In each Mapping annotation, we specify the source and the target. Then we can define our own logic for the mapping process. In this case, we can use the Named annotation to give a specific name to our default method, and then refer to that name in the mapping. In the concrete class, the method is called and its return value is set on the Company field as below: user.setCompany( mapCompany( userRequest ) ); With this approach, you can define your own mapping rules for specific fields. We can also define mappings for specific types. For example, for all Long-to-String conversions, we may need a specific mapping. To do this, we add the uses parameter to the Mapper annotation. As can be seen above, the Token Mapper was used for the mappings. You can add more classes inside the curly brackets. In this case, for all Long-to-String field conversions, this concrete method will be used. MapStruct infers the conversion from the parameter type and the return type of the method. It is time to map the User entity to the User Response object. Let’s see different examples! 
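Put together, the generated class for the examples above behaves roughly like the following hand-written sketch. RegistrationRequest, Account, Status, and mapCompany are hypothetical names used only to illustrate the shape of the generated code:

```java
// Hypothetical types for illustration only.
enum Status { ACTIVE, PASSIVE }

class RegistrationRequest {
    private final String company;
    RegistrationRequest(String company) { this.company = company; }
    String getCompany() { return company; }
}

class Account {
    private String company;
    private Status status;
    void setCompany(String c) { this.company = c; }
    void setStatus(Status s) { this.status = s; }
    String getCompany() { return company; }
    Status getStatus() { return status; }
}

class RegistrationMapper {
    // Equivalent of a @Named default method: append the "Company" keyword.
    String mapCompany(RegistrationRequest request) {
        return request.getCompany() + " Company";
    }

    // Equivalent of an @AfterMapping default method with a @MappingTarget parameter.
    void initializeStatus(Account target) {
        target.setStatus(Status.ACTIVE);
    }

    Account toAccount(RegistrationRequest request) {
        if (request == null) return null;
        Account account = new Account();
        account.setCompany(mapCompany(request)); // named method wired into the field mapping
        initializeStatus(account);               // called at the end of the mapping process
        return account;
    }
}
```

The generated code simply calls our default methods at the right points, which is why the whole mechanism stays debuggable and transparent.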
Suppose that we have to add another parameter to our mapper method because we would like to set this parameter on a destination field. User Response Object As you may have noticed, AccountType has been added to the response object. How could we pass AccountType as a parameter to our mapper method, like below? To achieve this, we have to learn one more annotation: Context! We can add more parameters to our mapper method with the Context annotation. With AfterMapping, AccountType can then be set as in the example! Thanks to the Context annotation, we can bind parameters to each other. Instead of setting values statically, this approach is recommended. Wait! This is an abstract class! Could we use abstract classes instead of interfaces? The answer is yes. As of Java 8, we can use interfaces instead of abstract classes thanks to default and static methods. But there is one good reason for using an abstract class rather than an interface: if we have to inject other services into our mapper, we should use an abstract class. During the mapping process, we may need other services to retrieve data. In our case, there is a country service that consists of one method. With this method, we can retrieve the country name from the country id. For now, appending the “Country” keyword to the end of the country id will be enough. How could we inject this service into our mapper? There is one more parameter that belongs to the Mapping annotation: expression. We can write our own expressions manually in this field, and for the country name mapping this expression will be used. For two reasons, I did not find this approach satisfying. First off, Autowired was used for dependency injection, which is field injection. The other reason is that the expression seems too manual: we have to write the whole expression as a String, which is prone to mistakes. Fortunately, if there is a compile-time error in our mapping definitions, MapStruct will produce an error. 
You may also consider checking out more uses of these mapping libraries. Links have been added to the references. For all code and configuration, the GitHub URL has been attached. References:
https://iyzico.engineering/using-mapping-libraries-part-ii-a2ce6178ba1
['Güven Seçkin']
2020-08-04 10:34:10.432000+00:00
['Java', 'Mapping']
Christmas and Real Estate Strategy
Real estate agents can take a break from the hustle and bustle of summer by taking a vacation during the holidays. Real estate professionals often include a Christmas slump in their annual plans, assuming they won’t have to be at closings or open houses nearly as often. There’s nothing wrong with enjoying the holidays and taking a step back, but it is easy to become complacent as the year ends. Real estate marketing plan It’s time to look ahead to next year as another year draws to a close. What can you do to get the highest return on your investment when you create new marketing strategies? A real estate marketing plan may be helpful. This important tool helps you organize your marketing efforts and ensures they are done correctly.
· Mobile optimization of your website
· Email marketing to keep in touch with clients and prospects
· Content marketing to show off your expertise
· An active social media presence
· Soliciting and managing online reviews
Many of the strategies listed below can be integrated into your real estate marketing plan. The Christmas slowdown in real estate makes this the perfect time to adapt your plan to current needs. Remind clients you are still available Although many factors can cause an annual slowdown in real estate sales, clients often assume that agents won’t be interested in, or able to get involved with, fast-paced holiday sales. Many people will feel more comfortable listing their home or looking into buying opportunities if they are assured of your availability. These reminders can be as simple and quick as an email or a message on Instagram Stories. Paid Ads To secure the best clients to list in spring, the nurturing process (which includes building strong relationships with potential clients) should be started in the fall. Paid advertising can give you a quick boost if you need one. 
You can target specific audiences with this approach, rather than trying to reach everyone and hoping your message resonates. Google Ads can bring potential clients to your site, where they can find out more about exciting real estate opportunities. Facebook ads might be a better option for growing your social media following; Instagram and LinkedIn also offer paid options. Whatever your marketing strategy, it is important to target your audience carefully. This will ensure your marketing dollars go to qualified leads that are likely to convert. Paid campaigns should include holiday-oriented keywords and content. You might use these terms to attract the attention of potential buyers and sellers looking to sell or buy homes during the Christmas season. Charitable events Christmas marketing does not have to be all about business. You should be an active member of your community as a real estate agent. Your reputation will be strengthened if you show that you are committed to your local community and are willing to help. Show appreciation by sending client gifts The closing gift is an essential part of real estate transactions. It should be a thoughtfully chosen gift that shows appreciation for your clients and ends the whole process on a positive note. The Christmas season is a great opportunity to add holiday flair to this time-honored tradition. You can show appreciation with small, personal tokens. This will remind previous clients of the important role you played in their lives and keep you at the forefront of potential clients’ minds, increasing your chances of being referred or even working with former clients again when they become second-time homebuyers. Finish the year on a high note by implementing your Christmas real estate marketing strategy Christmas real estate marketing is a great way to win new clients, strengthen existing relationships, and prepare for the next year. 
Do not let the holiday season pass you by. Use this opportunity to set your goals for the year and keep clients thinking of you. A simple Christmas card, especially a personalized one, can show clients that you care about their community. The positive responses you get will be amazing.
https://medium.com/@maniarain4/christmas-and-real-estate-strategy-f2108a2214ec
[]
2021-12-20 09:12:31.034000+00:00
['Christmas', 'Property', 'Mortgage', 'Dealty']
astringent
this time it wasn’t to be and that would normally be disappointing, but okay her thoughts circled her head in rotation on warp speed, endless cycle all she had to do was get in the car and drive to the store buy something, anything chocolate melting in her mouth blood singing with sugar, instant ways of forgetting instead, she went into the bathroom with Clorox and cleaned the toilet trying to scrub the male dominance away with bleach and toilet brush the smell, a lion’s cage roar sharp and undeniably fragrant, burning her nose she needed a space to forget, calm her anger, her loneliness, her freshly opened wounds distraction by astringent, anything to keep herself from leaving, flushing away into destruction by sugar ©Jennifer Cowie King
https://medium.com/loose-words/astringent-e4d8d7c63e78
['Jennifer Cowie King']
2020-12-05 15:30:32.188000+00:00
['Addiction', 'Emotions', 'Poetry', 'Free Verse', 'Self']
Distributed subsegment sums
Task We have a fixed array of integers of size N. There are also incoming requests that we need to process. Each request consists of two indices i and j. The response to that request should contain the sum of all elements of the array from index i to index j, that is, a[i] + a[i + 1] + … + a[j]. The array is so large that it does not fit into the memory of a single computer. Time complexity: O(1) per request. Solution Solve for a single machine First, let’s understand how we would solve this task if the data fit into the memory of a single machine. In that case, we can calculate the prefix sums array p: p[0] = 0 and p[i] = a[0] + a[1] + … + a[i - 1] for i >= 1. Then the answer to the request (i, j) is p[j + 1] - p[i]. For example, if the input array is a = [1, 2, 3, 4, 5], then we first construct the prefix sums array [0, 1, 3, 6, 10, 15]. If we encounter a request with indices (2, 4), we compute 15 - 3 = 12, which is the same as computing 3 + 4 + 5 = 12. So, to conclude, we preprocess the array by constructing the prefix sums array. Then, on each request (i, j), we respond with p[j + 1] - p[i], which is O(1) time per request. Multiple machines problem If we want to apply the same approach with multiple machines, we immediately face a problem. We cannot simply preprocess the data to compute prefix sums, because the data is scattered randomly across multiple machines. But even more fundamentally, there is the question of how the data is stored on the machines in the first place. When the data was stored on a single machine, the order of the data was obvious: we could easily tell which element is the 1st, which is the 2nd, and so on. But now, when the data is stored across multiple machines, how do we even define the order of the elements in the array? There are many ways to define the order of elements. Let’s take the simplest one: each element will be stored together with its index. 
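Before moving to the distributed setting, the single-machine prefix-sum scheme above can be sketched in a few lines of Python:

```python
def build_prefix_sums(a):
    """p[0] = 0 and p[i] = a[0] + ... + a[i - 1], so p has len(a) + 1 entries."""
    p = [0]
    for x in a:
        p.append(p[-1] + x)
    return p


def range_sum(p, i, j):
    """Sum a[i] + ... + a[j] in O(1) using the prefix sums array p."""
    return p[j + 1] - p[i]


p = build_prefix_sums([1, 2, 3, 4, 5])
# p == [0, 1, 3, 6, 10, 15]; request (2, 4) gives 15 - 3 = 12
```

Preprocessing is O(N) once, and every request afterwards is a single subtraction.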
So if we want to store the array [3, 7, 9, 8] on 2 machines, one can store the pairs (0, 3), (2, 9) on one machine and the pairs (1, 7), (3, 8) on the other. The first element in a pair is the index of a number, and the second element is the number itself. Now at least we have a semantically clear notion of the order of elements. But we still need to somehow compute the prefix sums. Let’s try to do that with map-reduce. Preprocessing with Map-Reduce In map-reduce, we have distributed key-value tables (k, v), and there are several operations allowed on this data:
Map: apply some function to each row (k, v) to transform it into 0, 1 or many rows of the output table
Reduce: if the table has multiple rows with the same key k: (k, v1), (k, v2), …, (k, v_n), clamp those rows into a single row with the values grouped in an array: (k, [v1, v2, …, v_n])
Join: if we have two tables keyed by keys of the same type, produce a third output table which has values from both initial tables. For example, if the first table has a row (k, v1) and the second table has a row (k, v2), produce the output table with the row (k, [v1, v2]).
Coming back to our initial task, let’s assume the data is initially stored in a table keyed by indices, whose values are the numbers of the array. We will denote it like this: (i, a[i]). Let’s compute the following tables for each k: T(k) = (i, a[max(i - 2^k + 1, 0)] + … + a[i]). T(0) is exactly (i, a[i]), which we already have. T(log(N) + 1) = (i, a[0] + … + a[i]), which is the one we want (the prefix sums table). The only thing remaining is to explain how to compute T(k + 1) having already computed T(k). T(k) = (i, a[max(i - 2^k + 1, 0)] + … + a[i]). First we map it by adding 2^k to all keys: M(k) = (i + 2^k, a[max(i - 2^k + 1, 0)] + … + a[i]). Secondly, we notice that M(k) = (i, a[max(i - 2^k - 2^k + 1, 0)] + … + a[i - 2^k]). It is the same table, just expressed differently by substituting i with i - 2^k. Now let’s join T(k) and M(k) and sum up the values. 
Then we get exactly T(k + 1), because a[i - 2^k - 2^k + 1] + … + a[i - 2^k] + a[i - 2^k + 1] + … + a[i] = a[i - 2^(k + 1) + 1] + … + a[i]. So we have explained how to compute T(k + 1) having already computed T(k). T(0) is our initial table. From it we compute T(1), then from T(1) we compute T(2), and so on until we reach T(log(N) + 1), which contains all the prefix sums of the initial array. We’ve learned how to compute prefix sums using map-reduce; now we should also learn how to answer the client request (i, j). How to respond to request (i, j) We can put our map-reduce-computed table (i, p[i]) into a distributed real-time key-value storage. After that, on each request (i, j), we get the value p[i] by key i and p[j + 1] by key j + 1, and compute p[j + 1] - p[i]. So all the remaining complexity is hidden in the implementation of the distributed real-time key-value storage, which we do not discuss in this solution because it is a bit out of the scope of the problem. That said, distributed real-time key-value storage is a very complicated piece of infrastructure if done properly, accounting for machine failures, network splits and hot keys.
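The doubling construction T(0) → T(1) → … can be simulated on a single machine with plain dicts standing in for the distributed tables. This is only a sketch of the algorithm's logic, not a real map-reduce job:

```python
def prefix_sums_by_doubling(a):
    """Simulate the map-reduce doubling.

    Invariant: t[i] = a[max(i - 2^k + 1, 0)] + ... + a[i], i.e. the table T(k).
    """
    n = len(a)
    t = {i: x for i, x in enumerate(a)}  # T(0): rows (i, a[i])
    shift = 1                            # 2^k
    while shift < n:
        # Map step: add 2^k to every key, producing M(k).
        m = {i + shift: v for i, v in t.items()}
        # Join T(k) and M(k) on the key and sum the values;
        # keys with no match in M(k) (i < 2^k) already hold a full prefix.
        t = {i: v + m.get(i, 0) for i, v in t.items()}
        shift *= 2
    # t is now the prefix sums table: t[i] = a[0] + ... + a[i]
    return t
```

Each loop iteration doubles the window length, so log(N) rounds of map and join suffice, matching the T(0), T(1), …, T(log(N) + 1) chain described above.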
https://medium.com/verbetceteratech/distributed-subsegment-sums-7c58e3eb22d3
['Oleg Tsarkov']
2020-06-04 10:26:00.390000+00:00
['Mapreduce', 'Arrays', 'Distributed Systems', 'Subarray', 'System Design Interview']
Five Time-Saving Hacks to Use Plagiarism Checker X!
Five amazing tips and hacks to use Plagiarism Checker X Let’s admit it: we all have tons of pending work poking at the back of our minds. Yet we keep procrastinating and delay it till the 11th hour! Why not try some cool tips and tricks to analyze your documents faster with Plagiarism Checker X? Interested, huh? Let’s dive straight into it… 1 — Use Plagiarism Checker X in your Native Language Native language? Yes, you got it right. According to several studies on native-language reading published from 2000 to date, it is clear that you read and work faster in your native language. Why? Because it has been built into your brain since you were born. In fact, you started to recognize it early on through your mother’s voice. So, taking advantage of this natural ability, you can use the PCX software in your native language. Plagiarism Checker X is available in 20+ languages including English, Spanish, French, German, Italian, Dutch, and Portuguese, with even more languages coming soon. No more language barriers for users around the globe. 2 — Easy Reading with Color-Coded Text Easy on the Eyes: Color-Coded Reports According to several studies on the effect of reading color-coded words on fluency and decoding ability, a color-coded text segment is easier to read due to several factors, the main one being the cognitive pattern our brain uses to process text and visuals. The Online Plagiarism Report is visually appealing and easy to understand. Its COLOR-CODED layout makes it easy to pick out duplication sources at a glance. The copied content is given a particular color, depending on the percentage of the duplication. Finally, you can export the report into standard Word, PDF or HTML file formats. The current version provides built-in Word and PDF support, so you can use this feature even if you do not have Microsoft Word installed on your system. 3 — Use the Dashboard to quickly access files No need to remember a manual file path. 
Now, you can use the Main Dashboard Layout to access files directly. Isn’t it great? Quick access to all files and selecting from recent files make the usage even more seamless. The dedicated dashboard layout gives users more control when organizing plagiarism report content. It provides useful insights about cumulative plagiarism checking and makes it easy to navigate around the features. 4 — Scan past content to avoid self-plagiarism Scan against your own previous documents Well, to err is human. It is entirely possible that due to lack of time or focus, or in a rush, you could accidentally copy or misquote even your own work. This can happen when you unknowingly repeat similar ideas from your previous work, or when there are sections in your text where you should mention what you did in the past. When done with your draft, try to scan your current work against your referenced past documents to avoid hassle in the future. You can easily do this using BULK COMPARISON in Plagiarism Checker X. If you want to know more about how you can use this feature, feel free to read more about it here. 5 — Use Lunch Breaks to save scanning time Analyze your text during a break. When meeting a deadline, every minute counts, so do not waste a single moment. It is always a good idea to use passive time effectively: showers, lunch or tea breaks, or even a small nap. Before you leave your desk for any such reason, simply load your file for a similarity check, and get complete results on your return, ready to start the next tasks of report writing. We all know how important it is to get done faster with those long-pending deadlines. So, even though you will be away from your desk, you will practically be utilizing this break time effectively. Hey, let’s catch up on PCX Instagram to hear your hacks. Hey, you know what, our Plagiarism Checker X family is growing. PCX is now on Instagram too. 
We are excited to reach more people who believe in the power of original content. Let’s catch up on Insta @plagcheckerx and share your stories of quality content. Stay connected for all updates: www.plagiarismcheckerx.com | www.plagx.com
https://medium.com/@plagiarismcheckerx/five-time-saving-hacks-to-use-plagiarism-checker-x-8a6dd1df2abe
[]
2020-12-09 10:05:27.396000+00:00
['Plagiarism Checker Tool', 'Tips And Tricks', 'Original Content', 'Similarity Score', 'Hacks']
Why developers love TypeScript every bit as much as Python
Why developers love TypeScript every bit as much as Python And why you might consider switching if you’re dealing with front-end web, or back-end Node development Python and TypeScript are among the most-loved programming languages. Photo by Obi Onyeador on Unsplash Python is my bread-and-butter, and I love it. Even though I’ve got some points of criticism against the language, I strongly recommend it for anybody starting out in data science. More experienced people in the field tend to be Python-evangelists anyway. However, this doesn’t mean that you can’t challenge the limits in your field from time to time, for example by exploring a different programming paradigm or a new language. The list of Python’s competitors is long: Rust, Go, Scala, Haskell, Julia, Swift, C++, Java, and R all find an entry on it. In such a crowded field, it’s quite understandable that JavaScript and TypeScript don’t tend to get that much recognition. That doesn’t mean that TypeScript isn’t a staple in the general programming world. Among data scientists, however, it’s never been particularly popular. You could conclude that TypeScript may not be a good match for data science. But don’t rush. Although it might not be suitable for every part of data science, there are areas where it has distinct advantages over Python. If you happen to work in one of these areas, it’s worth giving TypeScript a shot. And if you don’t, who knows where you’ll land next? The field is moving fast. You have a competitive advantage if you can look beyond your nose. TypeScript: JavaScript, but type-safe About a decade ago, software engineers at Microsoft noticed that JavaScript wasn’t meeting all their needs any more. On the one hand, the language was evolving rapidly and adding extremely interesting new features. On the other hand, none of the features in the pipeline could solve one fundamental problem: JavaScript was great for small programs, but writing whole applications with it was a mess. 
There were a few possibilities for solving the problem: one could, for example, use a different language, or redesign JavaScript with a better syntax. The development team at Microsoft took a different approach: they created TypeScript by extending JavaScript. This way, they were able to use all the new features of JavaScript while getting rid of the parts that weren’t good for big projects. Similarly to Python, JavaScript is only checked at runtime, meaning that you need to run the code to debug it. TypeScript, on the other hand, is compiled ahead of time. This provides an extra layer of safety, because programmers get information about possible bugs before execution time. With dynamically checked languages like Python or JavaScript, it can be quite time-consuming to locate bugs once you’ve realized that your code isn’t behaving as expected. The key difference between JavaScript and TypeScript, however, is the type checker. Upon compilation, every TypeScript program is checked for whether its data types are consistent with one another. This might sound trivial to do manually. But when you’re working on projects with thousands of lines of code, you’ll thank the Lord for having it. There are a few other differences, like TypeScript’s first-class support for anonymous functions and asynchronous functions. Anonymous functions are a key feature of functional programming, which can make a program more efficient with big data loads. Asynchronous programming is extremely useful when you need to perform multiple operations in parallel, or when you’re dealing with I/O operations that shouldn’t interrupt background processes. Asynchronous programming is possible in Python and JavaScript, but in TypeScript it’s built in from the core. TypeScript is a superset of JavaScript. Image by author, with references to Guru99 and Wikipedia. 
How TypeScript became popular If you hate the illogical but hilarious WTF-moments that keep happening in JavaScript, then I have bad news for you: you’ll see all that stuff in TypeScript, too. But these syntactical hiccups aren’t the reason why programmers love TypeScript so much. Rather, it boils down to the idea that you take a great product — JavaScript — and add something even greater — static typing. Of course I’m not saying that JavaScript is always great, or that static typing is always the best way of doing things. But for building large Microsoft-style applications, this seems to work extremely well. That being said, TypeScript still only has a fraction of the popularity that JavaScript has. This could be attributed to age: TypeScript is eight years old, JavaScript is three times as old, and Python is also an oldie in the field at thirty. Despite its young age, there are fields where TypeScript is unavoidable, and this adds to its popularity. For example, when Google announced that Angular would run with TypeScript in 2016, the number of TypeScript-tagged questions on StackOverflow exploded. TypeScript only shares a fraction of the traction that Python and JavaScript have. Still, it’s unavoidable in some areas. Image from StackOverflow Trends Where TypeScript might have an edge over Python What made JavaScript popular back in the day is that it runs everywhere. You can run it on a server or in your browser or wherever you like. You compile it once, and it works everywhere. When that first came out, it almost seemed like magic. As it’s built on JavaScript, TypeScript shares that magic. Python does, too! Granted, it has a slightly different implementation since it uses an interpreter instead of a virtual machine. But that doesn’t change the fact that in terms of run-me-anywhere, TypeScript and Python are on par. Features like generics and static typing make it easier to do functional programming in TypeScript than in Python. 
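As a small illustration of that point, here is a self-contained sketch (the function names are made up for the example) of generics, static typing, and anonymous arrow functions working together; a mismatched call is rejected by the compiler before the code ever runs:

```typescript
// A generic, type-checked composition helper built from anonymous arrow functions.
function pipeline<A, B, C>(f: (a: A) => B, g: (b: B) => C): (a: A) => C {
  return (a) => g(f(a));
}

const parseLength = (s: string): number => s.length;
const isLong = (n: number): boolean => n > 5;

// The compiler infers pipeline<string, number, boolean> and checks the wiring.
const check = pipeline(parseLength, isLong);

console.log(check("TypeScript")); // true
// check(42) would fail at compile time: number is not assignable to string.
```

In Python you could write the same composition, but the type mismatch would only surface at runtime (or via an optional checker like mypy), not as a hard compile error.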
This could be an advantage because demand for functional code is growing due to developments in data science, parallel programming, asynchronous programming, and more. On the other hand, Python has been adding more and more features of functional programming, too. And when it comes to data science, machine learning, and more, Python is at the forefront. That leaves parallel programming and asynchronous programming on the table. Even though you can pull both of these off in either language, there is a big difference: in Python, you need to use particular libraries for the task. In TypeScript, asynchrony is built in from the core. And since the latter is a bit more functional by default, it’s often a tiny bit easier to do parallel programming. In other words, if you’re a Python developer who is involved in asynchronous processes and parallel computing, you might want to give TypeScript a try. Is TypeScript better than Python? Sometimes. Photo by THE 9TH Coworking on Unsplash What makes TypeScript great for data science — or not Many data scientists deal with asynchronous and parallel programming. You might already be considering writing your next project in TypeScript rather than Python. Whether that’s a good idea depends on many other factors, though. First of all, TypeScript doesn’t have a straightforward way of doing list comprehensions. This can be frustrating when dealing with large arrays, for example. Second, there are no native matrix operations in TypeScript. Python has NumPy, as well as a host of other tools, that make them easy. So if your project is heavy in linear algebra, you might want to stay away from TypeScript. Third, if you’re not too familiar with JavaScript, you’re almost guaranteed to get some moments of confusion. Since TypeScript is built upon JavaScript, it inherited all of its features — the good, the bad, and the WTF. 
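To make the first point concrete: a Python comprehension like `[x * x for x in xs if x % 2 == 0]` has no direct TypeScript syntax, and the idiomatic substitute is a `filter`/`map` chain:

```typescript
const xs: number[] = [1, 2, 3, 4, 5, 6];

// Equivalent of the Python comprehension [x * x for x in xs if x % 2 == 0]:
// keep the even numbers, then square them.
const evenSquares: number[] = xs.filter((x) => x % 2 === 0).map((x) => x * x);

console.log(evenSquares); // [4, 16, 36]
```

This reads fine for one condition and one transform, but note that each chained call allocates an intermediate array, which is part of why the lack of comprehensions can sting on large arrays.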
Although, if I’m honest, encountering these phenomena can be quite amusing, too… Finally, you’ll want to take into account that programming isn’t a solitary occupation. There is an enormous community for Python in data science which offers support and advice. But at this point in time, TypeScript isn’t that popular among data scientists. So you might not be able to find that many helpful answers to your questions on StackOverflow and elsewhere. That being said, if you’re starting a small project without too many big arrays and matrix operations, you might want to give TypeScript a go anyway. Especially if it involves some parallel or asynchronous programming. The bottom line: know where to use your tools There is no one language for every task. Sure, some languages are more fun or more intuitive than others. Of course it’s important that you love your tools because that will keep you going when times are tough. Starting with a well-loved language like TypeScript or Python is therefore not a bad idea. But at the end of the day you shouldn’t stick to one language like a religion. Programming languages are tools. Which tool is best for you depends on what you’re trying to do with it. At the moment, Python is huge for data science. But in a rapidly evolving field, you need to be able to look past your nose. As your tasks are changing, so might your tools. TypeScript, on the other hand, has a buzzing community around front-end web, back-end Node, and mobile development. What’s interesting is that these areas intersect with data science more often than one thinks. Node, in particular, is gaining more and more traction among data scientists. Of course this doesn’t mean that you should dabble with a dozen languages at a time. There is enormous value in knowing one language really well. But being curious about other languages and technologies will help you stay ahead of the curve in the long term. So don’t hesitate to try something new when you feel like it. 
Why not with TypeScript?
https://towardsdatascience.com/why-developers-love-typescript-every-bit-as-much-as-python-687d075cfb5e
['Rhea Moutafis']
2020-10-18 14:14:58.810000+00:00
['Software Development', 'Typescript', 'Towards Data Science', 'Programming', 'Programming Languages']
Pregnant Women Face Additional Diabetes Risk
Pregnant Women Face Additional Diabetes Risk More pregnant women than believed may be at risk for a dangerous condition that can lead to difficult deliveries of huge babies, juvenile diabetes in the child, or severe birth defects. This week, a large-scale study of pregnant women in nine countries showed that current guidelines for diagnosing a condition called gestational diabetes mellitus, or GDM, may be too generous, letting at-risk women fall through the cracks. GDM — which is essentially diabetes that occurs during pregnancy — can lead to a whole host of problems for mother and baby. To learn more, Dr. Boyd Metzger, a professor of endocrinology at Northwestern University, led an international research team that studied more than 23,000 pregnant women around the globe from about 28 weeks of gestation until birth. The moms-to-be had their blood sugar levels tested around 28 weeks of gestation. First, they had a fasting blood draw; then, they consumed sugar and had their blood drawn again after two hours. This “glucose tolerance test” is often used to confirm diabetes. The scientists found that women with higher blood sugar levels were more likely to have very large babies and need a C-section. In the group of moms with the lowest blood sugar, only 5 percent of the babies were in the top 10 percent in terms of size, while among the moms with the highest blood sugar, 20 percent had oversize babies. The results were presented at the 67th Annual Meeting of the American Diabetes Association. The study seems to hint that even women with borderline high blood sugar may be at risk for complications from GDM. Women in the study who were found to have excessively high levels of blood sugar — more than 105 milligrams per deciliter of blood fasting, or more than 200 after the glucose tolerance test — were pulled from the study and treated for safety reasons. 
But many of the remaining women whose sugar was on the “high side” but not high enough to keep them out of the study experienced some of the same complications as women with GDM. Metzger said he expects that, as a result of this study, the threshold for blood sugar levels in pregnant women may be lowered when it comes to testing for diabetes. Roots of Diabetes Diabetes is a disease of the pancreas — a pistol-shaped organ in the abdomen responsible for making the hormone insulin, which helps the body store sugar. In diabetics, the pancreas either can’t make insulin anymore (type 1) or makes insulin that the body “resists” (type 2). Either way, without insulin the body has no way to store the food you eat. Seventy percent of women with GDM develop full-blown diabetes within 10 years, meaning that they have blood sugar that needs to be controlled all the time. But endocrinologists note that most cases of GDM are really regular diabetes that just gets discovered during prenatal OB appointments. In a diabetic mother-to-be, the delicate hormonal balance that supports a growing fetus can be disrupted. “In most mammals, insulin resistance increases in pregnancy,” said Dr. Tom Buchanan, an endocrinologist who attended the diabetes meeting, adding that this is thought to be a way for more sugar to get to the blood of the hungry fetus. But, in diabetic mothers, “the baby gets more glucose than it needs. That baby’s insulin production goes up due to the increased exposure to blood sugar.” This is why, in addition to overweight babies, the study authors also found that when the babies were born, they tended to have dangerous drops in blood sugar soon after they were breast- or bottle-fed. Their bodies were literally soaking up nutrition at an abnormally high rate — because they were used to it. Screening, Treatment Both Important Treatments for GDM include dietary therapy, insulin, or pills such as metformin and glyburide — drugs that don’t cross the placenta to the baby.
But are women with borderline-high blood sugar at risk for some of the worst outcomes, like birth defects? Buchanan doesn’t think so, since none of the women in the study had stillborn babies or babies who otherwise died at an early age. Still, he emphasized that the more pregnant women are studied, the more we will know about what level of blood glucose is associated with problems at birth. “The relative risk [for those with slightly above normal blood sugar] might be very small. … What we generally do in medicine is set a low threshold and treat everyone.”
https://medium.com/@medjournalist/pregnant-women-face-additional-diabetes-risk-76b144c151fc
['Monya De', 'Md Mph']
2016-06-27 18:06:44.593000+00:00
['Health', 'Diabetes']
Drivers Declare War on Uber, and Rydzz Is the Big Winner
We all (RYDZZ inclusive) know how we got here: this fight between workers, employees (or independent third-party contractors, as Uber would have them go by) and their employers, over their work data. In a classic man vs. machine setup, drivers may well be the Connors and tech giants, the T-1000. Gone are the days when employees trusted the little pop-up that told them how well and carefully companies would handle their personal data once they clicked/tapped on “I Agree” (consumers stopped trusting that a long time ago). Tech companies collect a bulk of data on both their clients and employees, which serves as precious metrics to measure the quality of their services, client tendencies, and employee performance. Of course these companies say they protect this data, but given Facebook’s recent revelations (just to name one case), nothing is certain anymore. It comes as no surprise, then, that Uber drivers are going at the rideshare giant with their pitchforks, demanding that their data be released. RYDZZ, a new ridesharing company about to hit the market, will be watching with keen interest and taking rookie notes as best it can as Uber struggles with a slew of bad decisions and legal battles. Uber, much like RYDZZ, collects driver data to measure performance through rider ratings, reviews, and other information. This data is, however, not released to the drivers, something which former Uber driver James Farrar believes to be unfair. According to the General Data Protection Regulation (GDPR), workers are entitled to demand the data their companies have collected on them, and the European Union is debating how much data companies should release to employees, researchers, and entrepreneurs; its stance is a demand for more freedom, something which big firms like Uber totally oppose. Farrar has over 100 Uber drivers with him as the case goes to court.
Uber has said that the GDPR does not require it to create an extra database, which is what would have to happen if it had to release employee data. RYDZZ is closely following these developments, laden with opportunities to best its competitor.
https://medium.com/@ringnyushalom/drivers-declare-war-on-uber-and-rydzz-is-the-big-winner-d85dd7873a46
['Robert Dwight']
2019-11-14 23:17:31.208000+00:00
['Rydzz', 'Ridesharing', 'Rideshare Drivers', 'Uber', 'Lyft Vs Uber']
How to Fix Your Broken Bank Account
Quit Stealing From Your Piggy Bank! Does your bank account swing up and down on a regular basis? Are you tired of checking your account and finding there is barely any money there? The good news is that if you invest 15 minutes upfront and then 10 minutes a week, you can fix that broken bank account. How do you fix a broken bank account, you ask? Be a better money manager! Even in the best of times, it is important to be conscious of your spending, create a budget, and use your resources to give you the biggest bang for your buck. When times get tough, it is even more of a moment to step up and be more conscious of your spending. Here are simple steps to become a better money manager! Week 1: Build a Budget A simple budget seems so daunting, but it is much easier to reach your goals when you use one, and they are simpler to build than most think. Step 1: Spend 15 minutes and start a fresh budget. Just start thinking about all the places you spend money and create a list of the expenses and the amounts; guesstimates work! It won’t be perfect at first, but you will start to remember other costs and add them to the list. Step 2: Create a “budget amount” next to each entry. For some entries, it will be the same amount. For some, you might try to budget to spend less money, or for things like savings, you may decide to plan to put more money aside. Step 3: Try it out for a week and be more aware of where you are spending your money. Week 2: Evaluate Your Dynamic and Static Expenses First, it is crucial to understand the difference between dynamic and static expenses. Second, spend at least 10 minutes in Week 2 evaluating this vital component. Static expenses are those that stay the same month over month and don’t change or make your life better. These are things like rent, utilities, etc. Dynamic expenses are things like paying for courses, coaching, or other training that will directly help you make more money.
Saving up and buying a new laptop might be another dynamic expense as it could give you the tools to build your side hustle. 🙂 You will want to trim static expenses by negotiating with vendors or re-evaluating whether the fee still serves you. Often as you look at your spending, you will identify ways in which you can trim down. On dynamic expenses, it is critical to recognize that when you cut some of these expenses, it can reduce your opportunity to make more money. Spend 10 minutes reviewing your list and making adjustments as necessary to reach your personal goals, like a holiday fund, a vacation fund, or general savings. Week 3: Be Brutally Honest About Your Spending Spend 10 minutes evaluating what “extras” you purchased in the last two weeks that were above the budget. Be brutally honest and answer these questions: Were they necessary? Could you have gone with a lower-cost solution? Could you have purchased a smaller quantity? Week 4: Increase Your Income While one way to increase your bank account is to reduce spending, another way is to increase your income. How would you like to increase your account by several hundred dollars a month? How would you like to increase your account by several thousand dollars a month? Great! Go find some people to help!! For families, do some housesitting, dog walking, pet sitting, landscaping, or just general handy work and get paid. For individuals, you could become a virtual assistant, proofreader, secretary, travel planner, transcriptionist, virtual organizer, or personal stylist. For businesses, you could be a mystery shopper, product reviewer, social media manager, graphic designer, salesperson, affiliate marketer, ghostwriter, podcast editor, or even an online event planner. For yourself, you could start your own business and sell goods or services online: resell clothing, run a resale operation, manage projects, create online courses, or work as an author, coach, trainer, instructor, YouTuber, or Instagrammer.
The point is, if you want to make more money, find more ways to help other people. So now that you know HOW to fix your broken bank account, the real test is what action you will take to fix it. Grab an accountability partner and make this the year to increase cash flow! Kristen David, a former trial lawyer and partner who went from working 85 hours a week and barely making ends meet, built her firm up to a million-dollar-plus business, then sold her shares and pivoted into business coaching. She is now an international speaker and author, and operates a successful business, empowering business owners to build thriving, profitable businesses that are self-managed with systems.
https://medium.com/live-your-life-on-purpose/how-to-fix-your-broken-bank-account-538f100c86d6
['The Kristen David']
2020-06-11 02:07:30.792000+00:00
['Financial Planning', 'Financial Stress', 'Business Strategy', 'Successful Entrepreneurs', 'Profit']
A gentle introduction to the 5 Google Cloud BigQuery APIs
1. BigQuery API The principal API for core interaction. Using this API you can interact with core resources such as datasets, views, jobs, and routines. To date, seven client libraries exist: C#, Go, Java, Node.js, PHP, Python, and Ruby. Example For this example, I will use the Python client library for the BigQuery API on my personal computer. Note that you need to have Python already installed. As a recommendation, install Visual Studio Code and use its terminal; I’ll use it for all the examples. 1.1 Install the client library pip install --upgrade google-cloud-bigquery 1.2 Set up authentication To access the BigQuery service, you first need to create a service account and set an environment variable. A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs [GCP Doc]. Enter the Google Cloud Console and then APIs & Services > Credentials > + Create Credentials > Service Account (Creating a service account). After the service account is created, we need to set up a JSON key. A .json file will be downloaded automatically; keep it safe, since it is the key to your BigQuery resources. Back in the Visual Studio Code terminal, provide authentication credentials to your application code by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS: set GOOGLE_APPLICATION_CREDENTIALS=D:\medium\bigquery-apis\key\key_bqsa.json For this example, I’ll query the Covid dataset. You could also get the result in a pandas DataFrame; first install the following libraries: pip install pandas pip install pyarrow pip install google-cloud-bigquery-storage 2. BigQuery Data Transfer API The API used for ingestion workflows.
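As a minimal sketch of the query flow from section 1, the snippet below runs a SQL query and pulls the result into a pandas DataFrame. It assumes GOOGLE_APPLICATION_CREDENTIALS is already set as shown above; the public Covid table name and column choices are illustrative assumptions, not taken from the article.

```python
"""Sketch: run a BigQuery query and fetch a pandas DataFrame.

Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
The public table and columns below are assumptions for illustration.
"""

# Public Covid table used for illustration; swap in your own if needed.
COVID_TABLE = "bigquery-public-data.covid19_open_data.covid19_open_data"


def build_query(table: str, limit: int = 10) -> str:
    # Pure helper: the SQL can be inspected without any credentials.
    return (
        "SELECT country_name, SUM(new_confirmed) AS total_confirmed "
        f"FROM `{table}` "
        "GROUP BY country_name "
        f"ORDER BY total_confirmed DESC LIMIT {limit}"
    )


def fetch_top_countries(limit: int = 10):
    # Imported here so the sketch reads without the package installed:
    # pip install google-cloud-bigquery pandas pyarrow
    from google.cloud import bigquery

    client = bigquery.Client()  # picks up credentials from the env variable
    job = client.query(build_query(COVID_TABLE, limit))
    return job.to_dataframe()  # requires pandas + pyarrow, as noted above


if __name__ == "__main__":
    print(fetch_top_countries())
```

Calling `fetch_top_countries()` submits the job and blocks until the rows arrive; everything network-related lives in that one function so the query text itself stays easy to test and reuse.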
Use it if you want to include periodic ingestion from Google Cloud Storage, to get analytics data from other Google services like Search Ads 360, Campaign Manager, or YouTube, or to pull from third-party services like Amazon S3, Teradata, or Amazon Redshift. 2.1 First install the library pip install --upgrade google-cloud-bigquery-datatransfer 2.2 Enable the API Important: you need to enable billing https://console.developers.google.com/billing Let’s move the data from a YouTube channel to BigQuery. Go to Marketplace, look for YouTube Channel Transfers, and click on Enroll. Click on Configure Transfer. Let’s set some inputs like schedule options and destination settings. This will prompt a window asking for access to your YouTube data. When you schedule a query, this API is used. Now let’s start using this API. First, show all the data transfers created. The output gives us the transfer ID. With the transfer ID, using the BigQuery Data Transfer API, I’m able to run it. Executing the code returns the job values. In the BigQuery interface you can follow the execution. In this case I didn’t have YouTube data, so this triggered an error. 3. BigQuery Storage API This API exposes high-throughput data reading for consumers who need to scan large volumes of managed data from their own applications and tools [Google Doc]. Let’s build a simple example. 3.1 Install the client library pip install --upgrade google-cloud-bigquery-storage 3.2 Set up the authentication Follow the steps from section 1 to create a service account and get the JSON with the service account key. 3.3 Example 1 Getting the top 20 page views from Wikipedia. This example uses both APIs and runs a custom query. 3.4 Example 2 If we need fine-grained control over filters and parallelism, a BigQuery Storage API read session can be used instead of a query. The following code only changes to use the Apache Arrow data format. 4.
BigQuery Connection API This API is for establishing a remote connection so that BigQuery can interact with remote data sources like Cloud SQL. This means BigQuery can query data residing in Cloud SQL without moving the data. Let’s build a simple example. 4.1 Install the client library and enable the API pip install --upgrade google-cloud-bigquery-connection Enable the API by accessing the console. 4.2 Set up the authentication Follow the steps from section 1 to create a service account and get the JSON with the service account key. 4.3 Add an external source For this example, I’ve deployed a Cloud SQL instance in the same project. Next, I’ve added a simple MySQL database as an external source in the BigQuery interface: adding the connection parameters, then running the federated query. 4.4 Example This example uses the API to list the connections. For more examples check the official repository. 5. BigQuery Reservation API This API allows provisioning and managing dedicated resources like slots (the virtual CPUs used by BigQuery to execute SQL queries) and BigQuery BI Engine (a fast, in-memory analysis service) memory allocation. 5.1 Install the client library pip install --upgrade google-cloud-bigquery-reservation 5.2 Set up the authentication Follow the steps from section 1 to create a service account and get the JSON with the service account key. 5.3 Enable the Reservation API Go to the BigQuery UI and click on Reservations. 5.4 Buy slots This feature is recommended for organizations that want predictable (flat-rate) pricing. Remember that by default on BigQuery you pay per query (on-demand). The next images simulate a slot reservation process in order to use the API later. 5.5 Example Use the API to list the reservations. For more actions check the repository. Conclusions BigQuery gives us many features, and with the help of these APIs we can extend its functionality. All the code is available on GitHub.
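Going back to section 4, a minimal sketch of the connection-listing example might look like the following. The project ID and location are placeholders (not values from the article), and the same service-account setup from section 1 is assumed.

```python
"""Sketch: list BigQuery connections with the Connection API.

PROJECT_ID and LOCATION are placeholders for illustration.
Requires: pip install google-cloud-bigquery-connection
"""

PROJECT_ID = "my-project"  # placeholder: your GCP project ID
LOCATION = "us"            # placeholder: the connections' location


def parent_path(project: str, location: str) -> str:
    # Pure helper: the parent resource path the API expects.
    return f"projects/{project}/locations/{location}"


def list_connections():
    # Imported here so the path helper reads without the package installed.
    from google.cloud import bigquery_connection_v1

    client = bigquery_connection_v1.ConnectionServiceClient()
    # Iterate over every connection under the given project/location.
    for conn in client.list_connections(
        parent=parent_path(PROJECT_ID, LOCATION)
    ):
        print(conn.name, conn.friendly_name)


if __name__ == "__main__":
    list_connections()
```

The same client exposes create, get, and delete calls for connections; listing is just the simplest way to confirm the API and credentials are wired up.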
PS: if you have any questions, or would like something clarified, ping me on Twitter or LinkedIn; I like having a data conversation 😊 Useful Links Core API Transfer API Storage API Connection API Reservation API
https://towardsdatascience.com/a-gentle-introduction-to-the-5-google-cloud-bigquery-apis-aafdf4ef0181
['Antonio Cachuan']
2020-12-28 04:45:08.138000+00:00
['Python', 'Bigquery', 'Data Engineering', 'Data Science', 'Towards Data Science']
Important things to know about GDPR in a risky world!
Is anyone protecting our data? We share our data with many companies and many people for different reasons. It’s better to have someone protect our personal data and let us know the purpose for which our data is being used. GDPR does. What is the General Data Protection Regulation (GDPR)? It’s a set of rules created by the European Union that must be strictly followed by anyone collecting and processing personal data from countries of the European continent. GDPR sets forth fines of up to 10 million euros, or, in the case of an undertaking, up to 2% of its entire global turnover of the preceding fiscal year — whichever is higher — for violations. Beyond personal data, GDPR imposes strict rules on genetic data, children’s data, etc. GDPR gives individuals the freedom: · to obtain information from the companies/individuals regarding the processing of your personal data · to tell a company or companies to erase your data · to know how long the companies will store your data · to know with whom your data has been shared There are four key terms in GDPR: Controller, Processor, Data Subject, and Supervisory Authority. Controller: the one (a company, individual, etc.) who collects the data directly from the data subject. Ex: a bank. Processor: the one who follows instructions given by the controller to process data shared by the controller. Ex: an IT company maintaining bank data. Data Subject: an individual who owns the data. Ex: bank customers. Supervisory Authority: established by European member states to oversee the administrative work of the GDPR. From 25 May 2018, countries of the European continent started to follow GDPR. It applies not only to the European continent but also to companies outside of Europe working for countries of the European continent. Before any data is shared with countries outside of the European Union, the European Union will make a call based on whether the recipient country provides an adequate level of protection or not.
So far the European Union has a list of countries that have promised an adequate level of protection: Andorra, Argentina, Canada (commercial organisations), Faroe Islands, Guernsey, Israel, Isle of Man, Japan, Jersey, New Zealand, Switzerland, Uruguay, and the United States of America (limited to the Privacy Shield framework). Hopefully India will soon be among the trusted countries.
https://medium.com/work-insight/important-things-to-know-about-gdpr-in-a-risky-world-52cec75cd034
['Mahesha N']
2020-02-26 04:49:33.851000+00:00
['Workplace', 'Gdpr', 'Employee Engagement']
>⚽+LIVE|| EFL 2020!! “Coventry City vs Stoke City” FULL-Match
●LINE UP : Coventry City vs Stoke City, live ●Date : 10:00 PM, December 26, 2020 ●VENUE: St. Andrew’s Trillion Trophy Stadium Livestreaming, what’s in it for us? Technology has advanced significantly since the first internet livestream but we still turn to video for almost everything. Let’s take a brief look at why livestreaming has been held back so far, and what tech innovations will propel livestreaming to the forefront of internet culture. Right now livestreaming is limited to just a few applications for mass public use and the rest are targeted towards businesses. Livestreaming is to today what home computers were in the early 1980s. The world of livestreaming is waiting for a metaphorical VIC-20, a very popular product that will make live streaming as popular as video through iterations and competition. Shared Video Do you remember when YouTube wasn’t the YouTube you know today? In 2005, when Steve Chen, Chad Hurley, and Jawed Karim activated the domain “www.youtube.com" they had a vision. Inspired by the lack of easily accessible video clips online, the creators of YouTube saw a world where people could instantly access videos on the internet without having to download files or search for hours for the right clip. Allegedly inspired by the site “Hot or Not”, YouTube originally began as a dating site (think 80s video dating), but without a large ingress of dating videos, they opted to accept any video submission. And as we all know, that fateful decision changed all of our lives forever. Because of YouTube, the world that YouTube was born in no longer exists.
The ability to share videos on the scale permitted by YouTube has brought us closer to the “global village” than I’d wager anyone thought realistically possible. And now with technologies like Starlink, we are moving closer and closer to that eventuality. Although the shared video will never become a legacy technology, before long it will truly have to share the stage with its sibling, livestreaming. Although livestreaming is over 20 years old, it hasn’t gained the incredible worldwide adoption YouTube has. This is largely due to infrastructure issues such as latency, quality, and cost. Latency is a priority when it comes to livestreams. Latency is the time it takes for video to be captured at point A and viewed at point B. In livestreaming this is done through an encoder-decoder function. Video and audio are captured and turned into code; the code specifies which colours display, when, for how long, and how bright. The code is then sent to the destination, such as a streaming site, where it is decoded into colours and audio again and then displayed on a device like a cell phone. The delay between the image being captured, the code being generated, transmitted, decoded, and played is consistently decreasing. It is now possible to stream content reliably with less than 3 seconds of latency. Sub-second latency is also common and within the next 20 or so years we may witness the last cable broadcast (or perhaps cable will be relegated to the niche market of CB radios, landlines, and AM transmissions). On average, the latency associated with a cable broadcast is about 6 seconds. This is mainly due to limitations on broadcasts coming from the FCC or another similar organization in the interests of censorship. In terms of real life, however, a 6-second delay on a broadcast is not that big of a deal. In all honesty a few hours’ delay wouldn’t spell the doom of mankind.
But for certain types of broadcasts such as election results or sporting events, latency must be kept at a minimum to maximize the viability of the broadcast. Sensitive Content is Hard to Monitor Advances in AI technologies like computer vision have changed the landscape of internet broadcasting. Before too long, algorithms will be better able to prevent sensitive and inappropriate content from being broadcast across the internet on livestreaming platforms. Due to the sheer volume of streams it is much harder to monitor and contain internet broadcasts than it is cable, but we are very near a point where we will be able to reliably detect and interrupt inappropriate broadcasts instantaneously. Currently, the majority of content is monitored by humans. And as we’ve learned over the last 50 or so years, computers and machines are much more reliable and consistent than humans could ever be. Everything is moving to an automated space and content moderation is not far behind. We simply don’t have the human resources to monitor every livestream, but with AI we won’t need to. Video Quality In the last decade we have seen video quality move from 720p to 1080p to 4K and beyond. I can personally remember a time when 480p was standard and 720p was considered a luxury reserved for only the most well-funded YouTube videos. But times have changed and people expect video quality of at least 720p. Live streaming has always had issues meeting the demands of video quality. When watching streams on platforms like Twitch, the video can cut out, lag, drop in quality, and stutter all within about 45 seconds. Of course this isn’t as rampant now as it once was; however, sudden drops in quality will likely be a thorn in the side of live streams for years to come. Internet Speeds Perhaps the most common issue one needs to tackle when watching a live stream is their internet speed.
Drops in video quality and connection are often due to the quality of the internet connection between the streamer and the viewer. Depending on the location of the parties involved, their distance from the server, and allocated connection speed the stream may experience some errors. And that’s just annoying. Here is a list of the recommended connection speeds for 3 of the most popular streaming applications: Facebook Live recommends a max bit rate of 4,000 kbps, plus a max audio bit rate of 128 kbps. YouTube Live recommends a range between 1,500 and 4,000 kbps for video, plus 128 kbps for audio. Twitch recommends a range between 2,500 and 4,000 kbps for video, plus up to 160 kbps for audio. Live streams are typically available for those of us with good internet. Every day more people are enjoying high quality speeds provided by fibre optic lines, but it will be a while until these lines can truly penetrate rural and less populated areas. Perhaps when that day comes we will see an upsurge of streaming coming from these areas. Language Barrier You can pause and rewind a video if you didn’t understand or hear something, and many video sharing platforms provide the option for subtitles. But you don’t really get that with a live stream. Pausing and rewinding an ongoing stream defeats the purpose of watching a stream. However, the day is soon approaching where we will be able to watch streams, in our own native language with subtitles, even if the streamer speaks something else. Microsoft Azure’s Cognitive Speech Services can give livestreaming platforms an edge in the future as it allows for speech to be automatically translated from language to language. The ability to watch a livestream in real time, with the added benefit of accurate subtitles in one’s own language, will also assist language learners in deciphering spontaneous speech. Monetization One of the most damning features of a live stream is the inherent difficulty in monetizing it. 
As mentioned before, videos can be paused and ads inserted. In videos, sponsored segments can be bought where the creators of the video read lines provided to them. Ads can run before videos, etc. But in the case of a spontaneous live stream, sponsored content will stick out. In the case of platforms like YouTube there are ways around ads. Ad blockers, the skip-ad button, the deplorable premium account, and fast-forwarding through sponsored segments all work together to limit the insane amount of ads we see every day. But in the case of a live stream, ads are a bit more difficult. Live streaming platforms could implement sponsored overlays and borders or a similar graphical method of advertising, but the inclusion of screen-shrinking add-ons like that may cause issues on smaller devices where screen size is already limited. Monthly subscriptions are already the norm, but in the case of a live streaming platform (Twitch Prime notwithstanding), it may be difficult for consumers to see the benefit in paying for a service that is by nature unscheduled and unpredictable. Live streams are great for quick entertainment, but as they can go on for hours at a time, re-watching streamed content is inherently time-consuming. For this reason, many streamers cut their recorded streams down and upload them to platforms like YouTube where they are monetized through a partnership program. It is likely that for other streaming platforms to really take off, they would need to partner with a larger company and offer services similar to Amazon and Twitch. What Might the Future of Livestreaming Look Like? It is difficult to say, as it is with any speculation about the future. Technologies change and advance beyond the scope of our imaginations virtually every decade. But one thing that is almost a certainty is the continued advancement in our communications infrastructure. Fibre optic lines are being run to smaller towns and cities.
Services like Google Fiber, which is now only available at 1 gigabit per second, have shown the current capabilities of our internet infrastructure. As services like this expand we can expect to see a large increase in the number of users seeking streams as the service they expect to interact with will be more stable than it currently is now. Livestreaming, at the moment, is used frequently by gamers and Esports and hasn’t yet seen the mass commercial expansion that is coming. The future of live streaming is on its way. For clues for how it may be in North America we can look to Asia (taobao). Currently, livestreaming is quite popular in the East in terms of a phenomenon that hasn’t quite taken hold on us Westerners, Live Commerce. With retail stores closing left and right, we can’t expect Amazon to pick up all of the slack (as much as I’m sure they would like to). Live streaming affords entrepreneurs and retailers a new opportunity for sales and growth. Live streaming isn’t the way of the future, video will never die, but the two will co-exist and be used for different purposes, as they are now. Live streaming can bring serious benefits to education as well by offering classrooms guest lessons and tutorials by leading professionals. Live streaming is more beneficial for education than video as it allows students to interact with guest teachers in real-time. The live streaming market is waiting to be tapped. Right now there are some prospectors, but in North America, no one has really found the vein leading to the mine. So maybe it’s time to get prospecting.
https://medium.com/@calcareoargilla/live-efl-2020-coventry-city-vs-stoke-city-full-match-4215603a90f8
[]
2020-12-26 08:11:09.553000+00:00
['Social Media', 'Soccer']