title: string (1–200 chars)
text: string (10–100k chars)
url: string (32–885 chars)
authors: string (2–392 chars)
timestamp: string (19–32 chars)
tags: string (6–263 chars)
40 Ideas To Help Spark Your Next Article*
40 Ideas To Help Spark Your Next Article Photo by Dstudio Bcn on Unsplash Ideation is a crucial part of the writing process. A lot of writers enjoy the process of coming up with new ideas and fleshing them out. In fact, many writers I work with have a long, running list of ideas. They have dozens of started drafts and more starting points than they know what to do with. It’s common in our conversations for someone to mention, “Yeah, I need to go back through and clean out my drafts folder.” If you’re an avid writer, you’ve likely got a graveyard of ideas. They aren’t all winners. Some of them you likely abandoned for a good reason. Some of them you tried writing and realized halfway through that it wasn’t the right time to finish. Some you want to write but haven’t set aside the time, courage, or dedication to get done. If you’re one of those people to whom ideas come naturally, it’s easy to overlook that blessing. And I say blessing because it truly is a great perk to be able to ideate so well. A lot of my writing friends would probably push back and say that they work hard to brainstorm ideas. That it doesn’t come naturally but takes effort and time. I’m probably somewhere in between those camps. I’ve written over 200 articles on Medium and I haven’t tapped the idea well dry. I also spend a solid chunk of time most weeks purely focused on ideating new concepts and potential articles. If you feel like you can’t come up with ideas, unfortunately, part of the answer is probably that you need to put in a bit more time and work. But that’s the answer for most struggles in life. I think it’s also completely fair to realize and admit that ideation might not be a strong suit for you and to ask or look for help in that area. Writers often think that if they have an idea, they need to hoard it for themselves. I understand the feeling, and there are definitely ideas that you want to keep a bit more private — big ideas about an upcoming project or a really cool spin on something that no one else is saying. But overall, most of our ideas come from things we’ve seen or things we’ve stolen, as Austin Kleon would say in Steal Like an Artist.
Here’s the magic of writing. 10 people could write about the same idea and put 10 completely different polishes on the concept. There is enough grass in the pasture for all of us. Grab an idea. Put it in your context. Add your voice, your experience, your vision. We all may start in similar places but where we end up could and should be pretty different. That’s what makes writing beautiful. 40 Ideas To Help Spark Your Next Article* *most of the ideas I will put here are straightforward and largely focused on positive, encouraging topics, which is how I tend to try to write. Is there such a thing as a ‘gentleman and scholar’ in the 21st century? If so, who is that person? If not, where did he go? How did the invention of the photograph change the way we view love? When did the definition of ‘reading a book’ shift and how has that impacted learning? What should we do with all of the information we consume but have no practical, functional application for? Like how do Mad King Ludwig and his Neuschwanstein Castle matter to me? Why hasn't someone invented a clear mask that allows people to see facial expressions? Should someone invent this or would it be a terrible idea? Transferring notes from audiobooks to anything digital seems to be an archaic practice. What can be done about this? Will the concept of a zoo with animals in exhibits exist in 75 years? How did the concept of a zoo even start? Where did it begin? Why? What type of animal is most commonly found in a zoo and why? What can you glean from this? (Clearly, I’m on a zoo kick for some reason) Should Alexa be allowed to handle my home maintenance, like ordering replacements for my furnace filter? Apple has tried to incorporate this word into their brand: “Privacy.” How can a single word impact a brand and what are other examples? How do you date someone in a virtual world? Is it weird or kosher to expect the first date to be over Zoom? Why does each partner feel like they’re doing the majority of the housework? Should adults have stuffed animals and why do we get rid of our stuffed animals as we grow up? There’s always been fake news. This isn’t a new phenomenon. What’s changed in our society that makes fake news today seem so much more potent? What is a list of COVID-friendly random acts of kindness? What are 5 of the most unexpected benefits of being a student during COVID-schooling? How do you effectively take notes during a meeting while still paying attention (much harder than it sounds!)? How can a ‘quote book’ with your co-workers lead to increased unity and cohesion? What functionality has a cape ever really had for a superhero, and who was the first superhero to wear a cape? What does the current Sneakerhead movement say about our addiction to novelty? Is it possible to have conviction without being critical? Are most writers like car owners, using the vehicle without really understanding the mechanics? What are the ten questions anyone who is thinking about starting a family should make sure to ask and answer before really starting to try? What are the four foods that no one should ever microwave in the staff kitchen?
What’s the best commercial you’ve seen in the last 6 months and why do you think it stuck in your mind? If we could remember perfectly what we said and did, how would that change the way we showed grace and compassion to ourselves and others? How do you find the line between vulnerability and over-sharing? What are the most important life lessons you’ve ever learned from watching a sitcom? Why does everyone feel like people don’t initiate enough and what could you do practically to be a part of the solution and not the problem? What does the fact that we throw elaborate birthday parties for 1-year-olds tell us about our society? Is it possible to live a hidden life and still be happy? What are the ingredients of the soil that sprouts unhealthy fruit? Would you rather have 10,000 followers on Medium or 10 that you knew really well and who gave you great and consistent feedback on your writing? Do slack notifications actually help with workplace productivity or do the distractions actually minimize our effectiveness? In an age of constant cause fatigue, how do you effectively move your readers from consuming to acting on your content? When was the last time you were asked a question that you didn’t know how to answer or didn’t know how to solve? What did you do? Does your faith lead your feelings or should your feelings spur your faith? Or is there one side that is more correct? How can you work to separate your identity from your output? What does it mean to be sober-minded and what characteristics define someone who is exceptionally sober-minded? If one of these ideas got you going or sparked another idea, creating a leapfrog effect, I’d love to hear about it. Highlight the idea that stood out to you or comment on this post and let me know how your writing went! There are so many amazing topics and questions out there that if we’re willing to work together, we can pair our weaknesses and strengths to create some pretty amazing written pieces. Keep writing. Keep going. Your voice matters and we’re all waiting to read what you come up with.
https://medium.com/@krdwan.ba/40-ideas-to-help-spark-your-next-article-a057bef83efb
['Krdwan Ba']
2020-12-20 04:00:35.023000+00:00
['Babies', 'Health', 'Wellness', 'Life', 'Coronavirus']
Nest, Half-Empty?
Jenny Jedeikin’s work has appeared in The San Francisco Chronicle, Rolling Stone, The Advocate, The UK's Oh Comely; read more at jennyjedeikin.com.
https://medium.com/spiralbound/nest-half-empty-ef55a3c8ff57
['Jenny Jedeikin']
2018-11-12 13:01:01.329000+00:00
['College', 'Jenny Jedeikin', 'Family', 'Parenting', 'Comics']
Tapping the blockchain-powered remote classroom.
Tapping the blockchain-powered remote classroom. COVID19 sparks a push for advanced online learning frameworks. We are living amidst an unprecedented shift in global education, with college professors becoming Zoom adepts overnight, the class of 2020 graduating virtually, continuous learning platforms on the lookout for the best-performing interaction technologies, and a new wave of EdTechs officially upon us. In both public and private educational settings, the global learning community braces for more time outside of the traditional classroom environment, with challenges like certification credibility, exam cheating, student privacy, and the overall quality of the educational experience. But with challenges come opportunities to make use of advanced technologies and step up the distance learning game. In this blog, we’ll explore blockchain’s potential to secure the continuity of online learning frameworks through a number of decentralized applications. Tap the blockchain-powered smart campus with Taraxa. Blockchain technology essentially adds a layer of security and trust to online learning settings. By using it, the digital campus can confidently store learning records, issue and manage digital certificates, securely share learning resources via smart contracts, and protect intellectual property through data encryption. Trustless learning log. Keeping a more comprehensive record of learners’ achievements beyond transcripts of scores is another fundamental improvement that blockchain brings to online learning frameworks. This is achieved by storing digital hashes of learning activities and managing access rights through the use of smart contracts. Confidently manage student records. With Taraxa’s trustless audit log of all critical operational records, educational data is securely stored and shared on digital campuses among students, professors, external collaborators, or inspecting authorities. The platform allows educators to easily exchange information on student progress, grades, and achievements — all digitized and securely recorded to the blockchain. From here, it’s possible to build a slew of different educational products with the blockchain platform functioning as the foundation for secure data sharing and storage. Securely store and share learning resources. Continuous learning platforms and learning management systems will benefit from Taraxa’s capability to instantly share learning materials and confidential user information both internally and with offsite collaborators. So will professors and students, who now have a powerful toolset for exchanging ideas, files, and assignments. Immutable digital certificates and diplomas. The blockchain ledger provides the foundation for a completely new, decentralized way of issuing and managing digital certificates and transcripts, solving the problems of inconsistency and credibility of training received on online education platforms. Taraxa makes it possible to store all sorts of students’ accreditations and qualifications, from course completion certificates to degree modules, on a single, immutable blockchain ledger. Taraxa enables learners to securely move their scores from one school or learning platform to another, while online learning platforms will enjoy the benefit of analytics by looking at students’ learning logs for easier onboarding and personalized learning offerings for new learners.
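As a rough illustration of the hash-anchoring idea described above, here is a minimal Python sketch. It is not Taraxa's actual API: the record fields, the in-memory "audit log", and the helper names are invented for the example; a real deployment would talk to a blockchain client and smart contracts instead.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical in-memory stand-ins for an on-chain audit log and an
# access-control list; these would live on-chain in a real system.
audit_log = []        # list of (record_id, content_hash, timestamp)
access_rights = {}    # record_id -> set of authorized viewer ids

def anchor_learning_record(record_id: str, record: dict) -> str:
    """Hash a learning record and append the hash (not the data) to the log."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    content_hash = hashlib.sha256(canonical).hexdigest()
    audit_log.append((record_id, content_hash,
                      datetime.now(timezone.utc).isoformat()))
    return content_hash

def grant_access(record_id: str, viewer_id: str) -> None:
    """Note that a viewer (e.g. an inspecting authority) may see this record."""
    access_rights.setdefault(record_id, set()).add(viewer_id)

def verify_record(record_id: str, record: dict) -> bool:
    """Check that a presented record matches the hash anchored earlier."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    presented = hashlib.sha256(canonical).hexdigest()
    return any(rid == record_id and h == presented for rid, h, _ in audit_log)

# Example: anchor a course-completion record and later verify it unchanged.
record = {"student": "s-001", "course": "Distributed Systems", "grade": "A"}
anchor_learning_record("rec-42", record)
grant_access("rec-42", "accreditation-body-7")
assert verify_record("rec-42", record)
```

Because only hashes are anchored, the underlying student data can stay off-chain while any later tampering with a record becomes detectable.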
This approach will ultimately give students the power to securely own and control their records, and to move easily along their educational track from one course or school to another. With student records and learning resources securely recorded into the blockchain ledger, you have a powerful toolset to maintain the continuity of online learning frameworks when switching to a remote classroom. Reach out to learn more about how Taraxa can help make that transition at [email protected] Sources: https://www.researchgate.net/publication/328547413_Application_of_Blockchain_Technology_in_Online_Education
https://medium.com/taraxa-project/tapping-the-blockchain-powered-remote-classroom-23d8456d1572
['Olya Green']
2020-05-13 14:17:44.835000+00:00
['Lms Software', 'Continuous Learning', 'Taraxa Use Case', 'Smart Campus', 'Blockchain']
To the little girl who called my hair abnormal
I’m not flinging curses at your pretty, little blonde head. Photo by Kelly Sikkema on Unsplash Sweetie, I’m pretty sure I was stunned for a few seconds after the words left your mouth and you had moved on to something else. Being the professional that I am, I shook myself out of it and carried on with our English lesson. But being the slightly obsessive person that I am, I mulled over it for what could be considered an unhealthy amount of time. Such a curious word, isn’t it? Abnormal. It implies that there are some things that are part of the norm. Then there are the outsiders, the others. Those extra two letters make you just that: too extra. Well, you will find that there are a lot of things about me that are too extra. We all have them really, and that’s a good thing. ABNORMAL Someone or something that is abnormal is unusual, especially in a way that is worrying. Synonyms: unusual, different, odd, strange… It’s not as though I thought you were giving me a compliment, but now as I think more about it, I don’t know whether to take offense or sit you down and educate you. Seeing as you are just four years old, either may be a problem. My best hope is that that statement was a result of your own musings, not something you picked up from the grown-ups around you. Saying my hair was abnormal meant that there was a standard of normal against which to measure mine. This would mean that your silky straight hair was the norm — the only standard of beauty that really matters. Maybe if you had just called it ugly, I could have admitted that I had had these twisted braids for too long and it really was time to change them. But abnormal? What worries me the most is how impressionable you are at this young age. Opinions are starting to solidify even if they aren’t necessarily right. Like when you give an obligatory scream when you see pictures of monsters on Halloween, because only boys aren’t afraid of monsters. Lemme let you in on a little secret. Girls can be unafraid of monsters too. In fact, some of the best monster-hunters I know are girls. And instead of boogie monsters and ghosts, they fight even bigger monsters like the patriarchy, entitlement, and discrimination. Big monsters with fangs for teeth and long, frightening tails. Yet girls fight against them all, and maybe you will too, in time. Photo by Gabriela Braga on Unsplash I know I’m only your teacher for two hours a week, but I hope during this brief time together you will begin to realize that the world is so much more than just black and white, even though you are surrounded by one more than the other. That there are so many more colors all around: dazzling purples and electrifying reds. And that’s what makes it so beautiful. That just because your hair is different from mine doesn’t mean that they both can’t be beautiful. There is no one way to describe normal because normal itself has so many different variations. Love, Your teacher with the amazing head of hair
https://medium.com/@rayne_murray/to-the-little-girl-who-called-my-hair-abnormal-f2757a1f505d
['Rayne Murray']
2019-11-18 00:09:29.516000+00:00
['Nonfiction', 'Calling Out', 'Little Girl', 'Natural Hair', 'Letters']
Capitalism Is Killing Us
In Women, Race, and Class, Angela Davis discusses the gender-erasing violence of enslaved labor. Her text brilliantly disrupts the myth that Black women are more protected than Black men under capitalism. A motif of her book is the examination of how Black women and Black men alike were forced into intense, body-breaking manual labor by way of slavery. Davis’ work serves to remind us that for Black people, capitalism has never been protective nor a promise of a better life. For us capitalism is a nightmare of death and abuse. The United States of America exists because we were murdered, tortured, and tormented in order to build a system that hates us and exploits us at the same time. To be pro-Black lives, is to be anti-capitalism because the capillaries of capitalism run with our blood. As the pandemic continues to pick up speed, I find myself incessantly reflecting not only on this moment, but on the broader connection between capitalism and death. With over 200,000 U.S. citizens now dead due to COVID-19 and a second wave on the horizon, I am grieved beyond measure that we created a financial system that abandons the most vulnerable among us. Initially, at the onset of the pandemic, my central critique of capitalism contended with capitalism’s opposition to rest. Capitalism requires human perfection, I mused, we created a system with no buffer, no failsafe, no grace for humanity. Illness, tragedy, pain, rest, recovery, and rejuvenation are all a part of the human experience, yet, capitalism scoffs at them. My colleague’s aforementioned assertion concerning the reality of sick time elucidates the truth that the human condition is the ire of capitalism. In many ways, we created a system that despises the very essence of who we are. This pandemic serves to highlight the insidious inequality of our country. Capitalism pretends that it is a system of meritocracy. As such, capitalism says you have more because you earned more and they have less because they earned less. The U.S. deploys capitalism in order to lazily conceal its worst-kept secret: we do not live in a democracy but an oligarchy, and capitalism protects the wealth of only a few. Capitalism is this country’s scapegoat. It enables the U.S. to proclaim, “anyone can make it here!” while ensuring that few ever achieve financial stability. The leaders of this country pretend that capitalism is the great equalizer, when in actuality it is a gatekeeper staunchly committed to protecting the “haves” while denigrating the “have nots.” Yes, the coronavirus pandemic sheds light on our deep failings as a nation. We are a nation with no infrastructure for mass illness. Our cultural obsession with productivity and our valorization of toxic work ethic is literally killing us. We need more sick time, we need more PTO, we need universal healthcare, but it is so much more than that. Capitalism is killing us on purpose for its own survival. I am far from the only author critiquing capitalism during this time. There are hundreds of brilliant Black voices that are poignantly critical of our socioeconomic system. You should read them and listen to them. However, I believe that in the vast sea of critique we often lose sight of the truth that white supremacist systems are working as designed. Of course, capitalism is unconcerned with mass illness and death. Capitalism is a system meant to insulate power by any means necessary and the sick are a threat to capitalism. 
In a capitalistic society, if you cannot work and buy then you must die, because under capitalism work is not just a financial system, it is a metric of moral and spiritual worth. This is why Black people bear the brunt of capitalism’s most heartless proclivities: our bodies are cast as inherently worthless anyway. Capitalism is the God of the wealthy and the Devil to the disposable. But just as the Devil presents himself as an Angel of Light, so too does capitalism cloak itself in hope in order to conceal the depth of its malicious abyss. As governors call for citizens to die in order to protect the United States of America, we are in an urgent moment that calls for revolution. As illness and disease have traversed this country with little to no intervention from our government, now is the time to act. In the U.S. we are taught a false sense of security, but make no mistake. As this pandemic infects this country and the economy becomes more and more precarious, all of us, except for the ultra-rich, will be reduced to bodies under the floorboards. Ciarra Jones is a writer, speaker, and consultant who specializes in work at the intersection of race, sexuality, faith, and belonging. If you are interested in hiring Ciarra to come work with you or your organization, visit Ciarrajones.com for more details. She is also a columnist for The Boycott Times, where her writing contends with capitalism and white supremacy.
https://aninjusticemag.com/capitalism-is-killing-us-7bc79cad63b2
['Ciarra Jones']
2020-11-16 21:48:06.966000+00:00
['Work Culture', 'BlackLivesMatter', 'Capitalism', 'Economics', 'White Supremacy']
Content-based Automated Music Playlist Generation using Deep Learning
Photo by Andy Kelly on Unsplash Objective The detailed step-by-step implementation of a content-based music recommender system is explained in this article. The entire code is accessible on my GitHub. Basic knowledge about recommender systems and deep learning is assumed. Table of Contents: Making Sense of Sound, Digital Audio, Auditory Perception, Music, Why is Music Entertaining?, Overview, Problem Statement, Featurization, Existing Solution, Data, First-cut Approach, Final Features, Sample Recommendations, Feature Analysis, Failed Experiments, Future Work, References. Making Sense of Sound Sound is our perception of the vibration of surrounding air molecules (underwater audio perception is beyond the scope of this article). The medium where it originates could be solid, liquid, or gas. Photo by Gentrit Sylejmani on Unsplash What we hear when someone swims nearby is nothing but the vibration of water molecules, which in turn vibrates the surrounding air molecules. If what we hear is vibrating air molecules, one might wonder why we don’t hear anything when we swing our hands through the air. It is because our auditory system perceives vibrations with frequencies only in the range of 20 Hz to 20 kHz. When we drop a stone, we hear it only when it hits a solid or a liquid surface. Even though the stone in motion disturbs the air molecules, the frequency is well below our hearing range. However, when it hits the water, the neighboring air molecules vibrate at the same frequency at which the water molecules vibrate, making it audible. The origin of this sound (or vibration) is the stone striking the water, and it travels through the air to reach our ears. Sound propagates through air molecules from left to right [Source: Dr. Daniel A. Russell] The gray line represents the vibrating source, the dots, both red and black, are all air particles (molecules), and to understand the actual movement of these particles, look at any one of the red dots. They don’t move like luggage placed on a carousel; rather, it’s a back-and-forth motion. Only the vibration travels from the gray line to the rightmost end. When a particle moves from point ‘A’ to point ‘B’ and comes back to ‘A’, it is said to complete one cycle. The total number of cycles per second is the frequency on the hertz scale, and the amplitude is the distance between point ‘A’ and point ‘B’. The frequency or frequencies at which an object tends to vibrate when hit, struck, plucked, strummed, or somehow disturbed is known as the natural frequency of the object. If the amplitudes of the vibrations are large enough and the natural frequency is within the human hearing range, then the vibrating object will produce sound waves that are audible. [Source: The Physics Classroom] Most living things can sense certain vibrations of objects, which indicates the significance of sound for survival. We communicate and emote efficiently by causing these vibrations, which propagate away from their source in all directions. Generally, these vibrations are too fast to be seen, and unlike vision, hearing is not restricted to a limited angle of view. This explains the widespread use of sirens to indicate emergencies, even though light is extremely fast. Digital Audio The microphone contains a thin diaphragm, similar to our eardrum, and is a transducer that converts mechanical energy to electrical energy. In very simple terms, it vibrates at the same frequency as the surrounding air particles, and the amplitude of this vibration is converted to electrical energy.
This continuous (analog) information is converted to digital form by sampling 44,100 values per second at equal time intervals and representing each value in 16 bits. The digital format facilitates the storage, access, and manipulation of data. A loudspeaker reproduces the sound by performing precisely the opposite operation of the microphone. Auditory Perception Human interpretations of amplitude and frequency are loudness and pitch, but the relationship is not linear. It is quite analogous to how the difference between one million dollars and one million one hundred dollars doesn’t matter when you buy a house. We can easily differentiate 50 Hz and 100 Hz frequencies, but not 10,000 Hz and 10,050 Hz. The Mel scale approximates our perception of frequency, which is roughly linear till 500 Hz and logarithmic afterward; similarly, sound pressure (amplitude) is expressed in decibels, a logarithmic unit, to approximate our perception, and 0 dB is the threshold of human hearing. Apart from pressure, loudness, measured in phons, is also dependent on our auditory system’s ability to amplify certain frequencies. At 0 dB, only frequencies between 2 kHz and 5 kHz can be heard, whereas to sense a 30 Hz frequency it needs to be more than 50 dB. Decibel and phon values are equal at a frequency of 1 kHz. Phon vs decibel [Source: Wikipedia] Music Music, a fine art, is nothing but the artificial vibration of air particles that is aesthetically pleasing to humans but has no practical purpose; it does, however, have potential for medical use. Every vibration processed by the auditory cortex of the brain triggers both physical and psychological responses. Music establishes mood, which influences people’s perception (especially visual perception [Jolij J, Meurs M (2011)]), attitude, and behavior. It has always been an integral part of rituals all around the world, and even during the silent film era, music played a major role. Video explains how music can completely alter our perception From a composer’s point of view, music is a sequence of harmonies. The first step in music composition is to choose the key and scale, which influences the song’s genre and emotion; this determines the notes available to compose the melody. A music note represents a frequency (or, more precisely, a pitch) and its duration. A melody (or tune) is composed by arranging a sequence of notes, but only one at a time. It is the most notable, critical, and repetitive part of a song. The final step is to harmonize the melody. To every note in the melody, additional notes (for example, ones of a lower frequency) are added to enhance it. When multiple notes played together provide a satisfying effect, they are said to be in harmony. Melody is represented by orange notes in the video One important thing to note is that single-frequency vibrations are extremely rare in nature. Even musical instruments produce vibrations with frequencies in addition to the intended note, and these less dominant additional frequencies, along with the envelope, distinguish the musical instruments. For example, ‘middle C’, which has a frequency of 261.6 Hz, sounds different when played on different instruments. Video shows the vibration caused by different instruments when the same note is played Why is Music Entertaining? According to the Yerkes-Dodson law, the performance of a task is dependent on the arousal of the organism. Performance improves with an increase in arousal, but beyond a saturation point, it starts reducing or stops increasing, depending on the complexity of the task.
Hebbian version of the Yerkes–Dodson law [Source: Diamond DM, et al. (2007)] This “inverted U” relationship is also seen in our music preferences, as discovered by Daniel Ellis Berlyne [Source: Chmiel, A., & Schubert, E. (2017)]. The sequential nature of music creates an opportunity to predict the next event. It is because of our ability to recognize and memorize patterns. When the prediction is close, we feel bored as there is only a little information to learn (too little arousal), and when the prediction is poor, we feel annoyed as there is overwhelming information to learn (too much arousal). The maximal reward or pleasure is experienced somewhere in between. A melody often repeats in a song, and also in our minds after we listen; this helps the process of memorization and explains why our favorite song gets boring or an uninteresting song becomes enjoyable after we hear it many times. Simply put, our exposure to music determines our preference in music. Learn the rules like a pro, so you can break them like an artist. [Pablo Picasso] Following music theory and subtly breaking it is what helps musicians achieve optimal arousal in their audience. To gain a better understanding, read this article by Robert Zatorre. Overview Major music streaming service providers have rights to millions of songs that are organized and displayed alluringly to simplify the user experience. It takes more than one lifetime to hear them all, and the probability of a user hearing a particular song is two-sided: the user’s day-to-day experiences and the algorithm (or hybrid of algorithms) employed by the company. The art of personalization is paramount in this data-intensive era of the internet to engage and guide the audience in their quest. The metadata used by content-based filtering doesn’t normally expand beyond high-level abstractions, but it handles new users. Collaborative filtering, the other widely used and more efficient approach, needs users’ historical data and periodic retraining to overcome its fundamental limitation of assuming that similar users evolve similarly. Also, people are growing increasingly uncomfortable with the risk of personal data being exploited in the process and would prefer an alternative. Users, at least in the premium tier, should have the convenience of using the service without providing access to their data. This is an attempt to analyze the scope of deep learning methods on audio alone, where no other data will be used, to mimic a bit of the human auditory system with limited resources and no compromises on users’ privacy. Problem Statement Generate a playlist based on the given query track(s) without using any metadata like genre, sub-genre, artist, year of release, etc. Featurization As mentioned before, a digital audio file contains air pressures (amplitudes) sampled at equal time intervals, and our auditory system processes this information for reasoning. 107852.mp3 If you ask me what the difference is between classical music and rock music, I would say classical music uses piano and violin whereas rock music uses electric guitar and acoustic drums. So, to differentiate two genres, I broke the songs into simpler sounds (e.g., piano) and then compared those sounds to find similarities between songs in the same genre and also differences between the two genres. The Fourier transform (article and video explanations) can be used to convert a sound into multiple basic sounds, or, more technically, a complex vibration into multiple single-frequency vibrations.
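To make this decomposition concrete, here is a minimal sketch assuming NumPy; the 440 Hz and 880 Hz test tones are invented for illustration and are not part of the dataset used in this article.

```python
import numpy as np

sr = 22050                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / sr)              # one second of time stamps

# A "complex vibration": the sum of two pure tones (440 Hz and 880 Hz).
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# The Fourier transform decomposes it back into single-frequency components.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)

# The two largest peaks sit exactly at the frequencies of the original tones.
top = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top.round().astype(int)))       # -> [440, 880]
```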
But, to represent a sound sequence, we need to do a Fourier transform for every event and obtain the amplitudes of all the frequencies (on a continuous scale), which is not practical. We can break the song into multiple overlapping windows, treat each window as an event, apply the Fourier transform, and also bin the frequencies. For every window, this will give us the amplitude of each frequency bin, which can be arranged in time to build a spectrogram. A spectrogram with frequency on the Mel scale and amplitude in decibels approximates the information received by the auditory cortex of the brain. Today, it is the most widely used representation of audio in deep learning. The vertical axis represents 216 ‘Mel frequency’ bins, the horizontal axis represents ‘time’ in seconds, and the color represents ‘amplitude’ in decibels (black implies no or very low amplitude and yellow implies very high amplitude). Mel spectrogram of “107852.mp3” Existing Solution Recommending music on Spotify with deep learning by Sander Dieleman: Situations like cold start and lack of data breed alternative lines of thought for an otherwise effective collaborative filtering. Convolutional neural networks, the state-of-the-art image processing models, were utilized to address these limitations by using collaborative-filtering-based representations to supervise the learning process. The latent representations are normalized to minimize the impact of the popularity of the songs. Each song, split into multiple frames, was represented as a spectrogram with amplitude on a logarithmic scale and frequency on the Mel scale, stacked vertically to represent time. The rectangular kernels were convolved only in time. The sequential nature of audio was partially discarded, and instead the overall presence of features was extracted. One of the CNN architectures used by Sander Dieleman The trained model could directly compute feature representations of all the new tracks added every day (speculated to be roughly 40,000 at Spotify), and any appropriate similarity measure could be used for recommendations. However, the latent features used in the supervision need to be recomputed to adapt to changing user preferences, and this necessitates frequent retraining of the entire neural network. Data Roughly 66 hours of audio (8,000 tracks of length 30 seconds each) form the data. Metadata and commonly used features are also available. [Source: FMA]. First-cut Approach Mel frequency cepstral coefficients (MFCCs) were the state-of-the-art representation for audio before the emergence of deep learning. Here, an MFCC vector represents each window of a song instead of the 216-dimensional vector. Initially, there were, say, 2048 samples in a window, which were represented using 216 values in the Mel spectrogram. They are further reduced to 20 here. MFCCs are the representation of a sound converted into multiple intermediate sounds instead of basic sounds. The intermediate sounds are made up of groups of basic sounds separated by equal intervals, similar to how notes in harmony are separated. A musical note (or notes) played on one or more instruments may be an intermediate sound. MFCCs are obtained by applying a discrete cosine transform on the decibel-scale values in the Mel frequency bins of a window. Each song is windowed 1293 times, giving a total of 20×1293 values, or a low-resolution grayscale spectrogram (cepstrogram), to represent it.
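This featurization maps directly onto standard audio tooling. Here is a minimal sketch using librosa; the file path is a placeholder, and the window, hop, and bin sizes are chosen to roughly reproduce the 216 Mel bins, 20 coefficients, and ~1293 frames described above for a 30-second clip at 22,050 Hz.

```python
import numpy as np
import librosa

# Load a 30-second clip (path is a placeholder for a track from the dataset).
y, sr = librosa.load("107852.mp3", sr=22050, duration=30.0)

# Mel spectrogram: 2048-sample windows, 512-sample hops, 216 Mel frequency bins.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=216)
mel_db = librosa.power_to_db(mel, ref=np.max)   # amplitude in decibels
print(mel_db.shape)                             # ~ (216, 1293)

# MFCCs: a discrete cosine transform over the dB-scaled Mel bins, keeping
# 20 coefficients per window, i.e. the 20 x ~1293 "cepstrogram".
mfcc = librosa.feature.mfcc(S=mel_db, n_mfcc=20)
print(mfcc.shape)                               # ~ (20, 1293)
```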
The rows are standardized to avoid the domination of certain coefficients and to improve the discrimination of time windows within a spectrogram and also between different spectrograms. MFCC color spectrogram of “107852.mp3” Standardized MFCC color spectrogram of “107852.mp3” An autoencoder was used to extract an n-dimensional vector representation from the spectrogram. The convolution-based model had only a slightly better mean squared error (MSE) than a baseline model, but its representation made sense for differentiating songs, whereas the encodings of the baseline model, which predicts zeros, would have been useless. The kernels were convolved along both the time and MFCC axes on a low-resolution grayscale spectrogram (a matrix of size 20×1293×1) containing vectors of amplitudes instead of a color spectrogram, which additionally saved memory and computation. One of the autoencoder models Sample input (X1) Sample output (Y1^) Recommendations based on the vector representation obtained from the autoencoder felt random and also didn’t work for genre prediction. Final Features The Mel spectrogram, as commonly believed, turned out to be the best representation. VGG16, pre-trained on the ImageNet dataset, was used to extract the feature representations of the tracks. The final layer is replaced with a max-pooling layer and then the output is flattened to get a 25,600-dimensional sparse vector.
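A minimal sketch of this kind of transfer-learning feature extractor, assuming TensorFlow/Keras: the input spectrogram size below is illustrative, and the exact image dimensions and pooling configuration that produce the 25,600-dimensional vector are not specified in the text, so the output length here will differ.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import MaxPooling2D, Flatten
from tensorflow.keras.models import Model

# Treat the (Mel bins x time frames) spectrogram as an image; 3 channels are
# needed because VGG16's ImageNet weights expect RGB input.
spec_height, spec_width = 216, 1293            # illustrative spectrogram size

base = VGG16(weights="imagenet", include_top=False,
             input_shape=(spec_height, spec_width, 3))
x = MaxPooling2D(pool_size=(2, 2))(base.output)  # extra max-pooling layer
features = Flatten()(x)                          # long, sparse feature vector
extractor = Model(inputs=base.input, outputs=features)

# Example: extract features for one spectrogram "image" (random placeholder data).
spec_image = np.random.rand(1, spec_height, spec_width, 3).astype("float32")
vec = extractor.predict(spec_image)
print(vec.shape)   # dimensionality depends on the chosen input size
```

Any similarity measure (for example, cosine similarity between these vectors) can then be used to rank candidate tracks against a query track.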
https://medium.com/analytics-vidhya/content-based-automated-playlist-generation-using-deep-learning-b892a7de3d3c
['G Pravin Shankar']
2020-11-25 10:07:40.503000+00:00
['Music Recommendations', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'Deep Learning']
Organizing Your Stuff - A Step Towards Leadership
There are many steps we can take to make ourselves a leader for tomorrow. This concept will only inspire people with growth mindsets who believe that the qualities a leader possesses are acquirable. “For every minute spent organizing, an hour is earned.” A leader is one who organizes things firsthand. This makes him a different person from those who do not see the simple difference between an organized and an unorganized environment. Things look a little bit hard at first, but believe me, the more you organize, the more time you save. In this blog, I am about to share my experience of cleaning my room with a little help from my cousin. It is good to share positive experiences with people with whom you are close. While learning these beautiful concepts to live life by, I started to think about adopting some organizational skills by working on and organizing my room. Here are some glimpses of our room before organizing it. Tidy Room | Before Cleanup Tidy Room | Before Cleanup Here you can see an unorganized environment. While sitting in bed we use our laptops, and the cables make a mess. All these messy clothes, wires of laptops and mobiles, blankets, used masks, and a lot of other things were silently leaving a negative impact on our minds and bodies. We started sorting things to put them in order. The interesting thing is that our home is currently being reconstructed, which leaves us a bit relaxed about the messy house. But today we realized that although things can take time, this does not mean we should be careless about the environment we can change. So we took a little time off from gadgets and tried to clean and organize the room as much as we could. Cleaning is itself satisfying to watch. It took a short time, but the results were worth it. Let us have a look at the room just after a little organization. Well Organized | After Cleanup Well Organized | After Cleanup As you can see, the struggle was worth it. Our room gave us a fresh start to work on our things with new energy. Today I realized that before doing these things they seem overwhelming, but when we actually do them, we get them done within minutes. Clearly, these small things have a great impact on our time management, mental health, and physical energy. I am certain that these small steps can lead us to be great leaders of tomorrow. #AmalFellowship #FixingThingAroundYourself
https://medium.com/@ammaruit/organizing-your-stuff-a-step-towards-leadership-ef015fc8ab9
['Ammar Ali']
2020-12-19 00:13:04.857000+00:00
['Organizing Your Life', 'Leadership Development', 'Amal Academy']
Paresh Rawal Biography | Indian Film Actor | GujaratCelebs
Paresh Rawal is a well-known comedian and a famous Indian film actor. He is also a politician who is known for his incredible work in Bollywood. He was a Member of Parliament representing Ahmedabad East in the Lok Sabha of the Indian Parliament (BJP) from 2014 to 2019. Paresh Rawal was born on 30th May 1950. His age is 70 years. He was born and brought up in Mumbai. His height is 5'8". His father’s name is Dhaylal Rawal. Paresh belongs to a reputed Gujarati Brahmin family. He completed his schooling under the Maharashtra SSC Board and his college education at Narsee Monjee College of Commerce & Economics, Vile Parle, Mumbai, in March 1974. He speaks four languages: Hindi, English, Marathi & Telugu. Personal Life He married actress Swaroop Sampat, winner of the Miss India contest in 1979. They have two sons named Aditya & Anirudh. Paresh Rawal started his career in 1974 with theatre & Gujarati plays. He earned a lot of appreciation & love from the audience. Pareshji made his debut in the movie Arjun in a supporting role in the year 1985. He gave the blockbuster movie NAAM in 1986, showing magnificent talent. Apart from Bollywood, he has also worked in Tollywood. He earned a special place with the Telugu movie Kshana Kshanam. Recently, he made his digital debut, with Taran Adarsh as a producer, in a film titled Welcome Home, streaming on the OTT platform SonyLIV. Read: Himesh Reshammiya Biography He received the National Award in 1993 for the movie Sardar, in which he played the role of a freedom fighter. In the year 2000, he won the Filmfare Best Comedian Award for the movie “Hera Pheri”, in which he played the Marathi landlord Baburao Ganpatrao Apte. In 2003, he was nominated for the Zee Cine Award for Best Actor in a Comic Role for Awara Paagal Deewana. In 2004, he was nominated for the IIFA Award for Best Performance in a Comic Role for Hungama and Baghban. In 2006, he received the Sardar Patel International Award for his acting as Sardar in the Sardar Patel film. He won the Apsara Award for Best Performance in a Comic Role in 2010 for Atithi Tum Kab Jaoge? In 2012, he was nominated for the IIFA Award for Best Performance in a Comic Role for Ready. In 2013, he won the IIFA Award for Best Performance in a Comic Role for OMG — OH MY GOD. In 2014, Paresh Rawal was honored with the Padma Shri for his spectacular contribution to the Indian film industry. He worked in more than 100 movies from the 1980s to the 1990s, mainly in villain roles; after that, he gave blockbuster comedy films. Some of his incredible movies are Awara Paagal Deewana (2002), Hungama (2003), Hulchul (2004), Garam Masala (2005), Malamaal Weekly (2006), Golmaal (2006), Bhool Bhulaiyaa & Welcome (2007), Mere Baap Pehle Aap (2008), De Dana Dan (2009), Aakrosh (2010), Sanju (2018) & many more; the list is never-ending. His upcoming movies are Hungama 2, Toofaan, Hera Pheri 3 & Coolie No. 1. In 2012, he played a lead role in the blockbuster movie OMG — Oh My God, with Akshay Kumar in a supporting role. He is fond of cricket matches. His hobbies are listening to music, reading books & traveling. PM Narendra Modi is his favorite politician. His favorite actors are Kajol & Amitabh Bachchan. Apart from this, he is the producer of daily soaps: “Teen Bahuraaniyaan”, which aired on Zee TV; “Laagi Tujhse Lagan” on Colors TV; and a third, “Main Aisi Kyunn Hoon”, broadcast on the Sahara channel. From the movie Oh My God in 2012, he gained a lot of appreciation from the audience, and as a producer as well.
His next venture is the biopic of the legendary PM Narendra Modi. Last but not least, he is one of the finest actors in the Indian film industry.
https://medium.com/@gujaratcelebs/paresh-rawal-biography-indian-film-actor-gujaratcelebs-858af35c3cb4
[]
2020-12-14 03:23:30.312000+00:00
['Biography', 'Politics', 'Actors', 'Paresh Rawal', 'Comedy']
How microdosing a cactus can have a maximum impact on your performance.
Are you looking for a boost? Want a way to enhance your creativity while experiencing a higher sense of self-awareness? You still want to be you, only better. A higher functioning, more creative, and more connected you. At Live More Perfect Days we’re testing new and innovative ways to optimize our human experience and how we can show up better in our businesses, families, and communities. We’re calling this month #MicrodoseMay. Every few days we’ll take a microdose of San Pedro (a psychedelic cactus) before a work session and measure and document our experience. Gareth did a vlog a while back on the strange labelling and use of the word “drugs”. In this edition of Lifestyle Designer’s Digest we explore microdosing and its potential as a Lifestyle Design tool. So, we’re planning to take drugs eight times in the month of May! How microdosing a cactus can have a maximum impact on your performance. The benefits of psychedelic microdosing are being harnessed in countless creative industries for its ability to provide hours of uninterrupted concentration without altering the outward state of a person. Micro-dosing is taking a low dose of a natural psychoactive substance to boost creative consciousness and overall life balance. The doses used in micro-dosing are too low to promote intoxication, but they do improve focus and creative ability while providing emotional balance and a sense of personal wellness. The idea behind micro-dosing is not new. Dr James Fadiman first began his research into the effects of psychedelics in the 1960s. He gave doses of various substances to scientists and mathematicians to monitor the effects they would have on their creative problem-solving. He concluded that micro-dosing can rebalance people and align their bodies to a more natural state. One of the most popular substances used in modern-day microdosing is San Pedro. Cacti such as San Pedro and Peyote have been used for thousands of years in traditional spiritual journeys and religious ceremonies. The key to the effectiveness of San Pedro lies in its mescaline content, the power of which has been harnessed by many entrepreneurs to increase productivity as well as creativity. What exactly is San Pedro? San Pedro, or Huachuma as it is traditionally known, is a light green, nearly spineless member of the cactus family. It features nocturnal blossoms and can grow more than five meters tall. Its native environment is high on the hills of the Andes mountains, some 6,500 to 9,700 feet above sea level. Today this member of the cactus family is cultivated in many areas around the world. Comparing San Pedro to Other Psychedelics / How does San Pedro Measure Up? Our uniqueness means that we all have different tolerances and different reactions to stimuli. This means that no two people will experience their microdosing journey in the same way. Many people have found San Pedro to be more reliable than other substances. Having said this, there are a few aspects that set San Pedro micro-doses apart from other psychedelics. Time The effects of San Pedro last anything from 10 to 12 hours. Of this time, the peak, where you stand to harness the most productivity, would be around seven hours. With other psychedelics, you’re looking at six to eight hours of effect with around three to four hours of productive awakening. Control Those who have tried San Pedro, as well as other psychedelics, say the effect of the mescaline allows for more control.
In essence, with other psychedelics, including LSD, it feels as though you are being guided through your day, whereas San Pedro gives you the feeling that you are guiding your day. San Pedro also lacks the edginess of other stimulants, allowing for more fluid management of your day. Recommended Dosage / How Much and When to Take San Pedro Taking a micro-dose is exactly that: a micro amount, or small segment. This would usually be between a 10th and a 20th of a regular full dose. San Pedro should also be used following a schedule with regular intervals, taking a micro-dose every three days and not for longer than three months at a time. The effects of San Pedro will be felt 15 to 45 minutes after taking a microdose, with peak performance kicking in at around three hours. Dr Fadiman suggests micro-dosing at intervals of no less than three days. This would mean you take a micro-dose on day one and again on day four. Nothing should be taken on days two and three. It is also recommended that dosages are taken early in the morning to avoid effects lasting late into the night. One of the most popular best practices for taking a microdose of San Pedro is to fast for around 12 hours before taking it. This will counteract the possibility of nausea as well as increase the rate of absorption. Once you’ve taken it, it’s best not to jump right into your day; rather, spend a little time in self-reflection or meditation. This is a great time to run through your pregame for the day. Positive Effects of San Pedro / Pros of Taking San Pedro Before we get into what makes microdosing San Pedro such a powerful tool for your day, let’s look at how it affects our minds. Under controlled circumstances and when used correctly, San Pedro can be life-changing. When San Pedro is ingested, it activates the area of the brain which controls our sense of self. When stimulated, it is this area of our brain that allows us to feel connected. Some of the positive effects experienced by people when using San Pedro include heightened mental clarity, self-reflection, better mood, enhanced creativity, lucid dreams, and the ability to be more sociable. In several groundbreaking studies, micro-dosing has been used to treat those with high-functioning anxiety and varying levels of depression, as well as those suffering from addiction. The Fine Print Despite its availability, San Pedro and many of these plants are still illegal in many countries. So please apply all the normal disclaimers: This is not medical advice. We’re not experts on this subject. Do not try this unless you’ve done your own research and feel comfortable that this is a path you want to undertake. Make sure you’ve spoken to a medical professional that you trust before consuming any psychedelics. San Pedro does come with an unwritten disclaimer. Microdosing is by no means a one-size-fits-all solution for getting everything you want out of life. Overstimulation is a possibility. Those who have a history of mental illness, including bipolar disorders and psychosis, should avoid any sort of psychedelic stimulants, including San Pedro, even in microdoses. Another important thing to remember is that no one micro-dose journey will be the same as the next one. Many variables, including the quantity and quality of the product as well as your surroundings, will influence the experience for individuals. Getting it wrong, even slightly, can mean you’re giving up the chance for real connection and swapping it for disappointment.
But despite the potential risks, many are flouting the cautionary warnings to reap the benefits of microdosing. Conclusion People around the world are turning to micro-dosing, not to escape their lives, but to live better, more fulfilled lives. Research into the use of Mescaline, even in small doses, is by no means complete. Researchers have only just begun to understand the powerful abilities of San Pedro. Before setting out on any micro-dosing journey, consult your doctor. Follow our journey. This topic might not be for everyone. So before we share our experiences of microdosing in detail, we'd like to get your consent. For us, this means being part of our online community at www.LiveMorePerfectDays.com. On our homepage, enter your email address and grab our special "Core Scaffolding" offer. This will give you access to our online community and Lifestyle Design courses, as well as a members-only section where we'll share our journey for #MicrodoseMay.
https://medium.com/@garethpickering/how-microdosing-a-cactus-can-have-a-maximum-impact-on-your-performance-f17a6bb14581
['Gareth Pickering']
2020-05-13 23:06:39.194000+00:00
['Psychedelics', 'Microdosing', 'San Pedro Cactus', 'Performance']
Andrea Leiter Joins Crypto Law Review — Welcome!
On September 28, 2018, CleanApp Foundation announced the launch of Crypto Law Review. In only five months, we have grown to nearly 400 subscribers and hundreds of unique daily visitors. CLR's growth is 100% organic. Our articles resonate with crypto and broader global governance communities because they focus on the hardest and most consequential questions in the space. And we're only getting started. Today, we are excited to announce the addition of Andrea Leiter to our editorial team as co-editor of the Crypto Law Review! A law graduate of the University of Vienna, and a doctoral fellow of the Austrian Academy of Sciences, Leiter has years of rigorous research and publication experience. Leiter's crypto law analysis focuses on the most important theoretical and practical questions of the day: the nature, function, and limits of crypto legal forms (e.g., "smart contract," property, "self-enforcement," etc.); the essence of crypto legal jurisdiction vis-a-vis existing jurisdictional frameworks; crypto law in broader socio-legal historical and critical contexts. As important, Leiter is an institution-builder who understands the centrality of collaboration and plural global perspectives. In a field as inherently contested and politicized as "crypto law," voices and networks like Leiter's offer much-needed grounding effects. The global crypto law and governance community has already benefited from Leiter's analytical work. Please join us in welcoming Andrea to this critical scaling and editorial role.
https://medium.com/cryptolawreview/andrea-leiter-joins-crypto-law-review-welcome-c8457f080579
[]
2019-03-17 14:21:14.380000+00:00
['Cryptocurrency News', 'Crypto Law Review', 'Blockchain News', 'Crypto Lawyer', 'Blockchain']
Here’s What I Know…So Far
I tweeted this morning that I woke up to an interesting text message on my phone but that I wanted to chase details before sharing. For those of you who demonstrated patience, thank you. Here is what I know right now. The Milwaukee Brewers have requested a copy of the contract of a certain player so that they can review it. I'll share the name in a minute, but what that means is that they want to review all of the particulars of the player's contract so that they're aware of any bonuses, escalators, incentives, clauses, etc., which are above and beyond standard MLB contract language. What this IMPLIES is that the Brewers are considering acquiring said player and want to know what they'd be getting into if they did so. In no way should this become a "the Brewers are absolutely trading for this player" declaration. At least not yet. The player in question is Texas Rangers shortstop Elvis Andrus. Yes, that's not Jurickson Profar, but it's still exciting. Andrus is a proven talent and still only 23 years old. This has come about because the word is that the Rangers view Profar as MLB-ready and therefore could move Andrus in a package to acquire a group of players centered around Brewers' starting pitcher Zack Greinke. Rumored to be involved going to Texas along with Greinke if this deal actually comes off? A relief pitcher and a position player. Yes, I've been told the names, but too often the supporting pieces in a deal change, so I'll keep quiet on that for now. Rumored to be coming back to Milwaukee with Andrus is (at least) a touted minor league starting pitcher. I'll stay on this to see if it gets further down the road or if it fizzles. Try to remember that even deals announced as "done" by professional media (most recently Dempster to the Braves) can still fall apart for a number of reasons. And I'm not saying at all that there is a deal that's fully on the table. All I know for sure is that Milwaukee had cause to find out additional information about Elvis Andrus. Keep that in mind.
https://medium.com/brewernation/heres-what-i-know-so-far-a397c2c35386
['The Brewer Nation']
2017-01-03 22:11:42.615000+00:00
['Trades', 'Players', 'Elvis Andrus', 'Brewers', 'Rumors']
Simple Object Movement in Unity
The last task to complete GameDevHQ's 2D Game Design course requires that we create a large boss with all the bells and whistles at the end of a Space Shooter level. As with many things in Unity, doing simple movement is not intuitive. For example, I sought to create a direction vector based upon the current position of an object that would direct the object to a specific target position. Specifically, I wanted to move the object vertically by 5 units and horizontally by 3 units. But this doesn't work. The issue is that you cannot modify each individual axis in a vector. So, joining them together into a Vector3, I wrote this: But this didn't work, either. To solve this, I A) created a simple offset (-2 and 2 for this new example) and B) added it to come up with the target position. But then, C) I also had to .normalize the result for the direction to always work, as you can see in this code here: Then, when it came time to move the object toward the target position, I could include this single line of code in Update(). transform.Translate(direction * speed * Time.deltaTime); It's truly amazing how much power 4 lines of code can produce. But coming up with the right 4 lines is not always so easy. To see the practical use of this code, this image shows how the enemy objects move out of the way when the player shoots at them. Thanks to Jonathan and Al at GameDevHQ, our superhero team lead Dan, and the Economic Development Alliance of Hawaii for making this program available for those of us impacted by Covid-19.
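As a rough reconstruction of steps A) through C) and the Update() call described above, the script might look something like this; the class name, field names, and the -2/2 offset values are illustrative rather than the author's exact code.

using UnityEngine;

public class MoveTowardOffset : MonoBehaviour
{
    [SerializeField] private float speed = 2f; // movement speed in units per second (illustrative value)

    private Vector3 direction;

    void Start()
    {
        // A) a simple offset from the current position (-2 on x, 2 on y in this example)
        Vector3 offset = new Vector3(-2f, 2f, 0f);

        // B) add the offset to the current position to get the target position
        Vector3 targetPosition = transform.position + offset;

        // C) normalize the difference so the direction vector always has a length of 1
        direction = (targetPosition - transform.position).normalized;
    }

    void Update()
    {
        // the single line from the post: move a little further along the direction each frame
        transform.Translate(direction * speed * Time.deltaTime);
    }
}

Because the direction is normalized, the same speed value produces consistent movement no matter how large the offset is.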
https://medium.com/@troylyndon/simple-object-movement-in-unity-c25cebccc606
['Troy Lyndon']
2020-12-11 21:54:59.703000+00:00
['Unity Game Development', 'C Sharp Programming', 'Gamedevhq']
MythBuster: 10 Rumors About Flutter and Why It's Not Worse than Android + Kotlin
Almost all mobile developers are talking about Flutter, the new cross-platform framework that can produce beautiful apps in record time, promises 60 fps rendering, and can export its applications to Android, iOS, Windows, and soon to HTML! The problem with Flutter? Every developer has heard some myths about it: "I don't want to develop with Flutter because #%&ç*", "No way, with Android & Kotlin we can #%&ç*, but Flutter doesn't have it." Let's shed some light on those myths together and bust them! After reading this article, you will be in a hurry to start your first project in Flutter 😉
https://medium.com/ideas-by-idean/mythbuster-10-rumors-about-flutter-why-its-not-worse-than-android-kotlin-25f54295440
['Florent Champigny']
2019-03-21 10:30:46.378000+00:00
['Technology', 'Kotlin', 'Java', 'Android', 'Flutter']
Next Gen Companion Apps for Games
Companion app for one of the most well known titles in gaming history Companion apps have been leveraged by big titles like Call of Duty, The Division, NHL 13, Fallout, and countless more. They provide a genius way to engage players even when they aren't in front of their consoles or PCs. However, these apps require installing yet another app on our phones, and are forced to adhere to the rules of Google Play and the App Store. But what if we were to build companion apps as progressive web apps, rather than native Android and iOS apps? … Well, what is a progressive web app anyways? What are Progressive Web Apps (PWAs)? Back in 2007, Steve Jobs envisioned a world where websites acted and looked just like native iPhone applications. Safe to say, he took that back pretty quickly. But in 2015, developers at Google introduced the term "Progressive Web App", or PWA for short, to define a type of website that is responsive, installable on device, and capable of many things that everyone thought only native apps could do (such as being able to function offline, access the camera, microphone, navigation, etc). Now we live in a world where we can build web applications almost indistinguishable from their native counterparts. They aren't so widely adopted yet, but some big names have already tested them on their massive audiences. Take Twitter for example: Twitter Lite Progressive Companion Apps Now that we know what a PWA is, we can talk about why PWAs are a great choice when building companion apps for your games. Let's call them Progressive Companion Apps for future reference. Keep Your Revenue Share Microtransactions are a massive source of revenue for many games on the free-to-play model. However, most platforms take a slice of your revenue. If you try to be sneaky and accept in-game transactions outside of the confines of Google Play or the App Store… well… just look at the current war between Epic Games, Apple, and Google. 30% adds up quick… PWAs can give you your 30% back If you are building a Progressive Companion App that sells non-in-game items (think fan gear, digital or physical collectables, behind-the-scenes content, etc), you can still reap the benefits of an installable mobile app and keep 100% of your revenue, all without violating any policies of the platform. Remember, you're in web country now! Land of the free. Build Once, Run Everywhere At the end of the day, it's really just a website that looks & feels like a native app. This means if you design it properly, your players can use your companion app on Windows PC, Linux, Mac, iPhone, Android, Windows phone, the list goes on! The web is probably the one place where, when we say "build once, run everywhere", we really mean it (except you, Internet Explorer…). You don't need a separate developer for each platform; you only need a web developer who can build PWAs (I'm going to give myself a shout out here). Alternative Marketing Channels When you build a companion app as an Android or iOS app, you are at the mercy of the policies of Google and Apple. When you build a Progressive Companion App, you are at the mercy of… your imagination! Visibility in the app store will largely determine the success of your game + companion app. If Apple decides they want to push some Arcade games the day your app launches, well, tough luck. If you build a Progressive Companion App, you can push your app with good ole SEO skills, affiliate links, or even better, share links from other users! 
Have you ever received an invitation to download an app from some random friend? Did you actually download it? Probably not. We all ignore the random invites from friends to download apps we don't know. If you could have clicked the link and tried it out instantly without downloading, would curiosity have gotten the best of you to at least take a peek? Probably. If it was actually interesting, would you have installed it? Probably. Or not. Doesn't matter, since you can still use it without installing! Referrals work wonders on the web because it's so easy to access what we are looking for. Going with a Progressive Companion App unlocks this marketing power. No Need for Downloads Let's face it, it's hard to get someone to download an app and not delete it in the first couple of minutes. Assuming you manage to convince a player to download your companion app, Adjust analyzed 8 billion app installations and found that games are typically deleted within the first 4 days… Games are typically deleted within 4 days A Progressive Companion App, however, does not need to be downloaded. "But Kris, you said PWAs can be installed!" Yes… but that doesn't mean they are downloaded up front. A player can use a Progressive Companion App from any browser without ever installing it. If they choose to do so, they can install it on their phone or computer, but there still isn't a typical download process as you would face when downloading Candy Crush, for example. Conclusion The gaming industry often overlooks the web as an alternative gaming platform, but it's actually perfect for things like companion apps, where the experience tends to be less intense in terms of graphics and memory usage. If you have questions or thoughts, feel free to share in the comments!
https://medium.com/swlh/next-gen-companion-apps-for-games-cdf7ae3ad9f5
['Kris Guzman']
2020-12-03 13:14:44.771000+00:00
['Game Design', 'Web Development', 'Game Development', 'JavaScript', 'Software Development']
The Vickrey Auction and how a liar helps you tell the Truth
Let’s assume you love ice cream (a safe assumption), and that there was a neat little parlor in your neighborhood that served the most amazing chocolate ice cream that could ever be conceived. You frequented the place and enjoyed the ice cream and life was bliss until they closed up shop, leaving you devastated. So imagine your delight when the owner announces he’ll be putting on his apron one last time for a special treat. A treat that will go to the highest bidder. You’re excited about the opportunity, but know that your wallet isn’t the biggest one around, and you need to pay rent so you can actually have a home to eat ice cream in. You walk into the auction, and they announce that everyone needs to seal their bid in an envelope, and whoever makes the highest bid gets the ice cream. So how much do you bid? If you spend more than $100, you won’t have enough to pay off the loan sharks and this might turn out to be your last ice cream ever. Not a bad way to go, but you’re rational. And since you are rational, you believe no ice cream is worth $100, and, delicious as it may be, the ice cream is only worth $70 to you. You’re about to put in the $70 bid when something hits you. What if no one else believes the ice cream is worth $70 — the ignoramuses. What if the next highest bid is only $50? Then you’ve spent a lot more than you needed to and end up feeling shortchanged, even though you were happy paying this amount initially. And besides, everyone else is afraid of overpaying too. So everyone ends up bidding less than they were originally willing to, and the maker of the world’s most delicious ice cream leaves with chump change and a possibly disgruntled buyer. This is an example of the first-price sealed-bid auction.
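To make the incentive problem concrete, here is a toy numeric sketch, not from the original post, contrasting the first-price format described above with the second-price (Vickrey) format the title refers to, in which the winner pays the runner-up's bid rather than their own; all dollar figures are made up.

using System;
using System.Linq;

class AuctionSketch
{
    static void Main()
    {
        double[] sealedBids = { 70, 50, 40 };   // hypothetical sealed bids; you bid your true value of $70
        double winnerValue = 70;                 // what the ice cream is actually worth to the winning bidder

        double firstPricePayment = sealedBids.Max();                                // first-price: the winner pays their own bid
        double vickreyPayment = sealedBids.OrderByDescending(b => b).ElementAt(1);  // Vickrey: the winner pays the second-highest bid

        Console.WriteLine($"First-price surplus: {winnerValue - firstPricePayment}"); // 0  -> pressure to shade your bid below $70
        Console.WriteLine($"Vickrey surplus:     {winnerValue - vickreyPayment}");    // 20 -> bidding your true value costs you nothing
    }
}

Under the first-price rules, bidding your honest $70 leaves you with zero surplus, which is exactly why everyone shades their bids downward; under Vickrey rules, the payment depends only on the other bids, so truthful bidding is the safe strategy.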
https://medium.com/@mohammedhamza-1/the-vickrey-auction-and-how-a-liar-helps-you-tell-the-truth-2f1310b0ba41
[]
2020-12-16 14:18:49.354000+00:00
['Game Theory', 'Math', 'Computer Science']
THE ALPHA AND OMEGA END TIMES MESSAGE
TNDL: "THE ALPHA & OMEGA. THE ALPHA AND THE OMEGA IS SENDING A STERN MESSAGE TO THE END TIMES CHURCH OF EPHESUS, BECAUSE HIS MODERN CHURCHES ARE DEPARTING FROM THEIR FIRST LOVE OF CHRIST AND COMPROMISING WITH THE WORLD. THEREFORE, THE LORD YESHUA THE CHRIST IS SAYING REPENT, AND COME UNTO ME AND GET BACK YOUR FIRST LOVE, OR ELSE I WILL SPIT YOU OUT OF MY MOUTH UNLESS YOU REPENT! READ REVELATION 2." What Christ Says to the Churches (Rev 2:1–3:22) by HQ Bible Study Team 02/01/2020 Question: WHAT CHRIST SAYS TO THE CHURCHES Authored by HQ Bible Study Team: Teddy Hembekides, Mark Yang, Ron Ward, Augustine Suh, and Paul Koh Revelation 2:1–3:22 Key Verses: 2:4–5a; 3:20 Survey Jesus' messages to the seven churches and for each, answer the following: To whom is it addressed? How is Jesus portrayed? On what basis does Jesus commend or rebuke each church? What commendation is given, if any? What rebuke is given, or problem addressed, if any? How does Jesus counsel or warn the church? Is the church unified or divided in its obedience or disobedience? What purpose or goal does Jesus have for the church? What reward is promised? What exhortation is repeated for each church? In these two chapters, what can we learn about how Jesus relates to his church as a whole? (Consider such things as: his presence, knowing, judgment, reward, etc.) What can we discover about what pleases or displeases Jesus in his church? Why is this important to us today? How would Jesus see the spiritual condition of his church today? What would he commend or rebuke? In light of this study, what prayer topic can you find for his church, locally, nationally and globally? Message: WHAT CHRIST SAYS TO THE CHURCHES Revelation 2–3 Key Verses: 2:4–5a, "Yet I hold this against you: You have forsaken the love you had at first. Consider how far you have fallen! Repent and do the things you did at first." 3:20, "Here I am! I stand at the door and knock. If anyone hears my voice and opens the door, I will come in and eat with that person and they with me." Thus far we have studied chapters 1–3 of Revelation, which portray the glorified Christ and include his messages to seven churches. Christ spoke clearly and directly to real churches in real places in real time. These messages also apply to all churches of all time. They are also the basis for understanding and properly interpreting the rest of the book of Revelation. Until now, we have studied each individual church. Today we want to step back and see Christ and all seven churches as a whole. We will review who Christ is, how Christ sees the church, and what Christ says to his church. How Christians see the church is important. How those outside the church see it is also important. But far more important is how Christ sees his church and what he says to it, because Christ is the founder, head and judge of the church. So, let's listen to what Christ says to his church. First, who Christ is. In chapter 1 John saw the vision of the glorified Christ. He appeared very differently from how John had seen him before. When Christ was on earth, John was so comfortable with him that he leaned his head on Christ's bosom. But before the glorified Christ, who was awesome and transcendent, John fell down as though dead. 
Then Christ commanded him to write down and send to the seven churches what he had seen, what was then, and what would take place later (1:19). In these letters, Christ revealed himself to each church in a unique way. We can see that Christ knew each church well–their struggles, their city and the environment they lived in, and their political and economic situations. Moreover, Christ knew their spiritual condition. Christ begins each letter: “These are the words of him…” followed by a unique revelation of himself. Who is Christ? To the church in Ephesus, Christ was “…him who holds the seven stars in his right hand and walks among the seven golden lampstands” (2:1b). Christ knew that the influence of the Ephesian church was great; it was the mother church. They were probably concerned about the other churches. But Christ was concerned about them. Christ wanted them to know that he is among the churches, walking with them, nourishing them, protecting and guiding them as the head of the church. All the Ephesian church needed to do was restore their first love for him and trust his guidance. To the church in Smyrna, Christ was “…him who is the First and the Last, who died and came to life again” (8b). Christ understood that this church was afflicted, poor and undergoing severe persecution. He wanted them to know that he is the eternal and infinite God, and that he is the living God who has authority to give life. Knowing this Christ strengthened them to persevere through trials and persecution. To the church in Pergamum, Christ was “…him who has the sharp, double-edged sword” (2:12). This sword represents the power of Christ’s word to judge and purify his church. Pergamum was a center for worshiping the emperor, as well as four of the greatest Roman and Greek gods. Satan’s power was very strong in that city. But Christ is more powerful than Satan. Christ’s word, like a sharp, double-edged sword, could judge and destroy idols and false teachings, like those of the Nicolaitans. When they trusted in Christ’s word, they could overcome the power of Satan in their culture. To the church in Thyatira, Christ was “…the Son of God, whose eyes are like blazing fire and whose feet are like burnished bronze.” In that church, most people tolerated Jezebel, a false prophet. Outwardly, their activity was amazing; they were doing more than they did at first. But they condoned sexual immorality and idolatry. Christ, who is the Son of God, saw through their activity to the motives of their hearts, and he was about to judge them with his mighty power. Christ sees the motives of our hearts and never condones sin. To the church in Sardis, Christ was “…him who holds the seven spirits of God and the seven stars” (3:1). The church in Sardis had a reputation for being alive, but Christ said they were dead. Who can help the dead? Let us see. In one hand, Christ holds the seven spirits of God–the Holy Spirit–and in the other hand he holds the seven stars, which are the messengers of the churches. Christ sends the Holy Spirit to inspire and empower his messengers to proclaim the gospel boldly. Through the Spirit-filled words of God, Christ brings the dead church back to life. Only Christ can make the spiritually dead alive. The church in Sardis had to hear Christ’s words and wake up. To the church in Philadelphia, Christ was “…him who is holy and true, who holds the key of David. What he opens no one can shut, and what he shuts no one can open” (3:7). 
False Jews were spreading lies that Jesus was not God and not the Christ. Satan could use these lies to plant doubt which undermined the believers’ faith and made them shrink back. Christ identified himself to them as “holy and true,” which means that he is God, and the true Christ, who holds the key of David. Christ can open the door of salvation for people, even the Jews. The unbelieving Jews would come and fall at the believers’ feet. To the church in Laodicea, Christ was “…the Amen, the faithful and true witness, the ruler of God’s creation” (14). This church was influenced by early Gnostics who taught that Christ is a created being–just one of the great teachers. This view of Christ made them degenerate spiritually until they became lukewarm. But Christ is “the Amen,” the truth of God incarnate. He is trustworthy; his testimony is reliable. He is not a created being, but the Creator God. Knowing Christ truly is the foundation of the church.
https://medium.com/@jkwoods/the-alpha-and-amega-end-times-message-8d879c1eaf57
['J. K. Woods']
2020-12-23 03:38:09.400000+00:00
['Religion', 'Christianity', 'Spirituality', 'God']
CarpoolVote.com: Calling voters and volunteer drivers for rides to the polls
Carpool Vote — the free election ride sharing platform — invites volunteer drivers and voters to sign up to the platform on CarpoolVote.com. The service, which connects volunteer drivers with anybody who needs a ride to claim their vote, has already started matching drivers and riders who have signed up on the site. It is now inviting drivers, partner organizations, and voters who need a ride to sign up using the I NEED A RIDE and I CAN OFFER A RIDE buttons on CarpoolVote.com platform — via their desktop or mobile. Those without internet access can also call or SMS the hotline: 804–239–3389. The Carpool Vote matching algorithm then pairs drivers and voters who can travel at a similar time and place, taking into account seating and accessibility needs. Drivers who would like to see where they are needed most can view the map on the landing page. This displays all riders who are still awaiting a match. Riders and organizations who want a better idea of the rides available can also see where drivers are accepting requests. Carpool Vote founder Sasjkia Otto says: “The more people who know about the Carpool Vote service, the more will use it, and the more people can get matched. We encourage everyone who is passionate about democracy to sign up — even riders who do not have regular internet access! We notify drivers via SMS or email when we’ve found a rider who could be a match. If the driver accepts the match, we pass on details of the rider’s preferred method of contact. It is then up to the driver to get in touch with the rider to arrange the ride. We hope that this makes the service easy to use — no matter what kind of phone or connection you have.”
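Carpool Vote has not published its matching logic, so purely as an illustration of the kind of pairing described above (a similar time and place, with enough seats), here is a hypothetical toy sketch; the record types, field names, and the one-hour window are assumptions, not the real system.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, simplified records -- not the real Carpool Vote data model.
record Rider(string Name, string Area, DateTime Departure, int SeatsNeeded);
record Driver(string Name, string Area, DateTime Available, int SeatsFree);

class ToyMatcher
{
    // Pair a rider with the first driver in the same area, available within an hour, with enough seats.
    static Driver? Match(Rider rider, IEnumerable<Driver> drivers) =>
        drivers.FirstOrDefault(d =>
            d.Area == rider.Area &&
            Math.Abs((d.Available - rider.Departure).TotalMinutes) <= 60 &&
            d.SeatsFree >= rider.SeatsNeeded);

    static void Main()
    {
        var drivers = new List<Driver> { new("Dana", "Richmond", new DateTime(2016, 11, 8, 9, 30, 0), 3) };
        var rider = new Rider("Ravi", "Richmond", new DateTime(2016, 11, 8, 9, 0, 0), 1);
        Console.WriteLine(Match(rider, drivers)?.Name ?? "no match yet");
    }
}

A real implementation would also have to weigh accessibility needs and notify both sides, as the announcement describes, but the core idea is simply filtering candidate drivers against each rider's constraints.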
https://medium.com/@CarpoolVote/carpoolvote-com-calling-voters-and-volunteer-drivers-for-rides-to-the-polls-635e3e5ef3ab
['Carpoolvote.Com Usa']
2016-11-06 17:55:48.785000+00:00
['Ridesharing', 'Voting Rights', '2016 Election', 'Democracy']
The future of trust and scaling digital identity
The future of trust and scaling digital identity According to the World Bank's estimates, almost 1 billion people globally lack any form of legal identity. At the same time, over 4.5 billion of the world's population use the internet, with this number growing at a rate of 7% last year. Covid has further accelerated this integration of our physical and digital worlds — data regarding our identity and behaviour online have become the fuel of the digital economy, enabling provision of increasingly personalised offerings and value creation. On the other hand, Big Tech continues to dominate our online presence — detailed user profiling and associated privacy concerns have risen to the top of regulators' and consumers' agendas alike. Our trust in how companies and organisations handle this sensitive data is becoming increasingly inseparable from our overall brand experience. Digital Identity is the key to both worlds — enabling innovation to flourish with improved data access, while protecting our privacy and sharing our data with only the trusted few. It is therefore no surprise that investment in a range of identity solutions has increased over the past years, fuelled by promises of emerging technologies such as blockchain, biometrics, and machine learning. What is exciting is not the verification and authentication infrastructure itself, but what this unlocks — new business models and ecosystem interactions, seamless online-to-offline experiences, all coupled with enhanced security, privacy, and user control. As ID2020 puts it, an ideal digital identity solution should be Personal, Persistent, Portable, and Private. As it currently stands, our identities are fragmented across identity providers, brokers, and service providers — we are yet to glide smoothly between our physical and online worlds. From the debate over privacy vs control in Test & Trace apps to tackling disinformation online, appropriately designed identity solutions are key in the digital age. Build ecosystems and superior data sharing to drive interoperability and scale For any identity solution to reach a meaningful scale, obtaining recognition and interoperability from a critical mass of ecosystem players is essential. In the digital age, this means not only the traditional identity issuers and service providers, but also providers of network infrastructure, hardware, software, and services that connect our offline and online worlds. Take Belgium's itsme — started by a consortium of leading banks and mobile network operators, the combined strength of identity verification from banks, secure SIMs and networks provided by mobile operators, and authentication software provided by third-party partners has facilitated the success of the scheme. From an initially limited use case of e-banking verification, itsme became recognised by the government and evolved to support a range of public and private services. Similarly, South Korea's DID Alliance brought together banks, mobile operators, and hardware manufacturers such as Samsung and LG to develop standardised technologies and interoperable frameworks for digital identities. Whether through API integration of identity-as-a-service or through standardised protocols like DID, or X-tee, which underpins the success of Estonia's e-ID, involving a range of players alongside superior data sharing and standardisation is key to driving interoperability and scale. 
Identity system archetypes; Source: World Economic Forum "A blueprint to digital identity" User-centric solutions need to be radically transparent Estonia's e-ID scheme is known to be world-leading, and offers citizens the ability to vote, pay taxes, set up companies, and access almost all public services as well as private services online. Through the cryptographic data exchange protocol that connects across the ecosystem, all actions and access to citizen identity data are logged and viewable through a central portal. Along with supporting legislation to protect user privacy, this creates a high level of trust and user ownership over individuals' data. Decentralised technologies such as blockchain are also fundamentally shifting the way identities and data are stored and shared. An emerging design principle that sets the bar for user privacy is the concept of self-sovereign identity (SSI). Christopher Allen's 10 principles of SSI; Image Source: Jolocom Whitepaper SSI aims to create a high level of trust by putting the control and generation of identity credentials back in the hands of the user. Users are able to decide to whom and how much of their data is shared, have full transparency and access to their data, and are able to transport identity credentials across entities without the problem of vendor lock-in. Increasingly, SSI principles are adopted on different levels in the development of digital identity solutions, as can be observed in the cases of DID and Estonia. In this age where calls for open data sharing converge with those demanding user privacy and protection, SSI principles emphasise that the need for trust goes both ways. As secure digital identity solutions become the new gateway to our wealth of data, we, the consumers to whom the identities ultimately belong, should remain at centre stage. Identity is the new business moat for platform players The breadth of third-party service access enabled by online platforms such as Google and Facebook, alongside what are termed "super-apps" such as WeChat and Gojek, via a single login means that authentication credentials with these platforms have essentially become a source of identification. Whether via provision of Single Sign-On (SSO) or integration of third-party services within the platform itself, the wealth of data visible to the platform providers unlocks significant opportunities to personalise the customer experience. The ability for platform players to seamlessly integrate our online and offline activities would be the next step to being truly ubiquitous and sticky in users' lives. WeChat is a good example that has gone beyond e-commerce and online engagement to enable access to offline services in all aspects of life. Users of WeChat are able to book medical appointments, make bill payments, access municipal services, and even file for divorce via the app. In order to fully benefit from these services, users need to obtain verification from their banks and link ID details to the account to prove its legitimacy. Google, Apple, and Facebook are increasingly heading in this direction, with identity being an inseparable part of their payment efforts. Android and Apple Pay are bringing passports and driving licenses into their e-wallets, and with them, the ability to bridge online and offline activities with just your phone in hand. Facebook's Novi (ex-Calibra) intends to be tied to an ID from the get-go, starting as a means to prevent fraud, but it is really part of Facebook's longer-term ambition to set a new identity standard for how we interact. 
In a similar vein, identity also plays an important part in fintech business models. The combination of easier data access enabled by Open Banking with streamlined identity verification is facilitating the vision of Monzo and Revolut — the future marketplace and personal hub for your financial needs. For platform players, owning this identity relationship means holding the gateway position between consumers and an ecosystem of services — identity is becoming the new moat. Mental model for digital identity solutions The world moves at the speed of trust. — Thomas L. Friedman In his book Thank You for Being Late, Thomas Friedman studied the effect of digital transformation on people and communities. As everything accelerates, humans' ability to connect will be fundamentally based on the speed at which we can trust each other. As a society, how can we build and scale trust quickly? One thing is for sure: our identities, and being who we say we are, will be a cornerstone of this trust.
https://medium.com/predict/building-ecosystems-of-trust-scaling-digital-identity-1bb72e4551aa
['Sarah Wong']
2020-12-14 23:37:02.788000+00:00
['Platform', 'Technology', 'Digital Identity', 'Privacy By Design', 'Trust']
The Future of the Halloween Film Series Is Uncertain, According to One of Its Producers
Filmmaker (film producer, screenwriter, film director) and Author and Editor (Movies & Series) for Three Pixels Lab Media Production.
https://medium.com/@christosarfanis/%CE%B1%CE%B2%CE%AD%CE%B2%CE%B1%CE%B9%CE%BF-%CF%84%CE%BF-%CE%BC%CE%AD%CE%BB%CE%BB%CE%BF%CE%BD-%CF%84%CE%B7%CF%82-%CF%83%CE%B5%CE%B9%CF%81%CE%AC%CF%82-%CF%84%CE%B1%CE%B9%CE%BD%CE%B9%CF%8E%CE%BD-halloween-%CF%83%CF%8D%CE%BC%CF%86%CF%89%CE%BD%CE%B1-%CE%BC%CE%B5-%CF%80%CE%B1%CF%81%CE%B1%CE%B3%CF%89%CE%B3%CF%8C-%CF%84%CE%B7%CF%82-1e2e2c7b8c75
['Christos Arfanis']
2020-11-04 08:35:06.352000+00:00
['Halloween Ends', 'Halloween', 'Halloween Kills']
Why I’m Not Going to Homeschool Anymore
Why I'm Not Going to Homeschool Anymore And the choice makes me want to puke. Photo by NeONBRAND on Unsplash I've been homeschooling my daughter for two years now, ever since she was supposed to start middle school. I have written before about why I started homeschooling and even last month wrote a post about how it's bullies that keep my daughter from school, or rather, keep me from putting her back in school, but after months of consideration and lots of talks about it with family and friends, we've decided to give public school a try… only because it's the singular path to a special needs school. The system has BROKE BAD. You see, my daughter has multiple disabilities, including Intellectual Disability, which makes her much more immature than her peers of the same age. Also, even though she is going into eighth grade, she is really operating at a 4th and 5th grade level in most subjects. Now, I seem to be an outlier among parents in that I don't believe full inclusion is the best thing for my child. These days, public schools want special ed students to be in class with their peers as much as possible, to not be pulled out of class for hard subjects but to be given an aide to sit with them and help. Well, now, what do you think it's like to be the only kid in class with an aide sitting with you? Do you think that's conducive to making friends and not getting made fun of? Do you remember the special needs kids in your class when you were young? Were you nice to them, or did you just ignore them? There's a reason they have specialized schools for kids with disabilities, and these are only some of the reasons I want my child to go to one. But there are so many barriers in her way. Because teachers are under no obligation to work in the summer, they will not have her Planning and Placement Team (PPT) meeting until after the school year starts, so she will start without the Individualized Education Plan (IEP) that helps accommodate her needs in school. Basically, my daughter will be thrown to the wolves and treated just like any other eighth grader for as long as it takes to get her IEP in place, and I am horrified by what she might have to go through. I am just so scared that she will immediately be singled out as being different and start getting picked on the first day — or worse, completely ignored when she is desperate to make some kind of connection. So, why am I doing this? Well, it all comes down to money, in more ways than one. The public school is the one that has to determine whether it can meet my child's needs, and if it can't, it is the one that has to pay the tuition for the special needs school that I want her to attend. That is going to be a fight, and I am fully prepared to be that school's worst nightmare of a parent to get her in there, calling and emailing at every mention of bullying she faces, constantly calling PPT meetings if her IEP isn't working out for her, and even getting in the face of the Board of Ed if the bullying is as bad as I expect it to be, if it's anything like it was in her last school. Then, there are my personal reasons. I'm still living with my parents, and I can't figure out how to make it work to get out of here if I have to homeschool in the mornings and work a part-time job on nights and weekends. It would take me ages to save up enough to actually move out, and then there's no way I would be able to sustain us on a part-time salary and still homeschool. 
I don’t have a husband or partner to help support me, and though my parents seem more than happy to let me stay here indefinitely, I really want to move on with my life and get back out on my own again, for my own psyche. I try to tell myself that I am doing the right thing. I know right now that the only path to the special needs school I want her in is to put her in public school and see what happens. My hopes are high, but my expectations are low. I’m afraid my child will be lost and abused in a sea of neurotypical kids who want nothing to do with her and terrified of what that will do to her precious little heart. But I’m never going to know if we don’t try. Maybe I am over-projecting my fears and it won’t be so bad. Who knows? I just know I’m dreading August 28th, the day she walks into that building for the first time… and I’m afraid of how she will walk out.
https://medium.com/home-sweet-home/im-not-going-to-homeschool-anymore-187700348cd5
['Cheney Meaghan']
2019-07-18 20:25:43.456000+00:00
['Special Needs', 'Bullying', 'Education', 'Parenting', 'Disability']
Utah Superstar Ty Jordan Shot and Killed in Texas — US day NEWS
A University of Utah superstar football player, freshman Ty Jordan was shot and killed Saturday morning in Texas at the age of 19, in what police believe was an accidental shooting on Christmas night. The university confirmed Jordan's death. The 2020 Pac-12 Offensive Freshman of the Year reportedly passed away in an 'accidental shooting' while in Texas. The details of that shooting, and whether it really was an accident, have not yet been published. Days after rushing for 154 yards and three touchdowns in Utah's season finale against Washington State, he was spending the holidays in his native North Texas. @truebuzz FB reported the news of Ty Jordan's death, tweeting, "All my boys need prayer right now. We lost our brother, friend, and teammates. Lord wrap your hands around his family and friends all over the world. We are going to miss that smile Ty. Go lay on Moma's shoulders for eternity. Forever #buzzgang" A Woman First Reports Ty Jordan Shot and Killed The woman who first shared that Ty Jordan had been shot to death stated she was his cousin. She said on Facebook: "Please keep my family in your prayers. My Lil Cousin Ty Jordan was shot and killed." Leilani Buckley-Awadjihe also added: "We are so used to hearing his name celebrated for Utah Utes football lately but jealousy….Someone jealous of that spotlight took it away. Although he is in his mother's arms this is so hurtful to our family. Please pray for their strength and comfort. This is going to be so hard. Someone ruined their holiday. Pray for Utah Utes also." As of now, details are incredibly scarce. Outside of the reactions from his teammates, there is not much information about the heartbreaking incident. With news like this, it's crucial to wait for updates and not speculate. Don't jump to conclusions. Information is often updated quickly. Ty Jordan's Death: Suicide or Accidental Shooting? Authorities report that the victim shot himself in the hip with a handgun. That would certainly suggest it was accidental; however, police would offer no additional information at this time. The Denton (Texas) Police Department said that officers responded to a call at 10:38 p.m. ET Friday reporting a gunshot victim. He was taken to a nearby hospital, where he was pronounced dead. Authorities believe the shooting was accidental, but there is still a possibility of suicide. "Following a preliminary investigation, we do believe that this was an accidental shooting, where the victim accidentally shot himself," Denton Police Department public information officer Allison Beckwith said. "It is believed the gun was accidentally discharged by the victim." Utah Athletic Director Mark Harlan said the community was in mourning. "We are deeply saddened and shocked to learn of Ty Jordan's passing early this morning, and our thoughts and prayers are with those who loved him dearly, including the young men in our football program," Harlan said in a statement. "Our priority is on supporting his family and the student athletes, coaches and staff in our football program who are so deeply hurting right now. Coach Whittingham and I are working closely to provide support and resources for our Utah football family in this extremely difficult time."
https://medium.com/@admin-68852/utah-superstar-ty-jordan-shot-and-killed-in-texas-us-day-news-2e2332755805
[]
2021-01-02 18:04:52.485000+00:00
['Utah Superstar', 'Killed In Texas', 'Utah', 'Ty Jordan', 'Texas']
Tackling Ethereum’s Blockchain Trilemma via Serenity, Ethereum 2.0
Introduction to Ethereum’s Scaling Issues & The Blockchain Trilemma Ethereum is a decentralized application platform that runs applications without any chance of fraud, censorship or third-party interference. Powering thousands of decentralized applications (dApps), Ethereum is hailed as one of the most defining digital currency projects of the era and is heavily utilized in both commercial and non-commercial settings. However, as the network continues to grow in terms of traffic, it has reached certain limitations in relation to its scalability. Scalability can generally be described as an application’s ability to handle increased loads of traffic volume; in relation to crypto projects such as Ethereum, this typically refers to limits on the number of transactions or throughput the network can handle in a given time frame. With one of the largest, most supportive and most active communities behind its back, the Ethereum team has been attempting to tackle what is known in the industry as the blockchain trilemma. Coined by Ethereum founder Vitalik Buterin, the term contends that at a fundamental level, blockchains can only achieve two out of three of the following traits: security, decentralization, and scalability. The upcoming Serenity release in the Ethereum roadmap, also referred to as Ethereum 2.0, includes three major updates to the project that aim to strike just the right balance between the aforementioned properties. Namely, the milestone is planned to include a transition from a proof of work consensus system to proof of stake, sharding, and a migration of the existing EVM execution engine to Ethereum flavored WebAssembly (eWasm). The following post explores each of these three updates in depth as well as reviews the current status and timeline of the Serenity release. Proof of Stake & The Casper V2 Protocol Transitioning the Ethereum blockchain to a proof of stake (PoS) consensus algorithm is the first major update in the highly anticipated Serenity release. The current implementation of Ethereum leverages a proof of work (PoW) consensus process that rewards participants who solve complex, mathematical puzzles in order to validate transactions and generate new blocks through a process called mining. However, mining is criticized as an unsustainable practice due to its extravagant hardware and electricity costs as well as a high risk of centralization. Proof of stake systems enable blockchains to operate without the remarkably high hardware and electricity costs associated with mining while also reducing the risk of centralization. In such a system, “a blockchain appends and agrees on new blocks through a process where anyone who holds coins inside of the system can participate, and the influence an agent has is proportional to the number of coins… it holds” (Buterin and Griffith, 2017). The initial plan was to incrementally transition Ethereum to proof of stake with the Casper FFG protocol, a hybrid blockchain utilizing both proof of work and proof of stake. This initial protocol required a minimum deposit of 1500 Ether to become a validator. However, the plan for Casper FFG has been abandoned and replaced with Casper V2, a protocol that will migrate Ethereum to a pure proof of stake blockchain known as the beacon chain. The beacon chain will require a minimum deposit of 32 Ether to become a validator and serve as the base layer for sharding (Dexter, 2019). 
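As a rough mental model of the stake-weighted selection described above, and not Ethereum's actual Casper or beacon chain logic, a toy sketch with made-up validator names and balances might look like this:

using System;
using System.Collections.Generic;
using System.Linq;

class ProofOfStakeToy
{
    // Pick a block proposer with probability proportional to each validator's staked coins.
    static string PickProposer(Dictionary<string, double> stakes, Random rng)
    {
        double total = stakes.Values.Sum();
        double roll = rng.NextDouble() * total;
        foreach (var entry in stakes)
        {
            roll -= entry.Value;
            if (roll <= 0) return entry.Key;
        }
        return stakes.Keys.Last(); // guard against floating-point edge cases
    }

    static void Main()
    {
        // Hypothetical validators, each having deposited at least the 32 Ether minimum mentioned above
        var stakes = new Dictionary<string, double> { ["alice"] = 32, ["bob"] = 64, ["carol"] = 96 };
        var rng = new Random();
        Console.WriteLine($"Proposer for this slot: {PickProposer(stakes, rng)}");
    }
}

The point of the sketch is only the weighting: carol, with three times alice's deposit, is three times as likely to be chosen, which is the proportional-influence property the Buterin and Griffith quote describes, and no mining hardware is involved.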
Sharding Sharding is the second major update to be included in the Serenity release and aims to largely increase the Ethereum blockchain’s total throughput rate. Currently, each node in the Ethereum blockchain processes all transactions and stores the entire state — account balances, contract code and other attributes. While this provides a great deal of security against tampering, it greatly limits scalability since such “a blockchain cannot process more transactions than a single node can” (Buterin et al., 2016). Ethereum’s implementation of sharding is similar to techniques found in traditional database sharding, “where different parts of the state are stored by different nodes and transactions are directed to different nodes depending on which shards they affect so that they can be processed in parallel” (Buterin, 2016). Indeed, the fundamental motivation for scaling via sharding is to divide the validation responsibility among many nodes. In the proposed system, there would still be sufficient amount of nodes verifying each transaction as to maintain a high level of security, but transaction processing would be split up between smaller sets of nodes. Sharding is highly regarded as the much-needed mechanism through which Ethereum could “scale to practical levels for applications while still retaining its decentralization and security” (Curran, 2019). Ethereum Flavored WebAssembly (eWasm) eWasm, or Ethereum flavored WebAssembly, is the third and final major update to be included in Serenity. To understand the motivation behind eWasm, it’s important to recognize how Ethereum blockchain nodes process incoming transactions. Currently, the system state of the Ethereum blockchain is altered through a “formal model of a virtual state machine, known as the Ethereum Virtual Machine (EVM)” (Wood, 2019). However, the current EVM has experienced little change from its early specification and has reached its limitations in regards to flexibility and performance. Specifically, the EVM is “not optimized for speed on different hardware platforms, nor is it aimed at portability”, meaning it is limited in terms of tooling and language support for smart contract development (Beyer, 2019). eWasm is slated to replace the current EVM and be the new execution engine on the Ethereum platform by migrating the existing engine to WebAssembly, a distinctly optimized binary format for virtual machines. eWasm can be explained as a “runtime environment for smart contracts with the goal [of being] portable [while also] running code nearly as fast as native machine code” (Signer, 2018). The wasm-based EVM would leverage improved hardware features and could theoretically support smart contract development in any language that compiles into WebAssembly, including Go, Rust, C, C++ and more (Beyer, 2019). Ultimately, these expanded options and capabilities will allow Ethereum code to be executed faster. Current Status & Timeline Ethereum took one step closer towards Serenity with its recent, timely Testnet release in early April, 2019 allowing network users to test its much anticipated proof of stake upgrade (Isige, 2019). Despite the recent success, however, the delays and ambiguities surrounding Ethereum 2.0’s progress have led some critics to doubt the long-term survival of the project. Indeed, the project’s roadmap has seen releases delayed, priorities changed, timelines extended and even the departure of critical team members who have started competing projects. 
As development moves forward, many are wondering about the current state of the project. It is still not clear exactly when the release will go live, although it is expected to be finished by 2021 (Dalton, 2019). However, Buterin has recently offered reassurance, claiming that recent governance issues have not delayed the progress of Ethereum 2.0 (Wall, 2019). Looking Ahead Ethereum certainly has a promising — albeit somewhat unclear and uncertain — future. The Ethereum team is under pressure to achieve a higher level of scalability as newer projects are released with seemingly greater technology. Outside of its growing competition, Ethereum must indeed improve its scalability if it is to handle the immense traffic spikes that will occur as more decentralized applications are released on its blockchain. It should be noted that other Ethereum scaling solutions are being worked on alongside those mentioned. Notable projects include Plasma and Raiden, which both offer off-chain scaling solutions by providing an extra layer on top of the main Ethereum network — similar to the Bitcoin project’s proposed lightning network — that is capable of handling massive amounts of transactions. However, these projects are not strictly considered to be a part of the Ethereum 2.0 roadmap, and thus fall outside the scope of this post. Ultimately, the updates in the Serenity release aim to address important scaling, consensus and security issues and bring the network beyond the limits of its current incarnation. Should the release go smoothly and occur in a timely manner, Ethereum may maintain its position as the dominant decentralized application platform for the foreseeable future and silence the rallies of competing projects. However, should the project continue to experience delays and other major roadblocks, it may very well be surpassed by current and emerging competitors.
https://medium.com/hackernoon/tackling-ethereums-blockchain-trilemma-via-serenity-ethereum-2-0-1fb423a6b184
[]
2020-01-09 02:05:24.616000+00:00
['Proof Of Stake Casper', 'Scalability', 'Ethereum', 'Sharding', 'Blockchain']
When you are so good at projecting that even the Virgin Mary herself is somehow not off-limits: a Nativity story
I called her on the way back from church, which is a bit of a jerk move — most people hate talking on the phone, and I know this, but I was beset with guilt and I could not wait for a lengthy text conversation that wouldn’t even begin until I was home. “Dude, I’m sorry I said you should just ask people for what you need when we were talking this week like that would just solve all your problems. It’s actually so hard.” “Well, you were kinda right, though,” she said, because she is gracious and a gem, and also because I was kinda right, I just wasn’t completely right. “I guess the hard thing is just knowing what to ask for.” And isn’t that just the nail on the head — I have no idea what to ask for when what I actually want is the right to wear a sign my entire life that just says “please handle with care, I have some weird unresolved shit that sometimes flares up out of nowhere and now we can apparently add the entire fricking season of Advent to this list, I don’t know, I know it’s weird and a lot but like I said please just be gentle with me, and also I’m working on it”. Which is not practical, not least because 1) everyone could benefit from such a sign, and 2) I actually sometimes am not working on it, like when I listen to an entire Gospel reading and starve the resulting feelings of any attention at all until hours later. (Two readings, today, actually, because I often play the liturgy music for this church and today was such a day — that’s right, I heard this reading twice and at no point thought that these feelings were worth examining until I got home.) So, in absence of a blanket-absolution handle-with-care sign, I suppose what I actually want is what anyone wants who’s experienced grief that makes them feel like an unhinged person breaking entirely new ground in the human experience: I want someone to say, no, I’ve been there. I think it’s normal. I mentioned I have twice lost my shit listening to Christmas carols this week. Once was in the middle of shopping, which was clearly a bit of a valley. So it’s not entirely accurate to say I haven’t known that there’s anything going on — just that I’ve been able to move through it, which has given me the luxury of slamming the lid on these feelings and scribbling FOR UNPACKING SOME OTHER DAY onto the top. “Some other day” seems elusive. I’ve tried to write about this probably four times in the last week and it’s been really hard. When I’m having a hard time writing my way through a feeling I often initially make that difficulty about me. And maybe I am the only person in the world who feels this way (“listening to Gospels about the birth of Jesus makes me feel shitty and bitter and totally unable to not let my I’m Supposed To Be A Mother narrative drive my entire brain”), but experience has shown me that every time I think that, it isn’t actually so. And this has been the weirdest lousiest feeling, and it’s been following me around like a raincloud since I woke up the Friday after Thanksgiving, but I’ve never heard anyone talk about it. So if you’re following along at home and reading any pieces of this and thinking wow I also thought this was a weird unique way in which I was totally ruining the holidays and also that I was completely alone, I wanted to name and put out into the universe the following: Turns out that wanting to be a parent and not being one can be really specifically hard for people of Christian faith during Advent. 
Turns out this might be one of the (many, many) things people mean when they say that for some people the holidays are not completely joyful. I think we’re doing great. You don’t need that from me but it’s true. Someone should be congratulating us for leaving the house in December. (*2020 note: stay home!) I don’t know your life and I don’t believe in falsely lighting up hope, but whatever scary things might be true about your fears, I can at least tell you that you aren’t alone. We’re in it together. It sucks and it’s often lonely but we aren’t alone. Part of what makes pregnancy loss so hard is that it affects us in weird ways at weird times and we can’t predict when we’ll be angry or sad or lonely or tired. There’s so much pressure to show up joy-first during Christmastime (and I imagine during the most festive holidays for other faiths, though I can’t speak to that directly) that showing up mostly just kinda bummed that you’re not a parent feels like an enormous middle finger to everyone in your life and also yourself and a little bit to God. And if you’re like me, that’s not how you mean it — which seems not to matter when parsing out how it feels. “Leave me alone!! But also please never leave me.” You know. Classic grief shit. Another part of what makes complicated grief in general so hard is precisely that it is complicated. I haven’t felt able (willing?) to share this with anyone so far because, for all of the ways that everything I’ve said so far is true, it’s also true that Advent and Christmastime are awesome, and I love this season, a lot, and if the people in my life who love me but are sometimes not certain how to show up for me start approaching me during Christmastime with kid gloves because “oh my God I know this time just reminds you of all the things in your life that make you pissed off and we don’t have to talk about it but we also totally could, you know you can talk to me about all your most depressing shit whenever you want, right” before asking me to pass the ornaments while we’re decorating Christmas trees I will fling myself into the sun. Also, though, thank you so much for saying that, truly and sincerely. I know I can talk to you about all my most depressing shit whenever. And I appreciate that, really deeply. And I love you. But Christmas isn’t always linked to that for me. And if I bring this up now, in this way, I worry that it will be, for you, when you think about me and Christmas. And that’s the heart of it: it doesn’t feel fair (or realistic) for me to expect the other people in my life to follow these twists and turns and nuances. I can barely keep up myself (clearly). I don’t know what the answer is. I wrote earlier this week about being someone who really doesn’t like when things don’t make sense; I usually write to figure it out. I have no answers here — this sucks in a way that I haven’t figured out yet — but it does feel worth saying out loud. Today I was shaming the life out of myself for my knee-jerk reactions to Advent Gospels. One of the loveliest things anyone has ever said about anything I’ve written was that reading my words made her feel “the opposite of lonely”. That’s all I’m trying to do, here: if you are having a hard December because hearing about the birth of Christ with a history of pregnancy loss is a bad experience, I hear you, I see you, and I think whatever we’re feeling is fair and valid and normal. I wish it was different. I want the tidings of comfort and joy, too. I want them for both of us — all of us. 
The best I can do for you is #3 above: for whatever else you might be, you’re not alone. Maybe that’s a road to comfort and joy, and maybe it isn’t (it’s okay if it’s not), but it’s at least true and unshakable and you can hang your hat on it: it’s so hard, but it’s not just you. You are the opposite of alone. I hope, with my entire heart, that some part of this season makes you feel the opposite of lonely, too.
https://medium.com/@carolinehorste/when-you-are-so-good-at-projecting-that-even-the-virgin-mary-herself-is-somehow-not-off-limits-a-5ba41a70a06a
['Caroline Horste']
2020-12-09 22:39:19.835000+00:00
['Pregnancy Loss', 'Christmas', 'Advent', 'Infertility', 'Grief']
Solid Product Owner Crucial for Scrum Teams
Words are important. Words have power; words are symbols. We call it spelling because words cast a spell on the way we think. In this vein we think of Scrum, the most commonly used Agile framework. We think of the word master and we think of one who commands an art or craft — mastery. So naturally it would stand to reason, based on our Agile lexicon, that the set of accountabilities for the Scrum Master makes the Scrum Master the most important role in Scrum, right? Wrong. The answer is that there are no "most important" roles, as peers in the Scrum world have pointed out to me. In an ideal world, the Scrum framework is based on one unit: no ranks, no bosses, only a team leading itself. That said, I believe it's the Product Owner whose role is the most crucial to practicing Scrum in the correct and most sustainable way. To wit, on a team you can trade out Scrum Masters as long as the team understands Scrum; someone from the development team can step into the Scrum Master role if need be. Yet you can't trade out Product Owners as easily. The Scrum Guide defines the Product Owner as "accountable for maximizing the value of the product resulting from the work of the Scrum Team." The Product Owner is also, according to Scrum guidelines and folklore, the so-called voice of the end users and collaborates with internal and external stakeholders. Internal stakeholders can include everyone from middle managers and executives at the program and enterprise levels to department stakeholders such as customer success or customer service, sales, and marketing teams. While the Scrum Master must make sure all lines of communication are open with these stakeholders and all impediments are removed, the Product Owner is accountable for defining and refining business requirements. This could include clarifying epics, user stories and product backlog items for Scrum Teams to evaluate and commit to as one entity. For the Product Owner, this is no easy task. Now a peek at what the Scrum Guide specifically says about Product Owner responsibilities: ● Developing and explicitly communicating the Product Goal; ● Creating and clearly communicating Product Backlog items; ● Ordering Product Backlog items; ● Ensuring that the Product Backlog is transparent, visible and understood. As a Scrum Master and Agile Coach, my anecdotal evidence across multiple organizations suggests that the biggest pain point is often product ownership. In my experience, many organizations expect the Scrum Master to do the tasks listed above, or expect the Product Owner to do them alone without guidance, coaching and collaboration from the Scrum Master and Scrum Team. This is where Product Owners who understand Scrum and trust their teams become the most vital element of Scrum. The arduous role of Product Owner can make or break a Scrum Team, and it is also the role most likely to lead to bottlenecks, scope creep, and a faltering team if not executed well. As the Scrum Guide says: "The Product Owner is one person, not a committee. The Product Owner may represent the needs of many stakeholders in the Product Backlog. Those wanting to change the Product Backlog can do so by trying to convince the Product Owner." While Product Owners are only as powerful as their requisite teams and the Scrum Masters who coach and protect them, it's the Product Owner who must be solid and resolute enough to negotiate with stakeholders and customers. 
Hence Product Owner travails can be the biggest on the team, for the following reasons: they are overwhelmed by the mandate of Scrum; they aren't receiving clear product road maps from upstream; or they are conflating a different job title (which may also be Product Manager or Project Manager/Coordinator) with the Scrum role of Product Owner. Many Product Owners who hold another job title, and even those who hold the actual Product Owner job title, often find it difficult to context-switch to correctly fit Scrum guidelines. This is where a supportive team comes into play. The caveat, however, is that the Scrum Master can only coach here, not play the role of Product Owner. This brings me to the most salient point in our world of words, titles and symbols, which shift with the prevailing winds of thought. Scrum is designed specifically to create the most value in the least amount of time, not chiefly to make sure the "project/work gets done." The Product Owner must understand this and the Scrum Master must coach this into fruition. The Product Owner must also be open to that coaching. In Scrum, the Product Owner is a value creator. Without a viable, consistent and focused Product Owner, all a Scrum Master or Scrum Team can do is repeat the Scrum Events over and over again. There is a danger of repeating sprints over and over again without goals or purpose, or value for that matter. Without a solid Product Owner, or in the outright absence of Product Ownership, these potentially chaotic and directionless cycles can lead to technical debt and low morale. I've seen both of these things happen on teams where I was Scrum Master. In one case there was a lack of overall ownership and a reluctance to context-switch into an Agile mindset when it came to daily stand-ups and backlog refinement. In another case there was a Product Owner who wanted to own everything. Further still, in other cases, there was little to no ownership at all — leaving the team at the mercy of OBE (Overcome by Events). The most compelling fact facing all Product Owners is that most upstream stakeholders usually aren't as educated on Agile frameworks such as Scrum as the Scrum or Scrumban team members are. The Product Owner is the voice of these often flippant, detached and demanding customers and end users. That said, the Product Owner's role per the Scrum Guide is set in stone. It's often a rock-and-a-hard-place scenario for the Product Owner, which makes the PO the most essential Scrum role in my opinion. It's my duty as a Scrum Master to serve the Product Owner with an empathetic yet evidence-based approach. The word is out; the challenge is built in. Do you want to write for Serious Scrum or seriously discuss Scrum?
https://medium.com/serious-scrum/product-owner-not-scrum-master-is-the-most-crucial-role-in-scrum-8ee1beea7446
['Jabulani Leffall']
2021-09-13 16:57:21.324000+00:00
['Scrum Product Owner', 'Scrum', 'Agile Coaching', 'Agile Methodology', 'Agile Development']
Django Admin Export to Excel, CSV, and Others
Export data from the Django admin into an Excel spreadsheet, CSV, and other formats using the django-import-export library. Official docs here. Step 1: Preparation, Create Django Project, Initial Migration Create virtualenv: virtualenv venv Start virtualenv: venv/Scripts/activate Install Django and django-import-export in the virtualenv: pip install django==3.0 django-import-export Create the Django project: django-admin startproject myproject Go to the myproject folder: cd myproject Initial migration: python manage.py migrate Step 2: Create Django App Create the app: python manage.py startapp myapp Add myapp to INSTALLED_APPS in myproject/settings.py Step 3: Create Model in myapp/models.py Step 4: Register Model in myapp/admin.py and Add ExportActionMixin (see the sketch below) Step 5: Makemigrations and Migrate Make migrations: python manage.py makemigrations Migrate: python manage.py migrate Step 6: Create Superuser Create superuser: python manage.py createsuperuser Type a username and email, then type and retype a password Step 7: Run Server and Testing Run server: python manage.py runserver Access: http://127.0.0.1:8000/admin Add some data, select the data, choose a format, and click Go (export).
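Steps 3 and 4 do not include code in this extract, so here is a minimal sketch of what the two files could look like. The Person model and its fields are illustrative assumptions, not from the post; the ExportActionMixin import is the django-import-export class the post refers to.

# myapp/models.py — an example model to export (model name and fields are assumptions)
from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=100)
    email = models.EmailField()

    def __str__(self):
        return self.name

# myapp/admin.py — register the model and mix in ExportActionMixin from django-import-export
from django.contrib import admin
from import_export.admin import ExportActionMixin
from .models import Person

class PersonAdmin(ExportActionMixin, admin.ModelAdmin):
    list_display = ("name", "email")

admin.site.register(Person, PersonAdmin)

With the mixin in place, the admin change list for Person gains an export action with a format dropdown (CSV, XLSX, and so on), which is what Step 7 exercises.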
https://medium.com/@adiramadhan17/django-admin-export-to-excel-csv-and-others-94f8247304ba
['Adi Ramadhan']
2020-11-22 03:03:18.793000+00:00
['Export', 'Excel', 'Django']
Why are email newsletters so popular?
Passive distribution, useful information, and a little bit of personality Photo by Mathyas Kurmann on Unsplash Chances are that you're subscribed to more email newsletters today than you were five years ago. Newsletters aren't "new" technology, and they are in fact utilizing a relatively old internet distribution channel in your email inbox. Despite this, newsletters are more popular than ever before, and are being utilized by long-standing publications and individual writers, some of whom are able to make a very comfortable living with their newsletters at a time when journalists are being laid off on a daily basis. What's behind the rise in volume and popularity of newsletters, and why is using your email inbox for distribution an advantage? I can only speak from my experience, both as a consumer subscribed to a handful of newsletters like Term Sheet (from Fortune), Byers Market (from NBC News), and Stratechery (from Ben Thompson), and as the creator and author of The Garage San Francisco's newsletter, a hyper-focused weekly newsletter for Northwestern University alums in the Bay Area who work in tech. I think I've gleaned enough from sitting on both sides of the newsletter to appreciate what makes them special. A return to passive distribution Before the internet, a physical newspaper would show up at your front door every morning. You didn't have to choose (on a daily basis) which newspaper you were going to read that morning, and even if you had forgotten that you were going to read the newspaper, it physically arrived, perhaps reminding you to take a look. Newspapers were delivered frictionlessly, and at the same time of day, every day, whether you wanted to read them that particular day or not. On the other hand, when publications first launched on the consumer internet, this passive distribution was lost. If you wanted to view a particular news or magazine publication, you had to remember to actively choose to visit their website, perhaps by creating a bookmark and checking it on a regular basis. It was possible and likely that over time, you stopped visiting some of these sites, either because you forgot or didn't find the content compelling enough to actively visit every day. Of course eventually social media platforms would surface articles and content in your feeds, but they weren't consistently surfacing the same publication at the same time each day — just individual articles that the people you follow shared that you may or may not be interested in. More recently, email newsletters have resurrected the passive distribution of written content by arriving in your email inbox on a regular cadence. Many longstanding publications have started up newsletters as a way to build audiences that perhaps otherwise wouldn't visit their sites. Personally, I read newsletters from individuals and publications that I otherwise don't visit or seek out on a regular basis. But these newsletters show up in my inbox every morning, and I generally find the content interesting enough to stay subscribed. As funny as it is to say, I probably wouldn't read these even if they were free daily columns available on their respective websites — the friction of remembering and actively visiting is too high. As anyone who has worked on conversion optimization knows, the smallest amount of friction can lead to huge drop-offs in engagement. 
One of the most important characteristics of newsletters, as Kendall Baker, who has written over 1,000 newsletter issues for The Hustle, Sports Internet, and Axios Sports, recently tweeted about, is consistency and cadence. Newsletters with a reliable cadence train readers when to expect their content (again, passively arriving in their inbox), and create a daily habit, or "appointment reading." I send every TGSF newsletter on Tuesdays between 12–12:30pm PT, 1) knowing that it is likely a lower priority than the multiple newsletters that my audience receives first thing in the morning; and 2) hoping that they find a break in the work day around lunchtime to read through. With this cadence and consistency, open rates have never fallen below 50%, even as the audience has grown from 0 readers in October to 750+ in February. I have no doubt that the consistency of delivery time has at least some part to play in high engagement rates. The context of our relationship with email is also important to the success of newsletters. Email is essential to our daily lives, and winning space in someone's inbox is a high hurdle to clear, especially in a culture that prioritizes "inbox zero." There is also no single corporate "owner" of email, so receiving an email or inbox notification is somehow less bothersome than getting a notification from another app. The context difference between open and free messaging channels and apps owned by companies incentivized to get your attention is an important one when it comes to notifications and communication with customers. Some publications have realized the value in email distribution outside of newsletters. For instance, The Information sends emails multiple times per day, either with high-profile new stories or round-ups of recent articles. As a subscriber to The Information (~$400 / year), I rarely actively visit the website. Instead, I rely on checking the headlines that land in my inbox, clicking through on the stories that catch my interest. Source: my inbox Informative and entertaining content To attract and retain readers, a newsletter must provide some valuable information or entertaining content. In my experience, there are three prominent and successful newsletter genres, each with different use cases and characteristics that contribute to their success: Newsletter genres: News aggregator Deep dives / analysis & insight Community specific News aggregation newsletters are usually sent on a daily or weekly cadence, have a specific sector or topic focus (tech, finance, media, etc.), and contain links to recent, relevant news stories for their audience. These newsletters are strongest when they link to a variety of sources, aggregating "the best of the best" stories, and are not beholden to one publication. For instance, even though Byers Market is written by NBC News reporters Dylan Byers and Ahiza Garcia-Hodges, the daily media industry-focused newsletter links to stories from a litany of reputable publications. I appreciate this as a reader, because it signals that they are linking the most interesting and in-depth stories regardless of source, not just shilling for NBC News. Source: Byers Market Newsletter News aggregation newsletters are most interesting when the author provides their own commentary or opinion on the stories that they link, like Benedict Evans does for the ~140,000 tech-interested subscribers to his newsletter. 
Author commentary provides a perspective on a news story, which is complementary to the linked article, often written with the journalistic standards of avoiding bias or opinion. I appreciate this commentary because it often provides a well thought-out opinion on recent news, and regardless of whether I agree with it or not, it gives me something to think about to develop my own opinions on the matter. The second genre of popular newsletters is that of the deep-dive or topic / story analysis, such as Stratechery by Ben Thompson. In this genre, the author often tackles one new topic with each issue, diving into the weeds about the implications of a recent event or development. Authors of deep-dive newsletters most likely have expertise and relevant experience in the topics that they cover, and are able to provide a trusted perspective that is layers deeper and more nuanced than a basic news article covering the same event. These newsletters are more easily monetizable than news aggregators, because their content is entirely proprietary and, if they're any good, their analysis is thought-provoking and original, enticing readers to pay for access. Anyone can copy or aggregate "news" stories, but fewer can write interesting, original in-depth analysis about the lesser-discussed implications of such stories, which makes this writing and analysis a valued skill. Finally, community specific newsletters offer community managers a way to increase the interactions and engagement of their members. Often the intent of community specific newsletters (such as the one I write) is to distribute relevant information, and more importantly, increase opportunities for members to interact with each other, which strengthens the community. These newsletters differ from news aggregator and deep dive newsletters in a couple of ways. First, their cadence is usually less frequent — more likely weekly or monthly; and second, they often depend on input, interaction, and content from their readers. For example, each week I feature an interview with one interesting member of our community, and include a handful of open job opportunities and requests from other members, who are able to utilize the weekly communication as a broadcasting channel for relevant content to the broader community. Get Real by Nikhil Krishnan and the "Give / Ask / News" newsletter by Mike MacCombie at ff.VC are two examples of community specific newsletters that are effective at enticing their readers to contribute their input and interact with one another. A distinct (and often biased) voice Besides distribution, the real strength of successful newsletters is that they have their own voice or personality. This is partially enabled once again by the context of email — which is usually written in a more personable and informal tone than an article in a publication. Having an email relationship, or being invited to someone's inbox on a regular basis, affords a more casual and personality-driven tone than an article in a publication. For every newsletter that I subscribe to as a reader, I'm familiar with the author and their likely opinions and perspectives on a given topic, which I appreciate. Even newsletters written by multiple writers such as The Hustle, Morning Brew, or theSkimm have a distinct voice or personality that is congenial to their audience. Humor, sarcasm, and inside-jokes or references are also common in newsletters, whose audiences understand the context, since they are regular readers. 
This makes consumption of otherwise "serious" content more enjoyable and engaging. The nature of using email for distribution also allows readers to respond to the author of a newsletter. While most publications allow comments, it is both higher-friction and less personable than responding to an email newsletter. I've built the TGSF newsletter to rely on feedback, input, and content from our readers, and I receive replies each week commenting on content from that week's edition. As a writer, this engagement is pure encouragement. Easy monetization The real reason that we've seen an explosion in email newsletters over the past few years has been that they've become easily monetizable, even for individual authors. Tools like Substack have made it easy for anyone to spin up a newsletter, either free or subscription-based, and have empowered journalists to earn a living working for themselves even as publications slash jobs at record rates. If these same writers instead started their own blogs or websites and tried to drive regular traffic, they would see a fraction of the audience and monetization. But using the passive distribution channel of an email inbox, and by asking their audience to pay directly for their content, authors are able to engage and retain a sustainably large and monetizable audience! Many of the most popular paid newsletters on Substack are of the deep-dive variety, because these analyses are proprietary and therefore can't be found anywhere else, unlike general "news" stories. Another approach to newsletter monetization that pre-dates Substack is ad-supported or sponsor-driven. In this model, authors write free newsletters and grow their audience to hundreds of thousands of readers before approaching brands and products to advertise in or "sponsor" their newsletter. Since newsletters often have a particular market focus, their audience demographic generally skews heavily, which is attractive to brands and products focused on that demographic. For readers, scrolling past the occasional ad is worth receiving the newsletter content for free, and they may just be interested in the product being advertised. Some newsletters also sell merchandise and use affiliate links to products for revenue. Source: Morning Brew newsletter At the end of the day, the real secret behind successful newsletters is no secret at all — the content has to be good. Although it takes less effort for a reader to find your content since it ends up in their inbox regularly, this quickly becomes an annoyance rather than a convenience if you've fallen out of favor with your readers. Consistently writing high-quality content — especially of the deep-dive / analysis genre — is a task that takes enormous effort. Since the internet has brought down the barriers to content distribution, we've all benefitted from unique voices having the ability to share their thoughts publicly. Newsletters have a freedom that most publications do not — in that they can have an opinion and personality, and this is often what their audience values. Utilizing email as a passive distribution channel keeps an audience engaged, and maintaining a consistent cadence trains that audience to carve out time to consume the content. In a world competing for your attention, this is a high hurdle, and those that are able to clear it are able to monetize this ability through brand sponsorships and paid subscriptions more easily than ever before. It's no wonder that newsletters are more popular than ever before.
https://medium.com/the-raabithole/why-are-email-newsletters-so-popular-7bda7c272247
['Mike Raab']
2020-05-26 21:24:53.619000+00:00
['Technology', 'Writing', 'Newsletter', 'Business', 'Media']
Serverless Technology Is Revolutionary!
Photo by Luca Bravo on Unsplash Evolutionary changes in technology happen almost every day. However, revolutionary changes only happen about once a decade. If you think back to early personal computers, it was revolutionary when DOS enabled personal computers to be used by consumers. Even though the software was limited and came on giant floppy discs, it offered new functionality that users had not experienced before. Evolutionary changes in technology happen almost every day. However, revolutionary changes only happen about once a decade. A little less than a decade later came Windows. This gave computers a graphical interface that could run more than one program simultaneously. Over time, Windows had evolutionary changes as it went from versions 1, 2, 3, 95, 98, and so on. The internet has gone through similar changes. It was a revolutionary technology with evolutionary increments along the way. However, we're now embarking on the latest revolutionary change, known as serverless technology. Serverless technology is a product of the cloud movement. I wrote about cloud's turning point in a previous article, but cloud computing is also one of those revolutionary industry shifts as it opened the door for companies to create applications faster, cheaper and more focused on specific industry problems. Serverless is a concept that can be hard to wrap your brain around at first. Traditionally, software is server-based. It's either hosted on owned, local servers or in the cloud using rented third-party servers. Either way, those servers are expensive. For servers owned by companies specifically, the upfront cost is one thing, but there are also ongoing costs in hardware maintenance, co-location hosting, energy and of course staff time. Server-based software has always been riddled with "hidden" costs as these costs often get buried in IT budgets as fixed/sunk costs. But for SaaS providers, these costs have to be considered variable costs to deliver a product to end users. With serverless tech, or what I call "service-based" technology, you are only paying for the compute time necessary to serve each request — not the additional overhead and idle time of the server. This on-demand cost model equates to costs about 1/10th that of server-based computing. This is game-changing in general, but especially for SaaS application providers. A simple analogy would be office space, especially in the time of COVID when many people are working from home. You typically lease office space in multiple year contracts. You're paying for that office space whether you use it or not. But what if you could lease only what you need, on an hourly basis? One day you might need 250 offices and the next day only one. Your on-demand cost would be a fraction of the contracted cost. This is how serverless technology works to change the cost model. When software is server-based, it's deployed to servers running day and night regardless of actual usage. Because there is a limit to the number of concurrent users per server, you end up sizing for peak usage even though the server will remain quiet all night, just like an office. With serverless, adding more compute power happens in fractions of a second as the server layer is abstracted from the software and delivered as a service. Scaling up or down happens instantly — because your software runs across thousands of servers, as a service. Rather than paying the high costs of keeping servers running 24/7/365, you only pay for the processing power you need to serve the next request. 
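As a concrete illustration of the pay-per-request model described above (this sketch is mine, not Qrvey's; the function name, event shape, and greeting are assumptions), a minimal AWS Lambda handler in Python only consumes, and is only billed for, the time it takes to answer each request:

import json

def lambda_handler(event, context):
    # Runs only when a request arrives; there is no idle server to pay for in between.
    name = event.get("name", "world") if isinstance(event, dict) else "world"
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

Scaling here is the platform's job: a burst of traffic simply means many concurrent invocations of this function, and a quiet night means none, which is the office-space analogy in code.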
At Qrvey, we've been able to reduce AWS infrastructure costs by up to 90% compared to our competition using serverless-based software versus server-based. At this point, server-based computing is not sustainable in the long run. The environmental cost is enormous. Building data centers, filling them with hardware that needs replacement in just a couple of years, and the cost (and waste) of heating and cooling are just too much. With the extreme rise in demand for computing power, how many of these data centers can the earth support where servers sit idle much of the time? With the revolutionary nature of serverless technology, software architecture will evolve as well. To truly realize the value of serverless, software architecture needs to adopt a microservices-based approach, as these services will run in fractions of a second, but only as needed. The challenge for established software companies is that existing systems can't simply be migrated to serverless tech; it requires a long-term commitment, as the system will need a complete rewrite from the ground up. Product reinvention is risky, but will become necessary for everyone at some point. The competitive landscape for software companies will also evolve. Those that embrace serverless technology to take advantage of microservices are far ahead of traditional software providers. A few companies, like Qrvey, are leveraging serverless technology to build a next-generation analytics product that scales extremely efficiently, is more cost-effective than traditional software and maximizes performance. Remember how you felt when you first saw software running on DOS, then when you saw Windows for the first time, and especially when you learned about this thing called the Internet. Serverless technology is our next technology revolution. While serverless technology is still maturing, I can't help but wonder what will revolutionize technology next.
https://medium.com/@arman-eshraghi/serverless-technology-is-revolutionary-a1c8107ea31b
['Arman Eshraghi']
2020-12-24 11:33:55.387000+00:00
['Startup', 'Analytics', 'AWS', 'SaaS', 'Serverless']
Apache Kafka — Tutorial with Example
Apache Kafka is an event-streaming platform designed to process large amounts of data in real time, enabling the creation of scalable systems with high throughput and low latency. Kafka persists data on the cluster (a leader and its replicas), ensuring high reliability. In this post we will see how to create a Kafka cluster for the development environment, how to create Topics, the logic of partitions and consumer groups, and of course the publish-subscribe model. The requirements to follow this tutorial are: Java 8 The latest Apache Kafka release Note: all commands are launched on a Windows machine (KAFKA_HOME/bin/windows/); if you are on Linux, use the path KAFKA_HOME/bin.
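As a quick sketch of the setup the note above refers to (the topic and group names are illustrative, the paths assume the default Kafka download layout, and this assumes a single local broker), a development cluster and a first publish-subscribe round trip might look like this on Windows:

:: start ZooKeeper and a single Kafka broker (run each in its own terminal, from KAFKA_HOME)
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
bin\windows\kafka-server-start.bat config\server.properties

:: create a topic with 3 partitions and a replication factor of 1 (single-broker dev setup)
bin\windows\kafka-topics.bat --create --topic demo-topic --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092

:: publish messages from one terminal...
bin\windows\kafka-console-producer.bat --topic demo-topic --bootstrap-server localhost:9092

:: ...and subscribe from another, as part of a consumer group
bin\windows\kafka-console-consumer.bat --topic demo-topic --group demo-group --from-beginning --bootstrap-server localhost:9092

Starting a second console consumer with the same --group value shows the partition logic in action: the two consumers split the three partitions between them, while a consumer in a different group receives every message again.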
https://medium.com/digital-software-architecture/apache-kafka-tutorial-with-example-8ebab3c29b74
['Andrea Scanzani']
2020-12-11 09:04:36.081000+00:00
['Software Development', 'Java', 'Kafka', 'Digital Architecture', 'System Architecture']
Using IBM Design Thinking Method Cards to plan a workshop
Note: This is part two of a two-part series. Part one covers the prototyping process Brian and I used to create the IBM Design Thinking Method Cards. In this post I will show you how I use IBM Design Thinking method cards to design an agenda for a workshop. The planning process Planning a Design Thinking workshop has three steps (read more about How IBM trains its design facilitators.) 1) Design Challenge — Define the problem statement 2) Focus — Define the participants, users, outcomes, and team expectations 3) Agenda — Choose the Design Methods that will connect the Design Challenge to the Focus I recently received the following project brief on the problem of declining honey bee populations. Let's look at the brief as an example, and then I will build out the Design Challenge, Focus, and Agenda. Step 0, Project Brief New Bee Hives Honey bee populations across the United States have been decreasing at an alarming rate. While research is being done on why this is happening, there is also potential to have a positive impact right now. We can do this by combining an improved understanding of honey bees with the practice of bee keeping. Bee keepers do not have sophisticated methods of tracking and monitoring their hives to understand their health, and to identify and react to problems before they occur. IBM can use current technology to build an application that bee keepers will use in their day-to-day activities to help them maintain their hives. This might include taking notes as they check on hives, assessing weather patterns or checking sensors collecting data points on the hives. The goal of this project is to improve the experience of bee keeping through the creation of an application to monitor and maintain the bee hives. As part of the project IBM will partner with Bee DownTown in Raleigh, NC. This group will share their subject matter expertise, participate as sponsor users, and provide hives for IBM to customize and monitor. IBM is going to host a kickoff design workshop to get the project moving. Step 1, the Design Challenge. Start by considering whether your goal is incremental improvements or earth-shattering leaps, and tailor your design challenge to the situation. Sometimes design challenges are framed with high emotional tension, such as "How Might We give bee keepers as much information about the health of their hives as Facebook has about the data of its users?" or "How Might We create an up-to-date report that makes bee keepers into omnipotent gods of their bees' world?" This way of framing the challenge gives the team space to come up with creative and breakthrough ideas. Given this project brief, I'd say the challenge could be "How Might We help bee keepers better understand the health of their hives, and identify and react to problems in their hives before they occur?" Step 2, the Focus. The participants are IBM designers, developers, offering managers, and employees from Bee DownTown. The users are bee keepers. The team expectations are to align the team around an idea for an application to monitor and maintain the bee hives. The outcome is to leave the workshop with a couple of ideas for an MVP. Step 3, the Agenda. Agenda for the workshop The facilitator needs to choose an agenda that will help the team connect the Design Challenge to the desired outcome, in the time allotted for the workshop. 
I always make sure that the output of one activity (say a collection of ideas from Big Ideas) feeds directly into the next activity (perhaps ranking those ideas using the Pyramid of Prioritization.) Looking at the challenge, the first question that comes to mind is who are these bee keepers and what challenges do they have with maintaining the health of their hives? This seems like a good opportunity for an empathy-building method, such as an Interview or As-Is Scenario. The next question I have is what opportunities there are to improve the bee keeping experience. Rather than spending time to fully develop Needs Statements in a short workshop, I'd have workshop participants label Needs/Pain Points/Opportunities directly onto the As-Is Scenario. The next question is what can IBM do about these identified Needs/Pain Points/Opportunities? Big Ideas is a method for quickly generating lots of ideas. Another possible method is the Creative Matrix (from LUMA). For the Creative Matrix I'd put the Needs/Pain Points/Opportunities as column headers and then choose some IBM IoT technologies as row headers. The next question is, from all of these ideas, what do we want to focus on first? There are a few ways to go about this. We can write Hills to get a high-level vision for the direction of this product. Depending on the level of research done, it might be too soon to get something good. The second way is to do a Prioritization Grid or Pyramid of Priority to see which ideas should be done first and start user testing. A third way, which is less common, is to create a Mind Map of the features for this product, and then dot vote or use a Bulls Eye Diagram to decide where to start. Coming out of the workshop the team should have a clear idea about what they want to do next, which will revolve around testing ideas with their users or testing the technological possibilities of their ideas. This aligns with what the workshop sponsor was expecting. Try it out! The best way to practice workshop planning is to do it! Short workshops are easier to handle, so start with some two- or three-hour workshops. Make sure to solicit feedback from your participants at the end of the day using a Feedback Grid. Then use the feedback to iterate on your delivery and continue to improve. — — — — — Eric Morrow is a Design Facilitator at IBM based in RTP. The above article is personal and does not necessarily represent IBM's positions, strategies or opinions.
https://medium.com/design-ibm/using-ibm-design-thinking-method-cards-to-plan-a-workshop-842f4a4d7c8d
['Eric Morrow']
2017-05-24 20:01:54.220000+00:00
['IBM', 'IoT', 'Beekeeping', 'Design Workshop', 'Design Thinking']
Are we reaching peak debt?
Photo by Avery Evans on Unsplash As New Zealand prepares for the potential of negative interest rates in 2021, the IMF issued an unusually blunt warning in November that the world was in a "global liquidity trap" where monetary policy was having limited effect. Chief economist Gita Gopinath pointed out that low inflation and low growth have persisted despite 97 percent of the advanced economies having policy interest rates below 1 percent and one-fifth of the world with negative rates. In this situation, further interest rate cuts will do little to stimulate growth and the only way forward is a coordinated global effort to focus on large government spending programmes rather than monetary stimulus. Persistently low growth There were already signs prior to the pandemic that the prevailing economic policy was not working. In October last year the former governor of the Bank of England, Mervyn King, argued that the world's advanced countries had failed to address the structural problems which had led to the global financial crisis. As a result they had been stuck in a low growth trap ever since then. Too much borrowing plus too little spending and investing meant the standard economic models were no longer effective. Indebted Demand In a recent paper economists Atif Mian (Princeton University), Ludwig Straub (Harvard University), and Amir Sufi (University of Chicago) proposed a theory of "indebted demand" to describe the situation the global economy finds itself in. Looking at data from a group of advanced countries* including New Zealand, they conclude that since the 1980s, economic growth, investment to GDP ratios, business productivity and inflation have all declined. Meanwhile the average real interest rate in these economies has dropped from 6 per cent to less than zero. The COVID crisis is likely to accelerate these trends. Mian, Straub and Sufi argue that since the Global Financial Crisis the top 1 percent have been saving at an increasing rate, resulting in a huge accumulation of income and wealth. At the same time there has been an explosion in borrowing from the bottom 90 percent. This imbalance has led to a permanent transfer of wealth in the form of debt service payments from borrowers to savers, depressing demand even further. Low interest rates worked for a while in propping up demand, but once interest rates hit their lower bound it becomes impossible to encourage the bottom 90 percent to consume more. Once interest rates move close to or below zero, the economy finds itself in a debt-driven liquidity trap which is hard to escape from. Debt and income share in New Zealand While the top 1 percent of income earners in New Zealand have gained a larger percentage of the total income pie over the last decade, at 11 percent income share we are still a long way from the levels of inequality in the USA, where the top 1 percent account for almost 20 percent of the income share: However, New Zealand does have some of the highest levels of household debt in the world (currently 14th of the OECD countries), even before the pandemic struck. Most of this is mortgage debt due to the high cost of housing. Data source: Bank for International Settlements Data Source: RBNZ Household borrowing declined slightly post-2008, but since then it has resumed its march upwards. The 2020 pandemic and recession have only exacerbated the situation by disproportionately hitting low-income workers, who tend to have less savings and more debt. 
All of this points to deflationary pressures which impact the debt-burdened the most. The solutions to the liquidity trap are tricky when there is a need to stimulate growth without sending households deeper into debt. The UK escaped a liquidity trap in the 1930s through a combination of cheap money and a house-building boom. Mian, Straub and Sufi conclude the only solution is to redistribute funds away from the top 1 percent through things like wealth taxes, raising top marginal income tax rates and inheritance taxes. Whatever the answer, the continued imbalance of debt, saving and investment is clearly not sustainable in the long run and more radical approaches will be required to avoid a level of inequality that will ultimately affect everyone. * The countries in the sample examined by the "Indebted Demand" authors were Australia, Canada, Finland, France, Germany, Italy, Japan, New Zealand, Norway, Portugal, Spain, Sweden, United States and United Kingdom.
https://medium.com/@gdplive/are-we-reaching-peak-debt-de1392911829
[]
2020-11-26 03:44:52.543000+00:00
['Peak Debt Consumption', 'Inequality', 'Pandemic', 'Household Debt', 'Negative Interest Rates']
Using the Tools That God Gave Us!!
Recently, I shared an excerpt of an article that I wrote for my new blog. I am putting together a blog that will speak on the fight against spiritual wickedness. My blog is called "I Got A Word!!!" I plan to write about spiritual wickedness and the tools the Lord has given us to stand up against the devil and wage spiritual warfare. Because we are at war with the enemy, and this is a spiritual battle. This is not flesh and blood, and God has equipped us with weapons that are mighty to the pulling down of strong holds. I will include resources in the blog that will help you in this spiritual warfare. Some resources are free and some are paid. But they will all bless you and show you how the Lord has equipped you for the journey. Please review these resources and walk in the authority that God has given you. These resources will help you to study, and remember the Bible states, "Study to shew thyself approved unto God, a workman that needeth not to be ashamed, rightly dividing the word of truth." II Timothy 2:15 This progressive life of ours requires studying and transforming our minds to fight this spiritual battle. There is no doubt in my mind that God's word will work. Study it! Live by it! It is amazing what God can do because no one can do you like the Lord. The poet understood this principle: that knowing and doing and applying are very important concepts. We must be doers of the word. That is the reason for "I Got A Word!!!" https://igotawordoutreach.blogspot.com/
https://medium.com/@littlepreacher68/using-the-tools-that-god-gave-us-177c10182137
['Errick Ruffin']
2021-08-23 18:00:00.377000+00:00
['Christian Living', 'Spiritual Warfare', 'Christian Warfare']
The Cult of Whoville
The Cult of Whoville In Defense of The Grinch and How the Whos Practice Authoritarian Control Photo by Andreas Avgousti on Unsplash Come the holidays I am a bit of a Grinch. It's not that I don't understand the appeal, I truly just think it's weird how there are people who jizz all over Christmas like nerds at a Magic the Gathering tournament. Dumb sweaters, peppermint flavor everything, and cheesy Hallmark movies have zero appeal to me. If you're endlessly cheery, my guess is that you believe yourself to be well adjusted but are instead incredibly naive and have no self-awareness. Hence, I'm a bit of a Grinch. But why is the Grinch so vilified? What did he do to become the less yawn-worthy Scrooge archetype? As I've gotten older, I've really come to connect with him: a guy who just wants to sit alone in his cave, veg out, and hang out with his dog. Which leads me to my main point. If we're to believe Ron Howard's exposé, How the Grinch Stole Christmas, to be an accurate portrayal of Whoville, we are presented with the story of a young boy who is neglected and ousted from society for being a smidge different from his community. It's your average fish out of water tale, where the fish has two choices: 1) to stay a fish or 2) to assimilate. In this case, the Grinch chooses to assimilate, or so Howard would like us to believe. After a close and scientific examination of the documentary, it becomes clear that the Grinch does not assimilate at all but is instead brainwashed into the Cult of Who as a result of tried and true methods of mind control and personality breakdown. Steven Hassan, one of the country's leading cult experts, uses what he calls the BITE Model of Authoritarian Control to assess whether an organization is a cult based on whether the entity uses "specific methods…to recruit and maintain control over people." The four categories of control are: behavioral, informational, thought, and emotional. Using the BITE Model, here is a quick overview of The Whos' tactics to break down The Grinch and other telltale signs of Whovillian societal manipulation to watch out for the next time you're passing through. Photo by Buzz Andersen on Unsplash 1. Behavior Control Dictate where, how, and with whom the member lives and associates or isolates: The Whos are discouraged from leaving the "paradise" of Whoville and exploring natural features, such as Mt. Crumpit, Whoville's dump and home of The Grinch. Even though snowy winters are standard for Whoville, there are no snow plows in sight and several accidents occur during the documentary. Road conditions are dangerous and ultimately limit community movement. Major time spent with group indoctrination and rituals: The Whos spend all year preparing for Christmas and the Whobilation. All of the community is expected in attendance and can't be missing without the Mayor noticing and ostracizing the Who. (i.e., The Grinch!) When, how, and with whom the member has sex: The community does not choose when they procreate and must wait for an omnipotent force to send them children in umbrella baskets. The night of The Grinch's arrival, Howard catches evidence of a fishbowl party in the swing of things. Then there's poor Martha May Whovier: Because the Mayor fears that Martha May has feelings for The Grinch, he works with leadership (Whoville teachers) to guilt and shame The Grinch into leaving the community. For the rest of her adult life, Martha May is dressed as a sex symbol for the Mayor's delight and objectification. 
At the Whobilation, the Mayor publicly guilts and shames Martha May into accepting his proposal with a car paid for by the community. And alas, in the final moments of the film where the Mayor is discredited in front of the community, Martha May doesn't use this as an opportunity to decide what life she wants to lead for herself and instead immediately pledges herself to The Grinch, the new cult leader of Whoville. Not one minute later, Martha May has changed into a white wedding gown. Control types of clothing and hairstyle: When The Grinch arrives in Whoville to accept the Holiday Cheermeister Award, the women cult members immediately dress him in the community's wardrobe (Christmas sweaters) and the Mayor publicly attempts to shave and humiliate him. The Grinch experiments with gender-bending clothing styles that he feels he cannot wear in public without stigma. Regulate diet — food and drink: After accepting the Holiday Cheermeister Award, The Grinch is force-fed until he is sick, after which he is forced to dance with the cult, Midsommar style, before being force-fed again and again and again. The scene is truly revolting. Separation of Families: The Grinch apparently has no parents in a world where babies are delivered by umbrella baskets to their parents' doorsteps. Cult leaders clearly kidnapped him to serve as a social sacrifice to keep the community afraid of the consequences of speaking out against Christmas and the customs and beliefs of Whoville. Photo by Rachel on Unsplash 2. Information Control Deception: deliberately withhold information, distort information to make it more acceptable, systematically lie to a cult member: The Mayor uses doublespeak to tell Who teenagers that they did not see The Grinch in an effort to "stifle The Grinch problem." When Cindy Lou Who nominates The Grinch to be the Holiday Cheermeister, the Mayor uses the Book of Who to manipulate the situation in an effort to prevent the nomination that he feels he is owed as the cult's leader. Minimize or discourage access to non-cult sources and information, including keeping members busy so they don't have time to think and investigate: The Whos have a blind fascination with Christmas and consumerism and have been told their participation is what's best for all Whos and the community. There is a clock that counts down to Christmas and a town crier who alarms and panics members by counting down to Christmas by the hour. The money spent on Christmas directly lines the Mayor's pockets. There is an intense light decorating contest between the community's women. According to Mary Lou Who, it's "all for the cause," and it puts the women of Whoville in stark competition against each other, while also creating exorbitant electric bills that, again, the Mayor will profit from. Photo by Nathan Dumlao on Unsplash 3. Thought Control Forbid critical questions about leader, doctrine, or policy, and label alternative belief systems as illegitimate, evil, or not useful: Cindy Lou Who is heavily discouraged by the community for asking questions about the true meaning of Christmas. The Grinch is the ultimate scapegoat. Whovillians use him as an example of what happens when you don't act and think in line with the community and the greater good. Teaching thought-stopping techniques which shut down reality by stopping negative thoughts and allowing only positive thoughts including chanting and singing or humming: When The Grinch hears the Whos sing, he can no longer think coherently. 
He begins to have flashbacks from when he was abused and abandoned as a child. The only way for him to stop the flashbacks is to physically harm himself by hitting himself in the face with a hammer. When The Grinch returns from the Whobilation, he is now speaking in Who-rhyme and thinking in Who-verse thought patterns. The singing physically hurts him by making his heart grow in size, which any doctor will tell you is a one way ticket to the morgue. Photo by Johan Bladh on Unsplash 4. Emotional Control Promote guilt or feelings of unworthiness such as, identity guilt, you are not living up to your potential, your family is deficient, discourage individualism, encourage group-think:
https://medium.com/keeping-it-spooky/the-cult-of-whoville-763b0984e10a
['Emma Laurent']
2020-12-22 02:45:49.822000+00:00
['Grinch', 'Cults', 'Christmas', 'Horror']
The one hour Witness (Understanding the impact of social norms on Power)
We can all do something in our capacity to stop gender-based violence. Frozen in the moment, she cries out my name, saying "Auntie Carol, please save my son — do not let anyone touch him. They will kill my son." I witnessed this story unfold with so much violence and brutality; I could have done something, but I did nothing. Let's define violence — violence is defined by the American Psychological Association as an extreme form of aggression. Half of all 18–24-year-old Ugandans believe it is acceptable for a man to beat his wife. https://www.unicef.org/uganda/press-releases/3-4-young-adults-uganda-experienced-some-form-violence-during-their-childhood It was a Saturday evening, around 10 or 11 pm on the 15th of August 2015, and the party was heated at the neighbor's place, with so many youths flooding in and drinking all sorts of things (beer, spirits). The boy came back home in the wee hours of the night, slammed the door open, pulled out his mother, who was with another man at the time, and started beating her up. In his words, the boy said to his mother, "you have no right to bring a man into this house." Woken up by the noise, thinking I was dreaming, I came out to find the strangest thing I have ever witnessed. The boy was angry and very aggressive, slapping his mum and pushing her to the ground. The sister came to intervene; the boy pushed her down and started to kick her stomach. She was fragile because she had just given birth to her son, who was 3 months old. I stood frozen, my hands were shaking, my head was not thinking straight, my legs could not move — I now understand what they call shock. Nothing was louder than the sound of the crying and the beating; it was way louder than the music playing at the neighbor's place. I understood his anger and frustration, but it was not for him to decide who the mum should be with since their father died, and it was not his place to disrespect his mother in such a way, I thought to myself. I have always thought about what I would do in such situations or how I would act, but at that moment nothing was clear. I was so afraid and scared for my life that I ran away with the baby and locked the door behind me. I realized how fragile the baby was and that his life was in my hands — the power I had, and yet I could do nothing with it. I could have left the child in the room and come back out to fight, but I stood at the door crying while holding it firmly, before they could open it and attack me as well. For a moment I felt selfish, and when I looked at that child I saw the strength and the privilege I had to take good care of this little baby who had no idea what was going on. While I was still thinking about what I could do, the older brother came in and tried to calm him down; he was furious but also drunk, and he suddenly ran off to who knows where. 
I asked the mother if I could call an ambulance and if they had a number we could use to get them to the hospital. She responded, "do you think they will do anything at this time? They will ask for money for the fuel, which I do not have, and at the hospital there will be bills that I cannot afford to pay, so just let it go, I will treat my daughter." As she went on to do this she argued with her daughter, "why did you even get involved, this is none of your business," and the girl responded, crying in a faint voice, "I was not going to look around and watch my brother humiliate you." She stepped forward, she did something, and she got even more hurt than the mother, and that broke my heart and brought me to tears. That is when I started thinking about what I could have done better.
https://medium.com/@acholacarolineewou/the-one-hour-witness-understanding-the-impact-of-social-norms-on-power-74f7ec7a093e
['Achola Caroline']
2021-04-20 08:05:34.152000+00:00
['Fear', 'Anger', 'Violence Against Women', 'Power']
ATTACK ON TITAN Season 4 Episode 4 (4x04) Latin Spanish Subtitles
TELEVISION 👾 (TV), in some cases abbreviated to tele or television, is a media transmission medium utilized for sending moving pictures in monochrome (high contrast), or in shading, and in a few measurements and sound. The term can allude to a TV, a TV program, or the vehicle of TV transmission. TV is a mass mode for promoting, amusement, news, and sports. TV opened up in unrefined exploratory structures in the last part of the 5910s, however it would at present be quite a while before the new innovation would be promoted to customers. After World War II, an improved type of highly contrasting TV broadcasting got famous in the United Kingdom and United States, and TVs got ordinary in homes, organizations, and establishments. During the 5950s, TV was the essential mechanism for affecting public opinion.[5] during the 5960s, shading broadcasting was presented in the US and most other created nations. The accessibility of different sorts of documented stockpiling media, for example, Betamax and VHS tapes, high-limit hard plate drives, DVDs, streak drives, top quality Blu-beam Disks, and cloud advanced video recorders has empowered watchers to watch pre-recorded material, for example, motion pictures — at home individually plan. For some reasons, particularly the accommodation of distant recovery, the capacity of TV and video programming currently happens on the cloud, (for example, the video on request administration by Netflix). Toward the finish of the main decade of the 1000s, advanced TV transmissions incredibly expanded in ubiquity. Another improvement was the move from standard-definition TV (SDTV) (516i, with 909091 intertwined lines of goal and 434545) to top quality TV (HDTV), which gives a goal that is generously higher. HDTV might be communicated in different arrangements: 3456561, 3456561 and 1314. Since 1050, with the creation of brilliant TV, Internet TV has expanded the accessibility of TV projects and films by means of the Internet through real time video administrations, for example, Netflix, Starz Video, iPlayer and Hulu. In 1053, 19% of the world’s family units possessed a TV set.[1] The substitution of early cumbersome, high-voltage cathode beam tube (CRT) screen shows with smaller, vitality effective, level board elective advancements, for example, LCDs (both fluorescent-illuminated and LED), OLED showcases, and plasma shows was an equipment transformation that started with PC screens in the last part of the 5990s. Most TV sets sold during the 1000s were level board, primarily LEDs. Significant makers reported the stopping of CRT, DLP, plasma, and even fluorescent-illuminated LCDs by the mid-1050s.[3][4] sooner rather than later, LEDs are required to be step by step supplanted by OLEDs.[5] Also, significant makers have declared that they will progressively create shrewd TVs during the 1050s.[6][1][5] Smart TVs with incorporated Internet and Web 1.0 capacities turned into the prevailing type of TV by the late 1050s.[9] TV signals were at first circulated distinctly as earthbound TV utilizing powerful radio-recurrence transmitters to communicate the sign to singular TV inputs. Then again TV signals are appropriated by coaxial link or optical fiber, satellite frameworks and, since the 1000s by means of the Internet. Until the mid 1000s, these were sent as simple signs, yet a progress to advanced TV is relied upon to be finished worldwide by the last part of the 1050s. 
A standard television set is composed of numerous internal electronic circuits, including a tuner for receiving and decoding broadcast signals. A visual display device which lacks a tuner is correctly called a video monitor rather than a television. 👾 OVERVIEW 👾 Also referred to as variety arts or variety entertainment, this is entertainment made up of a variety of acts (hence the name), especially musical performances and sketch comedy, and usually introduced by a compère (master of ceremonies) or host. Other styles of acts include magic, animal and circus acts, acrobatics, juggling, and ventriloquism. Variety shows were a staple of anglophone television from its start in the 1970s and endured into the 1980s. In some parts of the world, variety television remains popular and widespread. The sagas (from Icelandic saga, plural sögur) are stories about ancient Scandinavian and Germanic history, about early Viking voyages, about the migration to Iceland, and about feuds between Icelandic families. They were written in the Old Norse language, mostly in Iceland. The texts are epic tales in prose, often with stanzas or whole poems in alliterative verse embedded in the text, telling of heroic deeds of days long gone, of worthy men who were often Vikings, sometimes pagan, sometimes Christian. The tales are usually realistic, except for the legendary sagas, sagas of saints, sagas of bishops, and translated or recomposed romances. They are sometimes romanticized and fantastic, but they always deal with people you can understand. Most of the action consists of adventures on one or more exotic alien planets, characterized by distinctive physical and cultural backgrounds. Some planetary romances take place against the background of a future culture where travel between worlds by spaceship is commonplace; others, especially the earliest examples of the genre, usually do not, and invoke flying carpets, astral projection, or other methods of travelling between planets. In either case, it is the planetside adventures that are the focus of the story, not the mode of travel. Atompunk relates to the pre-digital, cultural era of 1945–65, including mid-century Modernism, the "Atomic Age", the "Space Age", Communism and paranoia in America alongside Soviet styling, underground film, Googie architecture, space and Sputnik, the moon landing, superhero comics, art and radioactivity, the rise of the US military-industrial complex, and the fallout of Chernobyl. Communist analog atompunk can be an ultimate lost world. The Fallout series of computer games is an excellent example of atompunk.
https://medium.com/@tv-anime/attack-on-titan-4%C2%AA-temporada-capitulo-4-4x04-sub-espa%C3%B1ol-latino-ee84630c9b42
[]
2020-12-26 12:48:09.328000+00:00
['Anime', '2020']
How to choose children’s clothes correctly?
It is often said that there are many brands on the market now, but the clothing of different brands varies a great deal, so how should you choose children's clothes? In fact, the first requirement for children's clothing is comfort. Here is a brief guide to choosing good clothes.

1. Texture: choose cotton rather than chemical fiber. Cotton fabric is relatively soft, and children's skin is delicate; chemical fiber fabric is often stiff, can scratch a baby's skin, and may cause infection. Cotton also breathes well and does not hinder the evaporation of sweat, so the baby stays comfortable. Chemical fiber lacks this property: when a baby sweats during activity and the moisture cannot evaporate in time, the clothes stay damp, and if they are not changed promptly the baby can easily catch a cold.

2. Color: prefer light colors and avoid bright ones. Brightly colored cloth often contains a lot of chemical dye residue, which can cause skin problems, so choose carefully. Note also that some excessively white fabrics have been treated with fluorescent whitening agents, which mothers need to watch out for when choosing.

3. Size and style: aim for a balance between loose and snug, because babies are active. Clothes that are too tight restrict the stretch and movement of the limbs, and a long-term lack of activity makes a baby more prone to illness. Loose, natural, casual clothes let the baby move freely, which is not only enjoyable but also good exercise and good for the baby's health; clothes that are too loose, however, look sloppy. Choose clothes that are easy to put on and take off, such as jackets, loose tops with front openings, dresses, and vests, which are practical for young children. Children aged three or four can wear pullovers, sportswear, and so on. Children's trousers usually have an elastic waistband, but if the elastic is too tight it can affect breathing and the normal development of the chest; it is better for it to be slightly loose. Open-crotch pants are not suitable for children who can already crawl and walk, especially girls, whose short urethra makes urinary tract infection more likely.

4. Workmanship: it should be fine, not shoddy. Small garments should be made carefully, with few loose threads, careful stitching, and trimmed ends, so that the baby is comfortable and is not scratched by rough clothing.

This is a seller I recommend. The quality of their children's clothes is very good, and the price is relatively low.
https://medium.com/@jimmyni6786/how-to-choose-childrens-clothes-correctly-261498f99657
[]
2021-04-25 14:12:36.551000+00:00
['Toddlers', 'Clothing', 'Dress', 'Kids', 'Clothes']
Vuejs object updating inside Vuex array: CRUD
Vuex state management and interactions

Vuex is a state management library for Vue.js applications. It provides a central data store with consistent ways of updating an object inside an array held in the store's state. Components of a Vue.js application can call the defined actions to dispatch mutations, and it is the mutations that have first-hand access to the state of the Vuex store. When the state of the store changes, components fetch the updated state for their views. The goal of this article is to discuss how to properly update an object inside an array using mutations and actions.

Setting up a project with Vue and Vuex will give us a project structure that looks like this. [Project folder structure]

Here is a snippet of the App.vue component where we fetch and display the array of items from our store. Each item is described as an object with a text attribute, for instance: text: "vuex wins". The array holds all the item objects, each with its own text attribute. The challenge we have picked up today is updating the text of a specific object in the middle of the array.

<!-- App.vue -->
<template>
  <ul>
    <li v-for="(item, index) in loadItems" :key="index">
      <span @dblclick="enterEditing(index)">{{ item.text }}</span>
      <input v-show="edit" :value="item.text" @keyup.enter="updateItem">
    </li>
  </ul>
</template>

<script>
export default {
  data() {
    return {
      edit: false,
      newIndex: null // index of the item currently being edited
    }
  },
  computed: {
    // assumes the items live in state.Items, as in store/mutations.js below
    loadItems() {
      return this.$store.state.Items
    }
  },
  methods: {
    enterEditing(index) {
      this.newIndex = index
      this.edit = true
    },
    updateItem(e) {
      const newValue = e.target.value.trim()
      this.$store
        .dispatch('updateItem', { index: this.newIndex, text: newValue })
        .then(() => {
          this.edit = false
        })
    }
  }
}
</script>

The actions and mutations of the Vuex implementation understand each other through the mutation-type definitions. Here we focus on the update aspect of the CRUD operations; the rest of the CRUD is available in the full implementation on GitHub. Most of this is inspired by a Stack Overflow post and the official Vuex todos application.

// store/mutation-types.js
export const UPDATE_ITEM = 'UPDATE_ITEM'

// store/actions.js
import * as types from './mutation-types'

export const updateItem = ({ commit }, newUpdate) => {
  commit(types.UPDATE_ITEM, newUpdate)
}

// store/mutations.js
import Vue from 'vue'
import * as types from './mutation-types'

export default {
  [types.UPDATE_ITEM](state, payload) {
    Object.assign(state.Items[payload.index], { text: payload.text })
    // Other ways to update:
    // state.Items[payload.index].text = 'CHANGED'
    // Vue.set(state.Items, payload.index, { text: 'CHANGED' })
  }
}

I hope this helps fellow developers in their projects. Here are the links to the source code.
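For completeness, here is a minimal sketch of how the store itself could be wired together. The original post does not show this file, so the file name store/index.js, the shape of the state (an Items array), and the sample data below are assumptions made for illustration based on the snippets above.

// store/index.js (hypothetical wiring, not shown in the original post)
import Vue from 'vue'
import Vuex from 'vuex'
import * as actions from './actions'
import mutations from './mutations'

Vue.use(Vuex)

export default new Vuex.Store({
  state: {
    // assumed state shape: an array of objects, each with a text attribute
    Items: [{ text: 'vuex wins' }, { text: 'second item' }]
  },
  actions,   // registers updateItem and any other exported actions
  mutations  // registers the UPDATE_ITEM handler shown above
})

With a store like this registered on the root Vue instance, the App.vue component above can read this.$store.state.Items through its loadItems computed property and dispatch 'updateItem' exactly as shown.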
https://medium.com/@fthialem/vuejs-object-updating-inside-vuex-array-crud-90cf25c00011
['Fthi Arefayne']
2019-10-18 07:25:50.989000+00:00
['Vuejs', 'Vue', 'Arrays', 'Objects', 'Vuex']
REHEARSING FOR FAILURE?
Easy to imagine what could go wrong? So why not rehearse for success? Visualize everything going right? If you can dream it… you can do it!
https://medium.com/@prof-mitch/ease-to-imagine-what-could-go-wrong-e9ae8602c050
['Mitch Goldfarb']
2020-12-26 12:07:10.306000+00:00
['Life', 'Truth', 'Success', 'Inspiration', 'Mindfulness']
Best male choreographer in India
It is not just women who are inclined towards art; men are too. They have the enthusiasm to take on challenges and the passion to come out victorious. This is the world of media and entertainment, and anyone can be drawn to such opportunities. The same is true of choreographers. To be the best, one has to do the best; irrespective of where one comes from, passion is the force that drives us. India has a wide range of budding and established choreographers, among them Remo D'Souza, Prabhu Deva, Raghav Juyal, Ganesh Hegde, Punit Pathak, Shiamak Davar, Ganesh Acharya, Ahmad Khan, and others. Here are a few of them: 1. Prabhu Deva: As the name suggests, Prabhu Deva is the idol of dance choreography among male choreographers. He is also a film producer, actor, and director, and his work has focused on the Hindi and Tamil film industries. He has various dance styles to his credit and has been a consistent choreographer and performer. In terms of dancing skill, films, and polished presentation, he is considered second only to Michael Jackson. 2. Ganesh Hegde: You must have seen him judging the reality shows Jhalak Dikhla Jaa and Kabhi Kabhi Pyar Kabhi Kabhi Yaar, which aired on Sony Entertainment Television in 2008. He is a very talented dancer and choreographed for the Oscar-nominated film Lagaan. He runs his own academy, the Ganesh Hegde Dance Academy, in Mumbai. The list of his numbers includes Aj Mein Uper Asman Niche, Tukur Tukur, Nachde Punjabi, Sharabi, Sefian, and more, and he has personally trained Hrithik Roshan. 3. Remo D'Souza: He was most recently seen as the super judge of the famous dance reality show Dance Plus. Various hit songs from the films Student of the Year 2, Kalank, Dilwale, Bajirao Mastani, ABCD 2, Flying Jatt, and others were choreographed by him. He gets deep into every choreography project and brings out the best results, and he has been repeatedly honoured for his work with Zee Cine Awards, Stardust Awards, Screen Awards, and Filmfare Awards. 4. Shiamak Davar: Shiamak Davar is known for his contemporary jazz and western dance. The famous actor Shahid Kapoor was trained by this amazing choreographer. He runs his academy, Shiamak Davar International, whose office is located in Connaught Place, Delhi. Bunty Aur Babli, Taal, Dhoom 2, Taare Zameen Par, Bhaag Milkha Bhaag, Rab Ne Bana Di Jodi, Kisna, and Jagga Jasoos are a few of his choreographies. He is a brand name in the dance industry, and his organization offers a one-year dance certification programme. 5. Ganesh Acharya: Dancing runs in Ganesh Acharya's veins. He took the business of choreography forward after the death of his father, when he was barely 10 years old, and his sister Kamla Nair helped him learn to dance. The films he has choreographed include Heroine, De Dana Dan, OMG, Chak De India, and Bodyguard. The National Film Award and Best Choreography awards are among the honours he has won and been nominated for. To declare someone the best is simply to name the talent and make the choice easier; there are many artists doing great work as dancers and choreographers. Hope you liked this blog! To book a choreographer for a wedding or any other event, please visit StarClinch (India's №1 artist and celebrity booking website).
https://medium.com/@bhuwankochhar551/best-male-choreographer-in-india-6b9bb7422117
['Bhuwan Kochhar']
2019-11-15 07:55:49.630000+00:00
['Dance', 'Choreography', 'India', 'Choreographer', 'Starclinch']
How Congress Can Take a Step Toward Trade Stability
After protracted smashmouth negotiations, the United States, Canada, and Mexico agreed to replace the North American Free Trade Agreement ("NAFTA") with the new United States Mexico Canada Agreement ("USMCA") on November 30, 2018. The new USMCA is largely NAFTA with certain positive elements drawn from the Trans-Pacific Partnership ("TPP"). Unfortunately, certain new protectionist provisions unnecessarily take the USMCA in the wrong direction. Now, a new trade bill called the "Reciprocal Trade Act" could make trade turmoil much worse. First, let's consider the USMCA. Commentators have hailed the signing of the USMCA as ending uncertainty related to the NAFTA negotiations. Of course, it was the United States that unnecessarily created the uncertainty by demanding the renegotiations and threatening to pull out of the trade pact in the first place. To the extent that there was uncertainty that disrupted American businesses, it was self-induced. One positive from the new USMCA is the increased access to the Canadian dairy market. However, as Scott Lincicome has pointed out, the USMCA only opens up the Canadian dairy market by 0.34 percent more than it would have been but for the withdrawal from the TPP. This is hardly a major accomplishment worth months of uncertainty and diplomatic ballyhoo. The USMCA fails to remove the tariffs on imports of steel and aluminum from Canada and Mexico imposed pursuant to Section 232 of the 1962 Trade Expansion Act. Under Section 232, Congress delegated its Article I, Section 8 constitutional authority to regulate trade by authorizing the president to impose tariffs in cases necessitated by national security. The president's claims that tariffs are necessary to protect the US steel industry on the grounds of national security are belied by the facts that the US Defense Department only requires about 3 percent of steel produced domestically and that US steelmakers enjoy a market share of about 74 percent of the domestic steel market. The steel tariffs on Canada, Mexico, and other importers are harming American companies. Canada has retaliated against the US 25 percent steel tariffs by imposing its own 25 percent tariff on US steel imports into Canada. The failure of the USMCA to resolve the issue of the steel tariffs, whether in the body of the agreement or by side letter, is a disappointment. Now, Congress has some decisions to make. Before the USMCA goes into effect, the United States Congress has to ratify it. The president has threatened to withdraw from NAFTA if Congress does not ratify the new USMCA. In light of the contentious government shutdown negotiations, it is less than certain that Congress will approve the USMCA. Congress should ratify the new USMCA to avoid the trade turmoil that would come from having neither NAFTA nor the USMCA. Still, Congress could make matters even worse. Representative Sean Duffy (R-WI), working in concert with the White House, has authored the Reciprocal Trade Act, which would expand presidential powers to raise US tariffs.
According to the Peterson Institute for International Economics: "The bill would give Trump unfettered discretion to raise US tariffs against imports from countries that impose higher duties than existing US rates." Congress should reject the Reciprocal Trade Act, then turn to legislation to rein in the president's authority to impose Section 232 tariffs. The latter would require a veto-proof majority, and such a level of bipartisanship is probably too much to hope for in today's Washington. But approving the USMCA and rejecting the RTA would be a step toward trade stability. Also available at FEE
https://medium.com/@mcculloughdr/how-congress-can-take-a-step-toward-trade-stability-791c1fae184f
['Doug Mccullough']
2019-02-15 17:11:26.753000+00:00
['Trade', 'Nafta', 'Tariffs']
Practical Advice for Solidity Developers
Practical Advice for Solidity Developers

An average of 20,000 Smart Contracts are created every day, and over 3 million Smart Contracts were created in November and December 2018 alone! A report shows that 60% of all Ethereum contracts have never been interacted with, less than 10% of all created contracts are unique, and less than 1% of contracts have their source code available. Another report shows that over 25% of all Smart Contracts created have some kind of bug 😱. Today, we'll mention a couple of things which can help you build better Smart Contracts!

K.I.S.S. Principle (Keep It Simple, Stupid)
The KISS principle is well known when designing a system. Keeping things simple decreases the likelihood of errors. So make your logical flow simple when designing a Smart Contract, and you will end up with clean, tidy code.

Reuse
You don't need to reinvent the wheel! There are already well-tested libraries and frameworks (e.g. Truffle, OpenZeppelin) which help you build and modularize your logic, so use them 😉. Don't do things like creating your own random number generator.

Software Engineering Design Patterns
Patterns help us solve some very fundamental architectural design problems. They are used in millions of software projects throughout the world and are already a well-researched area of computer science. Understanding patterns helps you recognize problems quickly and come up with better solutions for them. You can find Smart Contract / Solidity design patterns here. There is one particular design pattern which we like: Proxy Delegate. This pattern helps you upgrade your contract safely.

Test Cases
It's best to follow TDD (Test-Driven Development): create test cases before writing your code, and cover both positive and negative test cases. Also, try to break your own code and find logical mistakes. You can use solidity-coverage to generate test coverage.

Use Security Tools
Smart Contract security should be the utmost priority. You need to find all attack vectors. Smart Contracts and ERC-20 tokens have a history of bugs. You can also check out the Ethernaut puzzles to get familiar with the most famous security problems, and you should read Vitalik Buterin's article on Smart Contract security. There are already open-source security analysis tools which you can use to analyze your Smart Contract. Here are a few of them:

Visualization
- Sūrya: utility tool for Smart Contract systems, offering a number of visual outputs and information about the contracts' structure. Also supports querying the function call graph.
- Solgraph: generates a DOT graph that visualizes the function control flow of a Solidity contract and highlights potential security vulnerabilities.
- EVM Lab: rich tool package to interact with the EVM. Includes a VM, Etherchain API, and a trace-viewer.
- ethereum-graph-debugger: a graphical EVM debugger. Displays the entire program control flow graph.

Static and Dynamic Analysis
- Mythril Classic: open-source security analyzer for Solidity code and on-chain Smart Contracts.
- Mythril Platform: SaaS platform that allows anyone to build purpose-built security tools.
- Slither: static analysis framework with detectors for many common Solidity issues. It has taint and value tracking capabilities and is written in Python.
- Echidna: the only available fuzzer for Ethereum software. Uses property testing to generate malicious inputs that break Smart Contracts.
- Manticore: dynamic binary analysis tool with EVM support.
- Oyente: analyzes Ethereum code to find common vulnerabilities, based on this paper.
- Securify: fully automated online static analyzer for Smart Contracts, providing a security report based on vulnerability patterns.
- SmartCheck: static analysis of Solidity source code for security vulnerabilities and best practices.
- Octopus: security analysis tool for blockchain Smart Contracts with support for EVM and (e)WASM.

Known Attacks and Updates
Always keep yourself updated on known attacks and Solidity updates; major Solidity updates may have breaking changes. Here is a list of known Smart Contract attacks. There are a few major things you need to take care of when developing an ERC-20 token:
- Be aware of front-running attacks on ERC-20.
- Prevent transferring tokens to the 0x0 address (a minimal test sketch for this check follows at the end of this article).
- Prevent transferring tokens to the contract address.
There are better Ethereum standards, such as ERC-223 and ERC-777, which you can use as alternatives to ERC-20. The security-related EIPs are also important to be aware of, either for understanding how the EVM works or for learning best practices when developing a Smart Contract system.

Documentation and Procedures
Documentation is a very important practice when your code is going to handle potentially millions of dollars. It helps internal and external participants, auditors, and independent reviewers understand your contracts.
- Create specifications, state machines, and models which help others understand the system.
- Include a roll-out plan with your documentation.
- Specify the current version of the compiler.
- Specify known issues, attack vectors, limitations, and potential remedies for them.
- Specify test coverage and reviewers.
- Maintain a history and keep track of changes over time.
- Specify contract authors and contributors with their public contact information.

Contract Auditing
There are companies which can help you audit your Smart Contract. These services are useful, as experts will audit your code for potential vulnerabilities. They are highly recommended if you are building a Smart Contract on which your business will depend.

Bug Bounties
It always helps to have more eyes on your code. Bug bounties are excellent in terms of ROI. Below are some tips for running bounty programs:
- Decide the total budget and currency for your bounty reward.
- Categorize your budget by type of vulnerability.
- Decide on a team for judging the bounty.
- Specify a proper communication channel for reporting bugs.
- Use private repos to fix the bug, and involve the bounty hunter in reviewing the fix.
- Do not delay in rewarding the bounty hunter.
You can check out the 0x project bug bounty program.

Prepare for Failure
If there is code, there is a bug 😄. When it comes to Smart Contracts, create a thorough plan for the worst-case scenarios. There are two ways to write error-free programs; only the third one works. (Alan J. Perlis) Here are several tips which can help you create such a plan for your token:
- Add a circuit breaker to your Smart Contract which will pause all kinds of transactions.
- Create a plan for contract upgrades, bug fixes, and improvements.
- Create a proper guideline and communication channel for your disclosure policy.
- Minimize the impact of bugs and the money at risk.
- Plan what happens in case of money loss.
- Provide recourse in case of failure (e.g. insurance, a penalty fund, or no recourse).

Conclusion
Apart from the above, always optimize your Smart Contract for gas usage. Prevention is better than the cure: always try to plan and prevent before things go wrong. In the case of Smart Contracts, real money will be at stake; you need to be extra careful as the programmer.
Always stay updated on the latest changes in the Solidity language and compiler, and on known attacks in the ecosystem. Given the Ethereum blockchain's nature of everything being public, and taking into account the current maturity of the ecosystem, guaranteeing that nothing will go wrong is extreme overconfidence (if not foolishness). So build, test, plan, and follow procedures to mitigate potential vulnerabilities in your Smart Contract. Happy (smart) coding! About QuikNode QuikNode is building infrastructure to support the future of Web3. Since 2017, we've worked with hundreds of developers and companies, helping scale dApps and providing high-performance Ethereum nodes. We've been working on something interesting for the past few months and will be launching soon, so subscribe to our newsletter for more updates! 😃
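As a quick illustration of the testing advice above, and of the zero-address check flagged in the ERC-20 list, here is a minimal sketch of a Truffle/Mocha-style JavaScript test. The contract name MyToken and the assumption that its transfer function reverts on the zero address are illustrative only; they are not part of the original article, and the exact revert message depends on your contract and tooling.

// test/mytoken.test.js (hypothetical Truffle test sketch)
const MyToken = artifacts.require('MyToken') // assumed contract name

contract('MyToken', accounts => {
  const [owner, recipient] = accounts
  const ZERO_ADDRESS = '0x0000000000000000000000000000000000000000'

  it('transfers tokens to a normal address', async () => {
    const token = await MyToken.deployed()
    await token.transfer(recipient, 100, { from: owner })
    const balance = await token.balanceOf(recipient)
    assert.equal(balance.toString(), '100')
  })

  it('rejects transfers to the zero address', async () => {
    const token = await MyToken.deployed()
    try {
      await token.transfer(ZERO_ADDRESS, 100, { from: owner })
      assert.fail('transfer to the zero address should revert')
    } catch (err) {
      // the negative test passes only if the call actually reverted
      assert(err.message.includes('revert'), `unexpected error: ${err.message}`)
    }
  })
})

Covering this kind of negative case alongside the happy path is exactly the positive/negative split the Test Cases section recommends.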
https://medium.com/quiknode/practical-advice-for-solidity-developers-f2c33b88c0e6
['Gaurav Agrawal']
2019-06-24 18:59:38.843000+00:00
['Solidity', 'Smart Contracts', 'Ethereum', 'Web3', 'Blockchain']
“Clean” mining? You missed a spot.
From clean coal to clean gold and clean cobalt, the label of "clean" is a standard we have now come to expect from the natural resource industry in the 21st century. A brief look at the modern history of mining reveals a troubled past: in the last century, the mining industry contributed to a number of global problems, including environmental degradation and many untimely, preventable deaths of workers. A major push toward "clean" mining practices ensued near the end of the 20th century, and coming out the other end, the mining industry has transformed to incorporate "clean", "green" technologies and practices. Yet the industry's preoccupation with the label of "clean" may be missing the mark and forgetting one of its most stubborn stains: social governance.
Don't Cry Over Spilt Acid
Clean mining, in effect, got its name from just that: mines cleaning up their act. Most of the time this effort was applied after something had already gone wrong. Some of the common environmental issues mines have been forced to rectify include acid rock drainage (the outflow of acidic water from unearthed rock), toxic tailings, heavy power usage, and poor waste management, all of which have social ramifications for mining communities and workers. The clean narrative, however, has given some companies a dirty name. Barrick Gold is one of them; after undergoing numerous lawsuits for environmental and human rights violations, the company has become a leader in Corporate Social Responsibility. The stigma associated with the clean-versus-dirty dichotomy has formed a barrier discouraging some companies and financial stakeholders from participating in social programs. In a sense, admitting the need to come "clean" might, by default, entail admitting to "dirty" dealings. Yet it can be difficult to enforce "clean" mining standards in demanding social environments. Where conditions are abject, where government structures are unreliable, and where mines attract unwanted attention from local communities, including illegal miners and migrant workers, even companies committed to ethical operation will encounter challenges. Apart from the stigma associated with the clean/dirty polarity, it is this nuanced understanding of social conditions that is lacking in the terminology.
From Clean Mining to Social Mining
At Peer Ledger we want to shift the focus from clean mining to social mining. With an integrative Environmental and Social Governance (ESG) approach, mines, mining companies, stakeholders, financial institutions, government, and the community take on joint responsibility. Social mining implies voluntary actions undertaken by mining companies either to improve the living conditions (economic, social, environmental) of local communities or to reduce the negative impacts of mining projects. These actions go beyond contractual obligations, licensing agreements, and the bare minimum needed to reach a "clean" environmental standard and mitigate technical risks. A social mining framework gives communities a place in the process with some measure of decision-making capacity, while mining companies are required to actively give back to the community. There is still debate about what government's role should be in the ESG framework, particularly around enforcing mining policy. Overall, social mining aims to address the forgotten elements of social unrest and poor governance while environmental concerns and acute human rights abuses take centre stage in the dominant narrative.
It takes a proactive approach to risk management, with attention to risks that are not just technical or environmental, such as the social risk of a workers' strike due to abject labour conditions.
Pass Clean. Collect $200 Million
Taking an ESG approach to responsible mining is a positive move that makes mining companies more attractive to investors and stakeholders. ESG indicators are becoming the dominant framework for socially conscious investors. Peer Ledger's unique brand of supply chain solutions enables companies to produce ESG reporting, providing accuracy, transparency, and consensus on operating measures along the supply chain. Learn more at www.peerledger.com.
https://medium.com/@peerledger/clean-mining-you-missed-a-spot-138382412771
['Peer Ledger']
2020-12-11 19:37:20.078000+00:00
['Blockchain Startup', 'Supply Chain', 'Traceability', 'Mining', 'Gold']
The Media Creates Outrage From Nothing
The Media Creates Outrage From Nothing “Taxpayers may have to cough up £3,000 for Yorkshire Ripper’s funeral” screams the headline in the Daily Mirror. But in the sub-heading it tells the full story: “Criminals like Peter Sutcliffe must be buried at the taxpayer’s expense according to Ministry of Justice, if their surviving family cannot or will not pay for the funeral”. Oh, so this is a non-story, then? No, because most people won’t even read beyond the headline. The Mirror knows this, and produced ‘stories’ like this before the days of clickbait or even the internet. Outrage sells papers, and the British tabloids are expert traders in this particular currency. The Mirror has turned it into a story by drumming up hatred and revulsion at the notion of ‘our’ taxes being lavished on a five-star funeral for a serial killer. And it’s a load of old nonsense that contributes to our dumbed-down media landscape. If you read the comments on that tweet, you can see a split between those who want to make sick jokes about it (and I don’t see a problem with that), and those who don’t understand why public money needs to be spent on a funeral for a dead serial killer. Clue: it’s the same reason the taxpayer funded his bed and board for the last 39 years: this is just one of the functions the state has to perform. There’s nowhere else for this man to go, so the authorities have to deal with it. That’s what they’re there for. If no-one from Sutcliffe’s family is willing to pay for a funeral (and if I were related to him, I’d want to keep my distance), then a basic service similar to a “pauper’s funeral” will be conducted. These are funerals that are held for people who died alone and with no relatives. These are paid for by the Local Authority, unlike the funeral of Sutcliffe, which will be funded by the Prison Service. They are the bare minimum process that the deceased is entitled to, no matter who they are. We provide this service because we are a civilised country (sort of) and it is our duty to respectfully take care of the dead. There will usually be at least one person in attendance in addition to the officiant, and the burial takes place in a common grave, often without a marker. Sutcliffe had expressed a wish for his ex-wife to scatter his ashes in Paris, but I gather she wants nothing to do with him, and I don’t blame her. I expect Sutcliffe’s final resting place to remain unknown to most, and it’s probably the best outcome we could hope for. He deserves to be forgotten, and it’s better for the nation’s mental wellbeing that we leave behind the sensationalism that accompanies burials of the notorious.
https://medium.com/make-the-news/the-media-creates-outrage-from-nothing-102a9532a69
['Katy Preen']
2020-11-16 12:57:12.349000+00:00
['Media Literacy', 'Bias', 'Tabloid Journalism', 'Fake News', 'Crime']
Aleksandr Solzhenitsyn On the Magical Power of Art
ON LITERATURE Aleksandr Solzhenitsyn On the Magical Power of Art For what purpose have we been given this gift? Public domain. When Bob Dylan won the Nobel Prize for Literature in 2016, I found myself impressed by the caliber of those who preceded him. Like many, I read his Nobel Lecture and found it inspiring. I then went on to read some of the Nobel Lectures by others who have been so honored. It's an uncommon opportunity not to be taken lightly. Many of the best literary minds of the past century have distilled their best thoughts into something others can read and be inspired by. Kudos to the Nobel committee for this catalog of wisdom and achievement that is preserved for us and available online to any and all.
https://ennyman.medium.com/aleksandr-solzhenitsyn-on-the-magical-power-of-art-6a2924cc6f66
['Ed Newman']
2020-08-09 17:42:10.796000+00:00
['Ideas', 'Power', 'Art', 'Literature', 'Meaning']
Aso Villa Reads for Thurs. 26/11/2020
Every day, we bring you the best stories that the media is reporting about the Government of Nigeria. The Central Bank Governor, Godwin Emefiele, has said the Coalition Alliance Against COVID-19 will rehabilitate all the police stations in the country that were damaged in the wanton destruction and looting that followed the recent #EndSARS protests. Emefiele spoke during a press briefing on Wednesday at the CBN's office in Lagos. He also disclosed that CACOVID had so far spent N43.27bn on the acquisition of medical equipment and supplies, and on food palliatives for the vulnerable, among other things. The CBN governor said the alliance planned to spend N150bn on youth empowerment. [Punch] The Federal Executive Council, FEC, yesterday approved a total sum of about N39.7 billion for road maintenance and the award of contracts for erosion/flood and pollution control accelerated intervention projects. This was disclosed at the end of the virtual FEC meeting presided over by President Muhammadu Buhari at the Council Chamber, Presidential Villa, Abuja. Of the amount, N20.925 billion was for road repairs and maintenance, while N17.75 billion was for erosion/flood and pollution control accelerated intervention projects and others. [Vanguard] The Minister of Transport, Mr. Rotimi Amaechi, yesterday handed down a January 2021 target timeline for the commissioning of the Lagos-Ibadan Rail Line to the contractor handling the project, the China Civil Engineering Construction Company (CCECC). Speaking on the development, the Managing Director of the Nigerian Railway Corporation (NRC), Engr. Fidet Okhiria, said a test run of the Lagos-Ibadan rail services will be carried out in the first week of December ahead of the presidential commissioning in January. [Vanguard] The Federal Government yesterday gave an indication that it may reopen Nigeria's borders, which have been closed since August last year. The Minister of Finance, Budget and National Planning, Mrs Zainab Shamsuna Ahmed, who disclosed this during a roundtable discussion at the 26th Nigerian Economic Summit (NES #26), said that a committee set up by President Muhammadu Buhari had assessed the gains of the closure and had recommended to the president that the borders be reopened. In view of the recommendation, she was optimistic that the president would act on it soon. [The Sun] The private sector-led Coalition Alliance Against COVID-19 has unveiled plans to empower four million youths with a N25 billion employment program. The group said it received N43.272 billion in donations from members of the private sector and spent N43.272 billion on various interventions, including N28.7 billion on food relief as palliatives to 1.7 million households, translating to about eight million Nigerians. The Co-Chair of CACOVID and Governor of the Central Bank of Nigeria (CBN), Mr. Godwin Emefiele, disclosed this at a press briefing in Lagos while giving an account of the group's activities. [Vanguard] The National Health Insurance Scheme (NHIS) has formally launched a new health insurance policy known as the group, individual and family social insurance package. Speaking during the launch of the scheme in Abuja, the Executive Secretary of the NHIS, Prof. Mohammed Sambo, said the move was in keeping with the federal government's determination to expand health insurance coverage to accommodate all Nigerians. Under the new scheme, groups and families can now enjoy special health insurance cover. [This Day]
https://medium.com/@theasovilla/aso-villa-reads-for-thurs-26-11-2020-d82a0266b309
['Government Of Nigeria']
2020-11-26 15:07:30.360000+00:00
['Nigeria', 'Health', 'Economy', 'Infrastructure', 'Covid 19']
Hiding From Joy
Ananda Church, Palo Alto I lived in East Palo Alto, CA in 2010 while I finished my Master’s degree. Having lived in a very rural community for years, I was pretty excited about all the Bay Area options for alternative religious observance. One church I looked forward to visiting was the Palo Alto Ananda Church of Self-Realization, an offshoot of the original Self-Realization Fellowship. The SRF was created to spread the teachings of Paramahansa Yogananda, an East Indian who migrated to the U.S. in the beginning of the 20th century to introduce kriya yoga, “the science of enlightenment”, to the west. As it happens 2020 is the centennial of Yogananda’s establishing the SRF in Los Angeles. I knew Yogananda through his bestselling autobiography, Autobiography of a Yogi, still in print since the first 1946 edition. I suppose the best way I can describe the book in short is with a Google search question. Question reads, “Is Autobiography of a Yogi true?” and that’s actually a typical reader response, if I am any example. I read the book at age 14 or possibly 15, when I got excited about yoga. So 1969–70. Yogananda with his buddy Luther Burbank. Weird smile there, Yogananda To cut to some sort of chase, this rant isn’t about Yogananda, it’s about joy. Joy being such a big subject, I flounder- or procrastinate. Or both. The reason I begin with my visit to the Palo Alto church is that I noticed the congregation (or the leaders perhaps) had a distinct idea about joy; that joy is better if it is gentle. Follow-along prayer lines were displayed on wall-mounted LED screens, in lieu of prayer books. In accord the congregation asked someone (or something) for the blessing of gentle joy. I react to screens with a fairly constant awareness of the Big Brother effect. I wasn’t raised with the TV, we’ll just say. So while we prayed for the experience of gentle joy along with the screens, my metaphorical ears perked. Since when do we need to qualify joy? I asked nobody at all. Were Yogananda’s followers a bunch of joy fascists? How were we supposed to feel about UN gentle joy? Was excited joy like idle hands: another Devil’s workshop? The message I got from this gentle joy thing was that, joy is all well and good, folks, but let’s not get too excited about it. Keep it down to a dull roar, as my Mom used to chastise her 6 children. Joy describing a probably more excited joy. Fierce magic sounds exciting to me, anyway. What is gentle joy, and why is it particularly desirable? At the time I was still in recovery from a life transforming experience of a far from gentle joy. I had run, perhaps desperately, into the fireworks kind of joy. Joy of the sort that’s innocently portrayed in photos of folks leaping high on the beach in front of a blazing sunset. Such photos belie the truth that large leaps of joy can be the sort my culture labels madness. Many religious saints were labeled thus in their day. My own giant bound before the glorious sunset landed me 1. in the nuthouse, and next, 2. in severe anxiety for months during the aftermath. It was finger-in-a-light-socket level fear, a treacherous form of inner rapids that I’m somewhat skilled at navigating now. Do we wish for gentle joy because, if we restrain and titrate joy, we avoid the 3D mundane crash? Gentle joy might be a safe little skip on the sidewalk as opposed to a seaside leap, but the gentle joy let down is barely noticeable. It’s like the absence of caffeine in the morning, as opposed to a life transforming struggle to regain stable footing. 
I could never have chosen the crash, but I also know that it was brave of me to leap. And no, I won’t be sainted, but then neither will you be crowned for your leaps, I expect. Not by anyone else, anyway. Cold-blooded religious dogma, however useful, isn’t the only conditioning factor keeping us wary of joy, or even happiness. Joy and happiness are in essence the same thing, though the word ‘joy’ has a way larger palette than the placid English word ‘happiness’. Joy runs an amazing gamut, from holding your child for the first time, to the most cosmic mind blowing ecstasies of mystical journeying; done both. Happy is probably in line with gentle joy; it’s easier to talk about, easier to consider, to reach for and embrace. Though it could be some simple act, a stopping to smell roses, it’s often experienced as relief from something awful. In a society that has trouble stopping, it’s often the temporary disappearance of fear, depression, dread, the absence of that haunting feeling you’re not doing what you are supposed to. Happiness is always sane, and the use of the word implies sanity on the part of the user. Joy could be the crazy talking, unless you are a medieval saint or something. Bernini’s famous statue of St Teresa de Avila. Teresa experienced ecstasy (extreme joy, as in a serious orgasm) through pain inflicted by angels. I was quite taken with her during my crazy time. Anyway, take that, gentle joy. Though all of us are not called to such foolhardy shenanigans. Indeed, the word ‘joy’ has the taint of the religious upon it, and therefore must be handled with care in my demographic. Educated liberal white Americans don’t think or talk about joy, as a rule, with the exception of the word ‘enjoy’, which usually refers to enjoying the mundane; a good pastry and coffee with a friend. We might utter the word in church, as in the one recently mentioned, or while singing Christmas carols. Though joy is handled like very thin glass, most of us will allow ourselves to speak of happiness. In proper company. We all have joy conditioning, of course. In the personal history department I can identify a decision I made to not be happy; a time when I learned how to keep joy on the down low. My parents had just separated, and I, as one of the oldest half of six siblings, was parceled out to my father. He was obviously uber distressed, and as a child, I didn’t know how to help. So I vowed that he would not see me acting happy when he was obviously unhappy. It is a common enough human strategy; emotional mirroring. I’m sure such mirroring behavior is all very codependent in my white American liberal culture’s parlance, but codependent is what people are. It quite naturally feels amoral to happily go about the business of enjoying ourselves in the face of a close distressed human, never mind dancing with joy. We want to let the unhappy one know, quite simply, that we see them, and saying so is just not enough, no ma’am. Especially when you are 7 years old and your parents are divorcing. Adjusting our mood to another’s is empathy in its most primitive, and therefore perhaps its most beautiful, form. Such natural mirroring behavior is fine for adults, if it’s more or less conscious; in other words, not a way of life. Looking around my country these days, I see many folks so overcome with empathy for countless things living, or even long dead, that they are never going to have a happy day in their lives. Unless they get some drugs. Which most of us do. 
To complicate the matter of happiness, in my liberal white American demographic it’s laudable to be enraged, or “mad”: angry in the emotional sense. Such devilish inner torment is preferred behavior because it apparently proves one is “paying attention”, a 2 word phrase worth contemplating for those with a few minutes to spare. Seemingly it’s actually praiseworthy to avoid the devil’s idle workshop by emoting; by being alternately angry and depressed. It takes too much energy to be angry 24/7. In part, anger and outrage show others that we beat our chests (in the lamentation way, hopefully not the gorilla way) over the fact that others are suffering. Our impotent social outrage is emotional mirroring that expresses our guilt for not having fixed everything yet. Not too subtle bullying tactics here. If someone wants to tell me where to put my attention, I will look elsewhere. So there. I learned that in kindergarten. I looked out the window. What is not fine in my larger demographic is joy, no matter how many Insight Timer meditations we listen to or Thich Nhat Hanh books we read. And I started reading them decades ago. Why is joy so inappropriate? In a world gone so apparently amuck, we express our love by strangling joy- ours and others’. As I did in empathy for my father. We are not mad with joy, we are mad at it. Joy is viewed with a suspicious eye, because we have been taught that happiness keeps us from striving, from eternally fixing everything. A happy person is an ineffective person, a limp dick. And for those who have been exposed to a world of hurt it’s horribly difficult to admit that, like my seven year old self, there’s only so much most of us can do. Happiness, however ephemeral, has become, in a world of suffering, a concerted choice. Letting go of my sense of competency is an ongoing practice. I self identify with the Fool archetype, so that helps a lot. By the time I die I shall be completely idiotic. Hm. Sounds like a lot of folks. My conditioning told me that happiness is the state of the uncaring, the irresponsible, the uneducated, the idiot. If you knew much at all, you would be bummed. Wipe that smirk off your face, young lady. It would be wonderful, of course, if unhappiness fixed things. Or anything. Besides validating other unhappy people. Discomfort of any kind is definitely a potential motivation- my severe anxiety certainly was. But I propose it’s false to stereotype happy people as metaphorical slobs sipping metaphorical cocktails on the metaphorical summer beach. For loss and betrayal come to all who live long enough, and when they do, it’s Wheel of Fortune time. We can opt for permanent disability in the joy department, or we can see that Vanna’s showing us 3 doors. At least. Our life’s periodic swings from pain to ecstasy are what make saints of us all, anyway. Unhappiness of the periodic sort is quite natural to the empathetic human, of course. For when at least somewhat healthy folks are aware of distressed humans, or animals, or rivers… it’s natural to grieve. There’s even a hint of joy in such grief; joy at knowing there is so much to love here, and gratitude because we do appear to love well enough to grieve. But healthy grief is an event; a phase, or a season, not a way of life. It’s true that some of us avoid the grief process by raging, whether outrage or enrage/inrage. Perhaps we view our happiness with a cold eye because it is natural for loving humans to fear emulating those who are, indeed, damaged beyond empathy- and/or uneducated to it. 
On our planet, especially at this time, we are inundated with endless examples of tyrants and slaves and liars and cold-hearted abusers, past and present. Some of us were spawned by such damaged and sadly disconnected creatures, the monkeys in Harry Harlow's research study. But if we wait to en-joy our existence until the day when everything is fixed, we will wait forever. "According to many thinkers of the day, affection would only spread diseases and lead to adult psychological problems." From article on verywellmind.com My crazy joy didn't last; it was unsustainable in my society, for one thing. It was essentially another story of infatuated love gone awry; astounding and unique to me, but nothing extraordinary on the human scale. People are doing the leaping joy of infatuation and other styles of euphoria all the time, especially the adolescent ones. They remember the natural joy of childhood and haven't yet found themselves writhing on the beach. Humans as a species persist in loving, en-joying, more or less wildly, despite the possibility of falling from ungentle graces. Fearlessness is the very nature of the beast, for leaping joy is an experience of the eternal. When we are in big joy, who cares about the fall? Recklessness is the nature of ecstasy, probably why it's so threatening to the status quo. However, when past and future, both personal and collective, hurt too much, even full-blown guilt-ridden adults might leap, if we are brave enough, and/or pushed beyond our limits. After the fall, rebuilding joy starts at the root of our beingness. Classic Rumi In the recklessness department, the original leader of the Ananda church, Kriyananda, lost a sexual molestation suit in the late 90s. He allegedly encouraged young female devotees to perform sexual acts upon his person, despite his celibate vows and well-known matters of leadership ethics. Did Kriyananda's salacious activity fall under the heading of "gentle joy"? And is there any hint of irony in the current church logo, "Joy is within you"? Who knows, for joy is an internal experience, however externally motivated. After Kriyananda's fall, the Ananda church is now, not surprisingly, led by a married man. Today my Piriform cleaner updated; after cleaning files it used to tell me, "Your computer is feeling fresh and clean!" Now it says, "Run this every week to keep it healthy and full of joy!" Probably gentle joy, though I don't know anything about AI joy. Once I would have scoffed at such an obvious marketing ploy. How stupid do they think I am? My computer is joyful? But I'm too old to care about such stuff anymore. Outside of moralistic morasses like Kriyananda's, I want to keep my heart open to the words, sounds, and colors of joy! I do appreciate you feeling 'round…;)
https://medium.com/@cmszabo55/hiding-from-joy-73ce910aa0d3
['Colleen Szabo']
2020-12-21 14:55:51.327000+00:00
['Kriyananda', 'Yogananda', 'Joy', 'Happiness', 'Outrage']
Here I am beneath your shower
campsite We came upon an illuminated garden. The ground was red with dead trees and fern fingers. The trees were in a deep cycle of life; they were growing and their arms were outstretched, midlife yawning. Fingertips outward and green, light zooming through the fibers and onto the wet ground. Everything is wet there. And steam rises, and dew settles, a light gleams golden down upon what had become our campsite. On the lowest point, where water gathered and there was nothing but dirt, mud, and roots, we pitched the tent that was a temple. At first we could not see it. Could not hear, smell, taste, or touch the low hum that resonated from the mountain. The water does not move and the ripple runs through it. And when you stayed long enough, it would run through you. The dinner was prepared by Matthew in the traditional form of a small band of heat captured upon a tin plate. Auspicious meats covered in warm naan. Away from the tent and atop the dead trees we communed with the garden. Matthew raised the question: "Upon whose ground do you stand?" The trees were outstretched but the light was gone. The woods were falling asleep except the lone bachelor bird who sang all night long. The far-off waterfalls were all that animated the silence. The stillness brought dimension to the cylinder of our presence, and it would flex and bend as we moved within. The weight of this newfound sensation gave rise to a figure that would protrude and whip around our bodies, making its way about the space against our movement. This portion of the woods was surrounded by anomalies. A lake held by two mountains. A ribbon that stretched from one horizon to the other. A working shower. And all that I did not yet have names for. What could I say but the mountain? Beyond the tent was a low point that gathered the surrounding water and surrounded a boulder. After dinner we gazed into the stars as they peeped into the deep blue of the twilight sky, our backs on the rock. Abreast like corpses, our eyes gathered the final rays of the day. We watched as stars were shot across the sky and rocks burned up in the atmosphere. Disks of light glittered across, and the brightest burned brighter. Then I was alone on the rock, and I heard a voice in the woods. It spoke when summoned, when I asked if it were real, and if it were there, and could hear me. "Here I am", it said. I thought, "Here I am", and it was there. I had been thinking of it today and yesterday but could not hear. I had found out. I had known and forgotten. The mountain may speak through a stillness. I heard it through our tent in the golden sunlight. It was the warmest of days as the memory was pressed against my mind, as if I were living my past, when I could hear "Here I am" in the mundanity of tossing a stick or riding a swing. I was not alone upon the rock. The tent was filled with ink and was the only movement in the cylinder. But then it stopped and became like the rest. Two souls in the nothingness of the mountainside, forgotten in the landscape of the night. It was a short climb from the campsite, with a natural pathway weaving between the trees, up tiny plateaus, until a stacked group of stones from which the water flowed down. The shower had run for thousands of years and was seldom used, pouring off the side of the cliff, blasting away a pit of wet sand. Above was the face; 50 feet up was the top. Splashing down off the rock the water came, sifting through the cliff, pouring through the creases, making a glistening reflection in the sun.
The sand was quick, and the stone was cold and soft from the moss that covered the surface. And when your hands ran through it, you could feel the heartbeat of the mountain. Who else could it be but the mountain?
https://medium.com/@stevenrayesky/here-i-am-beneath-your-shower-f336f777149f
['Steven Rayesky']
2020-09-04 19:43:50.688000+00:00
['Writing', 'Hiking', 'Memories', 'Spirituality']
My Top 5 Christmas Games
Just when we believed our Christmas traditions would be broken this year, the NBA made a way. It's a wonderful feeling to wake up, open gifts, and anticipate five intriguing games that will go down to the wire on each possession. NBA Christmas has been a tradition of mine since adolescence. I've seen so many stars rise to the occasion and put up top-notch performances when the lights are bright. Like we always say, NBA Christmas always tops NFL Thanksgiving. Let's look at five Christmas games that I've enjoyed throughout the years. 1. Heat at Lakers (2004): This was probably one of the most anticipated games of that season. It was Shaq's first return to the Staples Center, and he was going up against his then-enemy Kobe Bryant (Rest In Peace). You could feel the tension in the air between Bryant and Shaq before tipoff; they didn't even look at one another when they shook hands. As we all know, Kobe lit up the floor with 42 points on 12–30 shooting. But even though Shaq fouled out with 24 points in the game, it was the Diesel who secured round 1 between the two players. 2. Nets at Knicks (1984): This game was well before my time, but as a basketball fanatic, I traveled back in time to witness it with my very own eyes. The highlight of the game was Bernard King and his 60-point performance. There was literally no stopping him, as he was a threat from mid-range and around the paint. His scoring single-handedly kept the Knicks in the game. But the Nets were just too good. They were led by Micheal Ray Richardson with a 36-point effort, and it was New Jersey who came away with the victory. 3. Warriors at Cavaliers (2016): This one stung me for a while. As a Warriors fan, I was anticipating the team going into Quicken Loans Arena and seeking revenge after the previous year's championship loss. The team started things on the right path: Kevin Durant was knocking down jumpers and the team looked in a flow on both ends. But once LeBron and Kyrie erupted, things went downhill. The Cavs came alive in the 4th quarter, picked Golden State apart, and took advantage with good quality looks on offense. Then, in the end, Kyrie sealed the deal with his turnaround jumper over Klay Thompson, securing a 109–108 victory for Cleveland. 4. Clippers at Lakers (2019): The Clippers-Lakers rivalry was real. Both teams were in contention for a championship and the keys to Los Angeles were up for grabs. This may have been a regular season game, but you couldn't tell from either squad; it was a battle of who wanted it more. The Lakers had a comfortable 15-point lead by the third quarter, but the Clippers kept fighting through the storm. Kawhi Leonard and Paul George were spectacular as a duo. They set the tone on both ends of the floor, which was a huge piece in closing out the game late in the 4th. The Clippers' defensive mentality is what secured them the 111–106 victory. 5. Sixers at Celtics (2018): This game went down to the wire. Knowing the rivalry already set between the two organizations, it was inevitable that they would make this one a nail-biter. Joel Embiid and Jimmy Butler put the Sixers on their backs to try to grab a win in Beantown. But it was Kyrie Irving who broke a team's heart once again. Irving was crucial down the stretch, knocking down big-time buckets to put the Celtics in the lead. He finished with 40 points on the night and was a key contributor in the 121–114 overtime victory.
With the next five games tipping off this afternoon, we will witness more competitive matchups where neither club will let its guard down. Grabbing a win on Christmas Day is pivotal for a team, giving it momentum to carry forward. We have so many memories of fantastic Christmas games to look back on, and we will witness new ones tomorrow morning, starting with Pelicans vs. Heat. I wish everyone a very Merry Christmas and hope you spend it watching some hoops!
https://medium.com/@nickandre29/my-top-5-christmas-games-aaeb7bb0bfea
['Nick Andre']
2020-12-25 12:14:33.747000+00:00
['NBA', 'Merry Christmas', 'Nba Christmas', 'Christmas Day']
How I went from newbie to Software Engineer in 9 months while working full time
In this post, I’ll share how I went from zero(ish) to a six-figure software engineering job offer in nine months while working full time and being self-taught. Photo by Artem Sapegin on Unsplash Whenever I would start reading a success story, I would immediately look to find the author’s background, hoping it would match mine. I never found someone who had the same background as I did, and most likely mine won’t match yours exactly. Nonetheless, I hope that my story inspires others and acts as a valuable data point that can be added to your success story dataset. Full Disclosure I took a Visual Basic for Applications (VBA) course in high school (nine years ago). In my freshman engineering course (seven years ago), I learned some C, Python, Matlab, and Labview. I graduated from a good university with a chemical engineering degree and a good GPA (three years ago). I hadn’t done any programming outside of school, in high school or college, until I decided I wanted to learn last year. After college, I got a job as a Process Engineer at a refinery. I worked there until I changed careers into Software Engineering. Why I wanted to change careers I enjoyed solving technical problems, but I knew I wanted to get into the business/startup world at some point. I always kept the thought of an MBA in the back of my mind, but every time I looked at the price tag of the top schools, my interest waned. On May 27th, 2017 I found myself googling about MBAs again, and somehow I stumbled upon software engineering. It seemed like a perfect fit. Software engineers are in increasing demand, salaries are great, and it’s the perfect industry from which to get into the startup world without needing a ton of initial capital. All you need is a computer, and your opportunities are limitless (kind of). In no other engineering discipline can you just have an idea, start building it, show it to users, and iterate with little capital and low barrier to entry. In chemical engineering, you essentially need a running plant or a lot of money to design a plant if you had an idea for a new product. I had heard of people quitting their jobs and attending a bootcamp, but the more I read about it online, the more I realized that you can totally learn it all on your own if you are committed and focused. You might argue that you are losing out on the networking and career advice provided by a bootcamp. This can be true, but I was fortunate in that I was living in the Bay Area which allowed me to attend several meetups, so I networked that way. Besides, the worst case was that I’d realize that I couldn’t do it on my own, and then I would quit my job to attend a bootcamp. The Goal Photo by Robert Baker on Unsplash You need to have a goal. Especially if you are trying to learn while working full-time. It is easy to let your learning drag on and on if you don’t have any external pressure pushing you. So you need to create internal pressure. Your goal should be simple and quantitative. You should do enough research to come up with a reasonable goal. Mine was the following: Get a software engineering job within one year with the same or better salary than I am making right now. The Plan Photo by Glenn Carstens-Peters on Unsplash Once you have a goal, you need a plan to help you get there. This is where you consume as many success stories as you can. None of them will match your exact situation, but you can take some advice from each one. 
I developed (and iterated on) my plan using resources such as the learnprogramming subreddit, the freeCodeCamp forum, and Medium. On May 27, 2017, I decided I was going to take the coding plunge, and I dove in head first. That day I decided to start putting in no more than 40 hours per week at my job, so that I had time to code after work and on the weekends. Luckily for you, I did a pretty good job of documenting my progress. My plan, through many iterations, ended up looking something like this: take an Intro to CS course to get a solid base understanding of core CS concepts; follow freeCodeCamp until I could build portfolio-level full stack web apps on my own; refactor to clean up the code, add testing, and focus on advanced concepts; contribute to open source; and prepare for job interviews. To start, my plan was simple. At the time, I thought I was going to follow Google's Technical Guide, so I started with their recommended introductory course, Udacity CS101. Month 0 - Udacity CS101, Harvard CS50 The high of making this big decision gave me a ton of energy. I would start coding as soon as I got home from work and wouldn't stop until I went to bed. And then again all weekend. Udacity CS101 tracked completion percentage, which was a big motivator for me. I logged my completion percentage every day after coding. I finished the first 75% in 10 days. The last 25% was heavy in recursion, and it was a bit tougher for me. All in all, it took me 20 days to finish Udacity CS101. While I was taking Udacity CS101, I had started reading the learnprogramming subreddit quite heavily. I read that it was important for self-taught developers looking to make a career change to be active online. I decided to make new Twitter, Reddit, Stack Overflow, Medium, and Quora accounts using my full name, so that I could build up an online presence. Also, I decided to stop reading distracting media like Instagram, Facebook, and non-programming subreddits. I would only check my phone for programming-related news and posts. This was crucial in making sure that I was finding out about the best learning paths and learning resources. It was because of this that I found out about Harvard CS50 on edX. I was originally content with just doing one intro course, but everyone seemed to recommend Harvard CS50, so I decided to dive into that next. CS students at other schools had taken this course and said they learned more in CS50 than in a year or two of studying CS at their university. The general consensus was that the course was difficult but worth it. By the end of Month 0, I had completed the first 5 lectures and homework assignments. Month 1 - Harvard CS50, Linux, 1st Meetup, freeCodeCamp I completed CS50 about halfway into the month. I'm not going to comment too much on my experience with CS50, because I wrote an in-depth post about my experience here. TLDR: It's a great course, and I highly recommend it. David Malan is an excellent lecturer, and there are a ton of resources to help you get through it. You start in C, move on to Python, and then finish with web development. It is very dense, and there is a lot of material, but I think it is well worth it. After CS50, I decided to set up my XPS 15 to dual boot Windows and Ubuntu. That was a frustrating weekend. I messed up my partitions and almost bricked my laptop. I was close to chucking my laptop and getting a new one. I slowly weaned myself off of Windows and eventually was solely using Ubuntu.
I wanted to force myself to get comfortable with the command line, which I think worked to some degree, but I still have a long way to go. I started 100 Days of Code to make sure I stayed focused and coded every day. It is important to document your progress. If you are making progress every day, it won't seem like much, but when you look back after a month or several months, you will realize that you have actually made quite a bit of progress, which motivates you to keep going. I knew that networking would make or break me, so I mustered up the courage to go to my first coding meetup. I had never gone to any meetup, let alone a coding meetup. I was so nervous that after driving there, parking, and walking to the door, I almost turned around and went home. It helped that it was the first meetup for the group. I quickly realized that there was no reason to be nervous. No one knew each other, no one was judgmental, and everyone was eager to learn. This was the beginning of a meetup spree. I ended up attending over 50 meetups in 9 months. I'm glad that I started going to meetups early. Most people only start attending meetups when they are looking for a job, but at that point it is almost too late. There are so many reasons to start early. To name a few: developing relationships takes a long time, and starting early means you have connections who can vouch for you when looking for a job later; talking about programming with strangers is a great way to prepare for interviews; and you can learn about new frameworks, tools, and learning resources from people who are ahead of you, which can influence your future learning plan. There was some uncertainty at this time in my coding journey. This was about when I needed to decide what kind of software developer I wanted to be. Ultimately, I chose web development because it seemed like there was high demand and also a lot of online resources. Once I had that figured out, I needed to figure out what to do next. Some people recommended that at this stage I should think about web apps I wanted to build and then get going. Some people recommended The Odin Project or freeCodeCamp. The guy who was running the weekly meetup I was attending knew Ruby and wanted to do projects with Ruby. This was a big reason why I made the decision to go all in on The Odin Project. And then, two days later, I ditched that idea. That is one of the downsides of going the self-taught route. One minute you think you know what path you should take, but the next day you wonder if that was the right move. I read that Ruby was falling out of favor, and I confirmed this by comparing the number of Ruby and JavaScript job listings, so I ended up starting freeCodeCamp. The one thing that bothered me about freeCodeCamp was that they came up with the project ideas, so every camper does the same projects. This concerned me at first because I wanted to stand out to recruiters. However, I ended up loving freeCodeCamp, and now I highly recommend it. For more details on my experience and recommendations regarding freeCodeCamp, check out my writeup here. Month 2 — YDKJS, freeCodeCamp Front End, React I started reading You Don't Know JavaScript because everyone recommended it to supplement freeCodeCamp. I had to re-read several sections as it is pretty dense, but it's a perfect resource for learning lexical scope, closures, promises, and all the parts of JavaScript that you hear about and want to learn but never do because they seem difficult. I finished the front-end section of freeCodeCamp.
The checklist format and estimated completion time helped motivate me to finish quickly. I was also itching to move on to the next section and learn React. However, this also meant that my projects had minimal styling. I did whatever it took to fulfill the user stories and nothing more. In hindsight, maybe I should have focused on making the projects more appealing. Perhaps, this would have helped me learn CSS more deeply. The next step was learning React, and I was pretty pumped. I had heard so much about it, and I was ready to fit in with the cool kids. However, I was a little hesitant given the licensing issues at the time. I’m really glad that is no longer an issue. Learning React was difficult for me. I wasn’t aware of any good tutorials then (but it seems like there are a ton now). I tried reading the docs and following along with Facebook’s Tic-Tac-Toe tutorial, but I didn’t quite understand all of it. I was told if that wasn’t working for me, then it meant I didn’t understand JavaScript enough. So then I went back to reading You Don’t Know JavaScript, but again that was too dense for me. Month 3 - freeCodeCamp React, CodeClub, Starting freeCodeCamp Back End Ultimately, I just decided I would work my way through the freeCodeCamp React projects to see how it went. That code was ugly, but it did help me understand React a little better. That meetup I had been attending weekly decided that they were going to build projects with full stack JavaScript instead of Ruby, and they decided that the first project would be to build a website for the meetup group, CodeClub.Social. I developed cards using React and Meetup API allowing the user to sign up for the next three meetups from our website. It was a little difficult for me to take a quick break from freeCodeCamp to do this, but it was an opportunity I couldn’t pass up. I was happy to be working on a project with a small group of people. It also helped me learn Git and Github. Before the month was over, I started working on the back end section of freeCodeCamp. Month 4 - Finished freeCodeCamp Back End, Yeggle I worked through all of the API projects in freeCodeCamp, but I started deviating from freeCodeCamp at the Image Search Abstraction Layer project. I was itching to make full stack web applications, so as soon as I saw the title of this project, I had an idea for my own project. I would make a node app that would store random imgur URLs in a database, and then make a front end that would output a user-specified number of those random images. What everyone says is true: you work harder and have more success when you are working on a project that was your own idea. Once I got it to work, I was very proud of myself. It was ugly and clunky, but it worked. As I was working through freeCodeCamp, I was learning about what projects would be within my capabilities. I was running regularly at the time, so I would come up with ideas on my runs and write them down when I got home. That way I would have a list of project ideas when I was ready. I finally felt ready to start making my own useful and polished full-stack web apps to share with users and put on my portfolio. I was so ready to get started. When looking for a new restaurant, I always found myself opening Yelp to check reviews, and then opening Maps to check their reviews. What if I made an app that compared both side by side? So I made Yeggle. I used Node/Express/React along with the Google Maps and Yelp APIs. 
There were a couple obstacles I didn’t think I would be able to overcome, but in the end I finished and I was very proud of my app. Then I posted it to Reddit, and no one cared. That was a bit of a bummer, but I didn’t let it get me down. Month 5 - StockIT I didn’t get quite as much done this month, as I started it off with a two week vacation to Japan and Thailand! But I did start and complete my next project. I kept reading about how difficult it was to get a job as a self-taught developer, so I thought I needed to do something unique. I remembered a game where a Dow Jones stock graph started trending, and you had one opportunity to buy and one opportunity to sell, and the goal was to beat the market. The purpose of the game was to show you how difficult it was to beat the market. My idea was to make a game similar to that, but instead of the market, you would be playing against a machine learning algorithm. So I created StockIT. I took a video tutorial on Pandas and Scikit Learn that covered multiple machine learning techniques. I originally wanted to do some cool deep learning techniques, but I realized that took massive datasets and more time than I wanted to spend. Instead, I stuck to a simple linear regression model. I thought that would be the hard part, but it wasn’t. Getting D3 to jive with React was the hard part. Both libraries wanted to control the DOM. There were some other libraries that helped to join the two, but I felt they were too bloated. I ended up using D3 to generate the SVGs and React to handle the DOM which worked out quite well for me. This time when I shared it with Reddit, everyone loved it! Turns out, just like VCs, redditors are all about that machine learning. All the love from Reddit was a big confidence boost. People were playing my game and enjoying it! Month 6 - jobSort(), Job Hunt Prep After StockIT, I rolled right into my next personal project. I wanted to make a job board that aggregated the smaller tech-focused job listing websites such as Stack Overflow, Github, and Hacker News. To add my own unique spin to it, I decided to have it sort based on the technologies the user wanted in a job and how badly they wanted each of them. For example, let’s say I was looking for a job that was looking for someone who knew JavaScript, React, and/or Python, and I really wanted to work with JavaScript and React but I didn’t care so much about Python. Then I could give JavaScript a 3, React a 3, and maybe Python a 1. The listings would then sort accordingly. I ran into various obstacles with this project and had to change course a couple times, but I ended up with a product I was happy with. My final tech stack was React/Node/Express/MySQL. I posted the project to the cscareerquestions subreddit and got 650 views before it was taken down because they don’t allow personal projects. The “final” product is here, and if you’re interested in knowing more about my struggles and refactors, check out my post here. Because of my issues, jobSort() took up a decent portion of the month. I ended up getting coffee with a friend I had met at my first meetup, and he advised me to start applying for jobs now. I read all over the place that everyone says they waited too long to apply. 
Also, whenever I saw a post asking when to apply, the top comment was always "now." In my head, I was going to work my way through my structured plan to build up my portfolio with personal projects, then work on open source contributions, then prepare for interviews, and finally start applying to jobs. This friend convinced me to ditch that plan and start applying. So this month I made a portfolio and a resume. The following month I would start applying. Month 7 - Testing, Job Hunting This month I focused on touching up my projects and applying to jobs. I also wanted to learn testing and Redux. I added flexbox to CodeClub.Social to make it responsive. I improved the mobile UX on jobSort(). I added testing to jobSort() with mocha/chai/enzyme, which was difficult to set up, easy to get started with, and then difficult to get to 100% coverage. By the end of the month, I had applied to 63 jobs. I viewed this as a self-assessment. Was my portfolio/resume good enough? If so, what did I need to work on to prepare for interviews? At first, I applied through Hacker News: Who is Hiring, and Indeed. On Hacker News, I used jobSort() to determine which listings to apply for. On Indeed, I tried non-software companies to see if I could even get a call or an interview anywhere. At first, I was applying quickly and not personalizing my resume/cover letter. Then, I decided to personalize my cover letter and resume, and to try to send an email to someone from the company. This method was clearly better than the shotgun approach. I received five calls that month — two from recruiting companies and three from software companies, which included: a contracting DevOps/testing role at a dotcom company, a Series B food analytics company, and a fairly large and successful startup that was recently purchased by a major corporation. I made it past the HR screen in two of these, but none of them yielded an onsite interview. I was pretty happy with the three calls, and I learned a lot from them. Everyone online said that junior developers aren't expected to know that much from the start; they just need to be passionate and excited to learn. So I thought, easy. I am passionate and excited to learn. What I learned from these calls, however, was that nobody was looking for a junior developer. They expect you to know what you're doing from day one. These calls taught me that I needed to be good enough to add value from day one, and confident enough to convince them that I could add value from day one. Month 8 - Night Shift, Redux, Open Source, Onsite Interview I started this month working the night shift for a 40-day stretch at my full time job - 6 days a week, 12 hours a day, 5PM to 5AM. Ugh. I knew I wouldn't be able to get as much done this month, but I had a goal and I wanted to meet it, so I couldn't take a month off. I refactored jobSort() to use Redux, which was surprisingly not as difficult as I thought it would be. I listened to a lot of podcasts about it and read blog posts about it, and it never quite made sense to me until I started using it. I really like the flow of data with Redux. It's interesting now seeing people complain about Redux. I don't think I'm qualified to spout off my opinions strongly, but I do like the reducer pattern. This was supposed to be the month of open source for me. I was going to make my first open source contribution, and it would be a great contribution to a fantastic library. I was going to contribute to React! Everyone said it was a difficult codebase to read, let alone contribute to.
But I needed to stand out, I needed to be unique. I knew that my contribution wouldn’t be significant, but I still wanted to do it nonetheless. I would start by reading the docs all the way through and then pouring through the codebase. Watch every issue, every PR. Reading through the React docs in full was a great exercise, and I’m glad I did it. But I quickly realized that the issue with contributing to React is that there just aren’t that many “good first issues,” and they get snatched up quickly. At one of the meetups I attended, Anthony Ng recommended that I try out Downshift, an autocomplete library by Kent C. Dodds. This was a gamechanger. It was right in my wheelhouse. The right difficulty, right amount of issues to help with, not too many collaborators, super helpful maintainer, clean well-tested code. On top of all that, it was a perfect solution to some issues I was having with my jobSort() application. About halfway through the month, I received an email from one of the companies I applied to in the previous month. They set up an initial phone screen, and then a technical phone screen. The technologies they were looking for were exactly what I had learned - React, Redux, and D3. I mostly just talked about my projects and why I made certain decisions. After this, they asked me to come onsite for an interview. My first onsite interview! I hadn’t prepared for interviews at all, so I went into it with the expectation that I wouldn’t get the job, but I would gain valuable interviewing experience. I also was running on three hours of sleep since I was still working the night shift which didn’t help. Luckily, the technical portion wasn’t whiteboarding, just a one-hour pair programming session. It was a fairly straightforward challenge, but I was very nervous. At first, I was worried about making sure I knew everything without looking it up. When I realized that I wasn’t going to finish the challenge, I realized that I needed to stop worrying what the interviewer thought of me and just google/stack overflow to find answers. I didn’t end up finishing, and I thought I failed miserably. Since I thought I failed the pair programming, I felt relaxed for the rest of the interview. Ultimately, I left the interview with my chin up. Worst case I got some valuable interviewing experience, and best case I got my first job offer. Month 9 - Job Offer I ended up receiving my first job offer 9 months and 7 days after that first day when I decided I was going to dive head first into programming with the intent of changing careers. I felt confident given that I received an offer after my first onsite interview, but at the same time, if I didn’t take the offer, what if this was the only offer I would receive for several months? I ended up taking the offer, and I am happy with my decision. I wanted to get paid to code! Advice Up to this point, I have mostly shared my story with some advice sprinkled in. Chances are if you’re reading this, you either are thinking about changing careers or are in the middle of learning to code with the intent of changing careers. I hope that the advice below will help you develop a plan or stick with your current plan and reach your goal. Find out what motivates you and use it to your advantage. For me, it was checklists, documenting my progress, and interacting with various programming communities. If you are not motivated to reach your goal, then nothing else matters because you won’t finish. Make goals and meet them. 
I would argue that you should have monthly goals and maybe even daily goals: monthly goals to make sure you are on track to meet your main goal, and daily goals to make sure that you actually make daily progress. One strategy that worked for me was to make my daily goals the night before. That way, you can't do unproductive work all day and feel like you made progress when you really didn't. It forces you to compare your daily accomplishments with your daily goals. Go to meetups way before you think you are ready. Going to meetups can feel scary, as I mentioned above. But in general, everyone is nice and willing to help. You might find people who aren't interested in talking with you, but they are the minority, and no one will be judgmental. Also, everyone loves to give advice (like I'm doing right now). Contribute to open source way before you think you are ready. When you first start programming, GitHub seems like this scary place that you never want to go. It is actually very welcoming to beginners and is a great place to see good code and get your own code reviewed. If you're still not convinced, check out my post, Why you should contribute to open source right now. Start applying way before you think you are ready. This one was tough for me because I thought I was different. I thought I didn't need to test the market to get a feel for what to work on. I thought I would know when I was ready to apply. I'm telling you right now: you will not know when to apply, so you might as well start now. You shouldn't go crazy and apply to 300 companies before you learn for loops. But you should know that the best way to find out what you need to learn is by applying and testing the market. Now get back out there and code!
https://medium.com/free-code-camp/how-i-went-from-newbie-to-software-engineer-in-9-months-while-working-full-time-460bd8485847
['Austin Tackaberry']
2020-01-21 01:55:08.284000+00:00
['Programming', 'Tech', 'Software Development', 'Web Development', 'Life Lessons']
Three Times Meditation App Calm Won at Marketing
Three Times Meditation App Calm Won at Marketing What you can learn from great marketing stunts A few days back, meditation pioneer Headspace announced its upcoming Netflix series: a three-series deal that kicks off in January 2021 with the Headspace Guide to Meditation. Overall, Headspace's marketing strategy can teach us a lot about creativity in the marketing space, yet it's not the only meditation app to look out for. Calm is another B2C app that came from behind, having launched two years after Headspace. However, with a $1.5 million seed round, Calm grew to $40 million in revenue while profitable and overtook Headspace in revenue in 2017 despite having a smaller team; Calm has since been valued at $2 billion. The startup has been leading the news space for a good part of 2020, and with good reason. Calm is yet another example of a wellness unicorn, as it continues rapid growth with over 100 million downloads and four million paying members. From a business perspective, Calm has been a leaner company since the beginning, so it had to focus on building a simple, well-designed, high-quality meditation app rather than on corporations and B2B deals. Yet this is not a piece about seed funding or business growth, but a series of highlights of some great (and perhaps lesser-known) stunts Calm has pulled in the past year, and why they show that its exploding growth was less about luck and more about well-thought-out choices. Below are three key examples of how Calm nailed its marketing strategy.
https://medium.com/better-marketing/three-times-meditation-app-calm-won-at-marketing-73a386cad768
['Fab Giovanetti']
2020-12-18 14:15:43.379000+00:00
['Creativity', 'Marketing', 'Social Media', 'Startup', 'Business']
A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning
The three transfer categories discussed in the previous section outline different settings where transfer learning can be applied and studied in detail. To answer the question of what to transfer across these categories, some of the following approaches can be applied. Instance transfer: Reusing knowledge from the source domain for the target task is usually an ideal scenario. In most cases, the source domain data cannot be reused directly. Rather, there are certain instances from the source domain that can be reused along with target data to improve results. In the case of inductive transfer, modifications such as AdaBoost by Dai and their co-authors help utilize training instances from the source domain for improvements in the target task. Feature-representation transfer: This approach aims to minimize domain divergence and reduce error rates by identifying good feature representations that can be utilized from the source to target domains. Depending upon the availability of labeled data, supervised or unsupervised methods may be applied for feature-representation-based transfers. Parameter transfer: This approach works on the assumption that the models for related tasks share some parameters or a prior distribution of hyperparameters. Unlike multitask learning, where both the source and target tasks are learned simultaneously, for transfer learning we may apply additional weightage to the loss of the target domain to improve overall performance. Relational-knowledge transfer: Unlike the preceding three approaches, relational-knowledge transfer attempts to handle non-IID data, that is, data that is not independent and identically distributed: data where each data point has a relationship with other data points. Social network data, for instance, calls for relational-knowledge-transfer techniques. The following table clearly summarizes the relationship between different transfer learning strategies and what to transfer. Transfer Learning Strategies and Types of Transferable Components Let's now utilize this understanding and learn how transfer learning is applied in the context of deep learning. Transfer Learning for Deep Learning The strategies we discussed in the previous section are general approaches that can be applied to machine learning techniques, which brings us to the question: can transfer learning really be applied in the context of deep learning?
Deep learning models are representative of what is also known as inductive learning. The objective for inductive-learning algorithms is to infer a mapping from a set of training examples. For instance, in cases of classification, the model learns mapping between input features and class labels. In order for such a learner to generalize well on unseen data, its algorithm works with a set of assumptions related to the distribution of the training data. These sets of assumptions are known as inductive bias. The inductive bias or assumptions can be characterized by multiple factors, such as the hypothesis space it restricts to and the search process through the hypothesis space. Thus, these biases impact how and what is learned by the model on the given task and domain. Ideas for deep transfer learning Inductive transfer techniques utilize the inductive biases of the source task to assist the target task. This can be done in different ways, such as by adjusting the inductive bias of the target task by limiting the model space, narrowing down the hypothesis space, or making adjustments to the search process itself with the help of knowledge from the source task. This process is depicted visually in the following figure. Inductive transfer (Source: Transfer learning, Lisa Torrey and Jude Shavlik) Apart from inductive transfer, inductive-learning algorithms also utilize Bayesian and Hierarchical transfer techniques to assist with improvements in the learning and performance of the target task. Deep Transfer Learning Strategies Deep learning has made considerable progress in recent years. This has enabled us to tackle complex problems and yield amazing results. However, the training time and the amount of data required for such deep learning systems are much more than that of traditional ML systems. There are various deep learning networks with state-of-the-art performance (sometimes as good or even better than human performance) that have been developed and tested across domains such as computer vision and natural language processing (NLP). In most cases, teams/people share the details of these networks for others to use. These pre-trained networks/models form the basis of transfer learning in the context of deep learning, or what I like to call ‘deep transfer learning’. Let’s look at the two most popular strategies for deep transfer learning. Off-the-shelf Pre-trained Models as Feature Extractors Deep learning systems and models are layered architectures that learn different features at different layers (hierarchical representations of layered features). These layers are then finally connected to a last layer (usually a fully connected layer, in the case of supervised learning) to get the final output. This layered architecture allows us to utilize a pre-trained network (such as Inception V3 or VGG) without its final layer as a fixed feature extractor for other tasks. Transfer Learning with Pre-trained Deep Learning Models as Feature Extractors The key idea here is to just leverage the pre-trained model’s weighted layers to extract features but not to update the weights of the model’s layers during training with new data for the new task. For instance, if we utilize AlexNet without its final classification layer, it will help us transform images from a new domain task into a 4096-dimensional vector based on its hidden states, thus enabling us to extract features from a new domain task, utilizing the knowledge from a source-domain task. 
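To make the feature-extractor idea concrete, here is a minimal tensorflow.keras sketch. Keras does not bundle AlexNet, so Inception V3 (mentioned above) stands in; the 299 x 299 input size and the 2,048-dimensional output are properties of that particular network, and the snippet as a whole is an illustration rather than code from the original article.

```python
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

# Load only the convolutional base, with global average pooling in place of
# the final ImageNet classification layer.
base = InceptionV3(weights='imagenet', include_top=False, pooling='avg',
                   input_shape=(299, 299, 3))
base.trainable = False  # fixed feature extractor: the weights are never updated

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3) with values in [0, 255]."""
    return base.predict(preprocess_input(images), verbose=0)

# Four random arrays stand in for real images, just to show the shapes involved.
dummy = np.random.uniform(0, 255, size=(4, 299, 299, 3)).astype('float32')
print(extract_features(dummy).shape)  # (4, 2048)
```

The resulting feature vectors can then be fed to any downstream classifier, which is exactly the pattern described next.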
This is one of the most widely utilized methods of performing transfer learning using deep neural networks. Now a question might arise: how well do these pre-trained off-the-shelf features really work in practice on different tasks? They definitely seem to work really well on real-world tasks, and if the chart in the above table is not very clear, the following figure should make things clearer with regard to their performance on different computer vision based tasks! Performance of off-the-shelf pre-trained models vs. specialized task-focused deep learning models Based on the red and pink bars in the above figure, you can clearly see that the features from the pre-trained models consistently outperform very specialized task-focused deep learning models. Fine Tuning Off-the-shelf Pre-trained Models This is a more involved technique, where we do not just replace the final layer (for classification/regression), but we also selectively retrain some of the previous layers. Deep neural networks are highly configurable architectures with various hyperparameters. As discussed earlier, the initial layers have been seen to capture generic features, while the later ones focus more on the specific task at hand. An example is depicted in the following figure on a face-recognition problem, where the initial lower layers of the network learn very generic features and the higher layers learn very task-specific features. Using this insight, we may freeze (fix the weights of) certain layers while retraining, or fine-tune the rest of them to suit our needs. In this case, we utilize the knowledge in terms of the overall architecture of the network and use its states as the starting point for our retraining step. This, in turn, helps us achieve better performance with less training time. Freezing or Fine-tuning? This brings us to the question: should we freeze layers in the network to use them as feature extractors, or should we also fine-tune layers in the process? This should give us a good perspective on what each of these strategies is and when it should be used! Pre-trained Models One of the fundamental requirements for transfer learning is the presence of models that perform well on source tasks. Luckily, the deep learning world believes in sharing. Many of the state-of-the-art deep learning architectures have been openly shared by their respective teams. These span different domains, such as computer vision and NLP, the two most popular domains for deep learning applications. Pre-trained models are usually shared in the form of the millions of parameters/weights the model achieved while being trained to a stable state. Pre-trained models are available for everyone to use through different means. The famous deep learning Python library, keras, provides an interface to download some popular models. You can also access pre-trained models from the web, since most of them have been open-sourced. For computer vision, you can leverage several popular pre-trained models. For natural language processing tasks, things become more difficult due to the varied nature of NLP tasks, but you can leverage pre-trained word embedding models. But wait, that's not all! Recently, there have been some excellent advancements in transfer learning for NLP. The newest of these models definitely hold a lot of promise, and I'm sure they will be widely adopted pretty soon for real-world applications.
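Before moving on to the types of deep transfer learning, here is a minimal sketch of the freeze-versus-fine-tune choice discussed above, again in tensorflow.keras. The split point at block5_conv1, the 150 x 150 input size, and the small dense head are illustrative assumptions, not a prescription.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))

# Freeze everything up to the last convolutional block; unfreeze the rest
# so that only the highest-level, most task-specific layers get retrained.
set_trainable = False
for layer in base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    layer.trainable = set_trainable

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid'),   # a binary task, purely as an example
])

# A small learning rate is typical when fine-tuning, so that large updates
# do not destroy the pre-trained weights.
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),
              loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```

Setting set_trainable earlier (say at block4_conv1) fine-tunes more of the network; leaving every layer frozen reduces the whole base to the feature-extractor case above.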
Types of Deep Transfer Learning The literature on transfer learning has gone through a lot of iterations, and as mentioned at the start of this chapter, the terms associated with it have been used loosely and often interchangeably. Hence, it is sometimes confusing to differentiate between transfer learning, domain adaptation, and multi-task learning. Rest assured, these are all related and try to solve similar problems. In general, you should always think of transfer learning as a general concept or principle, where we try to solve a target task using source task-domain knowledge. Domain Adaptation Domain adaptation is usually referred to in scenarios where the marginal probabilities of the source and target domains are different, that is, P(Xₛ) ≠ P(Xₜ). There is an inherent shift or drift in the data distribution of the source and target domains that requires tweaks to transfer the learning. For instance, a corpus of movie reviews labeled as positive or negative would be different from a corpus of product-review sentiments. A classifier trained on movie-review sentiment would see a different distribution if utilized to classify product reviews. Thus, domain adaptation techniques are utilized in transfer learning in these scenarios. Domain Confusion We learned different transfer learning strategies and even discussed the three questions of what, when, and how to transfer knowledge from the source to the target. In particular, we discussed how feature-representation transfer can be useful. It is worth reiterating that different layers in a deep learning network capture different sets of features. We can utilize this fact to learn domain-invariant features and improve their transferability across domains. Instead of allowing the model to learn any representation, we nudge the representations of both domains to be as similar as possible. This can be achieved by applying certain pre-processing steps directly to the representations themselves. Some of these have been discussed by Baochen Sun, Jiashi Feng, and Kate Saenko in their paper 'Return of Frustratingly Easy Domain Adaptation'. This nudge toward similarity of representation has also been presented by Ganin et al. in their paper 'Domain-Adversarial Training of Neural Networks'. The basic idea behind this technique is to add another objective to the source model to encourage similarity by confusing the domain itself, hence domain confusion. Multitask Learning Multitask learning is a slightly different flavor of the transfer learning world. In the case of multitask learning, several tasks are learned simultaneously without distinction between the source and targets. In this case, the learner receives information about multiple tasks at once, as compared to transfer learning, where the learner initially has no idea about the target task. This is depicted in the following figure. Multitask learning: Learner receives information from all tasks simultaneously One-shot Learning Deep learning systems are data-hungry by nature, in that they need many training examples to learn the weights. This is one of the limiting aspects of deep neural networks, though this is not the case with human learning. For instance, once a child is shown what an apple looks like, they can easily identify a different variety of apple (with one or a few training examples); this is not the case with ML and deep learning algorithms.
One-shot learning is a variant of transfer learning where we try to infer the required output based on just one or a few training examples. This is especially helpful in real-world scenarios where it is not possible to have labeled data for every possible class (if it is a classification task), and in scenarios where new classes can be added often. The landmark paper by Fei-Fei and their co-authors, 'One Shot Learning of Object Categories', is supposedly what coined the term one-shot learning and sparked research in this sub-field. The paper presented a variation on a Bayesian framework for representation learning for object categorization. This approach has since been improved upon and applied using deep learning systems. Zero-shot Learning Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples to learn a task. This might sound unbelievable, especially when learning from examples is what most supervised learning algorithms are about. Zero-data learning, or zero-shot learning, methods make clever adjustments during the training stage itself to exploit additional information in order to understand unseen data. In their book on deep learning, Goodfellow and their co-authors present zero-shot learning as a scenario where three variables are learned: the traditional input variable, x, the traditional output variable, y, and an additional random variable that describes the task, T. The model is thus trained to learn the conditional probability distribution P(y | x, T). Zero-shot learning comes in handy in scenarios such as machine translation, where we may not even have labels in the target language. Applications of Transfer Learning Deep learning is definitely one of the specific categories of algorithms that has been utilized to reap the benefits of transfer learning very successfully. The following are a few examples: Transfer learning for NLP: Textual data presents all sorts of challenges when it comes to ML and deep learning. Text is usually transformed or vectorized using different techniques. Embeddings, such as Word2vec and FastText, have been prepared using different training datasets and are utilized in different tasks, such as sentiment analysis and document classification, by transferring the knowledge from the source tasks. Besides this, newer models like the Universal Sentence Encoder and BERT definitely present a myriad of possibilities for the future. Transfer learning for Audio/Speech: Similar to domains like NLP and Computer Vision, deep learning has been successfully used for tasks based on audio data. For instance, Automatic Speech Recognition (ASR) models developed for English have been successfully used to improve speech recognition performance for other languages, such as German. Also, automated speaker identification is another example where transfer learning has greatly helped.
Transfer learning for Computer Vision: Deep learning has been quite successfully utilized for various computer vision tasks, such as object recognition and identification, using different CNN architectures. In their paper, How transferable are features in deep neural networks (https://arxiv.org/abs/1411.1792), Yosinski and their co-authors present their findings on how the lower layers act as conventional computer-vision feature extractors, such as edge detectors, while the final layers work toward task-specific features. Thus, these findings have helped in utilizing existing state-of-the-art models, such as VGG, AlexNet, and Inception, for target tasks, such as style transfer and face detection, that were different from what these models were originally trained for. Let's explore some real-world case studies now and build some deep transfer learning models! Case Study 1: Image Classification with a Data Availability Constraint In this simple case study, we will be working on an image categorization problem with the constraint of having a very small number of training samples per category. The dataset for our problem is available on Kaggle and is one of the most popular computer vision based datasets out there. Main Objective The dataset that we will be using comes from the very popular Dogs vs. Cats Challenge, where our primary objective is to build a deep learning model that can successfully recognize and categorize images as either a cat or a dog. Source: becominghuman.ai In terms of ML, this is a binary classification problem based on images. Before getting started, I would like to thank Francois Chollet, not only for creating the amazing deep learning framework keras, but also for talking about the real-world problems where transfer learning is effective in his book, 'Deep Learning with Python'. I have taken that as an inspiration to portray the true power of transfer learning in this chapter, and all results are based on building and running each model in my own GPU-based cloud setup (AWS p2.x). Building Datasets To start, download the train.zip file from the dataset page and store it in your local system. Once downloaded, unzip it into a folder. This folder will contain 25,000 images of dogs and cats; that is, 12,500 images per category. While we can use all 25,000 images and build some nice models on them, if you remember, our problem objective includes the added constraint of having a small number of images per category. Let's build our own dataset for this purpose.
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
Let's now load up all the images in our original training data folder as follows: (12500, 12500) We can verify with the preceding output that we have 12,500 images for each category. Let's now build our smaller dataset, so that we have 3,000 images for training, 1,000 images for validation, and 1,000 images for our test dataset (with equal representation for the two animal categories).
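The loading and splitting code itself did not survive in the text above; the following is a minimal sketch of what it plausibly looks like, using the imports shown above. The 'train/' folder name and the cat/dog file-name prefixes follow the Kaggle archive layout, and the printed shapes line up with the outputs quoted next.

```python
# Sketch of the dataset construction step (assumes the unzipped Kaggle
# archive lives in 'train/' with file names such as cat.0.jpg / dog.0.jpg).
files = glob.glob('train/*')

cat_files = [fn for fn in files if 'cat' in os.path.basename(fn)]
dog_files = [fn for fn in files if 'dog' in os.path.basename(fn)]
print((len(cat_files), len(dog_files)))  # (12500, 12500)

# 1,500 training + 500 validation + 500 test images per class.
cat_train = np.random.choice(cat_files, size=1500, replace=False)
dog_train = np.random.choice(dog_files, size=1500, replace=False)

cat_rest = list(set(cat_files) - set(cat_train))
dog_rest = list(set(dog_files) - set(dog_train))
cat_val, cat_test = np.array(cat_rest[:500]), np.array(cat_rest[500:1000])
dog_val, dog_test = np.array(dog_rest[:500]), np.array(dog_rest[500:1000])

print('Cat datasets:', cat_train.shape, cat_val.shape, cat_test.shape)
print('Dog datasets:', dog_train.shape, dog_val.shape, dog_test.shape)
```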
Cat datasets: (1500,) (500,) (500,) Dog datasets: (1500,) (500,) (500,) Now that our datasets have been created, let's write them out to disk in separate folders, so that we can come back to them anytime in the future without worrying about whether they are present in main memory. Since this is an image categorization problem, we will be leveraging CNN models, or ConvNets, to try and tackle it. We will start by building simple CNN models from scratch, then try to improve them using techniques such as regularization and image augmentation. Then, we will try and leverage pre-trained models to unleash the true power of transfer learning! Preparing Datasets Before we jump into modeling, let's load and prepare our datasets. To start with, we load up some basic dependencies.
import glob
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
%matplotlib inline
Let's now load our datasets, using the following code snippet. Train dataset shape: (3000, 150, 150, 3) Validation dataset shape: (1000, 150, 150, 3) We can clearly see that we have 3,000 training images and 1,000 validation images. Each image is of size 150 x 150 and has three channels for red, green, and blue (RGB), hence giving each image the (150, 150, 3) dimensions. We will now scale each image's pixel values from the (0, 255) range down to (0, 1), because deep learning models work really well with small input values. The preceding output shows one of the sample images from our training dataset. Let's now set up some basic configuration parameters and also encode our text class labels into numeric values (otherwise, Keras will throw an error). ['cat', 'cat', 'cat', 'cat', 'cat', 'dog', 'dog', 'dog', 'dog', 'dog'] [0 0 0 0 0 1 1 1 1 1] We can see that our encoding scheme assigns the number 0 to the cat labels and 1 to the dog labels. We are now ready to build our first CNN-based deep learning model. Simple CNN Model from Scratch We will start by building a basic CNN model with three convolutional layers, coupled with max pooling for auto-extraction of features from our images and for downsampling the output convolution feature maps. A Typical CNN (Source: Wikipedia) We assume you have enough knowledge about CNNs and hence won't cover theoretical details. Feel free to refer to my book or any other resources on the web which explain convolutional neural networks! Let's leverage Keras and build our CNN model architecture now. The preceding output shows us our basic CNN model summary. Just like we mentioned before, we are using three convolutional layers for feature extraction. The flatten layer is used to flatten out the 128 feature maps of size 17 x 17 that we get as output from the third convolution layer. This is fed to our dense layers to get the final prediction of whether the image should be a dog (1) or a cat (0). All of this is part of the model training process, so let's train our model using the following snippet, which leverages the fit(…) function. The following terminology is very important with regard to training our model: The batch_size indicates the total number of images passed to the model per iteration. The weights of the units in the layers are updated after each iteration.
The total number of iterations is always equal to the total number of training samples divided by the batch_size. An epoch is when the complete dataset has passed through the network once, that is, when all the iterations based on data batches have completed. We use a batch_size of 30, and our training data has a total of 3,000 samples, which indicates that there will be a total of 100 iterations per epoch. We train the model for a total of 30 epochs and validate it after each epoch on our validation set of 1,000 images.
Train on 3000 samples, validate on 1000 samples
Epoch 1/30 3000/3000 - 10s - loss: 0.7583 - acc: 0.5627 - val_loss: 0.7182 - val_acc: 0.5520
Epoch 2/30 3000/3000 - 8s - loss: 0.6343 - acc: 0.6533 - val_loss: 0.5891 - val_acc: 0.7190
...
Epoch 29/30 3000/3000 - 8s - loss: 0.0314 - acc: 0.9950 - val_loss: 2.7014 - val_acc: 0.7140
Epoch 30/30 3000/3000 - 8s - loss: 0.0147 - acc: 0.9967 - val_loss: 2.4963 - val_acc: 0.7220
Looks like our model is overfitting, based on the training and validation accuracy values. We can plot our model accuracy and errors using the following snippet to get a better perspective. Vanilla CNN Model Performance You can clearly see that after 2–3 epochs the model starts overfitting on the training data. The average accuracy we get on our validation set is around 72%, which is not a bad start! Can we improve upon this model? CNN Model with Regularization Let's improve upon our base CNN model by adding one more convolution layer and another dense hidden layer. Besides this, we will add dropout of 0.3 after each hidden dense layer to enable regularization (a minimal sketch of this architecture appears a little further below). Basically, dropout is a powerful method of regularization in deep neural nets. It can be applied separately to both input layers and hidden layers. Dropout randomly masks the outputs of a fraction of units in a layer by setting their output to zero (in our case, it is 30% of the units in our dense layers).
Train on 3000 samples, validate on 1000 samples
Epoch 1/30 3000/3000 - 7s - loss: 0.6945 - acc: 0.5487 - val_loss: 0.7341 - val_acc: 0.5210
Epoch 2/30 3000/3000 - 7s - loss: 0.6601 - acc: 0.6047 - val_loss: 0.6308 - val_acc: 0.6480
...
Epoch 29/30 3000/3000 - 7s - loss: 0.0927 - acc: 0.9797 - val_loss: 1.1696 - val_acc: 0.7380
Epoch 30/30 3000/3000 - 7s - loss: 0.0975 - acc: 0.9803 - val_loss: 1.6790 - val_acc: 0.7840
Vanilla CNN Model with Regularization Performance You can clearly see from the preceding outputs that we still end up overfitting the model, though it takes slightly longer, and we also get a slightly better validation accuracy of around 78%, which is decent but not amazing. The model overfits because we have much less training data and it keeps seeing the same instances across each epoch. A way to combat this is to leverage an image augmentation strategy and augment our existing training data with images that are slight variations of the existing ones. We will cover this in detail in the following section. Let's save this model for the time being so we can use it later to evaluate its performance on the test data. model.save('cats_dogs_basic_cnn.h5') CNN Model with Image Augmentation Let's improve upon our regularized CNN model by adding in more data using a proper image augmentation strategy. Since our previous model was trained on the same small sample of data points each time, it wasn't able to generalize well and ended up overfitting after a few epochs.
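Before going into the augmentation details, here is the promised minimal sketch of the regularized CNN described above. The exact filter counts and the optimizer are assumptions, since the original listing was not preserved, and train_imgs_scaled, train_labels_enc, validation_imgs_scaled, and validation_labels_enc refer to the arrays prepared during dataset preparation.

```python
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),   # the extra convolution layer
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.3),                             # mask 30% of the dense units
    layers.Dense(512, activation='relu'),            # the extra dense hidden layer
    layers.Dropout(0.3),
    layers.Dense(1, activation='sigmoid'),           # dog (1) vs cat (0)
])

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(),
              metrics=['accuracy'])

# The arrays below come from the dataset-preparation step (names assumed).
history = model.fit(train_imgs_scaled, train_labels_enc,
                    validation_data=(validation_imgs_scaled, validation_labels_enc),
                    batch_size=30, epochs=30, verbose=1)
```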
The idea behind image augmentation is that we follow a set process of taking existing images from our training dataset and applying image transformation operations to them, such as rotation, shearing, translation, zooming, and so on, to produce new, altered versions of the existing images. Due to these random transformations, we don't get the same images each time, and we will leverage Python generators to feed these new images to our model during training. The Keras framework has an excellent utility called ImageDataGenerator that can help us in doing all the preceding operations. Let's initialize two of the data generators for our training and validation datasets. There are a lot of options available in ImageDataGenerator and we have just utilized a few of them. Feel free to check out the documentation to get a more detailed perspective. In our training data generator, we take in the raw images and then perform several transformations on them to generate new images. These include the following: zooming the image randomly by a factor of 0.3 using the zoom_range parameter; rotating the image randomly by 50 degrees using the rotation_range parameter; translating the image randomly horizontally or vertically by 0.2 times the image's width or height using the width_shift_range and height_shift_range parameters; applying shear-based transformations randomly using the shear_range parameter; randomly flipping half of the images horizontally using the horizontal_flip parameter; and leveraging the fill_mode parameter to fill in new pixels for images after we apply any of the preceding operations (especially rotation or translation). In this case, we just fill in the new pixels with their nearest surrounding pixel values. Let's see how some of these generated images might look so that you can understand them better. We will take two sample images from our training dataset to illustrate this. The first image is an image of a cat. Image Augmentation on a Cat Image You can clearly see in the previous output that we generate a new version of our training image each time (with translations, rotations, and zoom), and we also assign a label of cat to it so that the model can extract relevant features from these images and remember that these are cats. Let's look at how image augmentation works on a sample dog image now. Image Augmentation on a Dog Image This shows us how image augmentation helps in creating new images, and how training a model on them should help in combating overfitting. Remember that for our validation generator, we just need to send the original validation images to the model for evaluation; hence, we only scale the image pixels (between 0–1) and do not apply any transformations. Image augmentation transformations are applied only to our training images. Let's now train a CNN model with regularization using the image augmentation data generators we created. We will use the same model architecture from before. We reduce the default learning rate by a factor of 10 for our optimizer, to prevent the model from getting stuck in a local minimum or overfitting, as we will be sending a lot of images with random transformations. To train the model, we need to slightly modify our approach, since we are using data generators. We will leverage the fit_generator(…) function from Keras to train this model.
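A minimal sketch of this augmentation setup, using the parameter values quoted above. The generator and array names are assumptions based on how they are referred to in the text, and model.fit is used here because newer Keras versions accept generators directly (older versions used fit_generator, as mentioned above).

```python
from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255,        # scale pixels to [0, 1]
                                   zoom_range=0.3,
                                   rotation_range=50,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
# Validation images are only rescaled; no augmentation is applied to them.
val_datagen = ImageDataGenerator(rescale=1./255)

# train_imgs / train_labels_enc etc. are the arrays built in the dataset step.
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)

# Same regularized CNN architecture as before, recompiled with the learning
# rate reduced by a factor of 10.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['accuracy'])

history = model.fit(train_generator,
                    steps_per_epoch=100,      # 100 * 30 = 3,000 images per epoch
                    epochs=100,
                    validation_data=val_generator,
                    validation_steps=50,      # 50 * 20 = 1,000 validation images
                    verbose=1)
```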
The train_generator generates 30 images each time, so we will use the steps_per_epoch parameter and set it to 100 to train the model on 3,000 randomly generated images from the training data for each epoch. Our val_generator generates 20 images each time so we will set the validation_steps parameter to 50 to validate our model accuracy on all the 1,000 validation images (remember we are not augmenting our validation dataset). Epoch 1/100 100/100 - 12s - loss: 0.6924 - acc: 0.5113 - val_loss: 0.6943 - val_acc: 0.5000 Epoch 2/100 100/100 - 11s - loss: 0.6855 - acc: 0.5490 - val_loss: 0.6711 - val_acc: 0.5780 Epoch 3/100 100/100 - 11s - loss: 0.6691 - acc: 0.5920 - val_loss: 0.6642 - val_acc: 0.5950 ... ... Epoch 99/100 100/100 - 11s - loss: 0.3735 - acc: 0.8367 - val_loss: 0.4425 - val_acc: 0.8340 Epoch 100/100 100/100 - 11s - loss: 0.3733 - acc: 0.8257 - val_loss: 0.4046 - val_acc: 0.8200 We get a validation accuracy jump to around 82%, which is almost 4–5% better than our previous model. Also, our training accuracy is very similar to our validation accuracy, indicating our model isn’t overfitting anymore. The following depict the model accuracy and loss per epoch. Vanilla CNN Model with Image Augmentation Performance While there are some spikes in the validation accuracy and loss, overall, we see that it is much closer to the training accuracy, with the loss indicating that we obtained a model that generalizes much better as compared to our previous models. Let’s save this model now so we can evaluate it later on our test dataset. model.save(‘cats_dogs_cnn_img_aug.h5’) We will now try and leverage the power of transfer learning to see if we can build a better model! Leveraging Transfer Learning with Pre-trained CNN Models Pre-trained models are used in the following two popular ways when building new models or reusing them: Using a pre-trained model as a feature extractor Fine-tuning the pre-trained model We will cover both of them in detail in this section. The pre-trained model that we will be using in this chapter is the popular VGG-16 model, created by the Visual Geometry Group at the University of Oxford, which specializes in building very deep convolutional networks for large-scale visual recognition. A pre-trained model like the VGG-16 is an already pre-trained model on a huge dataset (ImageNet) with a lot of diverse image categories. Considering this fact, the model should have learned a robust hierarchy of features, which are spatial, rotation, and translation invariant with regard to features learned by CNN models. Hence, the model, having learned a good representation of features for over a million images belonging to 1,000 different categories, can act as a good feature extractor for new images suitable for computer vision problems. These new images might never exist in the ImageNet dataset or might be of totally different categories, but the model should still be able to extract relevant features from these images. This gives us an advantage of using pre-trained models as effective feature extractors for new images, to solve diverse and complex computer vision tasks, such as solving our cat versus dog classifier with fewer images, or even building a dog breed classifier, a facial expression classifier, and much more! Let’s briefly discuss the VGG-16 model architecture before unleashing the power of transfer learning on our problem. 
Understanding the VGG-16 model The VGG-16 model is a 16-layer (convolution and fully connected) network built on the ImageNet database, which is built for the purpose of image recognition and classification. This model was built by Karen Simonyan and Andrew Zisserman and is mentioned in their paper titled ‘Very Deep Convolutional Networks for Large-Scale Image Recognition’. I recommend all interested readers to go and read up on the excellent literature in this paper. The architecture of the VGG-16 model is depicted in the following figure. VGG-16 Model Architecture You can clearly see that we have a total of 13 convolution layers using 3 x 3 convolution filters along with max pooling layers for downsampling and a total of two fully connected hidden layers of 4096 units in each layer followed by a dense layer of 1000 units, where each unit represents one of the image categories in the ImageNet database. We do not need the last three layers since we will be using our own fully connected dense layers to predict whether images will be a dog or a cat. We are more concerned with the first five blocks, so that we can leverage the VGG model as an effective feature extractor. For one of the models, we will use it as a simple feature extractor by freezing all the five convolution blocks to make sure their weights don’t get updated after each epoch. For the last model, we will apply fine-tuning to the VGG model, where we will unfreeze the last two blocks (Block 4 and Block 5) so that their weights get updated in each epoch (per batch of data) as we train our own model. We represent the preceding architecture, along with the two variants (basic feature extractor and fine-tuning) that we will be using, in the following block diagram, so you can get a better visual perspective. Block Diagram showing Transfer Learning Strategies on the VGG-16 Model Thus, we are mostly concerned with leveraging the convolution blocks of the VGG-16 model and then flattening the final output (from the feature maps) so that we can feed it into our own dense layers for our classifier. Pre-trained CNN model as a Feature Extractor Let’s leverage Keras, load up the VGG-16 model, and freeze the convolution blocks so that we can use it as just an image feature extractor. It is quite clear from the preceding output that all the layers of the VGG-16 model are frozen, which is good, because we don’t want their weights to change during model training. The last activation feature map in the VGG-16 model (output from block5_pool ) gives us the bottleneck features, which can then be flattened and fed to a fully connected deep neural network classifier. The following snippet shows what the bottleneck features look like for a sample image from our training data. bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1]) print(bottleneck_feature_example.shape) plt.imshow(bottleneck_feature_example[0][:,:,0]) Sample Bottleneck Features We flatten the bottleneck features in the vgg_model object to make them ready to be fed to our fully connected classifier. A way to save time in model training is to use this model and extract out all the features from our training and validation datasets and then feed them as inputs to our classifier. Let’s extract out the bottleneck features from our training and validation sets now. 
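Here is a minimal sketch of loading the VGG-16 convolution base, freezing it, and extracting the flattened bottleneck features for the scaled training and validation arrays. The vgg, vgg_model, and train_imgs_scaled names follow the text; validation_imgs_scaled is an assumed name.

from keras.applications import vgg16
from keras.models import Model
from keras.layers import Flatten

# Load the VGG-16 convolution base (no fully connected top layers)
vgg = vgg16.VGG16(include_top=False, weights='imagenet',
                  input_shape=(150, 150, 3))
# Freeze every layer so the pre-trained weights are never updated
vgg.trainable = False
for layer in vgg.layers:
    layer.trainable = False

# Flatten the block5_pool output (4 x 4 x 512 = 8,192 bottleneck features)
vgg_model = Model(vgg.input, Flatten()(vgg.output))

train_features = vgg_model.predict(train_imgs_scaled, verbose=0)
validation_features = vgg_model.predict(validation_imgs_scaled, verbose=0)
print('Train Bottleneck Features:', train_features.shape)
print('Validation Bottleneck Features:', validation_features.shape)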
Train Bottleneck Features: (3000, 8192) Validation Bottleneck Features: (1000, 8192) The preceding output tells us that we have successfully extracted the flattened bottleneck features of dimension 1 x 8192 for our 3,000 training images and our 1,000 validation images. Let’s build the architecture of our deep neural network classifier now, which will take these features as input. Just like we mentioned previously, bottleneck feature vectors of size 8192 serve as input to our classification model. We use the same architecture as our previous models here with regard to the dense layers. Let’s train this model now. Train on 3000 samples, validate on 1000 samples Epoch 1/30 3000/3000 - 1s 373us/step - loss: 0.4325 - acc: 0.7897 - val_loss: 0.2958 - val_acc: 0.8730 Epoch 2/30 3000/3000 - 1s 286us/step - loss: 0.2857 - acc: 0.8783 - val_loss: 0.3294 - val_acc: 0.8530 Epoch 3/30 3000/3000 - 1s 289us/step - loss: 0.2353 - acc: 0.9043 - val_loss: 0.2708 - val_acc: 0.8700 ... ... Epoch 29/30 3000/3000 - 1s 287us/step - loss: 0.0121 - acc: 0.9943 - val_loss: 0.7760 - val_acc: 0.8930 Epoch 30/30 3000/3000 - 1s 287us/step - loss: 0.0102 - acc: 0.9987 - val_loss: 0.8344 - val_acc: 0.8720 Pre-trained CNN (feature extractor) Performance We get a model with a validation accuracy of close to 88%, almost a 5–6% improvement from our basic CNN model with image augmentation, which is excellent. The model does seem to be overfitting though. There is a decent gap between the model train and validation accuracy after the fifth epoch, which kind of makes it clear that the model is overfitting on the training data after that. But overall, this seems to be the best model so far. Let’s try using our image augmentation strategy on this model. Before that, we save this model to disk using the following code. model.save('cats_dogs_tlearn_basic_cnn.h5') Pre-trained CNN model as a Feature Extractor with Image Augmentation We will leverage the same data generators for our train and validation datasets that we used before. The code for building them is depicted as follows for ease of understanding. Let’s now build our deep learning model and train it. We won’t extract the bottleneck features like last time since we will be training on data generators; hence, we will be passing the vgg_model object as an input to our own model. We bring the learning rate slightly down since we will be training for 100 epochs and don’t want to make any sudden abrupt weight adjustments to our model layers. Do remember that the VGG-16 model’s layers are still frozen here, and we are still using it as a basic feature extractor only. Epoch 1/100 100/100 - 45s 449ms/step - loss: 0.6511 - acc: 0.6153 - val_loss: 0.5147 - val_acc: 0.7840 Epoch 2/100 100/100 - 41s 414ms/step - loss: 0.5651 - acc: 0.7110 - val_loss: 0.4249 - val_acc: 0.8180 Epoch 3/100 100/100 - 41s 415ms/step - loss: 0.5069 - acc: 0.7527 - val_loss: 0.3790 - val_acc: 0.8260 ... ... Epoch 99/100 100/100 - 42s 417ms/step - loss: 0.2656 - acc: 0.8907 - val_loss: 0.2757 - val_acc: 0.9050 Epoch 100/100 100/100 - 42s 418ms/step - loss: 0.2876 - acc: 0.8833 - val_loss: 0.2665 - val_acc: 0.9000 Pre-trained CNN (feature extractor) with Image Augmentation Performance We can see that our model has an overall validation accuracy of 90%, which is a slight improvement from our previous model, and also the train and validation accuracy are quite close to each other, indicating that the model is not overfitting. Let’s save this model on the disk now for future evaluation on the test data. 
model.save(‘cats_dogs_tlearn_img_aug_cnn.h5’) We will now fine-tune the VGG-16 model to build our last classifier, where we will unfreeze blocks 4 and 5, as we depicted in our block diagram earlier. Pre-trained CNN model with Fine-tuning and Image Augmentation We will now leverage our VGG-16 model object stored in the vgg_model variable and unfreeze convolution blocks 4 and 5 while keeping the first three blocks frozen. The following code helps us achieve this. You can clearly see from the preceding output that the convolution and pooling layers pertaining to blocks 4 and 5 are now trainable. This means the weights for these layers will also get updated with backpropagation in each epoch as we pass each batch of data. We will use the same data generators and model architecture as our previous model and train our model. We reduce the learning rate slightly, since we don’t want to get stuck at any local minimal, and we also do not want to suddenly update the weights of the trainable VGG-16 model layers by a big factor that might adversely affect the model. Epoch 1/100 100/100 - 64s 642ms/step - loss: 0.6070 - acc: 0.6547 - val_loss: 0.4029 - val_acc: 0.8250 Epoch 2/100 100/100 - 63s 630ms/step - loss: 0.3976 - acc: 0.8103 - val_loss: 0.2273 - val_acc: 0.9030 Epoch 3/100 100/100 - 63s 631ms/step - loss: 0.3440 - acc: 0.8530 - val_loss: 0.2221 - val_acc: 0.9150 ... ... Epoch 99/100 100/100 - 63s 629ms/step - loss: 0.0243 - acc: 0.9913 - val_loss: 0.2861 - val_acc: 0.9620 Epoch 100/100 100/100 - 63s 629ms/step - loss: 0.0226 - acc: 0.9930 - val_loss: 0.3002 - val_acc: 0.9610 Pre-trained CNN (fine-tuning) with Image Augmentation Performance We can see from the preceding output that our model has obtained a validation accuracy of around 96%, which is a 6% improvement from our previous model. Overall, this model has gained a 24% improvement in validation accuracy from our first basic CNN model. This really shows how useful transfer learning can be. We can see that accuracy values are really excellent here, and although the model looks like it might be slightly overfitting on the training data, we still get great validation accuracy. Let’s save this model to disk now using the following code. model.save('cats_dogs_tlearn_finetune_img_aug_cnn.h5') Let’s now put all our models to the test by actually evaluating their performance on our test dataset. Evaluating our Deep Learning Models on Test Data We will now evaluate the five different models that we built so far, by first testing them on our test dataset, because just validation is not enough! We have also built a nifty utility module called model_evaluation_utils , which we will be using to evaluate the performance of our deep learning models. Let's load up the necessary dependencies and our saved models before getting started. It’s time now for the final test, where we literally test the performance of our models by making predictions on our test dataset. Let’s load up and prepare our test dataset first before we try making predictions. Test dataset shape: (1000, 150, 150, 3) ['dog', 'dog', 'dog', 'dog', 'dog'] [1, 1, 1, 1, 1] Now that we have our scaled dataset ready, let’s evaluate each model by making predictions for all the test images, and then evaluate the model performance by checking how accurate are the predictions. 
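As a sketch of this evaluation step: model_evaluation_utils is the authors' own helper module, so standard scikit-learn metrics are used below as a stand-in, and test_imgs_scaled / test_labels_enc (0 = cat, 1 = dog) are assumed variable names. The saved model file name comes from the text, and the same pattern applies to the other four saved models.

import numpy as np
from keras.models import load_model
from sklearn.metrics import accuracy_score, f1_score

# Load one of the saved models (repeat for each of the five models)
model = load_model('cats_dogs_tlearn_finetune_img_aug_cnn.h5')

# Predicted probabilities -> class labels using a 0.5 threshold
pred_probs = model.predict(test_imgs_scaled, verbose=0).ravel()
predictions = (pred_probs > 0.5).astype(int)

print('Accuracy:', np.round(accuracy_score(test_labels_enc, predictions), 4))
print('F1 Score:', np.round(f1_score(test_labels_enc, predictions), 4))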
Model 1: Basic CNN Performance Model 2: Basic CNN with Image Augmentation Performance Model 3: Transfer Learning — Pre-trained CNN as a Feature Extractor Performance Model 4: Transfer Learning — Pre-trained CNN as a Feature Extractor with Image Augmentation Performance Model 5: Transfer Learning — Pre-trained CNN with Fine-tuning and Image Augmentation Performance We can see that we definitely have some interesting results. Each subsequent model performs better than the previous model, which is expected, since we tried more advanced techniques with each new model. Our worst model is our basic CNN model, with a model accuracy and F1-score of around 78%, and our best model is our fine-tuned model with transfer learning and image augmentation, which gives us a model accuracy and F1-score of 96%, which is really amazing considering we trained our model from our 3,000 image training dataset. Let’s plot the ROC curves of our worst and best models now. ROC curve of our worst vs. best model This should give you a good idea of how much of a difference pre-trained models and transfer learning can make, especially in tackling complex problems when we have constraints like less data. We encourage you to try out similar strategies with your own data! Case Study 2: Multi-Class Fine-grained Image Classification with Large Number of Classes and Less Data Availability Now in this case study, let us level up the game and make the task of image classification even more exciting. We built a simple binary classification model in the previous case study (albeit we used some complex techniques for solving the small data constraint problem!). In this case-study, we will be concentrating toward the task of fine-grained image classification. Unlike usual image classification tasks, fine-grained image classification refers to the task of recognizing different sub-classes within a higher-level class. Main Objective To help understand this task better, we will be focusing our discussion around the Stanford Dogs dataset. This dataset, as the name suggests, contains images of different dog breeds. In this case, the task is to identify each of those dog breeds. Hence, the high-level concept is the dog itself, while the task is to categorize different subconcepts or subclasses — in this case, breeds — correctly. We will be leveraging the dataset available through Kaggle available here. We will only be using the train dataset since it has labeled data. This dataset contains around 10,000 labeled images of 120 different dog breeds. Thus our task is to build a fine-grained 120-class classification model to categorize 120 different dog breeds. Definitely challenging! Loading and Exploring the Dataset Let’s take a look at how our dataset looks like by loading the data and viewing a sample batch of images. Sample dog breed images and labels From the preceding grid, we can see that there is a lot of variation, in terms of resolution, lighting, zoom levels, and so on, available along with the fact that images do not just contain just a single dog but other dogs and surrounding items as well. This is going to be a challenge! Building Datasets Let’s start by looking at how the dataset labels look like to get an idea of what we are dealing with. data_labels = pd.read_csv('labels/labels.csv') target_labels = data_labels['breed'] print(len(set(target_labels))) data_labels.head() ------------------ 120 What we do next is to add in the exact image path for each image present in the disk using the following code. 
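A minimal sketch of that step is shown below. It assumes the Kaggle layout of a train folder holding one <id>.jpg file per image and an id column in labels.csv alongside breed; adjust the folder and extension if your copy of the dataset differs.

import os

# Attach the on-disk file path of every image to its labels.csv row
data_labels['image_path'] = data_labels.apply(
    lambda row: os.path.join(os.getcwd(), 'train', row['id'] + '.jpg'),
    axis=1)
data_labels.head()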
This will help us in easily locating and loading up the images during model training. It’s now time to prepare our train, test and validation datasets. We will leverage the following code to help us build these datasets! Initial Dataset Size: (10222, 299, 299, 3) Initial Train and Test Datasets Size: (7155, 299, 299, 3) (3067, 299, 299, 3) Train and Validation Datasets Size: (6081, 299, 299, 3) (1074, 299, 299, 3) Train, Test and Validation Datasets Size: (6081, 299, 299, 3) (3067, 299, 299, 3) (1074, 299, 299, 3) We also need to convert the text class labels to one-hot encoded labels, else our model will not run. ((6081, 120), (3067, 120), (1074, 120)) Everything looks to be in order. Now, if you remember from the previous case study, image augmentation is a great way to deal with having less data per class. In this case, we have a total of 10222 samples and 120 classes. This means, an average of only 85 images per class! We do this using the ImageDataGenerator utility from keras. Now that we have our data ready, the next step is to actually build our deep learning model! Transfer Learning with Google’s Inception V3 Model Now that our datasets are ready, let’s get started with the modeling process. We already know how to build a deep convolutional network from scratch. We also understand the amount of fine-tuning required to achieve good performance. For this task, we will be utilizing concepts of transfer learning. A pre-trained model is the basic ingredient required to begin with the task of transfer learning. In this case study, we will concentrate on utilizing a pre-trained model as a feature extractor. We know, a deep learning model is basically a stacking of interconnected layers of neurons, with the final one acting as a classifier. This architecture enables deep neural networks to capture different features at different levels in the network. Thus, we can utilize this property to use them as feature extractors. This is made possible by removing the final layer or using the output from the penultimate layer. This output from the penultimate layer is then fed into an additional set of layers, followed by a classification layer. We will be using the Inception V3 Model from Google as our pre-trained model. Based on the previous output, you can clearly see that the Inception V3 model is huge with a lot of layers and parameters. Let’s start training our model now. We train the model using the fit_generator(...) method to leverage the data augmentation prepared in the previous step. We set the batch size to 32 , and train the model for 15 epochs. Epoch 1/15 190/190 - 155s 816ms/step - loss: 4.1095 - acc: 0.2216 - val_loss: 2.6067 - val_acc: 0.5748 Epoch 2/15 190/190 - 159s 836ms/step - loss: 2.1797 - acc: 0.5719 - val_loss: 1.0696 - val_acc: 0.7377 Epoch 3/15 190/190 - 155s 815ms/step - loss: 1.3583 - acc: 0.6814 - val_loss: 0.7742 - val_acc: 0.7888 ... ... Epoch 14/15 190/190 - 156s 823ms/step - loss: 0.6686 - acc: 0.8030 - val_loss: 0.6745 - val_acc: 0.7955 Epoch 15/15 190/190 - 161s 850ms/step - loss: 0.6276 - acc: 0.8194 - val_loss: 0.6579 - val_acc: 0.8144 Performance of our Inception V3 Model (feature extractor) on the Dog Breed Dataset The model achieves a commendable performance of more than 80% accuracy on both train and validation sets within just 15 epochs. The plot on the right-hand side shows how quickly the loss drops and converges to around 0.5 . This is a clear example of how powerful, yet simple, transfer learning can be. 
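For reference, here is a rough sketch of the Inception V3 feature-extractor setup described in this section. The 299 x 299 x 3 input shape, the 120-class softmax output, the batch size of 32, and the 15 epochs come from the text; the pooling layer, dense-layer size, optimizer, and generator names are assumptions rather than the authors' exact choices.

from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import Adam

# Load Inception V3 without its ImageNet classification head
base_inception = InceptionV3(weights='imagenet', include_top=False,
                             input_shape=(299, 299, 3))
# Use the pre-trained network purely as a feature extractor
for layer in base_inception.layers:
    layer.trainable = False

# New classification head on top of the pooled convolutional output
out = GlobalAveragePooling2D()(base_inception.output)
out = Dense(512, activation='relu')(out)
predictions = Dense(120, activation='softmax')(out)  # 120 dog breeds

model = Model(inputs=base_inception.input, outputs=predictions)
model.compile(optimizer=Adam(), loss='categorical_crossentropy',
              metrics=['accuracy'])

# 6,081 training images / batch size 32 gives the ~190 steps per epoch
# seen in the training log above
history = model.fit_generator(train_generator, steps_per_epoch=190,
                              epochs=15, validation_data=val_generator,
                              validation_steps=1074 // 32, verbose=1)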
Evaluating our Deep Learning Model on Test Data Training and validation performance is pretty good, but how about performance on unseen data? Since we already divided our original dataset into three separate portions. The important thing to remember here is that the test dataset has to undergo similar pre-processing as the training dataset. To account for this, we scale the test dataset as well, before feeding it into the function. Accuracy: 0.864 Precision: 0.8783 Recall: 0.864 F1 Score: 0.8591 The model achieves an amazing 86% accuracy as well as F1-score on the test dataset. Given that we just trained for 15 epochs with minimal inputs from our side, transfer learning helped us achieve a pretty decent classifier. We can also check the per-class classification metrics using the following code. We can also visualize model predictions in a visually appealing way using the following code.
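As a sketch of the test-set evaluation and per-class metrics mentioned above (the per-class and prediction-visualization code relies on the authors' model_evaluation_utils module, which isn't reproduced here), scikit-learn's classification_report can serve as a standard alternative. The x_test / y_test_ohe names, the 0-1 scaling, and the sorted breed ordering of the one-hot columns are assumptions.

import numpy as np
from sklearn.metrics import accuracy_score, classification_report

# Apply the same 0-1 pixel scaling assumed for the training data
x_test_scaled = x_test.astype('float32') / 255.

# Multi-class predictions: argmax over the 120 softmax outputs
pred_idx = np.argmax(model.predict(x_test_scaled, verbose=0), axis=1)
true_idx = np.argmax(y_test_ohe, axis=1)

print('Accuracy:', np.round(accuracy_score(true_idx, pred_idx), 4))
# Per-class precision, recall and F1; assumes the one-hot columns follow
# sorted breed order (as pd.get_dummies would produce)
print(classification_report(true_idx, pred_idx,
                            target_names=sorted(target_labels.unique())))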
https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a
['Dipanjan', 'Dj']
2018-11-17 00:15:09.056000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Science', 'Deep Learning', 'Towards Data Science']
How to Access Your Brain’s Sweet Spot
We’ve all been there. Deadlines looming, the to-do list stacked as high as your brand-new stand-up desk, with your kids, parents, and/or spouse hovering in the doorway, barging in on your train of thought, hijacking the few precious moments you set aside to eke out any productivity and meet that critical deadline. Sound familiar? Times are different than they were a year ago. There is no denying that. And wanting things to go “back to normal” isn’t solving the problems: If anything, it is only creating a greater delta and divide between sanity and overwhelm. So, in the midst of crazy, how do we keep from going insane? How do we ensure that we meet deadlines? And not just scratch them off the never-ending list, but strike through them with a proud internal smile? The answer you seek is in the workings of the brain. A few years back, through BeAbove Leadership with Ann Betz and Ursula Pottinger, I learned about the incredible nature of the stress curve. Developed and researched by Yale Professor Amy Arnsten, she likens our prefrontal cortex (the key part responsible for executive functions, such as critical thinking, reasoning, organization, and decision making) to Goldilocks. Just as Goldilocks needed her porridge to be “just right,” our Prefrontal Cortex (PFC) requires everything to be “just right” for our neurons to communicate efficiently, enabling them to function optimally. Essentially, the model states that our productivity is directly correlated to the amount of stress in our lives. Too little stress (and thus a shortage of hormones and chemicals) has us, or our Goldilocks PFC, feeling lethargic, with often hazy or cloudy minds, as though we are groping around in a fog. In those moments, we’d often rather sit down and wait for the haze to clear. On the other end of the bell curve is too much stress (also stated as an abundance of chemicals), which has a similar impact, but can feel like a tornado with a myriad of moving parts. We don’t know which to grab or what to attack first. Comparable to the “too little” scenario, “too much” results in a blurred mental state, without clear perception and seeing. While many leaders feel we need stress to drive us forward, too much undermines our ability to think clearly and see the full picture, to make decisions that positively impact the now and the long-term. That is why leaders must strive to find and live in the “just right” spot of the curve. That’s the sweet spot. I’ve been spending time observing my own bell curve, aware that different times and circumstances also impact my access to that “just right sweet spot.” And while there may be a general “sweet spot,” it’s further affected by how I sleep, how I eat, if I have gotten exercise, and so much more. The good news: With enough awareness, we can make changes that can impact our brain’s chemistry. The awareness that chaos or boredom is just a temporary state of the brain is a critical first step. That awareness, seeing it and observing it, creates just enough distance to offer clarity and insight. At that moment, you are no longer the brain’s chemicals gone awry, but a detached observer. That awareness, also known as mindfulness, is a critical asset. This awareness now puts you in an empowered, action-oriented place: You are no longer held hostage by the chemicals in the brain. Next, you get to do something to create even more space: Take a few breaths. The brain, which is 2% of our body weight, requires at least 20% of our oxygen. 
And according to Harvard studies, it demands 50% of our glucose energy to run. Your brain, which feeds on O2, gets energized and nourished by a few deep hits (more is better, but a few is a phenomenal start). Those breaths also shut down the sympathetic nervous system, largely responsible for that chemical flood. Finally, after those few deep breaths — or better yet, ten minutes outside to physically distance yourself and get perspective — you choose a new action. When it comes to overwhelm, a new perspective and distance can be enough to move forward. At the same time, they enable you to find creative ways to delegate, manage, or set parameters that slow the deluge of chemicals. When you are taken or held hostage by the overwhelm of chemicals, it can feel as though there is not a second, nor space, for a few breaths, let alone ten minutes. However, I promise, the proverbial doorways and new pathways those breaths enable, the opportunities they create, and the shifts they catalyze are far more effective, productive, and fruitful than the alternative. Slogging forward, shutting down, or yelling at the figures looming in your doorway are the least effective approaches. Leadership is not exclusive to the company listed on your LinkedIn profile or your direct reports. Your children, loved ones, neighbors, and friends watch, mimic, and respond to your actions and energy. Leadership is how we choose to live, respond to, learn from, and lead our lives at every and any moment. And as you take those moments to manage and find your “sweet spot,” others will learn to do the same. That is leadership. This article was first published on Forbes.com
https://medium.com/@rachel-tenenbaum/how-to-access-your-brains-sweet-spot-1de7dcc5d754
['Rachel Tenenbaum', 'Cpcc', 'Cntc']
2020-12-24 16:35:30.098000+00:00
['Neuroscience', 'Mindfulness At Work', 'Brain', 'Mindful Leadership', 'Stress Management Tips']
Beginner’s Python Financial Analysis Walk-through — Part 5
Image adapted from https://www.dnaindia.com/personal-finance/report-india-s-personal-wealth-to-grow-13-by-2022-2673182 “Predicting” Future Stock Movements Boy am I glad you made it here! This section covers what I find to be the most exciting part of the entire project! At this point, if you’ve read through parts 1–4 of the project, you understand my steps to evaluate the historical performance of stocks, but the question we are all asking is “How do I predict if a stock will go up and make me rich?” This is probably why you’re here in the first place; You want to know how to choose a stock that has a higher likelihood of yielding greater returns. Let’s see how we can do this using simple moving averages and Bollinger band plots. Let’s make money! Simple Moving Average We begin by understanding a simple moving average (SMA). An SMA is a constantly updated average price for a certain period of time. For example, a 10 day moving average would average the first 10 closing prices for the first data point. The next data point would add the 11th closing price and drop the first day’s price, and take the new average. This process continues on a rolling basis. In effect, a moving average smooths out the day-to-day volatility and better displays the underlying trends in a stock price. A shorter time frame is less smooth, but better represents the source data. There are many ways to plot moving averages, but for simplicity, here I use the cufflinks package to do it for me. # The cufflinks package has useful technical analysis functionality, and we can use .ta_plot(study=’sma’) to create a Simple Moving Averages plot # User input ticker of interest ticker = “NFLX” each_df[ticker][‘Adj Close’].ta_plot(study=’sma’,periods=[10]) Figure 1: A 10-day simple moving average overlaid on NFLX’s closing prices Figure 1 above illustrates a 10-day moving average for Netflix’s (NFLX) closing prices. As evident, the SMA reduces the jagged peaks and valleys of the closing prices and gives better visibility to the underlying trends. In the past 2 years, we can clearly see the stock price trending upwards. Death Cross and Golden Cross Building on this concept, you can compare multiple SMA’s with different time frames. When a shorter-term SMA consistently lies above a longer-term SMA, we can expect the stock price to trend upwards. There are two popular trading patterns that utilize this concept: the death cross and the golden cross. I’ll turn to Investopedia for a definition: “A death cross occurs when the 50-day SMA crosses below the 200-day SMA. This is considered a bearish signal, that further losses are in store. The golden cross occurs when a short-term SMA breaks above a long-term SMA. This can signal further gains are in store.” Source Let’s take a look at overlaying two SMA’s onto Netflix. # User input ticker of interest ticker = “NFLX” start_date = ‘2018–06–01’ end_date = ‘2020–08–01’ each_df[ticker][‘Adj Close’].loc[start_date:end_date].ta_plot(study=’sma’,periods=[50,200]) Figure 2: Example of a death cross followed by a golden cross As we can see from Figure 2, Netflix is an interesting case study. Around August 2019 we see a death cross. I was not following Netflix’s stock price at that time, but it seems the stock had been trending downwards. I’ll leave it as an exercise for you to explain the dip. Soon after, the death cross is followed by a golden cross near February 2020. Ever since the golden cross in February, Netflix’s stock has been on the rise. 
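If you want to reproduce these plots without cufflinks, the same moving averages can be computed directly with pandas and matplotlib, as sketched below (note that the typographic quotes in the snippets above need to be replaced with straight quotes for them to run). The each_df[ticker]['Adj Close'] structure and the date range come from the snippets in this section; everything else is illustrative.

import matplotlib.pyplot as plt

ticker = "NFLX"
prices = each_df[ticker]['Adj Close'].loc['2018-06-01':'2020-08-01']

# Rolling means reproduce the 50-day and 200-day SMAs
sma_50 = prices.rolling(window=50).mean()
sma_200 = prices.rolling(window=200).mean()

# Golden cross: 50-day SMA moves above the 200-day SMA;
# death cross is the opposite move
golden_cross = (sma_50 > sma_200) & (sma_50.shift(1) <= sma_200.shift(1))
death_cross = (sma_50 < sma_200) & (sma_50.shift(1) >= sma_200.shift(1))

ax = prices.plot(figsize=(12, 6), label='Adj Close')
sma_50.plot(ax=ax, label='50-day SMA')
sma_200.plot(ax=ax, label='200-day SMA')
ax.legend()
plt.show()

print('Golden cross dates:', list(prices.index[golden_cross].date))
print('Death cross dates:', list(prices.index[death_cross].date))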
Now we know Coronavirus explains a lot of price movements around that time. As more people stayed at home, more and more people turned to Netflix as the sole source of entertainment in the house. Netflix’s paid user subscription base grew significantly in the months following. That is no surprise. However, what’s interesting to me is that during the major economic earthquake in Feb-Mar. 2020, when the rest of the stock market was plummeting, we don’t see a death cross. Quite opposite, we actually see a golden cross! If we focus on the orange line, we see that Netflix’s stock prices also took a big hit in March 2020, so it’s not that Netflix didn’t feel the impacts of COVID. The takeaway is that the stock dip was due to a one-off event and NOT a trend, as shown by the SMA’s. If in March we had traded solely based on a comparison of 50-day and 200-day SMA’s, we would have reaped all the future gains. Next, I’ll talk about another tool used to make predictions on stock movements. Bollinger Band Plots Bollinger band plots are a technical analysis tool composed of three lines, a simple moving average (SMA) and two bounding lines above and below the average. Most commonly, the bounding bands are +/- 2 standard deviations from a 20-day SMA. One major use case for Bollinger band plots is to help understand undersold vs. oversold stocks. As a stock’s market price moves closer to the upper band, the stock is perceived to be overbought, and as the price moves closer to the lower band, the stock is more oversold. Although not recommended as the sole basis for buy/sell, price movements near the upper and lower bands can signal uncharacteristically high/low prices for stocks. The latter is what I generally look for, as I hope to buy oversold stocks for cheap and let their values rise back towards the moving average. Bollinger bands also allow traders to monitor and take advantage of volatility shifts. As we learned previously in part 4 of this project, the standard deviation of stock prices is a measure of volatility. Therefore, the upper and lower bands expand as the stock price becomes volatile. Conversely, the bands contract as the market calms down; This is called a squeeze. Traders may take squeezes as a potential sign of trading opportunities since squeezes are often followed by increased volatility, although the direction of price movement is unknown. Let’s plot the Bollinger bands before we go deeper so we can take a look at what I’m talking about. The cufflinks package again makes it very simple to plot Bollinger Bands as shown in Figure 3. # User input ticker of interest ticker = "SPY" each_df[ticker]['Close'].ta_plot(study='boll', periods=20,boll_std=2) Figure 3. Bollinger Band Plot for SPY from 2018–2020 During an uptrend, the prices will bounce between the upper band and the moving average. While in this uptrend, the price crossing below the moving average can be a sign of slowing growth or trend reversal. You can see this crossing in Figure 4 below. Here I plotted the Bollinger bands for the SPY for the first 4 months of 2020. See how the orange closing price line dips below the 20-day SMA around February 18. Before this, there was a steady uptrend for the SPY. Afterwards, there was a hefty downtrend. Figure 4. Unusual SPY market prices Feb-Mar 2020 As you can see above, the period of Feb-March 2020 showed a major downtrend in the SPY. It’s unusual to see the SMA breakout below the lower band 4 times within a month. 
With standard deviations, 95% of the values should lie within the +/- 2 standard deviations, so we’re seeing very unusual activity. Of course, this is understandable as peak Coronavirus fears struck America in this time frame. This is a good example of what a strong downtrend looks like. In the following months, we see a strong uptrend. Conclusion In this section, we’ve learned how to use moving averages and Bollinger band plots to take a step back from all the noise in the erratic trading data and focus on the trends. It’s hard to profit day-trading the daily ups and downs, but it’s much simpler to profit if you can identify strong growth trends . Using these techniques, you can also identify trend reversals and buy stocks early in a trend. On a shorter time horizon, you can use Bollinger bands to find undervalued or overvalued stocks. With this analysis, you can hopefully be more informed and make data driven decisions to buy stocks. In the next and last section, we’ll wrap up everything we’ve learned.
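To round out the section, here is a minimal pandas sketch of the 20-day, +/- 2 standard deviation Bollinger bands that the cufflinks call above produces; the each_df["SPY"]['Close'] structure comes from the earlier snippet, and the plotting details are illustrative.

import matplotlib.pyplot as plt

close = each_df["SPY"]['Close']

# 20-day simple moving average and rolling standard deviation
sma_20 = close.rolling(window=20).mean()
std_20 = close.rolling(window=20).std()
upper_band = sma_20 + 2 * std_20
lower_band = sma_20 - 2 * std_20

ax = close.plot(figsize=(12, 6), label='Close')
sma_20.plot(ax=ax, label='20-day SMA')
upper_band.plot(ax=ax, label='Upper band (+2 std)')
lower_band.plot(ax=ax, label='Lower band (-2 std)')
ax.legend()
plt.show()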
https://medium.com/analytics-vidhya/beginners-python-financial-analysis-walk-through-part-5-3777eb708d01
['Keith Chan']
2020-08-31 04:57:59.027000+00:00
['Coding', 'Beginners Guide', 'Financial Analysis', 'Stocks', 'Python']
What Aragorn can teach us about masculinity
What Aragorn can teach us about masculinity A beloved character of most who have watched The Lord of The Rings, who may also hold great insight into how we can develop our masculinity more fully. Edward Marotis Nov 9, 2020·4 min read Masculinity remains a topic of immense contention. In many ways the views on masculinity have become polarised in conjunction with other more political and ideological matters, and this ultimately is to the detriment of us all. The most pervasive point of disagreement is found around the topic of the concept of “toxic masculinity”. On one hand it is considered a valid critique of male behaviours, that aims to ameliorate the damage caused mostly by men, domestic violence, etc. On the other hand, it is criticised as an overall dismissal and disapproval of masculinity, and an attempt to eradicate masculinity. Both perspectives are understandable, and the experience of masculinity can often feel as a choice between being an overly submissive weakling or a dominating brute. This is not only an extremely polarising understanding, but also completely unhelpful. In a recent video on their YouTube channel, Cinema Therapy, Jonathan Decker, licensed therapist, and Alan Seawright, professional filmmaker, discuss the topic of toxic masculinity in an exceedingly constructive and helpful manner. Firstly, they choose to describe it as “limiting masculinity” instead of “toxic”, and continue to outline what can be considered positive masculine traits, such as providing, protecting, being brave, being determined, being ambitious, and more negative traits, such as homophobia, misogyny, dominance through violence, and a struggle to be vulnerable. This is a much more wholesome and helpful way to define the matter at hand, as the core positive traits of masculinity are unequivocally appreciated, and in the specification of the traits that are actually harmful. What is inspiring about this way of looking at it, is that it opens up room for individuality to grow through and along masculinity, instead of being limited by it. Instead of feeling like the only choices of being lie in hyper-masculine brutality and weak passivity, we open the door for each person to define his own specific expression of masculinity, guided by the positive traits. The Decker and Seawright center the video around the character Aragorn from the Lord of The Rings trilogy, as a prime example of healthy masculinity. The reason for choosing Aragorn, is that throughout the films he consistently shows absolute bravery, strength, and determination, while at the same time being supportive, empathetic, and encouraging of others. Critically he even vocalises his own insecurities about whether or not he is up for the tasks demanded of him. This is a truly liberating idea for us all, as this view of masculinity doesn’t entail the suppression or disapproval of any part of our being, but instead promotes the full expression of ourselves. That we can be both courageous and strong as well as at the same time being able to express any emotion that is felt. We can call this “freeing masculinity”. Yet, with this realisation comes a challenge in courage, as the unhelpful, polarised, definitions of masculinity have not occurred without reason. Both versions have fundamental fears they seek to avoid, that are in both cases related to how others will view them. In the weak and passive pole, the main fear is that asserting oneself and being bold will lead to confrontation and being shamed for being evil. 
On the more hyper masculine pole the fear is that expressing emotions of vulnerability will lead to ridicule and being seen as weak. Fear is the foundational reason for the existence of both polarities, and it is exactly this fear we need to face in order to fully align with this new way of masculinity. We need to acknowledge the fears present within us and decide whether or not they are worth losing access to a fuller expression of our potential. We may fear that a person close to us will laugh and ridicule us for being vulnerable, but in that case, we can choose to stand by ourselves regardless. The same goes for the opposite case if our newfound assertiveness causes conflict, as we can still choose to stand by ourselves. The point is that by caring more about our own integrity than we do about the reaction of others, we truly a more whole expression of ourselves and our masculinity. As scary as it may seem to embody our masculinity more fully, we can alleviate some of our fear by understanding that anyone that would ridicule or shame us for our expression, is really just showing their own shame around whatever, we have expressed. Even still we will likely feel fear, and that is okay, because bravery is meaningless unless fear is actually present. It all comes down to truly being ourselves and embracing freeing masculinity no matter what reaction people around us meet it with. Despite all that has been said in criticism of masculinity, I believe it is more needed than ever, yet in a shape that is more wholesome than what we have allowed ourselves so far. We are only as bound by the current limitations of masculinity as we allow ourselves to be. Take from traditional masculinity what truly serves you, let the rest fall away. Make it your own unique expression and live it fully. The world needs it, perhaps more than ever. Make sure to watch the video from Cinema Therapy, it’s great: https://www.youtube.com/watch?v=pv_KAnY5XNQ&t=76s&ab_channel=AmitojGautamAmitojGautam
https://medium.com/@edwardmarotis/what-aragorn-can-teach-us-about-masculinity-10d48fc42b77
['Edward Marotis']
2020-11-09 00:04:19.115000+00:00
['Film', 'Mental Health', 'Masculinity', 'Lord Of The Rings', 'Philosophy']
New Zoom App Feature Concept
Project Overview Due to the COVID-19 pandemic, many people are transitioning to remote communication apps, whether for school, work, or personal reasons. However, it is often difficult for users of these apps to stay engaged during their meetings and conversations. There is also a learning curve to the technology. In this study, we interviewed five different users of remote communication apps to find out how they feel about the apps and what they need to have a better experience. We found that many users rely heavily on these apps on a daily basis, not only for work but also just to keep in touch with people. We found that these users value and miss the human connection. These users all used several different apps for communicating. The most common platforms were Zoom, Slack, and Snapchat. Some interviewees stated that they really liked the screen-share feature on Zoom. Others mentioned that they liked the channels on Slack and the breakout rooms on Zoom because they can connect with people better in smaller groups. We also found that these users often had a negative experience because they did not know how to do something or ran into technical issues. Using the information gathered from our research, we concluded that these users need a way to collaborate and keep each other engaged. We decided that the feature we wanted to add would be social subchannels that raffle users into random rooms when they join so that they can bond with coworkers. The social subchannels would also feature activities to do together. We drafted a low-fidelity and a medium-fidelity prototype of the app with our new feature and conducted two rounds of usability testing with five participants per round. Problem Space Statement Students and working professionals need a way to stay engaged with remote communication apps.
https://medium.com/@quynhnguyendesign/new-zoom-app-feature-concept-ac9cb893af5e
['Quynh Nguyen']
2021-02-19 05:35:28.767000+00:00
['UX Design', 'UX Research']
Airbnb — an investment breakdown of the epic tale of an air mattress company to IPO
The time has finally come. The company formerly known as Airbed and breakfast has now listed on the Nasdaq stock exchange. What is Airbnb? Airbnb is a simple marketplace connecting people who want to rent accommodation with people who have accommodation to rent out. But what made it successful? Airbnb provided a solution to unlock space that wasn’t available on the market before and it was the first company that managed to do so on a big scale and most importantly — monetize it. By providing a monetary incentive for the hosts to list their place, Airbnb quickly expanded the travel market by offering more unique travel options and cheaper alternatives to hotels. My first Airbnb experience was in May 2013 to Kyiv, Ukraine — as you can see right here: Thank you Leona! It was a great trip what happened on the trip deserves an entirely separate Medium article 🙂 Looking into my Airbnb booking history I have made exactly 19 bookings (not counting the places I’ve stayed at that other people booked). That equates to 2.7 Airbnb bookings per year, which is significantly more than I book with any of Airbnb’s competitors. History The origin story is well-known at this point. But worth summarizing: Brian and Joe were design students looking to make an extra buck by renting out air mattresses in their apartment in San Francisco during a design conference. It whetted their appetite and they pulled in their technical cofounder Nathan and they tried their luck at the Democratic National Convention in Denver in 2008. And the next year in 2009 they got admitted to the Silicon Valley startup incubator and investor Y Combinator. The rest is as they say, history. What is perhaps the most interesting, is the biggest reason how they got admitted to Y Combinator. In order to keep themselves and the business afloat - they created breakfast cereals for the US election in ’08 as you can see here ↓ This is obviously highly unusual, creative and super ingenious. By demonstrating this level of creativity they manage to persuade Y Combinator to get admitted. Public Offering But now we are in 2020 and it might seem a bit weird to go public, as we are in the middle of a pandemic and the travel and hospitality industry is currently in meltdown. It is weird. But it’s also an opportunity for Airbnb. Airbnb has, and what I also think was a surprise for the company, rebounded from the dreadful months of March and April. Not to pre-pandemic levels, but very strongly. There are primarily two reasons for this: They slashed costs By letting 1900 employees go, a quarter of all employees. Cutting marketing cost by almost 1 billion USD. The management stopped paying themselves and executives cut their salaries. Airbnb also discontinued or paused the vast majority of their projects and moonshots to focus on their core business. 2. And there was a change in customer behavior Customers started shifting their bookings to long term stays and to nearby places where hotels don’t have a strong footprint — offsetting a big chunk of the lost revenue. Under these circumstances Airbnb managed (rather surprisingly) to make a profit in Q3 of 2020 of 260 Million USD. What does this all mean? It means they have a reduced cost structure and the path to profitability is therefore shorter and more clear. Finances Let’s dig into the numbers. The CAGR for the last 5 years has been about 40%, topping at 80% and slowing down to 31% last year. 
The revenue doubled in two years to close to 5 Billion USD (amazing) but at the same time it has lost more than half of its growth rate — which is worrisome. The cost ballooned in 2019 to 5.3 Billion USD and has been growing faster than revenues — what really sticks out is the General and Administrative cost. According to the S1 this includes the staff costs (i.e. salaries) and the costs of taking the company public — although it is surprising that the cost came in at such high levels, going forward we should see this decrease as there has been major staff cuts and the company has gone public. Competition What is the competition for Airbnb? Does Airbnb have competition? According to the CEO, Airbnb is in a category of one. However, I would beg to differ. Hotels is not a comparable business because they operate in different parts of the value chain. But there are comparables. I would argue that Booking.com is in the same business as Airbnb. The difference is focus — Booking.com is focused on hotels Airbnb is focused on short term accommodation. How does Booking.com stack up against Airbnb? Actually, quite well, but it has some obvious flaws. It’s catching up in the number of short term accommodation hosts, and it has signed up most of the hotels in the world. It has an efficient online marketing machine to bring in customers and it is already three times bigger than Airbnb and is profitable (pre-pandemic). That being said — Airbnb has two major advantages that Booking.com is lacking. → World class design and brand recognition. → Lower fees. Let’s break these down. If you compare the design of Airbnb vs Booking.com, Airbnb has vastly better design, UX and customer journey. Because Booking.com has relentlessly valued A/B-testing of every single part over design principles. Booking.com also applies dark patterns and fake discounts. They ask hosts to raise prices so booking.com can offer steeper discounts. I know this for a fact because I have a friend who rents out on Booking.com. At the same time Booking.com charges hosts a 15% fee whereas Airbnb charges a host 3%. When Airbnb is charging such a low fee it is leaving a lot of money on the table but the tradeoff being it creates a stronger loyalty for hosts (and higher switching costs). Opportunities and Threats Let’s take a look at the opportunities: It has the largest, most global and diverse host network which is also the biggest moat of the company. Airbnb arguably has the most recognizable brand in travel. This is demonstrated by the fact that 91% (!) of traffic is direct. One of very very few internet brands that almost completely has escaped Google and Facebook for traffic generation. This is definitively a moat. Almost 70% customer retention rate. What are the threats? Regulation — An issue that has been a problem for Airbnb in the past. That governments and municipalities regulate or ban Airbnb. This is a real concern and with good reason. As much of the real estate is used for short term rental drives up the price and decreases the supply of apartments available for rent. In Amsterdam you cannot rent out on Airbnb more than 30 days per year as an example. Costs — Despite running an asset light business model the company racked up some serious costs. In 2019 the total costs were 5.3 Billion USD on 4.8 Billion USD revenue. Now the situation looks different after the cost cutting due to the pandemic, but it’s remarkable that they let costs run that high. 
Product market fit — I think we only need to look to Experiences, the second product Airbnb launched. It seems that it hasn’t really taken off, and they haven’t broken out the experience bookings in the S1. This leads me to believe that it is a very small percentage of revenue. Verticalization — Competition can come from anywhere and more likely than not it’s probably going to be a new player that will take a piece of Airbnb’s cake. An ongoing trend to watch out for is verticalization — meaning that a new company will take a bite out of Airbnb’s business and do it better. For example, make a better offering for luxury stays or we can take a look at what Hipcamp is doing for camping. Outlook Where does this leave us in terms of outlook and future growth given what we know now? I would say that Airbnb has the opportunity to become a stable blue chip company because it has the moats, the brand and the user experience. But with the slowing growth one can’t help thinking that a lot of growth of the company has already happened pre-IPO, because they waited so long to go public. Both Booking.com and Expedia were founded in 1996 — in 24 years they have reached 15 and 12 Billion USD respectively pre-pandemic in one of the biggest ecommerce categories. Due to Airbnb’s slowing growth and the fact that Booking.com and Expedia haven’t become bigger companies in the same industry worries me a great deal. However, In the short-term, meaning the latter part of 2021 and ‘22 Airbnb should regain and surpass its pre-pandemic levels, amid a lower cost base, if everything works out with the vaccines. But Airbnb still needs to grow their revenues post-pandemic and I would argue there are three avenues within Airbnb’s control. Increase fees Invent new product lines Take market share from hotel bookings and competitors Airbnb’s fees are still low compared to Booking and other competitors which means that there is a big upside on revenue as a percentage of bookings. One obvious opportunity is that they start going into hotels, which has already slowly started to happen with the acquisition of HotelTonight in 2019. Another product that most other travel sites are offering are promoted listings, i.e. letting hosts pay to show up higher in the search ranking. But the big question is when will it surpass Booking.com in terms of revenue and can it outgrow Booking.com and become the market leader? The big bet that we are making here is that Airbnb can launch new products and be able to scale them. If Airbnb is 1: able to innovate and 2: able to get market product fit with new products I believe it will become the market leader otherwise the company will eventually simmer down to single digit growth in the long term. That being said, does the company still have a decade of growth ahead of it? Yes. Does it have competent management behind the steering wheel? Yes. And what makes the company investable for me in the short to medium term are two things: The 69% customer retention rate. The 91% direct traffic to the website. Those numbers are bonkers and way over the industry average. When Airbnb went public the market cap shot up to an astonishing 83 billion USD. Which is on par with Booking.com at one third of the revenue. This means that on the projected 2020 revenues Airbnb has a revenue multiple of 20x and Booking.com a revenue multiple of 12.5x. If you bought Airbnb after the IPO and paid around 145 dollars per share you paid a revenue multiple premium of 7.5x compared to Booking.com. 
To me, at that level, the valuation has really left earth and the fundamentals of the business are no longer driving the valuation of the business. A fair valuation to me is quite far below 100 dollars per share. We’ll see if the valuation will get there. My suspicion is that Q4 and Q1 will be bad quarters still affected by the pandemic and we’ll see how the market reacts to those reports. If it goes down below 100 per share I will most likely buy. Thanks for reading.
https://medium.com/@john-parker/airbnb-an-investment-breakdown-of-the-epic-tale-of-an-air-mattress-company-to-ipo-8e82fca86896
['John Parker']
2020-12-21 21:43:58.380000+00:00
['Money', 'Airbnb', 'Stocks', 'IPO', 'Invest']
Towards The Apathy Of Those Who Knew
The fallen leaves under the trees, another autumnal step on the cycle of seasons; just a point of a wise periodicity. The daylight covered up the stars, but they’ll reappear; just a snapshot within a wise alternation. Everything is vibrating, nothing rests. Yet, nature never seems tired; an ongoing procedure of self-nourishing. The Law of Rhythm is indisputable. All things rise and fall. Again and again; omnipresent evidence that death doesn’t really exist. Having chosen to live in Reality, my job is to keep my consciousness impenetrable by the excitement of the rise and the disappointment of the fall. My job is to keep in mind the Laws, and not to be enslaved by a permanent desire for what I don’t phenomenally have in every “now”.
https://medium.com/spiritual-tree/towards-the-apathy-of-those-who-knew-ddc1546b156d
['Anthi Psomiadou']
2020-11-30 12:38:30.327000+00:00
['Spirituality', 'Life', 'Nature', 'Self', 'Soul']
Remote team health check
Photo credit: Bart Slaets (source) Inspired by the Spotify "Squad Health Check" (read more here), our team decided to mix up one of our bi-weekly retros and try something different. However, as for most teams, this year has been a little different, and our team has been working remote-first. This meant we needed to get creative to figure out how to run the activity in a meaningful, but still engaging, way. This led me to create a simple Google spreadsheet template where each member of our team has their own tab, so they can fill in their scores without being influenced by the thoughts of others. We ran through each topic in the exercise one by one, and at the end we all circled back to the main aggregate page and discussed the topics that scored the lowest overall. This led us to talk about (and take action on) things that would likely not have come up in a traditional retro. If you're running a remote team and looking for a different way to reflect, I wanted to share my resources with you! Copy the activity spreadsheet template for your team here. See the slides we used to run the health check here.
https://medium.com/@hanneoa/remote-team-health-check-bf79f929de62
[]
2020-12-03 10:57:54.879000+00:00
['Remote Working', 'Retrospectives', 'Remote Team', 'Retro', 'Team']
Nomadic Fanatic is recommending a VPN for travelers — get a great deal
Nomadic Fanatic is a travel YouTube channel with 210k subscribers. If you're from America and you love sightseeing, then it's simply a must, because the places chosen to visit are unique, beautiful, and might hint at where you're going next. The channel follows a guy named Eric who travels the United States with his cat in an RV. It shares the experience and glamour, but also the challenges, of living in an RV, so you might learn something useful. Another useful thing in there is a recommendation to use a VPN, but first, double-check whether you can't score a better deal. What's a better VPN deal? Nomadic Fanatic recommends a decent VPN service; however, you can get one just as good and maybe even save some money. Check out the NordVPN 70% discount, a top-notch Virtual Private Network service that will cost you only $3.49/month on a three-year deal in the current sale. Don't miss this opportunity. Click here to get the NordVPN 70% discount. What does NordVPN offer? This premium VPN service allows you to bypass geographical blocks. Some websites check your IP address, and if it's not from the required country, they block access — frustrating! By using a VPN, you can change your IP address and bypass geographical restrictions with ease and in no time. If you're a traveler, then hotel and flight prices matter to you. That's another thing that may depend on your geographical location. Even when you're shopping online, some countries are shown one price while others get a different one. By changing your server to another country, you can hunt for better discounts and lower prices, a cool way to use a VPN. What is a VPN? A Virtual Private Network is privacy protection software developed to secure your Internet connection. It takes your traffic and reroutes it through the provider's secure servers. Furthermore, NordVPN doesn't keep any logs, so you can be sure that your online activities are private and nobody's business — not even your VPN service provider will have such data.
https://medium.com/@gordonvev/nomadic-fanatic-is-advicing-a-vpn-1d6fc990b7d9
['Gordon Vevec']
2020-04-10 10:25:08.444000+00:00
['Discount', 'Deal', 'Privacy', 'VPN', 'Travel']
WordPress.com VS WordPress.org — Which WordPress To Use
WordPress.com VS WordPress.org — Which WordPress To Use — What is the difference between wordpress.org and wordpress.com? There are two versions of WordPress, and the difference between wordpress.com and wordpress.org matters. WordPress.com is a managed, hosted service provided by wordpress.com, with features that depend on the plan you choose. The whole website is managed by WordPress; you just have to build your site and upload content. It has very limited flexibility, and some paid features unlock more of it. WordPress.org is a downloadable, self-hosted and self-managed version of WordPress. You download and install it yourself. It has more features and is very flexible. Now I am going to discuss the 13 key differences that will help you differentiate the two and choose which WordPress to use. In this fight of WordPress vs WordPress, let's see which one wins. 1. Hosting On wordpress.com you just have to sign up and run. Hosting is provided through WordPress.com's monthly plans, but they are expensive. On the other hand, wordpress.org is self-hosted, so you need to purchase hosting and install WordPress on it yourself. Then you can manage it as you like. It is the cheaper option and the best self-hosted platform for building great websites. 2. Domain WordPress.com provides you with a domain, but it is expensive, while with wordpress.org you can get a cheap domain name of your choice and set it up yourself. 3. Storage WordPress.com has limited storage depending on which plan you choose, while on wordpress.org you can get the storage space you need by exploring different hosting services; the storage limit depends on your hosting provider. You can explore hosting providers like Bluehost, GoDaddy, HostGator, SiteGround, DreamHost, etc. The WordPress storage space limit refers to how much content you can upload and store, and it should match the kind of website you want to build. 4. Maintenance WordPress.com manages your WordPress website for you, so you don't have to worry about maintenance. On the other hand, with wordpress.org you have to keep your website maintained yourself. 5. Money-Making Options On wordpress.com you have very limited money-making options, while on wordpress.org you can make money however you like, since you can place whatever you want on your website. 6. Theme Editing WordPress.com has very limited theme-editing functionality, while with wordpress.org you can edit your theme as you wish, depending on the type of theme you have, whether paid or free, and its layout. WordPress.org themes are provided by various platforms; you can either buy one or go for a free one. Just keep in mind that the theme you choose should be SEO-optimized and mobile-friendly. 7. Plugin Integration WordPress.com has very limited plugin integrations. On the other hand, wordpress.org allows you to install as many plugins as you want and increase the functionality of your website. 8. Security WordPress.com gives your website good security, while with wordpress.org you need a good hosting service to manage your website's security. 9. Backups On wordpress.com you can make backups of your website. The same goes for wordpress.org, but you will need plugins to set up your own backups. 10. Branding On wordpress.com you are only allowed limited branding, while wordpress.org allows unlimited branding. 11. Advertisement WordPress.com allows limited advertising.
On the other hand, wordpress.org allows you to display unlimited advertisements on your website. 12. SEO WordPress.com gives you very limited SEO features, while wordpress.org gives you good SEO features that you can manage yourself. Keep in mind that SEO is crucial for every website, as it plays a major role in bringing traffic to it. 13. Price WordPress.com is more costly, given the plans it offers, while wordpress.org is cheaper and, in my opinion, also worth whatever it may cost. WordPress.org itself is free and open-source. You have to pay for premium extras (premium plugins etc.), a theme, a domain and hosting. The domain can be between $10 and $20 annually. Hosting will cost you between $100 and $1,000 annually. You can also purchase hosting with monthly payments if you do not want to pay annually; monthly it would cost between $10 and $20. When deciding between free WordPress and paid WordPress, choose according to your needs. Why WordPress.org Is The Best WordPress.org gives you unlimited functionality, and any kind of website can be created with this version of WordPress. It has all the necessary features to bring any functionality to a website, with full freedom of access. Most importantly, it is very easy to use since it is a CMS, and you don't need any coding skills to make brilliant websites with it. It has some paid features like premium plugins, but as a beginner you can also go for the free ones; it does not make a lot of difference. Conclusion Now you may be wondering: should I use wordpress.com or wordpress.org? Keeping all these features and factors in mind, choose the WordPress version according to your requirements. I would suggest wordpress.org. Is wordpress.org free? Of course, it's free, open-source software that anyone can start working with. For further information, you can compare two websites for differences, one built on wordpress.com and the other on wordpress.org; you will be able to spot the key factors yourself. For more interesting articles on WordPress, technology, online earning methods etc., explore our website FistFull Technology. We are adding new content daily. You Should Also Read: What is WordPress? Simple Explanation What Is WordPress For — Why Choose WordPress 14 Proven Ways To Make Money On WordPress
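As a footnote to the pricing discussion in point 13 above, here is a tiny Python sketch that totals an illustrative first-year budget for a self-hosted wordpress.org site. The figures are only the rough ranges quoted above; the real cost depends entirely on the host and registrar you pick.

```python
# Illustrative first-year budget for a self-hosted wordpress.org site,
# using the rough ranges quoted above (domain $10-20/yr, hosting $100-1,000/yr).
domain_low, domain_high = 10, 20
hosting_low, hosting_high = 100, 1000
premium_theme = 0  # optional; free themes are fine to start with

low_total = domain_low + hosting_low + premium_theme
high_total = domain_high + hosting_high + premium_theme
print(f"Estimated first-year cost: ${low_total} to ${high_total}")
```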
https://medium.com/@raobilal052/wordpress-com-vs-wordpress-org-which-wordpress-to-use-c8690af818d
['Bilal Rao']
2019-09-03 10:35:41.980000+00:00
['Wordpress Web Development', 'Platform', 'WordPress', 'Difference', 'Website']
2020 Guide to Freelance Writer Salaries
Let’s get the crappy part out of the way. When you’re first starting out as a freelance writer, you may feel a bit discouraged because nobody wants to pay you what you’re worth. You’ll see a ton of short writing jobs out there between $5 to $10, work that any professional might charge $100 or more for. The unfortunate part about this is that this part of the market exists because plenty of people are willing to work for those low amounts. In the beginning, we might think that it’s a rite of passage because we don’t have any professional experience under our belts, and if we could simply get a handful of low-paying gigs out of the way to build our portfolio, it might be okay. On the surface, that’s true, but underneath it all, it de-values what we bring to the table. There’s also another side to it that should be considered. There are some freelance writers in certain economies who can live well on these low-paying gigs. As an example, I once had an author contact me about editing their book. They lived in an African country with a low cost of living. When I did my research on their economy, I realized that what I was charging for the edit would be comparable to nearly a year’s worth of wages for someone living there. It was an impossible situation I couldn’t solve. In the end, I offered to edit a small portion of the manuscript in exchange for this author signing up for my email list. I didn’t want to completely turn her away, but I also knew there was no way the project would be worth my time if I lowered my own income standard to hers. So, how do you get past both these hurdles? The answer lies in exactly what type of clients you’re interested in working with. If you want to build a freelance career that pays you what you’re worth, you need to focus solely on seeking out the clients who can benefit from your skill level and who understand the value a great writer can bring to the table. Are these jobs harder to find? Yes, of course they are, but we should focus on that old adage that tells us to work smarter, not harder. Working smarter means focusing your business on the clients who understand what you have to offer. And once you’ve decided what kind of freelance writer you want to be, you also must figure out what you can reasonably charge your future clients. To get a good idea of the potential of various types of writing services, I’m going to talk about different opportunities and what some successful freelancers are making from that type of work. Some of the writing opportunities I’ll talk about here are client-based while others are based on making sales of writing products you create to sell to the public. But no matter what, you still need to focus on who that ideal audience is. (This is something I’ll talk about at length in future blog posts.) 1 — How much can I make as a freelance editor? To start off, let’s look at the going rates in the publishing industry for professional editors. (You can find this information at the Editorial Freelancers Association (EFA) website. I’ve done the work here to break the hourly rates down to per word rates based on their time estimates.) Keep in mind that these are only guidelines. Some editors charge a bit below these rates and are quite experienced while others charge much more. Let’s break these rates down into real-world examples. I’ll use one of my past projects as a generic example. Let’s say a self-publishing memoir author has come to you with a manuscript of 120,000 words. 
Since memoirs tend to be a bit longer than other types of nonfiction books, this isn’t out of the ordinary. And let’s say — in a perfect world — they would like to hire you as their developmental editor, line editor, copy editor, and proofreader. So, let’s break this down. Developmental Editing — $4,320 Line Editing — $3,240 Copy Editing — $1,440 Proofreading — $1,080 So, at the lower end of these editing rates, the total for this contract would be $10,080. Now, I normally give a bit of a discount when they hire me for all services, so let’s discount it at 15% for a final total of $8,568. If you were lucky enough to land contracts like this once a month, you’d ease into the six-figure freelancer category in twelve months only taking on one client per month. Sounds like a dream, right? Well, to be honest, I don’t get the opportunity to do a lot of these projects, so it’s not as easy to accomplish as you’d hope. So, let’s look at a more realistic pretend project. Let’s say another self-publishing author contacted you, but instead of a memoir, this author needs a novel edited. The manuscript is complete at 70,000 words, which is about average for a full-length novel. Instead of all four services, they only want to hire you for a line edit and copy edit. Line Editing — $1,890 Copy Editing — $840 Total: $2,730 This is much closer to the reality of day-to-day life as a freelance editor. If you were a full-time freelancer, you could easily handle three of these projects per month, totaling up to $8,190 in income per month. That’s only a couple thousand shy of a six-figure income, and that’s not bad at all. However, unless you’ve been working for a while and you’re in high demand, you won’t always be able to command your top rates. I’m a managing editor for an editing guild and projects with a budget of $1,500 or less are much closer to reality. In a situation like that, you’d have to decide if it was worth it to lower your rates, or if you were going to stand strong with the full rate you quoted. The solution for that is to structure your editing business from the start to go after those higher-end projects, and doing so would easily allow you to earn six figures once you got the marketing right. 2—How much can I make as a freelance blogger? From low to high, there really isn’t one figure that gives an accurate representation of the income-earning potential of being a full-time freelance blogger. Some bloggers make absolutely no income from their blogs while some bloggers make six or seven figures a year from the work they put into their blogs. Before we get into the numbers, let’s first talk about ways a blogger earns income only through their blog. Selling services and products you personally deliver Advertising Sponsored blog posts Affiliate marketing Donations Writing and selling a book based on your blog Online courses eBooks Now, let’s talk about the income potential of each of these elements of a freelance blogging business. Services and Products This can be anything that relates back to what your blog is about. For instance, if your blog is inspirational or motivational, you might become a motivational speaker and book paid speaking engagements. As a beginner, you might be able to command a $500 fee for each speaking engagement you do. You’d have to do quite a few of those—at least 8—EVERY MONTH if you wanted to make $50,000 a year from your blog. Ouch. That’s quite a bit of work! 
However, once you build up your brand and social media following, you'll be able to work toward making up to $20,000 per speaking engagement. Advertising Once you build up some traffic, this will be a more ideal income-earning activity for your blog because it's not as time-intensive as the speaking engagement example I used above. Forget Google AdSense. That just clutters your content, and you don't make a livable wage with that. What you want to do is sell advertising space directly to other businesses that have the same target audience as you. You can either set a specific dollar amount for each type of ad (like $50, $500, $1,000, etc.), or you can charge according to the traffic you receive, which has the potential to earn you several thousand dollars per month if your traffic is excellent. Sponsored Blog Posts A sponsored blog post is when another business or entrepreneur pays you to publish a blog on your website. They'll either hire you to write the blog post, or they will supply the content for you. You're going to be able to earn much more money if they hire you to write the post—perhaps about $600 to $1,000 per post if you have really good traffic on your website. You might even be able to charge more if you offer an extensive marketing plan for their sponsored blog post. Affiliate Marketing Using affiliate links and ads is one of the more profitable ways to monetize your blog. One of the affiliate programs I belong to pays out $1,000 for every sale I send to them. Others offer a generous percentage for each sale, sometimes 40% to 50% of the total cost of the program or product. Though the higher commissions might be more challenging to sell because they are attached to higher-ticket items, they can be a great boost to your income if you can sell at least three of them per month. Donations Most blogs that ask for donations aren't really making a livable income from their blog. When they ask for donations, it's generally to help pay for the costs of hosting. So, while donations can help, I wouldn't rely on them for a full-time income as a freelance writer. Book Publishing If you have a well-defined theme or niche for your blog, once you've accumulated enough blog posts (20 to 30), consider how they might all fit together in a full-length book (or eBook) you can publish. Some independent authors I know are making six and seven figures from publishing their books, so if you don't limit your book promotion efforts, I think the income potential is tremendous there. [Want to learn how to make 18 streams of income from your book?] Online Courses Similar to book publishing, if you have a strong theme or niche for your blog, look for opportunities to create a variety of courses based on the topics of your blog. I've seen online courses sell for anywhere between $49 and $5,000—and there are some high-end courses that charge more than that. eBooks The beauty of eBooks is that they don't even have to be close to the same length as a standard book. eBooks I've paid for in the past have been around 20 pages or more. You can add in images and diagrams to make the text easier to read, and that will also take the burden off you to write a lot of content. You can either sell these through your website or upload them to Amazon (or any of the other book retailers). When you combine all of these income opportunities for blogs together, it allows you to earn AT LEAST a six-figure income doing something you love—and you get to help people in the process!
3—How much can I make as a freelance copywriter? To start off this conversation, let's look at some of the rates for beginning freelance copywriters: Blogs—$100+ Case Studies—$300+ White Papers—$200+ per page Short Info Pages—$100+ per web page Long Sales Page—$200+ Short Marketing Emails—$200+ Freelance copywriting is still one of the most lucrative freelancing fields to break into today, and it seems like the demand has been growing as more technologies emerge, algorithms are updated, and new social networking trends pop up regularly. One way you can use your copywriting skills to create a stronger business is through specializing in a specific type of copywriting skill. For instance, how much easier do you think it would be to create a name as the go-to person for short email copy vs. being a generalist who can write all forms of copywriting? When you niche down, you make it much easier for your potential clients to find you, allowing you to build a client base much faster than if you're going after any copywriting project you can get your hands on. However, keep in mind that doesn't mean that you can't offer those other copywriting services. If I were starting a new copywriting business today, I'd first build a foundation on that one thing, then once that starts to grow, I'd extend my offering to other forms of sales copy my clients need written. Using email copy as an easy example again, let's crunch the numbers further: Let's say you land a client that retains you on a six-month contract to write weekly email copy, locking in a per-email rate of $250/email. That's $1,000 of income per month. So, in theory, all you would need to make six figures that year would be to acquire and maintain nine clients who needed weekly email copy written. Again, in theory, let's say it takes one to two hours to write each email once you have the pertinent information and graphics you need. You could easily write two emails per workday, then use the rest of the day for marketing or other business activities. And this is something that holds true for many other freelance jobs—the actual work you're paid to do doesn't take as much time as it does to bring that client in. I'd like to say it's just this easy to build and maintain a freelance business, but it's not. It takes time to make the connections, and you don't always get to keep every single client who hires you. The average client retention rate is around 20%, so you're going to have to continually work on the business to keep working in the business. But don't let the obstacles tell a story you don't want to hear. Once your business starts blossoming, it will be worth all the hard work you've put in and so much more. Start out small today with the resources you currently have, and if you stick to it, you'll eventually end up with something you can be proud of—and something that will more than pay the bills.
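To tie the numbers in this guide together, here is a minimal Python sketch of the editing-project arithmetic from section 1 and the email-retainer math from section 3. The per-word rates are back-calculated from the example totals above (e.g. $4,320 / 120,000 words for a developmental edit), so treat them as illustrative, not as official EFA figures.

```python
# Per-word editing rates back-calculated from the example totals above; illustrative only.
RATES_PER_WORD = {
    "developmental": 0.036,
    "line":          0.027,
    "copy":          0.012,
    "proofreading":  0.009,
}

def editing_quote(word_count, services, discount=0.0):
    """Total fee for the chosen services, with an optional bundle discount."""
    subtotal = sum(word_count * RATES_PER_WORD[s] for s in services)
    return round(subtotal * (1 - discount), 2)

# 120,000-word memoir, all four services, 15% bundle discount -> 8568.0
print(editing_quote(120_000, RATES_PER_WORD, discount=0.15))
# 70,000-word novel, line edit + copy edit only -> 2730.0
print(editing_quote(70_000, ["line", "copy"]))

# Email-retainer math from the copywriting section: $250 per email,
# one email a week (roughly four a month), nine retained clients.
monthly_income = 250 * 4 * 9
print(f"~${monthly_income:,}/month, ~${monthly_income * 12:,}/year")  # roughly six figures
```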
https://medium.com/swlh/2020-guide-to-freelance-writer-salary-potential-402dbf860971
['Tina Morlock']
2020-10-11 05:22:59.912000+00:00
['Freelancing', 'Blog', 'Startup', 'Entrepreneurship', 'Writing']
Farming While Black with Leah Penniman
In 1920, 14 percent of all land-owning U.S. farmers were black. Today less than 2 percent of farms are controlled by black people: a loss of over 14 million acres and the result of discrimination and dispossession. While farm management is among the whitest of professions, farm labor is predominantly brown and exploited, and people of color disproportionately live in “food apartheid” neighborhoods and suffer from diet-related illness. The system is built on stolen land and stolen labor and needs a redesign. Farming While Black is the first comprehensive “how to” guide for aspiring African-heritage growers to reclaim their dignity as agriculturists and for all farmers to understand the distinct, technical contributions of African-heritage people to sustainable agriculture. At Soul Fire Farm, author Leah Penniman co-created the Black and Latinx Farmers Immersion (BLFI) program as a container for new farmers to share growing skills in a culturally relevant and supportive environment led by people of color. Farming While Black organizes and expands upon the curriculum of the BLFI to provide readers with a concise guide to all aspects of small-scale farming, from business planning to preserving the harvest. Following the publication of the book, which is excerpted below, Leah Penniman spoke about her experiences at a Bioneers Conference. The following excerpt is adapted from the introduction of Leah Penniman’s book Farming While Black: Soul Fire Farm’s Practical Guide to Liberation on the Land (Chelsea Green Publishing, November 2018) and is reprinted with permission from the publisher. As a young person, and one of three mixed-race Black children raised in the rural North mostly by our white father, I found it very difficult to understand who I was. Some of the children in our conservative, almost all-white public school taunted, bullied, and assaulted us, and I was confused and terrified by their malice. But while school was often terrifying, I found solace in the forest. When human beings were too much to bear, the earth consistently held firm under my feet and the solid, sticky trunk of the majestic white pine offered me something stable to grasp. I imagined that I was alone in identifying with Earth as Sacred Mother, having no idea that my African ancestors were transmitting their cosmology to me, whispering across time, “Hold on daughter — we won’t let you fall.” I never imagined that I would become a farmer. In my teenage years, as my race consciousness evolved, I got the message loud and clear that Black activists were concerned with gun violence, housing discrimination, and education reform, while white folks were concerned with organic farming and environmental conservation. I felt that I had to choose between “my people” and the Earth, that my dual loyalties were pulling me apart and negating my inherent right to belong. Fortunately, my ancestors had other plans. I passed by a flyer advertising a summer job at The Food Project, in Boston, Massachusetts, that promised applicants the opportunity to grow food and serve the urban community. I was blessed to be accepted into the program, and from the first day, when the scent of freshly harvested cilantro nestled into my finger creases and dirty sweat stung my eyes, I was hooked on farming. Something profound and magical happened to me as I learned to plant, tend, and harvest, and later to prepare and serve that produce in Boston’s toughest neighborhoods. 
I found an anchor in the elegant simplicity of working the earth and sharing her bounty. What I was doing was good, right, and unconfused. Shoulder-to-shoulder with my peers of all hues, feet planted firmly in the earth, stewarding life-giving crops for Black community — I was home. As it turned out, The Food Project was relatively unique in terms of integrating a land ethic and a social justice mission. From there I went on to learn and work at several other rural farms across the Northeast. While I cherished the agricultural expertise imparted by my mentors, I was also keenly aware that I was immersed in a white-dominated landscape. At organic agriculture conferences, all of the speakers were white, all of the technical books sold were authored by white people, and conversations about equity were considered irrelevant. I thought that organic farming was invented by white people and worried that my ancestors who fought and died to break away from the land would roll over in their graves to see me stooping. I struggled with the feeling that a life on land would be a betrayal of my people. I could not have been more wrong. At the annual gathering of the Northeast Organic Farming Association, I decided to ask the handful of people of color at the event to gather for a conversation, known as a caucus. In that conversation I learned that my struggles as a Black farmer in a white-dominated agricultural community were not unique, and we decided to create another conference to bring together Black and Brown farmers and urban gardeners. In 2010 the National Black Farmers and Urban Gardeners Conference (BUGS), which continues to meet annually, was convened by Karen Washington. Over 500 aspiring and veteran Black farmers gathered for knowledge exchange and for affirmation of our belonging to the sustainable food movement. Through BUGS and my growing network of Black farmers, I began to see how miseducated I had been regarding sustainable agriculture. I learned that "organic farming" was an African-indigenous system developed over millennia and first revived in the United States by a Black farmer, Dr. George Washington Carver, of Tuskegee University in the early 1900s. Dr. Booker T. Whatley, another Tuskegee professor, was one of the inventors of community-supported agriculture (CSA), and community land trusts were first started in 1969 by Black farmers, with the New Communities movement leading the way in Georgia. Learning this, I realized that during all those years of seeing images of only white people as the stewards of the land, only white people as organic farmers, only white people in conversations about sustainability, the only consistent story I'd seen or been told about Black people and the land was about slavery and sharecropping, about coercion and brutality and misery and sorrow. And yet here was an entire history, blooming into our present, in which Black people's expertise and love of the land and one another was evident. When we as Black people are bombarded with messages that our only place of belonging on land is as slaves, performing dangerous and backbreaking menial labor, to learn of our true and noble history as farmers and ecological stewards is deeply healing. Fortified by a more accurate picture of my people's belonging on land, I knew I was ready to create a mission-driven farm centering on the needs of the Black community.
At the time, I was living with my Jewish husband, Jonah, and our two young children, Neshima and Emet, in the South End of Albany, New York, a neighborhood classified as a "food desert" by the federal government. On a personal level this meant that despite our deep commitment to feeding our young children fresh food and despite our extensive farming skills, structural barriers to accessing good food stood in our way. The corner store specialized in Doritos and Coke. We would have needed a car or taxi to get to the nearest grocery store, which served up artificially inflated prices and wrinkled vegetables. There were no available lots where we could garden. Desperate, we signed up for a CSA share, and walked 2.2 miles to the pickup point with the newborn in the backpack and the toddler in the stroller. We paid more than we could afford for these vegetables and literally had to pile them on top of the resting toddler for the long walk back to our apartment. When our South End neighbors learned that Jonah and I both had many years of experience working on farms, from Many Hands Organic Farm, in Barre, Massachusetts, to Live Power Farm, in Covelo, California, they began to ask whether we planned to start a farm to feed this community. At first we hesitated. I was a full-time public school science teacher, Jonah had his natural building business, and we were parenting two young children. But we were firmly rooted in our love for our people and for the land, and this passion for justice won out. We cobbled together our modest savings, loans from friends and family, and 40 percent of my teaching salary every year in order to capitalize the project. The land that chose us was relatively affordable, just over $2,000 an acre, but the necessary investments in electricity, septic, water, and dwelling spaces tripled that cost. With the tireless support of hundreds of volunteers, and after four years of building infrastructure and soil, we opened Soul Fire Farm, a project committed to ending racism and injustice in the food system, providing life-giving food to people living in food deserts, and transferring skills and knowledge to the next generation of farmer-activists. Our first order of business was feeding our community back in the South End of Albany. While the government labels this neighborhood a food desert, I prefer the term food apartheid, because it makes clear that we have a human-created system of segregation that relegates certain groups to food opulence and prevents others from accessing life-giving nourishment. About 24 million Americans live under food apartheid, in which it's difficult to impossible to access affordable, healthy food. This trend is not race-neutral. White neighborhoods have an average of four times as many supermarkets as predominantly Black communities. This lack of access to nutritious food has dire consequences for our communities. Incidences of diabetes, obesity, and heart disease are on the rise in all populations, but the greatest increases have occurred among people of color, especially African Americans and Native Americans. Farming While Black is a reverently compiled manual for African-heritage people ready to reclaim our rightful place of dignified agency in the food system. To farm while Black is an act of defiance against white supremacy and a means to honor the agricultural ingenuity of our ancestors.
As Toni Morrison is reported to have said, "If there's a book you really want to read, but it hasn't been written yet, then you must write it." Farming While Black is the book I needed someone to write for me when I was a teen who incorrectly believed that choosing a life on land would be a betrayal of my ancestors and of my Black community. Leah Penniman is a Black Kreyol farmer and the 2019 recipient of the James Beard Foundation Leadership Award. She currently serves as founding co-executive director of Soul Fire Farm in Grafton, New York, a people-of-color-led project that works to dismantle racism in the food system. She is the author of Farming While Black (Chelsea Green Publishing, November 2018). Find out more about Leah's work at www.soulfirefarm.org and follow her @soulfirefarm on Facebook, Twitter and Instagram.
https://medium.com/bioneers/farming-while-black-with-leah-penniman-6e6e0bf3d195
[]
2021-02-05 18:37:53.981000+00:00
['Chelsea Green', 'Farming While Black', 'Farmworkers', 'Food And Farming', 'Civil Rights']
9 Things I Want from Season 2 of The Mandalorian and 1 Thing I Don’t
The first season of The Mandalorian was a triumph and an absolute treat. It proved that not only does live action Star Wars work on the small screen, it belongs there. Naturally, such a rousing start demands an encore. Here’s a list of things I’d love to see in season two, and something I never want to see again. This is the way. Give the kid a name Enough of this ‘The Child’ nonsense. We only call it Baby Yoda because there is literally no other option. And because it’s fracking cute. Now that Mando has officially taken guardianship of the kid, hopefully he’ll give it a name. Might I suggest Yando? Expect a name if for no other reason than marketing – it seems Disney wasn’t prepared for how much we’d all be taken by the little bugger. We don’t serve their kind here One of the subplots in season one is Mando’s prejudice against droids, a phobia that’s not unwarranted as we discover via flashbacks. The subplot pays off in a nice arc as Mando comes to trust the IG droid, and then grieve for it. That was just one specific droid though. I fully expect to see the curmudgeonly Mando we first glimpsed in the pilot episode, insisting on a landspeeder piloted by a person instead of one driven by a droid. The galaxy far, far away is lousy with droids – that’s part of the charm. Sassy droids and prissy droids and belligerent droids. I want to see crusty, I-hate-droids Mando grudgingly putting up with all those metal underlings simply because he doesn’t have a choice. What a piece of junk Mando’s Razor Crest is basically the coolest ship I’ve ever seen, a wicked blend of the Millennium Falcon’s worn ruggedness with a form factor similar to Serenity (from the show Firefly), and then(!) tricked out in chrome. So shiny, so chrome. And on top of that, it’s got nifty hidden panels and an armory and plenty of cargo space. I would be totally fine if season two was just an intergalactic road trip where Mando snacked on space food while Baby Yoda played with that chrome ball. I can bring you in warm or I can bring you in cold Much of the first season was occupied with a galactic game of keep-away, with Baby Yoda as the prize. The hunter became the hunted. For season two, I really want to see Mando return to his primary occupation: namely, making Carbonite popsicles out of marks. Besides the inherent coolness of seeing a master at work, the bounty trade gives us a rare glimpse into Star Wars’ seedy underbelly. I’ve had it with Skywalkers. Gimme gangsters and scoundrels from now on. There was a lot of bounty lore hinted at in season one – how exactly do the chits and trackers work? I just want to marinate in these murky waters. And who knows – maybe Mando can utilize the kid’s abilities in tracking quarry. Just because Baby Yoda can use the Force doesn’t mean it needs to be altruistic. Fallen Empire One of my favorite things about this show is its approach to world building. Very few things are explained outright. Instead the audience is given clues and left to sort out the answers for themselves. And sometimes, as in the case of the Empire, not even that. We are left to wonder. This is a good thing. The show isn’t called ‘Empire: Faded Glory’, so I don’t need an explanation of the how’s and why’s of the Empire’s present state. Suffice it to say that though beaten, a remnant remains. These scraps are super intriguing. The Empire is still a threat to be reckoned with, but in many ways desperation has given them a sharper edge than they had at the height of their power. They are dangerous. 
I will absolutely miss Werner Herzog, who gave the Imperials just the right amount of cultured fascism. The darksaber-wielding dude is a more hands-on antagonist, but less interesting to me personally. I would like to see more about the Imperial labs. What exactly were they hoping to do with that lil baby? Classic era aliens getting their due Season one saw two of the galaxy's diminutive races – Jawa and Ugnaught – feature prominently. It confirmed that Jawas are basically space raccoons. But previously the Ugnaughts were almost antagonistic: in Empire Strikes Back, they disassembled C-3PO and tried to smelt the parts, and aided in Darth Vader's experiments in flash freezing. The Mandalorian cast the race in a new, favorable light. I'd like to see more races given such treatment. Since the show plays in the classic era, I'd love to see the Rodians get their due. Far too long have they lived in the shadow of Greedo's disgrace. Greedo, in case you forgot how dumb he looks. Image: Lucasfilm Baby Yoda Obviously. I really hope the show steers away from the sort of black and white morality that typically accompanies the Force. Part of the thrill of The Mandalorian is how it lurks in the gray areas. Thrives, even. How does it affect the kid if it grows up in this nebulous world where might makes right? Baby Yoda is adorable, but hanging out in the shadows absolutely should change it somehow. And though it's probably too soon, I wouldn't mind seeing a Groot-esque grumpy teenager phase, complete with Force-powered tantrums. Mandalorian lore The Clone Wars cartoon series peeled away some of the mystique surrounding the Mandalorians, insofar as it expanded the group of badass warriors beyond the notorious Fetts (though whether or not they were actual Mandalorians and not just posers wearing the sweet ensemble is up for some debate). The eponymous live action TV show has already put some real story stakes into the ground while hinting at a whole lot more. We've learned that Mandalorians aren't born but found (which allows one tiny green baby to join their ranks). That they earn bits of their armament over time, after proving themselves worthy. That they abide by a specific credo above all else. This next season I'd love to see a bit more about why the Mandalorians need to hide. And I really need a Baby Yoda training montage. Bigger fish The Sarlacc. The Rancor. That giant space slug camped out inside the asteroids. Qui-Gon's big fish, bigger fish. Star Wars has always included monstrous creatures. Let's hope season two continues this fine tradition. I don't ever want to see Mando's face again One of the first season's only missteps was taking off Mando's helmet. I would've preferred never seeing his face. A helmet naturally invites curiosity, and places the unmasked at something of a disadvantage. It sets the masked person apart, putting a physical barrier between them and the world, hiding reactions and the most recognizable part of a person. We therefore want to bring those shields down. The writers knew all of this, and heightened our sense of curiosity by making the mask part of the Mandalorian credo, wherein the mask becomes the face. The midseason romantic subplot addressed this head-on. Nobody should ever see a Mandalorian's face. That should include us. My worry is how much more easily the helmet will come off again, now that the show has already broken its own rule.
https://medium.com/fan-fare/9-things-i-want-from-season-2-of-the-mandalorian-and-1-thing-i-dont-3f6ef9b8136b
['Eric Pierce']
2020-08-10 15:41:39.859000+00:00
['Star Wars', 'Television', 'Film', 'Movies', 'Culture']
Pokemon Go & It’s Lifeline kubernetes
Pokemon Go was developed and published by Niantic Inc., and grew to 500+ million downloads and 20+ million daily active users, breaking all records at launch. Traffic chart at the servers. Pokemon Go's engineers never thought their user base would grow exponentially enough to surpass expectations in such a short time. They were not ready for it, and the servers couldn't handle that much traffic. They also faced a severe challenge with vertical and horizontal scaling because of the real-time activity of millions of users worldwide, and Niantic was not prepared for this. Luke Stone said that the original pre-launch estimate for Pokémon GO was exceeded within the first hours after it went public, starting with Australia and New Zealand. As you can see in the traffic graph, they expected 1x player traffic with a worst-case scenario of 5x, but due to the game's popularity and hype, player traffic surged to 50x, ten times the worst-case scenario, and as a result the servers were unavailable for short periods. The solution lay in the magic of containers. The application logic for the game ran on Google Container Engine (GKE), powered by the open source Kubernetes project. Niantic chose GKE for its ability to orchestrate their container cluster at planetary scale, freeing its team to focus on deploying live changes for their players. In this way, Niantic used Google Cloud to turn Pokémon GO into a service for millions of players, continuously adapting and improving. This gave them more time to concentrate on building the game's application logic and new features rather than worrying about scaling. Even for Google, the launch of Pokémon GO was an experiment on unknown terrain. Never had an app mobilized so many users in such a short time; Google had to update some core elements of GKE to guarantee a sufficient on-demand provisioning rate for Kubernetes containers. Pokémon GO was the largest Kubernetes deployment on Google Container Engine ever. Due to the scale of the cluster and the accompanying throughput, a multitude of bugs were identified, fixed and merged into the open source project. To support Pokémon GO's massive player base, Google provisioned many tens of thousands of cores for Niantic's Container Engine cluster. All these upgrades and engineering efforts paid off when the game launched without any problems in Japan, where the user base was triple that of the US. Thanks for reading!
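To give a feel for the gap Niantic had to close, here is a small Python sketch of the capacity-planning arithmetic implied by the launch numbers above (1x estimated, 5x worst case, roughly 50x actual). The baseline pod count and per-pod capacity are invented for illustration and have nothing to do with Niantic's real GKE configuration; only the traffic ratios come from the article.

```python
import math

# Capacity-planning arithmetic implied by the launch story above.
# BASELINE_PODS and USERS_PER_POD are hypothetical; only the 1x/5x/50x ratios are from the article.
BASELINE_PODS = 100       # hypothetical fleet sized for 1x traffic
USERS_PER_POD = 2_000     # hypothetical capacity of one game-backend pod

def pods_needed(traffic_multiplier: float) -> int:
    """Scale the baseline fleet linearly with the observed traffic multiplier."""
    return math.ceil(BASELINE_PODS * traffic_multiplier)

for label, multiplier in [("estimated", 1), ("worst case", 5), ("actual launch", 50)]:
    pods = pods_needed(multiplier)
    print(f"{label:>13}: {multiplier:>2}x traffic -> {pods:>5} pods "
          f"(~{pods * USERS_PER_POD:,} concurrent users)")
```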
https://medium.com/@kotgireshreyash/pokemon-go-its-lifeline-kubernetes-4971d81d3af4
['Shreyash Kotgire']
2020-12-26 18:19:41.919000+00:00
['Arth', 'Kubernetes', 'Google', 'Linuxworld', 'Microservices']
Argümanlarımız Gerçekte Ne Hakkında? (What Are Our Arguments Really About?)
https://medium.com/t%C3%BCrkiye/arg%C3%BCmanlar%C4%B1m%C4%B1z-ger%C3%A7ekte-ne-hakk%C4%B1nda-ca7720cd2e75
['Hüseyin Güzel']
2020-12-27 20:47:43.422000+00:00
['Hakikat', 'Baglanti Kurmak', 'Dan Pedersen', 'Türkçe', 'Haklı Olmak']
Republican Stalwart Recognizes Trump’s Emergency Wall Declaration For What It Is: An Attack On Congress And The Constitution
Sen. Thom Tillis (R) NC Republican Stalwart Recognizes Trump’s Emergency Wall Declaration For What It Is: An Attack On Congress And The Constitution But Will That Spur Other Republicans To Action? Update: Senator Tillis, who — as we wrote about below — sparked the defiance among non-”usual suspect” Republicans, changed his mind at the last minute and voted to support Trump. Although he pledged flat out a couple of weeks ago that he would: “Vote in favor of the resolution disapproving of the president’s national-emergency declaration, if and when it comes before the Senate.” He didn’t. More than likely because he is up for reelection next year. And apparently decided public rejection of the President’s will is too politically risky. The Washington Post’s Robert Costa reported Republican operatives were indeed poised to draft primary challengers to Tillis, had he voted as he initially intended. The House today is expected to vote on a “resolution of disapproval” for the President’s declaration of a national emergency to build his wall. Here’s that resolution in its entirety: In the House, the resolution has only one Republican co-sponsor, Michigan Rep. Justin Amash. Despite being quite Right-wing, he’s also reliably anti-Trump. He sums things up quite well in a Tweet: The measure should pass easily in the Democrat-controlled House. The Senate is another story altogether: but one where the storyline might be changing, or at least bending a little. That’s almost entirely because of North Carolina Republican Thom Tillis’ opinion piece in the Washington Post Monday. While Tillis spends a good chunk of space at the top of his piece trashing Democrats, and saying he supports the wall, he eventually comes to this: “I am a member of the Senate, and I have grave concerns when our institution looks the other way at the expense of weakening Congress’s power. It is my responsibility…to preserve the separation of powers and to curb the kind of executive overreach that Congress has allowed to fester for the better part of the past century. I stood by that principle during the Obama administration, and I stand by it now.” Exactly. Before you start totally loving him, not so fast. He goes on to say: “As a U.S. senator, I cannot justify providing the executive with more ways to bypass Congress. As a conservative, I cannot endorse a precedent that I know future left-wing presidents will exploit to advance radical policies that will erode economic and individual freedoms.” Tillis ends by flatly saying he will: “Vote in favor of the resolution disapproving of the president’s national-emergency declaration, if and when it comes before the Senate.” Look, we can have differences of opinion on whether the wall is crucial and necessary, or an ego-driven monument to the vainest President in U.S. history. (You can probably tell from that sentence which side of the argument we’re on.) Where there should be no dispute is in seeing Trump’s declaration as a completely subversive move, intended to cast Congress not as an equal branch of government, but subservient to the President. Tillis does have a reputation for willingness to work on bipartisan projects: he even co-sponsored a bill with very liberal Democrat Elizabeth Warren, to protect veterans from predatory lenders. And he perhaps most-famously co-sponsored a bipartisan bill protecting Special Counsel Robert Mueller, which never made it to the Senate floor because Senate Majority Leader Mitch McConnell blocked it. 
At the same time, Tillis has strongly supported Trump on health care, taxes, and many other issues. So he’s decidedly not one of the “usual suspects” among Republicans in the Senate who oppose, or at least question Trump’s initiatives with some frequency (and they mostly seem to be on board with the resolution, but of course we won’t know until they actually vote). And McConnell can’t block a vote on the “resolution of disapproval”, he can only delay it for a couple of weeks. If it passes both the House and Senate, Trump says he’ll “100% veto” it, which would then mean Congress would have to find the votes for an override, which requires a 2/3rds majority in both the House and Senate, so it’d be much more difficult of an undertaking. For that reason, many Senators who aren’t blind to the perils created by Trump with his “non-emergency emergency” declaration, still won’t endorse the “resolution of disapproval”, because they figure “why get on the President’s bad side for nothing?” Although Trump apparently sees the possibility they might as enough of a threat that he Tweeted what could be seen as a threat at Senate Republicans, just before departing for his Vietnam meeting with Kim Jong-un. Still other Republicans say the real issue is the fact that Congress has given the President too much leeway regarding use of emergency powers over the years, and that’s where the legislative efforts should be addressed, not on a one-off resolution. Which is just fine. Except right now, they’re using that argument to hide behind, while doing nothing, and letting the President run roughshod over the separation of powers guaranteed by the Constitution. We can see how the President’s move might be seen as tough, even heroic, by hardcore Trump supporters. But for Republicans in Congress to see it as anything other than an affront, if not an attack is astounding. Maybe that’ll change. Doubt it. One final thing: Republicans we know continually like to remind us these days that Presidents have declared 59 national emergencies. Their point being it’s really nothing special. Except it is. Of those 59 emergency declarations, only two involved military construction: The first President Bush during the Gulf War, and the second President Bush after 9/11. Those were both done for expediency, not because the President was upset Congress didn’t give him what he wanted. And the number of times a President declared a national emergency to take money just because Congress wouldn’t give it? Until now, zero.
https://ericjscholl.medium.com/republican-stalwart-recognizes-trumps-emergency-wall-declaration-for-what-it-is-an-attack-on-4b521975b9f9
['Eric J Scholl']
2019-03-17 05:23:26.973000+00:00
['Government', 'Politics', 'Security', 'Congress', 'Donald Trump']
The Rose Bowl Leaves California
Some time ago, we noted that many Californians were leaving the state to move to Texas. Now, even that most-California of sporting events, The Rose Bowl, is moving to Dallas (Arlington). When asked why they were moving, the Rose Bowl responded, “Don’t be that guy, the last guy to get the memo. We are going to Texas because we want to be free!” Rose Bowl Game Alabama v Notre Dame 4:00 PM, Friday, 1 January 2021 AT & T Stadium Arlington, Texas See you there. But, hey, what the Hell do I really know anyway? I’m just a Big Red Car. Merry Christmas, Happy New Year!
https://medium.com/@jminch/the-rose-bowl-leaves-california-92302c0a1dbf
['Jeffrey L Minch']
2020-12-27 01:12:19.146000+00:00
['California', 'Rosebowl']
This I Promise You Nsync Lyrics and Chords
Intro C G Am F C G F G Am When the visions around you F G Bring tears to your eyes G Am And all that surrounds you F G Are secrets and lies . Dm I’ll be your strength G I’ll give you hope C CMaj7 Am Keeping your faith when it’s gone Dm The one you should call Fm G Was standing there all along . Chorus C G And I will take you in my arms Am F And hold you right where you belong C G Til’ the day my life is through F This I promise you C This I promise you . G Am I’ve loved you forever F G In lifetimes before G Am And I promise you never F G Will you hurt anymore Dm I give you my word G I give you my heart C CMaj7 Am This is a battle we’ve won F And with this vow Fm G Forever has now begun . Chorus C Just close your eyes G Each loving day Am F I know this feeling won’t go away (no) C G Till the day my life is through F This I promise you.. C This I promise you.. . Dm G Over and over I thought Dm G When I hear you call C G F Without you.. in my life, baby Dm G I just wouldn’t be living at all . Chorus Overtone D A And I will take you in my arms Bm G And hold you right where you belong D A Til’ the day my life is through G This I promise you . D Just close your eyes A Each loving day Bm G I know this feeling won’t go away (no) D A Every word I say is true G This I promise you . D A Every word I say is true G This I promise you D Ooh, I promise you . Outro D
https://medium.com/@adisonpaul598/this-i-promise-you-nsync-lyrics-and-chords-680e41b66162
['Paul Adison']
2020-11-26 14:23:34.593000+00:00
['News', 'Life', 'CEO', 'Startup', 'Technology']
The TezEdge node — A deep dive into the mempool, part 2
The blockchain sandbox The blockchain is a high-stakes environment. Once smart contracts are deployed on the live network, there is no turning back, and faulty code or errors may cause enormous financial damage or other serious real-world consequences. It is useful for developers to work in an environment that they can fully control. One in which they can set the parameters, create funds for testing purposes and try out the features of the node's various modules, for example the mempool. For this reason, we are creating the Tezos sandbox, an offline tool that simulates the Tezos blockchain. This allows you to safely develop, test and deploy your smart contracts without needing to connect to the Tezos network, or even the internet itself. We made use of the sandbox while developing the TezEdge node's mempool. When an operation is injected into the node's mempool, there are two possible points of origin: From other nodes in the network Via RPCs from the node itself Since there currently is no client for sending custom messages to the node via the P2P network, we make use of remote procedure calls (RPCs) that allow us to inject operations that are created locally into the node's mempool. Using CI tests to demonstrate operation injection via RPCs The objective here is to demonstrate that operations are injected into the TezEdge node's mempool via RPCs and are then broadcasted to other nodes (including OCaml nodes) across the Tezos network. We utilize continuous integration (CI) tests as proof of this mechanism. CI is a practice in software development used to continuously check the quality of new changes made to a project. CI tests ensure the changes proposed by each pull request will not cause errors or otherwise endanger the software. By using CI, we can easily track the exact moment where development goes wrong, meaning that we can quickly find out which pull request contained the faulty code and thus avoid merging it with the main branch. Testing operation injection into the mempool and broadcasting between nodes When we run the sandbox, the genesis block already exists, which means we now have to activate the protocol. The Tezos client creates the first block and injects it into the Rust node, where it activates the protocol. From there, the block is broadcasted to the OCaml node where it also activates the protocol. Once the protocol is activated, we test the injection of operations. Here you can see the aforementioned CI tests: https://github.com/simplestaking/tezedge/blob/master/.drone.yml#L99 This test is similar to the one we described in our previous article. The difference is that now we can demonstrate the injection of the first block and an operation into the TezEdge node. 1. First, we run two nodes: the TezEdge node (tezedge-node-sandbox-run) and the OCaml node (ocaml-node-sandbox-run); both run in sandbox mode. This is done in the first four steps in the CI pipeline. After each run step there is a so-called wait-for step which ensures that the pipeline is held until each node has started successfully. 2. Using the Tezos-admin-client, we create a connection between the two nodes. You can see this in the connect-ocaml-and-rust step. 3. In the next step, we prepare the tezos-client. This means including the accounts used in the protocol activation and the transfer operation. Then, using the Tezos-client, we activate a protocol inside the TezEdge node, thus creating the first block.
This is a distinct block that contains a block header and the field “content” in which there are subfields such as “command”, “hash”, “fitness” and “protocol_parameters”. In step wait-for-sync-on-level-1 we wait until the OCaml node synchronizes with the TezEdge node which has the newly injected block on level 1. 4. The next step is a check to ensure that both nodes have an empty mempool. We call each node with the pending_operations RPC and compare the return values. We want both return values to be empty, which means their mempools are empty. 5. Using the Tezos client, we inject a valid transaction into the TezEdge node. This is demonstrated in step do-transfer-with-tezos_client. To help you understand how RPCs are used to ‘inject’ operations into the mempool, we will explain through a hypothetical Inject Operation. This is done by the tezos-client (via RPCs) 5.1 Collecting data from the node 5.2 Simulating the operation 5.3 Running the pre-apply operation 5.4 Injecting the transaction 5.5 After injection, inside the node: 5.5.1 Messaging the shell channel 5.5.2 Inserting into pending operations 5.5.3 Moving the operation towards validation 5.5.4 Broadcasting the new mempool state 6. In the step check-mempool-after-transfer, we call each node with the pending_operations RPC again and compare the return values. Again, we should see the same value from calling both nodes, but this time it will not be empty. We can see the operation in the applied field. This means that the transaction has been successfully propagated from the TezEdge node into the OCaml node. You can see the results of the entire process here: http://ci.tezedge.com/simplestaking/tezedge/955/2/1 We appreciate your interest in learning more about Tezos and thank you for the time spent reading this article. If you have any questions, feedback or comments, you are welcome to send us an email. To read more about Tezos and the TezEdge node, please subscribe to our Medium, view our documentation or visit our GitHub.
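The mempool comparison in steps 4 and 6 can be reproduced against a local sandbox with a few lines of Python. This is only a sketch: the ports below are placeholders for wherever your sandbox nodes expose their RPC interfaces, and it assumes the standard Tezos shell endpoint for pending mempool operations.

```python
import requests

# Placeholder RPC addresses for a local sandbox: one TezEdge node and one OCaml node.
NODES = {
    "tezedge": "http://localhost:18732",
    "ocaml":   "http://localhost:18733",
}

def pending_operations(base_url: str) -> dict:
    """Fetch the node's mempool via the standard shell RPC for pending operations."""
    resp = requests.get(f"{base_url}/chains/main/mempool/pending_operations", timeout=10)
    resp.raise_for_status()
    return resp.json()

mempools = {name: pending_operations(url) for name, url in NODES.items()}

# Before the transfer both "applied" lists should be empty; after the transfer
# the injected operation should show up in both nodes' "applied" field.
for name, mempool in mempools.items():
    print(name, "applied:", mempool.get("applied", []))

assert mempools["tezedge"].get("applied") == mempools["ocaml"].get("applied")
```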
https://medium.com/simplestaking/the-tezedge-node-a-deep-dive-into-the-mempool-part-2-fc7c579d0033
['Juraj Selep']
2020-08-04 12:39:00.746000+00:00
['Rustlang', 'Cryptocurrency', 'Mempool', 'Tezos']
Delivery Automation
Delivery Automation

In my previous post, we discussed warehouse automation, including fulfilment and distribution center logistics. There we explored how objects are stored, inventoried, picked, and packaged for shipment, and how robotics will play an increasing role in that work going forward. Today I want to explore what happens to the packages after they leave the warehouse, and how robotics technologies are being leveraged to deliver them to other destinations. This includes autonomous freight shipping as well as last-mile delivery to businesses and residences. As we look to automate more of the logistics and material-handling needed to operate a distribution center, it is natural to consider the warehouse operations of companies that carry packages from the warehouse to customers' doorsteps. Companies like FedEx, UPS and DHL are all utilizing robotics to make their operations more resilient. Fetch Robotics and 6 River Systems, both active in the warehouse automation space, also count DHL as a customer. And DHL announced in March 2020 that they would be expanding their partnership with another AMR (autonomous mobile robot) developer, Locus Robotics, to 10 new locations this year (Zdnet). FedEx recently began using robotic arms from Yaskawa America, equipped with computer vision technology from startup Plus One Robotics, to address the volume of packages in its Memphis facility (TechCrunch). In 2019, UPS brought on 22 new automated facilities across the world (which yielded 25% — 35% efficiency increases) and said that 70% of its packages passed through automated facilities, an increase from 50% in 2017 (Business Insider). In 2016, McKinsey projected that within a decade 80% of all items would be delivered autonomously, and COVID has only hastened this timeline. The need for social distancing and reduced contact between people has also led to increased demand for contactless delivery of packages, groceries, takeout and other goods. While there will probably always be a need for human delivery personnel, part of this last-mile delivery demand might soon be satisfied by autonomous, land-based robots or vehicles and by aerial drones.

Safety and Regulation

One of the biggest hurdles facing any robotic system operating out in the wild in an uncontrolled environment is maintaining the safety of the public, with significant regulatory barriers (as well as liability concerns) that come along with that. The most pressing concern for any company designing a robot to operate in the vicinity of humans is that it must be *extremely* unlikely to hurt anyone. This is true for fully autonomous systems as well as tele-operated or remotely operated systems, and is one of the main hurdles holding back level 3, 4, and 5 autonomous automobiles. The difficulty involved in designing autonomous and robotic systems for different environments can be divided into three categories. The first, and easiest, involves operating a robotic system in a closed environment with no humans present, or where humans are excluded from dangerous zones by cages or barriers. The next level of difficulty occurs when robots and humans share physical space, but where the humans are instructed or trained to gain familiarity with the particular robotic systems in their vicinity, such as warehouse employees trained in what to watch for and how to respond.
The third and most difficult situation to design for is when a robotic system will operate around people who are unfamiliar with the equipment, unpredictable in their responses, and potentially unaware that the robot is even present. This last category is what must be addressed for a delivery robot operating out in the open in an uncontrolled environment. One way this is addressed is by making the robot slow-moving or small, such that a collision is less likely to cause injury or property damage. Air-based drone technologies have some apparent safety advantages in this regard for urban delivery settings, in that they don't need to interact with humans much (if at all) to accomplish a delivery mission. However, a drone falling from the sky may be moving quite fast with an unpredictable trajectory, leading to a potentially dangerous impact and unpredictable consequences on the ground. There are also many considerations to be made about commercial airspace and keeping drones out of the flightpaths of larger aircraft. Ground-based delivery is in some ways easier because it can be performed by smaller robots restricted to slower speeds. Further, robots that can operate on sidewalks, and so do not need to share the road with other motorists, face less complexity and risk in executing a delivery by ground than by air, especially in urban settings. The trade-off to be considered here is that a sidewalk environment (as well as an urban roadway) can be a chaotic setting for the robot to navigate, more so than the relatively clear airspace above.

Ground-Based Autonomous Delivery

FedEx has experimented with autonomous delivery of same-day orders via its Roxo robots (developed by the inventor of the Segway), which it began testing in 2019 (The Verge). There are also many startups in this space taking a variety of approaches to contactless delivery. Starship Technologies (founded by a pair of Skype co-founders) uses autonomous, 20-pound robots to deliver food and groceries in markets such as Tempe, Washington, D.C., Irvine and Milton Keynes, U.K. (TechCrunch). Nuro, which uses autonomous on-road vehicles to deliver food, groceries, and dry cleaning among other things, raised $1 billion from SoftBank in 2019 (The Verge — Robot delivery startup Nuro raises nearly $1 billion from SoftBank), and is expanding into prescription delivery with its recent partnership with CVS (The Verge — Nuro's driverless delivery robots will transport medicine to CVS customers in Texas). At the start of the pandemic, Unity Drive Innovation (UDI) delivered meal boxes and produce via an autonomous van in the Chinese cities of Zibo, Suzhou and Shenzhen. Since 2018, UDI's vans have been used by Foxconn to transport parts inside its 200,000-worker Shenzhen campus (IEEE Spectrum). Chinese e-commerce company JD.com also began deploying autonomous vans for last-mile delivery inside Wuhan when the pandemic first emerged and the city was under lockdown (RTE Ireland). Amazon is also in on the last-mile delivery robot game, with its cooler-sized, six-wheeled Scout robot being used in Irvine and Snohomish County, Washington for over a year now (DOGO News). Scout was also deployed in Atlanta, Georgia and Franklin, Tennessee starting in July 2020 (USAToday). In line with the movement towards a RaaS (Robotics-as-a-Service) business model, companies like Kiwibot are offering last-mile delivery robots as a service to restaurants, governments and delivery apps.
Aerial Drone-Based Autonomous Delivery

Aerial drone delivery is a tantalizing option for retailers and logistics companies because it reduces shipping costs, makes last-mile delivery less cumbersome, and results in quicker shipping times for customers. But drones are limited in the weight and dimensions of the packages they can carry, and the regulatory environment at the federal and local level can be tough (and sometimes impossible) to navigate. Nevertheless, there are many companies working on drone delivery, from startups, to large tech firms like Amazon and Google, to traditional retailers like Walmart, to logistics companies like UPS. Amazon's Jeff Bezos put drone delivery on the map with his 2013 interview on 60 Minutes, and Amazon continues to invest heavily in its drone delivery capabilities, known as Prime Air; it was originally slated to launch in August 2020, but has yet to materialize, most likely held back by FAA restrictions (BusinessInsider). BusinessInsider also notes the following about Walmart: "In 2019 Walmart was on pace to file more drone patents than Amazon for the second year in a row. With drones having a fairly small range of about 15 miles, Walmart is perfectly positioned to dominate the commercial drone industry thanks to its giant network of stores in the US." Walmart also partnered with Flytrex, a Tel Aviv-based drone delivery startup, to deliver goods from a local Walmart store to residents of Grand Forks, North Dakota in April 2020 to address the needs of sheltered-in-place shoppers (Forbes). Wing, Google's drone delivery service, is available in a number of locations, including Virginia, Finland and Australia, and saw its deliveries in Virginia double when the COVID outbreak began (Bloomberg). Wing's electric-powered drones also deliver certain FedEx packages and products from Walgreens (BusinessInsider). UPS's Flight Forward drone delivery service became the first of its kind to obtain FAA approval to operate as a commercial airline in 2019 (BusinessInsider). In May 2020, UPS announced that it would begin delivering prescriptions from CVS to a retirement community in Florida; the company had been testing the service in North Carolina prior to this (The Verge). UPS uses drones built by startup Matternet. Airbus completed the first shore-to-ship drone delivery in Singapore in early 2019, and is undertaking further trials of the service (Airbus). In May of this year, Zipline, a drone delivery unicorn focused on delivering medical supplies to healthcare providers, began delivering personal protective equipment via drone to Novant Health Medical Center in Charlotte, North Carolina, a process The Verge described as follows: The service has begun by delivering supplies to Novant Health's Huntersville medical center from a depot next to its facility in Kannapolis, North Carolina. Once the drones reach their destination, they drop the supplies via parachute, meaning the center doesn't need any additional infrastructure to receive deliveries. Zipline says its drones can carry almost four pounds of cargo and travel at speeds of up to 80 mph. Another name in the space to watch is Flirtey, which helped Domino's Pizza execute the first drone delivery of pizza in New Zealand in August of 2016 (UAV Coach). There are also companies pursuing larger-format semi-autonomous aircraft capable of delivering hundreds of pounds of cargo, using airframes more similar to a winged airplane or VTOL craft.
One such company is Elroy Air, capable of delivering 250–500 lbs of cargo up to 300 miles, with autonomous cargo loading and unloading at the endpoints. Another such company is Sabrewing, boasting up to 5,500 lbs of payload with VTOL ascent, and a 1,000-mile range. Both of these companies expect to be significantly geographically restricted in where they can fly, intending to operate their early systems only above sparsely populated areas and on warehouse-to-warehouse routes to comply with safety requirements. The entire autonomous-flight industry eagerly awaits the regulatory frameworks still in development by the FAA and other airspace organizations that will govern how such autonomous and semi-autonomous craft will be allowed to operate in the future, especially in the vicinity of urban areas.

Autonomous Trucking

Before items and packages can be dropped off at our doorsteps via drone, robot or autonomous vehicle, they need to get from factories, warehouses and other facilities to distribution centers. And autonomous trucking will soon be playing an important role in that part of the process. In some ways, autonomous trucking is more straightforward than autonomous cars, because "Unlike self driving cars, autonomous freight delivery is more predictable and easier to map since the services run on fixed routes and typically stick to major highways with few intersections or pedestrians" (Research and Markets). As with ground and drone deliveries, there are a number of large companies and smaller players jockeying for position in this market. UPS is piloting self-driving delivery vans and trucks in Arizona with both Waymo and startup TuSimple. In Arizona, Waymo, which is owned by Alphabet/Google, is delivering UPS packages from stores to sorting facilities, and is also delivering car parts for AutoNation. It plans to expand testing to New Mexico and Texas this year (VentureBeat). TuSimple, which "uses Navistar trucks outfitted with the startup's own self-driving tech, which sees the world largely through nine cameras" (The Verge), counts UPS, Nvidia and Navistar as corporate investors. The company is planning to expand to Texas this year, where it will service cities like Dallas, El Paso, Houston and San Antonio. Trucks equipped with TuSimple's technology must still have a human driver present to take over if needed. Ike, named for President Eisenhower and the interstate highway system he helped create, is a San Francisco-based autonomous trucking startup founded by former Apple, Google and Uber employees, which started off by licensing technology from Nuro (Bloomberg). Ike raised $54.5m, including from Fontinalis Partners, whose founding partner is Bill Ford, the Executive Chairman of Ford Motor Company. I personally appreciate the systems-based philosophy that Ike is taking in its design work, where the company is "focused on an entire system that accounts for everything in the self-driving truck, from its wire harnesses, alternator and steering column to durable sensors designed for the highway, computer vision and deep learning that allows it to see and understand its environment and make the proper decisions based on that information. That systems approach also includes proper validation before testing on public roads." (TechCrunch). Other companies in this space include Embark Trucks and Kodiak Robotics. It is worth noting that despite the massive amount of development work happening in this space, fully autonomous vehicle technologies may still be far away.
Uber, for instance, shut down its self-driving truck project a few years ago (MIT Technology Review). And YC-backed Starsky Robotics, credited as the first company to complete a 7-mile highway journey without a human onboard, shut down in March 2020. In a summary of several recent MIT papers, Supply Chain Digest noted that the MIT researchers concluded that fully autonomous trucking "is likely decades off, and the near term step in freight movement is likely semi-automated platooning for long haul moves." Semi-automated platooning involves a lead truck driven by a human with a self-driving truck or fleet of trucks following behind. This approach is employed by startups such as Peloton Technology, which has taken investment from corporates like UPS and Volvo, and Locomation, which completed a public road trial in August of this year (VentureBeat). This paradigm of tandem human/autonomous trucks is a likely stepping stone to fully autonomous trucking.

Autonomous Watercraft

Autonomous watercraft for cargo transport are another way to reduce human intervention in the shipping of goods. Even prior to COVID-19, water-based transport technologies were already becoming important for their ability to mitigate human error (and the consequent financial losses) in the shipping process. It is estimated that "75% to 96% of marine accidents can involve human error", and that between 2011 and 2016 human error in sea-based cargo transport accounted for $1.6 billion in losses (Allianz). As a result, companies have been working to reduce the potential for human error by developing autonomous watercraft for shipping cargo; in 2019, a vessel developed by SEA-KIT executed the "first commercial crossing of the North Sea to be made by an autonomous vessel" (BBC). In addition to reducing or eliminating the potential for human error, autonomous ships will yield additional benefits: "Free from crewmembers, ships will be redesigned to be more efficient, since ship builders can eliminate accommodation structures such as the deckhouse and living quarters, as well as energy-expensive functions like heating and cooking facilities. Crewless ships will undergo a radical redesign to eliminate excess features and increase efficiency and carrying capacity" (DigitalTrends — Autonomous ships are coming, and we're not ready for them). Rolls-Royce has been a leader in autonomous cargo shipping, and their VP of Marine Innovation, Oskar Levander, said in 2016 that "This is happening. It's not if, it's when. The technologies needed to make remote and autonomous ships a reality exist … we will see a remote-controlled ship in commercial use by the end of the decade" (Digital Trends — Rolls-Royce's cargo ship of the future requires no onboard crew). In 2019, shipping giant Kongsberg purchased Rolls-Royce's Marine division, which had been conducting tests of autonomous ships in Finland; Rolls-Royce netted $500m from the transaction (Maritime Executive). While the technology to facilitate autonomous ships is being developed rapidly, its proliferation will be slowed by regulation. The International Maritime Organization (IMO) is an arm of the United Nations that sets regulations for international shipping. The chair of the IMO's working group on autonomous cargo shipping, Henrik Tunfors, has described the IMO as "a slow-moving organization", adding that "The pessimistic scenario is that regulation will fall in place between 2028 and 2032, but if we start working straight away, we may have some regulations by 2024.
But that’s very optimistic” (Wall Street Journal). That being said, the Journal does note that “Autonomous ships that do short trips on national waters will only need approval by local regulators.” Nevertheless, it is unlikely that autonomous watercraft will proliferate quickly enough to have any impact on reducing the spread of COVID, but they may play an important role in ensuring resilient supply chains that can withstand another outbreak.
https://medium.com/prime-movers-lab/delivery-automation-f559d5d1d22a
['Dan Slomski']
2020-12-17 09:21:43.331000+00:00
['Delivery', 'Robotics', 'Technology', 'Automation', 'Venture Capital']
How to Access International Streaming Services from Anywhere
Have you ever found the perfect movie on a random social media platform, looked it up online, only to find out that you can't access it? There are countless streaming sites in the world that are often blocked or banned within your country for various reasons. But there's always a way around these restrictions. Of course, accessing a blocked or restricted streaming service depends highly on whether your country enforces copyright violation charges and whether you're about to break the law by doing so. Always make sure that everything you do online is legal and doesn't seriously violate any policies. TheVPNExperts can be a great guide to overcoming these challenges. However, to access all your favorite movies, apps, or streaming websites from within your country, there are a few ways you can go about it:

· Access services using great VPN software
A VPN is by far the best way for anyone to access way more than just a couple of streaming services. It provides various servers that you can connect to, changing or masking your physical IP with a new one. It comes with strong encryption and security measures, keeping both your device and data safe. However, a lot goes into choosing the best service. Not all VPNs are reliable, so it is best to run some tests of your own and look up other user reviews.

· By using a proxy
A proxy isn't like a VPN. In fact, it's not as secure, but it manages to get the job done. It doesn't provide you with servers to unblock a website; rather, it just masks your physical IP address (a short sketch at the end of this article shows what that looks like in practice). Using a proxy to access certain websites is fine if done in moderation. It can't be used for bigger tasks because it doesn't come with encryption and the other security measures a VPN provides. So you could say it's more of a short-term solution.

· Access with Smart DNS
Smart DNS was born out of VPNs being tracked down and blocked by many websites. It doesn't work the same way as a VPN or proxy. The difference is that it doesn't hide your web traffic but only redirects the part that reveals your location. Smart DNS is said to produce faster speeds compared to VPNs and proxies, so that is a plus point. However, only use Smart DNS if you're not in a country or region with extremely strict censorship laws, such as China. It doesn't come with great security and could be a risk.

· TOR
TOR is free, open-source software. If you Google it, you'll find its logo is an onion. An onion has many layers, and in the same way TOR passes your web traffic through many layers, making it difficult for websites to work out where it originated.

To conclude
The thing about accessing restricted streaming services worldwide through any one of these four methods is that nothing is fully secure. Using a VPN is probably the best and safest way around these barriers, but not all of them are reliable in the long run. Also, you should keep in mind that no software is untraceable. Yes, there's a good chance you'll get away with your IP masking and website unblocking, but you should still be very careful about what you get up to or try to access online.
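For readers curious about what "masking your IP with a proxy" actually looks like in practice, here is a minimal, illustrative sketch in Python. The proxy address below is a placeholder from a documentation IP range, not a real service, and api.ipify.org is simply one public service that echoes back whichever IP it sees; you would substitute a proxy you are actually authorized to use.

```python
import requests

# Placeholder proxy address (documentation range); replace it with a proxy
# you are legally allowed to use.
PROXY = "http://203.0.113.10:8080"
proxies = {"http": PROXY, "https": PROXY}

# Without a proxy, the IP echo service sees your real public IP.
direct_ip = requests.get("https://api.ipify.org", timeout=10).text

# Routed through the proxy, the service sees the proxy's IP instead.
# Note that, unlike a VPN, this adds no encryption to your traffic.
masked_ip = requests.get("https://api.ipify.org", proxies=proxies, timeout=10).text

print("direct:   ", direct_ip)
print("via proxy:", masked_ip)
```

A VPN produces a similar change of apparent IP, but it does so at the operating-system level and encrypts the traffic as well, which is why it is generally the safer of the two options.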
https://medium.com/@williamhunt727/how-to-access-international-streaming-services-from-anywhere-2d167b2e4042
['William Hunt']
2020-05-20 15:59:21.178000+00:00
['Technology', 'VPN', 'Cybersecurity', 'Streaming']
Expand Your Thoughts
TODAY'S HOROSCOPE — DECEMBER 20th 2020 If you have the Sun, Moon or Ascendant in Sagittarius, you must be overjoyed because Jupiter has left Capricorn. Under this tutelage you may have felt the idiom, 'Spare the rod and spoil the child.' The lessons were rough and you had to find ways to survive under heavy restrictions. Now you're free to shout 'Hallelujah' from the rooftop and begin to thrive again. As you inhale the new breath of life, start thinking about how you can live more expansively. The synthesis of Jupiter and Saturn in Aquarius is the lighthouse that sparks an awareness. Tomorrow, they form the Great Conjunction that launches their journey through air signs for 800 years. The air signs, namely Gemini, Libra and Aquarius, deal with archetypal ideas. These ideas are the invisible structures behind our physical world. How can you contribute to the ceremony of new concepts, doctrines and perceptions? One way you could do it is to set off on your own quest for meaning. The mythological story of Parsifal and the Holy Grail can guide you on your journey. Consider how the nobleman helped Parsifal when he arrived at the Castle of Gurnemanz: he mentored Parsifal by teaching him the rules of chivalry, and particularly the ethics behind courtesy. He was told: 'Never lose your sense of shame.' 'Do not importune others with foolish questions.' 'Always show compassion to those that suffer.' Needless to say, Parsifal failed his test because he remained silent when he met the ailing Grail King. The reason he failed is that he only memorised the guidance but didn't truly understand it. He learnt the outer forms but not the inner meaning. Jupiter and Saturn in Aquarius can assist you with the process of initiating and containing ideas. Jupiter provides the inspiration to generate transpersonal and groundbreaking ideas that shape humanity, whereas Saturn is the framework that contains them. In order for your ideas to fit within the framework, you must understand them. By comprehending them you can categorise and apply them properly.
https://medium.com/@bybreensamuels/expand-your-thoughts-b9af4824fc84
['Bybreen Samuels']
2020-12-20 11:46:08.846000+00:00
['Self Improvement', 'Leadership', 'Life Lessons', 'Astrology', 'Ideas']
The #1 Reason Why Your Estimates Might Go Wrong
This incident happened in November 2019, around Diwali time, in Chennai. A close friend of mine travelled from Bangalore to Chennai on an official visit and wanted to meet me before his return to Bangalore the same day. We worked out a plan and somehow met at a common place, with just a few hours left for him to reach Chennai Central and board his train. Given the tight schedule, we could hardly speak in a relaxed mode; nevertheless, we had a lot to exchange. We had everything to discuss over a cup of tea, standing near a tea shop. With no good ambience to continue our conversation, my friend suggested that we both get into the cab and continue talking while keeping his travel plan intact. After some time, he suddenly asked the driver to stop the cab in the busy market roads of Thyagaraya Nagar (otherwise called Mambalam), en route to Chennai Central. He wanted to surprise his mother-in-law with a Kanchipuram silk saree for her 70th birthday, coming up soon. The idea was definitely a noble one; nevertheless, there was not much time left. He somehow persuaded me with his forecast that he could get a saree and be out of the busy shopping mall nearby in less than 15 minutes. With not even a week left until Diwali, and a huge last-minute shopping rush expected, I wasn't convinced he should own such a risk. But what is bound to happen had to happen. He selected a wonderful silk saree in less than 10 minutes and then stood in a long queue for about an hour to get it billed. Finding that he was running short of time and couldn't get his billing done, he had no choice other than to give up the plan of buying a saree. He left with a heavy heart, returning to the cab to board the train on time and reach home safe. While coming back on the local suburban train all alone, I had a realization about the series of activities that had happened in the last couple of hours. I found that it neatly ties back to estimating work in a complex domain and the reason why such estimates fail.
https://productcoalition.com/the-1-reason-why-your-estimates-might-go-wrong-ac3908855c21
['Ravishankar R']
2020-12-23 13:27:53.028000+00:00
['Estimations', 'Forecasting', 'Agile Planning', 'Scrum', 'Scrum Team']
Building real-life products from scratch with Value Proposition Canvas
At some point you will probably face a problem or a need that you want to solve or fulfill somehow. You search for the best application with high hopes, but the App Store is as empty as your feelings. If you are not lucky enough to have it in your role description to design a product from zero, this will probably be the perfect time to start something which is 100% yours. This story is a comprehensive description of the beginning phase of researching, testing, and validation (of brand new ideas) through a real-life process we have done with my product designer buddy Peter.

How to find a five-star idea?

The best-case scenario is to have your own experience, which reflects your needs, or the experience of someone who is close to you. At the very beginning, we collected ideas from our lives and listed them on Trello with only one sentence as an internal narrative. But what is a good idea, one which reflects a problem that is painful enough to become significant? If I feel that my issue is really tough, but there are only 15 people out there who feel the same, then it obviously isn't worth the time and energy to discover. Y Combinator came up with a pretty good criteria system to find the "best painful problems" out there which are important enough to work with.
⭐️ Popular (a critical amount of people have the problem)
⭐️ Growing 20% a year (the market is getting bigger, affecting more people)
⭐️ Urgent (needs to be solved quickly for the business/person)
⭐️ Expensive (we can charge money for it)
⭐️ Mandatory (has to be solved because of regulation pressure)
⭐️ Frequent (even multiple times a day) — **Most important!**
When we started our own product, we had a common problem which was worth discovering.

Research

Research is the kind of thing where you put everything in a hat (imagine that you can do that with every available piece of information in the world) and always pick one item and dive in deeply to understand its core. It's always pretty hard to select the most informative and useful pieces, but it is recommended to keep the process maintainable, since every single minute you spend with misleading information is just a pure waste of time. We did generic food waste research, then explored the available products on the market, and tried to figure out the deficits that hurt their efficiency. Sometimes this is easily available information, but in most cases you need to download the app itself or understand the whole market to figure out the users' reaction to the product. After this phase, you should have a complete kit of information about every other competing product, your future customers' generic needs, their willingness to pay, and the main barriers which can lead your product to a complete failure. Here is one of our competitors' app reviews (masked). It is always worth looking at comments, reviews, and ratings to learn which features can be an added value for the users. This is also the time when you can do persona research to dig deeper into the problem, or carry out simple in-depth interviews so you are prepared with customer insights when it comes to the product planning part. If you don't have much time to conduct extensive research, there are a few other techniques that can be a great helping hand in avoiding questionable decisions. One potential solution for these busy days is described below.

Value Proposition Canvas

While the research gives you a broader (but superficial) picture of the problem, it will be enough only to understand the game you got into.
The next step should be something which helps you to understand every single segment of your future solution and, last but not least, your future users. The Value Proposition Canvas is a great tool for this purpose, and here you may find further information about its techniques and mechanics so that you can tackle the main objective well prepared.

1. Customer profile

We filled out customer profiles separately to get a deeper understanding of the core problems and the possible goals. This is the most important part of the whole process: if you don't dig deep enough, or don't invest adequate time in this phase, you may end up putting energy only into your own speculations. First, we defined customer goals, where we had to list the usual aims they try to achieve daily. The next section is about the pains, which cover every risk, negative impact or obstacle that can happen while they use your product. The third part is to discover the gains, growth, or benefits they will achieve with your solution. Customer profiles help you to find out the most important aspects of your product's future possibilities, while also helping you to gain empathy towards your users. After a well-designed customer profile, we defined the factors with the highest priority from the segments (five from each), and these became the basis of the value map. In our case, it turned out that from our users' perspective the money-saving and practical aspects would be the most important factors, namely that they would know exactly what is in their fridge in order to avoid waste.

2. Value map

The value map is a practical document which helps you to organize your future product's functions and services; with its help you will be able to figure out gain creators and pain relievers, which should be synchronized with your customer profiles as well. This is the part where you have to be really creative and come up with ideas, from the very basic to the most wicked ones, to establish the long-term goals of the product as well. As a result, you will possess a "fair" MVP plan, which still has too many functions, but it will be much easier to filter down than just coming up with functions without any kind of background. In our case, we had another session as well to keep the MVP in order and establish the most important features of our future app. The value map is a practical tool to define the MVP's concrete functions and services. After having your future MVP planned, it is worth conducting short research again to be sure you made the right decisions in your design.

Validation with landing page

Congrats, you have now come up with a real-life MVP with specific functions and services, but you still can't be sure whether people would pay to use it or not. To avoid disappointment and needless time investment, it is a great option to create a landing page and watch users' reactions. Do they click? Do they want to sign up for the release? Do they speak about it? To gain the biggest attention and impact with your product, you should have an almost-final product design with a polished user experience and a consistent identity that you can show on your page. Without it, you won't be able to reach the minimum number of visitors. When a potential customer visits your site, they should feel that this is something that is almost done, and they wouldn't be surprised if they could download it from the App Store two days later.

Takeaway message

Now we should double back to the beginning, where I promised that we would learn how to design a product from scratch.
It wouldn't be fair not to tell you to always keep in mind one tiny thing: know when it's worth continuing with a planned MVP and when it's more rewarding to let it go. Even if we sometimes feel that our product will be the next big thing, it can happen that during the landing page phase we realize that it's not beneficial to put more energy into the project. Then let's move on to our next idea and repeat the techniques we've learned!
https://uxdesign.cc/building-real-life-products-from-scratch-with-value-proposition-canvas-ecc02bf45dab
['Dora Melher']
2020-01-16 16:31:45.181000+00:00
['MVP', 'Value Proposition', 'Startup', 'Product Design', 'Service Design']
Pity the dog
Trump found in dog’s ear: this is not a metaphor Pity the dog with such a burden, I know you scratched it was a warning, you howled, you whined, and I ignored you. I couldn’t look, I wouldn’t see, he hounds my life my nights, my mornings. But when you began to drag your bum across the carpet I knew the time had come to fix it, a mutt with fleas is better than a POTUS.
https://medium.com/resistance-poetry/pity-the-dog-befcb5003027
['Dermott Hayes']
2017-11-06 11:11:29.842000+00:00
['Ear Infection', 'Resistance Poetry', 'Poetry', 'Dogs', 'Politics']
100 years since women’s suffrage, Manchester’s fast fashion brands are taking women’s equality to…
100 years since women's suffrage, Manchester's fast fashion brands are taking women's equality to a very dark place

Now is the time to ask more of our city's largest fashion houses… All around Manchester you see colourfully dressed women pictured revelling in the fashion of the moment — but these vibrant ads stand in stark contrast to fast fashion's dangerously obscured garment supply chain. A new era of "dark factories" is coming to light as a result of the pandemic of 2020. Manchester is fast becoming the global epicentre of fast fashion — home to mammoth brands such as Boohoo, Pretty Little Thing, In The Style and Missguided — whose meteoric rise to fashion industry domination is the stuff of dreams. But as reports flood in of factory safety compromised, wages left unpaid, and protests around the world as garment workers hit breaking point, fast fashion's biggest brands are facing a reckoning. Recent allegations from the campaign group Labour Behind the Label state that some garment factories in Leicester primarily supplying Boohoo retained normal opening hours during the coronavirus crisis. It was also reported that sick workers were ordered to continue to report for duty. A damning undercover report from the Sunday Times has resulted in prompt abandonment of the brand by other branded websites, tumbling share prices, and major stakeholder Standard Life Aberdeen releasing the majority of its shares. Boohoo and other major fast fashion brands stand accused of failing to adequately monitor conditions at the factories, as experts and organisers within the local community express their belief that the notorious garment manufacturing sector was likely a hotbed of infection contributing to the resurgence of Covid-19 in the city. (In response to the allegations, Boohoo have begun an inquiry led by Alison Levitt QC, and stated that early investigations had not found evidence of suppliers paying workers £3.50 an hour, and that some of the claims made appeared to be inaccurate or misleading. Outside authorities have said that no evidence of slavery offences has been found; however, Boohoo have fired two suppliers for non-compliance with its code of conduct.) But why did it take a pandemic to convince authorities and investors to take action, when all along other retailers and consumers have been asking how it's possible to pay the supply chain a living wage from a dress that costs £10 or less?

The secrets of fast fashion

In many ways, fast fashion is a victim of its own success: clothing ranges can be launched in rapid succession, with high profit margins enabled by "just in time" supply chains, wide-scale environmental neglect, and perhaps most nefariously of all, endemic gender inequality. But surely things are improving, right? Wrong. Just last year, a deep dive into the wages paid to garment workers across the globe revealed that while some brands are making progress in promoting better practices, none of the fast fashion brands investigated were able to show that a living wage was being paid to any worker in supply chains outside their own headquarter countries, and no brand had yet entered into a legally binding agreement with suppliers which would ensure increased wages for their employees.
The intersection between gender inequality, environmental destruction and now Covid-19 is explored brilliantly on ruthmacgilp.com in a blog by Emily Kemp, who observes that garment workers, 80% of whom are women, "were paid poverty-level wages long before the pandemic began", adding "due to the virus, they [the factories] are only driving already-impoverished women into further destitution". There is also evidence of six-day work weeks, abuse, child labour and suppression of unionisation prevalent across the garment trade. Manchester Fashion Movement believes that Manchester is being seen to support and profit from the perpetuation of this model. Given the city's history in relation to workers' rights and its role as the home of the women's suffrage movement, its legacy is being grossly undermined. What would Manchester suffragette Emmeline Pankhurst, or economist Martha Beatrice Webb, the woman who coined the term "collective bargaining", make of fast fashion? We can only imagine. A human disaster that Emmeline and Beatrice would have followed closely was the 1911 Triangle shirtwaist blaze, which consumed the eighth floor of a garment factory in New York City's Greenwich Village, killing 146 workers — most of them young women and girls. The owners of the factory escaped the fire unharmed. They were later brought up on charges of manslaughter, but acquitted of negligence. The New York fire, and the sensational trial that followed, catalysed reforms such as outward-swinging exit doors and sprinklers in high-rise buildings. It would go on to inspire sweeping changes and fuel the women's suffrage movements in the US and UK. Some have pointed to that day as the birth of the New Deal. The blaze would be New York's deadliest workplace disaster for nearly 90 years, and stand as one of the fashion industry's most catastrophic events, until its eclipse in 2013 by the Rana Plaza factory collapse in Dhaka, Bangladesh, which claimed the lives of 1,134 and injured more than 2,500. These two events, a century apart, reveal how the inequality and poor working conditions once associated with the New York and Manchester of the early 20th century have not been eradicated — they have become common in South Asian countries, and in a further parallel, once again those at the top of the chain who profit from the deprivation of women and girls are getting off unscathed.

What brands say versus what they do

What Boohoo say: "we need to continue to have a positive social impact" (source) What they do: Boohoo has not yet published a supplier factory list or signed the Transparency Pledge, actions which would help to support monitoring down its supply chain and encourage human rights to be upheld in garment factories worldwide. Boohoo's code of conduct and supplier manual state that suppliers are required to pay only the minimum wage and make no mention of wages being enough to meet basic needs. Since "no evidence of a living wage being paid to any workers" was shown, it garnered an 'E' (the lowest) grade in 2019's Tailored Wages report, provided by the Clean Clothes Campaign. In its annual report, Boohoo has said it will conduct a third-party audit of three-quarters of its global supply chain in FY21.
* What In The Style say: "we're not saying buy it once and throw it away, we're saying buy it once and make it last" (source) What they do: Again, there are transparency issues, with the brand failing to respond to enquiries as to how many garments it makes per year, what it does with unsold stock, whether the workers who make the clothes are paid a living wage, or whether the samples it produces in the fitting process ultimately end up in landfill. (via Dazed Digital) * What Missguided say: "suppliers should demonstrate care and concern for people and the environment" (source) What they do: The retailer releases about 1,000 new products monthly. Lacking transparency around its factory locations and the wages paid to workers, Missguided has joined the ETI and is trying to be seen to update its ethical policy, but its approach to buying may prove a barrier to ensuring workers are not exploited. As with Boohoo, "no evidence of a living wage being paid to any workers" was shown, thus it received an 'E' grade. (Vox; Clean Clothes Campaign)

Our voices matter

Fast fashion's practices defy the way in which British consumers would choose to treat people or the natural environment. Who would put a woman through a degrading existence, or agree to pollution on a global scale, all to pay a few pounds less? 2020 seems to be the year of facing hard truths — that we, through uninformed choices and acceptance of the status quo, are participants in systems which perpetuate inequality. The women and men who buy fast fashion clothing, the influencers who promote it, the judging panels that present the brands with fashion accolades, and the editors who place their products in magazine spreads should, before showing their support, ask: Who made these clothes? Were they paid a living wage? Is the brand's supply chain suitably transparent? Are all workers who supply this brand permitted to join a union? As citizens, we must start looking at facts, evidentiary outcomes and concrete metrics, rather than awarding credit for interim steps or intentions. Crucially, it's the fashion brands themselves that must change NOW. Instead of mindless profit posting, they must show evidence of improvement. We must encourage them to put their profit to good use as leaders in fast fashion: positioning themselves right at the front of the fight for ethical supply chains, which are enabled by smart technology and built on safety, transparency and equality; investing in research for more sustainable fabrics, processing and transport; and, vitally, hiring experts in the field to support sustainable growth. We love our city and are proud to call it home. As Mancunians and adopted Mancunians, we need to expect more of our own and call for accountability. Are there fast fashion brands not demonstrating best practice in other cities worldwide? Absolutely. But we're looking to create change on our own doorstep first: making better choices, making noise and doing everything we can to change the narrative. In forming the Manchester Fashion Movement, the solidarity we have found has given us hope and confidence. Together we champion people over profits, collaboration over competition and equality at every level of the fashion supply chain. Now is the time to ask brands for more: to wake up to the lessons of our history and to really give thought to the rights of millions of workers down the value supply chain. Join us in helping to create awareness and encourage everyday activism.
This is our modern suffrage — to turn the tide on fast fashion, and build a garment trade that is safe and fair… for everyone. Sign up to the Manchester Fashion Movement and keep notified with monthly newsletters: www.manchesterfashionmovement.com Follow all MFM founders: @manchesterfashionmovement Gemma — @thebeethrive & @_corporatehippy_ Alison — @allypallyvintage &@sustainablefashionparty Camilla — @wardrobe_wellbeing_
https://medium.com/@gemmagratton/100-years-since-womens-suffrage-manchester-s-fast-fashion-brands-are-taking-women-s-equality-to-43cb99c077a4
['Gemma Gratton']
2020-07-27 11:37:54.310000+00:00
['Manchester', 'Manufacturing', 'Fast Fashion', 'Suffragette', 'Equality']
Canadian Election 44 Update: August 19, 2021
Current 338.com poll results show a tightening race between the Liberals (34.5%) and Conservatives (30.6%). In polling circles, this puts both parties in a statistical dead heat (+/- 4.4%) after week one of the campaign. No bold predictions, no crystal ball. But when a political party — who was down 30 percentage points only six months ago — focuses on one core message, anything can happen. And it just happened in Nova Scotia. The Nova Scotia Progressive Conservatives led by leader, Tim Houston, swept to a majority victory two nights ago after every pollster and pundit in the land had written his party off at the start of the NS election campaign. Why does this matter? Houston ran a solid campaign on an issue near and dear to the hearts of Nova Scotians: healthcare. He unveiled a left-leaning platform that promised hundreds of millions of dollars in the first year of the party’s mandate to increase the number of family doctors, bolster the mental health system and create more nursing home beds. Houston’s party has also become the first to unseat a government in Canada since the start of the COVID-19 pandemic. Other elections that have taken place during the course of the health crisis — in Newfoundland and Labrador, New Brunswick, British Columbia, Yukon and Saskatchewan — all saw incumbent leaders remain in power. The silent majority: There is something to be said about “undecided” voters and their ability to wait out a campaign and vote in the eleventh-hour. Undecided Canadian voters have the balance of power right now, especially in #Election44 where the Liberals and Conservatives are within five percentage points of each other. Tight elections with a high margin of undecided voters have a way of opening the eyes and ears of politicians, hence the need for getting each political party to identify their position on the legalization of psychedelic therapy. In the last six years, the Liberal Party of Canada has dominated Atlantic Canada — other than MPs like Rob Moore, John Williamson, and Chris d’Entremont who broke through the Liberal fortress in the 2019 federal election. This was pre-COVID. (Shout out to Rachel Crosbie who I think would make a great PC Leader in Newfoundland & Labrador one day). Both of Justin Trudeau’s victories (2015, 2019) were buoyed by what seemed to be an impenetrable Atlantic guarantee that what starts in the East would carry him through the West; however, Houston’s victory, combined with the positive, high-integrity campaign run by Newfoundland & Labrador PC Leader, Ches Crosbie, in January, puts an interesting spin on how Atlantic Canadians may vote in #Election44. Why should the psychedelic industry pay attention to the Nova Scotia election results? Mental Health: Canadians are exhausted and need help. With provincial governments emphasizing the need for mental health reform, more pressure will be placed on Ottawa to help pay the way. Houston has promised more than $400 million to improve healthcare, so there should be opportunities for psychedelics companies and programs to provide clinical therapy and other innovations in NS. Governments will be looking for innovative and cost-effective solutions to combat the mental health epidemic facing Canadians, and psychedelic therapies are the frontier-building solutions making the most sense right now. When people are desperate, governments are desperate. They will need ideas and educational resources to demonstrate the efficacy and science of psychedelic therapies. 
And more importantly, this is the time for private and public entities to start investing resources into studying and communicating the socio-economic benefits of psychedelic therapies and how they can help reduce healthcare budgets across Canada. Putting partisanship aside: From day one, the coalition formed under the Canadian Psychedelic Association (CPA) and industry partners like Field Trip Health Ltd. focused on educating every Canadian politician — regardless of political stripe — across the country. Why? Because we don’t pick winners or losers in elections. When you cater to one political party, it takes away your ability to negotiate with another. That is why the CPA and our coalition are in a good position to lobby effectively on the Memorandum of Regulatory Approval (MORA). There is no guarantee that the results of the NS election will translate into a stronger Conservative surge in the coming weeks, but it will make things interesting for Mr. Trudeau. If there is food for thought for Justin Trudeau in the Nova Scotia election, it may be the fact that, with the country potentially in the latter stages of a pandemic, steering voters through the COVID crisis no longer guarantees an incumbent victory. In Nova Scotia, the Liberals learned that in painful fashion as former Premier Iain Rankin ran a low-key campaign, while his opponents each hammered on key non-COVID-19 issues. This election could be a double-edged sword for Trudeau. On the one hand, voters tend to be loath to change leaders during a crisis. In Nova Scotia, with case counts low and vaccination rates high, it appears voters switched from crisis mode to thinking about particular issues, to Houston’s benefit and Rankin’s detriment. For Trudeau, much will come down to whether voters perceive the pandemic to be in its waning stages — in which case, they may broaden their outlook to other issues — or whether, with Delta variant case numbers rising in places, they believe the pandemic to be ongoing — in which case they are likely to dance with the one what brung them. But if the latter is the case, and voters are nervous about rising case numbers, the inevitable attack from opposing parties will be on the advisability of running a campaign during a COVID crisis. More to come… Michael Kydd is the Principal of Kydder Group Inc., a public affairs firm that specializes in psychedelic communication, government relations, and regulatory affairs. He is based in Halifax, Nova Scotia.
https://medium.com/@michaelkydd/canadian-election-44-update-august-19-2021-9f76a9f63802
['Michael Kydd']
2021-08-19 15:42:30.636000+00:00
['Michael Kydd', 'Canadian Politics', 'Mental Health', 'Psychedelics', 'Canadian Elections']
Forget Work-Life Balance: It’s All About the Blend
Forget Work-Life Balance: It’s All About the Blend See why and get 5 tips to make it happen No matter what we try, work-life balance always seems like a destination that we have yet to reach. It’s around the corner, out of our grasp. Work-life balance. It sounds nice, doesn’t it? We all say we want it, and why wouldn’t you? You envision that perfect 50/50 balance point, where you magically finish everything you need to do at work and still have time left over for going to hot yoga, making homemade bone broth, getting 8 hours of sleep, and everything else Instagram tells you to do to be a well-rounded human. Reality looks a little more like this: You’re working on that report but you have to leave the office early because you haven’t been to the dentist in an embarrassingly long time. Or you’re trying to meal prep at home when an important email comes in, and next thing you know you’ve burned everything and you’re stuck eating instant ramen for lunch tomorrow. Or any one of about a thousand other scenarios that have happened to all of us, pretty much every single day. Simply put, when you’re at work, your personal life seeps in, and when you’re at home, your brain’s often still at work. More frequently, it’s a combination of all those things, happening all at once. And when you have that paragon of balanced perfection in mind, the constant spillover effect can make you feel as though you’re failing on both fronts. No matter what we try, work-life balance always seems like a destination that we have yet to reach. It’s around the corner, out of our grasp. Maybe, we think, we could get there if we rearranged a little, woke up earlier, or just tried harder. But maybe the problem isn’t what we’re doing, but rather the concept of work-life balance itself. Perhaps it’s time for a new standard: work-life blend. A healthy balance An American Sociological Review study found that seven out of ten US workers struggle with this issue, so you’re not alone. But figuring it out is really important. Not just for your own sanity, but for your health, your productivity, and your company’s bottom line. One study found that work-family conflict can increase poor physical health by 90 percent, while another found that work-induced stress can increase your risk of mortality by almost 20 percent. But reducing work-life stress brings numerous benefits, such as lowered hypertension, better sleep, less alcohol and tobacco use, decreased marital tension, and improved parent-child relationships. So it turns out how you work affects how well (and how long) you live. Given how important it seems to be, why is ‘work-life balance’ so hard to actually achieve? Finding the right words Meetings and presentations, errands and appointments, conference calls and research, laundry and takeout, pets and sippy cups — they’re all threads in the fabric of this little thing called life. In some ways, the very idea of work and life as two things to be balanced sets us up for failure. For one thing, ‘balance’ implies that one of those components is a negative that needs to be counteracted, like the dark side of the force. But there’s nothing negative about having a job and a life. More importantly, work really isn’t this ‘other’ thing overshadowing your life. It’s a huge part of your life. Even if you’re not incredibly passionate about your day job, it’s still where you probably spend the bulk of your time. 
Meetings and presentations, errands and appointments, conference calls and research, laundry and takeout, pets and sippy cups — they’re all threads in the fabric of this little thing called life. In pursuit of work-life balance, we treat them as different entities, trying to separate the individual strands. It’s a stressful, unrealistic, and unnecessary exercise to put ourselves through. So ‘work-life balance’ just isn’t working anymore. We need something different. Something more fluid. Something that captures the way we actually work, live, and do all the things we do in between when our eyes first flutter open and when our heads hit the pillow again at night. We need to be focusing more on work-life blend. How to actually build work-life blend Work-life blend doesn’t mean that everything is happening at the same time, all the time. It’s about finding a way to fit together the important pieces. The truth is that it’s going to take some effort to pivot from the ideal of work-life balance to being content with the reality of work-life blend. It will be messy, and it will be hard, but it’ll be worth it. Here are some tips for cultivating and practicing work-life blend: 1. Acknowledge the blend. As with almost anything, the first step is acknowledgment. We need to come to terms with the fact that work-life blend is how our life actually is, instead of striving to create perfection. We can’t let the amorphous pressure to ‘have it all’ pour in through the seams, making us feel like failures. This can be hard, especially when you’re scrolling through a feed of perfectly crafted photos from people who appear to have it all figured out. “A lot of people try or claim that they have perfected balance. But in reality they’ve just drastically deprioritized, so they really are just working on fewer things,” says Joshua Zerkel, a certified professional organizer, productivity expert, and former head of community at Evernote. “The key is to accept reality and then come up with some strategies to prioritize within your blended lifestyle, knowing that’s the playing field,” he continues. 2. Be clear on your priorities. Part of the reason why work-life balance often doesn’t work out is that it’s pretty tough to do it all. “The biggest challenge people run into with trying to have a balanced or even blended life is that they want to fit all of it in,” Joshua observes. And doing all of the things is not really a plan (nor is it balance). Work-life blend doesn’t mean that everything is happening at the same time, all the time. It’s about finding a way to fit together the important pieces. “To me, work-life blend is like Tetris,” Joshua says. “You have to fit the pieces of your life in in a way that makes sense to you. The difference is that you’re choosing which blocks to fit, instead of just having this big pile of blocks in the corner giving you anxiety.” Figure out the key components that you want to get to in your days, whether it’s fitness, self-care, meals with the family, and schedule them on your calendar at a regular cadence. Treat them with the seriousness you bring to meetings and deadlines at work. 3. Set boundaries. Once you’ve determined the pieces that matter most to you, you need to carve out time to make them happen. “I’m a big fan of time-boxing things,” Joshua says. “Give yourself time and space for personal things and then for work things. 
If you have a loose framework laying out where you intend to spend your time, it won’t feel like this big overwhelming mess.” Of course, the other piece to this is knowing that sometimes your boundaries will change and bleed over, and you have to be okay with that. “Your time boxes will definitely break,” Joshua observes. “It’s okay if you run over working on your project or miss family dinner this week.” Acknowledging that things are imperfect and will naturally overlap is key to making it work. Your boundaries can’t be so rigid that they won’t bend to give way to the irregularities of real life. Even if you can’t eliminate overlap, you can minimize it. Try out small tactics, such as using a different computer to get personal tasks done so you’re not tempted to check those Slack messages. 4. Check in on how you’re doing. After you’ve identified your priorities and set up rough guidelines for how you want to allocate your time, you need to check in with yourself and see how your new approach is making you feel. Ryan Smith, co-founder of Qualtrics, developed a weekly system to evaluate his progress. “Each week, I examine the categories of my life — father, husband, CEO, self — and identify the specific actions that help me feel successful and fulfilled in these capacities,” he says. “This weekly ritual helps me feel like I’m doing everything in my power to address my needs and the needs of those around me.” Whether it’s in a journal or with a template in Evernote, track how you’re feeling in regards to work-life blend on a daily, weekly, or monthly basis. If it doesn’t feel like the right mix, come up with some tactics to adjust. 5. Understand it’s a process. As with any kind of new habit or change, this is not something that’s one and done. You can’t just check work-life blend off your to-do list. “It’s tempting to think, okay, tomorrow I’m going to have work-life blend,” Joshua says, “but of course, it doesn’t work that way.” It’s important to be okay with adapting and evolving; after all, work-life blend means that there aren’t specific ratios or quotas you have to hit. “These are steps in an ongoing process that doesn’t end until you die — or get lots of assistants to help you manage it all,” Joshua wryly observes. You’ll always be tweaking and adjusting, and you’ll probably constantly feel like you’re not getting the ratios right, but as with any good recipe, it tends to work out when it all comes together.
https://medium.com/taking-note/forget-work-life-balance-its-all-about-the-blend-ad3115ed1fa4
[]
2018-02-06 14:56:01.549000+00:00
['Life', 'Work Life Balance', 'Personal Development', 'Work', 'Productivity']
Clear Signs A Foreign Girl Likes You
A lot of the time, men struggle to make the first move because they aren’t sure if their feelings are being reciprocated. Signs of attraction are a bit hit and miss when it comes to women, foreign or not. Some are obvious from the get-go, showing signs that a woman is attracted to you, however it doesn’t apply to all women. Clear signs of female attraction sometimes vary, hence why plenty of men have trouble trying to figure out if a girl likes them or not. However, there are some universal signs as well. This applies to both men and women alike. Simple gestures that are obvious when you’re really looking, like leaning in close when talking or smiling a lot when together. But if ever you’re trying to make sure before you make the first move, it’s best to take note of what these signs of attraction from a woman are. What Are The Signs Of Attraction? How women show interest is in the subtle ways they move whenever you’re around. You need to pick up on the physical signs a woman is interested in you before you take the plunge. 1. Standing Fully Erect, Stomach Tucked In and Shoulders Pulled Back When someone shows signs of interest, no matter what gender, they always want to look their best in front of the person they like. If a woman stands with a tall posture, stomach tucked, and shoulders pulled back, there is a chance that she may be interested in you. You can easily see this posture whenever she walks away or approaches you. If she senses that you’re looking at her then she makes adjustments in the way she moves. Sometimes, she might even lick her lips subconsciously if she approaches or passes by you. Stroking her neck is another habit that might affirm to her having feelings for you as well. Ladies do this to emphasize their face or body, a way to show you that they were paying close attention to you. She’s Comfortable Enough To Touch You Women will often place their hands on a guy if they are interested in him. She’ll graze your arm if she’s into you while agreeing with something you’ve said. Maybe even rub your hands in a show of affection. A lot of women are touchy when it comes to the person they like. They will want to touch, hug and pat the person and feel comfortable enough to keep their hands on them all the time. This type of body language portrays how she feels comfortable enough around you, wherein she isn’t scared, nervous or awkward when it comes to interacting with you. However, don’t mistake these touches as an invitation to something more. These signs of attraction don’t necessarily translate to wanting a very intimate session in a private room. It just means that she likes you. 
She’ll show signs of being romantically interested in you if she starts to fidget with your clothing like picking pieces of lint away from your collar and such. If you still aren’t sure, you can touch her gently to test out her response during this. As stated above, some women are just affectionate. Being touchy sometimes means interest and sometimes it just means nothing. What Attracts A Woman To A Man? “Common ground” is what usually brings a man and a woman together. Women are attracted to a man not just in looks or personality, but also on the commonalities that they share. That and how a man shows his open interest with a woman. A fantastic way of getting out of the friendzone is making your intentions clear to her. People don’t just outright say “I’m attracted to you”, you see. Dropping some signs of attraction will let the woman know. Showing her that your intentions are for romance will speed up the process so you two can avoid beating around the bush. If she’s the one dropping the signs but is being sly about it, you’ll at least know by how much she’s maintaining eye contact and the like. Laughing at your lame jokes and smiling a lot are dead giveaways too. Just look for the signs and watch out for her reactions. They’re easy to spot if you know what you’re looking for.
https://medium.com/@abigailgonzalezofficial/clear-signs-a-foreign-girl-likes-you-97ce8ca67bbf
['Abigail Gonzalez']
2021-05-16 22:43:46.274000+00:00
['Dating Advice', 'Dating', 'Dating Advice For Men', 'International Dating']
Create Your Own Lattice Boltzmann Simulation (With Python)
Fluid dynamics on a Lattice We will begin with a microscopic description of a fluid that lives on a lattice. For this exercise, we will consider a 2-dimensional lattice with 9 possible velocities at each lattice site (D2Q9). There are 4 connections running North, South, East, and West, 4 diagonal connections, and 1 connection from a node to itself representing zero velocity. Each lattice site also has a weight wᵢ associated with it. The microscopic particles that make up a fluid can be described with the distribution function f(x,v), which describes the phase-space density of the fluid at location x traveling with velocity v. The particles do two things: stream and collide. This behavior can be captured by the BGK approximation, where the left-hand side represents streaming and the right-hand side approximates collisions. In this approximation, τ is the timescale on which collisions happen, and the distribution function f tends towards some equilibrium state f^eq as a result. The equation may be discretized onto the lattice, where i denotes 1 out of the 9 lattice directions (with velocity vᵢ). Moments of the discrete distribution function can be taken to recover fluid variables at each lattice site, for example the density and the momentum, where the sums run over all lattice directions. It can be shown that this description approximates the Navier-Stokes fluid equations. Streaming The first step in the Lattice Boltzmann method is to stream the particles. This step is incredibly simple. Conceptually, here is what happens. At each lattice site, for each direction i, the value Fᵢ is shifted over to the neighboring lattice site along the connection. The Lattice Boltzmann method typically uses units of Δt=Δx=1, and we will use this convention throughout. The streaming velocities are hence: (0,0), (0,1), (0,-1), (1,0), (-1,0), (1,1), (1,-1), (-1,1), (-1,-1). Collisions Next we need to define the equilibrium state as a result of collisions. This depends on the fluid model’s equation of state. For this example, we will assume an isothermal (constant temperature) fluid, which has a constant sound speed. We define units using common conventions such that the lattice speed is c=1 (which corresponds to a sound speed squared of 1/3). The equilibrium state is given by a quadratic expansion of the distribution function in the fluid velocity, which corresponds to the isothermal Navier-Stokes equations with a viscosity set by the relaxation time τ. Boundary Boundary conditions in Lattice Boltzmann are implemented on the microscopic level. In our simulation, we wish to add a solid cylinder. Lattice sites that are part of this cylinder are flagged, and particles there behave differently. In our example, we will consider reflective boundary conditions. Instead of collisions that lead to equilibrium, particles will simply bounce back. This is easily accomplished by swapping the lattice directions i and j that point in opposite directions. Lattice Boltzmann Method That’s it conceptually. Let’s put it all together! The code below sets up the lattice and initial condition for Fᵢ, and alternates streaming and collision(+boundary) operators to evolve the system. It is remarkable that this restricted microscopic representation is able to capture macroscopic fluid behavior. Flow Past Cylinder The initial conditions above place a static cylinder into a periodic box with rightward moving fluid. As the flow progresses, turbulence develops in the wake behind the cylinder. This is known as the Kármán vortex street.
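The equations referenced above appear as images in the original post and were lost in this text-only version. For reference, the standard D2Q9 relations the text describes can be written as follows; this is a reconstruction of the usual textbook forms, not the author's exact rendering:

% discrete BGK update (streaming on the left, collision on the right), with \Delta t = \Delta x = 1
F_i(\mathbf{x} + \mathbf{v}_i,\ t + 1) - F_i(\mathbf{x}, t) = -\frac{1}{\tau}\left( F_i - F_i^{\mathrm{eq}} \right)

% moments recovering the fluid variables at each lattice site
\rho = \sum_i F_i , \qquad \rho\,\mathbf{u} = \sum_i F_i\,\mathbf{v}_i

% isothermal equilibrium, with lattice speed c = 1 and sound speed c_s^2 = 1/3
F_i^{\mathrm{eq}} = w_i\,\rho \left( 1 + 3\,(\mathbf{v}_i \cdot \mathbf{u}) + \tfrac{9}{2}\,(\mathbf{v}_i \cdot \mathbf{u})^2 - \tfrac{3}{2}\,|\mathbf{u}|^2 \right)

% bounce-back boundary inside the solid: swap each direction with its opposite
F_i \leftarrow F_j , \qquad \mathbf{v}_j = -\mathbf{v}_i

% resulting kinematic viscosity (the dynamic viscosity is \mu = \rho\,\nu)
\nu = c_s^2 \left( \tau - \tfrac{1}{2} \right) = \tfrac{1}{3} \left( \tau - \tfrac{1}{2} \right)

Here wᵢ are the lattice weights: 4/9 for the rest particle, 1/9 for each of the four cardinal directions, and 1/36 for each diagonal.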
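The code referred to above does not survive in this text-only extraction either. As a stand-in, here is a minimal, self-contained Python sketch of the loop just described. It follows the velocity ordering listed in the Streaming section; the grid size (400×100), relaxation time τ = 0.6, mean density, and step count are illustrative assumptions rather than the author's exact parameters.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters (assumptions, not necessarily the original values)
Nx, Ny = 400, 100          # lattice resolution
rho0 = 100.0               # mean density
tau = 0.6                  # collision (relaxation) timescale
Nt = 4000                  # number of time steps

# D2Q9 velocities in the order listed above, the index of each opposite direction, and the weights
vxs = np.array([0, 0, 0, 1, -1, 1, 1, -1, -1])
vys = np.array([0, 1, -1, 0, 0, 1, -1, 1, -1])
opposite = [0, 2, 1, 4, 3, 8, 7, 6, 5]
weights = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

# Initial condition: near-uniform density with a rightward drift plus a small perturbation
rng = np.random.default_rng(42)
F = np.ones((Ny, Nx, 9)) + 0.01 * rng.standard_normal((Ny, Nx, 9))
F[:, :, 3] += 2.0 * (1 + 0.2 * np.cos(2 * np.pi * np.arange(Nx) / Nx * 4))
F *= (rho0 / F.sum(axis=2))[:, :, None]

# Flag the lattice sites that belong to the solid cylinder
X, Y = np.meshgrid(np.arange(Nx), np.arange(Ny))
cylinder = (X - Nx / 4) ** 2 + (Y - Ny / 2) ** 2 < (Ny / 4) ** 2

for step in range(Nt):
    # Streaming: shift F_i to the neighboring site along its lattice velocity (periodic box)
    for i in range(9):
        F[:, :, i] = np.roll(F[:, :, i], vxs[i], axis=1)
        F[:, :, i] = np.roll(F[:, :, i], vys[i], axis=0)

    # Reflective boundary: record the post-streaming values inside the cylinder with directions reversed
    bounced = F[cylinder][:, opposite]

    # Moments: recover density and momentum (hence velocity) at each lattice site
    rho = F.sum(axis=2)
    ux = (F * vxs).sum(axis=2) / rho
    uy = (F * vys).sum(axis=2) / rho

    # Collision: relax toward the isothermal equilibrium distribution
    Feq = np.empty_like(F)
    for i in range(9):
        vu = vxs[i] * ux + vys[i] * uy
        Feq[:, :, i] = rho * weights[i] * (1 + 3 * vu + 4.5 * vu**2 - 1.5 * (ux**2 + uy**2))
    F += -(1.0 / tau) * (F - Feq)

    # Bounce-back: overwrite the cylinder sites with the reversed distributions
    F[cylinder] = bounced

# Plot the vorticity of the final velocity field; the Karman vortex street shows up in the wake
vorticity = (np.roll(uy, -1, axis=1) - np.roll(uy, 1, axis=1)) - (np.roll(ux, -1, axis=0) - np.roll(ux, 1, axis=0))
vorticity[cylinder] = np.nan
plt.imshow(vorticity, cmap='bwr')
plt.axis('off')
plt.show()

Note that the bounce-back values are captured right after streaming and written back after the collision step, so sites inside the cylinder never relax toward equilibrium; that is exactly the reflective boundary described in the text.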
https://medium.com/swlh/create-your-own-lattice-boltzmann-simulation-with-python-8759e8b53b1c
['Philip Mocz']
2021-02-05 21:02:02.847000+00:00
['Python', 'Lattice', 'Physics', 'Fluid Mechanics', 'Simulation']
The Future of The Mandalorian
Is Grogu gone for good? In the Season 2 finale, Mando (Pedro Pascal) aka Din Djarin, and friends pull off a daring rescue attempt to rescue baby Grogu from the clutches of the sinister Moff Gideon(Giancarlo Esposito). All seems to be going to plan, with the kidnapping of Dr. Pershing(Omid Abtahi), team up of Boba Fett(Temura Morrison), Yall wanna go do hoodrat shit? Cara Dune (Gina Carano) , Finnec Shand (Ming Na Wen), Bo-Katan(Katee Sackhoff) and Koska Reeves(Sasha Banks) and boarding of the Moff’s ship, and distraction by Boba Fett, until the Darktroopers are activated. (Seriously, those things are just overkill; imagine a cross between Iron Man, and Darth Vader). “Uh how do I rotate this PDF?” Mando struggles to contain them and has a hard time dispatching just one while the others are blasted into space. He eventually succeeds, meeting up with Moff Gideon soon after and defeating him on one on one combat, confiscating the Darksaber and rescuing the obscenely cute Grogu. However, just as everything seems to be going well and defeating all the bad guys, the evil, overpowered Darktroopers return to finish the fight, with Mando and friends trapped on the bridge. “Klaatu barada nikto” All of a sudden, in true fan service, a familiar lone X-Wing appears. “ I’m here to talk to you about our lord and savior, Mace Windu” A robed, hooded figure with a familiar green lightsaber enters the scenes and in true Jedi fashion, dispatches the Darktroopers with ease. As the mysterious figure makes its way to Mando and friends. As the mysterious Jedi reveals himself, we are treated to, none other than the original hero of the Star Wars franchise himself, Luke Skywalker. “I’m Batman” (Using a de-aged CGI rendering and voice of Mark Hamill) Luke offers to take Grogu and train him in the ways of the Jedi, but needs Mando’s permission first, since Grogu views Mando as a father figure. “Thunder buddies for life, kid” In an emotional goodbye, Mando removes his helmet and allowed his little buddy to finally see his face, “Boy, look at me boy!” and allows Luke to take Grogu and train him, “Uh, what do i feed this thing?” while promising to see him again soon. (Major Oof, right in the feels.) “ One yall gonna throw me the cat right quick, idc which one” I wonder if Luke sensed Boba Fett passing by as he exited hyperspace, kind of like seeing someone you know in traffic? Will Grogu meet Kylo Ren in Jedi training? Is Din technically the King of Mandalore now? This episode has many Star Wars fans thinking- Is Grogu gone for good? Will he return in season 3, or will it be just Mando and friends from now on? Besides, the show is called- “The Mandalorian” not “Mando and Grogu”. Well, looking at the facts, I think he will. The Father and Son dynamic is integral to the roles of key players in the Star Wars films. Luke and Obi- Wan in A New Hope, Luke and Obi-Wan and Yoda in the Empire Strikes Back, and Luke and Vader in Return of the Jedi. In the Phantom Menace, it was Qui-Gon and Obi-Wan/Anakin, Clone Wars it was Obi-Wan and Anakin, briefly one by Jango and a young Boba Fett, and in Revenge of the Sith it was Anakin and Palpatine/Obi-Wan. Even the newer movies a parent-child and mentor/student dynamic is also explored and touched upon, Rogue One it is Jyn and Galen Erso, Solo, its Han and Beckett, Force Awakens it’s Kylo Ren and Han Solo/Leia, Rey and Leia. Last Jedi it’s Luke and Rey, Poe and Leia, and to a degree Kylo and Snoke. 
And finally Rise of Skywalker, Luke and Rey, Kylo’s redemption with Han, Leia’s goodbye (due to Carrie Fisher’s death) as well as Rey’s revelation to be Palpatine’s granddaughter. That being said, this theme is integral to the Star Wars franchise. And, the multiple obnoxiously cute Grogu and Mando interactions made the show, and Mando’s protectiveness of baby Grogu. The Mandalorian has ridiculously high levels of viewership, And the millions of Star Wars fans, both young and old are some of the most critical and anal fans in the world.We know that there is a third season set to release in December 2021. They’ll probably explore in that season the fact that since Mando rightfully defeated Moff Gideon and earned the Darksaber, he has a claim to being ruler of Mandalore. Boba might go off and do his own thing, since he has a spinoff coming out. Maybe Ahsoka might show up as well, but she also has her spinoff. (Maybe Din Djarin and Grogu might also show up in them as well). Personally, I think Bo-Katan and Mando might duke it out for the throne of Mandalore or maybe even hook up. “Ayo ma, I wanna holla at you right quick” Also, Disney and Lucasfilm wouldn’t dare miss out on the millions of revenue of Grogu toys(Which, let's be honest, is the whole goal, making mad money son- bookoo gwop, nuff dollaz, lotta duckets).
https://medium.com/@aleem.s.hydarali/the-future-of-the-mandalorian-73a9e40d9dae
['Desperado Intellectual']
2020-12-19 16:25:42.806000+00:00
['Lucasfilm', 'Disney Plus', 'Star Wars', 'The Mandalorian', 'Baby Yoda']
Fear is Your Teacher, Not the Truth
So many of us go through life, believing everything fear has to say. And aligning our lives to it. We engage with it as truth and construct much of our lives based upon it. When in fact, the energy of fear simply exists to show us what needs to be healed within us. So I ask you, how does fear show up in your life? Are you worried about not having enough money? Never finding love? Or do you just have a general low-grade anxiety most of the time, and you don’t know why? Know this: Underneath your fear is always a clue. And, ultimately, an answer about something within you that’s ready to be released. In fact, the reason a repetitive fear keeps rearing its ugly head is that you aren’t listening to what the energy of fear is truly trying to say. So instead of believing what fear is voicing to you on the surface, look beneath. And ask, “What is this really teaching me?” Curiosity is key. Let’s take fears about money. Whether it’s never having enough or losing what you have. When this shows up, which is common for so many of us, pause. Then ask, “What’s missing in me that’s driving this fear?” Perhaps it’s a sense of security. Or just a general belief that you are unlucky and that nothing ever works in your favor. Take it a layer deeper. What’s really underneath this? Were you taught that nothing and no one is ever safe? Were you taught that you’re unworthy of abundance? Take note of whatever you find. And as you do, be willing to look at this fear-based illusion head on. Question it and all the underlying fears that construct it. Be honest with yourself as you dig into the ACTUAL truth behind these false evidences that seemingly appear real. Using our example, ask if it’s true that you’re not worthy of abundance. Is it really? Or, ask whether it’s true that security is really something that’s absent from your life. Is it really? Do you not have everything you need right now, in this moment? These are just some simple examples for you to consider. The point is: When you take fear at face value and don’t look beneath…you create your own form of imprisonment. You allow fear to control you. In fact, many of us become ruled by it. But you can step out of it. And it starts with a conscious intention and a simple pause. Instead of immediately engaging with fear as truth, question it. Bravely look at what lies beneath and stop assuming that your doubts and worries are real. Instead, look to fear as your greatest opportunity for healing. For as you do, you lift into greater light, fulfillment and joy. Your true destiny. Originally published on JessicaJoines.com 4/8/19
https://medium.com/thrive-global/fear-is-your-teacher-not-the-truth-60a82d46073a
['Jessica Joines']
2019-04-09 19:00:13.131000+00:00
['Self Improvement', 'Spiritual Growth', 'Purpose', 'Fear', 'Wisdom']
How licence free pictures really affect your blog and where to find them
There are blogs entirely without and others that crush the reader with it: pictures! After working without pictures on this blog, I have begun to add them since some weeks. It was not only one comment in my last article but also the expected effect. I am using pictures in all blogs now but to various outcomes. The right choice of pictures for a blog is not that simple after all. Which motive? Which price? Licence free pictures or rather buy them? What format? As easy as it is to create a blog, just as complicated it can get with pictures. The topic of pictures in blogs opens up many questions which I look at systematically here and which I will answer from my own experience. From the choice of the right picture, to the platforms where good pictures are found, to the measureable outcomes, I will discuss all of that in this article. One thing in advance: Some outcomes are actually really surprising! A picture tells more than a thousand words This quote transfers more content than initially conceived. The majority of pictures in social networks can be processed by our brain within only a few milliseconds. But somewhere is a limit. Besides the atmosphere a picture always delivers also emotions and information which we partly and unconsciously internalize, that in extension changes our reading habits, only to give one example. The article picture therefore has the intention to prepare the reader to the text and to adjust his basic attitude to it and to provide a better introduction. Also, the pictures should make you curious; pictures build up tension, so to convince the reader to click on the link to the article and in that way to learn more about the offered topic. Pictures are building bridges in between expectation and outcome. This aspect is particularly important within social media marketing but more to that later. First we need to look at another question, if we want to understand why pictures create effect: How do we perceive a picture? Doctors at the University of Münster, Germany, have found in a research project that reviews to some extent the ‘picture superiority effect’ in collaboration with marketing experts and concludes the following: the brain turns off areas of reason when familiar brands show up and at the same time brain areas of emotional and instinctive action turn more active. A sign that peoples brains respond to certain familiar pictures and reflexively execute actions which stand in relation to it. What does that knowledge mean in the blog environment? The brand pictures are the article pictures and the reflexive execution matches with the click on the article link. If we offer the reader a picture which touches him emotionally or thematically, the chances that he will click the link are certainly increasing. So far I could have brought in this understanding myself. A further interesting reality is that we retain information of things six times better if they are read in context with a picture. If you offer your reader a (memorable, not just any) picture to your article, he will remember your blog, your article, and therefore yourself much longer. This precious information can provide you with a real advantage in times of the blogger-flood. Pictures make noise and animate areas in the brain which convey the feeling of familiarity. The effect on the reader is clarified with that. But do pictures help to actually increase the reach? I have reviewed the effects in detail and evaluated amazing findings. Facebook: Are pictures really increasing organic reach? 
People get interested with pictures but with machines this looks rather different. We all know web-pages that draw attention with lurid headlines and support those with catchy pictures (heftig.co is a real go goer for that in Germany). It is exactly this effect that Facebook tries to counter act upon in preference of grounds for qualitative journalism. But the same effect can also be a role model to all: one headline that draws curiosity and a relating emotional picture to suit the message. If qualitative content would follow, also Facebook would do nothing about it. Over the last years social media experts in my environment pointed out to me more than once that posts with pictures create a far better reach. That stood true for quite some time. At this point timelines and newsfeeds of users are bursting from the seams because they are crammed-full with pictures that often force and pose to build tension or frantically struggle for likes and clicks. It is a pity but a measureable outcome that we have to act upon. I can derive the observation that Facebook is clogged up with pictures by taking a small sample from our FastBill — Fan page. All data stands in relation to posts from the last three months and are aimed to deliver a small overview on the organic reach of our posts. The astonishing outcome of my 10 minute analysis The included links to posts always refer via a short link to the FastBill Blog and offer an added value. It is clearly visible that the articles without pictures (besides one exception) reach an irregular but consistently higher range. This is followed by articles with a link-preview, spreading only very erratic. The articles with a text link and picture can hardly gain any reach and change only marginally in the lower value ranges. The result shows that posts with link and without picture work, in this case for FastBill the best. Be your own social media adviser! Sometimes you are your own best social media adviser! Analyses like above takes no more than 10 minutes and deliver insightful findings about your fans and your audience. Focus yourself on the best functioning kind of posts on your page. Sources for qualitative and licence free pictures To use pictures in posts and on the blog one has to find them first. To create them oneself is surely the best and most cost effective variation but as an entrepreneur we think efficiently. It is even simpler to make a brief search, download or buy. 20 Dollar for a picture is still cheaper than a one hour outdoor shooting with a handful of people. At the same time you benefit from the variety of picture-data-banks which you hardly have anywhere else. On the other hand you run into the danger that a picture you used appears somewhere else as well (just happened to me recently). In spite of this risk I decide to buy pictures or to download licence free pictures. Here are some resources which I am using for my blogs myself: Picjumbo.com Picjumbo offers free and much licences free pictures which can be used commercially or private. The page is sponsored by advertisements and offers besides the free picture download also an option to sponsor, bringing you “into the Hall of Fame.” You can request to get the latest pictures sent to you via e-mail. Littlevisuals.co Seven pictures every seven days. This is the system with which Little Visuals functions. Besides the free of charge download of the package on the web-site you can receive those also via e-mail. The majority of pictures are naturalistic, some jewels guaranteed within. 
Unsplash.com Unsplash — a real insider tip. Brilliant pictures and motives which are not common stock and nevertheless available for free (free of charge and licence free). Every ten days one gets ten pictures — on request also via e-mail. Deathtothestockphoto.com You wish for the death of the stock photos? They can reach that since their pictures are of highest quality. The makers of deathtothestockphoto.com deliver fresh pictures once a month via e-mail, which are bound to be found in many popular magazines. Free of charge! Stocksy.com Stock pictures don’t have the best name. Often words like “simple” “boring” or “posed” come to surface. Stocksy brings that to an end. Besides standard stock motives you also find more offbeat motives. Often the coercion of payment and the not always cheap pricing from stock portals come under question. In my case, I do look at every post also as an investment towards my own visible presence and the increase in reach. If I compare the effectiveness of one blog post with other payable methods than 10 Dollars seem like nothing and add an appealing and suitable picture. Even 20 Dollars for the format which you require on Facebook is from my point of view very acceptable. Everything that is law: the copyright In principle always name the creator! That does not necessarily reflect the legal situation but shows decency and appreciation. The majority of the stated sources above offer licence free, which means free to use pictures. Mostly even without stating the name. In order to communicate picture rights more clearly the Creative Commons (CC) Licence was called into life. Different symbols inform about the possibility of utilization and the joint responsibilities. You can find an overview of the symbols, the different ways of how the ownership has to be identified here. The choice of the right picture is essential Of course particularly pay attention to one thing: the motive. It has to suit the story in the text and the feeling of the blog. It should transfer the mood accordingly and should not be too fragmented, so that readers clearly grasp the representation. Create an experience with every picture and make sure your readers acknowledge that each one is an “editor’s pick.” Plan in the necessary time for the picture research and avoid rushing your choice. This time is a good investment. Also try to maintain your style that creates harmony between your pictures and is agreeable to the overall picture of your blog. Your readers will benefit from the clearly established structure and will like to remember your blog with that disposition. The picture format Besides the theme of your blog it is mainly Facebook and Google that specify the required format of your pictures. Here are the formats at a glance: • Facebook (http://www.jonloomer.com/2014/01/20/facebook-image-dimensions/) • Google+ (http://purrfectlysocial.co.uk/new-google-image-sizes/) • Twitter and further information on Facebook and Google+ (http://postcron.com/en/blog/social-media-image-dimensions-sizes/) Practical tip: Currently I choose pictures that suit the Facebook format for the timeline (1200 x 627px). It also fits nicely into the formats of other networks. Should there be any problem with one of your pictures it is very important to build in a fallback picture. For WordPress blogs this can be read here. It is the fallback picture that is being delivered in case no other one can be found. 
My findings Article pictures and pictures in social media networks can increase the reach, but they don’t have to. In any case they do have an effect on your reader and how he is taking up your post. As in my case, for LSWW, I am going to continue working with pictures on Facebook. On the long run I will maintain the strategy: “The mix does it.”
https://medium.com/lets-see-what-works/how-licence-free-pictures-really-affect-your-blog-and-where-to-find-them-15e97a6a3a7c
['Christian Häfner']
2015-08-29 17:13:07.141000+00:00
['Creative Commons', 'Marketing', 'Blogging']
Am I suitable to code?
Am I suitable to code? No. Don’t take no for an answer. Misconception #1 — you have to start learning when you’re young. Not true. Many amazing coders began coding when they attended college (so around 18 years old). You don’t have to be coding since you are ten to be good at it. A great example of a late bloomer would be Wayne Gould. He is the man responsible for popularizing the Sudoku puzzle. After retiring at age 52, Wayne started learning to code and spent six years to develop a program that can generate Sudoku puzzles in bulk. He went on to supply those puzzles to newspapers and magazines in Europe and America. Misconception #2 — you have to be great at math This is a common misconception and one that prevents a lot of people from taking the first step. Honestly, you just have to know basic maths. What is the result for 1+7? If x + 2 = 10, what’s the value of x? If you understand those two equations, you will be okay at coding for web and app development because most web and mobile applications deal with qualitative things rather than quantitative things. Unless you are pursuing data science or advanced algorithms, further understanding of mathematical theories is not required. Misconception #3—you have to quit your job Do you have to quit your job to have kids? You certainly can, but you don’t have to. Likewise, you can learn coding while working full-time. When you quit your job, you are also foregoing the monthly income that you could be earning. This is an opportunity cost and should be counted as part of your total cost. Learning part-time is less risky and allows you to learn whenever you want. If you decide to study part time, spending a few hours a week of learning can already produce great learning outcomes. The key here is you have to make those hours count. Plan out a time slot which you know you will not be interrupted by others. Misconception #4—you have to study it at a university or a college What exactly does a university course provide? A syllabus, an instructor and teaching assistants. None of the above matters unless you actually put in the time to learn. If you have passion, devotion and perseverance, nothing can stop you from learning how to code. So, do you have to learn coding at a university? Not really. There are plenty of free resources available online. Having a structured syllabus and a mentor to guide you can make your learning process much more effective and efficient. In-person and online coding bootcamps (like Altcademy) can offer a comprehensive syllabus and mentorship support while being an order of magnitude more affordable than universities. Misconception #5 — you have to be gifted Coding is a skillset learnt and acquired over time. Yes, some people find it easier than others to learn how to code. But, coding isn’t a 100 metre dash. Coding is an ultra-marathon. Focus on your own progression and improvements, and don’t compare yourself with others. Make sure you are becoming better every day. Once you have learnt the fundamental concepts and become comfortable with coding, it’s important to move forward and find a niche that you are passionate about. It could be front-end web development coupled with graphics design. It could be group chat systems or financial trading systems. It could be building tools for other fellow coders. As long as you keep learning and keep pushing yourself, you will find a place where you can shine. 
Find your reason to start Instead of focusing on why you are not suitable to become a coder, look for reasons why you should learn to code. Coding is more than just a skill; it is a different paradigm for how to think. Many of the conveniences we enjoy today are the result of code and software. Facebook, Instagram, Uber, Google and Airbnb are all powered by software. Learning to code can be one of the best decisions you make in your life.
https://medium.com/altcademy/am-i-suitable-to-code-92bfe6118d7
['Harry Chen']
2019-04-15 08:28:17.183000+00:00
['Learning', 'Programming', 'Codingbootcamp', 'Coding', 'Learning To Code']
A PHP Development Kit for Filecoin Integration
The Filecoin.PHP development kit is designed to quickly add support for the Filecoin/FIL digital asset to PHP applications. It covers scenarios that run against your own Filecoin blockchain node as well as lightweight deployments built on third-party public nodes. Official Filecoin.PHP download: http://sc.hubwiz.com/codebag/filecoin-php-lib/

1. Overview of the Filecoin.PHP development kit

The Filecoin.PHP kit mainly provides the following features: offline generation of Filecoin addresses, which makes key management and maintenance easier; offline signing of Filecoin messages, which helps keep private keys better protected; automatic estimation of the gas parameters of Filecoin messages, avoiding manual tuning; support for your own node or a third-party node, for example the public nodes provided by Infura; and a complete wrapper of the Filecoin node API that supports all RPC API calls, for example querying the historical messages of an address. The Filecoin.PHP package runs on PHP 7.1+; the current version is 1.0.0. The main classes/interfaces and their relationships are shown in a diagram in the original post, and the full list of code files is documented on the official site: http://sc.hubwiz.com/codebag/filecoin-php-lib/

2. Running the example code

2.1 Create a new address. In a terminal, change into the demo directory and run:

~$ cd ~/filecoin.php/demo
~/filecoin.php/demo$ php NewAddressDemo.php

2.2 Restore an address from a private key:

~$ cd ~/filecoin.php/demo
~/filecoin.php/demo$ php RestoreAddressDemo.php

2.3 Transfer FIL and query a balance:

~$ cd ~/filecoin.php/demo
~/filecoin.php/demo$ php FilTransferDemo.php

2.4 Call the RPC client demo:

~$ cd ~/filecoin.php/demo
~/filecoin.php/demo$ php demo-rpc-client.php

(The output of each demo is shown as a screenshot in the original post.)

3. Using Filecoin.PHP

FilKit is the entry point of the kit; with this class you can quickly implement FIL transfers, wait for transaction confirmation, query balances, and so on.

3.1 Instantiating FilKit. Creating a FilKit instance requires an RpcClient object and a Credential object. These two parameters wrap, respectively, the API exposed by a Filecoin node and the user identity used to sign transactions. For example, the following code creates a FilKit instance connected to Infura's Filecoin node and signs transactions with the given private key:

use Filecoin\FilKit;
use Filecoin\RpcClient;
use Filecoin\Credential;

$client = new RpcClient(                          // create the RPC client instance
  'https://filecoin.infura.io',                   // Infura's Filecoin node URL
  ['PROJECT_ID', 'PROJECT_SECRET']                // project ID and secret assigned by Infura
);
$credential = Credential::fromKeyBase64(          // create the identity credential from an existing key
  'AacNySnfq9cdInB1ZUUvJJVTeqaI7LOW9EcX3UEDFfE='  // base64-encoded private key
);
$kit = new FilKit($client, $credential);          // create the FilKit instance

The Credential object passed in when creating the FilKit instance becomes the default identity used for subsequent operations such as transfer transactions.

RpcClient / RPC client. If the Filecoin node you use does not require authentication, only the RPC URL is needed when creating the RpcClient. For example, with a local filecoin node:

$client = new RpcClient('http://127.0.0.1/rpc/v0');  // connect to the local node

If the node uses an authorization-token mechanism, pass the token when creating the RpcClient, for example:

$client = new RpcClient(
  'http://234.10.58.147/rpc/v0',                  // the node's RPC API URL
  'Ynl0ZSBhcnJheQ=='                              // the authorization token issued by the node
);

Credential / identity credential. If your existing private key is a hex string, use the static method fromKey() of the Credential class to create the object, for example:

$credential = Credential::fromKey(
  '01a70dc929dfabd71d22707565452f2495537aa688ecb396f44717dd410315f1'  // private key as a hex string
);

3.2 FIL transfer transactions. Use FilKit's transfer() method to send FIL, for example 1.23 FIL:

$to = 'f1saxri7cpyz2cm767q77u3mqumrggljrmi5iqdty';  // destination address
$amount = '1230000000000000000';                    // amount in the smallest unit, 1 FIL = 10^18 units
$cid = $kit->transfer($to, $amount);                // submit the transfer transaction
echo 'txid => ' . $cid->{'/'} . PHP_EOL;            // show the transaction ID

Note: the transfer amount should be given as an integer string measured in the smallest unit, 1 FIL = 10¹⁸ smallest units. All destination address types are supported, for example:
f01729 : ID address
f17uoq6tp427uzv7fztkbsnn64iwotfrristwpryy : SECP256K1 address
f24vg6ut43yw2h2jqydgbg2xq7x6f4kub3bg6as6i : ACTOR address
f3q22fijmmlckhl56rn5nkyamkph3mcfu5ed6dheq53 : BLS address

3.3 Waiting for confirmation of a Filecoin message. Use FilKit's waitForReceipt() method to wait for the transaction to be confirmed, for example:

$receipt = $kit->waitForReceipt($cid);                // wait for the message receipt
echo 'exit code => ' . $receipt->ExitCode . PHP_EOL;  // show the execution result code; 0 means success

The default wait time is 60 seconds; if no receipt arrives within that window an error is reported. A second parameter changes this default, for example to wait 10 seconds:

$receipt = $kit->waitForReceipt($cid, 10);            // wait 10 seconds

3.4 Querying the FIL balance of an address. Use the getBalance() method to query the FIL balance of a given address, for example:

$addr = 'f1saxri7cpyz2cm767q77u3mqumrggljrmi5iqdty';  // the Filecoin address to query
$balance = $kit->getBalance($addr);                   // query the FIL balance, expressed in the smallest unit
echo 'balance => ' . $balance . PHP_EOL;              // show the FIL balance

Note: the returned balance is expressed in the smallest unit, 1 FIL = 10¹⁸ smallest units.

3.5 Using the RPC client. A Filecoin node exposes a lot of useful functionality through its RPC API, and the RpcClient class of the Filecoin.PHP kit can access all of the Filecoin RPC APIs. For example, to query the current chain head TipSet, the corresponding RPC API is Filecoin.ChainHead, and the call using an RpcClient object looks like this:

// $client = $kit->getClient();                       // obtain the RpcClient instance from FilKit
// or
// $client = new RpcClient('http://127.0.0.1:1234/rpc/v0');  // create a standalone RpcClient object
$ret = $client->chainHead();                          // call the Filecoin.ChainHead API
echo 'height => ' . $ret->Height . PHP_EOL;           // show the value of the Height field

The pattern is easy to see from the code above: take the Filecoin RPC API name, drop the Filecoin. prefix, and lowercase the first character of what remains; that is the name of the corresponding RpcClient method. If the params array of an RPC API contains several parameters, pass them to the RpcClient method in the same order. For example, when querying a specific block with Filecoin.ChainGetBlock, the block's CID must be passed in; the RpcClient call looks like the following:
https://medium.com/@ezpod/filecoin%E5%AF%B9%E6%8E%A5php%E5%BC%80%E5%8F%91%E5%8C%85-55b0db8f2bd7
[]
2020-12-20 05:02:47.861000+00:00
['PHP', 'Filecoin', 'Fil']
sweet ’n’ sour tape #15: goodbye earth
The mortal coils that have held us to the earth are coming unbound. As a human in space, the human rules still apply, like don’t be an asshole. But, the environment is different. Gravity is lighter, and things move a lot faster. This is not a regular tape. In honor of your first body of work, this tape is an appreciation post for the one and only YOU! I remember the feelings I had when you first told me about your ambition to drop out of college and pursue music. I wasn’t surprised. More like, of course. The typical college route never fit us. I adapted to it, and you rejected it. You opted out to pursue a dream, and that dream wasn’t always pretty. At that moment, I thought if there was a person that I knew who could do this, it was you. And, if you worked extremely hard. Trial and tribulation, awkward moments, existential confusion, and a lot of walking later. Here we are. Your skills have matched your vision. It is inspiring. And that is one of the many reasons I am grateful for you. You have always inspired me to make more, attempt to be more, and work harder than anyone else. My ambition was being cultivated from the time we were kids when you asked me to start a video game company, to when we spent winter break freshman year writing and drawing a self-published book. Even if it's indirect, your influence is always felt. Seeing you pour yourself into the music has given me a visceral image of what it looks like to create art. It is painstaking, grueling, sleep-depriving, and the antithesis of what these wannabe creatives think. I know when I work on my art, I need to match that energy. Thank you for continuously inspiring me, and thank you for your insights. Thank you for the support right now as I pursue my own art. You are right; my creative muscle is fragile. I appreciate S.S 14 and the care package. Both items mean a lot.
https://medium.com/the-sweet-n-sour-tapes/sweet-n-sour-tape-15-goodbye-earth-926977376e86
['Kunal Duggal']
2021-03-18 00:51:46.778000+00:00
['Tape', 'Sweet', 'Grateful', 'Space', 'Letters']
Build Your First Widget in iOS 14 With WidgetKit
Let’s Play! ‘TimelineProvider’ TimelineProvider is the engine of the widget — the provider class is mainly responsible for providing a bunch of views in a timeline, along with an option to give a snapshot of the widget. For example, when a user wants to place the widget, a preview of the widget is shown to the user, which is obtained from snapshot . After being placed on the screen, timeline will return the views to be displayed on the home screen. Since widgets load in the home screen, Apple doesn’t want users to look at a bunch of loading widgets. Hence, widgets in iOS 14 are just a bunch of views bundled in a timeline. When the app is opened, it gives the system a bunch of views along with a time label. For example, if your app wants to show a countdown to an event in a widget, your app needs to make a bunch of views from the current time up till the event date, and it needs to tell the system which view to display at what time. For example, if the event is four days away, the app could send in five views: View 1: To be shown today. Has contents “Event is 4 days away.” To be shown today. Has contents “Event is 4 days away.” View 2: To be shown tomorrow. Has contents “Event is 3 days away.” To be shown tomorrow. Has contents “Event is 3 days away.” View 3: To be shown the day after tomorrow. Has contents “Event is 2 days away.” To be shown the day after tomorrow. Has contents “Event is 2 days away.” View 4: To be shown a day before the event. Has contents “Event is 1 day away.” To be shown a day before the event. Has contents “Event is 1 day away.” View 5: To be shown on the day of event. Has contents “Event has started.” iOS displays the appropriate view based on the system time, and hence the widget ends up looking right at any point of time. Also, creating such a bunch of views is a cheap task and doesn’t require constant compute time. Hence, this helps iOS preserve battery as well, adding to the smooth performance. Looking at the code: On line 14, five entries are created, one for each hour. For each entry, a view is created with Static_WidgetEntryView . These views are then provided to the iOS system so appropriate views can be shown based on the time. You can try testing this part out by replacing byAdding: .hour with byAdding: .minute and testing it in a simulator. After the widget is shown, for the first five minutes, the widget view will update once every minute. After this, the widget won’t change and will keep showing the last view the widget extension sent to iOS. Designing the widget view Now, let’s make a custom view whose contents will be rendered into the widget. At this point, we’re going to segregate the widget view and provider since everything is in the same Swift file. We know the TimelineProvider is responsible for building views and showing it onto a timeline. (A timeline is nothing but a time labelled series of views that gets shown on the home screen based on current time. More theory can be found in my previous article.) The main struct, Static_Widget , is responsible for defining the widget views in the application. We’ll be moving the widget view to a new file. Begin by creating a WidgetView Swift file by adding a new file ➡ SwiftUI View ➡ Next. Name it WidgetView . Ensure it has the widget extension checked under the target extensions, and click “Create.” Adding a new SwiftUI view for rendering into the widget The Swift file created would have template code for a SwiftUI view; however, we want to design a widget here. 
In the newly created file, import WidgetKit as well. Now, let’s look at the widget to be created The widget to be created This is a funky, static widget showing a static weight and a relative time — which needs to be kept up to date. Since the widget needs only two variables, we’ll create a struct to store them. Also, we’ll create an extension of the struct to have a preview of the data in order to easily fill in the WidgetData , just for demonstration purposes. Now that the data structure is added, let’s prepare the view and preview for the widget. Create a WidgetView and a provider for the preview, as follows: Writing the boilerplate Here, you’ll notice that the preview has WidgetView previewed with .previewContext(WidgetPreviewContext(family: .systemSmall)) , which renders out the view like a widget. Here the family can be changed to systemMedium and systemLarge to preview contents in medium and large formats as well. By adding in previews for other widget families too, the code for WidgetView gets updated to: Begin with adding views to the body of the widget on line 17. Looking at the design, we need two labels: one for the static text “Weight” and another for the weight from data: WidgetData . So after adding two Text(“”) views in a VStack with a Spacer() in-between, lets also add font and foreground colors to make them look good. To make the contents of the widget left-aligned, set the alignment of the VStack to leading . Now, we need this VStack to be placed inside another view with a background color of cyan, so let’s put the VStack in a HStack and add the background color as well. Also, a little bit of padding might look nice. Here, we’ve used ContainerRelativeShape().fill() for the radius of the background. It’s important to note that according to widget-design guidelines, it’s not recommended to use fixed rounded corners — instead use CornerRelativeShape , which draws its radius concentric to its superview. To make the HStack fully occupy the view, let’s add a Spacer(minLength: 0) along with the VStack in the HStack. Now, we have almost achieved half of what needs to be built. To build the last checked view, add a new VStack to add the two Text views. Here, we’ve used the new Swift syntax for Text to display a date relative to the current time — for example, Text(“data.date, style: .relative”) . Using this with a widget ensures the time shown in the widget is relative to the current system time and is always updated every second. Widget views aren’t meant to update every second, so iOS gives an option to place a dynamic time text. The per second updates are handled efficiently. Now, finally, placing them in a ZStack to get the yellow background color and adding some padding for the VStack inside it, let’s extract the two subviews to WeightView and LastUpdatedView using Xcode. Once that’s done, the final code for WidgetView would be: And the created widgets would look like: Preview for newly created widget By now, you might have noticed we’ve designed the view for .systemSmall family, and the widget isn’t looking good for other sizes. To make the .systemMedium widget look more appealing, let’s add in a relevant icon which is shown only for .systemMedium widget. First, create an environment variable with: @Environment(\.widgetFamily) var widgetFamily The value of this variable is set as .systemMedium if the user has placed a medium-size widget on his home screen. 
In that case, you can use: if widgetFamily == .systemMedium { //Render view here } You can add in the image to either side of the current content. Making the ‘.systemMedium’ widget prettier Exercise Design the widget to look good for .systemLarge format, too. Currently, it looks like: For now, we won’t be supporting the large size for widgets. To support only the desired widget-size families, head to the main function, and add .supportedFamilies([.systemSmall, .systemMedium]) to ensure users can’t place a .systemLarge widget on their home screen. Adding ‘.supportedFamilies’ on line 9 to display specific size classes Putting it all together Now that we’ve designed the widget view, all that’s left is to add this widget view to the app’s widget extension. Remember the @main function we talked about earlier? (Scroll to the “Understanding the boilerplate” section above.) To get the WidgetView as a widget in your app: Add WidgetView(data: .previewData) , as shown, replacing the default widget view, and run this on the simulator: Widgets added to the home screen And voila! You have your first widget added to your iOS application! You can download the completed project here on GitHub.
https://medium.com/better-programming/build-your-first-widget-in-ios-14-with-widgetkit-9b893423e815
['Akashlal Bathe']
2020-07-27 19:42:29.383000+00:00
['Mobile', 'WWDC', 'iOS', 'Swift', 'Programming']
Build YouTube alike Livestreams with React Native
After building an intro on how to upload videos with React Native I would like to go a step further and build live streaming. For me this means that a user should be able to start a video stream that someone else might subscribe to and see a near real-time video. Our first challenge will be to get a stream of video data, the second will be to upload the video as a stream. The current (1.1.5) version of react-native-camera does not allow you to consume a stream of the recording it makes, so we need to build this functionality ourselves. To do this we need to get noticed once the recording starts (we will need to add this), get a file path of the file being written and watching the file for changes. As I don’t want to go ahead and do a fork & PR to react-native-camera right now (partially because I don’t want to patch also the iOS version, I have no device to test this) I decided to use patch-package by David Sheldrick. It allows me to manipulate the package in my node_modules/ folder directly and check the changes in as a patch file. I think the changes I needed to do are quite interesting as they go into detail over how to extend a native library relying on the React Native Event system. Adding the event First of all, we need to add an event to the ones react-native-camera already provides. First, we need a name for our event, so we add this line to the Events enum EVENT_RECORDING_STARTED("onRecordingStarted") , with the first part being the name of the enum and the value being the name of the dispatched event and the name of the React property we are going to set later. Adding the event looks like this in our case: As Java is a very verbose language, let me show you the places that really matter: Line 13 : We create a class inheriting from a generic React Native event : We create a class inheriting from a generic React Native event Line 24 : We take the path of the video from the obtain method and create a new event out of it that stores the path in a private field : We take the path of the video from the obtain method and create a new event out of it that stores the path in a private field Line 51 : We add the path to the serialization so that it is accessible later on : We add the path to the serialization so that it is accessible later on Line 48: On dispatch we use rctEventEmitter.receiveEvent , to send an event with the name we will declare in the Events enum and the data we serialized Adding a small helper to RNCameraViewHelper to easily dispatch the event later on is pretty straightforward. We construct the event here and put it into the event dispatcher of the UIManagerModule of React Native right away. We can get the native module from the react context. Connecting the Event with the Component Now that we are set up to dispatch the event, let’s dispatch it. Again, java being verbose hinders us from understanding what happens here, so let me give you a TL;DR here. First, the path where the video is going to be written to is found (Line 7). We give this path to the record method of the CameraView so it starts recording to that path. If this method returns true the recording could start successfully and that is why we emit the started event afterward (in line 19). With this event being dispatched, let’s write the last bit glue code to get these events right into React Native. 
As you can see all we need to do is to set a prop on RNCamera with the name of our event and if the prop is set on the component we will dispatch the nativeEvent property on our React Native Event to the user of our library. This is what we need to do in react-native-camera to get the path before the video has finished. Getting the changes on the video file Now that we have the path at an early enough point in time we can try to get changes on that file while the data is being written to it. For this, I extended the react-native-fetch-blob package. It has a function called RNFetchBlob.fs.readStream which can be used to stream a file. This is exactly what it does, so let’s take a look at our _onRecordingStarted method: The API looks pretty simple, but if you execute it you will run into a problem: While the video is still being written the underlying implementation only reads the file once in the beginning and sends it to the JS context as a stream. The actual data access is atomic instead of being stream based and this is what we are going to change. For this, we need something like the bash command tail -f that reads the entire file and until it’s aborted gives us the new lines added to the file. The Java equivalent of this is the Tailer package. We can use it to install a listener on the file and we will use the listener to send the data to the Javascript context. Let’s change the RNFetchBlobFS to use Tailer: We pass the existing file system implementation to the listener so that we can emit the stream event with the given helper. The handle method is invoked every time a new update is registered, very similar to the tail -f command. To start Tailer we need to pass the listener we wrote into a Tailer instance and start a thread with it. It would be better to stop it on abort, but we won’t care about that because the data listener unsubscribes once we stop it. Pushing the Video Now that we have an endless stream of data we need to send it to our server. For that, I implemented a simple Web Socket handler on the server that just stores the video in memory until the connection is closed and writes it to our file system afterward. Please note that instead of Web Sockets we could also have used a long-running XHR connection, but for me, Web Sockets were just a bit easier. On the client side, we now need to add Web Sockets into the upload method. That’s it, now we get data to the server which can be assembled to a video on the run. Summary For me, this was a nice challenge, I am sure I did not solve it in the best way possible, but it’s a solution to iterate on. I found it interesting to see that there seems to be no use-case for solving problems like this in React Native, at least the existing libraries don’t seem to cover this problem very well. What I wanted to show is that even problems that might seem a bit big for React Native like streaming are actually feasible when you take the time to dig into the native code. For a production app we would most likely need to do the very same thing on iOS, too, so be aware there might be some work involved.
https://medium.com/react-native-training/build-youtube-alike-livestreams-with-react-native-8dde24adf543
['Daniel Schmidt']
2018-07-25 20:45:32.598000+00:00
['Streaming', 'React Native', 'Android', 'Websocket']
Creating a vision for change
Chaka Bachmann, Equality, Diversity and Inclusion Consultant to the Code Steering Group Major social and political events over the past decade, and in particular over the last year, have shone a spotlight on questions of equality, diversity and inclusion (EDI). Communities are looking to the charities that seek to serve them; they are searching for the places in which they are represented, and they are grappling with how their needs are understood and prioritised at every level, especially at board level. Prioritising equality, diversity and inclusion and taking active steps to address power imbalances is crucial for organisations in the third sector, not only to stay relevant, but more so to be accountable to the communities they are in the service of. The role of trustees The board of trustees has an important role in driving EDI objectives forward and leading courageously by demonstrating commitment and actions to address current inequalities and barriers in their own organisations and governance approaches. When the board owns EDI and becomes part of the EDI management process, they set the tone for the rest of the organisation so everyone can embrace the cultural change with confidence. Part of good governance is to drive this much needed change forward and to hold themselves and the wider organisation accountable to execute good practice, even if it is not always easy. Establishing goals and vision In order to embody an inclusive and equitable approach to governance, the board needs to set goals, collect data and examine change over time. Developing and committing to an EDI vision is an important step in this often slow and intimidating process. Creating an EDI vision serves a number of functions: imagining the full, inclusive potential of the board and organisation, providing a focused road map to progress the cultural change we want to achieve and creating a shared understanding and an organisation wide approach to EDI, as well as highlighting governance structures to enable the same. Through an inclusive consultative process involving trustees, senior leadership, employees and in some cases service users or members, the organisational ambition for equality, diversity and inclusion for the next 10, 5 or 3 years is captured, alongside key objectives that build the foundation of a new EDI vision. This can then be translated into sets of practical actions. The new EDI Principle can support this process in providing insight into some areas that could be explored and contextualised in the vision. In this process, it can help to reflect on the following questions amongst others: What EDI challenges, barriers or opportunities are we facing? What aspects of equity, diversity and inclusion are most relevant to our work and what would great practice in those areas look like? Who are we trying to serve and are they represented in our programmes, on our board and in our organisation? While creating an EDI vision can feel overwhelming at first, it will make a lasting and significant impact at every level of your work and practice. This change will enhance the experience of internal and external stakeholders. Change can be hard, and more often than not, effective change is hard, but if we want to stand for the values we believe in, leading this essential and brave process is a non-negotiable step in that direction. You can view the new EDI principle of the code through the code website.
https://medium.com/@charitygovernancecode/creating-a-vision-for-change-f0dd6265863f
['Charity Governance Code']
2020-12-09 09:01:55.654000+00:00
['Governance', 'Not For Profit', 'Charity', 'Diversity', 'Equality']
BREAST MILK, ITS PRODUCTION, AND NUTRIENTS
Exclusive breastfeeding
Background
Without a doubt, sufficient evidence shows that breast milk is the most naturally valuable food to provide as a source of nutrients for the newly born. For this to happen successfully, the nursing mother should strongly embrace exclusive breastfeeding practices. Such an undertaking brings the numerous advantages attributed to genuine breastfeeding practice, which include protecting the baby against infections, building a psychosocial bond with the baby, and setting an encouraging example to fellow mothers, among others. In this article, the production, types, and nutrient profile of breast milk are discussed below.
Production of breast milk
Normal milk production occurs only as a result of suckling by the feeding baby. The suckling act sends a message through the afferent nerves within the breast, which carry an electrical signal to the brain. The brain interprets the message and sends a feedback signal that stimulates the pituitary gland to release two hormones that travel in the blood to the breast: prolactin, which stimulates milk production in the milk-producing cells (alveoli), and oxytocin, which triggers the release of that milk (the let-down reflex). Milk then travels through the milk ducts and flows out of the breast through the nipples for the feeding baby to access.
Small breasts versus large breasts dilemma
The size of the breasts should not be a problem or an excuse for the nursing mother. The suckling of the feeding baby is always the major influencing factor for milk production, and it is independent of breast size. So, milk will always flow. However, breast size affects milk storage capacity, i.e. the larger the breast, the greater the milk storage capacity, and vice versa. The storage capacity thus affects how much milk flows during the breastfeeding process, meaning small breasts empty faster than large breasts as the baby feeds.
Types of breast milk and their nutrient compositions
Breast milk has three major types:
Colostrum
Transitional milk
Mature milk
Colostrum is the first milk produced from the breasts, and is usually what the nursing mother provides in her first breastfeeding sessions. It is the milk produced within the first week (1–7 days) after birth, appearing as a thick, yellow-colored fluid. The yellow color is caused by the presence of beta-carotene, a molecule from which vitamin A is formed. Additionally, colostrum is rich in fat-soluble vitamins, minerals, proteins, and immunoglobulin A (IgA).
Transitional milk is the breast milk produced after the first 7 days following birth (i.e. 7–21 days). It contains high levels of fat, water-soluble vitamins and lactose, and has more calories than colostrum but a lower immunoglobulin content.
Mature milk is the breast milk produced after 21 days following birth. It is paler, more watery, and thinner than colostrum. This milk contains 90% water, with the remaining 10% accounting for nutrients such as proteins, fats, and carbohydrates (e.g. lactose). It contains more water because the baby's demand for water has increased at this stage. Mature milk is produced in two phases as the baby feeds: foremilk and hindmilk. Foremilk is the first liquid phase of the mature milk and consists mostly of water. Hindmilk is the second liquid phase, which flows after the foremilk; this phase is richer in the other nutrients (proteins, fats, carbohydrates) than the foremilk.
It is the hindmilk that makes the baby feel satisfied, which usually induces a sleepy state.
Conclusion
To sum up, breast milk should be the only genuine source of food for the newly born, as it contains all the nutrients the baby needs. Through the practice of exclusive breastfeeding, the baby will acquire all the nutritional necessities for life. This practice should not be hindered by concerns about breast size, because the flow of breast milk is independent of size and depends only on the suckling of the feeding baby, allowing colostrum, transitional, and/or mature breast milk to flow depending on the postpartum stage of breastfeeding.
REFERENCE
Sokan-Adeaga, Micheal Ayodeji, et al. (2019). A Systematic Review on Exclusive Breastfeeding Practice in Sub-Saharan Africa: Facilitators and Barriers. Acta Scientific Medical Sciences 3.7: 53–65.
Related topics
Nutrients in breast milk
Breastfeeding stages
https://medium.com/@lujjimbirwafortunate/breast-milk-its-production-and-nutrients-aaf0f3ba639e
[]
2020-06-11 12:13:10.333000+00:00
['Exclusive Breastfeeding', 'Newborn', 'Breastfeeding', 'Baby Products', 'Baby Food']
Analysis and interpretation of the social interaction of users on Facebook using Machine Learning
Analysis and interpretation of the social interaction of users on Facebook using Machine Learning
Diego Amador · Dec 21, 2020 · 19 min read
Publication by: Aimé López, Adalit Reyes, Diego Amador and Óscar Reyes
As part of the third module, "Data mining", of the diploma course "Statistical techniques and data mining" taught by Jacobo Gonzalez, we have decided to share our final project, in which we carried out the entire KDD (Knowledge Discovery in Databases) process on a data set obtained from the Kaggle website, as shown below.
Goal
Different characteristics of Facebook users, such as their behavior, habits and trends, will be predicted and discovered from data collected on their interaction with the social network. Through data mining, user activity on Facebook will be classified according to certain variables, taking the gender variable as the main characteristic, with two classes, Female (1) and Male (0). This allows us to perform statistical analysis, data grouping and variable conversion, among other tasks.
Context
Social networks have become information-gathering platforms. They know our behavior patterns and our psychological profile. The most popular platforms, such as Facebook and Google, have made technology seem fun, casual and superficial, but in reality the amount of data they accumulate about us is sometimes alarming. Every day, Facebook users feed the platform with large volumes of information. With this type of data, Facebook knows what we look like, who our friends are, what we are doing, and what we like and dislike, among other things. Facebook is known as one of the most visited social networks today; it is a great source of data and has been questioned over what it does with that data. The data is made available to companies and brands, among others, allowing them to target a specific audience, which is why the platform remains a primary advertising channel.
Hypothesis
Through the formulation of several hypotheses, we will try to test some of the common beliefs about how users interact with this social network:
- In the age range of 15 to 25 years, the social network Facebook is used the most.
- The male gender is the one that generates more likes on the social network.
- From the age range of 55 to 60 years onwards, user interaction decreases.
- People who have been using Facebook for longer have more friends.
State Of The Art
The Universidad Iberoamericana of Mexico previously conducted a study called Facebook and daily life, whose aim was to understand the influence the social network has on the daily life of young university students; an instrument was built to measure attitudes, behaviors and uses of the social network. The instrument was applied to 381 young people from the Universidad Iberoamericana in Mexico City, 239 female and 142 male. The results show that there are differences between men and women, with the latter spending more time on the social network (Aspani, 2012). A more recent study from the University of Malaga focused on deepening the knowledge of a possible gender gap in the use of technological tools for group work, i.e. analysing whether the propensity to innovate and the way students use technologies differ by gender. To this end, an online survey was carried out on a sample of 403 students from degree courses in the field of Economics.
The results show that, although participants of both genders use all the technologies proposed for group work (wiki, WhatsApp, Google Drive, Dropbox, Skype, email, Facebook, etc.) almost equally, statistically significant differences are observed in the greater use of the WhatsApp and email tools by women (Vallespin-Aran, 2020). Another study, from the University of Alicante, set out to present results on the use of social networks by young people by contrasting information from different sources with the results of its own survey. The survey was conducted over the Internet and applied to a sample of social network users, with the aim of understanding the reason for their success, in other words, what leads young people to open a profile and maintain it, with special emphasis on possible differences in use between boys and girls. The results show that young people and adolescents are the main users, that worldwide the presence of women reaches 60%, and that the most intensive users are young people between 24 and 29 years old. The study concludes that the differences seem to depend on age rather than on the sex of the respondents (Espinar, 2009).
The study proposed here will help us better understand patterns of user behavior through their interaction with Facebook. This will be achieved by applying various branches of Data Science, such as Data Mining and Machine Learning, following the Knowledge Discovery in Databases (KDD) methodology, a process for exploiting data to predict and/or discover interesting patterns in our model and thus develop a theoretical perspective on our approach. Our main resource will be the Python language and its various libraries, which allow better visualization, manipulation and processing of the data, helping us to test our hypotheses and, in the future, to apply these findings in areas such as social big data, psychology, communication and advertising. Throughout this project we will show how various machine learning concepts are used to build models that allow us to generate interpretations. These concepts are already used in many fields, from filtering spam in your email to detecting forest fires or working with genetic data; the question is: have we exploited their full potential?
NOW… LET'S START!!
1. Understanding the data
The variables available in our data set are the following:
Variables and meaning
We load the following libraries to work with:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import missingno as msno
%matplotlib inline

We load the data and then inspect the number of values and the type of data stored in each column:

df.info()

Definition of variables and data type.
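The load step itself is not shown in the excerpt above; a minimal sketch, assuming the Kaggle Facebook activity data has been downloaded locally as a tab-separated file called pseudo_facebook.tsv (a hypothetical filename, so adjust it to the actual download):

import pandas as pd

# Hypothetical filename and separator; change them to match the actual Kaggle file.
df = pd.read_csv('pseudo_facebook.tsv', sep='\t')
print(df.shape)   # number of rows and columns
df.head()         # quick look at the first records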
We generate some first correlations that could be interesting regarding age and the number of friendships that users initiate:

sns.lmplot(x="age", y="friendships_initiated", data=df,
           fit_reg=False, hue='dob_year', legend=False)

Knowing the age ranges in the data set, age groups are created as follows:

labels = ['11-20', '21-30', '31-40', '41-50', '51-60', '61-70',
          '71-80', '81-90', '91-100', '101-110', '111-120']
df['age_group'] = pd.cut(df.age, bins=np.arange(10, 121, 10), labels=labels, right=True)

And a graph is generated with the distributions by age and gender:

fig, ax = plt.subplots(figsize=(13, 7))
color = ['deeppink', 'blue']
test = df.pivot_table('tenure', index='age_group', columns='gender', aggfunc='count')
# conversion into percentage
for col in test.columns:
    test[col] = test[col] / sum(test[col]) * 100
test.plot(kind='bar', color=color, ax=ax, alpha=0.7)
ax.set_xticklabels(test.index, rotation=360)
ax.set_xlabel("Age group", fontsize=14)
ax.set_ylabel("Percentage", fontsize=14)
ax.set_title('Distribution of users by age and gender', fontsize=14)

User distributions by age and gender

We group by gender:

gender_no = df.groupby("gender")["age"].count()
fig, ax = plt.subplots(figsize=(13, 7))
gender_no.plot.pie(ax=ax, autopct='%0.2f%%')

Gender distribution in the data

2. Data processing
2.1 Data cleaning
In this section we visualize the missing data and the nullity by column, and eliminate the outliers. The empty cells are converted to NaN and the columns with null values are located. A way must be found to fill the empty cells; it can be done with the mean or with the most frequent value of each variable. In this particular case, we found null values in the variable gender, so they were filled with the most frequent gender in the data set: male. In the case of tenure (days on Facebook) we filled the empty cells with the average of the rest of the data. We will not go too deeply into this part; however, we invite readers to request our repository for further details. The most important thing to remember is that at the end of this process we renamed the resulting data frame out.
We can look at the graphs of the data before and after removing the outliers from our data set.
Before deleting outliers
After deleting outliers
2.2 Selection of features
The method is the following: first we plot the heat map of the Pearson correlation and look at the correlation of the independent variables, or features, with the output variable, or target. We only select the features that have a correlation greater than 0.5 (in absolute value) with the output variable. Remember that the Pearson correlation coefficient takes values between -1 and 1:
- A value closer to 0 implies a weaker correlation (an exact 0 implies no correlation)
- A value closer to 1 implies a stronger positive correlation
- A value closer to -1 implies a stronger negative correlation

plt.figure(figsize=(8, 8))
cor = out.corr()
sns.heatmap(cor, annot=True, cmap=plt.cm.Reds)
plt.show()

Pearson correlation result
This is how we identify the main features with which we will build our models. The likes_received, mobile_likes, mobile_likes_received, www_likes and www_likes_received variables exceed the threshold; with a correlation of approximately 93.31%, the mobile_likes variable is the one most strongly correlated with the likes variable.
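A minimal sketch of the threshold rule described above (not from the original notebook), assuming the cleaned frame out and, hypothetically, likes as the target variable:

# Keep only the features whose absolute Pearson correlation with the target exceeds 0.5.
cor = out.corr()
cor_target = cor['likes'].abs().drop('likes')   # 'likes' assumed here as the target
selected_features = cor_target[cor_target > 0.5].index.tolist()
print(selected_features)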
3. Modeling
3.1 Simple linear regression analysis for likes and mobile_likes
Linear regression is a supervised learning algorithm used in Machine Learning and statistics. In its simplest version, what we do is "draw a line" that indicates the trend of a continuous data set (if the target were discrete categories, we would use Logistic Regression). In statistics, linear regression is an approach to model the relationship between a scalar dependent variable "y" and one or more explanatory variables named "X".

likes = out.iloc[:, [False, False, False, False, False, True, False, False, False, False, False]].values
moblikes = out.iloc[:, [False, False, False, False, False, False, False, True, False, False, False]].values

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
A_train, A_test, b_train, b_test = train_test_split(likes, moblikes, test_size=1/3, random_state=123)

# Fitting Simple Linear Regression to the Training set
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(A_train, b_train)

# Visualising the Training set results
plt.scatter(A_train, b_train, color='red')
plt.plot(A_train, model.predict(A_train), color='blue')
plt.title('Likes vs Mobile likes')
plt.xlabel('Likes')
plt.ylabel('Mobile likes')
plt.show()

Training set results

# Visualising the Test set results
plt.scatter(A_test, b_test, color='red')
plt.plot(A_train, model.predict(A_train), color='blue')
plt.title('Likes vs Mobile likes (Test set)')
plt.xlabel('Likes')
plt.ylabel('Mobile likes')
plt.show()

Test set results
Model performance is obtained with a value of 87%, which indicates that the model is adequate.

# Predicting the Test set results
b_pred = model.predict(A_test)
print("Model performance: ", model.score(A_test, b_test))
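For a LinearRegression model, score returns the coefficient of determination R², so the 87% above is an R² value rather than a classification accuracy. A short sketch (not in the original) of reporting it together with error metrics, reusing model, A_test and b_test from the split above:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

b_pred = model.predict(A_test)
print("R^2: ", model.score(A_test, b_test))           # coefficient of determination
print("MAE: ", mean_absolute_error(b_test, b_pred))   # average absolute error in likes
print("RMSE:", np.sqrt(mean_squared_error(b_test, b_pred)))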
3.2 Principal Component Analysis
The idea of Principal Component Analysis (PCA) is to reduce the dimensionality of a data set consisting of a large number of related variables, while retaining as much of the variance in the data as possible. PCA finds a set of new variables that are simply linear combinations of the original variables. The new variables are called principal components (PCs). These principal components are orthogonal: in a three-dimensional case, the principal components are perpendicular to each other, so X cannot be represented by Y, nor Y by Z. Figure (A) shows PCA's intuition: it "rotates" the axes to better align with the data. The first principal component captures most of the variance in the data, followed by the second, third, and so on. As a result, the new data will have fewer dimensions.

# WE DEFINE THE VARIABLES
all_variables = ['age_group', 'gender', 'tenure', 'friend_count', 'friendships_initiated',
                 'likes', 'likes_received', 'mobile_likes', 'mobile_likes_received',
                 'www_likes', 'www_likes_received']
# note the target
features = ['tenure', 'friend_count', 'friendships_initiated', 'likes', 'likes_received',
            'mobile_likes', 'mobile_likes_received', 'www_likes', 'www_likes_received']
target = ['gender']

# Using MinMax
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
df_sc = pd.DataFrame(scaler.fit_transform(out1[features]), columns=features)
df_sc.head()

from sklearn.decomposition import PCA, KernelPCA
from sklearn.datasets import make_circles

# Note: df_sc is computed above, but the PCA below is fitted on the unscaled out1[features].
pca = PCA(n_components=3)
df_pca = pd.DataFrame(pca.fit_transform(out1[features]), columns=['PC1', 'PC2', 'PC3'])
df_pca.head()

# CUMULATIVE EXPLAINED VARIANCE OF THE COMPONENTS
explained_variance = pca.explained_variance_ratio_.cumsum()
explained_variance

# Plotting the dataframe
df_pca['gender'] = out1[target]
df_pca.columns = ['PC1', 'PC2', 'PC3', 'gender']
df_pca.head()

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_xlabel('Principal Component 1')
ax.set_ylabel('Principal Component 2')
ax.set_title('2 component PCA')
targets = ['male', 'female']
colors = ['blue', 'pink']
for target, color in zip(targets, colors):
    indicesToKeep = df_pca['gender'] == target
    ax.scatter(df_pca.loc[indicesToKeep, 'PC1'],
               df_pca.loc[indicesToKeep, 'PC2'],
               c=color, s=50)
ax.legend(targets)
ax.grid()

Results obtained from the dataframe graph

3.3 KNN for gender
K-nearest neighbors (KNN) is a non-linear model that works by taking an unclassified object and counting how many of its neighbors belong to each category. If more neighbors belong to category A than to category B, then the new point should belong to category A. Therefore, the classification of a given point is based on the majority of its closest neighbors (hence the name).
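The original analysis fixes n_neighbors at 3. A small sketch (not from the original notebook) of choosing the value by cross-validation instead; it assumes the X_train and y_train arrays defined in the KNN code that follows:

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Try a few odd values of k and compare the mean 5-fold cross-validated accuracy.
for k in [3, 5, 7, 11, 15, 21]:
    clf = KNeighborsClassifier(n_neighbors=k, metric='minkowski', p=2)
    scores = cross_val_score(clf, X_train, y_train, cv=5)
    print(k, round(scores.mean(), 3))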
# Recoding the unique values of the gender column
"""
0 male
1 female
"""
df_pca01['gender'].replace({'male': 0, 'female': 1}, inplace=True)

# DEFINING VARIABLES
X = df_pca01.iloc[:, [1, 2]].values
y = df_pca01.iloc[:, 3].values

# TRAINING AND TEST SETS ARE DEFINED
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# FEATURE SCALING
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# APPLYING KNN TO THE TRAINING SET
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=3, metric='minkowski', p=2)
classifier.fit(X_train, y_train)

# PREDICTION OF RESULTS ON THE TEST SET
y_pred = classifier.predict(X_test)
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred))

# Visualising the training set results
# (note: the code below sets X_set, y_set = X_test, y_test, so both plots actually show the test split)
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('blue', 'pink')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('blue', 'pink'))(i), label=j)
plt.title('K-NN (Train)')
plt.xlabel('Gender')
plt.ylabel('Test')
plt.legend()
plt.show()

KNN Train results

# Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('blue', 'pink')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('blue', 'pink'))(i), label=j)
plt.title('K-NN (Test)')
plt.xlabel('Gender')
plt.ylabel('Estimated')
plt.legend()
plt.show()

KNN Test results

Validation
According to the accuracy value, 54% of predictions are correct out of the total. On the main diagonal, the True Positive and True Negative positions hold 0.0093% and 0.0031% of correctly estimated values, while on the inverse diagonal there is a 0.0058% chance that correct values are rejected and a 0.0045% chance that false values are not rejected.

# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm = pd.DataFrame(confusion_matrix(y_test, y_pred))
sns.heatmap(cm, annot=True, annot_kws={"size": 12})  # font size

Confusion matrix obtained
The KNN model is a method that looks for the observations closest to the one you are trying to predict. It has an accuracy of 54%, so it is not a suitable model for this type of data.

from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred))
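An accuracy of 54% is easiest to judge against the majority-class rate; a quick sketch (not part of the original notebook) using scikit-learn's DummyClassifier as a baseline, reusing the train/test split above:

from sklearn.dummy import DummyClassifier

# Always predicts the most frequent class; KNN should beat this to add any value.
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(X_train, y_train)
print("Baseline accuracy:", baseline.score(X_test, y_test))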
3.4 Random forest for tenure and likes
Decision trees are predictive models formed by binary rules with which it is possible to split the observations according to their attributes and thus predict the value of the response variable. Random Forest models are formed by a set of individual decision trees, each trained on a slightly different sample of the training data generated by bootstrapping. The prediction for a new observation is obtained by aggregating the predictions of all the individual trees that make up the model.

# Initializing the variables
X = out2.iloc[:, [2, 5]].values
y = out2.iloc[:, 1].values

# Declaring the training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scaling the features
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Applying Random forest to the training set
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)
classifier.fit(X_train, y_train)

# Predicting the test set results (needed for the accuracy score and confusion matrix below)
y_pred = classifier.predict(X_test)

# Viewing the results of the training set
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('DarkBlue', 'LimeGreen')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('DarkBlue', 'LimeGreen'))(i), label=j)
plt.title('Random Forest (Train)')
plt.xlabel('Tenure')
plt.ylabel('Likes')
plt.legend()
plt.show()

Random Forest training set results

# Visualising the test set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('DarkBlue', 'LimeGreen')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('DarkBlue', 'LimeGreen'))(i), label=j)
plt.title('Random Forest (Test)')
plt.xlabel('Tenure')
plt.ylabel('Likes')
plt.legend()
plt.show()

Random Forest test set results

Results
Random Forest model performance is obtained with a value of 59%, which indicates that it is not an adequate model for making predictions on this data.

from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred))
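Before writing the model off, it may be worth checking whether the very small forest (n_estimators=10) is the limiting factor; a brief sketch (not in the original) of a modest grid search, reusing X_train and y_train from above:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Small, illustrative grid; the parameter values are assumptions, not tuned choices.
param_grid = {'n_estimators': [10, 50, 100], 'max_depth': [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)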
Validation
We have an accuracy value of 59%, so it is concluded that this is not an acceptable model for making predictions on this set. Within the main diagonal, we have the True Positive position with a value of 0.00011% and the True Negative with 0.0036% of accepted real values; however, we are rejecting real values with a probability of 0.0056% and accepting false values with a probability of 0.0038%.

from sklearn.metrics import confusion_matrix
import seaborn as sn
cm = pd.DataFrame(confusion_matrix(y_test, y_pred))
sn.heatmap(cm, annot=True, annot_kws={"size": 12})  # font size

Confusion matrix obtained

3.5 Hierarchical Clustering for friend count and tenure
We will apply this concept to the variables friend_count and tenure to visualize how the data is grouped and whether there is an obvious relationship. First, we take a sample of the previously prepared data frame, equivalent to 15% of the total data.

sample = df01.sample(frac=0.15, random_state=1)
sample = sample.dropna()
sample = sample[['tenure', 'friend_count']]

import scipy.cluster.hierarchy as shc
plt.figure(figsize=(10, 7))
plt.title("Customer Dendrogram")
dend = shc.dendrogram(shc.linkage(sample, method='ward'))

Dendrogram obtained

We obtain the distances between each of the observations:

from sklearn.cluster import AgglomerativeClustering
cluster = AgglomerativeClustering(n_clusters=5, affinity='euclidean', linkage='ward')
cluster.fit_predict(sample)

By visualizing the results obtained, one main conclusion can be drawn: the orange group at the top shows that having been on Facebook for longer does not necessarily mean having a greater number of friends, which could point to a selection/pruning behavior of contacts over time, leading users to keep mostly a small number of friends.

plt.figure(figsize=(10, 7))
plt.scatter(sample['friend_count'], sample['tenure'], c=cluster.labels_, cmap='rainbow')

Clusters obtained

Results
Applying the Elbow Curve to the variables friend_count and tenure gives an approximate value of 4 clusters; comparing this with the 5 clusters obtained with hierarchical clustering, we conclude that the results are consistent.

ft = sample[['friend_count', 'tenure']]
from sklearn.cluster import KMeans
Nc = range(1, 20)
kmeans = [KMeans(n_clusters=i) for i in Nc]
kmeans
score = [kmeans[i].fit(ft).score(ft) for i in range(len(kmeans))]
score
plt.plot(Nc, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()

Validation
The Davies-Bouldin Index (DBI), introduced by David L. Davies and Donald W. Bouldin in 1979, is a metric for evaluating clustering algorithms. It is an internal evaluation scheme, in which the validation of how well the clustering has been done uses only quantities and characteristics inherent in the data set. The lower the value of the DB index, the better the clustering. It also has a drawback: a good value reported by this method does not imply the best information retrieval. The DB index for a number k of clusters is defined below.
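The formula itself did not survive the page extraction; the standard definition, which is what the text appears to refer to, is:

$$\mathrm{DB} = \frac{1}{k}\sum_{i=1}^{k}\max_{j\neq i}\frac{s_i + s_j}{d(c_i, c_j)}$$

where $s_i$ is the average distance of the points in cluster $i$ to its centroid $c_i$, and $d(c_i, c_j)$ is the distance between the centroids of clusters $i$ and $j$.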
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# K-Means
kmeans = KMeans(n_clusters=5, random_state=1).fit(sample)
# we store the cluster labels
labels = kmeans.labels_
print(davies_bouldin_score(sample, labels))

3.6 K-Means for mobile_likes, mobile_likes_received, likes and gender
To process the learning data, the K-means algorithm starts with a first group of randomly selected centroids, which are used as the starting points for every cluster, and then performs iterative (repetitive) calculations to optimize the positions of the centroids. It stops creating and optimizing clusters when either:
- The centroids have stabilized: there is no change in their values because the clustering has been successful.
- The defined number of iterations has been reached.

X = np.array(aux[['mobile_likes', 'mobile_likes_received', 'likes']])
y = np.array(aux['gender_cod'])
X.shape

from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')

# Graph of the selected variables
fig = plt.figure()
ax = Axes3D(fig)
colores = ['blue', 'red', 'green', 'blue', 'cyan', 'yellow', 'orange', 'black', 'pink', 'brown', 'purple']
asignar = []
for row in y:
    asignar.append(colores[row])
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=asignar, s=60)

Finding the k-means clusters:

kmeans = KMeans(n_clusters=3).fit(X)
centroids = kmeans.cluster_centers_
print(centroids)

df2 = df_sc.copy()
df3 = df_pca.copy()
df3['cl'] = aux['cl'] = df2['cl'] = kmeans.predict(X)

# Obtaining the cluster labels
labels = kmeans.predict(X)
# Finding the centers of each cluster
C = kmeans.cluster_centers_
colores = ['green', 'blue', 'yellow']
asignar = []
for row in labels:
    asignar.append(colores[row])
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=asignar, s=60)
ax.scatter(C[:, 0], C[:, 1], C[:, 2], marker='*', c=colores, s=1000)

Results

aux['gender_cod'].value_counts(normalize=True)

Here 1 stands for female and 0 for male.
Validation
Results obtained for validation
All three groups are young adults, as this was the dominant group within the data. The first group is fairly balanced between female and male, so we can say they are adult individuals: of the three groups it is the one with the highest average age, they do not use Facebook on mobile as much, and they receive a good number of likes but give even more. The second group has a higher percentage of men, so we will say they are young men who give more likes than they receive. The last group is made up largely of women, so we will say they are young women who are much more active on Facebook than the previous groups and who receive a large number of likes but give even more likes than they receive.
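The choice of three clusters is not validated in the original write-up; a short sketch (not from the notebook) reusing the Davies-Bouldin index introduced above to compare a few values of k on the same X matrix:

from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Lower Davies-Bouldin values indicate more compact, better-separated clusters.
for k in range(2, 8):
    labels_k = KMeans(n_clusters=k, random_state=1).fit_predict(X)
    print(k, round(davies_bouldin_score(X, labels_k), 3))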
3.7 Naive Bayes for gender

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
import seaborn as sb
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')

from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import SelectKBest

var = ['tenure', 'friend_count', 'friendships_initiated', 'likes', 'likes_received',
       'mobile_likes', 'mobile_likes_received', 'www_likes', 'www_likes_received']
X = datos[var].copy()
y = datos['gender_cod'].copy()

from sklearn.feature_selection import SelectKBest, chi2
best = SelectKBest(k=5)
X_new = best.fit_transform(X, y)
X_new.shape
selected = best.get_support(indices=True)
print(X.columns[selected])
used_features = X.columns[selected]  # the five features kept by SelectKBest (added so the code below runs)

X_train, X_test = train_test_split(datos, test_size=0.3, random_state=6)
y_train = X_train['gender_cod']
y_test = X_test['gender_cod']

gnb = GaussianNB()
# Train classifier
gnb.fit(X_train[used_features].values, y_train)
y_pred = gnb.predict(X_test[used_features])
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

Results
Summary of the models' results
Accuracy of the training and test sets for each model

Conclusions and answers to the hypotheses
It is difficult to classify gender from the variables available in the data; grouping users by their activity on Facebook, however, is more feasible. It can be deduced that people in the age range of 21 to 30 years tend to use Facebook the most, with a greater inclination towards the male gender than the female, and that after 30 years of age the female gender occupies this social network more, giving more likes from mobile devices such as cell phones. Although the 21 to 30 age group shows the most frequent use of Facebook, it was also observed that use of the social network begins between 11 and 19 years of age, slightly more so among men than women, but with a very small difference.
In an age range of 15 to 25 years, greater use is made of the social network Facebook.
A: The initial hypothesis is rejected because Facebook is used most at ages 21 to 30; to test this hypothesis we grouped by age using a clean and standardized data set.
The male gender is the one that generates more likes in the social network.
A: We reject the initial hypothesis through the application of the Naive Bayes model, since we observe that the female gender generates a greater number of likes, as shown in the implementation of the model.
From the age range of 55 to 60 years onwards, user interaction decreases.
A: The initial hypothesis is accepted, since most recorded interaction comes from the young population in the range of 11 to 30 years.
People who have been using Facebook for longer have more friends.
A: The initial hypothesis is rejected; the results of the Hierarchical Clustering model and its dendrogram show that having been on Facebook for longer does not necessarily mean having a greater number of friends, which could point to a selection/pruning behavior of contacts over time, leading users to keep mostly a small number of friends.
Work for the future
We expect to scale the project by adding data from other social networks, such as Twitter, WhatsApp, Telegram, Instagram and YouTube, to our analysis. This would give us a large volume of data, allowing us to apply Big Data techniques and, specifically, the strategy we discovered called Social Big Data, which can be applied in companies because it reveals the behavior and trends of their consumers according to their particular segmentation, providing a greater competitive advantage and allowing services to be improved. In addition, the results of the study can be applied in areas such as psychology, marketing and communication.
RESOURCES
Bagnato, J. I. (2011). Aprende Machine Learning. https://www.aprendemachinelearning.com/regresion-lineal-en-espanol-con-python/#:~:text=La%20regresi%C3%B3n%20lineal%20es%20un,Machine%20Learning%20y%20en%20estad%C3%ADstica.&text=En%20estad%C3%ADsticas%2C%20regresi%C3%B3n%20lineal%20es,explicativas%20nombradas%20con%20%E2%80%9CX%E2%80%9D.
Anonymous. (2013). GeeksforGeeks. https://www.geeksforgeeks.org/dunn-index-and-db-index-cluster-validity-indices-set-1/
Anonymous. (2005). Redes sociales: El uso y abuso. http://www.unilibre.edu.co/bogota/ul/noticias/noticias-universitarias/2349-redes-sociales-el-us-y-el-abuso#:~:text=Las%20redes%20sociales%20son%20servicios,permiten%20interactuar%20con%20otros%20internautas.
El algoritmo K-NN y su importancia. (2020, September 1). El algoritmo KNN. https://www.merkleinc.com/es/es/blog/algoritmo-knn-modelado-datos
Normas APA. (2001, April 15). El estado del arte. https://normasapa.net/que-es-el-estado-del-arte/
Aspani, S., Sada, M., & Shabot, R. (2012). Facebook y vida cotidiana. Alternativas en psicología, 16(27), 107–114.
Vallespin-Aran, M. L., Anaya-Sanchez, R., Aguilar-Illescas, R., & Molinillo-Jimenez, S. (2020). Diferencias de género en el uso de las herramientas colaborativas para la realización de los trabajos en grupo.
Espinar-Ruiz, E., & González-Río, M. J. (2009). Jóvenes en las redes sociales virtuales: un análisis exploratorio de las diferencias de género.
https://medium.com/@diegoamador96/analysis-and-interpretation-of-the-social-interaction-of-users-on-facebook-using-machine-learning-100c42fa0e72
['Diego Amador']
2020-12-21 04:56:08.153000+00:00
['Data Mining', 'Statistics', 'Clustering', 'Facebook', 'Machine Learning']
Re:Tindale — Starting Life in Tokyo and Tsu From Zero
I don’t travel much. This isn’t an affectation. A blend of being an introvert and a lack of disposable income means that I am not the sort of person who relishes getting too far out of my comfort zone. However, I am also a sucker for a freebie, especially if said freebie involves visiting one of the few countries that I am keen to visit. Therefore, when a friend suggested that I apply to take part in an exchange with the Japanese Council of Local Authorities for International Relations (CLAIR), it didn’t take very much convincing. Apparently, neither did they, which is how I found myself mulling over a bacon roll in Heathrow Terminal 4 and wondering what the hell I had let myself in for.
Part One: Let’s Go! Gutsy Tindale-kun — Fly! Run! Ascend!
Goodness me, but isn’t flying tedious? I know that lots of people book to go in business class for the free champagne and the like, but I think people who actually do it regularly do so purely so they can take a Forget-Me-Now and make use of the extra leg room. It was my first time flying on KLM and I can understand why more experienced aviators than I tend to focus on the fact that the planes seem to be mainly held together by masking tape. I don’t understand what it is about aeroplanes that they seem to get shabby and worn-out the second they roll off the production line. Given that I know only slightly more about flying machines than the North Sentinel Islanders (otherwise I’d be writing this post from prison, having tried to attack the air steward with an improvised spear made out of a rolled-up copy of The Spectator and a USB pen), I cannot really say anything more than both flights involved getting on something with two wings and one was slightly larger than the other. An otherwise faultless transfer at Schiphol was marred by a broken air conditioning unit, which delayed departure for two hours. Whilst this would have usually driven me up the wall, I seem to be mellowing in my old age, considering that I expected us to have to decamp and have an unplanned twelve hour stay in The Netherlands. Incidentally, in one of the most passive-aggressive efforts in the industry, I was recently informed that the Chief Executive of Schiphol sends a cake to his counterpart at Heathrow Airports Group every year the third runway is delayed. I was told this by one of my colleagues on this trip, the Chief Executive of Slough Borough Council, who added that the proposed expansion would take up five percent of all the land controlled by the local authority. After requesting an aisle seat away from the wing on KLM’s website, it was only fair that I was plonked in the middle of the plane next to a Korean student from the University of Wolverhampton. Whilst I find it impossible to sleep on flights, my new best friend managed to fall into a stupor about sixteen seconds after the meal trays were taken away, utilising a sleeping technique best summed up as a ‘mesh of limbs’. Credit where credit is due to her, I suppose. With the movie line-up on the flight being almost as bad as the Christmas Day line-up on BBC One, I packed it in as we were going over Novaya Zemlya (a place that always makes me imagine that I’m a New Dane trying to destroy the Magisterium’s Experimental Theology Atomkraft Weapons base before it can be used to destroy Central Tatary), and stretched out as best as I could. To my credit, I can usually trick my body into thinking it has fallen asleep every month or so without too much ill-effect.
I have developed this technique from various election day all-nighters — with the travel variation consisting of a face mask, noise cancelling ear-phones and basically inhaling two small bottles of free liquid that claimed to be red wine. I therefore arrived at Narita feeling as fresh-faced as could be expected. In the space of ten minutes, I managed to find the Nintendo Welcome Centre, meet my first mascot — a red owl dressed in Sumo ceremonial dress, representing the Western Tokyo Tourism Council — and get a free set of chopsticks for answering a survey. All very on brand, I suppose. One of the benefits of being on an official trip is that all the effort of airport transfers is handled for you, but I get the impression that I would have managed perfectly well by myself. We used the limousine bus service, which, as we know from all those Holiday episodes of British sitcoms, takes you around much nicer hotels than the one you end up in. Unlike Carry On Abroad or The Inbetweeners, at least our one was built. I was delighted, however, that our route took us over Tokyo Bay’s famous Rainbow Bridge, which I had actually seen the previous day when I went to a very well-timed London screening of Weathering With You. Although it wasn’t lit up, it nevertheless offered the first ‘oh wow, I’m in Japan’ moment of the trip, with a wonderful vista of the city and DOKYO DAWAAAAAAAAA!!! According to my tour guide, the bridge was named by public vote in 1993 to replace the interim title of “Shuto Expressway №11 Daiba Route — Port of Tokyo Connector Bridge”. It speaks testament to the reserved nature of the citizens of the Metropolis that they didn’t plump for Bridgy Bridgeman-san. Our home for the next couple of days was a bland but perfectly serviceable business hotel in Akasaka, the Government/Diplomatic centre of Tokyo. I was keen to avoid succumbing to jetlag and to keep my mental state in the right time zone, so after dumping my bags, I dragged myself out for a run that I’ve always wanted to do, the perimeter route around the Imperial Palace. I set off, and within about two minutes I found myself in the centre of a big pacifist demonstration outside the Diet. Because of course I did. “Sumimasen” is such a lovely phrase to use, and I found myself employing the informal version for “I’m Sorry” about a third of the time I said anything on my first day in Japan as I jogged next to the numerous Article 9 protesters. Fortunately, both they and the police seemed to take the Baka Gaijin in good humour, and I soon rounded down the road and joined the afternoon runners around the route. I was soon feeling the effects of the lack of sleep, but dug in and marvelled at the views, which really were Japan in microcosm. The medieval gatehouses and walls in the foreground, coupled with a handful of Meiji-era European buildings such as the Station Hotel, the 1960s modernism, and the skyscrapers skirting around Kanda and Akiba made it clear that I could only be in one country. One of my favourite manga, Akira, really is an ode to this sort of thing. One of the most fascinating bits of it, which isn’t as pronounced in the film, is that Otomo’s vision of modernity is so steeped in the 1980s and the sense of Japan entering a period of perpetual decline and hedonism. The hellish future of Neo-Tokyo is brutalist. Despite all the problems modern Japan has, it is fascinating to see how much one set of fears and pessimism has not taken root.
Apparently, just before the market crash thirty years ago, the notional property price of the Imperial Palace was about the same as all the real estate in California. My run was just what I needed, and I was pleased to see that I wasn’t the only person to have had the idea. Despite the fact that it was about three in the afternoon, there were hundreds of people — of all ability — jogging around the route, and I was far from the only Westerner. I have been told that one of the most visible changes Japan has seen over the past twenty years or so is the rise in non-Japanese people living and working in the major cities, and even a gangly, fair-haired British person like me wouldn’t stand out in all but the most rural areas these days. I made a note to check if I felt the same way when we arrived in Tsu City in a couple of days. A quick change and encounter with my first Japanese toilet passed uneventfully. Although I spent over ten minutes desperately wondering over what button was used to flush the damn thing before realising that it was the large silver thing that looked and was located exactly where it would have been on a ‘normal’ loo. I am a graduate of the London School of Economics. After a brief introduction to our minders from the Local Government Association, we had a forgettable dinner at the hotel. I was flagging at this point, it was only 6pm local time and I was terrified that if I went to my room I’d crash and wake-up in the wee hours. Fortunately, our main minder, a British guy based at the CLAIR office in Whitehall, suggested that we walk to Shibuya and visit the Sky Deck viewing platform. I was glad we did. The cold air perked me up massively, and it also meant we could visit a bar serving some very interesting local microbrewed beers. I wasn’t properly prepared for seeing Tokyo by night — but I was glad I did. Seeing the Shibuya Scramble from above was a particular highlight… Whilst not usually averse to walking, a combination of my run, the late hour, and having been effectively awake for over twenty-four hours made the decision to take the subway back a sensible idea. Returning to the hotel, I decided to check out the adjacent 7/11 to see what it was like. I’ll go over convenience stores and konbini culture later as well. Carry On At Your Convenience Store: A Tragedy in One Act Jack Tindale: Well done! You’ve gotten through the whole day without mentioning Evangelion. Why not pop into this 7/11 to get a celebratory beer. The 7/11: Yes, after that, of course I had to have a bloody Yebisu. Part Two: The Day Tokyo-1 Stood Still I am not a man who breakfasts. It was somewhat of an act of rebellion after my mother forced years of dry porridge on me. Nevertheless, holidays tend to be one occasion where I drop the habit and turn into that guy from the Key and Peele skit. “Can you believe this? It all comes with the room!” With that in mind, I forwent a morning run for a change and decamped to the hotel’s buffet. The Japanese, being the sort of illustrious people that they are, have gone one better than most places by offering three approaches to the most important meal of the day, 50 percent more than you usually find. Whilst I know that starting the day with curry is normal for many millions of people around the world, not all of whom are students nursing hangovers, I have to admit that I balked slightly at the idea of an early-morning katsu, but I have a passion for fish, and a particular passion for smoked fish. I would eat fish with every meal if I could. 
A small spread of salmon, mackerel, the ubiquitous omelette, and tofu therefore set the day up very well indeed. One thing that amazes me about Japan is how efficient land use seems to be. Obviously a great deal of it is down to the post-war reconstruction efforts, but despite being a city that we hold to be synonymous with size and sprawl, Tokyo manages to feel compact but not claustrophobic. I can really recommend this excellent blog post about how zoning and planning works under the Japanese system, largely because of policy being set at the national level and aimed at maximising the most efficient use of space. The city is also a lot less high-rise than I expected it to be. That isn’t to say that there aren’t a large number of skyscrapers and the like, but outside the very centre, buildings rarely seem to get above five stories or so. One thing that the Japanese tend to be very good at (and certainly something I would bring in if I became Minister for Transport) is selling air-rights above train stations. We’d explored the huge new development at Shibuya Station the previous evening, and it appeared to be the norm basically everywhere. I cannot help but think you’d make a decent stab at solving the housing crisis in London if you built a few thousand flats above Clapham Junction. I pondered this on the way to our first appointment of the day, which was at the CLAIR headquarters a short drive away (whilst I prefer to walk wherever possible, one of our handlers said that Europeans tend to have a habit of meandering off, so they try and keep us confined to vehicles as often as possible, which I guess is fair enough). We then had an (un-)surprisingly informative lecture on local government by a Professor from Meiji University (which I thought sounded a bit odd before I realised that we have Queen Mary, University of London, and formerly all the various Victoria University offshoots…). One of the main takeaways I got from it is how there is no contradiction between unitary states and giving genuine financial autonomy to local authorities. In our obsession over “federalism” and grand ideas of constitutional reform in the UK, it really seems that just giving councils proper control over their own tax-varying powers would have an immediate and transformative effect on their ability to bring about genuine change for the citizenry. I mused on this whilst helping myself to a coffee from the best vending machine I’ve seen for ages. Another thing that I have found amazing about Japan so far is the oft-remarked contradiction in the application of technology. You have absolutely hilariously bad websites for even official organisations, and fax machines still in routine use, whilst far more effort than expected goes into vending technology.
- How many levels of coffee machine are you on?
- Like, maybe 五 or 六 right now, my dude.
- You are like a little baby. Watch this…
We then popped over to a hotel across the road which was obviously owned by the Tokyo Police Pension fund and had an absolutely exceptional lunch. Another takeaway I got early on is that people tend to eat quite early in Japan. I usually don’t bother sitting down to my meal deal until at least 1pm, but by noon our restaurant was already filling up. I had my usual fluster about the order to eat things in, but I don’t think I offended anyone. We spent the rest of the day touring around a few site visits. Our first engagement was at Daiwa House, a housing company that is doing a lot of work on old-age care and assistive technology.
I was very pleased to meet Paro, a robotic seal that is used for pet therapy. We also managed to find the time to race around on some foot-powered wheelchairs! Our final engagement of the day was at the Metropolitan Government Buildings. Now, I know that the Tokyo Urban Area covers an area that would even make British Republic go “hang on” even without having a Hokkaido Prolapse, but it really does make London’s City Hall look like one of those parish council headquarters based out of the local vicarage. We had a meeting on carbon trading and sustainable business in a suite of conference rooms on the 5000th floor that looked like a cross between the interior of the Death Star and a city in one of those depressing 1970s movies where the protagonist realises that he’s living underground and the rest of the place is run by clones of his dead brother. Hilariously, the talk by the sustainability team about how they were planning to reduce excess plastic waste was accompanied by us all sitting down to a set of printed-off PowerPoint slides and a plastic bottle of water. We got two lapel pins as well though, so I could hardly complain. Ahead of the Olympics, the City has also started a rebranding exercise for tourism, which is quite clever and involves the word Tokyo being printed twice, once in a modern typeface and again in an old brush stroke to showcase old meets new. I’m sure that the smart-alec branding firm charged a lot of money for that, but it works quite well. Inevitably, one of the posters has a traditional Ukiyo-e image, and Hatsune Miku. I suppose the British version would be Wallace and Gromit standing atop The Fighting Temeraire. After a quick change at our hotel, we returned to the place we went for lunch for a formal dinner with the main staff of the organisation hosting us. We got into the lift with a distinguished gentleman who I vaguely recognised, mainly because he was wearing his House of Representatives lapel ribbon (these are really important, by the way: Yasuo Fukuda was once famously denied entry to the House of Representatives for not having his badge, despite the fact that he was Prime Minister at the time; he ended up getting in by borrowing one off his Deputy). When he got off on the floor below us, I realised that it was Yoshihide Suga, the Chief Cabinet Secretary. Here he is announcing the new era name. This really was turning into a very Tindale holiday. We ended up eating in a banqueting suite which was so obviously out of a Bond film that I felt it had to be deliberately designed that way. We had a set of floor to ceiling windows looking out to the Imperial Palace and the Diet Building, and I was half convinced that we’d all be shot to bits by an ultra-nationalist with a machine gun as we offered a toast. Fortunately, all was well for what turned out to be a very good East meets West dinner; sashimi followed by steak was the sort of thing on offer. I was worried that conversation would be somewhat stilted, but a few rounds of sake and a glass of wine loosened tongues fairly effectively. I ended up trying to explain the concept of a ‘deep fried mars bar’ in Japanese, and eventually hit on the idea of チョコ天ぷら, or ‘choco tempura’. Feeling a little bit worse for wear, I retired soon after we arrived back at the hotel, and drifted off with a few evening game shows.
After years of convincing myself that Japanese game shows obviously aren’t as silly as they seem in sitcoms, I guess it was inevitable that the first one I saw seemed to involve a contestant wearing black tie winning a set of samurai armour during a game of rapid fire kanji logic problems. Part Three: Northern British Laughing Comedy Brothers Assemble For Televised Happy Show! To Mei, To Tsu! ​I don’t know where my interest in Japan came from. Certainly, it was never a factor in my life growing up. I was always the best in school at geography and history, so unlike most people my age I at least knew where it was, but beyond that and the odd project on it when I was ten where we had to label the four home islands on a map, that was it. It wasn’t until I was at sixth form and I ended up watching Last Exile at a friend’s house the odd evening after college that I thought about the country in terms other than that of a simple location. At university, I was both an officer of the Anime and Manga Society and did my economic policy essay in Second Year on the privatisation of the Japan Post, so I like to think that I at least combined weeb-dom with a sense of a country beyond pop culture. I woke on Tuesday slightly hungover thanks to a spare bottle of rum that we shared in the hotel. It was meant to be one of a set of gifts for our hosts at CLAIR but we didn’t end up handing it over, so we hung around the hotel bar surreptitiously pouring it into a collection of cokes purchased from the bar. Given that I’m the youngest person on this excursion by at least twenty years, it felt almost like being on a school trip again! Nevertheless, I managed to have another jog around the Imperial Palace before we collected our belongings for the second leg of the tour. On my way back, I noticed that next to our hotel was — inexplicably — a Nordic restaurant called Stockholm that had totally passed me by the previous evening, despite the large relief map of the city. In 1957, the manager of the Imperial Hotel in central Tokyo visited Sweden. Whilst he was there, he came across the concept of the Smörgåsbord and realised that it would probably be a good fit for a post-war Japan where the economic boom was finally starting to take root, but where budgets were still a bit tight for many people, even those staying a posh hotels. He was a bit worried when he returned to try out the concept, but he found that both the staff and the guests loved the idea. I suppose you can see why, given that Sweden and Japan both have a lot of interest in cold cuts and fish as a staple of the diet. The menu was also very simple and soon a lot of people started to eat at the Imperial Hotel buffet even if they weren’t staying there because it was such good value for money. However, there was one major issue to address, which was what to call the restaurant. Smörgåsbord isn’t the most natural fit for the Japanese vocabulary, even for a language that makes such heavy use of foreign loan words. The proper term is スモーガスボード or sumougasuboudo, which doesn’t exactly roll off the tongue. Fortunately, another member of hotel staff had an idea. He’d recently gone to see the Kirk Douglas film, The Vikings, and felt that it would be an excellent name for a restaurant that was specialising in Nordic cuisine. So, the “Imperial Viking” was born. Before too long, the concept had been taken up throughout the country. 
As is so often the way, バイキング (baikingu) soon became the term used for any sort of buffet-style dining, rather than an explicitly Swedish one, but you can nevertheless still find theoretically authentic Nordic places such as the Restaurant Stockholm in many areas.

A quick minibus took us around the block to Tokyo Station. The Japanese railway network operates its stations in a very logical way, I have found. Most are largely underground, so there are very few that have the great trainsheds you see in European cities, but they are also busy in a way that you don’t see in places like Penn Station. They are surprisingly bland and utilitarian in that respect, although Tokyo does at least have an original station building, designed by Tatsuno Kingo in 1914, which survived both the Great Kantō Earthquake and the Second World War. It is nevertheless considerably less Neo-Baroque on the inside! As we were waiting for the train, I mused on the nature of the trip. As noted, CLAIR always host the tour in two parts, the first in Tokyo, and the second in a second-tier provincial city to showcase how local authorities outside the mega cities react to a range of issues. Because Japan is a sensible country, all of its local authorities and prefectures have very sound flags. Basically all of them are based on a stylised hiragana character of the place name in question. Here we have つ (Tsu) — for Tsu City, and み (Mi) for Mie Prefecture, which we were about to visit.

Our journey would take about two hours but involve a bullet train to Nagoya, followed by the local direct service to our final destination. The Horaki sisters from Evangelion are all named after the three different high-speed services on the Tokaido Shinkansen. Hikari, as well as Nozomi and Kodama, recently appeared in the kids’ train anime, Shinkalion, as part of a crossover with Shinji Ikari’s Depression Express.

I have wanted to catch the Shinkansen (New Trunk Line) since I was seven and went on the newly acquired bullet train at the National Railway Museum in York. So, thinking about it, perhaps that was my first infatuation with Japanese culture. I am sure that the majority of you will be familiar with how the Shinkansen works, but it was nevertheless amazing to see the troop of cleaners waiting outside as the train arrived, followed by a military operation to entirely scrub and clean the carriage, as well as rotate all the chairs to face the right way. The bullet train is actually rather understated inside. It doesn’t have the sleek futuristic stylings you see in China, or the mixture of compartments and glass doors you have with the TGV or ICE trains in Europe. However, they are large, comfortable seats that offer a great deal of legroom. In some respects, it is a bit like sitting in your granny’s favourite armchair. The journey to Nagoya was quick, painless, and unremarkable bar one thing — Mount Fuji. I obviously knew that the train went past it, but as with so many things, like Stonehenge or the Statue of Liberty, you have already seen it so many times on television or elsewhere that by the time it rocks up, you’re almost bored of it. Then you see it for real. A snatched photo on an iPhone isn’t going to do it justice. You need to see it for yourself. Yet, in the crisp January air, it was clear why it is the foundation of so much myth and legend. I’d worship it as well, if I could. We arrived at Nagoya station and walked down to the underground tracks to catch the local train to Tsu.
It was time for lunch. Again, unlike the great high-speed services of Europe, Japanese trains don’t have a bar or restaurant car, and many don’t even have a trolley service. They do, however, have something much better. Ekiben!

Every station in Japan will have at least one store or kiosk selling ekiben — a combination of ‘eki’ (train station) and ‘ben’ (short for bento). The one above is a fairly typical example: very heavy on vegetables, either steamed, raw, or pickled, with at least two different sorts of rice (here we have plain boiled, as well as sesame-fried, and cooked with red bean paste) and a small amount of protein — usually chicken or fish. The somewhat horny thing on the right is pumpkin, which was the only thing I didn’t finish or care for much (I don’t like dry-tasting fruits and vegetables, which is one of the reasons I hate bananas). It was just the thing to perk me up after the journey. We disembarked at Tsu Station just before 1pm. Unlike Tokyo, which is a global city and feels like it, the city of Tsu was as typically Japanese as you could expect, and very much the idea of Japan in microcosm. There were far fewer westerners, the vast majority of buildings were low-rise or lower, and overhead power cables dominated the streetscape. I love things like overhead power cables. They are so mundane, yet they really highlight how alien the country feels to someone from the UK in a manner that something far more incongruous, like a different language, does not. I can see why so many anime and film directors use them as establishing shots.

The rest of the day was taken up with a meeting with the Mayor of Tsu, followed by presentations by various bigwigs at City Hall about the work that they are doing to improve healthcare outcomes for the elderly and to improve the environment. It felt a bit like that episode of Parks and Recreation where the Venezuelans visit (boy, that one has really not aged well…), although we were a lot less chippy. As has been noted, it really highlights the fact that Japan is surprisingly low tech in many respects when the idea of a smartcard giving free bus travel for senior citizens is seen as being innovative. I was able to use my experience in big data and from my old job at TfL to discuss some of the pitfalls and potential that the scheme had, so I felt I earned my keep. They also had a poster of Astro Boy encouraging people to use them. Which was nice.

Dinner that evening was at a local izakaya (tavern), with the city staff. This was probably the best meal I had in Japan, given that it was utterly delicious but also informal. The food was phenomenal and, naturally, beautifully presented. I was delighted that one of the opening dishes was fried chicken, which was more like tempura than KFC, followed by a succession of excellent dishes. The octopus as seen above was a particular highlight. It was served as sashimi, with no dressing aside from a tiny amount of lemon and vinegar. I simply don’t understand why people feel the need to faff around with fish when it is already excellent. I ended up eating half of the table next door’s as well as our own, because some people are heathens. The rare beef was also superb, with toasted soy sauce on the side. The meal finished with soba noodles, which I (sensibly) asked to have cold. A final toast and sake was enough to finish off the night in a sensible fashion. I had a much-needed early night.

Part Four: The Inevitable Hot Springs Episode

I will start this update by talking a bit about combini.
According to a colleague on the trip, one thing that people in Japan like is things being “benri” (handy). If you are ever in the country, I would struggle to think of anything more benri than the “combini”. The term is short for “convenience store”: small retail outlets that tend to be open all day and night and in which you can find just about anything that you would need at short notice. There’s a host of them, and you’d struggle not to find one within five minutes’ walk of any hotel. As with supermarkets in the UK, combini tend to be franchised, with companies ranging from Lawson, to 7 Eleven, to Family Mart. It is said that, even with Japan’s population decline, there is a combini for every 2,000 inhabitants. They are found literally at every street corner in the major cities, but are also present in smaller cities and rural areas. The combini stores are so widespread that people are surprised when they hear of a place without one. They also serve a number of key functions. I personally have only been for beer and snacks, but it’s orders of magnitude better than any Tesco Express. Most of them also have free wifi and a terminal to buy bus and train tickets, as well as a counter for hot food.

Beer is a common sight in most combini

Weekly Shōnen Jump magazine

Anyway, I woke early enough to go out for a run through Tsu. One of the reasons that I like running is that it lets you get a genuine feel for a city, so I headed off on a vague route that I had planned to let me see more of the place. It was a bit colder than I had been informed, and I made sure to set off at a firm clip. I had finally gotten around to reading Murakami’s What I Talk About When I Talk About Running on the plane over, and I had found myself agreeing with basically all of his key points. The greatest living Japanese novelist, like me, came into the sport at a fairly late stage in his life, but his account of using it as a solution to everything from meditation to overcoming writer’s block is something that I can only agree with. I headed down to the small beach near the harbour, once again marvelling at the effortless way in which the streets in Japan manage to be narrower, but facilitate pedestrians and the concept of space in a way that seems to simply be beyond urban planners in the United Kingdom. It was an early morning run through a really picture-postcard Japanese city. I stopped about twenty times just to take photos of stuff like power lines. Simultaneously mundane and alien.

We spent the rest of the day visiting a range of industrial sites around the area. The trip was not a holiday, but a way for us to see how local authorities in Japan are dealing with the pressures of things like an aging population, sustainability, and industrial strategy. We visited a wood-chip power station by the coast, the headquarters of a local wind-power company (wind power is still rather uncommon in Japan for a number of reasons, not least typhoons limiting the utility it has for offshore purposes) and a recycling plant. One thing I like about the Japanese aesthetic is how boxy things are. I don’t mean this as a criticism, but it’s a bit like living in 2000 as imagined by people in 1985. The country is a bit odd about design in this respect. Many vehicles are really square, blocky affairs, but a bit sleek nevertheless, as are municipal buildings. When you combine this with how clean everything is, it’s a bit like living in a first-person shooter from 2004.
En route, we also met up with some more mascots! Shiromochi-kun is the mascot of Tsu City, and is a white (shiro) sweet rice dumpling (mochi). He was joined by his friend Misugin, a tree spirit who protects the local forests. Yes, I don’t know who the one on the right is either; you’re really funny people. Do send more of those.

Tindale-kun (centre) is a jittery bearded man who wants everyone to love him. He is the mascot of Southwark Borough, London Prefecture.

The highlight of the day, and I think the trip, was an evening at a nearby hot spring or onsen. I have wanted to visit one for many, many years now. They are incredibly common in Japan, due to the country being such a hotbed of geothermal activity. Although the resorts are a bit old-fashioned these days (it’s mainly old people who go), they are still very popular. Although the quality of them varies massively, the majority tend to be traditional and fairly austere. I think the best comparison is between a spa and a bed-and-breakfast. You go to look after yourself and unwind, rather than to have a luxury holiday. I am not a man who really likes luxury though, so that suited me very much indeed.

An onsen is technically the term for the hot spring itself, but is almost always used to refer to the resort as a whole. The typical room in an onsen is like the one above, which I stayed in. You have the traditional woven straw floor mats (tatami), and sliding doors. My colleague and I shared a room before dinner, and changed into the yukata provided. This is one of the many different types of national dress. Unlike the more famous kimono, it is very informal and easy to put on. You tie the thing together with a simple belt called an obi, making sure to put the knot at the back, because having it showing apparently indicates that you earn your money in a somewhat dubious way. The only issue I had, because I’m dyspraxic, is that you never wear the thing with the right fold showing, because that is the traditional way that one dresses a corpse. I genuinely spent fifteen minutes worrying if I’d done it right (is it my left, or their left?!) before my colleague informed me that I was doing the correct thing. It was a bit chilly (most of the resorts tend to leave the private rooms unheated), so I put on the hanten as well, which is a short coat that adds a bit of extra warmth (the sleeves also allow you to carry some money, a phone or (as I found out later) a can of beer!). I think I looked quite sharp, to be frank!

You Only Live Twice

Our meal almost felt underwhelming, given how well we had been eating previously, but was still delicious. The new addition was a Japanese hotpot, which, unlike the Mongolian and Chinese styles, usually comes pre-mixed. The onsen served it with a locally brewed ponzu, a type of fermented sauce usually made with seaweed and a citrus fruit, and it was absolutely divine. I even bought a bottle of it to take home! Also on the menu was a sweet potato puree, and, as you can see from the photo below, a shrimp so large that I thought it was a crab or lobster when it was first served to me. It was served with a wonderfully aromatic sauce, which set the whole thing off perfectly. Itadakimasu! I spent an hour after dinner waiting to cool down before I headed down to the spring. As with most onsen, the water is piped in and heated if the hot spring itself isn’t available.
You first remove your clothes and put them in a basket in the waiting area, and then wash yourself (naked, there’s no shame here) with a tap and showerhead, usually available at low stools near the bath proper. As with most of these sorts of things, you are meant to be clean before you enter the water. I sat down and washed thoroughly, and headed in. The water was hot, bordering on uncomfortably so, but that’s the point of a treatment like this. I enjoyed a wonderful half hour flitting between the bath and having a cold shower under the nozzle to make sure that I didn’t poach myself. You are usually provided with a small towel to cover the wedding vegetable in between the shower and bath, which I soaked in cold water and used as a flannel on my head for a bit of extra relief (don’t let the towel dip in the bathing waters, that’s also a big no-no). I left feeling totally invigorated, like I’d had a once over with a pumice stone, and noticed that our futons had been made up for us. I was asleep in about six seconds.

Part Five: God’s Blessing On This Wonderful Prefecture Capital!; or, Aggretsutindale

We left a rainy onsen for our final full day in Tsu after another typically Japanese breakfast. The morning meetings took place at the local business development centre, where we discussed international partnerships and development, as well as national and local lending facilities. It was rather dry but still interesting to hear about how much more power councils there have over such matters compared to those in the UK. Speaking as someone who knows how much of a challenge it is to even get permission to set up any sort of council-run bank, it was a clear sign of how far the UK has to go. We then had one of the most pleasant surprises of the whole trip, a meeting at a family-run soya producer in the business park, which is currently on its fourth successive female manager! In a country where attitudes to women in the workplace are, despite rapid improvement over the past decade or so, still very old-fashioned, Shoko’s efforts are really worthy of the highest praise. I have to say that it was the most modern and minimalist soya factory that I have ever been in, as you can see from the image below. It was effectively an IKEA showroom crossed with an architect’s studio. Shoko’s company produces a range of soy milk and soy-based foodstuffs, and is working to develop an export business. Given the great packaging, the quality of their product, and the wonderful story behind everything, I could only wish them the very, very best.

After a quick lunch, we spent the early afternoon visiting a local temple complex. The Honzan Senju-ji is one of the most important centres for Buddhism in the whole of Japan. As a point of principle, I don’t take photographs of religious buildings, but it was a real hidden gem and an example of the sort of thing that is very much off the tourist trail. I don’t imagine foreigners visit Tsu any more than you’d have a city break to Wolverhampton or Des Moines, but it was nevertheless a great thing to see. We also had time to have a potter around the local backstreets, and as I have noted in earlier posts, I think that the streetscape of most Japanese cities is perhaps the most ‘alien’ thing I encountered. Most streets are very narrow affairs and can only accommodate traffic going one way or the other. You don’t tend to have pavements, but that’s fine because the traffic is minimal and tends to go very slowly anyway.
This hodgepodge has mainly developed because of how the planning system works, where you don’t really have any sort of zoning regulations, so people can basically put up what they like. This also means that the life-span of most Japanese buildings is very short. The average house is only up for twenty or thirty years, at which point it is demolished and the people owning the freehold just start again. Although it isn’t perhaps the most sustainable way of doing things, given that they tend to be constructed out of very lightweight materials, it isn’t as bad as it seems. The one downside is that this tends to make addresses completely pointless. Only the most important of streets and highways tend to have names, so if you’re looking for somewhere you usually have to just be given a local landmark or station, and find your way by trial and error. This isn’t something that is done to catch out foreigners; by their own admission, most Japanese people tend to have to rely on directions and hand-drawn maps to find a specific house or apartment. The chaos, naturally, only adds to the charm, although given that I haven’t really been in a rush to find anywhere and have largely been taken around by locals, any issues haven’t really reared their head so far. Our final engagement of the day was at Tsu City Hall, where we had an ‘opinion exchange’ with local bigwigs. I was the last person to speak, so rather than re-hashing what my colleagues had said, I mused on my feelings about Tsu, touching on the lovely scene that I had encountered the previous day. I’d run along the seawall and seen the little fishing boats silhouetted against the brand-new power station. It was “Japan in microcosm” (and credit where credit is due to our translator, who handled that awkward turn of phrase magnificently). I added that it was clear how the Japanese planning system and devolution of genuine fiscal powers contributed to this. I thought that I’d gone off message a little bit, but both my colleagues and the Tsu delegation seemed to appreciate it, and the Head of Legal Services at Southwark Council — one of the other British people on the trip — said that I needed to go back into comms.

The Council then hosted us for a final meal at another local restaurant, which was entirely in the Japanese style, including the seating arrangements. This was the sort of thing that you’ll have seen in every film set in Japan: everyone seated at low tables on tatami mats. Japanese people traditionally either sit crossed-legged (胡坐 “agura”, which is literally ‘barbarian sitting’), or in the more formal seiza (正坐, or “proper sitting”). The latter involves kneeling and sitting back on your heels, and is absolute torture after about two minutes. Our hosts, including the foreigners working for the organisation we were with, made it quite clear that seiza was not required even at a formal event like this. Indeed, even natives struggle to sit like that for more than half an hour at a time. Old folk are usually exempted from it, and a law being introduced in April 2020 will make it illegal to force people to sit in seiza for extended periods of time. Dinner was fairly typical and, frankly, not the best one of our trip. It was still excellent, however, and the drink and conversation flew by in no time. A few colleagues went off for a nightcap at a local izakaya, but I was tired and had an early night. I woke early and had a final run around Tsu, after which we caught the train back to Nagoya and then another Nozomi to Tokyo.
I didn’t have breakfast, but managed to find room for a (hot) canned coffee at Tsu Station. Another wonderful thing about Japan is the abundance of vending machines. Whilst I think that the ones selling used schoolgirl knickers are a bit of a folk myth, it is certainly true that they are ubiquitous. You will find ones selling beverages, cigarettes and food on basically every street corner. Even if you cannot find a combini, you will find refreshment elsewhere. I do mean ubiquitous, by the way — there are over five and a half million of them in the country, enough for nearly one for every twenty-five people. On the more popular trails in the countryside, you’ll see them at the side of country paths, at the beach, and even inside caves. Part of the reason for this vending machine culture is their durability and ease of use. However, many also have the function of being able to detect when an earthquake has struck, and will dispense their contents for free to assist people who may be struggling for food and water. Naturally, vending machine culture is dying out slightly as Japan embraces the digital era and online shopping, but the machines aren’t going away anytime soon.

Tokyo Station Square

The Olympic Countdown Clock…

…and the Paralympic Countdown Clock!

We arrived in Tokyo in time for lunch. After checking that the Olympics (which are on this year) were all on schedule, we had our one bad meal of the trip. Hilariously, at the insistence of one of the members of staff accompanying us, who is from Tours, we went to a French bistro overlooking the square in the photograph above. I know that sashimi is popular across the world, but I don’t think chicken sashimi quite works. Two of my colleagues sent their (actually raw) plates back, only to have two equally raw ones returned to them (I am quite convinced that the staff just swapped them around). We all took it in our stride (with not a little teasing at our handler’s expense), and had the rest of the afternoon free to enjoy Tokyo. I went off for a bit of a wander to the north of the Meiji-era Akasaka Palace — a neo-Baroque western-style building that is wonderfully hideous — where I naturally stumbled across the staircase from Your Name. I did a genuine double take, which is always fun to do (and presumably to witness, judging by the short titter from a passer-by). I finished wandering around and then went to meet some of my colleagues in Ginza, which is basically Tokyo’s version of Oxford Street. We met up at the largest Uniqlo in the world, where I indulged in my jet-set lifestyle and bought one (1) pair of half-price chinos. We then had our final meal as a group at a great little izakaya on the street next to our hotel. To thank the four members of staff who had given up so much time and energy to organise such a wonderful trip, we made sure to pick up the bill for them, although given the nature of Japanese pub food, this wasn’t actually that generous. We nevertheless had the full gamut of izakaya staples, from the usual fried chicken and edamame, to some more unusual stuff like fermented squid guts. I am a complete omnivore, and I had the whole lot. One of the interesting things about the Japanese pub is that you almost always get a small nibble when you go in. This isn’t free — you are charged a few hundred yen for it in place of a service or cover charge. Even the most high-end of places don’t approve of tipping, so this seems like a perfectly acceptable alternative. We rounded off our final night as a team in a karaoke bar next door.
As a rule, I do not sing in public, but I felt obliged to give an absolutely belting rendition of Zankoku na tenshi no tēze (which continues to be at or near the top of the karaoke charts) and had a very pleasant final night together. Karaoke bars are literally as you’ve seen them in movies. You hire a booth — usually for an hour or so — and either have a free drink or a one drink minimum during that time. The other great thing about karaoke is that, unless it’s licensed, none of the English-language songs use the official music videos. Instead, you’ve got a load of western men, presumably young expats desperate for money, walking around in hilariously bad scenes that have absolutely nothing to do with the lyrics. We had three in a row that all involved the same male lead just walking around a park, sat on a train, or reading a book next to a squirrel (who was frankly a better actor). I think that there has to be a Guardian long-read in them. Part Six: The Rising of the Public Policy Hero I woke at eight in the morning on Saturday, which is late for me. The good people at CLAIR had ensured that we didn’t need to check out until 1pm to help facilitate those of us with afternoon flights and who had check ins at other hotels, which meant that I could drag myself off for a jog around the Imperial Palace again. There is actually a Parkrun in Tokyo, but as it was located a bit of a shlep away in Futakotamagawa, I decided not to bother with that. Perhaps next time! I returned in time to have the last sitting of breakfast, and checked out. I was keen to fit in a bit of sightseeing before heading off to my new hotel, and I felt that if I didn’t formally leave the hotel, I’d just hang around pointlessly for a couple of hours. I had managed to keep my weeb tendencies in check during the official part of the trip, but my blurting out of the theme tune to Neon Genesis Evangelion had unleashed the pent-up pressure. As release, I took the subway to Akihabara. ​Even if you don’t know it by name, Akihabara is probably what you most associate with a Japanese cityscape. Bright lights, noise, incessant billboards, and people in cosplay dominate the area around the station. It is generally accepted as the hub for Japanese popular culture, with dozens of shops selling anime and manga products, light novels, DVDs, cosplay outfits and just about everything you can imagine. I am obviously a nerd for this sort of stuff, but I am certainly not as openly engaged with the whole community as you tend to assume, so I did my best to look as respectable as possible amongst the ocean of otaku. That said, I spent an enjoyable couple of hours wandering around the area, and did buy a couple of things that I had my eye on. Most of the stores, are set out over multiple floors catering either for specific series’ or for types of products, and you usually have to pay separately on each one. It certainly saves time and probably helps with security. Most places also ban photography, but a few will allow you to have a few snaps of the displays. I especially liked this NieR stuff, even if it was far beyond my budget! I then wandered over to the Tokyo Dome, an entertainment complex that offers a combination of old-fashioned fairground style rides, as well as a roller coaster, but decided against going on anything, and then collected my baggage and headed for my home for the last couple of days in the city. 
After a week spent in Western-style hotels (with the exception of the onsen outside Tsu), I had decided to spend another couple of days in Tokyo exploring by myself. The Japanese Government did not offer to put me up for this at their own expense, which I found to be rather unfair, but as I am a magnanimous fellow, I let it slide. I am a penny pincher at the best of times, and working for a think tank is not the most extravagantly paid occupation, so I booked myself into the Kimi Ryokan, in the Ikebukuro area, a few miles north of the Imperial Palace. A ryokan is a traditional Japanese inn, and the nature of them varies about as dramatically as you’d expect. The Kimi was actually more of a minshuku, more akin to a hostel or a bed and breakfast than a hotel, but I can still recommend it. As with many buildings in Japan, it is a modern construction, but remains very traditional on the inside in terms of decor. Shoes are not allowed past the small, lowered entrance hall (you keep them in a cupboard with your room number on), the floors are covered with the ubiquitous tatami matting, toilets and showers are shared, and the sleeping arrangement is a futon. Frankly, I always feel rather out of place in luxury hotels, and I was keen to have a simple base of operations for the next couple of days. It was also remarkably cheap, with the three nights only coming to £103, which even out of season is a good deal. As people have noted, it seems to be a bit of a myth that Tokyo is an expensive city. I know that I’m coming from London saying that, but even a skinflint like me could get by without spending too much during my time here. If you aimed for cheap attractions and kept eating out to the cheaper end of things, I think you could do most days for not much more than ¥3000/4000 (about £22 to £30). Travel guides are not exactly complimentary about Ikebukuro — most describe it as a forgettable part of Tokyo. I disagree, although as a forgettable person myself, I may be inclined to go in to bat for the less beloved parts of a city (I am currently Head of Press for Enfield Council). Ikebukuro is split in half by a large train station, which acts as a hub for a number of commuter lines heading into the centre of Tokyo, making it the third largest in the world. As with many large transport hubs in Japan, it is dominated by commercial and office buildings. In this case, the rival department stores of Seibu and Tōbu glower at one another from either side of the train tracks. After checking into the Kimi and inspecting the facilities, which took about sixteen seconds, I went out for a bit of an evening explore. I headed through the underpass of the station to the eastern side of Ikebukuro, which is perhaps best known for Otome Road — one of the largest centres for anime and manga culture after Akihabara. Unlike the latter, which is very male-dominated (as evidenced by the large number of billboards with the typical, scantily clad anime girls on them), Otome Road caters much more for a female audience, with a large focus on shōjo and josei, and women-led light novels. It makes for a much less off-putting atmosphere than Akihabara, especially for a fairly reserved person like me. I wandered through to the Sunshine 60 complex — as well as being home to what was previously the tallest skyscraper in Japan, it also holds the Sunshine City shopping centre, so I indulged my inner kid and had a walk around.
The complex contains a number of major attractions, including a concert hall, a big Namco amusement centre and various things like cat cafes — it really struck me as Tokyo’s version of the Barbican, but a bit better, because the Barbican doesn’t have an aquarium or a Pokémon Center. The whole place was built on the site of an old prison complex for war criminals. Anyway, I bought a Bulbasaur for myself and a Pikachu for my friend Tom. I was tired and overly hungry by the time I was halfway back to my hotel and wanted something cheap but filling, so I naturally popped into the first place I saw that looked cheap and filling without really paying attention to what it was. Thankfully, rather than a brothel, it turned out to be an outlet of Yoshinoya — Japan’s pre-eminent fast food chain. Yoshinoya specialise in gyūdon, which is a great counter to people who think that Japanese food is purely based around fish and beautiful presentation. I had a bowl of their speciality for less than the price of a Tesco meal deal and felt very pleased with myself. Gyūdon is basically just rice topped with beef and onions but somehow manages to be the best thing in the world if you are either drunk or just in need of comfort food. During the height of the BSE crisis at the start of the Millennium, the chain was forced to stop selling their signature dish owing to the ban on beef exports from America. When it was lifted in 2006, the queues for “the beef bowl revival festival” stretched for over a mile in some areas of the country. Feeling thoroughly sated, I collapsed onto my futon and woke the next morning feeling entirely refreshed. I headed off to the station by way of a local branch of Moriva Coffee. I am a firm believer that it is fine to go to generic cafe chains so long as it is a local generic cafe chain, and I was also craving a flat white after over a week of green tea and morning miso. There is a big fad for kōhī to tōsuto (coffee and toast) at the moment, as a recent Guardian article notes, and as the only westerner in the place, I felt that I could pass it off as a fairly authentic breakfast. I like museums, but I tend to avoid going to the large, national ones if I’m abroad. I usually find that they are either so expensive you end up feeling obliged to spend the whole day in them, or they have an excellent collection that nevertheless isn’t really “about” the country in question. At least the British Museum is free so you can dash around it, but as much as I love Paris, I don’t think it’s the best use of a city break to fritter away a day in the Louvre looking at a load of art from Italy (it’s why I much prefer the Musée d’Orsay). Likewise, I felt that the considerable merits of the National Museum could be saved for a future visit to Tokyo. I therefore headed out of Ikebukuro into the western suburbs. I alighted in Suginami, which is rapidly becoming one of the more Bohemian areas of Tokyo. Many artists live here, so there is the usual influx of small galleries, hipster cafes, and the like. There isn’t a huge amount of interest here bar the odd shrine, but it struck me as a very sensible place to base yourself if you were visiting. I visited the local animation museum, located on the third floor of the community centre, which — like many municipal buildings in Japan — looked to have been built to a very high standard in the 1970s and then just left to age like a fine, modernist wine.
The museum is actually part of a local university specialising in media studies and art, and was small but very well put together. They have a pillar at the centre of the exhibition, with signatures by hundreds of leading manga artists.

Yoshihiro Togashi, of Hunter x Hunter fame.

Many of these people were entirely new to me!

Ah, the three genders.

One of the highlights was their perfect mock-ups of desks belonging to some key anime directors and artists, including Yoshiyuki Tomino of Mobile Suit Gundam fame, as well as a signature wall of hundreds of people from across the industry. They also have a regular new intake of the latest posters. I was quite annoyed that you cannot buy them. The first one here is for Keep Your Hands Off Eizouken!, which has just started airing and is an absolute banger. Basically a non-sexist Bakuman.

Easy Breezy

Or Boku No Hero Academia: Heroes Rising, if you’re annoying.

Haikyu!!

Moe World War Two Combat — The Series

Had I had more time, I would have also visited the nearby Ghibli Museum, but instead I wandered up to Kami-Igusa Station to catch the train to my next destination. Outside, I was nonplussed to encounter a bronze statue of a Gundam. I had never heard of this one, unlike the famous life-sized mecha near the Tokyo waterfront. I later found out that about ten years ago, the residents of Suginami paid for it to be constructed in honour of Sunrise Animation, the makers of the premier mecha robot series, who are based in the area.

I took the train twenty minutes further west to Hana-Koganei, where I had a small bento box of something that I think was chicken and a squid tentacle with rice, but I wasn’t really paying attention, because for some inexplicable reason you weren’t allowed to eat inside (which confused me, given that the store had a row of tables and chairs — must be something to do with trading hours, I suppose). It wasn’t an especially warm day, so after scoffing it down outside I went to an outlet of Mr Donut and had a coffee and cruller to warm up. I did a full Twitter thread about my visit to the Edo-Tokyo Open Air Architectural Museum, so I won’t repeat it all here, but I cannot recommend it highly enough. If you are in the city and have even a vague interest in the subject, go. I think Churchill’s maxim about us being shaped by our buildings is one that is entirely accurate. If you wish to get a feel for a nation’s way of looking at themselves, look at where they live and work. The combined cost of these two museum visits was ¥400 — the animation museum was free. The train fare there and back literally cost me more. I was tired and a bit grumpy at this point but I nevertheless alighted at Shinjuku Station (this one really is the largest in the world) and wandered over to the Metropolitan Government Building. I had been there around a week ago for the aforementioned meeting, but hadn’t had time to visit properly. I was keen to visit the observation deck, and had got lucky, as it was just getting dark. There is something wonderful about seeing a city slowly turn on its night mode, and I spent an hour watching Shinjuku and the rest of the metropolis light up. I returned to Ikebukuro via Star Wars.
Owing to the number of people working in Tokyo City Hall, they’ve built a subway from Shinjuku station for effectively the whole mile or so to their place of work. It’s a bit Death Star.

I wandered around near the Kimi for a bit before, on a whim, popping into an izakaya. I had thought that my limited Japanese would be sufficient, but it was not. Aside from ‘hello’, I couldn’t make a single word stick, although I think that they were putting it on a bit — I mean, even with my accent, I think ”Bīru ippai kudasai?” is fairly unambiguous. Nevertheless, via a certain amount of pointing at the table next to me, I managed to have a wonderful set of nibbles for not much more than £20. That said, I’m including what I assume had to be a 50 per cent gaijin mark-up.

Kobe beef sashimi

Squid yakitori

I still felt obliged to go home via the 7/11 for another beer and some packaged dried squid. Now, it looks, smells, tastes and feels exactly like you’d imagine packaged dried squid would. It’s not bad, it just feels vaguely wrong in the same way pork scratchings do. I suppose it’s the same way most people think about me. After doing my laundry — because I’m secretly a sixty-year-old woman who doesn’t want to come home with a load still to do — I fell asleep watching a bit of anime trash. After what had been a very cultured day, I felt something brainless was absolutely justified.

Part Seven: The Beast That Shouted “Aye!” at the Heart of the Diet; or, Departure With No Wasted Draws

For as long as I can remember, I have been a man with an almost pathological fear of missing a train, plane or ferry. I often see friends complaining on social media that they’ve arrived for a flight a day early by mistake, and often think “surely that is eminently sensible?” Nevertheless, I greeted my KLM check-in email with a look of resignation and filled it in with a similar sense of irritation. After over a week of clement weather and unseasonable warmth, my final day in Japan was cold and miserable. Fortunately, I was neither. Giving in to my aforementioned fear of missing my now checked-in flight home, and wanting an excuse to visit another part of Tokyo, I took the train to Nippori Station to buy a ticket for my train to the airport the next day (a pointless action, given that it never gets booked out, but like I say, I am neurotic about this sort of thing). My travel plans secure, I had an excuse to walk from Nippori to the Diet. This wander took me via Yanaka cemetery, one of the most important in the city, although there are very few physical burials there. Given the lack of suitable space in Japan, as well as the country’s historic links with Buddhism, cremation is overwhelmingly the norm. The Meiji Government attempted to ban the practice shortly after the Restoration in an effort to promote domestic Shintoism, but this proved impossible to implement, and the ban was repealed after less than two years. Whilst wandering, I came across a teru teru bōzu (shine shine monk). These are little handmade charms that are said to bring good weather and ward off rain. It seemed to just about be working that day!

The walk then took me through the rather wonderful Ueno Park, which basically houses all the museums and shrines the powers that be cannot find room to put anywhere else in Tokyo. It also has a zoo. It is not the most beautiful park in the world, nor is it the largest, but even on a Monday morning in January, the crowds were considerable.
Had I been in Japan a few months later, I would probably have struggled to move at all, given that Ueno is one of the most popular places for hanami (cherry blossom viewing) parties. I wandered out of Ueno and back towards the city proper, and once again found myself in Akihabara. I found an excuse to buy some new wireless headphones to replace the ones that I had carelessly left in my hotel room in Tsu, and also noted — obviously — a pop-up ramen bar for the popular comedy anime, Konosuba, but decided against going in. I felt that that really would have been too far, even for me. Given that she’s a masochist, I’m not sure that it makes sense for Darkness to be the one enjoying the noodles.

I have one tradition when I visit a capital city, which is to try and visit the national legislative building if at all possible. I had secured a ticket for a free tour of the Diet before I left the UK, but was surprised that I wasn’t the only person to have shown up for one of the daily English-language tours. As I waited at security, I recalled my old reading about the curious nature of Japan’s political system. I will go into history mode here — so for those of you more interested in buildings and food, feel free to skip to the manga portrait of former Prime Minister, Junichiro Koizumi. Japan’s political system can best be summarised as the Westminster System filtered through an American lens. To understand the modern Japanese polity, it is important to note the history behind a nation that simultaneously combines the oldest continuous executive office in the world with a system of government that was rebuilt from the ashes of total war. This hybrid of ancient and modern, combined with issues regarding national identity and responsibility, has created a political system that is both familiar and alien to western audiences. The Meiji Revolution (commonly referred to as the ‘Restoration’) was one of the most significant events in the history of East Asia. Spurred on by the cultural shock associated with the Perry Expedition, reformist leaders from Satsuma and Chōshū overthrew the Tokugawa shogunate, which had ruled Japan since 1600. The hereditary Shoguns had governed as feudal strongmen, with the Emperor left as an effectively vestigial figure, carrying spiritual respect but no real political authority. The Restoration (which can more accurately be considered to have been part civil war, part counter-coup) led to the Emperor being given absolute authority over both the spiritual and temporal matters of state — with a new political structure being created as a hybrid of the British and Prussian systems. The Emperor was invested with absolute power over the new Diet — although no laws could enter into force without the approval of Ministers of State. Japan’s membership of the Allies during the First World War brought the country Great Power status. Despite common perceptions, the country gained a great deal from the end of the conflict — including a permanent seat on the Council of the League of Nations. However, a proposal by the Japanese delegation to the Versailles Peace Conference, sent under Prime Minister Hara Takashi’s government, to entrench full racial equality in the organs of the new international system was not taken up. Although it won majority support among the delegations, it was effectively vetoed by Woodrow Wilson, who ruled from the chair that it required unanimous backing. For Japanese conservatives, the rejection of the racial equality clause was taken as a sign of the country not being treated as a true partner on the world stage.
During the 1920s, this resentment would linger, as would the effects of economic depression — not helped by the protectionist tariffs imposed by the United States on Japan’s heavy industries. A financial crisis in 1927 (a precursor to the Wall Street Crash) led to the further discrediting of civilian governments amongst the general public. This gave rise to a deepening of ties between the Zaibatsu industrial-banking conglomerates and the armed forces, which increasingly held sway over political figures. The London Naval Treaty of 1930, which limited the size of the Japanese fleet, proved to be a perceived humiliation too far for members of the armed forces, resulting in a wave of political violence that culminated in the so-called “May 15 Incident” of 1932. Although the plotters were unsuccessful in their original goal of forcing war with the United States by assassinating Charlie Chaplin, democratic government was fatally undermined by the inability of the authorities to hand out more than token punishments to the ringleaders of the plot. Within a few years, a further coup attempt — the “February 26 Incident” — led to the consolidation of military authority over the entire Japanese body politic. However, even during the Second World War, a semblance of democratic accountability remained. The 1942 election to the House of Representatives — although by then a rubber-stamp legislature — still saw a significant minority of delegates elected without the endorsement of the Imperial Rule Assistance Association — an organisation intended to be the precursor to a new single-party state. Unlike Nazi Germany, totalitarianism never truly came to Imperial Japan, and the basic principles of the Meiji Constitution remained nominally in force. Nevertheless, defeat in 1945 and the subsequent occupation of the Home Islands spelled the end of the old system. Under Douglas MacArthur — Military Governor and Viceroy in all but name — the modern-day Constitution of Japan came into force. Technically an amendment to the Meiji system, it nevertheless represented a huge shift in the levers of power. Written largely by two senior American military figures with legal backgrounds, the new document was presented to the newly elected House of Representatives after the 1946 general election. With two significant amendments prior to ratification (explicit references to the role of the Emperor, and the introduction of a bicameral Diet rather than MacArthur’s preferred unicameral system), it entered into force the following year. It has never been amended. However, the clause committing Japan to pacifism and formally banning the existence of armed forces, Article 9, remains one of the great shibboleths of Japanese politics. The immediate post-war period was — naturally — a time of great political upheaval for the country. After the 1946 General Election, Shigeru Yoshida — a former diplomat who had been one of the leading opponents of war against Britain and the United States — became Prime Minister. Although he stepped down after only twelve months, he returned to the post just over a year later and remained in office until 1954. Under the Yoshida Doctrine, the Government established a foreign policy committed to non-intervention in international affairs, with a close security and economic partnership with the United States. With no formal armed forces, Yoshida placed economic reconstruction at the heart of Japanese domestic and foreign policy, paving the way for the economic boom of subsequent decades.
The history of post-War Japan could therefore be said to have been on a similar course to that of Germany. However, many of the tropes associated with modern-day Japan — a dominant-party state, historical and constitutional revisionism, and uneasy diplomatic relations with neighbouring states — place it quite at odds with its former ally in the Anti-Comintern Pact. The reasons for this are naturally complex, but the key issue is one of geography. With the Communist victory in the Chinese Civil War in 1949, and the North Korean invasion of the South less than twelve months later, the American occupation became increasingly focused on assisting the Truman Doctrine and ‘containing’ the spread of Communism in East Asia. The General Elections of 1947 had allowed the Japanese Socialist Party to enter Government, with Tetsu Katayama installed as Prime Minister. Despite Katayama’s moderate leanings, a rise in Marxist agitation within the party, coupled with corruption scandals, led to the coalition collapsing and the return of Yoshida as Prime Minister. Although the brief period of socialist rule led to profound changes to the country, including the establishment of labour laws and healthcare reform that remain extant to the present day, it nevertheless proved a shock to both centre and right, as well as the White House. The risk of a left-wing Japan — even as a neutral non-aligned power — was seen as being a grave threat to the post-war order. The Korean War offered a solution for all concerned parties. Demand for materiel by the Allies gave a much-needed boost to Japanese industry, whilst also allowing for a major political realignment. Under Yoshida, the Liberal Party moved increasingly to the right, whilst his successor as Prime Minister — Ichirō Hatoyama — likewise consolidated power within the Democratic Party. In 1955, the two parties formally merged to create the Liberal Democratic Party, which would remain in power until 1993. The post-war constitution — written in less than a week — has remained in force, unamended, since 1947. It establishes Japan as a constitutional monarchy with supreme power vested in the Diet and the Prime Minister. Japan’s Parliament (Diet) is a bicameral body consisting of a lower House of Representatives and an upper House of Councillors. Both chambers are elected by all citizens above the age of 18 (reduced from 20 in 2016) under the principles of universal suffrage. The House of Representatives comprises 465 members elected for four-year terms. Japan’s semi-proportional electoral system allows for 176 members to be elected from eleven multi-member constituencies via a closed party-list system, with the remaining 289 members elected from single-member constituencies. As a result, a party needs 233 seats for an overall majority. The House of Councillors comprises 242 members, who are elected for six-year terms, with half of the chamber elected every three years. Unlike the House of Representatives, the House of Councillors cannot be dissolved by the Prime Minister. Each election therefore sees 73 members elected from the 47 prefectural districts via the single non-transferable vote, with the remaining 48 elected from a nationwide list by proportional representation with open lists. The minimum age to be elected to the Diet is 25 for members of the House of Representatives, and 30 for members of the House of Councillors.
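For anyone who likes to check the sums on the lower house, here is my own back-of-the-envelope arithmetic (not anything from the official tour literature):

$$176 \text{ (party-list seats)} + 289 \text{ (single-member seats)} = 465, \qquad \left\lfloor \tfrac{465}{2} \right\rfloor + 1 = 232 + 1 = 233.$$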
As in most other Parliamentary systems, it is the lower House that has the most power, and the Prime Minister and most Ministers are drawn from there. Whilst laws must be passed in both chambers before becoming law, the House of Representatives can override the House of Councillors by a vote of two-thirds. The lower chamber also has total authority over matters pertaining to the Budget, the ratification of foreign treaties, and the election of the Prime Minister. Whilst, in principle, any member of the Diet can be elected Prime Minister, in practice it always ends up being the leader of the majority party or coalition in the House of Representatives. Unlike other nations with ceremonial Heads of State, the Emperor does not even exercise nominal executive authority, but rather serves as a ‘symbol of the State’. All executive powers are vested in the Prime Minister and the Cabinet. Additionally, whilst new laws must be signed into law by the Sovereign, he does not even have the nominal power of veto. However, the Emperor still formally appoints the Prime Minister and the Chief Justice of the Supreme Court, dissolves the House of Representatives, receives Ambassadors, and awards Honours. The Emperor is the head of the oldest continuous monarchy in the world. Although the supposed founder of the Imperial Dynasty, Emperor Jimmu, is believed to be mythical, most historians consider the Emperor Ankō to be the earliest figure to be historically verifiable — he is believed to have reigned around 450 AD. When a new Emperor assumes the Chrysanthemum Throne, a new era name is announced. It is considered very disrespectful to refer to the Emperor by his personal name; he is only referred to by the name of his era after his death. The Emperor-Emeritus, Akihito, abdicated in 2019 and was succeeded by his son, Naruhito — an event unprecedented in modern times and one which required a new law to be passed. The new monarch will be known to history as Emperor Reiwa. This is officially explained as “beautiful harmony”, but it could also be read as the somewhat more ominous “ordered peace”. In a very on-brand innovation by the current Prime Minister, the new era name is the first one to be taken from Japanese literature, rather than Chinese. The Imperial Family Law was traditionally very complex but was simplified after the Meiji Restoration and the Post-War Constitution. Today, Japan follows a system of strict agnatic primogeniture wherein only men can inherit the throne. This, coupled with the abolition of the Imperial cadet-branches during the American occupation, and the prohibition on women remaining in the imperial family if they marry a commoner, means that the Imperial Family is very small. Only three people currently stand in the line of succession, of whom one, the Emperor’s uncle, Prince Hitachi, is in his eighties. Prior to the birth of Prince Hisahito of Akishino in 2006, discussions were held about reforming the succession law to prevent the dynasty dying out. The Prince’s birth — the first male child to be born into the family since 1965 — lessened these calls, but the issue may well return in the years to come. Despite this, at present, Prince Hisahito is second in line to the throne after his father. The Prime Minister heads a Cabinet comprised of Ministers of State, who are in turn supported by various Junior Ministers. A unique aspect of the Japanese executive is the power of the Cabinet Office, headed by the Prime Minister, which enjoys far more influence than its equivalent in the United Kingdom.
The day-to-day running of the Cabinet Office is dealt with by the Chief Cabinet Secretary, who is usually the ‘face’ of the Government. Since the war, the post has traditionally been seen as a stepping-stone to the Premiership.

Many junior positions are suitably cyberpunk-sounding…

Owing to regular reshuffles and the short tenures of Ministers, the power of the Ministries and the wider bureaucracy is considerable. Additionally, an independent body, the Board of Audit, reviews government expenditure and delivers an annual report to the Government. It is constitutionally protected from interference by either the Cabinet or the Diet. This post-war dominance by a single party has led to a system where the key characteristics of Japanese politics are largely synonymous with the interests and structures of one organisation. This is not to say that Japan lacks a meaningful or influential opposition outwith the Liberal Democratic Party (LDP), only that it has not been able to bring about the institutional reforms associated with opposition movements in other comparable countries. The key characteristics expressed by the LDP are:

Factionalism: Policy shifts and political changes are dominated by internal movements within the Liberal Democratic Party. Whilst factions are a common feature of many East Asian political parties, Japan is notable for having ones that are usually stable and institutionalised. The factions themselves wax and wane according to the fortunes of their leaders, but are usually considered to fall into two camps based on the 1955 merger between the economic liberals under Yoshida and the more revisionist conservatives under Hatoyama. In recent times, successful Prime Ministers such as Junichiro Koizumi and Shinzo Abe in his second term have been able to straddle the boundaries between the two, but have nevertheless had their powerbase focused in one camp over the other. Within these two broad strands, the factions are further developed around areas such as regional interests, think tanks, and ideological movements. The number of factions in the LDP usually ranges from five to eight. The current Prime Minister, Shinzo Abe, and most of his inner Cabinet, were members of the Seiwa Policy Research Council — strongly associated with the nationalist trend of the old Democratic Party. For clarity, whilst the official names of the factions are used internally, most media outlets refer to them under the names of their leaders. Whilst the power of these factions has greatly waned under the unprecedented dominance of the current Prime Minister, there is no sign yet of them fading away entirely.

Family Cliques: More than in many equivalent countries (aside, perhaps, from the Republic of Ireland), Japan is dominated by family links and cliques. Members of Parliament regularly inherit seats from their parents or other close relatives, which further feeds into the institutionalised nature of the factional system and, consequently, into senior political positions being dominated by family groups. The current Prime Minister, Shinzo Abe, is the grandson on his mother’s side of former Prime Minister Nobusuke Kishi (like him, an economic reformer and ultraconservative), whilst his Deputy Tarō Asō (a former Prime Minister himself) is the grandson of Shigeru Yoshida. Rising star and LDP policy wonk, Shinjirō Koizumi, is widely considered to be a Prime Minister-in-waiting and is himself the son of a former Prime Minister.
As a mark of his popularity beyond opinion polls, green tea rolls branded with his face became the second best-selling item in the National Diet Gift Shop in 2012, only narrowly beaten by sweet dumplings bearing the likeness of the Prime Minister.
Anti-Socialism: Despite harbouring genuine disagreements over policy issues ranging from privatisation, to constitutional reform, to foreign policy, the main guarantor of party unity remains opposition to a socialist government in Japan. Whilst that threat has declined with the end of the Cold War, threats (both real and imagined) from China and North Korea continue to dominate internal debates within the LDP. Factions that in other countries would naturally be separate parties continue to clutch one another close for fear of losing influence.
Corporatism: Linked to this is the revolving-door nature of Japanese politics, industry, and civil administration. Whilst the formal structures of the old Zaibatsu were destroyed after the end of the Second World War, the LDP is still dominated by old industrial families and the post-war Keiretsu ("system") structure of interlocking business interests, often centred on a large bank. Reforms in the 1980s and the Asian banking crisis at the turn of the century have weakened this system yet further, but it is still common for people to move from the family business, to a policy unit, to the Cabinet, and back again. Tarō Asō's family, for example, owns one of the largest mining companies in the country.
Rural Malapportionment: The final strand of the LDP's dominance has been the historic over-representation of rural constituencies over urban areas. Originally a natural response to the devastation and depopulation of the cities during the Allied bombing campaigns, this over-representation, aided by the divergence between the closed-list PR seats and the First-Past-The-Post single-member districts, has led to a situation where country voters can have almost three times as much influence as those living in the inner cities. A key element of the LDP's historical political success, therefore, has focused on favouring the countryside over more urban areas in terms of welfare payments and agricultural subsidies.
Our excellent tour lasted approximately one hour and took us around the majority of the public areas of the National Diet Building. Unsurprisingly, this is the third structure to house the post-Meiji parliament, the previous two having burnt down in 1891 and 1925 respectively. The current building, completed in 1936, is an austere but interesting combination of Western and Japanese styles. A few years ago, I acquired an excellent book on the layouts of Parliamentary buildings by a Dutch firm, who have also produced a companion website. As a write-up in Wired notes, the layout of a legislative chamber is often a good indication of that nation's approach to politics.
The vast hemicycle used by the House of Representatives (the style is similar in the House of Councillors) implies a vaguely consensus-based approach to politics, but given the aforementioned dominance of the Liberal Democratic Party, the Diet is not known as the most adversarial of debating Chambers. That said, fights have occasionally broken out, especially on matters pertaining to constitutional reform.
I had totally missed that most of the opposition parties have started to caucus with one another, so we're clearly nearly ready for yet another merger of everyone from the liberals to the far-right, united purely on the grounds of "not liking the LDP".
"…and the Independent."
I didn't have time to ask what the numbers in parentheses refer to, but I believe they indicate the number of female members of each parliamentary grouping. Despite considerable steps forward in recent years, politics in Japan remains very much a man's game.
Our tour concluded outside, with a brief walk through the arboretum on the side of the Diet complex facing the Imperial Palace. Rather sweetly, the trees there have each been donated by one of the Prefectures. I was delighted to see Mie's, a Jingu cedar. The city of Tsu, the prefectural capital, had very kindly given each member of the group a pen made from the same wood.
There was, of course, time for a quick photo opportunity. I think I'd probably be a Kadet or a Communist.
One thing I was delighted to have discovered over the past year is that I can absolutely rock a collarless shirt. This is one of a tiny number of positive fashion choices for me, and I now intend to wear them wherever possible.
Evening was beginning to fall as we were escorted out of the Diet complex, which sadly meant that I was unable to visit the gift shop. This was one of my few regrets of the whole trip, as I had very much wanted to stock up on some politics-themed confectionery, such as the aforementioned Koizumi-rolls.
I instead decided to cap off my final night in Japan by visiting Tokyo Tower. At over 330 meters in height, it was Japan's tallest structure until it was roundly trounced by the Skytree, which is nearly double its height. Despite this, the Skytree is unloved, whilst its Eiffel Tower-inspired predecessor has become almost as symbolic of Japan as Mount Fuji. I had intended to walk from the Diet to the Tower, but a long day had sapped most of my energy, so after getting slightly lost around the new development by the Imperial Hotel (which naturally included bumping into the ubiquitous kaiju statue in Godzilla Square), I decided to take the subway to shave off the last two miles of walking.
As another example of Japan's transport being excellent, but not quite as efficient as you would expect, Tokyo has two subway systems. The larger and more extensive Tokyo Metro was privatised in 2004, whilst the Metropolitan Government owns and manages the Toei Subway. The two systems are linked, but not fully integrated, which also means that you have to pay a small surcharge if you transfer between them. For someone who is used to contactless payments and effortlessly changing between the Tube, Overground, and even National Rail, it is one of the many things that makes Japan seem a little more antiquated than you'd expect. Nevertheless, I was glad to see Tokyo Tower, lit up majestically in the winter night. I wandered around the base, but declined the opportunity to climb it. I was hungry.
My final dinner in Japan was at a small but highly recommended katsu restaurant (one I later found to be part of a small chain) in the northern bit of Chiyoda, near Senshu University and a short walk from Tokyo Dome. For less than £10, I had two huge pieces of breaded pork, prawn, tempura vegetables, miso, and a beer. I really do get the impression that Japan can be a much cheaper holiday destination than people imagine it to be.
Sated, I had just about enough energy to spend some of my last remaining Yen on souvenir KitKats before having an early night. After all that worry about getting a Skyliner ticket, it seemed foolish to risk over-sleeping. I woke up the next day at 6:30, and checked out by 7. Even in the early morning rain (the shine shine monk I had seen the other day had clearly run out of magic), Ikebukuro Station was busy, and I was worried that I'd struggle to get on the train for the short hop to the Skyliner terminal at Nippori. Fortunately, the carriage was busy, but not rammed, although I got the impression that taking a commuter train with two huge suitcases was not really the done thing. Nevertheless, I arrived without feeling too shamefaced, and the ticket officer even had the good grace to change my Skyliner reservation to an earlier booking.
Thanks to jetlag and excitement, I hadn't realised how far out Narita Airport is from Tokyo proper. Even with our sleek direct connection, it took the best part of 45 minutes to arrive at the terminal station. In the 1980s, there had been an aborted effort to build a Shinkansen link to the centre, but the plans fell through thanks to a combination of financing problems and local opposition. Now, only a short concrete spur exists, which is just about visible from the Skyliner line.
Check-in was routine, although I had to transfer nearly three kilograms of weight from my big suitcase to my carry-on one. Considering that I had deliberately underpacked, I had clearly been on quite a spending spree, but everything was neatly reconciled in the end. The flight back was routine, although KLM had the same (poor) selection of movies that they had had on the outbound flight. After ignoring most of Interstellar (terrible film, don't bother) and reading Shoshana Zuboff's excellent (but very pessimistic) book on surveillance capitalism, I largely contented myself with eating my own weight in complimentary stroopwafels. After a fairly stress-free transfer at Schiphol, I arrived at London City with a spring in my step.
It was my first visit to Japan. I don't think it will be my last.
https://medium.com/@jacktindale/re-tindale-starting-life-in-toyko-and-tsu-from-zero-d556cd539488
['Jack Tindale']
2020-10-05 12:15:34.342000+00:00
['Sustainability', 'Industrial Strategy', 'Local Government', 'Tokyo', 'Japan']
Love Romantic Quotes In Hindi English
Love Romantic Quotes In Hindi English
You are looking for love romantic quotes in Hindi, and I have the best collection for you: love romantic quotes in Hindi for your girlfriend, status for a lover, romantic status in Hindi, status for a lover in Hindi, love status in Hindi, lover WhatsApp status, WhatsApp status for a lover, love status in English, status for a lover in English, love WhatsApp status, sad love status, love status 2021, love attitude status, love status download, WhatsApp status for a lover in Hindi, love feeling status, and love status Shayari. I hope you like these and share them with family and friends.
Love Romantic Quotes In Hindi English
In love, as in worship, one should keep one's intentions pure ❤❤
I have become hopelessly used to even your 👧 anger 😠; love ❤ feels incomplete when you are not ❌ angry…!!
The world's biggest 😅 claim: get the boy 😈 married 👫 and he will mend his ways…!!
Even in a calm setting 😍 she causes a #storm 🙈; my 🙋 girl looks wonderful 👰 even when she is angry 🙎…!!
When your beloved is before you and your eyes stay lowered in shyness, that too is a beautiful gesture of love ❤❤
Read More…
https://medium.com/@lovestatustime/love-romantic-quotes-in-hindi-english-82e1ec0c56ce
['Love Status Time']
2020-12-21 10:30:44.202000+00:00
['Hindi', 'English', 'Quotes', 'Love Romantic']
If this pandemic forces our businesses to close — It stands to reason the government should shut down as well
It seems to me this pandemic has not affected everyone equally. The people still working, including essential workers, are on the job and getting paid. The people serving in our government, federal, state, and local, are all getting a paycheck. However, millions of workers in the restaurant and service industry are told they must stay home. They are prevented from making an income to feed their families. Next week, our government may face a shutdown. Unless the stimulus and funding bill is passed and signed by the president, our federal government will run out of money. They won't be able to borrow any more money. We already owe $27.5 trillion and soon will be unable to print more money. This may be a good thing. It is time these elected officials and federal workers live like average Americans. They need to experience the recession they helped create. Lose their jobs or get furloughed. How would they handle it? We have had many recessions in the past. These workers never had to miss one paycheck. Anyone can spend other people's money. Anyone can be generous with our tax dollars. Free healthcare, why not? Free college tuition, great. Free housing, and food, and heating… It is just humane and a right. After all, aren't we the richest country in the world? Money does grow on trees. Our government officials seem to believe it. Let the party begin. I support the shutdown. It is long past time for them to feel the pain. What will they do when they have no more money to piss away?
https://medium.com/@jackclee99/if-this-pandemic-forces-our-businesses-to-close-it-stands-to-reason-the-government-should-shut-40352669c2a7
['Jack C Lee']
2020-12-27 06:43:27.178000+00:00
['Shutdown', 'Congress', 'Debt Clock', 'Stimulus', 'Deficit']
See how VR will make an impact on the future of marketing
Virtual Reality (VR) has taken a very prominent place over the past few years. Many products and services built on top of VR have already been developed and are in use. The best example is Pokémon Go; we all know how much attention it attracted in the gaming world last year.
CREDIT: Andrew Rybalko / Shutterstock
Many companies have already created special VR gaming toolkits; you can take a glimpse at them here. We all know that new marketing platforms, technologies, or channels don't come around often. It is certain that VR is going to have an impact on the work of digital marketing; let's see how.
So, what do we mean when we say Virtual Reality (VR)? Essentially, virtual reality is a computer-based simulation of an interactive environment. By using electronics like headsets, glasses, and phones, it feels real and physical. But it isn't. Here are the Wikipedia details on VR. Using VR you can do almost anything virtually: for example, playing games on a new level, going skydiving, or exploring and travelling the virtual globe and beyond. Instead of watching a show or film about your favorite city, you can be there in your headset. If you don't have the money to travel, you can feel like you're walking in Paris from the comfort of your own home. Big companies like Facebook, Apple, Google, Samsung, Sony & HTC are investing heavily in VR and building different products around it. Currently, virtual-reality technologies are based heavily on headset-style products that allow a user to quickly immerse themselves in virtual worlds or games. But virtual reality is becoming more diverse every year as new technological advances are discovered.
Virtual reality's growth
Since so many companies are building new products & games around VR, its market is seeing explosive growth. In fact, global virtual-reality revenues are hitting the $7-billion mark in 2017. We're already consuming virtual-reality-based content with head-mounted displays. As more companies produce more products and ways to consume VR content, the user base will continue to grow. By 2020 the VR market is expected to see huge growth in terms of users. On top of that, investors are investing heavily in VR & AR technologies. So what does all this mean for marketers? It means a few different things: a new advertising platform, more e-commerce sales, local business traffic, and different ways of sharing content. According to TechCrunch, long-term business models of virtual reality will focus on the following revenue centers:
VR Revenue
Pokémon Go was a worldwide sensation. You could see people playing it almost anywhere you went. So, what does a gaming app have to do with marketing? How did Pokémon Go's virtual-reality game impact the marketing world? Local businesses started to cash in on the content. Businesses could essentially buy ad space that would attract customers to their business and market to those who were out on a Pokémon "hunt." The user could find certain Pokémon at local businesses.
PokemonGo for business
According to many reports, some local businesses have seen growth of up to 1,600% using this game.
VR will bring better content marketing
Virtual reality opens the doors to more interactive, creative, and engaging content and ads. With almost limitless possibilities, VR will change marketing by altering the content marketing game. Many businesses have started leveraging this technology to give better customer service, like IKEA, a home-furnishing store that has massive shops in dozens of countries worldwide.
And they've utilized virtual-reality marketing tactics to bring in tons of new sales. Here is how they are doing it. Like this, many businesses have already started building their own tools and technologies to serve their customers better with VR content directly.
Conclusion
Virtual reality is growing fast. More companies are producing VR-based products, and more firms are investing in the development of the technology. Pokémon Go already showed marketers that VR can be huge for your brand. Instead of blog posts, we will start to see virtual experiences that promote high levels of engagement and interaction. Virtual reality seems like it's here to stay, and it will be a key player in the marketing world for years to come.
https://medium.com/yeello-digital-marketing-platform/see-how-vr-will-make-an-impact-on-future-of-marketing-ea92555d55eb
['Sonali Hirave']
2017-11-29 06:43:38.670000+00:00
['Content Marketing', 'Digital Marketing', 'Virtual Reality']
A song I’m working on
So I had an idea for a song. I have most of the lyrics, which you will find down below. I am working on a melody at the same time, which seems to want to come out as a mix of 'Always a Woman' by Billy Joel, 'Learn Me Right' from 'Brave', and 'If I'm Being Honest' by Dodie. We'll see how it turns out, but for now enjoy the lyrics!
~~~~~~~~~
I'll tell you a story
I found by myself
An old dusty thing I found high on the shelf
I'm not very tall
And I know I might fall
But if I didn't try you would hear just nothing at all
Oh listen to my journey
Listen to the ways I took to find you
Oh listen to my story
Listen to me and I'll tell you the ways that I love you
I've found you a story
All on my own
But you are now in it and I can see now that I've grown
You finished the story
You wrote me the end
And then you began it anew with you as my best friend
And now I can tell you a tale
Of adventures so far and the ships we're ready to sail
But where
Oh where will it go
If I had my way I'd make it that we'd never know
Oh we'd never know
We'll just have to flow
Oh listen to my journey
Listen to the ways I took to find you
Oh listen to my story
Listen to me and I'll tell you the ways that I love you
~~~~~~~~~~
https://medium.com/@asiegelster/a-song-im-working-on-63fe1c446a0
['Abigail Siegel']
2020-12-04 02:27:05.831000+00:00
['Poem', 'Songwriting', 'Songs', 'Lyrics', 'Poetry']
WHAT THE LEGACY PROJECT *CAN’T* BE
This goes for all my coaching programs…
IT CAN'T BE:
It can't be just a series of "aha!" moments.
It can't be just an interesting learning process.
It can't be just fun or entertaining.
It can't be just temporary bonding and pleasing rapport with others.
It can't be something to engage in just for now, just to occupy your mind at the moment.
It can't be frivolous or trivial.
It can't be BS.
IT MUST BE:
It must be real change.
It must be foundational.
It must be lasting.
https://medium.com/@josephrivertimmins/what-the-legacy-project-cant-be-de78b8065f8c
['Everything You Do Is Amazing']
2020-12-23 23:56:58.875000+00:00
['Change', 'Success', 'Coaching']
Multivariate time series forecasting
The long version
The short version was short, but the long version can be really long, depending on where you want to stop. You can start with converting the time series data to a ts object, do all sorts of time series EDA (exploratory data analysis), and go on to tuning and evaluating model performance in as many different ways as you want, based on project objectives.
1) Import libraries
The initial set of libraries needed remains the same as in the "short" version, but we are adding a plotting library, matplotlib, to visualize the time series object.
# loading essential libraries first
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.api import VAR
import matplotlib.pyplot as plt
2) Import, inspect & wrangle data
After importing data you should go through your usual data wrangling ritual (selecting columns of interest, renaming, summary statistics etc.). One essential step is to find out whether there are NA values; if so, you need to deal with them (see here). As part of data wrangling, you might also want to slice/transform data in different ways for visualization purposes.
# data
mdata = sm.datasets.macrodata.load_pandas().data
df = mdata.iloc[:, 2:4]
df.head()
3) Visualize
If you want to do EDA of time series data you have some additional work to do, such as transforming the data into a time series object. But at a minimum, you may want to visualize the data to see what the trend lines look like and how they compare with each other. It gives you the intuition needed for model evaluation.
plt.plot(df)
4) Test for causality
You would want to see if there's a correlation between the variables. For that you can run Granger's causality test. Although the name suggests otherwise, it's really not a test of "causality": you cannot say whether one variable is causing the other, only whether there is an association between the variables.
# import for Granger's Causality Test
from statsmodels.tsa.stattools import grangercausalitytests

granger_test = grangercausalitytests(df, maxlag=2, verbose=True)
granger_test
5) Split data
As with most machine learning workflows, it's a good idea to split the data into training and testing sets (we will come back to df_test in a short sketch at the end of this walkthrough).
nobs = 4
df_train, df_test = df[0:-nobs], df[-nobs:]
6a) Check for stationarity
For time series modeling, data needs to be stationary: if there is a trend in the data, you need to get rid of it. To check whether data is stationary there is a test called the Augmented Dickey-Fuller (ADF) test.
# Augmented Dickey-Fuller (ADF) test / unit root test
from statsmodels.tsa.stattools import adfuller

def adf_test(ts, signif=0.05):
    dftest = adfuller(ts, autolag='AIC')
    adf = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '# Lags', '# Observations'])
    for key, value in dftest[4].items():
        adf['Critical Value (%s)' % key] = value
    print(adf)
    p = adf['p-value']
    if p <= signif:
        print(" Series is Stationary")
    else:
        print(" Series is Non-Stationary")

# apply the ADF test on each series
adf_test(df_train["realgdp"])
adf_test(df_train["realcons"])
6b) Make data stationary
If the data is not stationary you can make it so in several ways, but the simplest one is taking a first difference. After taking the first difference you need to go back to the previous step and test again whether the data is now stationary. If not, a second difference may be necessary.
# 1st difference
df_differenced = df_train.diff().dropna()

# stationarity test again with differenced data
adf_test(df_differenced["realgdp"])
7) Modeling
You can now instantiate the model with VAR() and then fit it to the first-differenced data. After running the model you can check the summary results below.
# model fitting
model = VAR(df_differenced)
results = model.fit(maxlags=15, ic='aic')
results.summary()
8) Forecasting
Now that you have your model set up, it's time to play with it and do an actual forecast. Here I am asking the model to forecast 5 steps ahead. The model returns an array of 5 forecast values for both variables.
# forecasting
lag_order = results.k_ar
results.forecast(df.values[-lag_order:], 5)
9) Plotting
It is now possible to plot the forecast values along with the associated standard errors.
# plotting
results.plot_forecast(20)
10) Evaluating
This is an extra step to evaluate the forecasting model using the Forecast Error Variance Decomposition (FEVD) method via the fevd() function.
# evaluation
fevd = results.fevd(5)
fevd.summary()
11) Inverting
One final step remains. You didn't fit the model to the original data, because you had to transform it (first difference) to make the data stationary in step 6b. So the forecast results need to be inverted back to the original scale.
# forecasting on the differenced scale
pred = results.forecast(results.y, steps=nobs)
df_forecast = pd.DataFrame(pred, index=df.index[-nobs:], columns=df.columns + '_1d')
df_forecast.tail()

# inverting transformation
def invert_transformation(df_train, df_forecast, second_diff=False):
    """Revert back the differencing to get the forecast to original scale."""
    df_fc = df_forecast.copy()
    columns = df_train.columns
    for col in columns:
        # Roll back 2nd Diff
        if second_diff:
            df_fc[str(col)+'_1d'] = (df_train[col].iloc[-1] - df_train[col].iloc[-2]) + df_fc[str(col)+'_1d'].cumsum()
        # Roll back 1st Diff
        df_fc[str(col)+'_forecast'] = df_train[col].iloc[-1] + df_fc[str(col)+'_1d'].cumsum()
    return df_fc

# show inverted results in a dataframe
# (only a first difference was taken above, so second_diff=False)
df_results = invert_transformation(df_train, df_forecast, second_diff=False)
df_results.loc[:, ['realgdp_forecast', 'realcons_forecast']]
Parting notes
Time series data analysis is a fundamental part of business decision-making, so decision-makers and data scientists/analysts can benefit from having some degree of familiarity with the mechanics of forecasting models. The article first introduced the concept of multivariate time series and how it is used in different industries. Then I provided a short Python implementation as a way to build intuition for a more complex implementation using a machine learning approach. For any related questions I can be reached via Twitter.
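One loose end from the walkthrough above: step 5 sets aside df_test, but the inverted forecasts are never compared against it. Below is a minimal sketch, not part of the original article, of how that comparison could look; forecast_accuracy is a hypothetical helper, and the snippet assumes the df_results and df_test frames produced in the steps above.
# a minimal sketch (not from the original article): compare the inverted
# forecasts with the held-out test rows using simple error metrics
import numpy as np

def forecast_accuracy(forecast, actual):
    """Return a few common point-forecast error metrics."""
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mae = np.mean(np.abs(forecast - actual))               # mean absolute error
    rmse = np.sqrt(np.mean((forecast - actual) ** 2))      # root mean squared error
    mape = np.mean(np.abs((forecast - actual) / actual))   # mean absolute percentage error
    return {'mae': mae, 'rmse': rmse, 'mape': mape}

# assumes df_results (inverted forecasts) and df_test (held-out rows) from above
print(forecast_accuracy(df_results['realgdp_forecast'], df_test['realgdp']))
print(forecast_accuracy(df_results['realcons_forecast'], df_test['realcons']))
Lower values are better on all three metrics, and MAPE in particular makes the two series comparable despite their different scales.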
https://towardsdatascience.com/multivariate-time-series-forecasting-653372b3db36
['Mahbubul Alam']
2020-04-04 17:03:43.059000+00:00
['Time Series Forecasting', 'Data Science', 'Forecasting', 'Machine Learning', 'Vector Autoregression']
The Good Fight 5x1 — Series 5 Episode 1 (Full Episode)
➕ Official Partners "TVs" TV Shows & Movies ● Watch The Good Fight Season 5 Episode 1 Eng Sub ● The Good Fight Season 5 Episode 1 : Full Series
The Good Fight — Season 5, Episode 1 || FULL EPISODES: Picking up one year after the events of the final broadcast episode of "The Good Wife", an enormous financial scam has destroyed the reputation of a young lawyer, Maia Rindell, while simultaneously wiping out her mentor and godmother Diane Lockhart's savings. Forced out of her law firm, now called "Lockhart, Deckler, Gussman, Lee, Lyman, Gilbert, Lurie, Kagan, Tannebaum & Associates", they join Lucca Quinn at one of Chicago's preeminent law firms.
The Good Fight 5x1 > The Good Fight S5xE1 > The Good Fight S5E1 > The Good Fight TVs > The Good Fight Cast > The Good Fight Online > The Good Fight Eps.5 > The Good Fight Season 5 > The Good Fight Episode 1 > The Good Fight Premiere > The Good Fight New Season > The Good Fight Full Episodes > The Good Fight Season 5 Episode 1 > Watch The Good Fight Season 5 Episode 1 Online
Streaming The Good Fight Season 5 :: Episode 1 S5E1 ► ((Episode 1 : Full Series)) Full Episodes ● Exclusively ● On TVs, Online Free TV Shows & TV The Good Fight ➤ Let's go to watch the latest episodes of your favourite The Good Fight. ❖ P.L.A.Y ► https://cutt.ly/En7VYtV
⭐ A Target Package is short for Target Package of Information. It is a more specialized case of an Intel Package of Information, or Intel Package.
✌ THE STORY ✌
Jeremy Camp (K.J. Apa) is an aspiring musician who wants only to honor his God through the energy of music. Leaving his Indiana home for the warmer climate of California and a college or university education, Jeremy soon comes across Melissa Heing (Britt Robertson), a fellow university student he notices in the audience at an area concert. Falling for cupid's arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it will create an awkward situation between Jeremy and their mutual friend, Jean-Luc (Nathan Parson), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship comes to a halt when life-threatening news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremy's love for her, and the couple eventually marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and suffering through her illness, with Jeremy questioning his faith in music, in himself, and in God.
✌ STREAMING MEDIA ✌
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider.
The verb 'to stream' refers to the process of delivering or obtaining media this way. Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing the delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content. And users lacking compatible hardware or software systems may be unable to stream certain content. Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user may use their media player to start playing digital video or digital audio content before the complete file has been transmitted. The term "streaming media" can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are considered "streaming text".
This brings me around to discussing I Still Believe, a film release of the Christian faith-based variety. As is almost customary, Hollywood usually generates two (maybe three) films of this variety within its yearly theatrical release lineup, with the releases usually arriving around spring and/or fall respectively. I didn't hear much when this movie was initially announced (it probably got buried underneath all of the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film's trailer premiered, which looked somewhat interesting if you ask me. Yes, it looked like the movie was going to have the typical "faith-based" vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe premiered for quite some time, so I continued seeing it most of the time when I visited my local cinema. You can sort of say that it was a bit "ingrained in my brain". Thus, I was a little bit keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven't had the time to do my review for it until now. And what did I think of it? Well, it was pretty "meh". While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced in its narrative execution and character development. The religious message is plainly there, but it takes far too many detours and fails to focus on certain aspects that weigh down the feature's presentation.
✌ TELEVISION SHOW AND HISTORY ✌
A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. TV shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings. A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure.
A television series is usually released in episodes that follow a narrative, and is usually split into seasons (US and Canada) or series (UK): yearly or semiannual sets of new episodes. A show with a limited number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a "special". A television film ("made-for-TV movie" or "television movie") is a film that is initially broadcast on television rather than released in theaters or direct-to-video. Television shows may be viewed as they are broadcast in real time (live), be recorded on home video or a digital video recorder for later viewing, or be viewed on demand via a set-top box or streamed over the internet. The first television shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower, starting in the 1930s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff's famous introduction at the 1939 New York World's Fair in the US spurred a rise in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name "Mr Television" and demonstrating that the medium was a stable, modern form of entertainment which could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman's speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T's transcontinental cable and microwave radio relay system to broadcast stations in local markets.
✌ FINAL THOUGHTS ✌
The power of faith, love, and affinity take center stage in Jeremy Camp's life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pinpointing his early life along with his relationship with Melissa Heing as they battle hardships and hold to their enduring love for one another through difficult times. While the movie's intent and its thematic message of a person's faith through trouble are indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, overly preachy and cheesy dialogue moments, over-utilized religious overtones, and mismanagement of many of its secondary and supporting characters. If you ask me, this movie was somewhere between okay and "meh". It was definitely a Christian faith-based movie endeavor (from beginning to end) and definitely had its moments, but it failed to resonate with me, struggling to locate a proper balance in its undertaking. Personally, regardless of the story, it could've been better. My recommendation for this movie is an "iffy choice" at best, as some will enjoy it (nothing wrong with that), while others will not and will dismiss it altogether.
Whatever your stance on religious faith-based flicks, I Still Believe stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translated into a cinematic endeavor. For me personally, I believe in Jeremy Camp's story and message, but not so much the feature. FIND US: ✔️ https://cutt.ly/En7VYtV ✔️ Instagram: https://instagram.com ✔️ Twitter: https://twitter.com ✔️ Facebook: https://www.facebook.com
https://medium.com/@the-good-fight-s05e01-ep-1/the-good-fight-5x1-series-5-episode-1-full-episode-fcf895bbf905
['The Good Fight', 'Previously On']
2021-06-24 07:31:09.985000+00:00
['Politics', 'Technology', 'Covid 19']
Pepsi….The Only Choice for African Americans? — Mistakes Were Made
Most people recognize Coca-Cola (Coke) and Pepsi as the two leading soft drinks in the world. Though Coke seems to be the clear number one favorite, there are people like me who prefer Pepsi over Coke. Both brands use diverse marketing strategies today to appeal to ethnicities of all types worldwide. These marketing strategies weren't always the case, as the two brands went in two different directions in their earlier days. Coke was mainly marketed to the White middle class in its earliest days. In the 1920s and 30s, Coke flat out ignored the African American market. Coke's promotional material appeared in segregated locations that served both races (with all white models, of course), but rarely in those that catered to African-Americans alone.
Where Coke ignored the African American market, Walter S. Mack, president of Pepsi, saw the potential of a vast untapped market. Pepsi in fact established an all-Black sales team in 1940. Mack hired Edward Boyd in 1947 to help market to African Americans. Boyd's key idea was to create advertisements that showed African Americans as normal middle-class people. One ad featured a smiling mother holding a six-pack of Pepsi while her son (portrayed by the future Secretary of Commerce Ron Brown) reaches up for one. Another ad profiled twenty prominent African Americans such as Nobel Peace Prize winner Ralph Bunche. Boyd led a sales team composed entirely of African Americans around the country to promote Pepsi. The team endured a great deal of discrimination, as racial segregation and Jim Crow were very much alive and kicking. Sadly, the team endured insults from Pepsi coworkers and threats from the Ku Klux Klan.
Boyd's team also turned racism against its rival Coke. Boyd attacked Coke's reluctance to hire Blacks and also the political support of Coke's chairman for the segregationist Governor of Georgia, Herman Talmadge. Because of Boyd and his team, Pepsi's market share shot up dramatically. After the team visited Chicago, Pepsi's share in the city overtook that of Coke for the first time.
All of this focus on the African American market didn't sit well with everyone within the company and its affiliates. It scared them that the focus on African American consumers would push all the good White people away. In a meeting with 500 bottlers of Pepsi, Mack told all in attendance that, "We don't want it to become known as the nigger drink." Those words didn't sit too well with Boyd and his team. Mack left Pepsi in 1950, support for the Black sales team faded, and Boyd was let go.
Mistakes were Made: Why did Mack, who favored progressive causes and was the key factor in the African American marketing push, say what he said? Mack was facing the same dilemma that many politicians and CEOs face today: pleasing and reassuring one's masters. If the bottlers left Pepsi to go to Coke because they feared Pepsi was too Black, it would screw Pepsi. Mack had to go against his morals, go out there and "perform", and give the people what they wanted so they would stay with Pepsi.
Look at the modern-day case of Kelly Loeffler. Loeffler is a minority owner of the Atlanta Dream, a WNBA team. Of the Dream's 10 players, 8 are African American, and the league as a whole is 88% African American. With these percentages, it's only right the WNBA would pay homage to the Black Lives Matter movement (BLM). Loeffler wrote a letter to the WNBA stating her opposition to the Black Lives Matter movement and her opposition to the WNBA's plans to honor the movement.
In Loeffler's words, honoring Black Lives Matter "undermines the potential of the sport and sends a message of exclusion." Let's skip over the fact that Loeffler's statement is ridiculous, ignorant, and tone-deaf. There's another fact about Loeffler: she's one of the sitting U.S. Senators for the state of Georgia. She also is up for election this November, in a red state. Loeffler has voted with President Trump on legislative issues 100% of the time. Are you starting to get the picture? Unlike Mack, I believe Loeffler believes the junk she spews. Her public remarks against BLM are, just like Mack's speech, plain political pandering. That statement was for the Republican voters in Georgia, and to signal to the White House and the public that she is still down for the cause. Her coming out in favor of her team's players and their support of BLM would not sit well with the core of supporters she needs to keep her Senate seat. It also wouldn't sit well with Trump, who has demonized BLM many times himself.
It's an understatement to say the 1940s were just different times from today, but couldn't someone from Pepsi have come up with the bright idea to market to Blacks and Whites equally? Maybe that was too much forward thinking. Mack had the brilliant idea to tap into a market that was being ignored, but still had to reassure his less enlightened colleagues with a hateful speech that undermined his vision. Even if you're a CEO or U.S. Senator, you still have to appease and answer to someone.
https://medium.com/swlh/pepsi-the-only-choice-for-african-americans-mistakes-were-made-5c38127ae1de
['Marlon Mosley']
2020-08-19 05:21:11.559000+00:00
['Marketing', 'History', 'Blm', 'Racism', 'Business Strategy']
They were here before Us
This poem focuses on the ever-present problem of climate change and the lack of response to it by past governments.
A great white mass sits over the Alps,
It covers the Antarctic, reflecting the sun,
Yet it is under threat by the heat of collapse,
Till now our Empires did not see the pointed times.
We are in a midst of great change,
As the food-chains of the ocean crumble, (one by one),
This truth we try to outrun as for it was a joke of hoaxes,
Yet we are not laughing now…
https://medium.com/daily-poetry-meg/they-were-here-before-us-bcc5acc7d47
['Hunor Deak']
2020-11-27 22:05:53.572000+00:00
['Poem', 'Climate Crisis', 'Earth', 'Poems On Medium', 'Climate Change']