How I Learned Vue.js and Vuex in 14 Days (Caraku Mempelajari Vue js dan Vuex Dalam 14 Hari)
Easy read, easy understanding. Good writing is writing that can be understood easily.
https://medium.com/easyread/caraku-mempelajari-vue-js-dan-vuex-dalam-14-hari-9b013361af88
['Philip Purwoko']
2020-10-21 12:53:53.295000+00:00
['Web Development', 'Vuex', 'Vuejs', 'Programming', 'Vue']
The Fairy Tale Guide to Licensing
The Fairy Tale Guide to Licensing Stephanie Hsieh began her career as an IP attorney helping a wide range of clients to find and negotiate their ‘happily ever afters,’ and now does the same for her company, Meditope. In this article, she shares some tips on how to avoid getting eaten by a wolf or stuck with a frog. Photo by Patrick Tomasso on Unsplash Licensing intellectual property can provide core revenue to your business. Everyone dreams of the fairytale deal … the brand name “Prince Charming” to fund your “happily ever after”! However, to truly maximize your IP’s value, it’s important to view any partnership as a long-term relationship. Don’t underestimate the challenges of finding the right partners and structuring a healthy marriage … I mean deal! “Prince Charming” doesn’t grow on trees! Have patience when searching for ‘eligible’ partners. Beyond brand names and deep pockets, make sure you’re truly compatible: Do they understand your business? Do they have the experience and reputation to help you advance? Do they share the same values? Is there mutuality? Diving in too quickly without really knowing your partner, at best, will waste time and, at worst, cause irreparable damage to your business. It’s easy (I know, I’ve done it!) to be lured by a sexy upfront payment, especially when you need the cash. Many times, my ‘Prince Charming’ ended up being a wolf in disguise. Thinking they could distract me with upfront cash, partners have made overreaching “land grabs” for IP; sometimes claiming ownership of all future IP or demanding exclusive rights — rights that I might need in the future or that other partners might want … and give more value for. Don’t forget, you need to be their “Prince Charming” too! Otherwise, every discussion and interaction will be unbalanced and haunt you “from this day forward”!
“Happily ever after” takes careful thought and planning. Both sides need a shared vision of ‘happily ever after’. Define it and agree to a plan for getting the deal done at the outset. Know your ultimate goal, understand your limits, but also know theirs: Are your goals, objectives and timelines aligned and complementary? Quickly assess whether you both share how the deal will get done and implemented: What is the decision-making process for you? For them? Identify the decision-makers and ensure they’re engaged in the project. Map out and agree to a timeline, including diligence, negotiations and execution of the definitive agreement. Be realistic and transparent with each other: Are there competing priorities that might interfere? Sufficient bandwidth? Tight timelines? Budget identified and secured? Don’t be afraid to ask what they see as obstacles to executing a deal. Beware of ever-shifting plans. I’ve been strung along by shifting plans — it’s a red flag for sure! Don’t be afraid to walk away. The sooner you know there’s no deal, the more time you’ll have to find the real Prince Charming.
There’s no “magic” to “magic beans”. People will throw around jargon or, worse (my personal favorite), tell you they can’t change ‘the template’. Understand the other side’s intent — what are they trying to achieve with a given term. If you know their intent, you are freed to be creative and tailor terms to the situation — rather than just accept the ‘magic’ of the template. When I review agreements I think through the value of every term, not just to us, but also the value of us to our partner. Absolutely everything has value: an option to license, exclusivity, scope or field of use … and the one most people forget (or undervalue), time. If you ‘give’ someplace, you should be able to ‘get’ someplace else — finding balance and maintaining mutuality. 
Recently, we were able to gain some early concessions by explaining the reputational value to us of a joint press release with our major global pharma partner. Truly understanding what ‘happily ever after’ looks like to the other side, and what their objectives are, will free you to think creatively and negotiate more effectively — tailoring the ‘magic template’ to the situation. Find mutuality and people you can work with (maybe even enjoy working with!) to ensure success. If the vision for the future keeps changing and/or they (or you) get hung up on a specific term, then reread point 1: maybe this ain’t Prince Charming and it’s time to kiss a few more frogs… Stephanie Hsieh is CEO/President of Meditope Biosciences, Inc., an early-stage immuno-oncology company based in Los Angeles. She is an industry veteran, having enjoyed close to 30 years in biotech/biopharma. Stephanie began her career as a patent litigator, where she represented a wide range of clients from Fortune 50 companies to biotech start-ups to major academic/research institutions. Prior to taking the helm at Meditope, Ms. Hsieh worked in senior management roles, leading cross-functional teams to develop and execute business and new product strategies built heavily upon the intersection of the patent laws and regulatory landscape. She graduated Phi Beta Kappa and magna cum laude from Wellesley College, majoring in Biological Chemistry. She also holds a J.D. from Columbia Law School, graduating as a Harlan Fiske Stone Scholar, and an M.B.A. from Stanford’s Graduate School of Business.
https://medium.com/been-there-run-that/the-fairy-tale-guide-to-licensing-600cc84f80c7
['Springboard Enterprises']
2020-10-23 13:53:13.904000+00:00
['Negotiation Tips', 'Strategy', 'Licensing', 'Women', 'Entrepreneurship']
The difficult art of being a single interaction designer for 150 people
The difficult art of being a single interaction designer for 150 people If there is anyone else in this situation, I really want this person to know that he/she is not alone (come here, give me a hug). I worked for a company that develops software for the healthcare segment. It’s a great job. Helping people who care for the health of others do a better job and feel they have fulfilled their mission by completing their 12-hour shift (sometimes more). Thinking about things like that gives me enormous satisfaction. There are 300 system developers / analysts / business analysts / project managers for two designers where I work. Dividing into equal parts, that’s 150 for each of us to manage. It’s hard work, believe me. There are people who think our job is to make keynotes. There are people who think there is a button in Photoshop to do the UI/UX of a system. There are people who think that because we talk to the customer we are trying to steal their jobs. Some people think our job is to draw icons. There are those who think we are geniuses and that in five minutes we will solve their problem just from a story they have told us. There are people who think that we have to support their bad idea just because they are too lazy to work out the optimal solution. There are people who think that what we do is a disruption, since what matters is that the system is working (whatever that means). But stay calm, you do not have to despair: there are some people who know what we do, who know that we are here to help. There are people whose eyes keep shining when they see that, working together, we can do a cool job and deliver the best solution to our final user, who works the 12-hour shift. The best part is that the number of people who see us as a resource to help is increasing every day. And, look, we’re not saints. We make mistakes, too. Sometimes we respond a little too rudely. Sometimes we are so tired of explaining, tired of showing empathy to everyone. But we recover quickly. 
The recipe is to come in softly, show that the Y-way can work better than the X-way. Show it working. Give examples of how to do it. Show what other companies are doing in that direction and what they are gaining. Pick the team (often a small team, on a small project) that is willing to receive help, listen and try something new. Start small. I know it will not always be easy to do this, but be nice. And the main thing is not to give up: approach it in another way, on another front; sometimes do it quietly, when you feel safe doing so. Invite people from other companies to pass on their experience to the place where you work. Be willing to do the same too. Get up from the chair, say hello to people and dare to ask how things are, if they are in need of help; ask to see their work, not to oversee it, but to praise what you think deserves it and to offer your support. Here, where I work, we each found a project to work on full-time. And we set aside time every day to help those who need it, no matter how small the demand. It has worked. There is still a long way to go, but Chico Science said it already: one step ahead and you are no longer in the same place.
https://uxdesign.cc/the-difficult-art-of-being-a-single-interaction-designer-for-150-people-16eb3f124b21
['Priscila Alcântara']
2017-12-05 11:07:20.399000+00:00
['User Experience', 'Design', 'Careers', 'Interaction Design']
First Blockfolio Signal
From now on, Storiqa will be featured in the Blockfolio Signal app! How to follow Blockfolio signals: 1) Download the app from Google Play or the App Store 2) Add the STQ token to your portfolio You can check the STQ price on several exchanges: Coinbene CoinFalcon EtherDelta Exmo HitBTC Hotbit IDEX Indodax KuCoin Sistemkoin Tidex Tokenomy Wazirx Briefly, we sum up all the latest updates and Storiqa’s achievements in the crypto world: January 2018 — we closed our $25 million hard cap May 2018 — release of the beta MVP beta.storiqa.com June 2018 — release of the Wallet prototype and its presentation at RISE November 2018 — platform test launch and its presentation at Web Summit Add the STQ token to your Blockfolio!
https://medium.com/storiqa/first-blockfolio-signal-883c2149cab2
[]
2018-11-21 15:56:22.577000+00:00
['Storiqa', 'Cryptocurrency', 'Stq', 'Startup', 'Ecommerce']
Are Wealthy Countries Going To Hoard The Vaccine, Like Rich People Hoarded Toilet Paper?
PHOTO: TASOS KATOPODIS/GETTY IMAGES. As pharmaceutical companies start obtaining emergency use authorizations and manufacturing coronavirus vaccine doses as quickly as they can, wealthy nations are hedging their bets by reserving far more of those vaccine doses than they need. In fact, many of the world’s richest countries have enough doses spoken for to inoculate their entire population multiple times over. Meanwhile, many less economically advantaged nations are struggling to secure enough to cover a meaningful fraction of their population within the next few years. Due to manufacturing limits, it could take many developing countries until 2024 to obtain enough vaccines to immunize their entire populations, reports The New York Times. Meanwhile, the world’s wealthiest countries have already laid claim to more than half of the doses that could make it to market by the end of 2020. This dire situation raises the questions: Are the wealthiest countries going to hoard all the vaccines? And can anything prevent that from happening? In September, a coalition of 156 countries agreed to a COVID-19 vaccine allocation plan, co-led by the World Health Organization, called Covax. The goal is to ensure that vaccines are shared equally between the world’s richest countries and developing nations. The WHO partnered with two nonprofits supported by Bill Gates to secure one billion doses for 92 lower-income countries by the end of 2021. One billion more will go to middle- and high-income nations in the same time frame in an attempt at an early 50/50 split. “This is a mechanism that enables global coordination of the rollout for the greatest possible impact and will help bring the pandemic under control and ensure the race for vaccines is a collaboration not a contest,” Tedros Adhanom Ghebreyesus, head of the UN health body, said when the deal was announced. 
He added that the plan would help ensure vaccines for “some people in all countries and not all people in some countries.” Despite this agreement, glaring inequities are already emerging. The United States, as of now, has secured 100 million doses from Pfizer, with the option of buying up to 500 million more. It also has 200 million pre-ordered from Moderna, with a 300 million dose addition if needed. The U.S. also put a preemptive order on 810 million doses between offerings from AstraZeneca, Johnson & Johnson, Novavax, and Sanofi. In total, that means the U.S. has direct access to as many as 1.5 billion vaccine doses; the entire U.S. population is 331 million. Meanwhile, the European Union has secured 1.3 billion across most of the same companies, for their estimated population of 447.7 million. Additionally, it has the option to purchase 660 million more from the German company CureVac if it chooses. Britain laid claim to 357 million doses with the option of 152 million more from Valneva for its population of just under 68 million. Even with these massive purchases, it is uncertain how quickly these countries will be able to vaccinate everyone, because many vaccines are not yet ready for distribution. Because there is no guarantee that each vaccine will come through as planned, countries are not putting all of their eggs in one basket and are instead operating under a “just in case” strategy. Rather than risk a huge setback, wealthy countries would rather overbuy. But the race to buy vaccines has turned into a global-scale version of the run on toilet paper many experienced earlier this year. While some panic-purchased enough toilet paper to last them years, others were put in the dire situation of having to rely on the kindness of neighbors to get a roll while they waited for stores to restock their cleared shelves and find ways to limit per-customer purchases. 
According to data collected by Duke University, should all the vaccines come through, the EU could vaccinate its entire population twice over, the UK and the U.S. would have four doses for every resident, and Canada would have enough to inoculate everyone six times over. So far, none of these countries have come forward to say what they intend to do with the surplus of doses should they have them. The U.S. began guaranteeing its access to vaccines well before any were developed by providing billions of dollars for the research, development, and manufacturing of five of the most promising COVID-19 immunizations. This support, which sped up the process and broadened the scale of production immensely, came with the condition that Americans would get priority access to vaccines made in the U.S. Other countries that could afford to do so also put money into the efforts. For example, Pfizer’s vaccine, made in conjunction with BioNTech, was funded by $445 million given by the German government. In an attempt to address vaccine distribution inequities, some pharmaceutical companies are promising a percentage of their products to poor and middle-income countries. AstraZeneca leads this initiative with the promise of over half of its 3.21 billion-dose inventory. Testing its vaccine as a single-dose shot, Johnson & Johnson has pledged 500,000 doses to low-income countries. Time will tell whether wealthy nations will share excess vaccines or whether Covax is merely a nice theory that does not stand a chance against panic, scarcity, and enough wealth to block out the front of the line. *** Originally published at http://www.refinery29.com
https://refinery29us.medium.com/are-wealthy-countries-going-to-hoard-the-vaccine-like-rich-people-hoarded-toilet-paper-f560643b113c
[]
2020-12-16 22:54:58.612000+00:00
['Vaccines', 'News', 'Covid 19', 'Coronavirus', 'Culture']
What I Wish I Could Tell You About Grief
What you don’t know about grief is that it’s a living thing. Or maybe you do know this. Maybe you’re shaking your head at my naiveté. Grief has an appetite. It’s insatiable. If I’m not careful, it might replace the person I used to be. If I wake up one day and there is nothing left of me but the sorrow that sits like a heavy stone in my chest, if the grief consumes — because I let it consume — and I become nothing more than the anguish that lives in my daughter’s absence, would it surprise you? It shouldn’t. Death wears the face of my child. If I say I’m okay, I’m lying. I’m never okay. This grief is a lot like the cancer that stole Ana’s childhood and then took her completely. How can I be afraid of dying? If I could know the exact day and circumstance of my death, what a gift that would be. In the deepest corner of my grieving heart, I harbor a secret that fills me with shame. Sometimes, I want to die. What I wish I could tell you is that I pictured myself walking out into the snow on a day when it was -7 degrees Fahrenheit and wondered what it would be like to let the cold night take me and my grief away.
https://jacquelinedooley.medium.com/what-i-wish-i-could-tell-you-about-grief-1b4e06662fb1
['Jacqueline Dooley']
2020-02-09 02:33:45.420000+00:00
['Death', 'Parenting', 'Spirituality', 'Family', 'Mental Health']
Security Token Offerings Could Disrupt Venture-Backed Tech Startups Positively
Security Token Offerings Could Disrupt Venture-Backed Tech Startups Positively A handful of methods exist for raising capital, from private offerings to semi-public ones to a full-blown IPO or ICO. Now STOs are on the rise, which might be just what tech startups need to revitalize the market. According to data collected by Pitchbook, a smaller number of startups are being acquired by larger firms or going public. Many of these startups, though venture-funded, have a minimal chance of staging an initial public offering (IPO). Some are resorting to cost-cutting measures to better their operating margins in the hopes of drawing mergers and acquisitions (M&A). The Statistics Today, more young companies are being allocated capital by VCs, and yet fewer are exiting through M&A. And the exits are taking longer for those who go through an IPO. 2014 is said to have been the height of VC-supported exits, with 200 startups lined up for an IPO. As the years passed, the numbers dropped; in 2017, no more than 100 IPOs reached the market. The downward trend persisted throughout 2018. Now even fewer companies backed by large investments are being offered an IPO or M&A exit. Pitchbook notes that in 2006 it took businesses around 4.9 years to exit. By 2016 it took roughly double that time, 8.3 years. Investors are holding positions for 10 years before an exit. The STO: A Modern Route to Liquidity Initial coin offerings (ICOs) have transformed crowd-funding and capital-raising. However, a majority of them were ineffectual in delivering business benefits. On the other hand, an STO can lead the way in equipping medium-sized, VC-supported tech startups to redefine themselves while allowing innovative entrepreneurs to tackle new problems. Anticipated to be compliant, security token offerings (STOs) behave like traditional equities. Typically, an STO has a standard exemption through the SEC’s Reg D. 
It differs from a traditional offering in its execution, which is done through a smart contract. Tokenizing a business backed by big money resolves liquidity difficulties. The accepted scope for tokenizing runs from $100 million up to $1 billion. But there are other alternatives, such as a partially private token offering using the SEC’s Reg D exemption, which can be offered to qualified investors, or a semi-public offering through the SEC’s Reg A+ exemption. Though it can be offered to non-accredited investors, the semi-public offering is restricted to $50 million per year. The significance of tokenized shares can also be felt in the secondary market, enabling seed investors to shift funds to other innovative businesses. Advantages of Tokenization Young companies that have undergone several funding rounds possess proof that can be considered quantifiable and comparable. After half a decade or so, these startups already have a client base, a product, revenues, and a financial track record from which fair market valuations can be derived. Tech startups thus have the financials ready for tokenization. Through tokenizing, the opportunity to discover capital and talent is realized. It can revitalize the tech industry and its accompanying market. Another benefit that comes with the process is that token-converted shares can later be sold on exchanges such as OpenFinance and tZERO. In summation, tokenizing shares and conducting STOs can pick up on the innovation started by ICOs and breathe new life into the market.
https://medium.com/finrazorcom/security-token-offerings-could-disrupt-venture-backed-tech-startups-positively-c54b016eb13d
['Finrazor Team']
2018-12-11 12:13:30.659000+00:00
['Fintech', 'Crowdfunding', 'Startup', 'Crypto']
How to Implement Your Distributed Filesystem With GlusterFS And Kubernetes
Deploying Gluster Server and Heketi API At this point, the hard part is over. All that remains is to configure and deploy the last two components — Gluster Server and Heketi API. Clearly, after all the grind you’ve done in the previous steps, you deserve some kind of uncomplicated, straightforward way to finish things up. And what’s easier than using a preconfigured Helm chart? There are even some options for this: Although it may seem like an obvious choice, it’s also worth noting that the second solution is actually far simpler both to understand and to deploy. So once again, I highly recommend doing your own research. Alternatively, you can also check out my own combined solution, which I made as a part of my open-source Kicksware project. What all of these solutions have in common is that they all use Heketi as a management API, and therefore all of them need to be provided with a predefined storage node topology configuration. And although it might sound a bit overwhelming, this process actually comes down to executing one command and writing down a few IP addresses. Here’s what it looks like: Storage cluster topology example config for Heketi API. Basically, all you need is each node’s internal IP, name, and the storage device name you created in the last step. If you followed the instructions precisely, the device name will be /dev/disk/gluster-disk. To get the remaining pieces of the puzzle, simply type this command:

$ kubectl get nodes -o wide
NAME                  STATUS   AGE   VERSION   INTERNAL-IP
kicksware-k8s-3pkt4   Ready    38h   v1.18.8   10.114.0.5
kicksware-k8s-3pkt8   Ready    38h   v1.18.8   10.114.0.3
kicksware-k8s-3pktw   Ready    38h   v1.18.8   10.114.0.4

The name of the node is node.hostname.manage and the internal IP corresponds to node.hostname.storage. Just put this into the values.yaml file and use it when performing the deployment of your Helm chart of choice. 
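The topology example mentioned above survives only as a caption in this text version, so here is a minimal sketch of what such a topology.json could look like, filled in with the node names, internal IPs, and the /dev/disk/gluster-disk device from this article; treat the zone values and the exact file layout as assumptions rather than the author's original file:

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["kicksware-k8s-3pkt4"],
              "storage": ["10.114.0.5"]
            },
            "zone": 1
          },
          "devices": ["/dev/disk/gluster-disk"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["kicksware-k8s-3pkt8"],
              "storage": ["10.114.0.3"]
            },
            "zone": 1
          },
          "devices": ["/dev/disk/gluster-disk"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["kicksware-k8s-3pktw"],
              "storage": ["10.114.0.4"]
            },
            "zone": 1
          },
          "devices": ["/dev/disk/gluster-disk"]
        }
      ]
    }
  ]
}
```

Each entry pairs a node's Kubernetes name (manage) with its internal IP (storage), exactly as described in the text above.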
Also, depending on the chosen method (all besides the IBM one), you may need to manually input the Heketi service’s cluster IP into the Storage Class configuration and upgrade the chart:

$ kubectl get services -lrelease=<RELEASE_NAME>
NAME             TYPE        CLUSTER-IP       PORT(S)    AGE
gluster-heketi   ClusterIP   10.245.189.193   8080/TCP   1m

Once you’ve got your storage class, simply put it in every PersistentVolumeClaim whenever you’re in need of storage provisioning, like so: GlusterFS PersistentVolumeClaim example config. Didn’t I tell you that in the end it would be a piece of cake! And now this is finally it. Great job! One more thing: if you run RBAC-based Kubernetes, please make sure to provide a proper RoleBinding for the Heketi app. Otherwise, it won’t be able to load the topology and you’ll be getting this odd error message:

Unable to create node: New Node doesn't have glusterd running

And though it can be caused by some sort of Gluster client install problem, it’s more likely that Heketi just doesn’t have enough rights to access the nodes and pods through the K8s native APIs.
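The PersistentVolumeClaim example above is likewise only a caption here. As a minimal sketch of the pair it refers to: a StorageClass pointing the in-tree GlusterFS provisioner at Heketi, and a PVC using it. The class name gluster-heketi matches the service name shown above, and the resturl uses the cluster IP from the kubectl output; the claim name and size are hypothetical:

```yaml
# StorageClass wired to the Heketi REST endpoint (cluster IP from `kubectl get services`)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi                      # assumed name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.245.189.193:8080"     # Heketi service cluster IP and port
  restauthenabled: "false"
---
# PVC that dynamically provisions a GlusterFS volume through the class above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc                         # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany                         # GlusterFS supports shared read-write access
  storageClassName: gluster-heketi
  resources:
    requests:
      storage: 5Gi                          # hypothetical size
```

Any PVC referencing the class this way triggers Heketi to carve a volume out of the topology defined earlier.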
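For the RBAC point, one common way to grant Heketi the access it needs is a ClusterRoleBinding on its service account. This is a sketch under assumptions: the service account name and namespace depend on your chart, and binding the built-in edit role is broader than a minimal custom role would be:

```yaml
# Grant the Heketi service account rights to list nodes/pods and exec into Gluster pods
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-sa-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                       # built-in role; a tighter custom role also works
subjects:
  - kind: ServiceAccount
    name: heketi-service-account   # assumed SA name, check your chart
    namespace: default             # assumed namespace
```

With this in place, the "New Node doesn't have glusterd running" error caused by missing permissions should go away.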
https://medium.com/better-programming/how-to-implement-your-distributed-filesystem-with-glusterfs-and-kubernetes-83ee7f5f834f
['Timothy Yalugin']
2020-11-18 17:55:26.029000+00:00
['Kubernetes', 'DevOps', 'Containers', 'Glusterfs', 'Programming']
YANA — Your Support System. In indonesia, Based on data from 2018…
YANA — Your Support System Safe environment app for people with depression In Indonesia, based on data from 2018, 28 million out of 267 million people have a mental health disorder. That means roughly 1 in 10 Indonesians suffers from a mental health problem. So, what is the main problem? Stigma, Prejudice & Discrimination. In Indonesia, mental illness is taboo to talk about, especially when it comes to suicide. For example, if a depressed person wants to talk with their parents or friends about their mental health issues, there’s a big chance they will be judged and told to pray more. And the truth is… this can only make it worse. I’ll tell you about my friend, a cheerful and easy-going person. He has a lot of friends to hang out with, even to talk to. But guess what? At night, when he is by himself or home alone… he just sits there crying inside, struggling with depression. He feels like nothing is even worth it. He wanted to seek help, but stigma and prejudice made him avoid getting the help he needs. Now he doesn’t even know who to trust or who to tell anymore, because his support system is failing him. Maybe you know somebody like that?
https://medium.com/swlh/yana-app-your-support-system-10923d3b32fb
[]
2020-12-10 08:55:46.386000+00:00
['Support System', 'Mental Health', 'Mental Health App', 'Product Management', 'Project Management']
6 Techniques Which Help Me Study Machine Learning Five Days Per Week
I quit Apple. Started a web startup, it failed. My heart wasn’t in it. I wanted to learn machine learning. It got me excited. I was going to learn it all. I wouldn’t need to program all the rules, the machine would learn it for me. But I had no job. Excitement doesn’t pay for things. I started driving Uber on the weekends to pay for my studies. I loved meeting new people but I hated driving a car all the time. Traffic, stop, start, fuel do I have enough fuel I think I do, the air, the aircon, changing gears, you shouldn’t go that way you should go this way, all of it. I studied machine learning. All day, five days a week. And it was hard. It’s still hard. Uber on the weekends. Machine learning during the week. That was my routine. I had to learn. I must learn this, I can’t keep driving, I don’t know what my goal is yet but I know it’s not driving. One Saturday night I earned $280 and got a $290 fine. -$10 for the night. 9 months into my self-created AI Masters Degree, I got a job. It was the best job I’ve ever had. How’d I study every day? Like this. 1. Reduce the search space Machine learning is broad. There’s code, there’s math, there’s probability, there’s statistics, there’s data, there’s algorithms. And there’s no shortage of learning resources. Having too many options is the same as having no options. If you’re serious about learning, set yourself up with a curriculum. Rather than spend weeks agonizing over whether you should learn Python or R, take a course on Coursera or edX, start with math or code, spend one week making a rough plan, then follow it. For me, this was creating my own AI Masters Degree. I decided I was learning code first and Python would be my language. I searched far and wide for different courses and books and put together the ones which interested me most. Was the path I made the best for everyone? Probably not. But it was mine, that’s why it worked. 
Once I had a curriculum, I had a path I could follow; there was no more wasting time trying to decide the best way to go. I could get up, sit down and learn what I needed (wanted) to learn. It wasn’t strict either. If something came up which caught my interest, I followed it and learned what I needed as I went. If you’re learning online and not through university, you should make your own path. 2. Fix your environment Your grandfather’s first orange farm failed. The soil was good. The seeds were there. All the equipment too. What happened? It was too cold. Oranges need warm temperatures to grow. Your grandfather had the skills to grow oranges but there was no chance they were growing in a cold climate. When he moved to a warmer city, he started another orange farm. 12 months later, your grandfather was serving the best orange juice in town. Studying is like growing oranges. You could have a laptop, an internet connection, the best books and still not be motivated to study. Why? Because your environment is off. Your room is filled with distractions. You try to study with friends but they aren’t as dedicated as you. WhatsApp goes off every 7 minutes. What can you do? I turned my room into a studying haven. Cleaned it. Put my phone in a drawer in another room, turned off notifications everywhere. I told my friend: my phone is off until 4 pm, I’ll talk to you then. He said okay. Friends are great when it comes to friend time but study time is study time. Can’t do a whole day without your phone? Try an hour. Any drawer you can’t see will work. Do not disturb should be a default. Fix your environment and let the knowledge juices flow. 3. Set the system up so you always win Problem 13 has me stumped. I’m stuck. I wanted to get it done yesterday but couldn’t. Now it’s time to study but I know how hard I worked yesterday and got nowhere. I’m putting it off. I know I should be studying. But I’m putting it off. It’s a cycle. Aghhhhhhh. 
I’ve seen this cycle before. I know it. But it’s still there. The pile of books stares at me. Problem 13. I set a timer. 25 minutes. I know I might not solve the problem but I can sit down for 25 minutes and try. 4 minutes in, it’s hell. Burning hell. I keep going. 24 minutes in and I don’t want to stop. The timer goes off and I set another. And then another. After 3 sessions, I solve the problem. I tell myself, I’m the best engineer in the world. It’s a lie but it doesn’t matter. Even a small milestone is a milestone. You can’t always control whether you make progress with study. But you can control how much time you spend on something. Can control: four 25-minute sessions per day. Can’t control: finishing every task you start each day. Set the system up so you always win. 4. Sometimes do nothing I came to a conclusion: learning is the ultimate skill. If I can learn how to learn better, I can do anything better. I can learn machine learning, I can become a better programmer, I can learn to write better. I must improve my learning, I thought. I began at once. I did the Coursera Learning How to Learn course. One of the main topics was focused versus diffused thinking. Focused thinking happens when you’re doing a single task. Diffused thinking happens when you’re not thinking about anything. The best learning happens at the crossover of these two. It’s why you have some of your best thoughts in the shower. Because there’s nothing else happening. When you let diffused thinking take over, it gives your brain space to tie together all of the things it absorbed during focused thinking. The catch is, for it to work properly, you need time in both. If you’ve set the system up so you do four 25-minute sessions of focused work, go for a walk after. Have a nap. Sit and think about what you’ve learned. Once you start doing nothing more often, you’ll see many things are valuable because of the empty space. 
A room is four walls around space, a tire is filled with nothing but air, a ship floats because of the empty space. Your study routine could do with more of nothing. 5. Embrace the suck Studying sucks. You learn one thing and forget it the next day. Then another and forget it. Another. Forgotten. You spend the whole weekend studying, go to work on Monday and no one knows. Someone asked me how I remember things from books deeply. I said I don’t. If I’m lucky I remember 1% of a book I read. The magic happens when that 1% crosses over with another 1% of something else. It makes me feel like an expert dot connector. After studying something for a year you realise how much more there is still to learn. When will it end? It doesn’t. It’s always day one. Embrace the suck. 6. The 3-year-old principle I was at the park the other day. There was a young boy running around having the time of his life. Up the slide, down the slide, in the tree, out of the tree, in the dirt, out of the dirt, up the hill, down the hill. He was laughing and jumping then laughing again. His mum came over to pick him up. “Come on, Charlie, we’ve got to go.” He kept laughing as she carried him away, waving his blue plastic shovel. What was it that fascinated him? He was playing. He was having fun. The whole world was new. Our culture has a strict divide between work and play. Study is seen as work. You’re supposed to study to get more work. You’re supposed to work to earn money. The money buys you leisure time. Once you’ve bought leisure time, then and only then can you be like Charlie and run around laughing. If you have it in your head study is work, it will be hell. Because there’s always more to learn. You know how it goes, all work and no play. But suppose you have the idea that studying is the process of going from one topic to the next. Connecting different things like a game. You start to have the same feeling about it as you might have if you were Charlie going down the slide.
You learn one thing, you use it to learn something else, you get stuck, you get over it, you learn another thing. And you make a dance out of it. I learned if you have structured data like tables, columns or data frames, ensemble algorithms like CatBoost, XGBoost and LightGBM work best. And for unstructured data like images, videos, natural language and audio, deep learning and/or transfer learning should be your model of choice. I connected the dots. I tell myself I’m an expert dot connector. Dancing from one dot to the next. Do this and you’ll finish a study session with more energy than you started. This is the 3-year-old principle. Seeing everything as play. That’s enough for now. It’s bedtime for me. Here’s a bonus. 7. Sleep Poor sleep means poor studying. You’re probably not getting enough. I wasn’t. The best money for driving Uber was Friday and Saturday nights. People go out to dinner, to parties, to night clubs, I didn’t, I was driving. I’d go ’til 2 am, 3 am, come home and sleep until the sun woke me up at 7–8. I was a train wreck for two days. Monday would come and I’d be in a different time zone. Tuesday got better and by Wednesday I was back where I needed to be. Then the cycle would repeat on Friday. This broken sleep schedule was unacceptable. My goal was to learn better. Sleep cleanses the brain and allows new connections to form. I cut myself off from driving at 10 pm, 11 pm, got home and got the 7–9 hours. Less money, more learning. Don’t trade sleep for more study time. Do the opposite. Machine learning is broad. To study it well, to study anything well, you need to remind yourself. Reduce your search space Fix your environment Set the system up so you always win Sometimes do nothing Embrace the suck Treat study as play and Sleep your way to better knowledge Goodnight.
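The model-choice rule above (ensemble algorithms for structured data, deep/transfer learning for unstructured data) can be written down as a toy lookup. This only restates the heuristic from the paragraph; the category keys and mapping are my own naming, not from the article:

```python
# Toy restatement of the heuristic: ensemble algorithms for structured
# (tabular) data, deep/transfer learning for unstructured data.
SUGGESTED_MODELS = {
    "structured": ["CatBoost", "XGBoost", "LightGBM"],
    "unstructured": ["deep learning", "transfer learning"],
}

# Mapping from concrete data kinds to the two broad categories.
DATA_KINDS = {
    "table": "structured",
    "dataframe": "structured",
    "image": "unstructured",
    "video": "unstructured",
    "text": "unstructured",
    "audio": "unstructured",
}

def suggest_models(data_kind):
    """Return candidate model families for a given kind of data."""
    return SUGGESTED_MODELS[DATA_KINDS[data_kind]]

print(suggest_models("dataframe"))  # ['CatBoost', 'XGBoost', 'LightGBM']
```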
https://towardsdatascience.com/6-techniques-which-help-me-study-machine-learning-five-days-per-week-fb3e889fad80
['Daniel Bourke']
2019-09-30 04:13:27.681000+00:00
['Machine Learning', 'Productivity', 'Data Science', 'Learning', 'Tds Narrated']
The New Skeuomorphism is in Your Voice Assistant
Google Home and Amazon Echo Yay, we killed Skeuomorphism! Not too long ago humanity left behind its skeuomorphic interfaces. We became accustomed to the idea of buttons to tap on screens and swipes that moved content right or left. We learned that content could be out of view but within reach. We graduated to a flatter, more abstract representation that still inherits spatial metaphors and relationships but communicates them more subtly and implicitly. We stripped our visual interfaces of their ornamentation to allow a more authentic approach to visual aesthetics. Skeuomorphism means using real-world references and metaphors on interfaces to enhance their comprehensibility. A skeuomorphic button looks like a physical switch, a skeuomorphic canvas can have a wood texture. Killing skeuomorphism made us feel very smart about ourselves. We finally don’t need glossy buttons to understand something is tappable! Skeuomorphism is not dead But is skeuomorphism really dead? Well, no. Skeuomorphism is alive and this time it is invisible. The new skeuomorphism lives inside your voice assistant: Your Amazon Echo, Google Home, your phone. You call it Siri, Alexa, or Cortana. The human assistant as metaphor The voice assistant pretends to be a bodiless human. It speaks to us like a person. We call it names. It throws in a joke every once in a while. We attribute emotions and feelings to our voice assistant. It defines itself as female or male. The sound of a voice assistant imitates human sound and intonation. The articulation of the metaphor of a human assistant and the way voice assistants mimic humans is literal. Just as buttons look literally like buttons on the skeuomorphic visual interface, the voice assistant that sounds literally like a human is a skeuomorphism. Voice is the interface At the heart of Alexa, Siri, and Cortana are AI-enhanced algorithms that perform searches, execute commands, and read out results.
Voice is the input and output channel for these algorithms. Voice is the interface going forward with home automation, autonomous vehicles, and smart objects. With voice as interface, the topic of the psychology of robotics becomes relevant. Similar to conversational interfaces, voice can trigger an emotional response by suggesting interaction with another human being. The emotional attachment, the willingness to make the leap of faith into a metaphor, is part of the skeuomorphism in the voice assistant. Machinery as metaphors for visual interfaces Skeuomorphic visual interfaces use references and metaphors from the physical world. References were typically made to known tactile surfaces. Such surfaces were control panels known from heavy machinery or common domestic switches. Early computer interfaces were quite literal in their use of these references. As computers became “personal”, metaphors from the office context were introduced. The desktop metaphor is the first instantiation we had to bridge from the abstract to the familiar. When we went to more personal mobile devices with touch screens, we needed more detail in the exact behavior to make objects complete and accessible. De-skeuomorphizing visual interfaces People eventually became familiar with visual interfaces to a degree that made literal surface metaphors obsolete. Affordances and avoidances of visual interface components are comprehensible without skeuomorphism. The definition of the word “button” has transitioned from an exclusive physical domain to a virtual one as well. Even though stripped of skeuomorphism, today’s visual interfaces still use the physical surface as metaphor. A button still makes reference to a physical switch — just without being overly literal. De-skeuomorphizing voice interfaces The value of the voice assistant is not determined by its level of realism in mimicking the human.
Its value is providing solutions to human problems in its role as personal assistant, knowledge source, controller, and access point to services. There is nothing wrong with initially applying skeuomorphism in designing voice interfaces. But we will see a trend of de-skeuomorphization of voice interfaces just as we witnessed it for visual interfaces. This will happen as soon as people comprehend voice as a natural way of interfacing with products and services and have developed the corresponding behavioral authenticity. The de-skeuomorphized voice interface will be less literal in mimicking humans and be more focussed on the value it brings to humans. Biomimicry and skeuomorphism Skeuomorphism is a surface layer and addresses human interaction with a product or service. The product or service as such provides a solution to a human problem. Understanding the principles behind human needs and desires ultimately results in great products and services. A reference principle that can be applied in the creation of products is biomimicry. Biomimicry manifests itself at a much deeper level than skeuomorphism and concerns the how and the why a product solves human problems while skeuomorphism is the initial and temporary literalism for human interaction with them.
https://uxdesign.cc/the-new-skeuomorphism-is-in-your-voice-assistant-3b14a6553a0e
['Bert Brautigam']
2017-05-04 04:23:17.578000+00:00
['Skeuomorphism', 'Conversational UI', 'UX', 'Artificial Intelligence']
Authors That Helped Cuddle Introversion
1. Dushka Zapata I feel immensely grateful for the day I found an answer written by Dushka on Quora. I have been reading her posts perpetually since then. From Quora to Instagram now, I never miss her beautiful outlook on life. Each and every article of hers has been delightful, comforting, and consoling. In many of her posts, she has elucidated what introversion is, often by recounting her own experiences. Her stories were a start for me to reflect on my life and understand myself. In my opinion, Dushka’s books are the best enablers of self-reflection and self-love. Her books are a collection of essays from her life. She emphasizes not giving advice and encourages us to find the answers to our ordeals ourselves. In the books, she’s only letting us know of her experiences and the insights she had afterward, how they helped her in admitting and learning about herself. One of my favourite books, written by Dushka, is described below: In the book titled “Love Yourself: And Other Insurgent Acts That Recast Everything”, the essays focus mainly on insecurities and how they can disrupt our view of ourselves and others. How easy it is for humans to fog our brains with deprecating and hurtful thoughts about ourselves. How thoughts are just thoughts, and they do not represent reality. And how important it is to define and lucidly state personal boundaries. How love is not an imposition, but mutual respect for boundaries. “Boundaries are love.” My favourite quote would be: “Relinquishing a characteristic that makes you incompatible feels like you are compromising not just your happiness but your identity.” The author never fails to amaze with her freeing perspectives on the goings-on of life. And on embracing introversion! I would recommend her books to anyone struggling with self-love, and the uncertain trajectories of mind and life, so that we can learn and appreciate the fine nuances of our personalities.
https://medium.com/the-innovation/authors-that-helped-cuddle-introversion-7e898a8ebcbe
['Diksha Singh']
2020-12-11 15:01:15.353000+00:00
['Books', 'Authors', 'Introversion', 'Introvert']
Get Bent: An Ideological Classifier. Part 2: Noisy Data
Part Dos: Scrape It All, Let Pandas Sort it Out Here’s where we are so far: I can’t tell users apart by simply reading one tweet mentioning the hashtag. I am making the assumption that those who tweet the hashtag must be active political users and they therefore must have other political tweets that can provide us new clues. Finding these “political tweets” was a classic needle-in-a-haystack problem. On top of that, the workaround scraper I found gave me almost no ability to filter what I received. The scraper I used, snscrape, is somehow able to get around the new Twitter API and HTML changes but a trade-off exists: if I want some of a specific user’s tweets, I need to take them all. So I built a function that went through my chronological list of users and scraped every single tweet they’ve ever tweeted. The scraper was brutally efficient and at one point I pulled over 350,000 tweets in a single hour. This was my first experience with that sort of horsepower and I learned some important lessons here about data usefulness and data storage that I’m sure will be pertinent in my future employment. For every user I downloaded their full Twitter presence. Every tweet, every emoji, every retweet, from 2020 all the way back to 2009 in some cases. The only information I could not recover was the content of the retweets, but I did get the Twitter handle of the user they retweeted, a surprisingly important piece of the overall project, so stay tuned. Between the two camps of users, it was easy to see a distinction, and my assumption that those who tweet on political hashtags are political was immediately validated. When you go to a user’s timeline and see constant calls for the arrest of Barack Obama alongside a heavy Bible presence, it does not take much to discern their political party, and this continued as the norm and not the exception. Only very, very rarely did I find an ambiguous timeline.
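The scrape-everything loop described above can be sketched as follows. `fetch_all_tweets` is a hypothetical stand-in for the snscrape call (the article doesn't show the real interface); the point is the bookkeeping: keep every tweet, and record the handle of any retweeted user for later feature building.

```python
from collections import defaultdict

def fetch_all_tweets(username):
    """Hypothetical stand-in for a snscrape query that yields every
    tweet a user has ever posted."""
    raise NotImplementedError("wire this up to your scraper of choice")

def scrape_users(usernames, fetch=fetch_all_tweets):
    """Walk a chronological list of users and pull their full timelines.

    Returns a dict of user -> tweets, plus a dict of user -> handles of
    the accounts they retweeted (retweet *content* isn't recoverable,
    but the retweeted handle is).
    """
    tweets_by_user = {}
    retweeted_handles = defaultdict(list)
    for user in usernames:
        tweets = list(fetch(user))
        tweets_by_user[user] = tweets
        for tweet in tweets:
            if tweet.get("retweet_of"):
                retweeted_handles[user].append(tweet["retweet_of"])
    return tweets_by_user, retweeted_handles
```

The `fetch` parameter makes the loop testable with a stub before pointing it at the real scraper.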
Taking a step back here, as political as this project is, I try not to take sides. I think from any reasonable perspective there are two competing ideologies in this country and they share very few similarities. The crux of this project is that liberals sound like liberals, and conservatives sound like conservatives. All of this relies on being able to clearly differentiate between the two and I am hyper-aware of the presence of inherent bias in this project. Google “Pierre Delecto” and thank me later. My scraper is now running hot and I need to start thinking about how I can feed these words to my computer in a way that it can understand. My first option would be to use a Count Vectorizer to represent these users as a term-document matrix. The features for your model would then be the number of times a certain word is used in one document compared to another. Theoretically, with this representation, a user who uses the word Trump more than another is more likely a conservative. The problem here is that I’m comparing users whose output ranges from prodigious retweeting to barely tweeting at all. Comparing a user who tweets the word Trump 457 times to one who does so 4 times will yield nothing but noise. Count Vectorization is wholly inappropriate for our needs. In a similar vein, Term frequency-Inverse document frequency (Tf-Idf) counts the frequency of word use but multiplies that by the inverse of its frequency across all the other documents. This will give us a similar document-term matrix, but with a bit more context on word choice. This gets us closer to what we’re looking for but everyone who tweets about politics is going to use the word Trump. And stimulus. And economy. Again, with such a huge and varying set of “documents”, anything to do with counting word occurrences is almost useless. We need to go deeper and find more ways to represent the context of the words used by these active tweeters.
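In practice both representations come from scikit-learn's CountVectorizer and TfidfVectorizer; to make the arithmetic concrete, here is a deliberately simplified, unsmoothed version using only the standard library (real implementations add smoothing and normalization):

```python
import math
from collections import Counter

def count_vectorize(docs):
    """Count Vectorizer: one bag of raw word counts per document."""
    return [Counter(doc.lower().split()) for doc in docs]

def tf_idf(docs):
    """Tf-Idf: term frequency scaled by log(inverse document frequency)."""
    counts = count_vectorize(docs)
    n_docs = len(docs)
    doc_freq = Counter()            # in how many documents each term appears
    for bag in counts:
        doc_freq.update(set(bag))
    weighted = []
    for bag in counts:
        total = sum(bag.values())
        weighted.append({
            term: (tf / total) * math.log(n_docs / doc_freq[term])
            for term, tf in bag.items()
        })
    return weighted

docs = ["trump stimulus economy", "trump trump stimulus", "trump economy"]
weights = tf_idf(docs)
# 'trump' appears in every document, so its idf term is log(1) = 0:
# no matter how often a user tweets it, it carries no weight.
```

This illustrates exactly the limitation noted above: words everyone uses ("Trump", "stimulus", "economy") get discounted, but the representation still only counts occurrences.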
Stay tuned for our next exciting chapter where I use Doc2Vec with Gensim to begin training a model. Don’t miss it!
https://medium.com/swlh/get-bent-an-ideological-classifier-part-2-noisy-data-57559151adce
['Steven Markoe']
2020-11-10 17:16:23.964000+00:00
['Machine Learning', 'Python', 'Politics', 'Twitter', 'Data Science']
Type-I & Type-II Error Simplified— COVID Vaccine Example
Type-I & Type-II Error Simplified— COVID Vaccine Example Visualizing the confusion matrix without confusion Photo by Daniel Schludi on Unsplash I am a firm believer that visual interpretation of concepts has a longer-lasting retention period than the conventional approach. I assure you that when you finish reading this blog, you will have a crystal-clear understanding of Type-I & Type-II error. Let’s begin with some preamble about Hypothesis Testing: What is Hypothesis testing and why do we need it? Hypothesis testing is the use of inferential statistics to reconfirm or disprove an existing belief/default position/status quo/general statement relating to a population parameter. It has two components: · Null Hypothesis (H0) — Existing belief/default position/status quo/general statement about a population parameter · Alternate Hypothesis (Ha) — Something contradictory to the Null Hypothesis Since companies like Pfizer & Moderna have already disclosed the results of their Phase 3 COVID vaccine trials, we will take this example to understand the concept of Type-I & Type-II error. Both companies claim above 90% effectiveness for their vaccines, which is incredible; let’s see how we can use this example to visualize various terms associated with the confusion matrix.
We shall start by defining our Null & Alternate Hypothesis for COVID Vaccine Effectiveness, assuming measurement of an arbitrary metric that captures the effectiveness of the vaccine (the higher the value of this metric, the more effective the vaccine): · Null Hypothesis (H0) — No difference (𝛿 = 0) in the metric value before & after the vaccination · Alternate Hypothesis (Ha) — A difference 𝛿 (> 0) in the metric value before & after the vaccination Here onwards, visualizations will be more dominant: (Image by author) The above plot is the distribution of 𝛿 that assumes the Null Hypothesis is true and there is no difference reported in the metric before & after vaccination. Values of 𝛿 away from the mean value of 0 reflect variation in sampling (due to chance/random causes). (Image by author) The general practice of statisticians is to keep this significance level (α) at 5%, or 0.05 in probability terms; this is nothing but the ‘Type I Error’ (incorrectly rejecting a true null hypothesis). (Image by author) The region to the left of the critical value (red line) is denoted by (1-α), also known as the ‘Confidence level’ (correctly failing to reject a true null hypothesis). In the current case, it would be 95% or 0.95 in probability terms. (Image by author) The above plot (green) is the distribution of 𝛿 that assumes the Alternate Hypothesis is true and there is a difference 𝛿 (> 0) reported in the metric before & after vaccination. Again, values away from the mean value of 𝛿 reflect variation in sampling (due to chance/random causes). (Image by author) Focus on the green shaded area: it depicts that the alternate hypothesis is actually true, but due to the extremity of the sample representation to the left of the critical value (the pre-decided red line), it is rejected by mistake. This is nothing but the ‘Type II Error’ (failing to reject a false null hypothesis, i.e. incorrectly rejecting a true alternate hypothesis). (Image by author) The value of the green shaded area (Type II Error) is denoted by β.
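The shaded areas above can also be computed numerically. A minimal sketch with Python's statistics.NormalDist, assuming unit-standard-error sampling distributions and an illustrative true shift of 𝛿 = 3 (these numbers are chosen for illustration, they are not from the article):

```python
from statistics import NormalDist

null = NormalDist(mu=0, sigma=1)   # sampling distribution of delta under H0
alt = NormalDist(mu=3, sigma=1)    # sampling distribution of delta under Ha

alpha = 0.05
critical = null.inv_cdf(1 - alpha)  # the "red line" for a one-tailed test

type_1 = 1 - null.cdf(critical)     # area right of the line under H0 (= alpha)
type_2 = alt.cdf(critical)          # beta: area left of the line under Ha
power = 1 - type_2                  # probability of detecting the true shift

print(f"critical={critical:.3f} alpha={type_1:.3f} beta={type_2:.3f} power={power:.3f}")
```

Moving the red line left shrinks β but inflates α, which is exactly the trade-off the plots illustrate.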
(Image by author) The region to the right of the critical value (red line) is denoted by (1-β), also known as the ‘Power’ (correctly rejecting a false null hypothesis). By now we have described all that is needed for building a confusion matrix; it’s time to give it a structure: (Image by author) Major confusion takes place when these cells are referred to with different terms, i.e. True Negative, False Positive, False Negative & True Positive. Let me share an easy-to-remember method for the same: (Image by author) Referring to the above picture, ‘0’ in the column is assigned the text ‘Negative’ and ‘1’ in the column is assigned the text ‘Positive’. Now, if the number in the column matches the number in the row, we use the prefix ‘True’, else ‘False’. Looking at the top-left cell of the matrix, since the number ‘0’ matches the row’s number, we use the prefix ‘True’ followed by the already allocated suffix ‘Negative’. So this becomes ‘True Negative’. In a similar manner, we can derive these terms for all the remaining cells of the matrix. (Image by author) The very next step is to derive performance metrics out of the confusion matrix, namely Accuracy, Sensitivity (Recall), Specificity, Precision & F1-score. More discussion on these will follow in upcoming blogs. I will conclude this blog here; I hope by now you have established a decent understanding of Type I & Type II Error alongside the basic terminology associated with the confusion matrix. Also, the above example represented a One-Tailed test, whereas there exists a Two-Tailed test as well, the explanation of which is along similar lines. I have posted/will be posting more blogs in the same domain in a simplified & easy-to-interpret form. Keep a watch! Thanks!!!
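The performance metrics named above all follow directly from the four confusion-matrix cells. A minimal sketch using their textbook definitions, with made-up cell counts for illustration:

```python
def confusion_metrics(tn, fp, fn, tp):
    """Derive standard performance metrics from confusion-matrix cells.

    fp counts Type I errors, fn counts Type II errors.
    """
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)       # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Illustrative counts only:
metrics = confusion_metrics(tn=50, fp=10, fn=5, tp=35)
print({k: round(v, 3) for k, v in metrics.items()})
```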
https://towardsdatascience.com/type-i-type-ii-error-simplified-covid-vaccine-example-c99b31ddbf41
['Atul Sharma']
2020-11-20 14:22:19.601000+00:00
['Statistics', 'Artificial Intelligence', 'Machine Learning', 'Mathematics', 'Data']
How Much Money Can You Make Writing for Medium?
Other Questions and Answers About Medium Author Payments and Writer Success How Do Writers Actually Get Paid Through Medium? Medium Partner Program Earnings are updated every day. Medium uses UTC days, meaning they include all earnings from activity between midnight UTC and 11:59 PM UTC. Earnings are then updated within a few hours. Your earnings are deposited into your bank account by the 8th of every calendar month. Medium makes all payments through Stripe. Payments may take 5–7 business days to appear in your bank account. Do I Need to be a Paid Subscriber to Write for Medium? No, you do not need to be a paying member to write for Medium. Writing for Medium is free. Can My Friends, Family and Followers Read my Paywall Restricted Content? Yes! Authors receive Friend Links for every story that’s behind Medium’s metered paywall. You can share a Friend Link directly with friends and family, or through any social media platform. Medium’s Friend Link | Source: Medium Friend Link The Friend Link gives anyone free access to your story — even if they’re not a subscribing Medium member and have already read all their complimentary stories for the month. Do I Need to Be a Previously Published Author or Professional Writer to Succeed on Medium? Partner Program stories are rewarded by readers who believe writers should be compensated for the quality of their ideas, not the attention they attract for advertisers or their status as a previously published author. So, no — you do not need to be a “professional author” to succeed on Medium. How Do I Find Publications to Submit My Content To? I created Active Publications, a Medium publication, which contains lists of publications, organized by topic, that are looking for new writers: Do I Need to Promote my Articles Outside of Medium? Medium won’t be the right platform to promote all types of articles. However, certain industries have really great, established audiences.
Topics that do really well on Medium include: Entrepreneurship Startups Political Commentary Cultural Critiques Technology Motivational/Self Improvement Personal Essays Even if you are not publishing articles focused on the above topics, Medium can still provide sufficient traffic. Medium’s unique monthly viewership is approaching 250 million and the site has extremely strong domain authority! As an example, I posted a link to my Medium article on Uber Vomit Fraud on LinkedIn to see whether it would gain any traction. I was shocked to see that within a few days I had amassed 45,000 views and nearly 20,000 reads: However, if you really want to boost your Medium article views, I would suggest sharing your article on other social media platforms and improving your Medium article SEO. The best free tool I have found to promote Medium articles is Signal. Signal auto-tweets your articles on repeat to help you share your articles and grow your audience on your schedule. So while it is not necessary to promote your Medium articles on other platforms, it should certainly be considered to maximize traffic. How Can I Increase My Medium Earnings? There are a number of strategies for increasing your Medium Partner Program earnings: Write and publish on a daily basis — There is no substitute for hard work and consistency. If you want your new and old articles to gain views, you need to write and publish on a daily basis. Get your stories published in Medium publications. Although not a prerequisite for success, having your article published in a major Medium publication can be a game-changer. This is especially true for new writers. If you are struggling to get published in someone else’s publications, consider creating your own. Focus on building long-term success. I’ve had articles that initially only received minimal views, which have now grown to receiving 1,000+ views per day and over 150,000 total views!
Similarly, you will cultivate long-term followers by focusing on creating content that is consistently useful to readers. Keep yourself informed about Medium platform changes. I have a newsletter for Medium writers that covers Medium platform updates, and also includes useful writer tips and tricks. There are also a number of publications that cover Medium writing: Study what other top writers have done. Many top writers have created a number of valuable and free resources to help new Medium writers. I have created a list of 59 Medium writing tips to improve article performance, which many new writers have found useful. You can also utilize Medium Blogging: The Ultimate Guide to Writing on Medium, which contains answers to virtually any Medium question. This collection of resources should be enough to get you started writing on Medium. However, if you really want to go down the Medium rabbit hole, I have put together a collection of all my Medium articles. You can also check out my main website, www.bloggingguide.org, for more tips and tricks on blogging platforms (including Medium). Best Wishes!
https://medium.com/blogging-guide/how-much-money-can-you-make-writing-for-medium-a3cf0c9c7533
['Casey Botticello']
2020-10-13 03:36:14.732000+00:00
['Medium', 'Medium Partner Program', 'Writer', 'Writing', 'Writing Tips']
How to Tell if You Need a Coach or a Mentor
How to Tell if You Need a Coach or a Mentor Photo by Monica Melton on Unsplash You need to make sure you get value for money from a coach. They are really expensive, with some costing a small fortune. They can charge this because they know that when they do a good job they can transform not only an individual, but also a workplace. Too often though, people confuse the role of a coach with a mentor. This can be an expensive mistake. Through my job I’ve been a workplace coach for three years, and I’ve seen a lot of people not quite get what a coach is. Although not insurmountable, it leads to the odd false start. Similar, but not the same Coaches and mentors are different roles. I’ve seen adverts and offers and courses that highlight how good a coach is, based on their prior experience in a particular field. Something along the lines of “Hire Bob B as a workplace coach because he founded 23 $3 billion companies”. Well done Bob — but while his experience of creating companies makes him an interesting candidate to be a mentor, it doesn’t guarantee he’ll be a great coach. Coaches don’t need to have made millions in the same field as you. They don’t even need to have worked in the same area. A great coach will bring out the best in you no matter what their personal experience is. Mentors have a totally different set of skills to coaches. I’m not saying they’re not great — far from it, I’ve had some excellent mentors who have helped me in my career and helped my writing improve tenfold. Remember though, what you need from a mentor, and what you expect, should be different. So what’s the difference between a mentor and a coach? I define the difference with one simple example. When you ask the other person a question: Coaches help you realise how you should do it. Mentors tell you how they would do it. A good coach helps you solve your problems Coaching is difficult. I’ve done it before, and I’ve been coached before.
The key to a good coach is someone who can get you talking about you. Coaches offer tips, tricks and tools and lead you through your thinking to help you get to the answer yourself. A good coach comes to every conversation with a question toolkit. They’ll have done their homework and decided which questions will get the best reaction from you. The best coaches then apply these techniques to your discussions in such a way that you don’t even realise it’s happening. After your session, voila! You have an answer that you came up with. To evaluate the coaching, think about the type of thing that you discussed. Did the coach ask you questions that really made you think? Were they difficult to answer? How many times did you get ‘stuck’ on something before the coach helped you find the answer yourself? Good coaches should do all of the above. Coaching relationships are also usually short-term. They will help you make an impact on a specific problem or decision, but they won’t hang around afterwards. You can of course employ them for longer, but you should talk about different things at each session. Maybe in one session you want to talk about career development, while in another you discuss your approach to difficult conversations. When I was coaching, I always wanted to make an impact on someone’s career, make them think in a certain way, and then leave them to it. Coaches will guide you and question you and sometimes make you face up to things that you don’t want to think about. They will not, however, tell you what they would do in a certain situation. If they do, they’re slipping into the role of a mentor. And that’s what you need to keep an eye out for. A good mentor tells you how they would solve your problems One of my first coaching contracts was with someone who thought I was going to help him answer his difficult project delivery questions.
I was experienced in the subject area, and every question he came to me with was about specific situations that he wanted me to give him solutions for. It took a while, but I managed to steer the conversation to get him to think of his own solutions. I explained the difference between a coach and a mentor, and, although I was happy to be a mentor for him in the long term, I was not going to tell him what I would do in a given situation. Once he knew that, he started to approach the sessions in a different way. Rather than expecting answers from me, he started to think critically about how he approached problems and issues, both in our sessions and then back at his desk. That change in mentality was worth more to him in the long run than me telling him how to deal with any particular situation. Mentors are great for a long-term relationship. They will be able to share experiences that you can learn from (think about Bob from the beginning of the article, with his multi-billion-dollar businesses). But you should approach a mentor relationship in a similar way that you’d approach a book (although with a mentor you can ask the book questions). There will be points that you can take straight from your mentor and apply them to your work, there will be points that you have to adapt, and there will be points you’ll have to ignore. The answers come from them, and because of that, you need to be prepared to think critically about how they apply to you. Mentor or coach? It’s ultimately up to you whether you want a mentor or a coach. I enjoy being coached and coaching other people, as I think there’s more room for personal growth. But I’ve also seen benefits from a mentor relationship — and indeed mentors are more likely to increase opportunities for you in the long run (as long as they think you’re any good). But if you take away one thing from reading this article, know the difference between the two roles. Know how to tell when your coach starts to act like a mentor.
Because if you don’t you won’t really know what you’re going to get for your money or your time.
https://medium.com/the-innovation/how-to-tell-if-you-need-a-coach-or-a-mentor-64fde99c4a4b
['Phil Hurst']
2020-09-04 18:06:56.510000+00:00
['Coaching', 'Self Improvement', 'Productivity', 'Freelancing', 'Mentoring']
Metrics for the “Now” Normal
Wherever you are in the outbreak locally, the big-picture numbers look bleak at the moment. There is no such thing as a “safe county.” You may reasonably wonder if it is possible to extract a hopeful narrative from a pandemic still in exponential growth, but that is precisely the task before you. The basic elements you will use to construct a narrative of hope are simple. New cases and deaths are just one part of the story. Every day, it is also your responsibility to tell people in plain language: What we’ve done since yesterday. How are we better prepared to respond to the crisis today than we were at the last briefing? What we are doing today. What are we doing to help our neighbors right now? Where we are heading tomorrow. What are the challenges we anticipate and how will we rise to them? The things you are doing (and have done thus far) to respond are just a piece of the picture you want to draw for the public, however. You also want to remind people of what is not wrong. Build a “dashboard of hope” that points to the things that are running and functioning well in the “now” normal, for example: How many volunteers have provided meals for frontline workers? How many bags of groceries have been delivered to senior citizens? How many local businesses are hiring? How much money has your local COVID-19 relief fund raised? Etc. This crisis is not just medical, and neither is the response. Work to get an integrated view of what is happening in your city, point people towards opportunities to help, and share the good news along with the bad. Set the expectation that there won’t be a moment when your city simply switches back on, but rather that you will move forward together in incremental stages. Invest in a robust public health response that will help ensure a steadier and speedier path forward. We do not know what the “new normal” will look like, so we have to help our communities and one another live and lean into the “now” normal.
https://medium.com/covid-19-public-sector-resources/metrics-for-the-now-normal-dad0a76c2bfc
['Harvard Ash Center']
2020-04-07 16:03:27.699000+00:00
['Coronavirus', 'Crisis Management', 'Covid 19', 'Local Government', 'Cities']
Johnny Depp Is a “Good” Victim. Amber Heard Is Not.
Yesterday, the world found out that Johnny Depp lost his defamation suit against The Sun. According to the UK courts, The Sun was — and is — at liberty to call Depp a wife beater. Johnny’s battle isn’t over yet. He’s still got a libel suit to be tried in the US, and many people believe the actor will appeal the UK verdict. In the court of public opinion, however, Johnny Depp has clearly won. To a certain extent, I don’t believe that Johnny Depp will ever really fall from grace in the bulk of the public’s eye because, collectively, we are so much more invested in the man than his ex-wife Amber Heard. On some levels, I get that. At almost forty, I grew up with Johnny Depp on the big screen. Films like Edward Scissorhands, Benny & Joon, and Sleepy Hollow helped shape my early creative life. For decades, Depp’s performances have been so endearing that he was able to survive plenty of bad press for drug abuse, reckless spending, trashed hotel rooms, and assaults on paparazzi. Simply put, Johnny Depp has long represented our fantasy and fixation for bad boys with a heart of gold. Fans love to point to all of his celebrity support or mild manners in interviews — and who can blame them? Depp has long been typecast as a certain type of hero, often kooky and misunderstood. Of course, fans say there is no misunderstanding the current situation between Johnny Depp and ex-wife Amber Heard. She’s a far lesser-known and deeply disliked actress who’s become the subject of extreme internet scrutiny ever since she was caught on tape with her then-husband. The tapes are chilling. I don’t know anyone who can listen to Amber Heard berate Johnny and feel good about siding with her. Frankly, I don’t know too many people who side with her at all, and it’s led to an exceptionally tragic and inflamed public narrative. The narrative goes something like this: The judge was biased.
Depp was victimized by Amber Heard, victimized by the media, and now victimized by the UK courts—all because he’s a man. The courts tend to side with women over men, and the injustice against Depp only goes to show just how biased we are about believing men. And that’s just scratching the surface. Everywhere I go on social media, I am bombarded by people claiming to know that Depp is innocent. They claim it as if they know the man themselves, and as if knowing a man to be good to you (or anyone else) could ever actually be “proof” of his innocence. Personally, I find this narrative so strange, even though I know it’s unsurprising. In 2020, we’re still talking about the possible loss of a man’s career as the absolute worst thing that can happen. The “wrongly accused man” story is so damn popular that by and large, we are unable to even suggest culpability or accountability for Depp. Most folks feel far more content to call Amber Heard the monster and Depp the dogged down hero.
https://medium.com/honestly-yours/johnny-depp-is-a-good-victim-amber-heard-is-not-4b8d8171c5b
['Shannon Ashley']
2020-11-05 17:19:31.588000+00:00
['Women', 'Relationships', 'Life Lessons', 'Culture', 'Mental Health']
Let’s Create a Technical Indicator for Trading.
Technical indicators are all around us. Many are famous like the Relative Strength Index and the MACD while others are less known such as the Relative Vigor Index and the Keltner Channel. These indicators have been developed to aid in trading and sometimes they can be useful during certain market states. For example, the RSI works well when markets are ranging. Technical indicators are certainly not intended to be the protagonists of a profitable trading strategy. Our aim is to see whether we could think of an idea for a technical indicator and, if so, how we come up with its formula. The struggle doesn’t stop there; we must also back-test its effectiveness. After all, we can easily develop any formula, say we have an indicator, and then market it as the holy grail. We will try to compare our new indicator’s back-testing results with those of the RSI, hence giving us a relative view of our work. If you’d like to see more trading strategies relating to the RSI before you start, here’s an article that presents it from a different and interesting view: Brainstorming & Formulating the idea The first step in creating an indicator is to choose which type it will be. Is it a trend-following indicator? Maybe a contrarian one? Does it relate to timing or volatility? Will it be bounded or unlimited? To simplify our signal generation process, let’s say we will choose a contrarian indicator. This means that we will try to create an indicator that oscillates around recurring values and is either stationary or almost-stationary (although this term does not exist in statistics). One of my favourite methods is to simply start by taking differences of values. This is a huge leap towards stationarity and getting an idea on the magnitudes of change over time. But, to make things more interesting, we will not subtract the current value from the last value. As I am a fan of Fibonacci numbers, how about we take the current value (i.e.
today’s closing price or this hour’s closing price) minus the value 8 periods ago. So, the first step in this indicator is a simple spread that can be mathematically defined as follows with delta (Δ) as the spread: The next step can be a combination of a weighting adjustment or an addition of a volatility measure such as the Average True Range or the historical standard deviation. Let’s stick to the simple method and choose to divide our spread by the rolling 8-period standard deviation of the price. This gives a volatility adjustment with regards to the momentum force we’re trying to measure. Hence, we will compute a rolling standard deviation of the closing price; this will serve as the denominator in our formula. Remember, we said that we will divide the spread by the rolling standard-deviation. Let’s update our mathematical formula. Knowing that the equation for the standard deviation is as follows: We can consider X as the result we have so far (the indicator that is being built). The result is the spread divided by the standard deviation as represented below: One last thing to do now is to choose whether to smooth out our values or not. Sometimes, we can get choppy and extreme values from certain calculations. Luckily, we can smooth those values using moving averages. As we want to be consistent, how about we make a rolling 8-period average of what we have so far? This means we will simply calculate the moving average of X. For more about moving averages, consider this article that shows how to code them: Now, we can say that we have an indicator ready to be visualized, interpreted, and back-tested. Before we do that, let’s see how we can code this indicator in python assuming we have an OHLC array.
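Summarizing the construction before coding it (a reconstruction of the formula images, which did not survive in this copy; C denotes the closing price, the standard deviation is taken over the last 9 closes, and the final smoothing is an 8-period average):

```latex
\Delta_t = C_t - C_{t-8}
\qquad
\sigma_t = \sqrt{\frac{1}{9}\sum_{i=t-8}^{t}\left(C_i - \bar{C}_t\right)^2}
\qquad
X_t = \frac{\Delta_t}{\sigma_t}
\qquad
\mathrm{Indicator}_t = \frac{1}{8}\sum_{i=t-7}^{t} X_i
```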
for i in range(len(Asset)):

    # Calculating the spread
    Asset[i, 4] = Asset[i, 3] - Asset[i - 8, 3]

    # Calculating the Standard Deviation
    Asset[i, 5] = Asset[i - 8:i + 1, 3].std()

    # Volatility Adjustment of the spread
    Asset[i, 6] = Asset[i, 4] / Asset[i, 5]

    # Smoothing out and getting the indicator's values
    Asset[i, 7] = Asset[i - 7:i + 1, 6].mean()

Visualizing the indicator

Visual interpretation is one of the first key elements of a good indicator. Below is our indicator versus a number of FX pairs.
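The same four steps can also be expressed without an explicit loop. Below is a sketch using pandas rolling windows; the function name and the assumption that we work directly on a Series of closing prices are mine, not the article's:

```python
import pandas as pd

def volatility_adjusted_momentum(close: pd.Series, lookback: int = 8) -> pd.Series:
    """8-period spread, scaled by the rolling std, then smoothed by an 8-period mean."""
    spread = close - close.shift(lookback)         # the spread (delta)
    vol = close.rolling(lookback + 1).std(ddof=0)  # rolling population std, as numpy's .std()
    adjusted = spread / vol                        # volatility adjustment
    return adjusted.rolling(lookback).mean()       # smoothing

# Toy example: a steadily rising price series
prices = pd.Series(range(100), dtype=float) * 0.5 + 100.0
indicator = volatility_adjusted_momentum(prices)
```

Note the ddof=0: numpy's .std() used in the loop version is a population standard deviation, while pandas defaults to ddof=1, so the flag keeps the two versions numerically identical.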
https://towardsdatascience.com/lets-create-a-technical-indicator-for-trading-83828343249d
['Sofien Kaabar']
2020-09-20 17:42:43.719000+00:00
['Machine Learning', 'Data Science', 'Writing', 'Finance', 'Trading']
K-Means Clustering and PCA to categorize music by similar audio features
K-Means Clustering and PCA to categorize music by similar audio features An unsupervised machine learning project to organize my music A year ago, life was much simpler. It was a bustling Tuesday morning in early October, and I was riding the metro to my Artificial Neural Network class in downtown Copenhagen. I was browsing my Spotify library for a playlist to shuffle for the duration of my commute when I bumped into my friend Liddy. I asked for her input, and she told me something which actually kind of changed my life. She said, “Oh Sejal, I am far too indecisive to pick a playlist based on my mood. I actually create a new playlist and add to it on the first day of each month.” Why music genres are outdated With the increased diversity in music that we have seen on streaming platforms such as Spotify, Apple Music, and Soundcloud in the past decade, the lines separating genres have become even more blurred than they were previously. Genre labels are broad umbrella terms that are used to describe music that vary greatly in their characteristics. If someone asks you what kind of music you like, responding with “Rock” or “Indie” does not really say much without namedropping a few artists. Genres are always evolving over time. If someone says they are a fan of pop music, how do you know if they are referring to Michael Jackson or Justin Bieber? They are often socially driven with little to do with the actual characteristics of the music. Fortunately, there has been a movement towards labeling music solely based on musical characteristics or attributes. However, organizing music based on these audio features is still a complicated task, even for a human. Motivation for my machine learning project Ever since Liddy inspired me with the idea to organize my music by time period, I have been hooked. It helps me to embrace the chaos of my diverse and far-ranging music taste while still being able to compartmentalize my music based on certain phases of life. 
The issue, though, is that when I click “Shuffle” on my 2020 playlist of 449 songs, I end up skipping 9/10 songs because I can never seem to find a song that fits the mood. So I set out to find a solution to my problem using a machine learning technique called k-means clustering and Principal Component Analysis (PCA), a dimensionality reduction technique. Data Acquisition via Spotify API The Spotify Developer API allows you to get audio features for a track. These features are described in more detail below:

acousticness: [0–1] Confidence measure of whether the track is acoustic.
danceability: [0–1] Describes how suitable a track is for dancing based on musical attributes including tempo, rhythm, stability, beat strength, and overall regularity.
energy: [0–1] Perceptual measure of intensity and activity. Energetic tracks feel fast, loud, and noisy (e.g. death metal: high energy, Bach prelude: low energy).
instrumentalness: [0–1] Predicts whether a track contains no vocals (values above 0.5 represent instrumental tracks whereas rap songs would have a score close to 0).
liveness: [0–1] Detects the presence of an audience in the recording.
loudness: [-60–0 dB] The average volume across an entire track.
speechiness: [0–1] Detects the presence of spoken words in a track (values above 0.66 describe tracks that are probably made entirely of spoken words, 0.33–0.66 describe tracks that may contain both music and speech, and values below 0.33 most likely represent music and other non-speech-like tracks).
valence: [0–1] Describes the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
tempo: [0–300 BPM] The speed or pace of a given piece, as derived from the estimated average beat duration.

I use Spotipy, which is a Python wrapper for Spotify’s Developer API. First, we must create a token with our API credentials. screenshot of my giant Spotify playlist (the data source for this project) I then extracted metadata (name, artist name, and URI) for each track in the playlist and acquired the audio feature data described above. A preview of the raw data Next, we can form our training data by excluding columns in the DataFrame that are not audio features. We also have to do one more critical step: standardization. Standardization is the process of putting different features on the same scale, with each scaled feature having a mean of 0 and a standard deviation of 1. This is important because the model is not familiar with the context of the data.
If we do not standardize our X_train data, the model would place a much larger weight on tempo and loudness, since those variables vary by much more than the variables that are distributed in the range from 0 to 1. Standardization allows for all features to be treated equally by the model. We can accomplish this step by using sklearn’s StandardScaler import.

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_std = scaler.fit_transform(df_X)

Dimensionality Reduction

What is dimensionality reduction? Well, as we can see from the image above, it is a process that involves using linear algebraic operations to transform n-dimensional data to (n-k)-dimensional data. In this example, we are just reducing 3-dimensional data into 2-dimensional data, so n equals 3 and k equals 1. As you might imagine, some information is lost in this transformation, but we still get a fairly decent approximation with the tremendous benefit of being able to visualize the data. Moreover, we can also effectively reduce the complexity of the problem by reducing the number of variables. Principal Component Analysis (PCA) is a dimensionality reduction approach which attempts to find the best possible subspace that explains most of the variance in the data. It works by deriving components from the original scaled features. From these components, the two most significant ones are typically plotted, depicting the simplified data space. After fitting a PCA object to the standardized matrix, we can see how much of the variance is explained by each of the nine features.

from sklearn.decomposition import PCA

pca = PCA()
pca.fit(X_std)
evr = pca.explained_variance_ratio_
print(evr)

To inform our decision of how many features to use for our k-means clustering algorithm, it is helpful to make a cumulative variance plot using the evr.cumsum() function and matplotlib.
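A sketch of that cumulative variance plot; the evr values below are illustrative stand-ins, not the article's actual PCA output:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Stand-in for pca.explained_variance_ratio_ over the nine audio features
evr = np.array([0.25, 0.18, 0.14, 0.11, 0.09, 0.08, 0.06, 0.05, 0.04])

cumulative = evr.cumsum()
plt.figure(figsize=(8, 4))
plt.plot(range(1, len(cumulative) + 1), cumulative, marker="o")
plt.axhline(0.8, linestyle="--", label="80% of variance")
plt.xlabel("Number of principal components")
plt.ylabel("Cumulative explained variance")
plt.legend()
plt.savefig("cumulative_variance.png")
```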
Cumulative Variance Plot after PCA From this plot, we can observe that each of the principal components explains a pretty considerable amount of variance. However, we do not need to keep all of these components. In general, it is a good rule of thumb to preserve around 80% of the variance. Therefore, in this instance, we can select the 6 most important principal components to incorporate in the k-means algorithm. This can be accomplished by instantiating a new PCA object with the n_components parameter set to 6. Having finished our PCA, we now have components which explain most of the variance in the data. From this point on, it is going to be very difficult to interpret meaning when analyzing the numeric differences in tracks because the vectorized audio features have been transformed into a new subspace. However, after implementing k-means Clustering, we will be able to inspect the raw data, and we can even visualize our clusters of similar songs on the same 2D-plane. In order to implement k-means clustering, we must select a number of clusters, k, which distinctly splits the data. Given that we do not have a desired amount of groupings for the huge playlist that we would like to split, and we want this code to be reproducible, we must use an algorithmic approach to determine the optimal value for k. There are actually numerous algorithmic approaches, including but not limited to the elbow method, the silhouette method, and the gap statistic. Let’s take a closer look into the elbow method, which is arguably the most popular technique. The basic principle of k-means clustering is to define clusters such that the total intra-cluster variation is minimized. The within-cluster sum of squares (WCSS) is a metric which can quantify this partitioning effect. It essentially measures the compactness of the clustering.
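The procedure described next boils down to fitting k-means for k = 1…20 and recording each model's inertia_ (the WCSS). A minimal sketch, with a random matrix standing in for the PCA-transformed data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
scores_pca = rng.normal(size=(120, 6))  # stand-in for the PCA-transformed matrix

wcss = []
for k in range(1, 21):
    model = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=42)
    model.fit(scores_pca)
    wcss.append(model.inertia_)  # within-cluster sum of squares for this k
```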
The optimal number of clusters can be found by executing a simple procedure:

1. Implement the clustering algorithm for varying values of k, ranging from 1 to 20 clusters.
2. For each k, compute the within-cluster sum of squares (WCSS) by storing the inertia value in a list after the model has been fitted.
3. Plot the WCSS curve according to the number of clusters k.
4. Look for a kink or elbow in the WCSS graph. Usually, the part of the graph before the elbow would be steeply declining, while the part after it is much smoother.

[NOTE: there are ways to locate the elbow (a.k.a. “knee”) point programmatically, including an open source Python library called kneed or a visualization library called Yellowbrick]

Running the code snippet above yields the WCSS curve shown below. The KneeLocator method has determined that 7 is the ideal number of clusters or distinct groupings to be used in order to separate the 449 tracks in my absurdly lengthy playlist. WCSS curve used for selecting the optimal number of clusters Now, all that is left to do is to implement k-means clustering once and for all. With the help of sklearn, we can obtain the cluster labels for each track in just 3 lines of code.

kmeans_pca = KMeans(n_clusters=n_clusters, init='k-means++', random_state=42)
kmeans_pca.fit(scores_pca)
df['Cluster'] = kmeans_pca.labels_

Cluster labels have been assigned in the rightmost column of the DataFrame The cluster labels in the rightmost column of the DataFrame screenshot above are the “results” of this ML technique. Okay… this was a bit anticlimactic. What good do these cluster labels do? Well, let’s investigate them further.
Analysis and Visualization Analyzing the results of PCA and K-Means Clustering While k-means clustering is not a supervised machine learning technique (meaning we cannot verify whether the results are accurate or not), we can visualize the clusters on a 2D plane to ensure that there is some sensical separation of data based on the sources of signals that are encapsulated by the components. In the figure above, it is clear that Cluster 5 is most similar to Clusters 1 and 3 and that Cluster 4 is quite loosely defined, overlapping with a few other clusters. Despite some noise, there are clear groupings of tracks, as indicated by the concentrated areas of green, red, orange, and blue data points. Additionally, this is only a 2D visualization of a 6-dimensional vector space, so more analysis is needed in order to determine the efficacy of the k-means algorithm. Since meaning cannot be inferred from the transformed components, let’s take a look at the raw values for the audio features of the tracks after they have been clustered. A radar chart can be really powerful in this type of analysis because it allows us to make some quick observations about each cluster. In the figure below, a radar trace has been plotted for the average audio feature values in each cluster, after normalizing the entire dataframe. Acousticness is a Spotify-defined variable between 0 and 1 while tempo can be in the 100s. Normalization is important because it effectively scales all variables to have values between 0 and 1, making the following visualization far more meaningful. Radar chart comparison of average audio features for each cluster After a quick glance, it is evident that Cluster 2 is most acoustic-sounding, Cluster 4 consists of live-sounding tracks, Cluster 5 contains the most verbose tracks, and Cluster 6 is characterized by more instrumental tracks.
Features like energy, danceability, valence, tempo, and loudness did not contain strong enough sources of signal on which to split the data. That being said, some combination of high danceability, tempo, and energy may have characterized tracks in Cluster 1. The shapes of these traces are quite telling of which types of tracks we can expect to see in each cluster. The code above allows us to create new Spotify playlists based on the cluster groupings of tracks after PCA and k-means. As we initially observed, Cluster 5 is characterized by tracks having high levels of “speechiness”. In other words, we can expect to find Rap and R&B tracks in this grouping. As we can observe above, Burna Boy, blackbear, 6LACK, Playboi Carti, Lil Uzi Vert, Kota the Friend, A Boogie Wit da Hoodie, and Polo G are some of the artists included in this subset, which validates the conclusion suggested by the data. Cluster 2 contains more instrumental music, as suggested by the radar chart. There are some tracks by the xx, Lauv, Henry Green, and other lesser known artists. Many of the tracks in this cluster have a very synth-y feel and not too many words. Finally, there is one last thing we can do to analyze the findings. We can make some bar charts, plotting a particular feature for each track in a given cluster in order to see if this feature is, indeed, characteristically high with respect to the average feature value in the entire dataset. Looking at a few select clusters (2, 5, and 6) and features of interest (acousticness, speechiness, and instrumentalness), a trend is observable in the subplots along the diagonal. In the top left subplot, the tracks in Cluster 2 have mostly above average scores for acousticness; in the middle subplot, the tracks in Cluster 5 all have above average speechiness scores; in the bottom right subplot, Cluster 6 is characterized by extremely high instrumentalness scores, relative to the average for the full dataset.
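The per-cluster averages behind the radar and bar charts come from min-max normalizing each feature and grouping by the cluster label. A toy sketch (the values are made up; only the column names follow the article):

```python
import pandas as pd

# Toy stand-in for the clustered audio-feature DataFrame
df = pd.DataFrame({
    "acousticness": [0.9, 0.8, 0.1, 0.2],
    "speechiness": [0.05, 0.10, 0.60, 0.70],
    "tempo": [80.0, 90.0, 140.0, 150.0],
    "Cluster": [2, 2, 5, 5],
})

features = ["acousticness", "speechiness", "tempo"]
# Min-max normalization puts every feature on a 0-to-1 scale
norm = (df[features] - df[features].min()) / (df[features].max() - df[features].min())
norm["Cluster"] = df["Cluster"]

# One averaged profile per cluster: the data behind each radar trace
profiles = norm.groupby("Cluster").mean()
```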
Conclusions This was a fun and interesting ML project which served a practical purpose for me and potentially others. Hopefully people can use this audio-based clustering methodology and save themselves a few clicks of the “skip” button next time they wish to listen to music of a particular mood or vibe. I believe it can be especially useful for people who curate monthly or yearly playlists which lack a cohesive theme. Additionally, it can be useful for large collaborative playlists which, at times, can feel a bit all over the place. Thank you very much for reading! All of the source code is accessible here. If you would like to send me any feedback or inspiration, please contact me at [email protected] or through my website. I love connecting with people of similar interests! Also, stay tuned for a potential extension of this work! In the meantime, I will be busy thinking of better names for my new playlists because “cluster1” doesn’t have a great ring to it. References
https://towardsdatascience.com/k-means-clustering-and-pca-to-categorize-music-by-similar-audio-features-df09c93e8b64
['Sejal Dua']
2020-12-29 13:15:15.427000+00:00
['Music', 'Unsupervised Learning', 'Spotify', 'Machine Learning', 'K Means Clustering']
Building a REST Client with Spring Cloud OpenFeign and Netflix Ribbon
Building a REST Client with Spring Cloud OpenFeign and Netflix Ribbon Learn how to build a declarative and highly readable REST client without writing boilerplate code to invoke services over HTTP Photo by Jon Tyson on Unsplash Making HTTP requests between services in Java is quite straightforward. With a number of well-known and open-source HTTP clients available, such as OkHttp and RestTemplate in Spring, choosing a suitable candidate doesn’t seem to be difficult. The real problem is in what lies ahead. With the increasing number of distributed services in the cloud, where servers come and go, service endpoints are dynamic and unknown ahead of time. Our REST client needs to integrate with a service registry to look up a service endpoint before making a request. We also need to handle request-and-response serialization and implement load balancing to distribute loads between servers. And then fault tolerance is needed to enable resilience. All of these are cross-cutting concerns in a distributed system. This is where Spring Cloud OpenFeign steps in. Spring Cloud OpenFeign is not just an HTTP client but a solution to resolve problems surrounding modern REST clients. Spring Cloud OpenFeign provides OpenFeign integrations for Spring Boot through autoconfiguration and binding to the Spring environment. OpenFeign, originally known as Feign and sponsored by Netflix, is designed to allow developers to use a declarative way to build HTTP clients by means of creating annotated interfaces without writing any boilerplate code. Spring Cloud OpenFeign uses Ribbon to provide client-side load balancing and also to integrate nicely with other cloud services, such as Eureka for service discovery and Hystrix for fault tolerance. These are all provided out of the box without writing any code. In this article, you’ll learn how to use Spring Cloud OpenFeign to build a declarative and highly readable REST client to invoke services over HTTP.
In the later part, you’ll also learn to configure Ribbon with endpoints for load balancing in OpenFeign. And finally, we’ll enable the Eureka client in your Spring REST service to integrate with the Eureka Server.
https://medium.com/better-programming/building-a-rest-client-with-spring-cloud-openfeign-and-netflix-ribbon-44734c7dfaa7
['Andy Lian']
2020-11-04 14:05:47.108000+00:00
['Spring Boot', 'API', 'Spring Cloud', 'Programming', 'Microservices']
5 Free Books To Take Your Data Science Skills to The Next Level
5 Free Books To Take Your Data Science Skills to The Next Level Books to Help Level Up Your Data Science Skills As things stand, I am nowhere near where I aspire to reach as a Data Scientist. In my journey so far, I have met many helpful people and come across various useful resources. Whilst I pondered on where I have come from and where I am now, I instantly remembered the 4 books that had been revolutionary for my progress as a Data Scientist, which I will share with you today. Updated 06/10/2020: Hands-On Machine Learning With Scikit-Learn And Tensorflow is copyrighted, so I’ve added the link to purchase (sincere apologies to the Author). This means there are only 4 free books as things stand. I’ve been reading to see if there is something that can fill in; if you find something, please send your suggestions so I may read through (It must be free please).
https://towardsdatascience.com/5-free-books-to-take-your-data-science-skills-to-the-next-level-a2026c8cad71
['Kurtis Pykes']
2020-10-06 07:36:53.128000+00:00
['Deep Learning', 'Machine Learning', 'Towards Data Science', 'Data Science', 'Artificial Intelligence']
I Was 47 the First Time I Swallowed
Living My Best Double Life I Was 47 the First Time I Swallowed And he tasted good Photo by engin akyurt on Unsplash He started thrusting back against my bobbing head; his breath growing ragged. He was close. “I’m going to cum,” he grunted out, warning me to move. And I had a problem. We were in the backseat of my SUV in a Walmart parking lot near midnight. “Go ahead, it’s ok,” I told him between downstrokes. I’m sure he liked that, but I had no idea what I was going to do next. Hubby loves his Audi, so spitting a load on the carpet wasn’t an option. I couldn’t clean it up well enough before he saw it, or explain how it got there. “Ah….” His cock pulsated as he exhaled forceful breaths in time with the waves of his orgasm. And he pulsated. And pulsated. Even as my mouth filled with his semen, I wasn’t sure if I was going to hold it. If I could hold it long enough to open the car door I’d be good, but I was filling up fast. No one other than hubby’s came in my mouth, but I’d never swallowed. My gagging always stopped that. And I wasn’t a slut! Thankfully hubby didn’t often come in my mouth. When he did, spitting him out was easy because I was the one cleaning up, so I’d spit on the easiest thing to clean. Even so, it’d been over ten years since this was even a problem. And pulsated. Fuck. And pulsated. Where was he hiding it? And pulsated. I was running out of options — fast. I couldn’t keep sucking and hold his cum in my mouth at the same time. It was already starting to drain out the corners of my lips. Fuck. The door was behind me, and there was no way I could turn fast enough without ruining his moment or dropping it down the front of my blouse. Jesus. Without another thought, I lifted my chin, tilted my head, and gulped him down in two quick swallows. Whoa. Maybe I am a slut, I thought, smiling. “Did you like that?” I looked up. “God, yes! Thank you,” he said. “Your mouth is so good.” Running my tongue around my teeth, clearing the rest, I swallowed again. 
Considering the texture and taste, now that I had a moment to think, egg whites and a minty aftertaste were the first things that came to mind. Not bad. Thank god. Wait. Do I like it? Hmmm, I did. Fixing my lipstick in the makeup mirror I laughed, “There! Better. I can’t go home looking like I’ve just sucked a cock, now can I?”
https://medium.com/sex-and-satire/i-was-47-the-first-time-i-swallowed-6a312d3bd751
['Teresa J Conway']
2020-10-13 06:54:48.134000+00:00
['Sexuality', 'Cheating', 'Relationships', 'Dating Advice', 'Self-awareness']
Stock Price Prediction with PyTorch
Time series problem

Time series forecasting is an intriguing area of Machine Learning that requires attention and can be highly profitable if allied to other complex topics such as stock price prediction. Time series forecasting is the application of a model to predict future values based on previously observed values. By definition, a time series is a series of data points indexed in time order. This type of problem is important because there is a variety of prediction problems that involve a time component, and finding the data/time relationship is key to the analysis (e.g. weather forecasting and earthquake prediction). However, these problems are sometimes neglected because modeling this time component relationship is not as trivial as it may sound. Stock market prediction is the act of trying to determine the future value of a company stock. The successful prediction of a stock’s future price could yield a significant profit, and this topic is within the scope of time series problems. Among the several ways developed over the years to accurately predict the complex and volatile variation of stock prices, neural networks, more specifically RNNs, have shown significant application in the field. Here we are going to build two different models of RNNs — LSTM and GRU — with PyTorch to predict Amazon’s stock market price and compare their performance in terms of time and efficiency.

Recurrent Neural Network (RNN)

A recurrent neural network (RNN) is a type of artificial neural network designed to recognize sequential patterns in data in order to predict what comes next. This architecture is especially powerful because of its node connections, which allow it to exhibit temporal dynamic behavior. Another important feature of this architecture is the use of feedback loops to process a sequence. Such a characteristic allows information to persist, often described as a memory. This behavior makes RNNs great for Natural Language Processing (NLP) and time series problems.
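As a toy illustration of the recurrence just described, a vanilla RNN cell with a scalar hidden state simply mixes the current input with the previous hidden state through a tanh. The weights below are made-up numbers for the sketch, not learned parameters:

```python
import math

def rnn_step(x_t, h_prev, w_xh, w_hh, b):
    # One vanilla-RNN step: h_t = tanh(w_xh * x_t + w_hh * h_prev + b).
    # The previous hidden state h_prev is the "memory" carried forward.
    return math.tanh(w_xh * x_t + w_hh * h_prev + b)

h = 0.0  # initial hidden state
for x in [0.5, -0.1, 0.3]:  # a tiny input sequence
    h = rnn_step(x, h, w_xh=0.8, w_hh=0.5, b=0.0)
```

LSTM and GRU cells replace this single tanh update with gated updates, which is what lets them keep information over longer spans.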
Based on this structure, architectures called Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) were developed. An LSTM unit is composed of a cell, an input gate, an output gate, and a forget gate. The cell remembers values over arbitrary time intervals, and the three gates regulate the flow of information into and out of the cell. On the other hand, a GRU has fewer parameters than an LSTM, lacking an output gate. Both structures can address the “short-term memory” issue plaguing vanilla RNNs and effectively retain long-term dependencies in sequential data. Although LSTM is currently more popular, the GRU is bound to eventually outshine it due to a superior speed while achieving similar accuracy and effectiveness. We are going to see that we have a similar outcome here, and the GRU model also performs better in this scenario.

Model implementation

The dataset contains historical stock prices (last 12 years) of 29 companies, but I chose the Amazon data because I thought it could be interesting. We are going to predict the Close price of the stock, and the following is the data behavior over the years.

Stock behavior

We slice the data frame to get the column we want and normalize the data.

from sklearn.preprocessing import MinMaxScaler

price = data[['Close']]
scaler = MinMaxScaler(feature_range=(-1, 1))
price['Close'] = scaler.fit_transform(price['Close'].values.reshape(-1, 1))

Now we split the data into train and test sets. Before doing so, we must define the window width of the analysis. The use of prior time steps to predict the next time step is called the sliding window method.
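As a tiny illustration of the sliding-window method just described, each window holds a fixed number of consecutive points, and the last element of a window is the value to predict (the numbers here are made up, not real prices):

```python
def make_windows(series, lookback):
    # Build overlapping windows of length `lookback`;
    # each window's last element is the prediction target.
    return [series[i:i + lookback] for i in range(len(series) - lookback)]

prices = [10, 11, 12, 13, 14, 15]
print(make_windows(prices, lookback=3))
# [[10, 11, 12], [11, 12, 13], [12, 13, 14]]
```

This is exactly the loop the split_data function below performs, before the result is partitioned into train and test sets.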
import numpy as np

def split_data(stock, lookback):
    data_raw = stock.to_numpy()  # convert to numpy array
    data = []

    # create all possible sequences of length lookback
    for index in range(len(data_raw) - lookback):
        data.append(data_raw[index: index + lookback])

    data = np.array(data)
    test_set_size = int(np.round(0.2 * data.shape[0]))
    train_set_size = data.shape[0] - test_set_size

    x_train = data[:train_set_size, :-1, :]
    y_train = data[:train_set_size, -1, :]
    x_test = data[train_set_size:, :-1]
    y_test = data[train_set_size:, -1, :]

    return [x_train, y_train, x_test, y_test]

lookback = 20  # choose sequence length
x_train, y_train, x_test, y_test = split_data(price, lookback)

Then we transform them into tensors, which is the basic structure for building a PyTorch model.

import torch
import torch.nn as nn

x_train = torch.from_numpy(x_train).type(torch.Tensor)
x_test = torch.from_numpy(x_test).type(torch.Tensor)
y_train_lstm = torch.from_numpy(y_train).type(torch.Tensor)
y_test_lstm = torch.from_numpy(y_test).type(torch.Tensor)
y_train_gru = torch.from_numpy(y_train).type(torch.Tensor)
y_test_gru = torch.from_numpy(y_test).type(torch.Tensor)

We define some common values for both models regarding the layers.

input_dim = 1
hidden_dim = 32
num_layers = 2
output_dim = 1
num_epochs = 100

LSTM

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(LSTM, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
        out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
        out = self.fc(out[:, -1, :])
        return out

We create the model, set the criterion, and the optimiser.
model = LSTM(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers)
criterion = torch.nn.MSELoss(reduction='mean')
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)

Finally, we train the model over 100 epochs.

import time

hist = np.zeros(num_epochs)
start_time = time.time()
lstm = []

for t in range(num_epochs):
    y_train_pred = model(x_train)
    loss = criterion(y_train_pred, y_train_lstm)
    print("Epoch ", t, "MSE: ", loss.item())
    hist[t] = loss.item()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

training_time = time.time() - start_time
print("Training time: {}".format(training_time))

LSTM training

Having finished the training, we can apply the prediction. Interactive graph — https://chart-studio.plotly.com/~rodolfo_saldanha/11 The model behaves well with the training set, but it has a poor performance with the test set. The model is probably overfitting, especially taking into consideration that the loss is minimal after the 40th epoch.

GRU

The code for the GRU model implementation is very similar.

class GRU(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(GRU, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.gru = nn.GRU(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
        out, hn = self.gru(x, h0.detach())
        out = self.fc(out[:, -1, :])
        return out

We create the model and set the parameters in the same way.

model = GRU(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers)
criterion = torch.nn.MSELoss(reduction='mean')
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)

The training step is exactly the same, and the results we achieve are somewhat similar as well.
GRU training

However, when it comes to the prediction, the GRU model is clearly more accurate, as we can observe in the following graph. Interactive graph — https://chart-studio.plotly.com/~rodolfo_saldanha/14/#/

Conclusion

Both models perform well in the training phase but stagnate around the 40th epoch, which means that the 100 epochs defined beforehand were not needed. As expected, the GRU neural network outperformed the LSTM both in precision, since it reached a lower mean squared error (in training and, most importantly, in the test set), and in speed, given that the GRU took 5 s less than the LSTM to finish training.

Table of results

Code related questions — https://www.kaggle.com/rodsaldanha/stock-prediction-pytorch
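For reference, the mean squared error used to compare the two models reduces to a simple average of squared residuals. A minimal pure-Python sketch, with made-up numbers in place of real predictions:

```python
def mse(y_true, y_pred):
    # Average of the squared differences between targets and predictions;
    # this is what torch.nn.MSELoss(reduction='mean') computes.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(round(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]), 4))  # 0.4167
```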
https://medium.com/swlh/stock-price-prediction-with-pytorch-37f52ae84632
['Rodolfo Saldanha']
2020-06-03 15:21:28.413000+00:00
['Machine Learning', 'Neural Networks', 'Artificial Intelligence', 'Deep Learning', 'Data Science']
If You Are An Editor, Please Be Reasonable When Asking for Changes
If You Are An Editor, Please Be Reasonable When Asking for Changes A good editor should be able to get the changes they want in one edit. Credit: Fredrik Strömberg on Wikimedia Commons I recently submitted an article to a publication. The editor wrote back, effusively expressing how much they loved the piece, praising it to the skies, saying it was one of their favorites of the past year, how thrilled they were that I had chosen their magazine to submit it to and how honored they were to be able to publish it. I was just as thrilled to receive this feedback, and despite waiting several months to hear back (not long for a good publication), I couldn’t have been happier. They just wanted one simple change to the title. I made it in minutes and sent it back. They sent it back with a vague reply about SEO (not important for fiction) and how they were an expert in the area so I needed to make it “SEO friendly.” I asked for more feedback since I had no idea what they wanted, and they responded that I should add long-tailed keywords and put them at the beginning of the title. Although this didn’t make sense for a short story, I did my best to comply. Next they wrote back saying the title was now too long, which was a given since they had wanted long-tailed keywords in it. I trimmed it. Still too long. I trimmed it further. Not SEO friendly anymore. When all was said and done, for a 5,000-word story, they wanted zero edits for any of the text but I went through no fewer than a dozen edits for the title. I have had similar situations occur with editors for both Medium publications and those outside the platform. While I think that I respond well to feedback and requests for edits, I also know, as an editor myself, that there is a reasonable way to ask for them and ways that will alienate writers. At the same time, I know that sometimes, after several requests from an editor to edit the same piece, I can become annoyed, which I may communicate to the editor.
I have tried to work on this and have determined different ways that writers can improve the way they respond to editors and how editors respond to them in turn, which can make the relationship more likely to help them in their writing goals. Strategies Editors Can Use to Improve the Way They Work With Writers From my own experience with editors and as an editor, there are several things that can help someone work well with most writers. Determine What Your Expectations Are This is often a step that is skipped by editors who just think that if they start reading they’ll know what they want. But from what I’ve experienced with my own editing and what other editors have shared with me, we sometimes don’t know what we expect if we don’t spend time thinking about it and establishing a model to work from. For example, I’ve seen on Medium that when writers start new publications, they don’t always have a good idea of what kinds of stories they want to publish. Or they have an ideal in their mind, but when they don’t get a lot of submissions that fit those expectations, they accept things that don’t really fit the description. With other articles they might ask the writer to change it to fit the criteria better, while the writer realizes that such changes weren’t required for other articles. This can make writers less willing to submit to the publication in the future. Make Sure to Read Carefully It’s important to read a story carefully at least twice, taking notes about what you want to have changed. Then read through your notes and determine that you have everything down that you want to change. It will not engender good faith if you don’t see things you have problems with until after the writer has already revised the article and come back with further changes. Make Appropriate Requests Make sure that what you are expecting from the writer isn’t unrealistic or inconsistent with other articles in the publication.
If you are trying to increase the quality of what’s published with new requirements, for example, longer articles or including at least one scholarly reference, then it’s important to make that clear in the guidelines. Be careful about requiring SEO strategies as not all Medium writers are well versed in these, and given the captive audience on Medium it isn’t necessarily critical for article engagement. When a writer has little or no knowledge about SEO, giving instructions that simply say to optimize the article or the title can lead to annoyance or the need for numerous edits as they try to figure out what it is they need to do. If SEO is a required part of your expectations for articles, again, make sure this is spelled out in the submission guidelines. Make Sure You Are Clear and Concrete In What You Want the Writer to Do It’s important that your instructions to the writer provide concrete things that you want changed. I think this is one of the things editors often have the most trouble with. Sometimes editors forget that writers don’t necessarily know everything that they do and so don’t think to explain things as carefully as they should. So when an article has been incompletely revised, from the editor’s point of view, the writer didn’t do what was asked, while from the writer’s point of view, the editor failed to ask for everything they wanted. Strategies Writers Can Use to Improve the Way They Work With Editors There are also strategies that writers can use to make sure they only need to revise an article once. Check That You Understand What is Wanted Often the best way to prevent misunderstandings in terms of what an editor expects is to restate their request. Write a quick note back saying, “So, if I understand you correctly what you’d like for me to do is . . . “ That will give them the chance to say yes or to correct your misconception before any additional work has been done. It will also help them understand how their requests to authors are perceived.
Ask for Clarification If you aren’t entirely sure what an editor is asking for, ask them to clarify before you start revising. We may worry that an editor will think we aren’t experienced or don’t have basic knowledge about writing if we say we don’t fully understand what they want. However, I can tell you from personal experience that most editors much prefer you to clarify requested revisions rather than waste time changing things in a way they didn’t desire and then having to read through the piece again only to need to ask for further changes. When Annoyed, Wait Before Sending a Response I think that this is good advice any time you feel upset with someone. When we get annoyed, it’s not unusual that we may lash out without thinking about it first. Often we later find that we regret our impulsive response yet can’t take it back. When it’s someone we’re close to personally, usually an apology can smooth things over. But in a professional relationship this isn’t the case, and lashing out can hurt your relationship with a colleague and in some cases your professional reputation overall. It’s natural to feel protective about your writing and to want others to view it positively. We look for validation from other writers and especially from editors who are the gatekeepers to what gets published in publications. When changes are requested we can take that personally, as if the editor is suggesting there’s something wrong with our work or that it’s weak in some way. When you feel annoyed by something an editor says or requests, instead of dashing off an angry response and hitting send, wait until you cool down and re-evaluate your reply. Once you are calmer you will more likely be able to view what the editor has said in a constructive manner and make the needed changes.
If you still disagree with what they want, you will also be able to compose a calm reply that expresses your position professionally. Thank the Editor For Their Help I think one of the easiest things a writer can do to let an editor know their effort is appreciated is to simply say thank you. Letting them know that you value their help in strengthening your piece can establish a lot of good will and will end the interaction on a positive note regardless of what came before.
https://medium.com/the-partnered-pen/if-you-are-an-editor-please-be-reasonable-when-asking-for-changes-87569083b8d8
['Natalie Frank']
2020-01-30 02:39:39.322000+00:00
['Relationships', 'Success', 'Editing', 'Writing', 'Communication']
How to Quickly Authenticate Users With Ruby on Rails
How to Quickly Authenticate Users With Ruby on Rails Using basic authentication Photo by Luca Bravo on Unsplash You may have cases where you need to restrict certain pages so they can’t be accessed by unauthorized clients. There are many authorization methods you can use in a Ruby on Rails application. In this article, I’ll mention one very simple (yet not so famous) method that comes implemented and ready to be used each time you create a new Rails application. ActionController::HttpAuthentication::Basic has a method called http_basic_authenticate_with that you usually need to invoke in the top lines of your controller. Rails is an open-source framework, so we’re thankfully able to see the implementation and the work that’s being done by this method: Let’s take an example of a DocumentsController. I have two methods that render views. I want to display a list of documents to anyone, regardless of whether they’re already authenticated or not. I want to restrict the privileges to modify documents to only the users that are authorized. Namely, they know a name and a password that I’ve set beforehand. I should certainly use parametrized values that are set in the config files, but for the purpose of this example, I’m using actual values at the time this method is being invoked. Yes, I know it’s not too difficult to find out, but it’s only used for this article and maybe a few grandmas out there. That single line of code calls the method that protects your documents from being modified by people you don’t want. That’s really awesome and time-saving so you don’t have to write everything by yourself. If you need to do this type of authorization for many controllers, then you can simply declare it in ApplicationController: That’s all you need to do to use this incredibly easy-to-use way of authorization for your application. Here you can check out the other methods that are included in this Rails module. Happy coding!
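For intuition, the check behind HTTP Basic authentication boils down to comparing the request’s Authorization header against the Base64-encoded "name:password" pair. A rough pure-Ruby sketch of that comparison (not Rails’ actual implementation; the credentials are illustrative placeholders):

```ruby
# Rough sketch of the credential check behind HTTP Basic auth.
# ["..."].pack("m0") is strict Base64 encoding without a trailing newline.
def valid_basic_auth?(authorization_header, name, password)
  expected = "Basic " + ["#{name}:#{password}"].pack("m0")
  authorization_header == expected
end

header = "Basic " + ["docs_admin:s3cret"].pack("m0")
puts valid_basic_auth?(header, "docs_admin", "s3cret")  # true
```

In the controller itself, this all collapses into the single macro call the article describes, along the lines of http_basic_authenticate_with name: "...", password: "...", where an option such as except: :index would leave the public listing unprotected while guarding the rest.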
https://medium.com/better-programming/how-to-quickly-authenticate-users-with-ruby-on-rails-c55471308641
['Fatos Morina']
2020-04-01 16:26:02.227000+00:00
['Ruby on Rails', 'Cybersecurity', 'Development', 'Technology', 'Programming']
Rheostat Government: Replacing the On/Off Switch with a Dimmer
Rheostat Government: Replacing the On/Off Switch with a Dimmer When the coronavirus pandemic begins to subside, communities should use a nuanced, calibrated approach to allowing businesses to reopen and residents to return to work and school, argues Kennedy School Professor Stephen Goldsmith. Written by Stephen Goldsmith, originally published on Governing Two friends in different states told me similar stories recently about how social distancing was being enforced at parks with relatively isolated hiking trails. Because getting to the trails caused too many people to use the same narrow steps or park their cars too close together, access was blocked for everyone. Clearly government needs to take action in these kinds of situations, which it did, in this case, the way it knows best: by throwing a switch — you are open or you are closed. But when the coronavirus nightmare begins to subside, state and local officials will face difficult decisions about when and how to begin reviving local economies by allowing residents to return to work and visit shops and restaurants. It will be a time for officials to embrace rheostat government, replacing the binary on/off switch with a dimmer. Instead of closing the park, perhaps meter the number of people climbing the steps or parking cars, or even the hours the park is open, to spread out usage. Nuanced, calibrated use of regulatory powers can help local governments in metering normal life back on….
https://medium.com/covid-19-public-sector-resources/rheostat-government-replacing-the-on-off-switch-with-a-dimmer-d49393a5aab7
['Harvard Ash Center']
2020-04-14 22:06:40.677000+00:00
['Local Government', 'Leadership', 'Crisis Management', 'Coronavirus', 'Cities']
BuyUcoin 3.0 — What’s New for Crypto in India?
Experience the best! Nobody can touch bitcoin, but anybody can have it! At BuyUcoin, we foresee Bitcoin’s future as a ‘household commodity’, and ever since our inception in 2016, we have made consistent leaps & bounds in our efforts to nurture the domestic crypto & blockchain ecosystem. We take pride in a culture that harnesses a team of gifted blockchain demi-gods, customer success obsessionists, product wizards and design maestros who are putting in some of the best work of their lives focussed towards a shared vision — “To make Bitcoin exchange easier than online Shopping’. Together we BUIDL! But to claim that our services are ‘perfect’ or ‘flawless’ would be an overstatement, and we consider it as our foremost duty to really listen to our customers and constantly scale-up our product as per their demands and expectations. We conduct interactive sessions with thousands of our platform users to expand on our understanding of the problems that many of them face during their interaction with our offerings. Based on the said research, we try to solve all such pain-points that our community reports to us. And ever since the historic judgment by The Supreme Court of India on crypto-jurisdiction in India was announced on March 5, we have been working day-in and day-out to release the latest set of offerings along with updates on earlier features and experiences that our users have known and entrusted us since the beginning. We’re excited to let you know that these efforts have culminated in the launch of BuyUcoin 3.0 this week.
https://medium.com/buyucoin-talks/buyucoin-3-0-whats-new-for-crypto-in-india-5e2c8ff0e143
[]
2020-05-14 18:44:52.193000+00:00
['Cryptocurrency', 'News', 'Blockchain', 'Bitcoin', 'Startup']
5 Lessons My Dark Night of The Soul Has Taught Me
1. Suffering is necessary. A dark night of the soul challenges every aspect of your being. It destroys your reality to a point where you don’t even know if your healing journey is worth all the pain that’s coming up to the surface. But guess what? That’s exactly what you need. We need to feel pain to transform ourselves. The biggest transformations don’t take place when we simply tell ourselves we have to change; they take place when our life circumstances force us to self-reflect and change our course of action. “In the West, we generally reject suffering. We see it as an unwelcome interruption of our pursuit of happiness. So we fight it, repress it, medicate it, or search for quick-fix solutions to get rid of it. In some cultures, especially in the East, suffering is acknowledged for the important role it plays in people’s lives, in the meandering path toward enlightenment.” Tal Ben-Shahar, in The Role of Suffering 2. Our loved ones can be the most resistant to our growth. This one is probably the hardest lesson to accept. When you’re growing, your family and friends will, often unintentionally, try to keep you the same instead of supporting your growth. They’ll give you feedback that reinforces unhealthy patterns, they’ll act like you’re not changing for the better and they’ll probably make you feel guilty for finally being able to say “enough is enough” and staying true to yourself. The reason why our loved ones are the most resistant to our growth is because they feel that they’re going to lose us if we keep growing. Our stagnation makes them feel comfortable; but we’re not here to stay stagnant, we’re here to evolve. And this doesn’t mean we’re better than them. It simply means we’re on different paths. 3. Nobody knows us better than we know ourselves. My dark night of the soul has taught me that no one has the right to tell me how to think, how to feel, what I should or shouldn’t do, or what’s better for my life. Absolutely no one. 
As a child, I was not taught to search for validation inside myself. I was intrinsically trained to search outside myself for my choices; not to build my own identity and listen to my intuition. I have lived many years in fear of being judged, rejected and abandoned. I have spent most of my life putting others’ needs first and letting their opinions dictate my truth. In fact, I was so skilled at it that I didn’t even know what I wanted or who I was. That phase is over. Which brings me to my next point… 4. The real strength lies in our ability to be with ourselves. Most of us spend our lives running away from ourselves. We can’t stand the idea of being alone with our own thoughts and emotions, so we do whatever it takes to keep ourselves busy. My dark night of the soul started a few months before lock-down kicked in. I was gradually becoming aware of some wounds and emotional patterns, which made me feel a deep need to be alone and process everything I was feeling. Then, I slowly came to the realization that, no matter how strong and accomplished I appeared, I never really loved myself. Now, after almost a year of continuous healing and self-reflection, I know that the most important relationship in my life is with myself. Nothing can ever replace our ability to just be, and to love ourselves so deeply that we no longer deny the most painful parts of our existence. 5. There’s always a hidden meaning that’s serving our own evolution. Although this past year has been incredibly difficult, I know I needed to go through this process in order to reconnect with my true self. I believe in the mystical forces of the Universe. I believe that if we learn to be quiet, to sit still and simply listen to our inner voices, there’s always a message or a meaning; a whisper guiding us and supporting our growth. No matter how incomprehensible our experience may be, no matter how desperate and lonely we may feel, there’s always a lesson that we’ll later integrate into our lives. 
When the storm has passed and we’re finally able to distance ourselves, that’s when we realize how it has served us. The Universe works in mysterious ways.
https://medium.com/change-your-mind/5-lessons-my-dark-night-of-the-soul-has-taught-me-4a1017b144af
['Patrícia S. Williams']
2020-12-27 00:28:03.051000+00:00
['Spirituality', 'Mindfulness', 'Mental Health', 'Self', 'Life Lessons']
A Closer Look At A Dangerous Side Effect of Antidepressants
Content Warning: This article contains detailed descriptions of suicidal ideation. Moreover, the information here does not oppose antidepressants. Antidepressants have helped many people, including myself. Also, I am not a doctor nor am I a mental health professional. I’m in an overwater bungalow in Moorea, French Polynesia, clutching my heart, deliberating over ways to kill myself. I’ve been on a new anti-depressant — the fourth I’ve tried this year — for seven days. Despite the hope I’d had that it would help take me out of my misery or at least keep me from wanting to take my own life, it has made me feel even worse than before I started it. I’m in one of the most beautiful places in the world, with the man I love, but my soul is screaming. It’s being scorched alive. The only way to stop the searing fury is for me to die. I cannot sleep. The overwhelming urge to kill myself devours any part of my spirit that, during these past 10 months of depression, still had the ability to fight. I’ve been on a new anti-depressant medication, Pristiq, for seven days. When you have done everything to fight, and you realize there are no options left but to kill yourself, the decision to give in should be a relief. But as desperate as I am to end my suffering, I’m still afraid to do it. And yet — and this is new for me — what overrides my fear is a pervasive and impersonal force that dictates that I need to kill myself. Even though I’m frightened, I am also dispassionate, which again, is not like me. A 2010 article in CNS Neuroscience & Therapeutics highlights that “Antidepressant-induced suicidality appears to be an uncommon occurrence but also a legitimate phenomenon.” The article goes on to discuss case-reports of antidepressant-induced suicidality, but also points out how, for the most part, the benefits of antidepressants outweigh the risks. To most people taking antidepressants, this is not new information. All antidepressants list suicidal ideation as a side effect.
A black box warning was issued in 2004 and revised in 2007, warning of the increased risk of suicidality for patients under the age of 25 who take antidepressants. I’d been aware of this side effect every time I’d started a new antidepressant. I’m usually already suicidal before starting any of these drugs, so I feel I have nothing to lose. And some have helped me greatly. Then, enter Pristiq. Four days before we left on our trip, I was so depressed I did not think I’d be able to travel. In a last-ditch effort to save myself, I popped a 25 mg Pristiq pill from a prescription I’d picked up a week before. And after taking just that one pill, I felt a lift in my mood. I also experienced a more positive outlook the second day I was on it. I woke up early, running at 6:30 am and relishing the chilly air and sunshine. I enjoyed being around people at my volunteer job, driving home that day thinking, “I’m doing something meaningful.” But during days three and four on the drug my mood bounced between tolerable and despondent multiple times. I was also plagued with insomnia. I felt more restless than usual — akathisia, I later learned. Akathisia is a side effect of some psychiatric medications. It involves inner restlessness accompanied by mental distress. A 2002 NCBI article, “Akathisia: Overlooked At A Cost,” reports, “It’s often difficult for patients to recognize that they’re experiencing akathisia; it’s often mistaken for anxiety or depression.” Similarly, I thought I was anxious and depressed, but that I’d push through the discomfort when the Pristiq fully kicked in. By the time my husband and I were on vacation in Moorea, Pristiq’s side effects were raging. He’d booked this trip to a beautiful place thinking it’d help cure my depression. Both of us had naively hoped that being in paradise with no demands placed on me, and spending time together, would at least curb the malady in my mind.
Fast forward to the overwater bungalow and my impulse to die by suicide at 2 am. I have never had such strong, unemotional suicidal ideation. Let me explain: my usual suicidal ideation involved crying, emotional pain, and hopelessness. This time, I did not cry. Agitation bubbled up from under the skin of a flat, detached person. Instead of my standard sadness and desire to end the pain, I felt like I was required to die. I now believe medication-induced akathisia caused this more depersonalized yet urgent suicidality. Similarly, a 2017 New York Times article, “Lawsuit Over a Suicide Points to a Risk of Antidepressants”, discusses how one woman successfully sued Glaxo, the makers of the antidepressant Paxil, after her husband jumped in front of a train after starting the drug. This man also experienced akathisia, which his wife reported played a key role in his suicide. I email my psychiatrist; it’s 2 am and I won’t hear back from her for at least five hours. But maybe we can have a remote session in the morning. Then I take what’s left of my emergency travel Xanax. I fade into the laconic state between sleeping and waking. It’s not the oblivion I’d hoped for, but at the very least, it stops me from acting on my suicidal urges. When the Xanax wears off, I consider swimming out to the rough part of the Pacific Ocean and letting the Tahitian currents take control. I pace the room as my thoughts dart around — would a fishing boat find me first, then put me on a 72-hour hold? Maybe — it’s 5 am and the sun won’t be up for an hour, but an hour isn’t long enough to guarantee the currents or sharks can kill me before someone finds me alive. But how will I get through another day? What if Dr. P. doesn’t email me back? You need to die by suicide. You can’t live anymore. My psychiatrist emails me back at 7 am. We’ve set a Zoom meeting for 11 am. “You promised Dr. P. that you’d talk to her before killing yourself. At least wait a few more hours before you do it,” I tell myself. 
When we do meet, I robotically tell her, “I know my affect is flat, but I feel like I’m jumping out of my skin. I have a strong urge to kill myself.” We come up with two possible interventions while I’m still out of the country: either I go back on a doubled dose of Prozac from what I was taking before, or I double the Pristiq. “You’re not on a therapeutic dose yet; we don’t know if it’s fully working.” “What about the black box warning of suicide with this drug?” “That warning pertains more to people under the age of 25.” “Hm. Ok.” My gut still tells me to go back on a stronger dose of Prozac; something about my emotionless suicidality makes me fear the Pristiq, despite her assurances about the black box warning. I love Dr. P. because she lets me make decisions about my medication. She gives me options, but does not try to sway me in one direction or another. I am very fortunate I chose to double my Prozac and discontinue the Pristiq. I am not here to villainize Pristiq or antidepressants in general. Pristiq has worked for many people who suffer from major depression. Other antidepressants, Prozac included, have also caused akathisia and suicidal thinking, often resulting in fatal outcomes. But these medications have also profoundly helped many who suffer from mental illness. I am here to point out that the black box warning, especially regarding akathisia, needs to be taken more seriously with patients both under and over age 25. I took the same risk when doubling my Prozac; that, too, could have caused me akathisia and robotic suicidality. Fortunately, though, it has helped my depression greatly. I haven’t contemplated suicide in over a week. In that respect, the drug has saved my life. But after having experienced akathisia and a different, more impersonal type of suicidal ideation on Pristiq, I’ve learned that adults and young people alike need to be more aware of this potentially deadly side effect.
https://medium.com/invisible-illness/a-closer-look-at-a-dangerous-side-effect-of-antidepressants-2a6666a70423
['Kelley Jhung']
2020-12-27 08:09:51.595000+00:00
['Depression', 'Antidepressants', 'Mental Health', 'Suicide', 'Akathisia']
20 Essential ML Questions Answered
20 Essential ML Questions Answered Must Know Questions for Data Science and ML Interviews Computer scientist and ML expert Santiago Valdarrama (@svpino on Twitter) recently tweeted a list of 20 fundamental questions that you need to ace before getting a Machine Learning job. Claiming: “Almost every company will ask these to weed out non-prepared candidates. You don’t want to show up unless you are comfortable having a discussion about all of these.” Santiago Valdarrama — @svpino 1. Explain the difference between Supervised and Unsupervised methods. When we train machine learning models we use data that is either labeled or unlabeled. In Supervised learning, the data we use to train the model is labeled. Example: If we’re building a classifier to tell if an animal is a cat or a dog, we would train the model on a dataset of dog and cat images correctly tagged as such. Then we can get predictions on new unlabeled images! Supervised learning allows us to collect data or produce a data output from the previous experience. But when we train a machine learning model on unlabeled data, this is called Unsupervised learning. This allows the model to work on its own to discover new information about the dataset and can help us find unknown patterns in data. Example: To refer to the previous example of a dog and cat image classifier; If all of our dog/cat image data was unlabeled we could use unsupervised learning to find similarities in the different classes of images. We could use an unsupervised learning technique called clustering to find out which images are likely to be of dogs or cats! Use of a ground truth (prior knowledge of what the output values for our samples should be. i.e. ‘labels’) is the largest difference between the two types of learning. 
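As a minimal illustration of the two settings, here is a hedged sketch using scikit-learn; the toy dataset and model choices are just examples for this answer, not anything prescribed by the question:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy data: 100 points in 2 well-separated groups, with known labels.
X, y = make_blobs(n_samples=100, centers=2, random_state=42)

# Supervised: the model trains on the labels we provide (the ground truth).
clf = LogisticRegression().fit(X, y)
accuracy = clf.score(X, y)

# Unsupervised: same points, labels withheld; the model discovers structure itself.
km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
clusters = set(km.labels_)

print(accuracy, clusters)
```

The classifier learns from the labels, while K-means receives only the raw points and still recovers the two groups, which is exactly the ground-truth distinction described above.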
Unsupervised vs Supervised methods applied to data Supervised: Used on labeled data Allows us to produce a data output from previous experience or examples Most practical machine learning applications use supervised learning Unsupervised: Used mainly on unlabeled data Allows us to learn the inherent structure of data without providing labels Finds unknown discoveries and patterns in data 2. What’s your favorite algorithm? Can you explain how it works? My favorite machine learning algorithm is Naïve Bayes! In probability theory and statistics, Bayes’ theorem (alternatively Bayes’s theorem, Bayes’s law or Bayes’s rule) describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if the risk of developing health problems is known to increase with age, Bayes’s theorem allows the risk to an individual of a known age to be assessed more accurately than simply assuming that the individual is typical of the population as a whole. A Naïve Bayes Classifier is a probabilistic classifier that uses Bayes’ theorem with strong independence (naïve) assumptions between features. Probabilistic classifier: a classifier that is able to predict, given an observation of an input, a probability distribution over a set of classes, rather than only outputting the most likely class that the observation should belong to. Independence: Two events are independent if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). That assumption of independence between features is what makes Naïve Bayes naive! In the real world, the independence assumption is often violated, but naïve Bayes classifiers still tend to perform very well. For a deeper dive into Naïve Bayes check out my blog post on how to build an email spam filter from scratch with multinomial Naïve Bayes! 3. Given a specific dataset, how do you decide which is the best algorithm to use? 
In ML and data science there is no one-size-fits-all algorithm. The answer depends on a myriad of factors like the number of features in the data, the kind of output you want, the size of the dataset, available computation time/resources, and many others. The type of problem: Input: Is the input data labeled? If so, it’s a supervised learning problem. If it’s unlabeled data with the purpose of finding structure, it’s an unsupervised learning problem. If the solution implies optimizing an objective function by interacting with an environment, it’s a reinforcement learning problem. Output: What should the model output be? If it’s a number, that would be a regression problem (linear, lasso, logistic, SVM, etc.). If the output is a class, then it would be a classification problem (unless the output is a set of input groups; then it would be a clustering problem). After categorizing the problem and understanding the data, the next milestone is identifying the algorithms that are applicable and practical to implement in a reasonable time. Some of the elements affecting the choice of a model are: The size of the training dataset. Is the training dataset small? 
(i.e. it has fewer observations and a higher number of features) If so, algorithms with high bias and low variance, like linear regression, Naïve Bayes, or linear SVM, would be preferable. The accuracy of the model vs. the interpretability of the model. The complexity/implementability of the model. Do we have the time and computational resources to train the model? Are the gains in accuracy high enough to justify the costs and engineering effort needed to bring them into a production environment? The scalability of the model. Does it need to scale vertically or horizontally? Does the model meet the business goal? 4. When should you use classification over regression? Like the question above about algorithm selection, the choice between classification and regression depends on the available data, problem statement, and expected output. Classification: Hotdog or not hotdog? Regression: If your expected output is a real or continuous value. Example: Predicting the increase or decrease in value of apartment buildings over time. Classification: If your expected outcome is a discrete or categorical value. Used to predict class membership. (i.e. hotdog or not hotdog, dog or cat) Example: Predict whether or not a user is expected to purchase something when they visit your website or online store. (Classes: likely conversion, possible conversion, unlikely conversion) 5. Can you explain how Logistic Regression works? Logistic Regression is a machine learning algorithm used for classification problems; it is a predictive analysis algorithm based on the concept of probability. Logistic regression is used to assign observations to a discrete set of classes. Example: Do workers’ education levels and time on the job affect promotions? 
The independent variables would be education levels and time on the job, and the levels of the dependent variable might be promotion to team-leader roles, sales positions, or management positions. Logistic regression transforms its output using the logistic sigmoid function to return a probability value. The sigmoid function (as depicted below) squashes any real-valued input into the range between 0 and 1, so the hypothesis of logistic regression is always bounded between 0 and 1. The sigmoid function will map the predicted values of the model to a probability between 0 and 1. We can use a decision boundary to classify data points based on the probability they are likely to belong to a certain class. (Example: If Cats = 0 and Dogs = 1, then any predicted value greater than 0.5 would be classified as a dog.) Sigmoid function with decision boundary of 0.5 Bonus: What is Gradient Descent and why is it important in logistic regression? 6. What are the advantages and disadvantages of decision trees? Advantages: Easily understandable and explainable to stakeholders Doesn’t require the data to be normalized or scaled No need to impute missing data because null values don’t affect the process Requires less data preprocessing than other algorithms (Good baseline) Disadvantages: Prone to overfitting the data, causing incorrect predictions Noise: does not work well if you have too many uncorrelated variables High variance: small changes early in the tree can have a large impact on the outcome Decision Tree about taking a new job Bonus: What is a random forest? When should you use it over a decision tree? Hint: Does size (of the dataset) matter? 7. Can you compare K-means with KNN? 
“The ‘K’ in K-Means Clustering has nothing to do with the ‘K’ in the KNN algorithm” K-Means Clustering: Used for clustering (K = number of clusters) Unsupervised learning algorithm Takes unlabeled data points and groups them into “k” clusters The value of “k” is often chosen with the elbow method, and the algorithm recalculates cluster centroids until convergence (which may be a local optimum) K-Nearest Neighbor (KNN): Used for classification (K = number of neighbors) Supervised learning algorithm Takes labeled data points and uses them to learn how to label other points To label a new point, it looks at the “nearest neighbors” (labeled points closest to the new point) Neighbors vote on how to label the new point 8. How much data would you allocate for your training, validation, and test sets? Train / Test / Validation Split There is no exact percentage of how you should allocate your data, but a convention in machine learning is to use an 80/20 or 70/30 train/test split. After the initial split, the training set can be further split into validation sets. Again, this is a general rule and a great starting point, but the best way to determine how to allocate your data is to experiment with different split sizes. 9. Can you explain the “Curse of Dimensionality”? This scary term refers to the difficulty of using brute force (grid search) to optimize a function with too many input variables. In English, this means that when our data has too many features (columns) compared to the number of observations (rows), we risk overfitting our model, resulting in false and unreliable predictions. If there is a large number of features (compared to the observations), it becomes harder to make meaningful clusters with the observations, because too many dimensions cause every observation to look equidistant from every other data point. Luckily there are some techniques to reduce this, and we’ll cover those in the next question. Dimensionality Reduction Visualized 10. What are some methods to reduce dimensionality? 
There are many ways to reduce dimensionality, ranging from intuitive feature selection to linear, non-linear, and auto-encoder methods. Here are some of the most popular. Feature Engineering / Selection: If the necessity for dimensionality reduction comes from too many features, let’s get rid of some! We can use heatmaps, visualizations, or even domain knowledge to find which features are contributing to the accuracy of the model and which features are not. We can also combine different features or create entirely new features based on some insight about the data to reduce the number of features but preserve their impact on model accuracy. Principal Component Analysis (PCA): Another way to find the most important structure in your crowded dataset is PCA. Used on continuous data, this method projects data along the axes of highest variance; the directions with the highest variance are the ‘principal’ components. We can use PCA to determine which components have the largest impact on the outcome/prediction of the model. Auto-encoders: An unsupervised neural network that compresses data down to a lower dimension and then reconstructs the data based on the most important features. This gets rid of noise and redundancy in the data. Auto-encoders can also be linear or non-linear based on the activation function. 11. How would you handle an imbalanced dataset? Evaluation Metric: In many cases as a machine learning engineer you’ll have to deal with imbalanced data. In anomaly detection (used for credit card fraud, geological events, etc.) it is not likely that more than 1% of the data will be classified as an anomaly. You could classify every instance as non-anomalous and you would get an accuracy of 99%, but that wouldn’t be good enough in this case, so we could use a confusion matrix to calculate precision, recall, and F1 scores to get a better idea of how our model performs on imbalanced data. 
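These metrics are straightforward to compute with scikit-learn. A minimal sketch, assuming scikit-learn is available; the labels below are made up purely to illustrate an imbalanced case:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Imbalanced toy labels: 1 = anomaly (rare class), 0 = normal.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

cm = confusion_matrix(y_true, y_pred)        # rows: actual, columns: predicted
precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of precision and recall

print(cm, precision, recall, f1)
```

Here one anomaly is caught (TP = 1), one is missed (FN = 1), and one normal point is flagged (FP = 1), so precision, recall, and F1 all come out to 0.5 even though plain accuracy is 80%.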
Algorithm: We can experiment with the type of algorithm that we are using for our model, as different algorithms perform better on different types of problems (i.e. Random Forest instead of Decision Tree). Resampling (oversampling and undersampling): Undersampling: When there is a sufficient amount of data, this is used to balance the dataset by reducing the size of the abundant class. By keeping all samples in the rare class and randomly selecting an equal number of samples in the abundant class, a balanced new dataset can be retrieved for further modelling. Oversampling: When there is an insufficient amount of data, this method is used to balance the dataset by increasing the size of rare samples. Rather than getting rid of abundant samples, new rare samples are generated. We could also use K-fold Cross Validation, resample the data with different split ratios, cluster the abundant class, and many other methods! 12. Can you explain the trade-off between bias and variance? The goal is to get the algorithm to generalize, but not oversimplify. Bias: Can cause a model to miss relevant or important relationships between features and its target output. Algorithms with high bias error tend to be underfit. Variance: How sensitive a machine learning model is to small changes in the training data. Models with high variance tend to focus on the random noise in training data rather than important relationships between features, resulting in overfitting. Tradeoff: One of the biggest problems in supervised learning, the bias-variance tradeoff aims to choose a model optimized for accurately capturing regularities in its training data while also generalizing well on unseen data. Sadly, it is typically impossible to do both at the same time. High bias, low variance: consistent but inaccurate. High variance, low bias: accurate but inconsistent. 13. Can you define and explain the differences between precision and recall? Precision: Classification evaluation method with the goal of answering: “What proportion of positive predictions are actually correct?” Example: Imagine a case where you are asked to build an email spam filter. It’s not a big deal if we accidentally classify an advertisement (spam) as a genuine email. It IS a big deal if a new job offer from your dream company is classified as spam. In this case we want to focus on precision and maximize the ratio between true positives and total positive predictions. Recall: Classification evaluation method with the goal of answering: “What proportion of actual positives are correctly predicted?” Example: Imagine a case where you are asked to develop a predictive model to classify people as positive or negative for cancer. We REALLY don’t want people with cancer being given a false negative, because it’s possible they will go even longer without treatment. But there’s not nearly as much downside when telling a healthy person that they have cancer. 14. How do you define the F1 score and why is it useful? F1 is the harmonic mean (a Pythagorean mean, appropriate for situations when the average of rates is desired) of both precision and recall. It is typically used as a best practice when there is not a specific reason to highly value either precision or recall (like the examples in the previous question). 
Typically calculated from a confusion matrix, its formula is: F1 = 2TP / (2TP + FP + FN) Where: TP = True positives, FP = False positives, FN = False negatives. 15. How do you ensure you’re not overfitting? Can you explain some techniques to reduce overfitting? After splitting our dataset into training and testing sets, if our model does a much better job on the training set than the testing set, it is likely overfit. We can take steps to reduce this: Cross-validation: This could be as simple as using a train, test, validation split on your data, or something more complex like K-folds (where the data is split into K sections or folds and each fold takes a turn as the testing set). Early Stopping: During each training epoch the model is given more opportunities to fit the data, but after a while this begins to overfit the training set. We can monitor the training performance and stop training as soon as the performance on the validation dataset decreases (compared to the performance on the validation dataset at the prior training epoch). Regularization: Refers to penalizing the parameters, shrinking the coefficients closer to zero. This method stops the model from getting overcomplicated, allowing it to generalize better. Weight Constraints: Checks the weights of a network and, if their size exceeds a certain limit, rescales them to be back below the limit (or in the range), preventing single features from dominating the model. Dropout: Essentially “drops out” individual neurons in a neural network during the training process, leading to significantly lower generalization error rates. Neural Network with Dropout Layer 16. Can you explain what cross-validation is and how is it useful? Cross-validation is a resampling method used to evaluate how well a machine learning model fits a limited sample of data that is independent of the data we used to train the model (i.e. holding data out of the training set to test the model on later). This is especially useful when we are training a model with a limited data set. There are many forms of cross-validation, including exhaustive, non-exhaustive, and nested methods. 17. Can you explain the difference between L1 and L2 regularization? A regression model that uses L1 regularization is called Lasso Regression. A model that uses L2 is called Ridge Regression. The main difference is that Lasso Regression (L1) shrinks the coefficients of less important features, removing some features entirely (helping with feature selection), while Ridge Regression (L2) adds the “squared magnitude” of the coefficients as a penalty term to the loss function. Lasso vs. Ridge Regression 18. What is the ROC Curve? The ROC (receiver operating characteristic) curve is a graph that shows the performance of a classification model at every classification threshold. 
AUC-ROC (area under the ROC curve), also written AUROC, represents the degree or measure of separability. It tells us how capable the model is of distinguishing between classes. A higher AUC means the model is better at predicting an observation as its own class. (i.e. 0 as 0 and 1 as 1) 19. What is a Confusion Matrix and how is it useful? A confusion matrix is a table used to represent the performance of a classification model where the output can be two or more classes. It contains the true positives, false positives, true negatives, and false negatives. Using a confusion matrix can help with calculating precision, recall, F1, and AUC-ROC, as discussed in earlier questions. 20. Which is more important: model accuracy or model performance? Model accuracy is the most important. Once a model is deployed in production, the quality of the output is very important, and retraining happens less often than scoring the outputs. As for performance, this depends on what we’re talking about when we say “performance”. If it is model training performance, we can upgrade our computer, or use distributed computing power and parallelization to speed up training time. If we’re referring to model scoring performance, then it would depend on the type of data we are using. Thanks for reading! Please hit the clap button (up to 50 times) if you enjoyed this post or simply managed to scroll this far. I hope this helps you on your machine learning journey and if I made any mistakes or you want to make suggestions, please connect with me through the links below and I’ll be happy to update the material. Have a fantastic day and don’t stop learning. Jack Ross (Data Scientist) About the Author I’m Jack and I like to learn things. I’m a Lambda Endorsed data scientist, lover of coffee, and like to occasionally injure myself doing action sports. I’m looking for data science opportunities so let’s connect! Where to find me:
https://medium.com/swlh/20-essential-ml-questions-answered-6bf61f8b1aa6
['Jack Ross']
2020-11-28 10:08:24.894000+00:00
['Machine Learning', 'Data Science', 'Interview Questions', 'Interview', 'Artificial Intelligence']
Interpreting 270862 Fitbit footsteps using time series analysis with Prophet
Time Series Analysis Back when I showed the first graph, I pointed out that it has a zigzag pattern, indicating that there are no two consecutive days with a similar number of steps. Honestly, I do believe this is the most exciting part of the dataset. For starters, it says something about me and how I led my days: have a nice walk today, take it easy tomorrow. However, the coolest part about it is that it gives me a legit reason to apply time series analysis to the dataset. In layman’s terms, the idea behind a time series analysis is to learn the patterns and trends from a sequence of events ordered by time to ultimately forecast future outcomes. For this project, I’m only interested in the first portion of this definition: learning the patterns. And for doing so, I’ll be using the Prophet package, a time series analysis library developed by Facebook. The Petronas Towers in KL. Photo by me. In this analysis, there are three things I want to focus on: the general trend, weekly and hourly seasonality. The general trend component describes the overall evolution of the series. Then, there’s the weekly seasonality which explains the time series’ behavior over the seven days of the week, and similar to it is the hourly seasonality which provides insight about my hourly steps routine. By studying these three components, we’ll be able to know how my walking pattern evolved over my whole stay in Malaysia, and of course, my preferred times and days to walk. To fit our model in Prophet, first, we create a Prophet() object and fit it with the desired dataset. This input has to be a data frame with two columns: ds and y . The ds column, which stands for a datestamp, should either be a date ( YYYY-MM-DD ) or a timestamp ( YYYY-MM-DD HH:MM:SS ). In this case, our ds values are all the 15-minute intervals elapsed from July 9 00:00:00 until August 2 23:45:00, e.g. 
“2019–07–09 00:00:00” and “2019–07–09 00:15:00.” The second column, y, is the numeric value we want to forecast, and again, here this means the steps taken. Now, armed with a tidy dataset, let’s proceed to fit our model, predict the forecast, and draw the seasonalities. The following code snippet shows how you can do it in Python. Let me explain this a little bit. As with most things in data, the first thing we’ll do is load the dataset. Then, after creating our Prophet object, we’ll fit our model. Once that’s done, we’ll call model.predict(df) to obtain our forecast, and following this, we need to call model.plot_components(forecast) using the newly acquired forecast as the parameter to create the trend and seasonality component plots. Lastly, we need plt.show() to draw them. (I’m using Seaborn to change the look of the plots). To simplify these three graphs, and for better visibility purposes, I post-processed the returned image to separate each figure and to title them. Cameron Highlands Wait! Before I show you the data, I want to quickly explain the meaning behind the numbers you’ll see on the y-axis of the plots. These values aren’t, I repeat, they are not, the actual number of steps taken at that particular time or day. Instead, we can interpret them as the incremental effect on y of that seasonal component (as stated here). For example, without spoiling too much, if you take a look at the following graph, you’ll find that the value of the first day, “2019–07–10”, is around 140, meaning that this day has an effect of +140 on y. Ok, now let’s see the data. The first element I want to illustrate is the overall trend of the data. This component shows the series’ general changes over time. As we saw earlier, on the last days of my trip, I didn’t walk as much as I did during the first ones, and that’s what we see here. On the initial days, I was all hyped up and hungry for adventures in busy Kuala Lumpur. 
Then, after these hectic and rushed days, I moved to the Cameron Highlands. There, high in the mountains, I slowed down a bit (Stranger Things season 3 is the culprit). After CH, I went down several hundred meters and reached the island of Penang, and its center, George Town. Honestly, the internet here wasn’t that stable, so I took solace in exploring the city, hence the small increase in the timeline. Lastly, at Langkawi, once again, I decided to slow down and enjoy the ocean breeze.
https://towardsdatascience.com/interpreting-270862-fitbit-footsteps-using-time-series-analysis-with-prophet-bde8817bbfaf
['Juan De Dios Santos']
2019-08-28 11:25:19.613000+00:00
['Data Journalism', 'Wander Data', 'Data Science', 'Python', 'Programming']
Connecting to people we don’t know and can’t recall is wasting our time
Connecting to people we don’t know and can’t recall on LinkedIn is wasting our time Time you could be spending sewing face masks, baking sourdough, looting stores or waiting for Q’s next cryptic instruction Let’s consider whether adding another connection to another stranger is actually going to help you achieve anything, or whether the value of your time is actually all going to Microsoft (which owns LinkedIn). How many contacts can you really stay up-to-date with? LinkedIn says I’m connected to 7,881 people. If they all just post an update once a month (some of them post once a day, some of them once a year) and I spend on average about a minute reading each update and choosing a reaction emoji, that’s 94,572 minutes a year. That’s impossible, of course. We simply can’t do what LinkedIn says it wants us to do — to stay connected with everyone we know professionally on LinkedIn, while also holding down a job. How much is your lost LinkedIn time worth? I may have more LinkedIn connections than most LinkedIn users, but look at your own numbers, and do the math. Can you show it’s definitively led to a job offer, promotion, pay rise or new customer worth that many minutes of your time each year? How about over five years? If I do that math, using my average hourly rate over the past five years, I’d have to make more than a million additional dollars in that five year period. I’m pretty sure I would have noticed that happening by now! And that’s before you account for LinkedIn trying to stop you staying up-to-date with everyone For most of us, a lot of updates scroll off the bottom of the LinkedIn feed before we ever see them. Many hours are wasted by posting updates we’ll probably never see. The more people you’re connected to, the more of their time you’re wasting and the less effective you’re making your own networking time on LinkedIn. There’s a button at the bottom of your LinkedIn feed. It’s there all the time. 
I’ll bet that unless you’re working in product at LinkedIn, looking for a job, or working in recruitment, you’ve never even seen this button. It’s the button you’d need to click on to see any updates that have already been scrolled off the bottom of your screen by the LinkedIn algorithm. This is what it looks like: Keep scrolling and you’ll find it. But scrolling that far is not what LinkedIn wants you to do. Instead, the feed algorithms ensure that updates and posts that get more engagements from other contacts are more likely to be shown. So the feed algorithm is actually going to make it harder to surface what’s really going on for people you haven’t heard from in a while, and for people who just aren’t very good at being engaging on LinkedIn. They could be the smartest, most successful people in your entire network. They could be sitting on something which really is going to add another million dollars to your next five years’ income, or change your life massively and forever. But every additional person you connect with on LinkedIn pushes you further away from them, unless they also happen to be very effective at content marketing on LinkedIn. Even if you could comprehend that many updates, you’d never remember all those people Google “the Dunbar number” or better yet, here’s a great article about it by Maria Konnikova in the New Yorker: “…groups can extend to five hundred, the acquaintance level, and to fifteen hundred, the absolute limit — the people for whom you can put a name to a face. While the group sizes are relatively stable, their composition can be fluid. Your five today may not be your five next week; people drift among layers and sometimes fall out of them altogether.” - Maria Konnikova, “The Limits of Friendship”, The New Yorker, 7 Oct, 2015 I have a social network disability If I were neuro-normal, I’d be able to remember the name and face of about one in five of the people I’m currently connected to on LinkedIn. 
But I suffer from some facial blindness which leaves me unable even to remember all the people I might meet at a large company event or conference. Sometimes I can’t recall the names of close friends and family. Being subjected to more LinkedIn updates, more frequently, makes it worse. UWA has a free 15-minute facial blindness test, if you think you might suffer from this problem too.

So, like, why?

What’s the point of adding even more people to my LinkedIn contacts if I don’t remember who they are and I’m unlikely to be able to stay in touch with them, because the platform actually wants me to focus on the people it thinks get the highest engagement? It makes no sense for you, or for me. But it makes sense for LinkedIn. More people connected to more people means more data that can be sold to marketers, recruiters and employers, and more data that will help Microsoft advertise its own products to me. This has been a rather long read. But this is what actually being connected to someone really is about, right? Not emojis and 25-words-or-less replies. Here’s a previous post I wrote on the topic if you want to go further.
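If you want to re-run the 94,572-minute figure from earlier in the piece with your own numbers, it's a three-line calculation. A small Python sketch (the inputs are the article's figures; the one-post-a-month and one-minute-per-update assumptions are averages, so swap in your own):

```python
# Rough yearly cost of "staying up to date" with every LinkedIn connection.
# Figures from the article; the assumptions are deliberately crude averages.
connections = 7881        # total LinkedIn connections
updates_per_year = 12     # assume one post per contact per month
minutes_per_update = 1    # time to read each update and pick a reaction emoji

minutes_per_year = connections * updates_per_year * minutes_per_update
hours_per_year = minutes_per_year / 60

print(f"{minutes_per_year} minutes a year ({hours_per_year:.0f} hours)")
# → 94572 minutes a year (1576 hours)
```

Even at a fraction of these numbers, the hours add up faster than any plausible payoff.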
https://medium.com/the-innovation/connecting-to-people-we-dont-know-and-can-t-recalll-on-linkedin-is-wasting-our-time-ae023ae37271
['Alan Jones']
2020-08-28 06:57:13.485000+00:00
['Networking', 'Professional Development', 'Startup', 'Social Media']
The Public Conversation on Mental Health and Dangerous Legislation: Part One
content note: voluntary admission to a psych ward, medication, suicidal ideation, ableism, institutionalization, abusive treatments The gown was too big and I had to hold it in place. Later I changed into thin scrubs, and skid-proof socks. I cried hysterically upon arriving on the ward. They handed me pills and a paper cup of water. I somehow still asked if they had made sure the medications didn’t interact with the ones I was on, despite having a not-subtle death wish in my head. One person, later in my stay, lay down on the floor, doing no harm to anyone. They picked them up and medicated them. The staff made noises of pity. I wondered what was so wrong about the floor. Then again, I am autistic and mentally ill. Of course the floor might make sense to me. I thought a lot about my blog, and other social media I was not allowed to be on. I thought of the things I was supposed to write for work and ended up writing poetry on printer paper instead. The psychiatrist only saw me once a day; one of the therapists really wanted me to go to groups. They unhooked the phones during groups. I only had two phone numbers to call anyway. Someone brought me books on disability to read in the psychiatric ward. It would have been more ironic if they’d brought me Mad in America to read. My roommate liked looking at the skyline at sunset and I turned and watched the light falter into darkness much the same way my words became staccato-like between tears. I didn’t get to know other people or their stories. Already I got attached to some of them without knowing them. I wanted to go back and talk to them after I left, but I never did. I was a mental patient, and occasionally I wonder if the hospital stay will be one of more to come. I left within a few days, insistent on leaving so I did not develop a dependency on the hospital to keep me safe. I proceeded to blog about my hospital stay and the executive order passed down by Obama. 
I had sent myself to the ER; voice shaking, I explained in a dull voice through tears what plans and impulses had formed in my head. With no one to sit with me and keep me safe and avoid the hospital, I had few options available to me. There is a need for reform. It seems to be the only thing anyone can agree on — but on how to enact reform, there are split ideas. The public conversation on mental health tends to push toward forced treatment and stigmatization, throughout history and now. Mental patients like me, many disability rights groups, and some LGBTQ+ groups have different opinions. Mental health care should be accessible for those who want it, safe to get, and destigmatized. There is no current 24/7 system for care that is not a hospital if it’s needed. You either work with someone who has a 9–5 job, go during the day during a partial hospitalization, or stay several days or more in a psychiatric ward of a hospital. It would be wrong to deny the need for better community care and alternatives to the sterile wards of hospitals. Some of these were converted from the same state hospitals that so many psychiatric survivors organized movements en masse in the 1960s to close. The funding promised in the Kennedy administrative era for community living has never been put to proper use. The current incentive for state Medicaid programs prioritizes nursing homes for people with disabilities instead of at-home care. The alternatives for many places we end up, like hospitals— peer support and respite centers — get a sideways glance. The state hospital doors began to close in the 1960s and 1970s, and the doors to the community never swung open. Congress is tackling the issue, right? Not really. In 2013, just after the media ran their usual rounds of calling the Sandy Hook Elementary school shooter mentally ill and disturbed, Rep. Tim Murphy (R-PA) introduced legislation that is currently known as the Helping Families in Mental Health Crisis Act (H.R. 2646). 
To some, it seemed like a solution to the country’s “mental health problem.” It seemed like a godsend from the name alone. The Murphy Bill, as it is known among many organizations, has received its support from a country desperate to find a solution to the growing issue of incarceration and hospitalization of people with mental health needs, and from those who believe that mental health should be inextricably linked to gun control policy and/or forced treatment. The Murphy Bill has been met with sharp criticism from mental health self-advocates and disability rights groups, myself included. In its current form, the bill would:

- strip HIPAA rights for anyone in treatment with a doctor or therapist for mental health needs,
- strip a great deal of funding for, and place restrictions on, the portion of protection and advocacy agencies (P&As) that serve people with mental illness (this program is called PAIMI),
- provide more federal funds for institutionalization. This will mean less focus on community-based services. This includes cutting the budget for the Substance Abuse and Mental Health Services Administration (SAMHSA)’s community integration projects.
- encourage states to allow forced-medication programs by providing financial incentives to do so. These would be through court systems. The people affected need only have a DSM diagnosis that “substantially impairs” function. This could include gender dysphoria, which, while not a requirement for being transgender, is common in transgender individuals.

In addition, as several organizations put it: “LGBT individuals are disproportionately affected by mental illness, face pervasive discrimination in health care settings and can experience unique vulnerabilities when denied privacy or decision-making power in their treatments.” Throughout history, there has never been any question that people with mental illness belong in the care of doctors and need to be contained.
It was clear following the theories behind what could have caused our “madness” and “lunacy” that in turn followed the treatments of bloodletting, electroshock, lobotomies, and neuroleptic drugs, that for all the supposition, few thought to talk us through our crises and treat us as human. It was clear when in the 1800s, the “mad trade” flourished and we were treated as commodities to be placed in “madhouses” and treated by “mad doctors,” passed around from one location to another and gawked at by the public. They were safe from us, but we weren’t safe from them. It was clear with the albeit altruistic intent of asylums to keep people safe with the rush of asylum-building in the 1840s, and care turned more and more to the custodial warehousing that is still common in placing people with disabilities in nursing homes today. The public was safe from us, but we weren’t safe from institutional abuse. It was clear in the 1920s and 30s, when the eugenics movement took its grip, and states created laws to keep those with developmental, intellectual, and psychiatric disabilities from marrying — as well as laws to sterilize them, especially those in institutions. It was clear in the 1960s, when the Parkinson’s-like symptoms of neuroleptic drugs were systemically ignored by doctors, and they chemically restrained us to shuffling gaits in the halls and staring at TVs for eight years, their minds feeling like cotton, as some patients reported their experiences in the hospitals. It is hard to think about where I would have been with each of those stages. Among other things, I have severe depressive episodes and bouts of generalized anxiety. My gender identity and sexual orientation would be fodder for symptoms, as it was for a person who expressed themselves by dressing in men’s clothing after the death of their husband instead of the “appropriate” women’s clothing, one of the reasons for their subsequent commitment. 
It is clear now, in 2016, that if the Murphy Bill passes, the means to contain us once more in various fashions will be at hand and the funds for community care will trickle further away. The public conversation is something of a dangerous curiosity. It believes the best course of action for mental health needs is treatment with hospitals and drugs and therapy, even if forced. It professes to want us to seek help on our own and demands to know why we haven’t gotten help yet. Then, after most mass shootings, it raises the stigma higher and higher by blaming us, people with mental illness, for the shooting. It affects lawmakers and the President and solidifies their opinion of us into the thought that we are a public health threat and in desperate need of containment and treatment. It then shames us with stigma for being a mental patient once we have entered the system. It doesn’t factor in the many other reasons why people might not seek care, including personal autonomy, previous bad experiences, and racism, sexism, homophobia, ableism, and transphobia in health care. Medical professionals are not exempt from casual or blatant forms of bigotry. What probably should not happen is the Murphy Bill passing, a product of the public conversation on mental illness. This public conversation on mental illness has resulted in dangerous legislation, a push for modern asylums by ethicists and some psychiatrists, and a blatant call to keep us from exercising our rights.
https://medium.com/psych-ward-experiences/the-public-conversation-on-mental-health-and-dangerous-legislation-part-one-18654e966c42
['Kit Mead']
2016-03-15 16:27:30.504000+00:00
['Mental Health']
What to Say on a Date (When You Have Nothing to Say)
What to Say on a Date (When You Have Nothing to Say) A surprisingly simple FBI tactic to keep a conversation going Photo by cottonbro from Pexels The Koothrappali conundrum “Two words: deaf chick. It doesn’t matter if I can’t talk ‘cause she can’t hear me.” — Raj Koothrappali explaining the type of girl that might be attracted to him You probably remember the character Raj Koothrappali from the hit sitcom “The Big Bang Theory”. He was the brilliant astrophysicist who was also brutally awkward with girls. Poor Raj had a tough time in the dating world because he was so nervous in the company of a female that he couldn’t utter a complete sentence. Later in the series, he found out that he was able to overcome this problem with the help of alcohol. What a terrible predicament. On one hand, with alcohol, he became much more socially adept. On the other hand, in the pursuit of the opposite sex, he might also have been risking a dangerous addiction. While extreme, Raj’s situation is something that many of us experience to varying degrees. We sometimes find ourselves in the company of an attractive person, and we just don’t know how or what to say to keep the conversation going. In these circumstances, do we take a stiff shot of tequila and hope for the best, or are there other options?
https://medium.com/hello-love/what-to-say-on-a-date-when-you-have-nothing-to-say-1f73869ceff6
['Keith Dias']
2020-12-08 22:49:39.576000+00:00
['Relationships', 'Advice', 'Books', 'Self Improvement', 'Love']
Happiness and Life Satisfaction
3. How Happiness Score is distributed

As we can see below, the Happiness Score has values above 2.85 and below 7.76. So there is no single country with a Happiness Score above 8.

4. The relationship between different features and the Happiness Score

We want to predict the Happiness Score, so our dependent variable here is Score; other features such as GDP, Support, Health, etc., are our independent variables.

GDP per capita

We first use scatter plots to observe relationships between variables. Gif by Author

'''Happiness score vs GDP per capita'''
px.scatter(finaldf, x="GDP", y="Score", animation_frame="Year",
           animation_group="Country", size="Rank", color="Country",
           hover_name="Country", trendline="ols")

train_data, test_data = train_test_split(finaldf, train_size=0.8, random_state=3)

lr = LinearRegression()
X_train = train_data['GDP'].values.reshape(-1, 1)
y_train = train_data['Score'].values
lr.fit(X_train, y_train)

X_test = test_data['GDP'].values.reshape(-1, 1)
y_test = test_data['Score'].values
pred = lr.predict(X_test)

# ROOT MEAN SQUARED ERROR
rmsesm = float(format(np.sqrt(metrics.mean_squared_error(y_test, pred)), '.3f'))
# R-SQUARED (TRAINING)
rtrsm = float(format(lr.score(X_train, y_train), '.3f'))
# R-SQUARED (TEST)
rtesm = float(format(lr.score(X_test, y_test), '.3f'))
cv = float(format(cross_val_score(lr, finaldf[['GDP']], finaldf['Score'], cv=5).mean(), '.3f'))

print("Average Score for Test Data: {:.3f}".format(y_test.mean()))
print('Intercept: {}'.format(lr.intercept_))
print('Coefficient: {}'.format(lr.coef_))

r = evaluation.shape[0]
evaluation.loc[r] = ['Simple Linear Regression', '-', rmsesm, rtrsm, '-', rtesm, '-', cv]
evaluation

By using these values and the below definition, we can estimate the Happiness Score manually.
The equation we use for our estimations is called the hypothesis function; for a single feature it is defined as

ŷ = θ₀ + θ₁·x

where θ₀ is the intercept and θ₁ the coefficient for GDP per capita. We also printed the intercept and coefficient for the simple linear regression. Let’s show the result, shall we? Since we have just two dimensions in simple regression, it is easy to draw it. The chart below shows the result of the simple regression. It does not look like a perfect fit, but when we work with real-world datasets, having an ideal fit is not easy.

seabornInstance.set_style(style='whitegrid')
plt.figure(figsize=(12, 6))
plt.scatter(X_test, y_test, color='blue', label="Data", s=12)
plt.plot(X_test, lr.predict(X_test), color="red", label="Predicted Regression Line")
plt.xlabel("GDP per Capita", fontsize=15)
plt.ylabel("Happiness Score", fontsize=15)
plt.xticks(fontsize=13)
plt.yticks(fontsize=13)
plt.legend()
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)

GDP per capita (the economy of the country) has a strong positive correlation with the Happiness Score: if the GDP per capita of a country is higher, the Happiness Score of that country is also more likely to be high.

Support

To keep the article short, I won’t include the code in this part. The code is similar to the GDP feature above. I recommend you try to implement it yourself. I will include the link at the end of this article for reference. Social support of countries also has a strong and positive relationship with the Happiness Score. So, it makes sense that we need social support to be happy. People are also wired for emotions, and we experience those emotions within a social context.

Healthy life expectancy

A healthy life expectancy has a strong and positive relationship with the Happiness Score: if a country has a high life expectancy, it is also likely to have a high Happiness Score. Being happy doesn’t just improve the quality of a person’s life. It may increase the quantity of our life as well.
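Stepping back to the simple GDP regression above: for the curious, here is a minimal, dependency-free sketch of what `lr.fit` computes in the one-feature case, the closed-form least-squares intercept and slope. The function name and toy data are illustrative, not from the article's dataset.

```python
def fit_simple_ols(xs, ys):
    """Closed-form least squares for one feature:
    slope = cov(x, y) / var(x);  intercept = mean(y) - slope * mean(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data on an exact line y = 1 + 2x, so the fit recovers it exactly.
b0, b1 = fit_simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
print(b0, b1)  # → 1.0 2.0
```

On real data like the happiness scores, the points scatter around the line, and the same formula gives the best-fitting (least-squared-error) line rather than an exact one.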
I will also be happy if I get a long healthy life. You?

Freedom to make life choices

Freedom to make life choices has a positive relationship with the Happiness Score. Choice and autonomy are more directly related to happiness than having lots of money. They give us options to pursue meaning in our life, finding activities that stimulate and excite us. This is an essential aspect of feeling happy.

Generosity

Generosity has a weak linear relationship with the Happiness Score. Why does charity have no direct relationship with the Happiness Score? Generosity scores are calculated based on the countries which give the most to nonprofits around the world. Countries that are not generous are not necessarily unhappy.

Perceptions of corruption

The distribution of Perceptions of corruption is right-skewed: only a small number of countries have high perceptions of corruption, which means most countries have corruption problems. How does the corruption feature impact the Happiness Score? The Perceptions of corruption data are highly skewed, so it is no wonder the feature has a weak linear relationship. Still, as we can see in the scatter plot, most of the data points are on the left side, and most of the countries with low perceptions of corruption have a Happiness Score between 4 and 6. Countries with high perception scores have a high Happiness Score, above 7.

5. Visualize and Examine Data

We do not have big data with too many features. Thus, we have a chance to plot most of them and reach some useful analytical results. Drawing charts and examining the data before applying a model is a good practice because we may detect possible outliers or decide to do normalization. This step is not a must, but getting to know the data is always useful. We start with the histograms of the dataframe.
# DISTRIBUTION OF ALL NUMERIC DATA
plt.rcParams['figure.figsize'] = (15, 15)
df1 = finaldf[['GDP', 'Health', 'Freedom', 'Generosity', 'Corruption']]
h = df1.hist(bins=25, figsize=(16, 16), xlabelsize='10', ylabelsize='10')
seabornInstance.despine(left=True, bottom=True)
[x.title.set_size(12) for x in h.ravel()]
[x.yaxis.tick_left() for x in h.ravel()]

Next, to give us a more appealing view of where each country is placed in the World ranking report, we use darker blue for countries that have the highest rating on the report (i.e., are the “happiest”), while the lighter blue represents countries with a lower ranking. We can see that countries in the European and Americas regions have a reasonably higher ranking than the ones in the Asian and African areas.

'''World Map: Happiness Rank Across the World'''
happiness_rank = dict(type='choropleth',
                      locations=finaldf['Country'],
                      locationmode='country names',
                      z=finaldf['Rank'],
                      text=finaldf['Country'],
                      colorscale='Blues',
                      autocolorscale=False,
                      reversescale=True,
                      marker_line_color='darkgray',
                      marker_line_width=0.5)
layout = dict(title='Happiness Rank Across the World',
              geo=dict(showframe=False,
                       projection={'type': 'equirectangular'}))
world_map_1 = go.Figure(data=[happiness_rank], layout=layout)
iplot(world_map_1)

Let’s check which countries are better positioned in each of the aspects being analyzed.
fig, axes = plt.subplots(nrows=3, ncols=2, constrained_layout=True, figsize=(10, 10))
seabornInstance.barplot(x='GDP', y='Country',
                        data=finaldf.nlargest(10, 'GDP'),
                        ax=axes[0, 0], palette="Blues_r")
seabornInstance.barplot(x='Health', y='Country',
                        data=finaldf.nlargest(10, 'Health'),
                        ax=axes[0, 1], palette='Blues_r')
seabornInstance.barplot(x='Score', y='Country',
                        data=finaldf.nlargest(10, 'Score'),
                        ax=axes[1, 0], palette='Blues_r')
seabornInstance.barplot(x='Generosity', y='Country',
                        data=finaldf.nlargest(10, 'Generosity'),
                        ax=axes[1, 1], palette='Blues_r')
seabornInstance.barplot(x='Freedom', y='Country',
                        data=finaldf.nlargest(10, 'Freedom'),
                        ax=axes[2, 0], palette='Blues_r')
seabornInstance.barplot(x='Corruption', y='Country',
                        data=finaldf.nlargest(10, 'Corruption'),
                        ax=axes[2, 1], palette='Blues_r')

Checking Out the Correlation Among Explanatory Variables

mask = np.zeros_like(finaldf[usecols].corr(), dtype=bool)  # np.bool is removed in newer NumPy
mask[np.triu_indices_from(mask)] = True
f, ax = plt.subplots(figsize=(16, 12))
plt.title('Pearson Correlation Matrix', fontsize=25)
seabornInstance.heatmap(finaldf[usecols].corr(), linewidths=0.25, vmax=0.7,
                        square=True, cmap="Blues", linecolor='w', annot=True,
                        annot_kws={"size": 8}, mask=mask, cbar_kws={"shrink": .9})

It looks like GDP, Health, and Support are strongly correlated with the Happiness Score. Freedom correlates quite well with the Happiness Score; however, Freedom also correlates quite well with all the other features. Corruption still has a mediocre correlation with the Happiness Score.

Beyond Simple Correlation

In the scatterplots, we see that GDP, Health, and Support are quite linearly correlated with some noise. We find the auto-correlation of Corruption fascinating here, where everything is terrible, but if the corruption is high, the distribution is all over the place. It seems to be just a negative indicator of a threshold. I found an exciting package by Ian Ozsvald that goes further.
It trains random forests to predict features from each other, going a bit beyond simple correlation.

# visualize hidden relationships in data
classifier_overrides = set()
df_results = discover.discover(finaldf.drop(['target', 'target_n'], axis=1).sample(frac=1),
                               classifier_overrides)

We use heat maps here to visualize how our features are clustered or vary over space.

fig, ax = plt.subplots(ncols=2, figsize=(24, 8))
seabornInstance.heatmap(
    df_results.pivot(index='target', columns='feature', values='score').fillna(1)
    .loc[finaldf.drop(['target', 'target_n'], axis=1).columns,
         finaldf.drop(['target', 'target_n'], axis=1).columns],
    annot=True, center=0, ax=ax[0], vmin=-1, vmax=1, cmap="Blues")
seabornInstance.heatmap(
    df_results.pivot(index='target', columns='feature', values='score').fillna(1)
    .loc[finaldf.drop(['target', 'target_n'], axis=1).columns,
         finaldf.drop(['target', 'target_n'], axis=1).columns],
    annot=True, center=0, ax=ax[1], vmin=-0.25, vmax=1, cmap="Blues_r")
plt.plot()

This gets more interesting. Corruption is a better predictor of the Happiness Score than Support. Possibly because of the ‘threshold’ we previously discovered? Moreover, although Social Support correlated quite well, it does not have substantial predictive value. I guess this is because all the distributions of the quartiles are quite close in the scatterplot.

6. Multiple Linear Regression

In the third section of this article, we used a simple linear regression to examine the relationships between the Happiness Score and other features, and we found a poor fit. To improve this model, we want to add more features. Now, it is time to create some more complex models. We selected features at first sight by looking at the previous sections and used them in our first multiple linear regression. As in the simple regression, we printed the coefficients which the model uses for its predictions.
However, this time we must use the below definition for our predictions if we want to make calculations manually:

ŷ = θ₀ + θ₁·x₁ + θ₂·x₂ + … + θₙ·xₙ

where each θᵢ is the coefficient of the corresponding feature xᵢ. We create a model with all features.

# MULTIPLE LINEAR REGRESSION 1
train_data_dm, test_data_dm = train_test_split(finaldf, train_size=0.8, random_state=3)

independent_var = ['GDP', 'Health', 'Freedom', 'Support', 'Generosity', 'Corruption']
complex_model_1 = LinearRegression()
complex_model_1.fit(train_data_dm[independent_var], train_data_dm['Score'])

print('Intercept: {}'.format(complex_model_1.intercept_))
print('Coefficients: {}'.format(complex_model_1.coef_))
# Note: coefficient labels must follow the order of independent_var
print('Happiness score = ', np.round(complex_model_1.intercept_, 4),
      '+', np.round(complex_model_1.coef_[0], 4), '* GDP',
      '+', np.round(complex_model_1.coef_[1], 4), '* Health',
      '+', np.round(complex_model_1.coef_[2], 4), '* Freedom',
      '+', np.round(complex_model_1.coef_[3], 4), '* Support',
      '+', np.round(complex_model_1.coef_[4], 4), '* Generosity',
      '+', np.round(complex_model_1.coef_[5], 4), '* Corruption')

pred = complex_model_1.predict(test_data_dm[independent_var])
rmsecm = float(format(np.sqrt(metrics.mean_squared_error(test_data_dm['Score'], pred)), '.3f'))
rtrcm = float(format(complex_model_1.score(train_data_dm[independent_var],
                                           train_data_dm['Score']), '.3f'))
artrcm = float(format(adjustedR2(complex_model_1.score(train_data_dm[independent_var],
                                                       train_data_dm['Score']),
                                 train_data_dm.shape[0], len(independent_var)), '.3f'))
rtecm = float(format(complex_model_1.score(test_data_dm[independent_var],
                                           test_data_dm['Score']), '.3f'))
artecm = float(format(adjustedR2(complex_model_1.score(test_data_dm[independent_var],
                                                       test_data_dm['Score']),
                                 test_data_dm.shape[0], len(independent_var)), '.3f'))
cv = float(format(cross_val_score(complex_model_1, finaldf[independent_var],
                                  finaldf['Score'], cv=5).mean(), '.3f'))

r = evaluation.shape[0]
evaluation.loc[r] = ['Multiple Linear Regression-1', 'selected features',
                     rmsecm, rtrcm, artrcm, rtecm, artecm, cv]
evaluation.sort_values(by='5-Fold Cross Validation', ascending=False)

We knew that
GDP, Support, and Health are quite linearly correlated. This time, we create a model with just these three features.

# MULTIPLE LINEAR REGRESSION 2
train_data_dm, test_data_dm = train_test_split(finaldf, train_size=0.8, random_state=3)

independent_var = ['GDP', 'Health', 'Support']
complex_model_2 = LinearRegression()
complex_model_2.fit(train_data_dm[independent_var], train_data_dm['Score'])

print('Intercept: {}'.format(complex_model_2.intercept_))
print('Coefficients: {}'.format(complex_model_2.coef_))
# Note: coefficient labels must follow the order of independent_var
print('Happiness score = ', np.round(complex_model_2.intercept_, 4),
      '+', np.round(complex_model_2.coef_[0], 4), '* GDP',
      '+', np.round(complex_model_2.coef_[1], 4), '* Health',
      '+', np.round(complex_model_2.coef_[2], 4), '* Support')

pred = complex_model_2.predict(test_data_dm[independent_var])
rmsecm = float(format(np.sqrt(metrics.mean_squared_error(test_data_dm['Score'], pred)), '.3f'))
rtrcm = float(format(complex_model_2.score(train_data_dm[independent_var],
                                           train_data_dm['Score']), '.3f'))
artrcm = float(format(adjustedR2(complex_model_2.score(train_data_dm[independent_var],
                                                       train_data_dm['Score']),
                                 train_data_dm.shape[0], len(independent_var)), '.3f'))
rtecm = float(format(complex_model_2.score(test_data_dm[independent_var],
                                           test_data_dm['Score']), '.3f'))
artecm = float(format(adjustedR2(complex_model_2.score(test_data_dm[independent_var],
                                                       test_data_dm['Score']),
                                 test_data_dm.shape[0], len(independent_var)), '.3f'))
cv = float(format(cross_val_score(complex_model_2, finaldf[independent_var],
                                  finaldf['Score'], cv=5).mean(), '.3f'))

r = evaluation.shape[0]
evaluation.loc[r] = ['Multiple Linear Regression-2', 'selected features',
                     rmsecm, rtrcm, artrcm, rtecm, artecm, cv]
evaluation.sort_values(by='5-Fold Cross Validation', ascending=False)

When we look at the evaluation table, Multiple Linear Regression-2 (selected features) is the best. However, I have doubts about its reliability, and I would prefer the multiple linear regression with all features.
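The `adjustedR2` helper called throughout this section is never defined in the article. A plausible sketch, assuming the standard adjusted R² formula, which penalizes R² for the number of predictors (this is my reconstruction, not the author's code):

```python
def adjustedR2(r2, n, p):
    """Adjusted R-squared for a model with p predictors fit on n observations.
    Discounts R^2 so that adding useless features does not inflate the score.
    A plausible sketch of the helper used above, not the author's original code."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With 100 rows and 5 predictors, an R^2 of 0.8 is discounted slightly.
print(round(adjustedR2(0.8, 100, 5), 4))  # → 0.7894
```

This matches how the article calls it: `adjustedR2(score, df.shape[0], len(independent_var))`, i.e. R², row count, then feature count.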
https://towardsdatascience.com/happiness-and-life-satisfaction-ecdc7d0ab9a5
['Xuankhanh Nguyen']
2020-08-09 02:58:36.484000+00:00
['Machine Learning', 'Python', 'Happiness', 'Data Virtualization', 'Data Science']
Most “Influencers” are Broadcasters — Not Influencers
Most “Influencers” are Broadcasters — Not Influencers

Being interesting doesn’t equal being influential

Photo by Coco Championship from Pexels

Just because someone has a lot of followers on Instagram doesn’t automatically mean they have any influence over their audience. It’s possible to be a fun person to follow, without having any authority whatsoever. In fact, this is the rule, not the exception. I’ve consulted businesses who have worked with so-called influencers, been kind of one myself, and helped people with large audiences get deals with brands, so I’ve seen this from every side. It’s clear to me that most people, including myself, don’t have significant influence over masses of people. People with a big following simply have the ability to broadcast a message to a large audience. That’s not nothing, but don’t mistake it for influence.

Broadcasting Does Have Value

Some products sell themselves — the only thing you need to do is to be seen by your target audience. Say you sell a working jetpack for $500. You don’t need authority figures convincing people how awesome a jetpack is. The only thing you need to do is to get the attention of people. Your product does the rest. When you have something that’s easy to sell to a cold audience, people with large Instagram followings are great to work with. Show them numbers from past collaborations or Facebook campaigns, to convince them that your product sells like hot cakes. Strike up a fair commission deal and build yourself a small army of broadcasters. No need to worry about how influential these people are.

When Actual Influence is Needed

You benefit most from working with truly influential people when you have a great product with a lot of competition. If you sell iPhone cases, for example, which have become a commodity, you need to persuade people that your case is better than the others. This is where actual influencers are great.
They have the trust of their audience, and a record of recommending awesome products in the past. Just remember that true influencers don’t sell out for shitty products, because they know that it doesn’t take much to break the hard earned trust with their audience. You need something good for any kind of influencer marketing to work. How to Separate Influencers from Broadcasters True influencers are still as valuable as ever. It’s just difficult to distinguish between influencer and broadcaster, because they can look quite similar. A broadcaster might have an authentic and active following, but still lack influence. The key things to ask about, when negotiating an influencer deal, are sales numbers. That’s the only way to estimate how much influence someone has. Even then you have to account for the differences in your product and how good it fits the audience. Maybe the last product they sold actually sold itself — like a jetpack. For that reason, I tend to prefer influencer (or broadcaster) deals almost fully commission based, on whichever side I’m on. There’s a case to be made that brand awareness deserves a small base pay, but otherwise commission keeps things fair. Start out small and aim to build mutually beneficial, long-term relationships. You’ll quickly learn if someone is an influencer or a broadcaster. Both types can be useful, as long as you understand the difference.
https://medium.com/the-innovation/most-influencers-are-broadcasters-not-influencers-f19d29120808
['Sebastian Juhola']
2020-11-05 15:12:59.605000+00:00
['Social Media', 'Instagram', 'Marketing', 'Social Media Marketing', 'Influencer Marketing']
Are Humans Dumb?
You can read her article here. I’ve quoted her word-for-word (italics). My responses are in between. Imagine if you will eavesdropping on this conversation in a deli somewhere. Humanity isn’t ready for aliens. That was the thought expressed by a former Israeli space security chief, 87-year-old Haim Eshed. A man was quite credible with 30 years of experience in Israel’s space program. He admitted that aliens are real, and Trump knows about them, but the “Galactic Federation”, which is like an alien network, thinks that revealing that aliens exist would cause “mass hysteria”. You can read more in this article: Former Israeli space security chief says aliens exist, humanity not ready — The Jerusalem Post (jpost.com) Are we really shocked by the existence of aliens? We shouldn’t be. What I was shocked about was that recently the US government released evidence of otherworldly crafts (UFOs) and declared them to be real. So, if UFOs are real, wouldn’t that equate to aliens being real, unless the government is behind all those flying crafts? Who knows? Nothing makes sense anymore. Right you are! Things have not been making sense for quite some time now. There have always been what is called “aliens”. They’ve been around longer than we have. Ergo we’re the aliens but that’s a whole other topic. There is artwork all over the world depicting what these “aliens” look like not to mention how they travel. From cave drawings on up. Yes, even in the Bible. Their presence in the universe would also be an explanation for so many other things that are, currently, “unexplainable”. Take the pyramids in Egypt for one example. And why there have been giant leaps forward in technology to name two. I have been down so many rabbit holes from listening/reading/watching stories about Antarctica, the inner earth theory, alien bases on Mars, multi-dimensions and universes, UFOs in the Bible, the dark side of the moon, etc. 
There are so many rabbit holes to get lost in and at this point, I am tired, and I just want the truth. Trust me, Galactic Federation, we humans can handle it. If you’ve ever watched any of those ancient alien shows on the History Channel (or maybe it’s the Travel Channel…), really listen to how they describe aliens. It should all sound familiar if you’ve ever attended church or studied the Bible. What they are describing is God himself. It shocks me that people think that the two are mutually exclusive. The “beings” they refer to are exactly what we were taught angels are, growing up in the church. Scientists, however, will be damned before they admit that they believe in a higher being, let alone refer to said being as God. A lot of them are atheists. Yet they believe in the same things we do. Go figure. The Galactic Federation, as I understand it, is the equivalent of our U.N. What is the truth? Lately, there have been so many missions to space. Donnie T (Trump) recently created the Space Force; do we really think that they are doing this for fun? Obviously, something is happening in space and Donnie T is well aware, just like most, if not all, of the other presidents prior to him. I don’t know exactly what they know, and I am tired of assuming, so please save me, save us the time and energy, and just tell us what is happening. Like all things in life, we don’t need to wait for someone else to give us anything. There is a way to get that information for ourselves. It involves ascending to the 5D. We should be a little offended that the aliens don’t think that humanity is ready for the truth because we have not evolved enough as a race. But for years, versions of the truth have been revealed to us, so what are they afraid of? Offended? Good golly no! I’d be PISSED if they gave us all the intel that they have! Stupid is a harsh word, but if it quacks like a duck… We can’t handle homelessness, starvation, or even a presidential election.
There’s no way the average Joe Schmoe on the street would be able to wrap their heads around this. You’ve got one side of the aisle showing up with guns in camo to protect rights while the other side is looting and burning down entire cities. If I were a being watching this crap show on Earth, you wouldn’t be able to get me in the same universe with it. We are knowingly and intentionally polluting the planet and killing its inhabitants — humans, animals, and plants. We haven’t earned the right to know anything else as we have yet to put to good use the knowledge we already have. “Cast not your pearls before swine.” Are they afraid that revealing the truth would mean automatically admitting to the lies that we have been told about our world? News flash, we already know that we have been lied to. Or are they afraid that people will lose faith in government and realize that the Universe is bigger than us and we are just a small part of a vast system that does not revolve around us? Or worse yet, could the truth reveal that we were an experiment that went terribly wrong? Come on, aliens, give humanity some credit. The information is out there in some form; it has been hidden in plain sight for us to see. Stop holding it hostage. We the people lost faith in our government some time ago. Thus, the rush on Area 51. Project Bluebook was a hit show for a reason. People are hungry, but then hand them food and they still won’t eat. Our government has been willfully killing us and using us as lab rats for years. From giving the US military LSD to see what the side effects would be, to injecting Black men in the South with STDs, again all in the name of science. My plea to the aliens. Aliens, Galactic Federation, the government, if you could please relieve us of the additional time that we will be wasting going down more rabbit holes of misinformation, we would appreciate it. We are intelligent beings, with amazing abilities, and we would like to understand it all.
Some of us humans are willing to listen and absorb the knowledge, as long as it is the truth. People are tired and ready for change, so please stop holding the information hostage and…wake up humanity to the truth. As most women can attest, if you want something done and done right, do it yourself. Watch this guy’s video and then try it yourself. As soon as I’m on the road, trust and believe, I intend to do just that! They even have a group you can work with, so your energy isn’t drained trying to fly solo. Although solo is how I plan to begin. I want to have complete control over the situation to ensure, to my comfort level, that things are going in a direction that won’t cause an intergalactic war.
https://medium.com/illumination/are-humans-dumb-4f2259cc2a8f
['Terry L. Cooper']
2020-12-14 17:45:18.159000+00:00
['UFO', 'Ufos And Aliens', 'Science', 'Space Exploration', 'Aliens']
How to animate a pie chart with Victory in React Native?
I was using another library for graphics, but when Apple announced that UIWebView was deprecated, I had to change that library. So, I started to search on Google for libraries that don’t use UIWebView, and that’s when I found Victory. Victory is great! It has multiple charts, and the best thing about it is that you can customize almost everything.

What’s Victory?

Victory Native is a React Native library that offers different types of charts, such as line, bar, and pie charts. This library is one of the most frequently used alternatives for developers when it comes to adding graphics to an app. I’ve recently used it in one of my projects. I’ve used the <VictoryChart />, <VictoryLine />, <VictoryPie /> and <VictoryAxis /> components.
https://medium.com/wolox/how-to-animate-a-pie-chart-with-victory-in-react-native-db5997b991a5
['Matías Grote']
2020-11-06 14:16:38.362000+00:00
['Data Visualization', 'React', 'Mobile', 'React Native', 'Software Development']
How Educating an Audience & Creating Awareness Helped Allbirds Become a Billion-Dollar Sneaker Brand
Educating people and being transparent in their approach

To sell a product that is environmentally friendly and has a positive impact on nature, all you need to do as a business owner or a marketer is spread awareness of its benefits. Sneakers manufactured from sheep’s wool: Allbirds had the advantage of authentically pushing their brand out, being transparent about their process, and clearly sharing their impact on the environment. Many studies say that people are initially inclined to buy products from eco-friendly brands. But at the end of the day, many of them don’t. Here, though, Allbirds has not only been an eco-friendly brand. They ticked a lot of boxes for their customers, like:-
Being comfortable and long-lasting
Being minimalist, with simpler designs in a few selected colors
Being budget-friendly (starts at $95)
Besides this, I want to shed light on the essential steps that this brand took to sell millions of pairs of sneakers in just two years. They educated people and built awareness through their website. If you’re an Allbirds sneaker fan or have been following the brand for a long time, then you would know that the brand is known for its education and awareness campaigns. From the very beginning, they focused on providing products manufactured from natural, sustainable materials. After releasing their initial shoes into the market in 2017, Allbirds ran extensive campaigns educating and making people aware of their products and manufacturing process. Throughout their website and sometimes on Instagram, Allbirds have prioritized spreading the word 'footprint' to educate people. The best example is their webpage dedicated to Sustainability (screenshots by author). Here they have explained what a carbon footprint is — thus educating their readers! On the other side, Allbirds shares its work on zero emissions and is equally transparent about what materials go into the making of their sneakers.
Now, people know what Allbirds are made of, and they are aware of what they’re buying into + its impact on nature.

Doing it on social media too

Millennials and Gen Z spend a drastic amount of time on social media daily. Brands can't ignore the power of social media for their business, and Allbirds is no exception. They are on top of their social media game when it comes to educating and creating awareness, too. Their Instagram content and YouTube videos are an excellent example of this. On their Instagram, there are multiple occasions where Allbirds didn't shy away from spreading the word 'footprint', and they are always eager to share their products, which represent a positive impact on the climate. (Allbirds Instagram Story — Source: Instagram) Their YouTube videos have always been authentic, straightforward, and informational. The most popular videos on their channel are about sharing a message and talking about their footwear — enlightening people about their brand. The videos with titles like:-
Allbirds: Meet Your Shoes — Sheep Meeting
Allbirds: Meet Your Shoes — Tree Meeting
Allbirds: From Sugar To Shoe
Allbirds Wool Runners: Comfy Shoes Made From Sheep
have millions of views. All the videos tell a story of Allbirds sneakers being manufactured using natural materials like wool, sugarcane, and trees. This sends out a positive signal towards their end users regarding both the sneakers and the brand. Other videos with titles like:-
Allbirds: The Parents
Allbirds: The Gift
Allbirds: The Arrival
Allbirds: The Nutcrackers
are meant to show us the uncomfortable experiences we encounter during holiday seasons and share how Allbirds sneakers can make you feel good when wearing them during such times. Boom! Connecting people with emotions and experiences that everyone can relate to.
https://medium.com/swlh/how-educating-an-audience-creating-awareness-helped-allbirds-become-a-billion-dollar-sneaker-brand-5a3612e4fa7b
['Thakur Rahul Singh']
2020-12-15 18:42:39.781000+00:00
['Branding Strategy', 'Branding', 'Case Study', 'Marketing Strategies', 'Marketing']
4 Things You Can Study in Real Life that School Can’t Teach You
A balanced view of real-life vs. school

I’m not just going to rail against the education system and make some vague and empty statements about it being broken. I’m not going to devalue the idea of education in and of itself either. Education has value. You can’t separate the success you have in life completely from school. You have to learn how to read and write, do math, understand the basics of history, etc. Sure, maybe you didn’t technically need to go to a school to learn all of those things, but your parents had to put you somewhere while they worked. You get valuable experiences from education, too. College is too expensive, but it does provide a great incubating period for young adults to be on their own and develop social skills. And for those who say college doesn’t offer the tools you need for success beyond school, there are a bunch of empty career counseling and financial literacy classes on campus that would beg to differ. I just didn’t want you to get the idea that this would be another useless formal-education-bashing article. The point is that you can only gain certain lessons from the real world, those lessons are valuable, and some if not all of them take a lifetime to learn. Also, I want you to regain your identity as a student. You spent so much time studying the rules of the education system — years and decades of work. Don’t, then, abandon education altogether because you’re in the real world. Your study of life itself should be much more intense than your studies were in school. Yes, most of us leave school and just…see what happens next. Apply to 47 jobs indiscriminately, land one, marry the person we met at work, call it a day. I’m writing this to get you to think of your entire life as an opportunity to learn. Let life teach you instead of passively living it.
Take a look at these skills you can only learn from the school of life and use them to your advantage. The Two Things You Love and Fear Most Are there real-life consequences to your success and failure in the education system? Yes. Getting certain grades and going to certain schools can increase your job prospects, network, income, etc. But you can’t recreate most of the real-life success and failure scenarios in a classroom and the consequences are bounded. You can get a bad grade, but that consequence is known. The consequences for your decisions in real life, both good and bad, are infinite. Again, the education system isn’t entirely to blame, but many students go through the system and learn to live through the success and failure lens they developed in school. In real life, there is no binary right or wrong answer to a decision. In real life, you can make the right decision based on the information you have in the present, only to fail to get the outcome you want in the future. You can also make bad decisions and still get good outcomes. The education system has a much higher standard for success than the real world does. You need to get the most answers right to pass a test, but you only need to get a few decisions out of thousands right to be successful. It’s impossible to recreate all the different variables that lead to success or failure in the real world. Things like the combination of luck and skill, your environment, network, timing in decision making, instant vs. delayed gratification, etc. The moral of the story? Focus on defining success and failure based on what matters to you. Not me, not your friends, nor the powers that frame our culture. Also, don’t judge yourself based solely on an educational system view of those words. Allow yourself to be ‘wrong’ as many times as it takes until you get it ‘right.’ One of the Most Overlooked Factors in Success and Happiness There’s no class for getting over a heartbreak. 
There’s no exact rubric for choosing friends, romantic partners, spouses. You can’t recreate the experiences that shape your personal relationships in a classroom environment. You can get guidance, yes. There are tons of different resources that can teach you about things like social skills, maintaining healthy relationships, building a network, etc. Some say we should have more classes on these topics. And I agree. But you’ll ultimately learn those lessons from going through social and relationship situations in your real life. The question is, what lessons will you take away from these situations? Will you actively learn from them? Will you proactively prepare for them? I don’t have any exact prescriptions here. I’ve just been thinking about this topic lately. Your relationships and your ability to successfully interact with other humans have a huge impact on your life. These subjects should be taken more seriously than pretty much all others, right? Yet, we have a tendency to haphazardly fall into these situations rather than consciously planning for them. When’s the last time you sat down to really think about what you want in a partner, in a friend, in a colleague? Do you have this written down somewhere? I do. It seems weird to do at first, but articulating what you want helps you spot it when you see it. Are you actively working on making yourself a better partner, a better friend, a better colleague? Do you have the same traits you’re looking for in other people? Are you practicing useful strategies like:
Journaling — Take the time to understand the values you hold dear so you can embody them and search for them in others.
Active communication — Are you talking to your partner, your friends, and your colleagues to see if you’re both on the same page? Do you have open-minded, inviting, and honest conversations with them?
Self-improvement strategies — Are you doing the same things you expect of others, e.g., being mindful of your health, staying informed, learning new skills, and tracking your progress?
Having gone through several relationship ups and downs in the past few years, I’m focused on being way more intentional about them in the future. That’s the general theme of this post — take the same level of effort and rigor you used in school and apply it to your real life. Also, understand that real life is different from school. A bit confusing, I know. It’s funny, we spent decades taking notes, studying, reading, and preparing for tests. And then we reach real life and abandon all those habits when it comes to the situations that, you know, will shape our lives forever.

A Skill That’s Difficult to Teach but Worthwhile to Learn

Back to that financial literacy point I made earlier. I recently finished a book called The Psychology of Money by Morgan Housel. It’s not so much a book about personal finance and investing tips as it is a case study on emotions. He opens the book with a story about a janitor who retired with millions of dollars and a fancy hedge fund manager who lost all his money. One knew very little about finance but behaved the right way — made small deposits and let compounding do the work. The other knew a ton more about finance but couldn’t rein in his hubris. The lesson isn’t about finance. The lesson is that your education can’t teach you how to behave. There are plenty of educated people who make poor decisions. There are plenty of uneducated, which is a loaded word to begin with, people who make great decisions and succeed because of them.
No class can teach you valuable skills like:
Managing risk and reward — You can know all the odds upfront, but you still have to decide whether or not to pull the trigger
Delayed gratification — Finance is the perfect microcosm for this. You can look at the results of compound interest on a chart, but few people want to invest small sums of money for decades in a row without pulling the money out
Dealing with social pressure — Can you deal with looking foolish at first so you can win down the road?
I can go on here, but you get the point. You can learn about these behavioral concepts, but dealing with them in real life is much harder than understanding them in theory. What’s the answer? There is no perfect answer. You learn as you go and try to make the best decisions in real time, never quite getting it right. The idea, again, is to take the same level of rigor and apply it to your decision making. I keep journals with future goals and analyses of past decisions. I try to make a conscious effort to regulate my emotions and behavior even though I know it’s futile. In your case, you want to learn the ultimate lesson. Your behavior shows you who you really are. What you say doesn’t matter as much as what you actually do. Your future intentions don’t matter as much as your current behavior. You’re showing the world what you really care about by the way you behave.

The Most Valuable Lessons You Can Learn

In general, life teaches you the wisdom you can’t gain from education alone. And by education, I mean all forms of education. You can read all the self-improvement books you want, but the lessons only become real to you when you learn them in a real-life setting.
You can study and learn valuable skills about careers and business from a classroom, a book, a video, etc., but you still have to navigate your career and business, dealing with all the behavioral skills and variables I mentioned above. Use both. That’s the moral of the story. Both education and experience, the things life teaches you, are valuable. You can’t fully separate them from one another. Just try to give them more equal weight. Stop letting your life just unfold on its own without taking the lessons seriously. Don’t take life’s lessons less seriously than you take something as trivial as a grade. It’s such a weird phenomenon, seeing people bust their ass in school for so long, only to go on autopilot for the rest of their lives. You’re not done with education once you’re done with your formal education. You’re at the beginning of your real education. You’ll learn much more in the remaining 50-plus years of your life than the first 22. Well, you’ll learn if you make it a point to learn. You can easily take the situations in your life and learn the wrong lessons from them, or no lessons at all. This is what many people do. Don’t be one of them.
https://medium.com/publishous/4-things-you-can-study-in-real-life-that-school-cant-teach-you-e4f0e97d395
['Ayodeji Awosika']
2020-10-22 18:34:10.209000+00:00
['Self Improvement', 'Personal Development', 'Life Lessons', 'Education', 'Psychology']
Acrotrend and Nuffield Health’s Strategic Partnership
Thanks to the recent updates Acrotrend has helped us make to our organisation, we are now more competitively positioned to provide our customers with an enhanced service. DAVE ANKERS, TECHNOLOGY, STRATEGY & PLATFORM DELIVERY DIRECTOR, NUFFIELD HEALTH

We have been working as a strategic partner to Nuffield Health — the UK’s largest not-for-profit healthcare provider — to create a more amenable data environment for their current and future analytics needs and capabilities. Nuffield Health holds enormous quantities of data. This data was creating data silos and making it difficult not only to obtain a true and consolidated view of customers, but also to pull the business-critical reports required to guide their key strategy of ‘Connected Health’. Acrotrend’s team worked with Nuffield Health’s key business stakeholders to understand their functional and business goals from an insight and analytics perspective, and mapped these requirements onto a fit-for-purpose and ready-for-the-future tech stack. Our architecture and technology selection instantly made the data environment more amenable to their analytics needs and capabilities and, together with the implementation roadmap, made it really clear how Nuffield Health could get the most out of the proposed solution. Fill out the form here to read the full case study and find out more about how we did this. To find out more about how Acrotrend can help your business become truly data-driven, please contact us today to request a free consultation.
https://medium.com/acrotrend-consultancy/acrotrend-and-nuffield-healths-strategic-partnership-ad5ea18d1d4b
['Acrotrend Consultancy']
2019-11-06 11:34:53.061000+00:00
['Data Science', 'Big Data', 'Fitness Industry', 'Customer Experience', 'Customer Analytics']
Bad 1990s design habits
You can design better presentation slides by getting rid of ingrained habits that can go back decades. Sometimes I work with teenagers to teach them about presentation design. To my surprise, they often are much better students than “grown ups” who are supposed to benefit from decades of business experience. Here is a theory why. Transparencies for overhead projectors encouraged you to copy pages out of a book and uncover paragraphs or key points bullet by bullet. Moving to PowerPoint, people just kept writing these bullets. The first visuals that you felt compelled to project to an audience were data charts: lines, bars, columns. These types of graphs needed to have a title in the top left and a source at the bottom. Most slide designs today use a big title at the top left; other typography on the page is almost never bigger than the title. Very rarely, people leave the title out altogether. Pictures were low resolution and took a lot of memory, hence you could only put small images in a presentation document that you needed to email someone. PowerPoint was created as mouse-based drawing software, rising alongside Microsoft Windows. Everything could be dragged and resized easily to fit. Cropping an image was tricky. The first plasma TV screens confirmed to us that it was OK to stretch an image out of proportion, as long as it easily filled whatever you needed to fill. Word processors enabled us to ponder a sentence over and over, editing and adding words until it encapsulated everything we wanted to say. This leads to buzzword-loaded, fluffy, mission-statement-type business prose that you would never use in spoken conversation. We were not trained to write razor-sharp newspaper headings. In a word processor, the time it takes to read a document equals the total number of pages. So, to cut the time it takes to deliver your presentation, you need to cut slides.
If you still want to cover the same content, just reduce the font size and cram more information into a slide. Page count rules. When writing, you create thought flow from top to bottom, so in PowerPoint there is no need to use other visual techniques to express contradictions, overlaps, tensions, win-wins, from-to movements, transitions. There were only 3 types of fonts: sans serif, serif, and Comic Sans, so that’s what we use in Microsoft Office documents. To make a point, we use all those powerful software tools to add stuff: bold, underline, shadows, bright colors. It never occurred to us that by de-emphasising the things around what we want to emphasise, it would stand out naturally and more beautifully. Next time you design a presentation, think about how you would tackle it without the baggage from the 1990s, just like a teenager today.
https://medium.com/slidemagic/bad-1990s-design-habits-5dee41472dcd
['Jan Schultink']
2017-01-03 06:38:30.824000+00:00
['PowerPoint', 'Design', 'Presentations']
Stock Price Change Forecasting with Time Series: SARIMAX
Definition of a Time Series

There are different techniques for modeling a time series. One of them is the Autoregressive Process (AR). There, a time series problem can be expressed as a recursive regression problem where the dependent variables are values of the target variable itself at different time instances. Let’s say Yt is our target variable and there is a series of values Y1, Y2, … at different time instances; then, for all time instances t:

Yt = µ + ф(Yt−1 − µ) + ɛt

Parameter µ is the mean of the process. We may interpret the term ф(Yt−1 − µ) as representing “memory” or “feedback” of the past into the present value of the process. Parameter ф determines the amount of feedback, and ɛt is information present at time t that can be added as extra. Here, by “process”, we mean an infinite or finite sequence of values of a variable at different time instances. If we expand the above recurrence relation h steps back, then we get:

Yt = µ + ф^h(Yt−h − µ) + ɛt + фɛt−1 + … + ф^(h−1)ɛt−(h−1)

It is called the AR(1) process. h is known as the Lag. A Lag is a logical/abstract time unit. It could be an hour, day, week, year, etc. It makes the definition more generic. Instead of only a single previous value, if we consider p previous values, then it becomes the AR(p) process, and the same can be expressed as:

Yt = µ + ф1(Yt−1 − µ) + ф2(Yt−2 − µ) + … + фp(Yt−p − µ) + ɛt

So, there are many feedback factors like ф1, ф2, .., фp for the AR(p) process. It is a weighted average of all past values. There is another type of modeling known as the MA(q) process, or Moving Average process, which considers only new information ɛ and can be expressed similarly as a weighted average:

Yt = µ + ɛt + θ1ɛt−1 + θ2ɛt−2 + … + θqɛt−q

Stationarity & Differencing

From all of the equations above, we can see that if ф or θ < 1, then the value of Yt converges to µ, i.e., a fixed value. It means that if we take the average Y value from any two intervals, then it will always be close to µ, i.e., the closeness will be statistically significant. This type of series is known as a Stationary time series. On the other hand, ф > 1 gives explosive behavior and the series becomes Non-stationary.
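The stationary vs. explosive behavior is easy to check empirically. Below is a minimal sketch (my illustration, not code from the original article) that simulates the AR(1) recurrence above with numpy; the values of µ, ф, and the noise scale are arbitrary illustrative choices:

```python
import numpy as np

def simulate_ar1(mu, phi, sigma, n, seed=0):
    """Simulate Yt = mu + phi*(Y(t-1) - mu) + eps_t for n steps."""
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    y[0] = mu
    for t in range(1, n):
        y[t] = mu + phi * (y[t - 1] - mu) + rng.normal(0.0, sigma)
    return y

# With |phi| < 1 the process is stationary: the sample average of any
# long stretch of the series stays close to the process mean mu.
y = simulate_ar1(mu=5.0, phi=0.6, sigma=1.0, n=5000)
print(abs(y[:2500].mean() - 5.0) < 0.3)  # True
print(abs(y[2500:].mean() - 5.0) < 0.3)  # True
```

Re-running the same sketch with ф slightly above 1 (say 1.05) instead makes the series drift away from µ explosively, which is the non-stationary case described above.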
The basic assumption of time series modeling is stationarity. That’s why we have to bring a non-stationary series down to a stationary state by differencing. It is defined as:

∆Yt = Yt − Yt−1

Then, we can model ∆Yt again as a time series. It helps to remove the explosiveness stated above. This differencing can be done several times, as it is not guaranteed that doing it just once will make the series stationary.

ARIMA(p,d,q) process

ARIMA is the joint process modeling with AR(p), MA(q), and d times differencing. So, here Yt contains all the terms of AR(p) and MA(q). It says that if an ARIMA(p,d,q) process is differenced d times, then it becomes stationary.

Seasonality & SARIMA

A time series can be affected by seasonal factors like a week, a few months, quarters in a year, or a few years in a decade. Within those fixed time spans, different behaviors are observed in the target variable which differ from the rest. They need to be modeled separately. In fact, seasonal components can be extracted from the original series and modeled differently, as said. Seasonal differencing is defined as:

∆Yt = Yt − Yt−m

where m is the length of the season, i.e., the degree of seasonality. SARIMA is the process modeling where seasonality is mixed with the ARIMA model. SARIMA is defined by (p,d,q)(P,D,Q), where P, D, Q are the orders of the seasonal components.

SARIMAX & ARIMAX

So far, we have discussed modeling the series with the target variable Y only. We haven’t considered other attributes present in the dataset. ARIMAX considers adding other feature variables to the regression model as well. Here X stands for exogenous. It is like a vanilla regression model where recursive target variables are there along with other features. With reference to our problem statement, we can design an ARIMAX model with target variable percent_change_next_weeks_price at different lags, along with other features like volume, low, close, etc. But the other features are considered fixed over time and don’t have lag-dependent values, unlike the target variable.
The seasonal version of ARIMAX is known as SARIMAX.

Data Analysis

We will start by analyzing the data. We will also learn some other time series concepts along the way. Let’s first plot the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) using the statsmodels library:

import statsmodels.graphics.tsaplots as tsa_plots
tsa_plots.plot_pacf(df['percent_change_next_weeks_price'])

And then the ACF:

tsa_plots.plot_acf(df['percent_change_next_weeks_price'])

ACF gives us the correlation between Y values at different lags. Mathematically, the covariance for this can be defined as:

γ(k) = Cov(Yt, Yt+k) = E[(Yt − µ)(Yt+k − µ)]

A cut-off in the ACF plot indicates that there is no sufficient relation between lagged values of Y. It is also an indicator of the order q of the MA(q) process. From the ACF plot, we can see that the ACF cuts off at zero only. So, q should be zero. PACF is the partial correlation between Y values, i.e., the correlation between Yt and Yt+k conditional on Yt+1, .., Yt+k−1. Like the ACF, a cut-off in the PACF indicates the order p of the AR(p) process. In our use case, we can see that p is zero.

Decomposing components

We will now see how many components there are in the time series.
https://medium.com/towards-artificial-intelligence/stock-price-change-forecasting-with-time-series-sarimax-4f5ca053d464
['Avishek Nag']
2020-09-16 02:24:40.557000+00:00
['Machine Learning', 'Python', 'Sarimax', 'Statistics', 'Time Series Forecasting']
My Quest for Vegan Soup Dumplings
I even attempted to bring the miso soup dumpling of my dreams to life, and produced a dozen or so bland, mushy goo-balls. I used too much agar, and the alarmingly firm-from-the-fridge miso cubes never fully melted in their steam bath. The “soup” oozed from the dumpling like a jelly donut, but even thicker and more viscous. It’s a delicate balance: If you use too much agar, the soup filling is too firm and won’t melt in the dumpling. If you use too little it melts as you work; if you’re using vital wheat gluten, your faux-meatball is likely to absorb any excess liquid, and poof! There goes the soup in your soup dumplings. Ng solves this by using the bare minimum (it seems) of agar, but returning the dumpling filling to the freezer in between steps, so the agar stays cold and solid. But this can be trying and it drags out the dumpling-making process. On top of everything, I was — foolishly, insanely! — attempting this project in the middle of July, as heat waves came and went and came again. The dumpling fillings, no matter how cold to start, melted to the touch as I tried to wrangle all of the bits inside the wrapper: some of the stretchy gluten-meat, some of the frozen shards of jelly, and, as I began experimenting with different fillings, bits of sautéed vegetables. Ground meat holds together nicely. It’s pliable, sticky even. Vegan dumpling filling is not. It wants to fall apart. Perhaps worst of all, it’s wet. If I tried to overfill the dumplings, or even just adequately fill them, liquid would spill out. The dumpling dough would become slippery and impossible to close, refusing to cohere at the edges, no matter how many neat folds I produced. “Is this what it’s like to do brain surgery,” I wondered deliriously, sweat prickling my brow. It was time to try something different. 
Chris Santos, the chef and Chopped judge, is the creator of the original remix, the French onion soup dumpling, which he served at The Stanton Social from its opening in 2005 to its closing at the end of 2018. The dumplings were so popular that Santos also serves them at his new restaurant, Vandal. He has said these dumplings will probably be with him the rest of his life. His recipe is available online, and I set about veganizing it. I thinly sliced two medium onions and threw them into a hot pan with several generous glugs of olive oil and a pinch of salt and freshly ground pepper. I added two cloves of garlic, finely minced. I turned down the heat and waited for the onions to sweat, and then to soften and melt. After they were lightly caramelized, I poured in a cup of white vermouth and several sprigs of fresh thyme and rosemary from my herb pots. I let the wine and onions reduce down to a thick jammy texture, and then added a cup of mushroom broth. I had some frozen from the last time I rehydrated porcini mushrooms, and it was the exact rich, earthy flavor this soup wants. This is easy, I thought to myself. So far, the only trick is to make the most delicious onion soup you can. Simple! When the soup was the thickness of a loose jam — more onion than broth, but still some broth — I poured it into a baking dish and popped it into the freezer. What’s that you say? I forgot the agar? Something interesting and different about Santos’ recipe is that it doesn’t call for a thickener. Instead, he freezes the soup mixture, then slices and dices it into cubes that are folded into wonton wrappers. (Finding vegan wonton wrappers — or making your own — might be the hardest thing about this recipe.) Santos brings the corners of the wrapper up to create a little purse-like shape. This is easier when the wrappers and filling have softened a bit, but not too much. I experimented with other shapes: a triangle, two wrappers joined together to make an extra-large ravioli shape. 
Then I slid the shapes into a pan of hot oil, enough to cover the bottom but not to submerge the dumplings. They need to be turned to brown on all sides. This process is a race against time. I had turned out the sheet of frozen soup onto a cutting board, and as I formed the first few dumplings, the others were already beginning to return to their original, soupy state. On the other hand, frying is a lot faster than steaming, so dinner was ready relatively quickly. When all sides of the dumplings had browned, I fished them out of the oil and placed them on a sheet pan lined with paper towels. The Santos French onion soup dumpling is served with a crouton, but that sounded extra dry to me, so I omitted it. Instead, I poured a small bowl of balsamic vinegar for dipping. Had I been more interested in presentation, I would have made a balsamic reduction to drizzle over the dumplings, and scattered finely minced fresh chives over the top. Reader, two people ate all 24 in one sitting, even though the original recipe says it makes 36 and recommends preparing just 12 to serve and freezing the rest. Whoops! Just half of the dumplings, paired with a simple side salad, would have made a lovely, complete, and less gut-bursting meal for two, but I have no regrets. Of course, these dumplings also lacked the gushiness I set out to demand from my soup dumplings, and yet the unexpected delight of rich onion soup encased in a crispy outer shell made up for it. I was satisfied, refreshed, and ready to try my hand at vegan xiaolongbao again.
https://medium.com/tenderlymag/my-quest-for-vegan-soup-dumplings-309ec187498d
['Jessica S. Mckenzie']
2019-08-12 16:31:01.163000+00:00
['Food', 'Cooking', 'Recipe', 'Vegetarian', 'Vegan']
Imperfect Words Submissions *** UPDATED***
Welcome, and thank you for your interest in becoming a contributor to Imperfect Words! I created this modest little publication originally for my own pieces that didn’t fit anywhere else, but now I’m feeling a little lonely in here all by myself, and would love for you to join me! The guidelines are quite simple: The piece has to be your own — and in DRAFT form. Credit the image… always. Try to use proper title case. Spell-check and grammar-check your work (Grammarly is a great tool!) What is welcome: Poetry (of any kind); short stories (fiction or non-fiction — no length restrictions); opinion pieces (no length restrictions). What is not welcome: Hate speech and/or racism will not be tolerated. Blatant violence (unless it’s fictional and not opinion-based — ask if uncertain). Political stances / triggering pieces — there are publications for those types of pieces, and this is a peaceful place. Attacks on other writers — we’re all friends here. (Photo by Duy Pham on Unsplash) *** The publication reserves the right to fix any missed spelling or grammar mistakes, as well as correct title cases where appropriate. The publication also reserves the right to add an image if there isn’t one on the draft. *** If you think your work would fit in here and you’re interested in becoming a writer, simply comment below! :) (I’ll do my best to respond within a day, but please be patient as it might take a bit longer sometimes)
https://medium.com/imperfect-words/imperfect-words-submissions-e5b542d9fdbd
['Edie Tuck']
2020-02-27 17:00:53.328000+00:00
['Write For Us', 'Writing', 'Submission Guidelines', 'Misfits', 'Join The Club']
Improper Theft
Improper Theft It’s not robbery, it’s just how life flows through you… I never said she stole my money. My parents are getting it wrong. I mean, she’s my best friend! Why would she steal it, of all people? Suddenly, my door opens and Natasha walks in. She has a glum look on her face, as if the corpse of Hitler suddenly appeared in the US. In her right hand she’s holding, very tightly it seems, a small leather journal. I like that journal — it’s our history journal. Every time we find cool information about history that we want to keep, we write it in that journal. We’ve been recording in it for years, and it’s almost full. However, we haven’t put together an entry in it for over six months. I close the book I was reading and lay it gently on my bookcase. I nod toward Natasha, who takes the hint and plops onto the bed. “So what’s going on?” I ask, but in a lighter voice than I usually do. Something is wrong. “Nothing,” she responds quickly, but she doesn’t look at me. She’s staring intensely at my bookcase, as if searching for something. After a couple more moments of silence, I speak up. “What do you want to do today?” “Er…actually, Julia, I have something to say.” Oh no. “Alright. What is it?” “I was the one who stole the money.”
https://medium.com/sukhroop-the-storyteller/improper-theft-facac692d239
['Sukhroop Singh']
2019-09-09 02:16:55.198000+00:00
['Short Story', 'Life', 'Creativity', 'Creative Writing', 'Art']
Rule, Rule, Joke
Just a few days ago I wrote about comedy clichés; well, this is one of my favourite manoeuvres of all time. One so great that even when you obviously see it coming, you can’t help but listen and wait for the punch line. I’ve searched a bit and it seems it’s called The Triple. It consists of stating something that is true or reasonable (defining a rule), then something else that is also true or sensible for the same reason (confirming the rule), only to, right in sequence, mention a third element that breaks the rule and delivers the joke. Here’s an example I’m sure you remember… Beast: “I wanna do something for her… but what?” (Beauty and the Beast, 1991) Another great example, right in the middle of the fantastic “Not Just for Gays Anymore” by Neil Patrick Harris… “The Triple” starts at 2:55. So, yeah, embrace The Triple in your jokes and you’ll make me extremely happy.
https://medium.com/negligible/rule-rule-joke-42f9355ce6cd
['Hugo Cornejo']
2017-12-27 19:40:22.649000+00:00
['Comedy', 'Poetry', 'Writing', 'Neil Patrick Harris', 'Beauty And The Beast']
How I Built a Company With a $130M Valuation
In April 2012, I found the glimmer of what that answer might be. I read an article on the new JOBS Act, or Jumpstart Our Business Startups Act. It was signed by Obama and for the first time in 80 years allowed companies to raise capital online from anyone through the sale of securities (stock, debt, etc.). Suddenly, the door created by Kickstarter looked like it could become an option for companies that did not have consumer-facing products. Crowdfunding on a new scale. I thought to myself this could be humongous. It could transform finance as we know it, but I also knew it would be some time before the JOBS Act would be implemented. So I waited. I continued to do my research and grew the accelerator in the interim. The decision to pivot [Image: Ron Miller] Around this time, I met Ron Miller, a serial entrepreneur and master salesman, and we quickly hit it off as friends. He was interested in joining the accelerator, so I brought him on as a mentor. I also hired Johanna Cronin to manage the accelerator. As the accelerator grew, I continued to do my homework and read about the JOBS Act and the concept of investors buying shares online with a credit card. Others entered the space and began crowdfunding businesses using Title II of the JOBS Act, which was the first part of the JOBS Act to be implemented. But Title II wasn’t interesting to me as it only allowed companies to raise capital from accredited investors. My gut told me that this wasn’t the answer. [Image: Johanna Cronin] In many ways Title II is just a continuation of the way funding has been done, only updated to an online format. I saw the potential for the JOBS Act as something much greater, a potentially revolutionizing force if the cards played out right. To focus on accredited investors felt like a mistake: the way deals are structured, the way the investment platform would be presented, would all be different than if they were designed for the consumer. By the time the consumer could participate, we would have no credibility.
We would be a part of the same biased institution that we wanted to reshape. So we waited and continued work at the accelerator. I had no intention of stopping before I reached my goal of 60, which eventually I did, over the course of two and a half years. In November 2013, the SEC released a draft of Regulation Crowdfunding, which I found fascinating. I had a hunch that this was going to be a monster. Once I reached my goal of investing in 60 companies, the conversation between Ron and me became one of: do we do another cohort of startups in the accelerator, or should we pivot our business to equity crowdfunding? Can we do both? [Image: An early version of the StartEngine website, circa summer 2014] In the spring of 2014, it became clear that equity crowdfunding was the bigger opportunity and that running both the accelerator and developing this crowdfunding simultaneously wasn’t feasible, so we pivoted our business. The team became the three of us: Ron, who was the CEO for the first two years and is now the Chairman of the Board; Johanna, who is now our Director of Product, Marketing, and Services; and myself, the CEO. Landing our first client [Image: The second rendition of the StartEngine logo] We needed a name, and we needed a logo. The name StartEngine had some notoriety in LA already, which would be useful in the new endeavor. Accelerator programs are focused on raising capital for businesses anyway, so I figured if the name worked for an accelerator, then it would work for an equity crowdfunding platform. We built a prototype of the platform, keeping the name and logo of StartEngine, and then it became a waiting game until the JOBS Act was implemented. In the meantime, I continued mentoring the companies that had gone through the accelerator. We needed money to run StartEngine, but it felt disingenuous to go to institutional investors.
If we wanted to revolutionize finance, what kind of message does it send to our potential clients if we take money from the very institutions, the VCs, the banks, that we believe our platform lets entrepreneurs do without? So instead, we put our own money into the company and raised from friends who shared the same belief that finance needed to be disrupted in a big way to help tens of thousands of frustrated entrepreneurs get the capital they need to succeed. Then everything changed on the morning of March 23, 2015. We got a phone call the night before that the SEC would be voting on Regulation A+ in the JOBS Act. We hadn’t seen a draft or anything of Regulation A+ yet, so this was a surprise. The commissioners voted that morning, and it passed unanimously. The date was set for Regulation A+ to go live: June 2015. We had 60 days to find a client and build the software. We used Colab to build the software as we didn’t have an in-house development team yet, and we began the search for business. It was tough going. The market didn’t know what equity crowdfunding was. Eventually, we got connected to a car company by the name of Elio Motors via Darren Marble, a local entrepreneur who shared an interest in the crowdfunding space and was building his own marketing platform in LA. Darren introduced us, knowing that it would be a good match. [Image: Ron, Johanna, and I by an Elio car] We met with Paul Elio, the CEO of his namesake, on June 1st, signed him the next week, and the offering for Elio Motors, the highly efficient, light-weight car company, went into a public test-the-waters page on June 19, 2015. Within hours we had millions of dollars in reservations, most of which came from the Elio Motors community. These reservations were not commitments, just levels of interest in investing in the company before the offering was qualified by the SEC.
[Image: A screenshot of the Elio Motors page, a few days after launch] It was a smooth and painless process for our first time launching a company. When Elio finally went live that November, they raised $16.9M from over 6,600 investors. It was the first Regulation A+ offering ever, and it was a smashing success. StartEngine, Accelerated [Image: The current StartEngine logo] Elio didn’t quite spark the sales I thought it would, in large part because we had to educate the marketplace. I expected Elio’s success to be a catalyst. After the raise, I thought America would wake up and think, “I need to do this too.” But reality is slower than that. To be clear, I never doubted that StartEngine would work. I believed in it because we took a scattershot approach: throw pasta on the wall and see what sticks. Our business model targeted small businesses and consumers, huge segments of the audience. The market was simply too big for StartEngine not to work. The need for equity crowdfunding was there; we just had to educate the market to let it know the option was there in the first place. Eventually, things accelerated. The real turning point came when Regulation Crowdfunding was implemented in May 2016. Regulation Crowdfunding was inexpensive and cost-effective. Even better, it was easy. It opened StartEngine to a wider range of companies. Our platform began to make more sense as a viable funding option for companies. By the end of 2016, we did 10 launches. By the end of 2017, over 100. Halfway through 2018, we’ve already launched more companies than we did all of last year and nearly tripled our team.
https://medium.com/hackernoon/how-i-built-a-company-with-a-130m-valuation-b112b166bb49
['Howard Marks']
2018-07-18 07:21:52.243000+00:00
['Business Development', 'Crowdfunding', 'Funding', 'Bitcoin', 'Startup']
Learning Python: From Zero to Hero
This post was originally published at TK's Blog. First of all, what is Python? According to its creator, Guido van Rossum, Python is a: “high-level programming language, and its core design philosophy is all about code readability and a syntax which allows programmers to express concepts in a few lines of code.” For me, the first reason to learn Python was that it is, in fact, a beautiful programming language. It was really natural to code in it and express my thoughts. Another reason was that we can use coding in Python in multiple ways: data science, web development, and machine learning all shine here. Quora, Pinterest and Spotify all use Python for their backend web development. So let’s learn a bit about it. The Basics 1. Variables You can think about variables as words that store a value. Simple as that. In Python, it is really easy to define a variable and set a value to it. Imagine you want to store number 1 in a variable called “one.” Let’s do it: How simple was that? You just assigned the value 1 to the variable “one.” And you can assign any other value to whatever other variables you want. As you see in the table above, the variable “two” stores the integer 2, and “some_number” stores 10,000. Besides integers, we can also use booleans (True / False), strings, float, and so many other data types. 2. Control Flow: conditional statements “If” uses an expression to evaluate whether a statement is True or False. If it is True, it executes what is inside the “if” statement. For example: 2 is greater than 1, so the “print” code is executed. The “else” statement will be executed if the “if” expression is false. 1 is not greater than 2, so the code inside the “else” statement will be executed. You can also use an “elif” statement: 3. Looping / Iterator In Python, we can iterate in different forms. I’ll talk about two: while and for. While Looping: while the statement is True, the code inside the block will be executed. 
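The snippets this section walks through were images in the original post and didn't survive here; below is a minimal sketch of them, with names taken from the prose above (the string and float values are my own illustrative choices):

```python
# Variables: assigning values
one = 1
two = 2
some_number = 10000

# Other data types work the same way
true_boolean = True
my_name = "TK"    # a string
pi_value = 3.14   # a float

# Conditional statements: "if" evaluates an expression to True or False
if 2 > 1:
    print("2 is greater than 1")  # executed, because the condition is True

if 1 > 2:
    print("1 is greater than 2")
else:
    print("1 is not greater than 2")  # executed, because the "if" is False

# "elif" chains additional conditions
if 1 > 2:
    print("1 is greater than 2")
elif 2 > 1:
    print("2 is greater than 1")  # executed
else:
    print("1 is equal to 2")

# While looping: runs as long as the loop condition stays True
num = 1
while num <= 10:
    print(num)  # prints 1 through 10; when num is 11 the condition is False
    num += 1
```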
So, this code will print the number from 1 to 10. The while loop needs a “loop condition.” If it stays True, it continues iterating. In this example, when num is 11 the loop condition equals False . Another basic bit of code to better understand it: The loop condition is True so it keeps iterating — until we set it to False . For Looping: you apply the variable “num” to the block, and the “for” statement will iterate it for you. This code will print the same as while code: from 1 to 10. See? It is so simple. The range starts with 1 and goes until the 11 th element ( 10 is the 10 th element). List: Collection | Array | Data Structure Imagine you want to store the integer 1 in a variable. But maybe now you want to store 2. And 3, 4, 5 … Do I have another way to store all the integers that I want, but not in millions of variables? You guessed it — there is indeed another way to store them. List is a collection that can be used to store a list of values (like these integers that you want). So let’s use it: It is really simple. We created an array and stored it on my_integer. But maybe you are asking: “How can I get a value from this array?” Great question. List has a concept called index. The first element gets the index 0 (zero). The second gets 1, and so on. You get the idea. To make it clearer, we can represent the array and each element with its index. I can draw it: Using the Python syntax, it’s also simple to understand: Imagine that you don’t want to store integers. You just want to store strings, like a list of your relatives’ names. Mine would look something like this: It works the same way as integers. Nice. We just learned how Lists indices work. But I still need to show you how we can add an element to the List data structure (an item to a list). The most common method to add a new value to a List is append . Let’s see how it works: append is super simple. You just need to apply the element (eg. “The Effective Engineer”) as the append parameter. 
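A sketch of the for loop and List examples described above ("The Effective Engineer" and the relatives' names follow the prose; the second book title is my own stand-in):

```python
# For looping: range(1, 11) yields 1 through 10 (it stops before 11)
for i in range(1, 11):
    print(i)

# A List stores a collection of values
my_integers = [1, 2, 3, 4, 5]

# Indices start at 0 (zero)
print(my_integers[0])  # first element: 1
print(my_integers[4])  # fifth element: 5

# Lists can hold strings too, like a list of relatives' names
relatives_names = ["Toshiaki", "Juliana", "Yuji", "Bruno", "Kaio"]
print(relatives_names[4])

# append adds a new value to the end of a List
bookshelf = []
bookshelf.append("The Effective Engineer")
bookshelf.append("The 4 Hour Work Week")
print(bookshelf[0])  # The Effective Engineer
```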
Well, enough about Lists . Let’s talk about another data structure. Dictionary: Key-Value Data Structure Now we know that Lists are indexed with integer numbers. But what if we don’t want to use integer numbers as indices? Some data structures that we can use are numeric, string, or other types of indices. Let’s learn about the Dictionary data structure. Dictionary is a collection of key-value pairs. Here’s what it looks like: The key is the index pointing to the value. How do we access the Dictionary value? You guessed it — using the key. Let’s try it: I created a Dictionary about me. My name, nickname, and nationality. Those attributes are the Dictionary keys. As we learned how to access the List using index, we also use indices (keys in the Dictionary context) to access the value stored in the Dictionary . In the example, I printed a phrase about me using all the values stored in the Dictionary . Pretty simple, right? Another cool thing about Dictionary is that we can use anything as the value. In the Dictionary I created, I want to add the key “age” and my real integer age in it: Here we have a key (age) value (24) pair using string as the key and integer as the value. As we did with Lists , let’s learn how to add elements to a Dictionary . The key pointing to a value is a big part of what Dictionary is. This is also true when we are talking about adding elements to it: We just need to assign a value to a Dictionary key. Nothing complicated here, right? Iteration: Looping Through Data Structures As we learned in the Python Basics, the List iteration is very simple. We Python developers commonly use For looping. Let’s do it: So for each book in the bookshelf, we (can do everything with it) print it. Pretty simple and intuitive. That’s Python. For a hash data structure, we can also use the for loop, but we apply the key : This is an example how to use it. For each key in the dictionary , we print the key and its corresponding value . 
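A sketch of the Dictionary examples above. The name/nickname/nationality keys and the age of 24 come from the prose; the values are stand-ins for the author's own:

```python
# A Dictionary is a collection of key-value pairs
dictionary_tk = {
    "name": "Leandro",
    "nickname": "TK",
    "nationality": "Brazilian",
}

# Access values by key, just as Lists are accessed by index
print("My name is %s" % dictionary_tk["name"])
print("But you can call me %s" % dictionary_tk["nickname"])
print("And by the way I'm %s" % dictionary_tk["nationality"])

# A value can be any type -- here an integer for the "age" key
dictionary_tk["age"] = 24
print("I'm %d years old" % dictionary_tk["age"])

# Looping through a List with "for"
bookshelf = ["The Effective Engineer", "Things Fall Apart"]
for book in bookshelf:
    print(book)

# Looping through a Dictionary: the loop variable is the key
for key in dictionary_tk:
    print("%s --> %s" % (key, dictionary_tk[key]))
```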
Another way to do it is to use the iteritems method. We did name the two parameters as key and value , but it is not necessary. We can name them anything. Let’s see it: We can see we used attribute as a parameter for the Dictionary key , and it works properly. Great! Classes & Objects A little bit of theory: Objects are a representation of real world objects like cars, dogs, or bikes. The objects share two main characteristics: data and behavior. Cars have data, like number of wheels, number of doors, and seating capacity They also exhibit behavior: they can accelerate, stop, show how much fuel is left, and so many other things. We identify data as attributes and behavior as methods in object-oriented programming. Again: Data → Attributes and Behavior → Methods And a Class is the blueprint from which individual objects are created. In the real world, we often find many objects with the same type. Like cars. All the same make and model (and all have an engine, wheels, doors, and so on). Each car was built from the same set of blueprints and has the same components. Python Object-Oriented Programming mode: ON Python, as an Object-Oriented programming language, has these concepts: class and object. A class is a blueprint, a model for its objects. So again, a class it is just a model, or a way to define attributes and behavior (as we talked about in the theory section). As an example, a vehicle class has its own attributes that define what objects are vehicles. The number of wheels, type of tank, seating capacity, and maximum velocity are all attributes of a vehicle. With this in mind, let’s look at Python syntax for classes: We define classes with a class statement — and that’s it. Easy, isn’t it? Objects are instances of a class. We create an instance by naming the class. Here car is an object (or instance) of the class Vehicle . Remember that our vehicle class has four attributes: number of wheels, type of tank, seating capacity, and maximum velocity. 
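A sketch of the key-value iteration and class examples above. Note that the iteritems method mentioned in the prose is Python 2; the Python 3 equivalent is items():

```python
dictionary = {"some_key": "some_value"}

# items() yields key-value pairs; the two loop names are arbitrary
for attribute, value in dictionary.items():
    print("My %s is %s" % (attribute, value))

# A class is defined with a class statement -- and that's it
class Vehicle:
    pass

# An object (instance) is created by "calling" the class
car = Vehicle()
print(car)
```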
We set all these attributes when creating a vehicle object. So here, we define our class to receive data when it initiates it: We use the init method. We call it a constructor method. So when we create the vehicle object, we can define these attributes. Imagine that we love the Tesla Model S, and we want to create this kind of object. It has four wheels, runs on electric energy, has space for five seats, and the maximum velocity is 250km/hour (155 mph). Let’s create this object: Four wheels + electric “tank type” + five seats + 250km/hour maximum speed. All attributes are set. But how can we access these attributes’ values? We send a message to the object asking about them. We call it a method. It’s the object’s behavior. Let’s implement it: This is an implementation of two methods: number_of_wheels and set_number_of_wheels. We call it getter & setter . Because the first gets the attribute value, and the second sets a new value for the attribute. In Python, we can do that using @property ( decorators ) to define getters and setters . Let’s see it with code: And we can use these methods as attributes: This is slightly different than defining methods. The methods work as attributes. For example, when we set the new number of wheels, we don’t apply two as a parameter, but set the value 2 to number_of_wheels . This is one way to write pythonic getter and setter code. But we can also use methods for other things, like the “make_noise” method. Let’s see it: When we call this method, it just returns a string “VRRRRUUUUM.”
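Putting the section together, here is a sketch of the Vehicle class described above, with the constructor, a pythonic getter/setter via @property, and the make_noise method (the underscore-prefixed internal attribute names are my convention, not shown in the prose):

```python
class Vehicle:
    # The __init__ ("constructor") method sets the attributes on creation
    def __init__(self, number_of_wheels, type_of_tank, seating_capacity,
                 maximum_velocity):
        self._number_of_wheels = number_of_wheels
        self._type_of_tank = type_of_tank
        self._seating_capacity = seating_capacity
        self._maximum_velocity = maximum_velocity

    # getter: read the attribute value
    @property
    def number_of_wheels(self):
        return self._number_of_wheels

    # setter: assign a new value to the attribute
    @number_of_wheels.setter
    def number_of_wheels(self, number):
        self._number_of_wheels = number

    # a behavior that is neither a getter nor a setter
    def make_noise(self):
        return "VRRRRUUUUM"

# The Tesla Model S from the text: 4 wheels, electric, 5 seats, 250 km/h
tesla_model_s = Vehicle(4, "electric", 5, 250)
print(tesla_model_s.number_of_wheels)  # 4

# With @property, the methods work like attributes
tesla_model_s.number_of_wheels = 2
print(tesla_model_s.number_of_wheels)  # 2

print(tesla_model_s.make_noise())  # VRRRRUUUUM
```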
https://medium.com/free-code-camp/learning-python-from-zero-to-hero-120ea540b567
[]
2020-05-23 20:17:37.134000+00:00
['Python', 'Programming', 'Coding', 'Web Development', 'Software Development']
Intercepting ADF Table Column Show/Hide Event with Custom Change Manager Class
Ever wondered how to intercept the ADF table column show/hide event from the ADF Panel Collection component? Yes, you could use ADF MDS functionality to store user preferences for visible table columns. But what if you want to implement it yourself without using MDS? Actually, this is possible through a custom persistence manager class. I will show you how. If you don’t know what I’m talking about, check the screenshot below: this popup comes out of the box with ADF Panel Collection and helps to manage visible table columns. Pretty useful, especially for large tables: Obviously, we would like to store the user preference so that the next time the user comes back to the form, he sees the previously stored setup for the table columns. One way to achieve this is to use the out-of-the-box ADF MDS functionality. But what if you don’t want to use it? Still possible — we can catch all changes done through the Manage Columns popup in a custom Change Manager class. Extend from SessionChangeManager and override only a single method — addComponentChange. This is the place where we intercept changes and could log them to the DB, for example (later, on form load, we could read the table setup and apply it before the fragment is rendered): Register the custom Change Manager class in web.xml: The Manage Columns popup is out-of-the-box functionality offered by the ADF Panel Collection component: The addComponentChange method will be invoked automatically, and you should see similar output when changing table column visibility: Download the sample application code from my GitHub repository.
https://medium.com/oracledevs/intercepting-adf-table-column-show-hide-event-with-custom-change-manager-class-d8d264979979
['Andrej Baranovskij']
2019-02-22 14:24:29.893000+00:00
['Oracle Adf', 'Java']
The Lottery Ticket Hypothesis
The Lottery Ticket Hypothesis When randomness works in our favour Development towards incrementally faster and easier-to-train neural networks has been an active area of study during the last decade. While many methods and techniques have evolved from this — Batch norm, Adam and residual connections, just to name a few — what if the true answer to more efficient models lies in their random initialisation? This hypothesis is strengthened by some key observations. In particular, researchers studying pruning have found that it’s possible to train relatively smaller networks successfully as long as their parameters are initialised properly. Otherwise, such small models often struggle during training, which in many cases is attributed to their limited capacity. Below are two quotes from researchers which highlight these findings and the role initialisation plays. Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity — Li et al. During retraining, it is better to retrain the weights from the initial training phase for the connections that survived pruning than it is to reinitialise the pruned layers… gradient descent is able to find good solutions when the network is initially trained but not after re-initialising some layers and retraining them — Han et al. It seems that if we train a model, prune the weights of smallest magnitude, and then reinitialise the resulting model with its original parameters, we end up with a small model that is trainable. If the same pruned model, on the other hand, is reinitialised with random parameters, the performance achieved is often significantly lower, if training succeeds at all. It can, therefore, be assumed that the initialisation uncovered by pruning holds some property that allows the subnetwork to be effectively trained. But there is actually more to it than that!
In this article, we explore the findings of Frankle and Carbin in their paper The Lottery Ticket Hypothesis, where they thoroughly study this phenomenon and discuss its implications. The Hypothesis The hypothesis is postulated as follows: A randomly-initialised, dense neural network contains a subnetwork that is initialised such that — when trained in isolation — it can match the test accuracy of the original network after training for at most the same number of iterations. The hypothesis states that the initialisation of this subnetwork allows us to train this smaller network for fewer iterations while at the same time reaching a higher test accuracy! This almost sounds too good to be true. Fortunately, the authors strengthen their claims through a thorough analysis of this phenomenon. The process of finding the subnetworks referred to in the hypothesis is illustrated with an animation in the original post.
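The train-prune-rewind loop used to find these subnetworks can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the "training" step is faked with random scaling, real lottery-ticket experiments prune over several iterations on actual networks, and all names here are my own:

```python
import random

def magnitude_prune(trained_weights, prune_fraction):
    """Return a 0/1 mask that drops the smallest-magnitude weights."""
    n_prune = int(len(trained_weights) * prune_fraction)
    # indices sorted by absolute value: the first n_prune get pruned
    order = sorted(range(len(trained_weights)),
                   key=lambda i: abs(trained_weights[i]))
    mask = [1] * len(trained_weights)
    for i in order[:n_prune]:
        mask[i] = 0
    return mask

random.seed(0)

# theta_0: the original random initialisation
theta_0 = [random.uniform(-1.0, 1.0) for _ in range(10)]

# stand-in for training: weights drift away from their initial values
trained = [w * random.uniform(0.5, 2.0) for w in theta_0]

# prune 50% of the weights by magnitude...
mask = magnitude_prune(trained, prune_fraction=0.5)

# ...and rewind the survivors to their ORIGINAL values: the "winning ticket"
winning_ticket = [m * w for m, w in zip(mask, theta_0)]
print(sum(mask), "of", len(mask), "weights survive")
```

The key step, per the hypothesis, is the rewind: the surviving connections are reset to their original initialisation theta_0, not kept at their trained values and not re-randomised.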
https://medium.com/dair-ai/the-lottery-ticket-hypothesis-7cd4eae3faaa
['Viktor Karlsson']
2020-08-07 22:16:00.914000+00:00
['Machine Learning', 'Summary', 'Artificial Intelligence']
Beginning Python Programming—Part 5
Functions do not necessarily have to return a value. Instead, we could nest the print statement to handle error logging for us like this: Now we have taken print("Aww...") and encapsulated it into a function that we can call any time we need to. This is called decomposition. If composition is building how a section of code works, decomposition is extracting the code re-used over and over again and making it easier to reference. For this simple example, we could have just written the print statement since it was only one line, but for the sake of brevity, perhaps a bit of laziness, this should be sufficient to get the idea. Since this function doesn’t return a value, we don’t have to use the return keyword when it finishes. We can use return, but by the time all of the code inside the body has been performed and we’ve reached the end of the function’s body, the function returns anyway. The only time you might use the return keyword in a function that doesn’t return a value is when you want to exit the function before all of the code has run. NOTE: If you put in a return statement outside of a conditional check, certain editors such as PyCharm will state “This code may never be executed” or something to that effect. It’s just a tip saying “We know you did this, but here is an opportunity for you to clean up the code that will never be used, or fix your return statement so the code below has a chance to run.” “So what’s up with the guitar Bob?” I didn’t just include the guitar at the top of the page for no reason. I wanted to give a real-world example of how functions should work. Imagine, if you will, that the guitar is perfectly tuned to A440, i.e. Concert A. If I were to play the 1st string (the bottom E string), you would hear a perfect E note. No matter how many times I plucked that 1st string, you’d always hear the same perfect E. This is an example of a function. When we play the string, it vibrates at a certain hertz and returns sound.
This could be relatable to the body and return of the function: the pluck is the call, the vibration is the body doing its work, and the sound is the return value. The general idea is that we don't care how the function does its work; we just want it to do what we expect when we call it. One more thing… If you know you want to use a function but haven't quite figured out what it should do, you can use the pass keyword as a placeholder so the otherwise empty body is still valid and simply does nothing. This can also be used in if statements. As a quick example, check out this start for a function: def calculate_driving_distance(starting_city, ending_city): pass Photo by Annie Spratt on Unsplash Scope On to the final section: scope. So far we have experienced scope. We experienced it with functions, if statements, while loops, and for loops. In a general sense, we've been covering it all along. So what is scope? Scope defines where an object lives in code. "OK Bob, you are crazy, you just told me an inanimate object has a life." No really, in programs, we make them come to life, we make them perform work. Why not consider it this way: Objects in code have a lifespan. For most of what we have been doing since the beginning, your variables have been created when the program starts, and when the program ends, they go away. That is considered the lifespan of an object. When you create an object, which is just a general name encompassing things like variables and constants, it has a set lifespan. The lifespan of an object is determined by where it's placed in the program. If you place an object on its own in a program, like my_object: str = "Hello World", it will last for as long as the program is running. As soon as the program stops, my_object is removed from memory. The problem is we can't just load everything into memory. We'd run out of space, so we instead pick and choose what should be available at any given point in a program by defining its scope.
So I've talked about what scope is, but let's see some practical examples of how scope can be defined and used in your code and how it makes you think about where to put your variables: I've defined all kinds of levels of scope in this, so let's start from the top. my_global_integer is an object that can be used anywhere in code as long as the program is still running. my_global_function is a function that can be used anywhere in code as long as the program is still running; however, the name variable inside of the body can only be used inside of the body. As soon as the function exits, the name variable along with its value is removed from memory. If we were to try to use the name variable outside of the function, Python would raise a NameError; assigning to name at the top level would instead create a brand-new global variable that has nothing to do with the local one. Likewise, if you define name in one function and again in another, each function gets its own local name at function scope. my_global_returning_function is exactly the same as my_global_function except it returns a value, and you might be keen to think it returns the variable created inside of the body because that's what it returns, but it actually returns only the value of the variable. When the function reaches the end of the body, the variable name is removed from memory and you just have a copy of the value that you hopefully assigned to another variable. Since we know that the returning function returns "Katy", this if statement would be comparing if "Katy" == "Katy", and yes, they are the same! So we reference back to the global_name variable from the global scope and grab the value from there (because we assigned a value before we performed the evaluation). However, let's say the name returned was something else like "Sophie", which is not the same as "Katy". In the else statement, there is both a constant (by Python convention, just an uppercase name) and a variable that are created and initialized. They can only be used while we are inside the indented body of the else statement.
If we were to have a variable created in the if portion of the if statement, it could only be used in the if portion. We can still print my_global_integer and call my_global_function, because they were defined outside of the body. Something special about Python being an interpreted language is that if I were to mistype my_global_integer on line 16 and use myGlobalInteger, the program would still run. Python compiles your whole file to bytecode up front, but it doesn't resolve names until a line actually executes, so a typo on a line that never runs won't raise an error. This is different in many compiled languages, where every identifier is checked before the program ever runs. If you forget everything else I said, just know that you can always go deeper with a variable, but you can never go outside of the body where the variable was initially declared. Just imagine that whenever you leave the body that contains a variable, Bones says, "He's dead, Jim!" Not much of a Star Trek fan? Well, I know a lot of people that like Hotel California and it works on much the same premise: you can never leave. Not an Eagles fan either? Well, I'm sure you can come up with your own mnemonic to help you remember how scope works if you feel like you need one. And that is scope. Summary So today, with the help of tangled shoelaces, we realized the code we have written so far is a bit sloppy. We started off with three principles to help us imagine how our code should look: Separation of Concerns, the Single Responsibility Principle, and Don't Repeat Yourself. We then talked about Functions and how they can help us abide by these principles. I strongly recommend you practice writing your own functions so you get the hang of it and understand what is and is not allowed. We learned about how to exit early from functions using return, and how to use pass as a placeholder when there's nothing to execute yet. Finally, we talked about Scope.
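The scope walkthrough above referred to a code listing that was not preserved in this copy of the article, so here is a hedged reconstruction based on the names the prose uses; the literal values (16, "Bob", "Sophie") are my guesses, not the author's:

```python
# Reconstruction of the scope example described in the walkthrough.
my_global_integer: int = 16            # global scope: lives until the program exits

def my_global_function():
    name = "Bob"                       # local: gone as soon as the function returns
    print(name)

def my_global_returning_function() -> str:
    name = "Katy"
    return name                        # only a copy of the *value* escapes the function

global_name = "Katy"

if my_global_returning_function() == global_name:
    print(my_global_integer)           # globals are still visible inside the if body
else:
    MAX_TRIES = 3                      # a "constant" (uppercase by convention)
    other_name = "Sophie"              # both exist only inside this else body

# print(name)  # NameError: `name` only ever lived inside the functions
```

Running it prints "Katy"'s comparison result indirectly: the if branch fires and prints the global integer, while the names created in the else branch never come into existence at all.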
Just a tip: declare your variables at the lowest scope where they will be used, then as you need a variable available in other places, move its declaration to a higher scope. This will become more apparent in future examples as we move on. Suggested Reading Python Tutorial 4.5 through 4.7 What's Next You made it! Up next we will be talking about Classes, Properties, and Methods. If you've been following along, you already know Properties and Methods. I just need to introduce you to Classes first.
https://medium.com/better-programming/beginning-python-programming-part-5-3c7cfa3cd701
['Bob Roebling']
2019-05-31 03:08:31.306000+00:00
['Function', 'Python Programming', 'Python', 'Programming', 'Scope']
Beliefs vs. Knowledge: It’s Important to Know the Differences
Photo by Jaredd Craig on Unsplash by William Seavey One thing I know about the state of human existence is that there is a dichotomy, and a constant tension, between beliefs and knowledge that has often led our species to the precipices of survival. Only humans have beliefs because, as the most highly developed species on earth, our cerebral cortex is capable of evaluating and analyzing beyond the necessities of even most mammals — which are to find food and shelter, protect themselves, bear and raise young, and repeat. But humans have the luxury in most cases to form beliefs, whether or not they are connected to the physical world in which most animals must toil. Beliefs might loosely be defined as ideas and thoughts that are not based in actual knowledge (we will get to that definition soon) but in emotional, mystical or intellectual "revelations." Remember, no species but ourselves has them, and "the others" pursue the challenges and rewards of LIFE (which as far as we know is unique to our one planet in the solar system despite our ability to now travel in space) without an intellectual framework to guide them. Beliefs are religious, political and moral, and are often rather arbitrarily conceived and perhaps far too universally applied (at least in my view). Knowledge, on the other hand, is generally thought to be gained by direct observation and scientific or deductive thought, and is provable through a variety of steps that can be taken and verified by a body of hopefully well informed persons ("vetted"). People can and will believe anything they want to, as it's part of the freedom we have as the dominant species. In America we can believe in gun ownership (enshrined in our Second Amendment) even though no one is required to own guns, or even chooses to. The KNOWLEDGE that gun ownership clearly can lead to mass murders SHOULD be persuasive in discouraging idle ownership, but many people hold to their beliefs in their right, under our Constitution, to bear arms.
(This is clearly a political belief.) They can and will believe in God even though no one has been able to actually prove his (or her?) existence, or that Jesus Christ was his son and the only human to come back from the dead (via the Resurrection). The devout will say the proof for God is in the Bible, Koran or other documentation handed down, but humans have been writing fiction and non-fiction for thousands of years and the lines between them have often blurred. I'm dubious of the word "faith." (I tend to write non-fiction about many subjects because I believe in the old adage that "truth is stranger than fiction," so why bother with anything less than the truth? And boy, has my own life had a lot of truth — and it has made for some good stories.) By now you probably have concluded that I am not terribly fond of beliefs. I believe in love, even though it, too, is hard to prove outside of one person's perceptions and experiences. I believe humans, due to our evolutionary benefits, are possibly the most intelligent creatures there ever were, but I can't prove that either. I believe that other people's beliefs should never be imposed on you by force, deception, or manipulative proselytization. I believe humans should strive to, as Rodney King once said, "all get along." (The fact that we DON'T is a clear flaw in our evolutionary design.) And here's what I DON'T believe. (Maybe it's a non sequitur that I don't believe in most beliefs, yet here are the ones I DO believe aren't true.) I see no evidence that I will have another lifetime unless it is via the cells and molecules in my body somehow reconstituting themselves in a different being — whether human or not. (If more people felt this way I think there would be even more emphasis on making the most of your ONE life, and not wishing vainly for another one.) To put it in three words: one and done.
I don't believe anyone should have to die by another's hand, but I also don't believe there is retribution in an afterlife (sadly). I believe we should try and convict murderers, and probably execute them for crimes they consciously committed. A lifetime of remorse in prison (at taxpayers' expense) seems an odd and futile response to killing someone, unless it was in self-defense or by accident, of course. But I can understand people's discomfort with the state making these choices. Killing someone in the name of religion is equally abhorrent, yet this is exactly what Islamic radicals around the world are doing; they, as I stated before, have no more right than anyone else to impose their beliefs on others. I find it ironic that ANY belief could justify killing, but knowledge is a different thing. (If you start a war that kills and cripples others out of some ideological or religious belief, you need to pay.) I do have a problem with killing hundreds of thousands as we did when we dropped bombs on Nagasaki and Hiroshima — it wasn't the average Japanese citizen who was suicidal (as it turned out) but the leadership that refused to surrender. Same goes for the Nazi party, or Serbian radicals. When it comes to knowledge, we have tons now in the 21st century. Our health and medical sciences can cure many illnesses. We know that democratic societies are generally productive and fair to most. We know how to make things that have greatly improved our standards of living. We have the solutions to pollution and global climate change if we will put in the effort — and not deny their existence. We know how to live a long life if we take the proper precautions (and should be grateful when we do). I write this from a place of privilege, and I know it. I'm neither really rich nor poor, but I live in the United States, still arguably Great (sorry, Mr. Trump). Sure, we could do better — Canada does, as I write in my book at americanada.us (no, I'm not Canadian).
FYI, it is KNOWLEDGE that led to the last conclusion, carefully documented. (Remember, I don't write fiction.) Inequities and abuses abound everywhere, but at least in the "first" world there are policies and procedures to rectify some of them. I don't expect everyone reading this to agree with it all, but I have tried to have "common sense" all my life and not become polarized one way or the other. William Seavey is an author of more than a half dozen books (see williamseavey.com), a retirement consultant, and a bed and breakfast owner in California.
https://medium.com/journal-of-journeys/beliefs-vs-knowledge-its-important-to-know-the-differences-f64843ac729e
['William Seavey']
2019-09-11 11:50:47.511000+00:00
['Belief', 'Intelligence', 'Books', 'Religion', 'Knowledge']
Will we ever replace paper?
Will we ever replace paper? Why physical experiences are fighting back against the future of screens I have some sort of a feeling that paper could never be replaced. I mean, despite our best efforts paper has already been replaced across industries over the last decade in an effort to make everything electronic — from the data we store to the way we communicate, digital connectivity has vastly improved every inch of business. It just would never make sense for an organisation to operate intentionally on paper anymore. Still, think of any task you could perform with paper and I guarantee there is a digital counterpart to it. Paper is no longer essential due to the technology available to us, yet it still exists with no sign of going anywhere. Before you decide this argument is ridiculous, that it is all about paper, let me assure you it's not. It's about the core experience paper offers us and why it could perhaps never be replaced. This is not only about paper; it's about how so many digital touch points are failing to address the needs they were designed for. "For the past five years, how we design services has been dictated and limited by the touch points that were available to us — the PC, mobile and analog touch points. Much emphasis was placed on creating experiences delivered through digital screens and as a result, people spent more time interacting via device than in person." — FJORD Ok, so it may not feel like these digital touch points are failing us, and who am I to say they are, but there's a strong sense that we are beginning to sub-consciously fight back against them. Think of the last "to-do" list you made: was it on paper, Post-its or something digital? Maybe your iPhone notes or Trello? "The future of screens should be about blending physical and digital experiences" For the majority of us, I can comfortably say we used something analog like paper, and there's a stronger reason behind this than some may think.
It can be easier or faster, or simply what's available at the time, but those are only surface arguments that can also be made for digital touch points. One of the main reasons we still use paper for to-do lists is the sensory experience it stimulates: the touch of the page, the link between your pen, hand and mind, and the tactility of having your next goal in your hand. The action of writing or reading something physical has a much stronger sense of connection than, say, a digitally typed note that may or may not send you a reminder 15 minutes before the task needs to be completed. This emotional stimulus, the dopamine it sends to your brain, is something that has arguably not yet been recreated by digital to-do list services. Paper is just one small example of this sub-conscious dissatisfaction with our digital touch points. Do I even need to mention the growing angst about our screen addiction, or our need to now separate core technology out of one device? Where once the goal was to pack as much as possible into one screen, we suddenly find ourselves wanting experiences that offer something more personal and meaningful. Surprisingly, it seems to me that the next step is actually to unpack our screens of all of these solutions into separate (physically integrated) experiences. The future of screens should be about blending physical and digital experiences: solutions that create more sensory experiences, even for the simplest of tasks such as to-do lists. We should no longer be using digital as a sole touchpoint but as an enabler for our physical touchpoints. Take this next example as a reference: restaurants have started to offer apps to download before visiting so that users can order and pay without the table service. Yes, Wetherspoons in the UK is a great champion of this particular service, as it has targeted the needs of its consumers in the correct context, but why are restaurants that are famed for their customer service also offering these apps?
I can't say I would visit a nice restaurant and be amused to find I had to download an app onto my phone's already full memory to order something, with no personal recommendation or conversation — the context of a touchpoint like this just isn't appropriate. Instead, why aren't these restaurants looking to build on their customer service by using digital services or technology to enhance it, thus creating a more sensory and admirable experience — starting with the paper menu, perhaps? Technology should inspire us to design services that enable a positive experience. The line between physical and digital touchpoints is one such area that will start to be addressed more and more, but until then, no, I do not think we will ever replace paper.
https://uxplanet.org/will-we-ever-replace-paper-1dccf12f295b
['Jack Strachan']
2018-05-14 10:50:33.838000+00:00
['Innovation', 'Technology', 'Design', 'UX Design', 'UX']
The problem with real news — and what we can do about it
Forget fake news, a poisonous term. Real news is an even bigger problem. This realization is what inspired me to found the Dutch journalism platform De Correspondent in 2013, promising to be “an antidote to the daily news grind” for its readers. So many people responded that we even set a world record in crowdfunding a news site. Today, we’re on the verge of launching The Correspondent, bringing unbreaking news to the United States and beyond. Rob Wijnberg speaking at the launch of De Correspondent | Photo: Bas Losekoot That the news in its traditional forms is the problem with journalism actually dawned on me much earlier, when in 2006 I joined the editorial department of a major Dutch newspaper. I was 24 and studying philosophy when I landed a job covering domestic affairs. As a philosophy student does, I immediately started asking: what is this thing called news that I’m supposed to make here? Scrutinizing the practices of my colleagues, I eventually distilled a definition that I think describes news pretty accurately. News is all about sensational, exceptional, negative, and current events. And those five words capture precisely the problem with news. The news is: one crazy unrelated event after another To start off with the sensational: news is generally that which is shocking, scandalous, or appalling enough to evoke comment. It often revolves around what’s most visible — one might even say explosive. That is why terrorist attacks are often news, says Guardian journalist Joris Luyendijk, but occupations of foreign lands are not. Attacks are shocking, highly visible events, occupation much less so. Put another way: it’s easy to capture a bus exploding, yet very hard to film the suppression of everyday freedoms. Extending this idea, the news also mostly revolves around the highly exceptional. 
Cartoonist Matt Wuerker captured this attribute of news brilliantly: while we're surrounded by millions and millions of peace-loving, law-abiding, hate-fighting, unity-advocating fellow citizens, it only takes a few neo-Nazis, jihadis or Ku Klux Klanners to fill a news cycle twenty-four seven. Cartoon: Matt Wuerker | Source: Politico Not only does this skew our view of other human beings, the news also makes us blind to the influential that is not exceptional at all. That's why we often don't hear about major developments until something highly improbable happens (events the Lebanese-American philosopher Nassim Taleb dubbed "black swans"). The 2008 financial crisis, for example, didn't become huge news until the Lehman Brothers investment bank filed for bankruptcy — a highly unusual event. But the lead-up to this event — banks that kept piling risk on top of risk, little by little, day by day — never made it to the front page because of the fundamental mismatch between what was happening (gradual risk increase) and the way news commonly signals what is happening (event-driven sensationalism). The news is also, almost without exception, negative. "If it bleeds, it leads" is a journalism catchphrase. In other words: good news is no news. People who keep up with the news are thus quick to think the world is getting ever more dangerous — though in fact the opposite is true. What's more, the news constantly gives us the feeling that people can't be trusted: they commit fraud, they're corrupt, they steal from one another, they blow themselves up. The reality is that the overwhelming majority of people are good and want to do right by others. But that's not news, is it? The news is also obsessed with what's recent. Almost everything that's news must be something that has just now taken place. But the most recent thing isn't by definition the most influential one. Everything in the world has a history. And that history determines in large part why something happens.
Because the news usually keeps its eye trained on today, it blinds us to the longer term, both past and future. Informing us about power structures that have grown over time, like the historical roots of racism, or alerting us to gradual societal changes, like the financialization of our economy, is simply not natural to the forms and rhythms of daily news. And the reason for that, lastly, is that the news revolves mainly around events. News has to have a hook, to use journalism jargon: a reason to report it now instead of later. That sounds logical, but it means that trends rarely make the evening news. For trends aren’t instances; they progress over time. That’s why the nightly news always ends with the weather, but never with the climate. You can’t say: “Today the climate changed”, even though it actually did. Hook-think is also why much of the news consists of what we might call calendar journalism: recurring, often planned events that serve as an excuse to elevate something to the status of news. Consider press conferences, quarterly earnings, think tank reports, commemorative services, and anniversaries. Or the president’s tweets. That means you can pencil in much of the news in advance, making it something that isn’t “new” at all. The news is: what’s not happening When you put all this together, it means the news actually fails to deliver on its single biggest promise: to tell us what’s happening in the world. People who follow the news mostly know what doesn’t happen. It portrays the world to us as a never ending string of sensational, unusual, terrible, rapidly forgotten events. In contrast to fake news, which is misleading because it’s simply untrue, real news misleads us in a more subtle and fundamental way. It gives us a deeply skewed view of probability, history, progress, development, and relevance. That’s why we’re quick to think that most terrorists are Muslims, even though that isn’t true. 
Or that the world is only getting worse, even though that isn't true. Or that terrorist attacks pose a greater threat to our well-being than sugar, even though that isn't true. Or that the financial crisis started in 2008, even though that isn't true. Or that crime is going up in the United States, even though that isn't true. In short, our news obsession takes away from what journalism as a practice is supposed to be about: helping everyone who is part of the public understand the world well enough to join in public discussion about what is to be done. As the saying goes: "If you don't read the newspaper, you're uninformed. If you do read the newspaper, you're misinformed." The news is: a health hazard To be clear: when I say "news" I don't mean "all journalism." There are countless types of journalism that are thorough and informative, and there are tens of thousands of journalists committed to public service who do invaluable work. Nor is my criticism of the news meant as a dismissal of "the media," as that phrase is now commonly understood. Like many of my colleagues, I am worried by the wave of mistrust toward journalists that's currently sweeping the United States and the world at large, spurred on by a political elite that hopes to exploit this suspicion of the media. But of all the forms that journalism can take, the news is by far the most influential. We consume it in unbelievable quantities: on average Americans spend almost 70 minutes a day following the news in some form — that's more than four full years across an average lifespan. As a result it dominates our water cooler conversations, largely sets the political agenda, and heavily shapes our view of humanity and the world. And not in a good way. The ultimate effect of our excessive news consumption — more accurately, our news addiction — is to make us afraid of other people, skeptical of the future, and cynical about our own ability to affect it.
Day in, day out, the news confirms our most stubborn prejudices and our greatest fears. It makes us pessimistic and suspicious. It even makes us unhappy. In short, the news is bad for us, as individuals and as a society. “News is to the mind,” the Swiss writer Rolf Dobelli once wrote, “what sugar is to the body.” Really, the news should come with a surgeon general’s warning. We need an antidote to the news To help temper the negative effects of the news, I founded the Dutch journalism platform De Correspondent five years ago with a crowdfunding campaign. The idea behind it was simple: let’s redefine the news together — from the sensational to the foundational. And the response was overwhelming: nearly 19,000 founding members joined our cause and helped us achieve a world record in journalism crowdfunding. We raised $1.7 million in a country of just 17 million people. In five years our member base grew to over 60,000 today, making us one of the fastest growing community funded news sites in Europe. These members enable De Correspondent to be a fiercely ad-free, in-depth journalism platform, making good on our slogan on a daily basis: being “an antidote to the daily news grind.” The problem isn’t liberal bias, it’s recency bias That slogan perfectly captures our mission: to serve as a remedy to the worst effects of the news. Central to that is a different definition of news. Instead of looking only at what happened today, at De Correspondent we look at what happens every day. When you do that consistently, it makes for a different view of the world. Because why is it that after nearly every major societal shock, from weapons of mass destruction to the financial crisis to Brexit to Trump’s election, people in the news media ask the same question: why didn’t we see this coming? The most common answer is ideological bias. Journalists are “too left-wing” or “too liberal” and so they don’t want to acknowledge what is really going on. I think there’s a better answer. 
The news media have the wrong definition of news. Lehman Brothers' fall, Britain's break from the EU, and Trump's election are indeed spectacular, exceptional events, but they are also the result of slow, unobtrusive, systemic trends. Phenomena that take place not today but every day, and therefore never develop a hook that qualifies them to be presented as news. Phenomena that are also too everyday to generate sensational headlines or clicks. Rob founded De Correspondent in Amsterdam in 2013 Going from the sensational to the foundational, with the help of readers At De Correspondent in the Netherlands, we try to tell precisely those stories that aren't news, but are newsworthy nevertheless. Or, as we often say, that reveal not the weather but the climate. Those stories are written by correspondents who don't have a news-driven schedule to meet, and thus can take the time they need to develop an area of expertise and learn to recognize and describe the truly influential developments of our time. Our ultimate goal: to replace the sensational with the foundational and the recent with the relevant. To achieve that, we've had to learn new journalistic habits at De Correspondent. And even more important: we've had to break old ones. The key habit we had to break was the journalist's traditional bar for relevance and timeliness. There's a kind of unspoken agreement among journalists on what exactly constitutes the most important "issues of the day" — and that unspoken agreement is tightly linked to the fact that journalists are themselves extremely heavy news consumers. Their own excessive news consumption predisposes journalists to believe that what's happening in the world right this instant, and what's the most important story to tell right now, is whatever's getting a lot of airplay in other media. That makes it easy and safe to do the same. Then no one can be blamed for over-reporting it, because everyone is responsible for that.
To put an end to this self-fulfilling prophecy, the first thing we do is teach our correspondents to seriously moderate their own consumption of news. We encourage them to seek inspiration for article ideas outside of the day’s newspapers, talk shows, and tweets — by going out into the streets, by reading books, and, above all, by asking our readers the question, “What do you encounter every day at work or in your life that rarely makes the front page, but really should?” Now, it may sound easy to ignore the news, but it turns out to be quite a challenge. Journalists are quick to fear they’re missing out: there’s no sin more cardinal than letting a competing news outlet take center stage with breaking news you don’t have yourself. Even at De Correspondent, we still wrestle with this problem on occasion. Especially when events happen that rivet the world’s attention, such as terrorist attacks. But those are precisely the moments we guard against, lest we reflexively fall back into the habit of reporting on mayflies. We resist that urge not by asking ourselves “What are we going to do with this news?” but by asking “What do we have to add to this news that isn’t available anywhere else?” If the answer is “nothing,” then we won’t report on even the most major of news events. That’s why in 2016, on the day of the tragic bombings in Brussels, just 125 miles from our office in Amsterdam, we didn’t publish a word. Instead, we referred our members to the best reporting by other outlets, in full keeping with the philosophy of media professor Jeff Jarvis: “Do what you do best, link to the rest.” Our readers appreciated this so much that we welcomed more new members that day than ever before. That change of habit — that redefinition of relevant — has sparked a deeper and, we believe, profoundly positive change: it’s no longer our correspondents’ goal to be the first, get a scoop, or be picked up by other outlets. 
Their goal is to thoroughly ground themselves in the major developments of our time and, along the way, share their learning curve with a growing community of followers. To get there, we’ve also had to train our correspondents to stop thinking in completed stories. Most newspaper articles and television news items can’t be published or broadcast until they’re complete. But that limitation is absent online, where news can be an unfolding process instead of a static snapshot. Instead of only presenting readers with the finished product, our correspondents share their plans and ideas, and then provide interim updates by keeping a public notebook. This interactive way of doing journalism has a major advantage: our readers can traverse the same learning curve our journalists do. Instead of assuming all kinds of background, as the news often does, our reporting allows the reader to join in at his or her own level of knowledge, and grow from there. And often, that starting point is even higher than that of the journalist. By shining a public light on the journalistic process instead of hiding it behind the wizard’s curtain, we give our readers a way to share their specific knowledge and experiences with our correspondents. And so we’ve trained our journalists to no longer view their readers as passive consumers of information, but as active contributors of expertise. Breaking news: one hundred readers know more than one journalist Our members play a crucial role in discovering and exploring the everyday systems that are the focus of our journalism. At De Correspondent we believe that a hundred readers by definition know more than a single journalist. On our platform these everyday experts share their knowledge and experience with our correspondents. 
For example, hundreds of teachers, students, and school principals help our Education correspondent understand what’s happening in our schools, and hundreds of doctors, mail carriers, and train conductors help our Public Services correspondent understand the issues at play in our country’s public sector. We ask our members, “What do you encounter every day at work or in your life that rarely makes the news, but really ought to be on the front page?” Their answers are often the start of discoveries we could never have made alone. It is by now no exaggeration to say that the knowledge of our more than 60,000 members has become indispensable to our journalism. Not just because they share what they know, but also because they are willing to pay $80 a year to do so! Their willingness to pay lets us keep De Correspondent fully ad-free. Beyond being pleasant to the eye — no blinking banners screaming for attention — it’s also an essential condition for the kind of journalism we aspire to make. Because the sensationalism and hypersensitivity of our daily news feeds is caused in large part by the underlying business model. Since the 19th century the news has been largely funded by advertising. That means the real product isn’t so much the news itself, but the public’s attention. This attention economy is the breeding ground for today’s screaming front-page headlines and the clickbait glutting our social media. These incentives are less present at De Correspondent because the readers themselves are our clients, and not the advertisers. This is a big reason we can shift from the sensational to the foundational and publish an antidote to the daily news grind: we have members who grasp the connection between the kind of journalism we practice and the elimination of that third party in between journalists and readers, the advertisers. 
News that helps us make the world a better place This kind of journalism, in which journalists don’t just produce and readers don’t just consume, is ultimately rooted in an underlying conviction: that by sharing our knowledge and experience with each other, we can leave the world better than we found it. Said another way, De Correspondent is based on a belief in progress. That belief isn’t a baseless hope; it isn’t even a political stance. Believing in progress is a rational, factual conclusion. Because the history of humankind is a history of progress. Think about it. Not a single chimpanzee has ever gone to the moon. Not a single chimpanzee carries everything she knows in her pants pocket. Not a single chimpanzee has ever been to court. Yet the chimpanzee and man share a very recent common ancestor. The reason we’ve evolved from nut-cracking apes into rocket-flying humans is as simple as it is ingenious: there is no species on Earth as good at sharing knowledge as we are. No single individual human knows how to build an iPhone, a rocket, or a system of justice, because all of them are the product of shared knowledge. This simple principle of sharing what we know enabled us to keep taking the next step in specializing in what we’re best at. Together, we progressed. But crazily enough fewer and fewer people believe this progress still exists. For the first time since the nineteenth century, when belief in progress became common, a majority of the population in 25 countries believes the world is headed in the wrong direction. Also gaining ground is the idea that the lives of our children and grandchildren will be worse than our own. And one of the worst enablers of this waning belief in progress is — you guessed it — the daily news that we consume. Because the news mostly disseminates outrage and pessimism, not knowledge and confidence. As a result we’re less informed about the world we live in and more skeptical of our ability to change it. 
I believe there’s another way. I believe that humanity’s fate is best served by sharing knowledge and experience instead of outrage and fear. That together, we can still understand the world. And that a world we understand is a world we can change. Together, we can still make progress. Now that’s real news. Become a founding member at thecorrespondent.com today.
https://medium.com/de-correspondent/the-problem-with-real-news-and-what-we-can-do-about-it-f29aca95c2ea
['Rob Wijnberg']
2018-12-03 15:13:51.887000+00:00
['Journalism']
Crown development meeting minutes 17/08/2020
Crown development meeting minutes 17/08/2020 Just the facts, ma’am Present: ahmedbodi, ashot, crowncoin-knight, pjcltd, walkjivefly Ashot Ashot investigated the possibility of including the Dash deterministic MN code in the Bitcoin codebase update. He judged it too much work and too high risk. In view of this he thinks we should proceed with the “vanilla” Bitcoin codebase update and only later address deterministic nodes. He has been working on some sync issues in his own small testnet. Hopefully he’ll soon be ready to progress to a larger testnet. In preparation for this he will take a look at the testnet halvening problem (issue 333). Bitcore/Insight After adding NFT support to Bitcore, the Crown Insight explorers are able to operate again. insight-02.crownplatform.com is already running, the other two sites will be cloned from it and functional again soon. New website The new website is nearly ready for launch. It is simplified and updated for a contemporary audience and optimised for use on mobile devices. Designed for Generation Screen ElectrumX We have one new version ElectrumX server running. The infrastructure team will set up a second server soon. Community developer, Ahmed, is running one too so there will be three servers for Electrum Crown wallet users to connect to. Electrum Crown There is no good technical solution to the derivation path incompatibility between Electrum Crown v1.2 and v4. Ahmed is about to go on holiday for two weeks but will look into ways to facilitate migration from old version to new version wallets when he returns. It would be helpful to know how many users are affected by this incompatibility. If you are an Electrum Crown v1.2 user with a Trezor hardware-backed wallet please let us know through Discord. Bang up to date with the latest Electrum codebase, NFT compatibility and Trezor integration Crown Network Crawler Ahmed is making solid progress on the Crown network crawler bounty project. 
Due to other commitments and his imminent holiday he won’t finish by the initial estimated completion date. Instead, he hopes to complete the project by the end of September. Crown Sweeper Community developer, Adrian, has demonstrated a first pass at the Crown sweeper bounty project. This graphical tool will run on Linux, Mac, and Windows, and provide an easy way to consolidate masternode/systemnode rewards. The remaining issues are mainly cosmetic and we hope to release the project soon.
https://medium.com/crownplatform/crown-development-meeting-minutes-17-08-2020-56c47785192d
[]
2020-08-18 07:39:43.581000+00:00
['Bounty Program', 'Electrum', 'Development', 'Codebase Update']
Use Virtual Ice-Breaker Activities to Foster Connection
Before understanding virtual ice-breakers, let’s see some benefits of working remotely, like: Setting your own working schedule. The comfort of being at home. No commute time. Less stress. Spending time with your kids and family. Greater satisfaction. But it has challenges as well: We can feel lonely and isolated while working remotely. In a survey, we found that approx 46% of employees said they miss coming to the office. The Importance of Virtual Ice-breakers A virtual ice-breaker is a great way to keep your remote teams engaged and help them learn more about each other. An ice-breaker is a game or activity that you can do with your team members to get to know each other and start the conversation. Virtual ice-breakers are the same, but they are done via video calls. Here are some reasons why virtual ice-breaker activities are useful: You can introduce new hires to teammates in a fun way to make them feel comfortable. A well-placed interactive icebreaker can lighten virtual meetings and keep your team’s attention throughout the meeting. It gives you an opportunity to learn more about your teammates’ personalities and thoughts. 5 Elements to Design Virtual Ice-breakers Here are some elements that you should consider before designing your virtual ice-breakers: Have a Purpose: Ask yourself, “What’s your purpose with ice-breaker activities?” Introducing new hires? Or bringing your teams together and letting them get to know each other? First, you must establish a clear purpose. Help Employees Feel Comfortable: A virtual ice-breaker activity will be successful if your employees feel comfortable participating. Tell your team clearly what the purpose of these activities is and encourage everyone to participate. Timely: Your virtual teams are probably calling from different time zones, so take time into account. Do you want your ice-breaker activities to be a quick 10 minutes or a full 30 minutes?
Frequency: Consider whether the virtual ice-breaker should happen once per week or if you want to make it a daily thing. Discuss with your team and come up with your plan. Tool: Make sure everyone is using the same video conferencing tool, like Zoom, and that they’re familiar with it. Also, decide the format — do you want your team to use their webcams, or will a voice call be sufficient? 11 Ice-Breaker Activities for Remote Teams Here are 11 virtual ice-breaker ideas for your remote team meetings — these are easy to implement! 1. Critical Thinking Questions One of the best ways to break the ice in a remote team meeting is to start by asking a question that facilitates critical thinking. For example, you can ask a question like, ‘A man is lying dead in a field. There is an unopened package lying next to him and no other creature in the field. How did he die?’ Post lateral thinking questions in the group and give everyone some time to answer. Then let them have a brief conversation about their answer and the learning experience behind it. 2. A Ride in the Time Machine People love to have conversations about themselves. You can channel this habit of your employees by asking them the time machine question. For example, ask them about a place or a time they’d like to visit if they had a time machine. You can use follow-up questions to have an ice-breaking conversation. 3. A Stroll Through Childhood Ask your team members to share their childhood pictures privately with you. Then, post these pictures in the office group and ask people to guess the person. Not only can it be a great ice-breaking activity, but it also reminds people of their childhood memories. This would facilitate open conversations and a good laugh to top it off! 4. Current Affairs Conversation Asking members to have a conversation about an open issue can be a great team-building activity. For example, you could share a topic that people can read about in your office group.
Then start the remote team meeting by asking them to express their views on it. Once people are done sharing their views uninterrupted, set aside some time for an open conversation. 5. Social Question Start a team meeting by asking people little questions about themselves; people love to answer them. These could be about their favorite season, team, food, etc. This activity, on the one hand, focuses on enabling a lively conversation among people, even leading to a debate on why one’s favorite football team is better than the others. On the other hand, it helps people get to know each other better. 6. Host an Open Mic Let the people in your organization speak about something they like. This could be a poem, a short story, a song, jokes, etc. Taking center stage gives them the attention to showcase their talent. Similarly, performing in a group also serves as a form of social validation. Ask team members to applaud others. 7. Emoticon Starter Ask your team members to share an emoticon that precisely tells how they are feeling at the moment. This will help start the meeting on a lighthearted note, as people witness the range of emojis being put to use. This activity makes use of one of the best virtual things that people love to do: use emoticons. 8. Six-Word Story Watch the ice shatter as people become their creative best with this activity. Ask your team members to tell their life’s story using six words. Or if there is something interesting that they want to share, ask them to do it within the six-word limit. Not only is this fun, but it is also a great platform for people to be their quirky best. 9. Guess a Riddle Post a riddle and ask your team members to reply with answers within 5 minutes. Tell them not to use any search engine or write answers they already know. Watch them be at their creative best as they try to answer questions in a race against the clock. 10.
Build a Story Again, one of the most popular activities on the list, this breaks the ice and builds an interesting story. Start by assigning random numbers to your team members. Then ask the person who has number 1 to write down a sentence and leave it incomplete. The next person with the succeeding number will write another line, and this goes on until all members have had their turns. Read out the story that has been sketched at the end. 11. Guess the Gibberish This game is already viral among social media enthusiasts. You could leverage it as an ice-breaker for your team by posting gibberish words for common movies, songs, or even the names of some of your team members. Not only will people rush to answer correctly, but they will also enjoy the competitive spirit. 25 Quick Virtual Icebreaker Questions Don’t have time for virtual icebreaker games? Here are quick icebreaker questions you can ask at the beginning of your video call to have some fun and help coworkers get to know one another. What’s the best trip you ever had? What’s your favorite thing about your native place? What single event has had the biggest impact on your life? What childish thing do you still do as an adult? What’s something you’ve done this year that you’re proud of? In your opinion, what is the most beautiful place on earth? What is your spirit animal? (The animal who is most similar to your personality.) What famous animal movie character do you like the most? What is your favorite restaurant? If you could only eat one food for the rest of your life, what would you eat? What is something you are great at cooking? What is your favorite breakfast, lunch, or dinner? What is your favorite sport or physical activity? What three things do you consider yourself to be very good at? What is something you love/hate doing? And why? Have you ever lived in another state/country? If you had to delete all but 3 apps from your smartphone, which ones would you keep?
What technology innovation made the most impact on your life? When did you get your first mobile phone? What kind was it? Where would you time-travel, if it were possible? What are your favorite movies/TV-series? If you could be any fictional character, who would you be? If you won a lottery of $1 billion, what would you do with all the money? As a child, what did you want to be when you grew up? What “old person” things do you do? Got some other virtual ice-breakers ideas that can boost the engagement of your remote workers? We’d love to hear from you.
https://medium.com/publishous/virtual-ice-breaker-activities-for-remote-teams-in-2020-ecff8fcbcfbf
['Pawan Kumar']
2020-08-21 12:48:51.021000+00:00
['Work Life Balance', 'Work', 'Remote Work', 'Remote Working', 'Entrepreneurship']
Changing Mind
A member of Mutrack and Inthentic. I lead, learn, and build with vision, love and care. https://piyorot.com
https://medium.com/people-development/changing-mind-f4c5f2dc716b
[]
2016-10-18 13:19:20.808000+00:00
['Decision Making', 'Life', 'Work', 'Self-awareness']
Guide to Material Motion in After Effects
I’ve already shared why Motion Design Doesn’t Have to be Hard, but I wanted to make it even easier for designers to use the Material motion principles I know and love. After Effects is the primary tool our team uses to create motion examples for the Material guidelines. Having used it to animate my fair share of UIs, I wanted to share my workflow tips and… My After Effects sticker sheet Download this basic sticker sheet to see a project completed using my streamlined workflow (outlined below). It contains a collection of Material components, baseline UIs, and navigation transitions. Download it here 👈 Available under Apache 2.0. By downloading this file, you agree to the Google Terms of Service. The Google Privacy Policy describes how data is handled in this service. Importing assets into AE First things first, we need assets to animate. Most of the visual designers on our team use Sketch, which by default doesn’t interface with AE. Thankfully Adam Plouff has created this plugin that adds this functionality. I used it to import our library of baseline Material components from Sketch into AE. These assets are found in the sticker sheet’s Components folder. Creating UIs With this library of baseline components, new UIs can quickly be assembled by dragging them into a new AE comp.
https://medium.com/google-design/guide-to-material-motion-in-after-effects-9316ff0c0da4
['Jonas Naimark']
2019-05-22 14:32:47.463000+00:00
['Animation', 'Technology', 'Visual Design', 'Material Design', 'Design']
The Language of the Heart
Pampering my fragile ego, I lost many years, writing, to please the readers. I depended on their affirmations, Craving for their attention and acceptance. But everything went in vain. When I realized my mistake. It’s not how things work. Choices and preferences change daily. Trying to fit in is more than a sin. This feeling has plagued me, And I want to feel better, So I write what I feel. I feel what I write. This has brought me more success. The lesson I learned will stay forever. My life is short; I want to make it count. I am creating new spaces. Rather than fitting into the existing ones. I am happier now! I am no one to advise. But a wise learns from others. Makes his own path, chasing his own dreams, Following his heart.
https://medium.com/blueinsight/the-language-of-the-heart-b89edcb804cc
['Darshak Rana']
2020-12-06 18:20:28.199000+00:00
['Poetry', 'Happiness', 'Blue Insights', 'Writing', 'Life']
The three skills you need for fact-based marketing
The three skills you need for fact-based marketing And how good coffee can make all the difference for your organisation Companies often talk about creating a data-driven culture. Some organizations want to implement evidence-based management, others are engaged in business experiments. For marketers I would say it’s all about fact-based marketing. These are all related movements that want to change organizations so that they are no longer steered by opinions. It’s no longer about the person who gets paid the most money (the HiPPO) pushing her opinion through. In this article I share my first experiences with building such organizations and I name three skills that marketers should develop (further) if they want to be part of this new movement. When I worked at Booking.com, one day a colleague of mine complained the coffee tasted bad in our department and in his opinion the coffee machine had to be replaced. My mentor and senior product owner data science, Mats Einarsen, took the initiative to fix the coffee problem. A few days later the coffee machine guy came, along with a new coffee machine. But instead of replacing the old one, Mats organized a blind taste test. Which of the two coffees was better? After a large number of colleagues had done the test, Mats shared the results; the coffee machine guy was allowed to pack up and take back the new device. The evidence was clear: the old coffee was better. The view that the old coffee machine had to be replaced was refuted by the facts. Booking.com is, by far, the most data-driven company that I know. Data seems to be a religion and this belief runs through all layers of management. They are also the most successful company I know, with revenue of probably over 5 billion dollars last year alone. The data-drivenness of Booking.com seems a major reason for their success.
It is difficult to prove, but more and more managers realize that it makes total sense to run a data-driven organisation. The desire to measure the ROI of marketing more clearly, so that we can show our impact, has obviously been around much longer. Half the money I spend on advertising is wasted; the trouble is I do not know which half. — John Wanamaker But with the rise of online marketing this awareness has only grown, and we now have more experience. The same should hold offline and above the line. Building a data-driven culture is something more and more marketers desire. For another project I am talking with Colin McFarland on how to design a data-driven organisation. Colin works for a hardcore data-driven company but he wants to get even better at sticking to facts and staying away from opinions. The company he works for is also very successful. It makes you wonder whether there is a link between having a data-driven culture and having commercial success… Nowadays more organizations are warming up to the idea of evidence-based management, and I’d love to share my personal experiences with building a data-driven culture. You can’t create such a culture on the drawing board. That’s something you have to learn by doing. I train people in conducting experiments, making decisions based on facts and learning from the data. There are three aspects that I focus on when an organization asks me how they can become more evidence-based. These are the skills that you need if you want to run more business experiments or if you want to practice fact-based marketing. 1. Critical thinking A marketing fallacy I identified years ago is that marketers make informed and rational decisions. From what I have seen, marketers normally base their actions on their intuition. An intuition that often turns out wrong.
What I wanted to change then, and still want to improve, is what I now define as “critical thinking.” For me this means that you can formulate your opinion or idea in one or more testable statements. In my trainings I force people to develop their ideas as testable hypotheses. The whole point is that marketers formulate their views in a formal manner. This requires training, but it is the start of any data-driven approach. The start that is often forgotten. A critical thinker doesn’t just posit a statement, but bases it on sound reasons and trustworthy sources. 2. Designing an experiment If you have a testable idea, a workable hypothesis, you want to test it. In an ideal experiment you gather insights into the relationship between cause and effect of your actions. Decisions in non-data-driven organizations often lean on opinions from outside. In a data-driven culture the opposite is true. I completely agree with my former colleague Stuart Frisby when he wrote about a hierarchy of reliable data sources for decision-making purposes: 1. Your own experiment data 2. Your opinion 3. Someone else’s opinion 4. Someone else’s experiment data Learning by experimentation with your customers and your own propositions is something no one else can do for you. And that’s why you shouldn’t value an outsider’s view much, and even less so copy the “best practices” in the industry. Stuart phrased it probably best when he said: “Test your own damn hypothesis” 3. Statistics I spent a lot of time working side by side with a statistics professor, so I would never dare to say that statistics is less important for a data-driven organisation. But I will say this: many people I have worked with could learn the first two skills relatively easily. This is in stark contrast to becoming significantly better at statistics. In university this is a stumbling block, but in business, and more specifically in the field of marketing, this is simply not feasible.
Statistics is in my opinion important when you have already begun to run experiments. Your questions will be clearer. And suddenly you would like to know, given the data that you have from a unique customer, what the best proposition to show right now is, and how confident you are that the customer will act on that proposition. In that case it is very handy to have someone on the team who speaks Bayesian in addition to the usual business lingo. Marketers who like statistics own the future. But until then data scientists will work in their own specialized teams while they enjoy the power of statistics. I believe it is about time that marketers start testing their opinions, thinking critically about their actions, learning based on facts, and thus making sure they contribute to the success of their organization. I’m curious what your experiences are!
https://medium.com/i-love-experiments/the-three-skills-you-need-for-fact-based-marketing-6f231ce4fddf
['Arjan Haring']
2016-08-07 05:48:13.003000+00:00
['Organizational Culture', 'Big Data', 'Data Science']
Building Your First Machine Learning Model: Linear Regression Estimator
There are so many good packages that can be used for building predictive models. Some of the most common packages for predictive analytics include Scikit-learn, Caret, and TensorFlow. It’s important that before using these packages, you master the fundamentals of predictive modeling; that way you are not using these packages simply as black-box tools. One way to understand how machine learning models work is to actually learn how to build your own models. The simplest machine learning model is the linear regression model. Everyone new to data science should master the fundamentals of the linear regression estimator, since a majority of machine learning models (such as SVM, KNN, logistic regression, etc.) are very similar to the linear regression estimator. In this article, we describe how a simple Python estimator can be built to perform linear regression using the gradient descent method. Let’s assume we have a one-dimensional dataset containing a single feature (X) and an outcome (y), and let’s assume there are N observations in the dataset: (X_1, y_1), …, (X_N, y_N). A linear model to fit the data is given as: y_hat_i = w0 + w1*X_i, where w0 and w1 are the weights that the algorithm learns during training. Gradient Descent Algorithm If we assume that the errors in the model are independent and normally distributed, then the likelihood function is given as: L = prod_i (1/sqrt(2*pi*sigma^2)) * exp(-(y_i - w0 - w1*X_i)^2 / (2*sigma^2)). To maximize the likelihood function, we minimize the sum of squared errors (SSE) with respect to w0 and w1: SSE = (1/2) * sum_i (y_i - w0 - w1*X_i)^2. The objective function, our SSE function, is often minimized using the gradient descent (GD) algorithm. In the GD method, the weights are updated according to the following procedure: w -> w - eta * (dSSE/dw), i.e., in the direction opposite to the gradient. Here, eta is a small positive constant referred to as the learning rate. This can be written in component form (applied per observation, as in the implementation that follows) as: w0 -> w0 + eta * (y_i - w0 - w1*X_i) and w1 -> w1 + eta * X_i * (y_i - w0 - w1*X_i). If you would like to find out more about the GD algorithm and why it works, see the following article: Machine Learning: How the Gradient Descent Algorithm Works.
Implementation Using Python Estimator

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

class GradientDescent(object):
    """Gradient descent optimizer.

    Parameters
    ------------
    eta : float
        Learning rate (between 0.0 and 1.0)
    n_iter : int
        Passes over the training dataset.

    Attributes
    -----------
    w_ : 1d-array
        Weights after fitting.
    errors_ : list
        Error in every epoch.
    """

    def __init__(self, eta=0.01, n_iter=10):
        self.eta = eta
        self.n_iter = n_iter

    def fit(self, X, y):
        """Fit the data.

        Parameters
        ----------
        X : {array-like}, shape = [n_points]
            Independent variable or predictor.
        y : array-like, shape = [n_points]
            Outcome of prediction.

        Returns
        -------
        self : object
        """
        self.w_ = np.zeros(2)
        self.errors_ = []
        for i in range(self.n_iter):
            errors = 0
            for j in range(X.shape[0]):
                self.w_[1:] += self.eta * X[j] * (y[j] - self.w_[0] - self.w_[1] * X[j])
                self.w_[0] += self.eta * (y[j] - self.w_[0] - self.w_[1] * X[j])
                errors += 0.5 * (y[j] - self.w_[0] - self.w_[1] * X[j]) ** 2
            self.errors_.append(errors)
        return self

    def predict(self, X):
        """Return predicted y values."""
        return self.w_[0] + self.w_[1] * X

Application of Python Estimator

a) Create dataset

np.random.seed(1)
X = np.linspace(0, 1, 10)
y = 2 * X + 1
y = y + np.random.normal(0, 0.05, X.shape[0])

b) Fit and Predict

gda = GradientDescent(eta=0.1, n_iter=100)
gda.fit(X, y)
y_hat = gda.predict(X)

c) Plot Output

plt.figure()
plt.scatter(X, y, marker='x', c='r', alpha=0.5, label='data')
plt.plot(X, y_hat, marker='s', c='b', alpha=0.5, label='fit')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()

d) Calculate R-square value

R_sq = 1 - ((y_hat - y)**2).sum() / ((y - np.mean(y))**2).sum()
R_sq

0.991281901588877

In summary, we have shown how a simple linear regression estimator using the GD algorithm can be built and implemented in Python. If you would like to see how the GD algorithm is used in a real machine learning classification algorithm, see the following Github repository.
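As a quick sanity check on the fitted weights (this comparison is my own addition, and the helper name ols_closed_form is hypothetical, not part of the article), the gradient descent result can be compared against the closed-form ordinary least squares solution for a single feature: w1 = cov(X, y)/var(X) and w0 = mean(y) - w1*mean(X).

```python
import numpy as np

def ols_closed_form(X, y):
    # Closed-form OLS for one feature: slope from cov/var, intercept from means.
    w1 = np.cov(X, y, bias=True)[0, 1] / np.var(X)
    w0 = y.mean() - w1 * X.mean()
    return w0, w1

# Same synthetic dataset as in the article: y = 2x + 1 plus small noise.
np.random.seed(1)
X = np.linspace(0, 1, 10)
y = 2 * X + 1 + np.random.normal(0, 0.05, X.shape[0])

w0, w1 = ols_closed_form(X, y)
# Both the closed-form weights and the converged GD weights should land
# near the true intercept 1 and slope 2 used to generate the data.
print(w0, w1)
```

If the weights in gda.w_ differ markedly from this closed-form answer after training, the learning rate or the number of iterations likely needs tuning.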
https://towardsdatascience.com/building-your-first-machine-learning-model-linear-regression-estimator-ba86450c4d24
['Benjamin Obi Tayo Ph.D.']
2019-11-27 17:51:59.530000+00:00
['Machine Learning', 'Optimization', 'Python', 'Gradient Descent', 'Linear Regression']
Your Company Culture is Who You Hire, Fire, and Promote
The actual company values, as opposed to the nice-sounding values, are shown by who gets rewarded, promoted, or let go. — Netflix Culture: Freedom & Responsibility Every time I walk into a new company I’m advising, I invariably encounter a set of noble values prominently displayed on the walls. The first thing I’ve trained myself to do is to not take them as gospel, and instead carefully observe how people really behave, which will tell me the actual values I need to know. It’s not that most companies are disingenuous about the values they espouse. One of Enron’s “aspirational values” was integrity, which may have genuinely expressed who they wanted to be at the beginning. But over time, this proclaimed value didn’t reflect their “practiced values” which were revealed when they committed fraud. The gap between aspirational and practiced values is diagnostic of how much your company’s culture needs to improve. The actions you take to bridge the gap is prognostic of whether it will. Why Behaviors Persist (Do As I Do, Not As I Say) Why does a gap usually exist between a company’s aspirational and practiced values? You would think that to alleviate cognitive dissonance, most employees would feel an inherent need to practice what they preach — or are preached at to do. But to truly close the gap, you have to attack the problem at its root: The issue is that aspirational values almost always come from, and must be rectified at, the top. Though most employees care what leadership thinks of them, they are actually quite astute at paying attention to what leadership does, not what they say. According to the theory of behaviorism, no behavior will persist long term unless it is being perpetuated by either a positive reinforcer (providing a reward, such as a promotion or praise) or a negative reinforcer (removing a punishment, such as a probationary period or undesirable tasks). 
Thus, when companies start, leaders set the company’s values not by what they write on the walls, but by how they actually act. For example, do they stay late and burn the midnight oil? Or do they leave early to be with their families? According to social learning theory, these behaviors become socialized, and rank-and-file employees who take their cues from these leaders act, and react, accordingly. These are what’s called trickle-down behaviors. As the company grows and senior leadership is not always readily observable, employees begin to act according to what their managers either actively reinforce through praise and promotion or passively reinforce by allowance. Over time, employees become aware of which colleagues are being hired, fired, or promoted, and why. Did Joe get hired because his references praised his incredible work ethic? Did Jill get fired because she wasn’t considered a team player? Did Jamie get promoted because he spent a lot of time socializing with company leadership? Hence, employees quickly learn the “rules of the game” to survive and thrive at their company, and act accordingly, which may have nothing to do with what aspirational values are plastered on the walls. Your company’s employees practice the behaviors that are valued, not the values you believe. How to Assess Values During the Interview Process Instead of letting your company become a corporate version of The Hunger Games, leadership should actively prioritize behavior that’s congruent with company values. First, you must ensure that all final candidates live up to your company’s values. For example, I’ve created an interview template to evaluate candidates on seven key traits: grit, rigor, impact, teamwork, ownership, curiosity, and polish. (I recommend you replace these with your own company’s aspirational values.) Another key trait (or the lack thereof) has been popularized by Professor Bob Sutton at the Stanford Graduate School of Business. 
The “No Asshole Rule” dictates that no matter how great a candidate may be, being an asshole is an automatic deal breaker. The way this can be implemented is through what I call the “One Red Flag Rule,” which is based on the observation that pathological traits are sporadically, not continually, expressed — except in severe cases. For example, a narcissistic candidate may not act arrogant all the time, but may express arrogant comments 10 times more, on average, than someone who doesn’t have this issue (e.g. a few times a day versus a few times a month). Thus, if a candidate can’t suppress making an arrogant comment during a 30–60 minute interview, they are likely to do this even more often when employed full-time. Nevertheless, evaluating these traits during a brief interview can be challenging and lead to an inaccurate assessment. This is why I recommend conducting thorough reference checks, based on research findings dictating that the best predictor of future behavior is past behavior. Drawing on best practices espoused by a leading investing firm, Andreessen Horowitz, I conduct up to six reference checks per candidate to evaluate “values fit.” A 300-employee San Francisco startup, Weebly, goes so far as to invite job candidates to work an onsite trial week, paid at fair market value. The reason is simple: It’s very hard to suppress values-incongruent behaviors when working closely with others for that amount of time. As Weebly’s CEO David Rusenko said: Assholes can hide it in interviews, but for whatever reason, they cannot hide it for a whole week. I don’t know why, but it all comes out within a week. How to Reward (Values During Performance Management) No matter how good your interviewers are, any screener will result in false positives (people who you thought fit your values, but don’t once they’re hired) and false negatives (people who you thought would not fit your values, but would have if you had hired them).
Companies that prioritize culture are willing to accept some false negatives in order to avoid false positives. If you get some false positives anyway, the solution, as entrepreneur and president of Y Combinator Sam Altman said, is to “fire quickly.” Few people have the psychological wherewithal to resign of their own accord if they still want the job, but know they aren’t the right fit. That’s why baseball coaches have to pull starting pitchers out of the game when their performance declines, and substitute a relief pitcher instead. In the same way, it’s a manager’s job to be a good coach and pull people out, compassionately, so they can find a better fit in another role or company. The major issue that I’ve seen with startups is that even if they claim to have a “No Asshole Rule,” they hardly ever practice it. Rationales I’ve heard include: “We’ve decided that we’re not going to fire him because he’s a high performer” or “for that one bad trait, he has four good traits going for him,” and, of course, the “[data scientists/engineers/product managers] are hard to replace, so we’ll make do.” The moment that leaders start weighing values-congruent against values-incongruent behavior, as if they can cancel one another out, is the moment when they have compromised their values. The best way to avoid this pitfall is to make values-congruent behavior a formal and prioritized part of your company’s performance management process. I’ll share the system I designed and helped implement at my company. It starts with evaluating each employee on the Performance-Values Matrix. Whatever employee evaluation system you use — whether that’s a formal annual review or regular one-on-ones — employees should be evaluated on both their performance-based behavior and values-based behavior. Both should be quantified on a spectrum (e.g. a 1–10 point scale), but I’ve simplified into a 2x2 matrix for illustrative purposes. 
(Note: I use gender-neutral language throughout this piece, but I use “guys” in the matrix due to space constraints. I also usually use more PC terms, but the colorful language helps me cheekily make my point here.) © Dr. Cameron Sepah 1. Incompetent Assholes (Fire Fast) Incompetent assholes are not only low-performers, but their behavior is incongruent with company values. In this matrix, they are in the lower-left quadrant and thus can only earn 25 percent of the maximum employee evaluation score. Hopefully, there should be very few of these folks at your company, but occasionally a few will slip through the hiring cracks, or something may have happened that caused performance and values-congruent behavior to permanently degrade over time. Nevertheless, they sap overall employee motivation by not contributing equally to the workload and are toxic to company morale. Needless to say, incompetent assholes should be identified and fired as fast as possible. 2. Competent Assholes (Remediate or Separate) Competent assholes are high-performers, but exhibit behavioral tendencies that are incongruent with company values. Given that “asshole” is not a clinical term, I will define it here as someone who lacks empathetic behavior to the point that it causes interpersonal issues. The biggest mistake that I see companies make is that they retain competent assholes because they are seen as critical to the company or difficult to replace. However, by doing so, they not only passively reinforce the competent asshole’s behavior by tolerating and promoting them, but they implicitly send the message to the rest of the company that you can basically get away with murder so long as leadership believes you to be indispensable. You can imagine what kind of culture this creates over time. 
In contrast, using the Performance-Values Matrix, an employee who’s competent but a complete asshole can only earn 50 percent of the maximum employee evaluation score, given that the other 50 percent of their evaluation is based on their values-congruent behavior. There’s a reason Professor Sutton called it the “No Asshole Rule.” It’s because exceptions shouldn’t be made, otherwise it shows your values are merely aspirational. The solution for competent assholes is a tactic that I call “remediate or separate.” Despite the fact that these folks are strong performers, it should be made clear that value-incongruent behavior is not tolerated and they will need to remediate their behavior in a measurable way within a given period of time. Thus, competent assholes should be put on what I call a “Values Improvement Plan” (VIP). For this, 360-degree reviews — from an employee’s manager, peers and direct reports — are a great way to assess improvement, or else be separated from the company. The reason I like giving these folks a chance is that sometimes employees that aren’t entirely inflexible (or pathological) can improve when they realize that their job depends on it. Often times, this requires entering therapy or executive coaching with a skilled psychologist, which is worth its weight in gold if the employee is willing to change. 3. Incompetent Nice Guys (Manage or Move) Incompetent nice guys and gals are the exemplars of your culture and are well-liked by almost everyone, but unfortunately are not high-performers. Like competent assholes, completely incompetent nice guys and gals also can only earn a 50 percent maximum possible employee rating. That’s because it is nearly as much of a sin to tolerate incompetent people as it is to tolerate assholes. 
Giving free license to someone to underperform just because they are kind or likeable sends the message that your company is not a meritocracy, and that it’s more important to be socially skilled (or at worst, be a brown noser). However, the solution here is different than with competent assholes, in that the best solution is to manage or move them. Incompetent nice guys and gals should be put on a traditional performance improvement plan (PIP), and skillfully managed in order to give them training and feedback to improve their abilities. When their incompetence stems from a fundamental disconnect between their strengths and their current role’s demands (e.g. mediocre social skills in a client-facing role), one solution I’ve seen prove fruitful is to move them into a different role. (The person may be an analytical whiz if moved to a more technical role). Of course, if that is not possible or does not work out, they should also be separated from the company. Encouraging incompetent nice guys and gals to find a position that’s a better fit for their strengths is, ironically, the nicest thing you can do for them. 4. Competent and Outstanding Nice Guys (Praise and Raise) Hopefully most of the employees at your company are both competent and nice; you must exhibit both behaviors in order to be in the top-right quadrant of the Performance-Values Matrix. Competent nice guys and gals earn up to 75 percent of the maximum employee evaluation score, and should be both praised and given the opportunity for advancement. But in order to set the bar high, employees should only earn the full 100 percent score if they exhibit both outstanding performance and value-congruent behavior. They are what Sarah Tavel of venture capital firm Greylock Partners calls the “mitochondria” of startups, because they are the company’s power plants — adding value beyond their job description by asking and doing what is best for the company.
Given how rare these individuals are, founders should go out of their way to attract and retain them. By building this designation directly into the evaluation matrix, outstanding nice guys and gals should be formally recognized and rewarded with raises and promotions. These are the current or future leaders of your company, and need to be nurtured and cherished given that they are the foundation for your company’s performance and morale. Of note, while I distinguish between competent and outstanding nice guys [and gals] in the matrix, I do not do so for assholes. That’s because I believe it is nearly impossible to be an outstanding asshole. For example, there is a myth of the “10x engineer” in Silicon Valley, where a truly talented engineer is 10 times as valuable and productive as an average engineer. Even if one engineer could possibly do the work of 10, if they are an asshole — especially in a management position — they will decrease the performance of the people around them to such an extent that their team’s net productivity will break even or be at a loss long-term. Call to Action: Reinforce Your Culture When leaders become Machiavellian and hire and retain mercenary employees, they resemble a Hermann Hesse novel reaching “the place on the journey where everything falls apart” and the company’s culture degrades. While company engagement scores typically decline as companies scale — given they have to hire quickly from a limited talent pool and are too overworked to fire quickly — this does not have to be an inevitable outcome. Culture can only improve when there’s a baseline of openness, where people feel they can come forward and share concerns or opportunities for people and teams to do better. If people don’t trust what happens after they give their feedback, then reviews will be “positive” and not provide any useful information. This requires both anonymous surveys and the promise that aggregate feedback is valued and will be acted upon.
So if you want your company’s culture to be congruent with those noble aspirations written on your walls, you must continually assess how well your employees are behaving compared to those aspirational values, and develop ways to bridge the gap between aspiration and practice. I believe the best way to do this is to directly reinforce value-driven behavior, including making it an integral part of employees’ reviews and weighting it as highly as performance. As the old saying goes, you reap what you sow. As leaders, you get the behavior that you reward. Continued in Part 2: Anatomy of a Workplace Asshole
https://medium.com/s/company-culture/your-companys-culture-is-who-you-hire-fire-and-promote-c69f84902983
['Dr. Cameron Sepah']
2018-06-29 23:10:56.548000+00:00
['Tech', 'Leadership', 'Management', 'Technology', 'Entrepreneurship']
My 6-Year-Old Is Chill AF. She Also Has Anxiety and OCD.
Over the past eight months, I have been routinely amazed by my kid. The coronavirus pandemic changed her life. Preschool shut down. We quit going shopping, quit dining out. Her Frozen II birthday party was canceled. We didn’t get to go to the pool this summer or hang out with friends. There’s a whole lot we quit doing, all to be safe, just in case. I never expected that to be easy for my daughter. I know it’s tough. It’s hard on me and we miss everything from leisurely Target trips to Saturdays spent at the library. I keep waiting for her to have a meltdown because she misses the McDonald’s play place, or because we drive past the old “bounce house” every single day. The bounce house was one of our favorite hangouts — a place where I could work on my writing and she could play with other kids — but they had to close their doors months ago. Many of our favorite places have closed over the past few months, even the American Girl store just outside of Atlanta, where we always planned to have her golden birthday in a few years. It feels like life has been one disappointment after another this year, and yet, my daughter takes it all in stride. Nothing is quite “normal,” but my kid’s easygoing nature means that every socially-distanced holiday so far has been her definition of the “best day ever.” Seriously. Her Zoom birthday party and home festivities? “This is the best birthday ever,” she declared. No trick or treating this Halloween, but we still managed to do a socially-distanced play date. “Best Halloween ever!” She talked about that celebration every day for a week. In a lot of ways, I’ve lucked out because my daughter is so damn easy to please. Sure, she’s still a kid, so she sometimes thinks she wants every single toy she sees on TV, but she totally gets it when I tell her no, not today, maybe never, etc. She trusts me, I think, to look at most situations on a case-by-case basis. I trust her to not lose her shit when I say no. 
“Hey, I don’t think I’m up to putting the big tree up for Christmas this year,” I told her last week. “I feel bad about that. What do you think? What if we do a real small tree instead?” My daughter barely missed a beat. “Yeah, we could use my doll tree,” she said. “That would be fun.” “I’ll put an advent calendar tree on the wall too,” I told her. THIS is the only tree I feel like putting up in 2020 | Image by Pottery Barn Kids “Ooh, yeah!” As she replied, I almost couldn’t believe it. How did I ever get such an easy-going kid?
https://medium.com/honestly-yours/my-6-year-old-is-chill-af-she-also-has-anxiety-and-ocd-8137beda9d85
['Shannon Ashley']
2020-11-16 23:52:53.142000+00:00
['Parenting', 'Life Lessons', 'Family', 'Life', 'Mental Health']
When Depression Refuses to let go
After losing five years to depression, I’m now a human in transit between a past that holds me back and a future calling out to me. As I strive to imagine a life that I haven’t been part of for a long time and haven’t reintegrated yet, my present is pure longing. I’m chomping at the bit, itching to transcend my illness, and self-actualize. Although the deep-seated desire to get better means progress, there’s no wishing away depression. Chronic illness is your partner for life, a shape-shifter that never goes away. Accepting this is probably the most important coping skill you can develop, but it forever changes your self-image. Whether you suffer from chronic mental illness as I do or a chronic physical condition invisible to the naked eye, you are now disabled. Major depressive disorder like mine falls under the ADA (Americans with Disabilities Act). This means that I technically can’t be discriminated against when applying for a job. Then again, this is likely to make for awkward interviews in the future as HR departments wring their hands in despair not knowing how to handle my openness. Hence freelance work, for now. Granted, willingly turning your diagnosis into a portfolio is non-standard, but stigma and silence kill. Consider this my minuscule journalistic contribution to a more tolerant and tolerable world, the kind of world I don’t live in yet but would like to live in someday. Because ideals without action are worthless. Last year, the Trump administration even proposed barring people with disabilities from immigrating to the U.S. In short, had I been sick when I immigrated rather than falling sick shortly afterwards, America could well have slammed the door in my face.
https://asingularstory.medium.com/when-depression-refuses-to-let-go-f3543c5897c1
['A Singular Story']
2020-08-29 13:01:20.706000+00:00
['Mental Health', 'Disability', 'Self', 'Work', 'Freelancing']
How to Stop Mac Apps From Launching at Startup
Does your Mac slow to a crawl thanks to apps that spring to life upon startup? Here’s how to disable and manage startup items so you can stop them in their tracks. By Jason Cohen Does your Mac take a long time to boot up? And when it does start, are you bombarded with a series of programs you didn’t open? Startup apps are convenient, but too many can eat up precious memory and slow down your computer. The good news is that you can fight back and manage your startup apps. If you love your Mac but hate waiting around for apps to load, here’s how to disable them on startup. Disable Startup Apps From the Dock The simplest way to disable an app from launching on startup is to do it from the Dock. Right-click on the app and hover over Options in the menu. Apps that are set to open automatically will have a check mark next to Open at Login. Tap that option to uncheck it and disable it from opening. Hide or Disable Login Items You can also manage multiple startup items at once. Go to System Preferences > Users & Groups > Login Items for a list of apps set to launch on startup. To remove a specific app, highlight it and click the minus button under the list. If you prefer, certain apps can be set to launch at startup without necessarily popping up onto the screen. This ensures the program won’t get in your way but will be ready to use when it’s needed. You can hide a startup app to only run in the background until you’re ready to use it by ticking the Hide box next to each app listed. Temporarily Disable Startup Apps You’re starting up your Mac in a hurry and don’t have time for the computer’s normal boot process. Instead of waiting for all those startup apps to load, you can temporarily stop them for just this one session. Enter your login information as you normally would, but hold down the Shift key on the keyboard before submitting your credentials. Continue holding until the Dock appears, and the startup apps won’t load this time. 
Delay Startup Apps with Delay Start You can delay the launch of startup apps rather than disable them entirely. The third-party app Delay Start lets you set a timer for specific apps to control when they start up. Delay Start works similarly to the Mac’s own internal interface. Click the plus sign to add a program to the list. Change the time setting to indicate how long (in seconds) you want the apps to be delayed. On the next startup, the items you added will launch with the delay you set.
https://medium.com/pcmag-access/how-to-stop-mac-apps-from-launching-at-startup-a018fa11b796
[]
2020-12-28 20:02:31.103000+00:00
['Tips And Tricks', 'Mac', 'Technology', 'Apps', 'Apple']
Why It’s OK to Divorce Your Family
Why It’s OK to Divorce Your Family How to decide if it’s time to break ties with your family members. Photo by National Cancer Institute on Unsplash We all have our own ideas of the “perfect family”. Maybe it’s the Cleavers or the Bradys or the Father Knows Best family (am I aging myself here?). The reality of it is, most of our families don’t measure up to those impossible standards. Family can be difficult, we all get that. Some family relationships take a lot of work, or a lot of gritting your teeth, or a lot of self-care to navigate. But sometimes you have a family relationship that is beyond repair. Maybe it’s a parent with severe alcoholism or a sibling with mental illness. Maybe it’s a family member who has done something so awful, there’s no coming back from that. What do you do then? Sometimes your best option is to sever the relationship and divorce your family member.
https://medium.com/illumination/why-its-ok-to-divorce-your-family-776bf2d91086
['Rose Bak']
2020-11-06 16:37:19.636000+00:00
['Self Care', 'Relationships', 'Family', 'Life', 'Mental Health']
Helpful Tools I Used to Optimize My Medium Stories as a Beginner
Helpful Tools I Used to Optimize My Medium Stories as a Beginner Tools to help you with headlines, grammar, and publication research Photo by ThisisEngineering RAEng on Unsplash When I first started writing on Medium three months ago, I needed a lot of help. Like many of you probably did, I spent my first full month trying to figure out everything that Medium had to offer. I felt overwhelmed. Publications? Formatting? Tags? Topics? Curation? What? I just want to write, damn it! Alas, that’s not how Medium works — and for good reason. Medium built a platform that rewards quality work and effective usage of the inner workings of the system. I understood very quickly that in order for me to have my stories read, I needed every advantage I could get. Whether you agree with the idea or not, Medium fosters a merit-based, but very competitive environment for its writers. As a writer on Medium, you’re competing for the attention of millions of readers. Time is a limited resource and each and every reader only gets so much time to dedicate to possibly reading your stories. Well, how do you stand out? Quality and relevance. Is your writing understandable and does your audience relate to it? In the beginning, I considered my writing voice to be stale. My grammar wasn’t the best. I had zero experience with blogging. And I thought I was the most boring person in the world — what could I possibly write about that would captivate people’s attention? Practice makes perfect Fortunately, I’m a voracious reader and love researching and learning new things. I put myself to the task and read up on as much as I could about Medium and how it works. I pored over dozens of articles from a variety of different people. I tested out things to see what worked and what didn’t work. I wrote a variety of different stories under different topics. In my journey, especially the first month, I found and used different tools that assisted in bettering my writing and formatting.
I used the profits from my first month on Medium to subscribe to a few of the Pro versions too. I thought of the investment as an investment in myself and my potential growth on Medium. The old saying, “Practice makes perfect”, couldn’t be more relevant for your journey on Medium. The more you do something, the better you’re going to get at it. It’s called continuous improvement. I’ve since reduced my usage of some of these tools due to my growing mastery of the platform. I’ll probably unsubscribe from the Pro versions of the tools I bought since I barely use them now. I’m still not perfect, but if I look back at some of the first articles that I’ve written, I’ve definitely seen massive improvements. I believe each of you will see the same incremental improvements in your writing too. These tools, especially if you struggle in certain areas, can help give you guidance so you can truly focus on what really matters most — your writing. If there are any other tools, extensions, and websites that you’ve used in your Medium writing journey, leave a response below! I’d love to hear about them and include them in this list in future updates. With this list, I talk about my thoughts on each tool, but I encourage each of you to try them yourself and see what works for you. Here is the list!
https://medium.com/about-me-stories/helpful-tools-i-used-to-optimize-my-medium-stories-as-a-beginner-40db62f120c7
['Quy Ma']
2020-12-22 02:44:03.823000+00:00
['Blogging', 'Writing', 'Medium', 'Advice', 'Writing Tips']
Performance Tuning Apache Sqoop
Controlling Data Transfer Process Photo by Keszthelyi Timi on Unsplash A popular method of improving performance is by managing the way we import and export data. Here are a few ways: Batching Batching means that related SQL statements can be grouped into a batch when you export data. The JDBC interface exposes an API for doing batches in a prepared statement with multiple sets of values. With the --batch parameter, Sqoop can take advantage of this. This API is present in all JDBC drivers because it is required by the JDBC interface. Batching is disabled by default in Sqoop. Enable JDBC batching using the --batch parameter. sqoop export --connect jdbc:mysql://mysql.example.com/sqoop \ --username sqoop \ --password sqoop \ --table cities \ --export-dir /data/cities \ --batch Fetch Size The default number of records that can be imported at once is 1000. This can be overridden with the --fetch-size parameter, which specifies the number of records that Sqoop can import at a time, using the following syntax: sqoop import --connect jdbc:mysql://mysql.example.com/sqoop \ --username sqoop \ --password sqoop \ --table cities \ --fetch-size=n Where n represents the number of entries that Sqoop must fetch at a time. Based on the available memory and bandwidth, the value of the fetch-size parameter can be increased in proportion to the volume of data that needs to be read. Direct Mode By default, the Sqoop import process uses JDBC, which provides reasonable cross-vendor import support. However, some databases can achieve higher performance by using database-specific utilities, since these are optimized to provide the best possible transfer speed while putting less strain on the database server. By supplying the --direct argument, Sqoop is forced to attempt using the direct import channel. This channel may offer higher performance than JDBC.
sqoop import \ --connect jdbc:mysql://mysql.example.com/sqoop \ --username sqoop \ --password sqoop \ --table cities \ --direct There are several limitations that come with this faster import. For one, not all databases have native utilities available, so this mode does not work for every supported database; Sqoop has direct support only for MySQL and PostgreSQL. Custom Boundary Queries As seen before, --split-by uniformly distributes the data for import. If the column has non-uniform values, boundary-query can be used when we do not get the desired results using the split-by argument alone. Ideally, we configure the boundary-query parameter with the min(id) and max(id) along with the table name. sqoop import \ --connect jdbc:mysql://mysql.example.com/sqoop \ --username sqoop \ --password sqoop \ --query 'SELECT normcities.id, \ countries.country, \ normcities.city \ FROM normcities \ JOIN countries USING(country_id) \ WHERE $CONDITIONS' \ --split-by id \ --target-dir cities \ --boundary-query "select min(id), max(id) from normcities"
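The way Sqoop turns those boundary values into per-mapper splits can be sketched in a few lines of Python. This is a simplified illustration, not Sqoop's actual source (the function name and exact rounding are mine): take the min and max returned by the boundary query and divide that range evenly into one WHERE-clause interval per mapper.

```python
def split_ranges(min_id, max_id, num_mappers):
    """Divide the closed interval [min_id, max_id] into one sub-range
    per mapper, as evenly as possible. Each tuple becomes a per-mapper
    predicate along the lines of: id >= start AND id <= end."""
    span = max_id - min_id + 1
    base, extra = divmod(span, num_mappers)
    ranges, start = [], min_id
    for i in range(num_mappers):
        # The first `extra` mappers absorb the remainder rows
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# Ids 1..1000 split across 4 mappers: 250 rows each
print(split_ranges(1, 1000, 4))
```

This also shows why a skewed split column hurts: the id ranges are equal, but if the ids are not uniformly distributed, some mappers get far more rows than others, which is exactly the situation a custom --boundary-query or a better --split-by column is meant to fix.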
https://medium.com/swlh/performance-tuning-apache-sqoop-512242a58df5
['Thomas George Thomas']
2020-09-17 03:29:19.896000+00:00
['Data Science', 'Big Data', 'Analytics', 'Database', 'Sqoop']
Social Media Response to Sainsbury’s Christmas Ad Proves UK is as Racist as the US
Sainsbury's is trending on Twitter. This is unusual enough for me to investigate, and so I watch a heartwarming Christmas ad that makes me miss my family, want to eat a roast dinner, and imagine my boyfriend doing an embarrassing song and dance about his cooking skills for our future children. Aw, Twitter is just excited for Christmas, I think naively, before the words racist comments catch my eye. The family giving me warm Christmassy feelings in Sainsbury’s new ad, titled Gravy Song, is black. There’s a phone call between a dad and his daughter. She’s dreaming of her mum’s roasties and wondering whether she’ll be home in time to eat them. Her dad reminds her that he is something of a wizard at making gravy, so much so that he’s improvised a song to celebrate his skillset. It’s a sweet and loving advert that made me nostalgic for a time I actually find quite stressful irl. Well played, Sainsbury’s advertising team. Gravy Song has over a million views on YouTube, and the comments are turned off. Meanwhile, Sainsbury’s second Christmas advert, Perfect Portions, featuring a white family, has fewer than 20,000 views on YouTube, and comments are still allowed. As is often the case with our culture, the scale of our racism problem reveals itself via the Internet. The comments feature is disabled on one ad but not the other. At the time of publishing this article, almost as many people have selected the dislike button as the like button on the black family advert on YouTube, compared to 23 people using the dislike feature on the white family ad. Who are the people taking the time to use these functions to display their hate? This is how institutionalized racism is allowed to manifest. Covert acts of hate that could easily be denied as racial, and that cut like tiny knives. And what is it that people object to? Racists on Twitter are upset because the UK is an 80% white country, and this one advert has no white people in it. These customers are boycotting Sainsbury’s.
“Go woke, go broke!” they exclaim. As if to be woke, aka enlightened, is a bad thing. “Advertiser’s should stick to the Heart warming animal ads, or Santas etc. Remember the Penguine ad! Stick with light hearted ads! ALDI did well just by a Carrot & a Parsnip!!” another Twitter-user writes. For some reason, this person believes that representing a black family having fun is political and incendiary, rather than lighthearted. As though they live in a dystopian world in which celebrating diversity is evil because the government swears racism doesn’t exist. What does it mean if a person relates more to vegetables than other human families? Is this a sign of some more serious pathology? These tweets make me angry, and they make me sad. How can a far-too-late attempt by a large organisation to correct decades of a lack of diverse representation be read as its opposite? And why are these people so confident in airing their ignorant, bigoted opinions? Do they not know that they are outing themselves as racist? Or do they not care? And if they don’t care, is this a failing of morality or education? Because I have heard this kind of overtly ignorant racism from people I am related to, and I know that it is a matter of education. Occasionally, I see it in students that I teach. Then I’m less sure. Always, it’s introduced or concluded with, “You know I’m not racist.” Or: “I don’t care about race.” Another astonishing piece of double speak. No matter how much I explain why diverse representation matters, I still have to listen to blithely confident people talking about companies ‘getting on the diversity bandwagon’ and ‘virtue signalling’. “Well done Sainsburys and BLM for creating more division in the country,” another Twitter-user writes. And I hear echoes of something my dad said when I last argued with him about race. That toppling slave trader statues only creates more of a divide. As if the statue isn’t toppled because of the divide.
Sainsbury’s has attempted to pick a side in a bid to show that racism will not be tolerated. But social media refuses to acknowledge history or reality. It is unable to admit that this is a fight that started hundreds of years ago. Instead, racists blame black activists for creating the division they are fighting against. It is mindbending. And it makes me very tired. Because how do you battle an opponent averse to reason? The mental gymnastics required to keep communications decent can wear a woman out. Even an increasingly privileged white woman. We have reached a point of denial-induced doublespeak where the words ‘I’m not racist’ actually mean the opposite. I tell people I am racist, to try and differentiate myself from the huge section of white people who refuse to accept any responsibility for the shit show we are experiencing. And how could I not be? I was brought up in small-town Britain in the 80s by people who got their news from The Sun and The Daily Mail.
https://medium.com/an-injustice/social-media-response-to-sainsburys-christmas-ad-proves-uk-is-still-as-racist-as-the-us-e4c31024190b
['Chelsey Flood']
2020-11-21 17:11:02.061000+00:00
['Race', 'Equality', 'Social Media', 'Marketing', 'Politics']
When Can We Stop Wearing Masks?
Vaccines are promising, but don’t toss that face covering just yet. From a scientific perspective, there’s no longer any question that face masks help prevent the spread of Covid-19, protecting both the wearer and others. So with optimism rising over vaccines that appear to be highly effective, one big question now is how long we’ll all be bound to cover our faces, especially since new cases and deaths continue to soar. The answer: Longer than you might have hoped. Key factors include how quickly a sizable majority of the population gets vaccinated, plus a big wild card: whether the vaccines actually prevent infection, either mostly or entirely, or merely prevent and curb symptoms while leaving an infected person still potentially infectious. In the meantime, infectious-disease experts stress the ongoing importance of masks in a layered approach to prevention that still includes avoiding crowds and poorly ventilated spaces, washing hands frequently, then getting the vaccine when your turn comes. “Even after being immunized it will be important for people to continue to wear masks until we hear that the pandemic is under control, many people have been immunized, and community levels of disease are at acceptably low levels,” says David Aronoff, MD, director of the Division of Infectious Diseases at the Vanderbilt University School of Medicine. “While vaccines will prevent symptomatic infection, we do not know whether vaccines will completely prevent infection in the first place, or prevent transmission of the SARS-CoV-2 virus.” Vaccines are never perfect For comparison, the annual flu shot — more important than ever this year — is typically around 50% effective.
It fully protects some individuals, and if a vaccinated person does catch the flu, their symptoms are likely to be less severe and of shorter duration than those of a nonimmunized person, and during that time they could still transmit the disease to someone else, Aronoff explains. “I am hopeful that SARS-CoV-2 vaccines prevent infection and eliminate contagiousness,” he tells me. “But we need to behave as if they reduce symptoms, prevent severe disease, shorten the duration of viral shedding, and reduce but not eliminate contagiousness.” Multiple Covid-19 vaccines are getting high marks from scientists not involved in their creation, including Eleanor Murray, ScD, assistant professor of epidemiology at Boston University’s School of Public Health. But Murray echoes Aronoff on the notion that there’s more to learn before we know how high the high fives should go. “The randomized trials showed us that the vaccines do a great job preventing illness, but haven’t reported on infection status,” Murray says in an email. “We will likely learn more about this over the next few months as more people get vaccinated, though, so hopefully we’ll have the answer soon. If the vaccine does prevent transmission, then we’ll need to wear masks until maybe about 70% of the people in our communities are vaccinated. If the vaccines don’t prevent transmission, we may need to wear masks longer than that.” That 70% figure is a rough estimate of what it takes for herd immunity to significantly kick in, presumably slowing the spread of the virus because it begins to run out of people to infect. Dr. Anthony Fauci, the nation’s top infectious-diseases expert, said on December 15 that an even lower level of immunization could start to slow the pandemic down by late spring or summer and potentially “turn this thing around” by the end of 2021. “I would say 50% would have to get vaccinated before you start to see an impact,” Fauci told NPR.
“But I would say 75 to 85% would have to get vaccinated if you want to have that blanket of herd immunity.” Spring? Summer? Fall? Fauci’s timeline depends on those vexing unknowns, however. “There is a big if to this,” says Marc Lipsitch, PhD, a professor of epidemiology and director of the Center for Communicable Disease Dynamics at Harvard T.H. Chan School of Public Health. Herd immunity thresholds cited by Fauci depend not just on the number of shots taken, but on whether the Covid-19 vaccines prevent infection, infectiousness, and transmission as well as, say, measles and mumps vaccines, Lipsitch tweeted December 15, or if they mainly prevent symptoms and have only modest effects on transmission, as is the case with vaccines for some other diseases. It’s also not yet clear how rapidly vaccines will be deployed to the entire population. “I don’t think we should stop wearing masks indoors, especially in spaces that we do not know what the ventilation/filtration rates are and that are crowded until the general public starts to get vaccinated, and that will not be until close to next fall, I suspect,” says Shelly Miller, PhD, an environmental engineer at the University of Colorado, Boulder, who studies the indoor transmission of infectious diseases. Until then, the importance of masking up is no less crucial — and far more so when in crowded indoor spaces and especially in poorly ventilated places, versus the comparatively lower risk outside with good airflow and proper physical distancing. Risk is also high if someone brings the virus home, and there are several ways to mitigate that challenge. Since masks are here to stay for a while, here’s an in-depth guide to choosing the best face coverings and dealing with the many issues they present:
https://coronavirus.medium.com/when-can-we-stop-wearing-masks-e85aa174a325
['Robert Roy Britt']
2020-12-16 00:43:47.124000+00:00
['Vaccines', 'Pandemic', 'Covid 19', 'Coronavirus', 'Masks']
WeGroup, communication with digital customers
WeGroup has raised €1.5M in total. We talk with Arvid de Coster, its CEO. PetaCrunch: How would you describe WeGroup in a single tweet? Arvid de Coster: WeGroup helps insurance providers to better connect with their growing group of digital customers PC: How did it all start and why? AC: Insurance is a very conservative and slow industry that has been lagging behind on technology when you compare it to other financial industry players such as banking. A lot of processes are very inefficient and result in high overhead costs and a lot of customer frustration. It also fails to use modern technology and resources, such as the vast amounts of open data available, which gives insurance players a big disadvantage in today’s market. WeGroup is destined to solve these problems. Initially, we were a B2C startup, trying to build ‘the insurance company of the future’. After a while, however, we found out that we could solve these problems better while working together with traditional players, both insurance carriers and distributors, which is why we pivoted away from B2C. PC: What have you achieved so far? AC: Since our founding in 2017, we are proud to have grown our team from three founders on an attic in Belgium to a growing company of 25 people, with offices in 3 European countries and active in the entire European Union. Thanks to landing some major international clients fast, we were able to raise €1.5M in seed funding, and are now working towards a Series A of €5M. Of our product development, we are equally proud. There is a lot of buzz around words like artificial intelligence and all its side domains, but we actually explored fields like deep learning and natural language processing to build actual working products, solving actual industry problems. PC: How will you use your recent funding round? AC: The money raised will be used mainly to accelerate our international expansion.
Being operational in several European countries comes with different languages as well as cultural and economic differences. Therefore it is important to have team members all over the place, familiar with all these variables. This means both technical people (for support) as well as business people for sales and marketing. PC: What do you plan to achieve in the next 2–3 years? AC: In the next few years, we should prove that our solution is scalable on an international level. The diversity in Europe is great for putting product and growth to the test, but of course our ambitions rise beyond that. The same industry challenges European companies are facing can be felt by their counterparts in the US and Asia. Therefore, we work towards scalable and healthy expansion in order to prepare ourselves to cross the ocean.
https://medium.com/petacrunch/wegroup-communication-with-digital-customers-398f2bb4ebea
['Kevin Hart']
2019-08-23 19:34:32.776000+00:00
['Insurance', 'Insurtech', 'Startup', 'Customer Service']
Basic Things You Should Know About Web.
In my previous blog/story I tried to spark your eagerness to learn web development. There are some interesting things and facts in this one too. Today let’s talk about those. What is Web Development? In simple words, it’s the work of building a website on the internet. You can either make a single static page or a complex web page. Larger organizations require a large group of developers for their websites and their business. Web development usually refers to the main non-design aspects of building web sites: coding and writing markup. Web development may use a Content Management System (CMS) to make content changes easier and available with basic tech skills. The web has developed this world from a technological perspective. There are three kinds of Web Developer Specialization: Front-End Developer, Back-End Developer and Full-Stack Web Developer. Front-End Developers are responsible for the behavior and visuals that run in the user’s browser, while Back-End Developers deal with the servers. And the Full-Stack Developer is an expert who deals with both sides, i.e. front-end development and back-end development. But that’s a topic for another day; I’ll discuss more about it then. Writing thousands of lines of code is not enough; you should really know what it does. Creating a website is not only about learning various programming languages; you should also learn about devtools, GitHub, APIs and many more. There are some basic technologies that are required to create a website and host it on your own. We’ll talk about them: HTML/CSS/JS: Suppose you’re constructing a house. As the architect of the house, you should make a plan first and then execute the plan. The construction process goes like this: the base of the building (the walls, the floor, the roof), then the interior and exterior colors, and finally the installation of electricity and water. This is just an example of what you’re about to learn.
The HTML part gives the website a solid structure and builds it. The CSS part gives the website a beautiful look with colors, margins etc. The CSS part basically beautifies your website. And finally the JS part takes care of the behavior of your website. HTML stands for Hyper Text Markup Language and CSS stands for Cascading Style Sheets. These two are the core of a website, whereas JS, which stands for JavaScript, is not strictly required for a website to work. JavaScript is highly in demand nowadays. It makes your webpage more interactive. If you want to learn more about JS then you have to dive deep into it. This language is used in both front-end and back-end development. You can see any website’s code by right-clicking and selecting Inspect. GitHub: Okay, now you’ve made a website on your own. Now what? Suppose your website is about your business or for the welfare of society; it wouldn’t work out if people can’t see it. So you have to host your website. GitHub is a place where you can host your site, create repositories, branches, commits and pull requests. We’ll talk about this later on. If you’re eager to know more about GitHub you can visit https://guides.github.com/activities/hello-world/. I’d recommend that if you’re learning web development, you get in touch with Git asap and learn more about it. It’ll help you host your webpage and show your work to others. Browser Devtools: Your time is more valuable than anything else, and you can make changes and debug your webpage using Devtools. You can edit HTML/CSS elements or properties, emulate devices, track JS errors, etc. This makes your work a lot easier than searching through the thousands of lines of code you’ve written. You can also check if your webpage is responsive on other devices like mobile, tablet, etc. Browser Devtools have different tabs like Elements, Console, Network, etc.; a dev must know at least briefly about all of these.
Most people use the Google Chrome browser and Chrome DevTools to develop or debug their webpage or website. It makes the work of a developer a lot simpler and easier. Wireframing: A developer must work on wireframing his project before he starts. Now what does wireframing mean? The word ‘wireframe’ may look like a difficult thing to learn, but it isn’t. A wireframe is a 2D blueprint of the website the dev is going to make before writing tons of code. Wireframing is done on a piece of paper, and the dev is supposed to draw/plan the web page he’s going to make and develop. It’s as simple as that. It makes the work a lot easier and saves time. You can develop each part of the webpage by looking at the wireframe you’ve drawn. It sounds a lot easier, right? Well it is… Good luck!! Practice: Web development is not about reading books and memorizing. It’s more about practicing on your own PC or laptop. The more you practice, the more you learn. There will be lots of errors and mistakes in your coding, but always remember that debugging is also a main part of learning. Even large companies make mistakes building their websites. There are lots of tags and properties in web development techs. You don’t have to remember all of them, but a few of them are important to remember and memorize. And the rest? You can always go to MDN or W3Schools for these. They are the best guides for web development. Always remember to do it practically. You can take notes, but if you don’t practice regularly it won’t help you learn. I’ve told you as much as I could; hope this helps. Good luck on your journey. I’ll see you next time!!!
https://rikbiswas.medium.com/basic-things-you-should-know-about-web-d21d91807b43
['Rik Biswas']
2020-12-09 17:09:08.531000+00:00
['Science', 'Web Development', 'Web Design', 'Technology', 'Computer Science']
Engineering For Failure
Not so long ago, our systems were simple: we had one machine, with one process, probably no more than one external datastore, and the entire request lifecycle was processed and handled within this simple world. Our users were also accustomed to a certain SLA standard — a 2-second page load time could have been acceptable a few years ago, but waiting more than a second for an Instagram post is unthinkable nowadays. (Warning: buzzwords ahead) When systems get more complex, with strict latency requirements and a distributed infrastructure, an uninvited guest crawls into our systems — request failure. With each additional request to an external service within the lifecycle of a user request, we’re adding another chance for failure. With every additional datastore, we’re open to an increased risk of failure. With every feature we add, we risk increasing our latency long-tail, resulting in a degraded user experience in some portion of the requests. In this article, I’ll cover some of the basic ways we at Riskified handle failures in order to provide maximal uptime and optimal service to our customers. Failure by example Every external service, no matter how good and reliable, will fail at some point. We at Riskified learned this the hard way when we experienced short failures with a managed, highly available service that almost resulted in data loss. That incident taught us the hard lesson that request failures should be handled gracefully. In Google’s superbly written Site Reliability Engineering Book, they describe The Global Chubby Planned Outage, in which a service was so reliable that its customers were using it without taking into account the possibility of failure, and even using it without a real essential need, just because it was so reliable. As a result, Chubby, Google’s distributed locking system, was set a Service Level Objective (SLO) for service uptime, and for each quarter this SLO is exceeded, the team responsible for the service intentionally takes it down.
Their goal is to educate users that the service is not fail-safe and that they need to account for external service failures in their products. So how should engineers handle request failures? Let’s cover some common patterns: Retrying Retrying a failed request can, in many cases, solve the problem. This is the obvious solution, assuming network failures are sporadic and unpredictable. Just set a reasonable timeout for each request you send out to an external resource, and the number of retries you want, and you’re done! Your system is now more reliable. Something to consider, however, is that additional retries can cause additional load on the system you’re calling, and make an already failing system fail harder. Implementing and configuring short-circuiting mechanisms might be a thing to consider. You can read more about it in this interesting Shopify engineering blog post. Prefetching — Fail outside of the main flow One of the best ways to avoid failure while calling an external service is to avoid calling this service at all. Let’s say we’re implementing an online store — we have a user service and an order service, and the order service needs the current user’s email address in order to send them an invoice for their last purchase. The fact that we need the email address doesn’t mean we have to query the user service while the user is logged in and waiting for order confirmation. It just means that an email address should be available. In cases of fairly static data, we can easily pre-fetch all (or some) user details from the user service in a background process. This way, the email is already available during order processing, and we don’t need to call the external service. In the event the service fails to fetch user details, that failure remains outside of the main processing flow and is “hidden” from the user. In his talk, Jimmy Bogard explains it better than I do (the link starts from his explanation about prefetching, although the whole talk is great!)
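Going back to the "Retrying" pattern above, it can be sketched in a few lines. This is a hedged illustration in Python (the article names no language); `call_with_retries`, its default limits and the backoff schedule are all illustrative choices, not Riskified's actual code:

```python
import time

def call_with_retries(request_fn, max_attempts=3, timeout_s=1.0, backoff_s=0.1):
    """Call an external resource with a per-request timeout and bounded retries.

    `request_fn` receives a timeout and either returns a response or raises.
    Bounding the attempts (and backing off between them) avoids piling extra
    load onto an already failing system; a short-circuiting mechanism would
    be the next layer on top of this.
    """
    last_error = None
    for attempt in range(max_attempts):
        try:
            return request_fn(timeout=timeout_s)
        except Exception as exc:  # in real code, catch only transient errors
            last_error = exc
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise last_error
```

With `backoff_s=0` this degrades to a plain bounded retry; tuning the timeout and attempt count is exactly where the latency/reliability trade-off discussed above lives.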
Best efforting In some cases, we should just embrace failure, and continue processing without the data we were trying to get. You’re probably wondering — if we don’t need the data, why are we querying it at all? The best example we have for this in Riskified is a Redis-based distributed locking mechanism that we use to block concurrent transactions in some cases. Since we’re a low-latency oriented service, we didn’t want a latency surge in lock acquiring to cause us to exceed the SLA requirements of our customers. We set a very strict timeout on lock acquiring so that when the timeout is reached, we continue unlocked — i.e. we prefer race conditions over an increase in latency for our customers. In other words, the locking feature is a “nice to have” feature in our process. Falling back to previous or estimated results In some cases, you may be able to use previous results or sub-optimal estimations to handle a request while other services are unavailable. Let’s say we’re implementing a navigation system, and one of the features we want is traffic jam predictions. We’d probably have a JammingService (not to be confused with the Bob Marley song) that we’d call with our route to estimate the probability of traffic jams. When this service is failing, we might choose a sub-optimal course of action, while still serving the request: Using previous results: we might cache some “common” jam predictions and serve them; we might even pre-fetch the jam estimation for the most commonly used routes of some of our users. Estimate a result: Our service can hold a mapping of mean jam estimation per region and serve that estimation for all requests for routes in the region. In both examples, the solution is obviously not optimal, but is probably better than failing a request. The general idea here is to make a simple estimation of the result we’re trying to get from the external resource.
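The "best efforting" lock described above can be sketched like this. It is a minimal illustration using Python's `threading.Lock` in place of the Redis-based lock (the interface and names are assumptions); the point is the strict acquire timeout and continuing unlocked when it is reached:

```python
import threading

def process_transaction(txn_id, lock, acquire_timeout_s=0.05):
    """Best-effort locking: a latency surge in lock acquiring must not
    fail or stall the request. If the strict timeout is reached, we
    continue unlocked, accepting a possible race condition instead of
    added latency for the customer.
    """
    got_lock = lock.acquire(timeout=acquire_timeout_s)
    try:
        return f"processed {txn_id}"  # the real transaction logic goes here
    finally:
        if got_lock:
            lock.release()  # only release a lock we actually hold
```

The same shape applies to a distributed lock client: track whether the acquire succeeded, do the work either way, and release only what you hold.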
Delaying a response If the business of the product allows it, it’s possible to delay the processing of the request until the problem with the external resource is solved. As an example, let’s take the JammingService from the previous solution — when it fails we can decide to queue all requests in some internal queue, return a response to the user that the request cannot be processed at the moment, but a response will be available as soon as possible via push notification to the user’s phone, or via webhook for example. This is possible mostly in asynchronous services, where we can separate between the request and the response. (If you can design the service to be asynchronous to begin with, that’s even better!) Implement simplified fallback logic On some mission-critical features, a more complex solution is needed. In some cases, the external service is so critical to our services, that we’d have to fail a request if the external service fails. One of the solutions we devised for such critical external resources, is to use “simplified” in-process versions of them. In other words, we’re re-implementing a simplified version of the external service as a fallback within our service, so that in the event the external service fails, we still have some data to work with, and can successfully process the request. As an example, let’s go back to our navigation system. It might be such an important feature of our system, that we want each request to have a fairly good traffic jam estimation, even if our JammingService is down. Our JammingService probably uses various complex machine learning algorithms and external data sources. In our simplified fallback version of it, we might choose, for example, to implement it using a simple greedy best-first algorithm, with simple optimizations. In this case, even if there’s a failure of the JammingService, some fairly good traffic jam estimation is available within our navigation system. 
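That simplified in-process fallback can be sketched as follows. This is illustrative Python: the article's fallback is a greedy best-first algorithm, which is stubbed here as a per-region mean table (the simpler "estimate a result" idea from the previous section), and `JammingService` and all names are hypothetical:

```python
def estimate_jam(route, jamming_service, regional_means):
    """Simplified in-process fallback: if the full JammingService fails,
    serve a crude local estimate (here, a mean jam probability per
    region) so the request still gets a fairly good answer instead of
    failing outright.
    """
    try:
        return jamming_service(route)
    except Exception:
        # simplified local version of the service, kept inside our process
        return regional_means.get(route.get("region"), 0.5)
```

The local table (or local algorithm) has to be kept up to date alongside the real service, which is exactly the maintenance cost discussed next.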
This isn’t optimal, since now we need to maintain two versions of the same feature, but when the feature is critical enough, and the external service unstable enough, it could be worth it. Closing thoughts — Failing as a way of life At school, I was quite a bad student, so failing is not new to me. This taught me that as an engineer, anything I lay my hands on might fail, and simply catching the exception is not enough — we need to do something when we catch it; we still need to provide some level of service. I encourage you to dedicate a big part of your time to failure handling, and to make it a habit to announce your systems are production-ready only when you handle your failures in a safe and business-oriented way. As always, you’re welcome to find me at my Twitter handle: @cherkaskyb
https://medium.com/riskified-technology/engineering-for-failure-f73bc8bc2e87
['Boris Cherkasky']
2020-09-24 11:59:52.253000+00:00
['Scale', 'Backend', 'Programming', 'Software Engineering', 'Cloud']
Let’s Walk Into the Unknown. Together.
Change is terrifying. It’s the fear of encountering the unknown that is ultimately hindering our progress and locking us into the same old place. This fear is the reason why I revert to “Last Christmas” instead of listening to the new Christmas hits of today’s talents — although the shuffle function has partly helped me overcome this problem nowadays. It’s the reason why I hesitated to start writing articles on my blog. And it’s the same reason why we are longing for more and more privileges, allowing us to stay the same, instead of accepting responsibility for the challenges ahead of us. “The only way to make sense out of change is to plunge into it, move with it, and join the dance.” - Alan Watts The scary part of change is that it doesn’t give us an option. Change will happen, and it will affect us. But there still is a choice: we can decide whether we accept or refuse responsibility. Since this pandemic began, I caught myself several times thinking back to my accidents, contemplating whether I made the right decision. One reason for remembering events is that they might still contain information we can learn from. The reason I am thinking back to my accident might be that I have to learn how to accept change again. Will life be the same after the lockdown? Probably not. Too many people have died, lost their jobs and businesses, and missed opportunities to say goodbye to loved ones. Industries have learned how cheap and possible work from home is. But while surprisingly high numbers of people fall for conspiracy theories, fortunately, even more of us are taking care of one another. We are in this together, and we are arguably doing quite a good job. Yet, things will change. It’s natural and nothing detrimental. But it’s terrifying. Change is not only terrifying because of the unknown but also because we have to take on responsibility. Being responsible means that our actions matter.
And once we realize how deeply our actions matter, we begin to fathom that everything in life matters. Responsibility, therefore, is the ultimate antidote to nihilism. Why should I even care to face the unknown and be responsible for my actions? It’s a fair question to ask, given the terror that navigating the unknown imposes on us. Ultimately, once I die, and depending on my afterlife beliefs, nothing I did would matter, would it? That’s where I tend to disagree with nihilism. You can technically stretch everything out over a long enough time frame to make it insignificant, but that doesn’t indicate that it never was significant at some point. Your actions are definitely impacting your own life, giving them a lifespan of up to 100 years, which is already significant — after all, it’s a whole lifetime. But then, your actions are also impacting the ones around you, giving your actions meaning even after your own life might have come to an end, increasing their significance. Now, if you live in fit circumstances and act properly, you might work on something that will impact several generations or even the course of humanity itself. This will leave your actions with even more meaning when measured by their impact. Arguably, nothing of the latter will matter to you specifically after your death, as you won’t experience any of it. But that doesn’t mean your actions aren’t meaningful. Your actions will be even more significant, with life spans projected to grow exponentially over the next 50 years. Although, perhaps, the concept of meaning works best with the proposition of having an end — a final moment to which it all led. Notwithstanding, and regardless of your lifespan, actions have consequences. But so does inaction. Once we realize that what we are doing is meaningful, there is another insight waiting: the need for responsibility. If our actions are of significance, we have to take responsibility for them.
We are responsible not only for our own lives but also for the lives around us. The better we can compose ourselves, the more profoundly we can support the people we meet. I don’t think there is a path in the middle. If you ascribe life any meaning, you will have to accept responsibility. You can’t deeply believe in a meaningful life while not acting responsibly. Constantly acting against what you believe would ultimately push you into resentment and depression. The other possibility is to deny life any meaning, ridding yourself of any responsibility. But where does one go from here? Acting this way is selfish. If too many people believed this and acted on it, there wouldn’t be any progress. Why would you build houses, respect other people — and their property? Why would people even bother working together and helping others? Why should we save someone who got hit by a car if it wouldn’t even be of any meaning? If we acted like life was meaningless, we wouldn’t be in a place to even think about a possible meaning of life. The fact that we are as privileged as we are today is because the actions of our ancestors had meaning. And they still have. It’s because our ancestors took on the responsibility.
https://medium.com/curious/lets-walk-into-the-unknown-together-f72a17f7b222
['Julian Drach']
2020-12-23 23:11:55.190000+00:00
['Responsibility', 'Self', 'Philosophy', 'Change', 'Mental Health']
About me
I am a Social Media Manager, Digital Marketing Consultant, Creator of You Are Not Alone and Former Founder of Moment Clothing from London. I currently do, and have done, PR, Social Media Management and Digital Marketing Consulting work for a range of clients, including Guzzi’s World (Vlogger), Goodwood Pictures (Filmmaker), YSYS formerly known as Startup Stories Worldwide (SSW), FWRD, Croydon Labour Party and more, as well as starting my own clothing line (Moment Clothing) with a friend back in 2013. My love for Marketing came about in a strange way, as at first my major interest was in politics. However, I still always had a love for social media and technology, with friends constantly asking me to help them manage their social media platforms when they started their own business, ventures, photography or music career. I never really thought at first that I had a skill for social media or digital marketing; I just absolutely loved using social media constantly, engaging with others and creating funny and engaging content. Furthermore, my love for social media and digital marketing fully developed just recently, after failing my second year of University twice (due to the bereavement of my father, as I was still grieving, and other personal issues). The result was that I couldn’t continue doing my course (politics) at the University. This depressing period and time away from academia gave me time to think about what I really enjoy doing, which is social media management and digital marketing.
This then led me to set myself up as a Freelance Social Media Manager and Digital Marketing Consultant, helping friends who have started or are starting their own start-ups, businesses and ventures (YSYS, Surprise Them With Progress), and those who are musicians (Mr Outspoken), vloggers (Guzzi’s World), photographers (the lonely ldnr) and filmmakers (Goodwood Pictures). I help them manage their social media, grow their online presence, help create their personal brand and produce engaging, relatable content for all their social media platforms, as well as coming up with creative ideas for them to engage with their followers, e.g. competitions and storytelling — start-ups or artists sharing how they got to where they are now and what they do behind the scenes — and advising them to document their journey to their audience via their social media outlets, just to name a few. I believe social media is a community. An online space for people to communicate, network, discuss and showcase their talents, businesses or projects, as well as for brands to engage with their customers and, vice versa, customers to engage with their favourite brands. For this reason, I pride myself on being an avid social media user, as well as a Social Media Manager and Digital Marketing Consultant; producing exceptional and creative social media content/campaigns, and intuitive content creation and branding that emotively resonates and engages with followers. My journey up to this very moment has been down to hard work, putting myself out there, self-awareness, perseverance, networking, constantly being willing to learn and develop my current skills and learn new skills, as well as my continued passion for what I love doing — social media and marketing.
I am currently taking the Google Digital Garage online Digital Marketing course to further my digital marketing skills, as well as seeking internships at marketing agencies, which has proven hard without a degree, but I continue to persevere and know that with hard work I will eventually get one. At the moment, I am freelancing. I have recently begun working on one of my personal ventures, You Are Not Alone, an online platform where people who suffer, or have suffered, from mental health illnesses can share their stories, with a similar approach to Humans of New York but extending it beyond a picture and a quote to a space (a blog) where they can share in depth about their battle with mental illness. The other venture is to start my very own charity helping underprivileged children, first in Jamaica, then aiming to expand to other Caribbean countries and then Africa. I’m looking for people who would like to get involved and help get my ideas off the ground. I have a strong and passionate interest in social media, marketing and the development of new technology. I’m in need of a mentor in those industries: a mentor who will give me the necessary knowledge, experience and opportunity to gain the skills needed in the marketing industry so I can excel in these fields. I have been searching and haven’t found one yet, but I continue to persevere in my search for that person. At present, I’m looking for a marketing internship where I can gain valuable career and life experience, knowledge and skills to enable me to start a career in marketing and conquer the marketing industry. I am always looking for the chance to meet and work with marketing agencies, brands, artists, start-ups or businesses, as well as to manage social media platforms and provide digital marketing consulting.
Please do get in touch about work or opportunities via my email ([email protected]), and feel free to reach out even just for a chat or if you would like to know more.
https://medium.com/clevan/about-me-9604b5801ea2
[]
2017-03-13 22:48:19.918000+00:00
['Marketing', 'Social Media', 'Jobs', 'Digital Marketing', 'Freelancing']
What to Read Next? Analyzing the Digital Fragmenta Historicorum Graecorum with Python and NLP
Getting the data The DFHG’s API relies on queries of authors: this makes sense because of the fragmentary nature of the corpus — we don’t have specific “works” to search through, like The Republic, so the author is the most atomic unit we can get. We thus need to extract a list of all the authors included in the corpus. We can then build out API calls and feed our data into an NLP engine. This code outlines the steps I took to send a request to the DFHG’s API documentation, clean up the HTML, and get a (clean) list of authors. Here is a quick check of the word counts from the fragments in our corpus (ignore the warning): Latium est, non legitur This is not a clean dataset — at least, not in the sense that we’re ready to do any actual analytics yet. The problem here is the nature of the material: these are fragmentary pieces of writing, some of which survive only through quotations in other sources. What makes life even more difficult is disentangling the “actual” authorial content from whatever may have accrued to the text at the hands of scribes across the centuries. More particularly, what we have here are several fragments that are in Latin. The DFHG does provide a Latin translation for most (if not all) of the fragments in the corpus, but when Latin comments turn up in the ‘text’ field of the data we scraped earlier, that suggests the influence of a commentator, not necessarily the author we want to look at. To proceed, we need to figure out a way to separate the content we want from the commentary we don’t. Luckily for us, Latin and Greek are different languages written in different alphabets. (It would be a different question entirely if we were trying to segregate, say, Spanish and English words when they’re bunched up together, since they use the same alphabet.) Since Latin uses, naturally, the Roman alphabet, we can use regex (regular expressions — basically pattern searches) to find runs of text in the data that are made up of Latin characters.
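The alphabet-based filter this paragraph describes can be sketched in a few lines (the sample fragments below are invented for illustration, not drawn from the DFHG):

```python
import re

# Greek text lives in its own Unicode range, so any Roman-alphabet
# character signals Latin (or editorial) content in a fragment.
LATIN_RE = re.compile(r'[a-zA-Z]')

fragments = [
    "λέγεται δὲ καὶ ἄλλως",   # pure Greek: no Latin characters
    "λέγεται cf. supra δὲ",   # Greek with a Latin editorial note
]

flags = [bool(LATIN_RE.search(f)) for f in fragments]
print(flags)  # [False, True]
```

The same pattern is what the `str.contains` call below applies to a whole pandas column at once.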
I’m going to be pretty broad here: we want to capture as much of the Latin as possible while leaving the Greek. If a passage is all “original,” we should let it stand. Otherwise, we should flag it. This line will pick up any row in our data that has a Latin character in the “text” field: latin_index = author_df['text'].str.contains('[a-zA-Z]', regex=True) Let’s do some further cleaning to get the number of Latin words, their positions within the text, and the relative frequency of Latin/non-Latin words in each fragment: We now have counts and frequencies. At this point we have to make some decisions about how to handle these data points: we can just drop these rows and move on, or we can try to keep the texts that have a minimal amount of Latin in them. Let’s try the second route. There are options here as well: we can either drop rows entirely, or try to filter out the Latin words from each text. I’m going to try a combination of both, going through the data first and dropping rows that are entirely or almost entirely Latin, then scrubbing the remaining text. First, let’s look at the distribution of Latin in the corpus we have. What these charts tell me is that our intuition was right — we have a few authors whose work preserved in the DFHG is almost entirely in Latin (i.e. those fragments that fall on the left-hand side of the lower figure), and many others where a few stray Latin words have made their way into the mainly Greek text. To make a final pass at cleaning the corpus, I defined a few functions to take a list of words and return the indices of all Latin words in the list. I then filter each list of words by deleting elements at those indices, working from the back so earlier deletions don’t shift later indices: for index, row in final_df.iterrows(): indices = row['latin_word_index'] for i in sorted(indices, reverse=True): del row['words'][i] I then drop fragments with remaining Latin words: final_df = df[df['latin_word_count'] == 0] NLP Time Now we can move on to the NLP part of this project.
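The drop-then-scrub sequence can be sketched end to end on a toy two-row DataFrame standing in for the real corpus (the 80% Latin threshold here is an arbitrary illustration, not the article's cutoff):

```python
import re
import pandas as pd

# Words made only of Roman-alphabet characters count as Latin.
LATIN_WORD = re.compile(r'^[a-zA-Z]+$')

# Toy stand-in for the corpus: one mostly Greek row, one all-Latin row.
df = pd.DataFrame({
    "author": ["A", "B"],
    "words": [["λόγος", "cf", "ἔργον"], ["supra", "vide", "infra"]],
})

# Per-fragment Latin counts and frequencies, as in the article.
df["latin_word_count"] = df["words"].apply(
    lambda ws: sum(bool(LATIN_WORD.match(w)) for w in ws))
df["latin_freq"] = df["latin_word_count"] / df["words"].apply(len)

# First drop fragments that are (almost) entirely Latin...
df = df[df["latin_freq"] < 0.8].copy()

# ...then scrub the stray Latin words from what remains.
df["words"] = df["words"].apply(
    lambda ws: [w for w in ws if not LATIN_WORD.match(w)])
print(df["words"].tolist())  # [['λόγος', 'ἔργον']]
```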
I’m taking my cues from some of the example notebooks posted on the CLTK Github page. I’m going to calculate a few summary metrics, including lexical density, which measures the number of “lexical” words as a percentage of total words in a passage. A “lexical” word is a word that is not extremely common. Extremely common words are called “stop” words (think of something like “and” or “or” in English), and we don’t want to include them in our analysis. Counting stop words doesn’t tell us much about a text, beyond an author’s penchant for conjunctions. So, when we look at a text like the quote from Jefferson, above, we expect the lexical density to be somewhere in the middle, since there are a decent number of stop words, but few words are repeated. According to Analyze My Writing, the lexical density of the quote is 48.24%, which is about what we’d expect. We can take a first look at lexical density by fragment after some cleanup. We’ll then look at the lexical density by author. First, we need to regularize the texts and strip out punctuation and so-called stop words. (A special note on punctuation — Greek uses the semicolon ; to indicate a question, so we won’t include that character in the punctuation list to filter.) A stop word in this case is a common word that doesn’t add much to our understanding of the meaning of the text. In English these are words like “the,” “is,” “and,” and so on. The CLTK comes with a built-in list of stop words for both Greek and Latin, so I’ll use that functionality to clean up the fragments.
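In its simplest form, lexical density is just the share of non-stop words; here is a minimal English sketch with a toy stop-word list (the CLTK's Greek and Latin lists do this job in the real pipeline):

```python
# A tiny, illustrative stop-word list; real lists are much longer.
STOPS = {"the", "is", "and", "a", "of", "to", "on"}

def lexical_density(text):
    # Share of words that are "lexical", i.e. not common stop words.
    words = text.lower().split()
    lexical = [w for w in words if w not in STOPS]
    return len(lexical) / len(words)

print(lexical_density("the cat is on the mat"))  # 0.3333333333333333
```

Here "cat" and "mat" are the only lexical words out of six, so the density is low; a quote-like sentence with more varied content words lands nearer the middle of the range.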
This code will clean up the list of words we’ve extracted from each fragment: def clean_punct(words): out = [word for word in words if word not in punct] out = [word for word in out if word not in stops] return out final_df['clean_words'] = final_df['words'].apply(clean_punct) final_df['clean_lemmata'] = final_df['lemmata'].apply(clean_punct) ## Note that this function actually cleans both punctuation and stop ## words final_df[['author', 'clean_lemmata']] # prints the following Now a quick check of the length of each fragment we have: And finally we can calculate the fragment-level lexical density for our corpus: def lemmatize(text): return lemmatizer.lemmatize(text) # lemmatizer comes from the CLTK final_df['lemmata'] = final_df['clean_words'].apply(lemmatize) final_df['lexical_density'] = final_df.apply(lambda x: len(set(x['lemmata'])) / len(x['lemmata']), 1) # a Python set keeps only unique elements Let’s remove the fragments whose lexical density is 1 and plot to see what kind of distribution we have. (I’m omitting fragments where density is 1 for two reasons. First, these fragments tend to be much shorter — the average word count for a 100% density fragment is ~10 words, while those with lower densities average ~82 words. Since these are fragments, it’s likely that passages with 100% density are not representative of an author’s voice. Second, shorter fragments will almost by definition have higher lexical densities than longer fragments. Because shorter fragments are made up of fewer words, there are fewer opportunities for an author to repeat herself, and fewer repetitions lead to a “denser” text.) Looks like the distribution is approximately normal, but with lots of left skew. We can check the descriptive statistics for more:
https://medium.com/swlh/what-to-read-next-analyzing-the-digital-fragmenta-historicorum-graecorum-with-python-and-nlp-7618e05433a8
[]
2020-05-24 21:52:49.575000+00:00
['Classics', 'Python', 'Digital Humanities', 'Ancient History', 'NLP']
Oversimplified ML(3): Outputs
Another Data set The third type of output from a model is an altered data set. Sometimes data sets require processing before they can be used for regression or classification. This is especially true of “wide” datasets with many columns. This type of machine learning reduces the number of columns until the data set is more manageable. One method to simplify data is called principal component analysis, or PCA. Line of best fit (pink/black) and orthogonal paths (red) This method fits a line through the data that best represents it. Points are then projected along their orthogonal (perpendicular) paths onto the line of best fit. After this step each point can be represented with two new values. Instead of (x,y) defining a data point, PCA represents each point by its position along the line of best fit and its orthogonal distance from that line. Since the line of best fit comes first, the first number will be more significant than the second. Line of best fit (blue) and the perpendicular plane (gray) In 3D and higher dimensions we largely follow the same process. To start, we choose the line which has the lowest orthogonal squared error. This is the sum of the squared lengths of the orthogonal lines. Next, each point’s first principal component is recorded. This is done by finding the intercept between the line of best fit and the orthogonal line, then measuring the distance along the line of best fit: that distance is the first principal component. A point in the top right would have a large first component while a point closer to (0,0,0) has a smaller first principal component. After the first principal component is collected, all points are projected onto the perpendicular plane. Another line of best fit is calculated and the second principal component is derived. This process is repeated until only one dimension is left. That dimension becomes the final principal component. The data set is now described with the most important variables first and the least important variables last.
We can safely remove the least important variables without losing much of the original information. To explore how efficient our transformation is, we need to compare the new and old data sets. To compare these data sets they must be in the same dimensions. Equation for a line Using the equation for a line, we are able to return each point to the old data’s dimensions. Plug the first component’s value in for t. The other variables come from the line used in PCA. Repeat this process for the first k columns of the PCA data set. Now we have x (the original point) and x_approx (the new point in x’s dimensions). Losing less than 5% of variance will make the above equation true To measure how much information is lost, we look at the variance before and after the transformation. If we have lost too much variance, we should add another column from PCA. If we still have almost all the variance, we can ‘simplify’ the data by removing the rightmost column of PCA. Repeat until an acceptable amount of variance and number of columns is reached.
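The whole pipeline (fit, project, reconstruct, check retained variance) can be sketched with plain NumPy; the toy data set below is invented: six correlated columns generated from two hidden factors.

```python
import numpy as np

rng = np.random.default_rng(0)
# Six columns driven by two hidden factors, plus a little noise.
factors = rng.normal(size=(200, 2))
X = factors @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(200, 6))

# Center the data; the SVD of the centered matrix gives the principal axes.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                 # keep the first two components
scores = Xc @ Vt[:k].T                # each row: (PC1, PC2)
X_approx = scores @ Vt[:k] + X.mean(axis=0)  # back in the original 6 columns

# Fraction of total variance the k-component approximation retains.
retained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(retained > 0.95)  # True: two components recover almost everything
```

Because the data really only has two underlying directions, the two-component reconstruction keeps well over 95% of the variance; on real data you would raise or lower k until the retained fraction crosses your threshold.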
https://medium.com/data-for-associates/oversimplified-ml-3-outputs-d987ccc71737
['Theo De Quillacq']
2020-09-24 03:26:33.531000+00:00
['AI', 'Data Science', 'Statistics', 'Machine Learning', 'Mathematics']
Found & Lost
Lost in thought, I wandered through the meadow
Overwhelmed by guilt and wracked by remorse-
Time and distance blended into nothing
She pulled the trigger, but it was my gun
The sun burst free from its gray entombment
Its rays reflected off some strange object
The piercing light yanked me from my stupor
My feet stopped moving and I stared at it
Compelled, I stutter-stepped towards the object
Embedded in the ground was an egg shell
Glancing around, I first noticed the woods
The thick, tall trees stood just a yard away
My grief had blinded me to their presence
The image of my child’s destroyed face
Began once again to consume my soul
The reflected light pulled me back again
I stooped down to examine the egg shell
My fingers began to excavate it
The thick clay fought to retain its treasure
Grudgingly the Earth began to recede
The object was no egg, it was too big
Excited, I clawed around the puzzle
Once the edges were discovered, I knew
I was working to free a human skull
I slammed my fists on the ground and howled
My heart shattered over the thought of her
The gun had been purchased for my escape
She had found it after I lost my nerve
I resolved to go back and finish it
Movement flickered on the edge of the woods
A strange gossamer shape danced like a kite
My fingers brushed the cool, egg-shell white skull
That instant, the shape brightened and sharpened
I removed my hand and the shape faded
Alarmed, I again touched the buried skull
The shape flared and danced — and I understood
I dug my fingers into the red clay
Inch by inch I began to free her skull
Every few minutes I glanced at the woods
Each time her form was more defined and bright
Once I uncovered the skull’s eyes I looked
My daughter smiled at me from the woods
“Can we talk to each other now?” I asked
She shook her head and pointed to the skull
I nodded and returned to the battle
Clay clung to my hands and covered my clothes
The skull remained spotless, unsoiled
What magic connected spirit and bone?
Flashes of my dead daughter’s body came
I chased them with glances of her spirit
In the woods she was perfect and intact
Unlike the corpse oozing gore I had left
As the work progressed, she became present
Less apparition and more corporeal
Was my work some kind of resurrection?
I couldn’t get any answers from her
Her gaze never left me or the white skull
Her response to every question the same,
She would tip her head to her left shoulder,
She would smile and then point to the skull
I longed to hear her sweet voice and hold her
I knew once the skull was free she would change
My knuckles were swollen, my hands gnarled
But nothing could stop me from freeing her
Only the lower jaw remained buried
Twilight was being enveloped by night
I growled and threw myself at the work
The horizon strangled the sun’s last rays
The skull was fully exposed. Exultant,
I palmed the skull, stood up, and looked at her
Her entire figure pulsed with furor
Like a time lapse film, her head and face changed
First her smile vanished, then her features —
Caved in on themselves and a hole appeared
Allowing me to see through her small head
Just like the moment when the gun went off
Fear and disgust stole my voice and my will
I stood like an ancient oak in the clay
She raised a finger and pointed at me
No — not at me — she pointed to the skull
Finding volition, I dropped the white skull
It hung in the air, beyond gravity
She turned her palm over and beckoned it
The skull sailed to her waiting, upturned hand
Eternity inside of a moment
There was nothing. No motion. No feeling.
My dead, deformed daughter began to pulse
Millions of pinpricks of starlight speared her
A fury erupted from everywhere
The sounds and the lights were too much for me
But, I remained cemented to the spot
The Devil’s transfiguration began
The being in front of me gained a face
She grew to the height of a tall woman
This was no spirit. She was real like me
Silence and darkness made the night thicker
The woman smiled as she palmed the skull
Her smile was not my daughter’s smile
She threw her head back in a cruel laugh
With blinding speed she threw the skull at me
It landed at my feet back in the clay
I swallowed hard as the Earth reclaimed it
As the clay advanced my form diminished
The woman cackled and walked from the woods
I was transported to her former spot
The top of the skull could barely be seen
My crumpled body rested next to it
She examined my corpse and turned around
“I have waited at the edge of the woods
For sixty-five years until someone came
Someone who had committed a greater
Atrocity than the one I had done”
Her accent and voice were strange to my ears
I tried to speak to her but had no voice
Searching her eyes for meaning, I pointed
She glanced down at the strange white, egg shell shape
“The skull is a talisman. It’s your key
You will be restored to yourself once one
Who has done worse than you digs up the skull.
Until then you will haunt the woods unheard.”
The woman strolled away across the field
Hikers found my body two days later
I explored the limits of my prison
I try to avoid the edge near the skull
There is no anger or sorrow in me
True divine justice has been carried out
My prayer is I will never be restored
Living is suffering. This is better.
https://medium.com/weirdo-poetry/found-lost-6b48758ef381
['Jason Mcbride']
2020-08-23 18:14:19.140000+00:00
['Poetry', 'Ghosts', 'Grief', 'Fiction', 'Writing']
New Union Operators to Merge Dictionaries in Python 3.9
Different Ways to Merge Dictionaries Before Python 3.9 1. dict.update() d1.update(d2) Update the dictionary d1 with the key/value pairs from d2, overwriting existing keys. Returns None. d1={'a':1,'b':2} d2={'c':3,'b':9999} d1.update(d2) print(d1) #Output:{'a': 1, 'b': 9999, 'c': 3} Limitations d1.update(d2) will modify the original dictionary d1. If the original dictionary need not be modified, create a copy of dictionary d1 and then update it. d1={'a':1,'b':2} d2={'c':3,'b':9999} from copy import copy d3=copy(d1) d3.update(d2) print(d3) #Output:{'a': 1, 'b': 9999, 'c': 3} 2. Dictionary unpacking {**d1,**d2} d3={**d1,**d2} A double asterisk ** denotes dictionary unpacking. It will expand the contents of dictionaries d1 and d2 as a collection of key-value pairs and build the new dictionary d3. Keys that are common to d1 and d2 will contain values from d2. d1={'a':1,'b':2} d2={'c':3,'b':9999} d3={**d1,**d2} print(d3) #Output:{'a': 1, 'b': 9999, 'c': 3} Limitations {**d1, **d2} ignores the types of mappings and always returns a dict. 3. collections.ChainMap ChainMap: A ChainMap groups multiple dictionaries or other mappings together to create a single, updateable view. collections.ChainMap(*maps) The return type is collections.ChainMap. We can convert it to a dict using the dict() constructor. d1={'a':1,'b':2} d2={'c':3,'b':9999} from collections import ChainMap d3=ChainMap(d1,d2) print(d3) #Output:ChainMap({'a': 1, 'b': 2}, {'c': 3, 'b': 9999}) print(dict(d3)) #Output:{'c': 3, 'b': 2, 'a': 1} Limitations It also ignores the types of the mappings, and for common keys the first mapping wins: note that d3['b'] is 2, not 9999. ChainMaps wrap their underlying dicts, so writing to the ChainMap will modify the original dict. In the above-mentioned example, if we modify the ChainMap object, d1 will also be modified. d3=ChainMap(d1,d2) d3['a']=555555 print(d1) #Output:{'a': 555555, 'b': 2} 4. dict(d1,**d2) d3=dict(d1,**d2) d3 will contain key-value pairs from d1 and d2.
Keys that are common to d1 and d2 will contain values from d2. d1={'a':1,'b':2} d2={'c':3,'b':9999} d3=dict(d1,**d2) print(d3) #Output:{'a': 1, 'b': 9999, 'c': 3} Limitations d3=dict(d1,**d2) will work only when d2 is entirely string-keyed. If an int is given as a key in d2, it will raise a TypeError.
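The union operators from the article's title (added in Python 3.9 by PEP 584) cover the common case without these caveats: d1 | d2 builds a new dict, leaving both operands untouched, with the right-hand operand winning on common keys.

```python
d1 = {'a': 1, 'b': 2}
d2 = {'c': 3, 'b': 9999}

d3 = d1 | d2   # new dict; d1 and d2 are unchanged
print(d3)      # {'a': 1, 'b': 9999, 'c': 3}

d1 |= d2       # in-place update, equivalent to d1.update(d2)
print(d1)      # {'a': 1, 'b': 9999, 'c': 3}
```

Note that this requires Python 3.9 or later; on earlier versions the pre-3.9 techniques above are still the way to go.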
https://medium.com/better-programming/new-union-operators-to-merge-dictionaries-in-python-3-9-8c7dbbd1080c
['Indhumathy Chelliah']
2020-10-14 16:19:39.818000+00:00
['Python', 'Python3', 'DevOps', 'Data Science', 'Programming']
3 to read: Bottomless Pinocchio | Good Google? Gone | Congress misses its chance
By Matt Carroll <@MattCData> Dec. 18, 2018: Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. Originally published on 3toread.co The WaPo’s ‘Bottomless Pinocchio’: A new rating for a false claim repeated over and over again: Love this. You have to give the WaPo credit for their aggressive coverage of Trump. For instance, not so long ago, a politician caught lying would be embarrassed enough to stop repeating the lie. Not Trump. So the WaPo has upped its game with this interesting new system for ranking repeat liars. Kudos to Glenn Kessler. What happened to the good Google?: Google’s Dragonfly will intensify surveillance of journalists in China: Many Google watchers and company employees were shocked when they found out the search company was working hand-in-hand with the Chinese gov’t to create a censorship-compliant search engine. When word leaked out, it caused protests within the company. Well, apparently they weren’t enough to derail the project. Money talks, and the Chinese market is too big to give up over the principles of democracy, apparently. Mia Shuang Li for CJR. The missed point of Google’s Congressional hearing: Congress had a chance to dig deep into Google’s business practices and how they can hurt consumers across the country, notes Charlie Warzel of BuzzFeed. So did pols look at how people are tracked? Or how their personal data is sold? No. Instead pols focused on perceived political bias, asking shallow questions. And not surprisingly, the Google CEO gave evasive answers. All in all, a chance to shine a little light on the internal workings of one of the most influential corporations in the world was flubbed.
https://medium.com/3-to-read/3-to-read-bottomless-pinocchio-good-google-gone-congress-misses-its-chance-9cca36302be2
['Matt Carroll']
2018-12-18 14:49:28.818000+00:00
['Media Criticism', 'Journalism', 'Matt Carroll', 'Journalism Criticism', 'Media']
Mindfulness Isn’t Working. Here’s Why.
In the 21st century, mindfulness meditation, a practice that was for millennia reserved for members of particular traditions and lone spiritual wanderers, has been standardized, digitized, and distributed to the masses. That’s just super, isn’t it? Yep. Even more than that. It could be one of the greatest things to happen in recent years, potentially even in the whole course of human history, right? Yep. But… There’s a weird thing that happens when it comes to critically discussing practices like meditation, and in particular mindfulness. Even though they may offer some benefit, it’s as if, because they involve qualities like nonjudgment and acceptance, they’re somehow entitled to a free pass from being looked at critically and rationally. Master, why do we meditate outside bare naked in the middle of winter? Because that’s how things are. Accept it. For Ronald Purser, author of McMindfulness: How Mindfulness Became the New Capitalist Spirituality, this is surprisingly convenient in the times we live in. In a time when asking too many questions might disrupt the status quo, dropping out of the “thinking mind” and into “being mode” seems all too appropriate. In a time when stress and stress-related diseases are rife, but so are pollution, poverty, inequality, financial insecurity, war, and the rest of it, avoiding “over-intellectualizing” and “trying to change things”, and having a narrative that explains how the cause of stress is inside our own heads and all we need to do is get better at “accepting it”, is way too convenient. For millennia, meditation was mostly something practiced by small groups of people living outside of society. Its very purpose was as a tool for questioning who we are and what life is, to wake up our inner wisdom in the face of domineering hierarchical structures, cultural ideals, and any kind of strict dogma or imposed way of living.
It had nothing to do with improving performance or trying to adapt to or thrive within a sick or faulty system. Today, however, more and more of us are introduced to meditation as part of a society, culture, and set of ideas that are already well established. Ahem, capitalism. Far from the radical counterculture practice that emerged some 2000 years ago, mindfulness is seen as nothing more than a list of health benefits that come at the cost of a monthly subscription fee. In fact, Purser argues that mindfulness has become yet another tool in the futile and ultimately capitalist-benefiting game of maintaining, monitoring, and continually working on the project of the self. This raises two questions. The first is the one I opened this article with. Okay, that may make some sense. But still, if mindfulness makes people feel better, then what’s the big deal? To this, Purser has a lot to say. To simplify one of his points, let’s consider his example of recycling. There’s no denying that recycling is good for the environment. But is continuing to recycle actually the best thing for the environment? Looking at it another way, doesn’t it actually support a practice that is harming the environment? In the same way, Purser argues that meditating or practicing mindfulness to reduce stress can help people feel better. But, he asks, is it really the best thing for them? Does it really get to the root of the issue, or does it merely support, and direct attention away from, the overarching issue of a stress-, or plastic-, producing system? The second question it raises is: Well, if it isn’t the right way to go about things, then what is, Mr. Big Shot? To this end, Purser has a lot less to say. Not because he can’t answer it. But because the solution isn’t separate from the understanding of where mindfulness is going wrong.
After all, understanding where things may be going wrong, instead of merely jumping ahead to find the solution or rushing to download the next big meditation app, is pretty much the most mindful thing anyone can do today. Self-acceptance vs Self-improvement Alongside talk therapy, massage, and medicine, mindfulness has taken its place as another therapeutic tool that can be put to work in healing ourselves. Mindfulness can help you destress. Sleep better. Improve your focus. And gain a tonne of other benefits that are good for your health and wellbeing. The problem is that, originally, such benefits were not the ultimate purpose of the practice; they are merely side effects that happen along the road. Modern mindfulness hails such benefits as its main selling points. And in doing so, it can reinforce the very mindset of grasping and striving to change and improve ourselves that the practice was originally designed to illuminate and offer an alternative to. This is also a familiar paradox of meditation, often talked about as the self-acceptance vs self-improvement paradox. In essence, the paradox says that meditation can lead to change and improvement in ourselves, but only through giving up the effort and need to try and change and improve ourselves. When we approach mindfulness with the change-and-improvement mindset, we can end up trying to use the practice to make ourselves feel better, to get rid of or improve a certain mood, and to lose ourselves in relaxing and pleasant states and sensations. What’s more, when we come to the practice expecting such change, there’s already resistance and judgment of our experience. There’s a “bad” or “negative” or unwanted feeling or state that needs to be eradicated or changed. Alternatively, we can take the self-acceptance approach. This isn’t about resignation or giving up. It’s not about taking it easy on yourself, avoiding difficult feelings, and using mindfulness as a way to zone out and relax.
It’s about becoming aware of the unhelpful patterns we can get stuck in: fixing, resisting, judging, analyzing, escaping, and otherwise trying to make our experience into something it’s not. It therefore also requires the recognition that mindfulness, despite the claims, is not about trying to make ourselves feel better or destress. That can happen, but it’s not what the practice is about. Mindfulness is not a therapeutic method. It does not follow a problem-solution model. Its purpose is not to “fix”, “cure”, “heal”, or even “improve” yourself. These ideas are fundamentally contradictory to the principles of “acceptance” and “non-judgment” (which we’ll look at later) because they’re based on the faulty assumption that you are somehow broken or flawed or in need of improvement in the first place. Of course, this doesn’t mean no practice or action is necessary. But it’s practice and action that is not aimed toward a specific goal or purpose outside of being in your immediate experience. The city is our monastery. Erm… no It’s long been possible to meditate in any way and anywhere you want. But traditionally, Buddhist monks would have three things to support them: a teacher, an ethical framework, and a community, which most often came with a quiet place to live and practice. Today, you don’t need any of these things to practice mindfulness. Just download an app, choose between 100s of short courses, stick on a guided meditation, and, with your noise-canceling headphones at the ready, you can begin meditating while riding the metro on your way to work. As you can see, the modern conditions that many of us meditate in are worlds away from the conditions of a monk living inside a monastery. But the point isn’t that you need to live in perfect and serene conditions in the middle of the countryside in order to meditate.
The point is that even when you know what mindfulness is about and your intentions are in the right place, it’s easy for the practice to become a way to simply take a breather and retreat from the chaos that’s going on around you. And it’s no surprise. Despite feeling “normal” or “natural” to many of us, modern digital life is far from neutral. Taking five minutes to practice mindfulness is often the only time we give to ourselves to pause in our day. The trouble is that there’s a whole industry that has long been built to capitalize on the demand for moments of stress-reducing respite. A market in which mindfulness apps now play a leading and growing role. The irony is that, rather than actually making us more connected with ourselves and others and in touch with “the present moment”, mindfulness apps can make us less connected to ourselves, what’s going on around us, and the environment we live in. By taking moments to tune out of our environment and “drop-in” to practice, while listening to soothing guidance that has no direct relation to our current experience, mindfulness apps can’t help but engender a disconnection with our immediate experience and reinforce the barrier between ourselves, our conditions, and our environment. Mindfulness apps provide a temporary moment of calm, a sort of experiential escape hatch, that allow us to find and nourish our happy place — a place that’s safe and separate from what’s out there, the chaos that is the rest of our day and life. Buddhist monks haven’t meditated for millennia away from the distractions of desires, relationships, and commerce solely because it’s a nice thing to do, it feels good, and it helps them focus. They’ve done it out of the knowledge and recognition that what we are is not some isolated creature that’s separated from the conditions around us. The two are one and the same. An analogy that’s often used to demonstrate this is the pot of water. Say we’re stressed out and anxious. 
This would be like a pot of water that has been stirred up by the wind and is agitated, swaying, and making waves. Just as you wouldn’t expect to be able to see a clear reflection of yourself in the surface of the water, when we’re stressed and anxious, by their very nature, we aren’t able to see clearly and consider the best way to escape our stress and anxiety, if we can even see a way at all. Of course, this is exactly what we come to see through practice. But there’s no merit in striving like mad to see your reflection in a pot of water that’s boiling, murky, and full of all sorts of moss and muck, and then beating yourself up because you can’t stay awake or focus clearly for two minutes. The water needs time to settle on its own. Which can be done by moving to a monastery, or more likely, taking a good hard look at the conditions you live in, what’s going on in your life, and making some changes. This is why the most “mindful” thing that one person can do for themselves can look completely different from that of someone else. It’s your personal experience combined with your unique conditions, and no matter how many guided meditations from leading experts you listen to or how big your meditation streak is, it doesn’t necessarily mean you’re paying any attention to what’s happening in your own pot. (Don’t get me wrong, meditation apps and guided practices have their time and place. But they’re not simply a 21st-century alternative or a more convenient equivalent to traditional mindfulness practice. They aren’t neutral). The lack of good judgment about non-judgment This whole article could be dismissed as a case of not practicing enough non-judgment. So goes the generally accepted definition of mindfulness by the founder of MBSR, Jon Kabat-Zinn: “the awareness that arises from paying attention, on purpose, in the present moment and non-judgmentally”.
When it comes to being “non-judgmental”, it has nothing to do with avoiding critical discourse on the grounds that opinions are biased, you might piss a few people off, and, in the grand scheme, everything is already perfect anyway. This could be seen as a form of “idiot compassion”, a term used by Tibetan Buddhist teacher Chogyam Trungpa Rinpoche. Idiot compassion is a way of appearing nice and accepting without having to rock the boat, and therefore, not having to feel any discomfort or face any consequences for ourselves. As one of his students, the Buddhist nun and author Pema Chodron, explains: “[Idiot compassion] refers to something we all do a lot of and call it compassion. In some ways, it’s what’s called enabling. It’s the general tendency to give people what they want because you can’t bear to see them suffering” So often, being “non-judgmental” is used to meet our own ends. We want to feel better, and we don’t want to be seen as judgmental, so we try not to be too critical or harsh on other people and ourselves by practicing “non-judgment”. But judgment is definitely not something we want to get rid of from our lives. And the reality is, we couldn’t, no matter how hard we tried. When it comes to practicing mindfulness, judgment is often thought of as a negative behavior that has to do with seeing people as better or worse than us, and rating things as good or bad. And, therefore, it’s an unmindful behavior we should stop doing immediately. In this way, we may strive to become the ideal mindful person who doesn't discriminate against anyone or anything. Everyone is included, no experience is avoided. So, if little Johnny wants to scream for an hour in the middle of the supermarket, then maybe that’s just what he needs. If for a few weeks or months I feel like drowning myself in a few tubs of ice cream at the end of each day, then who am I to judge?
Instead of actually practicing non-judgment, we lose our critical faculties and take the role of the ultimate overriding high court mindfulness judge. Non-judgment is an unfortunate and unhelpful misnomer of modern mindfulness. As you can kind of get a hint of by the “non” part, non-judgment is not about taking action or actually adopting some kind of new behavior or quality. Non-judgment is just another way of describing what we’re practicing when we have the intention to let all experience come and go, instead of trying to make it fit the way we think it should be or would like it to be. It has absolutely zero to do with describing a way of taking action in the world (much like the word “mindful”). It’s to do with how we relate to our own experience. But as you can imagine, the two are inseparable. And so by looking at how we approach judgment in practice, we can also get a glimpse of how it might show up in other aspects of our lives. Let’s compare two ways of approaching non-judgment in meditation: You sit down to meditate. Ten seconds in and you’re feeling tension in your shoulders. It hurts, but it isn’t unbearable. So you try and be with it without judging it. But after a while, it’s not getting any better. So you try breathing deeper. No change. You try observing it more patiently. No change. You even try pretending it doesn’t really hurt and being kinder and gentler to it. No change. Round two. You sit down to meditate. Ten seconds in and you’re feeling tension in your shoulders. You try and be with it without judging it. You notice it's quite painful, and that you’re tensing up around it. Other parts of your body are also tense because it’s pissing you off. You notice the pain is changing, sometimes getting stronger, sometimes weaker. You notice your posture is slumping and adjust. You notice how the sensation is still there, and the thoughts about how you wish it would go away.
After a while, the sensations are still there, but they’re not as much of a problem as when you started. As you can see, the fundamental principle behind not judging has nothing to do with denying the fact that some things in life just suck. In this case, you’re not trying to change the fact that there’s some unpleasant sensation in your shoulders. Rather, you’re coming to see if there’s another way you can be with the discomfort, as opposed to fighting it and wishing it would go away, and if not, seeing what else you can do about it. Pain exists. Some things in life are difficult. The point of mindfulness isn’t to become a super holy mindful person who doesn’t think critically and embraces every experience while paying extreme attention to it. That’s living in a fantasy world built on the “non-judgmental” idea that it’s possible to live without discomfort and completely avoid difficult and unwanted experiences. The point of mindfulness is to see that we don’t need to grasp or push away feelings based on knee-jerk reactions of “likes” and “dislikes”. Non-judgment is non-judgment when you and your needs and your wants and your likes and dislikes are not at the center of everything. This doesn’t mean you lack critical discussion, reflection, or any kind of discerning thought. It simply means being open to the idea that what you consider to be your “self” may actually be a lot bigger than you thought.
https://medium.com/age-of-awareness/mindfulness-isnt-working-here-s-why-e7f693a5fea4
['Joe Hunt']
2020-12-21 09:41:16.577000+00:00
['Self', 'Life Lessons', 'Mindfulness', 'Mental Health', 'Self Improvement']
Tony Robbins’ 5 Most Inspirational Pieces of Advice
Tony Robbins’ 5 Most Inspirational Pieces of Advice Skill Development Expert Profile — Tony Robbins Randy Stewart, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons Tony Robbins is a coach, author, and motivational speaker who has inspired millions of people to take action and fulfil their dreams. He identifies the roots of what’s holding someone back in order to make fast, transformational changes. He has coached celebrities such as Nelson Mandela, Oprah Winfrey, Serena Williams, Leonardo DiCaprio, and Bill Clinton, helping them reach world-class achievements in their respective fields. His style is brutally direct and honest. Most people struggle to face their biggest fears and challenges and therefore never do anything about them. By forcing people to reveal what’s holding them back, Robbins leads people to achieve breakthroughs that they previously believed were impossible. This piece will explore some of Robbins’ most inspirational quotes and pieces of advice. 1. Get rid of your limiting beliefs ‘The only thing that’s keeping you from getting what you want is the story you keep telling yourself.’ Robbins claims that the objective of his seminars is to break down his participants’ mental patterns so they can emerge as ultra-confident beings with the power to achieve their greatest dreams. It’s in our nature to only invest energy into those activities we believe will produce the outcomes we seek. Therefore, when we don’t think something will work out, even unconsciously, we sabotage our potential by taking half-hearted action. Little action leads to lousy results, which again results in uncertainty and disheartened beliefs. It’s a vicious cycle. A belief is a feeling of certainty about what something means, and most of them were unconsciously created based on painful or pleasurable experiences in our past. If we had a negative experience around failure or rejection in the past, we might do anything to avoid it happening again.
But the past does not dictate the present, and it’s possible to change your beliefs. But only after you become aware of them and face them. 2. Find a compelling vision ‘People are not lazy. They simply have impotent goals — that is, goals that do not inspire them.’ If you want to feel energised to take massive action, it’s essential to find a vision that’s so compelling that you just can’t stop yourself from working on it. Something that excites you so much that you can hardly sleep at night and gets you up the next morning. When you create a strong vision, it helps you do things that other people aren’t willing to do. By aiming at something so attractive and exciting, there will no longer be a question of whether or not you will get there. Instead, it’s about how soon you can get started. You no longer care what you have to do to get there because you’re willing to do whatever it takes. 3. Raise your standards ‘I challenge you to make your life a masterpiece. I challenge you to join the ranks of those people who live what they teach, who walk their talk.’ People who work out every day don’t have any more hours in the day than you do. Their lives are not busier. They have simply identified exercise as something they must do, so they find the time. Instead of making a should-list, these people make a must-list. If you want to achieve more in life, you have to expand who you are and find time for the activities that matter. If you want to achieve more, you have to step up and raise your standards. By changing your inner game and identifying as someone who performs at a high level, you find a way to do whatever is necessary to become successful. You can only improve by changing. If you keep doing what you’ve always done, you will stay the same. ‘It’s not what we do once in a while that shapes our lives, but what we do consistently.’ 4.
Take responsibility ‘Identify your problems, but give your power and energy to solutions.’ It’s easy to find out what you should do to become successful, but much harder to do it. But the only way to achieve something great is by starting to take action. And the more action you take, the more likely you are to achieve your goal. Whatever happens, take responsibility. Whether you experience success or failure, learn from it and use it as a lesson to do better next time. If you give an excuse as soon as you face a challenge, you will often give up and think that this is not for you. Don’t be that person. The key personality traits all successful people have in common are persistence, drive, dedication and passion. ‘You see, in life, lots of people know what to do, but few actually do what they know. Knowing is not enough! You must take action.’ 5. Life is about progress ‘Change is inevitable. Progress is optional.’ Getting things is not going to make you happy. The things might excite you for a moment, but unless you keep growing, they’re not going to keep you happy. The key to a fulfilling life is through progress. By achieving what we aim for, we feel alive and energised. This involves challenges such as learning new skills, performing at a higher level, helping others to achieve more. The better you can help people succeed, the more fulfilled you will feel. The best way to achieve progress is to always give your best, and always look for ways to do things better. Whenever you feel like you’ve given everything you’ve got, you will feel no regrets and be satisfied with what you’ve achieved. Take home message Achieve breakthroughs by getting rid of your limiting beliefs. Create a compelling vision to be motivated to work hard every day. To change for the better, you need to raise your standards. Expect more from yourself. Take responsibility for everything that happens, both your successes and failures. 
To live a fulfilling life, look for progress and improvement as often as possible. Thanks for reading! :) If you liked this article, you may also enjoy:
https://medium.com/skilluped/tony-robbins-5-most-inspirational-pieces-of-advice-eaeb118f8bde
['Erik Hamre']
2020-11-27 02:41:50.594000+00:00
['Motivation', 'Self Improvement', 'Expert Profile', 'Life Lessons', 'Inspiration']
It’s Not What Happens To You, But How You Respond To It
Direct The Sail “The will to win, the desire to succeed, the urge to reach your full potential… these are the keys that will unlock the door to personal excellence.” — Confucius If you’ve ever been on a sailboat, you’ll understand the importance of the wind. Without wind the boat will not move; it will drift about with the current. The wind is essential for a great experience. Sailors understand they cannot control the direction of the wind. If it is blowing right to left, you cannot make it blow the other way, but what you can do is direct the sail. Jim Rohn, an American author and motivational speaker, said: “The same wind blows on us all; the winds of disaster, opportunity and change. The same wind blows on us all. Therefore, it is not the blowing of the wind, but the setting of the sails that will determine our direction in life.” It is not about the wind, because the same wind blows on us all. It is not about the problems that come or the obstacles we face because we all face problems, albeit to varying degrees. Peace, happiness, and success are about our attitude and how we respond to life. Winds will come because we are powerless over many things in life, but if we respond with passion, faith, and perseverance, we will be okay. Photo by Katrina Wright on Unsplash Learn How To Set The Sail “We cannot direct the wind, but we can adjust the sails.” — Dolly Parton Let’s say you are on a sailboat with one person who knows how to work the sail. Imagine they go out cold and the sailboat is headed for a rock. What would you do? Figure out how to change the sail? Experience anger or a panic attack? Give up and jump out of the boat into the water? You might wish you knew how to work the sail. In the same way, sometimes you will be headed towards a rock: a problem, a frustration, etc. What will you do? Panic? Give up? Why not learn how to set the sail? Learn how you will react when directed towards a giant rock that could cause imminent pain and suffering.
Assuredly these times will come. How will you react? What will your attitude be? Take a few minutes and write what you will do when such times occur: When frustrations arise. When your boss says you are not doing well enough. When your partner tells you he or she does not think the relationship is going well. When a car cuts you off in traffic. When there are more bills to pay at the end of the month than there is money. What will your attitude be? Decide now. Decide you will adopt an attitude of courage, faith, and optimism. Say it out loud if you must. It is a choice. Your attitude is your choice and you have power over it. Just like a sailor has power over the sail, your power is your attitude. Set it to work with the wind. Photo by Denis Oliveira on Unsplash Whatever It Takes “When you catch a glimpse of your potential, that’s when passion is born.” — Zig Ziglar A marathon runner trains intensely to run a marathon. They discipline their body and mind well before the race because they understand the importance of training. To run a marathon without training is a recipe for disaster. In the same way, to go through life unprepared for the winds of change, problems, trials, etc. is a recipe for disaster. We can train ourselves to prepare for such times by adopting a “whatever it takes” approach. We can feed ourselves with empowering material which nourishes our inner landscape and keep success principles before us. We can spend time with those with a similar outlook and learn the principles that will help us soar with the wind in our personal and professional life. We ought to learn them now because when tough times come, it is difficult to focus and learn new things. You have incredible potential in every area of your life. Celebrate your successes and delight in possibilities. You have come so far already. Greet each morning with gratitude and expect great things to come your way.
Your attitude can make or break you, so declare it will be one of gratitude, faith, trust, and optimism. You are infinite potential. Call To Action Do you want to lead a remarkable life? Are you committed to taking action despite your fears and doubts? Have you had enough of not achieving the success you seek? If so, download your FREE copy of my eBook NAVIGATE LIFE right now, and start your amazing journey of greatness today!
https://medium.com/the-mission/its-not-what-happens-to-you-but-how-you-respond-to-it-a9ebe9b1b441
['Tony Fahkry']
2020-10-12 15:21:36.745000+00:00
['Personal Growth', 'Life Lessons', 'Self Improvement', 'Inspiration', 'Motivation']
Absolute and relative
Absolute and relative To understand the world, we need to be able and willing to adopt both an absolute and a relative perspective Did you see that video, a few weeks ago? It was widely shared on social media, and has been viewed nearly 1.5 million times (with other copies many more times) since it was posted. This was not entirely surprising: the surreal sight of an airline passenger repeatedly punching the seat in front of him does indeed have great viral potential. It concerned, of course, a case of air rage, a conflict over the position of the seat back. The puncher, seated in the last row of the plane (where the seats do not recline), was seemingly most unhappy with the fact that the lady in front of him had reclined her seat. The internet promptly split into those who took her side (“I paid for a seat with a button, so I can bloody well press it”), and those who took his side (“I paid for this seat and I am entitled to a modicum of comfort for my knees”). Despite the tribal division, one thing appeared to unite both camps: the greedy airlines are to blame for this kind of conflict. Is it greed? That the airlines have a hand in this is beyond doubt. As Men’s Health reported following an earlier ‘reclining seat wars’ incident, “The average seat pitch (which is the legroom between seats) was 35 inches [89 cm] during the 1970s. But today, it’s just 31 inches. Somewhere over the last four decades, we lost four inches.” This was in 2015, and in the meantime, more shrinkage has taken place (according to SeatGuru, the pitch on the flight in question is 30 inches). Given the size of a plane, designers have a considerable degree of freedom regarding how many seats they put in it. Further apart means more space per passenger, closer together evidently means less space. But that is not the only thing that matters, of course — there is, as so often, a trade-off being made.
Closer together also means more seats, hence more paying passengers, and hence more profit for the airline. But it’s a lot cheaper! (image via twitter) See: it is greed! Or is it? More seats closer together also means that the fixed cost of flying the plane from A to B (the lease of the aircraft, the fuel, the crew, maintenance cost and so on) can be spread over more passengers, and that the cost per passenger can come down. Jeremy Horpedahl, an economist at the University of Central Arkansas, worked out that a round trip from New York City to Los Angeles in 1969 cost $304.50, the equivalent of 92 hours’ work at the average hourly wage at the time of $3.31. Nowadays, the same flight costs $500–600, corresponding to 21–25 hours’ work at the average hourly wage of $23.87. Now it would be ridiculous to argue that this reduction in the cost of air travel by 75% in real terms is entirely the result of squeezing passengers more closely together. But every little helps, in an industry that is not renowned for its huge profitability. According to the International Air Transport Association, net profit per passenger across the industry in 2019 was around $5.70, down from $6.22 in 2018. So, the accusation of greed seems somewhat overblown. Horpedahl observes, by the way, that the “golden age of flight”, with 35 inches of seat separation and all the other trimmings, is still available for $2,000 in business class — which, remarkably, is equivalent to approximately the 92 hours of work you needed to make the trip in economy class in 1969 (it’s even a bit less). But unlike our (grand)parents back then, we now have a choice: pay what they paid for a similar, superior experience, or travel in a more modest and less spacious way, saving three-quarters of the cost. The space available has shrunk over half a century in absolute terms, but so has the cost of flying. Does that make us worse off overall in relative terms? That is not so obvious.
Tricky inflation Inflation is one of the economics concepts that is widely known outside the professional sphere. We ordinary citizens experience it as rising prices: stuff gets more expensive over time. For example, the list price of a Toyota Corolla in 1980 was $4,348. The latest 2020 model costs $19,600. The difference corresponds to an annual price inflation of about 3.8%. But wait (the economists say), we’re not comparing like with like. A 2020 Toyota Corolla is equipped with more gadgets than even a 1980 Rolls-Royce contained — a backup camera that shows you how to park, a USB port, pedestrian detection, lane departure alert, eight airbags — I could go on. More importantly, not just these, but many more features were absent in its 40-year old sibling, from air conditioning to LED lights and remote locking, and of course a much superior engine, transmission, brakes and so on. If both models stood side by side, with zero miles on the clock, at the same price of $19,600, hardly anyone would opt for the 1980 version. So, the economists say, to calculate the true inflation, we need to take into account the fact that the new car is so much better than the old one. So let’s say that a hypothetical 2020 model, equipped like the 1980 model, would be worth $11,400 in today’s money. That brings down the price inflation rate to 2.4% per annum. In 1980, someone earning the average hourly wage of $4.82 would have needed to work just over 900 hours to buy themselves a brand new Toyota Corolla. If their income had increased in line with the car price inflation (taking into account the improvements to the car) of 2.4%, their hourly wage would now be $12.64. The price inflation of the car and the wage inflation were the same over the last 40 years, so what could be fairer? Which is cheaper? That depends (images: RLGNZLZ CCBY and Toyota) But there is a slight problem.
Look at the number of hours someone at the average wage will now need to work to buy a new Toyota Corolla — it’s about 1,550 hours, or about 17 weeks longer than in 1980. Yes, the car is massively better than the one available 40 years ago, but it has become a lot less affordable. If there were a car available at the 1980 specification, it would be more affordable — but then again, if my grandma had wheels, she would be a wagon. (What this simplified example suggests also applies more generally: adjustments are made for quality improvements in manufactured goods, and these percolate into the consumer price index.) So we observe an intriguing phenomenon: stuff simultaneously gets ‘cheaper’ (in the sense that you get relatively more for the same equivalent amount of money), and ‘more expensive’ (in the sense that it has become less affordable in absolute terms). Are we better off, or worse off compared to 40 years ago? That is not so obvious. Like the duck-rabbit or the old woman/young woman illusion, what we see depends on what we look at. It is true that the amount of space in airline seats has shrunk, but it is not the whole truth. It is true that you get much more car for your money in 2020 than in 1980, but that too is not the whole truth. With such ambiguous situations, there is often an absolute perspective and a relative one. To fully understand the reality of what is going on, we had better consider them both — and beware of strong claims based on just one perspective.
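The article’s back-of-the-envelope figures can be checked with a few lines of code. The sketch below recomputes the implied annual inflation rates and the hours-of-work affordability comparison, using only the prices and wages quoted above; the function name is my own, not from the article.

```python
def annual_rate(old_price, new_price, years):
    """Implied constant annual growth rate between two prices."""
    return (new_price / old_price) ** (1 / years) - 1

# Toyota Corolla list price: $4,348 in 1980 vs $19,600 in 2020
print(f"{annual_rate(4348, 19600, 40):.1%}")  # headline inflation, ~3.8%
# Quality-adjusted: a 1980-spec car would be worth ~$11,400 today
print(f"{annual_rate(4348, 11400, 40):.1%}")  # adjusted inflation, ~2.4%

# Affordability in hours of work at the average hourly wage
print(round(4348 / 4.82))    # 1980: just over 900 hours
print(round(19600 / 12.64))  # 2020: about 1,550 hours
```

Both perspectives fall out of the same numbers: the quality-adjusted inflation rate is lower, yet the car now costs roughly 650 more hours of work.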
https://koenfucius.medium.com/absolute-and-relative-d7c90362ef08
['Koen Smets']
2020-02-28 09:00:03.027000+00:00
['Psychology', 'Behavioral Economics']
Cracking the Google Coding Interview
Part 3. Coding Interview Question Guide Practicing for coding questions takes a lot of time, effort, and focus. Let’s break down the top Google coding questions, as well as actionable advice to prepare. Top 15 Google coding interview questions 1. Find the Kth largest element in a number stream. Problem statement: Design a class to efficiently find the Kth largest element in a stream of numbers. The class should support the following two operations: The constructor of the class should accept an integer array containing initial numbers from the stream and an integer K. The class should expose a function add(int num) which will store the given number and return the Kth largest number. 2. Find K closest numbers. Problem statement: Given a sorted number array and two integers K and X, find K closest numbers to X in the array. Return the numbers in the sorted order. X is not necessarily present in the array. 3. Delete node with given key. Problem statement: You are given the head of a linked list and a key. You have to delete the node that contains this given key. 4. Copy linked list with arbitrary pointer. Problem statement: You are given a linked list where the node has two pointers. The first is the regular next pointer. The second pointer is called arbitrary_pointer, and it can point to any node in the linked list. Your job is to write code to make a deep copy of the given linked list. Here, deep copy means that any operations on the original list (inserting, modifying, and removing) should not affect the copied list. 5. Mirror binary trees. Problem statement: Given the root node of a binary tree, swap the left and right children for each node. 6. Find all paths for a sum. Problem statement: Given a binary tree and a number S, find all paths from root-to-leaf such that the sum of all the node values of each path equals S. 7. Find the length of the longest substring with no more than K distinct characters.
Problem statement: Given a string, find the length of the longest substring in it with no more than K distinct characters. 8. Rearrange a string so that no two same characters are adjacent. Problem statement: Given a string, find if its letters can be rearranged in such a way that no two same characters come next to each other. 9. Find equal-sum subset partition. Problem statement: Given a set of positive numbers, find if we can partition it into two subsets such that the sum of elements in both subsets is equal. 10. Determine if the number is valid. Problem statement: Given an input string, determine if it makes a valid number or not. For simplicity, assume that white spaces are not present in the input. 11. Print balanced brace combinations. Problem statement: Print all braces combinations for a given value N so that they are balanced. 12. Given a number of tasks, determine if they can all be scheduled. Problem statement: There are N tasks, labeled from 0 to N-1. Each task can have some prerequisite tasks that need to be completed before it can be scheduled. Given the number of tasks and a list of prerequisite pairs, find out if it is possible to schedule all the tasks. 13. Implement an LRU cache. Problem statement: Least Recently Used (LRU) is a common caching strategy. It defines the policy to evict elements from the cache to make room for new elements when the cache is full, meaning it discards the least recently used items first. Implement a cache that follows this eviction policy. 14. Find the high and low index. Problem statement: Given a sorted array of integers, return the low and high index of the given key. Return -1 if not found. The array length can be in the millions, with many duplicates. 15. Merge overlapping intervals. Problem statement: You are given an array (list) of interval pairs as input where each interval has a start and end timestamp. The input array is sorted by starting timestamps. You are required to merge overlapping intervals and return the output array (list).
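To give a flavor of what an answer looks like, here is a minimal sketch for question 15 (merge overlapping intervals). It assumes, as the problem statement says, that the input is already sorted by starting timestamp; the function name and interval representation are illustrative choices, not something Google prescribes.

```python
def merge_intervals(intervals):
    # intervals: list of [start, end] pairs, sorted by start timestamp
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            # overlaps the last merged interval: extend its end
            merged[-1][1] = max(merged[-1][1], end)
        else:
            # no overlap: start a new merged interval
            merged.append([start, end])
    return merged

print(merge_intervals([[1, 4], [2, 5], [7, 9]]))  # [[1, 5], [7, 9]]
```

Because the input is pre-sorted, a single pass suffices, giving O(n) time on top of the given ordering.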
12-week preparation roadmap Preparing for a Google coding interview is strategic. It requires months of prep and practice to master the right concepts and develop confidence. Let’s look at the definitive twelve-week prep plan proven to help candidates land jobs at big companies. Week 0. Choose a programming language based on Google’s expectations and your preferences. Week 1. Review the basics of your programming language. If you brush up on the basics, you’re less likely to stumble during your interview. Review concepts like how to read input from the console, and how to declare and use 2D arrays. Weeks 2 & 3. Familiarize yourself with data structures and algorithms. These are essential to coding interviews with Google. Data structures you should know: arrays, linked lists, stacks, queues, trees, graphs, heaps, hash sets, and hash maps/tables. Algorithms you should know: breadth-first search, depth-first search, binary search, quicksort, mergesort, A*, dynamic programming, and divide and conquer. Weeks 4 & 5. Practice data structure and algorithmic challenges with sites like Educative or Leetcode. Start practicing simple coding problems. This will make it easier down the line to tackle harder questions. Weeks 6–8. Practice complex coding problems, and start timing yourself. It’s important to consider runtime and memory complexity for each solution. Weeks 9 & 10. Study system design interview questions. These are now an integral part of the interview process and impact your hiring level. Week 11. Study OS and concurrency concepts. These questions are used to gauge your hiring level. Brush up on multithreading fundamentals to stand out for higher levels in Google’s ladder. Week 12. Study object-oriented programming and design questions. These questions gauge your critical thinking and project-based problem-solving skills. Tips for practicing coding challenges There is no shortcut or magic wand for practicing coding challenges. Here are some basic tips to guide you through the preparation stage.
Keep time in mind. The coding interview will be timed, so it’s important to prepare with that in mind. If you are used to preparing under a time constraint, it will be far less stressful during the actual interview. Know your weak spots. As you prepare, take note of your weak spots. Everyone has them. Google has stated that they care about your thought process, so if you come up against a weak spot, talk through it. This will demonstrate your eagerness to improve. Know the common pitfalls. There are three big pitfalls when it comes to a Google interview: not knowing the Big-O complexity of an algorithm, having no knowledge of Google’s expectations, and not articulating your problem-solving process. Keep these pitfalls in mind as you work. Articulate your process. Google wants to hear about your thought process. As you practice, get used to explaining why and what you are doing. Those with a clear sense of how they work stand out.
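As a worked example of one of the questions above, here is one common way to sketch question 13, the LRU cache, in Python. It leans on collections.OrderedDict for recency ordering; the class and method names follow the usual interview convention, which Google does not prescribe.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # keys ordered from least to most recently used

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used item
```

For example, with capacity 2: after put(1, 1), put(2, 2), get(1), a put(3, 3) evicts key 2, so get(2) returns -1. Both operations run in O(1) time, which is the property interviewers are usually probing for.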
https://medium.com/better-programming/cracking-the-google-coding-interview-3b8dd29b0d6a
['The Educative Team']
2020-05-16 16:33:42.374000+00:00
['Interview', 'Interview Tips', 'Programming', 'Coding Interview', 'Startup']
Strengthen Security with Application and Threat Intelligence
Discover how application and threat intelligence work together to help security operations (SecOps) and development teams improve resiliency and minimize risk. Discover the types of information these teams gather, how it’s applied to different tools, and how it fortifies your security posture. Attackers are tenacious, opportunistic, and always looking to exploit vulnerable misconfigurations. In fact, Verizon’s most recent Data Breach Investigations Report (DBIR) shows misconfigurations as the fastest-growing cause of breaches over the last 5 years. You need to stay a step ahead, but it isn’t easy. That’s why many SecOps teams are using application and threat intelligence to fortify their defences. Now, you’re probably familiar with threat intelligence — which tracks attacker profiles, methods, and vectors. But did you know that application intelligence is just as important, if not more so? Staying up to date on known misconfigurations within your own applications is critically important. But, it’s hard to keep up — and it only takes one missed patch to leave your network vulnerable. Read on to see how application and threat intelligence help you work smarter, not harder. You’ll learn what types of information these teams gather, how it’s applied to different tools, and why it comes standard with all of Keysight’s security tools, including Threat Simulator. Table of contents What You Don’t Know Can Hurt You Keysight ATI: A Real-World Example of Application and Threat Intelligence Minimize Risk and Fortify Your Defenses ATI’s Global Impact Smarter Security Makes Applications Stronger — and More Resilient Network security is essential, but it is not easy. Attackers are tenacious, opportunistic, and always looking for vulnerable misconfigurations to exploit. Staying a step ahead is vitally important, but it’s easy to fall behind.
Many security providers offer “threat intelligence” to overworked security teams — tracking attacker profiles, methods, and vectors. But threat intelligence is only half of the equation. To stay secure, organizations also need to monitor known vulnerabilities and misconfigurations within their applications. This “application intelligence” is just as critical as threat intelligence, yet often overlooked. In this article, you will discover how application and threat intelligence work together to help security operations (SecOps) and development teams improve resiliency and minimize risk. Using Keysight’s Application and Threat Intelligence (ATI) Research Center as a real-world example, you will learn what types of information these teams gather, how it’s applied to different tools, and how it fortifies your security posture. Security teams work tirelessly to protect their networks, but that task is not getting any easier. Attack surfaces are growing, and a single misconfiguration can be the difference between a safe network and a compromised one. When the margins are this thin, application and threat intelligence can make all the difference. What You Don’t Know Can Hurt You Despite rapid technological advancement, modern applications are not getting any simpler. Enterprises count on SecOps and development teams to understand the latest application and threat vulnerabilities, but that is asking a lot. Operating systems, software development environments, and new attack methods all require constant attention — and multiple teams often find themselves continuously scouring message boards for new threats. However, vulnerabilities come from many places. For instance, one operating system kernel or driver update can have ripple effects on related software elements. A single unpatched security vulnerability can create a pathway directly into your application database. Did that update create the possibility of a buffer overflow that hackers can exploit? 
Did it open the door to your customer data? These are just a couple of the questions SecOps and development teams ask themselves every day. You need to validate your entire security ecosystem to prevent attackers from capitalizing on your system’s weaknesses. But there is only so much time — and budget — to go around. Development teams are under pressure to fix bugs and meet delivery schedules, while SecOps is working to secure an ever-expanding attack surface. Something has to give, and many teams struggle to keep up. Ignoring risk is a surefire way to open your network up to attack. That’s why so many organizations turn to professionally curated threat intelligence feeds to take control of their security posture. With crucial insights like attack signatures, malicious IP addresses, and emerging threats, SecOps teams can stay a step ahead of cybercriminals. However, threat intelligence is not enough on its own. Staying ahead of the latest attacks is a good start, but SecOps and development teams need to be aware of risks within their applications as well. “Application intelligence” like this isn’t as commonplace as threat intelligence, but it’s no less important. While new exploits and attack vectors get the most attention from the media, nearly half of all breaches stem from human error, system glitches, and misconfigurations. Both application and threat intelligence are paramount to staying secure. Dual intelligence streams can help you efficiently minimize risk, improve resiliency, and protect your most sensitive applications and data. Keysight ATI: A Real-World Example of Application and Threat Intelligence Keysight knows networks. From test to application performance, visibility to cybersecurity, we know the challenges of maintaining a network firsthand. After all, when data is traversing at high speeds, security and performance issues are bound to arise.
That’s why we created the Keysight Application and Threat Intelligence (ATI) Research Center, an elite group of top application and security researchers from around the globe. With knowledge spanning software development, reverse engineering, vulnerability assessment and remediation, malware investigation, and intelligence gathering, their collective expertise helps Keysight deliver industry-leading insight to security teams around the world. Drawing on decades of leadership in network validation and test solutions, the Keysight ATI team synthesizes data gathered from our network security products with known application behaviours in multiple network environments (such as enterprises, service providers, and network equipment manufacturers). This combination gives the ATI Research Center a robust understanding of how hackers exploit vulnerabilities before a product launch and after its release on a live network. The ATI Research Center operates a distributed, worldwide network of honeypots and web crawlers to identify malware, attack vectors, and application exposures continuously. The team regularly identifies and discloses zero-day vulnerabilities. All findings are correlated against real-world events, validated against reported results, and then pushed to clients via continuously-updated feeds. ATI feeds deliver actionable insight on critical application vulnerabilities — as well as threats across networks, endpoints, mobile devices, virtual systems, web, and email. These continuously updated feeds give SecOps teams access to a wide range of intelligence, including: open-source data sets billions of IPs and URLs millions of spam records millions of malware attacks millions of network intrusions Minimize Risk and Fortify Your Defenses Combining deep knowledge in cybersecurity threats and application protocol behaviour, the ATI team looks at threats and vulnerabilities in the same way as a cybercriminal, from every possible direction. 
This unique combination of intelligence enables security teams to assess their risk and vulnerability holistically — making it easy to take proactive, informed actions to shore up their defences. Security is never static, however, and the threat landscape is always changing. That’s why ATI partners with leading developers to monitor every layer of the Open Systems Interconnection (OSI) stack and actively research threats around the globe. Furthermore, daily malware updates deliver actionable insight in near real-time — helping the most agile security systems stand out in a crowded marketplace. Empowering security teams is a crucial function of ATI’s mission, but it is only a fraction of the work they do. At Keysight, we also rely on ATI’s collective output to bolster our test, visibility, and security solutions — enabling us to: create realistic application attack simulations that emulate the entire kill chain block malicious inbound traffic and outbound communications collect ongoing intelligence on new threats identify unknown applications pinpoint traffic via geolocation These capabilities go far beyond signature recognition. With Keysight’s ATI feed, you can proactively defend against attack patterns, reduce your attack surface, and pinpoint product vulnerabilities — before and after release. By combining specialized security knowledge with decades of industry leadership in network test, protocols, and security, it’s never been easier to verify, validate, and fortify your defences. 
ATI’s Global Impact In addition to application and threat intelligence feeds, Keysight’s ATI Research Center supports a wide range of our products, including: Threat Simulator: Breach and Attack Simulation platform ThreatARMOR: Threat Intelligence Gateway BreakingPoint: application and network security testing Vision Series network packet brokers (equipped with AppStack visibility intelligence) IxLoad: L4–7 performance testing IxChariot: pre- and post-deployment network validation IxNetwork: L2–3 performance testing However, a mere list of products does not tell the whole story. Top-ranked security vendors depend on the outputs of ATI research to ensure their products and applications work as intended. Moreover, enterprise SecOps teams count on ATI’s continuous updates to ensure Threat Simulator can emulate the latest attacks, and ThreatARMOR can block them. Additionally, almost every major network equipment manufacturer (NEM) and service provider relies on ATI data when they validate their hardware and systems with Keysight’s network test products. The unique combination of application and threat intelligence enables a wide variety of advanced validation techniques, such as: Emulating communication protocol programming methods, common practices to introduce weaknesses, and wide-ranging traffic types. Deconstructing application protocols and packaging them for use in real-world user simulation testing. Executing application fuzzing, probing for specific weaknesses, and pinpointing undetected zero-day vulnerabilities in security tools. Smarter Security Makes Applications Stronger — and More Resilient Threat intelligence providers and security vendors typically focus on the symptom, not the root cause. They address how to identify and block high-level threats — but often ignore the vulnerabilities attackers exploit to gain access to your network in the first place. More robust applications mean better performance — and more resilient security.
That’s why you need to know if the applications you are using are stable and secure. But knowing the latest attacker exploits, identities, and methods requires a depth of threat intelligence that only comes from years of experience and millions of working hours. The same is true for your applications. Understanding a churning sea of vulnerabilities — and the myriad ways they can affect downstream tools — also requires considerable time, effort, and investment. SecOps teams need every edge they can get. It’s time to help them work smarter, not just harder.
https://medium.com/predict/strengthen-security-with-application-and-threat-intelligence-91db32dbdbcd
['Alex Lim']
2020-12-04 02:26:54.293000+00:00
['Artificial Intelligence', 'Security', 'Technology', 'Cybercrime', 'Cybersecurity']
Trading Places: When Did My Kids Start Worrying About Me?
“How are you feeling? Are you sleeping? Do you still have your sense of taste and smell?” Our children try sounding casual. They know their dad and I are social distancing. Nevertheless, I feel their worry. I am 62 years old, healthy, and fit. I walk, bike, and swim. My doctor says I have the heart of a 20-year-old. Still, I’m 62, which inches up my risk of dying from Covid-19. Our kids know I’m vital. They also know that people my age and older are succumbing to this illness, which must be forcing them, if only subconsciously, to grapple with my mortality. It’s not unthinkable. My appearance, not to mention my regular, sometimes crippling, bouts of arthritis, could easily suggest that I’m older and more susceptible to illness than I am. My curly hair is snowy white, and my face is prematurely lined from years of sun worship. Despite my athleticism, or maybe because of it, I have two titanium hips and a fused lower spine. Most days — at least before working out — I ache and creak like a rusty machine. Perhaps I was naïve to think that our kids would take my aging in stride. They always knew that I was older than their friends’ moms. They even applauded my rejection of the cultural pressure to look younger by camouflaging my age. Yet the powerful feminist example I thought I was setting by not coloring my hair, wearing makeup, or squeezing into clothes designed for younger, skinnier bodies may have instead emphasized the age gap between other moms and me, implanting in our kids the tiniest, most subliminal fear of losing me sooner than later. When our son was eight, his teacher asked if I was his mother or his grandmother. And, our daughter, at age 12, said a friend who’d spotted me at her softball game asked, “Is that your mom? Isn’t she too old to be your mother?” These stories made me cry, but they didn’t attune me to the anxiety about my longevity that our kids may have had. Only recently have I begun wondering if they see me as elderly.
My last orthopedic surgery scared them, and they seem more than a little alarmed whenever I mention needing a routine medical procedure. Although they good-naturedly tease me for speaking too loudly or forgetting things that they’ve said minutes earlier, I sense them digesting the deficits that they’re beginning to notice, like my hearing and memory loss. When I took a spill during a family stroll, our son dove to help me stand as if I were drowning, while our daughter tenderly brushed dirt from my sleeves. Both looked frightened and sad. During a family conversation about the pandemic, I thought it timely to mention the location of my husband’s and my living wills. Our son changed the subject. Twenty-five years ago, right after my husband and I learned I was pregnant, a friend of ours, a father of two, remarked: “From this moment on, you will never not worry.” He was right. His comment, although foreboding, prepared me for the worrying that I would do from then on. What it didn’t prepare me for was the day our kids would start worrying about me. Andrea Kott, MPH, a freelance public health writer, is the author of the memoir “Salt on a Robin’s Tail: An Unlikely Jewish Journey Through Childhood, Forgiveness and Hope,” due this month from Blydyn Square Books.
https://medium.com/modern-parent/trading-places-when-did-my-kids-start-worrying-about-me-68ef24778932
['Andrea Kott']
2020-11-25 04:13:07.529000+00:00
['Aging Parents', 'Coronavirus', 'Parenthood', 'Children', 'Motherhood']
I’m Being Productive By Telling Others They Don’t Have to Be Productive.
I’m Being Productive By Telling Others They Don’t Have to Be Productive. See? I’m helping. Photo by JESHOOTS.COM on Unsplash Everyone is trying to rush to finish their work for the holidays — so they can “enjoy” the rest of 2020, and by that I mean…flush it down the toilet. But this is more late-stage capitalism garbage…the feeling of “having to be productive.” We have to edit the book, finish the assignments, pay the bills, complete our wills! We have to do it all by some arbitrary date! We have to help, we have to do, we have to earn, we have to accomplish! Clearly, there is a need for someone to remind people they do not have to be productive. That’s where I can step in! You do not need to be productive. I’m here to champion the idea of relaxation. I am a warrior for sitting in a hammock. I want everyone to be content with the realization that work will be on the other side. There will be work tomorrow, and the day after, and deep into 2021. This is my mission. This is how I’m productive — by reminding people they don’t have to be productive. Circular? Hardly. We have so many messages out there telling people to be more. The perfect mom, the perfect gift. To pay their bills on time, so they can be one of the ones feeling sorry for people who lost their job, but not one of the sufferers. Even now, everyone wants some kind of ivory tower. And they hustle and they work and they judge those who don’t seem to be working as hard. Without me to remind them, who would? Hell, maybe I should start charging for my services. I teach people how to relax. How to be okay with themselves. I could, you know, monetize this hobby. Not for myself, but for the greater good, you know. Now, what should my class be? Should I build a blog around it? How do I…you know…monetize the idea of not needing money? Look, if I don’t do it, then someone else will. I need a logo. I need an Etsy account. I need an email address. Do I even need a fake name? Phew. This is a lot of work. 
I should take a break. Maybe this can be my project for 2021.
https://medium.com/are-you-okay/im-being-productive-by-telling-others-they-don-t-have-to-be-productive-1a40aacb9cfa
['Lisa Martens']
2020-12-17 17:56:21.384000+00:00
['Humor', 'Time Management', 'Satire', 'Self Improvement', 'Productivity']
Zero Unemployment Assistance
After two long weeks of patiently waiting and biding my time, I was finally sent an e-mail confirming my eligibility for unemployment. As a rideshare driver, my earnings had been affected by the COVID-19 crisis for over a fortnight, but being a gig-worker I was not eligible to apply for benefits until the official passage of the federal stimulus bill which would reportedly allow contracted workers to qualify for unemployment. Just applying had proven difficult due to the inability to get a hold of anyone in the unemployment office to assist me in completing an application that was designed to determine the amount of benefit owed a former employee on behalf of the company that had released them. The problem was that as a gig-worker no company was willing to claim me as an employee. I resolved to complete the application to the best of my ability and submit it online. The ten-day waiting period for a response had already turned into two weeks, so I was beginning to get impatient with no money coming in to address the timely arrival of monthly bills and invoices. At the urging of my family I had hung up the keys to my car and taken myself off the road, deciding that I would instead focus on educating my daughter whose school had been closed for the foreseeable future. I was going to scale back and make do on an unemployment subsidy as the risk of contracting the virus was not worth the reward of working all night for well below minimum wage. I’d been making about $6 an hour (half the minimum wage in CA) driving around all night during the initial weeks of quarantine. Needless to say, I had not been the most attentive parent or teacher — all groggy eyed and worn-out from a long graveyard shift of doing my best to make ends meet. I’ve always said that driving rideshare has been the best and worst decision of my life.
It’s a well-rehearsed line I’ve used several times in response to inquiries from passengers curious about my experiences driving a taxi-cab of the new millennium. It has been an eye-opening and fantastic ride filled with remarkable people, wonderful conversation, and occasional adventure. I have learned a tremendous amount about myself and people along the way in the beautiful exchange of hearts and minds between strangers. I have made friends for life while driving and have had enlightening moments of human interaction that warm my heart just thinking about them. I often ponder the endless possibilities that come with each accepted request, as I welcome a new stranger into my life with an unknown destination and story to tell — a tale in which I have just been cast in the bit role of taxi driver. My life as a supporting actor in the lives of my passengers is often dictated by the mood and experience that they bring with them, factors beyond my control and open to the whims, joys and disappointments inherent in each of our experience. These memories help to justify what has been a long and tiring journey. Full-time rideshare drivers have to work sixty to seventy-hours a week just to make a sustainable wage. The portion of fare that drivers receive doesn’t fairly compensate for the inherent expenses of fuel, rapidly accumulating costs of car maintenance, and mileage. Moreover, we receive no healthcare benefits, vacation pay or sick time, often forcing us to work through sickness and disease. It is all the more reason why it is essential that drivers be supported in this time when as a society we attempt to collectively curb the spread of a highly contagious and deadly virus. I hesitate to write this article at this time because there are millions of people in the throes of financial hardship and who wants to read another sob story about unpaid rent and a growing stack of past-due bills. 
Moreover, as I write, there are innumerable individuals battling this mysterious life-threatening illness, or suffering helplessly while they can do nothing to help or comfort a loved one dying in quarantine. There are lives being lost which obviously trumps the loss of jobs or income. Nevertheless, I have always felt a social responsibility to speak up on behalf of my fellow drivers whenever I write about the experience of driving. We are a large global community who have been significantly exploited by the companies for which we are “contracted” to provide services. The major rideshare companies have gained market control over the transportation industry at the cost of the individuals doing the bulk of the work — the drivers rolling around in their own cars and carrying passengers. Now allow me to discuss how this has impacted the amount of my unemployment subsidy as determined by the state of California. After re-confirming my eligibility by answering the standard questions regarding whether I had looked for work in the past week or had joined an educational program preparing me for future work opportunities, I was informed that I am now eligible to receive $0.00 every week. That is not a typo. I did not miss a number. I am wondering if the system is going to pour salt on my wound and issue me a check for $0.00. I would be completely in the dark as to why were it not for an e-mail I received last Friday. About two years ago, I was driving somewhere in Los Angeles when a colleague approached me, asking if I wanted to join Rideshare Driver’s United, an attempt at forming a union advocating on behalf of drivers for the basic rights of minimum wage and standard health benefits to which all workers are entitled. I was happy to join the cause and imagined a strike with 50,000 cars parked in a stadium parking lot while prices on the rideshare platforms surged due to the lack of drivers available. 
It would be a firm and clear message to the companies that we are indeed essential employees. However, it would never work; it is impossible to organize a work force that has no water cooler. We rarely get to interact with our co-workers as we are isolated in our mobile offices waiting for our next ride request. Moreover, we are essentially a community of “scabs” that have already unhinged and undermined the transportation industry as it once was. How often do you see taxis driving around anymore? I took a verbal lashing and admonishment from an elder cousin who worked several years as a cab driver when I told him I was driving rideshare. I appreciate Rideshare Driver’s United for all of their efforts, but from what I can tell they are not a very big organization and communication with drivers is inconsistent and spotty at best. I’ll go months between text messages about plans of action and ways to advocate on behalf of our rights. I still receive weekly calls from my last union, United Teachers of Los Angeles (UTLA), an organization so large and powerful that they have their own skyscraper on Wilshire Boulevard from which to direct their operations. As far as I can tell, Rideshare Driver’s United is run by a colleague sending out group text messages from the driver’s seat of his/her car in between rides. Anyhow, in their e-mail, Rideshare Driver’s United had explained that drivers were not receiving unemployment benefits due to the current confusion over whether we are to be classified as independent contractors or employees. You see, the people of California already voted on and passed a law that went into effect on January 1st of this year that requires technology companies and the like to provide gig workers with employee benefits and a standard minimum wage.
However, the major rideshare companies have refused to re-classify us as employees and have instead spent millions of dollars gathering signatures in front of grocery stores under the guise of protecting the flexibility of their drivers. If you live in California, you have definitely been approached by these autograph collectors because they are getting paid upwards of five dollars a signature when such political petitions normally pay in cents not greenbacks per ‘John Hancock.’ The rideshare corporations even send out notices to drivers asking us to sign up and join them in their effort to protect our rights to a flexible schedule. They threaten drivers with claims that we would be required to adhere to a rigid work schedule if they were to follow the law. They also send out surveys collecting data on driver preferences for a flexible schedule. However, “polling all drivers” includes the millions of drivers who only drive between five and ten hours a week, or even far less. These individuals have alternate sources of health benefits and often full-time jobs with traditional schedule demands. They do not put the significant amount of wear and tear on their vehicles that full-time drivers do, so naturally when asked if flexibility is important, they give the corporations the answer they desire. However, these part-time drivers are not the basis that sustains the business. I would venture to say that most rideshare business is completed by the crazies like me out there driving sixty and seventy hours a week — so unless you are going to give full-time drivers six votes to their one, this is not an accurate depiction of the needs of the driving work force. I’ve been driving full-time for four years and I know that drivers are paid well below minimum wage in both the Bay Area, where I primarily drove for the first three years, and even lower in Los Angeles, where I have driven for the past year. 
I could crunch the numbers for you, but for now I am just going to say that there are official studies conducted by universities and economists that back up this assertion. Rideshare Driver’s United recently assisted me in filing for back wages as stipulated by the new law that went into effect in January; I was informed that based upon my work over the past three tax years, one company owes me well over $200,000 and the other owes me over $100,000. That sounds about right. I have worked countless hours and I am still slowly accumulating debt. I should be stacking money given the amount of overtime that I clock. I don’t expect to ever see this money, but I will continue to speak out for the rights of drivers when asked to do so. Furthermore, I would appreciate it if the companies that I have worked so hard for (while maintaining an outstanding customer service and safety rating with passengers who rate you on every trip) would step up in this dire time and take care of their essential employees who have maintained the business. Unfortunately, I know that this will not happen as the reality is quite different. If and when I return to driving following this crisis, I am positive that I will be competing with more and more drivers for whatever rides are available because the high unemployment rate will have led the companies to advertise themselves once again as a “side-hustle” for making money and getting your feet back on the ground. There will be no protection for full-time drivers and the market will be flooded with more drivers than ever, leading to less ride requests and less money. After all, what do the companies have to lose from over-saturating the market with drivers? Nothing, because they don’t have to pay us benefits. In fact, they will make gains as riders will have shorter wait times and find this service even more convenient than it already is…yet at the silent cost of basic human and employee rights. 
I always figured that driving rideshare would be a debt that I could manage. When my car inevitably dies long before it is paid off due to the 260,000 miles and counting that I have accumulated since signing up to drive, I would just fire up my old truck collecting dust in the garage, blow the dust off my textbooks, and head back to the classroom to continue my lifelong vocation as an educator. Maybe I would bike to school and shed off some of the other debt incurred on my gut by sitting in the driver’s seat for hours on end. But right now, I am not sure how I am even going to pay my rent. So why don’t I just join one of the delivery apps in their collective campaign to save our favorite restaurants and deliver food with my daughter in the back seat along for the ride? Well, first because this is not good parenting. Second, I am a gig worker and have been for three years, so I no longer have favorite restaurants. Mostly, however, it is because I have been getting “side-hustled” by technology companies for years, and I can tell when a vulture is trying to profit off the carcasses of a tragedy. The nearly 20,000 wonderful people that have joined me on this ride as passengers have been the true joy of my job. This love for people is why I will do my best not to return to work right now, because about half of the customers currently ordering rides at night are actually people ignoring social distancing rules. I respect people and my daughter too much to put myself, her, and others at risk by facilitating the spread of disease. However, if I am unable to find access to the unemployment money that has been set aside for people like me, I may soon be left with no choice.
https://thayerussell.medium.com/zero-unemployment-assistance-1d0a203ef460
['D. Thayer Russell']
2020-04-16 18:11:46.196000+00:00
['Economics', 'Uber', 'Unemployment', 'Coronavirus']
My favorite courses to learn Software Architecture in 2021 — Best of Lot
My favorite courses to learn Software Architecture in 2021 — Best of Lot These are the best courses to learn Software architecture and become a solution architect in 2021 Every Programmer wants to grow in their career, but it’s not easy, and if you don’t pay attention to your job, you will likely stay in the same position for many years. The growth in the initial few years is generally fast. Still, once you reach the barrier of 5 years, you need to decide which direction you want to move in — people management, product management, or software architecture. For technical people who don’t want to go into people or product management, software architecture or solution architecture is the natural final position, which is not surprising. If you want to stay close to coding and technical discussions, like to try new technologies, and want to use them in your organization to solve challenging problems, software architecture is an excellent position to be in. Most of the Java developers I have met or interacted with wanted to become a software architect, though only a few succeed, and most of them are still either a technical lead or Senior Software developer. But, the big question is, how does a senior developer become a software architect? What books or courses can you use to learn the skills a Software or Solution Architect should have? It’s also one of the most common questions I receive from my readers, apart from how to prepare for Java interviews. To help you with this question and to answer many such questions from my readers, I’ll share a few online courses you can take to learn more about Software Architecture and how to become a software architect. In the past, I have shared a couple of books you can read to learn some software architecture skills, and these courses will supplement whatever you have learned from them. You can also use these courses and those books to get the best of both worlds.
Top 5 Courses to become a Software Architect in 2021

As I have said, a software architect position is not an easy one. The architect is responsible for all technology decisions in the project, and it is also a significant role. You need to know a lot of things, not just the technology but also the business. You should be familiar not only with general software architecture, design, coding, and programming best practices but also with the latest technologies, libraries, and frameworks, and know their pros and cons in order to choose the right technology for your solution.

In these few courses, I have tried to include most of the things you need to learn about software architecture, but this list is by no means complete, and I am keen to get suggestions from experienced software architects who come across this article. Anyway, without any further ado, here is my list of some of the best online courses to learn software architecture and become a solution architect or software architect.

When it comes to online learning, Coursera is one of the most reputed websites and also one of my favorite places, along with Udemy and Pluralsight. It has some of the best courses on machine learning, algorithms, and software architecture, and this is one of them. In this course, you will learn how to represent a software architecture using visual tools like UML, which is very important for communicating the architecture to stakeholders as well as to the developers who will implement it. You will also learn some of the standard architectures, their qualities, and their tradeoffs. The course also talks about how designs are evaluated, what makes a good architecture, and how an architecture can be improved.

And the best part of the course is that you will do some hands-on practice in the last module by documenting a Java-based Android application (the Capstone Project) with UML diagrams and analyzing and evaluating the application's architecture using the Architecture Tradeoff Analysis Method (ATAM).
And if you find Coursera courses useful (which they are, because they are created by reputed companies and universities around the world), I suggest you join Coursera Plus, a subscription plan from Coursera that gives you unlimited access to their most popular courses, specializations, professional certificates, and guided projects. It costs around $399/year, but it's completely worth your money as you get unlimited certificates.

This is another excellent and must-take course for all programmers who aspire to become a software architect. In this course, instructor Mark Farragher will teach you all the skills you need to become an outstanding solution architect. He will not only teach you how to create an excellent architecture design but also show you all the soft skills you will need to really shine in this role and make an impression on your peers. This is extremely important, as you need to do a lot of talking and will probably need to interact with most of the people in the organization, including the CEO and CTO.

This course also covers how a software architect or solution architect operates in an IT team, which soft skills are required to become an outstanding architect, and which extra responsibilities you can take on to really make an impression on your peers. On the technical side, the course will teach you how to create high-level architectures, explain common architecture design patterns, and show how to express these patterns in UML. It also covers what to look at in an architecture, at both high and low levels, like caching, exception management, and deployment scenarios. In short, an excellent course for every programmer and senior developer who wants to become a solution architect.

This course is more low-level than the previous two and talks about software architecture and design patterns, which are somewhat more concrete than abstract design. The course uses the Java programming language to solve problems, which is great for Java developers.
Still, the theoretical background is language-independent and useful for all programmers irrespective of their programming language. I highly recommend coding out the implementations several times on your own to get a good grasp of them. It also covers things like the SOLID principles and design patterns, which are vital for any good architecture and robust application.

This is another fantastic course for all programmers who want to become a software architect. In this course, you will learn what the role of a software architect is in a team and organization, and why it is so important. You will learn about the skills and knowledge required to become a competent software architect and about your responsibilities during each phase of the software development and project life cycle. Lastly, you will learn one of the most important aspects of being a solution architect: how to design and communicate a solution to both technical and non-technical stakeholders.

In short, this course is your roadmap to becoming a capable and successful software architect. Even if you are already a software architect, you can take this course to further hone your soft skills. If you need more resources, you can also check out this list of books to improve your soft skills as a programmer and developer.

This is another excellent Pluralsight course on software architecture. In this course, you will learn about Clean Architecture. If you are wondering what clean architecture is, it's nothing but a set of modern patterns, practices, and principles for creating a software architecture that is simple, understandable, flexible, testable, and maintainable. There is a lot of focus in organizations on writing clean code and creating clean architecture, and this course will help you move in that direction. This is an introductory course, which means it has no prerequisites.
However, having basic experience with at least one C-like programming language and basic knowledge of software architecture is beneficial. It's totally different from the Clean Architecture book by Uncle Bob, which is also a book worth reading for programmers who want to become a software architect.

By the way, you would need a Pluralsight membership to join this course, which costs around $29 per month or $299 per year (14% discount). If you don't have this plan, I highly recommend joining, as it boosts your learning, and as a programmer, you always need to learn new things. Alternatively, you can also use their 10-day free trial to watch this course for FREE.

This is one of my favorite courses when it comes to learning software design or system design. It's created to prepare you for the system design interview, but you can also use it to learn how to approach system design in general. Grokking the System Design Interview is one of the first courses (or books) that describes large-scale distributed system design problems in detail. Even if you've worked on distributed systems before, there is a lot you can learn from this course.

Here is the link to join this course — Grokking the System Design Interview

The authors have created this course to lay out the design choices (including their pros and cons) so that you can understand the requirements, compare approaches, and come up with the best solution for the problem at hand. They are also mindful to keep solutions at a granularity that's appropriate for a 45-minute discussion, which makes the course very interesting. Even if you are not preparing for coding interviews, I suggest you take this course to improve your system design skills. Big thanks to the Educative team and Fahim ul Haq for creating this awesome course.

This is another excellent course from Educative to learn about web application architecture.
In this course, you will learn about different architectural styles like monolith, microservices, client-server, 3-tier architecture, and decentralized peer-to-peer architecture, and how requests and data move in a web application. You will also learn how to think big and think in terms of layers, performance, scalability, and high availability, which is a must for today's applications. The course not only introduces the different architectural patterns but also explains the pros and cons of each approach and walks you through different scenarios where a particular architecture is more suitable than others.

Here is the link to join the course — Web Application & Software Architecture 101

To be honest with you, this is the best course not only for senior developers but for every software developer out there, as it will expand your thinking process and make you a more confident web developer. There is a significant discount on the course now, and it's available for just $44 (the original price is $79). It's a bit more expensive than Udemy courses but worth it. On the other hand, if you like Educative as a platform, you can also buy a subscription for just $17 per month (50% discount); I have one, and I highly recommend you get it.

That's all about some of the best online courses to learn software architecture and become a software architect or solution architect. As I have said, the role of a software architect is significant, and they also need to do a lot of talking, so beyond subject matter and technologies, they also need to be good at soft skills. It's a gratifying career, both in terms of pay and work, as you get a lot of limelight and get to talk to people at both higher and lower levels in your organization, from the CEO to developers, and know most of the things about your application and solution.
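To give a flavor of the SOLID principles and design patterns that several of these courses cover, here is a minimal sketch of the Dependency Inversion Principle, shown in Python for brevity; all class and method names here are invented for illustration. The idea is that high-level code depends on an abstraction rather than on a concrete implementation, so implementations can be swapped without touching the business logic.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstraction the high-level code depends on."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One concrete implementation; a database-backed one could replace it."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class UserService:
    """High-level policy: knows nothing about how data is actually stored."""
    def __init__(self, storage: Storage):
        self._storage = storage
    def register(self, name: str) -> None:
        self._storage.save(name, "registered")
    def status(self, name: str) -> str:
        return self._storage.load(name)

service = UserService(InMemoryStorage())
service.register("ada")
print(service.status("ada"))  # registered
```

Swapping InMemoryStorage for another Storage implementation would require no change to UserService, which is exactly the kind of decoupling a good architecture aims for.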
Other Articles You May Like to Explore
10 Things Java and Web Developer Should Learn in 2021
10 Programming Languages to look in 2021
10 Testing Tools Java Developers Should Know
5 Frameworks Java Developers Should Learn in 2021
10 Tools Every Java Developer should know
5 Courses to Learn Big Data and Apache Spark in Java
Finally, Java has var to declare Local Variables
10 Books Every Java Programmer Should Read in 2021
10 Tools Java Developers uses in their day-to-day work
10 Tips to become a better Java Programmer

Thanks for reading this article so far. If you found these courses useful for becoming a software architect or learning software architecture, please share them with your friends and colleagues. If you have any questions or feedback, then please drop a note.

P.S. — If you are looking for a free course to learn Java design patterns, which are also crucial for software architects, then you can also check out the Java Design Patterns and Architecture course on Udemy. It's completely free and has loads of useful information on using design patterns for Java programmers.

Other Medium Articles you may like
https://medium.com/javarevisited/top-5-courses-to-learn-software-architecture-in-2020-best-of-lot-5d34ebc52e9
[]
2020-12-09 09:05:08.228000+00:00
['Java', 'Programming', 'JavaScript', 'Software Development', 'Web Development']
Why Python should be your go-to language?
5. Python is instrumental in Data Science and AI

Python comes with many built-in libraries that provide much of the functionality a data scientist might need. In addition to that, there is also a great number of robust and popular libraries you can download for Python and use in your projects. Now, give it a go if it fits your needs.
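As a small illustration of the "batteries included" point above, the standard library's statistics module already covers common descriptive statistics with no third-party install (the sample data here is made up):

```python
import statistics

# A made-up sample, just to exercise the built-in functions.
samples = [2.5, 3.0, 3.5, 4.0, 4.5]

print(statistics.mean(samples))              # arithmetic mean: 3.5
print(statistics.median(samples))            # middle value: 3.5
print(round(statistics.stdev(samples), 3))   # sample standard deviation
```

For anything heavier, the downloadable libraries the article alludes to (NumPy, pandas, and friends) pick up where the standard library stops.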
https://medium.com/dev-genius/why-python-should-be-your-go-to-language-fb69be1fb0ba
['Injurious Answer']
2020-10-07 19:33:00.183000+00:00
['Python', 'IoT', 'Language', 'Data Science', 'Programming']
Why Richard Branson Is So Successful
By Richard Feloni

Richard Branson founded his first business, Student magazine, after dropping out of high school at age 15. He soon cofounded the Virgin record store, which then grew into a record label. After 10 years of great success, Branson left his business partners dumbfounded when he announced he wanted to branch into the airline industry. Nearly 50 years later, Branson is the billionaire chair of the Virgin Group and has overseen approximately 500 companies, with his brand currently on somewhere between 200 and 300 of them. It's his remarkable passion, vision, and leadership qualities that make him an "exponential entrepreneur," write serial tech entrepreneur and XPRIZE CEO Peter Diamandis and Flow Genome Project founder Steven Kotler in their new book, "Bold: How to Go Big, Create Wealth and Impact the World." Branson sits on the XPRIZE board, and Diamandis spoke with him for the book. Drawing from Diamandis and Kotler's insight and an interview Business Insider CEO Henry Blodget held with Branson last fall, we've broken down the key elements of Branson's philosophy that have been behind the hundreds of businesses he's either created or helped develop.

He's a "fun junkie."

"Branson says to himself, 'if I have fun doing this, I assume other people have fun doing this,' so fun has become his filter for 'should I go into it?' and it's a great filter," Kotler says. When he first told Virgin Music CEOs that he wanted to use a third of the company's profits to start an airline because it would be "fun," they weren't amused. But Branson wasn't being cheeky or trite. He's been able to have such a successful, rich, and long career because he's been enjoying himself. "Fun is one of the most important — and underrated — ingredients in any successful venture.
If you’re not having fun, then it’s probably time to call it quits and try something else,” he writes in his book “The Virgin Way: Everything I Know About Leadership.” He protects the downside. “Superficially, I think it looks like entrepreneurs have a high tolerance for risk,” Branson tells Diamandis in “Bold.” “But, having said that, one of the most important phrases in my life is ‘protect the downside.’” Limiting possible losses before moving forward with a new business venture is a lesson his father taught him when he was 15, he writes in a LinkedIn post. His dad would let him drop out of school to start a magazine, but only if he sold 4,000 pounds of advertising to cover printing and paper costs. It’s a strategy he repeated in 1984 when he went into the airline business with Virgin Atlantic. He was only able to convince his business partners at Virgin Records to agree to the deal after he got Boeing to agree to take back Virgin’s one 747 jet after a year if the business wasn’t operating as planned. Diamandis and Kotler write that this strategy has allowed Branson to remain agile as an entrepreneur. Over the past five decades, Branson has, of course, experienced many failures, like Virgin Cola and Virgin Clothing. But he “is quick to rapidly iterate his ideas, and quicker to shut down a failure,” Diamandis and Kotler write. “In total, while Branson is known to have started some five hundred companies, he has also shut down the two hundred of them that didn’t work.” He’s customer-centric. “Unless you’re customer-centric, you might be able to create something wonderful, but you’re not going to survive,” Branson tells Diamandis. “It’s about getting every little detail right.” Branson writes in “The Virgin Way” that even though it’s impossible to be hands-on with all of his companies, he will occasionally play customer, experiencing a Virgin service as a consumer would. 
It’s why he says he once called one of his company’s customer service lines and disguised his voice, demanding to be put on the phone with Richard Branson — his test worked, and he was connected to his assistant, who saw through his disguise. He tells the story for a laugh, but also to communicate the fact that regardless of whether you’re running a startup or a massive conglomerate, you can’t lose touch with your customer. Branson also says he used to regularly cold call Virgin Atlantic business-class customers to ask about their experience, and he writes down observations about his own experiences as a Virgin customer, such as when he noted that he and fellow Virgin America passengers didn’t want a hot towel offered to them on a scorching Las Vegas day. He took that bit to management and had the policy changed to having cold towels offered on hot days. He’s a master delegater. Branson may still kite-surf in his 60s, but he’s not superhuman. He’s constantly searching for new ways to expand the Virgin brand into “industries that are stuck or broken,” as Diamandis and Kotler say, assured that the people he’s surrounded himself with can make his ideas reality. “The best bit of advice I think I can give to any manager of a company is find somebody better than yourself to do the day-to-day running,” Branson tells Business Insider. “And then free yourself up to think about the bigger picture. By freeing myself up, I’ve been able to dream big and move Virgin forward into lots of different areas. And it’s made for a fascinating life.” More From Business Insider:
https://medium.com/thrive-global/why-richard-branson-is-so-successful-46a93e988cb2
['Thrive Global']
2017-06-21 18:22:44.173000+00:00
['Wisdom', 'Entrepreneurship', 'Adventure', 'Work Smarter', 'Success']
What Color Is Everything When Nobody Is Looking?
“Color is the place where our brain and the universe meet.” — Paul Klee

Is the sky really blue? Is white color made up of all colors? It sure seems like it, but it only does so because our brain makes up such properties. You have probably learned in school that light is an electromagnetic wave, and the wavelength of this wave will determine the color of the light if it's in the visible spectrum. If you want to refresh this knowledge or learn more about light, including a cool mindmap of all of its properties, check out my other piece, The Science of Light, the Heartbeat of the Universe. Here's what you might have learned:

The distribution of electromagnetic waves with respect to frequency and wavelength, highlighting the visible spectrum, source: Wikimedia.

The electromagnetic wave seems to have different properties depending on its frequency or wavelength (two sides of the same coin, since the product of wavelength λ and frequency f is a constant, namely the speed of light, λf = c). Only a small part of the spectrum is visible to us. This is what we perceive as optical light, while we are not able to see microwaves or wifi. Is there anything special about the visible part of the spectrum then? And why are all of the colors included in such a small range of wavelengths?

Colors are not real

The wavelength of the electromagnetic waves does not carry any property of color in itself. It's only to our human senses that light becomes optical light and colors are made up. If you add to the definition that the wavelength of the electromagnetic wave determines the color of light that we observe, then it's all good. But if you want to consider the Universe as it is, without being dependent on a human observer, then it's important to understand that it's not the light itself, not the wavelength of the electromagnetic spectrum, that carries a property of color.
So asking the question, "what color is the universe when nobody is looking?", makes as little sense as asking, "if a tree falls in the forest and nobody is around, does it make a noise?" It's the brain of the biological human (and many other of our vertebrate cousins in carbon- and DNA-based life) that makes up the colors. Colors are not real! At least not real-real. Not the kind of real that exists independent of us in an ontological realism, assuming that such a reality exists. (The alternative is ontological relativism, where no objective and independent Universe exists; it is only created by our observations.) It's only the kind of real that humans as a species can agree upon, and luckily we don't have any other species making arguments against us. So far, so good.

How we perceive color

The electromagnetic waves propagate through space from some energy source, and when they interact with matter they get absorbed as quanta of energy, also called photons. Light interacts with the matter that we know, but there might be other kinds of matter out there that don't interact with light, and therefore we can't see them or sense them in any other way through our senses (all five of our senses are, in the end, based on the electromagnetic force). This is what we, for now, call dark matter, but it's outside the scope here. If you want to read more about that, check out Gravity-based life as we don't know it. When an electromagnetic wave reaches our eye, it might come directly from its source, such as the sun, a star, or any other radiating black body, or from a burning chemical reaction in a fire, or it might have been absorbed by other material first, and only some of it is reflected back before it reaches the retina in our eye.
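The relation λf = c quoted earlier is easy to check numerically; here is a quick sketch, using an illustrative frequency in the green part of the visible band:

```python
# Wavelength from frequency via λ = c / f (the frequency value is illustrative).
c = 299_792_458      # speed of light in m/s

f_green = 5.5e14     # roughly the frequency of green light, in Hz
wavelength = c / f_green

# Express the result in nanometers; ~545 nm sits inside the visible band.
print(round(wavelength * 1e9))  # 545
```

Of course, per the argument above, "green" is only what our brain makes of a wave with this wavelength; the number itself carries no color.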
https://medium.com/predict/what-color-is-everything-when-nobody-is-looking-aefaee67dcf2
['Lenka Otap']
2020-12-28 08:54:39.467000+00:00
['Philosophy', 'Colors', 'Philosophy Of Mind', 'Philosophy Of Science', 'Science']
Dark Energy
What is dark energy? Why is it so important to the development of the Universe? Find out in Dark Energy, the newest installment of Cosmic Funnies where Planet X tells the story!
https://medium.com/the-cosmic-companion/dark-energy-bd04ba0db533
['Cosmic Funnies']
2020-01-26 18:53:59.984000+00:00
['Science', 'Space', 'Education', 'Astronomy', 'Comics']
X-AI, Black Boxes and Crystal Balls
Inside AI

On our road to trusted AI, I discussed in my previous blog the question of bias: how it travels from humans to machines, how it is amplified by AI applications, its impacts in the real world for individuals and for businesses, and the importance of proactively tackling this problem. Today, I'll address the issue of explainability and transparency of the so-called "black box" models.

Choosing between Explainability and Accuracy?

To trust, one must understand. This is true of relationships between human beings. It is also true when it comes to adopting systems that augment human capabilities to generate insight and make decisions. It's a partnership between humans and machines. Like in all partnerships, trust is the key. It is perhaps no surprise that explainability of AI and Machine Learning (ML) algorithms has become one of the most discussed and researched topics in the field.

What does it mean for an algorithm to be explainable? It means that the system can convey useful information about its inner workings, the patterns that it learns, and the results it provides. Interpretability is a softer, lighter version of explainability. A system is interpretable if we can see what's going on and if we can reasonably predict the outcome based on input variables, even if we do not necessarily know how the system reached its decision.

Some model types, like decision trees and linear regressions, are quite simple, transparent, and easy to understand. We know how changing the inputs will affect the predicted outcome, and we can justify each prediction. Unfortunately, the same complexity that gives extraordinary predictive abilities to "black box" models such as deep neural networks, random forests, or gradient boosting machines also makes them very difficult to understand (and trust). The workings of any ML technology are inherently opaque, even to computer scientists. By its nature, deep learning is a particularly dark black box.
The problem is made worse by the large volumes of data used to train those models, making it difficult to figure out which data points have more influence on the outcome than others. The fact that ML algorithms evolve over time also makes things hard, because the algorithms keep learning from new data. At the end of the day, it is a trade-off between accuracy and explainability. The question is how much we are prepared to compromise on either. Unfortunately, we have not reached the point where we can have models that are both highly accurate and fully transparent, although we are moving in that direction.
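The contrast above between transparent and opaque models can be made concrete with a tiny sketch (pure Python, with invented coefficients): in a linear regression, each learned weight states exactly how much a unit change in that input moves the prediction, which is precisely the property a deep network's millions of entangled weights do not offer.

```python
def linear_predict(weights, bias, features):
    """A linear model's prediction is a transparent weighted sum of its inputs."""
    return bias + sum(w * x for w, x in zip(weights, features))

weights = [2.0, -1.0]   # hypothetical learned coefficients
bias = 0.5

base = linear_predict(weights, bias, [3.0, 4.0])
bumped = linear_predict(weights, bias, [4.0, 4.0])  # +1 on the first input

# The prediction moves by exactly the first coefficient, so every
# prediction can be justified feature by feature.
print(bumped - base)  # 2.0
```

With a deep network, no single parameter admits such a reading, which is exactly the accuracy-for-explainability trade-off the article describes.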
https://towardsdatascience.com/x-ai-black-boxes-and-crystal-balls-fd27a00752ec
['Olivier Penel']
2019-04-29 07:52:18.799000+00:00
['Inside Ai', 'Explainability', 'Transparency', 'Machine Learning', 'Artificial Intelligence']
How to Use the Concept of Parsimony to Make Your Life Better
Will living a simpler life make us happy?

Photo by Harry Grout on Unsplash

At its core, life's pretty simple. We wake up, work, eat, sleep, and repeat. So why does it feel so darn complicated? It's probably because, as human beings, we naturally make things harder than they are. Life can be boring. Facing a challenge gives our life meaning, and working to resolve it gives us purpose. Psychologists have called this phenomenon "complexity bias." In doing so, are we making our lives worse? What if simpler were better?

Parsimony (also known as Occam's Razor) is a philosophical razor that argues just that. It was coined by the scholastic philosopher William of Occam (c. 1287–1347), who claimed that in all situations, the simplest theory is the best theory. The theory states that when faced with two competing hypotheses that come to the same conclusion, we should choose the simplest explanation (or, in philosophical terms, the one that makes the fewest assumptions). This razor is commonly used in philosophy and science as a heuristic to make decisions quickly and effectively. But, if applied correctly, could it make our lives better?
https://medium.com/mind-cafe/how-to-use-the-concept-of-parsimony-to-make-your-life-better-a345d64f7094
['Jon Hawkins']
2020-12-15 15:42:12.412000+00:00
['Self', 'Philosophy', 'Life', 'Advice', 'Psychology']
22 Questions for ethics in data and AI
22 Questions on data ethics (V1.1)

Big picture
0. If anything was possible, and there were no constraints, how would we act?

Design stage
1. What is our intention?
2. What are the components we need to bring our project into reality?
3. How will we learn about the potential consequences of our work?
4. How will we establish, and maintain, authority and trust?
5. How are we expected to conform?
6. Who are the participants in our creative process?
7. How will we cede control to, and support, those responsible for day to day choices?

Implementation/Management
8. What is the balance of power between those represented in our data, and those possessing it?
9. What does our data potentially reveal about the personal lives of those represented in it?
10. What events, beyond our control, should we be planning for?
11. How will we be aware of, and adjust for, bias?
12. What, or who, are we willing to sacrifice, at what price?
13. When does data get deleted?
14. Who, or what, else will be involved, and how do they align with our values?

System/Organization
15. What forms of behaviour do we actually incentivize?
16. When we fail to live up to our standards, how will we respond?
17. How will we provide guidance on how to navigate difficult issues?
18. What aspects of our work would we like to keep hidden?
19. How much of our inner workings do we reveal to the world?
20. How will we encourage, listen to and incorporate feedback?
21. How will our ethical policies continue to grow and evolve?
https://medium.com/the-organization/22-questions-for-ethics-in-data-and-ai-efb68fd19429
['Peter Brownell']
2019-04-25 17:13:11.740000+00:00
['Data Science', 'AI', 'Ethics']
Crown Platform to launch custom Proof of Stake solution in Q4 2018
Paddington Software Services, led by Presstab, hired to perform the task
Crown Community supported the move with an overwhelming 436 Yeas / 19 Nays and over 175,000 CRW in donations
The new solution will be fully developed by December 2018

It is official. The Crown DAO signed an agreement with Paddington Software Services LLC on July 18th to develop a unique Masternode and Systemnode based Proof of Stake consensus mechanism. CEO and founder Tom Bradshaw, also known as Presstab, will be directing a team of experienced developers who will write and implement the new code in collaboration with the Crown Development Team, piloted by Artem Brazhnikov. The software development plan consists of six milestones that will be delivered by December 2018. After a theoretical phase, in which the specifications and the protocol will be prepared and approved, unit tests will start, culminating in the testnet launch. Only then, and after rigorous hardening of the software design and security, will the main network fork code be developed and pushed. The development process will start immediately, and the Crown Developers Team will publish additional information as the different phases are delivered.

"I am passionate about community funded projects and my whole team is looking forward to start the work on Crown."

Tom and his team are excited to develop a unique Proof of Stake solution for the Crown Platform, where they will be investing their years of experience in PoS solutions, including but not limited to projects like PIVX, SaLuS, StakeNet (XSN), Neutron (NTRN), and ColossusCoinXT (CLOX). Tom's interest in macroeconomics and financial technology led him to study and begin using Bitcoin and other digital currencies in 2012. Tom has provided analysis and development work for many digital currencies beyond the ones mentioned above.
According to GitHub.com’s code search tool, fragments of Tom’s code appear in over 200 digital currencies. Tom recently designed and implemented the world’s first private proof of stake protocol, and is one of 8 contributors to libzerocoin, the primary implementation of the Zerocoin cryptographic scheme (according to GitHub.com over 1,600 code repositories use libzerocoin). The implementation of a MN/SN based Proof of Stake is a further step towards making Crown Platform unique and fully community driven. Thank you for being with us on this exciting journey.
https://medium.com/crownplatform/crown-platform-to-launch-custom-proof-of-stake-solution-in-q4-2018-1313c661f021
['J. Herranz']
2018-07-19 08:26:50.786000+00:00
['Cryptocurrency', 'Bitcoin', 'Development', 'Proof Of Stake', 'Blockchain']