What Do We Owe Our Parents?
Photo by Cristian Newman on Unsplash

“I created you, Mi Vida, as God created Adam… Without me, you would never have seen a beautiful sunset or smelled the rain approaching on the wind. You would never have tasted cool water on a hot summer day. Or heard music or known the wonderful pleasure of creating it. I gave you these things, Mi Vida. You … owe … me.”

A curious thing to say to your child, or rather your clone. In The House of the Scorpion, by Nancy Farmer, the main character Matt, a clone of someone in need of his organs, is presented with this claim of obligation by his creator. Though we are not clones, we all find ourselves in similar situations, don’t we? As children, what do we owe our parents?

In many Asian cultures, especially Chinese culture, children are expected to take care of their parents in the same way their parents took care of them. Children are expected to give their parents monthly allowances to show “filial piety,” a deeply ingrained cultural value of respect for one’s elders. Children are expected to take their parents into their homes once they settle down. Children are guilted into providing for their parents with frequent remarks like, “When I’m retired, remember what I did for you,” and “I can’t wait for you to get your first job and help me get your younger brothers through college.” These pressures certainly leave many Asian Americans with feelings of suffocation and embarrassment. Admittedly, the need for open communication instead of guilt and manipulation is pressing in Asian families. Giving back to parents should be something that’s done out of appreciation. Certainly, the current practice of filial piety is flawed. However, many of these values are goodhearted. It is unfortunate to see these expectations imposed on Asian Americans in a country where doing these things for your parents is uncommon, and where, as a result, they don’t earn the respect and authority they do elsewhere.

In a world increasingly proud of “individuality,” fewer people are practicing values that benefit society as a whole. Most of us secretly dread old age, when we may need the aid of a retirement home to get through the day. We dread it because it’s lonely and unexciting. But old age is only lonely and unexciting because people in their prime have lost the value of filial piety. As middle-aged adults, we give our mothers a call once a week and forget about them otherwise. Meanwhile, your mother sits alone in your small childhood home in Atlanta. It is because of this mentality, which many middle-aged adults are practicing right now, that you yourself dread getting older. Maybe it wouldn’t be so bad if you were surrounded by your children and grandchildren day after day.

So, perhaps we as children are indeed obligated to our parents in a greater way than we all acknowledge today. We owe it to our parents to involve them in our lives continuously, to invite them to our guest bedrooms every month, or to meet them every Sunday for brunch and an activity. Eager to live our own lives and get away from the parents who nagged us for 18 years straight, we forget the gratitude we owe them and the gratitude we secretly wish our children would show us in our old age. However, our obligation must certainly live on a spectrum; there is a limit to what we owe our parents. And only you can determine what that is. Granted, The House of the Scorpion brings to light the shocking notion that a child owes their parent their literal life.
But Farmer’s quoted explanation also leaves room for more thought. Your parents decided to give you the ultimate gift of life — the extraordinary feeling of your legs racing beneath you as you run, the sheer pleasure of chocolate ice cream, the beauty of sunsets. Though you may not owe them your life, you certainly owe them as much as you can give them. You owe them more than what you are giving them right now.
https://medium.com/the-philosophers-stone/what-do-we-owe-our-parents-dc666c39b592
['Sappho Fortis']
2020-07-17 22:45:58.654000+00:00
['Self-awareness', 'Philosophy', 'Personal Development', 'Life', 'Family']
Science-Backed Ways to Help Calm Your Mind in Uncertain Times
1. Focus on What You Can Influence

Often, we’re so consumed by our worries that we forget what’s inside our locus of control. We spend large amounts of our brainpower on things we can’t improve and feel helpless because there seems to be nothing we can do. We can’t control, for example, whether others follow the rules of physical distancing, whether hospitals have enough intensive care beds, whether the global health care systems are ready for a second or third COVID wave, or how long this entire pandemic will last. By worrying about these things, we waste precious energy. We don’t help anyone if we talk ourselves down and worry about factors outside of our control. Instead, we should try to focus on the things we can influence, like our attitude, our beliefs, our thoughts, and our energy levels. We can regain a sense of stability if we live by the principle of the philosopher Epictetus, who wrote: “Happiness and freedom begin with a clear understanding of one principle. Some things are within your control. And some are not.”

How to do it: Place a blank page on your table and write down everything you worry about. For the next 24 hours, add whatever comes to mind. Tomorrow, use two different colored pens to highlight what you can and can’t influence. Feel free to add controllable aspects you haven’t thought about, like your mental and physical health. Then, commit to focusing on everything inside your control. For example, you can decide to get high-quality sleep, decrease your screen time, eat good food, move your body, check in with your loved ones, and be kind to the people around you. Allow yourself to do everything that makes you feel good without breaking COVID regulations. By focusing on the things you can control, like your thoughts, words, actions, choices, attitude, and reactions, you can create a better environment.
https://medium.com/publishous/science-backed-ways-to-calm-your-mind-in-uncertain-times-5a7bdba5c4e4
['Eva Keiffenheim']
2020-11-09 14:20:53.637000+00:00
['Advice', 'Mental Health', 'Anxiety', 'Self Improvement', 'Mindfulness']
Best of the Week — November 30/December 6
Hi guys! Here we are with another installment of the “Best of the Week,” reviewing the best articles from the previous week. Let’s see them. The roundup is organized into three sections: Most Viewed Articles, Recommended Articles, and Resource Articles.
https://medium.com/javarevisited/best-of-the-week-november-30-december-6-2ab4e93ebc86
['Dario De Santis']
2020-12-07 08:11:23.127000+00:00
['Java', 'Programming', 'Software Development', 'Coding']
What are the Fundamentals of Reinforcement Learning
What is Reinforcement Learning?

Reinforcement learning consists of an agent interacting with the environment and learning from trial and error — the agent is the learner. The environment is anything that the agent interacts with. The interaction between the two is dynamic — the agent observes the environment’s state and takes action based on it. Then the environment produces a reward (good or bad) and changes its state based on the action. This goes on and on in a loop with the intent of accumulating the maximum number of rewards to achieve an end goal in the best way possible. Figure 1 illustrates this process. Each loop is made of a sequence of State (S), Action (A) and Reward (R) — such a sequence is called an episode or a trajectory. [2]

[Figure 1 — Agent-Environment interaction [3]. Adapted by author]

It’s quite intuitive. We humans follow this process naturally; we don’t even think of the entities and elements involved. In general terms, what we and RL systems try to solve is the problem of decision-making [1]. It’s an area that sits at the core of other fields as well, not just Machine Learning [1]. Engineering and Neuroscience are two examples. A different name may be used, for example, Optimal Control in Engineering and the Reward System in Neuroscience. Still, the idea is the same — how to optimize a sequence of actions to achieve the best result. In the case of Neuroscience, it studies the ‘reward system’ region of the brain, which contains neurons that produce dopamine to evaluate rewards and consequently make decisions. [1]

RL is different from Supervised or even Unsupervised learning. Supervised learning is when the algorithm is fed with labelled data. Such data is called the training data set. For example, a spam filter — it’s something that all email applications have today; you rarely need to flag whether a message is spam or not. The algorithm learned from the training set, which contains emails labelled as spam or ham (ham = not spam). From this input data, it learned the patterns of both types and, based on them, can recognise new examples. [2]

Unsupervised is the opposite of supervised (really?). The training data set has no information on the solution; it’s unlabelled. It’s like trying to play a sport without ever having had a coach or even watched the sport being played. An example: imagine an algorithm that tries to find communities within the data from a social media website — but solely based on each person’s connections and their connections’ connections — without any information on what school the person went to, their gender, or anything else. [2]

But how is RL different from Unsupervised learning? The answer is more subtle than when comparing RL with Supervised learning. While in both RL and Unsupervised learning there is no supervisor, the behaviour of each type of learning is different. In Unsupervised learning, the intent is to find the hidden information/labels in the data. In RL, it’s about accumulating rewards to achieve the end goal. [3]
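To make the agent-environment loop concrete, here is a minimal Python sketch of the interaction described above. It is purely illustrative: the LineWorld environment and random_policy agent are invented for this example and are not from the article.

import random

# A toy environment: the agent starts at position 0 on a line and tries to
# reach position 5. Reaching the goal ends the episode with reward +1;
# every other step costs -0.01.
class LineWorld:
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.pos += action
        done = self.pos >= 5
        reward = 1.0 if done else -0.01
        return self.pos, reward, done

def random_policy(state):
    # A trivial agent that acts randomly and learns nothing. A real RL agent
    # would improve its state-to-action mapping using the observed rewards.
    return random.choice([-1, 1])

env = LineWorld()
state = env.reset()
trajectory = []  # the sequence of (State, Action, Reward): one episode
done = False
while not done and len(trajectory) < 10000:  # cap steps for safety
    action = random_policy(state)
    next_state, reward, done = env.step(action)
    trajectory.append((state, action, reward))
    state = next_state

print(f"Episode length: {len(trajectory)}, return: {sum(r for _, _, r in trajectory):.2f}")

Replacing random_policy with one that updates itself from the collected rewards (for example, via Q-learning) is precisely what distinguishes RL from the supervised and unsupervised settings discussed next.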
https://medium.com/swlh/what-are-the-fundamentals-of-reinforcement-learning-61c5d6979ed7
['Vinicius Monteiro']
2020-12-12 15:05:41.278000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Reinforcement Learning', 'Alphago']
White Hat SEO Case Study: How To Get a #1 Ranking
Today you’re going to learn how Emil rocketed his site to the #1 spot in Google. (You’ll also see how he turned this #1 ranking into $100k in monthly recurring revenue) But wait, there’s more! I’ll ALSO show you how Richard boosted his organic traffic by 348%…in 7 days. And in this post I’ll walk you through the exact white hat SEO strategy that they used, step-by-step.

How Emil Used The Skyscraper Technique to Generate 41,992 Pageviews, a #1 Ranking and $100k In Monthly Recurring Revenue

Emil’s Skyscraper content went live in April. But he didn’t eat Doritos on his couch and hope that his post hit the first page. Instead, he promoted his post using email outreach (more on that later). And that email outreach directly led to… 41,992 pageviews: 645 social shares: (Including Tweets from peeps with thousands of followers): And most important of all, fistfuls of high-quality backlinks: But that’s just the tip of the iceberg… “Shares and Pageviews are nice and all…but what about the long-term ROI?” Even though Emil’s post came out over 7 months ago, his post continues to generate traffic, leads and sales. How? Good ol’ fashioned white hat SEO. His newly minted backlinks skyrocketed his site to the #1 spot for his target keyword, “wellness program ideas”. That #1 ranking (and traffic from social media) brings in 10,000+ Pageviews per month…like clockwork: And because Emil’s post attracts high-quality organic traffic, a good chunk of his visitors convert into leads: But most important of all, this guide helped boost Emil’s homepage traffic by 59%. Here’s why this is so important: Because Emil’s homepage converts so well, this traffic boost drives over $100k in monthly recurring revenue.

How Emil Turned a “Blah Blog” Into an Online Sales Machine

OK so who is this Emil guy? Emil Shour runs content marketing and SEO at SnackNation, a healthy snack delivery service. On Emil’s first day his boss sat him down and said: “get us some backlinks”. Now: At this point, SnackNation’s SEO strategy was scattershot. They were publishing generic posts like “3 ways to do X” and “5 tips for Y”. Here’s an example: These posts didn’t move the needle…despite the fact that they referenced Office Space (I LOVE that movie). In fact, most of their posts generated only a handful of shares: In Emil’s own words: “When we first started out, the blog had a couple of okay posts. We weren’t going after keywords that were outside of our really tiny niche. There’s only so many things people are looking for in terms of office snack delivery.”
Emil Shour

Emil quickly realized that his “get us some backlinks” mandate wasn’t going to work if they kept pumping out content like this. Like most people do in a tight spot, Emil started Googling…

How a Random Encounter Changed Everything…

Emil’s search led him to a blog post at Backlinko, “Content Strategy Case Study: 36,282 Readers + 1,000 Email Subscribers“. In that post I wrote about Jimmy Daly… …and how he used The Skyscraper Technique to generate 4,865 Pageviews in a week: When Emil saw this post, a light bulb went off: “I read that post maybe ten times because I was just amazed. I was like ‘holy crap. I have to do this.’ And I just modeled it completely after that.”
Emil Shour

I already showed you the impressive results that Emil achieved thanks to The Skyscraper Technique. Now it’s time to show you exactly how he did it.

Step #0: Find an Awesome Keyword

Emil kicked things off with keyword research.
It didn’t take Emil long to figure out that VERY few people searched for healthy office snacks. For example, a keyword like “healthy office snack ideas” gets only 20 searches per month. But here’s the interesting part: Emil realized that people interested in healthy office snacks are ALSO interested in the broad topic: “employee wellness”. So he popped “employee wellness” into the Google Keyword Planner… …and voila! — he found this gem of a keyword: Now: This is a keyword that Emil’s customers search for every day…which is great. But here’s the important thing to keep in mind: This keyword has a $7 suggested bid… …and a ton of AdWords ads on the first page. All that AdWords action told Emil: “there’s strong commercial intent behind this keyword.” In other words: People that search for that keyword AREN’T a bunch of tire kickers. These are people that are going to buy from you. The Google Keyword Planner ain’t bad. But if you want to find untapped keywords that your competition doesn’t know about, then I recommend trying these 3 tools.

1. MetaGlossary

MetaGlossary.com is one of my favorite long tail keyword generators. Just put a broad keyword into the tool… …and it will pop out dozens of awesome suggestions:

2. FAQFox

FaqFox.com finds questions that your target audience asks online. (And these questions usually contain untapped long tail keywords) Let’s say you’re in the paleo diet space. Just enter the keyword “paleo diet” (and a few sites you want FaqFox to scrape). And you’ll get a list of questions that people ask about your topic. (You’ll also get a few…ummm…interesting questions): But the normal questions are long tail keywords that you can create content around.

3. Udemy

I saved the best for last. Udemy is a keyword research goldmine. First off, you have Udemy autosuggest. As if that wasn’t helpful enough, the curriculum for each course hooks you up with a TON of keyword options: And just like that you have a boatload of keywords that your competition will NEVER find. OK, let’s get back to Emil’s story… Now that Emil had a keyword in hand, it was time to size up the first page competition.

Step #1: Find Content That Already Ranks for That Keyword

Once you’ve found a keyword, it’s time to get a feel for what’s already out there… …so you can destroy it. (Yes, I let out an evil laugh when I wrote that 😀 ) So: How do you find content that’s already done well? A simple Google search. Simply search for your target keyword (and a few closely-related keywords), and see what comes up. For example: Emil Googled “employee wellness program ideas”, “wellness programs” and “corporate wellness programs”: And he noticed a few trends in the results: First, most of the first page results were lists of different wellness program ideas. Second, Emil noticed that the first page results had some major flaws. For example, 4 of the top 10 results were PDFs. Needless to say, PDFs aren’t very user friendly. Emil also took note of the fact that these lists lacked important details. Some were literally just lists of ideas: (As you know, it’s hard to take action on a piece of content that leaves out meaty details) Emil also saw that the first page lacked visual content…like images, videos, charts and screenshots. Content with at least one image generates an average of 43% more social shares than pure text-based content. Last but not least, Emil noticed that the content on the first page was BORING. Obviously, “employee wellness programs” isn’t the most exciting topic on planet Earth.
But that’s no excuse to write stiff, dull copy like this: As you’ll see in a minute, Emil went the extra mile to make his topic fun and interesting. But first, we need to dive into step #2…

Step #2: Create Something That Deserves To Be #1

Here’s the truth: First page rankings have NOTHING to do with “keeping your site updated with fresh, quality content”. (Yes, really) Instead, your ability to hit the first page depends on two things: Thing #1: Creating something that deserves to rank #1. Thing #2: Promoting that content. Seriously, that’s it. Question is: How do you create content that deserves to be the best? Let me answer that by showing you EXACTLY how Emil did it.

1. Emil’s post listed more wellness program ideas than any other guide.

Most of the content that Emil found listed 5–10 wellness program ideas: And a few authors went crazy and wrote 50+ ideas: But even the craziest authors weren’t as crazy as Emil… …because Emil set out to list a whopping 120 ideas (!). There was only one problem: Emil got stuck at idea #50. So he asked everyone in the office to chip in with ideas. That got him to 60 total ideas. 60 is good…but not good enough. Emil had hit a brick wall. How the heck was he going to rank #1 with only 60 ideas? That’s when he had an idea…

2. Emil asked experts to contribute ideas

Emil realized that he was sitting on top of a GOLDMINE of employee wellness program ideas. I’ll explain. SnackNation partners with dozens of healthy snack companies. And Emil guessed that these health-focused offices would be happy to share the wellness programs they used. And he was right. Emil got his sales team to ask their partners to send creative ideas: And these partners were happy to lend a hand: Emil also asked a few bloggers that write about employee wellness to contribute an idea or two: Again, they gladly sent some amazing ideas his way: (As you’ll see in a minute, these expert contributions generated LOTS of bonus traffic to Emil’s post) Thanks to contributions from a handful of experts, Emil finally had 120 ideas. (And a post that was 4,979 words long) Now it was time to take his content to the next level.

3. Emil split up his content into sections.

Let’s face it: Sifting through 121 items on a list is a chore. Despite that fact, many of the articles ranking in Google didn’t organize their ideas into sections: That’s why Emil decided to organize his list of ideas into 7 categories: Not only do these sections make Emil’s content easier to skim, but they got him nifty sitelinks in Google: In my experience sitelinks can significantly boost your CTR… …and therefore, your organic traffic.

4. Emil added multimedia to make his content more visually appealing.

Like I mentioned earlier, most of the articles ranking on page 1 had zero images: That’s why Emil peppered his post with eye-catching images… …and helpful videos:

5. Finally, Emil made his copy fun and interesting.

Here’s the deal: Whether you write about life insurance or life hacking, your writing CANNOT be boring. Seriously. If you bore people, they’re going to click over to YouTube faster than you can say “cute cats”. That’s why Emil made sure his writing was upbeat and engaging: Once Emil made his content more compelling than the competition — bada bing, bada boom — his draft was good to go. And after a few tweaks, Emil’s kick butt post was live: “121 Employee Wellness Program Ideas For Your Office“. Now that Emil’s post was live, it was time to celebrate, right? Wrong.
I probably don’t need to tell you that hitting “publish” is just the beginning. That’s why I want to show you the 6 content promotion strategies that Emil used to get the word out about his new guide.

Step #3: Promote Your Epic Content

Despite what you may have heard, there’s A LOT more to white hat SEO than “posting great content.” Sure, awesome content makes link building easier… …but it’s just the first step. That’s because there are 2 million blog posts published every day (source). And from launching several sites since 2010, I learned the hard way that if you really want to rank, you need to get out there and build links. With that, here are the 6 promotional strategies that Emil used to get the word out.

1. Emil got influencers on board with “Pre-Outreach”

Once Emil put the finishing touches on his post, he knew he had something special. That’s why he decided to promote his post… …before he even published it. (This is known as “Pre-Outreach”) Here’s how it went down: First, Emil found blogs that wrote about employee wellness. And he sent them this message: Because he didn’t beg for a link, they were happy to hear from him: Then Emil sent a link to the post when it went live: And that led to a nice contextual backlink: I recently used pre-outreach to promote my SEO tools guide.

2. Use “Weak Ties” to generate early buzz

You may not realize it, but you have “weak ties” that will happily promote your content for you. Question is: What are “weak ties”? And how can they help you with content promotion? I’ll explain with an example: “Weak ties” are people in your professional network that you’re acquaintances with (for example, old colleagues or people that work in other departments). After Emil’s post went live he asked the entire SnackNation team to share his new post: Even though most of Emil’s coworkers don’t work in the marketing department (and were therefore “weak ties”), they were more than happy to lend a hand: And these shares from weak ties got Emil some early buzz. “We had thirty people in our office. And if you have LinkedIn, Twitter, Google+, Facebook — if everyone’s sharing, it already gives us some social proof. That’s like eighty shares right off the bat. And people like sharing their friends’ stuff. So it just gave us a lot of social proof, gave us some amplification.”
Emil Shour

Because this is so easy and works so well, I reach out to my “weak ties” to promote every post at Backlinko: These may not be people that I have beers with every weekend, but I know them well enough to gently ask for a share.

3. Emil used “The Content Roadshow”

Next, Emil promoted his content with “The Content Roadshow”. Let me show you how The Content Roadshow works: First, Emil searched for bloggers that wrote about employee wellness, human resources and other related topics. And when he found a high-quality piece of content like this one… …he emailed the author: In this case, Kristi asked Emil to submit his content to her roundup: He did…and got a sweet DA49 backlink in return:

4. Next, Emil emailed brands that he mentioned in the post

Next, Emil emailed the other companies that he referenced in his post. For example, Emil mentioned Authority Nutrition here: And sent them a message to let them know that they’ve been featured: As you can see, these brands happily shared his post:

5. Then it was time for Emil to let his expert contributors know that they were live

Remember when Emil asked a bunch of workplace wellness experts to contribute a quote?
(Here’s an example:) Well, when the post went live, Emil let the experts know that their wellness idea was featured: Not only were the experts happy to share the post on social media, but one of them even linked to Emil’s post: This is yet another content promotion strategy that I recently used to promote my SEO tools guide (that guide now has over 3,500 shares from social media). I personally emailed the people behind all of the tools that I mentioned… …and most of them were PUMPED to share my guide. At this point Emil’s outreach had racked up a bunch of social shares and traffic. (It even generated quite a few comments — a first for the SnackNation blog) Comments and shares are nice… …but they’re not going to get you to Google’s first page. To do that, you need lots of high-quality backlinks. Which leads us to our last promotional strategy…

6. Finally, Emil reached out to people that linked to the content Emil found in step #1

Now that Emil had some social proof going, it was time to get down and dirty with link building. Here’s the exact process that Emil used: First, he searched for his target keyword in Google… …and popped the top 50 results into a spreadsheet: Use the free Chrome extension Scraper to quickly export Google results into a spreadsheet. Here’s how: Just search for a keyword and right click on any of the results. Then choose “Scrape Similar…” Then pick “Export to Google Docs…” And just like that you’ll have the results added to a spreadsheet. Next, Emil found out who linked to the top 50 results. He popped each URL into a backlink analysis tool: And went one-by-one through the results. Then he emailed each of those people to let them know about his new, superior resource. Let’s take a look at a real life example of Emil’s outreach in action… Here’s his first email: Her response: His second email (with a link to his content): Boom! A link: And these backlinks pushed Emil’s guide above his competitors. Sure, a #1 ranking in Google is great. But last I checked you can’t pay your mortgage with a first page ranking. In other words: For your SEO to pay off, your content needs to generate leads and sales for your business. And that’s where this bonus step comes into play…

Bonus Step: Generate Leads With The Content Upgrade

Now that Emil’s #1 ranking was secured, it was time to turn this targeted traffic into revenue. Here’s how he did it: At first, Emil pitched a generic ebook to everyone that visited his post. Even though his post was about employee wellness, the ebook was about employee engagement. And that pitch generated around 20 new subscribers per week. 20 subscribers a week isn’t bad. But it could be better. So Emil decided to swap out the generic ebook with a laser-targeted Content Upgrade. Specifically, Emil created a PDF version of his post: (His PDF also contained 10 bonus ideas that weren’t found in the post) And to pitch his Content Upgrade, Emil embedded a CTA at the top of his content: How well did The Content Upgrade work? Instead of 20 subscribers per week… …the Content Upgrade blasted that rate up to 59 per week. (That’s a 195% increase)

Backlinko Update: Did Richard’s Traffic Blast Stand The Test of Time?

Back in 2013 I revealed how Richard Marriot used The Skyscraper Technique to boost his organic traffic by 348% in 7 days. Question is: Did Richard’s results last? Or did his site slip to page 2?
Let’s find out…

Here’s How Richard Marriot Used White Hat SEO to Skyrocket His Search Engine Traffic

As an SEO newbie, Richard wanted to know which white hat SEO tools the experts used (in other words, not automated black hat tools). So Richard searched in Google for things like “SEO tools”, “white hat SEO tools” and “link building tools”: And he noticed that the results didn’t answer the fundamental question: “Which SEO tools should I use?” That’s when he decided to create something that did answer that question. How? He asked SEO experts which link building tools they used. In total, he emailed 115 influential people in the SEO space…and got 47 replies (that’s a 41% conversion rate). Even though he didn’t have any connections and only a few followers on Twitter, he was able to get contributions from ballers like Neil Patel. The end result is Richard’s expert roundup post, 55 SEO Experts Reveal 3 Favorite Link Building Tools: Now: Once Richard’s guide was live, he built links to his guide with email outreach. First, he found pages with broken links. And he sent the author of that page this script: When they replied saying “What’s the broken link?”, he sent them this email: And he was rewarded with a handful of high-quality backlinks, including links from:

- SearchEngineLand.com (DA92)
- A PR2 resource page
- A PR6 digital marketing firm blog

So: Where does Richard rank today for his target keyword (“link building tools”)? #1, baby!

Here’s What to Do Next…

If you enjoyed this case study, I want you to do one thing: Leave a comment to let me know. Whether you have a site about baking, bodybuilding, or bird cages…The Skyscraper Technique works. And I REALLY want to see you succeed. And step #1 is to leave a comment to let me know you’re ready to try The Skyscraper Technique. So leave a comment below right now.
https://medium.com/marketing-and-entrepreneurship/white-hat-seo-case-study-how-to-get-a-1-ranking-dd59bfd277af
['Brian Dean']
2016-08-22 12:14:59.499000+00:00
['Marketing', 'SEO']
Discover the .git Folder
Along the way, we will get an overview of the storage system and understand why the phrase ‘a branch is just a pointer’ makes sense.

Agenda
- .git what???
- Structure of the .git folder
- Conclusion

.git what???

You probably already know that if you want to create a git repository, you have to type git init in your terminal. But did you know as well that a new .git folder is automatically generated at that moment? When you look at your project directory, you will find it there. Git stores everything in it. If we look inside, we can see many files and folders. It may seem confusing at first glance, but don’t worry, we’ll examine everything step by step.

Structure of the .git folder

First of all, your folder might contain some additional files and folders. This is because there are some more files and folders besides the ones shown above. At the top, however, you can see all those that usually appear when creating the git repository. So let’s start with the objects directory.

objects

When we open it, we only see an info and a pack folder. But later many more directories will be added. Git stores all staged files in it. But that is not the only thing. Later we will look at two more types that are stored in the objects folder. Overall, we can say that this is what we usually call the database storage. Go back to our project folder and enter the following lines in the console:

echo 'test' >> test.txt
git add test.txt

Firstly we create a new file test.txt with the content test. Next, we add it to the staging area. Not sure what the staging area is? Take a break and read my previous article ‘Git the three worlds system’. There I briefly explain the three different areas in Git. Now we can go back into the .git/objects folder. You will see a new directory. Git stores everything under a 160-bit hash value, which appears as a 40-digit hexadecimal number. You’re probably wondering, “Wait a minute. I can only see two hexadecimal digits. Where is the rest?”. Git uses the first two characters of each hash to create a folder, which contains a file named with the remaining 38 characters. So you only see the first two characters of each hash. Git does this to make the storage system faster. But the whole 40 digits are the key, with the contents of the file being the value. To examine the hash, we can type:

git cat-file -p 9daeafb9864cf43055ae93beb0afd6c7d144bfa4

The -p flag prints the contents of the object. In our case, you only see “test”. A commit hash depends on the changes and metadata, so your commit hashes will be different (a blob hash for identical content is always the same). If we want to link it to the local git history, we need to make a commit. Let’s do that.

git commit -m "feat: Create a test.txt file with the content test"

With that, we have created our first commit. Change back to the .git/objects folder and type ls -la. You should have the same number of directories as below. Two new folders were created. But why two new directories? We only made one commit. Let’s analyze them and see what type each object is. To get the type of an object, we replace -p with -t in the git cat-file command.

our tracked changes
---------
git cat-file -t 9daeafb9864cf43055ae93beb0afd6c7d144bfa4 // blob

new commit
-----------
git cat-file -t 2b297e643c551e76cfa1f93810c50811382f9117 // tree
git cat-file -t b9ca915ed5e9507d44dbfaebc8a64b0f2ba52649 // commit

Now we can see that the 2b… hash stores something called a tree and the b9… hash stores something called a commit. To get a more detailed understanding, we would have to dive deeper into the memory system of git. But I think that would be too much.
We will discuss this in more detail in my next article. For the moment, to get a better orientation, we can say that the blob stores only the content, the tree stores additional information, such as which file the content belongs to and what kind of file it is, and finally the commit anchors the changes in the history.

refs

Below you can see the structure of the refs folder. Inside it, we find two subdirectories. The heads directory contains all branches and the tags directory contains all bookmarks from the history.

heads

We will start with the heads directory. When we look inside, we’ll find a file called master. It contains a reference to the last commit -> b9ca915ed5e9507d44dbfaebc8a64b0f2ba52649. But what does it mean? Branches are an important part of git. Each branch is completely independent of all the others. If you want to learn more about branches, read my git fundamentals article. As soon as we create a new branch called second_branch and look into the heads directory, we will find a second file with the same name as our new branch. The file contains exactly the same reference as the master file. Now we create a new commit in our second_branch and display the contents of both files again.

master
---------
b9ca915ed5e9507d44dbfaebc8a64b0f2ba52649

second_branch
-----------
c225e0c7175b7467eb6cc5f283413b2eee027ff3

The branch master still has the reference -> b9ca915ed5e9507d44dbfaebc8a64b0f2ba52649, whereas the second_branch now has a reference to our new commit. To summarize, we can say that when you create a branch, a new file with the same name is automatically created in the heads directory, containing a reference to the current commit. Each time you create a new commit, the reference contained in the file is changed. So, branches are just pointers.

tags

There are two different types of bookmarks. Each of them is stored in the tags directory. The first is known as a lightweight tag and is only a reference to a commit, like the files in the heads directory. The second one is called an annotated tag and stores much more information. Below is a short list of it:

- tagger name
- email
- date
- tagging message

In most situations, it’s recommended to use annotated tags, but it depends on the situation.

HEAD

The HEAD is quickly explained. It’s a reference to the active branch, so git knows which branch is currently in use. To see what our HEAD file contains, type cat HEAD.

info

In the info directory, we find additional information about our repository. One of the best-known files is the exclude file. It decides which patterns will be ignored. To define the ignored files and folders, we usually use a file in our project called .gitignore.

config

The configuration file, as its name suggests, stores the configuration of your repository. You can define the configuration globally for all repositories or locally. In this case, it’s only used for your local repository. Below are several configurations that you can set in your configuration file. To see a complete list, visit the git configuration page.

- name
- email
- editor
- excludefiles
- autocorrect
- …

description

Here you can create a short description of your repository. But it’s pretty irrelevant if you’re not using gitweb.

hooks

There are some predefined git functions that are executed on certain events. In the hooks directory, we can see a few of them. An example of an event could be the completion of the entire commit process. Above is a sample list of hooks from my tutorial repository. Each of them has the word ‘sample’ appended to its name.
This ensures that all hooks are disabled at the beginning. If you want to enable one, you just have to remove the ‘.sample’ ending from its file name. You can also edit or write your own git hooks. For more information, click here.

Conclusion

That’s it. Today we discovered the .git folder together. In the end, you should know what files and folders you will typically find in the .git directory and what they do. Additionally, we got an overview of the git storage and understood why the phrase ‘a branch is just a pointer’ makes sense. I hope the article was helpful to get a deeper understanding of Git and the .git folder. If you have any questions or feedback, please let me know in the comments. In my next article, we will look at how the memory functionality of git works. Well then, see you soon.
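As a small illustration of the ‘a branch is just a pointer’ idea, here is a short Python sketch (my own addition, not from the article) that resolves HEAD to a commit hash purely by reading the plain files described above. It assumes the branch ref is stored as a loose file under .git/refs/heads; refs that git has moved into .git/packed-refs are ignored here.

from pathlib import Path

def resolve_head(repo="."):
    # HEAD is either "ref: refs/heads/<branch>" or, when detached, a raw hash.
    git_dir = Path(repo) / ".git"
    head = (git_dir / "HEAD").read_text().strip()
    if head.startswith("ref: "):
        ref = head[len("ref: "):]                   # e.g. "refs/heads/master"
        return (git_dir / ref).read_text().strip()  # the branch file holds a commit hash
    return head                                     # detached HEAD: the hash itself

print(resolve_head())  # should match the output of: git rev-parse HEAD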
https://medium.com/analytics-vidhya/git-part-3-discover-the-git-folder-ca3e828eab3d
['Henry Steinhauer']
2020-12-11 11:57:02.298000+00:00
['Git', 'Programming', 'Database', 'Development', 'Version Control System']
Unleashing the Power of AI for Better Science: The Case for an In-House Data Science Group
Artificial Intelligence, or AI, is everywhere. Given the many recent advancements in AI for healthcare, it’s important for academic medical centers to be at the forefront of these trends. At NYU Langone Health, we acknowledge the need to invest in our own understanding of AI and how it will transform the way we practice medicine. That’s why we’re committed to cultivating data science expertise within our institution, creating a Predictive Analytics Unit (PAU) that’s dedicated to realizing the promise of AI in healthcare.

1. Communication: Even though AI is everywhere, it can feel like a black box. An in-house data science resource, such as our PAU, offers easily accessible advice and guidance, demystifying the technology. At NYU Langone Health, the PAU mediates conversations between users, leadership, and the companies we partner with who develop AI-enabled tools to ensure that everyone understands the benefits and goals of using these tools. We consult on projects and offer perspective on how we can use AI and predictive modeling to better achieve our goals. We bring not only our understanding of AI to the table, but also knowledge of institutional needs and priorities, helping to make decisions about the best implementation of AI-enabled tools in the right context.

2. Education: Since we’re just starting to understand how AI will affect healthcare, part of my role is to go into our community of providers and both demonstrate the value that AI-enabled solutions can offer us and examine how they could be leveraged in the future. I educate our front-line providers on how to leverage the information produced by predictive models to augment and enhance their decision-making. Working in-house allows me to develop the personal relationships and institutional awareness necessary to ensure that what we teach sticks, is reinforced, and is continuously improved. The in-house team provides a consistent experience, a singular source of advice that isn’t contingent on a relationship with a vendor.

3. Replication: The AI scientific community as a whole has embraced sharing and the open-source movement. Information is easily available. Our PAU routinely evaluates models that are available to the public, assessing them for use at NYU Langone Health in a way that best suits our needs and goals. We can work with a flexibility and speed that isn’t replicable when working with an external team.

4. Validation: When in-house development and replication of models isn’t the appropriate approach for a specific need, we turn to external offerings. There is no shortage of start-ups claiming their AI capabilities are the solution to a health system’s problems. An in-house data science team serves as a vital resource for cutting through the sales pitch and really understanding the capabilities offered by external companies. By having a knowledgeable team that can test the validity of a vendor’s claim, we ensure that only those tools that will actually benefit the institution are adopted. Similar to an architectural review committee, we’ve enacted a formal process to closely examine externally developed models as part of our regular project management program.

5. Representation: Data scientists are a hospital’s best resource when advocating on the hospital’s behalf about AI and predictive modeling.
Similar to a lawyer serving as in-house counsel, in-house data science teams have the best interest of the institution at heart and are uniquely suited to advocate on the institution’s behalf, ensuring that goals are met and services are rendered as committed. So should a health system develop in-house data science expertise? The answer is an unequivocal yes. By offering communication, education, replication, validation, and representation capabilities, an in-house data science resource offers immense value to a health system, especially given that we’ve just scratched the surface of what AI can do for healthcare.
https://medium.com/nyu-langones-health-tech-hub/unleashing-the-power-of-ai-for-better-science-the-case-for-an-in-house-data-science-group-ae40ae249359
['Nyu Langone Health Tech Hub']
2019-04-22 14:15:07.239000+00:00
['Healthcare', 'Data Science', 'Innovation', 'Artificial Intelligence']
Brown Betty
The refreshing detail of a perfectly comfortable pour

Original Alcock, Lindley and Bloore non-drip teapots displayed at Vitsœ

“The chances are, if I asked you to draw a teapot from memory, you’d think of a shape not too dissimilar from the Brown Betty. That’s because it’s one of the most manufactured teapots in British history.” So says ceramicist Ian McIntyre who, as part of his Collaborative Doctoral Award with Manchester School of Art, York Art Gallery and the British Ceramics Biennial, set about examining the origins of this noble pot. The Brown Betty is a product of evolution, with form and function refined over decades, rather than the authorship of any single designer. It emerged as a cheap, utilitarian pot for the working classes, absorbed into the fabric of everyday life. This evolution resulted in a teapot modest in appearance yet perfect for the task in hand: brewing and pouring tea. By quietly performing its job so well it has endeared itself to generations. Despite its popularity, however, surprisingly little is known about the teapot’s original makers.

Etruria Marl clay seam. Photograph by Bjarte Bjørkum

The very character of the pot comes from the quality of the clay, which has been mined in Staffordshire for red-ware teapots for over 300 years. “I think it’s safe to say that a Brown Betty that isn’t made of Staffordshire red clay isn’t an original Brown Betty at all,” states Ian. This clay — Etruria Marl — was first refined around 1695 by two Dutch brothers, John Philip Elers and David Elers, in Bradwell Woods, North Staffordshire. Prior to this the potteries which existed were small family-run outfits, producing crude wares like butter pots for farmers to transport their produce to market. The brothers used this clay to make teapots to emulate and compete with the expensive red stoneware Yixing teapots, which were being imported from China by the East India Company. It is widely agreed that the refinement of this clay, which could reliably withstand the temperature of boiling water without cracking, gave rise to new technological experimentation in Staffordshire, and became a key catalyst for the industrialisation of the six towns that make up Stoke-on-Trent.

Bowl, clay and pot, made in Staffordshire red clay, by Ian McIntyre, for his exhibition at Vitsœ 2016

“The Brown Betty is a purely rational design, stripped of anything superfluous to its function and production methods,” explains Ian, who over the course of his studies sourced multiple Brown Bettys of various shapes, dates and manufacturers to evaluate the principles behind the design transformation. He discovered that over the years the Brown Betty form migrated into a globe, which was seen as the best shape to infuse the loose-leaf tea when water was added. The shape and the wall-thickness combine to keep the tea warm. The most innovative maker of the Brown Betty was Alcock, Lindley and Bloore, who operated through the 20th century. The body of an Alcock, Lindley and Bloore teapot was made in three parts. The globe was pressed before the handle and spout were applied. This enabled a potter to crudely punch a grid of holes into the globe before attaching the spout. The grid held the tea leaves in the globe when pouring. These are details which the Brown Betty sadly lost over the years, as pots came to be cast in one-piece moulds to reduce manufacturing costs.
Ian, however, saw these details as fundamental to the authenticity of the Brown Betty and set about making pots of his own to further understand the delicacy of the design detail.

Ian McIntyre in his studio removing the mould from a prototype Brown Betty

He discovered that the handle presented a functional and ergonomic shape, with the generous loop positioning the gripping hand for easy leverage of the pot. This also minimised the strain on the wrist when pouring, and the return at the top of the loop prevented knuckles burning on the globe. At first sight the spout of the historic pots he analysed appeared poorly finished, but they had been rough-cut deliberately by a craftsman. The sharp edges at the opening — and just underneath the lip — cut the flow of water, preventing tea from dribbling back down the outside of the pot. To be certain that tea would not dribble, a patented non-drip spout had been introduced as an optional feature. Functioning like a tap, the spout ensured a straight pour and almost magically eliminated drips.

Photograph by Angela Moore

A classic Brown Betty would have been glazed in either the rich brown Rockingham glaze, or a transparent glaze that reveals the natural colour of the clay. Both have the advantage of masking any tea stains on the teapot. If the glaze were chipped, the red colour of the clay would be revealed — preferable to a contrasting clay — allowing a characteristic patina to lengthen the life of the pot. To prevent the lid falling out of the pot while pouring, an ingenious solution was reached: the lid in the tilted pot slid forward into a groove in its collar, locking it in position. When the pot was restored to horizontal, the lid released. A more discreet feature of this patented design enabled pots to be stacked for storage by placing the lid upside down in the pot. To support this feature, the spout and the handle stay below the collar of the pot, which also means the pot can drain upside-down after washing.

Ian McIntyre’s Brown Betty exhibition at Vitsœ, London 2016

In 2016 Ian’s research into the history of the Brown Betty and his practical investigations were presented in an exhibition at Vitsœ’s London shop. He showed moulds made from an original Alcock, Lindley and Bloore teapot and pots cast using Staffordshire red clay. The culmination of this understanding of the form and function of the Brown Betty led to the development of his first prototype Brown Betty. Following this exhibition, Ian teamed up with Cauldon Ceramics of Staffordshire, a small craft manufacturer of traditional redware and the oldest remaining maker of the Brown Betty teapot in the UK. Together they set about remanufacturing this lustrous beauty, taking great care to respect the traditions and the years of refinement that have gone before, including the patented locking lid and non-drip spout. Using the authentic clay, the collaboration implemented new production processes and design details to reinstate an authentic representation of a classic Brown Betty. Ian’s attention to detail has ensured that the traditions of the pot have been maintained. This latest Brown Betty edition is intended to promote the legacy and value of an everyday object that has transcended fashion and remains a beautiful and reliable utility object. Or, as Ian says: “On a personal note I feel that the Brown Betty is a counterpoint to the seemingly unending barrage of new products being launched and discontinued daily in the design industry.
I feel that this story reflects a dedication to a material or a design, and the refinement of a process that has given rise to a classic, not because of nostalgia, but because it’s the best at what it does.” Ian McIntyre’s re-engineered edition of the classic Brown Betty. Photograph by Angela Moore Ian’s latest edition was nominated for the Beazley Design of the Year 2018 at the London Design Museum and is in the permanent collections of London Design Museum, Victoria & Albert Museum, Manchester Art Gallery and York Art Gallery. V&A produced a short film on the Brown Betty, which can be viewed here. Brown Betty available from Labour and Wait
https://vitsoe.medium.com/brown-betty-a97f41e9f572
[]
2020-05-11 15:11:55.974000+00:00
['Pottery', 'Design', 'Product Design', 'Tea', 'Vitsoe']
She Brings a Silent Spirit
N.b. — I know it’s blank verse. Mostly. I wanted to step away from rigid meter for this one. It mostly failed. Get my Sestina-O-Matic and jump into writing poetry! Zach J. Payne is, to borrow the words of Lin-Manuel Miranda, “a polymath, a pain in the ass, a massive Payne”. He is a thespian, poet, and writer for young adults. He is the #2 Ninja Writer. A native of Whittier, CA, he currently lives in Warren, PA.
https://medium.com/sonnetry/she-brings-a-silent-spirit-513fadd73034
['Zach J. Payne']
2020-08-10 00:02:06.252000+00:00
['Relationships', 'Mental Health', 'Love', 'Poem', 'Ninjabyob']
Christians, Social Media, and the Need for an Opportunity Cost Mindset
Every yes to our screens is a no to something greater.

Photo by ROBIN WORRALL on Unsplash

Opportunity cost is one of the foundational principles of economics, and for finite beings living in a world full of scarcity, a reality that is inescapable. Few of us regularly consider the litany of no’s that accompany our conscious and unconscious yeses. A yes to that vacation you’ve been planning might mean saying no to a new car. A yes to vacuuming could mean no to taking the dog for a walk. Just as opportunity cost is at work in our everyday lives, it is at work in our digital lives as well. The attention economy is full of endless opportunities for us to watch, read, play, tweet, etc. But we are human, and while the opportunities might be functionally limitless, our attention is not. In his book, Competing Spectacles, Tony Reinke puts it bluntly: “human attention is a zero-sum game. At some point we must close our screens and fall asleep…” We can’t have it all and we can’t watch it all. Every tweet we read is a book we don’t. But do we ever consider that? Do we ever factor in the opportunity cost of our media consumption choices? What could I do instead of binging on Netflix for four hours?

Earlier this year Senator Josh Hawley of Missouri invoked opportunity cost language in regard to social media — though in a slightly different way. Hawley, in a speech at the Hoover Institution on May 2nd, while objecting to the current state of Big Tech, stated, “…think for a second about the opportunity cost that this social media business model and these social media platforms — what you might call the social media economy — represents. For years now, this is what some of our brightest minds have been doing with their time: Designing these platforms, designing apps that integrate with them…What else might these bright minds, these talented women and men, have been doing that might have been truly productive for the American economy and for American consumers?” Being a senator, it makes sense and is important for Hawley to address the opportunity cost of the social media economy at the macro level. What have we lost by devoting a generation of our brightest minds to creating algorithms and platforms whose main purpose is to hold the attention of as many eyes as possible for as long as possible in order to sell that attention in the form of advertising to the highest bidder?

But at a personal level we also need to begin to operate with an opportunity cost mindset when it comes to our own social media use and media consumption. Every yes to checking our email or seeing how many followers we have is a no to playing with our kids, doing the dishes, reading a book, or even praying. It might sound surprising, but the concept of opportunity cost is addressed in the New Testament. Of course the term isn’t found anywhere, but the concept is vividly described in Matthew 16:26. Right after rebuking Peter and right before the transfiguration, Jesus explains the cost of discipleship. It is there that he says, “For what will it profit a man if he gains the whole world and forfeits his soul?” The forfeiture of soul is a steep opportunity cost for gaining the world—and one we need to keep in mind. On a daily basis, both in our real lives and our digital lives, we need to adopt an opportunity cost mindset. Every decision we make requires of us either time, attention, money, or all of the above.
Because of our finite nature and the scarcity of goods in the world we can’t have it all; we can’t have the world and our souls. By keeping that in mind as we plop on the couch for another round of Netflix binging or mindless Facebook scrolling, we can remember this reality and choose soul instead of world. An opportunity cost mindset can be a tool we use to keep from succumbing to the attention merchants who want to monetize our eyes. To be sure, this is not a call to pharisaical judgementalism, which perhaps is the inherent danger of an opportunity cost mindset. In reminding ourselves that reading the Bible really would be better than checking Instagram, we need to beware of using this type of thinking with others, lest we point out the Twitter addiction in our friend’s eye without noticing the Facebook addiction in our own (Matthew 7:3–5). But, rather than letting the potential for improper judgement of others keep us from adopting an opportunity cost mindset, we could see it as a call to apply it to ourselves all the more diligently, both for our own good, and for the good of our neighbor who will see our newfound freedom from worthless things (Psalm 119:37) and eagerly ask how they too can be set free.

There is another compelling example of opportunity cost thinking I have come across lately. In his book The Benedict Option, Rod Dreher spends time talking with monks who live according to the Benedictine Rule. One of those monks, Father Cassian, said of rejecting the world in order to live in the monastery, “there’s not just a no, there’s a yes too. It’s both that we reject what is not life-giving, and that we build something new.” We might not be called to move out of the world into monasteries, but every one of us would do well to reject that which is not life-giving. To remember the high opportunity cost of our tweets, posts, and more and instead choose to “build something new.” To choose our souls instead of the world. SDG

John Thomas is a freelance writer. His writing has appeared at Christianity Today, The American Conservative, and The Federalist. He writes regularly at Soli Deo Gloria.
https://medium.com/soli-deo-gloria/christians-social-media-and-the-need-for-an-opportunity-cost-mindset-2e4c25d83c85
['John Thomas']
2019-10-24 18:21:57.982000+00:00
['Spirituality', 'Books', 'Christianity', 'Religion', 'Social Media']
Edie’s Three
“Top 3 is a publication where Medium writers support other Medium writers by promoting each other’s work. Medium members are encouraged to post three stories from other writers that they enjoyed reading.” — Daryl Bruce

Are You Addicted to Writing? By: Zita Fontaine

Hello, my name is Edie, and I’m a writing addict. It’s true — I am absolutely addicted to writing, and so much of what Zita writes in this piece hits me right in the gut. I think most of you who are writers will be able to say the same as you read through. The symptoms described me to a T — from letting other hobbies (and friends) fall by the wayside, to forgetting to eat (among other things, shh!), to losing interest in the every-day things that don’t have anything to do with writing. But I’ve been making an effort, somewhat, to remedy that. Doing a sit-up, showering, making toast…you know, all that annoying stuff that gets in the way of writing! But, I kind of like this addiction, and I’m not sure I’m ready, or willing, to give it up…

5 Unusual Ways To Make Your Writing Standout By: Jun Wu

From writing on this platform for a few months now, I’ve read many articles on ways to make my writing better, more appealing, stronger… and yes, although the tips in this particular article are things that I technically already “know”, they’re also things that I tend to overlook too often. I think we can all use a reminder once in a while — and especially now, a month into this new system, we can see that it’s going to take a little more effort to reel readers in and keep them engaged. I’m all for anything that will help me do that!

Take The First Step By: Ryan Fan

I am a self-proclaimed queen of procrastination. I will wait and wait until the last possible moment (sometimes too late) to do things. The problem is that these ‘things’ are usually the important kind as well…this is not an attribute I’m proud of. It’s not a quality, and ‘fault’ has such negative connotations — so let’s just go with ‘attribute.’ Kay? Kay. I’ve tried day-planners — I keep buying them, and they keep collecting dust. I’ve tried to-do lists — they work for a few days and then end up as crumpled bits of paper at the bottom of my bag. I love this idea, and I think it could definitely work for you…meaning all of you who are not me. The reason I say that is because I’ve also tried this method — I’ve got draft folders filled with ideas and headlines; many of them already also have a photo. You would think that would help me. Alas, I think I’m hopeless in this department. But you don’t have to be!
https://medium.com/top-3/edies-three-1dfa761b3509
['Edie Tuck']
2019-12-04 16:16:51+00:00
['Medium', 'Writing', 'Top 3', 'Procrastination', 'Writers On Writing']
A Little Worry, a Little Doubt, a Little Fear, and a Lotta Joy!
I’m more spiritual than religious, and I do love this song! I wake early this melodious morn, To Christmas calling me with song — Joy to the world, the Lord has come Let earth receive her king… Let every heart prepare Him room, And heaven and nature sing, And heaven and nature sing, And heaven and heaven and nature sing! Joy?! Are you kidding? You’ve no time for singing such silly saccharine songs. You have calls to make, articles to write… You have way too much to do! Besides, you don’t have a clue. Nonetheless, I let the song respond — Joy to the world the Savior reigns Let men their songs employ While fields and floods, rocks hills and plains Repeat the sounding joy Repeat the sounding joy Repeat repeat the sounding joy So there! Take that! And that! Joy, shmoy! Have you looked outside lately? You could die stepping outside your door, or inside if the electrician is contagious. And it ain’t going away any time soon. Be scared. Be very scared. For your family if not for you! Nonetheless, the song insisted, persisted, lyrical lusciousness lighting my heart — He rules the world with truth and grace And makes the nations prove The glories of His righteousness, And wonders of His love, And wonders of His love, And wonders of His love, And wonders wonders of His love. So there! Just this once I let love have the last word, continuing to sing this Joyous Hymn to the steamy suds adorning my sleepy self in my nice hot shower. Thanks for listening and witnessing. It means a lot to me!
https://medium.com/know-thyself-heal-thyself/a-little-worry-a-little-doubt-a-little-fear-and-a-lotta-joy-43e99009d4a0
['Marilyn Flower']
2020-12-12 09:05:00.079000+00:00
['Covid 19', 'Life Lessons', 'Mental Health', 'Joy', 'Christmas']
Introducing: A Quantum Procedure for Map Generation
This is the blog version of a talk given at the IEEE Conference on Games 2020. Each slide is followed by what I say during that slide in the talk. You can find the paper here. Hi, I’m James Wootton. I work at IBM Research. We build quantum computers and I used a couple of them to do some map generation for games. That’s what I’m going to tell you about today. So what are quantum computers? Well first let’s think about our normal digital computers for a moment. These express information in bits, and any algorithm can be compiled down into very simple operations on one or two bits. In saying this, I’m really thinking of the Boolean circuit model as a model for what a digital computer is. As an example, the simplest of these Boolean operations is the NOT gate, which just flips a single bit between 0 and 1. Quantum computers are the same, except that the bits are stored in a way that can be described with quantum mechanics. Quantum mechanics is a more general theory than the usual Boolean logic that we use to describe bits. So this gives us more freedom, and allows us to manipulate the information in new and different ways. We get a set of basic quantum operations that we can use to build algorithms in very different ways. Some problems that are intractable for digital computers, because they need unreasonable time or resources to run, will become solvable with quantum computation. This all requires much larger and better quantum hardware than we currently have. However, we do have prototype devices that can be used even now. Anyone can go to quantum-computing.ibm.com, create a quantum program, and send it off to our labs to run. It’ll go through control hardware which turns it into a series of microwave pulses. These then go into a fridge which cools a quantum chip down to near absolute zero, runs the program, and then sends back an output. This gets decoded by the control equipment and turned into nice, familiar 0s and 1s for you to look at. So what does this hardware look like? It is made up of ‘qubits’. These are quantum bits: quantum systems used to store bit values. We can manipulate single qubits and also pairs of qubits in various ways. Some look like Boolean operations (like NOT and XOR), and others don’t. However, for the two-qubit operations, we are limited in our choice of which pairs of qubits can be used. This is governed by the so-called ‘coupling map’ of the device. For example, in the device called Rochester that you can see here, the qubit labelled 0 can only directly interact with qubits 1 and 5. Also, every manipulation we can perform, as well as the act of reading out the result, is subject to imperfections. Everything that can go wrong, will go wrong with some probability. This means that, if we were to attempt the arbitrarily long and complex processes found in most of the algorithms we know, we can expect to get nonsensical results. To get sensible results from near-term hardware, we can try to base our algorithms explicitly on the things that quantum computers do naturally. One example is the generation of quantum interference effects. Doing this with a quantum computer is almost as easy as doing it with water: like throwing a rock into a pond! Another example is the simulation of quantum dynamics. Qubits are quantum particles that evolve and interact according to the operations applied to them. If we want to study a process that acts like this, then it is trivially easy to get a quantum computer to help.
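As a minimal taste of what ‘creating a quantum program’ looks like in practice, here is a sketch using Qiskit (the framework used later in this talk), written against its 2020-era API; it applies the NOT gate from earlier, and runs on a local simulator rather than being sent off to real hardware:

```python
from qiskit import QuantumCircuit, Aer, execute

# One qubit, plus one classical bit to hold the readout.
qc = QuantumCircuit(1, 1)
qc.x(0)            # the NOT gate: flips the qubit from 0 to 1
qc.measure(0, 0)   # read the qubit out into the classical bit

# Run the program 1024 times on a local simulator.
backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)      # a perfect device would give {'1': 1024}
```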
The application that we will look at now is exactly of this form. We will find something that we want to simulate for the purposes of procedural generation, argue that it is naturally captured by the dynamics of qubits, and then implement the simulation on a quantum computer. To explain how, let’s have a closer look at qubits (although, admittedly, not close enough). The contradiction at the heart of qubits (which we won’t have time to resolve) is that: • They cannot store any more information than a single bit. • Their internal state is more complex than that of a single bit. So they are more than just a bit, while being no more than just a bit. Because of quantum! The internal state of a qubit can be described in terms of three real numbers, ⟨X⟩, ⟨Y⟩ and ⟨Z⟩, each between +1 and -1. These are subject to the constraint that you can see here on the slide: ⟨X⟩² + ⟨Y⟩² + ⟨Z⟩² ≤ 1. This description is exactly the same as for a much more familiar system: a ball. Any point on or in a sphere of unit radius can be described in terms of the Cartesian coordinates x, y and z. None of these may exceed the radius in either direction, and the distance from the center shouldn’t exceed the radius either. This is all exactly captured by these same conditions. From this coincidence we get a nice spherical visualization of the qubit, which is known as the ‘Bloch sphere’. Let’s just focus on the value of ⟨Z⟩. If we ask the qubit for an output, this value will tell us how likely we are to get a 0 or 1. For ⟨Z⟩=1, when we are at the north pole, we are certain to get 0. For ⟨Z⟩=-1 (the south pole) we are certain to get 1. For ⟨Z⟩=0 (any point along the equator) the outcome will be random. Anything else will also be random, but with a bias towards the closest pole. The reverse of this is how we can calculate what the value of ⟨Z⟩ is for a qubit: run the process many times, determine what the probability is for the 0 and 1 outcomes, and use this to estimate ⟨Z⟩ (a concrete sketch of this appears just after this passage). Note that the algorithm for map generation we present here depends on the values of the variables estimated in this way, not on the random outcomes themselves. It is therefore a (mostly) deterministic process. For n qubits we need 4ⁿ variables (which is exactly why these things are hard to emulate on standard digital hardware). For example, ⟨ZZ⟩ describes how likely the outputs of two qubits are to agree. ⟨ZZ⟩=1 means definitely agree (both 0 or both 1). ⟨ZZ⟩=-1 means definitely disagree (so 01 or 10). The rest of the variables describe other important details regarding the way in which the qubits are correlated. These correlations also satisfy constraints. Some are similar to the single-qubit ones, allowing us to draw more spheres. There are also things like the monogamy of entanglement: put simply, the more correlated two qubits are with each other, the less they can be correlated with other qubits. We are going to use qubits to generate geopolitical maps. This will be done by simulating the sort of process you see in a game like ‘Civilization’: a bunch of settlers around the world all suddenly decide to found nations. These then grow and interact. A quantum process is used to decide how the nations will act. Each nation will correspond to one qubit. The ⟨X⟩, ⟨Y⟩ and ⟨Z⟩ variables correspond to three different policies that a nation can pursue: attacking, defending, and expanding into unclaimed territory. We would expect that no realistic nation can maximally do all three things at once. Limited resources will mean that priorities need to be decided upon.
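Before moving on, here is a concrete illustration of that estimation procedure (my own sketch, not code from the paper, again using Qiskit’s 2020-era API). It prepares a qubit partway between the poles and estimates ⟨Z⟩ from the outcome statistics:

```python
from math import cos
from qiskit import QuantumCircuit, Aer, execute

# Rotate a qubit 1 radian away from the north pole, then measure.
qc = QuantumCircuit(1, 1)
qc.ry(1.0, 0)
qc.measure(0, 0)

shots = 8192
backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=shots).result().get_counts()

# <Z> is estimated as P(0) - P(1).
z = (counts.get('0', 0) - counts.get('1', 0)) / shots
print(z, cos(1.0))   # the estimate should be close to cos(1) ≈ 0.54
```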
In this way, the natural constraint on the state of a qubit automatically restricts the simulation to realistic behavior for nations. Also, nations do not make their decisions independently. The variables describing correlations will cause their decisions to be correlated also. The constraints on correlations will also ensure that the behavior remains realistic. For example, the monogamy of entanglement will ensure that two nations at war with each other will focus their decisions on each other. Admittedly, the constraints of quantum states are not a perfect reproduction of ‘The Art of War’. Someone not motivated by a love of quantum computing probably wouldn’t have come up with this method. But it seems to work quite nicely. The quantum decision-making process is embedded into a ‘game’. Nations get turns in which they can place a city on the map. Cities exert influence over surrounding territory. This then determines which nation controls territory, and can even result in cities changing loyalty. These results are then fed back into the simulation. If a nation loses territory, for example, its qubit state will be manipulated to make it more defensive. If a city changes hands, the correlations between the two nations involved will be altered to reflect a state of war (a toy sketch of such a feedback rule appears below). All of this is implemented in Python. The ‘qiskit’ framework is used to create and run the quantum jobs. More specifically, I built a package called ‘QuantumGraph’ on top of Qiskit, optimized exactly for these kinds of applications. It is suitable for all your procedural generation needs! (or at least some of your quantum ones) Put it all together, and we have a process that generates geopolitical maps growing over time. If you hand over control of a nation to a player, you also have a game. The quantum process then serves as the AI for opponents (and also as an advisor to the player). So how intelligent is this AI? Admittedly, it is not the best you will have ever seen. But it should at least be better than random decision making. To test this, we ran a process in which all nations are initialized in the state with ⟨X⟩=⟨Y⟩=⟨Z⟩. Since they have no bias to any particular policy, they make random decisions at first. Half of the nations (which we call the ‘standard’ nations) then undergo the standard process in which gains and losses of territory are fed back into the qubit states. This is not done for the other half (the ‘opponent’ nations). So these remain random, and insensitive to what is going on. We’d expect to see the standard nations doing better, because they base their decisions on what is actually happening. They should therefore gain more territory than their opponents. That’s exactly what we see when we emulate how the process would run on perfect quantum devices, using standard digital computers. This is only possible when we have small numbers of nations. Note that I tried to use the word ‘simulate’ in the paper to refer to quantum computers being used for simulation tasks, and ‘emulate’ for when a digital computer is used to reproduce the results we would expect from quantum devices. But when doing this, I forgot about the titles of the figures, as you see here. They should actually refer to emulation. Now for real hardware: 28 nations on the 28 qubit device called ‘Cambridge’, and 53 nations on the 53 qubit ‘Rochester’. In both cases, evidence of the ‘intelligence’ remains. It is not quite as clear as before, perhaps due to the effects of noise.
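To give a flavour of what such a feedback rule could look like in code, here is a toy sketch. Everything in it is an illustrative assumption of mine: the axis assignments, angles, and function are not the implementation used in the paper or in QuantumGraph.

```python
from qiskit import QuantumCircuit

def apply_feedback(qc: QuantumCircuit, nation: int, territory_change: int):
    # Hypothetical rule: losing territory rotates the nation's qubit
    # about the X axis, trading <Z> for <Y> (read here as 'defence');
    # gaining territory rotates about Y instead. The 0.1 radian step
    # per tile is an arbitrary choice for illustration.
    if territory_change < 0:
        qc.rx(-0.1 * abs(territory_change), nation)
    elif territory_change > 0:
        qc.ry(0.1 * territory_change, nation)

qc = QuantumCircuit(4)       # four nations, one qubit each
apply_feedback(qc, 0, -2)    # nation 0 just lost two tiles: more defensive
apply_feedback(qc, 3, +1)    # nation 3 gained one tile
```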
What we have seen here is an initial and rudimentary example of current quantum devices doing a task that is relevant for procedural generation, and for games. If we look to the history of using computers for games, what can we compare it to? The goal would be to develop a quantum milestone equivalent to 1962's Spacewar!: the first example of a novel and unique game that was made possible through the new technology. Is this map generation process an example of novel and unique procedural generation, or an AI for a novel and unique game? Probably not. Perhaps a more realistic comparison is the Checkers AI developed at IBM in the 1950s: more about testing the capabilities of the new technology than giving users something that they need. So, in conclusion, quantum computers exist. They are currently limited, but we can use their intrinsic ability to perform quantum simulations for procedural generation. What about their ability to generate interference effects? Can that be used for procedural generation too? You can find out about that in my talk in the PCG workshop at FDG next month!
https://medium.com/qiskit/introducing-a-quantum-procedure-for-map-generation-eb6663a3f13d
['Dr James Wootton']
2020-09-04 09:29:50.235000+00:00
['Qiskit', 'Game Development', 'Artificial Intelligence', 'Procedural Generation', 'Quantum Computing']
Three Simple Suggestions for UX Authors and Presenters
I + You = We I’ve been using this dopey equation for a year or two, but it really works. As the journey you’re crafting is a conversational one (see my suggestion above), you’ll want to create a sense of We with your audience as soon as you can. Because We are going on this journey together, and the sooner we’re all part of a common We, the sooner that journey can begin. Get to We by first establishing the I in the story: who are you, the narrator, and why should anyone care about your take on things? Is it something about your experience, your creative genius, your unique perspective, or your brilliant success? Or maybe how you’ve survived some godawful and painfully embarrassing failures? Next, who is the You? Explain who you think your audience is, and the challenges you imagine they face. By showing them that you know them, you can show your empathy for them: make clear that you’ve been where they’ve been, have felt the same pain, and — through your shared journey — can help them reach the promised land. Once you’ve connected I and You, you’ve achieved the We that can get started on that journey together. And you’ve created a term that will serve as useful shorthand through your book or talk: anytime you use the word We, readers or audience members will immediately pay attention, because they’ll know you’re talking about them.
https://medium.com/rosenfeld-media/three-simple-suggestions-for-ux-authors-and-presenters-c0fe884596eb
['Louis Rosenfeld']
2020-12-14 21:04:23.762000+00:00
['User Experience', 'Presenting', 'UX', 'Writing']
ContentBox Joins Huobi Global Elites Meetup in Korea
October 18th, 2018 ContentBox executive Jason Lee was invited to participate in the first Korean private meeting with Huobi Global Elites. Huobi Global is a leading cryptocurrency exchange, and Huobi Global Elites is an important partner of Huobi, aiming to share knowledge and resources and promote common development. Only 40 invited guests participated in the event. The meeting focused on introducing new initiatives from Huobi Global, including the Huobi Global Elites initiative, followed by related questions and answers and then on-site meetings with the participating guests. We will be posting more updates on our exciting progress soon. Until next time. ___ Contentbox Website: https://contentbox.one/ Telegram: https://t.me/BoxCommunity Contentbox Twitter: https://twitter.com/Contentbox_one Contentbox Facebook: https://www.facebook.com/contentboxone/ Contentbox Medium: https://medium.com/@Contentbox_one Contentbox White Paper: https://contentbox.one/wp.pdf Contentbox Bitcoin Talk: https://bitcointalk.org/index.php?topic=3898690.0
https://medium.com/contentbox/contentbox-joins-huobi-global-elites-meetup-in-korea-18811732ada6
[]
2018-10-18 18:24:33.705000+00:00
['Blockchain', 'Decentralization', 'Startup', 'Cryptocurrency', 'Podcasts']
Greener Funerals
Greener Funerals How to give back after death. Photo by Aaron Burden on Unsplash As more and more people become eco-conscious and environmentally aware, they change their habits and lifestyles in the hopes they’re helping more than they’re hurting the world. Most of us have heard about reducing our impacts through recycling and composting. What a lot of people aren’t aware of, though, is that they have options to reduce the impact of their death on our environment. Corpse management has changed a lot over the years. It used to be the norm that families would care for their dead and handle the other aspects of the death of loved ones. Over time, we’ve moved away from personally caring for bodies and toward relying on mortuaries and funeral homes more and more. The art of embalming has been around for a very long time. Ancient civilizations employed different ways of providing meaning in death. Only in the last two hundred and fifty years has the world used formaldehyde to preserve the dead. Formaldehyde embalming became more of a necessity when body transport became unmanageable (and disgusting) during times of war and pestilence. In the last hundred years, we’ve taken a more industrial approach to handling death. Colleges teach mortuary science, and states require licensing. Body disposal is considered unclean and dangerous. Rumors abound about death and disease. Lobbyists convinced legislatures and the public that death is unnatural. There’s only one problem. Embalming today creates a toxic brew of chemicals that requires special (and often incomplete) measures to prevent those toxins from leaching into the soil. Our idea of peaceful repose and green grass surrounding our loved ones means that we pay extra to purchase caskets that require more materials to house bodies. Then we must enclose those caskets in concrete to reduce ground-sinking so that cemetery caretakers can mow as bodies inevitably decay. None of this is environmentally friendly. It costs us so much money that an entire industry evolved around funeral insurance to cover after-death expenses. Depending on the laws of where you live, there are ways to bury bodies without involving chemicals. Embalming isn’t a legal requirement in a lot of places. Nor do we have to settle for expensive caskets or regular cemeteries to bury our loved ones. In my state, Minnesota, you can even bury your dead in your backyard as long as you do so within three days of death and follow state laws. BYOS (bring your own shovel) might sound like an extreme way to observe someone’s passing, but death is a natural process. We don’t have to make it unnatural by adding cancer-causing chemicals into the mix. There are other options that not only reduce costs but also give back after death: green burial and composting. Green burial revolves around avoiding chemicals, extra emissions, and processing by using only biodegradable materials. A tree may be planted on the gravesite in place of a physical marker like a headstone. Composting takes this idea one step further. Bodies are buried within biodegradable materials designed to let natural processes take over and turn death into soil for reuse and renewal elsewhere. Again, different places have different laws, but more and more entrepreneurs have come out on the side of safer, more environmentally friendly ways to reduce death’s impact on the living. We’re fortunate to live in a time that gives us so many options to improve our environment.
It may take a little extra work, but returning our views on death to nature will save us all in the long run. For those who want to learn more, here are some resources about carbon-saving burials: https://www.youtube.com/user/OrderoftheGoodDeath https://recompose.life/ https://www.prairiecreekconservationcemetery.org/ https://www.greenburialcouncil.org/ Photo by Kerri Shaver on Unsplash References: The Editors of Encyclopaedia Britannica. “Embalming.” Britannica, 2020, https://www.britannica.com/topic/embalming The Editors of Encyclopaedia Britannica. “Development of Modern Embalming.” Britannica, 2020, https://www.britannica.com/topic/embalming/Development-of-modern-embalming Doughty, Caitlin. “Is Embalming Dangerous?” Ask a Mortician, 21 Sept 2015, https://www.youtube.com/watch?v=p3rIc1qS258&ab_channel=CaitlinDoughty%E2%80%93AskAMortician “What Is Green Burial.” Funeral Basics, 2020, https://www.funeralbasics.org/what-is-green-burial/ “Grave Matters.” Minnesota Pollution Control Agency, 2020, https://www.pca.state.mn.us/grave-matters Keene, Valerie. “Burial and Cremation Laws in Minnesota.” Nolo.com, 2020, https://www.nolo.com/legal-encyclopedia/burial-cremation-laws-minnesota.html
https://medium.com/greener-together/greener-funerals-317e8af1dd92
['Sunshine Zombiegirl']
2020-12-09 15:23:40.519000+00:00
['Death', 'Environment', 'Green', 'Environmental Impact', 'Funerals']
Renault Spider is a Memory of an Almost Extinct Segment
An aluminum chassis and plastic composite body panels dramatically reduced the weight while ensuring rust would never be an issue. Rear-wheel drive and a manual transmission made the best of a 2.0-liter engine. The minimalistic approach made the windshield an optional item, while power steering, heating and a sound system were unavailable. Only 1,493 units were produced in four years. The very first Renaultsport car was a particular case, of course. What calls our attention is that the market segment it used to symbolize has almost died with it; when was the last time we heard of compact convertibles? Looking back, we can say they became yet another casualty of people’s ever-changing desires. The Spider has given us a chance to talk about those who shared its concept.
https://medium.com/cardesignchronicles/renault-spider-comes-from-almost-extinct-segment-52be58f95aa8
['Danillo Almeida']
2020-06-22 11:56:08.583000+00:00
['Cars', 'Design', 'Style', 'Past', 'English']
Understanding Digital PR Agency
Digital PR, or Digital Public Relations, is much like digital marketing in that it has shifted from offline activity to the digital world. A digital PR agency helps establish and create awareness of its clients’ businesses by focusing on the target audience and building digital footprints. So, let’s dive in to understand digital PR agencies, their benefits, and whether they are worth it! What Is A Digital PR Agency? A digital PR agency helps you establish and grow the reputation, brand awareness, and understanding of your product or services in the digital world. These agencies help you establish trust and a reputation online with your consumers or targeted traffic. This is primarily done through building relationships with online journalists, bloggers, and influencers relevant to your business or the niche of your product or service. This leads to media exposure on a variety of platforms, including news sites, blogs, and podcasts, to name a few. In addition, clients get visibility across a wide range of online media, influencing their target audience. How Can A Digital PR Agency Help? A digital PR agency can help you define your goals and set up a proper strategy to accelerate and grow your online presence. Some of the deliverables you can expect are: Building a PR strategy based on your business goals, which can include growing your backlink profile to increase organic traffic Pitching stories or content on online media platforms in the form of interviews, podcasts, etc. Providing keyword research strategy and SEO in general to identify the type of media content you should be creating Creating social media strategies with the help of social media influencers. Also, managing, growing and gathering feedback across all your social media channels Benefits Of A Digital Public Relations Agency An agency provides your business with many benefits through an efficient and well-planned strategy. Some of the benefits are as follows: Building trust, reputation, exposure and brand awareness with your target audience Establishing yourself as a verified or authoritative publisher within your niche Increased referral traffic and a boost in SEO Growing your digital footprint and brand image To conclude, a big part of PR is creating and maintaining relationships. At StratDev, our team tackles this challenge by leveraging new and existing connections, building bridges between your brand and your dream team of collaborators. Our team works with yours to produce and promote content that engages your audience through earned media amplification. So, if you need help, guidance, or any consultancy regarding your digital PR goals, you should surely visit or contact StratDev Digital Marketing.
https://medium.com/stratdev-digital-marketing/understanding-digital-pr-agency-934d0badf2fb
[]
2020-08-10 17:26:48.646000+00:00
['Marketing', 'Digital Marketing', 'Digital Agency', 'Digital Marketing Agency', 'Public Relations']
Product Analytics Strategies for B2B SaaS Companies
https://medium.com/incursu/estrat%C3%A9gias-de-product-analytics-para-empresas-saas-b2b-9f95e6a9b86b
[]
2019-08-19 14:18:05.151000+00:00
['Marketing', 'Customer Experience', 'Data Science', 'Product Management', 'Analytics']
John Milton — Poet, Polemicist & The Secretary of Foreign Tongues
John Milton — Poet, Polemicist & The Secretary of Foreign Tongues “A good book is the precious life-blood of a master spirit.” John Milton. Image: Kaliope In March 1649, less than two months after the execution of Charles I, John Milton took on the position of Secretary for Foreign Tongues to the Commonwealth Council of State. Such a position would undoubtedly have come about as a result of a pamphlet written by Milton approving of the king’s execution, printed just days after the event. The new Commonwealth, and later Cromwell himself, needed good scribes to deal with communications from foreign nations. Milton, an affirmed anti-royalist, poet, and self-appointed supporter of Oliver Cromwell, was their man. A contemporary painting of Charles I’s execution. Image: vanproveratwordpress John Milton soon made himself indispensable to the powers that be, and with six or seven languages under his belt, he quickly rose to a position where Cromwell could make good use of him. Cromwell, our chief of men, who through a cloud Not of war only, but detractions rude, Guided by faith and matchless fortitude, To peace and truth thy glorious way hast ploughed, And on the neck of crowned Fortune proud Hast reared God’s trophies, and his work pursued; While Darwen stream, with blood of Scots imbrued, And Dunbar field resounds thy praises loud, And Worcester’s laureate wreath: yet much remains To conquer still; Peace hath her victories No less renowned than war: new foes arise, Threatening to bind our souls with secular chains. Help us to save free conscience from the paw Of hireling wolves, whose gospel is their maw. The above poem, To The Lord General Cromwell — May 1652, is Milton’s plea to Oliver Cromwell to ensure the peace, hard won, is worthy of the blood spilled and lives lost. Cromwell would certainly have read Milton’s poem, no doubt smiling ruefully as he did so, knowing full well that his most ardent supporter was right, yet impatient at his Secretary of Tongues’ impudence when England was now at war with Holland (on the high seas), and with Cromwell also having to deal with a parliament that had once again lapsed into indolence. The chances are that the soon-to-be Lord Protector may have ordered Milton to be brought before him, only to find the now completely blind poet wholly unfettered in thought and word: about the heavy-handedness of many of the edicts that came from parliament; about whether our Lord General realised just how many people were homeless and starving in the villages and on the city streets; and about how his 60,000-strong army were, through idleness and gossip, becoming restless. He may also have added: “And it must not be forgotten, sire, that you have promised the people a bill of representation?” The smile may then have left Cromwell’s face as he led Milton to a chair, asking his aide to bring the illustrious poet some refreshment. “Do you think I am not aware of these things, John? But the king’s son had to be dealt with, and then we had to suffer the Irish and the Scots, and now the Dutch. When they have been dealt with by our navy we can move forward and build a new nation for all.” As Cromwell’s aide handed Milton a glass of Rosa Solis cordial and a small finger biscuit, and Cromwell looked from his window at a beggar crumpled at the river’s edge, Milton asked: “And what of books, sire, now that you are Chancellor of Oxford University?” “Oxford, eh, and we both Cambridge men. Books indeed, Milton. But Parliament first, I think?” And Cromwell closed Parliament, and took personal control of the country.
Some wanted to make him King. Written and printed at the height of the Civil War, Milton’s essay against the censorship of books, Areopagitica, was an early reminder to parliament of one of the many reasons why England was at war with a traitorous king. It was also a treatise that became one of the first building blocks advocating uncensored literature and free speech, and an essay that, one hundred and thirty years later, would help fuel the Enlightenment in Europe and, quite independently, the authors of the US Constitution. And Milton’s cry that “…a good book is the precious life-blood of a master spirit” entered deeply into Cromwell’s psyche; as a deep and learned reader of history (along with the psalms, which he saw as building blocks for the future), Cromwell knew the importance of an educated society as well as a well-housed and well-fed one. In many ways Milton’s work kept Cromwell’s conscience well pricked and alert to the future needs of an England that had been kept in servitude for far too long. Milton would not allow Cromwell to lose sight of his vision for a prosperous, happy, democratically represented, and influential England, led by a Lord Protector who loved his people (all of his people), with Jesus Christ and John Milton at his side. Earlier this century I wrote, produced, and directed a dinner play called 1651, An Evening with Oliver Cromwell, where the audience, gathered at a large table, dine with Oliver Cromwell, Major General Thomas Harrison, and John Milton. For the purposes of my play Cromwell had promoted the poet to the position of his Chief Advisor which, as some have suggested, is a “tantalizing thought.” What my Milton gave to the play (set just a few days before the Battle of Worcester) was his ability to interject fearlessly between Cromwell and Harrison, with questions of purpose and policy, which often stoked the two soldiers to a shared frustration they found hard to suppress. Milton then cooled the heat by informing Cromwell of a letter received only that morning from his agent in Italy, reporting that the English flag now flew proudly in the Mediterranean. The young actor, James Keningale, who played Milton, really did put flesh on my dramatic skeleton. The play also brought together John Bunyan (who served in the Parliamentary Army), Cromwell’s daughter Elizabeth, bringing letters from her mother, plus Prince Charles (Charles II), in disguise. We’ve done the play several times, not least for The Battle of Worcester Society, organised by Oliver Cromwell’s great-grandson several times removed. James Keningale. Image: Mandy John Milton was born in Bread Street, Cheapside, London, in 1608. His father, John, was an aspiring composer, newly converted to the Protestant faith, which meant he had to flee his parents’ Catholic home (he was probably thrown out) to settle in London. And it may have been at church that he later met and, later still, married Sarah Jeffrey, John’s mother. Finding employment as a scrivener (scribe) with a firm of lawyers, John Milton the elder prospered. And as the John Milton Academy Trust describes it: John Milton came into this world in “…a century of revolution — in politics, print, science and the arts”, with the baby’s parents embracing it all. The baby, as he grew, also knew he had a big part to play in the century of revolution. Milton attended St.
Paul’s School, founded in 1509, then “… had lessons with ‘my excellent tutor’ Thomas Young who was later to become Vicar of Stowmarket and Master of Jesus College Cambridge.” Milton found his love of books at St. Paul’s (as would Samuel Pepys when a student at the school a generation later) and would define education as: “…that which fits a man to perform justly, skilfully and magnanimously all the offices both private and publicke of peace and war.” Baroque Music 1 The Concert Theodoor Rombouts 1597–1637. Image: warwickvalleyliving.com When finished for the day copying deeds and writing up briefs, Milton’s father would take hold of a different quill to compose songs and music for the lute and violin, instilling into his son a love, and an understanding, of music; an understanding which can be felt in Milton’s poetry and which, if kept in mind when reading his work, can be a useful aid to understanding the poet’s inner mental rhythms and passions: Hail, native language, that by sinews weak Didst move my first endeavouring tongue to speak, And mad’st imperfect words with childish trips, Half unpronounced, slide through my infant lips, Driving dumb Silence from the portal door, Where he had mutely sat two years before: Here I salute thee, and thy pardon ask That now I use thee in my latter task… Milton as Poet and Civil Servant. Image: Colby College After St. Paul’s, Milton went to Christ’s College, Cambridge, graduating in 1629 with a BA, then an MA in 1632. He continued there, studying languages, and preparing himself to join the priesthood. The idea of becoming a priest soon waned, and in 1638, Milton took off for Italy and Sorrento (via France), the birthplace of the late Renaissance poet Torquato Tasso (who had died in a mental institution just thirteen years before Milton’s birth), whose epic poem Gerusalemme liberata describes the siege of Jerusalem during the First Crusade in 1099, a poem that would have stirred the young Milton to his very soul. Milton may very well have translated the poem into English. There can be no doubt that Tasso’s masterpiece will have influenced Milton in both style and length. A few years ago my wife and I visited Sorrento, staying in the Hotel Tramontano overlooking the Bay of Naples. It is a beautiful hotel that had, in Milton’s time, been a private house that doubled as a pension for those who came to worship Tasso. Milton was no exception, staying at the Tramontano pension (there’s a plaque in the hotel’s foyer saying so) for a couple of weeks as he searched out Tasso’s birthplace, trod the pathways that Tasso had walked, and perhaps rested in the shade of an ancient lemon tree where the great poet may have parked his poetic bottom. Milton also made the long walk to Naples to speak with Tasso’s biographer, who introduced Milton to Giovanni Battista Manso, one of Tasso’s patrons. Before leaving Italy Milton also visited the astronomer Galileo. Milton’s visit to Sorrento started something of a trend, with most of the English Romantic poets congregating at the Hotel Tramontano in the early 19th century, probably as some sort of intellectual energy boost before moving on to Rome. The song, Torna a Surriento (Come Back To Sorrento), was composed at the Hotel Tramontano in 1894 by Ernesto De Curtis, with words by his brother, the poet and painter Giambattista De Curtis.
Listen to Dean Martin’s version Image: ebay With the restoration of the English monarchy in 1660, Milton, now politically out of favour (and very lucky to escape with his life), was depressed, hugely saddened by Cromwell’s death and by the loss of the hopes they both had for England. As a solace he returned to poetry, producing, by dictation, his masterpieces Paradise Lost and Paradise Regained, and the extraordinary poetic drama Samson Agonistes. Sam: A little onward lend thy guiding hand To these dark steps, a little further on; For yonder bank hath choice of sun or shade. There I am wont to sit, when any chance Relieves me from my task of servile toil… Milton would marry three times. Firstly to Mary Powell (with whom he had four children: Anne, Mary, John and Deborah), with Mary dying in 1652 after the birth of their fourth child. The next marriage was to Katherine Woodcock, who died just four months after the birth of their daughter Katherine. And finally to Elizabeth Mynshull (who was thirty-one years Milton’s junior). The couple lived happily together until Milton’s death. John Milton died in 1674, aged 65. Read: Oliver Cromwell: A Profile (Part One) Bibliography: Antonia Fraser — Cromwell: Our Chief of Men (Weidenfeld & Nicolson, London, 1973); The Work of John Milton (The Wordsworth Poetry Library, Ware, 1994); John Buchan — Oliver Cromwell (Hodder & Stoughton Ltd, London, 1934); Samuel Pepys — Plague, Fire, Revolution, Edited by Margaret Lincoln, with a Foreword by Claire Tomalin (Thames & Hudson, London & the National Maritime Museum, 2015) With acknowledgements to: John Milton Academy Trust
https://medium.com/books-are-our-superpower/john-milton-poet-polemicist-the-secretary-of-foreign-tongues-ffae5ca25a6a
['Steve Newman Writer']
2020-11-11 14:45:35.508000+00:00
['Books', 'Poet', 'Biography', 'History', 'John Milton']
How to Support Your Depressed Spouse
How to Support Your Depressed Spouse What To Do When Your Partner Is Not At Their Best My partner has been depressed for months now. He is struggling to find a job that he finds fulfilling and he has fallen into a daily rut, which is pretty toxic. He has stopped working out, he is drinking each evening and is not taking care of himself. It has become very frustrating to deal with and honestly, I feel like I am living with a sloppy roommate or a child. I am tired of cleaning up after him and desperately trying to get him to do anything around the house. We are fighting on an almost daily basis and I am really struggling to remember the good parts of this relationship. I know that we committed to being together through the good times and the bad times, but it is so tough. I feel like I do not know who I am living with anymore because this is not the person I married. My partner is toxic to be around and he is either treating me like his personal punching bag or completely ignoring me. I am tired of his passive-aggressive comments and sullen attitude. How long is long enough to say that I tried? How long am I supposed to stay in this situation before I can say that I have done enough? When is enough, enough? Sincerely, Tired, frustrated, alone and desperate Relationships are extremely complicated, and anyone who says that their relationship is great all of the time and that they and their partner never fight is lying or hiding something much more complicated. When your partner is depressed, the first thing you can do is educate yourself on depression. Learn what appropriate support for your partner looks like. Understand the difference between supportive practices and enabling practices. Make sure you are instilling a sense of safety for your partner to speak with you about their concerns, and ensure that you are able to help them effectively. Depression is a very serious illness and must be treated as such. Understanding how to support someone effectively through a difficult time is essential, but it is also critical to understand the warning signs of suicide and what to look out for. Seek out a mental health professional, for yourself, so that you have the support you need through this difficult time. Yes, your partner is the one who is struggling, but do not discount your own feelings and struggles in dealing with a partner who is depressed. Having a certified therapist to meet with regularly can be a really safe place for you. It is also really useful for you to have someone to listen to your problems and concerns at this point in your life too. Your partner is probably so absorbed in their own problems that they are not checking in on you and how you are doing. Seek out couples counselling to deal with communication issues. You mentioned that you feel like you are your husband’s punching bag, and that needs to stop immediately. You should not be victimized and hurt by your partner; it is simply unacceptable. Schedule some couples counselling so that you two can address your concerns in a safe space with an unbiased third party present. Couples counselling can be a lifesaver during a difficult time in a relationship because sometimes the biggest issue is that when we are stressed we are not able to hear one another properly, if at all. Make a healthy living environment at home and help your partner to be more healthy. You mentioned that your spouse has stopped exercising and eating well and is drinking too much.
All of these are common, but unhealthy, behaviors individuals tend to fall into when they are in a depressive state. Help your partner out by creating a healthy home. Buy healthy groceries, cook meals at home, try to work out together and do activities in the evening which do not revolve around alcohol or sitting in front of the television. Go for walks, listen to a podcast together, cook together, play Scrabble or whatever it is you two enjoy. Help your partner by creating small, daily steps for self-improvement. This is an area I fail at regularly because I tend to take over and want to control the entire situation. Doing too much too soon will not help anyone. Help your partner to create three goals for each day: one for their physical well-being, one for their professional goals and one to improve their emotional state. In the beginning, the goals can be super small, for example, going for a walk, applying for one job a day and eating a healthy dinner. Make sure the goals are attainable and not overwhelming for your partner, and be there to support them through it. Hold them accountable, but do not hover over them; give them some space to breathe. If they are not successful at attaining their goals each day, do not be hard on them. Remember that they are probably being really hard on themselves right now, so they do not need additional hardship from others. Remember the good times you have shared. When someone is going through a difficult time, it can be so easy to see everything as dark and dreary. It is vital to remember the good times of your relationship. Stop focusing on the negative and remember that your partner is simply going through a bad season. Think of this situation as a season, and seasons change regularly. By understanding that this is a challenge that your relationship is going through, and something which could arguably make you much stronger in the long run, it might be easier to deal with. Write out your frustrations so that you do not speak them. Writing or journaling your thoughts, frustrations, fears, and concerns can be really healthy. Having a mental dump at the end of the day can be exactly what you need in order to have a restful night and also avoid a potentially stressful, negative conversation with your partner. Your partner is not in a place where they can support you right now, so dumping your difficulties on them is not going to do anything helpful for anyone. Have a safe person to talk to about your issues. It can be easy to become completely consumed by your partner’s issues, but you must make sure to take care of yourself. If you let your own self-care slip, you will not be of any use to your spouse. You might feel like you simply do not have the time to take care of yourself, but do not undervalue the importance of having someone checking in on you. Letting a friend or trusted family member know what is going on is vital. You will need them, and their checking in on you can be a great buoy to stop you from getting completely absorbed in the ocean of your partner’s issues.
https://amanlitt.medium.com/helping-your-depressed-spouse-2ccf118b080a
[]
2019-12-04 23:18:21.943000+00:00
['Depression', 'Relationships', 'Support', 'Mental Health', 'Relationships Love Dating']
Atomic Habits by James Clear
An easy and proven way to build good habits and break bad ones Why are habits important? You might not believe that tiny habits can have a significant effect on your life. In his book Atomic Habits, James Clear argues that tiny changes in your life can and will have remarkable results. Tiny positive habits are important, and they compound day to day and year to year into significant results. Habits are the compound interest of self-improvement. The same way that money multiplies through compound interest, the effects of your habits multiply as you repeat them. 1% better every day You might not think that improving your daily habits by 1 percent will give you results, because day to day you don’t feel any different. But over time, that small improvement in your life will compound to incredible results (the arithmetic is made explicit in the short sketch at the end of this piece). In Figure 1 above, you can see how, over time, a 1 percent change to your daily life compounds to incredible results. Alternatively, letting your daily habits slip by 1 percent compounds into decline. Over time, positive improvement compounds exponentially, whereas negative habits erode your results toward zero. How can I start a new habit? Every new year, most of us make a New Year’s resolution. We want to exercise more, read more, go out more. We want to improve ourselves and get better. However, most of us stop working on our New Year’s resolutions by February, if not sooner. If you want to build a good habit, there are some steps you need to take in order to stick to your goals. Make it easy — You will never stick to your habit if it’s not easy to complete. You may do the habit for a couple of days or months, but it will never stick long term if it’s not as easy as possible. You need the habit to be simple enough that you have no excuse not to start it. Habit stacking — When you first start a habit, it might be a good idea to stack the new habit on top of the old ones that you already do. This will make the new habit very easy to start and complete. For example, if you want to drink more water, make it a habit to drink water every time you go into your kitchen to eat. During the day, you are bound to go to your kitchen to make yourself food. Walking to the kitchen will trigger you to drink more water. Over time, this habit stacking will become a routine and you will be able to drink more water. Change your environment — It’s so important to create an environment for yourself that supports your new habits and eliminates bad ones. You should have environments where you only work, work out or relax. You shouldn’t work on your bed, and you shouldn’t relax at your office. Every single environment in your house or your office should have a purpose. Plateau of Latent Potential When we start any new habit or goal, we tend to think that our progress should be linear. In reality, the results are delayed. Most of us want results right away and get disappointed when we don’t see them immediately. This is called the “Valley of Disappointment”. The Valley of Disappointment is often where people quit their goals, because they haven’t seen results yet. All big things come from small beginnings. The seed of every habit is a single, tiny decision. It’s important to push through the Valley of Disappointment phase and realize that results will come after the hurdle. Results will grow exponentially if we stick to our habits. Repetition, repetition, repetition James Clear makes it very obvious that whatever habit you want to create for yourself is not going to be easy to start with.
You need to repeat your habit enough times for it to become automatic. There is no set amount of time or number of repetitions for it to become automatic, and it will change from habit to habit. It’s important to stick to your habit and push through the Valley of Disappointment. Tiny Changes, Remarkable Results
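As a footnote to the “1% better every day” idea above, here is a tiny sketch (my own illustration, not from the book) that makes the compounding arithmetic explicit:

```python
# Compounding 1% daily changes over a year.
better = 1.01 ** 365   # ≈ 37.8: a year of getting 1% better every day
worse = 0.99 ** 365    # ≈ 0.03: a year of slipping 1% every day
print(f"1% better daily for a year: {better:.1f}x")
print(f"1% worse daily for a year:  {worse:.2f}x")
```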
https://medium.com/coderbyte/atomic-habits-by-james-clear-9f295403d45a
['Ilknur Eren']
2020-11-09 17:29:12.012000+00:00
['Book Review', 'Atomic Habit', 'Habits', 'Productivity']
Top 12 Mobile App Analytics Platforms in 2019 (Pricing Included)
Top Mobile Analytics Platforms in 2019 (Pricing Included) Creating an app is only half of the battle. Once the app is finished, you need to understand your users. What are they tapping, swiping, watching and buying? How often do they use your app and how long do they stay in the app? The best way of knowing that is using a mobile app analytics platform. In the two leading app stores, there are more than 4 million apps, and it is becoming increasingly important to track the habits of the users and their behavior. Mobile app analytics platforms collect and present data, offering insight across all platforms. They help us achieve our goals. Today, an average user in some countries has over 100 apps on their phone. There are more attributed installs than ever. Attributed installs are growing at a rate of 39%, while non-attributed installs are growing at a slower pace. That means more marketers are tracking the user journey to their app or through their app. 1. Firebase Firebase is Google’s mobile platform for developing apps that grow the user base on iOS, Android, or the Web. It was acquired by Google in 2014 and soon became Google’s flagship mobile platform for developers. Since then it has made another three acquisitions. Firebase measures everything in one central location, from user engagement to app crashes. With multiple-platform support, it offers funnel visualization, cohort analysis, A/B testing and real-time analytics. Firebase is integrated with other Google products like Google Ads and AdMob. Key features: Unlimited Reporting Audience Segmentation Crash reporting Real-time Database Cloud storage Deep linking performance In-app purchase data Attribution In-depth audience segmentation Platforms supported: Android, iOS, C++, Unity Pricing plan: Spark Plan — free, Flame Plan ($25/month), Blaze Plan — calculate pricing Headquarters: San Francisco, United States Office: San Francisco, United States Support: FAQ, Support guides, community forums on Stack Overflow and Quora, Technical support over email (after filling out the support form) – if you have a paid GCP support plan, you should submit technical issues through the GCP support console 2. Apple Analytics (App-Analytics) Mobile app analytics specialized for apps published in the App Store. It offers insight into how users discover your app and how they search the App Store. Apple Analytics tracks App Store impressions and user engagement, as well as segmentation of users. The Sales and Trends section allows you to understand which of your apps or in-app subscriptions are the most popular. Key Features: Usage data Sales data App Store data Only supports the iOS platform No SDK installation required Platforms supported: iOS Pricing plan: included in the Apple Developer Program fee, which costs $99 or $299 Headquarters: San Francisco, United States Offices: Cupertino, Toronto, Khwaeng Pathum Wan, Taipei City, Singapore, Auckland, Kuala Lumpur, South Bay, Seoul, Tokyo, Sydney, Bengaluru, Causeway Bay, Cork (since Apple Analytics is one of Apple’s products, listed here are cities with Apple’s offices) Support: Apple Developer Support over phone or email (after submitting your problem in a form), Developer Forums 3. AppsFlyer A custom analytics tool with insight into rich in-app events, omnichannel measurement, and cost & ROI reporting, offering support for different internal business intelligence systems. AppsFlyer also has a dedicated mobile app for monitoring performance on the go.
Since this is also an attribution platform, it can offer deeper app insights for specialized goals like retargeting attribution, TV attribution, and deep linking. Features: Mobile attribution Marketing analytics Deep linking TV Attribution Mobile App for tracking on the go Platforms supported: iOS, Android, Windows & Xbox, Amazon, tvOS, Unity, Cordova, Marmalade, Cocos2ds, Adobe Air, React Native Pricing plan: 30-Day free trial, custom plan upon request Headquarters: San Francisco, United States Offices: San Francisco, New York, Herzliya, Beijing, Berlin, Haifa, London, Bangkok, Tokyo, Seoul, Bangalore, Buenos Aires, São Paulo, Kiev Support: email ([email protected]), help center 4. GameAnalytics GameAnalytics is a free and flexible analytics tool built for improving KPIs across your entire portfolio. More than 54 thousand game developers use this platform to optimize their games. It covers over 850 million active players and is used in more than 63 thousand titles. This free player-analysis platform is a great choice for developers, from indies to large game studios. It helps them analyze, understand, and monetize their players by making the right decisions based on data. Features: Tracking campaigns Collect, visualize and track player data in one platform Improving your game through error tracking Integrated with many platforms Completely free Platforms supported: Unity, Unreal, iOS, Android, Javascript Pricing: completely free to use, no pricing models Headquarters: Copenhagen, Denmark Offices: London, Copenhagen, New Delhi, Beijing Support: FAQ page, contact form for support 5. Adjust A unified mobile app analytics tool that gathers mobile marketing data in one simple interface. You can spot hourly trends, evaluate user LTV and analyze cohorts. This platform allows users to track all in-app events and sync data back to their own BI platform. Adjust tracks events over the span of a user’s lifetime, aggregating all of their engagement data in the dashboard. With cohort analysis, you can segment users by creative, install date, location and more. The Fraud Prevention Suite protects you from malicious and fraudulent activity. Features: App and Advertising platform Fraud Prevention Suite Fully customizable event tracking Custom segmentation of audience Access to raw data Real-time data Platforms supported: iOS, Android, Windows Store, Windows Phone Pricing plan: basic plan starting from 100€, Business plan and Custom plan (upon request) Headquarters: Berlin, Germany Offices: Berlin, San Francisco, New York, Paris, London, Moscow, Istanbul, Shanghai, Beijing, Seoul, Tokyo, Mumbai, São Paulo Support: Adjust docs — a page answering the most frequent problems, email support ([email protected]) 6. Tune A mobile app analytics and performance marketing platform. It tracks the whole journey of the customers by unifying touchpoints across every channel. In 2018 it was acquired by Branch.io. Since then, TUNE’s Attribution Analytics platform has been a part of Branch. Tune is a single integrated solution for measurement and engagement across the entire customer journey. People-Based Attribution means that you don’t have to miss anything. You’ll have 30% more data to optimize your campaigns and maximize ROI. It has powerful targeting and measurement technology with 25 billion events tracked monthly.
Features: Unifying touchpoints part of Branch People-Based Attribution engine fraud detection cohort analysis cost ingestion/ROI Platforms supported: Android, iOS, tvOS, Javascript, Windows Pricing: free up to 10K MAU, Startup and Enterprise plan (upon request) Headquarters: Seattle, United States Offices: Seattle, San Francisco, Tel Aviv, New York City, Seoul, London, Berlin, Tokyo, Gurugram Support: contact over email after submitting a support ticket 7. Kochava Kochava is a mobile app analytics and attribution platform for tracking user acquisition, engagement and LTV (lifetime value) for mobile applications. With the Kochava mobile analytics platform, you’ll be able to see and capture all the important data points and create cohorts of users. Data sets can be exported in a variety of formats. You can track the lifetime value (LTV) of your users and analyze true return on investment (ROI). Features: Fraud prevention Real-time data True LTV Retention Analytics 90+ Funnel view Platforms supported: Android, iOS, tvOS, Windows & Xbox One, Unity, ReactNative, Cordova, Adobe Air, Xamarin, Web SDK, Corona Labs, Adobe DPS, Adobe Analytics Pricing plan: Free App Analytics, more advanced Attribution Analytics and Unified Audience Platform starting at $100 Headquarters: Sandpoint, United States Offices: San Francisco, Los Angeles, New York City, Paris, Dublin, Beijing, Seoul, Singapore Support: support page with FAQ, phone (tel:+8555624282) or email ([email protected]) 8. Mixpanel A mobile app analytics platform with many analytical tools like funnel analysis and cohort analysis. It doesn’t have a live demo. By default, Insights shows the top events in the last 96 hours. Mixpanel app analytics has multiple tools for the different needs of app marketers: Insights, Live View, Formulas, Flows, Funnels, Retention, and Signal. Funnels allow you to track how your customers move through your app or website by creating a series of steps through events. The user can see which features increase conversion, engagement, and retention. Perhaps the most distinctive report is the Addiction report — it explores your users’ retention. Features: Multiple tools (Insights, Live View, Formulas…) Free for 5 million data points Live view Addiction Report Platforms supported: iOS, Android Pricing plan: based on data points — free plan (5 million data points per month), basic plan (starting at 10 million data points per year), enterprise (upon request) Headquarters: San Francisco, United States Offices: San Francisco, New York, Lehi, Seattle, London Support: Mixpanel Community, Help Center, email ([email protected]) 9. Appsee Appsee is a mobile app analytics platform for measuring user experience with your native mobile apps on two platforms — Android and iOS. With Appsee you’ll be able to understand exactly how users interact with your apps. Results are shown in real time after the session or video recording has completed. You can record users, view touch heatmaps or use in-app analytics. However, not all devices support user recording. Features: Video recording Touch heatmaps In-app analytics Visual Reports Free trial Platforms supported: iOS, Android Pricing plan: Free plan for 1 app with 2500 monthly sessions, Premium plan (with 14-day free trial), Enterprise (upon request) Headquarters: Tel Aviv-Yafo, Israel Offices: Tel Aviv-Yafo, New York Support: email ([email protected]), support forum 10.
Flurry Flurry Analytics is integrated into your app in only five minutes and comes with support for the Android and iOS platforms. It is a part of Yahoo's Developer Network. This easy-to-use platform allows you to track new users, active users, sessions, and more. You'll be able to monitor app performance and compare the metrics. In one dashboard you'll have detailed insight into user and session activity. With crash reporting, you're able to identify issues and bugs in your app. Flurry Analytics comes with a mobile app for monitoring your app's performance anytime, anyplace. According to their website, it's used by 250,000 developers in 940,000 apps. Features: Event tracking Funnels tracking Crash reporting Completely Free Crash Analytics Raw Data Download Platforms supported: iOS, Android, React Native, watchOS, Unity Pricing plan: completely free Headquarters: San Francisco, United States Offices: San Francisco, New York, London, Chicago, and Mumbai Support: FAQ page, email support ([email protected]) 11. Facebook Analytics Facebook's omnichannel analytics for better understanding the actions of users on the web, in apps, and on Facebook. Here you'll be able to visualize people's progress to conversions through their actions across your Page and website, on mobile and desktop. You'll be able to accurately measure people's retention over time. Facebook Analytics allows you to get a full view of how people interact with you across your website, apps, Facebook Page, and bots in one report. With the use of machine learning to analyze and monitor the data, you'll be able to take action more quickly. Features: Revenue tracking Custom Dashboards Automated Insights Lookalike Audiences Custom Audiences Platforms supported: Web, iOS, Android Pricing plan: completely free Headquarters: Menlo Park, United States Offices: Facebook has offices around the world (New York City, San Francisco, Paris, London, Tel Aviv…) Support: help center, Stack Overflow community, Facebook Developer Community 12. UXCam UXCam is a mobile app testing and management tool designed with product managers, UX designers, and app developers in mind. UXCam allows mobile app developers to pinpoint app issues easily to enhance user experience. App developers can watch recordings of users' sessions to identify any potential problems. The app is available on both Android and iOS devices. You can increase app engagement, attract more users, and reduce churn with UXCam, since it enables developers to get to the root of a problem. In short, UXCam improves app KPIs by offering an understanding of your users. Key Features: Session Replay & Analysis Heatmap & Screen Analysis User Journey Analysis Funnel Analysis Platforms supported: iOS & Android Pricing plan: Limited Free Plan, Pricing upon request Headquarters: Berlin, Germany Offices: Berlin, San Francisco Support: Email ([email protected]), Support Slack, Helpdesk Chat (on www.uxcam.com) What can be tracked through mobile app analytics? Events If you are using mobile app analytics, you'll be able to see statistics about the automatically collected events as well as the custom ones. Automatically collected events are triggered by user interactions with your app. It can be an ad click, ad exposure, ad impression… Events can be analyzed further. Also, details can be broken down for a specific event. That means using parameters like event location, event demographics, and more. Some events are general — they can be applied to all apps.
Others are specific to a niche of apps, like retail/e-commerce or travel. (A minimal sketch of logging such a custom event appears at the end of this article.) Conversions Conversions are the most important events — they identify the most valuable users. It is crucial to track these events. They show us the flow of the user journey that precedes a conversion. Often, the three most important conversion events will be predefined. Those are: FIRST OPEN — when a user opens the app for the first time IN-APP PURCHASE — when a user completes an in-app purchase, including an initial subscription E-COMMERCE PURCHASE — when a user completes a purchase Audiences Audiences are users that you group together on any combination of attributes that is important to your business. That way you can see the behaviors of different segments of your users. Mobile app analytics include segments, in which users are grouped by activity, ARPU, the app version they use, age, gender, country/region, or their interests. Cohorts A cohort is a set of users that started using your app around the same time (or the same day of the week). It's important to track cohorts to know the retention rate, by seeing how many users end up coming back to your app. Retention cohorts will often be shown in a chart, where each row represents a cohort. In such a chart, the bottom row is the most recent cohort and the top row represents the earliest cohort. Darker shades represent a higher percentage of users who returned to the app. Funnels A funnel is a visual representation of the steps a user has taken in order to achieve a certain action (i.e., a conversion). Funnels are also used to visualize the completion rate of a series of steps (events) in your app. For example, a funnel can contain the steps necessary to create an account within your app. You can filter funnel reports by audiences or user properties to see whether some segments of your user base achieve a higher completion rate. Conclusion: Mobile app analytics help us learn how users engage with our products and campaigns so we can make them better. In the end, knowing the behavior of your users is crucial for the success of your app. The mobile app analytics platforms listed above give you insight into your app data. They are either completely free or offer a demo or trial for testing. Knowing your expectations is important when choosing a mobile app analytics platform. Many of the platforms mentioned go beyond just collecting data about users and the app, offering ad mediation, crash analysis, deep linking, and attribution.
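To make the custom event tracking described in this article concrete, here is a minimal sketch using the Firebase Analytics web SDK. The choice of SDK is an illustrative assumption (none of the platforms above is implied to share this exact API), and the event name and parameters are made up:

import { initializeApp } from "firebase/app";
import { getAnalytics, logEvent } from "firebase/analytics";

// Initialize the SDK with your own project configuration.
const app = initializeApp({ /* your Firebase config */ });
const analytics = getAnalytics(app);

// Log a custom event with parameters (value, location, and so on).
logEvent(analytics, "checkout_completed", {
  currency: "USD",
  value: 9.99,
  country: "US",
});

Automatically collected events need no such call; explicit logging like this is only required for the custom events a platform does not capture on its own.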
https://medium.com/swlh/top-11-mobile-app-analytics-platforms-pricing-included-cdc553578fd
['Mihovil Grguric']
2019-10-10 12:40:41.501000+00:00
['Mobilemarketing', 'Performance', 'Mobile Apps', 'Entrepreneurship', 'Mobile']
Less Thinking, More Doing: How to Combat Analysis Paralysis
Photo by Burst on Unsplash According to Columbia University decision researcher Sheena Iyengar, the average American reports making 70 decisions per day. However, the decisions we make vary in their investments, risks, and consequences. For brunch I might have to decide if I want to enjoy eggs benedict or huevos rancheros, and later that day pick which major I'm going to immerse myself in for the next four years. Funnily enough, I have found myself agonizing about both of the above decisions before, and have watched many friends do the same. The biggest thing that I fear is that I'll regret my choice, and that I'll miss out on something better. "The perfect is the enemy of the good." — Voltaire We think that cycling through our options over and over again will diminish those worries and help us land at the "right" decision. But most of the time, we just need to act. I've heard this called "analysis paralysis" in the past, which is an accurate way to describe the feeling. Analysis paralysis prevents us from making progress and coming to a solution. It also can cause increased feelings of self-doubt, anxiety, and overwhelm. In a leader, it can create the impression that you lack confidence and can't be trusted in volatile situations. Although this continues to be a struggle for me, I have found some key reminders that make decisions a little bit easier. Of course, some are more applicable than others based on the situation. Here are some things to remember when you're struggling with making a decision. Most Likely, None of Your Options Will Doom You To Failure Let's say you're deciding between colleges, job opportunities, or majors. It's a decision that you've gone back and forth on a lot, and there are a lot of pros to all of your options. You just can't decide, and there is no clear answer in sight. It begins to consume you and keep you up at night. Nervousness and anxiety can make you feel like a decision is graver than it really is. It's a part of that fight-or-flight response that has stayed with us through evolution. It's not a trivial decision. But regardless of what you choose, you will probably be okay. Of course, there are tradeoffs in every decision, and it's best to acknowledge them to avoid second-guessing yourself later. However, you have to remember that the universe will run its course and you will be just fine in the long run. You Really Don't Know Until You Try Especially for smaller decisions, like how to attempt to solve a puzzle or test question, there is no way to know the correct answer until you try. Real experience, with the beauty of trial and error, fills in where your brain can't. So many times you may find yourself dizzy from brainstorming, especially in team problem solving. Trying to analyze all of the possibilities can be exhausting, and can sometimes turn into a competition of who can speak the loudest or have the most creative idea. You might lose sight of what your goal really is. Just pick something and go. It will give you data, which is better than thinking in circles. Sure, it might lead you to the wrong answer. But this will get you to the right answer faster. Your Gut Is Normally Right Listening to your gut doesn't always come naturally, and it's a muscle that you can develop over time. However, you often do have a direction that you are leaning toward and you may not even know it. Sometimes, your gut instinct is not immediately apparent because you are so lost in analysis paralysis. Your intuition gets trapped in your busy mind.
It's also possible that you don't want to admit to yourself that you know what you really want, for a variety of reasons. The best way to uncover your gut feeling is to take some time to journal. Refrain from listing the pros and cons for each choice, since you've probably already done that. Instead, just free-write for ten or so minutes. Putting your train of thought on paper makes it very clear what you really want to do. Another way to discover what your gut thinks is to notice how you feel and act when you talk about your options. Does your face light up when you talk about one over the other? Do you become more animated and sit up straight? Do you find yourself frowning and speaking monotonously when describing another option? If it's hard to pick up on your own cues, try asking a trusted friend or family member what they think you want to choose. Be careful not to ask what they think you should do. Instead, ask them "What does it seem like I'm leaning toward?" When you hear the answer, it probably won't surprise you. It Feels Good To Be Decisive Making a decision can be incredibly empowering. You will feel like you have control over the trajectory of your life. You get to choose where you direct your energy. Being decisive is also crucial to great leadership. Your team will feel the strength of your decisiveness. The most confident leaders may still have a lot of doubt and insecurity below the surface. However, it's better to have direction instead of wavering in uncertainty. The consequences of inaction are worse than the consequences of making a decision. It's no easy feat. Being on the other side does wonders for your confidence. The Bottom Line There are a few things that you can remind yourself of when you find yourself stuck in analysis paralysis. You're going to be okay. You don't know until you try. Listen to your gut. It will feel amazing to just make a decision and run with it. Making a decision should be empowering, not excruciating. Being stuck in a decision holds us in a sort of purgatory that makes us feel weak, trapped, and stagnant. And eventually, it'll be less thinking, more doing.
https://medium.com/wholistique/less-thinking-more-doing-how-to-combat-analysis-paralysis-25e6f4d75f16
['Sophie A']
2020-08-25 12:40:28.857000+00:00
['Decision Making', 'Self Improvement', 'Personal Development', 'Productivity', 'Life']
What is this Geospatial Data Set and Why You Should Care?
What is this Geospatial Data Set and Why You Should Care? A detailed analysis of geospatial data sets, their types, and use cases Let us start with some statistics (and some myth-busting/jargon deconstruction). 80% of the data today has some kind of location component to it. Have you come across this slide/phrase before? Chances are you have. In fact, this phrase is used the world over to highlight the importance of Location Intelligence or GIS. But this is not new information; the phrase has been in use since 1985, according to this article. The world has since come a long way in terms of data quality, quantity, and business use cases, but the marketing gimmick remains. As a user, you'll find location data powering a host of services we use every day, from checking the ETA to your office based on traffic to visualizing pollution levels in different parts of your country. If you are a company that already uses location intelligence (think ride-hailing, ride-sharing, food delivery, ride rentals, e-pharmacy, fleet management, travel, and so on), you know how important geographic data is for you. But even for companies which are not traditionally location-based (or even tech-friendly), GIS data is useful for marketing analytics, revenue mapping, market studies, and so on. Some companies use it for personnel tracking or issues tracking. Others use it to derive strategic insights. We know companies of all sizes and industries are using it and using it well. At locale.ai this data is what we process and present to get you specific actionable insights. But to understand what YOU can do with it, it is important to answer the questions: What exactly is geospatial data? What are the types of datasets under the ambit of geospatial data that can add meaning to your analyses, and where do these come from? Definitions Strict terminology suggests that location data is information about the geographic positions of devices (such as smartphones or tablets) or structures (such as buildings or attractions). The word geospatial, meanwhile, is used to indicate data that has a geographic component to it. This means that the records in a dataset have locational information tied to them, such as geographic data in the form of coordinates, an address, a city, or a ZIP code. GIS data is a form of geospatial data. Geomatics is the discipline concerned with the collection, distribution, storage, analysis, processing, and presentation of geographic data or geographic information. Geospatial data can be either Cross-Sectional (measured at a specific point in time) or Longitudinal/Multitemporal (time-series) data that has geographical coordinates associated with it (latitude, longitude, and rarely altitude). Simply speaking, anything captured with location can be useful geospatial data. Datatype Variants The independent variable or the latitude, longitude, and altitude can be of the following types:
https://medium.com/locale-ai/now-what-is-this-geospatial-data-set-d1b0e539eb6d
['Mudit Gandhi']
2020-09-29 20:27:46.074000+00:00
['GIS', 'Maps', 'Business Intelligence', 'Location Intelligence', 'Geospatial']
You Need a Depression Buddy
Photo by Dương Hữu on Unsplash "Do you have anyone who actually understands you?" Throughout the last 7 years of my depression, I've been exceptionally lucky to have encountered many people who have shared my journey. Unfortunately, the first 21 years were not so blessed. Those who are familiar with the monsters of mental illness will also be familiar with the haunting feeling of loneliness that it brings. Even the most well-meaning comments can only serve to enhance this loneliness. My father's initial assessment was that I "needed more sunshine," and there are more people than I can count who have urged me to give up my medications. This doesn't even touch on the people who are not quite as generous in their goodwill. It is critical for my well-being and sanity that I get to speak to someone who actually understands what it means to be locked in a battle with your own brain. My first depression buddy took the form of a lovely coworker who was delightfully upfront about her struggles and her use of antidepressants. When I was 18, my mother told me that I couldn't take antidepressants because they would change my personality. Having someone that I admired be so willing to discuss her medications went a long way to removing the stigma around my own need for pharmaceutical intervention. I've tried to return this favour whenever possible. I'm not ashamed of my medications or my illness anymore. But I do need help. As small children, we learn the Buddy System as a way to protect us from the evils that may befall a lone camper on the way to a dark outhouse. Though grown, I am no less afraid of the things that lurk in the dark. While it used to be the unknown, now I am frightened of the things that I know far too well. Having a buddy helps to banish these monsters back to the darkness that they belong to. Where we used to hold hands and share a flashlight, now we share stories and exchange late-night messages. There is an inherent sense of vulnerability in opening yourself up to a depression buddy. You have to be able to admit your struggles and be willing to share them. Luckily, it is a glorious, symbiotic relationship and the other person is just as vulnerable. It is not a relationship between mentor and mentee; rather, it is two individuals who are simply trying to make their way through a life that has thrown them some invisible, but daunting, challenges. The person offering support one day may be the one requiring it the next. You don't only need a depression buddy, you need to be one. "Do you have anyone who actually understands you?" One of my dear friends asked me this question the other day. Luckily I was able to point back and say "yes, I have you". If you are reading this and find yourself sorely lacking in the buddy department, reach out to your friends and family. You never know who may also be struggling. If you don't feel comfortable with that, send me a message. I will be your depression buddy. Canadian Centre for Suicide Prevention 1–833–456–4566 Suicide Prevention Lifeline 1–800–273–8255
https://medium.com/invisible-illness/you-need-a-depression-buddy-2fab36ef07b0
['Dakota Montgomery']
2020-03-28 00:00:40.907000+00:00
['Mental Health', 'Mental Illness', 'Friendship', 'Support', 'Depression']
Data for Good — Zika Hackathon
Ari Kahn, genomics subject matter expert at TACC On September 9th more than 70 volunteers united in a Data for Good hackathon to explore new ways to use data in the fight against and prevention of the Zika virus. The event took place at the Texas Advanced Computing Center (TACC) at the University of Texas in Austin, Texas. Cloudera volunteers helped organize the event through its Cloudera Cares program for social causes. Cloudera partners Qlik and Bardess, who are already helping organizations use big data in the life sciences, genome, and pharma world to discover new ways to improve quality of life, also participated in the event. Through these hackathons we build awareness around the Zika virus and promote data sharing and collaboration. At a prior Zika hackathon on May 15th we analyzed data on Aedes mosquito habitats, breeding conditions, weather, and travel to the US from countries with Zika-infected mosquitos. The data was used to plot potential hotspots in the US and identified Miami and Houston as being at critical risk. This discovery was made before the recent outbreak of Zika in Miami. Zika cases in the US At the latest hackathon we saw more ambitious data for good projects, with research in the clinical and epidemiological areas of Zika, ranging from the identification of Zika in water samples using metagenomic data to exploring the Zika protein and docking to identify potential drugs to fight Zika virus infections. Volunteers with various skills and knowledge grouped and collaborated on these Zika-related projects. Zika Metagenomic Portal Frontend — Goal: to create a website portal for people who are collecting metagenomic data to submit that data to a service that would search it for traces of Zika, using the Agave API to connect the portal to the TACC Wrangler system and other computational resources. Zika Metagenomics Portal Backend — Goal: check water samples against Zika serum, using sample training sets to train the model. The team was blown away when they actually found Zika in publicly available data samples, a breakthrough achievement for the hackathon. Project Hydro — Goal: cross-comparison of Harris and Hidalgo counties in Texas, looking at various data sources (floodplain data, women of childbearing age, vegetation density) to assess the Zika risk posed to pregnant women based on location. Zika protein and docking research for drug discovery Medicines to Zika Protein — Goals: use high-performance computing to facilitate the docking process that is involved in Zika virus drug discovery; the team deviated from that to use ML for the identification of the most efficient drug. Zika Demo part 2 — Goal: add new datasets to the demo created at the first Zika Hackathon and provide a platform that can be used to promote Zika awareness and the need for open data sets, with help from our partners Qlik, Bardess, and Data.world. The identification of Zika in publicly available water sample data was a huge discovery and proof that these projects have the potential to make a significant scientific impact. These projects are hopefully the seed of future discoveries or insights. Yet one of the major challenges observed at the hackathons is the lack of public Zika data sets. For example, at the first hackathon we had to write scripts to scrape the CDC website for data. We reached out to the CDC to request access to the raw data in any file format, but the CDC does not share these data files publicly. New organizations like data.world are making access to data better with easier ways to share and discover datasets.
Data.world, which also participated in the hackathon, made Zika datasets available on its platform, but this is just the start; we need more organizations like the CDC, WHO, and ECDC to post their datasets in downloadable file formats to promote research and discovery. Data.gov is a great resource for public data sets, with over 186,467 datasets and growing, but there is not much on Zika; if you search "zika" today you will only find one result, and it is not a Zika-specific dataset. Texas Advanced Computing Center (TACC) at the University of Texas TACC is also making access to petabytes of data storage easier and promoting collaboration. TACC's systems, while mostly used by academia today, are also available to private enterprises. Home to some of the world's top supercomputers, TACC's systems, with support for Apache Hadoop, are hungry for data science projects and data for good research. The Zika hackathon was a huge success, and it was great to see all the volunteers collaborate, share knowledge, and unite in a data for good cause. If a small group of people can gather for a few hours and accomplish these results, just imagine what can be done by the health and life sciences industry at large. Cloudera is a big supporter of President Obama's Precision Medicine Initiative, and with hackathons like these we promote the use of new Big Data technologies for this type of research, as we saw at this hackathon using metagenomic data for Zika identification. Cloudera Cares volunteers The entire world can benefit from open source data platforms like Apache Hadoop with self-service analytical and machine learning tools like Apache Spark MLlib. Many times it is underdeveloped countries lacking resources where these data for good projects can make the most impact, and hackathons like these help promote awareness and examples of how to tackle tough social problems with data for good and open source data. Get involved in a data for good project near you and be part of the change.
https://medium.com/cloudera-inc/data-for-good-zika-hackathon-33fbf2c5995e
['Eddie Garcia']
2016-09-26 17:03:06.755000+00:00
['Big Data', 'Genomics', 'Zika Virus', 'Cloudera', 'Open Data']
Nine features that made SQL Server more than a traditional DBMS
Nine features that made SQL Server more than a traditional DBMS Hadi Fadlallah Dec 6 · 4 min read SQL Server Big Data Cluster (image source) Before the SQL Server 2012 release, this product was considered a database management system for small and medium enterprises. Starting with the 2012 release, after high-end data-center management capabilities were added, the database engine was no longer considered a product for small and medium enterprises only. In November 2019, SQL Server 2019 Big Data Clusters were introduced, giving users the ability to build a Big Data ecosystem. This article will briefly mention nine features, added starting with SQL Server 2008, that make SQL Server more than a traditional database management system. 1. Data compression This feature was added in SQL Server 2008 to be applied to tables and indexes. There are two types of compression: Row compression: Alters the physical storage format of the data based on its data type. Page compression: Applies row compression, plus two compression operations (prefix and dictionary) which are based on the data syntax. Using the data compression feature helps reduce database size and I/O workloads, since the number of data pages needed to store data is decreased, which means that the executed tasks and queries read fewer pages from the disk. The "side effect" of this feature is that it requires more CPU resources to decompress data before it is consumed. For the columnstore tables and indexes introduced in SQL Server 2012, data compression is always applied. 2. Columnstore indexes Columnstore indexes are much like column-based NoSQL databases. They are designed to store massive amounts of data, especially fact tables, to be used in analytical operations. This feature was improved in SQL Server 2016, where query performance was increased to ten times that of rowstore tables. Besides, since data compression is always applied to columnstore indexes (as we mentioned before), the data is stored within fewer pages than uncompressed data. 3. Memory-optimized tables Also known as In-Memory OLTP tables. Similar to standard tables, these tables are stored on the hard drive (durable disk), but they also have a copy within the active memory (a hidden copy). These tables are optimized to perform faster transactions. Instead of using table locks while running transactions, these tables use row versioning to keep the original data until transaction commit. Besides, these tables are not fully logged. 4. JSON support JavaScript Object Notation (JSON) is an open universal data format that uses human-readable text to store data objects in key/value pairs. Recently, this notation became widely used to exchange data through APIs and to store data within NoSQL databases. In SQL Server 2016, JSON became supported, letting developers combine NoSQL and relational concepts by storing documents formatted as JSON text within relational tables. Four main JSON functionalities were added, allowing developers to: Parse JSON text and read or modify values Transform arrays of JSON objects into table format Run any Transact-SQL query on the converted JSON objects Format the results of Transact-SQL queries in JSON format (a short T-SQL sketch follows below) 5. Polybase Polybase is a feature introduced in SQL Server 2016 that allows querying data from external data sources such as Hadoop using T-SQL. Queries are executed without the need to create Linked Server objects, which have lower performance. In SQL Server 2019, new external data sources became supported, such as Oracle, Teradata, and MongoDB.
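As a short illustration of the JSON support described above, here is a minimal T-SQL sketch; the table and column names are made up:

-- Query JSON text stored in a plain NVARCHAR column as if it were relational.
SELECT o.OrderId, j.CustomerName, j.Total
FROM Orders AS o
CROSS APPLY OPENJSON(o.OrderInfo)
WITH (
    CustomerName NVARCHAR(100) '$.customer.name',
    Total DECIMAL(10, 2) '$.total'
) AS j;

-- And the other direction: format query results as JSON text.
SELECT OrderId, Total
FROM Orders
FOR JSON PATH;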
6. Machine Learning Services When first introduced in SQL Server 2016, this feature was called R Services. In the 2017 release, the name was changed to Machine Learning Services since Python became supported. Machine Learning Services allow running Python and R scripts over the data to perform analytics and machine learning algorithms using the popular packages. One of the main advantages is that the script execution is performed within the database engine, without moving data outside SQL Server or over the network. 7. Graph support This feature was introduced in SQL Server 2017, allowing users to create graphs similar to NoSQL graph databases (nodes and edges). Even if popular NoSQL graph databases (such as Neo4j) are recommended for storing graphs, in some cases it is useful to have the graph data stored within the SQL database engine to facilitate the data integration process (a minimal sketch appears at the end of this article). 8. SSIS scale-out To perform distributed data integration operations, SQL Server Integration Services (SSIS) scale-out was introduced in SQL Server 2017. It allows the execution of SSIS packages across multiple computers, while in previous versions implementing a distributed approach was very complicated. 9. Docker support Starting with SQL Server 2017, a Docker image for SQL Server was provided by Microsoft, allowing users to install SQL Server on a wider range of operating systems. This feature was the main building block that allowed building the Big Data Clusters introduced in SQL Server 2019, since it gives the ability to connect multiple SQL Server nodes (Docker containers) using Kubernetes (a container orchestrator).
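To make the graph support mentioned above concrete, here is a minimal T-SQL sketch of node and edge tables; the table names and the query are made up:

-- Node and edge tables (SQL Server 2017 and later).
CREATE TABLE Person (Id INT PRIMARY KEY, FullName NVARCHAR(100)) AS NODE;
CREATE TABLE Friends AS EDGE;

-- Find everyone Alice is connected to through a Friends edge.
SELECT p2.FullName
FROM Person AS p1, Friends AS f, Person AS p2
WHERE MATCH(p1-(f)->p2)
  AND p1.FullName = 'Alice';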
https://medium.com/munchy-bytes/nine-features-that-made-sql-server-more-than-a-traditional-dbms-342baa60eaed
['Hadi Fadlallah']
2020-12-06 23:27:46.006000+00:00
['Sql', 'Big Data', 'Relational Databases', 'Dbms', 'Sql Server']
Design Jargons, UX Smells, and a special collection of articles on Chatbots
What's hot in UX this week: Sagi Shrieber is a designer, writer, & entrepreneur. Co-founder of Hacking UI, founder of PixelPerfectMag, and a UX mentor for startups at Google Campus. What are the new words you see yourself using more often recently? Any new terms that were not part of your vocabulary two years ago? Sagi: Number one is Micro-Interactions. Back then it was all about creating prototypes. Animations were sometimes done in After Effects. Recently (in the past year or two), a lot of new tools came out which were only about that. Tools like Principle and Origami are now used to communicate transitions & animations better than ever… Read full interview →
https://uxdesign.cc/design-jargons-ux-smells-and-a-special-collection-of-articles-on-chatbots-eadba279b05e
['Fabricio Teixeira']
2017-02-05 23:24:00.293000+00:00
['Design', 'Hot This Week', 'Chatbots', 'User Experience', 'UX']
How This Tire Company Created the World’s Most Famous Food Guide
How This Tire Company Created the World's Most Famous Food Guide Marketing lessons from a food guide disguised as a tire repair guide Photo by Jametlene Reskp on Unsplash Whether you're a budding restaurateur or just someone who loves discovering good food, I'm sure you've heard about the Michelin Guide. It's the most anticipated and renowned food handbook of the century. Every year when the Michelin Guide is updated, foodies go nuts. And for perfectly good reason. The guide represents the highest standards and quality of food served all across the global culinary landscape. Food in Michelin-Star restaurants is often of exquisite quality and aesthetic plating. Photo by Les Amis. Some critics even go so far as to say that it's the Bible of the culinary world — it's really that influential. Its prestigious Michelin Star ratings can also be a huge game-changer for any restaurant. Those that receive a Michelin Star for the very first time can expect a flood of customers to their establishment overnight. Likewise, losing a Michelin Star can be devastating to the businesses of restaurateurs and chefs alike. It's akin to having your gold medal snatched back right off your neck. It was even reported that in 2013, superstar TV chef and decorated Michelin Star awardee Gordon Ramsay once broke down when he lost two Michelin Stars for his New York-based restaurant, The London. Even Gordon Ramsay, one of the world's most successful chefs, reveres the Michelin Guide. If even Hell's Kitchen's resident bully quivered in the presence of the Michelin Guide, one can only imagine the authority that the guide has over restaurants everywhere. So how exactly did this impressive handbook come to be created in the first place?
https://medium.com/better-marketing/how-this-tire-company-created-the-worlds-most-famous-food-guide-370376bcf01a
[]
2020-10-27 12:56:17.146000+00:00
['Branding', 'Food', 'Culture', 'History', 'Marketing']
How to use the VGG16 neural network and MobileNet with TensorFlow.js
In this article, we will build a deep neural network that can recognize images with high accuracy on the client side using JavaScript & TensorFlow.js. I'll explain the techniques used throughout the process as we go along. We will be using VGG16 and MobileNet for the sake of the demo. If you need a quick refresher on TensorFlow.js, read this article. Below is a screenshot of what the final web app will look like: Final Web App To start off, we will create a folder (VGG16_Keras_To_TensorflowJS) with two subfolders: localserver and static. The localserver folder will contain all the NodeJS server code, and the static folder will have all the CSS, HTML, and JavaScript code. Screenshot Showing the Folder structure Note: you can name the folders and files whatever you like. Server Configuration We will manually create a package.json file with the below code: { "name": "tensorflowjs", "version": "1.0.0", "dependencies": { "express": "latest" }} The package.json file keeps track of all the 3rd party packages which we will use in this project. After saving the package.json file, we will open the command line and in it we will navigate to the localserver folder. Then we will execute the following: npm install Command Line for macOS After doing so, NPM will execute and ensure that all the required packages mentioned in package.json are installed and ready to use. You will observe a node_modules folder in the localserver folder. We will then create a server.js file; it contains the NodeJS code that hosts the local server which will run our web app (a minimal sketch appears at the end of this article). Client Configuration Next, we will create predict_with_tfjs.html. Once the HTML code is done, we will create a JavaScript file and call it predict.js (a hedged sketch of it also appears at the end of this article). Model Configuration Once the client and server side code is complete, we now need a DL/ML model to predict the images. We export the trained models (VGG16 and MobileNet) from Keras to TensorFlow.js. Save the output in folders called VGG and MobileNet, respectively, inside the static folder. Screenshot for Python Defining the Classes We will keep imagenet_classes.js inside the static folder. This file contains a list of all the ImageNet classes. You can download this file from here. Testing the Code After all the setup is done, we will open up the command line, navigate to the localserver folder, and execute: node server.js The server should start and print that it is listening. After the successful implementation of the server-side code, we can now go to the browser and open http://localhost:8080/predict_with_tfjs.html. If the client-side code is bug-free, the application will start. Then you can select a different model (VGG16 or MobileNet) from the selection box and do the prediction. GitHub Repository for the project: You can watch the complete code explanation and implementation in the below video: Source : ADL # Video no 1 Source : ADL # Video no 2 My next post will cover financial time series analysis using TensorFlow.js… Stay tuned. Best of luck! 👍
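The embedded code for server.js and predict.js did not survive in this text, so here are two hedged sketches of what those files might look like, based only on the article's description. The Express setup, the element id, the model path, and the use of the current tf.loadLayersModel API are all assumptions; the original gists may differ. First, server.js, hosting the static folder on port 8080:

// server.js: a minimal Express server that serves the static folder.
const express = require("express");
const path = require("path");

const app = express();
app.use(express.static(path.join(__dirname, "../static")));

app.listen(8080, () => {
  console.log("Server running at http://localhost:8080");
});

And a sketch of the prediction routine in predict.js, assuming imagenet_classes.js exposes an IMAGENET_CLASSES array:

// predict.js: load a converted Keras model and classify the chosen image.
async function predict(modelPath) {
  const model = await tf.loadLayersModel(modelPath); // e.g. "/VGG/model.json"
  const image = document.getElementById("selected-image");
  const tensor = tf.browser.fromPixels(image)
    .resizeNearestNeighbor([224, 224]) // both demo models expect 224x224 input
    .toFloat()
    .expandDims();
  // Note: a real app would also normalize pixel values the way each model expects.
  const probabilities = await model.predict(tensor).data();
  // Pair each probability with its ImageNet label and keep the top five.
  return Array.from(probabilities)
    .map((p, i) => ({ probability: p, className: IMAGENET_CLASSES[i] }))
    .sort((a, b) => b.probability - a.probability)
    .slice(0, 5);
}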
https://medium.com/free-code-camp/how-to-use-the-vgg16-neural-network-and-mobilenet-with-tensorflow-js-ea4c76d0b8e0
['Akshay Lamba']
2018-08-10 18:26:09.649000+00:00
['JavaScript', 'TensorFlow', 'Artificial Intelligence', 'Deep Learning', 'Technology']
It’s Only a Paper Moon
It's only a paper moon, it holds my dreams and more. It's only a paper moon, It is a wish made from floor. It isn't as bright as the moon, it's paper after all. It's a fake moon on my wall, but still I make my wish. Moonlight, moon bright, first paper moon I see tonight, I wish I may, I wish I might, have this mighty wish tonight. Grant me peace upon my bed, when I lay down my sleepy head. Hold me in your sweet night light, that I may wake and be bright. May you guide me through this night, may you always hold me close, gather up my thoughts, my prayers & keep bad dreams at bay. It's only a paper moon, It preserves my secrets tight, It illuminates my life, It contains my heart's delight.
https://medium.com/poets-unlimited/its-only-a-paper-moon-939da02d2166
['Debbie Aruta']
2019-05-31 17:44:34.145000+00:00
['Moon', 'Writing', 'Poem', 'Writer', 'Poetry']
Freud’s Decision Hack Will Change Your Life
A while back I was in a relationship I wanted to end. I needed to decide whether to wait and see if things improved or to just call an end to it and upset the person who loved me. It took me weeks to make the decision. I didn't sleep well. I had a recurring night terror in which I saw enormous shadowy spiders crawling up my bed toward me. Half-awake, I'd throw my sheets off the bed and scramble to switch on the lamp only to realise I was still dreaming. When I finally made the decision to end the relationship, the spiders stopped. We are often faced with difficult decisions. We lose sleep when dilemmas we face churn over in our minds. We rehearse each eventuality with our thoughts and repeat them over and over, hoping that exhaustive analysis will eventually give us the right path to take. Decisions are scary. Some of us live our lives avoiding them, and perhaps suffer all the more for it. But there are ways to make a decision more quickly. By tapping our unconscious mind — the place where the spiders came from — we can take a deeper perspective on the decision. There are simple ways of doing this, and below I'll give you one technique that will help. Sigmund Freud photographed by Max Halberstadt in c.1921 (public domain, source: Wikipedia) The Unconscious Mind Tough decisions cause a lot of trauma. This is partly because the thinking involved is so exhausting. Think of your conscious mind as being like a spotlight in a dark room filled with everything that's happening in your life. It'll shift from one thing to another, but that's not to say nothing is happening in the darkness. Most of our mental processing happens in the unconscious mind. When we drive, work, or play a sport, we're often in a flow-like state where our unconscious does all the hard work. While we may not be actively considering our options in many given situations, the deeper (and far greater) mechanisms of the unconscious are at work on these considerations. The term "unconscious" was coined by the German philosopher Friedrich Schelling but is most associated with Sigmund Freud, the inventor of psychoanalysis. Freud's work was revolutionary: it gave us the notion that our unconscious mind — a part of us that we have very little control over — determines so much of our behaviour, including our decisions. Ideas and memories buried in the non-conscious mind, Freud believed, could account for our fears, phobias, neuroses, desires and pleasures. Before Freud, it was widely assumed that human beings were perfectly rational, that our decisions were based entirely on conscious calculations. Freud's work showed that this is not even half the story. The mind, Freud's followers contend, is like an iceberg. Only a small part of it is exposed to conscious introspection. We can only feel the force of the unconscious indirectly, such as its workings in dreams, psychological symptoms, slips of the tongue and the associative way we interpret things. A classic way to get insight into the unconscious is the Rorschach test technique, named after Swiss psychologist Hermann Rorschach, by which a patient is asked to interpret what they see in a random inkblot. The "free association" of the patient is interpreted by an analyst to reveal something that may not be "known" to their conscious mind. Latent traumas and buried ("repressed") memories can be surfaced using these techniques. But the unconscious mind is not just a place of lurking fears and neuroses. It's a plentiful well-spring of creativity and wisdom.
Inasmuch as there are ways to reveal the symptoms of the unconscious mind's darker aspects, there are ways to tap its enormous creative and intellectual potential. Many a creative will tell you that it's best to think about a problem for a while, then let it go and get on with something else. The unconscious will work on the problems beneath the threshold of your attention. You will find that a solution will pop into your head at unexpected moments. Meditation, doodling and journaling are also ways of opening up the power of the unconscious. How we interpret things as simple as an inkblot or a coin toss reveals a lot about us. This is an original inkblot used by Hermann Rorschach to get an insight into his patients' unconscious thoughts and feelings. Do you see a butterfly or a bat? Is it beautiful or ominous? (Image: public domain, source: Wikipedia) The Coin Toss What's this all got to do with tough decisions? Well, decisions are exhausting to our conscious minds. We'll likely be thinking hard about decisions because these junctures in our lives require a lot of speculative thinking. Making sense of the chaos around us is hard enough, and it's so much harder to make sense of the potential chaos to come on either side of a dilemma. But while we may be turning things over in our minds — perhaps even having sleepless nights — all that information is making its way into the unconscious mind where it is also being processed. To make a decision as tough as a job relocation, a relationship change, or a career change, it'd do us a lot of good to bring our unconscious mind's emotional and intellectual depth to a dilemma. It might even save us a few sleepless nights. When faced with a big decision, many people have been tempted to leave their decision to fate. A common way of doing this is tossing a coin: "heads, I take the work transfer and relocate to a new country; tails, I stay put." This method is reckless and could do a lot of harm. It is said (but unproven) that Freud had a much better way of helping people make decisions using a coin toss. Whether or not Freud discovered the method, it's a powerful way to bring your unconscious to bear on the decision-making process. Toss a coin as if the coin is deciding your choice for you. Now, don't act on the result of the coin toss but instead decide how you feel about the result. The coin toss forces you to consider how you would feel if the decision was made for you by the force of fate and circumstance. The coin flip clarifies your feelings about the decision. Was the result what you hoped for? Are you disappointed? While the decision-making process forces us to use our conscious mind to speculate and calculate the outcomes of our choice, the coin flip suddenly brings our unconscious into play. This is the full force of an intuitive "gut" feeling that is impossible to describe, yet so powerfully emphatic. It may not make a decision simple, but it will bring to bear your true feelings and help you make your choice based on your emotional reaction to the result. Thank you for reading. I hope you learned something new. If you enjoyed this article, you may also like my article on Marcus Aurelius:
https://medium.com/the-sophist/freuds-decision-hack-will-change-your-life-c27ad4f183fd
['Steven Gambardella']
2019-12-16 07:24:19.366000+00:00
['Philosophy', 'Self', 'Self Improvement', 'Psychology', 'Life Lessons']
Flutter State setState, context, widget and mounted
The Classic State In the gif above, you see we're going up through the three pages, incrementing their individual counters along the way. There are a number of buttons offered to navigate up and down the routing stack. As anticipated, after returning back down the pages and then up again, the counter on the third page is reset to zero. This makes sense. The app had retreated back down the routing stack, and the State object retaining the counter (the state) on the third page was terminated (its dispose() function called). Perfectly normal. However, notice the second page is keeping its counter value?? How is it doing that?! Further note, on the second and third pages, you can increment the counters on the previous pages! On the third page, for example, there are two buttons to increment both the second page and the home page. Well, that's neat. Now how is that done? Finally, pressing that 'Home Page New Key' button found all the way up on page three results in the counter on the first page (the Home page) being reset to zero! Now, what's going on there? Granted, this is such a rudimentary example, but it does demonstrate some fundamental aspects involved in the State Management of a typical Flutter app. I'll put it to you that all the frameworks offered today use the same basic mechanism to provide such capabilities (some, albeit in a cumbersome and rather round-about way, in my opinion): they call the setState() function on a particular State object. Let's examine the first State object responsible for displaying a screen to the user. In this case, it's the 'Home page' greeting the user with the first counter. Below is a screenshot of that State with little red arrows highlighting points of interest. What do you see? First and foremost, you can see the example app is using a subclass of the State class called SetState. Yes, this class is the main point of interest in this article. Certainly using a subclass of the State class now relieves us of that one annoying warning message about using the setState() function. You'll find all the frameworks out there have a 'subclass' of the State class in one form or another. All of them supply a means to call the setState() function on a particular State object — whether the developer knows exactly which State object that is or not is another matter. What else do you see? Well, there's 'the separation of work' that I personally live by when developing code. More specifically, there's always a separate and dedicated class (_HomePageBloc in this case) that's responsible for the actual 'business logic' involved in the app, while the build() function in some widget somewhere is responsible for the interface. Further, there's the degree of abstraction that I always implement as the API between these separate areas of responsibility. As an example, the actual 'counter' here in this app is concealed by the class property, data. I follow the consistent practice of naming instance fields after the parameter used by the receiving Widget. As it happens, the Text widget's first parameter is named data. All this is a consistent approach — dare I say, a design pattern. Now, in keeping with the topic at hand, let's examine two points of interest in particular. What I'll be presenting from now on will be concepts subtle in nature but very important characteristics to uphold when writing good code. The three Bloc classes (no direct relation to the BloC design pattern) you'll find in this example are indeed the 'Business Logic Components' for the app.
Each has its own little bit of responsibility (its own little bit of 'state' to manage). They're also the app's event handlers — each responds to particular events that may be triggered by the user or by the system (the phone) itself. The screenshot below presents the first Bloc, somewhat named after the State object it's to work with. It even explicitly takes in the type of that State object it works with. At a glance, you can see this Bloc class is for the home screen. Now, why is the Bloc simply taking in 'the type' of the State object? Why not take in the very State object itself? Seems to be another round-about way of doing things, no? Lastly, note the Bloc class, with its leading underscore, is found in the same Dart library file. For demonstration purposes, this was the case. However, in practice, I'd suggest the 'business logic' deserves its own Dart file. There's another question here. Why was an altogether separate function called onPressed() created in the State object? See below. I mean, in some design patterns offered today, it's customary to simply find the corresponding 'onPressed' VoidCallback function in the Bloc class and use that instead, as depicted in the screenshot below. However, when you do see such functions in a State object, know that this object is now providing an API. Therefore, it must be necessary for external players to also trigger an event in this object. You'll soon realize who these players are in this particular example app. Hint: Buttons. Let's backtrack a bit and take a deep dive into the instantiation of this first Bloc class, _HomePageBloc. In the screenshot below, you can see we've jumped ahead a bit to examine this Bloc class. In fact, we'll examine all three of them. You can see all three Blocs in this app take advantage of inheritance, extending from the common parent class, _CounterBloc. After all, they all pretty much do the same thing, and so that function is found in one parent class — working with an integer called counter. Again, it would have been better to have these Blocs in their own separate Dart library files. For example, if they were in their own file, you'd then be free to extend a different parent class altogether in the future — displaying string 'word pairs', let's say, without affecting the rest of the app. The parent class, _CounterBloc, should be in its own Dart library file as well, preferably with a more generic class name. A degree of abstraction always makes for easier maintenance of an app in production. Regardless, note the first Bloc class, _HomePageBloc, doesn't know the type of State object beforehand, and so that type is passed in as a generic type. While in the second Bloc class, _SecondPageBloc, the State type is known and is explicitly specified. Which approach to use, of course, depends on the circumstance. At least, you're free to use either. You have that option. Further note, the second class utilizes a factory constructor, and that's how it retains its count even when you retreat back down the routing stack (a minimal sketch of this follows below)! In every Flutter app you write, returning to a previous page will remove a Page from the stack, and if it's represented by a StatefulWidget, that means the StatefulWidget's State object will be disposed of. Every time. Unless you do something about it. You'll note, when it comes to the second page, the 'State Management' has been allocated to a separate class altogether and not left to the State object.
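Here is a minimal Dart sketch of that factory-constructor approach, reconstructed from the description above since the screenshots are not reproduced in this text; the exact original code may differ:

// A Bloc that survives the disposal of its State object: the factory
// constructor always returns the same instance, so the counter lives on
// for the life of the app.
class _SecondPageBloc {
  factory _SecondPageBloc() => _this ??= _SecondPageBloc._();
  _SecondPageBloc._();
  static _SecondPageBloc? _this;

  int counter = 0;

  void onPressed() => counter++;
}

Because the caching is hidden inside the factory, callers simply write _SecondPageBloc() and always receive the one live instance.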
Following the Singleton design pattern, the _SecondPageBloc class remains in resident memory for the life of the app. Thus it keeps its counter (its state) and assigns a brand new State object to itself (the second arrow) whenever a user comes back to that page. Now, let's look at the third and final Bloc class, _ThirdPageBloc. Note that the instance field, state, is successfully overridden with a getter. THIS IS HUGE! Do you know why? It's huge that you can successfully override a mutable instance field with an immutable getter! Since a getter is essentially 'instantiated' only when it's first used, you can provide the 'type of state', but you're not obligated to instantiate a reference to that State right then and there! You can wait. Possibly in some situations, the State is not to be instantiated at that point — it may not be available for some reason. _CounterBloc class We might as well take a look at that parent class to the Blocs now. Again, it's an event handler. Such a class is required to respond to events. In this case, the most profound event is when a user taps on the plus sign to increment the counter. Thus, the most important capability of this class is to then notify the appropriate State object when it's completed responding to that event. As you know, it does have access to the ever-important setState() function for a particular State object. It takes advantage of that access and even defines its own setState() function — for any other modules to then call to notify the app and reflect a change. Finally, it offers a corresponding dispose() function to be called in its State object's own dispose() function when the State object itself indeed terminates during the course of the app's lifecycle. It's all nice and compact. However, we could do better. Note, such abilities, on the whole, should be present in any and all modules that are to work with a SetState object in this fashion. Such abilities should be readily available to any class you may define to contain the 'business logic' of an app. Such a circumstance would therefore be a good candidate for a mixin, no? A sketch of what that mixin might look like appears at the end of this article. That parent class, _CounterBloc, has now been changed — focusing truly now on the one lone functional responsibility assigned to it in this particular app. When it increments its counter, it then notifies the rest of the app with the setState() function. It now takes in the mixin using the keyword with, resulting in code generally being more modular. Further, as you see below, there's a higher cohesion in the resulting class. Navigating The State Looking at the third page in the routing stack, we see it presents to the user five buttons. The first three buttons literally affect 'the state' in three separate regions in the app. The first button calls upon a provided Bloc object to respond to the event of incrementing that page's counter. The next three buttons involve the State objects from 'previously visited' areas of the app. Each is responsible for retaining its own state. Note, the names of the VoidCallback functions of the State objects, homeState, and secondState: onPressed. Should the fourth and last State object also have such a function?
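Since the mixin screenshot is not reproduced in this text, here is a hedged Dart sketch of what such a mixin might look like, given the description above: it holds a reference to a State object, exposes a setState() of its own, and offers a matching dispose(). The name and details are assumptions:

import 'package:flutter/widgets.dart';

// Lets any 'business logic' class refresh the State object it works with.
mixin SetStateMixin {
  State? _state;

  // Record which State object this logic component should refresh.
  void pushState(State state) => _state = state;

  // Rebuild the widget tree if the State object is still mounted.
  void setState(VoidCallback fn) {
    final state = _state;
    if (state != null && state.mounted) {
      // ignore: invalid_use_of_protected_member
      state.setState(fn);
    }
  }

  // Drop the reference when the State object itself is disposed of.
  void dispose() => _state = null;
}

A Bloc would then take this in with the keyword with and simply call setState() after updating its counter.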
https://andrious.medium.com/state-management-in-flutter-df291824b309
['Greg Perry']
2020-12-21 05:44:44.190000+00:00
['Mobile App Development', 'Android App Development', 'Flutter', 'iOS App Development', 'Programming']
Israel-based Startups Eliminating Bottlenecks in the AI Workflow
Azrieli Center in Tel Aviv. Photo by Ted Eytan via Flickr AI Infrastructure: background, trends, and insights through the lens of Israel's startup ecosystem Introduction Over the past few years, artificial intelligence has played a major role in defining startup trends. Across all industries, the general evolution has shifted from computing based on human instruction to computing based on self-learning. Research and advisory firm Tractica even predicted that annual worldwide AI revenue would grow from $643.7M in 2016 to $38.8B by 2025. However, as new technologies are implemented across all domains, we need to consider the following: during a gold rush, sell shovels. Thus, we begin to see an opportunity for artificial intelligence infrastructure. Essentially, along with a new class of software — here, artificial intelligence and its subset, machine learning — comes new infrastructure to support it. Why the push for AI infrastructure? Traditionally, companies use a software-defined infrastructure (SDI) to support their dynamic environment. A typical SDI, like a cloud-based infrastructure, is built on the evolution of scripts or program code. It works independently of a specific hardware environment, and is designed so that it can control an infrastructure largely without human interaction. However, SDI has its limitations, especially as the technologies that companies use are beginning to transform and evolve. Software-defined infrastructure is constrained by static source code, which also means that its functioning largely depends on the skills of the developer who writes the particular code. Additionally, an SDI is unable to understand or learn about its own environment. SDI, in essence, is unintelligent; it lacks flexibility. In contrast, AI infrastructure is an intelligent upgrade: it's complete with AI and ML algorithms that can "learn" from the information it gains over time to build frameworks that can keep up with the new data. AI infrastructure can: Analyze the dynamic behavior of the existing infrastructure and learn to understand its workings by itself. Eliminate errors in the environment by constantly monitoring the functioning of the infrastructure, fixing issues when they arise. Allocate resources when required by the workload and de-allocate them when they are no longer required. We've already begun to see the shift towards AI infrastructure: from June 2018 to June 2019, there were 22 new Israeli startups founded in the sector of AI/ML infrastructure. However, major cloud providers already all have some kind of involvement, the most prominent example being Google with TensorFlow, an open-source machine learning library for research and production. So, with major multinationals already invested and active in the industry of AI infrastructure, startups should consider for themselves: is there a true startup opportunity here? More specifically, is there a true opportunity for an Israeli startup? The current, nearly unanimous, answer is yes — startups do have an opportunity to become active in this space. Since 2012, there has been a 300,000x increase in the amount of compute used in the largest AI training runs, suggesting a sizeable opportunity for startups to aid in the efficiency of the artificial intelligence workflow.
Specifically, Israeli startups seem effectively poised to be at the forefront of this disruption: Israel is a key infrastructure innovator (think: the USB flash drive, the Intel 8088, VoIP, etc.), so we can reasonably expect Israel to continue to innovate in this next generation of computing infrastructure. However, as the industry is only just beginning to develop, startups need to address the difficulties of an artificial intelligence practice to truly understand where problems arise and, thus, where opportunities lie to innovate. In this report, I’ll address the key loci where I have found bottlenecks in the artificial intelligence workflow. From here, I’ll introduce opportunities for startups to solve the related infrastructure issues, and point out Israeli startups already active in the domain. Where applicable, I’ll share my insights on where I expect to see a rise in startup activity within a particular domain or service. Additionally, before continuing, it will be helpful to lay out some terminology that will be used throughout this report, and is commonplace in other industry discussions of AI/ML infrastructure:
Terminology
- AI refers to the larger topic that includes artificial intelligence (AI), machine learning (ML), and deep learning (DL).
- AI frameworks provide data scientists and developers with the building blocks to train and validate AI models without having to go into the low-level programming of the underlying algorithms. Popular frameworks include TensorFlow (mentioned above) and Caffe.
- GPU refers to graphical processing units, which serve as dense parallel computing engines.
- PoC refers to proof of concept, which demonstrates a system’s ability to perform an activity. In this case, a PoC would be used to demonstrate that a solution based on this architecture delivers the necessary benefits.
- HDFS refers to the Hadoop Distributed File System, a common scale-out file system using storage-rich servers for analytics and machine learning.
The Artificial Intelligence Workflow The AI workflow is a detailed process cycle, and there are issues at multiple points that prevent artificial intelligence technology from reaching its full efficiency and potential. To identify these issues, I should first lay out the general cycle and organization of the AI workflow:
1. Data collection involves installing and configuring the data to be used. The data can be collected over a number of years, and may be identified from a variety of sources:
- Traditional business data
- Sensor data
- Data from collaboration partners
- Data from mobile apps and/or social media
- Legacy data
2. Data preparation can take weeks or months to complete. The quality of an artificial intelligence model is directly related to the quality of the data used during training: as is often said in the artificial intelligence space, bad data leads to bad inferences.
Within the context of AI, data can be separated into a few buckets:
- Data used to train and test the models
- Data that is analyzed by the models
- Historical or archival data that may be reused (this data can come from a variety of places: databases, data lakes, public data, social media, and more)
3. Data training and optimization typically takes days to weeks. To train an AI model, the training data must be in a specific format, and each model has its own format. As a result, data preparation is often one of the largest challenges — both in complexity and time — for data scientists. In fact, many data scientists claim that over 80% of their time is spent in this data preparation phase, and only 20% on the actual art of data science.
4. Deployment and inference of the data typically takes only seconds to return results.
5. Accuracy preservation and improvement reveals how the AI workflow is an iterative cycle: the output of the deployment phase is used as a new input to the data collection phase, so the model constantly improves in accuracy.
AI infrastructure is important because the success of moving data through this 5-step pipeline depends largely on the quality of the infrastructure. Bottlenecks in the Workflow Now that I have laid out the lifecycle of the artificial intelligence workflow, I can address the challenges preventing artificial intelligence technology from reaching maximum efficiency. Based on my own research and the research of industry professionals, there seem to be four main issues. I will outline these issues, highlighting where AI infrastructure can be used to streamline the process, and introducing areas where, from my personal insights, work from startups still needs to be done to speed up the workflow:
1. The artificial intelligence workflow is compute intensive.
2. Training and developing AI models requires an exorbitant amount of trial and error, with hundreds, often thousands, of experiments.
3. Data annotation is often so time-intensive that it creates a bottleneck.
4. Machine learning as a service is in high demand, since there aren’t enough trained data scientists to do the work manually.
I will address these issues in isolation, beginning with the first: the artificial intelligence workflow is compute intensive, meaning there isn’t a current infrastructure robust enough to deal with machine learning — specifically deep learning — operations at scale. As a result, startups and established companies alike have attempted to introduce their own solutions. For example, established companies like Google, Microsoft, Alibaba, and Intel have created their own hardware through AI-dedicated chipsets. Startups like Habana and Hailo have followed suit in this hardware-driven thought process and attempted to bring their customized hardware to market. However, a parallel solution exists, which I find to be more cost-effective, scalable, and innovative: instead of creating new hardware, simply develop software that optimizes the existing hardware for machine learning tasks. We see this already in Uber’s open sourcing of Horovod, a distributed training framework for TensorFlow and other frameworks with the goal of making distributed deep learning fast and easy to use. Additionally, this hardware-optimizing software is seen in Google’s AutoML, a suite of machine learning products that enables developers to train models specific to their business needs. The second issue in the artificial intelligence workflow pertains to data science.
One part of this work involves running hundreds, often thousands, of experiments with a myriad of different parameters in order to reach the optimal result. This requires an exorbitant amount of trial and error, which isn’t necessarily scalable for robust models or large amounts of data. Israeli startups have already begun attempting to solve this issue — some major players in the startup space include:
- Allegro: Provides a complete product lifecycle management solution for AI development and production, beginning with computer vision.
- Cnvrg.io: Organizes every stage of a data science project, from research to collection to model optimization.
- Comet: Allows data scientists to automatically track datasets, code changes, and production models to improve efficiency, transparency, and reproducibility.
Other similar startups in the space, and their descriptions, can be found here. Additionally, in April 2019 we saw a $13M investment in the Israeli startup Run.AI, which provides a virtualization and acceleration solution for deep learning (the software virtualizes many separate compute resources into a single giant virtual computer with nodes that can work in parallel). A third issue in the artificial intelligence workflow is the annotation, or tagging, of data. Companies use hundreds of thousands — sometimes millions — of data points to train their models, meaning data annotation can often be a bottleneck in the workflow. Thus, startups have a unique opportunity to automate this data preparation process instead of just relying on cheap labor (think Amazon’s crowdsourcing marketplace, Mechanical Turk): two examples in the Israeli startup ecosystem are Dataloop, a platform for data management and human-in-the-loop (HITL) computer vision, and DataGen. I find DataGen, a pioneer in the synthetic data creation space, to be particularly interesting — DataGen creates synthetic data, realistic enough to effectively train a model, instead of sourcing existing datasets. This is beneficial because companies are often unable or reluctant to use client data because of privacy issues, and synthetic data allows them to use artificial data with the same characteristics as their real data. Another notable benefit is that this type of synthetic data comes pre-annotated: the process of annotating data is incredibly time-consuming and expensive. It would only make sense to see a significant rise in the adoption of synthetic data in the coming years, and with it, a rise in startups doing work similar to that of DataGen. Another workaround for the issue of data annotation is the concept of unsupervised learning, which does not require labeled data. Instead, unsupervised learning takes in the input set and finds patterns in the data, both organizing the data into groups (clustering) and finding outliers (anomaly detection); a minimal sketch of both tasks follows below. Within unsupervised learning is a particularly fascinating development in the AI infrastructure space, also utilized in DataGen’s technology: Generative Adversarial Networks (GANs). Here, two networks battle each other: one network — the generator — is tasked with creating data to trick the other network — the discriminator. From my research, I have found unsupervised learning to be an innovative development in the artificial intelligence space because it can sort data into groups that humans may not consider due to preexisting biases.
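To make clustering and anomaly detection concrete, here is a minimal scikit-learn sketch (synthetic data and illustrative parameters of my own, not any particular startup's method) that groups unlabeled points and flags outliers:

```python
# Unsupervised learning on unlabeled data: clustering plus anomaly detection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Unlabeled data: two loose groups plus a handful of stray points.
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(100, 2)),
    rng.uniform(low=-4.0, high=9.0, size=(5, 2)),  # outliers
])

# Clustering: organize the data into groups without any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Anomaly detection: flag points that don't fit the pattern (-1 = outlier).
flags = IsolationForest(contamination=0.03, random_state=0).fit_predict(X)

print("cluster sizes:", np.bincount(clusters))
print("points flagged as outliers:", int((flags == -1).sum()))
```

No human ever labels these points; the structure is discovered from the data itself, which is exactly why the approach sidesteps the annotation bottleneck. A fourth issue in the artificial intelligence workflow is simply the lack of data scientists.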
It’s no surprise that the fields of AI and machine learning have grown tremendously in recent years: as a result, we’re faced with a lack of trained data scientists who are able to keep up with the huge influx of data. Additionally, hiring a team of data scientists is simply too expensive for young startups who want to develop new artificial intelligence technologies, and too impractical for companies where artificial intelligence isn’t a core focus. A solution here is providing machine learning as a service: Palantir is the biggest example, along with Amazon Web Services offering their own product. We can also see Israeli companies in the space with SparkBeyond, Razor Labs, and Pita. All of these companies provide high-end, expensive services; thus, there is a unique opportunity for startups to develop affordable machine learning services that can be marketed toward a broader audience. Insights for Startups So, where do I see AI infrastructure as a unique opportunity for startups to engage? As a recap, here are the areas of the AI workflow I have identified where startups can introduce disruptive technology to improve the speed and efficiency of the process cycle:
- The AI workflow is compute intensive. Although this problem can be solved through additional hardware in the form of AI-dedicated chipsets, there is a unique opportunity for startups to develop software that optimizes the existing hardware for AI/ML tasks.
- Data annotation is incredibly time-intensive, and it can often lead to biases when training an AI model. Synthetic data is a solution to these problems: because it is pre-annotated, it saves companies time and money. Additionally, since synthetic data is artificially generated, it allows companies to avoid privacy concerns from using real client data, and ensures that companies won’t be training their AI models to have unconscious biases.
- Supervised learning requires labeled data, which may include unconscious biases on the part of the data scientist. As a solution, look to unsupervised learning, which does not require labeled data but instead sorts data into groups according to patterns. Unconscious bias will no longer play a role here, since the AI model is doing the data sorting according to objective patterns.
- Machine learning as a service is becoming increasingly popular for companies across all verticals. From healthcare to retail to automotive and everything in between, introducing artificial intelligence is becoming imperative to keep up with ever-evolving industries. However, hiring a team of data scientists is often too costly for companies that don’t have AI as a core focus.
Here, startups have an opportunity to develop a cost-effective service that allows companies across industries to utilize their own AI/ML frameworks. Closing Thoughts Artificial intelligence will specifically impact infrastructure management and introduce significant business benefits across various stages of the AI workflow. One specific benefit of utilizing artificial intelligence for infrastructure management is detection of cybersecurity threats. From incidents like WannaCry to the Cambridge Analytica scandal, the need for companies to have robust cybersecurity is at an all-time high. AI systems have the ability to quickly spot unusual patterns and predict possible security breaches by studying the organization’s networks. With the development of an AI infrastructure (as opposed to software-defined infrastructure), companies across all verticals can have stronger immunity against cybersecurity threats, even defeating cybersecurity issues preemptively, both reducing downtime and saving money. Additionally, the use of AI in infrastructure management allows companies to have a reduced dependency on human resources. AI provides complete visibility into all process relations for infrastructure systems. AI reduces the complexities of business processes and cuts down costs, ensuring better decision making and reduced risk for unconscious bias in company practice. AI infrastructure will revolutionize storage management. Because AI is capable of learning patterns and data lifecycles, AI infrastructure may have the potential to preemptively warn a user about a storage system failure, thus giving the user ample time to back up important data and replace hardware before the failure takes place. In sum, artificial intelligence and machine learning are transforming businesses — and entire industries — faster than ever before. Success for startups will be based on how they can help companies understand the role of data in their respective industry and make the right choice regarding what infrastructure they implement.
https://medium.com/datadriveninvestor/israeli-based-startups-eliminating-bottlenecks-in-the-ai-workflow-af2d734bb674
[]
2020-08-07 13:53:03.257000+00:00
['Artificial Intelligence', 'Startup Nation', 'Israeli Startups']
Managing Promotions. Without promotion planning, demand…
Managing Promotions
Without promotion planning, demand planning fails
Wikipedia defines promotion as: “In marketing, promotion refers to any type of marketing communication used to inform or persuade target audiences of the relative merits of a product, service, brand or issue. The aim of promotion is to increase awareness, create interest, generate sales or create brand loyalty. It is one of the basic elements of the market mix, which includes the four P’s, i.e., product, price, place, and promotion.”
The goal of promotion is three-fold:
To present information to consumers and others.
To increase demand.
To differentiate a product.
Promotions, as the definition suggests, are managed by marketing. They might include, but are not limited to, price reductions, cross-product bundles, multi-packs of a single product at a lower unit price, sweepstakes, discounts on other products, and so forth. All of these aim to create more consumption on the customers’ side — at least, that is the business’s expectation. Promotions shift the total market, to a degree that depends on the size of your business. In other words, by running promotions we are buying market share from our competitors (assuming our promotion is better than theirs) and hoping that the new customers who try our product will stay with our brand. The question here is: what is a realistic expectation for the incremental demand? The question plays a key role, because overselling a promotion means buying more market share but diminishing profit (promotions require investment); on the other hand, nobody wants a low-performing promotion — otherwise, why plan it at all?
Pre-Promo Planning
Before the promotion, we need planning. Which parameters are going to change? What is the contribution of the affected SKUs to the business and to the category in terms of sales and profit? We need to analyze the price elasticity and distribution changes of the SKUs; we need to understand their behavior. What are the expected changes under three scenarios: optimistic, pessimistic, and, most importantly, realistic, based on standard demand-planning calculations? (A minimal numeric sketch of this three-scenario estimate appears at the end of this piece.) The company should make its plans based on the most realistic case, but also prepare for the optimistic and pessimistic cases to be ready for the real world.
During Promo Planning
Depending on the duration of the promotion, we need to pull reports on a regular basis to understand the trend. If it is not performing well, it is better to discuss how to make it more prominent and which of our assumptions are not working. On the contrary, if it over-performs, does it pose a big risk to our profit? We should also watch our inventory status: we might end up with excess inventory if we cannot sell, or with shortages if we sell a lot. For all of these risks and opportunities, it pays to have mitigation plans ready.
Post Promo Analysis
The end of a promotion does not mean the end of the promotion study. Now we need to check whether our assumptions were right, what could have been better, and what we could do to improve next time. All the assumptions, decisions, and results should be saved into a database — even a simple spreadsheet will do. The main idea is to keep everything accessible and prevent it from being forgotten.
Conclusion
Promotion planning is an endless journey in today’s business environment. We all try to take market share and profit from our competitors. However, unless we do it in a disciplined way, we repeat our mistakes and fail repeatedly, and that costs money.
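Here is the minimal numeric sketch referenced above; all figures are hypothetical, chosen for illustration rather than taken from any real promotion:

```python
# Three-scenario pre-promo estimate: incremental volume and net profit
# under pessimistic / realistic / optimistic uplift assumptions.
baseline_units = 10_000   # normal demand for the SKU over the promo period
unit_margin = 2.50        # profit per unit at the promotional price
promo_investment = 2_000  # cost of running the promotion

scenarios = {"pessimistic": 0.05, "realistic": 0.15, "optimistic": 0.30}

for name, uplift in scenarios.items():
    incremental_units = baseline_units * uplift
    net_profit = incremental_units * unit_margin - promo_investment
    print(f"{name:>11}: +{incremental_units:,.0f} units, net {net_profit:+,.0f}")
```

Plan on the realistic row, but know in advance what the other two rows would mean for inventory and profit; in this toy example, the pessimistic case actually loses money, which is exactly the kind of risk the mitigation plans should cover.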
Baris Nurlu holds an Industrial Engineering degree and has executive management experience at many multinational companies. He currently manages regional sales and operations for one of the best global FMCG companies. He also builds apps at Baseduo, available in the Apple App Store and Google Play Store. You can reach him via [email protected]
https://medium.com/datadriveninvestor/managing-promotions-data-driven-investor-6f5d41e9be23
['Baris Nurlu']
2020-04-01 08:18:55.019000+00:00
['Planning', 'Marketing', 'Demand Planning', 'Promotion']
[LeetCode] 3Sum
Given an array nums of n integers, are there elements a, b, c in nums such that a + b + c = 0? Find all unique triplets in the array which give the sum of zero. Notice that the solution set must not contain duplicate triplets. Example 1: Input: nums = [-1,0,1,2,-1,-4] Output: [[-1,-1,2],[-1,0,1]] Example 2: Input: nums = [] Output: [] Example 3: Input: nums = [0] Output: []
Solution approach: I originally wanted to write up my own thinking for this problem, but I found this author's write-up so good — practically flawless — that I'll just cite her approach directly. Thanks again to Fion for the carry. Note that she uses C#, so if you know C# you can learn from her style too; her article is nicely illustrated and well worth reading: [Day 7] 演算法刷題 LeetCode 15. 3Sum (Medium)
1. Sort the array in ascending order.
2. Denote the indices of the three numbers we need to find as first, second, and third.
3. Iterate with a for loop, using first as the starting point in nums.
4. second starts at first + 1.
5. third starts at nums.Length - 1.
6. Check whether nums[first] + nums[second] + nums[third] equals 0. If it equals 0, we have one of the answers; add it to the list. If it is less than 0, the sum is too small, so move second forward to a larger value (second++). If it is greater than 0, the sum is too large, so move third back to a smaller value (third--).
7. Also check whether first repeats the previous value; if it does, skip this iteration, since it would produce the same answer, e.g. {-1, -1, 0, 1, 2}.
8. Likewise, check whether second repeats the previous value; if it does, do second++ and skip, since it would produce the same answer, e.g. {-4, 2, 2, 2, 3}.
Steps 7 and 8 are performance optimizations that avoid re-checking numbers we have already examined; the solution passes even without them!
Simple, easy-to-read version: Runtime: 1648 ms, faster than 13.79%; Memory Usage: 17.8 MB, less than 7.12%. Same concept, condensed version: Runtime: 780 ms, faster than 51.52%; Memory Usage: 16 MB, less than 63.27%.
References:
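Since the original post presents the code separately, here is a Python version of the same two-pointer approach (my own transcription of the steps above, not the referenced author's C# code):

```python
def three_sum(nums):
    nums.sort()  # step 1: ascending order
    result = []
    for first in range(len(nums) - 2):
        # step 7: skip duplicate values of first
        if first > 0 and nums[first] == nums[first - 1]:
            continue
        second, third = first + 1, len(nums) - 1
        while second < third:
            total = nums[first] + nums[second] + nums[third]
            if total < 0:
                second += 1   # sum too small, advance second
            elif total > 0:
                third -= 1    # sum too large, pull third back
            else:
                result.append([nums[first], nums[second], nums[third]])
                second += 1
                # step 8: skip duplicate values of second
                while second < third and nums[second] == nums[second - 1]:
                    second += 1
    return result

print(three_sum([-1, 0, 1, 2, -1, -4]))  # [[-1, -1, 2], [-1, 0, 1]]
```

Sorting costs O(n log n) and the two-pointer scan makes the whole thing O(n²), which is what lets it pass comfortably within the time limit.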
https://medium.com/jacky-life/leetcode-3sum-bb1deec8ba31
[]
2020-09-21 13:48:01.879000+00:00
['Leetcode', 'Python']
Pretty, Polluted
Pretty, Polluted How Diwali celebrations make for prettier sunsets The sky is a picturesque watercolour blend of grey and orange, the tangerine evening sun bleeding colour into the bleak grey dusk. There are other shades, too: a delicate lavender that stains the clouds, with dabs of lemon yellow in between, and a clear sky blue that lingers around the edges, not quite ready to make an exit. It is six o’clock, on Thursday, the 8th of November, and the sunset today is more brilliant than most. I could tell you that the sunsets are always beautiful here, in the countryside outside Bangalore. I could tell you that it’s something about the winter air that makes them more arresting than usual. Of course, if I did, I’d be lying to you. Why is the sky blue? Try shining a flashlight through a glass of water with a little soap mixed in (if we were in a lab, you’d use sodium thiosulphate with a little sulphuric acid in it, but soap-water will do for now). What do you see? Assuming you aren’t actually going to get up from wherever you’re sitting right now, I’m just going to tell you: It’ll look blue in the glass, but when it hits the wall on the other side, it’ll be closer to red. The more soap there is, the redder it gets. Remember that tidbit; we’ll come back to it. White light scatters into seven distinct colours when it hits a particle. You probably learnt them in middle school: violet, indigo, blue, green, yellow, orange, and red. You might have learnt it using an acronym — VIBGYOR or ROY G. BIV — or to the tune of a snazzy rhyme, like I did.
How many colours are there in the rainbow?
How many colours are there in the rainbow?
Seven lovely colours are there in the rainbow
Seven lovely colours indeed
They are violet, indigo, blue, green, yellow
Violet, indigo, blue, green, yellow
Violet, indigo, blue, green, yellow
Orange aaand red!
The point is, they all scatter differently, and in different directions. Colours with shorter wavelengths, like violet and blue, scatter first and fastest. Red, not surprisingly, is the last. That’s why the water in the glass looks blue. The soap particles are scattering blue wavelengths your way, whereas the yellow and red ones keep going right past. When you’re seeing the sky during the day, you’re really just seeing the atmosphere. And light is filtering through kilometres of it before it reaches your eye. What is the atmosphere made of? Particles — or molecules, really — of oxygen and nitrogen. Billions of them. Tiny ones, most of them smaller than ten microns. To put that into perspective, a human hair is about seventy-five microns across, give or take depending on how good your hair-care routine is. Just like the soap, these molecules scatter blue, green, and violet light your way and the rest of it away. Why don’t we see a purple sky? Because we’re more sensitive to blue than violet — which gets broken down into blue and red anyway. Red, green and blue make white, leaving some blue left over for you to see. That’s how your eyes work. And that’s why the sky is blue. But is it always blue? If you were sitting with me that evening in Bangalore, the answer would be obvious. When you’re seeing the sun during the day, its light is travelling through kilometres of atmosphere to reach you. When it’s on the horizon, however, there are significantly more kilometres of atmosphere through which it has to go. Remember what I said earlier, about the light being redder when there was more soap? The same principle applies here.
The light hits more particles, so more of it gets scattered away — leaving only the longer wavelengths, the redder tones, to make it to the other side. Whenever there’s any kind of small particle in the air, sunsets tend to get more red. That’s why sunsets are so pretty over the ocean: there’s a lot of salt in the air to make them that way. This year, the festival of Diwali falls on the 6th, 7th, or 8th of November. The date varies depending on which part of India you’re from, and which caste you belong to. There’s one thing that stays the same everywhere, however: firecrackers. Crackers are a Diwali tradition, somehow appropriated and now as normal as new clothes or lighting dozens of earthen lamps around the house. Everyone bursts crackers, village or city, rich or poor. Flower pots explode into showers of sparks ten feet tall, rising and falling like an elaborate crescendo. Suru-suru batthis draw lines of light in the air that tattoo themselves onto your eyelids, visible even after they’re gone. Bhoo-chakras spin delicately on the asphalt, drizzling gold that disappears instantly. Rockets whizz through the air, trailing fire like miniature comets. Look up anywhere in the city, any time after sunset — and quite a bit of the time before sunrise, too — and you’re almost guaranteed to spot a sprinkle of glitter somewhere in the sky. Diwalis are beautiful; they’re a visual feast. And they’re causing more than a little bit of a problem. The World Health Organisation uses a measure called PM10 to look at air pollution. PM10 consists of all particles in the air that are less than ten microns in diameter — small enough to hang in the air and scatter light, just like the molecules that make up the atmosphere. PM10 particles are what make up smog, and car exhaust, and that general sense of grey that maybe hangs over your city on some mornings. At that size, and in heavy concentration, little particles can cause a whole host of diseases, mostly respiratory. At its worst, walking through a polluted city can be like smoking several packs of cigarettes a day. Which is why the World Health Organisation has a limit on PM10: sixty micrograms per cubic metre. To give you an idea of scale, sixty micrograms is about the weight of a small grain of sand, and a cubic metre is — well, a cubic metre is fairly self-explanatory. Bangalore’s PM10 average is 83. And Delhi’s? Delhi’s is 292. The city of New Delhi has been struggling with air pollution for years. Every winter, newspapers are splashed with freshly-scandalised headlines about its air quality. The truth is, its massive nineteen-million-strong population aside, Delhi is also just geographically unlucky. Not only is it at the centre of a basin that stretches across northern central India, trapping much of the pollution, it’s also surrounded by three states that burn their fields annually. In Punjab, Haryana, and Western Uttar Pradesh, farmers religiously burn their fields every year after the rice harvest. That isn’t to absolve the city of responsibility — its colossal fossil-fuel emissions speak for themselves — but to say that it isn’t entirely self-contained. Cities like Chennai and Mumbai have the sea-wind to blow away their pollution, while Delhi has nothing. Besides, whatever its crimes on a regular day, Delhi’s air turns lethal around Diwali, sometimes touching a solid 320 micrograms per cubic metre. They say you shouldn’t play with fire because you might get burned. Crackers are what happened when we tried taming fire. But they’re still burning us.
Firecrackers are mostly made of sulphur and carbon, but there are usually a plethora of other chemicals in there, too, to serve various purposes: aluminium, copper, barium, antimony, and strontium, to name a few. Different chemicals serve different purposes: antimony creates white flames and sparks while calcium deepens the colours; green-causing barium stabilises fellow elements and iron produces the sparks. And so on. The basic idea is, they help make fireworks look good. And when you burst a cracker, these chemicals — not surprisingly — go up into the air. Most of these particles are not just smaller than ten microns: they fall under the even finer category of PM2.5, particles smaller than 2.5 microns. That’s one-thirtieth of the width of a human hair. That’s small enough that you can inhale enough of it to do some serious damage. That’s also small enough that a Diwali-level increase is significant enough to scatter yellow light away, and keep only the orange and the red for a time. Bangalore’s doing slightly better than Delhi. Slightly. Our Diwali PM10 readings hit 122, a forty-two percent increase from the norm. Still, “Diwali air pollution in Bengaluru dips 33%”, one newspaper declared jubilantly, stating a major decrease in pollution from last year’s celebrations. And yet, the overall yearly average has only gone up by five micrograms since the year 2015–16, and it doesn’t show any signs of reducing. But hey, why complain? Let’s sit back and enjoy the sunset.
https://medium.com/snipette/smoke-and-sunsets-efe2b8517817
['Manasa Kashi']
2018-11-11 03:01:46.070000+00:00
['Diwali', 'Air Pollution', 'Physics', 'Firecrackers', 'Environment']
“Language delivers me to me”: A Review of Alice Notley’s ‘For the Ride’
For the Ride, by Alice Notley. Penguin Poets, 2020. It just so happened that the day of Alice Notley’s reading and informal launch for her newest book, For the Ride — March 14, to be exact — was followed a few days later by a plea from the city for people to work from home and avoid going out unless absolutely necessary, to avoid spreading COVID-19. It was an intimate setting — of the fifty people who registered for the event, it is safe to say that no more than ten ended up attending. The small bunch of us were already spread out to maintain some distance as we sat in the dimly lit room, captivated by Notley’s voice and laughing at her honest and at times self-deprecating humour. Was it a serendipitous coincidence or a kind of ominous foreshadowing that For the Ride served as a bookend marking the shift from a Pre- to a Post-Pandemic world? The answer depends on one’s mindset (and possibly superstition level), although it is difficult not to see Notley’s reimagining of language, identity, and the very concept of what constitutes an apocalypse as some sort of sign, a prophetic glimpse into a potential future, or at least a suggestion about the evolution of poetry, language, and gender. For the Ride, a poetic odyssey in eighteen parts, tells the story of One, who is the protagonist and “hero proper” of the collection, but whose characterization is ambiguous and deliberately open-ended. The only insight into who One is occurs towards the end of For the Ride, when Notley describes One as “Once is a she, now’s just One.” Along with the multifaceted One, Notley creates an equally captivating (and expansive) band of characters who accompany One on their journey in a spaceship-like ark to another dimension, with the goal of saving words and (re)inventing language as we know it, all within the guise of a familiar quasi-space-travel narrative, as they disembark in faraway cities and encounter other forms of life, even engaging in a battle at one point. Notley creates her own internal structure within For the Ride, from sections that are taken from an Anthology of poetry that One and their companions — or the Survivors, as Notley refers to them in the book’s preface — have with them, to numerous concrete poems that echo the poetic tradition of Apollinaire. For those readers who like to ground themselves in narrative, Notley provides a series of checkpoints in the form of chapter titles that help situate what is happening in that particular section — as well as help the reader orient themselves within Notley’s extensive experimentation with words and the poetic form — in a more straightforward manner, as it is quite easy to “get lost” in For the Ride’s fluid transitioning across time, voices, and discourses. For readers already familiar with Notley’s work, For the Ride will be less of a surprise and more of a pleasant return to the familiar, following the natural ebb and flow of the book’s philosophical unravelling which, it is worth noting, Notley herself remains unsure of, telling the reader: “I mean I don’t know exactly what happened; I might even have to tell this story again sometime.” While it is possible to call One a person, it feels more appropriate to think of them as an incorporeal, even ethereal, entity who has “been robbed of personhood; sorry, that’s what it’s like when one dies,” a representation of a kind of collective consciousness that manifests itself through language.
Finding oneself inside the glyph, a space that reads like a cavernous Matrix of language that has been crossed with painterly elements from Impressionism, One’s wrestling with being, with words and identity, becomes a running thread through For the Ride. One’s physical travels always, ultimately, return to the self, to questioning where we, through One, can situate ourselves within existing markers of identity and how these markers, the way we talk and think about ourselves, can and should be expanded, thus One’s goal to “build a new language — / sort of new — bricolage: why waste a thing? Always start with something./ Find out way to mix things for perfection’s fear and its course.” The remaining cast of For the Ride is an existentially ambiguous group of figures on par with One themselves. In fact, the number of characters in the book is expansive enough that, at times, it proves difficult to keep track of them all, with some of them being more memorable than others due to a greater sense of character or some more distinct personality traits that help flesh them out a bit more. For instance, Notley positions Qui as a shaman-like figure, while Wideset comes across as perhaps the oldest and wisest member of the group. My personal favourite was France, a ghost-like spectre who, like Hamlet’s father, would periodically reappear to haunt the pages of the book. France is at once a parental figure for their kid but also a representation of nationhood that, for Notley, lingers in the background as a part of the past that continues to linger, as One muses in chapter II, declaring, in a moment of self-reflection: “One is the dead one, immigrant. One is the dead one named France./ One’s not even French, One’s like dead! Foreign France, that’s the dead./ One forgot to say One is once an immigrant, pastly, or is One?” For the Ride is not an easy book, and it would be a bald-faced lie to write this review while pretending that I was able to follow all of Notley’s wordplay and poetic musings in tandem. Even so, Notley provides numerous entry points for her readers, openings through which they can enter into the text to either enjoy the book’s formal and narrative elements, or to engage with the material critically and take For the Ride’s premise as an invitation for discussion and reexamination. In fact, “understanding” is an important term in the context of the collection, because saying “I understand” implies a kind of linearity, a transition from A to B that is solidified through a reduction of complex and expansive concepts, like the ever-changing nature of poetics, thereby allowing these concepts to be reproduced through repetition in the form of teaching. Instead, For the Ride is preoccupied with locating and identifying alternative systems of communication and ways of knowing, as well as rethinking how we communicate. Notley undoubtedly has a sense of humour in approaching this daunting task, which is largely concentrated in chapter XII: The New Brain, as she teases the reader: “Look, poem!/ Poured from the foot-brain!” Rather than asking what a poem means, what a poet is trying to say or to elicit in the reader, the more appropriate question to ask in For the Ride is what it means to be a poet, a figure who “is/ the original/ birds cry to.” “Can the ones call each other/ poet as/ pronoun?”, asks a voice — Notley? One? one of the other members of the crew? a voice from the past? it is often unclear who is speaking, although the answer is not always important — in chapter VII: Becoming Poems.
Not long after, in XIII: Wall of Words, we are told, once again by an omniscient voice, that “One’s tired of sentences. One says, At least of their unwinding length:/ too timelike. Prefer planes. Sense of overlapping realities…” Much like early 20th century art movements — Bauhaus, Constructivism, Surrealism — sought not only to create a new style in visual art but also to conceptualize a new way of being through that art, For the Ride is similarly driven by this desire to rethink the boundaries of the possible within how we think and express ourselves. Rather than thinking of language as a code that gives others easy access into our thoughts, Notley’s poetry pushes for a more radical kind of unplugging and reprogramming, a salvaging and recycling of what we currently have with the goal of creating a system so different that inter-dimensional travel becomes a fitting metaphor for conveying the radicalness of the end product. Another vital and prominent focus in For the Ride is the role of gender within language, particularly the gendering of language, something that Notley fights against, beginning with One’s identity, which is a source of constant contemplation throughout the book, for One as well as for Notley, who seems to muse through One: One’s supposed to be inventing new language, definitely tearing down the old of gender, tensal submission, whatall, pomposities to enslave one…Tear it down as ones save ones — Ark of salvation and destruction of the old at the same time. Wake up! Tear it down! and save one. One is the species, words are. Despite the admirable goal — of One, of Notley — the idealist utopian vision of language that For the Ride searches for is never quite found, because it is not manifested by Notley herself in writing the book. Despite the dream of a genderless language, the presence of the binary is unshakeable in Notley’s imagery, such as the figures With Breasts and Without, “With Breasts […] hysterical, Without[ ] dumbstruck. Scared.” Or the more apparent slippage in chapter IV that suggests a moment of crisis or a dilemma in the early stages of One’s journey — “They use me, I mean One. Inventing female as succor. Addicts” — although this slippage in identity, in terms of gender and the self, is mirrored again in chapter VIII, in One’s words: “Try. Who is one, as you? Will one say I? I was beeyoutiful girl.” Such episodes are few and they are subtle, but they lie in plain sight, embedded within the sturdy walls of poesis that Notley has erected. Much as with many aspects of For the Ride, it is left open to discussion whether Notley is trying to distinguish between gendering words and words that we associate with gender. Notley’s more serious, and even subtler, offence is in a couple of rare episodes of derogatory and outdated terms, as in the opening stanza of chapter XII: The New Brain: One’s caught en retard but now sans slow or fast one will last spread across the wordful univers de la mort. Once one lived — ah one lives pastly! — in an hôtel, Asiatic the hostelleries across this voie lactée welcome one, cups of stay or go — linguistic vortex or no — this retard can say anything. Notley’s playful intermixing of French and English, something that occurs frequently in For the Ride, here toes the line between clever and potentially offensive.
At the same time, this stanza highlights the very fluidity of language that is one of the central themes of the book, as one’s knowledge (or lack thereof) of French will inevitably colour how one perceives the last line of that stanza. Whatever one’s personal stance on this, Notley certainly succeeds in highlighting language’s shapeshifting nature as words slip in and out of meanings, in and out of fashion, usage, time. In her book Time Travels: Feminism, Nature, Power, Australian philosopher and feminist theorist Elizabeth Grosz asks: “Is language a human prosthesis?” Notley’s newest collection offers a potential answer to this question, simple in wording but by no means in its implication: “Language delivers me to me…” From the more traditionally poetic in chapters I: The Glyph of Chaos with Willows and XV: I Have Been Let Out of Prison, to the slip into a kind of quasi-stream-of-consciousness re-identification and re-alignment of the self in chapters XVI: Stark Star and XVII: The Memory of Nerves, For the Ride is a journey in narrative and in philosophical poetics that invites its readers to let go of their preconceptions — of poetry, of selfhood — and allow Notley to gently carry them away on what will surely become one of the defining literary odysseys of our dystopian age.
https://medium.com/anomalyblog/language-delivers-me-to-me-a-review-of-alice-notleys-for-the-ride-aef5dd589372
['Margaryta Golovchenko']
2020-12-15 16:32:55.068000+00:00
['Literature', 'Book Review', 'Review', 'Poetry', 'Books']
Time Series Forecasting With SQL — It’s Easier Than You Think
I’ve previously written about performing classification tasks with SQL, so make sure to take a look at it if that’s something you find interesting: Time series are different from your average machine learning task. You can’t train the model once and use it for months in production. Time series models must be trained with the entirety of the historical data, and new data points might come every hour, day, week, or month — varying from project to project. That’s why doing the training process in-database can be beneficial if hardware resources are limited. Python will almost always consume more resources than the database. We’ll use Oracle Cloud once again. It’s free, so please register and create an instance of the OLTP database (version 19c, with 0.2 TB of storage). Once done, download the cloud wallet and establish a connection through SQL Developer — or any other tool. This will take you 10 minutes at least, but it’s a fairly straightforward thing to do, so I won’t waste time on it. Awesome! Let’s continue with the data loading.
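Before the data loading, here is a rough sketch of the in-database idea itself: a forecast-style aggregate computed entirely in SQL and merely fetched from Python. The daily_sales table and its columns are hypothetical, the connection details are placeholders for your own wallet, and the python-oracledb driver is an assumption on my part:

```python
# Connect to an Oracle Autonomous Database and run an in-database
# 7-day moving average: SQL does the work, Python just reads results.
import oracledb  # pip install oracledb

conn = oracledb.connect(
    user="ml_user", password="...",   # placeholder credentials
    dsn="mydb_high",                  # service name from the wallet's tnsnames.ora
    config_dir="/path/to/wallet",     # directory of the unzipped cloud wallet
    wallet_location="/path/to/wallet",
    wallet_password="...",
)

query = """
    SELECT ds,
           sales,
           AVG(sales) OVER (
               ORDER BY ds
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS sales_ma7
    FROM   daily_sales
    ORDER  BY ds
"""

with conn.cursor() as cur:
    for ds, sales, ma7 in cur.execute(query):
        print(ds, sales, round(ma7, 2))
```

The window function runs next to the data, so only the finished numbers travel over the network — the resource argument made above, in miniature.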
https://towardsdatascience.com/time-series-forecasting-with-sql-its-easier-than-you-think-1f5b362d0c81
['Dario Radečić']
2020-09-22 18:57:26.857000+00:00
['Sql', 'Artificial Intelligence', 'Towards Data Science', 'Data Science', 'Machine Learning']
Machine Learning: Dimensionality Reduction via Linear Discriminant Analysis
A machine learning algorithm (such as classification, clustering or regression) uses a training dataset to determine weight factors that can be applied to unseen data for predictive purposes. Before implementing a machine learning algorithm, it is necessary to select only the relevant features in the training dataset. The process of transforming a dataset in order to select only the features necessary for training is called dimensionality reduction. Dimensionality reduction is important for three main reasons:
Prevents Overfitting: A high-dimensional dataset having too many features can sometimes lead to overfitting (the model captures both real and random effects).
Simplicity: An over-complex model having too many features can be hard to interpret, especially when features are correlated with each other.
Computational Efficiency: A model trained on a lower-dimensional dataset is computationally efficient (execution of the algorithm requires less computational time).
Dimensionality reduction therefore plays a crucial role in data preprocessing. Implementation of Dimensionality Reduction There are several models for dimensionality reduction in machine learning, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Stepwise Regression, and Regularized Regression (such as LASSO). We focus here on PCA and LDA, which are widely used for classification problems. PCA and LDA are two data preprocessing linear transformation techniques that are often used for dimensionality reduction in order to select relevant features that can be used in the final machine learning algorithm. PCA is an unsupervised algorithm that is used for feature extraction in high-dimensional and correlated data. PCA achieves dimensionality reduction by transforming features into orthogonal component axes of maximum variance in a dataset. An implementation of PCA using the iris dataset can be found here: https://github.com/bot13956/principal_component_analysis_iris_dataset The goal of LDA is to find the feature subspace that optimizes class separability while reducing dimensionality (see figure below); hence, LDA is a supervised algorithm. In this article, we illustrate the implementation of LDA using the iris dataset. An in-depth description of PCA and LDA can be found in this book: Python Machine Learning by Sebastian Raschka, Chapter 5. Figure 1: The LDA algorithm transforms from the old to a new feature subspace so as to optimize class separability and reduce dimensionality. Picture adapted from: “Python Machine Learning by Sebastian Raschka”. The code for implementing LDA is found here: https://github.com/bot13956/linear-discriminant-analysis-iris-dataset/blob/master/LDA_irisdataset.ipynb The output from an LDA calculation using the iris dataset is shown in Figure 2 below: Figure 2: Linear separability of iris classes in the LDA subspace. Notice that the LD1 component captures most of the class discriminability. By analyzing the cumulative discriminability (see code: https://github.com/bot13956/linear-discriminant-analysis-iris-dataset/blob/master/LDA_irisdataset.ipynb), we can show that the LD1 and LD2 components capture 100% of the total discriminability. Hence, when we perform classification (using logistic regression or a support vector machine) in the LDA subspace, we can train the model on the lower, two-dimensional LDA-transformed dataset. Since the original iris dataset was four-dimensional (4 features), we remark that the LDA transformation achieves both class separability and dimensionality reduction.
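For readers who prefer scikit-learn to the from-scratch notebook linked above, here is a minimal sketch of the same idea: reduce the four iris features to two linear discriminants, then classify in that subspace (parameters are illustrative, not taken from the linked code):

```python
# LDA for dimensionality reduction on iris, followed by classification
# in the reduced two-dimensional subspace.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(
    LinearDiscriminantAnalysis(n_components=2),  # 4 features -> 2 discriminants
    LogisticRegression(max_iter=200),            # classify in the LDA subspace
)
model.fit(X_train, y_train)

lda = model.named_steps["lineardiscriminantanalysis"]
print("discriminability captured per component:", lda.explained_variance_ratio_)
print("test accuracy:", model.score(X_test, y_test))
```

With n_components=2 the two ratios should sum to 1.0, mirroring the article's point that LD1 and LD2 together capture 100% of the class discriminability.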
In summary, we have shown how the LDA algorithm can be implemented using the iris dataset for illustrative purposes. In a previous article, we showed how the PCA algorithm can be implemented using the iris dataset as well. Here are the two links: PCA implementation: https://github.com/bot13956/principal_component_analysis_iris_dataset LDA implementation: https://github.com/bot13956/linear-discriminant-analysis-iris-dataset Thanks for reading.
https://medium.com/towards-artificial-intelligence/machine-learning-dimensionality-reduction-via-linear-discriminant-analysis-cc96b49d2757
['Benjamin Obi Tayo Ph.D.']
2020-06-11 17:24:59.156000+00:00
['Machine Learning', 'Python', 'Data Transformation', 'Data Processing', 'Data Science']
Writer Of The Week: Robyn Powell
‘Writing means empowerment.’ Though people often view objective reported journalism as the pinnacle of respectable media work, I’d argue that the personal essay can be, in its own way, just as integral to creating change in society. As a medium explicitly devoted to bridging the ever-widening empathy gap, essay-writing can push people to consider brand-new perspectives or reconsider existing ones, shifting entire ideologies while helping to engender equality. And it is actually subjectivity — the sharing of a definitively personal experience — that most powerfully makes this happen. Robyn Powell provides a particularly compelling example of these forces in action. Her candid, nuanced essays addressing disability rights through the lens of her own lived experience have no doubt helped countless people question their ideas and biases. At the same time, her writing also deftly weaves in legal context (she’s an attorney) and research-based reporting to provide a multifaceted approach to journalism. It is through weaving together all these elements—the personal, the contextual, the factual — that Robyn is able to so convincingly argue that, for example, disabled mothers have historically faced grave injustices, or that Trump, Sessions, and Bannon represent an unholy trinity of anti-disability-rights ideology. When asked what writing means to her, Robyn replied that she finds it empowering. And through her richly layered writing, Robyn empowers us all. Below, Robyn shares her thoughts on Cyndi Lauper, ice cream, and which Sex and the City character she is. The TV character I most identify with is Miranda from Sex and the City. I think “paying writers in exposure” is exploitative and devaluing. My most listened to song of all time is “Girls Just Want to Have Fun” by Cyndi Lauper. My 18-year-old self would feel surprised but content about where I am today. I like writing for The Establishment because it is women-run. If I could only have one type of food for the rest of my life it would be ice cream. The story I’m working on now is about sexual assault among students with disabilities. The story I want to write next is about reproductive justice for women with disabilities. If I could share one of my stories by yelling it into a megaphone in the middle of Times Square, it would be “As A Disabled Person, I Implore You Not To Vote For Donald Trump.” This was written pre-November 2016 — if only more people had heeded this advice. Writing means this to me: Writing allows me to express myself in ways that my day job does not. Now more than ever, we need the stories of those from marginalized communities front and center, and writing enables me to do this. Writing also provides the opportunity to give exposure to the issues facing people with disabilities — something that is far too often overlooked. In sum, for me, writing means empowerment. If I could summarize writing in a series of three GIFs, it would be: GIFs are usually inaccessible to people with disabilities.
https://medium.com/the-establishment/writer-of-the-week-robyn-powell-3143225e0940
['The Establishment']
2017-10-30 15:06:33.844000+00:00
['Publishing', 'Disability', 'Arts Creators', 'Writing', 'Writer Of The Week']
Using Application Load Balancers to Handle Multiple Domain Redirects
I hope everyone has been following safety measures and staying inside to stay healthy. These past months have been unfamiliar for long stretches, forcing us all to find new areas in which to occupy our minds. Since you are here, let me share a problem where I applied a similar idea. In my time helping develop high-performance infrastructure and services at DLT Labs™, I’ve learned a lot about the role of load balancers while making them scalable. One of the cloud providers we work with is Amazon Web Services, and today I thought I’d talk a little about what they call an Application Load Balancer (ALB), and what is needed to set one up. Amidst all our ongoing activities, one day my team hit a bottleneck caused by a requirement for multiple domains to be redirected to a single domain. While looking for possible solutions, I came across a few. Naturally, as solutions go, they presented certain limitations too. An Application Load Balancer, it turns out, provides a one-stop solution that cleanly sweeps this problem away. From there, you can go on to explore its different use cases in various areas. Now, let’s get familiar with the problem, the related approach, and its solution.
The problem
Here is some context before diving right into the solution: let’s say there are three domains — “testprac1.example.com”, “testprac2.example.com”, and “testprac3.example.com” — and all of these domains are required to serve HTTP/HTTPS requests. Requests of either nature — HTTP or HTTPS — for each of the domains above must resolve to the same place: a request to “testprac2.example.com” should be redirected to “testprac1.example.com”, and likewise a request to “testprac3.example.com” should be redirected to “testprac1.example.com”. In case you are hosting static data, Amazon S3 and Amazon Route 53 would come into the picture, or you could create several domains for each record to be served. This isn’t the simplest solution, as it isn’t what we’d call a feels-right-sort-of-solution! Undoubtedly, I felt the same, and ultimately found a gem of a source. What I found, I am going to lay out for you in the following solution.
The solution
What we are going to use here is a “Layer 7 load balancer”. Layer 7 is a term for the application layer, where human-computer interaction happens, from the widely used OSI model of computer systems. This will allow us to dictate a set of rules. These rules can make it either redirect or forward the incoming request(s) to the corresponding destination(s) using a mapping mechanism. We will use an ALB, which provides the capability to redirect requests from the old domains to the single domain that we intend to serve.
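One way to express these rules in code (a sketch with placeholder ARNs, assuming the ALB and its HTTPS listener already exist; the excerpt above stops short of implementation) is via boto3:

```python
# Add host-based redirect rules to an existing ALB listener:
# testprac2/testprac3.example.com -> testprac1.example.com
import boto3

elbv2 = boto3.client("elbv2")
LISTENER_ARN = "arn:aws:elasticloadbalancing:region:acct:listener/app/..."  # placeholder

for priority, host in enumerate(
        ["testprac2.example.com", "testprac3.example.com"], start=1):
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{
            "Field": "host-header",
            "HostHeaderConfig": {"Values": [host]},
        }],
        Actions=[{
            "Type": "redirect",
            "RedirectConfig": {
                "Host": "testprac1.example.com",
                "Protocol": "HTTPS",
                "Port": "443",
                "Path": "/#{path}",        # preserve the original path
                "Query": "#{query}",       # preserve the query string
                "StatusCode": "HTTP_301",  # permanent redirect
            },
        }],
    )
```

Because the match is on the incoming Host header, a single ALB (with all the domains' certificates attached to its HTTPS listener) can fan any number of old domains into the one you intend to serve.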
https://medium.com/swlh/using-application-load-balancers-to-handle-multiple-domain-redirects-8e5077be3b28
['Dlt Labs']
2020-12-09 02:59:58.047000+00:00
['Software Development', 'AWS', 'Dltlabs', 'DevOps', 'Programming']
100 Things You Should Know About People: #9 — Blue and Red Together is Hard On Your Eyes (Chromostereopsis)
[Image: Alternating blue and red bars] [Image: Red text on a blue background] What is it about red and blue? — When lines (or letters) of different colors are projected or printed, the depths of the lines may appear to be different; lines of one color may “jump out” while lines of another color are recessed. This effect is called chromostereopsis. The effect is strongest with red and blue, but it can also happen with other colors (for example, red and green). So what? — In addition to causing a depth effect, chromostereopsis can also be annoying and hard on the eyes. It is fatiguing. Although there are different theories as to why your eyes react to these color combinations the way they do, the important thing to remember is that they do. What should you do about it? — If you are a visual or web designer, make sure that you are not using red and blue together in this way. I still find websites that have this color combination. Here are a few! [Image: Example of a website with red and blue] [Image: Example of a website with red and blue] [Image: Example of a website with red and blue] What examples have you found?
https://medium.com/theteamw/100-things-you-should-know-about-people-9-blue-and-red-together-is-hard-on-your-eyes-chromostereopsi-f069eae763c8
['The Team W']
2016-09-21 22:12:58.323000+00:00
['Psychology', 'Chromostereopsis', 'Usability', 'Visual Design']
Bill Flynn Interview — Fem Founder™
Bill Flynn has more than thirty years of experience working for and advising hundreds of companies, including startups, where he has a long track record of success. He’s had five successful outcomes, two IPOs, and seven acquisitions, including a turnaround during the 2008 financial crisis. Bill is also a multi-certified growth coach, holds a Certificate with Distinction — Foundations of NeuroLeadership, and is a Certified Predictive Index Partner and international speaker. Bill has also authored a best-selling book — Further, Faster — The Vital Few Steps that Take the Guesswork out of Growth — which has garnered a 5-star rating. Away from work, he is an avid reader and athlete, enjoys volunteering locally, and, when he is not off cheering on his collegiate-champion daughter, lives in Sudbury, Massachusetts, with his wife, dog, cat, and four chickens. Can you tell our readers about your background? Through the years of many startup management positions, mostly in sales and marketing, my coaching style developed. I love to learn and share my knowledge with others. I strive for continual improvement through small steps — kaizen. With this philosophy, my performance and that of my teams increased significantly over time. I learned to provide direction versus instruction to develop highly productive and independent teams. I evolved to do a lot less telling and a lot more asking and trusting. After my tenth startup, I decided to look for a way to apply those skills, and the idea to become a coach fell into my lap. More on that below. What inspired you to start your business? After many successful and failed startups, and in the past few years working and speaking with hundreds of CEOs and companies around the world, I have found the following, for the most part: · We do meetings wrong · We do strategy wrong · We do hiring wrong · We do decision making wrong · We do change wrong · We do feedback wrong · We do vision wrong · We do teams wrong · We do innovation wrong · We do people wrong Here is why I think this happens too often: 1. There is a meaningful gap between what science knows and business does. 2. Few things truly matter, but those that do matter tremendously. Leaders do not spend enough time here. 3. Leaders rely too much on effort, luck, timing, and force of will to achieve “success”. 4. We don’t deduce before we produce. Spend more time upfront to save time overall. I started Catalyst Growth Advisors to help leaders take the guesswork out of growth by getting them to fire themselves from the day to day to focus more time on predicting the future. My purpose is to spend each working moment helping to advance the human condition through having enlightened leaders focus on the few things that truly matter to their customers and teams. Where is your business based? Just outside of Boston, MA. How did you start your business? What were the first steps you took? After my tenth startup, I was looking to try something new. I probably should have made this decision after startup number six, but I am a little slow. It takes me about an hour and a half to watch 60 Minutes. I initially signed up for a newsletter from Verne Harnish, Founder of ScalingUp, EO, and YPO among other things. He wrote to me directly. After a few email exchanges, I was signed up to take an initial certification training class a month later. I was on my way. Origin story “Make me look as big as you can.” started it all.
In 2007, I had my first experience of being a coach when I was brought in as a consultant to help a founder sell his business. Within 10 months, we were bought by a $100M+ organization as their online IT services arm; primarily outsourced email hosting. On my first official day as GM, due to a catastrophic system failure, we, in effect, did not deliver email to anyone. It was not until two days later that we cobbled together a short-term fix. We had lost about 1,000 customers in that first week, and the rest were very unhappy with us. I and the four other leaders put together a plan, based on my direction, to shore up the key parts of our business, three of which I had no prior experience in. It worked beautifully. We doubled this business in about two years, did not lose one team member, and eventually had some of the highest customer satisfaction scores in the industry. On my final day, two of the senior managers let me know that what I led them through was really hard. They hated it (their words). But they were so glad they did it. I wanted more of that. I wanted to impact others in a significant and meaningful way. I certainly did not come up with the idea of business coaching, but it’s an excellent fit for my knowledge, skills, and abilities. This one key experience I had over a decade prior was the catalyst. What has been the most effective way of raising awareness for your business? I deliver insightful content to the right audience as often as possible. I do workshops and podcasts, speak to leaders (one to one and one to many), write a blog post twice a month, and have just published a book, Further, Faster, which can be downloaded for free from my site but is also available on Amazon, iTunes, and Audible. I also work with, and am constantly expanding, my centers-of-influence relationships. How do you stay focused? I have figured out the few things that truly matter to grow my business, and I ruthlessly spend my time on these things. I also set three-year, one-year, and quarterly goals. Here is how that looks.
3–5 Differentiating actions — Three-year initiatives
· Expertise at 3HAG delivery (especially the latter stages) and supporting validation options
· Expertise in neuroscience (as it relates to leading teams, growth, and feedback)
· Excellent teacher (apply the latest evidence-based teaching methods)
· Known for: taking the guesswork out of growth. Best money and time spent.
Annual Goals (2020)
1. At least $200K revenue — between 3 and 4 clients
2. Add 1–2 new centers-of-influence partners (e.g., IBs, lawyers, partial CFOs, accountants, banks, Small Giants, AIM)
3. Have 2–3 coaching prospects in the 50%+ queue at any time
4. Continued learning — 24 business books (minimum), HBR articles, etc. (see “Books Read in 2019” note)
5. Add one new revenue stream in 2020 to coaching, the affiliate program, and speaking
   · Add one C-Suite Master Class — 2020 — MAIN FOCUS
   · Speaking — ~5%
   · Book sales — 2020 — 200 paid
6. Improve coaching approach
   · Learn how to be a great teacher
   · Growth learning (be comprehensive and well-prepared) — ongoing (at least 2 of the below)
     · Go to Leanstack coach’s seminar/workshop — Ash Maurya — 202X
     · The Growth Faculty seminar (Jim Collins) — 2020
     · Reimagining the Future Virtual Summit — 2020
     · Small Giants — 2021
     · ZingTrain — 202X
     · GGOB — 202X
Quarterly Goals
1. Provide 2+ workshops
2. Do 2–4 podcasts to promote the book
3. Send out a copy of the book to all contributors and influencers
4. Provide 3+ updates to the client list
I also exercise, meditate, and get 7–9 hours of sleep each night.
How do you differentiate your business from the competition? I am a contrarian, an etiologist, and an essentialist. Being a contrarian immediately differentiates me. Since most businesses fail in a short amount of time (60–80% within 10 years) and those that do not often struggle to stay relevant and alive, teaching what most others teach makes no sense to me. There are some exceptional examples of businesses that thrive over long periods of time. They do things differently. That is what I teach. An etiologist is someone who studies cause and effect. Jim Collins is an etiologist, as are Simon Sinek and Marcus Buckingham. Each searches for success in different organizations and uncovers the commonalities. I curate and share these unique behaviors, methods, and actions on behalf of my customers. An essentialist is someone who focuses on the few things that truly matter, as those things have an outsized effect in business and in life. The Pareto Principle states that 20% of the effort produces 80% of the results. Teams must spend 80% or more of their time on that 20%. I wrote an entire book on this subject. I continually improve my own knowledge and skills so I can hear things like this more often: “Bill provides some of the highest ‘value per word’ of any consultant (coach) I’ve ever met.” — Erik Waters, Co-Owner/CFO, Adtech System “… Having facilitated AIM’s CEO Connection Group for over 5 years, and having had several presenters on Growth and Resiliency, today’s webinar blew them out of the water! …” — Beth Yohai, Vice President of Business Development at AIM HR Solutions What has been your most effective marketing strategy to grow your business? Marketing, as a word, is a catch-all for many things. Strategy is part of marketing, as are demand generation and product development, among others. I only have one business strategy. It is summed up in two words — Be Exceptional. As I previously stated, in order to differentiate myself, I am continually learning the most effective techniques, methods, and frameworks that are evidence-based and proven across time and different businesses. I look to continually separate myself from my rivals in these areas. If you are asking about the most effective ways I generate leads for my business, here is what I do. I have figured out who my core customer is, and I spend as much time as possible speaking to, writing for, and coaching these leaders and their closest partners, who are often my centers-of-influence partners as well. My core customer is a humble leader who is a lifelong learner and is eager to challenge the status quo. I have become a Vistage, EO, and YPO speaker. I partner with organizations like Small Giants and MassMEP. I seek out the places where these unique mindsets congregate and work hard to become associated with these groups. “The secret to success: find out where people are going and get there first.” I always market with this quote, attributed to Mark Twain, in mind. What’s your best piece of advice for aspiring and new entrepreneurs? Deduce before you produce. Fall in love with the problem and the customer, not the idea. Your original idea is very likely not the one that brings you success. Mike Tyson said it best — “Everybody has a plan until they get punched in the mouth.” Your customers are going to “punch you in the mouth” when you ask them for money, versus when you ask them what they think of your idea. Solve their problem so well and so deeply that they are compelled to tell others.
To do that, it is important to understand your core customer’s struggles, the progress they are looking to make, how they are trying to solve that today, and where they are succeeding and failing to that end. One must interview them like a journalist instead of a salesperson. Make it about them and not a way to get them to buy your stuff. Do this at least twenty times and see if you can find a pattern in the information collected. If you cannot, you may have to go back to the drawing board. For what it is worth, I have completed this process seven times. I always found a pattern between twelve and twenty conversations, though not always the one I was hoping for. Two years ago, I wrote “How to Design a Solution that your Best Customers Want and Value with One Question”. This prescriptive piece accelerates the process of figuring out how you change the lives of your best customers for the better. It does not replace the interview process, but if you have existing customers, it can move things along more quickly. What’s your favorite app, blog, and book? Why? I do not have one favorite of any of these. In my work as a business coach, there are many different opportunities to help others, so my “favorites” vary. However, if your readers are interested, I have created a list of all the best resources I use in my work, including books and podcasts that I have compiled over many years. It is broken down by category and highlighted by impact on my thinking. I hope your readers find it useful. Who is your business role model? Why? Alan Mulally. Hands down. I believe that Mulally is the finest CEO we have had in my lifetime, possibly ever, as he helped turn Boeing around in the aftermath of 9/11 and then did the same with Ford during the Great Recession. I am unaware of any other business leader (e.g., Jobs, Walton, Kelleher, Gates) who not only survived two existential economic crises but whose failing businesses came out of those crises even stronger. Companies that endure (lasting decades, generations, and centuries) typically get a handful of decisions right. Mulally twice accomplished the seemingly impossible, in a nearly identical way, for two separate industries and cultures. He simplified the business into a few key areas of focus. For instance, at Ford, in the middle of the worst financial crisis of our lifetime at the time, the turnaround plan focused almost exclusively on the following (excerpt from American Icon):
1. Aggressively restructure to operate profitably at the current demand and changing model mix. (EXECUTION)
2. Accelerate the development of new products our customers want and value. (STRATEGY)
3. Finance our plan and improve our balance sheet. (CASH)
4. Work together effectively as one team. (PEOPLE)
Just after Mulally left in 2014, Ford surpassed Toyota, re-establishing itself as the leading provider of cars in the world, a position Toyota had held since 2009, and GM for seven decades before that (GM replaced Ford in the 1930s). What Mulally did is not magic. He learned that relentlessly focusing on a few key things, executed nearly flawlessly by a cohesive team, is the best way to run any business. How do you balance work and life? I believe that there is no such thing. There is only life, of which work is a part: a vital part, but a subset of life nevertheless. One of the benefits of COVID-19 is that we are now more often fitting work into the cracks of life where, before March, we were more often fitting life into the cracks of work. For the most part, work is about outcomes.
There are deadlines and priorities, of course, but I think leaders are learning that sitting in an office, and coming in and leaving at a certain hour, are artificial and arbitrary beliefs we have adopted from the Industrial Age. They are, more often than not, no longer relevant and are proving to be less productive for many organizations. I believe that if leaders spent more time thinking about and planning outcomes, honing and liberally communicating a clear and vivid vision, and summoning the courage to trust their well-vetted and enthusiastic team members, there would be less need for constant oversight of the day-to-day operations and the people who carry them out. Unfortunately, leaders spend 80–90% of their time running the day-to-day and 10–20% of their time thinking about the future. That needs to flip. I believe that when this is done, much of this work/life balance nonsense will be relegated to nostalgia. What’s your favorite way to decompress? Physical and mental stimulation. Physical — I play hockey, lift weights, and do cardio to blow off steam and stay healthy. Mental — I read a lot, play guitar and sing, do crossword puzzles, and get plenty of sleep! What do you have planned for the next six months? It is pretty hard to plan that far out right now, but here are the key items on my evolving schedule.
July
1. Three podcasts to promote the book
2. A handful of client sessions
3. Small Giants workshop
4. Several Mastermind/Accountability group meetings
5. Prospect calls — please note that I only bring on 1–2 new clients per year
6. Weekly entrepreneurial coaching — volunteer
August
1. Most of the above — no podcasts or prospect calls scheduled
2. Two Vistage sessions — virtual
3. AIM Mutual workshop
September/October
1. Client sessions
2. Weekly entrepreneurial coaching — volunteer
3. Mastermind/Accountability group meetings
November
1. Client sessions
2. Weekly entrepreneurial coaching — volunteer
3. Mastermind/Accountability group meetings
4. Attend NeuroLeadership Summit (if held)
December
1. Client sessions
2. Weekly entrepreneurial coaching — volunteer
3. Mastermind/Accountability group meetings
My calendar fills in on a rolling 3–6 week window. Over the next few weeks, August and September will fill in with a few more key things. This is what it looks like as of early July. How can our readers connect with you? Bill Flynn — [email protected] https://www.linkedin.com/in/billflynnpublic https://www.facebook.com/bill.flynn.9022 https://catalystgrowthadvisors.com/ (website) @whfjr (Twitter) billflynn01776 (Instagram)
https://medium.com/fem-founder/bill-flynn-interview-fem-founder-e3ecc555eca3
['Kristin Marquet']
2020-07-29 11:41:22.129000+00:00
['Leadership', 'Entrepreneurship', 'Entrepreneur', 'Founder Stories', 'Founders']
Machine Learning Full Course | FREE OF CHARGE !!
According to the Economic Times, there will be 11.5 million job openings in this field by 2026, with sky-high salaries. However, learning these skills can be quite hectic, especially if you don’t have a big budget to spend on courses. Institutions that offer these courses price them at budget-breaking levels, knowing that people are more inclined to learn these skills. Degrees in Data Science are priced as high as $114,000 and above. Online courses carry price tags as well, with few being affordable. WHAT IF YOU WANT TO BE A SELF-TAUGHT DATA SCIENTIST/MACHINE LEARNING ENGINEER? I received 3,011 emails from people asking me to help them learn Data Science. Having thought of these self-motivated students, I decided to make my paid course free for everyone to kick-start their journey in Data Science. This course has been selling since last year, with over 4,320 students enrolled. I made this course free of charge so that you can also start your journey and save your budget for more critical issues. You can SUBSCRIBE to the channel and click on the notification bell for updates on new courses and projects. I have made the full course and the course materials available, and I have created a playlist of the lessons in sequential order.
https://medium.com/total-data-science/machine-learning-full-course-free-of-charge-d570f2f72cb5
[]
2020-11-14 01:33:23.077000+00:00
['Data Science', 'Artificial Intelligence', 'Machine Learning']
Python Most Common Challenges
Q — Implement the Log-Loss Function in Plain Python. You will be given a list of lists; each sublist will be of length 2, i.e. [[x,y],[p,q],[l,m]..[r,s]]. Consider it like a matrix of n rows and two columns: a. the first column Y will contain integer values; b. the second column Yscore will contain float values. Your task is to compute the log loss of this data (note that the expected output corresponds to base-10 logarithms). Ex: [[1, 0.4], [0, 0.5], [0, 0.9], [0, 0.3], [0, 0.6], [1, 0.1], [1, 0.9], [1, 0.8]] output: 0.4243099. Explanations and Notes on Log Loss. Logarithmic Loss (i.e. Log Loss, also the same as Cross Entropy Loss) is a classification loss function. Log Loss quantifies the accuracy of a classifier by penalising false classifications. Minimising the Log Loss is basically equivalent to maximising the accuracy of the classifier. Log loss is used when we have a {0,1} response. In these cases, the best models give us values in terms of probabilities. The log loss function is simply the objective function to minimize, in order to fit a log-linear probability model to a set of binary labeled examples. In a slightly different form: Log Loss is a slight modification of the Likelihood Function. In fact, Log Loss is −1 times the log of the likelihood function. Log Loss measures the accuracy of a classifier. It is used when the model outputs a probability for each class, rather than just the most likely class. In simple words, log loss measures the UNCERTAINTY of the probabilities of your model by comparing them to the true labels. Let us look closely at its formula (reconstructed below) and see how it measures the UNCERTAINTY. Now the question is: your training labels are 0 and 1, but your training predictions are 0.4, 0.6, 0.89, 0.1122, etc. So how do we calculate a measure of the error of our model? If we directly classify all the observations having values > 0.5 into 1, then we are at a high risk of increasing the misclassification. This is because it may so happen that many values having probabilities 0.4, 0.45, 0.49 can have a true value of 1. This is where log loss comes into the picture. Log loss is a “soft” measurement of accuracy that incorporates the idea of probabilistic confidence. It is intimately tied to information theory: log loss is the cross entropy between the distribution of the true labels and the predictions. Intuitively speaking, entropy measures the unpredictability of something. Cross entropy incorporates the entropy of the true distribution, plus the extra unpredictability when one assumes a different distribution than the true distribution. So log loss is an information-theoretic measure to gauge the “extra noise” that comes from using a predictor as opposed to the true labels. By minimizing the cross entropy, one maximizes the accuracy of the classifier. Cases where the Log-Loss function can be most useful: The log loss function is used as an evaluation metric for ML classifier models. This is an important metric, as it is the only metric which uses the actual predicted probability for evaluating the model (ROC-AUC uses the order of the values but not the actual values). This is very useful, as it penalizes the model heavily if it is very confident in predicting the wrong class (please check the plot of -log(x)). We optimize our model to minimize the log loss. Hence this metric is very useful in cases where the cost of predicting the wrong class is very high. Hence the model tries to reduce the probabilities of belonging to the wrong class, and we can choose a higher threshold of probability to predict the class label.
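The formula itself appears to have been an image in the original post and did not survive extraction. The following is a standard reconstruction of binary log loss (a reconstruction, not necessarily the author’s exact rendering), where y_i ∈ {0, 1} is the true label and ŷ_i is the predicted probability, over n examples:

\mathrm{LogLoss} = -\frac{1}{n} \sum_{i=1}^{n} \Big[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \Big]

The y_i and (1 − y_i) factors act as a switch: exactly one of the two log terms survives for each example, which is the “term dropping” behaviour described in the worked example below.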
This metric can be used for both binary and multi-class classifications. The value of log loss lies between 0 (inclusive) and infinity. This is the only disadvantage of log loss, as it is not very interpretable. We know that the best case would be a log loss of 0; however, we cannot interpret other values of log loss. For some cases a log loss of 1 can be good, while for others it may not be good enough. One hack is that we can measure the log loss of a random model and try to reduce the log loss of our actual model from this value as much as possible without increasing variance. Please note that this metric can only be used if our model can predict the probability of each class. Hence, for calculating the log loss for models which don’t provide a probability score, probability calibration methods can be used on top of the base classifier to predict the probability score for each class. A use-case of log loss: Say I am predicting cancer with my ML model, and suppose you have 5 cases. 2 cases were cancer (y1 = y2 = 1) and 3 cases were benign (y3 = y4 = y5 = 0). Say your model predicted that each case has a 0.5 probability of cancer. In this case, what we have for log loss is −1/5 × (1·log(0.5) + 1·log(0.5) + (1−0)·log(1−0.5) + (1−0)·log(1−0.5) + (1−0)·log(1−0.5)). Essentially, y_i and (1 − y_i) determine which term is to be dropped, depending on the ground-truth label. Depending on the ground truth, either log(y_hat) or log(1 − y_hat) will be selected to determine how far away from the truth your model’s generated probability is. We can use the log_loss function from scikit-learn (see its documentation), but here we will implement a pure-Python version. A simple Python implementation follows.
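Below is a minimal pure-Python sketch of the exercise. Two caveats: the [[y, y_score], …] layout and expected output come from the problem statement above, and base-10 logarithms are used because they reproduce the stated output of 0.4243099; the conventional definition (and scikit-learn’s sklearn.metrics.log_loss) uses natural logarithms, which give roughly 0.9770 on the same data.

import math

def log_loss(rows, base=10):
    # rows: [[y, y_score], ...] with y in {0, 1} and y_score a probability.
    # base=10 matches the article's expected output; pass base=math.e for
    # the conventional natural-log definition, as scikit-learn uses.
    total = 0.0
    for y, score in rows:
        # keep log(p) when y == 1 and log(1 - p) when y == 0,
        # mirroring the y_i / (1 - y_i) switch in the formula
        p = score if y == 1 else 1 - score
        total += math.log(p, base)
    return -total / len(rows)

data = [[1, 0.4], [0, 0.5], [0, 0.9], [0, 0.3],
        [0, 0.6], [1, 0.1], [1, 0.9], [1, 0.8]]
print(round(log_loss(data), 7))  # 0.4243099

For production use, clip predicted probabilities away from exactly 0 and 1 before taking logs (scikit-learn does this internally), since log(0) is undefined.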
https://medium.com/analytics-vidhya/pure-python-exercises-8f0affb25217
['Rohan Paul']
2020-12-23 15:39:23.917000+00:00
['Machine Learning', 'Python', 'Data Science', 'Interview Questions', 'Python Interview Question']
The 16 Best Job Search Engines in 2019
If you’re searching for a job, LinkedIn is the unicorn standout — but it’s also not the only game in town. There are quite a few job search engines to help you make your next career move. You can take advantage of Google for Jobs, Facebook Job Search, and so many more. Here, we’ve rounded up 16 job search engines to consider utilizing as you search for your new job. With more than 575 million users, LinkedIn is the world’s most popular social media network for professionals. LinkedIn doesn’t only let people create their own profiles and search for jobs. It also lets companies reach out to candidates and recruit them on the site. Features include targeted job promotion, recommended matches, and candidate management. Users can also share posts and publish their own content on this platform. They get opportunities on LinkedIn to boost their own standing through their own content. That can make them more attractive to both employers and job seekers. This online recruitment platform was designed to help companies hire new employees. SimplyHired posts job listings on 100+ job boards to cover as much ground as possible. It also has a comprehensive search engine, location-based job posting, and a salary estimator tool. There are mobile apps for both Android and iOS, letting you hire or job-seek anywhere you go. This is great for companies that are concerned about the quality of candidates coming to them. It can take as little as a week from posting a job listing to hiring someone of value. This website features job openings, company profiles and reviews, salary listings, and so on. Indeed was designed to connect job seekers and employers seamlessly and conveniently. Companies can create their own profiles and post job listings to attract job applicants. They can then review job applications, manage candidates, and schedule job interviews. There are also sponsored job options, resume subscriptions, mobile recruitment, and so on. Best known for anonymous company reviews, Glassdoor is another popular job search platform. Job seekers get to learn about working conditions and salaries at various companies. They can then apply for jobs at those companies if there are listings posted. Companies can post job listings starting at $99 each, with cost varying by location. That means job listings on Glassdoor are from companies most people would want to work for. In June 2017, Google added a job search feature, making it possible for job seekers to search for positions straight through Google. With Google for Jobs, users can type in their desired position or field, and the search engine then gives them a curated list of jobs that have been recently indexed. The only drawback for employers is that it’s not possible to post jobs directly to Google for Jobs. It’s more of an aggregator that makes use of the most powerful search engine in the world. This website boasts being one of the purest job search engines on the Internet. LinkUp touts itself as the fastest-growing job search engine on the web today. However, unlike most other job search engines, it’s not just an aggregator of other job boards. It indexes jobs exclusively from company websites, making it more trustworthy. This service aims to drive real job seekers directly to real jobs with real employers. The website just lets you enter the job you want and your current location. You’re then given job listings directly from companies looking to recruit new employees.
Being a traditional classified ad website, Craigslist has a job board that anyone can post on. Jobs in Craigslist listings can range from manual labor to copywriting and creative work. Its main strength is that anyone can post on it and browse the listings without registration. You can then message the poster anonymously so you can keep your email address private. The most obvious disadvantage here is uncertainty over whether the listings are actually good. Some of them may be scams or posted by unscrupulous companies with bad working conditions. You’ll then have to do research outside of Craigslist to learn more about that employer. If you’re an employer in the United States, this is the premium job board for you. US.Jobs is an online job board that American businesses of all sorts use. Candidates can apply and share their resumes with countless employers. Meanwhile, employers looking to post jobs here will need to pony up quite a bit. A basic post costs $99, while the Smart package is $25,000 per post per month. But if you’re looking for serious job candidates in the country, US.Jobs is the place to be. This online job board is as premium and legit as you can get. Robert Half International is a California-based global human resource consulting firm. It was founded in 1948 and is currently listed in the S&P 500. That means this company has a lot of history behind it. Robert Half is credited as the world’s largest accounting and finance staffing firm. The company has over 345 locations worldwide, so they’re definitely reputable. Robert Half specializes in law, finance, and technology. This online job board gets millions of visitors per month due to how good it is. Monster is made for small, mid-market, and enterprise businesses, as well as the public sector. It offers three paid plans for employers posting jobs and managing candidates. Features include job listings, resume search, employer branding, real-time analytics, and so on. Employers can reach a large number of candidates easily through Monster without much fuss. You can also rate candidates 1–10 to gauge how qualified they are for what they’re applying for. There are also easy-to-use filters to further sift through the talent pool for better recruitment. This one distributes job listings to 100+ job search engines from all over the Internet as well. ZipRecruiter was designed with employers, recruiters, and staffing agencies in mind. It has customizable job description templates and lets you post pre-screen interview questions. The backend is intuitive and easy to use, letting users get up and running in no time. There’s also the Job Widget, which lets you embed your job listings on your website. Do know that this site makes employers pay more of a premium, despite not being as popular. But a few of the many job boards ZipRecruiter posts to may be worth the price of admission. In its ongoing quest to be the go-to platform for everything, Facebook now has job search. Employers can use this Facebook feature to get both active and passive recruits. Passive recruits are people who may not be actively looking for a job opportunity at the moment. But you can still end up getting some of them as they think about going for that job over time. This job search takes advantage of the power of the world’s top social media network. You get to look up potential recruits’ Facebook profiles to learn about them on a personal level. Job posts are free, so you can recruit on Facebook at little to no cost.
But if you want your job post to be more visible, you can turn it into a sponsored post for a fee. Founded in 2001, Job.com is one of the very first legitimate online job boards on the Internet. This Texas-based company constantly aims to be at the forefront of job recruitment. Job.com right now is built on blockchain and powered by smart contracts. They say this lets them slash costs for employers and better reward hired candidates. It also lets them cut out the middleman, getting rid of the need for third-party recruiters. This service boasts letting employers forego the hiring fee of 20% that is the industry standard. Instead, they only pay 7% of the candidate’s annual salary, with 5% going to that candidate. It’s a model that’s meant to change the game and disrupt the job recruitment industry. If you’re looking for jobs in the public sector in America, then this is the site to go to. USAjobs.gov is the US Federal Government’s official employment website. It’s open to the public, paid for with the American people’s tax money. The website offers jobs to veterans, students, graduates, individuals with disabilities, and so on. If you’re looking to start a career in public service, then this is the place to go. This website is for those who are looking for a more general job board. CareerBuilder is one of the most popular and most trusted job boards in the US. It boasts direct relationships with 92% of Fortune 500 companies in the country. Pricing is based on the number of posts purchased, so it’s for companies with lots of openings. The service lets employers buy job postings in bulk for a better price. CareerBuilder is meant for medium to enterprise-level businesses looking to bolster their ranks. As the name suggests, Snag is all about snagging the best job candidates quickly. Snag, also known as SnagAJob.com, is mostly focused on the hourly job market. It claims to be the number one place for hourly jobs in the US. Over sixty million job seekers are said to be registered with Snag. This website is a good place to build your talent pool right away. Many of the jobs featured here are in retail, as well as the restaurant and hotel industry. But it should be good for other industries as well that pay wages by the hour. Snag has an $89 monthly membership fee and is perfect for jobs that pay $10–$20 per hour. About the Author: Larry Kim is the CEO of MobileMonkey — provider of the World’s Best Facebook Messenger Marketing Platform. He’s also the founder of WordStream. You can connect with him on Facebook Messenger, Twitter, LinkedIn, and Instagram. Originally posted on Inc.com.
https://medium.com/the-mission/the-16-best-job-search-engines-in-2019-257e4fbcffbc
['Larry Kim']
2019-06-25 03:27:46.402000+00:00
['Careers', 'Startup', '2019', 'Hiring', 'Job Search']
How to Power Up Your Referral Program Using Branch’s Android SDK?
A referral rewards system is a fantastic tool for growing your base of high-quality users! You can acquire more users by incentivizing your existing users to share your app with friends after integrating Branch’s SDK. Here, I will show you how to build a referral system for your new ride-sharing app. You will learn how to:
· Integrate Branch’s Android SDK into your mobile app
· Build referral links to share earned rewards with users
· Set referral reward rules for specific triggered events
· Review both the referred and the referring user’s credits
· Redeem credits in your app for users
Setting up Branch’s Android SDK
Import Branch’s SDK by adding the dependency to the build.gradle in your app folder.
Configure Branch Dashboard
Create an account on dashboard.branch.io. You can follow Branch’s guide to setting up your account and dashboard.
Configure App
Add Branch to your AndroidManifest.xml and replace the existing values with the values generated in the Dashboard.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="io.branch.branchandroid"
    android:versionCode="4"
    android:versionName="1.0.0" >

    <uses-permission android:name="android.permission.INTERNET" />

    <application
        android:largeHeap="true"
        android:allowBackup="true"
        android:name="io.branch.branchandroid.CustomApplication"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name">

        <activity
            android:name="io.branch.branchandroid.MainActivity"
            android:label="@string/app_name"
            android:screenOrientation="portrait"
            android:theme="@style/Theme.Transparent"
            android:launchMode="singleTask" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
            <!-- Branch URI scheme -->
            <intent-filter>
                <data android:scheme="branchandroid" android:host="open" />
                <action android:name="android.intent.action.VIEW" />
                <category android:name="android.intent.category.DEFAULT" />
                <category android:name="android.intent.category.BROWSABLE" />
            </intent-filter>
            <!-- Branch App Links -->
            <intent-filter android:autoVerify="true">
                <action android:name="android.intent.action.VIEW" />
                <category android:name="android.intent.category.DEFAULT" />
                <category android:name="android.intent.category.BROWSABLE" />
                <data android:scheme="https" android:host="se-john-tan.app.link" />
            </intent-filter>
        </activity>

        <!-- Branch init -->
        <meta-data android:name="io.branch.sdk.BranchKey" android:value="@string/branch_key" />
        <meta-data android:name="io.branch.sdk.BranchKey.test" android:value="@string/branch_key_test" />

        <!-- Branch testing (TestMode "true" to simulate fresh installs on dev environment) -->
        <meta-data android:name="io.branch.sdk.TestMode" android:value="false" />

        <!-- Branch install referrer tracking -->
        <receiver android:name="io.branch.referral.InstallListener" android:exported="true">
            <intent-filter>
                <action android:name="com.android.vending.INSTALL_REFERRER" />
            </intent-filter>
        </receiver>
    </application>
</manifest>

Initialize and start a Branch session

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    @Override
    public void onStart() {
        super.onStart();
        Branch branch = Branch.getInstance();
        // Branch init
        branch.initSession(new Branch.BranchReferralInitListener() {
            @Override
            public void onInitFinished(JSONObject referringParams, BranchError error) {
                if (error == null) {
                    // referringParams are the deep link params associated with the link
                    // that the user clicked before being redirected to this app;
                    // they will be empty if no data is found
                    // ... insert custom logic here ...
                    Log.i("BRANCH SDK", referringParams.toString());
                } else {
                    Log.i("BRANCH SDK", error.getMessage());
                }
            }
        }, this.getIntent().getData(), this);
    }

    @Override
    public void onNewIntent(Intent intent) {
        this.setIntent(intent);
    }
}

Load Branch

public class CustomApplication extends Application {
    public void onCreate() {
        super.onCreate();
        // enabling auto session management
        Branch.getAutoInstance(this);
    }
}

Build a Referral Link to Share Earned Rewards
After we have configured and initialized Branch, we can build a referral link that identifies the specific user who shares the link and the device we are tracking from.

// Identify the device
Branch.getInstance().setRequestMetadata("device_id", "android123");
// Identify the user
Branch.getInstance().setRequestMetadata("user_id", "john-android123");

After identifying the device and user, we can create a link that the user can share. Any referrals from this link will be tracked among the users. We can use this piece of code whenever a user wants to refer friends to your ride-sharing app. We can build this by creating a shareable deep link:

LinkProperties lp = new LinkProperties();
lp.setChannel("facebook");
lp.setFeature("sharing");
lp.setStage("new user");
lp.addControlParameter("$desktop_url", "http://desktop.com/rideshare/android123");

BranchUniversalObject branchUniversalObject = new BranchUniversalObject();
branchUniversalObject.generateShortUrl(this, lp, new Branch.BranchLinkCreateListener() {
    @Override
    public void onLinkCreate(String url, BranchError error) {
        if (error == null) {
            Log.i("BRANCH SDK", "got my Branch link to share: " + url);
        }
    }
});

ShareSheetStyle ss = new ShareSheetStyle(MainActivity.this, "Check out This New Ride Sharing App!", "This stuff is awesome: ")
    .setCopyUrlStyle(ContextCompat.getDrawable(this, android.R.drawable.ic_menu_send), "Copy", "Added to clipboard")
    .setMoreOptionStyle(ContextCompat.getDrawable(this, android.R.drawable.ic_menu_search), "Show more")
    .addPreferredSharingOption(SharingHelper.SHARE_WITH.FACEBOOK)
    .addPreferredSharingOption(SharingHelper.SHARE_WITH.EMAIL)
    .addPreferredSharingOption(SharingHelper.SHARE_WITH.MESSAGE)
    .setAsFullWidthStyle(true)
    .setSharingTitle("Share With");

branchUniversalObject.showShareSheet(this, lp, ss, new Branch.BranchLinkShareListener() {
    @Override
    public void onShareLinkDialogLaunched() { }
    @Override
    public void onShareLinkDialogDismissed() { }
    @Override
    public void onLinkShareResponse(String sharedLink, String sharedChannel, BranchError error) { }
    @Override
    public void onChannelSelected(String channelName) { }
});

Reward Referral Rules and Events
We can handle referral rewards for both the referring user and the newly onboarded, referred user. Referral rules can be set up under the Referral tab in Branch’s dashboard to determine the amount of credit to allocate to the referring user (the user who shares the link), the referred acting user (the newly onboarded user), or all acting users (both the referring and referred acting users). We can set up different rules, each granting a set amount of credit, for specific events triggered by each user. For example, we can reward 10 points when the referring user shares a link with a friend who signs up as a new user and takes his or her first ride. We can name this reward rule “signup_referral”.
The referring user will be rewarded when his or her friend fires the event “signup_referral”. When the referring user’s friend acts on the link, this is what happens:

Branch.getInstance(getApplicationContext()).userCompletedAction("signup_referral");

This triggers the event “signup_referral” when the new user clicks on Branch’s link to install the app and signs up for an account. When this happens, the referring user earns the reward. Therefore, the referring user receives credit when his or her friend takes their first ride, and the referred acting user can also get credit for signing up for the first time. Branch awards the credit to each or all of the users automatically, based on the events set in your dashboard.
Review Rewards for Both the Referred and the Referring User
We can track the users that trigger the different rules for each event. This can be done by looking at the analytics for each user’s rewards, and the number of users each user referred, under the Referral tab in Branch’s dashboard.
Redeem Credits
We should allow users to redeem credits, to make them feel that signing up for your ride-sharing app is worth it! We need to get the number of credits each user has so we can redeem them, and a place in your ride-sharing app to show the balance of credits the user has remaining. In order to display the number of credits a user has, we can do the following:

Branch.getInstance().loadRewards(new Branch.BranchReferralStateChangedListener() {
    @Override
    public void onStateChanged(boolean changed, BranchError error) {
        // the credit balance is cached locally once loadRewards completes
        int credits = Branch.getInstance().getCredits();
    }
});

We can set rules for the user to redeem the credits on the Branch dashboard. If we set a rule that a user needs a minimum of 10 credits to redeem, we can do the following:

Branch.getInstance().redeemRewards(10);

If we want to get the users’ full credit history, for example to see who has the highest rewards on your customized dashboard so that everyone can keep track of the reward rankings, we can do the following:

Branch.getInstance().getCreditHistory(new Branch.BranchListResponseListener() {
    public void onReceivingResponse(JSONArray list, BranchError error) {
        if (error != null) {
            Log.i("BRANCH SDK", "branch load rewards failed. Caused by - " + error.getMessage());
        } else {
            Log.i("BRANCH SDK", list.toString());
        }
    }
});

There you go! Now you have your referral program working using Branch’s Android SDK, and you can start having your users share the ride-sharing app with their friends! Let me know what you think and we can discuss this further! John Tan
https://medium.com/swlh/how-to-power-up-a-referral-program-using-branchs-android-sdk-fbb5d1a8c9c9
['John Tan']
2020-12-29 01:37:06.891000+00:00
['Startup', 'Mobile', 'Android Sdk', 'Deep Linking']
Guest blog: Abundance Generation
Bruce Davis is one of the original co-founders of Zopa, the highly successful peer-to-peer lending site. He’s recently set up Abundance Generation, the ‘first community investment platform that makes it possible for people to earn a cash return by investing in renewable energy farms in the UK’. He also happens to be an anthropologist, deeply interested in the social aspects of money. In this post he explores why we might want to move our money and reconnect to real value. Look me in the eyes. You believe that money has real intrinsic value. Now, when I click my fingers, you are ‘back in the room!’ We live in a world that believes that money has intrinsic value, as if you could eat it. Trying to subsist on money, though, is a bit like trying to eat a centimetre. In the 20th century we started to confuse the functions of money as a medium of exchange and as a store of value. Money is now the dominant medium of exchange because we trust that other people will also accept it in exchange for things we need or services we give. Money is now the measure of all things, but it is just that, a measure. Dollars, pounds, euros, and rupees are all just different names for the things we use to measure the value of something. Unlike the centimetre, though, the original of which is kept in France, there is no ‘ultimate’ money against which all other money is judged to measure value (nor is money backed by gold any more, although you still find politicians who don’t realise that fact). Rather, our acceptance of money is a function of shared social and cultural beliefs. The problem, however, is that our political masters believe that money is also a store of value and want us to believe the same. However, money has no intrinsic value, and simply creating more of it through unproductive lending to speculative activities merely reduces the value of the money you already have, generating inflation. Believing that printing more money makes you richer is like believing that creating more centimetres can make a person taller. So what are the real ‘stores of value’ in our society? Essentially these are assets which generate stuff: by producing more stuff from nature (digging up stuff or growing stuff, essentially), by making things (although this only creates value if you can sell it to someone for more ‘stuff’ in return), or by generating from an unlimited renewable resource — such as renewable energy. Wealthy people have long realised this and taken advantage of it. Their wealth doesn’t come from ‘money’ as such, but rather from their investments in other assets. The comedian Miles Jupp sums it up when he recounts an anecdote about being mugged. “Give me all your money”, says the mugger. “Happy to,” says the rich man, “but it will take a while as it is all tied up in land. Let me take your details and I will send you a cheque in a few months.” This is why wealthy individuals have tended to do rather well out of the financial crisis. As money has been devalued by quantitative easing and bank bailouts (essentially we now have ‘shorter’ centimetres with which to measure the value of things), the effect has been to make the ‘value’ of their real assets increase when measured in terms of money. So what can you do? Certainly moving your money into the likes of The Co-operative, Zopa, or the credit unions means that you are directing money to more ‘productive’ investments and assets, such as community enterprises and renewable energy.
We’ve also just launched a new platform which enables you to put your money directly into UK renewable energy assets, just as the rich do, but from as little as £5 per investment. Abundance Generation is part of a new wave of “democratic finance”. It allows you to have direct control over where your money goes, but it also allows you to diversify your money out of “money”, offering you a sustainable way to produce real value both for yourself and for society as a whole. Originally published in March 2012 at moveyourmoney.org.uk
https://medium.com/move-your-money/guest-blog-abundance-generation-e8898b305115
['Move Your Money Uk']
2016-10-04 03:46:02.484000+00:00
['Economics', 'Abundance', 'Blog', 'Renewable Energy', 'Bitcoin']
How Bruce Lee and Russian Trains Can Shake Up Your Self Care Right Now
1. Wu Wei (China) — “Non-Action” Wu Wei (無為) is a Taoist principle which roughly translates as non-action or acting without effort. Rather than conveying a sense of laziness or apathy, the real message behind Wu Wei is about swimming with the tide instead of against it. It’s a concept of letting go of our ego and not forcing our will onto the universe. If you were a boat in the open ocean and the wind started blowing, applying Wu Wei would be to put up a sail instead of attempting to row against the wind. So a better way of phrasing Wu Wei could be “effortless action.” This quote from Bruce Lee describes it best: “Be like water making its way through cracks. Do not be assertive, but adjust to the object, and you shall find a way around or through it. If nothing within you stays rigid, outward things will disclose themselves. Empty your mind, be formless. Shapeless, like water. If you put water into a cup, it becomes the cup. You put water into a bottle and it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow or it can crash. Be water, my friend.” It’s important not to mistake fluidity for submission or giving up. Anyone who has observed a mighty river carving its way through mountain ranges has seen the awesome power of fluidity and can appreciate that the river did not submit to anything. Wu Wei simply means to thrive by going with the flow. How You Can Apply This Right Now Acknowledge that the pandemic has been a huge disruptor in your life and that there are days on which you will just not be productive. Instead of forcing yourself to be productive, allow yourself to just read a book or watch a movie. A big part of moving forward is also accepting that there is now a new normal. Instead of investing your energy into trying to reclaim your old life, find opportunities in this new situation that were never possible before. Maybe you could use your previous commute time to finally launch your side hustle. 2. Pule ’Ohana (Hawaii) — “Family Prayer” Pule ’ohana translates to “family prayer” and is a simple ritual where a family gathers to talk about their day, to apologize for any wrongdoing, and to express gratitude. At its core, the ritual is about creating a safe and consistent way to openly bring up grievances, resolve them in a spiritually mindful way, and heal relationship wounds. As we are forced to spend more time together than we previously did, there is bound to be friction between household members which will need to be resolved. However, grievances don’t have to be present for the ritual to have value. The practice could also be used as a forum to validate each other’s feelings, understand personal boundaries, and have thoughtful dialogues about life. There are several reasons why this ritual is so powerful right now. Almost everyone on earth has been touched by anxiety, grief, or a sense of isolation recently. The simple act of coming together on a consistent basis alleviates all three emotions. This practice taps into a fact that anthropologists have known for decades: that rituals are powerful in reducing anxiety and grief. It also reduces isolation, as research shows that structured rituals like this one have the ability to create strong bonds when they are performed together. How You Can Apply This Right Now A ritual is defined by a certain rigidity and repetitiveness. Set aside a set time or specific meal and establish a rough structure of discussion.
Go around the table with your family (or housemates) and create a safe space to discuss feelings and any grievances. It’s helpful to establish some rules of engagement for resolving differences prior to initiating the practice. Try to end each day with a component of gratitude, especially for one another. 3. Razgovory v Poezde (Russia) — “Train Talks” The phrase “razgovory v poezde” translates to “train talks” and has its origins on the Trans-Siberian Railway. The concept describes the authentic and raw conversations you are likely to have when a bunch of people are crammed together in a small space for an extended period of time to endure a tough journey. Sounds like a perfect mirror of our quarantine experiences to me. The part I love most about the “train talks” is the way Russians describe them as skipping the small talk and immediately diving deep. The bond is initiated through sharing food, swapping stories, and experiencing some degree of suffering together, but it is cemented by a healthy dose of authenticity. You board the train as strangers, but in the end, you leave it as friends. We could all benefit from leaving the quarantine experience with stronger relationships than when we went in. The reason cultivating connections beyond the superficial is important is that research shows that being able to be our authentic selves is important to our wellbeing. It is also central to our ability to have satisfying connections. What’s more, establishing a sense of stability and alignment with our true selves is actually linked to a higher level of grit and the ability to withstand challenges and pursue goals. “Small talk is for small people. Conversations are for the elite.” — Salma Farook How You Can Apply This Right Now Move beyond small talk and superficial conversations with the people around you. Seek to ask more questions and practice going deeper. If you don’t know how to start, there is a whole movement on authentic relating that provides a host of free resources. In return, allow yourself to be vulnerable and to truly express how you feel and where you’re at with the people that you trust. According to the Russians, a bit of vodka helps. 4. Fika (Sweden) — “Coffee and Friends Break” The root of the word “Fika” comes from an old slang word for coffee: kaffi. Transposing the two syllables gives you Fika. Though it is broadly translated as a coffee (and usually cake) break, it is very different from the American version of a grab-and-go coffee fix. For one, Fika involves an intentional mindset of stepping away and actually taking a real break — not distractedly sipping coffee while you frantically try to meet that deadline. Swedes practice Fika not just to pause but also to focus on indulging. Traditional pastries and sweets are usually an integral part of the break. Secondly, Fika also typically involves socializing and connecting with those around you. You don’t Fika by yourself at your desk. An interesting distinction between a Fika and a coffee break as we understand it is that Americans tend to use coffee as a means to continue working, while Swedes use it as a reminder to take a break. Research suggests that the Swedes may have the right idea, as prolonged focus on a task has been found to eventually be detrimental to productivity. Numerous studies have also shown that socializing, particularly informal socializing, greatly increases creativity. How You Can Apply This Right Now Remember that Fika is a mindset.
It involves prioritizing breaks and actually allowing your mind to refresh instead of just physically stepping away from your computer. The other key components of Fika are companionship and a yummy, indulgent snack. Try to make sure you always have time for at least two Fikas a day. It will not only help you be more productive but it will strengthen your relationships as well.
https://medium.com/curious/how-bruce-lee-and-russian-trains-can-shake-up-your-self-care-right-now-5446767f3aac
['May Pang']
2020-11-23 23:14:52.436000+00:00
['Self Improvement', 'Life Lessons', 'Self', 'Mental Health', 'Life']
Designing for Power and Simplicity
The spectrum of work and ideas that Microsoft Office supports is diverse beyond imagination. That’s not hyperbole; it’s simply what happens when over a billion people use Office across vastly different industries, disciplines, geographies, and generations. People trust Office to help manage trillions of dollars of global business as much as they trust us to help with their child’s homework. For Microsoft Design, our biggest challenge and our biggest reward is that our audience is quite literally everyone. An audience of this size brings into laser focus the universal need for simple, powerful tools that help people stay focused amid an increasingly crazy world. Office has always offered a powerful set of tools with a wide range of useful features. Through refreshed UX, we make that power even more accessible with simple designs that use AI to supercharge a diverse range of ideas and workflows. One billion people can’t and shouldn’t have a one-size-fits-all design solution, and we’re evolving Office 365 into a suite of connected services with experiences that adapt to the needs of whoever is using it. We’ve put a lot of heart (and some serious midnight oil!) into these changes, and we’re excited to share them with you today. As always, we welcome your thoughts and feedback in the comment section below. Designing for simplicity: expanding our Fluent Design System Last year, we unveiled the Fluent Design System, a simple and connected visual system that supports Office as it moves toward faster, frictionless, and more intelligent experiences. For the first time in history, five generations share the workplace — this remarkable reality makes improvements like these more important than ever. We’ve been working hard to expand Fluent Design, and we’ve now aligned the entire Office 365 suite on typography and iconography. You’ll notice a shared header across products, the same grid everywhere, and added depth to focus on what matters. Our entire Office suite now aligns on the Fluent Design color, grid, iconography, and depth to create a simple and familiar experience everywhere. Microsoft has long been committed to inclusive design principles, and themes like Dark Mode ensure Office 365 can best adapt to the diverse needs of our many users. We’ve also evolved our color palette so, while still very familiar, the hues are lighter and more vibrant. Dark Mode, shown here in Microsoft Teams, enables Office to adapt to the diverse needs of our many users and we’re excited to be bringing it to Microsoft Outlook on the web. Designing for power: meet Microsoft Search If Fluent Design removes what’s not necessary through simpler visual interfaces, Microsoft Search delivers what is necessary through powerful intelligence. People rarely create content in a vacuum, and we typically need to reference files or conversations outside a single workspace. It’s a disruptive process that invites distraction while toggling between tabs and tools. With Microsoft Search, we combine AI with Microsoft Graph to deliver results directly into your workflow. Search is visually prominent across every Office app, fostering a consistent experience that brings contextual results before you even begin typing. Our zero-query search means just that: there’s zero for you to do. The simple act of putting the cursor in the search bar surfaces relevant apps, content, and people based on past behavior. Our redesigned start pages also leverage intelligence to organize files by frequency and activity instead of date and size. 
A unified experience of Search across web and mobile. Designing for superpower: sharing Ideas with you Human-centered design underlies everything we do, and beyond powering tools like Microsoft Search, weaving AI throughout the design process has enabled us to carefully craft experiences that intelligently extend your own capabilities in natural ways. Not everyone using Microsoft Word, Microsoft Excel, or Microsoft PowerPoint is going to be a professional designer, writer, or analyst (I’m certainly not innately geared toward Excel wizardry!). Still, that shouldn’t bar people from creating professional-quality work. Our new companion experience, Ideas for Office, docks alongside your work and offers one-click assistance with grammar, designs, data insights, rich imagery, and more. Ideas helps you work faster and look like a pro while doing so. Ideas uses powerful intelligence to help design slides, check grammar, provide data insights, and much more. To complement this sidebar experience, we’ve also designed in-canvas interactions that help keep you in flow and will be forthcoming in future releases. (By the way, “canvas” is Microsoft lingo for the main body of a tool, like the page in Word or the slide in PowerPoint.) For many types of work, it’s where you want to stay when you’re in the zone. To enable that, we’ve made it lightning fast to pipe in the contents of other files; in Excel, for example, simply type “Insert a chart” and all of the relevant docs and graphs come right to you. We always engage deeply with users to collect feedback. For example, we learned that people often leave reminders for themselves within documents. To help complete the task at hand without distraction, you can now simply type “todo” into the main body of the document to create a reminder or @mention someone who can help. Instead of leaving your document to find what you need, Microsoft To-Do allows you to stay in the flow by bringing content and people directly to you. In Excel, we found that people were spending a ton of time crunching numbers about geographies and stocks, so we made that process faster and more accurate. Now, you just type the names of places or traded companies in succession, mark them as the appropriate data type, and Excel can automatically pull in related information about the geography or security for further analysis. Designing for you: let us know what you think Any designer knows that our process is one without a beginning, middle, or end. It’s an iterative cycle, and even huge milestones like today’s mark the start of the journey’s next leg. Our customers are on this journey with us, and the improvements we’ve unveiled are all part of our effort to support their best work. If you have feedback on how to make our designs even stronger, we welcome it in the comments below!
https://medium.com/microsoft-design/designing-for-power-simplicity-9cddec615567
['Jon Friedman']
2020-05-19 04:57:52.458000+00:00
['Microsoft', 'User Experience', 'Design', 'UX', 'Fluent Design System']
I Was Forced Out of My Job
There'd been a great deal of upheaval in my life prior to taking the real estate job. I'd been living with a roommate who could no longer hide his alcoholism. He was having trouble at his own job, so I should have seen his abandoning the apartment coming. But I didn't. I managed to find someone to take over the townhouse and secured a studio in a developing part of the city. I was scared because I had very little money at that point and, just before my roommate abandoned the apartment, I'd given two weeks' notice at my shipping job for health reasons.

Having grown up hearing stories about my grandparents in the Great Depression, I did what my grandmother had done. I rationed out the food I had and I waited for a job offer. When the call finally came, I couldn't believe my luck. It was the best offer I could have hoped for. And it came just in time. When I received my first paycheck, I was out of cash and had two days' worth of food left.

Determined to prove that the company had made a smart choice, I applied myself to the role and learned their methods so quickly it amazed my trainer. She was used to new recruits taking months to be able to handle the accounts on their own, not weeks. I took it as a good sign that one of the company's oldest employees was impressed. But while my trainer and my immediate supervisor liked my being able to lighten their workload, the other people in the office looked at me with suspicion.

I've never "fit in" well and, while I tried to relate to my co-workers, we couldn't seem to find common ground. I was single; they were married or divorced, usually with several children. They liked to chat and gossip as they sat in the open office; I invariably missed the point when I tried to join in. Over time, I began to be ignored when I asked them questions. People would leave the break room when I came in. When I walked past one woman's desk on my way to the bathroom, I was consistently called an "uppity bitch." Trying to talk with my co-workers so I could understand and fix the problem did nothing but bring more abuse.

The head of HR was no help because she instigated much of it. She and two other women sat in the break room one afternoon and loudly talked about everything that was wrong with me, from my makeup to my attitude. I was stunned. Fighting back nausea, I entered the break room and asked what it was that I had done to make them so angry. The response? That they weren't talking about me (a lie, as they had used my name several times) and that I needed to stop trying to "start shit."

I couldn't believe the situation had gotten that bad. I went to my supervisor at the end of the day and explained what was going on. And then I did something radical: I disclosed to him that I was Autistic. My trainer already knew; she had a grandson on the spectrum and had asked me outright if I was too. It felt good to not be judged, so I'd confirmed it and we'd had a few conversations about Autism. I'd recommended books for her, but I hadn't imagined that she would tell anyone else.

But she had. It had filtered out to the rest of the company, so that when I disclosed to the supervisor, he was unsurprised. He was also unsympathetic, telling me to "just ignore" the others and keep doing the excellent work I was doing for him. Haunted by the prospect of being unemployed again, I did my best to follow his advice. I used headphones to try to block them out and began smoking for the first time in six years just so I could escape out the doors a few times a day.
The depression and anxiety I've struggled with since childhood reasserted themselves viciously. But it wasn't until my co-workers discovered my PTSD (and started intentionally triggering it) that I went to my supervisor again.

Post-traumatic stress disorder presents in a variety of ways. An exaggerated startle response is one of the most common, and mine went off one day when a co-worker dropped a stack of binders behind me. I was wearing my headphones so I hadn't heard her approach, but the resulting noise caused my entire body to jerk hard enough to send my chair a few feet backwards. The office roared with laughter and after that I couldn't relax no matter what. I was always waiting for the next attack, constantly sick with headaches and a clenched stomach.

My supervisor didn't care. He advised me not to retaliate and assured me I had a job there as long as I did good work. As far as he was concerned, my being allowed to wear headphones was all the accommodation I needed. No other "special allowances" would be made for me. And then he told me not to talk about being Autistic anymore, not with him and not with my trainer. I was stunned, humiliated. I knew this wasn't right, that it was blatantly illegal. But there was no way I could afford a lawyer to fight it and I had to keep the job while I searched for another one. I didn't know what to do except try to hold on.

I began to notice that other executives were watching me closely. They began to give me assignments that I had no time to do and no experience with. They'd "approved it with my supervisor," so I took the work and kept my head down. I was told that I couldn't clock in a few minutes early as I'd been doing, as everyone else did. I was denied overtime when I really needed it to complete the extra work.

And then the death blow came. In a company-wide email, everyone was informed that my co-worker, S., was going to take over my duties. Nowhere did it explicitly say that I had been fired, but there was no other explanation. That day, I waited for someone, anyone, to come and tell me to turn in my badge, to fill out paperwork, or do an exit interview. But no one did. They were waiting for me to break, for me to leave. They needed me to quit because if they fired me they might end up in a discrimination lawsuit.

That whole day, as people congratulated S., I kept asking myself if this was going to be the thing I let break me. When my supervisor refused to see me, saying he was too busy, I thought, "Yeah, I think this is." At 4:30, I gathered my things, put them in my purse, left my badge on the desk, and walked out.

I wish I could tell you that it felt triumphant, like I was finally free, but I can't do that. There had been too many months of torture and I was too numb to feel much of anything. It would take a long time to reverse that and even longer to feel like I wasn't to blame. Getting my story out there helps. And maybe it'll help someone else too.
https://medium.com/invisible-illness/i-was-forced-out-of-my-job-3e3616a03b8f
['Kate Taylor']
2019-09-12 22:21:20.131000+00:00
['Work', 'Disability', 'Mental Health', 'Equality', 'Autism']
Vision Transformers for Image Recognition at Scale
While convolutional neural networks have been used in computer vision since the 1980s, they were not at the forefront until 2012, when AlexNet surpassed the performance of contemporary state-of-the-art image recognition methods by a large margin. Two factors helped enable this breakthrough: (i) the availability of training sets like ImageNet, and (ii) the use of commoditized GPU hardware, which provides significantly more compute for training. As such, since 2012, convolutional neural networks have become the go-to model for vision tasks.

The benefit of using a convolutional neural network is that it avoids the need for hand-designed visual features, instead learning to perform tasks directly from data, end to end. However, while the neural network avoids hand-crafted feature extraction, the architecture itself is designed specifically for images and can be computationally demanding. Looking forward to the next generation of scalable vision models, one might ask whether this domain-specific design is necessary, or if one could successfully leverage more domain-agnostic and computationally efficient architectures to achieve state-of-the-art results.

A first step in this direction is the Vision Transformer, a vision model based as closely as possible on the Transformer architecture originally designed for text-based tasks. The Vision Transformer represents an input image as a sequence of image patches, similar to the sequence of word embeddings used when applying Transformers to text, and directly predicts class labels for the image. When trained on sufficient data, it demonstrates excellent performance, outperforming a comparable state-of-the-art convolutional neural network while using four times fewer computational resources.

The Vision Transformer treats an input image as a sequence of patches, akin to a series of word embeddings generated by a Natural Language Processing Transformer.

Vision Transformer

The original text Transformer takes as input a sequence of words, which it then uses for classification, translation, or other Natural Language Processing (NLP) tasks. The Vision Transformer adapts this design to operate directly on images instead of words, letting us observe how much about image structure the model can learn on its own.

The Vision Transformer divides an image into a grid of square patches. Each patch is flattened into a single vector by concatenating the channels of all pixels in the patch and then linearly projecting it to the desired input dimension. Because Transformers are agnostic to the structure of the input elements, a learnable position embedding is added to each patch, which allows the model to learn about the structure of the images. A priori, the model does not know the relative location of patches in the image, or even that the image has a 2D structure; it must learn such relevant information from the training data and encode structural information in the position embeddings.

Left: Performance of the Vision Transformer when pre-trained on different datasets. Right: The Vision Transformer yields a good performance/compute trade-off.

High-Performing Large-Scale Image Recognition

The data suggest that (1) with sufficient training data the Vision Transformer can perform very well, and (2) it yields an excellent performance/compute trade-off at both smaller and larger compute scales. To see whether the performance improvements carry over to even larger scales, a 600M-parameter Vision Transformer model was trained.
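Before turning to the results, here is a minimal sketch in Python/NumPy of the patch-embedding step described above. The 16-pixel patch size and 768-dimensional embedding are illustrative choices, and the random weights stand in for parameters that a real ViT learns during training.

import numpy as np

def image_to_patch_embeddings(image, patch_size=16, embed_dim=768, seed=0):
    """Turn an HxWxC image into a sequence of projected patch embeddings.

    Illustrative only: in a real ViT the projection matrix and the position
    embeddings are learned parameters, not random draws.
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    num_patches = (h // patch_size) * (w // patch_size)

    # Flatten each patch by concatenating the channels of all its pixels.
    patches = (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(num_patches, patch_size * patch_size * c)
    )

    rng = np.random.default_rng(seed)
    # Linear projection to the desired input dimension.
    projection = rng.normal(scale=0.02, size=(patch_size * patch_size * c, embed_dim))
    # A learnable position embedding is added per patch so 2D structure can be recovered.
    position = rng.normal(scale=0.02, size=(num_patches, embed_dim))
    return patches @ projection + position

# A 224x224 RGB image becomes a sequence of 196 tokens of dimension 768.
print(image_to_patch_embeddings(np.zeros((224, 224, 3))).shape)  # (196, 768)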
This 600M-parameter model attains state-of-the-art performance on multiple popular benchmarks, including 88.55% top-1 accuracy on ImageNet and 99.50% on CIFAR-10. It also performs well on "ImageNet-Real", the cleaned-up version of the ImageNet evaluation set, attaining 90.72% top-1 accuracy. Finally, the Vision Transformer works well on diverse tasks, even with few training data points. For example, on the VTAB-1k suite (19 tasks with 1,000 data points each), ViT attains 77.63%, significantly ahead of the single-model state of the art (SOTA) of 76.3%, and even matching the SOTA attained by an ensemble of multiple models (77.6%).
https://medium.com/analytics-vidhya/vision-transformers-for-image-recognition-at-scale-fe1b57a9c02b
['Abhilash Pattnaik']
2020-12-28 16:22:22.222000+00:00
['Image Recognition', 'Transformation', 'NLP', 'Artificial Intelligence']
A New Capability Maturity Model for Deep Learning
Photo by yang miao on Unsplash

How can we understand progress in Deep Learning without a map? I created one such map a couple of years ago, but it needs a drastic overhaul. In "Five Capability Levels of Deep Learning Intelligence", I proposed a hierarchy of capabilities that was meant to inform the progress of Deep Learning development. In that proposal, my classification was based on structural components that I suspected should exist at each level:

Five Capability Levels of Deep Learning (Now Revised!)

Specifically, you begin with a feedforward network at the first level. That would be followed by memory-enhanced networks, examples of which include the LSTM and the Neural Turing Machine (NTM). Then come networks that are able to ingest knowledge bases (bridging the semantic gap). The next level would encompass systems capable of handling imperfect or partial information, finally leading to a society of mind as described by Minsky.

When the above classification was proposed, there was a lot of hype about the promise of DeepMind's NTM. It was once thought that climbing up the Chomsky hierarchy would lead to more advanced Deep Learning capabilities. This approach has since died down, as the capabilities of the NTM were revealed to be disappointingly unscalable. Chomsky's hierarchy only reveals what can be computed; it is unable to provide insight into the nature of learning.

"Reasoning and learning are two sides of the same coin."

There's a lot of detail that was missing in my original proposal. The most glaring oversight was detail on how to evolve (not necessarily design) learning systems. My original capability model was not inspired by the paradigm that learning is primarily driven by intuition machines. Although I did capture the existence of the semantic gap, I said little about "contextual adaptation". Absent from my proposal was the idea of "embodied learning". I was unaware of the importance of an inside-out (predictive coding) architecture. Finally, I was ignorant of the necessity of a hierarchy of self-awareness.

An emphasis on learning must be the driving approach of any capability model of intelligence. Reasoning (or inference) and learning are two sides of the same coin and therefore must be treated equivalently.

"The usefulness of a well informed capability maturity model is that it allows us to get an accurate sense of progress in AI."

A well-informed capability model is useful because it allows us to get an accurate sense of progress in AI. This is critically important today: our policymakers do not have the language to describe or even make sense of AI development. We cannot understand progress in AI (and anticipate its benefits and dangers) when our notion of intelligence is wrapped in vague, ambiguous, and imprecise language.

Now it is time to unveil my revised Capability Maturity Model for Deep Learning. To help better understand the distinctions between the different maturity levels, I use a conceptual diagram that universally captures the problem of cognition.

Level Zero — Handcrafted (Non-Intuitive Programmed)

These are present-day programmed systems. Good Old-Fashioned AI (GOFAI) systems (see: "Tribes of Artificial Intelligence") that are unable to learn through experience fit within this class. These systems perform sense-making through well-established deductive reasoning algorithms.
Level One — Stimulus Response (Intuitive Perception)

DL CMM Level 1 — Representation originates from the environment.

These are present-day feedforward deep learning networks that are able to learn regularities found in training data. They are universal function approximators; this is what DARPA would describe as "statistical" systems. Conventional machine learning methods such as kernel methods, decision trees, and probabilistic graphical models also fall within this class. These systems make sense of the world through inductive reasoning. The most advanced form of these systems is generative, such as Generative Adversarial Networks (GANs). An intermediate form is state-based models such as the RNN and NTM; there is a refinement of state-based deep learning networks that are Turing complete. In my older classification, I created two levels here, one for classification-only networks and another for memory-based networks. One can always decompose this level according to the conventional Chomsky hierarchy, with memory-less functions at the bottom and Turing-complete machinery at the top. It is important to realize that a cognitive machine must be at the same level of the Chomsky hierarchy as its environment. Furthermore, if the environment is Turing complete, then inductive reasoning has its limitations in that only anti-causal reasoning is possible (i.e., predicting cause from observed effect).

Level Two — Dual Process (Intuitive Extrapolation)

CMM Level 2 — Representation from the environment; actions influenced by an internal world model.

These include systems that merge handcrafted traditional algorithms with intuition-based systems (see: "Coordinating Intuition and Rational Intelligence"). Today's most advanced systems are in this class. Examples are DeepMind's AlphaGo and AlphaZero, which combine the traditional MCTS algorithm with a conventional deep learning network. These model-free systems are capable of a kind of abductive reasoning to build their internal world models. Tree search is effectively a systematic way to perform experiments on an internal world model. These world models are non-reflective and opaque; it is at the next level where a causal world model is generated. The rational part of such a system is programmed at Level Zero.

Level Three — Interventional (Intuitive Causal Reasoning)

DL CMM Level 3 — Predictions originate from the world model; representation driven by interaction.

This is what DARPA describes as "contextual adaptation", and it is the second rung in the causality ladder that Judea Pearl describes. Embodied and interactive learning are essential at this level. These systems employ abductive reasoning to build internal models of reality. Interventional systems learn by interacting with the world; they employ Pearl's "do-calculus" to refine their world models. The distinction from the previous maturity level is that here the models are explicit (but not necessarily transparent). The process of interacting with an internal world model (i.e., a mental model) is sample efficient. Once a more abstract model is created that represents the causality of the real world, such a system is able to imagine the cause and effect of its actions prior to actual execution. This abstract world model can also be introspective (i.e., a reflective world model). This "what-if" capability motivates the need for an inside-out architecture.
That is, it is important to notice the inversion of the cognitive process (the black dot in the diagram signifies the starting point). Achieving Level 3 bridges the semantic gap between sub-symbolic and symbolic systems. Not only does this lead to an explosion of applications, it leads to truly autonomous cognition.

Level Four — Counterfactual (Intuitive Ingenuity)

DL CMM Level 4 — The world model includes a representation of self and goals.

This is the third and final rung in Pearl's causality ladder. Humans are capable of imagining a world and performing thought experiments (i.e., Gedankenexperiment) to create higher and more precise mental world models. What if a cause were removed from a world model; how would that world then behave? I would describe this as intuitive ingenuity: the ability to explore the world and invent new tools and models that more efficiently transform or predict the environment. Counterfactual systems answer "Why?" Judea Pearl argues that it is this level of capability that differentiates Homo sapiens from the rest of the animal kingdom. At this level, the Narrative Self emerges. Here's a nice TEDx talk describing imagination and knowledge.

Level Five — Conversational (Intuitive Empathy)

DL CMM Level 5 — Context includes reasoning about other selves.

This final level is what is needed to achieve what Brenden Lake describes as intuitive psychology, and what Michael Graziano describes in his Attention Schema Theory. I describe it as Conversational Cognition. Perhaps Minsky's Society of Mind or compositional game-theoretic models are required to achieve this maturity level. At this level, we go beyond learning world models through an individual's imagination; world models are learned through the interaction of many conversations.

This new capability model is more functional than my previous, more structural proposal. The problem with a structural definition is that it is unclear what kinds of capabilities each new structure enables. Furthermore, it is also unknown precisely what kinds of cognitive structures are required to arrive at a higher level of cognition.

It's interesting to note that many companies that brand themselves as having "Artificial Intelligence" are at only Level 0 in this capability model. Firms that do data science employ only Level 1 tooling. Firms like Google have deployed Level 2 capabilities such as foreign-language translators and have furthermore demonstrated sophisticated game-playing research projects (see: AlphaGo).

Our own human cognitive capability is our most reliable guide to achieving artificial cognitive systems. In computer science, Chomsky's hierarchy is a guide toward more complex computational machinery. Unfortunately, Chomsky's hierarchy doesn't have any resolution beyond the hybrid level (Level 2). Turing completeness is a necessary requirement for advanced cognition, yet despite the universality of Turing machines, it remains unknown what will be needed to achieve Levels 3 through 5. Despite this unknown, this maturity model is useful in that it expresses the capability that is needed at higher levels. It should therefore be a good enough guide for understanding how far or how near civilization is to achieving an artificially "human complete" system. It is important, however, to be aware of the pace of progress. The first kind of Intuitive-Rational Hybrid system (Level 2) was demonstrated effectively in 2015 (see: "Sputnik moment").
Deep Learning systems were discovered in 2012, and self-play GANs were discovered in 2014. In 2018, we are now seeing signs of Interventional systems (i.e., Level 3) in the form of what is known as "Relational Deep Learning", or under its easy-to-overlook name, "Graph Networks". This is why it's a very exciting time for Deep Learning research. The breakthroughs are going to be extremely fast and furious! What's interesting, though, is how the nature of information changes as you go up the capability maturity ladder.

A panel discussion with Judea Pearl and other DL experts discussing the causality ladder (Levels 1, 3, and 4)

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution.
https://medium.com/intuitionmachine/an-advanced-capability-maturity-level-for-artificial-general-intelligence-b300dafaca3f
['Carlos E. Perez']
2019-10-07 17:03:21.300000+00:00
['Deep Learning', 'Artificial Intelligence', 'Machine Learning']
I Was In Love With A Narcissist:
After my text to him, I shut my phone off to hide. My life felt suddenly over. "This is good for you," I reasoned aloud. "You did the right thing." Words I tossed into the ether with the cadence of Carol Brady. I imagined her talking Alice off the orange linoleum ledge after leaving Sam The Butcher.

I was alone in my apartment — heartbroken but safe — wrapped in Colgate and coffee-stained terrycloth. My house robe was now a makeshift blanket providing comfort as I sat on the kitchen tile. I wanted my own Carol Brady.

I pulled myself up from the floor, grabbed toilet paper from the near-empty roll in the bathroom, and used it as tissue. The puffy bags framing my bloodshot eyes were fun. So this is what a more pathetic version of myself looks like. The dewy shadows above my kitchen sink were waving for my attention. Great. A new day — whether I wanted one or not.

But instead of ignoring the dirty dishes (mocking my apathy) and collapsing into the sofa with Netflix and detachment, I did something different. I stood in the mess, pulled up the blinds, and started cleaning. My knees were weak (a breakup staple) but my heart felt different. Although freshly disabled, it was beating in an unfamiliar way. Its rhythm was heavy, supplying natural adrenaline from feeling distraught — but there was no shame. I knew right then that I had turned a corner.

Instead of my usual long-winded text messages to him explaining how he hurt me and why I deserved better, my text this time was simple: "I can't do this anymore." Ending a five-year relationship via text message isn't ideal for everyone — but safely leaving an abusive relationship with a narcissist is.

After a while, my wastebasket was overflowing and I exhaled at the irony, picking up pieces of my feelings. While rinsing my hands in the bathroom sink I lifted my head, peered into the reds of my eyes, and had an epiphany: this heartache was completely avoidable. If only I had followed the signs instead of running from them.
https://medium.com/narrative/i-was-in-love-with-a-narcissist-406001dd4339
['Christine Macdonald']
2020-03-09 17:22:56.457000+00:00
['Relationships', 'Mental Health', 'Dating', 'Love', 'Life']
Psychology: The New Liberal Frontier
by Anthony Ghosn

Providing more resources to inner-city children does more harm than good. Or at least that is what Harvard psychologist Doctor Richard Cabot's Cambridge-Somerville Experiment seemed to suggest when it concluded that "the general impact of treatment appeared to have been damaging." This study was widely cited as evidence against the proliferation of better services for underprivileged youths in the United States. It was later discovered that the experiment, which purported to provide students with counseling, did not employ any trained psychotherapists or professional counselors. What is more, its author, Doctor Richard Cabot, was a well-known transcendentalist, an ardent believer in self-reliance, and Ralph Waldo Emerson's close friend.

If you accept that an author's political inclinations can bias his or her research, does it not concern you that 72% of college professors are liberal compared to 15% who identify as conservative? Not to mention the fact that the most liberal faculties tend to be found in humanities and social science departments.

Not only are the authors liberal; the actual subjects being tested are predominantly liberal. According to one article, "68% of research subjects in a sample of hundreds of studies in leading psychology journals came from the United States, and 96% from Western industrialized nations." Moreover, "67% were undergraduates studying psychology." The set of people being used for experimentation is highly skewed toward Western college-level students. This sampling bias does more than simply skew for demographic factors like age, education level, and language; it skews for political views. A 2011 survey of American college freshmen conducted by UCLA found that their "political views [are] decidedly more liberal" than they have been in the past. According to the study, in 2011, 7.6% more college students identified themselves as liberals than as conservatives.

What does this suggest about narratives on human nature that have been proposed and supported by respected universities — particularly Stanford? Stanford has released a number of psychological studies that have dramatically redefined the way that intellectuals perceive human psychology. Perhaps most famously, Stanford's Philip Zimbardo published his results from the "Stanford Prison Experiment". This groundbreaking piece of work, which was released amidst conversations concerning the guilt and responsibility of those following orders, seemed to suggest that situational pressures could lead any human into evil actions. Situational attribution, a central tenet of liberal ideology, is the notion that actions can be predicted based on situational pressures and factors. This experiment has been widely cited in support of policies like commander responsibility that have had a significant effect on American legal theory. What is not very well known is that the Stanford Prison Experiment had only 24 male subjects, all Stanford undergraduates. Do we believe that 24 Stanford men are a sufficiently comprehensive sample from which to extract conclusions about human nature? What is perhaps even more concerning is that this study is just a highly salient example of the incredible amount of research Stanford has produced to back social psychology. At its core, social psychology is a rejection of praxeology — the perspective that human beings are their own operators.
Given that those coming up with the hypotheses to be tested are predominantly liberal, and those being tested are disproportionately liberal, should we not be more skeptical of their findings? Psychology departments' social psychologists promote ideas that contradict the notions of individualism, free will, and praxeology which underpin conservative perspectives on the world. Would a liberal embrace a study conducted by a conservative professor on twenty-five conservative students that substantiates a conservative claim? The sample sizes are far too skewed for them to be used as inferential data about widespread human nature and how society works.

The question remains: is Stanford's psychology department promoting liberal theories? Does Stanford have a responsibility, from the point of view of intellectual integrity, to ensure parity between liberal and conservative publications? I have taken psychology classes with very well-respected professors who have, in all seriousness, claimed to work on the President's political campaign! Stanford is undoubtedly a liberal school, but there is something particularly concerning about shrouding political theory in empirical research. No one who sees the data can deny that there is a prevalent bias toward liberal views in both the hypotheses and the execution of experimental psychology. Conservatives widely acknowledge that academia has a significant liberal bent. In the case of psychology, however, the implications of politically motivated findings are highly concerning. By supporting liberal perspectives with "research" and "evidence", schools are implicitly substantiating what is ultimately a point of view on human nature. Conservatives, who believe that humans are capable of making their own decisions independent of their surroundings, are said to contradict what the "research shows", but the research seems to be no more unbiased than an opinion in this case.
https://medium.com/stanfordreview/psychology-the-new-liberal-frontier-ba2dd411e49d
[]
2016-12-11 00:27:43.823000+00:00
['Psychology', 'Politics']
Homophobia’s Gonna Cost You!
In the study Polarized Progress: Social Acceptance of LGBT People in 141 Countries, researchers combined the results of 11 cross-national, global, and regional surveys to develop a Global Acceptance Index (GAI) score for social and legal protections for LGBT people in 141 countries.

Perhaps surprisingly, LGBT acceptance has generally increased! Since 1980, 80 (of 141) countries have seen increased GAI scores while 46 countries have seen a decline. Only 15 countries saw no change.

However, LGBT acceptance has become more polarized. Countries with a history of acceptance have become even more accepting. Countries that were less tolerant have become even less so. Many Western democracies — such as Canada, Australia, New Zealand, and Argentina — already had high GAI scores and saw modest increases. That stands in contrast to much of Western Europe, and the standout Latin American country Uruguay, which saw the greatest improvements in GAI scores. Conversely, intolerant countries in North, West, and East Africa and East, Central, and South Asia saw even further declines in acceptance. And highly intolerant countries, such as Egypt, Saudi Arabia, Azerbaijan, Indonesia, and Bangladesh, saw precipitous declines in their GAI scores.
https://emjaymurphee.medium.com/homophobias-gonna-cost-you-1a12912f2e44
['M. J. Murphy']
2020-01-25 04:54:44.606000+00:00
['Gay', 'Development', 'LGBTQ', 'Law', 'Social Justice']
Getting Started With WebAssembly, Docker, and Alpine
Building WASM Files Inside Docker

Now that we have our Dockerfile set up, let's build an image from it:

docker build -t wasm .

Let's finally get started with the WASM app written in Go. Create a file named main.go. This will be our main application (aka front end) that will later be compiled to the WASM file:

We defined a few functions in Go syntax (they look like vanilla JS). WASM is a native machine assembly language for browsers, so remember that some Go functions may not be compatible: due to the sandboxed nature of web browsers, functions that attempt to interact with the host machine may not work.

Now, let's create an index.html file that can trigger these functions and run our program. Here, we have an HTML button element whose onClick handler calls a function. This add() function will not exist until we compile the .wasm file.

Let's review where the compiling happens in our Dockerfile:

RUN GOOS=js GOARCH=wasm go build -o main.wasm

The RUN command compiles to .wasm before we load the server. GOOS=js and GOARCH=wasm are required to build a .wasm file. Using go build with the -o argument to set an output file name, our Go application is compiled to WASM.

Let's run a container in detached mode using the -d argument:

docker run -d -p 8180:8080 wasm

As you can see, I am using port 8180 in front of port 8080. This maps port 8180 on the host machine to port 8080 inside the container.
https://medium.com/better-programming/getting-started-with-wasm-webassembly-docker-alpine-b8652f82ce5e
['Steven Rescigno']
2020-10-22 15:58:44.908000+00:00
['Webassembly', 'Docker', 'Startup', 'Programming', 'Wasm']
Programming Essentials in Python for a Data Scientist
Files

File handling is an important part of any application. Python has several functions for creating, reading, updating, and deleting files.

File Handling

The key function for working with files in Python is the open() function. The open() function takes two parameters: filename and mode. It returns a file object, also called a handle, which is used to read or modify the file accordingly.

>>> f = open("test.txt") # open file in current directory
>>> f = open("C:/Python38/README.txt") # specifying full path

We can specify the mode while opening a file. In mode, we specify whether we want to read r, write w, or append a to the file. We can also specify if we want to open the file in text mode or binary mode. The default is reading in text mode; in this mode, we get strings when reading from the file. Binary mode, on the other hand, returns bytes and is the mode to be used when dealing with non-text files like images or executable files.

f = open("test.txt") # equivalent to 'r' or 'rt' - t: text mode
f = open("test.txt",'w') # write in text mode
f = open("img.bmp",'r+b') # read and write in binary mode

When working with files in text mode, it is highly recommended to specify the encoding type, e.g. utf-8. The default encoding is cp1252 on Windows but utf-8 on Linux.

>>> f = open("test.txt", mode='r', encoding='utf-8')

Closing the File

When we are done performing operations on the file, we need to properly close it. Closing a file frees up the resources that were tied to it. It is done using the close() method available in Python.

>>> f = open("test.txt", encoding='utf-8')
>>> # perform file operations
>>> f.close()

This method is not entirely safe: if an exception occurs while we are performing some operation on the file, the code exits without closing it. A safer way is to use a try…finally (exception handling) block.

>>> try:
>>>     f = open("test.txt", encoding='utf-8')
>>>     # perform file operations
>>> finally:
>>>     f.close()

This way, we guarantee that the file is properly closed even if an exception is raised that causes program flow to stop.

Create a New File

In order to write to a file in Python, we need to open it in write w, append a, or exclusive creation x mode. We need to be careful with the w mode, as it will overwrite the file if it already exists, erasing all the previous data. Writing a string or sequence of bytes (for binary files) is done using the write() method. This method returns the number of characters written to the file.

>>> f = open("test.txt", "a")
>>> f.write("my first file\n")
>>> f.write("This file\n")
>>> f.write("contains three lines\n")
>>> f.close()

The above example opens the file "test.txt" and appends content to it.

Read Only Parts of the File

To read a file in Python, we must open it in read r mode. We can read the test.txt file we wrote in the above section in the following way:

>>> f = open("test.txt",'r',encoding='utf-8')
>>> f.read(4) # read the first 4 characters
'This'
>>> f.read(4) # read the next 4 characters
' is '
>>> f.read() # read the rest, till end of file
'my first file\nThis file\ncontains three lines\n'
>>> f.read() # further reading returns an empty string
''

Delete a File

To delete a file, you must import the os module and run its os.remove() function:

>>> import os
>>> os.remove("test.txt")

To avoid getting an error, you might want to check if the file exists before you try to delete it:
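A minimal sketch of such a check (the printed message is an illustrative choice):

>>> import os
>>> if os.path.exists("test.txt"):
>>>     os.remove("test.txt")
>>> else:
>>>     print("The file does not exist")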
https://medium.com/insights-school/programming-essentials-in-python-for-a-data-scientist-3676d0c5df1c
['S. Khan']
2020-07-02 11:12:08.263000+00:00
['Python3', 'Tutorial', 'Programming', 'Python', 'Data Science']
How to Ensure Your Child Develops a Debilitating Phobia, in 8 Simple Steps
Set the scene. You, the parent, are enjoying a sleepy Sunday morning, nursing a lukewarm beverage on the sofa. Your tiny tot toddles towards you, eyes a-sparkle, babbling about ducks, bunnies, or suchlike. It's a perfect moment before it all falls apart. Your eyes are drawn, ineluctably, to a skittering black beast on your precious angel's pristine pajamas. Who knows whence it came — perhaps from the very maw of hell, or perhaps from under the coffee table. Either way, it's the spider of your nightmares. And before long your child's too, if you follow these steps correctly!

1. First, you scream. Don't hold back. If you're having difficulty achieving the correct volume (loud) and pitch (ear-splitting), visualization exercises may assist. Imagine you've gone to rouse your adorable babe from a nap, and instead discovered them transformed into a bristly, beady-eyed arachnid. Your scream should be invested with a similar sense of Kafkaesque horror.

2. Your child may wail. All to the good. Don't rush to comfort them. Instead, continue to demonstrate your inability to function in the face of the advancing monster. A dramatic coffee spill will do nicely.

3. Next, you must engage the demon in battle. Your child is the battleground. The spoils, their sanity.

4. Keep that crusading zeal at the forefront of your mind as you make your trembling way towards your toddler, eyes brimming with murderous purpose.

5. Seize any incongruous instrument in the vicinity with which you can wage war. This may be a tissue, a puzzle piece, a stuffed animal; the more absurd, the better! Bat your chosen weapon ineffectually in the vague direction of your hideous foe. The aim here is not to harm spider or child, but to cause confusion and bewilderment at your incompetence in this crisis.

6. The dread arachnid, now as traumatized as the child, will be crawling in frantic circles over your precious infant. Take one last desperate swipe and it will fall unharmed at your feet, before vanishing into the spidery hinterlands of your home.

7. Claw off your child's clothes, lest the enemy has found shelter therein. You may both be sobbing, but safety first; divest them fully before clasping them to your bosom.

8. The final step is crucial. Crying child in tow, flee the room you started in. Find a defensible space (a closet works perfectly) and slam the door. This closet is now your home. Forget that you ever possessed a kitchen. The era in your life of frivolities, like toilets and beds, is over forever. That land belongs to the spider now. Crouch in your sanctuary and clutch your distraught offspring close. Try humming a gentle lullaby as you rock back and forth. Keep your eyes trained at all times on that thin strip of light beneath the door. 'Ware the approaching shadow. Never forget about the enemy beyond.

Congratulations! You've succeeded in scarring your child for life.
https://medium.com/the-haven/how-to-ensure-your-child-develops-a-debilitating-phobia-in-8-simple-steps-f0ec7a5cc4ad
['Caitlin Brown']
2020-09-14 17:06:02.992000+00:00
['Satire', 'Parenting', 'Humor', 'Parenthood', 'Psychology']
The Tell-Tale Heart Transplant
The Tell-Tale Heart Transplant

A review of Chip Jones's stunning new book, The Organ Thieves: The Shocking Story of the First Heart Transplant in the Segregated South

You get a phone call that your brother is in the hospital. You rush to go to him, get a couple more scrambled messages about how he had a head injury at work and is now in a recovery room. When you get the next update, he is dead. When you take his body to the funeral home, you get another batch of shocking news: his heart and kidneys are missing. What do you think? What do you do?

Now, to add on to all of that, you are a Black man in segregated Virginia in 1967. What do you think? What do you do?

"They took my brother's heart!" the man on the other end of the line exclaimed in horror.

This is the engine behind Chip Jones's The Organ Thieves: The Shocking Story of the First Heart Transplant in the Segregated South. The Black man is William Tucker. His brother, Bruce Tucker, went into the Medical College of Virginia (MCV) with a head injury, and his heart was used in the 15th heart transplant in world history. He was not a listed donor. His family did not give consent. The reader must wonder (I did) whether his head injury was given adequate treatment, after seeing what happened next.

The Egyptian Building (left) and Dooley Hospital (right), where a lot of the events of The Organ Thieves took place

Jones's tome is part history of MCV, part history of transplantation, part courtroom drama, and a little dash of archeological excavation. Its flexibility makes the story unique, and the depth of research is simply astounding. It's been compared to The Immortal Life of Henrietta Lacks, now a classic that I am embarrassed to say I have not yet read. However, I am very familiar with it (Jones even references the book several times within his text), and to my knowledge Bruce Tucker's importance to the ethics of heart transplantation and the concept of "brain death" seems just as important as Henrietta Lacks' posthumous contribution to cancer research. The similarities in the two stories are striking, to say the least.

Jones is clear in his message about what put Bruce Tucker's family in such a vulnerable situation: Bruce's race and socioeconomic circumstances made him vulnerable to organ removal, demonstrated by the simple fact that doctors declared him derelict without making much of an effort to find his family. But the racial component, though crucially important, does not take up nearly all the oxygen in Jones's figurative room. The science of heart transplants, of death, and where those connect forms the crux of the intellectual questioning in his account. Does "brain death" count as death, or does death depend on the cessation of breathing and heartbeat? Had surgeons followed the Virginia law of the time, they would have had to wait 24 hours after Bruce Tucker's heart stopped beating to count him surely dead. That would have saved his brother William a lot of heartache, but it would have made a heart transplant impossible.

As Jones relays in the book, the authors of the 2015 text Transplantation Ethics wrote:

The public policy discussion of how to define death began in earnest in the late 1960s, not long before surgeons were confronted by Bruce Tucker's case … Now, a half century later, we are still unclear about exactly what it means to be dead.

It's the paradox of modern science: We can put a dead man's heart into a living man's body, but we don't know exactly what it means to be dead.
If these questions interest you, I implore you to read The Organ Thieves and think deeply about the ethical conundrums presented. They are important to how we define life, who matters, and what is most important when it comes to saving lives with the help of modern science — science which cannot answer every important question. I borrowed a copy of The Organ Thieves from my local library. Borrow it, request it, or consider donating to your library today.
https://medium.com/park-recommendations/the-tell-tale-heart-transplant-b6348e33d103
['Jason Park']
2020-09-16 13:31:05.071000+00:00
['History', 'Reading', 'Medicine', 'Racism', 'Books']
New England’s Series A Deals — Part III
Recently, The Buzz profiled a number of market maps of the New England venture/startup ecosystem, including the following: These market maps continue to be very positively received by our subscribers, so we're continuing to deliver. This week, we have the final installment of the New England Series A Deals articles. Here's the list:
https://medium.com/the-startup-buzz/new-englands-series-a-deals-part-iii-8f597cbce0f0
['Matt Snow']
2020-11-14 20:02:17.124000+00:00
['Startup', 'New England', 'Venture Capital', 'Fundraising', 'Technology']
Writers’ Armour!
Hello again! 😊

Do you know when I first wrote online? 2008. I was in ninth grade, and one of my friends told me about a free domain service provider called blogspot.com and a free publishing platform, blogger.com. He told me how it helped him create his first blog, how he wrote his first article, and how he made contacts through it. I was so inspired, as it was all very new to me. I immediately linked my Gmail account to Blogspot and started writing random thoughts, and even stuff like what success is in terms of the mathematical formula (a+b)(a-b)=a²-b²!! I was a school kid, ok?!

Fast forward to 2018, and we have WordPress and GoDaddy, which can help us with cPanel hosting and maintaining our blog. Follow a few simple steps and you'll be amazed at how easy it is to set up your first blog.

cPanel hosting: To create your domain name and make your website go live, you could create an account on GoDaddy.com. Remember to choose the option that gives you only cPanel hosting. It might charge you a nominal fee of INR 800 for an annual subscription. After a few verifications from their side, you will get your domain name in about a week.

WordPress setup: Once your domain name is ready, you can start adding products from your GoDaddy account. Add WordPress to your account and you're all set. Make sure to remember all your passwords and keep them safe. You can customize your WordPress account, install the necessary plugins, add themes, and begin writing on anything that you would like to.

Medium: Well, if you're skeptical about following the WordPress setup and getting a domain to make your blog go live, there's an easy but equally amazing alternative for you. Just create an account on Medium and start pitching your stories to publications (like maice) that give you a good platform to showcase your writing.

coffee-unlock the brain-write-repeat!

Tools for good writing: One of the most helpful apps on the market today is Grammarly. You can add it to Google Chrome as an extension and proofread your articles before posting them on your blog. To check for plagiarism, you could use any of the free plagiarism tools online, such as smallseotools.com. You can also install Evernote, which comes in very handy when you like to jot down notes while researching a topic before you start writing. I use Google's Keep application on my phone and make a list of ideas as and when they arise. You can add images and infographics to your blog by creating them in the Canva application, or link to free images from unsplash.com or shutterstock.com.

SEO: In layman's terms, Search Engine Optimisation (SEO) refers to increasing views on your website or blog by helping it rank higher in search results. To achieve this, you can make a list of keywords relevant to the content you want to write and use them as naturally as possible in your writing. Make sure not to stuff all your keywords in one place, and avoid repetition, because that will make your reader say bye.

The above-mentioned tools can be used once you get accustomed to writing. Make time for yourself every day and relax to bring out the writer in you. If coffee stimulates you, make yourself an espresso and just expresso! Immerse yourself in your thoughts, talk to your heart, make your mind instruct your hands to type, and start writing!

Until next time! 😊
https://medium.com/maice/writers-armour-4b243c2a91e1
['Mohitha B G']
2018-07-30 04:31:01.232000+00:00
['Tools', 'Getting Started', 'Writing', 'Blogging', 'How To Write']
Fairness and Bias in Artificial Intelligence
Biases in AI

Reporting Bias

Reporting bias occurs when the frequency of events, properties, and/or outcomes captured in a data set does not accurately reflect their real-world frequency. This bias can arise because people tend to focus on documenting circumstances that are unusual or especially memorable, assuming that the ordinary can "go without saying." @Google

Explaining Reporting Bias by Statistical Entropy

Case Study 1: A mobile camera or digital camera is used as a shot gun detector to detect changes in the speed of vehicles. In order to detect speed, the vehicle is detected first and then the amplitude of the image is calculated.

Reference Articles: [Optical Flow Estimation] http://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf

Case Study 2: A dashcam is fitted inside the vehicle and is used to detect mistakes in lane changing. In order to detect the motion, the vehicle is detected first and then the graph of the images in the video is evaluated.

Reference Articles: [Perceived Stress Questionnaire on Driver] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5981243/

1. Problem Explanation: Capture of Frequency of Events

Case Study 1: (Capture of Frequency of Events using a threshold of pixel values)

Vehicle Detection using OpenVINO

Vehicle Detection using Optical Flow Estimation and Bounding Boxes

In the above two example videos, the number of points inside each bounding box is evaluated over a set of frames. The same graph is used to show frame 1633 under a chosen frequency. The challenge lies in converting this graph of particle counts into usable statistics such as distance and speed, or even into parameters that show the direct influence of statistical entropy.

— How to Extract Parameters from the Exponential Graph

The example code below extracts the positive values from the points inside the bounding-box rectangle (a usage sketch follows at the end of this section):

import numpy as np

def get_lookback_frame(points, lookback_frames):
    # shift points by 1 frame and take the rolling minimum
    return points.shift(1).rolling(window=lookback_frames).min()

def get_long_signal(points, lookback_frame):
    # calculate the steadily increasing part of the evaluated graph
    long = (points > lookback_frame).astype(np.int64)
    return long

def filter_signal(points, lookback_signal):
    return points * lookback_signal

The example code below finds outliers in parameters such as speed:

# Outlier Detection
def get_return_lookback(points, lookback_frame):
    # evaluate log returns between the current and lookback values
    return np.log(points) - np.log(lookback_frame)

def get_signal_returns(signal, lookback_returns):
    # multiply the signal by the lookback returns
    return signal * lookback_returns

Log Returns of Each Threshold of Points Found from Tracker

The log returns match the statistical entropy used in this problem. Calculating residual errors from a linear regression fit will provide us with outliers in speed. To estimate the distance, the log return is a good enough metric, as it steadily decreases.

Case Study 2: (Capture of Frequency of Events using PSQs)

In the evaluation of PSQs, the same technique is applied. In this case, the count is more or less the same across the interval, which implies the table must be converted with a PCA-like transformation.
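To see how the Case Study 1 helpers above might chain together, here is a hypothetical usage sketch. The synthetic per-frame particle counts and the 30-frame lookback window are my assumptions for illustration, not values from the original pipeline.

import numpy as np
import pandas as pd

# Hypothetical per-frame particle counts inside one bounding box (synthetic data).
rng = np.random.default_rng(0)
points = pd.Series(rng.integers(50, 200, size=300).astype(float))

lookback = get_lookback_frame(points, lookback_frames=30)   # rolling floor of recent frames
signal = get_long_signal(points, lookback)                  # 1 while counts exceed that floor
filtered = filter_signal(points, signal)                    # counts kept only while the signal is on
returns = get_signal_returns(signal, get_return_lookback(points, lookback))

# Large log returns flag frames whose particle count jumps, i.e. candidate speed outliers.
print(returns.abs().nlargest(5))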
Automation Bias

Automation bias is a tendency to favor results generated by automated systems over those generated by non-automated systems, irrespective of the error rates of each. @Google

2. Problem Explanation: Video Inference

Case Study 1: (Shot gun detector)

Vehicle Detection using OpenVINO

Vehicle Detection using Optical Flow Estimation and Bounding Boxes

The above video demonstrates optical flow detection within the bounding-box images. The detected points are filtered over a threshold, which emits data to be sent through an analysis pipeline. The analyzed data is used to determine the statistical parameters of the video. So why does automation bias exist? Automation bias exists when a single piece of software with one original model is used to determine the statistical properties that help optimize the video characteristics.

Selection Bias

Selection bias occurs if a data set's examples are chosen in a way that is not reflective of their real-world distribution. Selection bias can take many different forms: @Google

3. Problem Explanation: Evaluating a Graph

Coverage bias: Data is not selected in a representative fashion. @Google

Case Study 1: (Shot Gun Detector)

[Correct]: Collect the order of appearance of nodes inside the shot gun detector by confidence intervals.

[Wrong]: Do a particle count from detected boxes over n frames without forming the order of appearance of bounding boxes.

[Correct]: Connect two adjacent frames using the order of appearance collected with the correct threshold.

[Wrong]: Do a particle count over frames by choosing the wrong threshold of pixels.

The particle count inside those bounding boxes is measured using optical flow phase-based methods. The data collected are not representative in themselves, but they are transformed into usable statistics.

import pandas as pd

viz = pd.DataFrame(columns=['vehicle', 'count', 'frame', 'nodes'])
frame = 0
for n, t in zip(nodes, threshold):
    frame += 1
    map_n = list(map(lambda k: str(k), n))
    for i, j in zip(n, t):
        viz = viz.append({'vehicle': i, 'count': j, 'frame': frame,
                          'nodes': "-".join(map_n)}, ignore_index=True)

Non-response Bias (or participation bias): Data ends up being unrepresentative due to participation gaps in the data-collection process. @Google

Case Study 1: (Shot Gun Detector)

Participation gaps can occur in vehicle detection because some bounding boxes may go undetected, either due to a Deep Learning error, which is very unlikely, or because, when vehicles are moving sequentially, the bounding boxes switch confidence intervals or confidence thresholds, changing their order of appearance in the detection. This introduces participation bias in the detection of a particular vehicle from a set of vehicles.

Code to Explain the Bounding Box Detection:

import cv2

def draw_boxes(out_write_npy, zone, frame, result, args, width, height):
    for box in result[0][0]:  # Output shape is 1x1x100x7
        conf = box[2]
        # comparison against the confidence threshold
        if conf >= args.pt:
            xmin = int(box[3] * width)
            ymin = int(box[4] * height)
            xmax = int(box[5] * width)
            ymax = int(box[6] * height)
            # draw the bounding-box rectangle
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), args.c, args.th)
    return frame

Code to show participation bias of bounding boxes:
https://medium.com/datadriveninvestor/fairness-and-bias-in-artificial-intelligence-c7fbfe880df
['Aswin Vijayakumar']
2020-12-27 15:34:33.921000+00:00
['Machine Learning', 'Artificial Intelligence', 'Learning And Development', 'Data Science', 'Fairness And Bias']
Why Non Biased AI Doesn’t Exist
Automating manual tasks to enable scale is no small feat. We know this. But when dealing with health data this is an even more daunting task, and to make sure we really are on top of our game, we talk often, and problematise, about how to manage the implications of transferring tasks from a human to a machine. Below is our approach at Grace Health. It's not the truth, but an insight into our effort to keep the dialogue sober and realistic.

With a vision to improve women's health across the world, we have our challenge cut out for us. By chatting with our automatic health assistant, women are able to track and further understand their period, get friendly notifications and predictions about their cycle, plus get answers to the most common health issues. Coming up, we will also provide access to medical assistance in the privacy of her own phone and, to connect the dots, products and services all the way to her doorstep (last month we launched pharmacy delivery for our users in Accra, Ghana). When reaching out to, and hoping to connect with, a global market of women, scaling is of the utmost importance. This is why utilising tech — in our case AI — to reach as many as possible is a no-brainer. Easy in theory, difficult in practice.

Now to our point. People in the West have a tendency to view their perspective as the right one and the way everyone should live, which of course is ignorant and wrong. However, there are perspectives and ideologies that markets could usefully adapt from each other, and our stance is that a more liberal approach to education and rights around sexual and reproductive health is one of them.

Let's get back to the title of this piece, "Why non-biased AI doesn't exist." So what even is 'non-biased'? The term non-biased literally means not biased: in short, neutral, as in not taking sides. Unbiased, in turn, means completely free from bias. To be unbiased, you have to be 100% fair; you can't have a favourite or opinions that would color or shape your judgment.

Artificial intelligence (AI), on the other hand, is an area of computer science that emphasises the creation of intelligent machines that work and react like humans, replicating behaviour such as problem solving, reasoning, perception, and planning. All of these are traits that humans hone over time, largely through our experiences and notions. Also known as bias.

Machines can be taught to act and react like humans only if they have abundant information about the world. Artificial intelligence models must be given access to objects, categories, properties, and the relations between all of them to implement knowledge engineering. Here's where I argue that the bias slips in.

IBM states on their website: "AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem. But we believe that bias can be tamed and that the AI systems that will tackle bias will be the most successful." And we agree, with the emphasis on tamed.

When a service or machine is developed by a human, you automatically transfer not only biases and prejudice, but also your whole value base and perception of the world. What is right? To whom? When? Where? When you automate human behaviour, it is almost impossible not to teach it to mimic human behaviour and deduction principles, which then also implies bias. Well, is bias always detrimental? We don't necessarily think so.
Bias and predisposed opinions skew the way we make decisions, but they also give us a framework for how to make sense of the world. We're not necessarily trying to answer the ethical question but rather shine a light on the complexity and potential of using AI to replicate humans, and why the discourse is needed from time to time. We base our company and product on three key ideas:

Every person has the right to make informed decisions for herself.
Every person has the right to love who they want to.
Rape or violence is never OK. It's criminal and should be reported.
https://medium.com/grace-health-insights/why-non-biased-ai-doesnt-exist-ed4fe90442fb
['Therese Mannheimer']
2020-09-15 07:51:19.935000+00:00
['AI', 'Data', 'Womens Health', 'Bias']
Machine Learning — Diagnosing faults on vehicle trackers with a CNN
Photo by Omer Rana on Unsplash

Initial Considerations

We also published another article with almost the same title. In that article, I used some specific knowledge of the tracking modules' operation to extract features manually; in this story, I aim to diagnose faults on the same dataset without any manual feature extraction. For those who did not read the first article, the introduction and problem description are the same. However, all the development is not.

Introduction

Many transport and logistics companies use modules to track their vehicles, and it is not unusual that some modules do not work as expected. The tracker might need to be discarded, wrong data from the vehicle can be collected, or technical support can be sent a long distance away to analyze the problem, for example. Thus, it is essential to conduct a remote fault diagnosis on those modules, since failures can cause financial losses. Since these modules continuously send data to a database, the aim of this research is to use this data to diagnose faults on the modules. It was necessary to develop a methodology for pre-processing the collected data sent by the vehicle modules. After this, machine learning techniques were applied to create models, which were analyzed and compared.

However, what is a fault? A fault is an unallowed deviation of at least one property or a specific parameter of the system [1]. Moreover, there are three steps to diagnosing faults:

Fault Detection: the most basic task of fault diagnosis; it is used to check for malfunction or failure in the system and to determine when the fault occurs.
Fault Isolation: serves to detect the location of the failure, i.e., which component is defective.
Fault Identification: used to determine the type, format, and size of the failure.

Generally, the fault diagnosis process consists only of Fault Detection and Isolation (FDI). That does not negate the utility of fault identification; however, this step may not be essential if no reconfiguration action is involved [2].

Problem description

First of all, to perform any machine learning solution, data is needed. For this case study, a Brazilian logistics company provided data sent by many trackers, so all the data used is real. The modules installed in the vehicles send data during their entire period of operation. The company provided 12586 registries. A registry is composed of all the data transmitted by one module in a one-day period. On average, each module sends 1116.67 points per day. Each point has eight attributes:

Battery Voltage: float value for the voltage of the vehicle battery;
Longitude: float value for the vehicle longitude;
Latitude: float value for the vehicle latitude;
Ignition: boolean value indicating whether the ignition is switched on;
Odometer: float value for the vehicle odometer;
GPS Signal: boolean value indicating whether the GPS signal is valid;
Velocity: float value for the vehicle speed;
Memory Index: integer value for the memory position at which the point is saved in the module.
Since the modules send data regularly but at different frequencies depending on the module, each registry has a different size: the size of each registry is the number of points sent multiplied by 8 (the number of attributes). The aim is to identify which fault, if any, a registry has. This makes it a multi-class classification problem with eight classes: one class for the registries without faults, and seven possible faults, listed below:

Fault 1: wrong pulse set up;
Fault 2: odometer locked;
Fault 3: GPS locked;
Fault 4: ignition wire released;
Fault 5: accelerometer defective;
Fault 6: module buffer with a problem;
Fault 7: GPS with jumps in location.

Some of these faults have very few occurrences. The faults that accounted for less than 3% of the data were all relabeled as "Others". The quantity of data for each failure can be seen in the table below.

Development

We aimed to detect and isolate the faults; however, while just detecting faults is a binary problem, detecting and isolating them is a multi-class problem, so detection alone should be much easier. Looking at it from the company's side, when a module presents faults it almost always needs to be replaced, so it does not matter much which fault it is; either way, the module must be replaced. We therefore propose two models here: one to detect faults, and another to detect and isolate them.

Data pre-processing

As we aim to make only minor changes to the dataset, without using any specific knowledge about the system's operation, the data pre-processing is reasonably straightforward. We chose to keep all the data, not performing any outlier removal: an outlier is usually a strong indication that there is a fault, and if we threw all the outliers away, a lot of faulty registries would be gone. So the only thing we do is normalization, using the RobustScaler. A traditional normalization can cause the data to be suppressed by some high value, so the RobustScaler is used instead: this scaler uses the first and third quartiles as constraints rather than the maximum and minimum values, which makes the normalization more robust and preserves the outliers.

Considering the registries have variable lengths, it was not possible to construct the ML models directly with this dataset, so it was necessary to set a default size for all registries. Knowing that the highest number of points present in a registry is 6410, the mean number of points is 1116.67, and the standard deviation is 1155.75, the registries with fewer points were padded with zeros until they had the same amount of data as the largest registry. Different approaches to setting the default size could be used, such as truncating the registries that are above the average and zero-padding those below it, or truncating all registries to the size of the smallest one. However, these approaches were tested and considered unfeasible because they made the CNNs diverge.

CNN architecture

We built a model based on VGG-16, using 4 convolutional blocks, each structured as Conv -> Conv -> Pooling, followed by two fully connected layers. The idea behind the convolutions is to let the filters select and transform the attributes. To do that, we arranged each registry with shape (8, 6410, 1); this way, the CNN treats the data like an image.
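As a reference, here is a minimal sketch of how this pre-processing and VGG-style network could be written with scikit-learn and Keras. It is not my exact code (that is on my GitHub, linked below); the filter counts, pooling sizes, and the registries variable (a list of (n_points, 8) arrays) are assumptions for illustration.

import numpy as np
from sklearn.preprocessing import RobustScaler
from tensorflow.keras import layers, models, optimizers

MAX_POINTS = 6410  # largest registry in the dataset
N_ATTRS = 8        # attributes per point

def preprocess(registries):
    # registries: list of (n_points, 8) float arrays, one per registry
    stacked = np.vstack(registries)
    scaler = RobustScaler().fit(stacked)  # quartile-based, preserves outliers
    out = np.zeros((len(registries), N_ATTRS, MAX_POINTS, 1), dtype=np.float32)
    for i, reg in enumerate(registries):
        out[i, :, :len(reg), 0] = scaler.transform(reg).T  # zero-pad on the right
    return out

def build_cnn(n_classes):
    model = models.Sequential()
    model.add(layers.Input(shape=(N_ATTRS, MAX_POINTS, 1)))
    for filters in (32, 64, 128, 256):  # filter counts are assumptions
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu",
                                kernel_initializer="he_uniform"))
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu",
                                kernel_initializer="he_uniform"))
        model.add(layers.MaxPooling2D(pool_size=(1, 4)))  # pool along the time axis only
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

Training then reduces to something like model.fit(X, y, batch_size=1, epochs=...), matching the batch size of 1 reported below.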
For a deeper look at the architecture, check my GitHub (the link is in the introduction and at the conclusion). The batch size used was 1, the learning rate 0.0001, and the initializer he_uniform. The batch size could be higher; however, due to hardware constraints, even a batch size of 3 was too high for my machine.

Model Training

Two models were trained, one for fault detection and another for fault detection and isolation. Thus, two structures of the same dataset were defined, described as follows:

Structure 1: used in the experiments to detect whether the system had faults or not. It contains 12586 records, of which 7480 are flawed and 5106 are flawless.
Structure 2: used in the experiments to detect and isolate faults. It contains 5106 flawless records, 2170 with failure 2, 1573 with failure 3, 1702 with failure 4, and 2035 with the "Others" failure, totaling 12586 records.

All the models were written in Python 3.6 with scikit-learn 0.20.2 and ran on Ubuntu 18.04 with an Intel i7-7700HQ, 16 GB RAM, and a GTX 1050Ti. To run the algorithms, the datasets were separated into a training and a test set. For every experiment, 20% of the registries were used for testing and 80% for training.

Model Evaluation

The metrics need adequate interpretation. In this case study, a false positive implies sending a technician to perform unnecessary maintenance, while a false negative implies receiving a complaint about a malfunctioning module that went undetected. From a financial point of view, the recall therefore has greater relevance than the precision. With 88.42% precision and 87.96% recall, the confusion matrix for fault detection can be seen below. With 54.98% precision and 52.57% recall, the confusion matrix for fault detection and isolation can be seen below. For both models, there is high variance and high bias. The models could achieve better results with a deeper architecture, different hyperparameters, or a more extensive data pre-processing.

Conclusions

As proposed, these models can detect and isolate faults on vehicle fleet tracking modules. However, the evaluation metrics are not good enough to put the models into use. In my other article, which uses specific knowledge of the system, the metrics were higher, and that model could be implemented at the company to diagnose faults on their modules remotely. The limitations of the employed methods include the inability of the models to discover faults that have not been mapped: any fault that has not been learned would be misclassified as one of the known ones.

References

[1] D. van Schrick, "Remarks on terminology in the field of supervision, fault detection and diagnosis," IFAC Proceedings Volumes, vol. 30, no. 18, pp. 959–964, 1997.
[2] D. Wang and W. T. Peter, "Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method," Mechanical Systems and Signal Processing, vol. 56, pp. 213–229, 2015.
https://medium.com/pgss-consultants/machine-learning-diagnosing-faults-on-vehicle-trackers-with-a-cnn-ad6949bbe18b
['Pgss Consultants']
2020-05-01 22:54:15.028000+00:00
['Machine Learning', 'Artificial Intelligence', 'Tracking', 'Fault Diagnosis', 'Data Science']
The Salton Sea — A Disappearing Environmental Asset
What was the impact of this new body of water? Located well below sea level at -225 feet and without a natural outflow, the sea filled and expanded. The site grew to over 800 square miles, although the depth of the Salton Sea was relatively shallow at about 30–50 feet. Because a major source of inflow was agricultural runoff, the amount of dissolved salt in the Sea increased. Several things happened as a result of the expanding Salton Sea. First, the Sea was designated as a repository for agricultural drainage. This drainage eased the waterlogging of the land in the Imperial and Coachella Valleys, and the improved land increased agricultural output. Later, the State of California began to stock the Salton Sea with fish in hopes that sport fishing would take off. The Salton Sea became a noted spot for anglers, and this activity brought other tourist activities to the area surrounding the Sea.

Photo by Jeremy Bishop on Unsplash

What were the impacts of the Salton Sea on area biodiversity? The stocking of fish in the sea, the interest from anglers, and the make-up of the sea made it a haven for several species of endangered fish and birds. Additionally, California was impacted by receding wetlands in many areas because of a heavy demand for residential water. The Salton Sea became a mainstay stopover for birds flying along their migratory pathways. In fact, the Salton Sea National Refuge indicates that 380 species of birds can be spotted at the Sea, which is higher than at many other national refuge areas. The Salton Sea area has been compared to the Texas Gulf Coast in the amount of biodiversity and economic impact it has on surrounding areas. Humans may overlook the value of biodiversity in their everyday lives, but the Salton Sea provided a haven for many species while serving the economic and agricultural demands of humans. Attempting to change people's values and attitudes towards conservation or environmentalism may be difficult, but some solutions require little action at all.
https://medium.com/environmental-intelligence/the-salton-sea-formed-by-the-environment-maintained-by-human-activity-and-now-a-disappearing-34fd9971b3d4
["Thomas O'Grady"]
2020-10-15 04:58:39.481000+00:00
['Biodiversity', 'Public Health', 'Environmental Issues', 'Ecology', 'Environment']
Boost Your Productivity As A Programmer With These Tools!
Programming is a job that requires you to be really productive. If you do not use the correct tools for productivity, there is a good chance that you will end up wasting a lot of time, which could have been spent on some other work. In this article, we will be discussing the top tools that you can use to increase your productivity as a developer.

1. Trello

By far the best todo application yet created, Trello is an amazing tool for all programmers to organize their workflow. If you use it correctly, then Trello can actually boost your productivity a lot. Have you ever felt the need for a boost every morning so that you can keep yourself motivated? Well, I sure have. Trello has a good design and overall satisfies me with its amazing and intuitive UI. I not only organize my work and get a list of things to work on, but I also get that extra motivation which I always like to have. I hope that you find this tool really useful. You can check it out here.

2. Firebase

If you have ever struggled with setting up the back end of a website because you are a front-end developer, Firebase is ideal for you. I have found myself using it all the time in my front-end projects because it is so powerful! Firebase offers several different features; some of them are:

Easy authentication
Deploying your website
Cloud functions
Machine learning
Database

to name a few. And also, guess what? It is all free! At least up to a certain limit, although their free tier is quite generous, and you do not need to worry about it until you hit a very high benchmark. Firebase is a boon to programmers, and I highly recommend checking out our clone blogs where we explain it in detail here, and if you are interested, then make sure to check out our authentication with Firebase blog here.

3. VS Code and its extensions

One of the most popular and loved text editors, VS Code is one of the best tools for programmers. I highly recommend that any programmer use VS Code, regardless of whether they use Java, C++, Python, JavaScript, or any other language. It is so powerful, and you seriously do not have to worry about anything else if you have some of its best extensions. From creating the base layout of a React file to formatting your whole codebase, VS Code and its extensions have got your back. I am pretty sure that most developers today are using either VS Code, Sublime Text, or Atom. In my opinion, VS Code is the best text editor because of the amazing community that it has and because of the extensions. To learn more about these extensions, go check this "5 Visual Studio Code Extensions Developers Need in 2020" blog out here.

4. A good mouse and keyboard

As a developer, your keyboard and your mouse are the two things that you spend the most time on. If you have a really bad keyboard, then it is a good idea to buy a new one as soon as you can. After all, it is the mode of communication between you and the computer. If you invest in a good keyboard, you do start to type faster, and your fingers do not hurt as much as they used to. You can feel the difference and trust me, that will be worth it. Regarding the mouse, well, most people do not even consider the mouse to be essential, but when you start to design things or work with huge pieces of code, then you start to see the use of one. I used to be a person who didn't think that a mouse would be an essential asset for me until I bought one. It is much better to use a mouse than a touchpad while selecting things and surfing Stack Overflow.
These are assets that pay off over time and will help you save a lot of time. Do not be afraid to invest in a high-quality keyboard. You could get a wired mouse (I have worked with both wired and wireless), but the biggest benefit of a wireless mouse is avoiding the extra clutter of wires on your desk. This again is a personal preference, but you should consider a wireless mouse if you are like me and like to keep stuff clean.

5. An external monitor

If you had asked me a month ago about getting a better monitor, I would have cringed and said that I don't need one. I think that this is the case with most developers. It does seem okay to work on a smaller screen, but once you work on a bigger screen, you start to see the difference. There are numerous advantages to a big screen, and one of them is that you can have your code editor open on one side, a browser open on the other, a terminal lying in one corner, and your Spotify in another, and still have no problem at all. This is not possible on a small screen, and even if you do manage to fit all this, you will most likely have a lot of problems dealing with the clutter. A large screen always improves your workflow as a programmer. If you can, then I would recommend going for a laptop with a big display. But say that you are a person who travels a lot. Personally, I do go to my friend's place sometimes to code along with him, and a big laptop screen will not be handy in that case. Well, for you, I think that a decent-sized monitor and a 13-inch laptop would be perfect, so that whenever you are at home, you can connect your laptop to the monitor, and if you wanna go out, then that laptop won't bother you. An external monitor does boost your productivity and saves time because you no longer need to keep switching between your windows, and you can have everything in one place. These were the five tools you can use to improve your workflow as a developer and save a lot of time too. I hope that you found value in this article. Make sure to drop your suggestions and thoughts in the comment section! Thank you Priyanshu Saraf
https://medium.com/cleverprogrammer/boost-your-productivity-as-a-programmer-with-these-tools-3e38b718f7ac
['Priyanshu Saraf']
2020-11-04 17:59:37.424000+00:00
['Work', 'Productivity', 'Clever Programmer', 'Programming']
Creating an iPhone App-Like With Only Your Data Science Skills: One-Tap Life Logger Linked to Google Sheets
Demo 1 for API Solution — Google Cloud Functions

To have Google Cloud Functions receive the HTTP request and push a record to Google Sheets, the following are the steps:

Step 1 — Create a new GCP project

It is better to create a new one for isolation from other projects you already have. I created a new one with the name "Record iPhone Click".

Step 2 — Configure Google Sheets first to receive the data pushed from Cloud Functions

Set up the Google Sheets side first, because we need the authentication info for the Cloud Functions code. Following the description on this page:

1. Enable "Google Drive API".
2. Enable "Google Sheets API".
3. Go to "APIs & Services > Credentials" and choose "Create credentials > Service account key".
4. Fill out the form.
5. Click "Create key".
6. Select "JSON" and click "Create". The download of a JSON file takes place automatically.
7. Note the address under "client_email" in the JSON file.
8. Save the JSON file somewhere the Cloud Functions can access.
9. Create a spreadsheet in Google Sheets.
10. Write "event" in cell A1 and "time" in cell B1, as a header for the final log record.
11. In the sheet, press the "Share" button at the top right and add the client_email you got from step 7 above as an authorized user.

This is all!

Step 3 — Configure a new function in Cloud Functions

Go to "Cloud Functions" and CREATE FUNCTION. Use the Trigger type "HTTP". Configuration of new Cloud Functions (snapped by author)

Step 4 — Input code to main.py and requirements.txt

Press Next on the bottom left, and go to the code input page. Code of new Cloud Functions. Here is the code I actually used: main.py and requirements.txt (a hedged sketch of what such a function can look like is given at the end of this section). Just remember that the service account JSON file is required to be downloaded into a local folder, and when you want to do that in Cloud Functions, you have to save the file in the /tmp/ sub-folder. Otherwise, Cloud Functions refuses to build your app.

Step 5 — Deploy the function

Click the "Deploy" button and wait for the successful deployment. It can take a few minutes.

Step 6 — Find the URL and check if it's working

The URL can be found in the details of the function. Where you can find the URL. If you test-click it, you will see a message like the following, which means the code was successfully deployed. If you see an internal error, a server error, or any other error, something is wrong. Successful message. Here you should also see a new record in the sheet you created. Newly inserted record in the sheet.

Step 7 — Send the URL to your iPhone and make a Safari link on your home screen

Open your URL on your iPhone and add the shortcut to the Home Screen. Add your link to Home Screen. Then you can see the new icon on your iPhone home screen. Done! What we finally got

Step 8 — To make the URL links for your other life logs, copy the function again and again, set up the event name, and add each shortcut to your home screen

This is the most inconvenient part of this option: a Cloud Function's code can only be linked to one HTTP request URL. Therefore, if you need more than one life log (like "feed baby", "baby wakes up", "arrived at office", etc.), all you can do is create the functions again and again, making separate functions for every event. This process can be significantly reduced when you have more control over the API setup, as with the next option using Heroku!
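For reference, here is a hedged sketch of what the main.py from Step 4 can look like. It is not necessarily my exact gist; the sheet name, event label, and key-file handling are assumptions, and it uses the gspread library with the service account credentials from Step 2.

# main.py -- a minimal sketch, not the exact gist
import datetime
import shutil

import gspread
from oauth2client.service_account import ServiceAccountCredentials

SCOPES = ["https://spreadsheets.google.com/feeds",
          "https://www.googleapis.com/auth/drive"]
EVENT_NAME = "baby woke up"      # hypothetical event label
SHEET_NAME = "iPhone Click Log"  # hypothetical sheet name

def record_click(request):
    """HTTP Cloud Function: append an (event, time) row to the sheet."""
    # Cloud Functions only allows writes under /tmp/, so keep the key there.
    shutil.copy("service-account.json", "/tmp/service-account.json")
    creds = ServiceAccountCredentials.from_json_keyfile_name(
        "/tmp/service-account.json", SCOPES)
    client = gspread.authorize(creds)
    sheet = client.open(SHEET_NAME).sheet1
    now = datetime.datetime.utcnow().isoformat()
    sheet.append_row([EVENT_NAME, now])
    return f"Recorded '{EVENT_NAME}' at {now}"

# requirements.txt would then list:
# gspread
# oauth2client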
https://towardsdatascience.com/creating-an-iphone-app-like-with-only-your-data-science-skills-one-tap-life-logger-ac691698d3b3
['Moto Dei']
2020-09-18 17:50:38.150000+00:00
['Heroku', 'Cloud Functions', 'Python', 'Data Science', 'iPhone']
Generating a Dataset with GANs
Generative Adversarial Networks (GANs) — Adobe Stock Image

Having a dataset is a key component of training any sort of machine learning model. But what about instances where you may not have access to the data? Not being able to use a dataset because of data regulation and privacy concerns poses a problem when trying to apply machine learning. How can we train models without being able to use the relevant dataset? This is where deep learning can help! Using generative adversarial networks, or GANs, we can generate a dataset for training: an entirely new dataset, based on the original, that retains the original's important information.

What are GANs?

GANs are a class of machine learning systems. The technique is known for learning to generate new data with the same statistics as the training set. They are most often used for images, but we wanted to try them on numerical data.

Our Experiment

For our experiment, we worked with the Pima Indians Diabetes Database on Kaggle. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases and contains many diagnostic measurements as well as predictor variables such as the number of pregnancies the patient has had, their BMI, insulin level, age, and so on. We wanted to create an entirely new dataset based on this original dataset that retains important information from the original, which would be useful in solving the problem of restricted access to data due to data regulation and privacy concerns. We based our approach on the paper Data Augmentation Using GANs by Fabio Henrique K. dos S. Tanaka.

Training the GAN

The original paper examined four different primary architectures:

One 256-dimensional hidden layer
One 128-dimensional hidden layer
Two hidden layers of 128 and 256 dimensions
Two hidden layers of 256 and 512 dimensions

We wanted to see if we could improve the fake data generated by our GAN by tweaking the best architecture possible, so we experimented with altering the batch size and learning rate across models with each of the hidden layer architectures.

The four architectures we experimented with

Evaluation

To measure the success of the fake dataset produced by our GAN, we trained a classification and regression tree (CART) on the fake dataset and tested the tree on the real dataset.

Results

So how did we do? We made sure that our generated fake dataset followed a class distribution similar to that of the original real data. In the image below, you can see that across different subcategories, the distributions of the fake data (the blue bars) are pretty close to those of the real data (the red bars). Not bad! We found that the best results were produced by a GAN with a larger learning rate than reported in the paper, using the one 256-dimensional hidden layer architecture. The paper reported a classification accuracy of 74.8% on this dataset, but we were able to achieve a higher accuracy of 79.1%! Training these models was very interesting. In the image below, you can see a cost function plot over the epochs of training, where the generator's cost is in red and the discriminator's cost is in blue. Notice how the generator and the discriminator are in a constant war to outdo each other; we had to ensure the parameters we chose resulted in a stabilized GAN. We ran multiple iterations of each model architecture to ensure the results we were getting were not due to random chance.
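To make the winning configuration concrete, below is a minimal sketch of a tabular GAN with one 256-dimensional hidden layer and a learning rate of 0.002 in Keras. This is not the exact code we published (that lives on our GitHub); the latent dimension, activations, batch size, and training-loop details here are simplified assumptions.

# A hedged sketch of a tabular GAN with one 256-dim hidden layer.
import numpy as np
from tensorflow.keras import layers, models, optimizers

N_FEATURES = 8   # Pima Indians Diabetes predictor columns
NOISE_DIM = 32   # assumed latent size

def build_generator():
    return models.Sequential([
        layers.Input(shape=(NOISE_DIM,)),
        layers.Dense(256, activation="relu"),         # the single hidden layer
        layers.Dense(N_FEATURES, activation="tanh"),  # assumes data scaled to [-1, 1]
    ])

def build_discriminator():
    model = models.Sequential([
        layers.Input(shape=(N_FEATURES,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),        # real vs. fake
    ])
    model.compile(optimizer=optimizers.Adam(2e-3),    # the 0.002 learning rate
                  loss="binary_crossentropy")
    return model

def train(real_data, epochs=500, batch_size=64):
    gen = build_generator()
    disc = build_discriminator()   # compiled while trainable, so it still learns
    disc.trainable = False         # frozen only inside the combined model
    gan = models.Sequential([gen, disc])
    gan.compile(optimizer=optimizers.Adam(2e-3), loss="binary_crossentropy")
    for _ in range(epochs):
        idx = np.random.randint(0, len(real_data), batch_size)
        noise = np.random.normal(size=(batch_size, NOISE_DIM))
        fake = gen.predict(noise, verbose=0)
        # Discriminator step: real rows labeled 1, fake rows labeled 0.
        disc.train_on_batch(real_data[idx], np.ones((batch_size, 1)))
        disc.train_on_batch(fake, np.zeros((batch_size, 1)))
        # Generator step: try to make the frozen discriminator output 1.
        gan.train_on_batch(noise, np.ones((batch_size, 1)))
    return gen

Sampling gen.predict on fresh noise then yields synthetic rows that can be rescaled and handed to the CART for evaluation.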
The generator's cost (red) and discriminator's cost (blue) over epochs of training

Future Work

Based on our experiments, we think using the one 256-dimensional hidden layer architecture and a learning rate of 0.002 may be the most successful way to create a new dataset that retains important information from the original. Since the generator and discriminator are constantly doing battle, we think it might be better to terminate training dynamically rather than arbitrarily at the 500th epoch. Otherwise, it's possible to end training on an oscillation of the generator where the cost is quite high, which happened with this model. As an improvement, we could establish the 500th epoch as the earliest possible endpoint and instead end training only when we observe a generator cost that's lower than or as low as any previously seen cost, with a maximum cutoff of 600 epochs. This way, we would have the most optimal generator for the task. However, this idea requires further research.

Conclusion

We were able to generate a dataset with the same key features as the original dataset using GANs. Using a learning rate of 0.002 and an architecture with one 256-dimensional hidden layer, we were able to achieve better accuracy than the paper we based our work on. All of our code can be found on GitHub.

The Authors

This project was created by Master of Science in Artificial Intelligence (MSAI) students at Northwestern University: Aristana Scourtas, Nayan Mehta, and KJ Schmidt
https://medium.com/swlh/generating-a-dataset-with-gans-1e994ff633fd
['Kj Schmidt']
2020-07-23 22:02:23.071000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deeplearing', 'Gans', 'Data Science']
16 Fun Sex Facts You Don’t Need to Know…Unless You Like Fun Sex Facts
16 Fun Sex Facts You Don't Need to Know…Unless You Like Fun Sex Facts

#1: Your nipple color is your perfect lipstick shade

Photo by Oleg Magni from Pexels

As a nonfiction author, I have notebooks and notebooks filled with bizarre factoids that I have collected over the years. (You would be surprised at what science will throw research dollars at.) The following weird sex facts won't make you better in bed or make you seem more enlightened around friends and family. Nope. I make no such promises. But if you ever want to end a boring conversation, just pepper in a few of these fun facts and then make a graceful but mysterious exit…

#1. Your nipple color is your perfect lipstick shade. Every woman knows that it is tough to pick the perfect nude lipstick shade. Just one shade off can make the difference between looking au naturel and looking like a twelve-year-old who raided her mother's makeup drawer. Fear not. According to the experts on The Doctors, our body holds the secret — the color of your areolas is your perfect lipstick shade. Now I am not suggesting you lift your shirt at the Sephora counter, but…it would make makeup tutorials more interesting.

#2. Masturbation gives men an evolutionary advantage. Sperm is a lot like cheese. Leave it sitting around, and it gets all funky. And by funky, I mean missing heads, shriveled heads, tapered heads, and the most terrifying of all…two-headed sperm. But you don't need to fear mutant sperm because men's bodies have a way of flushing out the rejects — masturbation. In several research studies, masturbation created a fresher and more viable batch every time defective sperm were unloaded. It also helped stabilize sperm count. Just don't make it too fresh. Daily masturbation depletes sperm count.

He's cute but horrible in bed | Photo by Sid Balachandran on Unsplash

#3. Porn and Viagra don't work on pandas. Pandas are freakin' adorable, but they make horrible lovers. The problem is partly libido. Most male pandas in captivity would rather sit around eating bamboo than make sweet bear love. Other zoologists have theorized that pandas simply don't know how to do the deed. Sometimes the male panda will crawl on top of the female and hump her head. It's hilarious but not so funny if you are a zoologist trying to conserve the bears. Just google "panda porn," and you will be thankful you are neither a female panda nor a panda researcher. (Actually…don't. It's depressing.) Panda porn is really a thing. Researchers have tried surrounding pandas with videos of other pandas having sex. The hope was that seeing all their fellow bears getting it on would motivate them. It did not. In 2002, researchers had another bright idea — why not give the pandas Viagra? Unfortunately, that didn't work either. Maybe they should try some soft music and candlelight next?

#4. The first penis pump was invented by a tire professional. A penis pump or vacuum erection pump consists of a vacuum tube that fits over the penis with a ring that constricts the base of the penis. And then you pump, pump, pump up your pecker until you can pitch a tent with it. The penis pump actually does work. It uses suction to draw blood into the penis and keep it erect. But it is usually only recommended for older folks with severe erectile dysfunction. (And no, it does not increase the size, despite the Amazon reviews claiming otherwise.) Otto Lederer of Austria patented the first penis pump in 1917. But it was Geddings David Osbon who took that patent, perfected it, and brought it to the masses.
Like most great inventions, necessity led the way. Osbon stepped into his doctor's office one day, complaining that he could no longer maintain an erection. The year was 1960, and there were not many options for men, so his doctor didn't have a solution. Osbon was not deterred. He owned an automobile and tire business, so he took what he knew about mechanics and applied it to his erection. The rest is history.

Photo by William Warby on Unsplash

#5. The distance between a woman's urethra and her clitoris affects her ability to orgasm. The female orgasm has always stumped sex researchers. Most women cannot climax from penetration alone, and only 18.4% of women report they can orgasm without direct clitoral stimulation. But why some women respond to penetration alone can be discovered with a simple tool — a ruler. Oddly, this research was conducted a century ago. In the 1920s, Napoleon's great-grandniece, Princess Marie Bonaparte, became frustrated with her lack of orgasm. So she did what any repressed scientist would do…she collected data. She found that if a woman's glans clitoris (the fleshy tip) was less than the distance of the tip of her thumb away from the urethra, she was more likely to orgasm from intercourse. Known as the "rule of thumb," this old research has validity. There's evidence a woman is more likely to orgasm solely from penetration if the distance between her glans clitoris and the urethral opening is less than 2.5 cm (1 inch). But instead of getting your ruler out during sex, I suggest you ask her.

#6. Most erections lean to the left. Nobody is symmetrical, but you might have noticed your erect penis curves to one side — usually the left. This is often caused by plaque or scar tissue from circumcision. It's completely normal unless the curvature is so great that it causes erectile dysfunction and pain. Then you get Peyronie's disease, and that is not fun.

Positive John Thomas sign for a patient who sustained a pertrochanteric fracture on the left | CC BY-SA 3.0

An erection leaning to one side is often jokingly referred to by radiologists and surgeons as "Throckmorton's sign" or the John Thomas sign. Surgeons would use the position of the penis on the X-ray as a compass — whatever direction it pointed was the side the patient should be operated on. (This is apparently hilarious if you are replacing a hip.) While there has been some debate on whether hip fractures can cause an erection to lean to one side, most studies have found Throckmorton's sign does not indicate where the fracture lies. Sorry, Throckmorton. And you might have noticed that one of your testes (usually the left) also hangs lower than the other. Also totally normal.

#7. A man's ejaculate kills a rival's sperm. Men will do all sorts of crazy things when they think their partner is cheating. (I am looking at you, crazy ex who went through my phone multiple times!) But perhaps suspicious men should stay out of their lover's phone and just let their sperm take out the competition. Or more precisely, their "fighter sperm." The last portion of a man's ejaculate, called fighter sperm, contains a natural spermicide designed to kill the sperm of any rivals who come after him. Most interestingly, these kamikaze sperm increase when a man thinks his woman is cheating on him. This theory, known as sperm competition, is still being debated in humans, but there is evidence of it in other species. The problem with these sperm competition studies is how to replicate a situation where a guy thinks his partner is cheating.
In one study, they told the men to just "imagine" their partner's infidelity. Sorry, science people. That probably isn't going to cut it. But you are welcome to use my ex-boyfriend as a test subject.

#8. A man produces more sperm with certain females. Those wiggly little tadpoles are very discriminating. They need to be inspired. There is evidence that the quality and amount of sperm increase when certain females…um, inspire him. So if your man is impregnating you at every turn…that's sweet. He clearly really likes you. Or at least, he wants to carry on his genetic line with you.

#9. More babies are born at 8:00 AM than at any other time. 8:03 AM on November 17th was one of the happiest moments of my life. That's the time my daughter was born. And she was not alone. A 2013 report by the National Center for Health Statistics found that 3.5 times more babies are born at 8:00 A.M. Researchers are not exactly sure why more babies are born at this time, but most attribute it to more C-sections being performed in the morning than at night. More interestingly, when they looked only at babies born at home, those children were more likely to make an appearance between 1 A.M. and 4:59 A.M. This is most likely evolutionary. Back when we were hunkered down in caves, you didn't want to pop a baby out while the tribe was out hunting.

This man has a larger than average penis. I am sure you needed to know that. | Photo by MR O.K on Unsplash

#10. A man's ring finger indicates the size of his penis. Let's get this out of the way…women don't care about penis size. Seriously, we don't. But apparently, researchers do. In several studies (yes, more than one), researchers found the ratio of a man's ring finger to his index finger indicates the size of his penis. Longer ring finger = bigger dick. The reason is simple — testosterone. The more testosterone a man was exposed to in the womb, the longer his ring finger and the bigger his penis grows. So far, no research has been done on girls' estrogen exposure in the womb and the size of their vagina. Thank god.

#11. If your sense of smell is strong, you will have better orgasms. One of the reasons why I refuse video dates is that I need to smell a man to know if I like him. If you feel the same, then you are just exercising a neglected sex organ — your nose. In a small study, researchers found women who had a stronger sense of smell had more frequent orgasms. In another study, researchers found those who had lost their sense of smell reported lower sex drives. Again, these studies were small, but if you have the olfactory glands of a horny bloodhound…you may have more fun in bed.

#12. Certain smells increase blood flow to the penis. If you understand the seductive powers of smell, then you will choose your scented candles wisely. Researchers have found lavender and pumpkin pie smells increase blood flow to the penis. While none of the odors reduced penile blood flow, certain smells like cranberry were not as sexy. So if you want your man to ravish you, take him to the pumpkin patch and not a cranberry bog.

That groovy polyester suit is killing his sex drive | Public Domain

#13. Wearing polyester lowers your sex drive. Picture it. You show up for your Bumble date, and the guy is wearing polyester pants. Should your first thought be… A. Am I in a 1970s time warp? B. This can't be good for his sex drive. C. Both. I am going to take both B and C as an answer, kids. In a study examining how certain textiles lower sex drive, researchers found polyester was the biggest offender.
Another study found that wearing synthetic fibers also reduced sperm count. The reason is that synthetic fibers do not allow the boys to breathe. Researchers are concerned because sperm counts in men have plummeted more than 50% in the last 40 years, and fast fashion made from nylon, rayon, and polyester might be partly to blame. The good news is that wearing cotton did not alter sperm count or sex drive.

#14. You are attracted to those with an opposite immune system. The adage that opposites attract rings true when it comes to genetics. Researchers have found people are more likely to be attracted to those who have opposite immunity genes. And how do we know who has the opposite immune system? Again…we smell it. Researchers theorized that this attraction is evolutionary. Mating with a partner who has the opposite immune system produces children with a more robust immune system.

#15. You can get arrested for owning a vibrator in Alabama. Oh, Alabama. You gave us little gems like Lionel Richie, but you also gave us some really dumbass rules surrounding sex. And making it a crime to own a vibrator is right up there. Alabama is not the only place where you can't hum your way to pleasure. Several countries will throw you in the slammer if you even pack a vibrator in your suitcase.

The man-killer vagina is not a myth. Photo by Jeffery Wong on Unsplash

#16. Some vaginas have teeth. Women's vaginas have been terrifying men for centuries. In several Eastern and Western cultures, the myth of the vagina dentata warned men not to stick their manhood where it did not belong. A vagina dentata is a vagina with sharp teeth. (Vagina dentata is Latin for "toothed vagina.") But toothed vaginas are not just a myth. Some women are born with dermoid cysts — growths containing hair, fluid, teeth, or skin glands — inside their vagina. The cysts are usually not sharp, and they can easily be removed with surgery. Unless…you want to turn your genitals into one badass Venus flytrap. But I hope you never need to know that. Or any of this…
https://medium.com/sexography/16-fun-sex-facts-you-dont-need-to-know-unless-you-like-fun-sex-facts-956c74fbf775
['Carlyn Beccia']
2020-12-14 17:27:54.989000+00:00
['Sexuality', 'Life Lessons', 'Humor', 'Science', 'Culture']
WD Stack on Medium
We think that designers & front-end developers should easily be able to discover the latest resources, tools and useful freebies. WD Stack is a publication for front-end developers and web designers. If you have a story idea, please submit the draft or published link to us for review via your Medium account. WDStack.com is home to the top resources on dev & design. It's organized in collections and stacked by popularity, so it's easy to find the stuff you really want. Because "best" is a matter of opinion, the selection and ranking of resources should be a community-driven process. WD Stack facilitates community-driven curation where users vote on the posted resources. Instead of one author curating what they deem to be useful, the broader community (you) builds and rates the resources to fairly determine what's "best". Check out WDstack.com to find resources grouped logically by collection, or browse by popularity as ranked by other users.
https://medium.com/wdstack/wd-stack-on-medium-5c293da2c8bc
['Carol Skelly']
2016-06-14 20:15:19.567000+00:00
['About', 'Design', 'Responsive Design', 'Web Design', 'Web Development']
Deploying Applications in Kubernetes Using Flux
What is the entire story all about? (TLDR)

We will be using Flux to synchronize the Helm charts stored in a version control system to our Kubernetes cluster. We will use the HelmRelease (CRD) with Flux.

Installing Flux

Let us now install fluxcd in our Kubernetes cluster using a Helm chart. If you are not familiar with what a Helm chart is, refer to this guide. Before we install fluxcd, we will have to install the HelmRelease CRD (explained later in the article).

#Adding the Flux repo
helm repo add fluxcd https://charts.fluxcd.io

#Installing the HelmRelease CRD
kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml

#Create the namespace for the flux installation
kubectl create namespace flux

Flux connects to the Git repository using an SSH key. If the SSH key already exists, a Kubernetes secret can be created from the key; otherwise, configure the key given by fluxcd after installation with your GitHub account. Since I already have an existing key pair, I will create a Kubernetes secret from the private key.

#This creates the Kubernetes secret for flux to communicate with GitHub
kubectl create secret generic flux-git-deploy --from-file=identity=./id_rsa -n flux --dry-run=client -o yaml | kubectl apply -f -

Since we have configured our flux deployment to communicate with our Git repo, let us deploy the fluxcd and helm-operator deployments.

#Install the fluxcd deployment
helm install flux fluxcd/flux --set git.url=git@github.com:pavan-kumar-99/medium-manifests.git --set git.branch=fluxcd --set git.secretName="flux-git-deploy" --set git.user=flux-user --set git.path=helm-releases --namespace flux

#Install the helm-operator deployment
helm upgrade -i helm-operator fluxcd/helm-operator --set git.ssh.secretName=flux-git-deploy --namespace flux

#Create a namespace to deploy our HelmRelease into
kubectl create ns fluxcd-demo

Helm Operator

The Helm Operator is a Kubernetes operator that allows one to declaratively manage Helm chart releases. The desired state of a Helm release is described through a Kubernetes Custom Resource named HelmRelease. Based on the creation, mutation, or removal of a HelmRelease resource in the cluster, Helm actions are performed by the operator.

Fluxcd with helm operator

Here is a sample repo which contains some sample Helm charts and a sample HelmRelease file. Let us now walk through what is written in the HelmRelease file:

kind: HelmRelease (the Kubernetes CRD).
metadata.name: the name of the HelmRelease.
metadata.namespace: the namespace in which the HelmRelease is supposed to be deployed.
metadata.annotations fluxcd.io/automated: to enable automation for fluxcd.
spec.releaseName: the name of the Helm chart release.
spec.targetNamespace: the namespace into which the Helm chart has to be installed. (Make sure you create the namespace before the HelmRelease gets installed.)
spec.chart.git: the Git repository URL from which the Helm chart has to be installed.
spec.chart.path: the path within the GitHub repository.
spec.chart.ref: the name of the GitHub branch.

Demo

Once the fluxcd and helm-operator charts are installed, you should see the flux components created in the flux namespace.

fluxcd components

Now go grab a cup of coffee and wait for 5 minutes. You should then have all the resources defined in the Helm chart created in your cluster. These are the resources defined in our helm chart.
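Assembled from the fields above, a minimal HelmRelease manifest would look something like the sketch below. The Git URL and branch echo the sample repo used earlier; the chart path and the replicaCount override are illustrative assumptions:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: fluxcd-demo
  namespace: flux
  annotations:
    fluxcd.io/automated: "true"
spec:
  releaseName: fluxcd-demo
  targetNamespace: fluxcd-demo
  chart:
    git: git@github.com:pavan-kumar-99/medium-manifests.git
    path: charts/demo        # hypothetical chart path
    ref: fluxcd
  values:
    replicaCount: 1          # overrides the chart's values.yaml

The demo below then changes exactly this kind of value in Git and watches Flux reconcile it.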
Resources in the helm chart

Let us watch the resources in the fluxcd-demo namespace (spec.targetNamespace in the HelmRelease file):

watch -n 5 kubectl get all -n fluxcd-demo

fluxcd-demo namespace

And now you have all the resources defined in the Helm chart created in your Kubernetes cluster. Woooo!!! Since fluxcd watches Git, picks up changes, and updates our cluster, let us update the number of replicas of the fluxcd-demo deployment.

Number of replicas = 1

Let us edit our HelmRelease manifest to override the values defined in the values.yaml file of the Helm chart. I have updated the number of replicas to 5 in GitHub by overriding the replicaCount value in the HelmRelease file. Now go grab another cup of coffee and wait for 5 minutes.

Number of replicas = 5

We have now successfully deployed our first HelmRelease using fluxcd.

Conclusion

Thanks for reading my article. Here are some of my other articles that may interest you.

Reference

https://docs.fluxcd.io/en/latest/tutorials/get-started/

Recommended
https://medium.com/swlh/deploying-applications-in-kubernetes-using-flux-a9d171b11917
['Pavan Kumar']
2020-12-26 16:37:50.005000+00:00
['Helm', 'Gitops', 'Github', 'Kubernetes', 'Flux']
I’m in a Toxic Relationship With the Restaurant Industry. Help.
When the clock struck twelve on January 1, 2020, I was in a bar with three of my closest friends, playing Jenga and enjoying a Guinness. While 2019 had been a personally rough year, I was optimistic about the future. I planned to work harder at my personal projects, fix my financial situation, and finally get out of [REDACTED]. Obviously, that didn't happen. I don't need to recap the events of the past seven months to you. Nor do I want to talk about the US response to the virus. For my thoughts on that, you can read this: The one thing I will say is this: when the PA lockdown happened in March and the first waves of increased unemployment pay and stimulus checks rolled out, I was making more money than I ever had before. I made more money shut up in my apartment than I did working 50-hour work weeks. I want to be clear that this isn't a criticism of [REDACTED], but rather a criticism of the restaurant industry as a whole. Restaurant employees are not required to receive minimum wage, so long as the tips they claim cover the difference in pay. Restaurant employees rarely receive benefits of any sort. No paid vacation, no dental/health/vision, no 401k. In most scenarios, if you have to take time off, you're simply losing out on money. This poses a great threat in the current climate. Restaurant workers, like many other essential employees, are in a high-risk, low-reward situation. Have customers been kinder? Have tips been more generous? The short answer is no, they haven't. I worked Mother's Day this year, during which we were still in the "Red Phase" of lockdown. It was a nightmare. Ticket times were nearly two hours. As the crowd outside grew more and more frustrated, they became less and less understanding. We were cussed out, screamed at, and threatened. Many people cancelled their orders. It was the first time I'd ever seen our kitchen managers simply have no clue what to do. Somehow, we managed it. Just down the street from us, Red Lobster had no choice but to send customers home without food. I was disgusted by how little empathy people possessed. According to Forbes, at the beginning of 2019, 78% of Americans were living paycheck to paycheck. I remembered a conversation I had with my manager back in March, where he said, "Servers aren't living paycheck to paycheck. They're living day to day." He perfectly articulated why it's so difficult for restaurant employees to break away from this line of work. You never know how much money you're going to make. If you don't think you'll have enough for rent, you pick up an extra shift. It's hard to think about not leaving work with cash at the end of the day. I've never had to make a budget. Not really, anyway. I've never had the experience of receiving two weeks' worth of pay and having to make a plan for it to last me two more weeks. I go into work, and sometimes I leave with $100, other times with $30. You just never know. Most times, the money is good enough or simply "too good" to justify moving to another retail job, even if it might make you happier in the end. For a while, I considered quitting and working at a local bookstore, only to realize that $11 an hour wouldn't pay my bills. The money, however, is never so good that there isn't always something better lurking just around the corner. It's just a matter of reaching out and taking it, if you're willing to suffer through the adjustment period and take an initial financial hit.
https://medium.com/illumination/im-in-a-toxic-relationship-with-the-restaurant-industry-help-5ee6758c3954
['Austin Harvey']
2020-10-28 15:38:50.170000+00:00
['Covid 19', 'Mental Health', 'Jobs', 'Restaurant', 'Finance']
How to organize your studies while working
3. Find Pareto Optima

The next step is to analyze the classes from a Pareto perspective. It is nice to learn as much as possible, but often we don't have the time to learn everything. Thankfully, Pareto helps with his principle: find out which 20% of the course are the most important parts. In most cases, these are the assignments. In the assignments, teachers cover the core skills of the lecture (that's why they are in the assignments).

4. Find a workflow to get everything done

This will differ for everyone. I, for example, work full-time and have many other projects to do as well. Everybody keeps reminding you that "it is not possible to get everything done" or that "you have to focus 100% only on your studies". This is often not true. Most of the time, you can find ways to get everything done. What do I mean by that, and how can you do it? Let me explain how I will do it this semester. First of all, I am more efficient if I just read the material and work through the exercises. I have been studying for more than 5 years now, and for me, sitting in lectures is often an incredible waste of my precious time. Therefore, I always start with the exercises to see what is actually demanded by the professor. Afterward, I read through the study material and try to answer all questions/assignments while I do this. I repeat this with every class I have until all my deadlines are met. Afterward, I go through the additional learning material: for example, streams, additional slides, additional articles, and so on. This process is scheduled around my working hours. If I work from 9am to 5pm, I will have a break after I come home and start with my studies immediately after the break. Of course, the weekends are primarily used for the more complex assignments that couldn't be finished during the week.

5. Work around exam dates

A huge issue when working full-time is exams. Exams are scheduled the way the university likes, and most of the time they are not scheduled with full-time working students in mind. The way I solve this is the following:

Ask the professor if there is a way to re-schedule or if there are other dates available.
Schedule work leave or holiday for the key exams. (In most countries you are entitled to holidays.)
If the exam is online and around lunchtime, you can use your lunch break for taking the test (yes, this is crazy but doable).
If the exam is online and just a short multiple-choice test, you can use the toilet. (Yes, this is hardcore — but necessary if your employer doesn't support your ambitious goals.)
If the exam is not online and you do not get a holiday, you have to take it next semester. Be careful to structure your classes so there is a good mix of online and in-person exams (if possible).

6. Breathe

As you are studying and working as well, I guess you are ambitious and motivated as hell. So this might seem counter-intuitive, but it is vital: breathe and relax. There is no way to work at 100% all the time. Better to chill in between, but when you work, then work. Ultimately, doing full-time work and a study program at the same time comes down to mindset. It is possible but requires discipline. It is useful to see studying as a pursuit of lifelong learning rather than just getting a degree, because the degree alone will not be enough motivation to keep you going through all of it over a period of (more than) three years. If you have any questions, feel free to ask. I can also write an article about concepts that are especially interesting to you.
https://medium.com/createdd-notes/how-to-organize-your-studies-while-working-f89483c341e5
['Daniel Deutsch']
2019-10-30 19:34:45.439000+00:00
['Motivation', 'Education', 'Study', 'Full Time Jobs', 'Workflow']
Network IP Ranges of a Private Kubernetes Cluster in Google Cloud Platform
In a secure and private Kubernetes (K8S) cluster in Google Cloud Platform (GCP), it is important to make sure that you are using private IPs and right-sized IP ranges for your current and future scaling needs. A bad network design is very difficult to fix, especially after the services have started running in production. The story Securing Your Kubernetes Cluster in Google Cloud Platform covered the basics of the setup. Detailed coverage of the IP address ranges in the K8S cluster deserves a story of its own, and this one is trying to achieve that. Before jumping into the matter, it is better to do a deep dive into the K8S fundamentals, and the K8S documentation provides a lot of material for your needs. In addition to that, the story Kubernetes 101: Pods, Nodes, Containers, and Clusters by Daniel Sanche serves as a quick refresher on the subject.

Private IP Addresses

Private IP addresses cannot be used for any kind of routing on the Internet. The Network Working Group's RFC1918 gives all the details of the private IP address ranges that private networks can use. According to RFC1918, the following are the private IP address ranges:

10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

A Primer on K8S Inter-Pod Networking

This section gives a high-level, 50,000-foot view of K8S inter-pod networking. For detailed coverage of this topic, you may refer to the K8S documentation on cluster networking. All the pods communicate with other pods through a NAT-less network. Corresponding to a pod's eth0 interface, there is a vethX interface pair talking to the bridge of the node. Pod-to-pod communication within a node goes through this bridge, and the packets don't leave the node. Communication between pods across different nodes has to happen through the eth0 interface of the node; the bridge of the node passes the packets that have to leave the node through the node's eth0 interface, and the routing tables handle the packet routing rules. Services in K8S get stable IP addresses and ports. K8S does the service discovery to identify the pods that are running the actual services. The service IP addresses are virtually created in the K8S API Server as Endpoint objects, in conjunction with the kube-proxy running in each and every node. Any new service creation notification is sent to all the kube-proxy components running in the nodes, which makes the service addressable within each node. When clients connect to a service IP, the K8S API Server sends the request to a node, and the kube-proxy in that node passes the request to a random pod running that service. The kube-proxy maintains IP tables for these purposes.

Private K8S Cluster IP Addresses

It is advisable to have your K8S cluster in a dedicated custom Virtual Private Cloud (VPC). In this custom VPC, you need IP addresses for the following resources:

1. General-purpose VMs, K8S nodes, etc.
2. Pods created by K8S
3. Services created by K8S
4. K8S API Server

It is important to have non-overlapping IP address ranges for all of the above resources. You should have the scaling requirements identified very early, before even architecting and designing your K8S infrastructure, because the IP address range selection depends a lot on those requirements. All the configuration related to GCP's infrastructure takes the IP address ranges in CIDR notation.
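The Terraform script mentioned later captures all of these details. As a rough sketch (not the full script), the four non-overlapping ranges can be wired together like this; the resource and secondary-range names and the services CIDR (10.3.0.0/24) are assumptions, while the node (10.1.0.0/16), pod (10.2.0.0/20), and master (172.16.0.0/28) CIDRs match the ranges discussed below:

resource "google_compute_subnetwork" "mservice_subnetwork" {
  name          = "mservice-subnetwork"
  network       = "mservice-network"
  region        = "europe-west2"
  ip_cidr_range = "10.1.0.0/16"        # nodes and other VMs (~65k IPs)

  secondary_ip_range {
    range_name    = "mservice-pods"
    ip_cidr_range = "10.2.0.0/20"      # pods (~4k IPs)
  }
  secondary_ip_range {
    range_name    = "mservice-services"
    ip_cidr_range = "10.3.0.0/24"      # services (~256 IPs); assumed value
  }
}

resource "google_container_cluster" "mservice_dev_cluster" {
  name               = "mservice-dev-cluster"
  location           = "europe-west2"
  network            = "mservice-network"
  subnetwork         = google_compute_subnetwork.mservice_subnetwork.name
  initial_node_count = 3

  ip_allocation_policy {
    cluster_secondary_range_name  = "mservice-pods"
    services_secondary_range_name = "mservice-services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_ipv4_cidr_block  = "172.16.0.0/28"   # K8S API server (~16 IPs)
  }
}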
When you are coming up with the IP address ranges for your VPC and K8S infrastructure, it is advisable to use a CIDR calculator so that you do not overlook anything or make mistakes in the IP address calculations. There are many such tools available; CIDR.xyz was used while writing this story. The Terraform script captures all the details, including the IP address ranges, of the private K8S cluster discussed here.
The following IP range is chosen for the general purpose VMs such as K8S nodes, bastion hosts etc. You can see that there is a possibility of ~65,536 (discounting the reserved IPs) VMs that you can create with this IP range.
Visualization Using http://cidr.xyz/
The following IP range is chosen for the pods created by K8S. You can see that there is a possibility of ~4,096 (discounting the reserved IPs) K8S pods that K8S can create with this IP range.
Visualization Using http://cidr.xyz/
The following IP range is chosen for the services created by K8S. You can see that there is a possibility of ~256 (discounting the reserved IPs) K8S services that K8S can create with this IP range.
Visualization Using http://cidr.xyz/
The following IP range is chosen for the K8S API Server. You can see that there is a possibility of ~16 (discounting the reserved IPs) IPs that K8S can use for its API URL.
Visualization Using http://cidr.xyz/
Validation
It is advisable to validate that your K8S cluster indeed has the configured IP address ranges. This is also necessary for conversations with your security team, to evidence what you claim. The following validations confirm that all the IP address ranges in this K8S ecosystem are really private.
VPC
Use the following commands to make sure that your VPC has the correct IP address range. In addition to the VPC, you also need to make sure that you have the correct VPC peering in your infrastructure. The VPC peering shown below is created by K8S to provide a separate VPC for the K8S API Server. For all the K8S nodes and pods to talk to the K8S API Server, there must be a peering established with your VPC.

$ gcloud compute networks subnets list | grep europe-west2
default              europe-west2  default           10.154.0.0/20
mservice-subnetwork  europe-west2  mservice-network  10.1.0.0/16

$ gcloud compute networks peerings list
NAME                                     NETWORK           PEER_PROJECT                PEER_NETWORK                            AUTO_CREATE_ROUTES  STATE   STATE_DETAILS
gke-7884e5a1eff6b98b5d90-517b-450a-peer  mservice-network  gke-prod-europe-west2-f88a  gke-7884e5a1eff6b98b5d90-517b-c02f-net  True                ACTIVE  [2019-03-24T02:28:40.141-07:00]: Connected.

K8S API Server
Use the following commands to make sure that your K8S API Server has the correct IP address, which is also in the private IP address range.
$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.16.0.2
  name: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
contexts:
- context:
    cluster: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
    user: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
  name: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
current-context: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
kind: Config
preferences: {}
users:
- name: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

You can see that the API server address, https://172.16.0.2, is in the private IP address range.
K8S Nodes
Use the following commands to make sure that your K8S nodes have the correct IP addresses, which are also in the private IP address range. Note that there is no external IP address for any of the nodes; because the EXTERNAL-IP field is empty, awk prints the next field (the OS image, Container-Optimized) in its place.

$ kubectl get nodes -o wide | awk '{print $1, $6, $7}'
NAME                                                 INTERNAL-IP  EXTERNAL-IP
gke-mservice-dev-clu-mservice-node-po-528e24d4-d46s  10.1.0.6     Container-Optimized
gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6  10.1.0.7     Container-Optimized
gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32  10.1.0.8     Container-Optimized

K8S Pods
Use the following commands to make sure that your K8S pods have the correct IP addresses, which are also in the private IP address range. In the result below, you can see that some of the pods have IP addresses in the VPC subnet's primary IP range, 10.1.0.0/16, even though we explicitly declared 10.2.0.0/20 to be used as the K8S cluster secondary range when defining the IP allocation policy.
This is because some kube-system pods run with host networking, so they take their node's IP address from the subnet's primary range rather than from the pod secondary range.

$ kubectl get pods -o wide --all-namespaces | awk '{print $1, $7, $8}'
NAMESPACE     IP         NODE
istio-system  10.2.2.12  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.6   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.14  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.20  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.15  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.16  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.9   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.21  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.17  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system  10.2.2.13  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.8   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.7   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.18  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.1.3   gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6
kube-system   10.2.0.2   gke-mservice-dev-clu-mservice-node-po-528e24d4-d46s
kube-system   10.2.2.2   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.3   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.10  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.4   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.1.0.6   gke-mservice-dev-clu-mservice-node-po-528e24d4-d46s
kube-system   10.1.0.7   gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6
kube-system   10.1.0.8   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.5   gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.2.11  gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system   10.2.1.2   gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6

K8S Services
Use the following commands to make sure that your K8S services have the correct IP addresses, which are also in the private IP address range.

$ kubectl get services
NAME        TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)  AGE
kubernetes  ClusterIP  192.168.0.1  <none>       443/TCP  1h

Infrastructure Test Automation
When you use IaC (Infrastructure as Code) tools like Terraform, it is also important to put the right level of infrastructure test automation in place, so that the test reports can be used to evidence the security of your system without any manual intervention. A sketch of one such check appears after the conclusion.
Conclusion
When you expose your K8S resources to the Internet through public IP addresses, there are thousands of sophisticated adversaries trying to peer into your network. A guarantee that all of your K8S resources have private IP addresses ensures that those resources are not directly reachable from outside your network for staging any kind of attack. Even while protecting your internal network elements with private IP addresses, you can still choose to expose services through external IP addresses, including your K8S API Server URL. Caution has to be exercised to protect such exposed services from adversaries, with various authentication and authorization techniques in conjunction with transport layer security.
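As an appendix, here is a hedged sketch of what one such automated check might look like, scripted in Python against kubectl's JSON output rather than as part of the story's Terraform setup. It assumes kubectl is already pointed at the cluster and that 10.1.0.0/16 is the expected node range, as in this story.

import ipaddress
import json
import subprocess

# Expected primary subnet for nodes, per this story's design.
NODE_RANGE = ipaddress.ip_network("10.1.0.0/16")

# 'kubectl get nodes -o json' returns a NodeList; each node's addresses
# live under .status.addresses with types such as InternalIP / ExternalIP.
raw = subprocess.check_output(["kubectl", "get", "nodes", "-o", "json"])
nodes = json.loads(raw)["items"]

for node in nodes:
    name = node["metadata"]["name"]
    addrs = node["status"]["addresses"]
    internal = [a["address"] for a in addrs if a["type"] == "InternalIP"]
    external = [a["address"] for a in addrs if a["type"] == "ExternalIP"]
    # Every internal IP must fall inside the private node range.
    assert all(ipaddress.ip_address(ip) in NODE_RANGE for ip in internal), name
    # A private cluster's nodes should have no external IPs at all.
    assert not external, f"{name} has an external IP: {external}"
    print(f"{name}: {internal} OK")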
https://rajtmana.medium.com/network-ip-ranges-of-a-private-kubernetes-cluster-in-google-cloud-platform-93bd556d6f2f
['Rajanarayanan Thottuvaikkatumana']
2019-03-24 17:10:42.844000+00:00
['Gcp', 'Google Cloud Platform', 'K8s', 'Kubernetes', 'Cidr']
How to Appreciate Yourself Gamefully and Kindly
The "Appreciation Game"
I decided to enhance my "Creativity and Gratitude Game" with another game I sometimes play, the "Appreciation Game," which is just a regular notebook where I record each little step I accomplish during the day. That way, I make sure that I move in small steps and not in big jumps. After writing down what I just did, I immediately cross it out. For me, it closes the move, and I can then go on to the next one. You might wonder what this detailed recording can achieve, besides slowing me down and interfering with progress on what I want or need to do. In fact, it has the opposite effect. By taking several seconds to record each little self-contained step, I slow down enough to appreciate what I have just done. I also interrupt possible dwelling thoughts such as "That is not what I wanted to do" or "That is not good enough." I simply record what I have done, appreciate that I am progressing with my day, and then, by crossing out the record, I free myself up for anything else I want to do that day. Striking through each record stops me from mentally going back to what I just did and trying to reiterate it in my thoughts. In games, you can't go back (as in a browser) either, if you made the "wrong" move. In many games, you have to start anew. So basically, you play a new game round or a new game altogether. Looking at my to-do list for the day, or at anyone or anything that comes my way or requires my attention, helps me to identify that next challenge, project, or activity "game."
https://medium.com/illumination-curated/how-to-appreciate-yourself-gamefully-and-kindly-cd5a2f22d8a1
['Victoria Ichizli-Bartels']
2020-11-25 12:09:00.859000+00:00
['Ideas', 'Self-awareness', 'Gaming', 'Serendipity', 'Appreciation']
Scripting A Hexagon Grid Add-On For Blender 2.91
This tutorial explores how to make a grid of regular convex hexagons in Blender with Python, then how to turn that script into an add-on. An example render made with the add-on looks like so: A grid rendered in Cycles with a Viridis color palette. The tutorial takes inspiration from the hexagonal basalt columns of the Giant's Causeway in Northern Ireland and Fingal's Cave in Scotland. Such formations have also inspired the environs in video games such as Dark Souls II, Dragon Age: Inquisition and Skyrim: Dragonborn, to name a few. A caveat: this tutorial does not explain how to create photo-realistic basalt columns with the natural irregularities of those pictured at the top. A Voronoi mesh generator might be more useful for readers with such a goal. Those wishing to skip to the full code for this tutorial may reference the Github repository. This tutorial was written with Blender version 2.91. Blender has evolved rapidly over the past two years; please consult the release notes for changes between that version and the current one.
Hexagon Basic Geometry
First, let's review some geometry. Suppose we position the hexagon in a Cartesian coordinate system at the origin, (0.0, 0.0), where the positive x axis, (1.0, 0.0, 0.0), constitutes zero degrees of rotation. Positive on the y axis is forward, (0.0, 1.0, 0.0), and positive on the z axis is up, (0.0, 0.0, 1.0), which means that increasing the angle of rotation will give us counter-clockwise (CCW) rotation. We follow the convention of Blender's 2D mesh circle primitive, where the initial vertex starts at 12 o'clock, not at 3 o'clock. In consequence, the hexagon is standing upright, not flat on its side. Since a regular convex polygon is a constant shape, we can hard-code its features to avoid converting from polar coordinates to Cartesian coordinates via sine and cosine. In the table below, we assume that the first vertex is at the center of the hexagon. A hexagon's maximal and minimal radii. Source: https://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/Regular_hexagon_1.svg/1024px-Regular_hexagon_1.svg.png The key numbers here are the square root of three, 1.7320508, and half of it, 0.8660254. A hexagon's radius multiplied by 0.8660254 is the distance from the hexagon's center to an edge's midpoint. If we tile hexagons leaving no gaps between them, then to travel from one hexagon center to another across an edge we would travel two such lengths, 1.7320508. This is explained in greater detail in the Parameters section of the Wikipedia entry on hexagons. The circumradius, or maximal radius, R (green) is the distance from the center to a vertex. The inradius, or minimal radius, r (blue) is the distance from the center to an edge's midpoint. For the sake of topology, we may also want to subdivide the hexagon. The subdivisions available depend on whether we want to insert a vertex coordinate at the hexagon's center and/or bisect edges of the hexagon to create new vertices at edge midpoints. Subdivisions of a hexagon. From left to right: none, pentagons, quadrilaterals, triangles. In some cases, the need to ensure that a hex grid is composed only of triangles before the model is exported will drive our decision. In other cases, we may be interested in the pattern generated by the subdivision after tessellation. Tiled hexagons divided into three quadrilaterals. For example, subdividing a hexagon into three quadrilaterals creates — along with color choice — the optical illusion of stacked cubes in an isometric projection. More possibilities are illustrated in the Wikipedia article on hexagonal tiling, here. A sketch of this hard-coded vertex table follows.
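To make the hard-coded numbers concrete, here is a minimal sketch of such a vertex table. The helper name hexagon_verts is hypothetical; the coordinates start at 12 o'clock and wind counter-clockwise, matching the convention above, with no calls to sine or cosine.

# Half of the square root of three, i.e. cos(30 degrees).
SQRT_3_2 = 0.8660254

def hexagon_verts(radius=1.0, cx=0.0, cy=0.0):
    """Return the 6 corners of an upright hexagon, starting at 12 o'clock
    and winding counter-clockwise."""
    r = radius
    rh = radius * 0.5          # sin(30 degrees) * radius
    rv = radius * SQRT_3_2     # cos(30 degrees) * radius
    return [
        (cx,      cy + r,  0.0),  # 12 o'clock
        (cx - rv, cy + rh, 0.0),
        (cx - rv, cy - rh, 0.0),
        (cx,      cy - r,  0.0),  # 6 o'clock
        (cx + rv, cy - rh, 0.0),
        (cx + rv, cy + rh, 0.0),
    ]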
Echoing The Manual Approach
Next, we consider how we'd create a hexagon grid manually, with Blender's graphical user interface (GUI). Afterward, we'll recreate that algorithm in Python script. zerobio provides a tutorial on how to do this: In the first leg of the process, we create a circle primitive with 6 vertices, append an array modifier for the horizontal offset, then append an array modifier for the vertical offset. For those unfamiliar with the BMesh workflow, a prior tutorial offers an introduction. As far as parameter names for create_circle go, cap_ends fills the shape with an ngon when True; cap_tris fills with a triangle fan instead of an ngon. Generally, the first argument in a bmesh.ops method, bm, is positional; all subsequent arguments, such as segments and radius, should be supplied with the parameter named. Many of the named parameters listed in the API can be assumed to have default arguments. For the ArrayModifiers, zerobio uses relative offsets. Either relative or constant offsets will work, given appropriate measurements. For the second array modifier, which creates the grid's rows, the relative horizontal offset is 0.05 because half the mesh's width, 0.5, is divided by the count used in the first (horizontal) array modifier, 10. A circle primitive with two array modifiers. The advantage of this approach is that it uses out-of-the-box methods, making it fast and easy; it's also non-destructive insofar as it uses modifiers. The main disadvantage, which zerobio shows how to fix, is that the grid forms a parallelogram. To make a rectilinear grid, the array modifiers need to be applied (removing an aforementioned advantage). Sections of the parallelogram then need to be snipped and rearranged. Another minor inconvenience is that the grid's origin lies near the bottom-left corner, in the original shape's center. The grid could be centered with Object > Set Origin > Geometry To Origin, but it'd be nice if it were centered right away. Were we to pursue this route further, we could try to square the parallelogram with code, perhaps with a series of orthonormal planar bisections, or with a boolean intersection between the hex grid and a cuboid. A scripted version of this approach is sketched below.
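For reference, a rough script version of that first leg might look like the following. The bmesh.ops.create_circle parameter is named radius here, per this tutorial's Blender version, but has been named diameter in other releases; the array modifier offsets are plausible values for an upright hexagon (row spacing of three quarters of the cell height, half a cell width of stagger relative to the ten-cell-wide array), not figures taken from the video.

import bpy
import bmesh

bm = bmesh.new()

# 6 segments yields a hexagon; cap_ends=True fills it with an ngon.
# Note: in some Blender releases this parameter is named 'diameter'.
bmesh.ops.create_circle(bm, cap_ends=True, segments=6, radius=0.5)

mesh = bpy.data.meshes.new("Hexagon")
bm.to_mesh(mesh)
bm.free()

obj = bpy.data.objects.new("Hexagon", mesh)
bpy.context.collection.objects.link(obj)

# First array modifier: a row of 10 hexagons along x.
mod_x = obj.modifiers.new("ArrayX", 'ARRAY')
mod_x.count = 10
mod_x.use_relative_offset = True
mod_x.relative_offset_displace = (1.0, 0.0, 0.0)

# Second array modifier: stack 10 staggered rows. The 0.05 horizontal
# offset is half a cell width relative to the 10-cell-wide array; 0.75
# is three quarters of the cell height.
mod_y = obj.modifiers.new("ArrayY", 'ARRAY')
mod_y.count = 10
mod_y.use_relative_offset = True
mod_y.relative_offset_displace = (0.05, 0.75, 0.0)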
Hexagons in Concentric Rings
The redo menu of the complete add-on. Instead of the modifier approach above, we'll aim to create an add-on which appends a grid of hexagons to a collection from the Add > Mesh menu of the 3D viewport when in Object mode. We could create a staggered square grid instead, but given that our inspiration is natural, not artificial, arranging our hexagons in concentric rings is a better fit. Many inputs in the screen capture above should be self explanatory. The terrain type is an enumeration that specifies a shaping function for the extrusion of the grid on the z axis. This tutorial addresses four shaping functions: uniform, linear gradient, spherical gradient and conic gradient. These are the same functions we used to make color gradients in Processing. Any function that accepts an origin and destination point as inputs and returns a factor in [0.0, 1.0] will work. This terrain factor is mixed with a noise factor according to a noise influence. The result is used to interpolate from the extrusion lower bound to the upper. The remaining 3 inputs will be supplied to one of Blender's noise methods. We'll start with the business logic, then include more data related to Blender's GUI later. As an exception, one GUI-related item we'll include right away: a class which will be recognized as an operator. In Python, to extend a class with a sub- or child class, we specify the parent, bpy.types.Operator, in parentheses after the class name and before the colon. For ring count, we include the central hexagon as the first ring, then set the minimum number of acceptable rings to one. That way, at least one hexagon will be created. orientation indicates how much to rotate the grid as a whole around the z axis after it is created; in the code above, we haven't made use of this parameter yet. face_type specifies the manner in which a hexagon is subdivided; we'll address this in the next code snippet. In case we need to organize our faces and vertices by hexagon, we'll create two-dimensional lists for verts and faces. For each iteration through the inner loop, a list of hex_vs and hex_faces will be appended. When finished, this method will generate a 2D grid similar to this: Indices per hexagon for a grid with ring count 4. The logic to create this grid is sourced from Red Blob Games' extensive articles on hexagonal tessellation and coordinate systems. To grasp how these nested for loops work to create the grid, it is helpful to add diagnostic print statements for indices i and j. For 4 rings, i will span from negative to positive rings-1. The number of hexagons added per iteration of the outer loop will begin with 4 and increase by 1 until we reach the central iteration, where i is 0 and 7 new hexagons are added. Afterward, the number of hexagons created will decrease by 1. The total number of hexagons created, 37, is equal to 1 + i_max * verif_rings * 3. Depending on face_type, that number will need to be multiplied by the faces per hexagon to predict the total number of faces. To add faces, we introduce the following: When the BMVertSeq is called to create a new vertex, a BMVert is returned. All we need to supply to the new method is a collection — tuple, list, Vector — that specifies the vertex's coordinates (conventionally shortened to co in Blender's API). Even though the mesh is 2D up to now, we should still provide a third component for z, 0.0. To append a new BMFace to the BMesh's BMFaceSeq, we need a collection of BMVerts. The code snippet above shows only points, tri fan, quad fan and ngon options; more options are supported in the full Github repository. For face types which require edge midpoints to be calculated, an edge's origin and destination vertices are added, then the sum is divided by two to find the midpoint. Once we've finished creating faces in the nested for loops, we merge duplicate vertices if requested. Next, we calculate the width and height of the grid overall. This allows us to rescale coordinates in world space to texture coordinates in [0.0, 1.0]. We create a 4x4 rotation matrix with Matrix.Rotation using the z axis, (0.0, 0.0, 1.0); since a 3x3 matrix is sufficient to hold a rotation, we specify that the desired size is 4. We ensure the mesh's normals are updated, then return a dictionary with the faces. Aspect ratio of UV coordinates. Because the grid is wider than it is tall, UV coordinates (left) will be squished horizontally; this results in a square image looking stretched horizontally on the mesh (right). If desired, this can be corrected with the grid's aspect ratio, width divided by height. A sketch of the ring loop construction follows.
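A condensed sketch of the nested ring loops might look like this. It is not the add-on's actual method; the names and the inlined corner table are illustrative, and duplicate border vertices are left for a later bmesh.ops.remove_doubles pass if merging is requested. The axial (i, j) coordinates follow the Red Blob Games convention.

import bmesh

SQRT_3 = 1.7320508

def hex_grid_ngons(bm, rings=4, cell_radius=0.5):
    """Append one ngon per hexagon, arranged in concentric rings.
    Returns 2D lists of BMVerts and BMFaces, one inner list per cell."""
    verts = []
    faces = []
    i_max = rings - 1
    for i in range(-i_max, i_max + 1):
        # Clamp j so that |i|, |j| and |i + j| never exceed i_max.
        j_lo = max(-i_max, -i - i_max)
        j_hi = min(i_max, -i + i_max)
        for j in range(j_lo, j_hi + 1):
            # Center of this cell for an upright (pointy-top) hexagon.
            cx = SQRT_3 * cell_radius * (i + j * 0.5)
            cy = 1.5 * cell_radius * j
            r = cell_radius
            rh = r * 0.5
            rv = r * 0.8660254
            corners = [
                (cx,      cy + r,  0.0),  # 12 o'clock
                (cx - rv, cy + rh, 0.0),
                (cx - rv, cy - rh, 0.0),
                (cx,      cy - r,  0.0),
                (cx + rv, cy - rh, 0.0),
                (cx + rv, cy + rh, 0.0),
            ]
            hex_vs = [bm.verts.new(co) for co in corners]
            verts.append(hex_vs)
            faces.append([bm.faces.new(hex_vs)])
    return verts, faces

bm = bmesh.new()
hex_grid_ngons(bm)     # 37 ngons for rings=4, matching 1 + 3 * 4 * 3
bm.normal_update()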
Extrusion
We could stop there if that's all we needed. However, manually configuring a SolidifyModifier to vary the extrusion with noise or according to a shaping function can take a few steps in the GUI. Extrusion based on a linear (left, blue), spherical (center, red) and conic gradient (right, green). For that reason, we'll make a separate extrude method which accepts the list of faces generated by the prior method. The method will return True if all inputs were valid and the extrusion has occurred; False if not. We tackle the simplest case first: where there is no margin between hex cells and the user has specified that overlapping vertices be merged together. In this case, we allow only uniform extrusion. The bmesh.ops.extrude_face_region method accomplishes this task. When the use_keep_orig flag is set to True, the original faces are retained. These will become the bottom faces of the hexagonal prism. This method does not translate the new geometry it creates, and the information it returns needs to be filtered. Because bmesh.ops.translate accepts vertices, we append all elements from the extrusion results to a list if they match the type BMVert. The snippet above could be briefer and clearer were Vector math used. For example, dot_ab could be assigned the result of a.dot(b). However, as a precaution, we treat input arguments as if they were tuples. The elements of a vector can be accessed with either a subscript or the axis; for example, a[0] and a.x should return the same number. The calculations we need to make depend on the requested shaping function. Linear gradients depend primarily on the scalar projection of one vector, a, onto another, b. In this case, a is the difference between the line's origin and the hexagon's center; b is the difference between destination and origin. Spherical gradients depend on finding the ratio of the hexagon center's distance from an origin to a maximum distance. Conic gradients depend on finding the azimuth, or heading, of a vector, then converting the angle to a factor. Once the above factor is found, we introduce some noise (i.e., smooth randomness). The mathutils.noise submodule offers a variety of noise methods, including those meant for terrain; however, because the hexagon grid is of such a low resolution, we chose a simple one. The consequence of extrusion on the grid's face indices can be seen below: Face indices after extrusion. If the 2D grid originally contained 37 ngons, then the first hexagonal prism, left center, will have face 0 as its bottom and face 37 as its top. Each subsequent top face will be an increment of 7 from the previous (37, 44, 51, and so on). The quadrilaterals on the sides will use the intermediate indices, though not in a clockwise or counter-clockwise order. In the case of the first prism, we see indices 38, 40, 43, 39, 42, 41 CCW from the 12 o'clock vertex. We have the option to sort vertex and face sequences if we wish. The grid's UV texturing is also impacted. The side panels will look streaked: UV texturing after extrusion. The extrusion copies the bottom face's UVs to the top. This tutorial will leave this as is. Those who wish to change it via script will find that newly extruded faces can be found via the filtration-by-type approach used above. The UVs can also be adjusted manually by entering Edit mode, selecting a side face, going to Select > Select Similar, then selecting an option from UV. A sketch of the three gradient factors follows.
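Here is a sketch of those three factor functions, written against plain tuples as the text suggests. The function names are hypothetical.

import math

def clamp01(t):
    return max(0.0, min(1.0, t))

def linear_factor(point, origin, dest):
    """Scalar projection of (point - origin) onto (dest - origin),
    clamped to [0.0, 1.0]."""
    ax = point[0] - origin[0]
    ay = point[1] - origin[1]
    bx = dest[0] - origin[0]
    by = dest[1] - origin[1]
    dot_bb = bx * bx + by * by
    if dot_bb == 0.0:
        return 0.0
    return clamp01((ax * bx + ay * by) / dot_bb)

def spherical_factor(point, origin, max_dist):
    """Ratio of the point's distance from origin to a maximum distance."""
    if max_dist <= 0.0:
        return 0.0
    d = math.hypot(point[0] - origin[0], point[1] - origin[1])
    return clamp01(d / max_dist)

def conic_factor(point, origin):
    """Azimuth of the point about the origin converted to [0.0, 1.0]."""
    angle = math.atan2(point[1] - origin[1], point[0] - origin[0])
    return (angle + math.pi) / (2.0 * math.pi)

The extrusion height for a cell would then be something like lower + (upper - lower) * ((1.0 - influence) * fac + influence * noise_fac), mixing the terrain factor with a noise factor as described above.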
Coding The Add-On
Last, we turn to the code which will provide a user-friendly interface for the business logic created above. Much of what is covered here can also be learned from the manual or the video series Scripting for Artists. First, we'll add the information to display when the add-on is searched for in the Edit > Preferences > Add-ons menu. The Create Hex Grid add-on as it appears in preferences. This is done by adding a dictionary called bl_info after the imports and before the class declaration: "version" signals the current version of the add-on; it can be an indirect way of communicating to the user how developed the add-on is. For example, version (1, 0) or 1.0 would be the add-on's initial release to the public, whereas (0, 13) or 0.13 would signal that the add-on is still early in development. Guidelines on semantic versioning, such as this one, may help with the decision of when to change the add-on version. The "blender" field indicates the version of Blender the add-on is intended to work on. To find the current version of Blender in use via Python, check bpy.app.version. We next add some apparatus related to registering and unregistering our add-on. The register and unregister methods are called when we tick the checkbox next to an add-on in the Preferences menu to enable and disable it. The poll method indicates the proper panel wherein the add-on will operate; this conditions when the add-on appears in a search. To help select which icon we associate with the add-on, we can enable the Icon Viewer add-on in preferences, select the icon visually, then copy the string used to identify the icon. Blender Icon Viewer Add-On. Next, we add properties to the HexGridMaker class. These properties will dictate what inputs appear in the redo/undo menu, the tooltips that will appear on mouse hover, and the upper and lower bounds that govern valid input ranges. Properties are assigned with a colon, :, not an equals sign, =; using the latter will print a warning in the terminal. The snippet below doesn't show all the necessary inputs for a hex grid, just enough for illustration. Numeric properties contain a min, max pair and a soft_min, soft_max pair. The soft bounds limit the value when the mouse is dragged horizontally over the field; the hard bounds also limit keyboard input. Real numbers, i.e., FloatProperty instances, include step and precision. step specifies, as a percent, how much to increment and decrement a value when the < > buttons on the end of an input field are pressed; this takes an integer value which is divided by 100. precision specifies how many places right of the decimal to display. Generally speaking, for small values, the defaults are not fine enough and need to be increased. For FloatVectorProperty instances, make sure that the dimension specified by size matches the default collection's length; for example, (0.0, 0.0) should match 2; (0.0, 0.0, 0.0) should match 3. EnumProperty definitions can get fairly involved because items is a list of tuples. Within each tuple, the first string represents how the enumeration constant is represented in code. The second string represents how the constant is displayed in the GUI. Lastly, we hook our business logic together in the execute method. The second input of the execute method is a context; we use that over bpy.context. The main logic here is to create mesh data, unload the BMesh into the mesh data, assign the data to an object, and link the object to a collection in the scene. A condensed sketch of this apparatus follows.
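Pulling these pieces together, a condensed skeleton of the add-on might read as follows. This is a sketch, not the tutorial's full source: the bl_idname, the icon string, the property set and the reuse of the hex_grid_ngons helper from the earlier sketch are all illustrative.

import bpy
import bmesh

bl_info = {
    "name": "Create Hex Grid",
    "author": "Your Name",
    "version": (0, 1),
    "blender": (2, 91, 0),
    "category": "Add Mesh",
    "description": "Adds a grid of hexagons arranged in concentric rings."
}

class HexGridMaker(bpy.types.Operator):
    """Adds a grid of hexagons arranged in concentric rings."""
    bl_idname = "mesh.primitive_hexgrid_add"   # illustrative id
    bl_label = "Hex Grid"
    bl_options = {"REGISTER", "UNDO"}

    # Properties are assigned with a colon, not an equals sign.
    rings: bpy.props.IntProperty(
        name="Rings",
        description="Number of rings, counting the central hexagon",
        min=1, soft_max=32, default=4)

    cell_radius: bpy.props.FloatProperty(
        name="Cell Radius",
        description="Radius of each hexagon",
        min=0.0001, soft_max=100.0, default=0.5,
        step=1, precision=4)

    face_type: bpy.props.EnumProperty(
        items=[
            ("NGON", "NGon", "Fill each hexagon with an ngon"),
            ("TRI", "Tri Fan", "Fill with a triangle fan"),
            ("QUAD", "Quad Fan", "Fill with three quadrilaterals")],
        name="Face Type",
        default="NGON")

    @classmethod
    def poll(cls, context):
        # Only offer the operator inside the 3D viewport.
        return context.area is not None and context.area.type == "VIEW_3D"

    def execute(self, context):
        bm = bmesh.new()
        # Reuses the hex_grid_ngons sketch from earlier in this story.
        hex_grid_ngons(bm, rings=self.rings, cell_radius=self.cell_radius)
        mesh = bpy.data.meshes.new("Hex.Grid")
        bm.to_mesh(mesh)
        bm.free()
        obj = bpy.data.objects.new(mesh.name, mesh)
        context.collection.objects.link(obj)
        return {"FINISHED"}

def menu_func(self, context):
    self.layout.operator(HexGridMaker.bl_idname, icon="MESH_GRID")

def register():
    bpy.utils.register_class(HexGridMaker)
    bpy.types.VIEW3D_MT_mesh_add.append(menu_func)

def unregister():
    bpy.types.VIEW3D_MT_mesh_add.remove(menu_func)
    bpy.utils.unregister_class(HexGridMaker)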
Conclusion
There are any number of ways the add-on created in this tutorial could be improved. For example, we could offer more ways to arrange hexagons in a grid; a toggle to create separate objects, each with its own hexagon mesh, instead of one mesh; greater control over UV coordinates; more shaping functions; more refined control over noise; and so on. The fine line with add-ons is to create just enough convenience without adding too many features. Feature bloat makes the menu difficult to read and, at extremes, makes the user feel like they need to read documentation just to understand how the add-on is used.
https://behreajj.medium.com/scripting-a-hexagon-grid-add-on-for-blender-2-91-bbcda88850c7
['Jeremy Behreandt']
2020-12-02 15:43:57.408000+00:00
['Hexagons', 'Python', 'Blender', 'Geometry', 'Creative Coding']
Decision Trees — Introduction (ID3)
Have you ever wondered how learning from past experiences might work? You meet different types of persons throughout your life, and after some experience you get an idea of what kind of person you like, right? I mean, after several experiences with many humans, when you meet a new human, most of the time you get the idea of whether you like them or not. How do you do that? With 'Experience'! right? But you don't keep all your years of experience at the top of your brain at all times; rather, it feels as if some simple and quick decision mechanism is working inside your brain. So, rather than going deeper into the biology of the brain, let's try to build a similar mechanism at a simpler level. Let's say after your encounters with several people, you don't want vampires to be your friend in future :P So you made a list of several people you met, their characteristics, and whether they turned out to be a vampire or not. ("?" in the shadow attribute is because you met those people only in dark conditions, so you couldn't verify if they cast a shadow or not.) After observing this data, we may come up with a naive model such as this tree. Since with the help of that tree we can make a decision, we call it a "Decision Tree". This tree must satisfy all data in the given dataset, and we hope that it will also satisfy future inputs. But how could we come up with such a tree? The tree given above is made just by some random observations on the data, namely the following:
All people with pale complexion are not vampires.
All people who have a ruddy complexion and eat garlic are not vampires, and if they don't eat garlic then they are a vampire.
All people who have an average complexion and don't cast a shadow, or whose shadow is unknown, are a vampire; or else, if they cast a shadow, then they are not a vampire.
But is that the right way to build a decision tree? Is that tree the simplest tree we can get from the given dataset? Such random analysis on a large dataset will not be feasible. We need some systematic approach to attack this problem. Let's try to attack this with a greedy approach… So first, we look at the dataset and decide which attribute we should pick for the root node of the tree… This is a Boolean classification, so at the end of the decision tree we would have 2 possible results (either they are a vampire or not), so each example input will be classified as true (a positive example) or false (a negative example). Here 'P' refers to positive, which means a person is a vampire, and 'N' refers to negative, which means the person is not a vampire. We want the attribute which divides more data into homogeneous sets, meaning sets where only P or only N exists, because if we have that, we can definitely answer whether someone is a vampire or not; thus those will be leaf nodes of the tree. Check each attribute, and see which one has the highest number of elements in homogeneous sets. Here we find that the 'Shadow' attribute has the highest count of elements in a homogeneous set, so we choose this attribute. So till now, we got this much of the tree… For the shadow attribute "yes" and "no", we can decide if a person is a vampire or not, but in the case of "?" we don't know; we need to decide which attribute divides the data well when shadow = '?'. So, let's analyze other attributes while the shadow is unknown… Here we find that the "Garlic?" attribute divides the maximum number of elements, in fact all elements, into homogeneous sets.
So, our tree now looks like this. This tree looks simpler than the one we created by picking random attributes, so we observe that the greedy approach is helping us to get better results. But is that the right way to do so? No, because if the dataset is large, we may not end up with attributes dividing data into homogeneous sets; we may find that for all attributes the number of elements in homogeneous sets is 0. How should we proceed then? So now let's dive into the ID3 algorithm for generating decision trees, which uses the notion of information gain, which is defined in terms of entropy, the fundamental quantity in information theory. Imagine these 2 divisions of some attribute… We observe that the one on the left has an equal number of Ps and Ns, so it doesn't give us any hint about the decision, but the one on the right has more Ps than Ns, so it may direct us somewhat towards P; so of these 2 we might prefer the right one. So, instead of scoring them 0 right away, let's go another way. Let's say the one where Ps and Ns are equal in number has the highest entropy (1), and the one where there are only Ps or only Ns has the lowest entropy (0). We can have something like this, a P/(P+N) vs Entropy graph. So, when P=N, thus P/(P+N) = 0.5, then Entropy = 1; if P=k (some integer) & N=0, then Entropy = 0. That feels like a pretty appropriate graph to achieve what we want, so is there some mathematical way to get this graph… Luckily for us, this curve can be achieved by the following equation:
y = -x * log2(x) - (1 - x) * log2(1 - x)
which can be written in P/(P+N) and Entropy form, by replacing x = P/(P+N) and y = Entropy:
Entropy = -(P/(P+N)) * log2(P/(P+N)) - (N/(P+N)) * log2(N/(P+N))
where P and N are the counts of positive and negative examples of the attribute for which we are finding the entropy. We want to find the information gain from the attribute, which is defined as follows (IG, the information gain from some attribute A, is the expected reduction in entropy):
IG(Attribute) = Entropy of the set before the split - Weighted average of the Entropy of each child set
For example: (Example calculation of IG.) Since now you have the idea about Entropy and Information Gain, let's build our decision tree again from scratch with this new approach! We observe here that we get maximum Information Gain from the shadow attribute. Choosing this as our root node, we need to decide another attribute for Shadow = '?'. We get maximum Information Gain from Garlic, so our tree will look like this. This is exactly the same as the previous approach, because luckily at each step we were able to find some attribute dividing data into homogeneous sets, but the approach with Information Gain is more robust and can be applied to make a decision tree from a large dataset. Reference: Identification Trees | MIT OCW
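For readers who prefer code, here is a minimal sketch of these two quantities in Python. The example counts at the bottom are made up for illustration and are not taken from the vampire table.

import math

def entropy(p, n):
    """Binary entropy of a set holding p positive and n negative examples."""
    if p == 0 or n == 0:
        return 0.0  # a homogeneous set carries no uncertainty
    x = p / (p + n)
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def information_gain(parent, children):
    """Expected reduction in entropy after splitting on an attribute.
    parent is a (p, n) pair; children is a list of (p, n) pairs that
    partition the parent set."""
    total = sum(p + n for p, n in children)
    remainder = sum(((p + n) / total) * entropy(p, n) for p, n in children)
    return entropy(*parent) - remainder

# Hypothetical counts for an attribute with three values (e.g. yes/no/?):
parent = (5, 5)
children = [(0, 3), (2, 0), (3, 2)]
print(information_gain(parent, children))  # ~0.51 bits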
https://towardsdatascience.com/decision-trees-introduction-id3-8447fd5213e9
['Siddharth Maurya']
2019-11-04 16:53:44.449000+00:00
['Decision Tree', 'Machine Learning', 'Data Science', 'Artificial Intelligence']
The Small Stuff You Overreact To Is Really The Big Stuff You Haven’t Dealt With Yet
If you're someone who sweats the small stuff in life — the traffic, the parking ticket, the snide comment, and all those other inevitable but maddening things — you are not alone. We are told all the time to remember the big picture. We probably won't think about the traffic and parking ticket and snide comment on our death beds, and we probably won't even remember them once we've finally arrived at our destination and paid the fee and found something else to occupy our attention. But the obvious fact that you shouldn't let your days be overrun by an overreaction to what is objectively not really a big deal only compounds the feeling of frustration. If you are someone who sweats the small stuff, you don't do it because you're petty. You don't do it because you're not intelligent enough to know what does or doesn't matter. You don't do it because you are lacking values, or perspectives, or principles. You react the way you do because the small stuff is really the big stuff that we don't know how to reconcile. The traffic compounds the feeling of already being so out of control with your schedule, and as though the world is both constantly demanding something from you and likewise putting obstacle after obstacle in your way. Those lost 20 minutes are not just 20 minutes, they're a representation of feeling like you're constantly falling behind. And the parking ticket isn't just a parking ticket, it's yet another unnecessary expense to add to the list, to drain from the account because yet again you weren't mindful about some small detail and you let yet another thing slip through the cracks and yet another unnecessary and meaningless bill must be paid. You're not upset about the $20, you're upset about feeling your finances slowly turn out of your control, despite your absolute best intentions. And the snide comment isn't just a snide comment, it's a representation of the fact that no matter what you do or how hard you work or how far you get, there will always be detractors seemingly intent on seeing you in the worst way possible, offering no grace, pointing out what you cannot fix or cannot handle or really do not want to be. The comment itself isn't what makes for the emotional reaction, it's the reminder that this is an unsavory reality of living in the world and an inevitable reality of putting anything of meaning or significance or interest into it. Maybe this one inconvenience is what pushes your internal temperature one more degree upwards and now it's finally boiling. You probably don't have to psychoanalyze every minor but adverse emotional experience you have in your life. But if you're someone who has to constantly remind themselves to not "sweat the small stuff," it's probably because you are constantly sweating the small stuff, and before you spend any more of your life worrying about what's uncontrollable and ultimately unimportant, you should consider the well of emotion just beneath the micro-trigger, and the ways in which it might be trying to tell you how to take care of yourself a little better. When we have a disproportionate reaction to something on the surface, it's often because there's something deeper just beneath. These experiences in our lives often function in metaphors, or the way dreams do — it's not about what literally happened but the emotion it provoked, and the assumption we then made about ourselves and our lives and what might happen in the future. You might not actually be a monster lacking self-control.
You might just have some deeper feelings that you don't understand. But the silver lining is that our emotions don't exist to punish or hurt us; they are directive. They are trying to show us the way back to peace. Do you remember a time when you were younger when you feared something like your mother leaving you for preschool because you thought she'd never come back? Or you would absolutely cringe at the idea of sitting alone at the lunch table? Or the concept of not being invited to the party was palpitation-inducing? Do you know why you don't worry about those things anymore? You learned she would come back. You realized that even if you did have to sit alone sometimes, it doesn't make you a social recluse, and even if it does, you're still going to be okay. You found enough dignity to realize that you don't want to attend a party that you're not wanted at anyway, and you're better off for not having peaked too soon. These were all small things that became big things because of a lack of understanding. In that sense, we all still have some growing up to do. After you calm down, and once the heat of the moment has passed, give yourself a moment to consider where that rage and fear and pain are really coming from. If it truly is just the traffic and parking ticket and snide comment, then great, because it will pass. And if it is something deeper, that's fine, because it will too. If you enjoyed this piece, check out my new book on self-sabotage, or book a 1:1 mentoring session with me.
https://medium.com/age-of-awareness/read-this-if-youre-someone-who-always-sweats-the-small-stuff-but-really-doesn-t-want-to-a978c158eba4
['Brianna Wiest']
2020-12-11 13:53:55.047000+00:00
['Personal Development', 'Philosophy', 'Psychology', 'Self Improvement', 'Life Lessons']
The Underground Caribbean Bitcoin World: Interview With Shadow Man
What area of the world are you from?
I am from an island in the Caribbean.
Why do you wish to stay anonymous?
I value my privacy; I do business with traders in Venezuela and that's an automatic red flag to regulators in most countries.
Why are you talking to me?
There is a lot of misinformation about Bitcoin, and so if the information that I'm sharing here can be of educational value, I'm all for it. Also, I would like to debunk the 'nocoiner' claim that Bitcoin can't be used as real money, by providing examples of how I use Bitcoin in my daily life.
What is your technical background, education or formal training?
I wouldn't say I have a technical background, but I can say that I am more tech savvy than most normies (thank God; good OPSEC is tough). I attended a university in the US, was studying business, but never graduated.
When did you become interested in Bitcoin and crypto?
I first heard about Bitcoin in 2011, and like most people it was a friend who was showing me this nifty way to buy drugs online. I found it cool but didn't fully get it, and so I dismissed it. It resurfaced in mid-2013 for me, and I've been in love ever since.
When did you buy your first Bitcoin?
2013, on Mt. Gox.
What was it about Bitcoin that pulled you in?
Initially it was the price action. But then I started binging on Andreas Antonopoulos videos on YouTube. He used to be on the Bitcoin group hangouts in the early episodes, and he had mentioned Austrian Economics. And so that led me to read literary works by Mises, Rothbard, etc. I've had a disdain for the state apparatus for a long time, but never quite realized how economics is at the heart of it all. Also, the cypherpunk writings at the Nakamoto Institute were very informative and extremely fun to read.
What is your biggest Bitcoin fail?
I'd say my biggest crypto fail was a recent fuck-up I had with one of my Monero wallets (mymonero.com). I believe I fell victim to a phishing redirect attack. Lost quite a big chunk of Monero, but that forced me to learn how to generate secure paper wallets. You can never be too careful in this space; only the paranoid survive.
What is your single favorite Bitcoin moment?
The lightning network being deployed on mainnet. According to many people in the space, this is a game changer that will enable things we can't even imagine yet.
Who are a few Bitcoiners who have influenced you the most?
In the early days it was Andreas, and MadBitcoins' The Bitcoin Group panel. Also, one particular talk I saw on YouTube by Erik Voorhees titled "The role of Bitcoin as money." I kinda lost some respect for Voorhees after his short-sighted public support for Segwit2x, but I'm still grateful for his early evangelism. Nowadays some of the Bitcoiners I look up to are, in no particular order: Jimmy Song, Nick Szabo, Adam Back, Eric Lombrozo, Andreas, the meat eater maximalists (Rochard, Saifedean, Goldstein), Francis Pouliot, Kyle Torpey, Andrew DeSantis, Vortex, etc.
What do you see in Bitcoin that others might not, living in the Caribbean?
The ability to help illegal immigrants send money home to their families. Especially to countries in great turmoil like Venezuela.
What is the Bitcoin world like in your part of the Caribbean?
Very niche. There is a regular meetup here, but nothing anywhere near mainstream adoption.
How many people do you know who hold Bitcoin on your island or close to you?
I got some of my family members and friends into it. Also, the few people that attend the Bitcoin meetup here.
How do most people buy their Bitcoin where you live?
Through online exchanges, I believe.
Is there any mining happening on your island?
There are rumors but nothing I can confirm.
Being in an old Caribbean city talking Bitcoin with Shadow Man made me realize how small the Bitcoin world really is. There are many good people working every day to help build the economy.
What is the regulation on Bitcoin like on your island?
While there has been a warning issued by central banks about scams like Onecoin and Bitconnect, there has been no official statement released mentioning Bitcoin.
What do you think about countries cracking down on Bitcoin?
These crackdowns are very short-sighted and are based in irrational fear. The innovation and economic growth will only move elsewhere, where regulation is more conducive to technological progress. We saw a clear example of this in how BTC businesses in NY reacted to the BitLicense. In the book 'The Sovereign Individual,' the authors predict that countries that want economic growth will start treating citizens more like customers rather than cattle, so as to attract entrepreneurs. The book was written in 1999 and is extremely prescient; they even predict the rise of Bitcoin, or as they call it, 'cybercash.' I can't recommend it enough.
Explain a little bit about how your business started?
I came to a realization a couple of years ago that LocalBitcoins could effectively be used to facilitate remittances.
How has it grown over the last few years?
Before, I used to deal with Venezuelan immigrants who wanted to send money home. They would give me cash and I would have the money sent to their local Venezuelan bank account using LocalBitcoins. I would rather have my customers receive USD, but unfortunately the only way to get money to their bank accounts would be in the local hyperinflationary currency, the Venezuelan Bolívar. I would do trades as small as the equivalent of $20 USD. Now I deal with retail store owners in town, who have access to a Venezuelan bank account, so they buy larger quantities of bolívares from me, and then in turn sell those bolívares to the Venezuelan immigrants who live on my island. This might all sound confusing but it's basically arbitrage trading through the informal remittance market.
What type of services do you currently offer?
I'm actually doing less of that and getting more into enterprise level crypto-mining.
Have you heard of people getting arrested?
Not here on my island, no. But I hear about miners getting busted in Venezuela sometimes.
Do you do most of your business on Localbitcoins.com or p2p?
I use LocalBitcoins a lot, but I also deal with certain traders directly, so p2p as well. These are traders that I initially started doing business with on LocalBitcoins, but then developed a comfortable level of trust to the point where the valuable escrow feature of LocalBitcoins was no longer needed.
Explain what LocalBitcoins is and how it works?
Localbitcoins.com is an online, peer-to-peer Bitcoin exchange that allows you to buy and sell Bitcoin with individuals all over the world. The killer feature is the escrow service provided by the platform.
What are some of the positives and negatives about using LocalBitcoins versus straight p2p?
Pros:
1. Awesome escrow feature
2. Cool way to connect with traders all over the world.
3. Good track record in terms of security.
Cons:
1. LocalBitcoins takes a cut of every trade, 1% I believe.
2. Scammers.
Some scammers build up their reputation by doing fair trades with good customer feedback and then use their reputation as leverage to try to convince noobs to trade outside of Localbitcoins.com.
What tips do you have for people who are interested in using LocalBitcoins?
Always read the extended feedback of a user before initiating a trade.
I have never used LocalBitcoins personally, but I have many friends who have.
What do you think about the new change with LocalBitcoins requiring AML/KYC?
The change has not really affected my usage. These kinds of changes are, however, likely to push me and many other traders to look to decentralized alternatives such as Bisq.
Will you continue to use LocalBitcoins?
For the foreseeable future, yes.
What other sites are good options to use besides LocalBitcoins.com for better privacy?
LocalBitcoins is the only one I can think of. There are others like Paxful, but I've never used them.
What are a few easy red flags to spot a scammer on these sites?
Next to a LocalBitcoins username, you can see the number of trades as well as a percentage. The percentage refers to feedback score. The obvious red flag is if this percentage score is not at 100%. That's not to say those with a 100% score are to be blindly trusted. The user score system on LBC is a bit flawed imo and can be better. The main point here is don't fall for tricks by traders to lure you into doing business outside of the LBC platform and ALWAYS CHECK EXTENDED FEEDBACK BEFORE DECIDING TO TRADE WITH ANYONE!
What country do you do the most arbitrage with?
Venezuela and India.
What is it like dealing with the Bolívar and Venezuelan banks?
It feels fucked up dealing with a currency that gets more and more worthless by the hour, literally. It's one of the reasons I don't trade much with Venezuela anymore.
Getting scammed can happen to anyone. You must pay attention at all times. Never believe you are immune.
Tell us a horror story you have encountered?
I got scammed in the early days by a Venezuelan trader. We did a few trades together on LocalBitcoins and then he convinced me to do a trade outside of LocalBitcoins. As soon as I sent him the BTC, he disappeared. The only thing I could do was leave negative feedback on his LocalBitcoins profile.
What's it like dealing with India versus Venezuela?
It's pretty much the same. The only main difference is that the Venezuelan currency is hyperinflationary, which means the value is in constant freefall. The Indian Rupee is relatively more stable.
How do you feel about decentralized exchanges like Hodl Hodl and Bisq?
I am very excited about the development of these kinds of exchanges. I think the model is already workable; there just needs to be more liquidity. Something that spurs a network effect, which is inevitable imo.
What security tips do you suggest for storing your crypto?
Get a Trezor.
What are your thoughts on alt-coins?
I hold some to speculate, but my bag is mainly BTC.
https://medium.com/hackernoon/the-underground-caribbean-bitcoin-world-interview-with-shadow-man-f0702143be08
['Pirate Beachbum']
2018-05-17 16:24:45.301000+00:00
['Caribbean', 'Crypto', 'Caribbean Bitcoin World', 'Entrepreneurship', 'Bitcoin']
Zen 224
Susan Brearley is a brilliant strategist and entrepreneur, a published book author, writer, seasoned editor, essayist, occasional comedy writer, and an accidental poet. She’s the happy owner of between 1 and 20 top writer badges on Medium, on any given day. She is Chef, Elder and Program Director at Elf Works Lane Retreat Center, where she coaches and mentors people, helping them identify and create the life they were born to live, on the planet we all want to live on. Next Garden of Neuro Workshop is scheduled for October 9–12 in the Adirondacks, and will accommodate a maximum of 7 people. Send a message for more information. In the meantime, join one of our Mastermind Book Club cohorts. She is currently based in the mid-Hudson Valley, New York.
https://medium.com/house-of-haiku/zen-224-714381e8b70d
['Susan Brearley']
2020-10-03 00:23:01.635000+00:00
['Poetry', 'Travel', 'Environment', 'Mindfulness', 'Inspiration']
The City
Inception
What are dreams? What role do they play in our lives? When you say you had a dream, what does that mean to you? Do you sit back and think about this dream, or do you assume it's just one of the tricks your unconscious mind does to clear the unwanted, the unfathomable? These are philosophical questions, and the question of what dreams are is quite interesting. In my early ages as a young boy, I had a dream. Not once, not twice, but many times. It was the dream of flying. Well, as many of you would think, such an experience is a beautiful and fascinating one in its own sense; why not visit that world again? That's exactly how I felt, and to me this dream had no meaning at all other than just a dream. As a boy I could not question such experiences like anyone with understanding would, assuming they know what understanding means. Later on, as I gained a little bit of understanding, I began to be self-aware — so I think. I started to see the world around me in this timeline, and the world to come, as not much different from the world in my unconscious mind. There have been questions on the internet about whether human beings would be able to create such a thing as an intelligent machine, and flying vehicles; these are the topics these days. Well, I'm not educated in this area but it's okay to speculate. My speculation is that there will be such a being, or in a more general term, world, in the coming times. It's an interesting and beautiful world to think about if you ask me.
West world
What does intelligence in these contexts mean? Well, let's do a thought experiment. The Universe, as scientists have discovered, created what we now know as Evolution. Evolution created one of the most complex beings: us, human beings. If you think about it, it is a process. Now a question would arise: what then have humans created? There is technology. But is that all there is? Well, think about it. One of the most, if not the most, interesting topics in the world of technology is Artificial Intelligence (AI): the idea that we as humans can create something or someone that thinks, acts and has the same experiences as we do. One would ask, will these beings have an experience like dreaming, as my young soul did, or as every human's soul does? Assuming there is such a thing as a soul — I think there is. The answer is, we'll see. What about understanding? Well, as Roger Penrose puts it in the following video, intelligence is something that needs understanding. Does it mean then that until we build a machine that has understanding, we cannot think of such an entity as intelligent? I will leave that to you, the creators of AI.
Final Words
This is my dream and a city I want to live in; it's one we all want to be in. But how do we get there?
https://medium.com/mindpload/the-city-b808a90ab691
['Brian Mboya']
2020-04-08 06:01:09.606000+00:00
['Ideas', 'Dreams', 'Minds', 'Artificial Intelligence']
Unbranded: The advent of submissive advertising
This seems to be the overriding sentiment behind the latest trend in content marketing: the lesser-spotted unbranded ad. Long gone are the days when brands unabashedly touted their names. They've been relegated to the bottom of the commercial chain, the very last rung of the self-promotional ladder. And there they skulk like hunters in the forest, hoping to lure in unsuspecting consumers with just the right grade of meaty 'content'. OK, I'm exaggerating. Unbranded advertising, as the latest spawn of the digital marketing age, is promising. It is rich, shareable content that's been cleverly designed to engage, inform and entertain, all without making any overt references to the brand or product it represents. Some standout examples are L'Oreal's content beauty hub Fab-Beauty.com, aimed at ardent beauty enthusiasts, Eurostar's Somers Town film, which finely spliced art with the travel-bug, and Johnson & Johnson's BabyCenter.com, providing information and resources for parents. All successful unbranded content is centred around an idea, rather than a product, and all strives to add value to the consumer. It aims to build engagement, gain traction, and in the long run, promote brand recognition and/or reinforce brand loyalty. But the 'long run' is the operative term there, as unbranded content is a slow burner; not ideal if you are seeking a serious and sudden spike in brand equity. The purpose of unbranded advertising appears to be less about reaching customers, and more about understanding them. L'Oreal's Fab-Beauty has identified valuable insights through analysing the behaviour of its website visitors, including how they self-describe and what content they care about. This is where the potential of unbranded content really shines. It provides prime fodder for long-term brand strategy, and if leveraged correctly, can help marketers really hit their mark. And the raw appeal of the unbranded campaign is undeniable; it relies on quality, not quantity or reputation. Crucially, it builds trust, the holy grail of brand loyalty. And possibly best of all, the brand needn't suffer the fallout of a conventional campaign that fails to deliver, as the unbranded ad comes with little risk, and less accountability. But therein, perhaps, lies the problem. In their efforts to placate a mass market and appear authentic, are brands undermining their integrity and losing touch with their true proposition?
You must remember this
Advertising guru David Trott claimed that only 4 per cent of adverts are positively remembered, while 7 per cent are negatively remembered and a staggering 89 per cent are entirely forgotten. He argues that it is more important to be part of the 11 per cent, whether positive or negative, than to fall into the vast abyss of the forgotten ads. He, like many of the more 'seasoned' creative types, believes it is better to be daring and bold than to play it safe and sink into obscurity. In this respect, unbranded advertising is the very definition of playing it safe. It is unaccountable, awkwardly apologetic and essentially grasps at straws for recognition. Like so many digital epidemics, it relies on the power of influencers, and the hope of a namedrop. It lacks balls. We all know where this need to placate the consumer has come from: the sinister malevolent force of cynicism and shrewdness that is the dreaded generation X/Y/Z delta 1 — the digital youth. They're widely considered all too savvy for the likes of traditional marketing shenanigans.
So utterly impervious to the charms of a well-placed tagline or shiny bit of art direction that advertising, in its purest form, has become original sin. But can we know this for sure? Do digital natives truly prefer the advertising wolf in the content sheep’s clothing to the more traditional method of simply trying to solve a simple need with a simple proposition? And is it possible that in pandering to the fickle whims of the social media age, advertising is losing its edge? Of course, unbranded and branded content needn’t be mutually exclusive. One way to utilise the power of authentic, engaging content is to simply keep the branding to a minimum, as deftly demonstrated by Australia’s audacious Metro ad. Here the content performs first, before the brand steps up and takes a bow. Another example of the light-branding technique hailed from boutique clothing store Wren, whose First Kiss video incited mass curiosity that resulted in both traffic and sales increases. Both examples effectively create intrigue and tap into the zeitgeist, while preserving brand integrity. Ultimately, there is a lot to be said for the benefits of tapping into the unbranded medium of influence, and with a well-executed strategy and some long-sighted vision it can definitely pay off. And yet something about it still strikes me as a little obsequious. Being unabashedly branded will always ring true; it’s more authentic and certainly braver. Sure, some people will hate it, but as Trott would say, at least they’ll take notice. By Suzy Kostadinov, Copywriter at Hugo & Cat
To find out more about brand experiences, drop us a line at [email protected]
https://medium.com/nowtrending/unbranded-the-advent-of-submissive-advertising-b18596bcb81f
['Hugo']
2017-06-07 10:38:43.611000+00:00
['Marketing', 'Unbranded', 'Submissive', 'Advertising']
This Pandemic Has Me Feeling Sentimental
Photo by Brian McGowan on Unsplash I was almost there, to that place. The place I thought I was going. Everything seemed to be falling into place. Everything was going well. All the signs were adding up. I was finally learning to flow with life. And then! Corona graced us with her presence. And everything changed. Suddenly I’m not working. I’m not making an income. The summer I planned to spend socializing in NY, doing anything I please, is dissipating as rapidly as toilet paper from shelves. All of the plans I was making for my so-called-life have suddenly come to a screeching halt. I thought about panicking, but I really just didn’t want to. This is just a personal choice, no judgment if your reaction was panic. Instead I leaned into the new space created from the decimation of my future plans, and from there I was able to let the reality of the situation settle in. What remains is a space filled with nothing but time, which, if I’m honest, I’ve secretly been wishing for. Time to do nothing, and time to do everything. Just time. I wanted it and now I have it. Were we all secretly asking for this? Humanity is having a very intense, shared moment. This shift in consciousness is global. The vast majority of us were given no other choice than to sit indoors with our thoughts. So I’ve been doing just that — thinking. I do that plenty anyway, but now I can take an even deeper dive into my psyche, which I’m learning is one of my favorite things to do. I’m growing quite fond of the time I share with myself in my head, a fondness that took years of work (fortunate work) to reach. It was not always so pleasant in my mind. Even still, it is not always this way. But all of the meditating, therapy, reading of the proper literature, and quitting drinking must’ve been preparing me for a time like this — a global pandemic. It is with great privilege I say this, but this pandemic is one of my favorite happenings of my lifetime. Don’t get me wrong, I am saddened by it, too. I understand there is immense suffering and loss surrounding me. Yet I somehow believe the inner-workings of this will lead to something greater. Seeds are being planted. Corrupt and fragile systems are being disrupted. People are getting sick and dying at an alarming rate (sadly). And none of it is in our control. Control was something I thought I had. And now it’s been ripped away from me, which I’m sure is a familiar feeling far and wide. Then again, maybe it was never ours to begin with. Who am I to say how billions of years of evolution should behave? I’m only 32 and am just now starting to clean up my own life. It’s not easy when you exist within systems that reward us for playing along. I’ll be the first to admit I’ve drunk the Kool-Aid (bought into bullshit) from time to time. But those days of living blindly and uninhibited are coming to an end. I get this whole life thing isn’t just about me. If only all of humanity could grasp that we are not the real rulers of this world. We are mere inhabitants. We are wise, productive, resourceful inhabitants, but we have an expiration just the same. We are not special. We are not above nature — we are nature. We’re merely animals who got cocky, which has brought forth immense beauty and terror. It is our blessing and our curse that we’ve accessed the ability to manipulate our environment the way we have. Our gift: creating language which allows us to express and share ourselves. Our curse: creating language which allows us to imprison and enslave one another.
Life does not come without consequence, for better or worse, in the truest sense of karma — our actions bring forth our reality. And now we must take accountability — or not. But let’s! We are quite capable of admitting when things aren’t working, if we just would. I never understood surrendering to something greater than yourself before this pandemic, but now it seems vital for me. And I don’t mean believing in God. It means believing this — this Universe/life/consciousness thing — isn’t here for just us (humans). If anything, our remains will be nothing more than a segment of a timeline in an evolution which has no guarantee of surviving. Especially not at this rate. Most of what we do is in the name of survival, yet we destroy our only habitat in the process. It’s fascinating. And it’s not all ill-intentioned either. It is just incredibly, incredibly complicated here on Earth. Or so we tell ourselves… I believe our approach to life has grown far too selfish, and if we are not shaken to our core, I do not see how we will ever wake from this dream. From this illusion that it was ever all about us — certainly at the individual level. Be angry, be sad, be confused during this time all you need. But there is hope in collectively losing the realities we thought to be true. There is promise in the shattering of failed systems and beliefs. There is an opportunity now to see that the ways in which we’ve been operating are unsustainable; they’re hardly even workable. We are being woken up to the fact that the Earth will go on fine without us — and this is a good thing. What if the lesson here is surrender? What if the lesson is that we don’t truly own anything? Certainly not the Earth, and certainly not other people. Everything in our world has been reduced to a value system dependent upon how much money it’s worth or how much money it can make you — and it’s ruined us. What is sacred? What is wild? What haven’t we claimed as “ours”? Perhaps we claim things in an attempt to say — here is my control! This is mine, this is how I have power and prove I am here. Well, I am here to say none of this is ours. We don’t get to keep it; even our lives are merely borrowed time in the simplest sense. And that doesn’t mean we shouldn’t fight like hell for them. It doesn’t mean injustice is acceptable, and we should all just roll over and die. It means exactly the opposite. That if this is it, we should do our best to treat our habitat kindly, and not just for ourselves, but for the species that are to come after us. Because it was never, ever, just about us! There is a force greater and vaster than our humanity. Some call it God. I call it the Universe. But there is no denying it brought us to life, and it can take it right back. It’s clear to me now, that place I thought I was going — the place where everything gets good and stays good — isn’t real. There is only now, there is the illusion of control we think we have, and there is death. And if we don’t begin to consider everyone at even the most basic, fundamental level, we all run the risk of losing. There is no winning without losing, they say. Is there a world where we all can win? That depends on how you define winning. If winning to you is only about you crossing the finish line, then perhaps that world doesn’t exist in your experience. But that story is old and tired. More and more of us are waking up to the reality that we need one another, and that anyone who arrived here deserves to be here. We can redefine winning.
Winning doesn’t have to mean everyone gets a gold medal, or that everyone will lose their identity. Winning doesn’t even have to mean everything is equal, but that everyone needs to be considered. A society that wins cultivates and maintains a system that cares not only for its people and their families — their species — but for the next species that doesn’t even exist yet. We needed every form of life before us to keep fighting for its life. We’d be nothing without every single happening that occurred. We are not the first, and we are not the last in the timeline of the Universe. Let’s please, please get present to the fact that humanity, although powerful, is far from perfect and has much work to do. And everyone’s hands are needed.*
* I do not expect my words to speak to every single human’s experience on this Earth. My intention is not to tell others how to live, but to remind myself how I want to live. Thank you for reading.
https://slamantha-morgan.medium.com/this-pandemic-has-me-feeling-sentimental-bcd81ff21cb8
['Samantha Morgan']
2020-04-06 17:27:30.807000+00:00
['Human Rights', 'Oneness', 'Humanity', 'Coronavirus', 'Hope']
Where Are All the Studios Going?
Where Are All the Studios Going? Super Jump Podcast: Season 3, Episode 25 Mitchell and Wyatt need to know: Where are all the studios going? It seems like every mid-size studio is either failing or is being absorbed into larger companies. Is there a future for game makers bigger than indies, but smaller than triple-A? All that and more on this week’s Super Jump Podcast! If you only want to hear this episode without dealing with a pesky podcatcher, that can be done here: If you’re like me, however, and you LOVE pesky podcatchers, you’re in luck! The Super Jump Podcast is now available on iTunes and any podcatcher that picks up iTunes podcasts. Just search for “Super Jump” or follow this link to our iTunes feed. If you use another kind of podcatcher and we aren’t yet on its directory, you can still use it to listen to us. All you have to do is manually enter in the following URL: Our theme and transition music is by Jamatar, who can be found here at his Bandcamp page. Finally, if you’d like to write in to an episode of the Super Jump Podcast, you can do so at [email protected]. You may be read on the show! In our third season, we’re exploring a wide variety of gaming topics on the Super Jump Podcast, with the fun, enthusiasm, and positivity that we apply to the magazine itself. We really hope you enjoy the show as we continue into our third season — please take the opportunity to rate us on iTunes too, as it will really give us a boost. Thanks for listening, and stay super!
https://medium.com/super-jump/where-are-all-the-studios-going-85378f2ef643
['Mitchell F Wolfe']
2019-09-06 08:15:33.369000+00:00
['Gaming', 'Game Development', 'Startup', 'Super Jump Podcast', 'Podcasts']
The True Cost of Inequality: Learning Lessons from the CoVid 19 Pandemic
The Covid 19 virus has just begun to reach us here in western Massachusetts, but we know it will be here shortly, and many steps have been taken in our area in anticipation. Many stores are closed. Government offices, movie theaters, gyms, music venues, and other places where people gather have been closed. Restaurants are only serving curbside takeout. Schools are closed and feeding programs are set up for children who depend on the schools for breakfast and lunch. As we take these steps it becomes evident that the true impact of this epidemic, especially as it expands over the next weeks and months, will be very different depending on your relative status when it began, and, as is so often true, those who will experience the greatest hardship and loss will be those already in the most vulnerable circumstances. Those who work low-paying jobs, with no health insurance or benefits, with little or no savings, with few material resources, will have a greater challenge in weathering this crisis than those who come into it with health insurance, with greater material resources and options, with the ability to live off savings, to obtain food and necessary supplies, who have a home in which they can ride out the storm without fear of eviction. What is certain is that poorer people will have worse infections and be more likely to die. We know that poorer people have poorer functioning lungs. Studies around the world have shown that richer people have healthier lungs, independent of whether they smoke or live in polluted areas. This relationship, poorer people doing more poorly, is called the social gradient in health. It is there for most health conditions. We ignore it at our peril. (Stephen Bezruchka, Alternative Radio, March 14, 2020) These extreme circumstances we are experiencing at the moment are a speeded-up and more intense version of what we experience in more “normal” times. Reverend Barber of the Poor People’s Campaign states that, before the arrival of the virus, there were 140 million poor and low-wealth people in the US, 15 million who could not afford water, 4 million who have poisoned water (such as those in Flint, Michigan), and 62 million not making a living wage. Nearly one in five children goes to bed hungry, not because of the virus but because of the gross inequality in this country that has produced a few who are impossibly wealthy and a greater majority of our population who are desperately poor. Inequality is a major factor in the health of a population. Researchers Richard Wilkinson and Kate Pickett, in their 2009 book The Spirit Level, point out that those countries that are most unequal in income have the worst health results across a broad spectrum of indicators, including life expectancy, infant mortality, mental health and drug use, teenage births, violent crimes, and more. Our current crisis dealing with the coronavirus provides an extreme example of how this plays out. The more unequal our population, the more members of our towns and cities are unhealthy coming into this crisis, and the more at risk that leaves all of us. We are only as healthy as the person standing next to us, or the person working at the convenience store or supermarket, or the shelf stocker bringing us those cherished rolls of toilet paper. We are truly in this together. There are larger, long-term lessons to be learned here for those of us connected to public education. There has been constant pressure on schools to improve performance, with an emphasis on raising test scores.
None of this pressure, none of the proposed solutions, includes reducing inequality, addressing systemic issues such as racism and sexism, or recognizing that schools are not operating in a bubble, isolated from their surroundings. We can’t fundamentally improve our schools without making fundamental changes to the communities in which they are situated. If we want to improve test scores, make sure all of our students are getting enough to eat, have safe and secure places to sleep, and are living in family situations that are stable and healthy. Make sure there are jobs that pay a living wage for the adults in our communities, that our neighborhoods are free of toxins and poisons, and that child care, medical care, and mental health supports are available and accessible to all. Address systemic racism, sexism, and other forms of discrimination. Make strong efforts to connect schools with their communities, recognizing ways in which we have common purpose and can take advantage of our collective resources and wisdom. Photo by NeONBRAND on Unsplash There are of course many things we can do within schools to support our students to do their best learning, beginning with developing strong relationships with them, making clear to them in every way possible that they are valued and fully welcomed into our classrooms. We can make sure they are getting healthy food for breakfast, lunch, and as snacks during the school day, and during weekends and vacations. We can make it clear that there is no tolerance for put-downs or discrimination in our buildings. We can help children learn more about their own learning processes and to recognize that there are many ways to learn and to communicate that learning, and then we can make sure to offer a range of teaching approaches and assessments that includes that wide range of intelligences and processes. We can begin to hire educators who love learning, who are creative and passionate, and who are skilled and dedicated to sharing that love with their students (rather than focused on raising test scores). We used to hire people like this to teach in our classrooms, and there was much more joy and creativity in our classrooms then. We can disappear the high-stakes tests, which have added stress and rigidity to our educational system while adding no educational benefit. All of these steps will help in the short run, but if we truly want to make educational change we must broaden our focus to making societal change. Our current health crisis makes clear that our current system does not provide for the health and well-being of our population, and that we can’t continue as we have been, increasing the gap between the few who have and the many, many who do not. It is a system that is beginning to collapse on itself, and unless we work towards improving the lives of all the members of our communities, in school and out, we cannot sustain it. We are experiencing a wake-up call, and hopefully we are wise enough to pay attention.
https://medium.com/peter-lang/the-true-cost-of-inequality-learning-lessons-from-the-covid-19-pandemic-21008f13c51a
['Peter Lang']
2020-04-16 14:42:25.687000+00:00
['Inequality', 'Covid 19', 'Education Reform', 'New York']
When Our Thoughts Are Tangled
Some of the most complex and multi-levelled tangles can occur in our brains throughout life. Now, I have never been strongly proficient at untangling what we consider traditional knots, the kinds we may face in the form of rope or string. I was a late learner, but I eventually put that odd comparison to work in realizing that if I have problems untangling knots in small strings, then I had sure as anything better learn about the tangles my mind gets stuck in. I came to figure that the best way to deal with these mental knots was to never allow them to get tied in the first place. Now, that can be easier said than done. But still, it’s a venture worth trying. One of the major suspects in this brain knot-tying is a topic I have actually written about often. Anyone recognize the term “black and white thinking”? Well, it is a very rigid model of thinking that can produce multiple problems. It’s an all-or-nothing thinking that may easily find itself tying knots. And another problem: it is a thinking style that, even when it isn’t tying knots, is sure not going to untie any knots we may already have. So, we can see with that analysis that all-or-nothing thinking can be nothing but a stressful, useless brain tangler. It rarely, if ever, helps us. The experts recommend trying to adjust this style from a black and white technique to more of a “shades of grey” style. Create a personal evaluation skill in your mind that encompasses rating things on a scale of numbers. It is a common practice, and it can help us get away from the insistence that something can only be good or bad. Another thing to be aware of is the fact that even our own thoughts must sometimes be challenged by us. Often even more than by outsiders, considering we’re the sole listeners of every thought we have, whether it’s normal thinking or distorted. If we can’t challenge ourselves, it’s still gonna be us cleaning up the mess. If your thoughts seem questionable, well, then question them. Even though we may be oblivious to our thoughts as we go through our days, they are still our thoughts. I know that we can barely acknowledge a lot of our subconscious thinking from time to time, but we have to try to develop more of an awareness of it. Especially if it is negative. I read an excerpt from The Feeling Good Handbook by David D. Burns, MD, and it said to “write down negative thoughts so you can see and read any cognitive distortions you think may either be a problem, or a potential future issue.” I believe that a cost-benefit analysis can be used just as much for our thoughts and emotions, with the same importance that it has in its usual place in business-type decisions. Specifically, I see emotions being a major focus point. I believe that all of us probably go through plenty of emotions in our lives that may not quite be that necessary. It’s a result of being human, and none of us should demand perfection of ourselves. We all go through wasted emotions, some from exaggerations and/or overreactions. If we can go about the approach in a cost-benefit style, then maybe we can strengthen our awareness in this area. Take a feeling or an emotion, and decide its positive points and negative ones. See what appears to be a waste, and see what parts are very important. If you can’t find a positive avenue that an emotion may be travelling, maybe it should just be let go of.
So much of this revolves around the fact that we ourselves have an absolute chance of being our own worst triggers. We fall into that trap as much as we do because it isn’t something at the top of our minds. We are in this endless search to find all these assumed “outside sources” as the reasons and causes of every single problem we have. We are molded, and evolved, to travel through life looking away from ourselves whenever there is an issue. We can’t go through life holding mirrors in front of ourselves as we wrongly go about trying to pinpoint what entity is fueling all our real-life issues. Well, we can ourselves be guilty. Even if we are not the problem, maybe we have to take a step back and remember that we are not our thoughts, and just because we think it, doesn’t mean it’s true.
https://medium.com/indian-thoughts/when-our-thoughts-are-tangled-a63c3f7a4933
['Michael Patanella']
2019-03-30 14:11:32.680000+00:00
['Self Improvement', 'Life Lessons', 'Mental Health', 'Thinking', 'Thoughts']
Calculate the Max Distance in an Array
Problem
Max distance: InterviewBit
Given an array A of integers, find the maximum of j - i, subject to the constraint A[i] <= A[j].
Example:
Input A: [3, 5, 4, 2]
Output: 2 for the pair (3, 4)
Solving Process
Brute Force
One brute force solution consists in iterating over each pair of the array and finding the maximum of j - i. This solution is O(n²). Let’s try to find a better solution.
Recursion
A common technique with array problems is to have two pointers. As we need to find the maximum of j - i, let’s try to initialize the pointers at the start and the end of the array:
A: 3 5 4 2
   i     j
If we already have A[i] <= A[j], we don’t even have to iterate and we can return j - i, which equals size(A) - 1. Yet, in our example, this is not the case. How do we move i and/or j? A solution would be to try both cases by:
- Moving i to the right
- Moving j to the left
And then return the maximum of both sub-solutions. In Java: please note that in this possible implementation we do not use i or j. Instead, we use List.subList(fromIndex, toIndex), which returns a view of the list (cc Val Deleplace ;). Despite being elegant and concise, this code is not really efficient. The space complexity is O(n) due to the recursion calls, and the time complexity is O(2^n), as for each call we need to do two sub-calls. We could try to optimize it using dynamic programming techniques, but let’s move on. What if we try to figure out whether another solution exists using those two pointers:
A: 3 5 4 2
   i     j
In this example we should ask ourselves: is there any way to guess whether it’s i or j that should be moved? In the previous example, we know the solution is (3, 4), so we should move j to the left. As A[i] equals 3 and A[j] equals 2, does it mean we have to move, each time, the pointer with the smallest value? To validate/invalidate this theory, let’s take another example:
A: 4 3 1 2
Output: 1 for the pair (1, 2)
If we apply our theory:
A: 4 3 1 2
   i     j   // Initial situation
A: 4 3 1 2
   i   j     // We move j as A[j] < A[i]
In this example, our theory is not going to work, as j should be at position 3 at the end of the algorithm. This is the end of the road for the two-pointers technique. Let’s try something different.
Sorted Array
If you are familiar with the Solving Algo publications, I have already mentioned another strategy: input simplification. With arrays, the most common simplification is sorting. What if the input array was sorted? As we need to calculate a distance based on the indexes, we must keep track of the original indexes, not just blindly sort the array. One possible solution in Java to deal with that is to implement our own helper class: Helper stores a value and an index variable and implements a sorting strategy based on the value. We can iterate over A and construct a collection of Helper. Once sorted, we would have something like this:
A: 3 5 4 2
Helpers: (2, 3), (3, 0), (4, 2), (5, 1)
If we represent the Helper collection in a different way:
Values:  2 3 4 5
Indexes: 3 0 2 1
Let’s come back to the initial problem. We need to find max(j - i) with the constraint A[i] <= A[j]. As the collection is sorted by value, we know that if we take one random element, every element on its right respects the constraint A[i] <= A[j]. As we must find max(j - i), the solution is to build a rightMax array containing the maximum index, starting from the right of the Helper collection.
In pseudo-code:
int[] rightMax = new int[size]
int maxValue = IntegerMinValue
for (i = size - 1; i >= 0; i--) {
    maxValue = max(maxValue, helper[i].index)
    rightMax[i] = maxValue
}
With the previous example, rightMax would be equal to [3, 2, 2, 1]. Then, we iterate over the Helper collection and rightMax to return the maximum difference of rightMax[i] - helpers[i].index. As we need to find max(j - i), it’s crucial to iterate until the end and not just return the first pair validating A[i] <= A[j]. A possible implementation in Java: What about the complexity?
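The implementation and the complexity discussion are embedded in the original post as snippets that do not survive in this text, so here is a hedged sketch of the approach just described. The Helper name comes from the article; the class layout and method names are my assumptions, not necessarily the author's exact code.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MaxDistance {

    // Pairs a value with its original index so we can sort by value
    // without losing the original positions.
    private static class Helper {
        final int value;
        final int index;

        Helper(int value, int index) {
            this.value = value;
            this.index = index;
        }
    }

    public static int maxDistance(int[] a) {
        List<Helper> helpers = new ArrayList<>();
        for (int i = 0; i < a.length; i++) {
            helpers.add(new Helper(a[i], i));
        }
        // Sort by value: every element to the right of a given helper
        // now satisfies the A[i] <= A[j] constraint.
        helpers.sort(Comparator.comparingInt(h -> h.value));

        // rightMax[i] = largest original index among helpers[i..end]
        int[] rightMax = new int[helpers.size()];
        int maxIndex = Integer.MIN_VALUE;
        for (int i = helpers.size() - 1; i >= 0; i--) {
            maxIndex = Math.max(maxIndex, helpers.get(i).index);
            rightMax[i] = maxIndex;
        }

        // The answer is the best rightMax[i] - helpers[i].index over all i.
        int best = 0;
        for (int i = 0; i < helpers.size(); i++) {
            best = Math.max(best, rightMax[i] - helpers.get(i).index);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxDistance(new int[]{3, 5, 4, 2})); // 2
        System.out.println(maxDistance(new int[]{4, 3, 1, 2})); // 1
    }
}

As for the complexity: the sort dominates, so this runs in O(n log n) time, with O(n) extra space for the Helper list and the rightMax array, a clear improvement over the O(2^n) recursive attempt.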
https://medium.com/solvingalgo/solving-algorithmic-problems-max-distance-in-an-array-7e8c9f71c8b
['Teiva Harsanyi']
2020-12-09 19:28:45.280000+00:00
['Algorithms', 'Coding', 'Java', 'Programming', 'Arrays']
It’s Okay to be a Fallow Field Sometimes
It’s Okay to be a Fallow Field Sometimes
The wisdom of doing nothing
Photo by Federico Respini on Unsplash
Hi, it’s me, your neighborhood annoying productivity cheerleader. I’ve learned something in the last few weeks — it’s okay not to push yourself to achieve something every single day. Is “write every day” good advice? Of course! Is it realistic? Well, that depends. Should you punish yourself for not being able to do it? Absolutely NOT. I suffer from anxiety and perfectionism, all rolled up together with a bow of horrific, crippling guilt on top. So when I get up at 6:45 in the morning, my whole day is GO GO GO until I pass out, exhausted, somewhere around 1 am. That’s IF I fall asleep — anxiety likes to keep me awake, worrying about what I will have to do the following day or what I didn’t achieve today. In these strange and unprecedented times, we are asked to cope with or even ignore an ongoing, mass trauma, and to try to be as productive as we were not that long ago. Don’t hold yourself to the standards of the world as it was three months ago; that world is gone. Don’t punish yourself for not being “productive,” and don’t internalize capitalism so hard that you think you’re only valuable when you’re accomplishing something or making money. My god, that’s unhealthy. As for me? I’m learning, bit by bit, to just BE. To just sit still. To do things I enjoy without productivity or hustle being a part of the equation. Farmers have known for generations that letting a field lie unused for a season now and again is actually good for it; better still, it can be planted with restorative crops that give back to the soil to make it better at its job next growing season. Treat yourself like a farm — now and again, take some time off and engage in an activity that inspires you, fulfills you, refreshes you. Your (later) productivity will thank you.
https://medium.com/narrative/its-okay-to-be-a-fallow-field-sometimes-31be62db0354
['Deidre Delpino Dykes']
2020-04-27 13:43:23.466000+00:00
['Self Care', 'Work From Home', 'Work Life Balance', 'Quarantinelife', 'Mental Health']
Why we have decided to kick ticket-tiers
Why we have decided to kick ticket-tiers
…and introduced a unified ticket price.
The last two co-creator dinners were full of content in every aspect: establishing new collaborations for Revision, discussing content for the Revision Summit (18–20 November), and going through multiple feedback rounds on the organization and outlook of the Network. In light of this vibrant spirit, we sat down after the two dinners and tried to incorporate the feedback we received. One important piece of feedback was that the many different ticket tiers were too confusing and, most importantly, not necessarily inclusive enough. Therefore we would like to announce the Revision Summit Unified Ticket Price: all tickets to the Summit will be sold at €250 to avoid confusion over different categories. Participants eligible for a reduction (students, NGOs, refugees, unemployed or elderly people), please fill out the form on our website and we will find a solution to enable your participation. We thank the co-creators for their valuable feedback and would like to encourage all of you to continue letting us know where we can improve. We want to live the ideals we put forth, being inclusive and transparent. The Revision Summit is supposed to be a different type of conference, a place created for the community and by the community. Let us take the discussion about a more open, human-centric and technology-driven society past the realms of the summit and meetups into the open; let’s challenge ourselves, our own work, and others. In this spirit — we are looking forward to more of your feedback! You can get your tickets to the Revision Summit here.
https://medium.com/revision-europe/why-we-have-decided-to-kick-ticket-tiers-d8042e78e993
['Quincey Stumptner']
2018-10-11 08:35:22.867000+00:00
['Berlin', 'Startup', 'Blockchain', 'Technology', 'Events']
The StackAdapt Conscientious Waste Reduction Initiative
In 2017, StackAdapt launched the #HackDiversity initiative to raise awareness about diversity in tech. An 18-minute documentary was released, calling for an industry effort to address bias in the tech industry. We really loved this initiative, and wanted to do more to bring awareness to other issues in the industry and around the world. April 22nd is Earth Day, and for some this is the only time sustainability is top of mind. For us at StackAdapt, sustainability is an ever-present and growing concern, and we felt it was important to do our part to not only bring awareness to the issue but make some conscious changes. The Challenge! The #StackAdaptWasteReduction Initiative. First, we needed to gauge how we were doing when it came to waste, to identify where and how we could make a few adjustments. Our focus would be on (1) smart commuting and (2) waste reduction (e.g., paper and plastic). A survey was sent company-wide, with questions including:
- What is your current mode of transportation to work?
- How often do you print material at the office?
- How often do you get takeout coffee in a week?
- How often do you decline a plastic bag or plastic utensils when you order takeout?
The results were amazing. We found that less than half of respondents print materials in the office, and approximately 30% bike or walk to work — depending on the weather — while the majority, almost 70%, take public transit. The team was very forthcoming with their suggestions. Interestingly, the majority of respondents agreed that their biggest concerns revolved around waste in the office and sustainability as a whole. At the beginning of the initiative, we had a few recycling bins around the office, and compost bins in the kitchen — but it seemed there was an appetite for more, with a need to receive details on what exactly goes where. We leveraged suggestions that came from the survey results to make some key changes geared specifically toward waste reduction. This is what we have done so far:
- Added additional and larger recycling bins around the office to encourage appropriate waste disposal
- Added more reusable utensils in the kitchen, to encourage less plastic cutlery from takeout lunches
- Offered tips to the team through Slack such as: “Did you know that both Starbucks and Tim Hortons offer a ‘bring your own mug’ discount of 10 cents per drink? If you have an out-of-office coffee or tea habit you may wanna consider adopting the #travelmuglife”
- Created a helpful cheat sheet that explains where to dispose of different products, according to the waste company contracted by our office building, along with some best practices for waste reduction
This is just the beginning! From deciding against single-use coffee pods, to encouraging better recycling and composting practices, we are committed to continually educating our team — and those around us — about conscientious waste reduction, keeping sustainability top of mind. How did your company celebrate Earth Day? Are you taking steps to work toward sustainability? Let us know on Facebook, LinkedIn, or Twitter!
https://medium.com/stackadapt/the-stackadapt-conscientious-waste-reduction-initiative-21915e08b06f
['Christiana Marouchos']
2019-04-25 13:56:05.645000+00:00
['Waste Reduction', 'News', 'Earth Day', 'Environment', 'Stackadapt']
I Improved My Mental Health During a Pandemic
2019 was an arduous year for my mental health. I was struggling with depression. The thing is, I was in denial about it because I wasn’t displaying what I imagined depression to look like. Instead, I called myself lazy and unmotivated. When the pandemic began and quarantine was mandated, I had to face my depression head-on. I couldn’t ignore it any longer because I would end up in this spiral of self-deprecation and would end up crying. These episodes would happen at least once a week. Because of this, I decided to make a change. Finding a Hobby Ever since I started college in 2012, I had not had a specific hobby. I remember I would spend an entire afternoon looking for one. I would Google different types of hobbies and take quizzes to see what hobbies would suit my personality. When every day began to look the same, I realized that I didn’t have a hobby to offset my daily routine. So, I found a few hobbies, and I felt more productive. The more productive I was, the fewer crying sessions I had. Creating a Strict Routine The moment that I was willing to make a change, I decided to follow a strict routine. I would have alarms for my chores, work, meals, and break times. I made sure I was busy until dinnertime. Sad to say, I couldn’t follow this routine for more than two weeks. However, I was productive, content, and pleased with what I had accomplished. Practicing Self-Care On the days when I was at my best last year, I would find different ways to take care of myself. So, in 2020, I decided to restart my skincare routine, take care of my nails once a week, and have a better bedtime routine. Something I learned is to appreciate my body for what it is at the moment. Now, at the end of 2020, I have included makeup and exercise in my routine. I love my self-care habits. They are my indicators for my mental health, and I can practice perseverance on the days that I feel unmotivated. Exercising From a young age, I have been self-conscious of my weight and how my body looked. This year, I am at my heaviest, and I want to make a change. I decided to exercise and keep track of my weight loss. At first, I was fixated on calories and how I needed to be at a deficit. There were benefits to it, but it slowly became a numbers game, and I wasn’t willing to fall into that trap. However, as I began focusing more on exercising, I noticed how my body has changed. The number on the scale was barely dropping, but my body looked better. I learned that health is not determined by a number. Health is putting in consistent effort to strive to be healthier and learning to be content with your current body. Talking About My Mental Health Something that I regret not doing during 2019 is opening up about my mental health. In 2019, I felt alone because I was silent about my mental health. I never brought it up with my own husband. But now, when I am feeling down, upset, or anything else, I let him know. Just by opening up to him, I am more willing to open up to my friends. Even though my friends do not live in the same state as me and I am in my home almost 24/7, I feel a lot less lonely than I did last year.
https://medium.com/sweaters-and-blankets/i-improved-my-mental-health-during-a-pandemic-17f6665e41af
['Tiffany Hsu']
2020-12-12 16:33:03.186000+00:00
['Self Care', 'Depression', 'Pandemic', '2020', 'Mental Health']
Dropping Out of Character
When Acting Lessons and Buddhist Thought Meet in the Bathtub. Photo: Ren Powell Yesterday after work I took a long bath without my mobile phone. Without earbuds. No podcast, no music, no news. I can’t remember the last time I did that. I had a rush of ideas. Most of them related to work, but that was fine really. Creativity feels good regardless of the arena. I got out of the tub, dried off, and worked at the computer until bedtime. I have a separate Chrome browser for school-related bookmarks. At eight o'clock, I closed it for the next 12 days. Today though, I’m thinking about work again. About how I teach first-year drama students to be conscious of personal props, the items that become the habitual gestures and defining physical characteristics of their role’s personality. Glasses, scrunchies, cowboy boots, soda bottles, toothpicks. These objects are psychological props — they reinforce the actor’s constructed identity. They provide a way for an actor to literally cling to their role, to keep themselves from dropping out of character. By the third year, we are talking about Richard Schechner and our social behaviors related to personal props that prompted him to insist for a time that his Performance Group play in the nude. For years now I’ve used my keys as an example of a personal prop. I have work keys. I don’t have a car, so I don’t have a car key. We have a code on our front door, so I don’t have a house key either. When I pull my work keys out of my backpack, I take on a role: teacher. My work keys are incredibly symbolic. Students will ask me to unlock the costume storage room or a rehearsal room. Or by the third year, they may ask to borrow my keys so they can do it themselves. At some point years ago, I became hyper-aware of my work keys. How I would actually cling tightly to them when I felt a class of 30 restless students taking control of a situation that should have been under my control. Weirdly, by noticing this — stepping back and taking on the role of the director in relationship with my “character” — I was able to recognize when control was necessary and when it wasn’t. I could make more conscious choices about my “role” as an instructor. These days, half the time I have no idea where my keys are — which I’m certain is not something my boss wants to know. Yesterday, finding myself in the bathtub without my mobile phone, I had the same kind of epiphany. We read and talk a lot about social media and how we can passively allow it to define us. But the phone itself — the device — has come to partially define me. My mindless connection to this object, and its ability to connect me to a world of ideas to occupy my thoughts every moment, is shaping my behavior. It’s determining how I move in the world. Literally: in the bath, one elbow propped on the edge of the tub to keep the phone dry. My shoulder twisted slightly. My neck under stress. I’ve believed for a long time that we are nothing more than what we do: what we think and how we interact with the world. And that thinking and interacting with the world are interconnected in such a way that one defines the other — reinforcing or challenging who we “are” at any moment. I believe this is how we can change. How we do change. I’m going to stop grasping at my mobile phone. Stop clinging to my sense of self: the productivity shoulds and ought-tos. I’m going to dare to be truly naked in the bathtub. Maybe dare to drop my character more often, wherever I am.
https://medium.com/mindfully-speaking/dropping-out-of-character-748bd8c317ad
['Ren Powell']
2020-12-29 07:42:25.266000+00:00
['Self-awareness', 'Acting Performing', 'Teaching', 'Performance', 'Buddhism']
What “Beta” Means for Material Design Guidance
Material is now marking some design guidance as beta — learn why and what it means for the design system Image by the amazing Michelle Alvarez, Google Senior Visual Designer We recently began labeling aspects of our design guidance as “beta” on material.io. What does that mean? We’re being more transparent about which components, patterns, and elements are likely to evolve in the near term. The background on Material beta As an organization, Material Design strives to produce the highest quality design and engineering for you to use in your products. Everything we do goes through multiple rounds of review, testing, and validation before it makes it out into the world. But the world keeps changing — as do the technologies, contexts, and constraints shaping our design system. We want to get new components into your hands faster and learn from how you use them. What beta means on material.io When you see content in our spec marked as beta, know that it’s still backed by research and careful consideration by our team, but it’s also subject to change. There are typically two reasons why something is marked beta: It hasn’t been fully engineered on enough of our platforms. Implementing new guidance on different platforms is an important part of finalizing the design. Material supports four platforms: Android, Flutter, iOS, and the web. Implementing a new component on one platform is a first step, but we want to address any concerns that come up as we implement across our other platforms. For example, does this pattern work well using iOS’ particular navigation gestures? How about on the web when the user scales up the text size on the page? Although our designers consider multiple platforms, until we’ve implemented a new UI pattern widely, it’s difficult to anticipate what issues might come up. The underlying concept is still emerging. For example, machine learning is clearly changing the world but design best practices are still taking shape. Our patterns for machine learning-powered features are a first attempt at helping designers grapple with certain use cases and situations. However, given the newness of machine learning in user interfaces, it’s crucial to acknowledge that the landscape is changing rapidly and our guidance might need significant updates to continue to work well. How Material guidance graduates from beta Of course, we don’t want guidance to be marked beta forever! We’ll move guidance out of beta when it’s been implemented in code for multiple platforms or becomes a well-established pattern across several Google products. As of November 2019, here are the components, design elements, and UI patterns in beta: What do you think? Our approach to using beta for design guidance is still taking shape, and we’d love to get your feedback. Think it’s helpful? Unnecessary? Let us know in the comments below 👇
https://medium.com/google-design/what-beta-means-for-material-design-guidance-10c5739f47a9
['Adrian Secord']
2020-02-14 14:20:26.457000+00:00
['UI', 'Material Design', 'UX', 'Design']
Introducing Bitfolio 3
It’s been a while since our last development update, but we are back with a bang: Bitfolio 3 for iOS is now available! 🎉 And there is more, this is just the beginning. The big challenge in developing Bitfolio 3 was having to rewrite part of our infrastructure to accommodate our vision. Good news, the hard part is mostly done and Bitfolio 3.0.0 is just the first of many updates. So let’s see what’s new and what’s coming!
What’s new
Everybody was asking for it and we finally have it: exchange sync support! 🙌 We’ve also revamped our UI; the main Portfolio screen will now accommodate all your exchanges and portfolios, giving you an overview of all your crypto valuables. Bitfolio version 3.0.0 supports Binance, CoinEx and Cex.io. We are planning to support many more in the future, and if you want us to speed up the adoption of a specific exchange please let us know via this form. So to recap, what’s new in version 3.0.0:
- Support for exchange sync
- Revamped Portfolio UI
- Support for iOS 13
- Automatic dark mode switch
What’s coming
- More exchanges, we want to support as many exchanges as possible.
- Ability to view current and past orders on exchanges.
- Widget for tickers.
- Ability to choose between multiple news sources and add new sources.
- Exchange trading.
And much more, check out the Bitfolio roadmap on Trello, where you can vote on your favorite feature and keep an eye on the development. 👍 Useful Links Thank you ❤️ We’ve received a lot of feedback and we love it, please keep it going, keep sending feature requests and if you see a bug or an issue, send a complaint! (but please don’t be rude 😉). Please consider sharing Bitfolio with your friends and coworkers, it would help us tremendously. Again, thank you ✌️ The Bitfolio Team
https://medium.com/bitfolioapp/introducing-bitfolio-3-c9c27ba570c
['Francesco Pretelli']
2019-09-25 08:10:19.764000+00:00
['Bitcoin', 'iOS', 'Cryptocurrency', 'Binance', 'Apple']
Understanding “Off The Record”: An Intro For Angry Bloggers
Lord knows, The NYTPicker, a blog devoted to lightly annoying the New York Times, is probably penned by some disgruntled, drug-addled acquaintance(s) of mine. But that does not mean that they/he/she are not wronger than wrong today. Today they claim that New York Times TV reporter boy Brian Stelter, who is one of the most prolific/annoying users of Twitter in the world (your mileage may vary!), screwed over a source in a story (former CNN honcho Jon Klein) by revealing his name on Twitter. Then, they backpedal a little to this: Your tweet clearly identified Klein as someone you interviewed “off the record.” Whether the quote in your story came from Klein isn’t the point. Your disclosure on Twitter violated the off-the-record terms of the interview, and disclosed the identity of a news source who demanded anonymity. Yeah, not at all! People don’t get to set the terms of communication preemptively in this world. If you email me, requesting such terms, I can agree to them or not. Here’s how that works:
Them: I will talk to you, but not for quotation in any way, and I do not want to be identified, because I will get fired/get divorced/other troublesome thing.
And then you may respond.
You: Okay, I agree that I won’t quote you or name you.
Them: *Spills beans*
Or:
You: No! I do not agree with your suggestion! Here is my alternative suggestion, what do you think of that?
Or:
You: No thanks! Go fuck yourself!
The End.
https://medium.com/the-awl/understanding-off-the-record-an-intro-for-angry-bloggers-1af02a38d9ec
['Choire Sicha']
2016-05-12 23:11:17.226000+00:00
['Brian Stelter', 'Journalism', 'Blogs']
Problems Patterns
Different pieces to help solve the puzzle. In our case as coders, it’s problems.
Hey friends, hope you are all staying positive and testing negative! Today I want to talk about problem-solving patterns for organizing your data. As a coder you will encounter so many of these patterns throughout your career. You will especially see them throughout LeetCode, HackerRank and technical interviews. The patterns that will be discussed are: Frequency Counter, Multiple Pointer, Sliding Window, and Divide and Conquer. Of course there are a lot more patterns, but I believe starting off with these four patterns will help beginners get the flow of creating such patterns. Getting the flow will allow you to go on to more complicated patterns, which I am excited to tell you more about in the near future! Let’s get on to the first four patterns.
1. Frequency Counter Pattern
How many times do values occur during this runtime?
Our first pattern uses objects or sets to collect values and the frequencies of those values. This is useful for when you have multiple pieces of input data and you need to compare them to see if they consist of the same values, hence frequencies. I will be using a LeetCode problem to help explain the process of this pattern. This problem is called validAnagram, and we are required to check and see if the two string inputs consist of the same letters in the same amounts. We need to see if our pieces of input data (the characters of both strings) occur the same number of times.
Figure 1.0 Lookup is our frequency counter and it is going to store the values of the first string.
In Figure 1.0, we created a variable object called lookup (a reconstruction of this code appears after this section). That variable is our frequency counter, and if we take a look at lines 9 through 13, we are storing the elements into our variable. Look carefully at line 12: we are using a ternary statement to check whether or not our frequency counter has already stored the same letter. If the letter is already stored in our lookup variable, we increment it by one, and if not we set it to one. This part is very important because we are not just storing the elements from the string, we are also counting the number of times each has occurred. Now if we take a look at lines 14 through 20, that is the frequency pattern at work. Just like how we were adding the letters into the variable lookup, we are now taking away the letters and the counts of their occurrences. We are using conditional statements to check the second string’s letters against the variable lookup. If a letter from the second string does not match up with any of the letters that lookup has kept track of, return false; otherwise decrement the occurrence by 1.
2. Multiple Pointer Pattern
Pointers will start from both ends of the spectrum.
Our second pattern utilizes pointers, values that correspond to an index or position, and moves them towards the beginning, middle, or end based on a certain condition. In the following problem, we are going to check and see if the sorted array contains a pair that sums to zero. We will need to iterate over the array and see if the values at the pointers add up to zero.
Figure 1.1 Our array for this input is [-15, -5, -3, -2, 1, 2, 3, 123]
Before we start adding the elements we need to create pointers. In Figure 1.1, we are creating a left and a right variable to keep track of where we are in the array. Because we need to check every single index, we use a while loop that will stop when left becomes greater than right. As we iterate, we will add up the elements at left and right and assign that value to the variable sum.
We will then check to see if this variable sum is equal to 0. If it does equal 0, great: we will return the elements at those indexes. What happens if the sum is greater than 0? We will then decrement right by 1; and if the sum is less than 0, we will increment left instead.
3. Sliding Window
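The code behind both walkthroughs lives in the original post’s screenshots (Figures 1.0 and 1.1), which are not reproduced in this text. The sketch below is a hedged JavaScript reconstruction of the two patterns; the names validAnagram and sumZeroPair, and details such as the early length check, are my assumptions rather than the figures’ exact code.

// Frequency counter (Figure 1.0): do the two strings contain
// the same letters, the same number of times?
function validAnagram(first, second) {
  if (first.length !== second.length) return false;

  const lookup = {};
  // Store each letter of the first string and count its occurrences.
  for (const char of first) {
    lookup[char] = lookup[char] ? lookup[char] + 1 : 1;
  }
  // Walk the second string and "spend" the stored counts.
  for (const char of second) {
    if (!lookup[char]) return false; // letter missing or already used up
    lookup[char] -= 1;
  }
  return true;
}

// Multiple pointers (Figure 1.1): find a pair in a sorted array
// that sums to zero.
function sumZeroPair(arr) {
  let left = 0;
  let right = arr.length - 1;
  while (left < right) {
    const sum = arr[left] + arr[right];
    if (sum === 0) return [arr[left], arr[right]];
    if (sum > 0) right -= 1; // sum too big: pull the right pointer in
    else left += 1;          // sum too small: push the left pointer up
  }
  return undefined;
}

console.log(validAnagram("anagram", "nagaram"));           // true
console.log(sumZeroPair([-15, -5, -3, -2, 1, 2, 3, 123])); // [-3, 3]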
https://medium.com/swlh/problems-patterns-320ce83897e8
['David Cha']
2020-10-27 19:42:56.444000+00:00
['Algorithms', 'Beginner', 'Patterns', 'JavaScript']
Mastering Regular Expressions made incredibly easy - It's all about the patterns
The number of use cases to which Regular Expressions (Regex) can be applied is immense. No matter whether you are a data scientist collecting data, a white-collar worker automating business processes or a student who aims at extracting data from academic journals on a larger scale, Regex is likely to become your best friend. Think of the world wide web as an incredibly comprehensive source of (structured, semi-structured or rather unstructured) data; very often interesting facts and numbers are kept on a web page, and you may find yourself in a situation where you would like to access this data. In most cases the web page, software provider, etc. does not offer a simple API to access or download data (for free), which is why you might want to build a tailored solution to obtain the data yourself.
It’s all about the pattern © me
This is how we will address Regex in this quick read
I often found readings covering Regex with short, unrelated examples and explanations which were quite hard to follow. I will try to approach this topic differently:
- Provide a very short and handy Regex cheat sheet
- Have a quick look at groups and how they perfectly enhance our patterns
- Define a problem rather than describe a given pattern, e.g. “As a user, I want to find all data until the dot ‘.’ is reached”
In order to give you an impression of a couple of simple regex examples, we will browse through a colourful mixture of use cases — and no matter where we are heading, Regex will allow us to master all our requests. Why are we doing this? Data analytics and further-reaching data science disciplines heavily rely on accurate and clean data, and because of this, cleaning and organising as well as collecting data roughly contributes to 80% of the data science profession! A proper understanding of regex will make your day way easier.
Covering most of the data science daily business — Forbes
You will find an exhaustive supply of cheat sheets online — I prefer to just have the most common characters ready when needed. If you feel proficient with those, you should not have a hard time feeling comfortable writing patterns:
Thanks to cheatography
We will focus on two examples:
- The history of tea (tea_text)
- A football results table
Before we start, let’s first have a look at our cheat sheet below, followed by a short introduction to groups.
Groups ( )
Groups are essential when it comes to working with (fractions of the) pattern results. I often avoided using groups; however, this made it very hard to extract only the relevant data. Groups are defined through using ‘(’ and ‘)’ in the pattern. Groups allow us to explicitly target a group of characters, even if the group is embedded in another pattern. Programming languages then allow us to query for these specific groups and address them directly. To outline the general purpose of groups, follow this example and make sure you get the idea behind it:
“This is the day you will always remember as the day you almost caught Captain Jack Sparrow”
“This is the day you will always remember as the day you have not caught Captain Jack Sparrow”
“This is the day you will always remember as the day you have caught Captain Jack Sparrow”
“This is the day you will always remember as the day you certainly have caught Captain Jack Sparrow”
It seems Jack Sparrow, apologies, Captain Jack Sparrow gets more or less caught on a regular basis, but we want to know which exact words are used. So we define the relevant words in our pattern through a group.
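To see the group in action before dissecting it, here is a minimal runnable sketch (the variable names are mine, not the article’s):

import re

sightings = [
    "This is the day you will always remember as the day you almost caught Captain Jack Sparrow",
    "This is the day you will always remember as the day you have not caught Captain Jack Sparrow",
    "This is the day you will always remember as the day you have caught Captain Jack Sparrow",
    "This is the day you will always remember as the day you certainly have caught Captain Jack Sparrow",
]

pattern = r"you\s(\w*\s*?\w*)\scaught"

for line in sightings:
    match = re.search(pattern, line)
    if match:
        # group(1) holds only what the parentheses captured
        print(match.group(1))

# Output: almost / have not / have / certainly have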
Through our pattern ‘you\s(\w*\s*?\w*)\scaught’ we are able to extract the values that lie in between the fixed parts of our pattern. Depending on the programming language you use (I will use Python), the syntax to implement Regex varies; however, the idea is always the same. If your pattern finds a match, you are able to extract the entire match, but also the single group(s) only. Convenient! Now, being equipped with group knowledge, let’s go through the examples. First, let’s start with the tea text. Chinese small-leaf-type tea was introduced into India in 1836 by the British in an attempt to break the Chinese monopoly on tea.[57] In 1841, Archibald Campbell brought seeds of Chinese tea from the Kumaun region and experimented with planting tea in Darjeeling. The Alubari tea garden was opened in 1856 and Darjeeling tea began to be produced.[58] In 1848, Robert Fortune was sent by the East India Company on a mission to China to bring the tea plant back to Great Britain. He began his journey in high secrecy as his mission occurred in the lull between the Anglo-Chinese First Opium War (1839–1842) and Second Opium War (1856–1860).[59] ..... [57] Tea was originally consumed only by anglicized Indians; however, it became widely popular in India in the 1950s because of a successful advertising campaign by the India Tea Board.[57] Assume we would like to extract the annotations from the tea excerpt. For this purpose we use the group logic to only extract the values, not the brackets — however, we consider the brackets necessary to identify an annotation, rather than matching any sort of single or double digit.
pattern = "\[(\d{1,2})\]"
annotations = re.findall(pattern, tea_text)
# ['57', '58', '59', '57', '57']
If we ignored the usage of groups, we would obtain the annotations as they occur in the text — surrounded by brackets.
pattern = "\[\d{1,2}\]"
annotations = re.findall(pattern, tea_text)
# ['[57]', '[58]', '[59]', '[57]', '[57]']
It may be required to extract all time spans; hence we do not want to extract e.g. 1836, but we would like to obtain 1839–1842.
pattern = "\(\d*.\d*\)"  # . could also be explicitly '-'
years = re.findall(pattern, tea_text)
# ['(1839–1842)', '(1856–1860)']
More specifically, we might want to extract the years for the time spans we have already been able to locate. For this purpose we will greatly rely on the group idea previously outlined.
pattern = "\((\d*).(\d*)\)"  # year range
first_years = re.search(pattern, tea_text)
first_years.group(0)  # '(1839–1842)'
first_years.group(1)  # '1839'
first_years.group(2)  # '1842'
first_years.groups()  # ('1839', '1842')
## Iterating over all year spans is the easy part:
single_years = []
for match in re.finditer(pattern, tea_text):
    if match:
        for i in range(len(match.groups())):
            single_years.append(match.group(i + 1))
single_years  # ['1839', '1842', '1856', '1860']
Replacing values
Regex is not only a good way to find patterns, but also to replace them. Depending on the language you use, regex can be used to substitute patterns with your desired string. This is particularly useful when you would like to avoid replacing too many values, which may affect other words; e.g. an ‘is’ to ‘was’ replacement may result in words like Thwas…
pattern = "(First.*\(\d{4}.\d{4}\))\s*and.(Second.*\(\d{4}.\d{4}\))"  # using \s and .
Back in the tea text, we can first locate the two war names and then substitute the one we want to rename:

pattern = r"(First.*\(\d{4}.\d{4}\))\s*and.(Second.*\(\d{4}.\d{4}\))"  # using \s and . to show that the results are the same here
re.findall(pattern, tea_text)
# Provides the following elements:
# [('First Opium War (1839–1842)', 'Second Opium War (1856–1860)')]

pattern = r"First\sOpium\sWar"
replace_word = "1st Opium War"
tea_text = re.sub(pattern, replace_word, tea_text)  # re.sub returns a new string; strings are immutable, so there is no in-place modification
# [..] between the Anglo-Chinese 1st Opium War (1839–1842) and Second Opium War (1856–1860).

Counting values

Very often the number of occurrences is an interesting fact. A problem that may arise is that values come in a mix of upper- and lower-case spellings, which makes it hard to count them together.

pattern = r"[Cc]hinese.(?=[tT]ea)"  # positive lookahead
occ = len(re.findall(pattern, tea_text))
# normally we would use re.IGNORECASE to achieve this result
print('There are {} occurrences of "Chinese Tea"'.format(occ))
# There are 3 occurrences of "Chinese Tea"

In the second example, the scores of some recent football games are captured as a string. The intention is to extract the teams and the goals they scored, at half time and full time. A potential use case could be that a final league table shall be calculated. Given the following game results:

game_day = """
Juventus F.C. - Napoli 4:1 (2:0)
A.C. Milan - Internazionale F.C. 2:2 (1:0)
A.C. Fiorentina - Torino 1:0 (0:0)
Lazio - Atalanta 0:3 (0:2)
Lazio - Juventus F.C. 0:4 (0:2)
"""

There are many ways to extract team names and scores; punctuation, blanks and special characters, however, make it more complex to find a one-suits-all solution. Think of these two patterns — or come up with a better one!

# A more generic approach
pattern = r"\s*([A-Za-z][\w\.\s]*)-\s*([a-zA-Z\.\s]*)\s*(\d):(\d)\s*\((\d):(\d)"

# A very tailored approach
pattern = (r"(\w*\s?\w\.\w\.|\w\.\w\.\s*?\w*|\w*)"  # note the OR |
           r"\s*-\s*"
           r"(\w*\s?\w\.\w\.|\w\.\w\.\s*?\w*|\w*)\s*"
           r"(\d):(\d)\s*"
           r"\((\d):(\d)")

The above statements may seem complex at first sight, but they really aren't. I used two ways to achieve pretty much the same result: one pattern reflects the exact structure of our example data, the other follows a far more generic approach. Before focusing on the letters, brackets and numbers, let's have a first look at the overall structure of the pattern. As indicated by '(' and ')', there are several groups that allow us to extract and address the data of every single group. This is particularly handy if we only want to use the name and score of one team, or only the half-time score, etc. We further see '-' and ':', which give us a clear idea of the Regex structure in terms of the given data.

Pattern 1 (groups 1 and 2):

[a-zA-Z0-9\.\s]*                         # generic
(\w*\s?\w\.\w\.|\w\.\w\.\s*?\w*|\w*)\s*  # or, more precisely

The square brackets define which characters we allow — here letters, digits and dots (plus blanks). In the other case we refer to any word character (\w), explicit dots (\.) and blanks (\s). The first approach, [a-zA-Z0-9\.\s]*, is great to read but has one caveat: trailing blanks will be included in the results — so make sure your programme can handle these. The second approach is much more specific and perfectly covers our test data points — the caveat here: make sure there are no teams that are not covered by the pattern. Note that we separate the alternatives in the second approach with OR (|), which is necessary because not all clubs have 'F.C.' or 'A.C.' as part of their name. Although "uglier", I prefer the more specific version. The next group is simply the same pattern again, as we are looking for the second team's name.
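As a quick sanity check of my own for the name group alone (re.fullmatch requires the whole string to match; the club names are taken from game_day above):

import re

name_group = r"(\w*\s?\w\.\w\.|\w\.\w\.\s*?\w*|\w*)"

# one alternative per name shape: '<word> X.Y.', 'X.Y. <word>' and a plain word
for team in ["Juventus F.C.", "A.C. Milan", "Lazio"]:
    print(re.fullmatch(name_group, team).group(1))
# Juventus F.C.
# A.C. Milan
# Lazio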
Pattern 2 - the score:

(\d):(\d)\s*  # note: groups!

This simply queries for digits, i.e. the full-time score. We have not considered limiting the number of digits, but in order to avoid issues we could also use a quantifier:

(\d{1,2}):(\d{1,2})\s*  # we don't expect any team to score more than 99 goals in a game

Pattern 3:

\((\d):(\d)  # repeats the second pattern, preceded by the literal opening bracket '\(' (which is not part of any group)

In order to give an idea of what can be done with the extracted values, I created a brief "table calculator"; a sketch of this idea follows after the closing note below. The calculator takes the extracted teams and scores and assigns points. If we applied this Regex to all given matches, we could eventually calculate the entire league table — just an idea.

There is a lot more to discover; I hope these few lines could spark your interest in Regex. See you next time, stay safe!
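The embedded "table calculator" snippet itself did not survive the export, so the following is a minimal sketch of the idea under my own assumptions (3 points for a win, 1 for a draw; the generic pattern and game_day from above are reused):

import re
from collections import defaultdict

pattern = r"\s*([A-Za-z][\w\.\s]*)-\s*([a-zA-Z\.\s]*)\s*(\d):(\d)\s*\((\d):(\d)"

table = defaultdict(int)
for home, away, hg, ag, *_ in re.findall(pattern, game_day):
    home, away = home.strip(), away.strip()  # group 1 may carry trailing blanks
    hg, ag = int(hg), int(ag)
    table[home] += 3 if hg > ag else 1 if hg == ag else 0
    table[away] += 3 if ag > hg else 1 if hg == ag else 0

for team, points in sorted(table.items(), key=lambda kv: -kv[1]):
    print(team, points)
# Juventus F.C. 6
# A.C. Fiorentina 3
# Atalanta 3
# A.C. Milan 1
# Internazionale F.C. 1
# Napoli 0
# Torino 0
# Lazio 0

Sorting by points in descending order already yields a rough league table; a goal-difference tie-breaker is left out for brevity.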
https://towardsdatascience.com/mastering-regular-expressions-for-your-day-to-day-tasks-b01385aeea56
['Günter Röhrich']
2020-08-04 09:16:21.495000+00:00
['Regex', 'Python', 'Computer Science', 'Data Science', 'Analytics']