title: string (1–200 chars)
text: string (10–100k chars)
url: string (32–885 chars)
authors: string (2–392 chars)
timestamp: string (19–32 chars)
tags: string (6–263 chars)
20 Free iPhone Mockups [PSD, Sketch] - December 2020
Device mockups are getting more and more popular these days. Apple started this trend by sharing frontal PSDs of its recent iPhones on its Guidelines portal years ago. Those mockups were just renders of the devices without any artistic, branded touch to them. Lots of designers in the industry felt a need to create custom frames to present their products in a unique way, and over the years the visual style of mockups went from photorealistic renders to simplified versions like some of the ones above. Now it is a huge trend: designers wait each fall for the upcoming Apple iPhone event and start drawing as Tim Cook speaks on stage, then post their work on Dribbble or Behance. Q: Where can you use these mockups? A: Lots of products these days find these templates/mockups useful for a wide range of marketing needs: App Store screenshots, app landing pages, or simply presenting UI/UX design work using these iPhone X mockups.
https://uxplanet.org/free-iphone-x-mockups-psd-sketch-4c455d74b2c3
['They Make Design']
2020-12-21 07:41:37.838000+00:00
['Mockup', 'Design', 'Sketch', 'Psd', 'Iphone X']
A Look at the Algorithms Behind Natural Language Processing (NLP)
Natural language processing (NLP) describes the interaction between human language and computers. Human language is different from what computers understand: computers understand machine language, that is, binary code. Computers don’t speak or understand human language unless they are programmed to do so, and that’s where NLP comes into the picture. How does natural language processing work? There are two main techniques used in NLP: syntax analysis and semantic analysis. Syntax is the structure or form of expressions, statements, and program units; syntax can be used to assess the meaning of a language based on grammatical rules. Some of the techniques used in syntax analysis include: I.) parsing, the grammatical analysis of a sentence; II.) word segmentation, which divides a large piece of text into units; III.) sentence breaking, which places sentence boundaries in large texts; IV.) morphological segmentation, which divides words into morphemes; V.) stemming, which reduces inflected words to their root forms. Semantics is the meaning of those expressions, statements, and program units, and NLP applies algorithms to understand the meaning and structure of sentences. Some of the techniques used in semantic analysis include: I.) word sense disambiguation, which derives the meaning of a word based on its context; II.) named entity recognition, which determines which words can be categorized into groups such as people or organizations; III.) natural language generation, which uses a database to work out the semantics behind words and produce new text. We can also divide the NLP field into two camps: the linguistics camp and the statistics camp. The idea of NLP started in the early era of AI. In fact, it came into existence during the time of Alan Turing, who is considered the founder of both AI and computing in general. The challenge was to create a machine that could converse in a way indistinguishable from a human, which is also known as the Turing test. ELIZA, one of the earliest famous AI programs, can be considered an attempt to beat the Turing test. There were no algorithms at that time that could really understand human language, so ELIZA and the other chatbot programs of that era were built by manually crafting lots and lots of rules to respond to human conversation. Those programs never had the capacity to actually understand natural language; rather, they were exercises in psychology, designed to fool humans. From here the concept of linguistics arose, which can be viewed as the science of how language is created. Linguists search for patterns in a language and formulate rules for constructing and interpreting natural language utterances, and models or grammars are generalized on the basis of those rules. (Linguistic rules are also used to parse and recognize artificial languages when building a compiler.) Parsing natural language works in much the same way, except that context-free grammars are too limited, so context-sensitive grammars are used instead. Then, in the ’90s, statisticians approached the NLP problem from a different perspective. Essentially all the linguistic theories were thrown out, and a simple model of language called the “Bag of Words” model was introduced. This model is very simple: it assumes that a sentence is nothing but a bag of words.
This model doesn’t care about the order of words. For example, “I go for walk” and “walk I go for” are indistinguishable under this model, even though one of the two sentences has a much higher probability of actually occurring. When using this model there is no need for meanings; it assumes that whenever these four words appear together, the text likely has a similar meaning. Why would anyone want to use the “Bag of Words” model when there is a sophisticated linguistic model? What advantages does the statistics camp provide? The statistics camp wants to avoid manually programming rules and instead interpret language automatically, in a supervised fashion, by feeding in large amounts of labelled data and learning patterns from it. Let’s talk about some of the existing algorithms. Algorithms can be as simple as the vector space model, where text is represented as vectors and information is obtained through vector operations; embeddings are one such use case. Inference-driven algorithms such as frequent itemset mining look over a text corpus and try to make inferences about what would come next. Relevance-ranking algorithms such as TF-IDF, BM25, and PageRank are used in search engines. There are algorithms used to extract meaning from text, like latent semantic analysis (LSA), probabilistic latent semantic analysis (pLSA), and latent Dirichlet allocation (LDA). There are algorithms which try to derive the sentiment, context, and subject of written text; sentiment analysis is very popular, as it tries to associate a sentiment value with unknown words. Also, in recent times there are deep learning models which use statistical methods to process tokens with multilayer artificial neural networks. As we can see, there is no single type of algorithm for NLP. Coreference resolution: “Adam stabbed Bob, and he bled to death!” It is a huge problem in NLP to determine whether “he” in the above sentence refers to Adam or Bob. It is a very well-studied problem and has a fancy name, “coreference resolution”. In linguistics, coreference (sometimes written co-reference) occurs when two or more expressions in a text refer to the same person or thing; they have the same referent, as in the sentence above. Back in 2001, a machine learning approach was proposed (paper). The proposed classifier was a decision tree, which classifies a given candidate pair of words as either “coreferential” (referring to the same thing) or “not coreferential”. The following features were used for each candidate pair: Distance: the number of sentences between the two words (the greater the distance, the less likely the words are coreferential). Pronoun: whether both candidate words are pronouns, only one of them is, or neither is. String match: the overlap between the two words (“Prime Minister XXX” and “The Prime Minister” can be considered coreferential). Number agreement: whether the candidate pair of words are both singular, both plural, or neither.
Semantic class agreement: whether the candidate pair of words belong to the same semantic class, if any (“Person”, “Organization”, etc.). Gender agreement: whether the candidate pair of words are of the same gender, if any (“Male”, “Female”, “Neither”). Appositive: whether the candidate pair of words are appositives (if a sentence starts with “The Nepali President, XXX, said…”, then “President” and “XXX” are appositives and are probably coreferential). …and a few more similar features.
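To make the “Bag of Words” idea above concrete, here is a minimal illustrative sketch in Python (not from the original article): word order is discarded, so the two example sentences map to exactly the same count vector.

```python
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Represent a sentence as an unordered multiset of lowercase tokens."""
    return Counter(sentence.lower().split())

a = bag_of_words("I go for walk")
b = bag_of_words("walk I go for")

print(a)       # Counter({'i': 1, 'go': 1, 'for': 1, 'walk': 1})
print(a == b)  # True -- the model cannot tell the two sentences apart
```

In a real pipeline these raw counts are usually reweighted (for example with TF-IDF) before being fed to a ranking or classification algorithm, as discussed above.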
https://tmilan0604.medium.com/look-on-algorithms-behind-natural-language-processing-nlp-e06f18b6c31d
['Milan Thapa']
2020-10-30 06:36:09.990000+00:00
['Machine Learning', 'Artificial Intelligence', 'Algorithms', 'Naturallanguageprocessing', 'Turing Test']
About Written Tales
Who is behind Written Tales? I would like to introduce myself. My name is Kevin, a writer just like you trying to build a reader base. Why? I was tired of how the publishing business works. How some charge writers a fee to submit their work. Others make the writer wait for months without hearing a word. How the entire process can be discouraging for new authors. Then, a chord struck within. I had a desire to create a publishing platform. A program to help writers grow their talent and promote the work they write. And from this “Written Tales” was born. The Goal The goal of Written Tales is to give new and seasoned writers a platform where they have an uncensored voice. A stage where their work can reach maximum exposure through multiple social platforms. Without creative arts, innovation will die. Society will tumble into the abyss of ignorance. And critical thinking will become a lost art. Writers need an uncensored platform for their voices, and a community to help them grow. Due to this need, I decided to fund the project because I believe in the cause. Uncensored? We are not reckless in what we publish, but we are open-minded. We believe in free speech and will protect it even if we do not agree with the author’s position. Some creative works may offend, others will bring happiness. And this is the beauty of a platform that does not restrict a person’s view. Again, we will not publish reckless writing. But, writing that leads to lively debate, we will. Final Comments We are here to help bring literature back to the forefront of society through short stories, flash fiction, and poetry. If you would like to be a part of this cause, please join as an author, or support us by signing up for the Written Tales newsletter.
https://medium.com/written-tales/about-written-tales-d64a809d2cee
['Written Tales']
2020-11-20 11:26:29.287000+00:00
['Poetry', 'Publishing', 'Fiction', 'Writing', 'Written Tales']
An Oral History of ‘Coffee News’
Introduction You may not notice it, sitting in the background. Next to the lost-pet notices and bassist want-ads. Above the sugar. Its tan visage inviting you for a five-second perusal. Just enough color to camouflage a weak coffee stain. Coffee News is everywhere and nowhere. Widely read, but never truly understood. The anodyne accompaniment to many a Starbucks study session. The anesthetic accomplice to many a caffeinated evening’s eavesdropping. On the scale of stimulating reading material, today’s Coffee News lies somewhere between Highlights magazine, a Lutheran church bulletin, and a Carl’s Jr. place mat. But behind that drab page lies a story of bacchanalia, murder, betrayal, greed, and scandal that has long been known only to a select few. Scattered until now in family legends, depositions, indictments, and unsold vanity autobiographies, the history of Coffee News is presented here for the first time, told in the words of those who lived the dream…or the nightmare. On the scale of stimulating reading material, today’s ‘Coffee News’ lies somewhere between ‘Highlights’ magazine, a Lutheran church bulletin, and a Carl’s Jr. place mat. PART ONE: The Indianapolis Imbroglio Walter Fine, Managing Editor, Coffee News, 1978–1993: I suppose you’re asking me because I’m the oldest one left, everyone I know is dead, and I have no one else to talk to, so you think I’ll agree to your interview. Well, you’re right. So here goes. I’ll tell it to you the way I heard it. Linus Anacletus Clement Coffee made his fortune as a slave trader in Vicksburg, Mississippi. His son, Clement Coffee, grew that fortune as a Mississippi River barge pilot and later as a steamboat captain who specialized in returning runaway slaves. His son, Clement Coffee II, Chip, was a cattle trader and meatpacking magnate whose abattoirs were the basis for The Jungle. Clement III, Trip, was a renowned lawyer in St. Louis. He cornered the market in refrigerated rail cars and physically held them ransom at a rail yard in Kansas City using a private army of Pinkerton men. In that way, he amassed a still greater fortune. Clement IV, Skip, was sort of a reclusive philanthropist. He financed Birth of a Nation, has a dorm named after him at Dartmouth, and his charitable gifts endowed work-orphanages and union-busting-private-detective schools around the country. His first son, Clement V, Quint, became a priest and died of dysentery while aiding Colombian children freed from slavery on coffee plantations. Quint’s younger brother, Vance Coffee, was a rampaging drunk and a womanizer. He invested the whole family fortune into casinos in Warm Springs, Nevada. He thought the name was better than the other options, Reno and Las Vegas. Well, he didn’t think about where the interstate was going to go and that was that. Lost the whole fortune. He went to Colombia to borrow money from Quint. Discovered powdered cocaine there. Started smuggling it in. He thought he had snorted it all on the plane ride to Miami, but he forgot the pinch in his snuff box. So he got busted at customs. Went to prison in Terre Haute, Indiana for a few years. When he got out, he broke into an elementary school in Indianapolis and made off with five mimeograph machines. He stashed them under a nearby bridge, where he lived at the time. He published the first edition of Vance Coffee’s News of the Day in 1951. It started as a really virulent right-wing rag. Truman was a commie, Ike was a commie, Nixon’s a commie, there’s fluoride in the toothpaste. All that stuff.
He’d pass it out at VFW halls, tattoo parlors, and biker bars. Old Bob Welch was one of the earliest readers and I’ve heard it said it inspired him to found the John Birch Society in ‘58. Never had anything to do with coffee. Unless you count Vance going to Colombia. And even that had more to do with cocaine, as it turned out. Vivian Martz, acquaintance of Vance Coffee: There was a joke in those days, “What do you call ten copies of Coffee’s News? A blanket.” Felicia Wittingdon, Vice President of Franchising & Distribution, Grupo CN Media, S.A., owner of Coffee News, 2008 — : Yeah, I’ve heard that one. I think today I hear it more as a motto, “Coffee News: The blanket you can read.” Things more like that. Irony, you know. We take pride in it today, our service to the homeless. We’ve switched to warmer paper. It is a special paper too, made so that if you scrunch it up a bunch of times, it gets soft enough to use as toilet paper if you’re in a pinch…so to speak. We thought of putting adhesive on the bottom and right margins to make it possible to actually attach them together to form a blanket. But it’s a cost thing. It’s print media and it’s free, so, as you can imagine, our budget is pretty constrained. Coffee News: The blanket you can read. Ian Hogg, creator of “Slag Off, You Posh Twats!,” the logo of Coffee News since 1970: The logo began as my proposal for the cover of Sgt. Pepper’s Lonely Hearts Club Band. I still think Pete Blake nicked the idea, the bastard. Instead of cutouts of all these pop and political figures, I had had a collage of all these miserable people from Liverpool from all walks of life. Drunk pipe fitter. Smoking chimney sweep. Bitter cab driver. Newsboy on diet pills. Mum pushing a pram with her fifth baby, taking a nip. All glaring at the Beatles like, “You fink you’re better’n, you cunts? Fook right off!” And the Beatles sitting there, in all that ridiculous regalia like, “Yeah, you Scouser twats, we’re rich innat ’n’ yer bollocks!” So it was this indictment of the nouveau riche and tax-dodging cunts like the Beatles. Lennon got it. I think Paul thought it hit a little too close to home. Posh twat. Anyways, Pete Blake takes that and replaces these Liverpudlians with famous people and makes a queen’s tit. Goes down in history. So he’s a gobshite. But I had done these cartoony sketches of the idea before I’d made the photo collage. I had one in a drawer somewhere after I moved to New York in early 1970. I’d just finished doing the Today’s Now, Currently, a pop-art exhibit at the ICA in ’69. Stan Mason met me at this bar in Greenwich Village one day. He’d just gotten to New York and asked if I had anything he could use as a logo for this new paper he’s peddling. Offered to pay. So I dug up one of those drawings, turned those jealous frowns upside down, tacked in some newspapers, and there you have it. Two hundred dollars. Never thought about it again until you asked. Erin Stolhanske, granddaughter of William Stolhanske: My grandpa, William [Stolhanske], had a little coffee shop in the front of his grocery store. He ran the store, grandma ran the coffee shop. As I understand it, she let Viv Martz put the paper next to the apartment listings, classifieds, and garage-sale notices. By the cream. Viv was a waitress there. Gramps didn’t know who Vance Coffee was, let alone what was in the papers. My grandpa was not a political guy. He voted for Stevenson twice.
[Vance Coffee was found murdered in 1960 outside an apartment in Indianapolis after what police determined was an amphetamine-fueled, Nazi-themed sex orgy. Motive was never determined, but Vance’s gambling debts to local mobster “Stoney” De Luca were strongly suspected. — ed.] The attorney general came by after that bastard [Vance Coffee] was killed, asking why gramps was distributing anti-Semitic literature promoting the overthrow of the American government. They never charged him, but he found [Coffee’s] estate sale and overpaid for the mimeograph machines so they wouldn’t become a Bircher pilgrimage destination or be put to the same use again. The only person he knew who could write was his son, my uncle Dave. [David Stolhanske died in 2004. His quotes herein are from the transcript of his deposition in Stolhanske v. Mason, CA-98–00784, S.D. Ind. (LEXIS 98–082889712) — ed.] Dave Stolhanske, owner of Coffee News, 1960–69: I had been a journalism major at Ball State and had just come home looking for a job. I was pouring coffee at mom’s coffee shop. They didn’t call it being a barista then; it was Maxwell House. I changed the name of the paper to Coffee News only so I could use most of the original typography and layout. I wasn’t good at typesetting. It was that simple. The fact that it was put out at a coffee shop was a coincidence. I put my poetry in there. Ads for the local floral shop. Some jokes. Garage-sale notices. Quotes from my old copy of Bartlett’s from school. Recipes. ‘This Day in History’-type stuff. Pretty wholesome. Other coffee shops around town began carrying the paper, so I made some side money on the advertising. … In the early ’60s we published a few stories from Kurt Vonnegut under a pseudonym, Norma van Haayden. Kurt and I had been in Sunday school together and he’d send me whatever had gotten rejected from the big magazines. Those stories later served as the basis for Cat’s Cradle. … I’m surprised I kept it going as long as I did. I finally quit the coffee shop when I got a job at Honeywell writing their style guide for the writing of technical manuals. … I played a lot of bridge back then. Stan [Mason] was in my bridge club. I guess I never saw the potential [of Coffee News] beyond a few coffee places in Indy and Carmel. But the original idea and format was mine. Not the militant fascism. The wholesome part, after we got it from Vance Coffee. That stuff. … On that night, Stan and I had been drinking a lot of beer. I remember Stan [Mason] saying he really liked the idea of Coffee News and had big ideas for it. I humored him, but I wasn’t interested. I don’t remember signing anything and I would never have signed anything. But if I did, I was incapacitated. And as far as the Vonnegut stuff, I guess that’s why we’re here today. Walter Fine: Stan Mason was a son of a bitch and an asshole. But I loved the man. A true visionary. [Stan Mason died in 2012. His quotes herein are from his autobiography “The Best Things in Life are Free — The Life and Times of Stan Mason, Sole & Exclusive Creator and Publisher of Coffee News,” © 1997, Simon & Schuster, as well as his testimony in SEC v. Mason/CNG Publishing, Inc., 87:808991, S.D.N.Y., (LEXIS 90–109283577, June 4, 1990) — ed.] Stan Mason, Owner & Editor-In-Chief, Coffee News, 1969–2006; President of Mason Publishing, L.P., 1978–84; Chairman & CEO of Mason/CNG Publishing, Inc., 1984–2006: I don’t like to talk about other people, but I will say this. Davey Stolhanske was a degenerate gambler and a drunk. We had the same bookie.
I knew he was in to him for about two thousand. Davey hated the [Indianapolis] Pacers [professional basketball team] because his girlfriend had cheated on him with Chick Rollins, who wrote for the [Indianapolis] Star [the city’s major newspaper] and owned part of the team. He knew better, but he couldn’t help but bet against them. They kept winning. He kept losing. … Davey drank Yuengling like water. I’d known this guy forever. We played cards. He was bitching about how much he owed his bookie, so yeah, I knew about the debt. We’re playing bridge and we start betting. I’d lived in Chicago for a few years and worked at the Tribune. I knew what kind of money was in advertising, and I’d seen this Coffee News rag all over town since I’d been back. So I just had an idea. Do the same thing in a bigger town. Do it in every town. And boom. Rich. So I says to him, Davey, I got a bet for you. You win, I pay off your debt to [bookie] Stoney [De Luca]. I win, you give me your coffee newspaper. I won. … A week later, Davey calls me up. He’s bitching about the bet. Doesn’t wanna give up the paper. I take pity. I say, you know what, I’ll buy it off you. He says, How much? I say, How much do you owe Stoney? So we met at The Indianapolitan [night club] and we drew up a contract, and that was that. So, yeah, it was a bridge bet that led me to get the paper, but I bought it fair and square for two thousand dollars. I did not win the paper in a card bet, because betting on cards is illegal in the great state of Indiana and such a gambling winning would be an illegal, and thus unenforceable, contract. … At the time, I was unaware of the Kurt Vonnegut stories that had appeared in Coffee News in, I guess, ’61 or ’62, but as a matter of course, whenever I purchased any publication, I made sure to include all copyrights and other intellectual property, known or unknown [emphasis in original], held by that publication. That’s just my due diligence. That’s business. Anthony “Flat Tire” Medrano, interviewed at Federal Corrections Complex, Terre Haute, Indiana, 2016: The way I heard it, Davey Stolhanske signed that contract with a tire iron held against his head. Actually, that’s the way I saw it. I was holding the tire iron. Stoney De Luca was there. What do I give a shit? Stoney’s dead and the statute of limitations on that expired in ’75. … Why now? Well, nobody ever asked me before. [Stanislaus “Stoney” De Luca died at his home in Coral Gables, Florida in 1988 of natural causes and complications from acute syphilitic necropathy — ed.] The way I heard it, Davey Stolhanske signed that contract with a tire iron held against his head. Actually, that’s the way I saw it. I was holding the tire iron. — Anthony Medrano Walter Fine: I’d worked at the New York Sun and then the Daily News. I was out of a job for personal reasons. When I was released, I met Stan Mason at Delmonico’s. My friend Billy “Batts” Battaliano had introduced us. I knew him from working the blotter at the Daily News. Stan knew him through some guy in Indianapolis, Stoney something. Anyway, he was hustling this paper and needed somebody to run the print side. That was right when he got to town. It was 1970 or so. He was involved a lot on the editorial side at first, but needed help. So I was Assistant to the Editor, then Assistant Editor through most of the ’70s. Finally, he got more into the higher-level publishing aspect and I basically took over running the paper in ’78.
… A couple of months after I started, Stan came into my office holding some back issues he’d dug out of a box he brought with him from Indiana. He asked if I knew who Norma Van Haayden was. I asked if she’d been one of the girls who’d come back with us from P.J. Clarke’s [the famous New York bar] earlier that week. He said no. He asked if I knew a lawyer. My wife at the time was from old New York money. She gave me a name. Piers van Valkenberg, former partner, Debevoise, Wardwell, & Van Dyck, LLP: All I can say about that is that in 1971, Coffee News reached a settlement with Mr. Vonnegut and his publishers on terms satisfactory to all parties. It was New York in the 1970s and I owned the highest circulation paper in town, and we were expanding across the country. We were making so much money I said, ‘We can’t charge for this.’ It was a beautiful thing. — Stan Mason Walter Fine: The advertising paid the bills. The Vonnegut royalties paid for the drugs. Our offices were across the alley from The National Lampoon and on the same floor. There was a zip line at one point. It was anarchy. Erin Stolhanske: I didn’t know anything about the Vonnegut stories then, but I was just a kid. Later, I remember Uncle Dave talking about it, showing us the stories. He didn’t know anything about the law. He ended up teaching English in Castleton [Indiana]. It wasn’t until Stan Mason’s book came out that the light bulb went off. Dave Stolhanske: He knew. I know he knew because I told him. People say I didn’t know, but I knew. I’m not stupid. Not like they say. I’m smart. I was an English major. I knew about copyright. There was nothing in there about copyrights when I signed it. If I did. Which I didn’t. … If I did. It was under duress. I told you. They had a tire iron to my head! … It was Stoney De Luca and another guy. No, I don’t know his name. Stan Mason: It was New York in the 1970s and I owned the highest circulation paper in town, and we were about to expand across the country. I said, ‘We can’t charge for this.’ It was a beautiful thing.
https://medium.com/the-clap/an-oral-history-of-coffee-news-3a57a3ca7f9e
['J.P. Melkus']
2018-08-25 21:41:52.945000+00:00
['Satire', 'Parody', 'Journalism', 'Oral History', 'Humor']
Personal Finance Classes Offered By Making Of A Millionaire
Our top personal finance classes available on Skillshare If you don’t have a Skillshare subscription, that is okay. If you use this link, you get a free trial on Skillshare, which is more than enough time to take all of our courses. In the interest of full disclosure, if you do sign up for the free trial, we get a referral fee from Skillshare. It costs you nothing (and in fact, it gives you 2 free months) and helps us keep this publication alive, but we want to be fully transparent at all times. 1. Personal Finance Masterclass: 6 Steps To Lock In Your Financial Goals If you take one class on personal finance this year, make it this one. This class is about much more than budgeting. By the end of this class, you will know exactly how much you need to be saving for an emergency fund, for retirement, and to pay off all your debts. To do that, you will use the custom Excel workbook that I have built and made available to anyone taking this class. It will crunch all of the numbers for you and create a budget that locks in these goals. In this class, you will learn how to use the Excel workbook as well as the six steps to creating a goals-based budget.
https://medium.com/makingofamillionaire/resources-for-making-of-a-millionaire-readers-f2438dec0993
['Ben Le Fort']
2020-12-21 17:07:00.356000+00:00
['Money', 'Personal Finance', 'Education', 'Community', 'Productivity']
Event Sourcing From Static Data Using Kafka
Event Sourcing From Static Data Using Kafka A different distributed scheduler approach. Events in DDD platforms are usually raised by interaction with external sources, and those events are usually generated from commands (updates, creations, deletions, or pure business actions). Distributed computing platforms receive messages from other systems, and there is usually a gateway where those messages become events with a generic, standard format. Users can also interact with APIs and raise other events that must be propagated over the platform in order to save information or notify other services so that they can affect other domain entities. An event’s life cycle is not a long-term process; basically, we could summarize it as: “something has changed, and maybe someone is interested in this change.” Our event may notify a service, and that service may be forced to raise another event, but the life of this “consequence” should be similar to that of the event that triggered it. On the other hand, it is easy to find business information related to dates, or temporal information, that should trigger transformations in our data. In this situation we face the problem that motivates this post: events cannot wake themselves up. A typical problem, the expiration date. Let’s imagine we are working on an e-commerce platform, and maybe we have thought about creating object models called… I don’t know… price? (Maybe you think this section is plagiarism of the Walmart Labs post (1), but I swear I had to deal with exactly the same problem before reading their solution.) Prices can work like promotions in a certain way, but if prices are to be dynamic, they need to work (activate, deactivate) in a temporal window. We can think of large promotion days like Black Friday as time windows for promotions, or even as activation periods for different prices. Let’s suppose a typical situation in event-streaming systems: Price is a model entity, and it has an attribute called “expiration_date” with a date value, and another called “status” with an active/inactive value. An external system begins to load a bunch of active prices through a similar bunch of price domain events. Our asynchronous CQRS-based persistence system is listening to our messaging middleware and quickly saves all prices in the persistence engine. Another service is also listening and refreshes all prices in our cache system. Users can see the new prices, the data is consistent, and everything is running like it’s supposed to. Let’s have a beer; this streaming platform has been successfully designed. A typical price event lifecycle. When space-time in our dimension reaches the date marked as the expiration date for one of our little prices, what should happen? This price should change its status and users should notice the change… but what really happens? Absolutely nothing. Our events cannot work with time attributes unless those attributes have a purely informative purpose. We cannot change entities or notify other services when a date arrives. Our entire system depends on external systems to send time information, and to send that kind of event whenever some information must change. This can be a problem, or at least a great limitation, when designing event-based platforms. So, how can we know, then, if some promotion or some price has expired? Solutions based on a distributed scheduler Basically, all solutions to this problem are based on schedulers or distributed schedulers, which means many jobs searching over trillions of elements.
If we are lucky, we have our entities distributed and well balanced over the persistence systems, and some entity-based design that lets small triggers look for changes. Couchbase has recently proposed an eventing framework built on one of its services, which could be a great solution for this problem (2). Document insertions in the database are linked to small functions, and these functions can be scheduled to run when the time in our “expiration_date” attribute comes. Through Kafka connectors, each document can be transformed into a domain event and released into the middleware. Walmart has also released BigBen. This is a system that can be used by a service to schedule a request that needs to be processed in the future. The service registers an event in the scheduler and suspends the processing of the current request. When the stipulated time arrives, the requesting service is notified by the scheduler and can resume processing of the suspended request. Those are both good solutions to this problem, but we had an idea that could be simpler (and therefore smarter) and help with all of our cases. Kafka to the rescue Stream processing is maybe the greatest strength of Kafka. New features related to KStreams and KTables are opening a new world of possibilities for software engineers and architects. A KTable is an abstraction of a changelog stream from a primary-keyed table. Each record in this changelog stream is an update on the primary-keyed table with the record key as the primary key. A KTable is either defined from a single Kafka topic that is consumed message by message or the result of a KTable transformation. An aggregation of a KStream also yields a KTable. Since Kafka 2.4, KTable joins work like SQL joins: foreign-key, many-to-one joins were added to Kafka in KIP-213 (3). This basically means that we can join events not only by their primary key; we can also join events in different topics by matching any of their attributes. Join by foreign key between two KTables. Our solution What do foreign keys in KTables have to do with our static events? Let’s think about our original problem with expiration dates. In a pure event sourcing system, we would have a topic dedicated to price events: creation, update, and deletion events are all allocated on the same price topic. On one hand, we can develop a really simple service based on a scheduler; its responsibility is to send time events each minute, or each second if we need more accuracy. On the other hand, we have to deploy a joiner service, the “Updater”. This service listens to the time-event topic and to the price (or any other domain) event topic. Its entry points are two KTables, and these KTables are allowed to store a very big set of data. When a timed event arrives in the time topic (and the time KTable), our updater service looks over the domain KTable for entities whose specified field matches this date. If there are one or many matches, we can send a new update event with our price, or we can even put some logic into the updater service in order to change the price entity’s status. Price lifecycle with the event update process based on time events. Show me the code! OK, this could be a good solution, but how many lines of code do you need for the joiner? Less than ten lines: Joiner by FK with KTables. Performance We can think of many scenarios for event expiration or release. We have tested scenarios where 0.5–1%, 5–10%, and 50% of business events are affected by time events.
Let’s imagine the worst situation, one in which the clock passes midnight and a very special date begins, where almost half of our entities have to change their status. As you can see, we have filled our topics with 4 and 8 million messages in order to stress the KTable join processors. Performance tests. In average cases, our system updates elements (releases events) every millisecond, working with one replica. Worst cases complete the join over our KTables in 2 milliseconds. We have verified that this system scales horizontally with close to linear progression in its performance metrics. We could say this solution can release as many events as you want with really low effort in development and infrastructure. Generalization What do we need to use this solution across all our domains? Not much work, really. We just need to configure our time scheduler service (it can be made fault-tolerant through replication, because we can filter replicated messages with the same temporal key in the destination topic) and one “joiner” service for each entity topic. In each domain, many domain entities are “allocated” in a Kafka topic; each of these topics receives events related to those entities, and those events can be resent or reloaded into our event pipeline when their temporal field matches a timed event. By deploying a few dedicated services, our platform can “reload” events by itself, leaving that responsibility to Kafka, which also guarantees consistency and really good fault-tolerance levels. Acknowledgments I would like to thank Rafael Serrano and Jose Luis Noheda for their support, Soufian Belahrache (black belt in KTables) and Francisco Javier Salas for their work on this POC, and Juan López for the peer review.
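The joiner itself is written with the Kafka Streams DSL (the snippet referenced above); the time-scheduler side is simple enough to sketch. Below is a minimal, hypothetical Python version using kafka-python — the topic name, key format, and one-minute cadence are assumptions for illustration, not details taken from the article.

```python
import json
import time
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical time-scheduler service: emits one "time event" per minute.
# The joiner (a Kafka Streams KIP-213 foreign-key join) can then match these
# keys against the expiration_date field carried by the price events.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",          # assumption
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

while True:
    now = datetime.now(timezone.utc)
    minute_key = now.strftime("%Y-%m-%dT%H:%M")  # minute-precision temporal key
    producer.send("time-events", key=minute_key, value={"ts": now.isoformat()})
    producer.flush()
    time.sleep(60)                               # one tick per minute, per the article
```

Using the minute itself as the message key is what keeps a replicated scheduler cheap to run: duplicate ticks from different replicas share the same temporal key and can be filtered downstream, as the article notes.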
https://medium.com/swlh/event-sourcing-from-static-data-using-kafka-d00069332802
['Javier Martinez Valbuena']
2020-08-13 14:21:20.699000+00:00
['Streaming', 'Kafka', 'Microservices', 'Event Sourcing', 'Software Architecture']
Why Shopify Hires For Potential Not Talent And How You Can Too
Why Shopify Hires For Potential Not Talent And How You Can Too Potential can beat talent. Photo by Tim Marshall on Unsplash While watching a podcast on YouTube last week, I had one of those aha moments. The podcast was one of those random YouTube suggestions and featured Tobi Lutke (founder of Shopify). He was talking about how to build a team without access to a ‘primary’ talent market. Silicon Valley is a well-known primary talent market. The Bay Area offers a high concentration of well-qualified and talented people with specific skill sets. Drawn by the success of other startups, many people move there in the hope of tasting their own version of startup success. Ottawa, where Tobi founded Shopify, is by contrast closer to a political hub for Canada. Ottawa is a center for the arts and cultural institutions, national museums, etc. Most of the “talent” not interested in arts or politics moved out of the area. Tobi mentioned that many best-selling business books are written about building unicorn companies in primary talent markets. He thought many of the ‘best practices’ for building a team encouraged in business books are not relevant to the average business. One of the common maxims you’ll come across is ‘hire people who are better than you at what you don’t like to do.’ When all the books, articles, blog posts, podcasts, etc., that we consume tell us this, it’s easy to believe that this is the only way to build a company. The problem is that these people are often too expensive or simply not available in a given talent pool. Most of us don’t have the luxury of hiring from a ready-made workforce. We have to hire from talent pools with weaker skill sets but, importantly, find people with the same amount of potential. Shopify realized this difference very early in its journey. Instead of focusing on hiring the best talent available, it built its business around hiring for potential and then developing that potential. Fixed vs. Growth Mindsets In secondary talent pools, Tobi explains, we need to create learning organizations. As much as a company aims to produce a product or service that people want and need, it also needs to build a culture that encourages learning and development. Shopify has created a hiring process that focuses on people’s potential rather than their skill. They look for people who will far exceed the role they are currently being hired for; they are looking for tomorrow’s company leaders. Because their focus is to hire based on potential, they need to hire people who have the capacity, and the desire, to reach their potential. Shopify differentiates between two types of people when it comes to potential: people with a fixed mindset and people with a growth mindset. People with a fixed mindset “believe their qualities are fixed traits and, therefore, cannot change. These people document their intelligence and talents rather than working to develop and improve them. They also believe that talent alone leads to success, and effort is not required.” — Unknown. While people with a growth mindset “have an underlying belief that their learning and intelligence can grow with time and experience. When people believe they can become smarter, they realize that their effort has an effect on their success, so they put in extra time, leading to higher achievement.” — Unknown. We can see this in many areas of life. My housemate is a personal trainer, and time and again he faces this difference in mindset. Some of his clients believe they are the way they are, and they can’t be helped.
In comparison, others look forward to improving and watching their growth. I’ve noticed people with a fixed mindset are on defense. “I can’t” or “I don’t have time” or “I’m not XYZ” There’s always a reason not to. In contrast, people with a growth mindset are open to developing themselves and exploring new opportunities. Fixed Mindset people make excuses and pass responsibility. Growth mindset people find a way and take responsibility. Shopify internalised this distinction between people’s mindsets and used it to create ‘The Shopify Way.’ A system they have developed to hire for potential and develop that potential into world-class talent. THE SHOPIFY WAY This is how it works. They hire for potential. They look for in others what others don’t see in themselves. They help people develop a growth mindset. They give them a Shopify education: Company history, previous mistakes, employees who previously held a fixed mindset. Reasons for doing what the company does. Not just saying this is the way it is, and you need to accept it. Instead, they provide context and explanations so people can find the reason by themselves. They develop their skill sets. Only then do they focus on building a person’s necessary skills. “Shopify aims to help people fulfill their potential 10–20 years earlier than they otherwise would have”. They support them with mentors. One skilled person is paired with five unskilled workers. Mentorship is essential in helping inexperienced employees navigate the nuances of personal growth. We don’t develop the same way and come unstuck at different points. We need people to support us through these moments. They give them challenges designed to push their staff past what they thought was previously possible. “Hey, we have this problem, and it’s vital for our company’s continued success, and we think you’re the right person for the job.” They remove self-imposed boundaries that people put on themselves. Hire for potential, unlock a growth mindset, and support the journey. It works for Shopify and could work for you.
https://medium.com/the-innovation/why-shopify-hires-for-potential-not-talent-and-how-you-can-too-231f2fab2f37
['Rhys Jeffery']
2020-12-17 15:03:37.899000+00:00
['Hiring', 'Mindset', 'Talent', 'Human Resources', 'Entrepreneurship']
The World Won’t Cry With You
Leaving my dreams behind, I walked a thousand miles. To see if the world would buy my smiles. I lost my passion; I lost my will. But nothing moved, and watching my pain; the world stood still.
https://medium.com/afwp/the-world-wont-cry-with-you-65ccda88daad
['Darshak Rana']
2020-11-29 15:31:55.089000+00:00
['Life Lessons', 'Motivation', 'Poetry', 'Life', 'Philosophy']
Project Journal, Week 5. Welcome to our DATA360 team blog! This…
Welcome to our DATA360 team blog! This blog will document the journey of our investigation into interesting aspects of crime in Chicago. The city of Chicago. Credits: Fox News We started our project by brainstorming the topic we would like to dig into. KD was interested in crime in general, so we decided to investigate crime. Aside from that, Chicago is known by many people as a city of crime and violence. With that in mind, we agreed to make ‘Crime in Chicago’ our topic. Data Mining For the first week, we found some related and interesting datasets on Kaggle and other sources. Kaggle, a data-sharing community. Source: Kaggle Crimes of Chicago The first dataset we found interesting is Crimes in Chicago, an enormous BigQuery dataset which consists of crime data from 2001 to 2017. This dataset contains more than 6,000,000 rows of incident data (yes, 6 million). We are not yet sure to what extent we can use this dataset, but we are sure that we can do many great things with it. We figured we can merge other datasets with this one to tell interesting stories. Other Datasets Other datasets we found include the temperature at Chicago’s Midway Airport from 2000–2019. We found an interesting pattern connecting Crimes in Chicago and the Midway Airport temperature data, which we will share in our next blog post. Additionally, we gathered Chicago’s gasoline prices from 2000 to 2019 and Chicago’s unemployment rate from 1990 to 2019. We have not done any analysis with those two datasets yet, but we expect to find some interesting correlations once we do. That’s what we’ve got for this week’s blog! There will be more, but we will save the fun for next time. I hope you enjoy this blog post. Please comment below on what you found interesting or what you want to suggest about our project!
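As a rough illustration of the kind of merge described above (daily crime counts joined with daily temperature), here is a hypothetical pandas sketch; the file names and column names are placeholders, not taken from the actual Kaggle datasets.

```python
import pandas as pd

# Placeholder file and column names -- the real datasets may differ.
crimes = pd.read_csv("chicago_crimes_2001_2017.csv", parse_dates=["Date"])
temps = pd.read_csv("midway_airport_temperature.csv", parse_dates=["Date"])

# Count incidents per calendar day, then join with the daily temperature readings.
daily_crimes = (
    crimes.assign(Day=crimes["Date"].dt.date)
          .groupby("Day")
          .size()
          .reset_index(name="incident_count")
)
daily_temps = temps.assign(Day=temps["Date"].dt.date)[["Day", "TemperatureF"]]

merged = daily_crimes.merge(daily_temps, on="Day", how="inner")
print(merged.corr(numeric_only=True))  # e.g., how temperature relates to daily crime counts
```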
https://medium.com/augie-data360-chicago-crime-analysis/project-journey-week-5-74c3d49ebce3
['Minh Ta']
2019-05-02 04:07:13.405000+00:00
['Chicago Crime', 'Chicago', 'Data Science', 'Kaggle', 'Bigquery']
The Top Online Data Science Courses for 2019
After 80+ hours of watching course videos, doing quizzes and assignments, and reading reviews on various aggregators and forums, I’ve narrowed down the best data science courses available to the list below. TL;DR The best data science courses: Criteria The selections here are geared more towards individuals getting started in data science, so I’ve filtered courses based on the following criteria: The course goes over the entire data science process The course uses popular open-source programming tools and libraries The instructors cover the basic, most popular machine learning algorithms The course has a good combination of theory and application The course needs to either be on-demand or available every month or so There are hands-on assignments and projects The instructors are engaging and personable The course has excellent ratings — generally, greater than or equal to 4.5/5 There are a lot more data science courses than when I first started this page four years ago, and so there needs to be a substantial filter to determine which courses are the best. I hope you feel confident that the courses below are truly worth your time and effort, because it will take several months (or more) of learning and practice to become a data science practitioner. In addition to the top general data science course picks, I have included a separate section for more specific data science interests, like Deep Learning, SQL, and other relevant topics. These are courses with a more specialized approach, and don’t cover the whole data science process, but they are still the top choices for that topic. These extra picks are good for supplementing before, after, and during the main courses. Resources you should use when learning When learning data science online it’s important not only to get an intuitive understanding of what you’re actually doing, but also to get sufficient practice using data science on unique problems. In addition to the courses listed below, I would suggest reading two books: Introduction to Statistical Learning — available for free — one of the most widely recommended books for beginners in data science. It explains the fundamentals of machine learning and how everything works behind the scenes. Applied Predictive Modeling — a breakdown of the entire modeling process on real-world datasets, with incredibly useful tips at each step of the way. These two textbooks are incredibly valuable and provide a much better foundation than just taking courses alone. The first book is incredibly effective at teaching the intuition behind much of the data science process, and if you are able to understand almost everything in there, then you’re better off than most entry-level data scientists. QUICK TIP Use Video Speed Controller for Chrome to speed up any video. I usually choose between 1.5x–2.5x speed depending on the content, and use the “s” (slow down) and “d” (speed up) key shortcuts that come with the extension. Now to an overview and review of each course. 1. Data Science Specialization — JHU @ Coursera This course series is one of the most-enrolled and highly rated course collections in this list. JHU did an incredible job with the balance of breadth and depth in the curriculum. One thing that’s included in this series and usually missing from many data science courses is a complete section on statistics, which is the backbone of data science. Overall, the Data Science Specialization is an ideal mix of theory and application using the R programming language.
As far as prerequisites go, you should have some programming experience (it doesn’t have to be R) and a good understanding of algebra. Previous knowledge of linear algebra and/or calculus isn’t necessary, but it is helpful. Price — Free or $49/month for certificate and graded materials Provider — Johns Hopkins University Curriculum: The Data Scientist’s Toolbox R Programming Getting and Cleaning Data Exploratory Data Analysis Reproducible Research Statistical Inference Regression Models Practical Machine Learning Developing Data Products Data Science Capstone If you’re rusty with statistics and/or want to learn more R first, check out the Statistics with R Specialization as well. 2. Introduction to Data Science — Metis An extremely highly rated course — 4.9/5 on SwitchUp and 4.8/5 on CourseReport — which is taught live by a data scientist from a top company. This is a six-week-long data science course that covers everything in the entire data science process, and it’s the only live online course in this list. Furthermore, not only will you get a certificate upon completion, but since this course is also accredited, you’ll receive continuing education units as well. Two nights per week, you’ll join the instructor with other students to learn data science as if it were an online college course. Not only are you able to ask questions, but the instructor also holds extra office hours to further help those students who might be struggling. Price — $750 The curriculum: Computer Science, Statistics, Linear Algebra Short Course Exploratory Data Analysis and Visualization Data Modeling: Supervised/Unsupervised Learning and Model Evaluation Data Modeling: Feature Selection, Engineering, and Data Pipelines Data Modeling: Advanced Supervised/Unsupervised Learning Data Modeling: Advanced Model Evaluation and Data Pipelines | Presentations For prerequisites, you’ll need to know Python, some linear algebra, and some basic statistics. If you need to work on any of these areas, Metis also has Beginner Python and Math for Data Science, a separate live online course just for learning the Python, statistics, probability, linear algebra, and calculus needed for data science. 3. Applied Data Science with Python Specialization — UMich @ Coursera The University of Michigan, which also launched an online data science Master’s degree, produced this fantastic specialization focused on the applied side of data science. This means you’ll get a strong introduction to commonly used data science Python libraries, like matplotlib, pandas, nltk, scikit-learn, and networkx, and learn how to use them on real data. This series doesn’t include the statistics needed for data science or the derivations of various machine learning algorithms, but it does provide a comprehensive breakdown of how to use and evaluate those algorithms in Python. Because of this, I think it would be more appropriate for someone who already knows R and/or is learning the statistical concepts elsewhere. If you’re rusty with statistics, consider the Statistics with Python Specialization first. You’ll learn many of the most important statistical skills needed for data science.
Price — Free or $49/month for certificate and graded materials Provider — University of Michigan Courses: Introduction to Data Science in Python Applied Plotting, Charting & Data Representation in Python Applied Machine Learning in Python Applied Text Mining in Python Applied Social Network Analysis in Python To take these courses, you’ll need to know some Python or programming in general, and there are actually a couple of great lectures in the first course dealing with some of the more advanced Python features you’ll need to process data effectively. 4. Dataquest Dataquest is a fantastic resource on its own, but even if you take other courses on this list, Dataquest serves as a superb complement to your online learning. Dataquest foregoes video lessons and instead teaches through an interactive textbook of sorts. Every topic in the data science track is accompanied by several in-browser, interactive coding steps that guide you through applying the exact topic you’re learning. Video-based learning is more “passive” — it’s very easy to think you understand a concept after watching a 2-hour long video, only to freeze up when you actually have to put what you’ve learned in action. — Dataquest FAQ To me, Dataquest stands out from the rest of the interactive platforms because the curriculum is very well organized, you get to learn by working on full-fledged data science projects, and there’s a super active and helpful Slack community where you can ask questions. The platform has one main data science learning curriculum for Python: Data Scientist In Python Path This track currently contains 31 courses, which cover everything from the very basics of Python, to statistics, to the math for machine learning, to deep learning, and more. The curriculum is constantly being improved and updated for a better learning experience. Price — 1/3 of content is free, $29/month for Basic, $49/month for Premium Here’s a condensed version of the curriculum: Python — Basic to Advanced Python data science libraries — Pandas, NumPy, Matplotlib, and more Visualization and Storytelling Effective data cleaning and exploratory data analysis Command line and Git for data science SQL — Basic to Advanced APIs and Web Scraping Probability and Statistics — Basic to Intermediate Math for Machine Learning — Linear Algebra and Calculus Machine Learning with Python — Regression, K-Means, Decision Trees, Deep Learning and more Natural Language Processing Spark and Map-Reduce Additionally, there are entire data science projects scattered throughout the curriculum. Each project’s goal is to get you to apply everything you’ve learned up to that point and to get you familiar with what it’s like to work on an end-to-end data science strategy. Lastly, if you’re more interested in learning data science with R, then definitely check out Dataquest’s new Data Analyst in R path. The Dataquest subscription gives you access to all paths on their platform, so you can learn R or Python (or both!). 5. Statistics and Data Science MicroMasters — MIT @ edX MicroMasters programs from edX are advanced, graduate-level courses that carry real credits you can apply toward a select number of graduate degrees. The inclusion of probability and statistics courses makes this series from MIT a very well-rounded curriculum for being able to understand data intuitively. Due to its advanced nature, you should have experience with single and multivariate calculus, as well as Python programming.
There isn’t any introduction to Python or R like in some of the other courses in this list, so before starting the ML portion, they recommend taking Introduction to Computer Science and Programming Using Python to get familiar with Python. Price — Free or $1,350 for credential and graded materials Provider — Massachusetts Institute of Technology Courses: Probability — The Science of Uncertainty and Data Data Analysis in Social Science — Assessing Your Knowledge Fundamentals of Statistics Machine Learning with Python: from Linear Models to Deep Learning Capstone Exam in Statistics and Data Science The ML course has several interesting projects you’ll work on, and at the end of the whole series you’ll take one exam to wrap everything up. 6. CS109 Data Science — Harvard Screenshot from lecture: https://matterhorn.dce.harvard.edu/engage/player/watch.html?id=e15f221c-5275-4f7f-b486-759a7d483bc8 With a great mix of theory and application, this course from Harvard is one of the best for getting started as a beginner. It’s not on an interactive platform, like Coursera or edX, and doesn’t offer any sort of certification, but it’s definitely worth your time and it’s totally free. Curriculum: Web Scraping, Regular Expressions, Data Reshaping, Data Cleanup, Pandas Exploratory Data Analysis Pandas, SQL and the Grammar of Data Statistical Models Storytelling and Effective Communication Bias and Regression Classification, kNN, Cross Validation, Dimensionality Reduction, PCA, MDS SVM, Evaluation, Decision Trees and Random Forests, Ensemble Methods, Best Practices Recommendations, MapReduce, Spark Bayes Theorem, Bayesian Methods, Text Data Clustering Effective Presentations Experimental Design Deep Networks Building Data Science Python is used in this course, and there are many lectures going through the intricacies of the various data science libraries to work through real-world, interesting problems. This is one of the only data science courses around that actually touches on every part of the data science process. 7. Python for Data Science and Machine Learning Bootcamp — Udemy Also available using R. A very reasonably priced course for the value. The instructor does an outstanding job explaining the Python, visualization, and statistical learning concepts needed for all data science projects. A huge benefit of this course over other Udemy courses is the assignments. Throughout the course you’ll break away and work on Jupyter notebook workbooks to solidify your understanding, and then the instructor follows up with a solutions video to thoroughly explain each part. Curriculum: Python Crash Course Python for Data Analysis — Numpy, Pandas Python for Data Visualization — Matplotlib, Seaborn, Plotly, Cufflinks, Geographic plotting Data Capstone Project Machine learning — Regression, kNN, Trees and Forests, SVM, K-Means, PCA Recommender Systems Natural Language Processing Big Data and Spark Neural Nets and Deep Learning This course focuses more on the applied side, and one thing missing is a section on statistics. If you plan on taking this course, it would be a good idea to pair it with a separate statistics and probability course as well. An honorary mention goes out to another Udemy course: Data Science A-Z. I do like Data Science A-Z quite a bit due to its complete coverage, but since it uses other tools outside of the Python/R ecosystem, I don’t think it fits the criteria as well as Python for Data Science and Machine Learning Bootcamp.
Other top data science courses for specific skills Deep Learning Specialization — Coursera Created by Andrew Ng, maker of the famous Stanford Machine Learning course, this is one of the highest rated data science courses on the internet. This course series is for those interested in understanding and working with neural networks in Python. SQL for Data Science — Coursera Pair this with Mode Analytics SQL Tutorial for a very well-rounded introduction to SQL, an important and necessary skill for data science. Mathematics for Machine Learning — Coursera This is one of the most highly rated courses dedicated to the specific mathematics used in ML. Take this course if you’re uncomfortable with the linear algebra and calculus required for machine learning, and you’ll save some time over other, more generic math courses. How to Win a Data Science Competition — Coursera One of the courses in the Advanced Machine Learning Specialization. Even if you’re not looking to participate in data science competitions, this is still an excellent course for bringing together everything you’ve learned up to this point. This is more of an advanced course that teaches you the intuition behind why you should pick certain ML algorithms, and even goes over many of the algorithms that have been winning competitions lately. Bayesian Statistics: From Concept to Data Analysis — Coursera Bayesian, as opposed to Frequentist, statistics is an important subject to learn for data science. Many of us learned Frequentist statistics in college without even knowing it, and this course does a great job comparing and contrasting the two to make it easier to understand the Bayesian approach to data analysis. Spark and Python for Big Data with PySpark — Udemy From the same instructor as the Python for Data Science and Machine Learning Bootcamp in the list above, this course teaches you how to leverage Spark and Python to perform data analysis and machine learning on an AWS cluster. The instructor makes this course really fun and engaging by giving you mock consulting projects to work on, then going through a complete walkthrough of the solution. Learning Guide How to actually learn data science When joining any of these courses you should make the same commitment to learning as you would towards a college course. One goal for learning data science online is to maximize mental discomfort. It’s easy to get caught in the habit of signing in to watch a few videos and feel like you’re learning, but you’re not really learning much unless it hurts your brain. Vik Paruchuri (from Dataquest) produced this helpful video on how to approach learning data science effectively: Essentially, it comes down to doing what you’re learning, i.e. when you take a course and learn a skill, apply it to a real project immediately. Working through real-world projects that you are genuinely interested in helps solidify your understanding and provides you with proof that you know what you’re doing. One of the most uncomfortable things about learning data science online is that you never really know when you’ve learned enough. Unlike in a formal school environment, when learning online you don’t have many good barometers for success, like passing or failing tests or entire courses. Projects help remediate this by first showing you what you don’t know, and then serving as a record of knowledge when it’s done. All in all, the project should be the main focus, and courses and books should supplement that. 
When I first started learning data science and machine learning, I began (as a lot do) by trying to predict stocks. I found courses, books, and papers that taught the things I wanted to know, and then I applied them to my project as I was learning. I learned so much in a such short period of time that it seems like an improbable feat if laid out as a curriculum. It turned out to be extremely powerful working on something I was passionate about. It was easy to work hard and learn nonstop because predicting the market was something I really wanted to accomplish. Essential knowledge and skills Source: Udacity There’s a base skill set and level of knowledge that all data scientists must possess, regardless of what industry they’re in. For hard skills, you not only need to be proficient with the mathematics of data science, but you also need the skills and intuition to understand data. The Mathematics you should be comfortable with: Algebra Statistics (Frequentist and Bayesian) Probability Linear Algebra Basic calculus Optimization Furthermore, these are the basic programming skills you should be comfortable with: Python or R, SQL Extracting data from various sources, like SQL databases, JSON, CSV, XML, and text files Cleaning and transforming unstructured, messy data Effective Data visualization Machine learning — Regression, Clustering, kNN, SVM, Trees and Forests, Ensembles, Naive Bayes Lastly, it’s not all about the hard skills; there’s also many soft skills that are extremely important and many of them aren’t taught in courses. These are: Curiosity and creativity Communication skills — speaking and presenting in front of groups, and being able to explain complex topics to non-technical team members Problem solving — coming up with analytical solutions for business problems Python vs. R After going through the list you might have noticed that each course is dedicated to one language: Python or R. So which one should you learn? Short answer: just learn Python, or learn both. Python is an incredibly versatile language, and it has a huge amount of support in data science, machine learning, and statistics. Not only that, but you can also do things like build web apps, automate tasks, scrape the web, create GUIs, build a blockchain, and create games. Because Python can do so many things, I think it should be the language you choose. Ultimately, it doesn’t matter that much which language you choose for data science since you’ll find many jobs looking for either. So why not pick the language that can do almost anything? In the long run, though, I think learning R is also very useful since many statistics/ML textbooks use R for examples and exercises. In fact, both books I mentioned at the beginning use R, and unless someone translates everything to Python and posts it to Github, you won’t get the full benefit of the book. Once you learn Python, you’ll be able to learn R pretty easily. Check out this StackExchange answer for a great breakdown of how the two languages differ in machine learning. Are certificates worth it? One big difference between Udemy and other platforms, like edX, Coursera, and Metis, is that the latter offer certificates upon completion and are usually taught by instructors from universities. Some certificates, like those from edX and Metis, even carry continuing education credits. Other than that, many of the real benefits, like accessing graded homework and tests, are only accessible if you upgrade. 
If you need to stay motivated to complete the entire course, committing to a certificate also puts money on the line so you’ll be less likely to quit. I think there’s definitely personal value in certificates, but, unfortunately, not many employers value them that much. Coursera and edX vs. Udemy Udemy does not currently have a way to offer certificates, so I generally find Udemy courses to be good for more applied learning material, whereas Coursera and edX are usually better for theory and foundational material. Whenever I’m looking for a course about a specific tool, whether it be Spark, Hadoop, Postgres, or Flask web apps, I tend to search Udemy first since the courses favor an actionable, applied approach. Conversely, when I need an intuitive understanding of a subject, like NLP, Deep Learning, or Bayesian Statistics, I’ll search edX and Coursera first. Wrapping Up Data science is a vast, interesting, and rewarding field to study and be a part of. You’ll need many skills, a wide range of knowledge, and a passion for data to become an effective data scientist that companies want to hire, and it’ll take longer than the hyped-up YouTube videos claim. If you’re more interested in the machine learning side of data science, check out the Top 5 Machine Learning Courses for 2019 as a supplement to this article. If you have any questions or suggestions, feel free to leave them in the comments below. Thanks for reading and have fun learning! Originally published at learndatasci.com.
https://medium.com/free-code-camp/top-7-online-data-science-courses-for-2019-e4afdc4693e7
[]
2019-05-02 20:18:44.339000+00:00
['Artificial Intelligence', 'Machine Learning', 'Technology', 'Data Science', 'Programming']
Make Passive Income Programming — 5 Incomes for Software Developers
Wouldn’t it be beautiful to get paid to do something that you love? Better yet, what if that thing could passively generate you a hefty chunk of change every year? Well, if you’re one of the lucky souls that found a passion for programming then I have good news for you. There are a ton of ways for software developers to make passive income programming. While additionally reaping many other benefits for their career as well. As a self-taught software developer who has a Bachelor of Commerce degree, I felt obligated to share the knowledge I have with the community. So without further ado, here are five ways you can turn your coding abilities into another passive income stream. 1. Build Software Hopefully, it doesn’t come as a surprise that building software is the first method on this list. I mean, this is what we do! The great thing about creating software is that once it’s built (and relatively bug-free), there isn’t much more work you need to put into it. Especially if that software only has one purpose and doesn’t require additional features being implemented. So how can we turn software development into a passive income stream? Well, there are a few approaches we could follow. Personal Projects The first way to make money building software is by creating your own software. Something that people will actually find useful. Then selling that invention either as a SaaS or through advertising within the platform. This can literally be anything. Is there something you wish existed that made your life easier? Does something exist but could be done better? As long as you can solve some specific pain points and there is a demand for the software, there’s a chance you can monetize it! For an example of this approach, check out Glide.js. The developers at Glide.js realized there was a lot of demand for a javascript slider library with a very small codebase (~23kb). So they decided to build a library that makes slider development trivial without bloating your codebase. Since they were first to market a product like this, they were able to build a network of developers that use and recommend their software. Now anyone that views the documentation page gets greeted with a non-obtrusive carbon ad that earns money passively. In addition, they also have a donation page if you feel inclined to support them. The great thing about building software as passive income is that you can use this as an opportunity to learn that new language or framework you’ve been putting off. Which would make a great addition to your portfolio and expand your knowledge. Not only that, whatever you build could make somebody’s life easier. Allowing you to contribute back to the community that gives you so much. Doesn’t that sound amazing? I think it does. If you are a new developer or always get stuck on the process of building software, check out my guide on How To Plan a Coding Project — A Programming Outline. It’s a way to approach software development by breaking it down into steps (much like the concept of programming). Partnering Up The second way to make money building software would be to partner up with a business owner or entrepreneur that has a great idea for an application. Preferably a simple one that doesn’t require a significant amount of development time. Now here is where this becomes a passive stream and not just another freelance client. Agree on a contract that gives you a percentage of the incoming revenue or profit for the product. 
If additions need to be made for the software, you can either include a fixed amount of working hours a month, work at a reduced hourly rate, or outsource a developer. Whatever you decide, once the product is built, the business aspect is now out of your hands. No need to worry about marketing or sales. Just for your monthly royalty checks! Do this for a few pieces of software and pretty soon you will have a pretty great passive income stream. You might be wondering how you can find a business partner like this? Well, there is no shortage of great business ideas from entrepreneurial-minded individuals. A great place to start for this could be r/Entrepreneur or any forum board or group that business professionals might hang out. In my personal opinion, finding a business partner online can be a little.. sketchy. Personally, I believe working with local businesses’ can be a much safer route to go down. Most of them might not have a great app idea but they do have products to sell. Which brings me to our next great passive income stream. 2. eCommerce & Shopify If you have ever thought about diving into the world of eCommerce, now is the time. There are many businesses, both local and abroad, that could benefit from providing an online outlet for their storefront. Following the methodology above, you can very easily make passive income by building eCommerce stores with Shopify. Offer to build the stores for free, walk them through importing products, and in return, receive a small percentage of the revenue. This approach is easy to sell businesses on because it is has a very low-risk factor. If the store makes less than expected, the business owner is no worse off than before. Making it a much easier sell. So, how does eCommerce with Shopify differ from building any other type of software? Great question. I am going to answer that. The Benefits of Shopify The first reason is that it can be dead simple to build an eCommerce store with Shopify. I built my first Shopify store within a few weeks at the beginning of my career as a developer. That store generated +$75K in the first year. In addition, the liquid templating language used by Shopify is very intuitive to pick up and makes it easy to build out frontend that displays product data. There is also a plethora of tools available to make development easier while financial data is all handled by Shopify. Making the process as smooth as possible. I am not afraid to admit that in my early days I assumed Shopify was a joke platform. Meant solely for ambitious but delusional dropshippers that didn’t know the first thing about programming or business. After my experience with the platform, I can confidently say I love Shopify and what they have done for eCommerce and their developers. I can honestly say I couldn’t imagine building another Frankenstein WooCommerce site again. Shopify Partner Program Shopify also has a partnership program that revolves around the idea of passive income. There are plenty of ways to make passive income with Shopify. Whether it’s building tools, referring store owners or developing customer stores yourself. That’s right, in addition to working out a revenue model with your clients, Shopify also pays you recurring revenue based on your client’s Shopify plan. Better yet, you have access to each store’s dashboard so you can work out how much your clients owe you every month. Seriously. Shopify may be your best bet in making passive income. Especially if you excel in frontend development. 
Here is a screenshot of the $66 USD I made this year with Shopify, along with one of my client’s store that made ~$75K this year. Even by making 5% of revenue with this store, you would receive ~$3,500 in completely passive income. Obviously, don’t expect every store to make this kind of revenue, but if you’re smart about it, and pick your businesses right, you could make it a full-time job! 3. Start A Development Blog Looking for a long term strategy? Starting a development blog can be a great way to earn passive income programming. It is also a great way to stay up to date on current technologies, help beginners with the knowledge you have and improve your writing skills. I mean, who doesn’t love a developer who can actually write a decent README file? I know I do. You can check out my blog here: thecodebytes.com The truth about earning revenue from a blog is that it can take a lot of time and effort to build a following and reap any sort of benefits. However, it can definitely be done. A close friend of mine actually earned +$7K from blogging in 2019 and has been growing ever since. There are essentially three ways to make money passively writing about code. I’ll walk through them for you. Advertising I know this has been mentioned but this is probably the easiest of the monetization methods mentioned. Advertising partners such as Adsense or Monumetric allow you to display advertisements on your blog and get paid passively! It really doesn’t get much easier than that. The only challenge from there is making sure your content is of high quality and by building an audience. Affiliates Another popular way for bloggers to make money is with affiliate programs. Affiliates are essentially links pointing to products or services that you partner with. If someone signs up for an affiliate from your unique URL, the partner will give you some form of compensation. Amazon has a popular affiliate program but if you’re looking for something closer related to the development-sphere, Shopify, Codeacademy and probably any other platform that has a large following would be a great place to start. 3rd Party Sites In addition to your personal blog, I also wanted to state that there are third party writing platforms that can help you earn money by writing about code. I personally use Medium, but there are a lot of sites out there. Dev.to and Hackernoon are two cool platforms that allow you to cross-post from your own blog. Allowing you to link back to your original content while still helping the community. A big win/win if you ask me! How much money can you make with Medium? Well, it depends on how much you write and whether or not the post goes viral. I haven’t written much on medium but I wanted to include a shameless screenshot for full transparency. As you can see, in total I earned around $22.29 from my three articles. This number isn’t great, but if you spent some serious time writing articles, this number would add up. My articles are continuing to make money as well. You can check out what I write about here. Important Note: If you are interested in making money with Medium. Make sure you sign up for the Medium Partnership Program or you won’t get paid. 4. Online Tutoring Videos The fourth way to make passive income programming is through online tutoring. If you are more of a visual and outgoing person, video content is the way to go (aren’t all programmers outgoing?). The best part about video content is that it is re-usable. You record it once and then it is easily distributable forever. 
There are two main forms of online tutoring videos. YouTube YouTube follows a similar revenue model to blogging. Making most of your money off of advertising or affiliate sales. It also works by building a consistent following and growing your account. For that reason, I won’t talk too much about it. If you are really looking to grow your passive income programming streams, building both a blog and a YouTube channel while cross-posting wouldn’t be a bad idea! Allowing you to grow two revenue streams at once. Massive Open Online Courses A second way to make money with online tutoring is with MOOCs (Massive Open Online Courses). These courses allow you as a developer to make a course and share it online for anyone to view for a set price. If you are a good developer and have gained a decent following online (through YouTube or blogging), selling a MOOC is a very realistic way to make passive income. Figuring out what to make a course about is a balancing act between what is in demand and what has low competition. If there is something you are highly skilled in that many developers want to know more about, this could be a great idea for a course. How much money can you make with MOOCs? Honestly, the sky’s the limit. Take Brad Traversy’s MERN Stack Front To Back. With 42,945 students x ~$20 per student, that works out to roughly $860,000 from a single course. Obviously, he has spent a lot of time building up his audience by consistently providing quality content. However, you can see the height of the pay ceiling. It’s never been easier to make online courses about code. Sites like Udemy don’t even require a degree to become a teacher. Just sign up and upload your content. The student reviews will be the deciding factor on whether or not your content is worth paying for. *Your students analyzing your content quality* 5. Outsourcing Freelancing Clients This brings us to the final passive income idea for developers. Finding and outsourcing freelance clients. When I first started out as a self-taught developer, the only work I could find was as a freelancer. It was actually very difficult to find my first client that paid well. But after that, it became significantly easier and easier to find new clients. Mainly due to my growing portfolio and word of mouth. So much so that I had to start turning down offers because I could not work fast enough to take on additional clients. The solution? Start outsourcing freelance clients to other developers. As a programmer, you have two key characteristics for this to work. First, you know what tools and approximate time frames it would take to get something done. Second, you also have the skillset to find other developers and vet them for their abilities. By outsourcing developers that are willing to work for a reduced rate, you can essentially be a middleman for clients. Taking freelance offers, sending them to your developers and sending the finished work back to your clients. This is a mutually beneficial scenario. Clients like someone in their time zone who is available, fluent in English and gets the job done on time, on budget and with good coding standards. You can be the one to bridge the gap for developers overseas. Allowing them to make a liveable wage and yourself to make passive income. Now, I know that this last one isn’t technically passive. However, if you can scale it, eventually you could also hire someone to take over the management aspect. At this point, you are essentially just running a business. However, it would be a passive business. 
Just something to keep in mind. Closing Remarks So there you have it. Five ways to make passive income with programming that will actually make you money. As a developer, you are blessed with a high barrier to entry that makes it very difficult for a non-technical person to make money within this niche. Giving you much better odds of making an income without worrying about steep competition. Now, I am not saying these methods will be easy. People often confuse passive income with easy income. However, is anything worth doing ever easy? We all know programming isn’t. I hope I have proven that with enough hard work upfront, you can reap the benefits of passive income for years to come. Finally allowing you to quit your day job, save for that vacation or simply invest some extra money. I really don’t care what you do. I just wanted to let you know that these options are always available to you. Because I love you. So you’re welcome. If you are a beginner coder, check out my article on Become a Professional Full Stack Web Developer in 2020. It should give you a good starting point if you want to delve into the wide world of Web Development. Happy coding!
https://medium.com/swlh/make-passive-income-programming-5-incomes-for-software-developers-fd605395db71
['Grant Darling']
2020-12-26 17:23:09.570000+00:00
['Passive Income', 'Programming', 'Web Development', 'Make Money', 'Entrepreneurship']
YouTube Gave Me an Award and I Hated It
When I started at university, I thought I would be incredibly proud if I got a master’s degree. But as I slowly got closer, it seemed less and less impressive. Similarly, before I was accepted into the conservatory, it seemed inconceivable to me that I could ever do something so fantastic as getting a degree there. But I’m about to start my third year and not only am I doing it — I’m surrounded by people who are also doing it. It suddenly doesn’t feel so special anymore. Imagine a mountain climber who is about to reach the peak of a mountain and goes: ‘But… this isn’t impressive at all. It’s just three lousy little steps.’ We forget where we have come from. We forget to look behind us and see the distance we’ve already crossed. We forget to be grateful to our former selves — for all the times we stuck with it, when it was difficult, but especially when it was easy. “The journey of a thousand miles begins with a single step. And then… like a million more steps.” – Felicity Ward This is really funny, but it’s actually kind of profound. It’s as if we think that in order to be truly fulfilled, something needs to feel like a Heroic Effort, when in reality it’s the thousands of tiny steps (some of which we took because we had no choice) that got us to where we are. If you study for every test because there is a professor forcing you, that doesn’t mean you’re not actually passing the tests. It means you set yourself up in a way that didn’t allow you to slack off. Just because I have a natural aptitude for languages, that doesn’t mean that A for French at the conservatory is meaningless. And just because I enjoyed making mermaid videos and it never really felt like work, that doesn’t mean I’m not allowed to be proud of reaching a milestone. Me with my Silver Play Button. Yay. I’m not an unhappy achiever. I’m an ungrateful one. I think I’ll maybe just hang that silver play button somewhere and practise gratefulness. Not gratefulness for all the effort and blood/sweat/tears it took, but gratefulness for the fact that sometimes, things appear to have come easy — and that’s okay.
https://medium.com/the-ascent/youtube-gave-me-an-award-and-i-hated-it-36f36c752a93
['Stella Brüggen']
2020-09-06 17:01:01.892000+00:00
['Psychology', 'Happiness', 'Careers', 'Self Improvement', 'Self']
A Mid-Autumn Day’s Matinee
My play Translation by was just published here on Medium — you could say it’s “in previews” if you want to seem like a real theatre geek. In the traditions of Shakespeare in the Park and midweek midday matinee performances, I am unlocking it Wednesday (+ 3 others) for all to read, enjoy and share. Translation by — Doing my best not to spoil it (yet still sell it) I can say it is a comedy of errors centering on a group of diverse players trying to ready a translation of a work… and not fully succeeding.
https://medium.com/the-coffeelicious/a-mid-autumn-days-matinee-8c8b6a864b13
['Ernio Hernandez']
2017-11-22 12:30:39.225000+00:00
['Reading', 'Fiction', 'Writing', 'Culture', 'Play']
Visualization With Seaborn
Seaborn is a Python data visualization library based on Matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. It provides choices for plot style and color defaults, defines simple high-level functions for common statistical plot types, and integrates with the functionality provided by Pandas DataFrames. The main idea of Seaborn is that it provides high-level commands to create a variety of plot types useful for statistical data exploration, and even some statistical model fitting. 1.0.1 Table of Contents Creating basic plots Advanced Categorical plots in Seaborn Density plots Pair plots # importing required libraries import seaborn as sns sns.set() sns.set(style='darkgrid') import numpy as np import pandas as pd #importing matplotlib import matplotlib.pyplot as plt %matplotlib inline import warnings warnings.filterwarnings("ignore") plt.rcParams['figure.figsize']=(10,10) In this notebook, we will use the Big Mart Sales Data. You can download the data from Github: https://github.com/Yuke217 # read the dataset df = pd.read_csv("dataset/bigmart_data.csv") # drop the null values df = df.dropna(how="any") # View the top results df.head() 1.0.2 Creating basic plots Let's have a look at how you can create some basic plots in seaborn in a single line for which multiple lines were required in Matplotlib. 1.0.2.1 Line Chart With some datasets, you may want to understand changes in one variable as a function of time or a similarly continuous variable. In seaborn, this can be accomplished by the lineplot() function, either directly or with relplot by setting kind="line": # line plot using lineplot() sns.lineplot(x="Item_Weight", y="Item_MRP",data=df[:50]) 1.0.2.2 Bar chart In seaborn, you can create a bar chart by simply using the barplot() function. Notice that to achieve the same thing in Matplotlib, we had to write extra code just to group the data category wise. And then we had to write much more code to make sure that the plot comes out correct. sns.barplot(x="Item_Type", y="Item_MRP", data=df[:5]) 1.0.2.3 Histogram You can create a histogram in seaborn by using distplot(). sns.distplot(df['Item_MRP']) 1.0.2.4 Box plots You can use boxplot() for creating boxplots in seaborn. Let's try to visualize the distribution of Item_Outlet_Sales of items. sns.boxplot(df['Item_Outlet_Sales'], orient='vertical') 1.0.2.5 Violin plot A violin plot plays a similar role to a box and whisker plot. It shows the distribution of quantitative data across several levels of one (or more) categorical variables such that those distributions can be compared. Unlike a box plot, in which all of the plot components correspond to actual data points, the violin plot features a kernel density estimation of the underlying distribution. You can create a violin plot using violinplot() in seaborn. sns.violinplot(df['Item_Outlet_Sales'], orient='vertical') 1.0.2.6 Scatter plot It depicts the relationship between two variables using a cloud of points, where each point represents an observation in the dataset. You can use relplot() with the option of kind=scatter to plot a scatter plot in seaborn. Notice the default option is scatter. # scatter plot sns.relplot(x="Item_MRP", y="Item_Outlet_Sales", data = df[:200], kind="scatter") 1.0.2.7 Hue semantic We can also add another dimension to the plot by coloring the points according to a third variable. 
In seaborn, this is referred to as using a "Hue semantic". sns.relplot(x="Item_MRP", y="Item_Outlet_Sales", hue="Item_Type", data=df[:200]) Remember the line chart that we created earlier? When we have a hue semantic, we can create more complex line plots in seaborn. In the following example, different line plots for different categories of Outlet_Size are made. # different line plots for different categories of the Outlet_Size sns.lineplot(x="Item_Weight", y="Item_MRP", hue="Outlet_Size", data=df[:100]) 1.0.2.8 Bubble plot We utilize the hue semantic to color bubbles by their Item_Visibility and at the same time use it as the size of individual bubbles. # bubble plot sns.relplot(x="Item_MRP", y="Item_Outlet_Sales", data=df[:200],kind="scatter", size="Item_Visibility", hue="Item_Visibility") 1.0.2.9 Category wise sub plot You can also create plots based on category in seaborn. We have created scatter plots for each Outlet_Size. Now we create three plots based on the different Outlet_Size categories using col. # subplots for each of the category of Outlet_Size sns.relplot(x="Item_Weight", y="Item_Visibility", hue= 'Outlet_Size',col ="Outlet_Size",data=df[:100] ) 1.1 2. Advanced categorical plots in seaborn For categorical variables we have three different families in seaborn. Categorical scatterplots: stripplot() (with kind="strip"; the default) swarmplot() (with kind="swarm") Categorical distribution plots: boxplot() (with kind="box") violinplot() (with kind="violin") boxenplot() (with kind="boxen") Categorical estimate plots: pointplot() (with kind="point") barplot() (with kind="bar") The default representation of the data in catplot() uses a scatterplot. 1.1.1 a. Categorical scatterplots 1.1.1.1 Strip plot Draws a scatterplot where one variable is categorical. You can create this by passing kind="strip" to catplot(). sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="strip", data=df[:250]) 1.1.1.2 Swarm plot This function is similar to stripplot(), but the points are adjusted so that they don't overlap. This gives a better representation of the distribution of values, but it does not scale well to large numbers of observations. This style of plot is sometimes called a "beeswarm". You can create this by passing kind="swarm" to catplot(). sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind='swarm',data=df[:250]) 1.1.2 b. Categorical distribution plots 1.1.2.1 Box Plots A box plot shows the three quartile values of the distribution along with the extreme values. The "whiskers" extend to points that lie within 1.5 IQRs of the lower and upper quartile, and observations that fall outside this range are displayed independently. This means that each value in the boxplot corresponds to an actual observation in the data. sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="box", data=df) 1.1.2.2 Violin Plots sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="violin",data=df) 1.1.2.3 Boxen plots This style of plot was originally named a "letter value" plot because it shows a large number of quantiles that are defined as "letter values". It is similar to a box plot in plotting a nonparametric representation of a distribution in which all features correspond to actual observations. By plotting more quantiles, it provides more information about the shape of the distribution, particularly in the tails. 
sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="boxen",data=df) 1.1.2.4 Point plot sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales",kind="point",data=df) 1.1.2.5 Bar plots sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="bar",data=df) 1.2 3. Density Plots Rather than a histogram, we can get a smooth estimate of the distribution using a kernel density estimation, which Seaborn does with sns.dkeplot: A Density Plot visualises the distribution of data over a continuous interval or time period. Density plot allows for smoother distribution by smoothing out noise. The peaks of a Density Plot help display where values are concentrated over the interval. An advantage Density Plots have over Histograms is that they’re better at determining the distribution shape because they’re not affected by the number of bins used (each bar used in a typical histogram). # distribution of Item Visibility plt.figure(figsize=(10,10)) sns.kdeplot(df["Item_Visibility"],shade=True) # distribution of Item MRP plt.figure(figsize=(10,10)) sns.kdeplot(df["Item_MRP"],shade=True) 1.2.1 Histogram and Density Plot Histograms and KDE can be combined using distplot: plt.figure(figsize=(10,10)) sns.distplot(df['Item_Outlet_Sales']) 1.3 4. Pair plots When you generalize joint plots to datasets of larger dimentions, you end up with pair plots. This is very useful for exploring correlations between multidimensional data, when you’d like to plot all pairs of values against each other. We’ll demo this with the well-known Iris dataset, which lists measurements of petals and sepals of three iris species: iris = sns.load_dataset("iris") iris.head()
https://medium.com/analytics-vidhya/visualization-with-seaborn-e2d9cacd932b
['Yuke Liu']
2020-07-02 14:51:37.106000+00:00
['Data Science', 'Seaborn', 'Matplotlib', 'Data Visualization']
The Big Six Framework: How We Lowered The Cost Per Lead By 80%
The Two Most Common Problems In Advertising The two most common problems in advertising are found on opposite sides of the spectrum: ignorance and overwhelm. Ignorance It’s not uncommon for advertisers to be unaware of all of the factors necessary for successful advertising. This tends to lead to overoptimization where you double down on one or a few areas of the ad campaigns while neglecting other perhaps more important aspects. Overwhelm On the other hand, advertisers who are aware of all the data, options, tools, strategies and tactics that exist, tend to become overwhelmed. This leads to paralysis where you stop executing and start spending too much time analyzing. The Solution How do you solve these problems? First, you need to get a clear picture of the entire advertising landscape and all the factors involved. Second, you need a process for identifying the single largest bottleneck, so you can focus your attention where it’s needed most. And we’re going to show you how we can achieve both of these objectives. Introducing The Big Six Framework Based on years of failing, succeeding and learning, we’ve created a powerful framework that allows us to cut through complexity and consistently produce results for our clients. We call it the Big Six: The Simple Framework That Produces Staggering Results Variables Variables are basically all the components of effective advertising. All of them can be manipulated to improve performance. Depending on the audience, product, etc., some variables may be more important than others. Mechanism The mechanisms are the way in which the variables are manipulated. Metrics Metrics are used for diagnosing and tell us whenever there’s a problem with a variable. With relevant benchmarks and targets, metrics allow us to easily identify the largest bottleneck(s). Note: Like all frameworks, this one is designed to simplify a complex reality. For best results, use with judgment!😊 All right, that’s enough theory! Let’s see how we used this framework for one of our clients to systematically lower the CPL by over 80%. Applying The Framework Planning The Campaign📊 We were approached by a company in the health and fitness space looking to acquire new members. Before working with us, they had tried to generate leads sporadically and with mixed results. Now they wanted to take a more structured approach. First, we ran the numbers. Using previous experience and client data, we created benchmarks and targets for the metrics (CPM, Click Through Rate, Conversion Rate). This is a crucial step, as otherwise, you won’t be able to use the metrics as a diagnosing tool. Hypothetical Target CPA Analysis Second, we planned and prioritized our actions. We created a backlog of all the things we wanted to test and implement. This prioritization allows us to make sure we’re always using our time efficiently and not just doing busywork. Knowing the numbers makes it easier to prioritize. For instance, we knew that it would be more challenging to improve the Cost Per Click (CPC) than the Conversion Rate. So we focused on the variable we thought was going to have the biggest impact: the Offer. We created two identical campaigns and landing pages with two different offers, and launched the first ads. Going Live🚀 After running the ads for a few days, things were looking.. not great! In fact, we had a CPL of just over $40, far above our target. 
Instead of freaking out and rethinking the entire campaign, we analyzed the data and found the reason behind the high CPL: a lower-than-expected Conversion Rate. And since neither Offer was converting well, we knew there was a problem with the Landing Page. By analyzing screen recordings from Hotjar, we were able to further pinpoint the problem: visitors were reading most of the content but leaving the page once they came to the contact form at the bottom. We redesigned the form and made it shorter and easier to fill out. Ready For Round Two🥊 With a new form, things were moving in the right direction and we were looking at a CPL of $26. We could now see a clear difference in performance between the two offers. We still weren’t quite satisfied with the conversion rate, so we decided to go back and test a third offer. New Targeting🎯 The new offer turned out to be the highest converting of the three and brought down the CPL to $12. Both the landing page and the offer were now working, and the conversion rate was where we wanted it to be. We decided to shift our attention to the traffic side. While we were focusing on improving the conversion rate, we also paid attention to the advertising performance as well. By looking at a breakdown of the demographics and geography of our audience, we’d noticed that some of the segments were underperforming, and we made a few changes to the targeting. Note: While you should ideally only change one variable at a time, you want to constantly look at data and potentially revise your priorities. The Final Touch🎨 The new targeting had a positive (but not huge) impact, and the CPL dropped down to just below $10. We were confident that the ads were showing to the right people, and we could now focus on further increasing the CTR (Click-Through-Rate). The easiest way to do this was by capturing more attention through new and better ad creatives. Note: The creative is often the most important variable and the thing we focus on first. Here it was last since we were building out the advertising funnel from scratch. In Summary With the new ad creatives, we finally managed to decrease the CPL to less than $7, which represents an 80% improvement from where we started. This meant that the client was now getting 5x the number of leads at the original cost! 80% Reduction In CPL The fastest and straightest path to great results comes from understanding all the factors without letting it overwhelm you. Be systematic, patient and focus on what matters. Good luck!
https://medium.com/rho-1/the-big-six-framework-how-we-lowered-the-cost-per-lead-by-80-99ea93696518
['Josua Fagerholm']
2020-03-06 01:02:00.635000+00:00
['Digital Advertising', 'Advertising', 'Marketing', 'Digital Marketing', 'Facebook Marketing']
beginnings
I can is shedding its onerous mass to become the starting point of a high spirit drop the ‘t erase for heart’s sake can’t is a nefarious tightrope masquerading as path meant for falling soles it has no business in the hallways of beginning decorated with hope.
https://medium.com/meri-shayari/beginnings-9f22c9ce04
['Rebeca Ansar']
2020-12-19 18:31:34.829000+00:00
['Life Lessons', 'Motivation', 'Poetry', 'Poet', 'Poem']
Build React Tabs Using Recoil, Styled Components, and Storybook.js
Build React Tabs Using Recoil, Styled Components, and Storybook.js A development guide to building React components with the latest technologies Image credit: Author In a previous article, we introduced Recoil, the state management library that’s been available since May 2020. For managing state, Recoil is simpler and more effective than Context API and Redux. We have been using it for our projects ever since. In another article, we introduced styled components, a JavaScript library that allows us to write CSS inside a JavaScript file. As a result, components can run independently, without relying on any external CSS files. Storybook is a tool for UI development. It makes development faster and easier by isolating components. This allows us to work on one component at a time. We use tabs as an example to illustrate the power of Recoil, styled components, and Storybook. Since we are writing an interview series, it is worth noting that building a tabs component is also a frequently asked interview question. This article prepares you for both development work and interview challenges.
https://medium.com/better-programming/build-react-tabs-using-recoil-styled-components-and-storybook-js-4ad534cef007
['Jennifer Fu']
2020-12-30 00:17:18.166000+00:00
['Nodejs', 'Recoil', 'JavaScript', 'React', 'Programming']
Beyond DQN/A3C: A Survey in Advanced Reinforcement Learning
One of my favorite things about deep reinforcement learning is that, unlike supervised learning, it really, really doesn’t want to work. Throwing a neural net at a computer vision problem might get you 80% of the way there. Throwing a neural net at an RL problem will probably blow something up in front of your face — and it will blow up in a different way each time you try. A lot of the biggest challenges in RL revolve around two questions: how we interact with the environment effectively (e.g. exploration vs. exploitation, sample efficiency), and how we learn from experience effectively (e.g. long-term credit assignment, sparse reward signals). In this post, I want to explore a few recent directions in deep RL research that attempt to address these challenges, and do so with particularly elegant parallels to human cognition. In particular, I want to talk about: hierarchical RL, memory and predictive modeling, and combined model-free and model-based approaches. This post will begin with a quick review of two canonical deep RL algorithms — DQN and A3C — to provide us some intuitions to refer back to, and then jump into a deep dive on a few recent papers and breakthroughs in the categories described above. Review: DQN and A3C/A2C Disclaimer: I am assuming some basic familiarity with RL (and thus will not provide an in-depth tutorial on either of these algorithms), but even if you’re not 100% solid on how they work, the rest of the post should still be accessible. DeepMind’s DQN (deep Q-network) was one of the first breakthrough successes in applying deep learning to RL. It used a neural net to learn Q-functions for classic Atari games such as Pong and Breakout, allowing the model to go straight from raw pixel input to an action. Algorithmically, the DQN draws directly on classic Q-learning techniques. In Q-learning, the Q-value, or “quality”, of a state-action pair is estimated through iterative updates based on experience. In essence, with every action we take in a state, we can use the immediate reward we receive and a value estimate of our new state to update the value estimate of our original state-action pair: Training DQN consists of minimizing the MSE (mean squared error) of the Temporal Difference error, or TD-error, which is shown above. The two key strategies employed by DQN to adapt Q-learning for deep neural nets, which have since been successfully adopted by many subsequent deep RL efforts, were: experience replay, in which each state/action transition tuple (s, a, r, s’) is stored in a memory “replay” buffer and randomly sampled to train the network, allowing for re-use of training data and de-correlation of consecutive trajectory samples; and use of a separate target network — the Q_hat part of the above equation — to stabilize training, so the TD error isn’t being calculated from a constantly changing target from the training network, but rather from a stable target generated by a mostly fixed network. Subsequently, DeepMind’s A3C (Asynchronous Advantage Actor Critic) and OpenAI’s synchronous variant A2C, popularized a very successful deep learning-based approach to actor-critic methods. Actor-critic methods combine policy gradient methods with a learned value function. With DQN, we only had the learned value function — the Q-function — and the “policy” we followed was simply taking the action that maximized the Q-value at each step. With A3C, as with the rest of actor-critic methods, we learn two different functions: the policy (or “actor”), and the value (the “critic”). 
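To make the DQN recap concrete before moving on to A3C’s updates below, here is a minimal, PyTorch-style sketch of the TD-error loss just described. The Bellman target r + γ max_a' Q_hat(s', a') comes from the frozen target network, and the training network is regressed onto it with an MSE loss; the network and batch names are hypothetical stand-ins for illustration, not code from the original implementations.
import torch
import torch.nn.functional as F

def dqn_td_loss(q_net, target_net, batch, gamma=0.99):
    # batch is a tuple of tensors sampled uniformly from the replay buffer
    s, a, r, s_next, done = batch
    # Q(s, a) for the actions actually taken, from the training network
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # bootstrapped target from the separate, slowly-updated target network
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    # minimize the MSE of the TD-error
    return F.mse_loss(q_sa, target)
With that picture in place, A3C replaces the single Q-function with separate actor and critic heads, as described next.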
The policy adjusts action probabilities based on the current estimated advantage of taking that action, and the value function updates that advantage based on the experience and rewards collected by following the policy: As we can see from the updates above, the value network learns a baseline state value V(s_i;θ_v) with which we can compare our current reward estimate, R, to obtain the “advantage,” and the policy network adjusts the log probabilities of actions based on that advantage via the classic REINFORCE algorithm. The real contribution of A3C comes from its parallelized and asynchronous architecture: multiple actor-learners are dispatched to separate instantiations of the environment; they all interact with the environment and collect experience, and asynchronously push their gradient updates to a central “target network” (an idea borrowed from DQN). Later, OpenAI showed with A2C that asynchronicity does not actually contribute to performance, and in fact reduces sample efficiency. Unfortunately, details of these architectures are beyond the scope of this post, but if distributed agents excite you like they excite me, make sure you check out DeepMind’s IMPALA — very useful design paradigm for scaling up learning. Both DQN and A3C/A2C can be powerful baseline agents, but they tend to suffer when faced with more complex tasks, severe partial observability, and/or long delays between actions and relevant reward signals. As a result, entire subfields of RL research have emerged to address these issues. Let’s get into some of the good stuff :). Hierarchical Reinforcement Learning Hierarchical RL is a class of reinforcement learning methods that learns from multiple layers of policy, each of which is responsible for control at a different level of temporal and behavioral abstraction. The lowest level of policy is responsible for outputting environment actions, leaving higher levels of policy free to operate over more abstract goals and longer timescales. Why is this so appealing? First and foremost, on the cognitive front, research has long suggested that human and animal behavior is underpinned by hierarchical structure. This is intuitive in everyday life: when I decide to cook a meal (which is basically never, by the way, but for the sake of argument let us assume I am a responsible human being), I am able to divide this task into simpler sub-tasks: chopping vegetables, boiling pasta, etc. without losing sight of my overarching goal of cooking a meal; I am even able to swap out sub-tasks, e.g. cooking rice instead of making pasta, to complete the same goal. This suggests an inherent hierarchy and compositionality in real-world tasks, in which simple, atomic actions can be strung together, repeated, and composed to complete complicated jobs. In recent years, research has even uncovered direct parallels between HRL components and specific neural structures within the prefrontal cortex. On the technical RL front, HRL is especially appealing because it helps address two of the biggest challenges I mentioned under our second question, i.e. how to learn from experience effectively: long-term credit assignment and sparse reward signals. In HRL, because low-level policies learn from intrinsic rewards based on tasks assigned by high-level policies, atomic tasks can still be learned in spite of sparse rewards. Furthermore, the temporal abstraction developed by high-level policies enables our model to handle credit assignment over temporally extended experiences. So how does it work? 
There are a number of different ways to implement HRL. One recent paper from Google Brain takes a particularly clean and simple approach, and introduces some nice off-policy corrections for data-efficient training. Their model is called HIRO. μ_hi is the high-level policy, which outputs “goal states” for the low-level policy to reach. μ_lo, the low-level policy, outputs environment actions in an attempt to reach that goal state observation. Here’s the idea: we have 2 layers of policy. The high-level policy is trained to maximize the environment reward R. Every c timesteps, the high-level policy samples a new action, which is a “goal state” for the low-level policy to reach. The low-level policy is trained to take environment actions that would produce a state observation similar to the given goal state. Consider a simple example: say we are training a robot to stack colored cubes in a certain order. We only get a single reward of +1 in the end if the task is completed successfully, and a reward of 0 at all other time-steps. Intuitively, the high-level policy is responsible for coming up with the necessary sub-goals to complete: perhaps the first goal state it outputs would be “observe a red cube in front of you;” the next might be “observe a blue cube next to a red cube;” and then “observe a blue cube on top of a red cube.” The low-level policy bumbles around the environment until it comes up with the sequence of actions necessary to produce these observations, e.g. picking up the blue cube and moving it on top of the red one. HIRO uses a variant of the DDPG (Deep Deterministic Policy Gradient) training objective to train the low-level policy, whose intrinsic reward is parameterized as the distance between the current observation and the goal observation: DDPG is another seminal deep RL algorithm that extended ideas from DQN to a continuous action space. It is another actor-critic method that uses policy gradients to optimize the policy, but instead of optimizing it with respect to the advantage as in A3C, it optimizes it with respect to the Q-values. Thus in HIRO, the DDPG-adjacent error to minimize becomes: Meanwhile, in order to use off-policy experience, the high-level policy is trained with off-policy corrections. Here’s the idea: to be sample efficient, we want to use some form of replay buffer, like DQN. However, old experience cannot be used directly to train the high-level policy. This is because the low-level policy is constantly learning and changing, so even if we condition on the same goals as our old experience, our low-level policy may now exhibit different actions/transitions. The off-policy correction proposed in HIRO is to retroactively change the goal seen in off-policy experience to maximize the likelihood of the observed action sequence. In other words, if the replay experience says the old agent took actions (x,y,z) to reach goal g, we find a goal g̃ that would make the current agent most likely to take those same actions (x,y,z), i.e. one that would maximize this log probability of the action sequence: The high-level policy is then trained with a DDPG variant on those actions, the new goal, and the environment reward R. HIRO is certainly not the only approach to HRL. FeUdal networks were an earlier, related work that used a learned “goal” representation instead of the raw state observation. 
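A rough sketch of the two HIRO ingredients described above: the goal-conditioned intrinsic reward that trains the low-level policy, and the retroactive goal relabeling that lets old replay data train the high-level policy. The exact distance parameterization and the candidate-goal search are simplified from the paper, and low_level_logprob is an assumed helper that returns the current low-level policy’s log-probability of an action given a state and goal.
import numpy as np

def intrinsic_reward(state, goal, next_state):
    # low-level reward: negative distance between where the goal points and where we actually ended up
    return -np.linalg.norm(state + goal - next_state)

def relabel_goal(low_level_logprob, states, actions, candidate_goals):
    # off-policy correction: keep the candidate goal that makes the stored
    # action sequence most likely under the *current* low-level policy
    def sequence_logprob(goal):
        return sum(low_level_logprob(s, goal, a) for s, a in zip(states, actions))
    return max(candidate_goals, key=sequence_logprob)
In the paper, the candidate goals are roughly the original goal, the observed state change over the c steps, and a handful of random perturbations around that change.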
Indeed, lot of variation in research stems from different ways to learn useful low-level sub-policies; many papers have used auxiliary or “proxy” rewards, and others have experimented with pre-training or multi-task training. Unlike HIRO, many of these approaches require some degree of hand engineering or domain knowledge, which inherently limits generalizability. Another recently-explored option is to use population-based training (PBT), another algorithm I am a personal fan of. In essence, internal rewards are treated as additional hyperparameters, and PBT learns the optimal evolution of these hyperparameters across “evolving” populations during training. HRL is a very popular area of research right now, and is very easily interpolatable with other techniques (check out this paper combining HRL with imitation learning). At its core, however, it’s just a really intuitive idea. It’s extensible, has neuroanatomical parallels, and addresses a bunch of fundamental problems in RL. Like the rest of good RL, though, it can be quite tricky to train. Memory and Attention Now let’s talk about some other ways to address the problems of long-term credit assignment and sparse reward signals. Specifically, let’s talk about the most obvious way: make the agent really good at remembering things. Memory in deep learning is always fun, because try as researchers might (and really, they do try), few architectures beat out a well-tuned LSTM. Human memory, however, does not work anything like an LSTM; when we go about tasks in daily life, we recall and attend to specific, context-dependent memories, and little else. When I go back home and drive to the local grocery store, I’m using memories from the last hundred times I’ve driven this route, not memories of how to get from Camden Town to Piccadilly Circus in London — even if those memories are fresh in recent experience. In this sense, our memory almost seems queryable by context: depending on where I am and what I’m doing, my brain knows which memories will be useful to me. In deep learning, this is the driving thesis behind external, key-value-based memory stores. This idea is not new; Neural Turing Machines, one of the first and favorite papers I ever read, augmented neural nets with a differentiable, external memory store accessible via vector-valued “read” and “write” heads to specific locations. We can easily imagine this being extended into RL, where at any given time-step, an agent is given both its environment observation and memories relevant to its current state. That’s exactly what the recent MERLIN architecture extends upon. MERLIN has 2 components: a memory-based predictor (MBP), and a policy network. The MBP is responsible for compressing observations into useful, low-dimensional “state variables” to store directly into a key-value memory matrix. It is also responsible for passing relevant memories to the policy, which uses those memories and the current state to output actions. This architecture may look a little complicated, but remember, the policy is just an recurrent net outputting actions, and the MBP is only really doing 3 things: compressing the observation into a useful state variable z_t to pass on to the policy, writing z_t into a memory matrix, and fetching other useful memories to pass on to the policy. The pipeline looks something like this: the input observation is first encoded and then fed through an MLP, the output of which is added to the prior distribution over the next state variable to produce the posterior distribution. 
This posterior distribution, which is conditioned on all the previous actions/observations as well as this new observation, is then sampled to produce a state variable z_t. Next, z_t gets fed into the MBP’s LSTM, whose output is used to update the prior and to read from/write to memory via vector-valued “read keys” and “write keys” — both of which are produced as a linear function of the LSTM’s hidden state. Finally, downstream, the policy net leverages both z_t and read outputs from memory to produce an action. A key detail is that in order to ensure the state representations are useful, the MBP is also trained to predict the reward from the current state z_t, so learned representations are relevant to the task at hand. Training of MERLIN is a bit complicated; since the MBP is intended to serve as a useful “world model,” an objective that is intractable to optimize directly, it is trained to optimize the variational lower bound (VLB) loss instead. (If you are unfamiliar with VLB, I found this post quite useful, but you really don’t need it to understand MERLIN). There are two components to this VLB loss: The KL-divergence between the prior and posterior probability distributions over this next state variable, where the posterior is additionally conditioned on the new observation. Minimizing this KL ensures that this new state variable is consistent with previous observations/actions. The reconstruction loss of the state variable, in which we attempt to reproduce the input observation (e.g. the image, previous action, etc.) and predict the reward based on the state variable. If this loss is small, we have found a state variable that is an accurate representation of the observation, and useful for producing actions that give a high reward. Here is our final VLB loss, with the first term being reconstruction and the second being the KL divergence: The policy network’s loss is a slightly fancier version of the policy gradient loss we discussed above with A3C; it uses an algorithm called Generalized Advantage Estimation, the details of which are beyond the scope of this post (but can be found in section 4.4 of the MERLIN paper’s appendix), but it looks similar to the standard policy gradient update shown below: Once trained, MERLIN should be able to predictively model the world through state representations and memory, and its policy should be able to leverage those predictions to take useful actions. MERLIN is not the only deep RL work to use external memory stores — all the way back in 2016, researchers were already applying this idea in an MQN, or Memory Q-Network, to solve mazes in Minecraft — but this concept of using memory as a predictive model of the world has some unique neuroscientific traction. Another Medium post has done a great job of exploring this idea, so I won’t repeat it all here, but the key argument is that our brain likely does not function as an “input-output” machine, like most neural nets are interpreted. Instead, it functions as a prediction engine, and our perception of the world is actually just the brain’s best guesses about the causes of our sensory inputs. Neuroscientist Anil Seth sums up this 19th century theory by Hermann von Helmholtz nicely: The brain is locked inside a bony skull. All it receives are ambiguous and noisy sensory signals that are only indirectly related to objects in the world.
Perception must therefore be a process of inference, in which indeterminate sensory signals are combined with prior expectations or ‘beliefs’ about the way the world is, to form the brain’s optimal hypotheses of the causes of these sensory signals. MERLIN’s memory-based predictor aims to fulfill this very purpose of predictive inference. It encodes observations and combines them with internal priors to generate a “state variable” that captures some representation — or cause — of the input, and stores these states in long-term memory so the agent can act upon them later. Agents, World Models, and Imagination Interestingly, the concept of the brain as a predictive engine actually leads us back to the first RL question we want to explore: how do we learn from the environment effectively? After all, if we’re not going straight from observations to actions, how should we best interact with and learn from the world around us? Traditionally in RL, we can either do model-free learning or model-based learning. In model-free RL, we learn to map raw environment observations directly to values or actions. In model-based RL, we first learn a transition model of the environment based on raw observations, and then use that model to choose actions. The outside circle depicts model-based RL; the “direct RL” loop depicts model-free RL. Being able to plan based on a model is much more sample-efficient than having to work from pure trial-and-error as in model-free learning. However, learning a good model is often very difficult, and compounding errors from model imperfections generally lead to poor agent performance. For this reason, a lot of early successes in deep RL (e.g. DQN and A3C) were model-free. That said, the lines between model-free and model-based RL have been blurred since as early as the Dyna algorithm in 1990, in which a learned model is used to generate simulated experience to help train the model-free policy. More recently, an “Imagination-Augmented Agents” algorithm has been introduced that directly combines the two approaches. In Imagination-Augmented Agents (I2A), the final policy is a function of both a model-free component and a model-based component. The model-based component is referred to as the agent’s “imagination” of the world, and consists of imagined trajectories rolled out by the agent’s internal, learned model. The key, however, is that the model-based component also has an encoder at the end that aggregates the imagined trajectories and interprets them, enabling the agent to learn to ignore its imagination when necessary. In this sense, if the agent discovers its internal model is projecting useless and inaccurate trajectories, it can learn to ignore the model and proceed with its model-free arm. The figure above describes how I2As work. An observation is first passed to both the model-free and model-based components. In the model-based component, n different trajectories are “imagined” based on the n possible actions that could be taken in the current state. These trajectories are obtained by feeding the action and state into the internal environment model, transitioning to a new imagined state, taking the next action in that imagined state, and so on. A distilled imagination policy (which is kept similar to the final policy via cross-entropy loss) chooses the next actions. After some fixed k steps, these trajectories are encoded and aggregated together, and fed into the policy network along with the output of the model-free component.
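To ground the imagination step, here is a rough, illustrative sketch of rolling out and encoding trajectories. The environment-model interface, the rollout policy, the trajectory encoder, and the choice of k are all assumptions made for illustration rather than a faithful reproduction of the I2A code.

```python
import torch

def imagine_and_encode(state, env_model, rollout_policy, traj_encoder, n_actions, k=5):
    """Sketch of I2A-style imagination: for each possible first action, roll the
    learned environment model forward k steps, then encode the imagined trajectory."""
    codes = []
    for first_action in range(n_actions):
        s = state
        action = torch.tensor([first_action])
        steps = []
        for _ in range(k):
            # Learned (and possibly imperfect) model predicts the next state and reward.
            s, reward = env_model(s, action)
            steps.append(torch.cat([s, reward], dim=-1))
            # Distilled imagination policy picks the next imagined action.
            action = rollout_policy(s).argmax(dim=-1)
        # traj_encoder could be, e.g., an LSTM wrapper returning its last hidden state.
        codes.append(traj_encoder(torch.stack(steps)))
    # Aggregated imagination code for the policy network.
    return torch.cat(codes, dim=-1)
```

Each per-action code produced this way would then be concatenated with the model-free features before the policy head.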
Critically, the encoding allows the policy to interpret the imagined trajectories in whatever way is most useful — ignoring them if appropriate, extracting non-reward-related information when available, and so on. The policy is trained via a standard policy gradient loss with advantage, similar to A3C and MERLIN, so this should look familiar by now: Additionally, a policy distillation loss is added between the actual policy and the internal model’s imagined policy, to ensure that the imagined policy chooses actions similar to what the current agent would: I2A outperforms a number of baselines, including the MCTS (Monte Carlo Tree Search) planning algorithm. It is also able to perform well in experiments where its model-based component is intentionally restricted to make poor predictions, demonstrating that it is able to trade off use of the model in favor of model-free methods when necessary. Interestingly, the I2A with a poor internal model actually slightly outperformed the I2A with a good model in the end — the authors chalked this up to either random initialization or the noisy internal model providing some form of regularization, but this is definitely an area for further investigation. Regardless, the I2A is fascinating because it is, in some ways, also exactly how we go about acting in the world. We’re always planning and projecting into the future based on some mental model of the environment that we’re in, but we also tend to be aware that our mental models could be entirely inaccurate — especially when we’re in new environments or situations we’ve never seen. In that case, we proceed by trial-and-error, just like model-free methods, but we also use this new experience to update our internal mental model. There’s a lot of work going on right now in combining model-based and model-free methods. Berkeley AI came out with a Temporal Difference Model (TDM), which also has a very interesting premise. The idea is to let an agent set more temporally abstracted goals, i.e. “be in X state in k time steps,” and learn those long-term model transitions while maximizing the reward collected within each k steps. This gives us a smooth transition between model-free exploration on actions and model-based planning over high-level goals — which, if you think about it, sort of brings us all the way back to the intuitions in hierarchical RL. All these research papers focus on the same goal: achieving the same (or superior) performance as model-free methods, with the same sample efficiency that model-based methods can provide. Conclusion Deep RL models are really hard to train, period. But thanks to that difficulty, we have been forced to come up with an incredible range of strategies, approaches, and algorithms to harness the power of deep learning for classical (and some non-classical) control problems. This post has been a very, very incomplete survey of deep RL — there is a lot of research out there that I haven’t covered, and more yet that I’m not even aware of. However, hopefully this sprinkling of research directions in memory, hierarchy, and imagination offers a glimpse into how we can begin addressing some of the recurring challenges and bottlenecks in the field. If you think I’m missing something big, I probably am — let me know what it is in the comments. :) Meanwhile, happy RL hacking!
https://towardsdatascience.com/advanced-reinforcement-learning-6d769f529eb3
['Joyce Xu']
2018-10-01 21:13:53.411000+00:00
['Robotics', 'Artificial Intelligence', 'Machine Learning', 'Reinforcement Learning', 'Deep Learning']
All I Need to Know About User-Centered Design I Learned One Summer at Apple
All I Need to Know About User-Centered Design I Learned One Summer at Apple Kristy Knabe Follow May 14 · 3 min read I loved the old rainbow Apple. Looked great on a t-shirt! In 1991, I was in grad school at Carnegie Mellon and I got a summer internship at Apple Computer. Little did I know this would be a life-changing summer. My internship was with Apple’s fledgling User-Aided Design team, a team started by a small segment of people who worked in the Instructional Products group who were responsible for writing Apple’s user manuals. Back then Apple shipped products with beautiful 4-color manuals. I wish I had a few of those today — they would be on my coffee table. Personal computing was an emerging technology in 1991 and most people were pretty intimidated by the thought of having a computer on their desk. So the manuals had to explain everything from unpacking the box (which is how the Set Up Poster originated) to how to turn on the computer. Even the desktop metaphor was new — so usability testing was critical to product success. Did the illustrations make sense? Was the packaging organized so the user could figure out what to do first, what to do next? Did the first steps in an online tutorial (Macromedia!) make sense to even the most novice user? Only the users could really tell us how the hardware, software, packaging and documentation came together to form an integrated product. That summer I learned the think-aloud protocol. I learned how to give users instructions about usability research and how to have them sign an NDA. I learned to run elaborate soundboards and recording equipment in a test lab. I learned to interact with users through one-way glass and find ways to make them comfortable even when the test protocols could be pretty intimidating. I learned how to write tasks from a user’s point of view and watch users without saying much at all. I learned to ask “what would you expect” and “what do you think” more than I could ever imagine! I learned to analyze users’ comments, questions and actions. I learned to edit video on very difficult video editing equipment to create those few critical highlights for the stakeholders and management team. I was with the group who were organizing the Usability Professionals Association’s (UPA) first annual conference in Orem, Utah so I learned the value of meeting with other people who were dedicated to improving usability. I also was fortunate to learn what went into organizing a professional conference. Those were exciting times. I was hired full-time at Apple in January 1992 and worked there throughout the 90s. Those were challenging years at Apple but the User-Aided Design group grew and usability was a main focus of product design. I saw some products succeed and many more fail. But the “failures” always led to better products, experiences and designs. So Apple’s bad years in the 90s led to some very (very!) good years down the line. And a lot of the user research informed product designs. I have spent almost 30 years in the User-Centered Design field. Somewhere along the way, we became UX Researchers, UX Strategists, UXers. The UPA became the UXPA. I have worked in many different industries, on marketing teams, engineering teams, product design teams and in innovation and ideation labs. It has been a great career that I continue to love. And some things have changed. There are no labs with fancy equipment and one-way mirrors anymore. The video editing takes hours and not weeks. 
Recruiting users is much easier and incentives cost much less. But most of what I learned that summer in 1991 at Apple I still use every day. Or at least every week. The basics of User-Centered Design have stood the test of time — watch users, don’t just ask; define primary users and tasks and support them in design; iterative testing is the most effective, so test early and test often; and make key product design decisions in the lab (virtual now) and not the conference room. The summer of 1991 changed my life. And a few products changed too thanks to the skills I learned.
https://medium.com/marketade/all-i-need-to-know-about-user-centered-design-i-learned-one-summer-at-apple-89df72bc7ae
['Kristy Knabe']
2020-05-14 16:29:24.196000+00:00
['User Experience', 'UX', 'Apple', 'User Research']
How to Avoid Losing Your Foreign Language Skills
Speak to yourself Let’s start with the most cringy tip of mine: Have (overly) dramatic monologues with yourself, if your surrounding allows it. Living alone obviously makes it easier, but you could still do it when your flatmates are out or when you take a walk through a nearby forest or park. I am absolutely convinced that speaking a foreign language with yourself makes you more self-confident and improves your pronunciation. Let it all out —the long Italian vowels, the French filler words, the super guttural English sounds, the hard German consonants… Find those binge-worthy podcasts On my daily one-hour walk that I’ve introduced due to the lack of corona-caused alternative activities, I usually listen to podcasts in foreign languages — mostly in French. I’ve discovered that it’s quite hard to find podcasts in foreign languages on Spotify (the service I use) because it will mostly suggest German podcasts. A good idea I’ve discovered is to search for interviews with personalities I’m interested in (politicians, singers, authors). If I like the podcast host who has published that interview, I might like the other episodes of that podcast. You can also search for topics like “racism”, “feminism”, “climate change” in your target languages and hope to find corresponding productions. Watch series, movies, and Youtube videos This is probably the most obvious one and one that most of us already do with utmost pleasure. One tip: Using a VPN allows you to watch e.g. Netflix content that is normally not available in your country. Why not make it a habit to watch at least one movie per week in your target language? Speak to friends and participate in language gatherings If you have friends who speak your target languages, feel blessed. If not, there are still plenty of ways to get that language practice going. In pre-corona times, I’d have suggested that you look for events like “polyglot meetups” or “language cafes” on Facebook or simply via Google. In our current times where meeting strangers is not exactly recommended, you could download the app Tandem or find a virtual tandem partner online. Language conferences (such as “Women in Language” which I wrote about here) usually also offer language exchange sessions. Change the language of your operating system My tablet is in French, my phone used to be in Italian, my laptop is in English. Why not add those little technical words and phrases to your subconsciousness, so that you’ll sometimes catch yourself thinking in that language? “Ah, on est déjà lundi !” — Ah, it’s already Monday! You can of course do the same for apps like Facebook or Instagram. Photo by Kari Shea on Unsplash Watch fitness videos in another language Every morning after leaving the cozy comfort of my warm bed, I make myself do yoga. I used to only watch videos from “Yoga with Adriene” (gotta love her), but I started thinking: “Why not find some French yoga videos?” There are obviously fewer French-speaking yoga teachers with Youtube channels out there, but I really enjoy having my morning routine in French. To my surprise, there were some words like “tailleur” (cross-legged seat) I had never used in my life. A few YouTube yoga recommendations: Mady Morrison (German), ELLE (French), Yoga Fire by Jo (French) and MichaelaYoga (Italian, German and English). Teach others (for free) Do you have friends interested in learning the language you already speak? That may be an extremely useful opportunity for you to practice the language. 
You probably also know the saying: You only know something when you’re able to teach it. This might not be entirely accurate for languages because most people have difficulties explaining grammar rules or think ‘What the heck is a possessive pronoun?!’, but are still perfectly apt users of that language. Nevertheless, explaining something to others is a win-win situation: Another person is happy they received help and you feel good for putting your skills to use. Obviously, depending on the level at which you speak the language, you can even monetize your language teaching and e.g. become a community teacher on the website “italki” where you can register without an official diploma. That’s what a Ukrainian study friend of mine is doing. Write your diary in that language Producing journal entries makes you actively use that language. I invite you to read this article of mine about journaling in your target language(s): Cook and bake in your target language Be it a recipe video or a written recipe: Those will fulfill the purpose of filling your stomach (always nice) and feeding you that kitchen vocab’. I always get slightly nostalgic when looking at French recipes: The units of measurement and some ingredients remind me of the time when I lived in France and they were the most normal things in the world. For instance, “Maïzena” is the word the French use for cornstarch — even though it’s a brand name. Devour books and news articles You might be willing to read books, but have a hard time finding them. Check out local libraries and, in the worst case, buy ebooks that you can download wherever you live. If you have local friends that speak your target language, asking them if they own books in that language might also be worth a try. News articles on the other hand can also teach you about current issues in a country where your target language is spoken so that you’d stay up to date about local developments. For French, that would obviously mean that apart from a French newspaper, I could read Belgian, Congolese or Martinican news (to name just very few). Read aloud Just like speaking to yourself, reading texts in another language out loud makes you practice the pronunciation in safe surroundings at home (and what do we love more than staying at home, right? #covid19). What I like to do when I come across a word I’m unsure how to pronounce: I google “[word] pronunciation [language]”, e.g. “näringsfång pronunciation Swedish”. There are numerous websites where people have recorded words in their mother tongue so you can hear an authentic version — and otherwise, most dictionaries will have audio versions of their available words. Listen to songs, sing along & dance (!) I recommend that you create playlists for each of the languages you learn or have learned. Whenever you feel like listening to one of them, you can directly select the French or the Portuguese playlist and dance around while washing your dishes, cooking, or rhythmically swinging your tea towel to the beat (the perks of having no flatmates who could eye you with great amusement).
https://medium.com/language-lab/how-to-avoid-losing-your-foreign-language-skills-64df9c6c6155
['Annika Wappelhorst']
2020-11-28 10:05:22.128000+00:00
['Motivation', 'Self Learning', 'Language', 'Language Learning', 'How To']
How to write articles that people want to read
How to write articles that people want to read Here are a bunch of recommendations that the In Plain English team consider to be best practices when writing articles that your readers will find engaging and easy-to-read. Take a moment to give your article a good title and subtitle. Try to make them concise, yet compelling. If in doubt, ask yourself: “Would I find this title interesting enough that I would want to continue to read the article?” Don’t create weird formats for your headings and subheadings. Just keep them simple and make sure that the formatting for each heading/subheading in your article is consistent with one another. If you are planning on numbering your headings, here are some examples for you to refer to:
1. This is a good heading
2. This is another good heading
1 : > This is a bad heading
2 #: This is another bad heading
https://medium.com/javascript-in-plain-english/how-to-write-articles-that-people-want-to-read-6e661edb6d06
['Sunil Sandhu']
2020-07-17 22:06:35.637000+00:00
['Writing', 'Programming', 'Articles', 'Guides And Tutorials', 'Tutorial']
What Does the New Twitter Character Limit Mean for Marketers?
The 140-character limit on Twitter may soon be just a memory. Twitter announced on its blog that it is rolling out a character limit of 280 to a small group of users, and if it is successful, it will be launched to everyone. Twitter, anticipating potential backlash at the new development, said “We understand since many of you have been Tweeting for years, there may be an emotional attachment to 140 characters — we felt it, too. But we tried this, saw the power of what it will do, and fell in love with this new, still brief, constraint.” There is, in fact, a fair amount of backlash to this announcement. Soon after the announcement was made, #Twitter280 was trending on Twitter, and was filled with people who had access to the 280 character limit using it purely to show their disdain for it. A large portion of these tweets were calling out Twitter for implementing this new character limit that nobody really seemed to be asking for, while failing to address other improvements to Twitter that have been highly requested, like the opportunity to edit tweets, better harassment reporting tools, and a zero tolerance attitude towards hate speech. While the general public seems to be reaching the consensus that the longer character limit is not a good thing, many marketers are wondering how this will impact their strategy and ability to engage with their audience. Right after Twitter announced the change, many brands didn’t think too much but instead jumped right into testing out/tweeting about the new character limit. After the dust had settled, however, AdWeek reached out to several marketing agencies to get their take. Rachel Spiegelman, CEO of Pitch, said “The 280-character tweets will likely dilute Twitter as a receptive marketing platform for consumers engaging with brands. Some of the most successful brands on Twitter, including Wendy’s, JetBlue and DiGiorno Pizza, have gotten to the peaks of brand engagement because of the discipline and rigor it takes to fit a message into 140 characters.” Science seems to agree that shorter tweets perform better. Twitter’s best practices reference research by Buddy Media, which found that 100 characters is the ideal tweet length: “Creativity loves constraints and simplicity is at our core. Tweets are limited to 140 characters so they can be consumed easily anywhere, even via mobile text messages. There’s no magical length for a Tweet, but a recent report by Buddy Media revealed that Tweets shorter than 100 characters get a 17% higher engagement rate.” Social media scientist Dan Zarrella performed research to find out which tweet lengths resulted in the highest click-through rates (CTRs). He found that tweets between 120 and 130 characters long had the highest CTRs. Others expressed that they didn’t feel that the change would allow them to better deliver what audiences were actually looking for. Jennifer Ruggle, SVP of digital solutions at The Sandbox Agency, said, “Users don’t go to Twitter to read long text blocks.” In terms of changes in strategy, some marketing professionals are expressing fear that brands will jump into usage of the 280-character limit without really considering the effects that it will have. John Sampogna, co-CEO and founding partner at Wondersauce, said, “I’m sure the brands and users who truly ‘get’ the platform will find new creative ways of using it. My concern is most will not.” One potential upside is that the increased character limit may allow brands to deliver better customer service and better address complaints.
The increased character limit allows for better explanations and more in-depth responses. It will also help brands more clearly list legal terms and conditions in Tweets. This is particularly relevant for brand influencers. “This also gives no excuse for brand influencers not to disclose transparency or sponsorship language when applicable as well, which is better overall for consumers,” said Hannah Redmond, group director of strategy and innovation at The Marketing Arm. It may take some time to fully understand how the new 280-character limit will shape marketing strategies, for better or worse. For now, brands should tread carefully and not lose sight of what draws people to Twitter and what type of content they are looking for. That means not posting longer tweets simply because it is now an option. Brands and marketers should still try to use the character limit as a driver of creativity, by attempting to deliver messages that resonate in a short amount of characters. That being said, brands and marketers should not ignore the opportunities to better engage with consumers that the 280-character limit may create. A big component of this will be improved, or more in-depth customer service.
https://medium.com/fanzee/what-does-the-new-twitter-character-limit-mean-for-marketers-49bca104868e
['Leah Bury']
2017-10-04 15:11:59.276000+00:00
['Social Media', 'Social Media Marketing', 'Marketing', 'Digital Marketing', 'Twitter']
Building microservices using IBM CloudPaks as amateur developer 2/5
Building microservices using IBM CloudPaks as amateur developer 2/5 Chechu Follow Sep 12 · 4 min read Microservices Logging in Openshift This is the second article of a set of 5 about how to code microservices using IBM CloudPaks: 1.Leveraging Kabanero 2. MicroServices logging in OpenShift 3. Working with ServiceMesh and Microservices 4. Async Communication for Microservices with IBM Event Streams and IBM MQ (Kafka and MQ) 5. Microservices reliability across Kubernetes clusters with IBM MultiCloud Manager In the first article, I built an app following a microservice architecture, but it can be a nightmare to debug that application, in a multi-user scenario by leveraging the native logging system. As we introduced a more “dispersed” architecture, tracing a request through the different microservices can get complicated, especially with a multi-user deployment. As we need to identify each “thread’s” progression through the microservices, we will need a method to identify each “thread”. Let's review the app and see my approach to solve this problem looking at the logs dropped. Microservices Application Topology Microservice App Architecture Looking at one “thread” progression, for example on the management page load request: Looking at even this simple app running in a multi-user scenario, it is apparent that a lot of log entries will be generated for the same transaction without any context to identify the user who generated the log entry, making it impossible to quickly diagnose an issue. In order to address this challenge, I created my own solution based on the Mapped Diagnostic Context instead of the logging framework. Here is an outline of my solution: In a nutshell, the UI (React microservice) creates a unique ID, that is attached to each request that a transaction generates, together with the respective username. In my local development with Appsody, we can see the following: The schema I used for the logs is the following: reqID: “16de6c53–7279–467f-b689-dd9c03ca8d6b” user: “[email protected]” status: “successful” service: “auth” message: “(/api/users/login) — User Login: [email protected] SUCCESS” Using ReqID and user I can trace the progress of a transaction across all microservices. The other keys will identify the respective microservice, the status, and a detailed message. The code added is in this repos: - UI : React portal using IBM Carbon Design - APIGW : API gateway to expose the backend microservices to the React portal - Auth: Authentication microservice to validate users accessing backend microservices are logged. - Management: Microservice that manages the creation of the courses. - K8sManager: Microservice that manage the creation of the workspaces on Openshift and the Linux VM (to run ‘oc’ commands) *** Each article’s code is hosted in a branch on this repo, with the Master branch hosting all the modifications across the 5 articles. Once we push the new code to our repos, the pipelines described in the previous article will start to deploy the new pods. If the Openshift cluster has the logging operator installed we can use Kibana to see the logs and filter them. The example below shows a flow of ‘user login’ and selects a ‘course’, which retrieves or creates a workspace. And if we filter by the “reqID”: “bb696da4-e3e7–4dd5–9860–578a80ea2bcd”, we can trace the progress of the request through the microservices. Wrap Up Following the Mapped Diagnostic Context presented above, we can implement a “distributed tracing” microservice pattern, explained here. 
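To make the Mapped Diagnostic Context idea concrete in a language-agnostic way, here is a small Python sketch of the same pattern. The services in this series are Node/React, so this is not the project's actual code; only the field names (reqID, user, status, service, message) come from the schema above, while the contextvars plumbing and logger setup are assumed purely for illustration.

```python
import json
import logging
import uuid
from contextvars import ContextVar

# Request-scoped context: set once when a request enters the service.
request_id: ContextVar[str] = ContextVar("request_id", default="-")
request_user: ContextVar[str] = ContextVar("request_user", default="-")

class MDCFormatter(logging.Formatter):
    """Formats every record using the article's schema: reqID, user, status, service, message."""

    def __init__(self, service: str):
        super().__init__()
        self.service = service

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "reqID": request_id.get(),
            "user": request_user.get(),
            "status": getattr(record, "status", "info"),
            "service": self.service,
            "message": record.getMessage(),
        })

logger = logging.getLogger("auth")
handler = logging.StreamHandler()
handler.setFormatter(MDCFormatter(service="auth"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Per-request middleware would do something like this:
request_id.set(str(uuid.uuid4()))
request_user.set("[email protected]")
logger.info("(/api/users/login) - User Login: [email protected] SUCCESS",
            extra={"status": "successful"})
```

Because every service stamps the same reqID onto its log lines, filtering on that field in Kibana reconstructs the whole transaction, which is exactly the trace shown above.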
This facilitates debugging a transaction that spans different microservices. This becomes critical as the topology gets more dispersed. If you find this helpful, you might want to take a look at the next article of the series — Working with ServiceMesh and Microservices — (to be published on September 21st).
https://medium.com/ibm-garage/building-microservices-using-ibm-cloudpaks-as-amateur-developer-2-5-bf06cdebabbc
[]
2020-09-17 13:57:52.657000+00:00
['Openshift 4', 'Microservices', 'Logging', 'Microservice Architecture']
9 Popular Cross-Platform Tools for App Development in 2019
9 Popular Cross-Platform Tools for App Development in 2019 Picking up the right app development tools is important for building a good app. To help get you started, I’ve already conducted the research to give you the top options available for cross-platform app development tools. Read on to know about the multiple platform tools ! Popular Cross-Platform Tools for App Development in 2019 When business firms think about building a mobile app, their minds go straight to cross-platform app development. Today startups and SMEs find cross-platform as an excellent form of technology to develop an app on multiple platforms like Android, iOS, and Windows simultaneously. This means by building a single app you can target both Android and iOS, thus, maximizing your reach to the target audience. In fact, the cross-platform application development market surpassed the figure of $7.9 in 2019. Ideally, cross-platform technology delivers native-like apps because of the advent of advanced tools and technologies that allow developers to build apps that may appear similar to native apps. Also, in such a scenario when the number of apps in the Google Play Store was most recently placed at around $2.6 million apps in March 2019. Businesses wouldn’t want to risk missing their presence on Google play store or any other platform. Budgeting always an issue for businesses if they go for native apps, this is where cross-platform technology has emerged as the premium choice for businesses that aim to build their app multiple platforms. So, move onto the list of the best cross-platform app development tools to go for in 2019. 1. Adobe PhoneGap PhoneGap — Best Mobile app development tool (Source: Google Images) PhoneGap is owned by Adobe and is one of the best cross-platform development tools to use in 2019. It’s based on the open source framework Apache Cordova that gives you access to complete set of PhoneGap toolset which helps streamline the app development process and include the options: Debugging tools allow you to inspect HTML, CSS, and debug codes in JavaScript. All I would suggest is that you must take the help of a dedicated cross platform developer Here is the list of tools: For iOS App Development Safari Web Inspector Tool Steps to Use: Take your iOS device and connect to your computer. Now, install and launch Safari on your system. Make your PhoneGap application launched on iOS Device. Open the menu of Safari Develop, and look for your iOS Device in the list. Select “PhoneGap Webview” listed under your iOS device. For Android App Development Chrome Developer Tool Steps to Use: Make sure your Android test device supports all the developer options. Now launch your Google Chrome web browser. Look for chrome://inspect in Chrome. Select PhoneGap Application on your device. Developer tools will launch. For Windows, visit the page Microsoft Visual Studio One of the reasons why I am suggesting PhoneGap is because anyone can learn how to use their tools, even if you don’t have experience of using them. PhoneGap takes care of the development process by compiling all your work in the Cloud, so you don’t need to maintain native SDKs. 2. Appcelerator Appcelerator — Most popular mobile app development tools (Source: Google Images) Appcelerator is a cross-platform mobile app development platform that helps get your app ready in a faster way by simplifying the whole process. By using a single JavaScript code you can build native-like apps and mobile apps with cloud-like performance. 
Another top benefit of Appcelerator is its quality, as it can be used for building apps for any device or operating system. The tool also makes it easy for you to use and test your apps using automated mobile tests that allow you to measure your app usage and the results of your app project. You can detect bugs and crashes, and also make some adjustments to improve the overall performance of your app. With Appcelerator, you will be provided with access to Hyperloop, which is one of the best cross-platform APIs for multi-platform application development. 3. Corona Corona- Apps development Tool (Source: Google Images) Corona is a cross-platform framework ideal for creating games and apps for mobile devices, desktops, and TV devices using just one code base. This tool speeds up your coding process and you can easily update your code, save the changes, and instantly see the results on real devices. With Corona, your apps are optimized for performance because of the lightweight scripting power of Lua that enhances your app performance at every level. Corona is a free-to-use cross-platform app development tool that is primarily used for 2D games, as it’s great for high-quality graphics and high-speed development of games. 4. React Native React Native — Best app development software (Source: Google Images) React Native allows you to create native applications and uses JavaScript as a programming language to build apps. The strong side of React Native is that you can write modules in languages such as C, Swift, and Java. The best part is you can work on image editing and video processing that aren’t possible with the other API frameworks. React Native is unquestionably the best platform to use for cross-platform app development because it interprets your source code and converts it to native elements in less time. Both Facebook and Instagram have used React Native to build their native apps, which are among the most used applications in the world. So, you can trust React Native. 5. Xamarin Xamarin — Best cross platform mobile app development tools (Source: Google Images) Microsoft Visual Studio Xamarin allows you to build apps for different platforms such as Windows, iOS, and Android using a single .NET codebase. The best part of the Xamarin cross-platform tool is that all the apps built on it look and feel like native apps; that is because it uses the native interfaces that work the same way a user wants to use them. With Xamarin, you can give your app platform-specific hardware boosts to achieve performance similar to native apps. Also, most of your coding, approx. 75%, will be the same, regardless of the platform you’re building your mobile application for. Xamarin works from a single code base and accelerates the process of cross-platform mobile app development. Xamarin works on both Mac and PC systems and offers you tools such as debugging, a UI design tool, and code editing. 6. Qt QT: Cross Platform Mobile App Development Kit (Source: Google Images) Qt is the best cross-platform development tool for mobile app development. Why I’m counting this tool among the best cross-platform tools is because of its quality features that allow creating fluid UIs, applications, and embedded devices with the same code for Android, iOS, and Windows. If your app is not performing well and you want to rework it, you can easily make changes to your app using Qt, which will automatically apply all the changes to your app. This software tool also allows you to see how your app is performing on different platforms.
Moreover, it’s easy to use and don’t have a complex interface like some other cross-platform development tools I’ve seen. 7. Sencha Sencha: Easy Mobile App Development Tool (Source: Google Images) With Sencha you will get all the modern Java and JavaScript frameworks that help you build your web apps easily for any device. It provides you 115+ fully supported and test UI components that you can easily integrate into your apps. It is one of the most comprehensive tools to perform end-to-end testing of apps on all the platforms. In addition, Sencha provides you with the “Themer” to create reusable themes by customizing themes built on iOS, Ext JS, ExtAngular, and ExtReact. Sencha offers a data visualization table that makes it easier for you to track your app information. This also makes it possible for you to organize your app content and how your content is displayed on the browser, device, and screen size. 8. Unity3D Unity3D: Open source Web App Development Tool ( Source: Google image) This cross-platform app development tool is so popular because of its graphics quality that is absolutely incredible. It’s so easy to use this tool and you can use it for more than just a mobile app. With Unity3D tool you can export your app or games to 17 platforms that include — iOS, Android, Windows, Xbox, PlayStation, Linux, Web, and Wii. Unity3d can also be used to track user analytics and share your app on social networks. You can also connect with the network of Unity3D developers called Unity Connect to find help and get your questions answered if you’re having tech issues with coding or something else. 9. 5App 5App: iOS and Android App Development Tool ( Source: Google image) 5App is a unique tool designed specifically for businesses into learning, HR consulting, and firms that want to organize and deliver resources to their employees or to the right people at the right time. 5Apps uses HTML5 and JavaScript for coding of apps and emphasis on the security of app data. The tool allows you to quickly create relevant content to support your employees’ learning and performance. The finished app is compatible with both Android and iOS devices, so you can choose accordingly as per your company’s needs. Final Thoughts Today, businesses face tough competition and their main focus is on the target audience. That’s why businesses need to take advantage of cross-platform app development tools as possible. In my list of top 7 cross-platform mobile development tools, you can find a tool that can manage all of your mobile app development needs. This isn’t always easy to choose the best development tool because of so many options available on the market. So refer to my list of top cross-platform app development tools to build your mobile app.
https://medium.com/hackernoon/9-popular-cross-platform-tools-for-app-development-in-2019-53765004761b
['Amyra Sheldon']
2019-07-09 09:53:35.048000+00:00
['Crossplatform Application', 'Open Source App Dev Tools', 'Best And Popular', 'Mobile App Development', 'Web App Development Tools']
“Full-Time Crypto”
For nearly all of us in the cryptocurrency space uttering the words “full-time crypto” means something special. For you, that phrase may mean early retirement, a change in your career path, financial freedom or even cutting ties (somewhat) with fiat monies forever. In my case, all those reasons have made me ponder that phrase multiple times a week over the past 6+ years. Reflections on a hobby that began in 2011 and includes hundreds of deaths of my passion project, solidifies the fact that I’m “oldskool” in the cryptocoin world. Fittingly, it was one of those articles proclaiming Bitcoin’s death that drew me into the rabbit hole to start with. Illustration: Martin Venezky The idea of digital money — convenient and untraceable, liberated from the oversight of governments and banks — had been a hot topic since the birth of the Internet. Cypherpunks, the 1990s movement of libertarian cryptographers, dedicated themselves to the project. Yet every effort to create virtual cash had foundered. — “The Rise and Fall of Bitcoin” — Benjamin Wallace, WIRED “Hooked on a Feeling” Like many of you internet nomads, this sounded amazing. Individual Sovereignty; are you kidding me? Sign me up! There were limited, believable use cases at this point. Only BTC moving between exchanges and sites like Silk Road or LocalBitcoins, but the promise of a future, free from even these centralized 3rd Party Bitcoin services, was more than enough to spark hope in a decentralized economy. Things changed about 48 hours later with my first 3X bump… “I’m in this for the Coin” 👈 For most of you in the cryptocurrency space, this is your morning, afternoon, evening and, most likely, dream life. People in this state of euphoria contemplate “full-time crypto” life pretty regularly — retiring early, financial freedom and even changing your career path to “day trader”. Don’t let anyone in this space squelch your new found love for chart reading or magnetism towards FOMO. It’s part of the initiation that every oldskooler worth listening to will admit to participating in when cornered. I think something else they’ll tell you is, “it’s more than just making money.” Now if you’re listening or not is another story. Most of the price action leading into 2013/2014 was surrounding Satoshi’s “greatest invention of all time” A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. - “Bitcoin: A Peer-to-Peer Electronic Cash System” — Satoshi Nakamoto Little did most of us know that the “coin” was only 1/2 of the equation. Funny how viewing things through money-tinted glasses leaves out details… like, how does this new cash system actually work? I think this is where journalists fail to see Satoshi’s vision in their death bed write-ups, and where many of us found time to learn after experiencing the gut check run-up to $1200 → $160 → $600 → $300. “Semi-Retirement” That’s what I called it. We had our first girl in 2012, and we were blessed enough financially for me to leave my architecture firm and focus on our family and maintain our lifestyle via day trading cryptocurrencies. I think my quota was make 1 BTC/day trading — do that and we were good to go! Still in my late-thirties at the time, calling it a retirement sounded great (to me) but I didn’t want the raised eyebrows all my old college roommates would surely give at our family parties. So I improvised. I mean, who was I to spoil their joy of being neck deep in parenthood? 
Raising children, going to ball practices every night, eating McDonald’s in the car every meal. Spending 40+ hours a week away from the ones you loved shouldn’t be interrupted by such nonsense as retiring 15 years after just getting started. Essentially it was a sabbatical from my “real job” as a healthcare design architect. The career path I unwittingly signed on for as a toddler building with LEGOs, refined in high-school designing cars and lost days of sleep over in college. “Living the American Dream” of paying into 5–10 years of debt and then trudging through nearly 50 years of service for someone not named “Mr. ME”. Oh glory days! ”I’m in this for the Blockchain” I was still playing family man and day trading cryptocurrencies in 2014 when I decided to return to my architecture firm. There really isn’t anything better than having freedom to spend the majority of your day with the people most important in your life. My decision to return to a real job wasn’t as much a necessity as it was a strategic move. Financially we were good, but I had 2 reasons to jump back in. First, frankly, I was bored. I missed doing design and I wasn’t really participating in the communities of any of the coins I was holding. When you aren’t involved in a hobby you say you’re passionate about you quickly lose interest. Its the Curse of the Entrepreneur; lacking focus and chasing ideas. There weren’t hours of crypto webcasts that popped up daily in my feed to keep me entertained, only Bitcointalk.org which was good for a couple train wreck sightings a week. Looking back at this time, I can see the tight bond of community, value and technology. We like to shill our favorite coin’s “first to” and “best at” technology but most users still don’t understand it all and deep down they only want more people to talk about their project so the price goes up. Second, Bitcoin was in a remarkably stable price state and it was telling me it is time to accumulate (hence we needed capital), as news was beginning to surround the distributed ledger 1/2 of Satoshi’s invention; the “blockchain”. I had already begun focusing on privacy coins before returning to the 9–5 with new projects like Darkcoin and Monero beginning development. But I really gravitated to projects that were bringing utility into the crypto space. I didn’t sign up to Bitcoin for merely another way to move money on the internet from one exchange to another, I signed up for a better this 👇 Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model. — “Bitcoin: A Peer-to-Peer Electronic Cash System” — Satoshi Nakamoto To me this speaks to the power of the blockchain. The ability to create trustless models for social platforms, exchanges, marketplaces, storage and even oracles. These projects were moving past the coin and building in utility, just like the quote above taken from the first line of the Bitcoin Whitepaper introduction. This ultimately led me to the Shadow Project which had already developed a privacy currency, ShadowCash SDC, but was now building in some utility with a decentralized marketplace. To me, this was the next step for crypto; creating a decentralized, anonymous economy! Around this time I finally began volunteering for the first time, wherever I could help within the SDC community. 
It may have taken me a while to figure this out, but I had skills from real life that could actually help a team of volunteers working on an open-source project that I also believed in and invested in. “Get off your ass and do something!”
https://medium.com/decentralize-today/full-time-crypto-9b389c6db099
['Paul Schmitzer']
2018-01-16 21:31:46.165000+00:00
['Cryptocurrency', 'Retirement', 'Freedom', 'Entrepreneurship', 'Bitcoin']
Even a pandemic can’t stop the desperate flow of refugees to Europe
Even a pandemic can’t stop the desperate flow of refugees to Europe In the circumstances, an EU humanitarian package might serve as a band aid but not much more. Influx. Members of a German rescue NGO on a rubber boat during an operation off the Libyan coast. (AFP) In the weeks since the World Health Organisation declared a pandemic, it’s become clear that the outbreak of disease can paralyse national economies but not the flight of desperate people across the Mediterranean to apparent safety. How else to explain the fact that migrants are still travelling from Libya towards Europe? In the last week or so, more than 500 migrants left Libya for Europe, according to the International Organisation for Migration (IOM). On April 12, the Italian government had, perforce, to quarantine a ship-load of migrants at sea. The good thing is it didn’t try to send them back. The way things are going right now, “no state wants to rescue” migrants, according to the German non-profit Sea Watch. Libya, Italy and Malta have all shut their borders citing the pandemic. Last week, Libya refused entry to about 280 returning migrants. IOM initially said Libyan ports appeared to have closed altogether. But later, the UN Refugee Agency’s special envoy for the central Mediterranean Vincent Cochetel clarified that “Libya’s Directorate for Combating Illegal Migration does not seem able or prepared to take more detainees.” Under the terms of a deal between Italy and Libya’s UN-backed government, signed in 2017 and renewed last November, the Libyan coastguard is meant to stop migrant boats heading for Europe and return their passengers to Libya. But the pandemic seems to have thrown all of that into doubt. So what rights, if any, do refugees have during a once-in-a-century pandemic? The first point to note is that refugees and asylum-seekers are recognised under international law. Although unprecedented times require unprecedented measures, it’s reasonable to say that migrants of all sorts should at least be entitled to just and humane treatment. In this context, there is no more shining example than Portugal. Earlier this month, Portugal granted full citizenship rights, through June 30, to all refugees, asylum-seekers and migrants with pending applications for residency certificates. This will allow them to access healthcare, a government spokesperson explained. The decision stands as one of the more heartwarming instances of pragmatic humanism in the age of the coronavirus. Elsewhere, not so much. The exceptional circumstances of a pandemic have justifiably prompted border closures and travel restrictions, but it’s all too clear that several countries are simply using the coronavirus outbreak to push the same restrictionist policies they pursued before. It was on March 1, before a single coronavirus case was recorded in Hungary, that it suspended the right to claim asylum in the country, claiming there was a connection between the disease and illegal migration. Landlocked Hungary has the luxury of self-isolation afforded by its geography, but not island nations like Malta. On April 13, Malta’s foreign minister and home minister jointly wrote to the European Union’s (EU) High Representative for Foreign Affairs and Security Policy Josep Borrell to demand “imminent and substantial” humanitarian assistance for Libya to deal with “the rapidly deteriorating migration situation in the Mediterranean during this testing hour.” Malta’s argument was stark. 
Unless the EU launches a humanitarian mission for Libya with at least 100 million euros “today and not tomorrow,” there may be little or no “incentive” for migrants to stay put in Libya rather than making for European soil. Accordingly, the Maltese ministers wrote, the EU should “boost the empowerment of the Libyan Coast Guard in enhancing the control of its borders, as well as concretely ensuring that Libya represents a safe port for the disembarkation of migrants.” The issue will be discussed at an emergency EU meeting. But a second tangential point may be harder to confront. With the pandemic triggering the worst economic downturn since the 1930s’ Great Depression, poor countries face the prospect of debt crises and political turmoil. This, in turn, could prompt massive outflows of migrants towards the rich world, especially Europe. As Kristalina Georgieva, managing director of the International Monetary Fund, recently noted: “Trouble travels. It doesn’t stay in one place.” The implications are dire for conflict-scarred countries like Libya. In the Maltese letter to EU High Representative Borrell, the ministers described Libya as “a complex landscape plagued with difficulties across conflict, health, humanitarian and migration dimensions, all of which are snowballing at this very moment.” The COVID-19 crisis, they added, is “leaving its mark in Libya and is weakening an already fragile health system.” More than 650,000 people wait to “leave Libyan shores for Europe,” they warned. In the circumstances, an EU humanitarian package might serve as a band aid but not much more. Originally published in The Arab Weekly
https://rashmee.medium.com/even-a-pandemic-cant-stop-the-desperate-flow-of-refugees-to-europe-22c51b1a7c27
['Rashmee Roshan Lall']
2020-04-19 11:33:44.332000+00:00
['Europe', 'Restrictionist', 'Refugees', 'Asylum Seekers', 'Coronavirus']
Problems Deep Learning will probably solve by 2019
It is hyperbole to say deep learning is achieving state-of-the-art results across a range of difficult problem domains. A fact, but also hyperbole. In this post, you will discover recent applications of deep learning. Deep Learning for Forecasting Nuclear Accidents Forecasting is one of the many applications where machine learning techniques have established a firm footing. With the deep learning networks getting better with each passing day, the move to entrust these networks with something as sophisticated and incredibly powerful as nuclear plants are in progress. The external instabilities like Tsunamis and extremist activities like terrorism cannot be forecasted with certainty. But what happens within a nuclear plant can be controlled and should be. Deep learning for diagnosis and prognosis A piece of news published two days ago claims that deep learning can analyze lung cancer histopathology slides in less than 30 seconds. Deep learning to eradicate suicide In a recent NYU study wherein scientists built a natural language processing AI, basically, the same technology that runs Alexa, Assistant, and Siri that can detect PTSD in veterans with 89 percent accuracy just by listening to audio recordings of the person’s speech. Deep Learning to Save Lives The rapid advances in computer vision due to the application of AI starting in 2012, have led to predictions of the imminent demise of radiologists, to be replaced by better diagnosticians — Deep Learning algorithms. These algorithms will help “automate every visual aspect of medicine,” going beyond radiology to pathology, dermatology, dentistry, and to all situations where “a doctor or a nurse are staring at an image and need to make a quick decision.” This “automation” does not mean replacing doctors. Rather, it means the augmentation of their work, providing consistent, accurate, and timely assistance. We need all the doctors we have in the world and we will need 10X more because of the aging population. AI-Based System to cut process time for abnormal X-Rays Deep Learning can help a Sales Team Thrive Machine Learning, specifically Deep Learning, fills in gaps that human intuition never could. Put to use across a team of eager sales pros, its innate advantages add a layer of intelligence to any crew’s knowledge base. As a tool, deep learning provides insights by spotting and naming patterns in millions of unstructured data points. Deep Learning bridges gaps in the sales pipeline by determining who is most likely to convert to the next stage in the sales funnel. Using Deep Learning, sales leaders can not only identify a good-fit potential customer, but also predict the possible deal size, deal cycles, and other insights. Conclusion There are many cases where AI and Deep learning can revolutionize a particular field. The list would go on and on. There are a plethora of applications. By the end of 2019, we will witness a wide variety of problems solved by AI.
https://medium.com/dataseries/problems-deep-learning-will-probably-solve-by-2019-b02233ed9aad
['Surya Remanan']
2019-04-29 19:25:23.217000+00:00
['Deep Learning', 'Artificial Intelligence', 'Data Science']
Top 9 Jupyter Notebook extensions
Introduction Jupyter Notebook is probably the most popular tool used by data scientists. It allows you to mix code and text and to inspect the output in one document, something that is not possible with some other programming IDEs. However, the vanilla version of Jupyter Notebook is not perfect. In this article, we will show you how to make it slightly better by installing some useful extensions.
https://towardsdatascience.com/top-9-jupyter-notebook-extensions-7a5d30269bc8
['Magdalena Konkiewicz']
2020-06-24 19:03:40.847000+00:00
['Artificial Intelligence', 'Machine Learning', 'Technology', 'Data Science', 'Programming']
8 Life Lessons I’ve Learned at 40-Something That I Wish I’d Known at 20-Something
8 Life Lessons I’ve Learned at 40-Something That I Wish I’d Known at 20-Something Some of the things that come with age are great. Awareness is one of them. Photo: Anna Pritchard/Unsplash My 40s are a lot different than I thought they’d be when I was still in my 20s. On the one hand, I have a much deeper understanding of why my dad liked naps so much when I was a kid. I’ve learned not to ever fall asleep in an awkward position if I want to be able to walk the next day. I can’t just eat whatever I want anymore if I don’t want to suffer the horrible consequences either. However, I’m also a lot more aware and secure in myself than I thought I’d be at this age. I’m calmer. I don’t sweat the small stuff nearly as much. And I’ve learned a thing or three about life that I wish I’d understood a lot earlier on. Here are some of the more important ones. Do yourself a favor and get this stuff straight now so you don’t have to do what I did and learn the hard way. 1. There’s no such thing as too late or too old. When I was younger, I was super concerned about whether or not I was keeping up with other people my age when it came to the big milestones in life. I was never what you’d call an overachiever, so I didn’t care whether I was the first of my friends to get married or land my dream job. I just knew I wasn’t cool with being the last. That meant I jumped headfirst into things that deserved a lot more thought and consideration. I rushed into marriage in my mid-20s and wound up divorced by 29. I pushed myself to take on huge responsibilities I wasn’t ready for way too soon in life and I wound up with bad credit it took me my entire 30s to fix. Now I couldn’t even tell you why I did those things or what the big rush even was. Don’t waste your 20s rushing to become your parents. You’ll look back one day and regret simply being young when you had the chance to be. There’s no set age by which you have to find your ultimate bliss in life, own a home, choose a life partner, or anything else major. For some people — myself included — that ideal time is a little later in life. For others, it’s never, because they get older, gain some perspective, and realize they don’t even want those things. So don’t waste your 20s rushing to become your parents. You’ll look back one day and regret simply being young when you had the chance to be. 2. Who you were as a child is more important than you think. One of the dumbest things I’ve ever been led to believe was that children don’t know themselves — that I didn’t know myself. It eventually turned out that I knew myself better as a child than I have at any other point in my life. It’s just that it’s so darned easy to lose sight of yourself once society starts telling you how wrong you are for liking what you like and being whoever it is that you are. For instance, I knew I wanted to make my life about creating things when I was a kid, as well as that a typical 9 to 5 job probably wasn’t for me. My parents, on the other hand, had their heart set on my working in animal care for some reason and eventually managed to convince me that’s what I wanted too. They did such a good job of it that when I eventually found myself working ridiculous hours as a vet tech at a local animal clinic, I couldn’t understand why I hated it so much. These days, I’m a full-time writer who works out of her home according to a flexible schedule of my choosing — a much better fit. 
The thing is it’s fine to want to make your family proud, but if their dreams for you differ from your dreams for yourself, you’ll be a lot happier if you listen to yourself. No one knows you as well as you know yourself and you knew yourself without limits or shame when you were a kid. Hold onto the things you loved and longed for then. They turn out to be pretty important, especially when you inevitably find yourself wondering what to do with your life next. Chances are the answer is connected to something that made you come alive as a child. Photo by Yarden on Unsplash 3. It’s better to make memories than collect things. My mother has this huge beef with people who spend money on stuff like concert tickets, vacations, or special dinners at restaurants. She reasoned that once you’ve gone to that concert, it’s over and you have nothing tangible to show for it, meaning the tickets were a huge waste of money. If you had to spend money on fun, you bought things instead… objects. Unlike the concert tickets, you’ll have the things you buy potentially forever, especially if you take care of them. That’s the approach to disposable income and leisure time that I grew up with and lived by for years. And as with that vet tech job I never truly wanted, I couldn’t figure out why all this crap I was buying wasn’t making me as happy as it was supposed to. Part of it had to do with the hard truth that most “stuff” becomes pretty useless sooner or later. If it doesn’t break or wear out, it becomes obsolete — like the massive cassette collection that was my world when I was in my teens. Same for all the knickknacks I spent my 20s collecting. “Stuff” becomes pretty useless sooner or later. If it doesn’t break or wear out, it becomes obsolete. Memories are a different story though. Most of the physical objects I spent so much money on when I was younger hit a landfill years ago. But I still remember the concerts I went to, the vacations I took, and the festivals I attended like they were yesterday. Those memories and the way I felt when I was creating them are as shiny and precious to me today as they were back then. So are the ways some of those experiences changed me as a person. These days, I never think back on the past and regret not buying some trendy piece of clothing that I probably wouldn’t even have worn or yet another statue to sit on my bookshelf collecting dust. I think about that trip to Romania I had the opportunity to take in college, but ultimately passed on. I think about the time I went to Mexico on a cruise and let my stick-in-the-mud ex talk me out of riding a burro up a dirt trail while I was there. It makes me sad that I don’t have those memories to look back on, especially since I may never have those same opportunities again. But the good news is I learned to just go ahead and do the things I want to do in life, even if it means doing them alone. The memories and cool stories last a lifetime. 4. The little things are the big things. Speaking of memories, I’ve learned that it’s not always obvious when you’re creating one that’s going to mean a lot to you one day. Everyone knows their wedding day or the day their child is born is a big deal and that they’ll remember that for the rest of their life. Some of my favorite memories are the ones that kind of snuck up on me at the time though. 
I’m talking about the time my husband and I drove out to our favorite barbecue spot on Memorial Day one year and spent the whole day there, even though it got super cold and started to snow unexpectedly. I mean the day I was walking by the beach with my friends as a teenager in the fog, saw a seal, and thought for a split second that it was a mermaid. There’s the time I signed up for an online film appreciation class on a whim and realized I still love learning as an adult. And the week a random frog lived underneath my bedroom window and made me happy every night with all his little frog noises. Those are some of the moments and occurrences that turned out to mean the most to me over the years. I couldn’t even tell you why, but there’s something magical about them — something that suggests they’re what life is truly all about. They were little things that became big because they had meaning, especially if they were also shared with someone I loved. Photo by Bruno Nascimento on Unsplash 5. Taking care of yourself physically is every bit as important as people tell you it is. Ignore that piece of advice and you’ll eventually wish you hadn’t, I assure you. I’m not sure how things are for young people these days, but I wasn’t taught about fitness in much detail when I was young. Sure, I was taught it was important, but I was never properly schooled on why or told what exactly would happen to you if you chose not to bother. I certainly wasn’t given any practical advice on how to turn fitness and proper self-care into permanent habits. Luckily for me, years of working on my feet and having friends who preferred physical pastimes to simply sitting around all the time meant I spent most of my life “accidentally fit”. The problem came when I got older, had more choices, and started making a bunch that meant I wasn’t very active anymore. That quickly led to the swift and blinding development of numerous health problems and this horrible feeling that I had no control over my life anymore. Get so used to taking care of yourself that doing otherwise feels unbearably weird. These days, I’m doing much better in that department. I’ve gone out of my way to educate myself on how to take care of my body, as well as to establish a healthy routine that’s realistic for me. The “realistic for you” part is critical because, at the end of the day, it doesn’t matter how effective a given fitness regimen is. If you hate it with the fire of a thousand suns, you’re not going to stick with it and you can’t benefit from exercise you’re not doing. Don’t do what I did and wait until you’re 40 and your metabolism is slowing down to get your act together. Do it while you’re still young and stick with it. Find a way to love being active and to make it a daily part of your routine. Get so used to taking care of yourself that doing otherwise feels unbearably weird. You’ll be glad you did one day, because seriously. If I could change just one thing about how I ran my life when I was younger, this would be the thing. (Here’s a piece I wrote all about that in particular, should you be interested.) 6. The best time to make your dreams come true is now. Not in 10 years when you’ve figured out what your one true career path is. Not in a few months when you’ve finally lost that stubborn 20 pounds. Not tomorrow when the weather’s better and not “someday” when your life’s finally the big, perfect bowl of peach cobbler you hope it eventually will be. It’s now… today! 
The unshakeable optimism that comes with being young is amazing and I remember it fondly. I figured my whole life was still ahead of me and took it for granted that everything would simply work out in my favor one day all by itself, so why force things? I wanted to travel, but I thought the experience would be better “someday” when I had tons of money and a perfect job that didn’t feel as soul-sucking as my current one did. I wanted to speak multiple languages, but I wanted to learn in the perfect house I thought I’d own someday while sitting in the perfect combination office-study I also planned on having. I wanted to teach myself how to do genuinely awesome makeup, but I wanted a flawless life and a circle of brag-worthy friends to show it off to first. Well, guess what. That perfect life never materializes because it doesn’t exist. Even if you’re crazy successful one day, you’ll forever have constraints on your time or your resources. There will always be something going on that stops circumstances from being ideal, so start working on the things you want to do, be, and experience now. Then you can spend middle age building on what you’ve already learned, not starting from scratch. Photo by Henry Hustava on Unsplash 7. Nobody’s coming to save you from yourself or your life. Like a lot of very shy young girls, I spent a lot more time reading books and watching movies than I did having real-life experiences and meaningful interactions with other people. That gave me the impression that my life was eventually going to play out like the stories I loved so much and that I wouldn’t have to do anything special to help it happen. My life was legitimately hard for me when I was young for lots of reasons, but it never occurred to me to try to rise above it so I’d be able to build myself a better one eventually. Instead, I fantasized about the day someone else would love me enough to do it for me. I thought one day my emotionally unavailable parents would suddenly become different people and want to help me out in life the way my friends’ parents helped them. Or that whenever that perfect partner finally materialized he’d take care of me and provide for me. That way I’d never have to step out of my comfort zone, try anything scary or new, and figure out life for myself. If you do luck out one day and meet someone who’d love to give you an awesome life just because you’re you, trust that they’re going to expect you to pitch in in one way or another. People get tired of being the only horse on the team who’s actively working to pull the wagon. Well, life doesn’t work like that, so if you think this way, it’s to your benefit to get it sorted now while you’re still young. “Princess-in-a-tower disease” isn’t a good look on someone who’s in their 30s and it’s an even worse one on someone middle-aged or older. Don’t be fooled either. You don’t have to have been a young girl who enjoyed Disney princess movies a little bit too much to have this issue, so it’s worth asking yourself some questions. Are you an aspiring creative who’s banking so hard on “being discovered” one day that you’re not actively seeking out and seizing opportunities? Are you coasting through life because you assume you’ll eventually inherit money or property when your parents croak? Are you a parent who thinks your kids are going to grow up one day and undo all your mistakes for you? If so, it’s time to grow up. No one is out there chomping at the bit to save you from your apathy and lack of gumption. 
And if you do luck out one day and meet someone who’d love to give you an awesome life just because you’re you, trust that they’re going to expect you to pitch in and help on one level or another. People get tired of being the only horse on the team who’s actively working to pull the wagon. Always do your share and pull your weight, even if no one asked you to. 8. No one is entitled to a relationship with you (and vice versa). I’ve touched here and there on the fact that my home life was pretty dysfunctional when I was growing up. It was that low-key type of dysfunctional that sneaks up on you though. No one hit me or put lit cigarettes out on my arms, but there was a lot of emotional abuse and gaslighting going on. There still is. Eventually, I concluded that it was better to end my relationships with some of the most toxic people in my family and put up extremely strict boundaries with others. I’ve made similar decisions with other people in the past, especially ex-partners and false friends who took so much more than they gave. Learning to say no to harmful relationships with toxic people changed my life overnight. Healthy relationships that are two-way streets are much too good to miss out on, but you need to make room for them in your life. No one is entitled to a relationship with you for any reason, especially if they’re unwilling to treat you with basic human decency — not even family. People who care about you don’t kick you while you’re down or try to destroy your joy in the things you love. They don’t tell you you’re worthless, mock your appearance, and delight in being cruel to you. If you have people like this in your life, you are absolutely within your rights to cut them off, protect yourself, and move on. Even if they’re family. People also have the right to decide the same when it comes to you, so learning how to gracefully let others exit your life is also worthwhile. Healthy relationships that are two-way streets are much too good to miss out on, but you need to make room for them in your life. There won’t be any if you’re clinging to people who don’t value their relationships with you to the extent that they should. I’m not a huge believer in regret as far as life goes. I do believe strongly in learning as much as you can from your experiences. That’s a process that won’t ever stop for me, as I’ve learned to enjoy the challenge of growing and evolving over the years. Whatever age you are now, please do the same. It keeps life meaningful, colorful, and worthwhile. Shannon Hilson is a full-time professional writer from Monterey, California. She lives a quiet, creative life with her husband who is a movie producer and composer. When she’s not either writing or reading, she loves cooking and studying foreign languages.
https://medium.com/the-post-grad-survival-guide/8-life-lessons-ive-learned-at-40-something-that-i-wish-i-d-known-at-20-something-d7d1b0617eff
['Shannon Hilson']
2020-08-27 00:30:59.701000+00:00
['Life Lessons', 'Self Love', 'Aging', 'Self Improvement', 'Self-awareness']
NASA discovers water on surface of Earth’s moon
NASA discovers water on surface of Earth’s moon News | Alliah Antig Photo courtesy of NASA/Daniel Rutter The National Aeronautics and Space Administration (NASA) confirmed in a press release the presence of water in the southern hemisphere of the moon. The findings by NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) were published in the latest issue of Nature Astronomy. In previous observations, hydrogen was the only element detected in the Clavius crater, one of the largest craters visible from Earth. While they detected hydration on the lunar surface, the researchers could not distinguish at that time whether it was water or other hydroxyl compounds. SOFIA, with the help of its Faint Object Infrared Camera for the SOFIA Telescope, was able to discover a concentration of 100 to 412 parts per million of molecular water in the Clavius crater, an indication of the presence of water on the sunlit surface of the moon. The researchers also concluded that the discovery of water in a small lunar soil region “is a result of local geology and is probably not a global phenomenon.” “Now we know it is there. This discovery challenges our understanding of the lunar surface and raises intriguing questions about resources relevant for deep space exploration,” Paul Hertz, director of the Astrophysics Division in the Science Mission Directorate at NASA, said. The discovery raises the question of how water persists in such a harsh, airless environment on the lunar surface. “Without a thick atmosphere, water on the sunlit lunar surface should just be lost to space. Yet somehow we’re seeing it. Something is generating the water, and something must be trapping it there,” Casey Honniball, one of the authors of the study, stated. NASA theorized that micrometeorites raining down on the lunar surface brought small amounts of water to the moon’s surface upon impact, resulting in the transformation of hydroxyl into water. The observations are now being used to formulate a systematic approach to learning more about the production, storage, and transportation of water across the moon. Resource maps of the moon will be added to the tasks of NASA’s Volatiles Investigating Polar Exploration Rover, which will be used for future human exploration in space. Jacob Bleacher, chief exploration scientist for NASA’s Human Exploration and Operations Mission Directorate, believes this will open up more opportunities for new scientific discoveries. “If we can use the resources at the moon, then we can carry less water and more equipment to help enable new scientific discoveries,” Bleacher added, pointing to the possibility of using resources found on the moon to minimize the equipment that needs to be carried during explorations. NASA aims to learn more about the causes and effects of the presence of water through the Artemis program. Its goal of establishing a sustainable human presence on the moon by the end of the decade can now be supported by gathering relevant information in advance of sending the first woman and next man to the lunar surface in 2024.
https://medium.com/up-scientia/nasa-discovers-water-on-surface-of-earths-moon-ab1155bfc83a
[]
2020-10-28 10:27:46.218000+00:00
['News', 'Astronomy', 'Science', 'Moon', 'NASA']
Why Remote Learning Was a Big Hot Mess
This September, we started the 2020–21 school year fully remote and it only took three weeks for our family to breakdown and send the kids back to school on a hybrid schedule. I served on the re-opening committees at school, made suggestions, asked questions, and offered to help — but here we are — worse off. Being an involved parent is an understatement in 2020. Aside from helping plan for the re-opening from brainstorming social-emotional learning ideas to inclusion, keeping on top of the kids each day while working remotely, and explaining to our teenager how to log into Zoom securely, nothing could have prepared us for this year. And I’m left wondering — Why are our kids held to impossible standards during a worldwide pandemic? It started on the first day of school when our son’s teacher posted a Zoom link to google classroom. He introduced himself while looking at the kids who attended in person, read them the book, wrote on the board while we watched, like peeping toms through a keyhole of sorts (Zoom). He muted his mic and then walked away and didn’t return. The next day, the google classroom remained unchanged, with no new links, no classwork, nothing. The same happened on Monday. Photo Credit: Canva.com In another class, his teacher hit the mute button while talking and continued to talk for over twenty-five minutes while a dozen students sat there staring patiently at the screen. That same teacher has given him zeros for not completing work with vague instructions while he has no access to the textbook online. And I’m not pointing the finger — I’m saying that we are all stuck in miscommunication limbo waiting for this cycle to end. But it isn’t going to end soon. Now that the flood gates of remote possibilities have opened, remote learning to some capacity is a permanent part of our lives. A few mismarked absences and zeros later, our son fell apart. He fell apart before me. He said he felt invisible after more e-mails about missed Zoom calls and classwork came. Eager to help him, I logged into his classes and saw for myself — vague assignment instructions and Zoom links buried under the announcements stream in unmanaged google classrooms. Teachers responded to his private comments with e-mails that went unread. It was like going on a scavenger hunt for information and none of us wanted to play anymore. Remote only learning is a big hot mess because it requires parents and caregivers to be more present than any of us know how to be, or have time to be. We’re also asking teachers to deliver instruction to separate populations of students with completely different needs at the same time. On top of everything, we are asking kids to act like adults. Our teenager has been silently struggling, not asking for help, upset about miscommunications with teachers he has never met in a school he has never actually stepped foot in while accumulating absences in the empty shells of his virtual classrooms. And it is all his fault, allegedly. Why is it assumed that teenagers are mature enough to manage their own remote learning? Why are they left to fend for themselves in this sink or swim environment? This isn’t a job and they aren’t at work. But their parents probably are. Not every student has someone at home to help them during the day. What they need are clear instructions for everything from logging in to meetings to assignment prompts. They need broken-down grading rubrics and reminders. They need grace periods and brain breaks. They need compassion.
https://medium.com/age-of-awareness/why-remote-learning-was-a-big-hot-mess-12fd89f6b161
['Laura J. Murphy']
2020-10-27 00:44:32.223000+00:00
['Education', 'Remote Learning', 'Mental Health', 'Parenting', 'Education Reform']
5 Tips for Composing Event Handler Functions in React
5. Avoid Referencing and Depending on the State Inside Event Handlers (Closures) This is a really dangerous thing to do. If done right, you should have no problems dealing with state in callback handlers. But if you slip up at one point, it can introduce silent bugs that are hard to debug, and the consequences start eating extra time out of your day. If you’re doing something like this … … you should probably revisit these handlers and check if you’re actually getting the right results. If our input has a value of 23 and we type another 3 on the keyboard, here’s what the results say: If you understand execution context in JavaScript, this makes no sense, because the call to setValue has already finished executing before moving onto the next line. Well, that’s actually still right. There’s nothing JavaScript is doing that’s wrong right now. It’s actually React doing its thing. For a full explanation of the rendering process, you can head over to their documentation. But, in short, whenever React enters a new render phase, it takes a snapshot of everything that’s present specific to that render phase. It’s a phase in which React essentially creates a tree of React elements, which represents the tree at that point in time. By definition, the call to setValue does cause a rerender, but that render phase is at a future point in time. This is why the state value is still 23 after setValue has finished executing: the execution at that point in time is specific to that render, sort of like each render having its own little world to live in. This is how the concept of execution context looks in JavaScript: This is React’s render phase in our examples (you can think of this as React having its own execution context): With that said, let’s take a look at our call to setCollapsed again:
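The setCollapsed snippet itself is not reproduced here, but here is a minimal sketch (not from the original article) of the same stale-snapshot problem, written as a React function component in TypeScript. The component and handler names are hypothetical; the point is only to contrast a handler that reads the state captured by its render with one that uses the functional-update form.

```tsx
import React, { useState } from "react";

export function StaleCounter() {
  const [value, setValue] = useState(23);

  const addTwoStale = () => {
    // Both calls read the same render snapshot of `value` (e.g. 23),
    // so the component ends up at 24, not 25.
    setValue(value + 1);
    setValue(value + 1);
  };

  const addTwoSafe = () => {
    // The functional form receives the latest pending state,
    // so two calls really do add 2.
    setValue((prev) => prev + 1);
    setValue((prev) => prev + 1);
  };

  return (
    <div>
      <p>value: {value}</p>
      <button onClick={addTwoStale}>+2 (reads the snapshot)</button>
      <button onClick={addTwoSafe}>+2 (functional update)</button>
    </div>
  );
}
```

When an event handler derives the next state from the previous one, the functional-update form is usually the simplest fix, because it removes the handler’s dependency on the render snapshot altogether.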
https://medium.com/better-programming/5-tips-for-composing-event-handler-functions-in-react-479553968585
[]
2020-05-20 14:22:43.794000+00:00
['JavaScript', 'React', 'Reactjs', 'Nodejs', 'Programming']
I spent most of the last three decades scribbling about traveling, business and dining.
I spent most of the last three decades scribbling about traveling, business and dining. Reporting on the tastes, flavors, ideas and sights that I deemed worthy of being documented in ink. In that journey, I have become an expert at expressing other people’s passion. Not my own. When I decided my voice was worthy and ready, I wrote a sad book about my sister dying. Then I wrote an equally dreary book about my son’s drug addiction. Again, I found myself the presenter of other people’s stories. I would repeat the mantra — “My life has been unusual and full of adventure! It is time to tell my story.” In my novel queue I have my circus story (I left law school and became a trapeze artist for 5 years), a couple of screenplays about raising 6 foster kids, a teleplay based on a weekly murder and HOA and a grand adventure after my uncle died and left me 5000 animals. Many words tapped, thesaurus consulted and wine gulped, but I never sent my babies into the wild. I kept them in folders and desk drawers waiting for that magical moment when I felt worthy/ready/brave/done. For my maiden voyage, it made sense to combine sex and food for a summer read, and Consumed was born. In this building of prose, I accidentally acquired a muse. In searching sexual and dining matters, our profiles found each other and became tangled like a gold chain left in the jewelry box. The muse bounced and enhanced my words. I did the same for them. I loved it because the act of writing can get quite lonely and I’m a social beast. Also there was something in the anonymity that brought out an honest bravery I didn’t know I possessed. In this unusual pairing of voices, we volley sexual and life situations back and forth. The result is rough, funny and sometimes extremely sexy. This has become a highly addictive practice. It’s a new relationship bound only by prose and honesty. There are no limits or judgments. Writing is a lonely road, and the opportunity to travel it with a compadre made it palatable. Through practice, it provided the courage I lacked. We have never met. I don’t know the muse’s name. They do not know mine. I have been given permission to share using a pen name. This strange liaison is one of the finest I’ve ever experienced. We chat about everything and nothing. It’s honest and profound, and most importantly it facilitated my voice. I write for them. I believe everyone should have a muse in their life. Rarely does this happen. I thought it right to share. On this blog/space/journal I will share our daily contemplations. I have promised to do this daily; the muse has not. You may receive my strange brain fevers, or a tapestry of elegance, a dance of two writers. See if you can tell where the muse’s voice ends and mine begins. I barely can anymore.
https://medium.com/muse-writtings/i-spent-most-of-the-last-three-decades-scribbling-about-traveling-business-and-dining-5a696d8a2838
['Teri Bayus']
2016-07-11 04:13:17.852000+00:00
['Muse', 'Sex', 'Writing', 'Affairs', 'Writer']
Wintertime
Wintertime A poem on the nature of time in the cold season Photo by Kristen Munk on Unsplash I hear the mountain of this magical landscape enchanted by souls who were wronged in the past The land is still loyal and it sings to the spirits Shuddering peaks groan and creak as they exhale fall’s last gasp They contract and they cool like the days of the season They shrink like the years before me on my path I’ll turn thirty-one with the rise of the sun and I tense like the rocks at the prospect of that The moments are cold and they turn to ice crystals frigid seconds set in stones that skip forward through time But in the heat of the fire they’re melting and malleable and for once in my life the manipulation is not mine I finally let go and I let the heat warm me I unclench my hands turned to fists in the cold The minutes and hours thaw into each other Time turns to liquid that my fingers can’t hold Coyotes call at the base of the mountain Their voice brings me back to the tempo of time It echoes the rhythm of winter come running and the moments freeze back into the forward design
https://medium.com/for-the-sake-of-the-song/wintertime-c1f8551324fe
['Sydney J. Shipp']
2020-11-30 22:31:56.911000+00:00
['Poetry', 'Self-awareness', 'Winter', 'Time', 'Nature']
Five Important Facts You Should Know about Digital Marketing
According to a ‘Managing Digital Marketing’ study by Smart Insights, 46% of brands don’t have a defined digital marketing strategy, while 16% do have a strategy but haven’t yet integrated it into their marketing activity. The right digital marketing strategy can give companies a competitive edge over their rivals and help them maximize growth, profit, and value. Here are five important facts that you should know about digital marketing. They’ll ensure you get the most out of your people and digital investments by aligning them with the critical moves that drive competitive advantage and superior results.
https://medium.com/marketing-in-the-age-of-digital/five-important-facts-you-should-know-about-digital-marketing-41e53aecba3d
['Wenting Xu', 'Tina']
2020-08-10 00:02:08.872000+00:00
['Digital Transformation', 'Marketing', 'Strategy', 'Marketing Strategies', 'Digital Marketing']
Styled Components: A CSS-in-JS Approach
Build More Styled Components We continue building styled components for div and a tags: AppDiv is created at line five, with styles at lines 5-7. AppDiv replaces div with className at line 26. AppLink is created at line 20, with styles at lines 20-22. AppLink replaces a with className at line 32. More things are styled: Although the text alignment for AppDiv isn’t obvious, the blue link puts us a step closer to the original Create React App. We’ve used styled.tagname helper methods. Can the tagname be a component name? No. If we want to build upon a tagged template literal, styled should be used as a constructor like styled(Component) . The new component inherits the styling of Component . In the following code, Button1 is styled with red text on a white background. Button2 inherits the red text and has a yellow background. Button3 inherits the yellow background and is styled with green text. const Button1 = styled.button` color: red; background: white; `; const Button2 = styled(Button1)` background: yellow; `; const Button3 = styled(Button2)` color: green; `; Put them together: <Button1>Button1</Button1> <Button2>Button2</Button2> <Button3>Button3</Button3> It looks like this: If the styled target is a simple element ( styled.tagName ), styled-components passes through any known HTML attribute to the DOM. If it’s a custom React component ( styled(Component) ), styled-components passes through all props. The above button example can be accomplished by passing props that are used in interpolations: const Button = styled.button` color: ${(props) => props.clr || "red"}; background: ${(props) => props.bg || "white"}; `; Here’s the usage: <Button>Button1</Button> <Button bg="yellow">Button2</Button> <Button clr="green" bg="yellow">Button3</Button> It’s important to define styled components outside the render method; otherwise, they are recreated on every single render pass. For the three generated buttons, each has two classes connected to it: The first is the static class, which does not have any style attached to it. It’s used to quickly identify which styled component a DOM object belongs to. The second one is the dynamic class, which is different for every element. It’s used to style the component:
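The generated class names themselves aren’t shown here, but to tie the snippets above together, here is a self-contained TypeScript sketch of the props-based Button. It assumes styled-components and React are installed; the ButtonProps interface is an addition for type safety, not something from the original snippet.

```tsx
import React from "react";
import styled from "styled-components";

// Props read inside the interpolations; both are optional with defaults.
interface ButtonProps {
  clr?: string;
  bg?: string;
}

// Defined at module scope (outside any render), so the class is generated once.
const Button = styled.button<ButtonProps>`
  color: ${(props) => props.clr || "red"};
  background: ${(props) => props.bg || "white"};
`;

export function Buttons() {
  return (
    <>
      <Button>Button1</Button>
      <Button bg="yellow">Button2</Button>
      <Button clr="green" bg="yellow">Button3</Button>
    </>
  );
}
```

Keeping Button at module scope also respects the rule above about never defining styled components inside the render method.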
https://medium.com/better-programming/styled-components-a-css-in-js-approach-755f6a196c42
['Jennifer Fu']
2020-07-21 15:21:51.823000+00:00
['JavaScript', 'React', 'Reactjs', 'Nodejs', 'Programming']
Mark Zuckerberg Shares The Jewish Prayer He Says to His Daughters Every Night
Mark Zuckerberg is among the busiest CEOs around the globe. The 33-year-old runs Facebook, the social-media giant with a market cap of $547 billion. As CEO, Zuckerberg spends a lot of time directing and managing his company; however, he still makes time to exercise, travel and, most importantly, spend time with his family. His philosophy is to stay productive and balanced by eliminating nonessential choices from his life and by setting ambitious goals for himself. His typical routine includes an 8 a.m. wake-up call, a morning workout session and a lot of time at Facebook. He doesn’t waste time dealing with any of the little choices we make every day, such as picking an outfit. When asked about his wardrobe in 2014, he told an audience: “I really want to clear my life to make it so that I have to make as few decisions as possible about anything except how to best serve this community.” Despite his busy life, he always manages to spend some time with his wife Priscilla Chan and his daughters Max and August. He also doesn’t give up on his Jewish identity and makes a point of passing it on to his daughters. Every night before going to bed, the Facebook CEO tucks his children in with a traditional Jewish prayer, the “Mi Shebeirach.” He mentioned the same prayer when he gave the commencement address at Harvard University. The Facebook founder said: “It goes, ‘May the source of strength who blessed the ones before us help us find the courage to make our lives a blessing.’ I hope you find the courage to make your life a blessing.” Zuckerberg quoted the “Mi Shebeirach,” a prayer for healing written by Debbie Friedman, one of the most significant Jewish musicians of the past 50 years.
https://medium.com/jewish-economic-forum/mark-zuckerberg-shares-the-jewish-prayer-he-says-to-his-daughters-every-night-1852318bf1ae
[]
2018-02-12 14:33:33.558000+00:00
['Mark Zuckerberg', 'Facebook', 'Ethics', 'Jewish', 'Jef']
Functional Programming in Java
LAMBDA EXPRESSIONS Lambda expressions or functions are blocks of code that can be assigned to a variable, passed around as an argument or even returned from functions. They are anonymous functions and consist of parameters, the lambda operator (->) and a function body. Lambda expression syntax Lambda expressions were introduced in Java as a means of supporting functional programming. As lambda expressions are anonymous functions, passed around as arguments, we need a way to execute these functions on demand. This is where functional interfaces come into play. Functional interfaces, having only a single abstract method, accept the lambda function or a method reference as the implementation of that particular abstract method. To understand better, let’s see how streams and optionals use lambda expressions as implementations for the abstract method in functional interfaces. FUNCTIONAL INTERFACES USED IN JAVA STREAMS Streams in Java provide a functional approach to process a collection of objects. Stream.java provides different methods to process list elements, map(), flatMap(), filter(), sorted() etc., each of which takes a functional interface type as an argument. Let’s consider an example of a list of names and a Stream on the list to filter out names that contain the letter ‘a’. The .filter() here is a function that filters out elements from the list that satisfy the specified criteria, and .collect() returns another list with the filtered elements. Notice that the input passed to the filter function is a lambda expression. The filter() method in Stream.java has the following structure. filter function in Stream.java from java.util.stream As you can see, filter accepts a Predicate as an argument. So what is a predicate? Predicate.java A predicate is a functional interface provided in the java.util.function package and contains one abstract method, which is boolean test(T t). But how do we get the implementation for test(T t)? What gets executed when predicate.test(t) is called?
https://medium.com/swlh/functional-programming-in-java-c6d03c93392a
['Thameena S']
2020-10-14 16:27:12.581000+00:00
['Lambda', 'Java', 'Lambda Expressions', 'Functional Programming', 'Functionalinterface']
Why is no one talking about depression after university?
Every year, thousands of students’ lives change dramatically, often leaving them isolated, anxious, and even depressed. It’s time we started talking about it. “Anxiety about Monday would start on Saturday night.” Post-university depression is not only real, but also rarely talked about. Photo: Flickr/pigeonpie “Imagine sitting on a limb for a long time and, when you try to stand on it, you buckle under. You can’t get up. Everyone around you is standing up and telling you to do the same, but you just can’t. You dare not.” Robyn Hall* graduated from university last summer. Despite being one of the lucky few to quickly find a job in her chosen field, she still struggled with the transition into her new life. She described the difficulty of coming to terms with her feelings of depression. “‘But you’re a graduate!’ my brain yelled at me. ‘Grow up!’ But the self-loathing continued. You leave a place you’ve been in for three or four years, where you developed so much, leaving behind the closest friends you’ve possibly ever had. Even if you do get a job, nobody tells you that once you ‘hit the jackpot’, you’ll struggle to make new friends; that 9–5 will leave you exhausted. You’re scared of not being good enough, that you won’t live up to expectations. It’s the ultimate disparity between representation and reality.” Robyn is not the only one to struggle with depression after leaving university. When I graduated, I went from feeling the happiest I’ve been in my adult life, to the worst. By October I was jumping at sudden noises and afraid to leave my bedroom. When a year-long relationship suddenly ended, I didn’t know how to see past the black clouds pressing in on me. I sought help from my GP, who referred me to a local mental health outreach programme. But in the end, it was time, a relocation, and support from friends that began to stabilise the feelings of anxiety and depression. I can count graduates with similar stories on two hands — and those are just the ones close enough to confide in me. Every year, thousands of people’s lives are turned upside down when they jubilantly throw a hat into the air, then watch it come crashing down into reality. So why does no one talk about the feelings of hopelessness that so many are left with? After all, with over 900,000 young people currently unemployed and benefits for under-25s constantly under threat, is it any wonder that mental health issues in young people are rising across the board? I spoke to Matt Tidby, who stayed in his university town of Norwich following graduation, supporting himself with temp jobs. “The majority of the work itself was doable, if monotonous — but things like the telephone, where I was expected to advise on mortgages after about half-a-day’s training, left me hugely anxious and very unhappy. I suffered on a personal level, and lost a lot of confidence in my ability to do both that job, and any of the jobs I actually craved. “Quite ridiculously, I lived in fear of being ‘put on the phones’ — I built that minor stress into a mountain of worry that blotted out everything. After about a month, the job applications stopped. I got into quite a destructive system of trying to make it to each weekend without things getting too shit to handle. Anxiety about Monday would start on Saturday night.” Matt eventually left the job, recognising the damage it was doing, and said that things were beginning to get better. “It’s a daily, rapidly changing situation, really — a positive email or a phone call can reverse many days of feeling low. 
It’s a strange inversion of my time temping; whereas once I lived in terror of the phone ringing, now I urge it to. I’m more hopeful that it will.” While researching this piece, I found very little information targeted specifically at graduates suffering from mental health problems, despite an article in the Independent last year that found that of 40 students and recent graduates surveyed, “95% believed that post-university depression was very much a real thing”. With so little information available, I contacted the mental health charity Mind directly. Head of information, Beth Murphy, had this to say: “Moving on from university is often the biggest change a person has experienced up to that point in their lifetime. Added to this, today’s graduates are facing the double-whammy of the debt associated with paying for university and a tough job market that can seem impenetrable. “Financial stress and uncertainty around employment are major contributors to mental health problems like anxiety and depression. Mind has seen a surge in calls to our Infoline from people struggling with financial difficulties, many of them post-graduates. Our In the Red report actually found that 85% of respondents said their financial difficulties had made their mental health problems worse.” So if post-university depression is “a real thing”, why does no one talk about it? Is this the same stigma surrounding mental health that affects all sufferers, or is there something else going on? Robyn believes that there is a pressure on graduates to feel grateful for their position. “Once you get a 9–5 job, coping with depression can be worse. People are all over to congratulate you, help you in any way they can; you’re so afraid of disappointing everyone that you just let the guilt fester away. I think even in the media it’s not represented enough that you can do your ‘dream job’ and not feel right.” So what can be done? Beth recommends communication above all else. “If you are worried about your mental health, confide in a friend or family member or speak to your GP. There are also lots of small things you can do to make yourself feel better — exercise can be hugely beneficial, releasing chemicals which help increase wellbeing and mood. Keeping in touch with friends is also important, as withdrawing from social contact can make things worse.” Whether you attended university or not, being young and uncertain about your future is the perfect opportunity for feelings of anxiety to take hold. I’m constantly struggling with my own mental health, but I’m one of the lucky ones; I have a job to focus me, friends to listen when things get dark, and access to medical help. But the same can’t be said for everyone, and with mental health trusts asked to shave almost 20% from their budgets next year, that last, vital support system is more at risk than ever. It’s time to stop suffering in silence and acknowledge depression after graduation as a real risk to young adults. And it’s time to stop cutting the very services that may well save their lives. For information, support and advice please visit mind.org.uk or call Mind’s confidential mental health information service on 0300 123 3393. To find out more about starting conversations and tackling mental health stigma, visit time-to-change.org.uk *Names have been changed.
https://medium.com/abstract-magazine/why-is-no-one-talking-about-depression-after-university-94d3e09ca1d2
['Amy Fox']
2015-12-11 11:00:42.662000+00:00
['Life', 'Depression', 'Mental Health']
3 Ways to Implement the Singleton Pattern in TypeScript With Node.js
The Problem — Logging Example Here’s an example problem: I have a Node.js app for payment processing that uses a Logger class. We want to keep a single logger instance in this example and ensure the Logger state is shared across the Payment app. To keep things simple, let’s say that we need to ensure that the logger needs to keep track of the total number of logged messages within the app. Ensuring that the counter is tracked globally within the app means that we will need a singleton class to achieve this. A high-level diagram of the sample app by the author. Let’s go through each of the classes that we will be using. Logger class: Logger.ts A basic logger class that allows its clients to log a message with a timestamp. It also allows the client to retrieve the total number of logged messages. Payment class: Payment.ts The Payment processing class processes the payment. It logs the payment instantiation and payment processing: The entry point of the app: index.ts The entry point creates an instance of the Logger class and processes the payment. It also processes the payment through the Payment class: If we run the code above, we will get the following output: # Run the app tsc && node dist/creational/singleton/problem/index.js Output screenshot by the author. Notice that the log count stays at 1 despite showing 3 logged messages. The count remains at 1 because a new instance of Logger is created in index.ts and Payment.ts separately. The log count here only represents what’s logged in index.ts . However, we also want to include the number of logged messages in the Payment class. Here are different ways to solve this problem by using a singleton design pattern.
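The article’s own solutions are not reproduced here, but as a minimal sketch, one classic way to enforce a single instance in TypeScript is a private constructor plus a static getInstance() with lazy initialization. The log and getLogCount method names are assumptions inferred from the description above, not necessarily the author’s actual API.

```ts
class Logger {
  private static instance: Logger;
  private logCount = 0;

  // A private constructor makes `new Logger()` illegal outside the class.
  private constructor() {}

  static getInstance(): Logger {
    if (!Logger.instance) {
      Logger.instance = new Logger();
    }
    return Logger.instance;
  }

  log(message: string): void {
    this.logCount += 1;
    console.log(`${new Date().toISOString()} - ${message}`);
  }

  getLogCount(): number {
    return this.logCount;
  }
}

// Both call sites share the same instance, so the count covers every log.
Logger.getInstance().log("Payment constructor initialized");
Logger.getInstance().log("Payment processed");
console.log(Logger.getInstance().getLogCount()); // 2
```

Because index.ts and Payment.ts would both have to go through getInstance(), the log count is shared across the whole app, which is exactly what the example above is missing.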
https://medium.com/better-programming/3-ways-to-implement-the-singleton-pattern-in-typescript-with-node-js-75129f391c9b
['Ardy Dedase']
2020-11-02 16:29:10.810000+00:00
['Technology', 'Startup', 'Software Development', 'JavaScript', 'Programming']
Growing Older, But Not Growing Old
Attitude, Attitude, Attitude Photo by Alex Wilken from Pixabay An Irish study a few years ago found one of the most important elements in maintaining physical and cognitive health as we age is attitude. Of course, that cuts both ways. “Everyone will grow older,” says the study’s lead researcher Deirdre Robertson, “and if negative attitudes towards aging are carried throughout life, they can have a detrimental, measurable effect on mental, physical, and cognitive health.” If that’s not good enough, here’s what George Burns said about aging: “You can’t help getting older, but you don’t have to get old.” Remember, that comes from a man who once played God! I’ve never understood the aversion to getting older. I remember as a preschooler being in awe of a little restaurant on the corner across from the junior high school in my hometown. The restaurant was torn down decades ago and a new junior high was built across town. Finley’s corner opened an hour before school started and closed an hour after school ended. My older brothers hung out there with other junior and high school kids. One day my mother took me into this exotic place and we sat at the counter and drank coca-cola served by the man himself, Mr. Finley. Though my feet dangled from the stool about two feet from the floor, I felt I had hit the big time. If I had known the quote from jazz great Fats Waller, I probably would have invoked it - “Somebody shoot me while I’m happy.” It was intoxicating having the legendary Mr. Finley engage me in typical adult to child banter: “How old are you?” “Do you go to school yet?” And me responding in typical child to adult diatribes. “I have a dog.” “His name is Laddie.” “I named him after Lassie, but he’s a boy, so I couldn’t name him Lassie.” (This was the 1950s. Naming was more tied to gender then.) Maybe it was the familiarity we were building; maybe it was the carbonation or sugar from all the soda. Whatever, it was, I dropped all sense of decorum and referred to Mr. Finley by the name all the older kids used — “Old Man Finley.” My embarrassed Mother admonished me not to speak to Mr. Finley “that way”. “That’s alright,” Mr. Finley said. “that’s what all the kids call me.” My mother and I remembered and laughed about the incident from then on. I also remembered Mr. Finley’s attitude. He was not only amused, but he also seemed sort of proud. That’s the way I feel. I’ve worked all my life to get older. I’m proud of it and I’m determined to enjoy it. Not only do I feel intuitively that an optimistic approach to aging will help me live a better life, but there is also more science to back that up. In 2018, a Yale School of Public Health study found that a positive attitude about getting older significantly reduced the likelihood of developing dementia. To me having a good attitude about growing older has always been linked to humor. “He’s so old that when he orders a three-minute egg, they ask for the money upfront.” That’s from that Burns fellow again. He not only lived to be 100. He started his solo career as a comedian when he was 80. Sure, with age you may have to moderate your lifestyle a little. You may not be able to say, eat, drink, and be merry. You can say, eat (wisely), drink (moderately), walk (once a day), take a nap (when you don’t get eight hours of sleep), and be happy you’ve made it this far.
https://medium.com/crows-feet/growing-older-but-not-growing-old-c9f05f61f6f
['Max K. Erkiletian']
2020-10-23 21:38:14.275000+00:00
['Aging', 'Humor', 'Mental Health', 'Lessons Learned', 'Positive Thinking']
How to Focus: Back to Basics as a form of Meditation
How to Focus: Back to Basics as a form of Meditation There’s no chanting, a fair amount of swearing, it’s a pain in the ass — but it delivers perspective, appreciation, and focus. Wood by Robert Ruggiero It’s a brand new year, we’ve created our New Year’s Resolutions and we’ve even looked at how to make sure we deliver on our resolutions, but we still need help in knuckling down and focusing on the tasks and the year ahead. What’s a guy to do? The first thing I always do is procrastinate. I will write an article at some point about Positive Procrastination (I appreciate the irony of putting that off for now) but I genuinely believe in living by a fully comprehensive to-do list and being able to procrastinate by picking up another task that needs doing, so that time is never wasted; tasks just aren’t necessarily prioritised in the best way. I reach for tools that will help me succeed. Anyone who has ever bought more than one self-help book will recognise the pattern: I need help with task X; e.g. writing articles I will spend time researching what other people have done to write articles I will spend my money buying books or subscribing to resources that other people sell about writing I will realise that all of the people selling these resources didn’t make any money from writing and instead make their money from talking about making money from writing Lifering by Frederick Tubiermont The ridiculous thing is that I know what I need to succeed and how to focus. I’ve done it before, I’ve learned it before, I’ve achieved it before. Not everyone works on a computer 80% of the time but for me, the winning pattern is:
https://medium.com/copse-magazine/how-to-focus-back-to-basics-as-a-form-of-meditation-21996623ba48
['Adam Colthorpe']
2020-01-02 11:23:54.445000+00:00
['Productivity', 'Self Improvement', 'Life', 'Meditation', 'Work']
Drafted
Drafted Good ideas at the time? Probably not. I once read Stephen King penned a story so horrifying, so ghastly, so macabre, he suffered terrible nightmares while writing it. Yup. It disturbed him to such a degree, he could never bring himself to send it to his publisher. The manuscript remained ever locked in his desk drawer. He eventually tossed the key into the deep waters of a harbor close to his home. True story. Pretty sure. May have been Dave Barry. Regardless, it goes without saying even though I’m saying it anyway that every author maintains a dark file of stories that will never see the light of day. Perhaps they’re stories so freakishly scary, their release would risk sending readers into cardiac arrest. Perhaps they’re stories so sad, the authors can’t complete them past flowing tears. Perhaps they’re stories so beautiful yet so personal, the authors can’t bear to share them with the public for fear they won’t receive the love they deserve. Or perhaps they’re stories that just plain suck. Mine apply to that last one. Here are titles of story ideas I simply had to flush and the reasons why: My Last Physical Exam My general physician strongly recommended I lose an amount of weight proportionate to that of a Northern Pacific baby sea lion. I decided I would post this story only after I’d lost the weight. How to Lose Weight When You’re Over 50 This can’t be done. Cobra Kai (The Karate Kid Sequel): A Movie Review This ended up sounding like a thousand word rant about how good Ralph Macchio looks at 56. And it contained way too much profanity. My review, not the movie. An Easy Way to Clean Your Barbecue with Safe Chemicals I don’t want to relive this. In the Girls’ Room with Pink Curtains Near Asphyxiation What I thought was the making of an excellent mind-blower of a science-fiction tale was just bits and pieces of a hallucination I experienced while repainting our poorly ventilated daughters’ bedroom. My transcription was twenty-two paragraphs comprised of one word: mot. I don’t know what that means. Anime is Awesome! This story took root when I made an earnest attempt to embrace anime films. My daughters are enamored with them, so I’d rented three movies from the library, made popcorn and settled in for a Miyazaki marathon one Sunday afternoon intent on gaining a powerful appreciation. I had pen and paper ready as the magic began to unfold. I gave up thirty minutes into the first one. I stumbled from the room certain I was suffering a paint fume relapse. World War Walmart I began this story following a shouting match with a Russian couple after they cut me off in a discount store parking lot. Things escalated quickly then stopped abruptly when we weren’t able to understand each other’s insults. We ended up shaking hands. I thought this encounter, if worded properly, could be shaped into an intelligent allegory shedding light on important topics such as global relations, diversity, acceptance, the human condition and rolled back prices as they all relate in today’s political arena. Nope. Fond Memories as a College Freshman I have no fond memories as a college freshman. My roommate was a stoner, my professors were assholes and Sammy Hagar joined Van Halen. My Daughters Hate Me I started this post during a bad week when my three teenage daughters all became angry with me for some reason. I don’t know what I’d done, but this was a major turning point in my life as a parent. 
The story was to be a deep dive into my shortcomings as a father and how the relationships I shared with my daughters had changed forever. My Daughters Love Me I stopped working on the previous story when my daughters suddenly returned to being nice to me again. The Wisdom of a Wife Apparently those last two stories shared a logical explanation which was revealed to me in a private conversation with my wife who was careful to use small, slowly spoken words. A terrible misunderstanding. She also suggested I kibosh the whole subject. Yes. That was for the best. Medium: The Board Game Actually I’m not done with this. So now that I’ve dredged up the embarrassing rejects of my otherwise masterful body of work, I hope you feel encouraged to share an idea you intend to keep hidden under lock and key forever. I don’t want to feel alone on this. Thanks so much for reading. And not judging. I shall now resume publishing the quality subject matter you so richly deserve.
https://thehappysidestep.medium.com/drafted-beb55ba5f368
[]
2019-03-26 14:13:43.210000+00:00
['Satire', 'Parenting', 'Writing', 'Huffington Paint', 'Humor']
Git and Github: A Love Story or Something Like That.
Github Repo As I continue my journey to becoming a software engineer, I’m trying to identify gaps in my knowledge. Basic things that I should probably know. As I research them I plan to write a post, which I feel is a good way of retaining the knowledge that I have acquired. Towards the end of my time at the Flatiron School, it occurred to me that this thing that I had basically been using the whole time was still a mystery. I knew how to fork and clone something, I knew how to initialize (or init) a repo, and add files and save them. I just didn't know the why’s or the how’s. I didn't really know what Git even was, and I barely knew about Github. So that’s what I’m going to do here. I will discuss what Git is, and why we use it. Then I will give some info about Github: its history, who owns it, and why people seem to prefer it to other options. Then I plan to dive into some popular Git commands. I will link all the resources I pulled from in case you want more info. So sit back and relax, it's going to be a bumpy road. Actually, it's going to be fine, I’m not sure why I said that. GIT To talk about Git I first have to discuss version control. What is version control, you may ask? My response is that it is exactly what it sounds like. It’s a system that manages changes to the files you are working on so you can recall a specific version later. There are three different styles of version control: local, centralized, and distributed. This link will tell you more info about all three. Now where does Git fit into all of this? Git is a distributed version control system (DVCS). It was created in 2005 by Linus Torvalds, for development of the Linux kernel. The kernel developers had been using BitKeeper, but a breakdown with the commercial company that developed BitKeeper ended in them losing its free-of-charge status. So Linus, who also created Linux, decided to create his own DVCS. He would use what he learned from BitKeeper and improve upon it. Thus Git was born. If you want a quick giggle, read about the naming of Git here; I found it humorous. One of the main differences between Git and other version control systems is the way Git thinks about data. Most systems use delta-based version control, where they store the information as a list of file-based changes. Git, on the other hand, takes a snapshot of all your files each time you commit. If a file hasn't changed, then Git just links to the identical file it has already stored. One of the key features of Git, and one that is integral to its speed, is that everything is local. Since everything has been cloned onto your computer, it makes searching a project almost instantaneous. Another benefit of this is the ability to work offline, and then push commits once you are on a network again. Git also uses checksums to store and then refer back to that data, which makes it impossible to change anything without Git knowing. This helps with not losing data and with file corruption. Git has three stages: modified, staged, and committed. Modified: the file has been changed, but not yet committed to your database. Staged: the file has been marked to go into your next commit. Committed: the file has been safely saved in your local database. These three stages correlate to the three sections of a Git project: the working tree, the staging area, and the Git directory. GITHUB Github was launched in April 2008 by Tom Preston-Werner, Chris Wanstrath, P. J. Hyett, and Scott Chacon. It reported fairly early success with 46,000 public repositories in the first year.
The numbers grew from there, with 90,000 repositories and 100,000 users the following year. The numbers just continued to grow, until they caught the attention of Microsoft, which had been using Github since 2012. Microsoft acquired Github in 2018, for $7.5 billion. Side note for my JavaScript users out there: Microsoft also acquired npm this year. So Github is popular, but why? This one is hard for me to answer. I’ve only ever used it, so I have no reference for how it compares to other version control sites. I can say that, as someone just getting started, it is pretty user friendly, and it has a plethora of features. The community is a huge part of what makes it great. Being able to check out other people's code, and in return have them look at yours and give feedback, is pretty great. It also has a bunch of integrations and features which just add to its usability. The one feature I currently use is Github Pages, which I use to host my portfolio page. GIT COMMANDS I’m going to go over some basic commands for Git, with links to more in-depth explanations. I will be assuming you are using Github to do this. The first things you will need to do are make sure Git is installed, and that you have an account set up with Github. Also, most of what I’ll be talking about will be for use with macOS, as that's what I use, but most will be universal. So to check whether Git is installed, from the terminal you can use the command: $ git --version If you don't have Git, it will ask you to install it. Instead of walking you through your first repo, I will just link you to Github's guides walkthrough, which is very detailed, with images and everything. I probably couldn’t explain it better than they can. Here are some basic Git commands with links to their documentation: Here are some commands, also with links to documentation, that may prove useful, but you might not need right out of the gate: git rm — remove files from the working tree. git mv — move or rename a file. git checkout — switch branches. git diff — show changes between commits. The last thing I wanted to explain is the Github flow, just so you have an idea of how it works. It has six steps: Create a branch. Add commits. Open a pull request. Discuss and review code. Merge. Deploy. CONCLUSION I learned so much by researching Git and Github. Not all of it is going to make me a more productive software engineer, but knowing it definitely makes me feel more like one. Having some background knowledge about things more experienced engineers know can really help with imposter syndrome. It makes you feel like you have a little insight into this world you are trying to become part of. I know I will put all of this knowledge to use. My command line abilities get better every day, to the point where I don’t have to use the mouse as much, which makes me that much faster. I hope you found this helpful. I will split the links to all the resources I used into primary and secondary. That way you will know where the main source of information came from. I would encourage you to check them out even if you are only cherry picking them and not reading the whole thing. They provided me with so much knowledge, I can’t even really begin to explain. PRIMARY RESOURCES SECONDARY RESOURCES
https://medium.com/swlh/git-and-github-a-love-story-or-something-like-that-f18f789a7144
['Robert M Ricci']
2020-12-14 01:38:24.328000+00:00
['Programming', 'Informational', 'Git', 'Software Engineering', 'Github']
What Is The Intel Student Ambassador Program?
In November of 2016 we announced the Intel® AI Academy for Students, created to work collaboratively with students at innovative schools and universities doing great work in the Deep Learning and Artificial Intelligence space. As part of this program we also announced the Intel® Student Ambassador Program for AI, an exciting new program for university students to engage with Intel around their work in Machine Learning, Deep Learning and Artificial Intelligence. What is the Student Ambassador program? The Student Ambassador Program is a developer affinity program, designed to assist student experts in telling their story and share their expertise with other student data scientists and developers. Intel is working with universities across the globe to introduce this program. Those students invited into the program as Student Ambassadors are provided technical support, resources, and marketing to advance their own work through Intel software, tools, and hardware. This program is primarily targeted toward graduate students; however, undergrads and PhD students can apply should they have the combined education, skill and time to fulfill program requirements (note: this program does not provide a college internship with Intel, nor does it provide placement for employment with Intel). What are the benefits of the Student Ambassador program? The Student Ambassador Program offers many benefits for the select students who are invited into the program. These benefits include: Formal association with Intel® Corporation via Student Ambassador title, swag, and affiliation Free software, tools and libraries from Intel Direct access to their own instance on the Intel® AI DevCloud, Intel’s AI cluster, to power the development and training of deep learning models Access to early disclosure information (under NDA) during monthly meetings with Intel Direct access to Intel engineers and resources to support their work and adoption and integration of Intel® architecture Sponsored travel to support speakerships and/or training by or for the Student Ambassador Sponsored funds to assist in hosting, training, and speaking sessions at their campus to promote their work Numerous speakership and collaboration opportunities coordinated by the Intel® AI Academy for Students, exclusively for Intel® Student Ambassadors Opportunities to apply for Early Innovation micro-funding opportunities, solely for Student Ambassadors What are the expectations for Student Ambassadors? Student Ambassadors will continue in their role as long as the student is able to and desires to continue as a Student Ambassador or upon their graduation, whichever comes first. During their time as a Student Ambassador, each is expected to complete the following: Create of an online profile and posting of at least one (1) project to Intel’s® Developer Mesh website Deliver three (3) pieces of technical content to be shared on Intel’s Developer Website discussing your own research, projects, and interests in the space of Deep Learning and Artificial Intelligence Host speaker of one (1) or more Ambassador Labs on campus, connecting with your peers and local community, providing training and insight into your work to a total of 125 students or more over the course of a calendar year I’m interested. How do I get involved? 
For students or faculty interested in the Student Ambassador Program, there are multiple ways to engage with Intel and get involved: Universities can invite Intel to come on campus for a half-day workshop to discuss the program and provide initial training on deep learning and artificial intelligence technologies supporting Intel architecture. Visit this site for more information on setting up a workshop. For students directly interested in the Student Ambassador program Post information about your research and or student projects to Intel’s® Developer Mesh website. This is a key step in us evaluating students for the Ambassador Program. Add these projects to the Student Group on the Intel Developer Mesh website. Posting to this site helps Intel get a glimpse into the student work and helps demonstrate the student’s willingness and aptitude for sharing their experience with the community. After posting a project to Developer Mesh, students can complete and submit an online candidate form. How will Intel support other students, not eligible or able to be a Student Ambassador? Intel is also able to support and sponsor student clubs at universities. With this program Intel is able to provide sponsorship funds to select university clubs. Sponsorship funds help support a club’s cost for meetings and gatherings, in exchange for the club discussing and sharing information about Intel’s support of Artificial Intelligence. Select clubs will be provided with an AI training kit, including content and documentation to share and discuss during their meetings and gatherings. Select university clubs will be prioritized for guest speakerships by Intel or associated partners as resources are available. Those interested in being evaluated as an Intel Student Program University Club can submit information for candidacy here. Intel is excited about the opportunity to work and engage directly with students who are shaping and advancing new work and use cases for Artificial Intelligence via campus workshops, Student Ambassadors, and University Clubs. Our aim is to provide students and developers the resources and opportunity to have a voice and influence in driving AI forward. Learn more on the Intel Student Ambassador site, check out the AI projects on Mesh, or contact Niven Singh, Intel’s Student Community Manager directly for more information.
https://medium.com/intel-student-ambassadors/what-is-the-intel-student-ambassador-program-2cac2c855ada
['Niven Singh']
2018-10-30 18:05:37.704000+00:00
['Artificial Intelligence']
Welcome To October, Ghouls
Haunt yourself a little happy. Photo by Cederic X on Unsplash Welcome, ye who enter here, to October. Macabre Month. Scary Christmas. The month where those of us who live in the dark at last have our time in the spotlight. It’s pumpkins and chilling movies and all manner of treats dressed as body parts. It’s falling leaves and cool air and plastic spider rings just kind of everywhere. It’s a general layer of sinister fog enrobing the entire month and I don’t mind telling you I’m excited. Naturally, my decor went up September first, but I don’t expect anyone to get it. September is a garbage month full of hot weather when we don’t want it and no discernible enjoyable traits. Additionally, how can I be expected to bask in the orange and purple glow of my twinkle lights for just 31 days per year? That hardly seems like enough. I buy skulls in bulk for heaven’s sake. But today is October 1st, officially the start of Halloween season, a spooky and sinister time that brews great personal joy within my cold, black heart. My black faux candelabra is on as a write this. Dance, little orange flames, dance. Photo by Shani Silver. Why do we like this crap? Why are the shelves of Target bedecked with increasing quantities of battery operated novelties and home exterior decor to rival the Griswolds? I have a theory that everyone loves a little creepy, it’s just that October is the only time they feel confident saying so. Nobody wants to be a weirdo, but in October, we all are. There’s a safety to the scariness of October. Even those who don’t leave the faux raven skeletons out all year have a laugh as they dabble in the darkness. We want to be scared, but safely. Like we need the confidence that the chainsaw is fake to have a good time, you know? I have fake tombstones and a skull in a snow globe in my living room and this month is the only time I can have people over and not have to explain them. This is maybe the only month of the year I can thoroughly relax. October is a month that, when you think about it, uses death as decoration and yet somehow we’re all remarkably upbeat. It’s indulgence in weirdness of the highest order and it’s a beautiful thing to see and offer candy to out of an automated bowl. October is the only month human bones are funny. I’m not asking for an explanation I’m just stating the facts. So go forth, gremlins. Haunt on, you spirits. Jump into this cobweb-covered month with both feet—better still if they’re wearing witch shoes at the time. I celebrate October with reckless frivolity, coating my home in black and white striped accents and draping creepy cloth over anything that will keep still. I’ve been running makeup trials of my costume for two weeks and I’ve owned it’s major components since summer. But for a horrible error on my part and the cat would be resting in a tiny haunted house of her own right now. Dammit for selling out. Anyway—what I’m saying is, embrace your inner creep, celebrate your sinister side, and have a very, very happy Halloween, starting right now.
https://shanisilver.medium.com/welcome-to-october-ghouls-e3e7d7cbf3a5
['Shani Silver']
2019-10-01 10:55:23.776000+00:00
['Halloween', 'Writing', 'October', 'Weird', 'Humor']
The Binary Search Algorithm
A number of months ago, with a budding interest in data science and machine learning, I decided to take MIT’s 6.0001: Introduction to Computer Science and Programming in Python from their course archive. Having had only a basic understanding of Python at the time, I was caught off guard when topics such as recursion and the bisection method were introduced very early on during the first few lectures. By lecture 5, I was already working at the edge of my ability; and then I realized: this course is not meant to teach you the Python language — Python is merely the tool used to help students without prior experience in programming or computer science to develop the skill of computational thinking. So after I was floored just reading the final question of the first problem set, this is how I went about implementing my first binary search algorithm. The method A dramatic improvement in computational cost over exhaustive search algorithms such as guess-and-check and approximation, binary search is a more efficient algorithm that finds and returns a target value from a sorted array. It can be plainly illustrated using the following steps: Establish the search space – or size of the array – with a low boundary and a high boundary. Divide the sum of the low and high boundaries by two to find the middle of the search space. If the target value is equal to the value in the middle of the search space, the target value has been found. Return the target value. Else, eliminate the half of the search space in which the target cannot be found. Repeat the steps in the new search space until the target value is found. If the algorithm exhausts the search space without a match, the target is not present. Application Now, let’s write a simple root-finding program applying this search algorithm. """Find the square root of x""" x = 16 epsilon = 0.01 # This is the acceptable margin of error for this algorithm no_of_guesses = 0 low = 1.0 high = x #The root of x cannot be > x target = (high + low) / 2.0 #Watch out for integer division while abs(target**2 - x) >= epsilon: no_of_guesses += 1 if target**2 < x: low = target else: high = target target = (high + low) / 2.0 print("No. of guesses =", no_of_guesses) print(target, "is approximately the square root of", x) The output of this code is as follows. No. of guesses = 11 3.999267578125 is approximately the square root of 16 Per the epsilon we defined, our program has come up with a reasonably accurate answer! However, it is important to note these sticking points: Bisection search only works when the value of the function varies monotonically with input. When working with an array, always ensure that it is sorted. The exit condition is crucial. Stop the search when the low boundary is no longer less than or equal to the high boundary. The relevant search space would have already been looped through by the time this condition is met. Now with the basics under our belt, we should be able to start incorporating the bisection method into our own search algorithms. I recommend first getting started with the ‘easy’ category binary search questions on LeetCode. Here are a few for reference. As you progress, you will be able to move up in the categories towards the harder questions and eventually feel comfortable using this search algorithm in your projects. Final Remarks The binary search is a fairly efficient search algorithm and will serve you well in a variety of use cases.
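To tie the steps listed above back to the classic array setting, here is a minimal sketch of the same idea searching a sorted Python list; the function name and sample data are my own illustration rather than part of the original lesson.

def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if it is absent."""
    low, high = 0, len(arr) - 1
    while low <= high:               # stop once the search space is empty
        mid = (low + high) // 2      # middle of the current search space
        if arr[mid] == target:
            return mid               # target found
        elif arr[mid] < target:
            low = mid + 1            # discard the lower half
        else:
            high = mid - 1           # discard the upper half
    return -1                        # target is not present

numbers = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(numbers, 23))    # prints 5
print(binary_search(numbers, 7))     # prints -1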
However, it is only faster than linear search, and even then only with sizeable datasets. In essence, comprehending this thoroughly will give you a great foundation in data structures and algorithms and prepare you to explore further and learn more efficient structures. Additionally, Python provides the bisect module, which contains many functions that would be useful to you while working with bisection search algorithms (a tiny sketch of it follows below). Until next time, happy coding!
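As promised, here is a tiny sketch of that standard-library route; the sample list is my own, and bisect_left is just one of several functions the bisect module offers.

import bisect

numbers = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]

# bisect_left returns the position where 23 would be inserted to keep the list sorted.
index = bisect.bisect_left(numbers, 23)
if index < len(numbers) and numbers[index] == 23:
    print("Found 23 at index", index)   # Found 23 at index 5
else:
    print("23 is not in the list")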
https://thakshilarajakaruna.medium.com/the-binary-search-algorithm-7b37eb8bb445
['Thakshila Rajakaruna']
2020-11-29 12:51:31.604000+00:00
['Binary Search', 'Beginners Guide', 'Python', 'Programming']
Banana Pudding and the Hegelian Dialectic
Banana Pudding and the Hegelian Dialectic Having a thesis and an antithesis requires synthesis. Having an Id and a Superego requires an Ego that can satisfy both’s needs. Eating banana pudding works too. Photo by Maxim Potkin on Unsplash When I was looking for a topic for my senior thesis in college I stumbled on the work of the 19th Century German philosopher, G.W.F. Hegel in the library of the Jesuit University I attended. I didn’t remember reading about the Hegelian Dialectic in philosophy classes or discussing him in any of my ethics seminars. The simple premise that we conjure up a thesis in life and are then challenged on that thesis with antithetical data was intriguing to me. That the resultant synthesis was a mere starting point for reconciliation of the next thesis was right up my alley. The understanding of metaphysical conflicts was exactly the thing I had come to College to study. I was addicted to understanding the process of a mental process. That there was a systematic way we could explain this thing I was going through called growth, which was transforming me from a little ghetto brat into a soon to be graduate student blew my mind. That I was simultaneously reading Carl Jung was just icing on my intellectual cake and a new theory of the self was beginning to blossom in my peanut of a brain. Pretty heady stuff for a 25 year old high school drop out using her brain for the first time for things other than getting it stoned. I came up with a theory of my own from this journey down philosophy lane and that goes something like this: We all have a set of believes, norms, customs, etc. that we use to measure our existence against. Our “Thesis” of our life, in Hegelian terms. Maybe something called an Ego if we are Carl Jung. As we experience life our “Thesis” is often challenged by nonconforming experiences, urges, or ideas which we need to fit into our world view. Hegel’s “Antithesis”, or a nudge from our “Id” in Jung’s vernacular. We then seek to assimilate these contrary experiences, whether mental or metaphysical, with our existing reality. Hegel would call this the “Synthesis”, Jung might say our “Superego” police have come to restore law and order to our selves. The new Thesis we form through this Synthesis would then become our new normal. Our Jungian Ego would be adjusted and the process would begin again. I wrote that paper, which I cannot find in any of the cardboard boxes I’ve been lugging around for the last 35 years, for my senior project. It garnered me an “A” and has left me with a haunting feeling that the concept of synthesizing data to adjust one’s inner view of the world and see that world though a new lens, repeatedly, is all the human condition really is. I think that the Hegelian Dialectic can describe most of what drives human behavior. We like being comfortable in our thoughts and feelings. Something comes along that disrupts that comfort so we have to build it in to our reality in such a way that we can continue to function. We move forward with a slightly skewed belief, a new perspective or a heightened awareness. End of story. So what does that have to do with banana pudding? Nothing. I was just eating a bowl of delicious, homemade vanilla pudding with bananas and vanilla wafer’s as I began writing this morning. At 6 a.m. I was wide awake with an empty day ahead of me. 
Now typically I would roll over (after getting up to pee) and let myself sleep for another 2 hours, but there were these three bananas on my kitchen counter and I wanted to use them in some way that didn’t involve turning on my oven. God forbid I let three perfectly ripe bananas go to waste! That was when the banana pudding idea came to me. I had never made banana pudding, at least not from scratch, but I believed I had all of the ingredients, and Google happened to be available, so I looked up a recipe. The resulting loveliness, a smooth creamy vanilla pudding with both bananas and vanilla wafers floating in it was exactly what I needed after a tough week. Comfort food in a new form, my Thesis for using ripe bananas had been shifted by the Antithesis of pudding which meant not having to turn on my oven on a hot, humid day to bake banana bread or muffins. A small psychic shift, but a shift none the less. The resultant Synthesis, light and cool, velvety in my mouth, is just what I needed. It was what I was savoring as I began to write this, after taking a mid-morning nap and giving the mixture enough time to chill in the fridge. I had a new normal, a go to for ripe bananas, a new weapon in my comfort food arsenal. Banana pudding is my new thesis. I can really get behind this new version of myself on a humid, cloudy Saturday morning when I have nowhere to go, no one to entertain and no children to feed. My life is forever changed for the better. Hegel would be proud.
https://medium.com/illumination/banana-pudding-and-the-hegelian-dialectic-e2853b4c2e68
['Janice Maves']
2020-07-13 23:13:06.876000+00:00
['Cooking', 'Psychology', 'Self Improvement', 'Philosophy', 'Humor']
Allow the Books Speak to You With Python
Step #1. Import the Python library The library I was talking about is pyttsx3 (Python text to speech, version 3). It is a text-to-speech conversion library in Python. Unlike alternative libraries, it works offline and is compatible with both Python 2 and 3. You can use any editor for creating this project. I prefer using PyCharm due to its user-friendly interface and other important features. You can use any editor and then install this library by executing the below-mentioned command: pip install pyttsx3 pip is a package manager for Python. That means it’s a tool that allows you to install and manage additional libraries and dependencies that are not distributed as part of the standard library. Here, we are telling the package manager to install the specific library into our project. Step #2. Make the Code Talk Once the library is installed, we can import it into our project. import pyttsx3 Then it takes three lines of code to make the code speak. speak = pyttsx3.init() speak.say('A.I. is going to take over the world') speak.runAndWait() Here, we have initialized an instance of our imported library. We used the built-in method say, in which we wrote the text that we want to convert into speech. Lastly, we call the runAndWait method for the execution. Run the above code to turn your text into speech. Step #3. Create an Audiobook There is a prerequisite before we move on to create our own audiobook. We will need a pdf that can be converted into an audiobook. You can choose any pdf file. If you have a pdf for any book or novel, then you can use that one. Once we have a pdf, then the next thing we need is a package to read pdf files. We will again go to our editor and install the package — pip install PyPDF2 After the package installation, we can import the package in our code to read the pdf file. import PyPDF2 book = open('filename.pdf','rb') pdfReader = PyPDF2.PdfFileReader(book) page = pdfReader.getPage(1) text = page.extractText() Here, we have imported the package to read pdf files in our code. Then we have created an object called book to open the given pdf file. The second argument 'rb' stands for — read as binary. After that, we call the PdfFileReader method of the imported package and pass our pdf file information to it. Then we are calling the getPage method to extract a specific page, and we extract its text by calling the method extractText in the next step. Once we have the text extracted, we can pass it to the say method that we used in step #2. Final Code for audiobook import pyttsx3 import PyPDF2 book = open('filename.pdf','rb') pdfReader = PyPDF2.PdfFileReader(book) pages = pdfReader.numPages speak = pyttsx3.init() for num in range(0, pages): page = pdfReader.getPage(num) text = page.extractText() speak.say(text) speak.runAndWait() That’s it. In eleven lines of code, we created our own custom audiobook. Next time, instead of sitting in front of a computer going through some daunting pdf, just convert it into an audiobook and lie down while listening to it. There are several other methods present in the pyttsx3 library for customizations, like changing the voice of the reader, controlling the volume, and even saving the audiobook as a .mp3 file on our system. I’ll leave those things up to you for exploring the library further.
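If you would like a head start on those customizations, here is a rough sketch of how the property and file-saving calls can look. Treat the voice index, rate, volume, and output filename as illustrative assumptions; which voices are installed, and whether .mp3 output works directly, depends on your operating system.

import pyttsx3

speak = pyttsx3.init()

# List the voices installed on this machine and pick one (index 0 is just an assumption).
voices = speak.getProperty('voices')
if voices:
    speak.setProperty('voice', voices[0].id)

# Slow the reading speed a little and lower the volume (values chosen for illustration).
speak.setProperty('rate', 150)
speak.setProperty('volume', 0.8)

# Queue the speech to be written to a file instead of played aloud, then run the engine.
speak.save_to_file('A.I. is going to take over the world', 'audiobook.mp3')
speak.runAndWait()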
https://towardsdatascience.com/allow-the-books-speak-to-you-with-python-e95c65030c7a
['Shubham Pathania']
2020-12-17 13:51:51.009000+00:00
['Coding', 'Software Development', 'Python', 'Data Science', 'Programming']
Dodgers and MLB Equally to Blame in Justin Turner’s COVID-19 Protocol Breach
As the Los Angeles Dodgers poured out of the dugout and bullpen in the wake of Julio Urias’ final called strike against the Tampa Bay Rays, granting them their first World Series win since 1988, nothing seemed out of place. That is, until cameras focused in on an unkempt, crimson beard celebrating amongst the throngs of players, coaches, and executives. It belonged to Dodgers third baseman Justin Turner, who had been mysteriously absent for the last few innings of the game. Turner was ordered to exit the game in the 7th when his COVID-19 test came back positive. He complied and isolated himself in a nearby doctor’s office until the Dodgers claimed victory, when he raced back out onto the field and could be seen hugging his teammates and clutching the World Series trophy. At one point, the organization gathered together for a group photo, and Turner removed his mask not six inches away from the nearest Dodger. ESPN reporter Stephen A. Smith was quick to point out that Turner had already spent the past several hours with his teammates. While that may be true, scientists have long reiterated that the more time someone spends in the presence of an infected individual, the more opportunity they have to catch the virus. Plus, the field was also filled with reporters, photographers, and the families of the players, who were likely being exposed to Turner that night for the first time. Turner’s re-entry may seem like little more than an ill-advised, individual decision he made out of pure desperation to celebrate the victory with his teammates. However, the forces that allowed him to return extend much further outward. Reports say that the league was aware that Turner’s test had come back “inconclusive” in the 2nd inning and immediately relayed the information to the Dodgers’ management. However, Turner didn’t exit until five innings later, after the test was expedited and came back positive. Several reporters and fans pointed out the futility of receiving results after the game had already started, wondering if it were simply an empty gesture to fulfill the protocol rather than protect players from harm. Had Turner been forced to leave the premises entirely, he wouldn’t have even faced the temptation to storm back onto the field and fraternize with his teammates. Authorities of both the team and the league were well within their right to take further action. However, they chose to leave the decision up to Turner himself. Some reports even claim that Dodgers higher-ups permitted Turner to be on the field for the team photo, assuring one another they would insist he leave afterward. Inaction surrounding positive COVID-19 cases in MLB teams is far from a problem specific to the Dodgers. In late July, the Miami Marlins reported that three members of their squad had tested positive. The team’s management assured the general public that they had quarantined infected individuals, implemented daily testing, and were generally taking the situation very seriously. However, Commissioner Rob Manfred still allowed the team to go forward and play their scheduled game against the Philadelphia Phillies. He cited “temperature checks” as the reason that they decided to proceed, as if fevers were the strongest indication of someone’s ability to transmit the virus. Within days, the number of positive cases on the Marlins had risen to 20. 
When Manfred was asked on The Daily podcast about Nationals star outfielder Juan Soto testing positive ahead of the season opener, he responded that “we knew we were going to have positives . . . The whole point is you have a system that’s flexible enough to deal with what’s coming. We knew it was coming.” Many are convinced that professional athletes’ superior fitness levels make them less prone to a serious bout of COVID-19, and therefore seem to advocate for more lenient protocols. While several of the MLB players who were infected seem to have recovered without significant inconvenience, not all have been so fortunate. Red Sox pitcher Eduardo Rodriguez was unable to play this year after developing myocarditis, or inflammation of the heart muscles, from the virus. Braves first baseman Freddie Freeman suffered a 104-degree fever at the height of his illness, reporting that he prayed for God not to take his life. Multiple journalists have shared stories of professional athletes who now question their future in their sport due to COVID-19 complications. If the MLB wants to retain any sense of credibility going forward, they should refrain from pretending to care about the health and safety of the players and instead be transparent about what they are: a profit-driven organization that has operated entirely out of their fear of sacrificing ratings to the virus by forgoing the 2020 season. Perhaps the most common argument that people have used in defense of Turner’s actions is the fact that he had contributed so much to the team that it just wasn’t fair for him to miss the season’s culmination. However, it also wasn’t fair for millions of people to miss the plethora of events that were inaccessible throughout 2020, including weddings, graduations, births, and, most significantly, the deaths of loved ones who passed alone in hospitals. In fact, one of the reasons that the pandemic has persisted into the fall is because of people who simply can’t stand to miss things, and therefore crowd into bars, restaurants, and houses to retain some sense of normalcy. Watching the Dodgers mob one another on the field, it’s easy to forget, for a second, about the massive amount of death that has occurred in the last several months outside of the stadium walls. While baseball has served as a welcome haven for many during an otherwise devastating year, it’s the behavior of people like Turner, the Dodgers, and Manfred that reminds us we’re not even close to getting out.
https://medium.com/top-level-sports/dodgers-and-mlb-equally-to-blame-in-justin-turners-covid-19-protocol-breach-cf093a67ed9b
['Lily Seibert']
2020-10-31 17:38:56.059000+00:00
['World Series', 'Justin Turner', 'Coronavirus', 'Dodgers']
What’s Going On With Those Swift Substrings?
The Root of the Problem — UTF-8 To understand how Strings work, we need to go back to the basics — Unicode and UTF-8. When we work with Strings, we have the feeling we are dealing with plain text, just an array of symbols and numbers, but this is a lie. It used to be the case back then, when computers worked with something called ASCII. ASCII was a way to represent all the important characters (letters, digits, symbols) as a number between 32 and 127, so every character took one byte of memory. And what about 128 to 255? Every developer could use that range for whatever they wanted, so you can imagine the mess we had when computers spread from the US to non-English countries. That’s where Unicode comes in — Unicode is a way of representing every letter and digit you can think of, in almost every language in the world, and not only that — Unicode is great for representing emojis as well. So, the Unicode character map is a four-byte map, and since most of the characters we are typing are English letters and digits, it is very inefficient to allocate four bytes for each character when, in most cases, one byte is enough. That’s the final piece of the matrix -> encoding, and in this case — UTF-8. UTF-8 is a way to encode a Unicode string into smaller chunks of data so it can be stored more efficiently.
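The variable-length nature of UTF-8 is easy to see in practice. As a quick, language-agnostic illustration (sketched here in Python rather than Swift, purely to print the raw bytes), the same encoding produces a different number of bytes for different characters:

# Each item reads as one "character" to a human, but UTF-8 stores them in 1 to 4 bytes.
for ch in ['a', 'é', '€', '🙂']:
    encoded = ch.encode('utf-8')
    print(ch, len(encoded), list(encoded))

# Expected output:
# a 1 [97]
# é 2 [195, 169]
# € 3 [226, 130, 172]
# 🙂 4 [240, 159, 153, 130]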
https://medium.com/better-programming/whats-going-on-with-those-swift-substrings-83c58cedf596
['Avi Tsadok']
2020-01-20 00:28:44.973000+00:00
['Development', 'Mobile', 'Swift', 'iOS', 'Programming']
20 Terminal Commands That You Must Know
----------------Manipulation With Files and Folders----------------- 1. Encrypting Files I know Windows is not so famous for the security it offers, but still, there are some methods that can give a guarded feel. Encrypting files is one of them. Many Windows users use third-party apps to encrypt their data, but Windows also offers an in-built encryption system for securing files. Open your terminal (Win+R, type CMD, and press Enter), and point your terminal to the folder containing the files that you want to secure. Then simply use the command below. Cipher /E Now no one without the password can access your files. If you want to decrypt the files, then you can use Cipher /D . 2. File Compare We all store our important data in files, and over time, when the data in the files changes and gets updated, it becomes very tough to find the difference between the previous and latest versions of the file. You can also relate this to two versions of a coding project. We usually create multiple versions of our project file, and in the end, we forget what changes we have made. Using the file compare command of the terminal, we can find the difference between the two files with just a simple line of command. fc /a File1.txt File2.txt ##Simple compare fc /b File1.txt File2.txt ##Binary compare (Best For Images) 3. Hiding Folders You might be thinking, "That one I already know," but wait, the one you are thinking of is not good enough. We all know there is an easy way of hiding folders using the right-click and then, in Properties, checking the checkbox "Hidden". If you know it, then you also know that the folders can be seen if you go to View and then check the "Hidden Files" checkbox in the top bar. Anyone who is using your computer can do that and easily access your hidden files. A much better and safer way is to use the terminal. In the terminal, target the location of the parent of your desired folder and then type the below command. Attrib +h +s +r FOLDER_NAME ## Attrib +h +s +r studymaterial Now your folder is hidden completely and you can't even see it by checking the Hidden Files checkbox in the top bar. To unhide the folder, you can use the command Attrib -h -s -r FOLDER_NAME ## Attrib -h -s -r studymaterial 4. Showing File Structure This one I found useful because most of the time, when you are working in a team on a big project, the most important thing is the file structure. One mistake in the file structure and all your efforts are wasted. So that you don't make a mistake like this, CMD comes with a command which helps you to show the file structure.
https://medium.com/pythoneers/20-terminal-commands-that-you-must-know-f24ebb54c638
['Abhay Parashar']
2020-12-23 14:35:17.737000+00:00
['Tech', 'Technology', 'Productivity', 'Windows 10', 'Education']
Never Give Up — Is One Of The Most Cliché Advice To Discover Our Passion
Never Give Up — Is One Of The Most Cliché Advice To Discover Our Passion 2 reasons why “Never Give Up” is the worst advice to follow while discovering your passion. Photo by JESHOOTS.COM on Unsplash How many times have you been told, “Never give up!” or “No one likes a quitter!”? How many times have you heard inspirational stories — (These stories are all over the damn places on Facebook or Linkedin…) — that go something like this: “So-and-So faced countless setbacks, but you know what he kept fighting all along” or “Mrs. A had failed 100 times, but she never gave up on her career and look where she is now”? I assume your answer would be along the lines of “Infinite times or More times than I can remember.” NEVER GIVE UP!! It’s probably one of the most cliché phrases you’ll hear as you’re building your career. I’ve heard this phrase more than 5000 times — or maybe more than that — in my life up till now. And there is this another one — “Winners never quit and quitters never win” — Vince Lombardi Excuse my brain while I vomit. What Nonsense!!! — Are you freaking kidding me? From childhood, we’re taught to persevere, to be patient, no matter what, but sometimes that patience— that unwillingness or inability to let go — prevent us from moving forward, finding happiness, adapting to every challenge that life throws our way. Giving up does not always make you a bad person, or failure, or whatever evil thing you have been telling yourself. Sometimes giving up means that you are mature enough to know when to cut your own losses and move on, that you have the bravery to protect your own mental health, that you’re willing to take the risk of changing course. Last year in 2019, I started blogging, yep I started that. I started my blog named Lifestyle on blogger.com. I was very excited about that and I thought maybe this is what I really wanted to do. But the problem was I don’t know what I want to write about. So, I started writing about anything I feel like — blogs on skincare, life routine, life …not writing much by myself but copying-pasting another author’s materials(Hey, I was a beginner back then and you can’t blame me for plagiarizing). Yeah so, I keep writing — but something was wrong, something just didn’t feel right. I started having conflicting thoughts — Why I am even doing this? What the hell is the point? Will it even be worth it or not? But completing ignoring asking myself answer to the question — Is this is what I really want to do? — I keep trying, repeating to myself “Don’t give up. Don’t give up…” again and again and after some time when things didn’t work out after putting so much effort I freaking felt like a complete mess — a f*cked up mess. So, I quit then and there. I didn’t quit because I can’t do it. I quit because I feel f*cked up — because I don’t know what exactly I was doing and why I was doing that. It is the same with all of us we keep doing what we’re doing without even acknowledging the fact is this what we want to do. The more times we read the “Never Give Up” phrase, the more the thought of — “We’ll not give up” — gets embedded deep inside our brain. “Keep trying. Keep going.” “Don’t stop. You can do this. Just try once more.” “If you give up, then you’re a loser.” We keep ignoring the need to ask ourselves — Why I am Doing What I am doing? or Is this what I want to do? — and just keep trying again and again despite how many times we fail, but we kept going. 
Obviously, there is nothing wrong with trying again and again, it’s the mantra we all need to follow to reach the level of success but it’s only efficient when we are trying what we really want to do. Are you really doing what you really want to do? If yes, then no problem for you, and if no then there is a problem — a problem that will ruin your career. Here are 2 reasons why “Never Give Up” is the cliché advice — to follow — while discovering your passion
https://medium.com/live-your-life-on-purpose/never-give-up-is-one-of-the-most-clich%C3%A9-advice-to-discover-our-passion-b836b234e602
[]
2020-12-24 14:01:03.145000+00:00
['Life Lessons', 'Inspiration', 'Productivity', 'Self Improvement', 'Life']
Kubernetes Just Deprecated Docker Support. What Now?
Kubernetes Just Deprecated Docker Support. What Now? Kat Cosgrove tweeted this on December 2: Let me transcribe the whole thread for you here if you’re not a Twitter user: “So, Kubernetes is deprecating Docker support and you’re either nervous or confused. That’s okay! I would like to help you understand what’s happening. A thread! 1/10 From Kubernetes v1.20, you will receive a deprecation warning for Docker. After that, you will need to use a different container runtime. Yes, this will break your clusters. You might think that Docker == Kubernetes. Not so! 2/10 The thing we call Docker is actually an entire tech stack, which includes a thing called containerd as well as some other stuff, like some fancy UX changes that make it easier for humans to interact with. Containerd is a high-level container runtime by itself. 3/10 Kubernetes doesn’t need all of that fancy UX stuff, though. It just needs the container runtime. Using Docker, the whole stack, as your container runtime means Kubernetes has to use something called dockershim to interact with the parts it actually needs. 4/10 This is because Docker isn’t CRI (Container Runtime Interface) compliant. Dockershim allows us to get around that, but it also means we have an entirely separate thing to maintain just so we can use Docker as our runtime. 5/10 This kind of sucks. It’s inconvenient. The solution is to cut out the abstraction and just use containerd as our container runtime in Kubernetes. Because, again, Kubernetes isn’t a human — it doesn’t need the UX enhancements. 6/10 So, you don’t need to panic. Docker isn’t dead (yet), and it still has its uses. You just can’t use it as your container runtime in Kubernetes anymore. After the next version, you need to switch to containerd. 7/10 Yes, you COULD just stay on an old version of Kubernetes. No, you absolutely should not, or else @IanColdwater will haunt your clusters. Ghost 8/10 The Kubernetes docs for container runtimes are here, with info about using containerd or CRI-O: https://kubernetes.io/docs/setup/production-environment/container-runtimes/… 9/10 Anyway, I hope this helped allay some anxiety or misunderstandings. If you’re still confused, that’s okay! Ask questions! This is REALLY complicated. Your questions aren’t stupid, even if they’re simple! 10/10 BONUS TWEET: Yes, Kubernetes will still run images built by Docker! TL;DR not a whole lot will change for devs, those images are still compliant with OCI (Open Container Initiative) and containerd knows what to do with them.”
https://medium.com/better-programming/kubernetes-just-deprecated-docker-support-e86d2327afad
['Edgar Rodríguez']
2020-12-07 18:29:27.416000+00:00
['Kubernetes', 'Docker', 'Container Orchestration', 'Programming', 'Containerd']
React UseState Explained With Examples
React UseState Explained With Examples Learn about React UseState with practical examples Photo by Ferenc Almasi on Unsplash Introduction React provides a bunch of hooks that allow you to add features to your components. These hooks are JavaScript functions that you can import from the React package. However, hooks are available only for function-based components, so they can’t be used inside a class component. In this article, we will learn about the React UseState hook with practical examples. Let’s get right into it. What is UseState and when we use it? As I said React provides you with a bunch of hooks that you can use on your application. However, useState and useEffect are the two important hooks that you will be using a lot. The hook useState is a function that takes one argument, which is the initial state, and it returns two values: the current state and a function that can be used to update the state. If you tried to print the function useState() in the React dev tools ( console.log(useState) ), you will notice that it returns an array that contains the argument that you have put in the function useState and undefined where you will add a function to update the state. The hook useState can be used when you want to change a text after clicking a button for example or creating a counter and etc. Simple UseState examples In order to use the hook useState , you will have to import it from the React package first. Here is an example: import React, { useState } from 'react' Now you can start using the hook on your code without any problems. Have a look at the example below: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Mehdi') } Notice that we are using the ES6 array destructuring inside the component. So the variable name inside the array refers to the argument of the function useState (current state). On the other hand, the variable setName refers to the function that you will add to update the state. So this means we have a state named name and we can update it by calling on setName() function. Let’s use it on the return statement: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Brad') return <h1> My name is {name} </h1> } //Returns: My name is Brad Since function components don’t have the setState() function, you need to use the setName() function to update it. Here’s how you change the name from “Brad” to “John”: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Brad') if(name === "Brad"){ setName("John") } return <h1> My name is {name} </h1> } //Returns: My name is John Multiple useState When you have multiple states, you can call the useState hook as many times as you need. Here is an example: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Alex') const [age, setAge] = useState(15) const [friends, setFriends] = useState(["Brad", "Mehdi"]) return <h1> My name is {name} and I'm {age} </h1> //My name is Alex and I'm 15 } Notice that, the hook receives all valid JavaScript data types such as string, number, boolean, array, and object. Conclusion The hook useState is one of the important and useful React hook that you must know. Moreover, this hook basically enables function components to have their own internal state and add features to them. Thank you for reading this article, I hope you found it useful. More Reading
https://medium.com/javascript-in-plain-english/react-usestate-explained-with-examples-13d6c17b4b61
['Mehdi Aoussiad']
2020-12-21 17:43:24.180000+00:00
['Programming', 'Web Development', 'React', 'JavaScript', 'Coding']
Loading Data from OpenStreetMap with Python and the Overpass API
There are a number of ways to download map data from OpenStreetMap (OSM) as shown in their wiki. Of course you could download the whole Planet.osm but you would need to free up over 800 GB as of date of this article to have the whole data set sitting on your computer waiting to be analyzed. If you just need to work with a certain region you can use extracts in various formats such as the native .OSM (stored as XML), .PBF (A compressed version of .OSM ), Shapefile or GeoJSON. There are also different API possible such as the native OSM API or the Nominatim API. In this article we will only focus on the Overpass API which allows us to query specific data from the OSM data set. Quick Look at the OSM Data Model Before we start, we have to take a look at how OSM is structured. We have three basic components in the OSM data model, which are nodes, ways and relations which all come with an id. Many of the elements come with tags which describe specific features represented as key-value pairs. In simple terms, nodes are points on the maps (in latitude and longitude) as in the next image of a well documented bench in London. A way on the other hand is a ordered list of nodes, which could correspond to a street or the outline of a house. Here is an example of McSorley’s Old Ale House in New York which can be found as a way in OSM. The final data element is a relation which is also an ordered list containing either nodes, ways or even other relations. It is used to model logical or geographic relationships between objects. This can be used for example for large structures as in the Palace of Versailles which contains multiple polygons to describe the building. Using the Overpass API Now we’ll take a look how to load data from OSM. The Overpass API uses a custom query language to define the queries. It takes some time getting used to, but luckily there is Overpass Turbo by Martin Raifer which comes in handy to interactively evaluate our queries directly in the browser. Let’s say you want to query nodes for cafes, then your query looks like this node["amenity"="cafe"]({{bbox}}); out; where each statement in the query source code ends with a semicolon. This query starts by specifying the component we want to query, which is in this case a node. We are applying a filter by tag on our query which looks for all the nodes where the key-value pair is "amenity"="cafe" . There are different options to filter by tag which can be found in the documentation. There is a variety of tags to choose from, one common key is amenity which covers various community facilities like cafe, restaurant or just a bench. To have an overview of most of the other possible tags in OSM take a look at the OSM Map Features or taginfo. Another filter is the bounding box filter where {{bbox}} corresponds to the bounding box in which we want to search and works only in Overpass Turbo. Otherwise you can specify a bounding box by (south, west, north, east) in latitude and longitude which can look like node["amenity"="pub"] (53.2987342,-6.3870259,53.4105416,-6.1148829); out; which you can try in Overpass Turbo. As we saw before in the OSM data model, there are also ways and relations which might also hold the same attribute. We can get those as well by using a union block statement, which collects all outputs from the sequence of statements inside a pair of parentheses as in ( node["amenity"="cafe"]({{bbox}}); way["amenity"="cafe"]({{bbox}}); relation["amenity"="cafe"]({{bbox}}); ); out; The next way to filter our queries is by element id. 
Here is the example for the query node(1); out; which gives us the Prime Meridian of the World with longitude close to zero. Another way to filter queries is by area which can be specified like area["ISO3166-1"="GB"][admin_level=2]; which gives us the area for Great Britain. We can use this now as a filter for the query by adding (area) to our statement as in area["ISO3166-1"="GB"][admin_level=2]; node["place"="city"](area); out; This query returns all cities in Great Britain. It is also possible to use a relation or a way as an area. In this case area ids need to be derived from an existing OSM way by adding 2400000000 to its OSM id or, in case of a relation, by adding 3600000000 . Note that not all ways/relations have an area counterpart (i.e. those that are tagged with area=no , and most multipolygons that don't have a defined name=* , will not be part of areas). If we apply the relation of Great Britain to the previous example we'll then get area(3600062149); node["place"="city"](area); out; Finally we can specify the output of the queried data, which is configured by the out action. Until now we specified the output as out; , but there are various additional values which can be appended. The first set of values can control the verbosity or the detail of information of the output, such as ids , skel , body (default value), tags , meta and count as described in the documentation. Additionally we can add modifications for the geocoded information. geom adds the full geometry to each object. This is important when returning relations or ways that have no coordinates associated and you want to get the coordinates of their nodes and ways. For example the query rel["ISO3166-1"="GB"][admin_level=2]; out geom; would otherwise not return any coordinates. The value bb adds only the bounding box to each way and relation and center adds only the center of the same bounding box (not the center of the geometry). The sort order can be configured by asc and qt , sorting by object id or by quadtile index respectively, where the latter is significantly faster. Lastly, by adding an integer value, you can set the maximum number of elements to return. After combining what we have learnt so far we can finally query the location of all Biergarten in Germany area["ISO3166-1"="DE"][admin_level=2]; ( node["amenity"="biergarten"](area); way["amenity"="biergarten"](area); rel["amenity"="biergarten"](area); ); out center; Python and the Overpass API Now we should have a pretty good grasp of how to query OSM data with the Overpass API, but how can we use this data now? One way to download the data is by using the command line tools curl or wget. In order to do this we need to access one of the Overpass API endpoints, where the one we will look at goes by the format http://overpass-api.de/api/interpreter?data=query . When using curl we can download the OSM XML of our query by running the command curl --globoff -o output.xml "http://overpass-api.de/api/interpreter?data=node(1);out;" where the previously crafted query comes after data= and the query needs to be urlencoded. The --globoff is important in order to use square and curly brackets without being interpreted by curl. This query returns the following XML result <?xml version="1.0" encoding="UTF-8"?> <osm version="0.6" generator="Overpass API 0.7.54.13 ff15392f"> <note>The data included in this document is from www.openstreetmap.org.
The data is made available under ODbL.</note> <meta osm_base="2018-02-24T21:09:02Z"/> <node id="1" lat="51.4779481" lon="-0.0014863"> <tag k="historic" v="memorial"/> <tag k="memorial" v="stone"/> <tag k="name" v="Prime Meridian of the World"/> </node> </osm> There are various output formats to choose from in the documentation. In order to download the query result as JSON we need to add [out:json]; to the beginning of our query, giving us the previous XML result in JSON format. You can also test the query in the browser by accessing http://overpass-api.de/api/interpreter?data=[out:json];node(1);out;. But I have promised to use Python to get the query results. We can now run our well known Biergarten query with Python by using the requests package to access the Overpass API and the json package to read the resulting JSON from the query.

import requests
import json

overpass_url = "http://overpass-api.de/api/interpreter"
overpass_query = """
[out:json];
area["ISO3166-1"="DE"][admin_level=2];
(node["amenity"="biergarten"](area);
 way["amenity"="biergarten"](area);
 rel["amenity"="biergarten"](area);
);
out center;
"""
response = requests.get(overpass_url, params={'data': overpass_query})
data = response.json()

In this case we do not have to use urlencoding for our query since this is taken care of by requests.get, and now we can store the data or use it directly. The data we care about is stored under the elements key. Each element there contains a type key specifying if it is a node, way or relation, and an id key. Since we used the out center; statement in our query, we get for each way and relation a center coordinate stored under the center key. In the case of node elements, the coordinates are simply under the lat, lon keys.

import numpy as np
import matplotlib.pyplot as plt

# Collect coords into list
coords = []
for element in data['elements']:
    if element['type'] == 'node':
        lon = element['lon']
        lat = element['lat']
        coords.append((lon, lat))
    elif 'center' in element:
        lon = element['center']['lon']
        lat = element['center']['lat']
        coords.append((lon, lat))

# Convert coordinates into numpy array
X = np.array(coords)

plt.plot(X[:, 0], X[:, 1], 'o')
plt.title('Biergarten in Germany')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.axis('equal')
plt.show()

Another way to access the Overpass API with Python is by using the overpy package as a wrapper. Here you can see how we can translate the previous example with the overpy package:

import overpy

api = overpy.Overpass()
r = api.query("""
area["ISO3166-1"="DE"][admin_level=2];
(node["amenity"="biergarten"](area);
 way["amenity"="biergarten"](area);
 rel["amenity"="biergarten"](area);
);
out center;
""")

coords = []
coords += [(float(node.lon), float(node.lat)) for node in r.nodes]
coords += [(float(way.center_lon), float(way.center_lat)) for way in r.ways]
coords += [(float(rel.center_lon), float(rel.center_lat)) for rel in r.relations]

One nice thing about overpy is that it detects the content type (i.e. XML, JSON) from the response. For further information take a look at their documentation. You can then use this collected data for other purposes, or just visualize it with Blender as in the openstreetmap-heatmap project. This brings us back to the title image, which shows, as you might have guessed, the distribution of Biergarten in Germany.
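As a small optional follow-up (not from the original article): if you want to reuse the collected points outside of Python, one option is to write them out as GeoJSON, which most GIS tools and web map libraries can read. This is a minimal sketch that assumes the coords list built in the snippets above; the file name biergarten.geojson is just an illustrative choice.

import json

def coords_to_geojson(coords, filename):
    # Build one GeoJSON Point feature per (lon, lat) pair
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {},
        }
        for lon, lat in coords
    ]
    collection = {"type": "FeatureCollection", "features": features}
    # Write the collection to disk so it can be opened in QGIS, geojson.io, etc.
    with open(filename, "w") as f:
        json.dump(collection, f)

coords_to_geojson(coords, "biergarten.geojson")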
Image from openstreetmap-heatmap Conclusion Starting from the need to get buildings within certain regions, I discovered how many different things there are to discover in OSM, and I got lost in the geospatial rabbit hole. It is exciting to see how much interesting data in OSM is left to explore, including even the possibility of finding 3D data of buildings in OSM. Since OSM is based on contributions, you could also explore how OSM has been growing over time and how many users have been joining, as in this article which uses pyosmium to retrieve OSM user statistics for certain regions. I hope I inspired you to go forth and discover curiosities and interesting findings in the depths of OSM with your newly equipped tools. Thanks for reading! If you enjoyed the post, go ahead and show the clap button some love and follow me for more upcoming articles. Also, feel free to connect with me on LinkedIn or Twitter. This article was originally published on janakiev.com.
https://towardsdatascience.com/loading-data-from-openstreetmap-with-python-and-the-overpass-api-513882a27fd0
['Nikolai Janakiev']
2018-08-17 13:14:48.564000+00:00
['Python', 'Data Science', 'Towards Data Science', 'GIS', 'Openstreetmap']
Design Thinking at Cisco
I spend a lot of time explaining the value of design — the fact that design is not about pixels or mockups or wireframes. Design is about finding the right problem to solve, and then solving it in the best way possible. Good designers are problem solvers. Great designers are problem finders! This process of finding and solving problems is referred to as Design Thinking, and as part of the design transformation at Cisco we set out to create a design thinking framework that would not only be used to up the game of our already amazing designers, but would also be something to enable the thousands of engineers who might not always have the benefit of working with a design partner. The framework explained The Cisco Design Thinking framework consists of three phases: Discover, Define, and Explore, and two guard-rails: making things and validating with users. Let's take a closer look at the three phases. Discover In the first phase of Cisco Design Thinking, your priority is getting to know your users — with empathy. By empathizing with users and truly understanding their core needs, current frustrations, and related pain points, you can uncover the valuable opportunities that drive true innovation. We do this by immersing ourselves in the world of our user through research techniques like interviews and contextual inquiries. We interpret the information we are capturing through artifacts like journey maps, empathy maps, and storyboards. We aim to capture the current state of the world for our user, and then reframe the information in order to draw insight from it. We document any opportunities using this standard format: Define Once you have documented your opportunity, you and your team will likely identify many problems that will need to be solved. But which ones matter most to your users? Your goal in this phase is to prioritize three — or fewer — clearly articulated problems that your solution will address on behalf of your users. We use a template to capture these problem statements, and they get appended to the opportunity statement so that it looks like this: Once the team has settled on their opportunity and problem statements, it's then time to start creating solutions, and for this we get into the Explore phase. Explore You have a clear sense of who you're building for, the opportunity at hand, and the key problems to be solved. Now it's time for the team to start identifying creative solutions. The key to the Explore phase is to ensure that the solutions developed explicitly solve the prioritized problems documented in the Define phase. I think of the Explore phase as a continual loop of learning. We take a problem and begin by exploring as many solutions as possible. Pick the most desirable solution, and figure out how to quickly build an experiment that tests it. Run the experiment. Did it pass or fail? Continue to iterate on this one till you are happy, at which point you can move on to another problem. This constant looping of "build, measure, learn" is exactly what the Lean Startup methodology is all about. There is a lot of overlap between Design Thinking and Lean Startup and even Agile. In fact, the way I think about it is this: Pulling it all together All of these pieces of the framework come together and are applied first as a way to develop a high-level direction for what we are doing, and then as a way to accelerate learning through the delivery of the proposed solutions.
Don't look at the framework as a progressive linear process, but look at it as a set of tools that you use depending on your current challenge. On any given project you will flip between the different phases in order to achieve the outcome. The main things to remember are: Always make sure you are focussed on the right problem before trying to solve it. Design thinking is a team sport. Collaboration with cross-functional teams is critical to its success. When you are creating solutions, always focus on running fast experiments that answer what you need to learn, or validate assumptions you are making. Supporting the framework We have created a few supporting artifacts that enable the teams to practice this new framework. The most impactful artifact has been the field guide. This beautifully designed practical guide contains an explanation of each of the phases, along with lots of examples. The second half of the book is filled with tools and exercises that can be used along the way. What's next? The Cisco Design Thinking framework is already being used by teams around the globe and is not just focussed on product development. We have executives, sales, HR, design, and engineering all using it to great effect. Our next steps involve developing a learning framework around CDT that will allow us to train four different levels of Design Thinkers: Enthusiasts, Practitioners, Facilitators, and Coaches. If you are interested in learning more about what we learned along the way, please don't hesitate to reach out or leave a comment on this post. Also, be sure to check out my previous article about why I chose Design Thinking for my team.
https://medium.com/cisco-design-community/the-cisco-design-thinking-framework-1263c3ce2e7c
['Jason Cyr']
2018-01-15 23:26:34.179000+00:00
['Design Thinking', 'Design Process', 'Innovation', 'Design', 'User Experience']
The practical benefits of augmented analytics
The practical benefits of augmented analytics How does augmented analytics really benefit your organisation? We break down its many practical advantages in usability, time-savings and value it unlocks across the entire analytics life cycle. Augmented analytics uses emerging technologies like automation, artificial intelligence (AI), machine learning (ML) and natural language generation (NLG) to automate data manipulation, monitoring and analysis tasks and enhance data literacy. In our previous blog, we covered what augmented analytics actually is and what it really means for modern business intelligence. In this article, we focus on helping you learn the many practical benefits that augmented analytics can bring to your business, across the three core pillars of the analytics lifecycle: preparation, analysis and insight delivery. #1 — Augmented data preparation Traditionally, database administrators bring critical data together from multiple sources and carefully prepare it for integration with downstream systems and analytics tools. Augmented data preparation is a component of augmented analytics that transforms this procedure to become less reliant on manual process and ensures best practices for each step of the preparation phase are followed. How augmented analytics enhances data preparation Automatic Data Profiling: Automatic profiling can recommend the best approaches to cleaning, enriching, manipulating and modelling data; combined with the team’s existing knowledge, this can fundamentally improve data management (cataloguing, metadata, data quality) and help teams continuously refine their data preparation processes. As an example, Yellowfin Data Prep provides automated recommendations to best fix or curate data as part of its Suggest Actions feature. Auto-detection for less repetition: Augmented data preparation can handle mundane, routine but necessary data transformation steps, like physically joining schemas together or comparison calculations, with automation. Algorithms auto-detect schemas and join data from different sources together, without the need for manual intervention. Streamlined data harmonisation: Augmented data preparation enables admins to integrate more data sources, from ad-hoc, external or trusted sources, using automation and machine-led algorithms, faster than using traditional harmonisation approaches. An example of augmented data preparation from Gartner details a business in the Consumer Packaged Goods (CPG) sector who originally required five people to access, clean, blend, model and integrate data across various data systems (point-of-sale, pricing, Nielsen) which took five weeks. Augmented data preparation reduced this process to one person and one hour, with one click-updates. #2 — Augmented analysis Sifting through and analyzing prepared data before it is deployed to the wider business is traditionally handled by data analysts. But the vast volumes of data organizations accumulate today means it’s just not possible to look at every relevant data point from every angle in a timely manner. How augmented analysis enhances data analysis Image via Yellowfin Automated visualisations: Analysts can leverage automated visualisations for their analysis efforts; these contextualised visualisations are automatically generated using the machine learning capabilities of augmented analytics based on the best fit options for representing said metrics. This saves analysts a lot of time and also assists them in deeper understanding of data. 
Continuous ‘always on’ analysis: Machines now make it possible to have analysis set to be always running monitoring and analysis of data. If it finds a type of change (spike, volatility), it will auto-analyse an anomaly and bring it to the surface, as opposed to relying on analysts to spend considerable time looking for every single instance of relevant change and potentially missing it due to lack of time or data fatigue. Reduced analytical bias: Analysts always make assumptions when trying to find answers (we have to start somewhere). Augmented analytics can help reduce bias by running automated analysis across a bigger range of data, with a focus on factors of statistical importance, broadening future search efforts. By applying automated analysis in parallel with the analyst’s manual oversight, the risk of missing important insights is reduced. Time-to-insight: Rather than having to manually test all possible combinations of data, analysts can apply sophisticated ML algorithms to auto-detect hidden correlations, clusters, outliers, relationships and segments. The most statistically significant findings are presented via smart visualisations, optimised for the analyst’s further interpretation and action. This helps specialists examine highly pertinent insights in more depth and provide more detail when they pass data to users — it’s also a lot faster. As a real-world example of the practical impact of using augmented analysis, Yellowfin’s augmented analytics features (Signals, Assisted Insights) enabled aviation manufacturer AeroEdge’s analysts to identify hidden patterns that lead to manufacturing issues and address them 80% faster. This also increased cost awareness for the operators in charge of analysing data, which led to further identified opportunities to improve business profitability. #3 — Augmented insight delivery Finding patterns in data is like looking for a needle in a haystack. It’s not easy, nor is it a task that can always be done in a timely manner. Augmented insight delivery helps business users discover not just where the needle lies faster than what traditional manual effort can produce, but also understand what’s there and why it’s important so they can do something about it, without the traditional blockers if needed to go back and forth via the technology teams for assistance. How augmented insight delivery enhances insight discovery Image via Yellowfin Predictable discovery: The automated and machine-led capabilities of augmented analytics never get tired of exploring all possible combinations of data, aren’t biased, and can alert users to meaningful changes right as they happen, including outliers that might not always be evident when users manually view highly aggregated dashboard or charts. They help provide reliable findings into potential issues or insights that regular analysis can’t always guarantee, and reduce the need for users to seek additional intervention from analysts. For example, our Yellowfin digital team had pre-built dashboards in Google Analytics to provide aggregated overviews of traffic to our website.. Sometimes, anomalies were hard to ascertain for regular users from the aggregated visualisations. So we set up Yellowfin Signals to automate the monitoring, alerting and analysis of unexpected spikes in page-views. We instantly received automated alerts that Signals’ automated analysis determined paid ads were from countries where we weren’t running any paid campaigns (or so we thought). 
Signals drove the immediate conversation with our ad agency in which we discovered our agency made a mistake and did not limit our paid ads to the locations we had chosen. Without Signals and Yellowfin’s augmented insight delivery capabilities, it would be like trying to find that data ‘needle’ beneath a highly aggregated dashboard. Augmented analytics help to ensure these types of hidden insights are consistently brought to the surface. Instant, explained answers: Augmented insight delivery features like Assisted Insights leverage NLG to dynamically generate high-level explanations and comparisons that break down findings in a way that users of varying knowledge can understand. Coupled with autogenerated visualisations, variance analysis and calculation creation, this gives an instant, visual understanding when they query their data. Most importantly, it helps improve the user’s data literacy, encouraging further data-driven decision-making and data-led cultures as a whole. Personalised insights: Ranking algorithms learn the more users interact with their analytics tools, and rank what’s most relevant over time. With this sort of augmented capability in place, users can better understand exactly what critical areas of their business metrics they should be looking at, and be assured that their automated BI is delivering them more pertinent information to analyse over time, which also gradually opens up otherwise unseen avenues of insight. Augmented analytics: Why it’s becoming essential for enterprise in 2021 Next year, global advisory firm Gartner predict augmented analytics to be a dominant driver of new purchases of data analytics and BI platforms, making it clear it’s an area of analytical capability that is no longer seen by industry leaders as a far-flung future. By familiarising yourself with the many practical ways augmented analytics has been benefiting organisations today, you can better prepare for a future implementation and ensure you retain a competitive edge as your analytics needs continue to evolve.
https://medium.com/dataseries/the-practical-benefits-of-augmented-analytics-5a6fa4031c0b
['Daniel Shaw-Dennis']
2020-12-11 10:12:07.760000+00:00
['AI', 'Augmented Analytics', 'Innovation', 'Analytics', 'Data']
Learn to code smarter: How to become a senior software engineer quickly
Learn to code smarter: How to become a senior software engineer quickly When I first taught myself to code, I noticed a gap. Even though I'd been teaching myself to code for five years, I didn't have the skills necessary to reach the next level. I was technical… but not technical enough. It wasn't just me who noticed this skill gap either. After years working to become a Product Manager at Google, I finally had the opportunity to interview for the role. However, after passing five internal interviews, I was told by the hiring manager that I would never pass the technical ladder transfer interviews. The job was given to someone else. After all my hard work, I felt defeated. My insecurity, that I wasn't technical enough, was staring me in the face. Despite all the hours I'd spent building mobile apps and learning how to develop in Java, Javascript, and Python, I wasn't skilled enough to snag my dream job. I wanted to be a better software engineer and product manager, so I got a (second) bachelor's degree, this time in Computer Science. Because I'd been in the job market before this degree, I gained unique insight into how Computer Science is taught, as well as how those lessons directly translate into our roles as engineers. Now that I have a Computer Science degree, have put in the hours as a technical product management executive, and have founded my own tool for developers, I understand what was preventing me from excelling. Although I'm happy about where I've landed, this knowledge shouldn't be locked in a Computer Science degree. Today, I'm sharing how you can learn to code smarter so that you can become a senior software engineer quickly. Even if you have a ways to go, this knowledge will help you become better than you were yesterday. Why become a senior software engineer? First off, what's so great about becoming a senior software engineer? Why go through the trouble? In my experience, senior software engineers are trusted to solve harder problems and handle more complexity. Although this can be challenging, it also gives you the opportunity to build something that's rewarding and impactful. It gives you a seat at the table. Not only that, but being a senior software engineer gives you the chance to mentor and provide insights to others. Often, it may lead to managing your own group. And, let's not forget about the senior software engineer salary. On average, senior software engineers make 92% more than junior ones, according to PayScale. For me, becoming a senior leader changed the trajectory of my career. While I was completing my degree, I landed a senior role at eBay. Not only did I get the role I wanted, but I was also able to skip the junior level. In doing so, I instantly tripled my salary. In that first year, I took seven separate products from ideation to launch, giving me enough experience to get into even higher level roles. How to become a senior software engineer quickly If you want to advance in your engineering career, you shouldn't have to go get a second degree. That's a big (and expensive) commitment that requires years of your time. Becoming a senior software engineer quickly requires you to read code, understand it, and build a big-picture understanding of programming languages. How can you ensure you have an in-depth understanding of code? You'll need to read a lot of code, get a lot of code reviews, and give a lot of code reviews.
Spending time with code and gaining feedback from others will help you gain the depth of knowledge you need to move forward. But giving and receiving code reviews isn’t enough to put you on the right track. Ultimately, you need to gain the ability to build large mental models. This all boils down to loading up more complex systems in your head. Engineering requires us to hold abstract systems and concepts in our heads via a skill called spatial reasoning. Spatial reasoning is the ability to “generate, retain, retrieve, and transform well-structured visual images” (Lohman 1996). It’s what we do when we visualize shapes in our “mind’s eye.” In engineering, we use spatial reasoning to create a mental picture or a mental model of how our systems should look. We hold it in our heads. You follow a function call from one file to another. You imagine how data at runtime flows through that picture you created. You transform that picture by flipping it and manipulating it daily. To get to senior engineer, you need to hold larger and larger systems in your head. You need to add more and more to your mental model. You need to build up a database of things you have seen before. This is what takes so much time, and it’s what you need to conquer to go from junior to senior engineer. A few tips for building these models Turns out, I have terrible spatial reasoning skills. This challenge is so visceral to me that I’ve built an entire company around it. CodeSee’s mission is to help developers and development teams master and maintain their understanding of large scale codebases. Codesee.io helps dev teams all speak the same language. It takes these large scale systems that engineers have traditionally held in their heads, and it creates a visual map along with all of the data that PMs can understand and that shows how all the pieces fit together. This map shows everything from the line of code that gets run to the higher level system architecture. Here are a few recommended tips for building mental models, all of which are built into the CodeSee platform. Write things down. Some say that good writing is good thinking, and I agree. Being able to write down what’s going on in your code will clarify your thoughts, help you see the big picture, and ensure that you’re able to communicate your ideas to others. Make sure you write things down in a scalable way that you can search from, build onto, and is available when you need it. Practice spatial reasoning skills. Spatial reasoning does not come naturally to me, but I’ve practiced these skills to become an expert. Every time I write a bit of code, I work to build a mental model in my head. Draw a picture instead of holding something in your head. Drawing a simple picture or diagram can help you plot out your ideas and situate them contextually. Similar to writing things down, drawing a picture helps you solidify your thoughts and share them with others. Reason about the data. Every system is made up of code and data. If you are only looking at the code, you’re missing half of the picture. Ask yourself: Where is the data stored? What does it look like? Where does the data start, go and end up? How is the data transformed along the way? Read a lot of code. This is what most people advise, but it’s really important. I put reading of code into my calendar, and I go to Stack Overflow and other open source codebases. The best advice I’ve heard here was from a language teacher: Read it once, ignoring the things you don’t know. 
Read it again, noting the things you don’t know. Then, look up everything you don’t know. Finally, read it again. No matter your background or experience, it’s possible to go from a junior to senior software engineer– just as long as you have a solid, big picture understanding of programming languages. Shanea Leven is the Founder and CEO of a developer platform called CodeSee. CodeSee helps developers master understanding of codebases. We visualize in real-time how a software system works and fits together, so developers — and anyone else — can onboard more easily, plan more reliably, and ship features faster and better. Shanea has spent many years as a technical product leader building platforms for developers at Google, Docker, eBay, Cloudflare and various startups. She is also the chair of Executive Women In Product.
https://medium.com/codesee-io/learn-to-code-smarter-how-to-become-a-senior-software-engineer-quickly-8f19903f419d
[]
2020-11-13 23:48:36.071000+00:00
['Engineering', 'Coding', 'Software Development', 'JavaScript', 'Programming']
8 UX Design tips for “Not always” cases when designing for iOS and Android
Photo by Halacious on Unsplash Before going on, I would like to say that everything you read is only based on my UI/UX design knowledge, my experience, and the user tests I have conducted. Some things might not work for you, but in my case they turned out well. Many of the examples I will give are taken from an enormous medical app that stakeholders wanted to fit into a small screen (I felt like an IMF agent considering how big the application was). I decided to go with that exact application because it had many problems and challenges, as stakeholders wanted almost all the functionalities of a hospital plus the features of a doctor's kitchen. 1. Not always fewer clicks are better. When making a page for certain functionality that has too many exits, sometimes it is better to add more clicks (in my case taps) and make additional pages rather than fit everything into one small screen. Photo by Kelly Sikkema on Unsplash On a small screen, everything stacked together can mislead the user. Key functions become more noticeable and easier to access. Visually it is more appealing. Take the case of the add workday section: the first time, I added two more buttons on the top, as stakeholders and POs didn't even want to listen to any argument. They wanted quick access from the first page. But as potential users pointed out, they often tapped on the user profile menu instead of the edit button underneath it. 2. Not always native solutions are the best. When I was designing the native iOS 'cancel' and 'done' actions on the modals, I decided to improvise and put the same section on the bottom. Image of a project I worked on. In the left picture on top, we see the native solution for modal Cancel and Done actions. In the right picture, I brought them to the bottom. It's easier to access with the thumb. The native version is too high, and on newer devices often impossible to reach without rearranging the phone in your hand or using your second hand. I did the native version too, and after A/B testing the results were on my side. I can't even tell you how happy I was with that. iOS, do you read this? 1 point for me :) 3. Not always stick to the same solution for the same OS I'm not talking about Gmail, which uses Material Design for both iOS and Android, or Instagram, which used the iOS style on both platforms. I am talking about using a feature of one in the other, while most components stay native to each platform. Before going on with this one, I must say that I absolutely adore both solutions for text fields: iOS (Human Interface) and Android (Material Design). Image by me. I used Material Design and Human Interface add contact forms iOS forms are just so simple yet elegant, and they have great UX. Material Design's popping titles are so mind-blowing and again have great UX. You always know what fields you are filling. In the case of the app, I used the popping Material Design forms for both platforms. All three groups (stakeholders, users, and our team) unanimously preferred the popping animation fields. 4. Not always perfect means good. This one goes for the stakeholders. I respect them a lot, and working with them was so much fun. The thing is that they aimed at perfection. And the project lasted for a very long time because every week they came up with new ideas that would "work better". And we constantly came back to make changes, and the project lasted even longer. Photo by Brett Jordan on Unsplash The first time I saw the phrase "Done is better than perfect!" on a MacBook sticker, I fell in love with that phrase.
At the same time, I was designing my personal website, and I fell into the same perfection trap every day. New ideas, new changes, new inspirations: they all led to a new schedule, which delayed success for me. When you aim for perfection, each imperfection keeps delaying the project. Our minds can't picture the perfect, thus we will never be able to form it as a goal. On the other hand, good is formed, it's stable, and it has "measurements." By no means am I saying that you have to rush your project for quicker results. I am just saying that there is no absolute perfect, and by chasing it, you may never finish the project. Remember this, and I am saying this as a huge perfectionist — there is never a perfect, just like in physics. Repeat that to yourself! It's like trying to catch the junkie dragon in South Park episode 11.14. You can never catch it. For me, the closest to perfect is Apple's website design, especially the iPhone 11 Pro page. And they keep updating. Because there is always something better. My advice is to look at your design after some time away from the screen. If you think it is good, then it's already better than perfect. Trust me. If stakeholders say it's good, and, sugar on top, users approve it, there is nothing better than that. 5. Not always what is simpler for you is simpler for others. I really love iOS date pickers, and I use them as often as I get the chance. But user testing showed that most users navigated more easily in the Android date pickers. Image made by me. iOS and Android date pickers. Everyone agreed that iOS looked cooler, but it's the famous problem where UI and UX go against each other. In some cases, there is no sweet middle spot. And you ALWAYS want to go with the users. 6. Not always going with the users will have the best outcome for YOU. We know the phrase 'The Client is Always Right!' Thankfully, in web design, thanks to many kinds of research, including user testing, clients often admit that it's not the case. But in some cases, stakeholders want to stick with something they love, and whatever you do, however much UX research proves them wrong — they won't agree to take it away. In some rare cases, you might even lose the project if you keep insisting on taking away the thing they love so much. Here comes a delicate spot where you have to choose for yourself what's more important: doing the wrong UX and, in some severe cases, losing your rating, or losing the project. I'm afraid it's all up to you. It depends on how important you think that feature is, and how much it will impact others' opinion of your professionalism. And how much you need that project. I'm afraid each case is unique, and you have to decide for yourself. In my case, it was a user icon that worked by hamburger menu logic. It opened when tapping on the user icon. I didn't like that the profile pic button opened the hamburger menu. But the Client is Always Right. 7. Not always User Experience is better than Marketing. To my shame, I have to admit that my marketing skills are weaker than they should be. Marketing is a big part of UX and vice versa. On the app, there were pages where the Marketing department, along with stakeholders, insisted on using their big brand symbol (not a logo) on a small screen, on pages where the right thing would be to use more functional assets. And the thing is that we both were right. From the UX perspective, it would be friendlier for users to have some functions there. And I was forced to fit them on other pages.
But from a Marketing perspective, the brand was new, and they needed to make people see the symbol in as many places as possible. The Marketing department won, and the outcome was good. Though I don't know what would have happened if we had done it my way. The goal of the marketing department was achieved, and the sacrificed pages didn't even impact users. 8. Not always — I have 8 tips. I'm sorry, just kidding :) 8.1. Not always people know what they are used to. I know, it's a bit odd. I'll explain it. In one test, I crossed users and platforms. I gave the iOS version to Android users and vice versa. Many Android users navigated more easily in some iOS features even though they had never held an iPhone in their hands. It was most noticeable when they navigated through modals (popups). And vice versa: the iOS user group navigated more easily in the Android calendar. In conclusion: not always do you have to stick to the "right" UX, native solutions, etc. Use your skills, your experience, and most of all, common sense. It's a big UX design world out there, and everything is constantly changing. What was right before is wrong now, and what is wrong now might someday be considered the best UX. Trust your gut and keep going. I'm sure that if you made it this far through this boring article :), you'll make it in the big league and you'll make it really good. Which is better than perfect! (See what I did there?!) Thank you for reading. If you have any questions or thoughts about the article, feel free to leave a comment. I wish you the best User Experience in real life!
https://uxplanet.org/8-ux-design-tips-for-not-always-cases-when-designing-on-ios-and-android-ae45bb6d575d
['Daniel Danielyan']
2020-08-18 09:33:59.998000+00:00
['UX', 'UI', 'Design', 'iOS', 'Android']
8 Classic JavaScript-Coding Mistakes You Should Avoid
Handling the ` this ` Reference The this keyword confuses every JavaScript developer. Perhaps that's because it's a lot different from what other programming languages like Java offer. The this keyword refers to an object, and the reference can change depending on how the function is called, not where it's defined. For methods inside an object, this refers to the invoking object itself, whereas for standalone functions, this refers to the global object. let user = { name: "your name", getName() { console.log(this.name); } }; user.getName(); Now, if we extract the function, the this reference changes: let user = { name: "your name", getName: function(){ console.log(this.name); } }; user.getName() //your name var getUsername = user.getName; getUsername() //undefined Even though getUsername holds a reference to getName , the invocation site has changed, thereby making this the global object. Hence getUsername() returns undefined. Another tricky case of the this keyword is when it's used inside an anonymous function. The this context within anonymous functions doesn't have access to the outer object and, hence, points to the global scope. To access object properties inside anonymous functions, we need to pass in the object's instance, as shown below: let user = { name: "your name", getName: function(){ var self = this; (function () { console.log(self.name); }()); } }; Instead of storing the this reference in a variable like we did above, we could have also invoked call(this) on the anonymous function, as shown below:
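The snippet that sentence refers to is cut off in this excerpt; here is a minimal sketch of what it could look like, assuming the same user object as above. The anonymous function is invoked with .call(this) from inside getName, so this inside it points back to the user object rather than the global one.

let user = {
  name: "your name",
  getName: function () {
    (function () {
      // `this` is the user object here because we passed it explicitly via call()
      console.log(this.name);
    }).call(this);
  }
};

user.getName(); // "your name"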
https://medium.com/better-programming/8-classic-javascript-coding-mistakes-you-should-avoid-14f198ea9e36
['Anupam Chugh']
2020-06-25 05:46:42.750000+00:00
['Programming', 'Software Development', 'Web Development', 'JavaScript', 'Startup']
7 tips against your smartphone addiction
1. AVOID IDLE MOMENTS It's usually during transitional breaks or moments of boredom that we tend to reach for our phones, as an automatic reaction to fill the void in the waiting time. According to Jamison Monroe, CEO of Newport Academy, the best thing you can do to avoid scrolling is to create a list of things you could do during your idle moments. The key is to come up with options that appeal to you. Here are some for you: You could be taking a walk You could be writing on paper with a pen You could be singing or dancing to your own favourite song You could drop down and do 10 push-ups (or stretch… actually maybe stretch) You could be closing your eyes and meditating for up to 10 minutes. 2. USE TECH TO ELIMINATE TECH Download apps that can tell you how many times you've checked your phone on a given day and trigger warnings when you break self-imposed limits. These apps can lock your apps for a specified amount of time to help you keep your attention away from your smartphone. Facebook released a feature called "Quiet Mode" earlier in 2020, allowing users to minimize distractions by muting the app's push notifications for a pre-specified time frame. What I particularly like about it is that you can set Quiet Mode to run automatically during your workday to reduce your temptation to waste time in the app. Furthermore, if you try to launch Facebook during Quiet Mode, the app will remind you that you've set this time aside with the goal of limiting your time in the app. I honestly can't wait for this to be rolled out for Instagram & WhatsApp as well. Other apps that I would recommend are AppDetox or AppBlock. Otherwise, many others exist in the App Store / Google Play. 3. MAKE USE OF YOUR OWN NOTIFICATIONS Leverage your calendar to set daily reminders (via e-mail & push notification). This may well become the most useful and healthy push notification on your phone. 4. UNPLUG BEFORE BED This is the toughest one to adopt in the short term, but the healthiest in the long term. An hour before you go to sleep, avoid any tech or electronic device. The blue wavelength light emitted from digital screens interrupts the production of melatonin, which gives our brain the signal to go to sleep & rest. Melatonin is known as the darkness hormone. Leave your device on a desk far from your bed, or in another room, to avoid temptation. 5. START SMALL We tend to go big or go home. Pride aside, start small to test the waters and slowly reduce your screen time. For example: Turn off your phone during dinner, or leave it away from the table. Leave it at home when going for walks Define a daily limit — e.g. 3 hours per day of tracked no-phone time. Trust a partner or friend to hold it for you during work time. Allow yourself to grab it when taking mandatory breaks. 6. TURN OFF NOTIFICATIONS Think of a smartphone as the world's smallest slot machine — it stimulates your dopamine receptors and reinforces that behavior over and over again, as it offers an unpredictable reward just like in gambling. These rewards are triggered by notifications (whether useful or useless, good or bad). Silence notifications for all social channels to make yourself less tempted to look at your phone every few seconds. In case of work dependency, make sure to connect the relevant apps on your work laptop instead. Think of accessing social media updates as your reward for your hard work, making your time more enjoyable. Watch The Social Dilemma on Netflix if you need more convincing to silence some apps. 7.
PLAN BREAKS. Commit to taking daily breaks, during which you turn off your phone and put it out of sight and out of reach. To make the most out of your break, plan a specific activity to fill the gap. I usually prep myself a nice Italian-made coffee, head out with it and go for a short [mindful] walk around the block without my phone. However, do try to do the same during meals or other specific daily events — pay attention to what is going on around you. Recommendation: Let your friends or colleagues know, so that you can fully detach and maximise your mindfulness during your break, and so that they give you some space.
https://medium.com/design-bootcamp/7-tips-against-your-smartphone-addiction-8e37ff8e5f9
['Claudio Corti']
2020-12-18 00:54:23.599000+00:00
['Addiction', 'Mobile', 'Mindfulness', 'Mental Health', 'UX']
The Late Night Dream
The Late Night Dream A short horror poetry. Once upon a midsummer night There came a roaring from the wind. There I stood. Wandering and coupled with fear. Ravaging thoughts and goose bumps. Filled my flesh. Then suddenly, A shadow. A very big one. With long nails and fangs. Like that of a beast. Ready to pounce on me. Ready to devour me. There I stood. Unable to move. Unable to run. Unable to scream. Then I knew. The end is near. The end is now. With nobody to save me. But a voice. A still small voice. A melodious one. Like that of an angel. Wake up! wake up!! Little monster. © Evince pen
https://medium.com/illumination/the-late-night-dream-c5baf106b6e
['Evince Uhurebor']
2020-11-24 19:33:20.434000+00:00
['Poetry', 'Poetry On Medium', 'Writing', 'Horror Fiction', 'Self']
Five app prototyping tools compared
There are 🇺🇦 Ukrainian, 🇷🇺 Russian (+ another one) and 🇨🇳 Chinese (+ another one) translations of this article, and there’s a follow-up with Principle, Flinto for Mac & Tumult Hype. I recreated the IF by IFTTT user onboarding in five different high-fidelity prototyping tools to get an idea of the differences between them: Proto.io, Pixate, Framer, Facebook’s Origami and RelativeWave’s Form. See how these five recreations behave compared to the real thing: Pages versus Layers Why did I select these five? I discovered that recreating something that is this animation-heavy (icons moving around in different directions and at different speeds) is not even possible in most prototyping packages. The majority of tools only let you connect static pages, while only the more complex ones let you animate different objects or layers within a given page. I’ll explain it a bit more. Page-based tools In a page-based tool, you lay out different screens, and then you make hotspots or buttons to connect them together. You tap a button somewhere on one screen to go to another screen. Page-based tools generally also have a choice of different transitions between screens, like fade in, slide in from the right, slide up from below, etc. It’s a bit clunky, but it’s a good way to make quick mockups when you’re still figuring out the flow of an app (which and how many screens are needed, how they would appear, where buttons should go, etc.). Examples of page-based tools are: Briefs, InVision, Notism, Flinto, Fluid, Mockup.io, Prott, POP, Marvel, Balsamiq, Red Pen and Keynotopia. Granted, in some of these tools you can have animations or scrollable areas within a page, but you cannot use them to emulate every interaction possible in real native apps. Layer-based tools Every asset, interface element, or in other words, layer can be made tappable, swipe-able, draggable… but also animated. Prototyping a complete app in a tool like this would be crazy, though; it would be too much work (you might as well build the real app). But they’re great for trying out new interactions, or for tweaking the timing of an animation. Proto.io, Pixate, Framer, Facebook’s Origami and RelativeWave’s Form are the tools I tried. To be honest, there are a few others — Axure and Indigo Studio — but they seem to be more enterprisey (read: rather expensive). I might try them out some other time. So, onwards with the chosen ones.
https://medium.com/sketch-app-sources/five-app-prototyping-tools-compared-form-framer-origami-pixate-proto-io-c2acc9062c61
['Tes Mat']
2017-11-24 15:52:55.331000+00:00
['Design', 'UX', 'Tech', 'Prototyping', 'UI']
Mistakes to Avoid in Affiliate Marketing
Mistakes to Avoid in Affiliate Marketing Affiliate marketing programs are known to pay a good amount of commission on a regular basis, which makes it a lucrative industry for passionate marketers or worldwide web lovers. As lucrative as this industry is, beginners find it hard to figure out the right formula to get started and stay on the right path. Most affiliate marketing mistakes are not detrimental in the beginning, but in the long run they affect the returns on our affiliate marketing efforts more than we can afford. Here is how to avoid them. Affiliate Marketing Mistakes And How To Avoid How The Sales Happen Most bloggers fail in affiliate marketing because they don't understand how the sales will be generated. I've seen bloggers with great blog posts fail miserably because of this. Having quality blog posts is not enough to generate affiliate sales. You need to understand how these posts can help you to make affiliate sales. For example, let's say you have written a great post on Topic X, but you haven't put any affiliate links in the post. Or even if you place affiliate links, there is no clear call to action. Do you think this post is going to generate sales for you? No, the chance is meager. You need to optimize your posts for affiliate marketing. Another important thing is understanding which types of blog posts drive more sales. Not all types of blog posts get sales. Here are some of the best types of blog posts that are proven to drive more sales: "List Of Alternatives" Post. "Coupon Codes & Sales" Post. "List of Best X or Best X for Y" Post. "Comparison" Post. "Product/Service Review" Post. "How To" Post. "Ultimate Resources" Post. Try these types of posts. And I am pretty sure you will see good sales. Share the Wrong Product Affiliate Marketing Mistakes Getting affiliate sales is hard. Choosing the wrong products/services makes it harder. So how do you know if a product/service is right or wrong for you to promote? It depends on several factors. Let's see some of the factors: Firstly, the product/service is not relevant to your niche. For example, it's irrelevant to promote an SEO tool on a recipe blog. Secondly, the product/service quality is not good, but it offers high commission rates. Avoid these types of products. Even if you get some sales, in the long term it will decrease your credibility. Finally, the product/service is new on the market. Always remember, popular products convert better. So always try to promote the products/services that are relevant and popular in your niche. Do Not Understand The Product It's not like you have to use and test all the products/services that you are promoting on your blog. But it's essential to have proper knowledge of the affiliate products/services. This way, you will be able to solve your readers' problems in better ways. For example, if you are promoting HostGator on your blog, you need to know how HostGator can help in starting a blog, as well as HostGator's other services. And if possible, it's better to test the products/services before promoting them on your blog. It will increase your trustworthiness. Ignore Link Management Affiliate Marketing Mistakes To Avoid It's a common mistake that almost every affiliate marketer makes. But this mistake can turn out to be a big one if you go without an affiliate link management system for a while.
Imagine, someday, your highest revenue-generating affiliate program decides to change its affiliate platform, and they want you to use a new affiliate link. How would you change all the affiliate links that you've inserted in your blog posts? You'd have to find and change the links manually. But if you are using a link management plugin, you can do it within one minute. Here's how it works. An affiliate link management plugin allows you to cloak your affiliate links. The idea is to hide your affiliate link with URL redirection. Most of the time affiliate links are ugly, like this one — https://partners.hostgator.com/c/214426/177309/3094 Now, whenever a company changes their affiliate link structure, all you need to do is change the affiliate link from the plugin dashboard. And the change will be applied to all links. Cloaking is not the only benefit of using an affiliate link management plugin. Here are some other benefits: Firstly, you can add affiliate links automatically to specified keywords. Secondly, you can check your affiliate links' click statistics. You can change affiliate links based on geo locations and more. Finally, and most importantly, it makes your affiliate links SEO-friendly. So the question is, which plugin should you use to manage your affiliate links? There are several plugins out there. ThirstyAffiliates and Pretty Links Pro are the most popular. I use and recommend ThirstyAffiliates. Final Thoughts Creating a successful affiliate marketing business requires passion, commitment, knowledge of products, updated knowledge about market trends, relationship building with clients, quality traffic, and helpful product recommendations. Many times, affiliate marketers with great products fail to make their business profitable just because they deploy detrimental strategies, and the information above is a small step to help them avoid these mistakes. In a world of noise where everyone is selling something, it is very important not to get lost in the race of earning money, but to help consumers/customers buy what they actually need.
https://medium.com/visualmodo/mistakes-to-avoid-in-affiliate-marketing-881e66f8878f
[]
2020-03-03 02:40:12.343000+00:00
['Avoid', 'Affiliate', 'Marketing', 'Mistakes', 'Error']
You Have More in Common With Voldemort Than You Think
You Have More in Common With Voldemort Than You Think We all leave little pieces of our soul in the world around us Courtesy of Warner Bros. Entertainment Inc. Photo: Eric Charbonneau/WireImage/Getty Images What could the average person possibly have in common with the fictional mass-murdering noseless wizard whose name ought not to be spoken? It’s a bold claim, I know. Well, each of us constructs our identity in the same way Voldemort does — or, Voldemort provides an illuminating allegory for how we build our identities. Voldemort, that dastardly villain, created these things called Horcruxes. The concept of the Horcrux, for those of you who don’t already know, was that ol’ Voldy split his soul and stored the shards in different objects or beings. This ensured his life — nobody could kill him unless they first destroyed all the Horcruxes. We do have to put one element of this analogy aside: the fact that he created these Horcruxes by murdering people. Let’s just ignore that bit for the sake of this analogy, as I don’t think most people are secret murderers. When you take that element out, though, this premise has some surprising backbone to it. A person latches pieces of their soul onto objects and people with sentimental significance to them, and they cannot truly die until these Horcruxes are all destroyed. The destruction of one brings them pain, and they will stop at nothing to protect their soul shards. When put in these terms, could we not accurately say this about ourselves? We all ground our identity in ideas, memories, and experiences. But we don’t stop there. We humans like to create physical symbols of abstract concepts. So, we use objects and people as symbols of the ideas upon which we base our identities. We keep sentimental jewelry and knickknacks; we cling to family and friends; we buy clothes with icons and slogans on them. And we each have a few key things, from necklaces to cars to plots of land, that make up the core of who we are. Moreover, because we have based our identities on the ideas these objects represent, we will protect them with our lives. An attack upon our identity is akin to an attack on our person — it’s a threat to a foundational aspect of our being. We will voraciously defend these ideas or the items that represent them. Who among us has not been thrown into a momentary panic when a friend or family member almost threw away something that was meaningless to them but sentimental to us? And lastly, so long as these ideas remain in the world and are built upon, we never disappear. Even the objects that represent these ideas can live on and grow past our lives. The farmer identifies with his farm — and his children will continue to cultivate it. So, I think it could be said that we all have Horcruxes, though we don’t need to kill people to make them. And we don’t die if all of our Horcruxes get destroyed. Or do we? Well, no. But our Horcruxes do have a relationship to the quality of our lives. Life, fundamentally, is a delicate balance of order and chaos. Life requires unpredictability to work. We can see this truth in something as fundamental as evolution — the random (chaotic) mutation of genes that ultimately allows us to adapt and survive. But it also requires order. If our DNA was too unpredictable, nothing resembling a species could form. If every core piece of your identity is destroyed, you would be left feeling like nothing. The necessity of this balance is present at all levels of our lives. 
Thus, life exists and thrives in this precise balance of the two elements. Order and chaos. Change to something new and maintenance of what is. Our identities are integral to our existence as individuals, and the same rule applies. Our identities must gradually grow and change if we are to survive, and they must also maintain core elements to achieve any level of stability. If every core piece of your identity is destroyed, you would be left feeling like nothing — like no one. As though the person you were had died, and whoever you became next would be totally new. Many people have experienced this kind of rebirth through extreme loss. On the other side, an unchanging identity is effectively dead. It is stagnant. There is nothing left of its story to tell. It is merely waiting for the literal death to catch up to the spiritual end. Similarly, when death strikes any person, their identity freezes. Having an unchanging identity has many of the same consequences as death. Death is to know, completely and absolutely, who you are — never to have the will to become a different, better person, nor to have any questions remaining about your true nature. That said, it is, in a way, possible to survive without an identity. Victims of severe childhood trauma often achieve this, because any identity one develops in a situation of abuse becomes the target of attack. An identity is a weakness to be exploited by the enemy, so the individual creates a protective barrier of nonassociation and nonattachment. One can never achieve complete nonidentity — we will always grasp something to help keep us alive and moving. But in these situations of constant attack, the identity is minimized and hidden. It becomes almost undetectable, even by the individual that holds it. So, one might ask, if it is possible to survive and even become immune to emotional attack by minimizing one’s identity, is this not the ideal way to live? You can probably guess my answer. No, because identity is necessary to create anything. Or, to create something is to give form to a piece of your identity. Without an identity, you have nothing to give shape. Moreover, it is necessary to identify with something before you can improve or build upon it. This is similar to the fact that before you have a right to change something, you must first have at least partial ownership of it. But what do ownership and identification have to do with each other? Well, they are parallel processes. To own something is to, at least partially, identify with it. If you didn’t identify with it, you would cast it aside. Similarly, when you strongly identify with something, even if you are not the sole owner of it, you protect it and tend to it as though you were. Because identity is both our strength and our weakness, we must be particularly careful with what we identify with. You need to identify with things to own them. If you want anything, if you have even a single solitary desire in life, you must open your identity to it before you can acquire it. Identifying with things is necessary for achieving goals. In a state of nonidentity, you may be immune to emotional attack, but you are also incapable of reaching any joy in life. It’s an emotional scorched-earth policy — you can’t steal what is burned to ash, and you can’t attack something that doesn’t exist. All this is to say that, ultimately, we must construct an identity for ourselves. We cannot go about our entire lives without connecting to anything in the world around us. 
Or rather, if we did, it wouldn’t be a life much worth living. But because identity is both our strength and our weakness, we must be particularly careful about what we identify with. We cannot hurl our soul around haphazardly and make Horcruxes of everything we touch. If we did, we would disappear for lack of distinction and become susceptible not to attack, but to the random nature of life. Catastrophes would consistently strike one or more of the things we are identified with. We must identify with that which is more core to our desires and leave all else behind. It will only slow us down.
https://humanparts.medium.com/you-have-more-in-common-with-voldemort-than-you-think-51ed18951fb9
['Atheno Boldly Fearless']
2020-04-29 17:15:06.690000+00:00
['Life Lessons', 'Psychology', 'Self Improvement', 'Life', 'Self']
Covid-19 in the Middle East: situation report for week ending 22 August
ALGERIA Algeria’s outbreak peaked towards the end of July when more than 600 new cases were being recorded each day. Since then the trend has been downwards, with new cases averaging 429 a day during the past week according to official figures. Restrictions imposed in 29 of Algeria’s 58 wilayas (administrative districts) were eased last week but there is still a night curfew (11pm to 6am) and face masks must be worn outdoors. Large mosques (1,000-plus capacity) have been allowed to open throughout the country but worshippers must bring their own prayer mats and wear face masks. Congregational prayers on Fridays are still banned. About 4,025 medical staff have been infected with Covid-19 in Algeria and 69 of them have died, according to the government’s scientific committee. These figures are a lot higher than those previously given by the health minister. For more information see: Covid-19 in Algeria Confirmed cases: 40,667 New cases in past week: 3,003 Active cases: 10,662 Deaths: 1,418 Tests carried out: (unknown) BAHRAIN Bahrain has more than 26,000 known cases per million inhabitants. This makes it the world’s third most infected country after Qatar and French Guiana. However, Bahrain is also one of the world leaders in Covid-19 testing. So far, almost 60% of its 1.7 million population have been tested. The daily total of new cases fluctuates but Bahrain’s epidemic appears to be subsiding gradually. The number of people reported to be currently infected is around 3,300 compared with 5,700 at the peak in mid-June. Cafes and restaurants remain closed but the authorities have announced plans for a phased reopening in September. Bahrain no longer requires people arriving in the country to isolate themselves. In recent tests only 0.2% of new arrivals were found to carry the virus. For more information see: Covid-19 in Bahrain Confirmed cases: 48,661 New cases in past week: 2,609 Active cases: 3,314 Deaths: 181 Tests carried out: 1 million EGYPT New Covid-19 cases in Egypt over the last three months. Seven-day rolling average, day by day. New cases peaked in June and have been falling sharply during the past few weeks, according to official figures. This week’s average was 123 cases a day compared with almost 1,600 at the peak. Although Egypt’s official figures have often been viewed with suspicion there is other evidence that its outbreak is subsiding. For example, the health ministry has been closing down some of its temporary isolation facilities. Egypt has been anxious to revive its economically important tourism sector and in July it began reopening its seaside resorts for foreign visitors. These resorts — in South Sinai, the Red Sea and Marsa Matrouh on the Mediterranean coast — have been isolated from the rest of the country to reduce the risk of infections spreading. Foreigners flying directly to the resorts don’t need to be tested for Covid-19 but they will need a test if they wish to leave the resort and visit other parts of the country. Foreigners arriving in other parts of the country must have tested negative during the 72 hours before travelling and will not be allowed to visit the resorts. For more information see: Covid-19 in Egypt Confirmed cases: 97,148 New cases in past week: 928 Active cases: 27,599 Deaths: 5,231 Tests carried out: 135,000 IRAN Iran was the first country in the region to be seriously affected by the virus and its epidemic shows no sign of abating. Government figures show an initial wave of infections which peaked at the end of March. 
It subsided during April, briefly dipping below 1,000 new cases per day but then rose to a new peak in the first week of June. New cases this week averaged 2,206 a day — virtually unchanged from the previous week. Iran continues to report more coronavirus-related deaths than any other country in the region. A further 1,045 deaths have been recorded during the past week. Confirmed cases: 354,7645 New cases in past week: 15,939 Active cases: 28,522 Deaths: 20,376 Tests carried out: 3 million IRAQ Iraq is currently recording more new infections than any other country in the region. New cases this week averaged more than 4,000 a day and Wednesday’s total of 4,576 cases was the highest since the outbreak began. Worse still, Iraq’s official figures are widely believed to understate the scale of the epidemic. Many cases go unreported because of social stigma. Compliance with preventive measures appears to be low and health services are inadequate. For more information see: Covid-19 in Iraq Confirmed cases: 197,085 New cases in past week: 28,795 Active cases: 50,356 Deaths: 6,283 Tests carried out: 1.4 million ISRAEL After coming close to bringing the epidemic under control, Israel has been hit by a second wave much larger than the first. The first wave peaked at around 600 new cases a day in early April. Efforts to control it were initially successful and by the second half of May new cases had dropped to about 15 a day. However, the virus surged back when lockdown restrictions were lifted and by the end of July new cases were averaging almost 1,800 a day. The second wave now appears to have peaked but the number of new cases remains high, averaging 1,377 a day this week. For more information see: Covid-19 in Israel Confirmed cases: 100,716 New cases in past week: 9,636 Active cases: 22,122 Deaths: 809 Tests carried out: 2.2 million JORDAN Until a couple of weeks ago Jordan appeared to be the most successful Arab country in controlling the virus. Although it continued to intercept new cases among people arriving from abroad, transmission within the country had virtually ceased. Since then, however, there has been a spate of locally-occurring cases and they now account for most of the newly-detected infections. The recent problems began with an outbreak at the Jaber-Nasib crossing point on the border with Syria where at least nine employees were diagnosed with the virus (see news report). This led to further infections among their contacts in various other places. Buildings in several cities have been sealed off but tracing contacts and ensuring compliance with quarantine is proving a formidable task. One of the people who tested positive for the virus this week is said to have come into contact with 170 people and visited 35 different places all over the country. At a news conference on Friday health minister Saad Jaber said the main reason for the increase in infections is non-compliance with preventive measures at border crossings. Employees had broken the rules to meet for tea, coffee and tomato stir-fry, he added. Even people who had tested positive were shaking hands and hosting large gatherings. New measures may be imposed in the light of developments over the next few days. These could include a one-day lockdown on Fridays, extending curfew hours and temporarily closing schools, mosques, churches, parks and gathering places. 
For more information see: Covid-19 in Jordan Confirmed cases: 1,532 New cases in past week: 203 Active cases: 259 Deaths: 11 Tests carried out: 728,000 KUWAIT New infections peaked in late May at just over 1,000 cases a day. The numbers have dropped back substantially since then and this week’s average was 583 a day. The government has announced that the night curfew will be lifted on August 30. Restrictions on large gatherings such as weddings and funerals will continue. As a result of the economic downturn caused by the pandemic and low oil prices Kuwait is planning to expel 360,000 foreigners though as yet there is no timetable for their departure. For more information see: Covid-19 in Kuwait Confirmed cases: 79,269 New cases in past week: 4,084 Active cases: 7,494 Deaths: 511 Tests carried out: 581,000 LEBANON Political and economic turmoil, plus the devastating explosion in Beirut on August 4, have diverted attention from the coronavirus. Although Lebanon’s outbreak is still relatively small, infections have surged during the past month. New cases this week averaged 505 a day — about three times as many as at the end of July. A new partial lockdown began on Friday and is due to last two weeks. However, there are doubts about how well it will be observed or enforced (see news report). For more information see: Covid-19 in Lebanon Confirmed cases: 11,580 New cases in past week: 3,535 Active cases: 8,260 Deaths: 116 Tests carried out: 451,000 LIBYA Libya is in its ninth year of internal conflict. The UN-backed Government of National Unity in Tripoli is challenged by Field Marshall Haftar’s forces based in the east of the country. There are also numerous militias. This leaves the country ill-equipped to cope with a major epidemic. Growing levels of insecurity, political fragmentation and weak governance have led to a deterioration of basic services, particularly in the health system. At least 27 health facilities have been damaged or closed by fighting and some have been attacked directly. There are 870,000 people — refugees, asylum seekers and displaced persons — who the UN regards as especially vulnerable. The World Health Organisation (WHO) describes the coronavirus situation in Libya as “clusters of cases” — in other words, a series of local outbreaks rather than a generalised epidemic. Sebha, Tripoli, Zliten, Misrata, Ashshatti, Ubari, Traghen, Janzour and Khoms are said to be particular hotspots. Testing is very limited and the number of confirmed infections is still relatively small but growing fast. Half of the known cases were recorded this month. Investigations by the National Centre for Disease Control (NCDC) have concluded that most infections are the result of people not practising social distancing. The Libya Herald reports that most people do not wear masks in public. It adds: “Many still spend the weekend with their parents/relatives, attend funerals, baby-parties, and weddings. And although function halls have been forced to shut down, many are holding events for hundreds in open locations such as farms.” The authorities in Tripoli have responded by announcing series of penalty charges, including fines of 250 dinars ($180) for not wearing a face mask on public transport and 500 dinars (plus temporary closure) for businesses that fail to enforce mask-wearing. 
For more information see: Covid-19 in Libya Confirmed cases: 10,121 New cases in past week: 2,794 Active cases: 8,888 Deaths: 180 Tests carried out: 91,000 MOROCCO New Covid-19 cases in Morocco over the last three months. Seven-day rolling average, day by day. Coronavirus infections have been rising sharply in Morocco over the last few weeks, with a record 1,776 new cases reported on Sunday. This is a major setback since early June when a strict lockdown had reduced new cases to around 40 a day. Local health experts attribute the reversal mainly to the “rushed” way restrictions were lifted (see news report). There are scattered outbreaks around the country which could grow rapidly if not monitored closely. Containing these depends heavily on the effectiveness of contact-tracing — which is being hampered by delays in testing. Once someone has been diagnosed with Covid-19, it usually takes a week or more to trace all their contacts — by which time the virus may have spread further. Delays in testing are also being blamed for causing avoidable deaths, because people with serious symptoms are often not receiving treatment until it is too late. For more information see: Covid-19 in Morocco Confirmed cases: 49,247 New cases in past week: 10,006 Active cases: 14,239 Deaths: 817 Tests carried out: 1.7 million OMAN Infections peaked in mid-July with just under 1,600 cases a day and are now on a downward path. New cases averaged 147 a day this week — a substantial drop to levels not seen since early May. This week the authorities issued a long list of rules for people visiting restaurants and cafes. More than 600 medical staff in Oman have been infected with Covid-19 since the outbreak began, according to the government. For more information see: Covid-19 in Oman Confirmed cases: 83,769 New cases in past week: 1,026 Active cases: 4,774 Deaths: 609 Tests carried out: 309,000 PALESTINE Palestine, like Israel, is in the midst of a wave of new infections. Hebron is the most seriously affected area, with 10,832 confirmed cases — almost half the total. New cases this week averaged 477 a day — a small increase on the previous week. Many of the infections are attributed to people ignoring the rules for social distancing, which the authorities have difficulty enforcing. The health ministry says more than 30% of cases are the result of Palestinians travelling to and from work in Israel which is in the second wave of its epidemic. Fears of a major outbreak in Gaza have not materialised. Most of the known cases there appear to have been due to contacts with Egypt. For more information see: Covid-19 in Palestine Confirmed cases: 24,398 (West Bank 16,293, Gaza 117, East Jerusalem 7,988) New cases in past week: 3,342 Active cases: 8,993 Deaths: 135 Tests carried out: 214,000 QATAR In population terms Qatar has more known cases than any other country — 41,000 per million inhabitants. Migrant workers have been disproportionately affected. Qatar’s epidemic reached a peak in the first week of June but infections have fallen since then. New cases this week averaged 278 a day — well below the peak of more than 1,800 a day. For more information see: Covid-19 in Qatar Confirmed cases: 116,481 New cases in past week: 1,949 Active cases: 3,072 Deaths: 193 Tests carried out: 577,000 SAUDI ARABIA Saudi Arabia has the largest number of recorded cases among the Arab countries. New infections reached an initial peak in the fourth week of May, then dropped back slightly before rising to a higher peak in the third week of June. 
Since then, though, there has been a substantial improvement. Numbers of new cases are still large. This week they averaged 1,326 a day — a small drop since the previous week and about 3,000 a day below the June peak. The kingdom currently has fewer than 25,000 active cases compared with more than 63,000 at the peak. Migrant workers have been disproportionately affected but the authorities have also complained about non-compliance with precautionary measures by Saudi citizens. For more information see: Covid-19 in Saudi Arabia Confirmed cases: 305,186 New cases in past week: 9,284 Active cases: 24,539 Deaths: 3,580 Tests carried out: 4.5 million SUDAN The coronavirus struck Sudan in the midst of a political transition following a popular uprising against the regime of President Bashir and the country is ill-equipped to cope with a major epidemic. Testing is very limited and official figures don’t reflect the full scale of the outbreak. For more information see: Covid-19 in Sudan Confirmed cases: 12,623 New cases in past week: 461 Active cases: 5,335 Deaths: 812 Tests carried out: (unknown) SYRIA According to official figures Syria’s outbreak is still small, with just over 2,000 cases reported in areas controlled by the Assad regime. Even so, that is twice as many as two weeks ago. Official announcements rarely give any details and this lack of transparency fuels suspicions that many cases are being concealed. There is also some evidence that people with Covid-19 symptoms are reluctant to contact the authorities. Anecdotal evidence suggests community transmission of the virus is now widespread and one study indicates there may be tens of thousands of unreported cases. Fears have been raised about north-western and north-eastern parts of the country which are outside the regime’s control. Millions of displaced people are living in those areas and health services are often rudimentary. So far, 225 cases have been confirmed in the north-east and 54 in the north-west according to Syria in Context, a subscription website. For more information see: Covid-19 in Syria The following figures relate to regime-controlled areas only: Confirmed cases: 2,073 New cases in past week: 558 Active cases: 1,515 Deaths: 83 Tests carried out: (unknown) TUNISIA New Covid-19 cases in Tunisia over the last three months. Seven-day rolling average, day by day. Tunisia’s outbreak remains small, with fewer than 3,000 infections recorded so far. New cases are growing rapidly though. This week they averaged 101 a day, compared with only 35 in the previous week. In June, Tunisia appeared to be almost free of the virus and began promoting itself as a safe holiday destination. Tourists were to be allowed in with just a simple temperature check. On Wednesday, however, the authorities announced that people arriving in the country must present evidence of a negative RT-PCR test result. This applies to everyone, including those arriving from low-risk countries. A controversial LGBT+ film festival, originally due to have been held in Tunis last March, has now been postponed for a second time because of the Covid-19 outbreak. The first such festival was held in secret in 2018 because of local opposition. For more information see: Covid-19 in Tunisia Confirmed cases: 2,607 New cases in past week: 704 Active cases: 1,123 Deaths: 64 Tests carried out: 119,000 UNITED ARAB EMIRATES The UAE’s epidemic peaked in the last week of May when new infections were running at more than 900 a day. 
Numbers of new cases are now considerably lower, though this week’s average of 339 a day is the highest for a month. The UAE has carried out more tests per head of population than any other Arab country and ranks tenth worldwide in terms of levels of testing. For more information see: Covid-19 in the UAE Confirmed cases: 66,193 New cases in past week: 2,374 Active cases: 7,527 Deaths: 370 Tests carried out: 6.3 million YEMEN Because of the ongoing war, Yemen already faced a humanitarian crisis before the coronavirus arrived. Millions are malnourished and vulnerable to disease, and health services are inadequate. Official figures grossly understate the severity of the epidemic. Cholera is also prevalent. For more information see: Covid-19 in Yemen Confirmed cases: 1,910 New cases in past week: 48 Active cases: 306 Deaths: 543 Tests carried out: (unknown)
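The per-country figures above all rest on the same simple arithmetic: a week's new cases divided by seven gives the quoted daily average, and the charts' "seven-day rolling average, day by day" applies that window to a daily series. A minimal pandas sketch of that calculation follows; it is not the author's code, and the daily series in it is invented purely for illustration.

```python
# Illustrative sketch only: the arithmetic behind "new cases in past week"
# and the "seven-day rolling average" charts. The daily series is invented.
import pandas as pd

new_cases_past_week = 3003             # Algeria's figure quoted above
print(round(new_cases_past_week / 7))  # -> 429, the "averaging 429 a day" number

daily = pd.Series([350, 420, 380, 510, 460, 440, 443, 470],
                  index=pd.date_range("2020-08-15", periods=8, freq="D"))
print(daily.rolling(window=7).mean())  # NaN until a full week of data is available
```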
https://brian-whit.medium.com/covid-19-in-the-middle-east-situation-report-for-week-ending-22-august-3551aa6af577
['Brian Whitaker']
2020-08-23 07:15:25.167000+00:00
['Coronavirus', 'Middle East', 'Covid 19']
Towards an Economic Theory of Everything in under 5 minutes
Economics is traditionally defined as the study of the allocation of scarce resources which have alternative uses. Microeconomics is further defined as the study of individual decision making, or of supply and demand, price theory, market design, and so on. Macroeconomics is defined as the study of aggregative economic phenomena, national accounts, GDP, fiscal and monetary policy. In some sense, economics can be called the science of scarcity. Nevertheless, most economists are not scientists in the traditional meaning; some are engaged in empirical research that is profound, but as a field, economics, especially the foundations, is closer to mathematics than a science. Only loosely can the traditional definition account for most of what economists do. Decision making under uncertainty can be considered the science of individuals allocating resources of time under scarcity of information. Macroeconomics can be the study of governments making decisions under political constraints. And so on. But a more general definition is needed. Here’s one: economics is the study of agents, their evolution, and their interactions in complex environments. This definition includes both microeconomics and macroeconomics; it includes the “science” of economics, as well as the math, and it includes the study of humans, animals, aliens, and artificial agents (parts of computer science). One economic theory of everything would be to actually create micro-foundations for the macroeconomy. This has historically been done through rational expectations and representative agent models. In the future, this can be done through behavioralistic and even neuroeconomic models. Anwar Shaikh actually believes that microeconomics needs better macro-foundations, not the other way around. This would involve class, gender, social relations, political factors, culture, etc. Perhaps it will prove too difficult to model individual decision making with realistic accuracy (I doubt this given enough time); nevertheless, at least theoretically, it seems possible that increasingly accurate simulations of economic behavior are possible. Unifying neuroeconomics with macroeconomics seems to be one way of creating an economic theory of everything. Shaikh also shows that you don’t necessarily need micro-economic foundations to do macroeconomics, since aggregates are robustly insensitive to individual parts; the whole is greater than the sum of the parts. In this sense, Shaikh believes the proper focus for economists is studying the economy, not necessarily individual units. Finally, economics needs to integrate with computer science, especially artificial intelligence. This is for several reasons. Humans are a form of artificial intelligence. Reinforcement agents can model human behavior. Eventually, accurate simulations of economies will be created with the aid of AI. So if there is going to be an E=MC² in economics it will likely come by 1. expanding the definition of economics to account for the mathematical study of agents, not just scarcity of resources, 2. creating macro-foundations for microeconomics, perhaps through sociology, anthropology, political science, etc. 3. creating micro-foundations for macroeconomics, perhaps through behavioral economics and neuroeconomics 4. incorporating reflexivity, interactions, and complexity into models 5. integrating with AI/computer science
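To make the proposed definition concrete ("the study of agents, their evolution, and their interactions in complex environments"), here is a deliberately tiny agent-based sketch in Python. It is only an illustration of the idea, not a model the author proposes, and every parameter in it is invented: simple trading agents interact pairwise, and an aggregate market price emerges that no individual agent sets.

```python
# A minimal agent-based sketch (illustrative only; all parameters are invented):
# individual agents follow simple rules, and an aggregate statistic emerges
# from their interactions, which is the micro-to-macro idea discussed above.
import random

random.seed(0)

class Agent:
    def __init__(self):
        self.cash = 100.0
        self.goods = 10
        # each agent's private valuation of one unit of the good
        self.valuation = random.uniform(5, 15)

agents = [Agent() for _ in range(100)]
trade_prices = []

for step in range(1000):
    buyer, seller = random.sample(agents, 2)
    if seller.goods > 0 and buyer.valuation > seller.valuation:
        # trade at the midpoint of the two valuations
        price = (buyer.valuation + seller.valuation) / 2
        if buyer.cash >= price:
            buyer.cash -= price
            seller.cash += price
            buyer.goods += 1
            seller.goods -= 1
            trade_prices.append(price)

# an aggregate "market price" emerges from decentralized interactions
print(sum(trade_prices) / max(len(trade_prices), 1))
```

Replacing the fixed valuations with learning rules would turn this into the kind of reinforcement-agent simulation the piece gestures at, but that is left as a direction rather than shown here.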
https://medium.com/datadriveninvestor/towards-an-economic-theory-of-everything-in-under-5-minutes-12f25b0038ee
['The Moral Economist']
2020-11-25 15:17:11.151000+00:00
['Economics', 'Investing', 'Finance', 'Psychology', 'Philosophy']
A New Global Mobility Hierarchy Emerges as International Travel Resumes
Coronavirus-related travel restrictions are beginning to lift in some countries after more than six months of panic and uncertainty. The resumption of international cross-border travel may appear to be a signal that things are slowly returning to normal, but as the latest research from the Henley Passport Index — based on exclusive data from the International Air Transport Association (IATA) — shows, the pandemic has completely upended the seemingly unshakeable hierarchy of global mobility that has dominated the last few decades, with more change still to come. At the beginning of the year, for instance, the US passport was ranked in 6th position on the Henley Passport Index — the original ranking of all the world’s passports according to the number of destinations their holders can access without a prior visa — and Americans could travel hassle-free to 185 destinations around the world. Since then, that number has dropped dramatically by over 100, with US passport holders currently able to access fewer than 75 destinations, with the most popular tourist and business centers notably excluded. As criticism of the country’s pandemic response continues to mount, and with the US presidential election just weeks away, the precipitous decline of US passport power and American travel freedom is seen as a clear indication of its altered status in the eyes of the international community. Other significant changes in the once-solid global mobility hierarchy paint an equally vivid picture of the chaos caused by the Covid-19 pandemic. At the beginning of 2020, the Singapore passport was ranked 2nd globally, with passport holders able to access an unprecedented 190 destinations. However, under the current travel restrictions, Singaporeans can travel to fewer than 80 destinations around the world. Unsurprisingly, those countries whose coronavirus responses have been criticized for being inadequate have taken the greatest knock when it comes to the travel freedom of their citizens. Brazilian passport holders were able to access 170 destinations without acquiring a visa in advance in January. Currently, approximately only 70 destinations are accessible. The decline in mobility and passport power for countries such as India and Russia have been less dramatic, but nevertheless indicative of an overall shift. Russian citizens had access to 119 destinations prior to the Covid-19 outbreak but can currently travel to fewer than 50. At the beginning of the year, Indian passport holders could travel to 61 destinations without a visa but due to virus-related restrictions, they currently have access to fewer than 30. Without taking the various pandemic-related travel bans and restrictions into account, Japan continues to hold the number one spot on the Henley Passport Index, with a visa-free/visa-on-arrival score of 191. Singapore remains in 2nd place, with a score of 190, while Germany and South Korea are tied 3rd, each with a score of 189. EU member states continue to perform best overall, with countries from the bloc taking up most of the spots in the index’s top 10.
https://medium.com/curious/a-new-global-mobility-hierarchy-emerges-as-international-travel-resumes-72a39e741ca5
['Henley']
2020-10-14 10:32:29.783000+00:00
['Travel Freedom', 'International Travel', 'Global Mobility', 'Coronavirus', 'Henley Passport Index']
An RDF crawler
I wrote an RDF crawler (aka scutter) using Java and the Jena RDF toolkit that spiders the web gathering up semantic web data and storing it in any of Jena’s backend stores (in-memory, Berkeley DB, mysql, etc). Download it here. The system is multithreaded and so can simultaneously download from many sources while the aggregation thread does the processing. It builds a model that remembers the provenance of the RDF and takes care to delete and replace triples if it hits the same URL twice, so you can run it as often as you like to keep the data fresh without bloating the store with out-of-date information. As yet it doesn’t do anything with what it gathers; the information’s just sitting there waiting for interesting applications to be built on top of it. To use it as distributed, set up a mysql database called “scutter” and set the username and password in the DBConnection setup in Scutter.java then recompile using ‘ant compile’ (sorry, no handy config files in this 0.1 release). Run the script scutter.sh passing in as many starting-point URLs as you like. These will be added to the queue, and any rdfs:seeAlso pointers in the downloaded RDF will be recursively followed until no more unique URLs can be found. The biggest known issue at the moment is that it doesn’t do proper management to work out when it’s run out of URLs — it just stops. The standard log4j.properties file can be edited to change what gets logged — with full debugging information turned on, you get quite a lot of output. Plans for the future include tying FOAF-related processing into the aggregation such as smushing and mbox_sha1sum normalising, and making a publish/subscribe-based system so that people who can’t run their own aggregators can subscribe to the RDF that’s gathered.
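The crawler itself is Java and Jena, but the loop it describes (keep a queue of URLs, parse each document, record which triples came from which source so that a re-crawl replaces rather than duplicates them, and follow rdfs:seeAlso links recursively) can be sketched in Python with rdflib. The following is a rough analogue for illustration under those assumptions, not a port of the original code, and it keeps everything in memory rather than in a backend store.

```python
# Rough Python/rdflib analogue of the scutter loop described above
# (the real tool is Java + Jena; this is an illustration, not a port).
from collections import deque
from rdflib import Graph
from rdflib.namespace import RDFS

def scutter(start_urls, max_docs=50):
    queue = deque(start_urls)
    seen = set()
    # provenance: one graph per source URL, so re-crawling a URL
    # replaces its old triples instead of duplicating them
    store = {}

    while queue and len(seen) < max_docs:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        g = Graph()
        try:
            g.parse(url)  # rdflib guesses the serialization
        except Exception as exc:
            print(f"skipping {url}: {exc}")
            continue
        store[url] = g  # overwrite any previous crawl of this URL
        # recursively follow rdfs:seeAlso pointers to more RDF
        for _s, _p, target in g.triples((None, RDFS.seeAlso, None)):
            if str(target) not in seen:
                queue.append(str(target))
    return store

# e.g. scutter(["https://example.org/foaf.rdf"])  # hypothetical starting point
```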
https://medium.com/hackdiary/an-rdf-crawler-f747a5493a4c
['Matt Biddulph']
2018-01-12 02:53:45.222000+00:00
['Java', 'Rdf']
Top 7 Practice Tests and Mock Exams to Prepare for Oracle’s Java Certifications — OCAJP and OCPJP
Top 7 Practice Tests and Mock Exams to Prepare for Oracle’s Java Certifications — OCAJP and OCPJP javinpaul Follow Jun 11 · 7 min read image_credit — Udemy Hello guys, there is no doubt that exam simulators play an essential role in preparing for any Java certification, like OCAJP, OCPJP, OCEJWCD, OCMJEA exams. In fact, they are one of the most crucial pillars because choosing a good exam simulator with a good book is generally the success mantra of many Java certification aspirants. The exam simulators prepare you well for exams by presenting the level of questions you can expect in real reviews. They provide much-needed practice in a review like an environment to gauge your speed and accuracy. I have personally seen the difference of 30% in score between people who do a lot of mock exams and who just go without practicing mock exams. Candidates make more mistakes when they first took exams, and by participating in mock tests, you train your mind to make fewer mistakes. They also help you to handle the time pressure of the real exam better. Though it’s not necessary to buy a commercial exam simulator that is probably the best-spent money, you get a lot of value for your money. You not only learn your mistakes, but the comprehensive explanations are given by these simulators also help you to correct them. Since many of my readers have requested about which is the best exam simulator to buy for OCAJP 11 or OCAJP 8? Or, which one is the cheapest exam simulator but good quality, I decided to jot down some of the excellent quality exam simulators for Oracle’s Java certification. Top 7 Practice test and Mock Exam to Crack Oracle’s Java Certification Here is my list of some of the best Java Exam simulators currently available in the market. The list is solely based on whatever I have read and known from the people who have used it, but I have not taken all the exam simulators personally. My personal experience is only with Whizlabs, which I think is more than sufficient for any candidates who wants to achieve more than 80% in OCAJP or OCPJP, but I have listed down other commercial mock exam providers to provide a comprehensive list of exam simulators. Most of the exam providers not only provide simulators for OCAJP and OCPJP but also for more advanced Java exams like OCEJWCD (Oracle Certified Expert Java Web Component Developer) and OCMJEA (Oracle Certified Master Java Enterprise Architect). So, no matter which exam you are preparing, you will find some good exam simulators with these providers. This is the best exam simulator for Java certification. I have used it personally so I can vouch for the quality of Whizlabs. It has separate practice tests for OCAJP and OCPJP, both Java 11 and 8, depending on which version you are preparing. The OCPJP 11 exam simulator contains over 400 questions and five full-length mock exams, which costs around 20$, you might get some discount as well. You can take the test online from any device, and it also provides detailed reports on your strong and weak areas. You can also buy Whizlabs Practice questions on Udemy. Here is the link to buy Whizlabs simulator on Udemy: I didn’t know that you can also buy practice questions on Udemy but you can and they also have some of the best practice questions for Java certifications like Java SE 8 and Java SE 11. Here are some of the notable Java Practice questions you can buy on Udemy, they are also very affordable and you can get most of them in just $10 on several Udemy sales which happens every now and then. 
This is another great Java exam simulator but only available for Java 8, i.e. for both the 1Z0–808 and 1Z0–809 exams. They also have a Java 1Z0–808 and 1Z0–809 Free Test, which is created to demonstrate all the features of their Java 8 Associate Web Simulator. You will be able to access 25 complete questions and will have 53 minutes to finish the test. If you want, you can also download their free 1Z0–808 and 1Z0–809 dumps in PDF format for reference. If you want to go for cheap and best, then nothing beats Enthuware. It costs around $9.95 for a question bank with approximately 500 questions. Surely, you can’t get less expensive than this. The questions are also top quality, pretty much the same level as Whizlabs, and the detailed answers are also of good quality, explaining why correct answers are correct and why wrong answers are incorrect. Kaplan SelfTest is authorized by Oracle, so you can be sure that it covers the exam objectives well. The Kaplan SelfTest contains over 170 questions, and the price starts from $69 for 30 days of online access. The CD costs you around $99. The Kaplan 1Z0–804 Practice Test for Java SE 7 Programmer II (OCPJP7) also includes 275 complimentary flashcards, and a comprehensive score report helps you focus your study efforts. Transcender is similar to Kaplan, and also an Oracle authorized practice exam provider. They have different packs for different time durations, with around 190 questions; the price starts from $109. You should only buy either Kaplan or Transcender because they actually contain the same questions; the only thing that differs is the number of topics covered and the number of questions provided. They are actually now merged together and known as Transcender, powered by Kaplan IT Training. This is another good Oracle and Java Exam simulator provider that offers training courses and exam simulators for almost all Java certifications. You can buy OCPJP 11 online training, OCPJP 11 study guide + mock exam questions from this provider for your practice. They also have free tests on their website so that you can evaluate their content before you buy; they are worth trying to check your knowledge as well. 7. Mock Exams from Java Certification Guides You can also find a couple of mock exams when you buy a Java Certification Study guide. Study guides are an excellent resource to prepare for the exam because they provide full coverage of the syllabus and prepare you for the exam by presenting concepts that are more valuable from the exam point of view. Here are a couple of excellent Java study guides for both OCAJP and OCPJP, for both Java SE 11 and Java SE 8. Apart from these, there are a couple of other books and study guides, depending upon whether you are preparing for OCAJP 11, OCPJP 11, OCAJP 8, or OCPJP 8. You can check out my recommended books for these exams in this blog, here. Other Certification Resources for Java Programmers and IT Professionals That’s all about the list of some of the best Java commercial exam simulators for OCAJP and OCPJP exams. Most of these Java exam simulator providers also provide mock exams for other Java certifications like OCEJWCD or OCMJEA and other exams. There are also a lot of free mock exams available for both OCAJP 11 and OCPJP 8, which you can take a look at before buying any Java exam simulators. You can use them to judge the quality of the full exams. P. S. 
— If you are new to the Java development world and want to learn Java in depth before going for certification then I also suggest you go through this The Complete Java Masterclass course by Tim Buchalaka and his team on Udemy. It is also one of the most up-to-date courses to learn Java covering new features from recent Java releases.
https://medium.com/javarevisited/top-7-practice-tests-and-mock-exams-to-prepare-for-oracles-java-certifications-ocajp-and-ocpjp-36502d4ca061
[]
2020-12-11 08:58:50.638000+00:00
['Certification', 'Programming', 'Software Development', 'Java', 'Coding']
Osho’s Views on J Krishnamurthy
When Jiddu Krishnamurti died, Osho expressed his thoughts on him as a being and his work. Its relevance, its longevity and its usefulness. It is worth reading. The discussion has been called “Death of the mystic, J. Krishnamurti”. J. Krishnamurti died last Monday, In Ojai, California. In the past you have spoken of him as another enlightened being. Would you please comment on his death? The death of an enlightened being like J. Krishnamurti is nothing to be sad about, it is something to be celebrated with songs and dances. It is a moment of rejoicing. His death is not a death. He knows his immortality. His death is only the death of the body. But J. Krishnamurti will go on living in the universal consciousness, forever and forever. Just three days before J. Krishnamurti died, one of my friends was with him; and he reported to me that his words to him were very strange. Krishnamurti was very sad and he simply said one thing: “I have wasted my life. People were listening to me as if I am an entertainment.” The mystic is a revolution; he is not entertainment. If you hear him, if you allow him, if you open your doors to him, he is pure fire. He will burn all that is rubbish in you, all that is old in you, and he will purify you into a new human being. It is risky to allow fire into your being — rather than opening the doors, you immediately close all the doors. But entertainment is another thing. It does not change you. It does not make you more conscious; on the contrary, it helps you to remain unconscious for two, three hours, so that you can forget all your worries, concerns, anxieties — so that you can get lost in the entertainment. You can note it: as man has passed through the centuries, he has managed to create more and more entertainments, because he needs more and more to be unconscious. He is afraid of being conscious, because being conscious means to go through a metamorphosis. I was more shocked by the news than by the death. A man like J. Krishnamurti dies, and the papers don’t have space to devote to that man who for ninety years continuously has been helping humanity to be more intelligent, to be more mature. Nobody has worked so hard and so long. Just a small news article, unnoticeable — and if a politician sneezes it makes headlines. What is your connection with Krishnamurti? It is a real mystery. I have loved him since I have known him, and he has been very loving towards me. But we have never met; hence the relationship, the connection is something beyond words. We have not seen each other ever, but yet…perhaps we have been the two persons closest to each other in the whole world. We had a tremendous communion that needs no language, that need not be of physical presence…. You are asking me about my connection with him. It was the deepest possible connection — which needs no physical contact, which needs no linguistic communication. Not only that, once in a while I used to criticize him, he used to criticize me, and we enjoyed each other’s criticism — knowing perfectly well that the other does not mean it. Now that he is dead, I will miss him because I will not be able to criticize him; it won’t be right. It was such a joy to criticize him. He was the most intelligent man of this century, but he was not understood by people. He has died, and it seems the world goes on its way without even looking back for a single moment that the most intelligent man is no longer there. It will be difficult to find that sharpness and that intelligence again in centuries. 
But people are such sleep walkers, they have not taken much note. In newspapers, just in small corners where nobody reads, his death is declared. And it seems that a ninety-year-old man who has been continuously speaking for almost seventy years, moving around the world, trying to help people to get unconditioned, trying to help people to become free — nobody seems even to pay a tribute to the man who has worked the hardest in the whole of history for man’s freedom, for man’s dignity. I don’t feel sorry for his death. His death is beautiful; he has attained all that life is capable to give. But I certainly feel sorry for the whole world. It goes on missing its greatest flights of consciousnesses, its highest peaks, its brightest stars. It is too much concerned with trivia. I feel such a deep affinity with Krishnamurti that even to talk of connection is not right; connection is possible only between two things which are separate. I feel almost a oneness with him. In spite of all his criticisms, in spite of all my criticisms — which were just joking with the old man, provoking the old man…and he was very easily provoked…. Krishnamurti’s teaching is beautiful, but too serious. And my experience and feeling is that his seventy years went to waste because he was serious. So only people who were long-faced and miserable and serious types collected around him; he was a collector of corpses, and as he became older, those corpses also became older. I know people who have been listening to him for almost their whole lives; they are as old as he himself was. They are still alive. I know one woman who is ninety-five, and I know many other people. One thing I have seen in all of them, which is common, is that they are too serious. Life needs a little playfulness, a little humor, a little laughter. Only on that point am I in absolute disagreement with him; otherwise, he was a genius. He has penetrated as deeply as possible into every dimension of man’s spirituality, but it is all like a desert, tiring. I would like you back in the garden of Eden, innocent, not serious, but like small children playing. This whole existence is playful. This whole existence is full of humor; you just need the sense of humor and you will be surprised…. Existence is hilarious. Everything is in a dancing mood, you just have to be in the same mood to understand it. I am not sorry that J. Krishnamurti is dead; there was nothing more for him to attain. I am sorry that his teaching did not reach the human heart because it was too dry, juiceless, with no humor, no laughter. But you will be surprised to know — whatever he was saying was against religions, was against politics, was against the status quo, was against the whole past, yet nobody was condemning him for the simple reason that he was ineffective. There was no reason to take note of him…. Krishnamurti failed because he could not touch the human heart; he could only reach the human head. The heart needs some different approaches. This is where I have differed with him all my life: unless the human heart is reached, you can go on repeating parrot-like, beautiful words — they don’t mean anything. Whatever Krishnamurti was saying is true, but he could not manage to relate it to your heart. In other words, what I am saying is that J. Krishnamurti was a great philosopher but he could not become a master. He could not help people, prepare people for a new life, a new orientation. 
But still I love him, because amongst the philosophers he comes the closest to the mystic way of life. He himself avoided the mystic way, bypassed it, and that is the reason for his failure. But he is the only one amongst the modern contemporary thinkers who comes very close, almost on the boundary line of mysticism, and stops there. Perhaps he’s afraid that if he talks about mysticism people will start falling into old patterns, old traditions, old philosophies of mysticism. That fear prevents him from entering. But that fear also prevents other people from entering into the mysteries of life…. I have met thousands of Krishnamurti people — because anybody who has been interested in Krishnamurti sooner or later is bound to find his way towards me, because where Krishnamurti leaves them, I can take their hand and lead them into the innermost shrine of truth. You can say my connection with Krishnamurti is that Krishnamurti has prepared the ground for me. He has prepared people intellectually for me; now it is my work to take those people deeper than intellect, to the heart; and deeper than the heart, to the being. Our work is one. Krishnamurti is dead, but his work will not be dead until I am dead. His work will continue. References: What Osho said about J Krishnamurti and his work on his death.
https://medium.com/devansh-mittal/oshos-views-on-j-krishnamurthy-895a742e2eac
['Devansh Mittal']
2019-10-07 14:17:02.487000+00:00
['Spirituality', 'J Krishnamurthy', 'Osho', 'Psychology', 'Philosophy']
Clinic scheduling: 3 key points to help you quickly build the right online appointment and registration website!
A free cross-platform online appointment scheduling system that supports both web and mobile devices.
https://medium.com/simplybooktw/%E8%A8%BA%E6%89%80%E6%8E%92%E7%A8%8B%E8%A6%8F%E5%8A%83-3-%E5%A4%A7%E9%87%8D%E9%BB%9E-%E5%B9%AB%E6%82%A8%E5%BF%AB%E9%80%9F%E6%89%93%E9%80%A0%E9%81%A9%E5%90%88%E7%9A%84%E7%B7%9A%E4%B8%8A%E9%A0%90%E7%B4%84%E6%8E%9B%E8%99%9F%E7%B6%B2%E7%AB%99-bbcb8652c482
['Simplybook.Me']
2020-12-08 09:10:29.392000+00:00
['Simplybookrecommend', 'Simplybook', '五分鐘打造專屬預約系統', 'Medical', 'Productivity']
Cypress and Mobile Apps?. Cypress.io + React Native Web + Pareto…
What mobile app testing feels like. Ow. (Pen and paper? Seriously? :) ) Photo by freestocks on Unsplash It is tricky to set up automated testing of mobile apps. Maybe you’re on a small project and your Jest + Enzyme unit tests aren’t giving you the ROI you want. Maybe you want to test network error scenarios or time-sensitive logic, but don’t have the means to do so in your current app framework. Your friends in the web-development world tell you about Cypress, but they don’t have a React-Native mobile app to test. There’s a way to bridge the gap that doesn’t require rearchitecting your app. In this article, I lay out how to apply the best of web app testing to your React-Native mobile app, with a few tips and tricks along the way. Cypress (Cypress is a development-oriented web-app end-to-end and integration testing tool. Read more in the Cypress.io docs) When selecting an e2e testing solution for web apps, we face the question “should I choose a Selenium-based tool, or should I choose Cypress?” This is a false-dichotomy — though they both test web apps, they solve different problems. Listen to the answer the Cypress.io docs gives in the FAQ section: … Cypress may not be able to give you 100% coverage without you changing anything, but that’s okay. Use different tools to test the less accessible parts of your application, and let Cypress test the other 99%. (From Cypress.io FAQ) If you’re a fan of the Pareto Principle (“20% effort, 80% results” more or less), you’ll start to see the appeal of Cypress. If your requirements ask for that last 20% of the result (cross-platform/cross-browser, cross-origin, multi-tab, etc.), no one is stopping you from picking up Selenium to cover the cases Cypress can’t address. (I’ve found that software testing has more to do with economics and ROI than software, but that’s a separate article) In short: Cypress is about getting more ROI from your tests. (Not to mention the powerful mocking abilities Cypress unlocks — personally love the network request-response mocking features) Wouldn’t it be nice if you could get the same philosophy and powers when testing your mobile apps?
https://medium.com/javascript-in-plain-english/easy-mobile-app-automated-tests-509e9cde311f
['James Fulford']
2020-04-13 02:13:13.223000+00:00
['Mobile App Development', 'JavaScript', 'Software Development', 'Expo', 'React Native']
In Defense of Very Long Novels
Photo by Ryan Graybill on Unsplash This past week I’ve read two, seemingly polar opposite LitHub articles. The first was “In Praise of Difficult Novels” by Will Self, which argues for a return of the High Modernist movement in current literary fiction. The second was “On the Very Contemporary Art of Flash Fiction” by John Dufresne, which explains the opportunities, especially in respect to writing on the Internet, of flash fiction. I don’t judge a piece of writing based on its length. What’s important is what the author is able to convey to readers within the limitations of their form. This is why flash fiction can be so brilliant, not only for its accessibility and its reflection of the Twitter age as Dufresne points out, but also for great flash fiction’s profound ability to capture a moment in all its strange singularity. Medium has been a good platform for finding flash fiction, and I assure you a quick search will not leave you disappointed. That being said, I think this standard should apply to all forms of fiction. What I want to argue is that the long novels have been unfairly rejected or made taboo by readers. However, I also want to push back against Will Self’s intellectual-nostalgia conception of complexity in literature. I think the High Modernist writers (e.g. Joyce) made enormous progress in developing the form of long novels, but I think the focus of the discussion should be on which of their techniques were good for readers and communication, rather than what only and sometimes exclusively makes sense for writers like obscure allusions and stream of consciousness writing. I once heard Junot Díaz argue that one quality that separates short stories from novels is that novels can make a lot more mistakes. That is, a great short story needs to be perfectly clean, whereas readers will look over many of the weaknesses in a novel because readers are generally nicer to novelists they like. It seems then that reader satisfaction follows a fluctuating scale, where they are more judgemental of short stories, less for novels, and then become increasingly impatient as the form gets lengthier. I’m like this too. I think this impatience stems partially from the fact that we read novels for clear stories or for well thought out, well crafted themes and characters. Like Díaz said, we have a lot of trust in novelists. So, when a book starts stretching past four or five hundred pages, we start to wonder if the novelist really knows what they’re doing, or whether there is actually anything new or worthwhile left to read. My concern is that this worry causes readers to be wary of long novels in general, and to assume that length necessarily means excess. Although it’s clear that many long novels just need to be edited — Murakami’s 1Q84 and Yanagihara’s A Little Life are fair examples of this, where length takes away from their ideas. What then is a standard for a great long novel? I think the answer comes from thinking about what makes great pieces of writing in any form. For example, some of what makes great short or flash fiction is its ability to say so much in such little space. Essentially, an act of compression or refinement. I would argue that one central quality of long novels is their ability to create chaos out of order. The extreme example is Finnegan’s Wake, but this is also apparent in novels like War and Peace, Gravity’s Rainbow, and even Harry Potter or The Lord of the Rings. What’s interesting to me is that no one would criticize The Lord of the Rings for being too long. 
The reason why is that readers recognize that it takes a lot of space to build an entire world, often with a huge cast of characters and subplots. I think the same standard should apply to long pieces of literary fiction, where the author is trying to craft a whole new world to reflect all the complexities of the real world. The fact is, yes short fiction can encapsulate complexity, but can it immerse you in it? Can it make you feel that complexity physically? An obvious objection to literature’s history of long novels is its very male pretentiousness and hyper-intellectualism. This is completely fair, and I’ve been a contributor to this issue with the pieces I‘ve written on my page. I think it’s really unfortunate that some of the prominent writers that have taken on the challenge of long novels are very pretentious and inaccessible but I don’t agree that that’s the fault of the form. I don’t believe in Will Self’s argument of “long novels are important because the Modernist project was so beautiful because they used X and Y techniques…” Yes, maybe long novels are not for you but maybe it’s just because the long novels that we hype up as amazing #1s are necessary stepping stones full of mistakes that are required to fully actualize the form. (May I suggest Middlemarch?) Long novels have the potential to hold our entire world, and I hope that this potential is not lost on readers.
https://medium.com/literally-literary/in-defense-of-very-long-novels-3b9df2fc3e9c
['Xi Chen']
2018-09-28 13:10:28.308000+00:00
['Reading', 'Books', 'Essay', 'Culture', 'Literally Literary']
Get with the algorithm: Facebook’s News Feed Changes
Get with the algorithm: Facebook’s News Feed Changes We Are Social hosted a talk on Facebook’s recent News Feed change announcement. Here’s a quick summary of some of the key themes that came out of the discussion. The end of organic reach? Facebook’s News Feed announcement may have come as a surprise to many but organic reach has been dropping off in recent years. We Are Social’s Chief Strategy Officer, Mobbie Nazir, says on average their clients are seeing an organic reach level of around 4%. Lauren Davey, Head of Social Media & Display at Barclaycard Business said that the company doesn’t post any organic content on Facebook — only paid posts. She said that marketers need to stop seeing social media as a free commodity and see it as another paid marketing channel. I agree with this somewhat — but you can still get great results on other channels such as Twitter and Instagram without putting a budget behind your posts — creativity is key. However, you do still need to invest in a great social media manager to make this work. No more ‘Tag a mate’ content Facebook pages like LADbible have traditionally used ‘Tag a mate’ posts to quickly gain high reach and engagement levels. As part of the newsfeed changes — Facebook’s algorithms will no longer favour these types of posts. LADbible have always had a Facebook-first approach — it all started as a Facebook page, even before they had a website and they now have around 150 employees and billions of views per week. Peter Heneghan, Head of Communications at LADbible says they have diversified the types of content they share — moving towards more ‘meaningful’ content. They recently polled their followers on what topics are the most important to them — mental health came out on top — so they’ve created content sparking the debate around mental health — crucially, targeting young men. Meaningful Content ‘Meaningful Content’ was the buzz phrase of the morning. Many organisations pump out crap branded content for content’s sake, said Leo Ryan, Vice President of Customer Success (EMEA) at Spredfast. Brands must carefully consider the content they create and ask themselves if it’s actually interesting to the people it’s aimed at. The ultimate goal of creating meaningful content is creating meaningful conversations, it’s all about quality over quantity. Conversations & customer care Yes — organic reach is declining — but we need to put less emphasis on reach. Direct conversations with customers are by far the most engaging form of social media. Make sure your brand is ready to chat to its followers - providing them with a great customer experience will seriously build a brand’s reputation. Nobody really knows what impact these changes will have on social media marketing, all we can do is predict. Personally I think it’s important (and brave) that Facebook want people to spend less time on the platform. Social media has a powerful grip on many of us — having both a positive and negative effect on our lives. Recent research reveals the negative impact it is having on young people’s mental health — it’s important that the platforms act responsibly knowing this. Facebook’s most important asset is its users — keep them happy or risk losing them.
https://medium.com/confab-social/get-with-the-algorithm-facebooks-news-feed-changes-36f8e022b23f
['Joanna Ayre']
2018-02-06 12:33:21.759000+00:00
['Algorithms', 'Facebook', 'Content Strategy', 'Social Strategy', 'Social Media']
Predicting StockX Sneaker Prices With Machine Learning
The Footwear industry consists of companies engaged in the manufacturing of footwear such as dress shoes, slippers, boots, galoshes, sandals and athletic and trade related footwear; however, the most lucrative sector of this industry is collectible sneakers. The rise of marketplace apps like StockX and GOAT, alongside the proliferation of social media sites where you’re just one message away from turning a rare pair of trainers into cash, mean that more people are selling their shoes than ever before. The global sneaker resale market has been valued at over $2 billion, while the right pair of kicks can go for over $10,000 💸. Moreover, the massive margin of profit for each shoe makes the resale market attractive to those who would like to make some extra cash, given that in the past year, the average profit margin in the sneaker industry was 42.5%. While there is plenty of money to be made, it can be risky to buy a shoe due to the volatile nature of each shoe. Sneakers are like stocks with their resale price constantly changing from day to day. Thus, I developed this web application to predict the price of a given shoe based on factors such as date, shoe size, buyer region, and more. This tool resolves the issue of knowing which sneaker is worthwhile and when to buy it. As a “sneakerhead” and reseller myself, I know that this program will have lots of value in the community. For in-depth details on this project, check out my GitHub Repo. Getting Started Installation Clone this repo, create a blank Anaconda environment, and install the requirements file. $ git clone # Clone the repo$ git clone https://github.com/lognorman20/stockx_competiton # Create new environment called ‘stockx-env’ conda create -n stockx-env python=3.8 # Activate the environment we just made conda activate stockx-env # Install the requirements pip install -r requirements.txt Usage In your terminal, Cd to the repository, then to the application folder. Run this program using the command below. Make sure to run the app from the `application/` directory. After running it, click on the link provided in the terminal. cd application python app.py Understanding the Data The data I used is from StockX’s data competition in 2019. Here’s a description of the data from StockX: “The data we’re giving you consists of a random sample of all Off-White x Nike and Yeezy 350 sales from between 9/1/2017 (the month that Off-White first debuted “The Ten” collection) and the present. There are 99,956 total sales in the data set; 27,794 Off-White sales, and 72,162 Yeezy sales. The sample consists of U.S. sales only. To create this sample, we took a random, fixed percentage of StockX sales (X%) for each colorway, on each day, since September 2017. So, for each day the Off-White Jordan 1 was on the market, we randomly selected X% of its sale from each day. (It’s not important to know what X is; all that matters is that it’s a random sample, and that the same fixed X% of sales was selected from every day, for every sneaker). Every row in the spreadsheet represents an individual StockX sale. There are no averages or order counts; this is just a random sample of daily sales data.” I did some exploratory data analysis and made some visuals. You can check out my EDA notebook on the GitHub repo: Fig. 1: The Average Daily Sale Price from 2017 to 2019 Fig. 2: The Average Sale Price by State Fig. 3: The Average Sale Price by Sneaker Name Fig. 4: Coorleations between each feature Fig. 5: Sale Price Distribution of Off-White Sneakers Fig. 
Fig. 6: Sale Distribution of Yeezy Sneakers Fig. 7: The Most Popular Shoe Sizes Fig. 8: The Most Popular Sneakers Fig. 9: Best Selling Sneaker Retail Prices Development Data Cleaning The data that StockX gave me was not very messy. Here's what I did: Changed 'order date' dtype Changed 'release date' dtype Removed '-' from sneaker name Removed '$' and comma from sale price Removed '$' from retail price Renamed columns to get rid of spaces Converted dates into numerical values Converted categorical data to numerical using OneHotEncoding Model Building To begin, I split the data into train and test sets with an 80/20 split. I selected three models: Random Forest Regressor because it has the power to handle a large data set with higher dimensionality, provides higher accuracy through cross validation, is commonly used when analyzing the stock market due to its random nature, and each tree draws a random sample from the original data set when generating its splits, adding a further element of randomness that prevents overfitting. XGBoost because I have a large number of training examples given that this dataset has about 100,000 rows. Therefore, it should have plenty of data to learn from and apply gradient boosting. This dataset also has a mix of categorical and numerical features, which XGBoost tends to do well with. Decision Tree Regressor as a baseline model to compare the others to. Model Performance Since I am trying to predict an exact value, I decided to use mean squared error to measure the accuracy of each model. I was expecting XGBoost to perform the best due to its gradient boosting methods; however, the random forest regressor was able to outperform it. Decision Tree Accuracy (Baseline): 0.97284 XGBoost Test Accuracy: 0.98225 RandomForest Test Accuracy: 0.98452 Model with best accuracy: RandomForest The highest performing model was the RandomForestRegressor with an accuracy of 98.5%. Not bad. Productionization In this step, I pickled my model and saved it into a callable object to be used to create a basic Flask application. After that, I struggled to summon my knowledge of HTML and CSS from my 6th grade tech class to create a simple front-end website for my model to be hosted on. I inserted my model into the web application and the rest is history! (Check out the demo on the GitHub page.) Reflection Real World Application This project can be applied in several ways. 1. Helping to decide when to buy a sneaker by predicting its price at any given time 📈 2. Knowing which factors influence the sale price of each sneaker can help businesses focus their shoe buying process on those that have the most potential 👍 3. Sneaker businesses can see a timeline of when sneaker prices are high or low to know when to buy/sell 📆 4. Know if your friend got ripped off for buying their shoes too early or too late! 🤣 What I Learned All in all, this project gave me better insight into the worlds of machine learning and sneakers. If I were to do this project again, I would choose a different way to handle categorical variables other than OneHotEncoding, such as `pd.get_dummies`, to reduce the number of features. When I was creating the Flask application, it was difficult to recreate the large number of features from my training data in a real-world application, and using a different method would resolve this issue. I was surprised that Off-White sneakers typically sold for much more than Yeezy sneakers. From my experience as a sneaker reseller, this threw me off guard.
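To make the encoding point above concrete, here is a minimal sketch contrasting the OneHotEncoding route used in the cleaning step with the `pd.get_dummies` alternative; the column names are taken from the dataset description, but the snippet is illustrative rather than the exact code in the repo.

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Toy frame standing in for the StockX data; the real column names may differ slightly.
df = pd.DataFrame({
    "Sneaker Name": ["Adidas-Yeezy-Boost-350", "Nike-Air-Jordan-1-Off-White"],
    "Buyer Region": ["California", "New York"],
    "Shoe Size": [9.5, 10.0],
})

# Option 1: scikit-learn's OneHotEncoder (the route described in the cleaning steps).
encoder = OneHotEncoder(handle_unknown="ignore")
encoded = encoder.fit_transform(df[["Sneaker Name", "Buyer Region"]]).toarray()

# Option 2: pandas get_dummies, the alternative mentioned above.
dummies = pd.get_dummies(df, columns=["Sneaker Name", "Buyer Region"])
print(encoded.shape, dummies.shape)
```

Whichever route is taken, keeping the fitted encoder (or the training-time column list) around makes it much easier to rebuild the same feature layout inside the Flask app later.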
Moreover, I was surprised to see that sneakers at certain retail prices typically resold better than others. Visualizing the data helped me notice these trends, and I now know how I can apply them. Contact Feel free to reach out to me on LinkedIn and follow my work on GitHub! LinkedIn GitHub
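For readers who want to reproduce the modeling step, here is a rough, self-contained sketch of the workflow described above (80/20 split, a Decision Tree baseline, and a Random Forest). The file name and target column are placeholders rather than the exact ones used in the repo, and XGBoost is left out to keep the dependencies minimal.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

# Placeholder file name; the cleaned competition CSV in the repo may be named differently.
df = pd.read_csv("stockx_sales_clean.csv")

X = df.drop(columns=["Sale Price"])   # numeric + one-hot encoded features
y = df["Sale Price"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "DecisionTree (baseline)": DecisionTreeRegressor(random_state=42),
    "RandomForest": RandomForestRegressor(n_estimators=100, random_state=42),
    # XGBoost could be added via xgboost.XGBRegressor if the library is installed.
}

for name, model in models.items():
    model.fit(X_train, y_train)
    # .score() reports R^2 on the held-out set, which is most plausibly how the
    # headline "accuracy" numbers above were computed.
    print(name, round(model.score(X_test, y_test), 5))
```

Pickling whichever model scores best (for example with `joblib.dump`) is then all the Flask app needs in order to serve predictions.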
https://medium.com/swlh/predicting-stockx-sneaker-prices-with-machine-learning-ec9cb625bec0
['Logan Norman']
2020-10-05 03:23:55.296000+00:00
['Machine Learning', 'Programming', 'Sneakers', 'Predictions', 'Python']
For Love
For Love I Hope You Can Feel It… Photo by Adrian Swancar on Unsplash It’s amazing how I saw you that night. And it was like the whole world stopped when your eyes said “Hello.” Your gaze said “Remember me.” And let’s dance on life through eternity. I could cry a tear for the love in your eyes. As my once heart ache turned into loving sighs. I remember you, a love I do not know. For without you, I feel my heart in a chokehold. And the world thought we were crazy. I don’t know. Maybe a little… maybe. With a cup of tea and some laughs on a high. I’ll remember you when I saw you the first time. Many lifetimes ago…
https://medium.com/scribe/for-love-3c7638b49d8a
['Q. Imagine']
2020-12-16 09:42:17.311000+00:00
['Poetry', 'Poems On Medium', 'Writing', 'Love', 'Poem']
Helping Those Who Help Others — How We Updated This Nonprofit’s Site for Easier Use and Clearer Messaging
Simply put, CDA wants to make the world a better place, and they want to help others make the world a better place by ensuring their relief efforts do not have unintended negative consequences. Through their website, CDA offers publications, case studies, toolkits and guides relating to areas such as Responsible Business and Conflict Sensitivity, but they also provide in-person advisory services and trainings to nonprofits, NGOs and corporations in these same areas of expertise. The Problem Though clients give glowing reviews of CDA's services, people were less than enthusiastic about their website. Along with its outdated appearance and wordy subpages, the site was not user-friendly. For example, the search function for CDA's publications, the main driver of traffic to the site, was very difficult to find and navigate. Furthermore, CDA's website did not reflect the recent restructuring of CDA's organization, namely their Collaborative Learning branch and their Advisory Services. The Solution Ideometry conducted extensive interviews with current CDA employees as well as internal and external stakeholders to get an accurate understanding of what the exact needs were for the new website. We compiled this information to create a series of user journeys, and these user journeys guided the restructuring of CDA's website. The new website highlights those aspects of CDA users most want to see — the upcoming events, the recent publications, the blog posts — while educating them about new CDA project and service areas. It's also extremely user-friendly, with mobile-compatibility, modern design and clear calls to action. Most importantly, the back end of the website is user-friendly for CDA staff, so they can quickly update the content as needed. Ideometry even designed a new logo that CDA can use not only on its website, but on mailers, email headings and business cards. *** If you liked what you saw here, check out some of the other branding and creative campaigns we've done for a major credit union and a BBQ catering startup. Need help creating an amazing brand? Get in touch with us today.
https://medium.com/ideometry/helping-those-who-help-others-how-we-updated-this-nonprofits-site-for-easier-use-and-clearer-3b3cceeb09ea
[]
2017-10-24 14:43:01.694000+00:00
['Web Design', 'Web Development', 'Marketing', 'Nonprofit', 'Digital Marketing']
5 habits for coping with stress that are actually making your anxiety worse
By Amy Morin From a racing heartbeat to excessive worrying, anxiety feels awful. It affects you physically, cognitively, and emotionally. The symptoms can make it difficult to function. Sometimes you can pinpoint where the anxiety is coming from, like when you’re anxious about an upcoming root canal. At other times, you might feel anxious about everything — debt, relationships, work, and your health. Amy Morin. Courtesy of Amy Morin When your anxiety levels are high, you might feel desperate to do whatever it takes to feel better fast. But the things you reach for to get instant relief might actually be making your anxiety worse. As a therapist, I see it happen all the time. People work really hard to help themselves feel better. But much of the time, their efforts aren’t just counterproductive — they’re downright harmful. Here are five common mistakes that will make your anxiety worse, even though you may think they’re making you feel better: 1. Avoiding the things that make you feel anxious On the surface, avoidance seems like a helpful response to anxiety. If you feel anxious about your financial situation, you might ignore your bills and avoid looking at your bank account. Avoiding the reality of your mounting debt and dwindling bank account will keep your anxiety at bay — at least temporarily. As your financial problems mount, however, your anxiety will grow. Research backs up the fact that the more you avoid anxiety-provoking situations, the more anxiety-provoking they become. And avoidance causes you to lose confidence in your ability to face these fears. So while avoidance might give you a quick moment of relief, the act of dodging problems worsens anxiety over time. 2. Scrolling through your phone before you go to sleep Clients who come into my therapy office often say things like, “My mind just won’t shut off at night” or, “As soon as I try to go to sleep, my brain just reminds me of all the things I need to start worrying about.” In an effort to drown out the noise in their heads, many of them scroll through their phones before they fall asleep. And while looking at social media for a few minutes might feel like it quiets their brain for a minute, staring at a screen actually interferes with sleep and leads to more anxiety. In fact, just having a smartphone in the same room while you’re sleeping can increase your anxiety. A 2018 study published in “Computers in Human Behavior” found that after just one week of not sleeping with a smartphone in the bedroom, individuals reported less anxiety, better quality sleep, and improved well-being. So you might want to try it as an experiment of your own. For one week, leave your smartphone in the kitchen when you go to sleep. See if you feel better. A whopping 94% of participants in the study decided to continue leaving their phones in another room when they slept because they felt so much better. 3. Venting to your friends and family When you’ve had a rough day, you might think you need to “get your feelings out.” So you may be eager to share with your family and friends all the things that went wrong. After all, you might erupt like a pressure cooker if you stuff your feelings, right? Well, that’s actually a misconception. The more you talk about things that cause you distress, the more you keep yourself in a heightened state of arousal. 
A 2013 study published in the journal "Cyberpsychology, Behavior, and Social Networking" found that venting backfires — especially in people with perfectionist tendencies (which are common in individuals with anxiety disorders). The authors of the study say people are better off focusing on the positive aspects of their day. Recounting what went right, rather than dwelling on what went wrong, can boost mood and decrease anxiety. 4. Thinking about your problems There's a common misconception that the more you think about a problem, the more likely you are to develop a solution. So many anxious people sit around running zillions of "what if…" scenarios through their heads just to make sure they're prepared. But thinking longer and harder isn't necessarily the best way to solve a problem. In fact, letting your brain work through a problem in the background could be a better option. Researchers have found an "incubation period" might be the key to solving problems and making your best decisions. Studies show people make better decisions after they give their brains a break from dwelling on a problem. So whether you're worried about a specific issue or dwelling on an anxiety-provoking problem, distract yourself for a bit. Give the unconscious part of your brain an opportunity to work through the issue in the background. 5. Self-medicating with drugs or alcohol Reaching for drugs or alcohol at the end of a long day might seem like a helpful way to relax your anxious brain. But self-medicating usually backfires. Despite the repercussions, self-medicating is a popular coping strategy. Studies suggest that almost 25% of individuals with anxiety disorders try to mask their symptoms with substances. Using drugs and alcohol to cope with anxiety has been linked to a variety of adverse outcomes, ranging from higher levels of stress and dysfunction to lower quality of life and increased physical health problems. So while substances might take the edge off for a minute, they contribute to longer-term problems. And these problems fuel anxiety, making it a cycle that can be difficult to break. How to get help for anxiety If you struggle with anxiety and have gotten caught up in habits that are making you feel worse, get professional help. Anxiety is one of the most treatable yet under-treated conditions out there. Cognitive behavioral therapy is an effective therapeutic strategy that could reduce your symptoms and help you break free from the unhelpful habits that are keeping you stuck. Medication may be an option as well. Talk to your physician or reach out to a mental health professional so you can break free from the habits that are keeping you stuck in a cycle of anxiety. This article was originally published on Business Insider on July 14, 2020. For more great stories, visit Business Insider's homepage.
https://medium.com/business-insider/5-habits-for-coping-with-stress-that-are-actually-making-your-anxiety-worse-162c5f33cc9b
['Business Insider']
2020-12-25 17:03:29.408000+00:00
['Anxiety', 'Stress', 'Mental Health', 'Coping Strategies', 'Screentime']
【Summary】Progress Made in Dialog Management Model Research
This article is the result of the collaborative efforts of the following experts and researchers in the Intelligent Robot Conversational AI Team: Yu Huihua and Jiang Yixuan from Cornell University as well as Dai Yinpei (nicknamed Yanfeng), Tang Chengguang (Enzhu), Li Yongbin (Shuide), and Sunjian (Sunjian) from Alibaba DAMO Academy. Many efforts have been made to develop highly intelligent human-machine dialog systems since research began on artificial intelligence (AI). Alan Turing proposed the Turing test in 1950[1]. He believed that machines could be considered highly intelligent if they passed the Turing test. To pass this test, the machine had to communicate with a real person so that this person believed they were talking to another person. The first-generation dialog systems were mainly rule-based. For example, the ELIZA system[2] developed by MIT in 1966 was a psychological medical chatbot that matched methods using templates. The flowchart-based dialog system popular in the 1970s simulates state transition in the dialog flow based on the finite state automaton (FSA) model. These machines have transparent internal logic and are easy to analyze and debug. However, they are less flexible and scalable due to their high dependency on expert intervention. Second-generation dialog systems driven by statistical data (hereinafter referred to as the statistical dialog systems) emerged with the rise of big data technology. At that time, reinforcement learning was widely studied and applied in dialog systems. A representative example is the statistical dialog system based on the Partially Observable Markov Decision Process (POMDP) proposed by Professor Steve Young of Cambridge University in 2005[3]. This system is significantly superior to rule-based dialog systems in terms of robustness. It maintains the state of each round of dialog through Bayesian inference based on speech recognition results and then selects a dialog policy based on the dialog state to generate a natural language response. With a reinforcement learning framework, the POMDP-based dialog system constantly interacts with user simulators or real users to detect errors and optimize the dialog policy accordingly. A statistical dialog system is a modular system not highly dependent on expert intervention. However, it is less scalable, and the model is difficult to maintain. In recent years, with breakthroughs in deep learning in the image, voice, and text fields, third-generation dialog systems built around deep learning have emerged. These systems still adopt the framework of the statistical dialog systems, but apply a neural network model in each module. Neural network models have powerful representation and language classification and generation capabilities. Therefore, models based on natural language are transformed from generative models, such as Bayesian networks, into deep discriminative models, such as Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), and Recurrent Neural Networks (RNNs)[5]. The dialog state is obtained by directly calculating the maximum conditional probability instead of the Bayesian a posteriori probability. The deep reinforcement learning model is also used to optimize the dialog policy[6]. In addition, the success of end-to-end sequence-to-sequence technology in machine translation makes end-to-end dialog systems possible. 
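To give a feel for what this discriminative formulation looks like in practice, here is a deliberately tiny sketch of a neural state tracker that encodes the dialog history and picks the state with the maximum conditional probability; the dimensions and the fixed state set are hypothetical, not taken from any of the systems discussed in this article.

```python
import torch
import torch.nn as nn

class TinyStateTracker(nn.Module):
    """Encode the dialog history with a GRU and classify the dialog state
    with a softmax, i.e. pick argmax_s P(s | history) directly."""
    def __init__(self, vocab_size=1000, emb_dim=32, hidden=64, num_states=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_states)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        _, h = self.encoder(self.embed(token_ids))
        return self.classifier(h.squeeze(0))  # logits over candidate states

tracker = TinyStateTracker()
fake_history = torch.randint(0, 1000, (2, 12))        # two toy dialog histories
state_probs = torch.softmax(tracker(fake_history), dim=-1)
print(state_probs.argmax(dim=-1))             # maximum-conditional-probability state
```

Real trackers are of course far richer, but the pattern — encode the history, score candidate states, take the argmax — is the one the rest of this article builds on.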
Facebook researchers proposed a task-oriented dialog system based on memory networks[4], presenting a new way forward for research on end-to-end task-oriented dialog systems within the third generation. In general, third-generation dialog systems are better than second-generation dialog systems, but a large amount of tagged data is required for effective training. Therefore, improving the cross-domain migration and scalability of the model has become an important area of research. Common dialog systems are divided into the following three types: chat-, task-, and Q&A-oriented. In a chat-oriented dialog, the system generates interesting and informative natural responses to allow human-machine dialog to proceed[7]. In a Q&A-oriented dialog, the system analyzes each question and finds a correct answer from its libraries[8]. A task-oriented dialog (hereinafter referred to as a task dialog) is a task-driven multi-round dialog. The machine determines the user's requirements through understanding, active inquiry, and clarification, makes queries by calling an Application Programming Interface (API), and returns the correct results. Generally, a task dialog is a sequential decision-making process. During the dialog, the machine updates and maintains the internal dialog state by understanding user statements and then selects the optimal action based on the current dialog state, such as determining the requirement, querying restrictions, and providing results. Task-oriented dialog systems are divided by architecture into two categories. One type is a pipeline system that has a modular structure[5], as shown in Figure 1. It consists of four key modules: Natural Language Understanding (NLU): Identifies and parses a user's text input to obtain semantic tags that can be understood by computers, such as slot-values and intentions. Dialog State Tracking (DST): Maintains the current dialog state based on the dialog history. The dialog state is the cumulative meaning of the dialog history, which is generally expressed as slot-value pairs. Dialog Policy: Outputs the next system action based on the current dialog state. The DST module and the dialog policy module are collectively referred to as the dialog manager (DM). Natural Language Generation (NLG): Converts system actions to natural language output. This modular system structure is highly interpretable, easy to implement, and applied in most practical task-oriented dialog systems in the industry. However, this structure is not flexible enough. The modules are independent of each other and difficult to optimize together. This makes it difficult to adapt to changing application scenarios. Additionally, due to the accumulation of errors between modules, the upgrade of a single module may require the adjustment of the whole system. Figure 1. Modular structure of a task-oriented dialog system[41] Another implementation of a task-oriented dialog system is an end-to-end system, which has been a popular field of academic research in recent years[9–11].
This type of structure trains an overall mapping relationship from the natural language input on the user side to the natural language output on the machine side. It is highly flexible and scalable, reducing labor costs for design and removing the isolation between modules. However, the end-to-end model places high requirements on the quantity and quality of data and does not provide clear modeling for processes such as slot filling and API calling. This model is still being explored and is as yet rarely applied in the industry. Figure 2. End-to-end structure of a task-oriented dialog system[41] With higher requirements on product experience, actual dialog scenarios become more complex, and DM needs to be further improved. Traditional DM is usually built in a clear dialog script system (searching for matching answers, querying the user intent, and then ending the dialog) with a pre-defined system action space, user intent space, and dialog body. However, due to unpredictable user behaviors, traditional dialog systems are less responsive and have greater difficulty dealing with undefined situations. In addition, many actual scenarios require cold start without sufficient tagged dialog data, resulting in high data cleansing and tagging costs. DM based on deep reinforcement learning requires a large amount of data for model training. According to the experiments in many academic papers, hundreds of complete sessions are required to train a dialog model, which hinders the rapid development and iteration of dialog systems. To solve the limitations of traditional DM, researchers in academic and industry circles have begun to focus on how to strengthen the usability of DM. Specifically, they are working to address the following shortcomings in DM: Poor scalability Insufficient tagged data Low training efficiency I will introduce the latest research results on each of these aspects. Cutting-Edge Research on Dialog Manager Shortcoming 1: Poor Scalability As mentioned above, DM consists of the DST and dialog policy modules. The most representative traditional DST is the neural belief tracker (NBT) proposed by scholars from Cambridge University in 2017[12]. NBT uses neural networks to track the state of complex dialogs in a single domain. By using representation learning, NBT encodes system actions in the previous round, user statements in the current round, and candidate slot-value pairs to calculate semantic similarity in a high-dimensional space and detect the slot value output by the user in the current round. Therefore, NBT can identify slot values that are not in the training set but semantically similar to those in the set by using the word vector expression of the slot-value pair. This avoids the need to create a semantic dictionary. As such, the slot values can be extended. Later, Cambridge scholars further improved NBT[13] by changing the input slot-value pair to a domain-slot-value triple. The recognition results of each round are accumulated using model learning instead of manual rules. All data is trained by the same model. Knowledge is shared among different domains, leaving the total number of parameters unchanged as the number of domains increases. Among traditional dialog policy research, the most representative is the ACER-based policy optimization proposed by Cambridge scholars[6]. By applying the experience replay technique, the authors tried both the trust region actor-critic model and the episodic natural actor-critic model.
The results proved that the deep AC-based reinforcement learning algorithms were the best in sample utilization efficiency, algorithm convergence, and dialog success rate. However, traditional DM still needs to be improved in terms of scalability, specifically in the following three respects: How to deal with changing user intents. How to deal with changing slots and slot values. How to deal with changing system actions. Changing User Intents If a system does not take the user intent into account, it will often provide nonsensical answers. As shown in Figure 3, the user’s “confirm” intent is not considered. A new dialog script must be added to help the system deal with this problem. Figure 3. Example of a dialog with new intent[15] The traditional model outputs a fixed one-hot vector of the old intent category. Once a new user intent not in the training set appears, vectors need to be changed to include the new intent category, and the new model needs to be retrained. This makes the model less maintainable and scalable. One paper[15] proposes a teacher-student learning framework to solve this problem. In the teacher-student training architecture, the old model and logical rules for new user intents are used as the teacher, and the new model as a student. This architecture uses knowledge distillation technology. Specifically, for the old intent set, the probability output of the old model directly guides the training of the new model. For the new intent, the logical rules are used as new tagged data to train the new model. In this way, the new model no longer needs to interact with the environment for re-training. The paper presented the results of an experiment performed on the DSTC2 dataset. The confirm intent is deliberately removed and then added as a new intent to the dialog body to verify whether the new model is adaptable. Figure 4 shows the experiment result. The new model (Extended System), the model containing all intents (Contrast System), and the old model are compared. The result shows that the new model achieves satisfactory success rates in extended new intent identification at different noise levels. Figure 4. Comparison of various models at different noise levels Of course, systems with this architecture need to be further trained. CDSSM[16], a proposed semantic similarity matching model, can identify extended user intents without tagged data and model re-training. Based on the natural description of user intents in the training set, CDSSM directly learns an intent embedding encoder and embeds the description of any intent into a high dimensional semantic space. In this way, the model directly generates corresponding intent embedding based on the natural description of the new intent and then identifies the intent. Many models that improve scalability mentioned below are designed with similar ideas. Tags are moved from the output end of the model to the input end, and neural networks are used to perform semantic encoding on tags (tag names or natural descriptions of the tags) to obtain certain semantic vectors and then match their semantic similarity. A separate paper[43] provides another idea. Through man-machine collaboration, manual customer services are used to deal with user intents not in the training set after the system is launched. This model uses an additional neural parser to determine whether manual customer service is required based on the dialog state vector extracted from the current model. 
If it is, the model distributes the current dialog to online customer service. If not, the model makes a prediction. The parser obtained through data learning can determine whether the current dialog contains a new intent, and responses from customer service are regarded as correct by default. This man-machine collaboration mechanism effectively deals with user intents not found in the training set during online testing and significantly improves the accuracy of the dialog. Changing Slots and Slot Values In dialog state tracking involving multiple or complex domains, dealing with changing slots and slot values has always been a challenge. Some slots have non-enumerative slot values, for example, the time, location, and user name. Their slot value sets, such as flights or movie theater schedules, change dynamically. In traditional DST, the slot and slot value set remain unchanged by default, which greatly reduces the system scalability. Google researchers[17] proposed a candidate set for slots with non-enumerative slot values. A candidate set is maintained for each slot. The candidate set contains a maximum of k possible slot values in the dialog and assigns a score to each slot value to indicate the user’s preference for the slot value in the current dialog. The system uses a two-way RNN model to find the value of a slot in the current user statement and then score and re-rank it with existing slot values in the candidate set. In this way, the DST of each round only needs to make a judgment on a limited slot value set, allowing us to track non-enumerative slot values. To track slot values not in the set, we can use a sequence tagging model[18] or a semantic similarity matching model such as the neural belief tracker[12]. The preceding are solutions for non-fixed slot values, but what about changing slots in the dialog body? In one paper[19], a slot description encoder is used to encode the natural language description of existing and new slots. The obtained semantic vectors representing the slot are sent with user statements as inputs to the Bi-LSTM model, and the identified slot values are output as sequence tags, as shown in Figure 5. The paper makes an acceptable assumption that the natural language description of any slot is easy to obtain. Therefore, a concept tagger applicable to multiple domains is designed, and the slot description encoder is simply implemented by the sum of simple word vectors. Experiments show that this model can quickly adapt to new slots. Compared with the traditional method, this method greatly improves scalability. Figure 5. Concept tagger structure With the development of sequence-to-sequence technology in recent years, many researchers are looking at ways to use the end-to-end neural network model to generate the DST results as a sequence. Common techniques such as attention mechanisms and copy mechanisms are used to improve the generation effect. In the famous MultiWOZ dataset for multi-domain dialogs, the team led by Professor Pascale Fung from Hong Kong University of Science and Technology used the copy network to significantly improve the recognition accuracy of non-enumerative slot values[20]. Figure 6 shows the TRADE model proposed by the team. Each time the slot value is detected, the model performs semantic encoding for different combinations of domains and slots and uses the result as the initial position input of the RNN decoder. The decoder directly generates the slot value through the copy network. 
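As a rough illustration of the copy-style generation just described — and emphatically not the actual TRADE implementation — the following toy decoder mixes a vocabulary distribution with a copy distribution over dialog-history tokens in a single decoding step; all sizes are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCopyDecoder(nn.Module):
    """One decoding step that mixes a vocabulary distribution with a
    'copy' distribution over dialog-history tokens, gated by p_gen."""
    def __init__(self, vocab_size=500, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.cell = nn.GRUCell(hidden, hidden)
        self.vocab_proj = nn.Linear(hidden, vocab_size)
        self.gate = nn.Linear(hidden, 1)

    def forward(self, prev_token, state, enc_outputs, history_ids):
        # enc_outputs: (batch, src_len, hidden); history_ids: (batch, src_len)
        state = self.cell(self.embed(prev_token), state)
        attn = F.softmax(torch.bmm(enc_outputs, state.unsqueeze(2)).squeeze(2), dim=-1)
        p_vocab = F.softmax(self.vocab_proj(state), dim=-1)
        p_gen = torch.sigmoid(self.gate(state))            # mixing coefficient
        p_copy = torch.zeros_like(p_vocab).scatter_add(1, history_ids, attn)
        return p_gen * p_vocab + (1 - p_gen) * p_copy      # final token distribution

dec = ToyCopyDecoder()
batch, src_len, hidden = 2, 7, 64
dist = dec(torch.zeros(batch, dtype=torch.long),       # previous token (e.g. BOS)
           torch.zeros(batch, hidden),                  # initial decoder state
           torch.randn(batch, src_len, hidden),         # encoded dialog history
           torch.randint(0, 500, (batch, src_len)))     # history token ids
print(dist.sum(dim=-1))    # each row sums to 1
```

The real model also feeds the domain-slot encoding in as the first decoder input and decodes several steps per slot, but the vocabulary/copy mixture above is the part that lets values absent from the training labels still be produced.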
In this way, both non-enumerative slot values and changing slot values can be generated by the same model. Therefore, slot values can be shared between domains, allowing the model to be widely used. Figure 6. TRADE model framework Recent research tends to view multi-domain DST as a machine reading and understanding task and transform generative models such as TRADE into discriminative models45. Non-enumerative slot values are tracked by a machine reading and understanding task like SQuAD[46], in which the text span in the dialog history and questions is used as the slot value. Enumerative slot values are tracked by a multi-choice machine reading and understanding task, in which the correct value is selected from the candidate values as the predicted slot value. By combining deep context words such as ELMO and BERT, these new models obtain the optimal results from the MultiWOZ dataset. Changing System Actions The last factor affecting scalability is the difficulty of pre-defining the system action space. As shown in Figure 7, when designing an electronic product recommendation system, you may ignore questions like how to upgrade the product operating system, but you cannot stop users from asking questions the system cannot answer. If the system action space is pre-defined, irrelevant answers may be provided to questions that have not been defined, greatly compromising the user experience. Figure 7. Example of a dialog where the dialog system encounters an undefined system action[22] In this case, we need to design a dialog policy network that helps the system quickly expand its actions. The first attempt to do this was made by Microsoft[21], who modifies the classic DQN structure to enable reinforcement learning in an unrestricted action space. The dialog task in this paper is a text game mission task. Each round of action is a single sentence, with an uncertain number of actions. The story varies with the action. The author proposed a new model, Deep Reinforcement Relevance Network (DRRN), which matches the current dialog state with optional system actions by semantic similarity matching to obtain the Q function. Specifically, in a round of dialog, each action text of an uncertain length is encoded by a neural network to obtain a system action vector with a fixed length. The story background text is encoded by another neural network to obtain a dialog state vector with a fixed length. The two vectors are used to generate the final Q value through an interactive function, such as dot product. Figure 8 shows the structure of the model designed in the paper. Experiments show that DRRN outperforms traditional DQN (using the padding technique) in the text games “Saving John” and “Machine of Death”. Figure 8. DRRN model, in which round t has two candidate actions, and round t+1 has three candidate actions In another paper[22], the author wanted to solve this problem from the perspective of the entire dialogue system and proposed the Incremental Dialogue System (IDS), as shown in Figure 9. IDS first encodes the dialog history to obtain the context vector through the Dialog Embedding module and then uses a VAE-based Uncertainty Estimation module to evaluate, based on the context vector, the confidence level used to indicate whether the current system can give correct answers. Similar to active learning, if the confidence level is higher than the threshold, DM scores all available actions and then predicts the probability distribution based on the softmax function. 
If the confidence level is lower than the threshold, the tagger is requested to tag the response of the current round (select the correct response or create a new response). The new data obtained in this way is added to the data pool to update the model online. With this human-teaching method, IDS not only supports learning in an unrestricted action space, but also quickly collects high-quality data, which is quite suitable for actual production. Figure 9. The Overall framework of IDS Shortcoming 2: Insufficient Tagged Data The extensive application of dialog systems results in diversified data requirements. To train a task-oriented dialog system, as much domain-specific data as possible is needed, but quality tagged data is costly. Scholars have tried to solve this problem in three ways: (1) using machines to tag data to reduce the tagging costs; (2) mining the dialog structure to use non-tagged data efficiently; and (3) optimizing the data collection policy to efficiently obtain high-quality data. Automatic Tagging To address the cost and inefficiency of manual tagging, scholars hope to use supervised learning and unsupervised learning to allow machines to assist in manual tagging. One paper[23] proposed the auto-dielabel architecture, which automatically groups intents and slots in the dialog data by using the unsupervised learning method of hierarchical clustering to automatically tag the dialog data (the specific tag of the category needs to be manually determined). This method is based on the assumption that expressions of the same intent may share similar background features. Initial features extracted by the model include word vectors, part-of-speech (POS) tags, noun word clusters, and Latent Dirichlet allocation (LDA). All features are encoded by the auto-encoder into vectors of the same dimension and spliced. Then, the inter-class distance calculated by the radial bias function (RBF) is used for dynamic hierarchical clustering. Classes that are closest to each other are merged automatically until the inter-class distance between the classes is greater than the threshold. Figure 10 shows the model framework. Figure 10. Auto-dialabel model In another paper[24], supervised clustering is used to implement machine tagging. The author views each dialog data record as a graph node and sees the clustering process as the process of identifying the minimum spanning forest. The model uses a support vector machine (SVM) to train the distance scoring model between nodes in the Q&A dataset through supervised learning. It then uses the structured model and the minimum subtree spanning algorithm to derive the class information corresponding to the dialog data as the hidden variable. It generates the best cluster structure to represent the user intent type. Dialog Structure Mining Due to the lack of high-quality tagged data for training dialog systems, finding ways to fully mine implicit dialog structures or information in the untagged dialog data has become a popular area of research. Implicit dialog structures or information contribute to the design of dialog policies and the training of dialog models to some extent. One paper[25] proposed to use unsupervised learning in a variational RNN (VRNN) to automatically learn hidden structures in dialog data. The author provides two models that can obtain the dynamic information in a dialog: Discrete-VRNN (D-VRNN) and Direct-Discrete-VRNN (DD-VRNN). 
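Before getting into the VRNN details, the automatic-labeling idea above can be approximated with off-the-shelf tools; the sketch below uses TF-IDF features and scikit-learn's hierarchical clustering with a distance threshold in place of the paper's autoencoded features and RBF distances, so it is only a rough analogue.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

utterances = [
    "book me a flight to boston",
    "i need a plane ticket to boston",
    "what's the weather tomorrow",
    "will it rain tomorrow",
]

# Simple stand-in for the paper's word-vector / POS / LDA features.
features = TfidfVectorizer().fit_transform(utterances).toarray()

# Merge clusters until the inter-cluster distance exceeds a threshold,
# mirroring the dynamic hierarchical clustering step described above.
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.3, linkage="average")
labels = clustering.fit_predict(features)
# Expect the two flight utterances and the two weather utterances to group
# together (the exact split depends on the threshold).
print(labels)
```

Cluster membership then only needs a human to name each group, which is the labor-saving step auto-dialabel is after.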
As shown in Figure 11, x_t indicates the t-th round of dialog, h_t indicates the hidden variable of the dialog history, and z_t indicates the hidden variable (one-dimensional one-hot discrete variable) of the dialog structure. The difference between the two models is that for D-VRNN, the hidden variable z_t depends on h_(t-1) , while for DD-VRNN, the hidden variable z_t depends on z_(t-1) . Based on the maximum likelihood of the entire dialog, VRNN uses some common methods of VAE to estimate the distribution of a posteriori probabilities of the hidden variable z_t . Figure 11. D-VRNN and DD-VRNN The experiments in the paper show that VRNN is superior to the traditional HMM method. VRNN also adds the dialog structure information to the reward function, supporting faster convergence of the reinforcement learning model. Figure 12 shows the transition probability of the hidden variable z_t in restaurants mined by D-VRNN. Figure 12. Dialog stream structure mined by D-VRNN from the dialog data related to restaurants CMU scholars[26] also tried to use the VAE method to deduce system actions as hidden variables and directly use them for dialog policy selection. This can alleviate the problems caused by insufficient predefined system actions. As shown in Figure 13, for simplicity, an end-to-end dialog system framework is used in the paper. The baseline model is an RL model at the word level (that is, a dialog action is a word in the vocabulary). The model uses an encoder to encode the dialog history and then uses a decoder to decode it and generate a response. The reward function directly compares the generated response statement with the real response statement. Compared with the baseline model, the latent action model adds a posterior probability inference between the encoder and the decoder and uses discrete hidden variables to represent the dialog actions without any manual intervention. The experiment shows that the end-to-end RL model based on latent actions is superior to the baseline model in terms of statement generation diversity and task completion rate. Figure 13. Baseline model and latent action model Data Collection Policy Recently, Google researchers proposed a method to quickly collect dialog data27: First, use two rule-based simulators to interact to generate a dialog outline, which is a dialog flow framework represented by semantic tags. Then, convert the semantic tags into natural language dialogs based on templates. Finally, rewrite the natural statements by crowdsourcing to enrich the language expressions of dialog data. This reverse data collection method features high collection efficiency and complete and highly available data tags, reducing the cost and workload of data collection and processing. Figure 14. Examples of dialog outline, template-based dialog generation, and crowdsourcing-based dialog rewrite This method is a machine-to-machine (M2M) data collection policy, in which a wide range of semantic tags for dialog data are generated, and then crowdsourced to generate a large number of dialog utterances. However, the generated dialogs cannot cover all the possibilities in real scenarios. In addition, the effect depends on the simulator. In relevant academic circles, two other methods are commonly used to collect data from dialog systems: human-to-machine (H2M) and human-to-human (H2H). The H2H method requires a multi-round dialog between the user, played by a crowdsourced staff member, and the customer service personnel, played by another crowdsourced staff member. 
The user proposes requirements based on specified dialog targets such as buying an airplane ticket, and the customer service staff annotates the dialog tags and makes responses. This mode is called the Wizard-of-Oz framework. Many dialog datasets, such as WOZ[5] and MultiWOZ[28], are collected in this mode. The H2H method helps us get dialog data that is the most similar to that of actual service scenarios. However, it is costly to design different interactive interfaces for different tasks and to clean up incorrect annotations. The H2M data collection policy allows users and trained machines to interact with each other. This way, we can directly collect data online and continuously improve the DM model through RL. The famous DSTC2&3 dataset was collected in this way. The performance of the H2M method depends largely on the initial performance of the DM model. In addition, the data collected online has a great deal of noise, which results in high clean-up costs and affects the model optimization efficiency. Shortcoming 3: Low Training Efficiency With the successful application of deep RL in the Go game, this method is also widely used in the task dialog systems. For example, the ACER dialog management method in one paper[6] combines model-free deep RL with other techniques such as Experience Replay, belief domain constraints, and pre-training. This greatly improves the training efficiency and stability of RL algorithms in task dialog systems. However, simply applying the RL algorithm cannot meet the actual requirements of dialog systems. One reason is that dialogs lack clear rules, reward functions, simple and clear action spaces, and perfect environment simulators that can generate hundreds of millions of quality interactive data records. Dialog tasks include changing slot values, actions, and intents, which significantly increases the action space of the dialog system and makes it difficult to define. When traditional flat RL methods are used, the curse of dimensionality may occur due to one-hot encoding of all system actions. Therefore, these methods are no longer suitable for handling complex dialogs with large action spaces. For this reason, scholars have tried many other methods, including model-free RL, model-based RL, and human-in-the-loop. Model-Free RL — HRL Hierarchical Reinforcement Learning (HRL) divides a complex task into multiple sub-tasks to avoid the curse of dimensionality in traditional flat RL methods. In one paper[29], HRL was applied to task dialog systems for the first time. The authors divided a complex dialog task into multiple sub-tasks by time. For example, a complex travel task can be divided into sub-tasks, such as booking tickets, booking hotels, and renting cars. Accordingly, they designed a dialog policy network of two layers. One layer selects and arranges all sub-tasks, and the other layer executes specific sub-tasks. The DM model they proposed consists of two parts, as shown in Figure 15: Top-level policy: Selects a sub-task based on the dialog state. Low-level policy: Completes a specific dialog action in a sub-task. The global dialog state tracker records the overall dialog state. After the entire dialog task is completed, the top-level policy receives an external reward. The model also has an internal critic module to estimate the possibility of completing the sub-tasks (the degree of slot filling for sub-tasks) based on the dialog state.
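A schematic sketch of that two-level decomposition may help; the sub-tasks, actions, and hand-written selection rules below are hypothetical stand-ins for the learned top-level and low-level policies in the paper.

```python
import random

SUBTASKS = {
    "book_flight": ["ask_date", "ask_destination", "confirm_flight"],
    "book_hotel":  ["ask_checkin", "ask_nights", "confirm_hotel"],
}

def top_level_policy(dialog_state):
    # Toy rule standing in for a learned policy over sub-tasks.
    unfinished = [t for t in SUBTASKS if not dialog_state["done"].get(t)]
    return random.choice(unfinished) if unfinished else None

def low_level_policy(subtask, dialog_state):
    # Toy rule standing in for a learned policy over primitive actions.
    remaining = [a for a in SUBTASKS[subtask]
                 if a not in dialog_state["executed"]]
    return remaining[0] if remaining else None

def intrinsic_reward(subtask, dialog_state):
    # Internal critic: reward proportional to slot filling for the sub-task.
    filled = sum(a in dialog_state["executed"] for a in SUBTASKS[subtask])
    return filled / len(SUBTASKS[subtask])

state = {"done": {}, "executed": set()}
subtask = top_level_policy(state)
action = low_level_policy(subtask, state)
state["executed"].add(action)
print(subtask, action, intrinsic_reward(subtask, state))
```

In the paper both levels are learned policies rather than rules, trained with the external reward for the whole task plus the intrinsic reward discussed next.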
The low-level policy receives an intrinsic reward from the internal critic module based on the degree of completion of the sub-task. Figure 15. The HRL framework of a task-oriented dialog system For complex dialogs, a basic system action is selected at each step of traditional RL methods, such as querying the slot value or confirming constraints. In the HRL mode, a set of basic actions is selected based on the top-level policy, and then a basic action is selected from the current set based on the low-level policy, as shown in Figure 16. This hierarchical division of action spaces covers the time sequence constraints between different sub-tasks, which facilitates the completion of composite tasks. In addition, the intrinsic reward effectively relieves the problem of sparse rewards, accelerating RL training, preventing frequent switching of the dialog between different sub-tasks, and improving the accuracy of action prediction. Of course, the hierarchical design of actions requires expert knowledge, and the types of sub-tasks need to be determined by experts. Recently, tools that can automatically discover dialog sub-tasks have appeared30. By using unsupervised learning methods, these tools automatically split the dialog state sequence of the whole dialog history, without the need to manually build a dialog sub-task structure. Figure 16. Policy selection process of HRL Model-free RL — FRL Feudal Reinforcement Learning (FRL) is a suitable solution to large dimension issues. HRL divides a dialog policy into sub-policies based on different task stages in the time dimension, which reduces the complexity of policy learning. FRL divides a policy in the space dimension to restrict the action range of each sub-policy, which reduces the complexity of sub-policies. FRL does not divide a task into sub-tasks. Instead, it uses the abstract functions of the state space to extract useful features from dialog states. Such abstraction allows FRL to be applied and migrated between different domains, achieving high scalability. Cambridge scholars applied FRL[32] to task dialog systems for the first time to divide the action space by its relevance to the slots. With this done, only the natural structure of the action space is used, and additional expert knowledge is not required. They put forward a feudal policy structure shown in Figure 17. The decision-making process for this structure is divided into two steps: Determine whether the next action requires slots as parameters. Select the low-level policy and next action for the corresponding slot based on the decision of the first step. Figure 17. Application of FRL in a task-oriented dialog system In general, both HRL and FRL divide the high-dimensional complex action space in different ways to address the low training efficiency of traditional RL methods due to large action space dimensions. HRL divides tasks properly in line with human understanding. However, expert knowledge is required to divide a task into sub-tasks. FRL divides complex tasks based on the logical structure of the action and does not consider mutual constraints between sub-tasks. Model-Based RL The preceding RL methods are model-free. With these methods, a large amount of weakly supervised data is obtained through trial and error interactions with the environment, and then a value network or policy network is trained accordingly. The process is independent of the environment. There is also model-based RL, as shown in Figure 18. 
Model-based RL directly models and interacts with the environment to learn a probability transition function of state and reward, namely, an environment model. Then, the system interacts with the environment model to generate more training data. Therefore, model-based RL is more efficient than model-free RL, especially when it is costly to interact with the environment. However, the resulting performance depends on the quality of environment modeling. Figure 18. Model-based RL process Using model-based RL to improve training efficiency is currently an active field of research. Microsoft first applied the classic Deep Dyna-Q (DDQ) algorithm in dialogs[33], as shown by the figure (c) in Figure 19. Before DDQ training starts, we use a small amount of existing dialog data to pre-train the policy model and the world model. Then, we train DDQ by repeating the following steps: Direct RL: Interact with real users online, update policy models, and store dialog data. World model training: Update the world model based on collected real dialog data. Planning: Use the dialog data obtained from interaction with the world model to train the policy model. The world model (as shown in Figure 20) is a neural network that models the probability of environment state transition and rewards. The inputs are the current dialog state and system action. The outputs are the next user action, environment rewards, and dialog termination variables. The world model reduces the human-machine interaction data required by DDQ for online RL (as shown in figure (a) of Figure 19) and avoids ineffective interactions with user simulators (as shown in figure (b) of Figure 19). Figure 19. Three RL architectures Figure 20. Structure of the world model Similar to the user simulator in the dialog field, the world model can simulate real user actions and interact with the system's DM. However, the user simulator is essentially an external environment and is used to simulate real users, while the world model is an internal model of the system. Microsoft researchers have made improvements based on DDQ. To improve the authenticity of the dialog data generated by the world model, they proposed[34] to improve the quality of the generated dialog data through adversarial training. Considering when to use the data generated through interaction with the real environment and when to use data generated through interaction with the world model, they discussed feasible solutions in a paper[35]. They also discussed a unified dialog framework to include interaction with real users in another paper[36]. This human-teaching concept has attracted attention in the industry as it can help in the building of DMs. This will be further explained in the following sections. Human-in-the-Loop We hope to make full use of human knowledge and experience to generate high-quality data and improve the efficiency of model training. Human-in-the-loop RL[37] is a method to introduce human beings into robot training. Through designed human-machine interaction methods, humans can efficiently guide the training of RL models. To further improve the training efficiency of the task dialog systems, researchers are working to design an effective human-in-the-loop method based on the dialog features. Figure 21.
Composite learning combining supervised pre-training, imitation learning, and online RL Google researchers proposed a composite learning method combining human teaching and RL37, which adds a human teaching stage between supervised pre-training and online RL, allowing humans to tag data to avoid the covariate shift caused by supervised pre-training[42]. Amazon researchers also proposed a similar human teaching framework[37]: In each round of dialog, the system recommends four responses to the customer service expert. The customer service expert determines whether to select one of these responses or create a new response. Finally, the customer service expert sends the selected or created response to the user. With this method, developers can quickly update the capabilities of the dialog system. In the preceding method, the system passively receives the data tagged by humans. However, a good system should actively ask questions and seek help from humans. One paper[40] introduced the companion learning architecture (as shown in Figure 22), which adds the role of a teacher (human) to the traditional RL framework. The teacher can correct the responses of the dialog system (the student, represented by the switch on the left side of the figure) and evaluate the student’s response in the form of intrinsic reward (the switch on the right side of the figure). For the implementation of active learning, the authors put forward the concept of dialog decision certainty. The student policy network is sampled multiple times through dropout to obtain the estimated approximate maximum probability of the desired action. Then the moving average of several dialog rounds is calculated through the maximum probability and used as the decision certainty of the student policy network. If the calculated certainty is lower than the target value, the system determines whether a teacher is required to correct errors and provide reward functions based on the difference between the calculated decision certainty and the target value. If the calculated certainty is higher than the target value, the system stops learning from the teacher and makes judgments on its own. Figure 22. The teacher corrects the student’s response (on the left) or evaluates the student’s response (on the right). The key to active learning is to estimate the certainty of the dialog system regarding its own decisions. In addition to dropping out policy networks, other methods include using hidden variables as condition variables to calculate the Jensen-Shannon divergence of policy networks[22] and making judgments based on the dialog success rate of the current system[36]. Dialog Management Framework of the Intelligent Robot Conversational AI Team To ensure stability and interpretability, the industry primarily uses rule-based DM models. The Intelligent Robot Conversational AI Team at Alibaba’s DAMO Academy began to explore DM models last year. When building a real dialog system, we need to solve two problems: (1) how to obtain a large amount of dialog data in a specific scenario and (2) how to use algorithms to maximize the value of data. Currently, we plan to complete the model framework design in four steps, as shown in Figure 23. Figure 23. Four steps of DM model design Step 1: First, use the dialog studio independently developed by the Intelligent Robot Conversational AI team to quickly build a dialog engine called TaskFlow based on rule-based dialog flows and build a user simulator with similar dialog flows. 
Then, have the user simulator and TaskFlow continuously interact with each other to generate a large amount of dialog data. Step 2: Train a neural network through supervised learning to build a preliminary DM model that has capabilities basically equivalent to a rule-based dialog engine. The model can be expanded by combining semantic similarity matching and end-to-end generation. Dialog tasks with a large action space are divided using the HRL method. Step 3: In the development phase, make the system interact with an improved user simulator or AI trainers and continuously enhance the system dialog capability based on off-policy ACER RL algorithms. Step 4: After the human-machine interaction experience is verified, launch the system and introduce human roles to collect real user interaction data. In addition, use some UI designs to easily introduce user feedback to continuously update and enhance the model. The obtained human-machine dialog data will be further analyzed and mined for customer insight. At present, the RL-based DM model we developed can complete 80% of the dialog with the user simulator for moderately complex dialog tasks, such as booking a meeting room, as shown in Figure 24. Figure 24. Framework and evaluation indicators of the DM model developed by the Intelligent Robot Conversational AI team Summary This article provides a detailed introduction of the latest research on DM models, focusing on three shortcomings of traditional DM models: Poor scalability Insufficient tagged data Low training efficiency To address scalability, common methods for processing changes in user intents, dialog bodies, and the system action space include semantic similarity matching, knowledge distillation, and sequence generation. To address insufficient tagged data, methods include automatic machine tagging, effective dialog structure mining, and efficient data collection policies. To address the low training efficiency of traditional DM models, methods such as HRL and FRL are used to divide action spaces into different layers. Model-based RL methods are also used to model the environment and improve training efficiency. Introducing human-in-the-loop into the dialog system training framework is also a current focus of research. Finally, I discussed the current progress of the DM model developed by the Intelligent Robot Conversational AI team of Alibaba's DAMO Academy. I hope this summary can provide some new insights to support your own research on DM. References [1].Turing A M. Computing machinery and intelligence[J]. Mind, 1950, 59(236): 433–460. [2].Weizenbaum J.
ELIZA — -a computer program for the study of natural language communication between man and machine[J]. Communications of the ACM, 1966, 9(1): 36–45. [3].Young S, Gašić M, Thomson B, et al. Pomdp-based statistical spoken dialog systems: A review[J]. Proceedings of the IEEE, 2013, 101(5): 1160–1179. [4].Bordes A, Boureau Y L, Weston J. Learning end-to-end goal-oriented dialog[J]. arXiv preprint arXiv:1605.07683, 2016. [5].Wen T H, Vandyke D, Mrksic N, et al. A network-based end-to-end trainable task-oriented dialogue system[J]. arXiv preprint arXiv:1604.04562, 2016. [6].Su P H, Budzianowski P, Ultes S, et al. Sample-efficient actor-critic reinforcement learning with supervised data for dialogue management[J]. arXiv preprint arXiv:1707.00130, 2017. [7]. Serban I V, Sordoni A, Lowe R, et al. A hierarchical latent variable encoder-decoder model for generating dialogues[C]//Thirty-First AAAI Conference on Artificial Intelligence. 2017. [8]. Berant J, Chou A, Frostig R, et al. Semantic parsing on freebase from question-answer pairs[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 2013: 1533–1544. [9]. Dhingra B, Li L, Li X, et al. Towards end-to-end reinforcement learning of dialogue agents for information access[J]. arXiv preprint arXiv:1609.00777, 2016. [10]. Lei W, Jin X, Kan M Y, et al. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018: 1437–1447. [11]. Madotto A, Wu C S, Fung P. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems[J]. arXiv preprint arXiv:1804.08217, 2018. [12]. Mrkšić N, Séaghdha D O, Wen T H, et al. Neural belief tracker: Data-driven dialogue state tracking[J]. arXiv preprint arXiv:1606.03777, 2016. [13]. ¬Ramadan O, Budzianowski P, Gašić M. Large-scale multi-domain belief tracking with knowledge sharing[J]. arXiv preprint arXiv:1807.06517, 2018. [14]. Weisz G, Budzianowski P, Su P H, et al. Sample efficient deep reinforcement learning for dialogue systems with large action spaces[J]. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 2018, 26(11): 2083–2097. [15]. Wang W, Zhang J, Zhang H, et al. A Teacher-Student Framework for Maintainable Dialog Manager[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 3803–3812. [16]. Yun-Nung Chen, Dilek Hakkani-Tur, and Xiaodong He, “Zero-Shot Learning of Intent Embeddings for Expansion by Convolutional Deep Structured Semantic Models,” in Proceedings of The 41st IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2016), Shanghai, China, March 20–25, 2016. IEEE. [17]. Rastogi A, Hakkani-Tür D, Heck L. Scalable multi-domain dialogue state tracking[C]//2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2017: 561–568. [18]. Mesnil G, He X, Deng L, et al. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding[C]//Interspeech. 2013: 3771–3775. [19]. Bapna A, Tur G, Hakkani-Tur D, et al. Towards zero-shot frame semantic parsing for domain scaling[J]. arXiv preprint arXiv:1707.02363, 2017. [20]. Wu C S, Madotto A, Hosseini-Asl E, et al. Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems[J]. arXiv preprint arXiv:1905.08743, 2019. [21]. He J, Chen J, He X, et al. 
Deep reinforcement learning with a natural language action space[J]. arXiv preprint arXiv:1511.04636, 2015. [22]. Wang W, Zhang J, Li Q, et al. Incremental Learning from Scratch for Task-Oriented Dialogue Systems[J].arXiv preprint arXiv:1906.04991, 2019. [23]. Shi C, Chen Q, Sha L, et al.Auto-Dialabel: Labeling Dialogue Data with Unsupervised Learning[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 684–689. [24]. Haponchyk I, Uva A, Yu S, et al. Supervised clustering of questions into intents for dialog system applications[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 2310–2321. [25]. Shi W, Zhao T, Yu Z. Unsupervised Dialog Structure Learning[J]. arXiv preprint arXiv:1904.03736, 2019. [26]. Zhao T, Xie K, Eskenazi M. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models[J]. arXiv preprint arXiv:1902.08858, 2019. [27]. Shah P, Hakkani-Tur D, Liu B, et al. Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers). 2018: 41–51. [28]. Budzianowski P, Wen T H, Tseng B H, et al. Multiwoz-a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling[J]. arXiv preprint arXiv:1810.00278, 2018. [29]. Peng B, Li X, Li L, et al. Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning[J]. arXiv preprint arXiv:1704.03084, 2017. [30]. Kristianto G Y, Zhang H, Tong B, et al. Autonomous Sub-domain Modeling for Dialogue Policy with Hierarchical Deep Reinforcement Learning[C]//Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI. 2018: 9–16. [31]. Tang D, Li X, Gao J, et al. Subgoal discovery for hierarchical dialogue policy learning[J]. arXiv preprint arXiv:1804.07855, 2018. [32]. Casanueva I, Budzianowski P, Su P H, et al. Feudal reinforcement learning for dialogue management in large domains[J]. arXiv preprint arXiv:1803.03232, 2018. [33]. Peng B, Li X, Gao J, et al. Deep dyna-q: Integrating planning for task-completion dialogue policy learning[J]. ACL 2018. [34]. Su S Y, Li X, Gao J, et al. Discriminative deep dyna-q: Robust planning for dialogue policy learning.EMNLP, 2018. [35]. Wu Y, Li X, Liu J, et al. Switch-based active deep dyna-q: Efficient adaptive planning for task-completion dialogue policy learning.AAAI, 2019. [36]. Zhang Z, Li X, Gao J, et al. Budgeted Policy Learning for Task-Oriented Dialogue Systems. ACL, 2019.[37]. Abel D, Salvatier J, Stuhlmüller A, et al. Agent-agnostic human-in-the-loop reinforcement learning[J]. arXiv preprint arXiv:1701.04079, 2017. [38]. Liu B, Tur G, Hakkani-Tur D, et al. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems[J]. arXiv preprint arXiv:1804.06512, 2018. [39]. Lu Y, Srivastava M, Kramer J, et al. Goal-Oriented End-to-End Conversational Models with Profile Features in a Real-World Setting[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers). 2019: 48–55. [40]. Chen L, Zhou X, Chang C, et al. 
Agent-aware dropout dqn for safe and efficient on-line dialogue policy learning[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 2454–2464. [41]. Gao J, Galley M, Li L. Neural approaches to conversational AI[J]. Foundations and Trends® in Information Retrieval, 2019, 13(2–3): 127–298. [42]. Ross S, Gordon G, Bagnell D. A reduction of imitation learning and structured prediction to no-regret online learning[C]//Proceedings of the fourteenth international conference on artificial intelligence and statistics. 2011: 627–635. [43]. Rajendran J, Ganhotra J, Polymenakos L C. Learning End-to-End Goal-Oriented Dialog with Maximal User Task Success and Minimal Human Agent Use[J]. Transactions of the Association for Computational Linguistics, 2019, 7: 375–386. [44]. Mrkšić N, Vulić I. Fully Statistical Neural Belief Tracking[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2018: 108–113. [45]. Zhou L, Small K. Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering[J]. arXiv preprint arXiv:1911.06192, 2019. [46]. Rajpurkar P, Jia R, Liang P. Know What You Don’t Know: Unanswerable Questions for SQuAD[J]. arXiv preprint arXiv:1806.03822, 2018. [47]. Zhang J G, Hashimoto K, Wu C S, et al. Find or Classify? Dual Strategy for Slot-Value Predictions on Multi-Domain Dialog State Tracking[J]. arXiv preprint arXiv:1910.03544, 2019. Are you eager to know the latest tech trends in Alibaba Cloud? Hear it from our top experts in our newly launched series, Tech Show! Original Source:
https://medium.com/datadriveninvestor/progress-in-dialog-management-model-research-444c52f4bc1a
['Alibaba Cloud']
2020-06-22 10:41:51.468000+00:00
['Machine Learning', 'AI', 'API', 'Alibabacloud', 'Algorithms']
Trade Biotech Stocks Like a Hedge Fund With These Hacks
The secrets of the market are out there, waiting to be unearthed. Few people have the curiosity or grit to dig for them. Sometimes, those secrets are right in front of our eyes. Few people have the boldness or presence of mind to simply look. In the past, I’ve presented investment ideas that have been based, in large part, on discerning work to determine with near certainty whether a biotech asset is under- or overvalued (e.g. here, here, here, and here). This work requires technical proficiency, a critical eye, and the stamina for deep analysis. I have also discussed ways in which professional investors can acquire an edge through the widespread practice of both legal and illegal insider trading. Although ethically dubious, these schemes require the cultivation of expert networks, intimate knowledge of the markets, and substantial legal wherewithal. Most importantly, the approaches above entail a professional commitment, with a concomitant investment in time and resources, and are not accessible to the layman investor. The following are shortcuts. The Clinicaltrials.gov Hack Clinicaltrials.gov is a website that provides the public with information on clinical studies. The information is provided and updated by the sponsor or principal investigator of the clinical study, and the website is maintained by the National Institutes of Health. Registration is required for any Phase 2, Phase 3, or post-marketing trial of a drug, biologic, or medical device that meets one of the following conditions: The trial has one or more sites in the United States The trial is conducted under an FDA investigational new drug application or investigational device exemption The trial involves a drug, biologic, or device that is manufactured in the United States or its territories and is exported for research These criteria yield essentially any trial that would materially affect the value of a publicly traded biotech company. While the clinical trial descriptors don’t provide granular data on the status of trials (e.g. number of patients currently enrolled or proportion that have completed the protocol), they do provide an overall classification of trial status (recruiting, completed, suspended, terminated, etc.). The FDA now also requires that trials initiated from 2017 onward report results once they are available. All told, clinicaltrials.gov is a public source of information on events that would affect most, if not all, biotech stocks. So what is the likelihood that information would be posted on clinicaltrials.gov before it is formally announced to the public in a press release? Not high, and such an occurrence would almost certainly be a blunder. But it does happen. On February 25, 2016, clinicaltrials.gov logged a change in the study record for Vitae Pharmaceuticals’ ($VTAE) psoriasis trial of its drug, VTP-43742. The change indicated that enrollment for the trial was closed at 74 patients instead of the anticipated 108. Halting a trial’s enrollment prematurely could have a variety of causes, but very few of them would be considered auspicious. The most likely explanation, especially in an ascending dose trial, is toxicity. On March 3, 2016, the company issued a press release noting that enrollment was closed to additional psoriatic patients, adding that data from the enrolled cohort would be “sufficient to determine next steps in the program.” This revelation was viewed negatively by the market, and the company’s stock plunged 52% the following day. 
Vitae later reported that the drug demonstrated positive efficacy in the trial, causing its stock to regain much of its lost ground. However, recruitment had in fact been halted due to toxicity concerns, as investigators in the trial observed transaminase elevations in four patients in the 700-mg group, which swayed Vitae to forgo the highest dose cohort of 1,050 mg. A similar case manifested on March 23, 2016, when clinicaltrials.gov registered a change in the study record for Ionis Pharmaceuticals' ($IONS) trial of the drug IONIS-TTR(Rx) in familial amyloid polyneuropathy (FAP). The change signaled that enrollment of the Phase 3 trial was halted at 172 patients, instead of the planned 195. On April 7, 2016, Ionis issued a press release stating that the FDA had placed its planned trial of IONIS-TTR(Rx) in transthyretin amyloid cardiomyopathy on clinical hold, due to an undisclosed issue with its ongoing trial in FAP. Ionis promptly shed 11% of its value. It was later revealed that the clinical hold was triggered by a negative safety signal from the FAP trial, in which some patients experienced a severe decline in platelet count. To be fair, changes to clinical trials don't always foreshadow bad news. Trials are sometimes stopped early due to efficacy (which would trigger unblinding of the trial in order to treat all patients with the efficacious drug). This was famously the case for Intercept Pharmaceuticals' ($ICPT) trial of obeticholic acid in nonalcoholic steatohepatitis, where the announcement sent the company's stock soaring over 500%. But trials can also be stopped due to lack of efficacy, toxicity issues, or simply poor enrollment. In an analysis of terminated studies on clinicaltrials.gov, 68% of trials were terminated due to reasons other than scientific data from the trial (e.g. insufficient rate of enrollment, issues with study conduct), and only 21% of trials were terminated due to findings related to the overall benefit-risk profile of the intervention. Only a subset of this 21% would be trials that were stopped due to positive efficacy.

Reasons for clinical trial termination based on an analysis of a clinicaltrials.gov dataset

Thus, a potentially lucrative trading strategy would be to (1) troll clinicaltrials.gov for recent updates to clinical trial records where the sponsor is a publicly traded biotech company, (2) determine whether the update is material to the company's stock price, (3) verify whether a press release has already been issued and, if not, (4) trade in the company's stock. The most straightforward embodiment of this strategy is a short of the stock of a company whose trial is terminated, suspended, or for which recruitment is halted without a relevant disclosure by the company. The risk is that the change in the trial is due to a positive development (which we've determined is unlikely) or that the change is actually immaterial and some other, positive catalyst emerges in the meantime. If you're convinced the change is material but could be positive, consider hedging your position with a call option to the upside. You may be thinking that, with over 250,000 trial records on clinicaltrials.gov, monitoring each trial in real-time would be a futile effort. Fortunately, the website recently implemented an RSS feature which, with some customization, allows you to automate this process. The RSS feed can automatically alert you to recently added or modified study records of interest.
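If you prefer a script to a feed reader, the monitoring can also be automated directly. Below is a minimal sketch using the third-party feedparser library; the feed URL is a placeholder that you would swap for the RSS URL generated from your own saved search (the walkthrough below shows how to create one), and the watch-list of status keywords is only an example.

```python
# Minimal sketch: poll a ClinicalTrials.gov RSS feed for newly modified studies.
# Assumptions: FEED_URL is a placeholder for the RSS URL generated from your own
# saved search (see the steps below); the status keywords are illustrative.
import time
import feedparser

FEED_URL = "https://example.org/your-clinicaltrials-rss-feed"  # placeholder
WATCH_WORDS = ("terminated", "suspended", "active, not recruiting", "withdrawn")

seen_ids = set()

def check_feed() -> None:
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        entry_id = getattr(entry, "id", entry.link)
        if entry_id in seen_ids:
            continue                      # already reported this study record
        seen_ids.add(entry_id)
        text = f"{entry.title} {getattr(entry, 'summary', '')}".lower()
        if any(word in text for word in WATCH_WORDS):
            # Replace print() with an e-mail or push notification of your choice.
            print(f"Possible material update: {entry.title} -> {entry.link}")

if __name__ == "__main__":
    while True:
        check_feed()
        time.sleep(30 * 60)               # poll every 30 minutes
```

From there, the alert can be wired into e-mail, IFTTT, or any push service, exactly as the walkthrough below describes.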
For instance, a search for all interventional studies with the status Active, not recruiting, Suspended, Terminated, or Withdrawn, yields 31,010 study records. Click on Subscribe to RSS in the upper-right corner of the search results box: 2. A pop-up box containing RSS feed options will appear. Choose the option for Show studies that were added or modified in the last 14 days, and click on the Create RSS Feed button to open the feed and display a list of any new updates to your search results. You can subscribe to the RSS feed using your browser or a feed reader (e.g. Feedly). Once you set up an RSS feed on your browser or feed reader, you can integrate with IFTTT to set up e-mail or push notifications and receive any relevant update in real-time. Now, I’m notified immediately of any clinical trial that is terminated, suspended, or that stops recruiting. The FOIA Hack Sometimes, when you don’t have the answer to a question, the government will give it to you. The Freedom of Information Act (FOIA), signed into law in 1966, gives any person the right to access public records, such as FDA facility inspections, drug adverse event reports, and internal newsletters. We fund government, and they collect a lot of data on people, corporations, and their products. The FOIA allows the average taxpayer to access that data. Trading on material obtained through FOIA is not illegal because the government has no duty to keep the information private — in fact, officials are required to disclose the information, except when its release poses a threat to national security. Some federal agencies are bound to protect certain trade secrets, such as the proprietary manufacturing protocol for a drug, in which case the agency will withhold or redact such information. Hedge funds already make liberal use of FOIA to perform due diligence, with several examples of such funds profiting or stemming losses based on the information they obtained. In March of 2009, Genzyme announced that the FDA had issued a warning letter identifying manufacturing deficiencies at a plant where it produced the enzyme replacement therapies Cerezyme and Fabrazyme. SAC Capital sent a FOIA request to the FDA for the Form 483 facility inspection report, which it received on March 30. The report led SAC to believe that the issues were more dire than the company let on because, over the next few months, SAC reduced their stake in the company from 221,000 shares to 127,000. On June 16, the company disclosed a viral contamination at the plant, leading to manufacturing shutdown of the two drugs. SAC was able to avert major losses, as the company’s stock declined 15% in the two weeks ending June 16. Another FOIA exploit enabled hedge funds to predict the acquisition of Actelion by Johnson & Johnson earlier this year. Although Actelion had been a rumored takeover target for some time, a group of hedge funds became increasingly convinced when they found that J&J’s corporate jet had been parked in Basel, Switzerland — near Actelion headquarters — for over a week. When the $30 billion deal was announced on January 26, 2017, Actelion’s stock soared 20%, earning the funds hundreds of millions in profit. The story echoes a scene straight out of the movie Wall St — but these funds didn’t need to rely on corporate espionage à la Bud Fox for this intel. The movements of almost any private jet can be tracked using publicly available tools, thanks to FOIA. 
The FAA keeps track of all aircraft, and because of FOIA, the FAA has agreed to provide the data in real-time to services such as FlightAware. The only information needed to track a plane is the tail number for the specific jet, which can be searched on the FAA registry using the owner’s name. These feats are not merely anomalies. A recent analysis found that FOIA requests are incredibly common among hedge funds. (Incidentally, the study’s authors used none other than a FOIA request in order to acquire the data on FDA-bound FOIA requests). A separate analysis broke down the 1,899 FOIA requests of FDA records by hedge funds from 1999 to 2013, and found that the most frequent kinds of requests were for Form 483s and consumer complaints. In addition to being frequently invoked, FOIA enables hedge funds to generate significant trading returns. In particular, when funds increase their holdings of a stock in connection with a FOIA request, the stock’s abnormal returns (a measure that adjusts for market trends) average 5.26%, and when funds reduce their holdings, abnormal returns average -3.09%. In other words, the trades associated with FOIA requests are, on average, profitable, underscoring the value of the information. Abnormal cumulative returns densities for stocks that were the subject of FOIA requests, illustrating how FOIA data confers an advantage. Results are computed for stocks for which holdings were increased by hedge funds making the FOIA request (blue dashed line), stocks for which holdings were decreased by hedge funds making the FOIA request (red dashed line), and stocks for which holdings were unchanged by hedge funds making the FOIA request (black solid line). FOIA requests give rise to information asymmetries. Even though the information is accessible to anyone, it is not publicly disseminated, and only those who request it will benefit from it. Although there has been an effort to make a searchable, online database of the over 600,000 yearly FOIA requests and responses (i.e. FOIA Online), the Department of Health and Human Services, which oversees the FDA, does not participate in the program. Moreover, the FOIA information comes in the form of unfiltered technical reports, and only those that can understand and process the information can effectively exploit it. Currently, I’m working to create a database of material obtained through FOIA requests to the FDA. The purpose is to give biotech investors access to public information that is, ironically, inaccessible to the independent investor. I’m aiming to crowdsource the database by, at least initially, requiring users to submit FOIA information in order to gain access. If readers are interesting in learning more about this project, please provide your contact information here. Submitting a FOIA request is quite straightforward. The FDA has an online request form through which you can submit your request. The form will ask you the maximum dollar amount you are willing to pay for processing. For consumer use, there is no charge for the first two hours of search and the first 100 pages of information, which should be sufficient for most requests. Beyond that, modest search and copying fees apply. There will be a field where you can enter your request or upload it as a document — be as specific as possible. Remember, you may ask for anything within reason (e.g. adverse event reports, warning letters, facility inspection reports). 
You may also want to include with your request a note asking the agency to contact you by e-mail or phone in case of any questions, as requests can be denied for being unclear. Finally, you should ask to have the information sent in PDF format by e-mail so that the agency doesn't default to snail mail. All agencies are required to respond to your request within 20 business days, although the information may take an additional 10 days in exceptional circumstances. It's as simple as that!
https://medium.com/the-mission/trade-biotech-stocks-like-a-hedge-fund-with-these-hacks-ff153c907b0b
['Samy Hamdouche']
2017-09-11 17:07:06.517000+00:00
['Investing', 'FOIA', 'Stock Market', 'Tech', 'Science']
The Ultimate List of the Best Productivity Resources
The Ultimate List of the Best Productivity Resources Where the most productive people go to get the latest tips What's your go-to resource for all things productivity? We asked, you answered. And the best tips and tricks are now rounded up here, in one handy list. With blogs and podcasts to check out, people to follow, and apps to try, we've got the ultimate list of where to look when you're in need of some solid productivity advice. Blogs + News Podcasts Evernote Podcast — iTunes, SoundCloud, Overcast — Dive into the realms of achievement, entrepreneurship, and creative thinking Cortex Podcast — Each episode, they get together to discuss their working lives People Apps + Tools + Approaches
https://medium.com/taking-note/the-ultimate-list-of-the-best-productivity-resources-5ad2f648875b
[]
2017-11-15 22:03:33.186000+00:00
['Apps', 'Productivity', 'Self Improvement', 'Advice', 'Personal Development']
What Product Teams say and What They Really Mean — 10 Tips for Diagnosing Team Issues
Originally published on Mind The Product October 2018 Team issues can have a negative impact on a project and your people long term. There are a bunch of ways they might manifest themselves — and I’ve written them down as I’ve heard them over a decade of building digital products in cross-functional teams. I’m not touching on the upfront issues like bad sales process, junk briefs, confused business requirements, that’s for another day. This list is most useful for in-flight project teams, off and sprinting. Reading these unfiltered issues will surface the symptoms. And in turn, help with diagnosis. All teams and situations are unique, but some pains are universal and understanding the issue is halfway to a solution. 1 — “Our Client is a ☠️ They Don’t Understand What we are Trying to do” This is bad mojo for a team. In the same way that losing empathy for the customer can easily happen in long projects (good read here about this), it’s easy for a team to start classing the client as a hindrance to getting a project out. This can creep in from the smallest negative comments. If the team doesn’t take the time to understand who they are working with, an “us and them” mentality can develop. The client is taking great risks, personally and as a business. Building client empathy is important. They might be frustrated or confused, which can result in curt communication… Tip: Get to know the client, learn to ask the right question and be patient. But most of all don’t be a promoter of negative views in the team. 2 — “Let’s Push Back This Next Check Till we Have More to Show” This means the team isn’t confident in the direction they are going, and probably doesn’t have the right information. They’ll push the meeting back but go nowhere in the meantime, while the expectation gap between client and team grows and it gets harder to ask the simple questions they didn’t have the answers to in the beginning. Tip: When you or the team are nervous of meeting with the client or major stakeholder, ask why and then go talk to the client about that thing. 3 — “Wow, I’d Never Seen That Document Before” Projects will produce a heap of documentation, and that’s normal. This is a challenge worth understanding from day one. Light documentation in favour of delivering is (in my view) always preferable. One consistent issue I see is the grouping of deliverables by phase or sprint. This starts out looking like a good idea, but soon makes it extremely hard to view a continuous thread across the project. Tip: By taking time to discuss where specific groupings will live, how insights will be surfaced, and an agreement on nomenclature, you will save time and pain later. 4 — “Our Meetings are Long and Have no Outcome” It’s all-too easy to get into a bad meeting etiquette routine. If meetings feel long, then they are, regardless of their actual duration. Judging the correct length can be hard. The way the working day is broken into hours tends to mean a meeting will fill an hour (at least), irrespective of its content. Setting a meeting goal or outcome is imperative. That could be to generate ideas, agree on a deadline or assign work. Whether the goal is hard or loose doesn’t matter, but having one is key. Tip: The simple rules: set a goal, take notes, assign tasks, agree on next steps AND leave the tech out the room. 5 — “What did They go Into a Meeting Room for?” When things get a bit “interesting” on a project there is a tendency to get secretive and have small groups heading to a meeting room. 
It could be a bit of client drama, or maybe a team member issue. But quite often it’s just everyday tasks masquerading as an issue. The point here is that the rest of the team wonders what is going on. It creates team drama, and ripples from it are disruptive. Tip: Try to be absurdly transparent. Spell it out. Tell the team at standup what’s going on and then say it again later. And where possible don’t hide in a meeting room. 6 — “I Just Don’t get Enough Time at my Desk” All the meetings, planning, and alignment are hugely valuable activities. But a balance needs to be struck. If your week is peppered with team meetings and check-ins, how can you find time to get deep into work? This crushes flow time, that special mode that gets the best work and helps team members to feel job satisfaction. Tip: It’s worth evaluating the need for a meeting. If you are a manager, is this meeting more about your peace of mind than anything else? Could that be achieved in another way? Another issue to watch for is the double workload a team can feel when working on-site with a client. Close collaboration is hugely valuable and something I would always promote. But it’s worth recognising that it comes at a cost to the team. They are always on, staying professional, interpreting comments and filtering needs. Once you have been doing this for a few years you find tactics to manage the load and it can be very enjoyable for most. But for members of the team more used to crafting at a desk with headphones on most the day, it can be a great deal of effort to manage and not feel the most productive. Tip: Could you mark out safe spots in the week for the work to get done? I have gone as far as a traffic lights system in the past — I even had a traffic light on display. Parts of the week are green, free to chat and collaboration. Parts are red, please don’t disrupt, it’s deep working time. If this is planned in advance it gives the team a firm grounding to build out a week of work and know when they will be able to focus on the deeper thinking. 7 — “We Have a Presentation Today!?” When people in the team seem confused about where to be and what’s happening this can be an indication of some poor calendars etiquette — things like moving meetings around without updating verbally, dropping them into calendars on the day or, even worse, five minutes before they start. This creates uncertainty, causes confusion, and quickly leads to a behaviour where you don’t start any major task because you have no understanding of how long you will have to work at it — why bother getting into it just to be pulled straight out. Tip: Make time at the end of the day to plan your following day, confirm the meetings, and make adjustments. On the day, use a short team alignment like a standup meeting to get calendars aligned, and reconfirm all the key activities. Things change, people’s life commitments pop up. That’s all fine so long as the team are aware of where they are supposed to be ahead of time. 8 — “The Sprints Just Feel Relentless” Sprints can feel quite intense and exhausting — whether it’s because there’s a deadline in mind, or no end in sight. This can be made worse when a team doesn’t have a grasp of the roadmap, or when you haven’t paused long enough to recognise success. One thing I’ve heard in the past which rings true, is ‘sprinting a marathon’. Tip: One tactic is to have a break — a sprint every X sprints to focus on the little bits that have been sidelined, like process and documentation. 
This is especially useful for developers to jump on any technical debt. 9 — “Did you Take Notes? No, but it’s Cool, [Insert Firefighter] has it” Teams that seem to not be taking responsibility are a really common and bad sign. Most likely a key person is taking the heat. Firefighter is a great term for the people who parachute into the troubled projects and save the day. They have a job to do and little time to do it, so their style is to dictate action. It works in the short term. Clients tend to love them. But remember firefighters love to fight fires. It’s not necessarily on their to-do list to build a strong team. This leads to disengagement — why bother when the firefighter has it covered? Tip: How you know if you have one one person taking all the weight? Maybe the client said: “Where would we be without [Insert firefighter]. Don’t ever let them leave”. But what if they leave? Use the firefighter to set process, but then plan the day they move off the project with them. Let the team and client know. 10 — “Best not Disturb the Team, They Have a big Mountain to Climb” I have often heard this said by well-meaning managers. It comes from a good place. The team may have started strongly with retrospectives, but that can drift if not carefully guarded and valued. Not allowing the team the space to address problems weakens its ability self-fix. Resilience becomes low and the general mood can stagnate. Tip: It’s time to get back to building the space to reflect. Gather input from the team on issues. You’ll probably realise they have a deep understanding of what is going on and that they have some ideas to fix it. Find a forum for discussion as a group. Empower team members to take action from those discussions, and always allow time for them to succeed at the tasks by building time into the plan. …no time like now If you have an issue in your team, and maybe one of these sparked that realisation, well good news! — one of the biggest lessons I’ve learned is that it’s never too late to take a moment, reflect, and start the conversation that could fix things. As you’ve probably guessed, I don’t have any silver bullets for you — if I did I would have a book out 😀 Good luck 🙏
https://medium.com/ideas-by-idean/what-product-teams-say-and-what-they-really-mean-10-tips-for-diagnosing-team-issues-f77625fa72e8
['Rob Boyett']
2019-03-20 16:24:45.157000+00:00
['Product Design', 'Mobile', 'Design', 'Team Management', 'Agile']
How to Be Productive and Achieve If You Have a Tender Soul
Photo by Fabrizio Verrecchia on Unsplash Work with your soul, not against it. If you have a tender soul, you respond to everything that happens like a feather caught in the wind. Successes put you over the moon, but the slightest discouragement can knock you flat. If your self-esteem isn’t that great, criticism feels like stabbing knives. Just taking a step that might bring on disapproval can feel like a herculean task. Maybe you worry about making a mistake that would hurt someone, giving bad advice, getting something wrong, or offending someone. And whenever you try to do something that’s not right for you, your conscience screams until you stop. Even when it is right, a welter of emotions can get between you and what you’re trying to accomplish. Sometimes you might envy the people with steelier souls. People who can work like a machine without getting tripped up seventeen times a day by their feelings. I’m here to tell you, there’s nothing to envy about people who’ve shut down their emotional life. And there’s no reason you can’t create and achieve magnificent things — without putting a gag on your soul. I’ve tried the way that doesn’t work — for way too many years — trying to slog through a work life and then an academic program that didn’t chime with my soul. Trying to ignore the pain of the misalignment, but finding myself at the end of the day curled up on the sofa in a fetal position, drinking wine every night, or contracting mysterious illnesses that wouldn’t go away. I’m 52 now, and I think I’m finally figuring it out. Two attitudes, and one major strategy, have been helping me stay productive and move toward exciting goals, without feeling like I have to stifle my soul. Photo by Wolfgang Hasselmann on Unsplash Knowing that I truly don’t have to choose one or the other. The world seems to be structured to work for and reward people who’ve discarded their emotions. That’s probably true about large swathes of modern life: it encourages focus on financial bottom lines, mechanistic production, and feeding people’s addictions, for the sake of easy sales and immense profits, rather than nourishing their souls with integrity and imagination. But that’s not the whole world. There are still millions of people out there who value — crave, long for — beauty, truth, authenticity, vision, playfulness, delight, inspiration — all those things that only a person with a tender soul can offer. This is my world, and your world. It might not be quite as profitable as the other one, but it can definitely be enough. Nurturing and sheltering myself. This world can be pretty dark and dreary, and even sharp-edged for someone who’s sensitive. I’m learning to take care of myself. That means making sure I get the emotional and sensory nourishment I need: taking breaks to listen to my favorite music, filling my space with light and color and beautiful scents, and ultimately finding a place to live where I feel free, safe, and inspired. I’ve discovered I have to be extra-careful about my boundaries. The acid rain of this world can eat away at our joy. I’m doing everything I can think of to protect myself from that, and to maintain my sense of wonder and delight. This doesn’t mean withdrawal or isolation. There’s a difference between taking a positive interest in the world and people around you — engaging with them lovingly — and allowing yourself to be harmed and brought down. 
I’m learning to always remember who I am, and that my energy and accomplishments will be grounded in my sensitivity, compassion, vision, and joy. I need to nurture and shelter those qualities in myself. Photo by Gene Devine on Unsplash My emotions hold the key to functioning well — shutting them down isn’t going to work for me. For me, the emotional flow is pretty much constant, and until recently I found it very distracting and hampering. In my case, it’s been things like, for example, feeling really restless when I have to stick with a project that isn’t intrinsically interesting at the moment: I would let that restlessness completely carry me away from what I needed — and really wanted — to be accomplishing. Or, when I moved toward working on my novel, I would have a wave of feelings about it not being good enough, or feeling futility, like success will never come to me no matter how good I am or how hard I try. I found it really hard to set those feelings aside in order to focus on my work. I suspect that people who have closed down their souls don’t experience emotions like those so keenly, or they’re able to push them away fairly easily, and that’s one reason they get a lot done. I find it incredibly hard to do something I don’t fully want to be doing. I have to feel hopeful and excited about it, and that it’s the right thing for me and, ideally, beneficial for the world in some way. From sweeping the floor of my kitchen to building my writing career, I have to stir up some level of excitement and a feeling of congruence with the task before I can give it my energy and engagement. On the other hand, any negative feelings can completely prevent me from working — or even keeping my house tidy. So this is the solution I’ve discovered: Instead of trying to ignore or push away these unhelpful emotions, I turn toward them and give them the attention they seem to want. Before I start work, I first sit and self-reflect for a moment to sense what I’m feeling about what I’m about to do. Sometimes I find that I’m really excited and eager, and it’s great to notice that and be able to ride that energy into the session. But if it’s feelings that are pulling me away from the task instead of toward it, I will sit with them for a while and give them some time and attention. Sometimes, especially if I’m having trouble figuring out what’s going on, journaling helps me identify what it is that’s trying to make itself known. If I’m alone, I’ll even talk to myself out loud: “Wow, I feel really sad about doing this today, and I don’t know why, but crap do I feel sad.” Figuring out why I’m feeling a particular way can be useful information, but it seems most important just to identify and acknowledge the feeling itself, and sit with it till it softens. Sometimes the emotion is just sort of like an itch that needs to be scratched or a pebble I have to take out of my shoe — it just needs a few minutes of undivided attention, and it will fade away. Sometimes it’s more intense or durable. Sometimes journaling about it or crying a little will soften or dispel it, and even if it doesn’t completely go away, I’m still able to work now. There are times when I decide to accept that it’s there and get to work anyway, not trying to stifle the feeling, but just letting it be a presence while I do the work I really want to be doing. Photo by seth schwiet on Unsplash It’s so much more peaceful and productive when I’m honest about what I’m feeling. 
When the feeling goes against my chosen goals and plans, I don’t have to let it “win” and deflect me. But recognizing that it’s there can drain a lot of the undermining power out of it. Obviously, this practice can take a bit of time, but if it saves you from getting completely distracted from what you want to do and not doing anything, you’ll come out ahead. And I think it’s worth it in itself for the self-knowledge you gain from it. Acknowledging and sitting with the emotions can be truly healing, too. I’ve learned I don’t need to stifle myself in order to be productive and successful. Exactly the opposite: I can work productively when I accept and allow who I really am and what’s going on for me. I’ve learned that my soul is the source of my creativity, energy, and unique gifts. Shutting it down won’t get me anywhere that I actually want to go — and anyway, it hurts too much. I’ve learned that my truth, such as it is, really can be a gift to the world, to people who are yearning for truth and authenticity and for the specific life lessons that I’ve managed to learn and can now echo. It’s been so encouraging and life-changing to get that. Obviously, the same goes for you. So when your emotions are tripping you up, maybe give them the respect and attention that every inch of your soul deserves. You can still get the work done, set and achieve ambitious goals, and be as productive as anyone else — you just need to work with your soul, not against it.
https://medium.com/swlh/how-to-be-productive-and-achieve-if-you-have-a-tender-soul-1576b72ae4c0
['Sk Camille']
2019-09-13 05:53:38.611000+00:00
['Life Lessons', 'Emotions', 'Productivity', 'Self', 'Work']
Just walk out Amazon Go — the most convincing future of retail
JUST WALK OUT TECHNOLOGY- the key phrase used for Amazon’s cashier-less convenient stores, Amazon Go. These stores resemble the look of normal convenience stores, but customers don’t need to wait or scan to pay; they just have to walk out the stores with items. Amazon opened its second New York City location in June 11th, 2019. This location is the 13th amongst other locations in Seattle, Chicago, and San Francisco. Amazon’s initiatives to apply their online experience to brick-and-mortar shops are not the new thing. Back in 2017, Amazon acquired Whole Foods in order to expand its fresh grocery lines and physical store footprints. Amazon has also experimented with brick-and-mortar shops like Amazon 4-star with highly reviewed and rated items from amazon.com, and Amazon Books, which was literally a physical version of amazon.com book stores (Amazon Books NYC: Does it predict the future of retail?). Although these experiments weren’t the solution for the future retail, large retail enterprises, including Amazon, have tried to reinvent the physical shopping experience to be more reachable and convenient with the use of technology. Image source: Tesco virtual supermarket in a subway station via designboom In 2011, Tesco in South Korea installed a virtual shopping experience in Seoul’s subway stations — customers could scan QR codes on printed supermarket shelves on the station platforms. The idea was simple: hard working people didn’t have time for a grocery shopping and Tesco tapped into this concept by having them multitask during everyday commutes. Although this attempt is more about marketing rather than a practical solution, their registered members rose by 76%, and their online sales increased 130%. Unlike Tesco’s case, in the case of Amazon GO, customers still need to go to physical stores. Presumably, Amazon Go can help customers save time in its target market, which include dense downtown settings, where register lines get long during peak hours. However, one of Amazon Go’s main agendas is to reduce the operating cost, human staff. The history of physical stores One’s grocery shopping experience from markets in the 1800s was simply inefficient. Customers needed to visit individual stores that sold different goods. In 1916, the first Piggly Wiggly store in Memphis completely changed this flow. The customers were led to the store’s storage to pick up items themselves, and then to the centralized register area to pay all items together. This system didn’t only help operation costs, but it also stimulated customers to buy more by spending time picking up different items. In the 1930s, the Great Depression had pressured more supermarkets into the same direction and to pursue economies of scale, which ultimately lead to the success of Walmart. This then led to e-commerce giants like Amazon in later days. In the meantime, physical stores adopted various technologies to run a centralized register even more efficiently with less human staff. In 1972, Kroger agreed to test the barcode system to manage inventories better, soon creating the industry standard, Universal Product Code (UPC). Image source: The History of the Bar Code Timeline of the modern supermarket 1916 — The first Piggly Wiggly store let customers pick items from its storage and pay at the central location. 1930s — The Great Depression directed many stores to adopt the centralized register with the large quantity model. 1950s — Many big-box supermarkets appeared in suburban settings due to the motorization. 
1969 — Walmart chain was founded. The original store was Walton’s 5–10 opened in 1950. 1972 — Kroger agreed with Radio Corporation of America (RCA) to test the barcode system. 1974 — The first use of the standardized barcode system, Universal Product Code (UPC), at Troy’s Marsh Supermarket. 1997 — Contactless payment system, Speedpass by Mobil, which looks like a keychain, was introduced to make a purchase without the use of cash or credit cards. Image source: Esso Mobil 2001 — Kmart adopted the self-checkout as the big-box player, but it then removed it from its stores by 2003. Image source: starts at 60 2014 — Apple Pay expanded the use of contactless payment to the wider merchants in the US. 2016 — The first Amazon Go store opened in Seattle. What does Amazon Go try to solve? Customers save time by NOT waiting in a cashier line. The closest precedent may be the self-checkout system in some large-scale supermarkets or drugstores. I personally find it useful for a faster process, though, it receives more criticism, mainly because customers are not trained to use registers (some poll from 2014 suggested that 93% of people disliked them). Customers don’t have to carry their wallets. The contactless payment system, such as Apple Pay, is the closest solution allowing customers to shop cashless. However, identifying and counting items still relies on human staff. To address this, technology to detect what items each customer has picked is being explored. For example, the creative unit, teamLab, created a hanger that reacts as a customer picks it up. Similarly, tagging items had been the mainstream solution, but this solution does not connect the product with the customer’s identity. For connecting individual customer identity, personal mobile phones take an important role. The remaining question is how to make the connection, and how to make the process frictionless. The low energy bluetooth device, Beacon, was seen as a solution. The company Emoticons introduced small, stylish, and affordable Beacon devices that were also easy to install in retail stores. These devices could emit bluetooth signals constantly, and they did not require pairing steps like regular bluetooth. This way, the system could identify when a customer entered the store and track their locations in the store. However, the solution did not come with a practical way to identify items the customer chose. Additionally, customers needed to download an app to have the connection. Amazon also foresees their Amazon Go stores to cut the operation cost. How does Amazon Go work? The smaller Amazon Go store format. (Image source: Amazon via. Business Insider) In order to successfully achieve this JUST WALK OUT TECHNOLOGY, Amazon Go stores have to achieve the following with extremely high accuracy: Register a customer — so the store can link their Amazon account. Track the customer’s location — so the system can correlate the customer data, and the actions taken place. Detect an item that was picked up — so the system can add items to the virtual shopping cart of the customer who was at the location. Detect an item if it was put back onto the shelf — so the system can remove items from the customer’s virtual shopping cart. Detect when the customer leaves the store — so the customer’s online transaction can be completed. 1. Register a customer This is the most conventional part of the experience. Customers have to download an app to their phone, which is not part of the Amazon app. 
At the store entrance, they have to scan the QR code in the app at the gate, which looks almost like a subway entrance. When I visited the store, I was with my wife and baby. I thought each person had to scan a different QR code and enter separately; however, I was told that all of us could use the same QR code.

2. Track the customer's location

There are hundreds of cameras mounted on the ceiling; they are RGB cameras for tracking individual customers. Amazon has mentioned that its Go stores don't use any facial recognition technology. Instead, these cameras detect each customer's general profile and track individuals with motion detection: the system correlates a customer leaving Camera A with the same customer entering Camera B. The accuracy of tracking is augmented by the use of separate depth-sensing cameras, according to a TechCrunch article. There is also a separate gate for the staff exit.

3. Detect an item that was picked up and 4. Detect an item if it was put back onto the shelf

This is the most unique characteristic of Amazon Go and is reflected in its store design. Each shelf has a weight sensor that knows the exact weight of each item. When an item is picked up, the sensor can tell exactly which shelf the item came from. Similarly, the sensor detects when an object of the same weight is put back. The central processing unit relates the information about each customer's location to the actions taking place on each shelf. Because of this system design, each shelf has clear guides separating each row, and the shelves are more spacious compared to regular grocery stores. The store always looks tidy and well organized, because items need to be placed precisely, and the extra space helps the system accurately detect customers.
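To make the correlation in points 3 and 4 more concrete, here is a toy sketch of how a shelf weight-change event might be attributed to the nearest tracked shopper. Amazon has not published its algorithms, so everything here (the names, the catalog, the nearest-shopper rule, and the numbers) is an invented illustration rather than a description of the real system.

```python
# Toy illustration of steps 3 and 4: attribute a shelf weight-change event to the
# tracked shopper nearest to that shelf. This is NOT Amazon's algorithm; all names,
# numbers, and data structures are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ShelfEvent:
    shelf_id: str
    weight_delta_g: float      # negative = item removed, positive = item returned
    timestamp: float

@dataclass
class Shopper:
    shopper_id: str
    position: tuple            # estimated (x, y) from the ceiling cameras
    cart: dict = field(default_factory=dict)

# Catalog mapping each shelf to its product and that product's unit weight in grams.
CATALOG = {"shelf_42": ("sparkling_water", 355.0)}
SHELF_POSITIONS = {"shelf_42": (3.0, 7.5)}

def attribute_event(event: ShelfEvent, shoppers: list) -> None:
    """Update the virtual cart of the shopper standing closest to the shelf."""
    product, unit_weight = CATALOG[event.shelf_id]
    count = round(abs(event.weight_delta_g) / unit_weight)
    if count == 0:
        return                                  # noise, e.g. a nudged shelf
    sx, sy = SHELF_POSITIONS[event.shelf_id]
    nearest = min(shoppers,
                  key=lambda s: (s.position[0] - sx) ** 2 + (s.position[1] - sy) ** 2)
    if event.weight_delta_g < 0:                # item(s) picked up
        nearest.cart[product] = nearest.cart.get(product, 0) + count
    else:                                       # item(s) put back
        nearest.cart[product] = max(0, nearest.cart.get(product, 0) - count)

# Example: a 355 g drop on shelf_42 while one shopper stands right next to it.
shoppers = [Shopper("s1", position=(2.8, 7.6)), Shopper("s2", position=(9.0, 1.0))]
attribute_event(ShelfEvent("shelf_42", weight_delta_g=-355.0, timestamp=0.0), shoppers)
print(shoppers[0].cart)    # {'sparkling_water': 1}
```

In the real store this attribution is presumably probabilistic and fused with computer vision, and, as noted below, low-confidence cases are still escalated to human staff.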
5. Detect when the customer leaves the store

Customers don't have to scan the QR code to exit like they do when they enter; in-store tracking detects when they leave the store. When I walked out of the store, I was curious whether it had successfully detected the items my wife picked up. In fact, it took about five minutes after leaving the store to receive my receipt and see any updates on the app. I am not sure if this was by design, but I had hoped the Amazon Go app would update my virtual shopping cart while I was still in the store.

What does Amazon Go look like in the future?

From my experience at the small Go store, there was plenty of human staff. Amazon Go is still an early initiative, and it needs people to help operate it. For example, detecting the right item for the right customer's virtual shopping cart is still assisted by human staff when the processing's confidence score is low. In addition, the friendly staff standing by the gate were helpful for answering questions and assisting customers who were not used to the new-age shopping experience. Restocking and reorganizing items on the shelves were also handled by humans. Having the right items on the right shelves, which could have been misplaced by customers, is key for the entire system, and human staff are a flexible workforce able to correct such misoperations. Nonetheless, Amazon's vision is to make the operation as efficient as possible. According to new estimates from RBC Capital Markets analysts, Amazon Go brings in about 50% more revenue compared to traditional convenience stores. Although the initial store cost $1 million in hardware alone, the cost can be drastically reduced by reverse engineering the earlier cases and deploying them on a large scale. Bloomberg reported that Amazon is aiming to open 3,000 locations by 2021.

Annual revenue per square foot, based on store size (Source: Amazon's cashierless Go stores could be a $4 billion business by 2021, new research suggests | recode)

Amazon Go is a sophisticated version of future retail, which has been attempted by many of its competitors. If it had not been Amazon, I can imagine the exact same solution being deployed to more convenience stores by other providers. For consumers, it is only about 30 seconds that they save from a regular shopping trip at a convenience store. Yes, we don't need to bring our wallets, but we need to carry our phones anyway, open the app, and scan. So is it really that much better for consumers' shopping experience and efficiency? One thing I really liked about Amazon Go is its limited product variety in the relatively spacious store, a consequence of pursuing high accuracy in product detection. This stands in contrast to regular convenience stores, which look messy with overwhelming product variations. I used to like this about convenience stores. Today, however, with too much information and too many items accessible online, I am probably not the only one who prefers something simpler, at least in physical spaces.

Original post: http://www.ta-kuma.com/experience-design/just-walk-out-amazon-go%E2%80%8A-%E2%80%8Athe-most-convincing-future-of-retail/

Reference

Inside Amazon's surveillance-powered, no-checkout convenience store | TechCrunch
Stepping Into An Amazon Store Helps It Get Inside Your Head | WIRED
Amazon's cashierless Go stores could be a $4 billion business by 2021, new research suggests | recode
Wouldn't it be better if self-checkout just died? | Vox
Amazon's store of the future has no cashiers, but humans are watching from behind the scenes | recode
Meet the duo who make Amazon Go | Fast Company
Only Amazon Could Make A Checkout-free Grocery Store A Reality | WIRED
The technology behind Amazon's surveillance-heavy Go store | WIRED
Amazon Go is the inevitable evolution of supermarket retail | engadget
Amazon opens its second Go store in New York | engadget
The 6 Most Surprising Things About the New Amazon Go (No Cash Registers) Convenience Store | inc.
The History of the Bar Code | Smithsonian
The Man Who Invented the Grocery Store | The Wall Street Journal
https://uxdesign.cc/just-walk-out-amazon-go-the-most-convincing-future-of-retail-469b5794d65c
['Takuma Kakehi']
2019-11-15 02:12:16.315000+00:00
['Future Of Retail', 'Amazon', 'Tracking', 'Retail', 'Amazon Go']
I’ve Been Plant-Based For A Month, Here’s How It's Gone
I’ve Been Plant-Based For A Month, Here’s How It's Gone So far, so good. Photo by Anna Pelzer on Unsplash For the last six weeks, I’ve followed a completely plant-based diet. I steer away from saying Vegan as I do believe that labels matter to a degree. However, I’ve not eaten any animal products at all! Before going completely plant-based, I was the biggest meat-eater going. Beef, chicken, lamb, veal, duck — you name it, I’ve eaten it. Food is something I’ve always appreciated because I came from a family of foodies. My dad is a French-trained Chef, my mum is Serbian and my Aunt also worked in restaurants and cafes for a large portion of my life. Going plant-based, or following a Vegan lifestyle was something that I always turned my nose up at, mostly because I thought it was pointless and also — because I had absolutely no desire to give up animal products. However, my gut health started to deteriorate and I was certain that dairy products were causing me issues. I wasn’t sick by any stretch of the imagination, but I had constant anxiety about my skin, and felt like I couldn’t eat a meal without feeling uncomfortable. I wanted to be open and honest with my journey, as I do believe there is a stigma attached to being Vegan/Plant-Based. Below I’ve highlighted the good, the bad and the ugly from my own experience — as someone who is a self-employed freelance writer and has to think about budgeting fairly often. The good Let’s start off with the positives, as that’s always a good place to begin. I feel “lighter” and have more energy: The biggest difference I’ve noticed, especially in week five and six is the energy levels. I’m a naturally fast eater, which often meant at lunchtimes I’d end up feeling uncomfortably full. Since following a plant-based diet my energy levels seem to stay at one level for the majority of the day and naturally drop off by the time I need to sleep. The biggest difference I’ve noticed, especially in week five and six is the energy levels. I’m a naturally fast eater, which often meant at lunchtimes I’d end up feeling uncomfortably full. Since following a plant-based diet my energy levels seem to stay at one level for the majority of the day and naturally drop off by the time I need to sleep. Complexion and weight: My weight has naturally fluctuated over the years, and my BMI has been at both ends of the spectrum — overweight (68kg), as well as underweight (53kg). Please note that this was in relation to my height and age at the time, and was definitely a correlation of the food I was eating and the alcohol I was consuming. Now I sit at a healthy 60kg which I’m really proud of, and following a plant-based diet seems to work a lot better for my digestion and overall weight management. I’ve never looked at being plant-based as a weight loss ploy, as I don’t think that’s a healthy way to look at things. However, after dealing with a lot of fluctuating weight from 16–21, at 25 I find that this way of eating works well both physically and mentally. In terms of complexion, I started to develop problematic skin at 24, which seemed to directly correlate with the amount of dairy that I was eating. Cutting this out has helped my complexion to recover. Accessibility: I’m incredibly fortunate and blessed to have access to large supermarkets and whole food stores, meaning I can buy good quality fresh and frozen products. 
My budget doesn’t allow for me to constantly buy the top end products on the market, so having shops that cater to varying price ranges has made a plant-based diet from an accessibility perspective really positive. I appreciate that accessibility plays a huge part in what kind of food you can eat, and I don’t think this is spoken about enough. I’m incredibly fortunate and blessed to have access to large supermarkets and whole food stores, meaning I can buy good quality fresh and frozen products. My budget doesn’t allow for me to constantly buy the top end products on the market, so having shops that cater to varying price ranges has made a plant-based diet from an accessibility perspective really positive. I appreciate that accessibility plays a part in what kind of food you can eat, and I don’t think this is spoken about enough. Snacks: I love to graze, and definitely was a goat or sheep in a previous life. The snack options for those who follow a plant-based diet are varied, and usually pretty healthy. This was something I didn’t realise until I started to read up on what I could eat. The bad Portion management: The first month of being plant-based came with some challenges, as I lived in a state of constant hunger due to not having larger portions. This was something I worked on straight away and now eat around 25% larger portions than I did when I was eating animal products. I don’t advise counting calories as that isn’t healthy — but intuitive eating was something I had to get my head around and find a portion size that would work for me! The first month of being plant-based came with some challenges, as I lived in a state of constant hunger due to not having larger portions. This was something I worked on straight away and now eat around 25% larger portions than I did when I was eating animal products. I don’t advise counting calories as that isn’t healthy — but intuitive eating was something I had to get my head around and find a portion size that would work for me! Time: Being plant-based takes a lot of time. I don’t want to skirt around that. I work from home and I’m self-employed, so the luxury of time means I can marinate and cook things at home from scratch, I can go to a market if I want something specific and preparing meals is factored into my day. If I worked in an office or a standard 9–5 I definitely think I’d struggle, so time is a privilege which is often overlooked. Being plant-based takes a lot of time. I don’t want to skirt around that. I work from home and I’m self-employed, so the luxury of time means I can marinate and cook things at home from scratch, I can go to a market if I want something specific and preparing meals is factored into my day. If I worked in an office or a standard 9–5 I definitely think I’d struggle, so time is a privilege which is often overlooked. Finances: I’d say on average my food shop is still the same as when I bought animal products, so financially I don’t feel like I’ve saved any money or spent any more than I usually would. A lot of the expensive things are usually pre-made plant-based meals, as well as supplements, nuts, seeds and spices. You can live on a plant-based diet comfortably, but I can imagine if I had less time to prepare things from scratch I’d end up spending more on pre-made products. I’d say on average my food shop is still the same as when I bought animal products, so financially I don’t feel like I’ve saved any money or spent any more than I usually would. 
A lot of the expensive things are usually pre-made plant-based meals, as well as supplements, nuts, seeds and spices. You can live on a plant-based diet comfortably, but I can imagine if I had less time to prepare things from scratch I’d end up spending more on pre-made products. Accessibility: Although I have a lot of access where I live, there are certain circumstances where things may be sold out/low stock and then I’ll either have to buy a more expensive alternative, or go without. Of course, this is a privileged problem but accessibility is one of the main reasons why I think fewer people are plant-based. This pressure should be put on retailers as opposed to the average person having to travel to multiple stores just to be able to buy the food they need. Although I have a lot of access where I live, there are certain circumstances where things may be sold out/low stock and then I’ll either have to buy a more expensive alternative, or go without. Of course, this is a privileged problem but accessibility is one of the main reasons why I think fewer people are plant-based. This pressure should be put on retailers as opposed to the average person having to travel to multiple stores just to be able to buy the food they need. Socially: Having a supportive friendship group is great, and in that aspect, I feel very comfortable socially eating plant-based food, however, once we’re out of a pandemic I imagine this will come with challenges. My main worry is being a burden to other people if they have to cater to my eating habits. The ugly Fast food: Plant-based fast food is delicious , but it’s a lot more expensive in comparison to your average McDonalds or Burger King. I guess this is positive as it does deter me from eating bad food (even though it tastes so good), however, this does tie into accessibility and finances. I paid £14.99 for a burger and chips yesterday, and although it was delicious — that’s a helluva lot of money. Plant-based fast food is , but it’s a lot more expensive in comparison to your average McDonalds or Burger King. I guess this is positive as it does deter me from eating bad food (even though it tastes so good), however, this does tie into accessibility and finances. I paid £14.99 for a burger and chips yesterday, and although it was delicious — that’s a helluva lot of money. Skin purging: I noticed in week three and four that my skin seemed to be in a hurry to get rid of any blemishes and spots that I had on my face. It alarmed me at first as I thought the plant-based diet wasn’t working for me, but after reading and understanding what happened — it looked as though my skin was clearing itself. Now, crystal clear aside from the odd hormonal spot! Conclusion Overall, I do think there have been a lot of positives to my journey so far, however, I think the real test of time will be three, six and twelve months down the line to see how it impacts me across the aforementioned points. If you’re following a plant-based diet I’d love to hear from you. Follow me on Twitter!
https://medium.com/the-innovation/ive-been-plant-based-for-a-month-here-s-how-its-gone-12517ed6215
['Claire Stapley']
2020-12-30 11:07:01.358000+00:00
['Plant Based', 'Sustainability', 'Lifestyle', 'Vegan', 'Eating']
Universal Health Coverage Should Be a Fundamental Human Right
Many people are confused about what the ACA is actually supposed to do. One of the biggest ACA reforms is the establishment of public health insurance exchanges, which are like marketplaces that allow individuals and families to seek out and buy affordable and comprehensive health insurance plans. The ACA also provides increased government subsidies to help low and middle-income families afford health insurance. Additionally, it prohibits insurance companies from refusing service or charging higher rates to people with pre-existing conditions, making health insurance more affordable and accessible to all. The ACA also prohibits insurance companies from placing an annual or lifetime cap on how much money they're willing to pay for an individual's healthcare. Finally, the ACA requires all companies with at least 50 employees to offer affordable, comprehensive health insurance to all of their full-time employees. Although the ACA has made considerable strides towards the goal of achieving universal health coverage for all Americans, it's not a perfect system, and has faced considerable pushback, especially from Republican politicians. One of the ACA's major flaws involves Medicaid, a program established in 1965 to provide affordable healthcare for low-income Americans. When the ACA was first established, one of its main goals was to expand Medicaid to all 50 states in the hopes that more low-income individuals could gain access to affordable health insurance. However, in 2012, the Supreme Court ruled that the federal government could not compel states to adopt the Medicaid expansion, which means that individual states are still allowed to opt out of providing expanded Medicaid coverage to their residents. As of 2019, 37 states (including Washington DC) have adopted the ACA's Medicaid expansion, but 14 states have chosen not to. This has created a coverage gap for low-income individuals in these 14 states, which means that about 2 million Americans still do not have affordable or accessible health coverage. Until all Americans, including those who live at or under the poverty line, are given access to affordable healthcare, we cannot claim to be a nation that values the fundamental human right of health. In March of 2019, the Trump Administration announced that it wanted to overturn the entire Affordable Care Act, which would nullify advances in healthcare coverage for over 30 million Americans. To do this, the Trump Administration is banking on a lawsuit against the ACA, Texas v. Azar, which seeks to declare the entirety of the ACA unconstitutional. Legal scholars are divided on whether or not this lawsuit poses a serious threat to the ACA, so in the coming months, the Texas v. Azar suit is definitely something to keep your eye on if you're interested in following the debate surrounding the ACA. To combat the Trump Administration, House Democrats recently introduced a bill to strengthen the Affordable Care Act. Provisions in this bill include increasing subsidies for low-income individuals, expanding federal assistance to include individuals at higher income levels, and fixing the ACA's notorious "family glitch," which currently makes it difficult for employed individuals to afford insurance plans that include their spouses and children. However, because of rampant partisanship in Congress, it's still unclear whether this bill will gain any ground.
Universal healthcare and 'Medicare for All' have become the battleground of a fierce partisan debate, with Republicans and Democrats vying for political power by trying to repeal or strengthen the ACA. Although the debate swirling around universal health coverage and the ACA can be incredibly tense and confusing, it's important to always keep in mind the core tenet of human rights that serves as the foundation of the argument for universal healthcare. Regardless of what form it ends up taking, access to quality healthcare is a fundamental human right, and every attempt to deny this healthcare is a degradation of the United States' commitment to upholding human rights. Subscribe to our Newsletter
https://medium.com/in-kind/universal-health-coverage-should-be-a-fundamental-human-right-f1991d575b6c
['In Kind']
2019-05-14 17:35:52.968000+00:00
['Politics', 'Affordable Care Act', 'Healthcare', 'Wellness', 'Insurance']
Where science meets business — crafting a career of impact
CAREERS Where science meets business — crafting a career of impact A doctoral degree has been the gold standard metric when it comes to predicting the potential for the impact an individual has in contributing to the bioeconomy. Bioeconomy.XYZ writers have been leading dialogue around a particularly important topic — the importance of the Ph.D. Alexander Titus’s article “PhD not required” has certainly made waves as he challenges the limitations of this gold standard as he presents the novel thought that impact within this space is not constrained to possessing a certain type of educational background. Joseph Buccina picks up the metaphorical baton as he answers the question “If a PhD is not required … then what is?” He gives tangible insights for how to grow within the bioeconomy without a doctorate. I would highly recommend reading these two articles. Now as an individual breaking into the bioeconomy and looking to make an impact while not currently possessing a Ph.D., this discussion has been extremely appealing. Finding your path from undergrad to the workforce can be incredibly daunting. So how do we get there? I recently wrote an article about unlocking the potential of networking especially in light of the uncertainty of the pandemic. All five recommendations were essential to forming meaningful connections with mavericks within the bioeconomy. One of the most impactful conversations that I have had to date was with Chris Hsu. When searching LinkedIn looking for individuals to interview (point #1), Chris’s profile caught my attention first due to parallels in our academic backgrounds. We both have undergraduate degrees with a scientific discipline and both hold master's degrees in a business concentration; an MBA in Chris’s case. Beyond that, Chris leads innovation within the bioeconomy through his work at GSK, a multinational pharmaceutical and consumer healthcare company. Now that is a level of impact I would love to have. To point #3, Chris embodies the hungry and humble mentality which is evident not only in his work experience but also in that he was willing to speak with me about his journey. This conversation with Chris gave me unique confidence about the decisions I have made about my professional journey thus far and the ones I will have to make in the future. Others looking to break into the bioeconomy with a nontraditional background would greatly benefit from Chris’s wisdom which he gave me permission to share. I used many of the questions I shared from point #4 in framing our conversation so I hope Chris’s story will resonate with you just as much as it did for me. Why did you decide to study science, specifically biology and public health in undergrad? Photo by National Cancer Institute on Unsplash For me, my journey started in high school when I was really fascinated by the Human Genome Project. At that time in the 2000’s, we were really just starting to make breakthroughs with understanding how the human genetic sequence could help unlock the mysteries of how diseases impact the body, as well as the potential for how we could edit or manage some of these genes in order to find cures. I remember a ton of excitement around the idea of unlocking the codes that help trigger certain types of cancer and being able to edit those cleanly to help patients through gene-therapy find a path to recovery. That really intrigued me. 
There was a movie I watched back in 2000 called Gattaca, which was set in the future and where the career paths and futures of individuals were determined by eugenics. In that dystopia, only those individuals with the best and strongest hereditary traits were favored. It was an extreme example, but the implications of that movie also really sparked my interest in what we knew about DNA and the potential for curing diseases. I learned that Down syndrome is caused by an extra chromosome and how genes can determine your gender and eye color. Thanks, high school biology class! If you fast forward to today, we now have gene-editing technology like CRISPR and we have this up-and-coming mRNA technology that companies like Moderna are using, in which you can use a viral vector to insert revised mRNA back into the body to produce specific proteins. And they are using the platform to develop vaccines for RSV or COVID, for example. So it was really the wow factor: just being in awe of what the Human Genome Project could unlock, and just the idea that one day I would love to work at a company where we could cure diseases that have been plaguing human society for hundreds of years. The public health emphasis was really about understanding what we can do to promote community protection: protecting society as a whole versus at the individual level. As graduation approached — why did you decide to not go down the medical school or graduate school path? When considering medical school, becoming a physician was never really an interest for me. I know that for most science majors, medical school, dental school or veterinary school is where the vast majority of students go (at least at my university). I chose to not go to graduate school immediately after because I saw a need to have work experience before I decided to develop a specialization through graduate school. When I graduated in 2007, it was absolutely one of the worst times to be a college graduate in the job market. Companies were only hiring experienced candidates, and all of us new grads had a tough time getting a foot in the door when companies were reducing their workforce. Most of my peers tried to go back to graduate school immediately, but I tried to ride it out and get a job. I chose to pursue a career in the pharma/biotech industry because of my passion and inspiration that began in high school and knowing that the medicines I helped develop and produce could have a profound impact on a large patient community. Why did you choose your first job? I remember that upon graduation I had three job offers on the table. Two were through federal agencies with the NIH and the FDA, and the third was a private sector company. I ended up selecting my first job because it was for cancer research and I had the chance to work with the National Cancer Institute. It was actually the lowest paying job out of the three offers that I received. But experience at the time was more important to me than pay. I always believed that getting the right experience now would translate into better compensation down the road. It ended up being a great opportunity for me to learn more about regulatory affairs, which plays a multi-functional role in developing the clinical and filing strategy to bring a drug to market. Photo by Science in HD on Unsplash When you decided to pivot from your associate positions to consulting — how did you know it was time to make a change and how did you evaluate the offer?
By the time I had switched over to consulting, I had already had three different jobs in industry. I had almost switched jobs year-to-year in the first three years of my post-undergraduate experience. I often get asked "Chris, I see that you moved around a lot in your career early on — why is that?" And for me, I would say — pharma is such a large industry with a lot of different career paths. I figured the best time to learn, make mistakes, and figure out what I was interested in was as a young professional, so it made sense to pick up experience in different functional areas. It ended up being a great decision because it gave me a lot of early exposure to different aspects of the industry to develop a better understanding of drug development. I eventually transitioned into life science consulting because I wanted the ability to work with a variety of companies and across diverse projects. Consulting really was the best opportunity to get into a whole different pace of work as well. In consulting you work on projects for three months, six months or a year at a time, switching clients on a regular basis. This really accelerates your learning curve instead of you being in the same job for two to three years as you would be normally in an industry role. In that same time span in consulting I worked with at least 2–3 different companies and maybe on 4–5 different projects spanning various functional areas. Consulting was an opportunity for me to really accelerate my learning and my development. Photo by You X Ventures on Unsplash Why did you decide to move to GSK? This reason is more personal. Prior to coming to GSK, I had relocated to San Francisco to support a leading pharmaceutical company client. My wife and I were engaged but we were basically doing long distance, planning a wedding while we were on opposite sides of the country. The long distance and constant travel were difficult to manage, so I agreed to move back to the east coast. Coincidentally, GSK was at the time opening its third R&D vaccine center in Rockville. It was a really great time to come in because the site was just starting up and I was able to start my career in Vaccines as one of the first 75 employees on the site. Today we are over 400. How did your academic background prepare you to first be a Senior Program Manager for Global Meningitis Vaccines, Strategy, Portfolio, and Operations and now a Commercial Launch Excellence Lead? It really was a combination of academic and work experience that has helped prepare me for my current role as the Senior Program Manager. One of the reasons that I got the job was because of my work versatility, where I had prior work experience in multiple functional areas and as a result was able to better support my team and my stakeholders. I remember receiving feedback from the interviews that they really liked my consulting experience combined with the experience in clinical, manufacturing, and regulatory affairs. Having that broad experience and exposure to these areas is really important in my current role where I help manage the overall portfolio of our key initiatives and execution of our strategic objectives. In terms of academic background, having prior knowledge of immunology or biological systems is critically important in understanding the scientific and technical development of medicines, and how the body is able to benefit from it.
If I came from a non-science background, it would certainly be harder for me to understand disease progression and the science behind the vaccine itself in terms of the antigens, how it drives the immunological response and how antibodies are produced. The scientific knowledge is something you can develop over time if you have a non-science background. Pursuing my MBA was one of the best decisions I made in my career. Paired with a scientific degree, the MBA helped prepare me to develop and execute strategy, perform complex financial analyses, and understand the fundamentals of marketing. One of the more memorable quotes a colleague shared with me was "you can't develop a medicine you can't sell, and if you can sell it, why should people choose your product?" Simple but true. R&D and Commercial are very complementary, and having both a scientific and business background brings that relationship to life. What is the most gratifying part of your role? The most gratifying part of my role hands down is knowing that we have an impact on patients. Going back to why I wanted to get into the industry in the first place, it's the idea that we could do something at large that could make an impact on the broader population. My dad is a doctor and has likely seen thousands of patients over his lifetime, and it's extremely admirable seeing the time and dedication he gives to his patients. With vaccinations, you have the potential to change an entire generation globally. What is super rewarding about my role is knowing that millions of infants, children, or adolescents are receiving the vaccine that we developed and knowing we are potentially providing them a measure of protection against a severe disease. Why do you think that pairing a STEM undergraduate degree with a Business degree is an advantage in the bioeconomy space? Photo by Jaron Nix on Unsplash I think it is such a powerful combination because you have the scientific background to understand how medicines work, how diseases progress, but also the scientific challenges of creating drugs or vaccines. As I mentioned earlier, there will also be a fine balance between how much a company can commit to R&D investment and how much its Commercial can generate with sales. Ultimately, a business is only sustainable if you are able to generate consistent and sustainable revenue and then invest that money back into your R&D for new medicines. It is a very cyclical process but it's pretty simple; current R&D investment drives future Revenue, and current Revenue drives future R&D investment. A STEM undergraduate degree and business degree combination gives you a more powerful presence as a leader where you understand the nature of R&D and drug development but also understand the needs of the business: who your customers are and how you can best benefit patients ultimately. I want to contribute to this industry — what skills should I be developing now to set myself up to make a meaningful contribution? I think that it is important to be naturally curious. Be willing to discover, explore, research, and learn about different technologies and trends in what is happening in the industry. I subscribe to the daily newsletters Fierce Pharma and Fierce Biotech. Every day I get a newsletter highlighting the newest drug approvals or clinical studies that didn't go well. Getting that industry insight gives you a little more understanding of the different technologies that are out there, but also the different companies and players that are out there.
There are mergers and acquisitions that are happening all the time (R&D by M&A), and it’s not uncommon that during the year a company releases positive data and by the end of the year you see some form of licensing or acquisition happening with another company. Also, take some risks early on in your career. If your current job isn’t fulfilling or you feel like you have maxed out, don’t shy away from moving on and trying something different. Especially for young professionals, this is the time of your life where I think you should learn and try different roles. Broad experience is always going to make you more marketable to companies. Be willing to network and connect. Reach out to different people across the industry. One suggestion is to have an informational interview to learn more about who they are, the job that they are doing and how did they get there (like you did Katy!) If you are already in a company, connect and network with your colleagues because one of them could become a mentor or a champion for you to develop within the company. Finally, you don’t know what you don’t know. Be open and proactive to finding the answers to the questions you have. Who are your mentors and how did you foster those relationships? I have multiple mentors that have supported me and have helped influence me throughout my career. You will undoubtedly come across people in your career that you have good chemistry with, where you immediately feel there is a trust and confidence to confide in them; and these people genuinely, the keyword being genuine, want to see you succeed. I have developed what I’d call lifelong career mentors, despite having left the company where I worked with them. Part of what makes this mentor relationship so important is these people really care about you, want to see you grow and see your potential. Mentors wouldn’t be helpful if they didn’t see your talent and potential, otherwise, it’s their precious time that they are not using effectively. The way that I have fostered these relationships is to first be myself. I am clear about what my interests are and what I’d like to develop. Also, keep these mentors in the loop about what is going on with you. You don’t need to talk to them on a weekly or monthly basis but keep the relationships warm. Keep them regularly updated on what’s happening in your life or professionally even if there isn’t a clear ask for them to do something for you. This helps your mentors be aware of what is happening so that if there is a time you need some advice or you need to come to them for help — they are up to speed. Don’t just come to them for help only when it comes time to finding your next position otherwise the relationship will feel one-sided. It’s on you to have regular check-ins and you can define regular together- it could be monthly, quarterly, semi-annually, — but if they genuinely care about you then they will want to keep in touch with you. I also think that it is important to have multiple mentors. It is always good to have multiple perspectives. And there may be times when you are seeking advice and you want to get the counsel of different people with different experiences which can give you a more balanced perspective. And you will find that at times their messages are consistent and other times you may find that their messages can be conflicting. But I have always found it valuable to have at least several mentors that you can lean on and that can be fully honest with you. 
Finally, your mentors should always be trying to challenge you as well. Any advice for those looking to contribute to this space with a non-traditional academic path? The good news is that I have met a lot of people who work in the field who don't have scientific backgrounds. I have met Journalism majors and English majors who have done very well. There are definitely opportunities to work in a pharma company across different functions depending on what your interests are. For example, I work with someone who started out as a Journalism major who now works in a marketing role. That role typically requires an understanding of the scientific technology behind our vaccine; however, because they've been able to demonstrate learning agility, they've been successful. There are also other career paths for contributing in other ways like in finance, accounting, legal, and supply chain, where you don't need a scientific degree. If you are passionate about the bioeconomy, Chris is proof that it is possible to craft your own path of impact with a nontraditional background. I hope his journey can encourage others to be bold in their own paths. If Chris's journey resonates with you, reach out and start a conversation! Want to talk about biotechnology or bioeconomy innovation? Working on some cool science you think is essential to the conversation? Let's connect! But most importantly, make sure you are following Bioeconomy.XYZ for accessible information about biotechnology and the bioeconomy.
https://medium.com/bioeconomy-xyz/where-science-meets-business-crafting-a-career-of-impact-35cd6ddf552e
['Kathryn Hamilton']
2020-11-18 21:07:14.844000+00:00
['Interview', 'Careers', 'Bioeconomy', 'Biotechnology', 'Graduate School']
Picking Peaches With Python in Animal Crossing New Horizons
Getting Started Before running ACNH Automator, you will need to input some information about your tree grid and where Nook's Cranny is located in your town. In a future release this information will be entered into a command line prompt when running joycontrol, but for the current release you must edit the run_controller_cli.py file. On line 63 of run_controller_cli.py you will find tree_pick_data being defined as an instance of the TreePickLogic class. It is populated with sample data that you will have to change. There are also secondary defaults that are defined, which can be updated as necessary. I've included a full explanation of each value you will have to change in the Readme, but I've also included some reference images to clarify how the grid system is set up. Grid Information ACNH Automator v1.0 assumes that your trees are spaced exactly one grid space apart from each other in the x and y direction. Options for updating this will be available in a future release. Grid space is measured in [x,y] and assumes that [0,0] is the space directly to the left of the top-left tree. The nook_grid value should be exactly 2 spaces below Nook's Cranny to avoid running into the building by accident. Other recommendations You MUST make sure that your inventory selector (the hand icon when you're in your inventory) is located on the first inventory space, or the selling process will not work properly. I also recommend clearing out your inventory of anything you don't want to accidentally sell until you are very comfortable with this toolset. It's important to have Nook's Cranny located as close as possible to your tree grid in order to make traveling to sell your fruit easier. I recommend separating your tree grid from the rest of the town to avoid the possibility of villagers getting in your way; I solved this by building on a cliff that is inaccessible to villagers. Try to only have one space available on either side of your tree grid; this will help your character "get back on track" if the automation goes awry. Once you've entered your town's data into run_controller_cli.py, you can navigate your character to grid space [0,0] and move on to the next step. Emulating the controller and running "pick_trees" ACNH Automator relies on joycontrol to run, so you'll need to first navigate to the Change Grip/Order menu on your Nintendo Switch, run joycontrol to begin emulating a controller, and then navigate back to Animal Crossing before running the pick_trees command. To start this process, cd into the main joycontrol directory and run the following command: sudo python3 run_controller_cli.py PRO_CONTROLLER Here is an example of what running joycontrol will look like. You might have to hit CTRL-C once or twice if it doesn't connect to your Switch within a few seconds. Once joycontrol is up and running and your character is at [0,0] facing the first tree, you can simply run the following command: pick_trees Running pick_trees should look something like this. Based on the information you entered about your town, your character will: Navigate through the grid, harvesting fruit from each tree in the x direction until it reaches the last tree in the row. Travel down two spaces in the y direction to proceed to the next row, and change direction accordingly. Stop picking trees when a threshold is met for the amount of fruit that can be safely stored in your inventory. Travel to Nook's Cranny to sell all of the fruit, and travel back to the next tree that needs to be picked.
Repeat this process until all fruit is harvested and sold. Here is an example of what this process looks like in action:
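For reference, here is a minimal sketch of the kind of edit the Getting Started section describes around line 63 of run_controller_cli.py. The dataclass stand-in, the argument names, and the sample values below are assumptions made purely for illustration; the real TreePickLogic class, its parameters, and its defaults live in run_controller_cli.py and are documented in the Readme.
from dataclasses import dataclass
from typing import List

# Stand-in for the real TreePickLogic class defined in run_controller_cli.py.
# Every field name here is an assumption for illustration, not the actual API.
@dataclass
class TreePickLogic:
    start_grid: List[int]          # [x, y] space directly to the left of the top-left tree
    tree_rows: int                 # assumed: number of rows of trees in your grid
    trees_per_row: int             # assumed: trees per row, spaced one grid space apart
    nook_grid: List[int]           # [x, y] space exactly 2 spaces below Nook's Cranny
    inventory_threshold: int = 30  # assumed: head to Nook's Cranny once this much fruit is held

# Sample town data, analogous to the values you would swap in on line 63.
tree_pick_data = TreePickLogic(
    start_grid=[0, 0],
    tree_rows=3,
    trees_per_row=10,
    nook_grid=[12, 8],
)
Once the real values for your own town are in place, the pick_trees command uses them to plan the harvesting and selling route described above.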
https://medium.com/swlh/picking-peaches-with-python-in-animal-crossing-new-horizons-75274706ee79
['Arthur Wilton']
2020-11-11 15:53:12.004000+00:00
['Python', 'Nintendo Switch', 'Animal Crossing Switch', 'Animal Crossing', 'Github']
Don't make errors on error messages
When users are exploring a system, it's like they are walking a path towards their goal. Mistakes are unavoidable on any path. And when they happen, UX writers must come in as a tour guide to quickly help users out so they can continue their journey. Product teams may sometimes overlook tiny error messages and let the developers decide their words, which often sound like a robot talking to a human. I just do not agree with this approach. As you may have experienced yourself, a small but careless message such as "Invalid password" can leave you frustrated, or even make you leave the system if it does not tell you where you went wrong. Therefore, error messages should be taken care of, as they should help users solve the problem and move on. 3 types of errors In-line errors These are small errors that happen when users are taking an action; they can still move forward, but they are advised to make a correction before moving on. For example, when users make a mistake inputting their phone number into a field, error messages appear to notify them and help them move on by asking them to fill in their number with 10 digits. Example of an in-line error Tips on these errors: The text can be very short and, in general, can clarify, remind, or instruct an ongoing conversation between the person and the experience instead of stopping their actions. Detour errors These are errors that occur when the person can't get where they want to go in the way they anticipated, but they can still get there (usually when they need to complete an action before continuing). For example, as below: when users make a payment, they are required to add a card first. They can still get to where they are going, but they have to complete an action before it. Tips on these errors: Should provide instruction first, then explanation, and then the single action to take to move forward Blocking errors These are errors that occur when the way forward is blocked until the person takes an action that is outside of the scope of the experience (Internet off, site under construction). Example: below is one blocking error message, when the Internet is off. Users are required to take an action outside the scope of the app (turn on Wifi and connect to the Internet) Tips on these errors: Should provide instruction first, then explanation, and then the single action to take to move forward Common rules when writing error messages Besides tips for each type of error message, there are common rules for any time UX writers play with words. Purposeful: Have a purpose in mind Error messages must have purposes aligned with users' purposes, that is, telling users what they are experiencing, why it happened and what they can do to move on. Don't speak the "what" without the "why" and "so what". In this case, the purpose of the user is to make a payment. And when an error happens, if no focus is put on writing, the error message could look like this. Only talk about the "what" No, we cannot let this happen. Imagine the user sees this, and then what? He would ask: why is it unsuccessful? And what could I do next? In this case, imagine you are the user: you want to make a payment. To get to a successful payment, you need to know why it is unsuccessful and how you could make it successful to fulfill your goal. Then the message could look like this: The "what", with the "why" and "so what" Concise: Cut them short and meaningful Let's face it: copywriting is there to sell, but UX writing is here to guide.
People have a goal when they come to a system and they have no time to read UX texts. So make every message short and straight to the point. The easiest way to do this is to start with an imperative verb that tells users how to get through the problem and move on. Take an example: Long and not meaningful what to do Well, and what is the standard format, and what is the required format? Users do not want a long story like this; they want something short and straight to the point, like this: Short and meaningful This message concisely tells them what to do, in the right way. Conversational: Talk to users like a human Most users are not interested in the technical details of the problem that occurred. So make humans recognize they are interacting with the words; they are in conversation with the experience. It means that the messages should be in normal language, not technical terms or codes. Like these: This message contains technical jargon This message contains technical jargon This message contains technical jargon Do normal humans understand these codes? Instead of doing this: Talk like a robot and users cannot understand Please do this: Talk like a human Clear: Cut them short The right words will be the ones that the people using the experience will recognize immediately, without having to think. They must not be ambiguous about the problem and make users ask questions like "Exactly what is going on?" Like these bad messages: Windows makes it hard to realize what kind of problem users are in Why is it invalid? These messages are so unhelpful because they do not tell users clearly what they are experiencing, and therefore they cannot find a way to move forward. Instead of saying this: Vague about problems Please say this: Longer but clearer Conclusion Error messages have a great influence on user experience, reflecting brand voice and personality. Pay attention to error messages to better communicate with users and make the experience worth their time. References Strategic writing for UX — Torrey Podmajersky
https://medium.com/uxpress/dont-make-errors-on-error-messages-a132f3770bf2
['Nguyen Anh Linh Giang']
2020-02-18 14:17:02.286000+00:00
['UX Design', 'Ux Writing', 'Content Strategy', 'Design', 'UX']
Creative and Social "Hotel Project" Wins Award
https://medium.com/workersonthefield/kreatives-soziales-hotelprojekt-ausgezeichnet-dd85cbf1bdd5
['Reinhard Lanner']
2016-06-26 08:32:00.122000+00:00
['Architecture', 'Hotel', 'Design']
You Belong Here
You Belong Here A poem Photo by Noah Silliman on Unsplash Yesterday, your heart broke into a million pieces, and then it broke into a million more. And yet, you’re still here. Yesterday, you cried enough tears to fill all of the oceans in this great big world. And yet, you’re still here. Yesterday, you threw your hands up to the sky, and held onto the lie that you’re not strong enough to withstand all of this. And yet, you’re still here. You’re still here. You’re still here, with lungs still breathing, and eyes still blinking and tearing and seeing. You’re still here, with tender wounds and silver scars, and a heart that has had a million breaks, and yet, it continues to beat. You’re still here, with lessons you’ve learned from this life that you’re living, and a heart that can continue to keep loving. The world has tried to break your spirit and steal your light. And yet, you’re still here. You belong here.
https://medium.com/assemblage/you-belong-here-6c264128e9ad
['Megan Minutillo']
2020-12-23 14:28:16.058000+00:00
['Poetry', 'Poesía', 'Poetry On Medium', 'Encouragement', 'Self-awareness']
Manning Park Resort — March 6/20. Aging gracefully, and still skiing up a…
Gord represents my inspiration for continuing to enjoy this sport. He is 77 years young. I love it!! He boasts having hit the magic number for the $25 pass: age 75. He actually resides in the Yukon, but spends his winters here as the snow is better for skiing. Very cool. A fun fact about Gord is he has tried to ski the number of times each year to match his age. At age 65, he almost made it. He skied 63 times that season!! That is A LOT!! I also loved something important that Gord had to say, which really mimics what I stand for, and why I even blog about skiing (as it relates to mental health, and getting outside): Any time I have to stay in a big city for more than four or five days, I get what I call “Nature Deficit Disorder” [NDD] A great expression, indeed! I think I will adopt it. In that vein, Gord also commented on another important fact that goes along with being active and outdoors in the winter. He noticed that if he does not get himself out and on the hill much during the winter, that in the off season “his tummy is a little larger” and his joints a little stiffer. He actually took up telemark skiing only about five years ago to increase his mobility (much to his wife’s chagrin). Anyway, Gord said that following a less active winter, he then has less ability to do the summer things that he enjoys, like hiking and such. This is such an important factor for all of us to embrace, with whatever sport/hobby we can engage in during the winter months. Gord was also full of interesting historical information about Manning. For example, there is a run called Featherstone, which he said was named after Frank Featherstone. He skied here until he was 91! His wife also skied until she was 88 or 89. How spectacular is that?! Fun fact: apparently Frank was about 5 feet tall, and his wife about 4'6". I would say that makes things easier when you don’t have far to fall, lol. With regards to Manning itself, I would highly recommend it! It only has two chairlifts (a brand new one last year), but covers a great deal of terrain. The landscape is very beautiful. I understand it is also common for Manning to have over 10 cm of fresh snow overnight on a regular basis. Lots of powder to be found! Whether it is skiing, hiking, biking, running or walking-please get outside yourself! Your mental health will thank you. :)
https://medium.com/mind-your-madness/manning-park-resort-march-6-20-9c3b2a7846eb
['Jennifer Hammersmark']
2020-03-09 15:11:31.641000+00:00
['Outdoors', 'Exercise', 'Skiing', 'Vitamin D', 'Mental Health']
Special benefits are the decider for workers’ happiness
Special benefits are the decider for workers’ happiness The right programs stop people from walking out the door Photo by Fauxels Organizations struggle to provide the right benefits for their workers. Many leaders and managers don’t understand the basic wants and needs of rank-and-file employees, which is likely different from that of the top echelon. Coming from different professional and personal backgrounds, companies large or small can’t rely on managers or executives to know what everyone in the organization desires. “Your organization probably invests a lot of time, energy and money to retain top employees,” said Meghan M. Biro, analyst, brand strategist, podcaster and TalentCulture chief executive officer. “Yet, at least occasionally, you still wind up losing them to competitors.” She wondered about how to put an end to that unproductive cycle. “What can you offer your employees that means enough for them to stay?” Biro said. “As an employer, what’s your real value proposition? A beautiful office? No, not when we’re working remotely. Free gym memberships or great retreats? Soon, hopefully, but not now.” She contends that to retain top talent in today’s work environment, it’s not about perks. “Retention is about what employees really need,” Biro said, turning to Chris Wakely, executive vice president of global sales at Benify, which specializes in employee benefits around the world. His company compiled The Benefits and Engagement Report: A European Employer’s Guide to Employee Experience for the 2020s. The survey was conducted early in 2020, near the beginning of the global pandemic. “Despite all the craziness, about 5,000 people took the survey,” Wakely said. “We asked them what they think about their employer. What benefits, other than salary, do they want? “It was a really interesting time to be asking these questions as people dug into their new reality,” he said. “We really got an understanding of how employees think and act in the middle of change.” Benefits rule One takeaway, according to Wakely: Nine out of 10 employees aged under 30 say they would consider changing employers to receive better employee benefits. The revolving door of worker turnover is real. “A huge reason organizations struggle with providing the right benefits is that there’s a misconception,” Biro said. “What benefits employees is far beyond simply providing health and dental insurance. There’s so much more that goes into it.” Wakely breaks the problem down to two main reasons. “Benefits aren’t a one-size-fits-all model,” he said. “Each generation has its own needs and preferences. A company’s employee benefits offering needs to be personalized. One way is through offering a flexible benefits plan. “Human resources professionals might not have access to insights about their employees’ needs and wants,” Wakely said. “The guesswork can be removed through a global dashboard where administrators get an overview of benefits in use along with spending and supplier costs.” Employers must adapt as circumstances change. “When it comes to building a benefits strategy, perhaps the most important thing of all is flexibility — allowing employees to customize and personalize their benefits based on their needs,” Wakely said. “There are several ways to offer flexibility. “You can remove assumptions and find out what employees really want,” he said, citing one of his company’s related posts. Employees need to understand what’s in it for them when it comes to benefits. 
That means engaging education on their level of understanding. The common person is not a licensed insurance agent well versed in arcane legal language. Workers need translators who care. Well-chosen words matter “Evaluate your benefits,” Wakely said. “Find out what your employees think about your offer and which benefits are working. Align your benefits to other organizational goals. For example, if your goal includes promoting more remote working, offer more digital benefits. “The greatest benefits in the world aren’t worth anything if they aren’t communicated properly,” he said. “Thinking outside the box is important along with giving employees the flexibility to choose.” Conventional approaches will stymie creativity. “Benefits can include everything beyond compensation,” Biro said. “There are so many ways to provide them that meet employees’ real-life and working needs. “Small to medium-sized organizations should consider working with an outside service provider to improve the benefits experience,” she said. “It’s not just the what, it’s the how, the where, the when, too.” Biro questions how well employers perceive their workers’ benefits experience. “I often say this: Before you embark on changes, find out,” she said. “Take the pulse of your workforce.” A total rewards experience is a valuable hiring and retention tool. Management should not make employees cherry pick happiness. One benefit that addresses a particular need will not satisfy those with lingering wants in other areas. A bad overall experience will send people out the door to greener pastures. “So much influences an employee’s decision to share an experience — which affects the employer brand for prospective hires,” Biro said. “How they’re treated is clearly a major factor. “Consider the isolation, the disruptions, the noise, the pressures of working from home — even for those who love it,” she said. “Now balance a moment of happiness, a gesture of recognition against that. It’s a big deal.” Full view brings clarity Employees’ satisfaction rests with having the big picture about their benefits. “When employees only see part of their compensation, other important benefits such as insurance, pension and add-ons are overlooked,” Wakely said. “This undervalues the employee’s total reward package and wastes money on unused benefits from the employer’s perspective. “In today’s competitive job market where companies compete to attract and retain talent, this can make the difference of a candidate choosing one employer over the other,” he said. “Knowing what your employees want is essential. Give them the flexibility to choose their compensation package.” He referred to Benify’s benefit and engagement report from a survey of 5,000 employees to back up his recommendations. About The Author Jim Katzaman is a manager at Largo Financial Services and worked in public affairs for the Air Force and federal government. You can connect with him on Twitter, Facebook and LinkedIn.
https://medium.com/datadriveninvestor/special-benefits-are-the-decider-for-workers-happiness-bdc4b2207410
['Jim Katzaman - Get Out Of Debt']
2020-10-26 10:27:09.684000+00:00
['Entrepreneurship', 'Management', 'Remote Working', 'Benefits', 'Recruiting']
Text Classification with NLP: Tf-Idf vs Word2Vec vs BERT
Setup First of all, I need to import the following libraries: ## for data import json import pandas as pd import numpy as np ## for plotting import matplotlib.pyplot as plt import seaborn as sns ## for processing import re import nltk ## for bag-of-words, feature selection and evaluation from sklearn import feature_extraction, feature_selection, metrics, model_selection, naive_bayes, pipeline, manifold, preprocessing ## for explainer from lime import lime_text ## for word embedding import gensim import gensim.downloader as gensim_api ## for deep learning from tensorflow.keras import models, layers, preprocessing as kprocessing from tensorflow.keras import backend as K ## for bert language model import transformers The dataset is contained in a JSON file, so I will first read it into a list of dictionaries with json and then transform it into a pandas Dataframe. lst_dics = [] with open('data.json', mode='r', errors='ignore') as json_file: for dic in json_file: lst_dics.append( json.loads(dic) ) ## print the first one lst_dics[0] The original dataset contains over 30 categories, but for the purposes of this tutorial, I will work with a subset of 3: Entertainment, Politics, and Tech. ## create dtf dtf = pd.DataFrame(lst_dics) ## filter categories dtf = dtf[ dtf["category"].isin(['ENTERTAINMENT','POLITICS','TECH']) ][["category","headline"]] ## rename columns dtf = dtf.rename(columns={"category":"y", "headline":"text"}) ## print 5 random rows dtf.sample(5) In order to understand the composition of the dataset, I am going to look into the univariate distribution of the target by showing labels frequency with a bar plot. fig, ax = plt.subplots() fig.suptitle("y", fontsize=12) dtf["y"].reset_index().groupby("y").count().sort_values(by= "index").plot(kind="barh", legend=False, ax=ax).grid(axis='x') plt.show() The dataset is imbalanced: the proportion of Tech news is really small compared to the others; this will make it rather tough for models to recognize Tech news. Before explaining and building the models, I am going to give an example of preprocessing by cleaning text, removing stop words, and applying lemmatization. I will write a function and apply it to the whole dataset. ''' Preprocess a string. :parameter :param text: string - name of column containing text :param lst_stopwords: list - list of stopwords to remove :param flg_stemm: bool - whether stemming is to be applied :param flg_lemm: bool - whether lemmatisation is to be applied :return cleaned text ''' def utils_preprocess_text(text, flg_stemm=False, flg_lemm=True, lst_stopwords=None): ## clean (convert to lowercase and remove punctuations and characters and then strip) text = re.sub(r'[^\w\s]', '', str(text).lower().strip()) ## Tokenize (convert from string to list) lst_text = text.split() ## remove Stopwords if lst_stopwords is not None: lst_text = [word for word in lst_text if word not in lst_stopwords] ## Stemming (remove -ing, -ly, ...) if flg_stemm == True: ps = nltk.stem.porter.PorterStemmer() lst_text = [ps.stem(word) for word in lst_text] ## Lemmatisation (convert the word into root word) if flg_lemm == True: lem = nltk.stem.wordnet.WordNetLemmatizer() lst_text = [lem.lemmatize(word) for word in lst_text] ## back to string from list text = " ".join(lst_text) return text That function removes a set of words from the corpus if given. I can create a list of generic stop words for the English vocabulary with nltk (we could edit this list by adding or removing words).
lst_stopwords = nltk.corpus.stopwords.words("english") lst_stopwords Now I shall apply the function I wrote on the whole dataset and store the result in a new column named “text_clean” so that you can choose to work with the raw corpus or the preprocessed text. dtf["text_clean"] = dtf["text"].apply(lambda x: utils_preprocess_text(x, flg_stemm=False, flg_lemm=True, lst_stopwords=lst_stopwords)) dtf.head() If you are interested in a deeper text analysis and preprocessing, you can check this article. With this in mind, I am going to partition the dataset into training set (70%) and test set (30%) in order to evaluate the models performance. ## split dataset dtf_train, dtf_test = model_selection.train_test_split(dtf, test_size=0.3) ## get target y_train = dtf_train["y"].values y_test = dtf_test["y"].values Let’s get started, shall we? Bag-of-Words The Bag-of-Words model is simple: it builds a vocabulary from a corpus of documents and counts how many times the words appear in each document. To put it another way, each word in the vocabulary becomes a feature and a document is represented by a vector with the same length of the vocabulary (a “bag of words”). For instance, let’s take 3 sentences and represent them with this approach: Feature matrix shape: Number of documents x Length of vocabulary As you can imagine, this approach causes a significant dimensionality problem: the more documents you have the larger is the vocabulary, so the feature matrix will be a huge sparse matrix. Therefore, the Bag-of-Words model is usually preceded by an important preprocessing (word cleaning, stop words removal, stemming/lemmatization) aimed to reduce the dimensionality problem. Terms frequency is not necessarily the best representation for text. In fact, you can find in the corpus common words with the highest frequency but little predictive power over the target variable. To address this problem there is an advanced variant of the Bag-of-Words that, instead of simple counting, uses the term frequency–inverse document frequency (or Tf–Idf). Basically, the value of a word increases proportionally to count, but it is inversely proportional to the frequency of the word in the corpus. Let’s start with the Feature Engineering, the process to create features by extracting information from the data. I am going to use the Tf-Idf vectorizer with a limit of 10,000 words (so the length of my vocabulary will be 10k), capturing unigrams (i.e. “new” and “york”) and bigrams (i.e. “new york”). I will provide the code for the classic count vectorizer as well: ## Count (classic BoW) vectorizer = feature_extraction.text.CountVectorizer(max_features=10000, ngram_range=(1,2)) ## Tf-Idf (advanced variant of BoW) vectorizer = feature_extraction.text.TfidfVectorizer(max_features=10000, ngram_range=(1,2)) Now I will use the vectorizer on the preprocessed corpus of the train set to extract a vocabulary and create the feature matrix. 
corpus = dtf_train["text_clean"] vectorizer.fit(corpus) X_train = vectorizer.transform(corpus) dic_vocabulary = vectorizer.vocabulary_ The feature matrix X_train has a shape of 34,265 (Number of documents in training) x 10,000 (Length of vocabulary) and it's pretty sparse: sns.heatmap(X_train.todense()[:,np.random.randint(0,X_train.shape[1],100)]==0, vmin=0, vmax=1, cbar=False).set_title('Sparse Matrix Sample') Random sample from the feature matrix (non-zero values in black) In order to know the position of a certain word, we can look it up in the vocabulary: word = "new york" dic_vocabulary[word] If the word exists in the vocabulary, this command prints a number N, meaning that the Nth feature of the matrix is that word. In order to drop some columns and reduce the matrix dimensionality, we can carry out some Feature Selection, the process of selecting a subset of relevant variables. I will proceed as follows: treat each category as binary (for example, the "Tech" category is 1 for the Tech news and 0 for the others); perform a Chi-Square test to determine whether a feature and the (binary) target are independent; keep only the features with a certain p-value from the Chi-Square test. y = dtf_train["y"] X_names = vectorizer.get_feature_names() p_value_limit = 0.95 dtf_features = pd.DataFrame() for cat in np.unique(y): chi2, p = feature_selection.chi2(X_train, y==cat) dtf_features = dtf_features.append(pd.DataFrame( {"feature":X_names, "score":1-p, "y":cat})) dtf_features = dtf_features.sort_values(["y","score"], ascending=[True,False]) dtf_features = dtf_features[dtf_features["score"]>p_value_limit] X_names = dtf_features["feature"].unique().tolist() I reduced the number of features from 10,000 to 3,152 by keeping the most statistically relevant ones. Let's print some: for cat in np.unique(y): print("# {}:".format(cat)) print(" . selected features:", len(dtf_features[dtf_features["y"]==cat])) print(" . top features:", ",".join( dtf_features[dtf_features["y"]==cat]["feature"].values[:10])) print(" ") We can refit the vectorizer on the corpus by giving this new set of words as input. That will produce a smaller feature matrix and a shorter vocabulary. vectorizer = feature_extraction.text.TfidfVectorizer(vocabulary=X_names) vectorizer.fit(corpus) X_train = vectorizer.transform(corpus) dic_vocabulary = vectorizer.vocabulary_ The new feature matrix X_train has a shape of 34,265 (Number of documents in training) x 3,152 (Length of the given vocabulary). Let's see if the matrix is less sparse: Random sample from the new feature matrix (non-zero values in black) It's time to train a machine learning model and test it. I recommend using a Naive Bayes algorithm: a probabilistic classifier that makes use of Bayes' Theorem, a rule that uses probability to make predictions based on prior knowledge of conditions that might be related. This algorithm is the most suitable for such a large dataset as it considers each feature independently, calculates the probability of each category, and then predicts the category with the highest probability. classifier = naive_bayes.MultinomialNB() I'm going to train this classifier on the feature matrix and then test it on the transformed test set. To that end, I need to build a scikit-learn pipeline: a sequential application of a list of transformations and a final estimator. Putting the Tf-Idf vectorizer and the Naive Bayes classifier in a pipeline allows us to transform and predict test data in just one step. 
## pipeline model = pipeline.Pipeline([("vectorizer", vectorizer), ("classifier", classifier)]) ## train classifier model["classifier"].fit(X_train, y_train) ## test X_test = dtf_test["text_clean"].values predicted = model.predict(X_test) predicted_prob = model.predict_proba(X_test) We can now evaluate the performance of the Bag-of-Words model, I will use the following metrics: Accuracy: the fraction of predictions the model got right. Confusion Matrix: a summary table that breaks down the number of correct and incorrect predictions by each class. ROC: a plot that illustrates the true positive rate against the false positive rate at various threshold settings. The area under the curve (AUC) indicates the probability that the classifier will rank a randomly chosen positive observation higher than a randomly chosen negative one. Precision: the fraction of relevant instances among the retrieved instances. Recall: the fraction of the total amount of relevant instances that were actually retrieved. classes = np.unique(y_test) y_test_array = pd.get_dummies(y_test, drop_first=False).values ## Accuracy, Precision, Recall accuracy = metrics.accuracy_score(y_test, predicted) auc = metrics.roc_auc_score(y_test, predicted_prob, multi_class="ovr") print("Accuracy:", round(accuracy,2)) print("Auc:", round(auc,2)) print("Detail:") print(metrics.classification_report(y_test, predicted)) ## Plot confusion matrix cm = metrics.confusion_matrix(y_test, predicted) fig, ax = plt.subplots() sns.heatmap(cm, annot=True, fmt='d', ax=ax, cmap=plt.cm.Blues, cbar=False) ax.set(xlabel="Pred", ylabel="True", xticklabels=classes, yticklabels=classes, title="Confusion matrix") plt.yticks(rotation=0) fig, ax = plt.subplots(nrows=1, ncols=2) ## Plot roc for i in range(len(classes)): fpr, tpr, thresholds = metrics.roc_curve(y_test_array[:,i], predicted_prob[:,i]) ax[0].plot(fpr, tpr, lw=3, label='{0} (area={1:0.2f})'.format(classes[i], metrics.auc(fpr, tpr)) ) ax[0].plot([0,1], [0,1], color='navy', lw=3, linestyle='--') ax[0].set(xlim=[-0.05,1.0], ylim=[0.0,1.05], xlabel='False Positive Rate', ylabel="True Positive Rate (Recall)", title="Receiver operating characteristic") ax[0].legend(loc="lower right") ax[0].grid(True) ## Plot precision-recall curve for i in range(len(classes)): precision, recall, thresholds = metrics.precision_recall_curve( y_test_array[:,i], predicted_prob[:,i]) ax[1].plot(recall, precision, lw=3, label='{0} (area={1:0.2f})'.format(classes[i], metrics.auc(recall, precision)) ) ax[1].set(xlim=[0.0,1.05], ylim=[0.0,1.05], xlabel='Recall', ylabel="Precision", title="Precision-Recall curve") ax[1].legend(loc="best") ax[1].grid(True) plt.show() The BoW model got 85% of the test set right (Accuracy is 0.85), but struggles to recognize Tech news (only 252 predicted correctly). Let’s try to understand why the model classifies news with a certain category and assess the explainability of these predictions. The lime package can help us to build an explainer. To give an illustration, I will take a random observation from the test set and see what the model predicts and why. 
## select observation i = 0 txt_instance = dtf_test["text"].iloc[i] ## check true value and predicted value print("True:", y_test[i], "--> Pred:", predicted[i], "| Prob:", round(np.max(predicted_prob[i]),2)) ## show explanation explainer = lime_text.LimeTextExplainer(class_names= np.unique(y_train)) explained = explainer.explain_instance(txt_instance, model.predict_proba, num_features=3) explained.show_in_notebook(text=txt_instance, predict_proba=False) That makes sense: the words “Clinton” and “GOP” pointed the model in the right direction (Politics news) even if the word “Stage” is more common among Entertainment news. Word Embedding Word Embedding is the collective name for feature learning techniques where words from the vocabulary are mapped to vectors of real numbers. These vectors are calculated from the probability distribution for each word appearing before or after another. To put it another way, words of the same context usually appear together in the corpus, so they will be close in the vector space as well. For instance, let’s take the 3 sentences from the previous example: Words embedded in 2D vector space In this tutorial, I’m going to use the first model of this family: Google’s Word2Vec (2013). Other popular Word Embedding models are Stanford’s GloVe (2014) and Facebook’s FastText (2016). Word2Vec produces a vector space, typically of several hundred dimensions, with each unique word in the corpus such that words that share common contexts in the corpus are located close to one another in the space. That can be done using 2 different approaches: starting from a single word to predict its context (Skip-gram) or starting from the context to predict a word (Continuous Bag-of-Words). In Python, you can load a pre-trained Word Embedding model from genism-data like this: nlp = gensim_api.load("word2vec-google-news-300") Instead of using a pre-trained model, I am going to fit my own Word2Vec on the training data corpus with gensim. Before fitting the model, the corpus needs to be transformed into a list of lists of n-grams. In this particular case, I’ll try to capture unigrams (“york”), bigrams (“new york”), and trigrams (“new york city”). corpus = dtf_train["text_clean"] ## create list of lists of unigrams lst_corpus = [] for string in corpus: lst_words = string.split() lst_grams = [" ".join(lst_words[i:i+1]) for i in range(0, len(lst_words), 1)] lst_corpus.append(lst_grams) ## detect bigrams and trigrams bigrams_detector = gensim.models.phrases.Phrases(lst_corpus, delimiter=" ".encode(), min_count=5, threshold=10) bigrams_detector = gensim.models.phrases.Phraser(bigrams_detector) trigrams_detector = gensim.models.phrases.Phrases(bigrams_detector[lst_corpus], delimiter=" ".encode(), min_count=5, threshold=10) trigrams_detector = gensim.models.phrases.Phraser(trigrams_detector) When fitting the Word2Vec, you need to specify: the target size of the word vectors, I’ll use 300; the window, or the maximum distance between the current and predicted word within a sentence, I’ll use the mean length of text in the corpus; the training algorithm, I’ll use skip-grams (sg=1) as in general it has better results. ## fit w2v nlp = gensim.models.word2vec.Word2Vec(lst_corpus, size=300, window=8, min_count=1, sg=1, iter=30) We have our embedding model, so we can select any word from the corpus and transform it into a vector. 
word = "data" nlp[word].shape We can even use it to visualize a word and its context into a smaller dimensional space (2D or 3D) by applying any dimensionality reduction algorithm (i.e. TSNE). word = "data" fig = plt.figure() ## word embedding tot_words = [word] + [tupla[0] for tupla in nlp.most_similar(word, topn=20)] X = nlp[tot_words] ## pca to reduce dimensionality from 300 to 3 pca = manifold.TSNE(perplexity=40, n_components=3, init='pca') X = pca.fit_transform(X) ## create dtf dtf_ = pd.DataFrame(X, index=tot_words, columns=["x","y","z"]) dtf_["input"] = 0 dtf_["input"].iloc[0:1] = 1 ## plot 3d from mpl_toolkits.mplot3d import Axes3D ax = fig.add_subplot(111, projection='3d') ax.scatter(dtf_[dtf_["input"]==0]['x'], dtf_[dtf_["input"]==0]['y'], dtf_[dtf_["input"]==0]['z'], c="black") ax.scatter(dtf_[dtf_["input"]==1]['x'], dtf_[dtf_["input"]==1]['y'], dtf_[dtf_["input"]==1]['z'], c="red") ax.set(xlabel=None, ylabel=None, zlabel=None, xticklabels=[], yticklabels=[], zticklabels=[]) for label, row in dtf_[["x","y","z"]].iterrows(): x, y, z = row ax.text(x, y, z, s=label) That’s pretty cool and all, but how can the word embedding be useful to predict the news category? Well, the word vectors can be used in a neural network as weights. This is how: First, transform the corpus into padded sequences of word ids to get a feature matrix. Then, create an embedding matrix so that the vector of the word with id N is located at the Nth row. Finally, build a neural network with an embedding layer that weighs every word in the sequences with the corresponding vector. Let’s start with the Feature Engineering by transforming the same preprocessed corpus (list of lists of n-grams) given to the Word2Vec into a list of sequences using tensorflow/keras: ## tokenize text tokenizer = kprocessing.text.Tokenizer(lower=True, split=' ', oov_token="NaN", filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t ') tokenizer.fit_on_texts(lst_corpus) dic_vocabulary = tokenizer.word_index ## create sequence lst_text2seq= tokenizer.texts_to_sequences(lst_corpus) ## padding sequence X_train = kprocessing.sequence.pad_sequences(lst_text2seq, maxlen=15, padding="post", truncating="post") The feature matrix X_train has a shape of 34,265 x 15 (Number of sequences x Sequences max length). Let’s visualize it: sns.heatmap(X_train==0, vmin=0, vmax=1, cbar=False) plt.show() Feature matrix (34,265 x 15) Every text in the corpus is now an id sequence with length 15. For instance, if a text had 10 tokens in it, then the sequence is composed of 10 ids + 5 0s, which is the padding element (while the id for word not in the vocabulary is 1). Let’s print how a text from the train set has been transformed into a sequence with the padding and the vocabulary. i = 0 ## list of text: ["I like this", ...] len_txt = len(dtf_train["text_clean"].iloc[i].split()) print("from: ", dtf_train["text_clean"].iloc[i], "| len:", len_txt) ## sequence of token ids: [[1, 2, 3], ...] len_tokens = len(X_train[i]) print("to: ", X_train[i], "| len:", len(X_train[i])) ## vocabulary: {"I":1, "like":2, "this":3, ...} print("check: ", dtf_train["text_clean"].iloc[i].split()[0], " -- idx in vocabulary -->", dic_vocabulary[dtf_train["text_clean"].iloc[i].split()[0]]) print("vocabulary: ", dict(list(dic_vocabulary.items())[0:5]), "... 
(padding element, 0)") Before moving on, don’t forget to do the same feature engineering on the test set as well: corpus = dtf_test["text_clean"] ## create list of n-grams lst_corpus = [] for string in corpus: lst_words = string.split() lst_grams = [" ".join(lst_words[i:i+1]) for i in range(0, len(lst_words), 1)] lst_corpus.append(lst_grams) ## detect common bigrams and trigrams using the fitted detectors lst_corpus = list(bigrams_detector[lst_corpus]) lst_corpus = list(trigrams_detector[lst_corpus]) ## text to sequence with the fitted tokenizer lst_text2seq = tokenizer.texts_to_sequences(lst_corpus) ## padding sequence X_test = kprocessing.sequence.pad_sequences(lst_text2seq, maxlen=15, padding="post", truncating="post") X_test (14,697 x 15) We’ve got our X_train and X_test, now we need to create the matrix of embedding that will be used as a weight matrix in the neural network classifier. ## start the matrix (length of vocabulary x vector size) with all 0s embeddings = np.zeros((len(dic_vocabulary)+1, 300)) for word,idx in dic_vocabulary.items(): ## update the row with vector try: embeddings[idx] = nlp[word] ## if word not in model then skip and the row stays all 0s except: pass That code generates a matrix of shape 22,338 x 300 (Length of vocabulary extracted from the corpus x Vector size). It can be navigated by word id, which can be obtained from the vocabulary. word = "data" print("dic[word]:", dic_vocabulary[word], "|idx") print("embeddings[idx]:", embeddings[dic_vocabulary[word]].shape, "|vector") It’s finally time to build a deep learning model. I’m going to use the embedding matrix in the first Embedding layer of the neural network that I will build and train to classify the news. Each id in the input sequence will be used as the index to access the embedding matrix. The output of this Embedding layer will be a 2D matrix with a word vector for each word id in the input sequence (Sequence length x Vector size). Let’s use the sentence “I like this article” as an example: My neural network shall be structured as follows: an Embedding layer that takes the sequences as input and the word vectors as weights, just as described before. A simple Attention layer that won’t affect the predictions but it’s going to capture the weights of each instance and allow us to build a nice explainer (it isn't necessary for the predictions, just for the explainability, so you can skip it). The Attention mechanism was presented in this paper (2014) as a solution to the problem of the sequence models (i.e. LSTM) to understand what parts of a long text are actually relevant. Two layers of Bidirectional LSTM to model the order of words in a sequence in both directions. Two final dense layers that will predict the probability of each news category. 
## code attention layer def attention_layer(inputs, neurons): x = layers.Permute((2,1))(inputs) x = layers.Dense(neurons, activation="softmax")(x) x = layers.Permute((2,1), name="attention")(x) x = layers.multiply([inputs, x]) return x ## input x_in = layers.Input(shape=(15,)) ## embedding x = layers.Embedding(input_dim=embeddings.shape[0], output_dim=embeddings.shape[1], weights=[embeddings], input_length=15, trainable=False)(x_in) ## apply attention x = attention_layer(x, neurons=15) ## 2 layers of bidirectional lstm x = layers.Bidirectional(layers.LSTM(units=15, dropout=0.2, return_sequences=True))(x) x = layers.Bidirectional(layers.LSTM(units=15, dropout=0.2))(x) ## final dense layers x = layers.Dense(64, activation='relu')(x) y_out = layers.Dense(3, activation='softmax')(x) ## compile model = models.Model(x_in, y_out) model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() Now we can train the model and check the performance on a subset of the training set used for validation before testing it on the actual test set. ## encode y dic_y_mapping = {n:label for n,label in enumerate(np.unique(y_train))} inverse_dic = {v:k for k,v in dic_y_mapping.items()} y_train = np.array([inverse_dic[y] for y in y_train]) ## train training = model.fit(x=X_train, y=y_train, batch_size=256, epochs=10, shuffle=True, verbose=0, validation_split=0.3) ## plot loss and accuracy metrics = [k for k in training.history.keys() if ("loss" not in k) and ("val" not in k)] fig, ax = plt.subplots(nrows=1, ncols=2, sharey=True) ax[0].set(title="Training") ax11 = ax[0].twinx() ax[0].plot(training.history['loss'], color='black') ax[0].set_xlabel('Epochs') ax[0].set_ylabel('Loss', color='black') for metric in metrics: ax11.plot(training.history[metric], label=metric) ax11.set_ylabel("Score", color='steelblue') ax11.legend() ax[1].set(title="Validation") ax22 = ax[1].twinx() ax[1].plot(training.history['val_loss'], color='black') ax[1].set_xlabel('Epochs') ax[1].set_ylabel('Loss', color='black') for metric in metrics: ax22.plot(training.history['val_'+metric], label=metric) ax22.set_ylabel("Score", color="steelblue") plt.show() Nice! In some epochs, the accuracy reached 0.89. In order to complete the evaluation of the Word Embedding model, let’s predict the test set and compare the same metrics used before (code for metrics is the same as before). ## test predicted_prob = model.predict(X_test) predicted = [dic_y_mapping[np.argmax(pred)] for pred in predicted_prob] The model performs as good as the previous one, in fact, it also struggles to classify Tech news. But is it explainable as well? Yes, it is! I put an Attention layer in the neural network to extract the weights of each word and understand how much those contributed to classify an instance. So I’ll try to use Attention weights to build an explainer (similar to the one seen in the previous section): ## select observation i = 0 txt_instance = dtf_test["text"].iloc[i] ## check true value and predicted value print("True:", y_test[i], "--> Pred:", predicted[i], "| Prob:", round(np.max(predicted_prob[i]),2)) ## show explanation ### 1. 
preprocess input lst_corpus = [] for string in [re.sub(r'[^\w\s]','', txt_instance.lower().strip())]: lst_words = string.split() lst_grams = [" ".join(lst_words[i:i+1]) for i in range(0, len(lst_words), 1)] lst_corpus.append(lst_grams) lst_corpus = list(bigrams_detector[lst_corpus]) lst_corpus = list(trigrams_detector[lst_corpus]) X_instance = kprocessing.sequence.pad_sequences( tokenizer.texts_to_sequences(lst_corpus), maxlen=15, padding="post", truncating="post") ### 2. get attention weights layer = [layer for layer in model.layers if "attention" in layer.name][0] func = K.function([model.input], [layer.output]) weights = func(X_instance)[0] weights = np.mean(weights, axis=2).flatten() ### 3. rescale weights, remove null vector, map word-weight weights = preprocessing.MinMaxScaler(feature_range=(0,1)).fit_transform(np.array(weights).reshape(-1,1)).reshape(-1) weights = [weights[n] for n,idx in enumerate(X_instance[0]) if idx != 0] dic_word_weigth = {word:weights[n] for n,word in enumerate(lst_corpus[0]) if word in tokenizer.word_index.keys()} ### 4. barplot if len(dic_word_weigth) > 0: dtf = pd.DataFrame.from_dict(dic_word_weigth, orient='index', columns=["score"]) dtf.sort_values(by="score", ascending=True).tail(10).plot(kind="barh", legend=False).grid(axis='x') plt.show() else: print("--- No word recognized ---") ### 5. produce html visualization text = [] for word in lst_corpus[0]: weight = dic_word_weigth.get(word) if weight is not None: text.append('<b><span style="background-color:rgba(100,149,237,' + str(weight) + ');">' + word + '</span></b>') else: text.append(word) text = ' '.join(text) ### 6. visualize on notebook print("\033[1m"+"Text with highlighted words") from IPython.core.display import display, HTML display(HTML(text)) Just like before, the words "clinton" and "gop" activated the neurons of the model, but this time also "high" and "benghazi" have been considered slightly relevant for the prediction. Language Models Language Models, or Contextualized/Dynamic Word Embeddings, overcome the biggest limitation of the classic Word Embedding approach: polysemy disambiguation. A word with different meanings (e.g. "bank" or "stick") is identified by just one vector. One of the first popular ones was ELMo (2018), which doesn't apply a fixed embedding but, using a bidirectional LSTM, looks at the entire sentence and then assigns an embedding to each word. Enter Transformers: a new modeling technique presented by Google's paper Attention is All You Need (2017) in which it was demonstrated that sequence models (like LSTM) can be totally replaced by Attention mechanisms, even obtaining better performances. Google's BERT (Bidirectional Encoder Representations from Transformers, 2018) combines ELMo context embedding and several Transformers, plus it's bidirectional (which was a big novelty for Transformers). The vector BERT assigns to a word is a function of the entire sentence; therefore, a word can have different vectors based on the contexts. 
Let's try it using transformers: txt = "bank river" ## bert tokenizer tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) ## bert model nlp = transformers.TFBertModel.from_pretrained('bert-base-uncased') ## return hidden layer with embeddings input_ids = np.array(tokenizer.encode(txt))[None,:] embedding = nlp(input_ids) embedding[0][0] If we change the input text into "bank money", we get this instead: In order to complete a text classification task, you can use BERT in 3 different ways: train it from scratch and use it as a classifier. Extract the word embeddings and use them in an embedding layer (like I did with Word2Vec). Fine-tune the pre-trained model (transfer learning). I'm going with the latter and will do transfer learning from a pre-trained lighter version of BERT, called Distil-BERT (66 million parameters instead of 110 million!). ## distil-bert tokenizer tokenizer = transformers.AutoTokenizer.from_pretrained('distilbert-base-uncased', do_lower_case=True) As usual, before fitting the model there is some Feature Engineering to do, but this time it's gonna be a little trickier. To give an illustration of what I'm going to do, let's take as an example our beloved sentence "I like this article", which has to be transformed into 3 vectors (Ids, Mask, Segment): Shape: 3 x Sequence length First of all, we need to select the sequence max length. This time I'm gonna choose a much larger number (i.e. 50) because BERT splits unknown words into sub-tokens until it finds a known unigram. For example, if a made-up word like "zzdata" is given, BERT would split it into ["z", "##z", "##data"]. Moreover, we have to insert special tokens into the input text, then generate masks and segments. Finally, put all together in a tensor to get the feature matrix that will have the shape of 3 (ids, masks, segments) x Number of documents in the corpus x Sequence length. Please note that I'm using the raw text as corpus (so far I've been using the text_clean column). corpus = dtf_train["text"] maxlen = 50 ## add special tokens maxqnans = int((maxlen-20)/2) corpus_tokenized = ["[CLS] "+ " ".join(tokenizer.tokenize(re.sub(r'[^\w\s]+|\n', '', str(txt).lower().strip()))[:maxqnans])+ " [SEP] " for txt in corpus] ## generate masks masks = [[1]*len(txt.split(" ")) + [0]*(maxlen - len( txt.split(" "))) for txt in corpus_tokenized] ## padding txt2seq = [txt + " [PAD]"*(maxlen-len(txt.split(" "))) if len(txt.split(" ")) != maxlen else txt for txt in corpus_tokenized] ## generate idx idx = [tokenizer.encode(seq.split(" ")) for seq in txt2seq] ## generate segments segments = [] for seq in txt2seq: temp, i = [], 0 for token in seq.split(" "): temp.append(i) if token == "[SEP]": i += 1 segments.append(temp) ## feature matrix X_train = [np.asarray(idx, dtype='int32'), np.asarray(masks, dtype='int32'), np.asarray(segments, dtype='int32')] The feature matrix X_train has a shape of 3 x 34,265 x 50. We can check a random observation from the feature matrix: i = 0 print("txt: ", dtf_train["text"].iloc[0]) print("tokenized:", [tokenizer.convert_ids_to_tokens(idx) for idx in X_train[0][i].tolist()]) print("idx: ", X_train[0][i]) print("mask: ", X_train[1][i]) print("segment: ", X_train[2][i]) You can take the same code and apply it to dtf_test["text"] to get X_test. Now, I'm going to build the deep learning model with transfer learning from the pre-trained BERT. 
Basically, I'm going to summarize the output of BERT into one vector with Average Pooling and then add two final Dense layers to predict the probability of each news category. If you want to use the original versions of BERT, here's the code (remember to redo the feature engineering with the right tokenizer): ## inputs idx = layers.Input((50), dtype="int32", name="input_idx") masks = layers.Input((50), dtype="int32", name="input_masks") segments = layers.Input((50), dtype="int32", name="input_segments") ## pre-trained bert nlp = transformers.TFBertModel.from_pretrained("bert-base-uncased") bert_out, _ = nlp([idx, masks, segments]) ## fine-tuning x = layers.GlobalAveragePooling1D()(bert_out) x = layers.Dense(64, activation="relu")(x) y_out = layers.Dense(len(np.unique(y_train)), activation='softmax')(x) ## compile model = models.Model([idx, masks, segments], y_out) for layer in model.layers[:4]: layer.trainable = False model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() As I said, I'm going to use the lighter version instead, Distil-BERT: ## inputs idx = layers.Input((50), dtype="int32", name="input_idx") masks = layers.Input((50), dtype="int32", name="input_masks") ## pre-trained bert with config config = transformers.DistilBertConfig(dropout=0.2, attention_dropout=0.2) config.output_hidden_states = False nlp = transformers.TFDistilBertModel.from_pretrained('distilbert-base-uncased', config=config) bert_out = nlp(idx, attention_mask=masks)[0] ## fine-tuning x = layers.GlobalAveragePooling1D()(bert_out) x = layers.Dense(64, activation="relu")(x) y_out = layers.Dense(len(np.unique(y_train)), activation='softmax')(x) ## compile model = models.Model([idx, masks], y_out) for layer in model.layers[:3]: layer.trainable = False model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() Let's train, test, evaluate this bad boy (code for evaluation is the same): ## encode y dic_y_mapping = {n:label for n,label in enumerate(np.unique(y_train))} inverse_dic = {v:k for k,v in dic_y_mapping.items()} y_train = np.array([inverse_dic[y] for y in y_train]) ## train training = model.fit(x=X_train, y=y_train, batch_size=64, epochs=1, shuffle=True, verbose=1, validation_split=0.3) ## test predicted_prob = model.predict(X_test) predicted = [dic_y_mapping[np.argmax(pred)] for pred in predicted_prob] The performance of BERT is slightly better than the previous models; in fact, it can recognize more Tech news than the others. Conclusion This article has been a tutorial to demonstrate how to apply different NLP models to a multiclass classification use case. I compared 3 popular approaches: Bag-of-Words with Tf-Idf, Word Embedding with Word2Vec, and Language model with BERT. I went through Feature Engineering & Selection, Model Design & Testing, Evaluation & Explainability, comparing the 3 models in each step (where possible). Please note that I haven't covered explainability for BERT as I'm still working on that, but I will update this article as soon as I can. If you have any useful resources about that, feel free to contact me.
https://towardsdatascience.com/text-classification-with-nlp-tf-idf-vs-word2vec-vs-bert-41ff868d1794
['Mauro Di Pietro']
2020-11-26 09:53:03.349000+00:00
['Data Science', 'Artificial Intelligence', 'Machine Learning', 'Programming', 'NLP']
Two Things That Separate the Wealthy From the Non-Wealthy
Two Things That Separate the Wealthy From the Non-Wealthy The mindset of successful people is different from the norm. Photo by Keenan Barber on Unsplash One of the essential resources that we have is our time, yet so many of us ultimately view our time the wrong way. Time often creates an illusion that tricks us into thinking that we're making the best use of it when in reality, we're not. For example, we may have a list of small jobs that we must complete, and because we allocate our time to these necessary tasks, we believe that we are spending our time wisely. I'm a big believer that if you want to improve upon something, you should model the types of people who have the result that you want. In terms of time management, the wealthy are the people to follow. The wealthy are where they are because of the unique way they think about time and how much value they place upon it. My Misconceptions Regarding the Rich When I was young, I used to think rich people were mean. When I thought of rich people in their homes, I always imagined them ordering around their workers and making them work unnecessarily hard just for the hell of it. To me, rich people had cleaners, cooks, gardeners, and maids for five reasons. Because they wanted to show how powerful they are. They enjoyed the power trips. They liked ordering poor people around. They were snobs and didn't know how to get their hands dirty. They were clueless about manual labor. I was wrong. I was wrong, but this wasn't a healthy way to think about the wealthy, especially if I wanted to become one in the future. I know now that the wealthy are wealthy and continue to be more wealthy because they know how to value their own time and manage themselves better than most people. Relaxing & Producing Results The wealthy have caught on to the fact that you should spend your time doing only one of two things: relaxing and producing results. Each time a wealthy person thinks about doing something, they ask themselves if the task or activity falls into either of these two categories: Point 1 is easy to determine. It's merely a question of asking whether they deem a particular action relaxing or not. Point 2 takes a bit more thinking, which highlights the difference between how a wealthy person thinks about his time and how others do. Let's take the example of everyday housework to explain the differences. Everyone has to do housework. Everyone's houses need to be cleaned. Clothes washed, dishes washed, floor washed, vacuumed — there's no getting around it. But if you were to think about 'Housework', which category does it fall into? A (relaxing) or B (producing results)? For the sake of keeping things simple, I'd say the majority of people do not enjoy housework; thus, it may not be relaxing. It's more likely that it would be categorized in 'Producing Results' — everyone has to do it. Therefore, when it's completed, that's a result. Even for the wealthy, 'housework' falls into the category of producing results; the only difference is that he or she will get someone else to do it. He knows that he can do this because he places a monetary value on each hour of his time. Depending on the individual, it will be different. You may think you are worth £10 an hour, or the guy sitting next to you may think he's worth £50 an hour — it depends on the individual. 
The idea is that if a wealthy person comes across an activity that can be outsourced below his per-hour value, in most cases, he will get someone else to do it, especially if it's far below his value and given that he doesn't enjoy doing it. Rather than waste his time on something he doesn't enjoy, he'll hire somebody at a cheaper rate to produce the results for him while he makes better use of his time, such as relaxing or producing high-value work. Instead, he could use the time to spend time with his family, relax, work on his business, or improve his life somehow. Once he's put a monetary value on his time, then he knows which jobs are worth outsourcing. If he were to put a £100-an-hour value on his time, why should he iron if he can employ somebody to do it for £5 an hour? If the grass is long, why should he use up 2 hours of his time if he can get someone to do it for £10 an hour? With this principle in mind, he knows what jobs he should do and what jobs he should outsource. A wealthy person thinks about the best use of his time a lot. He has learned to use his time wisely and focus his time on achieving goals instead of meaningless chores somebody else could do. Don't get me wrong, housework must be done — but if anybody can do it — get anybody to do it. How Does This Apply To Me, I'm Not Wealthy! Now I hear you loud and clear, but the principle discussed should open your mind about how to value your own time. Even if you're sitting around and have time to clean your house, is this the best use of your time? You could spend the whole of your Saturday catching up with your housework for the week, but would it not be better to spend your whole Saturday on ways to achieve your dreams? You don't have to be rich to use your time wisely and learn to outsource work. If you've put a £20-an-hour value on your time, then it would be worth employing a cleaner for £5 an hour to come and do it for you. Think about it: your time is much better spent producing meaningful results. Anybody can clean, so get anybody to clean for you. Nobody can cross off the things on your life's to-do list other than you. Nobody can take action to reach your dreams apart from you. As the economy stands now, there are plenty of opportunities to employ people at reasonable prices — it doesn't cost much to employ someone to come in once a week to take care of your housework. I have received plenty of flyers from local people wanting to do some extra work. One was from a local pair looking for odd jobs, and the other was from a schoolboy on holiday looking to earn a bit of pocket money on the side (you could use him to wash your car). I'm slowly getting more and more into online outsourcing. I outsource a lot of my boring and time-consuming 'website' work to people through Upwork. If you run an online business and need help with link building, design work, writing, or virtual assistance, it's a great place to find people like that. Doing so has done wonders for my life compared to how it was before. I used to try to do everything myself, and now that I've outsourced, it's freed up so much of my time to work on other essential things. If you save the money you had planned for that big night out and instead spend it on outsourcing your work — you've freed up a whole chunk of your time to relax more or focus on producing results. I'll repeat it: there's a reason the wealthy are wealthy, and that's because they put a monetary value on their time. 
Learn to leverage your time by producing results that only you can produce. Spending time learning or doing the activities that frustrate you is something you don't need to busy yourself with during your life. Learning to outsource your work will do wonders for you in your life for sure. A great place to start is by putting a monetary value on your own time. Ask yourself the question — How much is your time worth?
https://medium.com/live-your-life-on-purpose/two-things-that-separate-the-wealthy-from-the-non-wealthy-6c2b94280764
['Josef Cruz']
2020-12-17 23:02:53.182000+00:00
['Wealth', 'Mindset', 'Psychology', 'Self Improvement', 'Money']
Indiana Environmental Groups file lawsuit to stop logging in Hoosier National Forest
A recent lawsuit filed by several Indiana environmental groups accuses the U.S. Forest Service of proposing a project violating multiple environmental acts, endangering a reservoir that provides clean water to over 140,000 people and unlawfully imperiling endangered species. The project plans to selectively log 4,375 acres and burn 13,500 acres of forest over a time span of around 20 years to promote the growth of trees such as oak and hickory and treat forest health. The lawsuit accuses the U.S. Forest Service of violating the Council on Environmental Quality (CEQ) Regulations in line with the National Environmental Policy Act (NEPA). It also alleges that the Forest Service violates the goals and objectives of the Indiana Forest Plan, stating that certain practices that were supposed to be analyzed for suitability weren’t discussed, violating the National Forest Management Act. “Our lawyers have been talking to the Justice Department about putting the project on hold while we see if we can work out our differences,” said Jeff Stant, an executive director for the Indiana Forest Alliance. No changes to the project have yet been made, but a contract for shelterwood cutting has been delayed. Environmentalists have also proposed many alternatives, including moving the project to a different area outside of the Monroe Reservoir and reducing the volume of logging and burning. The Bloomington mayor and Monroe County officials say that the largest concern of the project is how it could affect the Lake Monroe Reservoir. The lake provides drinking water to over 140,000 people in South Central Indiana, and the project could increase sediment levels in a reservoir that already suffers from flooding, erosion and high levels of algae. “By logging the slopes in the Houston South Area…there’s no question that there will be an increase in sediment levels,” said Stant. The Lake Monroe Reservoir has suffered from contamination concerns for many years. Logging and farming lead to erosion in the area that increases nutrients in the lake which feeds toxic algae. The Indiana Department of Environmental Management has listed the lake as an impaired water body, meaning it doesn’t meet water quality standards assigned by the Clean Water Act. The U.S. Forest Service has responded to these concerns, stating that the project would have no significant impact on the sedimentation levels of Lake Monroe. Michelle Paduani, a district ranger for the Hoosier National Forest, emailed a statement saying that “the mitigation measures [the Forest Service] apply are highly effective in protecting water while meeting other objectives of improving wildlife habitat and forest resilience.” However, many in Monroe County aren’t willing to take the risk and worry that the studies cited by the U.S. Forest Service don’t apply to Lake Monroe. Scientists with concerns about increased erosion in the Monroe Reservoir have proposed setting up monitoring stations for sedimentation in the lake itself. Environmental activists have also brought up concerns about the health of the forest, which is currently classified as mature and contains many oak and hickory trees. The project plans on fulfilling goals outlined in the Indiana Forest Plan, a strategic outline set up to approve funding for forest conservation and desired future conditions on public land. This project would be the largest management project to ever take place in the Hoosier National Forest, affecting about 20,000 acres. Photo Courtesy of Indiana Forest Alliance Dr. 
Jane Fitzgerald, a coordinator for the Central Hardwoods Joint Venture (CHJV), stated that the project was essential to improving habitat conditions for bird species of conservation concern. “A couple of examples are the Cerulean Warbler, and the Wood Thrush, and probably the Prairie Warbler,” Fitzgerald said. “Those are the top three that come to mind.” In a letter of endorsement from the CHJV, Fitzgerald writes that the plan would encourage the growth of white oak trees, which forest birds use for foraging and nesting. According to a research article on bird populations and selective cutting, the thinning of branches would increase shrubby growth that provides better habitat structures for juvenile forest-breeding birds and improves population numbers. Another study completed by the Woodland Steward Institute showed that oak trees and canopy gaps were important to nesting success in Cerulean Warblers. However, some studies have shown that opening the closed canopy forest could have an unfavorable effect on forest songbirds. According to a research report on forest fragmentation, the act of breaking large forested areas into smaller pieces, the nesting success of songbirds can decrease in response to selective cutting. Environmental activists also fear that the prescribed burning could have detrimental effects on vulnerable species of bats, birds, amphibians and reptiles. The Houston South project area currently supports many species of bats that are federally threatened or endangered. The IFA argues that the planned burning and thinning could result in harm to maternity roosting trees, killing mothers and pups. Stant said that while the project may intend to provide a better habitat for these endangered animals, they may not recover from the prescribed burning. The burning could also lead to increased air pollution. Studies have shown that forests sequester more carbon as they mature, and the burning could release an unknown amount of carbon into the atmosphere. The Indiana Forest Alliance worries that the burning could affect recreational activities that normally take place in a public forest, from hiking and jogging to horseback riding. Another point of contention is that burning was not used by the Native Americans of Indiana to promote the growth of oak trees. According to an article by Cheryl Munson, a research scientist at Indiana University, “no indication exists that intentionally set fire was a key factor in determining the natural composition of the forest in the Houston South area.” The U.S. Forest Service has responded that burning is an essential factor in encouraging oak growth and that careful measures will be taken. “For the most part,” said Fitzgerald, “fire is what helps to regenerate the oaks… And in terms of the harvest, the new growth is going to sequester carbon too, so it’s not like you’re cutting the trees and paving it and there’s not ever going to be more carbon sequestration, there will be.” The Hoosier National Forest has struggled with a lack of age diversity for many years. The majority of the trees in the forest are classified in the 20 to 99 year age range. While promoting oak-hickory growth is the goal of the project, opponents believe that it would be best to let the forest naturally regrow and increase the diversity of the trees, without solely focusing on oak and hickory. It is still not known whether any changes have been made to the Houston South project. 
Hopefully, a suitable compromise can be reached that will help maintain Lake Monroe’s water quality and protect endangered species’ while still achieving the important goals of the project.
https://medium.com/the-climate-reporter/indiana-environmental-groups-file-lawsuit-to-stop-logging-in-hoosier-national-forest-33c8d1860ce2
['Chenyao Liu']
2020-06-25 19:34:46.401000+00:00
['Environment', 'Conservation', 'Politics', 'Law', 'Climate News']
A Full-Length Machine Learning Course in Python for Free
A Full-Length Machine Learning Course in Python for Free Andrew Ng's Machine Learning Course in Python One of the most popular machine learning courses is Andrew Ng's machine learning course on Coursera, offered by Stanford University. I tried a few other machine learning courses before, but I thought he was the best at breaking the concepts into pieces and making them very understandable. But I think there is just one problem. That is, all the assignments and instructions are in Matlab. I am a Python user and did not want to learn Matlab. So, I just learned the concepts from the lectures and developed all the algorithms in Python. I explained all the algorithms in my own way (as simply as I could) and demonstrated the development of almost all the algorithms in different articles before. I thought I should summarise them all on one page so that if anyone wants to follow, it is easier for them. Sometimes a little help goes a long way. If you want to take Andrew Ng's Machine Learning course, you can audit the complete course for free as many times as you want. Let's dive in! Linear Regression The most basic machine learning algorithm. This algorithm is based on the very basic straight line formula we all learned in school: Y = AX + B. Remember? If not, no problem. This is a very simple formula. Here is the complete article that explains how this simple formula can be used to make predictions. The article above works only on datasets with a single variable. But in real life, most datasets have multiple variables. Using the same simple formula, you can develop the algorithm with multiple variables: Polynomial Regression This one is also a sister of linear regression. But polynomial regression is able to find the relationship between the input variables and the output variable more precisely, even if the relationship between them is not linear: Logistic Regression Logistic regression is built on top of linear regression. It also uses the same simple formula of a straight line. This is a widely used, powerful, and popular machine learning algorithm. It is used to predict a categorical variable. The following article explains the development of logistic regression step by step for binary classification: Based on the concept of binary classification, it is possible to develop a logistic regression for multiclass classification. At the same time, Python has some optimization functions that help to do the calculation a lot faster. In the following article, I worked on both methods to perform a multiclass classification task on a digit recognition dataset: Neural Network Neural networks have been getting more and more popular nowadays. If you are reading this article, I guess you have heard of neural networks. A neural network works much faster and much more efficiently on more complex datasets. This one also involves the same formula of a straight line, but the development of the algorithm is a bit more complicated than the previous ones. If you are taking Andrew Ng's course, you probably know the concepts already. Otherwise, I tried to break down the concepts as much as I could. Hopefully, it is helpful: Learning Curve What if you spent all that time and developed an algorithm, and then it does not work the way you wanted? How do you fix it? You need to figure out first where the problem is. Is your algorithm faulty, or do you need more data to train the model, or do you need more features? So many questions, right? 
But if you do not figure out the problem first and keep moving in any direction, it may kill too much time unnecessarily. Here is how you may find the problem: On the other hand, if the dataset is too skewed, that is another type of challenge. For example, suppose you are working on a classification problem where 95% of the cases are positive and only 5% are negative. In that case, if you just randomly put all the output as positive, you are 95% correct. On the other hand, if the machine learning algorithm turns out to be 90% accurate, it is still not efficient, right? Because without a machine learning algorithm, you can predict with 95% accuracy. Here are some ideas to deal with these types of situations: K Means Clustering One of the most popular and oldest unsupervised learning algorithms. This algorithm does not make predictions like the previous algorithms. It makes clusters based on the similarities amongst the data. It is more like understanding the current data more effectively. Then whenever the algorithm sees new data, based on its characteristics, it decides which cluster it belongs to. This algorithm has other uses as well. It can be used for the dimensionality reduction of images. Why do we need dimensionality reduction of an image? Think of when we need to input a lot of images to an algorithm to train an image classification model. Very high-resolution images could be too heavy and the training process can be too slow. In that case, a lower-dimensional picture will do the job in less time. This is just one example. You can probably imagine that there are a lot of similar uses. This article is a complete tutorial on how to develop a K means clustering algorithm and how to use that algorithm for dimensionality reduction of an image: Anomaly Detection Another core machine learning task. It is used in credit card fraud detection, to detect faulty manufacturing, or even for rare disease and cancer cell detection. It can be done using the Gaussian distribution (or normal distribution) method, or even more simply with a probability formula. Here is a complete step by step guide for developing an anomaly detection algorithm using the Gaussian distribution concepts: If you need a refresher on the Gaussian distribution method, please check this one: Recommender System The recommendation system is everywhere. If you buy something on Amazon, it will recommend some more products you may like; YouTube recommends videos you may like; Facebook recommends people you may know. So, we see it everywhere. Andrew Ng's course teaches how to develop a recommender system using the same formula we used in linear regression. Here is the step by step process of developing a movie recommendation algorithm: Conclusion Hopefully, this article will help some people to start with machine learning. The best way is by doing. If you notice, most of the algorithms are based on a very simple basic formula. I see a notion that machine learning or Artificial Intelligence requires very heavy programming knowledge and very difficult math. That's not always true. With simple code, basic math, and stats knowledge, you can go a long way. At the same time, keep improving your programming skills to do more complex tasks. If you are interested in machine learning, just take some time and start working on it. Feel free to follow me on Twitter and like my Facebook page. More Reading:
https://towardsdatascience.com/a-full-length-machine-learning-course-in-python-for-free-f2732954f35f
['Rashida Nasrin Sucky']
2020-12-13 15:39:46.368000+00:00
['Data Science', 'Machine Learning', 'Artificial Intelligence', 'Programming', 'Technology']
The Architect of Artificial intelligence — Deep Learning
Artificial Intelligence has been one of the most remarkable advancements of the decade. People are rushing from explicit software development to building AI-based models; businesses are now relying on data-driven decisions rather than on someone manually defining rules. Everything is turning into AI, ranging from AI chat-bots to self-driving cars, speech recognition to language translation, robotics to medicine. AI is not a new thing to researchers though. It has been around since even before the 90's. But what's making it so trendy and open to the world now? I've been working with Artificial Intelligence and Data Science for almost 2 years now and have worked around a lot of so-called state-of-the-art AI systems like generative chat-bots, speech recognition, language translation, text classification, object recognition, age and expression recognition etc. So, after spending 2 years in AI, I believe there's just one major technology (or whatever you call it) behind this AI boom: Deep Learning. This being my introductory blog, I won't dive into technical details of Deep Learning and Neural Nets (I will talk about my work in upcoming blogs), but share with you why I think Deep Learning is taking over other traditional methods. If you are not into Deep Learning and AI stuff, let me explain it to you in simple non-techie words. Imagine you have to build a method to classify emails into categories like social, promotional or spam, one of the prime AI tasks that Google does for your Gmail inbox! What would you do to achieve this? Maybe you could make a list of words to look for in emails, like 'advertisement', 'subscribe', 'newsletter' etc., then write a simple string matching regex to look for these words in the emails and classify them as promotional or spam if these words are found. But the problem here is how many keywords can you catch this way, or how many rules can you manually write for this? You know the content on the internet keeps multiplying, and each day new keywords hop in. Thus, this keyword-based approach won't land you good results. Now if you give this a closer thought, you have a computer which can do keyword matching a million times faster than you. So rather than using your computationally powerful device just for simple string matching, why not let the computer decide the rules for classification too! What I mean is, a computer can go through thousands of data points and come up with more precise rules for the task in the time it takes you to think of just 5 such rules. This is what deep learning is all about! Instead of you explicitly designing rules and conditions which you think would solve the problem (like simple if-else, making dictionaries of keywords etc.), Deep Learning deals with giving the computer the capability to produce certain rules which it can use to solve the problem. This means it's an end-to-end architecture. You give the data as input to the network and tell it the desired output for each data point. The network then goes through the data and updates the rules accordingly to land on a set of optimized rules. This decision-making ability is generally limited to us humans, right? This is where Artificial Neural Networks (or simply neural nets) kick in. These are sets of nodes arranged in layers and connected through weights (which are nothing but number matrices) in a similar way as neurons are connected in our brain. 
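To make the idea of "layers of nodes connected through weight matrices" concrete, here is a minimal NumPy sketch. It is my own illustration rather than anything from the original post, and the layer sizes, random weights and sigmoid activation are arbitrary choices made just for the example:

import numpy as np

np.random.seed(0)
x = np.array([0.5, -1.2, 3.0])   # 3 input nodes, e.g. 3 numeric features extracted from an email
W1 = np.random.randn(3, 4)       # weight matrix connecting the input layer to 4 hidden nodes
W2 = np.random.randn(4, 2)       # weight matrix connecting the hidden layer to 2 output nodes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes any number into the range (0, 1)

hidden = sigmoid(x @ W1)        # each hidden node is a weighted sum of the inputs, passed through an activation
output = sigmoid(hidden @ W2)   # each output node is a weighted sum of the hidden nodes
print(output)                   # e.g. two scores that could stand for "spam" vs "not spam"

Training simply means nudging the numbers inside W1 and W2 until the outputs match the desired labels for many examples, which is exactly the "computer producing its own rules" described above.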
Again, I won't go into the technical details of the architecture, its learning algorithms and the mathematics behind it, but this is the way Deep Learning mimics the brain's learning process. Let's take another example: suppose you have to recognize a human face in an image, and the face could be located anywhere in the image. How would you proceed? One obvious way is to define a set of key-points all over the human face which together can characterize the face. Generally these are in sets of 128 or 68. These points, when interconnected, form an image mask. But what if the orientation of the face changes from a frontal view to a side view? The geometry of the face, which helped these points identify a face, changes, and thus the key-point method won't detect the face. 68 key points of human face, Image taken from www.pyimagesearch.com Deep Learning makes this possible too! The key-points we used were based on a human's perception of face features (like nose, ears, eyes). Hence, to detect a face, we try to make the computer find these features together in an image. But guess what, these manually selected features are not so pronounced to computers. Deep Learning instead makes the computer go through a lot of faces (containing all sorts of distortions and orientations) and lets the computer decide what feature maps seem relevant to it for face detection. And this gives surprisingly good results. You can go through one of my projects here, where I used ConvNets (a deep learning architecture) to recognize the expression of a face. Needing a large dataset of faces to recognize a face may seem like a problem to you. But one-shot learning methods such as Siamese Networks have solved this problem too. It is an approach based on a special loss function, the Triplet Loss, which was introduced in the FaceNet paper. I won't discuss this here. If you wish to know about it, you can go through the paper here. Siamese Network for Gender Detection, Image taken from www.semanticscholar.org Another myth about Deep Learning is that Deep Learning is a Black Box: that there's no feature engineering or maths involved behind the architecture, and so it simply replicates the data without actually providing a reliable and long-term solution to the problem. No, it's not like that! It has mathematics and probability involved in a similar way as traditional Machine Learning methods do, be it simple Linear Regression or Support Vector Machines. Deep Learning uses the same Gradient Descent equation to look for optimized parameter values as Linear Regression does. The cost function, the hypothesis, and the error calculation from the target value (loss) are all done in a similar fashion as they are in traditional algorithms (based on equations). Activation functions in deep nets are nothing but mathematical functions. Once you understand every mathematical aspect of Deep Learning, you can figure out how to build the model for a specific task and what changes need to be done. It's just that the mathematics involved in Deep Learning turns out to be a little complex. But if you get the concepts right, it's no longer a Black Box to you! In fact, this is true for all the algorithms in the world. As far as I've learnt, I've made my way through all the mathematics behind it, beginning right from a simple perceptron, the standard Wx+b equation of a neuron and back-propagation, to modern architectures such as CNN, LSTM, Encoder-Decoder, Sequence2Sequence etc. 
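Since the post argues that deep learning relies on the same Wx+b equation and gradient descent update that linear regression uses, here is a small hedged sketch of that claim. The toy data, learning rate and number of steps below are made up for illustration and are not taken from the author's projects:

import numpy as np

np.random.seed(1)
x = np.random.rand(100)                       # toy inputs
y = 2 * x + 1 + 0.05 * np.random.randn(100)   # targets roughly following y = 2x + 1

w, b = 0.0, 0.0   # the neuron's parameters in y_hat = w*x + b
lr = 0.1          # learning rate

for step in range(2000):
    y_hat = w * x + b               # forward pass: the Wx + b equation
    error = y_hat - y
    dw = (2 * error * x).mean()     # gradient of the mean squared error with respect to w
    db = (2 * error).mean()         # gradient with respect to b
    w -= lr * dw                    # the same gradient descent update rule
    b -= lr * db                    # that linear regression uses

print(round(w, 2), round(b, 2))     # should land close to 2 and 1

Back-propagation is just this same chain-rule bookkeeping applied layer by layer when there are many weight matrices instead of a single w and b.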
The purpose of this blog was to create more acceptance for Deep Learning in the field of Machine Learning and Artificial Intelligence. That's why I didn't talk about Deep Learning architectures, code and TensorFlow. Companies basing their business on AI need to support Deep Learning along with traditional Machine Learning methods. In my upcoming blog, I will talk about some cool projects I did, maybe a generative chat-bot or maybe neural machine translation. If you are into Artificial Intelligence too, do let me know your opinions on the blog!
https://towardsdatascience.com/the-architect-of-artificial-intelligence-deep-learning-226ac69ab27a
['Saransh Mehta']
2018-10-03 19:15:00.094000+00:00
['Deep Learning', 'Artificial Intelligence', 'Neural Networks', 'Data Science', 'Machine Learning']