Why 2021 Will Rock, And Why I’ll Still Be Miserable
Why 2021 will rock: We all get to go back to work. (Yay?) Why I'll still be miserable: Don't get me wrong, I'll never diminish the magnitude of the yay that comes with going back to the office. Going back to the office deserves the yay-est of yays. But what gets a boo in my book is the fact that I'll have to sit next to Greg and Colleen again. Everyone knows they've spent all their work-from-home time binge-watching Netflix. So after having to half-listen-half-ignore their two-hour-long discourses about Game of Thrones for 730 goddamn days, I'll have to half-listen-half-ignore their two-hour-long discourses about the entire Netflix catalogue for another 935. I've heard everything I needed to hear about Full House back in 1995, jerkwads! Just because Jerry Seinfeld has 23 Hours to Kill, it doesn't mean I do!
https://medium.com/the-haven/why-2021-will-rock-and-why-ill-still-be-miserable-8adbaacf1964
['Andrew Cheng']
2020-12-23 23:36:36.435000+00:00
['Satire', 'Humor', 'Coronavirus', 'Culture', '2020']
How Everton Can Replace Idrissa Gana Gueye: A Comprehensive Look
Everton are close to selling Idrissa Gana Gueye to Paris Saint-Germain for £30m+. We take a look at who Everton could replace him with. It is looking more and more likely that Everton will be losing one of their strongest and most important players. Idrissa Gana Gueye is a very good defensive midfielder and will be very difficult to replace. His contribution to the team is significant, although opinions are somewhat divided on truly how great a player he is. Regardless, we decided to take a more objective look at the Everton midfield, Gana's contributions, and some alternatives, as Everton have made it clear that they will bring someone else in if he does indeed leave.
Everton's Current Midfield Performance and Gana's Contributions
Everton primarily played two different formations effectively during the second half of the PL campaign. Silva's 4–2–3–1 usually involved Gana playing alongside Andre Gomes in midfield behind Gylfi Sigurdsson in possession, dropping into a 4–4–2 without the ball. Gana usually played a bit further back to support Gomes, who pushed a bit higher. Silva also played more of his traditional 4–3–3, with Morgan Schneiderlin as a sitting 6 and Gana pressed higher, where he could wreak havoc all over the field and push the ball forward in possession. Gylfi Sigurdsson usually still pushed a bit higher, so you could argue it was still the 4–2–3–1, but the point is that Gana played both as a more traditional DM and as more of a box-to-box CM. It's hard to quantify Everton's midfield-specific performance, but as a team Everton had the following levels of performance, which the midfield contributed to in some manner, in the PL season. Not many of those numbers are spectacular — Everton's central midfield is workman-like and does not create much. Everton relies more on Gylfi Sigurdsson and Bernard, as well as Digne and Coleman, to create chances for the team. In terms of Idrissa Gana Gueye's contributions to the above, it's a mixed bag. Gana certainly helps Everton keep the ball, with a high passing % (87.4%) and very few giveaways (the only non-defender better is Morgan Schneiderlin). The importance of this cannot be overstated, as ball retention is a significant problem for the team, but Gana is a good possession player (despite some strange mythology to the contrary). Conversely, Gana isn't the most creative. He doesn't contribute many key or through passes per 90, although he does lead the starting XI in passes to the final third, in both number per 90 and accuracy, as well as in forward passes. So while he does deserve some criticism for not creating a lot of chances for his teammates, he moves the ball forward fairly well.
Defensive:
68.0 defensive duels per 90 min (5th) at a 64.3% success rate (3rd)
43.7 interceptions per 90 min (14th)
10.4 fouls per 90 min (17th, 4th-highest total)
Everton had one of the best defenses in the league in open play, and while they fouled a lot, the defensive duels total is staggering for a team that was decent in possession. Not surprisingly, Gana's contributions to these numbers are significant. His 10.8 defensive duels per game is very high, rivaled only by Gomes on Everton. However, Gana wins 68% of those duels vs. Gomes' 57%. Even though he engages in a ton of duels, Gana still doesn't foul nearly as much as Everton's other defensive midfielders and rarely gets dribbled past. Not much of this should be shocking to Everton fans.
Idrissa Gana Gueye is an elite defensive midfielder without the ball, and with it he's good at keeping possession, he moves the ball forward, and he positively contributes to Everton's build-up play and attack. Still, Gana is not a creative force as a midfielder.
How to Replace Gana
It's not certain that Everton can replace Gana directly with anyone. He covers so much distance and engages in a tremendous number of defensive duels. For comparison's sake, we looked at defensive and central midfielders in the top 5 leagues and tried to find those with more than 10 defensive duels per 90 min and a win rate of 60% or more in those duels. That resulted in 13 names, including Gana, with all but 4 players aged 27 or older and only 2 from the PL; even in that group, Gana was STILL 4th in duels and 2nd in duels won. It's just going to be very difficult to find a player that can replicate what Idrissa Gana Gueye does, especially in the PL. Nevertheless, we took a closer look at the defensive / central midfielders Everton have been linked to over the past year or so and added in a few of the names above, as well as a couple of other obvious candidates. We factored in that Brands has indicated he is not going to spend big money on an older player (I believe the cutoff was 26). We also recognize that any replacement candidate has to fit into a midfield with Gylfi Sigurdsson up front and Gomes next to him, which means they still have to be defensively capable. It's also worth noting that while Marco Silva made it clear they would have to find a replacement for Gueye if he left, Fabian Delph does give Everton some support, in that he takes care of the ball, isn't afraid to make a tackle, and doesn't foul a lot. Since it seems unlikely that Everton will find a replacement as disruptive as Gana (though we will still give it a shot), we believe that a replacement could bring other things to the team that could make up for the loss collectively. As a result, we considered a couple of different types of players and identified names that we thought matched those profiles.
Creativity
One school of thought is to find a player that is going to help Everton create more chances and help break down inferior opponents, to get more wins rather than draws. Of all the names mentioned, these stand out:
Franck Kessie, 22, AC Milan. Kessie gets a lot of key passes from the midfield, although he's playing more as a CM. He was thought of highly when he made the move to AC Milan and is still a young talent; however, Kessie has fallen out of favor at Milan after his blow-up with Lucas Biglia on the sidelines of the Milan derby and could likely be had for around 30M by most accounts. Kessie's development seems to have stalled and his performance is headed in the wrong direction. Key passes aside, he has the 2nd-fewest defensive duels per 90 of our population, makes fewer than 12 forward passes per 90, and the rest of his relevant stats don't look much better. Whether it's the environment at Milan or other factors, his regression is notable, and I'm not sure he's sound enough defensively to merit consideration.
Bruno Guimarães, 21, Paranaense. Guimaraes has played his share of CM, which may explain some of his offensive numbers: he has the highest xA, key passes, and through passes per 90 of anyone in our sample in the past season. It would also explain his meager defensive numbers. However, his first touch is excellent, his passing range is solid, he's sturdy, and he's not afraid to put in a tackle.
He has played some games as a DM, and to me he looks very comfortable in the position; I believe he could be an outstanding sitting 6. I really like Guimaraes, love his decision making and his toughness, but he's probably a bit inexperienced to step in for Idrissa Gana Gueye immediately. He's not far off, though, and I believe a move to Europe is right around the corner.
Thiago Mendes, 27, Olympique Lyonnais. I violated the age limit with his inclusion, but since he was rumored to be the replacement for Idrissa Gana Gueye in January, I figured I would include him to get an idea of the type of player Silva and Brands may be after. Mendes is a terrific passer, has high xA and through-passing numbers per 90, and can create from a defensive position. He's a good athlete who engages in and wins a ton of defensive battles, and he looks most comfortable playing as a true 6 but can get up the field a bit and play CM. He has one of the highest successful-defensive-actions-per-90 figures in our population and can pick off passes from deep as well as tackle in central midfield. Mendes has already moved to Lyon from Lille, where he will be part of one of the better midfields in Europe, supporting Houssem Aouar and relegating Lucas Tousart to the bench.
Discipline
Another way to potentially replace Idrissa Gana Gueye is to find more of an anchorman that is extremely disciplined, excellent positionally, can pick off passes, can win balls in the air, and can take care of the ball. This player still needs to be athletic and cover ground, but it might allow Gomes to push up even higher and provide cover for the backline against stronger opponents.
Diadie Samassekou, 23, Red Bull Salzburg. Although it may be a product of an inferior league, Samassekou's numbers (and his film) are nothing short of outrageous. It's almost comical how often he picks off passes and dispossesses his opponents. Samassekou has the highest number of successful defensive actions per 90 on our list (10.75!) and the most interceptions per 90 (7.42!), many of which came in the other team's 30, so he can press effectively as well. He's had several matches where he's engaged in over 30 defensive duels; he even won 16 of 20 duels in a Champions League qualifying match against Macedonian side Shkendija (and scored a penalty), and was an absolute beast winning 13 of 15 duels against Napoli in a 3–1 win. He is very responsible with the ball and isn't afraid to pass it forward or down the field — he even shows some flair and skill with the ball, so I get the sense his offensive game is still developing. I love this player, but unfortunately for Everton, so does Dortmund, who look like they are going to get an absolute steal for 20M.
Philip Billing, 23, Huddersfield. Billing is a solid defensive midfielder who seemed to drop off performance-wise during the second half of the season. At his best, he's a 6'6" tower in the middle of the field who picks off a ton of passes (3rd on our list per 90) and wins the vast majority of his aerial duels (67%, 1st among players with over 1000 minutes in the past season). He is an underrated offensive player — he really likes to shoot and is dangerous from distance. He loves to attempt long passes, and while they look great when he pulls them off, he doesn't do so nearly often enough compared to his peers. Billing played for a relegation side that was terrible in open play, so his numbers need to be taken in context.
I think his second-half performance and some of his comments thereafter raise some question marks, but he is a talented defensive mid who will probably make a move to a better side this summer.
Distribution
It would make sense that a player with better passing range, who could move the ball up the field accurately, could take the burden off Gomes and help Everton's transition game tremendously.
Jean-Philippe Gbamin, 23, Mainz 05. Gbamin is a big, strong defensive midfielder who was pushed into CM to get Pierre Kunde on the field, although I believe he's a more natural defensive mid. He did play a bit of CB last season and played quite a bit there coming through at Lens, albeit 5 years ago, so he does have some positional versatility. Gbamin's numbers need to be understood within the context of a Mainz team that was not good in possession. He doesn't play a high frequency of passes and prefers to play the ball quickly, but many of those are longer passes. His passing accuracy numbers aren't the best, and you could tell he would get frustrated at times and try to do a bit too much with the ball. He actually has some of the best dribbling numbers per 90 in our population, so he has some ability on the ball. In defense, he's active and is a tremendous standing tackler. He completely overpowers players at times and seems to LOVE blowing guys up on 50/50 balls. I also believe he can get a lot better in a more settled position with better players around him. He shows a high level of skill and every now and then pulls off a trick or two that may not be apparent from his numbers. Defensive players often mature later than others, and I really believe Gbamin still has a lot of ceiling; perhaps Silva, with his hands-on approach to training, can be the one to help him reach it. Incidentally, I've heard it pronounced way too many ways to suggest the proper way to say his last name…Bah-man…Bah-meen…Guh-bah-meen…etc.
Kalvin Phillips, 23, Leeds United. Phillips is a defensive mid, as well as an undersized center back, who loves to sit back and spray long passes all over the field. He has the highest average pass length in our sample, as well as the second-highest number of long passes and forward passes attempted per 90. Part of that is the time he is afforded on the ball in the Championship, as well as his time as a center back. His accuracy leaves a bit to be desired, but the volume of long passes naturally brings his numbers down. His defensive numbers are amongst the best of the group. He engages in a ton of defensive duels and has the highest success rate in our sample (69%). It's hard to say if he can make the jump to the PL, and it would certainly be asking a bit much for him to replace Idrissa Gana Gueye at Everton. However, he wouldn't break the bank, and I wouldn't be surprised to see another club give him a shot.
Julian Weigl, 23, Borussia Dortmund. Statistically speaking, Weigl is the best passer in our population, though that's likely due to him playing more minutes at CB than DM this past season. It's still hard to ignore the fact that he's played almost the highest number of passes per 90 min and close to the longest average pass length in our sample, yet leads in passing accuracy almost across the board — forward passes, lateral passes, and long passes. If you give him time, his right foot is capable of getting the ball to anyone, anywhere.
Defensively, he averages more tackles per 90 than anyone else in our sample and has a high number of tackles in the opposition's 30, partially as the beneficiary of an aggressive Dortmund side. That being said, he does take a lot of chances with risky tackles, usually on the ground, and may not have the discipline or physicality to be a top-flight defensive midfielder in the PL. He is still young and has tremendous talent, so he wouldn't be cheap, but I'm not sure he's done enough to merit stepping in for someone like Idrissa Gana Gueye in a quicker, more physical league.
Sander Berge, 21, Genk. Berge is a 6'5" deep-lying playmaker type who is solid all around but especially good with the ball at his feet. He shows excellent passing range, is one of the more accurate passers in our population, and rarely gives the ball away. He's very good on the dribble, averaging almost 3 dribbles per 90 min at the second-highest success rate in our population (76%). He's graceful and quick for his size, but he's not an active defensive player in the middle of the pitch, although he does win a high number of the duels and tackles he engages in. As a pure defensive midfielder, he could improve his ability to track runners and quickly recover into better defensive positions. Berge very much reminds me of Andre Gomes, although he's not playing at the level Gomes was at his age. I don't think Berge would cope well with the speed of the PL, and he isn't a good enough defender to play in the current Everton midfield, but his development is worth keeping an eye on, especially if and when he makes the move to a more challenging league.
Push Forward / Run with It
Another type of player that would be interesting in Everton's midfield is one who can push the ball forward by pass or dribble, help improve team tempo, and, along with Gomes, put more pressure on the opposition.
Ibrahim Sangare, 21, Toulouse. Sangare is my personal favorite and could appear in many other places on this list. He's an absolute 6'3" beast in the center of Toulouse's midfield who is disciplined enough to play a sitting 6 and skilled enough to push forward as a true box-to-box player. Sangare hasn't come close to tapping into his immense potential and seemed to improve tremendously throughout the year. His work ethic is second to none, and he almost singlehandedly pushes Toulouse forward at times. Sangare averages almost 3 dribbles per 90 min and almost 20 forward passes per 90, has the highest average number of passes into the final 30 per 90, and has the second-highest number of through passes attempted per 90 min in our population. Defensively, he engages in and wins a lot of defensive duels. He doesn't have very good pressing numbers, but Toulouse usually sat back defending much deeper than Everton would, and he has every physical attribute necessary to be successful in a more aggressive defensive scheme. He's also very far from the player he could become and seemed to improve dramatically even over the course of the past season. He wouldn't break the bank, and while he's still a bit raw and uneven, he probably has the highest ceiling of anyone on this list. I believe he's going to be a world-class CM/DM under the right tutelage, and I believe he could achieve that under Silva.
André-Frank Zambo Anguissa, 23, Fulham.
Anguissa should be familiar to Everton fans, as he was part of a dreadful Fulham side that managed to take 3 points off the Toffees down the stretch to put any dreams of the Blues in Europe to rest. Like some others on the list, it's tough to gauge Anguissa considering how poor his team was. He did not get off to a good start at Fulham and sat out the greater part of 3 months with an ankle injury. When healthy, Anguissa led our population in dribbles per 90 (3.8), had lots of key and final-third passes, and a high frequency of passes to begin with. His defensive numbers are middle of the road, although he does show some pressing ability. Anguissa might be gettable, as he cannot be happy after Fulham suffered relegation, but he seems more comfortable as a central rather than a defensive mid. He is heavily rumored to be on the way to Villarreal and is likely not the best fit for Everton at this moment.
Disruptive (future Idrissa Gana Gueye)
There is some thought that we simply need to find another disruptive player that can simulate a bit of what Gana does, now and in the future. Samassekou could do that with a bit more discipline, although the jump in leagues would be significant. Sangare is certainly disruptive, but in a different way, as he does it with his size and length, although his work rate is comparable. Players like Octavio at Bordeaux show some statistical similarities and some passing range as well, but he isn't quite at that level at age 25 and, in my opinion, doesn't win enough of his battles or have the physical attributes to merit consideration. Thiago Maia at Lille also shows some statistical similarities and is strong as well as good in the air, but the 22-year-old really hasn't gotten enough game time to merit serious consideration.
If you had to choose…
I am confident Marcel Brands has scouted many of these players extensively and will bring someone in that makes sense. It does appear that it might be Gbamin, who is certainly capable and would be a good choice. He is a more natural player at defensive mid, which is where he would likely be deployed at Everton, and he shows more ability with his passing game than Gana does. He's more than physical enough for the PL, and while he still has a bit to learn about the position, Silva is clearly an excellent teacher and could do wonders for the Ivorian. However, my choice would be Ibrahim Sangare. He's still developing both physically and mentally, no question. But he's good now, and his potential is world class. His size, his work rate, his instincts are all very good. His offensive game is still raw and he's going to make some mistakes, usually out of ambition, but the foundational elements — vision, first touch, instinct — are all there, as are glimpses of truly phenomenal play. If he improves at the rate he did this year with Toulouse, he could be a dominant defensive/central mid in the PL sooner rather than later. But his versatility is what I find most attractive. We could have considered him in most of our categories, as he can be creative and disciplined as well as show the ability to distribute and be disruptive. Honorable mentions go to Thiago Mendes and Diadie Samassekou. Both would be interesting alternatives, but in different ways. Unfortunately, the Mendes ship has sailed, and it looks as though Samassekou is off to Dortmund. Gbamin does remind me a bit of Mendes, and if he's the replacement, it makes some sense given the rumors that Mendes was the first choice to replace Idrissa Gana Gueye in January.
It is also nice knowing that Fabian Delph is a solid backup in case things go south. It's also worth noting that we looked at a lot more players than we've directly mentioned in this article (including but not limited to Lobotka, Zakaria, Soumare, Torreira, Diawara, Tameze, Cyprian, Lerma, Doucoure) and are happy to share our observations — perhaps we could put together a follow-up. Regardless, Idrissa Gana Gueye leaving has a massive impact on Everton Football Club, and finding a replacement is a critical personnel move for the club now and for the future. As always, make sure you follow us on Twitter @ToffeeTargets for more up-to-date Everton transfer news. Also, give me a follow on Twitter @RyanWil02979819. I love talking about in-depth analysis of players across Europe. Be sure to check out my post on Moise Kean vs. Rafael Leao.
https://medium.com/the-sports-niche/how-everton-can-replace-idrissa-gana-gueye-a-comprehensive-look-1cc549eae41c
['Christian Cappoli']
2019-07-30 15:56:26.624000+00:00
['Everton', 'Psg', 'Soccer', 'Premier League']
November 2020 Deals Recap
2020 has been such an eventful year, to say the least. We know you're all excited to finish up the final month of December and get onward with 2021 as we hopefully approach a vaccine. But nonetheless, the New England entrepreneurial ecosystem has remained incredibly resilient. In the spirit of Thanksgiving (we hope you had a great one, by the way), we're profiling all the New England venture deals for the month of November '20 — we're thankful for all you founders and investors continuing to move forward with your plans to make the world a better place! [NOTE: Round info per Crunchbase reporting. Max of three investors displayed per company.] Without further ado, here's the list:
https://medium.com/the-startup-buzz/november-2020-deals-recap-55b73a7adfe6
['Matt Snow']
2020-12-01 22:47:03.885000+00:00
['Fundraising', 'Venture Capital', 'New England', 'Technology', 'Startup']
Information Architecture. Basics for Designers.
The World Wide Web contains a tremendous amount of information, which is hard to imagine unstructured, since a human brain wouldn't be able to perceive any of it. People are used to seeing the content and functionality of digital products as many of them are now: structured and easy to use. However, this doesn't happen unintentionally. Designers and developers take on the responsibility of constructing content and navigation systems in a way that suits users' perception. The discipline that assists experts in structuring content is called information architecture. Today's article is devoted to the essence of information architecture and presents the basic points every designer should know.
What's information architecture?
Information architecture (IA) is the science of organizing and structuring the content of websites, web and mobile applications, and social media software. The American architect and graphic designer Richard Saul Wurman is considered the founder of the IA field. Today, there are many specialists working on IA development, who have established the Information Architecture Institute. According to the IAI experts, information architecture is the practice of deciding how to arrange the parts of something to be understandable. Information architecture aims at organizing content so that users can easily adjust to the functionality of the product and find everything they need without great effort. The content structure depends on various factors. First of all, IA experts consider the specifics of the target audience's needs, because IA puts user satisfaction first. The structure also depends on the type of product and the offers companies have. For example, if we compare a retail website and a blog, we'll see two absolutely different structures, both efficient for accomplishing their particular objectives. Information architecture has become a fundamental study in many spheres, including design and software development.
The role of information architecture in design
Nowadays, when the user-centered approach in design is a top trend, many designers learn the principles of information architecture, which they believe to be a foundation of efficient design. IA forms the skeleton of any design project. Visual elements, functionality, interaction, and navigation are built according to information architecture principles. The thing is that even compelling content elements and powerful UI design can fail without appropriate IA. Unorganized content makes navigation difficult and unclear, so users can easily get lost and feel annoyed. If users have a bad first interaction, they may not give your product a second chance. Many companies don't see the importance of information architecture because they think it's impractical. It's hard to argue that IA takes some time to create and requires specific skills to do efficiently. However, powerful IA is a guarantee of a high-quality product, since it reduces the likelihood of usability and navigation problems. In this way, well-thought-out information architecture can save both the time and the money that a company would otherwise have spent on fixes and improvements.
IA and UX design
After reading everything written above, many people may have the question: "Isn't IA the same as UX design?" Technically, these terms relate to each other, but they are far from the same. IA is a blueprint of the design structure which can be generated into the wireframes and sitemaps of the project.
UX designers use them as the basic materials for planning the navigation system. UX design means much more than content structuring. In the first place, UX designers aim at creating a pleasant interaction model, so that users feel comfortable using the product. They encompass various aspects influencing users' behavior and actions, such as emotion and psychology, while IA experts stay focused on the user's goals. Let's get this straight: good information architecture is the foundation of efficient user experience, so IA skills are essential for designers. Effective IA makes the product easy to use, but only united with design thinking does the product deliver a powerful user experience.
IA system components
If you want to build strong information architecture for a product, you need to understand what it consists of. Pioneers of the IA field Lou Rosenfeld and Peter Morville, in their book "Information Architecture for the World Wide Web", distinguished four main components: organization systems, labeling systems, navigation systems, and searching systems.
Organization systems
These are the groups or categories into which information is divided. Such a system helps users predict where they can easily find certain information. There are three main organizational structures: hierarchical, sequential, and matrix.
Hierarchical. In one of our previous articles, we mentioned a well-known technique of content organization called visual hierarchy. It is based on Gestalt psychological theory, and its main goal is to present content on the carrier, be it a book page or poster, web page or mobile screen, in such a way that users can understand the level of importance of each element. It activates the brain's ability to distinguish objects on the basis of their physical differences, such as size, color, contrast, and alignment.
Sequential. This structure creates a kind of path for users. They go step by step through content to accomplish their task. This type is often used for retail websites or apps, where people go from one task to another to make a purchase.
Matrix. This type is a bit more complicated for users, since they choose their way of navigating on their own. Users are given choices of content organization. For example, they can navigate through content ordered by date, while others may prefer to navigate by topic.
In addition, content can be grouped according to organization schemes, which are meant to categorize the product's content. Here are some of the popular schemes:
Alphabetical schemes. Content is organized in alphabetical order. These can also serve as a navigation tool for users.
Chronological schemes. This type organizes content by date.
Topic schemes. Content is organized according to a specific subject.
Audience schemes. Content is organized for separate groups of users.
Labeling systems
This system involves the ways data is represented. Product design requires simplicity, and a great amount of information can confuse users. That's why designers create labels which represent loads of data in a few words. For example, when designers provide a company's contact information on a website, it usually includes the phone number, email, and social media contacts. However, designers can't present all of this information on one page.
The button "Contact" in the header of the page is a label that triggers associations in users' heads without placing all of that data on the page. So the labeling system aims at representing data effectively and concisely.
Navigation systems
In one of our UX Glossary articles, we defined navigation as the set of actions and techniques guiding users throughout the app or website, enabling them to fulfill their goals and successfully interact with the product. The navigation system, in terms of IA, involves the ways users move through content. It's a complex system employing many techniques and approaches, which is why it can't be done justice in one short paragraph. We'll return to the topic in a later article on our new blog.
Searching systems
This system is used in information architecture to help users search for data within a digital product like a website or an app. A searching system is effective only for products with loads of information, where users risk getting lost. In this case, designers should consider a search engine, filters, and other tools that help users find content, and plan how the data will look after the search.
To sum up, we can claim that information architecture is a core part of powerful user experience design. Efficient IA helps users quickly and easily navigate through content and find everything they need without much effort. That's why designers are recommended to learn the basics of IA. The topic of information architecture is wide, and there are more interesting and useful aspects to it. Our next article on this theme will be devoted to the various techniques and methodologies designers employ to create efficient IA. Stay tuned!
Recommended reading
IA for the Web and Beyond
How to Make Sense of Any Mess: Information Architecture for Everybody
Information Architecture Basics
The Difference Between Information Architecture and UX Design
https://uxplanet.org/information-architecture-basics-for-designers-b5d43df62e20
['Tubik Studio']
2017-05-25 14:25:31.362000+00:00
['Design', 'Web Design', 'User Experience', 'UX', 'UI']
How to Use GPT-2 in Google Colab
Image from https://openai.com/blog/better-language-models/
For those who do not have access to the GPT-3 API, it is important to remember we can still experiment with and use GPT-2! For everyone who is not familiar with what GPT-2 and GPT-3 are, here is a short explanation from OpenAI: "We've trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training." Pretty awesome, right? So let's get started with this awesome technology!
Getting Google Colab Set Up
If you have used Google Colab before, you can skip these next steps. If you have never used Google Colab before, here are the necessary steps to get started. You need to have a Google account. After you have the Google account created, go to Google Colab. I would highly recommend upgrading to the paid version, Google Colab Pro. It is only $9.99/month, and you get access to more powerful GPUs, longer run times, and more memory, which means more data!
Implementing GPT-2
TL;DR: this is the link to the complete notebook: Here
Now that you have your Google account all set up with Google Colab, we can move on to implementing GPT-2.
1. Create a new notebook within Google Colab. If you are unfamiliar with how to do that, click the + New button in Google Drive like you would to create a Google Doc. You might have to click More, and you will see a dropdown that says "Google Colaboratory." Click that and it will open a blank notebook.
2. Of course, name your notebook.
3. If you have Colab Pro, make sure to change your runtime type to Hardware Accelerator: GPU and Runtime Shape: High-RAM. This will give you the best performance.
4. The next step is mounting your Google Drive, which is basically connecting your Google Drive to the Google Colab notebook. Here is the code to do that:
from google.colab import drive
drive.mount('/content/drive')
When you run this cell, it will tell you "Go to this URL in a browser:". Click that link; it will open a new tab. Make sure to pick the proper Google account and click Allow at the bottom. Copy the authorization code and paste it back where Google Colab asks for it.
5. Next, we need to create a folder for GPT-2 and clone the OpenAI repo into it:
# ONLY RUN ONCE
%cd /content/drive/My\ Drive/
!mkdir gpt-2
%cd gpt-2/
!git clone https://github.com/openai/gpt-2.git
%cd gpt-2
It is important to remember to only run this cell block once, since you do not want to clone the GPT-2 repo multiple times.
6. After cloning their repo, we need to go one folder deeper, and here is the code to do that!
%cd /content/drive/My\ Drive/gpt-2/gpt-2
If you are typing this out, make sure you add a "\" after "My". If the "\" is not there, it will throw an error.
7. GPT-2 uses a certain version of TensorFlow, so we need to select that version in Colab as well. Run the code below to change the TensorFlow version:
%tensorflow_version 1.x
8. Now we need to install the necessary requirements provided by GPT-2:
!pip3 install -r requirements.txt
If you are typing this out, make sure the "!" is there, or it will throw an error. If you run this and the output contains "WARNING: The following packages were previously imported in this runtime: [idna,requests] You must restart the runtime in order to use newly installed versions.", you do not have to restart the runtime; you can continue on to the next step.
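Before downloading the models in the next step, it can also help to confirm that the GPU runtime from step 3 is actually active. This is an optional, minimal check; the exact device string printed will vary by runtime:
%tensorflow_version 1.x
import tensorflow as tf

# Prints a device string such as '/device:GPU:0' when a GPU runtime
# is active, and an empty string when it is not.
print(tf.test.gpu_device_name())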
9. After that is done installing, we can now download their model data:
!python3 download_model.py 124M
!python3 download_model.py 355M
!python3 download_model.py 774M
!python3 download_model.py 1558M
These will take some time to download, since the models are fairly large.
10. After the models have downloaded, we can play around with and experiment with the awesomeness of GPT-2! This next line will allow us to input text, and GPT-2 will generate text based on what we input (a sketch with more sampling flags appears at the end of this article):
!python3 src/interactive_conditional_samples.py --top_k 40
A small box will pop up. You can input any text you'd like in there and see what it generates. If nothing generates the first time you run it, just add the same text in the next small box that says Model prompt >>>.
11. Google Colab will eventually time out, so you will have to rerun all of these steps EXCEPT the step where you clone the repo. Also, when you go to rerun these steps, if you have Colab Pro, make sure to change the runtime type again. Good luck! I hope you find some interesting insights into what can be generated!
Here is a link to the complete code: Here
A GPT-2 Generated Thank You
"I think a lot of you guys would've like to know about this, so please, follow along to this post. Thanks for reading, and stay tuned. I hope you enjoyed this article. Happy business." - GPT-2
Good enough! Thank you to everyone who made it this far, and if you have any questions, feel free to reach out to me on LinkedIn and I will respond as soon as possible.
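As promised in step 10, here is a sketch showing a few more sampling flags. The flag names below (model_name, length, temperature, top_k) match the parameters of interactive_conditional_samples.py in the OpenAI gpt-2 repo at the time of writing, but the exact set can drift between revisions, so verify against the script before relying on them:
# Sketch: more control over sampling. Flag names mirror the parameters
# of src/interactive_conditional_samples.py; check the script if they change.
!python3 src/interactive_conditional_samples.py \
  --model_name 355M \
  --length 100 \
  --temperature 0.8 \
  --top_k 40
Lower temperature values make the output more conservative, while larger top_k values widen the pool of candidate tokens considered at each step.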
https://medium.com/swlh/how-to-use-gpt-2-in-google-colab-de44f59199c1
['Aidan Curley']
2020-09-18 21:09:34.211000+00:00
['Machine Learning', 'Data Science', 'Innovation', 'Programming', 'Artificial Intelligence']
FloatingActionButton (FAB) with BottomAppBar Using Material Design
While using FloatingActionButton, I always wondered whether I could make these kinds of animations with it. BottomAppBar has now solved this problem: you can animate your FAB with the BottomAppBar. Let me show you how to do it!
1) Adding BottomAppBar: All you have to do is make a new project and add this dependency:
implementation 'com.google.android.material:material:1.1.0'
This is the Material Design dependency which allows you to implement the new material components, and BottomAppBar is one of them. I urge you to go through the components section of the material.io documentation for more new things to know! Now, after adding this dependency, open up your activity_main.xml. BottomAppBar works with CoordinatorLayout, so turn your parent view into a CoordinatorLayout. In the design panel you will now see a rendering error. Don't worry: this is because BottomAppBar requires the Material Components theme, while by default your project uses an AppCompat theme:
Theme.AppCompat.Light.DarkActionBar
Replace AppCompat with MaterialComponents. Navigate back to your XML; the BottomAppBar should be there!
2) Adding FloatingActionButton (FAB): I am using the FAB that comes with the material package, i.e. material.floatingactionbutton. The FAB shows at the top! We need it to be in the middle of the BottomAppBar. In order to do that, set the app:layout_anchor attribute of the FloatingActionButton to reference the id of the BottomAppBar. You will see your FAB move all the way to the middle of the BottomAppBar.
It's Coding Time! This is the main part of the article: you can't animate your BottomAppBar and FAB from XML. You need code to make it happen.
1) FAB Sliding: We need to set the fabAlignmentMode of the BottomAppBar. By default it is center. In order to animate the FAB, I am going to set this attribute programmatically (see the Kotlin sketch at the end of this article). Run your app! By default, BottomAppBar has fabAnimationMode set to scale in XML. You can change it to slide to have a sliding experience for the FAB!
<com.google.android.material.bottomappbar.BottomAppBar
    android:id="@+id/bottomAppBar"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_gravity="bottom"
    app:fabAnimationMode="slide"/>
Run your app.
2) BottomAppBar Sliding: The BottomAppBar behavior can be changed to slide up and slide down. I am going to keep it very simple, just to make you understand. Run your app. You can play around with it by applying different scenarios. You can also animate the FAB along with the BottomAppBar sliding. I will leave that to you 😉 You would need to combine the code above.
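The programmatic steps above appeared as screenshots in the original post, so here is a minimal Kotlin sketch of the idea. The view ids (bottomAppBar, fab) and the tap/long-press triggers are illustrative assumptions; fabAlignmentMode, the FAB_ALIGNMENT_MODE_* constants, and performHide()/performShow() come from the Material Components BottomAppBar API:
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.android.material.bottomappbar.BottomAppBar
import com.google.android.material.floatingactionbutton.FloatingActionButton

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val bottomAppBar = findViewById<BottomAppBar>(R.id.bottomAppBar)
        val fab = findViewById<FloatingActionButton>(R.id.fab)

        // 1) FAB sliding: each tap toggles the FAB between the center and
        // the end of the bar. With app:fabAnimationMode="slide" in XML,
        // the FAB slides along the bar instead of scaling out and back in.
        fab.setOnClickListener {
            bottomAppBar.fabAlignmentMode =
                if (bottomAppBar.fabAlignmentMode == BottomAppBar.FAB_ALIGNMENT_MODE_CENTER)
                    BottomAppBar.FAB_ALIGNMENT_MODE_END
                else
                    BottomAppBar.FAB_ALIGNMENT_MODE_CENTER
        }

        // 2) BottomAppBar sliding: the bar itself can be animated off and
        // back onto the screen. A long press is used here purely for demo;
        // a real app would more likely drive this from a scroll listener.
        var barHidden = false
        fab.setOnLongClickListener {
            if (barHidden) bottomAppBar.performShow() else bottomAppBar.performHide()
            barHidden = !barHidden
            true
        }
    }
}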
https://medium.com/codechai/floatingactionbutton-fab-with-bottomappbarusing-material-design-ce8df68a0a17
['Mustufa Ansari']
2020-11-06 10:31:59.695000+00:00
['Kotlin', 'Android', 'Java', 'Android App Development', 'AndroidDev']
Lessons learned when upgrading to Terraform 0.12
Terraform 0.12 has recently been released and is a major update providing a significant number of improvements and features. While you may want to rush in and use the new features, I'll walk you through some of the lessons we learnt while upgrading the terraform-oci-oke project. This is by no means an exhaustive list. I'll be updating it as we understand and use more and more of the new 0.12 features. Ready? Let's go!
Read the blog series
As a preview of the new features and improvements in Terraform 0.12, HashiCorp published a series of blog posts with code examples. Read them and read them again until lambs become lions. I cannot emphasize this enough. You should also read the upgrade to 0.12 guide.
Fix breaking changes first
The most frequent code-breaking changes you will likely encounter are the attribute vs. block syntax changes. In Terraform 0.11, they could be used interchangeably without any problem, e.g. with oci_core_security_list, defining the egress security rules block as below with the '=' was acceptable:
egress_security_rules = [
  {
    protocol    = "${local.all_protocols}"
    destination = "${local.anywhere}"
  },
]
With 0.12, the syntax is more strict. Attributes are specified with an '=' and blocks without it. As a result, blocks such as the above need to be redefined as follows:
egress_security_rules {
  protocol    = local.all_protocols
  destination = local.anywhere
}
Another common code-breaking change you will likely encounter is when you define resources with count. The new rules, as per the documentation, work as follows: if count is not set, using resource_type.name will return a single object whose attributes can be accessed as resource_type.name.id, e.g. oci_core_vcn.vcn.id. If count is set when defining the resource, then a list is returned and needs to be accessed using the list syntax. E.g. the service gateway is created conditionally, and count is used as the condition that determines whether to create it or not:
resource "oci_core_service_gateway" "service_gateway" {
  ...
  count = var.create_service_gateway == true ? 1 : 0
}
Since count is used, in order to obtain the service gateway id you need to use the following syntax:
network_entity_id = oci_core_service_gateway.service_gateway[0].id
Since we are creating only 1 service gateway in this case, we know the list will have only 1 element and we can set the list index to 0. If you are creating more than 1 instance of the resource, then you need to use [count.index].
Start using first-class expressions
Terraform 0.12 introduced support for first-class expressions. In 0.11, every expression had to be part of an interpolation string, e.g.
resource "oci_core_vcn" "vcn" {
  cidr_block     = "${var.vcn_cidr}"
  compartment_id = "${var.compartment_ocid}"
  display_name   = "${var.label_prefix}-${var.vcn_name}"
  dns_label      = "${var.vcn_dns_name}"
}
In 0.12, the syntax is much simpler when using variables and functions:
resource "oci_core_vcn" "vcn" {
  cidr_block     = var.vcn_cidr
  compartment_id = var.compartment_ocid
  display_name   = "${var.label_prefix}-${var.vcn_name}"
  dns_label      = var.vcn_dns_name
}
The impact of using first-class expressions can also be seen below. With 0.11, the security rules for the worker nodes would be specified like this:
resource "oci_core_security_list" "workers_seclist" {
  ...
  egress_security_rules = [
    {
      # intra-vcn
      protocol    = "${local.all_protocols}"
      destination = "${var.vcn_cidr}"
      stateless   = true
    },
    {
      # outbound
      protocol    = "${local.all_protocols}"
      destination = "${local.anywhere}"
      stateless   = false
    },
  ]
  ingress_security_rules = [
    {
      # intra-vcn
      protocol  = "all"
      source    = "${var.vcn_cidr}"
      stateless = true
    },
    {
      # icmp
      protocol  = "${local.icmp_protocol}"
      source    = "${local.anywhere}"
      stateless = false
    },
    ....
  ]
}
With 0.12, this is how the security rules initially looked. It's 0.11-compatible and consists of multiple blocks:
resource "oci_core_security_list" "workers_seclist" {
  ...
  egress_security_rules {
    # intra-vcn
    protocol    = local.all_protocols
    destination = var.vcn_cidr
    stateless   = true
  }
  egress_security_rules {
    # outbound
    protocol    = local.all_protocols
    destination = local.anywhere
    stateless   = false
  }
  ingress_security_rules {
    # intra-vcn
    protocol  = "all"
    source    = var.vcn_cidr
    stateless = true
  }
  ingress_security_rules {
    # icmp
    protocol  = local.icmp_protocol
    source    = local.anywhere
    stateless = false
  }
  ...
}
We'll revisit this again when talking about dynamic blocks.
Keep interpolation syntax for string concatenation
For string concatenation, ̶y̶o̶u̶ ̶s̶t̶i̶l̶l̶ ̶n̶e̶e̶d̶ it's easier to use the interpolation syntax, e.g.
resource "oci_core_vcn" "vcn" {
  cidr_block     = var.vcn_cidr
  compartment_id = var.compartment_ocid
  display_name   = "${var.label_prefix}-${var.vcn_name}"
  dns_label      = var.vcn_dns_name
}
Likewise, if you need to combine a named variable and a string as an argument to a function:
data "template_file" "bastion_template" {
  template = file("${path.module}/scripts/bastion.template.sh")
  ...
}
Use improved conditionals
In Terraform 0.11, there were 2 major limitations when using conditionals. The first is that both value expressions were evaluated even though only 1 is returned. As an example, this impacted how the code for defining a cluster should be written for single-Availability-Domain (AD) and multiple-AD regions. To illustrate: in single-AD regions, the API expects 1 subnet id for the Load Balancer subnets, and 2 for multiple-AD regions. As we are looking the number of ADs up at runtime and using it as the only condition for choosing whether to pass 1 or 2 subnet ids, there was no way to know this a priori and pass 1 or 2 subnet ids dynamically, unless we made a map of the number of ADs for each region and used the map to determine how many parameters to pass. But that would mean writing extra and unnecessary code. As a result, when defining the OKE cluster, we had to manually toggle the code when choosing to deploy between single-AD and multiple-AD regions:
# Toggle between the 2 according to whether your region has 1 or 3 availability domains.
# Verify here: https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm how many domains your region has.
# single ad regions
#service_lb_subnet_ids = ["${var.cluster_subnets["lb_ad1"]}"]
# multi ad regions
service_lb_subnet_ids = ["${var.cluster_subnets["lb_ad1"]}", "${var.cluster_subnets["lb_ad2"]}"]
In 0.12, since this restriction is lifted, the code is simplified and we can determine the number of Availability Domains at runtime based on the selected region to decide whether to pass 1 or 2 parameters:
service_lb_subnet_ids = length(var.ad_names) == 1 ?
  [var.cluster_subnets[element(var.preferred_lb_ads, 0)]] :
  [var.cluster_subnets[element(var.preferred_lb_ads, 0)], var.cluster_subnets[element(var.preferred_lb_ads, 1)]]
Now there's no need to manually toggle the code. The 2nd major limitation with conditionals in 0.11 is that maps and lists could not be used as returned values. This has also been lifted, and we have used it in a minimal way. See the conditional block below.
Introduce dynamic blocks to reduce code repetition
As part of using first-class expressions, I mentioned the security list initially looked like this:
resource "oci_core_security_list" "workers_seclist" {
  ...
  ingress_security_rules {
    # rule 5
    protocol  = local.tcp_protocol
    source    = "130.35.0.0/16"
    stateless = false
    tcp_options {
      max = local.ssh_port
      min = local.ssh_port
    }
  }
  ingress_security_rules {
    # rule 6
    protocol  = local.tcp_protocol
    source    = "134.70.0.0/17"
    stateless = false
    tcp_options {
      max = local.ssh_port
      min = local.ssh_port
    }
  }
  ingress_security_rules {
    # rule 7
    protocol  = local.tcp_protocol
    source    = "138.1.0.0/17"
    stateless = false
    tcp_options {
      max = local.ssh_port
      min = local.ssh_port
    }
  }
  ...
}
I'm not showing the full gory list here, but there are 6 such repetitive ingress security rule blocks in the 0.11 version, covering rules 5–11 according to the OKE documentation, for 6 different CIDR blocks. With dynamic blocks, these can be collapsed into a single dynamic block instead of one ingress rule block per CIDR. First, we define the source CIDR blocks in a local list:
oke_cidr_blocks = ["130.35.0.0/16", "134.70.0.0/17", "138.1.0.0/16", "140.91.0.0/17", "147.154.0.0/16", "192.29.0.0/16"]
Then, we use a dynamic block and an iterator to create the ingress rules repeatedly:
resource "oci_core_security_list" "workers_seclist" {
  ...
  dynamic "ingress_security_rules" {
    # rules 5-11
    iterator = cidr_iterator
    for_each = local.oke_cidr_blocks
    content {
      protocol  = local.tcp_protocol
      source    = cidr_iterator.value
      stateless = false
      tcp_options {
        max = local.ssh_port
        min = local.ssh_port
      }
    }
  }
  ...
}
Dynamic blocks behave as if a separate block were written for each element in a list or map. In the code above, we iterate over a list of CIDR blocks. You can also combine a dynamic block with a conditional. In this case, we only need a list with one item. The item itself doesn't matter; we only need the for_each to iterate once if the condition is true. If the condition is false, the list is empty and the egress rule below is not created. Effectively, this becomes a conditional block:
dynamic "egress_security_rules" {
  # for oracle services
  for_each = var.is_service_gateway_enabled == true ? list(1) : []
  content {
    destination      = lookup(data.oci_core_services.all_oci_services[0].services[0], "cidr_block")
    destination_type = "SERVICE_CIDR_BLOCK"
    protocol         = local.all_protocols
    stateless        = false
  }
}
With Terraform 0.11, it was not possible to do this conditionally. This impacted us particularly on the service gateway, whose creation we allow conditionally. Thus, we either had to manually edit the egress rules configuration in the OCI Console or force the user to use the service gateway. Likewise, we had to manually update the routing rules for either the Internet Gateway or NAT Gateway, depending on whether the worker nodes are created in public or private mode, and add the routing rules for the service gateway to either the Internet Gateway or the NAT route table.
The first option is done after Terraform has run and leaves the Terraform state divergent from what's actually configured in the cloud. As for the 2nd option, well, I don't like forcing people down a particular path of using something they won't need. Using conditional dynamic blocks allows us to do this depending on whether the Service Gateway was created, e.g. we add this to the NAT route table:
dynamic "route_rules" {
  for_each = var.create_service_gateway == true ? list(1) : []
  content {
    destination       = lookup(data.oci_core_services.all_oci_services[0].services[0], "cidr_block")
    destination_type  = "SERVICE_CIDR_BLOCK"
    network_entity_id = oci_core_service_gateway.service_gateway[0].id
  }
}
Similarly, we add these routing rules to the Internet Gateway route table if the NAT gateway was not created. Notice the additional condition:
dynamic "route_rules" {
  for_each = (var.create_service_gateway == true && var.create_nat_gateway == false) ? list(1) : []
  content {
    destination       = lookup(data.oci_core_services.all_oci_services[0].services[0], "cidr_block")
    destination_type  = "SERVICE_CIDR_BLOCK"
    network_entity_id = oci_core_service_gateway.service_gateway[0].id
  }
}
Specifically for the security list in the okenetwork module, using dynamic blocks with an iterator also helped us reduce the security list code by roughly 25% while still allowing us to add previously missing functionality. As your code, especially your security rules, becomes cleaner, take the opportunity to review and perhaps redefine them.
Note: someone shared the conditional block as a solution on a GitHub issue, which I adapted slightly. Unfortunately, I forgot to bookmark the issue and cannot find it anymore. If you're reading this, good sir/lady, send me a note and claim thy prize.
Upgrade self-contained modules
The terraform-oci-oke project has 4 high-level modules:
auth
base (which itself has 2 sub-modules: bastion and vcn)
okenetwork
oke
The dependency graph of the modules is shown below (optional modules and dependencies with dashes):
[Figure: Terraform module dependency]
As you can see from the above, there are a few dependencies between the modules. The oke module depends on the okenetwork module, which itself depends on the base module and its sub-modules. As part of the process of upgrading the project to 0.12, we started with the base module and then moved up the chain to the okenetwork module and finally the oke module itself.
Remaining new features to explore
We have yet to fully explore the following new features:
For and for_each (although we have already dabbled with for_each a little bit; see the sketch at the end of this post)
Use of lists and maps as return values from conditionals
Rich types
The new template syntax
And as I mentioned above, as we move along with the upgrade, I'll be updating this post.
Summary
Use the following process/principles to upgrade your project to Terraform 0.12:
Fix code-breaking changes first
Experiment with 'easier' new features first, such as removing unnecessary interpolation where possible
Remember that using interpolation for string concatenation is easier and still valid code.
Don't get rid of them just for the sake of it
Gradually introduce more 0.12 improvements and features
Use dynamic blocks to reduce/remove repetitive code
Identify your module dependencies and upgrade them in order
Don't rush into creating complex object types; a simpler solution may be possible even if it means writing a little bit of repetitive code
As your code gets cleaner, take the opportunity to review your code and improve it
If you're interested, you can follow our progress on GitHub. Useful 0.12 code examples: https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples
Update: Read part 2.
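To close, here is a small, self-contained sketch of the for expression, one of the remaining features mentioned above. The variable name and CIDR values are hypothetical, not taken from the terraform-oci-oke project:
# for expression: derive one /24 subnet CIDR per availability domain,
# keyed by AD name.
variable "ad_names" {
  type    = list(string)
  default = ["AD-1", "AD-2", "AD-3"]
}

locals {
  # Produces: { "AD-1" = "10.0.0.0/24", "AD-2" = "10.0.1.0/24", "AD-3" = "10.0.2.0/24" }
  subnet_cidrs = { for i, ad in var.ad_names : ad => cidrsubnet("10.0.0.0/16", 8, i) }
}

output "subnet_cidrs" {
  value = local.subnet_cidrs
}
From Terraform 0.12.6 onwards, a resource block can also take for_each = local.subnet_cidrs to create one resource per entry (accessed via each.key and each.value), which pairs naturally with the dynamic blocks shown earlier.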
https://medium.com/oracledevs/lessons-learned-when-upgrading-to-terraform-0-12-6d894d3ab20e
['Ali Mukadam']
2019-10-31 23:38:32.324000+00:00
['Terraform', 'Oracle Cloud', 'Infrastructure As Code', 'Kubernetes']
This Article Won’t Change Your Life.
This Article Won't Change Your Life. But if you've ever found yourself lacking motivation, you should read on anyway. I can't stand clickbait titles. I have too much respect for you, the readers on this platform, to try and grab your attention with the false premise of finding something life-changing within the words on this page. Mere words won't change your life. But what they can do is give you a gentle nudge to get you on the right track. We all lack motivation from time to time. Whether it be your early morning gym workout or your efforts to lose weight, even the most committed of souls have days where they just cannot face what the dawn brings.
The World Ain't All Sunshine And Rainbows
Motivational speech videos litter the internet at every turn. More often than not, they'll be set to a backdrop of sports films and will almost certainly include the scene from Rocky. You know the one I mean. Oh, and watching the video daily will without a doubt completely change your life. But the video below is different. It makes no claim to be able to change your life. Which is good, because it won't. It does, however, claim to be the best motivational video ever made. Out of the hundreds, possibly thousands of motivational videos across the internet, this is the one that I constantly find my way back to. It's my first port of call when I start to doubt myself, or when I find my motivation starting to wane. So for that reason, I'd say it lives up to its name. I recommend that you watch this video. Maybe not now, but at some point. Bookmark this story and come back to it at a more convenient time. If you've ever struggled with the motivation to carry on, watch this video. After all, it's only six minutes long. Are you trying to tell me you don't have 360 seconds to change your life forever? If you're not seeing the results you want from the gym, or your diet, or your writing, or whatever pursuit you embarked upon, watch this video. If you've ever looked at your life and thought, "This isn't where I want to be," watch this video. The next six minutes won't change your life. But they might just give you the kick up the arse you need to set that change in motion.
https://medium.com/1-one-infinity/this-article-wont-change-your-life-b0f1e0fa9e62
['Jon Peters']
2019-09-01 20:52:52.175000+00:00
['Inspiration', 'Humour', 'Speech', 'Motivation', 'Advice']
Close More Sales With Couple of Creatives
Incorporate Strong Branding

Until recently, market research has relied on an understanding that people make decisions based on rational thought processes. However, new science is discovering that most of the decision-making process actually happens at a subconscious level, and is more influenced by emotion than rationale. Subliminal (adj.): (of a stimulus or mental process) below the threshold of sensation or consciousness; perceived by or affecting someone's mind without their being aware of it. Subliminal messaging is what makes people decide one way or the other. To create an effective branding campaign, you must be working with masters of strategic communication. With branding, it matters not so much what you say, but how you say it. How you convey your message is important in affecting the way it gets perceived on a subconscious level. Therefore, your main focus when initiating a branding campaign should be the emotions you want to evoke in your audience.

Focus on Your User's Experience

User experience (UX) design is about the user and the experience. It's a marketing tactic that ensures your branding is effective in digital applications such as your website and mobile app. If you provide a positive brand experience for your users, they will reward you with their business.

Provide Holistic Digital Marketing

It is important to implement branding strategies throughout your online presence to reinforce your brand's key messages and sustain the value for which it stands, in order to attract a broader audience. You must keep your business's website optimized. This means keeping it well-designed and up to date, with careful attention to brand guidelines, search engine optimization (SEO), easy navigation and clear calls to action. It is important to employ blogging, social media and email marketing campaigns to expand brand awareness and generate new inbound leads. Create content around the questions, issues and challenges to which your prospects want answers. This will help your prospects find valuable information while introducing them to your brand and establishing yourself as an expert on the topic. Inbound marketing is one of the most effective ways to grow your business in the digital world. Thank you for reading. Please follow my blog. To learn more about what I do, check out my digital marketing agency, Couple of Creatives. ✌️💡
https://medium.com/couple-of-creatives/close-more-sales-with-couple-of-creatives-9e8ffa1aad08
['Alyssa Leverenz']
2017-03-14 15:36:33.081000+00:00
['Digital Marketing', 'Sales', 'Marketing']
The Stories that Bind
The Stories that Bind Sixty years, one love story, and the truth about the telling Will and Rosalie Wood at their 55th wedding anniversary. Their grandson Jake snapped this photo before cutting into the cake. Before I started recording, Will Wood leaned in. “There are some stories you believe and some stories you don’t believe,” he said. He told me this with a grin, because Rosalie, his wife of almost six decades, was about to start telling stories. At that time, I figured he was teasing. In the days and weeks that followed, as I sorted through the competing narratives on my recorder, I began to question what I knew about telling stories. That October afternoon I had driven from Fredericton to Saint John with my boyfriend Jake for a visit to his Grandparents. I had met Rosalie and Will before, our last trip celebrating her eighty-third birthday. On this trip I had plans to interview Rosalie, do a profile on her for a class assignment. When we crossed Harbour Bridge I could smell the city, a mix of sea salt and industry, as we approached the suburban neighbourhood and the blue Victorian house with red and white trim where they have lived for almost 50 years. When we pulled up, Jake squeezed my hand before we stepped out into the sun. “Will, they’re here! Jake’s here!” A plump woman with cropped white hair sped down the steps. She almost broke into a run when she hit the cement walkway. Jake stopped her with a hug. I stood back, waving to Will as he made his way down the walkway. Will is tall, straight-backed and slim, and for this occasion wore a button-down and slacks. He walked down a little slower than Rose but with a wide smile. After everyone had exchanged hugs, we were ushered into the house. Rosalie and Will took turns firing questions about the drive down and how this semester was going. We shed our coats and took off our shoes. Once I could get a word in, I thanked Rose for letting me interview her but she waved it off. She told me her daughter Suzanne interviewed her and her mother when she was in school — a story about powerful women. The pair led us to the living room where we sat on the leather couch. I count 16 cardinals, some displayed as figurines and ornaments, others as paintings and stained glass. On the living room table was a bouquet of long-stemmed red roses, a gift from one of Rosalie’s admirers. It was then that Will issued his disclaimer, and I began my long lesson about the stories that define our lives. You’d think that after almost 60 years together, the stories would have been settled, that memories would have been sorted and locked. One thing that’s not in dispute is that they met in a bowling alley. When both were in their early twenties they bowled at the same alley in Saint John. But for a long time, they were ships passing in the night. Rose bowled from 7 to 9 P.M. and Will from 9 to 11. One Wednesday night, Will and his buddy Earl were there early to practice for their bowling league. On the way out they spotted a group of girls who were just starting a game. Earl, in a move of brazen boyhood, walked over and wrote their names on the girl’s score sheet. So they began to bowl. “Here I was, an innocent young man that didn’t know anything,” Will said. “And this girl’s over dragging her arms over me and — ” Rosalie interjected: “Oh right!” “You know, making out. Here I am! An innocent guy.” “Now I’ll tell the story the way it was!” In Rosalie’s version, Earl approached them and asked what the group was doing there. 
Earl knew her and some of the girls from high school. They were a mix of eight 20-somethings from the upriver communities of Gagetown and Hatfield point. “We always went to The Rib afterward and had chocolate cream pies,” Rosalie said. The girls left to go to The Rib and they found Will and Earl driving around the square looking for them. “Well we were looking for other women, me and Earl,” Will said. “No, they weren’t, they were looking for us,” Rosalie said. What is clear is that by the end of the night, Will asked a friend for Rosalie’s phone number. “And he would call me and call me and call me.” But Rosalie was never there to answer. Her cousin was in the hospital at the time and she would visit every night. “I’ll tell you the truth of the story, not the story you’re getting,” Will interrupts while Rosalie keeps talking. He confirms that the next time they saw each other at the bowling alley, Will took his chance. He asked if after bowling she would like dinner. He offered to loan his car to her. She could drive it while he was bowling, then pick him up to go to Hilda’s Grill. By offering his car, he accidentally found his way to her heart. Driving cars is a weakness for Rosalie. She has what she calls a lead foot. “That’s one thing I love, to drive fast,” Rose said, falling into a memory. “I had a white continental high top Viscount Dodge.” It was the first car Rosalie bought with her own money. A car right out of a 1950’s cartoon. A two-door with silver piping that curved up at the back. She sold the car when they got married to pay for a fridge and stove. Before then, Rosalie told me she liked to race the car on the ice of Belleisle Bay. “Ten miles of open ice,” Will confirmed, laughing. “Ten miles of shiny race track.” “You put your brakes on and you would just go.” Rosalie assured me she knew where all the air holes were and how to avoid them. This skill came in handy off the ice too. On her way to her high school Baccalaureate, Rose was driving the Viscount on a series of roller-coaster hills. A sharp turn sent her off the road but she kept going. She continued her turn through a farm until she could see the road again. “And I was going so fast I came right back up on the road!” “Now most people would just try and slam on the brakes,” Will interjected. “But I thought I could get back on the road,” Rosalie said. “And I did!” Without so much as a pit stop, she made it on time. “She would have gone through the barn if she hadn’t turned the wheel,” he told me with wide eyes. “But you know what we were all young,” Rose said with a wave of her hand. They had three children: Suzanne, Keith and Rob. Suzanne came first, four years into their marriage. At this time Rosalie had opened her own beauty parlour in the basement of their first house. Rose was thrilled to have a daughter around the shop, counting her as the sister she never had. “I had these two terrible brothers that I loved to death. Who I’d love to have now,” she said. Will later assures me that two kids aren’t much more than one and at that point, you really should just have three. It seems like reasonable math to me, or that Will has forgotten what work went into raising kids. “Will said I smuggled them in,” Rosalie says while laughing. “I don’t think I smuggled them in.” “But anyway they all grew up.” Suzanne is now 56 with two children of her own. Seger was born first and three years after came Jake. Last spring, Covid-19 kept Rosalie and Will from seeing their family. 
During this time, every few weeks Suzanne would jar a pot of Boston baked beans or pick up treats from the Happy Baker. She would drive from Fredericton to Saint John to drop them off at her parents' doorstep. When restrictions loosened, family and friends would talk through the glass door or six feet away down the walkway. Suzanne had her second son in a snowstorm. When her doctor heard the forecast, he made her come to the hospital right away. At the time she was with her mom shopping for shoes downtown. "It doesn't seem so long ago I was trying to buy shoes," Rosalie says, lost in a memory. Suzanne had already called her up days before the baby was expected. Rosalie was sure that the salesman was wary of helping them, nervous that her water would break in his shop. After getting the doctor's orders they went straight to the hospital while Will watched after his oldest grandson. "We always had a great time together, didn't we," Will smiles as he speaks. Now the boys are both in their twenties and visit their grandparents every few months. At every opportunity, Rosalie will brag about her grandkids, how Jake made the Dean's List and how Seger got his first big job as a tattoo apprentice. She won't forget to brag about their mother, Suzanne, who bought the Body Shop in Fredericton when she was 20. The shop was a family affair, and they were constantly recruiting new members. "Neil said that all of us worked in the Body Shop," Will chuckles. Will and Rosalie's Saint John house was transformed into a packing station for the shop. Two girls worked in the basement with Rosalie. Rob and Keith had their friends join in the operation too. Half the hockey team was answering phone calls, writing down how many soap bars and lotions should be sent to Fredericton. "You'd be surprised how many people were involved," says Will. "People you would never expect." Around Christmas time, everyone had a key to the basement. Whoever could drive would take the van, stock it up, then send it to the city. When she wasn't helping with packing, Rosalie would work in the shop itself. "I felt like I was 18 again," said Rosalie. But let's get back to the story of how they met. "Here's the truth of the story, see you always get the truth from me," Will continued after Rosalie was done explaining her side. "The first time I met her she had slacks with the knee torn out and she was, you know, just barely-" Rosalie cuts him off. "We were all a bunch of country girls and we were all full of life and full of a lot of fun," she explained. "We were playing leapfrog coming up and I ripped my whole knee out of my pants." "That's the story she told, but I just think she didn't have a good pair. Anyway, the next time she showed up though, wow," Will lost himself in a moment of thought. "She had the rule of one size smaller than you normally wear." "No!" Rosalie broke in. "Oh, that's not true." "And the hair was done, you know." "Well, I was a hairdresser!" shouted Rosalie. "And the eyes were all done up. And I thought wow, this is a transformation. It's like getting a rusty old car and putting some bodywork on the thing and painting it up. Anyway, I thought she looked pretty good. That was what happened to me. The bait was out, the lure was out, and I was hooked." Rose was telling the story of their first real date, the one where she drove Will's car to dinner after bowling. "So anyway he gives me the keys to his car, you know, he gives me the keys to his car," Rose said, wiggling her eyebrows.
"Think about it." "Later on in life, everyone had the key to my car," Will broke in. The date was going well until they got to the parking lot of the restaurant. While backing up, Rosalie drove the car into a hole, tearing the muffler off. "And he was really nice about it!" Will had put it on in the first place, so the next day he took it to the garage and put it back on. He demonstrated how loud the car was without a muffler through a series of screeching and blowing sounds. "Anyway, you'll know how I was hooked," Will paused to mime a fish being caught on a hook. "She knew all the tricks. Well, women all do! They're born with that knowledge. There wouldn't be any more humans if they weren't born with that knowledge." I'm not sure if I believe that. Not all women are born with this elusive knowledge, and some men are well versed in the art of wooing. But for Will, it seemed he was simply swept off his feet. "Are you fellows getting hungry?" Will asked. I switched off my recorder as Rosalie and Will insisted on taking us out to dinner. We settled on a recommendation of theirs ten minutes away, a pub where all four of us ordered the fish and chips. Rosalie has told her grandsons before that if she could live her life again she would change absolutely nothing. This is a pretty big statement, as impressive as it is at first unbelievable. I suspect her lack of regret comes from the family and friends she has had, on top of a fair bit of adventure. "I took care of her," Will said of their younger years. "Stopped her from drinking and smoking and going around at night, looked after her." "Funny you are, very funny you are," Rose chuckles. "Funny, funny." We said our goodbyes with hugs and kisses in the parking lot, with many reminders to be safe and promises to come back soon. Only later, as I began sorting through the various versions of their stories, did I come to understand why, on the drive home, I had felt like part of the family. Rachel Smith is a third-year student at St. Thomas University majoring in Journalism and Great Books. Born in Boston, Smith grew up in New Hampshire and now calls Fredericton home. She misses the mountains but loves exploring the city. In her free time, Smith likes to ski, hike and read. This story was written for The Power of Narrative.
https://medium.com/see-it-now/the-stories-that-bind-sixty-years-one-love-story-and-the-truth-about-telling-4d1444ed5a34
['Rachel Smith']
2020-12-13 13:06:15.119000+00:00
['Saint John', 'Family', 'Storytelling', 'Love']
How to Deal with People Judging You and Your Work
The Biggest Critic

It's easy to complain about external criticism, but have you ever looked closer to home to find your biggest critic? Generally speaking, life's biggest critic lives within you. Every single time you work up the courage to take action and move your life forward, you tear yourself down. It's self-criticism that gets to you, and once you're comfortable with criticizing yourself, it becomes a whole lot easier to take the criticism of others to heart. Think about it. When you have a fantastic idea, what do you do? Do you immediately tell all of your closest friends all about it? Or do you keep it to yourself and get things moving before you share it? There's a good chance it's the latter, and I want you to think about why. It's because you're worried about how they will respond. After all, you yourself have built up criticism in your own mind. While a lot of people have blogs now, so many people struggle with hitting the publish button. It's hard to put yourself out there and risk judgment. A lot of people will write posts and save them to drafts, and it takes them weeks to build up the courage to publish one. Some people may even limit the comments for fear of the response they'll receive. You can allow internalized fear and criticism to hold you back, or you can act regardless. Generally speaking, the random haters will throw criticism your way and move on. All you have to do is ignore them. Yes, that's much easier said than done when you're desperate for validation. So, how do you deal with people judging you and your work?

Don't Take It Personally

This is probably the most difficult piece of advice you'll ever hear. You can't take judgment or criticism personally, even though it feels profoundly personal. It's easy to assume that someone's behavior is about you, especially when it's you that's being judged. However, the reality is they criticize everything and everyone, including themselves. They may act as though they know everything, have everything, and are the greatest thing on two legs. The reality? They don't truly feel that way; they just act like they do because they're deeply insecure. When you're faced with judgment, of you or your work, it's rarely about you; it's almost always about the person who is doing the judging.

Respond With Compassion

No one is born to be a judgmental person. No one is born to be a nasty person. It's a learned trait, one that we pick up from the people around us or society at large. When you reframe it this way and think about what could possibly have happened to someone to turn them into a nasty or judgmental person, it becomes much easier to extend compassion in response to the judgment. I'm sure you're rolling your eyes, because why on earth would you be compassionate when someone is judging you? You can't change the way the other person behaves, you can only regulate your response, and isn't compassion always the better option? It's up to you to choose the high road. And you never know how your compassion might influence their future interactions.

Take A Lesson

Life is a lesson, and when faced with judgment you can search for the lesson in it. It's a lot easier to deal with judgmental people when you view them as a lesson to learn and a challenge to overcome. Will you respond by matching negativity with negativity? Or will you rise above it and be a better person? You can't make it through life without running into a judgmental person, and there's a good chance that you've been a judgmental person at one point. So, what else can you do but take the lesson and move forward?

Rise Above It

Your first instinct when someone criticizes you is probably to protect yourself, so you get defensive. Or you may go on the attack in a bid to protect yourself. Regardless, as satisfying as this response may feel at the time, it drags you down to their level and makes you just as bad as them. If you dislike their behavior, then you can't give in and join in on the same behavior; you have to rise above it. The majority of extremely judgmental people are deeply self-critical. You don't know what negative thoughts are flying through their head. Take, for example, an overweight person who is a vocal critic of overweight people. They laugh it off and say they can make jokes because they're also overweight, but they're simply using humor to deflect from their own insecurities about their body. Sometimes the judgment they offer is a vocalization of their own self-criticism.

Reframe & Adjust Your Attitude

Your boss is deeply critical of you; it feels as though every minute of your workday revolves around their judgment. That would exhaust anyone, especially when you spend five days a week with this person. However, you can reframe the situation and change how you think about it. Rather than focusing on the judgment they offer, think about how lucky you are to have an awesome job with co-workers you appreciate. Don't focus on the judgment; focus on the positives around it. Rather than focusing on the judgment of others, you can focus on the positive attention and support you receive from the people you love. Which brings us to your attitude. More specifically, an attitude of gratitude. You can be grateful for the fact that you're not a bitter, angry, judgmental person. You can be grateful that your parents didn't tear you down and turn you into the type of person who judges. As difficult as it is, you can find a silver lining if you look hard enough.
https://medium.com/personal-growth/how-to-deal-with-people-judging-you-and-your-work-f1e0f848667a
['George J. Ziogas']
2020-11-26 15:12:32.068000+00:00
['Self Improvement', 'Inspiration', 'Relationships', 'Work', 'Psychology']
Landing Page Doesn’t Convert? Here’s What You Should Do
According to data by Tony Haile of Chartbeat, customers view your website for 15 seconds or less. It's enough time for them to do a big scroll, read bits and pieces of content, and decide whether to stay and soak in what you have on offer or leave and continue their search.

What customers can do in 15 seconds on your website:
- Read a compelling headline
- Enter their email address to subscribe to your mailing list
- Add a product to their shopping cart
- Contact you through click-to-call

What customers can't do in 15 seconds on your website:
- Read 500 words of content on your homepage to know who you are and what you can do for them
- Sift through pages and pages of content to find how to contact you
- Search every nook and cranny to know if you have what they need (aka that product or statement that answers 'Can you help me?')

If it takes longer than a quarter of a minute for your landing page to communicate your unique selling proposition, customers will move on to the next one. You know what's worse? If you actually spent ad money to take them to your site, only for them to leave disinterested.

Why your landing page isn't converting and ways to fix it

1. Your message isn't clear
Write with customers in mind. If you have what they are looking for, make sure they know from the get-go. Also, the unique selling proposition that made them click from a search page should be communicated throughout your landing page. Having conflicting information confuses customers. The last thing you want to do when you want to convert is to make customers feel deceived. Keep messaging consistent across these four main pieces of content: the headline, the supporting headline, the reinforcement statement, and the closing argument.

2. There are no images to break up the text and keep things interesting
Don't forget about images. Without them, pages look bare and incomplete. Choosing a hero image is very important. Make it too loud and it distracts customers; too muted and it would seem like a random addition. Every element on your landing page should have a purpose. Use an image to help customers envision the product or service they're going to get, or a video that shows benefits and context of use.

3. Customers don't know how helpful your product/service can be
You have a compelling headline and an image/video that captured your customer's full attention. Great. But, after scrolling down for more information, your customers find nothing else. Customers need convincing. They need you to tell them why they should sign up or buy your product. You can do this by having a bullet-point list that summarises the benefits and/or features of your product, or by showing how your product works and explaining how it makes lives better.

4. There is nothing/no one to back your claims
You've listed a hundred and one reasons why your product is a life-saver. Your customers are thrilled, but they're worried it might be too good to be true. You need social proof, which comes in many forms. It could be a positive review left by one of your happy customers, approval from credible experts in the relevant field, paid or unpaid endorsements from celebrities/influencers, etc.

5. Your landing page doesn't have a call-to-action
Your customers are convinced and they're ready to convert. They reach the end of your landing page, feeling very excited. Now what? You tell them to act on it, and fast. Your landing page has one goal: to make customers respond to your call-to-action. This could be signing up for an account, downloading an eBook, subscribing to your newsletter, or directing them to a product page. Goals vary, but one thing's sure: once they've clicked to convert, your landing page has done its job.

Once again, the five elements of a high-converting landing page:
- Unique selling proposition
- Hero shot (images or videos)
- Benefits
- Social proof
- Call to action

(Image: an example landing page structure from Unbounce that contains all the above elements.)

Get more leads and conversions by having a high-converting landing page. Bambrick Media has designed and created websites with effective landing pages for a number of Australian businesses in its 15 years of trade. Contact us today and we'll help turn your landing page into a winner.
https://medium.com/digitaldisambiguation/landing-page-doesnt-convert-here-s-what-you-should-do-d612344a87da
['Jason Mcmahon']
2017-08-27 23:47:23.706000+00:00
['SEO', 'Landing Pages', 'Marketing', 'Landing Page Optimization', 'Digital Marketing']
Ya Gotta Pay To Play
A lot of times in life we all get dealt some really shitty hands. You know, like when you get passed over for that promotion you really wanted, writing for a living, losing a job, a loved one, writing for a living, working three jobs to put food on the table, your car breaking down in the middle of rush hour traffic, or even writing for a living. Subliminal messages, ain't they a hoot? All joking aside (at the moment), the single most ubiquitous statement many of us know by heart is that ya gotta pay to play. And it's not just buying a ticket for a ride at a local carney or sweetening the poker pot. Everything we do requires some level of payment, whether it's mental, emotional, physical, or monetary. If you want to play the game you must be willing to pay the stakes. I think one of the biggest problems we humans have is we either don't understand the demands of the games we play, or we don't know our own limitations. See, that's the trick, where the skill of self-realization comes to bear. Knowing yourself and your capabilities well enough to keep you moving forward despite all the crap you get dealt along the way. Oh, it would be great knowing we'll always have an ace up our sleeve when we get into a tight spot, but the sad reality of that is, most times we don't. And should we continue to commit to something we now realize we should have never started in the first place, we stumble and face-plant on the sidewalk. It's all about understanding our limitations, folks. We might have thought it was a great idea to start a business in a depressed economy. It's been done before with some success. However, we never stopped to examine our shortcomings. Shortcomings such as the fact we don't have the slightest bit of business acumen and can't manage a balance sheet or a profit and loss statement to save our lives. Where's that ace up the sleeve when we need it? There are other limitations we sometimes overlook. Limitations such as a lack of true purpose, positive mental attitude, perseverance, and grit. Some of the things we try to do in life require a shit ton of all four. Ah, the game of life. In most circumstances, life will continuously demand that you pay to play. It will constantly look for an opportunity to steamroll your ass. Just to get from one day to the next you're going to need a hell of a lot of positive mental attitude and an understanding of just how much of a beating you can take. Before undertaking anything you need to understand your own limitations. I'm not saying don't ever push the envelope. We all should be continuously reaching for that golden ring, right? It's how we learn and grow. It's how we add another really cool gadget to our Batman toolbelt. Hey, I watched The Dark Knight again last night. Sue me. The point is, the more we push ourselves, the more things we accomplish, and the more we master.

"Ah, but a man's reach should exceed his grasp, or what's a heaven for?" Robert Browning — Dramatis Personae

But in doing so, in constantly pushing ourselves to achieve some new and shiny trait or skill, you need to be realistic and truly understand your limitations. You need to understand, from all the attempts, successes, and failures you experience in your life, what you realistically can and cannot do.

You don't tug on Superman's cape
You don't spit into the wind
You don't pull the mask off that old Lone Ranger
And you don't mess around with Jim
[You Don't Mess Around With Jim] — Jim Croce, 1972

See, as I said: understand the things you just can't do. On the flip side of the can't-do chart are the things we'd really love to do but know we'll probably never get to do. Yes, we all want to be astronauts and part of the colonization of Mars, just so we can get marooned and eat potatoes grown in shit as Matt Damon did, right? Again, just me? Doubt that's ever going to happen for me. I hate raw potatoes. See? You've got to catalog these limitations, understand what they are, and learn from them. Look, you're already paying the price. You know you have to pay to play, so you might as well start studying the odds of your wins and losses and understand them both. Okay, at least agree with me that truly understanding your limitations, and having an inkling of what you can and can't do, keeps you from undertaking something that's more than likely improbable. Maybe something that wasn't up your alley in the first place. Remember, ya gotta pay to play this game of life, and each of us truly needs to understand our limitations so we can get the best out of living each day. Of course, if you think you don't have any limitations then you're probably Superman and impervious to Kryptonite. If so, this message isn't for you. But hey, tell Lois Lane we said hello. Thank you so much for reading. You didn't have to, but I'm certainly glad you did. Let's keep in touch: [email protected] © P.G. Barnett, 2020. All Rights Reserved.
https://medium.com/the-top-shelf/ya-gotta-pay-to-play-1341e1991800
['P.G. Barnett']
2020-10-03 01:33:05.860000+00:00
['Human Behavior', 'Life', 'Behavior', 'Top Shelf Pg', 'Self-awareness']
5 Reasons To Hire An Advertising Agency Versus Staffing In-House
Marketing and advertising are essential for the success of any business. As your company grows, you will eventually be faced with the difficult decision of whether to build your own marketing group or to outsource to an advertising agency. Here are five benefits to hiring a full-service advertising agency. 1. You Get A Variety Of Talents And Skills When you hire an agency team, you’re adding experience and a diverse group of talents and skills. Hiring an agency means hiring people who work in all areas: copywriting, proofreading, messaging, design, SEO, digital, media, etc., and not just the skills of the individual or individuals who would make up your in-house group. If you are lucky, your employees will be capable in two different areas, but for the most part, each employee will only have one main skill set for your business to utilize. More importantly, utilizing the talents and skills of an agency can lead to higher quality messaging and improved results. 2. It’s Worth The Cost It is difficult for businesses to retain top-level advertising talent on staff, because top marketers and creatives are expensive and often prefer the various creative challenges and opportunities that come from working at an agency. This doesn’t even take into account employee benefits and the hidden costs of hiring, which include employee turnover, training, education, and down time. It all adds up. Furthermore, the costs of computer hardware, subscriptions for up-to-date software and online services, office space, etc. add up when staffing in-house. Agencies have a variety of important and helpful tools that you might not be able to afford or even know about. Need to strengthen your brand? Download our free guide to the StoryBranding process. 3. Offers You Objective, Up-To-Date Viewpoint Hiring an agency allows you to get new ideas from someone who has the time and experience to accomplish what is needed to help your business achieve its goals. While it is true that employees may know your company and its offerings, in-house marketers are often too familiar and don’t view things the same way a customer or client would. Agencies work across industries and can bring that experience to provide a fresh perspective and new ideas you might not have encountered yet. Also, it is helpful to receive outside opinions on your marketing efforts to make sure you are working to connect with customers with the best strategy possible, and not just doing what you are comfortable with. 4. Your Workforce Will Be Scalable Working with an agency provides you with the flexibility to increase your marketing manpower as needed. Whether your business is entering a busy season, preparing for a launch, or dealing with an unexpected project, agencies allow you to ramp up your bandwidth to make sure the necessary people are available to complete marketing projects on time. And when marketing projects are scarce, you’re not paying payroll and benefits to employees with few tasks to complete. 5. You Will Save Time When working with an agency, you will typically have a key point of contact, an account executive, who is in charge of everything related to your account. This means that the time you would have spent on managing your team, coordinating with freelancers and publications, checking the work, and proofing can now be spent on the other important aspects of your business. Additionally, agencies are built for hitting deadlines. 
Unlike with an in-house team, they have the staff in place to make sure that work on your account will not stop if someone is sick or taking a vacation.
https://medium.com/stevens-tate-marketing/5-reasons-to-hire-an-advertising-agency-versus-staffing-in-house-90011194c772
['Dan Gartlan']
2017-08-02 15:18:21.846000+00:00
['Inbound Marketing', 'Advertising Agency', 'Marketing Agency', 'Marketing', 'Digital Marketing']
Spotify Playlist Classification With Logistic Regression
Introduction

Being a big music fan and having that little touch of OCD, I feel the need to organize my songs. I have mainly two playlists: whenever I am in a happier mood, I listen to one, and when I feel more in a sad mood, I listen to the other. I have done this for about three years now, and to define which songs should go to each playlist, I had to listen to each one and classify it by my own standards. However, when I started learning a bit about Data Science, I realized that this could be automated. First, I had to build the dataset. Then I did some data exploration, so that I could analyze it and find patterns. Afterward, I did some feature selection, to select the features that matter the most to my model. After that, I searched for the best classification model for my project. Finally, I had to understand the result metrics, to properly interpret my results.

Building the Dataset

To build a classification model, a labeled dataset is necessary. Fortunately, I already had that, since I had been labeling my songs for three years by classifying them manually into each playlist. So we have the label 1 for my happy playlist and the label 0 for the sad one. I used the Spotify API to build the dataset, getting the song features that are going to be used as parameters for the classification. Learn more about the features here. The dataset ended up like this: (Image: the dataset's first 5 rows)

Data Exploration

Data exploration is extremely useful to understand the dataset. I started by building a heatmap to understand how the features relate to each other: (Image: features heatmap) From here, we can already see that energy and acousticness will be important to the model, since they are inversely proportional to each other. Besides the heatmap, plotting a histogram for each feature is also a good way to see how they are distributed. For example, valence (which measures the 'happiness' of a song) presents a different distribution for each label: (Image: valence histogram)

Feature Selection

To select the features that matter the most to the model, I decided to use the chi2 test. This test verifies how relevant each feature is by calculating a p-value for it. Learn more about the chi2 test here. The p-value explains, in a very rough manner, how much this feature influences the result. Here the null hypothesis is that the feature is not relevant, and a low p-value rejects that hypothesis. Therefore, a high p-value shows that the feature does not influence the result much, and a low p-value shows that the feature influences it a lot. Learn more about p-values here. So, we must choose a limit at which the p-value becomes relevant enough. Usually, this limit is set at 0.05 (a 95% confidence interval). So, we filter out every feature with a p-value higher than 0.05. (Image: the p-value for each feature) As we can see, some key features were kicked out of the model, such as danceability, which is pretty strange, since it's common knowledge that a sad song would not be as danceable as a happy one. That shows how my brain splits those two feelings apart. Sad, for me, is something more acoustic with not as much energy, but it could be sad and danceable; we can see that in the scores of acousticness and energy.
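As a rough sketch of the selection step just described, here is how the chi2 filtering can be done with scikit-learn; the CSV file name and column names are assumptions, not the project's actual code:

import pandas as pd
from sklearn.feature_selection import chi2
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv('playlists.csv')  # hypothetical file: Spotify audio features plus a 0/1 label
X = df.drop(columns=['label'])
y = df['label']

# chi2 requires non-negative inputs; Spotify features like loudness are negative,
# so scale everything to [0, 1] first
X_scaled = MinMaxScaler().fit_transform(X)
chi2_scores, p_values = chi2(X_scaled, y)

# Keep only the features whose p-value is below the usual 0.05 threshold
selected = [col for col, p in zip(X.columns, p_values) if p < 0.05]
print(selected)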
Classification Model

To choose the model, it was necessary to look at the problem and, from there, figure out which model fits best. There are plenty of models for classification, such as Naive Bayes, Logistic Regression, and SVC. Since I am classifying into two categories, I don't need a complex algorithm that can classify 50 categories, which would be a bit of an 'overkill', like SVC. Since I am doing a simple project only to be used by me, I don't need a super-fast model that can classify with just a small amount of data, like Naive Bayes. Logistic Regression's downfall is that it can only classify between two categories, but that is exactly what I need, so I decided to go with Logistic Regression. To understand this model, the best way, in my opinion, is to compare it to Linear Regression. (Image: Linear Regression and Logistic Regression comparison; source) As we can see in the image, Logistic Regression works similarly to the Linear one. However, instead of fitting a line, it fits an S-shaped function (the sigmoid function). Since the results we want are discrete and not continuous, Logistic Regression can more accurately predict this categorical value. Learn more about Logistic Regression here.

Result Metrics

After training the model, I had to understand the results. I used the classification_report function from sklearn. This returns 4 metrics:
- Precision: of everything the model predicted as positive, the percentage that was actually positive (true positives divided by all predicted positives). It shows the accuracy of the positive predictions.
- Recall: of everything that was actually positive, the percentage the model managed to find (true positives divided by all actual positives).
- F1-score: a function of precision and recall that gives a more general measure of the model's performance.
- Support: the number of events (in this case, songs) for each label.

Learn more about these metrics here. (Image: classification report of the model) I chose these metrics for my results because they can easily expose the false positives and false negatives from the classification. Moreover, the f1-score sums up all of this information in one number, so we can clearly see the efficiency of the model.

Conclusion

As we can see from the classification report, the model performs amazingly for the playlist labeled as 1. However, the recall for playlist 0 is quite low, bringing the f1-score down a bit. That is probably because of the number of songs in the training set; we can see that label 1 has approximately double the songs of label 0. In general, the results were pretty good, since this was my first time messing with Data Science. Despite the positive results, there are plenty of things to improve. Tests with different models, such as Linear SVC or even Naive Bayes, could have been a better way to validate which model is best. More basic statistical exploration of the dataset could have helped to bring more insights. Even clustering the dataset could bring a more visual approach. All those limitations will be covered in the next release, stay tuned. This project was an amazing learning experience and it really helped me to classify my music. I truly hope that this can help anyone starting with Data Science just like me! Any questions and tips would be greatly appreciated! Check the whole code in my repository here. Contact: LinkedIn
https://medium.com/swlh/spotify-playlist-classification-with-logistic-regression-f0a7689d7f2f
['Marcelo Dias']
2020-09-30 00:50:47.079000+00:00
['Artificial Intelligence', 'Machine Learning', 'Data Science', 'Spotify']
Public Token Sale is delayed due to current market conditions
Dear Supporters, Due to adverse market conditions, dApp Builder has taken the difficult decision to postpone the public element of our sale. We believe this to be the best decision for the company and token participants. In the interim, the team remains fully focused on the factors within our influence and is dedicated to bringing dApp Builder to market. We will continue to develop the product and will focus on three core areas during this period:
- Further product development
- Partnerships
- Private sales and white-listing

We will continue to update our supporters and community as we hit milestones in these areas and will share information on them shortly. We thank you for your support and are here to answer any questions you have! dApp Builder team Try it now on https://dappbuilder.io/builder Or watch the code on https://github.com/DAPPBUILDER/dApp-Builder Stay in touch for updates via
https://medium.com/ethereum-dapp-builder/public-token-sale-is-delayed-due-to-current-market-conditions-7441cadaef41
['Dapp Builder Team']
2018-10-30 10:20:42.633000+00:00
['Dapps', 'Ethereum', 'Development', 'Blockchain']
Node.js for PHP developers: 5 must-know practical aspects with code examples
While the popularity of Node.js is increasing, PHP's traction is going down a tiny bit. With that context, this post is going to elaborate on 5 must-know practical aspects of using Node.js for PHP developers. These will be things usually no one talks or writes about. Time to get going.

Node.js for PHP developers (not Node.js vs PHP)

This piece is a list of things you as a PHP developer must know and learn to use Node.js effectively. On the contrary, this post is not a Node.js vs PHP write-up where PHP is bashed. I have used both languages. I started to write a lot of Node.js applications back in 2016. When I started I faced some difficulties, as I had been used to PHP at work for more than 7 years prior to that. There was a book released toward the end of 2012 covering Node.js for PHP developers. This blog post is not going to talk about what PHP or Node.js is; you can read about that in other posts. I will also not elaborate much on non-blocking I/O or the event loop. Still, some of it will be brushed through when discussing the practical aspects of writing good Node.js code.

Node.js for PHP developers: the practical side

PHP has been alive since 1995 and reportedly is still used by 79.1% of the websites monitored by W3Techs (I can't really say if that is the whole internet). So chances are very high that you have used PHP or deployed something written in PHP. For instance, with a growing trend: WordPress is used by 63.7% of all the websites whose content management system we know. This is 39.0% of all websites monitored by W3Techs. On the other hand, Node.js was released in 2009. Major tech companies like LinkedIn and PayPal started adopting it from 2011 to 2013, for various reasons like microservices. As per the Stack Overflow developer survey of 2020: For the second year in a row, Node.js takes the top spot, as it is used by half of the respondents. It is no secret that Node.js has become very popular in the past 5 years. So as a PHP developer, these are 5 must-know practical things to be a great Node.js software engineer too. Node.js for PHP developers is similar in some senses but different in other aspects; some are described below:

1. Node.js code execution is async and non-sequential

This is a behavior that tricks lots of PHP developers. In PHP the code runs in sequence: at first line 1, then line 2, and so forth. In JavaScript, and particularly in Node.js, that may not be the case. You can potentially put things in the background with good use of promises and callbacks. JavaScript is event-based and your code responds to events. Below is a modified code example, with an explanation, taken from my open-source currency-API repo:
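Since the original snippet is embedded as an image, here is a hedged sketch of the pattern it demonstrates, explained step by step right after it; getRate, db and the SQL are illustrative stand-ins (db is assumed to be a promise-based client), not the repo's exact code:

// Sketch of the async pattern described below (illustrative names)
async function getExchangeRate(db, from, to) {
  const rate = await getRate(from, to); // 1. get the rate

  // 2. Kick off the insert WITHOUT await: it keeps running in the background
  db.query(
    'INSERT INTO rates (from_code, to_code, rate) VALUES (?, ?, ?)',
    [from, to, rate]
  )
    .then(() => console.log('rate saved'))
    // 4. Any problem with the insert is only logged here, in the catch
    .catch((err) => console.error('insert failed', err));

  return rate; // 3. the function returns before the insert has finished
}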
If you look closer, that innocent-looking db.query has been pushed into the background. So it will execute like below:
- Get the rate
- Run the insert query in the background
- While the insert is running, the function has already returned the rate
- If there is an issue with the insert, it is logged in the catch

There is no out-of-the-box way to do something like this in PHP. This is the first thing that stumps PHP developers, and it makes it harder to understand Node.js for PHP developers. This asynchronous code execution behavior also makes finding the right stack trace harder in case of errors in Node.js. To be honest, in 2020 you can easily use async-await. Even though it is syntactic sugar on promises, it does make asynchronous programming a lot easier. When I started in the Node 4/6 era around 2016 with callbacks and promises, it was a different ball game altogether. Still, beware of when not to use async-await (like above) and just go with promises, then and catch. Don't get tangled in promise hell in the process, though. Promise hell is like the next iteration of callback hell. Pro tip: To know which ES6 features you can use with what version of Node.js, check node.green. Another pro tip: Even Node.js versions are LTS, odd ones are not. So use Node 14 or 16 in production, not 13 or 15. Going a bit deeper into non-sequential execution, promises and the power they have play an important role here. The ability to do concurrent things is great in Node.js and JavaScript in general.

Node.js promises possibilities

Promises being asynchronous, you can run them concurrently, and there are several ways to do it. You could race 3 promises and get the result from the fastest one. You can even do Promise.all, where if one promise is rejected, it stops the whole operation. Please read more about Promise.race, Promise.all and Promise.any in this great comparison. With that in mind, you can try other npm libraries to limit promise concurrency or even filter through promises concurrently. You can do some of it with ReactPHP, but it is not included in native PHP, not even in PHP 8. This is something new to wrap your head around in Node.js for PHP developers. Let's proceed to the next point: the process does not need to die in Node.js like in PHP.

2. Node.js process is long-running, unlike PHP

PHP is meant to die: not in the sense that it will not be used in the future, but in the sense that all PHP processes must die. PHP is not really designed for long-running tasks/processes. In PHP, when a new HTTP request comes in, the processing starts; after sending the response back, the process is killed. That's how PHP works in general. That creates the need for FPM and other servers. You can argue PHP was serverless by design 20+ years ago; I leave that up to you. On the other side, Node.js is a long-running process. This enables you to share information between requests, as the same server/process handles multiple requests. With a long-running process, you can easily exploit things like memoization in memory and connection pooling for a database. It opens up other possibilities, like counting the number of concurrent requests on that process, for instance.

Memoization example

If you don't know memoization: it is a technique where a higher-order function caches the results of another function. It can turn some slow functions into fast ones. It saves the result of a function call after the first time to a cache, so if you call the function again with the same arguments, it will find it in the cache. It can be used in Node.js but not in PHP natively. Some workaround is possible in PHP, like saving the function's return value in Redis. Below is a code sample of memoization on an Express route with p-memoize:
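As the original sample is an image, here is a hedged sketch of what such a route can look like, assuming the v4-era p-memoize API that accepts a maxAge option; the products service and the route itself are illustrative, not the article's exact code:

// Route-level memoization sketch (illustrative names)
const express = require('express');
const pMemoize = require('p-memoize');
const products = require('./services/products'); // hypothetical module exposing getMultiple()

const app = express();

// Cache the resolved value of getMultiple in memory for 1 minute per unique argument list
const getMultipleMemoized = pMemoize(products.getMultiple, { maxAge: 60 * 1000 });

app.get('/products', async (req, res, next) => {
  try {
    // Repeat calls within a minute are served from memory, skipping the DB round trip
    res.json(await getMultipleMemoized(req.query.ids));
  } catch (err) {
    next(err);
  }
});

app.listen(3000);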
Memoizing at the route level like this saves the round trip to the DB, so responses are much faster. The clear advantage is less load on the datastore. For 1 minute, the route will respond with the same response for the same parameters. The output of the function products.getMultiple is cached in memory for a minute, which makes the responses very fast.

Connection pool example with MySQL

Another thing that is not possible because of a dying process in PHP is connection pooling. As per Wikipedia: In software engineering, a connection pool is a cache of database connections maintained so that the connections can be reused when future requests to the database are required. Connection pools are used to enhance the performance of executing commands on a database. So, you will have 5 connections in a pool, and if you want to run 5 queries against the database, they can be run concurrently. This saves time both for connecting to the database and for running the queries. This is easy to do in Node.js but not easily possible in PHP. Be mindful of the number of available connections, and keep your connection pool size optimal. For instance, if you are using Kubernetes and your application has 5 pods with a connection pool size of 2, your database will always have 10 open connections, even when no queries are being executed. Time for a connection pool example with a MySQL database and the mysql npm module:
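The original example is an image as well, so here is a minimal sketch using the mysql npm module's createPool; the credentials and query are placeholders:

// Connection pool sketch with the mysql npm module
const mysql = require('mysql');

// A pool of 5 connections, created once for the long-running process
const pool = mysql.createPool({
  connectionLimit: 5,
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'test',
});

// Fire 5 queries; they run concurrently on the 5 pooled connections
for (let i = 0; i < 5; i++) {
  pool.query('SELECT NOW() AS now', (err, results) => {
    if (err) {
      return console.error(err);
    }
    console.log(i, results[0].now);
  });
}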
The above code will run the same query 5 times in parallel, with 5 MySQL connections taken from the connection pool. I wish I could do this in PHP out of the box. In my experience, Node.js works very well with MySQL. If you want to try connection pooling with MongoDB, here is a Mongo example. With a long-running process, as a developer you need to be more careful about memory leaks and doing the housekeeping well. This is where Node.js for PHP developers needs a shift in thinking about how the code is executed. On the other hand, this is a great advantage in Node.js for PHP developers.

3. Debugging is easier in Node.js than in PHP

Line-by-line code debugging is an important part of the developer experience for any programming language. To debug PHP code, you can use add-ons like Xdebug with some IDE settings. Xdebug is challenging to set up, to say the least. You have to install it and enable the extension. After that, you configure it properly with an IDE like PhpStorm. Basically, easy is the last thing you will say about making Xdebug work, unless it is all configured well within a Docker container and the IDE settings are also easy to load. On the other hand, running the Node.js native debugger or even ndb is a lot easier compared to PHP and Xdebug, in my experience. With the use of VS Code, debugging a Node.js application is so easy that even a caveman can do it. Open up Preferences > Settings and in the search box type in "node debug". Under the Extensions tab, there should be one extension titled "Node debug". From here, click the first box: Debug > Node: Auto Attach and set the drop-down to "on". You're almost ready to go now. Yes, it really is that easy. Then set some breakpoints in VS Code, with say index.js, and in the terminal type node --inspect index.js. BOOM! Your step-by-step Node.js debugger is running well in the VS Code editor without much effort. A good difference from PHP: there is no need to install a different extension, enable it, and configure it to be able to debug a program. Not needing to install an extra extension is a benefit found in Node.js for PHP developers. The next point is also about a better developer experience, while upgrading even multiple major versions of the language.

4. Major version upgrades in Node.js are seamless compared to PHP

Jumping even multiple major versions in Node.js is a seamless experience. Upgrading even from PHP 5.6 to PHP 7.0 is a week- to month-long process, depending on the size and complexity of the project. In my personal experience, I have upgraded Node.js microservices from version 0.12 to 4 in the past. Recently I upgraded an application from Node.js 10 to 14. All of my Node.js major version upgrades have been easy. (Image: example pull request of the Node 10 to 14 upgrade with Docker.) It was super smooth. Some minor package.json changes were the only small issues I encountered. After deployment, there were rarely any issues related to code compatibility. As an added bonus, the performance was usually better after upgrading major versions. On the other hand, upgrading PHP has not been easy. A minor version upgrade for an application from PHP 5.4 to 5.6 was not very cumbersome. But going from PHP 5.6 to 7.2 for a relatively big application was a pain. It took a long time and needed multiple composer.json changes. It was also difficult to test. The good side of a major version upgrade in PHP was surely the performance boost. Just a note here: the PHP applications I worked with were older than the Node.js applications. Your experience can surely be different than mine.

5. Dockerizing a Node.js application is a breeze compared to PHP

Docker's popularity has been steadily rising in the past 5 years. It has changed how we software engineers work since its release. You should use Docker for local development too. With that in mind, Dockerizing a PHP application can be a difficult task, depending on how the components are laid out and the complexity of the application. Conversely, for Dockerizing a Node.js application the effort is less and the process is a breeze. Below is a description of a Dockerfile for a PHP Laravel app with Apache (the original post shows it as an image: example PHP Dockerfile for Laravel with Apache and a multi-stage build). The good thing with this Docker image for Laravel is that PHP is bundled with Apache in the same image. It can be argued whether this is a better way to do it than splitting PHP and Apache into two Docker images. Also, notice the multi-stage Docker build: composer install is done in a different image and the output is copied to the main one. If we had used PHP-FPM and Nginx in different Docker images, it would have been more complex; there would be a need to manage two distinct Docker images. Now it's time to have a look at a Node.js Dockerfile.
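Since the Dockerfiles in the original post are images, here is a hedged sketch of a multi-stage Node.js Dockerfile along the lines described; the Node version, build script and file layout are assumptions:

# Multi-stage Node.js Dockerfile sketch (illustrative layout)
FROM node:14 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build            # assumes the app has a build script

FROM node:14-slim AS production
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --only=production # only production dependencies in the final image
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]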
I hope this post helps you get more out of Node.js even as an experienced PHP developer.
https://medium.com/javascript-in-plain-english/node-js-for-php-developers-4084aa3ed723
['Geshan Manandhar']
2020-12-26 09:15:17.831000+00:00
['Nodejs', 'JavaScript', 'Software Engineering', 'PHP', 'Web Development']
Building an Audio Visualizer for Razer Chroma Keyboards
b — Analyzing The Signal

Before going any further, we need to know a bit more about what time-based and frequency-based representations of signals are. When you take a look at an oscilloscope, you are looking at a time-based representation of a signal. It means that the oscilloscope is plotting the amplitude of a signal over time. Let's take our previous schema, annotated this time:

In this schema, a time representation highlights the fact that the amplitude of the signal is zero at t = 0. Over the course of a second, it reaches a maximum amplitude before reaching zero again. Now if you remember correctly, we want to build an equalizer effect, which has the following appearance:

Now, remember when I said that sound can be described as a sinusoidal waveform? It is true... but incomplete.

c — Understanding Frequency-Based Representation

Before understanding the utility of the Analyser Node, we need to dig a bit deeper into signal processing and focus a bit more on a concept called the Fourier transform. In reality, audio signals look like this:

Credits: Uygar Uçar on dsp.stackexchange.com

As you can see, it is far from the sinusoid that we described in the first section. But is it that far away from it? Visually yes, but not mathematically. In fact, what if I told you that the signal observed above is just the superposition of many sinusoidal signals? When you're listening to your favourite music, you are physically listening to a superposition of pure sinusoids. This process repeats over time and builds the melody of your song. With this schema, we have the building blocks of a frequency-based representation. To go from the chaotic signal to the pure sinusoids that make up our signal, we use a mathematical tool called the Fourier transform. Fourier transforms allow us to find the original pure sinusoids that made our signal and display their distribution. A schema speaks a million words. Does this frequency-based distribution sound familiar to you? Exactly! This is the equalizer effect. As the signal varies over time, the distribution also varies, providing a very cool equalizer effect.

d — Building The Almighty Equalizer Effect

Back to our architecture: the AnalyserNode comes in very handy for our project. This node provides a native FFT (Fast Fourier Transform, a faster way to perform Fourier transforms) on the stream provided, as well as the frequency distribution of the signal. First, let's inject the node into our code. From there, the AnalyserNode is bound to a memory object that is updated every time the audio context receives new data from the audio stream. As a side note, how often does the audio context receive new data from the stream? From my experiments, it seems that the Web Audio API refreshes its context at a 144 Hz rate, meaning that we get 144 new arrays every single second. In order to have a nice equalizer effect on our keyboard, there are some parameters that we need to adjust: the Fourier transform size (also called the FFT size) and the frequency resolution. Back to the science. As I explained earlier, sound is the superposition of multiple signals, each one having its own frequency. As humans, we are not able to hear all the frequencies available; in fact, you are able to hear frequencies between 20 Hz and 20 kHz.
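Since the article's embedded code snippets are not reproduced here, below is a minimal sketch of what wiring up an AnalyserNode and reading the frequency distribution can look like. It is an illustration, not the author's exact code; names like audioElement and renderFrame are assumptions.

```javascript
// Minimal sketch: attach an AnalyserNode to an audio source and read
// its frequency distribution on every animation frame.
const audioContext = new AudioContext();
const audioElement = document.querySelector("audio"); // assumed <audio> tag on the page
const source = audioContext.createMediaElementSource(audioElement);
const analyser = audioContext.createAnalyser();

// fftSize is the FFT window size; frequencyBinCount is fftSize / 2.
analyser.fftSize = 256;
const frequencyData = new Uint8Array(analyser.frequencyBinCount);

source.connect(analyser);
analyser.connect(audioContext.destination);

function renderFrame() {
  // Copies the current frequency distribution into frequencyData,
  // one amplitude (0-255) per frequency bin.
  analyser.getByteFrequencyData(frequencyData);
  // ...map each bin's amplitude to a column of keyboard LEDs here...
  requestAnimationFrame(renderFrame);
}
renderFrame();
```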
https://medium.com/schkn/building-an-audio-visualizer-for-razer-chroma-keyboards-7814cab950ff
['Antoine Solnichkin']
2019-03-21 17:51:21.733000+00:00
['Programming', 'JavaScript', 'Software Development', 'Web Development', 'Science']
Our Framework for Thinking About Data Privacy Is Outdated
The first concrete notion of a "Right to Privacy" was detailed in 1890 by the lawyers Samuel Warren and Louis Brandeis in their essay of the same name in the Harvard Law Review. At the time, they were concerned about the proliferation of photographic images, particularly with respect to tabloids and gossip columns. Their essay was a reaction to what they saw as an encroachment of this new technology on the individual's "right to be let alone" — to have information about themselves known only to those whom they have given it freely. In the 130 years since then, the need for this protection has only increased with the development of novel methods of data collection and storage, in addition to powerful analytical techniques. As data collection has rapidly grown, so too has the legislation protecting the right to privacy. When the EU wrote the Charter of Fundamental Rights of the EU in 2000, Article 8 clearly stated that "everyone has the right to the protection of personal data concerning him or her". More recently, the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in California have sought to establish stringent, legally binding, and enforceable requirements for how third parties handle the personal information that they have collected.

Though the idea of privacy has been around for 130 years, the legal framework which underpins it has remained largely unchanged. Even GDPR and CCPA, considered the most modern and comprehensive laws surrounding data privacy, build off the idea of "informed consent." As Warren and Brandeis put it, the "Right to Privacy ceases upon the publication of facts by the individual, or with his consent". More formally, informed consent is "an agreement to do something or allow something to happen, made with complete knowledge of all relevant facts, such as the risks involved or any available alternatives" (source). The justification for such a framework is that so long as individuals consent to be known and are fully aware of the consequences of that decision, then the law should not restrict how a third party uses the data it collects because it was intentionally given. Hypothetically, this framework is an ethical manner of balancing privacy concerns with the need for companies to use data to innovate. In practice, however, informed consent is problematized by the scope and scale of data collection and sharing. In other words, modern data privacy regulations give informed consent an outsized role in the jurisprudence of the human right to privacy because informed consent itself is unable to address the information asymmetries regarding data collection between individuals and 3rd parties, as well as the technological developments in big data analytics that complicate its implementation.

What Rights Do GDPR and CCPA Define?

Both the GDPR and the CCPA break down the right to privacy into several components, which are either identical or analogous between the two regulations.

The Right to Access: individuals can obtain any information about them that a third party uses for automated decision making.
The Right to Data Portability: collected information is provided in a commonly-used and machine-readable format.
The Right to Restrict Processing: consumers can put limits on what kinds of data companies can collect from them as well as what they can do with it.
The Right to be Forgotten: consumers can request third parties to delete their personal data.
The only times this request can be refused are if the data in question is a financial transaction, is required for debugging the service, would violate free speech if removed, or raises other similar concerns.
The Right to Know the Source of Data and the Business-Use Case for Collection.

Each of these rights seeks to guarantee that individuals control their data rather than the third parties that collect it. They also enforce transparency between data collectors and consumers because consumers know what data a third party has collected on them and what they are doing with it. Violating any of these rights constitutes harm for the consumer, so lawsuits regarding these rights have standing in the court system. Just as the GDPR and the CCPA break down the right to privacy into different components, informed consent can similarly be decomposed into three main requirements.

1. Complete knowledge of all relevant facts
2. Knowledge of the risks involved
3. Knowledge of available alternatives

Knowledge of the risks and knowledge of the alternatives are, in fact, subcategories of knowledge of all relevant facts but can be considered separate, leaving "all relevant facts" to mean anything else that might alter an individual's decision to give consent. In the specific context of data privacy, examples of relevant facts might be where data is being stored, third parties that data is being shared with, as well as what a third party is doing with data. If satisfied, these three components presume that an individual will use their knowledge to act in their own best interest and will not be harmed by their decision to relinquish their data to a third party. Each of the rights established by GDPR and CCPA, while not explicitly stated in the regulations themselves, is tightly connected with at least one of these three components.

Knowledge Of All The Relevant Facts

Beginning with the right to know the data sources, this right fits squarely into the most basic component of informed consent: knowledge of all the relevant facts. Businesses delineate what data they collect, how they use it, and why they collect it in their privacy policy — a document that most privacy regulations require to be in plain language and easily understandable. Privacy policies were required to do this even before GDPR and CCPA, and companies have been successfully penalized for violations in the past. In 2019, French courts invalidated 38 clauses of the Google+ Privacy Policy, which shares clauses with Google's overall privacy policy. Among the complaints were that "Google failed to inform its users adequately about purposes and recipients" of data collection, and that vague terms needed to be replaced with descriptions of data movement in a "clear and exhaustive" manner (source). While this seems like a victory for privacy, successful prosecution does not necessarily mean that data privacy is getting better. In fact, terms like "clear and exhaustive" present a real concern that, ironically, the passage of data privacy regulation can potentially make privacy policies even more complex. Before GDPR, privacy policies required on average 18 minutes to read and a college-level education to understand; after GDPR, readability has improved to a high school level in some cases, but reading time has increased as a result (source).
The college-level education requirement itself excludes 70% of the American population older than 25, disproportionately impacting minorities in the same age bracket (79% of Hispanics and 74% of African Americans) based on the latest US Census Data. Both the length and complexity of privacy policies make them orthogonal to the idea of "knowledge of all the relevant facts." For many, there is not even an "ability to know the facts." The implication is that making a knowledgeable decision about data privacy is reserved for those with the time, education, and motivation to understand what is happening with their data, effectively excluding the many consumers without a college degree, notwithstanding any intersectional correlations with this requirement.

Length and complexity aside, the technical knowledge required to understand data privacy exceeds what most people have, further complicating the requirement of informed consent that consumers know the facts. A study conducted in the United Kingdom found that only 13% of its respondents believed they had full knowledge of cookies, a common online tracking mechanism, while the remaining 87% had not heard of them, did not know how they worked, or only had a limited understanding (see study). If consumers are not technically literate, they are fundamentally lacking the facts that can help them make an informed decision about their data.

The lack of knowledge extends beyond technical literacy to include a general unawareness of how data is used. For example, data brokerage, the practice of selling information to larger data aggregators which then distribute that information to other businesses, has long been shrouded in mystery. In 2014, the FTC published a report about data brokerage, highlighting the lack of transparency surrounding the practice. Notably, they found that "it may be virtually impossible for a consumer to determine the originator of a particular data element" (see report). Although this report was pre-GDPR, the fundamental structure behind how data brokerage works has not changed since the regulation was passed. Data brokers source data from the government, publicly available data such as social media, commercial services such as advertisers, and even other data brokers. As a result, data travels across an interconnected web of third parties, which, as the FTC found, would make it "virtually impossible for a consumer to determine how a data broker obtained [their] data" since they would have to effectively trace through this web. Expecting this web to be described comprehensively in a privacy policy is not a remedy because it still requires a consumer to trace what is going on behind the scenes. Moreover, it is an unreasonable demand to make of companies because of what Helen Nissenbaum, a professor of Information Science at Cornell, calls the "transparency paradox". Namely, there is a fundamental tension between specificity and brevity, as well as comprehensiveness and clarity, when it comes to privacy policies. The impossibility of describing what goes on behind the scenes can arguably lead companies to include broader language, which can grant them greater flexibility while still adhering to the law. In essence, this paradox is what makes true knowledge of all the facts an ideal that cannot be achieved with the current technical landscape.
Knowledge Of All The Risks

Perhaps consumers would exercise their right to know the sources and usage of their data, their right to be forgotten, or their right to restrict processing if they believed their data could be used for anything malignant; this apparent assumption held by many consumers is critical to understanding why data privacy regulations also fail to address the second key component of informed consent: knowledge of the risks. The common perception is that data collected is for the purpose of advertisements and thus is related only to advertisements. This, however, is not the case. Two notable examples that violate this assumption come from data collected by Facebook and Target.

In 2013, researchers found that they could use an individual's history of Facebook Likes to predict sexual orientation and several other characteristics to a high degree of accuracy (read paper). This was a result of common patterns between individuals of identical sexual orientation, regardless of whether an individual was "closeted" or "out." The fact that this is possible is a cause for concern because the ability to identify marginalized groups also means the ability to target them. For groups that have historically been discriminated against, this creates an additional danger that they might be unaware of. It also takes agency away from individuals who have something about themselves that they would rather keep hidden from those around them. In 2011, there was an instance where Target was able to predict the pregnancy of a teenage girl before she had alerted her parents based on the similarity of her shopping patterns with other known pregnant women (source). Just as in the case of sexual orientation, pregnancy is a hyper-sensitive piece of personal information that is at the discretion of the individual to disclose. While it is a separate argument how legislation could possibly protect against this, it goes to show that it is difficult to determine exactly what seemingly harmless data such as what you like on Facebook or what you buy at Target can predict.

These two cases are not isolated and are emblematic of a documented phenomenon in big data analytics. A study in 2010 found that as few as 20% of users revealing attributes about themselves could help build a model to infer global attributes about a population (read study). In other words, for privacy-minded individuals who do not consent to their data being collected, informed consent is a moot point since data aggregators can still predict attributes about them because of those they are affiliated with. It is currently unclear how much data analytics can actually predict, but the cases that have made headlines demonstrate that it is more than most people think is possible. The right to be forgotten established by legislation is a useful remedy in these cases. However, exercising these rights requires individuals to know that they are known by a third party, and, as a result, may not apply to the average person.

Knowledge of Available Alternatives

The final piece of informed consent which modern data privacy legislation fails to address is the knowledge of available alternatives. The right to data portability and the right to restriction of processing on their face seem to address this concern. After all, if no competitor can penalize users for denying their data, then users can freely move between alternatives.
However, this neglects the fact that for most websites, trackers, cookies, and fingerprinting are default opt-in and non-negotiable. The EU Cookie Law required users to grant permission but never mandated that users can freely use a site without giving up that information (source). Most sites, when asking users to accept their tracking and cookie notice, make it impossible to use the site if the user does not wish to consent. This frequently takes the form of a large banner that blocks most of the webpage with "Accept" as the only clickable button, or a checkbox agreeing to the privacy policy that must be ticked before creating a website account. This, in essence, is retaliation because the alternative is denial of service in its entirety. Nevertheless, it is allowed under the law. This is true even if the data collected is not central to the usage of the service from the consumer's perspective, because companies still require that data for their revenue. In this manner, the right to restriction of processing is, in a sense, a "false right" in that it theoretically exists but is impossible to exercise in practice. Moreover, while the presence of alternatives is not strictly necessary for informed consent, the lack of alternatives forces privacy to be a secondary concern, and hence informed consent to be a secondary concern, because the primary concern is access to services.

The law cannot directly create alternatives for consumers to use, but it can inhibit their creation by making it prohibitively expensive to comply with the law. It is estimated that a firm of 500 employees must spend $3 million USD to comply with GDPR. Time, personnel, compliance management software, legal fees, and data processing are all costs that both small and large businesses must shoulder (source). For small businesses, this is one additional burden they must bear as they try to scale their business. For them, finding the money to pay for the infrastructure required by the law is significantly harder than it is for larger companies with a wealth of resources already at their disposal. In this manner, expensive legislation like GDPR tends, in general, to further entrench the incumbent powers in the market, making the presence of available alternatives even scarcer.

Conclusion

None of these problems with an informed-consent-based data privacy framework are new or particular to the GDPR and the CCPA. In fact, many of the examples given, as well as the technological research into privacy, were published before the GDPR was drafted in 2016. Yet they are still relevant under the data privacy regimes created by GDPR and CCPA because these two regulations only build upon what already existed. Just because these regulations give governments an ability to prosecute does not necessarily mean that data privacy as a right is actually strengthened. The facts show that the pre-conditions of informed consent are clearly being violated, and as a result, the right to privacy afforded by modern privacy regulations is, in practice, a facade of privacy that sits in de facto violation, as individuals do not have the knowledge to make informed decisions and are beholden to the data demands of 3rd party data collectors in exchange for their services.
Moreover, the large predictive capabilities of big data analytics call into question whether consent is still a useful framework in which to view privacy, particularly given that the aforementioned research has shown how a small percentage of people providing their data can enable accurate inference about those who have not consented to be known. Most importantly, the jurisprudence surrounding data privacy, if it is ever to live up to its ideals, cannot operate under untrue assumptions as it does today. Whether this requires modifying the assumptions of informed consent, a shift in perspective on privacy rights, or an entirely new framework altogether, what is constant is that any new legislation needs to be flexible enough not to hinder innovation but also stringent enough that only those who want to give up their privacy can do so. Some of the necessary changes are outside the scope of the law. Beyond providing remedies for those who have been harmed and setting the rules of engagement between individuals and 3rd parties, the law's impact is limited. In addition to the legal changes, the technology surrounding privacy requires a radical shift towards giving users truly granular control of the data they provide. It also necessitates a change in the culture surrounding technology and how people interact with it, requiring a higher level of technical literacy and more wariness about relinquishing personal data to others. In other words, a true right to privacy is built not by the law alone, but rather by a combination of regulators, designers, and engineers working in harmony to develop systems that are useful, secure, private, and just as easy to use as the technology of today.
https://medium.com/swlh/our-framework-for-thinking-about-data-privacy-is-outdated-ce542e023f24
['Anmol Parande']
2020-12-23 22:49:07.340000+00:00
['Privacy', 'Tech', 'Big Data', 'Law', 'Gdpr']
A City Driven By Design
Design, whether we realize it or not, is present in everything we experience — our work environments, the app you used to order your coffee this morning, our transportation, the craft beers we consume, and so on. The list is limitless — design is everywhere. And perhaps its most compelling (albeit covert) superpower is its ability to influence civilizations, shape communities and help foster a new age of growth and opportunities. This superpower is not lost on AIGA Arizona though. The organization has long held, through its mission, that design is capable of encouraging social impact and promoting community excellence. Phoenix is a testament to this belief. Throughout the years, the Greater Phoenix community has been subject to a number of circumstances that have led to innovative change. And design has been at the heart of it all. Here are but a few of the many industries impacted — and elevated — by design, with commentary from some of the area's creatives who can speak to design's multifarious and far-reaching impact.

Education and classrooms

Recently, there's been a shift toward creating classrooms better outfitted to support experiential learning, which in turn has been shown to improve students' short- and long-term learning outcomes. One of the keys to creating a learning space conducive to many different types of educational approaches is flexibility. Ultimately, the way a learning space is designed directly influences the learning activities that can occur there. For example, Phoenix's Biltmore Prep Academy and Creighton Elementary, among other schools, have turned to open classroom designs to keep kids engaged and learning in a variety of ways.

How is education and the modern-day classroom impacted by design?

"We are becoming a more planned community, thus requiring design and sustainable planning to converge. Specifically, I see that office space and creative thinking design has been translated to the classroom à la whiteboard walls, creative workspaces, etc." — Lisa Glenn Nobles, executive director for CO+HOOTS Foundation and Phoenix resident for 11 years.

"As leadership begins to realize the importance of design and design-thinking, more and better changes are being made to accommodate students, facilitate learning, and provide a physical (and digital) space that encourages the sharing of knowledge and an affinity to the school. Tools, especially, are easier than ever for students to use, making educational environments more inclusive and welcoming." — Perri Collins, social media specialist at Arizona State University and Phoenix resident for 18 years.

Technology and the digital space

Design is an indispensable factor in creating humanized and easy-to-use interfaces in the rapidly evolving digital landscape. In many ways, great tech cannot exist without great design. And it's the creative's job to help technology deliver the desired experience users crave. This is especially significant in Phoenix, a city dubbed the next tech hub. With some of the biggest names in tech — along with countless promising startups — relocating or rooting their companies in the Valley, there has been an increase in opportunities for creatives to gain a foothold in this emerging industry.

How is design changing how we interact with technology?
"Design is a key player in the tech space because it helps customers visualize and understand new technology that hasn't even been built yet and make technical concepts easier to understand." — Julie Bonner, marketing director at FreeFall Aerospace and Tucson resident for 16 years.

"Design has helped to bring technology and the internet to the masses… I think how well things work is overlooked in favor of how good things look. People tend to gravitate toward beautiful products over simple products. I think beauty in design has its place, but I would take a simple tool — regardless of how ugly it is — like Craigslist, over a beautiful but difficult to use tool any day of the week." — Carter De Angelis, product design consultant and Phoenix resident for nearly two years.

City planning

One of the most prominent ways design is reflected in the City of Phoenix is its well-known grid system. Taking advantage of the area's relatively flat positioning, the city's early designers mapped out roads on an easily-maneuverable layout, replete with big, arterial boulevards that supported the transportation conducive to our city's original purpose as a farming town. Civil engineers and designers were also heavily influenced by the techniques used by the native inhabitants of this land, incorporating their methods to build up civilizations that support various ways of life.

How Phoenix as a city works with — and for — its residents.

"Design is very much alive and integrated throughout Phoenix's construction industry. The majority of citizens don't realize the level to which our cities and environments are designed… One of the most prevalent ways is its grid system. It is a powerful, yet subtle influence from design that many take for granted." — Janelle Elizabeth Thomas, marketing lead at DPR Construction and born and raised in Arizona.

"Design is the act of evolving… For hundreds of thousands of years, native peoples have been designing homes, communities and cities, to create a continuation of cultural stories through the means of sustaining our way of life. Take Phoenix, for example. This city is here because of native peoples and the canal systems they designed to sustain life in a desert. It is important for our community to preserve these designs for our young to learn and grow from, especially now that we are in the technical age. It will help us to contribute our inherited design systems to a larger world that will still sustain our homelands, cultural knowledge, newer engineered communities, careers, economic prosperity, etc. This is the impact design will have in our world." — EuniQue Yazzie, creative director at euniQue Design LLC and a member of the Navajo Nation.

Government workflows and services

In addition to city planning, there has also been a rise in creatives working within local governments. Designers' abilities to create products that work for people rather than systems have been keenly sought after by government officials who are striving to create systems and services better suited to meet the needs of their constituents. Creatives offer a different perspective, looking to identify the root of the problems that may be causing issues within a city. Chief among these issues is typically failure to integrate human-centric design — both in regards to the buildings themselves as well as the service they provide.

Reimagining the citizen experience through design.
"If government services were designed as well as our road system, I imagine we'd all enjoy a much more pleasant 'citizen experience.' The user experience of interacting with our public services is mundane at best and abysmally counterproductive at worst. Visiting the DMV, voting, interacting with our courts and even trying to track down a record of receiving a government grant — practically none of it is memorable and nearly all of it is a pain in the ass. We need proper design work, like what some agencies are doing with the help of 18F and others. Whoever can redesign the citizen experience — what it means to interact with our government on every level — will win big in the long term." — Jérémy Chevallier, cofounder of PubLoft and GigLoft, and Phoenix resident since 1993.

Restaurants and hospitality

Phoenix is one of the fastest-growing foodservice markets in the country, making it a desirable destination for those in restaurant architecture and interior design. A restaurant's aesthetic — from the paint colors, to its wall decorations, and all the way down to the uniformity and intricacies of the table setting — has become a crucial part in delivering the ultimate guest experience today's restaurant-goers expect. One of our area's most renowned chefs, Chris Bianco, views restaurant design similarly to how he views the food he creates, asking himself how he can bring out the essence of each so that they're the best versions of themselves. According to Bianco, "You have to look at a space and ask what it wants to be, or look at a community and ask what it needs. Then, do the thing you like to do the best you can possibly do it. Seeing something you disagree with doesn't mean you need to tear it down; it means you do your thing as well as you can to bring light to the darkness."

How restaurant design can elevate the guest's experience.

"Today's restaurant design agencies have a responsibility to encourage their core clients to design for quality and responsible experiences — not disposable products or wasteful, irresponsible sourcing and manufacturing. Our city's bars, restaurants and event venues such as the Arizona Science Center are created for the community — and all aspects of their design should be made available to everyone. I hope new venues and experiences that come to the Valley continue to reflect our established knowledge of good design." — Diane Serpa, art director and owner of GreyCatDot Digital Design and Arizona resident since 1976.

Breweries and beer

Design has also helped elevate the Valley's micro-brew scene. From the breweries to the product packaging, beer-making has opened up another avenue for designers to flex and expand their creative prowess. Additionally, the design that goes into marketing these craft brews has been crucial to breweries' successes, helping put Arizona's local beer on a national stage.

How design supports local brews' marketing efforts.

"Design has played a huge role within the brewery world. Since the beginning of the craft beer revolution that's been sweeping the county, Arizona included, breweries have provided an outlet for designers to create for a newer platform that before they would not have had the opportunity to be a part of (with the exception of the limited large companies). Local beer and design go hand-in-hand. If you want to stand out from everyone else you better damn well have a fun, unique, and new take on your branding, your voice, and ultimately its labels.
There has been a lot of great designs and illustrations that now have a new and exciting platform to be paired with — not to mention one that’s a fun, recreational product most people enjoy. It’s also been an exciting space for designers to push the boundaries a little since there aren’t as many rules to what makes good beer packaging. I think if anything, the brewery industry has helped the design world get more attention by providing more direct exposure to packaging design––exposure that’s now seen by the masses. This ultimately demonstrates the importance of good design to non-designers.” — Mark Johnston, co-owner, designer, illustrator and pixel-pitted Pisces at Prickly Pear Paper. As is abundantly evident, design infiltrates our lives in countless ways — the examples mentioned above are only the tip of the iceberg. As a result, design has helped shape and transform Phoenix, serving as a catalyst to its evolution throughout the years. To learn more about the ever-morphing craft of design, the diverse implications it has within our everyday lives, and the creatives (both local and national) who are leading the charge, come out to the Evolve Design Conference, which will take place on October 12 and 13, 2019.
https://medium.com/yesphx-stories/a-city-driven-by-design-34670d1100ff
['Breanne Krager']
2019-09-25 21:29:55.525000+00:00
['Phoenix Design', 'Design', 'Phoenix Design Week', 'Phxdw', 'Aiga Arizona']
Writing on Substack Using Your Mobile Device
One feature Substack does not currently offer, and one I wish it did, is a mobile app. As many bloggers can attest, writing increasingly takes place outside of the traditional office, desktop/laptop, and stationary desk configuration. Many writers are balancing multiple jobs, and having the option to write on a mobile device is quite useful, as it extends your office to include any place where there is cell phone reception. Luckily, there is a workaround for Substack writers. While Substack may not have an app, you can still access your publications on your mobile device and write posts.
https://medium.com/substack-writing/writing-on-substack-using-your-mobile-device-fd66d8b16273
['Substack Writing']
2020-07-10 03:17:20.164000+00:00
['Blogging', 'Writing', 'Substack', 'Blogging Guide', 'Writing Tips']
Are Brands Essential?
Relief

From finances to resources, the following brands are providing much-needed relief to healthcare workers, restaurant workers and the like.

Fanatics/MLB and Bauer Hockey
Lauren Pennline, Studio Coordinator
These non-essential businesses found a way to continue employing people through our wildly uncertain times by repurposing their factories, equipment and materials to create some of the most essential equipment needed for those on the pandemic's front lines — Fanatics making gowns and masks; Bauer making face shields. I think that's pretty rad.

The Wally Shop
Hua Chen, Freelance Designer
One brand I'm seeing that is standing tall during this crisis is The Wally Shop, which sped up their brand rollout to provide dry goods nationwide, donate to Feeding America, and provide free shipping for seniors and healthcare & emergency workers.

Christian Siriano
Debra Katz-Velazquez, Controller
Christian Siriano stands out to me. When Governor Cuomo first discussed the shortage of masks and supplies for our health care workers, he had his team create 1,500 masks with materials they already had. They just finished another 5,000 (in gorgeous colors!) and they're still at it. Plus they have done this with little fanfare. Siriano isn't a huge fashion house and I am impressed with what he and his team have done in a short period of time.

Kyo Pang and Moonlynn Tsai of Kopitiam
Wednesday Krus, Design Director
They stepped up, paid their team of 20 in full for all of their hours during closures, changed their operation to a two-person team, and transformed their restaurant's kitchen into a 300-meal relief hub for low income and elderly neighbors.

Sqirl, now operating as Framily
Sarah, Senior Designer
Sqirl is a local — designer eye candy; look up their plates — cafe located in LA that has turned its restaurant into a relief kitchen. Now operating as Framily Meal, it's offering free meals to out-of-work hospitality workers in the area. Sqirl is a simple reminder that your community is your f(r)amily!

Restaurant Workers Relief
Whitney, Design Operations Manager
I've been impressed by how restaurants and brands in the hospitality industry have come together to form the Restaurant Workers Relief program with outposts throughout the city.

Advocacy

Leading the advocacy charge are individuals thinking bigger than their brands, who have become the voices of their industries and who garner aid for people.

David Chang
Shivani Gorle, Cultural Strategist
From day one, he has spoken up about how coronavirus is battering the restaurant industry and demanded that part of the federal stimulus plan go to help it. As the daughter of an independent restaurant owner, I empathize with the incredibly difficult position he is in — carry on with business as usual and risk employees' health during the pandemic, or shut down his far-flung operations altogether. He chose the latter. Moreover, his Momofuku Group started a crowdfunding campaign for all of his employees who need immediate relief as they navigate this crisis.

DJ D Nice
Dylan Stiga, Integrated Brand Strategist
Watching DJ D Nice bring over 150,000 people from around the world together time after time to share in their love of music and dance through his Club Quarantine sets on Instagram Live has been inspiring to see. It's an example of a "brand" that's not doing anything different, but that continues to use its skills and tools to help people in the best way it knows how without expecting anything in return.
The fact that he is also using his reach to advocate for initiatives like voter registration and healthcare worker relief speaks volumes about his understanding of what he can bring to the table when it matters the most.

Access

These brands have made accessibility a core offering during the pandemic, from connecting us with family, friends and colleagues, to providing us with tools that inform, educate and help us cope.

LEGO Foundation
Nicole Duval, Freelance Strategist
Good citizen brands have prioritized the critical needs of families with children during the quarantine. According to UNESCO, there are more than 1.5 billion school-aged children currently out of school worldwide. The LEGO Foundation has launched two educational initiatives for the benefit of families impacted by COVID-19. The first benefits the most vulnerable in refugee camps and war-torn countries, as well as children in urban areas, by donating US $50 million to ensure they will continue to have access to learning through play. The second initiative is the creation of a platform called #letsbuildtogether across Lego's social channels to alleviate families' stress during the period of confinement.

Slack, Zoom and Adobe
Sam Barbagiovanni, Design Director
I feel fortunate to be in an industry where I can continue to work from home and carry on routines that are different but still familiar. Yes, my attire might be Sunday vibes 7 days a week and there might be extra trips to the fridge, but designing as I would in the office and the brands that are helping me to do so are keeping me sane. So shout out to Slack, Zoom and Adobe for allowing this creature of habit to still connect with her co-workers and get stuff done. Business as usual in an unusually strange time.

Headspace
Dylan Stiga, Integrated Brand Strategist
I've never really explored meditation, but as my wife is a nurse she was given free access to the Headspace platform, which provides meditation and anxiety-relieving sessions and tools digitally. I found that even sitting through the short 5-minute sessions before or after work has given me a sense of calm to either get through the day or relax at the end. It's also great to see that it partnered with the NY State Governor's office on a free New York-tailored platform to help us all cope with our circumstances.

Nicole Duval, Freelance Strategist
The need for a daily ritual at home has increased during this quarantine time, and the Headspace app has provided me with an internal feeling of relief through mindful meditation. I also recognize the purpose-driven value of this brand, which is giving away free subscriptions to healthcare workers.

UPS
Martha Kirby, Client Services Director
What can brown do for me? Deliver much-needed goods in a time of uncertainty.

Misfits Market
Benjamin Greengrass, Creative Director
This event is exposing several vulnerabilities in our supply chain. With billions of dollars in fresh produce going to waste, our food producers are being forced to reimagine how they operate. In some ways, COVID-19 will accelerate innovation in sectors of the economy that have tolerated inefficiency. We're privileged to be working with Misfits Market, a food subscription service with a procurement model designed to curb food waste. While few of us could have imagined what we are currently experiencing, its willingness to challenge an inefficient food system has yielded a massive increase in demand.
https://medium.com/thoughtmatter/are-brands-essential-5e714b2263e0
[]
2020-04-28 16:58:25.244000+00:00
['Small Business', 'Crisis', 'Coronavirus', 'Branding', 'In The Know']
Use Python to Value a Stock Automatically
Use Python to Value a Stock Automatically
Is Apple Stock Overvalued? Just Enter the Ticker and Let Python Decide Automatically!
Photo by Adam Nowakowski on Unsplash

In a previous article I described how to manually obtain data from the web and perform calculations for the intrinsic value of Apple stock, and explained the discounted cash flow model in full detail. This article uses the same model, except that everything (pulling data from multiple sources, performing calculations) is now done automatically in Python. I will not explain the rationale behind the model or behind certain estimates here, as they are already explained in the previous article. Please refer to it if you need further elaboration. I have also changed the order of the steps a bit, as it makes sense to programmatically obtain all relevant data first before making estimates and performing calculations.

Disclaimer: I am not a financial advisor and this article is not meant to represent any form of financial advice. Any investments you make using these calculations will carry risk, so do remember to do your due diligence and research before doing so.

Step 1. Import Packages and Sign up for the Financial Modeling Prep API

Import Packages to Extract and Present Data

First, we import the packages needed to parse data from Finviz and from the Financial Modeling Prep API. The data from Finviz is in an HTML table, while the data from the Financial Modeling Prep API is in JSON format; we need packages to parse both. This data is required to calculate the intrinsic value of the stock. We also use Matplotlib to do some plotting for data visualization later. In the last line above, the base_url for the Financial Modeling Prep API is given; for any data request we make (e.g. the cash flow statement of AAPL), we add it after the base_url in the form of queries (more on this later).

Enter the Financial Modeling Prep API Key and Ticker

Enter the Financial Modeling Prep API key and the ticker of the stock that you are interested in ("AAPL" for Apple stock in this case). The API key "demo" below is a demo key which only works for obtaining data for "AAPL". For other tickers, you need to sign up for an account at https://financialmodelingprep.com/developer to get an API key for free (for the first 250 requests; you can actually just sign up for another account when your free requests are used up). Go to the Dashboard to obtain your API key after you've signed up.

Step 2. Obtain Financial Statements from the Financial Modeling Prep API

The Financial Modeling Prep API gives us everything we need to obtain financial statement data from different companies, without having to pore through the statements manually. The data returned is in the form of JSON (example below), which we can parse using the json Python package. Click on this link to view it for yourself.

Part of Cash Flow Statement of a Company Obtained from Financial Modeling Prep API

Quarterly Cash Flow Statement (Most Recent 4 Quarters)

Why are we looking at the quarterly cash flow statements instead of the annual cash flow statements? In the previous article, we used the free cash flow for the trailing twelve months (TTM) period to project future cash flows. Unfortunately, under Financial Modeling Prep's annual cash flow statement data, there is no TTM data, and the most recent data is the cash flow over the last year (2019). Hence, the most recent cash flows of the company (particularly the first 2 quarters of 2020) are not accounted for.
Hence we need to sum up the cash flows from the most recent 4 quarters ourselves for the TTM data. In the code below, we add 'cash-flow-statement/' to the base_url of the Financial Modeling Prep API, along with the 'ticker' and the parameters 'period=quarter' and 'apikey=demo'. For more details on how to query data, please refer to the documentation. We parse the JSON data into a DataFrame and obtain the most recent 4 rows (hence the iloc[:4] in the code). Check the dates of the result below to see that they are indeed the recent quarters. The Free Cash Flow is one of the columns in the DataFrame below (though not shown in the screenshot).

Cash Flow Statement (most recent 4 quarters only)

Annual Cash Flow Statement

We repeat the above but leave out the 'period=quarter' to obtain the annual cash flow statement data. This step is optional, as we do not need anything here to calculate the intrinsic value. We obtain this data so we can later plot the cash flows over the years to check whether they are stable and increasing. Verify the output below to see that the dates are indeed for annual data.

Cash Flow Statement (Annual + TTM)

Now we sum up the data of the most recent 4 quarters, rename the row as 'TTM' and append it to the annual cash flow statement to combine both the annual and TTM data. Notice that in the result, we now have a row which contains TTM data.

Check Stability of Free Cash Flows

Let's plot the free cash flows from the above DataFrame for the last 15 years (hence the iloc[-15:] in the code). The cash flows are pretty stable and increasing, hence predictable. Note that a couple of anomalies are fine. An interesting fact is that the TTM free cash flow is higher than the annual free cash flow of last year (2019). This is because Apple managed to grow earnings in the previous quarter, despite the COVID pandemic. Pretty solid company, eh? Note also that if you get a company with unstable and erratic free cash flows, the discounted cash flow model probably will not give a good estimate of its intrinsic value.

Balance Sheet Statement (Quarterly)

The balance sheet shows a snapshot of the company's financial health (all the assets and liabilities that it has) at a moment in time (not summed up over a period of time as with the cash flow statement). To get the relevant data of a company to calculate the intrinsic value, we simply need the balance sheet statement for the most recent quarter. We only need the top row (most recent quarter data).

Free Cash Flow, Total Debt, Cash and Short Term Investments

Let us extract the Free Cash Flow from the TTM row of the cash flow statement that we created earlier. We also obtain the Total Debt and the Cash and Short Term Investments from the top row (most recent quarter) of the balance sheet statement. Let's check them by printing them out as well.

Step 3. Extract and Parse Data from Finviz

Next, you need to obtain more data from Finviz. Finviz presents all the company data in an HTML table on its website, as shown below. Hence, we need to use the requests.get method in Python and the Beautiful Soup package to parse whatever data we need.

Price, EPS next 5Y, Beta, Number of Shares Outstanding

The following code shows how I obtain the 'Price' (current share price), 'EPS next 5Y' (estimated growth in Earnings per Share for the next 5 years), 'Beta' (more on this later) and 'Shs Outstand' (number of shares outstanding) from Finviz and store them in a Python dictionary so I can extract this data later. The results in the Python dictionary are shown below.
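Since the embedded code gists are not reproduced here, below is a minimal sketch of what this Finviz extraction can look like, using requests and Beautiful Soup as the article describes. The helper name get_finviz_data and the "snapshot-table2" class name are assumptions, not the author's exact code.

```python
# Minimal sketch (not the author's exact gist): scrape the Finviz
# snapshot table for a ticker into a dict. The table class name is
# an assumption about the finviz.com page layout.
import requests
from bs4 import BeautifulSoup

def get_finviz_data(ticker):
    url = f"https://finviz.com/quote.ashx?t={ticker}"
    # A browser-like User-Agent helps avoid the request being rejected.
    response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(response.content, "html.parser")

    metrics = ["Price", "EPS next 5Y", "Beta", "Shs Outstand"]
    finviz_dict = {}
    # The snapshot table alternates label cells and value cells.
    cells = soup.find(class_="snapshot-table2").find_all("td")
    for label_cell, value_cell in zip(cells[::2], cells[1::2]):
        if label_cell.text in metrics:
            finviz_dict[label_cell.text] = value_cell.text
    return finviz_dict

# Prints a dict like {'Price': ..., 'EPS next 5Y': ..., 'Beta': ..., 'Shs Outstand': ...}
print(get_finviz_data("AAPL"))
```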
To learn more about the details of how data is extracted from Finviz, feel free to read my article below.

Estimate Discount Rate from Beta

Again, in the previous article explaining the discounted cash flow model, I wrote in detail about what Beta is and how it shows the risk level of a stock. Feel free to head back to that article if you need more elaboration. I also wrote that for stocks with a higher Beta, we demand a risk premium (i.e. a higher reward for taking higher risk), hence a higher discount rate. I've written code below to estimate the discount rate based on the stock's Beta. So we get a discount rate of 8%, as the Beta is 1.32.

Print All Data Needed for Intrinsic Value Calculation

For clarity, let's print out all the data obtained from the Financial Modeling Prep API and Finviz so far. All the data we have obtained is shown below; if you compare this to my previous article with manually grabbed data for Apple, you will see that they are the same. (Well, except for the slight difference in Total Debt, which is probably due to slightly different definitions of what constitutes Total Debt in the balance sheet for Yahoo Finance and Financial Modeling Prep; this is fine, and we will see that it does not affect the calculated intrinsic value much.) Also, in my previous article I elaborated on how I decide on the growth rate for years 6 to 10 and years 11 to 20. The same is done here. Here we assume that Apple will survive for the next 20 years, which is quite reasonable.

Step 4. Project Future Cash Flows and Calculate Intrinsic Value!

We are now ready to calculate the intrinsic value of Apple stock! I've written a function below to do all of this! The first part of the code (the for loop) projects the free cash flows of the company for the next 5 years using the 'EPS next 5Y' growth rate. We handle years 6 to 10 and years 11 to 20 in the next two for loops. We just need to multiply the previous cash flow by (1 + growth rate) for each year and loop it. For the projected cash flow in each year, we also divide it by (1 + discount rate) raised to the number of years into the future, to get the discounted cash flow for that year. We store all the projected cash flows in Python lists for summing up and plotting to visualize later. Then we add up all the discounted cash flows in the cash_flow_discounted_list using the .sum() method, add the Cash and Short Term Investments, and subtract the Total Debt. Finally, we divide the above by the Total Number of Shares Outstanding, and we are done! Let's look at the plot of the projected Cash Flow as well as the Discounted Cash Flow for the next 20 years. While the projected cash flow is always growing, the discounted projected cash flow starts to decrease after year 5. This is because while the growth rate decreases over time, the discount rate does not, and it eventually becomes higher than the growth rate. Hence, even if Apple survives for an eternity, the discounted cash flow eventually becomes 0 and the intrinsic value is not infinite.

Print Intrinsic Value, Current Share Price, Margin of Safety

The code below outputs the intrinsic value, current share price and margin of safety. The margin of safety shows by what percentage the current share price is lower than the intrinsic value (the higher it is, the safer the investment). Here we obtain an intrinsic value that is similar to what we calculated in the previous article. We also see that the intrinsic value is lower than the current price of Apple.
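As the embedded gist is not reproduced here, below is a minimal sketch of the projection-and-discounting logic described above. Every name (calculate_intrinsic_value, the growth-rate parameters) is illustrative, not the author's exact code, and the usage figures other than the 8% discount rate and Beta of 1.32 mentioned in the article are placeholders.

```python
# Minimal sketch of the discounted cash flow calculation described above.
def calculate_intrinsic_value(free_cash_flow, growth_rate_1_to_5,
                              growth_rate_6_to_10, growth_rate_11_to_20,
                              discount_rate, cash_and_st_investments,
                              total_debt, shares_outstanding):
    cash_flows = []             # projected cash flows, kept for plotting
    discounted_cash_flows = []  # the same cash flows discounted to today
    cash_flow = free_cash_flow
    for year in range(1, 21):
        # Growth slows in later years; the discount rate does not.
        if year <= 5:
            growth = growth_rate_1_to_5
        elif year <= 10:
            growth = growth_rate_6_to_10
        else:
            growth = growth_rate_11_to_20
        cash_flow = cash_flow * (1 + growth)
        cash_flows.append(cash_flow)
        # Discount year n's cash flow by (1 + discount rate)^n.
        discounted_cash_flows.append(cash_flow / (1 + discount_rate) ** year)

    equity_value = (sum(discounted_cash_flows)
                    + cash_and_st_investments - total_debt)
    return equity_value / shares_outstanding

# Illustrative usage; all numbers below are placeholders, not Apple's
# actual figures, except the 8% discount rate (Beta of 1.32).
intrinsic_value = calculate_intrinsic_value(
    free_cash_flow=80e9, growth_rate_1_to_5=0.12,
    growth_rate_6_to_10=0.06, growth_rate_11_to_20=0.03,
    discount_rate=0.08, cash_and_st_investments=90e9,
    total_debt=110e9, shares_outstanding=17e9)
print(f"Intrinsic value per share: {intrinsic_value:.2f}")
```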
A negative margin of safety, as in the case of Apple here, means that the share price is higher than the intrinsic value by that percentage. Once again, feel free to read the last part of my previous article to find out what I think of this. In this article, we used Python to pull data from multiple sources and then performed calculations on it to find the intrinsic value of a stock. Simply change the ticker symbol to perform the same calculations on any other company you want! I hope this article was useful. If you liked it, feel free to check out the article below for an application example, as well as my other articles.
https://medium.com/datadriveninvestor/use-python-to-value-a-stock-automatically-3b520422ab6
[]
2020-12-11 00:54:24.439000+00:00
['Data Science', 'Money', 'Python', 'Programming', 'Finance']
For the ‘Love’ of Chicago: Restaurateur feeds homeless
For the 'Love' of Chicago: Restaurateur feeds homeless
Black-owned Chicago restaurant fights for social change, healthy meals
Photo credit: The Creative Exchange/Unsplash

Vegetarians and vegans are used to walking into "socially conscious" restaurants decorated with motivational quotes. These eateries are usually full of reading material that focuses on environmentalism, healthy eating, animal rights, and — in this increasingly politically charged world — voting. But strolling through the front door of Chicago restaurateur Quentin Love's Turkey Chop has a little extra flavor. In addition to framed interviews with major newspapers, artwork and words of wisdom, one wall in particular confirms this isn't your average restaurant. There's a painting of Willie Lynch housing projects; Harriet Tubman; handcuffed arms holding the shape of the United States of America in red, white and blue; a green outline of Africa; a man hanging from a tree; an open book of the Emancipation Proclamation; a puff of black smoke coming out of a pipe placed next to a can and a bottle; and brown faces behind a prison cell. The food menu says "socially conscious" in big letters, and it's clear that this restaurant stands by its words.

There's also a communal vibe inside of the West Side Chicago restaurant. A steady rotation of customers stroll in and out, picking up deliveries or carryout, while some sit at tables with laptops or smartphones waiting to get their food. Familiar faces pause in front of Love to extend their hands or catch his eye for a mutual head nod before they walk near the cashier counter.

Outside of Turkey Chop (Photo: Shamontiel L. Vaughn)

While the aim of the West Side Chicago restaurant is to provide healthy eating options and a safe place to eat in the neighborhood, Love is fully aware of why he opened this restaurant in the area that he did. "I saw this old film called Black Caesar with Fred Williamson," said Love. "He owned all of these businesses in the area and was able to revitalize the area through economics. I wanted to do that myself. So I opened up different businesses in a certain grid, starting with Chatham on Chicago's South Side."

Love, a Marine who was honorably discharged during Desert Storm, became a serial entrepreneur and opened a barbershop, a dry cleaners and a clothing boutique between 1991 and 2001, according to a Sun-Times report. But there was one particular business he started that stood out from the bunch, a soul food restaurant called Soul Xpress, which later became Quench.

Photo credit: Shamontiel L. Vaughn

"I felt like we needed to add a couple of vegetable options to the menu," said Love, who was a vegetarian for seven years. He knew he may have had a better shot at introducing healthier eating options as opposed to solely vegetarian and vegan restaurants, and Quench was a happy medium. He also opened a few other restaurants to please various palates: an Asian-themed menu at Black Wok; a breakfast cafe called 5 Loaves; and a pasta place called Italian Soul. While the restaurants did have their fair share of customers from 2001 to 2011, unfortunately, all of them closed. His charitable organization The Love Foundation managed to survive. And that alone helped Love to not be deterred.
When Chicago’s West Humboldt Park Development Council invited him to open a new restaurant, he did just that with Turkey Chop in 2012. Turkey Chop’s menu features an amalgamation of hit dishes served at his past restaurants. But his current goal is bigger than being a successful entrepreneur and feeding his family. Photo sign outside of Turkey Chop (Photo: Shamontiel L. Vaughn) Turning his head to look out of his restaurant window, he points toward a nearby corner with a group of young men hanging out. “If you look right outside, you see 20 people on the corner in an environment that has been ran by the wrong element,” said Love. “And it makes other businesses afraid to open up around here. Why would I open up a business in an area that’s saturated with drugs and crime? “However, why should I have to get in my car to go somewhere else to get a decent meal or have a decent experience? When we spend those tax dollars in our communities, those neighborhoods continue to thrive. We have to put energy into our own communities.” And that “energy” from Turkey Chop has also been used to feed those in need and Chicago’s homeless every Monday for the past five years, with the help of a partnership with the Greater Chicago Food Depository. With Turkey Chop meals such as smothered chicken, lentils, rice and cabbage for a community that lives in a food desert full of fast food joints, Love hopes that this will encourage the community to eat healthier. “I know everyone can’t afford to get a $15 meal or a $6 sandwich,” Love said. “And if you can’t afford to buy the meal, that’s fine too. That’s why we’re here on Mondays to make sure that you have the opportunity to have something healthy at least once a week.” Last Thanksgiving, The Love Foundation gave away 3,000 turkeys to those in need. And 25 percent of the company’s revenue will be donated to the sister organization for future holidays and events. “No matter what your background is, everybody needs to feel a sense of love even outside of self-love and your immediate family,” Love explained. “I want people in this community to know I care about them. How do they know that? Maybe I fed them yesterday. Even for the drug dealers, drug users and others making bad decisions, sometimes all it takes is the simplest gesture to make you decide you want to do something different. Feeling loved could mean the world to any human being.”
https://medium.com/i-do-see-color/for-the-love-of-chicago-restaurateur-feeds-homeless-7d0a2da8b9bd
['Shamontiel L. Vaughn']
2020-08-13 22:33:32.893000+00:00
['Restaurant', 'Chicago', 'Vegetarian', 'Black Dollar', 'Vegan']
Where to start your data science?
Where to start your data science? We hear a lot about machine learning, artificial intelligence, neural networks, and more and more buzzwords, but how do we approach it all, and where do we start? Photo by Chris Liverani on Unsplash A ton of resources are available online and on paper, for learning, for research, for working in the data science industry. Online learning has seen a new revolution, with more and more people staying home, learning new technologies, and discovering new fields in the creative and technology space; as a result, the basics-first, start-from-scratch approach to data science has also seen an upward trend. At this point, people who are learning it for the first time are a mixed group: they come from different backgrounds, with different levels of expertise and experience. It is a broad range of people, from senior executives trying to understand whether data science is another vertical worth investing in, or managers trying to make better business decisions in marketing or pattern prediction, to students trying to get their hands on new technologies for the first time, or traditional programmers transitioning into the next thing. The best way to start with anything today is by searching online on Google and YouTube, the two largest search engines; in most cases we can get an accurate answer as long as we phrase our query properly. However, because of the broad and vast scope of the field, and the detail and variation in the resources, it becomes necessary to understand the path that is most direct and easy. Thanks to the internet, most of the resources are free, but it helps to have it all in one place, with proper projects and ways to start at the beginner level and grow up to the advanced level, so that if you just want to get by learning SVMs, you don’t have to learn the whole concept of neural networks. Here are a few curated platforms to learn from. 1. YouTube This by itself could be enough to understand all the technologies, all the concepts, all the math, and more. However, YouTube being the beast that it is now, trying to find the correct channel or playlist against recommendations, suggestions, and algorithms can be a big task. Sure, there are many dedicated channels for coding, but in the field of data science that might not be enough. Bookmark, save, or subscribe to essential and helpful channels to keep them on record, so they can be referred to at our own pace. 2. .org As many resources are free, there are equally enough platforms that are for-profit and want you to subscribe or pay. The trick companies in the subscription model follow is that they keep basic versions of a few technologies free, but for advanced and detailed versions they make everyone pay. However, .org organisations and websites are a very good way to get what you want in a simple and organized manner, without ill will or greed. Some of the best examples let you learn the basics at your own pace with a 7-day trial or 30-day activation. 3. All the educational .coms In the plethora of online web portals and YouTube videos, there are many blogs, videos, and creators, especially in the education sector, with a specialty or focus on a certain type of technology, language, or topic. For someone who is just starting on this path, however overwhelming these platforms seem to appear, they are a sure way to learn from the basics to the advanced. They help us cover all the bases in the field and also create a sense of community with competitions and tests. A few well-known sites stand out. Keep Learning! Keep Evolving! Keep Educating!
https://medium.com/analytics-vidhya/where-to-start-your-data-science-15966c7650dd
['Vinay Gandhi']
2020-12-25 16:16:39.558000+00:00
['Data Science', 'Programming', 'Coding', 'AI', 'Education']
26. Defunding downstream, moving upstream, taking care, making repair
The architecture of this system In an emotional New York Times interview in August, reflecting on the broader context around the killing of George Floyd, the mayor of Minneapolis Jacob Frey described how he can’t remove police “due to issues with both contract and arbitration.” Frey lamented that he is “hamstrung by the architecture of this system that prevents change.” Frey’s phrase, “hamstrung by the architecture of this system” indicates that we might focus our sense of progress not on economic growth or cost-effective police, but on the social infrastructures that trace, shape, and direct our patterns of living and structure of feeling. That is where design can be most powerful, increasingly — in understanding that built architecture and social architecture are the same thing, joined at the hip. And therefore, the linked questions of safety, crime, and social justice, for instance, can be hosted and pursued by design practices. Of course this isn’t easy. And yet the heightened sensibilities due to current events make quite radical proposals at least possible. Indeed, as I started writing this set of Papers, the Minneapolis city council pledged to dismantle the city’s police department and replace it with a new system of public safety, promising to invest instead in finding “non-police solutions to the problems poor people face.” This suggests a shift of emphasis towards creating and nurturing places and communities that intrinsically reduce and avoid harm, rather than reacting when harm happens. Better still, using the language of mental health and wellbeing, producing places and communities that enable people to thrive, moving into promotion beyond prevention. For example, this would be akin to designing streets that are intrinsically healthy, sustainable, vibrant, open, and diverse in the first place. Dan Heath’s book ‘Upstream’ indicates the broader value of prevention rather than cure, in investing in preventing crimes and illnesses which subsequently cost vast amounts to society, in numerous ways. The book was written upstream of COVID-19, as it happens, but turns out to be useful in understanding many aspects of why the USA’s healthcare system is both as unnecessarily expensive as it is hopelessly inadequate. Heath describes how the $3.5 trillion health care industry, almost a fifth of the American economy, is “designed almost exclusively for reaction … It’s hard to find someone in the system whose job it is to address the question ‘How do we make you healthier?’” As he points out, “We spend billions to recover from hurricanes and earthquakes while disaster preparedness work is perpetually starved for resources.” In this sense, we can file COVID-19 alongside hurricanes Katrina and Sandy, in revealing an endemic lack of ‘disaster preparedness’. The virus’s ability to locate and exploit the weak points in our systems is particularly obvious in a nation rendered fundamentally unhealthy (and for the USA, see also the UK). More broadly, this funding and focus on the reactive ‘hospital end of things’, rather than proactively addressing the root causes of why people end up in hospitals, has been a debate for decades, clearly, and continues to be so. Ever since the American-Israeli medical sociologist Aaron Antonovsky articulated the notion of salutogenesis (as opposed to pathogenesis) in the 1980s, we have had a ‘new way of seeing’ at hand.
Yet little has been done to truly absorb these principles into the multiple arenas to which they would apply, and thus health is still pinned as a downstream practice, rather than an upstream one. This makes healthcare itself ultimately unsustainable in welfare states like the Nordic countries, just as much as it is already broken in countries that attempt to fund healthcare privately and individually, rather than collectively. The question of shifting upstream applies both within the health sector—Gillian Tett noting that “We need to treasure — and fund — medical innovation, not just in a crisis but in “normal” times as well”—as well as without, including the broader public health concerns, often positioned outside of the formal healthcare sector, as well as within those sectors whose decisions and actions fundamentally produce good or bad health. These include the obvious candidates of transport, urban planning, employment legislation, architecture, education, food, social services, and so on, yet there are very few areas of government or sectors of industry that have no impact on health. Yet most are run as if this is not the case at all; thus, the healthcare industry has to pick up the pieces ‘downstream’. It will be interesting to see if we manage to invest the 2% of COVID-19’s economic damage required to prevent future viruses. Antonovsky’s core idea was that we should address what causes health and wellbeing (salutogenesis), not what causes disease (pathogenesis). Under the current circumstances, we might equally ask what causes the coronavirus (see Slowdown Paper 2: We make the virus and the virus makes us), not simply how do we make a vaccine. We might also ask what causes crime, not simply how big is our police station. The Black Lives Matter protests have placed this question of ‘upstream’ strategy and policy directly on the table, yet framed through the question of policing, rather than healthcare per se—although of course public health and community resilience are intrinsically linked. The potentially radical moves to ‘defund the police’ in Minneapolis and elsewhere, triggered by the actions on the street, are entirely in this ‘upstream’ direction. Similarly, the traffic-oriented moves to transform the street itself—in fact, to repair the street—are triggered by actions to mitigate COVID-19. This is also in this upstream direction. The transformation of housing, education, physical activity, community clubs and societies, and social services are all upstream targets, if addressed coherently and ethically. The revisioning of policing itself is of course radically upstream. These all in turn suggest, and even demand, a radical revisioning of design at some point: around relational principles, around care and repair, around a deeper examination, and then articulation, of the value we are producing, and the values we stand for. We cannot defund the police unless we redesign the environment that fundamentally produces crime, whether social, cultural and political, or the built fabric that enshrines, enables, or engenders these things. This does not mean that design is the sole answer to the question of how we defund the police. Any chance to prick the last remaining shreds of hubris around ‘design thinking’ must still be taken, clearly, and no doubt some will be too quick to claim this is all a design problem. That must be resisted, as there are no doubt more fundamental approaches required.
Listening to Mayor Lori Lightfoot of Chicago and William Finnegan on the role of the police unions around the NYPD makes that crystal clear. Similarly, no amount of restorative landscape design can counterpoint punishing working hours; a holistic answer includes the wellbeing benefits of a shorter working week, as well as environmental improvements.
https://medium.com/slowdown-papers/26-defunding-downstream-moving-upstream-taking-care-making-repair-a76db3925bb9
['Dan Hill']
2020-09-29 09:52:20.431000+00:00
['Covid-19', 'Policy', 'Defundthepolice', 'Environment', 'Healthcare']
The miracle of object-oriented programming
The power of the OOP languages Over the last sixty years the object-oriented paradigm has been widely adopted, and we have witnessed the birth and diffusion of many languages based on it. The power of OOP languages resides in three key concepts: flexibility, modularity and the representation of reality. Let us discuss these three concepts and how they apply to programming languages. Flexibility is the first key concept we will analyze. OOP was designed to be used across different development scenarios; the presence of objects that represent the concepts of the application domain emphasizes this: with the same OOP language it is possible to define a library structure, the halls of a cinema, or the products of a supermarket. Every real-world concept can be modeled as an object, and this applicability to different contexts makes OOP a flexible paradigm. The second key concept is modularity. The OOP paradigm allows the definition of objects, and every object can interact with other objects, creating a hierarchy. If this hierarchy of objects realizes a functional silo, it is identifiable as a module or, more commonly, a library. The possibility to create, include and use libraries written by third parties allows applications to be built very quickly; otherwise the developer would need to implement the functionality included in those external libraries by hand. The fullest expression of the library is the framework, which collects many libraries to provide a wide set of functions. The last concept is the representation of reality. This concept is at the base of flexibility: OOP represents a reality, but it can also represent more than a single reality, which is precisely what makes it flexible. With objects, attributes and methods orchestrated by the programming paradigm, we can build a digital representation of reality: a DVD becomes an object, a title becomes an attribute, watching the film becomes an action or, more simply, a method. With OOP, the way we design applications changed, and so did the way we think about programming.
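To make the DVD example concrete, here is a minimal Python sketch; the class, attribute and method names are my own illustration, not from the original text:

class DVD:
    """The DVD becomes an object; the title becomes an attribute."""

    def __init__(self, title, duration_minutes):
        self.title = title                      # attribute
        self.duration_minutes = duration_minutes

    def watch(self):
        # Watching the film becomes an action, i.e. a method
        print(f"Watching '{self.title}' ({self.duration_minutes} min)")

movie = DVD("An Example Film", 95)
movie.watch()  # prints: Watching 'An Example Film' (95 min)

The same three concepts show up here in miniature: the class can model any real-world thing (flexibility), it can be packaged into a library (modularity), and it is a digital representation of a physical DVD (representation of reality).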
https://medium.com/quick-code/the-miracle-of-object-oriented-programming-837c3b1f1e59
['Marco Domenico Marino']
2019-10-24 04:50:18.841000+00:00
['Programming Languages', 'Programming', 'Software Engineering', 'Software Development']
To The Moments We Almost Have
To The Moments We Almost Have I hope you can still have it Photo by britt gaiser on Unsplash to the occasions I will miss your 25th birthday your career milestones your sister’s wedding I hope you can still relish it to the moments we almost have the road trips we almost take the dream house we almost build the apple pie we almost bake I hope you can still have it to the chances we could’ve taken the tale we could’ve told the words we could’ve said the promises we could’ve fulfilled I hope all is well without it and when the clock strikes midnight like the lunar eclipse, remnants of you will fade away then you’ll no longer be a part of my life but what do you do when most parts of you already became mine
https://medium.com/scribe/to-the-moments-we-almost-have-84ba9b9241e7
[]
2019-11-11 17:31:51.023000+00:00
['Poetry', 'Short Story', 'Love', 'Reading', 'Writing']
Animals Tale
Animals Tale Motivation Photo by Sid Balachandran on Unsplash Birds fly up in the sky with no fear of falling down, because they know they can fly. So, like birds, we should believe in our abilities and work hard to make things happen. Ants work very hard; they don’t need an instructor. How disciplined they are! We should learn from them to become disciplined like them. The dog is a very faithful animal; he is always loyal to his owner. In the same way, we can be loyal to our goal by performing our hero’s role. A rabbit runs very fast; he always runs hard. We should also try hard to get what we actually want. A lion is the king of the jungle; he lives without fear. We should also become courageous like him; then we will also easily win. Peacocks wait for the rainy season and dance when the rain comes. In this way, we can find happiness in small things in our life. The sparrow works really hard; she makes a nest without rest. She works with consistency to give shelter to her children. We have to become consistent like her to achieve our goal. The snake sleeps in the day and works at night; we can also follow the same work rule to achieve something in life. Animals teach us tales; we must follow them, so we will become strong like them and achieve our goals successfully.
https://medium.com/flicker-and-flight/animals-tale-3460cc95afd5
['Mike Ortega']
2020-09-16 14:24:27.248000+00:00
['Success', 'Poetry', 'Motivation', 'Poetry Writing', 'Poem']
Exploring Design Drinking
Basically, each stage condenses one day of the Design Sprint process, adding a party-game attitude: MEASURE / 06:00 PM — 08:00 PM Participants must pitch their ideas and measure their goals. As soon as that’s done, we order a few pizzas, and we should have decided the focus of the sprint before the delivery arrives. If not, the food will be cold. SHAKE / 08:00 PM — 11:00 PM We will run a few brainstormings to shake our minds and generate concept solutions. Instead of a timer, we will play music, and when the track ends the Crazy 8’s must be filled in. Whoever doesn’t accomplish it drinks a shot for each empty frame. STRAIN / 11:00 PM — 01:00 AM At this point, no one will be in the mood for sitting or writing post-its. We need to channel this euphoria into something creative. That’s why we will evaluate our ideas by doing a role-play and giving a toast to each identified pain. At the end of the performance, we will have strained out the most promising concept direction. SHAPE / 01:00 AM — 04:00 AM This is the most difficult part. The team must cool down and shape the selected idea into a testable prototype. Let’s avoid using computers and build something with all the garbage generated during the party. SIP / 04:00 AM — 06:00 AM It’s time to leave the apartment and expose our product to a walk of shame. We will carry out a guerrilla test with the people leaving the bars, to collect user feedback. Because a drunk man tells no lies. In addition, we established some rules to reinforce the party atmosphere: no chairs or tables, and always with the music on. Why we failed Against all odds, the problem was neither alcohol nor fatigue; it was our setup. We were too confident about our experience running design sprints, so we did not pay attention to some basic aspects of its success: Overlapped disciplines To come up with a solid outcome you must assemble a multidisciplinary team. Our team consisted of a service designer and two product designers… there is a lack of diversity for sure. Tiny team Nobody will call a meeting of just three guys a party. Discussions were always between the same people and there was no way to split the team to explore different concept directions in parallel. Reign of chaos It became impossible to follow our agenda without someone looking beyond the design project. A moderator to control the timings and to cut endless discussions was mandatory. Tailored challenge We chose the challenge that best suited the sprint settings and not the most interesting one. We were always making decisions based on what worked best for the setup and not on what was most interesting to explore. Afraid of working on something too big, we ended up designing something too boring. So at 4:00 AM we decided to call it a night (actually already the day) before even starting the Shape and Sip stages. We were sad but satisfied; we did not discover a new unicorn, but we tested the approach and it was promising.
https://uxdesign.cc/exploring-design-drinking-f68f145e5411
['Marc Morera']
2020-10-17 10:01:50.205000+00:00
['Design Sprint', 'Design Thinking', 'Innovation', 'UX', 'Startup']
Create a COVID-19 Dashboard with JavaScript
Now that we have decided what we want to build, let’s start writing some HTML & CSS! The HTML First, let’s take a look at all the libraries & scripts we need to import to create our beautiful dashboard. We’re going to use Bootstrap 4 to manage our views and make our web app look great on mobile devices. Then, we need to select a font. One of my recent favourites is Work Sans; it’s sans-serif & perfectly readable across all screens. You may want to check out other fonts on Google Fonts. We’ll be importing scripts for all the libraries we are going to use here. Your <head> tag should be looking like this: A note about the meta tag on line-3: <meta content="width=device-width, initial-scale=1" name="viewport" /> This means that the browser will render the width of the page at the width of its screen. So if that screen is 320px wide, the browser window will be 320px wide, rather than way zoomed out and showing 960px. This helps with responsive design and should be used only if you intend your website to be responsive. Our HTML is divided into the following parts: The top header which consists of a title, a dropdown to select the country & a button which opens a modal displaying important information regarding the dashboard. (The modal is optional.) The left container which consists of our graphs and statistics. This container’s data is fetched from the internet using the Axios library. The right container which shows a couple of resources from WHO. This is static and not fetched from an API. The header div is pretty simple, with a standard <select> tag & a button. For the left & right containers, we’ll be using the Bootstrap grid. Divide the grid into two parts ( col-lg-8 & col-lg-4 ). Let’s create the left container first. Take a look at the design. We want to add the flag & name of the country selected, a div that shows active cases & a collapsible view which contains 3 divs for Confirmed, Recovered & Deaths. To get the flag of each country, check out CountryFlags.io, an awesome but simple API that helps you get the flag of any country! The collapsible divs each contain a chart, a dropdown to select the days for which stats are shown & the total count of cases in that day range. This collapsible view is called an accordion because it works just like one. An Accordion. It expands and collapses to create music. In our case, the accordion div will contain all the cards; these cards will expand and collapse on clicks & display our charts! In the end, after you’ve added the country name, the Active status div & the three cards to the accordion, your final code for the left container should be: Phew, now let’s quickly finish off the static resources-container. It contains Bootstrap cards; you can find a super handy resource which gives you a cheat sheet for all the Bootstrap classes, with all the code! The CSS The CSS is fairly simple, and we will not be covering all the classes & properties used in this project until we discuss media-queries. Hence, use the below CSS to give your dashboard the final look and feel. Good job sticking around till here! We’re on track to create our final product! The next step is to work with JavaScript, which will be the longest part of this tutorial. It is true what they say about the 80/20 rule: 80% of the work takes 20% of the time while 20% of the work takes 80% of the time! The JavaScript Before we dive into the actual code, let’s familiarise ourselves with IIFEs & Promises.
Immediately Invoked Function Expressions IIFEs are functions in JavaScript which run as soon as they are defined. They are “immediately invoked” when your script file is run. Hence, they do not need another function to call them. Pre ES6:

(function() {
  console.log('Welcome, this is an IIFE!');
})();

ES6:

(() => {
  console.log('Welcome, this is an IIFE!')
})();

Promises To understand promises, consider an example. Say you’re baking a cake for the first time & the recipe says it takes 45 minutes to bake a cake. Once you start baking, you don’t know if you’re going to bake it right until the 45 minutes are up. There are two outcomes: if the cake is baked right, then you can eat your cake 🍰, or else, if you’re like me, you burn your kitchen down & decide to just buy a cake the next time. 🤷🏼‍♂️ So there are three states: Pending: You don’t know if the cake is going to be baked right. Fulfilled: Your cake is baked right, so you can eat it. Rejected: You burn your kitchen down, so you can’t have it. The above scenario in a JavaScript promise would look like this:

let isCakeBakedRight = true;

const bakeCake = new Promise((resolve, reject) => {
  if (isCakeBakedRight) {
    // Eat the cake! 🍰
    resolve("Eat");
  } else {
    const reason = new Error("Oven's on fire");
    reject(reason);
  }
});

Axios is a promise-based HTTP client for the browser & Node.js. Now that you know about IIFEs & Promises, let’s start creating our JS Controllers. Let’s start by creating the UI Controller first. The UI controller, true to its name, will handle all operations related to the UI, such as getting the data from the API, creating charts & updating the values for the number of cases. Let’s take a look at each of the code-blocks & functions in the UI Controller. HTMLStrings This variable stores all the strings we use in the HTML to identify classes or ids. This makes sure you don’t spend hours debugging typos & allows for quick & error-free referencing. setTotalCasesForStatus(status) The API which we use in this project returns total country data in two ways: For the entire country For each province in the country Due to this, we will be creating 4 variables and looping through the summary data returned by the API. This will work for both the ways listed above. This function will also set the values to the UI. setCasesForStatus(count, status) This function will set the number of cases for the last 30, 60, 90 & 120 days for the selected status. getChartColors(status, type) This function is used to set the chart colours, for the background & the border of the chart. setChartForStatus(data, status) This is an important function. This function collects the data received from the API & creates a chart for the selected status. Our line chart requires an array of labels for the X-axis, which we insert at an interval of 5 days (due to the large number of dates, 30 or above). We also insert the current date to make sure it is not missed out. We pass the labels & dataset to the chart, along with the background colour & border colour. We also provide some custom options for the Y-axis, such as the step size for the axis & whether it should begin from 0. Documentation on more options and chart types can be found here.
let ctx = document.querySelector(chartName);
let statusChart = new Chart(ctx, {
  type: 'line',
  data: {
    labels: labels,
    datasets: [{
      label: status + ' Cases',
      data: chartData,
      backgroundColor: getChartColors(status, 'background'),
      borderColor: getChartColors(status, 'border'),
      borderWidth: 2
    }]
  },
  options: {
    scales: {
      yAxes: [{
        ticks: {
          beginAtZero: false,
          stepSize: 10000,
        }
      }]
    }
  }
});

getCountryFlag(country) This function uses a simple switch-case to return the flag ID for the CountryFlags API. The next list of functions is exposed from the controller so that they can be called outside the UIController. numberFormat This function returns the number formatted as per the required locale. If you’re looking for numbers to be returned as 1,00,000, use the en-IN locale option. More details here. getSummaryCount This function fetches the summary count from the API for the full day. MomentJS is a simple and handy library which lets you manipulate the date and time in JavaScript. For example, if you want yesterday’s date and time, all you need to do is:

let yesterday = moment().subtract(1, 'days'); // 2020-06-05T22:06:42+05:30

Remember our discussion regarding promises? Let’s use it. In the code-block below, the Axios object fires a GET request to the API. The method returns a Promise, so we can use the then and catch methods to handle the result. The then method is called when the promise is fulfilled. The catch method handles rejections & other errors.

axios.get('https://api.covid19api.com/country/' + selectedCountry + '?from=' + yesterday + 'T00:00:00Z&to=' + today + 'T00:00:00Z')
  .then(response => {
    let res = response['data'];
    setTotalCasesForStatus(res);
  }).catch(error => {
    console.log(error);
  });

getCasesForStatus(status, delta) This method gets the total cases for the status and the delta specified by the user. The default delta is set to 30 days, so you will see the data for the last 30 days from the current date by default. This is where MomentJS makes our job easier by providing the subtract method, which gives the delta date. On fulfilment of the promise, this method sets the count for the status & creates the charts. That’s it for the UI Controller. Next, we will create the IIFE that will set up eventListeners and create a default environment. Additional Task: Writing a Media Query for mobile devices. Media Queries were introduced with CSS3. They use the @media rule to include a block of CSS properties only if a certain condition is true. The media query below will set some different properties if the screen-size is below 600px.

@media screen and (max-width: 600px) {
  .left-container { height: auto !important; }
  .right-container { height: auto !important; }
  .details-container { margin: 0; }
  .resources-container { margin: 0; }
  .col-lg-5 { margin-top: 15px; }
  .country-select { margin-top: 14px; }
  .active-cases { width: 60%; }
}

More on Media-Queries in this awesome freecodecamp.org post.
https://medium.com/swlh/create-a-covid-19-dashboard-with-javascript-373f46a11fcc
['Varun Joshi']
2020-06-07 19:09:58.838000+00:00
['JavaScript', 'Software Development', 'Covid 19', 'Coronavirus', 'Programming']
Closing TCP/UDP Sockets With Timeouts and Error Handling in Nodejs
I’m not a big fan of the built-in `dgram` and `net` libraries in Nodejs, but it’s really easy to create sockets and write networking applications with Nodejs. One of the biggest issues I’m constantly seeing is the lack of cleanup functionality when using the socket libraries in Nodejs. This code is from our DNS npm package that we use at Violetnorth. You can find the Violetnorth — DNS npm package here: github.com/violetnorth/dns So I’m going to talk about a quick way to clean up sockets with timeouts and overall error handling when dealing with sockets. If you don’t implement some sort of timeout, you are going to run out of available sockets, especially when using TCP sockets. Sockets usually hang, and there needs to be timeout handling to close the socket if it’s hanging, which is not super apparent with Nodejs. Here is an example function that uses a TCP socket to do a DNS lookup.

// TCP socket with timeout
const dns = require("dns");
const net = require("net");
const dnsPacket = require("dns-packet");

const resolveTCP = (packet, addr, port = 53, timeout) => {
  return new Promise((resolve, reject) => {
    const socket = new net.Socket();
    const id = setTimeout(() => {
      clearTimeout(id);
      socket.destroy();
      reject("timed out");
      return;
    }, parseInt(timeout));
    socket.connect(parseInt(port), addr, () => {
      socket.write(packet);
    });
    let message = Buffer.alloc(4096);
    socket.on("data", data => {
      message = Buffer.concat([message, data], message.length + data.length);
    });
    socket.on("drain", () => {
      clearTimeout(id); // Clear the timeout if you are going to close the socket manually.
      socket.destroy();
      resolve(dnsPacket.decode(message));
      return;
    });
    socket.on("end", () => {
      clearTimeout(id); // Clear the timeout if you are going to close the socket manually.
      socket.destroy();
      resolve(dnsPacket.decode(message));
      return;
    });
    socket.on("error", err => {
      reject(err);
      return;
    });
    socket.on("close", function() {
      resolve(dnsPacket.decode(message));
      return;
    });
  });
};

Another example function that uses a UDP socket to do a DNS lookup.

// UDP socket with timeout
const dns = require("dns");
const dgram = require("dgram");
const dnsPacket = require("dns-packet");

const _resolveUDP = (packet, addr, port = 53, timeout) => {
  return new Promise((resolve, reject) => {
    const socket = dgram.createSocket("udp4");
    const id = setTimeout(() => {
      clearTimeout(id);
      socket.close();
      reject("timed out");
      return;
    }, parseInt(timeout));
    socket.on("message", message => {
      clearTimeout(id); // Clear the timeout if you are going to close the socket manually.
      socket.close();
      resolve(dnsPacket.decode(message));
      return;
    });
    socket.on("error", err => {
      clearTimeout(id); // Clear the timeout if you are going to close the socket manually.
      socket.close();
      reject(err);
      return;
    });
    socket.send(packet, 0, packet.length, parseInt(port), addr, err => {
      if (err) {
        clearTimeout(id); // Clear the timeout if you are going to close the socket manually.
        socket.close();
        reject(err);
      }
    });
  });
};

The idea is basically to create a named timeout function which will close the socket after the specified timeout milliseconds, but you have to clear the timeout function every time you close the socket manually (in the case of an error, for example). Otherwise, you will see unhandled errors due to trying to close an already closed socket. Anyways, that’s pretty much it: error handling and timeouts with Nodejs sockets.
https://medium.com/swlh/closing-tcp-udp-sockets-with-timeouts-and-error-handling-in-nodejs-e06b063c0bf6
['Koray Göçmen']
2020-10-22 21:36:51.563000+00:00
['Nodejs', 'Programming', 'Software Engineering', 'JavaScript', 'Technology']
Introducing Myself
Learning new skills has always been my passion. Every year I am excited to try to learn a new craft. This year has its downsides, but it helped me find spare time for my new hobby projects. I am already enjoying two new hobbies — painting and writing. Why Medium? I see myself as more of an introvert. I feel more comfortable talking to someone one-on-one than being in a crowd. However, I do have thoughts to share and would like to reach out to many people who could benefit from my stories. The solution to my problem is Medium! It gives voice to shy people like me. I am excited to write about topics I love. I like to share ideas on various subjects like learning new skills, personal development, inspirational stories, or anything which comes to my mind. Why Illumination? This is one of my favourite publications simply because it is rich in a variety of topics. You can read diverse stories in one place. When I was accepted to the publication, I felt welcome right away. The editors are extremely fast in publishing my articles!
https://medium.com/illumination/introducing-myself-8c09c89d35eb
['Kirshi Yin']
2020-07-30 12:01:10.280000+00:00
['Autobiography', 'Self', 'Writing', 'Illumination', 'Writers On Medium']
Exploratory Data Analysis(EDA) on Residential Properties in Hyderabad
This article is a follow-up to the previous one, where I built a web scraper that extracted the required data from a real estate website for an analysis of residential properties in Hyderabad. In this one, our primary focus will be on visualizing the data we have extracted and getting meaningful insights from it so that we can come to the necessary conclusions. Our ultimate goal is to find the residential properties that match my father’s requirements, which are stated below. There are four requirements: the price should be between 25 and 40 lakhs, the area should be greater than 1000 sqft, and the property should be a 2 or 3 BHK apartment located in Kukatpally, Gachibowli, or Miyapur. We start with basic exploration and visualization of the data we have. For performing EDA we have univariate, bivariate, and multivariate analysis. Firstly, we import the necessary libraries and see the basic details about the data, like the number of rows and columns and the data types of each column (rows represent the data points and columns represent the features/attributes of the data). Let’s take a look at the top and bottom rows of the dataset. To get the data types of each column we can use df.dtypes, which returns the datatypes of each column. (df is the data frame we loaded.) Now let’s visualize every column we have. The bar plot above shows that residential plots are the type of property most commonly found in Hyderabad, at about 14642, followed by 3 BHK apartments at 4071, and similarly for the counts of the other residential properties. The above plot depicts that there are 4 categories in building_status: New properties at almost 12000, Under Construction properties at almost 5000, Ready to Move properties at 5800, and Resale properties at 1900. To get the exact count of each category we can use value_counts(). Now let’s move to our main task, which is to find properties with the requirements mentioned above. Specified price range: 25 to 40 L. When we consider the price range 25 to 40 L, the number of rows is reduced from 25715 to 4789, which means there are only 4789 properties in this price range. Required locations: Kukatpally, Gachibowli, Miyapur. When we consider the mentioned locations, we are left with only 85 properties. Required: only apartments. To find the properties that are apartments, we need to set the index as the title so that we can fetch the required data and then reset the index. We have only 69 properties that are apartments. Do you want to know what the average price of a property is? If so, we have a savior: the describe() function, which returns the mean, median, std, min value, max value, 25th, 50th & 75th percentiles of the data, etc. Here count represents the number of rows and mean represents the average value of each column. The 25% row says that 25% of the data has a value less than the stated value; for example, for the price column it says that 25% of properties have a price below 30.12 L. The minimum price of a property is 25 L and the maximum price is 40 L. And 75% of properties have prices below 37 L. What say? Let’s start visualizing! Univariate Analysis This pie plot for the price column shows that 20% of properties have a price of 40 L and 17.50% have a price of 25 L. Each slice in the plot represents a different percentage of prices. Let’s see the distribution of prices. The left plot is a histogram in which the x-axis represents the price and the y-axis represents the count/frequency of that price.
The plot on the right is a distribution plot, where the x-axis represents the price and the y-axis represents the probability density of prices. For example, almost 23 properties have a price in the range of 34 to 35 L, with a probability of almost 0.12 for that range. In the same way, we can plot the other columns as well, as shown below. Plot for rate_persqft and area_insqft Plot for building_status Plot for locations Oh! I guess we have forgotten to include two more requirements, the area and the type of apartment. Well, no problem: let’s include them now and continue with further analysis and visualizations. So after adding these conditions we are left with only 46 properties. Bivariate Analysis Plot for Title vs Price(L) The above figure is a box plot, in which the two lines at either end are called whiskers; the bottom line is the minimum value, the topmost line is the maximum value for each category, i.e., 2 BHK and 3 BHK, and the middle line inside the colored area marks the median, which is also the 50th percentile. The plot shows that for 2 BHKs the minimum price is 25 L and the maximum price is 40 L, and 25% of 2 BHKs have prices below 29 L, whereas for 3 BHKs the minimum starts at around 35 L and goes up to 40 L, with 25% of 3 BHKs priced below 37 L. A joint plot between area_insqft and price(L) From the above plot we can infer the relation between area and price by looking at the histograms on top of each axis of the plot. Surprisingly, there is no increase in price with an increase in the area of the property (this observation applies only to the data we got after applying the conditions specified). Boxplot between location and price(L) Multivariate Analysis We can see from the above plot that Miyapur has a wide range of prices, from 25 L to 40 L, most of them 2 BHKs. Kukatpally, by contrast, has very few apartments, with prices ranging from 35 L to 40 L. In Gachibowli there are only two 2 BHKs, priced at 33 L and 40 L. This is a violin plot, which shows that Miyapur has both types of apartments, ready to move as well as under construction, with most of the ready-to-move apartments priced between 35 L and 36 L. Gachibowli has only under-construction apartments. This is a scatter plot where the x-axis is area_insqft and the y-axis is rate_persqft. We can see that most of the apartments in Miyapur have a rate per sqft between 2000 and 2500, whereas in Kukatpally the rate per sqft is mostly between 3000 and 3750. Correlation: It defines the relationship between two features, whether they are positively or negatively correlated. The result of a correlation is called the correlation coefficient (or “r”). It ranges from -1.0 to +1.0. The closer r is to +1 or -1, the more closely the two variables are related. If r is close to 0, it means there is no relationship between the variables. If r is positive, it means that as one variable gets larger the other gets larger. If r is negative, it means that as one gets larger, the other gets smaller (often called an “inverse” correlation). This is called a correlation matrix. We can also visualize it using seaborn, as sketched below. We can also use pair plots for correlation. For the price range considered, the highest correlation is between rate_persqft and price; they are positively correlated. A negative correlation exists between area_insqft and rate_persqft. There is not much correlation between price and area_insqft.
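As a minimal sketch of the correlation heatmap step above, assuming the column names used throughout this analysis (price, area_insqft, rate_persqft); the tiny sample frame only stands in for the filtered dataset:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Stand-in for the filtered properties data frame; values are illustrative only
df = pd.DataFrame({
    "price": [25, 30, 35, 40],
    "area_insqft": [1050, 1100, 1200, 1150],
    "rate_persqft": [2300, 2700, 2900, 3400],
})

corr = df.corr()  # pairwise correlation coefficients (r) between the columns

# Annotated heatmap: values near +1 or -1 indicate a strong relationship
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.show()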
Map Visualization We can also use maps for visualization, using the folium package. importing necessary libraries To get a map visualization we need the latitude and longitude of the locations. Let us consider the three locations that were specified as requirements (Miyapur, Kukatpally, Gachibowli). We will write a function that returns the latitude and longitude of a location using the geopy module; a sketch of this approach follows at the end of this section. From the above we can see that we were able to extract the latitude and longitude of a location. Now let’s try plotting them. Let’s try plotting all the locations we have in the dataset. We can extract all the latitudes and longitudes of the locations in the same way as we have done above. First, extract the lat/long of every location, then create two new columns in our original dataset, latitude and longitude, store the extracted values there, and then plot as follows. The above map shows us the density of properties in different locations of Hyderabad. In conclusion, there are a total of 14642 residential plots, 8381 apartments, 4071 3 BHK apartments, 832 2 BHK independent houses, 440 3 BHK villas, etc. And given the conditions specified, we finally have 46 properties that suit the requirements mentioned.
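For completeness, here is a rough sketch of the geopy plus folium approach described in the map-visualization section above; the user_agent string, function name, and output file name are my own illustrative choices (Nominatim also rate-limits requests, so real usage over the whole dataset needs throttling):

from geopy.geocoders import Nominatim
import folium

geolocator = Nominatim(user_agent="hyderabad-eda")  # any descriptive agent string

def get_lat_long(location):
    # Geocode "<location>, Hyderabad, India" and return its coordinates
    place = geolocator.geocode(location + ", Hyderabad, India")
    if place is None:
        return None, None
    return place.latitude, place.longitude

# Center the map on Hyderabad and mark the three required locations
m = folium.Map(location=[17.385, 78.4867], zoom_start=11)
for loc in ["Miyapur", "Kukatpally", "Gachibowli"]:
    lat, lon = get_lat_long(loc)
    if lat is not None:
        folium.Marker([lat, lon], popup=loc).add_to(m)
m.save("hyderabad_properties_map.html")

Opening the saved HTML file in a browser gives the interactive map; adding one folium.Marker per row of the dataset reproduces the density view described above.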
https://medium.com/swlh/exploratory-data-analysis-eda-on-residential-properties-in-hyderabad-fd246e1bf71c
['Bhargava Sai Reddy P']
2019-10-23 15:07:24.148000+00:00
['Eda', 'Python', 'Data Analysis', 'Data Science', 'Data Visualization']
Anything that can go wrong, will go wrong
Murphy’s Law: “Anything that can go wrong, will go wrong.” There are more ways things can go wrong than right. Imagine you are placing a deck of cards inside a box. Then you start shaking the box. It’s possible that the deck of cards is reassembled in the same order after that vigorous shaking. But in reality, that never happens. Why? Because the odds are overwhelmingly against it. There is only one possible state where every piece is in order, but there are a nearly infinite number of states where the pieces are in disorder. An orderly rearrangement is incredibly unlikely to happen at random. The difficulties of life do not occur because the planets are misaligned or because some cosmic force is conspiring against you. It is simply entropy at work. It is nobody’s fault that life has problems. It is simply a law of probability. There are many disordered states and few ordered ones. Given the odds against us, what is remarkable is not that life has problems, but that we can solve them. You can fight back against the pull of entropy. You can solve a scattered puzzle. You can pull the weeds out of your garden. You can clean a messy room. You can organize individuals into a cohesive team. When you are on a mission to change things, always keep in mind that changing your mind, or other people’s minds, is harder than it looks. The more you believe you know something, the more you filter and ignore all information to the contrary. The moment you start sharing new ideas, information, your knowledge, etc., people will start thinking and searching Google for what they already believe. We have a tendency to search for and favor information that confirms our beliefs while simultaneously ignoring or devaluing information that contradicts them. This is why I started with Murphy’s law. When you plan something, the probability of things going wrong is always higher. To make any change you have to expend energy to create stability, structure, and simplicity. Successful relationships require care and attention. Successful houses require cleaning and maintenance. Successful teams require communication and collaboration. Without effort, things will decay. It’s not easy to change the way people think, but that doesn’t mean that you should not try. We should, and it is not going to be easy. There will be another set of people, the “know-it-all types”. When you are driving a change, you should not waste your effort trying to convince this group. They will neither accept nor reject the change. These groups are like magnets of -ve energy. The more you fight against them, the more they attract -ve energy. So stay focused on your views; these types will eventually fade after the period of chaos. Given the odds against us, what is remarkable is not that life has problems, but that we can solve them all. Take the case of that box and deck of cards. If you add more cards to that box or increase the size of that box, it will result in more chaos. Chaos is an aftereffect of entropy, and entropy accelerates chaos. The way to control chaos is to set examples. When you do, people will start looking at that example, the effect of entropy will slowly come down, and the chaos can be managed. Look at history: a leader emerges from every chaotic situation and people start following that leader. Eventually, the chaos ends or slows down. In chaos: Try to always find something good to point out and something positive to contribute. Lead by example and set a standard in light of unreasonable expectations.
In chaos, leaders and people will be quick to point out mistakes and shortcomings. Counteract it by acknowledging your own work and bringing attention to your (and your teammates’) contributions. Learn to keep speaking up and speaking out, and find solutions to every “but,” one at a time. When you know where your boundaries lie, you know when you have to speak up — or even walk. Rise above the dysfunction of existing leadership and be an example of the leadership that can work. Above all, don’t allow yourself to believe that it’s an acceptable way to live or lead. “Let things go wrong, but you should keep doing the right thing. It will slow down entropy.” More than anything, this defines a character, a solid, powerful character in you. https://trooming.com/
https://bivekrenuji.medium.com/anything-that-can-go-wrong-will-go-wrong-bfe1cfa44eba
['Bivek Renuji']
2020-07-11 10:43:44.586000+00:00
['Science', 'Management', 'Product Management']
A Designer’s Guide to Working with Product Managers
Difference in thought process To be honest, at first I had a hard time syncing with the thought process of a PM. As designers, we tend to think about the experience, the flow, how consistent the components are, how to make things work better in different resolutions, etc. Aspects that we do not pay much attention to: current business needs, engineering efforts being worked on, product strategy for the next few quarters to come, what’s on the roadmap, resource allocation, etc. Obviously, this does not come to us instinctively, which is why collaboration with a PM is a prime necessity in the product world — it helps paint a broader picture. After working closely with PMs for a significant amount of time, I decided to reflect and jot down methodologies that help me design with more intent. Some of the steps I take, in no particular order — 1. Prioritizing tasks 2. Maintaining a UX debt sheet 3. Creating a UX test plan 4. Tracking actions on the product 1. Prioritizing tasks I have noticed that bringing up topics about inconsistencies or pattern enhancement oftentimes irked PMs. I realized this happened not because they don’t want to fix them or because they are unimportant, but because it was just not the right time. There is always a right time to bring up your biggest pet peeves. And how do we know when it is the right time? Well, answering this one was a little harder than I thought. What helped me was understanding the product status from a broader perspective. I started making a note of current business needs and engineering efforts. Business needs — what needs to be done to deliver value at the earliest. Because if you don’t sell, you don’t have a successful business, and in effect, there is no real scope to design for something that might not exist. Engineering needs — what needs to be done to stabilize the system so that the existing functionalities don’t break. Things that need to be optimized to deliver a better experience. Understanding engineering constraints helps a lot with making design decisions. Something that works better interaction-wise might be a huge load on the system, and a trade-off has to be made to keep it stable. What’s the point of a sexy interaction when the functionality is broken? With these two major considerations, in addition to the huge list of design changes that I wanted to work on, I started prioritizing. Aspects of the product that required a major change usually went down in priority, and items that delivered maximum value went up the list. It helped me make peace with what I was working on. When the time was right (believe me, this time will arrive), I plugged in the redesign ideas I had thought about. This makes the interaction with PMs productive, with much less friction. 2. UX Debt Sheet Almost all designers would have heard this from their PMs — “Let’s get this out for MVP and we can get back to it later.” This had me hugely frustrated at one point, because I realized the changes being made to the product were very minimal. It was not the ideal experience the designer in me wanted. I feared that if we took an always-MVP approach, we would eventually be left with more work over time just trying to fix all the workarounds. More like an MVP Paradox. Coincidentally, I noticed that the engineers kept an account of their technical backlog and called it Tech Debts. This was an inventory of technical tasks that were pending. They dedicated a sprint to finishing these tasks specifically.
I thought this was a great idea and wondered if there could be an equivalent UX debt sheet for the project I work on as well. I had a conversation with my PM and decided to list out all the UX backlog. I chalked out a way of prioritizing the tasks to be finished by mapping them on a graph with Impact and Ease of Implementation as the parameters.
https://uxdesign.cc/a-designers-guide-to-working-with-product-managers-bcb164a473df
[]
2017-02-24 01:26:56.831000+00:00
['Design', 'Product Management', 'UX', 'User Experience', 'Careers']
Coming Out as an LGBT Muslim
Khakan and his partner at Warwickshire Pride 2017. We share similarities and celebrate our differences. According to a Gay Quranic Facilitator, within the Quran, the holy book of Islam, there are 114 chapters, 6,236 verses, 77,943 words and 323,620 letters. I asked him how many mention homosexuality, homosexual acts or same-sex love. And the answer: 0. Zilch. Zero. Nada. NONE! I have been an activist for most of my life. As soon as I realised I was different to other boys, I tried to be active in not wanting to be found out. I hid myself from those who questioned or those who scorned me. I had to, because I came from a Muslim family which tried to create a life for me which just wasn’t to be. I wanted to be a good Muslim boy and follow my parents’ expectations and aspirations of me. That meant following the 5 pillars of Islam — Shahadah: sincerely reciting the Muslim profession of faith Salat: performing ritual prayers in the proper way five times each day Zakat: paying alms to benefit the poor and the needy Sawm: fasting during the month of Ramadan Hajj: pilgrimage to Mecca And it meant having a successful career, whatever that may be, but ideally one which provides financial stability and fantastic opportunities to make it lucrative enough to build a large home, marry a person of the opposite sex and have children. These are familiar cultural traits, known not only within the Muslim household but in many South Asian families regardless of faith. I come from a large family, and being the youngest, I had to carry the pressure to conform more than my siblings. After all, I was brought up to respect my elders, and seeing that they all had their turns to rebel against my parents, I was the one who was more like the “runt of the family”. I didn’t fit in with my alpha male brothers, and neither did I quite fit in with the females within the household, even though I loved spending time with them in the gender-expected roles created for them by a patriarchal society. I used to pray and attend the mosque, but being of an inquisitive nature, when I asked what the words I was reading meant, I was told to shut up and sit down. I wasn’t provided with any answers. We appeared to be learning by rote. I observed Ramadan and knew that we were supposed to simulate the experience of those in need, those really experiencing starvation across the world, and we opened the fast with a huge feast at the end of the day in gratitude. But I also knew my loyalty to the Islamic faith was being tested, and at times I felt like a fraud, because nobody would inform me as to what it all meant and I was just going through the motions. Aged 23 It was only when I moved to London, aged 19, that I began to question myself and my faith. I was beginning to develop a better understanding of who I was and what I was about to become. I was isolated, depressed and socially awkward. I learnt to place barriers between myself and others. I realise now it was because I was trying to come to terms with my sexual orientation and I couldn’t face up to being a gay young man.
My coming out experience was at a time when the HIV/AIDS crisis was at a peak, Section 28, which banned the promotion of “any form of homosexuality”, was in full force, and coming from a relatively religious background (my father was a founder member of Birmingham Central Mosque) not only created religious guilt in my mind, but made me feel that I couldn’t be as expressive as I would have liked, and that to be sexually active would have negative consequences — to face the threat of HIV, be ostracized by the family and community if they ever found out I was gay, or face eternal hell, damnation and fire, as taught to me by my elders and religious leaders. It all instilled a sense of fear, and my mental health suffered. So, the inner struggles, or internalised homophobia, I felt led me to contemplate suicide. It was only the thought of my religion teaching me suicide was a sin, and of how I would leave my mother, that made me think twice. Not only that, but it was the sound of a police siren blaring as the car weaved its way through the park that brought me back from the brink of suicidal ideation. The cultural complexities of family, religion and faith did not allow me to navigate a linear life or lifestyle. As a student, my sister had included the Quran and the Bible in my reading materials. For me to learn about myself and homosexuality, I had to read what the holy scriptures said. I read the books, page to page, back to back, and cross-referenced. I was looking for answers and couldn’t find anything. I read the parables, the context, subtext, nuances, the themes, and appreciated the guidance it offered. It provided me with a moral compass but said nothing about being gay. My life “choices” were heaving with anxiety, implications and consequences. However, I realised it was not a sin to be gay. If it was, I thought to myself, let Allah deal with me when my life is over. I read the Story of Lot time and time again. It’s mentioned in both the holy scriptures. It wasn’t making sense. Why would God or Allah destroy a whole town or city, including women and children, if the menfolk were homosexual? If we were to look at the issues and themes, the mortal men desired Angels in the guise of men, and there were other issues at play too — control, rape, promiscuity, incest, consent, tests of faith and loyalty to Allah or God, idolatry and worship of deities, and intoxication, amongst other themes — which had no bearing on being gay itself. The parables drew me in, in the same way as Shakespeare, Lewis Carroll and other phantasmagorical authors. I learnt that Allah wanted me, as a human, to be kind, compassionate, generous and loving toward my fellow beings, animals and nature. Yes, there is violence within the scripture, but history is never free from violence. I placed the books in the context they were set in and I gleaned what was necessary. I learnt I had to be myself in order to survive, and that knowledge is key. I had to know and accept myself before I started to share with others. It took me a long while, and I didn’t inform my mother until after I had met my partner through a mutual acquaintance. Her reaction was more positive than I had anticipated, and in her acceptance of me, she proved to be truer to the faith than any Muslim practitioner, leader or scholar. She showed me unconditional love, compassion and acceptance. I told my father about a year later. He was more intolerant, and his rage was full of homophobic rhetoric.
He was more concerned about what his peers, the extended family and community would think, especially those from the mosque. I chose to step away from the family home, and about a year later, my father phoned and asked me to return home. I was expecting more negative reactions, but instead he embraced me. He offered reassurances in his way and added, "The Quran has a lot to teach people, some good and some bad. We will only know what becomes of us all when we reach the Day of Judgement." My father continued, "We have all sinned," and said that it was for those who have not sinned to "cast the first stone". My father was showing signs of being a man with foresight. On meeting my partner, he suggested we adopt and foster a child before it became legal to do so. He asked if we would be getting married before it became recognised legislation in the UK. He showed me levels of acceptance I never knew he could, and he strove to protect me from harm. As an elderly Muslim man and a scholar, he shared his pride in me for what I had achieved and encouraged me to be the best at what I do. He urged me to use my voice and challenge social injustices. He impressed upon me the need to fight for what I believe in. To this end, in 2014, I founded Birmingham South Asians LGBT — Finding A Voice. Birmingham's first non-funded, independent, multi-faith South Asian social/support group for South Asian men and women aged 18+. We share our stories and personal experiences. I hear stories from young British Asians, predominantly Muslims, who daren't come out to their families, students from overseas who undergo conversion therapies, and Muslim asylum seekers who are fleeing their countries due to their sexual orientation and may face execution or being ostracised from their families or communities. I hear stories of young LGBT Muslims experiencing Islamophobia when they try to access LGBTQI+ venues and bars. I share my own episodes of dual discrimination as the only brown face in a white space. I write about homophobia within the South Asian community and challenge the attitudes and mind-sets of a generation of people who can really make a difference. In the past 12 months, I have challenged several Imams and asked whether, if they can offer advice, guidance and information to an individual who is struggling with coming to terms with his, her or their own sexual orientation or questioning it, they can stand in front of a congregation and say it's okay to be LGBT? The answer has been a resounding NO because the South Asian community is not ready for it. I asked when the community will be ready for it and I have been met with a wall of silence. I have challenged media sources who tend to reinforce negative stereotypes of being LGBT+, and when I say one can be LGBT+ and reconcile it with faith, it is a story they do not want to hear because it provides a positive role model story and goes against the narrative they have chosen to portray repeatedly. They collude in the censorship of our voices, especially those who do not fit in with the assumed stereotype of being a Muslim or gay. I'm not allowed to be one AND the other. I have been given silent choices, a social contract which not only tries to disable me and alienate me, but one which says I shouldn't exist. But I say, we, being LGBT+ and with faith, are a force to reckon with, and that's when I choose to say I am with my white British partner of nearly 26 years, aged 68, who is C of E and has the bible in his bedside drawer, and I have the Quran on my side.
And together, we do exist, we are recognised and accepted within our local community and we will find our voices! @khakanqureshi @brumasianslgbt www.facebook.com/findingavoice
https://medium.com/reaching-out/coming-out-as-an-lgbt-muslim-b2b5704425bd
['Khakan Qureshi']
2018-01-14 21:48:35.379000+00:00
['Storytelling', 'Islam', 'Coming Out', 'Muslim', 'LGBTQ']
There’s Help for Veterans with Mental Health and Substance Use Disorders
Dan Smee was walking on a dirt road near his home in California, a backpack slung over his shoulders, when suddenly his eyes were darting, his heart was pounding, and he was back in Iraq, back on patrol. He couldn't sleep. He'd find himself in the smoking wreckage of an armored vehicle, groping around in the dark for his helmet, reaching to make sure his legs were still there. When the flashbacks got really bad — when he felt himself falling back into a flooded Iraqi canal, the water closing over his head — he chased them away with beer, whiskey, and sleeping pills. It was a trap. As a recent RAND study showed, potentially tens of thousands of post-9/11 veterans like him have tried to numb the symptoms of posttraumatic stress disorder (PTSD) or depression with drugs or alcohol. That can keep them from seeking the long-term care they need, and become just one more barrier to climb if they do. When Smee worked up the strength to ask for help, a social worker told him to go get sober first. "I was so out of my mind at that point that I didn't even know which way to go," he says now. "The more symptoms I had, the more I drank, the more I used Ambien. Everything kind of crashed in on me."
The Prevalence of Mental Health and Substance Use Disorders
More than 10 percent of veterans who served in Iraq or Afghanistan developed PTSD from what they saw and experienced. Some estimates put the number closer to 30 percent. As many as 15 percent show signs of depression. And one large survey of injured veterans found that half of those who had PTSD or depression also screened positive for hazardous alcohol use or a substance use disorder. That survey was conducted by the Wounded Warrior Project, a nonprofit organization whose mission is to help veterans injured during the wars in Iraq and Afghanistan. In response to those findings, and to stories of people like Dan Smee struggling to get help, it asked RAND to investigate the state of care for veterans with co-occurring mental health and substance use disorders. The lead researcher on the project, Eric Pedersen, spent the early years of his career working with those very veterans in clinics run by the Department of Veterans Affairs. Some of his patients thought they needed alcohol or cannabis to sleep. Others couldn't leave the house without calming their anxiety with illicit drugs. "Telling somebody to go get sober before you deal with their PTSD or depression, that's just not going to work," said Pedersen, now a behavioral scientist at RAND. "They're just never going to come back in for treatment. The drinking is the one coping strategy they have, and now you're going to remove it before you treat their underlying mental health disorder?" That reality is not always reflected in the treatment options available to veterans and others struggling with co-occurring disorders. Pedersen's team reviewed thousands of inpatient treatment centers and outpatient clinics, and found that more than a quarter of them have no way to treat co-occurring mental health and substance use disorders at the same time. And even among those that did, the researchers found no agreement on what good treatment looks like, no consistency — no standard plan of action to get someone like Dan Smee back on his feet.
Improving the Delivery of Promising and Effective Treatments
Smee had served as a medic in the Army after high school, and reenlisted with the National Guard after 9/11. He thought he would serve stateside, maybe guard an airport. Instead, he soon found himself patrolling the dusty fields of eastern Iraq. He remembers the way the dirt fell like rain after the explosion that ripped a hole in the armored vehicle he was in, the moment of superhuman strength that allowed him to escape that flooded canal in full battle gear.
Dan Smee served as a medic in the Army after high school, then reenlisted with the National Guard after 9/11. Photo courtesy of Dan Smee
He was using Ambien to sleep by the time he came home. Within a few months, he had added six-packs of beer and pints of Jack Daniels. He stayed inside his apartment, the lights turned low, unless he needed to venture out to resupply. "The stuff that happened really started to wear on me," he says now. "The friends lost, the near misses, being blown up and shot at and all the crazy stuff that comes with war. Things kind of unraveled for me." There are good, effective treatment protocols to help people break that cycle of mental health and substance use disorders, RAND's study found. The VA, in particular, has developed gold-standard treatments for veterans with co-occurring disorders, with the clinical evidence to prove they work. And yet, in interviews, the researchers found few providers who closely follow them. Often, that was because of financial pressure. A treatment protocol might require 15 therapy sessions, for example; a patient might only have the insurance to cover five. Or a program designed for one-on-one sessions with a psychiatrist might be repurposed for group therapy. Did those modified programs still help? Often, the treatment centers themselves couldn't say for sure, because they didn't always follow up to make sure discharged patients hadn't relapsed. The researchers estimated that the average Wounded Warrior Project veteran lives within 15 minutes of a facility with some kind of program to treat co-occurring disorders. But there was no way to tell what kind of care it provides, what protocols it follows, or whether it could meet the specific needs of veterans. That could make a big difference. At one facility that did serve veterans, for example, staffers knew not to schedule appointments around rush hour. They understood that, for a veteran with PTSD, getting stuck in a traffic jam could bring back memories of stopped convoys and roadside bombs. "If you just want to go someplace, then sure, there's someplace near you that says it can help with co-occurring disorders," Pedersen said. "But in terms of quality? We're not sure. We have these treatments that are out there, that we know work for veterans, but we need to do a better job to deliver them." The VA and other treatment facilities should ensure they're offering the full scope of evidence-based care that veterans with mental health and substance use disorders need.
Smee, a social worker at the VA, now works with veterans facing the same co-occurring disorders that he did. Photo courtesy of Dan Smee
They should make a point of screening every incoming patient for signs of such co-occurring disorders.
And they should provide more information about their approach to those disorders, what treatment protocols they follow, and what veterans can expect when they get there. “We absolutely know that unhealthy substance use is prevalent in our warrior population, and that it’s a significant barrier to care,” said Michael Richardson, the vice president of independence services and mental health at Wounded Warrior Project. He added, “The same treatments don’t work for everyone, and integrated treatment approaches are one of the best ways we can address these challenges.” Dan Smee hit bottom in a police holding cell. He says he felt the presence of the devil there, and heard the voices of lost friends: “Hey, Smee, this ain’t you, man.” He describes it as a moment of total clarity. He went back to the VA, and this time got himself checked into a residential treatment facility for veterans with co-occurring disorders. He never looked back. Today, he’s a social worker at the VA, a guide for other veterans working through the same problems he did. “I struggled,” he says. “If I can help another veteran not struggle like I did, then it’s all worthwhile. That’s my whole new mission and purpose in life now.” He’s been sober for 14 years. — Doug Irving
https://medium.com/rand-corporation/theres-help-for-veterans-with-mental-health-and-substance-use-disorders-5736aca30937
['Rand Corporation']
2020-11-11 19:47:04.204000+00:00
['Veterans', 'United States', 'Mental Health', 'Substance Use Disorder', 'PTSD']
Key Machine Learning Definitions
Glossary of Terms | Sorted (A-Z)
AI Agent: An AI agent or intelligent agent is an autonomous program used to carry out AI-related tasks.
Algorithm: An algorithm is a process that follows a set of rules to solve a problem, and is mainly used by computers.
AlphaGo: AlphaGo is a computer program that plays the board game Go. It is recognized as the first Computer Go program to beat a professional human Go player [5].
Artificial Intelligence (AI): Artificial intelligence (AI), as defined by Professor Andrew Moore, is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence [1].
Autonomous: Being autonomous refers to a device or tool capable of operating without direct human control.
Backpropagation: Backpropagation is the primary algorithm for training neural networks by gradient descent: a forward pass computes the value of each node from the input, and a backward pass through the graph then computes the partial derivative of the error with respect to each parameter.
Black box: A black box is a complex neural network whose algorithms, contents, and decision-making processes are unknown to the end-user.
Bot: A bot is an autonomous program that can interact with computer systems, programs, or users, and is mostly supervised directly or indirectly by a human.
Clustering: Clustering is an unsupervised machine learning task in which data points are grouped so that points in the same group are more similar to one another than to those in other groups.
Computational learning theory: Computational learning theory is the sub-field of AI, which is devoted to studying the design and analysis of machine learning algorithms [4].
Computer program: A computer program is a collection of instructions that perform specific tasks when launched by a computer.
Computer science (CS): Computer Science is the scientific study of the principles and use of computers.
Computer vision: Computer vision is an interdisciplinary scientific sub-field of AI and computer science that aims at giving computers a visual understanding of their input.
Convolutional neural network (CNN): A convolutional neural network (CNN) is a class of deep neural network (DNN) used for image recognition, processing, and analysis — specifically designed to process pixel data.
Data: Data is a digital collection of information.
Data Cleansing: Data cleansing or data cleaning refers to the quality assurance process of datasets. The datasets are scrutinized to find and correct erroneous data records in the database(s).
Data mining: Data mining is the process of examination and discovery of patterns in data to generate new information.
Datasets: A dataset is a collection of related sets of data that is composed of separate elements but can be manipulated as a unit by a computer.
Data science: Data science is an interdisciplinary scientific field, which focuses on the processes and systems to extract knowledge or insights from data in its various forms.
Deep learning: Deep learning is a subset of machine learning in which layered neural networks, combined with high computing power and large datasets, can create powerful machine learning models.
Deep neural network (DNN): A deep neural network (DNN) is a neural network with multiple layers between its input and output, modeled after the human brain.
Dimensional reduction: Dimensional reduction refers to the process of reducing the number of random variables by obtaining a set of principal variables via feature selection and/or feature extraction.
Explainable AI: An explainable AI is an AI system capable of explaining its decision-making process.
Generative adversarial networks (GANs): A generative adversarial network (GAN) is a pair of neural networks contesting with each other in a zero-sum game framework.
Heuristics: Heuristics are rules drawn from experience that are used to solve a problem faster than traditional problem-solving methods in AI. While faster, a heuristic approach typically is less optimal than the classic methods it replaces.
Input: Input is what is put in, taken in, or operated via a process or computer system.
Intelligence: Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals, and some machines.
Linear Algebra: Linear algebra is a branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and sub-spaces, and is also concerned with properties common to all vector spaces.
Long short-term memory networks (LSTMs): Long short-term memory networks (LSTMs) are unique kinds of recurrent neural networks capable of learning long-term dependencies [2].
Machine learning (ML): As defined by Professor Tom Mitchell, machine learning refers to a scientific branch of AI, which focuses on the study of computer algorithms that allow computer programs to automatically improve through experience [3].
Machine learning model: A machine learning model is a question-answering system fed with cleansed data that handles ML-related tasks; it can be thought of as a complex representation of a process.
Machine perception: Machine perception is the capability of a computer system to interpret data like the way humans use their senses to interact and relate to the world around them.
Natural Language Processing (NLP): Natural language processing (NLP) is a scientific branch of AI, which focuses on helping computers understand, interpret, and manipulate human language.
Neural network: A neural network is a computer system modeled after the human brain.
Recurrent neural network (RNN): A recurrent neural network (RNN) is a powerful and robust type of neural network whose connections form cycles, giving it an internal memory well suited to sequential data.
Turing test: The famous Turing test refers to a test that a machine passes only if a human being is unable to distinguish it from a human [6].
DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University, nor other companies (directly or indirectly) associated with the author(s). These writings do not intend to be final products, yet rather a reflection of current thinking, along with being a catalyst for discussion and improvement.
https://medium.com/towards-artificial-intelligence/key-machine-learning-ml-definitions-43e837ec6add
['Roberto Iriondo']
2020-08-28 03:20:39.397000+00:00
['Machine Learning', 'Artificial Intelligence', 'Machine Learning Modeling', 'Machine Learning Models', 'Machine Learning Terms']
A 3000-year old Japanese Ritual that Will Hack Your Productivity
Tea is one of the most popular drinks the world over. As Indians, we associate our steaming mug of masala chai with rainy days, a family reunion, or a good book. The Japanese tea ceremony, however, is a ritual carrying immense weight, touching dimensions that are spiritual, moral, cultural and social. The Japanese concept of ichi-go ichi-e (literally meaning one time, one meeting) indicates that a tea gathering is a once in a lifetime event. The uniqueness of the ceremony owes to the fact that it is a 'ritual' — the gestures involved are a result of practice and purpose. Rituals are an art; their performance represents core values upheld by Japanese society. The significance of the tea ceremony as a 'ritual' can have various connotations. Firstly, it seeks to eliminate hierarchy between tea practitioners and guests. It merges differences between the 'self' and 'other', canceling distance in place of unity. Secondly, ritual behavior emphasizes learning through bodily actions, as demonstrated by gestures learned through the ceremony. The actions are purposeful and 'ceremonial' in nature — distinguishing them from everyday actions. It can only be entrusted to professionals or semi-professionals. The tea ceremony (called Cha-no-yu, literally meaning 'hot water for tea' in Japanese) is a 37-step process performed by the host. Prior to this, guests are expected to arrive earlier than usual, enter the waiting room and put on their Tabi (traditional Japanese socks). They rinse their hands and mouths at the stone basin and are seated on a tatami (woven straw) mat. Respect and punctuality are two core values of Japanese culture demonstrated by participants. Additionally, the host demonstrates structure through each step. This integrated value of order is what distinguishes ceremonial behavior from ordinary behavior. Order is expressed through wa, kei, sei, and jaku (implying harmony, respect, purity, and tranquility respectively). Each of these is expressed through elements of the ceremony. Other elements prepared beforehand include utensils, the Mizusashi (cold water vessel), the Roji (garden), the Setsuin (privy), trees and shrubs, the water basin, stone lantern, and flower arrangement, among others. Apart from their functional and aesthetic purposes, each of these is representative of a cultural or spiritual philosophy underlying the ceremony. The earliest known consumption of green tea was from China in the 4th century — brought to Japan during China's Tang dynasty (618–907). Initially, tea drinking was a pastime limited to Japan's elite and warrior classes. Japan eventually began to grow the crop independently when relations between the two countries deteriorated, during the Nara period (710–794). It was consumed for medicinal purposes by priests and noblemen. Tea was uncommon and valuable, resulting in the formulation of certain practices around tea consumption and preparation. Myōan Eisai's and Hui Tsung's methods evolved into the modern tea ceremony. Eisai crushed tea leaves and added hot water, while Tsung used a bamboo whisk to blend the two. Although matcha powder is a recent substitute for tea leaves, other factors concerning the tea ceremony — particularly its philosophy — remain untouched. Tea began to gain popularity in the 1200s through the health benefits it offered. Eisai suggested that green tea could cure diseases including beriberi, boils, and paralysis. Plantations were grown beyond Japan's Uji district; tea was embraced by the Samurai class and saw rapidly growing demand.
In the 1300s, a new class of nobles (the Gekokujou) often held parties where guests identified authentic tea in their games. Guests were initially given 10 cups of tea — but this could increase to 30 or even 100 cups per guest! Cups were not exclusive to guests — they were often shared and passed around. Among Japan's warrior class (Samurai), the lord took the first sip and passed the cup to his family members. This was believed to strengthen the ties between family members.
Photo by 五玄土 ORIENTO on Unsplash
Food has historically been a bonding force throughout cultures; the tea ceremony is one such example. Respect is a core characteristic of the ceremony, integrated through the Samurai class. It is represented by the character 'kei' derived from traditional Chinese. The guest assumes a humble position, irrespective of his or her social status. They crawl through a tiny entrance (called Nigiriguchi in Japanese), kneel down and bow to a hanging scroll. The politeness of the host must not be confused with coldness or aloofness. Unlike other cultures, the Japanese are seldom animated or openly affectionate towards guests. However, the tea ceremony brings out a dominant trait of host-guest interactions in Japan — balancing personal space with warmth and hospitality. Order and cleanliness are core values of Japanese culture and are represented in the tea ceremony too. Essential items (such as utensils) are thoroughly cleaned by the host to enhance the experience. During the ceremony, a pure heart is what conducts rituals, not memory. The value of purity (represented by the character 'sei' in Japanese) is significant on both a literal and metaphorical level. Participants are cleansed on both a spiritual and metaphorical level. The aesthetics involved are crucial to the ceremony. During the Muromachi period, a Shoin architectural style was adopted to replace the simplified warrior-class (samurai) style. The Shoin style was gradually adopted in the ceremony as well — implemented in alcoves, desks, and shelves of the auspicious room. The shoin style is a characteristic of the Ashikaga shoguns (1336–1573). The presence of paintings and Chinese utensils symbolizes social hierarchy, which was prevalent during the shogunate. Hierarchical order, implying a well-organized society, was an early symbol of the tea ceremony. The shoin style contrasts with Wabi, a concept stating that beauty is carried by nature and simplicity. Tearooms influenced by this philosophy are made of bamboo, reed, and clay; utensils used are often cracked or chipped. Bamboo or tatami mats are used to cover floors. While the Shoin style confirms social order, Wabi is influenced by Buddhist ideals of frugality and minimalism. The beauty found in crude or unpretentious utensils is one with which participants across social classes can identify. The expression 'wa' of Wabi represents harmony, reflected by the colors and natural flora in the setting. Wabi strives to establish peace and oneness with nature. Once the initial three elements of the tea ceremony — wa, kei, and sei — are embraced, the participant can finally embrace tranquility (given the term 'jaku' in Japanese). Oneness with nature is a moral value highlighted by cultures across the world, finding re-establishment in the tea ceremony as well. The limited use of space is a distinguishing feature of a ritual. Influenced by Zen principles, the tea ceremony seeks to find freedom in limitations, meditation, and self-discipline.
A small and cramped room may seem uncomfortable to a visitor, but represents infinite space and freedom to a Zen practitioner. The ceremony invites the participant to reflect inwards and align with the environment, providing harmony on a social level. In An Anthropological Perspective on the Japanese Tea Ceremony, Herbert Plutschow describes tearooms as discursive spaces that represent an ideal social structure, allowing man to take distance from and measure his society. Today, the tea ceremony crosses social barriers in Japan and remains just as significant as it was 800 years ago. Moreover, the ceremony has specialized to the extent that it has three different 'schools' associated with it. These were established on the basis of the bloodlines of the grandmasters of tea (called Iemoto in Japanese). The experts take pride in their specialized steps of tea making, resulting in a creamy beverage without a hint of bitterness. Uji and Kyoto are well-known among tourists for their tea houses, plantations and the ceremony itself. The flow and etiquette of the ceremony provide its participants with a subtle blend of discipline and tranquility that captures the beauty of the ceremony.
https://medium.com/thecontextmag/a-3000-year-old-japanese-ritual-that-will-hack-your-productivity-cba562a26947
['Context Staff']
2018-12-07 15:22:25.157000+00:00
['Efficiency', 'Productivity', 'Life Hacking', 'Spirituality', 'Culture']
Designing with Machine Learning
Janice just started her job at a brand new start-up three weeks ago. She’s the very first customer success specialist and wants to make a good impression on her new bosses. They haven’t done any research to understand what customers are saying about their product, so the founder suggests using SurveyMonkey to send a survey to their 5,000+ clients …tomorrow. “Sure!” says Janice, through the teeth of her contrived smile. The problem is Janice has never created a survey before, let alone sent one to 5,000+ people. Janice thinks to herself, “Holy cow, how am I going to do this?” Fear, anxiety, lack of confidence — all common emotions new customers of SurveyMonkey experience when writing their first survey. You can’t un-send a survey, so customers need a way to confidently create a professional survey that will return valuable data that can inform better customer understanding or to make a decision. My product manager and I tried to answer three questions: How might we leverage existing SurveyMonkey expertise and data? How might we increase confidence and reduce anxiety? How might we automatically create a professional survey for users? SurveyMonkey has over 60 million registered users. Of those, 16 million are active users. We receive over 20 million responses per day. We definitely had the data we needed to begin designing features driven by Machine Learning. What is the difference between AI and Machine Learning? Artificial Intelligence, or AI, has been at the tip of our tech-tongues for a while now. There is still a lot of confusion around the difference between AI and Machine Learning. In some instances, you’ve probably heard them used interchangeably. AI is the broader concept of machines being able to carry out tasks in a way that we would consider “smart.” Machine Learning is an application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves (Source). AI = The Concept. ML = The Application of AI. For SurveyMonkey’s purposes, we set out to use ML to provide predictions and recommendations. The ‘Genius’ feature that started it all In early 2017, we created a feature called ‘SurveyMonkey Genius’ which scored a customer’s survey and provided an estimate for the survey completion rate and time to complete. At the time, completion rate and time to complete were valuable insights we could confidently provide customers. When we released the feature, it was a hit because it showed a massive 10% increase in survey deploy rate (deploy = a survey sent that returns at least 5 responses). We helped make users feel more confident, but we knew we could do even more. Our next step was to build off of this win and figure out a way to provide meaningful value throughout the whole survey creation process. Next up, question type prediction. Over 30% of customers already have their questions typed up even before they start using SurveyMonkey for the first time. They have an idea of what they want to ask. But SurveyMonkey has about 20 different question types (Multiple Choice, Comment Box, Matrix, Matrix of Dropdown Menus, etc.) and a lot of the time, customers don’t know which to choose. Sometimes, they might choose a question type that’s not quite right for their use case, which can affect the quality of the answers they get back. We set out to design a feature that could recommend the right question type for you. SurveyMonkey is able to predict the right question type with 65% accuracy. 
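The post doesn't include the model code, but to make the idea concrete, here is a minimal sketch of how a question-type classifier along these lines could be prototyped in Python. The TF-IDF plus logistic regression pipeline, the example strings, and the labels are my own illustration and assumptions, not SurveyMonkey's actual system:

# Hypothetical sketch of a question-type classifier; not SurveyMonkey's
# production model. Assumes a labeled history of (question text, chosen type).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "How satisfied are you with our service?",
    "Please describe your experience in your own words.",
    "Which of the following products do you own?",
]
question_types = ["matrix_rating", "comment_box", "multiple_choice"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(questions, question_types)

# Suggest a question type for new text a customer is typing.
print(model.predict(["How likely are you to recommend us to a friend?"]))

Trained on millions of real (question text, question type) pairs instead of three toy examples, a pipeline like this is one plausible route to the kind of 65%-accuracy prediction described above.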
Our team for this feature consisted of a product manager, a data scientist for a couple of hours a week, and a front-end engineer. With over 16 million active users, we knew we had mountains of data about the question types they selected and the actual text customers typed into the question-text input field. The data scientist analyzed the content of the input text to develop a data model that could predict the right question type for customers. A data model is what data scientists create by analyzing data to make a prediction or recommendation. When we launched this in Q1 2018, we were able to predict the right question type with 65% accuracy. We reduced the time it took for customers to write a survey and increased their confidence because we provided a prediction that SurveyMonkey can stand behind.
One step further: populating answer choices
It takes time for customers to type in a 5-point answer scale like 'Very satisfied, Satisfied, Neither satisfied nor dissatisfied, Dissatisfied, and Very dissatisfied.' Go ahead, time yourself. I clocked in at about 47 seconds, including correcting typos. Multiply that time by about 9 questions and you're looking at 7 minutes. Since we'd already predicted the right question type, how might we automatically fill out answer options? This was a no-brainer, but we had limited data. We had never tracked or tried to correlate question text, question type, and answer text. We needed to build a feature that would create a Scale Effect, collecting the data required to inform a data model that could make confident answer recommendations. This feature would also need to deliver customer value.
How we reached the scale of data we needed
To create this Scale Effect, the data scientist gathered the top answer-scale options, about 15 in all. In early Q2 2018, we launched a feature called 'Answer Genius' which allowed customers to select an answer-scale from a simple drop-down list. Every time a customer selected an answer-scale from the drop-down, it was a piece of data for us. We were building our mountain of data — and it took until the end of Q2. A Scale Effect means you need a large amount of data for data models to be considered confident in their predictions and recommendations (Source). Once Q3 rolled around, the data scientist was able to design a data model that suggested a single answer-scale option correlated with the customer's input question-text. Shortly after, we launched an ML-powered answer-prediction feature.
What if we could automatically build a survey for our customers?
SurveyMonkey has over 100+ expert-written templates for customers to use. New customers just don't know which one is right for them. So at the end of 2018, we began working on an MVP that guides customers to select the best possible survey. We called it 'Build it for me' mode. This would allow us to collect the data we needed. However, it was risky because we were significantly disrupting the flow. To build this feature, we had a content strategist and a survey researcher on our team. 'Build it for me' mode asks customers questions about their audience, survey goal, and use case. Based on those answers, we create the best possible template for them. We also built a 'Genius Assistant' panel that makes recommendations and guides the customers through the rest of the survey-creation process. However, the 'Genius Assistant' was not built into the survey-sending process.
https://medium.com/curiosity-by-design/designing-with-machine-learning-44a42f413c62
['Jonathan Remulla']
2020-01-27 19:52:39.309000+00:00
['Machine Learning', 'Technology', 'User Experience', 'Artificial Intelligence', 'Product Design']
Who Unfollowed You on Twitter?
Who Unfollowed You on Twitter? How to use the Twitter API to track your follows and unfollows Photo by OpenClipart-Vectors on Pixabay. Are you posting regularly on Twitter? Do you get the occasional follower here and there? Are they unfollowing a few days later because you didn’t follow them back? Enough with that! Let’s find out who is unfollowing so we can burn voodoo dolls with their names… oh, sorry, I’m getting ahead of myself. I mean, so we can learn the Twitter API! This is what a lot of my experience on Twitter has been like since I started posting regularly at the beginning of this year. Somebody would follow me and a few days later unfollow — like a slow game of one step forward, one step back. So a few weeks ago, I decided to have a look at the Twitter API to get more familiar with it and learn its features. Tracking the number of followers over time seemed like a good and simple starting point. As a side benefit, this also provides the names of the people who are unfollowing.
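The original implementation is in PHP (see the tags below), but as a rough illustration of the approach, here is a sketch in Python against the Twitter API v2 followers endpoint. Treat the endpoint parameters and response shape as assumptions to verify against the current API docs; the bearer token and user ID are placeholders:

# Sketch: diff the current follower list against a saved snapshot to find
# unfollowers. Endpoint details are assumptions based on the Twitter API v2
# docs; BEARER_TOKEN and USER_ID are placeholders.
import json
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"
USER_ID = "1234567890"

def fetch_follower_ids():
    url = f"https://api.twitter.com/2/users/{USER_ID}/followers"
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {"max_results": 1000}
    ids = set()
    while True:
        resp = requests.get(url, headers=headers, params=params).json()
        ids.update(user["id"] for user in resp.get("data", []))
        next_token = resp.get("meta", {}).get("next_token")
        if not next_token:
            return ids
        params["pagination_token"] = next_token

current = fetch_follower_ids()
try:
    with open("followers.json") as f:
        previous = set(json.load(f))
except FileNotFoundError:
    previous = set()  # first run: nothing to compare against yet

print("Unfollowed you:", previous - current)
print("New followers:", current - previous)

with open("followers.json", "w") as f:
    json.dump(sorted(current), f)

Run on a schedule (say, a daily cron job), this keeps a snapshot of follower IDs between runs, so each run reports exactly who left since the last one.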
https://medium.com/better-programming/who-unfollowed-you-on-twitter-58f757420339
['Christian Behler']
2020-11-12 16:14:26.724000+00:00
['Web Development', 'PHP', 'Twitter', 'Programming', 'Software Engineering']
Linear Regression in Python: Predict e-Commerce Revenue
Photo by ev on Unsplash
In this post, the goal is to build a prediction model using Simple Linear Regression and Random Forest in Python. The dataset is available on Kaggle and my code is on my GitHub account. Let's get started by understanding the dataset.
Discover and Visualize Data
In this dataset, there are 3 categorical features: Email, Address, and Avatar; and 5 numeric features: Avg. Session Length, Time on App, Time on Website, Length of Membership, and Yearly Amount Spent. And there are 500 instances. That is small but perfect for beginners.
Figure 1: First 10 columns of the data frame.
Here is the summary of statistics pertaining to the columns.
Figure 2: Summary of Statistics.
Attribute Histogram Plots were created for a visual interpretation of the numerical data.
Figure 3: Attribute Histogram Plots.
Looking for Correlations
Let's glance at how much 'Yearly Amount Spent' correlates with the other attributes.
The output of the correlation.
The attribute 'Length of Membership' is most correlated with 'Yearly Amount Spent', and 'Yearly Amount Spent' is negatively correlated with 'Time on Website'. So, it shows that customers who stay on the website longer spend less money. Let's check the correlation using a scatter matrix.
Figure 4: Scatter Matrix Plot.
As a result, revenue is most strongly related to the length of membership, and time on app has the second-highest correlation. Let's compare time on app and time on web. We can see which group the customers who spend the most money belong to in Figure 5. The red dots represent a higher amount spent and the blue ones lower amounts; the width of the dots represents the length of membership. When the width of the points increases, it means the length of membership is longer. (Multiplying by 10 doesn't matter, it just made the dots bigger on the graph.)
Figure 5
Linear Regression
Linear Regression is a Supervised Learning technique. Regression is the process of predicting a continuous value. It's done by training the model, and the model predicts future instances using previously labeled data. X represents the independent variables; y represents the dependent variable, also called the target, which is the variable we try to predict. The key point in regression is that the dependent value should be continuous.
Linear Regression Formula: ŷ = θ0 + θ1x. Theta0 and theta1 are the coefficients of the linear equation (the intercept and the slope), and y hat is the predicted variable.
Train the Model
Most machine learning algorithms can't work with null values. This dataset has been checked; there are no missing values. I defined the X and y values. I have chosen only numerical attributes for X. Here is the model evaluation output: the coefficients correspond, respectively, to the features Avg. Session Length, Time on App, Time on Website, and Length of Membership, which is what we defined as X. R-squared measures how close the data are to the fitted regression line. The best possible score is 1; the proximity of the r-squared value to 1 indicates the suitability of the model. For our model, it's 0.986. RMSE measures the distance between the vector of predictions and the vector of target values. It's 9.755 for our model.
Random Forest
Let's evaluate the model with Random Forest Regression.
The output of Random Forest Regression.
Linear regression gave better results for this dataset.
References
Please don't hesitate to give me feedback. Thank you for reading.
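For readers who want to reproduce the steps above, here is a minimal sketch of the workflow in scikit-learn. The column names come from the post; the CSV file name, split ratio, and random seeds are my own assumptions:

# Minimal sketch of the modeling steps described above. The file name and
# split parameters are assumptions; the column names come from the post.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("Ecommerce Customers.csv")  # assumed file name
features = ["Avg. Session Length", "Time on App",
            "Time on Website", "Length of Membership"]
X, y = df[features], df["Yearly Amount Spent"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

for model in (LinearRegression(), RandomForestRegressor(random_state=42)):
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    rmse = np.sqrt(mean_squared_error(y_test, preds))
    print(type(model).__name__, "R2:", r2_score(y_test, preds), "RMSE:", rmse)

The printed scores can be compared against the R-squared of 0.986 and RMSE of 9.755 reported above, though the exact numbers depend on the split and seed.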
https://medium.com/analytics-vidhya/linear-regression-in-python-predict-e-commerce-revenue-dd8e0c8bed2f
['Ayça Erbaşı']
2020-12-10 16:02:29.260000+00:00
['Machine Learning', 'Python', 'Linear Regression Python', 'Linear Regression', 'Data Science']
Q&A With Alexander Kokhanovskyy
Recently, there’s been much focus on DREAM and the updates to the DreamTeam platform. Alexander Kokhavoskyy, DreamTeam CEO and Co-Founder dropped by the telegram group for a live Q&A. If you would like to read the questions and answers, keep reading below. What is the plan to drive the token’s price? As long as FIAT can be used instead of the token, there is no reason for anyone to opt for the token. How about providing some incentives such as discounts when the token is used to make in-app purchased? As we already mentioned in our Tokenomics paper, DreamTeam platform accepts payments in both DREAM Tokens and fiat currencies. But fiat purchases will always be converted to DREAM Tokens behind the scenes to achieve a smoother user experience by avoiding the need for users to go and buy tokens on third-party exchanges directly. What is the benefit for longterm token holders? Perhaps token holders can receive some portion of in-app purchases as dividends, calculated and distributed via blockchain? For now, there are no extra bonuses for longterm token holders exist. However, next year, some benefits may appear. Stay tuned! Is there any token inflation plan, or do we stick with the current cap of (I think) 21 million tokens? I don’t understand the cap amount of 21M tokens. The circulating supply is 36M DREAM. When will Dota 2 be added, and what is holding it back so far? As we already said in our Dev Reports, we are going to add more and more games to the platform, so follow our social media channels for new updates. It will happen in 2020. What will you do with the DREAM token received through in-app purchases? DreamTeam will break down each token received in the following way: 15% of the tokens are locked for 5 years in DreamTeam reserves for future Partnership Development. 85% of the tokens are sold back on exchanges. As token development seems relatively slow, have you considered using Gitcoin bounties to speed things up? New updates in Token development are coming. Be sure to follow our news so as not to miss anything. What will you do if the difference between the token price on the exchange and the fixed rate on the platform reaches 20% (i.e., it is five times cheaper on the exchange)? Will you divide the price by two? No, we are not going to do this. As we’ve already said in our Tokenomics paper: If the price on an exchange is lower than the DreamTeam platform fixed exchange rate, DreamTeam will spend less money than received from the user. If the price on an exchange is higher than the DreamTeam platform fixed exchange rate, DreamTeam will spend more money than received from the user. Each time the price on a third party exchange becomes 500% higher than the fixed DreamTeam platform rate, DreamTeam will double the Token price on the platform: 1DREAM = $2. What’s the point of having a fixed price? Are you going to make money out of it? If not, why not just provide platform users with an exchange price + fee? All users get used to fixed and stable prices for items and services inside games, platforms, etc. To support this trend, DreamTeam introduces the fixed exchange rate inside the platform for all products and services in the amount of 1DREAM = $1. How will the selling and buying of the token from the platform to exchange work? Will it be automated or manual? This will be automated. How exactly is the exchange price calculated? If e.g., a user buys 1000 DREAM on the platform, there is bound to be some price slippage on exchanges. When will the 1 DREAM = $1? 
If the user needs to buy 1000 DREAM on the platform, the fixed exchange rate 1 DREAM = $1 applies. DreamTeam receives money from the user and buys the needed quantity of Tokens for the user on an exchange at the current exchange price. If the price on an exchange is lower than the DreamTeam platform fixed exchange rate, DreamTeam will spend less money than received from the user. If the price on an exchange is higher than the DreamTeam platform fixed exchange rate, DreamTeam will spend more money than received from the user. The Token economy is planned for launch in January 2020.
What is the maximum amount of $ a user may deposit in fiat? Or is this limit set in DREAM? Is there a swap opportunity at a fixed rate (DREAM2USD)?
The user can buy any amount of Tokens he needs to pay for DreamTeam services. As users buy DreamTeam Tokens at a fixed exchange rate, DreamTeam will buy the equivalent of Tokens, which the user needs, at the current price on an exchange. No, swap or withdrawal won't be available.
Do users have any lock-up period for the tokens in terms of withdrawal?
It's not possible to withdraw Tokens from the DreamTeam platform.
Is the price of a feature/subscription package on the platform shown in both fiat and DREAM or only in the latter?
The price will be shown in DREAMZ (1000 DREAMZ = 1 DREAM).
Can you start with some recent stats? Number of active users, usage of different features, etc.?
We've reached around 1.9M users. As we are launching a new model for casual games, features will also change: Stats, Recruitment, Challenges, Unified Profile, Clans, and Content Feed (both coming in late Q1 2020). Those are the core features users are willing to use.
Users cannot withdraw the tokens they bought, right? Tokens that were transferred from third-party wallets or won are transferable, right? At least on the platform from one account to the other? If the fixed rate changes, the price of the features changes as well, right? So if a feature costs 1000 DREAMZ and the fixed rate turns out to be $2, the price of the feature drops to 500 DREAMZ, right?
No limits at the moment. Vice versa, you can only BUY DREAM by using a fixed rate. Users can't withdraw DREAM from the platform. They can only purchase goods and services in our Shop. The price changes only if the price of the platform vs. an exchange increases by 500%. E.g., if the exchange price is higher than $5, then the fixed price doubles, meaning 1 DREAM = $2. But the service price remains the same. Premium Accounts will still be worth $5.
How do we ensure that the Tokenomics strategy is implemented, as stated in the document? Is there a third-party audit company or direct transactions list to be provided to the user that shows what happens after a purchase?
We are searching for a transparent solution for all parties but are limited by the need to keep our commercial metrics confidential.
How does the interaction between platform and exchange work? Is it automatized or manual? Which exchange(s) will be used? I mean, the buy/sell of DREAM from platform to exchange when a user makes a fiat purchase on the platform. So how will the price be calculated then for the 500% ratio? The price is not the same across different exchanges, and for larger purchases, there will be slippage?
All exchange interactions will be fully automatic. It will be according to a certain amount of metrics met.
Right now, it is 30 days of trading with an average daily price of $5 or higher.
Is there a top-tier exchange listing planned? Any decentralized exchange?
After we released the new Tokenomics paper, we are expecting higher Token turnover. We will also start buying tokens as well. Listing on Tier 1 exchanges is the #1 priority beginning in Jan 2020. We are going to re-connect with seven exchanges: Binance (directly on the main exchange through voting, not dex), Bitfinex, Upbit, Bittrex, Poloniex, OKex, Huobi.
Daily price $5? Not sure I get that. The current price is below 10 cents; it will be a while to get it above $1. Did you mean daily volume?
I didn't quite catch you then. My main point was to describe HOW we are calculating a 500% price increase. Right now, the fixed rate is $1. A 500% increase means $5.
How exactly will the new token features be rolled out on the platform? If something is currently $10, does it mean after release, someone will be able to use 10 DREAM to buy that?
Nope, all purchases are in fiat. On the backend, we are going to buy tokens from an exchange and credit them to users.
So users can't use DREAM directly?
No, they can't. In the case that they can, there is a 0% chance the App Store and Google Play apps will accept them.
I thought 500% is the target ratio between platform price and exchange price. So when the exchange price gets above 20 cents, the DREAM price on the platform doubles. From $1 to $2. That's how it was explained in the Tokenomics. My question was, how exactly is the current exchange price determined? You said there would be some metrics? Usually, there is a third-party price oracle used for these calculations.
Understandable. So if DREAM is $4 on an exchange, and a user on the platform makes a $10 purchase, the DreamTeam platform will spend $40 to buy 10 DREAM Tokens? Aren't you then at great risk of losing money?
It should get above $5, so only then it will be doubled. Yeah, I've read it, you were right! We need to re-phrase it a little bit. Right now, 30 days of trading with the average daily price of $5 or higher, meaning the "500%" rule kicks in. Not really. At the end of the day (or immediately), users will give us their 10 Tokens for services or goods. So we receive $10 from the user and spend $40 to buy tokens on the exchange. Finally, we are -$30 ($10 - $40 = -$30) and have the 10 DREAM. Selling 8.5 (85%) DREAM for $34, locking 1.5 (15%) DREAM. So we are -$30 + $34 = $4 NET profit.
Are the tokens that were transferred from us (token holders) still transferable from the platform to a 3rd-party wallet or exchange? I bought tokens on Liquid and sent them to my DreamTeam wallet; are they locked now from further transfer?
If we are buying on an exchange — yes, of course, at the end of the day, we will show all transactions on the blockchain, as everybody needs to see that. BUT users' balances on the platform can only be spent within the platform.
The way I get it, you cannot send DREAM to the platform. Correct me if I am wrong…
Exactly, you can't.
So, the only option to use our tokens is to sell them (probably to you). So DREAM turns out to be a security, I assume.
You can use them for Smart contracts in the future, plus you will be able to buy Premium Accounts directly with DREAM (but only on the website, not in Apps).
How much DREAM does DreamTeam currently own? How will you use that? I think you said for challenges and similar things on the platform? Any other uses?
We own 10% — that's our Reserve. We are not planning to use it in any way at the moment.
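To make the arithmetic in that answer concrete, here is a tiny illustrative sketch of the flow described (fixed platform rate, market purchase, 85/15 split). The function and numbers are purely illustrative, not DreamTeam's actual code:

# Illustrative sketch of the token flow described above; not DreamTeam's code.
FIXED_RATE = 1.0  # 1 DREAM = $1 on the platform

def settle_purchase(fiat_received, market_price):
    tokens = fiat_received / FIXED_RATE        # user is credited at the fixed rate
    cost = tokens * market_price               # platform buys tokens on an exchange
    locked = tokens * 0.15                     # 15% locked in reserves for 5 years
    resold = (tokens - locked) * market_price  # 85% sold back on exchanges
    return fiat_received - cost + resold       # net result for the platform

print(settle_purchase(10, 4.0))  # $10 purchase at a $4 market price -> 4.0

This reproduces the example from the answer: minus $30 after the exchange purchase, plus $34 from reselling 8.5 DREAM, for a $4 net profit.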
Are you planning a big marketing campaign for the platform promo?
In Q1 2020, we are raising a new investment round, and, of course, we will scale considerably in 2020.
Are you going to add World of Tanks and World of Warcraft to the platform?
WoT will be 100% in 2020; in terms of WoW, I'm not sure.
From which fund will DREAM be paid for rewards? In the Tokenomics, you say that you consistently update your Github, but the last update on smart contracts was 10 months ago. Do you work on smart contracts?
I think that from the reserve, there won't be much, and people will not be able to withdraw them from the platform :) They will be able to receive goods or services for them. Therefore, this will not affect the price. Now we are working on the development of the Smart Contract for the reserve, where 15% of Tokens will be locked for 5 years. And the last update on Github was in autumn, connected with delegated transactions.
Can a user buy DREAM on the platform for future usage?
We have liquidity on exchanges, so users can do it immediately or with a delay. Users can buy tokens and hold them, which is visualized on the diagram in Tokenomics.
Is there some kind of 3rd-party company that verifies the compliance of Tokenomics with reality? How are you going to prove the security of the process of going to an exchange to buy DREAM for the users and not keeping money inside the company? Will the user be provided with a link to buy/sell on the exchange, transactions in the reserve, etc.?
This 3rd party will be a blockchain transaction in a user's account. They will see that they received tokens or they sent them. In terms of buying Tokens, we are thinking about how to do it transparently for you while, on the other hand, keeping our commercial information unavailable to the entire market. No, you cannot see the transaction on the exchange. But when tokens are sent to users, and they send them back to the platform — then yes, it will be visible.
Let's say I give $10 to exchange it for 10 DREAM, and then a week later, I get Prime for 5 DREAM, right? And 5 DREAM remains in my balance. When you deposit, will the exchange happen automatically, or will I have to change it on the internal exchange myself?
The process is automated.
Who are your competitors?
If speaking about team management and recruitment, then we have 8–10 competitors: Seek-team.com, Teamfind.com, Esport-Management.com, Readyup.com, Guilded.gg, PvP.com, etc. But we are bigger than all of them combined.
The main traffic is from YouTube; are you going to increase the channel's promo or use other promo channels?
The primary traffic we get is from SEO and Direct. YouTube only gives about 10%. We also launch PPC campaigns.
Isn't blockchain a synonym for transparency? Why did you decide to work with it, as now you are saying that you are going to hide the turnover?
We have other investors (Mangrove), and we are going to have new ones in Q1. We have strict rules about what can be made public and what cannot.
Do you have a strategic development plan for 5 years?
There is no logic to having it. Our vision and mission remain unchanged. To have a development plan for 5 years is the same as to have a roadmap for 5 years.
About DreamTeam: DreamTeam — infrastructure platform and payment gateway for esports and gaming.
Stay in touch: Token Website | Facebook | Twitter | LinkedIn | BitcoinTalk.org
If you have any questions, feel free to contact our support team any time at [email protected].
Or you can always get in touch with us via our official Telegram chat.
https://medium.com/dreamteam-gg/qa-with-alexander-kokhanovskyy-263c5fa5d88c
[]
2019-12-17 12:53:43.819000+00:00
['Game Development', 'Token', 'Gaming', 'Blockchain', 'Startup']
Finetune a Facial Recognition Classifier to Recognize your Face using PyTorch
Finetune a Facial Recognition Classifier to Recognize your Face using PyTorch
Tricking facial recognition systems using adversarial attacks with GANs.
This is part of a series I am writing on tricking facial recognition systems using adversarial attacks with GANs. However, before we trick a facial recognition classifier, we need to build one to trick. I personally want to build one that can recognize my own face. Instead of training a neural network from scratch, I can start with a pre-trained network and then finetune it to recognize my face. Finetuning is greatly beneficial as we can start with the model weights already trained on a large-scale face database and then update some of them to reflect the new tasks we want the model to perform. These weights already understand how to recognize faces; the only difference is that the model does not know my face. Having this pretrained model learn my own face is much easier than training from scratch, as the model weights already contain much of the information needed to perform the task.
Directory Structure

project
│   README.md
│   AGN.ipynb
│
└───data
│   │   files_sample.csv
│   └───eyeglasses
│   │
│   └───test_me
│       └───train
|           └───Adrien_Brody
|           ...
|           └───Michael_Chaykowsky
|           ...
|           └───Venus_Williams
│       └───val
|           └───Adrien_Brody
|           ...
|           └───Michael_Chaykowsky
|           ...
|           └───Venus_Williams
│
└───models
│   │   inception_resnet_v1.py
│   │   mtcnn.py
│   └───utils

The models directory is from the PyTorch facenet implementation based on the Tensorflow implementation linked above.

└───models
│   │   inception_resnet_v1.py
│   │   mtcnn.py
│   └───utils

This inception_resnet_v1.py file is where we will pull in the pretrained model. The Inception Resnet V1 model is pretrained on VGGFace2, where VGGFace2 is a large-scale face recognition dataset developed from Google image searches whose images "have large variations in pose, age, illumination, ethnicity and profession."
Each layer's weights in the model have an attribute called requires_grad that can be set to True or False. When you run loss.backward() in the training loop, these weights are updated, and this is what contains all of the information needed to perform the predictions. When finetuning the network, we freeze all of the layers up through the last convolutional block by setting the requires_grad attributes to False, and then only update the weights on the remaining layers. Intuitively, you can imagine the earlier layers as containing the base-level information needed to recognize face attributes and characteristics, so we keep all of that performance while updating the final layers to include another face (mine).
All train directories have 11 or 12 images of each individual and all val directories have 4 or 5 images of each individual. Michael_Chaykowsky is a directory of my face where I used various poses, lighting, and angles. To collect these images I took videos with a standard iPhone in various spaces, then transformed these videos to images and used MTCNN on each to perform face alignment and cropping to the appropriate size (160 x 160 pixels).
Imports import torch from torch import nn, optim, as_tensor from torch.utils.data import Dataset, DataLoader import torch.nn.functional as F from torch.optim import lr_scheduler from torch.nn.init import * from torchvision import transforms, utils, datasets, models from models.inception_resnet_v1 import InceptionResnetV1 import cv2 from PIL import Image from pdb import set_trace import time import copy from pathlib import Path import os import sys import matplotlib.pyplot as plt import matplotlib.animation as animation from skimage import io, transform from tqdm import trange, tqdm import csv import glob import dlib import pandas as pd import numpy as np device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') # device is referenced by the later cells Multi-task Cascaded Convolutional Neural Network (MTCNN) for face alignment from IPython.display import Video Video("data/IMG_2411.MOV", width=200, height=350) Video of me rotating my face Capture the frames of the video as .png files and rotate/crop/align. vidcap = cv2.VideoCapture('IMG_2411.MOV') success,image = vidcap.read() count = 0 success = True while success: cv2.imwrite(f"./Michael_Chaykowsky/Michael_Chaykowsky_{ format(count, '04d') }.png", image) success,image = vidcap.read() print('Read a new frame: ', success) count += 1 The images come in rotated, so I use imagemagick to make them right-side up. Make sure to brew install imagemagick first. I think there’s another way to install the library but if I recall it was a nightmare — so definitely suggest brew install . %%! for szFile in ./Michael_Chaykowsky/*.png do magick mogrify -rotate 90 ./Michael_Chaykowsky/"$(basename "$szFile")" ; done ! pip install autocrop Autocrop has a nice feature where it does some resizing of face images and lets you specify the face percentage. You can forgo this if you are doing the full MTCNN method (preferred), but if not you can do this, which is quite a bit easier. ! autocrop -i ./me_orig/Michael_Chaykowsky -o ./me/Michael_Chaykowsky160 -w 720 -H 720 --facePercent 80 ! pip install tensorflow==1.13.0rc1 ! pip install scipy==1.1.0 Now using the align_dataset_mtcnn.py script from David Sandberg’s Tensorflow implementation of facenet we can apply this to the directory of face images. %%! for N in {1..4}; do \ python ~/Adversarial/data/align/align_dataset_mtcnn.py \ # tensorflow script ~/Adversarial/data/me/ \ # current directory ~/Adversarial/data/me160/ \ # new directory --image_size 160 \ --margin 32 \ --random_order \ --gpu_memory_fraction 0.25 \ & done Now you have a directory with all of your faces aligned and cropped appropriately for modeling. Load Data When we load in the data we will perform some random transforms to the images to improve training. Different transforms can be attempted, like random color jitter, random rotation (+/- 5 degrees), or random horizontal flip. Here I use random horizontal flip. All of these transforms make the model more generalizable and prevent overfitting. 
data_transforms = { 'train': transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = 'data/test_me' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=8, shuffle=True) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train','val']} class_names = image_datasets['train'].classes class_names Out[1]: ['Adrien_Brody','Alejandro_Toledo','Angelina_Jolie','Arnold_Schwarzenegger','Carlos_Moya','Charles_Moose','James_Blake','Jennifer_Lopez','Michael_Chaykowsky','Roh_Moo-hyun','Venus_Williams'] def imshow(inp, title=None): """Imshow for Tensor.""" inp = inp.numpy().transpose((1, 2, 0)) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) inp = std * inp + mean inp = np.clip(inp, 0, 1) plt.imshow(inp) if title is not None: plt.title(title) plt.pause(0.001) # pause a bit so that plots are updated # Get a batch of training data inputs, classes = next(iter(dataloaders['train'])) # Make a grid from batch out = utils.make_grid(inputs) imshow(out, title=[class_names[x] for x in classes]) Get pretrained ResNet on VGGFace2 dataset from models.inception_resnet_v1 import InceptionResnetV1 print('Running on device: {}'.format(device)) model_ft = InceptionResnetV1(pretrained='vggface2', classify=False, num_classes = len(class_names)) Freeze early layers Recall earlier I mentioned that we will freeze up through the last conv block. To find where that is we can iterate through this list using -n, -n-1, ... until we find the block. list(model_ft.children())[-6:] Out[2]: [Block8( (branch0): BasicConv2d( (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU() ) (branch1): Sequential( (0): BasicConv2d( (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU() ) (1): BasicConv2d( (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False) (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU() ) (2): BasicConv2d( (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False) (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU() ) ) (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1)) ), AdaptiveAvgPool2d(output_size=1), Linear(in_features=1792, out_features=512, bias=False), BatchNorm1d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True), Linear(in_features=512, out_features=8631, bias=True), Softmax(dim=1)] Remove the last layers after conv block and place in layer_list . layer_list = list(model_ft.children())[-5:] # all final layers layer_list Out[3]: [AdaptiveAvgPool2d(output_size=1), Linear(in_features=1792, out_features=512, bias=False), BatchNorm1d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True), Linear(in_features=512, out_features=8631, bias=True), Softmax(dim=1)] Put all beginning layers in an nn.Sequential . 
model_ft is now a torch model but without the final pooling, linear, batchnorm, and softmax layers. model_ft = nn.Sequential(*list(model_ft.children())[:-5]) If training just the final layers: for param in model_ft.parameters(): param.requires_grad = False Re-attach the last 5 layers as freshly constructed modules, which automatically sets requires_grad = True on them. The linear layer Linear(in_features=1792, out_features=512, bias=False) actually requires writing two custom classes, which is not entirely obvious by looking at it; if you trace the input/output shapes you can see that a Flatten and a normalize step belong inside the layer. Check the resnet implementation for the reason for the reshaping in the last_linear layer. class Flatten(nn.Module): def __init__(self): super(Flatten, self).__init__() def forward(self, x): x = x.view(x.size(0), -1) return x class normalize(nn.Module): def __init__(self): super(normalize, self).__init__() def forward(self, x): x = F.normalize(x, p=2, dim=1) return x Then you can apply the final layers back to the new Sequential model. model_ft.avgpool_1a = nn.AdaptiveAvgPool2d(output_size=1) model_ft.last_linear = nn.Sequential( Flatten(), nn.Linear(in_features=1792, out_features=512, bias=False), normalize() ) model_ft.logits = nn.Linear(layer_list[3].in_features, len(class_names)) model_ft.softmax = nn.Softmax(dim=1) model_ft = model_ft.to(device) criterion = nn.CrossEntropyLoss() # All parameters are passed to the optimizer, but only those with requires_grad=True get updated optimizer_ft = optim.SGD(model_ft.parameters(), lr=1e-2, momentum=0.9) # Decay LR by a factor of *gamma* every *step_size* epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1) Train def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() FT_losses = [] best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. 
for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history only if in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() FT_losses.append(loss.item()) # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) if phase == 'train': scheduler.step() # step once per epoch; StepLR's step_size counts epochs, not batches epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'val' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:.4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model, FT_losses Evaluate model_ft, FT_losses = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=500) plt.figure(figsize=(10,5)) plt.title("FRT Loss During Training") plt.plot(FT_losses, label="FT loss") plt.xlabel("iterations") plt.ylabel("Loss") plt.legend() plt.show() More to come Keep an eye out for more from this series where I will describe how to trick this classifier using adversarial attacks with GANs.
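Before moving on, a quick sanity check: a hedged sketch of running the finetuned classifier on a single aligned crop. The image path is a placeholder, and it assumes model_ft, data_transforms, class_names, and device exist exactly as assembled above (so the forward pass ends in the softmax added earlier).
# Hedged inference sketch: score one aligned 160 x 160 face crop.
# The file path is a placeholder; model_ft, data_transforms, class_names
# and device are assumed to be defined as above.
from PIL import Image

img = Image.open('data/test_me/val/Michael_Chaykowsky/Michael_Chaykowsky_0001.png')
x = data_transforms['val'](img).unsqueeze(0).to(device)  # add a batch dimension

model_ft.eval()
with torch.no_grad():
    probs = model_ft(x)  # softmax probabilities from the rebuilt head
conf, idx = torch.max(probs, dim=1)
print('Predicted: {} ({:.2%})'.format(class_names[idx.item()], conf.item()))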
https://towardsdatascience.com/finetune-a-facial-recognition-classifier-to-recognize-your-face-using-pytorch-d00a639d9a79
['Mike Chaykowsky']
2020-04-11 20:29:13.591000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Facial Recognition', 'Pytorch']
“But What I Really Want…”
“But What I Really Want…” Compromise & the Status Quo Source: The November Century Magazine LXXIII №1 (London, England: MacMillan and Co. LTD, 1906) “Compromise” is nothing more than a means of preserving the status quo, which inevitably elevates greed over need. Agreeing to compromise is presented as mature, but it’s really just compliant. In this Orwellian perspective, rolling over is standing up. When I look around at what amounts to “the left” in the US, I find it to be full of compromises, with nearly nothing to be excited about. Yes, I’ll support a campaign for a $15 minimum wage. But what I really want is a world where the necessities of life are no longer monetized. Money creates artificial scarcity, and is a tool of control. Yes, I’ll support non-discrimination in employment. But what I really want is an end to the employer-employee relationship, whether the employer is a mom-and-pop or a multinational corporation. That relationship is exploitative at its core. Yes, I’ll support immigrants. But what I really want is a planet without borders. The nation-state is a recent invention, and it’s no coincidence that its rise has tracked with industrialism and capitalism. As a means of organization, it is fundamentally inequitable. Yes, I’ll support women going into politics and business. But what I really want is to smash patriarchy, and all the political and business institutions that go with it. Yes, I’ll support tenant rights. But what I really want is an end to the landlord-tenant relationship, whether the owner is an individual or a bank, and whether the arrangement is a lease or a mortgage. Nobody should have to pay for a roof over their head. Yes, I’ll support voting rights. But what I really want is a legitimate, participatory democracy, driven by people, not money or power, in which leadership is a role that is filled only as needed, not a club to bludgeon people with. Our current system excludes by default, when inclusivity is the only approach that will work for making a just society. Yes, I’ll support Medicare-for-all. But what I really want is an approach to health that is holistic, based on plants not pharmies, and that draws on all the world’s effective traditions, not just Victorian-era European premises. Yes, I’ll support “science.” But what I really want is a worldview that also embraces the value of the ineffable. The dismal reductionism of our day that masquerades as “reason” strips life of essential intrinsic qualities, and encourages us to act without sensitivity to that which we cannot measure or weigh. Yes, I’ll support funding for education. But what I really want is the restitution of life-ways where coercion is not a method for helping children learn how to live in the world. Schools are far more about obedience than they are about education. Yes, I’ll support cuts to “defense” spending (as if anybody even talks about that anymore). But what I really want is an absolute end to US imperialism, including the closing of all overseas military bases and the dismantling of the entire nuclear arsenal. Yes, I’ll support a “Green New Deal.” But what I really want is to drastically curtail our consumption. No energy-producing technology is truly “renewable.” Solar, wind and water all harm the environment through their production and operation. Yes, I’ll support socialism over capitalism. But what I really want is for the means of production to be dismantled, not merely seized by another set of hands. Industrialism rapes the environment no matter who owns it. 
Yes, I’ll support fairer laws. But what I really want is the abolition of prisons and the so-called justice system. Furthermore, the punitive urge itself must be excised as a cultural fixture. Yes, I’ll support marriage equality. But what I really want is to end the institution of marriage itself, it being a property-based relic of the Bronze Age. Yes, I’ll support environmental regulations. But what I really want is a world in which we put the Earth First! Yes, I’ll support setting aside public land for “preservation.” But what I really want is to return it to the Native Americans. Yes, I’ll support organic farming. (In fact, I made my living as an organic farmer for a decade.) But what I really want is an end to the agricultural mindset, which is based on the domination of nature. We must return to a cooperative relationship with the planet and all life on it, through practices like wild-tending. Yes, I’ll support veganism because animal agriculture is so gross. But what I really want is to stop the division of living things into categories that are not okay to kill (animals) and the ones that are (everything else apparently). Only by recognizing the legitimacy of all forms of life can we eliminate cruelty. Yes, I’ll support the arts. But what I really want is a culture where music doesn’t belong to musicians, singing to singers, dancing to dancers, painting to painters, sculpture to sculptors, and poetry to poets. By elevating the “artist” we have stolen creative expression from everyday people and everyday life, and have manufactured something individual and rarefied from what should be common and communal. Yes, I’ll support individual freedom. But what I really want is to focus primarily on collective responsibility. How our choices affect others should be the first consideration of any decision-making process, with those “others” being not just other humans. So yes, I’ll support all these things, in part because I respect that the people working for them are driven by compassion. But what I really want is much more, although in another way of considering it, much less: This project we call “civilization” is too much. Our task now must be to scale back, power down, and simplify. Each additional day of compromise is another assault on a liveable environment. And this is the key point that is nearly always forgotten: All the non-human creatures on this planet are subjected to our “compromises” but they are not parties to negotiating them. In that way, nothing at all — nothing! — that we civilized humans do is ever truly a compromise, but is always an imposition.
https://medium.com/age-of-awareness/but-what-i-really-want-81d8ba03573f
['Kollibri Terre Sonnenblume']
2020-08-15 01:18:04.423000+00:00
['Environment', 'Politics']
An Interesting Video Taking Apart the AirPods Max
An Interesting Video Taking Apart the AirPods Max The Headphones Are Surprisingly Repairable The latest AirPods Max seem to be more repairable than expected from Apple. Tech YouTuber Snazzy Labs broke down the headphones in an unlisted video. He goes over how to get inside and where some parts are located. Interestingly, it seems that both H1 chips are housed in the left ear cup (presumably because of space). The right ear cup is a little bit more cluttered with the battery. The video also breaks down the different sensors in the cups and how the signal is transferred through the headband to the other ear cup. It’s a very interesting video and worth watching.
https://medium.com/drknode/youtuber-breaks-down-airpods-max-in-secret-video-106bc3c85cbd
['Henry Gruett']
2020-12-26 21:11:19.274000+00:00
['Technews', 'Tech', 'Technology', 'Airpods Max', 'Apple']
Taking a Snapshot of 2020 — Before We Set Our Goals For The New Year
Taking a Snapshot of 2020 — Before We Set Our Goals For The New Year John Gamades Dec 7 As we’re rounding the bend and closing out 2020, you might be breathing a sigh of relief. Elections, issues of inequality, a pandemic, financial instability, working from home for many of us, and virtual learning for our kids — any one of those on their own could have made this a challenging year. Instead, the stars aligned, and we got them all at once. It’s overwhelmed us, tried to undermine our strength, and tested our resilience. Words like endurance, perseverance, and determination mean something more coming out of 2020. Fortunately, pressure creates diamonds. In the middle of this, we all grew stronger. We found the power to stretch ourselves further, to do things we thought were impossible, and to thrive in the uncomfortable moments. There are so many valuable lessons woven into every experience we had in 2020, and the greatest miss of the year would be to step past those things we learned in our hurry toward 2021. I’m not going to let us rush forward without grabbing those lessons to bring with us. Here’s my perspective on the transition we’re about to make into 2021. Some of you will set New Year’s resolutions, and others are going to set goals for 2021. To be clear, they’re two different things. Our resolutions are most often vague, abstract, squishy — and they’re most often broken and lost by the time February first arrives. Goals, on the other hand, are much more strategic. They’re specific and clear, there’s a plan for achieving them, accountabilities are intertwined, and they’re aligned to our purpose and core values. I’m going to talk more about goals and my own goal setting process in my next blog. I’ll be helping you envision your coming year, and will be sharing my personal goal setting tools. First, though, we need to spend some time looking back on the year we’ve just experienced. We need to grab onto those 2020 lessons before they get lost… To do this, I created My 2020 Snapshot tool. You can download it here for free. This is a tool I first developed last year, as we were approaching 2020, to help set the stage for my own new year’s goal setting. Many of you used it then and I heard some great feedback. When I reopened the Snapshot tool this year, I saw a few things I wanted to alter based on what we’ve witnessed in 2020. Even the way we reflect on the past year required a 2020-pivot. I started by adding two new questions toward the very beginning of the 2020 Snapshot to help us start the review. This year, we’ll ask, “Where did I experience wins in 2020?” and “Where did I experience losses in 2020?” I’ll be honest… As I started answering those questions myself, many emotions came with my replies. Thinking about our wins in a year like this can (and should) create a rush of gratitude. Talking about what we lost can be heavy. There are things we’re still mourning. The year you’ve experienced may naturally tilt you one way or another, and that’s okay. Remember, we’ve all experienced 2020 in our own way, on our terms. As much as this year has been a shared experience, we haven’t shared everything. We’ve all lived through our own peaks and valleys, and we’ve learned unique lessons. Those lessons are the real wealth that will come from this past year — and they’re what I want to help you capture through this 2020 Snapshot. 
Specifically, I want each of us to answer this question, “What lessons did I learn as I walked through 2020?” We need to seek those lessons, find them, and name them. We need to make sure those lessons don’t get left behind as we look forward to something new. They represent our wins and failures, the things we’ve learned, and the scars we’ve picked up along the way. These lessons, the easy ones and the painful ones, will make us more prepared and stronger as we move into a new year… and the year after that… and the one beyond that. These lessons deserve trophies to go along with them — the kind of trophies we got when we were kids — to commemorate what we’ve learned and achieved. Adults don’t get many trophies, but we should. We should all get trophies for how we showed up in 2020. The “I didn’t give up” trophy, the “I was there for someone else” trophy, and the “We walked through this together” team trophy. Gold, polished, and shiny… we need a way to recall where we’ve been, all that we’ve accomplished, the challenges we’ve survived, and the lessons we’ve learned together. The Takeaway There will be more lessons to learn in 2021. That’s something we can all be sure of. Between now and then, as we’re preparing for the new year to come, take some time to reflect on this passing year. Download the My 2020 Snapshot tool, spend some time with it, and end this year strong. Take something valuable with you from 2020 so we can’t say it was the year that robbed us of our joy, resilience, and positivity. Follow me here for my next blog, where I introduce my 2021 goal setting tools. I’m going to share the process I use for setting my own goals (not resolutions) and I’ll help make the process simple and easy for all of us. The new year is just around the corner — let’s walk into it prepared so we can live boldly!
https://medium.com/age-of-awareness/taking-a-snapshot-of-2020-before-we-set-our-goals-for-the-new-year-d81fb11bb14c
['John Gamades']
2020-12-13 19:12:30.612000+00:00
['Self Improvement', 'Leadership', 'Life Lessons', 'Life', 'Entrepreneurship']
Cry Baby
It can be embarrassing, but sometimes old wounds weep.
https://backgroundnoisecomic.medium.com/cry-baby-9aba64a9870a
['Background Noise Comics']
2019-06-14 04:07:49.848000+00:00
['Self Esteem', 'Comics', 'Cartoon', 'Humor', 'Mental Health']
5 Common Mistakes that can Ruin Your Content Campaign
I’ve dedicated the past few years of my professional life exclusively to content marketing. And over that time, I’ve watched brands and ad agencies make the same mistake over and over again: they treat content marketing like advertising. It’s a common mistake, and almost always results in a waste of money and a frustrated CMO who may start to believe content just doesn’t work for their brand. The fact is content works. And it can work brilliantly. But it needs a different strategy than advertising, and brands have to stop treating them as the same thing. Here are five common misconceptions from the advertising world that you should avoid at all costs when developing a content strategy for your brand. 1. Content marketing is a form of advertising It’s not. Advertising is a strategy brands use to increase sales. Content marketing is a different strategy brands can (and should) use to increase sales and awareness. They are two different strategies and should be developed and executed in different ways. Let’s say advertising and content marketing are both contenders in a boxing match. Basically, advertising would be whoever is fighting Rocky. And content marketing would be, well, you guessed it, Rocky. Advertising has always been about aspiration. It’s about the unachievable. Advertising is the guy who’s already on top: he’s got the money, the style, the clever lines, the girl and he makes you want to be just like him. It is aspirational. Content marketing, on the other hand, is all about what’s real. You see, the reason Rocky became such a popular character is because he’s not some bigger-than-life hero who can accomplish it all. In fact, he doesn’t even win the big fight until his second movie, and his opponent isn’t a villain. Rocky is loved by his audience because, in life, most of us aren’t big shots who have it all figured out. He’s what could have happened to us if we’d pursued our childhood dreams. He’s inspirational. And that’s the type of content people love, so much so that “Rocky” became one of the most profitable film franchises ever. 2. It should be all about your product or your brand It’s not. That’s advertising. Content is all about your audience. While advertising is a tool to make sure people know your product exists and where to buy it, content is what will make them remember your brand forever. Take Dove, for instance, a brand that hasn’t talked about how their body wash is better than the competition in a long time. It’s still the leader in its category across the United States. While other brands spend millions of dollars on celebrity endorsements, Dove invests heavily in content. No celebrities. No models. No mention of their products’ benefits. Just inspiring stories of normal women. And it works for them. 3. Okay, then it’s a little bit about my audience, but maybe like 65% about my brand Stop it! Sure, people need to know your brand is behind your initiative, and that whatever content you’re putting out there is only available because your brand produced it and cares about your consumers’ interests. But when you read an article about the best cars of 2019 and then realize all the cars listed are Fords because it’s sponsored content produced by a local Ford dealership, you aren’t going to feel enlightened. Instead, you’ll immediately think “I’ve been fooled. 
This is an ad.” Content cannot be a Frankenstein hybrid made out of advertising and journalism parts. Remember the Rocky analogy: we love him because he is real. Once that trust between brand and audience is broken, the campaign is more likely to backfire. Your audience will feel fooled or even angry. And that’s a long way from the whole purpose of your content campaign, which was to create a positive emotional connection with your audience so that, over time, your brand will become top of mind.
https://medium.com/creative-lab/5-common-mistakes-that-can-ruin-your-content-campaign-78826c13876e
['Carolina Esbaile']
2019-06-27 01:14:43.655000+00:00
['Video Marketing', 'Branded Content', 'Content Marketing', 'Marketing', 'Advertising']
Two Point Formula for Achieving Anything in Life
Two Types of People Generally speaking, people are of two types: those with a growth mindset and those with a fixed one. People with a growth mindset are the ones who have vision, dreams, and goals they work towards. I call them successful people, even though they might not be successful at that time, or might not get there quickly, because they have the most important trait of people who embrace success. Yes, they have that mindset. On the other hand, a person with a fixed mindset will say: “I want to do this, but I can’t,” and then they end up doing nothing. While a successful person will say, “Okay! I don’t have the money. But I have X, so let’s work on Y using Z.”
https://medium.com/2-minute-madness/two-point-formula-for-achieving-anything-in-life-6a5847314f37
['Saeed Ahmad']
2020-12-28 06:20:39.273000+00:00
['Success', 'Careers', 'Life', 'Motivation', 'Life Lessons']
Weekly Machine Learning Research Paper Reading List — #4
Authors: Fabian Keller, Emmanuel Müller, and Klemens Böhm Venue: 2012 IEEE 28th International Conference on Data Engineering (ICDE 2012) Paper: URL Abstract: Outlier mining is a major task in data analysis. Outliers are objects that highly deviate from regular objects in their local neighborhood. Density-based outlier ranking methods score each object based on its degree of deviation. In many applications, these ranking methods degenerate to random listings due to low contrast between outliers and regular objects. Outliers do not show up in the scattered full space, they are hidden in multiple high contrast subspace projections of the data. Measuring the contrast of such subspaces for outlier rankings is an open research challenge. In this work, we propose a novel subspace search method that selects high contrast subspaces for density-based outlier ranking. It is designed as pre-processing step to outlier ranking algorithms. It searches for high contrast subspaces with a significant amount of conditional dependence among the subspace dimensions. With our approach, we propose a first measure for the contrast of subspaces. Thus, we enhance the quality of traditional outlier rankings by computing outlier scores in high contrast projections only. The evaluation on real and synthetic data shows that our approach outperforms traditional dimensionality reduction techniques, naive random projections as well as state-of-the-art subspace search techniques and provides enhanced quality for outlier ranking.
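To make “density-based outlier ranking” concrete, the sketch below uses scikit-learn's Local Outlier Factor on synthetic data, scoring once in the scattered full space and once in a hand-picked two-dimensional subspace. This is an illustration only, not the paper's method: the paper's contribution is selecting such high-contrast subspaces automatically, whereas here the subspace is chosen by hand.
# Illustration only: density-based outlier ranking (LOF) in the full space
# versus a hand-picked subspace. The paper is about *finding* such subspaces;
# here dims 2 and 5 are chosen by hand on synthetic data.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))      # regular objects in 10 dimensions
X[0, [2, 5]] = [4.0, -4.0]          # one outlier hidden in dims 2 and 5

def lof_scores(data):
    lof = LocalOutlierFactor(n_neighbors=20)
    lof.fit(data)
    return -lof.negative_outlier_factor_  # larger score = more outlying

full = lof_scores(X)                # scattered full space
sub = lof_scores(X[:, [2, 5]])      # high-contrast subspace
print('rank in full space:', int((full > full[0]).sum()) + 1)
print('rank in subspace:  ', int((sub > sub[0]).sum()) + 1)
In runs like this the hidden outlier typically ranks near the top in the subspace while being buried in the full-space ranking, which is exactly the contrast the paper exploits.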
https://medium.com/towards-artificial-intelligence/weekly-machine-learning-research-paper-reading-list-4-64442005324d
['Durgesh Samariya']
2020-08-25 21:10:23.448000+00:00
['Research', 'Machine Learning', 'Science', 'University', 'Academia']
Nudging, steering or shoving people into the big society?
Nudging, steering or shoving people into the big society? noelito Aug 26 How will we be motivated to want to help each other when we are warned that our colleague who’s been laid off or our neighbour who’s disabled and cannot work are not as deserving as us and are becoming a drain on the economy? For those of us out of work, how can we get a job when there aren’t any, or hardly any, available? We also know that the biggest costs to public services — i.e. from health to policing and even welfare — are around chronic lifestyle behaviours and conditions. So could the government be using this “behavioural” approach to welfare reform? They could encourage benefit claimants to be able to do paid volunteering without having their benefits cut through a Community Allowance. This would improve their skills, give them something productive and meaningful to do and increase the likelihood of getting a job. Or should they use a “shove” approach — from shoving them on a bus to go and get a job? If we reflect on what motivates us to act and think in certain ways, we are often treated as individuals acting in isolation of everything and everyone except the money in our purse. But people influence us, our neighbourhoods influence us, things we cannot predict influence us too. Context changes that too: it’s unlikely that the person who encourages you to smoke more is the person who has the greatest influence on how you manage your pension. If you are trying to nudge the general public into contributing to society, you need to generate a “cascading” of community behaviours. You can only do this if there are enough community activists in a particular neighbourhood who are contributing towards the community. This increases the level of “persuadability” in that neighbourhood, making others feel motivated to participate. Likewise, if there are enough people in a neighbourhood who don’t work, then the level of “persuadability” of new entrants to get a job will be much lower, and it will therefore be more difficult to get them into work. You could therefore have a “boom to bust” mechanic going on in particular communities: where public services and community groups have been able to support these catalysts and that support is suddenly withdrawn while, in parallel, many people become unemployed, the desire for communities to help each other suddenly fades away and the desire for people to find work also degrades. Giving people the tools to understand how their brains, behaviours and environments interact helps them make better decisions and tackle habits such as smoking, binge drinking and overeating. But where worklessness has been embedded into communities, how can we re-inject that sense of hope and energy? As @davidwilcox says: “if u want people to act, support them. If u want people to talk, listen“
https://medium.com/carre4/nudging-steering-or-shoving-people-into-the-big-society-7c60b4275eb1
[]
2020-08-27 17:51:47.248000+00:00
['Design', 'Leadership', 'Places', 'Innovation', 'Government']
Three Books That Changed My Creative Life
Three Books That Changed My Creative Life Author_Didi_Cooper Sep 26 Home in Kennebunkport (Didi Cooper) I want to believe that books come into your life at that perfectly magical moment so they can serendipitously inflict their wisdom and inspire you to act. What you do with that wisdom and magic is entirely up to you. Are you ready to try something new? Are you ready to think about your life in a completely different way? Does your hand easily and effortlessly run along the spine of the book that is meant to be your next read? The books below revealed themselves to me at very different periods in my life and each, in their own way, pushed me in the direction of creativity. It took me some time to explore that wondrous sense of creativity and determine that I would write. So, in this post you will not get a list of books that make you a better writer. You can find that anywhere. Instead, I share with you sparkling book gems that showed up on the doorstep of my creative unconscious and dared me to act. Colony Beach in Kennebunkport, Maine (Didi Cooper) Gift from the Sea by Anne Morrow Lindbergh: This poetic and lyrical classic is Anne Morrow Lindbergh’s 1955 account of her solo vacation to Sanibel Island as a young mother. This tiny book treasure came into my life in 1999, at a time when I needed the messages and the beautiful writing style. Her raw honesty about the demands of life as a young mother, the complexities of marriage, and her journey for inner peace spoke directly to my heart. This Lindbergh classic is as real as real gets and whenever I re-read it I am reminded that her time on Sanibel Island in 1955 could as well have happened last week. Her writing is gorgeous and timeless. Low Tide in Kennebunk, Maine (Didi Cooper) Earth Magic by Steven D. Farmer: This is the book that has yellow sticky notes spilling over the pages, so my return to it will guide me to quotes and concepts I knew I would want to re-visit. This book speaks a language that links our growth and contributions to the magic of nature and the Earth. I found this fascinating dive into healing, spirituality, and metaphysical exploration helped me tap into and connect to the Earth in a way that has heavily influenced my writing. Colony Beach in Kennebunkport, Maine (Didi Cooper) Big Magic by Elizabeth Gilbert. Oh boy. Where Lindbergh and Farmer provided mindful, mystical and gentle messages to push me ever so gently in the direction of my own creativity, Gilbert shoved me, got in my face, and told me to get moving. She wanted to know what I was waiting for, why I was preventing myself from living a creative life, and why in the world I wasn’t doing ALL the things that gave me joy. She challenged me to follow my curiosity wherever it might take me. Upon completing this book, which I refer to as my wake-up call, I wrote my first novel, The Magic of Missing You (still yet to find its publishing home). Try the audio book for Gilbert’s tough-love chat with you about your own creativity. https://www.amazon.com/Big-Magic-Creative-Living-Beyond/dp/1594634726/ref=sr_1_1?dchild=1&keywords=big+magic&qid=1601089444&sr=8-1 So the next time a book calls your name, maybe there’s a reason. Maybe it’s meant to be your next read. Just maybe the universe conspired to bring you a message through the pages. Visit Didi Cooper’s website and blog: www.didicooper.com https://www.instagram.com/didi_cooper_writer/
https://medium.com/illumination/three-books-that-changed-my-creative-life-dbc29d8aa441
[]
2020-12-22 12:45:52.936000+00:00
['Book Recommendations', 'Books And Authors', 'Books', 'Readinglist', 'Reading']
Tilt
The tilt of the Earth’s axis Bares the freckled landscape Against the sun And while Wild purple skies Wick away Thunder Leaving blue And while Grass grows lushly Wild meadows Trimmed lawns Cow-cropped scrub And while Swifts swiftly dart Magpies scheme Wrens flit Between thorns The tilt of my axis Bares my motley mores Against the hurt
https://medium.com/weeds-wildflowers/tilt-566556bf1802
['Farah Egby']
2020-07-05 14:46:01.319000+00:00
['Self Improvement', 'Poetry', 'Morality', 'Self-awareness', 'Summer']
Introduction to Sensor Fusion for Self Driving Cars
RADAR Strengths and Weaknesses RADAR stands for Radio Detection And Ranging. There are two stereo cameras located & fixed on the self driving car. The two cameras together work just like our eyes to form a stereo image that allows us to gather both image data and distance information. There is also a camera for the recognition of traffic signals. Often, traffic signals can be located on the other side of an intersection, so usually there is a special lens to give the camera sufficient range to detect the signal from far away. Also, there is a radar, and it sits behind the front bumper. Radars have been in automobiles for years. We can find them in systems like adaptive cruise control, blind spot warning, collision warning and collision avoidance. The Doppler effect is used by radar to measure speed directly. The Doppler effect is the change in the frequency of the reflected waves depending on whether the object is moving away from us or toward us. This is kind of like how a fire engine siren will sound different depending on whether the fire engine is moving away from us or toward us. Because radar waves bounce off hard surfaces, they can provide measurements to objects without a direct line of sight. Radars can see below vehicles, and spot buildings and objects that might be hidden from the field of view. Radar technology is the least affected by weather conditions like rain and fog. Radars have a low resolution in the vertical direction as compared to the cameras. The lower resolution can cause problems related to reflections.
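To make the Doppler point concrete, the two-way Doppler relation that radar speed measurement rests on can be sketched in a few lines. The 77 GHz carrier below is an assumption (a common automotive radar band), not something stated above.
# Sketch of the two-way radar Doppler relation: f_d = 2 * v_r * f_c / c.
# The 77 GHz carrier is an assumed, typical automotive radar frequency.
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # assumed radar carrier frequency, Hz

def radial_speed(doppler_shift_hz):
    """Radial speed (m/s) implied by a measured Doppler shift (Hz)."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

print(radial_speed(10_000.0))  # a +10 kHz shift is roughly 19.5 m/s of closing speed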
https://medium.com/swlh/introduction-to-sensor-fusion-for-self-driving-cars-81b4be57f7fa
['Prateek Sawhney']
2020-11-19 14:34:43.304000+00:00
['Deep Learning', 'Machine Learning', 'Data Science', 'Self Driving Cars', 'Artificial Intelligence']
Signs that you were emotionally neglected or abused as a child
by: E.B. Johnson Childhood is a crucial time and one in which we form our defining ideas on everything from romantic love to happiness. The relationships we share with our parents are important, but they can be damaging too. When you’re the adult child of an emotionally abusive parent, the road is hardly ever a straight one. It’s up to you to find healing, though, and it’s up to you to let in the light of happiness and truth. Though we aren’t responsible for the complex damage done to us by emotionally abusive or neglectful parents, we are responsible for healing that damage in our adult lives so we can find happiness for ourselves. That comes with a big dose of brutal self-acceptance, however, and committing to undoing the damage that’s been years in the making. We can find joy after waking up from an emotionally abusive childhood, but only when we accept both who we are and who we want to be. Childhood memories are rarely a straight road. Our childhood memories often seem to be tinged with a touch of rose-tinted nostalgia. When you think of your childhood amid a stressed adulthood, you’re inclined to remember surprise Santa Claus visits or happy days on the playground with friends. What about the hard stuff, though? What about those tough-to-swallow memories that make us squirm, or that otherwise defined who we are? Emotional neglect occurs when our caretakers fail to appropriately respond to our emotional needs at critical stages in our development. While child abuse is a very intentional act, emotional neglect generally occurs out of ignorance or as the result of an extreme form of narcissism. It’s a failure to act and respond to a child’s emotional needs, and it’s an unwillingness to do the emotional work it takes to be an adequate parent. Because emotional neglect is so subtle, many of us fail to recognize its consequences in our lives until we are well into our adulthood. Overcoming the effects of emotional neglect is a long process, and it takes a certain amount of brutal honesty, applied self-compassion and understanding. Getting ourselves back to true happiness and peace takes learning how to correct these flaws and start loving ourselves for who we are. The 5 facets of childhood emotional abuse. Emotional abuse isn’t just screaming and stomping around. There are very different facets to childhood emotional abuse. From a rejection of needs, to corrupting our sense of right and wrong — these are the 5 facets of childhood abuse which could indicate you were abused or neglected emotionally. Rejection of needs An emotionally abusive parent is one who dismisses the emotional needs of their child, or otherwise refuses to show affection. This might occur by withholding affection when the child is perceived to have done something “wrong” or it may occur outright — by treating the child with cold and distant disdain. The caretaker here is denying the child the affection it needs to thrive or feel secure, thus inflicting deep and long-lasting trauma that can make it hard for those children to have happy and balanced relationships in the future. Unreasonable isolation One of the hallmark signs of abuse is, without question, isolation. Abusers isolate their victims because it limits the chances of discovery, and also allows them to exert greater control over those relationships. 
An emotionally abusive parent might refuse to allow their child to take part in normal activities, or they might (once again) use isolation as a heavy-handed means of “punishment,” though it is ultimately more about control and inflicting distress. Terror, terror, terror Terror doesn’t occur only in the home of the child who receives regular beatings — it’s a foundation of emotional abuse too. Parents and caretakers terrorize their children with the promise of severe punishments, or the threat of something far more sinister, which can cause them to hide or fear opening up. This constant terror creates a climate of threat, which erodes all sense of trust and safety the child has in their home and their caretakers. Ignoring and dismissal Emotional abuse doesn’t just come with terror and threats. It can include dismissal and emotional neglect as well. When a parent goes out of their way to ignore the needs of their child, or if they are suffering from untreated mental illness, it can leave the child with a sense of being unwanted and unconsidered. Children need validation because that validation guides them toward future social skills, abilities and understandings. When they are denied that by their caretakers, it can lead to major emotional upsets later on. Corrupting the senses Perhaps the most insidious aspect of emotional abuse is that of emotional and social corruption. This form of abuse takes place when the parent or caretaker encourages their offspring to engage in malicious or antisocial behavior. They might do this directly, or they might do this by responding to the child only when they are in an extreme or unpleasant emotional state. Only receiving the attention that you need when you’re acting up, hurting yourself, or hurting others leads to the development of false values and even damaging behavioral patterns. Why we don’t always recognize childhood emotional abuse. Unlike physical abuse, the scars of emotional abuse run deep, often far beneath the places we are comfortable looking. Because emotional abuse requires us to bury our pain and our experiences deep, and dismiss our base instincts, it can be hard for us to recognize and accept that we were childhood victims of emotional abuse. Dismissal of needs As adults, we have a toxic way of looking back at the needs of our childhood as “trivial” or not that important. More often than not, we brush off the deep feelings of hurt, loss and rejection as common misunderstandings of childhood, rather than the definitive and scary moments they truly are. Dismissing these needs only further buries them in our subconscious. The longer this goes on, the more serious the side-effects can become. Normalization Normalization is one of the most common (and understandable) reasons that we struggle to resolve — or even accept — our childhood emotional trauma. This occurs when we accept the idea that our experience is common, and therefore invalid. Because everyone has this experience, we start to believe that it doesn’t matter very much. Nothing could be further from the truth, however. Just because something is common does not make it any less traumatic or damaging to who you are. Internalizing trauma Children have an uncanny way of internalizing the bad things that happen to them, and that’s especially true when it comes to emotional abuse. When we internalize our trauma, we start to believe that the poor treatment we received as children is our fault. It’s my fault, we tell ourselves. 
Anything my parents did to me was because I deserved it. It happened because I wasn’t good enough. Internalizing your trauma can cause you to turn away from it, and accept it as an inevitable part of your insecurities. Shamed into silence At one point in time, you might have known your trauma and named it for what it was. Things change, though, and the bonds we share with our families have a big part to play. You may know that it’s wrong, but the familial ties you share with your abusers lead to a shamed silence, or otherwise convince you to bury away your trauma and pretend that it never happened. The longer you allow yourself to be shamed into silence, the further you are pulled into shadow. The most common long-term effects of emotional abuse. Emotional abuse can be subtle and, likewise, the symptoms we exhibit can take a long time to manifest and impact our happiness. An inability to rely on others, or an over-the-top inner critic that blames you for everything, isn’t normal. It’s more commonly a sign that you’re living with the idea that you’re unloveable: an erroneous idea that was implanted in your head by a parent who didn’t live up to their responsibilities to you. Struggling self-esteem Adult children of emotional abuse often find themselves struggling with self-esteem later on in life. Whether they learned to feel unworthy of love, or they learned that they weren’t smart enough, or pretty enough, or successful enough — the way we bond and interact with our parents plays a critical role in every facet of our adult life. Holding onto the emotional traumas you share with your caretakers leads to a perpetuation of those negative and self-limiting beliefs, and forces us to accept the belief that we aren’t good enough or that we deserve to be treated poorly. Constant guilt Because emotional abusers are so skilled at making the victim feel at fault, those victims can also find themselves struggling with a constant and permeating sense of guilt. This constant guilt can cause you to act out, lean into addictive or risky behaviors, or even deny yourself the opportunities that would otherwise allow you to thrive and reach your dreams. It’s important to remember that, as a child, you were totally innocent. It is up to our parents to care for us, and provide us the emotional stability and affection we need. Obsessive privacy If your parents constantly snooped on you, or exposed your private information to people you didn’t trust (or didn’t wish to share that information with), it can lead to a hyper-sense of violation that follows you through life. Because you never felt as though you had a sense of privacy, you might become an overly private or walled-off person in your adult life. Burying all your important thoughts, emotions and vulnerabilities down deep — you might find it hard to connect with others on an authentic level. Senseless relationships Broken relationships with our parents mean broken relationships with our partners, friends and even our own children later on in life. This comes down to the attachment styles we learn from our parents, as well as any other directly or indirectly instilled fears and insecurities, like fear of abandonment. The adult child of an emotionally abusive household might find themselves in emotionally abusive romantic relationships, filled with blowups and lots of drama. It comes down to learning how to reshape the way we react and emote. 
Negative outlook It’s hard to retain a positive outlook when you spend your childhood having your feelings dismissed or demeaned. Adults who were raised under the shadow of emotional abuse might find that their inner critic reigns supreme, and they may struggle to see anyone and anything in a positive light. Because their baselines were formed in the midst of terror, isolation and even abandonment, it can be easier for them to expect the worst. It’s a coping mechanism, and one that’s meant to protect them from continued emotional harm (although it does more damage in the long term). Buried emotions When you’re emotionally abused, you learn that it isn’t safe for you to express your emotions. As a child, when you expressed your emotions they were probably dismissed, or they may have even caused anger in your parent or caretaker. As an adult, this can lead to burying your hard or unpleasant emotions away, leaving them to fester and further manifest through subconscious patterns of behavior and reaction. How to resolve your childhood emotional abuse. If you’re struggling with the fallout of emotional abuse as an adult, you can find peace, but it takes a lot of hard work. Not all emotional abuse is created equal. Some hurts run deep and require the help of a mental health professional to overcome. For everything else, however, we are the only life raft that there is. If you’re struggling to find love in your life, you have to learn how to love yourself, and let go of the things your parents couldn’t give you. 1. Get honest about what went wrong The first step in recovering from the emotional abuse in your childhood is to get honest about what went wrong. This means accepting what happened, and accepting your parents and caretakers for the mistakes that they made. You have to let go of the internalized guilt and understand that — as a child — you were not at fault for the pain in your past. Get honest about what went wrong and use that knowledge to create a plan moving forward. Start small, and ease into the waters of acceptance with a gentle journalling or meditation practice. Find a space where you can think uninterrupted, and put yourself back in a time when you felt that your parent didn’t listen to you, or didn’t respond to your emotional needs. Watch the scene replay as though you were distantly removed. How did your parent react to you? How did that make you respond? Consider that child as if it were your own, and let those emotions come back to you as if you were back in that moment. Understand that your parents were human, with all the complex and flawed emotions and experiences that you have too. Use that understanding to cultivate an acceptance of what happened to you, so you can figure out how to fix it. As you get more comfortable looking back into your past, lean into the big things. Don’t shy away from the truth… no matter how much that might hurt. 2. Lean into your boundaries If you don’t carve out the mental space you need to detach from who and what was, you won’t be able to break free of the shackles your family past has over you. Emotionally toxic or damaging childhoods never go away. They follow us, manifesting again and again in a number of different manners that undermine our overall mental and emotional health. We have to set boundaries in order to let the healing process come full circle. Have enough respect for yourself to set boundaries with those who injure you more than they lift you up. 
Do whatever you need to do to protect yourself, and honor your worth by letting others know what you will and will not tolerate. If the emotionally distant parent is still in your life, communicate your needs to them and let them know that those needs take priority in your life. Embrace the emotions that make you uncomfortable and recognize the people and the triggers that bring out the best in you and your psyche. Learning to love ourselves takes time and effort, but knowing our worth isn’t difficult. As a human alive on this earth, you’re worth all the happiness, love and effort in the world. Only you can allow someone else to deny you that. When you start to recognize this, you’re on the path to being whole again. 3. Find comfort in your emotions Growing up in a household that is devoid of the right emotional connection can make it hard for you to recognize your emotions, and even harder for you to manage them. More often than not, our caretakers distance themselves emotionally because they don’t know how to deal with their own emotions. Only by learning how to confront our emotions can we deal with them efficiently and get back to the happiness we deserve. When we find ourselves in a stressful event, we often feel a flood of emotions all at once, which makes it hard to process and orientate ourselves. Though we are often told the best way to deal with these emotions is to ignore them, we actually gain more benefits by learning how to identify each emotion as it’s experienced, a technique that’s known as emotional differentiation. Differentiation stops negative emotions from getting worse by building up our confidence in facing them. It allows us to identify what we’re feeling and (eventually) why we’re feeling that way, which leads to true resolution and clarity and, thus, higher levels of happiness and contentment. When we learn how to see our emotions for what they are — and where they come from — we can accept them and then get better at managing them. It’s like being a manager in a restaurant. If you really want to be effective, you have to get to know your staff and figure out what works best for everyone. 4. Get professional help Facing and resolving the emotional neglect in our pasts is not something that we can always do alone, and it’s not something that can be managed simply with the help of a few good friends. Sometimes it’s necessary to find a specialist when dealing with childhood neglect of every kind, but it’s important to make sure you’re finding the right person to help you resolve past issues. Trauma symptoms vary from case to case and as such need to be assessed by qualified and experienced trauma professionals. Finding a therapist who has experience treating trauma like yours can take time, but cognitive-behavioral therapists and EMDR professionals are a good place to start. Take your time and don’t rush into anything that doesn’t feel right. A professional can help you get to the root of your problems, but you need to be ready to open up and need to know what direction you want to head in. Healing is hard, but living eternally in pain is harder. If you think you need more serious help, reach out for it. When you feel better physically, you have more strength to engage in the mental and emotional war of healing and resolution. This puts our overall wellness in clearer focus and makes our efforts to heal more effective and less costly in the long run. 5. 
Practice self-compassion The act of applied self-compassion is a powerful tool in recovering from the pain caused by emotionally neglectful parents. Self-compassion is not self-kindness, and it’s not self-pity either. It’s taking an active role in your own healing, and it involves embracing your faults, mistakes and suffering as readily as you celebrate your joys, successes and triumphs. When we utilize real self-compassion in our lives, we extend the same kindness, caring and understanding to ourselves as we would to a friend or a loved one. According to Dr. Kristin Neff, a pioneering researcher in self-compassion, there are 3 core components to true and realized self-compassion. More than just being nice to yourself, you also have to dig deep into your common humanity and become mindful of the way you both react and interact with your real, internal self. Self-compassion is a powerful tool when we know how to wield it, but it takes a big commitment and it takes a lot of work each day to build. Adding it to our lives means finding happiness, however, and discovering that true beauty and joy is one of the most beautiful gifts we can give ourselves. Look at things from the perspective of your inner child. Are you finally standing up for the little boy or little girl and protecting them, the way they should have been protected all those years ago? Be mindful of yourself, and be mindful of your needs (both emotional and physical). Let go of your need to be perfect for anyone, and instead focus on becoming the best version of yourself, for yourself. If there’s something you don’t like about yourself, make a plan to change it, but only after looking it boldly in the face and accepting it for what it is. Spend a few minutes each day practicing this, and use it to get beyond the pain of your estrangement. 6. Be more grateful Gratitude is one of the best ways we can deal with our negative emotions. It doesn’t matter who you are, or whether you’re surrounded by a million people you love or not. If you’re a living human being — you have something to be grateful for. Big or small, there are beautiful things all around us that have the ability to give our lives meaning, or remind us of the good things that are just within our reach. Take 5 minutes to sit down each day and make a list of all the things in your life that you’re grateful for. List the great things in your life and the things that make you smile. Read through the list a few times and make sure not to forget the simple things. You’ll start to really connect with yourself and your emotions when you begin to remember that it’s not all doom and gloom. There’s something out there for everyone to love in life, and if you haven’t found that yet, it’s time to get started. The greatest thing about happiness is that it is not a luxury commodity — it’s a state of being that exists, naturally, within each and every one of us. You don’t need your parents or your siblings or anyone else to be happy. That’s something that can only be generated from within and shared without. 7. Re-parent yourself the right way When we are hurt by our parents, we often go out looking for healing in all the wrong places. We turn to other people, to drugs, to alcohol — all in the search of the love we were denied when we needed it most. Not being taught how to properly manage our emotions (the good and the bad) can result in associating happiness with the feeling of pleasure, when that’s not necessarily true. There is no salvation in pleasure alone.
The problem with that is that no one else can save us. Only we can save ourselves. Sometimes, you have to step up and be the parent you always deserved for yourself. This means treating yourself well and checking in on how you’re feeling and how you’re doing. Be a mentor for yourself; an advocate for yourself. Do all the things a caring mother or father would do, and do it with complete radical abandon. Find activities that bring you peace and joy, and be kind and gentle with yourself and the way you see the world. Work hard to build up the confidence that was wrecked by a dismissive or emotionally distant parent, and celebrate your strengths and victories every single day. Write notes to yourself and start a mindful journaling practice that lets you get back in touch with that scared, broken little child that’s hiding deep inside. Learn how to love yourself and the rest of the world will follow. Give yourself a gift that never quits giving and be the parent you always needed. 8. Connect with your support networks Substituting our unhealthy family relationships for ones that better suit our lifestyles and emotional needs is a good way to cut ties and find your way back to healing. It can be helpful to allow your attention to center on the healthy relationships that bring joy into your life, rather than the ones that attract nothing but negativity. There is no law that says family is blood and blood alone. You can choose your family, and you can choose people who provide emotional fulfillment. Get comfortable talking about how you feel, and find a friend you can trust who is willing to listen to you vent. Let them know exactly how your childhood or upbringing is still causing you to struggle, and let them know you need a willing shoulder (and a willing ear) to listen to you work things out on a regular basis. Always make sure, however, that you have their consent before unloading. Not everyone has the ability to process our emotions and experiences in the same way, even if we can trust them not to spread our business. The family and chosen family we surround ourselves with is important, and can be especially important when it comes to creating the lives we want. If you’re struggling to let go of a toxic or emotionally damaging family member, re-establishing abandoned ties with your own outside support networks can be a great way to get back in contact with who you are. This is because our relationships allow us to get a better grip on our perspective. And that makes all the difference when it comes to fulfillment and joy. Putting it all together… There are a number of ways in which our parents can emotionally wound or scar us, all of them resulting in deep-seated pain that can cause serious problems in our adolescent and adult lives. When we fail to recognize and deal with this pain, it follows us, and can lead to a lot of negative side effects and coping mechanisms that eat away at who we are and the future we’re trying to build for ourselves. If we want to heal, we have to dig deep and get focused on our needs, fulfilling them with honesty and compassion. Get in touch with your past and start embracing those experiences for what they are — the good and the bad. Dig deep and realize that you have every right to feel how you feel. Our emotions don’t come from nowhere. They are a reaction to the stimuli in our environment. Be honest about your needs, and reach out to a professional for further resolution if needed.
Gratitude and compassion are two of the greatest gifts we can give ourselves when we’re trying to recover from a childhood that was lacking in affection and emotional connection. Give yourself a second childhood and become the parent you never had. Fall in love with your strengths and recognize that facing your weaknesses empowers you. Find people you trust and gather them closely. When we open up to our support networks, we can unlock powerful healing we didn’t even know was possible. Sort out your boundaries and use them to boost yourself into an emotionally empowered future. The shackles of your childhood don’t have to be the prison of your future. Let go and find the healing you’re seeking for yourself.
https://medium.com/lady-vivra/accepting-emotional-neglect-in-childhood-1edf44bf908e
['E.B. Johnson']
2020-04-21 06:46:01.393000+00:00
['Self Improvement', 'Self', 'Childhood', 'Mental Health', 'Emotional Abuse']
Sentiment Analysis Using Python and NLTK
How are we going to be doing this? Python, being Python, apart from its incredible readability, has some remarkable libraries at hand. One of which is NLTK. NLTK, or Natural Language Tool Kit, is one of the best Python NLP libraries out there. The functionality it leaves at your fingertips while maintaining its ease of use and, again, readability is just fantastic. In fact, we’re going to be completing this mini project in under 25 lines of code. And you’re most probably going to understand each line as you read through it. Crazy, I know. Let’s get right into it! IDE Personally, whenever I’m doing anything even relatively fancy in Python, I use Jupyter Lab. Being able to see what each line does makes it really easy to debug and it’s also strangely therapeutic. Shrugs. Jupyter Lab But you’re free to use whatever you want. It’s a free world. Mostly. 2. Dependencies Now, we’ve got to get hold of the libraries we need. Just 4 super easy to get libraries. NLTK Numpy Pandas Scikit-learn To install NLTK, run the following in the terminal pip install nltk To install Numpy, run the following in the terminal pip install numpy To install Pandas, run the following in the terminal pip install pandas To install Scikit-learn, run the following in the terminal pip install scikit-learn So intuitive. I mean, come on, it really can’t get any easier. Time to code First things first. Let’s import NLTK. import nltk Now, there’s a slight hitch. I did say 4 dependencies, didn’t I? Ok, here’s the last one, I swear. But this one’s programmatic. nltk.download('vader_lexicon') # one time only This is going to go ahead and grab, well, the vader_lexicon. What is this ‘VADER’? While this is the official page for NLTK’s VADER, it’s actually the code and not an explanation of VADER, which, by the way, does not refer to Darth Vader, very sad, I know. It actually stands for Valence Aware Dictionary and sEntiment Reasoner. It’s basically going to do all the sentiment analysis for us. So convenient. I mean, at this rate jobs are definitely going to be vanishing faster. (No, I’m kidding) The way this magical downloadable works is by mapping the words you pass into it to lexical features with emotional intensities. In English, since you ask, that means using, let’s just call them synonyms for now, to figure out what each word relates to and then giving it a score. A sentiment score, to be precise. So now that each word has a sentiment score, the score of a paragraph of words is going to be, you guessed it, the sum of all the sentiment scores. Shocking, I know. Now, you might go thinking: ok, fine, it goes ahead and gets the score of each word. But does it understand context? Like for example, the difference between did work and did not work? DUH!!! I mean otherwise why would it be ‘one of the best’? Another really important thing to keep in mind is that VADER actually pays attention to capitalization and exclamations. It will give a higher positive score to AWESOME!!!!! than AWESOME and awesome. That’s it class, theory’s over. Now, back to business Let’s now import the downloaded VADER module. from nltk.sentiment.vader import SentimentIntensityAnalyzer and then make an instance of the SentimentIntensityAnalyzer by doing this vader = SentimentIntensityAnalyzer() # or whatever you want to call it By now your code should look something like this Code snippet Upon running it, you should see something like this.
If you get the same error as me, don’t worry, it’s basically warning you that the Twitter module from NLTK is not installed and so you won’t be able to tap into that functionality. Code snippet 2 Now let’s try out what this ‘VADER’ can do. Write the following and run it sample = 'I really love NVIDIA' vader.polarity_scores(sample) Code snippet 3 So, it was 69.2% positive. Which might not be perfect, but it definitely gets the job done, as you’ll see. In case you’re wondering, the compound value is basically a normalization of the three values: negative, positive and neutral. Now, try this sample = 'I really don\'t love NVIDIA' vader.polarity_scores(sample) Code snippet 4 54.9% negative, whew, by the skin of its teeth. Now let’s work on some real world data Here’s a file with Amazon reviews of a product from which we’re going to be extracting sentiments. Go ahead and download it. Also ensure that it’s in the same directory as the Python file you’re working on. Otherwise remember to add the correct path to it. We’re going to be needing both pandas and numpy now import numpy as np import pandas as pd df = pd.read_csv('wherever you stored the file.tsv', sep='\t') df.head() In the above code, we’ve initialized a Pandas DataFrame object and called it to view the top 5 objects in the dataframe. This dataset already has all the reviews categorized under positive and negative. This is just for you to cross-check the values you get back from VADER and calculate your metrics. To see how many positive and negative reviews we have, type in the following df['label'].value_counts() Code snippet 5 Let’s try one of the objects out, shall we? But before we do that, let’s ensure that our dataset is nice and clean, i.e., ensure that there aren’t any blank objects. df.dropna(inplace=True) empty_objects = [] for index, label, review in df.itertuples(): if type(review)==str: if review.isspace(): empty_objects.append(index) df.drop(empty_objects, inplace=True) This little convenience snippet will drop any blank dataframe objects. The inplace=True argument ensures that the dataframe keeps the changes made by dropping any blank objects, and doesn’t cheekily throw them away despite all our effort. Very much like a commit in Git. Code snippet 6 However, this particular dataset had no empty objects, but still, it doesn’t harm to be careful. Currently there are a couple of problems: We can’t compare the extracted sentiment to the original sentiment, as doing that for each sentiment is time consuming and, quite frankly, completely caveman. The extracted sentiment is printed out, which, in my opinion, is plain flimsy. Let’s fix it. Let’s add the sentiment to the dataframe alongside its original sentiment. df['scores'] = df['review'].apply(lambda review: vader.polarity_scores(review)) df.head() The above code will create a new column called ‘scores’ which will contain the extracted sentiments. Code snippet 7 But currently the scores column has just the raw sentiment, which we can’t really compare programmatically with the ‘label’ column, which already has all the data, so let’s find a workaround. Let’s use the compound value. Code snippet 8 If the compound value is greater than 0, we can safely say that the review is positive, otherwise it’s negative. Great! Let’s implement that now! Code snippet 9 Well then let’s check our score now, shall we? Code snippet 10 There’s definitely room for improvement.
But, do keep in mind that we got this score without making any changes to VADER and that we didn’t write any custom code to figure out the sentiment ourselves. Alright then, if you have any queries feel free to post them in the comments and I’ll try to help out! Peace.
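P.S. For reference, here is the whole flow in one place. This is a minimal sketch that assumes the same TSV layout used above (a ‘label’ column with pos/neg values and a ‘review’ column); the file name is a placeholder, so point it at wherever you saved the data.

import nltk
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.metrics import accuracy_score

nltk.download('vader_lexicon')  # one time only
vader = SentimentIntensityAnalyzer()

# 'amazonreviews.tsv' is a placeholder path
df = pd.read_csv('amazonreviews.tsv', sep='\t')

# Clean-up: drop missing rows and whitespace-only reviews
df.dropna(inplace=True)
empty_objects = [index for index, label, review in df.itertuples()
                 if isinstance(review, str) and review.isspace()]
df.drop(empty_objects, inplace=True)

# Score every review, then derive a pos/neg prediction from the compound value
df['scores'] = df['review'].apply(lambda review: vader.polarity_scores(review))
df['compound'] = df['scores'].apply(lambda s: s['compound'])
df['prediction'] = df['compound'].apply(lambda c: 'pos' if c > 0 else 'neg')

# Compare the predictions against the original labels
print(accuracy_score(df['label'], df['prediction']))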
https://medium.com/swlh/sentiment-analysis-using-python-and-nltk-library-d68caba27e1d
['Pranav Manoj']
2020-12-06 16:43:54.978000+00:00
['Data Science', 'Sentiment Analysis', 'Python', 'Pandas Dataframe', 'NLP']
All aboard for One Planet Youth
by Thomas Gomersall Launched by WWF in 2017, the One Planet Youth (OPY) programme seeks to equip young people to become the next generation of conservation leaders and environmental spokespersons. OPY comprises a Citizen Science programme and a Leadership Training Programme, which seeks to harness the passion, energy, motivation and creativity of young people to preserve our planet. “[Young people] are the leaders of tomorrow, so they are the ones who should be empowered to contribute to sustainability and conservation,” says Yamme Leung, Director of Education for WWF-Hong Kong. “Our OPY programme is the ideal platform to empower our youth to take up their new responsibilities to save our planet.” Each year, the programme recruits 30 young people between the ages of 15 and 30 for an immersive, educational experience to actively engage them in WWF’s work and to make their own distinct contributions to conservation. To date, nearly 1,000 youths have taken part in OPY, whether or not they have a background in environmental science. “Around 50 per cent of our participants are studying ecology or biology. The rest know almost nothing about it,” says Augustine Chung, Senior Education Officer for WWF-Hong Kong and supervisor of the OPY Leadership and Training programmes. “They are just eager to learn more and to experience nature.” OPY members take part in a three-day orientation camp at Hoi Ha Wan. Photo credit: WWF-Hong Kong Running from October to July, OPY begins with a three-day orientation camp at which participants learn about local environmental issues through exploring natural habitats and seeing firsthand the threats facing them, as well as learning survey techniques used in conservation. Just as importantly, it is an opportunity for participants to come up with ideas for their own group projects and to build the skills and team relationships they will later need to carry these out. Plus, with activities like snorkelling in Hoi Ha Wan or dolphin-watching included, it’s also a lot of fun. OPY leaders carry out biodiversity surveys at Mai Po Nature Reserve. Photo credit: WWF-Hong Kong Over the next several months, participants must then contribute at least 40 service hours to supporting WWF’s conservation and education work, including (but not limited to) biodiversity checks at Mai Po and Hoi Ha Wan and surveying marine litter in Hong Kong. OPY findings contribute to WWF’s environmental work. “The beauty of citizen science is that we have lots of people and they can do large-scale surveys and monitoring,” says Chung. OPY Leadership team members gather for a mid-term review online in April. Photo credit: Matt Ming WWF-Hong Kong Participants are also required to develop their own conservation projects, for which WWF will provide consultancy, financial support and technical advice. For instance, if a group is doing a project on fast fashion, WWF’s sustainability team will offer advice on how best to approach it. In April, groups convened for a mid-term workshop to report on their progress and to gain advice on how to proceed based on peer reviews. The main implementation of the projects then takes place over the following two months. One OPY project seeks to educate the public on the importance of bats to our ecology. Photo credit: Gary Ades Five projects are currently underway for the 2019–2020 programme, mostly aimed at helping Hong Kongers to live more environmentally friendly lives.
For instance, one project aims to educate the public about bats, negative opinions of which have only intensified in the age of COVID-19. To do this, the project leaders conducted online surveys on people’s thoughts on bats, whilst giving them an opportunity to learn more about the importance of bats and observe real ones in their roost. They then surveyed respondents after this experience to see if their views had changed. Unfortunately, for some, old prejudices die hard. But in a more encouraging sign, all respondents surveyed so far said that they recognised the importance of not encroaching on bat habitats and were more prepared to live in harmony with bats as a result. Promoting consumption of locally sourced produce is another project being undertaken by an OPY team. Photo credit: Thomas Gomersall Another OPY project aims to encourage Hong Kongers to eat more locally sourced produce, as the city currently imports 90 per cent of its food from as far away as Brazil and the US. Setting an example, the leaders of this project have committed to buying food from markets that sell locally grown vegetables and keeping records on the origins of not only the products they buy, but also the ingredients used to make them. This makes buying some products harder than others, particularly bread, as the ingredients used to make most bread sold in Hong Kong are imported from all over the world. However, it is hoped that this project will inspire more Hong Kongers to choose local produce over imports and thus diminish the city’s large food-related carbon footprint. Conservation projects such as a recent trip to Bhutan allow OPY members to apply their learnings from the past year. As an added bonus, OPY participants also get the chance to take part in conservation projects abroad. Past examples have included assisting in tiger conservation in India, taking part in a City Nature Challenge activity in Malaysia and most recently, helping out at a WWF camp in Bhutan to involve local children in environmental activities. But the benefits of the programme don’t end with the graduation ceremony in July. OPY alumni can then go on to assist WWF in workshops, leading programmes and guiding eco-tours and citizen science surveys. OPY also teaches young people skills in research, organisation and leadership useful for future careers. Perhaps most importantly, it instils in them a passion to continue spreading the word of the need to protect our planet to those around them. “The nine months spent with the OPY programme was enjoyable and helped me become more mature,” says Karol Fong, OPY project leader for the 2017–2018 programme. “With my passion, I hope to share experiencing the beauty of nature.”
https://medium.com/wwfhk-e/all-aboard-for-one-planet-youth-50208b621593
['Wwf Hk']
2020-04-30 07:56:00.872000+00:00
['Youth', 'Environmental Education', 'Hong Kong', 'Sustainability', 'Conservation']
Sketch Tutorial. A step by step tutorial to discover…
After a bit more than a month using Sketch 3, I feel confident enough to share a part of my workflow in the form of a tutorial. To do so, I’m going to describe how to create the Colorful switch freebie I made, as it uses a lot of interesting features Sketch has to offer and is not extremely long to do. I want to keep this as simple but thorough as possible. This tutorial is designed for beginners, so we’re going to take the time to describe a lot of things you may already know. Here’s the expected result: Side note: I’m not all-knowing and I definitely do not deliver the holy one and only method to design things, so if you have feedback or inputs, please feel free to comment on either this Google+ or Facebook post. I’m always interested in learning things as well and it will be beneficial for everybody. In case you get lost in the steps or if something is unclear, there is a halfway .sketch file you can download and the final source is also available. Alright, let’s get to it. 01_Installing Sketch This one is a tough one☺. If you do not already own it, you can download a free trial or directly buy it from the App Store. Install it and launch it. When you see the prompt, do not open any specific template and just click “ok” to open a new document. You’ll see this. 02_creating an artboard Artboards are “work areas”: they can be as small as an icon or as big as you like. If you used Illustrator before, it’s the same thing. Press “A” on your keyboard or hit the “Insert” button at the top left and select artboard (1). As you can see, a lot of convenient sizes are now available on the right column. We’re not going to use that. Simply draw your artboard of any size on your canvas. Once this is done, go to your right panel and, under “size”, enter 400x300 (2). This freebie was intended to be a Dribbble shot from the start. In your artboard/layer panel (left side), double click the “Artboard 1” label and rename it whatever you like. I hesitated between “Glørk the destroyer” and “Colorful switch”; I went for the latter as it was somehow more descriptive (3).
https://medium.com/google-design/sketch-tutorial_01-b76271a095e3
['Sebastien Gabriel']
2018-06-28 20:47:58.406000+00:00
['Sketch', 'How To', 'Design']
Word Embedding, Character Embedding and Contextual Embedding in BiDAF — an Illustrated Guide
Additional Details on 1D-CNN The section above only presents a very conceptual overview of the workings of 1D-CNN. In this section, I will explain how 1D-CNN works in detail. Strictly speaking, these details are not necessary to understand how BiDAF works; as such, feel free to jump ahead if you are short on time. However, if you are the type of person who can’t sleep well without understanding every moving part of an algorithm you are learning about, this section is for you! The idea that motivates the use of 1D-CNN is that not only words as a whole have meanings — word parts can carry meaning, too! For example, if you know the meaning of the word “underestimate”, you will understand the meaning of “misunderestimate”, although the latter isn’t actually a real word. Why? Because you know from your knowledge of the English language that the prefix “mis-” usually indicates the concept of “mistaken”; this allows you to deduce that “misunderestimate” refers to “mistakenly underestimate” something. 1D-CNN is an algorithm that mimics this human capability to understand word parts. More broadly speaking, 1D-CNN is an algorithm capable of extracting information from shorter segments of a long input sequence. This input sequence can be music, DNA, voice recording, weblogs, etc. In BiDAF, this “long input sequence” is words and the “shorter segments” are the letter combinations and morphemes that make up the words. To understand how 1D-CNN works, let’s look at the series of illustrations below, which are taken from slides by Yoon Kim et al., a group from Harvard University. Let’s say we want to apply 1D-CNN on the word “absurdity”. The first thing we do is represent each character in that word as a vector of dimension d. These vectors are randomly initialized. Collectively, these vectors form a matrix C. d is the height of this matrix, while its length, l, is simply the number of characters in the word. In our example, d and l are 4 and 9, respectively. 2. Next, we create a convolutional filter H. This convolutional filter (also known as a “kernel”) is a matrix with which we will “scan” the word. Its height, d, is the same as the height of C, but its width w is a number that is smaller than l. The values within H are randomly initialized and will be adjusted during model training. 3. We overlay H on the leftmost corner of C and take an element-wise product of H and its projection on C (a fancy word to describe this process is taking a Hadamard product of H and its projection on C). This process outputs a matrix that has the same dimensions as H — a d x w matrix. We then sum up all the numbers in this output matrix to get a scalar. In our example, the scalar is 0.1. This scalar is set as the first element of a new vector called f. 4. We then slide H one character to the right and perform the same operations (get the Hadamard product and sum up the numbers in the resulting matrix) to get another scalar, 0.7. This scalar is set as the second element of f. 5. We repeat these operations character by character until we reach the end of the word. In each step, we add one more element to f and lengthen the vector until it reaches its maximum length, which is l - w + 1. The vector f is a numeric representation of the word “absurdity” obtained when we look at this word three characters at a time. One thing to note is that the values within the convolutional filter H don’t change as H slides through the word. In fancier terms, we call H “position invariant”.
The position invariance of the convolutional filters enables us to capture the meaning of a certain letter combination no matter where in the word such a combination appears. 6. We record the maximum value in f. This maximum can be thought of as the “summary” of f. In our example, this number is 0.7. We shall refer to this number as the “summary scalar” of f. This process of taking the maximum value of the vector f is also referred to as “max-pooling”. 7. We then repeat all of the above steps with yet another convolutional filter (yet another H!). This convolutional filter might have a different width. In our example below, our second H, denoted H’, has a width of 2. As with the first filter, we slide H’ across the word to get the vector f and then perform max-pooling on f (i.e. get its summary scalar). 8. We repeat this scanning process several times with different convolutional filters, with each scanning process resulting in one summary scalar. Finally, the summary scalars from these different scanning processes are collected to form the character embedding of the word. So that’s it — now we’ve obtained a character-based representation of the word that can complement its word-based representation. That’s the end of this little digression on 1D-CNN; now let’s get back to talking about BiDAF. Step 4. Highway Network At this point, we have obtained two sets of vector representations for our words — one from the GloVe (word) embedding and the other from the 1D-CNN (character) embedding. The next step is to vertically concatenate these representations. This concatenation produces two matrices, one for the Context and the other for the Query. Their height is d, which is the sum of d1 and d2. Meanwhile, their lengths are still the same as those of their predecessor matrices (T for the Context matrix and J for the Query matrix).
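To make steps 1 to 8 (and the concatenation in Step 4) concrete, here is a small NumPy sketch. It is an illustration only: the dimensions mirror the “absurdity” example above, and the random values stand in for parameters that would normally be learned during training.

import numpy as np

rng = np.random.default_rng(0)
d, l = 4, 9                        # character-vector dimension, word length
C = rng.standard_normal((d, l))    # matrix C for the word "absurdity"

def scan(C, H):
    # Slide filter H across C; each position contributes one scalar to f
    w = H.shape[1]
    positions = C.shape[1] - w + 1  # f has l - w + 1 elements
    return np.array([np.sum(H * C[:, i:i + w]) for i in range(positions)])

# A few convolutional filters of different widths; each yields one
# summary scalar via max-pooling over its f vector
filters = [rng.standard_normal((d, w)) for w in (2, 3, 3, 4)]
char_embedding = np.array([scan(C, H).max() for H in filters])
print(char_embedding.shape)        # (4,) -- one entry per filter, so d2 = 4 here

# Step 4: vertically concatenate a word embedding with the character embedding
d1 = 6
word_embedding = rng.standard_normal(d1)  # stand-in for a GloVe vector
combined = np.concatenate([word_embedding, char_embedding])  # length d = d1 + d2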
https://towardsdatascience.com/the-definitive-guide-to-bidaf-part-2-word-embedding-character-embedding-and-contextual-c151fc4f05bb
['Meraldo Antonio']
2019-09-05 07:21:22.458000+00:00
['Machine Learning', 'Data Science', 'Computer Science', 'Artificial Intelligence', 'NLP']
Different Behaviors of Javascript Functions
A Look at What Arrow and Keyword Functions Do to “this” During my time in the software engineering program at the Flatiron School, I always thought that functions declared with the function keyword and arrow functions behaved the same, and that arrow functions were just a more advanced way of writing them. It wasn’t until I made a lazy mistake (this IS the best way to learn) that I realized that these two ways of declaring functions have different effects on the this keyword. Let’s take a look at the two functions below and what happens when we invoke them: function sayMyName () { name = "Travis" return this.name } sayMyName() // => "Travis" sayMyName = () => { name = "Travis" return this.name } sayMyName() // => "Travis" As you can see, both of these functions return “Travis” when they are invoked. In both cases, this is inside of the scope of both of these functions and this.name refers to the name attribute of that function. The change of behavior happens when we try to call these functions on an object. Let’s consider just the function keyword version and see what happens when we invoke it on an object that has a name key/value pair and a reference to the function: function sayMyName () { name = "Travis" return this.name } let catt = { name: "Catt", sayMyName: sayMyName } let scott = { name: "Scott", sayMyName: sayMyName } sayMyName() // => "Travis" catt.sayMyName() // => "Catt" scott.sayMyName() // => "Scott" As you can see above, calling a function that uses the function keyword changes what this refers to. In this case, this refers to the object that the function is invoked on, either the catt object or the scott object. Now let’s look at the arrow function version: sayMyName = () => { name = "Travis" return this.name } let catt = { name: "Catt", sayMyName: sayMyName } let scott = { name: "Scott", sayMyName: sayMyName } sayMyName() // => "Travis" catt.sayMyName() // => "Travis" scott.sayMyName() // => "Travis" In this case, it returns “Travis” every time, because when this is used in an arrow function, it is bound to the scope of the function that it is inside of, therefore binding this.name to the value of “Travis” no matter what object the function is invoked on. This binding to the enclosing scope is exactly why we always want to use arrow functions in class-based components in React.js. Consider the following basic component: class Button extends React.Component { state = { greeting: "Hello World!" } clickHandler () { this.setState({greeting: "Goodnight World!"}) } render() { return ( <> <h1>{this.state.greeting}</h1> <button onClick={this.clickHandler}>Click Me</button> </> ) } } export default Button The expected action here would be to change the text on the screen from “Hello World!” to “Goodnight World!” when the button is pushed. When the button is pressed, however, we get an error of Cannot read property 'setState' of undefined. Because we are using the function keyword, the value of this is bound to the object that we called it on, which in this case is nothing, making it undefined. To make this work, simply change the format of the clickHandler function into an arrow! class Button extends React.Component { state = { greeting: "Hello World!" } clickHandler = () => { this.setState({greeting: "Goodnight World!"}) } render() { return ( <> <h1>{this.state.greeting}</h1> <button onClick={this.clickHandler}>Click Me</button> </> ) } } export default Button
https://medium.com/dev-genius/different-behaviors-of-javascript-functions-7e2e07d52351
['Travis Prol']
2020-11-30 20:55:34.209000+00:00
['JavaScript', 'Functions In Javascript', 'React', 'This', 'Software Development']
Writers of ILLUMINATION-Curated
Writer Bios Writers of ILLUMINATION-Curated Explore accomplished writers from their pen Photo by John Jennings on Unsplash Writer bios are useful tools to create visibility and collaboration on Medium. In the earlier days of ILLUMINATION, Dr Mehmet Yildiz compiled writer bios, and readers enjoyed exploring their favourite writers from this collection. Many writers use their bios at the end of their stories. Many writers reported that they received a substantial number of views on their bio stories when they added them to the end of their stories for more information. Writer bios are excellent tools for writers to give their readers more information about their background. Based on feedback from our readers, we decided to collect bios of writers contributing to ILLUMINATION-Curated, and Dr Yildiz sent an invitation. Since the invitation, we have received several bios. They are interesting, informative, and engaging. This is a new initiative on ILLUMINATION-Curated. Now that we have invited more writers, we believe many more writers will submit their bios. We want to introduce our writers to readers. Writer bios are unique tools for introduction because the content comes from the writers’ own creative pens. Discover writers of ILLUMINATION-Curated More to be added soon… Editors of ILLUMINATION and ILLUMINATION-Curated Our editors work hard to help our writers succeed. They invest their precious time volunteering for the smooth publishing of your stories. ILLUMINATION, Dr Mehmet Yildiz, Tree Langdon, Brian E. Wish, PhD, Dr Ron Pol, Dr Michael Heng, Dr John Rose, Paul Myers MBA, Karen Madej, Joe Luca, Sylvia Love Johnson, Dipti Pande, Timothy Key, Kevin Buddaeus, Kate Maxwell, Arthur G. Hernandez, Bill Abbate, Michael Patanella, Aurora Eliam, CMP, René Junge, Charlotte Zobeir Ali, Geetika Sethi, Ahmed Jamal, Britni Pepper, Selma, Earnest Painter, Dew Langrial, B. A. Cumberlidge, Lanu Pitan, Lynn Dorman, Ph.D.; J.D., Agnes Laurens, EP McKnight, MEd, CR Mandler MAT, The Maverick Files, Sumera Rizwan, Liam Ireland, Tony Young, Jr., Neha Sandhir S, Desiree Driesenaar, Stuart Englander INVITATION TO JOIN BOTH PUBLICATIONS: “ILLUMINATION” AND “ILLUMINATION-CURATED” ILLUMINATION is an innovative serendipity platform with optimistic writers, editors, and readers collaborating to create synergy. You too can join, engage, and make your dreams real. Our writers and readers are diverse, inclusive, and supportive. Like 4,600+ other writers, you too can confidently share your content without worrying about rejection and censorship. We can improve your content for higher quality and increase the chance of curation, and you can collaborate with our top writers and model their success. As a bonus, we love supporting writers and amplifying success stories with facts. If you are a new writer, we recommend joining ILLUMINATION. If you are an accomplished writer with a reasonable curation rate or good knowledge of Medium curation guidelines, you can request access to ILLUMINATION-Curated. We will help you transform.
https://medium.com/illumination-curated/writers-of-illumination-curated-d1cf5c601466
[]
2020-12-06 12:20:27.093000+00:00
['Illumination Curated', 'Relationships', 'Writing', 'Technology', 'Illumination']
How 2020 Was the Year I Got My Life Back
Photo by Edwin Hooper on Unsplash As 2020 comes to a close, there’s a lot to look back at and think about. We all had to make major unforeseen life adjustments and many had to deal with tragic loss and heartache. And while I think everyone wants to look back at this year as the worst year on record, I can’t help but think about how this year gave me a new chance at life. I think about the following lessons I learned and realize that, in many ways, 2020 was truly a blessing in disguise. 1. Gratitude Photo by Nicholas Bartos on Unsplash I started 2020 still dealing with debilitating symptoms caused by an autoimmune arthritis condition that first showed up over two years ago. Since that time I had been in and out of the doctor undergoing test after test and trying many different medications, but to no avail and no specific diagnosis. It wasn’t until right before the quarantine lockdown that I finally got the medicine I needed to get relief from the persistent pain and swelling that had kept me from living a normal life. Then COVID forced us all to stay home. What started out as two weeks turned into a much longer time and completely upended everyone’s view of “normal life.” The uncertainty alone brought on high anxiety and fear of what things would look like and how we were all going to adapt to such isolation. And while it was difficult at first, people quickly adapted, including myself. Looking back, I feel extreme gratitude for how quarantine played out, and for many reasons. Firstly, I am grateful to those who put their lives on the line each and every day to serve and protect those of us who had the luxury of working from home. We owe those people a world of gratitude for their sacrifice. This includes teachers, a profession I left just last year as a result of my arthritis diagnosis, and I feel extremely thankful that I did. Instead, I listened with horror and sadness to my teacher friends as they shared how things at school were going. The worst part was hearing how leaders (from school administrators all the way to the U.S. president) were handling the situation. Not only did many leaders implement botched plans for how to re-open safely, but they also showed little appreciation for the hard work and dedication of those putting their lives in danger on the frontlines. Yet despite the stupidity at the top, there was so much hope coming from people doing the right thing, protecting themselves and others, and lifting each other up along the way. I will never forget those heroes and their actions as the bright side of 2020. Secondly, I am grateful for the expansion of work-from-home opportunities. 2020 taught us that technology really can re-shape the modern lifestyle. This has many benefits and opens up many opportunities for people, including those of us with health conditions that limit our ability to lead “normal” everyday lives. For me, working from home gave me the opportunity to heal from my chronic illness. It took six months for the new medicine to take full effect and put my arthritis into remission, but I finally feel back to normal. Working from home took away the stress of hiding my symptoms from people and of managing through the pain when some days I didn’t want to get out of bed. Now, I have energy again for the first time in two years. I am finally able to work out and that’s helping me regain not only the physical strength, but also the emotional strength that I had lost.
I’m grateful to be able to jump out of bed in the morning and once again be the annoyingly cheery person with my high energy and enthusiasm for life. In this way, 2020 truly saved my life. Lastly, I am thankful for the progress that humanity has made in 2020. Although there’s still a long way to go, there were positive milestones throughout this year that showed me how much people care about each other and how much people can positively effect change. I became immensely grateful for so many people — for the frontline workers, the researchers and vaccine developers, the mask wearers and human rights activists, the political campaign supporters and poll workers, as well as the social media entertainers (because laughter is truly the best medicine). 2020 showed me the compassion and determination of humanity to come together and support each other through some of the biggest challenges we’ve ever faced. So while many people may look back and remember 2020 as a terrible year, I’ll look back and think of it as the year people overcame tremendous obstacles. And I want to thank each and every one of them for doing so. The most important lesson of 2020: We can’t choose what life will throw at us, but we can choose to be grateful for whatever life we have. 2. Connection Photo by Ben White on Unsplash Being stuck in isolation makes you appreciate new ways of connecting. I’m not talking about how we all learned how to have virtual hangouts and happy hours as a way to connect during the two-week lockdown. Because even once the world opened back up, we still had to live under permanent COVID conditions. The new normal became checking to make sure you had a mask with you at all times, limiting hangouts with friends, and avoiding long, socially-distant lines everywhere you went. With all of the typical social venues closed down and limits on the number of people allowed to gather, trying to truly connect with people started to feel like a draining chore. When I spent my whole day on the computer in virtual meetings, the last thing I wanted to do was see my family or friends through a screen (Zoom fatigue is real). There was one place, however, that was still safe to travel, always open for visitors, and provided relief from the stress and eyestrain of virtual interactions. 2020 was the year that I got back to nature and found real connection by distancing myself from the virtual world COVID imposed on me. I spent time camping for the first time ever. I went on countless walks and even scheduled them as a midday break from work. On the weekends, I hiked and hammocked, sunbathed and read outdoors. And more than ever before, I just sat outdoors with family and friends. I had always been a fan of nature, sure, but 2020 showed me the important role that nature plays in how humans function. There’s actually scientific evidence to prove that nature not only relieves stress, but literally resets our brains from the constant barrage of modern-day distractions and the noise that internet and cell service have created. While these are beneficial tools, I think it’s safe to say that most of us overuse these technologies as a way to feel “connected,” but this kind of connection only leaves us feeling dissatisfied and alone. When we spend more time staring at screens than enjoying our surroundings, it prevents us from enjoying the present moment. Suddenly, we fill all of our time with these fake distractions and lose our sense of connection, whether to each other or to nature or both. 
Spending time outdoors in 2020 has allowed me to unplug from social media, slow down, forget about my to-do list and email, and re-connect with the present moment and the people I’m enjoying it with. This year reminded me that while technology and human innovation are there to provide convenience and comfort, that’s not how humans were made to live. We should understand this, because there is clear evidence of how our modern lifestyles are actually destroying our natural environment through pollution and climate change. 2020 brought this human impact to light when we saw that by shutting down and staying home there were positive side effects on the environment. This fact alone should give us all pause as we consider how we’re treating the very home we need to survive and yet tend to avoid in favor of our modern conveniences. 2020 taught me this second important lesson: nature isn’t just a fun getaway spot; it’s a necessary part of everyday life that we must not only respect, but cherish with undivided attention. 3. Growth Photo by Tonik on Unsplash When you have to completely re-think your daily routine and adjust to new ways of living, it forces you to step out of your comfort zone. We all initially reacted to the pandemic by panicking and resisting the changes (some still resist wearing a mask, in fact). But the biggest lesson for me came in realizing that no amount of complaining was going to change reality, so I might as well figure out how to adapt. Once I started to embrace the reality of COVID, I learned so much about myself and unlocked incredible new possibilities. I learned, for example, how lucky I am for the job that allowed me to work from home and for the treatment I got to help heal my arthritis. I gained a grateful outlook and vowed never to take for granted the extra time I now have back in my schedule because of the convenience the pandemic gifted me. And then, virtual meetings taught me just how much time I spend in front of screens all day. When my mental health started to suffer, I learned to cut off my screen time and make my days more meaningful by spending them outdoors with people I love and not on my phone. I started using my extra time to reach out to others who didn’t have the luxury of staying home or who felt isolated and alone during quarantine. I learned that this should have always been a part of my life, whether or not there was a global pandemic happening, because helping others truly does bring joy and meaning to one’s existence. Don’t get me wrong, I also made plenty of mistakes in 2020 and felt frustrated and defeated many times. I kept wanting life to be back to “normal” and to go out and let loose in crowded social events. There were many days I didn’t want to talk to people, because I hated connecting with them virtually when I really wanted to see them in person. But all of this taught me about self-care and emotional expression. I was never one to talk about how I feel or open up to others, but this became impossible in 2020 as we all felt the weight of uncertainty, anxiety, and fear of COVID every single day. When will this end? Who will get sick next? When will the vaccine be ready? How will life look even if we get the vaccine? Would things ever go back to the way they were before the pandemic? It took a lot of reflecting, reading, spending time outdoors, and writing to understand my own feelings and how to express them. 
What I realized in 2020 is that I hadn’t overcome my two-year journey of dealing with my arthritis diagnosis and switching from a career I loved so much into one that I had to start all over again. In general, I realized that I am the type of person who is either stuck worrying about the future or lamenting the past, but never fully enjoying the present moment. 2020 helped me learn how to identify and overcome these tendencies. It pushed me to recognize my behavior, shift my mindset, and find joy in the here-and-now. The old me would’ve ended this by saying something like “let’s see what 2021 will bring”, but it honestly doesn’t matter. What matters is being grateful for each and every day, feeling connected in the present moment, and using all of life’s lessons as an opportunity to grow. While 2020 may have shut down the world and changed life forever, it opened my eyes to new ideas and new possibilities for the future. And that gave me my life back.
https://thestevenpost.medium.com/how-2020-was-the-year-i-got-my-life-back-8af61a4e9440
['Steven Hopper']
2020-12-28 19:19:47.699000+00:00
['Inspiration', 'Motivation', 'Life Lessons', 'Life', 'Growth']
What does big data look like?
“Big data is not easy.” We’ve all heard this time and time again, yet organizations are racing for data-driven solutions to help with cyber security, recommendation engines, customer 360, genome sequencing, IoT, and many other big data use cases. Yet when you engage with your IT support or data analyst team, you keep hearing how under-resourced they are to support your projects. What if there are pre-existing solutions that might work today? What if you are able to analyze data with existing software products that integrate with Cloudera Enterprise? What if you could set up visualizations to an existing Apache Impala (incubating) data store in under 10 minutes? Over the past quarter, we have posted a number of partner demos on the Cloudera YouTube channel that illustrate different ways to analyze, ingest, and process different forms of data. On top of the video demos themselves, some even have links to hands-on demo environments that can provide you with a test drive of their software without having to purchase or download anything. Here is a summary of these demos, so you can find a solution that fits your big data needs. Zoomdata Demo — Data Sharpening with Apache Impala (incubating) This demo shows how to do data sharpening by using Zoomdata with Apache Impala (incubating). Learn how to sort, pivot, and get complete visualizations of data captured in Impala. Check out the hands-on test drive here. Syncsort Demo — Mainframe Data Access with Hadoop This demo shows you how to migrate data from mainframes into Cloudera Enterprise. Syncsort DMX can be particularly useful for end users who aren’t familiar with mainframe systems but need to get valuable data into HDFS to run analytics. Trifacta Demo — Customer Behavior Use Case In this demo, you will learn how to use Trifacta to wrangle data from different data sources and create automated pipelines for future data preparation. You will be presented with a use case where retailers are trying to understand how the weather affects retail sales. Check out the hands-on test drive here. Paxata Demo — Retail Solution Use Case This demo highlights how Paxata is able to help users quickly gain faster insights from their data in Apache Hadoop. It compares customer data from purchases, loyal customer demographics, and external survey data so marketing can pick the right social media channel to reach interested customers. Check out the hands-on test drive here. Cask Demo — Using CDAP with Cloudera for Data Flows, Ingestion, and Governance This demo shows how to manage different data pipelines using Cask CDAP and Cloudera Manager. This is a great solution for organizations looking to have greater control of their data ingestion process. Check out the hands-on test drive here. Informatica Demo — Create Better Upsell and Cross-sell Initiatives with Customer Prospects This demo shows how you can use Informatica Big Data Manager, using Cloudera Navigator, to get valuable customer insights that allow marketing organizations to improve up-sell and cross-sell opportunities. Qlik Demo — Using Big Data Analytics to Monitor Zika This demo, developed by Bardess Group, shows how people can use big data analytics to monitor the spread of the Zika virus with Qlik and Cloudera. The use case is a combination of near real-time analytics with visualizations that can allow healthcare, travel, and government organizations to make critical, life-saving decisions.
StreamSets Demo — Connected Car with StreamSets Data Collector This demo highlights how you can use the StreamSets Data Collector for building big data ingest pipelines with Apache Hadoop. The use case presents a connected car simulation that shows, in real time, traffic issues that could be avoided by drivers. Check out the hands-on test drive here. Paxata Demo — Preparing Data for Health Care Cost Analytics This demo shows how you can use Cloudera and Paxata to integrate and cleanse disparate insurance claim data to uncover potential savings for payers and patients. Pentaho Demo — Transactional Fraud Detection In this demo, you will learn how to use Pentaho Data Integration with Cloudera Enterprise to address transaction fraud at a financial services organization. Arcadia Data Demo — Security with Role-Based Access Control Learn how Arcadia Data’s role-based access control provides security to Apache Hadoop by using Sentry with Cloudera Manager. This example can help BI systems avoid fragmented role definitions and policies. For more information about Cloudera software partners, check out the partner solution page.
https://medium.com/cloudera-inc/what-does-big-data-look-like-455671221132
['Michael Moreno']
2016-11-09 18:21:23.434000+00:00
['Cloudera', 'Data Science', 'Big Data', 'Demos', 'Bi Analytics']
How No Man’s Sky Helped Me Deal With Depression
How No Man’s Sky Helped Me Deal With Depression Finding Peace in the Void. One of the many beautiful vistas from my time playing No Man’s Sky I was diagnosed with depression in my early 20s, but I’m pretty certain I had it long before that, at least since my early teens. Now 35, I’ve been knowingly living with it for 15 years, and in that time I’ve tried many different methods of managing it. Talking to professionals, going on medication, attending groups, journaling, CBT-ing, even meditating. I’ve tried a lot of different things, and while some have worked better than others, for me the best way to manage my depression is to lose myself. The phrase “to lose oneself” can conjure up a myriad of meanings, not all of them good. However, what I mean here is to lose yourself completely in a fictional world; to forget the reality around you and completely surrender yourself to the immersion of this new reality. Of course this can happen with any kind of medium: a novel, a comic, a film or TV series, a podcast or audiobook. For me though, the medium in which it is easiest to lose myself is video games, in particular those that offer you a grand, rich universe to explore. Such as No Man’s Sky. If you’ve not heard of No Man’s Sky, in essence it is a game which presents players with an open world universe of 18 quintillion planets, all of which are procedurally generated. You literally have an entire universe to explore, no hyperbole needed. While the game does have a story with missions and goals like traditional games, the true brilliance of No Man’s Sky, and I would argue the true joy of it, is when you put all those to the side and simply explore. Every time you visit a new planet you are never sure what you are going to find. Perhaps it will be a huge seed pod large enough to build a little rest stop in, or a gigantic canyon through which you can fly your ship. Maybe you’ll come across the ruins of a previous civilisation and uncover secrets of the past, or find a hidden nest of creatures long thought to be extinct. Sometimes you’ll happen across an abandoned communications station to find a distress call that has gone unanswered for decades. Investigating the source of the call will reveal a long-ago crashed starship, the final recordings of its pilot hinting towards greater mysteries. If you explore the deepest depths of the universe, you may even discover the nature of reality. One of the many strange and wonderful creatures you can discover. The planets of No Man’s Sky range from peaceful green paradises to volcano-filled hell holes where the ground literally catches on fire. All of them are a joy to explore. There is something very satisfying about landing on a planet, jumping out of your ship, and beginning your trek across the landscape, not knowing what it is you’ll come across. If I’ve had a bad day mentally, few things help me relax and get back into a positive mind space as well as exploring the universe of No Man’s Sky. I find it very relaxing to walk around the various planets, scanning all the flora, fauna and rock formations into my planetary discoveries database. Coming across a settlement or building is always exciting as they often hold something of interest, even if it’s just a store from which to buy the various resources to help on my exploration. I recall one day that had been particularly bad. I was feeling extremely low and wanted nothing to do with the world. I sat down and began playing No Man’s Sky.
I arrived on a new planet and after a half hour or so of exploring, I began to see something I’d not seen before off in the distance. It looked like trails of dust being kicked up from the ground. As I drew nearer I saw that about half a dozen such trails were criss-crossing over themselves. Puzzled, I continued up the small hill on which I saw them and when I crested the top I was met with an amazing sight. The cause of the dust trails was a new type of fauna I had never encountered, despite my many hours of playing. They were creatures which I can only describe as organic corkscrews, playfully burrowing into the earth, travelling underground for a few seconds to then burst out again in curving arcs through the sky. Seeing something for the first time after playing the game for longer than most games take to complete was incredibly exciting for me, and spurred my desire to explore and delve into the universe further, leaving behind the troubles of my more mundane existence. The planetary discoveries database record of the “corkscrew” creature. I think this joy of discovery and exploration speaks to something deep within humans. From our earliest histories we’ve gone exploring, wanting to find what is over the horizon, or just beyond the next mountain range. While this has not always ended well for all the peoples of the world, humans as a species would never have grown as much as we have without the desire to discover. Now, with much of the world known, video games give us the opportunity to explore and discover once more. Escapism has long been the balm of the troubled mind, and it can be argued that video games are the ultimate form of escapism. The brilliance of No Man’s Sky, however, is that the escapism it provides can be as serene as one wishes. While there is combat in the game, as well as other obstacles the player must overcome, it is all optional. The game provides you with the information on how likely a battle is to break out, or if you’ll have to deal with a planet’s toxic environment. Thus, if you wish to simply relax and explore you can, and you can play the game for hours without having to worry about any of those aforementioned troubles. My favourite screenshot I’ve taken throughout all my time playing No Man’s Sky. No Man’s Sky is unlike any other game. In fact, the only other game I can really liken it to is the equally genre-defying Myst series. The game allows you to play at your own pace and discover its universe as you wish. With everything that’s going on in the world, to be able to simply explore and discover is an experience of pure joy. No Man’s Sky has saved me mentally many times, and I’m sure it’ll save me many more. It’s pure escapism and it’s incredibly good for the mind. If you need me, you’ll find me among the stars.
https://joemdouglas.medium.com/how-no-mans-sky-helped-me-deal-with-depression-de7642f874cb
['Joe Douglas']
2020-11-11 09:32:43.184000+00:00
['Videogames', 'No Mans Sky', 'Mindfulness', 'Mental Health', 'Depression']
Structuring ML Pipeline Projects
Directory Structure and Intuition Behind It
$project-name is the root directory of your project.
$project-name/ml includes machine learning related stuff.
$project-name/ml/pipelines includes the actual ML pipeline code. Typically, you may find yourself with multiple ML pipelines to manage, such as $project-name/ml/pipelines/predict-sales and $project-name/ml/pipelines/classify-fraud or similar.
Here is a simple tree view:
$project-name/ml/pipelines/
├── data/
├── $pipeline-name/
│   ├── constants.py
│   ├── model.py
│   └── training.py
├── util/
├── cli.py
├── pipeline.py
├── local_beam_dag_runner.py
└── kfp_runner.py
$project-name/ml/pipelines includes the following:
data → a small amount of representative training data to run locally for testing and on CI. This holds if your system does not have a dedicated component to pull data from somewhere; if it does, make sure to include a sampling query with a small, limited number of items.
util → code that is reused and shared across $pipeline-name directories. It is not necessary to include input_fn_utils.py and model_utils.py; use whatever makes sense here. Here are some examples: in my own projects, it made sense to abstract some parts into the utility module, like building named input and output layers for the Keras models, building the serving signature metagraph using TensorFlow Transform output, preprocessing features into groups by using keys, and other common repetitive tasks, like building input pipelines with the TensorFlow Dataset API.
cli.py → entry point and command line interface for the pipelines. Here are some common things to consider when using TFX. By using abseil you can declare and access flags globally; each module defines the flags that are specific to it. Since this is a distributed system, common flags like --data_dir=..., --hparam_tuning, --pipeline_root, --ml_metadata_url, --use_cache, and --train_epochs are ones you can define in the actual cli.py file, while other, more specific ones for each pipeline can be defined in submodules. This file acts as the entry point for the system. It uses the contents of pipeline.py to set up the components of the pipeline, as well as to provide the user-supplied module files (in the tree example these are constants.py, model.py, training.py), based on some flag like --pipeline_name=$pipeline-name or some other configuration. Finally, with the assembled pipeline, it calls some _runner.py file, selected by a --runner= flag.
pipeline.py → parameterised pipeline component declaration and wiring. This is usually just a function that declares a bunch of TFX components and returns a tfx.orchestration.Pipeline object.
local_beam_dag_runner.py → configuration to run locally with the portable Beam runner. This can typically be almost configuration-free, just by using the BeamDagRunner.
kfp_runner.py → configuration to run on Kubeflow Pipelines. This typically includes different data path and pipeline output prefixes and auto-binds an ml-metadata instance.
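To make the wiring above concrete, here is a minimal sketch, assuming a hypothetical predict-sales pipeline, of how cli.py, pipeline.py, and local_beam_dag_runner.py could fit together in one file. The flag names, the two components, and the constructor arguments are illustrative assumptions (TFX component signatures vary between versions), not the exact code of any particular project.

from absl import app, flags
from tfx.components import CsvExampleGen, StatisticsGen
from tfx.orchestration import pipeline
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner

# cli.py: declare the common flags globally with abseil.
FLAGS = flags.FLAGS
flags.DEFINE_string("pipeline_name", "predict-sales", "Which pipeline to assemble.")
flags.DEFINE_string("pipeline_root", "/tmp/pipeline-root", "Where artifacts are written.")
flags.DEFINE_string("data_dir", "ml/pipelines/predict-sales/data", "Small representative dataset.")


# pipeline.py: parameterised component declaration and wiring.
def create_pipeline(name, root, data_dir):
    example_gen = CsvExampleGen(input_base=data_dir)  # ingest the sample data
    statistics_gen = StatisticsGen(examples=example_gen.outputs["examples"])
    return pipeline.Pipeline(
        pipeline_name=name,
        pipeline_root=root,
        components=[example_gen, statistics_gen],
    )


# local_beam_dag_runner.py: the portable Beam runner is nearly configuration-free.
def main(_):
    BeamDagRunner().run(
        create_pipeline(FLAGS.pipeline_name, FLAGS.pipeline_root, FLAGS.data_dir)
    )


if __name__ == "__main__":
    app.run(main)

A kfp_runner.py would keep create_pipeline unchanged and swap BeamDagRunner for the Kubeflow Pipelines runner, plus the different path prefixes and the ml-metadata binding mentioned above.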
https://towardsdatascience.com/structuring-ml-pipeline-projects-97c16348be4a
['Theodoros Ntakouris']
2020-09-19 16:58:32.983000+00:00
['Deep Learning', 'Machine Learning', 'Tensorflow Extended', 'Cloud Native', 'Software Engineering']
Creating a High-Fidelity User Interface & Interactive Prototype in an hour
Marvelapp has several advantages that we need in the process of creating user interfaces and interactive prototypes:
It can be used to quickly make high-fidelity user interfaces and interactive prototypes. It is web-based, so it can be used on platforms such as Windows and Mac. It provides applications for Android and iOS, so the design and prototyping process can be done on mobile. Those same Android and iOS applications also let us validate directly from the Marvelapp application on our Android or iOS device.
2. Adopt potential big ideas into a Flow
Starting to design applications without understanding the flow is a misstep, and we will waste time and energy. Understanding the flow first helps us focus on what information we want users to see and what actions users need to take, so we can convey it in an effective way. Big Idea from ideation process (Crazy 8s)
Do’s: Keep referring to the moment/focus of the big idea. Discuss and understand the flow quickly. Check and re-order if there is a flow that is still not sequential. Also, understand the information and actions that will be displayed on every page we want to make.
Don’ts: Start designing immediately without a clear flow, or with a random flow. Design without understanding what you want to make. Discuss for too long.
3. Visual Design Style
Avoid copying the aesthetics/design of a particular product or application without clear reasons. Start looking for design inspiration that fits the application categories, patterns, and elements that we will create. After that, quickly understand the identity of the product we are going to make, especially the colors that represent our product. Keep in mind, this is the initial iteration you designed for your product, so you will have time to iterate on the design after we validate. Use this website to look for design inspiration based on patterns and elements: http://mobbin.design
Do’s: Look for design inspiration and styles that match the applications we make, based on the categories, patterns, and elements we want to create. Example: “Health” category, “Charts” pattern, “Cards” element. Do a quick analysis of the identity of our brand product to determine the color and design style. (Example: health & fertility products use a green base and must look clean.)
Don’ts: Debate endlessly to determine the right color and visuals. Make several design options for each page. Change the design style during the design process.
4. Apply Design Theory & Basic Principles of Visual Design
In the Validation Stack, this stage is the lowest level before we validate the design at the next levels, such as User Research and User Evidence. This part is very important. We will use this systematic method to show that, at the very least, our prototype design is supported by theoretical principles and has a scientific foundation for good design. Here are some design theories and principles that can be used as references:
🔸 Visual Hierarchy
A good visual hierarchy helps us present information in an ideal way with a clear structure, so users can understand the order of information for each element. Hierarchy also helps our brains distinguish the most important sequences/levels of visual elements in a design and reduces cognitive load, allowing users to take the actions they need to take. We can create a visual hierarchy by varying size, color, contrast, and fonts, and by using other basic design principles.
Do’s: Combine this technique with the “proximity” design principle. Use font families that have a complete range of weights, such as Thin, Light, Regular, Medium, Semibold, Bold, Extrabold, and Black; this makes it easier to create visual and information hierarchies.
Don’ts: Distinguish each level of the visual hierarchy using many colors and sizes. Use more than two typefaces, or combine fonts in unnatural ways.
🔸 Content
A habit that must be abandoned when creating designs and prototypes is always filling in the content with “lorem ipsum”, the dummy text most used by designers. We will not be able to run valid usability testing with dummy text, because real users need real content to understand and absorb the information in the prototype. Use content that actually helps users understand the information we present. If we don’t have real content yet, we can replace it with similar content; don’t use “lorem ipsum” or placeholders instead of content.
Do’s: Use real content or something similar. Fill with relevant content.
Don’ts: Use dummy text. Use language and terms that are difficult for users to understand. Make users get lost and make wrong decisions.
🔸 Proximity
In design, related elements must be brought together so they are seen as a unit/group; this reduces visual distraction and makes the design easier for users to understand. Unrelated elements must be separated from each other, because elements placed apart are read as unrelated rather than as a single entity.
Do’s: Place related elements close together in a composition to form relevance, hierarchy, organization, and structure.
Don’ts: Separate related elements or bring together unrelated elements.
🔸 Similarity
Similarity is usually combined with proximity to organize a collection of elements or components by showing which ones belong to the same group.
Do’s: Arrange the elements or components that share the same attribute/function in one group.
Don’ts: Group elements/components that do not share the same attribute/function.
🔸 Continuity
This technique is used to signal to the user that there are elements/components to be seen next or afterwards.
Do’s: Arrange elements/components to form a continuous pattern that draws the user’s eyes to follow them from one to the next.
Don’ts: Hide elements/components that should have continuity.
🔸 Focal Point
With this technique, we can clarify and draw attention to what we want users to see and what we want them to do.
Do’s: Make the visual elements we want to highlight stand out, so they attract and hold the user’s attention.
Don’ts: Put in too many focal points; they cause visual distraction and confuse users.
🔸 Common Region
This principle is closely related to proximity. When objects are in the same enclosed area (group), we see them as one unit.
Do’s: Add barriers (negative space) or separators between groups.
Don’ts: Add barriers (negative space) or separators that sit too far between groups, or use unnatural barriers (negative space) between groups.
🔸 Simplicity
Simplicity is a technique that applies and emphasizes simplicity in design and tends not to rely on many visual elements. The challenge is that, with a simple design, we must still convey strong and very clear information that users can digest quickly. A simple design is easier to understand and easier to remember.
The simpler your design, the more striking it is and the easier it is to understand. Do: Present information that is strong and clear, simply. Don’t: Pile on visual elements that crowd out the message.
https://uxplanet.org/creating-a-high-fidelity-user-interface-interactive-prototype-in-1-hour-f0550dfc966a
['Rizki Mardita']
2019-06-18 10:18:22.941000+00:00
['Marvel App', 'Design', 'Prototype', 'UI', 'Hi Fidelity']
Why Dating Apps Rarely Work. Or at least a useless distraction
I remember back in high school and college I would make friends with a girl, get to know her, then a small percentage might end up becoming a “girlfriend”. We might do something simple like go to a movie, have a picnic, make out in the back of a car. You know, pretty vanilla stuff. And damn was it easy, because we had already spent a couple of months getting to know each other, to see if we even got along. Yes, that’s right: several months and no sex. I might as well be a catholic school girl… well, I was an acolyte in church. You might wonder why this forty-some-year-old is talking about high school and college? That’s because until 3 years ago I was married to my college sweetheart. So, needless to say, I dove into dating apps, a mix between a deer in the headlights and a kid in a candy store. Now, I’ve tried nearly all of the apps — the bee, the flame, the okay dokey, the fish, the joint that opens a door, the curvy singles, and other lower-quality apps. (Side note: if you figured out what apps I am actually talking about, you’re probably as addicted as I was.) I’ve tried them all because I don’t actually have a type, much to the disappointment of the person I am dating. By the way, I’m fairly certain I am supposed to say my type is “you” to the person asking the question. I guess brutal honesty is not a good trait for dating. We all know swiping is mindless. How many of us do it while drunk, let our teenage kid or friend do it, or just swipe mindlessly while binge-watching Shameless? I’ll be honest: at the end of this maddening game of whack-a-mole sent to us from the tech devils, I just started swiping everyone to see who was swiping me. No surprise: mostly outdoorsy, down-to-earth women. A perfect fit had we met another way, but the app approach leaves much to be desired. I feel a bit bad about the large queue of people I have not responded to, until I talk to my female friends who have what sounds like thousands of suitors in waiting. Hopefully, your knight in shining armor isn’t number 1,000. John William Waterhouse Do Dating Apps Work? People who have used a dating site or app say the experience left them feeling more frustrated (45%) than hopeful (28%). The Pew Research Center found that “particularly younger women — report being harassed or sent explicit messages on these platforms.” Early on, I remember being the only person a woman would meet, based solely on the fact that I didn’t ask her to have sex with me or send an explicit photo. I mean really, if it doesn’t work in a bar or at the gym, why would it work here? It’s called sexual harassment. But are people actually forming real relationships? The same study says that 12% of people using online dating apps formed a meaningful relationship or got married. Okay, just to be clear: for every 10 dates 1 might work out, or more realistically, every 100 dates 10 might work out. This is better than the 3% reported in 2013. What about the other 9 in 10 people? Well, apparently, more than half find their soul mate through a friend. Are People Being Authentic? According to Eharmony, 53% of people lie on their dating profiles about their age, height or weight, and job or income level. This seems about right. I’ve heard most of my dates comment about the guy who was shorter, older, or balder than his profile. Of course, there are the military scams where someone desperately needs to get back to the U.S., and if only you would send money his way you would fall deeply in love. Then there are the escorts. Yes, there are escorts on these apps.
Even if you get past the physical deceptions, there’s the question of what else they are lying about. A 2019 study of 1,700 Tinder users found that 22% of users were married and 44% were involved in a relationship. Now, it did not distinguish the all-too-common ethical non-monogamy, but still, that number is pretty astonishing. Lack of authenticity isn’t just a problem early on but goes all the way to the end. And the worst part of it all: almost half of these online relationships end by email. That’s actually a bit surprising to me. I thought it was by text or, even worse, just ghosting the person. Addiction and Compulsive Behavior Even worse, these apps are a big distraction and, I would argue, in some ways predatory. I was the poster child of using dating apps as a distraction. Have a deadline? Swipe right. Hate my commute? Swipe right. Watching a boring show? Swipe right. I would commit to leaving the apps, uninstall every one of them, only to slowly add them back again. Apparently, I am not alone. Millennials spend up to 20 hours a week swiping and trying to find their perfect mate. As a parent with a full-time job and side work, I can’t imagine what I would do with 20 extra hours. “I am unable to reduce the amount of time I spend on dating apps.” — Ohio State University study This is no mistake. App creators know that gamifying dating can send a shot of feel-good neurochemicals like serotonin and dopamine into our system, reducing anxiety and making us feel better — giving us that feeling of being high. In fact, a study out of Australia in March 2020 found that people who used swipe-based dating apps were more likely to feel depressed, anxious, and distressed than those who did not. Maybe this is why, as I have recovered from my divorce, I have been less and less drawn into app-based dating.
https://medium.com/be-unique/dating-apps-are-the-devil-21602fef2724
['Marcus Griswold']
2020-08-26 03:59:29.655000+00:00
['Dating App', 'Technology', 'Love', 'Psychology', 'Dating']
Generative Adversarial Networks: Which Neural Network Comes On Top?
In a world filled with technology and artificial intelligence, it is becoming increasingly hard to distinguish between what is real and what is fake. Look at these two pictures below. Can you tell which one is a real-life photograph and which one is created by artificial intelligence? The crazy thing is that both of these images are actually fake, created by NVIDIA’s new hyperrealistic face generator, which uses an algorithmic architecture called a generative adversarial network (GAN). Researching GANs and their applications in today’s society, I found that they can be used everywhere, from text-to-image generation to even predicting the next frame in a video! This article provides a brief overview of the inner workings of GANs and how they are currently being used in the workspace. I know you are probably as excited as I am to learn more about this, so let’s get started! The Basic Architecture of GANs List of the different generative models. Credit: subscription.packtpub.com GANs fall under the broader category of generative models, using an unsupervised learning approach to recognize new patterns from the training data. These models have input variables but lack the labeled output variables that classical prediction models rely on. Some examples of these generative models include the Naive Bayes network, the Latent Dirichlet Allocation algorithm, and the aforementioned GANs. Let me know if you want an article on these algorithms (spoiler alert: they’re just as cool as GANs!). GANs, like other generative models, are able to generate new examples that are similar to, and in some cases indistinguishable from, the training set that was provided. So how do these GANs actually form these new examples out of thin air? As you may have guessed from the title, GANs use two neural networks that compete with one another to create or generate variations in the data. Other machine learning models can be implemented, but often, neural networks are the optimal solution. These two sub-models are often called the generator model and the discriminator model. The Generator and Discriminator Model An analogy to how GANs work! Credit: DZone The generator and discriminator models have opposing goals, basing their performance on different sets of metrics. The generator model has one sole purpose: to produce fake data from a set of random inputs. The generator model aims to produce the most realistic fake images from the random noise that is given to it. Its main purpose is to maximize the discriminator model’s loss, or the classification error. The discriminator model, on the other hand, decides through binary classification whether the data given to it comes from the real sample or the fake sample produced by the generator network. The discriminative network is trained to take the true data and generated data and classify them accordingly. Therefore, in a way, you can conclude that the discriminator’s main purpose is to (you guessed it!) decrease the classification error. Another analogy to understand GANs! Credit: Datacamp To recap, the generator model aims to maximize the classification error by producing extremely realistic images, so that the discriminator is unable to tell which image is fake and which image is real. On the other hand, the discriminator model aims to correctly classify whether an image is fake or not, trying to minimize the classification error produced by the model. Now, you can see how the generator and discriminator are sort of fighting against one another.
Both are seeking to best the other: the generator in producing fake images so convincing that the discriminator cannot pick them out, and the discriminator in telling the generator’s images apart from the real ones. One tries to minimize the classification error while the other seeks to maximize it. The Process The basic architecture of GANs. Credit: Medium Now that you have gotten the basics down, let’s go through the workflow that a GAN takes in order to produce those hyperrealistic images. First, random noise is generated; these random variables are the inputs from which the fake images will be made. There are two main training phases that go into one iteration of a GAN workflow. First, the discriminator is trained with the generator model frozen. In this phase, the discriminator is trained with real data, inputted by the user, and fake data, produced by the generator model from the random noise mentioned above. The discriminator’s main goal is to be able to distinguish whether the pictures given to it are real or fake. Next, the generator is trained with the discriminator frozen. The generator receives the results from the discriminator model and uses them to make more realistic images, to try to fool the discriminator better. The generator model aims to sample new data from the generated random noise to make a realistic image in the end. A key thing to know is that images are simply probability distributions over an N-dimensional vector space; to say that something looks like an image is actually just conveying that it has a very specific probability distribution. The generative network in a GAN takes points from the random distribution as input and transforms them into points that follow the target distribution, so that the results can fool the discriminator model. The cycle continues as the discriminator model uses the generator’s new, updated results to make better classifications. This back-and-forth process eventually leads to images from the generator that the discriminator cannot tell apart from real ones. A rudimentary diagram explaining the process. Credit: Geeks for Geeks In the end, remember that the discriminator model’s accuracy is not the one that matters. Our main goal is to maximize the generator model’s effectiveness, since we want to produce images that are indistinguishable from real-life ones. The Challenges of GANs GANs face multiple problems with their current implementation; however, these are quickly being solved by new and more advanced GANs from large tech companies like NVIDIA. These challenges often lead practitioners to use other generative models instead of GANs. Here are the two main challenges regarding GANs that companies are currently facing. A central problem is the stability between the generator and the discriminator; this is important because instability can ruin the GAN model’s sole purpose of creating new images. If the discriminator model is too powerful, then it will simply classify all images as fake; however, if it is too lenient, the GAN will never improve, leading to a useless network. Often, finding this level of stability is difficult, as it’s hard to predict what the generator and discriminator will do when training and updating their gradients, since there is no human intervention. Another problem with GANs is that they are unable to determine the positioning of certain objects and understand the perspective of the images.
For example, a GAN will generate one dog with six eyes rather than two, because it doesn’t understand how many times an object (like an eye) needs to occur at a particular location. GANs are unable to develop a holistic or global perspective, which is why other generative models are often more commonly used. The Applications Of GANs Example of GANs being used for super resolution of images! Credit: Medium Although GANs face a wide number of challenges, they have a vast array of applications and future possibilities in the technology field. GANs can do anything regarding image generation, from producing images from natural language descriptions to supporting surveillance and security with footage that may get distorted in the rain. GANs also fill a variety of niche use cases, like morphing audio from one speaker to another, enhancing the resolution of an image, and doing image-to-image translation. In my opinion, the most important application of GANs lies in their ability to create data for training classifiers when only limited amounts of data are available. Data generation is often one of the most difficult components of training any type of machine learning model, and GANs can ease that problem by creating images from thin air that relate closely to the training data. GANs will be instrumental in improving classifier accuracy and feeding data to large models. Comment below with what you think the coolest application of GANs is! Since you now know the main architecture behind GANs, I suggest that you try coding one up! A great place to start is by generating images using the MNIST dataset through GANs. You can find the steps to complete it here or check out my Github repository containing the full detailed implementation below. TL;DR GANs take an unsupervised learning approach by placing two neural networks against each other with opposing purposes (called the generator and discriminator models). The generator model’s main purpose is to maximize the classification error produced by the discriminator model by making extremely realistic “fake” images to present to the discriminator. The discriminator model’s main purpose is to minimize the classification error between which images are real and which images are fake. The gradients of the generator and the discriminator model are constantly updated based on the performance of the opposing model. GANs often face two challenges: balancing the stability between the generator and discriminator models, and determining the position of certain objects to develop a holistic perspective of the image. GANs have a vast array of applications, from creating images to train image classifiers with limited amounts of data to text-to-image generation. Additional Resources
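To make the two training phases concrete, here is a minimal sketch of the alternating loop described above, written with tf.keras on MNIST. The layer sizes, optimizer settings, batch size, and step count are illustrative assumptions rather than a tuned implementation, and the networks are deliberately tiny so the structure stays visible.

import numpy as np
import tensorflow as tf

LATENT_DIM = 100
BATCH = 64

# Generator: random noise in, 28x28 "fake" image out (tanh matches [-1, 1] pixels).
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(28 * 28, activation="tanh"),
    tf.keras.layers.Reshape((28, 28)),
])

# Discriminator: image in, probability that the image is real out.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model for phase two: freeze the discriminator so only the
# generator's weights move when we train the stack to "fool" it.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 127.5 - 1.0  # scale pixels to [-1, 1]

for step in range(1000):
    # Phase one: train the discriminator on a real batch (label 1)
    # and a fake batch from the generator (label 0).
    real = x_train[np.random.randint(0, len(x_train), BATCH)]
    noise = np.random.normal(size=(BATCH, LATENT_DIM))
    fake = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real, np.ones((BATCH, 1)))
    discriminator.train_on_batch(fake, np.zeros((BATCH, 1)))

    # Phase two: train the generator through the frozen discriminator,
    # asking it to make the discriminator label fresh fakes as real.
    noise = np.random.normal(size=(BATCH, LATENT_DIM))
    gan.train_on_batch(noise, np.ones((BATCH, 1)))

The design choice mirrors the article’s two phases: the standalone discriminator was compiled while still trainable, so phase one updates it, while the combined model was compiled after freezing it, so phase two only moves the generator.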
https://medium.com/analytics-vidhya/generative-adversarial-networks-the-fight-to-see-which-neural-network-comes-out-at-top-c771757f4a49
['Karthik Mittal']
2020-12-22 17:28:44.975000+00:00
['Machine Learning', 'Image Classification', 'Generative Adversarial', 'Artificial Intelligence']
Easy Ways To Tell You’re Not Spending Enough Time On “It” To Build Something and Achieve Success.
I want to X. I don’t care what it is. I want to lose weight, get healthier, get fit, move, change jobs, become a craftsman, learn something new, start a new hobby, create a side business, become a whatever, read more, rest more, go more, do more, be more. Fill in the box. It doesn’t matter what it is. Here are ways you can tell you’re not spending enough time on it to build something and achieve success. I can see multiple posts of yours on my Facebook timeline. Which means you post all day or night. I pull up my feed and all I see is your posts. Some of those posts are about how you figured people out, how you are strong because you got your shit together and we don’t, and basically anything that makes us feel like you are accomplishing all this stuff when in fact you are just sitting at home doing nothing but posting memes on Facebook. Sorry, I might be talking about my sister. You keep talking about starting. You never talk about finishing. All we get is “one day I will” and “when I get to that point I will.” You aren’t going to do it. It’s okay. Just admit it and do what it is you want to do, or don’t do anything. It’s in your head. You talk about how you are a writer, painter, reader, doer, be-er, but you really aren’t. It is all in your head. You have the stuff to do it. You have the desire or thought to be that, but you don’t actually make consistent effort towards being what you say you want to be. Or doing what you say you want to do. I can’t succeed because I don’t have time. No, you don’t succeed because you don’t put in consistent effort. You aren’t actually doing anything. You can’t talk about your failures. My dad was an entrepreneur. Not because he talked about it, posted about it, or thought about it. Because he went out and got office space, put some furniture in it, and opened for business. He had many failures, which was a good sign he was trying. It wasn’t just in his head. It was real. Some people think they are entrepreneurs, but they don’t have one try-and-fail story. They just have a lot of ideas. If you haven’t actually tried, you won’t have any failures. If you haven’t taken any serious action, you can’t succeed. Success requires actual work. It’s Not a Part of Your Daily Life. Everyone called my dad a dreamer. He was, but he was also a doer. Now, along the way he should have thought about keeping a mundane full-time job to help pay the bills, but he didn’t. Instead, he would focus on doing his thing 40 hours a week. He decided golf was his thing. He lived and breathed golf, becoming a golf pro at a local country club. We knew he loved golf and was changing his life to incorporate it because he read about it, talked about it, went to classes on it, got a job in it, and designed and built a course. It was a part of who he was. We all knew. You might not be spending enough time on your thing if we can’t see it in your daily life. People who really know us are pretty good at identifying what we are about. If they aren’t saying you are a writer, it’s because you aren’t writing enough. If they can’t say you are a painter, it’s because you aren’t painting enough. When you do it enough, people will identify that and, regardless of your success with it, will announce to the world you are “that” person, doing “that” thing. If they aren’t saying it, that should tell you something. My dad was a golf pro. Everyone knew it without knowing it was his job. It didn’t matter how good he was at it or how much success he attained doing it.
It mattered that he spent the majority of his time talking about and practicing golf. You couldn’t talk to my dad without knowing what he loved. He loved golf. Actually talked about it more than he talked about me. I know all this because I am guilty of all this. In 2015 I started a career transition and a nice life crisis. My husband encouraged me to start writing. He thought I could make a go of it. I did start. I do write. I have started and stopped several blogs. I have written on a semi-consistent basis, but the majority of my work has been done in my head. Which means I think more about being a writer than actually writing. I know because I keep track of what I write on some home grown pieces of note paper. I entered all that into an Excel spreadsheet and quickly realized I am not putting the time into writing that I think I am. I talk more about it than I do it. If I really want to write then I need to write. I need to write enough that the people that know me say I am a writer. My husband hasn’t said that yet because I am not spending enough time doing it. You know if you really don’t want to be something don’t be it. Just do what you do and then do nothing. It’s okay not to have huge goals of changing your life and becoming something else. Maybe you just want to get a job, work it, and then go home and hang out with your cat. Great. Just do that. There is no requirement to become something other than you are today, but if you do have some goals, don’t chalk up your lack of success to anything until you take a good hard look at how much time you are spending on them. I can’t make my Instagram go. How much are you really posting? Have you measured it? Let’s be sure we aren’t just thinking about stuff. Let’s be sure we are doing it. Marcy Pedersen
https://marcypedersen.medium.com/easy-ways-to-tell-youre-not-spending-enough-time-on-it-to-build-something-and-achieve-success-486c9dbb2da7
['Marcy Pedersen']
2020-01-23 01:23:01.337000+00:00
['Productivity', 'Work', 'Achievement', 'Goals', 'Time Management']
What Copywriters Can Learn From a Good Old-Fashioned Ghost Story
What Copywriters Can Learn From a Good Old-Fashioned Ghost Story Basic human psychology is the foundation of every good advertisement and successful scary story When Edgar Allan Poe was a child, his family once drove past a cabin with graves in the yard. Seeing the gravestones, young Poe called out, “They will run after us and drag me down!” Funny, I wonder how many people fear something similar when they see a group of marketers standing by. As the saying goes, marketers ruin everything. So today, let me ruin horror stories by comparing them to marketing copy. You’re welcome. In the spirit of Halloween, I purchased my first ghost story in years: The Haunting of Hill House by Shirley Jackson. I was hooked by the first paragraph and finished the book in just two sittings, less than 24 hours after picking it up from Barnes & Noble. It dawned on me that the best scary stories are like the best ads: you can’t look away. This comparison begged the questions: what techniques do horror authors use to seize their reader’s attention? Could copywriters borrow some of their writing methods to improve sales? I’m not suggesting we should scare readers into making purchases — only that we should consider borrowing some of the stylistic choices from horror to make our marketing copy more compelling. Here are three things copywriters should learn from the craft of ghost story writing.
https://medium.com/better-marketing/what-copywriters-can-learn-from-a-good-old-fashioned-ghost-story-135c79587c6a
['Alexander Lewis']
2020-10-09 16:57:39.485000+00:00
['Horror', 'Copywriter', 'Halloween', 'Marketing', 'Copywriting']
21+ Spring MVC + REST Interview Questions Answers for Beginners and Experienced Developers
21+ Spring MVC + REST Interview Questions Answers for Beginners and Experienced Developers
Hello guys! If you are preparing for Java and Spring interviews or the Spring certification and looking for some frequently asked Spring MVC and REST interview questions, then you have come to the right place. Earlier, I shared the best Spring MVC courses and books, and today I am going to share the top 22 Spring interview questions for Java developers applying for web developer roles. Since the Spring Framework is the most popular and standard framework for developing Java web applications and RESTful web services, a good knowledge of Spring core and Spring MVC is expected from any senior Java developer. But, if the job description mentions REST and web services, you also need to be aware of how to develop RESTful web services using the Spring Framework. From Spring 3.1, the framework has been enhanced a lot to support many features needed for a RESTful API. The HttpMessageConverter can convert your HTTP response to JSON or XML just by detecting a relevant library in the classpath, like Jackson or JAXB. Spring also provides customized annotations for RESTful web services, like @RestController, which can make your controller more REST-aware, so that you don't need to do the common stuff required by every single REST API, like converting the response to JSON. A deep knowledge of Spring Security is also mandatory for developing security for RESTful web services in the real world. Since you cannot build a non-trivial, real-life REST API without security, a good knowledge of security basics, HTTP basic authentication, digest authentication, OAuth, and JWT is very important. By the way, if you are new to Spring MVC and the Spring Framework in general, then I highly recommend you join a good, comprehensive Spring course like this Spring 5: Beginner to Guru resource to learn the basics first. This will help you to answer these questions better and also to do well on both Spring certification and interviews.
Top 22 Spring MVC + REST Web Service Interview Questions with Answers
Here are a couple of frequently asked questions about using REST web services in the Spring Framework.
1. When do you need the @ResponseStatus annotation in Spring MVC?
This is a good question for developers with 3 to 5 years of Spring experience. The @ResponseStatus annotation is required during error handling in Spring MVC and REST. Normally, when an error or exception is thrown on the server side, the web server returns a blanket HTTP status code 500, Internal Server Error. This may work for a human user but not for REST clients. You need to send them the proper status code, like 404, if the resource is not found. That's where you can use the @ResponseStatus annotation, which allows you to send custom HTTP status codes along with a proper error message in case of an exception. In order to use it, you can create custom exceptions and annotate them using the @ResponseStatus annotation with the proper HTTP status code and reason. When such exceptions are thrown from the controller's handler methods and not handled anywhere else, the appropriate HTTP response with the proper HTTP status code is sent to the client.
For example, if you are writing a RESTful web service for a library that provides book information, then you can use @ResponseStatus to create an exception that returns the HTTP response code 404 when a book is not found, instead of the Internal Server Error (500), as shown below:

@ResponseStatus(value = HttpStatus.NOT_FOUND, reason = "No such Book")  // 404
public class BookNotFoundException extends RuntimeException {
    // ...
}

If this exception is thrown from any handler method, then the HTTP error code 404 with the reason "No such Book" will be returned to the client. If you are not familiar with the basic concepts of Spring MVC, Security, and REST, I suggest you go through these REST with Spring and Learn Spring Security courses to gain some experience before your next job interview. These two courses are specially designed to provide you with some real-world experience to boost both your knowledge of and experience with Spring MVC, REST, and Spring Security.
2. What does the @RequestMapping annotation do? (answer)
The @RequestMapping annotation is used to map web requests to Spring controller methods. You can map a request based upon the HTTP method, e.g. GET or POST, and various other parameters. For example, if you are developing a RESTful web service using Spring, then you can use the produces and consumes properties along with media types to indicate that a method is only used to produce or consume JSON, as shown below:

@RequestMapping(method = RequestMethod.POST, consumes = "application/json")
public Book save(@RequestBody Book aBook) {
    return bookRepository.save(aBook);
}

Similarly, you can create other handler methods to produce JSON or XML. If you are not familiar with these annotations, then I suggest you join this Spring MVC For Beginners course on Udemy to learn the basics.
3. Is @Controller a stereotype? Is @RestController a stereotype? (answer)
Yes, both @Controller and @RestController are stereotypes. The @Controller is actually a specialization of Spring's @Component stereotype annotation. This means that a class annotated with @Controller will also be automatically detected by the Spring container, as part of the container's component-scanning process. And the @RestController is a specialization of @Controller for RESTful web services. It not only combines the @ResponseBody and @Controller annotations, but it also gives more meaning to your controller class, clearly indicating that it deals with RESTful requests. The Spring Framework may also use this annotation to provide some more useful features related to REST API development in the future.
4. When do you need the @ResponseBody annotation in Spring MVC? (answer)
The @ResponseBody annotation can be put on a method to indicate that the return type should be written directly to the HTTP response body (and not placed in a Model, or interpreted as a view name). For example:

@RequestMapping(path = "/hello", method = RequestMethod.PUT)
@ResponseBody
public String helloWorld() {
    return "Hello World";
}

Alternatively, you can also use the @RestController annotation instead of the @Controller annotation. This will remove the need for using @ResponseBody because, as discussed in the previous answer, it comes automatically with the @RestController annotation.
5. What does @PathVariable do in Spring MVC? Why is it useful in REST with Spring? (answer)
This is one of the useful annotations from Spring MVC that allows you to read values from the URI, like a query parameter.
It’s particularly useful when creating a RESTful web service using Spring because, in REST, resource identifiers are part of the URI. This question is normally asked of experienced Spring MVC developers with 4 to 6 years of experience. For example, if the identifier is part of the URL and you want to learn how to extract the id, then you can use the @PathVariable annotation of Spring MVC. If you are not familiar with Spring MVC annotations, then Spring MVC For Beginners: Build Java Web App in 25 Steps is a good place to start.
6. What is the difference between @Controller and @RestController in Spring MVC? (answer)
There are many differences between the @Controller and @RestController annotations, as discussed in my earlier article (see the answer for more!), but the most important one is that with @RestController you get the @ResponseBody annotation automatically, which means you don't need to separately annotate your handler methods with the @ResponseBody annotation. This makes the development of RESTful web services easier using Spring. You can see here to learn more about Spring Boot and how it can help you to create Spring MVC based web applications.
7. What are the advantages of the RestTemplate in Spring MVC? (answer)
The RestTemplate class is an implementation of the Template Method pattern in the Spring Framework. Similar to other popular template classes, like the JdbcTemplate or JmsTemplate, it also simplifies interaction with RESTful web services on the client side. You can use it to consume a RESTful web service very easily, as shown in this RestTemplate example.
8. Where do you need @EnableWebMvc? (answer)
The @EnableWebMvc annotation is required to enable Spring MVC when Java configuration is used to configure Spring MVC instead of XML. It is equivalent to <mvc:annotation-driven> in an XML configuration. It enables support for @Controller-annotated classes that use @RequestMapping to map incoming requests to handler methods. If you are not already familiar with Spring's support for Java configuration, the Spring Master Class on Udemy is a good place to start.
9. What is an HttpMessageConverter in Spring REST?
An HttpMessageConverter is a strategy interface that specifies a converter that can convert from and to HTTP requests and responses. Spring REST uses this interface to convert HTTP responses to various formats, for example, JSON or XML. Each HttpMessageConverter implementation has one or several MIME types associated with it. Spring uses the "Accept" header to determine the content type the client is expecting. It will then try to find a registered HttpMessageConverter that is capable of handling that specific content type and use it to convert the response into that format before sending it to the client. If you are new to Spring MVC, see this Spring 5: Beginner to Guru resource to learn the basics.
10. How do you create a custom implementation of the HttpMessageConverter to support a new type of request/response?
You just need to create an implementation of the AbstractHttpMessageConverter and register it using the WebMvcConfigurerAdapter#extendMessageConverters() method with the classes that generate the new type of request/response.
11. Do you need Spring MVC in your classpath for developing RESTful web services? (answer)
This question is often asked of Java programmers with 1 to 2 years of experience in Spring. The short answer is yes: you need Spring MVC in your Java application's classpath to develop RESTful web services using the Spring framework.
https://medium.com/javarevisited/21-spring-mvc-rest-interview-questions-answers-for-beginners-and-experienced-developers-21ad3d4c9b82
[]
2020-12-11 10:07:23.881000+00:00
['Programming', 'Software Development', 'Java', 'Spring', 'Coding']
15 Rules For Writing Clean JavaScript
15 Rules For Writing Clean JavaScript
From the book “Maintainable JavaScript”
Photo by Sarah Dorweiler on Unsplash
So you are a React developer or Node.js developer. You can write code that works. But can you write code that is visually beautiful and understandable by others? Today we will see some rules to make your JavaScript code clean and clear.
Rule 1. Don’t Use Random Characters as Variable Names
Don’t use a random character to name a variable.

// BAD
const x = 4;

Name your variable properly, so that it describes the value.

// GOOD
const numberOfChildren = 4;

Rule 2. Use camelCase Variable Names
Don’t use snake_case, PascalCase, or a variable name that starts with a verb.

// Bad: Begins with uppercase letter
var UserName = "Faisal";

// Bad: Begins with verb
var getUserName = "Faisal";

// Bad: Uses underscore
var user_name = "faisal";

Instead, use a camelCased variable name that represents a noun.

// Good
const userName = "Faisal";

Rule 3. Use Good camelCase Function Names
Don’t use a noun as a function name, to avoid confusion with variable names.

// Bad: Begins with uppercase letter
function DoSomething() {
    // code
}

// Bad: Begins with noun
function car() {
    // code
}

// Bad: Uses underscores
function do_something() {
    // code
}

Instead, start the name with a verb and use camel case.

// GOOD
function doSomething() {
    // code
}

Rule 4. Use PascalCase For Constructor Function Naming

// Bad: Begins with lowercase letter
function myObject() {
    // code
}

// Bad: Uses underscores
function My_Object() {
    // code
}

// Bad: Begins with verb
function getMyObject() {
    // code
}

Also, constructor function names should begin with a non-verb, because new is the action of creating an object instance.

// GOOD
function MyObject() {
    // code
}

Rule 5. Global Constants
Global constants whose values don’t change should not be named like normal variables.

// BAD
const numberOfChildren = 4;

// BAD
const number_of_children = 4;

They should be all uppercase and separated with underscores.

// GOOD
const NUMBER_OF_CHILDREN = 4;

Rule 6. Assignment to a Variable
Don’t assign a comparison value to a variable without parentheses.

// BAD
const flag = i < count;

Use parentheses around the expression:

// GOOD
const flag = (i < count);

Rule 7. Usage of Equality Operators
Don’t use “==” or “!=” to compare values, because they don’t type-check before comparison.

// BAD
if (a == b) {
    // code
}

Instead, always use “===” or “!==” to avoid type coercion errors.

// GOOD
if (a === b) {
    // code
}

Rule 8. Usage of the Ternary Operator
Don’t use the ternary operator as an alternative to an if statement:

// BAD
condition ? doSomething() : doSomethingElse();

Only use it to assign values based on some condition:

// GOOD
const value = condition ? value1 : value2;

Rule 9. Simple Statements
Although JavaScript supports it, don’t write multiple statements in a single line.

// BAD
a =b; count ++;

Instead, have multiple lines for multiple statements, and always use a semicolon at the end of a line.

// GOOD
a = b;
count++;

Rule 10. Use of If Statements
Don’t omit the braces from an if statement, and never keep it in a single line.

// BAD: Improper spacing
if(condition){
    doSomething();
}

// BAD: Missing braces
if (condition)
    doSomething();

// BAD: All on one line
if (condition) { doSomething(); }

// BAD: All on one line without braces
if (condition) doSomething();

Always use braces and proper spacing:

// GOOD
if (condition) {
    doSomething();
}

Rule 11.
Usage of For Loops
Don’t declare the variable in the initialization of a for loop.

// BAD: Variables declared during initialization
for (let i = 0, len = 10; i < len; i++) {
    // code
}

Declare them before the loop.

// GOOD
let i = 0;
let len = 10;
for (i = 0; i < len; i++) {
    // code
}

Rule 12. Consistent Indentation Length
Stick to using 2 or 4 spaces all the time.

// GOOD
if (condition) {
    doSomething();
}

Rule 13. Line Length
No line should be more than 80 characters. If it is, it should be broken up onto a new line.

// BAD: Following line only indented four spaces
doSomething(argument1, argument2, argument3, argument4,
    argument5);

// BAD: Breaking before operator
doSomething(argument1, argument2, argument3, argument4
    , argument5);

The second line should be indented 8 spaces instead of 4 and shouldn’t start with a separator.

// GOOD
doSomething(argument1, argument2, argument3, argument4,
        argument5);

Rule 14. Primitive Literals
Strings shouldn’t use single quotes.

// BAD
const description = 'this is a description';

Instead, they should always use double quotation marks.

// GOOD
const description = "this is a description";

Rule 15: Use of “undefined”
Never use the special value undefined.

// BAD
if (variable === "undefined") {
    // do something
}

To see if a variable has been defined, use the typeof operator.

// GOOD
if (typeof variable === "undefined") {
    // do something
}

So by following these rules you can make your JavaScript projects cleaner. These rules are taken from the book “Maintainable JavaScript”, written by Nicholas C. Zakas. So if you don’t agree with some of the points, I guess that’s fine. When it comes to styling, there is no single way, but the rules here can be a good starting point for you. That’s it for today. Happy coding! :D
https://medium.com/javascript-in-plain-english/15-rules-for-writing-clean-javascript-8e2b2b426515
['Mohammad Faisal']
2020-12-20 09:41:22.678000+00:00
['JavaScript', 'Programming', 'React', 'Nodejs', 'Web Development']
Laws of Nature: Waves
“As we penetrate into matter, nature does not show us any isolated “building blocks,” but rather appears as a complicated web of relations between the various parts of the whole. These relations always include the observer in an essential way. The human observer constitutes the final link in the chain of observational processes, and the properties of any atomic object can be understood only in terms of the object’s interaction with the observer.” ― Fritjof Capra Soundwaves Sound is a sequence of pressure waves, able to propagate through various mediums like air and water. Unlike lightwaves, soundwaves can reverberate around objects; while this means that it’s easier for them to travel around objects and obstacles alike, it allows for the possibility of distortion. This distortion must be accounted for on many levels, most notably, for our purposes, those of social importance. Information, by its very nature, is subject to variable kinds of distortion as it travels from receiver to receiver. We ought to apply this fundamental rule of nature to our own social interactions, individually or collectively, and appreciate the fact that such reverberations can be employed as filters, amplifiers, scramblers or conveyors of any given message. There are various elements in nature that muffle or carry sound. Subtle elements which influence sound in subtle ways. Or obvious elements which can influence sound in more obvious ways. For instance, a dense enough fog can carry certain types of sounds exponentially farther than they would otherwise travel, bouncing and vibrating off the suspended water molecules in the air; the closer the molecules are to each other, the easier and farther the sound can travel. We ourselves can, if we so choose, consider our actions and our maneuverings through the world to be like sound. We can move through the various mediums which we inhabit, riding certain elements with a relative ease compared to being smothered by other elements. In a vacuum, we can’t do much, as we have little to no molecules to vibrate. In a sea of noise, we can be drowned out. But, all the while, the soundwave carries along, propagating itself throughout the surrounding world. Lightwaves Unlike sound, light is not a pressure wave but a propagating wave of electromagnetic radiation. It’s much easier said than understood that humans can only perceive a sliver of the electromagnetic spectrum, what we call our range of visible light. Countless lifeforms in this world can pick up on various ranges of the spectrum that we can’t. For instance, how some birds perceive UV light and some reptiles perceive infrared. Still, from that limited portion of electromagnetic radiation that we can see, it lights up our world in ways we seldom appreciate. To see light from the sun being both entrapped and released as an iridescent scintillation of colors glistening atop snow is to realize that even a beam of light is comprised of many elements that can disperse and consolidate. To appreciate how the various colors of light become absorbed or photosynthesized by nature, and that plants and animals alike possess cryptochromes — photoreceptors — that regulate light absorption and tune the circadian clocks, is to understand the inherently intricate design of our biological systems. To appreciate how light can equate to heat and how heat can, itself, illuminate; to also appreciate that heat need not signify light and that light need not radiate heat is to understand the interplay of energy.
To understand that heat loss and heat transfer are the signature of entropy in our reality, signifying the perpetuity of motion and change, is to appreciate our ever-spiraling existence from order to disorder. I encourage anyone to sit atop a mountain on a windy day and notice how the sun, upon emerging momentarily from the pacing clouds, creates a sudden but subtle reaction in the atmosphere, as wind gusts emanate from nowhere and the sudden impact of light photons raining upon the surrounding area is felt everywhere.
https://medium.com/borealism/laws-of-nature-waves-e2486e43c0b7
['Michael Woronko']
2020-11-27 17:24:36.845000+00:00
['Science', 'Philosophy', 'Life Lessons', 'Lifestyle', 'Energy']
The best UX and design conferences in 2019 — the definitive guide
Events about creativity and technology SXSW Interactive More than a week of music, film and creativity in one single event • Keynotes, workshops and showcases about cutting-edge technologies and ideas • UXers: stay updated on trends for the year ✈ Location: Austin, TX, USA 99u Two days of talks from creative visionaries • Focused on idea execution rather than idea generation • UXers: if you dream big, hear from successful people how to become one by doing it ✈ Location: New York, NY, USA Emerge Conference A two-day conference about the world’s challenges and transformational change • For university students and young professionals • UXers: for the young designers who are willing to change the world ✈ Location: Oxford, UK Offset One of the world’s most inspirational, educational and vocational conferences for the design and creative industries. ✈ Location: Dublin, Ireland Forward Forward brings together the best international and local creative heads, who provide insights into their success stories in an exciting atmosphere. The conference, the centerpiece of the festival, is accompanied by various side events, such as workshops, live art sessions and networking events. ✈ Location: Hamburg, Germany Internet of Things Events Papers, demos and workshops about the new frontiers of the Internet • Cutting-edge research aiming to change the interaction between humans and objects • UXers: learn about upcoming challenges for designing interaction beyond mobile-tablet-browser ✈ Location: multiple cities Service Design Network Events A two-day conference for students and professionals • International speakers and cases • UXers: for those who want to design user experiences holistically ✈ Location: multiple cities The Feast A conference about innovation to solve the world’s challenges with big names and big ambition • Talks, workshops, hands-on activities and a global dinner party • UXers: you can join the proposed challenges without leaving your hometown ✈ Location: multiple cities
https://uxdesign.cc/the-best-ux-design-conferences-2019-events-bcd7b28f722d
['Fabricio Teixeira']
2019-06-10 11:32:35.617000+00:00
['Design', 'Product Management', 'Design Thinking', 'UX', 'Product Design']
Environment to get helping hand as Set4Earth launches new platform
Environment to get helping hand as Set4Earth launches new platform

Blockchain startup Set4Earth has launched a new platform which encourages community environmental action and provides funds for social enterprises. The new eco-friendly marketplace enables users to back eco-friendly ideas, projects and products through a Save Environment Token (SET).

“The aim is to create a community of like-minded people who want to protect the earth for future generations,” said Chief Executive of Set4Earth Ravindran Nambiar. “Funding and promoting environmentally friendly initiatives has typically been done through government bodies. SET puts the responsibility of saving the planet directly into the hands of the community.”

The Save Environment Token goes beyond a cryptocurrency, with one of its benefits being that it is a mechanism to reward users for supporting the planet. Investors can use their SET coins to hire, for example, bikes, and in the process reduce their carbon footprint. Leading parking operator Multic in Poland has given the green light for users to redeem coins for parking. Other products include anti-pollution devices such as home and car air purifiers, as well as hydrogen water bottles. In addition to encouraging environmental action, the platform also promotes eco-friendly products to create an R&D fund for social enterprises.

The project, which began eight months ago, is spearheaded by some of the world’s leading Blockchain experts from the United States, India and Australia, all with the aim of creating a greener environment. “This is a team with unrivalled green credentials. Together, we’ve already set up a range of initiatives from fossil fuel reduction and air purification to waste management and renewable energy products,” said Mr Nambiar.

The platform is open to anyone to buy, sell or invest. The current token price is US$0.90, with a total of 45 million tokens on offer.
https://medium.com/coincast-media/environment-to-get-helping-hand-as-set4earth-launches-new-platform-a0f95d5c237d
['Monica Sacroug']
2018-07-13 03:35:54.927000+00:00
['Cryptocurrency', 'Environment', 'Blockchain', 'Bitcoin', 'Token Sale']
Animal Instinct
Animals tutored me in my initial lessons in tenderness. When the human beings entrusted with guarding and guiding me failed in their parts, it was always a creature whose language I didn’t speak, nor did it understand mine, who had the best comprehension of the cipher I was growing into. The earliest instances of learning how to be soft without being weak, of embracing without naiveté, had come from Ruby. She was my first pup.

She was orphaned at birth in a hospice adjacent to my mother’s school. The elderly residents and the matron, who juts out in my memory with her butterball nose and a permanently wet mop of hair, had adopted Ruby’s gravid mother during the last few months of her pregnancy at my mother’s behest. My mother has raised roughly two dozen dogs till date. Most of them were pariah mongrels with some or other deformity. Almost all of them either died or ran away. After the last of this brood had to be put down due to a recklessly growing hysteria that was turning him into a rabid fiend, my grandmother — embodying the finest of autocrat ethos in response to being married to my communist grandfather — in all her dictatorial compulsion had launched a law worthy of the ukase — NO MORE PETS. This was to curb my mother’s wayward habit of bringing home whatever half-breathing critter she chanced upon on her way back from work, followed by weeks of bone-thinning depressive wailing if the said creature croaked its untimely hiccup in whatever fruit basket had been manicured into its home.

My mother, in the manner of perfectly astute peasantry combating such a communist home rule, ensured that while the edict was universally carried out, it was flouted quite well at the level of individual interpretation. For example, she often “baby-sat” other people’s pets, since there was no clause against it. This is how I met my first guinea pig and subsequently released it into a basket of freshly laundered clothes till its poop was rolling around like a freshly dried black peppercorn. This was also how she had managed to rescue Ruby’s fertile mother and convinced the hospice to accept her as their latest in-patient despite her not exactly fulfilling the criteria of age or, generally, being a biped.

Ruby’s mother died a few hours after childbirth, and so did most of the litter, since she was woefully undernourished despite the herculean efforts invested by the staffers and the usually grumpy geriatrics, who somehow turned into a dollop of vanilla melting on cracked asphalt when they saw her. They would share their rations with this feeble canine; one even snuck her into his room because he feared that she would go into labour without any supervision. I visited the scraggy mother-to-be, whose protruding belly reminded me of my grandfather’s leather tote stuffed with all the books he would buy for me during his travels. I was tiny and sometimes thought that maybe when she had her babies some of them would be comics.

Mum brought home Ruby because the household pet tyrant — my grandmum — was away in America, and so we had free rein over how things went on the familial front. I came back from school and mum said there was something in my grandmum’s room for me. Like any other idiot child of that specific age, it never occurred to me that the “thing” would come with a little beating heart and baby incisors tucking themselves into whatever soft surface they could wriggle close to.
I don’t know who startled whom worse — possibly me, since I jumped up like a mattress spring that had been released abruptly after a full hour of being coerced into its lair by the weight of a really heavy bottom. Then, like a spinning top, I circled the room in a mixture of glee and panic. Ruby was adorable and frightening. This, I have since learned, is a common blend for all things worth loving in your life.

Mum had recently remarried and my stepfather hated me. My grandparents were away and I was barely 8. We lived in a magnificently empty home where each room carried the silence of an abandoned bomb shelter. The ornate furniture slept cocooned in a thick quilt of dust, and I often spent after-school hours under the shade of a guava tree that was trying its best to imitate the leaning tower of Pisa. I did whatever I could to avoid being inside this empty shell of a house when my mother wasn’t around. The man inside incited a far more vicious fear in me than any ghost story I had smuggled from the library to read at leisure in the kitchen garden.

As Ruby grew, so did my stepfather’s savagery. Mum was working full time and completing her dissertation. I saw her hulking over thick tomes, her blunt-cut hair bobbing back and forth between pages. I didn’t want to disturb her with my petty screams for help. I took the beatings whenever they were dealt and pretended that I was an incredibly clumsy child whose sense of balance was akin to that of a newborn bunny.

Ruby sat with me and we both looked skywards, hoping to catch a comet in the glint of an eye. We climbed up the stairs to my grandfather’s library and played hide and seek in his room. We broke his bottle of scotch and she guarded the door as I cleaned the mess up. She had eyes that mimicked small poached eggs. Something on the brink of spilling out. At midnight, they looked like deep pitchers filled with fountain pen ink. Sometimes curiosity. Sometimes mischief.

One day, my stepfather forced me to accompany him to a butcher’s shop, and I stopped eating meat from that night. Ruby followed suit and decided that she liked cauliflowers and broccoli now. Feeding her meat was a hell-hard task. She folded herself into some yoga position that made her look half pretzel, half upturned snail.

Most of that period is a medley of blurs; a city escaping into the rearview mirror. I do remember an afternoon when mum and I bathed Ruby and she danced around in a gown of shampoo foam. She preened as we dried her and slept for 4 straight hours afterwards, to the point where we thought she had gone comatose. I paced endlessly in my pink bubblegum-colored slippers until finally mum forced me to take a nap. When I woke up, Ruby was panting by the side of my bed with a half-chewed, rose-tinted piece of rubber from my former slipper in her mouth. I patted the blanket and she climbed up. We both jumped up and down as if the mattress were a trampoline. Mum made caramel custard and it was a happy day. A day that still anchors me to the tethers of the ribs in this rhythm I call a body when I am most motivated to fling myself into some unnamed void.

Ruby died when I went on a camping trip with my school friends. I didn’t want to go, but mum and the counsellor decided I needed to socialize in order to curb some of my staunch introversion. I didn’t tell them that I didn’t like people because I was scared of them. I didn’t speak much because speech throttled me daily.
A person with whom I shared the supposed safety of my own home had flung me across hardened walls and broken my thumb and locked me in bathrooms on more days than I could count on my fingers. I sullenly went on this trip. Ruby and I spent a lot of time being mutually distraught and sitting on my window ledge. We both plotted my escape for once I was back from the trip. We had a pretty clever plan hatched on the back of my geography textbook, right next to a map of Portugal. We would launch it on my return.

So, I went away for 7 days. When I returned, mum’s sanguine eyes met mine in a gathering of sorrow and storms. She greeted me at the gate. She hugged my tired frame and asked me about the trip. We both wobbled into the living room, latched onto each other like sturdy vines hugging the iron grills of my bedroom. Ruby had died. Meat gone bad. Food poisoning. She didn’t want to eat the meat. The vet had told mum that this little vegetarian puppy would not survive without meat in her diet. So she was fed meat. Her body rejected it. Eventually, she rejected her body. They buried her in the graveyard that belonged to the hospice.

My spine paralysed by grief, I collapsed into my bed like a parachute landing on a tree branch. Half my limbs dangled from the bed and I refused to eat. I don’t know how long I stayed like that. My grandparents came back a week later. I refused to be held or cuddled. I refused touch; the kinaesthetics of tangible affection petrified me for a long time. Then I realized that Ruby had taught me how to be softer than the world would have allowed me to be. She was born against chance. She survived against it as well. It was not important for how long, but what was important was how much. She was fiercely dedicated to my happiness, as I was to hers. She placed a paw on my nose whenever she wanted to discard one of my hastily concocted escape plans, as if to indicate her prolific hauteur toward those harebrained blueprints. I survived because she did. And surprisingly, I continued to survive because she didn’t. Because I wanted to meet more Rubies.

Thereon, it was an entire circus — parrots, fish, cats, more dogs, lovebirds, more parrots, chickens, cows. Every phase of growing up was ably accompanied by an animal. When I hit back at my stepfather, I had seen a snake in the garden the day before. He had rubbished my palpitations, insisting that I was unwell and therefore delirious. I had once overheard the woman who came to massage my grandmother’s legs say that seeing snakes in your dreams was a good omen. Till date, in my mother’s maternal clan, on a designated day of the year, bowls of milk are left at places populated by snakes in an act of obeisance and gratitude.

Recently, I fell in love with guinea pigs. Mum rescued two from a breeder. They came home in a brown paper bag that looked like someone had aimed an AK-47 at it. Mum explained that she and a student whom she had taken along to bring them had punctured holes in the bag so the little ones could breathe. We placed them in an empty bird cage, since we didn’t have anything else to keep them in. Then began the near-soprano-level wheeking — a specific sound that guinea pigs make. Two sprigs of coriander were pushed through the bars. They disappeared. Two slices of cucumber. Bread sliced and shuffled across the slot. That disappeared too. Wheeking continued. We opened the cage door. One came out, followed by the other. In the flash of an eye, their 20-mile dash began.
Sprinting across the room, hiding under the sofa, biting the tips of the curtains across the French windows. At night, I didn’t want them in a birdcage, so I placed each in a picnic basket padded with towels and pillows. In the morning I found them rolled into the shapes of two croissants the size of a toddler’s fist.

These were lab-used guinea pigs, and one of them was already disposed toward a gastro-intestinal disease owing to the experiments that had been tried on it. It soon grew frail and eventually died from a hemorrhage. We asked the house help’s son to bury him in the offshoot of the mangroves not very far from where we lived. Mum and I held each other after a decade, crying soundlessly. We watched as his spindly body disappeared behind the trees, and with him, the shoebox carrying my guinea pig. We made him promise us that he would give him a proper burial. He did.

Mourning grew its knotted blue roots into me and I looked at the lonely being left behind from the short-lived couple. It was my birthday. I told Mum we would get him a playmate. We called up a rescue and were told there were some guinea pigs but they were “damaged”. In July’s copious downpour, we landed at the rickety building and demanded to see them. The first one looked like a spoonful of cookie dough; fur splotched by chunks of tawny hints in its all-white snowiness. This was going to be my Biscotti, I decided. I only wanted one, and then I saw the tiniest moving bundle in the cage, lumped with two bunnies. It was trying to hide beneath the bunnies. I asked the helper to remove him from the enclosure, and it had the most brilliant black beryl for eyes. I petted its coat and nuzzled its minute nub of a nose. It put its paw on my pinkie. Its tuxedo-imprinted fur was crusted with rabbit piss and dirt. This was going to be Snowball. I told the helper I was taking both.

Later that night my sister and I donned aprons and surgical masks, cleaning and sanitizing each of them. They were scared and eager at the same time. This time we had beds readied beforehand, and as we released them into the pudgy bedding, they started flipping about like popcorn kernels. This was their way of showing that they were happy. I learned that when you truly are happy, you disregard gravity. You don’t feel chained to your circumstance.
https://zaharaesque.medium.com/animal-instinct-db8706963a9
['Ʇsnſ Ʇuıɐs']
2018-04-27 08:44:58.443000+00:00
['Relationships', 'Animals', 'Essay', 'Mental Health', 'Animal Right']
How I Keep Coming Back to Writing
How I Keep Coming Back to Writing

Now I can truly say, writing makes me happy.

Photo by Ivan Samkov from Pexels

I know you must be thinking, what an idiot. If writing really makes you happy, then why would you keep leaving it in the first place? The answer to that is pretty straightforward — how would I know if this is what truly makes me happy if I didn’t even try something else? That’s the reason I kept trying — sketching, calligraphy, string art, doodle art, digital marketing, search engine optimization… I know there is no correlation whatsoever. But I had to try. And I did.

The conclusion? Although I may live this life working as a digital marketer, some part, some hidden part, maybe even beneath my soul, wants to type the letters one by one, form words, develop sentences and write a story. There is no better satisfaction than when I hit the “full stop” at the end of an article, a story, a poem or a blog post.

Finding the truth

Close your eyes and ask: is this something you can do all day long and not get bored of? If yes, then do it and see if you truly are enjoying it. I can tell you, trying different things, not liking them, and finally coming back to writing gave me more peace of mind than I expected. Now I don’t even care if I can be successful as a writer or earn enough writing for a living; none of these factors matter anymore. I know I’m willing to fight, to write, to struggle, but to stick it out as a writer, because this is what makes me truly happy. Isn’t that the most important factor in living a happy life?

“Life is what happens to us while we are making other plans.” ― Allen Saunders

Life goes on

At the end of the day, it doesn’t even matter what you do, how you earn, how you live; life will go on. Time can’t be stopped. Time can’t be controlled. It’s our choice how we want to spend the remaining days. I don’t want to sound depressing; rather, I want to motivate you, encourage you to try different things even if you’re happy with your current life. Move out of your comfort zone and experience new things. You never know, a Colonel Sanders may be sleeping within you, waiting to build the next KFC or something. My point is, you never know from where life can gift you an opportunity of a lifetime. You never know until you have experienced your life, not just in terms of career growth but in soul searching.

I’ve found my happiness. I’ve found my calling. Now I can heartily say, “I love writing and I want to be a writer.” But can you?
https://medium.com/illumination/how-i-keep-coming-back-to-writing-8bd31d7a6876
['Prasanta Banerjee']
2020-12-27 14:57:32.786000+00:00
['Life Lessons', 'Thoughts And Feelings', 'Mindset', 'Writing', 'Thoughts']
The Stretchy, Blocky People
I have noticed a trend in illustration which may be obvious to others more ingrained in the industry but which, from an outsider’s perspective, seems quite odd. The result is my interest in understanding this particular style of illustration, which I call “The Stretchy People.”

Who are the stretchy people? Or, rather, what is the stretchy people? Illustrated figures whose bodily characteristics are defined by elongated, angular limbs, thick bodies and small torsos or heads — in essence, a stretched-out, simplified depiction of the human figure.

“International Cat Day” by Frederique Matti

“Zoooom” by Spencer Gabor

If this type of illustration is a trend, how did it begin? Where did it stem from? There must have been some moment when all of the graphic illustrators around the globe decided this was a particular style to emulate going forward in the business, right?! No, that’s ridiculous thinking. Did this simplified style sprout from a specific school? Maybe it was when Matisse’s cut-outs became popular again?

From Henri Matisse cut-outs, published by Taschen

Was there a Bauhaus for the exaggerated illustrated figure? Nay, no, la, niet, na, non … if anything, the school of the 21st century is the internet and shared spaces of collective collaboration or display (Behance, Dribbble, Instagram, etc.), and movements shift and grow depending on their impact. Like anything else which is designed for public consumption, its success or failure is dependent upon its relevance and impact — in this case, to sell or illustrate for business. This movement, if one could call it that, is a success. The prevalence of long-limbed figures, or characters with exaggerated legs, arms, torsos, is something I cannot escape seeing everywhere. And maybe that’s because I am looking for them, but I cannot help but notice this trend and question its existence.

What is the appeal? The figures are simple to comprehend quickly; their design is easy on the eyes and doesn’t make one think too long about the content. The illustrations are not abrasive, with just enough description of line, wrinkle, angle, perspective. The illustrations are also fun, and oftentimes lite, which presents a goofy, approachable, unpretentious attitude towards whatever the advertiser is selling.

From “Balfe Park Lane” by Sebastian Curi

You see? I want to be living in this illustration. The colors are bright, the people look happy, and it is all very easily digestible.

“10% Happier” by Louis Wes

The variations on the format are quick to see between different illustrators, but the overall impression is the same: stretchy, blocky limbs, an exaggerated structure of the human figure, simple shapes and bright, attractive colors. I could go on finding more links between these illustrations and others, noting the similarities between this illustration and that graphic, but what’s the point? People are influenced by each other and, ideally, create work based on the excitement of seeing other people’s work. A breadth of inspiration has spread through the illustration market to produce these types of images.

Again, why is this style of illustration prevalent? Is it as popular as I am making it out to seem? Do these illustrations play to a certain market, specifically? Or can they be utilized across a wide spectrum of industries and uses? Perhaps the logistics of an ever-changing, rapid world necessitate illustrations made just as rapid, easy to consume, and non-threatening?
Are there worse things in the world than a movement of illustrations which simplify the human figure to a series of blocky, exaggerated, stretchy shapes? Obviously, yes. How will this style change, and will it continue to include the kinds of diversity in style and perspective that exist in this diverse world? Because, while this style is considerably popular, I find it disheartening that it is exactly that, a certain style, rather than an expression. This style produces a certain feel and aesthetic, relevancy among the current trends of the day, and conveys a certain mood which the stretchy people represent. Looseness, fun, liteness, love? Trends and fads come and go … I wonder what this one will evolve into next?
https://syorgon.medium.com/the-stretchy-blocky-people-13b675d2f209
['Geoffrey Thomas']
2019-05-29 12:12:24.101000+00:00
['Design', 'Illustration', 'Graphic Design', 'Drawing', 'Trends']
Maybe I Check My Phone Too Much
https://medium.com/slackjaw/maybe-i-check-my-phone-too-much-c7e429661786
['David Milgrim']
2020-12-18 17:17:54.266000+00:00
['Self Improvement', 'Comics', 'Social Media', 'Psychology', 'Humor']
My Green List
In the past few years I’ve made several changes to be more ecologically responsible — use less, reduce, reuse, recycle. These seemingly small changes, though relatively easy in a place like New York, took a lot of effort. I want to acknowledge others like myself, battling away silently in homes and offices, and hope to lend them a hand. This list is for them, and for those who want to join in and do more.

Guiding principle

Avoid stuff unless it’s absolutely essential, will be used for a long time, replaces one or more items, and can be recycled or reused. Teaching myself to want, and therefore need, less was the hardest part.

The List

Still need help with
https://medium.com/green-lists/my-green-list-5153763ce558
['Ritwik Dey']
2020-12-24 18:21:19.087000+00:00
['Environment', 'Reduce', 'Eco Friendly', 'Recycle', 'Consumerism']
12 Steps To Deploy A Python Flask App on Heroku
Detailed

1. Install GIT

Why do you need GIT? GIT allows you to create a local repository of your Flask app which you will use to push to the version hosted on your Heroku app. I use Github Desktop, which makes it super easy to create a local repository. If you prefer the command line, you can find the installation and usage instructions at this link. Essentially, go to your app folder, create your first repo, add all the app files, and commit them.

C:\project1\git init
Initialized empty Git repository in .git/
C:\project1\git add .
C:\project1\git commit -m "My first commit"

2. Create the Heroku app on Heroku

This is easy. Go to https://dashboard.heroku.com/new-app and create a new app. This creates a GIT file on Heroku which you can then push your local repository onto at: https://git.heroku.com/YourHerokuAppName.git. This will be your remote GIT. Alternatively, you can do this step later, after you have installed the Heroku CLI at step 3, and use the command line interface:

C:\AppDirectory\heroku create

3. Install Heroku CLI (command line interface)

You need this to deploy the app, so install it from here.

4. Install Postgres (PSQL)

Assuming you have a database add-on, you should install Postgres (PSQL) locally. This will allow you to use many of the commands on Heroku Postgres, which is included with the Heroku CLI. Once Postgres is installed and you can connect, you’ll need to export the DATABASE_URL environment variable for your app to connect to it when running locally. See item 5 below.

5. Set up config / env. variables on Heroku

Create a .env file in the root directory of your project. Add environment variables on new lines in the form of NAME=VALUE. For example:

DB_URL=postgres://....

You can also include environment variables in the settings section on the Heroku app website: https://dashboard.heroku.com/apps/

You can then refer to those variables in your app:

URL = os.environ.get('DB_URL')

6. Create requirements.txt using freeze (pip3 freeze > requirements.txt) and runtime.txt

Create a requirements.txt file in the root directory of your project:

C:\project1\pip3 freeze>requirements.txt

This file is used by pip to install the required Python packages. Create another txt file called “runtime.txt” with a single line stating the Python version you are using:

python-3.8.3

7. Install gunicorn (pip3 install gunicorn)

After installing Gunicorn using pip3, you should have access to the command line script gunicorn. What is gunicorn, and why do you need it? It is best to think of it as a middleman. The web server which you are hosting your app on is not designed to (securely) run your Python application. The Python community therefore came out with WSGI (Web Server Gateway Interface) — a standard interface for communicating with the web server. Gunicorn is one such WSGI server. I am simplifying, but it essentially means you need to connect your app to the web server using Gunicorn, a WSGI server.

8. Set up Procfile (no extension)

Once you have installed Gunicorn, create a “Procfile” (no extension) in the root directory of your project with a single line:

web: gunicorn yourappname:app --log-file -

Under your root directory, if your app is app.py, the line should read:

web: gunicorn app:app --log-file -

Under your root directory, if your app is hello.py, the line should read:

web: gunicorn hello:app --log-file -

hello.py (and in the earlier example, app.py) should be in your project root directory. Basically, if you read that line, Gunicorn communicates between your app and the web server.
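To make the Procfile line concrete, here is a minimal sketch of what the app module it points to might look like. This sketch is an illustration under assumptions rather than part of the original guide: the file name app.py, the DB_URL variable and the route are placeholders borrowed from the examples above, and the only hard requirement is that the module expose a Flask object whose name matches what follows the colon in “app:app”.

# app.py -- minimal sketch, assuming the Procfile reads "web: gunicorn app:app"
import os

from flask import Flask

app = Flask(__name__)  # "app" is the object gunicorn looks up after the colon

# Connection string exported in step 5 (DB_URL is the example name used above)
DB_URL = os.environ.get('DB_URL')

@app.route('/')
def index():
    # Trivial placeholder view so the deployed app returns something visible
    return 'Hello from Heroku!'

if __name__ == '__main__':
    # Local development only; on Heroku, Gunicorn serves the app instead
    app.run(debug=True)

With a layout like this, running gunicorn app:app --log-file - locally should serve the same app that Heroku will run after deployment.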
9. Login To Heroku

Go to the command line.

heroku login

10. Deploy the app

This should only be done in the project directory where you created the local repository. This pushes the local git to the remote git on Heroku.

git push heroku master

You can check your remote git on Heroku by using:

git remote -v

11. Generate at least one instance of the app

heroku ps:scale web=1

This is an easy step to forget.

12. Run the App
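The source stops at the heading for step 12, so the closing commands are not spelled out. Assuming the standard Heroku CLI installed in step 3, the usual way to finish is something like the following; treat it as a likely reconstruction rather than the author’s own step 12.

heroku open
heroku logs --tail

The first command opens your app’s herokuapp.com URL in the browser; the second streams the app logs, which is useful if the dyno crashes on boot.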
https://medium.com/python-in-plain-english/12-steps-to-deploy-a-python-flask-app-on-heroku-db7c5a311056
['Frank Li']
2020-09-14 21:16:13.240000+00:00
['Flask', 'Heroku', 'Python', 'Programming', 'Deployment']
Oh Yes, You are the One!
Here I am gonna tell you something about yourself that you are not gonna believe. Even though you will forget it the moment you go back to your normal life, there’s still no downside to it. I am not gonna belittle or demean you in any way, so you might as well hear it. I am not gonna stop unless this seed of an idea gets implanted into your subconscious mind so strongly that something absolutely mind-boggling grows out of it. Yes, I am talking about the fact that YOU ARE INFINITE!

Photo by Ivan Slade on Unsplash

You Heard Me

You are the one that is the very source of this Universe, and not just the mortal bodies you think you are. All the living bodies there are, we humans and other animals, the flora and fauna, on this planet and any other planet; the material that they are all made up of — that’s the real YOU! This material is immortal, undying, unending, eternal, perpetual, deathless, everlasting, indestructible, imperishable, inextinguishable, constant, permanent, timeless, ageless and 112 more synonyms you can think of. The point to be noted here is the fact that I exhausted my vocabulary in the above statement, just to show you how serious I am when I say that you are INFINITE and you are the one! It is not a feel-good type of motivation I am trying to instill in you. I am also not trying to boost your egos by flattering you. I just wanna give you the facts. The truth. To tell you who you really are. Plain and simple. You never take BIRTH, nor do you ever DIE. There is no way to kill you. Yes, I am talking about you, not GOD.

Expand the Horizon of your Understanding

You don’t have to find joy in this world when you actually are HAPPINESS yourself. There is no reason to be sad or morose, but still, we are. And why is that? Because we don’t really know who we truly are. The wrong kind of understanding leads to all the problems. You don’t have to love anyone. You are LOVE yourself. And what is LOVE? This connection that connects everyone in this Universe is love, and this love is the real you. What we people consider as love, we are always scared of losing. Once we experience real love, the connection I talked about, then there will be no fear of separation left. Once you truly understand your actual self, you will become united with this whole Universe. And then this body becomes like an instrument for you to play with.

Photo by bruce mars on Unsplash

You take care of it in every possible way so that you can experience the ultimate potential of this instrument. You will become absolutely clear that this body, which we confuse our identities with, is just an instrument to see, an instrument to hear, an instrument to experience, an instrument to store memories. But you are the EXPERIENCER, EXPERIENCING through this tool, and not the tool itself. And not just through this one instrument, but through all the instruments in this world. The consciousness that allows us to feel all the sensations and vibrations is the same for every animal on this planet.

The Real You

You are just like the electricity flowing through every electrical appliance on this planet. But the problem arises at the level of understanding, when the electricity starts thinking that it is limited to just one appliance. The electricity just needs to understand that it is not limited to just one bulb; it exists in every bulb there is. Electricity is everywhere, and you are like this electricity, running through every living creature that exists on this planet.
This is called understanding yourself at the deepest possible level, where there is no separation of any kind, where there is only UNITY. This unity is called LOVE. Now you can see yourself in everyone, and everyone in yourself. The biggest misconception that has been planted in us is that every person has his own different soul. “This is your soul; this is my soul.” BS. The truth is that there is no division of any kind. Take space, for example. The space between your screens and your eyes is not different from the space inside your body. (When you open your mouth, there is some space, right?)

Photo by Shot by Cerqueira on Unsplash

Well, space is space. There is no actual division in space whatsoever. All the divisions are imaginary, created in and by the mind. All the directions are just imagined for our own comfort. Is there actually a north direction? Think about it.

The Universe Is Our Baby

Just like for a mother, her child is a part of her body. She doesn’t need to try to love the child; for her, the baby is her own extension. Similarly, this universe is our baby. The whole universe grew out of us, not vice versa. It is made up of us. If you have the capability to see things this way, then all that will be left will be the actual LOVE. Once you understand this deeply, then there will be no one left to hate. Words like ‘Hate’, ‘Jealousy’, ‘Envy’, ‘Loathing’ will be struck off your dictionaries. Now you will start loving even the shortcomings of this world, just like a mother’s love for her child is absolute and unconditional. So even if the child poops, the mother will be ecstatic to see that her child is healthy. All right, maybe that’s a stretch, but I hope you got the point I am trying to make here.

Photo by Jude Beck on Unsplash

If you can see things with this perception, then you wouldn’t have to put in effort to love the people you are in a relationship with; you will love them all the way. The ones who try to love will naturally have a tendency to fight all the time. The ones whose basic nature is to love won’t fight at all. There is just no possibility for conflicts to arise this way.

Stop Following The Masses

One way is to live the way everyone is living in this world, where 99.99% of their actions are influenced by fear and desire, where they will just try to fake their happiness on the outside and hide the misery on the inside. But if you understand who you are, you may sometimes cry on the outside, but you will always be in a state of eternal happiness on the inside. It is a completely different scenario. And this choice is totally up to you. So now even if your wife slams you for whatever reason, you may start crying outwardly, but inside you will be laughing and chilled out. Since you have understood the fact that your wife is not a separate entity, that she is a part of you only, there is no reason to be sad or angry. Just like the teeth in your mouth: are they separate from you or are they a part of you? Now if your tongue gets caught between your teeth, do you take out the hammer and start thrashing your teeth? I hope not. Well, why don’t you? After all, the teeth made such a horrendous mistake of laying their hands on the sweet and innocent tongue which lets you taste all the delicious and scrumptious food of the world.

Photo by OSPAN ALI on Unsplash

Because you know that the teeth are also you, and so is the tongue. This is a real understanding of yourself. Now if everything is you only, then what are you gonna do?
Will there be anyone left in this universe who will be unforgivable to you, no matter how big a mistake they make? Are there any reasons left to be sad or unhappy? Who are you gonna fight with now? Who are you gonna get angry at now?

Might Look Weird But Is True

The wife or the husband is like a reflection of your own image. It is exactly like looking directly into a mirror. Can you fight with your own image in the mirror? Humans don’t, but animals do. They look at their own reflection in the mirror and start scratching the surface of the mirror. A bird looking at her own image in the mirror will be scared, because she thinks it is someone else. And then she starts pecking at it, and the reflection in the mirror does the same to her. She keeps on pecking at the image, which in turn keeps pecking back at her. All she needs to do is understand the fact that the moment she stops pecking, the image will too, because they are both the same. That is precisely what’s going on in our lives as well. We only keep pecking ourselves all day long, because of our own lack of knowledge and understanding. Once you grasp completely the fact that this entire universe is a reflection of you, how on earth are you gonna fight it?

Why So Serious?

After developing this kind of understanding, this whole world will become a playground for you, which for some people is a pretty grave and serious fight. Once you get this, there will be no greed to win or terror of losing. You will play this game of life just because it will become your very basic nature to play. Like a monkey doing cartwheels in the middle of a forest, who is doing it not to impress an audience in a circus, but because it is its very basic nature. Try and remember: when you were kids, you used to play each and every game pretty seriously. Winning or losing was a really important matter for us.

Photo by Alvin Mahmudov on Unsplash

But now if you play with your children, even if you are losing to them, you will be happy because you know it is just a game. And if this becomes your attitude in the game of life, then winning or losing won’t matter to you; what will and should matter is whether you enjoyed playing the game or not.

That should be the End Goal

After this, you don’t have to put in any effort or do any meditation to stay in the moment; you will already be in the present moment. Understanding yourself means that your past and your future are just an illusion. The concept of time is just an illusion. It is just a concept created by our brains. The reality is that we never really change at all. The deeper you dive into this reality, that you are the one and you are this supreme reality and not different from it, the more something truly mesmerizing will start happening. Now whether you wanna be mesmerized by the reality or be grief-stricken in the illusion everyone believes in, that choice is yours to make. So, choose wisely. Even Rumi said:
https://medium.com/illumination/oh-yes-you-are-the-one-20b67fa3535e
[]
2020-12-03 19:40:36.770000+00:00
['Wisdom', 'Self', 'Spirituality', 'Self-awareness', 'Understanding']
Digital Context: Greg Lynn & Stan Allen
The Digital Turn

There were several influences that collectively contributed to the emergence of the digital paradigm in architecture. One of those influences was a response to the fragmentation and fracture of postmodern collage systems and a movement towards continuity and the pursuit of fundamentally different part-to-whole relationships within architecture. The conceptual basis for this continuity was found in Gilles Deleuze’s conceptual philosophy and in René Thom’s catastrophe diagrams (Figure Below).

Gilles Deleuze, The Baroque House. From Gilles Deleuze, Felix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (Minneapolis: The University of Minnesota Press, 2005), 5.

Cusp Catastrophe diagram, René Thom.

According to Carpo, Deleuze’s “The Fold: Leibniz and the Baroque” can essentially be interpreted as a vast hermeneutic of continuity applied to Leibniz’s conceptual ideas, his mathematics, and baroque art.¹ The concepts defined in Deleuze’s work were introduced to the field of architecture primarily through the articulation of Peter Eisenman and his projects that explored parallel architectural analogies of Deleuzian philosophical concepts. The diagram of the Deleuzian fold carried the potential to offer an alternative to fragmented postmodern systems of collage. The ideas of folding and smooth continuous variation suggested blurring the edges of abrupt change and introducing smooth transitions between the parts of the whole. Variable curvature as opposed to fragmentation and disjunction.

Alongside the rising availability of general-purpose computers and their ability to manipulate differential geometry with ease, the exploration of conceptual folding and the resulting continuity became possible. Without computers, some of those complex forms could not have been conceived, designed, or measured. Thus the emergence of the digital paradigm in architecture was a confluence of these seemingly unrelated influences that together opened up new territory for exploration. However, the use of computers also posed a series of fundamental challenges to the discipline. The questions of authorship, originality, vision and mechanical reproduction needed to be re-thought and re-contextualized in the light of the new electronic paradigm.

In his essay “Visions Unfolding: Architecture in the Age of Electronic Media,” Eisenman argues that the conceptual framework of the Deleuzian fold, in the context of the electronic paradigm, is capable of providing a different architectural model.² He argues that the electronic paradigm and the arrival of media culture challenge the concept of interpretive vision.³ According to Eisenman, ever since the invention of one-point perspective, architectural discourse has been tailored around the concept of anthropocentric vision manifested in the use of perspective. However, since the computer conceptualizes differently, it offers the possibility of an alternative mode of architectural thinking, one that does not operate through vision in its traditional sense (Figure Below).

Alteka Office Building, Tokyo, 1991; Folding diagrams and plans. Courtesy of Eisenman Architects.
Eisenman’s early steps into this undiscovered territory were merely embryonic attempts to utilize Deleuzian conceptual philosophy and examine what changes the electronic paradigm might introduce to the field of architecture.⁴ This exploration continued with the succeeding generations of architects and resulted in formalized design methods that attempt to utilize the specificities of the digital paradigm in architecture.

Greg Lynn & Stan Allen

In the succeeding decades, some of Eisenman’s abstract and conceptual ideas were molded into more formalized design methods by Greg Lynn. In his essay “Architectural Curvilinearity: The Folded, the Pliant and the Supple,” Lynn attempted to introduce an alternative to postmodern collage-based design approaches through the logic of folds and the introduction of smoothness.⁵ For Lynn, the conflict and contradiction of postmodern architects were the primary response to a gradually rising awareness of the complexity and diversity of architectural context. Lynn’s resolution for the inherent contradictions of a collage was to use sequences of smooth transformation that would allow integrating diverse fragments within a continuous and heterogeneous whole. Lynn believes that the logic of smooth transformation is capable of accommodating and integrating differences within continuous and diverse systems without compromising the coherence of the system as a whole.⁶ Contextual conflicts are not violently clashed together but rather are smoothly folded into architectural form while maintaining their individual identities.⁷ This folding results in formal deformations that are made possible by the flexibility of topological geometry in response to external events. They result from a logic that seeks to internalize contextual forces. These deformations fold the outside with the inside and the context with the form of the architectural artifact. A similar understanding of the relationships between inside and outside can be observed in Deleuze’s work:

The outside is not a fixed limit but a moving matter animated by peristaltic movements, folds and foldings together make up an inside: they are not something other than outside, but precisely the inside of the outside.⁸

Lynn highlights that, traditionally, in the process of architectural design the rich network of contextual forces was commonly disregarded. This debate between understanding virtual space as an equilibrium-state system as opposed to a dynamic field of force flows can be traced back to Descartes and Leibniz and to their views on the nature of gravity. Descartes reduced elements of larger dynamic systems and isolated them to extract steady-state equations, while Leibniz examined the parts within the contextual field of the whole dynamic system. Thus Lynn concludes that: “This shift from passive space of static coordinates to an active space of interactions implies a move from autonomous purity to contextual specificity.”⁹

This approach has been actively employed in other disciplines where the understanding of the design space as an active environment is an integral part of the design process. Naval engineering is one example, where the form of the ship hull is shaped through the interaction between the envelope and an active context.¹⁰ This kind of active understanding of the environment is what constitutes the creation of “Animate Form.” Animate form incorporates a multiplicity of influences acting upon architectural form into a single continuous surface.
If described as vectorial fields, the collective force of these contextual fields can morph the forms immersed within them. Since animate form is by definition a vector-based topological entity, it can systematically incorporate time, motion and force into its shape in the form of inflections.¹¹ The same underlying idea was used by D’Arcy Thompson in his theory of transformations, where the contextual forces acting upon different species were notated with a curving geometric coordinate frame (Figure Below).

Study of transformation of various crab carapaces through the deformation of a flexible grid. From D’Arcy Wentworth Thompson, On Growth and Form (Cambridge: Cambridge University Press, 1942), 1057.

From this point of view, the shape of the animate form becomes an information carrier that stores the currents of forces acting upon it. To a certain extent, this idea is not novel to the discipline. Even Camillo Sitte showed that the location and shape of important urban monuments were influenced by the traffic flows within the plazas. However, what has changed since antiquity is the complexity of the contextual forces that act within the environment of the city. Lynn’s description of form added an elastic and malleable dimension to the understanding of form, establishing a formalized system where the relationships between contextual field and architectural artifact can be described as an interaction of vectors and fields. Lynn writes:

Issues of force, motion and time, which have perennially eluded architectural description due to their “vague essence,” can now be experimented with by supplanting the traditional tools of exactitude and stasis with tools of gradient, flexible envelopes, temporal flows, and forces.¹²

Thereby, these properties of the design environment and their impact on forms can be expressed as numerical data and become parametrized equations targeted at resolving specific design situations. Lynn’s primary concern in his work was to explore the possibility of creating an architectural model where the particularities of context are already “plied” in.¹³ Additionally, Lynn’s view of the relationship between context as a field and form as a smooth, morphable entity submerged in that field blurred the edges of the traditional figure/ground diagram. As a result of enfolding context, this blurring eventually formed into the idea of dynamic landscapes, where the deformations of ground produce figurative slopes.¹⁴ The edge separating the figure from the ground started to be viewed as the line of a conceptual fold. The idea of the fold gave the traditional idea of the edge a dimension. The figure was no longer seen as a dialectical opposite of the ground; rather, it was the foldings of ground that produced figures. What can be seen in Lynn’s idea of dynamic landscapes is the attempt to bypass the figure/ground dialectics and render them as folded continuities. However, in his work, the subtle distinction between the two always persists. Meanwhile, Stan Allen, in his essay “Field Conditions,” develops this idea further to its logical conclusion. Allen’s definition of figure implies a form of more dynamic relationship between figure and ground, where the figure emerges as an effect of the fluctuations of the ground (Figure below).
https://tigran-khachatryan.medium.com/architectural-context-part-10-digital-context-greg-lynn-stan-allen-f912f524017a
['Tigran Khachatryan']
2020-09-15 14:54:54.004000+00:00
['Architecture', 'Design Thinking', 'Design', 'Design Process', 'Reading']
Working Virtually is a Dance Between Humans and Technology
Working Virtually is a Dance Between Humans and Technology

Will this information be obsolete by the time you finish reading it?

For years before this pandemic put all of us online for everything, my students of group facilitation were already trying to hold meetings on digital platforms. Project managers, non-profit executives, college administrators, HR people, grassroots organizers, pretty much anyone who needs to run groups successfully came through my workshop, and mostly they were holding face-to-face meetings. But more and more of them needed to meet online. They used whatever platform their company recommended, and yeah, it was quite the variety. Lucky for me, in spite of the different software, they all had the same questions about how to facilitate an effective online meeting. I enjoyed the challenge of translating the values and techniques I teach into an online environment, but I realized right away there was no way I was an expert on the technology. Tech is definitely not my jam. Plus, innovation was a moving target. Every few months there was a new favorite. Notice I am not mentioning any names? Because we don’t use those tools anymore. In fact, you might as well put this whole article in your 2020 Time Capsule right now, because there’s a good chance that by the time you finish reading, it will be obsolete.

Since my students were the ones using the tools, I could hardly claim more expertise than they had. So, I told them, “Right now, developing online meeting tech is at the Wright Brothers stage of flight. We are riding a bicycle with paper wings down a sand dune.” Then I reviewed the requirements of any successful meeting and encouraged them to find ways to meet effectively. “Innovate!” I said.

It’s a partner dance: We lead and we follow.

This got me thinking about how technology develops in relation to how we use it. Humans invent machines to do things we need to do. Meanwhile, machines shape us as we use them, based on what they are capable of — which is often amazing, but still limited. Probably none of us are old enough to remember when cars were invented, but there was a lot of discussion back then about how people would lose the ability to walk. It may seem silly now, but a century later here we are; the increase in obesity and cardiovascular disease in developed countries can be linked, in part, to the nearly universal use of automobiles. Raise your hand if this will stop you from driving your car.

More recently (you do remember this), along came the cell phone. Undeniable value. People all over the world have cell phones: in cities, in remote villages, even people who have never had any other kind of phone. Even people who have to walk miles to access electricity now have a phone. It’s just a massively powerful computer in your pocket. What could possibly go wrong? Ahem. There’s plenty of data on how looking at little screens all day causes eye strain, bad posture, or even cancer. But the people I know who would forgo their cell phone for these reasons I can count on one finger. We humans create technology and then risk actual physical harm (not to mention environmental catastrophe) to use it, and we barely notice. Why? Obviously, tech is magical, and it offers amazing personal value. But there’s another reason. The best technological advances are successful because they are intuitive to use. They do what we need them to do in a way that allows us to stay comfortable as human beings; they integrate well with innate human behavior.
When a device feels intuitive, it’s like an extension of what we were doing anyway — only better. It’s like when you are shopping with a friend and you try on a new shirt and they say, “It looks like you already own it.” Steve Jobs knew this and designed for it, famously saying:

Some people say give the customers what they want, but that’s not my approach. Our job is to figure out what they’re going to want before they do. I think Henry Ford once said, “If I’d asked customers what they wanted, they would’ve told me ‘a faster horse’.”

Jobs was talking about making things that already fit into what people do naturally and extend our capability without much effort from us.

Socio-technical Systems

Systems thinkers call this dance between people and machines a socio-technical system, meaning that the elements of the system that influence how the system functions and changes are both human and technological. Meeting online is a prime example of this. “Meeting” means communicating. The struggle of digital communications is to recreate human face-to-face communications. And since we humans are evolutionarily the “storytelling ape” (apologies to Terry Pratchett) and we literally survive by communicating, our standard is very high and it involves all our senses. Technology is trying to catch up, and making progress. From drums in the distance, to letter writing, to the telegraph (not intuitive), to the telephone (intuitive), to big clunky video cameras in conference rooms (no one used them after the first week), we have finally arrived at — drumroll please — Zoom!

Ok, I know there are many other platforms for meetings, but Zoom is the current hands-down favorite. It’s not perfect, but the pandemic has made it ubiquitous. Even my cousin Liz uses it. And why? Pop quiz! — you know the answer: because it is the most intuitive to use, and currently gives us the best technological version of face-to-face communication and the best tools to collaborate. This won’t last, for two reasons. The first is competition from other tech developers. (MS Teams has a beta version out that sends chills down my spine.) And the second reason is us. The users. People using technology changes the technology. We make tech do things we need it to do even beyond its original design.

Cool Zoom Hack

Here’s my example. Zoom has many handy features, including easy-to-use breakout rooms. I absolutely need breakout rooms! I support groups collaborating on complex issues, and small groups are where it’s at because people need to actually talk to each other. Zoom breakouts provide the obvious: You can put people in rooms randomly or manually organize who goes in what room. Every room has its own chat and a whiteboard — it’s great. But what if you want to make topic-specific breakout rooms and give people the choice of which topic they want to talk about, then allow them to move from room to room on their own? I definitely need this. My philosophy is to give people as much autonomy as they can stand. This is what I do in my face-to-face work, and my clients need to do the same online. Luckily, in the mad scramble to teach Zoom to our clients, my friend Raymond van Driel from the Applied Improv Network figured it out. He immediately shared his discovery, and now we can all easily do Open Space and a host of other essential formats. You can get specific instructions here. Or wait for Zoom to make this easy, which I hear they are planning to do.
When the breakout rooms can have designated topics, and people can move between the rooms on their own based on their interests, it feels natural. There are lots of applications, not just in work settings, but also in family gatherings, social events, even complex games like Werewolf. Not surprisingly, we aren’t the only ones to discover this, so it really is like the era of the Wright Brothers, when many groups were separately trying to invent a flying machine. We are all in this dance of invention, even those of us who don’t think of ourselves as tech-savvy.

But let’s not stop there. Technology is being developed by us. The dance back and forth between human needs and tech capability will continue. You hear a lot about both the wonders and the horrors of new technology. This is an invitation to not just see ourselves as consumers of the wonder and victims of the horror. Let’s do the best job we can to make technology reflect and support the finest aspects of who we are as humans.

With a nod to Sam Kaner for inspiration.
https://medium.com/ninja-writers/working-virtually-is-a-dance-between-humans-and-technology-3b6f56419538
['Sarah Fisk']
2020-09-25 16:34:20.650000+00:00
['Productivity', 'Work', 'Digital Life', 'Technology', 'Remote Work']
All of this election anxiety has ruined my sense of time these days.
All of this election anxiety has ruined my sense of time these days. I’m so ready for us to move on, heal wounds, and try to resolve the gulf that has split America. I almost forgot to publish all the stories on Crow’s Feet these past ten days, and the stories have piled up. I always include every story in the newsletter, but this week I will divide them into Part One and Part Two.

PART ONE

Want to Know the Date of Your Last Day on Earth? Check Out the Death Clock! By Roz Warren

“The Beauty Of Time Itself…” When Goodbye transforms us by Ann Litts

Am I Too Old for This? Why you should stop asking for permission and start living life on your own terms. By Rose Bak

My Father’s Skin. A free-form poem by Dennett

A Penny or a Dime. Braided Generations, a poem by Eileen Vorbach Collins

Pavlov’s Pings? Most of us learned early about Pavlov’s dogs by Nalini MacNab

The End of Life Conversation With My Father. It began three years before he died by Alice Goldbloom.

If Life Is About Timing. . . Maybe your time is now by Max K. Erkiletian

Sorting Through the Bits and Bobs of Life and Loss. My Dad’s garage was full of junk that needed dealing with when death was near by Mary DeVries

How to Give a Good Eulogy. Pulling yourself together to speak meaningfully at a funeral is challenging but worth it by Mary DeVries

Is There a Captivating Tale Hidden in Every Word, Just Waiting for Someone to Ask? Did someone mention the top of my piano? By Jean Anne Feldeisen

Things Are Falling Apart. The fears of the future are coming to pass by Jo Ann Harris

Give a gift filled with ideas about how to get the most out of later life, Crow’s Feet: Life As We Age, now in paperback and ebook on Amazon, Barnes & Noble and Bookshop, where sales benefit your local independent bookseller.
https://medium.com/crows-feet/all-of-this-election-anxiety-has-ruined-my-sense-of-time-these-days-edac30cdb92d
['Nancy Peckenham']
2020-11-06 16:12:38.182000+00:00
['Healthy Living', 'Retirement', 'Wellness', 'Aging', 'Self Help']
Texas Chemical Plant Explodes One Week After EPA Rolls Back Protections
Aerial photo of the West, Texas fertilizer plant explosion site taken several days after blast: April 22, 2013. (Photo: Shane.torgerson) “This should not be anyone’s reality and unfortunately, it is for communities sitting at the fence-line of the petrochemical corridor along the Gulf Coast — an ever-growing corridor because of the billions of dollars being invested in petrochemical infrastructure related to plastic.” Local officials ordered the evacuation of tens of thousands of residents after a pair of explosions at a chemical facility in Port Neches, Texas, injured three workers and shattered windows and doors of nearby homes. The explosion, which local ABC affiliate KTRK reports is burning the carcinogenic chemical butadiene, comes a week after the Trump administration rolled back safety protections aimed at preventing disasters caused by dangerous chemicals. Beyond saying that the incident involved a “processing unit,” the petrochemical company has not yet stated the exact cause of the explosion. Chemical Disaster Rule Rollback As Citizen Truth wrote last week, the Trump EPA recently rolled back a safety rule aimed at preventing similar incidents at chemical plants: “The Chemical Disaster Rule was imposed by the Obama administration after a 2013 explosion in West, Texas killed 15 people, including 12 first responders, and injured over 160 others. The rule mandated that chemical companies take new measures to prevent similar accidents, including the use of safer technology and procedures, third-party audits in the event of a problem, determining the ‘root-problem’ of any spill, and public access to information about the types of chemicals stored on their sites.” Nearly 40 House Democrats urged the EPA to retain the Obama-era rule in a letter, citing 73 accidents and leaks that have taken place since it was suspended. The Chemical Disaster Rule rollback came a week after a federal appeals court ruled that the Trump administration illegally “excluded millions of tons of some of the most dangerous materials in public use from a safety review,” as reported by the Associated Press. TPC Group And Other Petrochemical Manufacturers The TPC Group, the owner of the Port Neches plant, describes itself as “a leading producer of value-added products derived from petrochemical raw materials.” The formerly publicly traded company has a 75-year history in Houston and Port Neches. Its environmental record has worsened since it was purchased in 2012 by the New York private equity firms First Reserve and SK Capital Partners, which still own TPC today. “This facility has a track record of violating the Clean Air Act, with five other illegal emissions events just in 2019, emitting carcinogenic 1,3 butadiene and other chemicals, and a history of community complaints,” said Catherine Fraser, Environment Texas’s clean air associate, in a statement. Fraser notes that butadiene, the carcinogenic chemical burning in Wednesday’s explosions, has been the main source of concern among regulators at the Texas Commission on Environmental Quality. The TPC facility has failed to abide by the Federal Clean Air Act for the past 12 consecutive quarters. “According to the EPA, the TPC Plant has been in non-compliance 12 separate quarters over the last 3 years, and has received 7 formal enforcement actions over the last 5 years,” said Fraser.
During a news conference Wednesday morning, TPC Group’s manager of safety, health and security Troy Monk said he was unaware of past fines and investigations regarding emissions standards. “Monk said he could not comment on the plant’s compliance,” reports Beaumont Enterprise, a newspaper headquartered in Beaumont, Texas. The Port Neches explosion is the latest in a series of industrial incidents that have afflicted the region. NPR reports that Houston saw three fires at chemical plants from March to April this year, and more than 30 people sustained injuries after a fire erupted in an Exxon Mobil refinery in Baytown. “Within the last year, I have witnessed an unacceptable trend of significant incidents impacting the Gulf Coast region,” Texas Commission on Environmental Quality Executive Director Toby Baker said in a statement. “While not all emergency events may be prevented, it is imperative that industry be accountable and held to the highest standard of compliance to ensure the safety of the state’s citizens and the protection of the environment.” Environmentalists spoke out against the negligence of the TPC Group and other petrochemical manufacturers, and condemned the ongoing production of similar facilities. “This should not be anyone’s reality and unfortunately, it is for communities sitting at the fence-line of the petrochemical corridor along the Gulf Coast — an ever-growing corridor because of the billions of dollars being invested in petrochemical infrastructure related to plastic,” Yvette Arellano from the Texas Environmental Justice Advocacy Service said in a statement. Over a dozen similar plastic-producing petrochemical plants are being built or have been proposed around the world. As Citizen Truth previously wrote: “Large U.S.-based oil firms such as ExxonMobil Chemical and Shell Chemical will help fuel a 40% rise in plastics over the next decade, according to experts. These companies have invested more than $180 billion in plastic production since 2010, boosted by the boom in shale gas.”
https://medium.com/citizen-truth/texas-chemical-plant-explodes-one-week-after-epa-rolls-back-protections-ed7794a7a750
['Citizen Truth Staff']
2019-11-28 22:50:01.307000+00:00
['Chemical Safety Rules', 'Port Neches', 'Environment']
Facial Recognition Companies See the Coronavirus as a Business Opportunity
Facial Recognition Companies See the Coronavirus as a Business Opportunity Facial recognition companies are pitching the technology as a sanitary alternative to fingerprint scanners Photo: Cai Zixin/China News Service/Getty Images The Covid-19 crisis enveloping millions of people around the world is also presenting an unlikely business opportunity for one sector of tech: facial recognition technology. Companies including DERMALOG in Germany and Telpo in China are pitching the technology as a method for identifying individuals without the risk of close contact. Fingerprint scanners, for instance, require that many people touch the same surface, which could potentially spread infection if someone with Covid-19 were to use an unclean scanner. Businesses in India are being directed by police to ditch fingerprint authentication in favor of facial recognition or ID cards, and the NYPD is pausing its fingerprint entry amid coronavirus concerns. Companies eager to make facial recognition the default form of identification are rushing to fill the void. “In the face of this outbreak, we have developed a solution for noncontact body temperature measurement plus face recognition.” DERMALOG, a biometrics company that makes fingerprint, iris, and facial recognition hardware, has adapted its technology to determine temperature and is pitching the update as a safety feature. It’s already in use by the Thai government for border control. Telpo is launching temperature-sensing terminals with facial recognition, which allegedly work even if a person is wearing a face mask. “In the face of this outbreak, we have developed a solution for noncontact body temperature measurement plus face recognition in order to meet the rapid need to diagnose the patient and isolate and control the virus in time,” Crystal Chu, a platform marketing specialist at Telpo, wrote to OneZero in an email. And the company is touting the benefits of facial recognition beyond the coronavirus pandemic. “This technology can not only reduce the risk of cross infection but also improve traffic efficiency by more than 10 times, which will save time and reduce congestion. It is suitable for government, customs, airports, railway stations, enterprises, schools, communities, and other crowded public places,” the company wrote separately in a press release. Another Chinese company, Wisesoft, is also bundling facial recognition and temperature-sensing technology. Meanwhile, manufacturers of fingerprint scanners are pushing back. “People like facial recognition or iris companies are saying, ‘Well you don’t want to use fingerprint scanners anymore, they’re bad for you because of this virus.’ I think that’s a little ridiculous,” says David Gerulski, executive vice president of Integrated Biometrics, a firm that specializes in lightweight and mobile fingerprint scanners. The company has been communicating with customers for weeks about how to properly clean the devices, and advising that those who use the scanner should use hand sanitizer afterwards. Gerulski points out that he’s heard recommendations that people should use the iris scanners on Clear’s biometric machines at airports rather than its fingerprint function, but that requires holding your face close to a shared surface. “The main thing is we want to make sure there aren’t terrorists who take advantage of this time period when everybody’s worried about viruses and then something bad happens,” Gerulski said.
Automated fingerprint analysis has been the most common form of biometrics since its invention and adoption in the 1980s. NEC, the Japanese technology company, was the first to build and market automated fingerprint analysis for forensic use by police and federal investigators. The company has now morphed into one of the biggest companies selling biometric technology across the world, and it pitches facial recognition as the most cutting edge of its offerings. Covid-19 has also spurred technology companies to market facial recognition algorithms that work even when someone is wearing a mask. Companies like China’s Hanvon and Spain’s Herta have announced that their facial recognition works with or without a mask as well. Facial recognition is also being used amid the pandemic to determine if people are following local regulations. Baidu’s facial recognition can look for people who aren’t wearing masks in China, since masks have been deemed mandatory in many parts of the country. Russia is using facial recognition to track those who are leaving their quarantine. One Moscow resident was visited by police after violating his quarantine by taking out the trash, according to the Moscow Times. In Shanghai, communities are installing facial recognition in residential buildings to reduce contact with shared physical surfaces. Other residents in China, where facial recognition partnered with temperature sensing has become commonplace, wonder whether this level of surveillance will subside after the virus has been contained, according to The Guardian. “This epidemic undoubtedly provides more reason for the government to surveil the public. I don’t think authorities will rule out keeping this up after the outbreak,” activist Wang Aizhong told The Guardian.
https://onezero.medium.com/facial-recognition-companies-see-the-coronavirus-as-a-business-opportunity-6c9b99d60649
['Dave Gershgorn']
2020-03-19 13:58:42.667000+00:00
['Facial Recognition', 'Identification', 'Technology', 'Coronavirus', 'Biometrics']
man child and the white boy float atop the broken promised land
there’s this white boy sitting next to me on the plane. his angsty hands keep flashing towards the window too quickly. jerky motions that jut out just near my face as he points towards the window i’m sitting next to. the boy sitting next to him is brown with short straight black hair. the two of them hit each other back and forth. do as young boys do. i try not to keep score. try to ignore my mind as it does what it’s inevitably bound to. which is tally who hits who more and who initiates the hitting. but my mind has a mind of its own. or perhaps just preexisting conditions that won’t allow it to not go where it’s inevitably bound to. i’m reading Kiese’s Heavy and considering what it means to be a trapped man child in a Southern black man’s body. How sixty years ago, when my Pensacola born granddad would have been near the age I am now, he could have felt similar agitation to what I feel now. but for any expression of it he may have found his proud regal black frame in lord knows whose pallid hands. how his journey, presumably an escape from the vice grip of the South, led him to chasing Northern stars, singing on stages in NY city night clubs next to Nipsey Russell, Sammy Davis, Jr., Frank Sinatra and the Rat Pack. how for those shining lights, he left his son in Florida, who would one day leave me in Brooklyn and then eventually leave this Earth. how after that my mom dragged me and my brothers out of Brooklyn for New Orleans. and while much of my Bucktown ways still linger in my bones, my frame has been baptized in the dirty dirty. so now i’m thinking of the unspeakable walls of whiteness in the insane asylum we call race in America. how its mores straight jacket black bodies into immobility like Tupac’s in that one picture. if my heart had a face right now it would be Tupac’s in that picture. the author’s heart as Tupac… in a straight jacket the stewardess comes by. she tends to the white boy and the brown boy next to him. offers them several snack options to which they respond by accepting some chips and candy. she moves on to the next passengers. the white boy looks at me and asks if I want her. i nod my head. then he taps her on the arm and points in my direction. her taut, wrinkled mouth and stiff eyes spell out every expectation I already have. her deep Southern accent underlines my expectations in red. pretzels and water i say dryly. then imagine the stale face i’m gonna serve her ass when i walk off this plane. eat the pretzels. stare out the window… the white boy flings his hands in my direction again. damn near hits me in my face. reminds me of the little 5 year old black boy that did the same yesterday, but intentionally. while i sat under the dryer, locks freshly twisted, his too. little Ty’Kai, who I’d never seen in life prior to walking in the shop, insisted on walking up to me and playing with me like I was his big brother. grabbing at my wrists something strong with his little five year old self. we played a little game. i darted my finger an inch from his face, to which he responded by grabbing at it quickly. i then snatched my finger away, wove it around like a wand before he could catch it and as soon as he reached for my roving hand i’d thrust a finger back an inch from his face. he grabbed and missed, grabbed and missed. till the laughter rolled out his mouth like water he didn’t have room for and he reached for my face to even the score. i thought how this would never fly back in my classroom. 
how enabling similar freedom of motion in my students would unravel the class in seconds and i’d never get it back. how the administration would frown on this and cancel my contract not long after. but how here, in this black hair shop, all those rigid standards dissolved and this young brother was just that, if not the wily son i never had. we carried on with our little back and forth until his grandmother walked back in the shop telling him to “leave that man alone ’fore I whoop you.” Ty’Kai sheepishly obeyed and wandered off. i winked at him, pounding my fist into my palm as if to say it was on as soon as she left. then i think what would an old black woman do if she sat in my seat on this plane. cause i know what she would do to my ass if i did what this white boy is doing. then i think what would she have done sixty years ago. One hundred… sixty — suddenly: two horses spear off in opposite directions at the snap of a bearded old white man’s whip. ropes tethered to the horses tug at the wrists and ankles of the old black woman. her limbs spread like a viper’s mouth. and the most venomously pained scream emerges from her. winds up slow as a vinyl record sketching out its first sound. then blares like a fire truck siren. the black woman’s mouth opens wider like all the earth quaking. and a band of a hundred thousand banshees ride the waves of her cry. “this your first time going to California?” the white boy says. looks at me all blue eyed manic and eager. hunger in the pimpled, pock marked face. hunger in the naive smile, yellow teeth, and gridded braces atop them. “nah. you?” i respond. “nope. this is my second time. my tenth time flying. my second time back in California.” “that your friend?” i ask him. “no that’s my brother.” his eyes all agape part fearless wonder, part curious joy. then a slight flinch at closer inspection of the black face in his reflection that doesn’t know what to make of this reckless specimen in front of him so momentarily guardrails his countenance in stern angry black man. stern angry black man. it’s a font for the lettering of emotions on black man typeface. a mood as the kids say. one the zealous white boy doesn’t quite know what to make of. zealous white boy’s flash of fearful wonder is blurred blue ink on a blank page. stern angry black man font softens. eyes lowercase. inspecting the white boy’s slowly morphing countenance like an editor. like a detective. turns away. back to the window. silence… “we’re going to visit his dad.” “yeah… but y’all live in Louisiana?” “yeah well he’s from there and i just been living there a year. i was in California 15 years.” “Fifteen years??? you don’t look a day over 12 or 13.” “yeah…” his smile as close to bashful as it’s been yet. “where you go to school?” “Beauregard.” “get outta here. I teach around the corner. you probably go to school with some of my old students.” absent head nod and smile. “how old is your brother?” “eight.” “so he looks big for his age and you look small.” little brown boy bows head and smiles all bashful. makes the closest thing to eye contact he’s made yet. big brother darts eyes restlessly. fingers his brown full rimmed straw hat. “what you doing with that hat? you trynna be a cowboy?” “yeah.” eager again. face all yellow teeth and braces, all pink splotches of awkward growth probing through. zany blue eyes threatening to leap out their sockets. “yeah well you should try to ride a horse.” “I did. me and him. we both rode horses.” “well that’s alright.” retiring to old man status now.
i sound like somebody’s grandpa. running out of words to say. or sense to make of this strange de force into normalcy with the odd presence next to me. i stare out the window at a bridge that seems to span the length of the ocean beneath us. “i’m scared of that bridge. every time we’re on it i feel like we gonna fall off,” eager white boy says to his brother. “that the Golden Gate Bridge?” I ask them. “no. Golden Gate Bridge is farther up. That’s the…” and he looks to his brother. “i can’t remember what bridge that is.” minutes later, we’re only a few dozen feet from the ground. nothing but water beneath us. “where’s the bottom? it feels like we’re gonna land in the water,” the white boy says. but then, sure as sunset, the plane’s wheels make their requisite ka boom onto the runway. as we get off the plane, i unload my bags from the cabinet overhead. then i turn around to tell the white boy and his brown brother, “y’all have fun in California. and take care of your little brother.” they smile back nodding at me. the white stewardess lady smiles. i turn to walk. soon as I step into the welcoming lobby, folks are scattered about awaiting passengers as we exit. the most prominent of which is a huge Latino man in black t-shirt, baseball cap, and the unyielding countenance of a man who knows how to handle himself in the street. he nods his head at me. my eyes nod back. i’d been trying to imagine what kind of man the little white boy’s mom would’ve made his little brother with. tall and domineering? short and diminutive? who’s to know? then… a bend of the corner and a few minutes later, the big unyielding Latino man walks out of the airport with the two boys. in less than an hour, i’ll be immersed in the streets of Oakland, where the rigid binary of colors that code my every breath and step back down South will be blurred into a rainbow cornucopia of shades of brown that erase any knowledge of where the Chicano ends and where the Mestizo begins, who exactly is Filipino and who’s Honduran, Taiwanese or Chinese, Dominican or Haitian. who’s to know? but in the murky world i’m slowly emerging from, we wear black and white like prison stripes, and all that rainbow is muted in shades of grey.
https://ascribecalledquess.medium.com/man-child-and-the-white-boy-float-atop-the-broken-promised-land-9e13b0f8beb7
['A Scribe Called Quess']
2019-07-09 17:34:25.765000+00:00
['BlackLivesMatter', 'True Story', 'Storytelling', 'Fiction', 'Racism']
The Best Media Newsletters for Anyone Working in Journalism, Advertising, Communications, and Tech
The Best Media Newsletters for Anyone Working in Journalism, Advertising, Communications, and Tech As part of our own process of writing a weekly newsletter about the media industry, we at The Idea read a lot of other media newsletters ourselves. Below are some of our favorites. For links to top news: American Press Institute’s Need to Know Focus: Journalism Expect it: Weekday mornings Description: API’s daily newsletter is packed with links to the most important journalism news of the day. It includes article pull-outs to contextualize some of the top stories. Sample it here. Nieman Lab’s Daily Digest Focus: Journalism Expect it: Weekday afternoons Description: Nieman Lab’s daily newsletter primarily links back to its own content — not a bad thing, considering Nieman has great coverage of innovation in journalism. Ken Doctor’s in-depth analysis of the biggest media stories and Nick Quah’s commentary on podcasting make Nieman Lab a must read. AdExchanger’s Optimizing the News Focus: Advertising Expect it: Weekday mornings Description: AdExchanger’s daily newsletter offers platform and publisher news through an advertiser’s lens, among other advertising news. Many of the publisher-relevant stories featured in AdExchanger’s news roundup aren’t referenced by any of the other newsletters. Sample it here. The Splice Newsroom’s Slugs Focus: Media in Asia Expect it: Thursday evenings Description: With a splash of personality, Alan Soon covers media news and trends for a primarily Asian market. The Splice Newsroom recently came out with a new design-focused newsletter, Splice Frames, penned by Rishad Patel. (Note: we did an unrelated Q&A with Alan earlier, which you can check out here.) Sample it here. For original analysis: CJR’s The Media Today Focus: Journalism Expect it: Weekday mornings Description: For those of you who read The Idea, CJR’s The Media Today is the most similar to The Idea in format. Each issue starts with news and analysis of the biggest story of the day, and concludes with a few blurbs of other big stories to check out. Sample it here. Axios Media Trends Focus: Media Expect it: Tuesday mornings Description: Sara Fischer’s Media Trends rounds up the top ten or so media stories of the week, explaining what they are and why they matter in a pithy, digestible way. Media Trends is broader in scope than a lot of the other newsletters on this list — whereas the others are written for publishers & journalists or advertisers, Sara covers publishers, broadcasters, platforms, and advertisers. Sample it here. For deeper dives: The Lenfest Institute’s Solution Set Focus: Journalism Expect it: Thursday mornings Description: This new weekly newsletter penned by Joseph Lichterman dives deep into one innovative idea in journalism each week — including an analysis of strategy, stats, lessons, and what to look for in the future. It’s quite lengthy, but Joseph includes a handy TL;DR cheatsheet up top for the time-pressed. (Note: we’ve done an unrelated Q&A with Joseph earlier, which you can check out here.) Sample it here. Nick Quah’s Hot Pod Focus: Podcasts Expect it: Tuesday mornings Description: Another long one with a TL;DR up top, Nick Quah’s Hot Pod is the best newsletter out there on podcasting, featuring lots of smart analysis. There’s a paid membership too, which gets you twice-a-week coverage of the podcasting market. Sample it here. 
For those who want to pay: Ben Thompson’s Stratechery Focus: Tech and Media Price: $10/month (or $100/year) Expect it: Monday-Thursday mornings Description: A paid subscription gets you four deep dive analyses on the business of tech and media. You can’t predict exactly what you’re going to get, but you can be sure Thompson will explain both the foundational systems and nitty gritty details at play in a way that demonstrates an unparalleled understanding of how tech, media, policy, and economics intersect. Not ready to shell out? Thompson posts a fourth analysis for free each week on the Stratechery website, which you can check out here. Sample it here. A longer list of other newsletters and resources we like: POLITICO’s Morning Media , which covers political media. It can be a roller coaster of a read in the current administration — hold on tight. , which covers political media. It can be a roller coaster of a read in the current administration — hold on tight. Atlantic 57’s Digital Trends Index , written by our friends upstairs, covers trends in digital media with humor and sharp expertise. , written by our friends upstairs, covers trends in digital media with humor and sharp expertise. The Cohort from The Poynter Institute looks specifically at women in digital media. Poynter has an entire slate of newsletters that covers various topics within journalism. from The Poynter Institute looks specifically at women in digital media. Poynter has an entire slate of newsletters that covers various topics within journalism. The Wall Street Journal’s CMO Today is written primarily for marketers but is loaded with breaking news and analysis for anyone in media. is written primarily for marketers but is loaded with breaking news and analysis for anyone in media. A lot of Recode Daily ’s content falls outside of media, but analyses from Peter Kafka and Kara Swisher are almost always worth the read. ’s content falls outside of media, but analyses from Peter Kafka and Kara Swisher are almost always worth the read. Mediagazer doesn’t have a newsletter, but the website compiles content on digital media and journalism from solid sources all across the web. doesn’t have a newsletter, but the website compiles content on digital media and journalism from solid sources all across the web. We keep up with Ken Doctor and his Newsonomics column, which covers the economics of journalism. The column is syndicated on NiemanLab. and his column, which covers the economics of journalism. The column is syndicated on NiemanLab. Inside Social highlights a few important but under-circulated stories covering the inner workings of social platforms. The whole Inside slate is worth checking out as a unique digital media model. If there’s another media newsletter you think we should check out, let us know in the comments! The Idea is a Monday newsletter giving you the intel on the media trends and innovations you need to know about to be ready for the week ahead. It features Q&As with media innovators working across executive, product, editorial, and partnership roles. The Idea is written by the Strategy team at Atlantic Media, home of The Atlantic, Quartz, Government Executive, and National Journal. Check us out here, and subscribe!
https://medium.com/the-idea/the-best-media-newsletters-for-anyone-working-in-journalism-advertising-communications-and-tech-913d3051fa8a
['The Idea']
2018-04-11 20:35:07.490000+00:00
['Media', 'Journalism', 'Digital Media', 'Newsletter', 'The Atlantic']
Calculus You Forgot (Or Never Learned): Derivatives
Calculus You Forgot (Or Never Learned): Derivatives Intuitive ideas about the derivative Photo by Andrea Piacquadio from Pexels If you went to college, it is a fair bet that you took some sort of calculus class. If you went for engineering or science, you probably took a LOT of calculus classes. Unfortunately, a lot of math teachers like to show you how smart they are and they make simple ideas pretty hard to understand. Plus, in school you have a lot of things — both academic and non-academic — competing for your attention. So it isn’t surprising that a lot of people don’t have an intuitive grasp of some calculus ideas. Sometimes, I think some of the professors don’t either and that’s part of the problem. It is one thing to know the notes on each piano key and another thing to be able to play the piano. What is Calculus? So what is calculus? Easy answer: it is the mathematics of change. Regular math lets us answer questions like “Which rug is larger?” or “How many eggs do we need every day to feed a certain number of people?” Algebra answers questions like “If we earn 8% interest on a bank account and after a year we have $1122, how much money did we start with?” Geometry and trigonometry answer questions about shapes and angles. Calculus answers questions about change. Examples of questions you might answer with calculus are: If a pipe develops a 1mm leak that doubles in size every hour, how long does it take to empty a 300-gallon tank? Or, if a hockey stick snaps off the ice according to a math formula, what is the maximum acceleration it will sustain? If you want to deal with stock or option trades, write amazing real-world computer simulations in 3D, or build skyscrapers, you are probably going to need calculus at some point. Good thing it isn’t as hard as people say. What is a Derivative? A derivative is simply the rate of change of something. That’s all it is. A ball standing still or a hose with the water faucet off each has a formula that describes it, and the derivative of that formula is zero. Because they are not changing. Sometimes, we really want to know the actual rate of change. How fast is a rocket accelerating after its engine is on for 3 seconds? But sometimes we want to know the sign of the rate of change. For example, if we have a formula that approximates the cost of soybeans, we will note that the rate of change is positive when the price is going up, negative when it is going down, and zero at places where it changes from plus to minus or vice versa. Those zero points are where the minimum and maximum prices of soybeans occur, at least according to our model. So this is a useful math trick. When I was a student, I used to think “How did Newton and Leibniz figure this s*** out from scratch? Crazy!” But now I see that it is actually natural: if you want to ask those kinds of questions, you end up working out this kind of math. We are going to work it out, probably like they did. Start Simple Let’s say we have a function f(x)=4. That means that for any value of x you plug in, the answer is 4. This could be a math model for a ball sitting 4 meters off the ground on the floor of a big building. At time 0, the position is 4 meters. At time 1,000 the position is… 4 meters. It isn’t moving. What’s the rate of change? Zero. The same goes for f(x)=20. Or f(x)=1000. No change based on x, so the rate of change (and, thus, the derivative) is zero.
Graphically, that looks like a flat horizontal line: f(x)=4 is boring. More Interesting Look at the graph of f(x)=4x. This is a bit more interesting. It is a line. You might remember that a line is y=mx+b where m is the slope and b is the y-intercept. In this case, b=0 and you can see that m=4, so the slope is 4. For a line, though, the slope is the rate of change. That is, at every point on the line, a change in x of 1 will cause a change in y of 4 in the same direction. If you didn’t know the formula for this line, you could compute the slope by measuring the rise over the run. What this means is you pick some run — a pair of x values. Find the y values for those two x’s. The difference in x values is the run and the difference in y values is the rise. The rise divided by the run is the slope. So if we pick x=0 and x=1, we see that f(0)=0 and f(1)=4. So the run is 1-0 or 1. The rise is 4-0 or 4. The rise over the run is 4/1 = 4. Same result. But it doesn’t matter what numbers we pick. Since f(1)=4 and f(11)=44 we can see that for a run of 11-1=10 we have a rise of 44-4=40, and 40/10 is… 4. You can do that with any two numbers you care to name. It also would not matter if we added an offset because that won’t change the slope: f(x)=4x+3. If you do the same math, the +3 term will cancel out when you subtract, and so the slope of this line is still 4.
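If you like to check math with code, here is a minimal Python sketch of the rise-over-run idea (the helper function and the printed checks are just for illustration):

def slope(f, x1, x2):
    """Rise over run between two points on the graph of f."""
    rise = f(x2) - f(x1)  # difference in y values
    run = x2 - x1         # difference in x values
    return rise / run

line = lambda x: 4 * x + 3  # f(x) = 4x + 3

print(slope(line, 0, 1))   # 4.0
print(slope(line, 1, 11))  # 4.0 (any pair of points gives the same slope; the +3 cancels)

Pick any two x values you like; for a line, rise over run always comes out to the same number.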
https://medium.com/cantors-paradise/calculus-you-forgot-or-never-learned-derivatives-5a69833c594c
['Al Williams']
2020-10-19 03:17:57.737000+00:00
['Programming', 'Derivatives', 'Mathematics', 'Calculus', 'Engineering']
Submission Guidelines
Welcome to our publication! We don’t have hard and fast rules. We just want you to express your love for your city. Our main focus is a pictorial tour of your city with a lovely brief description. Show us your love for, memories of, and affiliation with the city where you were born, spent your childhood, or stayed for some time. We are, no doubt, always in love with the cities that give us shelter and pleasure. Here I want to quote a few lines from Alexander Smith’s poem Glasgow: “City! I am true son of thine; Ne’er dwelt I where great mornings shine Around the bleating pens; Ne’er by the rivulets I strayed, And ne’er upon my childhood weighed The silence of the glens. Instead of shores where ocean beats, I hear the ebb and flow of streets. … Afar, one summer, I was borne; Through golden vapours of the morn, I heard the hills of sheep: I trod with a wild ecstasy The bright fringe of the living sea: And on a ruined keep I sat, and watched an endless plain Blacken beneath the gloom of rain. O fair the lightly sprinkled waste, O’er which a laughing shower has raced! O fair the April shoots! O fair the woods on summer days, While a blue hyacinthine haze Is dreaming round the roots! In thee, O city! I discern Another beauty, sad and stern. Draw thy fierce streams of blinding ore, Smite on a thousand anvils, roar Down to the harbour-bars; Smoulder in smoky sunsets, flare On rainy nights, while street and square Lie empty to the stars. From terrace proud to alley base, I know thee as my mother’s face. When sunset bathes thee in his gold, In wreaths of bronze thy sides are rolled, Thy smoke is dusty fire; And from the glory round thee poured, A sunbeam like an angel’s sword Shivers upon a spire. Thus have I watched thee, Terror! Dream! While the blue Night crept up the stream…” So, keeping in mind the above poem, come forward and show your love for your city. I’ve selected a few sample stories for you. Here they are: We would love to see your city! Write a comment and let me know if you want to be added as a writer. Simply write: “Add Me” in the comments or email us at: [email protected] Thanks for following!
https://medium.com/show-your-city/submission-guidelines-44cfc028fe45
['Muhammad Nasrullah Khan']
2020-12-09 13:12:06.492000+00:00
['Submission', 'About', 'Publication', 'Writing', 'Submission Guidelines']
Deep Learning With Apache Spark — Part 1
Part 2 is available here: Deep Learning With Apache Spark — Part 2. A primer on Apache Spark If you work in the Data World, there’s a good chance that you know what Apache Spark is. If you don’t, that’s ok! I’ll tell you what it is. Spark, as defined by its creators, is a fast and general engine for large-scale data processing. The fast part means that it’s faster than previous approaches to work with Big Data, like classical MapReduce. The secret to being faster is that Spark runs in memory (RAM), and that makes the processing much faster than on disk. The general part means that it can be used for multiple things, like running distributed SQL, creating data pipelines, ingesting data into a database, running Machine Learning algorithms, working with graphs or data streams, and much more. The RDD From PySpark-Pictures by Jeffrey Thompson. The main abstraction, and the beginning of Apache Spark, is the Resilient Distributed Dataset (RDD). An RDD is a fault-tolerant collection of elements that can be operated on in parallel. You can create them by parallelizing an existing collection in your driver program, or by referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat. Something very important to know about Spark is that all transformations (we will define them soon) are lazy; that means that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset (e.g. a file). The transformations are only computed when an action requires a result to be returned to the driver program. By default, each transformed RDD may be recomputed each time you run an action on it. However, you may also persist an RDD in memory using the persist (or cache) method, in which case Spark will keep the elements around on the cluster for much faster access the next time you query it. There is also support for persisting RDDs on disk, or replicated across multiple nodes. If you want to know more about transformations and actions for RDDs in Spark, check out the official documentation: The Dataframe From PySpark-Pictures by Jeffrey Thompson. Since Spark 2.0.0, a DataFrame is a Dataset organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. We won’t discuss Datasets here, but they are defined as a distributed collection of data that can be constructed from JVM objects and then manipulated using functional transformations. They are only available in Scala and Java (because they’re typed). DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs. In simple words, the DataFrame API was the Spark creators’ way of making it easy to work with data in the framework. They are very similar to Pandas Dataframes or R Dataframes, but with several advantages. The first, of course, is that they can be distributed across a cluster, so they can handle a lot of data; the second is that they’re optimized. It was a very important step that the community took. By the year 2014 it was much faster to use Spark with Scala or Java, and the whole Spark world turned into Scala (which is an awesome language, btw) because of performance. But with the DF API this was no longer an issue, and now you can get the same performance working with it in R, Python, Scala or Java. The component responsible for this optimization is the Catalyst.
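To see what the Catalyst has to work with, here is a minimal PySpark sketch (the data here is just a placeholder): the DataFrame and the SQL query below are lazy, and only the action at the end triggers the computation.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("df-demo").getOrCreate()

# A small DataFrame built from a local collection (placeholder data)
df = spark.createDataFrame(
    [("Alice", 34), ("Bob", 41), ("Carol", 28)],
    ["name", "age"]
)

# Registering a view and writing the query are lazy; nothing runs yet
df.createOrReplaceTempView("people")
adults = spark.sql("SELECT name FROM people WHERE age > 30")

# The action is what triggers the actual (optimized) computation
adults.show()

The plan that actually executes is the one the Catalyst produces, not literally the steps as you wrote them.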
You can think of it as a wizard: it will take your queries (oh yes, you can run SQL-like queries in Spark, run them against the DF, and they will be parallelized as well) and your actions and create an optimized plan for distributing the computation. The process is not that simple, but you as a programmer won’t even notice it. Just know that it is there, helping you out all the time. Deep Learning and Apache Spark If you want to know more about Deep Learning please read these posts before continuing: Why would you want to do Deep Learning on Apache Spark? This was the question I asked myself before beginning to study the subject. And the answer comes in two parts for me: Apache Spark is an amazing framework for distributing computations in a cluster in an easy and declarative way. It is becoming a standard across industries, so it would be great to add the amazing advances of Deep Learning to it. There are parts of Deep Learning that are computationally heavy, very heavy! Distributing these processes may be the solution to this and other problems, and Apache Spark is the easiest way I could think of to distribute them. There are several ways to do Deep Learning with Apache Spark. I discussed them before, and I list them here again (not exhaustive): 1. Elephas: Distributed DL with Keras & PySpark: 2. Yahoo! Inc.: TensorFlowOnSpark: 3. CERN Distributed Keras (Keras + Spark): 4. Qubole (tutorial Keras + Spark): 5. Intel Corporation: BigDL (Distributed Deep Learning Library for Apache Spark) Deep Learning Pipelines (Databricks) But the one I will focus on in these articles is Deep Learning Pipelines. Deep Learning Pipelines is an open source library created by Databricks that provides high-level APIs for scalable deep learning in Python with Apache Spark. It is an awesome effort, and it won’t be long until it is merged into the official API, so it is worth taking a look at it. Some of the advantages of this library compared to the ones I listed before are: In the spirit of Spark and Spark MLlib, it provides easy-to-use APIs that enable deep learning in very few lines of code. It focuses on ease of use and integration, without sacrificing performance. It’s built by the creators of Apache Spark (who are also the main contributors), so it’s more likely for it to be merged as an official API than others. It is written in Python, so it will integrate with all of its famous libraries, and right now it uses the power of TensorFlow and Keras, the two main libraries of the moment to do DL. In the next post I will focus entirely on the DL pipelines library and how to use it from scratch. One of the things you will be seeing is Transfer Learning on a simple Pipeline, how to use pre-trained models to work with “small” amounts of data and being able to predict things, how to empower everyone in your company by making the deep learning models you created available in SQL and much more. And also I will create an environment to work in a notebook with this library in the Deep Cognition Platform so you can test everything. Go ahead and create a free account if you don’t have one to get started: Oh!! BTW, if you want to know more about pipelines in Data Science with Python check out these great articles by Matthew Mayo: And for a brief on pipelines on Spark check out this: See you soon :)
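P.S. To whet your appetite for the next post, here is roughly what Transfer Learning looks like with Deep Learning Pipelines. This is only a sketch: it assumes a train_df DataFrame with an "image" column and a numeric "label" column already exists, and the API may change as the library evolves, so check the sparkdl docs.

from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from sparkdl import DeepImageFeaturizer

# Turn each image into a feature vector with a pre-trained network
featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features",
                                 modelName="InceptionV3")

# A simple classifier on top of the deep features
lr = LogisticRegression(maxIter=20, regParam=0.05, labelCol="label")

# Transfer Learning as a regular Spark ML pipeline
# (train_df is assumed to exist; see the note above)
pipeline = Pipeline(stages=[featurizer, lr])
model = pipeline.fit(train_df)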
https://towardsdatascience.com/deep-learning-with-apache-spark-part-1-6d397c16abd
['Favio Vázquez']
2018-05-11 16:46:57.544000+00:00
['Deep Learning', 'Apache Spark', 'Data Science', 'Big Data']
Staying at Shinola is Like Sleeping in a Showroom
Staying at Shinola is Like Sleeping in a Showroom A new Downtown Detroit hotel comes up short. The Shinola Hotel exterior. Source: shinolahotel.com. I fell in love with Shinola when I stumbled across its Sunset Boulevard store on a visit to LA in 2015. Stationery, watches, bikes, wallets… there wasn’t a single thing in there that I didn’t want to buy. Everything was timelessly stylish with a strong midcentury flavour. I loved its proudly Made in Detroit story. The brand’s commitment to quality. Fast forward to 2019. By now I’m a Shinola watch-wearer (my folks gave me one for my thirtieth last year — at my request). And, fortuitously, I’m with my partner on a road-trip passing through Detroit, Shinola’s home. With its hotel having opened at the beginning of the year, I figured that I just had to stay there. After surrendering our car to the friendly valet gents out front, we’re ushered in and handed our key at check-in — which is attached to a comically large fob with leather tassels. Our reservation was for a Cass King, which starts at the not-insignificant $365/night (disclosure: I got a media rate for $205/night excluding tax). The room was comfortable — but in spite of its decent size, was lacking a desk. Instead, we had a couch, an armchair and a rather pointless credenza (which had empty shelves crying out for reading material or an objet of some kind). On top of the credenza were the bluetooth Shinola speakers that an extensive price list said could be ours for $1,500. The price list featured a bunch of other items in the room including the aforementioned key fob. In the anteroom, there was a cabinet containing sweets and snacks as well as a minibar crammed with drinks — all for sale (obviously). There were no complimentary tea or coffee-making facilities — unusual for a hotel at this price point, I thought. If I wanted a cappuccino enough, I suppose I could’ve paid $5 for one to be delivered to the room. While the wash area (with a marble-topped basin above a sleek cabinet) was charming and gently lit, the rest of the bathroom reminded me of a 1950s operating room — a clinical combination of mostly white tiles with lines of dark green ones. The shower was large enough to host an orgy, but the shower-head could only properly douse one person at a time (my partner and I tested it out with a joint shower). Reluctantly, I had to admit to myself that as much as I love the Shinola brand, I was struggling to like its hotel. Perhaps this was because I couldn’t escape the sense I was sleeping in a showroom. When almost everything in the room has been produced by one brand, against a rather anaemic backdrop (white walls, inoffensive abstract framed prints), there’s a certain stifling uniformity, a lack of soulfulness. It was like being in a buy-to-rent apartment — by no means offensive, but a little lifeless and sterile. Perhaps a hanging plant or two (isn’t macramé kinda mid-century, after all?) would enliven things. One or two more pieces of art, perhaps. Some books — people are more likely to read them if they’re in their room and not in the hotel’s hallways where piles of them sit, unread. And perhaps most importantly — less of a Shinola product hard-sell would be nice. Outside of the room, there were certainly bits of the hotel I liked. On the upper floors, the hallways were sexy — dimly lit, with alcoves featuring velvet couches (rather pointless given that you’d more likely want to be ensconced in your room, but still, a cosy and welcoming touch nonetheless).
Downstairs, the Living Room is a tad busy decor-wise — the walls are choked with art, and almost every square metre of space seems taken up by chairs or coffee tables upon which books, magazines and newspapers are arranged. But in spite of its slightly frenzied appearance, I rather liked it — it has warmth and character, a soulfulness that my bedroom lacked. Walking through more dimly lit corridors (this place is a maze), you’ll end up in the speakeasy-style Evening Bar. The bar counter and its back mirror are beautifully lit — celestial gold, a halo of light. Sit further away, and the establishment becomes a little gloomy — I pity anyone older than 40 who forgot their reading glasses at home. At least the drinks were heavenly (I had the Old Pal, which is kind of a negroni, but with rye whiskey instead of gin). And the service was friendly too — something I noticed throughout the hotel. The Evening Bar. Source: shinolahotel.com. We wandered round the block to The Brakeman, the hotel’s beer hall, a hipsterish affair which thankfully doesn’t take itself too seriously. We sat in the vibey alley, eating fried chicken (hot, crispy and juicy just as it should be) from Penny Red’s, a food stand at the back, which I washed down with a local, unfiltered IPA. Back on the fifth floor, we slept pretty well on our Made in Michigan mattress. By 7:30 the next morning, though, construction in the cavernous site adjacent to the hotel was going ahead at full, jackhammering, tilt. Given that the heavy machinery down below is still tinkering at the foundation level, it looks like guests have months and months of construction ahead to look forward to. The hotel might want to consider sound-proofing its windows (or at least warning its guests how noisy it gets from the early morning onwards). Otherwise, it will be impossible for guests to use their generous noon checkout time for a lazy lie-in. About half an hour before our checkout, I hit the “0” on the retro phone next to the bed and requested some ice. “Coming right up, sir,” the reception guy said. Twenty-five minutes later it was checkout time and there was still no sign of the ice. We called again. “Did you ask in-room dining for it?” Nope we didn’t, because on the phone it says dial zero “for anything” and you told us it was coming right up. “Seriously, these people should stick to making watches and bikes,” my partner grumbled. I had to begrudgingly agree.
https://alexandermatthews.medium.com/staying-at-shinola-is-like-sleeping-in-a-showroom-7cb7a58ee574
['Alexander Matthews']
2019-06-10 16:12:05.235000+00:00
['Design', 'Luxury', 'Detroit', 'Hotel', 'Travel']
The Cost of White Ignorance
Racism The Cost of White Ignorance White ignorance is violent, dangerous, and deadly, and it must no longer be an option Photo from the author Throughout the years and throughout my journey to deconstruct my own racist beliefs, microaggressions, and white supremacy, I continue to see the same patterns in responses coming from white people when they are challenged to address their racism. They are nothing new and they are continuous products of white supremacy. These responses have to be confronted and challenged. For clarification purposes, I am very aware that I will have to dismantle these harmful and deadly ideologies that live within me until the day I die. So, as a white person, I am 100% included in this piece as well. Priorities The amount of information that is available to Americans is truly something that cannot be fully grasped. There are seemingly endless opportunities to research, learn, and grow on our own. We can choose what subjects we want to learn about, and we make that decision every single day. That could look like learning about political candidates, looking at consumer reports on cars that we are interested in buying, or researching different diets. It can take us mere seconds to choose to research some of the most trivial things. We sift through different pieces of information in our minds, and we prioritize them according to our interests and according to what we find to be important to our lives. There are specific subjects that hold different weight for all people, but there is one subject that most white folks conveniently keep either in the back of their mind or entirely out of their train of thought. That subject is racism. Choices When it comes to racism, white folks have the privilege to learn about it without experiencing it or just completely ignore it; because not only does it not affect us, but we significantly benefit from it. We make conscious and subconscious decisions daily when it comes to issues of racism. We often opt to ignore it and remain as comfortable as possible. That is a choice, and it is one we make on an almost continual basis. For white folks, white comfortability is one of the most frequent choices we make every single day, and that choice carries deadly consequences. We have the information available to us to address our racism and the systems of white supremacy that have served as a foundation and fuel for this country for centuries. Yet, many white folks choose ignorance instead of basic human decency. They actively choose not only to ignore the issue of racism, but they decide to stand in defense of racism. The white folks that offer undying support to police, racist presidents/politicians, the criminal justice system as a whole, the child welfare system, redlining, racist doctors, lawyers, and judges are often the ones that publicly prop up these significantly problematic ideals. However, the white moderate and apathetic white folks (aka self-proclaimed “peacemakers”) allow racism to flourish and spread rapidly amongst their white friends, family, acquaintances, coworkers, and communities. It is just as much of a choice to remain completely silent on issues of racism, white supremacy, racial injustices, social justice issues, and pervasive, wide-spread inequity as it is to overtly support these things. Learning White folks often want to rely on research, articles, books, and other means of impersonal information gathering in order to remain comfortable.
Genuinely learning about Black folks comes from interacting with, learning from, and being open to being corrected and educated by the Black Community. Along with this understanding should also be the understanding that the Black Community never owes us their time, emotional energy, or education. So, when we get the opportunity to learn from Black folks, we have to understand that we are not owed that time and energy, and we must treat that information and the time offered to us accordingly. One of the first lessons we must learn and fervently employ is the necessity of silence and active listening. It is impossible to learn if we are speaking or have a rebuttal, question, or response lined up in our head when we learn from the Black community. Choosing to be anything other than silent and to listen as the first line of understanding is a demonstration of white supremacy. Our white supremacy already tells us that we are the standard for excellence, prosperity, and progress in this country. We have to kill those problematic ideals the minute we feel them arise in our spirit. It is illogical, impossible, and downright asinine to assume we can have any input on living Black in this country. It doesn’t matter how much research we do or how much anti-racist education we engage in. We are still the oppressors. Oppressors will never feel the experience of those they oppress. Learn information about the Black Community from the Black Community. White educators, Non-Black Persons of Color, and other white-influenced means of information can never fully inform us of what the Black Community experiences. Learning is a choice. Choosing to learn from sources that have never experienced or lived out the thing that is being discussed is willful ignorance at best. It leads to disingenuous misinformation that is both severely inaccurate and significantly dangerous. Choose to learn from resources that are specifically sourced from the Black community. That includes learning from and building relationships with Black folks. That is non-negotiable. Bias We all have biases. That is a fact that we must agree upon; because to say anything to the contrary is both a lie to ourselves and everyone else involved in our lives. Those biases inform how we view the world around us, and they help us piece together different narratives about other humans. These biases directly play into racism and white supremacy. There are frequent discussions by plenty of white folks about specific Black voices that meet their white-centric worldviews and uphold agendas riddled with White Supremacy. Many white folks will ignore 99% of the Black community and choose to use the 1% of the Black community that espouses their same views. They then decide to use those 1% Black voices as the standard when looking at issues of race and social justice. White supremacy is white folks believing that their ideas hold more weight and are more accurate than what 99% of the Black community experiences and knows as factual truth. White folks need to learn how to accept the majority voice of the Black community when learning about racism and what it means to be Black in America. Confirmation bias is a deadly beast that white folks feed daily. We have to make conscious efforts to address our biases. If we don’t, then we tell ourselves that it is acceptable to hold views of the Black community that are inaccurate, unrealistic, and deeply problematic. Unacceptable There are no excuses when it comes to learning about and addressing our racism. 
The vast amount of information about racism, Civil Rights, criminal justice reform, Black leadership, activism, Black liberation, Black History, and Black culture available to us could keep us busy for years. When I hear white folks say things like, “I didn’t know it was this bad,” “Racism has been over for years,” “I’m colorblind,” “I have experienced reverse racism,” or “Donald Trump isn’t racist,” I am reminded of the privilege that we all have to say willfully ignorant statements like that without receiving any life consequences. I believe that a severe lack of critical thinking skills among white folks regarding racism and systemic issues is tied directly to our white privilege. White folks inherently believe that if racism doesn’t directly affect us, then it can’t possibly be an issue for anyone else. Even on the off chance that white folks can finally admit that racism is experienced by someone else, they will try to attribute it to that person’s lack of ability to “deal” with that issue instead of understanding our complicity in propping up the systems that keep racism alive and healthy. We must choose to learn about and address our white supremacy every single day. There are no excuses when it comes to this. Ignorance doesn’t cost white folks anything, yet the consequences of our white ignorance can cost a Black person their life. Consequences With all of this in mind, we must take the time to understand the cost that comes with our white ignorance. We can freely walk through life without experiencing significant consequences for our ignorance. Black folks are directly affected by our white ignorance, and we will never feel the deadly effects of our ignorance, yet we will be the first ones to blame the Black community for the results of our ignorance. There are severe consequences to white folks having no direct interactions with folks from the Black community. There are deep-rooted systemic issues that have stayed in place for centuries due to white folks ignoring the facts, history, education, and experiences of the Black community. Generations upon generations of families have suffered because white folks choose silence and ignorance instead of learning ways to address the issues that we are responsible for creating and maintaining. When we are completely detached from the Black community, we will form our perception and understanding of the Black community based on media and our white parents’ perceptions and experiences. This leads to white folks not only discounting the experiences of being Black in America, but it allows for murder to become palatable to the white majority. This detachment dehumanizes the Black community and enables white folks to have zero empathy for George Floyd, Breonna Taylor, and Ahmaud Arbery. All of whom were executed by cops or over-zealous white citizens. Many white folks argued in support of the cops that murdered 12-year-old Tamir Rice. The depravity found in that experience alone is one that demonstrates the pure evil that is engrafted in white supremacy. The constant and vigorous support of cops who publicly execute Black folks is the norm amongst white America. The cost of our white ignorance is a price that we will never have to pay. White ignorance allows us to view the Black community as lesser than. It provides opportunities for us to portray individuals who murder Black folks as victims. It sets the tone for the way we teach our children to view the Black community.
It informs the biases that make fear and misunderstanding the first-line descriptors of the Black community. The consequence of white ignorance is continued violence against the Black community. Acts of physical aggression don't solely define violence. Violence also looks like the following:

- Redlining
- The school-to-prison pipeline
- Voter suppression
- The New Jim Crow
- Inadequate medical care
- Inequity in education and quality of teaching
- Gentrification
- A white-centric criminal justice system
- Wealth gaps
- Unequal pay
- Racist policies at places of employment

And that is the short list.

When we look at the issues of racism in America, white folks must understand that we cannot treat the Black community as though it is responsible for racism. We have to take an in-depth look at ourselves and understand what we are responsible for. We have to admit our racism. We have to accept correction from the Black community. We have to realize that we live comfortably within the lie that our whiteness is somehow superior and that anything opposing our whiteness is inherently dangerous to us. We have to focus on cleaning our house and our house alone, period.

The time for change is far past due. The longer we wait to violently attack and address our white supremacy, the more harm we perpetrate against the Black community. Once the change process begins, it is essential to note that we will get things wrong. We will be embarrassed, angry, and confused, and we will get our feelings hurt, but that is a small price to pay when we understand the cost of our ignorance. Speak up and speak out regardless. Challenge your white family, friends, acquaintances, and coworkers. Be prepared to lose some of the most precious people and things in your life when you choose to stand for the truth.

We are not saviors. Requesting praise, or time to grieve, when we finally admit to our racism after decades of ignoring it is a way of keeping our white supremacy intact. White feelings have no place in the process of dismantling our racist ideologies. We must do this because, at the most basic, foundational level, it is the right thing to do. White ignorance is violent, dangerous, and deadly, and it must no longer be an option.
https://medium.com/an-injustice/the-cost-of-white-ignorance-58404c21c998
[]
2020-12-28 19:24:13.597000+00:00
['Social Justice', 'Self-awareness', 'BlackLivesMatter', 'Racism', 'Change']
Jenkins, Kubernetes, and HashiCorp Vault
At Hootsuite we are moving towards having the majority of our services on Kubernetes, and this includes our CI/CD pipelines. Our goal was to use Jenkins, Kubernetes, and Vault to create a CI/CD system that was secure, portable, and scalable.

Figuring out how Jenkins and Kubernetes work together

Going into this, we knew two things: first, we wanted Jenkins to be our CI/CD tool, and second, we wanted to take advantage of Kubernetes to schedule our jobs. To link the two together we decided to use the jenkins-kubernetes-plugin. This plugin allows the Jenkins Masters to make Kubernetes API calls to schedule builds inside their own Kubernetes Pods. This provides the following benefits:

Isolation: A build running in its own Pod can't affect other builds. Each Pod, as per the Kubernetes documentation, is a logical host.

Ephemeral: Pods do a fantastic job of cleaning up after themselves. Pods are by their nature ephemeral, so unless we explicitly want to keep changes the Pod makes in its lifetime, everything is erased. No more conflicting workspaces!

Build Dependencies: Related to isolation, but with Pods each job can define exactly what its build needs. Say a build pipeline has several stages: one for building the JavaScript frontend, and another for building the Go backend. Each of those stages can have its own container, simply by pulling the necessary image for each stage.

Below is a pipeline taken from the plugin repo which demonstrates the three benefits. We see that a unique Pod is created, that the Pod will have all of its state wiped when the build completes, and, most importantly, that it uses a container with a specific image for each build step. How lovely!

# multicontainer-jenkins-pipeline
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node(label) {
        stage('Get a Maven project') {
            git 'https://github.com/jenkinsci/kubernetes-plugin.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh 'mvn -B clean install'
                }
            }
        }
        stage('Get a Golang project') {
            git url: 'https://github.com/hashicorp/terraform.git'
            container('golang') {
                stage('Build a Go project') {
                    sh """
                    mkdir -p /go/src/github.com/hashicorp
                    ln -s `pwd` /go/src/github.com/hashicorp/terraform
                    cd /go/src/github.com/hashicorp/terraform && make core-dev
                    """
                }
            }
        }
    }
}

Containerized Jenkins Master and Agents

Now that we had an idea of how Jenkins and Kubernetes would work together, we had to move our Jenkins Master and Agent into modern times. This meant defining two rock-solid containers. For the Jenkins Master we started off by creating an image that would contain all the plugins that it would require. The base of this plugin image was this one. The reason we separated the plugins out was to allow us to update plugins independently from the Jenkins Master.
Our plugin base image ended up looking something like this:

# plugins.Dockerfile
FROM jenkins/jenkins:lts

# Install Jenkins plugins
RUN /usr/local/bin/install-plugins.sh \
    super-cool-plugin-1:latest \
    super-cool-plugin-2:latest \
    super-cool-plugin-3:latest \
    # ...
    super-cool-plugin-n:latest

From there we could then build our Jenkins Master image on top of it. One of the nice things about the official Jenkins image is that it offers a lot of flexibility. We took advantage of this by copying in configuration overrides, initial Groovy startup scripts, and other files to ensure our Jenkins Master would start up configured and ready to go.

└── usr
    └── share
        └── jenkins
            └── ref
                ├── config.xml.override
                ├── github-plugin-configuration.xml
                ├── init.groovy.d // All groovy scripts here run on start
                │   └── credentials.groovy
                ├── org.codefirst.SimpleThemeDecorator.xml
                ├── secrets
                │   ├── README.md
                │   └── slave-to-master-security-kill-switch
                └── userContent
                    └── jenkins-material-theme.css

As for our Jenkins Agent, we used this image as the base. The benefit here is that the Kubernetes plugin we chose was developed and tested with this image in mind, so the communication between our Master and Agents is well supported.

Once we had decided how the images would look, we set up a Jenkins job to regularly build and push our Master and Agent images to our registry. This way we would always be up to date and avoid running outdated versions of Jenkins and its plugins.

Using Vault to handle our CI/CD secrets

With our images set up, the next step was figuring out if we could #BuildABetterWay to manage our secrets in Jenkins. For this we turned to Vault.* This choice was made for the following reasons:

- It provides a single source of truth; previously our secrets would sprawl across our Jenkins Masters and become difficult to manage
- We already use Vault extensively at Hootsuite, so we have lots of support and knowledge
- Vault supports a Kubernetes authentication method (more on this below)

The first two reasons are self-explanatory, but the third was where things got interesting, and it was also the main focus of my contributions. Previously we were using AppRole authentication, and while it worked well, it meant we had a secret_id and a role_id that we had to manage. Ideally what we wanted was a way for our Pods to tell Vault that they belonged to a certain Kubernetes cluster and should be granted certain access. This is where Kubernetes authentication comes in. I've outlined the steps for our Kubernetes cluster to authenticate with Vault:

1. Before anything happens, we set up the Vault and Kubernetes relationship by giving Vault some information about our cluster: the cluster's CA cert, the host of our Kubernetes cluster, a Vault policy, and a Vault role that is mapped to our Kubernetes namespace/serviceaccount. (A rough sketch of this one-time setup appears at the end of this post.)
2. With that completed, Vault now knows which Kubernetes cluster to respond to and which ServiceAccount in the cluster is allowed to authenticate against the Vault role.
3. When we define the Jenkins Master Pod, we add a field that attaches a ServiceAccount to that Pod. This ServiceAccount is referenced when the Pod starts up and is used to retrieve the account's JWT.
4. Once the JWT is retrieved, it is sent over to Vault, which then forwards it to Kubernetes. Vault then receives a response from Kubernetes confirming that the JWT came from the correct namespace and actually belongs to the ServiceAccount it claims.
5. Once Vault gets that confirmation, it knows that the Pod has the right ServiceAccount, which means it is mapped to a Vault role, and so Vault gives back a VAULT_TOKEN that the Pod can then use.

[Diagram: the five steps visualised]

What's great is that there is no secret that has to be managed, and the Pod only needs to use the Kubernetes API. So from the Pod's perspective, the startup script of a container would do something like:

# Get the bearer token that gets mounted into all Pods for use in making k8s API calls
KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)

# Retrieve the name of the secret associated with a ServiceAccount
JENKINS_JWT_SECRET=$(curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/jenkins/serviceaccounts/jenkins | jq -r .secrets[0].name)

# Get the JWT that is stored inside that secret
JENKINS_JWT_64=$(curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/jenkins/secrets/$JENKINS_JWT_SECRET | jq -r .data.token)

# Convert base64 to base64url (so that it's compatible with Vault)
JENKINS_JWT_64URL=$(echo -e "import base64
print(base64.urlsafe_b64decode(\"$JENKINS_JWT_64\").decode('utf-8'))" | python)

# Authenticate against a Vault server with the JWT
VAULT_LOGIN_RESULT=$(vault write -format json auth/kubernetes/login role=jenkins jwt=$JENKINS_JWT_64URL)

With the Vault token inside VAULT_LOGIN_RESULT, the Pod can make subsequent calls to Vault.

At this point you might be wondering how we go from Vault secrets to Jenkins Credentials. This is where the initial Groovy startup scripts come in. On startup, our entrypoint script reads Jenkins-related secrets from Vault and writes the values as JSON objects into a temporary file. This temporary file gets read by the startup script, which converts those values into Jenkins Credentials. So at a Vault path containing Jenkins Credentials, we would keep something like:

{
  "type": "username-password",
  "scope": "GLOBAL",
  "description": "API token for Github",
  "username": "my-github-username",
  "password": "my-github-token"
}

which on the Jenkins Master gets converted to a Jenkins Credential:

[Screenshot: a HashiCorp Vault secret converted to a Jenkins Credential]

For information on how to programmatically add credentials check here. With all that done, we now have a way to securely retrieve CI/CD secrets from Vault and a way to convert them to Jenkins Credentials if needed.

* For those wondering why we didn't use something like the Jenkins Vault Plugin, it was lacking in a few areas:

- It did not support Kubernetes authentication
- The secrets could not be turned into Jenkins Credentials, which other plugins can use
- It would have meant adding initial setup scripts to our Jenkinsfiles

Summary

The steps in this post describe how you can improve your Jenkins CI/CD pipeline with Docker, Kubernetes, and Vault. Revisiting our goals (secure, portable, and scalable): we added security by letting Vault handle our CI/CD secrets, we used Docker for portable, self-contained Jenkins Master and Agent containers, and with Kubernetes orchestrating these containers we are able to handle dynamic workloads with dynamic scaling.

About the Author

David Jung is a co-op student on the Production, Operations, and Delivery team. He is currently studying Computer Engineering at The University of British Columbia. Connect with him on LinkedIn.
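Appendix: a sketch of the Vault-side setup

The post shows the Pod-side login but never the one-time, Vault-side half of step 1 above. Below is a minimal sketch of what that setup could look like using the standard Vault CLI of the era; it is not the actual commands from our setup. The API host, the CA cert path, the reviewer JWT, and the jenkins-policy name are illustrative assumptions. Only the jenkins namespace, the jenkins ServiceAccount, and the jenkins role name are taken from the startup script shown earlier.

# Enable the Kubernetes auth backend (one-time per cluster).
vault auth enable kubernetes

# Tell Vault how to reach and verify the cluster: the API host, the cluster
# CA cert, and a JWT that Vault can use to call the Kubernetes TokenReview
# API. All three values here are placeholders.
vault write auth/kubernetes/config \
    kubernetes_host="https://kubernetes.example.com:443" \
    kubernetes_ca_cert=@/path/to/ca.crt \
    token_reviewer_jwt="$REVIEWER_JWT"

# Map the jenkins ServiceAccount in the jenkins namespace to a Vault role
# carrying an assumed jenkins-policy. This is the role named in the
# "vault write auth/kubernetes/login role=jenkins ..." call in the startup
# script above.
vault write auth/kubernetes/role/jenkins \
    bound_service_account_names=jenkins \
    bound_service_account_namespaces=jenkins \
    policies=jenkins-policy \
    ttl=1h

With a role like this in place, the login call in the container startup script should return a token scoped to the attached policy, and there is no secret_id or role_id left to manage.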
https://medium.com/hootsuite-engineering/jenkins-kubernetes-and-hashicorp-vault-c2011bd2d66c
['Alister West']
2018-08-16 23:08:36.704000+00:00
['Kubernetes', 'Vault', 'Co Op', 'Jenkins', 'Docker']
How to Control Tilt
We've all done it. We ask ourselves how an idiot can stupidly win so many pots. At some point in our poker careers, we've thought that "it's my turn to win" or that "that donk's luck is bound to run out!" Unfortunately, that's not how poker works. Each hand is an independent event. The hands before it have no effect on how the current hand will play out. That donk is just as likely to have a monster on this hand as he did on the previous one. But suckout after suckout, bad beat after bad beat, can leave even the most talented of players letting their emotions get in the way of making correct decisions.

Well, I'm here to help. I'll give you a few pointers on how to control steaming/tilting so you can prevent yourself from being like this crazy German kid.

First, let me give you a brief explanation of how emotions work. Most people think that you experience an event (getting sucked out on), you experience an emotion (anger), and then you get a physiological response (increased heart rate). There is overwhelming psychological evidence that this is actually not the case. What really happens is that you experience an event (getting sucked out on), get a physiological response (increased heart rate), and then your mind infers an emotion (anger).

If that's really the case, you'd expect that influencing your physiology would influence your perception of your emotion. And guess what, you'd be completely right. There are literally hundreds of studies that offer support. One study had some participants punch a punching bag while others just chilled and sat on a couch. All the participants later got to play a game where they could deliver a loud blast of noise to their partner if their partner messed up. The participants who punched the punching bag delivered more intense noise blasts, and for longer durations. This demonstrates that a physiological response (acting angry by punching the punching bag) can have significant effects on behavior (expressing anger by delivering loud blasts of noise).

Another study had participants rate cartoons while holding a pencil between their teeth. One group was instructed to hold the pencil in such a way as to create a smile, while the other group held the pencil in a way that resembled a look of disgust. The group that held the pencil in the smile position rated the cartoons as significantly funnier. The study shows that, once again, a physiological reaction (facial expression) has a profound impact on emotional experience.

OK, now that we know that physiology influences emotions, how can we apply it? How can we decrease our natural tendency to tilt? You've probably guessed it by now: eliminate the physiology associated with anger and you eliminate the anger itself. So the next time you feel yourself starting to tilt, here's what you do:

- Force yourself to smile.
- Sit back in your chair with a relaxed posture. An angry posture (which you want to avoid at all costs) is as follows: leaning slightly forward, feet flat on the floor, fists clenched.
- Control your breathing. Breathe slowly and deeply.
- Don't hit anything. Don't slam your mouse down. Don't act angry!

I hope some of you take this advice to heart and try it the next time you feel yourself about to tilt your hard-earned money away. Good luck and may the poker gods be with you!

Notes:

The study that had participants hold the pencil in their mouth to manipulate facial expression was done by Strack, Martin & Stepper in 1988.
The study that had participants punch a punching bag and then play a cooperative game with a partner whom they could punish with loud blasts of noise was done by Bushman, Stack & Baumeister in 1999.
https://medium.com/anskypoker/how-to-control-tilt-66a29d4050f2
['Michael Gugel']
2016-12-27 05:02:13.262000+00:00
['Psychology']
Irritability Can Be A Symptom Of Anxiety
I'm not just having a bout of PMS

"Wow, are you about to start your period?"

For being a friendly, empathetic human, I cringe when my irritability arrives on the scene. Being asked this single question would send me into a personal crisis about my character and my ability to self-regulate. Oops, my humanity is showing.

I tried to track the peaks and troughs of my aggravation. Despite the fact that hormones can cause swings, my bouts of extreme irritability did not parallel my hormone fluctuations. Come to find out, irritability can be a symptom of anxiety.

The American Psychiatric Association cites irritability as a prime symptom of anxiety disorders. With the fight-or-flight reaction being set off in an anxious mind, the sympathetic nervous system is activated. Irritability is one of the "fight" responses of this physiological reaction. Hypervigilance and agitation are team players.

I become prickly, toward myself and the world around me, when my thoughts are racing and I feel a loss of control. I desperately try to hide those thorny reactions, because I always want to appear to be a nice person. After all, acting as the accommodating party, a skilled people-pleaser, was one method I used to survive years of abuse. I have overidentified with this reactive role, and surely never considered that empaths could be cantankerous too.

As referenced by Healthline, a research study of over 6,000 adults revealed that more than 90% of people diagnosed with generalized anxiety disorder reported feeling highly irritable during times when their anxiety was peaking or at its worst.
https://medium.com/invisible-illness/irritability-can-be-a-symptom-of-anxiety-433dc3ee3208
['Scarlett Jess Perrodin']
2020-12-15 04:24:10.246000+00:00
['Awareness', 'Anxiety', 'Mental Health', 'Self Help', 'Trauma']
The Future of Apple & AR
Apple is notoriously secretive, yet Tim Cook's recent statements further reveal its path: "AR will happen in a big way, and when it does, we will wonder how we ever lived without it. Like we wonder how we lived without our phone today."

Cook's comments lay out Apple's intent to build an augmented future, and the tech embedded within its present lineup of products doubles down on that path. If AR glasses are going to be a success, they'll need to tether to a powerful computer (the iPhone), and they'll need to do so seamlessly while introducing new ways to interface with computers (AirPods + Siri). Apple is slowly building an ecosystem of products that AR can integrate with to be powerful, immersive, minimal, and, consequently, to actually work.

Aside from being outrageously dorky, Google Glass lacked the power and wireless integration necessary for an organic, natural experience. Fortunately, a product like the Apple Watch has demonstrated that a great deal of power can fit into a tiny package and tether seamlessly with a variety of devices. While there's certainly room for Apple's hardware to grow, a solid foundation has been laid.

This is where Apple's second brilliant strategic move comes into play. Over the years personal computers have migrated from our desks to our laps, and presently rest in our pockets. As our technology continues to improve, it's not only shrinking in size, it's moving closer to our bodies. Up to this point, wearable tech has been tough on the eyes (pagers, cell phone belt clips, Bluetooth earphones). Wearing a computer on our face is going to be a challenge. Who better to face that challenge than Apple?

The aforementioned Apple Watch, however problematic, has taken a step in the right direction. It's not only powerful, it interacts directly with our skin to monitor bodily functions, and it does so unobtrusively. The equally controversial AirPods are just as impressive: a pair of tiny voice-activated computers that switch on the moment they slip inside our ears, delivering crystal-clear audio while maintaining a wireless connection with a smartphone. What's more, each pod sits gently inside an orifice with an understatement that borders on invisible. Imagine the computing power of HAL 9000 reduced to the size of a pearl.

Apple has a history of re-thinking nearly every major step in personal computing: the desktop, the mouse, the mp3 player, the laptop, the cell phone. Next up: Augmented Reality.

Hi, I'm Daniel. I've founded a few companies including Piccsy (acq. 2014) and EveryGuyed (acq. 2011). I am currently open to new career and consulting opportunities. Get in touch via email. This article was co-authored by Shaun Roncken.

You May Also Like: Design for Humanity, an interactive essay I wrote exploring the past, present, and future of anthropomorphic design. Also available as a talk for conferences, events, etc.
https://medium.com/swlh/the-future-of-apple-ar-aad3db66db67
['Daniel Eckler']
2020-01-31 19:24:26.387000+00:00
['Design', 'Tech', 'Augmented Reality', 'Airpods', 'Conversational UI']