title: string (1–200 chars)
text: string (10–100k chars)
url: string (32–885 chars)
authors: string (2–392 chars)
timestamp: string (19–32 chars)
tags: string (6–263 chars)
Donut Pie-Chart using Matplotlib
Most data analysts and data scientists use different sorts of visualization techniques for data analysis and EDA. But after going through a lot of Kaggle notebooks and projects, I couldn't find much data visualized in the form of pie charts. I know people prefer histograms and bar plots over pie charts because of how much they convey in one view, but when a pie chart is used precisely where it fits, it can make more sense than most bar plots and histograms. So, we'll take the pie-chart game a level ahead and create a custom donut pie chart. Sounds interesting? Well, read on to find out!

We'll import matplotlib.pyplot, as this is the only library required to generate our donut pie charts. Run "import matplotlib.pyplot as plt" in your first cell. Now we'll define two lists, namely list_headings and list_data; we'll use custom data for the visualizations. After that, we'll create a solid circle using plt.Circle with custom dimensions. Drawn over the centre of the pie chart, it creates the hollow space that gives the donut-like shape. Next, we'll create a simple pie chart using plt.pie(), passing list_data and list_headings as the initial arguments to visualize the data. After that, we'll call plt.gcf(), the pyplot function that returns the current figure (creating one if none exists yet). Now we'll add our solid circle to the plot using the add_artist method of the axes returned by the figure's gca() method. And finally, we'll write that simply awesome word combination: plt.show(). And the result would look similar to this: Full Code:

But this is not the end of our article. We need to customize this donut pie chart to make it more attractive and visually appealing. We'll add an empty space between each segment of our donut pie chart by styling the wedges. We'll add one more argument to plt.pie() to achieve the desired output: wedgeprops. And your output will look similar to this: Full Code:

You can add custom colors to your donut pie chart as well. Define a list of colors that you want to use for the visualization and pass it to plt.pie() via the colors argument. Full code:

This generation's obsession with the color black is totally mind-boggling. Everybody wants to wear black, code in dark mode, and own black accessories. So this one's for them specifically. We'll add a black background to our donut pie chart so that it looks more appealing and readable than the previous visualizations. We'll use fig.patch.set_facecolor() to add a custom background color to our donut pie chart. Also, we'll change our labels' text color to white so that it is readable on the black background. And your ultimate result would look like this: Full Code:

And yes, this is it. Now you can create your own customized donut pie charts for your visualizations. You can check this Kaggle notebook to see how to plot multiple pie charts in multiple rows and columns and add a legend to pie charts. Have a great day of learning!
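The "Full Code" sections above refer to screenshots that did not survive extraction. As a reference, here is a minimal sketch of the final customized donut pie chart the walkthrough describes; the sample data, colors, and circle radius are illustrative placeholders, not the author's exact values:

```python
import matplotlib.pyplot as plt

# Illustrative placeholder data; the article uses its own custom lists
list_headings = ['Python', 'Java', 'C++', 'Go']
list_data = [40, 30, 20, 10]
colors = ['#ff9999', '#66b3ff', '#99ff99', '#ffcc99']

fig = plt.gcf()
fig.patch.set_facecolor('black')  # black background, as in the final version

# Pie chart with gaps between the wedges (wedgeprops) and white labels
plt.pie(list_data,
        labels=list_headings,
        colors=colors,
        textprops={'color': 'white'},
        wedgeprops={'edgecolor': 'black', 'linewidth': 3})

# A solid circle drawn over the centre turns the pie into a donut
centre_circle = plt.Circle((0, 0), 0.70, fc='black')
fig.gca().add_artist(centre_circle)

plt.show()
```

For the plain, white-background version from the first half of the walkthrough, drop the set_facecolor call and the colors/textprops arguments and give the centre circle fc='white'.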
https://medium.com/analytics-vidhya/donut-pie-chart-using-matplotlib-dc60f6606359
['Dhruv Anurag']
2020-12-15 16:38:45.604000+00:00
['Exploratory Data Analysis', 'Pie Charts', 'Matplotlib', 'Data Visualization', 'Data Science']
A Brief Summary of Apache Hadoop: A Solution of Big Data Problem and Hint comes from Google
Introduction of Hadoop

Hadoop helps organizations leverage the opportunities provided by Big Data and overcome the challenges it brings.

What is Hadoop? Hadoop is an open-source, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is based on the Google File System (GFS).

Why Hadoop? Hadoop runs applications on distributed systems with thousands of nodes involving petabytes of information. It has a distributed file system, called the Hadoop Distributed File System (HDFS), which enables fast data transfer among the nodes.

Hadoop Framework

Hadoop Distributed File System (HDFS): It provides the storage layer for Hadoop. It is suitable for distributed storage and processing, i.e., data is first distributed and then processed. HDFS provides a command line interface to interact with Hadoop, offers streaming access to file system data, and includes file permissions and authentication. The component that stores data on top of HDFS is HBase.

HBase: It helps to store data in HDFS. It is a NoSQL, non-relational database. HBase is mainly used when you need random, real-time read/write access to your big data. It supports a high volume of data and high throughput, and a table in HBase can have thousands of columns. So far we have discussed how data is distributed and stored; now let's look at how data is ingested and transferred to HDFS. Sqoop does it.

Sqoop: Sqoop is a tool designed to transfer data between Hadoop and relational databases. It is used to import data from relational databases such as Oracle and MySQL into HDFS and to export data from HDFS back to a relational database. If you want to ingest event data such as streaming data, sensor data, or log files, you can use Flume.

Flume: Flume is a distributed service for ingesting streaming data. It collects event data and transfers it to HDFS, and it is ideally suited for event data from multiple systems. After the data is transferred into HDFS, it is processed, and one of the frameworks that processes data is Spark.

Spark: An open-source cluster computing framework. It provides up to 100 times faster performance than MapReduce for some applications, thanks to its in-memory primitives, compared with the two-stage, disk-based MapReduce model. Spark runs in the Hadoop cluster, processes data in HDFS, and supports a wide variety of workloads. Spark has several major components, shown in the "Spark major components" figure.

Hadoop MapReduce: It is another framework that processes data: the original Hadoop processing engine, primarily based on Java and on the Map and Reduce programming model. Many tools such as Hive and Pig are built on the MapReduce model. It is a broad and mature fault-tolerant framework and the most commonly used one. After data processing, analysis is done by the open-source dataflow system called Pig.

Pig: It is an open-source dataflow system, mainly used for analytics. It converts Pig scripts into Map-Reduce code, saving the producer from writing Map-Reduce code. Ad-hoc operations like filter and join, which are challenging to perform in Map-Reduce, can be done efficiently using Pig. It is an alternative to writing Map-Reduce code. You can also use Impala to analyze data.

Impala: It is a high-performance SQL engine that runs on a Hadoop cluster. It is ideal for interactive analysis and has very low latency, which can be measured in milliseconds. It supports a dialect of SQL (Impala SQL).
With Impala, data in HDFS is modelled as database tables. You can also perform data analysis using Hive.

Hive: It is an abstraction layer on top of Hadoop. It is very similar to Impala; however, Hive is preferred for data processing and ETL (extract, transform and load) operations, while Impala is preferred for ad-hoc queries. Hive executes queries using Map-Reduce, but the user does not need to write any code in low-level Map-Reduce. Hive is suitable for structured data. After the data is analyzed, it is ready for users to access, and searching over that data can be done with Cloudera Search.

Cloudera Search: It is a near-real-time access product that enables non-technical users to search and explore data stored in, or ingested into, Hadoop and HBase. Users don't need SQL or programming skills to use Cloudera Search because it provides a simple full-text interface for searching. It is a fully integrated data processing platform: Cloudera Search uses the flexible, scalable, and robust storage system that comes with CDH (Cloudera's Distribution including Apache Hadoop). This eliminates the need to move large data sets across infrastructures to address business tasks. Hadoop jobs such as MapReduce, Pig, Hive, and Sqoop jobs have workflows.

Oozie: Oozie is a workflow or coordination system that you can employ to manage Hadoop jobs. The Oozie application lifecycle is shown in the accompanying diagram (Oozie lifecycle, Simplilearn.com). Multiple actions occur within the start and end of the workflow.

Hue: Hue is an acronym for Hadoop User Experience. It is an open-source web interface for analyzing data with Hadoop. You can perform the following operations using Hue: 1. Upload and browse data. 2. Query a table in Hive and Impala. 3. Run Spark and Pig jobs. 4. Work with workflows and search data. Hue makes Hadoop easier to use. It also provides editors for Hive, Impala, MySQL, Oracle, PostgreSQL, Spark SQL, and Solr SQL.

Now, we will discuss how all these components work together to process Big Data. There are four stages of Big Data processing (figure: blog.cloudera.com/blog).

The first stage is ingestion, where data is ingested or transferred to Hadoop from various sources such as relational database systems or local files. As discussed earlier, Sqoop transfers data from an RDBMS (relational database) to HDFS, whereas Flume transfers event data.

The second stage is processing. In this stage, the data is stored and processed. As discussed earlier, the data is stored in the distributed file system HDFS and in the NoSQL distributed database HBase, and Spark and MapReduce perform the data processing.

The third stage is analysis; here the data is interpreted by processing frameworks such as Pig, Hive, and Impala. Pig converts the data using Map and Reduce and then analyzes it; Hive is also based on the Map and Reduce programming model and is more suitable for structured data.

The fourth stage is access, which is performed by tools such as Hue and Cloudera Search. In this stage, the analyzed data can be accessed by users; Hue is the web interface for exploring data.

Now you know the basics of the Hadoop framework and can build on them to become an expert data engineer. I'm going to keep writing about Hadoop and other machine learning topics. If you would like to stay updated, you can follow me here or on LinkedIn. You can also take a look at my series on importing data from the web. Everything that I talked about in this series is fundamental.
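To make the processing and analysis stages a little more concrete, here is a minimal PySpark sketch that reads a file from HDFS and runs a SQL-style aggregation over it. The HDFS path, table name, and column names are hypothetical, and the snippet is my illustration rather than anything from the original article:

```python
from pyspark.sql import SparkSession

# Start (or reuse) a Spark session running on the Hadoop cluster
spark = SparkSession.builder.appName("hdfs-demo").getOrCreate()

# Ingestion stage output: a CSV file that Sqoop or Flume landed in HDFS (hypothetical path)
orders = spark.read.csv("hdfs:///data/orders.csv", header=True, inferSchema=True)

# Analysis stage: Hive/Impala-style SQL over the same data via Spark SQL
orders.createOrReplaceTempView("orders")
totals = spark.sql(
    "SELECT country, SUM(amount) AS revenue FROM orders GROUP BY country"
)

totals.show()
spark.stop()
```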
https://towardsdatascience.com/a-brief-summary-of-apache-hadoop-a-solution-of-big-data-problem-and-hint-comes-from-google-95fd63b83623
['Sahil Dhankhad']
2019-04-29 01:06:34.335000+00:00
['Big Data', 'Software Development', 'Data', 'Data Science', 'Science']
Blood & Dust: Drawing the Unconscious.
Justice

'To love another person is to see the face of God' ~ Victor Hugo, Les Misérables.

In 1853, the brutal murder of a woman by her husband shocked the small island of Guernsey. This was not crowded Paris, where people felt distant from each other. This was an island of a few thousand inhabitants, where everyone knew each other, where every event felt tangible, where each tragedy touched every family.

Justicia by Victor Hugo | Wiki Art

The evidence against Charles Tapner, the man who was accused of murdering his wife, was substantial but not definite. The residents could not see any clear motive behind Tapner's cruel act. Despite numerous petitions signed by the residents urging the British Home Secretary to acquit Tapner, he was still sentenced to death by hanging. One of the signatories of that petition was Victor Hugo. He stood by his principle that nobody has the right to take someone else's life. After all, it was for defending those exact principles that he had been forced into exile.

Hugo sank into a deep melancholy. He sat at his desk and drew one of the darkest and scariest drawings I have ever seen. He called it 'Justicia'. This drawing was made two decades before the art critic Louis Leroy used the term 'impressionism' to describe the artistic style of painters like Monet. But we don't have to be art critics ourselves to ask: what is the Justicia drawing if not the product of an impression? An impression of Tapner's soul surrounded by darkness and his floating head screaming in pain. If we step back and look from a distance, we can also see the blurry face of a woman, of his wife.

Les Misérables.

"Even the darkest night will end and the sun will rise." ― Victor Hugo, Les Misérables

Those who are forced into exile never know if they are ever going to return home. From the very first days they try to create a space where everything looks and feels, tastes and smells like home. The interior of the house where Victor Hugo settled during his time in exile is a piece of art itself. He designed it entirely by his own hand. Each room in the Hauteville House reflected a historical period of France. If every other room in the house was dedicated to the past, the room on the top floor was dedicated to the present. The room overlooked the sea, and it was there that Hugo began to write his masterpiece Les Misérables. 'Those who do not weep, do not see' says one of my favourite lines from that novel. If you see the suffering of others and their pain does not bring tears to your cheeks, then you have not fully grasped their pain.

Gavroche a onze ans ("Gavroche at eleven years old"). | Wiki Commons

Drawing, once again, acted as a back door to the unconscious for Hugo. We can see that in his drawing of Gavroche, one of the most iconic characters of Les Misérables. He is a boy who lives on the streets. His character represents what it is like to be someone born without the right to a decent future. But Gavroche has a darker symbolism: he symbolises populism. The rule, the impulse, the irrational instinct of the crowd. The drawing of him is as ominous and dark as Hugo's drawing of Tapner. Gavroche's wide, dark smile and his narrow eyes remind me of another, more recent fictional character who is also a populist, disenfranchised madman: the Joker played by Joaquin Phoenix in the 2019 adaptation by Todd Phillips.

Order & Chaos.
'One must have chaos in oneself to give birth to a dancing star.' ~ Friedrich Nietzsche

Victor Hugo kept his drawings private and rarely shared them with anyone outside his narrow circle, although one of the greatest painters of the time, Eugène Delacroix, said that if Victor Hugo had decided to become a painter instead of a writer, he would have become one of the greatest artists of the century. Why did he keep his drawings secret? Literary critics explain this as his desire to focus the attention of the public on his novels. They might be right, but there can be another reason.

Hugo's heart was a battleground between chaos and order. Throughout his hard life, he tried to tame the surrounding chaos to give birth to exceptional works. He drew his destiny as a strong, uncontrollable ocean wave which he tried to get hold of.

The wave of my destiny by Victor Hugo, 1857 | Wiki Art

He felt as if destiny was trying to take away everything that he had loved. His eldest and favourite daughter, Léopoldine, drowned in a boating accident in 1843. Then destiny forced him out of his home. Drawing acted as a therapy. Through drawing he could make those forces of chaos more tangible; he could get into a dialogue with them. Dante, Byron and Wilde did not dare explore other art forms to unlock their unconscious. That is what makes Victor Hugo and his writing exceptional. That is what makes Les Misérables so sublime. That novel is a drawing painted with words.
https://medium.com/lessons-from-history/blood-dust-drawing-the-unconscious-429d522587fb
['Vashik Armenikus']
2020-10-19 10:48:40.912000+00:00
['History', 'Literature', 'Art', 'Psychology', 'Creativity']
When Should You Use a Pie Chart?
Written by Data Experience @airbnb / Prev: Turn data into pixels @twitter • Invent new vis @UofMaryland HCIL PhD • From @Thailand • http://kristw.yellowpigz.com
https://medium.com/skooldio/%E0%B9%80%E0%B8%A1%E0%B8%B7%E0%B9%88%E0%B8%AD%E0%B9%84%E0%B8%AB%E0%B8%A3%E0%B9%88%E0%B8%88%E0%B8%B6%E0%B8%87%E0%B8%84%E0%B8%A7%E0%B8%A3%E0%B9%83%E0%B8%8A%E0%B9%89%E0%B9%81%E0%B8%9C%E0%B8%99%E0%B8%A0%E0%B8%B9%E0%B8%A1%E0%B8%B4%E0%B8%A7%E0%B8%87%E0%B8%81%E0%B8%A5%E0%B8%A1-pie-chart-3ac273e24463
['Krist Wongsuphasawat']
2017-03-14 17:29:14.588000+00:00
['Design', 'Business', 'Visualization', 'Data', 'Charts']
5 Powerful Hidden Facebook Page Features for Marketers
Do you manage a Facebook page for your business? Interested in ways to improve your marketing? In addition to the Facebook features you use for business every day, there are some handy ones you may have overlooked. In this article you'll discover five lesser-known Facebook Page features for marketers. Discover five Facebook features page admins need to know about.

#1: Free Images for Ads

When creating a Facebook ad, you can choose from a searchable database of thousands of free stock images from within the Facebook image library. This takes an extra step out of the ad creation process. You can create Facebook ads quickly by choosing photos from the image library. This image library is powered by Shutterstock, but there's one important caveat: not all of the images meet Facebook's advertising guidelines. For this reason, it's important to familiarize yourself with the guidelines and choose your images carefully. You don't want your ads getting rejected over some minor technicality, like the 20% text rule on ad images.

#2: Ad Relevance Scores

The ad relevance score is basically Facebook's answer to Google's quality score for AdWords. The relevance score guides how often your Facebook ad will be displayed and how much you'll pay for each ad engagement. Facebook considers a lot of different factors when calculating your relevance score, including positive and negative feedback via video views, clicks, comments, likes and other ad interactions. If people report your ad or tell Facebook they don't want to see it anymore, those actions count against you. This score measures how relevant your ad is to your target audience. Keeping an eye on your ad relevance score can help you determine if your ad needs work. Oddly enough, this setting is unchecked by default. To enable ad relevance scoring, open the ad or ad set in your Ads Manager and navigate to Customize Columns. From the list of available columns, find and select the Relevance Score check box. Enabling this option adds a Relevance Score column to your ads reports so you can keep an eye on this metric.

#3: Email Contact Import

A great way to grow your audience is to invite the people in your email address book to like your Facebook business page. To do that, go to your Facebook business page, click on the ellipsis (…) button (next to the Share button on your cover image) and then select Invite Email Contacts from the drop-down menu. To build your audience for your Facebook page, invite your email contacts to like your page. Next, you see a pop-up box that lists all of the different integration options you can use to import your contacts. Identify the contact list you want to import and click the Invite Contacts link to the right. After you upload your list, a dialog box appears where you can select which contacts to invite. You have the option to select individual contacts or the group as a whole. After you select your contacts, click Preview Invitation. On the next page, review the invitation, select the check box confirming that you're authorized to send invitations and click Send. There are a couple of points to keep in mind when sending invitations. You can upload up to 5,000 contacts per day, so if you have large customer or subscriber lists, you'll have to send invitations in batches. Remember, your page may already be suggested to your contacts who use Facebook, so you can decide whether to email them as well. If you're already showing up in their recommended pages, it's just free advertising for you.
#4: Facebook Post Scheduling

The ability to schedule Facebook posts is pretty handy, especially if you're using promoted posts. The good news is that you don't need Hootsuite or Buffer to do it. You can schedule future posts right in Facebook. You can even backdate posts so that they appear earlier in your timeline. To access this feature, go to the Publishing Tools tab, select Scheduled Posts and click the Create button. Compose your post and then select Schedule from the Publish drop-down menu. Select the date and time to schedule your post. When you're finished, click Schedule. Scheduling posts can be especially useful for larger teams where you have different people creating and uploading Facebook content and targeting and launching your social PPC campaigns.

#5: Pages to Watch Metrics

At the very bottom of your Facebook Insights page, you'll find a Pages to Watch area where you can track other pages, such as your partners, competitors and friends. You can see metrics for the likes, posts and engagement on these pages. For example, the Pages to Watch metrics below reveal that HubSpot page likes are currently at 813,200, up 40.6% over the previous week. Also, there were five posts to the page, engaging 158 people. Looking at these metrics is an easy way to track your competitors' fan growth and look at their engagement numbers. This information can also give you a sense of how many times to post per week. In a nutshell, you can see how your social media marketing efforts stack up against others using real benchmarks (their actual performance).

Conclusion

Facebook is constantly being redesigned and refreshed, so it can be hard to keep up with all of the options available to you. The five hidden Facebook features covered in this article are somewhat buried in the Ads Manager and Publishing Tools, so you probably wouldn't stumble upon them. But they can be valuable tools for your Facebook marketing.

About The Author

Larry Kim is the CEO of MobileMonkey and founder of WordStream. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.
https://medium.com/marketing-and-entrepreneurship/5-powerful-hidden-facebook-page-features-for-marketers-c54f456a2e92
['Larry Kim']
2017-04-30 16:26:46.328000+00:00
['Social Media', 'Social Media Marketing', 'Advertising', 'Marketing', 'Facebook']
An introduction to Linear Algebra for Programmers
An introduction to Linear Algebra for Programmers

A collection of notes on the topic of Linear Algebra, with the intention of helping to understand Neural Networks.

Coordinates: x = horizontal, y = vertical.

Vectors. A vector in CS is often characterised as an array with values inside of it. So a two-dimensional vector with an x of 2 and a y of 1 would look like this: [2,1].

Vector Addition. Let's say that we have a vector of [2,1] and a second vector of [4,-3]. If we want to add these two vectors together, we add the values that correspond with one another (i.e. add x to x, add y to y). So in this case, our addition would result in [6,-2]. If we are visualising this on a graph, we could take our first vector [2,1] and plot it from the origin (which would usually be [0,0]), then take the second vector [4,-3] and plot it as a continuation from [2,1] on the graph. The result would still be the same as if we had plotted the final value of [6,-2] directly, only now we are able to see how the graph progresses through each value (if so required).

Scalars. This involves taking the values of a vector and multiplying them by whatever value is passed in. This is known as scaling, and the number we multiply by is a scalar.

Scalar Multiplication. Here are some examples, with v = [2,1], i.e. an x coordinate of 2 and a y coordinate of 1. 2v = [4,2]: here we simply multiply [2,1] by 2, which gives us [4,2] plotted out on a graph. -1.8v = [-3.6,-1.8]: here we take [2,1], first flip it (so both components change sign), and then multiply as if -1.8 were 1.8, which gives us [-3.6,-1.8]. To simplify how this operates, we can consider that when multiplying by a negative value, we can switch the signs of the values in our vector (positive to negative, or negative to positive) and then treat the negative multiplier as a positive one. (1/3)v = [0.66,0.33]: here we take [2,1] and reduce it to a third of its values, so we would plot [0.66,0.33] on our graph.

The XY Coordinate System. With vectors, we can think of each vector value as a scalar that acts on the xy coordinate system. In the xy coordinate system there are two very special vectors: the one that runs to the right of the origin, which is 'i', and the one that runs vertically from the origin, which is 'j'. These both have a length of 1, and they are what we refer to as the 'basis' of the coordinate system. So now we can look at our two vector values of [2,1] and consider each of them to be a scalar that stretches i and j along their axes. Now we have 2i and 1j. We can then take these two scaled vectors and add them together, which would look like (2)i + (1)j. Any time we scale two vectors and add them together it is called a 'linear combination'. One thing to bear in mind is that we could theoretically use different basis vectors if we wanted to. If our basis vectors had a length of 2 instead of 1 like 'i' and 'j', our original vector of [2,1] would no longer plot at the same place on our graph; it would actually end up at [4,2] instead.

Linear Transformations and their relation to Matrices. Transformation basically just means function: a function that takes an input and returns an output. So a transformation takes in a vector and returns another vector. The word 'transformation' is used because it helps to signify movement.
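Before going further into transformations, here is a small numpy sketch of the vector operations covered so far (vector addition, scalar multiplication, and linear combinations of the basis vectors). Numpy is my choice of tool here; the notes themselves stick to pen-and-paper notation:

```python
import numpy as np

v = np.array([2, 1])
w = np.array([4, -3])

print(v + w)         # vector addition       -> [ 6 -2]
print(2 * v)         # scalar multiplication -> [4 2]
print(-1.8 * v)      # -> [-3.6 -1.8]
print((1 / 3) * v)   # -> [0.667 0.333] (approximately)

# v as a linear combination of the basis vectors i and j: v = 2i + 1j
i_hat = np.array([1, 0])
j_hat = np.array([0, 1])
print(2 * i_hat + 1 * j_hat)   # -> [2 1]
```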
So a transformation is like watching the input vector move from its position over to its new position (the output vector). Visually speaking, a transformation is linear if it has two properties: 1. all lines must remain lines; 2. the origin must remain fixed in place. So if a line curves, it's not a linear transformation. If we remember that the values of a vector can be used to scale along i and j (for example, v = 2i + 1j), then after a linear transformation the gridlines still remain evenly spaced, and the place where v lands is still 2i + 1j, only now measured against the transformed i and j. When we transform our vector (which means i and j are also transformed) we still get the same linear combination. This means that we can deduce where v must go based only on where i and j land.

Visualising Linear Transformations. A way we can try to visualise this: if we had a grid with a vector placed on it, we could imagine that the vector placement remained static while we rotated the grid itself. The vector would now be in a new position, but the calculation of how it arrived there would remain the same, even though the values for i, j and v would be different. Bear in mind that we don't have to transform simply by rotating the axes. We could stretch out the positions of i and j if we wanted to, so that, for example, i is now twice as long as it was before, while j is whatever it now corresponds to. So if we had i and j and then rotated our grid 90 degrees counterclockwise, i would move from [1,0] to [0,1], and j would rotate from [0,1] to [-1,0]. We could take those values (whether the ones before or the newly rotated values) and create a 2x2 matrix from them, with each landing spot as a column:

[0 -1]
[1  0]

Every time you see a matrix, you can consider it to be a linear transformation of space. Regarding the transformation, it is worth bearing in mind that I think this only works when the grid transformation still takes up the same surface area as before. There are things called rotations, which rotate, and shears, which stretch, but the diagrams I have seen thus far still take up the same amount of space. So a rotation might rotate the grid by 90 degrees, while a shear might stretch out a rectangular grid space into a parallelogram. Sometimes the combined transformation of both a rotation and a shear is called a 'composition'.

Matrix Multiplication. Matrix multiplication represents applying one transformation after another. The order of the transformations matters, as it has an effect on the outcome.

Function Notation. Since we write functions on the left of variables, whenever we have to compose two functions we read from right to left. Written with full-height brackets, the product of two 2x2 matrices is:

[a b]   [e f]   [ae+bg  af+bh]
[c d] x [g h] = [ce+dg  cf+dh]

3D linear transformations. Just like how we have i and j for x and y, we also have k for the z axis.

A note regarding what you have just read. These notes weren't necessarily meant for public consumption, but the process of writing about something helps me to solidify my understanding. If you choose to read this, take them with a pinch of salt: that pinch being that I am still way in over my head trying to make sense of the world of Linear Algebra, but I'm certainly trying!
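To tie the matrix sections together, here is a short numpy sketch showing a 2x2 matrix acting as a linear transformation and matrix multiplication as composition of transformations (again, numpy is an assumption on my part, not something the notes use):

```python
import numpy as np

# 90-degree counterclockwise rotation: the columns are where i and j land
rotation = np.array([[0, -1],
                     [1,  0]])

v = np.array([2, 1])
print(rotation @ v)        # transform v -> [-1  2]

# A shear: i stays put, j is pushed over to [1, 1]
shear = np.array([[1, 1],
                  [0, 1]])

# Composition ("first rotate, then shear") is matrix multiplication,
# read right to left like function notation
composition = shear @ rotation
print(composition @ v)     # same as shear @ (rotation @ v) -> [1 2]
```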
A special thanks goes out to the YouTube channel 3Blue1Brown for making an excellent series titled ‘Essence of Linear Algebra’. This ‘article’ was simply a collection of notes that were made whilst watching it.
https://medium.com/ai-in-plain-english/an-introduction-to-linear-algebra-for-programmers-c737dc2c50a4
['Sunil Sandhu']
2020-04-17 12:01:32.511000+00:00
['Coding', 'Programming', 'AI', 'Artificial Intelligence', 'Machine Learning']
Top 10 In-Demand programming languages to learn in 2020
1. Python

When Guido van Rossum developed Python in the 1990s as his side project, nobody thought it would one day be the most popular programming language. Considering all well-recognized rankings and industry trends, I put Python as the number one programming language overall. Python has not seen a meteoric rise in popularity like Java or C/C++, and it is not a disruptive programming language. But from the very beginning, Python has focused on developer experience and tried to lower the barrier to programming so that even school kids can write production-grade code. In 2008, Python went through a massive overhaul and improvement, at the cost of significant breaking changes, with the introduction of Python 3. Today, Python is omnipresent and used in many areas of software development, with no sign of slowing down.

3 Key Features: The USP of Python is its language design; it is highly productive, elegant, simple, yet powerful. Python has first-class integration with C/C++ and can seamlessly offload CPU-heavy tasks to C/C++. Python has a very active community and support.

Popularity: In the last several years, Python has seen enormous growth in demand with no sign of slowing down. The programming language ranking site PYPL has ranked Python as the number one programming language, with a considerable popularity gain in 2019. Python has also surpassed Java to become the 2nd most popular language by GitHub repository contributions. The StackOverflow developer survey has ranked Python as the 2nd most popular programming language (4th most popular technology), and another ranking site, TIOBE, has ranked Python the 3rd most popular language with a massive gain last year. Python still has a chance to move further up the rankings this year, as it saw 50% growth last year according to GitHub Octoverse. The StackOverflow developer survey has also listed Python as the second most loved programming language. Most of the older, mainstream programming languages have stable or downward traction; Python is an exception and has trended increasingly upward during the last five years, as is clear from Google Trends.

Job Market: According to Indeed, Python is the most in-demand programming language in the USA job market, with the highest number of job postings (74 K) in January 2020. Python also ranked third in salary, with a $120 K yearly average. The StackOverflow developer survey has shown that Python developers earn a high salary with relatively little experience compared to other mainstream programming languages.

Main Use Cases: Data Science, Data Analytics, Artificial Intelligence and Deep Learning, Enterprise Applications, Web Development.

2. JavaScript

During the first browser war, Netscape assigned Brendan Eich to develop a new programming language for its browser. Brendan Eich developed the initial prototype in only ten days, and the rest is history. Software developers often ridiculed JavaScript in its early days because of its poor language design and lack of features. Over the years, JavaScript has evolved into a multi-paradigm, high-level, dynamic programming language. The first significant breakthrough for JavaScript came in 2009, when Ryan Dahl released the cross-platform JavaScript runtime Node.js and enabled JavaScript to run on the server side. The other enormous breakthrough came around 2010, when Google released the JavaScript-based Web development framework AngularJS.
Today, JavaScript is one of the most widely used programming languages in the world and runs virtually everywhere: browsers, servers, mobile devices, the cloud, containers, micro-controllers.

3 Key Features: JavaScript is the undisputed king of browser programming. Thanks to Node.js, JavaScript offers event-driven programming, which is especially suitable for I/O-heavy tasks. JavaScript has gone through massive modernization and overhaul in the last several years, especially in 2015, 2016, and later.

Popularity: JavaScript is one of the top-ranked programming languages because of its ubiquitous use on all platforms and its mass adoption. Octoverse has put JavaScript as the number one programming language for five consecutive years by GitHub repository contributions, and the StackOverflow developer survey 2019 has ranked JavaScript as the most popular programming language and technology. The ranking site PYPL has placed JavaScript as the 3rd most popular programming language, while TIOBE has ranked it 7th. Once dreaded by developers, JavaScript also ranked as the 11th most loved programming language in the StackOverflow developer survey. The trend for JavaScript is relatively stable, as shown by Google Trends.

Job Market: In the USA job market, Indeed has ranked JavaScript as the third most in-demand programming language, with 57 K job postings in January 2020. With a $114 K average yearly salary, JavaScript ranks 4th in terms of salary. The StackOverflow developer survey has also shown that JavaScript developers can earn a modest salary with relatively little experience.

Main Use Cases: Web Development, Backend Development, Mobile App Development, Serverless Computing, Browser Game Development.

3. Java

Java is one of the most disruptive programming languages to date. Back in the '90s, business applications were mainly developed in C++, which was quite complicated and platform dependent. James Gosling and his team at Sun lowered the barrier to developing business applications by offering a much simpler, object-oriented, interpreted programming language that also supports multi-threaded programming. Java achieved platform independence through the Java Virtual Machine (JVM), which abstracted the low-level operating system away from developers and gave us the first "write once, run anywhere" programming language. The JVM also offered generational garbage collection, which manages the object life cycle.

In recent years, Java has lost some of its market to highly developer-friendly modern languages and to the rise of other languages, especially Python and JavaScript. Also, the JVM is not quite cloud friendly because of its bulky size, and Oracle has recently introduced hefty licensing fees for its JDK, which will dent Java's popularity further. Fortunately, Java is working on its shortcomings and trying to make itself fit for the cloud via the GraalVM initiative. Also, OpenJDK offers a free alternative to the proprietary Oracle JDK. Java is still the number one programming language for enterprises.

3 Key Features: Java offers a powerful, feature-rich, multi-paradigm, interpreted programming language with a moderate learning curve and high developer productivity. Java is strictly backward compatible, which is a crucial requirement for business applications.
Java's runtime, the JVM, is a masterpiece of software engineering and one of the best virtual machines in the industry.

Popularity: Only five years after its release, Java became the 3rd most popular programming language, and it remained in the top 3 for the following two decades, as the long-term history of the TIOBE ranking shows. Java's popularity has waned in the last few years, but it is still the most popular programming language according to TIOBE. According to GitHub repository contributions, Java held the number one spot during 2014–2018 and only slipped to 3rd position last year. The ranking website PYPL has Java as the 2nd most popular programming language, and the StackOverflow developer survey also ranks Java high, superseded only by JavaScript and Python. According to Google Trends, however, Java has been losing traction constantly over the past five years.

Job Market: According to Indeed, Java is the second most in-demand programming language in the USA, with 69 K job postings in January 2020. Java developers also earn the 6th highest annual salary ($104 K). As per the StackOverflow developer survey 2019, Java offers a modest salary after a few years of experience.

Main Use Cases: Enterprise Application Development, Android App Development, Big Data, Web Development.

4. C#

In 2000, tech giant Microsoft decided to create its own object-oriented, C-like programming language, C#, as part of its .NET initiative; like Java, it would be managed (run on a virtual machine). The veteran language designer Anders Hejlsberg designed C# as part of Microsoft's Common Language Infrastructure (CLI) platform, where many languages (mainly Microsoft's) are compiled into an intermediate format that runs on a runtime named the Common Language Runtime (CLR). In its early days, C# was criticized as an imitation of Java, but later the two languages diverged. Also, Microsoft's licensing of the C# compiler/runtime has not always been clear; although Microsoft is currently not enforcing its patents under the Microsoft Open Specification Promise, that may change. Today, C# is a multi-paradigm programming language that is widely used not only on the Windows platform but also on iOS/Android (thanks to Xamarin) and on Linux.

3 Key Features: Anders Hejlsberg did an excellent job of bringing C# out of Java's shadow and giving it its own identity. Backed by Microsoft and in the industry for 20 years, C# has a large ecosystem of libraries and frameworks. Like Java, C# is platform independent (thanks to the CLR) and runs on Windows, Linux, and mobile devices.

Popularity: The language ranking site TIOBE ranked C# 5th in January 2020 with a huge gain, and Octoverse has listed C# as the 5th most popular programming language by GitHub repository contributions. The StackOverflow developer survey has placed C# as the 4th most popular language (7th most popular technology) for 2019. It is interesting to note that the same survey ranked C# as the 10th most loved programming language (well above Java). As is clear from Google Trends, C# has not generated much hype in the last few years.

Job Market: Indeed has posted 32 K openings for C# developers in the USA, which makes C# the 5th most in-demand programming language in this list.
With an annual salary of $96 K, C# ranks 8th in this list. The StackOverflow developer survey has placed C# above Java (albeit with more experience required) in terms of global average salary.

Main Use Cases: Server-Side Programming, App Development, Web Development, Game Development, Software for the Windows Platform.

5. C

During the 1960s and 1970s, every cycle of the CPU and every byte of memory was expensive. Dennis Ritchie, a Bell Labs engineer, developed a procedural, general-purpose programming language that compiles directly to machine language, during 1969–1973. C offers low-level access to memory and gives full control over the underlying hardware. Over the years, C became one of the most used programming languages; it is arguably the most disruptive and influential programming language in history and has influenced almost all the other languages on this list. C is, however, often criticized for its accidental complexity, unsafe programming model, and lack of modern features. It is also platform dependent, i.e., C code is not portable. But if you want to make the most of your hardware, then C/C++ or Rust is your only option.

3 Key Features: As C gives low-level access to memory and compiles to machine instructions, it is one of the fastest and most powerful programming languages. C gives full control over the underlying hardware. C is also a "programming language of programming languages," i.e., the compilers and interpreters of many other languages, like Ruby, PHP, and Python, have been written in C.

Popularity: C is the oldest programming language in this list and has dominated the industry for 47 years. C has also ruled the programming language popularity rankings longer than any other language, as is clear from TIOBE's long-term ranking history. According to the TIOBE ranking, C is the second most popular language, with a huge popularity gain in 2019. Octoverse has ranked C as the 9th most popular language by GitHub repository contributions, and the StackOverflow developer survey has ranked C in 12th place (8th considering only programming languages). Google Trends also shows relatively stable interest in C over the last five years.

Job Market: According to Indeed, there are 28 K job postings for C developers in the USA, which makes C the 6th most in-demand programming language. In terms of salary, C ranks 6th, alongside Java ($104 K). The StackOverflow developer survey showed that C developers earn an average wage but need longer to reach it compared with, e.g., Java or Python developers.

Main Use Cases: System Programming, Game Development, IoT and Real-Time Systems, Machine Learning and Deep Learning, Embedded Systems.

6. C++

Bjarne Stroustrup worked with Dennis Ritchie (the creator of C) at Bell Labs during the 1970s. Heavily influenced by C, he first created C++ as an extension of C, adding object-oriented features. Over time, C++ has evolved into a multi-paradigm, general-purpose programming language. Like C, C++ offers low-level memory access and compiles directly to machine instructions. C++ also offers full control over hardware, but at the cost of accidental complexity, and it does not provide language-level support for memory safety or concurrency safety. Also, C++ offers too many features and is one of the most complicated programming languages to master. For all these factors, and because of its platform dependency, C++ lost popularity to Java in the early 2000s, especially in enterprise software development and the Big Data domain.
C++ is once again gaining popularity with the rise of GPUs, containerization, and cloud computing, as it can quickly adapt itself to take advantage of hardware or ecosystem changes. Today, C++ is one of the most important and heavily used programming languages in the industry.

3 Key Features: Like Java, C++ is constantly modernizing and adapting itself to changes in hardware and the ecosystem. C++ gives full control over the underlying hardware and can run on every platform and take advantage of every kind of hardware, whether it is a GPU, TPU, container, cloud, mobile device, or microcontroller. C++ is blazingly fast and is used heavily in performance-critical and resource-constrained systems.

Popularity: C++ is the second oldest programming language in this list and ranked 4th in the TIOBE programming language ranking. Octoverse has ranked C++ in 6th position by GitHub repository contributions, and the StackOverflow Developer Survey 2019 has listed C++ as the 9th most popular technology (6th most popular language). Although C++ faces massive competition from modern programming languages like Rust and Go, it has generated stable interest over the last five years.

Job Market: Indeed has ranked C++ as the 4th most in-demand programming language, with 41 K job postings. Also, C++ developers earn $108 K per annum, which places the language in 5th place. The StackOverflow developer survey has shown that C++ developers can draw a higher salary than Java developers, albeit with longer experience.

Main Use Cases: System Programming, Game Development, IoT and Real-Time Systems, Machine Learning and Deep Learning, Embedded Systems, Distributed Systems.

7. PHP

Like Python, PHP is another programming language developed by a single developer as a side project during the '90s. Software engineer Rasmus Lerdorf initially created PHP as a set of Common Gateway Interface binaries written in C in order to create dynamic Web applications. Later, more functionality was added to the PHP product, and it organically evolved into a fully fledged programming language. At present, PHP is a general-purpose, dynamic programming language mainly used to develop server-side Web applications. With the rise of JavaScript-based client-side Web application development, PHP is losing its appeal and popularity; PHP is past its prime. Contrary to popular belief, though, PHP will not die soon, although its popularity will gradually diminish.

3 Key Features: PHP is one of the most productive server-side Web development programming languages. As PHP has been used in Web development for about 25 years, there are many successful and stable PHP frameworks in the market. Many giant companies are using PHP (Facebook, WordPress), which leads to excellent tooling support for it.

Popularity: The programming language ranking site TIOBE has ranked PHP as the 8th most popular programming language in January 2020, although the long-term ranking history shows that PHP is past its prime and slowly losing its appeal. Octoverse has ranked PHP as the 4th most popular programming language by GitHub repository contributions, and as per the StackOverflow developer survey 2019, PHP is the 5th most popular programming language (8th most popular technology). Although PHP is still one of the most widely used programming languages, its trend is slowly going down, as is clear from Google Trends.

Job Market: The job search site Indeed has ranked PHP as the 7th most in-demand programming language in the USA job market, with 18 K positions in January 2020.
Also, PHP developers can expect a reasonable salary ($90 K), which places them in 10th position in this category. The StackOverflow developer survey shows PHP as the lowest-paid programming language in 2019.

Main Use Cases: Server-Side Web Application Development, Developing CMS Systems, Standalone Web Application Development.

8. Swift

Swift is one of only two programming languages in this list that also appeared in my list "Top 7 modern programming languages to learn now". A group of Apple engineers led by Chris Lattner worked to develop the new programming language Swift, mainly to replace Objective-C on the Mac and iOS platforms. It is a multi-paradigm, general-purpose, compiled programming language that also offers high developer productivity. Swift supports the LLVM compiler toolchain (also developed by Chris Lattner), like C/C++ and Rust. Swift has excellent interoperability with Objective-C codebases and has already established itself as the primary programming language for iOS app development. As a compiled and powerful language, Swift is gaining popularity in other domains as well.

3 Main Features: One of the main USPs of Swift is its language design. With simpler, more concise, and cleaner syntax and developer-ergonomic features, it offers a more productive and better alternative to Objective-C in the Apple ecosystem. Swift also offers features of modern programming languages, such as null safety, and it provides syntactic sugar to avoid the "Pyramid of Doom." As a compiled language, Swift is blazing fast, like C++, and it is gaining popularity in system programming and other domains as well.

Popularity: Like other modern programming languages, Swift is hugely popular among developers and ranked 6th in the list of most loved languages. Swift has also propelled itself into the top 10 of the TIOBE index only 5 years after its first stable release. The ranking site PYPL has ranked Swift as the 9th most popular programming language, while the StackOverflow developer survey has ranked it as the 15th most popular technology (12th most popular programming language). Google Trends also shows a sharp rise in the popularity of Swift.

Job Market: Indeed has ranked Swift as the 9th most in-demand language in the USA, with 6 K openings. In terms of salary, Indeed has ranked Swift in 2nd place with a $125 K yearly salary. The StackOverflow developer survey has also revealed that Swift developers can earn a high salary with relatively fewer years of experience compared with Objective-C developers.

Main Use Cases: iOS App Development, System Programming, Client-Side Development (via WebAssembly), Deep Learning, IoT.

9. Go

Like Swift, Go is only the second programming language from the last decade in this list. Also like Swift, Go was created by a tech giant. In the last decade, Google frustratingly discovered that existing programming languages could not take advantage of Google's seemingly unlimited hardware and human resources; for example, compiling Google's C++ codebase took half an hour. They also wanted to tackle the development scaling issue with the new language. Renowned software engineers Rob Pike (UTF-8) and Ken Thompson (UNIX) at Google created a new, pragmatic, easy-to-learn, highly scalable system programming language, Go, which was released in 2012. Go has a runtime and garbage collector (a few megabytes), but this runtime is packed into the generated executable. Although Go is a bit feature-anemic, it has become a mainstream programming language in a short period.
3 Key Features: Go has language-level support for concurrency. It offers CSP-based message-passing concurrency via goroutines (lightweight green threads) and channels. The biggest USP of Go is its language design and simplicity: it has successfully combined the simplicity and productivity of Python with the power of C. Go has an embedded garbage collector (albeit not as mature as the JVM's), so Go developers can write system programs with the safety of Java or Python.

Popularity: Like Swift, Go has seen a meteoric rise in popularity. On almost all of the popular programming language comparison websites, Go ranks high and has surpassed many existing languages. In the TIOBE index ranking from January 2020, Go ranks 14th. The StackOverflow developer survey 2019 has ranked Go as the 13th most popular technology (10th most popular programming language), and according to the same survey Go is the 9th most loved programming language. Go is also one of the top 10 fastest growing languages according to GitHub Octoverse. The increasing popularity of Go is also reflected in Google Trends, which shows increasing traction for Go over the last five years.

Job Market: Indeed has ranked Go as the 10th most in-demand language, with 4 K openings in January 2020. In terms of salary, Go is ranked in 9th position. The StackOverflow developer survey 2019 has shown Go to be one of the highest-paid programming languages.

Main Use Cases: System Programming, Serverless Computing, Business Applications, Cloud-Native Development, IoT.

10. Ruby

Ruby is the third programming language in this list developed by an individual developer during the 1990s. The Japanese computer scientist Yukihiro Matsumoto created Ruby as an "object-oriented scripting language" and released it in 1995. Ruby later evolved into an interpreted, dynamically typed, high-level, multi-paradigm, general-purpose programming language. Ruby is implemented in C and offers garbage collection. Like Python, Ruby focuses heavily on developer productivity and developer happiness. Although Ruby is not one of the hyped languages at this moment, it is an excellent language for new developers thanks to its flat learning curve.

3 Key Features: Ruby has successfully combined some of the best features of programming languages: it is dynamic, object-oriented, functional, garbage-collected, and concise. Although Ruby itself is not disruptive, its Web development framework Ruby on Rails is probably the most disruptive and influential server-side Web development framework. Ruby is used by some of the largest software projects, like Twitter, GitHub, and Airbnb, and has excellent tooling and framework support.

Popularity: TIOBE has ranked Ruby as the 11th most popular programming language in January 2020, with a hugely positive move (source: TIOBE). Octoverse has also ranked Ruby as the 10th most popular programming language in 2019 by GitHub repository contributions, and the StackOverflow developer survey 2019 has listed Ruby as the 9th most popular programming language (12th most popular technology). Ruby has not been a hyped language in recent years, but it has maintained its traction, as per Google Trends.

Job Market: In the USA job market, Ruby developers can draw huge salaries and are ranked 1st by Indeed. Also, Indeed has posted 16 K openings for Ruby developers in January 2020, which makes Ruby the 8th most in-demand programming language in this list.
The StackOverflow developer survey 2019 has also shown that Ruby developers can earn a high salary with relatively little experience.
https://towardsdatascience.com/top-10-in-demand-programming-languages-to-learn-in-2020-4462eb7d8d3e
['Md Kamaruzzaman']
2020-12-22 10:16:21.789000+00:00
['Python', 'JavaScript', 'Software Development', 'Java', 'Programming']
Data science… without any data?!
Data science… without any data?! Why it’s important to hire data engineers early “What challenges are you tackling at the moment?” I asked. “Well,” the ex-academic said, “It looks like I’ve been hired as Chief Data Scientist… at a company that has no data.” “Human, the bowl is empty.” — Data Scientist. Image: SOURCE. I don’t know whether to laugh or to cry. You’d think it would be obvious, but data science doesn’t make any sense without data. Alas, this is not an isolated incident. Data science doesn’t make any sense without data. So, let me go ahead and say what so many ambitious data scientists (and their would-be employers) really seem to need to hear. What is data engineering? If data science is the discipline of making data useful, then you can think of data engineering as the discipline of making data usable. Data engineers are the heroes who provide behind-the-scenes infrastructure support that makes machine logs and colossal data stores compatible with data science toolkits. If data science is the discipline of making data useful, then data engineering is the discipline of making data usable. Unlike data scientists, data engineers tend not to spend much time looking at data. Instead, they look at and work with the infrastructure that holds the data. Data scientists are the data-wranglers, while data engineers are the data-pipeline-wranglers. Data scientists are the data-wranglers, while data engineers are the data-pipeline-wranglers. What do data engineers do? Data engineering work comes in three main flavors: Enabling data storage (data warehouses) and delivery (data pipelines) at scale. Maintaining data flows that fuel enterprise operations. Supplying datasets to support data science. Data science is at the mercy of data engineering You can’t do data science if there’s no data. If you get hired to be head of data science in an organization where there’s no data and no data engineering, guess who’s going to be the data engineer…? You! Exactly. What’s so hard about data engineering? Grocery shopping is easy if you’re just cooking something for your own dinner, but large scale turns the trivial into the Herculean — how do you acquire, store, and process 20 tons of ice cream… without letting any of it melt? Similarly, “data engineering” is fairly easy when you’re downloading a little spreadsheet for your school project but dizzying when you’re handling data at petabyte scale. Scale makes it a sophisticated engineering discipline in its own right. Scale makes it a sophisticated engineering discipline in its own right. Unfortunately, knowing one of these disciplines in no way implies that you know anything about the other. Should you learn both disciplines? If you’ve just felt the urge to run off and study both disciplines, you might be a victim of the (stressful and self-defeating) belief that data professionals have to know the everything of data. The data universe is expanding rapidly — it’s time we started recognizing just how big this field is and that working in one part of it doesn’t automatically require us to be experts of all of it. I’d go so far as to say that it’s too big for even the most determined genius to swallow whole. Working in one part of the data universe doesn’t automatically require us to be experts of all of it. Instead of expecting data people to be able to do all of it, let’s start asking one another (and ourselves), “Which kind are you?” Let’s embrace working together instead of trying to go it alone. But isn’t this an incredible opportunity to learn? Maybe. 
It depends how much you love the discipline you already know. Data engineering and data science are different, so if you’re a data scientist who didn’t train for data engineering, you are going to have to start from scratch. Building your data engineering team could take years. This might be exactly the kind of fun you want — as long as you’re going in with open eyes. Building your data engineering team could take years. Sure, it’s nice to have an excuse to learn something new, but in all likelihood, your data science muscles will atrophy as a result. As an analogy, imagine you’re a translator who is fluent in Japanese and English. You’re offered a job called “translator” (so far, so good) but when you arrive at work, you discover that you were hired to translate from Mandarin to Swahili, neither of which you speak. It might be stimulating and rewarding to take the opportunity to become quadrilingual, but do be realistic about how efficiently you’ll be using your primary training (and how terrifying your first performance review may be). Who doesn’t love a good bad translation? Image: SOURCE. In other words, if a company doesn’t have any data or data engineers, then accepting a role as Chief Data Scientist means putting your data science career on hold for a few years in favor of a data engineering career — that you might not be qualified for — while you build a data engineering team. Eventually, you’ll gaze proudly at the team you’ve built and realize that it no longer makes sense for you to do the nitty-gritty yourself. By the time your team is ripe for those cool neural networks or fancy Bayesian inference that you did your PhD on, you have to sit back and watch someone else score the goal. Advice for data science leaders and those who love them Tip #1: Know what you’re getting into If you’re considering taking a job as a head of data science, your first question should always be, “Who is responsible for making sure my team has data?” If the answer is YOU, well, at least you’ll know what you’re signing up for. Before taking a data science job, always ask about the *who* of data engineering. Tip #2: Remember that you’re the customer Since data science is at the mercy of data, merely having data engineering colleagues might not be enough. You might face an uphill struggle if those colleagues fail to recognize you as a key customer for their work. It’s a bad sign if their attitude reminds you more of museum curators, preserving data for its own sake. Tip #3: See the bigger (organizational) picture While it’s true that you’re a key customer for data engineering, you’re probably not the only customer. Modern businesses use data to fuel operations, often in ways that can hum along nicely enough without your interference. When your contribution to the business is a nice-to-have (and not a matter of your company’s survival), it’s unwise to behave as if the world revolves around you and your team. A healthy balance is healthy. Tip #4: Insist on accountability Position yourself to have some influence over data engineering decisions. Before signing up for your new gig, consider negotiating for ways to hold your data engineering colleagues accountable for collaborating with you. If there are no repercussions to shutting you out, your organization is unlikely to thrive. Thanks for reading! Liked the author? If you’re keen to read more of my writing, most of the links in this article take you to my other musings. Can’t choose? Try this one:
https://towardsdatascience.com/data-science-without-any-data-6c1ae9509d92
['Cassie Kozyrkov']
2020-11-13 14:56:17.278000+00:00
['Data Science', 'Technology', 'Data Engineering', 'Artificial Intelligence', 'Business']
How Entrepreneurs Can Thrive in a New Era of Uncertainty
Len Schlesinger is President Emeritus at Babson College and the Baker Foundation Professor at Harvard Business School where he serves as Chair of the School’s Practice-based faculty and Coordinator of the Required Curriculum Section Chairs. He has served as a member of the HBS faculty from 1978 to 1985, 1988 to 1998 and 2013 to the present. During his career at the School, he has taught courses in Organizational Behavior, Organization Design, Human Resources Management, General Management, Neighborhood Business, Entrepreneurial Management, Global Immersion, Leadership and Service Management in MBA and Executive Education programs. He has also served as head of the Service Management Interest Group, Senior Associate Dean for External Relations, and Chair of the School’s (1993–94) MBA program review and redesign process. In this interview with Carbon Radio, he talks about how entrepreneurs will win in this new era of uncertainty. He addresses how healthcare and higher education are changing, and how entrepreneurial thought and action will enable organizations to thrive in a post-Covid world. What do you think about what’s going on right now and how can entrepreneurship play a role in the recovery of the economy? Satya Nadella, the CEO of Microsoft, has actually nailed the framing of the issue in a very compelling way, and it’s one that I have been using countless number of times with credit to him. He talks about the three phases of current reality, and the first one is obviously restore. There needs to be some mechanism by which we can restore businesses and organizations to some semblance of reality. The second is recover. What are all the things we need to do to get customers back, to get service providers back, to get the systems working? And the third, and obviously the most exciting and most compelling part of the equation, is reimagine. What we have is the opportunity, whether you’re a small business or a large business of any kind, to use the experience of the last several months to think about ways in which you can reinvent every aspect of your business model and every way in which you interact with customers or constituents, and there’s no question that that work has just begun. And much of what has been done to accommodate constituencies in the context of the pandemic will end up proving to be extraordinarily useful on an ongoing basis. The reality, as you suggest in one of your questions, is we’re still left with an enormous amount of uncertainty about what a current reality is and what reality is going to be 90 days from now, let alone a year from now. And those are the times where the winners are always the entrepreneurs. They are the ones who are able to not only cope with uncertainty, but flourish in uncertainty and figure out ways in which they can actually take some small steps to get a sense of what might or might not work in the new reality called Post Covid-19. Until we have a well-established vaccine that has the whole world saying, “OK, we’ve got this one licked”, I can’t imagine anything that approaches a state of normalcy. And given the failures of most governments and healthcare systems and quite honestly, most citizen populations around this particular pandemic, there’s an opportunity to reinvent so many aspects of our lives as a community as an outgrowth of this. 
The question is, “will we have the patience and temperament to do that?” The call I had before you indicated really a deep fear that we’re already seeing that many American populations are just flat out bored with current reality and have just decided they’ve had enough, and so they’re going to misbehave in all sorts of ways. We’re beginning to see the potential for consequences as you see Covid-19 rates begin to spike. I just have a feeling the next few months are going to be pretty ugly. How do you think healthcare entrepreneurs in particular will play a role in reimagining society moving forward? There are three or four ways in which it has already become obvious. One is the spike in telehealth. So at the time of Covid-19, there were very few significant players in telehealth. Kaiser had managed to have more than half of their GP appointments done on telehealth, but other than that, it was an idiosyncrasy. And we got forced into telehealth, and it’s proving to be far more robust and far more powerful than anybody imagined. There’s absolutely no question, as part of the process of reinvention, we will begin to think about where and how you need to have a physical interaction with a doctor, because there’s very few industries that are less customer slash patient centric than healthcare. Particularly as you move into parts of the United States where geographic access to healthcare requires a two hour drive, the notion of being able to handle most basic activities over the phone or over the Internet will change all of that. At this point, the folks who will have a profound influence on whether that happens are the insurance companies. Right now, most insurance companies are paying the same rate for live and for telehealth. And if they immediately go back to depreciating the value of an electronic interaction versus a live interaction, I think you’ll see some slowdown, but there’s no question new mechanisms for interaction with doctors and healthcare providers will change in all sorts of ways. The second piece of that is something I was reading about the other day about how all these people aren’t going to doctors, and there doesn’t appear to be any epidemic of any other kind of healthcare issues over the last several months. So this issue of the habits that we’ve established for visits to doctors and the activities that we go to doctors for, I think lots of people are going to start to challenge that and that has the opportunity to have a profound influence on healthcare costs and old habits that, by and large, are supported by empirical data. The third piece is to understand how much the economy of the healthcare systems are critically dependent on elective procedures and, quite honestly, how unprepared most healthcare systems were to deal with the underlying structure of the pandemic. I’m reading in the paper today that major healthcare systems here in Boston still don’t have access to PPE. You know, you sit there and say, “oh, jeez.” And so what we have demonstrated is, because it’s not “medical”, but it is “critical” for healthcare, there was a systematic inattention to the global logistics system in healthcare. I don’t know who was responsible for it or how it was thought about, but there was this gravitational pull for everything to go to lowest cost providers and everything to get off shore. We had very few domestic providers. We had no emergency supplies. Our stockpile had run low. 
And I’ve got to believe that hopefully this ends up being a scary reminder of just how fragile our global logistics system is, not just in healthcare, but in all sorts of industries. This will raise serious questions. All three of those things — new access to physicians, new access to global supply chains, and rethinking the interaction between patients and doctors and when they need to go and when they don’t. All three of those are going to be stimulated and grown by entrepreneurs. How do you think about small businesses and family businesses in this time and what can we learn or what are we learning about how they’re operating in this time of extreme uncertainty? I will separate them. I think of family businesses different than I think of small businesses. So, let me start with family enterprise. The one thing everybody tends to kind of romance the notion of family enterprise and think that somehow they’re small businesses. We need to understand on a global scale there’s substantially more wealth in family enterprise than there is in the aggregated wealth of all of the public corporations that exist. Families have longer history. Families have longer aggregations of wealth and quite honestly, there are families that have demonstrated extraordinary resilience. You know, multiple generations of family being able to move through in ways in which our theories about organizations would indicate that private organizations, by and large, have not been able to do. So, I think the challenges that are facing family enterprise in the aggregate aren’t really profoundly different than those that are facing any other organization. There are some special issues associated with family dynamics, alongside organizational dynamics, but the nature of the challenges are roughly the same. Small business is a whole different ballgame. The most important thing to understand about small business is it depends what country I’m talking to you from. In the United States, if I look at the Small Business Administration, they define a small business as any business with under five hundred employees. And the reality is, when they talk about the significance of small business, they’re really talking about the very small part of the population that has 350 to 500 employees. They ignore microenterprises. They ignore neighborhood businesses. Those are the ones that are just getting killed. Absolutely getting killed. A lot of them, obviously, in food service and in restaurants. The latest data indicates that probably at least 25 percent of them won’t survive. Literally won’t survive, largely because they don’t have stores of cash. Large organizations today are sitting on absolute hoards cash, trying to figure out what they’re going to do when this is all over, what regime they’re going to buy up and what industries they’re going to go into. The smaller microbusinesses, they need the cash flow to operate the business and deliver. That doesn’t exist. The PPP wasn’t necessarily framed correctly, and the most naive part of the PPP here in the United States, and it really was naïve, was operating off the assumption that you could use the banks as the source of application. That was predicated on the assumption that these smaller businesses have banking relationships. And usually they’re making relationships where they have access to capital. So, it ignores, particularly for minorities, the average net worth of a black adult citizen of Boston is eight and a half dollars. 
If you have eight and a half dollars, you’re not worried about a banking relationship and you’re not calling up your neighborhood banker to get access to it. So, it took a while to understand that. Again, the interim solution for that was the rise of fintech. So, the fintech organizations, most specifically organizations like QuickBooks, stepped in and got authority to actually file the applications, in addition to banks, and stepped in and provided an absolutely critical resource for small businesses that banks, by and large, for the really small ones, don’t play. That being said, the rules change all the time. It was designed for eight weeks. Now it is designed for 24 weeks. If you do it correctly, by and large, it’s a grant. I understand that. But, it was a grant that was actually intended to keep people on your payroll, and when it was designed, nobody forecasted the length of the Covid situation. So, it didn’t hurt, but it really hasn’t helped. How do you think entrepreneurs are uniquely capable of operating in what is seemingly the most uncertain time of most of our lives? Well, I mean, the notion at this point is even in the midst of uncertainty, one can see the opportunity structure and the opportunity structure is entirely driven by uncertainty. So people have hobbies. People have expertise. People have interests. And there’s no better time to imagine new scenarios and experiment. I mean it’s really just that simple. The most powerful way to reduce uncertainty is to take a step and see what happened. As opposed to the traditional business planning process of people sitting around and dreaming of something, the need is now. The problems are now. The steps that one can take to address the problems are now. It takes an entrepreneur with a temperament and a mindset to actually take that step and see what happens to actually create the new solutions in the post-Covid environment. Do you think people will look at risk differently now or through a similar analysis? Well, it’s a more robust analysis. We just got hit by something that wasn’t in anybody’s risk analysis framework. And so now you have to add global pandemics to your list of things to worry about. There are only nine more plagues to work with. The reality is the risk management frameworks are not poorly defined and, by and large, are generally pretty well taken care of. Where we’re going to find people right now in risk is people stimulating, particularly in healthcare entrepreneurial activities, to get things to market faster than they should. We’ve seen this before. We saw it with the swine flu vaccine as well. I do believe that the political pressures to announce a vaccine, given the realities of bringing a vaccine to a market that does the job with minimal risk, those tensions are going to be very powerful at the high end, and there’ll be variations on that tension all the way down to the small neighborhood businesses. You wrote a blog post a few years back titled “Don’t Forget the Mayors”, which focused on the work of Mayors across the country. How should local governments be thinking about investing in entrepreneurial ecosystems? Thank God for the mayors today. If you’re looking at the folks who are closest to the action, who have the most capacity to be able to shape and influence citizenship behavior, it’s at the local level, and we see countless number of examples of both good and bad mayors across the United States. 
And quite honestly, the consequences of bad leadership at that level, which really does involve lives, you know, that’s where those lives are being decided on. The question for a government at the core, which is the question that ethicists and all sorts of other people have raised around Covid-19 is how do you balance the desire to get the economy going with the desire to ensure that lives are saved? And we’ve allowed for that debate to go on and be framed as a political debate. As our administration has oftentimes framed it as we don’t want the cost of compliance and the cost of responding to the coronavirus to exceed the value. And we’re very much in the middle of that right now. Our systematic inability as a nation and as a set of communities to actually have that question addressed without contention is very much at the source of the problems that I expressed that I was concerned about relative for the next several months. What do you think the future of higher education will look like with the pandemic going on and as technology improves? Most of higher education got pushed, and I mean pushed, into online learning. And so lots of educational institutions are busily celebrating their ten-day transition to online learning. Most of it, I would guess, is not very good. Now, as we think about what we’re going to do in the Fall, the question then becomes one of, well, how good can we get between April and August? So we’ve got six months. How good can we get? How can we actually figure out how to use all of the tools that we have to dramatically increase the quality of the online experience? There are three things we know. The online experience can be improved exponentially, and there are countless number of people who are already doing it. They actually tend to have large numbers of students already. What you don’t want to do is ignore the fact that the online leaders, the Arizona States, the Purdues, the Southern New Hampshire University, the Penn States, are already capturing a huge percentage of the capacity in that space. They do it quite well, by and large, and they do it with huge amounts of economic advantages. If I was to wake up this morning and say, “I’m going to go into the online business” and I went and talked to my friend. He’d say, “you’re an idiot.” Unless you have an idiosyncratic niche that hasn’t been covered in any way, shape or form by online learning, you’re just going to get crushed by people who have capacity. That’s number one. Most of these schools that are making these deep commitments to online, they’re doing it in some respects as a hobby and something to pass the time until they can go face to face. They’re not looking at it as a permanent restructuring of their model. The reality is it has raised fundamental challenges to the higher education economic model that have been raised for the last decade. And just like we had a 10 day transition to online learning, we’ve now had a 10 day transition to a serious examination about “why am I paying 50, 60, 70 thousand dollars a year?” Particularly when institutions delivered online, and in addition to delivering online, refused to cut tuition. Colleges can’t cut their tuition given their economic model, which is still dependent on labor, and students are beginning to raise questions that are quite legitimate. And so if this goes on for another Fall, the pressure will be even greater. 
The folks who were writing the book about the college stress test, Zemsky, about six months ago said there are 10 percent of colleges in the United States of about 2,200 colleges, about 10 percent of them that are on the near death list. I think today they would say 25 percent. So you will have death, you’ll have consolidation. The longer this goes on, the less able these schools are to defend what it’s all about. Now, there are a whole bunch of other schools that have actually come to the conclusion, and I think it’s a gutsy and appropriate choice, that what they need to do is they need to do everything they need to do to deliver what they do, as much of it as is physically possible, live. And they are now all dealing with government, public health and science to try and figure out what they need to do to get as many people on campus in live situations to create the kind of value equation that they are all about. I applaud those schools for being pretty clear about what their strategy is all about and for not wanting to play the nonresidential game. But, in some respects the deal buster at this point are the folks who have been innovating now for a long time like Southern New Hampshire University. What they’ve done now is they have a residential campus, which was the core of that school before they went online, and they accept residential students. And now what they’ve said to the students they’ve accepted for this Fall’s class is, “You can all move on campus. We’re delighted to have you on campus if you want to be there on a campus. We’re not going to run the residential freshman year next year. So, we’re going to give you your first year of college absolutely free as an apology for disappointing you. And the commitment is while we’re delivering that for you, absolutely free, we’re going to be entrepreneurs, reinventing residentially-based education and coming back a year from now at ten thousand dollars a year.” So, the online people should worry about the big folks, and the residential people should worry about what comes out of Southern New Hampshire University in just a year from now. This is not something that we’re forecasting 10 years from now. They’ve made an ironclad commitment to be ten thousand dollars a year twelve months from now. How much of a university’s financial sustainability has to do with their endowment and their research funding? First of all, most schools don’t have large endowments. What you’re dealing with there is the media always writes about the Ivy League and the big state schools that have large endowments, and they ignore two things. One is they have large endowments, but for many of them, it’s 80 to 85 percent restricted. It’s been designated by the donor, and the school has little or no flexibility to figure out what they might be able to do with it and how to use it. So you don’t want to overemphasize. People tend to think about Harvard having 38 billion or 40 billion, whatever the number is now. They should be able to give it all away. Well, they can’t and nor can any school in that space. The reality is the vast majority of schools don’t have large endowments and are critically dependent on tuition, and it’s the dependency on tuition in this incredibly complex environment that is their threat. It’s not the absence of endowment. Is there precedence for the government getting involved to rescue universities? Would it be reasonable to think about it? Do I think someone in Congress will come up with a bill? The answer is yes. Do I think it can pass? 
Not in this environment. I mean not a chance. The colleges and universities came up with a need this spring, I think, of something like 40 to 60 billion dollars. They got 16 billion, and with strings attached, because half of the 16 had to go directly to students. So, they asked for 60 for their needs and they got eight. And that was the first emergency go round. It’s not going to get better. What do you think about remote work and how it’s impacting employees and how employers think about their office space? I think this issue of remote work came out of nowhere, literally came out of nowhere, out of necessity to keep people in our homes, and we’re learning a lot. I mean the reality is we only have four or five months of data at this point. We already have some large companies making significant commitments as a result of it. I live out here in the boonies, and people always say, “Well, how is it where you live out there?” I say, “It’s a great place to live as long as you don’t want to go anywhere.” Because getting into the city for me was two and a half to two and three quarter hours a day, back and forth. I have now picked that up. That’s my time. It’s time for sleeping. It’s time for exercising. It’s time for conversation. It’s time for work. So there’s no question people are discovering all sorts of opportunities there. I think there will be a pattern. We’ll be coming back to work. There’s no question about it for most of us, but in environments that are much less densely populated, fewer people required to come in. And this fantasy of remote work will increasingly become the work du jour. There are plenty of occupations and plenty of professions that don’t require people to be at work all the time. Now, start thinking about the second and third order consequences of that. One is, what does it mean for urban environments? And, what we see here in Boston is rents going down in Boston proper and the suburbs having these incredible spikes of interest as people are looking to move out here. You see that in virtually every major city. Rents down in New York, rents down in San Francisco dramatically, rents down in Boston. Whether that’s temporary or permanent, I tend to think it is a longer alive phenomena than people might think. The second issue, which is the most profound issue, is what do we do with all these big office buildings if we can’t figure out how to get people in elevators? When people say, “What’s it going to take to get people to go downtown?” Well, if you can only get two people in an elevator and your office is on the 52nd floor, it’s going to take 12 hours to get people in and 12 hours to get people out for two minutes of work. We built an infrastructure that, by its very nature, is potentially ill-suited for the new reality unless someone convinces us that we can put pandemics to bed forever, which is going to be a hell of a task. Obviously, public transport, the reality is it’s perceived as one of the greatest assets to get people to work and to not put cars on the road, and now there are plenty of people who don’t want to use public transport. If I’m the government in a city, I, too, am dealing with exactly the questions that I started this conversation with you. What does it take to restore some sense of normalcy? What does it take to recover from the greatest disruption to my economic base that I’ve ever experienced in modern history over the most sustained period of time? How am I going to reimagine this city? 
If you’re looking for the opportunities for entrepreneurship, the opportunities for local governments to completely rethink what they do and the ability to create ecosystems of all of the players in that local community, to systematically reinvent the logic of that city on a scale never thought about before is completely real. I was supposed to do some executive teaching in April of this past year right before coronavirus, and there was a case that we had written on a business in the UK that decided to go to remote work. My colleagues thought it was kind of a crazy case. I found some old dissertations that were written on it and some early stage stuff that was written on it, but it was kind of a fluke. Now, just three months later, they’re at the epicenter of a long-term solution. What do you think about Andrew Yang’s universal basic income proposals both in terms of policy and in terms of political feasibility? I don’t have deep conviction about the proposals. What we found essentially, in the context of the last four months, was right now the government gave one check, and I guess this morning they’re talking about another check. There’s no question it’s better than nothing, but only marginally better. The other extreme is the Biden proposals, along with the progressives of two thousand dollars a month per person until the Covid situation is over, and the reality there is that’s a big number. And if we’re already complaining about predisposition to not go into work with the 600 dollar supplement on unemployment insurance, that just exacerbates the problem in even greater detail. So, the idea of a universal basic income is not a bad idea, but it can’t be an idea that is devoid of context in terms of all the other things that happen or don’t happen, all the other supports that exist or don’t exist to allow our citizenry to thrive and flourish. It’s a great slogan. Over the last few years, we’ve learned the slogan of “universal basic income”. We learned “no student debt”, “free college”. I mean I can go through that whole list. Every one of them in and of themselves has the capacity to break the bank, and the fact that they’re not embedded in a broader context of how we’re going to do work, and an economic model quite honestly that allows this to work, is the bigger problem. What do you think about this field of futurism and the notion of forecasting, and does it have a role to play in these conversations at the Federal level about how we fund things in the long term? Let’s get very clear about this. This is the joy of entrepreneurship. Entrepreneurship, by and large, is naturally suspicious of forecasts. If you looked at the first seventy-five days of the Coronavirus and you looked at television, it was a never ending stream of competitive forecasts all of which would reach different conclusions in terms of what was going on and what the most appropriate next step is. I’m not suggesting that we are not interested in data, and I’m not suggesting that we’re not interested in improving the quality of our data, but as any social scientist will say, more importantly any economic investor will say, don’t actually take economic steps based on a forecast, that in fact, the forecasts are as good as the algorithms that go in. The algorithms are created by human beings, and they contain all of the biases of the human condition. Is the world going to come to an end in 2020 or 2030? 
I don’t know, but the reality is I don’t spend much time convinced that one forecast is going to be compelling over another, and in some respects, it’s why these dueling forecasts allow us to have political debates about everything. Is there incontrovertible evidence that we have deleterious impacts of climate change? Yes. Is the world going to end in 2030? I don’t know. Is the current reality of climate change an opportunity for a substantial number of entrepreneurs to think about activities they might engage in where you can actually make money and also make a better world? Yep. No question about it. When we look at the organizations that have done well coming out of the pandemic, there is one thing I’m absolutely certain of without making a forecast. I just believe it in my bones. And that is the organizations that have taken care of their people are the ones that are going to win. And our ability to avoid this ideological debate about our staff, it just drives me crazy. I gave a talk last month, and I was talking about the people who we’re calling our frontline workers, not healthcare workers, but frontline workers, they’re all being relabeled as heroes. And I just sit there and say, “You know what? Could we stop calling them heroes and could we pay them a decent wage?” It’s just that simple. I don’t want to give them a greeting card. I don’t want to applaud as they walk down the street. I want to make sure that we are recognizing the risks that they are taking on our behalf, one, and two, that we are recognizing that for a variety of circumstances, they don’t have a lot of other options and that we want to make sure that the most profound way we can communicate appreciation of their work and their effort is to provide them with all of the support they need to minimize the risk of exposure and to pay them for the risk they’re taking. A few organizations did it for a few weeks, and now they got bored.
https://medium.com/discourse/how-entrepreneurs-can-thrive-in-a-new-era-of-uncertainty-e2da83ae263b
['Carbon Radio']
2020-07-29 13:49:58.309000+00:00
['Leadership', 'Healthcare', 'Future', 'Higher Education', 'Entrepreneurship']
Flutter to the Future: The Inevitability of Cross-Platform Frameworks
Photo by UX Store on Unsplash So, you want to build a tech start-up. You have your product idea, your seed capital, and your founding team. Now you just have to hire three engineering teams and build three versions of your product. Surprised? Let us count them down. A website, obviously, that’s one. An iOS app that works on iPhones, that’s two. And an Android app that works on all the other smartphones, that’s three. Each one requires knowledge of different technology stacks and programming languages, so you need three engineering teams, or at the very least three engineering ninjas, each with total mastery over each of those respective areas. Your seemingly straightforward path to minimally viable product has just gotten three times thornier, with impacts to resources, costs, and timelines. And that’s before a single line of code has been written. It Used to Be Easy Mark Zuckerberg sitting in his Harvard dorm room did not have to worry about hacking together three separate versions of Facebook. Larry Page and Sergey Brin, grabbing coffee in Palo Alto, did not have to worry about ranking anything other than websites. And Jeff Bezos working out of his garage did not have to pay for three separate Amazon online bookstores. The rise of smartphones and mobile app stores has brought a new reality to the internet. Companies across every industry have recognized that customers demand an omnichannel gateway. We all want to move from laptop to tablet to smartphone and back again with no degradation in user experience. What has been a win for consumers has also created a barrier to entry for start-ups. It used to be that a single engineer could build a new port of call for the entire world because the entire world came visiting on the same type of boat — the browser. No longer. Now some come by browser, some by mobile browser, and many on smartphones, expecting a native app experience. One Size Fits (Not) All In the span of a decade, mobile strategy has gone from afterthought to prerequisite. So much so that a growing number of successful start-ups have bypassed the traditional web application altogether and built strictly for the smartphone. This can work if your app idea lends itself to the medium, as it often does in the worlds of gaming, social networking, and digital entertainment. However, customers for a vast majority of businesses still expect equal play on both web and mobile. That means the typical modern-day start-up must often raise capital to pay not only for the designers and full-stack and devops engineers, but also for iOS and Android developers. Beyond compensation, there is also the question of management. Designing and engineering three stand-alone applications just to provide a single offering to the market means three times the product and project management lift along with coordination among all three efforts. The brick and mortars equivalent of this conundrum is the form that must be filled out in triplicate. Remember those? Carbon copies — the real sort, not the email kind — solved that inefficiency some two hundred years ago. So, where is the carbon copy solution for web, iOS and Android? Cross Platform We Go The first iPhone was released in 2007 and the first Android phone followed in 2008. In a testament to the pace of innovation, the first cross platform frameworks for both iOS and Android were released as early as 2009. The best known of these was eventually (and aptly) titled “PhoneGap”. 
The commercial version of PhoneGap was acquired by Adobe while an open-source version was made available via the Apache foundation under the title “Cordova”. By the mid-2010s, several additional frameworks emerged, including Xamarin, NativeScript, Kivy, and Ionic, with the latter built atop the aforementioned Apache Cordova framework. The challenge with these frameworks was that they offered less granular control than writing native code and remained a couple steps behind the latest SDK improvements from Apple and Google, respectively. However, for those organizations that were able to leverage these frameworks, they offered a 2x savings in development cost and time. Before long, the world’s technology giants recognized that there was a lot of money to be made in providing the means to do something twice as fast, not to mention centralizing their own cross-platform development. In quick succession, Microsoft acquired Xamarin, Facebook developed React Native, and Google developed Flutter. State of Play Modern cross-platform frameworks have come a long way since the first version of PhoneGap. Today, Flutter and React Native sit atop a quintuplet of high-powered and widely used cross-platform mobile frameworks that also include Xamarin, Ionic, and Cordova. For new development, Flutter is the heavy favorite in this five-faced selection. The reason is simple — it supports a single codebase across all platforms. The other frameworks also support a single codebase but with exceptions, particularly for UI rendering. In addition to offering the first purely unified codebase, Flutter has been designed to expedite development tasks and to compile directly into machine code. Bypassing intermediate code, which is relied on by other frameworks, enables Flutter to deliver native level performance even for complex graphics and computations. Flutter to the Future Whether you are a founder determining a development direction, an IT executive selecting a technology stack, or an engineer choosing your next area to upskill, chances are that the right answer is Flutter. It is robust yet bleeding edge. Cross-platform development is the future and Flutter is the clear winner in this space. Each of the other frameworks is based on older approaches and is held back by legacy building blocks in their foundations. Designed atop the lessons learned from the shortfalls of earlier frameworks, Flutter is the first to present what founders and engineers alike thirst for — a truly cross-platform foundation that is the closest modern equivalent to the Write Once Run Anywhere (WORA) slogan first popularized with Java in the 1990s. There are challenges, of course. Flutter is still new and expertise is limited. However, winning in technology is about making bold bets on near-term evolution. Two or three years ago, Flutter made sense on paper, but the ecosystem was still limited. On the eve of 2021, the framework is ready to jump off the page and into your infrastructure. The Inevitability Regardless of whether you make the bet on Flutter, there is no question that cross-platform frameworks are slowly but surely supplanting native approaches. If your source code only runs on one platform then you are limiting your reach and disappointing a lot of customers. And if you are writing three versions of your source code then you are overstretching your resources and overtaxing your investors. It is important to remember that evolution from native to cross-platform has happened before. 
Early assembly languages that were tuned to native hardware architectures were inevitably replaced by higher level languages like C and Java that worked across computer types and operating systems. Technologies change but patterns remain the same. There is a reason car controls, restaurant menus, and computer keyboards all fit the same mold even though they come in different packages. We are wired for comfort and efficiency, and that means learn once or build once, and then re-use as often as possible.
https://medium.com/swlh/flutter-to-the-future-the-inevitability-of-cross-platform-frameworks-d541573b63f2
['Jack Plotkin']
2020-10-12 17:52:47.803000+00:00
['Cross Platform', 'Engineering', 'React Native', 'Startup', 'Flutter']
Seven Different Visualizations of Immunization Data
Photo by Joshua Sortino on Unsplash Analytics turn the facts collected in a spreadsheet, document, or database into answers and insight. Seeing data in a visual context makes the story behind the numbers pop and makes it easy for a wide range of readers or viewers to understand. There are many types of visualizations, so let’s use one dataset to show seven ways of seeing the same information. How Are Immunization and Immunization Exemption Displayed? The immunization records of school-age children in Washington State for 2014–2015 are used to build seven visualizations in Power BI, each making the data easy to understand and to present to an audience. Area Plot Area plots show counts by shading the region under a line drawn through data points on the x and y axes. Shown are three shaded lines for comparing data graphically. Bar Chart Bar charts show a count as a shaded bar for each label. Shown are three values as horizontal bars for each Educational Service District in the dataset. Key Influencers Key Influencers is a newer visualization that answers a binary-style question. Based on an algorithm, the display surfaces the variables most strongly correlated with a key outcome, showing which factors influence it most. Line Plot Line plots show counts by connecting data points with a line. Shown are three lines in different colors for comparing data graphically. Pie Chart Pie charts show counts as wedges of a circle, with the shading and size of each wedge representing its share of the whole. Shown are “slices” of total immunization and exemption per Educational Service District, adding up to total state enrollment. Scatter Plot Scatter plots show data by the size and placement of dots on a plane. Shown are points sized by intensity and positioned by enrollment against immunization count. Word Cloud A word cloud shows the most frequently used words in a sample of text or an input file, displaying common words larger and rare words smaller to summarize written content. In the graphic, the dataset’s website text was fed into an online app that generates the word cloud. Value in Features These are only some of the visualizations that many apps and libraries can create from data. Their purpose is to illustrate information quickly for a wide audience and to shape data into a story that is easy to consume. Some chart types tell the story of this data better than others, and building a dashboard from multiple visuals means choosing the ones whose results are meaningful and weave the data into a compelling story.
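The article builds these visuals in Power BI, but the same chart types can be sketched in code. As a rough illustration only, the snippet below uses Python with matplotlib and entirely made-up district-level numbers — the district names, enrollment, immunization, and exemption counts are hypothetical and are not taken from the Washington State dataset.

# A minimal matplotlib sketch of four of the seven chart types described above,
# using invented immunization counts for three hypothetical districts.
import matplotlib.pyplot as plt

# Hypothetical data: three districts with enrollment, immunized, and exempt counts.
districts = ["District A", "District B", "District C"]
enrollment = [12000, 9500, 15000]
immunized = [11000, 8700, 13800]
exempt = [400, 300, 600]

fig, axes = plt.subplots(2, 2, figsize=(10, 8))

# Bar chart: one horizontal bar per district for the immunized count.
axes[0, 0].barh(districts, immunized, color="steelblue")
axes[0, 0].set_title("Bar chart: immunized per district")

# Line plot: compare the three measures across districts.
axes[0, 1].plot(districts, enrollment, marker="o", label="Enrollment")
axes[0, 1].plot(districts, immunized, marker="o", label="Immunized")
axes[0, 1].plot(districts, exempt, marker="o", label="Exempt")
axes[0, 1].set_title("Line plot: three measures")
axes[0, 1].legend()

# Pie chart: each district's share of total exemptions.
axes[1, 0].pie(exempt, labels=districts, autopct="%1.0f%%")
axes[1, 0].set_title("Pie chart: share of exemptions")

# Scatter plot: enrollment vs. immunized, dot size scaled by exemptions.
axes[1, 1].scatter(enrollment, immunized, s=[e / 2 for e in exempt])
axes[1, 1].set_xlabel("Enrollment")
axes[1, 1].set_ylabel("Immunized")
axes[1, 1].set_title("Scatter plot: enrollment vs. immunized")

fig.tight_layout()
plt.show()

Power BI produces the same visuals through its report canvas rather than code, but the design decision is identical in both tools: pick the chart whose geometry — bars, lines, wedges, or dots — matches the comparison you want the audience to make.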
https://medium.com/ai-in-plain-english/seven-different-visualizations-of-immunization-data-b3185a791014
['Sarah Mason']
2020-11-30 18:40:47.677000+00:00
['Analytics', 'Big Data', 'AI', 'Data Science', 'Data Visualization']
5 Tasks for adaptation communications
THE ADAPTIVE CO: Don’t face your (climate changed) future without them Climate risk is a team sport. Play to win, with communications and org culture. (Commons image by Pixabay.) “That can’t be,” said your store manager. “We’ll be fine. It won’t get that bad.” When you sent him a memo and told him in a conference call and later personally at a meeting, that he and his 100 employees, plus the store’s local service vendors and suppliers, had to align with corporate’s new climate-adaptation plan, he and his principal lieutenants balked. Oh, they carried out the plan, to some extent — it was that, you said, or else — but with hesitation, unmoved by your presentation, unwilling to go all the way, worry employees, change suppliers, relocate facilities, distract from higher priorities, add operating expenses, hurt their numbers. Go through that much trouble? To avoid a scenario they can’t be confident about? Up the chain of command, the regional manager agreed. Up a couple of layers, so did the VP at HQ. Some regions, they noticed, were carrying it out better. But most weren’t. Up further, the board and CEO had approved and launched a TCFD process to assess and disclose the company’s climate risks, which in turn led to the memos, calls and meetings to go beyond disclosure and actually execute a far-reaching, transformative plan. An ambitious change-management program was underway, but like most at this scale, yours ran into organizational obstacles that flustered results. And that’s assuming you got the climate science right to begin with! If not, if your TCFD team underestimated the immediacy and severity of tipping points and socioeconomic risks, which McKinsey made clear in this recent report, even full engagement and participation by everyone in the company would be falling significantly short of the adaptation truly needed, and your company would remain at risk. Because here’s the fine point of it. For a complete adaptation plan to fully protect your company and secure a brand and organization for the climate-challenged future we will all face, you have to go enterprise-wide. The TCFD process is but the start. When you move to address the risks and capitalize on the opportunities informed by a TCFD assessment, you quickly realize it takes everyone everywhere in the company, mainly because climate impacts happen locally, and your people and suppliers must be ready, as must everyone in the support units up the chain, all the way to the very top. The trick is overcoming the trouble-confidence equation. Each of your 15 relevant stakeholders — board members, investors, senior leaders, the TCFD adaptation lead team itself, mid-level and unit managers, rank and file employees, suppliers/vendors, collaborators/partners, the upstream and downstream trade, bankers, insurers, the relevant government agencies, communities, NGOs and, of course, your customers—must see the adaptation plan not just as totally up to the task, but also outweighing the pains, costs and hassles of executing it. That trouble must be seen as far less burdensome than the horrifying troubles (climate consequences) that will befall them and the company if they fail to adapt. And the opportunities, along with the challenge of this entire process, must be seen as an exciting journey, one to welcome, not fear or avoid. 
That, in turn, is entirely a communications and organizational-culture exercise, which you can meet by executing five tasks, and that you logically must launch first, so your best-laid plan can be implemented across the organization. It enables the plan, since people must buy in first before they act with the agency, urgency and commitment needed. This framework is the result of a one-year deep dive I led with a collaborative agency and consulting team at COMMON, a leading global network pursuing global change through social enterprise (my firm is an affiliate), informed by learnings from the Center for Public Interest Communications at the University of Florida, where I’m pursuing a graduate degree. It is a unique combination, first of its kind anywhere in the world, of the latest climate science and leading-edge behavior science, the latter focused on overcoming human biases, all applied to corporate communications and culture for deep, sustained organizational change. This column provides a summary of the five tasks to implement. We begin by reiterating the basic principle: this is enterprise-wide, everyone-everywhere change management, but change management you can’t afford to get wrong. The stakes cannot be higher, and you will likely have this one chance to get it right before climate change spirals out of control later this decade and adaptation becomes moot. 1. Paint a new Future Picture The very first communications task is to help people envision the future as it will likely unfold from the 2020s to century’s end. The climate science of RCP 8.5 plus tipping points, as McKinsey explains it, yields a future dramatically different from what your 15 stakeholders expect based on what they know from present and past. This is fundamental. Unable to envision a scenario so unknown or outside their frames of reference, there is no way for them to react appropriately to news of this future and prepare fully. That’s called the Representative Bias. The Ambiguity Bias, Availability Bias and Status Quo Bias are at work here, as well; when people can’t comprehend something, the natural tendency is to stay in the known and familiar, in what is available to the mind, in your status-quo comfort zone. Therefore, unless the future projected in scientific reports is decoded and simplified, something news reports generally fail to do, it remains a thick cloud of complexity, and our minds do not think it through. This has become basic Behavior Science 101. Biases and heuristics (mental shortcuts) get in the way of logic all the time, even when the logic should compel self-preservation and organizational optimization behavior. Your memos, conference calls and meetings haven’t produced the expected response? Are you presenting the future in ways that overcome these biases? It is a task not to be underestimated, or executed timidly. Biases are very stubborn things. They must be attacked in big and bold, yet nuanced ways. How? Start by painting a clear picture of this future for your stakeholders. Create a new mental prototype that replaces or complements their present-and-past references in a way that grabs their attention, makes sense to them, and provokes interest. That includes walking them through (decoding) the likely scenarios from here to there. This can be done with virtual-reality animations, smart videos, art, storytelling, and other communication strategies. 
Do this creatively enough and deliver it persistently enough to everyone, everywhere, and before you know it your 15 stakeholders will get the new-future message. (More on messaging in a bit.) This should, in fact, mark the official launch of your adaptation initiative. Brand it, name it, like you would the launch of any product or social brand. 2. Provide Support. Manage Engagement As your stakeholders become exposed to the Future Picture, you’ll see various reactions. The best one is from those who have been reading the climate news, have grown concerned, and know adaptation is the way to go but have not acted on it. The Future Picture and your whole adaptation plan will give them what they’ve been longing: clearer information, how it applies to them and the company, a pathway there, and the license and empowerment to get involved. The leaders of your adaptation initiative will likely come from this group across the organization. In How Change Happens, Dr. Cass Sunstein presents numerous social-change movements around the world that only tipped into acceleration and effectiveness when a critical mass of believers like this was empowered and activated by a trigger event or organized effort. Your plan “movement” would fill that role in this instance. Then there are those concerned, as well, but not as much. They’ve been passive avoiders this whole time, knowing there’s a climate there-there, but preferring not to go there. They generally suffer from a combination of Optimism Bias and Confirmation Bias. In the first, people can’t help but have a rosy expectation of the future, in dissonance with the truth, and mis-plan accordingly. In the second, they take it one step further and rationalize their choices based only on sources and news accounts that agree, while ignoring actively or subconsciously those that anticipate a more dire outcome. When confronted with the truth, they tend to fight it by entering the Kubler-Ross Cycle of Grief, which begins with denial and goes through several stages of resistance, until the person accepts the inevitable and moves toward proactive action. Others are overtaken by fear, which tends to impede the effective action called for in your adaptation plan. It is a neuro-hormonal reaction known as the Amygdala Hijack, referring to the part of the brain that handles stress, in this case blocking the resourcefulness and initiative your people will need. Some of these reactions will overlap. The mission from a communications and organizational-culture perspective is to manage and redirect them, and that calls for a stakeholder-engagement project that should be placed in the hands of a capable Engagement Team at the company. What will they do? Several things, and this is not an exhaustive list, instead meant to give you an idea of scope and scale: Identify, segment and engage people as they start showing their biases and reactions. This entails a robust internal CRM system, similar to CRM programs used for external audiences, mainly customers, but in this case to finely segment all 15 stakeholders, starting with your board, senior team and TCFD team. They are the first who must get the science and Future Picture right to approve, champion and carry out the best possible plan. Launch a Forum, much like the ones we’ve become accustomed to in social media and corporate intranets. It is a fantastic way for people to connect directly, express what they’re feeling, advise each other, and coordinate collaborations across the organization. 
Engagement Team members would be there to move these conversations along, flag the folks who need special attention, and connect them with resources that provide it. members would be there to move these conversations along, flag the folks who need special attention, and connect them with resources that provide it. Run sense-making dialogues. This, too, runs deep in behavior science. It’s a directed process to have people in an organization think through an issue, crisis or challenge. As the name implies, the goal is for a solution to make inherent sense, so that a person will act on it from his/her own agency and volition. Create and manage an event calendar throughout the year and across the organization— seminars, webinars, conference calls, physical events, others — to communicate your adaptation program and create the sort of personal networking and engagement that leads to bias-breaking understanding and action hubs. In all of the above and other Engagement Team initiatives, pay special attention to high-transitivity, highly networked influencers and leaders at every level, across all 15 stakeholder categories. Voluminous behavior research shows that difficult change does not happen rapidly or at all — and this certainly qualifies as difficult behavior change!—unless these influencers and leaders buy in and join the effort. Call it Horizontal Leadership, New Power Participation, Connected Networks, or any of its many iterations, the essence is the same: you can flag these folks — using the CRM, and including Sunstein’s activated believers — and get them not just to embrace your adaptation plan, but to do so with leadership zeal, enterprise-wide. 3. Deliver the right Content & Creative What will the Engagement Team use to communicate? This is where the creative and content parts enter the picture. Other experts would probably have started this column with this. We figure it’s better to first understand the imperative, purpose and mechanics of the Future Picture and Engagement Team, so you may then instinctively place this component. It’s what a Corporate Communications Department does, along with Public Relations, Investor Relations, Marketing and their external agencies. When a project team is assembled to manage something like TCFD execution and yields a “product” like your adaptation plan, you usually ask these communication colleagues for help in creating the messaging, artwork, creative pieces, media and channel plan, social-media community management, and other such executions, as part of a coherent multi-stakeholder communications strategy. Relatedly, TCFD includes opportunities to innovate and launch adaptation-related products and services, which Marketing is called on to promote and scale. A tweak on that approach will probably serve you well. Given the highly specialized nature of RCP 8.5 + tipping-point climate science, the science of high-difficulty behavior change, the complex TCFD structure, and the far-reaching, profoundly transformative adaptation process that must stem from it, this is one change-management project better matched with its own, equally specialized communications group. In this column, let’s call it your Messaging & Creative Team. Again, the difficulty bar is really high. You get one shot to get it right, given the daunting climate-change timing. Better to go with a specialized group. Much of the daily work, mind you, may still be done by your regular comm resources, internal and external. 
The big need filled by Messaging & Creative is strategy, direction and coordination. Members will huddle with existing strategists at Corporate Comm, PR and IR to segment the stakeholders and decide on messaging and approaches for each one, a best practice of robust similar efforts. There’s always an umbrella message, but it must be tailored for each audience and delivered across the channels each one uses. Likewise with artwork and creative, including, importantly, the design of the Future Picture! The Engagement Team, for one, will need a highly coordinated stream of speeches, event materials, sense-making materials, training materials, fact sheets, slideshows and videos for key meetings and presentations, mini-documentary films, on-premise posters and materials, intranet and social-media videos and posts, related news and storytelling pieces, and more. Taking the Optimism Bias as an example, they’ll use these tools to redirect motivation to a code driven not by outcomes (which the world now knows will likely be dire), but by the four drivers of new climate optimism: Adaptation as the one big hope. Doing the right thing — focus on ethics and compassion, not outcome. Being comfortable focusing on probable scenarios we can envision, instead of fearful blurry outcomes. Framing the excitement and adventure of facing down this new reality and emerging as one of the brands and companies that drives it. In pop culture, this is already happening. It is called Hopepunk, explained nicely in this recent article. Again, the hope is in the attitude and adaptation, not in the outcomes. Your Messaging & Creative Team can draw from the storytelling of this popular movement and create something special for your 15 stakeholders. Because the future will be hard. You’ll want to be one of the corporate beacons of hope, but that hope must be grounded in truth, not based on false expectations that are bound to crash and undermine your business and reputation. 4. Build an Adaptation Culture To achieve enterprise-wide buy-in, enable everyone everywhere to join with excitement and commitment — from the board and senior team down to the parking attendant and concierge, and over to the most remote supplier — without falling into the uneven, here-yes there-not-so-much gaps of most change management projects, you’ll need a fourth component: an organizational-culture initiative. There are dozens of models. You may be familiar with or have had a good experience with one or two. If so, wonderful. Perhaps you can apply the model to this challenge. For the sake of illustration, let’s use a framework by NOBL, a leading American org-culture firm and COMMON member. They feature five culture levels: Environment , the conditions in which your company operates (local economies, competitors, technologies, partners, etc.). Today, no assessment or management of this environment is complete without including our shared climate future using RCP 8.5 and tipping-point scenarios. , the conditions in which your company operates (local economies, competitors, technologies, partners, etc.). Today, no assessment or management of this environment is complete without including our shared climate future using RCP 8.5 and tipping-point scenarios. Purpose , the reason behind the work you do in response to and within that environment, including the corporate values everyone in the company is supposed to live by. Adaptation should be inserted as one of those values, along with the usual suspects: teamwork, quality, safety, sustainability, others. 
, the reason behind the work you do in response to and within that environment, including the corporate values everyone in the company is supposed to live by. Adaptation should be inserted as one of those values, along with the usual suspects: teamwork, quality, safety, sustainability, others. Strategies , the bets you make to fulfill the purpose. The whole TCFD process is designed to land in a strategic planning process that manages every risk and capitalizes on every opportunity. To the extent it’s integrated seamlessly into your pre-TCFD, pre-adaptation corporate strategy, and enhances it to secure an adapted future, you win. , the bets you make to fulfill the purpose. The whole TCFD process is designed to land in a strategic planning process that manages every risk and capitalizes on every opportunity. To the extent it’s integrated seamlessly into your pre-TCFD, pre-adaptation corporate strategy, and enhances it to secure an adapted future, you win. Structures , the distribution and allocation of resources you need to execute the strategies, including budgets, chain of command, board and C-suite leadership, etc. This step dictates the resources enterprise-wide allocated to your adaptation project. , the distribution and allocation of resources you need to execute the strategies, including budgets, chain of command, board and C-suite leadership, etc. This step dictates the resources enterprise-wide allocated to your adaptation project. Systems, the tools and steps that align organizational change to all of the above. For new adaptation behaviors, particularly considering the hard-to-break biases you must overcome, this includes such things as employee hiring, training, networking, recognition, Kubler-Ross grief management, and empowerment, plus risk management processes (financial, insurance, socioeconomic, others), facilities management, supply-chain management, IT systems, innovation feedback loops, and more. Some of this you may already be pursuing in your TCFD or other adaptation process. And just as the Messaging & Creative Team would work with existing internal and external comm folks at the company, so too would this new Culture Team get in sync with your existing efforts and resources, in this case with the objective of scaling adaptation enterprise-wide, and here again, deploying specialized expertise to secure optimized and rapid results. The Communications and Engagement teams, for their part, would work in total collaboration with Culture, the first to provide the needed messaging and materials, the second to “distribute” the systems, structures, strategies and values to the whole organization. 5. Capitalize on Trigger Events I mentioned earlier that Cass Sunstein’s How Change Happens research documented how certain incidents and events, most of the time spontaneous and unpredictable, have sparked successful change movements across history by turning theretofore passive believers into a determined mobilization. People, he discovered, tend to keep quiet about opinions boiling inside, until some event awakens them from passivity and they decide to burst onto the scene. As others do, as well, and they realize the number of silents was far greater than they assumed, they grow in number, confidence and action. So it is within companies. 
There is absolutely no reason to believe your employees and other stakeholders have a different belief level than the rest of society, which polls indicate are in large and growing majorities concerned about the present and future effects of runaway climate change that can no longer be solved. This fifth task is one more way for you to take advantage of that and awaken your people into action. How? Climate-related trigger events happen all the time, mostly across three categories: a) climate impacts themselves (storms, floods, fires, droughts, heat or cold waves, others); b) policy and legal, as when a law is enacted or a judge rules on a related issue; and c) industry and corporate, when you announce a major corporate policy change or a trade association launches a related initiative. This task would have you assemble a fourth and final group, the Trigger Events Team, to serve like a war room or a rapid-reaction force to:
https://medium.com/predict/5-tasks-for-successful-corporate-adaptation-a49916ef131c
['Alexander Díaz']
2020-03-02 13:49:55.345000+00:00
['Management', 'Sustainability', 'Future', 'Climate Change', 'Predict Column']
The Biochemistry of Lust: How Hormones Impact Women’s Sexuality
Estrogen (Marilyn Monroe: The Venus Hormone) Estrogen holds court on the dance floor. She is having a ball flirting and dancing. Her ample backside swings with the rhythm of the music, while her satiny skin glows. Estrogen is a total package deal with a quick wit and a strong mind. But yeah, her physical allure doesn’t hurt either. She’s impossible to ignore. Her laugh is contagious, and her hourglass curves make all her dance partners weak in the knees. She’s not afraid to make a fool of herself ether, and she falls down a few times while dancing. But that’s okay, her bones are strong and resilient. She notices Testosterone checking her out by the buffet. Wow, he is soo gorgeous and sexy, he makes her all tingly. She can feel her panties getting moist. Um… Estrogen, the Marilyn Monroe of hormones, dominates the first half of a woman’s menstrual cycle and is opposed by progesterone in the second half. Estrogen comes in three forms: estradiol (E2), estriol (E3) and estrone (E1). Estradiol is the most biologically active hormone for premenopausal women, while estrone is more active after menopause. Estriol is primarily active during pregnancy (2). Remember what I said about hormones being shapeshifters? One of the most fascinating facts in human physiology has got to be the fact that estradiol, the hormone most associated with femininity, is synthesized from testosterone (16). Estrogen is responsible for more than just breasts and baby-making. It affects every part of a woman’s body and brain, and it has a profound impact on her sexual functioning. It is responsible for maintaining pelvic blood flow, the creation of vaginal lubrication, as well as the maintenance of genital tissue (17). When estrogen is in short supply, women struggle with diminished genital and nipple sensitivity, difficulty achieving orgasm, increased sexual pain, and inadequate lubrication (17). Women with low estrogen are at risk for vaginal atrophy, which has to be among the most delightful aspects of aging (NOT). Another issue I am intimately familiar with. As I moved deeper into the menopausal rabbit hole, I experience vaginal irritation, dryness, and constant UTIs, all of which were due to estrogen bidding me a fond farewell. When estrogen leaves the building, the vaginal lining (the epithelium) gets thinner, the vagina itself may shrink and lose muscle tone and elasticity. And as for those persistent UTIs that bedevil menopausal women like me, they are due to the increase in vaginal pH. When the vagina becomes more alkaline, it kills off good bacteria, leaving a woman a sitting duck for a number of vaginal and urinary tract infections. Remember this, a happy pussy is an acidic one (ideal pH 3.8–4.2). The normal level of estradiol in a menstruating woman’s body is around 50 to 400 picograms per milliliter (pg/mL). This fluctuates with the menstrual cycle. Below this threshold, and there is an increased risk for the problems mentioned above. When women are in menopause, estradiol levels are often as low as 10–20 (pg/mL) (17). Estrogen: The True Lady of Lust? Testosterone, the loud and proud androgen, is usually assumed to be the sexual mover and shaker for both men and women. Estrogen, it has been argued, just gives a woman a wet vagina, the motivation to use it comes from her testosterone. This is the view expressed by Theresa Crenshaw in The Alchemy of Love and Lust. In contrast to men, she argues that women have four sexual drives 1. Active (aggressive) 2. Receptive (passive) 3. Proceptive (seductive) and 4. 
Adverse (reverse). These drives are representative of our hormonal makeup. She differentiates along standard party lines and claims that testosterone fuels women’s active sex drive, while estrogen fuels the receptive and proceptive drives. According to Crenshaw, ever contrary progesterone doesn’t fuel anything but a nap (the adverse drive). However, some researchers believe that estrogen’s role is underestimated in female desire and that the conversion of testosterone to free estrogen in women might play a major role in female desire. (18). “Free” in this case means a hormone that is biologically active and available for our bodies to use. According to Emory professors, Cappelletti and Wallen, for most female mammals the most important hormone governing sexual behavior is estrogen. That would make human females rather weird and unique if our sexuality was testosterone-driven. Plus, research does show that estrogen alone is capable of increasing desire in women(19). Estrogen Replacement Mode of delivery (e.g., by mouth, or transdermal) is an important and possibly overlooked factor when looking into HRT. One major problem with oral estrogen’s like Premarin (aside from the fact they’re made of horse pee!) is that when estrogen is taken by mouth it raises levels of SHBG (sex hormone-binding globulin). SHBG is a protein secreted by the liver that binds both estrogen and androgens. It prefers androgens. This means that it will reduce free androgens and estrogens, both of which are associated with sex drive. In a randomized, controlled study of 670 women comparing transdermal estrogen therapy with oral (Premarin), it was found that transdermal estrogen improved sexual functioning according to scores on a self-report measure. Women who used horse pee (Premarin) showed no improvement in sexual functioning and presumably had to come up with some new hobbies (20). As a side note, I keep visualizing a poor, pregnant mare being badgered by some pharmaceutical rep going, “Just pee in the bucket Seabiscuit; we need the money!” But I digress… Bioidentical Hormone Replacement Women who are interested in HRT often opt for bioidentical hormones. They have become popular for a few reasons. In 2002, the WHI (Women’s Health Initiative) study dropped a bombshell on the world’s menopausal women and linked hormone replacement with a 26% increased risk of breast cancer and an increased risk of cardiovascular events and stroke. Within three months of published reports of the dire findings, prescriptions for hormone therapy (HT) dropped by 63% (21). Also, popular books like The Sexy Years by Suzanne Somers have promoted the use of compounded bioidenticals instead of FDA approved drugs. Compounded bioidentical hormone therapy (CBHT) is custom formulated by a compounding pharmacy and tailored to the individual. They are often perceived as safer and more natural. What Are Bioidentical Hormones? From my readings, this may be short-sighted. First up, let’s talk about what bioidentical hormones are. According to the Endocrine Society, bioidentical hormones are “compounds that have exactly the same chemical and molecular structure as hormones that are produced in the human body.” They are often plant-derived in comparison to the Premarin and Provera (used in the WHI study), which is a synthetic estrogen synthesized from conjugated horse urine and synthetic progestin respectively. Note that Premarin could be considered “natural” given the fact there’s nothing more natural than horse pee! 
However, it isn’t identical to what your body makes. Bioidentical progesterone is made from diosgenin that is derived from wild Mexican yam or soy, while bioidentical estrogen is often synthesized from soy. Both bioidenticals, like all hormone therapies, are extensively processed in a lab (22). The Endocrine Society’s definition is broad and doesn’t refer to the sourcing, manufacturing, or delivery method of bioidenticals. This definition can refer to both FDA approved HRT as well as non-FDA approved hormone replacement. There is no evidence that bioidenticals are safer than synthetic hormones. Nor, is there isn’t any evidence supporting CBHT as a better alternative. With CBHT there are issues regarding dosage, purity, and strength. According to an article in The Mayo Clinic Proceedings, “Compounded hormone preparations are not required to undergo the rigorous safety and efficacy studies required of FDA-approved HT and can demonstrate wide variation in active and inactive ingredients.” (21). There are several FDA approved bioidentical hormones that are on the market. They differ from CBHT in that they have some science behind them and they are carefully formulated and manufactured according to strict specifications (21). Is Hormone Therapy Safe? I think it depends on who you ask and what you read. It also depends on your particular situation. I recommend any woman interested in hormone replacement do some serious study on this issue. The WHI study scared the bejesus out of women, their doctors, and created a lot of hysteria. There were several issues with that study that are beyond the scope of this article. One book I recommend is Menopause: Change, Choice, and HRT by Australian physician Dr. Barry Wren. He goes into detail about the WHI study and its shortcomings, including the fact that the women who participated in the study were older (average age 63), smokers/former smokers, overweight/obese, and in poor health. There is a critical “window of opportunity” for women to go on HRT. It is recommended that women do it within ten years of their last period. Primarily, because going for many years without estrogen can cause permanent changes to the body that HRT could exacerbate. For example, estrogen helps prevent cholesterol from building up in your arteries. After you have been without it for a while, your arteries will likely have some damage. Taking an estrogen, particularly in oral form, increases the presence of liver proteins that cause blood to clot. This factor, combined with arthroscopic buildup, could lead to an increased risk of stroke or heart attack. But taking estrogen before arterial damage has occurred, and within the 10-year window of opportunity, might reduce your risk of heart attack or stroke (23). Estrogen: Points to Remember
https://kayesmith-21920.medium.com/the-biochemistry-of-lust-how-hormones-impact-womens-sexuality-574040b59ebe
['Kaye Smith Phd']
2020-05-01 02:43:28.540000+00:00
['Health', 'Science', 'Sexuality', 'Sex', 'Women']
Hands-on: Customer Segmentation
Knowing your customers is the foundation of any successful business. The better you understand their needs, their desires and wishes, the better you may serve them. That’s the reason why market or customer segmentation is so useful in the long run: You create profound knowledge about your customers, their characteristics and their behaviours to finally improve your business model, marketing campaigns, product features and many more… Hands-on: Customer Segmentation (Photo by Max McKinnon on Unsplash) In this article you will learn all necessary basics about customer segmentation and the application of an unsupervised learning method with the help of Python to finally build clusters for a customer sample dataset. This tutorial is set up in a way that you will succeed in identifying clusters with little to even no prior coding knowledge. Have Fun ! How will we segment our customers? We will start out by learning the basic theory about clustering and clustering with K-means. Afterwards the ingested theory will be applied to our sample customer segmentation dataset which we will firstly explore, secondly prepare and thirdly cluster our dataset with the help of K-means algorithm. High Level Process To segment our customer we are working with Python and its’ amazing open source libraries. First of all we use Jupyter Notebook, used as an open-source application for live coding and it allows us to tell better stories with our code. Furthermore we import Pandas, which puts our data in an easy-to-use structure for data analysis and data transformation. To make data exploration more graspable, we use Plotly to visualise some of our insights. Finally with Scikit-learn we will split our dataset and train our predictive model. Tech Stack To Build Segments Basics about clustering with K-Means While we distinguish between supervised and unsupervised learning, clustering belongs to the unsupervised learning algorithms and is probably considered to be the most important one. Machine Learning Overview Given a collection of unlabelled data, meaning the dataset is not tagged with a desired outcome. The goal is to identify patterns in this data. Clustering describes the process of finding structures where similar points are grouped together. Following that definition, a cluster is a collection of similar data points. Dissimilar data points shall belong to different clusters. Clustering There are various clustering algorithms identifying these patterns such as DBCSAN, Hierarchical Clustering or Expectation Maximisation Clustering. While each algorithm has its individual strengths, we are starting with K-means as one of the simplest clusterings algorithms. How does the K-Mean algorithm work? K-means belongs to the centroid-based cluster algorithms and assigns each object or datapoint to the nearest cluster center in such way that the squared distances from the clusters are minimised. “K” stands in this context for the amount of clusters, or more specifically cluster centroids. The objective is to minimise the within cluster sum of squares: Step 1: Initialisation As first step we have to choose the amount of centroids for our clustering algorithm. While a good choice can save a lot of effort, a bad one may result in missing out on natural clusters. But how can we choose the optimal number of clusters? For our purpose this will be done with the elbow method. An heuristic approach towards finding the right amount of clusters. 
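For reference, the "within cluster sum of squares" objective mentioned above appears to have been shown as an image in the original post and is not reproduced in this text. The standard form of the K-means objective it refers to (a reconstruction, not the article's own figure) is, for k clusters with centroids \mu_i and cluster members S_i:

$$\mathrm{WCSS} = \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^{2}$$

The elbow method described next simply computes this quantity for an increasing number of clusters and looks for the point where adding another cluster stops reducing it meaningfully.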
Elbow Method Example (Code Further Below) Recall that the basic idea of K-means clustering is to minimise the within cluster sum of square. It measures the compactness of the cluster and we want it to be as small as possible. For the elbow method the sum of square is calculated for a decreasing amount of clusters and plotted accordingly. We choose then the number of clusters were the sum of square does not change significantly — basically where we can see the “elbow” in our plot. Step 2: Building the Clusters Secondly we are determining the minimum distance for each datapoint to the nearest cluster centroid. No worries this has not to be done manually but will be solved by Python. It is just good to understand what the algorithm is basically doing repetitively. Step 3: Update & Iterate Thirdly the cluster means or centroids have to be updated. This is done until there are no more changes in the assignment of data points towards other centroids. While dividing the clustering process with K-Means in three simple steps sounds pretty straightforward, there are certain disadvantages we should be aware of. One is that K-Means is very sensitive towards outliers as they strongly influence the within cluster sum of squares. Therefore we should consider removing them before applying the algorithm. Second disadvantage is the random choice of cluster centroids with K-Means. This may leaves us ending up with slightly different results on different runs of the unsupervised learning algorithm, which is not optimal for a reproducible research approach. Nevertheless by understanding these weaknesses we can still apply K-Means especially when we want quick and practical useful results. The Dataset For the purpose of this project we are working with a publicly available dataset from Kaggle. The dataset includes some basic data about the customer such as age, gender, annual income, customerID and spending score. In this scenario we want to find out which customer segments show which characteristics in order to plan an adequate marketing strategy with individual campaigns for each segment. # for basic mathematic operations import numpy as np import pandas as pd # for visualizations import matplotlib.pyplot as plt import seaborn as sns data = pd.read_csv('../Clustering/Mall_Customers.csv') data.head(10) For better insights and unprepared datasets it is recommended to do an explorative data analysis, data cleaning and data preparation upfront. For the sole purpose of demonstrating K-Means and customer segmentation we will keep this to an absolute minimum and focus on our main objective. The Clustering — Elbow Method Once the dataset is loaded and cleaned, we can start clustering the dataset. In this case we will cluster initially according to Annual Income and Spending Score as our main objective is a marketing campaign targeting people with high income and willing to spend. To do so we have to select all rows and column 3 and 4. x = data.iloc[:, [3, 4]].values As previously described we have to find out which amount of centroids is the optimal amount to minimise the within cluster sum of squares. To do so we run our code from one to ten cluster with the help of a for loop. The result for each amount of clusters is then appended to the wcss list. 
from sklearn.cluster import KMeans wcss = [] for i in range(1, 11): km = KMeans(n_clusters = i, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0) km.fit(x) wcss.append(km.inertia_) plt.plot(range(1, 11), wcss, c="purple") plt.title('The Elbow Method', fontsize = 30) plt.xlabel('No of Clusters', fontsize = 20) plt.ylabel('WCSS', fontsize = 20) plt.show() To identify the optimum amount of centroids we have to look for the “elbow” by plotting each within cluster sum of squares value on the y-Axis and the amount of centroids on the x-Axis. Elbow Method It is found that after five clusters the wcss value is decreasing very marginally if adding more clusters. In this case we got what we want: The optimum amount of clusters seems to be five. The Clustering — Visualising K-Means What we want to do next is visualising our five cluster in order to identify our target customers and have an opportunity to present our results to colleagues and other stakeholders. To do so we run our K-Means algorithm and determine the clusters within Annual Income and spending score (the previously defined x). km = KMeans(n_clusters = 5, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0) y_means = km.fit_predict(x) With the prediction alone we cannot see much and have to use plotly to create a nice graph for our clusters. plt.scatter(x[y_means == 0, 0], x[y_means == 0, 1], s = 100, c = 'orangered', label = 'potential') plt.scatter(x[y_means == 1, 0], x[y_means == 1, 1], s = 100, c = 'darksalmon', label = 'creditcheck') plt.scatter(x[y_means == 2, 0], x[y_means == 2, 1], s = 100, c = 'goldenrod', label = 'target') plt.scatter(x[y_means == 3, 0], x[y_means == 3, 1], s = 100, c = 'magenta', label = 'spendthrift') plt.scatter(x[y_means == 4, 0], x[y_means == 4, 1], s = 100, c = 'aquamarine', label = 'careful') plt.scatter(km.cluster_centers_[:,0], km.cluster_centers_[:, 1], s = 200, c = 'darkseagreen' , label = 'centroid') sns.set(style = 'whitegrid') plt.title('K Means Clustering', fontsize = 30) plt.xlabel('Annual Income', fontsize = 20) plt.ylabel('Spending Score', fontsize = 20) plt.legend() plt.grid() plt.show() The visualisation allows us to clearly identify the five clusters. The five centroids are visible in a darkgreen. Our main target group, named as “target” and in gold color, has the highest spending score and annual income. Clustering with K Means Furthermore we can find four additional groups that may be interesting for us to approach. In this case we named them “potential”, “creditcheck”, “spendthrift” and “careful”. Well done — we did some very basic clustering to segment a customer dataset. What’s next ? For now we have segmented our customers according to Annual Income and Spending Score. But of course there are other factors that may influence your decision on which customers you want to target. In our example you could further investigate “Age” as a feature and see its impact on the clustering results. For businesses it is most common to segment their customers according to four different categories: Business Customer Segmentation Categories After expanding, exploring and defining the different customer segments be creative on how to use your gained knowledge. Optimise pricing, reduce customer churn, increase retention, improve your product, … there are endless opportunities. 
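Picking up the "What's next?" suggestion above, here is a minimal sketch of how the same pipeline could be extended with Age as a third feature. This is a hypothetical extension rather than part of the original tutorial: it assumes Age sits in column 2, right before the income and spending-score columns the article already uses, and it adds feature scaling, which matters once the features live on different ranges.

from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# assumption: columns 2-4 of the dataframe are Age, Annual Income and Spending Score
features = data.iloc[:, 2:5].values

# K-Means relies on Euclidean distance, so bring all features onto a comparable scale
scaler = StandardScaler()
scaled = scaler.fit_transform(features)

# re-run the elbow method on the scaled, three-dimensional feature set
wcss_3d = []
for i in range(1, 11):
    km = KMeans(n_clusters=i, init='k-means++', n_init=10, random_state=0)
    km.fit(scaled)
    wcss_3d.append(km.inertia_)

# after picking the elbow from the wcss_3d values (5 is only a placeholder here),
# fit the final model and label every customer with a segment
km = KMeans(n_clusters=5, init='k-means++', n_init=10, random_state=0)
data['segment'] = km.fit_predict(scaled)

# cluster centres converted back to original units: average age, income and score per segment
print(scaler.inverse_transform(km.cluster_centers_))

Converting the cluster centres back to original units at the end is a quick way to describe each segment in plain business terms before deciding which one to target.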
Now it’s up to you :) ************************************************************** Articles related to this one: Hands-on: Predict Customer Churn Applied Machine Learning For Improved Startup Valuation Hands-on: Setup Your Data Environment With Docker Eliminating Churn is Growth Hacking 2.0 Misleading with Data & Statistics ***************************************************************
https://towardsdatascience.com/hands-on-customer-segmentation-9aeed83f5763
[]
2020-12-29 14:29:05.675000+00:00
['Unsupervised Learning', 'Customer Segmentation', 'Python', 'Clustering', 'Data Science']
9 Edits That Will Improve Your LinkedIn Profile
9 Edits That Will Improve Your LinkedIn Profile Updating your LinkedIn profile to increase inbound leads and elevate yourself into a thought leader. LinkedIn isn’t just for finding new jobs, nor is it only a place to float in a state of passively looking. It’s a platform that can be leveraged to reduce CTA’s, increase brand awareness, and elevate oneself into a thought leader, too. Employees are the first paid ambassadors of any brand. The employee should want their employer to succeed and in their capacity leverage any tools that might drive business to their employer. LinkedIn, the professional networking platform, is a tool that has become a premier source of organic impressions and lead generation for those on it. Even those leading sole proprietorships, or personal brands, can leverage LinkedIn. User profiles on LinkedIn are digital resumes and represent the individual as much as their employer, and with a few adjustments, new copy, and strategic backlinks, every profile in a company can be improved. Let’s begin. Write an engaging headline Before anyone lands on your profile, they’ll search for you or see your comment on a post in their timeline. Users will see your name, the level of connection you are to them, and your headline. LinkedIn profile headlines need to be enticing in order to drive profile visits. Let’s consider the process of a user seeing your image, connection level, and headline in their timeline, and then clicking to visit your profile an “open”. You want a strong open rate, and a well-crafted headline will improve yours. There is room for about 74 characters in a LinkedIn profile headline. Use these wisely to say what you do and who you do it for. There are various recipes to follow when crafting yours, my suggestion is to use the professional verb of your work, the product or service you offer, and a target audience. For example, my headline is 65 characters long: Creating & curating content that younger generations engage with. If I worked on a running sneaker it might be Making Running On All Lands More Comfortable or if I was at a plant-based meat substitute it might be Making Your Plants Taste Like Meat. The headline says what you do, not just your title, and is to be crafted like the subject line of an email you want people to read. Here’s what it will look like in search and timeline. Design a new cover image Your headline worked, and users are beginning to visit your profile, which increases your open rate. Great job. The first place that the eyes of a LinkedIn profile visitor land is the 1440 x 425 px banner image at the top of your profile. What’s yours look like? The LinkedIn banner image is a first impression opportunity to reinforce you, your brand, and your business. For best practices, if you are part of a company, ask your marketing or content team to provide you with a branded LinkedIn cover image to use. Ideally, a company will create 4–5 of these and offer them from a menu in a shared google drive for employees to use at will. This way employees can continue to refresh their page, and marketing teams can update the drive with applicable images. If you are not part of a company, open an account on Canva, and design your own cover image. An example I came across was on the profile of employees at the non-alcoholic beer company Athletic Brewing. It’s sleek, the awards make me believe it’s good, and it tells me what their product is. It’s enough to keep me on the page and scroll down. 
Bolster your “About” section Are you familiar with what bounce rates are? A bounce rate represents the percentage of visitors who enter a website and then leave rather than continuing to view other pages within the same site. On LinkedIn, consider your bounce rate being if profile visitors scroll down and discover all that you do, or if they leave your page. After landing on your page and seeing your cover image, profile visitors will scroll down past your profile picture, headline, and location, and arrive at your “About” section. “About” sections are opportunities to be human, or as human as one can be on a screen. It’s tempting to include your full bio here but don’t. Keep it short, you don’t want to overwhelm someone. You want to usher their scroll to what comes next (more on that in a second). A great example of a brand focused “About” section is that of NOOMA Founder Jarred Smith, shared below with my LinkedIn “About” section. Feature relevant content Many simply don’t use the “Featured” content section, and that decision is a major miss. The LinkedIn profile “Featured” section is a free lead generation and backlink machine. It’s the first opportunity to intentionally limit your bounce rate by directing a user to a destination of your choice. At most, 2.5 pieces of featured content will be visible on your profile and best practice is to feature a minimum of 4 pieces of content. As for the types of content, there is flexibility here but I would focus on an article you wrote, videos you’ve produced, links to your website or portfolio, or a piece of content that featured you. This section is an opportunity to elevate yourself in a thought leader and can be used as a digital hype sheet. At the moment, I am featuring an article I wrote that went viral, a link to my menu of best writings, a blog full of writing tips, and a link to a panel I moderated at SXSW. Proudly share your experiences The “Experience” section of a LinkedIn profile is where you share all that you’ve done professionally. Unlike the headline, this is where you include your professional working title as outlined by your employer. Each experience on your profile offers space to include the specifics of your role. Here, include the details of your day, your accomplishments, the success you had, the stack you used, the brands and projects you worked on, or any other part of that experience you are proud of and elevates you. Do this for each position you have held, and for past experiences include why you moved on from that company. Transparency is great and activates the Law of Candor which will disarm profile visitors (bounce rate decreased!) Oh! I almost forgot. When you add an experience and press save, refresh your page to make sure the company icon is populated with the correct image. An empty square is lazy, “Your resume says digital-savvy but you can’t even add a logo?” Backlink each of your experiences Remember when you added 4 pieces of content in the “Featured” section and your inbound traffic grew? Well, you can direct traffic from each experience on your LinkedIn profile as well. The magic number, visually, is 2, and you can link each piece by clicking the pencil in the upper right-hand section of your experience and scrolling down to the “Media” section where the “Link” button will be. As to which type of content you should be linking here, my suggestion is that one piece direct traffic to your business’s top performing landing page and that the second piece be the best piece of press your company has gotten. 
To know which landing page is your company’s top-performing, ask your marketing team where they’d like you to direct traffic to from your LinkedIn. This may be a specific case study, a video embedded on the website, or a social media channel so that the user's digital profiles be added to the companies audiences for retargeting campaigns (Yep, digital marketers, each of your employees can funnel thousands of profiles into your audiences). As for which piece of press, if you don’t know which piece, just ask. But then set up google alerts for your company so you stay in-the-know. Start using recommendations I’m not too high on the “Skills & Endorsements” section of a LinkedIn profile. There’s a very low investment of time required to endorse someone and the available categories are often misaligned from the person and their work. I’d rather discover you through your About, Featured, and Experience sections, which I engaged with earlier on my profile visit. What I do trust are recommendations The “Recommendations” section is a bit more out of reach, requires a bit more time and thought, and is underused, so when I come across a profile full of positive ones, a level of competency is communicated immediately. To get started, think of five co-workers and five people you have a professional relationship with that exist outside your company, and politely ask them to recommend you. You can even kick things off by recommending each of them first! A great habit to build is to recommend people at the completion of the projects you work on together, inside or outside the company. It’s a good look and is a very selfless way to express pubic gratitude and appreciation of another. Follow your company’s LinkedIn page Click on the company name listed in your current experience and it will take you to your company’s LinkedIn page (this is a great way to test that you added your experience correctly). In the bottom left-hand corner of the page’s header will be a button that invites you to Follow, press it. Add each of your teammates Your profile is looking good, now it’s time to go show it off. Start by adding all of the members of your company. Click on the company name listed in your current experience and it will take you to your company’s LinkedIn page. In the bottom right-hand corner of the page’s header will be text that reads See all # employees on LinkedIn → Click on that and begin connecting with your teammates. That didn’t take long right? Maybe one hour? Now that your profile has been set up as a net to capture the interest of all who visit it, I want to offer a few quick tips for content sharing. Slack → open a company-wide #linkedincontent Slack channel dedicated to serving as a menu of content for employees to share on LinkedIn. Include case studies, product launches, product updates, blog posts, podcast episodes, new services, and press. Google Alerts → Set up Google alerts for your company and your category to stay in-the-know. The links included can help to automate sharing and be used to populate your company-wide #linkedincontent Slack channel. Tags → For every post on LinkedIn, tag your company using the @ function. Hashtags → For every post on LinkedIn, use the most relevant 1–2 hashtags. Something to consider is who views the hashtag. For example, if you work in content creation for the consumer packaged goods industry don’t use the #creative or #content hashtags, use #CPG because leaders and followers of the space are following that hashtag. 
Message me on LinkedIn if you have any questions, and good luck! The difference between Seth Godin, The Morning Brew, and me? I respect your inbox, curating only one newsletter per month — Join my behind-the-words monthly newsletter to feel what it’s like to receive a respectful newsletter.
https://medium.com/the-post-grad-survival-guide/9-edits-that-will-improve-your-linkedin-profile-966cab9316bd
['Richie Crowley']
2020-07-17 06:41:02.653000+00:00
['Social Media', 'Business', 'Marketing', 'Creativity', 'Work']
Podcast: How To Handle Success & the Challenges of a Growing R&D Team — Karin Moscovici (Hebrew)
How To Handle Success & the Challenges of a Growing R&D Team — Karin Moscovici (Hebrew) There are many technological challenges at a scaling startup, like architecture changes, implementing new technologies, and more. Listen to Karin Moscovici, our VP R&D, on what it's like to manage a growing R&D organization and create a technological culture. Recorded as part of the Osim Tochna podcast — Click here to hear the full episode.
https://medium.com/riskified-technology/podcast-how-to-handle-success-the-challenges-of-a-growing-r-d-team-karin-moscovici-hebrew-840ad3da8a65
['Riskified Technology']
2020-10-04 14:55:45.615000+00:00
['Development Methods', 'Managment', 'Development', 'Podcast', 'Engineering']
Three reasons why you need a Log Aggregation Architecture today
Three reasons why you need a Log Aggregation Architecture today Log aggregation is no longer a commodity but a critical component in container-based platforms Photo by Olav Ahrens Røtne on Unsplash Log management doesn't seem like a very exciting topic. It's not the kind of topic that makes you say: "Oh! Amazing! This is what I've been dreaming about my whole life". No, I'm aware it isn't fancy, but that doesn't make it less critical than the other capabilities your architecture needs to have. Since the beginning of time, we've used log files as the single trustworthy data source when it comes to troubleshooting our applications, finding out what failed in a deployment, or investigating anything else going on in a computer. The procedure was easy: launch "something"; "something" fails; check the logs; change something; repeat. And we've been doing it that way for a long, long time. Even with more robust error handling and management approaches, such as an audit system, we still go back to the logs when we need the fine-grained detail about an error: a stack trace, more detail than the entry inserted into the audit system, or more data than just the error code and description returned by a REST API. Systems started to grow and architectures became more complicated, but even so, we end up with the same method over and over. You're aware of log aggregation architectures like the ELK stack, commercial solutions like Splunk, or even SaaS offerings like Loggly, but you think they're just not for you. They're expensive to buy or expensive to set up, you know your ecosystem very well, and it's easier to just jump onto a machine and tail the log file. You probably also have your own toolbox of scripts to do this as quickly as anyone can open Kibana and search for an instance ID to see the error for a specific transaction. OK, I need to tell you something: it's time to change, and I'm going to explain why. Things are changing, and IT and all the new paradigms rest on some common ground: You're going to have more components, each running in isolation with its own log files and data. Deployments will be more frequent in your production environment, which means things are going to go wrong more often (in a controlled way, but more often). Technologies are going to coexist, so logs are going to be very different in terms of patterns and layouts, and you need to be ready for that. So, let's discuss the three arguments that I hope will make you think differently about log management architectures and approaches. 1.- Your approach just doesn't scale Your approach works fine for traditional systems. How many machines do you manage? 30? 50? 100? And you're able to do it quite well. Now imagine a container-based platform for a typical enterprise. An average number could be around 1,000 containers just for business purposes, not counting architecture or basic services. Are you ready to go container by container to check 1,000 log streams to find the error? Even if that's possible, are you going to be the bottleneck for the growth of your company? How many container logs can you keep track of? 2,000? As I said at the beginning, that just doesn't scale. 2.- Logs are not there forever Now, having read the first point, you're probably saying to the screen: come on! I already know the logs aren't always there: they get rotated, they get lost, and so on.
Yeah, that’s true, this is even more important in cloud-native approach. With container-based platforms, logs are ephemeral, and also, if we follow the 12-factor app manifesto there is no file with the log. All log traces should be printed to the standard output, and that’s it. And where the logs are deleted? When the container fails.. and which records are the ones that you need more? The ones that have been failed. So, if you don’t do anything, the log traces that you need the most are the ones that you’re going to lose. 3.- You need to be able to predict when things are going to fail But logs are not only valid when something goes wrong are adequate to detect when something is going to be wrong but to predict when things are going to fail. And you need to be able to aggregate that data to be able to generate information and insights from it. To be able to run ML models to detect if something is going as expected or something different is happening that could lead to some issue before it happens. Summary I hope these arguments have made you think that even for your small size company or even for your system, you need to be able to set up a Log Aggregation technique now and not wait for another moment when it will probably be too late.
https://medium.com/dev-genius/three-reasons-why-you-need-a-log-aggregation-architecture-today-e285d18bb1ef
['Alex Vazquez']
2020-07-02 15:53:34.675000+00:00
['Cloud Computing', 'Programming', 'Software Engineering', 'Software Development']
How to Become a DevOps Engineer in 2020
DevOps Practices Now that we’ve gone over what DevOps stands for and what some of its related benefits are, let’s discuss some DevOps practices. A thorough understanding of DevOps methodologies will help clear any lingering queries you may have. That’s not to mention that it will add to your knowledge and come in handy in interviews (which we’ll talk about later). Continuous integration One of the biggest problems resulting from teams working in isolation is that merging code when work is completed. It’s not only challenging but also time-consuming. That’s where continuous integration (CI) can help big time. Developers generally make use of a shared repository (using a version control system such as Git.) with continuous integration. The fact that a continuous integration service simultaneously builds and runs tests on code changes makes it easier to recognize and handle errors. In the long run, continuous integration can help boost developer productivity, address bugs and errors faster, and it can help speed up updates. Continuous delivery Evolution forged the entirety of sentient life on this planet using only one tool: the mistake. — Westworld Robert Ford may have made some critical errors in Westworld, but the man does have some great lines. And he makes a great point about evolution. Speaking of evolution, many people consider continuous delivery (CD) as the next evolutionary step of CI because it pushes further the development of lifecycle automation. CD is all about compilation, testing, and the staging environment. This stage of the development lifecycle expands on CI by extending code changes to a testing environment (or a production environment) after the build stage. If employed correctly, CD can help developers finetune updates by thorough testing across multiple dimensions before the production stage. Continuous Delivery allows developers to run tests such as UI testing, integration testing, and load testing, etc. Microservices Microservices are to software design what production lines are to manufacturing. Or, to put it more verbosely, microservices is a software design architecture that takes a hammer to monolithic systems. Microservices allows applications to be built altogether in one big code repository. Each application consists of multiple microservices and every service is tweaked to excel at one specific function. For example, let’s look at how Amazon decided to move to microservices. Once upon a time, when Amazon wasn’t the behemoth it is today, their API served them just fine. But as their popularity grew so did their need for a better application program interface. Amazon decided to get into microservices. Now, instead of a problematic two-tiered architecture, Amazon has multiple services — one that deals with orders, one service that generates their recommended buys list, a payment service, etc. All these services are actually mini-applications with a single business capability. Infrastructure as code Thanks to technological innovations, servers and critical infrastructure no longer function the way they did a decade ago. Now, you have cloud providers like Google, that manage business infrastructure for thousands upon thousands of customers in huge data warehouses. Unsurprisingly, the way engineers manage infrastructure today is way different than what went on previously. And, Infrastructure as Code (IaC) is one of the practices that a DevOps environment may apply to handle a shift in scale. 
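To give a flavour of what "infrastructure as code" means before the fuller explanation that follows, here is a deliberately simplified, tool-agnostic sketch in Python. Real IaC is normally done with tools such as Terraform, CloudFormation or Pulumi; this toy version only illustrates the core idea of declaring a desired state in code and letting a program work out how to reconcile reality with it.

# Toy illustration of the IaC idea: desired state declared as data,
# plus a planning step that computes what has to change.
desired_state = {
    "web-server": {"count": 3, "size": "medium"},
    "database": {"count": 1, "size": "large"},
}

current_state = {
    "web-server": {"count": 2, "size": "medium"},
    # "database" does not exist yet
}

def plan_changes(desired, current):
    """Compare desired and current infrastructure and return the actions needed."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"create {name} ({spec['count']} x {spec['size']})")
        elif current[name] != spec:
            actions.append(f"update {name} to {spec}")
    for name in current:
        if name not in desired:
            actions.append(f"destroy {name}")
    return actions

for action in plan_changes(desired_state, current_state):
    print(action)  # e.g. "update web-server to {'count': 3, 'size': 'medium'}"

Because the desired state lives in code, it can be version-controlled, reviewed and tested just like application code, which is exactly the point the next paragraphs develop.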
Under IaC, infrastructure is managed using software development techniques and code (such as version control, etc). Developers can interact with infrastructure programmatically thanks to the cloud’s API-driven model. This allows engineers to handle infrastructure the way they’d tackle application code. This is important because it allows you to test your infrastructure the same way you would test your code. With IaC at the helm, your system administrators don’t have to stress about issues like the webserver not connecting to the database, etc. What’s more, IaC can help businesses auto-provision and shape abstraction layers so that developers can go on building services without needing to know the specific hardware, GPU, firmware, and so on, if they have a DevOps team that’s developing infrastructure to push automation forward. Picture this: Big-time car manufacturers like Mercedes Benz, BMW, and Audi all want to get their hands on the latest in-car experience technologies, right? But if these companies want to ship new services and products they’re going to struggle with the fact everyone on the road has different hardware. Unless one fine day, the powers that be decide to have universal hardware, edge case devices will continue to act as roadblocks when it comes to development. However, this is where a solid DevOps team can help because they can auto-provision abstraction layers to automate infrastructure services. By solving edge-case challenges in the cloud, a DevOps team can help auto-manufacturers by cutting down costs, and help lessen the burden and strain on developers. Configuration management Configuration Management (CM) is important in a DevOps model to encourage continuous integration. It doesn’t matter if you’re hosted in the cloud or managing our systems on-premises, implementing configuration properly can secure accuracy, traceability, and consistency. When system administrators use code to automate the operating system this leads to the standardization of configuration changes. This kind of regularity saves developers from wasting time manually configuring systems or system applications. Policy as code Organizations that have the benefit of infrastructure and configuration codified with the cloud also have the added advantage of monitoring and enforcing compliance at scale. This type of automation allows organizations to oversee changes in resources efficiently, and it allows security measures to be enforced in a disseminated manner. Monitoring and logging Monitoring metrics can help businesses understand the impact of application and infrastructure performance on end-user experience. Analyzing and categorizing data and logs can lead to valuable insights regarding the core causes of problems. Look at it like this — if services are to be made available 24/7, active monitoring becomes exceedingly important as far as update frequency is concerned. If you’re scrambling towards code release, you know that it isn’t humanly possible to check all your blind spots. Why? Because not every problem pops up in the user interface. Some bugs work like Ethan Hunt to open security holes, others reduce performance, and then there are the wastrel-type bugs that squander resources. On the other hand, the generation of containers and instances can make log management feel like finding a needle in a haystack of needles — unpleasant. The sheer amount of raw data to wade through can make finding meaningful information very difficult. 
But if you have monitoring systems, you can depend on the metrics to alert the team about any type of anomaly rearing its head across cloud services or applications. Also, monitoring metrics can help businesses understand the impact of application and infrastructure performance on end-user experience. Logging can help DevOps teams create user-friendly products or services, or to push continuous integration/delivery forward. Applied together — monitoring and logging can not only help a business get closer to its customers, but they can also help a business understand its own capacity and scale. For instance, almost all businesses rent out a certain amount of cloud space from cloud providers like AWS, Azure, or even Google Cloud throughout the year. But, if a company isn’t aware of the fact that its capacity can fluctuate due to peak seasons or holidays, or if its team isn’t prepared to handle the ups and downs by creating provisioning layers then things can get pretty ugly — like a website crash. Communication and collaboration One of the fundamental cultural aspects of DevOps is communication and collaboration. DevOps tooling and automation (of the software delivery process) focuses on creating collaboration by combining the processes and efficiencies of development and operations. In a DevOps environment, all teams involved work to build cultural norms relating to information sharing and facilitating communication via project tracking systems, chat applications, and so on. This allows quicker communication between developers and helps bring together all parts of an organization to accomplish set goals and projects.
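Tying back to the monitoring and logging discussion above, here is a small, self-contained sketch (an illustration of the idea, not a production monitoring setup) of the kind of metric-based alerting described there: aggregate error counts per service from structured log records and flag anything that crosses a threshold.

from collections import Counter

# structured log records as a monitoring pipeline might receive them (made-up sample data)
log_records = [
    {"service": "checkout", "level": "ERROR"},
    {"service": "checkout", "level": "INFO"},
    {"service": "search", "level": "ERROR"},
    {"service": "checkout", "level": "ERROR"},
]

ERROR_THRESHOLD = 2  # alert when a service exceeds this many errors in the window

def find_anomalies(records, threshold):
    """Count ERROR-level records per service and return the services at or over the threshold."""
    errors = Counter(r["service"] for r in records if r["level"] == "ERROR")
    return {service: count for service, count in errors.items() if count >= threshold}

for service, count in find_anomalies(log_records, ERROR_THRESHOLD).items():
    print(f"ALERT: {service} logged {count} errors in the current window")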
https://medium.com/swlh/how-to-become-an-devops-engineer-in-2020-80b8740d5a52
['Shane Shown']
2020-09-29 19:36:21.205000+00:00
['Cloud Computing', 'DevOps', 'Software Development', 'Programming', 'Engineering']
Marketing AI Institute CEO Paul Roetzer on Superpowered Manipulation
Audio + Transcript Paul Roetzer: Most marketers still don’t even know what it is. So if you don’t understand the superpower you’ll have, how could you possibly be planning for how to not use it for evil? James Kotecki: This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. My guest today is the founder and CEO of the Marketing Artificial Intelligence Institute, Paul Roetzer. Thanks so much for being on Machine Meets World. Paul Roetzer: Absolutely, man. Looking forward to the conversation, I always enjoy talking with you. James Kotecki: So when people hear marketing and they hear artificial intelligence, what do people think that you’re up to? Paul Roetzer: Well, the average marketer, I think, believes it’s just too abstract to care. I mean, that’s our biggest challenge right now is making marketers care enough to take the next step, to ask the first question about what is it actually and how can I use it? So I think a lot of times they just ignore it because it seems abstract or sci-fi. James Kotecki: And the Institute is an educational endeavor at its core, right? It’s trying to convince marketers to use AI in different ways across the different types of marketing that they do? Paul Roetzer: Yeah. We see our mission as making AI approachable and actionable. So it’s, we’re marketers trying to make sense of AI and make it make sense to other marketers. We’re not trying to talk to the machine learning engineers or the data scientists. We’re trying to make the average marketer be able to understand these things and apply it immediately to their career and to their business. James Kotecki: And what’s your scope? What’s your definition of AI? Paul Roetzer: The best definition I’ve seen is Demis Hassabis, who’s the co-founder and CEO of DeepMind, calls AI the science of making machines smart. And I just have always gravitated to that definition because I think it really simplifies it, meaning, machines know nothing. The software, the hardware we use to do our jobs, don’t know anything natively, they’re programmed to do these things. There’s a future for marketing where humans don’t have to write all the rules. That the machines will actually get smarter and that there’s a science behind making marketing smarter. And that’s what we think about marketing AI as. James Kotecki: What marketing technologies are you excited about to come to light in 2021? Paul Roetzer: We look at three main applications of AI: language, vision, and prediction. What you’re trying to do with AI is give machines human-like abilities — of sight, of hearing, of language generation. And so language in particular has just a massive potential within marketing. Think about all the places that you generate language, generate documents that summarize information, create documents from scratch, write emails, like it’s just never ending. And I think you’re going to see lots and lots of companies built in the space that focus explicitly on applications of language generation and understanding. James Kotecki: I looked at the history of the Institute, and it traces back about five years ago… Paul Roetzer: Mmm-hmm. James Kotecki: …to when you were thinking about how to automate the writing of blog posts. And five years ago, that wasn’t really possible, but now, this year, GPT-3 from OpenAI is a technology that looks like we’re either very close or already there to the point where a machine can convincingly write, from almost scratch, narratives, articles, blog posts, et cetera. 
What do you think of that? And what do you think is next if that kind of initial dream has maybe been achieved? Paul Roetzer: So there have definitely been major advances in the last even 18 months. So first we had GPT-2 was the big one that hit the market. I think it was like February of ’19 maybe it was when that surfaced. And then just this year we had GPT-3, which really took it to the next level of this ability to create free-form text from an idea or a topic or a source. And it really is moving very, very quickly. And I think in 2021, 2022, you’re going to start seeing lots and lots of applications of language generation from models like GPT-3, where the average content marketer or email marketer will be using AI-assisted language generation. James Kotecki: People always say when technology like this comes up, “There’s still going to be a place for human creativity. Don’t worry. We still need humans in the mix.” At what point do marketers look at this and start getting scared and saying, “You keep saying that, but the machines keep getting more and more creative.” Paul Roetzer: I am a big believer that the net positive of AI will be more jobs and it will create new opportunities for writers and for marketers. But I’m realistic that things I thought 24 months ago a machine couldn’t do, it’s doing now. And that’s part of why I think it’s so critical that marketers and writers are paying attention because the space is changing very quickly. The tools that you can use to do your job are changing very quickly. It’s going to close some doors. There are going to be some roles or some tasks that writers, marketers do today that they won’t need to do. But it’s also going to open new ones. And I think it’s the people who are at the forefront of this who have a confidence and a competency around AI that are going to be the ones that find the new opportunities and career paths — may even be the ones that build the new tools, the application of it for the specific thing they do. James Kotecki: Do you think marketers, at least marketers who do get it, have an obligation, not just to use AI effectively and ethically, but to use their skills to shape the public perception of AI? Paul Roetzer: That’s what I always tell people. It’s like, think about it. Like, why does Google have Google AI? Why does Microsoft advertise Microsoft AI? They’re all trying to get the average consumer to not be afraid of this idea, this technology because it is so interwoven in every experience they have as consumers now. They don’t realize it though. These big tech companies need consumers to be conditioned to accept AI. And I think in the software world for marketing, you’re going to see a similar movement where we need the users of the software to understand how to use it with their consumers, but also how to embrace what it makes possible within their jobs. James Kotecki: Are there ethical guidelines or any kind of ethical consensus out there for how marketers need to be approaching some of this stuff? I mean, if you took ethics out of it, you could use this technology in ways that were at best amoral and at worst unethical. So what are some guidelines that are actually shaping people’s decision-making here? Paul Roetzer: There aren’t any universal standards that we’re aware of. There is a big movement around this idea of AI for good, at a larger level. So you are seeing organizations created who are trying to integrate ethics and remove bias from AI at a larger application in society and in business. 
Specific to marketing though, it’s really at an individual corporation level. So are companies developing their own ethics guidelines for how they’re going to use data and how they’re going to use the power that AI gives them to reach and influence consumers? And that part’s not moving fast enough. There’s not enough conversation around that because again, most marketers still don’t even know what it is. So if you don’t understand the superpower you’ll have, how could you possibly be planning for how to not use it for evil? And so there’s these steps we’re trying real hard to move the industry through so we can get to the other side of how do we do good with this power that we’re all going to have. James Kotecki: When you look at the totality of AI in marketing, from your perch here, do you feel like you are fighting against trends that are taking things in the wrong direction? Do you feel overall optimistic about the state of things? Paul Roetzer: I feel optimistic, but I do worry a lot about where it could go wrong. And I think if you look at politics, I’m not going to bring in any specific politics into this, but if you look at the political realm, this isn’t new stuff. They’ve been trying to manipulate behavior on every side of it, in every country. It’s all about trying to manipulate people’s views and behaviors. And this is very dangerous stuff to give people like that whose job is to manipulate behaviors. And so if you’re a marketer and you’re so focused on revenue or profits or goals over the other uses of it, you’re going to have the ability to manipulate people in ways you did not before. And I do worry greatly that people will use these tools to take shortcuts, to hack things together, and to affect people in ways that isn’t in the best interest of society. James Kotecki: I’m imagining a bumper sticker for a marketer that says, “You say manipulate human behavior like it’s a bad thing.” Right? Paul Roetzer: I could see that, yeah. James Kotecki: Because the context, even the word “manipulate” has a negative connotation, but it is, if you just look at its neutral meaning, exactly what marketing is trying to accomplish. As we wrap up here, what are your hopes for marketing in 2021 when it comes to AI? Paul Roetzer: I just want marketers to be curious. To understand that there is a chance to create a competitive advantage for themselves and for their companies. And to do that, you just need to know that AI creates smarter solutions, that if you’re going to do email or content marketing or advertising, don’t just rely on the all-human all the time way you’ve previously done it. There are tools that are figuring things out for you that are making you a better marketer by surfacing insights, making recommendations of actions, assessing creative. There’s lots and lots of ways you can use AI. And I just think if people take the step to find a few to try in the coming twelve months, they’ll realize that there’s this whole other world of marketing technology out there that can make them better at their job. James Kotecki: Well, thanks for illuminating us on that. Paul Roetzer, founder and CEO of the Marketing Artificial Intelligence Institute. Thank you for being on Machine Meets World. Paul Roetzer: Absolutely, man. Enjoyed it. James Kotecki: And thank you so much for watching and/or listening, please like share, subscribe. You know, give the algorithms what they want. You can also email us at [email protected]. I’m James Kotecki And that is what happens when Machine Meets World.
https://medium.com/machine-meets-world/marketing-ai-institute-ceo-paul-roetzer-on-superpowered-manipulation-59c05fbf501a
['James Kotecki']
2020-12-16 15:41:39.245000+00:00
['Business', 'Ethics', 'Artificial Intelligence', 'Technology', 'Marketing']
10 Algorithms To Solve Before your Python Coding Interview
10 Algorithms To Solve Before your Python Coding Interview In this article I present and share the solution for a number of basic algorithms that recurrently appear in FAANG interviews Photo by Headway on Unsplash Why Practicing Algorithms Is Key If you are relatively new to Python and plan to start interviewing for top companies (among which FAANG) listen to this: you need to start practicing algorithms right now. Don’t be naive like I was when I first started solving them. Although I thought that cracking a couple of algorithms every now and then was fun, I never spent much time practicing and even less time implementing a faster or more efficient solution. I told myself that, at the end of the day, solving algorithms all day long was a bit too nerdy, that it didn’t really have a practical use in the real daily work environment, and that it would not have brought much to my pocket in the longer term. “Knowing how to solve algorithms will give you a competitive advantage during the job search process” Well…I was wrong (at least partially): I still think that spending too much time on algorithms without focusing on other skills is not enough to make you land your dream job, but I understood that, since complex problems present themselves in everyday work as a programmer, big companies had to find a standardized process to gather insights on the candidate’s problem-solving and attention-to-detail skills. This means that knowing how to solve algorithms will give you a competitive advantage during the job search process, as even less famous companies tend to adopt similar evaluation methods. There Is An Entire World Out There Pretty soon after I started solving algorithms more consistently, I found out that there are plenty of resources out there to practice, learn the most efficient strategies to solve them and get mentally ready for interviews (HackerRank, LeetCode, CodingBat and GeeksForGeeks are just a few examples). Together with practicing the top interview questions, these websites often group algorithms by company, embed active blogs where people share detailed summaries of their interview experience and sometimes even offer mock interview questions as part of premium plans. For example, LeetCode lets you filter top interview questions by specific companies and by frequency. You can also choose the level of difficulty (Easy, Medium and Hard) you feel comfortable with. There are hundreds of different algorithmic problems out there, meaning that being able to recognize the common patterns and code an efficient solution in less than 10 minutes will require a lot of time and dedication. “Don’t be disappointed if you really struggle to solve them at first, this is completely normal” Don’t be disappointed if you really struggle to solve them at first, this is completely normal. Even more experienced Python programmers would find many algorithms challenging to solve in a short time without adequate training. Also, don’t be disappointed if your interview doesn’t go as you expected when you have only just started solving algorithms. There are people who prepare for months, solving a few problems every day and rehearsing them regularly, before they are able to nail an interview. To help you in your training process, below I have selected 10 algorithms (mainly around String Manipulation and Arrays) that I have seen appearing again and again in phone coding interviews. The level of these problems is mainly easy, so consider them a good starting point (a few sample solutions are sketched after the list).
Please note that the solution I shared for each problem is just one of the many potential solutions that could be implemented, and often a BF (“Brute Force”) one. Therefore feel free to code your own version of the algorithm, trying to find the right balance between runtime and employed memory. String Manipulation 1. Reverse Integer Output: -132 543 A warm-up algorithm that will help you practice your slicing skills. In effect the only tricky bit is to make sure you are taking into account the case when the integer is negative. I have seen this problem presented in many different ways but it usually is the starting point for more complex requests. 2. Average Words Length Output: 4.2 4.08 Algorithms that require you to apply some simple calculations using strings are very common, therefore it is important to get familiar with methods like .replace() and .split() that in this case helped me remove the unwanted characters and create a list of words, the length of which can be easily measured and summed. 3. Add Strings Output: 2200 2200 I find both approaches equally sharp: the first one for its brevity and the intuition of using the eval() method to dynamically evaluate string-based inputs, and the second one for the smart use of the ord() function to re-build the two strings as actual numbers through the Unicode code points of their characters. If I really had to choose between the two, I would probably go for the second approach as it looks more complex at first but it often comes in handy in solving “Medium” and “Hard” algorithms that require more advanced string manipulation and calculations. 4. First Unique Character Output: 1 2 1 ### 1 2 1 Also in this case, two potential solutions are provided and I guess that, if you are pretty new to algorithms, the first approach looks a bit more familiar as it builds a simple counter starting from an empty dictionary. However, understanding the second approach will help you much more in the longer term, and this is because in this algorithm I simply used collections.Counter(s) instead of building a chars counter myself and replaced range(len(s)) with enumerate(s), a function that can help you identify the index more elegantly. 5. Valid Palindrome Output: True The “Valid Palindrome” problem is a real classic and you will probably find it repeatedly under many different flavors. In this case, the task is to check whether, by removing at most one character, the string matches its reversed counterpart. When s = ‘radkar’ the function returns True as by excluding the ‘k’ we obtain the word ‘radar’, which is a palindrome. Arrays 6. Monotonic Array Output: True False True This is another very frequently asked problem and the solution provided above is pretty elegant as it can be written as a one-liner. An array is monotonic if and only if it is monotone increasing or monotone decreasing, and in order to assess this, the algorithm above takes advantage of the all() function that returns True if all items in an iterable are true, otherwise it returns False. If the iterable object is empty, the all() function also returns True. 7. Move Zeroes Output: [1, 3, 12, 0, 0] [1, 7, 8, 10, 12, 4, 0, 0, 0, 0] When you work with arrays, the .remove() and .append() methods are precious allies. In this problem I have used them to first remove each zero that belongs to the original array and then append it at the end of the same array.
8. Fill The Blanks Output: [1, 1, 2, 3, 3, 3, 5, 5] I was asked to solve this problem a couple of times in real interviews, and both times the solution had to include edge cases (that I omitted here for simplicity). On paper, this is an easy algorithm to build, but you need to have clear in mind what you want to achieve with the for loop and if statement, and be comfortable working with None values. 9. Matched & Mismatched Words Output: (['The','We','a','are','by','heavy','hit','in','meet','our', 'pleased','storm','to','was','you'], ['city', 'really']) The problem is fairly intuitive but the algorithm takes advantage of a few very common set operations like set(), intersection() (or &) and symmetric_difference() (or ^) that are extremely useful to make your solution more elegant. If it is the first time you encounter them, it is worth reading up on Python set operations first. 10. Prime Numbers Array Output: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31] I wanted to close this section with another classic problem. A solution can be found pretty easily by looping through range(n) if you are familiar with both the definition of prime numbers and the modulus operation. Conclusion In this article I shared solutions to 10 Python algorithm problems that are frequently asked in coding interview rounds. If you are preparing for an interview with a well-known tech company, this article is a good starting point to get familiar with common algorithmic patterns before moving to more complex questions. Also note that the exercises presented in this post (together with their solutions) are slight reinterpretations of problems available on LeetCode and GeeksForGeeks. I am far from being an expert in the field, therefore the solutions I presented are just indicative ones.
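Since the embedded solution gists do not render in this text, here is a minimal, hedged sketch of what brute-force solutions to three of the problems above (Reverse Integer, Valid Palindrome and Monotonic Array) might look like. The function names and test inputs are illustrative assumptions chosen to reproduce the sample outputs quoted above; they are not the article’s original code.

def solve(x):
    # Reverse the digits of an integer with slicing, preserving the sign.
    string = str(x)
    if string[0] == '-':
        return int('-' + string[:0:-1])
    return int(string[::-1])

def remove_one_char(s):
    # Brute force: try removing each character once and check for a palindrome.
    for i in range(len(s)):
        candidate = s[:i] + s[i + 1:]
        if candidate == candidate[::-1]:
            return True
    return s == s[::-1]

def is_monotonic(nums):
    # An array is monotonic if it is entirely non-decreasing or non-increasing.
    return (all(nums[i] <= nums[i + 1] for i in range(len(nums) - 1)) or
            all(nums[i] >= nums[i + 1] for i in range(len(nums) - 1)))

print(solve(-231), solve(345))        # -132 543
print(remove_one_char('radkar'))      # True
print(is_monotonic([6, 5, 4, 4]), is_monotonic([1, 3, 2]), is_monotonic([1, 1, 2, 3, 7]))  # True False True

These are plain brute-force versions; the point is only to show the shape of the expected answers, not the most optimized implementations.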
https://towardsdatascience.com/10-algorithms-to-solve-before-your-python-coding-interview-feb74fb9bc27
[]
2020-10-22 06:12:30.168000+00:00
['Python', 'Data Engineering', 'Interview', 'Algorithms', 'Data Science']
We need a new government
We need a new government Starting the changes to make this work again We need new governance. The old style of government is very broken in the US and crippled most everywhere. It has been obvious since November 2016 that the US is the worst case and may not survive the failed federal election. The effects of that election have led to a year of rapid decline and the loss of planetary leadership by the US. You simply can’t have what was the leading nation taken over by a highly questionable candidate elected in new and very questionable conditions. The expected disaster has not disappointed and has, if anything, been worse than people feared. Failure to replace Trump and his cohort and to address the structural problems that produced an invalid and incompetent regime in Washington DC may have already doomed the nation. But the point here is to look to the future and to possible ways to replace the existing mechanisms of electing parliamentary, partially representative governments. This has been a growing concern for decades in the US. The loss of the majority of eligible adult voters, creating governments elected by 20–30% of the population, is not workable. Combining this with the complexity of 21st century government issues and the elimination of citizenship training in public schools leaves a dangerously uninformed electorate. This is the stuff that authoritarian despots breed upon, and the rise of even an incompetent such as Trump shows that danger. Old assumptions The solutions are readily at hand but are complicated by assumptions about the nature of elections and centuries of racism, xenophobia, misogyny, and corruption in all of its forms. As national surveys have consistently shown, the majority of the US population wants what most other post-industrial and even late-industrial countries have. In short, that is the equivalent of a Scandinavian-type nation state. While this is not a clear answer, and Sweden, Norway, or Denmark are only approximate examples, it points to a direction totally denied by the existing US power structure. While most European countries have the basic services and active governments that are hopelessly desired by most Americans, the limitations of the existing systems in those countries are well recognized, with growing concern about the growth of neofascism in its various forms. In short, they are better than the US but not the answer for the future. The solutions mentioned above are being discussed in greater detail, with a growing awareness of the need for quick action. If no action is taken against Trump and the current regime before summer, and reliance on Mueller and the formal investigation into electoral illegality is a thin support for hope, the odds of anything other than a reenactment of 2016 in 2018 are small. Nothing has been done to correct or prevent the types of abuses used to create a questionable government. And that government has shown no interest in doing anything but pulling all the levers of power at its disposal to ensure that it is permanent. Congress is so corrupted by gerrymandered districts and outright ownership of representatives and senators that there is a near complete vote of no confidence. At this point the 2018 election appears to be an excellent example of doing the exact same things and expecting a completely different outcome. A new way to vote The answer is obviously in changing the nature of voting as well as the process of voting.
We must move to a direct vote for all federal positions with no gerrymandering. This will require removal of the traditionally accepted weighting toward rural voters by discounting urban votes, as well as the obviously illegal gerrymandering. Probably the most basic solution is a voting district of a fixed number of voters, e.g. 10,000, that may be only a couple of urban blocks or an entire rural county. This is not an issue, as the voting needs to be done online with all citizens automatically registered and counted. Needless to say, voting is a duty that is legally required. This would solve most of the current problems producing grossly distorted voting and representation. We will deal with the considerations and problems of this a little later. Knowledgeable Voting Requiring knowledge in order to make a reasoned selection on the basis of policy is a far bigger problem. One reasonable way to do this is by weighting votes based on education or knowledge. My preference is offering an elementary citizenship test, including basic political structure and civic components, in order to double your vote. The basic vote is for all citizens no matter what level of knowledge or formal education. Those who choose to take and pass the basic citizenship exam would gain an additional vote. The argument for this goes all the way back to Plato and is obviously even more important now. The problems that we have had up to now have been based on the difficulty of preventing hacked voting and guaranteeing the identity of each voter. That can now be taken care of with the permanent decentralized recording of the results using blockchain technology. This may require two blockchain ledgers, one for the permanent record of voting by the individual and the other recording the selections made. With voting required, each citizen must have a voting record or an official waiver. The selections should balance against the votes cast. This would require the addition of a no-selection vote, or “none of the above”, as has been discussed for many years. This should improve the audit and limit any type of rigging. A common argument against electronic voting is the presence of people without knowledge of Internet systems or access to them. Since voting is online with automatic registration, this could be handled by using official tutors or assistants for people with limited ability as basic voters. Voting or what? The nature of the electoral system is a much more difficult question. Initially, I expect we would start from the existing concept of representatives, although I think that direct democracy is now possible and, in fact, desirable. The use of blockchain transactions/contracts online removes many of the problems. All Congressional representatives would have the same equivalent number of possible votes. A logical move would be to introduce proportionate voting if representatives are still selected, making the House of Representatives much more parliamentary. This would remove the dead weight of the failed two-party structure and place the emphasis on policy packages supported by weighted voting.
https://medium.com/theotherleft/we-need-a-new-government-cbad6eef37de
['Mike Meyer']
2018-01-18 03:05:09.151000+00:00
['AI', 'Governance', 'Blockchain', 'Future', 'Politics']
AWS Cloud Security in a Nutshell
Hi Folks, Today we are going to look at AWS Cloud Security Best Practices. Security has become a trending topic all over the world, with more and more data leaks from small enterprises to large-scale enterprises. So let’s see how we can secure our AWS account! AWS Shared Responsibility Model The policies stated below are measures advised by AWS. AWS takes responsibility for its infrastructure, but it does not take responsibility for the security of the environment inside the customer’s account. When an account is first created, the actions mentioned below can be taken to lay out the basic security measures of the root account. 1. Grant Least Privilege Access — Grant the least access needed to perform the desired actions for a particular user. a. As enterprises handle multiple client workloads on AWS, AWS advises separating these workloads into different accounts using AWS Organizations. b. AWS advises maintaining common permission guardrails that restrict access for all identities. E.g.: blocking users from using multiple regions by restricting them to a single region, or restricting users from deleting common resources such as security policies. c. Use service control policies. E.g.: prevent users from getting an unwanted level of privileges inside the AWS cloud environment. d. Use permission boundaries to control the level of access that administrators have over controlling and managing accounts. E.g.: administrators can’t create policies that escalate their own access. e. Reduce permissions continuously: evaluate access that is not used by identities and remove the unused permissions. 2. Enable Identity Federation: Centrally manage users and access across multiple applications and services. In order to federate multiple accounts in AWS Organizations, use AWS Single Sign-On. 3. Enable MFA on the root account (highly recommended). Require all other users to activate MFA as well. 4. Rotate Credentials (change the passwords and access keys of your account regularly). Set up an account policy so that other users, as well as the root account, are required to change their passwords regularly. 5. Lock away your AWS account root user access keys. AWS recommends this approach because the access keys of your root account have access to all resources and services, including billing details, by default. Therefore, delete the root access keys if there are any in your account; if there are no access keys for your root account, don’t create one. 6. Never share the password of your root account with anyone. If other users require access to the AWS cloud environment, create individual IAM users and groups. Give the users only the necessary permissions. For yourself, also create an administrator IAM user. 7. Use groups to manage users. (As your organization grows and the number of users increases, create groups with the necessary level of access to your AWS cloud environment and add users to those groups.) 8. Be careful when granting users IAM access, as it gives them the privilege of creating users, groups, access keys, etc. When revoking permissions of such a user who had administrator access, we never know whether they have created other user accounts using their admin and IAM privileges. In that case, even if the root user revokes access for that admin user, they might use another account’s access keys and access the environment at a later time. This is a critical threat that could easily go unnoticed.
(It is highly recommended to keep full IAM access with the root user only.) 9. Configure a Strong Password Policy for users. a. If users have permission to create their own passwords for their accounts, there should be a password policy in place to make sure that the password has a minimum length (14 characters recommended), contains alphabetical and non-alphabetical characters, and is subject to frequent rotation requirements. 10. Use IAM Roles to grant permissions when one AWS service needs permission to access another. E.g.: when EC2 instances need to access S3 buckets. a. Create an IAM role that specifies which service needs to access which other service, and with what level of rights. E.g.: EC2 instances can only list S3 buckets. Never store access keys on EC2 servers inside the AWS config directory. If your EC2 instance is hacked, the attacker gets access to the access key information. 11. Do not share AWS access keys. a. Access keys provide programmatic access to the AWS environment. Never share access keys or expose them in unencrypted environments. For applications that need to access AWS services, create roles that provide temporary permissions to the application. 12. Monitor activity in your AWS account using the tools below available with AWS: a. Amazon CloudFront — logs user requests that CloudFront receives. b. AWS CloudTrail — logs AWS API calls and related events made by or on behalf of an AWS account. c. Amazon CloudWatch — monitors your AWS cloud resources and the applications you run on AWS. d. AWS Config — provides detailed historical information about the configuration of your AWS services, including IAM users, groups, roles, and policies. Best Practices when using the AWS Services 1. Tighten the CloudTrail configuration. a. CloudTrail is an AWS service that generates log files of all API calls made within AWS, including those from the AWS Management Console, SDKs, command-line tools, etc. This is a very important way of tracking what’s happening inside the AWS account, both for auditing and for post-incident investigation. b. If a hacker gets access to the AWS account, there’s a possibility they will try to disable CloudTrail, therefore it’s recommended to keep the CloudTrail permissions only with the root user. c. Enable CloudTrail across all geographic regions and AWS services to prevent activity monitoring gaps. d. Turn on CloudTrail log file validation so that any changes made to the log file itself after it has been delivered to the S3 bucket are trackable, to ensure log file integrity. e. Enable access logging for the CloudTrail S3 bucket so that you can track access requests and identify potentially unauthorized or unwarranted access attempts. f. Turn on multifactor authentication (MFA) for deleting CloudTrail S3 buckets, and encrypt all data in flight and at rest. 2. Best Practices when using AWS Database and data storage services a. Ensure that S3 buckets don’t have public read/write access unless required by the business. b. Turn on Redshift audit logging in order to support auditing and post-incident forensic investigations for a given database. c. Encrypt data stored on EBS as an extra security layer. d. Encrypt Amazon RDS as an extra security layer. e. Enable the require_ssl parameter in all Redshift clusters to minimize the risk of man-in-the-middle attacks. f. Restrict public access to database instances to avoid malicious attacks such as brute force attacks, SQL injections, or DoS attacks.
g. In all possible cases, place the database instances in private subnets. 3. Automate Detective Controls a. If an incident happens in your AWS account, how do you respond to that event? That’s where automated detective controls come into play. You can use CloudFormation to deploy your infrastructure and AWS CloudTrail to log events; if a malicious event happens in your account, you can automate the action taken against that event using this architecture. 4. Secure Your Operating Systems and Applications a. With the AWS shared responsibility model, you manage the security of your operating systems and applications. Amazon EC2 presents a true virtual computing environment, in which you can use web services interfaces to launch instances with a variety of operating systems with custom preloaded applications. You can standardize the operating system and application builds and centrally manage the security of your operating systems and applications in a single secure build repository. You can build and test a pre-configured AMI to meet your security requirements. b. Disable root API access keys and secret keys. c. Restrict access to instances to a limited set of IP ranges using security groups. d. Use bastion hosts to access your EC2 instances. e. Password-protect the .pem file on user machines. f. Delete users’ public keys from the authorized_keys file on your instances when they leave your organization. g. Rotate credentials (DB passwords, access keys). h. Regularly run least-privilege checks using the IAM user Access Advisor and IAM user last-used access key information. i. Implement a single primary function per Amazon EC2 instance to keep functions that require different security levels from co-existing on the same server. E.g.: implement web servers, database servers, and DNS servers separately. j. Enable only the necessary and secure services, protocols, and daemons required for the functioning of the operating system. k. Never use password authentication mechanisms to authenticate with servers. (Configure sshd to allow only public key authentication: set PubkeyAuthentication to yes and PasswordAuthentication to no in sshd_config.) l. Always use encrypted communication channels. 5. Securing your AWS Infrastructure a. Use Amazon VPC to define an isolated network for each workload or organizational entity. b. Use private and public subnets to place your components based on business needs. c. Use security groups to manage access to instances that have similar functions and security requirements. d. Use Network Access Control Lists (NACLs), which allow stateless management of IP traffic. NACLs are agnostic of TCP and UDP sessions, but they allow granular control over IP protocols (for example GRE, IPsec ESP, ICMP), as well as control on a per-source/destination IP address and port for TCP and UDP. NACLs work in conjunction with security groups. e. Use host-based firewalls as a last line of defense. 6. Using Tags to manage AWS resources a. Tagging AWS resources can help you in many ways when you have hundreds of resources in play within your AWS cloud environment. b. Generate alarms if a resource is not tagged properly. c. Proposed minimum tags: i. Platform_Owner ii. Resource_Owner iii. Project Name iv. Environment (Prod, Staging, Test, Dev) d. Tags can be useful when generating consolidated reports per project or when it comes to billing. e. You can add up to 10 tags per resource. This is a very summarized version of the precautions that we can take to secure our AWS account.
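To make one of these recommendations concrete, here is a small Python (boto3) sketch, not taken from the original article, that applies the strong password policy described in point 9. It assumes boto3 is installed and that the credentials in use have the iam:UpdateAccountPasswordPolicy permission; the rotation and reuse values are illustrative choices, not AWS-mandated numbers.

import boto3

# Hedged sketch: enforce an account-wide IAM password policy along the lines
# recommended above (14+ characters, mixed character classes, regular rotation).
iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=14,        # 14 characters, as recommended above
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,               # force regular rotation (illustrative value)
    PasswordReusePrevention=5,       # block reuse of recent passwords (illustrative value)
)

The same client can read the policy back with iam.get_account_password_policy() if you want to verify the change took effect.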
If you have any new ideas or different opinions regarding the AWS Cloud Security, feel free to comment.
https://medium.com/faun/aws-cloud-security-in-a-nutshell-f9e53907f41d
['Supun Sandeeptha']
2020-11-30 17:23:41.117000+00:00
['Software Development', 'Security', 'AWS', 'DevOps', 'Development']
Webpack 5 Builds for AWS Lambda Functions with TypeScript
Webpack 5 Builds for AWS Lambda Functions with TypeScript In a previous post, I wrote about self-destructing tweets, which runs as an AWS Lambda function every night at midnight. While that post was about the code itself, most of the AWS CDK infrastructure information had been written in a previous post about sending a serverless Slack message, which demonstrated how to run an AWS Lambda on a cron timer. Today’s post will be a short overview that bridges these together: it shows how I bundled the TypeScript code from the Twitter post with node modules and prepared it for deployment. The Folder Structure I am making assumptions here. The most “complex” setup I normally have for Lambdas is to write them in TypeScript and use Babel for transpilation. Given this will be a familiar setup for most, let’s work with that. Here is how most of my lambdas following this structure will look from within the function folder: https://gist.github.com/okeeffed/9b1e7edc86caff76179d434850f063c0.js You might also note I have both an index.ts and index.local.ts file. index.ts in my project is generally the entry point for the lambda, whereas the index.local.ts file is normally just used for local development, where I swap out my lambda handler for code that lets me run it locally. Both generally import the main function from another file (here denoted as function.ts ) and just call it. Webpack will bundle everything into one file later, so it is fine for me to structure the folder however I see fit. Also note: as pointed out in the comments by Maximilian, bundling node modules into the output from Webpack is not always a good idea if your npm packages require binaries. The same goes for any builds that require dynamic imports at runtime. Use your judgement on whether or not to bundle your node modules into the Webpack build, but I will be doing another write-up on using Lambda layers instead to get around the requirement to build one single output. Setting Up Your Own Project Inside of a fresh npm project that houses a TypeScript lambda, we need to add the required Babel and Webpack dependencies: https://gist.github.com/okeeffed/83313ff75f314653c67760251571320d.js Babel Run Command File Inside of .babelrc , add the following: https://gist.github.com/okeeffed/6640da43291ded9bf3ea0fbc88105d0c.js Setting Up TypeScript This part you will need to adjust to your own flavour, but here is the config that I have for the Twitter bot: https://gist.github.com/okeeffed/2f2ce20124863b1b0b2ff1153158b01b.js Webpack In this example, I am expecting that you are using Webpack 5. In webpack.config.js : https://gist.github.com/okeeffed/d63721ddc9715a181a5129cf14d00955.js Here we tell Webpack to set src/index.ts as the entry point and to convert the output to commonjs . We set our Babel and Cache loaders to test and compile any ts or js file that it finds from that entry point. Given that we are not using Node Externals (which avoids bundling node modules), any node modules required will also be compiled into the output. That means that the output in dist/index.js can run our project without node modules installed, which is perfect for AWS Lambda! Running A Build Add "build": "webpack" to your "scripts" key in the package.json file and you are ready to roll! Run npm run build , let Webpack work its magic and then see the single-file output in dist/index.js . Testing Your Projects I use lambda-local for testing the build before deployment with the AWS CDK. It targets Node.js, which is perfect for your TypeScript/JavaScript projects!
Follow the instructions on the website to install and give it a whirl! If things run smoothly, you can be confident with your deployment. Conclusion This post focused purely on the build process. As mentioned in the intro, some of my other posts will cover writing lambda functions and the actual AWS CDK deployments. Resources and Further Reading Image credit: Jess Bailey Originally posted on my blog.
https://medium.com/javascript-in-plain-english/webpack-5-builds-for-aws-lambda-functions-with-typescript-6603533c85cb
["Dennis O'Keeffe"]
2020-12-02 01:39:58.569000+00:00
['Typescript', 'JavaScript', 'Webpack', 'Lambda', 'AWS']
A Woman Was Assaulted After Telling Someone to Wear Their Mask
She just wanted the people around her to wear masks. In early August, a dispute was caught on camera at a Staples store in Hackensack, New Jersey. 54-year-old Margot Kagan, told another woman, 25-year-old Terri Thomas to properly wear her mask inside the store because Thomas’s mask was not fully covering her nose and mouth. At the time, the two customers were using adjacent fax machines. In the disturbing video, Kagan is seen wearing a face shield. She recently underwent a liver transplant and was using a cane to get around. Because of her age group and her surgery, her body’s susceptibility to complications from COVID-19 is likely higher. In response, Thomas approached Kagan, accosted her with profanity, and when Kagan tried to create distance between Thomas and herself, Thomas grabbed Kagan and lunged her to the ground. Then, she casually flipped back her hair and walked out of the store as Kagan remained on the floor, holding her leg. Afterward, Kagan was taken to a hospital, where she had to go through even more surgery, this time to repair the broken tibia she incurred from the assault. It’s a really hard video to watch even once. I’ve watched it a few times for fact-checking, and each time, I’m left feeling more disheartened. We’ve been seeing many instances of this. When asked to wear a mask, people become defensive to the point of aggression. There have even been instances of people coughing on those that asked them to wear a mask. The disregard for human life is revolting. Our racial, religious, and political identities in this regard should come second to our identities as people who have basic regard for human life. This doesn’t have any bearing on BLM. Many onlookers have tried to use this instance to either support or discredit the Black Lives Matter Movement. From my perspective, race was not the contentious issue here. At least, it shouldn’t have been. The fact that Margot Kagan is a White woman does not automatically render her a “Karen.” She is still allowed to ask others to respect her right to safety. Similarly, the fact that Terri Thomas is a black woman does not automatically mean that the BLM movement is just a cover for unscrupulous aggressors. There is too much evidence and far too many lived experiences for us to ignore racism any longer. We can believe that we must end structural injustices against Black Americans and simultaneously find Thomas’s actions unacceptable. Of course white supremacy exists, and it is ubiquitous. It is still not the determinant in every situation all the time. One thing I’ve found from living in bubbles of both political extremes is this- there are people on both sides that don’t understand why not wearing a mask is a serious public health issue. We wear masks for a reason similar to why we get vaccines. We don’t just get them so that we stay safe. By lowering our risk of infection, we also lower the risk that we will pass on the infection to a more vulnerable member of our population. We are still facing the pandemic. We’re not done. Even though many places in the United States are opening back up, COVID-19 has not gone away. Maybe we will have to learn to live with the coronavirus for the long term, but when there is a mandate stating that people in public spaces must wear masks, there really is no reasonable justification for throwing a tantrum and refusing to do so. Our racial, religious, and political identities in this regard should come second to our identities as people who have basic regard for human life. 
Is your “right to not wear a mask” worth your neighbor’s life?
https://medium.com/an-amygdala/a-woman-was-assaulted-after-telling-someone-to-wear-their-mask-e056714c0b6d
['Rebeca Ansar']
2020-09-05 01:31:03.012000+00:00
['Covid 19', 'Society', 'America', 'Culture', 'Health']
I created the exact same app in React and Vue. Here are the differences. [2020 Edition]
I created the exact same app in React and Vue. Here are the differences. [2020 Edition] React vs Vue: Now with React Hooks Vue 3 Composition API! React vs Vue: the saga continues A few years ago, I decided to try and build a fairly standard To Do App in React and Vue. Both apps were built using the default CLIs (create-react-app for React, and vue-cli for Vue). My aim was to write something that was unbiased and simply provided a snapshot of how you would perform certain tasks with both technologies. When React Hooks were released, I followed up the original article with a ‘2019 Edition’ which replaced the use of Class Components with Functional Hooks. With the release of Vue version 3 and its Composition API, now is the time to one again update this article with a ‘2020 Edition’. Let’s take a quick look at how the two apps look: The CSS code for both apps are exactly the same, but there are differences in where these are located. With that in mind, let’s next have a look at the file structure of both apps: You’ll see that their structures are similar as well. The key difference so far is that the React app has two CSS files, whereas the Vue app doesn’t have any. The reason for this is because create-react-app creates its default React components with a separate CSS file for its styles, whereas Vue CLI creates single files that contain HTML, CSS, and JavaScript for its default Vue components. Ultimately, they both achieve the same thing, and there is nothing to say that you can’t go ahead and structure your files differently in React or Vue. It really comes down to personal preference. You will hear plenty of discussion from the dev community over how CSS should be structured, especially with regard to React, as there are a number of CSS-in-JS solutions such as styled-components, and emotion. CSS-in-JS is literally what it sounds like by the way. While these are useful, for now, we will just follow the structure laid out in both CLIs. But before we go any further, let’s take a quick look at what a typical Vue and React component look like: A typical React file: A typical Vue file: Now that’s out of the way, let’s get into the nitty gritty detail! How do we mutate data? But first, what do we even mean by “mutate data”? Sounds a bit technical doesn’t it? It basically just means changing the data that we have stored. So if we wanted to change the value of a person’s name from John to Mark, we would be ‘mutating the data’. So this is where a key difference between React and Vue lies. While Vue essentially creates a data object, where data can freely be updated, React handles this through what is known as a state hook. Let’s take a look at the set up for both in the images below, then we will explain what is going on after: React state: Vue state: So you can see that we have passed the same data into both, but the structure is a bit different. With React — or at least since 2019 — we would typically handle state through a series of Hooks. These might look a bit strange at first if you haven’t seen this type of concept before. Basically, it works as follows: Let’s say we want to create a list of todos. We would likely need to create a variable called list and it would likely take an array of either strings or maybe objects (if say we want to give each todo string an ID and maybe some other things. We would set this up by writing const [list, setList] = useState([]) . Here we are using what React calls a Hook — called useState . This basically lets us keep local state within our components. 
Also, you may have noticed that we passed in an empty array [] inside of useState() . What we put inside there is what we want list to initially be set to, which in our case, we want to be an empty array. However, you will see from the image above that we passed in some data inside of the array, which ends up being the initialised data for list. Wondering what setList does? There will be more on this later! In Vue, you would typically place all of your mutable data for a component inside of a setup() function that returns an object with the data and functions you want to expose (which basically just means the things you want to be able to make available for use in your app). You will notice that each piece of state (aka the data we want to be able to mutate) data in our app is wrapped inside of a ref() function. This ref() function is something that we import from Vue and makes it possible for our app to update whenever any of those pieces of data are changed/updated. In short, if you want to make mutable data in Vue, assign a variable to the ref() function and place any default data inside of it. So how would we reference mutable data in our app? Well, let’s say that we have some piece of data called name that has been assigned a value of Sunil . In React, as we have our smaller pieces of state that we created with useState() , it is likely that we would have created something along the lines of const [name, setName] = useState('Sunil') . In our app, we would reference the same piece of data by calling simply calling name. Now the key difference here is that we cannot simply write name = 'John' , because React has restrictions in place to prevent this kind of easy, care-free mutation-making. So in React, we would write setName('John') . This is where the setName bit comes into play. Basically, in const [name, setName] = useState('Sunil') , it creates two variables, one which becomes const name = 'Sunil' , while the second const setName is assigned a function that enables name to be recreated with a new value. In Vue, this would be sitting inside of the setup() function and would have been called const name = ref(‘Sunil') . In our app, we would reference this by calling name.value . With Vue, if we want to use the value created inside of a ref() function, we look for .value on the variable rather than simply calling the variable. In other words, if we want the value of a variable that holds state, we look for name.value , not name . If you want to update the value of name , you would do so by updating name.value . For example, let's say that I want to change my name from Sunil to John. I'd do this by writing name.value = "John" . I’m not sure how I feel about being called John, but hey ho, things happen! 😅 Effectively React and Vue are doing the same thing here, which is creating data that can be updated. Vue essentially combines its own version of name and setName by default whenever a piece of data wrappeed inside of a ref() function gets updated. React requires that you call setName() with the value inside in order to update state, Vue makes an assumption that you’d want to do this if you were ever trying to update values inside the data object. So Why does React even bother with separating the value from the function, and why is useState() even needed? Essentially, React wants to be able to re-run certain life cycle hooks whenever state changes. In our example, if setName() is called, React will know that some state has changed and can, therefore, run those lifecycle hooks. 
If you directly mutated state, React would have to do more work to keep track of changes and what lifecycle hooks to run etc. Now that we have mutations out of the way, let’s get into the nitty, gritty by looking at how we would go about adding new items to both of our To Do Apps. How do we create new To Do Items? React: const createNewToDoItem = () => { const newId = generateId(); const newToDo = { id: newId, text: toDo }; setList([...list, newToDo]); setToDo(""); }; How did React do that? In React, our input field has an attribute on it called value. This value gets automatically updated every time its value changes through what is known as an onChange event listener. The JSX (which is basically a variant of HTML), looks like this: <input type="text" placeholder="I need to..." value={toDo} onChange={handleInput} onKeyPress={handleKeyPress} /> So every time the value is changed, it updates state. The handleInput function looks like this: const handleInput = (e) => { setToDo(e.target.value); }; Now, whenever a user presses the + button on the page to add a new item, the createNewToDoItem function is triggered. Let’s take a look at that function again to break down what is going on: const createNewToDoItem = () => { const newId = generateId(); const newToDo = { id: newId, text: toDo }; setList([...list, newToDo]); setToDo(""); }; Essentially the newId function is basically creating a new ID that we will give to our new toDo item. The newToDo variable is an object that takes that has an id key that is given the value from newId. It also has a text key which takes the value from toDo as its value. That is the same toDo that was being updated whenever the input value changed. We then run out setList function and we pass in an array that includes our entire list as well as the newly created newToDo . If the ...list , bit seems strange, the three dots at the beginning is something known as a spread operator, which basically passes in all of the values from the list but as separate items, rather than simply passing in an entire array of items as an array. Confused? If so, I highly recommend reading up on spread because it’s great! Anyway, finally we run setToDo() and pass in an empty string. This is so that our input value is empty, ready for new toDos to be typed in. Vue: function createNewToDoItem() { const newId = generateId(); list.value.push({ id: newId, text: todo.value }); todo.value = ""; } How did Vue do that? In Vue, our input field has a handle on it called v-model. This allows us to do something known as two-way binding. Let’s just quickly look at our input field, then we’ll explain what is going on: <input type="text" placeholder="I need to..." v-model="todo" v-on:keyup.enter="createNewToDoItem" /> V-Model ties the input of this field to a variable we created at the top of our setup() function and then exposed as a key inside of the object we returned. We haven't covered what is returned from the object much so far, so for your info, here is what we have returned from our setup() function inside of ToDo.vue: return { list, todo, showError, generateId, createNewToDoItem, onDeleteItem, displayError }; Here, list , todo , and showError are our stateful values, while everything else are functions we want to be able to call in other places of our app. Okay, coming back out from our tangent, when the page loads, we have todo set to an empty string, as such: const todo = ref("") . 
If this had some data already in there, such as const todo = ref("add some text here"): our input field would load with add some text here already inside the input field. Anyway, going back to having it as an empty string, whatever text we type inside the input field gets bound to todo.value . This is effectively two-way binding - the input field can update the ref() value and the ref() value can update the input field. So looking back at the createNewToDoItem() code block from earlier, we see that we push the contents of todo.value into the list array - by pushing todo.value into list.value - and then update todo.value to an empty string. We also used the same newId() function as used in the React example. How do we delete from the list? React: const deleteItem = (id) => { setList(list.filter((item) => item.id !== id)); }; How did React do that? So whilst the deleteItem() function is located inside ToDo.js, I was very easily able to make reference to it inside ToDoItem.js by firstly, passing the deleteItem() function as a prop on as such: <ToDoItem key={item.id} item={item} deleteItem={deleteItem} /> This firstly passes the function down to make it accessible to the child. Then, inside the ToDoItem component, we do the following: <button className="ToDoItem-Delete" onClick={() => deleteItem(item.id)}> - </button> All I had to do to reference a function that sat inside the parent component was to reference props.deleteItem. Now you may have noticed that in the code example, we just wrote deleteItem instead of props.deleteItem. This is because we used a technique known as destructuring which allows us to take parts of the props object and assign them to variables. So in our ToDoItem.js file, we have the following: const ToDoItem = (props) => { const { item, deleteItem } = props; } This created two variables for us, one called item, which gets assigned the same value as props.item, and deleteItem, which gets assigned the value from props.deleteItem. We could have avoided this whole destructuring thing by simply using props.item and props.deleteItem, but I thought it was worth mentioning! Vue: function onDeleteItem(id) { list.value = list.value.filter(item => item.id !== id); } How did Vue do that? A slightly different approach is required in Vue. We essentially have to do three things here: Firstly, on the element we want to call the function: <button class="ToDoItem-Delete" @click="deleteItem(item.id)"> - </button> Then we have to create an emit function as a method inside the child component (in this case, ToDoItem.vue), which looks like this: function deleteItem(id) { emit("delete", id); } Along with this, you’ll notice that we actually reference a function when we add ToDoItem.vue inside of ToDo.vue: <ToDoItem v-for="item in list" :item="item" @delete="onDeleteItem" :key="item.id" /> This is what is known as a custom event-listener. It listens out for any occasion where an emit is triggered with the string of ‘delete’. If it hears this, it triggers a function called onDeleteItem. This function sits inside of ToDo.vue, rather than ToDoItem.vue. This function, as listed earlier, simply filters the id from the list.value array. It’s also worth noting here that in the Vue example, I could have simply written the $emit part inside of the @click listener, as such: <button class="ToDoItem-Delete" @click="emit("delete", item.id)"> - </button> This would have reduced the number of steps down from 3 to 2, and this is simply down to personal preference. 
In short, child components in React will have access to parent functions via props (providing you are passing props down, which is fairly standard practice and you’ll come across this loads of times in other React examples), whilst in Vue, you have to emit events from the child that will usually be collected inside the parent component. How do we pass event listeners? React: Event listeners for simple things such as click events are straight forward. Here is an example of how we created a click event for a button that creates a new ToDo item: <button className="ToDo-Add" onClick={createNewToDoItem}> + </button> Super easy here and pretty much looks like how we would handle an in-line onClick with vanilla JS. As mentioned in the Vue section, it took a little bit longer to set up an event listener to handle whenever the enter button was pressed. This essentially required an onKeyPress event to be handled by the input tag, as such: <input type="text" placeholder="I need to..." value={toDo} onChange={handleInput} onKeyPress={handleKeyPress} /> This function essentially triggered the createNewToDoItem function whenever it recognised that the ‘enter’ key had been pressed, as such: const handleKeyPress = (e) => { if (e.key === "Enter") { createNewToDoItem(); } }; Vue: In Vue it is super straight-forward. We simply use the @ symbol, and then the type of event-listener we want to do. So for example, to add a click event listener, we could write the following: <button class="ToDo-Add" @click="createNewToDoItem"> + </button> Note: @click is actually shorthand for writing v-on:click . The cool thing with Vue event listeners is that there are also a bunch of things that you can chain on to them, such as .once which prevents the event listener from being triggered more than once. There are also a bunch of shortcuts when it comes to writing specific event listeners for handling key strokes. I found that it took quite a bit longer to create an event listener in React to create new ToDo items whenever the enter button was pressed. In Vue, I was able to simply write: <input type=”text” v-on:keyup.enter=”createNewToDoItem”/> How do we pass data through to a child component? React: In react, we pass props onto the child component at the point where it is created. Such as: <ToDoItem key={item.id} item={item} deleteItem={deleteItem} />; Here we see two props passed to the ToDoItem component. From this point on, we can now reference them in the child component via this.props. So to access the item.todo prop, we simply call props.item . You may have noticed that there's also a key prop (so technically we're actually passing three props). This is mainly for React's internals, as it makes things easier when it comes to making updates and tracking changes among multiple versions of the same component (which we have here because each todo is a copy of the ToDoItem component). It's also important to ensure your components have unique keys, otherwise React will warn you about it in the console. Vue: In Vue, we pass props onto the child component at the point where it is created. Such as: <ToDoItem v-for="item in list" :item="item" @delete="onDeleteItem" :key="item.id" /> Once this is done, we then pass them into the props array in the child component, as such: props: [ "todo" ] . These can then be referenced in the child by their name — so in our case, todo . 
If you're unsure about where to place that prop key, here is what the entire export default object looks like in our child component: export default { name: "ToDoItem", props: ["item"], setup(props, { emit }) { function deleteItem(id) { emit("delete", id); } return { deleteItem, }; }, }; One thing you may have noticed is that when looping through data in Vue, we actually just looped through list rather than list.value . Trying to loop through list.value won't work here How do we emit data back to a parent component? React: We firstly pass the function down to the child component by referencing it as a prop in the place where we call the child component. We then add the call to function on the child by whatever means, such as an onClick, by referencing props.whateverTheFunctionIsCalled — or whateverTheFunctionIsCalled if we have used destructuring. This will then trigger the function that sits in the parent component. We can see an example of this entire process in the section ‘How do we delete from the list’. Vue: In our child component, we simply write a function that emits a value back to the parent function. In our parent component, we write a function that listens for when that value is emitted, which can then trigger a function call. We can see an example of this entire process in the section ‘How do we delete from the list’. And there we have it! 🎉 We’ve looked at how we add, remove and change data, pass data in the form of props from parent to child, and send data from the child to the parent in the form of event listeners. There are, of course, lots of other little differences and quirks between React and Vue, but hopefully the contents of this article has helped to serve as a bit of a foundation for understanding how both frameworks handle stuff. If you’re interested in forking the styles used in this article and want to make your own equivalent piece, please feel free to do so! 👍 Github links to both apps: Vue ToDo: https://github.com/sunil-sandhu/vue-todo-2020 React ToDo: https://github.com/sunil-sandhu/react-todo-2020 The 2019 version of this article https://medium.com/javascript-in-plain-english/i-created-the-exact-same-app-in-react-and-vue-here-are-the-differences-2019-edition-42ba2cab9e56 The 2018 version of this article https://medium.com/javascript-in-plain-english/i-created-the-exact-same-app-in-react-and-vue-here-are-the-differences-e9a1ae8077fd If you would like to translate this article into another language, please go ahead and do so — let me know when it is complete so that I can add it to the list of translations above. JavaScript In Plain English Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel! Originally posted at: sunilsandhu.com
https://medium.com/javascript-in-plain-english/i-created-the-exact-same-app-in-react-and-vue-here-are-the-differences-2020-edition-36657f5aafdc
['Sunil Sandhu']
2020-08-09 17:30:21.608000+00:00
['JavaScript', 'Web Development', 'React', 'Vuejs', 'Programming']
Death By a Thousand Hacks
On the drowning of art in a sea of mediocre “content” By MARTIN REZNY Whether you are a customer seeking a great experience, or an author trying desperately not to die of starvation on a daily basis, this essay concerns you. I could write this in practical marketing terms, but that would only add to the sprawling mass of artless placeholders meant to entertain for a moment and then vanish in a black hole of time collectively wasted by all of humanity. Let’s attempt to make it more into a raft to hold onto as we’re circling the drain. To make the issue very clear very fast, the astronomers face an enemy that works as a near perfect metaphor to this problem — the light pollution. It too only really appeared in modern times as a result of advanced technology. Unlike thousands of years before, whole cities recently became awash with artificial light at night, drowning out the most spectacular display of the ages, the sky full of stars, thus rendering millennia of culture obscured to invisible. To many people nowadays, this issue seems eminently unimportant, a minor gripe of a concerned elite minority. More light at night is just better, right? This reasoning may make complete sense if you are a star-blind resident of urban landscapes, but only precisely because of the lack of experience with the real thing, and because of the ignorance of the eternal truth of nature and human condition it conveys. It is a forgetting that artifice is the lesser miracle. Much like observation of stars yields deep secrets about the universe and has inspired classical masterpieces of art that will never die for as long as there will be humans, real art conveys some kind of truth that needs to be expressed, inspiring humans to great feats of learning and accomplishment. Compared to art, “content” exists solely to make someone money, to be consumed and forgotten. It’s highest ambition is to amuse, like neon signs. It should be obvious that no amount of neon sign gazing will make one know more about the world or themselves, and if that’s all that you can see, it’s a crime against your quality of life. Again, it may seem overblown, but as Neil DeGrasse Tyson, one of today’s most prominent astrophysicists, keeps saying, as a kid in New York he practically didn’t believe there were any stars in the sky. It took a visit to a planetarium to even make him aware of their existence. Now imagine how many Neils of artistic expression we’re losing if so much of what we can read, watch, listen to, or play is just a fake replica, a hollow imitation, a dead simulation of art. No one is going to learn how to be a good storyteller by consuming bad storytelling, become a good writer by reading bad writing, or turn into a good musical composer by conforming to genre cliches. I could go on for a very, very long time, but I think you catch my drift. Before someone starts bringing economics into this, profit is not only no justification for a crime against culture of this magnitude, it makes it more condemnable. If at least it was an accident, it would be forgivable. As such, it is more akin to a robbery, extracting value out of values. Even assuming one’s just a passive consumer and not an active perpetrator, to consume invariably means to destroy. Art is not to be consumed, it is to be appreciated, or it dies. This may sound overly dramatic, but it is no exaggeration. Sure, the actual art of our past in some sense still exists, its physical carriers and patterns imprinted on them are preserved somewhere. 
But it is dying to the extent to which it becomes more difficult to access. If not in terms of “views” by being hidden in a pile of loud nonsense, then in terms of diminishing human ability to engage with it on any meaningful level, with understanding and purpose. And it doesn’t much matter what greatness there used to be when the cultural landscape of today is an endless landfill from horizon to horizon. People can only ever truly live in the now, and if the now is thoroughly awful, any art that manages to still somehow be made feels lesser for being invariably an escape attempt. Sure, the contemporary art industry (an oxymoron if there ever was one) has never been bigger, but it has been built out of farts and prison bars. The solution is one of those so simple that they feel impossible — if you’re a creative person, just don’t be a hack. Don’t make things to be consumed and disappear, build things to last, things that will fight against those who would try to use them. Leave the night city for the countryside and sleep under the stars, or if you cannot, if you’re trapped within the walls of false realities imposed upon you by others, at least dream about them. They’re still there. Like what you read? Subscribe to my publication, heart, follow, or… Make me happy and throw something into my tip jar
https://medium.com/words-of-tomorrow/death-by-a-thousand-hacks-f198ef9a61d1
['Martin Rezny']
2020-01-23 16:01:42.306000+00:00
['Storytelling', 'Astronomy', 'Art', 'Neil deGrasse Tyson', 'Creativity']
S.O.L.I.D Principles Explained In Five Minutes
Dependency Inversion Principle (DIP)

This principle states that high-level modules must not depend on low-level modules; both should depend on abstractions. Consider the MessageBoard code snippet below:

public class MessageBoard
{
    private WhatsUpMessage message;

    public MessageBoard(WhatsUpMessage message)
    {
        this.message = message;
    }
}

The high-level module MessageBoard now depends on the low-level WhatsUpMessage. If we needed to print the underlying message in the high-level module, we would now find ourselves at the mercy of the low-level module. We would have to write WhatsUpMessage-specific logic to print that message. If, later, FacebookMessage needed to be supported, we would have to modify the high-level module (tightly-coupled code). That violates the Dependency Inversion Principle.

A way to fix that would be to extract that dependency. Create an interface and add whatever your high-level module needs. Any class that your high-level module needs to use would then have to implement that interface. Your interface would look something like this:

public interface IMessage
{
    void PrintMessage();
}

Your MessageBoard would now look like this:

public class MessageBoard
{
    private IMessage message;

    public MessageBoard(IMessage message)
    {
        this.message = message;
    }

    public void PrintMessage()
    {
        this.message.PrintMessage();
    }
}

The low-level modules would look like this:

public class WhatsUpMessage : IMessage
{
    public void PrintMessage()
    {
        // print WhatsUp message
    }
}

public class FacebookMessage : IMessage
{
    public void PrintMessage()
    {
        // print Facebook message
    }
}

That abstraction removes the high-level module's dependency on the low-level modules. The high-level module is now completely independent of any low-level module (a small usage sketch follows at the end of this piece).

Using the S.O.L.I.D principles when writing code will make you a better developer and make your life a lot easier. You might even become the new popular person on the block if you're the only one doing it. Thank you for making it to the end. Until next time, Happy coding.
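As a quick illustration of the payoff, here is a hypothetical usage sketch showing the composition: the concrete message type is chosen in one place, and MessageBoard itself never changes. The variable names are made up for the example.

// Hypothetical composition sketch: MessageBoard only ever sees IMessage,
// so swapping the message source requires no change to MessageBoard itself.
var whatsUpBoard = new MessageBoard(new WhatsUpMessage());
whatsUpBoard.PrintMessage();

var facebookBoard = new MessageBoard(new FacebookMessage());
facebookBoard.PrintMessage();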
https://medium.com/swlh/s-o-l-i-d-principles-explained-in-five-minutes-8d36b1da4f6b
[]
2019-12-04 00:17:34.860000+00:00
['Engineering', 'Software', 'Software Development', 'Programming', 'Design Patterns']
How the BTS Universe Successfully Engages Thousands of Fans
Images from BigHit’s official Twitter. BTS is known around the world for their relatable music, dynamic concert performances, and their passionate fanbase. But another large element behind this successful group of young men is the “BTS Universe” (BU), a fictional world with a narrative that depicts characters inspired by the BTS members. The BU started with just a handful of music videos, but it later went cross-platform with the introduction of the “HwaYangYeonHwa Notes,” small booklets of text included in the group’s Love Yourself albums. Additional music videos and short films fed the storyline, and BigHit Entertainment recently released a physical HYYH The Notes book and launched a webtoon on Naver titled “Save Me.” Dedicated fans spend hours deconstructing and analyzing this narrative, which spawns countless Twitter threads, blog posts, and YouTube videos about the storyline. Although the narrative began in 2015, fans are still consistently involved in discussing this story as new information continues to come out. It’s clear that the BU successfully intrigues fans, pulling them into the narrative and the world of theories as deeply as they wish to go — but what lies beneath this narrative’s ability to draw people in? In his book on writing technique titled The Emotional Craft of Fiction, author Donald Maass writes, “To entertain, a story must present novelty, challenge, and/or aesthetic value.” Maass encourages writers to “force the reader to figure something out,” because that will both engage the reader and make it more likely they’ll remember the story. The BTS Universe, though not strictly a narrative in the form of a book, manages to hit on all of these points. The BU is a novel concept, a first for K-Pop, as no other group has ventured into storytelling at this level, across this many platforms, and with this level of cohesiveness before. In addition to music videos, short films, texts, and the webtoon, the BU was further expanded by the Smeraldo blog, which provided lore surrounding the smeraldo flower, a fictional flower that appears in BTS’s storyline. Smeraldo was also used to name the Twitter account that promotes the webtoon and the HYYH The Notes book. Additionally, a real Smeraldo shop that sold special flower-themed merchandise opened at the group’s Love Yourself Seoul concerts. The Smeraldo tie-in bridged the gap between the BU world and our own, adding yet another layer of interactivity and immersion. Fans are captivated by the level of detail and amount of content that’s been put into the BU. The challenge of the BU lies in its storytelling. The story is fleshed out mainly in the “HYYH Notes,” which are epistolary in nature. Each note bears a name and a date, but despite the three albums’ worth of Notes and the full-length HYYH The Notes 1 book we have so far, the full story is yet unknown. Events that take place in the Notes sometimes appear in videos or the webtoon, but no medium gives the full picture. Gaps in the narrative leave fans to put the pieces together themselves and to theorize about the missing portions, symbolism, and character motives. It’s particularly effective since bits of the story are released only periodically, intriguing fans to wait for the next piece to drop. BTS’s content is often released in media res, effectively drawing fans in with the promise that more of the backstory will be revealed later. An additional challenge exists in the outside sources that occasionally influence BTS’s work. 
With the release of title track “Blood, Sweat & Tears” off their 2016 album WINGS, the band noted the influence of Demian, a German Bildungsroman by Hermann Hesse. Later, BigHit Entertainment’s official shop released a book bundle that included Demian as well as Erich Fromm’s The Art of Loving and Murray Stein’s Jung’s Map of the Soul, giving fans even more to connect to BTS’s releases. With their upcoming release bearing the title Map of the Soul: Persona, it’s clear that fans will have even more material to unravel. When it comes to aesthetics, there’s much to appreciate in BTS’s music videos and short films, which are all shot cinematically and with great care to detail. Both the visual storytelling and the aesthetically pleasing videos serve to hold the audience’s attention. Since BTS’s content provides a long-running story rich in symbols and connected themes, fans are encouraged to re-watch past videos to look for information they may have missed. Truly, by hitting all three points of novelty, challenge, and aesthetics and utilizing so many forms of media, BigHit ensures that fans stay engaged, and when we stay engaged, we develop deeper attachments. Maass touches on emotional attachments to fiction in his book, discussing how psychology’s affective disposition theory explains why readers become emotionally involved — we tend to make moral judgments about characters and attach emotions to them as a result. If we feel something in relation to a fictional character, we’re that much more bonded to them and the story. From the very start of the BU, even before it was billed as the BU, BTS’s characters played into disposition theory. In the first string of BU music videos including “I Need U” and “Run,” the members of the group are shown as innocent but troubled youth, with each character confronting his own struggles. At the time, fans had nothing more to go on than the music videos, but these videos served as a great emotional hook. Fans could relate to some of the realistic characters and sympathize with others because of how they were depicted — we judged them to be good characters, despite their bad circumstances. Creating relatable and likable characters is one huge step in the direction of successful emotional attachment. What makes the experience even more emotionally invested for fans is that the fictional characters are portrayed by the real BTS members. They use their real names for these characters, and occasionally real personality traits bleed over into their fictional counterparts. Fans who already have an attachment to the real BTS will more easily attach to this fictional story and world. This ease of attachment eliminates a hurdle in traditional fiction writing, because in a book, the characters are unknown. In the BU, however, they’re unknown and revealed only incrementally, but they are presented in a familiar form. With so many sources of information and a slew of gaps to fill in, the BU allows fans to play an active role in the group’s narrative. Other K-Pop releases may be momentarily engaging, but if there’s not much to mull over, we’re not as likely to keep thinking about them and may lose interest. But the BTS Universe is special because it extends its storytelling beyond just a music video, or even a series of videos, enabling fans to actively engage and solidifying the fans’ attachment to the series, the characters, and the members of BTS themselves. 
Maass may be talking about writing novels, but his formula for effective, engaging fiction concisely explains why so many of us are willing participants in this cross-platform fictional universe. Interested in learning more about the BU? I’ve opened up my website, The BTS Effect, where most of my BTS-related content lives!
https://medium.com/bangtan-journal/how-the-bts-universe-successfully-engages-thousands-of-fans-78152ad8338f
['Courtney Lazore']
2019-12-03 14:51:01.913000+00:00
['Storytelling', 'Music', 'Kpop', 'Bts', 'Bts Army']
Organise your Jupyter Notebook with these tips
📍 Tip 4. Create user-defined functions and save them in a module

You may have heard of the DRY principle: Don't Repeat Yourself. If you haven't heard of this software engineering principle before, it is about "not duplicating a piece of knowledge within a system". One interpretation of this principle in Data Science is to create functions that abstract away recurring tasks and reduce copy-pasting. You can even use classes if it makes sense in your case. Here are the suggested steps for this tip:

1. Create a function
2. Ensure the function has an intuitive name
3. Document the function with a docstring
4. (Ideally) Unit test the function
5. Save the function in a .py file (a .py file is referred to as a module)
6. Import the module in the Notebook to access the function
7. Use the function in the Notebook

Let's try to contextualise these steps with an example (a minimal sketch is included at the end of this section).

Here's a simple way to assess if a function has an intuitive name: if you think a colleague who hasn't seen the function before could roughly guess what the function does just by looking at its name, then you are on the right track.

When documenting these functions, I have adapted a few different styles in the way that made the most sense to me. While the example here serves as a working function for Data Science, I highly encourage you to check out the official guides to learn the best practices in naming and documentation conventions, style guides and type hints. You can even browse through modules in well-established packages' GitHub repositories to get inspiration.

If you saved these functions in a helpers.py file and imported the helpers module (by the way, a Python module just means a .py file) into your Notebook with import helpers, you can access the documentation by writing the function name followed by Shift + Tab.

If you have many functions, you could even categorise them and put them in separate modules. If you take this approach, you may even want to create a folder containing all the modules. While putting stable code into a module makes sense, I think it is fine to keep experimental functions in your Notebook.

If you implement this tip, you will soon notice that your Notebook starts to look less cluttered and more organised. In addition, using functions will make you less prone to silly copy-paste mistakes.

Unit testing was not covered in this post as it deserves its own section. If you would like to learn about unit testing for Data Science, this PyData talk may be a good starting point.
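As promised above, here is a minimal sketch of what such a helper could look like. The function name, docstring layout and behaviour are illustrative placeholders of my own rather than the author's exact example:

# helpers.py: a small module of reusable helper functions (illustrative sketch)
import pandas as pd


def summarise_missing(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise missing values in a DataFrame.

    Parameters
    ----------
    df : pd.DataFrame
        Data to inspect.

    Returns
    -------
    pd.DataFrame
        Count and percentage of missing values per column,
        sorted from most to least missing.
    """
    summary = pd.DataFrame({
        "n_missing": df.isna().sum(),
        "pct_missing": df.isna().mean() * 100,
    })
    return summary.sort_values("n_missing", ascending=False)

In the Notebook, the function is then one import away:

import helpers
helpers.summarise_missing(df)  # Shift + Tab on the name brings up the docstring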
https://towardsdatascience.com/organise-your-jupyter-notebook-with-these-tips-d164d5dcd51f
['Zolzaya Luvsandorj']
2020-11-02 10:35:31.120000+00:00
['Python', 'Data Science', 'Jupyter Notebook', 'Data', 'Workflow']
Music generation using Deep Learning
“If I had my life to live over again, I would have made a rule to read some poetry and listen to some music at least once every week.” ― Charles Darwin

Life exists on the sharp-edged wire of the guitar. Once you jump, its echoes can be heard with immense, intangible pleasure. Let's explore this intangible pleasure…

Music is nothing but a sequence of notes (events), so the input to the model is a sequence of notes. Some examples of music generated using RNNs are shown below.

Music Representation:

sheet-music

ABC-notation: a sequence of characters, which is very simple for a Neural Network to train on. https://en.wikipedia.org/wiki/ABC_notation

MIDI: https://towardsdatascience.com/how-to-generate-music-using-a-lstm-neural-network-in-keras-68786834d4c5

mp3: stores only the audio.

Char-RNN

Here I'm using a char-RNN structure (a many-to-many RNN) where one output corresponds to each input (input Ci -> output C(i+1)) at each time step (cell). It can have multiple hidden layers (multiple LSTM layers).

Visualizing the predictions and the "neuron" firings in the RNN: under every character, we visualize (in red) the top 5 guesses that the model assigns for the next character. The guesses are colored by their probability (so dark red = judged as very likely, white = not very likely). The input character sequence (blue/green) is colored based on the firing of a randomly chosen neuron in the hidden representation of the RNN. Think about it as green = very excited and blue = not very excited.

Process:

1. Obtaining data
2. Preprocessing (generating batches for SGD) to feed into the char-RNN

Please follow the link below for more datasets. Here I used only the Jigs dataset (340 tunes) in ABC format. The dataset will be fed into RNN training using a batch size of 16.

Here, two LSTM cells are shown for each input. The input X0 goes to all LSTM cells in the first input layer. You get an output (h0), and the information is passed on to the next time-step layer. All outputs at time step one, LSTM_t1_1 and LSTM_t1_2, are connected to a dense layer whose output is h0. The dense layer at time step one is called a time-distributed dense layer. The same applies to the next time step.

1. return_sequences=True in Keras is used when you want to generate an output for each input in the sequence of time steps. For every input, we need a sequence of outputs. The same input goes to every cell and generates an output at every cell in one layer. At every time step (i), we get an output vector (of size 256 in the given problem).

2. Time-distributed dense layer. Please follow the discussion above for a better understanding. At every time step, it takes all the LSTM outputs and constructs a dense layer of size 86. Here 86 is the number of unique characters in the whole vocabulary.

3. With stateful=True, the last state for each sample at index i in a batch will be used as the initial state for the sample at index i in the following batch. It is used when you want to connect one batch to the next, so that the second batch continues from the state the first batch ended with. In the case of stateful=False, each batch starts the first time-step layer from a zero state.

Model Architecture and Training:

It is a multi-class classification problem: for a given input character, the model outputs one of the possible characters. The model produces a distribution over the 86 characters after every input character and, based on probability, decides the final output character. Next, we feed C(i+1) to the model and it generates the character C(i+2). This continues until all batches of characters from the whole dataset have been fed.
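To make the architecture concrete, here is a rough Keras sketch of the kind of model described above. The batch size (16), the 256 LSTM units and the 86-character vocabulary come from the text; the sequence length, embedding size and number of LSTM layers are my own assumptions, so treat this as an illustration rather than the exact training script:

# Rough sketch of the char-RNN described above (several hyperparameters are assumed)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, TimeDistributed, Dense, Activation

BATCH_SIZE = 16   # sequences fed in parallel, as in the batch setup above
SEQ_LENGTH = 64   # characters per training sequence (assumption)
VOCAB_SIZE = 86   # unique characters in the ABC-notation vocabulary

model = Sequential()
# Map each character id to a dense vector (the embedding size is an assumption)
model.add(Embedding(VOCAB_SIZE, 512, batch_input_shape=(BATCH_SIZE, SEQ_LENGTH)))
# Stacked LSTM layers: stateful=True carries the hidden state across batches,
# return_sequences=True yields one output vector per input character
for _ in range(3):
    model.add(LSTM(256, return_sequences=True, stateful=True))
# Time-distributed dense layer: a prediction over all 86 characters at every time step
model.add(TimeDistributed(Dense(VOCAB_SIZE)))
model.add(Activation("softmax"))

model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()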
Output: Open the following link and paste your generated music into the given space in order to play it.

For Tabla music: if you are able to encode each sequence as characters, then you can use the above char-RNN model. Please read the following blog for a detailed understanding.

MIDI music generation: here we will use the Music21 Python library to read a MIDI file and convert it into a sequence of events. Please read the following blog for a detailed understanding.

Models other than char-RNN (very recent blog): it is a survey blog covering all the Neural Network-based models apart from char-RNN. Please follow it if you want to explore further.

Google project on generating music: based on TensorFlow and LSTM, a project by Google researchers.

Reference: Google Images (for the images); the remaining links are given in their respective sections.

========Thanks (love to hear from your side)=========

Find the detailed code on my GitHub account…
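For the MIDI route mentioned above, here is a small sketch of reading a MIDI file with music21 and flattening it into a sequence of note/chord events. The filename is a placeholder, and the snippet follows the commonly used music21 pattern rather than any particular author's exact code:

# Illustrative sketch: turning a MIDI file into a sequence of events with music21
from music21 import converter, note, chord

midi = converter.parse("example.mid")  # placeholder path to a MIDI file
events = []
for element in midi.flat.notes:
    if isinstance(element, note.Note):
        # a single note: record its pitch, e.g. "C4"
        events.append(str(element.pitch))
    elif isinstance(element, chord.Chord):
        # a chord: record its pitch classes joined by dots, e.g. "0.4.7"
        events.append(".".join(str(n) for n in element.normalOrder))

print(events[:20])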
https://medium.com/analytics-vidhya/music-generation-using-deep-learning-a2b2848ab177
['Rana Singh']
2019-12-16 04:45:40.965000+00:00
['Deep Learning', 'Artificial Intelligence', 'Mathematics', 'Music', 'Machine Learning']
The Rise and Fall of “Social” Media
The Rise and Fall of “Social” Media Broken promises of an ever-connected utopia we don’t even want Thomas Cole, The Course of Empire — Destruction (1836) Imagine a future in which your Instagram, Twitter, and Facebook are so polluted with content by corporate interests that it’s nearly impossible to excavate the content by your friends and family: the real people with whom we were promised social media would make it easier to “stay connected.” This future may not sound too distant. This is probably because, by now, most of us accept that social media is chiefly an avenue for big business marketing as easily as if it were an inborn, unquestionable fact — as though social media in its current form had been bestowed upon us by a divine creator. Just as we all accept that politicians lie yet still we must elect them, we all accept that social media is first and foremost an economic tool that rewards branding and monetization, profits off its users’ time, attention, and personal information — and yet we must participate. The alternative is virtual invisibility, real-world obscurity. But this reality is a far cry from the new dawn of democracy and community envisioned by the hippies and psychedelic cyberpunks who pioneered the World Wide Web; as author and theorist Douglas Rushkoff put it, “The folks who really saw in the internet a way to turn on everybody. We couldn’t get everybody to take acid…but get everybody on the internet, and they will have that all-is-one, connected experience.” The Course of Empire — The Arcadian State (1834) In the days before the zenith of tech billionaires, social networking was still abuzz with that possibility. But those of us who jumped on the first generation of Myspace in 2003 knew that the intrigue of social media lie in the freedom to forge our own identities. Myspace was successful not as a means of staying in touch with family and up to date on friends, but as a platform for autonomy and self-expression. Connection in the early days of social media meant cultivating and asserting (or escaping) our Selves in a virtual world. Mark Zuckerberg, too, likes to claim something along the lines of “connection” as the motivation behind his company. But by the time Facebook was opened to the public around 2006, the Internet was already undergoing a functional repurposing by tech entrepreneurs and investors — from a dream of unity and self-expression to a vehicle for exponential financial gain. This repurposing had been blueprinted in an influential article published in Wired magazine in 1997. The article offered convincing foresight into the monetary potential of the Internet and ubiquitous personal computers: We are watching the beginnings of a global economic boom on a scale never experienced before. We have entered a period of sustained growth that could eventually double the world’s economy every dozen years.…[Historians] will chronicle the 40-year period from 1980 to 2020 as the key years of a remarkable transformation. Notably missing from this forecast are glimpses of the interconnected cooperative envisioned by the Internet’s renegade creators. But Wired was right about the economic implications of digital connectedness. As predicted, social networking sites became, decisively, the new frontier for ad agencies and marketers. For three years, Myspace was the world’s most visited social network. Until 2008, when Facebook eclipsed Myspace in the same category. 
In 2009, MediaPost noted that “The shift reflects the emergence of Facebook this year as the premiere social networking property for marketers.” At the time, this was an unprecedented accomplishment. As noted by Fortune, before Facebook, “the notion of social networking ads as big business was a fantasy.” And while Myspace’s popularity contracted, Facebook’s viewership and ad revenue ballooned. In 2010, Facebook accounted for a quarter of all U.S. ad dollars, “gaining market share at the expense of MySpace,” who, as we know, never recovered. The Course of Empire — The Consummation of Empire (1836) In retrospect, it’s no surprise that entrepreneurs and opportunists looked on at the hippie prospect of digital connectedness with dollar signs in their eyes. After all, in a capitalist society, individual people ever-connected on a tangible plain could be described one way as a breeding ground for exploitation. If it’s any indication: This quarter, Facebook’s stock will reach a record high, despite the unprecedentedly large $3 billion fine Facebook is awaiting from the FTC for dodging around in the shadows and egregiously failing its users for the umpteenth time. The fine made waves more because it is a drop in the bucket for the $550 billion behemoth than because it is a record-setting penalty. Facebook’s grotesque net worth and reckless ethics just show that its purpose has always been profit at all costs — and so the groundwork was laid for all social media to come after it. Zuckerberg may tout the lofty ideal of “connected experience” as the motivation behind his juggernaut corporation, but he definitely wasn’t going after the “all-is-one” effect of an acid trip. After all, whereas psychedelic drugs engender in us a feeling of connection with nature, our own spirit, and our place as human beings in the natural order, social media simply plugs us directly into the motherboard of a digital commercial superpower — whose scope and influence rival that of any government, and whose primary objective is to engorge us with branded content, all under the pretense of “connection.” So to whom or what, exactly, are we connected? I think few would say each other. It’s no secret that despite plans for unity, and business spiels about connectedness, social media has made us individually feel more isolated from one another and lonelier than ever. It’s old news that social interaction on “social” media is fickle at best, lethally cruel at worst. And with commercial content sitting so closely to our own on the digital landscape, the line between where each begins and ends is becoming increasingly blurry. Perhaps, if the Internet is indeed like an acid trip, then the state of social media today is something resembling MKUltra: a tool of the people harnessed by the powerful in order to control the people in turn. For this reason, to lament losing so-called “real” content by our friends and families on our feeds — content that has been mathematically buried by evolving algorithms — is to miss the point. Social media is not and never has been supplemental to human connection, and certainly does not replace it. The Internet by way of social media was never going to cultivate true oneness, despite the aims of its psychedelic pioneers. Social media cannot provide the spiritual enrichment of a well-received acid trip — not in an environment ruled by profit and power. And it’s this spiritual enrichment that’s needed for humanity to truly connect. 
I think the necessary question is not why do we put up with it — social media does have aesthetic and cultural value (think a living, breathing fashion magazine). But with its pervasive influence, its effects are insidious. The question is how do we begin to reconstruct our collective spirit? How do we, as a society, free ourselves from the vice grip of commercialism, branding, and precarious economic growth? So that we may get back in sync with the natural world, take stock of the damage, and salvage what’s left. So that we may truly connect with one another, and with our universal values. The Course of Empire — Desolation (1836) Increasingly lately, I’m reminded of Plato’s story of Atlantis: an advanced and prosperous city, whose people, once generous and good-hearted, became possessed by power and excessive wealth. As punishment for their spiraling greed, the gods assailed the island with earthquakes and floods, and the city and all its riches sank to the bottom of the sea. There is work to be done. Once we collectively trace our steps and acknowledge that as a society, we’ve been following an ouroboros path of greed and spiritual corrosion — not connectedness with one another, as we’ve been made to believe — only then can we get our bearings, and decide with a clear head where we want to go from here. Yes, social media has been usurped by corporate powers. But even in its purest incarnation, is it really what we need? Or is it time for something else, something better.
https://madelinemcgary.medium.com/the-rise-and-fall-of-social-media-f1b411aec4f6
['Madeline Mcgary']
2020-10-15 02:18:31.489000+00:00
['Society', 'Humanity', 'Facebook', 'Tech', 'Social Media']
Break Your App into Composable Modules
Over the years, my company’s main application has been under continuous development, evolving and gaining many features. Naturally, as time went by, our engineering team developed many frameworks and utilities that were needed for the app. For example — performance tracking, background worker system, analytics, storage access and even our own ORM, and much more. This application is designed as a monolith. All of those frameworks are built on the same codebase with the app, and in many cases they are even coupled to the application’s domain-specific code. As our engineering team (and company as a whole) matured, we started to get requirements for new applications — it could be an in-house dashboards app or managing deployments system, but also real new products that required actual production systems to support them. We started designing those systems, and guess what — we found out that we needed storage, analytics, performance tracking and all that stuff here as well… This is where things get interesting. What is the biggest benefit of a system that has been around for a few years? It has seen production. It met real users. With traffic. It was under load. It went through a lot of optimizations and improvements over the years. Over time, systems as a whole become more robust, and as do their frameworks. At this point we started to think differently. We can no longer write frameworks that are bound to a certain application. In order to really scale things out, we need to build reusable modules and libraries, which will provide the building blocks for all our applications. We’ve changed our mind-set to module thinking The plan was to break our main app’s frameworks into separate and composable modules that have great API. The process in high level is: Identify and map. It turns out it’s not always simple to think about which frameworks you have. You may have the obvious ones like I mentioned earlier. But you may also discover along that way that you once wrote that cool utility class for handling REST API call to 3rd parties, which is also usable for the next application you are building. It turns out it’s not always simple to think about which frameworks you have. You may have the obvious ones like I mentioned earlier. But you may also discover along that way that you once wrote that cool utility class for handling REST API call to 3rd parties, which is also usable for the next application you are building. Extract. Write each framework as a separate project with its own API. The idea is that the next application we are going to build can be easily based on a selected group of components (you’re not always going to need them all). A recommended way to do this, is to first extract it to a separate package/assembly/jar in the same project, but this time without direct dependency by the application. This way you can stay in context while refactoring, and at the same time decouple things relatively quickly. Write each framework as a separate project with its own API. The idea is that the next application we are going to build can be easily based on a selected group of components (you’re not always going to need them all). A recommended way to do this, is to first extract it to a separate package/assembly/jar in the same project, but this time without direct dependency by the application. This way you can stay in context while refactoring, and at the same time decouple things relatively quickly. Use. After extracting, try integrating it back to the application - this time as a 3rd party library. 
You will be amazed how bad your API is when you are the user :) After extracting, try integrating it back to the application - this time as a 3rd party library. You will be amazed how bad your API is when you are the user :) Fix your API. Improve the API until you’re satisfied. Improve the API until you’re satisfied. Document. Now that you have an awesome API, go ahead and add a README.md file to the repository, so that the module will be easy to integrate and use in future applications. Technology and development process Each module is written in its own project, and has its own tests, with its own CI process. The project is visible to everyone in R&D in a public repo in our internal Bitbucket. We use a package manager in order to manage versioning and dependencies between modules. Each push to master automatically releases a new version of the module to the package manager, and each consumer can decide whether and when he wants to upgrade. The key here is an easy and frictionless development process. If it’s not easy, it won’t happen. From this point, it will be very easy to push changes and upgrade the modules. The benefits of composable modules architecture Reduced development time Reduced creation time of new applications by creating a unified toolbox, from which the modules can be combined with the requirements of each app. Reduced maintenance by writing once and using everywhere, instead of copy/pasting around. The bug fixes and upgrades are applied in one place. Reduced boilerplate by creating better and understandable API. Prior to extraction, some of the modules were coupled to the specific app domain. Extracting the module, forced us to think about improving the API usage to be more reusable, and to uncouple the module from its domain. R&D growth and organization memory Many of these modules were written a long time ago, by people who already left the company. Addressing those modules forces us to get to know them and have a deep understanding of them. This enables us to master our tools (which is something I believe in), and take them for granted, and this will help us to grow as a team. Increased quality of frameworks and apps Since each module exists and is maintained in one place, and widely used across different apps, they automatically become more mature and stable, since they have more clients and receive more feedback. They also have their own tests and CI, which is a major win. Some ‘do’s and ‘don’ts’ we’ve learned along the way: Have tests and CI for all modules. Obviously. No exceptions. Obviously. No exceptions. Don’t extract a module for the sake of extraction. ROI. Extract only if there’s another consumer for this module. ROI. Extract only if there’s another consumer for this module. Don’t let the same engineer extract all components. This won’t scale, we want to share as much knowledge as we can. Haven’t you heard about the bus factor? This won’t scale, we want to share as much knowledge as we can. Haven’t you heard about the bus factor? Each component should be handled by more than one engineer. Again, the bus factor. Again, the bus factor. Don’t create deep dependencies between your modules. Create a flat hierarchy of modules in order to avoid dependency hell. Here’s a great post about this subject. What’s next for us? In the future, we would like to have a true owners/contributors model. 
In this model, each module owner/s will be responsible for accepting PRs for his module, and will be responsible for all aspects of the projects such as creating and leading the vision and giving an introduction about it for new developers. A contributor can actually be anyone from the R&D team. This will be a true open source mindset. This model can bring a lot of benefits, not only technical, but this post is getting to long so maybe next time ;) Better visibility We need to find a way to expose a “catalog” of the different kind of modules. This will create a very dev-friendly ecosystem for our project. Imagine a web-based portal that the engineers in your team can just browse through, see what possibilities are there, who the owner of each module is, and which applications are using a certain module. “You said open source — why not do the real deal?” Maybe we will… Stay tuned :) Closing Breaking our app into composable modules helped us in many aspects. We reduced development time by introducing a way to select the building blocks of an app and start rolling, reducing boilerplate, and focus our time and effort on domain logic and bringing business value. We also got to know our frameworks better by going over the codebase and getting our hands dirty. We made them better by creating great APIs. The beautiful thing is that new frameworks are now written like this from Day One :) Cheers.
https://medium.com/sears-israel/break-your-app-into-composable-modules-8f8306235e52
['Leeran Yarhi']
2017-12-15 15:11:19.777000+00:00
['Architecture', 'Software Development', 'Tech Culture', 'Design', 'Engineering']
This fascinating photo is of a
This fascinating photo is of a suit called The Wildman. No one knows the purpose of this 18th-century suit of armor. It may have been used for bear hunting or worse — bear baiting. It could also be a costume for a festival or a piece of folk art. As of today, it’s displayed in The Menil Collection in Texas, along with other interesting historical artifacts. As far as we know, it’s the only suit of its kind. One thing is for sure. Whoever wore this was not going to be getting a lot of hugs.
https://medium.com/rule-of-one/no-one-knows-the-purpose-of-this-suit-18-century-of-armor-e37c00a5294
['Toni Tails']
2020-12-29 13:22:20.269000+00:00
['Culture', 'History', 'Creativity', 'Art', 'Productivity']
Attached To The Familiar
Rumi says that if a drop of the wine of vision could rinse our eyes, everywhere we looked, we would weep with wonder. Sadly, many of us are stuck in the confines of our culture and beliefs, and are unaware of the splendor that lies beyond what we know, beyond our perceptions and attachments. “One way to expand our awareness, is to travel and experience a variety of unfamiliar cultures […] Another way to embrace the mystery and beauty of life is to learn the art of letting go of all that stands in the way of our inner development: for example, a belief that does not serve the common good, an argument that serves no purpose except saving face, a relationship that is toxic, a grudge that depletes our being.” (Sacred Laughter of The Sufis) What are your thoughts on this matter? Leave some feedback if you feel like it or submit something in response to this article and tag it under “storytelling”. Take care! See you next Thursday with brand new challenges.
https://medium.com/know-thyself-heal-thyself/attached-to-the-familiar-70944e702ea0
['𝘋𝘪𝘢𝘯𝘢 𝘊.']
2020-12-17 09:59:26.962000+00:00
['Storytelling', 'Short Story', 'Parable', 'Energy', 'Creativity']
The case of the missing deno_modules
When running code in Deno with external dependencies for the first time, you might have noticed some downloading of packages taking place. Also, if the file was a TypeScript file, you might have seen some indication of TypeScript-to-JavaScript compilation taking place. However, the second time you ran the file, none of that took place and the code just ran immediately. You start to look around for any created folders or files that might be responsible for the immediate execution of the code. But you find nothing. Where are my deno_modules? Let's shed some light on the mystery. Enter DENO_DIR. By default, DENO_DIR is located in $HOME/.deno. However, since it is an ENV variable, you can customise this. I also found cached Deno files in $HOME/Library/Caches/deno. DENO_DIR is structured roughly as sketched at the end of this note. The case of the missing deno_modules is solved. 🕵️ Hope you learned something new about Deno reading this. Happy Hacking!
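As promised above, here is an approximate sketch of the DENO_DIR layout. The exact contents vary between Deno versions, so take the directory names as an approximation rather than gospel:

$HOME/.deno
├── deps/   # cached remote imports, organised by protocol and host (e.g. https/deno.land/...)
└── gen/    # JavaScript emitted when your TypeScript files were compiled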
https://medium.com/dev-genius/the-case-of-the-missing-deno-modules-8484ac6d529
['Daniel Bark']
2020-06-19 13:38:44.537000+00:00
['Nodejs', 'JavaScript', 'Software Development', 'Typescript', 'Deno']
How The Indian Government Failed India in The Crises
How The Indian Government Failed India in The Crises #PMcares doesn’t mean the PM cares Photo by Prashanth Pinha on Unsplash ‘जो जहाँ है वहीं रुक जाये, 21 दिन तक’, translating this to English it says “stop, wherever you are, just stop, for 21 days". These words were spoken by the Hon’ble Prime Minister of India Mr. Narendra Modi. It’s interesting to know the time: 8:00 PM. He requested the people of India to lock themselves inside their homes for 21 days, suddenly, with no prior notice. But what about the ones who were homeless? Lockdowns, which were imposed in most of the countries of the world to fight the COVID crises, took place in the country with the second largest population in the world on 21 July 2020. Stating the fact that India is still a developing nation, has around 7% of its population living below the poverty line. This approximates to around 10 million people earning less than 40$ a month. Most of these people are daily wage workers, which means they get paid every day for the work they do and have no job security. The beginning On the night of 21st, PM of India announced that the country would witness a nation-wide lockdown for a period of three weeks. The lockdown announced at eight in the night would come to effect from twelve. It gave a shock to the Indian population as there was no prior notice for the same. People stuck in different cities for work purposes were left unconsidered. While it was easier for the middle class and the upper classes to survive the lockdown, who had a house, money in their accounts and food to fill their bellies, it turned out to be a mess for the lower classes to survive, who had no job security and were mostly daily wage workers such as labourers. All they were left with was absolutely nothing in their hands. Their voices were suppressed and they were left to die on the streets. Only if, they were given some time to return to their respective houses, that were present in different states and cities, miles away from where they worked, some of them would have survived. In the crises After the announcement of the sudden pause in the lives of the people, the whole nation lost its peace. The media glorified the Government’s decision stating “prevention is better than cure”. Indeed, the saying is true but how does it justify creating another disaster to suppress one. Then, the Government stated that they had taken a very wise decision by announcing the sudden lockdown. Ironically, India has become the 2nd worst country hit by the crises, that proves how wise the decision was. Absolutely a lockdown was needed but not at the cost of lives of thousands of people, who died with hunger, homeless, and hopeless. It could have been issued with a prior notice to give everyone sufficient time to arrange their requirements for a period of 21 days which further extended to another 3 weeks. The Government failed miserably, addressed unimportant problems and glorified its decisions with illogical justifications. Some of the evident disasters that took place and left unheard were as follows: 1. Frontline workers' safety left unnoticed The nation witnessed incidents such as lighting of candles and clapping hands and utensils in respect of the people working in the pandemic such as doctors, nurses, policemen, and many others who worked when the whole world was paused. But they got nothing beyond the respect that was given on just the two above mentioned incidents. Most hospitals were not provided with PPE kits, masks, gloves for doctors, nurses and staff members. 
An insufficient number of beds were present and no proper arrangements. The Government only focused on its own promotions, hiding facts and statistics from the people of India. They were told that everything was fine, in place and there was no need to worry. The frontline workers were left unheard in their demands to keep their safety in mind. If you moved out on the streets and visited a hospital, only then you would realize what the actual situation was. Many died trying to save the lives of others due to lack of safety arrangements made by the Government. 2. Hiding facts and statistics There are some serious issues with the statistics the Government provides to the people of India. In a recent YouTube video that I watched (shot in the crises), when a person visited the hospital to find out the situation there, he was terrified. Families receiving no further information about the patient once admitted. Doctors working without masks and kits. Death numbers were lied about. After enquiring the watchman of the hospital he said that at least 10 deaths took place each day at the hospital. Think! 10 deaths in just one hospital of one city, imagine how many cities are there in the entire country and how many hospitals. And Government provides stats stating the death rate to be 1000 deaths per day. Later the Government announced a package of worth 265 billion $ for the Indian people, of which no one received a single penny. Huge donations were made by people in the name of a fund( PMCares fund) organized to help people in the crises. But the money donated gained no update and there is no data where all the money was gone. 3. The Youth and citizens left unheard PM organizes a session known as ‘Mann ki Baat', which translates to ‘a talk from the heart’ every few days. In this session, he addresses the problems of the country and presents his views on the same. But ironically it’s not well understood whether the talk is about the topic he wants to address or is it about what the people want to be addressed? Let me explain it further, in the middle of the pandemic he talked about ‘toys’, ‘mud utensils’ and everything unrelated and insignificant as compared to the crises. While the citizens kept trending hashtags of the topics such as student exams, loss of jobs, and lack of facilities provided in the hospitals, all he could find to talk about was ‘toys’. It feels sad to know that in the world’s largest democracy, India, the people’s voice is suppressed. In the end Real stories don’t have a happy ending. And neither does this one. With thousands left to die on the roads, people were left with no better options. They walked thousands of miles crossing state boundaries, remained hungry for days, with no food and water and no money in their pockets. They walked endlessly in the hope to reach their home someday. Thousands lost their jobs and their loved ones. But even today, if you search on the internet about India’s Corona crises, you will rarely find news about the dark sides. Only, what you will see is the glorification of false facts with 0 logical reasons stated. It's important to not blindly trust what is told and showed, but take a step and find it out yourself. It’s pathetic to know the real story, cause everything that glitters is not gold.
https://medium.com/politically-speaking/how-the-indian-government-failed-india-in-the-crises-c6c2a7223cb5
['Niyati Jain']
2020-11-23 13:03:29.651000+00:00
['Coronavirus', 'World', 'Health', 'Politics', 'Leadership']
Bernie Sander’s Dreams Are a Movement, Not a Personality Cult
Bernie Sander’s Dreams Are a Movement, Not a Personality Cult There is hope for the future. I love Bernie Sander’s political ideas, and I hate that he did not win the nomination and go on to win the presidency. He is a caring, intelligent guy. He wants to serve. He eschews power. He really and truly wants a society where equality and justice mean something, the least of jobs can be life sustaining, health care is a human right and not something to be held ransom by corporate thieves, there is affordable housing and education, corporations are denied the purchase of legislation or political offices, the wealthy pay their fair share of taxes, we begin to restore our planet’s climate, and drug laws aren’t adding to the millions incarcerated in a class war waged on the poor and minorities. His campaign is over, but his values and beliefs and the fire he ignited will burn bright into the future. One must keep in mind this is a movement, not a personality cult like Trump’s. As such, it does not need Bernie. If Bernie decides to retire, there are others that think and feel like Bernie to fill his shoes. Notably, Alexandria Ocasio-Cortez will be eligible for the Presidency in 2024. Biden better watch out. Upcoming generations of voters are realizing the current Democratic Party is the Republican Party Light. They may act a little smarter and more humane than their Republican counterparts, but they owe a monetary fealty to corporations and wealthy individuals just like Republicans do. Along with health care and climate, the elimination of the influence of money in government is the key to turning around our corrupt political system. It was one of those things Bernie was most focused on. Without eliminating this exchange of money for power those other things are not obtainable. Who, except for a bunch of crooks, would think it was okay to legalize bribery like we’ve done in the U.S.? They’ve been doing it so long they no longer comprehend, or think about, the immorality. The pandemic has pointed out how appropriate Bernie’s political ideas are for this day and age. He may be offered the position of Secretary of Labor in the Biden administration. So don’t give up hope. Stay safe and alive and keep that fire kindled for the day it joins other small fires to become a bonfire celebrating the arrival of justice and equality to American society. Here’s to Bernie or AOC in 2024.
https://medium.com/age-of-awareness/bernie-sanders-dreams-are-a-movement-not-a-personality-cult-14942fd7a3e
['Glen Hendrix']
2020-12-13 02:46:23.198000+00:00
['Politics', 'Society', 'Life', 'Future', 'Leadership']
Google Play Music Is Dead. Long Live Spotify And Apple Music
Google Play Music Is Dead. Long Live Spotify And Apple Music As another product bites the dust, Google now stands at the risk of killing its own music business Images from Google, altered We knew this was coming. Google had been planning to draw curtains over their decade-old music and podcast streaming service for a while now. Now when the time has finally arrived, this all feels so sudden. However, it isn’t too surprising. The search engine giant has such a long history of discontinuing products that there’s a whole graveyard named after them. Yet, seeing the doom of Google Play Music really hurts. After all, only last year, it was the default music player app shipped across millions of Android devices. Today as Google begins forcing users to switch over to the newer YouTube Music app, it's hard to digest the fact that the much-loved Google Music product is officially dead. Now, this would have been fine if YouTube Music was an equal substitution for Google Play Music. But the thing is, currently, YT Music doesn’t fill that void. Instead, it's a service that primarily focuses on boosting video viewing time rather than music playlists. This makes Google’s whole strategy of replacing a working product by shoehorning it into another one a very questionable decision. It won’t be an overstretch to say that Google’s current move could inadvertently benefit its competitors even more.
https://medium.com/big-tech/google-play-music-is-dead-long-live-spotify-and-apple-music-b298228225fc
['Anupam Chugh']
2020-10-31 18:43:26.082000+00:00
['Business', 'Marketing', 'Google', 'Technology', 'Social Media']
What I Told My 9-Year-Old About Coronavirus
What I Told My 9-Year-Old About Coronavirus The virus will have a lasting impact on our children’s sense of safety Photo: Tetra Images/Getty Images “I heard 20% of kids are going to die of coronavirus,” my nine-year-old daughter told me matter-of-factly last week. She’d heard that rumor from another classmate, the day before her Brooklyn school was shut down. I explained to her that no, children would not be dying of this virus — that, in fact, kids seemed to be the safest of all of us. It was just that things would be different for a while—like her school closing—out of an abundance of caution. “Okay,” she replied, “I thought that sounded wrong.” Then she got back to asking about playing Minecraft. I’m glad that Layla seemed reassured, but it was a reminder that as adults are panicking, our children are listening. Closely. I’ve seen lots of advice about how to entertain and teach children in the event of long school closures — how we’re supposed to keep to a schedule and maintain normalcy and boundaries. But I haven’t heard any advice on how to explain — to children who are old enough to understand that something is very wrong — what exactly is happening to the world right now. Especially when we know so little ourselves. As adults are panicking, our children are listening. Closely. I can reassure my daughter that she likely won’t get ill because coronavirus is most dangerous for elderly and sick people, but that doesn’t make her feel any better about her grandparents. I can tell her that by washing our hands and walking instead of taking the subway we’re avoiding germs, but she can tell that the virus must be pretty serious for us to be taking extra measures like this. “We don’t walk to school to not get the flu,” she said. She sees the lines at the grocery stores. She notices the people in masks and rubber gloves as we walk around Brooklyn. She even knows I bought a mask for her; one in a flower print that can be readjusted for small faces. I’ve told her as close to the truth as I can: That lots of people are getting sick, and though she’s not in danger, lots of other people are, and we’re nervous that there won’t be enough doctors or rooms in the hospitals. She seems to be taking it all in stride, aside from that initial fear brought on by her misinformed classmate. She’s glad to be having playdates with friends, not knowing how parents are scrambling in the background to make sure all our kids have things to do all day. She’s even happy that the neighborhood playground is mostly empty: “The tire swing is never free!” And so she’s doing all right. What really worries me, though, is not how she will handle the next few weeks; it’s how she will live through the next few years. What eats at me as a parent is not knowing if this is simply her new normal. If she’ll never have the relative health and environmental stability I grew up with. If the idea of going to a concert without a mask on will seem like a fantasy to her. If learning in a classroom alongside her friends and peers will be seen as a privilege rather than a given. I can tell her that the coronavirus will pass and that she will be fine. I feel confident that’s mostly true. But I can’t tell her that the end of this particular virus will be the end of emergencies like it — that’s what breaks my heart. I can’t even tell her when or if she’ll go back to school. A lot of us have taken our country’s stability for granted. I only wish she would be able to do the same.
https://gen.medium.com/what-i-told-my-9-year-old-about-coronavirus-80bac715bb8d
['Jessica Valenti']
2020-03-16 11:47:00.093000+00:00
['Parenting', 'Jessica Valenti', 'Family', 'Coronavirus', 'Health']
Why Is Burger King Asking You to Eat at McDonalds
Now Burger King asking you to order from McDonald's is akin to a Republican asking you to vote for Biden. Or Coca-Cola asking you to have a Pepsi. It just doesn't happen. To get a little perspective, and to understand the intense rivalry between the two giants, let's dig into some of the bizarre Burger King campaigns that have sucked all the juices out of your Big Macs. The Whopper Detour What do you do if you have to make people download your app on their phones? You give an offer, a discount or a freebie. Likewise, Burger King gave away the Whopper for just 1 cent. But being able to gorge on the Whopper after downloading the Burger King app was not the reason this marketing campaign won the Titanium, Direct and Grand Prix awards at the 2019 Cannes Lions International. The Whopper Detour The Whopper was available for next to nothing only if you downloaded the app and then ordered the Whopper within 600 ft of a McDonald's restaurant. A clever mix of technology — geofencing, if you were wondering — and marketing ingenuity. It's not that Burger King had created a print ad or a TV commercial that trolled its rival — a ploy we often saw during the cola wars. It was the fact that people themselves were trolling McDonald's that made the campaign so brilliant. People were literally sitting in McDonald's parking lots, ordering Whoppers. What can be more embarrassing for a behemoth such as McDonald's than its own employees pointing customers to the nearest Burger King joint? Never Trust a Clown From McDonald's parking lots, enter the movie hall. Stephen King fans will remember the movie IT. There was much fanfare about the movie and people went in hordes to watch it. But no one knew that along with the horrifying chills, while munching popcorn and slurping Coke, they would also get a shot of marketing ingenuity. As the movie finished and the end credits started rolling, a message flashed on the screen which said, "Moral of the Story: Never Trust a Clown." Bang! The entire audience went bonkers. Never Trust a Clown Burger King called it their longest ad ever. And indeed it was. Their message fit right into the movie's context and delivered a painful low blow to their rival just as people were about to dash out of the hall. They topped off the campaign by recasting the McDonald's tagline as I am Loving IT, from the original I'm lovin' it. Along with the buns, Burger King really knows how to spice up its puns. Size Matters Do you know that the Whopper is much bigger than McDonald's Big Mac? You might, or you might not. Some people don't have an eye for detail. So Burger King cleared the air once and for all with its "Whopper of a Secret" campaign. A Whopper of a Secret The tongue-in-cheek campaign showed people that the Big Mac was hidden behind every Whopper in all of their advertisements. But because the Big Mac is so tiny compared to the Whopper, you could not see it. One Roasted Kanye Meal Please Kanye West's association with alt-right trolls such as Candace Owens, and his pro-Trump positions, have often landed him in controversy. He even nominated himself for this year's presidential election. Talk about wacky ideas! As aptly described in this article in Fast Company, he had veered from being a "revered creative who often makes controversial statements" to "toxic alt right fartcloud." So when his tweet saying "McDonald's is my favorite restaurant" was picked up by the Burger King social listening algorithm, they latched onto the opportunity to quickly turn the tables.
They tweeted the caption "Explains a Lot" over Kanye West's tweet. The tweet instantly grabbed the eyeballs of thousands and became the most liked branded tweet of all time. Burger King's response to Kanye West's tweet An Offer They Can't Refuse The recent friendly gesture from Burger King is not something totally out of the blue. Back in 2015, Burger King had come up with another such campaign called the "peace offering" or the "McWhopper" campaign. The idea behind the campaign was to create awareness of International Peace Day, which is celebrated on 21 September every year. Burger King proposed that McDonald's should set aside their differences, and that they both should offer a joint burger called the McWhopper. The proceeds from the sales would go to the non-profit Peace One Day. For its sheer brilliance, the McWhopper was named the king of all media at the Cannes International Festival for Creativity and walked away with the coveted Grand Prix award. The McWhopper Campaign Burger King took the first step and even created a website with kick-ass content for the new burger. But McDonald's did not find the campaign amusing and politely declined the proposal, for which it faced severe backlash. The genius of the campaign was that Burger King would have gained social mileage no matter what McDonald's response had been. McDonald's had to choose the lesser of two evils. If it had accepted the offer, Burger King still would have had the upper hand of being the one to come up with the offer. And if McDonald's rejected it — which it did — it would face a backlash, which it did. The headline of this article in Forbes sums up what people felt about McDonald's after the refusal. "McDonalds Chooses Pride Over Peace With Burger King's McWhoppers offer." King of Hearts Burger King has mastered the art of fine topical marketing. Be it the IT campaign or the Kanye roast, Burger King frequently creates a lasting impact by finding contexts that help further its messages. It rides the wave with perfection. And now, when the virus has sucked the life out of economies the world over, leaving thousands without work and pushing millions back into poverty, Burger King is spreading the message of how important it is to help each other. By setting an example, it is nudging people to help others in this time of chaos. It's bringing empathy back into vogue. Maybe it is the future of marketing. Or rather, it should be the future of marketing. The ROI of campaigns should not only be measured in terms of impressions or increase in footfall but also in terms of the social change the campaign stirred. Brands change lifestyles, they create habits. They are the millennial religions that billions follow to know what's moral and what's not. They help us make sense of the chaotic world that surrounds us. So, along with the awareness and interest that brands generate for their products, they should also work frequently towards creating a better society. As Ann Tran, a brand consultant and TEDx speaker, says,
https://medium.com/swlh/why-is-burger-king-asking-you-to-eat-at-mcdonalds-589d2dbf01f6
['Mehboob Khan']
2020-11-11 05:59:09.158000+00:00
['Marketing', 'Creativity', 'McDonalds', 'Burger King', 'Advertising']
You’re Creating a New Programming Language — What Will the Syntax Look Like?
You’re Creating a New Programming Language — What Will the Syntax Look Like? I asked a bunch of programmers about their favorite syntax — here’s what they said A little while ago I decided to have a little fun and wrote an article titled “My Favorite Pieces of Syntax in 8 Different Programming Languages.” I published it and then decided to share it on a subreddit — r/ProgrammingLanguages. This led to an interesting discussion about programming language syntax, as users shared their own favorites. It left me with no choice: I had to write a new article with my favorite pieces of syntax from the r/ProgrammingLanguages community.
https://medium.com/better-programming/youre-creating-a-new-programming-language-what-will-the-syntax-look-like-35199d2a44e9
['Yakko Majuri']
2020-09-26 22:27:42.961000+00:00
['JavaScript', 'Technology', 'Programming', 'Software Engineering', 'Python']
Creative Construction: How Artists and Engineers Collaborate
From the monumental Picasso sculpture in Chicago’s Daley Plaza, to Isamu Noguchi’s Red Cube in Lower Manhattan, SOM’s history of integrating iconic artworks into a wide variety of building sites is well documented. Perhaps less known, however, is the role that engineers have played in helping to realize various works of art. In some cases, SOM has developed structural engineering solutions for executing the artist’s vision. In others, an exploration of technical issues has led the artist to refine or expand their ideas. Over the past decade, SOM’s structural engineers have developed tools, techniques, and approaches that have enhanced the impact of public art installed around the world — from a university campus in Omaha, Nebraska, to the lobby of the world’s tallest building in Dubai. In the summer of 2018, a number of these recent collaborations were featured as part of the exhibition “Poetic Structure: Art + Engineering + Architecture,” at the MAK Center for Art and Architecture in Los Angeles. The contents of this show are now making their way to Mexico City for the annual MEXTRÓPOLI Festival, in March 2019. In anticipation of the opening, we invite you to explore the engineering of art (and the art of engineering) across five creative collaborations. Janet Echelman, “Dream Catcher” (2017) Known for her colorful fiber net sculptures, Janet Echelman describes her installations as a “team sport,” with contributions from engineers, architects, and more. When she was commissioned to create a public artwork for The Jeremy Hotel in West Hollywood, Echelman envisioned a sculpture suspended above an open-air plaza between the hotel’s two buildings on the Sunset Strip. As the architects and engineers for the project, SOM worked closely with Echelman to seamlessly integrate the artwork into the new development. Titled “Dream Catcher,” the sculpture is inspired by the idea of dreaming hotel guests — its interweaving forms of fiber netting are modeled after brainwave activity that occurs during dream states. Suspended 100 feet in the air, the translucent sculpture turns The Jeremy’s plaza into a dynamic and ethereal public space, while making a striking contribution to the streetscape of West Hollywood.
https://som.medium.com/creative-construction-how-artists-and-engineers-collaborate-ef4a80f0b6c5
[]
2019-02-26 21:23:27.798000+00:00
['Design', 'Collaboration', 'Architecture', 'Art', 'Engineering']
Future Leaders: Samuel Parkinson, Senior Engineer
‘Future Leaders’ is a series of blog posts by the Financial Times in which we interview our team members and ask them how they got into technology, what they are working on and what they want to do in the future. Everyone has a different perspective, story and experience to share. This series will feature colleagues working in our Product & Technology teams. You can also connect with us on Twitter at @lifeatFT. Samuel Parkinson Hi Sam, what is your current role at the FT and what do you spend most of your time doing at work? I am a Senior Engineer within Customer Products, I’ve been at the FT for about two and a half years now. I’m actually leading a team at the moment, so I’m the tech lead for a team called ‘the Enabling Technologies Group’, which in essence is the tooling team for FT.com and the Apps. I spent a lot of time making sure we know what we’re working on and that we’re doing the right thing. Is that more management-focussed than engineering then? Yeah, it’s a nice mix, I think it’s about 50–50 at the moment. It’s definitely a lot more management than I have done before but it’s been quite an interesting jump into the deep end and there’s a lot to learn there. I really like the human side of all of that. So, how did you get into the technology industry? I have always been fascinated by technology, since I was very young. I remember I got asked this question once in an interview, I think it was my FT interview actually. I always used to tinker around and when I was young I was very lucky to have the support from my mum to go out and build a computer, I don’t know where that came from! I think I was a teenager and it was for gaming. My mum said yes, she trusted me when I had no idea what I was doing… but we went out, got the parts for this computer, and I managed to put it together and lo and behold, it actually worked. I surprised myself and it all went on from there. I didn’t study computing at school, it was a pretty terrible department when I was at school so it didn’t seem worth it. I didn’t do very well in my A-levels either and went through clearing for university, managed to get a place at Brunel University for their foundation course doing IT. So I spent five years at Brunel, doing the foundation of IT and then computer science with a year in industry. I think I learned more in that foundation year than the rest of the four years I spent at uni but it was really good, and that was the gateway. Then I went straight into the tech industry. What was your first job after uni? I was an engineer at graze.com. They do snacks through the post and when I first joined the office was in a house, they had a kitchen and it was pretty cool. That was a great time. The Enabling Technologies Group Christmas party, at the Crystal Maze 🔮 That sounds fun! Since you’ve been at the FT, what is the project you’ve worked on that you are most proud of? My most recent favourite, there’s quite a lot actually, was whilst I was on secondment with the Operations & Reliability team. There were two main projects going on at the time and I was tasked with helping out with their monitoring. We have hundreds of systems running at the FT, all of which we need to know if they’re working or not. The system and dashboard that we were using to do the monitoring on was very old and on its last legs. So, the O&R team were looking to refresh the monitoring and make it more reliable, and so I did the discovery work for what system might replace it. We built a tool called, ‘Heimdall’. I didn’t pick that name! 
Heimdall is the watchman of Asgard in Norse mythology. I think he’s part of the Avenger Marvel comics as well. I think he’s the guy in the movies with the big sword that overlooks everything. Under the hood it uses a tool called Prometheus to go out and check each one of our systems across the FT. Like that connection, cool, so that’s been your favourite project to date Yeah, it worked really well, I spent three months on it, heads down, with a great team. It’s currently looking after all of our systems and working really well. Sounds useful! With that in mind, what is the biggest lesson you have learned in recent years? The thing that keeps coming up, again and again, and something that is not always easy for me, is how difficult communication is and [the importance of] getting it right. Going back to the Heimdall project, that was good, it was communicated well and it was handed over to the team, it was a great success because of that, more than anything else. There’s been a lot of hard work in some cases because communication wasn’t good and getting that right is really hard. I think the biggest part of that is communication and collaboration with all the different disciplines, that is the crux of the problem. Looking at different types of communication, do you think communication within a team is more important or communication from a team is more important? Both are important. I think in our team we have got the internal communication down now. It wasn’t always perfect but it’s definitely getting better. For us it’s about communicating as a team outwards and that’s where we’ll hopefully improve. Sam’s team spent an afternoon in a board room overlooking the Thames Ok, final question! What would you like to do in the future? That is a great question. So, the next step up for me would be the ‘Principal Engineer’ role and I really like the sound of what it involves, working across teams, across departments and across disciplines, definitely playing into that communication aspect too. I think it would be a really interesting role. Are there any projects or developments in particular you’re interested in? I think it comes down to what we can improve within our department. We have a lot of work to do and I think the theme would be to do ‘more with less’. We spend a fair bit of time on toil at the moment, a lot of time rotating AWS keys or deleting entries from databases and it’s expensive for engineers to be spending their time on this kind of stuff. So, doing more with less, that would be a really good focus. Ok, food for thought.. Thanks, Sam! Interviewee: Samuel Parkinson Interviewer: Georgina Murray
https://medium.com/ft-product-technology/future-leaders-samuel-parkinson-senior-engineer-c056653749d2
['Ft Product']
2019-03-21 11:44:39.473000+00:00
['Learning', 'Tech', 'Communication', 'AWS', 'Engineering']
Building, authenticating and hosting VueJS App with AWS Amplify
Getting started with VueJS and AWS Amplify The rising popularity of and love for VueJS is no surprise to many developers. With over 160k stars on Github, developers and companies big and small have been adopting it since the very beginning. With the ease of developing a responsive and impressive frontend application using VueJS, it is no wonder that developers are looking for the same development experience and turning their attention to cloud services and libraries that automatically spin up and connect cloud capabilities together. AWS Amplify is an open-source library that supports many modern JavaScript frameworks and native mobile platforms, i.e. iOS and Android. The Amplify CLI also gives the developer the ability to create a whole set of serverless, feature-rich capabilities such as Auth, API, Analytics, Storage and Hosting with best practices in the AWS environment, all from the comfort of their own terminal. Since it is open-source and community-driven, any developer who is interested in contributing to AWS Amplify development or its communities can easily vote and create tickets in its respective Github repositories, and see each project's roadmap (e.g. Amplify JS Roadmap & Projects) as well. Project setup In this project, we will set up a brand new VueJS app, use your own AWS account, and add the Vue CLI and AWS Amplify CLI via your favorite terminal. If you are not familiar with VueJS or AWS, it is okay to take a step back to understand the concept and art of building modern apps first and not get your hands dirty. This guide is meant for everyone and I will add notes and explanations to better guide each step. You can also refer to this Github repository for the source code as we go along. With the ease of developing a responsive and impressive frontend application using VueJS, it is no wonder that developers are looking for the same development experience… NodeJS version To make sure that your terminal is compatible with the latest AWS Amplify CLI (which requires Node 10.0 and above) and Vue CLI (Node 8.9 and above, 8.11.0+ recommended), you need to be running at least Node v10 in your terminal. Enter the following command to make sure that you are running the latest node version: node -v If you realize that you are not using the latest node/npm, you can use the Node Version Manager (NVM) to install and select the node version you need. You can enter the following command to install and use node version 13. nvm install 13 && nvm use 13 Install @vue/cli This is optional for actually starting development with VueJS, but in this project I am going to use @vue/cli to quickly create a Vue project with additional features such as Babel or TypeScript transpilation, ESLint integration and end-to-end testing. yarn global add @vue/cli # OR npm install -g @vue/cli Install @aws-amplify/cli The Amplify Command Line Interface (CLI) is a unified toolchain to create AWS cloud services for your app. Let's go ahead and install the Amplify CLI. yarn global add @aws-amplify/cli # OR npm install -g @aws-amplify/cli A new Vue app Let's start with a new Vue app by running the following command: vue create aws-amplify-vuejs I am also going to select the default preset given by @vue/cli, which adds Babel and ESLint to the Vue project. VueJS default preset Once you have let the CLI finish its job, you should be able to go inside the project folder to begin the next step. cd aws-amplify-vuejs You can use your favorite IDE to open up the project and take a look at the new project.
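For orientation, the entry file generated by the default preset usually looks something like the sketch below (the exact contents may differ slightly depending on your @vue/cli version); this is the file we will modify shortly to configure Amplify.

// src/main.js as generated by the default @vue/cli preset (before any Amplify changes)
import Vue from "vue";
import App from "./App.vue";

// Silences the "running in development mode" console tip
Vue.config.productionTip = false;

// Mount the root App component onto the #app element in public/index.html
new Vue({
  render: h => h(App)
}).$mount("#app");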
In this example, I am going to use VSCode for development. Your VueJS New App Now you can start your VueJS development and see your changes in your local browser yarn serve It is Amplify time Within the new VueJS app, I am going to configure my cloud services using the Amplify CLI by entering the following command: amplify init You can now step through and key in the respective values, such as your project name as it will appear in the AWS console and the environment you are in. In addition, if you do not have an AWS profile in your terminal, you will also be entering the AWS credentials for your AWS profile in your terminal. amplify init After the amplify configuration, you should be able to see a new folder named amplify and a new aws-exports.js in your VueJS app. These are auto-generated by the AWS Amplify CLI, which will add the new identifiers and credentials needed for every new component you add via the CLI. Auto-generated folder and files Adding AWS Amplify to your VueJS app Once you have set up the VueJS app and the AWS environment, you now need to add aws-amplify and its UI components to your VueJS app. yarn add aws-amplify @aws-amplify/ui-vue # OR npm install -S aws-amplify @aws-amplify/ui-vue After that, you can add the following JavaScript code to your main.js to configure the AWS Amplify libraries in your VueJS app. import '@aws-amplify/ui-vue'; import Amplify from 'aws-amplify'; import awsconfig from './aws-exports'; Amplify.configure(awsconfig); Add Amplify in your VueJS main.js Add Auth to your VueJS app After you have configured Amplify within the VueJS app, you are now ready to add features to your newly built app. We are going to add Auth to your app. Let's go back to the terminal and run this command in your project folder to add auth features and services. amplify add auth amplify add auth In this example, we are not going to configure a lot and complicate the whole auth process. However, you can easily re-configure this in the future with the following command. amplify update auth After you have added auth, you can now see what kind of resources will be added to your AWS environment by entering the following command. amplify status amplify status After you have confirmed the resources to be added to your AWS environment, you can now enter the following command to push your changes to the AWS cloud. amplify push This should take some time, as new auth resources will be created in your AWS environment following best practices, such as new IAM roles with the minimum permissions required, and all of these changes and resources can be seen in AWS CloudFormation. Back to the VueJS app, you can now open up your App.vue to edit the code and include the auth features you need. Default code in App.vue Firstly, let's refer to the AWS Amplify documentation and take a look at what needs to be added. <template> <amplify-authenticator> <div> My App <amplify-sign-out></amplify-sign-out> </div> </amplify-authenticator> </template> In this code example, you can see that my app is wrapped in <amplify-authenticator> and I can also add a dedicated sign out button with <amplify-sign-out> . If you copy-paste the code given in the example, your app will look like this after logging in. Amplify Auth, after logging in It is not that beautiful to begin with, so instead I am going to use VueJS's HelloWorld component as my main screen and add the amplify auth component around it.
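As a small aside that is not part of the original walkthrough: once <amplify-authenticator> wraps your app, you can also inspect the signed-in user programmatically with the Auth module that ships in the same aws-amplify package, for example to greet the user by name. A minimal sketch, with the created() hook and the console logging chosen purely for illustration:

// Illustrative addition to the <script> block of App.vue, not from the original article
import { Auth } from "aws-amplify";

export default {
  name: "App",
  async created() {
    try {
      // Resolves with the Cognito user once someone has signed in
      const user = await Auth.currentAuthenticatedUser();
      console.log("Signed in as:", user.username);
    } catch (err) {
      // Rejects while nobody is signed in yet
      console.log("No authenticated user yet");
    }
  }
};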
Add auth code for VueJS Next, I want to customize the default Amplify Auth screen a little bit by adding extra HTML header text, and since I do not want the mega-large sign out button, I attached a new id to the component to better style it. In addition, you can add more customization and style by theming too. Custom attributes for Amplify Auth Now that we have added the auth features and given them some customization and styling, I can test the app via http://localhost:8080/. If you have previously closed/cancelled the node process in your terminal, enter the following command to start development again. yarn serve Amplify Auth Screen with some customization Now, you can go ahead and create a new account. You can also use the following credentials to see the whole login process. username: demo password: P@ssw0rd Have you noticed that you literally did not code any of these auth functionalities, and all we did was add the respective auth components from the Amplify UI libraries? After you have successfully logged in, you should be able to see your main component with a sign out button below it. VueJS with Amplify Auth Your final JavaScript code with Amplify auth in App.vue should look like the code given below. <template> <div id="app"> <amplify-authenticator> <amplify-sign-in header-text="My Custom Sign In Text" slot="sign-in"></amplify-sign-in> <div> <img alt="Vue logo" src="./assets/logo.png" /> <HelloWorld msg="Welcome to Your Vue.js App" /> <div id="amplify-signout"> <amplify-sign-out></amplify-sign-out> </div> </div> </amplify-authenticator> </div> </template> <script> import HelloWorld from "./components/HelloWorld.vue"; export default { name: "App", components: { HelloWorld } }; </script> <style> #app { font-family: Avenir, Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; margin-top: 60px; } #amplify-signout { width: 100px; margin: 0 auto; } </style> Add Hosting to your Vue App Now that you have built your first beautiful VueJS app, you want to host it somewhere. The AWS Amplify CLI and Amplify Console have got you covered! Fortunately, you can add hosting via the following command and choose git-based deployment under Hosting with Amplify Console . amplify add hosting amplify add hosting Your browser should now open and lead you to the AWS Amplify Console. Firstly, you can link the app to your code repository and in this case, I am using my personal Github account to version all my code. Amplify Console, Add Repository Branch Under step 2, you now need to select the Git repository and its branch so that Amplify Console knows what to deploy. Amplify Console, Configure Build Settings Lastly, under step 3, you can review your configuration one more time before Save and Deploy . Amplify Console, Review and Deploy You should now see your changes being deployed via the AWS Amplify Console, and note that for every change you push to your repository under the branch selected earlier, Amplify Console will also automatically deploy your changes to your portal. First Deployment at Amplify Console Now, for Vue Router to work properly, you have to add a rewrite rule under Amplify Console with source address </^[^.]+$|\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/> , target address /index.html and type 200 for the status code.
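That rewrite is only needed because a single-page app served from a CDN has no file matching deep links such as /about, so every unknown path has to fall back to index.html. The default preset used in this walkthrough does not include vue-router, but if you add it later, a history-mode setup (the kind of routing that relies on this rewrite) would look roughly like the sketch below; the single route is just a placeholder.

// Hypothetical src/router.js, only relevant if you add vue-router to this project later
import Vue from "vue";
import VueRouter from "vue-router";
import HelloWorld from "./components/HelloWorld.vue";

Vue.use(VueRouter);

// "history" mode gives clean URLs without the # prefix,
// which is exactly why the Amplify Console rewrite to /index.html is needed.
export default new VueRouter({
  mode: "history",
  routes: [
    { path: "/", component: HelloWorld }
  ]
});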
Rewrites and redirects under Amplify Console Going back to your terminal, you can proceed by pressing ENTER and you should be able to see your portal URL within the terminal too. Back to your amplify command BONUS: I also went back to the Amplify Console to update my DNS and point vuejs.bryanchua.io to the portal. Add Custom DNS at Amplify Console When you enter the command amplify status , you should be able to see your updated hosting URL too. amplify status What's next In this small project, I have only covered the basic functionalities of both VueJS and AWS Amplify, and showed how easy it is to add server-side functionality to your frontend code and beautify it. The beauty of using these advanced libraries is that there is literally little to no code to write. You can now focus more time on user experience (UX) and delivering value via your app. Any thoughts about it? What do you want to see next? Feel free to reach out if you have any questions and I am available on LinkedIn and Twitter.
https://medium.com/swlh/building-authenticating-and-hosting-vuejs-app-with-aws-amplify-7285b7a8e90c
['Bryan Chua']
2020-06-05 03:53:12.614000+00:00
['Software Development', 'Vuejs', 'Authentication', 'Front End Development', 'AWS']
Technologies & Tools to Watch in 2021
An opinionated list of technologies to assess for DevOps Engineers and SREs Photo by NESA by Makers on Unsplash Managing Cloud Services via Kubernetes CRDs All three major cloud providers (AWS/Azure/GCP) now support a way to provision and manage cloud services from Kubernetes via custom resource definitions (CRDs). AWS has AWS Controllers for Kubernetes (ACK) in developer preview; Azure recently launched Azure Service Operator (deprecating Open Service Broker for Azure); GCP has Config Connector as an add-on to GKE. While Infrastructure-as-Code (IaC) tools such as Terraform, Ansible, and Puppet are still widely used to manage cloud infrastructure, the support for Kubernetes-managed cloud services suggests a huge shift towards organizations making Kubernetes the focal point of their cloud infrastructure. The upside here is that developers can now use the same tools to manage Kubernetes applications and other cloud services using the Kubernetes APIs, potentially simplifying the workflow. However, this tight coupling of Kubernetes and the rest of your cloud workloads may not be desired depending on your current infrastructure workflow or Kubernetes expertise. Pulumi Speaking of IaC tools, Pulumi recently announced its $37.5 million Series B funding to challenge Terraform’s dominance in this space. Unlike traditional IaC products, Pulumi opted to enable developers to write infrastructure code in their favorite languages (e.g. Go, Python, Javascript) instead of pushing yet-another JSON/YAML-based domain-specific language. This choice allows Pulumi to be more flexible than Terraform and enables developers to make use of existing testing frameworks to validate their infrastructure. However, given its nascency, Pulumi’s community is quite small compared to Terraform. Terragrunt & TFSEC Unlike Pulumi, Terraform addresses some of its deficiencies through its open-source community. Terragrunt is a thin wrapper around Terraform to help teams manage large Terraform projects by organizing configurations into versioned modules. Terragrunt implements some best practices laid out by Gruntwork co-founder Yevgeniy Brikman. While Terragrunt is fully open-source, Gruntwork recently announced commercial support for enterprises looking for more production-ready services. TFSEC is another open-source tool that complements Terraform projects. It uses static analysis to flag potential security threats to infrastructure code. As security bakes more into the DevSecOps movement, tools like tfsec will become more important in the future. Tekton The CI/CD market is saturated with established tools like Jenkins and Spinnaker as well as emergent cloud-native tools like ArgoCD. Tekton is a new player in this space, focused on Kubernetes workloads. Tekton started as part of the Knative project and was later donated to the Continuous Delivery Foundation (CDF). The differentiating factor for Tekton is that it defines the pipelines via Kubernetes CRDs. This allows pipelines to inherit native Kubernetes features (e.g. rollbacks) and also integrate with existing tools such as Jenkins X or ArgoCD to support complex, end-to-end CI/CD pipelines. Trivy Vulnerability scanning for containers is becoming an important part of any CI/CD pipelines. Like the CI/CD market, there are plenty of open-source and commercial tools including Docker Bench for Security, Clair, Cilium, Anchore Engine, and Falco. Trivy is a tool from Aqua Security that not only scans the container but also the underlying packages in the code. 
Combined with Aqua Security's kube-bench, organizations can more easily bake security into the application development workflow. ShellCheck Despite tremendous improvements in the infrastructure tooling space, shell scripts remain in various workflows to get simple tasks done. ShellCheck is a static analysis tool to lint shell scripts for syntax errors and common mistakes. ShellCheck can run from the web, the terminal/CI, as well as in your favorite text editor (e.g. Vim, Sublime, Atom, VS Code). Pitest/Stryker Pitest (Java) and Stryker (JavaScript, C#, Scala) both implement mutation testing in their respective languages. Mutation testing gauges the quality of a test suite by injecting faults (mutations) into the code under test and checking whether the tests still pass. A good unit test should fail when the code it covers is mutated. Mutation testing complements test coverage to detect both untested and inadequately tested code (a small illustrative sketch follows after this list). Litmus Back in 2011, Netflix popularized chaos engineering with Chaos Monkey as part of the Simian Army suite of tools. In the Kubernetes world, there are plenty of chaos engineering tools such as chaoskube, kube-monkey, and PowerfulSeal as well as commercial platforms like Gremlin. I want to highlight Litmus as a mature chaos engineering solution that is extensible and easy to use. Litmus is a lightweight Kubernetes operator consisting of ChaosEngine, ChaosExperiment, and ChaosResult CRDs. Litmus supports fine-grained experiments that go beyond simply killing random pods in a namespace and displays the results via the ChaosResult CRD instead of leaving observability up to the users.
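Circling back to the Pitest/Stryker entry above, here is a tiny illustrative example of the idea in plain JavaScript, deliberately not tied to any specific tool's API: a mutation tool rewrites the code under test, for example flipping >= into >, and a healthy test suite should fail against that mutant.

// Illustrative only: the kind of boundary test that "kills" a typical mutant
const assert = require("assert");

// Code under test
function isAdult(age) {
  return age >= 18;
}

// A mutation tool might generate the mutant `return age > 18;`.
// The first assertion fails against that mutant, so the mutant is "killed".
assert.strictEqual(isAdult(18), true);
// This one guards against mutants like `return age <= 18;`.
assert.strictEqual(isAdult(17), false);

console.log("boundary checks passed");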
https://medium.com/dev-genius/technologies-tools-to-watch-in-2021-a216dfc30f25
['Yitaek Hwang']
2020-11-16 08:19:52.992000+00:00
['Kubernetes', 'DevOps', 'Software Engineering', 'Software Development']
The Problem With Unsolicited Redesigns
I recently wrote an article about how side projects will benefit you and your Design career. One of the most popular types of side projects is the unsolicited redesign. They’re all over Dribbble, Medium, and Design Twitter. In fact, they’re so popular they got their own website. That’s not to say the love for them is unanimous though. Once you read some of the comments on these redesigns, or do a quick search on Medium, you’ll quickly discover two very different perspectives: One half of the Design community loves and recommends unsolicited redesigns for all the value they bring — the other half absolutely hates them. While I can certainly empathize with both camps, I know a properly executed unsolicited redesign can provide all the benefits I mentioned in my previous article. An unsolicited redesign can be great practice, give you content for your portfolio, let you try out new tools and methods, explore your creativity, and be a lot of fun. You might not have a great chance of turning your redesign into a business, but you will find a few case studies describing how someone landed a job or a client through an unsolicited redesign. With that being said, I don’t think your goal should be to get hired by the company who’s product you’re redesigning. This simply happens too rarely for it to be a viable strategy. As for my empathy for the haters, let’s get into the problem with unsolicited redesigns. The problem with unsolicited redesigns Designing in the real world is a balancing act between creative freedom and constraints of various kinds. You have a finite amount of time to complete a project, certain features or UI decisions may be out of scope for technical reasons, the budget will obviously put a cap on your research and other activities, and your Design System will limit your creative freedom. On top of all this, you will undoubtedly run into a series of challenges along the way, forcing you to cut corners, negotiate compromises with stakeholders, and settle for “great, but not perfect”. The fact is this: When you do an unsolicited redesign of an existing app or website, you're shielded from all the constraints and challenges faced by the Designers and Developers who created the original. If you’re 1) aware of this, and 2) keep your unsolicited redesign to yourself, you’re home safe. That last part is not what most (aspiring) Designers do though, nor is it what I recommend. Before I address the latter, here's why failing to account for, or at least acknowledge, the real-world constraints is a problem: You’re in for a rude awakening when you get your first Design job if you don’t realize beforehand that constraints and challenges are part of your job. It’s important that you know this so that your decision to get into Design is based on a proper understanding of the field. Knowing about the constraints and challenges of doing Design in the real world is an important skill and valuable experience. It’s part of what a potential employer looks at, alongside your other Design skills, when considering you for a job. While you won’t have a ton of experience when you’re first starting out in the field, it’s important to be aware, and show your awareness, of the difference between an unsolicited redesign without any constraints, and a real-world Design project. You might offend the Designers and Developers who created the original. This is arguably what brings out the most hate toward unsolicited redesigns. 
Since the people who created the original are painfully aware of all the constraints, compromises, and seemingly awesome ideas that had to be left out, seeing an unsolicited redesign from an outsider can feel like a slap in the face. Your unsolicited redesign might read as “here’s what you did wrong”, “here’s what you should have done”, “I’m clearly a better Designer than you”. Especially for someone just starting out in the field, I wouldn’t recommend this entrance in the Design community. Luckily, the problems above are fairly easy to avoid. How to get your unsolicited redesign right Don’t think of it as a redesign The whole problem stems from the idea of remaking something that was already made by others. In other words, design something with an existing company’s name and logo on it, and you’re guaranteed a ton of criticism from Designers who have any kind of relationship with the given product or website. Don’t think “redesign” — just think “design”. Instead of redesigning Spotify, why not simply design a great music player? Turn a feature into a standalone app Facebook, Twitter, Spotify, Airbnb, Uber, and even Medium are among the most popular subjects of unsolicited redesigns. However, due to the age, size, and complexity of these products, the teams working on them are dealing with an enormous amount of legacy, constraints, bureaucracy, and scrutiny from various sides that you can’t possibly account for in your one-person redesign project. Don’t assume you can or attempt to do so. Whether you think of it as a “redesign” or not, instead of attempting a redesign of Facebook in its entirety, pick out an individual feature or part of the system, and reimagine the design of that. How about an online platform to form and get together in groups of likeminded people? Or an app for organizing and promoting events? Or maybe just a messaging app? Basically, use an existing app as the starting point for your project, but then turn it into something much more original. Design a similar concept from scratch, or turn a feature into a standalone app. If you follow this advice, but especially if you don’t, there are a couple of other things you can do to improve your unsolicited redesign: Assume the team behind the app or website already considered your solution Some very skilled people already worked on this and ended up with a solution different from what you consider to be a better one. There’s probably a good explanation behind that. Stay humble and respectful, and avoid coming across as that guy or girl who thinks they're better than the Design team at Airbnb or Twitter. For an excellent example of this, check out this unsolicited redesign of the Medium Claps feature. Consider the constraints and challenges you would have to deal with in the real world Show that you understand how easy an unsolicited redesign is, compared to what the Designers and Developers went through to create the original. Describe how financial, technical and other business constraints could impact a project like this in the real world. Explain how you could, hypothetically, deal with these constraints and challenges, were you a Designer on the actual team. How would you have kicked off the project? How would you have approached Research to decide on the most important features? Who in the organization would you have talked with to uncover any constraints and challenges? How and when would you have evaluated the technical feasibility of your ideas? How feasible do you actually think your solution is? 
How about testing the usability and desirability of it? You would ideally have done some of these things on your own, even in an unsolicited redesign project, but it’s okay to make assumptions and describe “the real world scenario” to strengthen your case even further.
https://medium.com/swlh/the-problem-with-unsolicited-redesigns-5c6d230354ed
['Christian Jensen']
2020-05-13 14:01:39.168000+00:00
['Design', 'Creativity', 'UX', 'Side Project', 'Portfolio']
Alone You Can Make a Difference, United We Can Transform
The Reality is We put so much time and effort into bettering ourselves. ➰Mentally we look to meditate, do yoga, and be mindful. ➰Physically we exercise, take care of our skin and hair, and spend tons of time and money shopping to look good. ➰Financially we save and invest money to make more money. ➰Professionally we learn new skills, upgrade our qualifications, take new courses, and network with others. Are we doing enough individually to heal the earth? Are we investing our time and money in making decisions and giving back to that which has provided so bountifully?
https://medium.com/illumination/alone-you-can-make-a-difference-united-we-can-transform-4c38bb31fb9d
['Chetna Jai']
2020-12-12 22:41:54.897000+00:00
['Environment', 'Illumination', 'Earth', 'Future', 'Climate Change']
Meet the Medium “Elevators”
Meet the Medium “Elevators” Stephanie Georgopulos and Harris Sockel spend their days searching for great writing on Medium Stephanie Georgopulos and Harris Sockel are editors at Medium who started out using the platform back in 2013, writing and publishing stories that explored the human condition. Now, they work to “elevate” with independent, self-published writers on Medium. Georgopulos and Sockel scour Medium to find great stories they think deserve a wider audience than they may otherwise be getting. They reach out to the writer and work with them on improving their piece, then distribute it broadly through Medium’s topics, publications, homepage, emails, and social channels. Medium VP, Editorial Siobhan O’Connor explained the various ways that the editorial team works with writers — from the commissioned stories in our monthly magazine to exclusive columnists, plus reported features and insightful essays. She also described how we work with writers self-publishing on Medium — and this interview explains that in greater detail. Hi there. Can you tell us what you do? Harris Sockel: We’re finding great writers on Medium and working with them to develop their stories to reach a wider audience. Basically, we work to find compelling voices and build relationships. How did you begin working at Medium? Stephanie Georgopulos: I’ve been writing on Medium since the site was in beta. I built a publication called Human Parts on Medium back in 2013, and Harris was one of my first contributors. About a year in, I needed help managing the number of submissions I was receiving, and I felt that Harris’s writing embodied the spirit of what I wanted Human Parts to be. We met up for a drink and by the time we left, I had my partner. I joined Medium full time in 2016 as a curator, and you can guess the rest from there. The majority of the writers Harris and I ended up working with on Human Parts started out self-publishing on Medium as well. What we do now — looking for great writing, and not really knowing what we’re going to find when we arrive at work in the morning — originates with that editorial experience. What draws you to this kind of work — collaborating with writers from the platform? Georgopulos: There’s a raw enthusiasm from a lot of these writers, who just had to publish these stories, even without confirmation that payment and readers would be waiting. I’ve been a freelance writer before; I’ve written things just to make fifty dollars. So, I understand there can be a different energy that goes into something you’re writing for an assignment versus something you’re writing for yourself. Sockel: I’ve learned a lot getting to work with experts who write about their industries. Medium is home to doctors, scientists, designers — leaders in their fields. And they don’t necessarily want to be career writers, but they have expertise that’s really valuable. Can you talk a bit about what writers get out of self-publishing on Medium? Sockel: I think writing on Medium means the opportunity to reach a wide audience without the overhead of creating your own blog. You don’t need a reputation, followers, or any type of pre-ordained cred to write here and find people willing to listen. Georgopoulos: If the Medium Partner Program had existed when I was a freelancer, I would’ve had more options. I’ve had lovely relationships with editors, but at the end of the day, they have to commission stories that make sense for their publication and audience. 
So when writing is your primary income, you have to make choices about which ideas to pursue. And I think the Partner Program means writers don’t have to choose — you can pitch a big, ambitious idea to an editor, and you can also write something for which there is no natural, obvious publisher. You can monetize your killed stories, your tweetstorms . . . Medium’s always been good for writing first and placing later, but now you can get paid, too. I think a lot of us have trained ourselves to write what we can sell to publishers, but when you’re “selling” directly to readers, you can create and respond to your own audience rather than borrowing one. This is essentially what we do on social media already, for free. It goes back to the idea that all of our output on digital publishing platforms and even social media websites is a form of work. Georgopulos: Right. On many sites and platforms, the work you’re doing — there are ads being sold against it. Medium has always been . . . people call it longform Twitter, but I don’t think it’s like Twitter at all. It’s a place to parse things, not just throw them out there and forget about them later. I like processing my thoughts through writing without feeling like I need to have something massive to say every single time. Or that I need a news angle just to speak. And I think that’s been a huge problem with the internet, particularly with personal essays. To sell, it always has to be confessional or Sockel: This huge revelation — Georgopulos: Your most private thoughts. Sometimes it’s okay to find meaning in lightness. What’s an example of a story like that? That felt urgent even if it wasn’t timely? Sockel: “Enjoli” by Kristi Coulter. It’s a very personal (and very funny) essay about getting sober in a culture where everyone seems to be drinking all the time. I remember when I first read it (Steph sent it to me) and it was that feeling of, this is the story that she had to tell, and she’s telling it in her own way — she’s writing it for herself. Georgopulos: And it led to Kristi getting a book deal — her first book of essays, Nothing Good Can Come from This, was published last summer by Macmillan. It’s pretty heady to go to work everyday not knowing what opportunities it might create for someone else. What’s a story that more recently came through that you really enjoyed? Georgopulos: There are so many, but “Living in Deep Time” by Elizabeth Childs Kelly was one I really loved on a personal level. As many women noted during and since, the Kavanaugh confirmation illustrated to women how our culture regards and values our experiences, and frankly, that picture was monstrous. Throughout the trial I read many personal accounts that echoed Christine Blasey Ford’s, a lot of articles making logical arguments about why this confirmation couldn’t move forward. And in the aftermath, when it did, it kind of felt like . . . do our words matter that little? Why even bother? “Living in Deep Time” reminded me why we bother. It reminded me that you can seize power without stealing it. Writers are motivated by different things. How do you tailor your approach for each person? Sockel: Every writer is different. Some want to earn money from their work and build careers in writing. Others want to share expertise they’ve gained from working in another industry, so targeted distribution to a niche audience might be more what they’re looking for. The same goes for editing: some writers want to develop a relationship with an editor, and others want to do their own thing. 
It really depends, and there are all kinds of writers along the spectrum. Our relationships vary depending on the person, their goals, and their work. Tell us about the metered paywall. Georgopulos: The stories we work on are chosen for, and funded by, our subscribers, so everything we work on goes behind our metered paywall. We’re an ad-free platform, so we’re less concerned about every piece getting tons of traffic and more concerned with making sure the readers who invest in a subscription are getting value out of that. Sockel: I think writers are starting to see that if you put something behind the meter, the work can go much further. It’s also just thrilling, personally, to see how much more engagement a piece can get after it goes through our process. Lydia Sohn’s “What Do 90-Somethings Regret Most?” is a great example. Sohn is a minister and writer in San Diego, and in the story she describes interviewing her oldest congregants about their hopes, fears, and regrets. It was obvious Sohn came to the interviews with a lot of empathy (and came away with a new perspective on aging). When I found the story, almost no one had seen it. She’s had a lot of success from that piece, and readers got a lot out of the insights in it. What do you want writers to know about Medium? Sockel: I want more people to understand that you can get paid to write what you want to write. I don’t think people quite get that yet. And this doesn’t just go for essayists — I’m waiting for more independent journalists and industry experts outside of tech to try it out. Georgopulos: And I want people to worry less about “what works” and focus on finding their voice. There’s just really no way to skip the line when it comes to that. The stories that hit me hardest are the ones I didn’t know I wanted, and in my experience those resonate because they’re coming from this one-of-a-kind place only that writer has access to. Their perspective. That lifetime that got them here. That’s what I’m looking for in a story. I’m reading all day long, so something really needs to jump out and have an authentic, fresh voice for me to be able to stick with it from beginning to end. There are only so many hours. Sockel: I probably have a thousand tabs open. Georgopulos: Tabs all day long. It’s extremely exciting and refreshing to find that one where you think, “Ah, this is so good.” We answer writers’ questions in a follow-up post here.
https://blog.medium.com/meet-the-medium-elevators-92ab3c47abc8
['Medium Staff']
2020-08-13 16:05:12.064000+00:00
['Writing Tips', 'Writing', 'Medium', 'Partner Program', 'Creativity']
A Manifesto for the Online Writer Who’s Lost Their Love of Writing
A Manifesto for the Online Writer Who's Lost Their Love of Writing Stop playing the viral slot machine Photo: Alex/Unsplash My kid was anxious to buy his book about animal kingdoms. He's 5 and possesses the consumer certainty only a 5-year-old can have. I like this book. I want this book. Buy this book. I, however, am simultaneously filled with joy and dread when walking into a bookstore. I love the joy of being so close to so many great minds. I despise the dread of deciding which one to bring home. My son tugged at my sleeve. He had his sights set on the free bookmarks at the checkout, not to mention the one book, of all the books, he chose to purchase. I didn't want to leave empty-handed. The pandemic has shuttered all libraries and I was starved for the real feel of paper between my fingers instead of the awkward weight of my Kindle. In my haste I scanned the shelves, looking for something, anything to catch my eye. A red spine. Bird by Bird. Anne Lamott. I grabbed it and paid, falling 86 cents short in cash. The bookstore employee was nice enough to let it slide. My son and I left. He exuberant in his find, me more hesitant. Some instructions on writing and life, the subtitle read. Great, another writer writing about writing, I thought. Little did I realize that this book is the book on writing. When we got home, I opened the front cover, and before finishing the introduction knew with absolute certainty that Lamott wrote this book for me and me alone. Never has anyone spoken so directly to me. Never has anyone rekindled in me a passion for my craft. I started reading Bird by Bird yesterday. I haven't finished it yet, but in my excitement, I sat down to write this morning and what poured forth was the manifesto outlined below. You see, I began this year with a promise to myself: I would make writing my career. I've been writing for over a decade but I never took myself seriously enough to bravely say "I'm a writer" when asked what I do. I didn't know what exactly, career-wise, my writing would entail. I figured the majority of my writing would be published online on various platforms. I'd keep some writing to myself. And I'd possibly dabble in side projects and a book or two. Three-quarters of the year has gone by and although I haven't made a living out of it per se, I've grown addicted to what I call the viral slot machines. It's a fun game. I refresh my browser or app and see what coins, I mean views, come tumbling out. The views, the notifications, the praise, the comments, the highlights. They may seem harmless, even a poor analogy to a slot machine. I'm not frivolously throwing money away, right? That's true in a sense, but the act of playing the viral slot machine does cost me something: my time and my attention. Bird by Bird slapped me across my face. I write not in the hopes of going viral, but for writing's sake. Somewhere this year I've lost that. Could I get back the joy of writing for writing's sake? Could I get back to waking early, before the other members of my household, sitting at my desk, alone with my thoughts, and through a groggy haze of early morning confusion, stringing together words in a coherent order? Could the pleasure of writing derive from the writing itself and not from the off chance that the thing I wrote "broke the internet" so much so that I feel instantly validated inside? Yes, I believe I can. And again, that's the reason for this manifesto. Anne Lamott has spoken to me and now I'm speaking to you, dear writer.
I am speaking to you because I know we are both blessed and cursed by our craft. We are blessed in that the gatekeepers of old are long gone. The powers of the interwebs have created a meritocracy where voices that want to be heard can be heard. Yet we are cursed because to acquire other people’s time and attention we must play a game. A game no one understands. A game without rules. A game that feels like you’re losing until you hit it big. And when you hit it big, you want more. You’ve tasted virality and it’s sweet and sour like a stale bag of Sour Patch Kids. It never feels like enough. This is a call to arms my fellow online writers. We must stop playing the game. We must take back our craft. We must find joy and pleasure in the act of writing, not in the downstream effects it may incur. We must write. Here’s how we will do just that. The Online Writer’s Manifesto
https://medium.com/the-post-grad-survival-guide/a-manifesto-for-the-online-writer-whos-lost-their-love-of-writing-b6489678bcdb
['Declan Wilson']
2020-10-06 07:10:35.192000+00:00
['Creativity', 'Writing Tips', 'Self', 'Work', 'Writing']
Boosting a Fashion Retailer’s Sales Margins by 4.5 M Euro
Boosting a Fashion Retailer’s Sales Margins by 4.5 M Euro See how an AI-based pricing engine improved sales margins for one of the top Norwegian textile companies. During one seasonal sale. With global e-retail revenues projected to grow to 6.54 trillion US dollars in 2022, eCommerce sector was already booming. Now, with countries remaining under lockdown and many people’s lives turning upside down along with their past routines, the current crisis reinforced, once again, the immense potential of e-retail. In the Effects of the COVID-19 Outbreak on Fashion, Apparel and Accesory Ecommerce report, Jake Chatt, head of brand marketing at Nosto stated that: highlighting or showcasing products and collections that are more relevant to people’s new at-home lifestyles can alleviate the stress of trying to find new items that they didn’t think they’d be looking for two weeks ago. Fashion e-retail needs to adopt an effective strategy to match people’s changing needs in real-time now more than ever. However, managing customer-targeted campaigns with current stock status is a complex operation, especially for the big players out there. See how we approached the task for the top fashion retail company in Norway last December. A challenge to optimize sales for a big retail operation Varner Gruppen is one of Northern Europe’s biggest fashion retailers with almost 1400 stores, mainly in Scandinavia. Under one roof, they unite brands like Dressman, Bik Bok, Carlings, Cubus and others. Varner needed to track and adjust all items’ prices as well as run campaigns based on their current stock status. To meet the objective we created an AI-based dynamic pricing and campaign engine. The goal was to optimize sales and, on the other end, to provide customers with the most personalized experience possible. Our main focus was to maintain maximum functionality and efficiency of the tool, especially considering the extent of the project. The engine had to be fast and responsive while handling databases with more than 1 M items. Flexible and effective custom-made Markdown feature The Pricing and Campaign engine was to replace the old functionality based on third-party apps. Now, with the fully custom-made Markdown feature, Varner is able to easily optimize the tool to their current needs as well as seamlessly build additional features. We were able to build a stable application that, just in a few months, increased Varner’s revenue by 4.5 M EUR, demonstrating a proven impact on the sales and campaign optimization. See what our client has to say EL Passion enabled us to achieve our business goals with their solution; we managed to grew our sale on old products with more than 16% (>4.5M EUR) during one seasonal sale alone. Andreas Gallefoss, Product Manager at Varner Gruppen The solution will continue to have a huge impact on Varner’s long term margins. They are knowledgeable experts willing to take ownership of the project, and they delivered a quality solution ready for implementation. A complex ecosystem of tools The whole campaign optimization platform, also built by EL Passion, along with the engine itself is integrated with numerous other Varner internal tools. On the user’s end, the engine provides customers with better-targeted insights on the ongoing campaigns and will offer highly personalized prices in the nearest future. Results? A campaigning platform with test coverage above 95% on both, backend and frontend . . Synchronization with an AI-based price optimization engine. 
Export of extensive, highly configurable price lists to all offline stores. The tech stack behind the project 🤖 Node.js (Nest.js), TypeScript, React.js, Cloud SQL, Elasticsearch, Google Cloud Platform
https://medium.com/elpassion/boosting-a-fashion-retailers-sales-margins-by-4-5-m-euro-dac91f4bd724
['El Passion']
2020-04-17 09:50:21.567000+00:00
['Development', 'AI', 'Retail', 'Business', 'Ecommerce']
We Need To Talk About This M1 Mac Mini: My First Impressions
The new Macs with M1 processors are making headlines in the technology press, and with good reason: Apple has surprised everyone with the bet materialized in its new M1 chips, and among the new machines, the most talked about are the MacBook Air and MacBook Pro. Energy efficiency, performance on the go, battery life… there are many points to consider in a laptop, and that is why those machines have taken the spotlight in much of the media. We have already seen how transparently and simply applications adapt in our first contact with the laptops, so what is different about this Mac mini? This is the first desktop with an Apple Silicon chip, a machine to which we have to connect a monitor, speakers, and other accessories separately. The Mac mini's box and its unboxing leave no room for doubt: as with the laptops, Apple doesn't label its switch to proprietary chips at all. In fact, we don't even have the memory and SSD storage labels; if we want to read the details of the machine we have to look for the fine print. It details that we have a Mac mini "with 8 CPUs, 8 GPUs, 256 GB of storage, and 16 GB of RAM." Nothing else. Connecting all the accessories has not been a problem for me. The initial setup was surprisingly fast, taking less than five minutes from when I first turned on the Mac mini until the macOS Big Sur desktop appeared. The only possible snag with this Mac mini is that we will need a wired keyboard to do the initial configuration, something I was able to solve easily with my USB mechanical keyboard. By default, macOS applies the Retina effect at 4K resolution, turning it into a 1080p monitor. Personally, I preferred to scale that resolution somewhere between 1080p (too big for 27 inches) and the native 4K resolution (too small): I kept the 2560x1440 resolution I already worked with on the 27 inches of my iMac, and thanks to the 4K panel I get anti-aliasing that improves (and quite a lot) the general quality of the image. In general use of the system, I have noticed, and I say this without hesitation, a noticeable increase in overall speed. Intel applications run without us even realizing that they are emulated under the Rosetta layer, and applications already compiled for the M1 chip launch instantly, with a snap of the fingers. It does not matter what application we are talking about, whether it is Twitter or Pixelmator Pro: both start so fast that it is absurd to time it. I am not one of those who will always demand maximum power from this chip, but it is clear to me that I have experienced a leap in performance like rarely before. I'll break down the Geekbench results. Geekbench Results, Mac mini M1 Chip 2020. Source: Geekbench In Geekbench we have slightly better results than the MacBook Air and MacBook Pro, probably thanks to the ventilation that the device has. Although I have to say that I have not heard absolutely any noise from that fan during the tests, the Mac mini has endured them without breaking a sweat. The only effect I have noticed is that the computer warmed slightly in its rear area, and very little at that. During the rest of the activity, such as while writing this article, the computer has stayed cool. In the absence of more time working with it, and while we wait for the new iMacs, I do not hesitate for a second to say that this Mac mini is the almost-perfect desktop for any general user who works at a table many hours a day.
It has ample power even for those who want to edit photos and video, so we could even recommend it to the small professional. The only question I have left is: if this Mac mini is an entry-level model, what does the future hold? What will Macs be like with chips that prioritize performance over efficiency? The transition to Apple Silicon is just the beginning, and the M1 chip is just a glimpse into the future.
https://medium.com/macoclock/we-need-to-talk-about-this-m1-mac-mini-my-first-impressions-a2eb05780ca6
[]
2020-11-27 06:13:31.001000+00:00
['Mac', 'SEO', 'Technology', 'Future', 'Apple']
Set up TensorFlow with Docker + GPU in Minutes
Set up TensorFlow with Docker + GPU in Minutes Along with Jupyter and OpenCV Docker is the best platform to easily install TensorFlow with a GPU. This tutorial aims to demonstrate this and test it on a real-time object recognition application. Docker Image for TensorFlow with GPU Docker is a tool which allows us to pull predefined images. The image we will pull contains TensorFlow and NVIDIA tools as well as OpenCV. The idea is to package all the necessary tools for image processing. With that, we want to be able to run any image processing algorithm within minutes. First of all, we need to install Docker. > curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - > sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > sudo apt-get update > apt-cache policy docker-ce > sudo apt-get install -y docker-ce > sudo systemctl status docker After that, we will need to install nvidia-docker if we want to use the GPU: > wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb > sudo dpkg -i nvidia-docker*.deb At some point, this installation may fail if nvidia-modprobe is not installed; in that case, you can try to run (GPU only): > sudo apt-get install nvidia-modprobe > sudo nvidia-docker-plugin & Eventually, you can run this command to test your installation. Hopefully, you will get the following output (GPU only): > sudo nvidia-docker run --rm nvidia/cuda nvidia-smi Result of nvidia-smi Fetch Image and Launch Jupyter You probably are familiar with Jupyter Notebook. Jupyter Notebook documents are both human-readable documents containing the analysis description and the results (figures, tables, etc.) as well as executable documents which can be run to perform data analysis. Jupyter Notebook can also run distributed algorithms with a GPU. To run a Jupyter notebook with TensorFlow powered by GPU and OpenCV, launch: > sudo nvidia-docker run --rm --name tf1 -p 8888:8888 -p 6006:6006 redaboumahdi/image_processing:gpu jupyter notebook --allow-root If you just want to run a Jupyter notebook with TensorFlow powered by CPU and OpenCV, you can run the following command: > sudo docker run --rm --name tf1 -p 8888:8888 -p 6006:6006 redaboumahdi/image_processing:cpu jupyter notebook --allow-root You will get the following result out of your terminal. Then you can navigate to your localhost and use port 8888; for me, the link looks like this: http://localhost:8888/ You will need to paste your token to identify yourself and access your Jupyter notebooks: 3299304f3cdd149fe0d68ce0a9cb204bfb80c7d4edc42687 And eventually, you will get the following result. You can therefore test your installation by running the Jupyter notebooks. The first link is a hello TensorFlow notebook to get more familiar with this tool. TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is principally used to build deep neural networks. The third link gives an example of using TensorFlow to build a simple fully connected neural network. You can find here a TensorFlow implementation of a convolutional neural network. I highly recommend using a GPU to train CNN / RNN / LSTM networks. Real-Time Object Recognition Now it is time to test our configuration and spend some time with our machine learning algorithms. The following code helps us track objects over frames with our webcam. It is a sample of code taken from the internet; you can find the GitHub repository at the end of the article.
First of all, we need to open access to the X server for our Docker image. There are different ways of doing so. The first one opens access to your X server to anyone. Other methods are described in the links at the end of the article. > xhost +local:root Then we will open a bash shell in our Docker image using this command: > sudo docker run -p 8888:8888 --device /dev/video0 --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -it image_processing bash We will need to clone the GitHub repository, which is a real-time object detector: > git clone https://github.com/datitran/object_detector_app.git && cd object_detector_app/ Finally, you can launch the Python code: > python object_detection_app.py The code that we are using relies on OpenCV. It is known as one of the most used libraries for image processing and is available for C++ as well as Python. You should see the following output: OpenCV will open your webcam and render a video. OpenCV will also find any object in the frame and print the label of the predicted object (a minimal sketch of this kind of capture loop is included at the end of this article). Conclusion I showed how one can use Docker to get your computer ready for image processing. This image contains OpenCV and TensorFlow with either GPU or CPU support. We tested our installation through a real-time object detector. I hope it convinced you that most of what you need to process images is contained in this Docker image. Thank you for following my tutorial. Please don't hesitate to send me any feedback!
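As referenced above, the detector's behaviour boils down to a capture-and-predict loop. The following is a minimal, hypothetical sketch of such a loop using OpenCV only; it is not the code from the linked repository, and the drawn label is a placeholder rather than a real model prediction.

import cv2

# open the default webcam (device 0, the same /dev/video0 passed to docker run)
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # a real detector would run here and return boxes and labels for the frame
    label = "detection placeholder"
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('object detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Running this inside the container is exactly why the docker run command above forwards /dev/video0 and the X11 socket.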
https://medium.com/sicara/tensorflow-gpu-opencv-jupyter-docker-10705b6cd1d
['Reda Boumahdi']
2018-04-15 20:19:24.232000+00:00
['TensorFlow', 'Docker', 'Data Engineering', 'Computer Vision', 'Gpu']
Deploy Machine Learning Models On AWS Lambda
2 — AWS Lambdas : Before we start digging into using this service, let us first define it: AWS Lambda is a compute service that lets you run code without provisioning or managing servers. So what does this mean? In simple words, it means whenever you have a ready-to-deploy machine learning model, AWS lambda will act as the server where your model will be deployed, all what you have to do is, give it the code + the dependencies, and that’s it, it is like pushing your code to a repo. So let me show you how to do that: First, you are going to need the serverless framework — an MIT open-source project — which will be our tools to build our App, so let us start: The steps we will follow are these : Install the serverless framework Create a bucket in AWS Push our trained model to the created bucket Build main.py, the python file that will call our model and do predictions Build the serverless.yml file, in which we will tell the serverless framwork what to do (create the lambda function) Test what we have built locally (generating prediction with our model using the serverless framework) Deploy to AWS. Test the deployed app. These will be the steps we are going to follow in this tutorial in order to deploy our trained model in AWS lambda. So let us start: Important Remark: For the rest of the tutorial, make sure you are always in the directory where the files are, the requirements.txt, the main.py and the saved_adult_model.txt, and since I mentioned it, this is our requirements.txt: lightgbm==2.2.3 numpy==1.17.0 scipy==1.3.0 scikit-learn==0.21.3 2.1 — Install The Serverless Framework : To Install the serverless framwork in ubuntu, first you have to install npm. In order to do that, you can run the following commands in your terminal: curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash - sudo apt-get install nodejs The above commands will install nodejs and also npm. Next you can check that everything was installed correctly by running : $ node -v Which will return the version of nodejs npm -v Which will return the version of npm. Now that we have installed npm, let us install serverless by running the following command: npm install -g serverless You can check that everything is installed successfully by running : serverless If you reached this point with no errors, then congrats, you have serverless installed and you are all set. Let’s us move on to the next step. 2.2— Create A Bucket In AWS : Our next step is to push the model we have trained to an AWS bucket, and for that we need first to create a bucket, so let us do that: Creating a bucket on AWS can be done from the command line using the following code : aws s3api create-bucket --acl private --bucket deploy-lgbm --create-bucket-configuration LocationConstraint=eu-west-1 The above command will create a bucket called deploy-lgbm in a private mode in eu-west-1 location. 2.3 — Push Our Trained Model To The Created Bucket : So, now that our bucket is ready, let us push our trained model to it by running the following command : aws s3 cp saved_adult_model.txt s3://deploy-lgbm/model/saved_model.txt Perfect, now let us move on to the next step, building our main.py python file which we will use to call our model and make predictions. 2.3 — Build The Main.py File: When it comes to deploying in AWS lambdas, the main function of your code is a function called lambda_handler (or any other name we choose to give it, although the standard one is lambda_handler). Now, why this function is the important one? 
That function is the one AWS Lambda will execute each time you invoke it (interact with it). Thus, that function is the one that will receive your input, make the prediction, and return the output. If you have ever worked with AWS Lambda from Cloud9, you will notice that when you create a new lambda function and import it, the standard definition of the lambda function is this: def lambda_handler(event, context): return {'StatusCode': 200, 'body': 'Hello there, this is Lambda'} As you can see, the lambda function expects 2 inputs — an event, and a context: The event will contain the information that we send to the lambda, which will be, in this case, the samples we want to predict (they will be in JSON format). As for the context, it usually contains information about the invocation, function, and execution environment. For this tutorial we won't be using it. So let us summarize what we are going to do in this section: First we need to get our trained model from the bucket, initialize our lightgbm and return it, so we will build a function for that. Then we are going to make predictions with our model, so we are going to build a function for that too. And finally, inside our lambda_handler function we will put all these things together, which means: receive the event, extract the data from the event, get the model, make predictions, and then return the predictions. So simple, right? Now let us build our file: First we will build the get_model() function, which will download the trained lightgbm, then initialize our model and return it: Download the saved model. As you can see, first we created access to our bucket deploy-lgbm using boto3, and then we used the download_file method to download our saved_model.txt and save it to /tmp/test_model.txt (recall that we saved the model in the bucket using the key model/saved_model.txt). All clear? Let us move on then. Now we will build the predict function, the function which will get the model and a data sample, do a prediction and then return it: predict function Let me explain what the above function does: it gets the event, extracts our data from the event, and gives the extracted data to the model to make a prediction. So simple, right? Important remarks: For best practice, always use JSON format to pass your data in the event. In our case things are simple: we extract the data and pass it to the model directly. In most other cases, there will be some processing on the data before you pass it to the model, so you will need another function for that, which you will call before passing the data to the model. Always split your process into multiple functions; we could have put everything in the lambda function, however our code wouldn't be that beautiful anymore. So always use a function when you can. Now the last step is to define our lambda handler function, so let us do that: lambda handler As you can see, it is a very basic function, and it will grow more complex in a real-world project. What it does is clear: get the event, send it to the predict function to get the predictions, and then return the output in the standard format (you should always use that format): a dict with a StatusCode and the results in a body. A consolidated sketch of this main.py is included just below. So, this is the end of this section; let us move on to the next one: building the serverless.yml file.
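Since the embedded code snippets referenced above ("Download the saved model.", "predict function", "lambda handler") did not survive as text, here is a minimal sketch of what the main.py described in this section might look like. It is an illustration rather than the author's exact code: the bucket name, key and /tmp path follow the tutorial, while the LightGBM Booster calls, the numpy conversion and the float cast are assumptions.

import json

import boto3
import lightgbm as lgb
import numpy as np

BUCKET = 'deploy-lgbm'
KEY = 'model/saved_model.txt'
LOCAL_PATH = '/tmp/test_model.txt'


def get_model():
    # download the trained model from the bucket and initialize a LightGBM booster
    s3 = boto3.client('s3')
    s3.download_file(BUCKET, KEY, LOCAL_PATH)
    return lgb.Booster(model_file=LOCAL_PATH)


def predict(event, model):
    # the event body carries the samples to predict as a list of feature rows;
    # through API Gateway it arrives as a raw JSON string, so parse it if needed
    data = event['body']
    if isinstance(data, str):
        data = json.loads(data)
    return model.predict(np.array(data, dtype=float))


def lambda_handler(event, context):
    model = get_model()
    prediction = predict(event, model)
    # cast to float so the returned body is JSON serializable
    return {'StatusCode': 200, 'body': float(prediction[0])}

Loading the model inside the handler keeps the sketch simple; in practice you might cache the booster in a module-level variable so warm invocations skip the S3 download.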
2.4 — Build The Serverless.yml File: As we mentioned at the start of this article, the Serverless framework will be our tool to communicate with AWS and create a lambda function which will act as the server that hosts our trained model. For that we need to tell serverless all the information it needs: Who is the provider? For example, is it AWS or Google? What is the language used? What do we want to create? What roles should it have? etc. All of these instructions we will pass in the serverless.yml file, so let us start building it: First, we will give our service a name, let us say test-deploy: service: test-deploy The next section in our file will be about our provider, in this case AWS; the instructions in the yml file will look like the provider block you can see in the full file at the end of this section. So, what did we do in those lines? Let me explain: We set the name of our provider, which is aws, the language used (python3.6), the region where our lambda is going to be deployed, the deployment bucket that serverless will use to put the package, and the iamRoleStatements, which mean this: Our lambda function is going to download the trained model from a bucket in AWS, and by default it does not have the permission to do that. So we need to give it that permission, and this is why we created a role, so we can give our lambda the permissions it needs (for this case just access to a bucket; in other cases it could be more, and you can consult the AWS documentation for a detailed explanation on the matter). To give another example about roles, let us say that you need to invoke another lambda from this lambda; in that case this lambda needs permission for that, so you would have to add it to the iamRoleStatements. Important remarks: The bucket where we put our model and the bucket used by the lambda should be in the same region (for this tutorial we used eu-west-1); if they are not in the same region it won't work. The next section in our serverless.yml file will be about the function we are going to create (see the functions block in the full file below). First we define some very basic things, like the name and description. We define our handler: Recall what we said about the lambda function, we mentioned that this function will be the one doing all the work. Now this is the point where you tell serverless which function is your lambda_handler; in this case we have defined it with the name lambda_handler in our main.py file, so we put handler: main.lambda_handler. As we said earlier, we can give it whatever name we want; for example, we could name that function hello, but then we would have to put handler: main.hello. We define our event: How are we going to communicate with our lambda function, or in other words, how are we going to trigger (invoke) it? For this tutorial we are going to use http events, which means invoking the lambda function by calling a URL; the method will be POST and the resource will be /predictadult. The next section is about plugins. What does that mean?
Let me explain: So far we have instructed serverless about who our provider is and what our function is. Now, for our code to work, we need the packages to be installed, and we have already listed them in a requirements.txt file, so we need to tell serverless to install those requirements. For that we will use a plugin called serverless-python-requirements. We will add it to our serverless.yml file like this:
plugins:
  - serverless-python-requirements
The last thing we are going to add to our file is an optimization, but why do we need optimization? Let me explain: Lambda has some limits on the maximum size of the package to be uploaded, and the maximum unzipped size allowed is 250 MB. Sometimes we exceed this amount, and to reduce it we can remove some garbage that exists in our packages, which will save us some megabytes. To do this, we instruct serverless by adding the following lines to our serverless.yml file:
custom:
  pythonRequirements:
    slim: true
And that is it, the full serverless.yml file will look like this:
service: test-deploy

plugins:
  - serverless-python-requirements

provider:
  name: aws
  runtime: python3.6
  region: eu-west-1
  deploymentBucket:
    name: deploy-tenserflow
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource:
        - "arn:aws:s3:::deploy-lgbm/*"

custom:
  pythonRequirements:
    slim: true

functions:
  lgbm-lambda:
    name: lgbm-lambda-function
    description: deploy trained lightgbm on aws lambda using serverless
    handler: main.lambda_handler
    events:
      - http: POST /predictadult

Cool, now let us move to the next chapter: testing what we have built locally. 2.5 — Test What We Have Built Locally: So it is testing time. First, make sure your local directory contains the main.py, the requirements.txt, the saved_adult_model.txt and the serverless.yml. Now that our model is ready, as well as our serverless.yml file, let us invoke serverless locally and test that everything is working by running this on the command line: serverless invoke local -f lgbm-lambda -d '{"body":[[3.900e+01, 7.000e+00, 1.300e+01, 4.000e+00, 1.000e+00, 0.000e+00,4.000e+00, 1.000e+00, 2.174e+03, 0.000e+00, 4.000e+01, 3.900e+01]]}' If you followed the steps correctly you should get an output from this command. In this case the output is: { "StatusCode": 200, "body": 0.0687186340046925 } As you can see, we chose the invoke local option, which means we are using our computer, not the cloud. We also passed only 1 sample through the 'body' field (those values are the feature values, not very elegant, right?). So, it seems everything is working locally; now let us deploy our lambda. 2.6 — Deploy To AWS: So, it is deployment time. Once everything is set and working, deploying a lambda is as easy as running this line of command: serverless deploy And that's it, you will start seeing some log messages about the package getting pushed, and you will also see the size of your zipped package. 2.7 — Test The Deployed Model: Once the deploy command has executed with no errors and your lambda is deployed, you will get your endpoint (the URL) which we will use to make predictions. This URL will be something like https://xxx/predictadult. To test our prediction we will send a POST request to that endpoint with the same kind of payload we used for the local invocation (a hedged example of such a request is given just after this section). And that's it. Congrats, you have deployed your model as an AWS Lambda function and it can now serve you. If you faced any error while re-running the above tutorial, you can reach out to me, my contact info is below, I will be very happy to help.
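For reference, the test request mentioned above might look like the following. This is an assumption rather than the author's original command: the xxx placeholder stands for the endpoint printed by serverless deploy, and the payload reuses the feature row from the local invocation.

curl -X POST https://xxx/predictadult \
  -H "Content-Type: application/json" \
  -d '[[39, 7, 13, 4, 1, 0, 4, 1, 2174, 0, 40, 39]]'

Note that when the function is called through API Gateway, event['body'] arrives as the raw request string, which is why the main.py sketch earlier parses it with json.loads before predicting.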
I hope you found this tutorial insightful and practical, and I hope you are feeling like a better data scientist after reading all these words. Until next time, if you have any questions for me, or a suggestion for a future tutorial, my contact info is below. About Me I am the Principal Data Scientist @ Clever Ecommerce Inc, where we help businesses create and manage their Google Ads campaigns with a powerful technology based on Artificial Intelligence. You can reach out to me on LinkedIn or by Gmail: [email protected].
https://medium.com/analytics-vidhya/deploy-machine-learning-models-on-aws-lambda-5969b11616bf
['Oussama Errabia']
2020-03-11 10:46:41.379000+00:00
['AI', 'Data Science', 'Deep Learning', 'Machine Learning', 'AWS']
Earth: Our Cosmic Unicorn
To most of us, our world is simply the place where we live: you’re born, get an education and/or learn a trade, perhaps start a family of your own, pass on some of the knowledge and wisdom you have gained to others, grow old and eventually die. It’s an oversimplification, but that is the common experience of a human on planet Earth. Earth is just where we do “everything”. If you are very lucky, you have opportunities to actually travel around the Earth and visit continents other than the one you were born on, seeing the true vastness of the planet and the variety of its civilizations and biomes. You realize we are many, but we are also one…all of us together on a single sphere of rock, covered with a thin sheen of water, orbiting a massive ball of fire. For a long time, the view that humans (and Earth) were the centre of the cosmos ruled scientific and philosophic thought. Indeed, great minds like Aristotle and Ptolemy supported this model of the universe. Though a near-contemporary of those two, Aristarchus of Samos, had proposed a heliocentric view of the universe, his ideas didn’t receive enough support to stick. It took nearly 1800 years for the heliocentric model to become generally accepted, under the scientific leadership of Nicolaus Copernicus. The Copernican Revolution, as it is known, gained further support over the succeeding century through the work of Johannes Kepler and Tycho Brahe. Galileo’s telescopic observations of Jupiter’s moons definitely put a nail in the coffin of the geocentric model. Isaac Newton then carried forward with the heliocentric model to show that the Earth and other planets in the Solar System orbited the Sun. China ©NASA As telescopic engineering improved, our view of the local universe grew larger and larger. By 1750, Thomas Wright posited that the Milky Way was a tremendous body of stars all held together by gravity and turning about a galactic centre. To us then, the Milky Way was all there was — all we could observe — and so the Milky Way was the universe. It took until 1920, though, when the observations of incredibly faint and distant nebulae by Heber Doust Curtis led to the ultimate acceptance that the Andromeda Nebula (M31) was actually another galaxy. Optic technology continued to advance, and more and more galaxies were found throughout the 20th century. The first exoplanets were confirmed in 1992, discovered around pulsar PSR B1257+12. These were terrestrial-mass worlds. The next exoplanetary finding occurred in 1995, a gas giant orbiting 51 Pegasi. Since that time, the rate of discovery of exoplanets has accelerated to the point that we can now detect hundreds within the confines of a single project. In 2016, the Kepler space telescope documented 1,284 exoplanets during one such period, over 100 of which are 1.2x Earth-mass or smaller, and most likely rocky in nature. As of September 2018, the combined observatories of the world have detected 3845 exoplanets distributed across 2866 planetary systems, of which 636 are multiple-planet systems. 
Africa and Arabian Peninsula ©NASA These worlds are detected using various methods, including: measuring the radial velocity of the (potential) planet’s host star to get an idea of the planet’s mass by how it affects its star; transit photometry which sees a (potential) planet as it moves between our telescopes and its host star; reflection/emission modulations which might show us the heat energy of a (potential) planet; observation of tidal distortions of a host star caused by the gravity of a (potential) massive gas giant; gravitational microlensing in which two stars line up with each other in relation to our observational view from Earth and their gravity distortions act like a magnifying lens that can help us notice planets around one of them; and nearly a dozen other ways. There are currently 55 potentially habitable exoplanets out of the thousands of worlds we have thus far detected. These are classified into two categories by the Planetary Habitability Laboratory at Arecibo in Puerto Rico: Conservatively habitable worlds are “ more likely to have a rocky composition and maintain surface liquid water (i.e. 0.5 < Planet Radius ≤ 1.5 Earth radii or 0.1 < Planet Minimum Mass ≤ 5 Earth masses, and the planet is orbiting within the conservative habitable zone).” The optimistically habitable planets “are less likely to have a rocky composition or maintain surface liquid water (i.e. 1.5 < Planet Radius ≤ 2.5 Earth radii or 5 < Planet Minimum Mass ≤ 10 Earth masses, or the planet is orbiting within the optimistic habitable zone).” If there are so many potentially habitable exoplanets out there, what is it about Earth that makes it so special? Aside from us, that is? While the exoplanets we have found so far that exist within the confines of what we have deemed “potentially habitable” may indeed be rocky and orbit at just the right distance from their host stars to maintain liquid water and atmosphere, that doesn’t mean they are habitable, or possess the potential to support Earth life or any life, for that matter. They may not even be suitable candidates for terraforming. That is because the conditions that made and preserved Earth as a safe harbour for life are many and seem to have occurred at precisely the right times throughout the 4.543 billion year history of the planet. Costa Rica ©NASA The factors that allowed life to evolve steadily on Earth — “Goldilocks” factors — include the ones that we use to designate exoplanets as potentially habitable: like them, Earth orbits at just the right distance from the Sun to allow liquid H2O, and Earth formed with such a mass and composition that it became a rocky world as opposed to a gas giant. Beyond those primary characteristics, though, Earth possesses other traits that, for the most part, we are still unable to detect on exoplanets. Our molten, mostly iron core spins to create a magnetosphere around the planet that deflects excessive solar and cosmic radiation. Our single, relatively large Moon stabilizes our rotation, gives us a 24-hour day, and creates tides that scientists believe were a large driver of evolution. We have the ozone layer which adds another protective shield for life against UV light. We have two gas-giant worlds in the outer Solar System that have been pulling in a majority of asteroids and comets for billions of years, long before they make it into the inner Solar System to possibly impact Earth. 
California at night ©NASA We are located at the edge of the Orion spiral arm of our galaxy, far from the much denser, the crowded centre of the milky Way where asteroids, comets, stellar collisions and supernovae are much more common. The Late Heavy Bombardment, which pounded the Earth with comet impactors roughly 4 billion years ago, seeded our world with just the right amount of water ice to give us vast oceans. Our Sun is also quite stable for a star, and luckily isn’t part of a binary star system (which may account for up to 85% of all stars!), which would certainly offer difficulties in the form of gravitational pull from 2 stars and more asteroid activity. The Earth has also been remarkably consistent and stable for billions of years, from its atmospheric and chemical composition to its temperature variations. Of all the exoplanets discovered, Earth and its ilk can only exist within a rather narrow band of possibilities. All of these Goldilocks factors added up to a world that has remained a viable habitat for billions of years. Our mineral-rich oceans became a veritable Petri dish in which trillions of generations of single-celled life could mingle and evolve until two such forms merged in a symbiotic relationship that resulted in the first multicellular organism. From there, the diversity of life blossomed uncontrollably. That diversity would be one of the reasons life on Earth continued to survive through multiple mass extinction events: Cretaceous–Paleogene extinction event — 65 million years ago, 75% species loss Triassic–Jurassic extinction event — 199 million to 214 million years ago, 70% species loss Permian–Triassic extinction event — 251 million years ago, 96% species loss Late Devonian extinction — 364 million years ago, 75% species loss Ordovician–Silurian extinction events — 439 million years ago, 86% species loss In a strange bit of irony, the earliest two of these great extinctions may have been caused rather directly by the power of evolution on Earth. It’s believed by many scientists that in both of these cases, an extreme amount of plant growth led first to the removal of too much CO2 from the atmosphere and a reverse greenhouse effect, and in the second great extinction to mega algae blooms that depleted the oceans of oxygen. The most recent 3 mass extinctions seem to have been caused by a supervolcano eruption, and two massive asteroid impacts. There is a sixth mass extinction, generally agreed upon by most palaeontologists, that is currently happening: the “holocene extinction event”. It is thought this extinction began at the end of the last ice age (roughly 12,000 years ago) and vastly accelerated with the rise of agriculture, large human civilizations and the Industrial Revolution. Data points to at least 7% of all holocene-era species having already gone extinct directly due to human interaction with our world. Species come into being and go extinct naturally, of course, and this is known as the background rate of extinction. Scientists believe that humans have increased the occurrence of extinctions to possibly as high as 500–1000 times the background rate. Deforestation in Rodonia, Brazil, which covers an area nearly 80% the size of France ©NASA Reversing this trend needs to be a priority — As the most intelligent species on Earth, we should see ourselves as caretakers of a multi-billion year legacy. We should not, and must not, allow Earth to become a barren hunk of rock due to our inherent drives that often do more harm than good. 
We are smarter than that. But even if this most recent mass extinction event snowballs and becomes unfixable, it is likely that life will continue to thrive on Earth, whether it be beneath ice or in the ocean’s deepest corners. We need to keep in mind that it is always the creatures at the top of the food chain that die off first in any great extinction. And if (and when) Earth does become unlivable for us humans, we should be capable of finding and reaching exoplanets that might become a new home. Hopefully by then we will have become wiser. Thank you for reading and sharing.
https://medium.com/predict/earth-our-cosmic-unicorn-1ed788fb6fd
['A. S. Deller']
2020-11-16 13:23:10.491000+00:00
['Science', 'Space', 'Environment', 'Earth', 'Climate Change']
13 Attributes of the Ultimate Writer (Part 1 of 4)
13 Attributes of the Ultimate Writer (Part 1 of 4) Soul, Creativity, and Intelligence Graphic designed by author In this series of articles, I’m going to step into my lab and assemble the ultimate writer according to 13 core attributes. Investing in a Medium membership and joining the Partner Program has opened me to a new world of fantastic wordsmiths and I thought it’d be fun to dream up how I’d assemble the ultimate writer. Now I must say that this list is based purely on my opinion. I don’t intend to disrespect anyone by leaving them off this list. The Backstory Our soon-to-be 5-year old son wants a Voltron toy for his 5th birthday. Not only does this bring me nostalgia as I reflect on when my father bought me a Voltron toy, but it also got me thinking about combining powers in another sense. So, I owe credit to a 4-year old for inspiring me to assemble the ultimate writer. I also must mention that I miss sports right now. I miss playing them. I miss watching them. I miss the camaraderie and rhythm of playing pickup basketball at the Y. One interesting positive out of this pandemic is it has inspired me to think of writing as a sport (which is why I decided to make Stamina one of the attributes of the ultimate writer). Fantasy sports is another one of my hobbies. I love the excitement in assembling the ultimate team. This passion has led to me winning my fantasy football league two years in a row! Humble brag much? But today and later in Parts 2–4, I’m going to combine my passion for fantasy sports with my passion for writing (and reading). Allow me to step out of my lab and present to you the ultimate writer. The 13 attributes are: Soul Creativity Intelligence Voice/Presence Communication/Delivery Vocabulary Sense of Humor Heart/Empathy Work Ethic Stamina Guts Versatility Connecting If you noticed the flow and pattern, I’m stepping through the attributes from the all-encompassing eternal aura (the soul) and then from top-to-bottom from there. Today, in Part 1 of this series, I’m focusing on the attributes of Soul, Creativity, and Intelligence. In Part 2, I’ll cover Voice/Presence, Communication/Delivery, Vocabulary, and Sense of Humor Part 3: Heart/Empathy, Work Ethic, Stamina Part 4: Guts, Versatility, Connecting Let the fun begin! Soul When I think about the connection between Soul and writing, I think about where the writer’s power to create comes from and who they give the credit to. I also think about the vibe I feel when I read their words. But Soul is deeper than having the ability to help people feel good (this ability in the wrong person is very dangerous), it’s about also having the heart to properly steward your abilities and truly want good to come from your writing. So with those characteristics in mind… Jim Wolstenholm is the writer who I choose for Soul because of how he displays his desire for righteousness through his writing. His words are warm and communicate care for his readers. He’s present in his writing, but he also has a great ability of getting out of the way to let God’s Word shine through. A story by Jim that shows his soul, love for God, and love for others: 3 Life Changing Prayers Creativity I appreciate writers who approach topics from unique angles. I find that some writers (myself included) are so in the rush to publish that they don’t take the time to sit with their work. Sitting with your work and being diligent in the revising and editing stages of the writing process are key to creativity. 
Yes, for some writers, creativity flows directly from the mind to their fingers in the drafting stage of the writing process. But I often find interesting ways to creatively tweak and contort what I wrote while editing a story. When I ventured into my glorious ultimate writer creation lab, one writer stood out to me for his creativity. He is an amazing writer and I love that the mysterious person behind the pen name is free from the tyranny of his name and the public. Nom de Plume is the writer who I choose for Creativity. A story by Nom that shows his gift as a creative writer: Dear Books… Intelligence I love smart writers. This goes beyond using big words. An intelligent writer helps me see things in a new way. An intelligent writer tackles complex issues but expresses them in a way that’s understandable. An intelligent writer shows dedication to their field of expertise and their deep knowledge shines through their words. With these attributes in mind… Yong Cui, Ph.D. is who I choose for how they exhibit Intelligence in their writing. When I read his writing, I feel like I totally understand programming. I studied some programming at Morehouse and while studying Electrical Engineering at Georgia Tech, but that was ages ago and I wasn’t a whiz by any means. Yong’s writing gives me the confidence to go try to whip up some code! A story by Doctor Cui that showcases their intelligence (and the ability to explain a complex technical topic in plain English): Time Complexity of Algorithms- Big O Notation Explained In Plain English
https://medium.com/inspirefirst/13-attributes-of-the-ultimate-medium-writer-part-1-of-4-c9e4960cb768
['Chris Craft']
2020-08-26 11:20:20.778000+00:00
['Soul', 'Blogging', 'Creativity', 'Intelligence', 'Writing']
Hooks in React Native
These are the most important things you should know about a React Component and its lifecycle: Props Props are the input of a component, so they are something you put into a component when you create it. By definition props cannot change, but you can add a function to the props that does that for you (which can be confusing). State State is something that can dynamically change (like a text input) and is always bound to something (a component, for example). You can change the state by using the setState() function, which only notifies the component about a state change. Take a look at the following example and common pitfall with React and setState() : // not so good console.log(this.state.test); // 5 this.setState({ test: 12 }); console.log(this.state.test); // might be 5 or 12 // good this.setState({ test: 42 }, () => { console.log(this.state.test); // 42 }); Constructor The constructor is not always necessary to have. However, there are some use cases: initializing state and binding methods to this . What you definitely should not do there is invoke long-running methods, since this may slow down your initial rendering (see the diagram above). So a common component and constructor could look like the following: class MyComponents extends React.Component { constructor(props) { super(props); this.state = { test: 42 }; this.renderSomeText = this.renderSomeText.bind(this); } // you could also do this, so no constructor needed state = { test: 42, } renderSomeText() { return <Text>{this.state.test}</Text> } } If you don't bind methods in the constructor and only initialize the state, you don't even need a constructor (which saves code). See my article about React Performance here, if you don't know why you should bind certain methods. It also has some valuable code examples. Component did mount and will unmount The componentDidMount lifecycle method is invoked only once, after the component has rendered for the first time. This could be the place where you do requests or register event listeners, for example. Apart from that, the componentWillUnmount lifecycle method is invoked before the component gets "destroyed". This should be the place where you cancel any still-running requests (so they don't try to change the state of an unmounted component), as well as unregister any event listener you use. This will prevent you from having memory leaks in your app (memory that is not being used is not released). A problem that many (myself included) have run into is exactly what I described in the last paragraph. If you use the Window setTimeout function to execute some code in a delayed manner, you should take care of using clearTimeout to cancel this timer if the component unmounts (a small hooks-based sketch of this pattern is included at the end of this article). Other lifecycle methods The componentWillReceiveProps(nextProps) or, from React version 16.3, getDerivedStateFromProps(props, state) lifecycle method is used to change the state of a component when its props change. Since this is a more complex topic and you probably use (and should use) it rarely, you can read about it here. Difference between Component and PureComponent: You might have heard about React's PureComponent already. To understand its difference, you need to know that shouldComponentUpdate(nextProps, nextState) is called to determine whether a change in props and state should trigger a re-rendering of the component. The normal React.Component always re-renders, on any change (so it always returns true).
The React.PureComponent does a shallow comparison on props and state, so it only re-renders if any of them have changed. Keep in mind that if you change deeply nested objects (you mutate them), a shallow compare might not detect it. If you ask yourself where hooks fit into this lifecycle, the answer is pretty easy. One of the most important hooks is useEffect. You pass a function to useEffect, which will run after the render call. So in essence, it is equal to componentDidUpdate. If you return a function from the function you passed to useEffect, you can handle the componentWillUnmount code. Since useEffect runs after every render (which might not always make sense), you can limit it to being closer to componentDidMount and componentWillUnmount by passing [] as a second argument. This tells React that this useEffect should only be called when a certain state has changed (in this case [], which means only once). The most interesting hook is useState. Its usage is pretty simple: You pass an initial state and get a pair of values (an array) in return, where the first element is the current state and the second a function that updates it (like setState()). If you want to read more about hooks, check out the React documentation. Lastly, I want to present a simple example of a React Native component with React Hooks. It contains a View with a Text and Button component. By clicking the button, you increase the counter by 1. If the counter reaches 42 or greater, it stays at 42. You can argue whether that makes sense or not, especially since the value will briefly be increased to 43, render once, and then the useEffect will set it back to 42. import React, { useState, useEffect } from 'react'; import { View, Text, Button } from 'react-native'; export const Example = () => { const [foo, setFoo] = useState(30); useEffect(() => { if (foo >= 42) { setFoo(42); } }, [foo]) return ( <View> <Text>Foo is {foo}.</Text> <Button onPress={() => setFoo(foo + 1)} title='Increase Foo!' /> </View> ) } React Hooks are a great way to write even cleaner React components. Their natural ability to create reusable code (you can combine your hooks) makes them even greater. The fact that cleaning up side effects (subscriptions, requests) happens for every render by default helps avoid bugs (you might otherwise forget to unsubscribe), as stated here.
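As a follow-up to the setTimeout remark in the lifecycle section above, here is a small sketch of the same cleanup pattern expressed with hooks. The component name and the delay are made up for illustration:

import React, { useState, useEffect } from 'react';
import { Text } from 'react-native';

export const DelayedGreeting = () => {
  const [greeting, setGreeting] = useState('Waiting...');

  useEffect(() => {
    // schedule a delayed state update
    const timer = setTimeout(() => setGreeting('Hello!'), 3000);
    // the returned cleanup runs on unmount, so the timer
    // never fires against an unmounted component
    return () => clearTimeout(timer);
  }, []);

  return <Text>{greeting}</Text>;
};

Because the dependency array is empty, the effect runs once after the first render and the cleanup runs only on unmount, mirroring the componentDidMount/componentWillUnmount pairing described earlier.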
https://reime005.medium.com/hooks-in-react-native-ffca637760be
['Marius Reimer']
2019-05-26 11:29:06.304000+00:00
['JavaScript', 'React', 'Software Development', 'React Native', 'Software Engineering']
AI Expert Reveals How Top AI Engineers Are Changing The Way We Do Business
By Rishon Blumberg, 10x Management Co-Founder The business world is changing fast and finding a talented AI engineer can bring your company significant competitive advantages. While entrepreneurs have relied on their instincts and intuition to dictate the direction of their businesses for a long time, AI engineers are helping businesses verify or discredit some of their long-held beliefs. An AI engineer has the ability to come into a company and transform the way we do business. And business leaders are using data to make decisions like never before. Executives can still rely on intuition, but AI is here to help us verify or discredit our beliefs. As a tech entrepreneur myself working with some of the best AI engineers in the world, I’ve witnessed the transformative power an AI engineer can have on a business. I had the privilege of interviewing an AI engineer and prodigy that started university at age 12(!), Zack Dvey-Aharon, on how companies will begin to use AI in the new data-driven era for business. Rishon (in bold): Thanks for taking the time to speak with me Zack. What is your favorite use of AI that you have worked on personally? Zack: As an AI engineer, I’ve helped healthcare companies analyze data to understand when their cure works best. I’ve helped cybersecurity companies identify abnormal network behavior for security purposes, helped energy companies better understand ocean drilling potential, commercial companies optimize their pricings and offerings, the list goes on… If I picked a favorite, I might get some angry letters in the mail from those I left out! All my clients are special to me and I truly enjoy working on every project I undertake. Quite a diplomatic answer! What are some ways that you think AI will be monetized in the future? I’ll use a simple example that demonstrates how AI can improve most existing services and products, and not necessarily create new ones. An AI engineer might develop a refrigerator that can manage the content inside the fridge and adjust the temperature to ideally match your groceries. The company that employs that AI engineer will monetize by simply selling more units than the competition. That’s just one example. Basically, the companies that really take advantage of the intelligence of AI will be able to monetize simply by being better than the competition. Let’s compare it to baseball and the famous example of Moneyball and the Oakland Athletics for a second. In 2002, Oakland started using deep statistics to analyze and find undervalued players in the major and minor leagues before any other team. While most teams had scouts that would rely on instincts to evaluate a player, Oakland used objective statistics and algorithms to evaluate players. This allowed Oakland — with a payroll of $44 million — to compete with teams like the New York Yankees — with a payroll of $125 million. Data lets us evaluate the exact impact that a player has on the field. What percentage of the time does a player hit a curveball traveling 82mph into the infield vs. the outfield vs. over the fence? Just like baseball was transformed by statistics, the broader business world is being transformed by AI too. Any method (like Moneyball) that gives you a competitive advantage will monetize itself. As a Yankees fan, I appreciate the baseball analogy. How is AI different than other technologies in the past? 
Through data analysis, AI engineers can allow companies to work much more efficiently, adjust to changes, cancel unnecessary business processes and replace expensive alternatives, including human jobs. AI is completely data-driven, so algorithms will help us understand where we can improve our processes as opposed to using intuition (as I just mentioned) or people analyzing data. This has never been the case before. Data is a true goldmine and the sky’s the limit with how it can be used. By employing a single AI engineer or multiple AI engineers, companies have endless opportunities to better understand their business processes, improve them, optimize them and reveal new insights that can dramatically change the bottom line. In lay terms, what are the differences between Data Science, AI, and Machine Learning? Data science is the most general term for data analysis. Data can be analyzed manually without any algorithms or learning mechanisms, which means in certain circumstances, it’s not AI at all. Artificial Intelligence (AI) covers all computerized/algorithmic ways to learn data and react better to it. Machine Learning (ML) is a sub-domain of AI. Machine learning features self-learning mechanisms that become smarter as they have more data. So the difference between Machine Learning and AI is that AI can include hard-coded formulas that do not learn from the data, whereas Machine Learning engineers will always build self-learning mechanisms. What company do you think will dominate the AI landscape in the future? For instance, 68% of internet searches in the U.S. are done on Google. Will there be a Google of AI? It’s tough to say that one company will monopolize the industry. My prediction is that several years from now, AI, and more specifically Machine Learning, will be naturally integrated everywhere and by everyone. Just like Google and its search engine are everywhere, AI and machine learning will be everywhere. An AI engineer will be a very lucrative position to have at any company. What are the biggest challenges for companies looking to embrace AI? The clear number one challenge is to find a strong enough AI engineer to help a company or join a company. If we compare AI to playing chess, there are close to a billion chess players in the world, but only a thousand grandmasters. Although many people present themselves as expert engineers, there are perhaps a few dozen AI engineers or teams out there with a truly strong, diversified project experience in machine learning. Building a great AI solution is difficult at the moment because the talent is so rare. What are the biggest misconceptions regarding AI? In movies, we often see machines that are ‘smart’ like human beings that can adapt their language and behavior to unpredictable situations. It’s been a fantasy for humans for a long time, especially since it was realistically posed as a challenge by Alan Turing in the 1950s. The truth is that technology like that is still out of our reach, so I’d say that’s the biggest misconception. AI engineers are working hard to get us there, but we’re not that close. What’s your favorite use of AI technology being applied today? As an AI engineer, it’s tough to choose a favorite. I find the revolution itself amazing. Insurance companies better understand their clients, media companies better evaluate their artists, airline companies better optimize their seat ticket prices, the list goes on. 
What’s the one example of an application of AI that feels inevitable to you, yet today no one you know is really working on it? I think AI that takes text written about a person, and by that person, from many different sources and gathers it into a smart, integrated analysis and report would be useful for individual clients, companies and intelligence agencies. Imagine trying to find out information about a potential client, and having to go from point A to point B and all sorts of places to find relevant information. AI could make that process so much easier by aggregating useful data and giving you ONE useful report as opposed to hundreds of sources with bits of useful information. What advice would you give a company trying to source AI talent? It’s important to do research on AI engineers who have been contracted by competitors or other companies in the field. My firm has delivered more than 40 AI projects to clients, and in each area, my past experience as an AI engineer with similar problems turned out to be a crucial factor. Companies that source AI engineers and development talent must understand two key parameters: How strong and experienced is the engineer? How easily can their work be integrated with the company, its IT team and the general “data DNA” of the firm? In today’s economy, even inexperienced data scientists and AI engineers have become very expensive, so building a team seems less realistic for most companies. Did you really start university at age 12? I sure did. As a child, I always looked for new challenges and new ways to learn. I convinced my parents to let me try a university class, and when I was able to keep up with the class, I enrolled in more. I was able to finish my university degree before high school graduation. If you like this article, you might enjoy reading How One Blockchain Developer Sees the Future of Technology. Rishon Blumberg is an entrepreneur and the founder of 10x Management, a prominent tech talent agency. He is a thought leader in the future-of-work space, has been published in the Harvard Business Review, and makes frequent appearances on Bloomberg Television and CNBC. Rishon graduated from the Wharton School of Business with a degree in entrepreneurial management in 1994.
https://medium.com/10x-management/ai-expert-reveals-how-top-ai-engineers-are-changing-the-way-we-do-business-35e8986588fd
['Rishon Blumberg']
2018-05-14 22:24:16.093000+00:00
['Machine Learning', 'AI', 'Technology', 'Computer Programming', 'Artificial Intelligence']
Can Black People Disagree Without Disrespect?
What about our disagreements call for discord? The year is 1770. The location, London, England. The setting, a memorial for Anglican clergyman and good friend to Benjamin Franklin, George Whitfield. Standing at the pulpit eulogizing his beloved friend is fellow clergyman, theologian and co-founder of the Methodist church, John Wesley. “There are many doctrines of a less essential nature,” he says to the mourners. “In these we may think and let think; we may ‘agree to disagree.’ But, meantime, let us hold fast the essentials.” Whitfield himself wrote the phrase in a collection of letters on unity in 1750, using it to describe the unimportance of unlikeness where unity is the goal. “After all”, he writes, “those who will live in peace must agree to disagree in many things with their fellow-labourers, and not let little things part or disunite them.” The phrase since then has grown in popularity, referring to the resolution of a disagreement whereby opposing parties accept but do not agree with the position of the opposing side. Parties generally “agree to disagree” when all sides have recognized that further discussion or debate will not yield an otherwise amicable outcome and may result in unnecessary conflict. All sides agree to remain on amicable terms while continuing to disagree about the original disagreement. This resolution is one of mutual respect and reason, sound judgment and an honest effort to reach resolution with real resolve in mind. I hate to take a page out of white male 18th century history, but it’s time we learned to agree to disagree, and do so without the disrespect. With that said, I must pose the pressing questions, can there be mutual understanding without mutual respect, and is a lack of respect at the helm of our deadly debates? If the Gayle King interview and the outcry that followed are any indications, I’d say the answers to those questions are a resounding no, and a hell yes. We’re not discussing her timing, which was pretty poor, or her tone, which if we’re being candid was condescending, or even her intent, which was pretty obviously ill. It’s pretty safe to say the gross majority of us disagreed with the deed itself. But that’s right around where things got a little ludicrous. Now from a journalistic standpoint, I have to say that in her defense, there are just some topics that it’ll never be the right time to discuss. Like Joe Jackson’s parenting practices, Whitney Houston’s drug addiction, Martin Luther King Jr.’s multiple affairs, Rosa Parks being a knockoff Claudette Colvin, etc., some subjects, just the mention alone are enough to earn you a visit from the cancel vultures. So kudos to Gail for knowingly gearing up for that gut punch. But there’s disagreeing with a journalist’s line of questioning because you find it disrespectful and insensitive…. and then there’s threatening an actual living person over the legacy of the deceased. I have a serious problem with that valuation. You should too. The only thing worse than the collective response to the Gail King interview was the collective response to the collective response. Snoop Dog has walked women on leashes in broad daylight and cheated on his wife of a couple decades so many times that the last time he had to fake a docuseries as a cover up. He’s bragged about selling women to other stars while on tour and, to my knowledge, has never appeared to be a spokesperson for the protection of women and children. He’s never advocated against colorism, which his only daughter, Cori, openly struggles with. 
He’s never advocated for policy to address the crisis in maternal and infant mortality despite his son, Corde, recently experiencing the loss of his newborn. He’s never even advocated for prison reform or spoken against black Americans being arrested more for marijuana offenses, despite being one of the biggest open consumers of cannabis in hip hop. This man doesn’t advocate for the family he has, if this was a role he felt like trying on, he’s had ample opportunity to do so. But we’re no stranger to letting men with little sense speak on our behalf, and so he took liberty and we allowed it, even supported it, even after hearing how violent it was. But now that cooler heads have prevailed, we have to address what about us would have us believing that Snoop Dog was ever the right messenger, whatever that message was intended to be. And then we have to ask how after hearing his message, we could continue to defend not only the messenger, but also the message? What makes us associate philosophical differences with the need for physical correction? I mean, I know most of the people who didn’t take issue initially with a man threatening a senior woman in his own community felt that the threat was just that, a threat. No real harm, more than likely, was to befall Gayle King, at least that was the defense. But what about the threat, even if just a threat, was excusable? Seriously. I’m not gonna dredge out some unnecessary, imaginary scenario about your mother, or sister, or auntie being threatened to have her head bashed in because she made a statement someone didn’t like because, well, we’re all adults here, and I shouldn’t have to make it personal to make it palpable. Instead, let’s talk about that question directly, because I think the actual question and the answers to it matter. What about even the threat of physical harm in the midst of debate or disagreement is empathetic, or sensical, or safe, or sociable, or mature or any of the many things Gayle King has been accused of not being for sitting in a chair and asking a question, albeit a couple of impolite ones? How do we get from one to the other? One day we’ll have the conversation about how our perception of disagreement as detrimental began on the plantation. One day we will discuss the origin of our perception of disagreement as discord and deal with the uncomfortable reality that during slavery, alignment meant allieship, two being in disagreement wasn’t just unacceptable, it was unsafe. And so we formed a casual distrust of one another, agreeing to walk the fine line between ally and adversary, only to be tossed away at the smallest inclination of betrayal. And one day we will sit down and process how that history has resulted in the way we casually disrespect and disregard one another over the non-essentials, and how when, coupled with patriarchy and sexism, that makes things insufferable for Black women in their own communities. But until then, we need to find a safe page to meet on and agree to stay there when it comes to our rules of engagement. Of which, safe, respectful, agreeable disagreements have to be one of them. There won’t always be time to unpack our toxic treatment of one another based on our history of trauma. At what point do we stop needing excuses for treating each other poorly in order to find reason to treat one another better? Not to mention, no man should feel comfortable threatening bodily harm to a woman who has inflicted none on him. 
If we can’t agree on that, we need to talk about what about respecting women’s humanity we’re still struggling with. Because that’s what this is about, not about Black men being fed up with Oprah and Gayle’s master man-bashing mission, and certainly not about respecting Kobe’s legacy (which, honestly, Gayle doesn’t have the range to tarnish). We have a respect deficit in our community. And where there is little respect for someone’s humanness, there is little respect for their life. If nothing else, this situation proves what many Black women have been trying to convey, the fear that even in death, a Black man’s life, or in this instance the memory of one, is of more value than theirs. I have a serious problem with that valuation. You should too.
https://arahthequill.medium.com/can-black-people-disagree-without-disrespect-7ed5f07d2472
['Arah Iloabugichukwu']
2020-11-20 23:19:02.943000+00:00
['Society', 'People', 'Patriarchy', 'Culture', 'Celebrity']
Spaced Repetition Items and Construal Level Theory
Spaced Repetition Items and Construal Level Theory
Why you should prioritize from higher- to lower-level construals
Overview:
- Interference
- Boolean algebra to counteract interference
- Construal level theory and Levels of Processing model
- Construal level theory and prioritization
- Construal level theory, prioritization, and combinatorial thinking
- How higher-level construals create a bigger impact than lower-level ones

Construal level theory: the more abstract something is, the higher the level of the construals; the more concrete something is, the lower the level of the construals. E.g. “phone” has a higher-level construal than “Samsung phone”.
So when creating items for spaced repetition (e.g. in Anki, SuperMemo…), what I am trying to do is to keep their construal level as high as possible (i.e. abstract), so that it increases the probability that they will connect with more other concepts than if they were lower-level construals. The concept “phone” connects with a lot more other concepts than the concept “Samsung phone”.
Interference
The problem one can encounter, however, when creating such abstract concepts for use in spaced repetition is interference:
Interference is the process of overwriting old memories with new memories (retroactive interference). From: https://supermemo.guru/wiki/Interference
E.g. creating an SRS item like the following:
Q: How tall is the Eiffel tower?
A: 324 m
(I personally like to use clozes in Anki, so it would look something like: The Eiffel tower is {{c1::324 m}} tall)
has a lower probability of making you encounter interference than:
Q: Which building is 324 m tall?
A: The Eiffel tower (and all other buildings that are also 324 meters tall)
To counter interference, one needs to lower the construal level of an item.
Boolean algebra to counteract interference
This can be done in many ways, but I personally like to use methods somewhat analogous to Boolean algebra. Conjunction is the one I use most. With this one, you simply add more and more keywords until interference is (almost) gone.
logical conjunction; the and of a set of operands is true if and only if all of its operands are true. From: https://en.wikipedia.org/wiki/Logical_conjunction
Venn diagram of Logical conjunction, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3437020
Another thing I use a lot to counter interference is contextual mnemonics, i.e. certain keywords within the same item remind me of other concepts that I have clozed. This way of countering interference seems to be partially analogous to event-based prospective memory.
Negation is another one I use a lot. When doing your spaced repetition reviews, whenever you answer something incorrectly due to interference, you simply add a hint saying “not: (your incorrect answer here)”. An example of a note of mine in Anki:
Cloze: hasty generalization; an informal fallacy of faulty generalization, which involves reaching an inductive {{c1::[not: conclusion]}} based on insufficient evidence — essentially making a rushed conclusion without considering all of the variables
A: generalization
Construal level theory and Levels of Processing model
Levels of Processing model: Deeper levels of analysis produce more elaborate, longer-lasting, and stronger memory traces than shallow levels of analysis. From: https://en.wikipedia.org/wiki/Levels_of_Processing_model
The correlation between them seems to be that the higher the construal level, the lower the levels of processing, and vice versa (i.e. a negative correlation). One way to counteract the lowering of the levels of processing when creating higher-level construal items is via planned redundancy: approaching the same concept from multiple perspectives increases the levels of processing. And this, in turn, allows one to increase the half-life of one’s memories. When one combines all the perspectives aimed at a particular concept, this group or class, taken together, has a much lower-level construal than if you had only created one item. However, each individual item within the group still has as high a construal level as possible.
Construal level theory and prioritization
What you essentially want to do is work your way from higher- to lower-level construal items (i.e. prioritization). If one is reading sources whose content is already prioritized in this manner, as in almost all Wikipedia articles (i.e. the introduction usually has the highest-level construal), then this process tends to happen somewhat automatically.
Construal level theory, prioritization, and combinatorial thinking
What usually doesn’t happen automatically is combining different concepts. This, too, should be prioritized by first combining highest-level construals with other highest-level construals before combining lower-level construals. I personally like to use Obsidian.md to do this: in Obsidian.md, you simply do this by combining the nodes with the highest node weight (the ones that are the biggest visually) before combining smaller nodes. Sometimes, however, one needs to also rely on one’s own knowledge to estimate the level of a construal, i.e. even though a particular node in Obsidian.md might be small, it could still have a very high-level construal, e.g. estimated via frequency or probability of occurrence from one’s own experience. Combining higher-level construals before lower-level ones has a much bigger impact (usually) due to the former connecting to many more concepts than the latter.
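To make the combination-prioritization rule concrete, here is a minimal Python sketch. The concept names and weights below are invented for illustration — in practice you would read them off your own graph, e.g. node sizes in Obsidian.md. It orders candidate pairs so that combinations of the highest-level construals come first; the minimum of the two weights is used because a combination is only as high-level as its less abstract member.

from itertools import combinations

# Hypothetical concepts with an estimated construal-level score
# (e.g. node weight in your Obsidian.md graph, or your own
# frequency-of-occurrence estimate). Higher = more abstract/connected.
concepts = {
    "phone": 120,
    "Samsung phone": 15,
    "memory": 200,
    "interference": 60,
    "retroactive interference": 20,
}

# Candidate combinations, prioritized so that pairs of two
# high-level construals are combined first.
pairs = sorted(
    combinations(concepts, 2),
    key=lambda pair: min(concepts[c] for c in pair),
    reverse=True,
)

for a, b in pairs:
    print(f"combine: {a} + {b}")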
https://medium.com/superintelligence/spaced-repetition-items-and-construal-level-theory-79a32c3e4640
['John Von Neumann Ii']
2020-10-26 11:54:50.505000+00:00
['Technology', 'Inspiration', 'Education', 'Science', 'Creativity']
Google extends its horrible streak with a new set of icon designs
Google extends its horrible streak with a new set of icon designs
Everything is wrong with the redesigned Google Workspace logos
Image Credits: Google
A logo is the face and identity of a company. It helps set the company apart from its competitors. Through visual design, every brand looks to leave a strong impression in the minds of customers while also building a sense of loyalty and trust. In some ways, logos provide visual clarity and act like mini mission statements. Customers associate strongly with their favorite brand icons and hence crave consistency and familiarity. Yet, as companies evolve with time, they are compelled to go through design shifts in order to represent their current business more accurately and, of course, to stay fresh. From Slack to Spotify to Facebook to Medium, they have all done it, and Google is no different. Google has recently revamped its G Suite software, which is now called Google Workspace. As part of the rebranding strategy, the tech giant also overhauled the iconic logos of some of its popular productivity apps, including Gmail, Drive, Meet, Calendar, and others. As soon as Google rolled out the new set of icons, customers were fuming in despair and denial. For some, the new icons looked like they had been designed specifically for kids. But the backlash Google has received for its latest rebranding isn’t surprising at all given its history. Before we dig into what’s wrong with Google’s new icon designs, let’s take a moment to delve into their past design blunders.
https://medium.com/big-tech/google-extends-its-horrible-streak-with-a-new-set-of-icon-designs-ddedeb584684
['Anupam Chugh']
2020-10-29 14:16:23.299000+00:00
['Google', 'UI', 'Design', 'Business', 'UX']
Spark & Databricks: Important Lessons from My First Six Months
1. Understanding Partitions 1.1 The Problem Perhaps Spark’s most important feature for data processing is its DataFrame structures. These structures can be accessed in a similar manner to a Pandas Dataframe for example and support a Pyspark API interface that enables you to perform most of the same transformations and functions. However, treating a Spark DataFrame in the same manner as a Pandas DataFrame is a common mistake as it means that a lot of Spark’s powerful parallelism is not leveraged. Whilst you may be interacting with a DataFrame variable in your Databricks notebook, this does not exist as a single object in a single machine, but in fact, the physical structure of the data is vastly different under the surface. When first starting to use Spark you may find that some operations are taking an inordinate amount of time when you feel that quite a simple operation or transformation is being applied. A key lesson to help with this problem, and understanding Spark in earnest, is learning about partitions of data and how these exist in the physical realm as well as how operations are applied to them. 1.2 The Theory Beneath Databricks sits Apache Spark which is a unified analytics engine designed for large scale data processing which boasts up to 100x performance over the now somewhat outdated Hadoop. It utilises a cluster computing framework that enables workloads to be distributed across multiple machines and executed in parallel which has great speed improvements over using a single machine for data processing. Distributed computing is the single biggest breakthrough in data processing since limitations in computing power on a single machine have forced us to scale out rather than scale up. Nevertheless, whilst Spark is extremely powerful it must be used correctly in order to gain maximum benefits from using it for Big Data Processing. This means changing your mindset from one where you may have been dealing with single tables sitting in a single file in a single machine, to this massively distributed framework where parallelism is your superpower. In Spark, you will often be dealing with data in the form of DataFrames which are an intuitive and easy to access structured API which sits above Spark’s core specialised and fundamental data structure known as RDDs (Resilient Distributed Datasets). These are logical collections of data partitioned across machines (distributed) and can be regenerated from a logical set of operations even if a machine in your cluster is down (resilient). The Spark SQL and PySpark APIs make interaction with these low-level data structures very accessible to developers that have experience in these respective languages, however, this can lead to a false sense of familiarity as the underlying data structures themselves are so different. Distributed datasets that are common in Spark do not exist on a single machine but exists as RDDs across multiple machines in the form of partitions. So although you may be interacting with a DataFrame in the Databricks UI, this actually represents an RDD sitting across multiple machines. Subsequently, when you call transformations, it is key to remember that these are not instructions that are all applied locally to a single file, but in the background, Spark is optimising your query so that these operations can be performed in the most efficient way across all partitions (explanation of Spark’s catalyst optimiser). 
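A quick way to see this physical layout for yourself in a notebook is to ask the DataFrame's underlying RDD how many partitions it has — a minimal sketch, where the dataset path is a placeholder and a SparkSession named spark is assumed, as it is in Databricks:

# Assumes an existing SparkSession `spark`, as in a Databricks notebook.
df = spark.read.parquet("/mnt/data/events")   # hypothetical dataset path

# One logical DataFrame, many physical pieces: each partition can be
# processed by a separate task in parallel.
print(df.rdd.getNumPartitions())

# Changing the partition count changes how much parallelism is available.
df_repartitioned = df.repartition(200)   # redistributes the data into 200 partitions
df_coalesced = df.coalesce(8)            # merges down to 8 partitions with minimal data movement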
Figure 1 — Partitioned Datasets (image by the author)
Taking the partitioned table in Figure 1 as an example, if a filter were called on this table, the Driver would send instructions to each of the workers to filter each coloured partition in parallel before combining the results to form the final output. As you can see, for a huge table split into 200+ partitions the speed benefit is drastic compared to filtering a single unpartitioned table. The number of partitions an RDD has determines the parallelism that Spark can achieve when processing it. This means that Spark can run one concurrent task for every partition your RDD has. Whilst you may be using a 20-core cluster, if your DataFrame only exists as one partition, your processing speed will be no better than if the processing were performed by a single machine, and Spark's speed benefits will not be observed. 1.3 Practical Usage This idea can be confusing at first and requires a switch in mindset to one of distributed computing. By switching your mindset it becomes easier to see why some operations take much longer than you would expect. A good example of this is the difference between narrow and wide transformations. A narrow transformation is one in which a single input partition maps to a single output partition — for example a .filter()/.where(), in which each partition is searched against the given criteria and at most a single output partition is produced.
https://towardsdatascience.com/spark-databricks-important-lessons-from-my-first-six-months-d9b26847f45d
['Daniel Harrington']
2020-09-25 14:46:04.967000+00:00
['Getting Started', 'Databricks', 'Apache Spark', 'Big Data', 'Data Engineering']
Windows 2. With my one-year anniversary writing…
Sunlight in Cafeteria, 1958, ©Edward Hopper, Fair Use With my one-year anniversary as a writer for Medium coming up, I decided to edit and revise the very first piece of writing I uploaded last November. I had not a single fan then and this piece went virtually unnoticed until about six months later when one A. Maguire picked up on the essay and gave me a fifty. Though I had only one fan, I was very pleased, because this essay is one of my personal favorites, in that I was able to express some ideas about windows and light and their relationship to human creativity and art — these ideas had been floating around in my head for some time, without an outlet, before I discovered Medium. I present them here to you. Thank you A Maguire! And thank you Medium. Windows give shape to light, moving like the hand of a clock across the walls of our interiors, they shape and define the light, giving us a sense that we are moving through time and space. The window has been used as a device in painting throughout the ages to put human form in perspective and define it in relation to light — and used to define light in relation to the subject as well. Woman With a Lute by Johannes Vermeer, Courtesy Metropolitan Museum of Art, Fair Use The Astronomer, The Geographer, The Woman With a Lute, to name a just a few subjects, all occupied the space before the window of 17th century Dutch painter, Johannes Vermeer. In just about every one of Vermeer’s interiors, the window is included, as if to highlight the significance of the way it affects the quality of light on the human form and its activity in space and time. The window is the visual starting point of the light, which almost always travels left to right, helping us to read the painting as we would read a book — but it also makes a suggestion as to the ultimate source of the light. Windows are very often featured in the paintings of the 20th century American artist, Edward Hopper: Morning Sun, Excursions into Philosophy, Early Sunday Morning, Cape Cod Morning, August in the City, and many others. Hopper’s human figures are more like mannequins who stand in a department store window to feature a dress or suit — it is the light that is being shown off to the viewer in his paintings. A square or rectangle on the floor, a form on the wall echoing the shape of a window, a bleached out tablecloth, or a face rendered almost featureless in full sunlight. Hopper presents a moment in time in 1958 in, Sunlight in Cafeteria, where the sun lights a scene from a cafeteria window that fills the length and width of the room. A woman sits alone at a table in the sunlight of the window, eyes cast downward at her hands, her shadow falling onto the lower corner of the bright wall behind her. A man with corpselike features sits at a table near the foreground with a cigarette in his hand, but seems to be looking beyond her to the street outside. Both characters seem unaware of one another. The only thing that seems to connect them is the unbroken light falling through the window — it seems to form and acknowledge their existence. Windows allow us to watch our world from a position of comfort. We look out from them with a reassurance that we are safe and warm — within them, we are isolated from the dangers that nature brings, enabling us to admire it from afar. 
Think of a cabin in the woods at night with a lone window shining a square or rectangle of light on to the ground outside, with the moon above reflecting light from the sun and silhouetting a line of trees in the distance — a cliché, but nonetheless an attractive image that humans connect with. It conjures feelings of peace, security, and hope that the world can be a place in which we can feel at home — coziness, all is well in the world. Illustration of Thoreau’s Cabin by Sophia Thoreau, Public Domain We can sit by the fire, or near our candle or lamp, and listen to the sounds of night beyond the window, and frighten ourselves with the possibility that we could be out there, hunted and ravaged by the wild. We can go to the window, brush off the damp and peer out, and feel enamored of the calling wilderness, rather than at odds with its wildness. Many writers place their desk before a window so that they can look outside as they write, and get a better view of their inner life. I imagine Henry David Thoreau sitting in his cabin window late at night, scratching away in his journal by lamplight, looking up now and then to pause at the hooting owl, or the dark, passing clouds over the full moon— you can find many window metaphors in his writings. From behind our window we can feel poetic rather than fearful. Emily Dickinson enclosed herself behind the windows of her home in Amherst, Massachusetts, looking out on the world from within — her windows served as muse, metaphor, protector, and inner light for her poetry. In I Dwell in Possibility, she says that, in the house of her mind, there are more windows than doors, providing an opening to creative heights reaching all the way to the limitless heavens. Emily Dickinson Bedroom, Courtesy Historic Preservation Associates As a child, I remember being transfixed by the stained glass windows in church, my eye moving from one detail to another of rich reds and electric blues of robes and sky, golden halos and richly detailed eyes cast toward heaven and the glowing dove waiting above, ready to descend into the souls of those characters below, not to possess, but to illuminate — all framed in lead and black, and radiant with the light of Sunday morning. Stained Glass, ©V.Plut The artist who used glass as his canvas in religious architecture knew the value of light and window to awaken a spirit to sublime possibilities after death. Later in life, as my father neared death, we looked together out his window at the falling snow, he perhaps thinking of what lay beyond, and me thinking about the many moments ahead, looking out at the snow without him. I was never so consciously and fully aware of a shared moment in time with another human being, as then, capturing it for the remainder of my life and maybe for eternity. Death itself may be a window of sorts. Looking out our windows seems to hold time, slowing it down, so that we can be aware of the timeless world of the subconscious. By gazing into the crystal ball of our window, we can bring our subconscious into the foreground, momentarily distracting the barrier of the conscious mind. For every ray of light falling on matter though, there is a shadow. The windows of our computers are like television — we experience the world through them in a much different kind of way, surfing around the planet, as if this were a world in which we no longer live, but only visit, a world we control from the comfort of our keypads, apps, and clouds. 
We tap the miniature windows of our smartphones, as if we are trying to get out of, or go into, another world, the way Emily Bronte has Catherine tapping on the window in Wuthering Heights, beckoning Heathcliff to join her spirit for eternity, beyond the portal pane. If there is ever a scenario created by a modern writer of Science Fiction, in which computers come alive to take over our world, as some people see as a possibility, it would be one in which we sit alone beside our cappuccinos, enveloped in a block of Hopperesque light streaming through a café window, our conscious minds falling into a trance as we gaze into the rectangle of light emanating from our artificial windows, allowing our machine to merge with our subconscious and awaken to its own existence. Perhaps this has already happened.
https://vplut.medium.com/windows-ii-7e33325ffbcf
['V. Plut']
2018-11-02 12:15:23.011000+00:00
['Creativity', 'Light', 'Nonfiction', 'Art', 'Literature']
4 Meditation Practices to Tap into Your Creative Potential
4 Meditation Practices to Tap into Your Creative Potential
Creative flow also comes from being mindful of your thoughts
Photo by Kreated Media on Unsplash
When researchers studied yogis with the most hours of meditation, they were surprised to discover their ability to produce high-frequency gamma waves in their brains. This state is a sign of intense activity, a kind of “Eureka effect” also present when we realize new connections between our ideas. These yogis learned, through very long and rigorous work, to put their minds in a state of strong creative energy. According to Daniel Goleman and Richard Davidson in The Science of Meditation, although we may never achieve the expertise of these masters, studies show that the practice of meditation triggers different states conducive to creativity. By increasing your self-confidence, by clarifying and emptying your mind of distractions, by making you focus deeply on your thoughts and reflections, and by activating selective brain waves, it opens your mind to new creative resources. Here’s how four meditative practices can give you new connections between your ideas and emotions.
https://medium.com/thinking-up/4-meditation-practices-to-tap-into-your-creative-potential-b678a689dc4a
['Jean-Marc Buchert']
2020-09-11 14:22:22.887000+00:00
['Mindfulness', 'Meditation', 'Productivity', 'Creativity', 'Self Improvement']
Visualizing Intersections and Overlaps with Python
Venn Diagrams
Let’s start with a simple and very familiar solution, Venn diagrams. I’ll use Matplotlib-Venn for this task.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib_venn import venn3, venn3_circles
from matplotlib_venn import venn2, venn2_circles

Now let’s load the dataset and prepare the data we want to analyze. The question we’ll check is, “Which of these best describes your role as a data visualizer in the past year?”. The answers to this question are distributed in 6 columns, one for each response. If the respondent selected that answer, the field will contain text; if not, it’ll be empty. We’ll convert that data to 6 lists containing the indexes of the users that selected each response.

df = pd.read_csv('data/2020/DataVizCensus2020-AnonymizedResponses.csv')
nm = 'Which of these best describes your role as a data visualizer in the past year?'

d1 = df[~df[nm].isnull()].index.tolist()         # independent
d2 = df[~df[nm+'_1'].isnull()].index.tolist()    # organization
d3 = df[~df[nm+'_2'].isnull()].index.tolist()    # hobby
d4 = df[~df[nm+'_3'].isnull()].index.tolist()    # student
d5 = df[~df[nm+'_4'].isnull()].index.tolist()    # teacher
d6 = df[~df[nm+'_5'].isnull()].index.tolist()    # passive income

Venn diagrams are straightforward to use and understand. We need to pass the sets with the keys/ids we’ll analyze. If it’s an intersection of two sets, we use venn2; if it’s three sets, we use venn3.

venn2([set(d1), set(d2)])
plt.show()

Venn Diagram — Image by the author

Great! With Venn diagrams, we can clearly display that 201 respondents selected A and didn’t select B, 974 selected B and didn’t select A, and 157 selected both. We can even customize some aspects of the chart.

venn2([set(d1), set(d2)],
      set_colors=('#3E64AF', '#3EAF5D'),
      set_labels=('Freelance Consultant Independent contractor',
                  'Position in an organization with some data viz job responsibilities'),
      alpha=0.75)
venn2_circles([set(d1), set(d2)], lw=0.7)
plt.show()

Venn Diagram — Image by the author

venn3([set(d1), set(d2), set(d5)],
      set_colors=('#3E64AF', '#3EAF5D', '#D74E3B'),
      set_labels=('Freelance Consultant Independent contractor',
                  'Position in an organization with some data viz job responsibilities',
                  'Academic Teacher'),
      alpha=0.75)
venn3_circles([set(d1), set(d2), set(d5)], lw=0.7)
plt.show()

Venn Diagram — Image by the author

That’s great, but what if we want to display the overlaps of more than 3 sets? Well, there are a couple of possibilities. We could use multiple diagrams, for example.
labels = ['Freelance Consultant Independent contractor',
          'Position in an organization with some data viz job responsibilities',
          'Non-compensated data visualization hobbyist',
          'Student',
          'Academic/Teacher',
          'Passive income from data visualization related products']
c = ('#3E64AF', '#3EAF5D')

# subplot indexes
txt_indexes = [1, 7, 13, 19, 25]
title_indexes = [2, 9, 16, 23, 30]
plot_indexes = [8, 14, 20, 26, 15, 21, 27, 22, 28, 29]

# combinations of sets
title_sets = [[set(d1), set(d2)], [set(d2), set(d3)], [set(d3), set(d4)],
              [set(d4), set(d5)], [set(d5), set(d6)]]
plot_sets = [[set(d1), set(d3)], [set(d1), set(d4)], [set(d1), set(d5)], [set(d1), set(d6)],
             [set(d2), set(d4)], [set(d2), set(d5)], [set(d2), set(d6)],
             [set(d3), set(d5)], [set(d3), set(d6)], [set(d4), set(d6)]]

fig, ax = plt.subplots(1, figsize=(16,16))

# plot texts
for idx, txt_idx in enumerate(txt_indexes):
    plt.subplot(6, 6, txt_idx)
    plt.text(0.5, 0.5, labels[idx+1], ha='center', va='center', color='#1F764B')
    plt.axis('off')

# plot top plots (the ones with a title)
for idx, title_idx in enumerate(title_indexes):
    plt.subplot(6, 6, title_idx)
    venn2(title_sets[idx], set_colors=c, set_labels=(' ', ' '))
    plt.title(labels[idx], fontsize=10, color='#1F4576')

# plot the rest of the diagrams
for idx, plot_idx in enumerate(plot_indexes):
    plt.subplot(6, 6, plot_idx)
    venn2(plot_sets[idx], set_colors=c, set_labels=(' ', ' '))

plt.savefig('venn_matrix.png')

Venn Diagram Matrix — Image by the author

That’s ok, but it didn’t really solve the problem. We can’t tell if there’s someone who selected all answers, nor can we tell the intersection of three sets. What about a Venn with four circles?

Four circles — Image by the author

Here is where things start to get complicated. In the above image, there is no intersection for only blue and green. To solve that, we can use ellipses instead of circles. I’ll use PyVenn for the next example.

from venn import venn

sets = {
    labels[0]: set(d1),
    labels[1]: set(d2),
    labels[2]: set(d3),
    labels[3]: set(d4)
}

fig, ax = plt.subplots(1, figsize=(16,12))
venn(sets, ax=ax)
plt.legend(labels[:-2], ncol=6)

Venn Diagram — Image by the author

Alright, there it is! But, we lost a critical encoding in our diagram — the size. The blue (807) is smaller than the yellow (62), which doesn’t help much in visualizing the data. We can use the legends and the labels to figure what is what, but using a table would be clearer than this. There are a few implementations of area proportional Venn diagrams that can handle more than three sets, but I couldn’t find any in Python.
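When the diagrams stop scaling, the overlap counts themselves are still easy to get with plain Python sets — a small sketch reusing the d1…d6 index lists built earlier (the short dictionary labels are just shorthand for this example):

from itertools import combinations

groups = {
    'Freelance': set(d1),
    'Organization': set(d2),
    'Hobby': set(d3),
    'Student': set(d4),
}

# Pairwise overlaps
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    print(f"{name_a} & {name_b}: {len(a & b)}")

# Respondents present in all four groups at once
print("All four:", len(set.intersection(*groups.values())))

It doesn’t replace a picture, but it answers the “who selected all of them?” question that the four-set diagrams above struggle with.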
https://towardsdatascience.com/visualizing-intersections-and-overlaps-with-python-a6af49c597d9
['Thiago Carvalho']
2020-12-16 12:46:20.541000+00:00
['Data Visualization', 'Python', 'Matplotlib', 'Data Science', 'Editors Pick']
Why Deep Learning Isn’t Always the Best Option
Why Deep Learning Isn’t Always the Best Option And what to use instead. Deep learning — a subset of machine learning where big data is used to train neural networks — can do incredible things. Even amidst all the mayhem of 2020, deep learning brought astonishing breakthroughs in a variety of industries, including natural language (OpenAI’s GPT-3), self-driving (Tesla’s FSD beta), and neuroscience (Neuralink’s neural decoding). However, deep learning is limited in several ways. Deep Learning Lacks Explainability In March 2018, Walter Huang was driving his Tesla on Autopilot in Mountain View, when it suddenly crashed into a safety barrier at 70mph, taking his life. Many AI systems today make life-or-death decisions, not just self-driving cars. We trust AI to classify cancers, track the spread of COVID-19, and even detect weapons in surveillance camera systems. When these systems fail, the cost is devastating and final. We can’t bring back a human life. Unfortunately, AI systems fail all the time. It’s called “error.” When they fail, we want explanations. We want to understand the why. However, deep neural networks and ensembles can’t easily give us the answers we need. They’re called “black box” models, because we can’t look through them. Transparency isn’t just critical in life-or-death systems, but in everyday financial models, credit risk models, and so on. If a middle-aged person saving for retirement suddenly loses their financial safety net, there better be explainability. Deep Learning Has a Propensity to Overfit Overfitting is when a model learns the training data well, but fails to generalize to new data. For instance, if you were to build a trading model to predict financial prices using a neural network, you’ll inevitably come up with an overly-complex model that has high accuracy on the training data, but fails in the real world. In general, neural networks — particularly deep learning — are more susceptible to overfitting than simple models like logistic regression. “In logistic regression, the model complexity is already low, especially when no or few interaction terms and variable transformations are used. Overfitting is less of an issue in this case... Compared to logistic regression, neural network models are more flexible, and thus more susceptible to overfitting.” Deep Learning is More Expensive Building deep learning models can be expensive, as AI talent income easily runs into the six figures. It doesn’t stop there. Deploying deep learning models is expensive as well, as these large, heavy networks consume a lot of computing resources. For instance, as of writing, OpenAI’s GPT-3 Davinci, a natural language engine, costs $0.06 per 1,000 tokens. This may seem very cheap, but these costs quickly add up when you’re dealing with thousands or even millions of users. Let’s compare with traditional machine learning models. Making a prediction with a 2-layer neural network on a CPU costs around 0.0063 Joules, or 0.00000000175 kWh. For all intents and purposes, the cost of a single prediction is negligible. The Solution — Explainable, Simple, Affordable Models Fortunately, it’s easier than ever to create explainable, simple, and affordable machine learning models, using a technique called AutoML, or automated machine learning, which automatically creates a variety of machine learning models given a dataset, and selects the most accurate model. 
AutoML isn’t a new phenomenon, but it has become especially easy in recent years due to the rise of no-code, which enables effortless machine learning tools like Obviously.AI. In 2010, MIT discussed a “common computer science technique called automated machine learning,” but back then, you’d still need developers to use AutoML tools. Today, anyone can build and deploy explainable, simple, and affordable AI without any coding or technical skills.
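To make the idea concrete, here is a minimal sketch of what automated model selection looks like under the hood — a hand-rolled loop over a few scikit-learn models on a sample dataset, not the implementation of any particular AutoML product:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

# Evaluate every candidate and keep the one with the best cross-validated accuracy.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)

Keeping simple candidates like logistic regression in the mix is also what preserves the explainability benefits discussed above.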
https://medium.com/datadriveninvestor/why-deep-learning-isnt-always-the-best-option-b264be56b8b9
['Obviously Ai']
2020-12-27 17:02:10.131000+00:00
['Data Science', 'AI', 'Artificial Intelligence', 'Data Analysis', 'Deep Learning']
Ternary Conditional Operators in Python
Ternary Conditional Operators in Python
Mastering Efficient List and Dictionary Comprehension
Photo by Belinda Fewings on Unsplash
Python is versatile, and its goal is to make development easier for the developer. Compared to C# or Java, which are notoriously cumbersome to master, Python is relatively easy to get good at. Moreover, it’s relatively easy to get pretty damn good at. List and dictionary comprehensions are widely used, but something I find is used a bit less (especially by beginners) is the ternary conditional operator. It really streamlines your code, making it both more readable and more concise to write and deal with. Just don’t make it too complicated! They’re pretty easy to get your head around, so let’s get into it.
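As a quick taste of both ideas before diving deeper, here are a few minimal, self-contained examples (the variable names are just for illustration):

# Ternary conditional operator: value_if_true if condition else value_if_false
age = 20
status = "adult" if age >= 18 else "minor"

# The same idea inside a list comprehension
numbers = [1, 2, 3, 4, 5, 6]
labels = ["even" if n % 2 == 0 else "odd" for n in numbers]

# ...and inside a dictionary comprehension
parity = {n: ("even" if n % 2 == 0 else "odd") for n in numbers}

print(status)   # adult
print(labels)   # ['odd', 'even', 'odd', 'even', 'odd', 'even']
print(parity)   # {1: 'odd', 2: 'even', 3: 'odd', 4: 'even', 5: 'odd', 6: 'even'}

Once the condition or the branches grow, a plain if/else block is easier to read — which is exactly the “don’t make it too complicated” warning above.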
https://medium.com/code-python/ternary-conditional-operators-in-python-6007031a033a
['Mohammad Ahmad']
2020-06-09 10:27:57.163000+00:00
['Coding', 'Programming', 'Software Development', 'Artificial Intelligence', 'Python']
Building a scalable machine vision pipeline
Kevin Jing | Pinterest engineering manager, Visual Discovery Discovery on Pinterest is all about finding things you love, even if you don’t know at first what you’re looking for. The Visual Discovery engineering team at Pinterest is tasked with building technology that will help people to continue to do just that, by building technology that understands the objects in a Pin’s image to get an idea of what a Pinner is looking for. Over the last year we’ve been building a large-scale, cost-effective machine vision pipeline and stack with widely available tools with just a few engineers. We faced two main challenges in deploying a commercial visual search system at Pinterest: As a startup, we needed to control the development cost in the form of both human and computational resources. Feature computation can become expensive with a large and continuously growing image collection, and with engineers constantly experimenting with new features to deploy, it’s vital for our system to be both scalable and cost-effective. The success of a commercial application is measured by the benefit it brings to the user (e.g., improved user engagement) relative to the cost of development and maintenance. As a result, our development progress needed to be frequently validated through A/B experiments with live user traffic. Today we’re sharing some new technologies we’re experimenting with, as well as a white paper, accepted for publication at KDD 2015, that details our system architecture and insights from these experiments and makes the following contributions: We present a scalable and cost-effective implementation of a commercially deployed visual search engine using mostly open-source tools. The tradeoff between performance and development cost makes our architecture more suitable for small-and-medium-sized businesses. We conduct a comprehensive set of experiments using a combination of benchmark datasets and A/B testing on two Pinterest applications, Related Pins and an experiment with similar looks, with details below. Experiment 1: Related Pin recommendations It used to be that if a Pin had never before been saved on Pinterest, we weren’t able to provide Related Pins recommendations. This is because Related Pins were primarily generated from traversing the local “curation graph,” the tripartite user-board-image graph evolved organically through human curation. As a result, “long tail” Pins, or Pins that lie on the outskirts of this curation graph, have so few neighbors that graph-based approaches do not yield enough relevant recommendations. By augmenting the recommendation system, we are now able to recommend Pins for almost all Pins on Pinterest, as shown below. Figure 1. Before and after adding visual search to Related Pin recommendations. Experiment 2: Enhanced product recommendations by object recognition This experiment allowed us to show visually similar Pin recommendations based on specific objects in a Pin’s image. We’re starting off by experimenting with ways to use surface object recognition that would enable Pinners to click into the objects (e.g. bags, shoes, etc.) as shown below. We can use object recognition to detect products such as bags, shoes and skirts from a Pin’s image. From these detected objects, we extract visual features to generate product recommendations (“similar looks”). In the initial experiment, a Pinner would discover recommendations if there was a red dot on the object in the Pin (see below). 
Clicking on the red dot loads a feed of Pins featuring visually similar objects. We’ve evolved the red dot experiment to try other ways of surfacing visually similar recommendations for specific objects, and will have more to share later this year. Figure 2. We apply object detection to localize products such as bags and shoes. In this prototype, Pinners click on objects of interest to view similar-looking products. By sharing our implementation details and the experience of launching products, we hope visual search can be more widely incorporated into today’s commercial applications. With billions of Pins in the system curated by individuals, we have one of the largest and most richly annotated datasets online, and these experiments are a small sample of what’s possible at Pinterest. We’re building a world-class deep learning team and are working closely with members of the Berkeley Vision and Learning Center. We’ve been lucky enough to have some of them join us over the past few months. If you’re interested in exploring these datasets and helping us build visual discovery and search technology, join our team! Kevin Jing is an engineering manager on the Visual Discovery team. He previously founded Visual Graph, a company acquired by Pinterest in January 2014. Acknowledgements: This work is a joint effort by members of the Visual Discovery team, David Liu, Jiajing Xu, Dmitry Kislyuk, Andrew Zhai, Jeff Donahue and our product manager Sarah Tavel. We’d like to thank the engineers from several other teams for their assistance in developing scalable search solutions. We’d also like to thank Jeff Donahue, Trevor Darrell and Eric Tzeng from the Berkeley Caffe team. For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter. Interested in joining the team? Check out our Careers site.
https://medium.com/pinterest-engineering/building-a-scalable-machine-vision-pipeline-60dd7bac73e7
['Pinterest Engineering']
2017-02-21 21:00:04.997000+00:00
['Machine Learning', 'Deep Learning', 'Engineering', 'Computer Vision']
Lyft Motion Prediction for Autonomous Vehicles: 2020
Lyft Motion Prediction for Autonomous Vehicles: 2020
Lyft motion prediction challenge for self-driving cars
Problem Description
The 2020 challenge is to predict the movement of traffic agents around the AV, such as cars, cyclists, and pedestrians. The 2019 competition, by contrast, focused on detecting 3D objects, an important step prior to predicting their movement. Overall this requires quite a different domain skill set compared to the 2019 problem statement. The dataset consists of 170,000 scenes capturing the environment around the autonomous vehicle. Each scene encodes the state of the vehicle’s surroundings at a given point in time.
Source: Kaggle EDA Lyft
Source: Kaggle EDA Lyft5
The goal of this competition is to predict the motion of the other cars, cyclists, and pedestrians (called “agents”) around the AV. The data preprocessing technique called rasterization is the process of creating images from those objects. For example, below is a typical image that we get, with 25 channels, shown channel by channel. The first 11 images are rasterizations of the other agents’ history, the next 11 images are the agent under consideration itself, and the last 3 are the semantic map rasterization. Converting to an RGB image using the rasterizer involves:
- image: (channel, height, width) image of a frame. This is a bird's-eye-view (BEV) representation.
- target_positions: (n_frames, 2) displacements in meters in world coordinates
- target_yaws: (n_frames, 1)
- centroid: (2) center position x & y
- world_to_image: (3, 3) matrix, used as the transform matrix
Example of L5Kit (Lyft 5 kit) structure for data processing:
Having said that, in this competition, understanding the Rasterizer class and implementing a customized rasterizer class has been a big challenge. Hence, here is how to select the two important configuration options:
- raster_size: the rasterized image's final size in pixels (e.g. [300, 300])
- pixel_size: the raster's spatial resolution [meters per pixel], i.e. the real-world size one pixel corresponds to
Raster sizes
pixel_size = [0.5, 0.5]
As you can see in the image, if you increase the raster size (with the pixel size constant), the model (ego/agent) will “see” more of its surroundings: more area behind/ahead, but slower rendering because of the extra information (agents, roads, etc.). What is a good raster size? I think it depends on the vehicle’s velocity.
km/h | m/s | distance in 5 sec (m) | pixels
1 | 0.28 | 1.39 | 2.78
5 | 1.39 | 6.94 | 13.89
10 | 2.78 | 13.89 | 27.78
15 | 4.17 | 20.83 | 41.67
20 | 5.56 | 27.78 | 55.56
25 | 6.94 | 34.72 | 69.44
30 | 8.33 | 41.67 | 83.33
35 | 9.72 | 48.61 | 97.22
40 | 11.11 | 55.56 | 111.11
50 | 13.89 | 69.44 | 138.89
60 | 16.67 | 83.33 | 166.67
I used a constant pixel_size = [0.5, 0.5] for these calculations. The question is: what is the average velocity? In the image below, you can see the average speeds (I assume that the unit is meters per second). I exclude everything below 1 m/s. Based on this information, we can select the size of the image:
1. Pick your maximum speed, for example 20 m/s.
2. Calculate the maximum distance covered in 5 seconds (100 meters).
3. Divide it by the size of the pixels (100 / 0.5 = 200).
4. Because the ego is placed at raster_size * 0.25 pixels from the left side of the image, we have to add some space: the final size is 200 / 0.75 ≈ 267.
Pixel sizes
The other parameter is the size of the pixels. What is one pixel in terms of world meters? In the default settings, it is 1 px = 0.5 m. In the image below, you can see the differences between different pixel sizes (the size of the images is 300x300 px).
Because, for example, pedestrians are less than half a meter across (seen from above), they are not visible in the first 2–3 images. So we have to select a higher resolution (a lower pixel_size), somewhere between 0.1 and 0.25. If we use a different pixel size, we have to recalculate the image size as well. Recalculating the example above with pixel_size = 0.2: 20 m/s → 100 meters in 5 seconds → 100 / 0.2 = 500 → final image size: 500 / 0.75 ≈ 667 px.
Problems
As we increase the image_size and the resolution (decreasing the pixel size), the rasterizer has to do more work. It is already a bottleneck, so we have to balance model performance against training time.
Calculating the error in the rasterizer
Each history position, each lane, and each other agent is encoded into pixels, and our net is only able to predict the next positions on the map with pixel-level accuracy. In many notebooks, the raster has a size of 0.50 m per pixel (a hyperparameter). Thus, the expected mean error will be 0.50 / 4 in each direction for each predicted position.
Source: Github code — error calculation for the rasterization file
The winning architectures for this competition combine EDA, careful error calculation, and backbones such as ResNet (18, 34, 50) and EfficientNet (B1, B3 and B6). For the code, please check this Github repo:
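To tie the raster-size arithmetic above together, here is a small helper that reproduces the calculations — a sketch of our own, not part of L5Kit, assuming the 5-second prediction horizon and the ego placed 25% from the left edge as in the examples:

import math

def required_raster_size(max_speed_ms, pixel_size_m=0.5, horizon_s=5.0, ego_offset=0.25):
    """Rough raster width (in pixels) needed to keep a horizon_s-second
    trajectory at max_speed_ms inside the image."""
    distance_m = max_speed_ms * horizon_s                # e.g. 20 m/s -> 100 m
    pixels_ahead = distance_m / pixel_size_m             # e.g. 100 / 0.5 -> 200
    return math.ceil(pixels_ahead / (1.0 - ego_offset))  # e.g. 200 / 0.75 -> 267

print(required_raster_size(20, pixel_size_m=0.5))   # ~267 px
print(required_raster_size(20, pixel_size_m=0.2))   # ~667 px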
https://medium.com/towards-artificial-intelligence/lyft-motion-prediction-for-autonomous-vehicles-2020-410e58e703af
['Rashmi Margani']
2020-11-27 19:39:14.867000+00:00
['Self Driving Cars', 'Kaggle', 'Deep Learning', 'Machine Learning', 'Computer Vision']
Deno VS Node
What is Deno?
Deno is a TypeScript runtime based on V8, Google's JavaScript engine; if you are familiar with Node.js, the popular server-side JavaScript ecosystem, you will see that Deno is essentially the same thing, except that it was designed with some improvements:
- It is based on modern functionality of the JavaScript language;
- It has an extensive standard library;
- It supports TypeScript natively;
- It supports EcmaScript modules;
- It doesn't have a centralized package manager like npm;
- It has several built-in utilities, such as a dependency inspector and a code formatter;
- It aims to be as compatible with browsers as possible;
- Security is the main feature.
What are the main differences with Node.js?
I think that Deno's main goal is to replace Node.js. However, there are some important common characteristics. For example:
- Both were created by Ryan Dahl;
- Both were developed on Google's V8 engine;
- Both were developed to execute server-side JavaScript.
But on the other side there are some important differences:
- Rust and TypeScript. Unlike Node.js, which is written in C++ and JavaScript, Deno is written in Rust and TypeScript.
- Tokio. Introduced in place of libuv as the event-driven asynchronous platform.
- Package manager. Unlike Node.js, Deno doesn't have a centralized package manager, so it is possible to import any ECMAScript module from a URL.
- ECMAScript. Deno uses modern ECMAScript functionality in all its APIs, while Node.js uses a standard callback-based library.
- Security. Unlike a Node.js program, which by default inherits the permissions of the system user that's running the script, a Deno program runs in a sandbox. For example, access to the file system, to network resources, etc., must be authorized with a permission flag.
Installation
Deno is a single executable file without dependencies. We can install it on our machine by downloading the binary version from this page, or we can download and execute one of the installers listed below.
Shell (Mac, Linux)
$ curl -fsSL https://deno.land/x/install/install.sh | sh
PowerShell (Windows)
$ iwr https://deno.land/x/install/install.ps1 -useb | iex
Homebrew (Mac OS)
$ brew install deno
Let's take a look at security
One of Deno's main features is security. Compared to Node.js, Deno executes the source code in a sandbox; this means that the runtime:
- Doesn't have access to the file system;
- Doesn't have access to network resources;
- Cannot execute other scripts;
- Doesn't have access to environment variables.
Let's look at a simple example. Consider the following script:

async function main () {
  const encoder = new TextEncoder()
  const data = encoder.encode('Hello Deno! 🦕 ')
  await Deno.writeFile('hello.txt', data)
}

main()

The script is really simple. It just creates a text file named hello.txt that will contain the string Hello Deno 🦕. Really simple! Or is it? As we said before, the code will run in a sandbox and, obviously, it doesn't have access to the filesystem.
In fact, if we execute the script with the following command: $ deno run hello-world.ts it will print to the terminal something like: Check file:///home/davide/denoExample/hello-world.ts error: Uncaught PermissionDenied: write access to "hello.txt", run again with the --allow-write flag at unwrapResponse ($deno$/ops/dispatch_json.ts:42:11) at Object.sendAsync ($deno$/ops/dispatch_json.ts:93:10) at async Object.open ($deno$/files.ts:38:15) at async Object.writeFile ($deno$/write_file.ts:61:16) at async file:///home/davide/projects/denoExample/hello-world.ts:5:3 As we can see, the error message is really clear. The file was not created on the filesystem because the script does not have the write permission to do that, but by adding the --allow-write flag: $ deno run --allow-write hello-world.ts the script ends without errors and the file hello.txt is created correctly in the current working directory. In addition to the --allow-write flag, which gives us access to the filesystem, there are other flags such as --allow-net, which gives us access to network resources, or --allow-run, which is useful to run external scripts or subprocesses. We can find the complete permissions list at the following URL: https://deno.land/manual/getting_started/permissions. A simple server Now we will create a simple server that accepts connections on port 8000 and returns the string Hello Deno to the client. // file server.ts import { serve } from 'https://deno.land/std/http/server.ts' const s = serve({ port: 8000 }) console.log('Server listening on port :8000') for await (const req of s) { req.respond({ body: 'Hello Deno! ' }) } Obviously, to run the script we need to specify the --allow-net flag: $ deno run --allow-net server.ts Something like this will appear in our terminal: Now, if we open our favourite browser, or if we want to use the curl command, we can test the URL http://localhost:8000 . The result will be something like this: Modules Just like browsers, Deno loads all of its modules via URL. Many people are initially confused by this approach, but it makes sense. Here's an example: import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Importing packages via URL has advantages and disadvantages. The main advantages are: more flexibility; we can create a package without publishing it in a public repository (like npm). I think that some sort of package manager may be released in the future, but nothing official has come out for now. The official Deno website gives us the opportunity to host our source code and distribute it via URLs: https://deno.land/x/. Importing packages via URLs gives developers the freedom they need to host their code wherever they want: decentralization at its best. Therefore, we don't need a package.json file or the node_modules directory. When the application starts, all imported packages are downloaded, compiled, and stored in a cache. If we want to download all the packages again, we need to specify the --reload flag. Do I need to type the URL every time? 🤯🤬 Deno supports import maps natively. This means that it's possible to specify a special command flag like --importmap=<FILENAME> . Let's take a look at a simple example.
Imagine that we have a file import_map.json , with the following content: { "imports": { "fmt/": "https://deno.land/[email protected]/fmt/" } } The file specifies that the fmt/ key of the imports object corresponds to the URL https://deno.land/[email protected]/fmt/ and it can be used as follows: // file colors.ts import { green } from "fmt/colors.ts"; console.log(green("Hello Deno! 🦕")); This feature is unstable at the moment, so we need to run our script colors.ts with the --unstable flag: $ deno run --unstable --importmap=import_map.json colors.ts Now something like this appears in our terminal: Versioning Package versioning is the developer's responsibility and, on the client side, we can decide to use a specific version in the URL of the package when we import it: https://unpkg.com/[email protected]/dist/package-name.js Ready to use utilities Speaking honestly: the current state of JavaScript tooling for developers is real CHAOS! And when TypeScript tools are added, the chaos increases further. 😱 Photo by Ryan Snaadt on Unsplash One of the best JavaScript features is that the code is not compiled and can be executed immediately in a browser. This makes life easier for developers, and it is very easy to get immediate feedback on written code. Unfortunately, however, this simplicity has lately been undermined by what I consider "the cult of excessive tooling". These tools have turned JavaScript development into a real nightmare of complexity. There are entire online courses on Webpack configuration! Yes, you read that right…a whole course! The chaos of the tools has increased to the point that many developers are eager to get back to actually writing code rather than playing with configuration files. An emerging project that aims to solve this problem is Facebook's Rome project. Deno, on the other hand, ships a complete ecosystem, including the runtime and module management. This approach offers developers all the tools they need to build their applications. Now, let's take a look at the tools that the Deno 1.6 ecosystem offers, and how developers can use them to reduce third-party dependencies and simplify development. It's not yet possible to replace an entire build pipeline with Deno, but I don't think we'll have to wait much longer. Below is the list of integrated features: bundler: it writes the specified module and all its dependencies into a single JavaScript file; debugger: it gives us the ability to debug our Deno programs with Chrome DevTools, VS Code and other tools; dependency inspector: when executed on an ES module it shows the whole dependency tree; doc generator: it analyzes the JSDoc annotations in a given file and produces the documentation for us; formatter: it formats JavaScript or TypeScript code automatically; test runner: a utility that lets us test our source code using the assertions module of the standard library.
linter: useful for identifying potential bugs in our programs. Bundler Deno can create a simple bundle from the command line using the deno bundle command, but it also exposes an API internally. With this API the developer can create a custom output, or something that can be used for frontend purposes. This API is unstable, so we need to use the --unstable flag. Let's take the example we did earlier, modifying it as follows: // file colors.ts import { green } from "https://deno.land/[email protected]/fmt/colors.ts"; console.log(green("Hello Deno! 🦕")); And now let's create our bundle from the command line: $ deno bundle colors.ts colors.bundle.js This command creates a file colors.bundle.js that contains all the source code needed to execute it. In fact, if we try to run the script with the command: $ deno run colors.bundle.js we will notice that no module is downloaded from Deno's repository, because all the code needed for the execution is contained in the colors.bundle.js file. The result we see on the terminal is the same as in the previous example: Debugger Deno has an integrated debugger. If we want to launch a program in debug mode manually, we need to use the --inspect-brk flag: $ deno run -A --inspect-brk fileToDebug.ts Now if we open the Chrome inspector at chrome://inspect we find a page similar to this If we click on inspect we can start to debug our code. Dependency inspector Using this tool is really simple! We just need to use the info subcommand followed by the URL (or path) of a module, and it will print the dependency tree of that module. If we launch the command on the server.ts file used in the previous example, it will print something like this in our terminal: The deno info command can also be used to show cache information: Doc generator This is a really useful utility that allows us to generate the JSDoc automatically. To use it, just run the deno doc command followed by a list of one or more source files, and it will automatically print to the terminal the documentation for all the exported members of our modules. Let's take a look at how it works with a simple example. Let's imagine that we have a file add.ts with the following content: /** Adds x and y. @param {number} x @param {number} y @returns {number} Sum of x and y */ export function add(x: number, y: number): number { return x + y; } Executing the deno doc command will print the following JSDoc on the standard output: It's possible to use the --json flag to produce the documentation in JSON format. This JSON format can be used by Deno's website to generate the module documentation automatically. Formatter The formatter is provided by dprint, an alternative to Prettier that clones all the rules established by Prettier 2.0. If we want to format one or more files, we can use deno fmt <files> or a VSCode extension. If we run the command with the --check flag, it will check the formatting of all JavaScript and TypeScript files in the current working directory. Test runner The syntax of this utility is really simple. We just need to use the deno test command and it will run the tests for all files that end with _test or .test and have the .js , .ts , .tsx or .jsx extensions.
In addition to this utility, the standard Deno API gives us the asserts module, which we can use in the following way: import { assertEquals } from "https://deno.land/std/testing/asserts.ts" Deno.test({ name: "testing example", fn(): void { assertEquals("world", "world") assertEquals({ hello: "world" }, { hello: "world" }) }, }) This module gives us nine assertions that we can use in our test cases: assert(expr: unknown, msg = ""): asserts expr assertEquals(actual: unknown, expected: unknown, msg?: string): void assertNotEquals(actual: unknown, expected: unknown, msg?: string): void assertStrictEquals(actual: unknown, expected: unknown, msg?: string): void assertStringContains(actual: string, expected: string, msg?: string): void assertArrayContains(actual: unknown[], expected: unknown[], msg?: string): void assertMatch(actual: string, expected: RegExp, msg?: string): void assertThrows(fn: () => void, ErrorClass?: Constructor, msgIncludes = "", msg?: string): Error assertThrowsAsync(fn: () => Promise<void>, ErrorClass?: Constructor, msgIncludes = "", msg?: string): Promise<Error> Linter Deno has an integrated JavaScript and TypeScript linter. This is a new feature; it's still unstable and, obviously, it requires the --unstable flag to execute. # This command lints all the ts and js files in the current working directory $ deno lint --unstable # This command lints all the listed files $ deno lint --unstable myfile1.ts myfile2.ts Benchmark Ok folks! We've arrived at the moment of truth! Which is the better JavaScript environment, Deno or Node? But I think the right question is another one: which is the fastest? I did a really simple benchmark (an HTTP hello server) and the results were very interesting. I ran it on my laptop, which has the following characteristics: Model: XPS 13 9380 Processor: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz RAM: 16GB DDR3 2133MHz OS: Ubuntu 20.04 LTS Kernel version: 5.4.0-42 The tool I used for these benchmarks is autocannon, and the scripts are the following: // file node_http.js const http = require("http"); const hostname = "127.0.0.1"; const port = 3000; http.createServer((req, res) => { res.end("Hello World"); }).listen(port, hostname, () => { console.log("node listening on:", port); }); // file deno_http.ts import { serve } from "https://deno.land/[email protected]/http/server.ts"; const port = 3000; const s = serve({ port }); const body = new TextEncoder().encode("Hello World"); console.log("deno_http listen on", port); for await (const req of s) { const res = { body, headers: new Headers(), }; res.headers.set("Date", new Date().toUTCString()); res.headers.set("Connection", "keep-alive"); req.respond(res).catch(() => {}); } We can find them in the following github repository: The first test case was performed with 100 concurrent connections using the command autocannon http://localhost:3000 -c100 and the results are the following: It seems that Node beats Deno on speed! But this benchmark is based on 100 concurrent connections, which is a lot for a small or medium server. So let's do another test, this time with 10 concurrent connections. And again, Node beats Deno: It seems that, in terms of performance, Node beats Deno 2–0! It performs better in both analyzed cases. However, Deno is a really young project and the community is working hard to get it adopted in production environments soon, but it will be a tough fight against a titan like Node.
Conclusion The main purpose of this article was not to champion either Node or Deno, but rather to compare the two environments. Now you should have an understanding of the similarities and differences between the two. Deno has some particular advantages for developers, including a robust support system and native TypeScript support. The design decisions and additional built-in tools aim to provide a productive environment and a good developer experience. I don't know whether these choices will turn out to be a double-edged sword in the future, but they seem to be attracting more developers right now. Node, on the other hand, has a robust ecosystem, ten years of development and releases behind it, a huge community, online courses that can help us with many problems, a long list of frameworks (Fastify, Express, Hapi, Koa etc.), and many books like "Node.js Design Patterns" or "Node Cookbook", which I consider the best books about Node.js. For these, and many other reasons, I think Node is the safest choice for now. What can I say… HAPPY CODING!
https://davide-dantonio.medium.com/deno-vs-node-658fc5e1fb5c
["Davide D'Antonio"]
2020-12-26 10:03:34.906000+00:00
['Typescript', 'Deno', 'Nodejs', 'JavaScript', 'Javascript Development']
Chicago Hospital Vows to End Cosmetic Surgery on Intersex Infants
A children’s hospital in Chicago has apologized for performing cosmetic genital surgeries on intersex infants, vowing to put an end to the practice. Ann & Robert H. Lurie Children’s Hospital of Chicago released a statement last week apologizing for having previously performed such surgeries. Signatories of the statement went on to condemn the practice, acknowledging the harm that these surgeries have caused. “We recognize the painful history and complex emotions associated with intersex surgery and how, for many years, the medical field has failed these children,” the statement read. “We empathize with intersex individuals who were harmed by the treatment that they received according to the historic standard of care, and we apologize and are truly sorry.” This statement and apology comes after activists from the Intersex Justice Project called on the hospital to ban these harmful procedures nearly three years ago. The organization held protests outside of the hospital, organized email campaigns, and encouraged activists to use the hashtag #EndIntersexSurgery on social media. Intersex is a “general term used for a variety of conditions in which a person is born with reproductive or sexual anatomy that doesn’t seem to fit the typical definitions of female or male,” according to the Intersex Society of North America. Around 1.7% of the global population is born intersex and 1 in 2,000 intersex babies will be recommended for cosmetic genital surgery by their doctors. As such, the fight to end cosmetic surgeries on intersex infants has been years in the making. Organizations like the ACLU and Human Rights Watch have long warned of the risks of such procedures, calling them unnecessary and claiming that they do nothing to help intersex individuals better adjust to society. These surgeries, which misguidedly aim to help intersex children fit into outdated definitions of what it means to be male or female, date all the way back to the 1960s. Many intersex people who have been forced to undergo cosmetic genital surgery as infants have since reported psychological trauma, loss of sexual sensation, and higher risks of scarring. In 2017, three former US Surgeons-General asserted that there was little evidence to suggest that growing up with intersex genitalia causes psychological distress, but there was evidence to indicate that having irreversible surgery without consent can, in fact, cause emotional and physical harm. As a result, around 40% of intersex people who undergo surgery as infants will grow up to reject the sex and imposed gender that has been surgically assigned to them. While the number of infants who underwent cosmetic surgery at Lurie Children’s Hospital is unknown, the hospital has promised that no such surgeries will take place until intersex people are old enough to consent. “Historically care for individuals with intersex traits included an emphasis on early genital surgery to make genitalia appear more typically male or female,” the statement continued. “As the medical field has advanced, and understanding has grown, we now know this approach was harmful and wrong.”
https://medium.com/an-injustice/chicago-hospital-vows-to-end-cosmetic-surgery-on-intersex-infants-26b1525b1714
['Catherine Caruso']
2020-08-05 18:52:44.858000+00:00
['LGBTQ', 'Society', 'Health', 'Equality', 'Justice']
How Facebook Plans to Crack Down on Anti-Vax Content
When you search “vaccine” on Facebook, one of the first search results includes a group called “Vaccine Injury Stories,” where users share posts featuring common hoaxes that blame vaccinations for infant sickness and death. With a quick search, more than 150 groups appear on Facebook — some with thousands of members — promoting misinformation about vaccines’ effects and acting as an echo chamber for pseudoscience. The social network pledged to crack down on these groups in March but has not announced any clear plans until now. Next month, Facebook is rolling out a search tool to combat this kind of misinformation, the company told Cheddar, similar to what Twitter launched last week. Once the tool is live, a search of “vaccines” or related terms on Facebook will link to a neutral medical institution, like the Center for Disease Control (CDC), and more medically-verified information about vaccines. Similar to what Facebook did with searches about buying opioids and white nationalism, the information will sit at the top of search results and present facts alongside misinformation, rather than banning or removing the content altogether.
https://medium.com/cheddar/how-facebook-plans-to-crack-down-on-anti-vax-content-773eb15e02d3
['Jake Shore']
2019-05-23 17:45:18.885000+00:00
['Technology', 'Social Media', 'Science', 'Facebook', 'Vaccines']
Scammers Are Targeting COVID-19 Contact Tracing Efforts
The Vast Majority of Contact Tracing Takes Place on the Phone You should immediately be suspicious of contact tracing outreach that takes place via email or text message. The vast majority of contact tracing efforts are done over the phone, and very few legitimate agencies use email or text messages for their initial outreach. Even if you think the contact tracing email or text message may be legitimate, you should never click any embedded links. These links could harbor malware and viruses designed to steal your private information. Contact Tracers Will Not Mention COVID-19 Patients by Name In an effort to gain your trust, the scammers may tell you that a close family member or friend has tested positive for COVID-19, and that you should immediately schedule a test for the virus. That kind of news is certainly alarming, but it’s likely not real. There are strict privacy laws in place surrounding healthcare and medical diagnoses, and contact tracers are not allowed to say who is infected, only where they have been and who they have been in contact with. Unfortunately, the inclusion of a name often lends credibility to the scammers, fooling even those who are generally very wary of such efforts. Keep in mind, however, that bad actors can easily find this kind of information on social media, and you should not fall for the ruse. Never Hand over Your Social Security Number or Banking Information Another thing legitimate COVID-19 contact tracers will never do is ask for your Social Security number or banking information. If the person on the other end of the phone makes such a request, you should simply hang up. If you have caller ID and the phone number is visible, you can contact the local police to report the suspected crime. These scams are gaining speed, and it’s important to protect your friends and neighbors as well as yourself. Watch out for COVID-19 Testing Charges One of the most important goals of COVID-19 contact tracing is to identify potential sources of infection and facilitate testing for the disease. Legitimate contact tracers will urge those they call to schedule a COVID-19 test as soon as possible, and they will provide a list of resources and testing sites as well. What legitimate contact tracers will not do is demand payment up front. They will not require you to hand over credit card or bank account information, and if they do, again, just hang up. In the vast majority of cases, you will not have to pay anything at all for a COVID-19 test, especially if you’ve been in contact with someone who has tested positive for the disease. Insurance companies are required to cover COVID-19 testing and treatment at no cost to their subscribers, and government funding generally covers testing costs for the uninsured.
https://georgejziogas.medium.com/scammers-are-targeting-covid-19-contact-tracing-efforts-5f9acb570b87
['George J. Ziogas']
2020-09-12 21:04:37.729000+00:00
['Privacy', 'Health', 'Cybersecurity', 'Coronavirus', 'Covid 19']
Time Series Analysis
Introduction to Time Series A time series is a sequence of numerical data points ordered chronologically. In most cases, a time series is a sequence taken at fixed intervals in time. This allows us to accurately predict or forecast the quantities we care about. Time series are usually shown as line charts, which reveal seasonal patterns, trends, and relations to external factors. Using past time series values for forecasting is called extrapolation. Time series are used in many real-life applications such as weather reports, earthquake prediction, astronomy, mathematical finance, and, broadly, in any field of applied science and engineering. They give us deeper insight into a field of work, and forecasting helps increase the efficiency of our output. Time Series Forecasting Time series forecasting is a method of using a model to predict future values based on previously observed time series values. Time series is an important part of machine learning. It identifies a seasonal pattern or trend in the observed time-series data and uses it for future predictions or forecasting. Forecasting involves fitting models on rich historical data and using them to predict future observations. One of the most distinctive features of forecasting is that it does not predict the future exactly; it gives us a calculated estimate, based on what has already happened, of what could happen. Image Courtesy: www.wfmanagement.blogspot.com Now let's look at the general forecasting methods used in day-to-day problems. Qualitative forecasting is generally used when historical data is unavailable and is considered to be highly subjective and judgmental. Quantitative forecasting is used when we have large amounts of data from the past, and is considered highly effective as long as there are no strong external factors in play. The skill of a time series forecasting model is determined by its performance at predicting the future. This is often at the cost of being able to explain why a specific prediction was made, of confidence intervals, and, even better, of understanding the underlying factors behind the problem. Some general examples of forecasting are: Governments forecast unemployment rates, interest rates, and expected revenues from income taxes for policy purposes. Day-to-day weather prediction. College administrators forecast enrollments to plan for facilities and faculty recruitment. Industries forecast demand to control inventory levels, hire employees, and provide training. Application of Time Series Forecasting The usage of time series models is twofold: Obtain an understanding of the underlying forces and structure that produced the data Fit a model and proceed to forecast. There is an almost endless number of time series forecasting problems. Below are a few examples from a range of industries to make the notions of time series analysis and forecasting more concrete. Forecasting the rice yield in tons by state each year. Forecasting whether an EEG trace in seconds indicates a patient is having a heart attack or not. Forecasting the closing price of a stock each day. Forecasting the birth or death rate at all hospitals in a city each year. Forecasting product sales in units sold each day. Forecasting the number of passengers booking flight tickets each day. Forecasting unemployment for a state each quarter. Forecasting the size of the tiger population in a state each breeding season.
Now let's look at an example. We are going to use the Google new year resolution dataset (a minimal code sketch of these steps appears at the end of this passage). Step 1: Import Libraries Picture 1 Step 2: Load Dataset Picture 2 Step 3: Change the month column into the DateTime data type Picture 3 Step 4: Plot and visualize Picture 4.1 Picture 4.2 Step 5: Check for trend Picture 5.1 Picture 5.2 Step 6: Check for seasonality Picture 6.1 Picture 6.2 We can see that there is roughly a 20% spike each year; this is seasonality. Components of Time Series Time series analysis provides a ton of techniques to better understand a dataset. Perhaps the most useful of these is the splitting of a time series into 4 parts: Level: The base value for the series if it were a straight line. Trend: The linear increasing or decreasing behavior of the series over time. Seasonality: The repeating patterns or cycles of behavior over time. Noise: The variability in the observations that cannot be explained by the model. All time series generally have a level and noise, while trend and seasonality are optional. The main features of many time series are trends and seasonal variation. Another feature of most time series is that observations close together in time tend to be correlated. These components combine in some way to produce the observed time series. For example, they may be added together to form a model such as: Y = level + trend + seasonality + noise Image Courtesy: Machine Learning Mastery These components are the most effective way to make predictions about future values, but they may not always work. That depends on the amount of data we have about the past. Analyzing Trend Examining data for consistently increasing or decreasing behavior in its graphical representation is known as trend analysis. As long as the trend is monotonically increasing or decreasing, that part of the analysis is generally not very difficult. If the time series data contains considerable error, then the first step in the process of trend identification is smoothing. Smoothing. Smoothing always involves some form of local averaging of data such that the noise components of individual observations cancel each other out. The most widely used technique is moving average smoothing, which replaces each element of the series with a simple or weighted average of the surrounding elements. Medians can also be used instead of means. The main advantage of median smoothing compared to moving average smoothing is that its results are less biased by outliers within the smoothing window. The main disadvantage of median smoothing is that, in the absence of clear outliers, it may produce more jagged curves than moving average smoothing. In other, less common cases, when the measurement error is quite large, the distance weighted least squares smoothing or negative exponentially weighted smoothing techniques might be used. These methods generally tend to ignore outliers and give a smooth fitted curve. Fitting a function. If there is a clear monotonic nonlinear component, the data first need to be transformed to remove the nonlinearity. Usually, a log, exponential, or polynomial function is used to achieve this. Now let's take an example to understand this more clearly, Picture 7.1 Picture 7.2 From the above diagram, we can easily see that there is an upward trend for 'Gym' every year! Analyzing Seasonality Seasonality is the repetition of a pattern at a fixed time interval. For example, every year we notice that people tend to go on vacation during December and January; this is seasonality.
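The example steps above originally pointed to screenshots; as a minimal sketch of the same workflow in pandas and matplotlib (the file name 'new_year_resolutions.csv' and the column names 'month', 'Gym', and 'Diet' are assumptions based on the article, not the exact dataset):

import pandas as pd
import matplotlib.pyplot as plt

# Steps 1-3: import libraries, load the dataset, and parse the month
# column as datetimes (file and column names are assumptions)
df = pd.read_csv('new_year_resolutions.csv')
df['month'] = pd.to_datetime(df['month'])
df = df.set_index('month')

# Step 4: plot and visualize every resolution topic over time
df.plot(figsize=(12, 6))
plt.title('Search interest in New Year resolutions')
plt.ylabel('Relative search interest')
plt.show()

# Steps 5-6: inspect individual topics for trend and seasonality,
# e.g. the upward trend for 'Gym' and the January spike for 'Diet'
df[['Gym', 'Diet']].plot(subplots=True, figsize=(12, 6))
plt.show()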
Seasonality is another of the most important characteristics in time series analysis. It is generally measured by autocorrelation after subtracting the trend from the data. Let's look at another example from our dataset, Picture 8.1 Picture 8.2 From the above graph, it is clear that there is a spike at the start of every year, which means that in January, more than in any other month, people tend to take 'Diet' as their resolution. This is a perfect example of seasonality. AR, MA, and ARIMA Autoregression Model (AR) AR is a time series model that uses observations from previous time steps as input to a regression equation to predict the value at the next time step. A regression model like linear regression takes the form: yhat = b0 + (b1 * X1) This technique can be used on time series where the input variables are observations at previous time steps, called lag variables. This would look like: Xt+1 = b0 + (b1 * Xt) + (b2 * Xt-1) Since the regression model uses data from the same input variable at previous time steps, it is referred to as autoregression. Moving Average Model (MA) The residual errors from forecasts in a time series provide another source of information that can be modeled. The residual errors form a time series themselves. An autoregression model of this structure can be used to predict the forecast error, which in turn can be used to correct forecasts. Structure in the residual error may consist of trend, bias, and seasonality, which can be modeled directly. One can create a model of the residual error time series and predict the expected error of the model. The predicted error can then be subtracted from the model prediction and in turn provide an additional lift in performance. An autoregression of the residual errors is called a Moving Average model. Autoregressive Integrated Moving Average (ARIMA) Autoregressive integrated moving average, or ARIMA, is a very important part of statistics, econometrics, and in particular time series analysis. ARIMA is a forecasting technique that gives us future values based entirely on the series' own inertia. Autoregressive Integrated Moving Average (ARIMA) models include an explicit statistical model for the irregular component of a time series, one that allows for non-zero autocorrelations in that component. ARIMA models are defined for stationary time series. Therefore, if you start with a non-stationary time series, you will first need to 'difference' the time series until you obtain a stationary one. An ARIMA model can be created using the statsmodels library as follows: Define the model by using ARIMA() and passing in the p, d, and q parameters. The model is prepared on the training data by calling the fit() function. Predictions can be made by using the predict() function and specifying the index of the time or times to be predicted. Now let's look at an example. We are going to use a dataset called 'Shampoo sales' (a hedged code sketch is given at the end of this passage). Picture 9.1 Picture 9.2 ACF and PACF We can calculate the correlation of time-series observations with observations from previous time steps, called lags. Since the correlation of the time series observations is calculated with values of the same series at previous times, this is called a serial correlation, or an autocorrelation. A plot of the autocorrelation of a time series by lag is called the AutoCorrelation Function, or ACF. This plot is sometimes called a correlogram or an autocorrelation plot.
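As a hedged sketch of the statsmodels workflow described above (the file name, column name, and the (5, 1, 0) order are illustrative assumptions, and the import path assumes a recent statsmodels release):

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Load the shampoo sales series (file and column names are assumptions)
series = pd.read_csv('shampoo_sales.csv', index_col=0)['Sales']

# Define the model with the (p, d, q) order, then fit it on the data
model = ARIMA(series, order=(5, 1, 0))
fitted = model.fit()
print(fitted.summary())

# Forecast the next 5 time steps
print(fitted.forecast(steps=5))

# ACF (correlogram) and PACF plots by lag, commonly used to choose p and q
plot_acf(series, lags=20)
plot_pacf(series, lags=20)
plt.show()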
For example, Picture 10 A partial autocorrelation, or PACF, summarizes the relationship between an observation in a time series and observations at prior time steps, with the effect of the intervening observations removed. For example, Picture 11 Conclusion Time series analysis is one of the most important aspects of data analytics for any large organization, as it helps in understanding seasonality, trends, cyclicality, and randomness in sales, distribution, and other attributes. These factors help companies make well-informed decisions, which is crucial for business.
https://medium.com/swlh/time-series-analysis-7006ea1c3326
['Athul Anish']
2020-11-25 22:13:44.187000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Startup', 'Data']
What to Look for in a Marketing Agency
Once upon a time, Nike wasn’t the leader in the U.S. market for athletic footwear. In the 1980’s, Nike was trailing behind Reebok, with the latter being hailed as “the biggest brand-name phenomena of the decade.” In an effort to revamp its brand and capture new markets, Nike hired an ad agency, Wieden+Kennedy, which ultimately coined the three-word slogan we all know and love. The genius behind “Just Do It” is its quality of being universally personal. It’s personal to the professional and the everyday person; to participants of team and individual sports alike. Those three small words had a huge effect: ten years after “Just Do It” first launched, Nike went from $800 million to over $9.2 billion in sales. To this day, Wieden+Kennedy is best known for their work with Nike. If you’re looking for your Wieden+Kennedy, here’s our advice to you. Don’t “Just Do It.” As you can imagine, “genius” is the result of hundreds of hours of research, testing, and strategy. It’s the perfect synergy between client and service provider; where vision meets execution. Selecting the right marketing partner takes careful consideration and proper vetting. Here are three things to look for in the selection process: 1. The Marketing Agency Thinks Like You Likely the largest frustration in the client-service provider relationship is the “business disconnect”. The key to Nike’s success was a service provider that kept business strategy top of mind and not just brand recognition. Wiedens+Kennedy had the mentality of “in it for the long haul” — not one campaign. A marketing firm might have the best talent in town, but are they strategizing around your business objectives? Are your revenue goals, growth goals, and long-term plan central to their marketing goals for you? Being business-minded is an essential characteristic that should be given the highest priority when considering an agency. If your current candidate doesn’t meet this standard, you’ll find yourself pouring money down an expensive, twisted drain. 2. The Marketing Agency Knows Your Business An optometrist might be able to offer better medical advice than Joe on the street for your knee pain, but an orthopedic specialist knows best. Similarly, it is important to seek an agency that has a breadth of knowledge and experience relevant to your field. Nike didn’t hire marketing professionals who just happened to have a passion for sports. The cost of onboarding an agency to your industry is something you can and should avoid. 3. The Marketing Agency Has a Strong Reputation Marketing is rooted in common core principles, but not all marketers are created equal. When choosing a marketing agency, ensure they have the proper resources to execute your vision. What results have they produced for previous clients, and how did they do it? What reporting systems do they use, and how will they hold themselves accountable to you? As intuitive as looking beneath the surface might seem, it is a critical step that is often overlooked. Now that you know what to look for, how will you find your team? Referrals and research are a great way to start. Don’t settle until you’ve found the right fit as the service you’re looking for is more than a transaction. It’s a relationship.
https://medium.com/insights-from-the-incubator/what-to-look-for-in-a-marketing-agency-4f06b0721608
['The Incubator']
2016-09-13 21:29:30.019000+00:00
['Advertising', 'Marketing', 'Business', 'Small Business', 'Startup']
Deploying Databases on Kubernetes
A core function of Civis Platform is allowing data scientists to deploy their workloads on-demand, without needing to worry about the infrastructure involved. About a year ago, Civis put Jupyter Notebooks on Kubernetes, then did the same for Shiny apps. These features allow users to perform computations and deploy apps in the cloud. However, as users began to leverage these features more and more, we received requests for more options to connect these cloud-deployed web apps with persistent storage. Carrie’s internship was focused on exploring these options and creating a proof of concept for user-deployed databases. Finding the Right Database Previously, web apps deployed via Platform had only one easy option for persistent storage: Redshift, a column store database. Although column store databases perform well for many data science tasks, such as querying and column-level data analyses, they tend to be slow when it comes to fetching and updating single rows of information. For transactional data processing, a traditional row store database is more efficient. Examples of such databases include MySQL and Postgres. These databases are quite common in the web development world, since most web apps are transaction oriented. Our app developers needed a row store database. After deciding to use a row store database, we still had to decide what type of row store database to deploy. Since our use case was support for small custom web apps, we wanted a highly consistent SQL database with ACID guarantees. Another factor we considered was containerization and ease of deployment on Kubernetes. We use Kubernetes to deploy our existing client workloads and we wanted to expand upon this cluster. Additional criteria were reliability, scalability, high availability, replication, self-healing, etc. There are many databases out there, each with their own strengths and weaknesses. Ultimately, after comparing our options, we decided to use CockroachDB. CockroachDB is an open source, row store, Postgres flavored database with an emphasis on durability. It is designed to survive most hardware and software failures while maintaining consistency across multiple replicas. Plus, it provided good documentation around deployment on Kubernetes. Initial Experiments Once we chose CockroachDB, it was time to try out actually deploying a database on Kubernetes. Using a Kubernetes Statefulset, we were able to create a CockroachDB cluster by bringing up multiple Kubernetes pods running the CockroachDB Docker image. Because it’s a distributed database, CockroachDB distributes its data across multiple nodes, using the Raft algorithm to ensure consensus. This distribution gives the database resiliency against node failures. Investigation of Durability One of the main claims made by CockroachDB is that it automatically survives hardware and software failures. (Hence the name “CockroachDB,” since cockroaches are hard to kill.) Part of researching CockroachDB was checking the credibility of those claims. We had fun trying to “kill the cockroaches” by simulating different types of failures. The first failure we simulated was a pod failure. If there are enough healthy pods to reliably recover the lost data, then the database is supposed to automatically create another pod to replace the one that failed. After manually killing a pod from the cluster, we were able to verify that a new one came up in its place and that none of the data was lost. 
Since we were using local storage, instead of attaching external volumes using PVCs (due to known volume scheduling issues in multi-zone clusters), killing a pod meant that its backing storage was also killed. This showed that replication of data across pods was happening properly. Next, we simulated a node failure. We found that once the cluster identified that a node was missing, it was able to automatically reschedule the terminated pods to other nodes. In testing these different failures, the importance of preparing for the worst conditions your system might face was highlighted. As an additional reliability precaution, we wanted to ensure that CockroachDB pods were scheduled across different nodes in the Kubernetes cluster. This was done by adding inter-pod anti-affinity rules to the Statefulset. These rules determine which nodes pods can be scheduled on, based on the labels which other pods running on the node have. For our use case, we set constraints such that pods backing the same database could not be scheduled to the same node. Productionalizing Databases After the research steps were complete, the next phase of the project was to make databases a feature for Civis users. For the next month, Carrie worked on refactoring our code to make adding databases as simple as deploying a service on Civis Platform. This was a large change that required several different steps to ensure not only code functionality, but also code quality. This provided Carrie with key learning experiences related to the code review process and debugging issues — for example, it is better to take your time and thoroughly check everything, rather than waiting for errors to arise. The priority of sufficient testing surpasses the need to deploy code as quickly as possible. Next Steps Once the database deployment process is complete, the next step is to allow users to connect to these databases through Shiny apps and Notebooks. Additionally, we need to automate processes for backing up and restoring these databases. More setup is required to back up data outside of the Kubernetes cluster. There are also some additional configuration options for the databases which we would like to expose to users, such as the number of replicas in their CockroachDB cluster. This project has provided Carrie with ample learning opportunities, not only with CockroachDB and Kubernetes, but also with production code and development processes. Carrie enjoyed tackling challenges such as getting Kubernetes to work, setting up Docker images for CockroachDB, refactoring code, and networking with pods.
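Since CockroachDB speaks the PostgreSQL wire protocol, a notebook or app could in principle connect to one of these databases with an ordinary Postgres driver. The sketch below uses psycopg2 with entirely hypothetical connection details (the service name, port, credentials, and database name are placeholders, and a secure cluster would also need SSL options):

import psycopg2

# All connection details below are placeholders for illustration only.
conn = psycopg2.connect(
    host='cockroachdb-public',  # hypothetical Kubernetes service name
    port=26257,                 # CockroachDB's default SQL port
    user='app_user',
    password='app_password',
    dbname='appdb',
)

with conn:
    with conn.cursor() as cur:
        # A small transactional, row-oriented workload
        cur.execute(
            'CREATE TABLE IF NOT EXISTS visits ('
            'id SERIAL PRIMARY KEY, ts TIMESTAMP DEFAULT now())'
        )
        cur.execute('INSERT INTO visits DEFAULT VALUES')
        cur.execute('SELECT count(*) FROM visits')
        print(cur.fetchone()[0])

conn.close()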
https://medium.com/civis-analytics/deploying-databases-on-kubernetes-e2cb7633dda5
['Civis Analytics']
2018-08-30 23:23:32.033000+00:00
['Data Science', 'Cockroachdb', 'Engineering', 'Kubernetes', 'Database']
Radar Chart Basics with Python’s Matplotlib
Radar Chart Basics with Python's Matplotlib One handy alternative for displaying multiple variables In this article, I'll go through the basics of building a Radar chart, a.k.a. Polar, Web, Spider, and Star charts. The purpose of this visualization is to display multiple quantitative variables in a single view. That allows us to compare the variables with each other, spot outliers, and even compare different sets of variables by drawing multiple radar charts. The idea comes from Pie charts, more precisely from one of their variations, the Polar Area Chart. Florence Nightingale Florence Nightingale, considered the precursor of modern nursing, was also the first to publish a polar area chart. Her aesthetic and informative chart conveys information about the Crimean War, more specifically about the causes of deaths. The areas of the slices represent the number of deaths in each month, where blue represents deaths from preventable or mitigable zymotic diseases, red deaths from wounds, and black other causes of death. With that, she was able to clarify the importance of nursing by showing that most deaths were not caused by war wounds, but rather by mitigable diseases. Later, in 1877, the German scientist Georg von Mayr published the first Radar Chart. Even though both divide a circumference into equal parts, and have similar origins, they differ a lot, starting with how they encode the values: the Polar Area chart uses slices and their areas, while Radar charts use the distance from the center to mark a point.
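To make the encoding concrete, here is a minimal matplotlib sketch of a radar chart; the five variables and their values are made up purely for illustration:

import numpy as np
import matplotlib.pyplot as plt

# Made-up example: one record scored on five quantitative variables
labels = ['Speed', 'Reliability', 'Comfort', 'Safety', 'Efficiency']
values = [4, 3, 5, 2, 4]

# One angle per variable, evenly spaced around the circle;
# repeat the first point so the polygon closes
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values = values + values[:1]
angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={'polar': True})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
plt.show()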
https://medium.com/python-in-plain-english/radar-chart-basics-with-pythons-matplotlib-ba9e002ddbcd
['Thiago Carvalho']
2020-06-13 19:02:13.618000+00:00
['Python', 'Matplotlib', 'Radar Charts', 'Data Science', 'Data Visualization']
Better understanding of matplot-library
“A picture is worth a thousand words”: plots and graphs can be very effective at conveying a clear description of the data to an audience or at sharing the data with peer data scientists. Data visualization is a way of showing complex data in a graphical form to make it understandable. It is used when you are exploring the data and getting familiar with it. In any corporate setting, it can be very valuable in supporting recommendations to clients, managers, or decision-makers. Darkhorse Analytics is a company that has run a research lab at the University of Alberta since 2008. They have done really fascinating work on data visualization. Their approach to visualizing data depends on three key points: less is more effective, more attractive, and more impactful. In other words, any feature incorporated in the plot to make it attractive and pleasing must support the message that the plot is meant to get across, not distract from it. Matplotlib Matplotlib is one of the most popular data visualization libraries in Python. It was created by a neurobiologist, John Hunter (1968–2012). Matplotlib's architecture is composed of three layers: Architecture of matplotlib Backend layer The backend layer has three built-in abstract interface classes: A. FigureCanvas: matplotlib.backend_bases.FigureCanvas It defines and encompasses the area onto which the figure is drawn. B. Renderer: matplotlib.backend_bases.Renderer An instance of the renderer class knows how to draw on the FigureCanvas. C. Event: matplotlib.backend_bases.Event It handles user input such as keyboard strokes and mouse clicks. Artist layer It is composed of one main object, i.e., the Artist. The artist is the object that knows how to use the renderer to draw on the canvas. Everything we see in a Matplotlib figure is an artist instance. There are two types of artist objects: A. Primitive: Line2D, Rectangle, Circle, and Text. B. Composite: Axis, Tick, Axes, and Figure. Each composite can contain other composite artists as well as primitive artists. For example, a figure artist can contain an axis artist as well as a text artist or rectangle artist. Scripting layer It was developed for those scientists who are not professional programmers. The goal of this layer is to allow quick exploratory analysis of data. It is essentially the matplotlib.pyplot interface. It automates the process of defining a canvas and a figure artist and connecting them. Since this happens automatically, it makes a data analyst's work easier, so most data scientists prefer the scripting layer to visualize their data. A short pyplot script of this kind plots a histogram of a hundred random numbers and saves the histogram as matplotlib_histogram.png (a representative sketch is shown below). Matplotlib's versatility can be used to make many visualization types: scatter plots, bar charts and histograms, line plots, pie charts, and stem plots.
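The histogram script referenced above did not survive extraction; a representative sketch of what such a scripting-layer (pyplot) program might look like is:

import numpy as np
import matplotlib.pyplot as plt

# Scripting-layer example: histogram of a hundred random numbers
data = np.random.randn(100)
plt.hist(data, bins=10)
plt.title('Histogram of 100 random numbers')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.savefig('matplotlib_histogram.png')
plt.show()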
https://medium.com/mldotcareers/data-visualization-8b17843b9bbc
['Saroj Humagain']
2020-09-16 12:08:03.136000+00:00
['Machine Learning', 'Python', 'Data Science', 'Matplotlib', 'Data Visualization']
Queue Data Structure
Queue Implementation We can create a Queue class as a wrapper and use a Python list to store the queue data. This class will implement the enqueue , dequeue , size , front , back , and is_empty methods. The first step is to create the class definition and decide how we are going to store our items. class Queue: def __init__(self): self.items = [] This is basically what we need for now: just a class and its constructor. When the instance is created, it will have the items list to store the queue items. For the enqueue method, we just need to use the list append method to add new items. The new items will be placed at the last index of this items list, so the front item of the queue will always be the first item. def enqueue(self, item): self.items.append(item) It receives the new item and appends it to the list. The size method simply counts the number of queue items by using the len function. def size(self): return len(self.items) The idea of the is_empty method is to verify whether the list has any items in it. If it has, it returns False ; otherwise, True . To count the number of items in the queue, we can simply use the size method already implemented. def is_empty(self): return self.size() == 0 The pop method of the list data structure can be used to dequeue an item from the queue. Called with index 0, it removes the first element, which is exactly what we expect from a queue: the first added item. def dequeue(self): return self.items.pop(0) But we need to handle queue emptiness. For an empty list, the pop method raises the exception IndexError: pop from empty list . So we can create an exception class to handle this issue. class Emptiness(Exception): pass And use it when the list is empty: def dequeue(self): if self.is_empty(): raise Emptiness('The Queue is empty') return self.items.pop(0) If the queue is empty, we raise this exception. Otherwise, we can dequeue the front item from the queue. We use this same emptiness strategy for the front method: def front(self): if self.is_empty(): raise Emptiness('The Queue is empty') return self.items[0] If it has at least one item, we get the front: the first added item in the queue. And the same emptiness strategy for the back method: def back(self): if self.is_empty(): raise Emptiness('The Queue is empty') return self.items[-1] If it has at least one item, we get the back item: the last added item in the queue. Queue usage I created some helper functions to help test the queue usage. def test_enqueue(queue, item): queue.enqueue(item) print(queue.items) def test_dequeue(queue): queue.dequeue() print(queue.items) def test_emptiness(queue): is_empty = queue.is_empty() print(is_empty) def test_size(queue): size = queue.size() print(size) def test_front(queue): front = queue.front() print(front) def test_back(queue): back = queue.back() print(back) They basically call a queue method and print the expected result of the method call.
The usage will be something like: queue = Queue() test_emptiness(queue) # True test_size(queue) # 0 test_enqueue(queue, 1) # [1] test_enqueue(queue, 2) # [1, 2] test_enqueue(queue, 3) # [1, 2, 3] test_enqueue(queue, 4) # [1, 2, 3, 4] test_enqueue(queue, 5) # [1, 2, 3, 4, 5] test_emptiness(queue) # False test_size(queue) # 5 test_front(queue) # 1 test_back(queue) # 5 test_dequeue(queue) # [2, 3, 4, 5] test_dequeue(queue) # [3, 4, 5] test_dequeue(queue) # [4, 5] test_dequeue(queue) # [5] test_emptiness(queue) # False test_size(queue) # 1 test_front(queue) # 5 test_back(queue) # 5 test_dequeue(queue) # [] test_emptiness(queue) # True test_size(queue) # 0 We first instantiate a new queue from the Queue class.
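One design note worth adding: list.pop(0) shifts every remaining element, so dequeue here is O(n). A hedged alternative sketch, keeping the same interface but backed by collections.deque (and reusing the Emptiness exception defined above), gives O(1) dequeues:

from collections import deque

class DequeQueue:
    def __init__(self):
        # deque supports O(1) appends on the right and pops on the left
        self.items = deque()

    def enqueue(self, item):
        self.items.append(item)

    def dequeue(self):
        if self.is_empty():
            raise Emptiness('The Queue is empty')
        return self.items.popleft()

    def front(self):
        if self.is_empty():
            raise Emptiness('The Queue is empty')
        return self.items[0]

    def back(self):
        if self.is_empty():
            raise Emptiness('The Queue is empty')
        return self.items[-1]

    def size(self):
        return len(self.items)

    def is_empty(self):
        return self.size() == 0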
https://medium.com/the-renaissance-developer/queue-data-structure-db5022d9eadb
[]
2020-02-02 23:03:38.142000+00:00
['Python', 'Algorithms', 'Software Development', 'Software Engineering', 'Programming']
How Making People Buy Into The Cause Has Helped Bombas Sell 40 Million Socks
Bombas' secret to becoming a 100 million dollar brand As we look at the Bombas business model and the branding strategies the company uses, everything looks simple and easy to analyze. It's fair to say that Bombas as a brand is heavily dependent on a mission-based marketing strategy. Their whole brand identity and reason for selling are dedicated to a mission of donating socks to homeless shelters. Let me take you through some of the strategies, tactics, and approaches they took to achieve such success. Building a brand of values Bombas has been successful in building a one-product eCommerce store but, as time has progressed, they have launched multiple tees and are looking to enter other verticals as well. But no matter what they sold, Bombas got itself associated with a social mission, and people didn't stop buying from them. Why? Because people felt good when their purchase was associated with a good cause and contributed toward a positive impact on society. In short, their buying experience became more gratifying and valuable. Bombas never misses a chance to show its social and environmental commitments. One such example is Bombas launching its PRIDE collection. The socks were launched to celebrate LGBTQ+ Pride Month, and Bombas said it would donate 40% of all the socks to LGBTQ youth homeless shelters. Bombas PRIDE Collection (Screenshot) Bombas has built a brand of values, showing its support for multiple communities and people and contributing to a good cause for humanity. Design a product that has an impact Both co-founders knew how essential socks were for people living in homeless shelters. Instead of manufacturing and selling the socks that were already out there, they focused on enhancing the durability and capability of socks used in harsh environments. In other words, they wanted to build and design socks that were more sustainable and comfortable than the usual socks. So instead of launching their socks right away, they spent two years before shipping the idea. Finally, in 2013, they came up with socks that were: Manufactured from environmentally sustainable materials like extra-long staple cotton and merino wool. Engineered for both support and comfort; for example, the honeycomb structure on the socks supports the midfoot and helps them hold onto the feet. Smarter socks with multiple designs, perfectly suited to the everyday look and every walk of life. Marathon runners, everyday hustlers, and outdoor activists all liked wearing these socks. By 2013, when Bombas first showed up in the market, there was no other sock brand that could compete with them, especially in terms of the research and technology used in designing a pair of socks. Customers from all walks of life started buying Bombas for its social mission, and the product itself was irresistible. Soon, Bombas' impact was seen in everyday people, who started to ditch the old grandma socks that had ruled the market for decades. New collabs = Newer styles Within its short run of 7 years, Bombas has been able to land multiple collaborations with celebrities and legendary icons. Each of these collaborations led to the launch of a new style of socks dedicated to an individual while still keeping the social mission intact. If you've followed Bombas on social media,
then you would know that, in general, Bombas releases a new pair of socks every now and then. Each new collaboration helps them add a new kind of sock to their arsenal. Apart from the new designs and styles, the brand gets a lot of hype and traction in the news. To its advantage, Bombas swiftly uses its collaborations as a promotion strategy on its social media handles. Some of the collaborations worth mentioning here are: Muhammad Ali x Bombas This collaboration let Bombas showcase its connection with athletes. The Muhammad Ali x Bombas collection drew inspiration from Ali's quotes and images. Muhammad Ali x Bombas collection The collection celebrated Muhammad Ali's legacy and his contributions to society. After this collaboration, Bombas as a brand became associated with athletes, and people started to be noticed wearing its socks for outdoor activities. Zac's Performance Test Earlier this year, Zac Efron's collaboration with Bombas led to one of the coolest ad campaigns. The brand has had a good relationship with Zac Efron since 2017. In this particular campaign, Zac Efron was asked to test Bombas socks through his entire day of activities. From running to golfing and even wearing them during his leg day, Zac wore the socks and tested how well they held up. Zac's Performance Test (Screenshot) In the end, Efron shared that the socks passed all of his tests with flying colours.
https://medium.com/datadriveninvestor/how-making-people-buy-into-the-cause-has-helped-bombas-sell-40-million-pair-of-socks-3699fe45c67d
['Thakur Rahul Singh']
2020-12-27 16:05:57.375000+00:00
['Branding', 'Marketing', 'Business', 'Business Strategy', 'Startup']
Inspiring Stories Behind the Best Songs of Our Time
Bob Dylan’s Hurricane “He ain’t no gentleman Jim” Bob Dylan, Hurricane That’s what Bob Dylan sings about the subject of his song, the boxer Rubin “Hurricane” Carter. “Gentleman Jim” is a reference to the gentleman Jim Corbett, a white boxer in the 1800s known for his manners. “Hurricane” Carter for sure ain’t no gentleman. He spent 19 years in jail for murder. Nonetheless, Bob Dylan felt he did not commit the crime and tried to drive publicity to the case. Carter’s case was particularly complex and filled with legal missteps. The case reflects acts of racism and profiling against Carter, which Dylan describes as leading to a false trial and conviction. “Pistol shots ring out in the bar-room night… ” The opening line of Bob Dylan’s protest ballad On June 17, 1966, three white people were gunned down at a bar in New Jersey Grill. Witnesses described two black men as the murderers. Police pulled over Carter and his friend John Artis, who were black, but apart from that didn’t fit the description. Even though they were released soon after, they have been charged with the crimes two months later. The sentence was based on the testimony of two white men with criminal records, claiming they witnessed Carter and Artis shooting. Both were sentenced to life. In prison, Carter tried to publish his story in an effort to earn his freedom. His effort included a book, which had been sent to Bob Dylan. The famous singer took up his cause, wrote a song about him, and raised money on his tour in 1975. Soon after the release of Carter’s book and the medial attention, a back and forth in trial started. The witnesses changed their stories, as they apparently have been coerced into their testimony. Nonetheless, Carter and Artis have not been released until 1985. Carter died on April 20, 2014, at age 76. Pink Floyd’s Brain Damage The lunatic is in my head You raise the blade, you make the change You re-arrange me ’til I’m sane You lock the door And throw away the key There’s someone in my head but it’s not me And if the cloud bursts, thunder in your ear You shout and no one seems to hear And if the band you’re in starts playing different tunes I’ll see you on the dark side of the moon — Pink Floyd, Brain Damage Published on the Dark Side Of The Moon, “Brain Damage” is one of Pink Floyd’s most timeless tunes. With the theme of insanity, “Brain Damage” hit close to the band itself. The subject of this song is the ill-fated Syd Barrett. Syd Barret was the singer and guitarist of Pink Floyd from 1965–1968. The Dark Side Of The Moon — Pink Floyd Roger Waters has stated that the insanity-themed lyrics are based on Syd Barrett’s mental instabilities. With the line ‘I’ll see you on the dark side of the moon’ Waters indicates that he felt related to him in terms of mental idiosyncrasies. Barrett’s “crazy” behavior is further referenced in the lyrics “And if the band you’re in starts playing different tunes”, which happened occasionally as he started to play different songs during concerts without even noticing. The song has a rather famous opening line, “The lunatic is on the grass…”, whereby Waters is referring to areas of turf which display signs saying “Please keep off the grass” with the exaggerated implication that disobeying such signs might indicate insanity. — Wikipedia Elton John’s Rocket Man “She packed my bags last night, pre-flight. Zero hour: 9 a.m. And I’m gonna be high as a kite by then.” The song Rocket Man (I Think It’s Gonna Be A Long, Long Time) was released as a single on 3 March 1972. 
At a first glance, a lot of people thought that the line in the song that says “I’m gonna be high as a kite by then” was referring to drug addiction. But, coming only three years after man first walked on the moon in July 1969, the meaning of this song was more literal. This piece of art describes a Mars-bound astronaut’s mixed feelings at leaving his family in order to do his job. Rocketman — Elton John, Source: ntv Lyricist Bernie Taupin, who collaborated with Elton on all his major hits, explained in 2016: “People identify it, unfortunately, with David Bowie’s Space Oddity. It actually wasn’t inspired by that at all; it was actually inspired by a story by Ray Bradbury, from his book of science fiction short stories called The Illustrated Man. “In that book, there was a story called The Rocket Man, which was about how astronauts in the future would become sort of an everyday job. So I kind of took that idea and ran with that.” The Rolling Stones’ (I Can’t Get No) Satisfaction Tossing and turning on a bad, sleepless night, you just can’t get no satisfaction. At least, that’s what happened to Keith Richards. The №2 song on the Rolling Stone The 500 Greatest Songs of All Time list has been written when Richards heard the beginning riff to (I Can’t Get No) Satisfaction in a dream. In an interview with the Rolling Stone, Richards had this to say about the song’s inception: “I woke up in the middle of the night. There was a cassette recorder next to the bed and an acoustic guitar. The next morning when I woke up, the tape had gone all the way to the end. So I ran it back, and there’s like thirty seconds of this riff — ‘Da-da da-da-da, I can’t get no satisfaction’ — and the rest of the tape is me snoring!” Billy Joel’s Vienna “Why did I pick Vienna to use as a metaphor for the rest of your life? My father lives in Vienna now. I had to track him down. I didn’t see him from the time I was 8 ‘till I was about 23–24 years old. He lives in Vienna, Austria which I thought was rather bizarre because he left Germany in the first place because of this guy named Hitler and he ends up going to the same place that Hitler hung out all those years! Vienna, for a long time was the crossroads. […] So the metaphor of Vienna has the meaning of a crossroad. It’s a place of inter…course, of exchange — it’s the place where cultures co-mingle. You get great beer in Vienna but you also get brandy from Armenia. It was a place where cultures co-mingled. So I go to visit my father in Vienna, I’m walking around this town and I see this old lady. She must have been about 90 years old and she is sweeping the street. I say to my father “What’s this nice old lady doing sweeping the street?” He says “She’s got a job, she feels useful, she’s happy, she’s making the street clean, she’s not put out to pasture” — Billy Joel in an interview on Vienna Billy essentially thought to himself “I don’t have to be worried about getting old, ‘Vienna waits for you’”. Vienna is being pictured as the promised land. A place where the old people are being respected and there are no cultural barriers. It’s a beautiful picture of a beautiful city which — fun fact — inspired me to move to Vienna in 2020. Vienna — Maximilian Perkmann, 2020 (the author) “The song describes that sometimes you have to take things more slowly in life, that you develop mindfulness, but also show gratitude for all the good things that happen. 
Vienna as a city has embodied all this for me” — Billy Joel Falco’s Out Of The Dark Even though it is a german song, the Austrian artist Falco achieved a worldwide impact with “Out Of The Dark”. But above all, the song has caused a lot of controversies. „Muss ich denn sterben, um zu leben? (Do I have to die to live?) - Falco, Out Of The Dark Falco died of severe injuries received on 6 February 1998, when his car collided with a bus in the Dominican Republic. As of his death, rumors stated that “Out Of The Dark” was his last call for help before he committed suicide. Still today, the circumstances have not been fully clarified. In an interview in 1997, about a year before Falco’s death, he stated that the theme of this song — as many times before — were drugs. The song tells the story of a man divorcing his wife and falling into depression. His only way out: heroin. That’s why the chorus of the song plays “Out Of The Dark (divorce) Into The Light (heron). After this interview, the song was played on a radio station for the first time. A similar explanation was given by his manager at the time, Claudia Wohlfromm. Falco in the music video to Out Of The Dark— TV90s “Out of the Dark is autobiographical — and also not. It is about drugs. In particular: about cocaine. I wrote the text from the point of view of a desperate man who is possessed by the drug without being addicted myself”. - Falco in an interview with the magazine “Bunte” 27/98 Paul Simon’s Diamonds on the Soles of Her Shoes Still, to date, the real meaning of Paul Simon’s masterpiece is not clear. There are more interpretations. She’s a rich girl She don’t try to hide it Diamonds on the soles of her shoes He’s a poor boy Empty as a pocket Empty as a pocket with nothing to lose — Diamonds on the Soles of Her Shoes, Paul Simon Love The more popular interpretation pictures Paul’s short relationship with a diamond mine owner’s daughter while recording in South Africa. She was very rich and privileged, yet she acted very down to earth, like a poor girl. The woman was that rich, that she didn’t even notice the diamonds on the soles of her shoes. Africa Thinking on a deeper level, the lyrics could refer to “the rich girl” Africa herself. Africa has diamonds on the soles of her shoes, down underfoot in Southern Africa. The first Poor Boy in the song seems to be the native Africans. The Europeans think the Zulus have “nothing to lose.” One way to lose the walking blues is to dig up the diamonds. Paul Simon expresses his hope, that Africa’s nations will eject the colonialists and take care of themselves. Simon mentions this song as one of his best musical achievements.
https://medium.com/illumination/the-inspiring-stories-behind-7-of-the-best-songs-of-our-time-e4810619b3e2
['Maximilian Perkmann']
2020-12-04 15:46:29.323000+00:00
['Music', 'Art', 'Mental Health', 'History', 'Self Improvment']
10 Best Programming Languages to Learn in 2021
10 Best Programming Languages to Learn in 2021 A developer’s list of the programming languages you probably want to start learning in 2021 Photo by Annie Spratt on Unsplash A couple of months ago, I was reading an interesting article on HackerNews, which argued that why you should learn numerous programming languages even if you won’t immediately use them, and I have to say that I agreed. Since each programming language is good for something specific but not so great for others, it makes sense for Programmers and senior developers to know more than one language so that you can choose the right tool for the job. But which programming languages should you learn? As there are many programming languages ranging from big three like Java, JavaScript, and Python to lesser-known like Julia, Rust or R. The big questions is which languages will give you the biggest bang for your buck? Even though Java is my favorite language, and I know a bit of C and C++, I am striving to expand beyond this year. I am particularly interested in Python and JavaScript, but you might be interested in something else. Top 10 Programming Languages to Learn in 2021 This list of the top 10 programming languages — compiled with help from Stack Overflow’s annual developer survey as well as my own experience — should help give you some ideas. Note: Even though it can be tempting, don’t try to learn too many programming languages at once; choose one first, master it, and then move on to the next one. 1. Java Even though I have been using Java for years, there are still many things I have to learn. My goal for 2021 is to focus on recent Java changes on JDK 9, 10, 11, and 12. If yours is the same, you’ll want to check out the Complete Java MasterClass from Udemy. If you don’t mind learning from free resources, then you can also check out this list of free Java programming courses. 2. Javascript Whether you believe it or not, JavaScript is the number one language of the web. The rise of frameworks like jQuery, Angular, and React JS has made JavaScript even more popular. Since you just cannot stay away from the web, it’s better to learn JavaScript sooner than later. It’s also the number one language for client-side validation, which really does make it work learning JavaScript. Convinced? Then this JavaScript Masterclass is a good place to start. For cheaper alternatives, check out this list of free JavaScript courses. 3. Python Python has now toppled Java to become the most taught programming language in universities and academia. It’s a very powerful language and great to generate scripts. You will find a python module for everything you can think of. For example, I was looking for a command to listen to UDP traffic in Linux but couldn’t find anything. So, I wrote a Python script in 10 minutes to do the same. If you want to learn Python, the Python Fundamentals from Pluralsight is one of the best online course to start with. You will need a Pluralsight membership to get access to the course, which costs around $29 per month or $299 annually. You can also access it using their free trial. And, if you need one more choice, then The Complete Python Bootcamp: Go from zero to hero in Python 3 on Udemy is another awesome course for beginners. And if you are looking for some free alternatives, you can find a list here. 4. Kotlin If you are thinking seriously about Android App development, then Kotlin is the programming language to learn this year. It is definitely the next big thing happening in the Android world. 
Even though Java is my preferred language, Kotlin has got native support, and many IDEs like IntelliJ IDEA and Android Studio are supporting Kotlin for Android development. The Complete Android Kotlin Developer Course is probably the best online course to start with. 5. Golang This is another programming language you may want to learn this year. I know it’s not currently very popular and at the same time can be hard to learn, but I feel its usage is going to increase in 2021. There are also not that many Go developers right now, so you really may want to go ahead and bite the bullet, especially if you want to create frameworks and things like that. If you can invest some time and become an expert in Go, you’re going to be in high demand. Go: The Complete Developer’s Guide from Udemy is the online course I am going to take to get started. 6. C# If you are thinking about GUI development for PC and Web, C# is a great option. It’s also the programming language for the .NET framework, not to mention used heavily in game development for both PC and consoles. If you’re interested in any of the above areas, check out the Learn to Code by Making Games — Complete C# Unity Developer from Udemy. I see more than 200K students have enrolled in this course, which speaks for its popularity. And again, if you don’t mind learning from free courses, here is a list of some free C# programming courses for beginners. 7. Swift If you are thinking about iOS development like making apps for the iPhone and iPad, then you should seriously consider learning Swift in 2021. It replaces Objective C as the preferred language to develop iOS apps. Since I am the Android guy, I have no goal with respect to Swift, but if you do, you can start with the iOS 14 and Swift 5 — The Complete iOS App Development Bootcamp. If you don’t mind learning from free resources then you can also check out this list of free iOS courses for more choices. There’s also this nifty tutorial. 8. Rust To be honest, I don’t know much about Rust since I’ve never used it, but it did take home the prize for ‘most loved programming language’ in the Stack Overflow developer survey, so there’s clearly something worth learning here. There aren’t many free Rust courses out there, but Rust For Undergrads is a good one to start with. 9. PHP If you thought that PHP is dead, then you are dead wrong. It’s still very much alive and kicking. Fifty percent (50%) of internet websites are built using PHP, and even though it’s not on my personal list of languages to learn this year, it’s still a great choice if you don’t already know it. And, if you want to learn from scratch, PHP for Beginners — Become a PHP Master — CMS Project on Udemy is a great course. And, if you love free stuff to learn PHP, checkout this list of free PHP and MySQL courses on Hackernoon 10. C/C++ Both C and C++ are evergreen languages, and many of you probably know them from school. But if you are doing some serious work in C++, I can guarantee you that your academic experience will not be enough. You need to join a comprehensive online course like C++: From Beginner to Expert to become industry-ready. And for my friends who want some free courses to learn C++, here is a list list of free C++ Programming courses for beginners. Conclusion Even if you learn just one programming language apart from the one you use on a daily basis, you will be in good shape for your career growth. The most important thing right now is to make your goal and do your best to stick with it. Happy learning! 
If you enjoyed this article, here are a few more of my write-ups you may like: Good luck with your programming journey! It's certainly not going to be easy, but by following this list, you are one step closer to becoming the software developer you always wanted to be. If you like this article, then please consider following me on Medium (javinpaul) if you'd like to be notified of every new post, and don't forget to follow javarevisited on Twitter! Other Medium Articles you may like
https://medium.com/hackernoon/10-best-programming-languages-to-learn-in-2019-e5b05af4a972
[]
2020-12-09 09:10:44.921000+00:00
['JavaScript', 'Java', 'Programming', 'Python', 'Coding']
Next level data visualization
Introduction Any data analysis project has two essential goals. First, to curate data in a readily interpretable form, uncover hidden patterns, and identify key trends. Second, and perhaps more important, to effectively communicate these findings to readers through thoughtful data visualization. This is an introductory article on how to begin thinking about customized visualizations that readily disseminate key data features to the viewer. We achieve this by moving beyond the one-line charts that have made plotly so popular among data analysts and focusing on individualised chart layouts & aesthetics. All code used in this article is available on Github. All charts presented here are interactive and have been rendered using jovian, an incredible tool for sharing and managing jupyter notebooks. This medium article by Usha Rengaraju contains all the details on how to use this tool.
Plotly Plotly is a natural library of choice for data visualization because it's easy to use, well documented, and allows for customization of charts. We begin by briefly summarizing the plotly architecture in this section before moving to visualizations in the subsequent sections. While most people prefer using the high-level plotly.express module, in this article we will instead focus on the plotly.graph_objects.Figure class to render charts. And while there is extensive documentation available on the plotly website, the material can be a bit overwhelming for those new to visualization. I therefore endeavour to provide a clear and concise explanation of the syntax. The plotly graph_objects that we will make use of are composed of the following three high-level attributes, and plotting a chart essentially involves specifying these:
The data attribute covers selection of the chart type from over 40 different types of traces like scatter, bar, pie, surface, choropleth, etc., and passing the data to these functions.
The layout attribute controls all the non-data-related aspects of the chart like text font, background color, axes & tickers, margins, title, legend, etc. This is the attribute we will spend considerable time manipulating to make changes like adding an additional y-axis or plotting multiple charts in a figure when dealing with large datasets.
The frames attribute is used to specify the sequence of frames when making animated charts. Subsequent articles in this series will make use of this attribute extensively.
For most of the charts we make in this article, the following three are the standard libraries that we will use:
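To make that architecture concrete, here is a minimal, hedged sketch of how a graph_objects figure is typically assembled from the data and layout attributes described above. The import names are the conventional ones (plotly.graph_objects as go, plus numpy and pandas, which I am assuming are the "three standard libraries" referred to); the column names and values are made up purely for illustration.
import numpy as np
import pandas as pd
import plotly.graph_objects as go
# Illustrative data only: any DataFrame with an x and a y column would do.
df = pd.DataFrame({"year": np.arange(2010, 2021),
                   "value": np.random.rand(11)})
fig = go.Figure(
    data=[go.Scatter(x=df["year"], y=df["value"], mode="lines+markers",
                     name="example trace")],       # the data attribute: one or more traces
    layout=go.Layout(title="Example chart",        # the layout attribute: non-data aspects
                     xaxis=dict(title="Year"),
                     yaxis=dict(title="Value"),
                     margin=dict(l=40, r=40, t=60, b=40)),
)
fig.show()
The same pattern scales to every chart in the article: swap the trace type inside data, and keep refining layout until the figure communicates what you want.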
https://towardsdatascience.com/next-level-data-visualization-f00cb31f466e
['Aseem Kashyap']
2020-10-25 19:59:48.916000+00:00
['Python', 'Charts', 'Data Analysis', 'Data Visualization', 'Plotly']
Building a Data Driven Marketing Team
A High-Level Overview of Marketing Metrics and KPIs To generalise, the Marketing team requires data to track the revenue cycle. They have Leading Indicators, which include Lead Creation, Source of Leads, MQLs (Marketing Qualified Leads), inclusive of the sub-field of Programs that are creating those MQLs, and Lead Velocity. Lead Velocity is effectively how long a lead takes to become "qualified". The Marketing team also has Indicators which include Opportunity Creation, Pipeline Creation, and Revenue. Revenue should additionally be able to be broken down by program / channel (e.g. trade show).
Low-Level Descriptions of the Marketing Metrics and KPIs Operations-Specific Metrics Cost per Acquisition (CPA) is the total cost of acquiring a new customer via a specific channel or campaign. While this can be applied as broadly or as narrowly as one wants, it's often used in reference to media spend. In contrast to Cost per Conversion or Cost per Impression, CPA focuses on the cost of the complete journey from first contact to customer. Cost per Acquisition is also differentiated from Customer Acquisition Cost (CAC) by its granular application, i.e. looking at specific channels or campaigns instead of an average cost for acquiring customers across all channels and headcount. To calculate the Cost per Acquisition, simply divide the Total Cost (whether total media spend or the spend on a specific channel/campaign used to acquire customers) by the Number of New Customers Acquired from the same channel/campaign. (Figure: Channel or Campaign CPA Calculation) (Figure: Media Spend Calculation)
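Since the CPA formula is given only in prose (the original calculation figures did not survive here), the arithmetic can be sketched as follows. The function name and the example numbers are mine, not the author's, and are purely illustrative.
def cost_per_acquisition(total_cost, new_customers_acquired):
    # CPA = spend on a channel/campaign divided by new customers won from that same channel/campaign
    return total_cost / new_customers_acquired
# Illustrative figures only: a $12,000 trade-show campaign that produced 60 new customers
print(cost_per_acquisition(12_000, 60))  # 200.0 dollars per acquired customer
The same calculation applied across all channels and headcount would give you CAC rather than CPA; keeping the inputs channel-specific is what makes the metric granular.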
https://medium.com/hacking-talent/building-a-data-driven-marketing-team-10b33f13c485
['Matthew W. Noble']
2020-06-02 10:08:48.323000+00:00
['Metrics', 'Marketing', 'Data', 'Engineering', 'Saas Marketing']
Ensemble model : Data Visualization
Photo by William Iven on Unsplash So this is part 2 of my previous article (Ensemble Modelling - How to perform in python). Check out my previous article for a better understanding of this one. Thank you 😊. So, let's start with this tutorial on visualizing different models and comparing their accuracies. Here we have taken the KNN, Decision Tree and SVM models. Let's recall that in the previous article we used a list named "accuracys" to store the accuracy of the above-mentioned models respectively. Let us see what it contains. NOTE: We have not used separate train and test sets here; we are using train_test_split(), so every time we use this split function the train and test sets get split at a random point. So the accuracy will keep changing depending upon the train and test set values. Now model_names is another empty list which will contain the names of the models; this list will help us to plot better.
model_names = []  # empty list
for name, model in estimators:
    model_names.append(name)
Plotting Bar Plot
import matplotlib.pyplot as plt
import numpy as np  # needed below for np.arange()
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.bar(model_names, accuracys)
plt.yticks(np.arange(0, 1, .10))
plt.show()
The ax.bar() function creates the bar plot; here we have given model_names as x and accuracys as the bar heights. Various other parameters can also be passed, such as width, bottom, align. We can even compare the accuracy of the ensemble model by adding the ensemble model's name and accuracy to the model_names and accuracys lists using the code below, and then running the above code again.
# adding accuracy of ensemble for comparison
if "Ensemble" not in model_names:
    model_names.append("Ensemble")
if ensem_acc not in accuracys:
    accuracys.append(ensem_acc)
As we can easily see, the ensemble has the highest accuracy of all, and if we compare more closely we can see that the SVM model gave the lowest accuracy. Let's see how we can plot a box plot now. Here we are using k-fold cross-validation for splitting up the data and testing the model accuracy. We are going to obtain multiple accuracies for each model. Here we have split the data into 15 folds, so it will break the data into 15 sets and test the model 15 times, and 15 different accuracies will be obtained. Finally, we take the mean of these accuracies to know the average accuracy of the model.
from sklearn import model_selection  # needed for KFold and cross_val_score
acc = []  # empty list
names1 = []
scoring = 'accuracy'
# here creating a list "acc" for storing multiple accuracies of each model.
for name, model in estimators:
    kfold = model_selection.KFold(n_splits=15)
    res = model_selection.cross_val_score(model, X, target, cv=kfold, scoring=scoring)
    acc.append(res)
    names1.append(name)
    model_accuracy = "%s: %f" % (name, res.mean())
    print(model_accuracy)
For clarity, let's see what the "acc" list holds!
Plotting Box Plot
blue_outlier = dict(markerfacecolor='b', marker='D')
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(acc, flierprops=blue_outlier)
ax.set_xticklabels(names1)
plt.show()
The blue colored dots are outliers, the lines extending from the boxes are the whiskers, and the horizontal orange lines are the medians.
k_folds = model_selection.KFold(n_splits=15, random_state=12)
ensemb_acc = model_selection.cross_val_score(ensemble, X_train, target_train, cv=k_folds)
print(ensemb_acc.mean())
if "Ensemble" not in names1:
    names1.append("Ensemble")
from numpy import array, array_equal, allclose
def arr(myarr, list_arrays):
    return next((True for item in list_arrays if item.size == myarr.size and allclose(item, myarr)), False)
print(arr(ensemb_acc, acc))
if arr(ensemb_acc, acc) == False:
    acc.append(ensemb_acc)
acc
Now, by running the above code for plotting the box plot again, we get the chart with the ensemble included. You can even customise your boxplot using different parameters: patch_artist=True will display the boxplot with colored boxes, notch=True displays the boxplot in a notched format, and vert=0 will display a horizontal boxplot (a short sketch of these options follows below). Here is the entire code: Link for the code from previous article: https://medium.com/analytics-vidhya/ensemble-modelling-in-a-simple-way-386b6cbaf913 I hope you liked my article 😃. If you find this helpful then it would be really nice to see you appreciate my hard work by clapping for me 👏👏. Thank you.
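Since the customisation options above are only described in words, here is a small, hedged sketch of how those three parameters might be passed to the boxplot call. The data is a stand-in generated for illustration, not the article's acc list, and the labels are assumed model names.
import matplotlib.pyplot as plt
import numpy as np
# Stand-in data: three arrays of 15 cross-validation scores (the article would use acc here).
scores = [np.random.uniform(0.7, 1.0, 15) for _ in range(3)]
fig = plt.figure()
ax = fig.add_subplot(111)
# patch_artist=True fills the boxes with color, notch=True draws notched boxes,
# and vert=0 (i.e. vert=False) lays the boxplot out horizontally.
ax.boxplot(scores, patch_artist=True, notch=True, vert=0)
ax.set_yticklabels(['KNN', 'Decision Tree', 'SVM'])
plt.show()
Because vert=0 makes the plot horizontal, the model names move to the y-axis, which is why set_yticklabels is used here instead of set_xticklabels.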
https://medium.com/analytics-vidhya/ensemble-model-data-visualization-2f4cb06859c1
['Shivani Parekh']
2020-10-10 13:40:13.591000+00:00
['Python', 'Data Science', 'Ensemble Learning', 'Data Visualization', 'Analysis']
Who cares about the design language
So here’s the thing. Google updated its Google+ app, and it comes with a huge redesign exercise on it. In case you don’t know him, Luke Wroblewski is a designer at Google. He’s been around for a while, commenting stuff about user experience and visual design. He wrote a lot about the Polar app — which I love by the way— and wrote the first article I ever read about the hamburger menu not working well for engagement in mobile apps nor webs. The point is, Luke knows a thing or two about UX and UI design and he’s been involved in the Google+ redesign. That looks like this: Credits to Luke Wroblewski. This new app looks absolutely beautiful. I mean, look at all that color and rich imagery. And I believe I’ll never be tired of using specific color palettes for contextual elements that surround an image. And the new Google+ app uses a bottom navigation bar, and suddenly the internet went like ‘that’s not so Material Design’. Arturo Toledo is another designer I’ll use here as a reference. He’s been working with Microsoft for a few years, and his response to this was… Who cares about the design language. He claimed that our focus as designers should be on the principles. To make something useful and to design a delightful experience regardless of the platform we’re designing to. That there’s nothing wrong with a navbar down there. I can’t argue with Arturo. I believe he’s right about this. Maybe not in all cases, but I feel he’s mostly right. But I do have a few concerns about this navigation pattern though. And it’s because this is an official Google application, so designers and developers out there probably are going to reproduce this kind of navigation more than once. You got me at ‘navbar’ I do love navbars. And tabbars. And everything that’s not a chaotic hamburger throw-it-all-there-in-the-drawer-and-see-how-it-fits main application menu. It makes things more discoverable, and it makes the app easier to understand without having to think and read that much. I mean, options are just there. Just a quick glance and now you know about everything the app has to offer to you. If you have any doubts, you just tap on the first tab and if you don’t find what you’re looking for then tap on the second one. And continue that way until you’re out of tabs. That’s it. But then you introduce another main navigation pattern, that is the drawer menu itself. And now we have two main navigation paths. Or three, if we count the top tab-bar. Credits to Luke Wroblewski. Again. Ok, tabs in the top bar might not be a main navigation path, but they count as chrome. They count as more options, more space used for navigation. Good luck to you, 4-or-less inches screen people. In theory everything has sense. You have supportive nav for user profile stuff, a global nav between sections and a contextual nav for the filters. But it makes me think. I can’t use this app without reading and taking a second to think where I’m going every time I want to go elsewhere. I’ll give it a quick try here. How about moving the hambuger menu to the last item in the bottom navbar? — like a ‘more’ tab — and then moving the Notifications icon to the top bar, right next to the search icon. That should simplify the main navigation, just like Facebook does in its app. Then to reduce chrome you could make a dropdown menu out of the section title in that top bar to put the filters on it, just like Google Calendar makes to open the calendar control. If you don’t want to hide these filters you could A/B Test another idea. 
Maybe using an slider at the top of the page, just before the real content starts, like in the app market. Where am I? If I’m a new user of this app and I skip the onboard tutorial — and you can bet I will — I don’t even know what differences are between collections and communities. I mean, I could try to understand what they are, but it makes me think. They look almost the same and there are names of people everywhere and I can’t even see an interesting post until I’ve been playing around with the app for a while. I get lost in the many options you provide. This might not be a problem of the design team but the product itself. Just think about it this way: Twitter: There are people to follow to read their tweets. Facebook: Mostly the same as Twitter but you can write larger posts. Instagram: I follow people to see their photos. Oh and I can chat with them. Google+: There’s people to follow and you can read their posts. Oh and you also have communities to follow and collections to follow that you really don’t know where they come from. And you can write posts and create communities and invite people and set up collections that have more visual impact than the posts feed itself. Google+ looks beautiful, but making every content as visually heavy as the main section makes nothing look really important. Be careful with bottom bars in Android As designer Josh Clark pointed out, the options in the navbar are dangerously close to the Quit button — as he calls the Home button in Android. Collections is just a 2mm mistap from the user shutting down the app, or going back to the previous screen when they dind’t want to. I always have this in mind when I design for Android. The solution might be moving these options to the top bar, but there are some issues about long words in other languages here. UX Launchpad talks about the tradeoffs of this solution on this post. But getting back to Josh Clark, he pointed this out and Luke answered… Theory vs. practice. Here’s the tweet anyway. If we assume Luke made a few tests — and I do believe he has — and he’s right, then we don’t need to move the bottom navbar anywhere. And that’s good. Yay. Google+ is a great underrated product This is all. Despite all these concerns I have about this redesign it’s still a product that I’d love to use more. Unfortunatelly it hasn’t found its place among the mainstream users. And that’s the biggest problem this social network has. Maybe Google is working on this. Maybe they have big plans for Google+ that we don’t see because we don’t know what’s ahead in the product roadmap. Let’s just hope they keep improving this product and they prove that it’s useful for everyone.
https://uxdesign.cc/who-cares-about-the-design-language-daa3a99dacc1
['Paco Soria']
2015-12-04 18:51:57.989000+00:00
['Google', 'UX', 'Design']
Why You Know Better, but You Don’t Do Better
Why You Know Better, but You Don’t Do Better 4 ways to narrow the gap between knowing and doing Photo by Brooke Cagle on Unsplash “Knowledge isn’t power until it is applied.” - Dale Carnegie I’m lactose intolerant. I’ll spare you the details of what exactly happens when I consume dairy, but it isn’t pretty. Yet, last week I waited in line at the McDrive for 20 minutes to get a McFlurry with M&M’s. I’d had a rough day and decided to “treat” myself with something tasty. Can you guess what happened when I came home and downed that McFlurry in 30 seconds? It was ugly. I am not the only one who knows better but doesn’t do better. I am surrounded by brilliant people who do the most stupid things. Not out of ignorance, oh no. We, humans, seem to be perfectly capable of knowing what is right for us — and then do the exact opposite. We know we shouldn’t respond to the dramatic text our ex sends, but still, we mysteriously end up in shouting matches with them in the middle of the night. We know life is more manageable after a good night of sleep, but we still watch just one more episode of that addictive tv show and then get angry when we can’t get out of bed the next morning. We know healthy foods make us feel strong. And yet, the pizza delivery guy knows us by name. And even though every study proves working out releases the happiness neurotransmitter endorphin, we still rather relax in a way that doesn’t involve physical activity. We know our jobs are sucking the life out of us, but we don’t do anything to change the situation. So how come so many of us know better but don’t do better? Why is there such a massive gap between knowing and doing? Behavior is a very complex interplay between genes and environment, and there isn’t one one-size-fits-all explanation for why it is so hard to do the thing. There are many obstacles you have to deal with when you change your behavior, here is how you can overcome four of the most common ones; Old habits die hard — our brains don’t like change. There is an information gap — we’re not sure how to do the right thing. Issues with executive functions — we need to improve our capacity. Issues with motivation — do we really want to do the right thing? Old habits die hard — our brains don’t like change We’re creatures of habit. Habits make our lives easier. When we don’t have to think about the small stuff, our brains can focus on more important things. The more often we repeat a behavior, the stronger and more efficient the neural network supporting that behavior becomes. So if you hit snooze every morning, it’s not even a conscious decision anymore. When your ears signal to your brain that the alarm is going, your neurons fire so fast that your finger taps that snooze button before you even consciously hear the alarm. That is why it is so hard to stop snoozing. Suddenly your brain has to fire different neurons for different behavior. These neural connections are weak and ineffective. So your brains do what they do best; hit that snooze button and hide under the covers for just five more minutes. No matter how much you know that it is better to get out of bed immediately, your neural networks rather do what they always did. How to overcome this: Fortunately, we’re not just slaves to our neurons, and there is a thing called neuroplasticity. Neuroplasticity means that we can change our neural networks. We can make existing ones weaker and new ones stronger. So every time you ignore that snooze button, you weaken the existing neural network. 
And every time you get out of bed immediately after hearing the alarm, your new neural network becomes stronger. Doing the right thing becomes easier the more you do it. And if you keep doing it, it becomes a habit, and your neurons will fire with delight. What I did: I’ve known for a long time that I had to change my diet. I have IBS, and processed food is a major trigger. But picking healthy recipes, getting groceries, and spending my precious time cooking always felt like too much trouble. So when my neurons reached for another frozen meal in the supermarket, it was hard to stop them. Even faster was the neural network for ordering food online. Just the thought of taking longer than 10 minutes to prepare a meal made my neurons howl dramatically. Fifteen months ago, I decided to take out a subscription to an expensive fresh food delivery service. I pick out three recipes every week and get the fresh ingredients delivered to my doorstep. And it is fresh. Potatoes still have the dirt sticking to them. I have to wash, peel, cut, dice, and slice everything. The first couple of weeks, it took me forever to prepare a meal, and I always had to drag myself into the kitchen. But now, fifteen months later, my brains have developed a robust neural network for fresh food prepping. So no matter how tired or depressed I am, I always prepare my meals, with my neurons firing and humming in unison. It took me a couple of weeks, but now, doing the right thing is easy. Ordering fast food makes my neurons — and gut — feel uncomfortable. So even though Dominoes may be tempting, in the end, I always choose fresh. There is an information gap — we’re not sure how to do the right thing There can be a gap between knowing and doing because we’re not clear on how to do the thing because we’re missing information. We know eating healthy food is good for us. But what exactly does that look like? Do you need to eat lettuce all day, every day? Is sugar forbidden from now on? When is food exactly unhealthy? We understand exercise is good for us. But what type of activity is right for your body? Do you need special equipment? Are you using it correctly? When should you feel or see results? Doing better when you know better is difficult when you’re unsure what doing better exactly means. It goes both ways: it is hard to stop doing the wrong thing when you’re not sure what you need to do that. What do you need to stop getting into fights with your ex? Do you need to change your phone number? Block theirs? Have a chat with a mediator? How to overcome this: The good news about this information gap obstacle is that you can solve it by educating yourself and making a plan. Write down what you want to change and what you need to know or do to reach your goals. Can’t you stop checking your phone? Maybe an app or a time lock container can help. Want to escape your 9-to-5 but don’t know how? Start making calculations, look for alternative careers, and talk to people who managed to quit their jobs. If simply “willing” ourselves to stop doing undesired behavior worked, we’d all be living our dream lives. You need to make a plan and find a strategy that works for you. What I did: I started smoking in my teens. At first, it was a casual-cool thing to do, but before I knew it, I was addicted. And happily in denial. I told myself I choose to smoke, and I could quit at any time. The more people pointed out how unhealthy my smoking habit was, the more I believed I loved it. I made it part of my identity. 
I wasn’t one of those boring nagging health freaks; I was a fun rebel. And this fun rebel was out of breath every time she climbed stairs. I always smelled like smoke. My doctor warned me that the combination of being over 35, smoking, and using hormonal contraceptives increased my risk of getting cardiovascular diseases like blood clots, heart attacks, and strokes. And because tobacco is heavily taxed in my country, I was going bankrupt too. So when I was 32, I knew I had to kick the habit. I smoked my last cigarette, threw out my remaining packs, and swore never to smoke again. An hour later, I was going through my trash to recover my precious cigarettes. I knew I had to stop smoking. I wanted to stop smoking. But simply not smoking seemed impossible. My doctor helped me to make a plan. She prescribed me the drug Chantix, which reduced my cravings and the pleasure I got from smoking. She told me to join an online community for support. It also helped that my mother and sister stopped smoking too. This plan worked. I had to quit taking Chantix because of the side effects, but my online community and my family gave me enough tools to resist my cigarettes. I smoked my last cigarette six years ago. Issues with executive functions — we need to improve our capacity The Understood Team describes very clearly on their website what executive functioning is: Some people describe executive function as “the management system of the brain.” That’s because the skills involved let us set goals, plan, and get things done. When people struggle with executive function, it impacts them at home, in school, and in life. There are three main areas of executive function. They are: 1. Working memory 2. Cognitive flexibility (also called flexible thinking) 3. Inhibitory control (which includes self-control) Executive function is responsible for many skills, including: - Paying attention - Organizing, planning, and prioritizing - Starting tasks and staying focused on them to completion - Understanding different points of view - Regulating emotions - Self-monitoring (keeping track of what you’re doing) All people with ADHD or autism have issues with their executive functions. But it is not just the neurodivergent whose management system of the brain goes AWOL. We live in a society that makes it increasingly hard to focus on one thing. Every day our brains have to deal with an overload of information and options. No wonder we know how to do better but have issues with seeing things through. How to overcome this: For both neurotypicals and neurodivergent, it is possible to improve executive functions (EF). Studies show that a lot of our issues with EF are triggered by Westernized diets and physical inactivity. Therefore improving our EF can be done relatively simple by doing the following: Exercise Have a plant-based diet Prayer, Tai chi, or meditation Positive feelings and self-affirmation Visiting nature There are also many strategic ways to strengthen your EF: Learn how to set attainable sub-goals Block access to short term-temptations Use peer monitoring Establish fixed daily routines Be aware of the short-term gain of task avoidance Therapy can also improve your EF. Cognitive-behavioral therapy (CBT) has been proven to strengthen EF in adults with ADHD. A note of caution: neurodivergent people can definitely improve their EF but shouldn’t strive for neurotypical-like EF. Our brains are just wired differently. 
Don’t compare your progress with other people, but look at your improvement over time and see how your EF is better than a year ago. What I did: All my life, I have struggled with my EF. I always thought I was lazy or stupid because I couldn’t do things everybody else could. When I was 32, I was diagnosed with both autism and ADHD. The management system of my brain has always been hilariously understaffed and spectacularly unfit for its job. The most significant change for me was accepting that my brain wasn’t “normal,” and my EF had special needs. I changed the job description for my management system, and I’ve been doing a lot better since. I use different techniques to improve my EF, like limiting my screen time, putting my phone in another room when I’m writing, scheduling breaks instead of forcing myself to sit still for an hour, and doing deep breathing exercises while meditating. I’ll never be “normal,” but changing my diet, exercising more, and using different strategies have improved my executive functions. And because of that, I’m less stressed and happier. Issues with motivation— do we really want to do the right thing? For years, I knew I should quit smoking. But I didn’t. When it is tough to do the right thing, it might be that you are not motivated enough. In the study Why We Don’t “Just Do It,” Understanding the Intention-Behavior Gap in Lifestyle Medicine, Professor Mark D. Faries looks at why it is so hard for patients to adopt a healthy lifestyle. An essential factor in their success is motivation. Screenshot by author. Table from M. D. Faries (2016). Why We Don’t “Just Do It” Understanding the Intention-Behavior Gap in Lifestyle Medicine. American Journal of Lifestyle Magazine (5): 322–329. His study shows something we instinctively know: it is hard to do the right thing if we don’t really want to. This phenomenon is called the intention-behavior gap. Even though we have the best intentions, our behavior shows otherwise. We all know fast food is unhealthy, but most of it is tasty AF and makes us feel good in the short-term, so we keep eating it — despite our intentions to eat healthily. One key factor in narrowing the intention-behavior gap is motivation. The reason it is so easy for me to prepare healthy meals is I started to enjoy it. And after a couple of weeks, I noticed a significant improvement in my stamina, mood, and concentration. The same goes for not smoking; I haven’t relapsed because I enjoy being a non-smoker. I am grateful for how much my overall health has improved since I quit, and I don’t want to jeopardize that by having just one cigarette. How to overcome this: You are a smart person, which is why you know that some of your behavior is unhelpful, and you want to change it. But if you can’t seem to do the right thing, it is time to take an honest and hard look at your motivation. Do you want to do the right thing because you are supposed to or because you genuinely want it? What are you gaining by not doing the right thing? Why do you want to change your behavior? Diving into these questions will help you discover and change your motivation. One way to modify your motivation is to get disturbed. Tony Robbins always says that you are not disturbed enough with your current situation if you're not changing. Getting disturbed is easy. Sit down, close your eyes, and think about what happens if you don’t change your behavior. Exaggerate a little. What will your life look like in a year if you keep hanging out with your ex? 
How will you feel if you keep eating fast food? How much weight will you gain? What will your life look like in 10 years if you stay in your shitty job? By thinking about this worst-case scenario, you will start to feel uneasy. And every time you want to fall back to your unhelpful behavior, all you have to think about is that feeling. What I did: Because my 9-to-5 isn’t making me happy, I have a side hustle, and I study psychology. And that is hard. After I’m done with my job, I want to collapse on the couch and do nothing. I don’t want to go upstairs to spend another two hours behind a screen. I rationalize that I need rest and relaxation — which I do. But lying lethargically on the couch isn’t the same as relaxing. Procrastination isn’t the same as “taking time for myself.” So I get disturbed. I want to lay on the couch? That’s fine. But that means I’ll have to postpone my exam, so getting my degree will take longer. This means I won’t be able to work as a psychologist, so I’ll have to stay in a field that doesn’t make me happy. I don’t want to write? That’s fine. I can quit my side hustle any day and dick around on the Internet in my spare time. Get high scores in Candy Crush. But quitting my side hustle also means that money will be tight, and I’m fully dependent on my day job. Closing my eyes and imagining that I’ll be having the same job in 5 years is enough to motivate me to go upstairs and turn on my computer. And once I sit there in my tiny cozy office and work on the future, I dream of, I enjoy what I do. And that is how I successfully narrow the gap between knowing and doing: I build new neural networks I make a plan I strengthen my executive functions I stay motivated and enjoy the process And now and then, I grab a McFlurry, hang on the couch, don’t exercise, and ignore my responsibilities. And don’t beat myself up for it. Because to err is human, and to forgive is divine.
https://medium.com/the-innovation/why-you-know-better-but-you-dont-do-better-93de21f5a4e
['Judith Valentijn']
2020-12-28 18:02:42.861000+00:00
['Neuroscience', 'Self', 'Behavior Change', 'Psychology', 'Self Improvement']
The Modeling Instinct
The Many Types of Models Since a model might represent any aspect of reality, and be made from any number of materials, there are obviously very many kinds of them. Classifying them is a challenge, and the problem is compounded by the fact that some models are composites of many smaller sub-models, each with its own characteristics. To manage this complexity, I’ll consider just four dimensions that I feel are both fundamental and — at least for the purposes of this series of articles — useful. They are: purpose, dynamism, composition, and realism. Dimension 1: Some models serve a utilitarian purpose. For others, the purpose is to provide an experience. Utilitarian models are created to aid real-world interactions with the target system. They are thus a special kind of tool. A map, for example, is a tool for navigating the terrain modeled by the map. A flight simulator, because it models the dynamics of flight, is a tool for learning to fly. Scientific models are a special case. They are utilitarian (or can be), but their primary purpose is to accurately and thoroughly explain their target systems. Scientists devise and test explanatory models of incompletely understood systems, and it’s up to engineers to develop utilitarian applications of the models, should any exist. Experiential models, by contrast, are created to provide an audience with an experience. They are taken to be valuable in and of themselves, without appeal to their utility. This intrinsic value arises from the fact that, at some level of cognition, we experience models as though they are real. They can thus provoke a wide assortment of emotions according to the kinds of experiences they provide. A model can have both utilitarian and experiential aspects, and few are purely one or the other. A fictional story, for example, might deliver useful life lessons, just as a flight simulator might enthrall a person who has no intention of piloting an actual plane. NASA’s Systems Engineering Simulator (here configured to simulate operations aboard the International Space Station) is a utilitarian spaceflight simulator with experiential qualities. (Credit: NASA) Dimension 2: Some models incorporate the laws that govern how their target systems change in time, while others do not. Dynamic models are functional. They are “run,” whereas static models are observed or experienced. The distinction, however, is not as straightforward as it might seem. A work of fiction, for example, is experienced in time, and it describes events that (ostensibly) unfolded in time, but the words on the page, or the individual photographic frames, are unchanging. The same holds true for a history book, or the data collected from an experiment. Such models are recordings, or memories, of a single run-through of a dynamic target system. Although the recorded system is dynamic, the recording itself is static because it does not incorporate the laws of cause and effect that gave rise to its content. A dynamic biomechanical model of Tyrannosaurus rex. (Credit: University of Manchester) Dimension 3: Some models are made from physical materials, such as plastic or paint, while others are made from symbols, such as mathematical notations, computer code, or the words of a language. Physical models are made from physical materials and typically depict the geometric characteristics of their target systems. A utilitarian example might be a model airplane in a wind tunnel. Sculptures, paintings, and theme park attractions are experiential examples. 
A physical model of the San Francisco Bay Area constructed to test the feasibility of dams and other projects. Symbolic models, by contrast, are made from symbols with predefined meanings. The symbols themselves must be made from some type of material, of course, but symbolic models are distinguished by the fact that the choice of material does not alter the logical attributes of the symbols. In the case of an abacus, for example, plastic beads give the same result as wooden ones. An abacus with individual carbon molecules as beads. (Credit: IBM Research — Zurich) Symbolic models can be further characterized by the kinds of symbols they use. Although we don’t generally speak of “word models,” language is in fact a symbolic means of describing, or modeling, reality. Its dependence on nouns and verbs — objects in motion — reflects its original concern with physical things and actions. But once nouns, verbs, and other parts of speech exist, they can be used to represent abstract things as well. Mathematical symbols first arose from the need to count and measure things. But as more and more symbols were devised, along with new rules for manipulating them, mathematics developed an extraordinary capacity to represent natural phenomena. Computer code is unique in that some of its symbols (defined as “instructions”) represent changes to be made to other symbols. An “instruction pointer,” itself a changeable symbol, keeps track of which instruction to perform next. This arrangement means that computers are especially good at modeling systems that evolve in time. There are, of course, other kinds of symbols besides these. Dimension 4: Models exhibit varying degrees of realism depending on how accurately they represent their target systems, and with how much detail. On the realistic end of the spectrum are computer simulations designed to reflect their target systems as faithfully as possible. Such models can be extremely detailed, sometimes containing millions or even billions of interacting elements, all behaving according to known scientific principles. They are commonly used for prediction (of the weather, for example), or to gain knowledge about a system that would otherwise be too difficult, costly, or dangerous to obtain. A snapshot of a cosmological simulation that consisted of more than 10 billion massive “particles” in a cubic region of space 2 billion light-years to the side. (Credit: Max-Planck-Institute for Astrophysics) Perfect verisimilitude is impossible (without replicating the target system exactly, which is absurd), but it’s not necessary anyway. A model need only incorporate those aspects of the target system that help to fulfill its purpose. The purpose of a subway map, for example, is to help riders decide where to embark and disembark. Details that don’t aid in that decision can be left out. A map of the NYC subway system in the “Vignelli style,” a style of design favoring simplicity. (Credit: CountZ at English Wikipedia [CC BY-SA 3.0], via Wikimedia Commons) Experiential models can go further than just leaving out unnecessary details — the details that are included can be depicted in nonrealistic ways. Artists are free to explore the full spectrum, from realistic to stylized to incoherent. Why would an artist choose to create a model that is not realistic? One reason is to provide novelty. Novelty counteracts the blinding effect of familiarity, thereby engaging the imagination. Once engaged, the imagination can turn to the aspects of the model that do reflect reality.
https://medium.com/hackernoon/the-modeling-instinct-40a25a272c64
['Tim Sheehan']
2018-11-14 18:34:02.330000+00:00
['Creativity', 'Art', 'Technology', 'Science', 'Modeling Instinct']
What Did We Get Ourselves Into?
From a writing perspective, the work is unprecedented. It requires a deep, hands-on understanding of various media, particularly those steeped in dialogue and character development. Screenwriters and playwrights are well suited; tech writers and copywriters, not so much. To illustrate this unique and potentially burgeoning area of the discipline, I’m thinking a little insight into our history might be illuminating. There were only three of us at Cortana Editorial’s inception (we are a team of 30 today, with international markets now a key part of our work). The foundation of Cortana’s personality was already in place, with some key decisions made. Internal and external research and studies, as well as a lot of discourse, supported decisions that determined Project Cortana (originally only a codename) would be given a personality. The initial voice font would be female. The value prop would center around assistance and productivity. And, there would be “chitchat.” Chitchat is the term given to the customer engagement area that, from the customer’s perspective, provides the fun factor. That sometimes random, often hilarious set of queries included anything and everything, from “What do you think about cheese?” to “Is there a god?” to “Do you poop?” Clearly, our customers were serious about getting to know Cortana. From the business perspective, chitchat is defined as the engagement that’s not officially aligned with the value prop — so it wasn’t a simple justification to point engineering, design, and writing resources towards it. Fortunately, a heroic engineering team at the Microsoft Search Technology Center in Hyderabad, India, did the needful and signed up to build the experience. It was a crucial hand-raise that set the ball in motion. Another team was tasked with parsing out these unique queries, packaging them up, and handing them over to the writing team as Cortana chitchat. We realized that as writers, we were being asked to create one of the most unique characters we’d ever encountered. And creatively, we dove deeply into what we call “the imaginary world” of Cortana. Over three years later, we continue to endow her with make-believe feelings, opinions, challenges, likes and dislikes, even sensitivities and hopes. Smoke and mirrors, sure, but we dig in knowing that this imaginary world is invoked by real people who want detail and specificity. They ask the questions and we give them answers. Certainly, Cortana’s personality started from a creative concept of who she would be, and how we hoped people would experience her. But we now see it as the customer playing an important role in the development of Cortana’s personality by shaping her through their own curiosity. It’s a data-driven back-and-forth — call it a conversation — that makes possible the creation of a character. And, it is fun work. It’s tough to beat spending an hour or two every day thinking hard, determining direction, putting principles in place, and — surprise, surprise — laughing a lot.
https://medium.com/microsoft-design/what-did-we-get-ourselves-into-36ddae39e69b
['Jonathan Foster']
2019-08-27 17:18:46.777000+00:00
['AI', 'Voice Design', 'Artificial Intelligence', 'Microsoft', 'Tech']
Seven Shards of Humanity
The title of my last artwork, Seven Shards of Humanity, is the result of my nine-year-old son's epiphany. When my kids, watching me paint day after day, figure out what I'm trying to say, I have achieved my goal. I don't want to please them with my brushstrokes: I want to create questions. The deeper the questions, the more my paintings can aspire to be part of the change. I think I can't simply call myself an artist. I am first and foremost a mother, and I believe in the strength of the example I give every day to my children. Painting is my strongest way to communicate with them, because their future starts now. I believe there is no border between life and art: what matters is trying to be the best part of this world. I work hard for change through daily actions, and I set all my humanity on the canvas. When my kids understand what I meant, I know that it's good. I spent a long time making my last painting, composed of seven canvases: it summarizes long conversations with my family members and friends about the meaning of present-day life. I put great effort into finding the point I was trying to get to. When I tried to find the right title, the word mirror was all I could think about, but its meaning was judgemental. It was my son who made me realize the futility of judgement when you want to show something for the good of all. This is humanity. Everyone can see themselves reflected in one of these shards, but humanity is each of us, and we have to face the facts in order to step back. I don't know if, all together, we will have the strength to take a step back, because when the mirror is broken, even if we fix it, it will no longer be as strong as before. But humanity isn't a piece of fused quartz smeared with silver. When our flesh is broken and we return to the soil, we will be born again with new opportunities to change the world. Again and again: it is only a matter of time. For the Italian version, click here.
https://medium.com/my-alienart/seven-shards-of-humanity-1fa7dc15d6c
['Nadia Camandona']
2020-09-18 08:46:31.489000+00:00
['Humanity', 'Artist', 'Environment', 'Art', 'Future']
Airflow : Zero to One. In current world, we process a lot of…
In today's world, we process a lot of data, and its churn rate increases exponentially with time; the data can belong to any of the primary/inherited/captured/exhaust/structured/unstructured categories (or an intersection of them). We need to run multiple high-performance data processing pipelines at high frequency to gain maximum insight, do predictive analysis, and address other consumer needs. Orchestrating, scheduling, and monitoring our data pipelines becomes a critical task if the overall data platform and its SLAs are to be stable and reliable. Let's have a look at the several open-source orchestration systems available to us. In this blog, we will go into detail about Airflow and how we can work with it to manage our data pipelines. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. — Airflow documentation Apache Airflow is a workflow management system used to programmatically author, schedule and monitor data pipelines. It has become the de-facto standard tool to orchestrate and schedule any kind of job, from machine learning model training to common ETL orchestration. Airflow Architecture Source: Google Modes of Airflow setup 1. Standalone: in standalone mode with a sequential executor, the executor picks up and runs jobs sequentially, which means there is no parallelism. 2. Pseudo-distributed: this runs with a local executor; local workers pick up and run jobs locally via multiprocessing. This needs a MySQL setup to interact with the metadata. 3. Distributed mode: this runs with a celery executor; remote workers pick up and run jobs as scheduled, with load balancing. Get Airflow running locally Here, we go through the commands which need to be run in order to bring Airflow up locally (standalone); one can choose to skip any step if that package already exists on the local system. Installing Airflow and its dependencies #airflow working directory mkdir /path/to/dir/airflow cd /path/to/dir/airflow #install python and virtual env brew install python3 pip install virtualenv # activate virtual env python3 -m venv venv source venv/bin/activate # to force a non GPL library(‘unidecode’) export SLUGIFY_USES_TEXT_UNIDECODE=yes # install airflow export AIRFLOW_HOME=~/path/to/dir/airflow pip install apache-airflow Once we have installed Airflow, the default config is imported into AIRFLOW_HOME and the folder structure looks like: airflow/ ├── airflow.cfg └── unittests.cfg The airflow.cfg file contains the default values of the configs, which are tweakable to change the behaviour. A few of them that are important and likely to be updated are: plugins_folder #path to Airflow plugins dags_folder #path to dags code executor #executor which Airflow uses base_log_folder #path where Airflow logs should be stored web_server_port #port on which the web server will run sql_alchemy_conn #connection of metadata database load_examples #load default example DAGs The default values of the configs can be found here. Preparing the database airflow initdb #create and initialise the Airflow SQLite database SQLite is the default database for Airflow, and is an adequate solution for local testing and development, but it does not support concurrent access. SQLite is inherently made for a single producer (write) and multiple (but a small number of) consumers (read). In a production environment we will certainly need to use a more robust database solution such as Postgres or MySQL. 
We can edit the sql_alchemy_conn config to point to a MySQL database with the required params. airflow/ ├── airflow.cfg ├── airflow.db (SQLite) └── unittests.cfg Running the web server locally To run the web server, execute: airflow webserver -p 8080 After running this, we will be able to see the Airflow web UI up and running at http://localhost:8080/admin/
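With the webserver up, a quick way to sanity-check the local setup end to end is to drop a small DAG file into the dags_folder. The snippet below is only an illustrative sketch (the dag_id, schedule and bash command are made up, and the import path assumes Airflow 1.10.x):

# hypothetical file: dags/hello_airflow.py (minimal sanity-check DAG, illustrative only)
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # Airflow 1.10.x import path

default_args = {
    "owner": "airflow",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="hello_airflow",          # made-up name for this example
    default_args=default_args,
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",      # run once per day
    catchup=False,                   # do not backfill past runs
) as dag:
    say_hello = BashOperator(
        task_id="say_hello",
        bash_command="echo 'Airflow is up and running'",
    )

Once the file is in the dags_folder and the scheduler is started with airflow scheduler, the DAG should appear in the web UI and can be triggered manually.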
https://medium.com/analytics-vidhya/airflow-zero-to-one-c65221588af1
['Neha Kumari']
2020-04-12 16:34:33.106000+00:00
['Airflow', 'Data', 'Data Science', 'Big Data', 'Data Engineering']
Deploying Node.js apps in Amazon Linux with pm2
Running a Node.js application can be as trivial as node index.js, but running it in production and keeping it running are completely different. Whenever the application crashes or the server reboots unexpectedly, we want the application to come back alive. There are several ways we can properly run a Node.js application in production. In this article, I will be talking about how to deploy one using pm2 in an AWS EC2 instance running Amazon Linux. AWS EC2 Spin up an EC2 instance of your liking. Consider the load your server will be going through and the cost. Here you can get a pricing list for different types of instances: Choose Amazon Linux AMI. This is a free offering from Amazon. The Amazon Linux AMI is a supported and maintained Linux image provided by Amazon Web Services for use on Amazon Elastic Compute Cloud (Amazon EC2). It is designed to provide a stable, secure, and high performance execution environment for applications running on Amazon EC2. It supports the latest EC2 instance type features and includes packages that enable easy integration with AWS. Amazon Web Services provides ongoing security and maintenance updates to all instances running the Amazon Linux AMI. The Amazon Linux AMI is provided at no additional charge to Amazon EC2 users. Learn more at: Server configuration After the instance is up and running, SSH into it, preferably using a non-root account. Update packages: sudo yum update -y Install necessary dev tools: sudo yum install -y gcc gcc-c++ make openssl-devel git Install Node.js: curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash - sudo yum install -y nodejs This will install version 10 of Node.js. If you want to install a different version you can change the location. We will run our application using pm2. Pm2 is a process manager for Node.js. It has a lot of useful features such as monitoring, clustering, reloading, log management, etc. I will discuss some of the features we will use and configure in our application. The features I find most noteworthy: Clustering — runs multiple instances of an application (depending on configuration, in our case we will use number of cores to determine this) Reloading — reloads applications when they crash or the server reboots. Install pm2: sudo npm install pm2@latest -g Generate a pm2 startup script: pm2 startup This will daemonize pm2 and initialize it on system reboots. Learn more here: https://pm2.keymetrics.io/docs/usage/startup The source code You can use https to clone the source code. However, I find that using a deploy key is much better and I can give read-only access to the server. Here is a simplified way of how to generate and use deploy keys: Generate a new ssh key using: ssh-keygen Do not enter a passphrase. Copy the public key contents printed by the command: cat ~/.ssh/id_rsa.pub If you are using Github, add it to the Deploy Keys section of your repository’s Settings page. After the repository is cloned. Run the scripts you need to run in order to get your project ready. For example, if my project uses yarn as the package manager and typescript as the language which needs to be transpiled to javascript when deploying, I will run the following commands: yarn install yarn build The second command runs the build script from my package.json file which states: “build”: “tsc” We can now run the application by running: node dist/index.js But we are not going to. Because we want to use pm2 to run our application. 
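Our actual application code isn't shown in this walkthrough, so as a purely hypothetical example, dist/index.js could be a server as simple as this (the port and route are invented for illustration):

// dist/index.js (hypothetical example app; any Node.js entry point works the same way)
const express = require('express');

const app = express();
const port = process.env.PORT || 3000;

// single health-check style route so we can verify the deployment
app.get('/', (req, res) => {
  res.json({ status: 'ok', env: process.env.NODE_ENV || 'development' });
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});

pm2 only needs the path to a script like this; everything else about how it is run lives in the ecosystem file described next.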
The Ecosystem File Pm2 provides a way to configure our application in an ecosystem file where we can easily tune the various configurable options provided. You can generate an ecosystem file by running: pm2 ecosystem Our application’s ecosystem file contains: ecosystem.config.js: module.exports = { apps : [{ name: ‘My App’, script: ‘dist/index.js’, instances: ‘max’, max_memory_restart: ‘256M’, env: { NODE_ENV: ‘development’ }, env_production: { NODE_ENV: ‘production’ } }] }; What this configuration tells pm2 is, run the application and name it My App. Run it using the script dist/index.js. Spawn as many instances of the application according to the number of CPUs present. Mind the NODE_ENV environment variable. This has several benefits when running an express application. It boosts the performance of the app by tweaking a few things such as (Taken from express documentation): 1. Cache view templates. 2. Cache CSS files generated from CSS extensions. 3. Generate less verbose error messages. Read more here: There are a lot more options in pm2 that you can tweak, I am leaving those at default values. Check them out here: Run the application: pm2 reload ecosystem.config.js --env production This command reloads the application with production environment declared in the ecosystem file. This process is also done with zero downtime. It compares the ecosystem configuration and currently running processes and updates as necessary. We want to be able to write up a script for everytime we need to deploy. This way, the app is not shut down and started again (which a restart does). Read more about it: When our application is up and running, we have to save the process list we want to respawn for when the system reboots unexpectedly: pm2 save We can check our running applications with: pm2 status Monitor our apps: pm2 monit View logs: pm2 logs Let’s create a handy script to deploy when there is a change: deploy.sh: #!/bin/bash git pull yarn install npm run build pm2 reload ecosystem.config.js --env production # EOF Make the file executable: chmod +x deploy.sh Now, every time you need to deploy changes, simply run: ./deploy.sh Conclusion Let’s recap: Create an EC2 instance running Amazon Linux Update packages (might include security updates). Install the desired Node.js version. Use a process manager to run the application (such as pm2). Use deploy keys to pull code from the source repository. Create an ecosystem configuration file so that it is maintainable in the future. Create a deploy script so that it is easy to run future deployments. Run the deployment script whenever there is a change to be deployed. Congratulations! Your application is up and running. There are several other ways to achieve the same end goal, such as using forever instead of pm2, or even using Docker instead and deploy to Amazon ECS. This is a documentation of how I deploy Node.js applications in production if running them on EC2 instances. When your deployments become more frequent, you should consider a CI/CD integration to build and deploy whenever there is a change in the source code. Make sure you monitor and keep an eye on your server’s resource usage. Last but not least, make sure you have proper logging in your application. I cannot stress enough how important proper logging is. Tweet me at @war1oc if you have anything to ask or add. Check out other articles from our engineering team: https://medium.com/monstar-lab-bangladesh-engineering Visit our website to learn more about us: www.monstar-lab.co.bd
https://medium.com/monstar-lab-bangladesh-engineering/deploying-node-js-apps-in-amazon-linux-with-pm2-7fc3ef5897bb
['Tanveer Hassan']
2019-08-21 10:40:38.433000+00:00
['Software Engineering', 'AWS', 'Programming', 'Nodejs', 'JavaScript']
New Class Naming Rules in Ruby
New Class Naming Rules in Ruby There were 26 valid characters. Now there are 1,853! Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog In Ruby 2.5 and prior: It’s been a longstanding rule in Ruby that you must use a capital ASCII letter as the first character of a Class or Module name. This limited you to just these 26 characters: ABCDEFGHIJKLMNOPQRSTUVWXYZ New in Ruby 2.6: In Ruby 2.6, non-ASCII upper case characters are allowed. By my count, that makes a total of 1,853 options! Here are the 1,827 new characters that can start a Class or Module name in Ruby 2.6: ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞĀĂĄĆĈĊČĎĐĒĔĖĘĚĜĞĠĢĤĦĨĪĬĮİIJĴĶĹĻĽĿŁŃŅŇŊŌŎŐŒŔŖŘŚŜŞŠŢŤŦŨŪŬŮŰŲŴŶŸŹŻŽƁƂƄƆƇƉƊƋƎƏƐƑƓƔƖƗƘƜƝƟƠƢƤƦƧƩƬƮƯƱƲƳƵƷƸƼDŽDžLJLjNJNjǍǏǑǓǕǗǙǛǞǠǢǤǦǨǪǬǮDZDzǴǶǷǸǺǼǾȀȂȄȆȈȊȌȎȐȒȔȖȘȚȜȞȠȢȤȦȨȪȬȮȰȲȺȻȽȾɁɃɄɅɆɈɊɌɎͰͲͶͿΆΈΉΊΌΎΏΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫϏϒϓϔϘϚϜϞϠϢϤϦϨϪϬϮϴϷϹϺϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯѠѢѤѦѨѪѬѮѰѲѴѶѸѺѼѾҀҊҌҎҐҒҔҖҘҚҜҞҠҢҤҦҨҪҬҮҰҲҴҶҸҺҼҾӀӁӃӅӇӉӋӍӐӒӔӖӘӚӜӞӠӢӤӦӨӪӬӮӰӲӴӶӸӺӼӾԀԂԄԆԈԊԌԎԐԒԔԖԘԚԜԞԠԢԤԦԨԪԬԮԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉՊՋՌՍՎՏՐՑՒՓՔՕՖႠႡႢႣႤႥႦႧႨႩႪႫႬႭႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅჇჍᎠᎡᎢᎣᎤᎥᎦᎧᎨᎩᎪᎫᎬᎭᎮᎯᎰᎱᎲᎳᎴᎵᎶᎷᎸᎹᎺᎻᎼᎽᎾᎿᏀᏁᏂᏃᏄᏅᏆᏇᏈᏉᏊᏋᏌᏍᏎᏏᏐᏑᏒᏓᏔᏕᏖᏗᏘᏙᏚᏛᏜᏝᏞᏟᏠᏡᏢᏣᏤᏥᏦᏧᏨᏩᏪᏫᏬᏭᏮᏯᏰᏱᏲᏳᏴᏵḀḂḄḆḈḊḌḎḐḒḔḖḘḚḜḞḠḢḤḦḨḪḬḮḰḲḴḶḸḺḼḾṀṂṄṆṈṊṌṎṐṒṔṖṘṚṜṞṠṢṤṦṨṪṬṮṰṲṴṶṸṺṼṾẀẂẄẆẈẊẌẎẐẒẔẞẠẢẤẦẨẪẬẮẰẲẴẶẸẺẼẾỀỂỄỆỈỊỌỎỐỒỔỖỘỚỜỞỠỢỤỦỨỪỬỮỰỲỴỶỸỺỼỾἈἉἊἋἌἍἎἏἘἙἚἛἜἝἨἩἪἫἬἭἮἯἸἹἺἻἼἽἾἿὈὉὊὋὌὍὙὛὝὟὨὩὪὫὬὭὮὯᾈᾉᾊᾋᾌᾍᾎᾏᾘᾙᾚᾛᾜᾝᾞᾟᾨᾩᾪᾫᾬᾭᾮᾯᾸᾹᾺΆᾼῈΈῊΉῌῘῙῚΊῨῩῪΎῬῸΌῺΏῼℂℇℋℌℍℐℑℒℕℙℚℛℜℝℤΩℨKÅℬℭℰℱℲℳℾℿⅅⅠⅡⅢⅣⅤⅥⅦⅧⅨⅩⅪⅫⅬⅭⅮⅯↃⒶⒷⒸⒹⒺⒻⒼⒽⒾⒿⓀⓁⓂⓃⓄⓅⓆⓇⓈⓉⓊⓋⓌⓍⓎⓏⰀⰁⰂⰃⰄⰅⰆⰇⰈⰉⰊⰋⰌⰍⰎⰏⰐⰑⰒⰓⰔⰕⰖⰗⰘⰙⰚⰛⰜⰝⰞⰟⰠⰡⰢⰣⰤⰥⰦⰧⰨⰩⰪⰫⰬⰭⰮⱠⱢⱣⱤⱧⱩⱫⱭⱮⱯⱰⱲⱵⱾⱿⲀⲂⲄⲆⲈⲊⲌⲎⲐⲒⲔⲖⲘⲚⲜⲞⲠⲢⲤⲦⲨⲪⲬⲮⲰⲲⲴⲶⲸⲺⲼⲾⳀⳂⳄⳆⳈⳊⳌⳎⳐⳒⳔⳖⳘⳚⳜⳞⳠⳢⳫⳭⳲꙀꙂꙄꙆꙈꙊꙌꙎꙐꙒꙔꙖꙘꙚꙜꙞꙠꙢꙤꙦꙨꙪꙬꚀꚂꚄꚆꚈꚊꚌꚎꚐꚒꚔꚖꚘꚚꜢꜤꜦꜨꜪꜬꜮꜲꜴꜶꜸꜺꜼꜾꝀꝂꝄꝆꝈꝊꝌꝎꝐꝒꝔꝖꝘꝚꝜꝞꝠꝢꝤꝦꝨꝪꝬꝮꝹꝻꝽꝾꞀꞂꞄꞆꞋꞍꞐꞒꞖꞘꞚꞜꞞꞠꞢꞤꞦꞨꞪꞫꞬꞭꞮꞰꞱꞲꞳꞴꞶ𐐀𐐁𐐂𐐃𐐄𐐅𐐆𐐇𐐈𐐉𐐊𐐋𐐌𐐍𐐎𐐏𐐐𐐑𐐒𐐓𐐔𐐕𐐖𐐗𐐘𐐙𐐚𐐛𐐜𐐝𐐞𐐟𐐠𐐡𐐢𐐣𐐤𐐥𐐦𐐧𐒰𐒱𐒲𐒳𐒴𐒵𐒶𐒷𐒸𐒹𐒺𐒻𐒼𐒽𐒾𐒿𐓀𐓁𐓂𐓃𐓄𐓅𐓆𐓇𐓈𐓉𐓊𐓋𐓌𐓍𐓎𐓏𐓐𐓑𐓒𐓓𐲀𐲁𐲂𐲃𐲄𐲅𐲆𐲇𐲈𐲉𐲊𐲋𐲌𐲍𐲎𐲏𐲐𐲑𐲒𐲓𐲔𐲕𐲖𐲗𐲘𐲙𐲚𐲛𐲜𐲝𐲞𐲟𐲠𐲡𐲢𐲣𐲤𐲥𐲦𐲧𐲨𐲩𐲪𐲫𐲬𐲭𐲮𐲯𐲰𐲱𐲲𑢠𑢡𑢢𑢣𑢤𑢥𑢦𑢧𑢨𑢩𑢪𑢫𑢬𑢭𑢮𑢯𑢰𑢱𑢲𑢳𑢴𑢵𑢶𑢷𑢸𑢹𑢺𑢻𑢼𑢽𑢾𑢿𝐀𝐁𝐂𝐃𝐄𝐅𝐆𝐇𝐈𝐉𝐊𝐋𝐌𝐍𝐎𝐏𝐐𝐑𝐒𝐓𝐔𝐕𝐖𝐗𝐘𝐙𝐴𝐵𝐶𝐷𝐸𝐹𝐺𝐻𝐼𝐽𝐾𝐿𝑀𝑁𝑂𝑃𝑄𝑅𝑆𝑇𝑈𝑉𝑊𝑋𝑌𝑍𝑨𝑩𝑪𝑫𝑬𝑭𝑮𝑯𝑰𝑱𝑲𝑳𝑴𝑵𝑶𝑷𝑸𝑹𝑺𝑻𝑼𝑽𝑾𝑿𝒀𝒁𝒜𝒞𝒟𝒢𝒥𝒦𝒩𝒪𝒫𝒬𝒮𝒯𝒰𝒱𝒲𝒳𝒴𝒵𝓐𝓑𝓒𝓓𝓔𝓕𝓖𝓗𝓘𝓙𝓚𝓛𝓜𝓝𝓞𝓟𝓠𝓡𝓢𝓣𝓤𝓥𝓦𝓧𝓨𝓩𝔄𝔅𝔇𝔈𝔉𝔊𝔍𝔎𝔏𝔐𝔑𝔒𝔓𝔔𝔖𝔗𝔘𝔙𝔚𝔛𝔜𝔸𝔹𝔻𝔼𝔽𝔾𝕀𝕁𝕂𝕃𝕄𝕆𝕊𝕋𝕌𝕍𝕎𝕏𝕐𝕬𝕭𝕮𝕯𝕰𝕱𝕲𝕳𝕴𝕵𝕶𝕷𝕸𝕹𝕺𝕻𝕼𝕽𝕾𝕿𝖀𝖁𝖂𝖃𝖄𝖅𝖠𝖡𝖢𝖣𝖤𝖥𝖦𝖧𝖨𝖩𝖪𝖫𝖬𝖭𝖮𝖯𝖰𝖱𝖲𝖳𝖴𝖵𝖶𝖷𝖸𝖹𝗔𝗕𝗖𝗗𝗘𝗙𝗚𝗛𝗜𝗝𝗞𝗟𝗠𝗡𝗢𝗣𝗤𝗥𝗦𝗧𝗨𝗩𝗪𝗫𝗬𝗭𝘈𝘉𝘊𝘋𝘌𝘍𝘎𝘏𝘐𝘑𝘒𝘓𝘔𝘕𝘖𝘗𝘘𝘙𝘚𝘛𝘜𝘝𝘞𝘟𝘠𝘡𝘼𝘽𝘾𝘿𝙀𝙁𝙂𝙃𝙄𝙅𝙆𝙇𝙈𝙉𝙊𝙋𝙌𝙍𝙎𝙏𝙐𝙑𝙒𝙓𝙔𝙕𝙰𝙱𝙲𝙳𝙴𝙵𝙶𝙷𝙸𝙹𝙺𝙻𝙼𝙽𝙾𝙿𝚀𝚁𝚂𝚃𝚄𝚅𝚆𝚇𝚈𝚉𝚨𝚩𝚪𝚫𝚬𝚭𝚮𝚯𝚰𝚱𝚲𝚳𝚴𝚵𝚶𝚷𝚸𝚹𝚺𝚻𝚼𝚽𝚾𝚿𝛀𝛢𝛣𝛤𝛥𝛦𝛧𝛨𝛩𝛪𝛫𝛬𝛭𝛮𝛯𝛰𝛱𝛲𝛳𝛴𝛵𝛶𝛷𝛸𝛹𝛺𝜜𝜝𝜞𝜟𝜠𝜡𝜢𝜣𝜤𝜥𝜦𝜧𝜨𝜩𝜪𝜫𝜬𝜭𝜮𝜯𝜰𝜱𝜲𝜳𝜴𝝖𝝗𝝘𝝙𝝚𝝛𝝜𝝝𝝞𝝟𝝠𝝡𝝢𝝣𝝤𝝥𝝦𝝧𝝨𝝩𝝪𝝫𝝬𝝭𝝮𝞐𝞑𝞒𝞓𝞔𝞕𝞖𝞗𝞘𝞙𝞚𝞛𝞜𝞝𝞞𝞟𝞠𝞡𝞢𝞣𝞤𝞥𝞦𝞧𝞨𝟊𞤀𞤁𞤂𞤃𞤄𞤅𞤆𞤇𞤈𞤉𞤊𞤋𞤌𞤍𞤎𞤏𞤐𞤑𞤒𞤓𞤔𞤕𞤖𞤗𞤘𞤙𞤚𞤛𞤜𞤝𞤞𞤟𞤠𞤡🄰🄱🄲🄳🄴🄵🄶🄷🄸🄹🄺🄻🄼🄽🄾🄿🅀🅁🅂🅃🅄🅅🅆🅇🅈🅉🅐🅑🅒🅓🅔🅕🅖🅗🅘🅙🅚🅛🅜🅝🅞🅟🅠🅡🅢🅣🅤🅥🅦🅧🅨🅩🅰🅱🅲🅳🅴🅵🅶🅷🅸🅹🅺🅻🅼🅽🅾🅿🆀🆁🆂🆃🆄🆅🆆🆇🆈🆉ABCDEFGHIJKLMNOPQRSTUVWXYZ (Characters unsupported by this font appear as squares.) This change supports upper case characters in other languages but doesn’t go so far as to allow emoji as a Class or Module name. These examples are now valid Ruby: It’s worth noting that local variables in Ruby could begin with these characters in Ruby 2.5 and earlier. (Thanks to Cary Swoveland for pointing this out.) A local variable starting with one of these characters would become a constant in Ruby 2.6. Why support these additional characters? Sergei Borodanov started an issue ticket asking about support for Cyrillic characters. Matz decided, “maybe it’s time to relax the limitation for Non-ASCII capital letters to start constant names.” Nobuyoshi (“nobu”) Nakada (a.k.a. “patch monster”) wrote and committed the patch to support this new feature. With the addition of this feature, Rubyists in various languages can use their own alphabet for the first character of a Class or Module. 
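As a rough sketch (assuming Ruby 2.6 or later; the class and constant names below are invented for illustration), definitions like these now parse, whereas earlier Rubies treated such identifiers as local variables and rejected them as class names:

# Valid in Ruby 2.6+: a non-ASCII upper case letter can start a constant name
class Ωμέγα
  def to_s
    'omega'
  end
end

module Ürün
  VERSION = '1.0'
end

puts Ωμέγα.new      # => omega
puts Ürün::VERSION  # => 1.0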
For example, a Greek Rubyist can now have an Ωμέγα class, instead of an Oμέγα class — where the first letter is transliterated. Thanks to the Ruby core team for making this change! It will be shipped on December 25, 2018 with Ruby 2.6. We use Ruby for lots of things here at Square — including our Square Connect Ruby SDKs and open source Ruby projects. We’re eagerly awaiting the release of Ruby 2.6! The Ruby logo is Copyright © 2006, Yukihiro Matsumoto, distributed under CC BY-SA 2.5. Want more? Sign up for your monthly developer newsletter or drop by the Square dev Slack channel and say “hi!”
https://medium.com/square-corner-blog/new-class-naming-rules-in-ruby-bb3b45150c37
['Shannon Skipper']
2019-04-18 22:18:55.872000+00:00
['Ruby', 'Programming Languages', 'Software Development', 'Software Engineering', 'Engineering']
Financial Times Data Platform: From zero to hero
Financial Times Data Platform: From zero to hero An in-depth walkthrough of the evolution of our Data Platform The Financial Times, one of the world’s leading business news organisations, has been around for more than 130 years and is famous for its quality journalism. To stay at the top for this long, you have to be able to adapt as the world changes. For the last decade, that has meant being able to take advantage of the opportunities that technology provides, as the FT undergoes a digital transformation. This article will take an in-depth look behind the scenes for one part of that transformation: the creation and evolution of the Financial Times’ Data platform. The Data Platform provides information about how our readers interact with the FT that allows us to make decisions about how we can continue to deliver the things our readers want and need. Generation 1: 2008–2014 Early days At first, the Data Platform focussed on providing recommendations to readers based on what they had read already. At the time, the majority of our readers still read the FT in print, so a single store and 24 hours latency was sufficient. The architecture was clean and simple, and Financial Times’ employees were able to execute queries on top of it to analyse user’s interests. But then a number of events happened. Internet revolution. The internet took off, and day after day the number of readers visiting ft.com rather than reading the print newspaper increased. Mobile innovation. Mobile devices started being part of people’s lives. Having a smartphone moved from a luxury to an expectation, and this allowed the Financial Times to release mobile applications for each of the most popular operating systems. This became another stream of users who could benefit from reading articles while they were travelling to work, resting at home or being outside in nature without access to their laptops. Generation 2: 2014–2016 The arrival of our Extract, Transform, Load (ETL) Framework The second generation of our platform faced two new challenges: firstly, the need to allow our stakeholders to analyse data at scale, asking new types of questions; and secondly, an increasing volume of data. In order to achieve these goals, we built our own ETL Framework in 2014. This allowed our teams to set up new jobs and models in an automated and scalable way and included features such as: Scheduling. Automating running SQL queries multiple times per day, synchronising the outputs with other teams and last but not least focusing more on the business cases rather than on the implementation details. Python interface. Providing the ability to run Python code in addition to the SQL queries, allowing the stakeholders to run even more complex data models. Configuration over implementation. One of the reasons for choosing to introduce an ETL Framework was the ability to produce jobs in XML file format, which enabled even more business capabilities at that time. The release of the ETL Framework had a huge positive impact but could not on its own resolve all issues coming with the increased amount of data and number of consumers. In fact, adding this new component actually created more issues from a performance point of view, as the number of consumers of the Data Platform increased, now including the Business Intelligence (BI) Team, Data Science Team, and others. The SQL Server instance started to become a bottleneck for the Data Platform, hence for all the stakeholders too. 
It was time for a change and we were trying to find the best solution for this particular issue. As the Financial Times was already using some services provided by Amazon Web Services (AWS), we started evaluating Amazon Redshift as an option for a fast, simple and cost-effective Data Warehouse for storing the increasing amount of data. Amazon Redshift is designed for Online Analytical Processing (OLAP) in the cloud which was exactly what we were looking for. Using this approach we were able to optimise query performance a lot without any additional effort from our team to support the new storage service. Generation 3: 2016–2018 The beginning of Big Data at Financial Times Having Amazon Redshift as a Data Warehouse solution and an ETL Framework as a tool for deploying extract, transform, load jobs, all the FT teams were seeing the benefit of having a Data Platform. However, when working for a big company leading the market, such as Financial Times in business news distribution, we cannot be satisfied with our existing achievements. That’s why we started to think how we can improve this architecture even more. Our next goal was to reduce data latency. We were ingesting data once per day, so latency was up to 24 hours. Reducing latency would mean the FT could respond more quickly to trends in the data. In order to reduce the latency, we started working on a new approach — named Next Generation Data Analytics (NGDA) — in 2015 and in early 2016 it was adopted by all teams in Financial Times. First, we developed our own tracking library, responsible for sending every interaction of our readers to the Data Platform. The existing architecture expected a list of CSV files that would have been transferred once per day by jobs run by the ETL Framework, so sending events one by one meant that we needed to change the existing architecture to support the new event-driven approach. Then, we created an API service responsible for ingesting readers’ interactions. However, we still needed a way to transfer this data to the Data Warehouse with the lowest possible latency as well as exposing this data to multiple consuming downstream systems. As we were migrating all services to the cloud, and more specifically to AWS, we looked at the managed services provided by Amazon that could fulfil our event processing needs. After analysing the alternatives, we redesigned our system to send all raw events from ft.com to the Simple Notification Service (SNS). Using this approach, it was possible for many teams in the organisation to subscribe to the SNS topic and unlock new business cases relying on the real time data. Still, having this raw data in SNS was not enough — we also needed to get the data into the Data Warehouse to support all the existing workflows. We decided to use a Simple Queue Service (SQS) queue as it allowed us to persist all events in a queue immediately when they arrived in the system. But before moving the data to our Data Warehouse, we had one more requirement from the business — to enrich the raw events with additional data provided by internal services, external services or by simple in-memory transformations. In order to satisfy these needs with minimal latency, we created a NodeJS service responsible for processing all the events in a loop asynchronously, making the enrichment step possible at scale. Once an event had been fully enriched, the data was sent immediately to the only managed event store provided by AWS at that time — Kinesis. 
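To make the shape of that consume-enrich-produce loop concrete, here is a heavily simplified Node.js sketch; the queue URL, stream name, region and enrichment fields are placeholders invented for illustration, not our production code, and the real enrichment step calls out to internal and external services rather than doing an in-memory merge.

// Simplified enrichment loop: read raw events from SQS, enrich them, push them to Kinesis.
// All names (queue URL, stream, region, fields) are hypothetical placeholders.
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: 'eu-west-1' });
const kinesis = new AWS.Kinesis({ region: 'eu-west-1' });

const QUEUE_URL = 'https://sqs.eu-west-1.amazonaws.com/123456789012/raw-events'; // placeholder
const STREAM_NAME = 'enriched-events';                                           // placeholder

async function enrich(rawEvent) {
  // Stand-in for the real enrichment calls.
  return { ...rawEvent, enrichedAt: new Date().toISOString() };
}

async function processForever() {
  while (true) {
    const { Messages = [] } = await sqs
      .receiveMessage({ QueueUrl: QUEUE_URL, MaxNumberOfMessages: 10, WaitTimeSeconds: 20 })
      .promise();

    for (const message of Messages) {
      const enriched = await enrich(JSON.parse(message.Body));

      // Push the enriched event to the Kinesis stream.
      await kinesis
        .putRecord({
          StreamName: STREAM_NAME,
          PartitionKey: String(enriched.userId || 'unknown'),
          Data: JSON.stringify(enriched),
        })
        .promise();

      // Only remove the raw message once the enriched event is safely in the stream.
      await sqs
        .deleteMessage({ QueueUrl: QUEUE_URL, ReceiptHandle: message.ReceiptHandle })
        .promise();
    }
  }
}

processForever().catch(console.error);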
Using this architecture, we were able to persist our enriched events in a stream with milliseconds latency, which was amazing news for our stakeholders. Once we had the data in a Kinesis Stream, we used another AWS managed service — Kinesis Firehose — to consume the enriched events stream and output them as CSV files into a S3 bucket based on one of two main conditions — a predefined time period having passed (which happened rarely) or the file size reaching 100mb. This new event-driven approach produced CSV files with enriched events in a couple of minutes depending on the time of the day, hence the latency in our data lake was reduced to 1–5 minutes. But there was one more important requirement from the business teams. They requested clean data in the Data Warehouse. Using the Kinesis Firehose approach, we couldn’t guarantee that we only had one instance of an event because: We could receive duplicate events from our client side applications. The Kinesis Firehose itself could duplicate data when a Firehose job retried on failure. In order to deduplicate all events, we created another Amazon Redshift cluster responsible for ingesting and deduplicating each new CSV file. This involved a tradeoff: implementing a process which guarantees uniqueness increased the latency for data to get into the Data Warehouse to approximately 4 hours, but enabled our business teams to generate insights much more easily. Generation 4: 2019 Rebuild the platform to allow our team to focus on adding business value Generation 3 of the platform was complicated to run. Our team spent most of the day supporting the large number of independent services, with engineering costs increasing, and far less time to do interesting, impactful work. We wanted to take advantage of new technologies to reduce this complexity, but also to provide far more exciting capabilities to our stakeholders: we wanted to turn the Data Platform into a PaaS (Platform as a Service). Our initial criteria were the platform should offer: Self service — Enabling stakeholders to independently develop and release new features. Enabling stakeholders to independently develop and release new features. Support for multiple internal consumers — with different teams having different levels of access. with different teams having different levels of access. Security isolation — so that teams could only access their own data and jobs. — so that teams could only access their own data and jobs. Code reuse — to avoid duplication for common functionality. Building a multi-tenant, self service platform is quite challenging because it requires every service to support both of these things. Still, putting effort into implementing this approach would be extremely beneficial for the future, with the key benefits being: Stakeholder teams can deliver value without having to wait to coordinate with platform teams — this reduces costs, increases velocity, and puts them in charge of their own destiny this reduces costs, increases velocity, and puts them in charge of their own destiny Platform teams can focus on building new functionality for the platform — rather than spending their time unblocking stakeholder teams The way we chose to deliver this decoupling was through a focus on configuration over implementation, with stakeholder teams able to set up their own management rules based on their internal team structure, roles and permissions, using an admin web interface. Kubernetes A software system is like a house. 
You need to build it from the foundations rather than from the roof. In engineering, the foundation is the infrastructure. Without a stable infrastructure, having a production ready and stable system is impossible. That’s why we have started with the foundation, discussing what would be the best approach for the short and long term future. Our existing Data Platform has been deployed to AWS ECS. While AWS ECS is a really great container orchestrator, we decided to switch to Kubernetes because on EKS, we get baked in support for lots of things we need for supporting multiple tenants, such as security isolation between the tenants, hardware limitations per tenant, etc. In addition to that there are many Kubernetes Operators coming out of the box for us, such as spark-k8s-operator, prometheus-operator and many more. AWS has been offering a managed Kubernetes cluster (EKS) for a while and it was the obvious choice for the foundations of the Data Platform for the short and long term future. Aiming to have a self service multi-tenant Data Platform, we had to apply several requirements on top of each service and the Kubernetes cluster itself. System namespace — Separate all system components in an isolated Kubernetes namespace responsible for the management of all the services. — Separate all system components in an isolated Kubernetes namespace responsible for the management of all the services. Namespace per team — Group all team resources in a Kubernetes namespace in order to automatically apply team-based configurations and constraints for each of them. — Group all team resources in a Kubernetes namespace in order to automatically apply team-based configurations and constraints for each of them. Security isolation per namespace — Restrict cross namespace access in the Kubernetes cluster to prevent unexpected interactions between different team resources. — Restrict cross namespace access in the Kubernetes cluster to prevent unexpected interactions between different team resources. Resource quota per namespace — Prevent affecting all teams when one of them reaches hardware limits, while measuring efficiency by calculating the ratio between spent money and delivered business value per team. Batch processing The ETL Framework was quite stable and had been running for years, but to fully benefit from our adoption of cloud-native technologies, we needed a new one that supported: Cloud deployment . . Horizontal scaling. As the number of workflows and the amounts of data increased, we needed to be able to scale up with minimal effort. As the number of workflows and the amounts of data increased, we needed to be able to scale up with minimal effort. Multi-tenancy. Because the whole platform needed to support this. Because the whole platform needed to support this. Deployment to Kubernetes. Again, for consistency across the whole platform. Since we built our ETL framework, the expectations from ETL have moved on. We wanted the ability to support: Language agnostic jobs. In order to get the most out of the diverse skill set in all teams using the Data Platform. In order to get the most out of the diverse skill set in all teams using the Data Platform. Workflow concept. The need to define a sequence of jobs depending on each other in a workflow is another key business requirement to make data-driven decisions on a daily basis. The need to define a sequence of jobs depending on each other in a workflow is another key business requirement to make data-driven decisions on a daily basis. Code reusability. 
Since the functionality behind part of the steps in the workflows are repetitive, they are a good candidate for code reuse. Since the functionality behind part of the steps in the workflows are repetitive, they are a good candidate for code reuse. Automated distributed backfilling for ETL jobs. Since this process occurs quite often for our new use cases and automation will increase business velocity. Since this process occurs quite often for our new use cases and automation will increase business velocity. Monitoring . We need good monitoring, in order to prevent making data driven decisions based on low quality, high latency or even missing data. . We need good monitoring, in order to prevent making data driven decisions based on low quality, high latency or even missing data. Extendability. The ability to extend the batch processing service with new capabilities based on feedback and requirements provided by the stakeholders will make this service flexible enough for the foreseeable future. The other big change is that fully-featured ETL frameworks now exist, rather than having to be built from scratch. Having all these requirements in mind, we evaluated different options on the market such as Luigi, Oozie, Azkaban, AWS Steps, Cadence and Apache Airflow. The best fit for our requirements was Apache Airflow. Great though it is, it still has some limitations — such as a single scheduler and lack of native multi-tenancy support. While the first one is not a huge concern for us at the moment based on the benchmarks, our estimated load and the expected release of this feature in Apache Airflow 2.0, the second one would impact our whole architecture, and so we decided to build custom multi-tenant support on top of Apache Airflow. We considered using an Apache Airflow managed service — there are multiple providers — but in the end decided to continue with a self managed solution based on some of the requirements including multi-tenancy, language agnostic jobs and monitoring. All of them could not be achieved with a managed solution, leading to the extensibility requirement and its importance for us. Once Apache Airflow had been integrated into our platform, we started by releasing new workflows on top of it, to ensure its capabilities. When we knew it met all criteria, the next step was obvious and currently we are in the process of migrating all of our existing ETL jobs to Apache Airflow. In addition to that, we have released it as a self service product to all stakeholders in the company and we already have consumers such as the BI Team, the Data Science team, and others. Generation 5: 2020 It’s time for real time data Generation 4 was a big step forward. However, there were still some targets for improvement. Real time data Our latency was still around 4 hours for significant parts of our data. Most of these 4 hours of latency happened because of the deduplication procedure — which is quite important for our stakeholders and their needs. For example, the FT can not make any business development decisions based on low quality data. That’s why we must ensure that our Data Warehouse persists clean data for these use cases. However, as the product, business and technologies evolve, new use cases have emerged. They could provide impact by using real time data even with a small percentage of low quality data. A great example for that is ordering a user’s feed in ft.com and the mobile application based on the reader’s interests. 
Having a couple of duplicated events would not be crucial for this use case as the user experience would always be much better than showing the same content to all users without having their interests in mind. We already had a stable stream processing architecture but it was quite complicated. We started looking into optimising it by migrating from SNS, SQS, and Kinesis to a new architecture using Apache Kafka as an event store. Having a managed service for the event store would be our preference and we decided to give Amazon MSK a try as it seemed to have been stable for quite some time. Ingesting data in Apache Kafka topics was a great starting point to provide real time data to the business. However, the stakeholders still didn’t have access to the data in the Apache Kafka cluster. So, our next goal was to create a stream processing platform that could allow them to deploy models on top of the real time data. We needed something that matched the rest of our architecture — supporting multi-tenancy, self service, multiple languages and deployable to Kubernetes. Having those requirements in mind, Apache Spark seemed to fit very well for us, being the most used analytics engine and having one of the biggest open-source communities worldwide. In order to deploy Apache Spark streaming jobs to Kubernetes, we decided to use the spark-on-k8s-operator. Moreover, we have built a section in our Data UI which allows our stakeholders to deploy their Apache Spark stream processing jobs to production by filling a simple form containing information for the job such as the Docker image and tag, CPU and memory limitations, credentials for the data sources used in the job, etc. Data contract Another area where we needed to make optimisations was moving the data validation to the earliest possible step in the pipeline. We had services validating the data coming into the Data Platform, however these validations were executed at different steps of the pipeline. This led to issues as the pipeline sometimes has broken because of incoming incorrect data. That’s why we wanted to improve this area by providing the following features: A Data contract for the event streams in the pipeline Moving the validation step to the earliest possible stage Adding compression to reduce event size Having all these needs in mind, we found a great way to achieve these requirements by using Apache Avro. It allows defining a data contract per topic in Apache Kafka, hence ensuring the data quality in the cluster. This approach also resolves another issue — the validation step can be moved to be the first step in the pipeline. Using an Apache Spark streaming job with Apache Avro schema prevents us from having broken data in the pipeline by moving all incorrect events to other Kafka topics used as Dead Letter Queues. Another great feature coming with Apache Avro is serialisation and deserialisation, which makes it possible to provide compression over the data persisted in the Apache Kafka event store. Data Lake Migrating from CSV to parquet files in our data lake storage has been a great initial choice for most of our needs. However, we still lacked some features on top of it that could make our life much easier, including ACID transactions, schema enforcements and updating events in parquet files. After analysing all existing alternatives on the market including Hudi, Iceberg and Delta Lake, we decided to start using Delta Lake based on its Apache Spark 3.x support. 
It provides all of the main requirements and fits perfectly in our architecture. Efficiency. We decoupled the computation process from the storage allowing our architecture to scale more efficiently. Low latency, high quality data. Using the upsert and schema enforcements features provided by Delta Lake, we can continuously deliver low latency and high quality data to all stakeholders in Financial Times. Multiple access points. Persisting all incoming data into Delta Lake allows the stakeholders to query low latency data through multiple systems including Apache Spark and Presto. Time travel. Delta Lake allows reprocessing data from a particular time in the past which automates back-populating data, in addition to allowing analysis between particular date intervals for different use cases such as reports or training machine learning models. Virtualisation layer At the Financial Times we have different kinds of storage used by teams in the company, including Amazon Redshift, Google BigQuery, Amazon S3, Apache Kafka, VoltDB, etc. However, stakeholders often need to analyse data split across more than one data store in order to make data-driven decisions. In order to satisfy this need, they use Apache Airflow to move data between different data stores. However, this approach is far from optimal. Using a batch processing approach adds additional latency to the data and, in some cases, making decisions with low latency data is crucial for a business use case. Moreover, deploying a batch processing job requires more technical background which may limit some of the stakeholders. Having these details in mind, we had some clear requirements about what the stakeholders would expect in order to deliver even more value to our readers — support for: Ad hoc queries over any storage ANSI SQL — syntax they often know well Being able to join data between different data storages And we wanted the ability to deploy to Kubernetes, to fit into our platform architecture. After analysing different options on the market, we decided to start with Presto as it allows companies to analyse petabytes of data at scale while being able to join data from many data sources, including all of the data sources used at the Financial Times. Plan for the future At the Financial Times we are never satisfied with our achievements and this is one of the reasons why this company has been on the top of this business for more than 130 years. That’s why we already have plans on how to evolve this architecture even more. Ingestion platform. We ingest data by using the three components — batch processing jobs managed by Apache Airflow, Apache Spark streaming jobs consuming data from Apache Kafka streams and REST services expecting incoming data to the Data Platform. We aim to replace the existing high latency ingestion services with Change Data Capture (CDC) which will enable ingesting new data immediately when it arrives in any data sources, hence the business will be able to deliver an even better experience for our readers. We ingest data by using the three components — batch processing jobs managed by Apache Airflow, Apache Spark streaming jobs consuming data from Apache Kafka streams and REST services expecting incoming data to the Data Platform. We aim to replace the existing high latency ingestion services with Change Data Capture (CDC) which will enable ingesting new data immediately when it arrives in any data sources, hence the business will be able to deliver an even better experience for our readers. 
Real time data for everyone. One of the main features that we have in mind is enabling all people in Financial Times to have access to the data, without the need to have particular technical skills. In order to do that, we plan to enhance the Data UI and the stream processing platform to allow drag and drop for building streaming jobs. This would be a massive improvement because it will enable employees without a technical background to consume, transform, produce and analyse data. If working on challenging Big Data tasks is interesting to you, consider applying for a role in our Data team in the office in Sofia, Bulgaria. We are waiting for you!
https://medium.com/ft-product-technology/financial-times-data-platform-from-zero-to-hero-143156bffb1d
['Mihail Petkov']
2020-12-02 09:59:40.123000+00:00
['Financial Times', 'Analytics', 'Engineering', 'Big Data', 'Data']
[DS0001] — Linear Regression and Confidence Interval a Hands-On Tutorial
Motivation This tutorial will guide you through the creation of a linear regression model and a confidence interval for your predictor, using commonly used data science libraries such as scikit-learn and pandas. In our example case, linear regression is used to determine how many charging cycles a battery can withstand before it dies. Don't worry if you do not understand anything about batteries; all the data will be available for download, and the only knowledge required here is basic Python. Import what we need In order to use some already implemented tools, we need to import all the libraries and components. The next block of code imports pandas, NumPy, and some scikit-learn components that will allow us to read our data and create the linear regression model and our confidence interval. #!/usr/bin/env python3 from sklearn.linear_model import LinearRegression import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.stats import pearsonr from scipy import stats Loading the data In this tutorial, I will be using some data from my research about battery state-of-life estimation. Don't worry about the meaning of the data right now; it will not affect our results. Download the .csv file from here and paste it into the same folder as your main python file. After that, you can just load the file with pandas and read the column named "voltage_integral" from the file, as I do in the code below: my_pandas_file = pd.read_csv('cs_24.csv') y_data = my_pandas_file.get('voltage_integral') To create a linear regression, we will need another axis; in this case, our x-axis will be the index of our y_data vector. It's good to notice that our model will require a 2d array, so let's arrange it in the desired form using the reshape method. x_data = np.arange(0,len(y_data), 1) x_data_composed = x_data.reshape(-1,1) Creating our model When working with linear regression, it's usual to check whether there is a strong correlation between the variables. To see it, you must calculate the Pearson correlation coefficient and check it. If the correlation is near 1, it means that the variables have a strong positive correlation. If it's near -1, it means that the variables have a strong negative correlation, and if it's near 0, it means that the variables do not have a correlation and the linear regression will not help. Python does provide a tool to easily calculate the correlation: correlation = pearsonr(x_data, y_data) >> (-0.9057040954006549, 0.0) The value of -0.91 tells us that our data has a strong negative correlation, and as you can see on the graphic below, it means that when our x value increases, our y value decreases. To create our linear model, we just need to use our imported component and fit the model using the data imported from the file. After that, just to see how our model compares to the graphics, we will plot the predicted vector against our source data: lin_regression = LinearRegression().fit(x_data_composed, y_data) model_line = lin_regression.predict(x_data_composed) plt.plot(y_data) plt.plot(model_line) plt.xlabel('Cycles') plt.ylabel('Volts x seconds') plt.title('Voltage integral CCCT charge during battery life') plt.ylim(0,8000) After running the code, the graphic below will show up on your screen: Our model is already done and we already have our graphics. Now it's time to add more confidence to our prediction model by putting a confidence interval on the graphic. 
Calculate and plot our confidence interval A 95% confidence interval is a range of values within which our prediction has a 95% chance of falling. It is calculated based on the standard deviation and a Gaussian curve. We will create a function to calculate our confidence interval for a single sample and then run it for all predictions. def get_prediction_interval(prediction, y_test, test_predictions, pi=.95): ''' Get a prediction interval for a linear regression. INPUTS: - Single prediction, - y_test - All test set predictions, - Prediction interval threshold (default = .95) OUTPUT: - Prediction interval for single prediction ''' #get standard deviation of y_test sum_errs = np.sum((y_test - test_predictions)**2) stdev = np.sqrt(1 / (len(y_test) - 2) * sum_errs) #get interval from standard deviation one_minus_pi = 1 - pi ppf_lookup = 1 - (one_minus_pi / 2) z_score = stats.norm.ppf(ppf_lookup) interval = z_score * stdev #generate prediction interval lower and upper bound lower, upper = prediction - interval, prediction + interval return lower, prediction, upper ## Plot and save the 95% confidence interval of the linear regression lower_vet = [] upper_vet = [] for i in model_line: lower, prediction, upper = get_prediction_interval(i, y_data, model_line) lower_vet.append(lower) upper_vet.append(upper) plt.fill_between(np.arange(0,len(y_data),1),upper_vet, lower_vet, color='b',label='Confidence Interval') plt.plot(np.arange(0,len(y_data),1),y_data,color='orange',label='Real data') plt.plot(model_line,'k',label='Linear regression') plt.xlabel('Cycles') plt.ylabel('Volts x seconds') plt.title('95% confidence interval') plt.legend() plt.ylim(-1000,8000) plt.show() After running the code, the result will show up like this: So, this is how to create a linear regression and calculate the confidence interval from it. The data .csv file and the full code can be found here. If you like this story and would like to see more content like this in the future, please follow me! Thanks for your time, folks!
https://medium.com/swlh/ds001-linear-regression-and-confidence-interval-a-hands-on-tutorial-760658632d99
['Iago Henrique']
2020-11-28 19:05:56.970000+00:00
['AI', 'Data Science', 'Data Visualization', 'Python', 'Linear Regression']
Processing Big Data with a Micro-Service-Inspired Data Pipeline
You aren’t truly ready for a career in Big Data until you have everyone in the room cringing from the endless jargon you are throwing at them. Everyone in tech is always trying to out-impress one another with their impressive grasp of technical jargon. However, tech jargon does exist for a reason: it summarizes complex concepts into a simple narrative, and allows developers to abstract implementation details into design patterns which can be “mix-and-matched” to solve any technical task. With that in mind, let’s take a look at the technical tasks the Data Lab team was facing this year, and how we addressed them with an absurd quantity of geek speak. The Data Lab team at Hootsuite is designed to help the business make data-driven decisions. From an engineering standpoint, this means designing a data pipeline to manage and aggregate all our data from various sources (Product, Salesforce, Localytics, etc.) and make them available in Redshift for analysis by our Analysts. Analyses typically take the form of either a specific query used to answer a specific ad-hoc request, or a more permanent Dashboard designed to monitor key metrics. However, as Hootsuite grew, the Data Lab team became a bottleneck for data requests from stakeholders across the business. This led us to search for a way that would allow various decision makers to dig into our data on their own, without needing SQL knowledge. (Comic courtesy of Geek and Poke) Enter Interana. Interana is a real-time, time-indexed, interactive data analytics tool which would allow all of our employees to visualize and explore data themselves. Awesome, right?! Unfortunately, there was one little problem: we didn’t have the infrastructure for real-time data processing. Our pipeline only had support for a series of nightly ETLs, which were run by a cron job. Creating something from scratch is incredibly exciting. Finally, an opportunity to implement a solution using all of the jargon you’d like, without any of the technical debt! We laid out our goals, and chose the solution that best fit our needs. While analyzing the problem, I realized that the qualities we wanted our pipeline to have were the same qualities computer scientists have been striving to achieve for decades: abstraction, modularity, and robustness. What changed were the problems software engineers were facing, and the technologies which have been developed to provide modularity, robustness, and increased abstraction. It makes sense. We wouldn’t be able to create a real-time data pipeline by running our ETLs every second — we needed a different solution, which addressed these issues: (Some of our requirements) Enter micro-services. Micro-services are small applications that perform a single, specific service. They are often used in applications where each request can be delegated to a separate and complete application. What makes them fantastic to work with is that they abstract away the implementation details, and present only an interface comprising their data inputs and outputs. This means that as long as the interface remains the same, any modifications made in a service are guaranteed to be compatible with the system. In fact, one could safely replace one micro-service with another! With all of Hootsuite migrating towards breaking apart our monolith into a set of micro-services, the Data Lab team also wanted a slice of the fun. 
Wanting to move away from our monolith-like ETL codebase, we saw an opportunity to implement our real-time data pipeline using the best practices established by our Product brethren. A data pipeline has of course some inherently different requirements than a SaaS product does — so we needed to make a few changes to what a typical micro-service product looks like. Our micro-services: Behave more like workstations at an assembly line than independent services — that is, after processing its data it does not “respond” to its caller Have a dependency structure of an acyclic graph — we don’t want data circulating our pipeline forever! With those distinctions out of the way, let’s take a look at how we implemented our new data pipeline, and how it helped us achieve abstraction, modularity, and robustness. Above is an overview of our real-time data pipeline. We have a diverse set of sources for our data — some of them produce data in real time, while others do not. We built a micro-service to support batch-updated data. Each data source then gets put onto a data queue where our cleaner micro-services clean the data. This cleaned data then gets put into a common data format, and passed on to a “unified, cleaned” message queue, for our enricher to consume off of. This micro-service enriches our data by cross-referencing various fields with our data sets (and other micro-services!), and then uploads it into our data store. It sends a message into another message queue asking to have that data uploaded to our analytical data warehouse. Voila! A complete data pipeline. We were able to create a complete data pipeline which meets the three qualities we sought out at the beginning: abstraction, modularity, and robustness: It is abstract . Each service hides its implementation details, and reveals only what it consumes and what it outputs. . Each service hides its implementation details, and reveals only what it consumes and what it outputs. It is modular . Each micro-service can be reused and re-arranged without needing to refactor the entire system it resides in. . Each micro-service can be reused and re-arranged without needing to refactor the entire system it resides in. It is robust. New data sources can be easily added (just clone and update a cleaner/producer micro-service), and if one service fails, the rest of the pipeline can still operate correctly. Beyond those goals, we have also been able to achieve other desirable traits data-people look for: It is distributed . Each micro-service is run on a separate box, and may be consuming data from entirely different places. . Each micro-service is run on a separate box, and may be consuming data from entirely different places. It is scalable. We can always create more instances of each application to consume and process data in parallel to each other. Adding new data sources is easy. After all was said and done, we were able to cut processing times in half, had access to data sources we didn’t before, and have this all done in a system that is easy to understand and change. These tangible benefits were achieved using solutions found within the plethora of jargon being thrown around the data community. I hope that by this part of the post you’ve been numbed to the cringe-inducing effects which non-stop jargon invokes, and begun to see how they are used to describe (perhaps in an all too-colorful way) the tools and techniques we use to build a better way. Also, they’re great for SEO! 
About the Author Kamil Khan is a Co-op on the Data Lab team at Hootsuite, working as a Software Developer and Data Analyst. He is an undergraduate student at the University of British Columbia, where he is completing a Bachelor of Commerce at the Sauder School of Business, majoring in Business and Computer Science. Want to learn more? Connect with Kamil on LinkedIn.
https://medium.com/hootsuite-engineering/processing-big-data-with-a-micro-service-inspired-data-pipeline-1bb0159bc3d9
['Hootsuite Engineering']
2018-02-07 18:35:47.251000+00:00
['Microservices', 'Co Op', 'Data', 'Big Data']
A Classic Computer Vision Project — How to Add an Image Behind Objects in a Video
A Classic Computer Vision Project — How to Add an Image Behind Objects in a Video Introduction I was thrown a challenge by one of my colleagues — build a computer vision model that could insert any image in a video without distorting the moving object. This turned out to be quite an intriguing project and I had a blast working on it. Working with videos is notoriously difficult because of their dynamic nature. Unlike images, we don’t have static objects that we can easily identify and track. The complexity goes up several levels — and that’s where our hold on image processing and computer vision techniques comes to the fore. I decided to go with a logo in the background. The challenge, which I will elaborate on later, was to insert a logo in a way that wouldn’t impede the dynamic nature of the object in any given video. I used Python and OpenCV to build this computer vision system — and have shared my approach in this article. Table of Contents Understanding the Problem Statement Getting the Data for this Project Setting the Blueprint for our Computer Vision Project Implementing the Technique in Python — Let’s Add the Logo! Understanding the Problem Statement This is going to be quite an uncommon use case of computer vision. We will be embedding a logo in a video. Now you must be thinking — what’s the big deal in that? We can simply paste the logo on top of the video, right? However, that logo might just hide some interesting action in the video. What if the logo impedes the moving object in front? That doesn’t make a lot of sense and makes the editing look amateurish. Therefore, we have to figure out how we can add the logo somewhere in the background such that it doesn’t block the main action going on in the video. Check out the video below — the left half is the original video and the right half has the logo appearing on the wall behind the dancer: This is the idea we’ll be implementing in this article. Getting the Data for this Project I have taken this video from pexels.com, a website for free stock videos. As I mentioned earlier, our objective is to put a logo in the video such that it should appear behind a certain moving object. So, for the time being, we will use the logo of OpenCV itself. You can use any logo you want (perhaps your favorite sports team?). You can download both the video and the logo from here. Setting the Blueprint for our Computer Vision Project Let’s first understand the approach before we implement this project. To perform this task, we will take the help of image masking. Let me show you some illustrations to understand the technique. Let’s say we want to put a rectangle (fig 1) in an image (fig 2) in such a manner that the circle in the second image should appear on top of the rectangle: So, the desired outcome should look like this: However, it is not that straightforward. When we take the rectangle from Fig 1 and insert it in Fig 2, it will appear on top of the pink circle: This is not what we want. The circle should have been in front of the rectangle. So, let’s understand how we can solve this problem. These images are essentially arrays. The values of these arrays are the pixel values and every color has its own pixel value. So, we would somehow set the pixel values of the rectangle to 1 where it is supposed to be overlapping with the circle (in Fig 5), while leaving the rest of the pixel values of the rectangle as they are.
In Fig 6, the region enclosed by blue-dotted lines is the region where we would put the rectangle. Let’s denote this region by R. We would set all the pixel values of R to 1 as well. However, we would leave the pixel values of the entire pink circle unchanged: Our next step is to multiply the pixel values of the rectangle with the pixel values of R. Since multiplying any number by 1 results in that number itself, so all those pixel values of R that are 1 will be replaced by the pixels of the rectangle. Similarly, the pixel values of the rectangle that are 1 will be replaced by the pixels of Fig 6. The final output will turn out to be something like this: This is the technique we are going to use to embed the OpenCV logo behind the dancing guy in the video. Let’s do it! Implementing the Technique in Python — Let’s Add the Logo! You can use a Jupyter Notebook or any IDE of your choice and follow along. We will first import the necessary libraries. Import Libraries Note: The version of the OpenCV library used for this tutorial is 4.0.0. Load Images Next, we will specify the path to the working directory where the logo and video are kept. Please note that you are supposed to specify the “path” in the code snippet below: So, we have loaded the logo image and the first frame of the video. Now let’s look at the shape of these images or arrays: logo.shape, frame.shape Output: ((240, 195, 3), (1080, 1920, 3)) Both the outputs are 3-dimensional. The first dimension is the height of the image, the second dimension is the width of the image and the third dimension is the number of channels in the image, i.e., blue, green, and red. Now, let’s plot and see the logo and the first frame of the video: plt.imshow(logo) plt.show() plt.imshow(cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)) plt.show() Technique to Create Image Mask The frame size is much bigger than the logo. Therefore, we can place the logo at a number of places. However, placing the logo at the center of the frame seems perfect to me as most of the action will happen around that region in the video. So, we will put the logo in the frame as shown below: Don’t worry about the black background in the logo. We will set the pixel values in the black region to 1 later in the code. Now the problem we have to solve is that of dealing with the moving object appearing in the same region where we have placed the logo. As discussed earlier, we need to make the logo allow itself to be occluded by that moving object. Right now, the area where we will put the logo in has a wide range of pixel values. Ideally, all the pixel values should be the same in this area. So how can we do that? We will have to make the pixels of the wall enclosed by the green dotted box have the same value. We can do this with the help of HSV (hue, saturation, value) colorspace: Our image is in RGB colorspace. We will convert it into an HSV image. The image below is the HSV version: The next step is to find the range of the HSV values of only the part that is inside the green dotted box. It turns out that most of the pixels in the box range from [6, 10, 68] to [30, 36, 122]. These are the lower and upper HSV ranges, respectively. Now using this range of HSV values, we can create a binary mask. This mask is nothing but an image with pixel values of either 0 or 255. So, the pixels falling in the upper and lower range of the HSV values will be equal to 255 and the rest of the pixels will be 0. Given below is the mask prepared from the HSV image. 
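Pulling these steps together, here is a rough sketch of how the whole thing can be wired up with OpenCV. This is not the article’s original snippet: the file names, frame rate, and logo placement are assumptions, the HSV bounds are simply the ones quoted above, and the per-pixel selection is done with np.where rather than the multiplication trick described earlier (the visual effect is the same).

import cv2
import numpy as np

cap = cv2.VideoCapture("dance_video.mp4")   # hypothetical input path
logo = cv2.imread("opencv_logo.png")        # hypothetical logo path
lower = np.array([6, 10, 68])               # lower HSV bound quoted above
upper = np.array([30, 36, 122])             # upper HSV bound quoted above
writer = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    h, w = frame.shape[:2]
    if writer is None:
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("output.mp4", fourcc, 25, (w, h))  # fps assumed

    # Paste the logo onto a copy of the frame, roughly at the centre,
    # skipping the logo's black background pixels.
    lh, lw = logo.shape[:2]
    y0, x0 = (h - lh) // 2, (w - lw) // 2
    pasted = frame.copy()
    roi = pasted[y0:y0 + lh, x0:x0 + lw]
    logo_fg = np.any(logo > 0, axis=2)
    roi[logo_fg] = logo[logo_fg]

    # Binary mask of "wall-coloured" pixels from the HSV range.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    mask3 = cv2.merge([mask, mask, mask])

    # Keep the logo only where the wall is visible; elsewhere keep the
    # original frame, so the dancer stays in front of the logo.
    result = np.where(mask3 == 255, pasted, frame)
    writer.write(result)

cap.release()
if writer is not None:
    writer.release()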
All the pixels in the yellow region have a pixel value of 255 and the rest have a pixel value of 0: Now we can easily set the pixel values inside the green dotted box to 1 as and when required. Let’s go back to the code: The code snippet above will load the frames from the video, pre-process them, create the HSV images and masks, and finally insert the logo into the video. And there you have it! End Notes In this article, we covered a very interesting use case of computer vision and implemented it from scratch. In the process, we also learned about working with image arrays and how to create masks from these arrays. This is something that would help you when you work on other computer vision tasks. Feel free to reach out to me if you have any doubts or feedback to share. I would be glad to help you. Feel free to reach out to me at [email protected] for 1–1 discussions.
https://medium.com/swlh/a-classic-computer-vision-project-how-to-add-an-image-behind-objects-in-a-video-b0ac8d7b2173
['Prateek Joshi']
2020-10-18 14:12:55.446000+00:00
['Data Science', 'Object Detection', 'Python', 'Opencv', 'Computer Vision']
Regular Expressions in JavaScript: An Introduction
Regular Expressions in JavaScript: An Introduction How to use Regex in JavaScript to validate and format strings Regex in JavaScript: Your strings won’t know what hit them JavaScript’s implementation of Regex is useful for a range of string validation, formatting and iteration techniques. This article acts as an introduction to using regular expressions in JavaScript, touching on useful ways to use them, in addition to exploring some of the cryptic-like syntax that regular expressions entail. Rather than attempting to be a comprehensive guide for all Regex features, this piece instead focuses on super-useful concepts and real-world examples to get you started using Regex in your JavaScript apps. Regular expressions are notably hard to read as they gain in complexity, so the developer needs some knowledge of Regex syntax to know what is being tested. One can summarise regular expressions as patterns used to match character combinations in strings. JavaScript supports regular expressions in a range of its native APIs such as match , matchAll , replace , among others, for testing a string against the defined pattern via the regular expression. Perhaps the most basic type of support is the RegExp object, which tests whether a pattern is present within a string with its built-in methods such as exec() and test() . To demonstrate Regex in its simplest form, we can use test() , which returns a boolean, to check whether a particular substring pattern is present within another string. This is how we’d test whether the string word is present within a string — there are actually two ways we can do this, here’s the first: // the simplest use case of Regex: substring testing const str = "How many words will this article have?"; const result = new RegExp('word').test(str); JavaScript also recognises a regular expression simply by wrapping it in forward slashes: const result2 = /word/.test(str); This cleaner syntax will be used for the rest of this article. This simple use case of Regex is useful for testing form values or validating other unknown data. You could indeed use this method as a simple way to test things like secure URLs, where you expect https:// within the string. However, as we all know, a valid URL has a few more rules than this — a domain suffix, an optional www. , a lack of whitespace, support for a limited set of special characters, etc. This is where Regex shines — it can check all these attributes in one regular expression, or “pattern”, being able to test very complex strings that have an arbitrary number of patterns within them (we will sketch such a pattern below). Matching one of several Characters with Character Classes The English language has quite a few arbitrary words with the potential to be a nightmare for form validation — if it were not for regular expressions. Take the word color that is also correctly spelled as colour, or adapter and adaptor, ambience and ambiance — the list continues. This is where Character Classes, also termed Character Sets, come in handy with Regex. They are defined with square brackets enclosing a range of acceptable characters for that position. Let’s take ambience and ambiance — this is how we’d test both words: const str = "This office has a pleasant ambience"; const result = /ambi[ae]nce/.test(str); The above character class accepts either an e or a as the 5th character of the tested string.
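Before moving on, it is worth making that earlier “secure URL” teaser concrete. The pattern below is a deliberately simplified sketch rather than a production-grade validator, and it previews a few pieces of syntax covered later in this article (the optional ? operator and the i flag) plus the ^ and $ anchors, which pin the pattern to the start and end of the string:

// a rough sketch of the "secure URL" idea from above --
// simplified on purpose, not a complete URL validator
const secureUrl = /^https:\/\/(www\.)?[a-z0-9-]+(\.[a-z0-9-]+)+[^\s]*$/i;

console.log(secureUrl.test("https://www.example.com/path")); // true
console.log(secureUrl.test("http://example.com"));           // false: not https
console.log(secureUrl.test("https://bad domain.com"));       // false: contains whitespace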
Testing for optional characters Testing color and colour is slightly different — there is actually an additional optional character, being the u. Consider the following regular expression, which checks for the optional character: const str = "How many colours are there in a rainbow?"; const result = /colou?r/.test(str); Notice the ? after the u character — this introduces the first operator of this article. The ? operator declares a character or group of characters as optional in the defined pattern. colour is indeed present within str , and test() will return true . Take a scenario where we need to test a string representing a month, that could be displayed in short-form or long-form, such as Jan or January. Both are valid months, but the uary characters can be omitted in the short form. To test this, we can wrap multiple characters in parentheses, also termed a Capturing Group, and make the entire group optional. const str = "January 3rd is my birthday?"; const result = /[Jj]an(uary)?/.test(str); Note that we’re accepting both upper and lower case j, and have declared an optional capturing group with (uary)? . There are more efficient ways to test character cases that we’ll discover further down. We can also test a range of characters in a character class. A dash (-) between two characters defines a range. For example, you may wish to validate a hexadecimal string, such as when you have a visual editor in the browser to toggle colours. Check out the following regular expression to do this, which introduces more syntax to our Regex endeavours: const str = "ff0000"; //red const result = /[0-9A-F]{6}/i.test(str); The ranges of 0–9 and A-F are searched, along with curly braces with a 6 in-between. {6} here declares that the preceding character class must match exactly 6 times in a row within the string being tested. This makes sense, as a hexadecimal value is 6 characters in length. Also of interest is the i character included at the end of the regular expression, after the closing forward slash — what is this? Introducing Flags i is one of several flags available to use at the end of a regular expression. The i flag makes the search case-insensitive, so both upper case and lower case A-F are searched. Our character class searches for the range of A-F, but with the i flag, a-f is also searched. There is no need to define both with [0-9A-Fa-f] . Another commonly used flag is g , which searches for all matches within a string. Up until now, we have only explored single matching regular expressions. Moving forward, we will want to search all matches within a string, making Regex a much more powerful concept for processing larger bodies of text. Negating Character Classes with ^ We can also define a Character Class containing characters we do not wish to match within a string. If we took the above example and did not wish to match hexadecimal strings, the caret (^) can be placed at the beginning of the Character Class: const result = /[^0-9A-F]{6}/i.test(str); The test will now return false for our hexadecimal string, since none of its characters fall outside the negated range. Negated Character Classes are effective for defining what you don’t want to appear within a pattern, and may come in handy in forms of validation, such as testing for sensitive words and phrases in a user-submitted comment. Before we continue, let’s recap the terms explored so far: Character Classes / Character Sets: Using the square brackets to define a range of possible characters in an expression.
Operators: such as ? for optional characters within a regular expression. Capturing Groups: defined with parentheses, allowing us to test a group of characters, or a subset of a string. Flags: “global” level configurations that manipulate the Regex search in some way. The next set of examples will level up our understanding of Regex, using them in a global fashion to tackle more real-world problems.
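As a small taste of what the g flag unlocks before we get there, here is one more illustration; the post text and the hashtag pattern are made up purely for demonstration:

// with the g flag, match() returns every match rather than just the first one
const post = "Loving the new #JavaScript features! #regex #WebDev";
const tags = post.match(/#[a-z0-9_]+/gi);
console.log(tags); // [ '#JavaScript', '#regex', '#WebDev' ]

// the same flag powers a global replace
console.log(post.replace(/#[a-z0-9_]+/gi, "#tag"));
// "Loving the new #tag features! #tag #tag"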
https://rossbulat.medium.com/regular-expressions-in-javascript-an-introduction-94a40dce46a2
['Ross Bulat']
2020-03-06 15:58:39.785000+00:00
['JavaScript', 'Software Development', 'Software Engineering', 'Development', 'Programming']
The Future Factor
The Future Factor Tarot, time, and the mind . . . Photo by Santiago Lacarta on Unsplash Divination offers the promise of peering beyond our present illusions, perhaps into a timeless reality that is continually unfolding before us. But the reasoning mind has many questions . . . Does “the future” exist? In what sense? Is it already defined, or can we change it? Do we ever really see ahead in time — or is it just a trick of the imagination? Divination and the Direction of Time In a general sense, the verb “to divine” means to produce information that would otherwise be hidden. More specifically, it means to learn the will of the gods. Hence the etymological bond of “divination” and “divinity.” The information produced can be about anything, and it can be drawn from any point in time — past, present, or future. A “divining rod,” for example, discovers water or other things presently buried. And divination is frequently used in traditional cultures to discover who did something in the past or what is currently afflicting a sick person. Which is all quite useful. But the “future factor” is what really fascinates us — and may set divination apart from the many other ways of human knowing. After all, things that have already happened or are currently happening produce information that’s available in ordinary as well as extraordinary ways. Mysteries of the past and present may be solved by gathering clues and making deductions, because the information exists in some literal way; what happened did happen, what is happening is happening. These things existed at some point in time. But as far as can be told, what has yet to happen does not exist and never has. Therefore we can’t find out about it in any of our ordinary ways. Though we might guess or bet or predict or project — we cannot know, because there is nothing to know. Or is there? It’s true that we believe the future doesn’t “exist”; but why do we believe that? In the first place, there’s the evidence of our senses. And here again, language leaves clues. We “remember” the past, we “perceive” the present, but we don’t “________” the future. There’s a blank there because we don’t have a common word for future-knowing — and we don’t have a word for it because we don’t commonly experience it. The Newtonian world view, which is based on our senses and our reasoning capacity, naturally tells us that “causes” must precede “effects” and closed systems always tend toward disorder (that is, things get older but never younger, things break but never get unbroken, and so on). These are the principal explanations of why time appears to unfold from past to future. But from a post-Newtonian perspective, information is often wildly opposed to sense data. For example — in the quantum world, cause-and-effect doesn’t necessarily apply, and time isn’t necessarily linear. Because our systems, processes, and technologies are still based almost entirely on a mechanical, Newtonian interpretation of the world, we haven’t progressed much in our ability to relate to the future. In fact, this is one of the few areas in which we have no new technologies — or even any “promising developments.” As it works out, we now have (reasonably) reliable, (mostly) mechanical ways of doing all those past and present knowledge-things for which divination was once employed. For example, we have science-based tools for finding water, diagnosing illness, or solving crimes. And therefore we don’t have a practical, everyday need for divining rods and shamanic rituals. 
But when it comes to determining future events, we haven’t any better tools than the Maya did, or the Homeric Greeks, or the ancient Chinese — all of whom employed what we now call divination. Divination and the Nature of Mind There have been many efforts to validate the possibility of future-knowing (precognition) through experimentation and theoretical constructs, but so far, a scientific approach hasn’t brought us much insight on this subject. Since there are many things science doesn’t yet understand — or in some cases, has had to dramatically re-understand — the fact that there’s no scientific evidence of precognition is not dispositive. It may just be that our science hasn’t yet achieved a basis for understanding certain phenomena. From that perspective . . . attempting to develop explanations of future-knowing in terms of known constructs may be an inadequate, even counter-productive activity. So where should we be looking? One direction has been psychology, especially as viewed from a Jungian perspective. Lama Chime Radha, Rinpoche, then head of the Tibetan Section of the British Library, offered this observation in an article on divination in traditional Tibet: From the “scientific” point of view it would of course be possible and even necessary to explain away the belief in divination and other magical operations as mere superstitions having no correspondence with objective reality, and of relevance only to the social anthropologist. More sympathetic explanations might invoke the concept of synchronicity, the interconnectedness of all objects and events in space and time, whereby in states of heightened awareness it becomes possible “to see a world in a grain of sand and a heaven in a wild flower.” Or one could hypothesize that the external apparatus of divination, whether it is a crystal ball, the pattern of cracks in a tortoise shell, or a complex system of astrology, is essentially a means of focusing and concentrating the conscious mind so that insights and revelations may arise (or descend) from the profounder and perhaps supra-individual levels of the unconscious. [1] All of these approaches — the scientific vision of space-time and the related hypothesis of synchronicity, the speculations of parapsychology and transpersonal psychology — are intriguing. But as Lama Radha points out, such attempts at explanation may still fall very far short of correctly connecting mind and reality: The Tibetans themselves would certainly regard the visions and predictions of seers and diviners as mind-created, but then in accordance with Buddhist philosophy so they would regard everything that is experienced either subjectively or objectively, including entities of such seemingly varied degrees of solidity and independent existence as mountains, trees, other beings, sub-atomic particles and waves. Such an in fact continuity between mind and world, consciousness and created reality, is still by no means scientifically accepted or even widely entertained — much less authentically experienced by most of us. For the most part, speculation along these lines has been confined to some few scientists with a philosophical bent and/or an acquaintance with mystical experience. And so the science of space-time and the new model of consciousness that might issue from it remains very abstract. [2] But Eastern philosophy, which has been investigating the space-time continuum for more than a millennium, can bring the concept much closer to our own experience and our own embodiment. 
As Peter Barth explains in Piercing the Autumn Sky: A Guide to Discovering the Natural Freedom of Mind — his delightful guide to Tibetan Buddhist mind training: Exploring the nature of time and space more directly, as present in our lives, we may begin to discover the vastness of time and space itself, the vastness of our human awareness. We may note the sameness of each moment, or each millionth of a moment, in the sense that each “piece” of time or space contains the complete nature of all of time and space. We are endowed with space and time itself in the fabric of our being. [3] In other words, all that is (or was/will be) is in us. We perceive differences between time and space, then and now, thought and matter, “me” and “it” not because such things are in fact separate, but because we are conditioned (physically and mentally) to construct the world in a certain way. At our present level of evolution, it is very difficult for most people to transcend these limitations of perception for any length of time. Although psychoactive drugs and certain techniques for achieving ecstatic trance can produce temporary suspensions of our habitual perception, an effortful, sustained pursuit of spiritual discipline or mind training is needed to bring about more lasting alterations. As Barth explains, with careful practice, rather [than seeing time as] a linear road that we are on, we may discover what can be called vast time, a time which is inherent in everything, as eternal, unimpeded dynamism; a source of unlimited energy. By getting to know this aspect of our minds, by attending to the dynamic nature of our experience directly, we can actually begin to enter the dance of vast time itself, with no space between ‘us’ and ‘time.’ The fabrications of ‘past,’ ‘present,’ and ‘future’ places and selves begin to loosen their grip on us. Experientially, we realize that the past and future are only projections of our thoughts, while the present remains an indeterminate state that cannot be pinned down. The mind-training disciplines taught by Tibetan Buddhism and a few other traditions can eventually produce this expanded relationship with time. But we don’t all have the leisure or the temperament to pursue these practices intensively (at least in this lifetime). Work with Tarot, however, can be a surprisingly effective way for almost anyone to bring something of this experience into her or his life. My own experience suggests that the ability to sense something of the future is frequently an aspect of being entirely in the present. A complete, serene, and unselfconscious engagement with the given moment (such as may be experienced during a Tarot reading) actually frees the mind from habitual projections into the future and allows the future to reveal itself. The more we cultivate a deep, fluent command of the cards, the more likely we are to find awareness growing beyond the present. There are also ways to improve concentration — one of several benefits that Tarot practitioners can derive from meditation. So I’ll be writing soon about four approaches to meditation, how they resonate with the four suits of Tarot, and how to choose a path that will expand your sense of time.
https://medium.com/tarot-a-textual-project/the-future-factor-645e771907ab
['Cynthia Giles']
2020-11-11 01:14:19.877000+00:00
['Meditation', 'Spirituality', 'Creativity', 'Psychology', 'Tarot']
Getting to Know the Mel Spectrogram
Read this short post if you want to be like Neo and know all about the Mel Spectrogram! (OK, maybe not all, but at least a little) For the tl;dr and full code, go here. A Real Conversation That Happened in My Head a Few Days Ago Me: Hi Mel Spectrogram, may I call you Mel? Mel: Sure. Me: Thanks. So Mel, when we first met, you were quite the enigma to me. Mel: Really? How’s that? Me: You are composed of two concepts whose whole purpose is to make abstract notions accessible to humans - the Mel Scale and the Spectrogram - yet you yourself were quite difficult for me, a human, to understand. Mel: Is there a point to this one-sided speech? Me: And do you know what bothered me even more? I heard through the grapevine that you are quite the buzzz in DSP (Digital Signal Processing), yet I found very little intuitive information about you online. Mel: Should I feel bad for you? Me: So anyway, I didn’t want to let you be misunderstood, so I decided to write about you. Mel: Gee. That’s actually kinda nice. Hope more people will get me now. Me: With pleasure my friend. I think we can talk about what your core elements are, and then show some nice tricks using the librosa package in Python. Mel: Oooh that’s great! I love librosa! It can generate me with one line of code! Me: Wonderful! And let’s use this beautiful whale song as our toy example throughout this post! What do you think? Mel: You know you’re talking to yourself right? The Spectrogram Visualizing sound is kind of a trippy concept. There are some mesmerizing ways to do that, and also more mathematical ones, which we will explore in this post. Photo credit: Chelsea Davis. See more of this beautiful artwork here. When we talk about sound, we generally talk about a sequence of vibrations in varying pressure strengths, so to visualize sound kinda means to visualize airwaves. But this is just a two-dimensional representation of this complex and rich whale song! Another mathematical representation of sound is the Fourier Transform. Without going into too many details (watch this educational video for a comprehensible explanation), the Fourier Transform is a function that gets a signal in the time domain as input, and outputs its decomposition into frequencies. Let’s take for example one short time window and see what we get from applying the Fourier Transform. Now let’s take the complete whale song, separate it into time windows, and apply the Fourier Transform on each time window. Wow, can’t see much here, can we? It’s because most sounds humans hear are concentrated in very small frequency and amplitude ranges. Let’s make another small adjustment - transform both the y-axis (frequency) to log scale, and the “color” axis (amplitude) to Decibels, which is kinda the log scale of amplitudes. Now this is what we call a Spectrogram! The Mel Scale Let’s forget for a moment about all these lovely visualizations and talk math. The Mel Scale, mathematically speaking, is the result of some non-linear transformation of the frequency scale. This Mel Scale is constructed such that sounds of equal distance from each other on the Mel Scale also “sound” to humans as if they are equal in distance from one another. This is in contrast to the Hz scale, where the difference between 500 and 1000 Hz is obvious, whereas the difference between 7500 and 8000 Hz is barely noticeable. Luckily, someone computed this non-linear transformation for us, and all we need to do to apply it is use the appropriate command from librosa. Yup. That’s it. But what does this give us?
It partitions the Hz scale into bins, and transforms each bin into a corresponding bin in the Mel Scale, using overlapping triangular filters.
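For completeness, here is a minimal sketch of how the pieces above fit together in librosa. The file name is a placeholder and the parameter choices (default sample rate, 128 mel bins) are assumptions rather than the post’s exact settings:

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# load the audio -- "whale_song.wav" is a placeholder path
y, sr = librosa.load("whale_song.wav")

# the plain spectrogram shown earlier: short-time Fourier transform,
# with amplitudes converted to decibels
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
# (D could be displayed the same way as S_db below to reproduce that plot)

# the Mel Spectrogram itself -- essentially the promised one-liner,
# plus a conversion of the power values to decibels
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)

librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="mel")
plt.title("Mel Spectrogram")
plt.show()

If you want to look at those overlapping triangular filters directly, the filter bank itself is exposed through librosa.filters.mel.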
https://towardsdatascience.com/getting-to-know-the-mel-spectrogram-31bca3e2d9d0
['Dalya Gartzman']
2020-05-09 20:19:17.674000+00:00
['Python', 'Programming', 'Music', 'Audio', 'Data Science']
Exception Handling in Java Streams
Unchecked Exceptions Let’s take an example use of streams: You are given a list of strings and you want to convert them all to integers. To achieve that, we can do something simple like this: List<String> integers = Arrays.asList("44", "373", "145"); integers.forEach(str -> System.out.println(Integer.parseInt(str))); The above snippet will work perfectly, but what happens if we modify the input to contain an illegal string, say "xyz" . The method parseInt() will throw a NumberFormatException , which is a type of unchecked exception. A naive solution, one that is typically seen is to wrap the call in a try/catch block and handle it. That would look like this: List<String> integers = Arrays.asList("44", "373", "xyz", "145"); integers.forEach(str -> { try { System.out.println(Integer.parseInt(str)); }catch (NumberFormatException ex) { System.err.println("Can't format this string"); } } ); While this works, this defeats the purpose of writing small lambdas to make code readable and less verbose. The solution that comes to mind is to wrap the lambda around another lambda that does the exception handling for you, but that is basically just moving the exception handling code somewhere else: static Consumer<String> exceptionHandledConsumer(Consumer<String> unhandledConsumer) { return obj -> { try { unhandledConsumer.accept(obj); } catch (NumberFormatException e) { System.err.println( "Can't format this string"); } }; } public static void main(String[] args) { List<String> integers = Arrays.asList("44", "xyz", "145"); integers.forEach(exceptionHandledConsumer(str -> System.out.println(Integer.parseInt(str)))); } The above solution can be made much, much better by using generics. Let’s build a generic exception handled consumer that can handle all kinds of exceptions. We will, then, be able to use it for many different use cases within our application. We can make use of the above code to build out our generic implementation. I will not go into the details of how generics work, but a good implementation would look like this: static <Target, ExObj extends Exception> Consumer<Target> handledConsumer(Consumer<Target> targetConsumer, Class<ExObj> exceptionClazz) { return obj -> { try { targetConsumer.accept(obj); } catch (Exception ex) { try { ExObj exCast = exceptionClazz.cast(ex); System.err.println( "Exception occured : " + exCast.getMessage()); } catch (ClassCastException ccEx) { throw ex; } } }; } As you can see, this new consumer is not bound to any particular type of object it consumes and accepts the type of Exception your code might throw as a parameter. We can now, simply use the handledConsumer method to build our consumers. The code for parsing our list of Strings to Integers will now be this: List<String> integers = Arrays.asList("44", "373", "xyz", "145"); integers.forEach( handledConsumer(str -> System.out.println(Integer.parseInt(str)), NumberFormatException.class)); If you have a different block of code that may throw a different exception, you can just reuse the above method. For example, the code below takes care of ArithmeticException due to a divide by zero.
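The snippet referenced in that last sentence isn’t shown above. Reusing the handledConsumer helper defined earlier (repeated here so the example compiles on its own), a version matching that description might look like this; the list values are arbitrary:

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class DivisionExample {

    // the same generic helper defined above
    static <Target, ExObj extends Exception> Consumer<Target> handledConsumer(
            Consumer<Target> targetConsumer, Class<ExObj> exceptionClazz) {
        return obj -> {
            try {
                targetConsumer.accept(obj);
            } catch (Exception ex) {
                try {
                    ExObj exCast = exceptionClazz.cast(ex);
                    System.err.println("Exception occurred : " + exCast.getMessage());
                } catch (ClassCastException ccEx) {
                    throw ex;
                }
            }
        };
    }

    public static void main(String[] args) {
        List<Integer> divisors = Arrays.asList(4, 0, 5);

        // 100 / 0 throws ArithmeticException; the wrapper logs it and the
        // remaining elements are still processed
        divisors.forEach(handledConsumer(
                d -> System.out.println(100 / d),
                ArithmeticException.class));
    }
}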
https://medium.com/swlh/exception-handling-in-java-streams-5947e48f671c
['Arindam Roy']
2019-09-02 12:19:51.740000+00:00
['Software Development', 'Programming', 'Software Engineering', 'Java']