8 AI-driven Creative Apps for You and Your Kids
https://medium.com/merzazine/8-ai-driven-creative-apps-for-you-and-your-kids-ebc475652740
['Vlad Alex', 'Merzmensch']
2020-07-09 08:37:36.566000+00:00
['AI', 'Artificial Intelligence', 'Merznlp', 'Published Tds', 'Art']
Getting Started With TensorFlow in Angular
Polynomial Regression using TensorFlow JS, Typescript, and Angular Version 10 Introduction AI/ML (Artificial Intelligence/Machine Learning) is a hot topic and it’s only natural for Angular developers to want to ‘get in on the action,’ if only to try something new and fun. While the general concepts behind neural networks are intuitive, developers looking for an organized introduction are often suffocated with jargon, complex APIs, and unfamiliar math concepts just from a few web searches. This article provides a simple introduction on how to use TensorFlow.js to solve a simple regression problem using Typescript and Angular version 10. Regression and Classification Regression and classification are two important types of problems that are often solved with ML techniques. Regression is a process of ‘fitting.’ A functional relationship between independent and dependent variables is presumed. The function exposes a number of parameters whose selection uniquely determines a fit. A quality-of-fit metric and functional representation are chosen in advance. In many cases, the desire is to fit some smooth and relatively simple curve to a data set. The function is used to predict future values in lieu of making ‘guesses’ based on the original data. Classification involves selecting the ‘best’ output among a number of pre-defined ‘classes.’ This process is often used on images and answers questions such as Is this an image of a bird? Does this image contain clouds? Does this image contain grass? Is this image the Angular logo? ML techniques are also used to solve important problems where a set of inputs are mapped to a set of outputs and the functional relationship between the inputs and outputs is not known. In such cases, any functional relationship is likely to be discrete (or mixed discrete/continuous), nonlinear, and likely not closed-form. Ugh. That’s a fancy way of saying that we don’t even want to think about a mathematical model for the process :) A neural network is used to create an approximation for the problem based on some sort of scoring metric, i.e. a measure of one solution being better or worse than another solution. Two Dimensional Data Fitting By Regression Let’s start with a simple, but common problem. We are given a collection of (x, y) data points in two dimensions. The total number of points is expected to be less than 100. Some functional relationship, i.e. y = f(x), is presumed, but an exact relationship is considered either intractable or inefficient for future use. Instead, a simpler function is used as an approximation to the original data. The desire is to fit a small-order polynomial to this data so that the polynomial may be used as a predictor for future values, i.e. y-estimated = p(x), where p represents a k-th order polynomial, p(x) = a0 + a1*x + a2*x² + a3*x³ + … where a0, a1, a2, … are the polynomial coefficients (Medium does not appear to support subscripting). A k-th order polynomial requires k+1 coefficients in order to be completely defined. For example, a line requires two coefficients. A quadratic curve requires three coefficients, and a cubic curve requires four coefficients. The polynomial for this discussion is a cubic, which requires four coefficients for a complete definition. Four equations involving the polynomial coefficients are required to uniquely compute their values. These equations would typically be derived from four unique points through which the polynomial passes. Instead, we are given more than four data points, possibly as many as 100. 
For each point, substitute the value of x into the equation p(x) = a0 + a1*x + a2*x² + a3*x³. For N points, this process yields N equations in 4 unknowns. N is likely to be much greater than 4, so more data is provided than is needed to compute a unique set of coefficients. In fact, there is no unique solution to this problem. Such problems are often called overdetermined. What do we do? Do we throw away data points and only choose four out of the supplied set? We could take all possible combinations of four data points and generate a single cubic polynomial for each set. Each polynomial would interpolate (pass through) the chosen four points exactly, but would appear different in terms of how well it ‘fit’ the remaining data points. In terms of the approximating polynomial, are we interested only in interpolation or both interpolation and extrapolation? Interpolation refers to using the polynomial to make predictions inside the domain of the original data points. For example, suppose the x-coordinates (when sorted in ascending order) all lie in the interval [-5, 10]. Using a polynomial function to interpolate data implies that all future x-coordinate values will be greater than or equal to -5 and less than or equal to 10. Extrapolation implies some future x-coordinate values less than -5 or greater than 10. The polynomial will be used to make predictions for these coordinate values. In general, performance of a predictor outside the interval of original data values is of high interest, so we are almost always interested in extrapolation. And, if we have multiple means to ‘fit’ a simple function to a set of data points, how do we compare one fit to another? If comparison of fit is possible, is there such a thing as a best-possible fit? Classical Least Squares (CLS) The classical method of least squares defines the sum of squares of the residuals to be the metric by which one fit is judged to be better or worse than another. Now, what in the world does that mean to a developer? Residual is simply a fancy name given to the difference between a predicted and actual data value. For example, consider the set of points (0, 0), (1, 3), (2, 1), (3, 6), (4, 2), (5, 8) and the straight-line predictor y = x + 1 (a first-order or first-degree polynomial). The x-coordinates cover the interval [0, 5] and the predicted values at each of the original x-coordinates are 1, 2, 3, 4, 5, and 6. Compute residuals as the difference between predicted and actual y-coordinate. This yields a vector, [1-0, 2-3, 3-1, 4-6, 5-2, 6-8] or [1, -1, 2, -2, 3, -2] As is generally the case, some residuals are positive and others are negative. The magnitude of the residual is more important than whether the predictor is higher or lower than the actual value. Absolute value, however, is not mathematically convenient. Instead, the residuals are squared in order to produce a consistent, positive value. In the above example, the vector of squared residuals is [1, 1, 4, 1, 9, 4]. Two common metrics to differentiate the quality of predictors are the sum of the squared residuals and the mean-squared residual. The former simply sums all the squares of the residuals. The latter metric computes the mean value of all squared residuals, or an average error. The terms residual and error are often used interchangeably. The Classical Least Squares algorithm formulates a set of polynomial coefficients that minimizes the sum of the squared residuals. This results in an optimization problem that can be solved using techniques from calculus. 
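To make these metrics concrete, here is a small TypeScript sketch (illustrative only, not part of the article's libraries) that computes the residuals, the sum of squared residuals, and the mean-squared residual for the sample points and the straight-line predictor y = x + 1 discussed above.

```typescript
// Sample data points and the straight-line predictor y = x + 1 from the example above
const xData: number[] = [0, 1, 2, 3, 4, 5];
const yData: number[] = [0, 3, 1, 6, 2, 8];

const predict = (x: number): number => x + 1;

// Residual: predicted value minus actual value at each x-coordinate
const residuals: number[] = xData.map((x, i) => predict(x) - yData[i]);  // [1, -1, 2, -2, 3, -2]

// Square each residual to obtain a consistent, positive value
const squared: number[] = residuals.map((r) => r * r);                   // [1, 1, 4, 1, 9, 4]

// The two quality-of-fit metrics discussed above
const sumOfSquares: number = squared.reduce((acc, s) => acc + s, 0);     // 20
const meanSquared: number = sumOfSquares / squared.length;               // 3.33...

console.log(residuals, sumOfSquares, meanSquared);
```

Classical least squares searches over the polynomial coefficients for the choice that minimizes this sum of squares.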
For those interested, this algorithm is heavily documented online, and this page is one of many good summaries. When formulated with normal equations, polynomial least squares can be solved with a symmetric linear equation solver. For small-degree polynomials, a general dense solver can also be used. Note that the terms order and degree are often used interchangeably. A fifth-degree polynomial, for example, has no term higher than x⁵. As an aside, the normal equations formulation is important as it avoids having to solve a linear system of equations with a coefficient matrix that is a Vandermonde matrix. Empirical evidence shows these matrices to be notoriously ill-conditioned (with the most notable exception being the Discrete Fourier Transform). In general, it is a good idea to keep the order of the polynomial small because higher-degree polynomials have more inflection points and tend to fluctuate quite a bit up and down. Personally, I have never used this technique in practice on more than a couple-hundred data points and no more than a fifth-degree polynomial. Now, you may be wanting to experiment with CLS, but find the math pretty intimidating. Never fear, because we have a tried and true method for handling that pesky math. Here it goes … Blah, blah … matrix … blah, blah … least squares … blah, blah … API. There! It’s all done for you. Just click on this link and grab all the Typescript code you desire. Typescript libraries are provided for linear and polynomial least squares with multiple variants for linear least squares. This code base is suitable for fitting dozens or even hundreds of data points with small-degree polynomials. Again, I personally recommend never using more than a fifth-degree polynomial. Classical least squares is a good technique in that it provides a proven optimal solution for the sum of the squared residuals metric. There is no other solution that produces a smaller sum of squared residuals inside the interval of the fitted data set. So, CLS is useful for interpolation, i.e. we expect to make predictions for future x-coordinates inside the interval of the original data set. It may or may not be useful for extrapolation. This long introduction now leads up to the problem at hand, namely, can we use ML techniques for the cubic polynomial fit problem, and how does it compare to CLS? This leads us into TensorFlow and neural networks. What Are Tensors? Tensors are simply multi-dimensional arrays of a specified data type. In fact, if you read only one section of the massive TensorFlow documentation, then make sure it’s this one. Many of the computations in neural networks occur across dimensions of a multi-dimensional array structure, and such operations can be readily transformed to execute on a GPU. This makes the tensor structure a powerful one for ML computations. Neural Networks 101 In a VERY simplistic sense, neural networks expose an input layer where one input is mapped to one ‘neuron.’ One or more hidden layers are defined, with one output from a single neuron to all other neurons in the subsequent layer. Each of these outputs is assigned a weight through a learning or training process. The final hidden layer is connected to an output layer, which is responsible for exposing a solution (fit, extrapolation, control action, etc.) given a specific input set. The network must be trained on a sample set of inputs, and it is generally validated on another data set that is separate from the training set. 
The training process involves setting weights along the paths that connect one neuron to another. Weights are adjusted based on a loss function or metric that provides a criterion for measuring one candidate solution vs. another solution. The training process also involves selection of an optimization method and a learning rate. The learning rate is important since the learning process is iterative. Imagine being at the top of a rocky mountain range with a desire to traverse to the bottom as quickly as possible. There is no direct line of sight to an optimal path to the bottom. At best, we can examine the local terrain and move a certain distance in what appears to be the best direction. After arriving at a new point, the process is repeated. There is, however, no guarantee that the selected sequence of moves will actually make it to the ground. Backtracking may be necessary since the terrain is very complex. I experienced this in real life during a recent visit to Enchanted Rock near Fredericksburg, TX. After ascending to the top, I ignored the typical path back down and elected for a free descent down the SE side. Three backtracks and a number of ‘dead ends’ (local optima in math parlance) were encountered before I finally made it to ground level. The optimizer attempts to move in the ‘best’ direction for a single step according to some pre-defined mathematical criteria. Gradient-based optimizers are common. The gradient of a multi-variable function is a vector that points in the direction of steepest increase of the function at a particular point (a specific value of all independent variables). The negative gradient provides a direction in which the function decreases. A gradient descent method steps along a direction in which the loss function decreases with the hope of eventually reaching a minimum. The learning rate defines the ‘length’ of each step in the descent (technically, it is a multiplier onto the error gradient during backpropagation). Larger learning rates allow quick moves in a particular direction at the risk of ‘jumping’ over areas that should have been examined more closely. It’s like hiking on a path that is not very well defined and missing an important turn by moving too fast. Lower learning rates allow more careful movement in a valuable direction, but they increase execution time and can become ‘bogged down’ in local minima. So, the learning process is rather involved as it requires selecting good data for training, a good loss function, a proper optimizer, and a balanced learning rate. The process is almost equal parts art and science (and a good deal of experience really helps). These observations are one of the reasons I personally like using a UI framework such as Angular when working with ML models. The ability to present an interactive UI to someone involved with fine-tuning an ML model is highly valuable given the number of considerations required to obtain good results from that model. TensorFlow Approach to Polynomial Regression Polynomial regression using TensorFlow (TF) has been covered in other online tutorials, but most of these seem to copy-and-paste from one another. There is often little explanation given as to why a particular method or step was chosen, so I wanted to provide my own take on this process before discussing the specifics of an Angular implementation. I recently created an interactive demo for a client who had spent too much time reading about CLS on the internet. 
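Before turning to that demo, here is a minimal one-dimensional sketch (an illustration only, not the article's code) of how the learning rate scales each gradient-descent step. The function f(w) = (w - 2)² has gradient 2(w - 2), and every step moves opposite the gradient by an amount proportional to the learning rate.

```typescript
// Minimize f(w) = (w - 2)^2 with plain gradient descent
const gradient = (w: number): number => 2 * (w - 2);

let w = 0;                  // initial guess
const learningRate = 0.1;   // the 'length' multiplier applied to the gradient

for (let step = 0; step < 25; ++step) {
  // Step in the direction of decreasing f; a larger rate moves faster but can overshoot
  w -= learningRate * gradient(w);
}

console.log(w);  // approaches the minimum at w = 2
```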
The goal of the demo was to illustrate that CLS methods are quite myopic and better used for interpolation as opposed to interpolation and extrapolation. Here is a visualization of a test dataset I created for a client many years ago. This is a subset of the complete dataset that resulted from a proprietary algorithm applied to a number of input equipment measurements. A linear CLS fit is also shown. Sample Data set and linear least squares fit Now, you may be wondering how the plot was created. I have multiple Angular directives in my client-only dev toolkit for plotting. This one is called QuickPlot. It’s designed to perform exactly as its name implies: generate quick graphs of multiple functions and/or data sets across a common domain and range. No grids, axes, labels or frills … just a quick plot and that’s it :) While I cannot open-source the entire client demo, I’m pleased to announce that I’m open-sourcing the QuickPlot directive. A quick visualization of the data seems to support using a low-degree polynomial for a fit. A cubic was chosen for this article, although the completed project supported making the degree of fit user-selectable (with a maximum of a fifth-degree polynomial). The ultimate goal is for TensorFlow to compute the coefficients, c0, c1, c2, and c3 such that the polynomial c0 + c1*x + c2*x² + c3*x³ is a ‘best’ fit to the above data. What criteria do we use to determine that one fit is better than another? The sum of squared residuals has already been discussed, but this is ideal for interpolation inside the domain of the supplied data. Sometimes, it is better to have a more ‘relaxed’ criterion when extrapolation is involved. For this reason, we begin the learning process using the average squared residual. This is often called mean-square error or MSE. This metric allows for some larger deviations as long as they are countered by a suitable number of smaller deviations, i.e. the error is smaller ‘on average.’ The use of MSE also allows us to compare two different final fits using the SSE (sum of squared errors or residuals) metric. The TF optimizer selected for this process is called Stochastic Gradient Descent (SGD). We briefly discussed classical gradient descent (GD) above. SGD is an approximation to GD that estimates gradients using a subset of the supplied data that is pseudo-randomly selected. It has the general qualities of faster execution time and a lower likelihood of ‘bogging down’ in areas of local minima. This is especially true for very large (tens of thousands or higher) data sets. SGD is not the only optimizer that could be applied to this problem, but it’s generally a good first choice for regression problems. The other nice feature of this approach is that we do not have to give any consideration to network structure or architecture; just select an optimizer and loss function, and then let TensorFlow do its work! Fortunately, we have quite a bit of experimental evidence for selecting learning rates. A relatively small rate of 0.1 was chosen for this example. One of the benefits of an interactive learning module is the ability to quickly re-optimize with new inputs. We have the option to use SSE as a final comparative metric between an ‘optimized’ and ‘re-optimized’ solution. Data Selection and Pre-Processing One final consideration is preparation of the data set to be presented to TF. It is often a good idea to normalize data because of the manner in which weights are assigned to neuron connections inside TF. 
With x-coordinates in the original domain, small changes to the coefficient of the x³ term can lead to artificially large reductions in the loss function. As a result, that term can dominate in the final result. This can lead the optimizer down the wrong path on the mountain, so to speak, leaving it in a depression that is still far up the mountain face :) The data is first normalized so that both the x- and y-coordinates are in the interval [-1, 1]. The interval [0, 1] would also work, but since some of the data involves negative x-coordinates, [-1, 1] is a better starting interval. The advantage of this approach is that |x| is never greater than 1.0, so squaring or cubing that value never increases the magnitude beyond 1.0. This keeps the playing field more level during the learning process. Normalization, however, now produces two scales for the data. The original data is used in plotting results and comparing with CLS. This particular data set has a minimum x-coordinate of -6.5 and a maximum x-coordinate of 9.7. The y-coordinates vary over the interval [-0.25, 4.25]. Normalized data is provided to TF for the learning process and both the x- and y-coordinates are in the interval [-1, 1]. We can’t use the normalized scale for plotting or evaluating the polynomial for future values of x since those values will be over the domain of all real numbers, not restricted to [-1, 1]. Don’t worry — resolution of this issue will be discussed later in the article. Now that we have a plan for implementing the learning strategy inside TF, it’s time to discuss the specifics of the Angular implementation. TensorFlowJS and Angular Version 10 TensorFlow JS can be exercised by means of a Layer API or its Core API. Either API serves the same purpose: to create models or functions with adjustable (learnable) parameters that map inputs to outputs. The exact functional or mathematical representation of a model may or may not be known in advance. The Layer API is very powerful and appeals to those with less programming experience. The Core API is often embraced by developers and can be used with only a modest understanding of machine-learning fundamentals. The Core API is referenced throughout this article. Here are the two dependencies (other than Angular) that need to be installed to duplicate the results discussed in this article (presuming you choose to use the QuickPlot directive for rapid plotting). "@tensorflow/tfjs": "^2.4.0" . . . "pixi.js": "4.8.2", Following are my primary imports in the main app component. I should point out that I created my dev toolkit (from which this example was taken) with Nx. The mono-repo contains a Typescript library (tf-lib) designed to support TensorFlow applications in Angular. import { AfterViewInit, Component, OnInit, ViewChild } from '@angular/core'; import { TSMT$LLSQ, ILLSQResult, IBagggedLinearFit, TSMT$Bllsq, TSMT$Pllsq, IPolyLLSQResult, } from '@algorithmist/lib-ts-core'; import * as tf from '@tensorflow/tfjs'; import * as fits from '../shared/misc'; import { GraphBounds, GraphFunction, QuickPlotDirective } from '../shared/quick-plot/quick-plot.directive'; import { mseLoss, sumsqLoss, cubicPredict, normalize, normalizeValue, denormalizeValue } from '@algorithmist/tf-lib'; You can obtain the code for all the CLS libraries in my lib-ts-core library from the repo supplied above. 
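The last import above pulls in the normalization helpers from tf-lib; each is deconstructed later in the article, but as a quick hedged usage sketch (the sample value 3.2 is arbitrary), a coordinate can be moved from the original x-domain quoted above into [-1, 1] and back:

```typescript
import { normalizeValue, denormalizeValue } from '@algorithmist/tf-lib';

// Original x-extents of the sample data set (quoted earlier in the article)
const xMin = -6.5;
const xMax = 9.7;

// Map an original x-coordinate into the normalized training interval [-1, 1]
const xNorm: number = normalizeValue(3.2, -1, 1, xMin, xMax);

// Map the normalized value back to the original domain (used when plotting or predicting)
const xOrig: number = denormalizeValue(xNorm, -1, 1, xMin, xMax);

console.log(xNorm, xOrig);  // xOrig recovers the original 3.2 (within floating-point error)
```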
The line import * as fits from '../shared/misc' simply imports some type guards used to determine the type of CLS fit: import { ILLSQResult, IBagggedLinearFit, IPolyLLSQResult } from '@algorithmist/lib-ts-core'; export function isLLSQ(fit: object): fit is ILLSQResult { return fit.hasOwnProperty('chi2'); } export function isBLLSQ(fit: object): fit is IBagggedLinearFit { return fit.hasOwnProperty('fits'); } export function isPLLSQ(fit: object): fit is IPolyLLSQResult { return fit.hasOwnProperty('coef'); } Now, let’s examine each of the library functions imported from @algorithmist/tf-lib, as this serves to introduce low-level programming with TensorFlow JS. mseLoss: This is a loss function based on the MSE or Mean-Squared Error metric discussed above. import * as tf from '@tensorflow/tfjs'; export function mseLoss(pred: tf.Tensor1D, label: tf.Tensor1D): tf.Scalar { return pred.sub(label).square().mean(); }; The first item to note is that most TF methods take tensors as an argument and the operation is performed across the entire tensor. The mseLoss function accepts both a one-dimensional tensor of predictions and a one-dimensional tensor of labels as arguments. The term labels comes from classification or categorical learning, and is a fancy term for what the predictions are compared against. Let’s back up for a second and review. The learnable inputs to our ‘model’ are four coefficients of a cubic polynomial. We are given a set of data points, i.e. (x, y) values, that we wish to fit with a cubic polynomial (which is the function or model for our example). The predictions are an array of y-coordinates created from evaluating the cubic polynomial at each of the x-coordinates of the supplied training data. The labels are the corresponding y-values of the original training data. The mseLoss function subtracts the label from the prediction and then squares the difference to create a positive number. This is the squared error or residual for each data point. The TF mean() method produces the average of the squared errors, which is the definition of the MSE metric. Each of these TF methods operates on a single one-dimensional tensor at a time and each method can be chained. The final result is a scalar. mseLoss is used to compare one set of predictions vs. another. That comparison is used to assign weights in a network that eventually predicts the value of the four cubic polynomial coefficients. sumsqLoss: This is another loss or comparative function. Instead of mean-squared error, it computes the sum of the squared error values. This is the function that is minimized in CLS. import * as tf from '@tensorflow/tfjs'; export function sumsqLoss(pred: tf.Tensor1D, label: tf.Tensor1D): tf.Scalar { return pred.sub(label).square().sum(); }; This function also takes predictions and labels (1D tensors) as arguments and produces a scalar result. cubicPredict: This is a predictor function, i.e. it takes a 1D tensor of x-coordinates, a current estimate of four cubic polynomial coefficients, and then evaluates the cubic polynomial for each x-coordinate. The resulting 1D tensor is a ‘vector’ of predictions for the cubic polynomial. Before providing the code, it is helpful to discuss the most efficient way to evaluate a polynomial. Most online tutorials evaluate polynomials with redundant multiplications. In pseudo-code, you might see something like y = c3 * x * x * x; y += c2 * x * x; y += c1 * x; y += c0 to evaluate the cubic polynomial c0 + c1*x + c2*x² + c3*x³. 
A better way to evaluate any polynomial is to use nested multiplication. For the cubic example above, y = ((c3*x + c2)*x + c1)*x + c0; The cubicPredict code implements nested multiplication with the TF Core API. The operations could be written in one line, but that’s rather confusing, so I broke the code into multiple lines to better illustrate the algorithm. You will also see a Typescript implementation later in this article. import * as tf from '@tensorflow/tfjs'; export function cubicPredict(x: tf.Tensor1D, c0: tf.Variable, c1: tf.Variable, c2: tf.Variable, c3: tf.Variable): tf.Tensor1D { // for each x-coordinate, predict a y-coordinate using nested multiplication; TF tensors are immutable, so each chained operation returns a new tensor that must be re-assigned let result: tf.Tensor1D = x.mul(c3).add(c2); result = result.mul(x).add(c1); result = result.mul(x).add(c0); return result; } Notice that the polynomial coefficients are not of type number as you might expect. Instead, they are TF Variables. This is how TF knows what to optimize and I will expand on Variables later in the article. normalize: This function takes an array of numerical arguments, computes the range from minimum to maximum value, and then normalizes them to the specified range. This is how arrays of x- and y-coordinates, for example, are normalized to the interval [-1, 1]. export function normalize(input: Array<number>, from: number, to: number): Array<number> { const n: number = input.length; if (n === 0) return []; let min: number = input[0]; let max: number = input[0]; let i: number; for (i = 0; i < n; ++i) { min = Math.min(min, input[i]); max = Math.max(max, input[i]); } const range: number = Math.abs(max - min); const output: Array<number> = new Array<number>(); if (range < 0.0000000001) { // degenerate range; map every input value to the 'from' value for (i = 0; i < n; ++i) { output.push(from); } } else { let t: number; input.forEach((x: number): void => { t = (x - min) / range; output.push((1-t)*from + t*to); }) } return output; } The inverse process, i.e. transforming data from, say, [-1, 1] back to its original domain, is denormalize. export function denormalize(output: Array<number>, from: number, to: number, min: number, max: number): Array<number> { const n: number = output.length; if (n === 0) return []; const range: number = Math.abs(to - from); const result: Array<number> = new Array<number>(); if (range < 0.0000000001) { let i: number; for (i = 0; i < n; ++i) { result.push(min); } } else { let t: number; output.forEach((x: number): void => { t = (x - from) / range; result.push((1-t)*min + t*max); }) } return result; } Sometimes, we want to normalize or denormalize a single value instead of an entire array. export function normalizeValue(input: number, from: number, to: number, min: number, max: number): number { const range: number = Math.abs(max - min); if (range < 0.0000000001) { return from; } else { const t: number = (input - min) / range; return (1-t)*from + t*to; } } export function denormalizeValue(output: number, from: number, to: number, min: number, max: number): number { const range: number = Math.abs(to - from); if (range < 0.0000000001) { return min; } else { const t: number = (output - from) / range; return (1-t)*min + t*max; } } These are just some of the functions in my TF-specific Typescript library. They will all be referenced during the course of the remaining deconstruction. Writing the Polynomial Regression Application This client demo was created entirely in the main app component. Layout was extremely simplistic and consisted of a plot area, some information regarding quality of fit, polynomial coefficients, and a select box to compare against various CLS fits of the same data. 
Note that a later version of the application also provided an area in the UI to adjust the degree of the TF-fit polynomial (not shown here). app.component.html <div style="width: 600px; height: 500px;" quickPlot [bounds]="graphBounds"></div> <div> <div class="controls"> <span class="smallTxt">RMS Error: {{error$ | async | number:'1.2-3'}}</span> </div> <div class="controls"> <span class="smallTxt padRight">Poly Coefs: </span> <span class="smallTxt fitText padRight" *ngFor="let coef of coef$ | async">{{coef | number: '1.2-5'}}</span> </div> <div class="controls"> <span class="smallTxt padRight deepText">{{dlStatus$ | async}}</span> </div> <div class="controls"> <span class="smallTxt padRight">Select Fit Type</span> <select (change)="fit($event)"> <option *ngFor="let item of fitName" [value]="item.name">{{item.label}}</option> </select> </div> </div> Graph bounds are computed by scanning the training data x- and y-coordinates to determine min/max values and then adding a prescribed buffer (in user coordinates). They are computed in the ngOnInit() handler. this._left = this._trainX[0]; this._right = this._trainX[0]; this._top = this._trainY[0]; this._bottom = this._trainY[0]; const n: number = this._trainX.length; let i: number; for (i = 1; i < n; ++i) { this._left = Math.min(this._left, this._trainX[i]); this._right = Math.max(this._right, this._trainX[i]); this._top = Math.max(this._top, this._trainY[i]); this._bottom = Math.min(this._bottom, this._trainY[i]); } this._left -= AppComponent.GRAPH_BUFFER; this._right += AppComponent.GRAPH_BUFFER; this._top += AppComponent.GRAPH_BUFFER; this._bottom -= AppComponent.GRAPH_BUFFER; this.graphBounds = { left: this._left, top: this._top, right: this._right, bottom: this._bottom }; The cubic polynomial coefficients are defined as TF Variables. Variables inform TF of the learnable parameters used to optimize the model. protected _c0: tf.Variable; protected _c1: tf.Variable; protected _c2: tf.Variable; protected _c3: tf.Variable; Many online demos (which are often copied and pasted from one another) show Variable initialization using a pseudo-random process. The idea is that nothing is known about proper initial values for variables. Since the data is normalized to a small range, initial coefficients in the range [0,1) are ‘good enough.’ So, you will see initialization such as this in many online references, this._c0 = tf.scalar(Math.random()).variable(); this._c1 = tf.scalar(Math.random()).variable(); this._c2 = tf.scalar(Math.random()).variable(); this._c3 = tf.scalar(Math.random()).variable(); where a native numeric variable is converted into a TF Variable. In reality, a decision-maker often has some intuition regarding a good initial state for a model. An interactive learning application should provide a means for the decision-maker to express this knowledge. A brief glance at the original data leads one to expect that it likely has a strong linear component and at least one inflection point. So, the cubic component is likely to also be prevalent in the final result. Just to buck the copy-paste trend, I initialized the coefficients using this intuition. this._c0 = tf.scalar(0.1).variable(); this._c1 = tf.scalar(0.3).variable(); this._c2 = tf.scalar(0.1).variable(); this._c3 = tf.scalar(0.8).variable(); Initialization to fixed values should lead to a fixed solution, while pseudo-random initialization may lead to some variance in the final optimization. 
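With the coefficients defined as Variables, the tf-lib helpers can already be exercised to measure the quality of this initial guess before any training takes place. The sketch below is illustrative only; the raw data arrays are hypothetical stand-ins (only their endpoints match the extents quoted earlier), not the client data set.

```typescript
import * as tf from '@tensorflow/tfjs';
import { mseLoss, sumsqLoss, cubicPredict, normalize } from '@algorithmist/tf-lib';

// Hypothetical raw training data; only the endpoints match the extents quoted earlier
const rawX: number[] = [-6.5, -2.0, 0.0, 3.2, 9.7];
const rawY: number[] = [-0.25, 0.5, 1.0, 2.0, 4.25];

// Normalize both coordinates into [-1, 1] and convert to 1D tensors
const x: tf.Tensor1D = tf.tensor1d(normalize(rawX, -1, 1));
const y: tf.Tensor1D = tf.tensor1d(normalize(rawY, -1, 1));

// The same fixed initial coefficient estimates used above
const c0: tf.Variable = tf.scalar(0.1).variable();
const c1: tf.Variable = tf.scalar(0.3).variable();
const c2: tf.Variable = tf.scalar(0.1).variable();
const c3: tf.Variable = tf.scalar(0.8).variable();

// Evaluate the cubic at every normalized x-coordinate and measure the initial fit quality
const pred: tf.Tensor1D = cubicPredict(x, c0, c1, c2, c3);
console.log('initial MSE:', mseLoss(pred, y).dataSync()[0]);
console.log('initial SSE:', sumsqLoss(pred, y).dataSync()[0]);
```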
Learning rate and TF optimizer are defined as follows: protected _learningRate: number; protected _optimizer: tf.SGDOptimizer; The learning rate is initialized to 0.1. This has historically been shown to be a reasonable starting point for regression-style applications. Recall that TF is trained on normalized data, which we wish to keep separate from the original data. TF also operates on tensors, not Typescript data structures. So, TF training data is also defined. protected _tensorTrainX: tf.Tensor1D; protected _tensorTrainY: tf.Tensor1D; TF has no knowledge of or respect for the Angular component lifecycle, so expect interactions with this library to be highly asynchronous and out-of-step with Angular’s lifecycle methods. Plotting occurs in a Canvas, so it can remain happily divorced from Angular’s lifecycle. Everything else in the UI is updated via async pipes. Here is the construction of the application status variable, error information, and the polynomial coefficient display. Each of these is reflected in the above template. this._statusSubject = new BehaviorSubject<string>('Training in progress ...'); this.dlStatus$ = this._statusSubject.asObservable(); this._errorSubject = new BehaviorSubject<number>(0); this.error$ = this._errorSubject.asObservable(); this._coefSubject = new BehaviorSubject<Array<number>>([0, 0, 0, 0]); this.coef$ = this._coefSubject.asObservable(); The remainder of the on-init handler performs the following actions: 1 — Copy the training x- and y-coordinates into separate arrays and then overwrite them with normalized data in the interval [-1, 1]. 2 — Initialize the TF optimizer. this._optimizer = tf.train.sgd(this._learningRate); 3 — Convert the normalized x- and y-coordinates to tensors, this._tensorTrainX = tf.tensor1d(this._trainX); this._tensorTrainY = tf.tensor1d(this._trainY); 4 — Assign graph layers to the QuickPlot directive. There is one layer for the original data (in its natural domain), one for the TF fit, and one for the CLS fit. @ViewChild(QuickPlotDirective, {static: true}) protected _plot: QuickPlotDirective; . . . this._plot.addLayer(PLOT_LAYERS.DATA); this._plot.addLayer(PLOT_LAYERS.TENSOR_FLOW); this._plot.addLayer(PLOT_LAYERS.LEAST_SQUARES); The remainder of the work is performed in the ngAfterViewInit() lifecycle handler. First, the original data is plotted and then TF is asked to optimize the current model. this._optimizer.minimize(() => mseLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY)); Note that mseLoss is the defined loss-function or the metric by which one solution is deemed better or worse than another solution. The current predictions for each x-coordinate depend on the current estimate of each of the polynomial coefficients. The cubic polynomial is evaluated (on a per-tensor basis) using the cubicPredict function. The labels or values TF compares the predictions to are the original y-coordinates (normalized to [-1, 1]). In pseudo-code, we might express the above line of code as the following steps: 1 — vector_of_predictions = evaluate cubic poly(c0, c1, c2, c3, vector_of_x_coordinates) 2 — Compute MSE of vector_of_predictions vs. normalized_y_coords 3 — Optimize model based on MSE comparison criterion. Once the optimization completes, the sumsqLoss function is used to compute the sum of the squares of the residuals as another measure of fit quality. 
let sumSq: tf.TypedArray = sumsqLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY).dataSync(); The TF dataSync() method synchronously downloads the requested value(s) from the specified tensor. The UI thread is blocked until completion. The SSE value could be reflected in the UI or simply logged to the console, console.log('initial sumSq:', sumSq[0]); It’s also possible to re-optimize, i.e. run the optimization again using the current Variables as starting points for a new optimization. We can see if any improvement is made in the total sum of squares of the residuals. this._optimizer.minimize(() => mseLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY)); sumSq = sumsqLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY).dataSync(); console.log('sumSq reopt:', sumSq[0]); This yields the result shown below. Optimization and re-optimization of cubic polynomial regression So, how does this result compare against traditional cubic least-squares? Here is the result. Deep Learning vs. Traditional Cubic Least Squares This is really interesting — CLS (shown in blue) and TF (shown in red) seem to have different interpretations of the data (which is one reason I like to use this dataset for client demonstrations). Recall that CLS is very myopic and optimized for interpolation. There is, in fact, no better interpolator across the original domain of the data. The real question is how does the fit perform for extrapolation? As it happens, the generated data tends downward as x decreases and upward as x increases outside the original domain. So, in some respects, TF ‘got it right,’ as the TF fit performs much better on out-of-sample data. Dealing With Multiple Domains The QuickPlot Angular directive plots functions across the same bounds (i.e. extent of x-coordinate and y-coordinate). The original data and CLS fits are plotted across the same bounds, i.e. x in the interval [-6.5, 9.7] and y in the interval [-0.25, 4.25]. The cubic polynomial, computed by TF, has both x and y restricted to the interval [-1, 1]. The shape of the polynomial is correct, but its data extents do not match the original data. So, how is it displayed in QuickPlot? There are two resolutions to this problem. One is simple, but not computationally efficient. The other approach is computationally optimal, but requires some math. Code is provided for the first approach and the second is deconstructed for those wishing to delve deeper into the math behind this project. The QuickPlot directive allows an arbitrary function to be plotted across its graph bounds. It samples x-coordinates from the leftmost extent of the graph to the rightmost extent, and evaluates the supplied function at each x-coordinate. For each x-coordinate in the original data range, perform the following steps: 1 — Normalize the x-coordinate to the range [-1, 1]. 2 — Evaluate the cubic polynomial using nested multiplication. 3 — Denormalize the result back into the original y-coordinate range. This approach is illustrated in the following code segment. 
const f: GraphFunction = (x: number): number => { const tempX: number = normalizeValue(x, -1, 1, this._left, this._right); const value: number = (((c3*tempX) + c2)*tempX + c1)*tempX + c0; return denormalizeValue(value, -1, 1, this._bottom, this._top); }; this._plot.graphFunction(PLOT_LAYERS.TENSOR_FLOW, 2, '0xff0000', f); This approach is inefficient in that a normalize/denormalize step is required to move coordinates back and forth to the proper intervals. It is, however, easier to understand and implement. Another approach is to compute cubic polynomial coefficients that are ‘correct’ in the original data domain. In other words, TF computes coefficients for one polynomial, P, such that P(x) accepts values of x in [-1, 1] and produces y-values in [-1, 1]. Define another cubic polynomial, Q, with coefficients a0, a1, a2, and a3 that accepts x-coordinates in the original data’s domain (all real numbers) and produces y-coordinates in the original data’s range (all real numbers). The coefficients of P(x) are c0, c1, c2, and c3. This information is used to compute a0, a1, a2, and a3. There are four unknowns, which requires four equations to uniquely specify these values. Take any four unique x-coordinates from the domain of P, say -1, 0, 1/2, and 1. If the function that denormalizes an x-value back into the original x-domain is called Dx and the function that denormalizes a y-value back into the original y-range is called Dy, then compute x1 = Dx(-1) x2 = Dx(0) x3 = Dx(1/2) x4 = Dx(1) Now, evaluate y1 = Dy(P(-1)) y2 = Dy(P(0)) y3 = Dy(P(1/2)) y4 = Dy(P(1)) P(x) = ((c3*x + c2)*x + c1)*x + c0 in nested form. For example, P(0) = c0 and P(1) = c0 + c1 + c2 + c3. This process produces four equations a0 + a1*x1 + a2*x1² + a3*x1³ = y1 a0 + a1*x2 + a2*x2² + a3*x2³ = y2 a0 + a1*x3 + a2*x3² + a3*x3³ = y3 a0 + a1*x4 + a2*x4² + a3*x4³ = y4 Since x1, x2, x3, and x4 (as well as y1, y2, y3, and y4) are actual numerical values, the system of equations is linear in the unknowns a0, a1, a2, and a3. This system can be solved using the dense linear equation solver in the repo provided earlier in this article. This approach requires some math and for some that can be pretty intimidating. However, once the new coefficients for Q are computed, the TF cubic polynomial fit can be efficiently computed for any new x-coordinate without consideration of normalization or denormalization. Tidy Up Your Work TF produces interim tensors during the course of computations that persist unless removed, so it is often a good idea to wrap primary TF computations in a call to tidy(), i.e. const result = tf.tidy( () => { // Your TF code here ... }); To check the number of tensors currently in use, use a log such as console.log('# Tensors: ', tf.memory().numTensors); Returned tensors (or tensors returned by the wrapped function) will pass through tidy. Variables are not cleaned up with tidy; use the tf.dispose() method instead. Summary Yes, that was a long discussion. Pat yourself on the back if you made it this far in one read :) TensorFlow is a powerful tool and the combination of TF and Angular enables the creation of even more powerful interactive machine-learning applications. If you are not already familiar with async pipe in Angular, then master it now; it will be your most valuable display tool moving forward with TF/Angular. I hope you found this introduction helpful and wish you the best with all future Angular efforts!
https://medium.com/ngconf/getting-started-with-tensorflow-in-angular-36c0e9d26964
['Jim Armstrong']
2020-11-11 15:31:30.267000+00:00
['Angular', 'Math', 'TensorFlow', 'Typescript', 'AI']
Generic Table component with React and Typescript
Display any kind of data using the same table component while enjoying the benefits of Typescript features. A very common use case is to be able to list multiple types of data using tables. This happens quite frequently in dashboards. For example, an e-commerce dashboard might want to display one table listing the items it has in stock and another table listing client addresses. When using React with Typescript, we want to exploit as much as possible the typing benefits it brings by creating a consistent component API, which is both pleasant to work with and powerful. We’ll illustrate this by creating a simple Table component that provides the basic functionality of a generic table, which deals primarily with 3 things:
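To illustrate the general idea of a generic, type-safe table, here is a minimal sketch of one possible shape for such a component; the names (Table, ColumnDefinition, TableProps) and the props are illustrative assumptions, not the author's actual API.

```tsx
import React from 'react';

// Each column states which property of the row type T it renders and how to label it
interface ColumnDefinition<T, K extends keyof T> {
  key: K;
  header: string;
}

interface TableProps<T, K extends keyof T> {
  data: Array<T>;
  columns: Array<ColumnDefinition<T, K>>;
}

// A generic table: the row type T is inferred from the data passed in,
// so column keys are checked against T at compile time.
export function Table<T, K extends keyof T>({ data, columns }: TableProps<T, K>): JSX.Element {
  return (
    <table>
      <thead>
        <tr>
          {columns.map((col) => (
            <th key={String(col.key)}>{col.header}</th>
          ))}
        </tr>
      </thead>
      <tbody>
        {data.map((row, rowIndex) => (
          <tr key={rowIndex}>
            {columns.map((col) => (
              <td key={String(col.key)}>{String(row[col.key])}</td>
            ))}
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```

Used with, say, a stock-item or client-address row type, the same component renders either table while the compiler rejects column keys that do not exist on that row type.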
https://fernandoabolafio.medium.com/generic-table-component-with-react-and-typescript-d849ad9f4c48
['Fernando Abolafio']
2020-12-08 20:58:20.161000+00:00
['Programming', 'Software Development', 'React', 'JavaScript', 'Typescript']
Why Does Silicon Valley Want to Reengineer Humans?
Why Does Silicon Valley Want to Reengineer Humans? Human beings are not the problem. We are the solution. Image: Andriy Onufriyenko/Getty Images To many developers and investors in Silicon Valley, humans are not to be emulated or celebrated, but transcended or, at the very least, reengineered. These technologists are so dominated by the values of the digital revolution that they see anything or anyone with different priorities as an impediment. This is a distinctly antihuman position, and it’s driving the development philosophy of the most capitalized companies on the planet. In their view, evolution is less the story of life than of data. Information has been striving for greater complexity since the beginning of time. Atoms became molecules; molecules became proteins; proteins became cells, organisms, and, eventually, humans. Each stage represents a leap in the ability to store and express information. Now that we humans have developed computers and networks, we are supposed to accept the fact that we’ve made something capable of greater complexity than ourselves. Information’s journey to higher levels of dimensionality must carry on beyond biology and humans to silicon and computers. And once that happens, once digital networks become the home for reality’s most complex structures, then human beings will really be needed only insofar as we can keep the lights on for the machines. Once our digital progeny can care for themselves, we may as well exit the picture. This is the true meaning of the “singularity”: It’s the moment when computers make humans obsolete. At that point, we humans will face a stark choice: Either we enhance ourselves with chips, nanotechnology, and genetic engineering to keep up with our digital superiors, or we upload our brains to the network. If we go the enhancement route, we must accept that whatever it means to be human is itself a moving target. We must also believe that the companies providing us with these upgrades will be our trustworthy partners — that they wouldn’t remotely modify equipment we’ve installed into ourselves, or change the terms of service, or engineer incompatibility with other companies’ enhancements or planned obsolescence. Given the track record of today’s tech companies, that’s not a good bet. Plus, once we accept that every new technology has a set of values that goes along with it, we understand that we can’t incorporate something into ourselves without also installing its affordances. In the current environment, that means implanting extractive, growth-based capitalism into our bloodstreams and nervous systems. If we go with uploading, we’d have to bring ourselves to believe that our consciousness somehow survives the migration from our bodies to the network. Life extension of this sort is a tempting proposition: Simply create a computer as capable of complexity as the brain, and then transfer our consciousness — if we can identify it — to its new silicon home. Eventually, the computer hosting our awareness will fit inside a robot, and that robot can even look like a person if we want to walk around that way in our new, eternal life. It may be a long shot, but it’s a chance to keep going. Others are hoping that even if our consciousness does die with our body, technologists will figure out how to copy who we are and how we think into an A.I. After that, our digital clone could develop an awareness of its own. It’s not as good as living on, perhaps, but at least there’s an instance of “you” or “me” somewhere out there. 
If only there were any evidence at all that consciousness is an emergent phenomenon, or that it is replicable in a computer simulation. The only way to bring oneself to that sort of conclusion is to presume that our reality is itself a computer simulation — also a highly popular worldview in Silicon Valley. Whether we upload our brains to silicon or simply replace our brains with digital enhancements one synapse at a time, how do we know if the resulting beings are still alive and aware? The famous Turing test for computer consciousness determines only whether a computer can convince us that it’s human. This doesn’t mean that it’s actually human or conscious. The day when computers pass the Turing test may have less to do with how smart computers have gotten than with how bad we humans have gotten at telling the difference between them and us.
https://medium.com/team-human/why-does-silicon-valley-want-to-reengineer-humans-f7cfcf9ad052
['Douglas Rushkoff']
2020-11-12 14:20:41.208000+00:00
['Society', 'Technology', 'Book Excerpt', 'Silicon Valley', 'AI']
Where Does Inspiration Come From?
I’ve been thinking a lot lately about how creative people are inspired. And at the same time, how consciously I seek out inspiration in my own life. I may be a self-proclaimed SuperGeek, but I would actually describe myself more as a jack-of-all-trades. WordPress consulting is just one of these “trades.” Where Does Inspiration Come From? Most of my other interests are creative and artistic, so Visualmodo Blog allows me to dabble at the intersection of creativity and tech geekiness. Have you been watching the Netflix series Abstract? I loved delving into the lives of the various types of artists, one after the other, and learning about how they carve out their creative space in the world. I have always loved watching the creative process at work, which is why I love Project Runway, for instance. I am always more interested in the designers’ processes than in the catty in-fighting. One thing that the artists featured in Abstract seemed to keep saying is that their creativity thrives when there are limits imposed upon it. Maybe you only have a Sharpie to draw with. Or your only canvas is a picture window. Or your logo design is restricted by the company’s name. Similarly, I enjoy having some restrictions on my work. It’s better to be able to immediately reject some ideas before they can even bubble to the surface, rather than having too much choice and too many options about what to create. As someone who works on a computer all day, it’s easy to get so obsessed with working that you barely get up to eat, let alone leave the house. But it’s important for me, and other creative types, to remember that creativity doesn’t happen in a vacuum. You need to get out into the world and see the colours of nature, or the typography on local signage, or the way the light falls on a building at dusk. One of the most inspiring things I do on a regular basis is visit an art gallery. I have a membership to a modern art museum, and I like to wander through just for an hour, limiting myself to just one section. Sometimes I find somewhere to sit and write, or doodle. Having a membership means that I can take my time, and really soak in the creative atmosphere. I also love to travel, as time and money permit. Getting out of my comfort zone is inspiring, and I’ve been inspired by discovering the way I react to new and often surprising situations. Equally, I love coming home, and applying the lessons I’ve learned to my work and everyday life. For me, inspiration comes from learning new things. I have an insatiable need to continue learning, so I thrive on signing up for online courses. I may not actually complete (or even start!) many of the courses that I sign up for, but when I do I am immediately transported into the world of whatever I’m learning: I completed a course on drawing, and gained so much confidence by putting pen to paper and just giving it a go. I went through a course on branding, and learned some great techniques for visualising a brand in the very early conceptual stages. I did a course about MailChimp, and learned how to set up an auto responder and what it means to segment a list. All of these courses may have had some elements that I already knew, but the teacher put their own personal spin on them which framed them as something new. And there was always enough content that was new to me, to inspire some new type of creativity. Finally, I get inspiration through sharing my vulnerable stories and the work I create. 
The conversations I’ve had with people after they have read my writing, or seen my rudimentary doodles, have been some of the most enlightening. Often, they have even been surprising discussions with people I’ve been friends with for some time but had never conversed with like this before. I’m so curious! Please tell me how you get your inspiration. Does it come easily to you, or do you really need to put yourself in a specific state to discover it? Leave a comment to let me know!
https://medium.com/visualmodo/where-does-inspiration-come-from-464ebd2b0476
[]
2018-02-15 15:59:32.282000+00:00
['Creative', 'Tips', 'Inspiration', 'Creativity', 'Motivation']
Relentlessly Improving Performance
Relentlessly Improving Performance NVIDIA DGX A100 640GB Systems & BlazingSQL Provide Big Value in a Small Space NVIDIA Selene — an implementation of NVIDIA DGX SuperPOD Introduction In the two years since its introduction, the RAPIDS team has been laser-focused on bringing the performance of GPUs to the Python data science ecosystem. If performance is a primary goal, it is critical to be able to measure that performance, and how it changes over time. Benchmarks help us understand the impact of both improvements (and sometimes regressions!) to the software and hardware. There are a number of internal benchmarks that have been used to benchmark RAPIDS. However, in the last year, we have focused on building a robust, repeatable, realistic performance benchmark to more consistently test RAPIDS against an end-to-end Big Data workflow at a 10+ terabyte (TB) scale. Today, we want to explain what we are using to benchmark RAPIDS and how our results have progressed. We also want to showcase what RAPIDS can do on the new NVIDIA A100 80GB GPU just announced. Spoiler alert — the new A100 80GB blows away our previous performance numbers, doing the same amount of work in about half the number of nodes. It’s an impressive suite of hardware, ideally suited to data science workloads. Summary of GPU Big Data Benchmarks in 2020 Our benchmark, which we are calling the GPU Big Data Benchmark (GPUbdb), uses 10 TB of simulated data designed to mimic real data from a large retail or finance company. It comprises a mix of structured and unstructured data, requiring large scale ETL, natural language processing, and machine learning. Most importantly, the benchmark is evaluated “end-to-end”, including everything from loading data all the way to writing output files — that means data starts on disk, is read into GPUs, analytics are performed, and the result is written back out to disk. This makes the benchmark directly relevant to real-world workflows applicable to many enterprises. Great, so how does RAPIDS look on this benchmark? May 2020 — NVIDIA DGX-1 + RAPIDS + Dask At GTC Spring 2020, NVIDIA CEO, Jensen Huang, announced preliminary results, completing the entire GPUbdb benchmark workload in under 30 minutes on 16 NVIDIA DGX-1 nodes using 128 NVIDIA V100 GPUs. Our code implementation primarily relied on GPU DataFrame analytics at scale provided by PyData tools like RAPIDS 0.13, Dask, Numba, CuPy, and more. We relied on UCX to take advantage of NVIDIA NVLink, the high-speed GPU-to-GPU interconnect. The cost of the cluster was approx. $2M. Setup: Scale Factor: 10TB. Systems: 16x NVIDIA DGX-1. Hardware: 128 total NVIDIA V100 GPUs with 4 TBs total GPU memory, connected locally over NVIDIA NVLink and node-to-node via NVIDIA Mellanox InfiniBand networking. Software: RAPIDS v0.14, Dask v2.16.0, UCX-Py v0.14. June 2020 — NVIDIA DGX A100 320GB + RAPIDS + Dask In June 2020, NVIDIA again announced breakthrough performance, completing the GPUbdb benchmark in under 15 minutes (½ the time) using 16 NVIDIA DGX A100 320GB nodes and the newly released RAPIDS 0.14 software. Using the same number of nodes with new (at the time) A100 GPUs, we were able to double our performance. Improvements to both hardware and software made this huge leap possible. The cluster cost was approx. 
$3.2M — higher than the previous result on 16 DGX-1 systems, but the increase in performance led to a lower total cost of ownership (TCO) than the DGX-1 solution. Setup: Scale Factor: 10TB. Systems: 16x NVIDIA DGX A100 320GB. Hardware: 128 total NVIDIA A100 GPUs with 5 TBs total GPU memory, connected locally over NVIDIA NVLink and NVIDIA NVSwitch and node-to-node via NVIDIA Mellanox InfiniBand networking. Software: RAPIDS v0.15, Dask v2.17.0, UCX v0.15. However, most of our code relied on DataFrames. While DataFrame APIs are great, many people are more familiar with SQL. We knew that implementing this benchmark primarily in SQL would be critical to demonstrating that the thousands of SQL queries running on CPU clusters across companies would be faster and cheaper on GPUs. Enter BlazingSQL. October 2020 — NVIDIA DGX A100 320GB + RAPIDS + Dask + BlazingSQL BlazingSQL is a high-performance distributed SQL engine in Python built on RAPIDS. The BlazingSQL implementation of this benchmark raised the bar even higher. The BlazingSQL version completed the benchmark in fewer than 12 minutes, with just 10 DGX A100 320GB nodes. That’s 20% faster performance at only 60% of the cost (approx $2M for the cluster). Setup: Scale Factor: 10TB. Systems: 10x NVIDIA DGX A100 320GB. Hardware: 80 total NVIDIA A100 GPUs with 3.2 TBs total GPU memory. Software: RAPIDS v0.16, Dask v2.30.0, BlazingSQL v0.16. As excited as we were about these results, we were nowhere near stopping. November 2020 — DGX POD with DGX A100 640GB + RAPIDS + Dask + BlazingSQL NVIDIA announced at SuperCompute 2020 a new version of the A100 GPU, increasing the per-GPU memory from 40GB to 80GB. This is huge for data science workloads, which are commonly memory intensive and often need to temporarily increase memory usage. While RAPIDS has many tools to manage larger-than-memory workloads, there is no substitute for simply having more memory. Based on this release, we’re excited to show a step function improvement in our performance and cost savings. Using BlazingSQL, RAPIDS, Dask, CuPy, and Numba on a single DGX POD with 6 DGX A100 640GB nodes, we completed the benchmark in under 11 minutes. That’s even faster still while reducing cost by yet another 10% (approx $1.8M for the cluster). The larger memory capacity enables us to spike memory higher during key ETL operations, dramatically reducing the need to spill data from GPU memory to CPU memory. Setup: Scale Factor: 10TB. Systems: 6x NVIDIA DGX A100 640GB. Hardware: 48 total NVIDIA A100 GPUs with 3.8 TBs total GPU memory. Software: RAPIDS v0.16, Dask v2.30.0, BlazingSQL v0.16. Wrapping Up These results are exciting, because not only do we have a clear path to more effectively test the performance of RAPIDS, but we also get to show our improvement over time. Progressing from the V100 32GB to the A100 80GB GPUs, we’ve improved from our RAPIDS 0.14 results completing the benchmark in 30 minutes on a $2M system to under 11 minutes on a $1.8M system using BlazingSQL on RAPIDS 0.16. 
The A100 80GB is an impressive piece of hardware, and its additional memory makes it ideally suited to the sort of high-performance data analytics RAPIDS is designed to perform. A six-node, single-rack DGX POD with DGX A100 640GB systems can run 10TB workloads at blazing speed. Just as importantly, the RAPIDS software is becoming more and more efficient. By considering software and hardware holistically, the RAPIDS team is making the vast potential of accelerated computing accessible to data practitioners across industries and institutions.
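For readers new to the RAPIDS + Dask stack referenced throughout, a minimal sketch of the end-to-end pattern the benchmark measures (read from disk into GPU memory, run analytics, write results back to disk) might look like the following. This is not the GPUbdb benchmark code itself; the bucket paths and column names are hypothetical.

# Minimal sketch of a GPU-accelerated, end-to-end ETL step with Dask + RAPIDS.
# Paths and column names are illustrative only.
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
import dask_cudf

cluster = LocalCUDACluster()      # one Dask worker per local GPU
client = Client(cluster)

# read from disk straight into GPU memory
sales = dask_cudf.read_parquet("s3://my-bucket/sales/*.parquet")

# run an aggregation on the GPUs
result = (
    sales.groupby("store_id")
         .agg({"amount": "sum"})
         .reset_index()
)

# write the result back out to disk, completing the end-to-end flow
result.to_parquet("s3://my-bucket/output/")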
https://medium.com/rapids-ai/relentlessly-improving-performance-d1f7d923ef90
['Josh Patterson']
2020-11-17 20:41:05.088000+00:00
['Machine Learning', 'AI', 'Rapids Ai', 'Analytics', 'Big Data']
4 Opposing Traits That Define Creative Personalities
4 Opposing Traits That Define Creative Personalities And how you can find inspiration from them Photo by melissa mjoen on Unsplash Various psychologists and scientists have tried to study the personality that defines the creative person. Facing numerous and contradictory traits, they found that the talented artist, the genius scientist, or the deep thinker cannot be reduced to a single psychological type. According to Mihály Csíkszentmihályi in Creativity: Flow and the Psychology of Discovery, creatives embody personalities fluctuating between various and contradictory tendencies, crossing all the spectrum of human emotions. Their creativity comes from an ability to bring this complexity together in a shifting, adaptive, and unique personality. It comes from personal aspects such as a self-confident modesty, a playful focus on their work, transgressive respect for tradition, and a rigorous passion for their efforts. Here is how examples of creatives from Csíkszentmihályi’s book talk about these 4 traits and how you can find inspiration from them. Showing Self-Confidence and Modesty When creative people talk about their field of expertise, they show a precise and firm vision of its history and future. Yet, when they present their work, they are also often modest and minimize the importance of their discovery compared to the long line of their successors. Creative personalities are lucid enough about their success that they can move from absolute certainty to uncertainty about their future accomplishments. They can put their judgment, taste, and competitive spirit into perspective on a field larger than themselves, being both highly ambitious and selfless. This can be read in their words: “When people ask me if I’m proud of something, I just shrug and hope to get away as soon as possible. I should explain that my way is always to look ahead, all my pleasant thoughts are about the future” Elisabeth Noelle-Neumann, famous social scientist. Michael Snow, a great experimental filmmaker, attributes his incessant experimentation to a sense of insecurity and doubt that he tries to illuminate. To reach a state of creativity, you have to take hold of your field, to the point of knowing the value of its great figures and your chances of differentiating yourself from them. Being Focused and Playful When creative people are immersed in their work, they are in a state of optimal concentration where nothing can disturb them and where they are obsessed with their work. But at the same time, they feel great pleasure in not taking their work seriously, playing, and experimenting constantly with their achievements. Their creativity requires both a strong discipline when working in their ideas and an ability to twist these ideas freely when reflecting. They use their capacity for hard work in the service of a volatile, distracted, and fanciful imagination. Cutting out their work in a two-part process -ideation and hard work- is how these artists or thinkers resolve this contradiction. Nina Holton says: “Tell anybody you’re a sculptor and they’ll say, Oh how exciting, how wonderful” And I tend to say, “What’s so wonderful?” I mean, it’s like being a mason, or being a carpenter, half the time.” Sculpture is the combination of wonderful wild ideas and then a lot of hard work. Jacob Rabinow claims that he is in prison, to free his imagination from the constraint of time but also to focus only on the progress of his work. 
Growing your creativity thus means increasing your ability to move from a state of distraction and mental association to a state of concentration and intense work. Seeking to Break and Save Tradition Artists are known for their tendency to constantly question the achievements and forms of the past in order to create new ones. Yet, they also display deep respect and admiration for their inspirers and masters and do not hesitate to assimilate and claim their image and heritage. Their creativity stems both from a spirit of independence from the influences of the past and a desire to return to the level of past achievements. Artists seek to recapture the best of the past by trying to surpass it with something new. They take risks and bring something new without necessarily wanting to make a difference at all costs. They value transgression and daring as much as they reject the will to distinguish themselves at all costs. According to artist Eva Zeisel, “Wanting to be different can’t be the motive of your work. To be different is a negative motive, and no creative thought or created thing grows out of a negative impulse. No negative impulse can work, can produce any happy creation.” And according to the economist George Stigler: “I’d say one of the common failures of able people is a lack of nerve. They’ll play safe games. They’ll take whatever the literature’s doing and add a little bit of it.” To follow the path of creativity, find a dynamic between the boldness and independence of the creator and the conservatism and modesty of a true art lover. Expressing Affection and Objectivity for Their Work One of the other elements differentiating creative persons is an energy that drives them in their work. This energy is both a deep love of and attachment to their achievements and a rational, strong detachment from them. Creativity comes as much from a passionate enthusiasm for the work process as it does from rationality and systematic judgment in the evaluation of the work. The creative person is both exceptionally tender toward the fruits of his efforts and particularly cruel when it comes to recreating them and deciding what needs to be changed, modified, or corrected. This alternation between fertile emotional production and cold cutting and analysis is what gives the artist or thinker sufficient flexibility to create and recreate his work. According to historian Natalie Davis, “I love what I am doing and I love to write. I just have a great deal of affect invested… [Yet] I think it is very important to find a way to be detached from what you write, so that you can’t be so identified with your work that you can’t accept criticism and response.” Find the ability to switch between passionate work and the cold reason that judges it, and you will get closer to the creative drive. It’s your turn to be inspired by these traits and to aspire to a creative personality!
https://medium.com/thinking-up/4-opposing-traits-that-define-creative-personalities-b58d3eedf340
['Jean-Marc Buchert']
2020-09-04 12:00:36.745000+00:00
['Inspiration', 'Artist', 'Creativity', 'Psychology', 'Art']
Compare Amazon Textract with Tesseract OCR — OCR & NLP Use Case
Compare Amazon Textract with Tesseract OCR — OCR & NLP Use Case Comparison of two known engines for optical character recognition (OCR) and Naturtal Language Processing Image by Felix Wolf from Pixabay What is OCR anyway and why the buzz? Artificial Intelligence (AI) enables entities with Human Intelligence (us) process data at a large scale — faster and cheaper. Unarguably, a large portion of data is saved digitally- easy to read and analyze. However there is a significant portion of data that is stored in physical documents — both type written and hand-written. How to analyze this category of data. This is where fascinating technology of Optical Character Recognition (OCR) comes in. Using OCR you are able to convert documents into text format of data suitable for editing and searching. This is what OCR is able to do. Image by Gerd Altmann from Pixabay In the article we will focus on two well know OCR frameworks: Tesseract OCR — free software, released under the Apache License, Version 2.0 - development has been sponsored by Google since 2006. Amazon Textract OCR — fully managed service from Amazon, uses machine learning to automatically extract text and data We will compare the OCR capabilities of these two frameworks. Let's start by a simple image as below: Image by Author — typewritten.jpg $ git clone https://github.com/mkukreja1/blogs.git Download and Install Notebook blogs/ocr/OCR.ipynb !pip install opencv-python !pip install pytesseract !pip install pyenchant import cv2 import pytesseract import re from pytesseract import Output img_typewritten = cv2.imread('typewritten.jpg') custom_config = r'--oem 3 --psm 6' txt_typewritten=pytesseract.image_to_string(img_typewritten, config=custom_config) print(txt_typewritten) OCR Output using Tesseract OCR: BEST PICTURE FORD V FERRARI THE IRISHMAN JOJO RABBIT JOKER LITTLE WOMEN MARRIAGE STORY 1917 ONCE UPON A TIME…IN HOLLYWOOD PARASITE OCR Output using Amazon Textract OCR: Both frameworks performed exactly the same. Lets see how handwritten text compares. Image by Author — handwritten.jpg img_handwritten = cv2.imread('handwritten.jpg') txt_handwritten=pytesseract.image_to_string(img_handwritten) print(txt_handwritten) OCR Output using Tesseract OCR: <> mhassadey VWENS YEA) sore &&. a) OW!NS ané Lp Real Estate Group RKSHIRE | Ambassador H ATH. AWAY | Real Estate lomeServices Ky Ie, aa Nim So mul for meet With me today | Ht wis qed Catching Wp and Vin glad 10 hear Mk Ke well for bu. | Lovie Brwoud 10 Seeing You AA aiv Go| Cheers| Megan Owens, Realtor 402–689–4984 www.ForSalebyMegan. MMS com OCR Output using Amazon Textract OCR: Image by Author Amazon Textract OCR performed marginally better than Tesseract OCR for handwritten text. Now we will try a busy image. Image by Author — invoice-sample.jpg img_invoice = cv2.imread('invoice-sample.jpg') custom_config = r'--oem 3 --psm 6' txt_invoice=pytesseract.image_to_string(img_invoice, config=custom_config) print(txt_invoice) OCR Output using Tesseract OCR: http://mrsinvoice.com I 7 Your Company LLC Address 123, State, My Country P 111–222–333, F 111–222–334 BILL TO: P: 111–222–333, F: 111–222–334 a. 
z cient@eromplent Contact Phone 101–102–103 john Doe office ayment Terms ‘ash on Delivery Office Road 38 P: 111–333–222, F: 122–222–334 Amount Due: $4,170 [email protected] NO PRODUCTS / SERVICE QUANTITY / RATE / UNIT AMOUNT HOURS: PRICE 1 aye 2 $20 $40 2 | Steering Wheel 5 $10 $50 3 | Engine oil 10 $15 $150 4 | Brake Pad 24 $1000 $2,400 Subtotal $275 Tax (10%) $27.5 Grand Total $302.5 ‘THANK YOU FOR YOUR BUSINESS OCR Output using Amazon Textract OCR: Image By Author Amazon Textract identifies tables and forms in documents. This is neat. Image By Author Advanced Features — Spell Checking Results from an OCR scan are often fed into an NLP model. Therefore, it is important to have a high degree of accuracy of the resulting text. We can handle it two ways: Pass every work through a spell-checker module like enchant Option 1 — if spell check failed — Mask/remove the word from the resulting text Option 2 — if spell check failed — Use spell-checker suggestions and edit the resulting text Enchant module is very frequently used in Python to check the spelling of words based on dictionary. In addition to spell checking enchant can give suggestions to correct words. img = cv2.imread('invoice-sample.jpg') text = pytesseract.image_to_data(img, output_type='data.frame') text = text[text.conf != -1] lines = text.groupby('block_num')['text'].apply(list) print(lines[25]) [‘‘THANK’, ‘YOU’, ‘FOR’, ‘YOUR’, ‘BUSINESS.’] Note that the first word seems to have an spelling issue. import enchant dict_check = enchant.Dict("en_US") for word in lines[25]: if (dict_check.check(word)): print(word+ ' - Dictionary Check Valid') else: print(word+ ' - Dictionary Check Valid Invalid') print('Valid Suggestions') print(dict_check.suggest(word)) ‘THANK — Dictionary Check Valid Invalid Valid Suggestions [‘THANK’] YOU — Dictionary Check Valid FOR — Dictionary Check Valid YOUR — Dictionary Check Valid BUSINESS. — Dictionary Check Valid Note that enchant found the first word invalid and was able to provide a alternative suggestion. Using Regular Expressions for NLP My early days in IT (almost 25 years ago) were filled with ups and downs. One day I would learn something new and feel on top of the world. Other days not so much. I remember the day when my manager asked me to work on a pattern-matching problem. I had to do the pattern-matching over data in Oracle. Since I had never done this before, I request him for pointers. The answer I got was was “This can be very easily done using REGEX”. I was a good follower, except there was nothing easy about using REGEX pattern matching). It took me a while to realize that my manager was kidding. I still fear using REGEX but can’t escape it…..it still is widely used in Data Science especially NLP. Learning REGEX takes time. I have frequently used websites like https://regex101.com/ for practice. For example if you want to extract all date fields from a document. 
d = pytesseract.image_to_data(img, output_type=Output.DICT) keys = list(d.keys()) date_pattern = '^(0[1-9]|[12][0-9]|3[01])/(0[1-9]|1[012])/(19|20)\d\d$' n_boxes = len(d['text']) for i in range(n_boxes): if int(d['conf'][i]) > 60: if re.match(date_pattern, d['text'][i]): (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i]) img_date = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2) print(d['text'][i]) Output: 12/12/2001 You may even highlight the date fields in a given document cv2.imwrite('aimg_date.png', img_date) Image by Author I hope this article was helpful in kick-starting your OCR and NLP knowledge. Topics like these are covered as part of the AWS Big Data Analytics course offered by Datafence Cloud Academy. The course is taught online by myself on weekends.
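As an addendum to the comparison above: the Amazon Textract results shown earlier were produced through the managed service, and no Textract code appears in this text. A minimal sketch of calling Textract from Python with boto3 is shown below. This is not the code used for the screenshots above, and it assumes AWS credentials are already configured; the file name simply reuses the sample invoice image.

# Minimal sketch: synchronous text detection with Amazon Textract via boto3
import boto3

textract = boto3.client('textract')

with open('invoice-sample.jpg', 'rb') as f:
    response = textract.detect_document_text(Document={'Bytes': f.read()})

# Textract returns PAGE, LINE and WORD blocks; print the detected lines of text
for block in response['Blocks']:
    if block['BlockType'] == 'LINE':
        print(f"{block['Confidence']:.1f}%  {block['Text']}")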
https://towardsdatascience.com/compare-amazon-textract-with-tesseract-ocr-ocr-nlp-use-case-43ad7cd48748
['Manoj Kukreja']
2020-09-17 17:55:38.674000+00:00
['Machine Learning', 'Data Science', 'NLP', 'Artificial Intelligence', 'AWS']
We Need to Stop Using null: Here’s Why
Photo by Ben Hershey on Unsplash In general, while doing code reviews, I have developed a strong notion of why the use of null (or nil , NULL , nullptr etc depending on your programming language) generally causes more problems than it solves. The fundamental problem of null is that it is trying to represent the fact that it is not a value while being assigned as a value. This fundamental flaw then snowballs and manifests into problems that we see in everyday production code. Here, I have made an attempt to document the various types of issues I commonly find with using null. Poor language Decisions Certain languages have not handled edge-cases of null type variables in a consistent manner. For example, in Java, you have primitive and reference types of variables. Primitive variables do not allow null initialization. They throw a compile-time error. int i = null; //this throws a compile error error: incompatible types: <null> cannot be converted to int int i= null; But an instance of the class Integer can accept a null initialization. Integer j = null; //perfectly fine Now, in Java, when you assign a reference type to a primitive type, it silently does a type conversion. Integer p = 100; int q = p; //works fine If you try the same with an Integer instance initialized to null , you will now get a runtime exception. Integer i= null; int j = i; //throws a runtime exception The above, when executed, throws an exception: Exception in thread "main" java.lang.NullPointerException at Main.main(HelloWorld.java:5) This kind of inconsistency can be extremely confusing, and lead to difficult to track bugs being introduced to the codebase. Tracking Nulls Leads to Sloppy Code The problem with things allowed to be null is that you need to check for their null case every time. For example, in Java, an empty check for a string always looks like this: String s = "hello"; if(s == null || s.length() == 0) { //do stuff } This is, in fact, so prevalent that almost all production code either has a bunch of utility classes to handle these edge cases or used a library like StringUtils to handle these cases. The native language has no support for it. In fact, C# has added a isNullOrEmpty function natively to handle this extremely common scenario. Multiple, Often Confusing Definitions In weakly typed languages, things can get even more confusing. For instance, in Javascript, since there are not types, we need a way to explicitly identify whether a property of an object is null , or if it simply does not exist. To handle this, Javascript now has two types of nulls. undefined — property exists but has no value. null — property exists and is assigned a null value. This again is extremely confusing. For instance, I am allowed to explicitly set the value of a variable as undefined , which contradicts the above definition. And now for the biggest problem that I see every day. Type Subversion Null is an exception to type-definitions. Type definitions are a great advantage of strongly typed languages. For instance, let’s take the method trim() on the class String . Now, if I try to call trim() on any instance, the language (in this scenario — Java) will check if the type of the instance if String . If not, it will provide a compile-time error. String x = " Hello "; x = x.trim(); System.out.println(x); Above code works perfectly. It provides a trimmed string. But if we try to do it on a different object, let’s say an instance of Integer , we get a compile-time error. 
Integer x = 5; x = x.trim(); This generates an error: error: cannot find symbol x = x.trim(); ^ symbol: method trim() location: variable x of type Integer These compile-time errors let developers write high-quality production code. Null breaks this rule because any reference can be null. Calling a method on a null reference throws a NullPointerException . These errors often go unchecked even in code reviews and then surface at runtime, requiring extensive debugging. String s = null; s = s.trim(); //throws NullPointerException
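The excerpt ends with the problem rather than a remedy, so treat this as an aside: one widely used alternative in Java is to model "no value" explicitly with java.util.Optional instead of null, which forces callers to handle the empty case at compile time. A minimal sketch (findUserName is a hypothetical method used only for illustration):

// Sketch: making the absence of a value explicit with Optional
import java.util.Optional;

public class Example {
    static Optional<String> findUserName(int id) {
        if (id == 42) {
            return Optional.of("Arindam");
        }
        return Optional.empty();          // explicit "no value", not null
    }

    public static void main(String[] args) {
        String name = findUserName(7)
                .map(String::trim)        // only runs when a value is present
                .orElse("unknown");       // caller must choose a fallback
        System.out.println(name);         // prints "unknown"
    }
}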
https://medium.com/swlh/we-need-to-stop-using-null-heres-why-c56ff3ac72dd
['Arindam Roy']
2020-06-29 11:07:11.416000+00:00
['JavaScript', 'Software Development', 'Software Engineering', 'Java', 'Programming']
Roadmap to Becoming a Successful Data Scientist
1) Python Basics To become a data scientist, you first need to understand the basics of Python, because Python is the favored language for data science. By going through the edX course “Introduction to Python for Data Science” you will gain a basic understanding of Python. 2) Statistics & Probability Statistics is a key part of data science; arguably, data science is all about statistics. This KhanAcademy course will help you understand the basic concepts of statistics and probability. 3) Data Analysis Data preprocessing and data visualization are two of the main components of data science. This course shows how computing and mathematics come together: data analysis involves collecting data, preprocessing it, and visualizing it interactively. If you want to master this key component, click here. 4) Machine Learning for Data Science and Analytics Now it’s time to learn some machine learning. After taking the course “Machine Learning For Data Science & Analytics” you will be able to understand the algorithms and how to create ML models. 5) Deep Learning This is a complete book on deep learning that will give you clear and precise knowledge of the subject. 6) Intro to Relational Databases (SQL, DB-API & More..) If you have already learned Python, this course will be much easier to follow, because it is all about SQL queries and how to use a relational database from your code, with Python examples. You will learn the basics of SQL along with the Python DB-API for connecting your code to the database. 7) Intro to Hadoop and MapReduce To handle Big Data, the Apache Hadoop project develops open-source software for processing it at scale. This course will help you understand the basics of Hadoop and the principles behind it. 8) Data Storytelling Being a data scientist is not enough; you also have to learn how to present data and its insights to management, executives, and other stakeholders. Learning the skill of data storytelling ensures you won’t stumble when it is time to present your findings. Click here. *Important* Learning is not enough if you don’t practice what you have learned, so I urge you to complete at least 2 to 3 Kaggle projects to polish your data science skills. This roadmap is enough to start your career as a data scientist. You don’t need to go anywhere else or enroll in $1,000 courses to become a data scientist. You can do it on your own. You just need to start, and to start with passion!
https://medium.com/dataseries/roadmap-to-becoming-a-successful-data-scientist-7daf4d2b0e11
['Mustufa Ansari']
2020-08-28 09:10:51.156000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Data']
Searching for Kindness on the Interwebs
The talk that resonated with me most at D&CC came from Beth Dean, who spoke about Facebook’s efforts to imbue kindness into its interactions. Facebook’s problems are unique, complicated, and usually unintentional, in the same way Tay’s downfall was never intended. A grieving parent sees “memories” collected by Facebook featuring a deceased child. Mother’s Day reminders tear at the hearts of those without mothers. Your news feed shouts out the political diatribes of the day, every day, while your community adds to the stream of angry comments. As software developers at any tech company become more skilled at using our data to customize and personalize our experiences on the web, we are in turn demanding a higher level of thoughtfulness from our interactions. Are we setting higher expectations from artificial entities because our interactions with people on the web have become more tense? From the venom spewed about immigrants in certain corners of the Internet to the threats leveled at two Republican Senators who disagreed with their leadership over the recent healthcare fights, it’s as though we’ve accepted this behavior as the new normal, and now turn to AI to give us the kindness that’s missing from our human interactions. I don’t have any answers to this question. But amidst this disconnect that I struggle with — creating a positive, welcoming environment for Microsoft’s customers, while feeling as if I can’t expect the same courtesy from certain parts of the media, or in my political leaders — what I try to keep top of mind in all this is my aunt-in-law. We’ve had different political leanings since I married into the family when I was 22. But in the end, the only thing that ever really mattered was that she loved my first husband, who passed away 5 years ago, and now simply wants to know that his children are growing up into good men who will follow in their father’s footsteps. With her in mind, those on the opposite side of the political spectrum no longer are the “other.” And maybe the reason that those of us in the tech field are focused now on ensuring that our creations reflect a kindness is to provide a proxy for my aunt-in-law. A techy reminder that if bots and other AIs can be courteous and careful of our feelings, we shouldn’t forget that it’s possible in our human-to-human interactions too. Whether you’re engaging with a bot, your social media audience, your political representatives, or your best friends, your words — and the humanity behind those words — matter more than ever.
https://medium.com/microsoft-design/searching-for-kindness-on-the-interwebs-76f5b247dd9
['Elizabeth Reese']
2019-08-27 15:56:42.985000+00:00
['Bots', 'Cortana', 'AI', 'Artificial Intelligence', 'Microsoft']
Write Cleaner Code by Using JavaScript Destructuring
Destructuring Objects In the example above, all the magic happens in the following line: const { title, rating, author: { name } } = article Now it may seem a bit weird to have those brackets like that on the left side of the assignment, but that’s how we tell JavaScript that we are destructuring an object. Destructuring objects lets you bind to different properties of an object at any depth. Let’s start with an even simpler example: const me = { name: "Juan" } const { name } = me In the case above, we are declaring a variable called name that will be initialized from the property with the same name in the object me so that when we evaluate the value of name , we get Juan . Awesome! The same can be applied to any depth. Heading back to our example: const { title, rating, author: { name } } = article For title and rating , it’s exactly the same as we already explained. But in author , things are a bit different. When we get to a property that is either an object or an array, we can choose whether to create the variable author with a reference to the article.author object or do a deep destructuring and get immediate access to the properties of the inner object. Accessing the object property: Doing a deep or nested destructuring: Wait, what? If I destructured author , why is it not defined? It is actually quite simple. When we ask JavaScript to also destructure the author object, that binding itself is not created and we instead get access to all the author properties we selected. So please always remember that. Spread operator ( … ): Additionally, we can use the spread operator ... to create an object with all the properties that did not get destructured. If you are interested in learning more about the spread operator, check out my article. Renaming properties One great property of destructuring is the ability to choose a different name for the variable to the property we are extracting. Let’s look at the following example: By using : on a property, we can provide a new name for it (in our case, newName ). And then we can access that variable in our code. It’s important to note that a variable with the original property name won’t be defined. Missing properties So what would happen if we tried to destructure a property that is not defined in our object? In this case, the variable is created with value undefined . Default values Expanding on missing properties, it’s possible to assign a default value when the property does not exist. Let’s see some examples of this:
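The embedded code examples from the original post do not survive in this text version, so the following is a consolidated reconstruction sketch (the article object is hypothetical) covering the cases discussed above: deep destructuring, renaming, missing properties, default values, and the rest pattern.

// Sketch of the destructuring features described in this section
const article = {
  title: "Clean Code",
  rating: 5,
  author: { name: "Juan" },
};

// deep destructuring + renaming: author.name is bound as authorName
const { title, author: { name: authorName } } = article;

// missing property -> undefined; a default only applies when the value is undefined
const { subtitle, rating = 0, views = 100 } = article;

// rest pattern: collect everything that was not destructured
const { title: t, ...rest } = article;

console.log(title, authorName);       // "Clean Code" "Juan"
console.log(subtitle, rating, views); // undefined 5 100
console.log(rest);                    // { rating: 5, author: { name: "Juan" } }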
https://medium.com/better-programming/write-cleaner-code-by-using-javascript-destructuring-cd6b55c25bac
['Juan Cruz Martinez']
2020-07-17 14:15:21.852000+00:00
['Nodejs', 'React', 'Programming', 'JavaScript', 'Typescript']
Building a Treemap with JavaScript
Building a Treemap with JavaScript At Foxintelligence, while building our data visualization platform we wanted to challenge the pie chart. This is our journey of building a Treemap in JavaScript from scratch. EDIT: an open source JavaScript package I published an open source JavaScript package that calculates the Treemap. It uses the algorithm described in this article and is available both with npm and in the browser. https://www.npmjs.com/package/treemap-squarify The goal: an effective data visualization The goal was to find the best way to represent market shares among categories in e-commerce. The pie chart representation is the first one that comes to mind for this kind of data. Colorful pie charts However, it can be hard to compare different categories in a pie chart. The pain points of this type of visualization are: It’s hard to compare pie slices It’s hard to display labels on pie slices It’s hard to compare two pie charts What kind of data visualization can we use instead? 🙌 The Treemap That’s where we discovered the Treemap. A Treemap is a way to visualize hierarchical data in a rectangle (hence tree in Treemap). The value of one data point is proportional to its area in the main rectangle. One popular use is to visualize a computer file system. Here is an example of the initial tree and its counterpart on a Treemap. A tree and its Treemap from [1] In our case however, we don’t use it this way as we have a list of data instead of a tree. Nevertheless, we wanted to keep the idea of having one data point represented by an area on one main rectangle. In the following example, we have one main rectangle which represents 100% and each rectangle is a category market share of x%. Market shares on a Treemap Why is it better? First it’s visually easier to compare rectangle area than pie slices. It’s also easier for the label of each area to be displayed on the graphic instead of having to refer to the legend each time you need to. Plus we wanted to compare one merchant with the market (or with another merchant), and having two pie charts next to each other is not readable. 🎯 The specifications So we’ve decided to use a Treemap as our data visualization. Let’s now talk about the specifications from the design/product team. Each area should be a rectangle that is as square as possible (which is the Squarified Treemap type of Treemap) type of Treemap) We want 100% customization possibility over the UI (colors, hover effects, label or icon inside each square)
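To make the "area proportional to value" idea concrete before diving into the algorithm, here is a deliberately naive sketch. It slices the main rectangle vertically rather than producing near-square tiles, so it is not the squarified algorithm and not the treemap-squarify package's API; it only illustrates the proportionality principle that both rely on.

// Naive "slice" layout: each rectangle's area is proportional to its value
function sliceTreemap(data, width, height) {
  const total = data.reduce((sum, d) => sum + d.value, 0);
  let y = 0;
  return data.map((d) => {
    const h = (d.value / total) * height; // height share = value share
    const rect = { label: d.label, x: 0, y, width, height: h };
    y += h;
    return rect;
  });
}

const shares = [
  { label: "Category A", value: 50 },
  { label: "Category B", value: 30 },
  { label: "Category C", value: 20 },
];
console.log(sliceTreemap(shares, 600, 400));
// Category A gets half the area, B gets 30%, C gets 20%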
https://medium.com/foxintelligence-inside/building-a-treemap-with-javascript-4d789ad43a85
['Clément Bataille']
2020-04-17 19:25:01.252000+00:00
['Data Visualization', 'Software Engineering', 'JavaScript', 'Vuejs', 'Technology']
Go: AWS Lambda Project Structure Using Golang
Go: AWS Lambda Project Structure Using Golang dm03514 Follow May 16 · 4 min read AWS Lambda is a serverless solution which enables engineers to deploy single functions. AWS Lambda handles orchestrating, executing, scaling the function invocations. It’s important to structure go lambda projects so that the lambda is a simple entry point into the application, equivalent to cmd/ . After a project is structured, it important to keep logic outside the lambda, which allows for easy reuse and testing of the application logic. The following are a series of steps which can be used in Go based lambda projects to help keep projects structured and increase the testability of lambda-based projects. Structure In Go, it’s common to see a cmd/ directory which contains CLI entry points into an application. Using a test project with 2 separate apps, the layout appears as: $ tree test-go-lambda/ test-go-lambda/ └── cmd ├── app1 │ └── main.go └── app2 └── main.go It’s helpful to take the same approach with lambdas: $ tree test-go-lambda/ test-go-lambda/ ├── cmd │ ├── app1 │ │ └── main.go │ └── app2 │ └── main.go └── lambda ├── lambda1 │ └── main.go └── lambda2 └── main.go All lambdas live in the lambda directly, and each directory within lambda contains a single lambdas main command. In the example above there are two lambdas, lambda1 and lambda2 . Each contains a main.go file with a main command which can be executed by AWS lambda's go runtime. The benefits to this convention are the same as the cmd/ convention: Makes it easier to inventory entry points into the code base Helps to reduce onboarding friction Provides a structured convention which makes builds easier This convention simplifies build tooling by providing a single location for all lambdas. It’s trivial to package all lambdas in the same .zip or generate a .zip per lambda. “Thin” Lambdas “Thin” lambdas delegate to other functions. I like to structure it so that a lambda delegates to a single function. AWS Lambda documentation recommends this as a best practice: Separate the Lambda handler from your core logic. To achieve this, lambdas can delegate to a single domain logic entry point. Each lambda will: Parse the environment and initialize a domain specific config in main Initialize domain logic in Handler Delegate to the domain logic and return an error Which looks like: func Handler(ctx context.Context, e events.CloudWatchEvent) error { var conf Config // initialize conf doer, err := domain.NewDoer(conf) if err != nil { return err } return doer.Do(ctx, e) } func main() { lambda.Start(Handler) } “Lambda” Structs for Configuration AWS Lambda can reuse execution environments for individual lambdas, which means that AWS may keep handlers alive and invoke them multiple times. Resources, like database connections, initialized outside of the handler function can be maintained for multiple handler invocations! I like to call these main scoped resources, opposed to handler scoped resources. I like to setup each lambda main.go file to have a structure: # lambda/x/main.go type Lambda struct { Conf domain.SpecificConf } func (l Lambda) Handler(...) error { // handler scoped l.Conf configuration doer, err := domain.NewDoer(l.Conf) if err != nil { return err } return doer.Do(ctx, e) } func main() { // initialize resources // build conf l := Lambda{ Conf: domain.SpecificConf{ // main scoped configuration } } lambda.Start(l.Handler) } Examples of main scoped configuration are: Environmental Variables Session i.e. 
AWS-SDK sessions and clients Database connections (redis, mysql, postgres, elastic search, etc.) Examples of handler scoped configuration: Times / Timing AWS Lambda Payload / Parameter based configuration Each lambda is configured through its environment. I like to use envdecode to handle parsing the environmental variables in the main function: func main() { // initialize resources // build conf conf := domain.SpecificConf{ // main scoped configuration // db connections, aws-sdk sessions, etc } // pull in environment based config if err := envdecode.Decode(&conf); err != nil { panic(err) } l := Lambda{ Conf: conf, } lambda.Start(l.Handler) } Expose SQS Interface on Cron Lambdas One great use case for lambda is “cron” based workloads (through cloudwatch events). This is where AWS executes a lambda function on a fixed schedule. Many of these include a dead letter queue for failed messages and a lambda that consumes from dead letter queue. Since the messages in the dead letter queue contain the original message the dead letter queue lambda is usually very similar to the original lambda in terms of configuration and resource dependency. What does change is that the dead letter queue messages have a different structure, events.SQSEvent . One way to handle this is to expose a new SQSHandler which delegates to the original: type LambdaConf struct { HandlerType string `env:"HANDLER_TYPE,default=cloudwatch_event"` } type Lambda struct {} func (l *Lambda) SQSHandler(ctx context.Context, sqsEvent events.SQSEvent) error { var e events.CloudWatchEvent for _, record := sqsEvent.Records { if err := json.Unmarshal([]byte(record.Body), &e); err != nil { log.Printf("event: %+v ", record) return fmt.Errorf("unable to parse event into CloudWatchEvent: %s", err) } // delegate to l.Handler(ctx, e) } } func (l *Lambda) Handler(ctx context.Context, e events.CloudWatchEvent) error { doer, err := domain.NewDoer() if err != nil { return err } return doer.Do(ctx, e) } func main() { l := Lambda{} lambdaConf := LambdaConf{} if err := envdecode.Decode(&lambdaConf); err != nil { panic(err) } if lambdaConf.HandlerType == "sqs_event" { lambda.Start(l.SQSHandler) } else { lambda.Start(l.Handler) } } The SQSHandler delegates to the Handler in order to reduce duplication and keep the handlers "thin". Conclusion If careful attention isn’t paid to go-based lambdas, projects risk creating untestable, hard to work with lambdas. Placing lambdas in their own directory makes lambda discovery and builds easy. Minimizing business logic inside of lambdas makes it easier to isolate logic for unit tests. I’ve found that projects that follow the above structure are much easier to understand, test, build and extend. Happy Hacking!
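One usage note worth adding to the testability point above: because the handler only delegates, the domain logic can be exercised in an ordinary Go test without the Lambda runtime at all. A sketch, reusing the hypothetical domain package, NewDoer, and SpecificConf names from this post (the module path is illustrative):

// Sketch: unit testing the domain logic directly, no Lambda runtime involved
package domain_test

import (
	"context"
	"testing"

	"github.com/example/project/domain" // hypothetical module path

	"github.com/aws/aws-lambda-go/events"
)

func TestDoerHandlesCloudWatchEvent(t *testing.T) {
	// main-scoped dependencies can be swapped for fakes via the config
	doer, err := domain.NewDoer(domain.SpecificConf{})
	if err != nil {
		t.Fatalf("NewDoer: %v", err)
	}

	e := events.CloudWatchEvent{DetailType: "Scheduled Event"}
	if err := doer.Do(context.Background(), e); err != nil {
		t.Fatalf("Do: %v", err)
	}
}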
https://medium.com/dm03514-tech-blog/go-aws-lambda-project-structure-using-golang-98b6c0a5339d
[]
2020-05-16 23:56:20.497000+00:00
['Software Engineering', 'AWS Lambda', 'Software Development', 'Golang', 'AWS']
Understanding Design Patterns: Facade using Pokemon and Dragonball Examples!
There are 23 classic design patterns, which are described in the original book, Design Patterns: Elements of Reusable Object-Oriented Software . These patterns provide solutions to particular problems, often repeated in the software development. In this article, I am going to describe the how the Facade Pattern; and how and when it should be applied. Facade Pattern: Basic Idea The facade pattern (also spelled façade) is a software-design pattern commonly used in object-oriented programming. Analogous to a facade in architecture, a facade is an object that serves as a front-facing interface masking more complex underlying or structural code. — Wikipedia Provide a unified interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystem easier to use.- Design Patterns: Elements of Reusable Object-Oriented Software The main feature of this pattern is using a class which simplifies the interface of a complex system. Therefore, these are two problems that this pattern resolves: Complex subsystem are easier to use. The dependencies on a subsystem are minimized. To sum up, the facade pattern contains several instance of different classes which must be hidden to the client. This is the way in which the interface is simplified. The UML’s diagram of this pattern is the following one: The Facade class is a middleware between modules and the external client. In the UML there is a single Facade class but the pattern can be used between different layers when the interface is very complex. Facade Pattern: When To Use There is a complex system and you need a simple interface to communicate with it. The code is tightly coupled due to the clients need a wide knowdlege about the system. The Facade pattern allows reduce the coupled between components. The system need an entry point to each level of layered software. The Facade Pattern has several advantages, summarised in the following points: The code is more easier to use, understand and test since the facade simplify the interface. since the facade simplify the interface. Clean code because the client/context does not use a complex interface and the system is more flexible and reusable. Facade pattern — Example 1: A client want to use several class from different systems I will now show you how you can implement this pattern using JavaScript/TypeScript. In our case, I have made up a problem in which there is a class named Client which defines two methods that use several classes from different packages ( System1 and System2 ). These packages are composed by several classes which have several public methods. The following UML diagram shows the scenario that I have just described. The client code associate is the following ones: The main problem in this solution is that the code is coupled. Meaning that, the client needs to known where is and how works each class. The large list of imports is the first symptom that a facade is the solution of our problem. Another warning symptom is that client required a wide knowledge about the operation of each class. The solution is to use an facade pattern that consists in a class ( Facade ) which uses System1 and System2 . I.e, the new UML diagram using the adapter pattern is shown below: The code associate to the client and facade are the following ones: In the new code the client delegates the responsability to the facade, but the facade is doing the same functionality that client did. 
In fact, if the code is increasing the facade can be a antipattern called BLOB (https://sourcemaking.com/antipatterns/the-blob). So, a good idea is use a facade in each package such as you can see in the following UMLs: The code associate to the client , facade , facadeSystem1 and facadeSystem2 in this solution are the following ones: The client is exactly the same that in the previously version. The facade uses each of the facades created for each subsystem. Now the more important is that the Facade class only knows the interface that is provides by FacadeSystem1 and FacadeSystem2 . The FacadeSystem1 and FacadeSystem2 only know the classes of their package. It is very important to remind that each facade exports only the classes that are meant to be public, and these methods can be the combination of several methods between internal classes. I have created several npm scripts that run the code’s examples shown here after applying the Facade pattern. npm run example1-problem npm run example1-facade-solution-1 npm run example1-facade-solution-2 Facade pattern — Example 2: Pokemon and DragonBall Package together! Another interesting problem which is resolved using Facade pattern is when there are several packages with different interfaces but they can works together. In the following UML’s diagram you can see this situation: In this case, the client uses the packages DragonballFacade and PokemonFacade . So, the client only needs to know the interface provided by theses facade. For example, DragonballFacade provides a method called genki which calculates the value of several objects working together. In other hand, PokemonFacade provides a method called calculateDamage which interacts with the rest of classes of its package. The code associate to the client is the following ones: And the code associated to the facades are the following ones: I have created two npm scripts that run the two examples shown here after applying the Facade pattern. npm run example2-problem npm run example2-facade-solution1 A great advantage in favor of the façade is developing the simplest system from one not that simple. For example, in the dragon ball package there is an adapter pattern which does not affect the correct behavior of the client. But the complexity of the Pokemon package is greater since there is a design pattern called Template-Method for the method of calculateDamage and a factory pattern for the creation of different pokemons. All this complexity is hidden by the facades and any change in these classes does not affect the client's behavior whatsoever, which has allowed us to create much more uncoupled system. Conclusion Facade pattern can avoid complexity in your projects, when there are several packages communicating with each other, or a client that requires the use of several classes the facade pattern is perfectly adapted. The most important thing has not implement the pattern as I have shown you, but to be able to recognize the problem which this specific pattern can resolve, and when you may or may not implement said pattern. This is crucial, since implementation will vary depending on the programming language you use. More more more…
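The gists referenced above ("the code associate to the client and facade are the following ones") are not included in this text version. A minimal TypeScript sketch of the layered-facade structure from example 1 might look like the following; the class and method names are illustrative, not the author's exact code.

// Each package exposes its own facade; the top-level Facade is all the client sees
class SystemA { taskA(): string { return "A done"; } }
class SystemB { taskB(): string { return "B done"; } }

class FacadeSystem1 {
  private a = new SystemA();
  private b = new SystemB();
  // exports only what is meant to be public, combining internal calls as needed
  operation1(): string { return `${this.a.taskA()} + ${this.b.taskB()}`; }
}

class SystemC { taskC(): string { return "C done"; } }

class FacadeSystem2 {
  private c = new SystemC();
  operation2(): string { return this.c.taskC(); }
}

class Facade {
  private s1 = new FacadeSystem1();
  private s2 = new FacadeSystem2();
  doWork(): string { return `${this.s1.operation1()} | ${this.s2.operation2()}`; }
}

// The client depends on a single, simple interface and knows nothing about the subsystems
const client = new Facade();
console.log(client.doWork()); // "A done + B done | C done"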
https://medium.com/hackernoon/understanding-design-patterns-facade-using-pokemon-and-dragonball-examples-5aeaa49e2b64
['Carlos Caballero']
2019-07-07 17:01:25.093000+00:00
['Design Patterns', 'Software Development', 'Facade', 'JavaScript', 'Typescript']
Trichomoniasis: A Common Sexually Transmitted Infection You May Have Never Heard About
Trichomoniasis: A Common Sexually Transmitted Infection You May Have Never Heard About A doctor’s guide to protecting yourself and others Photo by Nik Shuliahin on Unsplash “What the heck is trichomoniasis,” is the most common reaction to the diagnosis of this sexually transmitted infection. While not as common as HPV, chlamydia, and herpes, this less well-known infection affects 174 million people infected around the world each year. No one wants to hear it, but any sexually active person in a nonmonogamous relationship is potentially at risk. What is trichomoniasis? Trichomoniasis is a common and curable sexually transmitted disease caused by a single-celled protozoan parasite called Trichomonas vaginalis. The parasite lives in the urethra in men and the lower genital tract in women. The female anatomic parts included are the vagina, urethra (where urine comes out), and the cervix (the opening of the uterus). The colloquial term for this STI is “trich” (pronounced Trick). Trichomonads are tiny parasitic bugs living in body fluids such as semen and vaginal secretions. They jump from one person to another when fluids transmit during sexual activity. How do you know if you have trichomoniasis? Most people have no symptoms at all. Only about 30% of people infected develop signs and symptoms meaning asymptomatic carriers pass the infection from one partner to another. An asymptomatic carrier is someone who has the disease but does not know it. Without testing, people who do not know they have trichomoniasis unknowingly spread the infection to their partners. When symptoms occur, they appear one to four weeks after infection. Typically, men do not have symptoms at all. Some men may have an irritation on the inside of the penis, mild discharge, testicular pain, or slight burning after urination or ejaculation. Women have symptoms more often than men. Trichomoniasis causes a malodorous discharge that is frothy and yellow-green. It is a common cause of vaginal itching, often leading to incorrect self-treatment with over-the-counter yeast medication. This STI can lead to irritation of the genital area and discomfort during sex and urination. How do you catch trichomoniasis? There are two ways most STDs are transmitted: fluid transmission or skin-to-skin contact. Trichomoniasis is a fluid transmitted infection. It is transmitted when bodily fluids from one person are shared with another via vaginal, anal, or oral sex. Fluids are present in the vagina, penis, mouth, and anus. Infections can occur even without ejaculation. To keep things as clear as possible, any sexual act involving the exchange of bodily fluids allows trichomoniasis to spread from one person to another. The parasite can survive on surfaces for around 45 minutes. Sex toys should be properly cleaned after use. Photo by National Cancer Institute on Unsplash How do you diagnosis trichomoniasis? Most of the time, a simple physical exam can not accurately diagnose trichomoniasis. Experienced medical providers may suspect the infection based on the classic musty odor and appearance of discharge. A confirmation test is the most accurate way to diagnose this STI. A microscope can be used to evaluate fluid from the vagina, penis, or urine to look for the parasites using a technique called a wet prep. A microscopic examination allows for immediate point-of-care diagnosis but will miss many infections. A nucleic acid amplification test (NAAT) is the most accurate way to confirm the infection. 
The CDC does not have specific recommendations on who should get tested for trichomoniasis. Risk factors include new sex partners, multiple sex partners, men who have sex with men, a sex partner with concurrent partners, and a partner who has a sexually transmitted infection. How is trichomoniasis treated? Fortunately, we can easily treat trichomoniasis with antibiotics. It is a curable STI. Metronidazole is the most common antibiotic and is most effective when given in a single dose. High-dose metronidazole can cause nausea, and you must avoid alcohol while taking this medication. All sex partners should be notified, evaluated, tested, and treated. The parasite is harder to detect in men, making male testing less reliable. Male partners of trichomoniasis-positive women should be treated regardless of their results. This strategy reduces the risk of reinfection by an untreated partner and the spread to future partners. Abstain from unprotected sexual contact until all partners have completed their treatment. I recommend a follow-up test to confirm treatment success (a test of cure), but this is not the official standard of care. Prevention is key Prevention is best achieved by abstaining from sexual activity or by being in a long-term, mutually monogamous relationship. Using latex condoms consistently and correctly can reduce the risk of transmission. Condoms are highly effective in preventing fluid-transmitted sexually transmitted infections. Water-based lubricants combined with latex condoms provide the most protection; non-water-based lubricants can break down latex and reduce protection. Men and women with risk factors, including a new sex partner or multiple sex partners, should undergo testing. Testing for all sexually transmitted infections helps keep you and your partners safe.
https://medium.com/beingwell/trichomoniasis-a-common-sexually-transmitted-infection-you-may-have-never-heard-about-c7911bd4e9d7
['Dr Jeff Livingston']
2020-05-24 18:49:32.239000+00:00
['Health', 'Public Health', 'Relationships', 'Sexuality', 'Society']
Data Lake Change Data Capture (CDC) using Apache Hudi on Amazon EMR — Part 2—Process
Data Lake Change Data Capture (CDC) using Apache Hudi on Amazon EMR — Part 2—Process Easily process data changes over time from your database to Data Lake using Apache Hudi on Amazon EMR Image by Gino Crescoli from Pixabay In a previous article below we had discussed how to seamlessly collect CDC data using Amazon Database Migration Service (DMS). https://towardsdatascience.com/data-lake-change-data-capture-cdc-using-amazon-database-migration-service-part-1-capture-b43c3422aad4 The following article will demonstrate how to process CDC data such that a near real-time representation of the your database is achieved in your data lake. We will use the combined power of of Apache Hudi and Amazon EMR to perform this operation. Apache Hudi is an open-source data management framework used to simplify incremental data processing in near real time. We will kick-start the process by creating a new EMR Cluster $ aws emr create-cluster --auto-scaling-role EMR_AutoScaling_DefaultRole --applications Name=Spark Name=Hive --ebs-root-volume-size 10 --ec2-attributes '{"KeyName":"roopikadf","InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-097e5d6e","EmrManagedSlaveSecurityGroup":"sg-088d03d676ac73013","EmrManagedMasterSecurityGroup":"sg-062368f478fb07c11"}' --service-role EMR_DefaultRole --release-label emr-6.0.0 --name 'Training' --instance-groups '[{"InstanceCount":3,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"CORE","InstanceType":"m5.xlarge","Name":"Core - 2"},{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"MASTER","InstanceType":"m5.xlarge","Name":"Master - 1"}]' --scale-down-behavior TERMINATE_AT_TASK_COMPLETION --region us-east-1 --bootstrap-actions Path=s3://aws-analytics-course/job/energy/emr.sh,Name=InstallPythonLibs After creating the EMR cluster logon to the Master Node using SSH and issue the following commands. These commands will copy the Apache Hudi JAR files to S3. $ aws s3 cp /usr/lib/hudi/hudi-spark-bundle.jar s3://aws-analytics-course/hudi/jar/ upload: ../../usr/lib/hudi/hudi-spark-bundle.jar to s3://aws-analytics-course/hudi/jar/hudi-spark-bundle.jar $ aws s3 cp /usr/lib/spark/external/lib/spark-avro.jar s3://aws-analytics-course/hudi/jar/ upload: ../../usr/lib/spark/external/lib/spark-avro.jar to s3://aws-analytics-course/hudi/jar/spark-avro.jar $ aws s3 ls s3://aws-analytics-course/hudi/jar/ 2020-10-21 17:00:41 23214176 hudi-spark-bundle.jar 2020-10-21 17:00:56 101212 spark-avro.jar Now create a new EMR notebook and upload the notebook available at the following location. Upload hudi/hudi.ipynb $ git clone https://github.com/mkukreja1/blogs.git Create a Spark session using the Hudi JAR files uploaded to S3 in the previous step. 
from pyspark.sql import SparkSession import pyspark from pyspark.sql.types import StructType, StructField, IntegerType, StringType, array, ArrayType, DateType, DecimalType from pyspark.sql.functions import * from pyspark.sql.functions import concat, lit, col spark = pyspark.sql.SparkSession.builder.appName("Product_Price_Tracking") \ .config("spark.jars", "s3://aws-analytics-course/hudi/jar/hudi-spark-bundle.jar,s3://aws-analytics-course/hudi/jar/spark-avro.jar") \ .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \ .config("spark.sql.hive.convertMetastoreParquet", "false") \ .getOrCreate() Lets read the CDC files. We will start by reading the full load file. TABLE_NAME = "coal_prod" S3_RAW_DATA = "s3://aws-analytics-course/raw/dms/fossil/coal_prod/LOAD00000001.csv" S3_HUDI_DATA = "s3://aws-analytics-course/hudi/data/coal_prod" coal_prod_schema = StructType([StructField("Mode", StringType()), StructField("Entity", StringType()), StructField("Code", StringType()), StructField("Year", IntegerType()), StructField("Production", DecimalType(10,2)), StructField("Consumption", DecimalType(10,2)) ]) df_coal_prod = spark.read.csv(S3_RAW_DATA, header=False, schema=coal_prod_schema) df_coal_prod.show(5) +----+-----------+----+----+----------+-----------+ |Mode| Entity|Code|Year|Production|Consumption| +----+-----------+----+----+----------+-----------+ | I|Afghanistan| AFG|1949| 0.04| 0.00| | I|Afghanistan| AFG|1950| 0.11| 0.00| | I|Afghanistan| AFG|1951| 0.12| 0.00| | I|Afghanistan| AFG|1952| 0.14| 0.00| | I|Afghanistan| AFG|1953| 0.13| 0.00| +----+-----------+----+----+----------+-----------+ only showing top 5 rows Apache Hudi requires a primary key to singularly identify each record. Typically, a sequentially generated primary key is best for this purpose. However our table does not have one. Tosolve this issue let us generate a PK by using a composite of Entity and Year columns. The key column below will be used as the primary key. df_coal_prod=df_coal_prod.select("*", concat(col("Entity"),lit(""),col("Year")).alias("key")) df_coal_prod_f=df_coal_prod.drop(df_coal_prod.Mode) df_coal_prod_f.show(5) +-----------+----+----+----------+-----------+---------------+ | Entity|Code|Year|Production|Consumption| key| +-----------+----+----+----------+-----------+---------------+ |Afghanistan| AFG|1949| 0.04| 0.00|Afghanistan1949| |Afghanistan| AFG|1950| 0.11| 0.00|Afghanistan1950| |Afghanistan| AFG|1951| 0.12| 0.00|Afghanistan1951| |Afghanistan| AFG|1952| 0.14| 0.00|Afghanistan1952| |Afghanistan| AFG|1953| 0.13| 0.00|Afghanistan1953| +-----------+----+----+----------+-----------+---------------+ only showing top 5 rows We are now ready to save the data in the Hudi format. Since this is the very first time we are saving this table we will use the “bulk_insert” operation and mode=overwrite. Also notice that we are using the “key” column as the recordkey. df_coal_prod_f.write.format("org.apache.hudi") \ .option("hoodie.table.name", TABLE_NAME) \ .option("hoodie.datasource.write.storage.type", "COPY_ON_WRITE") \ .option("hoodie.datasource.write.operation", "bulk_insert") \ .option("hoodie.datasource.write.recordkey.field","key") \ .option("hoodie.datasource.write.precombine.field", "key") \ .mode("overwrite") \ .save(S3_HUDI_DATA) We can now read the newly create Hudi table. 
df_final = spark.read.format("org.apache.hudi")\ .load("s3://aws-analytics-course/hudi/data/coal_prod/default/*.parquet") df_final.registerTempTable("coal_prod") spark.sql("select count(*) from coal_prod").show(5) spark.sql("select * from coal_prod where key='India2013'").show(5) +--------+ |count(1)| +--------+ | 6282| +--------+ +-------------------+--------------------+------------------+----------------------+--------------------+------+----+----+----------+-----------+---------+ |_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path| _hoodie_file_name|Entity|Code|Year|Production|Consumption| key| +-------------------+--------------------+------------------+----------------------+--------------------+------+----+----+----------+-----------+---------+ | 20201021215857|20201021215857_54...| India2013| default|8fae00ae-34e7-45e...| India| IND|2013| 2841.01| 0.00|India2013| +-------------------+--------------------+------------------+----------------------+--------------------+------+----+----+----------+-----------+---------+ Notice that we have 6282 rows from the full load and data for key India2013. This key will updated in the next operation so it is important to note the history. We will now read the incremental data. The incremental data came with 4 rows — 2 rows were Inserted, one row was Updated, and one row is Deleted. We will handle the Inserted and Updated rows first. notice the filter for (“Mode IN (‘U’, ‘I’)”) below. S3_INCR_RAW_DATA = "s3://aws-analytics-course/raw/dms/fossil/coal_prod/20200808-*.csv" df_coal_prod_incr = spark.read.csv(S3_INCR_RAW_DATA, header=False, schema=coal_prod_schema) df_coal_prod_incr_u_i=df_coal_prod_incr.filter("Mode IN ('U', 'I')") df_coal_prod_incr_u_i=df_coal_prod_incr_u_i.select("*", concat(col("Entity"),lit(""),col("Year")).alias("key")) df_coal_prod_incr_u_i.show(5) df_coal_prod_incr_u_i_f=df_coal_prod_incr_u_i.drop(df_coal_prod_incr_u_i.Mode) df_coal_prod_incr_u_i_f.show() +----+------+----+----+----------+-----------+---------+ |Mode|Entity|Code|Year|Production|Consumption| key| +----+------+----+----+----------+-----------+---------+ | I| India| IND|2015| 4056.33| 0.00|India2015| | I| India| IND|2016| 4890.45| 0.00|India2016| | U| India| IND|2013| 2845.66| 145.66|India2013| +----+------+----+----+----------+-----------+---------+ +------+----+----+----------+-----------+---------+ |Entity|Code|Year|Production|Consumption| key| +------+----+----+----------+-----------+---------+ | India| IND|2015| 4056.33| 0.00|India2015| | India| IND|2016| 4890.45| 0.00|India2016| | India| IND|2013| 2845.66| 145.66|India2013| +------+----+----+----------+-----------+---------+ We are now ready to perform a Hudi Upsert operation for the incremental data. Since this table already exists this time we will use the append option. df_coal_prod_incr_u_i_f.write.format("org.apache.hudi") \ .option("hoodie.table.name", TABLE_NAME) \ .option("hoodie.datasource.write.storage.type", "COPY_ON_WRITE") \ .option("hoodie.datasource.write.operation", "upsert") \ .option("hoodie.upsert.shuffle.parallelism", 20) \ .option("hoodie.datasource.write.recordkey.field","key") \ .option("hoodie.datasource.write.precombine.field", "key") \ .mode("append") \ .save(S3_HUDI_DATA) Check the underlying data. Notice that the 2 new rows have been added so the table count has gone up from 6282 to 6284. Also note the row for key India2013 now has been updated for Production & Consumption columns. 
df_final = spark.read.format("org.apache.hudi")\ .load("s3://aws-analytics-course/hudi/data/coal_prod/default/*.parquet") df_final.registerTempTable("coal_prod") spark.sql("select count(*) from coal_prod").show(5) spark.sql("select * from coal_prod where key='India2013'").show(5) +--------+ |count(1)| +--------+ | 6284| +--------+ +-------------------+--------------------+------------------+----------------------+--------------------+------+----+----+----------+-----------+---------+ |_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path| _hoodie_file_name|Entity|Code|Year|Production|Consumption| key| +-------------------+--------------------+------------------+----------------------+--------------------+------+----+----+----------+-----------+---------+ | 20201021220359|20201021220359_0_...| India2013| default|8fae00ae-34e7-45e...| India| IND|2013| 2845.66| 145.66|India2013| +-------------------+--------------------+------------------+----------------------+--------------------+------+----+----+----------+-----------+---------+ Now we would like to deal with the one Deleted row. df_coal_prod_incr_d=df_coal_prod_incr.filter("Mode IN ('D')") df_coal_prod_incr_d=df_coal_prod_incr_d.select("*", concat(col("Entity"),lit(""),col("Year")).alias("key")) df_coal_prod_incr_d_f=df_coal_prod_incr_d.drop(df_coal_prod_incr_d.Mode) df_coal_prod_incr_d_f.show() +------+----+----+----------+-----------+---------+ |Entity|Code|Year|Production|Consumption| key| +------+----+----+----------+-----------+---------+ | India| IND|2010| 2710.54| 0.00|India2010| +------+----+----+----------+-----------+---------+ We can do this with a Hudi Upsert operation, but we need to use an extra option for deletes: hoodie.datasource.write.payload.class=org.apache.hudi.EmptyHoodieRecordPayload df_coal_prod_incr_d_f.write.format("org.apache.hudi") \ .option("hoodie.table.name", TABLE_NAME) \ .option("hoodie.datasource.write.storage.type", "COPY_ON_WRITE") \ .option("hoodie.datasource.write.operation", "upsert") \ .option("hoodie.upsert.shuffle.parallelism", 20) \ .option("hoodie.datasource.write.recordkey.field","key") \ .option("hoodie.datasource.write.precombine.field", "key") \ .option("hoodie.datasource.write.payload.class", "org.apache.hudi.EmptyHoodieRecordPayload") \ .mode("append") \ .save(S3_HUDI_DATA) We can now check the results. Since one row has been deleted, the count has gone down from 6284 to 6283. Also, the query for the deleted row came back empty. Everything worked as desired.
df_final = spark.read.format("org.apache.hudi")\ .load("s3://aws-analytics-course/hudi/data/coal_prod/default/*.parquet") df_final.registerTempTable("coal_prod") spark.sql("select count(*) from coal_prod").show(5) spark.sql("select * from coal_prod where key='India2010'").show(5) +--------+ |count(1)| +--------+ | 6283| +--------+ +-------------------+--------------------+------------------+----------------------+-----------------+------+----+----+----------+-----------+---+ |_hoodie_commit_time|_hoodie_commit_seqno|_hoodie_record_key|_hoodie_partition_path|_hoodie_file_name|Entity|Code|Year|Production|Consumption|key| +-------------------+--------------------+------------------+----------------------+-----------------+------+----+----+----------+-----------+---+ +-------------------+--------------------+------------------+----------------------+-----------------+------+----+----+----------+-----------+---+ All the code used in this article can be found on the link below: I hope this article was helpful. CDC using Amazon Database Migration Service is covered as part of the AWS Big Data Analytics course offered by Datafence Cloud Academy. The course is taught online by myself on weekends.
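As a small recap, the three Hudi write paths used above (bulk insert, upsert, and delete) differ only in a couple of options, so they can be wrapped in one function. The helper below is only a sketch and not part of the original notebook:
def write_hudi(df, table_name, target_path, operation="upsert", mode="append", delete=False):
    # Common options used for every write in this article
    writer = df.write.format("org.apache.hudi") \
        .option("hoodie.table.name", table_name) \
        .option("hoodie.datasource.write.storage.type", "COPY_ON_WRITE") \
        .option("hoodie.datasource.write.operation", operation) \
        .option("hoodie.upsert.shuffle.parallelism", 20) \
        .option("hoodie.datasource.write.recordkey.field", "key") \
        .option("hoodie.datasource.write.precombine.field", "key")
    if delete:
        # An empty payload tells Hudi to delete the matching record keys
        writer = writer.option("hoodie.datasource.write.payload.class",
                               "org.apache.hudi.EmptyHoodieRecordPayload")
    writer.mode(mode).save(target_path)

# The three steps above then become:
# write_hudi(df_coal_prod_f, TABLE_NAME, S3_HUDI_DATA, operation="bulk_insert", mode="overwrite")
# write_hudi(df_coal_prod_incr_u_i_f, TABLE_NAME, S3_HUDI_DATA)
# write_hudi(df_coal_prod_incr_d_f, TABLE_NAME, S3_HUDI_DATA, delete=True)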
https://towardsdatascience.com/data-lake-change-data-capture-cdc-using-apache-hudi-on-amazon-emr-part-2-process-65e4662d7b4b
['Manoj Kukreja']
2020-10-22 02:17:25.622000+00:00
['Data', 'Machine Learning', 'Data Science', 'Artificial Intelligence', 'AWS']
Using Flask to optimize performance with Mask R-CNN segmentation
Using Flask to optimize performance with Mask R-CNN segmentation How to improve Mask R-CNN segmentation performance using a Flask web service. In a recent project I was facing the challenge of creating a machine-learning-powered photo booth solution. The software allows its users to take pictures of themselves in a car, have the background automagically removed without the use of a green screen, and replaced with another, more pleasing background scenery. Among other things, performance was key here. The segmentation needed to be finished in the time it takes the user to exit the car and walk up to the nearby pickup station. That gives us around five to ten seconds. In this story you'll learn how you can use Flask, a micro web framework written in Python, to drastically reduce the time it takes for the pre-trained Mask R-CNN model to generate bounding boxes and segmentation masks for the captured photograph. Mask R-CNN The Mask R-CNN for object detection and segmentation framework is a Python implementation based on the work published by Facebook Research and allows you to get bounding boxes and classifications for a variety of object types, such as cars and people, for any input image provided. Of course its performance is heavily bound to the computational power of the host machine. That said, we built a powerful rig equipped with an Nvidia Titan RTX GPU with CUDA support and a high-end CPU for any CPU-bound work that needed to be done, which is mostly OpenCV-based image processing and error corrections. With this setup we achieved a total computation time of around 15 seconds per photograph for an input/output resolution of 1920x1920 pixels. Here's one example of the processing steps from input to output with me and my colleague: Background segmentation steps and output So how does Flask help here? Once you have started playing around with Mask R-CNN you will notice that the model and its weights need to be loaded and initialized whenever you want to run segmentation on an image. Say you had a Python script called my_segmentation_script.py which takes the path to the input image as an argument; you'd execute it like this:
python my_segmentation_script.py path/to/input.jpg
Now this runs fine and will perform the required work, assuming you have set up Mask R-CNN properly. The issue here is that once the script is done it will clean up any allocated resources and release the model and its weights again. So for any subsequent image they have to be loaded again, which costs us up to 10 seconds each time! So let's try and get rid of this overhead using Flask. Follow the instructions over here to install and set up your Flask web service. Once it's running you'll be able to create Views, which you can think of as API endpoints. The following example code creates a GET endpoint for the api/v1/ route of your Flask app that returns "Hello World" when called in a browser:
from flask import Flask
app = Flask(__name__)

@app.route('/api/v1/')
def index():
    return "Hello World"
All we need to do now is create a view / endpoint for our segmentation Python code to execute and pass it an input and output path as parameters so it can read in the image and output the segmentation at the desired path. The important differentiation is that we will only load the model and the weights once when the Flask app starts and then re-use them throughout the session / lifetime of the service. Here's the full code and then we'll take a look at it in more detail: Most of this is just boilerplate code for importing the required Python modules and creating the Flask view.
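A minimal sketch of what such a service can look like is shown below. It assumes the Matterport Mask R-CNN implementation, TensorFlow 1.x-style sessions, and a project module main exposing doSegmentation(model, input_path, output_path); the route name, weights file, and config values are assumptions, so adapt them to your own code.
import tensorflow as tf
from keras import backend as K
from flask import Flask, request, jsonify

import mrcnn.model as modellib
from mrcnn.config import Config

import main  # your own module containing doSegmentation()

app = Flask(__name__)

class InferenceConfig(Config):
    NAME = "photo_booth"
    NUM_CLASSES = 1 + 80   # COCO classes
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

# Load the model and its weights only once, when the Flask app starts
sess = tf.Session()
graph = tf.get_default_graph()
K.set_session(sess)

rcnn = modellib.MaskRCNN(mode="inference", model_dir="./logs", config=InferenceConfig())
rcnn.load_weights("mask_rcnn_coco.h5", by_name=True)

@app.route("/api/v1/segmentation", methods=["POST"])
def segmentation():
    body = request.get_json()
    input_file_path = body["input_file_path"]
    output_file_path = body["output_file_path"]
    # Re-use the globally defined session and graph for every request
    with graph.as_default():
        K.set_session(sess)
        main.doSegmentation(rcnn, input_file_path, output_file_path)
    return jsonify({"output_file_path": output_file_path})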
Pay attention to lines 11 through 23, as we are initializing the Keras session here and then loading in the model weights. At line 23 we are then finally assigning the created session to be the active session. Next, we simply define a POST endpoint that takes the input and output path in its request body. Note lines 30, 31 and 32 here, as this is where we are making sure that we are using the globally defined session and graph variables we just created. Using main.doSegmentation(rcnn, input_file_path, output_file_path) we are then passing everything to our segmentation code, which makes use of the pre-loaded weights. Summary By loading the model weights only once and then using them in a shared session throughout the lifetime of the Flask service, we were able to save another ~10 seconds of computation time for a captured photograph to run through background segmentation. Combined with decent hardware, this fulfills our performance requirements for the photo booth to work. Please share in the comments if you run into any issues or if you have questions. I hope you find this little trick useful and can boost your performance just as I did!
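For completeness, calling the endpoint from another process could look like this (a sketch; the route name and JSON keys follow the example above and are assumptions):
import requests

payload = {
    "input_file_path": "/data/input/photo_001.jpg",
    "output_file_path": "/data/output/photo_001_segmented.png"
}
response = requests.post("http://localhost:5000/api/v1/segmentation", json=payload)
print(response.json())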
https://medium.com/medialesson/using-flask-to-optimize-performance-with-mask-r-cnn-segmentation-39752f153029
['Dino Fejzagić']
2020-07-06 09:41:50.508000+00:00
['Keras', 'Flask', 'AI', 'Python', 'TensorFlow']
It’s Just Us Down Here
After the fire comes the smoke. It blew into Albuquerque last Tuesday on, oddly, a cold front. I’d seen the photos of orange skies raining ash and heat a thousand miles west of us, so I guess that’s what I expected: winds that burn. Instead they were frigid. They chilled us and pricked goosebumps from our skin. Earlier that morning, the weather was so mild that we’d opened all our windows. Then came the crash of icy air off the West Mesa. Around midday, the screens started to rattle and howl. The sky took on that close-hanging dirt-colored tinge I associate with tornado warnings from back when I lived in the Midwest. Here in New Mexico, it meant wildfire smoke, sweeping a desert’s worth of sand along with it. A fine grain of dirt slanted in through the still-open windows with each gust. Its grit gradually collected on our dining room table. We’d already placed a grocery order to pick up curbside — a new pandemic routine. After another day working from home, we drove across town together, Harrison behind the wheel. Strapped in passively beside him, I could feel the wind driving against the car, strong enough to make its metal shudder. I knew the wind had picked up, but I didn’t expect such detritus all around. The roads were littered with blowing leaves, twigs, whole branches — whole trees knocked cockeyed, root balls half-ripped from the earth. The power was out at some intersections. As we approached San Mateo and I-40, we saw the traffic lights sputter and go dark before us like candles in a storm. It was the worst place for it to happen, for us to be: three northbound lanes, three southbound, an off-ramp funneling interstate traffic out into the mix. Everyone clumsily came to a standstill. Then ten hulking beasts at a time tentatively revved their engines as drivers jerked and dodged, hesitant to believe everyone agreed this was now a four-way stop. Did we? Did we all agree, without being able to speak across our sealed-up containers? Harrison and I didn’t even speak to each other. What was there to say? I just dug my nails into the fabric under my legs and hunched slightly. My body anticipated the lurch and smash of getting T-boned; I shook the thought. There’s nothing you can do. Nothing. Just keep your seat belt fastened and wait this out.
https://medium.com/indelible-ink/its-just-us-down-here-b8ca41830e08
['Karie Luidens']
2020-09-17 14:40:27.993000+00:00
['Wildfires', 'Health', 'Mental Health', 'Climate Change', 'Life']
How does Panorama work?. Panorama effect! Yes, that’s what we…
Image Stitching Panorama effect! Yes, that's what we will uncover in this post. Image mosaicing/stitching is the task of stitching two or more input images together. We have information spread across these images which we would like to see at once. In a single image! We have all used Panorama mode on a mobile camera. In this mode, the image-mosaicing algorithm runs to capture and combine images. But we can also run the same algorithm offline. See the image below, where we have pre-captured images and we want to combine them. Observe that the hill is partially visible in both inputs. When you have multiple images to combine, the general approach is to pairwise add 2 images at a time. The output of the last addition is used as an input for the next image. The assumption is that each pair of images under consideration has certain features in common. Left and Right images stitched together using local features of interest points Let's see the step by step procedure for creating a panorama image: Step 1] Read Input Images Take images with some overlapping structure/object. If you are using your own camera, make sure you do not change camera properties while taking pictures. Moreover, do not take many similar structures in the frame. We do not want to confuse our tiny little algorithm. Now read the first two images in OpenCV.
import cv2

def createPanorama(input_img_list):
    images = []
    dims = []
    for index, path in enumerate(input_img_list):
        print(path)
        images.append(cv2.imread(path))
        dims.append(images[index].shape)
    return images, dims
Step 2] Compute SIFT features Detect features/interest points for both images. These points are unique identifiers which are used as markers. We will use SIFT features. It is a popular local feature detection and description algorithm. It is used in many computer vision object matching tasks. Some other examples of feature descriptors are SURF and HOG. SIFT uses a pyramidal approach based on DoG (Difference of Gaussians). Features thus obtained will be invariant to scale. It is well suited for panorama-style applications wherein images might have feature variations in rotation, scale, lighting, etc.
## Define the feature detector
sift = cv2.xfeatures2d.SIFT_create()

points1, des1 = sift.detectAndCompute(image1, None)
points2, des2 = sift.detectAndCompute(image2, None)
Here points1 is a list of key points whereas des1 is a list of descriptors expressed in the feature space of SIFT. Each descriptor will be a 1x128 vector. Step 3] Match strong interest points Now we will be matching points based on their vector representation. We will assume a certain threshold for deciding whether two points are near or not. OpenCV has an inbuilt FLANN-based matcher for this purpose. It is a fast approximate nearest-neighbour matcher which computes the distance between two points described in the SIFT feature space.
## Define flann based matcher
matcher = cv2.FlannBasedMatcher()
matches = matcher.knnMatch(des1, des2, k=2)

# important features (ratio test)
imp = []
for i, (one, two) in enumerate(matches):
    if one.distance < dist_threshold * two.distance:
        imp.append((one.trainIdx, one.queryIdx))
Step 4] Calculate the homography Now that we have matching points identified in both images, we will use them to get the generic relationship between the images. This relationship is defined in the literature as a homography. It is a relationship between image1 and image2 described as a matrix. We need to transform one image into the other image's space using the homography matrix.
We can do either way from image 1 to 2 or image 2 to 1 since the homography matrix (3x3 in this case) will be square and non-singular. Only thing is that we need to be consistent while passing points to homography. I have used my own RANSAC based approach to get a Homography matrix. But you can also use inbuilt OpenCV function cv2.findHomography() ### RANSAC def ransac_calibrate(real_points , image_points, total_points, image_path, iterations): index_list = list(range(total_points)) iterations = min(total_points - 1, iterations) errors = list(np.zeros(iterations)) combinations = [] p_estimations=[] for i in range(iterations): selected = random.sample(index_list,4) combinations.append(selected) real_selected =[] image_selected =[] for x in selected: real_selected.append(real_points[x]) image_selected.append(image_points[x]) p_estimated = dlt_calibrate(real_selected, image_selected, 4) not_selected = list(set(index_list) - set(selected)) error = 0 for num in tqdm(not_selected): # get points from the estimation test_point = list(real_points[num]) test_point = [int(x) for x in test_point] test_point = test_point + [1] try: xest, yest = calculate_image_point(p_estimated, np.array(test_point), image_path) except ValueError: continue error = error + np.square(abs(np.array(image_points[num])-np.asarray([xest,yest]))) # print("estimated :",np.array([xest, yest]) ) # print("actual :",image_points[0]) # print("error :",error) errors.append(np.mean(error)) p_estimations.append(p_estimated) p_final = p_estimations[errors.index(min(errors))] return p_final ,errors, p_estimations Step 5] Transform images into the same space Compute the second image transformed coordinates using the homography matrix output of step 4. image2_transformed = H*image2 Step 6] Let’s do the stitching... After computing transformed images we get two images each having some information separate and some common w.r.t. other. When it is separate the other image will have 0 intensity value at the corresponding location We need to fuse with the help of this info at every location while keeping overlapping info intact. It is a pixel level operation which technically can be optimized by doing image level add, subtract, bitwise_and, etc, but I found the output was getting compromised in doing this manipulation. Output generated with old-school for-loop based approach to choose the maximum pixel from the corresponding input pixels worked better. ## get maximum of 2 images for ch in tqdm(range(3)): for x in range(0, h): for y in range(0, w): final_out[x, y, ch] = max(out[x,y, ch], i1_mask[x,y, ch]) See the output below: Although the post was just a small guide, you can refer to the entire code available here.
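For reference, the inbuilt cv2.findHomography mentioned above, combined with cv2.warpPerspective, condenses steps 4 to 6 into a few lines. This is only a sketch, not the author's code; it assumes image1, image2, points1, points2 and the matched pairs in imp from the earlier steps:
import cv2
import numpy as np

# imp holds (trainIdx, queryIdx) pairs from the FLANN ratio test above
src_pts = np.float32([points2[t].pt for (t, q) in imp]).reshape(-1, 1, 2)
dst_pts = np.float32([points1[q].pt for (t, q) in imp]).reshape(-1, 1, 2)

# RANSAC-based homography mapping image2 coordinates into image1's space
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Warp image2 onto a canvas wide enough to hold both images
h1, w1 = image1.shape[:2]
h2, w2 = image2.shape[:2]
warped = cv2.warpPerspective(image2, H, (w1 + w2, max(h1, h2)))

# Place image1 on the same canvas, then keep the maximum pixel wherever the two overlap
canvas = np.zeros_like(warped)
canvas[:h1, :w1] = image1
panorama = np.maximum(canvas, warped)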
https://medium.com/tech-that-works/how-does-panorama-work-image-stitching-bf1a9f0e4fa5
['Samrudha Kelkar']
2019-06-03 06:58:26.190000+00:00
['Machine Learning', 'Computer Vision', 'Image Processing', 'Panoramic Image Stitching', 'Panorama']
Sunsets
Peach. That was the color of the sunset the night they discovered Susie Taylor was missing. The community Memorial Day barbeque was in full swing. Katie walked behind her parents a little apprehensive about showing up to a party and not knowing anyone there. Her family had only moved to Hidden Lake two weeks before, and in that first week, they had met all their immediate neighbors. The people of Hidden Lake were very eager to know them and ingratiate themselves into their lives. She met their first neighbor the day they moved in. Katie was sprawled out on the front lawn as if marking her territory. She looked up at the crisp blue sky and watched as a flock of geese overhead called out flying orders. A box sat at her side labeled bedroom, its contents ready to burst through the stretched packing tape. The bright green of the trees enveloped her and made her feel insignificant, just a speck of humanity with all the other specks. She closed her eyes and thought about all the endless lazy summer days she had ahead of her, endless in their opportunity yet so fleeting. “Are you going to lie there, or do you plan on helping me,” Katie’s mother Laurel squawked at her sounding exactly like the geese flying over the lake. “I’m just taking a break. We have a lot of boxes.” “It seems your father is taking a break as well,” Laurel said before returning to the house. Katie could hear the sounds of her father strumming his guitar faintly from the garage. He played a familiar tune as Katie picked up the box and walked into the garage to join him. She always found it fascinating that somehow, he managed to be in a band only knowing three cords, but he had personality. His presence on the stage was such that the rest of his bandmates overlooked the fact that he only had four songs in his repertoire and pretended to play along with the rest. “Mom’s looking for you,” Katie said and dropped the box down at his feet. “She’s always looking for me, and I’m always around, yet she never seems to find me,” Tom said with a wink before grabbing the box and taking it into the house. Katie trailed behind him. The new house was a mansion in comparison to their condo in the city. Living downtown had its perks. The first Laurel would exclaim at the closing, “Thank God for expensive city living.” The condo was worth $250,000 more than the house they bought which left them with a great deal of money in the bank. But even with all the financial perks, Katie didn’t see the benefit of leaving the city life behind for the slower one in a rural small town. But they got the large house; the American Dream wrapped up in a big red bow. She got to have her room instead of a loft and lots of land to run around. But at thirteen, Katie was past the age of playing in the backyard, making mud pies. It gave her space to dream and space away from her mother, and she was grateful for that. Laurel met them in kitchen, tired and already profusely sweaty from the effort of moving in. “You don’t have to get all the unpacking done today, but I would like to at least get everything into to the house so you can return the truck,” Laurel said to Tom before taking a swig of water which was interrupted by the doorbell. Laurel asked no one in particular, “Who could be ringing the door; we just got here?” “I’ll get it,” Katie said already in route. “Hello.” “I’m Zack,” the boy said standing in the doorway. “My mom sent me over to help. We live across the street. 
Well across the street and down the hill a bit but around here that makes us neighbors.” Katie ushered him inside and reported to Laurel that they had another set of hands. Laurel was all too eager to put him to work considering that Tom often disappeared and took way too many breaks in her opinion. With Zack Murphy’s help, they would have the rest of the boxes and furniture in before dinner. “If you’re going to order from Little Italy”, Zack suggested, “Don’t get anything besides cheese pizza and definitely stay away from the wings unless you want to be on the toilet the rest of the night,” and he grabbed his stomach, pretending to be ill which made Katie chuckle. “We’ll take that advice, thanks Zack and tell your mother thank you for sending you over,” Laurel said as she walked him out. Laurel tried to give him some cash but he protested, she shoved it into his shirt pocket anyway. Laurel watched as Zack started down the drive and then turned around to pronounce that he had forgotten to tell them about the Memorial Day Barbeque and he hoped to see them there. The next two weeks until Memorial Day flew by in a flurry of unpacking, arranging and then rearranging. Laurel threw out half of the things she packed making Katie wonder why she even packed them at all. They fell into their routine pretty easily. Both of them worked from home except when they had to travel. Tom ran his firm, Katie was never quite sure what he did, but she knew it involved a lot of meetings and dinners. Laurel worked as a real estate agent and only went into the office once a week to check in unless she had a closing. When they were in the condo, Katie hated having them around so much. They were always tripping over each other, and she was constantly told to keep quiet, one of them was always on a phone call. In this bigger house, each of them had an office with a door they could close. But it made the house feel empty and lonely without the background noise of deals constantly made. Katie settled in and found a routine to keep her occupied over the long summer break. Every morning she took a walk around the lake, and if she timed it right, she would run into Taylor’s dog Skipper, out for his morning run. After her walk she would come home for brunch with her parents and after, when they retreated to their respective offices again, she would settle in the backyard with one her favorite books. Life was quiet, and it was exactly what Katie’s parents wanted. The small-town life, where nothing ever happened.
https://medium.com/magnum-opus/sunsets-9bce307c81d6
['Michelle Elizabeth']
2019-05-29 14:51:00.796000+00:00
['Novel', 'Creativity', 'Fiction', 'Hidden Lake', 'Writing']
The Power of Song
As I get older, I find myself singing more and more often. Part of this, I think, is the fact that I have been separated from my beloved piano. This beautiful instrument that I so love to play lives at my mother’s house now because it does not really fit into my little “cottage.” I own a guitar and a banjo, yet I don’t know how to play either because I am lazy and distracted and impatient, and I don’t like the calluses they leave on my fingertips — I’m a knitter and callousy fingertips can be very dangerous to my beautiful yarn. So what do I do? I strum my instruments clumsily and without even the slightest competence, just to have a little something in the background to sing to. Or I just sing without them. I sing in my garden. I sing in the shower. I sing in my bed. I sing in the woods. I sing to my nieces and nephews. Sometimes I think the more I sing, or rather, the more music we all make, the better we can shield ourselves against the darkness. When I start to think that the world is filled with people who only want to hurt others, flinging the cruelest words at strangers on the internet, I sing more often and I sing louder because I know my song will drown out their hateful words. I sing to disarm them. I sing to interrupt their cruelty. I sing to welcome more light into this world. I also sing to fortify the earth. I sing as a way to restore dignity to the natural world. To tell the trees and land and flowers and birds and coyotes that I recognize them as my kin. I recognize their right to be here. I appreciate their place in this world. I honor what they have to teach me. And I sing like I’m casting magic spells, weaving threads of melodies between myself and my loved ones. I sing Be with Me now not so much for my owls, who have grown and flown away to find their own territory, but for my little nephew, Alex, who I hope will always remember me, even though he is moving far away. River runs to the sea Salmon runs to the sea If you need someone You can run to me I pray that the river will always run to the sea, that he will always run to me. I believe so deeply in this. I might lose my faith in so much, but I never seem to lose faith in song. I remember once, all those years ago when I was observing my three baby owls, I couldn’t find them one afternoon, so I took a little drum out to a glade that I love and played a rhythm on it while singing a Tori Amos tune, my eyes closed so I could feel the full energy of the song. When I opened my eyes, one of the baby owls streaked past me, a white and brown blur, and then she disappeared into the shadowy grove of trees just beyond. I knew she heard me and liked my song and wanted to let me know that she was there, somewhere just beyond where I could see. © Yael Wolfe 2020
https://medium.com/wilder-with-yael-wolfe/the-power-of-song-714995d7f458
['Yael Wolfe']
2020-08-19 03:56:19.231000+00:00
['Creativity', 'Magic', 'Soul', 'Spirituality', 'Music']
The Strategy That Increases Model Accuracy, Every Time, Guaranteed
The goal is to predict ConfirmedCases and Fatalities. This is a perfect example of real-world data. We're given the country, the latitude and longitude of the country, and the date. First, we must preprocess the date. A machine learning model cannot intake a date, so instead, I created a new column days_from_start that represents the number of days that have passed since 1/22, the date of the first data point. Let's first train a Random Forest model on the raw data. The latitude (how far north or south the country is) may be of use because some studies have shown that hotter countries (closer to the equator) are less at risk from the coronavirus. The mean absolute error is 67.8. On average, a prediction made by our model is 67.8 cases off the target. As for fatalities, we are about 2.6 deaths off on average. The low number is because of the scale (not many people have died, and for the countries that do have lots of deaths, like China, the model can exploit longitude and latitude; it cannot find much relation between location and deaths that can be generalized beyond the training set). When one thinks about factors that might influence the coronavirus, the first thing that may come to mind is the population of a country. A country that has more people will probably have more interactions, meaning the transmissibility of the virus is increased. So let's add that data! Wikipedia is usually a good source full of tables, but there are plenty of other sites that have scrapable data. The easiest way to scrape online data is with pandas' read_html function. We will use the most recent population count (collected July 1st, 2019) as a feature, and map the Country or area column to each of the countries within the coronavirus forecasting dataset. This is where a lot of the work of using external data comes in: Wikipedia pages have different naming conventions for countries than the ones in our data. For instance, 'United States' might be 'US' or 'U.S.' or 'United States of America'. Additionally, in Wikipedia there are often footnotes, and to reference them, there are often brackets next to country names, like in the example below.
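A sketch of that scraping-and-joining step is shown below. The Wikipedia URL, table index, column names, alias map, and the name of the training dataframe are assumptions made to illustrate the idea; extend the alias map as mismatches surface:
import pandas as pd

# read_html returns every table on the page; pick the one holding the population figures
tables = pd.read_html("https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population")
pop = tables[0][["Country or area", "Population"]].copy()
pop.columns = ["Country", "Population"]

# Strip footnote brackets such as "China[a]" -> "China"
pop["Country"] = pop["Country"].str.replace(r"\[.*?\]", "", regex=True).str.strip()

# Reconcile naming differences between Wikipedia and the forecasting dataset
aliases = {
    "United States": "US",
    "South Korea": "Korea, South",
    "Czech Republic": "Czechia",
}
pop["Country"] = pop["Country"].replace(aliases)

# Attach the new feature to the training data (column name assumed)
train = train.merge(pop, how="left", left_on="Country_Region", right_on="Country")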
https://medium.com/analytics-vidhya/the-strategy-that-increases-model-accuracy-every-time-guaranteed-6ee5e476262d
['Andre Ye']
2020-06-25 16:11:26.138000+00:00
['Machine Learning', 'Data Science', 'AI', 'Kaggle', 'Data']
Why I Wrote Under Another Profile
Why I Wrote Under Another Profile I guest blogged under McGraw-Hill From kalhh on Pixabay As a 9th-grade special education teacher, I write about work sometimes. I do it less than I did when I started my job, but often, I’ll write about the intersections between work and mental health. On one of these pieces, McGraw Hill, the textbook publishing company and one of the “big three” educational publishers, reached out to me about whether I wanted to guest blog for their publication, and I graciously obliged. I wasn’t paid for it, but I was glad to guest blog for McGraw-Hill. At first, when they reached out to me, I thought “McGraw-Hill? Like the textbook company?” and then weighed the pros and cons quickly before making a choice to do so. I took my time given a really busy schedule, and I’m very glad McGraw-Hill was understanding of my need to delay my submission. I gladly took the chance to write a piece about encouraging active citizenship in the classroom, and especially in a virtual classroom during the pandemic. According to Dave at Content Development Pros, guest blogging is beneficial because it builds relationships with your audience, helps develop professional connections, and establishes yourself as an “industry expert.” As only a second-year teacher, I don’t really consider myself an industry expert, but having McGraw-Hill, a household name in education, reach out to me about guest blogging certainly raised my confidence. Plus, it looks really good on an education-related resume. Of course, guest blogging helps build online influence and authority to the reader. For me, guest blogging was a break from writing for my own profile and own following. I’m well aware that I don’t make any money to avoid a possible conflict of interest, but guest blogging allows me to say “hey, I got published by McGraw-Hill,” which helps significantly for my credibility as an educator more than it helps my credibility as a writer and editor. According to Vikas Agrawal at Search Engine Journal, guest blogging helps spread a brand message and wins the trust of a target audience. Although you’re writing for someone’ else’s site, guest blogging allows instant exposure to targeted traffic, so I might build an audience of people interested in reading education-related work. It also allows connecting with influencers and expanding a personal network, and helps me be more active in the education community. Within the realm of where you’re guest blogging, you build more influence and authority, and you also build more social media following to who sees your content. After all, you are marketing yourself to a whole new network. “By contributing to an authoritative blog, you are essentially getting them to vouch for your brand,” Agrawal says. McGraw-Hill linked back to my blog profile, and writing for the site was a major exercise. It helped me build the skills I need to do research and write for a different audience than I normally do, and adopt a new voice in doing so. It’s a cool thing to put that I wrote for the McGraw-Hill blog on my resume, but building connections is also a large part of guest blogging. It never hurts to have more connections. Guest blogging was simply something different to tackle, and I’m very grateful I did. McGraw-Hill used more subheadings to improve my search engine optimization and working with someone else on the editing side helped improve my writing. 
According to Hallam, a digital marketing agency, the end goal of guest blogging is to position yourself as a thought leader in an industry, even if it doesn’t happen overnight. It also helps build brand awareness within a very crowded market, and promotes search engine optimization since publishing on a high quality and relevant website allows you to bring more traffic to your own website. Before you decide to guest blog, Hallam emphasizes only guest blogging on trustworthy and relevant websites. I mentioned McGraw-Hill to a couple of people to make sure they’d heard of it too — and everyone I talked to associated McGraw-Hill as an authoritative textbook and education brand. As such, guest blogging requires a targeted approach. We all only have so much time and attention to give, and we can’t give it all. I never sought out guest blogging opportunities, but have been approached a couple times, and only accepted an opportunity when an influential brand in my field asked me. Of course, I did wonder whether the post would do better on my own blog, but I have a lot of content already that readers find value in. Branching out to a new audience is always a fun excursion — and guest blogging was the way to do it for me. I don’t seek out opportunities to guest blogging because writing for myself and my own readers always comes first — before you make connections and establish yourself as a “thought leader” in your field, it’s important to build your own brand. Guest blogging is just one step in the process. For me, I published under someone else’s profile because it was a nice break from the norm and great to get out of the cycle of building my own following and brand and focusing on my own writing.
https://medium.com/the-partnered-pen/why-i-wrote-under-another-profile-ae1f62b4e63e
['Ryan Fan']
2020-11-08 21:24:18.976000+00:00
['Freelancing', 'Marketing', 'Medium', 'Business', 'Writing']
Most Read & Interesting Stories from Technicity in 2020
This year will undoubtedly go down in history as one for the record books. Although there were a lot of important events that transpired in 2020, the onset and the continuation of the pandemic has affected the lives of the global population in more ways than one. The effects of COVID-19 have been so far-reaching that it has eclipsed almost everything else. For Technicity, it was the first full year of publication on Medium, after starting in Oct. 2019. And I would like to take this opportunity to thank each and every one of you for your support — for me personally and the publication as we enter the new year with renewed hope & vigor for the future. Technicity has a following of 2,653 followers at the time of writing and growing every day. Wishing you all a happy, healthy & prosperous 2021, and here’s a look back at some of the most popular & interesting blogs from 2020. Meanwhile, I will continue to bring the best content. JANUARY 2020
https://medium.com/technicity/most-read-interesting-stories-of-technicity-in-2020-ec641ca1895c
['Faisal Khan']
2020-12-28 02:36:24.675000+00:00
['Space', 'Bitcoin', 'Business', 'Health', 'Artificial Intelligence']
3 Issues Bicultural People Face — Part Three
Photo by John Robert Marasigan via Unsplash In part two of this three-part series, I discussed the vague, yet constant sense of otherness that often haunts bicultural people. For many of those living in the U.S., learning to function in a society that touts individualism while still respecting the collectivistic values of our parents is a challenging endeavor. Trying to adhere to both cultures can be frustrating and isolating; we are constantly aware that we are not quite [insert ethnicity here] nor quite American. The best solution I’ve come up with as an early 20-something bicultural person is to strike a balance between our cultural identities so that we aren’t denying any part of who we are. In part one, I touched on a similar approach: the idea of cherry-picking the best traits from both of our cultures and using them to our advantage. This final article focuses on the transition that bicultural children make from childhood to adulthood. It’s obviously something everyone goes through, but bicultural people seem to have a unique experience in this endeavor. Since beginning my own transition to adulthood I’ve found it more challenging than I anticipated, and realized that much of it has to do with my bicultural upbringing. Conflicting Views The transition to adulthood is scary, confusing, and difficult. A good chunk of your time is spent wandering around in the dark, hoping you make it out on the other side with some semblance of who you are and what you want to do in life. Though I’ve just begun the transition, it has already been fairly taxing in a way that neither the people of my heritage culture or my American peers can fully understand. This is because as a bicultural person, I’ve grown up with two completely different ideas of adulthood. Collectivistic cultures tend to regard someone as a fully grown adult only when they are married. They prepare kids for marriage from a young age — little girls learn to cook and clean so they can take care of the household and their future husbands, while little boys are encouraged to be brave, strong, and resilient so they can provide for their future wives and families. Naturally, immigrant parents pass down this kind of grooming to their children, expecting them to only leave their household and make their own choices when they have a spouse. Of course, this notion of adulthood has a major disconnect with the American concept of what it means to be an adult. Though American society is not an exception to raising kids to be good husbands and wives one day, the practice is outdated. Nowadays, becoming an adult means becoming financially independent, moving out of your parents’ place, and taking on a long list of responsibilities. You start to make decisions in spite of what anyone has to say, pursuing anything your heart desires. As bicultural people living in the U.S, we’re obviously inclined to take on the American vision of adulthood, which offers the freedom that all 20-somethings crave. However, the children of immigrants typically face an additional, difficult step in the process: getting our parents on the same page. A Long Battle Ahead It’s not uncommon for a lot of conflict to arise when bicultural people start to assert our independence and move away from our families. Our hunger for freedom comes as a shock to our parents, who value family and community over personal desires. They passed those same values down to us, so how could we reject that? “Why would you not want to be with your family?” “Why do you want to leave us? 
Aren’t we good to you?” You try to explain your point of view, but it only irritates them further. “That’s not the way we do things back home!” “You’re being selfish! How could you do this to us?” Reasoning isn’t working, so for the time being, concede and recoup. This is when the mental battle begins — another battle in the long-standing war of your two identities. Our [insert ethnicity here] half will start to feel that maybe they’re right. Maybe it is selfish to want a life separate from your family, to want things that don’t necessarily benefit them. After all, they’ve done so much for us. We should be grateful to them. As Americans though, we want to say screw that. It’s your life, why should they have any say in it? Why do their needs come before yours? Shouldn’t they want you to be happy? The mental deliberation is exhausting, sending us through a rollercoaster of emotions — guilt for not wanting to sacrifice our needs and wants for the people who raised us. Shame for dreaming of a different life. Anger at the lack of support and encouragement. Loneliness when no one seems to understand. It’s tough. At times, it’s enough to make you want to give all your dreams up and just do what your family tells you to do. Other times, you want to run in the other direction, away from them, and never look back. Either way seems extreme. Which way is right? Photo by Pablo García Saldaña via Unsplash Choose What Works for You Every situation is different, so let me say right off the bat that there is no right answer. For bicultural people living in abusive and/or toxic households, cutting ties with family and going your own way is most likely the best choice. For others, you might find that your goals align similarly to that of your family, so you might have little issue with living up to their expectations. For the many bicultural people that stand somewhere in the middle, there is, of course, a middle path for you, though it is not easy. Not by a long shot. I’m talking about compromise. Compromise, compromise, compromise. It’s hard, but it’s sometimes necessary. As a bicultural person in this middle ground, much of my time transitioning into adulthood has been spent compromising my needs and desires with those of my family. Ever since I was a kid, I dreamed of striking out on my own and living my life however I choose. I wanted to see what the world had to offer, and what I could offer it. This was pretty clear to my family, but they always pictured I’d stay close to home and chase my dreams nearby. I’m sure they want to stay involved in my daily life even after I move out, although this idea runs counter to what I want for myself. As frustrating as it is when we butt heads on how I’ll live my life, I’ve learned that I need to compromise in order for the two halves of my cultural identity to coexist. These negotiations usually never fully satisfy both of us, but something needs to give. Often times, I give up the little things — like the timeline of my pursuits or the means to go about it — but never my main objective. On the flip side, they forfeit their unreasonable expectations but retain the main standards they raised me with. There’s always a little give and take. At the end of the day, my family is the main tie to my ethnicity, and I can’t imagine living without that tie. But I can’t live my life catering to their every want and need. I have dreams worth pursuing, and it wouldn’t be very American of me if I didn’t give it a shot. 
Compromise is a difficult path, and again, there is no one size fits all. But I’ve found that once we’re well versed in compromising, it’s a good way for bicultural people to push our boundaries without doing anything too extreme. This gradually eases us into adulthood while slowly getting our parents accustomed to the idea that we’ve grown and changed. In this middle ground, [insert ethnicity here] Americans can grow and blossom without denying huge parts of who we are. We can exist here. Bicultural people face a number of issues as we navigate life in a society with such different values than that of our immigrant parents. It’s crucial for us to recognize these persistent juxtapositions as we try to understand ourselves and who we want to become. Running with the theme of this three-part series, I find that the best approach to fulfilling both parts of our cultural identities is to adopt a middle-of-the-road mindset. This allows us to develop a diverse set of values and bring a unique mindset to the table, whether it be in school, work, or even relationships. It’s not at all an easy road to take, but in this way, we nurture the two halves of ourselves and learn to thrive in every part of our lives. Thanks for reading!
https://medium.com/the-innovation/3-issues-bicultural-people-face-part-three-86d400eaea0
['Victoria Nowrangilall']
2020-11-18 16:28:31.619000+00:00
['Biculturalism', 'Personal Development', 'Self', 'Society', 'People']
This is an AI-generated artwork of a person who doesn’t exist
What you see right here is an artwork I produced using AI. First, I created a person who doesn't exist at all with a GAN model (check https://www.thispersondoesnotexist.com/ to get one yourself!). Then I created the painting by applying a style-transfer model. An AI-generated person for the creation of the artwork Recently, I had the bad luck of running into an artwork that reminded me of the famous "Banana on the wall" piece. Only this time, it was alluding to animal abuse, actually using the hashtag for it. The reason I came across that image was that recently I had been reminded of the huge problem on social platforms related to abusive and violent content. Especially towards animals, who sadly don't have the same rights in all countries: for example, Hungary, Finland, Romania, and even some states of the United States. I will leave the topic of animal rights and social media for another time. When I saw that image I was outraged; I could only think about the millions of animals oppressed by our society in the dairy industry, the meat industry, the leisure industry with bullfights, and now in the art field. I called out the situation, speaking my mind in a way that some might find exaggerated. One of the responses I got, from one of the artist's friends, was that I didn't understand art, and that if I considered this not to be art it was because I don't have a brain. The rest of the "arguing" in the responses was mainly about how I look and other physical characteristics of mine. They wouldn't reply as to whether or not they were supporting/promoting animal abuse through this type of "art". Instead, every time I asked about it, this person only changed the topic back to my looks and my content, complaining that what I do, and this field, is useless and that no one sees value in it. I thought about it while I was doing endless homework for university and work. And I thought about how differently some people can see automation, technology, and their inevitable influence on the future. Perhaps some people thought that algorithms could never be better than them as artists, as painters, sculptors, or scriptwriters. And perhaps they are right. Or maybe one day AI will be able to recreate a "Banana on the wall", and understand art without needing a human brain. What I can be sure about is that tech still has a long, long way to go in surprising us across all the professional and non-professional fields known to us today. Try it yourself! I would love to share some free online tools for you to try generating art with AI: Runway ML — An easy, code-free tool that makes it simple to experiment with machine learning models in creative ways. Our overall staff pick. GANBreeder — Breed two images to create novel new ones using GANBreeder. (Note that GANbreeder was renamed ArtBreeder, with several AI models to manipulate photos). Magenta — An open-source research project exploring the role of machine learning as a tool in the creative process. (Coding skills required).
Processing — A flexible software sketchbook and language for learning how to code within the context of the visual arts. Includes p5js (Processing for JavaScript) and Processing.py (Processing for Python). [Processing does not use AI, but is a great tool for generative visual art]. ml5.js — ml5.js aims to make machine learning approachable for a broad audience of artists, creative coders, and students through the web. AI-Generated Music/Sound: Magenta Studio — A collection of music plugins built on Magenta's open-source tools and models. AI Duet — Play with a piano that responds to you. NSynth Sound Maker — Create your own hybrid sounds and instruments. MuseNet — Generate 4-minute musical compositions with 10 instruments, and combine styles from country to Mozart with MuseNet (also available on GitHub). Pitch Detection — Use a pre-trained pitch detection model to estimate the pitch of sound files through a computer mic. AI Generated Images / Pictures: Deep Dream Generator — Stylize your images using enhanced versions of Google Deep Dream with the Deep Dream Generator. DeepArt.io — Upload a photo and apply different art styles with this AI image generator, or turn a picture into an AI portrait of yourself (also check out DreamScope). Visionist: Upload and apply AI Art styles to your photos, including abstract filters, cutout portraits, and more (iOS. Made by 3DTOPO Inc.). GoArt — Create AI photo effects that make your photos look like famous portrait paintings with this AI image generator. (Web, Android and iOS. Made by Fotor). Deep Angel — Automatically remove objects or people from images. (Web. Made at MIT). Google Deep Dream — GitHub repository for implementing Google Deep Dream. GANBreeder — Merge images together to create new pictures, make hybrid AI portraits and create wild new forms that have never been seen before. (GANbreeder is now called ArtBreeder). AI artwork sells for $432,500 Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network). Sold for $432,500 on 25 October at Christie's in New York. Image © Obvious The portrait in its gilt frame depicts a portly gentleman, possibly French and — to judge by his dark frockcoat and plain white collar — a man of the church.
The work appears unfinished: the facial features are somewhat indistinct and there are blank areas of the canvas. Oddly, the whole composition is displaced slightly to the north-west. A label on the wall states that the sitter is a man named Edmond Belamy, but the giveaway clue as to the origins of the work is the artist's signature at the bottom right. In cursive Gallic script it reads: min G max D Ex[log(D(x))] + Ez[log(1 - D(G(z)))] Image © Obvious This portrait is not the product of the human mind. It was created by an algorithm defined by that algebraic formula with its many parentheses. And when it went under the hammer in the Prints & Multiples sale at Christie's on 23–25 October, Portrait of Edmond Belamy sold for an incredible $432,500, signaling the arrival of AI art on the world auction stage. The team collected a set of 15,000 portraits from the online art encyclopedia WikiArt, spanning the 14th to the 19th century, and fed them into the GAN algorithm. GAN algorithms have two parts: the generator and the discriminator. The generator learned the 'rules' of the portraits, "for example, everything has two eyes and a nose," Caselles-Dupré says, describing a process that takes about two days. Then it starts to create new images based on those rules. Meanwhile, the discriminator's job is to review the images and guess which are 'real' ones from the dataset and which are 'fake' ones from the generator. It's hard for us to naturally think that AI can be our ally, instead of our enemy. But that's why we all should take part in this new, modern revolution.
https://medium.com/datadriveninvestor/this-is-an-ai-generated-artwork-from-a-person-who-doesnt-exist-f81b224984f0
['Rebeca Sarai G. G.']
2020-09-25 13:45:18.102000+00:00
['Data Science', 'AI', 'Technology', 'Artificial Intelligence', 'Art']
Various ways of handling environment variables in React and Node.js
Various ways of handling environment variables in React and Node.js Learn how to secure your application data using environment variables Photo by Fotis Fotopoulos on Unsplash Using environment variables is very important for keeping your private information secure. This may include your API keys, database credentials, or any other private information. It's always recommended to use environment variables to keep such information secure, and you should never write it directly in your code. Also, make sure you add the environment variables file name to your .gitignore file so it will not be added to your Git repository when you push the code. Let's look at the various ways of using environment variables. Using Create React App With a single .env file: If you're using create-react-app, then to use environment variables in your application, you need to create a .env file in the root of your project with each variable name starting with REACT_APP_ Create React App will make sure the variables declared in the .env file are available in your application, as long as their names start with REACT_APP_ For example, if your .env file looks like this: REACT_APP_CLIENT_ID=abcd2whdkd REACT_APP_API_KEY=3edcb4f9dd472ds4b47914ddcfb1791e1e1ab Then you can access the variables directly in your React application using process.env.REACT_APP_CLIENT_ID and process.env.REACT_APP_API_KEY Demo: https://codesandbox.io/s/env-vars-create-react-app-mr0rl With multiple .env files: If you have multiple .env files like .env.prod , .env.uat , .env.dev for the production, UAT, and development environments respectively, then just prefixing the variable names with REACT_APP_ will not be enough. Suppose you're using the Firebase database in your application and your Firebase configuration looks like this: For the development environment: const config = { apiKey: 'AIdfSyCrjkjsdscbbW-pfOwebgYCyGvu_2kyFkNu_-jyg', authDomain: 'seventh-capsule-78932.firebaseapp.com', databaseURL: 'https://seventh-capsule-78932.firebaseio.com', projectId: 'seventh-capsule-78932', storageBucket: 'seventh-capsule-78932.appspot.com', messagingSenderId: '3471282249832', appId: '1:3472702963:web:38adfik223f24323fc3e876' }; For the production environment: const config = { apiKey: 'AIzaSyCreZjsdsbbbW-pfOwebgYCyGvu_2kyFkNu_-jyg', authDomain: 'seventh-capsule-12345.firebaseapp.com', databaseURL: 'https://seventh-capsule-12345.firebaseio.com', projectId: 'seventh-capsule-12345', storageBucket: 'seventh-capsule-12345.appspot.com', messagingSenderId: '3479069249832', appId: '1:3477812963:web:38adfik223f92323fc3e876' }; But you should not write this configuration directly in your application, because anyone could copy-paste it into their own app and manipulate your Firebase data. Instead, you should create an environment variable for each property of the config object and use that.
If you create a .env.prod file for the production environment then it will look like this: REACT_APP_API_KEY=AIzaSyCreZjsdsbbbW-pfOwebgYCyGvu_2kyFkNu_-jyg REACT_APP_AUTH_DOMAIN=seventh-capsule-12345.firebaseapp.com REACT_APP_DATABASE_URL=https://seventh-capsule-12345.firebaseio.com REACT_APP_PROJECT_ID=seventh-capsule-12345 REACT_APP_STORAGE_BUCKET=seventh-capsule-12345.appspot.com REACT_APP_MESSAGING_SENDER_ID=3479069249832 REACT_APP_APP_ID=1:3477812963:web:38adfik223f92323fc3e876 and your .env.dev file will look like this: REACT_APP_API_KEY=AIdfSyCrjkjsdscbbW-pfOwebgYCyGvu_2kyFkNu_-jyg REACT_APP_AUTH_DOMAIN=seventh-capsule-78932.firebaseapp.com REACT_APP_DATABASE_URL=https://seventh-capsule-78932.firebaseio.com REACT_APP_PROJECT_ID=seventh-capsule-78932 REACT_APP_STORAGE_BUCKET=seventh-capsule-78932.appspot.com REACT_APP_MESSAGING_SENDER_ID=3471282249832 REACT_APP_APP_ID=1:3472702963:web:38adfik223f24323fc3e876 To access these environment-specific files, install the env-cmd npm package using the following command: yarn add env-cmd OR npm install env-cmd and then change the package.json file script section to use env-cmd command "scripts": { "start": "env-cmd -f .env.dev react-scripts start", "start-prod": "env-cmd -f .env.prod react-scripts start", "build": "react-scripts build", "test": "react-scripts test --env=jsdom", "eject": "react-scripts eject" }, So now, when you run the yarn start or npm start command from the terminal, it will load the environment variables from the .env.dev file and when you run the yarn start-prod or npm start-prod command from the terminal, it will load the environment variables from the .env.prod file. You can even create a single .env-cmdrc , If you're using env-cmd npm package and declare all environment variables in a single file as a JSON object like this: { "dev": { "REACT_APP_API_KEY": "AIdfSyCrjkjsdscbbW-pfOwebgYCyGvu_2kyFkNu_-jyg", "REACT_APP_AUTH_DOMAIN": "seventh-capsule-78932.firebaseapp.com", "REACT_APP_DATABASE_URL": "https://seventh-capsule-78932.firebaseio.com", "REACT_APP_PROJECT_ID": "seventh-capsule-78932", "REACT_APP_STORAGE_BUCKET": "seventh-capsule-78932.appspot.com", "REACT_APP_MESSAGING_SENDER_ID": "3471282249832", "REACT_APP_APP_ID": "1:3472702963:web:38adfik223f24323fc3e876" }, "prod": { "REACT_APP_API_KEY": "AIzaSyCreZjsdsbbbW-pfOwebgYCyGvu_2kyFkNu_-jyg", "REACT_APP_AUTH_DOMAIN": "seventh-capsule-12345.firebaseapp.com", "REACT_APP_DATABASE_URL": "https://seventh-capsule-12345.firebaseio.com", "REACT_APP_PROJECT_ID": "seventh-capsule-12345", "REACT_APP_STORAGE_BUCKET": "seventh-capsule-12345.appspot.com", "REACT_APP_MESSAGING_SENDER_ID": "3479069249832", "REACT_APP_APP_ID": "1:3477812963:web:38adfik223f92323fc3e876" } } and then use the -e flag for specifying which environment to refer in your package.json file like this: "scripts": { "start": "env-cmd -e dev react-scripts start", "start-prod": "env-cmd -e prod react-scripts start", "build": "react-scripts build", "test": "react-scripts test --env=jsdom", "eject": "react-scripts eject" },
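To close the loop, the app can then rebuild the Firebase config object entirely from process.env, so no secrets live in the source code. A minimal sketch (the firebase.js file name is an assumption; the variable names follow the .env examples above):
// firebase.js
const config = {
  apiKey: process.env.REACT_APP_API_KEY,
  authDomain: process.env.REACT_APP_AUTH_DOMAIN,
  databaseURL: process.env.REACT_APP_DATABASE_URL,
  projectId: process.env.REACT_APP_PROJECT_ID,
  storageBucket: process.env.REACT_APP_STORAGE_BUCKET,
  messagingSenderId: process.env.REACT_APP_MESSAGING_SENDER_ID,
  appId: process.env.REACT_APP_APP_ID
};

export default config;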
https://medium.com/javascript-in-plain-english/various-ways-of-handling-environment-variables-in-react-and-node-js-5b9ce13aa7b1
['Yogesh Chavan']
2020-09-29 13:00:51.080000+00:00
['Programming', 'JavaScript', 'React', 'Development', 'Nodejs']
After the Workshop
Spiralbound Comics for life, brought to life by Edith Zimmerman.
https://medium.com/spiralbound/after-the-workshop-b71142727d16
['Sean Conner']
2019-06-11 11:01:01.261000+00:00
['Innovation', 'Comics', 'Creativity', 'Design Thinking', 'Startup']
DNC Tech Choices: Why we chose Google BigQuery
DNC Tech Choices: Why we chose Google BigQuery Migrating to a new data warehouse This blog post walks through how we chose and migrated to Phoenix, our data warehouse. Thanks to a number of hard-working colleagues, this work was underway when I joined the DNC in the summer of 2019. Thank you to Ben Matasar for his collaboration on this post. Where we were in 2018 The data warehouse that served as our foundation during the 2016 and 2018 election cycles dated back to Obama’s 2012 run. It was fed by ETL pipelines custom-built by previous generations of the team. As people came and went with each election cycle, maintenance suffered and tech debt on this system piled up. This is a familiar challenge in political tech. Data, analytics, and digital organizing tools are all extremely vital to winning an election, but they often languish after an election is over. While the warehouse had served us well in our earlier victories, we needed a new data warehouse that would support Democrats up-and-down the ballot for the 2020 cycle (and future election cycles). Why build vs buy? and why Google BigQuery? When we started evaluating different data warehouse options, we had a few objectives in mind: Serve not just the next presidential candidate, but all Democratic candidates across all 50 states and D.C. Solution: Prioritize multi-tenancy and a high amount of concurrency Easily scale up for the frenetic intensity of an election, but also scale down with a limited team for its aftermath. Solution: Buy a managed service over building and operating our own instance. Reduce the amount of custom solutions that are susceptible to documentation rot and knowledge loss. Solution: Privilege cloud services or open source frameworks that are actively maintained outside of the DNC. Build tools for users with varying levels of sophistication. Solution: Create an ecosystem with a variety of services for power users and laypeople. Allow ourselves to onboard and offboard hundreds of staff members across dozens of organizations in a secure and scalable way. Solution: Prioritize a platform that can integrate with single-sign on, centralized provisioning, and role-based access. Buy into a platform with best-in-industry security practices. After evaluating Amazon Redshift, Azure, Snowflake, Google BigQuery, and the potential of simply upgrading our legacy platform, we chose to adopt Google BigQuery. It satisfied our major goals for multi-tenancy, usage concurrency, and being a service that could be managed with a relatively small team. However, a big part of the appeal of moving to Google Cloud Platform was that scaling up and scaling down our infrastructure was a natural part of the product, and we would only pay for what we would use. From bursts of building to continuous stewardship In December 2018, we started building a proof-of-concept. By early 2019, we kicked off our MVP. In the fallow period between elections, we prioritized re-building syncs for the data that mattered most to our users at state parties and sister committees — voter file data and data from VoteBuilder. It was essential that we migrate users at state parties to “Phoenix”, the new warehouse, as early as possible. This would enable our users to get started in the new data warehouse, and reduce our need to support the legacy warehouse. By midsummer 2019, only internal DNC data teams were using the old platform. By the fall, we completed migrating the archives. We said goodbye to the old warehouse with a Viking funeral. 
Where we are in 2020 In the year and a half since we began the migration, we have been able to do so much more now that we're on BigQuery and Google Cloud Platform. State party data teams, sister committees, and presidential primary campaigns leverage Phoenix to manage, access, and organize data.

State parties, sister committees, and campaigns up and down the ballot use Phoenix, the DNC's data warehouse

The reduced support overhead has allowed us to go deep on projects like automating and streamlining our voter file updates, as well as building out new data pipelines with other progressive data partners. Seamless integration with Google Sheets has also made our data more accessible to users, while more sophisticated tools like MLflow have been an essential part of our data modeling infrastructure.

Beyond 2020 Phoenix will be a reliable and secure warehouse for Democratic data beyond the 2020 election cycle. This long-term infrastructure allows our team and Democratic campaigns to focus on building tools and talking to voters, rather than managing a database cluster. Across the progressive ecosystem, our teams are small, our funding is tight, and our deadlines are tighter. We are in the business of winning elections, not husbanding a database cluster. By using Google BigQuery, we are able to focus on building tools that are making a difference in 2020 (and beyond). And that's something that excites us all.
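For readers unfamiliar with what day-to-day access to a BigQuery warehouse like this looks like, here is a hedged sketch using the official Python client. The project, dataset, and table names are placeholders, not the DNC's actual schema:

from google.cloud import bigquery

# Authenticates via Application Default Credentials (e.g. a service account).
client = bigquery.Client(project="my-campaign-project")  # placeholder project ID

query = """
    SELECT state, COUNT(*) AS voter_contacts
    FROM `my-campaign-project.phoenix_demo.contact_events`  -- placeholder table
    WHERE contact_date >= '2020-01-01'
    GROUP BY state
    ORDER BY voter_contacts DESC
"""

# query() submits the job; result() waits for completion and returns the rows.
for row in client.query(query).result():
    print(row.state, row.voter_contacts)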
https://medium.com/democratictech/dnc-tech-choices-why-we-chose-bigquery-5f7ba18f3b6f
['Cris Concepcion']
2020-06-11 15:34:54.827000+00:00
['Software Development', 'Big Data', 'Engineering', 'Google Cloud Platform', 'Engineering Mangement']
How Data Visualization Helps Us See the Effects of Climate Change
How Data Visualization Helps Us See the Effects of Climate Change Mapping techniques highlight how quickly the glacier margins in Glacier National Park are receding

The importance of maps I've always considered maps to be powerful visuals because of how universal they are. They're a pretty standardized encoding system, as we learn about maps and what the legends mean all the way back in school, and then we keep seeing them throughout our lives, everywhere from the weather channel to planning our commutes. When using maps for data visualization, we build on all those lessons taught about maps throughout the years. If we choose to visualize a metric by country on a world map, we can introduce characteristics that capture the interest of a wider audience. Let's take life expectancy as an example. This may not be something you've considered in the past, but if you were shown a map of average life expectancy, I'm fairly certain you'd look to see how your country compared to others. (Romania, where I was born, averages 74 years, and I'm pretty confident you also checked your home country.) Methods for visualizing data on maps have become more advanced to meet the needs of our global landscape. The Tableau-Mapbox integration has made exploring map data increasingly easy, leading to some interesting visuals such as what Africa would look like without forests or what the military presence looks like in Syria.

Maps and climate change Inspired by the many compelling visuals, I set out to explore the effects of climate change using maps. Climate change is a complex topic that first came under the name of "global warming." That term caused people to associate climate change with weather, which is the shorter-term manifestation of overall climate trends. This, in turn, led to questions like "How can the Earth be getting warmer if we just had a really cold winter?" Climate is a longer-term trend than seasons and can be harder to understand intuitively, because singular weather events can skew perception of what the overall climate trend looks like. This is where visualizing data can help us look past our own experience of the weather and instead focus on long-term patterns and trends that exist on a scale we otherwise would have trouble noticing. A good proxy for understanding climate change is to look at geographical units that would otherwise be stable and see how they have evolved over the years. The most straightforward example of this is glaciers. Why is the melting of glaciers always mentioned in climate change debates? The most commonly known consequence of melting glaciers is rising sea levels, but it's just one of many. According to the WWF, the melting process increases coastal erosion, creates more frequent coastal storms, and affects the habitat of wildlife such as walruses and polar bears. Through continual melt, glaciers are a source of water throughout the year for many animals, and help regulate the stream temperatures for animals that need cold water to survive. When a glacier shrinks, that cyclical source of water disappears. It's difficult to remember what a glacier looked like 10 years ago compared to now. While pictures of glacier valleys before and after melting can provide an interesting visual, they can be seen as anecdotal, and lack a certain quantitative element. We know glaciers are melting, but how fast and by how much? What do the glaciers look like now, and what are the implications?
I decided to see if I could use Tableau and map data to visualize how the glaciers are shrinking over time, to see if I could help answer some of those questions.

Choosing Glacier National Park 2.9% of the Earth is covered with glaciers, so representing all the glaciers in the world in one visualization is tricky. Not only that, but there is no universal, consistent dataset of all glacier margins that is detailed enough to derive meaningful visualizations from. As a result, I chose to narrow down my analysis to Glacier National Park, a forest preserve that spans over 1 million acres (4,000 square kilometers) between Canada and the US state of Montana. It's home to more than 1,000 different species of plants and animals. Currently, it's home to 37 named glaciers that have been evolving over time. The USGS (US Geological Survey) has collected a time series spanning 49 years of the 37 named glaciers and their margins based on aerial imagery. The dataset consists of shapefiles that can be overlaid to produce a comparison of what the park looked like 50 years ago versus now. Full visualization can be found here. Looking at some of the glaciers with the most recession (as a percentage of their initial span in square meters), we can compare the 1966 margins (dark blue) with the 2015 margins (very light blue). These comparisons tell a story that is less than ideal. Taking Herbst Glacier and Two Ocean Glacier as examples, we can see that over 80% of the main glacier body area has disappeared. Summing up the losses, we find that over 7.6 million square meters of glacier area has receded. That's over 1,000 football fields lost just out of this one national park in Montana. Below are seven of the glaciers visualized:

What can we do next? There is a wealth of resources that address how to slow down the effects of climate change, so access to information is clearly not the problem. From a data perspective, what we can do is help people understand it better by providing the information in an accessible way. One of my favorite Tableau vizzes presents the topic of ocean plastic in a way that made me look at it closely, even after scrolling past multiple articles with the same theme throughout the years. The way in which data is presented has a great impact on whether the information is consumed. So I invite you to dig into the data, visualize it yourself, find key insights, and share with others. In doing so, we have the ability to make others see the issues more clearly and, in turn, work to address those issues.
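The article does its mapping in Tableau, but for readers who prefer code, the same area comparison can be reproduced from the USGS shapefiles with a library like geopandas. This is a rough sketch under assumed file and column names (the real USGS layers may differ), not the author's workflow:

import geopandas as gpd

# Load the 1966 and 2015 glacier margin layers (file and column names are placeholders).
margins_1966 = gpd.read_file("GNP_glaciers_1966.shp")
margins_2015 = gpd.read_file("GNP_glaciers_2015.shp")

# Reproject to an equal-area CRS (CONUS Albers) so .area returns square meters.
margins_1966 = margins_1966.to_crs(epsg=5070)
margins_2015 = margins_2015.to_crs(epsg=5070)

# Total glacier area per named glacier in each year, then the percentage change.
area_1966 = margins_1966.dissolve(by="GLACNAME").area
area_2015 = margins_2015.dissolve(by="GLACNAME").area
pct_change = ((area_2015 - area_1966) / area_1966 * 100).sort_values()

print(pct_change.head(10))                                  # largest percentage losses
print((area_1966 - area_2015).sum(), "square meters lost")  # total recession across the park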
https://medium.com/nightingale/how-data-visualization-helps-us-see-the-effects-of-climate-change-8b937ab7a71f
['Maria Ilie']
2019-12-27 13:01:01.135000+00:00
['Environment', 'Climate Change', 'Data Science', 'Mapping', 'Data Visualization']
Supreme Court Appears Likely to Uphold the Affordable Care Act
The Supreme Court appears poised to uphold the Affordable Care Act (ACA), despite Republican-led states’ continued efforts to challenge the constitutionality of the law. During oral arguments in California v. Texas on Tuesday, at least five Supreme Court justices indicated that they would leave the ACA mostly intact. Siding with the court’s more liberal justices, Chief Justice John Roberts and Associate Justice Brett Kavanaugh argued that the court could strike down the ACA’s individual mandate without overturning the law in its entirety. The individual mandate required uninsured individuals to either purchase health insurance or pay a penalty. While the Supreme Court upheld the constitutionality of the individual mandate in 2012, ruling that the penalty was a legitimate use of Congress’s taxing power, Congress effectively nullified the mandate in 2017 by reducing the penalty to $0. Because of this, red states are once again calling on the Supreme Court to overturn the entire law, arguing that, without the tax, the ACA is no longer constitutional. “It’s hard for you to argue that Congress intended the entire act to fall if the mandate were struck down when the same Congress that lowered the penalty to zero did not even try to repeal the rest of the act,” Roberts told Kyle Hawkins, the Texas solicitor general, on Tuesday. In 2017, Congress tried and failed to repeal the ACA, with 48 Senate Democrats and 3 Republicans voting against the effort. Roberts acknowledged that while Republican members of Congress may have wanted the court to strike down the law, “that’s not our job.” Kavanaugh agreed with Roberts’s assessment, calling the case “very straightforward” and claiming that, based on the court’s precedents, “the proper remedy would be to sever the mandate and leave the rest of the act in place.” Challengers of the ACA have also argued that the law would fall apart without the individual mandate, but even conservative Justice Samuel Alito questioned whether or not the importance of the mandate had been overstated. The elimination of the penalty has had no real effect on the law. Marketplace enrollment has only minimally decreased since 2017. “There was a strong reason to think of the mandate like a part of an airplane that was essential to keep it flying,” Alito said. “But now it has been taken out, and the plane has not crashed. So how would we explain that the mandate in its present form is essential to the operation of the act?” While the the ACA has often been tossed around as a political football, overturning it would impact more than 20 million Americans who rely on the law for healthcare coverage. Without it, people with preexisting conditions like asthma, cancer, and diabetes would once again be subject to discrimination by health insurance providers. Women and LGBTQ+ people who rely on the ACA’s gender-based protections would also be negatively impacted. Not to mention, young adults on their parents’ policies and low-income adults who gained access to Medicaid under the law would be among the majority of those to lose their coverage. The court ruling to sever the individual mandate from the rest of the law, as Roberts and Kavanaugh suggested, would be the best possible outcome for those who rely on the ACA for access to healthcare, especially those with preexisting conditions. If Tuesday’s oral arguments are any indication of how the Supreme Court will rule in the spring, the ACA will likely live to see another day.
https://medium.com/an-injustice/supreme-court-appears-likely-to-uphold-the-affordable-care-act-5c3dd0fb5efa
['Catherine Caruso']
2020-11-12 01:57:52.680000+00:00
['Justice', 'Health', 'Equality', 'Politics', 'Society']
Building a ChatBot in Python — The Beginner’s Guide
Step 4 — Relation Function This is an extra function that I added after testing the chatbot with my crazy questions. If you want to understand the difference, try the chatbot with and without this function — you will see why I decided to write it. One good part about writing the whole chatbot from scratch is that we can add our personal touches to it. In the chatbot responses step, we defined answer lists for specific questions. And since we are using dictionaries, if the question is not exactly the same (literally), the chatbot will not return the response for the question we tried to ask. Sometimes we might forget the question mark, or a letter in the sentence, or something else. In this relation-finding function, we check the question and try to find the key terms that might help us understand it. You will understand what I mean when you see the function:

def related(x_text):
    if "name" in x_text:
        y_text = "what's your name?"
    elif "weather" in x_text:
        y_text = "what's today's weather?"
    elif "robot" in x_text:
        y_text = "are you a robot?"
    elif "how are" in x_text:
        y_text = "how are you?"
    else:
        y_text = ""
    return y_text

I used if and else statements, but it could be done using switch cases too. Feel free to convert it to a switch case. Also, this is just to give some idea; there are many other techniques to improve the understanding, for example by using NLP (Natural Language Processing) techniques. Here is a simple example of how the function returns the question that was probably asked by the user:
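The original screenshot of that example isn't reproduced here; a minimal sketch of the same idea follows, assuming a responses dictionary like the one built in the earlier steps (the dictionary name and entries are placeholders, not the article's exact code):

import random

# Hypothetical responses dictionary from the earlier "Chatbot responses" step.
answers = {
    "what's your name?": ["My name is PyBot.", "People call me PyBot."],
    "how are you?": ["I'm doing great, thanks!", "All good here."],
}

user_input = "hey, how are you doing today"   # no exact match in the dictionary

question = related(user_input)                # maps the loose input to "how are you?"
if question in answers:
    print(random.choice(answers[question]))   # pick one of the canned responses
else:
    print("Sorry, I don't understand that yet.")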
https://towardsdatascience.com/building-a-chatbot-in-python-the-beginners-guide-2743ad2b4851
['Behic Guven']
2020-12-20 00:21:42.528000+00:00
['Machine Learning', 'Artificial Intelligence', 'Technology', 'Future', 'Programming']
Want to Get Healthier? Hack Your Five Senses.
About a quarter of the human brain’s mass is devoted to processing information from the five senses. Given that the brain plays such a central role in health, it’s not surprising that the five senses are closely tied to well-being. But beyond merely proving those connections exist, researchers have recently started to explore ways to purposely manipulate them for people’s benefit. “Interventions based on what we see, feel, and even taste can have a seemingly dramatic effect on health,“ says Charles Spence, an Oxford University PhD researcher who runs a lab dedicated to studying the role that perception plays in behavior and health. “They can reduce pain, speed recovery from illness, and much more.” SMELL Smell plays an important role in overall health. “Smell is associated with neurodegeneration, heart disease, and early demise, among other problems,” says Richard Doty, a PhD smell researcher who directs the Smell and Taste Center at the University of Pennsylvania’s medical school. Doty notes that multiple studies have shown that people whose sense of smell becomes heavily dulled over time are at a higher risk for those diseases. In the case of Alzheimer’s, clumps of protein known as plaques that build up in the brains of people who suffer from the condition form in the part of the brain responsible for smell, possibly explaining the association. Paying more attention to smell could provide a critical early tipoff to brewing problems. A decline in your sense of smell is as good a predictor of Alzheimer’s as genetic tests, according to Doty. To test how useful that relationship can be, in 2020, Doty will send 80,000 people a smell test he developed as part of a study funded by the Michael J. Fox Foundation. People whose results on the test indicate that they’re losing their sense of smell can be tagged for brain imaging and other advanced tests for neurodegeneration. If the effort helps catch problems early — when diseases of the brain may be more treatable — promoting smell awareness could become a major public health initiative, says Doty. TOUCH The sense of touch is getting a closer look, too. Research shows that people in stressful situations tend to touch their faces more often. In addition, Joshua Ackerman, a PhD psychology researcher at the University of Michigan, has studied the impact of face-touching on decision-making. He’s learned that people touch their faces an average of about 15 times an hour, and they say it clarifies thoughts and feelings. “Just putting your hand up to feel your face at some level directs your attention toward what’s going on in your head,” he explains. “The same areas of the brain that process information from the senses also deal with other types of thoughts and actions. So the processes can spill over and influence one another.” “Stroking the skin at a rate of two to four inches per second appears to produce the biggest positive effect. ‘We’re starting to bring scientific rigor to what has traditionally been a touchy-feely area, if you’ll excuse the pun.’” Ackerman conducted a study in which he found that merely touching an object can influence people’s attitudes. What happens, he explains, is that the feel of an object can trigger associations to specific thoughts and emotions. Holding a heavy object may conjure a feeling of gravity, for example. “The sensations trigger metaphorical ways of thinking,” he says. 
Putting these learnings to work is tricky, admits Ackerman, because like many unconscious effects, they tend to disappear when you tell people about them. Thus simply advising people to touch their faces to reduce stress or lift a lightweight object if they want to feel more easygoing might not work. But there are other ways people can benefit from touch. Oxford’s Charles Spence — who is writing a book called Sense Hacking, to be published early next year — recently identified specific types of caressing (of the arm and other areas of the body) that are linked to feelings of well-being. The research found that stroking the skin at a rate of two to four inches per second appears to produce the biggest positive effect. “We’re starting to bring scientific rigor to what has traditionally been a touchy-feely area, if you’ll excuse the pun,” he says. Spence advises anyone recovering from an illness to embrace caresses, whether from family, friends, or care providers, as a way of speeding up recovery. His one qualification: Make sure the stroking is performed by a human being. “There have been efforts to create the same sensations with a robot’s touch,” he says. “But they don’t have the same effect. We don’t know why, but there’s something about human interpersonal touch that has special properties.” SIGHT Spence studies the impact of sight on health, too. His lab has found, for example, that people who are told to look at their painful wounds and injuries through a “minimizing lens” — essentially a pair of binoculars turned backwards to make things look smaller — experience notable improvements not only in pain, but even in swelling and other physiological symptoms. It sounds bizarre, but Spence offers a simple explanation: “A big wound hurts more than a small one, doesn’t it?” Convincing part of the brain that the wound is smaller may trigger changes in both pain perception and the immune system. Another sight-based phenomenon Spence’s lab uncovered addresses the problem of undernutrition among elderly people. Studies have found that getting older people to eat a bit more each day cuts their risk of dying during a hospital stay in half. Spence’s group showed that by simply using brightly-colored plates and flatware, they were able to increase older people’s food consumption as much as 30%. It’s not entirely clear why the gimmick works, but Spence says it may turn eaters’ attention to the meal. “It works just as well back at home as it does in the hospital,” Spence says. SOUND Spence is also considering ways to use hearing to people’s advantage. He’s specifically interested in building on the well-established finding that music assists in healing, largely because music reduces stress and anxiety. Spence speculates that the soothing powers of music are heightened if music is specifically curated to address different types of illness — such as one playlist for someone recovering from heart surgery, and another for a person receiving chemotherapy. Research has demonstrated that physical-therapy patients do better when certain sounds are played at specific points during the therapy. “Playing a nice, harmonious sound in synchrony with patients flexing their backs leads to much better flexibility than when the sound of a creaking door is played,” he says. Ravi Mehta, a PhD consumer psychology researcher at the University of Illinois at Urbana–Champaign, has found that sound doesn’t have to be harmonious to bring benefits. 
Studies have shown the impact of noise can vary with the type of noise, the volume of the noise, and the type of thinking. To tease some of the effects apart, Mehta and colleagues exposed people in a study to different types of noisy soundtracks while asking them to perform various tasks. The results suggested that when people need to be creative, just the right amount of noise can win the day. The brain easily filters out a bit of noise, and becomes impaired by a lot of it, but is most productive when there’s a medium amount. “If you need to think more broadly, then noise helps,” he says. The reason it works, Mehta speculates, is that the noise may “defocus” the brain, forcing it out of whatever rut it may be in and freeing it up to explore new avenues of thought. Mehta now wants to pin down specific aspects of the noise, such as tempo and pitch, that might fine-tune the benefits. For now he recommends working in coffee shops, but he also offers a warning. “Noise can make things worse if you need to focus in on routine details,” he says. “A coffee shop is probably not a good place to do your taxes.” TASTE Eighty percent of what the brain interprets as a taste sensation is in fact really coming from our sense of smell. For that reason, researchers usually focus on smell as the more important input, even when it comes to what we put in their mouths. Not only do people smell food through their noses, but we smell it even more strongly when we place it on our tongues and particles from the food waft back in our mouths and into our nasal passages. Studies suggest people can differentiate between as many as a trillion different odors, but taste — when smell is taken out of the picture — is in comparison a blunt instrument, capable only of picking out five different inputs: saltiness, sweetness, bitterness, sourness, and savoriness (sometimes called “umami.”) You might think that would leave little opportunity for manipulating taste in the interests of better health. But in fact, there are a few techniques that are proving useful. One is the reassuring power of sweetness. For centuries parents and care providers have given sugar and other sweets to children in order to distract them from things like a painful shot, or as a reward. The phenomenon turns out to work just as well in adults. Spence conducted an experiment where he asked students to plunge their arms into an ice bath. The students reported significantly less discomfort when they were given some sort of sweet-tasting food just before the plunge. “Strawberries, caramel, and vanilla seem to work as well with adults as sugar does with babies,” he says. “And there’s evidence it applies in clinical settings in the same way.” That doesn’t mean a person must be exposed to ice or a needle to get the benefits of sweetness, he adds. A sweet taste appears to generally mitigate physical discomfort or unease. Bitterness may also come in handy for health. Research at Rutgers University has shown that people who are more sensitive to bitter tastes tend to be more selective about what they eat, and often end up with healthier diets that are lower in sugar and fats. While it hasn’t yet been proven, researchers have speculated that this and similar findings on taste sensitivities might be put to use by adjusting the flavor intensity in foods so that healthier foods become more appealing. But research like that is many years down the line.
https://elemental.medium.com/want-to-get-healthier-hack-your-five-senses-ddc9e749c41b
['David H. Freedman']
2019-09-05 11:01:01.943000+00:00
['Brain', 'Senses', 'Recovery', 'Illness', 'Health']
AWS — How to build a static website with S3 in 5 min
In this article, we will build a static website on AWS. We are going to use Simple Storage Service (S3) to host the static website.

static website hosting on AWS

Here is the step-by-step guide for creating a static website on AWS using S3. There are some prerequisites for this project. You need: an AWS account, a domain name, and HTML and CSS.

AWS Account You can create an AWS account with the free tier for 12 months. Let's create one and log into the console. Once logged in, you can see the console like below. You are logged in as the root account. The best practice is to create an IAM user to access S3 and Route 53 but, for simplicity, we are not doing that here; it's out of scope for this tutorial.

Domain Name You can buy a domain from Route 53 in AWS. It's pretty easy and straightforward. You can check whether the name is available, then add it to the cart. It might take some time for it to appear under registered domains, as in the second figure. I bought myagileboard.com for this tutorial.

Route 53 Registered Domains

HTML and CSS Let's create some simple HTML and CSS files for our index.html. When users hit the domain, these are the pages that load. Clone the below repo for the files.

sample page

Now that we have completed all the prerequisites, let's start the process.

Upload to S3 Since our domain name is myagileboard.com, let's create two buckets called myagileboard.com and www.myagileboard.com. The reason we create two buckets is to display the coming-soon page for both of these domains, since sometimes we omit the www before the website URL. Make sure you create these buckets as public. Upload the files to both the www.myagileboard.com and myagileboard.com buckets with public access permissions. Then enable static website hosting on the www.myagileboard.com bucket by going to the Properties tab, and configure a redirect on the myagileboard.com bucket.

use this bucket to host a website

use this to redirect all the routes

Check whether the URL is working by copying the endpoint.

s3 URL is working fine

Configure Route 53 Now go to Route 53 and create a new record set called www.myagileboard.com and give it an alias target of the bucket name www.myagileboard.com. Hit the URL www.myagileboard.com in the browser. That completes our static website hosting!

Conclusion Static website hosting on AWS is pretty straightforward, and your bucket name and your domain name should be the same. I omitted a lot of details for simplicity, and you can find plenty of documentation on AWS.
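The article performs these steps in the AWS console; the same setup can also be scripted. Here is a rough, hedged sketch using boto3 (bucket names follow the tutorial's domain; region handling, bucket policies, and public-access settings are simplified assumptions and may need adjusting for your account):

import boto3

s3 = boto3.client("s3")  # assumes credentials are configured, e.g. via `aws configure`

# 1. Create the two buckets (in us-east-1; other regions need CreateBucketConfiguration).
s3.create_bucket(Bucket="www.myagileboard.com")
s3.create_bucket(Bucket="myagileboard.com")

# 2. Upload the site files to the www bucket with public-read access.
s3.upload_file(
    "index.html", "www.myagileboard.com", "index.html",
    ExtraArgs={"ACL": "public-read", "ContentType": "text/html"},
)

# 3. Enable static website hosting on the www bucket...
s3.put_bucket_website(
    Bucket="www.myagileboard.com",
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# 4. ...and make the bare-domain bucket redirect all requests to it.
s3.put_bucket_website(
    Bucket="myagileboard.com",
    WebsiteConfiguration={"RedirectAllRequestsTo": {"HostName": "www.myagileboard.com"}},
)

The Route 53 alias record from the last step would still be created separately (in the console or via the route53 client).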
https://medium.com/bb-tutorials-and-thoughts/aws-how-to-build-a-static-website-with-s3-2fa0b8c8c417
['Bhargav Bachina']
2019-05-20 03:11:44.834000+00:00
['Programming', 'AWS', 'Cloud Computing', 'Software Development', 'Web Development']
PUBG — Make analytics with AWS and Plotly
Application of Athena and cufflinks to analyse PUBG data

For this article, I am going to start the analysis of the data extracted with the pipeline explained in this article. The goals of this article are to: get an introduction to AWS Athena, get insights on the data by using Plotly, and better understand how the video game PUBG is played.

PUBG is one of those games defined as a battle royale: X people (or squads) are dropped on an island, and the goal is to be the last survivor by using the items and weapons that are deployed randomly across the island. To increase the tension in the game (and to give it an end), the portion of the map that is available shrinks regularly, pushing the players to fight for their lives. In terms of gameplay, there are multiple islands available (each with its own environment, like desert or snow), you can play in different modes (solo, duo, squad), and sometimes the camera is predefined (FPP for first-person only, or mixed first/third person).

Overview of Athena Athena is a service developed by Amazon that lets you easily query data in an S3 bucket without managing servers or data warehouses. There is an example of the interface to use Athena in a web browser. The system is pretty open in terms of the data formats that can be used: CSV, JSON, ORC, Avro, and Parquet. The core of the system is built on Presto, which is an open source distributed SQL query engine. One of the main users of this project is Facebook, on various topics around interactive analytics, batch ETL, A/B testing, and advertiser analytics. There is a good article about Presto that goes into more detail on the machinery of the engine. Other big users of this tool are Netflix and Airbnb, which are building services around this kind of system. Coming back to the Athena service, the serverless model is interesting because billing is based only on the data scanned. So the S3 + Athena combo is a good mix for people who want to handle a volume of data that can be considered "big" without managing all the infrastructure (which is a full-time job). To connect AWS Athena to a Python script, there is a package called pyathenajdbc that can be installed; it provides a connector whose results can be used in a pandas dataframe. There is an example of a script to connect to the data (a rough sketch of such a script is shown at the end of this section). The code is quite simple and looks like a call to a classic PostgreSQL database. To be fully transparent in terms of cost, there is a graph with the cost for the experiment. The real amount comes from the S3 reads and Athena usage, and it's less than $2 per day of analysis (and there were just 4 days of analysis for this experiment).

Let's start the analysis of the data associated with PUBG.

Status on the data collected The pipeline ran for more than a month, between the 26th of January 2019 and the 5th of April 2019. This period represents around 69,000 matches, so that's a pretty interesting volume of data to handle (given the number of events collected during each match). In terms of region and platform, the pipeline was focused on extracting data from the PC platform in North America.
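Based on the author's description, the connection script would look roughly like the following sketch. The staging bucket, region, and table names are placeholders, and the exact pyathenajdbc options may differ by version:

import pandas as pd
from pyathenajdbc import connect

# Athena needs an S3 location to stage query results (placeholder bucket).
conn = connect(
    s3_staging_dir="s3://my-athena-results-bucket/staging/",
    region_name="us-east-1",
)

# Example query against a hypothetical table of player-kill events.
query = """
    SELECT killed_by, COUNT(*) AS kills
    FROM pubg.player_kill
    GROUP BY killed_by
    ORDER BY kills DESC
    LIMIT 20
"""

df = pd.read_sql(query, conn)  # results come back as a pandas dataframe
print(df.head())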
I am going to focus on 3 events for this analysis: the periodic game stats, which is around 11,000,000 rows; the player kills, which is around 7,000,000 rows; and the items used, which is around 19,000,000 rows.

Plotly and co I am a pretty big fan of the Plotly library. I wrote an article last year on a Shiny-like Python package called Dash, powered by Plotly; it's a really cool package that makes building dashboards much easier in Python. The package is developed by the company Plotly, which is based in the Mile End in Montreal, so we are literally neighbors (about a 4-minute walk). The original package is a really cool, free library for making interactive plots based on D3.js, and they offer the possibility to deploy the plots you produce to their chart studio (the free version lets you host 25 charts on their platform, but with a premium account you can host more charts/data). In terms of usage, the original syntax is a little bit "heavy", so people have developed wrappers that make building a graph more of a one-liner. For this article I have only used cufflinks, but I am planning to write other articles that will use Plotly Express.

Data extracted To be honest, there are multiple websites that have done similar analyses, like PUBGmap.io, but it is still interesting to run a different analysis and make some comparisons. There is a representation of the matches collected by map and mode. The most popular mode is squad, and the most popular map is Savage. For the following analysis, I am going to focus on squad mode.

Duration of the matches In terms of duration, I took a sample of matches (1,000 matches) for each map; here are some boxplots. The map Savage looks to have a different behavior on duration, which could be explained by its size being much smaller than the other maps (there is roughly a 5-minute difference in the median).

Evolution of the matches For the lifespan of the players, the following figure shows the percentage of players alive versus the completion of the match, per map. At the beginning of the match, most of the players stay alive, which corresponds to the moment when all the players land on the map. In terms of the evolution of players alive, the maps do not follow the same path to the end: one map looks "smoother" than the others, Erangel and Dihorotok are very similar, and Savage is the one that seems the most violent (which can be explained by the format of the map).

Weapons usage For the weapons used to kill, there is the distribution of the number of kills per weapon during the period. There are multiple types of weapons in the game, from handguns and rifles to shotguns and crossbows, but the most popular is the AK47, and rifles dominate the top of the list. Another really interesting piece of data from the kill event is the location on the body of the final shot; the following figure shows, for each weapon, the distribution of the final shot's location. The proportion of headshots differs by weapon: handguns look much less precise for headshots than rifles (and that makes sense). Another interesting insight is that some weapons seem to be map specific; the following figure shows a distribution of the weapons per map.
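Going back to the duration boxplots mentioned above, here is roughly how a chart like that could be produced with the cufflinks wrapper. The dataframe layout is an assumption for illustration, not the author's exact code:

import pandas as pd
import cufflinks as cf

cf.go_offline()  # render Plotly charts locally instead of through chart studio

# Assumed layout: one row per sampled match, with its map name and duration in minutes.
matches = pd.DataFrame({
    "map": ["Savage", "Erangel", "Dihorotok", "Savage"],
    "duration_min": [22.5, 31.0, 30.2, 21.8],
})

# One boxplot of match duration per map, similar to the figure described in the article.
matches.pivot(columns="map", values="duration_min").iplot(kind="box")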
Support items usage To conclude, I decided to get an overview of heal and booster usage in the game. The following figure shows the evolution of heal-item and booster-item usage events as a function of match completion. For boosters, there is definitely a peak around 40% of the match; for heal items, there is a first bump around 15%, which corresponds to the first wave of eliminated players, and the peak is in the middle of the match, with a second phase of eliminations. This article was an introduction to more work that I am going to do with the data collected from PUBG. A future article will focus on the position data; that dataset represents around 500,000,000 rows, so it is going to be even more interesting.
https://towardsdatascience.com/pubg-make-analytics-with-aws-and-plotly-17639a692e10
['Jean-Michel D']
2019-10-22 15:34:45.081000+00:00
['Pubg', 'Presto', 'AWS', 'Plotly', 'Data Visualization']
The Work of Art in the Age of Algorithmic Reproduction
Anna Ridler’s Fall of the House of Usher unspools, rooms and bodies spreading half-seen across the frames of this 12-minute film like gossamer. A woman appears to walk down a hallway, then melts into a moonlit sky. A face appears in the dark, contorts into shapes. The animation is based on a 1929 film version of Edgar Allen Poe’s story, but its inky and strange visuals are the result of something altogether more modern: machine learning. Each moment of Ridler’s film has been generated by artificial intelligence. The artist took stills from the first four minutes of the 1929 movie, then drew them with ink on paper. These versions were then used to train a generative adversarial network (GAN), teaching it what sort of picture should follow on from another. The GAN uses this information to create its own procession of stills, based around a pair of networks that work in competition with each other — one as a generator, one as a discriminator, evaluating the work of the former like an algorithmic critic. ‘Fall of the House of Usher,’ by Anna Ridler. Photo: Anna Ridler The result is an AI-generated animation based on drawings that are based on the opening minutes of a 1929 film, which is based on an 1839 short story about a decaying lineage. It is a project that uses machine learning techniques not to showcase the technology, but as a way to engage with ideas of memory, the role of the creator, and the prospect of degeneration. It is primarily an artistic work, leveraging artificial intelligence as a medium in a way another artist may use acrylics or videotape. “By restricting the training set to the first four minutes of the film, I was able to control to a certain extent the levels of ‘correctness,’” Ridler explains. “As the animation progresses, it has less and less of a frame of reference to draw on, leading to uncanny moments that I cannot predict where the information starts to break down, particularly at the end of the piece. I deliberately take the ‘decay’ offered by making an image in this way and turn it into a central part of the piece, echoing the destruction that is so central to the narrative.” ‘Fall of the House of Usher,’ by Anna Ridler Ridler is part of a new wave of artists who are adept at coding and plugged into the nascent field of machine learning. If neural networks have largely been the domain of the computer science community, projects like Fall of the House of Usher are efforts to reframe these cryptic technologies as both artistic apparatus and important subject matter. After all, talk of adversarial networks may sound obscure, but these are techniques that lie beneath the interfaces we swipe and stroke on a daily basis, from video games to photo recognition on Facebook. “Given the implications on our society that machine learning already has, and will increasingly have, it is crucial that people investigate and question this technology from all possible angles,” says Mario Klingemann, currently artist in residence at Google Arts and Culture. “Artists tend to ask different questions than scientists, businesspeople, or the general public,” Klingemann adds. “Artists also might be in the right position to interpret or extrapolate the possibilities and dangers that machine intelligence harbors and express their findings in a language that many people can understand.” Bodies Built by Machines Klingemann’s work, much like Ridler’s, hinges between artificial intelligence and human bodies. 
His 2017 collaboration with Albert Barqué-Duran, titled My Artificial Muse, for example, resulted in an oil painting of a neural network–generated “muse,” itself based on a training set of classic paintings, including John Everett Millais’ Ophelia. Earlier this year, another project, Alternative Face, involved training a neural network to generate controllable faces based on the French singer Françoise Hardy. Klingemann used this to make it seem as if Hardy was speaking the words of Trump counselor Kellyanne Conway during her infamous “alternative facts” interview. ‘Alternative Face,’ by Mario Klingemann The ability to put the words of one person into the face of another is an unsettling illustration of the scope for machine learning to undermine the truth of what we see on our screens. Klingemann has continued to mine this seam, regularly posting neural network–generated faces on Twitter that bring to mind the abject self-portraits of Cindy Sherman. I asked him if he considers working with artificial intelligence in this way to be a form of collaboration. Klingemann told me that it’s closer to playing an instrument that he happens to build himself. ‘Alternative Face,’ by Mario Klingemann. Photo: Mario Klingemann “Admittedly it is a very complex instrument that at times seems to have its own unexpected behaviors, but with more practice and experience, outcomes that at first seem to be unexpected or surprising become predictable and controllable,” he said. For Klingemann and Ridler, engaging with artificial intelligence means understanding the meat and potatoes behind neural networks: algorithms and code. But other artists are also tackling these concepts and asking the same questions about invisible, intelligent systems, but from a different angle. Lauren McCarthy’s work, for example, frequently involves substituting software for herself. In Follower, volunteers were invited to download an app that granted them a real-life follower for one day. At the end of the day, the volunteer was sent a picture of themselves, taken by the follower. In a more recent project, LAUREN, McCarthy took on the role of an artificial intelligence assistant, akin to Amazon’s Alexa. Volunteers allowed the artist to install cameras, microphones, and smart sensors in their homes. Over the course of three days, LAUREN studies their habits, takes orders, makes recommendations, and controls everything from bathroom taps to door locks. It’s a purposeful inversion of the networks purported by Amazon, Google, and Apple—something McCarthy calls a “human intelligent smart home.” ‘Followers,’ by Lauren McCarthy. Photo: Lauren McCarthy/David Leonard “I’d say that I am trying to remind us that we are humans in the midst of technological systems,” McCarthy told me. “My interest is in people, not technology. What it means and what it feels like to be a person right now is changing quickly as the systems around us evolve, but there are also some parts of the human experience that remain constant through it all. I think this is the question as we think about ourselves in relation to machines. Where is the boundary of what we consider ‘human’?” Editorial Control One advantage of McCarthy’s approach is that by using human performance in favor of artificial intelligence, she skirts a potential pitfall in terms of patronage. Because AI is such an emerging area in art circles, there isn’t much set up for it in the way of funding streams. 
This means artists are often reliant on commercial companies to offer funding or technical expertise, and this, McCarthy suggests, could limit the subjects that tech-heavy projects are given license to confront. ‘LAUREN,’ by Lauren McCarthy. Photo: Lauren McCarthy “On one hand, I am happy to see corporations recognizing the potential for art to explore these topics and putting money toward it,” McCarthy says. “However, we need to be careful. Google and other companies providing these funding streams means that they have ultimate editorial control. It is unlikely that we will see work come from it that includes strong critique of AI, political provocation, or questioning of technologies developed by the companies.” This October, Arts Council England announced a new pot of funding for arts organizations working in the relatively new field of virtual reality, so it follows that machine learning–based projects could similarly be included in future publicly funded initiatives. There’s also the sentiment, however, that it’s ultimately more crucial to interrogate the systems of power this type of technology facilitates, rather than the technology itself — regardless of whether an artwork uses a generative adversarial network or a human sitting behind a monitor, watching a man brush his teeth. If AI is to be part of our lives, art should be there to meet it. “Most artists are dealing, in one way or another, with the experience of being a person right now,” McCarthy says. “Technology is a force that affects almost every aspect of this experience, whether we feel it directly or not.”
https://medium.com/s/living-in-the-machine/the-work-of-art-in-the-age-of-algorithmic-reproduction-bd3bd9b4e236
['Thomas Mcmullan']
2017-12-12 17:01:01.447000+00:00
['AI', 'Artificial Intelligence', 'Technology', 'Art', 'Machine Learning']
GAN — Role of Individual Units in a Deep Network
When a neural network is trained on a high-level task like classifying a place or generating a scene, individual units in the network often become responsible for concepts that are interpretable to humans — for example, trees, windows, or human faces. Still, analyzing the individual parts of a model remains an open problem. To study it, the researchers consider two kinds of neural networks that contain interpretable parts: networks that have been trained, with supervision, to classify scene images; and networks that have been trained to generate scene images (GANs).

Analysis of the parts of the classifier and generator To recognize the units that are responsible for human-readable concepts, the researchers compare the outputs of the units to the outputs of a semantic segmentation model that has been trained to tag pixels with classes of objects, parts, materials, and colors. This technique is called network dissection. It makes it possible to standardize and scale the process of finding the parts of a model that correspond to the same semantic classes.
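At its core, that comparison is usually an overlap score between a unit's thresholded activation map and the segmentation mask for a concept. A simplified sketch of the idea follows; the thresholding rule and array shapes are assumptions (the published method uses per-unit quantile thresholds computed over a whole dataset):

import numpy as np

def unit_concept_iou(activation, concept_mask, threshold):
    """IoU between a unit's binarized activation map and a concept's segmentation mask.

    activation:   2D float array, the unit's upsampled activation over one image
    concept_mask: 2D bool array, True where the segmentation model labeled the concept
    threshold:    activation level above which the unit is considered 'on'
    """
    unit_mask = activation > threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a unit that lights up on the left half of an image,
# compared against a "tree" mask that covers roughly the same region.
activation = np.zeros((8, 8)); activation[:, :4] = 1.0
tree_mask = np.zeros((8, 8), dtype=bool); tree_mask[:, :3] = True
print(unit_concept_iou(activation, tree_mask, threshold=0.5))  # high IoU -> a 'tree unit'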
https://medium.com/deep-learning-digest/gan-role-individual-units-deep-network-76e422119b2c
['Mikhail Raevskiy']
2020-09-19 12:02:10.576000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'AI', 'Data Science']
How Pinterest runs Kafka at scale
Yu Yang | Pinterest engineer, Data Engineering Pinterest runs one of the largest Kafka deployments in the cloud. We use Apache Kafka extensively as a message bus to transport data and to power real-time streaming services, ultimately helping more than 250 million Pinners around the world discover and do what they love. As mentioned in an earlier post, we use Kafka to transport data to our data warehouse, including critical events like impressions, clicks, close-ups, and repins. We also use Kafka to transport visibility metrics for our internal services. If the metrics-related Kafka clusters have any glitches, we can’t accurately monitor our services or generate alerts that signal issues. On the real-time streaming side, Kafka is used to power many streaming applications, such as fresh content indexing and recommendation, spam detection and filtering, real-time advertiser budget computation, and so on. We’ve shared out experiences at the Kafka Summit 2018 on incremental db ingestion using Kafka, and building real-time ads platforms using kafka streams. With >2,000 brokers running on Amazon Web Services, transporting >800 billion messages and >1.2 petabytes per day, and handling >15 million messages per second during the peak hours, we’re often asked about our Kafka setup and how to operate Kafka reliably in the cloud. We’re taking this opportunity to share our learnings. Pinterest Kafka setup Figure 1 shows the Pinterest Kafka service setup. Currently we have Kafka in three regions of AWS. Most of the Kafka brokers are in the us-east-1 region. We have a smaller footprints in us-east-2 and eu-west-1. We use MirrorMaker to transport data among these three regions. In each region, we spread the brokers among multiple clusters for topic level isolation. With that, one cluster failure only affects a limited number of topics. We limit the maximum size of each cluster to 200 brokers. We currently use d2.2xlarge as the default broker instances.The d2.2xlarge instance type works well for most Pinterest workloads. We also have a few small clusters that use d2.8xlarge instances for highly fanout reads. Before settling on d2 instances with local storage, we experimented with using Elastic Block Store st1 (throughput optimized hard drives) for our Kafka workloads. We found that the d2 instances with local storage performed better than EBS st1 storage. Figure 1. Pinterest Kafka setup We have default.replication.factor set to 3 to protect us against up to two broker failures in one cluster. As of November 2018, AWS Spread Placement Groups limit running instances per availability zone per group to seven. Because of this limit, we cannot leverage spread placement groups to guarantee that replicas are allocated to different physical hosts in the same availability zone. Instead, we spread the brokers in each Kafka cluster among three availability zones, and ensure that replicas of each topic partition are spread among the availability zones to withstand up to two broker failures per cluster. Kafka Cluster auto-healing With thousands of brokers running in the cloud, we have broker failures almost every day. Manual work was required to handle broker failures. That added significant operational overhead to the team. In 2017, we built and open-sourced DoctorKafka, a Kafka operations automation service to perform partition reassignment during broker failure for operation automation. It turned out that partition reassignment alone is not sufficient. 
In January 2018, we encountered broker failures that partition reassignment alone could not heal due to degraded hardware. When the underlying physical machines were degraded, the brokers ran into unexpected bad states. Although DoctorKafka can assign topic partitions on the failed brokers to other brokers, producers and consumers from dependent services may still try to talk to the failed or degraded broker, resulting in issues in the dependent services. Replacing failed brokers quickly is important for guaranteeing Kafka service quality. In Q1 2018, we improved DoctorKafka with a broker replacement feature that allows it to replace failed brokers automatically using user-provided scripts, which has helped us protect the Kafka clusters against unforeseeable issues. Replacing too many brokers in a short period of time can cause data loss, as our clusters only store three replicas of data. To address this issue, we built a rate limiting feature in DoctorKafka that allows it to replace only one broker for a cluster in a period of time. It’s also worth noting that the AWS ec2 api allows users to replace instances while keeping hostnames and IP addresses unchanged, which enables us to minimize the impact of broker replacement on dependent services. We’ve since been able to reduce Kafka-related alerts by >95% and keep >2000 brokers running in the cloud with minimum human intervention. See here for our broker replacement configuration in DoctorKafka. Working with the Kafka open source community The Kafka open source community has been active in developing new features and fixing known issues. We set up an internal build to continuously pull the latest Kafka changes in release branches and push them into production in a monthly cadence. We’ve also improved Kafka ourselves and contributed the changes back to the community. Recently, Pinterest engineers have made the following contributions to Kafka: KIP-91 Adding delivery.timeout.ms to Kafka producer KIP-245 Use Properties instead of StreamsConfig in KafkaStreams constructor KAFKA-6896 Export producer and consumer metrics in Kafka Streams KAFKA-7023 Move prepareForBulkLoad() call after customized RocksDBConfigSettters KAFKA-7103 Use bulk loading for RocksDBSegmentedBytesStore during init We’ve also proposed several Kafka Improvement Proposals that are under discussion: KIP-276 Add config prefix for different consumers KIP-300 Add windowed KTable API KIP-345 Reduce consumer rebalances through static membership Next Steps Although we’ve made improvements to scale the Kafka service at Pinterest, many interesting problems need to be solved to bring the service to the next level. For instance, we’ll be exploring Kubernetes as an abstraction layer for Kafka at Pinterest. We’re currently investigating using two availability zones for Kafka clusters to reduce interzone data transfer costs, since the chance of two simultaneous availability zone failures is low. AWS latest generation instance types are EBS optimized, and have dedicated EBS bandwidth and better network performance than previous generations. As such, we’ll evaluate these latest instance types leveraging EBS for faster Kafka broker recovery. Pinterest engineering has many interesting problems to solve, from building scalable, reliable, and efficient infrastructure to applying cutting edge machine learning technologies to help Pinners discover and do what they love. Check out our open engineering roles and join us! 
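As an illustration of the rate-limiting idea described above — replacing at most one failed broker per cluster per time window — here is a hypothetical sketch. The helper functions and the 30-minute window are placeholders for illustration only, not DoctorKafka's actual API or configuration:

import time

# --- placeholders standing in for real cluster state and operator tooling ---
def detect_failed_brokers(cluster):        # hypothetical health check
    return cluster.get("failed_brokers", [])

def run_replacement_script(broker_id):     # hypothetical user-provided replacement script
    print(f"replacing broker {broker_id}")

REPLACEMENT_INTERVAL_SECONDS = 30 * 60     # assumed window: one replacement per 30 minutes
_last_replacement = {}                     # cluster name -> time of the last replacement

def maybe_replace_failed_broker(cluster):
    """Replace at most one failed broker per cluster per time window."""
    now = time.time()
    if now - _last_replacement.get(cluster["name"], 0) < REPLACEMENT_INTERVAL_SECONDS:
        return                             # rate limited: a broker was replaced recently
    failed = detect_failed_brokers(cluster)
    if failed:
        run_replacement_script(failed[0])
        _last_replacement[cluster["name"]] = now

maybe_replace_failed_broker({"name": "metrics-cluster", "failed_brokers": ["broker-42"]})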
Acknowledgements: Huge thanks to Henry Cai, Shawn Nguyen, Yi Yin, Liquan Pei, Boyang Chen, Eric Lopez, Robert Claire, Jayme Cox, Vahid Hashemian, and Ambud Sharma who improved Kafka service at Pinterest. Appendix: 1. The Kafka broker setting that we use with d2.2xlarge instances. Here we only list the settings that are different from Kafka default values. 2. The following is Pinterest Kafka java parameters. We enable TLS access for Kafka at Pinterest. As of Kafka 2.0.0, each KafkaChannel with a ssl connection costs ~122K memory, and Kafka may accumulate a large number of unclosed KafkaChannels due to frequent re-connection (see KAFKA-7304 for details). We use a 8GB heap size to minimize the risk of having Kafka run into long-pause GC. We used a 4GB heap size for Kafka process before enabling TLS.
https://medium.com/pinterest-engineering/how-pinterest-runs-kafka-at-scale-ff9c6f735be
['Pinterest Engineering']
2018-11-28 22:21:11.105000+00:00
['AWS', 'Engineering', 'Kafka Streams', 'Apache Kafka', 'Open Source']
Complex PTSD (c-PTSD) and How It Damages the Brain
C-PTSD is a relatively new construct and doesn’t appear in the current version of the DSM (Diagnostic and Statistical Manual of Mental Disorders, 5th edition), which is used by health care professionals to diagnose and treat mental disorders. PTSD is included in the DSM-5 under a new category called “Trauma- and Stressor-related Disorders”. Because of this, and because of the overlap of symptoms with PTSD, c-PTSD may be hard to diagnose correctly. Its symptoms are often much more intense and variable than those of PTSD, and health care professionals may mistake it for other mental disorders — particularly borderline personality disorder (BPD). Symptoms Common symptoms of PTSD and c-PTSD include: flashbacks and nightmares depression or negative mood changes avoiding situations that remind you of the trauma physical effects (dizziness, nausea, shaking, an adrenaline rush, etc.) when remembering the trauma living in a continual state of high alert (termed hyperarousal) perceiving the world as a dangerous place loss of trust in others or self sleeping or concentrating difficulties easily startled by loud noises or sudden appearances Some of the symptoms associated with c-PTSD in particular include: disturbances in self-organization, such as… emotional dysregulation loss of or negative self-concept relational difficulties C-PTSD suffers experience changes in the brain due to the continuous exposure to the flight-or-fight hormones adrenaline, norepinephrine, and cortisol. These hormones powerfully create changes in our biochemistry that help prepare us to physically save our own lives. Adrenaline (aka epinephrine): Increases heart rate and respiration Increases blood flow to the brain and major muscle groups Stimulates a huge dump of sugar (in the form of glucose) into the bloodstream to fuel the muscles Inhibits production of insulin that would counteract the glucose surge Decreases our ability to feel pain This “adrenaline rush” happens almost instantaneously, allowing us to run for our lives or turn and fight our attacker. The problem with adrenaline The problem is, this system evolved over millions of years as a way to react to sudden, unexpected dangers. When we experience prolonged danger, the system is constantly pouring out adrenaline and then releasing cortisol to counteract the effects of the adrenaline. Repeated exposure to adrenaline becomes addictive; people who become addicted to adrenaline are referred to as “adrenaline junkies”, and may seek out thrills and shocks (such as skydiving, racing, or watching horror movies) to get the rush. … in other words, being tapped on the shoulder can create the same physical reaction as might a shark attack. Other hormones in the overstimulated brain In addition to its addictive nature, constant exposure to adrenaline (and the norepinephrine and cortisol that go along with it) taxes the system and begins to wear down our organs. The prefrontal cortex (the part of the brain behind the forehead) can’t regulate the excessive amounts of norepinephrine released by the amygdala in a continuously stressed system, and the PFC begins to lose its ability to discern the severity of a given threat — in other words, being tapped on the shoulder can create the same physical reaction as might a shark attack. Studies have indicated that the PFC can undergo physical changes as a result of repeated exposure to stress, including decreases in volume in some of its areas and a loss of overall cohesiveness (i.e. integrity). 
These decreases may be part of why PFC function is impaired in people with c-PTSD.
https://medium.com/what-to-do-about-everything/complex-ptsd-c-ptsd-and-how-it-damages-the-brain-628639c8d6ac
[]
2020-08-16 20:35:34.478000+00:00
['PTSD', 'Mental Health', 'Health', 'Trauma', 'Self Improvement']
Presto at Pinterest
Ashish Singh | Pinterest Engineer, Data Engineering

Pinterest is a data-driven company, and many critical business decisions are made based on insights from data. These insights are powered by the Big Data Platform team, which enables others within the company to process petabytes of data to find answers to their questions. Analysis of data is a critical function at Pinterest not just to answer business questions, but also to debug engineering issues, prioritize features, identify the most common issues faced by users, and see usage trends. As such, these analytics capabilities are needed equally by engineers and non-engineers at Pinterest. SQL and its variants have proven to provide a level playing field for employees to express their computational needs, or analysis, effectively. It also provides a great abstraction between user code/queries and the underlying compute infrastructure, enabling the infrastructure to evolve without affecting users. To meet employees’ critical need for interactive querying, we’ve worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest’s scale has involved resolving quite a few challenges. In this post, we share our journey.

Overview

Figure 1 below gives an overview of the Presto deployment at Pinterest. Our infrastructure is built on top of Amazon Web Services (AWS) EC2 and we leverage AWS S3 for storing our data. This separates the compute and storage layers and allows multiple compute clusters to share the S3 data. We have multiple Presto clusters that serve different use cases. These clusters can be long- or short-lived. The two major ones are the ad-hoc and scheduled clusters: the former serves ad-hoc queries and the latter serves scheduled queries. Keeping ad-hoc queries separate from scheduled queries enables us to provide a better SLA for scheduled queries, and also brings more predictability to resource demand on the scheduled cluster. Until 2016, Pinterest’s analytical needs were served by a more conventional data warehouse that didn’t scale with Pinterest’s data size; it was then replaced by Presto. Running Presto at Pinterest’s scale came with its own challenges. In the early days of onboarding Presto, we frequently saw issues including Presto coordinator crashes and the cluster getting stuck with close to zero worker parallelism. Later in this blog, we explain the reasons for these issues and discuss how we solved them.

Deployment

We have hundreds of petabytes of data and tens of thousands of Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ Pinterest employees) using Presto, who run about 400K queries on these clusters per month. Presto is well known for its capability to query data from various systems; however, only the Hive connector is currently used at Pinterest. Hive and Presto both share the same Hive Metastore Service. It’s very common for our users to use Hive for writing data, and Presto for read-only analysis. In addition, we recently started allowing Presto to create tables and insert data, primarily for the following reasons.

Capability to run big queries: We limit queries by their runtime and the data they process on Presto. Write support provides an alternative way to run big queries by breaking them into smaller queries.
Each small query can read from previous queries’ output and write to an intermediate table which is then consumed by the next query. This is a better approach to dealing with big queries, as it provides easy debuggability, modularity, sharing and checkpointing. If one sub-query fails, only that sub-query and subsequent sub-queries need to be re-run, and not the entire big query, which saves time and resources/ money. Supporting workflows: Impressed by Presto’s processing speed, users have been asking for support for defining workflows on top of Presto. With only read capability, Presto either could only have served at the end of a flow providing final output, or Presto output would have been brought in memory of workflow system and then passed to next job/ execution. Both of these approaches would have been very limiting. With Presto supporting write, it can be easily used within a flow. Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. Presto deployment at Pinterest should look very similar to any large scale Presto deployments. There are a couple of in-house pieces, i.e., Presto Controller and Presto Gateway, that we talk about in the next subsections. Presto Controller Presto controller is a service built in-house that is critical to our Presto deployments. Following are some of the major functionalities served by the controller as of today. Health check Slow worker detection Heavy query detection Rolling restarts of Presto clusters Scaling clusters Presto Gateway Presto gateway is a service that sits between clients and Presto clusters. It essentially is smart http proxy server. We got a head start on this by using Lyft’s [Presto-Gateway](https://github.com/lyft/presto-gateway). Since then, we’ve added many functionalities on top of it, and we plan on contributing those functionalities back to Lyft’s version. This service makes clients agnostic of specific Presto clusters and enables the following usages. Some of these features are in active development and we are slowly moving all of our clients from talking to specific clusters to Presto Gateway. Rules-based routing of queries Resource usage limits and current usage visibility for users Overall Presto clusters’ health visibility Monitoring/ Alerting Each query submitted to Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query submitted events without corresponding query finished events. These events enable us to capture the effect of cluster crashes over time. JMX and host OS metrics are logged to OpenTSDB via tcollector that runs on all Pinterest hosts. Using metrics from OpenTSDB, Presto real-time dashboards are published on Statsboard (Metrics monitoring UI at Pinterest). This has been handy for debugging service issues. Statsboard also has an alerting mechanism which is tied to PagerDuty. Clients There are a few options for users to interact with Presto. Most common ones are DataHub (an in-house web UI tool), Jupyter, and Tableau. However, there are quite a few custom tools that are powered via Presto. Analysis To scale the Presto usage at Pinterest, we cautiously decided which pain points to prioritize. We utilized data collected from Presto clusters and Presto query logs to derive informative metrics. Below are a few. Which tables are slow while reading? 
Which queries when run together can crash or stall the cluster? Which users/ teams are running long queries? What is a good threshold for a config? P90 and P99 query runtimes? Query success rates? Figure 1: Presto deployment at Pinterest Challenges and our solutions Deeply nested and huge thrift schemas Coordinator in a Presto cluster is very important for the entire cluster operation. As such, it’s also a single point of failure. Until mid last year, our Presto version was based on open source Presto version 0.182. Since then many bug fixes and improvements have been made to the coordinator to better cope with its critical responsibilities. However, even with improvements, we found our Presto clusters’ coordinators would get stuck or even crash with out of memory (OOM). One of the most common reasons for crashes was very large and very deeply nested thrift schemas, which are very common among our Hive tables. For instance, a popular and commonly used large Thrift schema has over 12 million primitives and a depth of 41 levels. This schema when serialized to string takes over 282 MB. We have close to 500 hive tables with over 100K primitives in their schemas. In Presto, it’s the responsibility of the coordinator to fetch schemas of tables from Hive Metastore for Hive catalog and then serialize that schema in each task request it sends to workers. This design keeps Hive Metastore service from getting bombarded with hundreds of requests simultaneously from Presto workers. However, this design has an adverse effect on coordinator memory and network when schemas are very large. Fortunately, our large and deeply nested schemas issue is only limited to tables using Thrift schemas. In our deployments, a Thrift schema Java archive (jar) file is created and put into the classpaths of coordinator and each worker of a Presto cluster and is loaded at service start time. A new jar with updated schemas is created and reloaded during daily service restart. This enabled us to completely get rid of Thrift schemas from tasks’ requests: instead, only a Thrift class name is passed as part of the request, which has helped stabilize Presto coordinator in deployments by a huge factor. Slow or stuck workers Presto gains a part of its efficiency and speed from the fact that it always has JVMs up and is ready to start running tasks on workers. A single JVM is shared for multiple tasks from multiple queries on a Presto worker. This sharing often used to lead to a heavy query slowing down all other queries on a cluster. Enforcing memory constraints with resource groups, which enforces limits on the amount of memory a query can consume at a given time on a cluster, went a long way to resolve these issues in a highly multi-tenant cluster. However, we still used to see clusters coming to a standstill. Queries would get stuck, worker parallelism would drop to zero and stay there for a long time, communication error started popping up and queries started getting timed out. Presto uses a multilevel feedback queue to ensure slow tasks aren’t slowing down all tasks on a worker. This can lead to a worker having a lot of slow tasks accumulated over time, as quick tasks would be prioritized and will quickly finish. Slow IO tasks can also accumulate on a worker. As mentioned, all our data sits in AWS S3 and S3 can throttle down requests if a prefix is being hit hard, which can further slow down tasks. If a worker is slow or stuck, the slowness gradually spreads through the Presto cluster. 
Other workers waiting on pages from a slow worker would slow down and will pass down the slowness to other workers. Solving this problem requires a good detection and a fair resolution mechanism. We resorted to following checks to detect workers getting slow. Check if a worker’s CPU utilization is lower than cluster’s average CPU utilization over a threshold and this difference is sustained over some time. Check if a number of queries are failing with internal errors, indicating failure while talking to a worker over a threshold over some time. Check if a worker has open file descriptors higher than a threshold for more than some time. Once a worker matches any of the above criteria, Presto Controller would mark the worker for a shutdown. A graceful shutdown is first attempted, however failure to gracefully shutdown a worker in a few attempts will lead to controller forcibly terminating the EC2 instance for dedicated workers or shutting down the Kubernetes pod hosting the worker. Unbalanced resources in multiple clusters As shown in Figure 1, we have multiple Presto clusters at Pinterest. To efficiently utilize the available resources across all Presto clusters, a new query should be sent to an under-utilized cluster or resources from an under-utilized cluster must be moved to a cluster where the query is going to run. It would be easier to do the former, however at Pinterest different Presto clusters have different access patterns and different characteristics. Some clusters are tuned for very specific types of queries/ use-cases that run on them. For instance, running ad-hoc queries on the scheduled cluster, which is meant to run only scheduled queries, will interfere with scheduled cluster usage pattern analysis and can also adversely affect the queries on the cluster. This interaction between queries is why we prefer moving resources from under-utilized clusters to over-utilized clusters. Moving a dedicated EC2 instance from one cluster to another would have required us to terminate and re-provision the instance. This process can easily take close to or more than ten minutes. Ten minutes in the Presto world, where our P90 query latency is less than five minutes, is a long time. Instead, the Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency on bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Some other advantages of deploying on Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc. Presto controller service is responsible for adding/ removing workers on Kubernetes. We have a static count of workers on Kubernetes today for each cluster. However, we plan to soon auto-scale clusters based on current demand, and also historic trends of demands on these clusters. Ungraceful cluster shut down Each night we restart all Presto clusters to load updated configuration parameters, Thrift schemas, custom Hive Serializer/Deserializer (SerDe) and User Defined Functions (UDFs). Ability to shut down a service without affecting any running tasks is an important aspect of a service (usually referred to as graceful shutdown). In open source Presto, there is no way to initiate a graceful shutdown of a cluster. Presto operators at various organizations handle graceful shutdowns by controlling traffic to the clusters. 
We’re starting to do the same at Pinterest too with Presto Gateway. However, currently, there are some clients that talk to a specific Presto cluster and get affected by ungraceful cluster shut down. Even with Presto Gateway, we’ll still have some clients that will continue to talk to specific Presto clusters without going through Presto Gateway, either due to security reasons or the fact that there is just one cluster serving a specific use-case. In Presto, one can perform a graceful shutdown of a worker. However, that alone is not sufficient to ensure graceful shutdown for an entire cluster. We added a graceful shutdown capability to Presto coordinator to perform a graceful shutdown of an entire cluster. When a cluster graceful shutdown is initiated, a shutdown request is sent to the Coordinator of the cluster. On receiving a graceful shutdown request, similar to Presto Workers, the Coordinator changes its state to SHUTTING_DOWN. In this state, Presto coordinator does not accept any new query and waits for all existing queries to finish before shutting down itself. In this state, the coordinator responds with an error to any new query, informing the client that cluster is shutting down and asking them to retry in some time, usually around maximum allowed query runtime. This fail-fast with informative message alone is much better than previous behavior of clients seeing abrupt failures, prompting them to simply retry the queries only to see those failures again. In the future, we plan to add a capability to reload jars without restarting processes and make some configuration parameters dynamic to get away from the need to restart clusters frequently. No impersonation support for LDAP authenticator As shown in Figure 1, we have various clients connecting to Presto clusters. Some of these are services that allow users to run queries. For resourcing and accounting purposes, it is required that these services impersonate each user on whose behalf they are running a query. This is possible to do out of the box on Presto if Kerberos authentication is used. We use LDAP authentication, which does not have a way of connecting services to impersonate and restrict only allowed services to be able to do so. We added impersonation support to LDAP authenticator that takes a configurable whitelist of services that can perform impersonation. Summary Presto is very widely used and has played a key role in enabling analysis at Pinterest. Being one of the very popular interactive SQL querying engines, Presto is evolving very fast. Recent versions of Presto have a lot of stability and scalability improvements. However, for Pinterest scale, we’ve had to resolve a few issues along the way for successful Presto operation and usage, like graceful cluster shutdown, handling of large deeply nested thrift schemas, impersonation support in LDAP authenticator, slow worker detection and auto-scaling of workers. Some of these can benefit the community too, and we plan on contributing back. In the future, we want to continue making reliability, scalability, usability and performance improvements, like rolling restarts, reloading of jars without needing to restart cluster and visibility into cluster resource utilization for users. We’re also very interested in on-demand checkpointing of tasks to enable seamless usage of Amazon EC2 spot instances and enabling our users to be able to get a query runtime estimate without waiting for query completion. 
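To make the slow-worker checks described earlier a bit more concrete, here is a minimal Python sketch of how a controller-style health check might combine them. It is only an illustration of the logic, not Pinterest's actual Presto Controller code: the WorkerStats fields, the thresholds, and the function names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkerStats:
    """Hypothetical per-worker metrics, assumed to be sampled over a sustained time window."""
    cpu_utilization: float       # average CPU utilization over the window (0.0 to 1.0)
    internal_error_queries: int  # queries that failed with internal errors involving this worker
    open_file_descriptors: int   # current open file descriptor count

# Hypothetical thresholds; a real controller would tune these from historical metrics.
CPU_GAP_THRESHOLD = 0.30
INTERNAL_ERROR_THRESHOLD = 5
OPEN_FD_THRESHOLD = 20_000

def is_slow_or_stuck(worker: WorkerStats, cluster_avg_cpu: float) -> bool:
    """Apply the three checks from the post: lagging CPU, query failures, too many open FDs."""
    lagging_cpu = (cluster_avg_cpu - worker.cpu_utilization) > CPU_GAP_THRESHOLD
    failing_queries = worker.internal_error_queries > INTERNAL_ERROR_THRESHOLD
    too_many_fds = worker.open_file_descriptors > OPEN_FD_THRESHOLD
    return lagging_cpu or failing_queries or too_many_fds

def workers_to_shut_down(stats_by_host: dict) -> list:
    """Return the hosts a controller would mark for a graceful (then forced) shutdown."""
    cluster_avg_cpu = sum(w.cpu_utilization for w in stats_by_host.values()) / len(stats_by_host)
    return [host for host, w in stats_by_host.items() if is_slow_or_stuck(w, cluster_avg_cpu)]
```

In practice each condition would also have to hold for several consecutive samples before the controller acts, as described above, and the shutdown itself would go through the graceful-then-forced path that the post outlines.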
Acknowledgments: Huge thanks to Pucheng Yang, Lida Li and entire Big Data Platform team who helped in improving and scaling Presto service at Pinterest.
https://medium.com/pinterest-engineering/presto-at-pinterest-a8bda7515e52
['Pinterest Engineering']
2019-07-16 19:28:24.707000+00:00
['Presto', 'AWS', 'Data Engineering']
What’s the Deal With Pulse Oximeters?
What’s the Deal With Pulse Oximeters? Here’s what to know about devices that can measure blood oxygen saturation Pulse oximeters measure the percentage of your red blood cells that are carrying oxygen by beaming different wavelengths of light into your finger after it’s clipped on. Photo: Smith Collection/Gado/Getty Images People showing up at the ER with suspected Covid-19 tend to have an especially concerning symptom, explained emergency room physician Dr. Richard Levitan in the New York Times this month: Pneumonia caused by Covid-19 comes with dangerously low oxygen levels that sometimes go unnoticed. This symptom is known as “silent hypoxia,” wrote Levitan, “‘silent’ because of its insidious, hard-to-detect nature.” The silence is worrying because it means people might be suffering from Covid-19 pneumonia without even knowing it. To detect silent hypoxia early enough to get treated for Covid-19 pneumonia, Levitan said people could use a home pulse oximeter, a small device for measuring blood oxygen saturation that can be purchased at a pharmacy. That’s not to say that you should go out and buy one immediately. As one San Francisco physician noted in The Guardian, they “aren’t necessary” for people who are healthy and don’t have other Covid-19 symptoms. In Quartz, an interventional pulmonologist said there “is no good role” for a pulse oximeter if you’re a person who is healthy and doesn’t have supplemental oxygen on hand. Pulse oximeters are normally recommended for people with chronic lung disease who have to monitor their fluctuating oxygen levels; these people usually have devices for delivering extra oxygen into their lungs. However, a pulse oximeter could be helpful for people who suspect they have Covid-19 but are not sure whether they should go to the hospital, or for those who have tested positive for Covid-19 but felt their symptoms were otherwise mild. A low oxygen reading would suggest silent hypoxia caused by Covid-19 pneumonia, a symptom that requires hospital treatment. This very useful New York Times Q&A on pulse oximeters described the story of one Philadelphia emergency room physician who had stayed home after testing positive for Covid-19 but checked herself into the hospital after she got a pulse oximeter reading of 88% blood oxygen saturation, which fell below the “healthy” cutoff of 92%. Should you decide to buy one — and there’s no dearth of reviews from sites like the The Strategist and Wirecutter — it helps to know the basics. A pulse oximeter, which looks a bit like a plastic clothespin, measures the percentage of your red blood cells that are carrying oxygen by beaming different wavelengths of light into your finger after it’s clipped on. According to the American Lung Association, a “good number” for your blood oxygen saturation would be over 90% to 92%. Some key caveats: The accuracy of pulse oximeters varies, readings can differ depending on what finger you use (and if you’re wearing dark nail polish), and, perhaps most critically, it’s not a single reading that matters but the trend — if there is one. If you do use a pulse oximeter, it can be helpful to take multiple readings per day to establish a baseline and take note of any downward trends. Most importantly, pulse oximeter readings shouldn’t distract you from the established symptoms of Covid-19, listed in full in this handy chart from Elemental. 
Even if you get a “good” oxygen saturation reading, severe symptoms like chest pain, high fever, or severe shortness of breath warrant a call to your health care provider.
https://coronavirus.medium.com/whats-the-deal-with-pulse-oximeters-b3feea3dfe2b
['Yasmin Tayag']
2020-04-30 19:16:16.015000+00:00
['Health', 'Quick Q And A', 'Pulse Oximeter', 'Coronavirus', 'Covid 19']
How we switched our template rendering engine to React
Jessica Chan | Pinterest engineer, Core Experience In 2015, we made the decision to migrate our legacy web experience to React to keep up with our fast growth and perform better with increased developer velocity. Ultimately, we found React rendered faster than our previous template engine, had fewer obstacles to iterating on features and had a large developer community. Expanding on a previous post which covered migrating Pinner profiles to React, here we’ll dive deeper into migrating the web infrastructure to serve React pages, which required moving a huge amount of code without breaking the site. The roadmap to React When we began this project, Pinterest.com was humming along on the existing architecture for some time. On the server, Django, a Python web application framework, served our web requests and Jinja was rendering our templates. The server response to the browser included all the markup, assets and data the browser needed to fetch our JavaScript, images and CSS, and initialize our client-side application. Nunjucks, a JavaScript template rendering engine that uses the same template syntax as Jinja, did all subsequent template renders on the client side. The template syntax and the stack looked like this: This architecture worked, since the template syntax was (almost) the same between Jinja and Nunjucks. However, the template rendering utilities and libraries had to be duplicated, as illustrated above, meaning for every Jinja Python utility we needed to add, we had to write a JavaScript version for Nunjucks. This was cumbersome, resulted in a lot of bugs and was yet another reason to move to a world where template renders would happen using the same language and engine both client-side and server-side. Here’s a diagram of what our end-goal was for consolidating template rendering in React: It looks pretty good: we can share utilities and libraries between client and server, and we have one engine, React, rendering templates on both. But how do we get there? If we switch our client-side rendering engine from Nunjucks to React, we’d also have to switch our server-side rendering, so they could share the same template syntax. Halting development so we could switch all of our templates to React wasn’t an option. We needed a solution that would allow us to iteratively convert the hundreds of Pinterest components without interrupting the work of product teams or the experience of Pinners. That solution looks like this: The first step was to consolidate to a single template rendering engine between client and server before we could replace that engine with something else. If the server could interpret JavaScript, use Nunjucks to render templates and share our client-side code, we could then move forward with an iterative migration to React. Server-side Nunjucks architecture When we first considered how we’d interpret JavaScript on the server-side, there were two main choices: PyV8 and Node. PyV8 had the advantage of giving us a quick way to prototype and not worry too much about standing up a separate service, but it wasn’t well maintained and the memory footprint of the package was significant. Node was a more natural choice, despite the overhead of standing up a new service and that we’d be communicating with this service via a network interface with its own complexities (described more in the next section). There was a large community supporting and using Node and we’d have better control over tuning and optimizing the service. 
In the end, we went with standing up Node processes behind an Nginx proxy layer and architected the interface in such a way that each network request would be a stateless render. This allowed us to farm requests out to the process group and scale the number of processes as needed. We also refactored our client-side rendering JavaScript so it could be used by both the client and the server. This resulted in a synchronous rendering module with an API that took environment-agnostic requests and returned final markup using Nunjucks. Both Node and the browser called this module to get HTML. On the web server, we short-circuited template rendering so instead of calling Jinja, it made network requests to farm the template render out to our Node workers. Before: Jinja renders an entire module tree in one pass Pinterest templates are structured as trees. A root module calls children modules, which also have children modules, etc., and a render pass traverses these modules to generate the resulting HTML which makes up the final result. Each module can either render based on the data it receives from its parent, or it can request a network call be made to acquire more data in order for rendering to continue. These data requests are necessarily blocking, since we don’t know the render path until we hit the node. This means module tree rendering is blocked by downstream data requests that can initiate at any time. Because Python is doing all the rendering on a single thread, renders block the thread and are essentially serial. The purple circle that appears when the user agent makes a request represents a module render request with no data. The API is called to get the data, filling the circle and readying it for a render pass. Rendering materializes the children and stops when it reaches children that need data. Subsequent calls to the API fulfill these data requests and rendering continues. After: Nunjucks requests over the wire As before, a user agent makes a request which results in a latent module render request that needs data. Data is obtained again by making a call to the API, but another network call is made to a co-located Node process to render the template as far as it can go with the data that it has. Then, Node sends back a response with the rendered templates, and also a “holes” array indicating the modules the worker was unable to render because they still need data. Our Python webapp then provides the data they need by calling the API, and each module is sent back to Node as completely independent module requests in parallel. This is repeated until the entire tree is rendered and all requests return with no holes. Rollout Confidence in the new system was key to rolling it out. Developers were still building with Jinja and creating and modifying new Python utilities, and we had to be sure the new system didn’t introduce latency to page loads for Pinners. We also had to build error handling, service monitoring, alerts and a runbook to scale maintenance and troubleshooting of the new Node processes. There were many dependencies for ensuring a smooth transition, and two tools were essential to the project’s success. Linters and tests. Jinja and Nunjucks syntax is close to the same, but not identical. The difference in what each template engine supported as well as the language differences in Python and Nunjucks forced us to keep tight restrictions on what engineers could do with templates. 
Ultimately, we needed to ensure templates rendered on the server would render identically on the client, and templates rendered by Jinja would render identically when rendered by Nunjucks. At Pinterest, we rely heavily on build-time linters that prevent developers from doing things that would break the site as they develop, which assisted in making sure all templates being developed only used the subset of features supported by both Jinja and Nunjucks. We even wrote a special extensible Nunjucks extension that takes custom rules we write, written in an ESLint-style fashion, and applies them to all the Nunjucks templates during every build. We also implemented a special all-encompassing unit test suite called “render all tests” that literally rendered every single template and ensured they rendered identically between Jinja and Nunjucks, and between client and Node. This helped safeguard our releases from crazy bugs that would’ve been extremely difficult to track down. Pinterest experiment framework. We rolled out the new architecture to employees only at first, and then to a very small percentage of Pinners. We kept an eye on the metrics that track user activity and performance via our experiment dashboard. A gradual rollout allowed us to track down tricky render bugs, Python/JavaScript discrepancies and performance issues before the majority of Pinners were exposed to the new system. One example of a bug caught by the experiment dashboard was a nuanced client-side-only render bug that only affected a tiny percentage of users on a specific browser doing a very specific action. Tracking this action allowed us to narrow in on the bug and verify when it got fixed: Performance Server-side rendering plays an important role in serving rich content on Pinterest to Pinners. We rely on performant server response times in order to provide a faster experience and maintain good SEO standing. During early iterations, the Nunjucks architecture was slower than our existing Jinja setup on the server side. Making multiple network calls introduced extra overhead (preparing the request, serializing and deserializing data), and the roundtrips added nontrivial milliseconds to our render time. We did two things that helped bring down the delta and allow us to launch. Parallelization. With Jinja, we didn’t need to call a sidecar process over a network protocol in order to render a template. However, because of the CPU-bound nature of template rendering, this also meant Jinja template renders couldn’t be meaningfully parallelized. This wasn’t the case with our Nunjucks render calls. By parallelizing our calls with gevent, we were able to kick off simultaneous network connections to our proxying nginx layer which farmed the requests out to available workers very efficiently. Avoid unnecessary data serialization. There were several hotspots in our template rendering where we were simply embedding large amounts of data in the markup in order to send to the browser. These were located mainly in the static head and around the body end tags, and were consistent for every web request. A big slowdown was the serialization and deserialization of these huge JSON blobs of data to our workers. Avoiding this helped us gain another performance edge that finally got us to parity. 
Here’s a graph of the results (Nunjucks in red, Jinja in green): React is happening Once the Nunjucks engine was in place and serving 100 percent of Pinterest.com’s templates, it was open season for developers to start converting their modules to React components. Today, with Nunjucks code quickly being replaced by React conversions all over the codebase, we’re deprecating our old framework and happily tackling the challenges of building a complete React application while seeing many performance and developer productivity gains from the migration.
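As a rough illustration of the render-with-holes flow described above, here is a minimal Python sketch of what the webapp-side loop might look like. The endpoint, the request/response shape ("html" plus a "holes" list), and the helper fetch_data_for are hypothetical stand-ins, not Pinterest's actual code; the point is simply how gevent lets the Python server farm independent, stateless module renders out to the Node workers in parallel.

```python
import gevent
from gevent import monkey

monkey.patch_all()  # make blocking sockets cooperative so spawned renders truly overlap
import requests

NODE_RENDER_URL = "http://localhost:8000/render"  # hypothetical nginx proxy in front of the Node workers

def render_on_node(module_name, data):
    """Ask a Node worker to render one module as far as it can with the data it already has.

    The worker is assumed to reply with {'html': <fragment>, 'holes': [child modules still needing data]}.
    """
    resp = requests.post(NODE_RENDER_URL, json={"module": module_name, "data": data})
    resp.raise_for_status()
    return resp.json()

def fetch_data_for(module_name):
    """Placeholder for the API call that fetches the data a child module needs."""
    raise NotImplementedError

def render_page(root_module, root_data):
    """Render a module tree by repeatedly farming renders out to Node until no holes remain."""
    html_by_module = {}
    batch = [(root_module, root_data)]
    while batch:
        # Every module in the batch is an independent, stateless render request,
        # so they can all be issued concurrently.
        jobs = [gevent.spawn(render_on_node, name, data) for name, data in batch]
        gevent.joinall(jobs)

        next_batch = []
        for (name, _), job in zip(batch, jobs):
            result = job.get()
            html_by_module[name] = result["html"]
            # "Holes" are children the worker could not render because they still need data.
            for hole in result["holes"]:
                next_batch.append((hole, fetch_data_for(hole)))
        batch = next_batch

    return html_by_module
```

Stitching the returned fragments back into the final page markup, handling timeouts, and parallelizing the data fetches themselves are all left out here, but the loop captures the repeated "render, report holes, fill data, render again" cycle.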
https://medium.com/pinterest-engineering/how-we-switched-our-template-rendering-engine-to-react-a799a3d540b0
['Pinterest Engineering']
2017-02-21 20:42:52.851000+00:00
['JavaScript', 'Engineering', 'React', 'Node', 'Developers']
Reach a Little Higher
Reach a little higher
Stretch beyond a limit
Stride a bit longer
Breathe deeper
Exhale longer
Forgive one more time
Release another grudge
Surrender an attachment
Accept something new
Choose peace over strife
Pause an extra moment
Listen like the words are new
Act with intent
Hear the inner voice
Trust your intuition
Choose the healthy snack
Know you don’t need the drug
Move your body even if it hurts
Take action when you’re inspired
Rest when you need rest
Say the words that are hard to say
Draw a line in the sand
Be honest about what you want
Take a risk
Have faith

Get my new book Happy Ever After and learn how to cultivate true and lasting happiness in your life. I made a free 5-day Mastering Happiness email course, and I want to share it with you! Visit me at christinebradstreet.com where you can get your course for free. All images open source from https://pixabay.com/
https://medium.com/change-your-mind/reach-a-little-higher-a4dd348f10c4
['Dr. Christine Bradstreet']
2020-11-02 17:02:38.143000+00:00
['Mental Health', 'Health', 'Inspiration', 'Spirituality', 'Mindset']
How to Write Joyful Headlines That Comfort Amidst Crisis
Classic Headline Techniques Frontload and backload keywords Word placement is important in headlines. The first three words and last three words have been shown to catch reader attention — I’ve bolded them in the headlines below: Use short headlines Buffer tells us that according to science, the ideal length of a headline is six words since we absorb only the first three words and the last three words of a headline. Here are three short headlines (note that one is seven words): Use long headlines Backlinko analyzed 912 million blog posts and suggests 14 to 17 words are the way to go, generating 76.7% more social shares than short headlines. I’ve noticed COVID news headlines tend to be long, rather than short. This allows clarity through specificity. Also, many headlines tell a story and it’s easier to provide more detail with longer headlines. Headline length may seem contradictory and confusing. For a deeper discussion about whether to write long or short headlines, read Long or Short: Which Headlines Are Better? Tell a story You’ll notice many articles tell a story. Even before paper was invented, this was the way humans passed important messages down through generations. In research conducted in 2018, researchers at McMaster University found that our brains relate to and focus on the thoughts and feelings of the protagonist of each story (The art of storytelling: Researchers explore why we relate to characters). We’re wired to tune into stories, whether they’re true or not — whether they’re through words, gestures or drawings. Use highly rated sensorial words Our senses help us create richness in life. They help us connect, learn, navigate, and so much more. When used in your headlines, sensory words can help you initiate a bond with your readers. But researcher Barbara Juhasz discovered that not all words evoke the same sensory experience. She created an index she called the Sensory Experience Rating (SER) scale to rank the strength of the sensory experience evoked by nearly 3,000 mono-syllabic words, including nouns, verbs, adjectives, and adverbs. Using highly-rated words in our headlines can strengthen the power of our headlines. Here are three examples of headlines and highly rated sensorial words — the word ranking (out of 5) has been included: To learn more about sensorial words and headlines, visit The 194 Highest-Rated Sensorial Words for Gorgeous Headlines. Focus on your reader’s needs Use the WIIFM approach. WIIFM stands for “What’s In It For Me.” It’s what every reader thinks. They’re busy. Make it quick and easy for them to figure out if you’re giving them something relevant and useful. Ask yourself if your title tells your reader how this content will help them get better at something, solve a problem, deal with fear, keep their jobs or find a new one, help them avoid anxiety, or work better at home. If you have trouble, Maslow’s hierarchy of needs is a great way to understand what’s important to us: you’ll see it’s more than tangible needs. Physiological needs: food, shelter, clothing food, shelter, clothing Safety: health, employment, personal security health, employment, personal security Love and belonging: friendship, family, sense of connection friendship, family, sense of connection Esteem: recognition, freedom, respect recognition, freedom, respect Self-actualization: desire to fill their potential How can your headline fulfill one or more of these needs? How can you connect more deeply to your customers during this unsettling time? 
Here’s how four uplifting COVID headlines do it, and the need they connect with for readers: Show credibility Opinion on its own is weak. Experience and reputation strengthen credibility. Experts are generally thought of as reliable and credible. Research provides objectivity and reliability. Statistics also help as our brains use statistics to make objective decisions. Highlight the extraordinary Daily life tends to get boring… our ears perk up when we read something out of the norm. Typically we’re attracted to negative news. Yet research such as this scientific research about happiness by Steptoe and Marmot, two of the world’s leading experts on the psychobiology of health and disease, finds that positivity leads to more happiness and better health. Use proven templates Templates are a quick and easy way to write headlines. We all need a prompt now and then, don’t we? Well, you can easily create your own templates from analyzing headlines as shown below. Test them out to see if they hold, as we’ve done below — then store them in your headline hacks collection. Headline: Social distancing has actually been an asset for creating Template: <something> has actually been <a surprising outcome> Test: Homeschooling my 7-year-old has actually been fun. Headline: Why these chalk messages could be ‘really important’ for getting through the coronavirus pandemic Template: Why <something> could be ‘<expert opinion>’ for <doing something> Test: Why two minutes of meditation every day could be the way to keep your relationships strong Headline: Got a 3D printer? You could help supply hospitals with essential PPE Template: <question> <benefit> Test: Got a pillar? You could keep rockclimbing at home to keep your sanity Make a statement A firm statement may arouse curiosity. Assert authority. It may answer a question. Surprise. Or simply give information. You need to fully understand what your reader’s desires, fears, and challenges are when writing headlines statements. For instance, the headlines below are fascinating to all of us because it gives us surprising insight into how the environment has improved without masses of people around. It reassures us that the medical staff we need to help us are being looked after. It gives us hope that we may be close to getting a vaccine that can end all of this. And it reinforces our need for human connection during this time. Challenge Challenges give us something to aim for, make us curious about whether we can take them on, and drive us to give them a go. Play with power words Power words can help you catch your reader’s attention. They can be simple, sensorial, call a reader to action, or unusual. I’ve highlighted the power words in bold for you. Gujarat: 44-year-old brain-dead worker turns saviour for three persons Mass. hospitals getting massive machine that can sterilize 80K N95 masks in a day South Africa comes with Blockchain application CoviID, will reward you for isolating Irish Researchers Have Developed Hospital Robot That Uses UV Light to Kill Viruses, Bacteria, and Germs (‘Kill’ is used here in a positive context) Focus on your main message Our brains are designed to save energy. Make it easy for your reader’s brain. When it comes to headlines, you help your readers by giving them the gist of the article so they can either read more if that’s what they feel like doing now, save it for later, or take your message and never return. 
Here’s where it’s crucial to understand your reader and make sure your headline is clear so they know it will be relevant and useful. Be careful not to deceive — instead, learn how to write genuine, enticing headlines without clickbaiting.
https://medium.com/better-marketing/how-to-write-joyful-headlines-that-comfort-amidst-crisis-64aaee108ede
['Cynthia Marinakos']
2020-04-07 14:10:33.004000+00:00
['Social Media', 'Headline Hacks', 'Writing', 'Business', 'Creativity']
13 Marketing Tactics For Artist Scale
1. Collaborations Making music with artists whose audiences overlap, thus gaining exposure to a captive audience of similar minded, potential fans. Check out Spotify’s Today’s Top Hits where 20 of its 63 tracks are collaborations (as of April 2nd, 2019). The top artists/labels use this tried and true tactic to get a second wind out of a single, surge streams by getting placement on another artist’s profile and crossover to adjacent genres, e.g.: rap/hip-hop to pop, dance to pop. 20/63 of the most popular songs on Spotify are collaborations. Playlist: Today’s Top Hits, data pulled on April 2nd, 2019. 2. Brand partnerships Working with companies with bigger platforms, media budgets, and audience reach who share similar values to the artist. Moby releases album on World Sleep Day through Calm meditation app generating headlines in tech, cultural and music circles. (via Techcrunch) Moby released his new album exclusively through the Calm meditation app. This worked because it was a concept album — a collection of songs Moby made because he couldn’t find songs to sleep to. He tapped into a wider cultural moment by launching on World Sleep Day, thus getting coverage in cultural, tech and music circles. 3. Support tours Opening for another artist with audience overlap gets free media impressions through tour advertising the headliner, as well as on show night a captive audience to play to. 4. Festival appearances Festivals are the clearest example of light listeners — thousands of music fans going to experience the day, some seek out their favorite artists but many are there for the vibe. Festivals give artists larger audiences than many would pull alone, promotion through festival advertising and other artists promoting the event. Festivals, the tasting menu of music. 5. Radio play People sitting in their cars or at home listening to a radio station with broad enough taste to enjoy catch-all rhythm, pop or indie etc. music but not pointed enough they’re putting on a specific artist. 8-year-old hit ‘Walking on a Dream’ revived re-entering the Billboard Dance chart at #1 after sync in Honda commercial airs. 6. Sync Solid sync can propel a track to gold with the right placement in film, TV or advertising. Think Robyn’s ‘Dancing on My Own’ perfectly placed at the season finale of Girls while Hannah and Marnie dance like crazy people after finding out Hannah’s boyfriend is gay. So much emotion. Empire of the Sun’s 8-year-old ‘Walking on a Dream’ hits #1 on Billboard Dance Chart after placement in Honda commercial airs. 7. Mass-comms Advertising that targets the masses rather than hyper-targeted and retargeted ads to the already aware or converted. E.g.: Radio ads, billboards, subway advertising, street posters, street chalking 8. TV performances Playing shows to TV viewership in the millions. Karen-O and Danger Mouse not only crushed their Colbert TV performance but used the media slot to shoot a delightful music video directed by Spike Jonez. The earned media in press and social the day after extended the TV opportunity. Other TV includes Late night TV like Jimmy Kimmel, James Cordon, and Awards performances i.e.: Grammys, MTV VMAs. 9. Creative release strategies Releasing music in unexpected ways through unconventional platforms get people going “mmm, that was cool”, sharing it and generating trial from new fans. Tierra Whack’s Whack World — an album of 15 x 1-min songs released on Instagram (and later to DSPs) got everyone’s attention including the NYTimes. 
Released as 1 min tracks because people’s attention spans are now zilch and Instagram caps videos at 60 sec. The album later went to DSPs. 10. Playlists If there were a pinnacle of light listeners, playlists would be it. Millions of people listeners to music to catch a vibe, trigger a mood, or help with whatever they’re doing be it set a scene for a coffee shop, soundtrack their dinner party or get them through a workout. The user is not intentionally playing any single artist. 11. Algorithms Building upon playlist marketing is having platform algorithms do your marketing. Algorithmic-generated Spotify playlists like ‘Release Radar’ can dramatically increase 1st-week streams and YouTube can send up to 30% more traffic to videos by way of recommendations. Working the algorithms by direct marketing campaigns for follows, subscribers and streams/watches helps. 12. Influencers Working with influential personalities online with adjacent interest verticals like gaming, make-up tutorials, aesthetic videos, and workouts have been the launching pad for emerging genres like bedroom pop and artists Clairo, KhaiDreams, Beabadoobee, and Conan Gray. A recent Guardian article points to the generation gap in music caused by teens discovering new artists through YouTube influencer channels, giving insight for how marketers should speak to these audiences. Underground bedroom pop, an alternate musical universe that feels like a manifestation of a generation gap: big with teenagers — particularly girls — and invisible to anyone over the age of 20, because it exists largely in an online world that tweens and teens find easy to navigate, but anyone older finds baffling or risible. It doesn’t need Radio 1 or what is left of the music press to become popular because it exists in a self-contained community of YouTube videos and influencers; some bedroom pop artists found their music spread thanks to its use in the background of makeup tutorials or “aesthetic” videos, the latter a phenomenon whereby vloggers post atmospheric videos of, well, aesthetically pleasing things. 13. Retail marketing Placement on Apple, Amazon, Spotify, and retail stores like Urban Outfitters and Target attract light listeners who are seeking out competitive artists upon release. I.e.: if I go to iTunes to get the new Billie Eilish record but see Logic, I could be prompted to get both or try other titles.
https://amberhorsburgh.medium.com/13-marketing-tactics-for-artist-scale-8867203e783b
['Amber Horsburgh']
2019-04-03 18:33:23.626000+00:00
['Marketing', 'Music Marketing', 'Spotify', 'Labels', 'Music']
Exploratory Data Analysis (Part I)
Bivariate Analysis: As the name suggests, “bi” means two, so the analysis of exactly two variables is what we call bivariate analysis. It is a more interesting analysis than univariate analysis because we can actually see whether one variable has an impact on another, such as whether patient age affects recovery time, or whether women are more vulnerable to a pandemic than men. The following visualizations are often used in bivariate analysis.

Bar Plot: The bar plot is used for comparing items, and in EDA we often use a bar plot to find out whether one group is different from the others, such as whether females were more likely to survive than males in a tragedy like the Titanic. Ideally, bar plots are used when we have at least one categorical and one numeric variable.

Bar Plot With Two Variables

Scatter Plot: The scatter plot is used to find the relationship between at least two numerical variables, such as the impact of an increase in a house’s square yards on its price. Before using more advanced (non-linear) algorithms, we often plot the data to see whether it is linearly separable or not. If it is linearly separable (which rarely happens in the real world), we would go with linear algorithms like logistic regression in a classification problem.

Scatter Plot

Line Plot: From cricket scores to electrocardiograms (ECG or EKG), line plots are almost everywhere. They are used to show the change in a trend over a period of time, such as how the mortality rate increases with age. Line plots are usually used with two numerical variables, where one variable represents an interval.

Line Plot

Cross table: Cross tables are used to see if one group is significantly different from the other groups. We use cross tables when we have at least two categorical variables. Cross tables are also used to find the association between categorical variables using more advanced methods such as chi-square.

Cross Table

Box Plot With Two Variables: Often we would like to compare the distributions of different groups within one variable; for example, marital status can have groups like single, married, divorced, separated, and widowed. Box plots with two variables require one categorical variable (the groups) and one numerical variable (the distributions). Based on initial findings from this type of analysis, we decide whether to run a statistical hypothesis test such as a t-test or ANOVA to confirm our hypothesis.

Box Plot With Two Variables

Summary: Start with univariate analysis to build a general understanding of each variable in your data, then move to bivariate analysis to find the more interesting relationships between variables. I will be discussing multivariate analysis in the next blog of this series. I hope this blog helped you in some way; I would love to hear from you in the comment section. You can also connect with me on Twitter at @SehanFarooqui. Feel free to clone the code from GitHub and start doing your own analysis.
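For readers who want to try these plots themselves, here is a minimal pandas/matplotlib sketch of a few of the bivariate visualizations discussed above. The dataset and column names (a Titanic-style file with sex, survived, age, fare, and pclass columns) are assumptions made for the example; swap in your own data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed Titanic-style dataset with columns: sex, survived, age, fare, pclass
df = pd.read_csv("titanic.csv")

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# Bar plot: one categorical variable (sex) against a numeric summary (survival rate)
survival_rate = df.groupby("sex")["survived"].mean()
axes[0].bar(survival_rate.index, survival_rate.values)
axes[0].set_title("Survival rate by sex")

# Scatter plot: two numerical variables, to eyeball any relationship
axes[1].scatter(df["age"], df["fare"], alpha=0.5)
axes[1].set_xlabel("age")
axes[1].set_ylabel("fare")
axes[1].set_title("Age vs. fare")

# Box plot with two variables: distribution of a numeric variable (age) per group (pclass)
df.boxplot(column="age", by="pclass", ax=axes[2])
axes[2].set_title("Age distribution by passenger class")

plt.tight_layout()
plt.show()

# Cross table of two categorical variables (a chi-square test could then be run on this table)
print(pd.crosstab(df["sex"], df["survived"]))
```

A line plot would follow the same pattern with plt.plot once one of the two numerical variables represents an ordered interval, such as age or date.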
https://medium.com/analytics-vidhya/exploratory-data-analysis-part-i-7992935f0b9b
['Sehan Farooqui']
2020-12-14 03:45:48.488000+00:00
['Data Science', 'Matplotlib', 'Data Visualization', 'Machine Learning', 'Exploratory Data Analysis']
6 Essential Things I Wish I Knew When I Started Programming
Communication Skills More Important Than Coding Skills

When I first started my career, a lack of social skills was not my main problem. But as I moved up to mid-level, senior, and leadership positions, my weak soft skills became my Achilles’ heel. When you work on a product with a group of different people (engineers, designers, managers), communication is the only thing that makes you a “team” and helps you effectively develop the product. A lack of social skills does the opposite — it slows product development and hurts overall productivity.

Here is a real situation you might face: The leadership team tells your product manager that they want to create a new product feature and put it in the next product release. It’s not urgent; they just want to release it as soon as possible (as always). The product manager calls you on Zoom, tells you what you need to build, and asks, “How much time do you need to build it?” You do a rough calculation and say, “I need 20 hours.” The product manager is not satisfied with your answer. He wants to release it as soon as possible and show management that he can deliver results fast (this is a very common situation). So he asks you, “Can you build it in 10 hours? We really need this feature in the next product release!” And you know that you can if you cut corners (no tests, messy code), but then you will need to refactor it, and that will take an additional 30 hours, because other engineers will be working with your messy code once you release it. And after refactoring, you will need to integrate their code with yours.

So here’s what will happen next. If you have bad social skills, you will not convince the product manager that you actually need 20 hours to build this feature. Why? In my experience, product managers often have good social skills. So if you can’t convince them that refactoring later is worse than spending 20 hours right now, they will easily argue with you and convince you that “refactoring later is OK.” And the whole team will lose an additional 30 hours on this refactoring (not counting the time to fix unpredictable bugs afterward).

But if you have good communication skills, you will be able to convince them of the opposite. So improve your social skills as well as your coding skills (send memes in the group chats on Slack or something). And remember one simple truth: people work with people, not machines.
https://medium.com/better-programming/6-essential-things-i-wish-i-knew-when-i-started-programming-b4c1ea2c2d59
['Nick Bull']
2020-12-02 17:53:59.890000+00:00
['Python', 'Programming', 'Software Development', 'Productivity', 'JavaScript']
Takeaways from dotJS 2019
We had the opportunity to attend dotJS Day 1 this year. Here is a sum up of the most remarkable takeaways and themes ! Accessibility Was the Most Recurrent Topic Web accessibility is not something new, but it was a recurring topic this year and was mentioned by several speakers during the event as well as by developers from the audience. Sara Vieira reminded us in her talk Make the Web Easier for All of Us that “form placeholders are not read by some assistive technologies.” When talking about how much we are overwhelmed by the demand, Chris Heilmann mentioned performance, security, and interoperability, but also accessibility. Making the web accessible is not just about allowing people with disabilities to perceive and interact with it, but also allowing them to contribute to it — we often forgot that. It’s also about improving the experience on every device (mobile phones, smart watches, smart TVs, etc.) and slow connections. Like a lot of things, developers expect accessibility to be a default. It is quite impossible, of course, because we are still responsible for our applications, but UI kits could — and should — help us with that. Unfortunately, they still have progress to make. After the JS fatigue Came the “Migration Fatigue” Angular bashing at dotJS is still a thing, but it has become less and less so. Last year’s dotJS was an opportunity for the audience to wake up each time a speaker was trolling (or simply talking about) Angular. This year, only one speaker tried to get a reaction, which was surprisingly small… … Maybe because Igor Minar was there to give a speech in the afternoon. He mentioned a “migration fatigue” in the world of the web. “Web platforms are ubiquitous, accessible, malleable, evolving, yet stable,” whereas the web ecosystem does not manage to be stable. While web is “evergreen,” libs are not. Well, except Angular (according to him). Still, the migration of AngularJS 1.x to Angular was a good example of handling annoying breaking changes. He explains this by the cost of not having breaking changes on their side. Among other things: the burden of maintenance, payload size, and accumulation of API cruft. But “Angular updates work now” — according to him — thanks to their efforts: Semantic versioning Time-based releases Deprecation policy and docs Testing at Google Static analyses Automated migrations Fixes of common anti-patterns To beat the migration fatigue, developers still “must do their part of the work to ease migrations by updating more often.” Progressive Enhancement Is Being Embraced, As Well as the “JAMstack“ Pushed by Google in particular a few years ago, progressive enhancement and Progressive Web Apps (PWA) are now technically embraced directly within the frameworks and have even become a selling point for both SEO and fast content displays. Tim Neutkens mentioned the capabilities of pre-rendering in Next.js. He encouraged the audience to embrace SSR (Server Side Rendering) and SSG (Static Site Generation). Today, both React and Vue provide hydration, the process of turning static HTML from the server into interactive DOM that reacts to data changes, and that is very impressive. Same point of view from Phil Hawksworth, DX at Netlify, during his talk Exploring a Server-Less Web. “Static first + enhancement if required” is now his go-to. He used a word I had honestly never heard before: JAMstack or “JavaScript, APIs, and Markup” stack. It is a fancy word to refer to the way of working on static sites (sites without databases). 
The kinds of things you can easily do with static site generators like Jekyll, Gatsby, or Hugo, but now also with Nuxt and Next.

Possible architecture following JAM principles. Source: nordschool

Of course you cannot easily embrace a JAMstack in the real world with a SPA working on top of a monolith, but it is interesting to see the growing community, tools, and products around this. In particular, Netlify has built its business around it, and it seems to work well.

Inheritance Does Not Work Well with Components; Composition Does

During his talk State of Components, the famous Evan You encouraged us to see components as “instantiable and stateful modules.” He stated that logic reuse could be challenging with classes, which are probably not the most natural fit for designing components. The language does not help a lot, as class fields and decorators are still at stage 3 and stage 2, respectively, as of today. The future of component development is written by the frameworks for now. Vue 3 is to be released in Q1 2020, and one of the most anticipated changes is the answer to React Hooks: the Composition API. It is a new function-based approach to writing components. Just like React Hooks, the Composition API exposes mechanisms currently available through component properties as JS functions called composition functions. For instance:

ref: Wraps a primitive and returns its reactive reference
computed: Declares a computed property
onMounted: Access to the mounted lifecycle hook

It allows you to encapsulate code logic and reuse it across components (a minimal sketch appears a bit further down). This is a new — and in some ways cleaner — solution for sharing code compared to what Vue currently has:

Mixins, with which it is not easy to know what is added to the component.
Scoped slots, only accessible from within the templates and only for the component where they are used.

We Have Become “Full Stackoverflow Developers”

It was Christian Heilmann who introduced us to this phenomenon. It refers to developers who often copy-paste code from Stack Overflow without understanding what they are doing. He recalled a time when developers used to learn before coding. Now we get caught in the rush of new things, overwhelmed by the demand, so we rely heavily on libraries, frameworks, and services that do things we cannot learn all by ourselves, hoping they will do well for us. He admits he has no solution for all of these issues but that there is a natural approach he would like everyone to follow: learn while coding and debugging. Using context documentation through linting is one possible way.

Making a Cross-Framework Design System Is Smart

Why should we pick a particular framework when developing a design system when the low-level component model is sufficient to build reliable components? Well, let’s go then. Source: itnext.io

In his talk Architecting a Component Compiler, Adam Bradley explained why at Ionic they decided to stop using only Angular: large organizations with many teams often use different frameworks, which makes UI consistency challenging. That is why they created Stencil, a compiler that generates web components, which means they can natively run within any framework. Their build-time tools enhance the possibilities with TypeScript, JSX, a tiny Virtual DOM layer, one-way data binding, an asynchronous rendering pipeline, lazy-loading, etc.
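To make the Composition API idea above a bit more tangible, here is a minimal sketch of a composition function in TypeScript. It is only an illustration built from the ref/computed/onMounted mechanisms listed in the talk, not code shown at dotJS; the useCounter name and the logging are my own, and the final Vue 3 API may differ in details from the RFC the talk was based on.

```typescript
// counter.ts — "useCounter" is an illustrative name, not from the talk or the docs.
import { ref, computed, onMounted } from 'vue';

export function useCounter(initial = 0) {
  // ref() wraps a primitive and returns its reactive reference.
  const count = ref(initial);

  // computed() declares a derived, cached property.
  const double = computed(() => count.value * 2);

  // onMounted() gives access to the mounted lifecycle hook.
  onMounted(() => console.log(`counter mounted with ${count.value}`));

  const increment = () => {
    count.value++;
  };

  // Everything returned here can be reused by any component, which is
  // exactly the logic-reuse story that Mixins and scoped slots struggle with.
  return { count, double, increment };
}
```

A component would then call useCounter() inside its setup() function and expose count, double, and increment to its template, keeping the shared logic in one self-contained, testable function.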
Tests Are Often a Third User, and It Is Sad

Testing the frontend is not easy; false-positive errors are common, and we sometimes have to spend time fixing the test instead of fixing the code.

Source: our #tech-jokes Slack channel

This happens when tests are written as a third user (the first ones being customers and developers). Starting from this statement, Adrià Fontcuberta from Holaluz introduced us to testing-library. It helps us test the right way by following two principles: 1. Avoid relying on implementation details, and 2. Act as a user would act (using DOM interactions). Of course, and as he said, what matters is following these principles, not specifically using testing-library.

What about Svelte?

Not that much. Last year, the trendy topic of dotJS was TypeScript, which was not surprising. Now the language is used by every major framework and is widely adopted by developers. We could have expected the 2019 edition to be the year of Svelte; it was not. Only one lightning talk was dedicated to it, and it was only briefly mentioned by Evan You and Tim Neutkens (who gave me the opportunity to hear about Sapper for the first time). The thing is, Svelte is already in its third version, and yet it was not such a hot topic this year. This could be explained by the fact that we already have three major frameworks, and that’s already enough. Plus, Svelte is not a framework but a compiler, like Stencil, and people are not really used to this way of building components. Maybe next year… maybe not.

Some Cool Stuff to Discover

Final Thoughts

dotJS 2019 was overall a good conference, with some great talks throughout the day. The speakers covered very different topics and different ways of making websites, along with a few specific — but very interesting — talks. While we come out of this event having learned some new things and feeling inspired by the charisma and motivation of the speakers, we can also be confused by how much people disagree about how to make websites, and by how the same topics and debates seem to disappear and then come back again and again without proper, definitive solutions. But that is probably for the best.
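As a concrete footnote to the testing-library principles discussed above, here is a minimal sketch in TypeScript using @testing-library/dom with Jest-style globals. The tiny counter widget and the test runner setup are assumptions made for the example, not code from the talk; the point is only to show a test that interacts with the DOM the way a user would and never touches implementation details.

```typescript
// counter.test.ts — a sketch of the two testing-library principles.
import { getByRole, getByText, fireEvent } from '@testing-library/dom';

// A tiny, framework-agnostic widget to test (purely illustrative).
function renderCounter(): HTMLElement {
  const root = document.createElement('div');
  root.innerHTML = `<button type="button">Add</button><p>Count: 0</p>`;
  let count = 0;
  root.querySelector('button')!.addEventListener('click', () => {
    count += 1;
    root.querySelector('p')!.textContent = `Count: ${count}`;
  });
  return root;
}

test('increments when the user clicks the button', () => {
  const container = renderCounter();

  // Principle 2: act as a user would — find the button by its accessible
  // role and label, then click it, instead of reaching into internals.
  fireEvent.click(getByRole(container, 'button', { name: 'Add' }));

  // Principle 1: assert on what the user sees, not on the internal counter.
  expect(getByText(container, 'Count: 1')).toBeTruthy();
});
```

Note that the test never inspects the widget's private count variable; it only clicks what a user can see and asserts on the rendered text, so it keeps passing even if the internal implementation changes.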
https://medium.com/data-from-the-trenches/takeaways-from-dotjs-2019-76ae351416cc
['David Dogneton']
2019-12-11 16:03:43.438000+00:00
['JavaScript', 'Angular', 'Vuejs', 'Engineering', 'Front End Development']
The AI Citizen and Climate Change
The AI Citizen and Climate Change

In Search for Data Science’s best Levers for Climate Action

While I was in London, taking part in a conversation on how technology leaders can help meet the UN Sustainable Development Goals, a very interesting workshop was being held in Long Beach: ‘Climate Change: How Can AI Help?’ I wish I had been able to attend. Headlining the workshop was a very detailed review paper on the topic, titled Tackling Climate Change with Machine Learning, which I highly recommend reading. It’s a great answer to questions that I, a machine learning scientist eager to learn how I can contribute to addressing this pressing issue, have been asking myself for a long time. Figuring out what role one can play, both as a domain expert and as an individual citizen, in tackling the challenges associated with climate change is not simple. The majority of problems that machine learning is well suited to answer revolve around prediction and optimization, which could presumably provide solutions to issues around improving energy efficiency. On the surface, better efficiency seems desirable, but it is often paradoxically a double-edged sword: since demand for energy is essentially infinite, more efficiency directly translates into lower energy costs, which increases the availability of cheaper energy, and leads to more demand. This cycle is known as the Jevons paradox and largely holds true for resources with very elastic demand: planes for instance are more efficient than cars, which are more efficient than horses, but that doesn’t mean we use less jet fuel today than we used horse feed then. Efficiency often drives more consumption, which is not great when you’re making fossil fuels cheaper as a result. If you’re interested in the dynamics at play, and how complex thinking about energy can be, I recommend the (slightly dated but) very provocative book The Bottomless Well. There are areas where more efficiency does mean a greater advantage for renewables, and the workshop paper does a good job at highlighting those opportunities, whether it’s about better forecasting of energy demand or better material design. There is also a very large opportunity in bringing the best that data science has to offer to scientific fields that have not traditionally been data-driven but instead rely predominantly on mathematical modeling. It’s always struck me how much of our entire technological progress is often bottlenecked by battery technology: Improving the energy density of storage by a few factors at equivalent cost would completely change the transportation landscape and mobile computing, and tilt the equation unequivocally in favor of renewables. The most daunting task, however, is not simply to enable more use of renewables and to put a stop to carbon emissions, but to find ways to bind the existing carbon in the air in a more efficient way than evolution has managed to come up with in the billions of years of tweaking photosynthesis. Surprisingly, attempts here are not without precedent. It is also important for us, the machine learning community, to not become part of the problem in the first place. We collectively use a lot of energy. While I am extremely fortunate to be part of an organization which matches 100% of its energy use with renewables, most other infrastructure providers, even the big ones, struggle to get there, and it’s not for lack of trying and investing in it.
Offsets are also not the complete answer, since they can only make a positive difference in a regime where carbon-based fuels are still an alternative to renewables, and may ultimately detract from the goal of eliminating them altogether. Nonetheless, today’s world is still one where these forms of energy can be traded against each other with varying degrees of utility. Accounting for my personal impact as a citizen brings up a very different set of questions. My home city of San Francisco provides us the choice of using 100% renewable energy sources for electricity, which is fantastic. However I, like most of my neighbors, still rely on natural gas for heating. I’ve actually wondered how many pounds of carbon I would have to offset to be ‘neutral’ in terms of direct energy consumption. Thanks in part to my solar installation, I am left mostly with air travel to account for, which, sadly, dwarfs all other direct carbon emission sources for my household (Since last year, I purchase offsets for that footprint, too). Even though I try to reduce my own air travel, I remain ambivalent about the ‘travel shaming’ trends that are entering the zeitgeist on the basis of the carbon impact of that industry. Mark Twain’s famous quote, ‘Travel is fatal to prejudice, bigotry, and narrow-mindedness’ expresses a sentiment that I can directly relate to. I can’t imagine the person I would be today if not for easy access to a world so different from my rural France, and how foregoing that experience would have shaped my own thinking on global citizenship and impact on the planet at large. If I wanted to account for all my indirect impact — meaning the environmental impact of the things that I purchase and use, it would seem, on the surface, to be difficult, particularly in the absence of the exact provenance and energy footprint of the things I consume. Interestingly, and controversially, there is a crude shortcut one can use to estimate it: to a first order approximation, your entire footprint is the number of kids you have, times a constant factor. Kids will have kids, and each direct descendent you have today compounds to a number of additional human beings on the planet. Each will individually yield a degree of carbon-generating consumption that’s largely independent of whatever action you take today, and that will have more to do with lifestyle choices, technology and economic context of future generations. That aggregate consumption, under reasonable assumptions, dwarfs your own. Hence the constant factor, what you do today to influence your indirect footprint has little bearing to the amortized decisions your descendants will make, unless you believe that sensibilizing your kids to the issue can have material trans-generational effects, and the discount factor to apply largely depends on how long you believe this crisis will take to unfold. Hearing that your best lever to limit your environmental footprint may be to have fewer kids is yet another angle by which the issue of climate change can tickle the world’s religious and geopolitical neuroses. It is also a poor excuse to pass the buck to the next generation. There is broad consensus in particular that the best and most urgent levers to pull in the fight against climate change are political. Many of my Bay Area neighbors feel quite disenfranchised on this front, given the modest role we can hope to play on the national US stage. Perhaps having an environmentally-minded Oaklandite in the presidential race can change that. 
On the regulatory front, I am personally very eager to see how Canada’s foray into revenue-neutral pricing of carbon emissions unfolds, because we need more large-scale policy experiments like this one to bring about change. Even though climate change is the issue of our times, I can’t help but wonder to what extent it is at risk of overshadowing the other environmental problems we’re facing today. Many of these issues are beholden to the evolution of the climate, but many others are at risk of neglect in the face of a worldwide, invisible crisis unfolding quickly. What happens when biodiversity and habitat preservation run counter to the sustainable development of renewable energy sources? Or when we collectively decide that preserving the world’s forests is no longer a meaningful enough lever in this fight that we should care? I fear that increasingly, anthropocentric responses to the climate crisis are going to force us into tradeoffs that run up against the broader goal of ecosystem preservation. We are at risk of losing sight of what’s worth saving in the first place, and the solutions we come up with will force us into very difficult compromises about how we choose to shape the future of planet Earth. Nonetheless, it’s important to think about how one may best employ one’s strengths towards these important social goals, because that’s where we can likely find the most leverage to make a material difference in outcomes beyond merely making good personal choices as a citizen.
https://vanhoucke.medium.com/the-ai-citizen-and-climate-change-59ff9a93803f
['Vincent Vanhoucke']
2019-07-08 10:18:29.102000+00:00
['Machine Learning', 'Citizenship', 'Sustainability', 'AI', 'Climate Change']
Contribute Your Work On Storymaker
Contribute Your Work On Storymaker Share your creative ideas, your artwork, writing, music, creative process and unique creations on Storymaker My name is Larry G. Maguire, I am a writer and artist. I set up this publication here on Medium as a place where creative people like you and I can share our art, writing, music and unique creative ideas with fellow artists. Then, Anna Rozwadowska came along as our poetry editor, followed by Michael Stang as prose editor. If you are a creative person and you’d like to spread the word about your work and ideas then we want to hear from you. (See bottom of the article for how to request to be added as a writer) What We’re Looking For Storymaker celebrates art and creativity in written form. Poetry, short stories, creative non-fiction, and essays are welcome. Write about life, death, love, hate and everything in between. Express what you feel in a few short lines or a few thousand words. We wish to showcase your interpretation of the world and the people in it. What is your story and what are the challenges you face with your daily work? We want that too. In other words; offer your experience and advice to the community. If you are a designer, portrait artist, sculptor, writer or author, poet, singer-songwriter, musician, illustrator, craftsperson, photographer, furniture maker, or whatever, we want to hear from you and publish your work. Write and share photos, audio and video of your work. Explain your creative process, share your struggles and challenges. Tell us how you started and where you’re going. Here’s How It Works We want Storymaker to become a place where creative people like you can share their work in comfort and security so there are a few ground rules in making your submissions. 14 rules to be exact; You Must Have A Medium Profile. I’m afraid we don’t accept Word docs etc. because with Word we need to do all the formatting for you. You have to have a profile here on Medium and your post must be created on Medium but unpublished to any other publication. You’ll Be Added As A Writer. On your request to join Storymaker (comment with “add me as a writer” below or send us a private note) you’ll be added as a writer. You will then be able to add new pieces as drafts whenever you please. However, they will be queued to go out as per our editorial process. That means they won’t be published one after another, there will be a day or two between your articles published. Introduce Your Piece. This is not always necessary but rather than just jumping right in, it may be good for you to share a paragraph introducing your story. Is it an extract from a current book? Is is a new creative idea? Is it an opinion piece? Is it a drawing? Is it a research-based article? What’s the background, how did the idea come about? Aim between 800 and 3000 words. Less than that might be okay ( where poetry is concerned mainly) but to tell a story it must have some substance and less than 800 words probably won’t do it (unless it is poetry). That’s not to say your story will be rejected. Most stories will be accepted to be honest. Formatting. Use Medium’s built-in formatting tools. Here’s an article that will help you get your head around formatting. We’re not too fussy about everyone formatting their posts the same way, so do whatever is best for your piece while using Medium’s formatting tools. Tag Your Article. Use one of the following tags before publishing so your piece ends up in front of the right eyeballs and in the right section on Storymaker and beyond. 
— For poetry, use tag; Poetry | For short stories, use tag; Short Story| For design, use tag; Design | For general writing tips or advice, use tag; Writing. You get the picture… Add a full-width image. Your article should have a full-width image, which can be your book front page, artwork or some other representative image. Make your image 2000 x 1000 px. Your image shouldn’t be a spam image with “buy it now” slogans etc. Make it tasteful and appropriate. Check out Unsplash, Twenty20 or Pixabay for images. Only one link to your sales page, please. If you have a piece for sale then add a link to your sales page on your website or Amazon or wherever but only once at the bottom. If there are multiple links to sales pages throughout the piece they will be removed or your article may be rejected. The idea here is that we keep it real, not spam nor hard selling. No affiliate links are permitted. Email List Signup Forms are allowed. If you have an email signup form on the foot of your post that’s fine. In fact, we’d encourage you to get one if you don’t already. Try Upscribe Include An Author Profile. At the bottom of your piece, include your author profile if you wish. Create it whatever way you like, link to your website, your book again if you wish (it’s ok to do that here just not more than once in the article). We Don’t Do Erotica. Sexually explicit material is off the menu. Not that we are prudes or shun sex or anything, it’s just this is not the place for it. If you write “romance” then Storymaker is not for you. We Don’t Do Preachy Religious Dogma. Larry grew up in what he’d call a repressive religious society in the 70's and 80's. Therefore, he has little time for preachy non-secular ideologies and religious dogma. With respect, if you want to share a personal spiritual story you’re entirely welcome — just don’t preach your religious views. Comment On Other Author’s Stories. Storymaker is a place where creative people can interact and share stories and ideas. It’s a place where you can get help refining your work and make connections with others. So it’s important that we interact and comment on each other's material. Keep It Clean. Needless to say, we’d like for us to afford respect and encouragement to each other. However, we do accept that views differ so disagreement and debate is fine. Let’s just remember that it’s nice to be nice. Consider a private note to a fellow writer if the need arises. To Become A Writer & Make A Submission Do This Just let us know you’re interested in contributing by adding “make me a writer” in the comments below. Tag Larry G. Maguire, Anna Rozwadowska or Michael Stang in your comment and we will add you then. That’s about it. Go ahead and create your article then shoot us a message and we’ll get you published here on Storymaker. Talk soon! Larry G. Maguire Anna Rozwadowska Michael Stang
https://medium.com/storymaker/contribute-your-work-to-storymaker-794f0c7d6c6a
['Larry G. Maguire']
2020-09-09 09:03:08.280000+00:00
['Artist', 'Writing', 'Art', 'Writer', 'Creativity']
Remarkable AI Solutions that Help to Achieve Business Goals
AI is not just a techy buzzword anymore. We regularly come across AI/ML technology in both personal and business applications. Businesses are looking at AI for ways to increase revenue, improve operations, and enhance productivity. If you are looking for readily available AI solutions that could accelerate your business, then dive straight in.

What is the Democratization of AI?

According to a recent Gartner report, 77% of the organizations surveyed planned to either increase or retain their AI investments in spite of the global pandemic. Additionally, the report talks about the “Democratization of AI”. This implies that businesses want everyone linked to the organization to experience the benefits of AI solutions. This includes customers, business partners, business executives, salespeople, assembly line workers, application developers and IT operations professionals. How exactly can this sort of democratization be achieved when every aspect of the business functions differently? In spite of the differences, there are many common practices across departments which are tedious and still carried out manually. For example, why do we still need someone to jot down meeting notes? While most businesses appreciate the value that AI solutions can provide, they are unaware of the common areas where automation can increase productivity. Fear of the unknown acts as a barrier to getting started. Also, acquiring the required expertise or knowledge to build and subsequently work with AI solutions is another challenge. Let us find out the relationship between the different business goals and available AI solutions.

How to transform your business with AI

There are several ways in which AI can be integrated into your day-to-day operations. Let us look at how different AI solutions can be used to achieve specific business goals.

Increase revenue: Your sales revenue largely depends on better customer understanding and improved customer service. AI solutions that analyze data from new sources like CCTV and beacons, in combination with existing sources like POS, can provide a deeper understanding of customers/users. This can help to influence customer spending habits and increase stickiness. AI solutions that suggest optimal product placements in retail stores or online product recommendations are results of such analysis.

Improve operations: To improve operations, you need to identify redundant or lengthy processes that can be optimized. Again, operational data analyses can be used to identify and gain insights into operational inefficiencies.
These can be subsequently addressed using problem-specific AI solutions that eventually result in a reduction in unnecessary operational costs. Enhance productivity: AI solutions provide intelligent tools that can reduce effort, time, and financial investments by automating manual or expertise-heavy processes. They can be trained to deliver an equal or similar quality of output at a fraction of the initial resource investments. Let us now look at a few common business scenarios that can be optimized with the use of AI. AI Applications that address your mundane problems We will now look in detail at five challenges that are common to many businesses and AI solutions that can be used to address these challenges. Meeting Notes The process of taking down meeting notes usually does not cover all spoken content and is prone to human errors. Without recorded texts of the meeting, it is impossible to run a textual search to find or reference discussions that happened in the meeting. The result is lost productivity (in recalling or listening through the recorded audio, if any) and errors (in an incorrect recall or misinterpretation due to imperfect memory). Voice AI solutions like automatic speech recognition from Sentient,io can process such recordings and transcribe them to textual meeting notes with high accuracy. This text may be further processed to add context, before sharing with all participants and stakeholders. Customer Profiling In the retail industry, customer profiling helps businesses to understand spending habits of customers and design targeted marketing campaigns aimed at a particular type of customer. Customer profiling is usually unidimensional, from a single data source that belies the multifaceted nature of customer behaviour. Such data may not be enough to provide a good understanding of customers and subsequently result in poorly informed business decisions. Data about customers may be available from other sources which can help to build an accurate picture about the customer. For example, a retail chain may combine CCTV data with POS data to better profile their target customers at different locations. This would help to increase revenue while allowing for better management of inventories. A possible solution could be to count the density of people in specific departments of the store, helping to identify items which were seen, but not purchased by customers. Podcast Production The podcast production process requires an effective voice and speaking capabilities, in addition to a lengthy process of metadata creation. You may also want the podcasts to be delivered in specific accents depending on the audience demographics. Cost of production is directly proportional to human effort involved in most cases. However, AI based text to speech services, like from Sentient.io, can help you to not only edit but also create podcasts from scratch. Such services are available as affordable SaaS solutions. Extraction of Financial Information Businesses may be required to extract financial information about individuals in situations where background checks are required. Extraction of such information manually is a laborious process because the information available is unstructured and distributed across different types of sources. An individual search solution combines data from different sources and provides it in a readable and manipulatable format thereby addressing the operational inefficiencies of the extraction process. 
Video Editing Creating short-form videos from long-form videos is a laborious and time-consuming process that requires adequate skills and experience. There is a high cost attached if you engage internal or third-party resources to perform this task. An AI based video processing software could provide the solution to this problem by analyzing the video content to identify key scenes in the video based on certain criteria and using these to build a shorter video. This solution would eliminate the requirement for human effort and provide businesses with an affordable SaaS solution. To read more on AI solutions, click here.
https://brandzasia.medium.com/remarkable-ai-solutions-that-help-to-achieve-business-goals-6956567fc94
[]
2020-12-10 06:45:38.004000+00:00
['Machine Learning', 'Artificial Intelligence', 'Business', 'Technology', 'AI']
2 Reasons Not to Share Your Pronouns
As a Southerner, if I introduced myself to someone new and told them my pronouns were they/them, they would probably look at me like I had said, “I just flew in from the Moon.” Wild, considering the existence of the gender-neutral word “y’all,” we use so much down here. Still, the South tends to feel like a place where the only acceptable way to refer to a stranger is “sir” or “ma’am.” I’m not saying that’s the way it should be, but that’s the way it is here — and many other places around the world. Those reactions fully justify the existence of a holiday like International Pronouns Day, which occurred earlier this month on October 20th. Correct pronoun usage is so vital for so many people. Things are miles from perfect, but society’s acceptance of the LGBTQ+ community has come a long way in the grand scheme of things. The Williams Institute at UCLA released a report showing that 75 percent of the countries polled had increased acceptance of LGBT people since 1981. Even as being trans or gender non-conforming has become more widely understood and accepted, it’s still commonplace for people to assume someone’s gender identity and pronouns based solely on their appearance. Being misgendered can be devastating for anyone experiencing gender dysphoria, which is why asking someone’s pronouns is generally thought of as being supportive — as doing the right thing — as lifesaving. And in many circumstances, it is. Throughout the day on International Pronouns Day, I read so many wonderful, encouraging stories that made my heart soar and brought a happy tear to my eye. And yet, these stories also made me feel *something* that I couldn’t quite put my finger on for a long time. I finally realized that rather than that *something* being inclusivity, it was the opposite of that — fear and alienation. Whether or not you ask someone for their pronouns should depend heavily on the context. If you’re in a one-on-one conversation with someone, it’s probably a great idea. In a group setting? Not so much. While it might seem like the most inclusive thing you can do, introducing yourself with your pronouns and asking for the other person’s pronouns might not have the effect you intended. The worst part of all is that these days, due to the prevalence of “cancel culture,” you’re bound to garner some side-eyes from some activist circles for refusing to share your pronouns. Refusal to comply might even cause some to accuse you of being transphobic — even if you’re trans — because they assume you’re cis. I’ve seen it happen. While the attempt to normalize not assuming anyone’s pronouns is a well-intentioned step in the right direction, it’s important to remember that not everyone will feel comfortable disclosing their pronouns for various realistic and valid reasons. There are plenty of reasons why someone wouldn’t want to share their pronouns with someone — especially a stranger. And yet, as our world changes, it is becoming more common to feel put on the spot by someone you just met when they ask you, “What are your pronouns?” But why wouldn’t someone want to disclose their pronouns? And how can we work together to fix this predicament? 1. Being forced to disclose pronouns can be dangerous for those who aren’t out yet. Telling the truth can put people at risk for physical and emotional violence. 
Trans and nonbinary people experience physical violence at an alarming rate, although it’s nearly impossible to get accurate statistics due to “substantial limitations of available data.” The world is a dangerous place for people who aren’t cisgender, which explains why so many trans and gender-nonconforming people aren’t fully out. The consequences for coming out or being outed, purposely or inadvertently, can be deadly. It’s more than just physical violence, though. Upon finding out a person’s pronouns, some people purposely misgender people to be cruel — whether they’re parents, “friends,” or co-workers of those who are trans or gender non-conforming. Research shows that this act of misgendering from their support system is a betrayal that can lead to self-harm — even suicide. Even more heartbreaking is that the families of many trans and non-binary folks refuse to respect their gender identities and pronouns even in death, frequently misgendering those they claim to love in obituaries and funeral services. So consider this scenario. Let’s say you’re not out yet. You’re in mixed company composed mostly of strangers and acquaintances. Someone asks for your pronouns in an attempt to be more inclusive and progressive. You have two options. You can tell the truth and hope that everyone around accepts trans people and won’t try to harm you or ruin your life somehow. You can lie. Telling the truth is a risk that could mean losing a lot — possibly everything. Discrimination based on gender identity is still legal in 23 states here in the US — and 34 percent of the national LGBTQ population lives in those states. Therefore, despite the Supreme Court’s ruling, in states with at-will employment, you can lose your job for being Trans, NB, Genderfluid, Agender, or otherwise gender non-conforming. You can lose your shelter, too. You can even lose your children. Generally speaking, you can lose all of your most important basic needs in Maslow’s hierarchy. Realizing all of this, perhaps you might decide it’s in your “best interest” to lie. Those who aren’t out may feel the need to lie about their pronouns, causing gender dysphoria. Yikes. Now we’re inadvertently back in the same situation we were trying to avoid by not assuming peoples’ pronouns. 2. Being forced to disclose pronouns can be alienating for some rather than inclusive. For example, those who are questioning or still figuring out their gender may feel awkward. I’ve wondered if I might be non-binary since I first heard the term. As a child, and to some extent, as an adult, I’ve always felt like I was never feminine enough to fit in with women but somehow too feminine to fit in with men. I didn’t have the language yet to express this feeling when I was young. I found myself using the word “tomboy” as a badge of honor — even though anyone of any gender can enjoy running barefoot in the yard to catch frogs, grasshoppers, and fireflies. If I ever was feminine or masculine enough to fit in somewhere, the feeling was always fleeting. Sure, maybe I felt more feminine some days, but would my acceptance be revoked if I didn’t continue performing femininity to conform when I didn’t feel like it? Feelings of confusion and cognitive dissonance about this topic have kept me up at night before. Sadly, rather than feeling “seen” when I heard about NB folks, I felt even more scared than before. Well, shit, I don’t fit in with the cis folks, but what if I don’t fit with the trans and NB folks either — because I’m not queer enough. 
I don’t experience dysphoria. My presentation usually allows me to pass as a cis woman. This experience of staying closeted due to not fitting anywhere is similar to my experience with bisexuality — not straight enough for the straights and not queer enough for the LGBTQ community. In that way, wrestling with my gender identity on International Pronouns Day reminds me of how I always feel on National Coming Out Day — also in October. Before publication of this article, I was only officially out as “NB questioning” to two people. So yeah. Besides living through seven months of a pandemic, October has been a weird month for me. I’m sure I’m not the only one. I recently watched one of Blair Imani’s “Smarter in Seconds” videos on Instagram on the topic of pronouns. If you don’t know Blair, you should check her out. She identifies as a queer, black, Muslim historian. I adore her. However, this made me wonder how I would respond if I were ever to meet Blair Imani, and she recited the script she recommends: “Hi! I’m _______! I use she/her pronouns. What about you?” Beyond being starstruck by talking to Blair Imani, I wouldn’t know how to respond to her without feeling even more awkward because I’m still struggling with this. Quite simply, I wouldn’t know what to say my pronouns are because I myself don’t even know what they are. I’m working on it. In that work, I’ve discovered that maybe, like my gender itself, my pronouns might change depending on how I feel that day. Some peoples’ pronouns change day by day. In my experience, being forced to choose a pronoun and announce it aloud makes me feel just as “boxed in” and diminished as I felt when I thought you had to choose a gender to love — a choice based on a binary system of “one or the other.” It reminds me of forms that force me to check a box — male or female — with no other option. Usually, given the option, I say, “I prefer not to disclose.” So what is the solution? Err on the side of caution and always use gender-neutral pronouns as the default in your speech and writing. I’ve been practicing this exercise in my writing, which has been exciting for me as I attempt to kick off my writing career. It’s honestly not hard — people have been using the singular they for centuries — including some of the most famous and beloved writers in literary history. Don’t believe me? Here is a list of all the times Jane Austen, for example, used the singular they in writing. My favorite applicable quote from the list is this one from Emma: “It is very unfair to judge of any body’s conduct, without an intimate knowledge of their situation.” Still don’t believe me? Well, you just got punked. I used the singular “they” in this article’s first sentence, and I bet you didn’t even notice. Let me reiterate: it isn’t that hard. I promise. If you accidentally use a gendered pronoun with a stranger or if you accidentally misgender someone whose pronouns you know, correct yourself without making a big production out of it. The easiest solution of all, though? Skip the pronouns altogether and people by their names when you first meet them. It’s as easy as introducing yourself to a Kindergartener.
https://medium.com/an-injustice/2-reasons-not-to-share-your-pronouns-2a892915fdf4
['Megan Brown']
2020-10-30 00:36:11.650000+00:00
['Mental Health', 'LGBTQ', 'Equality', 'Culture', 'Society']
Writing Technical Specification, and Why it Matters
Why tech spec?

Imagine jumping into coding right after receiving the project requirements: you’ll end up repeatedly changing the design during implementation, delivering a buggy product, or even jumping back to where you started because the result is “not what’s expected”. Even worse, you might give a promise based on your intuition, and then find out that the deadline is too early or the budget isn’t enough.

Chart picture by Raygun

Remember the chart above? I believe you saw it somewhere in some form. Basically, it says “the earlier it is detected, the cheaper”. The same applies to design changes. For me, a technical specification is a document that outlines how we are going to tackle technical issues in order to achieve a goal. Usually, this document will be written by senior engineers or leads before implementation. Furthermore, below are the benefits of writing a tech spec:

Knowing the unknowns, when you break down all the work, you’ll then realise the possible blockers, potential performance issues, things that were missed (such as a security approval, or a script you simply forgot to update), possible risks, etc.

Documentation, a tech spec can be revisited whenever required, such as during an audit (it can also act as a decision/design log)

Work consensus, by approving the tech spec, the reviewers agree with the solution

Reduced effort in explaining the work to other parties, when you need to explain the work to team members or other teams, you can always bring the tech spec to make it easier to understand

Better task distribution, the team can easily understand what will be done and feel confident working on it because the tech spec was agreed by the required parties

Faster development and better quality, things are prepared and the team understands what to do before execution

Better estimation, because you know what to do

Cheaper failure, because design changes and bugs will be detected earlier

Prerequisites

Requirements are the core of software development, so ensure the requirements are clear and that you understand them correctly. I recommend writing them in a document (if there isn’t one) and getting an agreement about it with your project manager or stakeholder. Furthermore, I’d recommend checking out the “Software Requirements” part of SWEBOK (an ISO standard for software engineering) to better understand this subject.

Suggested Format

The format below is, based on my experience, the ideal tech spec format, but feel free to change it according to your requirements because different contexts might need different treatment.
Head

Author, the tech spec’s author name
Reviewer(s), the names of the tech spec’s reviewers
Status, either stable or draft (the default), to set the reader’s expectations

Table of Contents

For readers to easily navigate through the document.

Introduction

Business requirements, can be links to the related requirements document and story/epic. This is the goal of your tech spec, so it’s really important to include it
Glossary, any jargon that is related to a specific domain, or that perhaps not everyone knows

Solution

Technical decision, describe why you chose this technical decision over the others, including proof of concept, performance impact, etc.
Implementation, describe the current state and the suggested changes that must be made to satisfy the requirements, based on the technical decision that was made
Metrics, a list of success metrics for the work, or a list serving as the definition of done
Possible risks, anything that might harm/block this solution. It is recommended to include these components: Possible Risk (what the risk is), Preventive Action (actions that can be taken to prevent or reduce the risk)

Plans

Test plan, how the work should be tested
Release plan (usually a checklist), how to release the work to production. It is recommended to include these components: Who (who will do it), Action (what will be done)
Rollback plan, a contingency plan for when something goes wrong. It is recommended to include these components: Fail Type (e.g. deployment error, test failure, SQL error), Scenario (e.g. failing to execute a migration script), Solution (e.g. if this then that, if that then this), Appendix (e.g. a script to clean up broken data caused by the failed migration)

Supporting resources

Any resources that are related to and able to support the tech spec.

Almost there!

Remember to ask the relevant parties to review your tech spec, and iteratively improve the document until an agreement is reached. Then, you can change the tech spec status to Stable and distribute the work to team members.

Conclusion

Writing a technical specification requires extra effort early in your development cycle, but the long-term benefits are well worth it. However, if you are still in doubt or not used to it, you can try writing one for one or two epics of your project and see how it goes. Furthermore, I also recommend reading this article to broaden your knowledge.
https://medium.com/easyread/writing-technical-specification-and-why-it-matters-b9ea78fbeb49
['Anggie Aziz']
2020-08-23 05:52:16.479000+00:00
['Software Development', 'Engineering', 'Process', 'Software Engineering', 'Agile']
#WorldMentalHealthDay: Look Beyond Yourself & Care for Others.
Mental health is a trending 21st-century issue that affects a person’s emotional state and physical output. Mental health deteriorates when you begin to engage in toxic activities or unhealthy/abusive relationships, which can lead to depression and sometimes suicide. The World Health Organization estimates that about 1 million people die of suicide annually around the world. Despite these frightening statistics, poor mental health can be addressed and extreme outcomes like depression and suicide can be prevented if:

1. You talk to someone you trust, such as close friends, parents, colleagues, mentors or religious leaders, and confide in them about the issues bothering you.

2. You seek professional counselling from a psychologist to get expert advice.
https://medium.com/climatewed/worldmentalhealthday-look-beyond-yourself-care-for-others-28a411295192
['Iccdi Africa']
2019-10-10 17:40:34.834000+00:00
['Health', 'Youth Development', 'Mental Health', 'Climate Change', 'Women']
Next Level Art and the Future of Work and Leisure
The last artist I want to introduce here is computational design lecturer Tom White. His project Perception Engines focuses, as the name suggests, on the role of perception in creativity. In his words, “Human perception is an often under-appreciated component of the creative process, so it is an interesting exercise to devise a computational creative process that puts perception front and center.” Again, it is essentially an exercise in tricking neural networks to do things they were initially not intended to do by cleverly changing the domain on which they are applied. It uses the idea of adversarial examples and puts an interesting artistic twist on it. Particularly, White sets up a feedback loop where the perception of a network guides the creative process, which then in turn again influences the perception. In a nutshell (and a bit simplified), White took a neural network trained to recognize objects in images, and then used a second system that can generate abstract shapes and search for a result that can “trick” the network into a high certainty prediction of a particular object class. The results are seemingly abstract shapes (which White later turned into real screen-prints) that still convince the networks to be photorealistic representations of certain objects. What is interesting is that once we know what the network thinks it sees, we can in most cases suddenly also see the objects in most of the images (although I doubt anyone is tricked into confusing it with the real thing). The true process White used is actually even more clever and deep than the abbreviated outline of the project I gave here. If you are interested in the details I highly recommend checking out his article on the piece. Qosmo: Computational Creativity and Beyond Now that you have a little bit of an idea of what AI art can be and gotten to know a few people working in this emerging field, let me briefly give you my own story of how I got involved in this. I actually started my academic career as a physicist, doing my PhD in Quantum Information Theory. But while doing this I realized that I wanted to do something a bit more applied. And having had some entrepreneurial experience as well through a startup I co-founded, I decided that AI seemed both interesting from a purely academic perspective, but also very promising tool to solve some really cool real-world problems (and make some money). So after my PhD I worked for a few years at a startup that was applying AI to business problems in a wide range of fields, for example finance and health care. While there are certainly interesting problems to be solved in these domains, I personally got more and more interested in the creative side of AI. Eventually, in February 2019 I finally decided to quit my previous job and join my friend Nao Tokui at his company Qosmo. If you are interested in the full story of how all this unfolded (and how I also became an author and musician at the same time), I wrote about all this in detail recently: Qosmo is a small team of creatives based in Tokyo. The core concept of the company is “Computational Creativity”, with a strong focus on AI and music (but certainly not limited to these areas). Here I want to briefly introduce you to three of our past projects. AI DJ Probably Qosmo’s most famous project so far is our AI DJ Project. Originally started in 2016, AI DJ is a musical dialogue between human and AI. In DJing, playing “back to back” means that two DJs take turns at selecting and mixing tracks. 
In our case we have a human playing back to back with an AI. Specifically, a human (usually Nao) selects a track and mixes it, then the AI takes over and selects a track and mixes it, and so on, creating a natural and continuous cooperative performance. This idea of augmenting human creativity and playing with the relationship between human and machine creativity is at the heart of what we do at Qosmo. We are not particularly interested in autonomously creative machines (nor do we really believe they are possible in the near future), but rather in how humans can interact with AI and machines for creative purposes. AI DJ consists of several independent neural networks. At the core are a system that selects a track based on the history of previously played tracks and a system that handles the beat matching and mixing. Crucially, we are not using digital audio but actual vinyl records. The AI has to learn how to physically manipulate the disc (through a tiny robot arm trained using reinforcement learning) in order to align the beats and get the tempo to match. While the project is already several years old, we are still constantly developing the system. For example, we use cameras to analyze crowd behavior and try to encourage people to dance more by adjusting the track selection to this information. We have taken this performance to many venues in the past, both local and global. Our biggest performance so far was at Google I/O 2019, where we did a one-hour show on the main stage warming up the crowd right before CEO Sundar Pichai’s keynote. You can read more about the details of AI DJ on our website.

Imaginary Soundscapes

As humans we have quite deeply linked visual and auditory experiences. Look at an image of a beach and you can easily imagine the sounds of waves and sea gulls. Looking at a busy intersection might bring car horns and construction noises to mind. Imaginary Soundscapes is an experiment in giving AI a similar sense of imagined sounds associated with images. It is a web-based sound installation that lets users explore Google Street View while immersing themselves in the imaginary soundscapes dreamed up by the AI. Technically it is based on the idea of cross-modal information retrieval techniques, such as image-to-audio or text-to-image. The system was trained with two models on video (i.e. visual and audio) inputs: one well-established, pre-trained image recognition model processes the frames, while another convolutional neural network reads the associated audio as spectrogram images, with a loss that forces the distribution of its output to get as close as possible to that of the first model. Once trained, the two networks allow us to retrieve the best-matching sound file for a particular scene out of our massive environmental sound dataset. The generated soundscapes are at times interesting, at times amusing, and at times thought-provoking. Many of them match human expectations, while others surprise us. We encourage you to get lost in the imaginary soundscapes yourself.

Neural Beatbox

Our most recent art project is Neural Beatbox, an audio-visual installation which is currently featured at the Barbican in London as part of the exhibition “AI: More Than Human” (which also features works by Mario Klingemann and Memo Akten). Just like AI DJ, this piece is centered around a musical conversation. However, unlike in AI DJ, here the AI is not a participant but only the facilitator, and the dialogue takes place between different viewers of the installation.
Rhythm and beats are some of the most fundamental and ancient means of communication among humans. Neural Beatbox enables anyone, no matter their musical background and ability, to create complex beats and rhythms with their own sounds. As viewers approach the installation they are encouraged to record short clips of themselves, making sounds and pulling funny faces. Using that video, one neural network segments, analyses and classifies a viewer’s sounds into various categories of drum sounds, some of which are then integrated into the currently playing beat. Simultaneously, another network continuously generates new rhythms. By combining the contributions of subsequent viewers in this way, an intuitive musical dialogue between people unfolds, resulting in a constantly evolving piece. The slight imperfections of the AI, such as occasional misclassifications or unusual rhythms, actually enhance the creative experience and produce interesting and unique musical moments. As a viewer, trying to push the system beyond its intended domain by making “non-drum sounds” can lead to really interesting results, some of which are actually surprisingly musical and inspiring. Currently Neural Beatbox is limited to being displayed in public places like the Barbican exhibition, but we are considering opening it up as an interactive web-based piece as well. We are just a bit concerned about what kind of sounds and videos people on the internet might contribute to this installation… While the results could be hilarious and entertaining, they would probably also fairly quickly contain some NSFW content. ;)

Generative Models and VAEs

Besides my (still very recent) work at Qosmo, I have also done a few of my own artworks and more general creativity-related projects with AI. Before showing you some of these, I’d briefly like to make a quick and simple technical excursion. Many of the models used in the creative scene fall under the broad category of “Generative Models”. The GANs introduced above are one variant of this. Generative models are essentially models that, as their name suggests, learn how to generate more or less realistic data. The general idea behind this is very nicely captured in a quote by physicist Richard Feynman.

“What I cannot create, I do not understand.” — Richard Feynman

As researchers and engineers working with AI, we hope that if we can teach our models to create data that is at least vaguely realistic, then these models must have gained some kind of “understanding” of what the real world looks like, or how it behaves. In other words, we use the ability to create and generate meaningful output as a sign of intelligence. Unfortunately, this “understanding” or “intelligence” still often looks like the following image. While our models are definitely learning something about the real world, their domain of knowledge is often severely limited, as we have already seen in the examples of bias above. In my previous job working on practical business applications this was bad news. You do not want your financial or medical predictions to look like the image above! Now, as an artist, however, I find this exciting and inspiring. In fact, as already pointed out, many artists deliberately seek these breaking points or edge cases of generative models.
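To make the idea of probing a model's breaking points a little more concrete, here is a minimal TensorFlow.js sketch of classifier-guided image optimization, the core trick behind pieces like Perception Engines. To be clear, this is not Tom White's actual pipeline (his system draws constrained abstract shapes and scores them across several networks); the model URL, target class index, and hyperparameters are placeholders, and the classifier is assumed to output class probabilities.

```typescript
// perception_sketch.ts — a sketch under stated assumptions, not a definitive implementation.
import * as tf from '@tensorflow/tfjs';

const MODEL_URL = 'https://example.com/classifier/model.json'; // placeholder
const TARGET_CLASS = 42;                                        // arbitrary class index

async function run(): Promise<void> {
  const model = await tf.loadLayersModel(MODEL_URL);

  // Start from random pixels and treat the image itself as the trainable variable.
  const img = tf.variable(tf.randomUniform([1, 224, 224, 3]));
  const optimizer = tf.train.adam(0.05);

  for (let step = 0; step < 200; step++) {
    optimizer.minimize(() => {
      const probs = model.predict(img) as tf.Tensor2D;        // shape [1, numClasses]
      const target = probs.slice([0, TARGET_CLASS], [1, 1]);  // p(target class)
      // Maximizing p(target) is the same as minimizing -log p(target).
      return target.log().neg().mean() as tf.Scalar;
    }, /* returnCost */ false, [img]);

    // Keep pixel values in a valid range after each update
    // (a production version would also tf.tidy() intermediate tensors).
    img.assign(tf.clipByValue(img, 0, 1));
  }

  // `img` now holds an image the classifier is confident about,
  // even though a human viewer may only see an abstract texture.
}

run();
```

After a few hundred steps the classifier reports high confidence for the chosen class while a person may see only abstract shapes and textures, which is precisely the perceptual gap these artworks play with.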
https://towardsdatascience.com/next-level-art-and-the-future-of-work-and-leisure-f66049112e44
['Max Frenzel']
2019-07-04 01:05:18.521000+00:00
['Artificial Intelligence', 'Work', 'Art', 'Creativity', 'Deep Learning']
What Color Is Your Name? A New Synesthesia Tool Will Show You.
For as long as I can remember, I’ve seen letters and numbers as colors. It’s a form of synesthesia called grapheme-color synesthesia, and for me, this translating from symbols to colors happens most often with names. When I meet new people, I forget their name immediately. Don’t get me wrong, I hear the name, but my mind is distracted. In my head, I am counting the number of letters in the name, and visualizing the colors of each letter. Your name may be Emily, but to me, you’re a bright, sunny swath of five letters with an “E” and an “I.” When I meet you again later, I may think your name is Emily or Jille or Ellie. Those three names “look” remarkably similar to someone who operates as I do — they all have five letters, they all include the letters “i,” “l,” and “e.” In my head, Emily, Jille, and Ellie are remarkably similar. What is synesthesia? Synesthesia is a rare sensory trait shared by about 4% of the population, and it comes in many forms. People who “see” or associate letters and numbers with specific colors have grapheme-color synesthesia, and it’s the most common form. Other forms of synesthesia involve seeing or feeling musical notes as colors or textures, having visualized representations of time, and in rare cases, even tasting words. After many years of struggling to describe my synesthesia visually, I created a website called Synesthesia.Me. It features simple geometric portraits of these color combinations. The specific renderings are based on my own unique synesthesia color alphabet. Every synesthete’s color alphabet is unique, although there are certain universal matches for specific letters. For example, red is often cited as a common color for the letter A. What I love most about this project is that it allows me to see not just the individual letters, but actual composed illustrations of names I hear every day. As I was building the site, I was repeatedly struck by how “right” the names looked once I had them displayed in front of me. Over and over, names just seemed to jump out at me as being so perfectly right. Jille is a bright sunny day in spring. Sarah is a bold saturated blanket of red and purple. Bob and Tom are solid blocks of sturdiness and strength. Heather is a vibrant sunny rainbow and Juliet is a freshly planted garden. Jason and David are strong solid pillars and Bill is a bold whisper. Heather is a vibrant, sunny rainbow. How a name looks is not related to how it sounds I’ve received some great questions about whether the sound of the name makes a difference for names like Lachlan, which sounds like “lock lan.” Yet, it’s not actually the sound of the name that creates the colors, so it doesn’t usually pose a problem if the name doesn’t sound like how it’s spelled. For example, I know the name Siobhan sounds like “shivon,” but my brain still sees it as Siobhan. I associate colors with all the letters of the name, even the ones that go unheard in pronouncing it, like the b in this case. But what if I don’t know how to spell your name? Remarkably, the phonetic spelling of most names turns out to be a pretty close match: What about names that sound the same but are spelled differently? This is another excellent question — best answered by a visual example. For most names there’s not a huge difference. Kathie and Cathy aren’t that different. The letters “i,” “e,” and “y” are all very light, bright blocks and are somewhat interchangeable in my head. Meaghan is just a more elaborate version of Megan, and I usually default to picturing the shorter version. 
At a certain point, though, some names do have too many letters. I found through this project that names with more than eight letters essentially mush together in my brain. It’s as if my brain says “yeah, enough, got it” and almost abbreviates the color pattern, adding emphasis to the vowels and dominant consonants. Does the color palette of your name match your personality? I’m fascinated by this one — my initial answer has always been, “sometimes.” But as I have done more compiling and creating, I’ve realized the answer may be “yes” a lot more frequently. I have some definite opinions about people’s personalities based on their names — and I often find them to be accurate. So, what is your color palette? The Synesthesia.me site includes an interactive tool for seeing what your name, or any name, looks like. Simply type in any name, and as you type, the “synesthesia” view of your name appears dynamically on the screen. Curious? Play with it to see how people like me see your name.
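To give a feel for how a grapheme-color renderer like the one on Synesthesia.Me might work under the hood, here is a small hedged sketch in Python. The letter-to-color palette is entirely invented (the author's real color alphabet is personal and not published in full), and the squashing rule for names longer than eight letters is only a rough approximation of the behaviour described above.

# Hypothetical grapheme-color renderer. The palette is invented for
# illustration only; every synesthete's color alphabet is different.
HYPOTHETICAL_PALETTE = {
    "a": "#e63946", "b": "#1d3557", "c": "#f4a261", "d": "#2a9d8f",
    "e": "#ffd166", "f": "#6d597a", "g": "#80b918", "h": "#ff70a6",
    "i": "#f1faee", "j": "#3a86ff", "k": "#9b5de5", "l": "#fee440",
    "m": "#d62828", "n": "#588157", "o": "#f77f00", "p": "#7209b7",
    "q": "#495057", "r": "#b5179e", "s": "#e76f51", "t": "#003049",
    "u": "#8ecae6", "v": "#606c38", "w": "#bc6c25", "x": "#212529",
    "y": "#a7c957", "z": "#4cc9f0",
}
VOWELS = set("aeiou")

def name_colors(name, max_letters=8):
    """Return one hex color per rendered letter of a name.

    Long names get squashed, roughly as the article describes: letters are
    dropped from the end, consonants first, until the name fits.
    """
    letters = [c for c in name.lower() if c in HYPOTHETICAL_PALETTE]
    while len(letters) > max_letters:
        # drop the last consonant, or the last letter if only vowels remain
        idx = next(
            (i for i in range(len(letters) - 1, -1, -1) if letters[i] not in VOWELS),
            len(letters) - 1,
        )
        letters.pop(idx)
    return [HYPOTHETICAL_PALETTE[c] for c in letters]

print(name_colors("Emily"))       # five swatches
print(name_colors("Bernadette"))  # squashed down to eight swatches

The returned hex codes could then be drawn as the kind of simple geometric swatches shown on the site.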
https://elemental.medium.com/what-color-is-your-name-a-new-synesthesia-project-will-show-you-51bb3f0dc638
['Bernadette Sheridan']
2020-07-21 23:19:56.751000+00:00
['Synesthesia Visualizer', 'Design', 'Brain', 'Synesthesia', 'Grapheme Color']
Natural Language Processing: Count Vectorization with scikit-learn
Count Vectorization (AKA One-Hot Encoding) If you haven’t already, check out my previous blog post on word embeddings: Introduction to Word Embeddings In that blog post, we talk about a lot of the different ways we can represent words to use in machine learning. It’s a high level overview that we will expand upon here and check out how we can actually use count vectorization on some real text data. A Recap of Count Vectorization Today, we will be looking at one of the most basic ways we can represent text data numerically: one-hot encoding (or count vectorization). The idea is very simple. We will be creating vectors that have a dimensionality equal to the size of our vocabulary, and if the text data features that vocab word, we will put a one in that dimension. Every time we encounter that word again, we will increase the count, leaving 0s everywhere we did not find the word even once. The result of this will be very large vectors, if we use them on real text data, however, we will get very accurate counts of the word content of our text data. Unfortunately, this won’t provide use with any semantic or relational information, but that’s okay since that’s not the point of using this technique. Today, we will be using the package from scikit-learn. A Basic Example Here is a basic example of using count vectorization to get vectors: from sklearn.feature_extraction.text import CountVectorizer # To create a Count Vectorizer, we simply need to instantiate one. # There are special parameters we can set here when making the vectorizer, but # for the most basic example, it is not needed. vectorizer = CountVectorizer() # For our text, we are going to take some text from our previous blog post # about count vectorization sample_text = ["One of the most basic ways we can numerically represent words " "is through the one-hot encoding method (also sometimes called " "count vectorizing)."] # To actually create the vectorizer, we simply need to call fit on the text # data that we wish to fix vectorizer.fit(sample_text) # Now, we can inspect how our vectorizer vectorized the text # This will print out a list of words used, and their index in the vectors print('Vocabulary: ') print(vectorizer.vocabulary_) # If we would like to actually create a vector, we can do so by passing the # text into the vectorizer to get back counts vector = vectorizer.transform(sample_text) # Our final vector: print('Full vector: ') print(vector.toarray()) # Or if we wanted to get the vector for one word: print('Hot vector: ') print(vectorizer.transform(['hot']).toarray()) # Or if we wanted to get multiple vectors at once to build matrices print('Hot and one: ') print(vectorizer.transform(['hot', 'one']).toarray()) # We could also do the whole thing at once with the fit_transform method: print('One swoop:') new_text = ['Today is the day that I do the thing today, today'] new_vectorizer = CountVectorizer() print(new_vectorizer.fit_transform(new_text).toarray()) Our output: Vocabulary: {'one': 12, 'of': 11, 'the': 15, 'most': 9, 'basic': 1, 'ways': 18, 'we': 19, 'can': 3, 'numerically': 10, 'represent': 13, 'words': 20, 'is': 7, 'through': 16, 'hot': 6, 'encoding': 5, 'method': 8, 'also': 0, 'sometimes': 14, 'called': 2, 'count': 4, 'vectorizing': 17} Full vector: [[1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 2 1 1 1 1 1]] Hot vector: [[0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Hot and one: [[0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]] One swoop: [[1 1 1 1 2 1 3]] Using It on Real Data: So let’s use 
it on some real data! We will check out the 20 News Group dataset that comes with scikit-learn. from sklearn.datasets import fetch_20newsgroups from sklearn.feature_extraction.text import CountVectorizer import numpy as np # Create our vectorizer vectorizer = CountVectorizer() # Let's fetch all the possible text data newsgroups_data = fetch_20newsgroups() # Why not inspect a sample of the text data? print('Sample 0: ') print(newsgroups_data.data[0]) print() # Create the vectorizer vectorizer.fit(newsgroups_data.data) # Let's look at the vocabulary: print('Vocabulary: ') print(vectorizer.vocabulary_) print() # Converting our first sample into a vector v0 = vectorizer.transform([newsgroups_data.data[0]]).toarray()[0] print('Sample 0 (vectorized): ') print(v0) print() # It's too big to even see... # What's the length? print('Sample 0 (vectorized) length: ') print(len(v0)) print() # How many words does it have? print('Sample 0 (vectorized) sum: ') print(np.sum(v0)) print() # What if we wanted to go back to the source? print('To the source:') print(vectorizer.inverse_transform(v0)) print() # So all this data has a lot of extra garbage... Why not strip it away? newsgroups_data = fetch_20newsgroups(remove=('headers', 'footers', 'quotes')) # Why not inspect a sample of the text data? print('Sample 0: ') print(newsgroups_data.data[0]) print() # Create the vectorizer vectorizer.fit(newsgroups_data.data) # Let's look at the vocabulary: print('Vocabulary: ') print(vectorizer.vocabulary_) print() # Converting our first sample into a vector v0 = vectorizer.transform([newsgroups_data.data[0]]).toarray()[0] print('Sample 0 (vectorized): ') print(v0) print() # It's too big to even see... # What's the length? print('Sample 0 (vectorized) length: ') print(len(v0)) print() # How many words does it have? print('Sample 0 (vectorized) sum: ') print(np.sum(v0)) print() # What if we wanted to go back to the source? print('To the source:') print(vectorizer.inverse_transform(v0)) print() Our output: Sample 0: From: [email protected] (where's my thing) Subject: WHAT car is this!? Nntp-Posting-Host: rac3.wam.umd.edu Organization: University of Maryland, College Park Lines: 15 I was wondering if anyone out there could enlighten me on this car I saw the other day. It was a 2-door sports car, looked to be from the late 60s/ early 70s. It was called a Bricklin. The doors were really small. In addition, the front bumper was separate from the rest of the body. This is all I know. If anyone can tellme a model name, engine specs, years of production, where this car is made, history, or whatever info you have on this funky looking car, please e-mail. Thanks, - IL ---- brought to you by your neighborhood Lerxst ---- Vocabulary: {'from': 56979, 'lerxst': 75358, 'wam': 123162, 'umd': 118280, 'edu': 50527, 'where': 124031, 'my': 85354, 'thing': 114688, 'subject': 111322, 'what': 123984, 'car': 37780, 'is': 68532, 'this': 114731, 'nntp': 87620, 'posting': 95162, 'host': 64095, 'rac3': 98949, 'organization': 90379, 'university': 118983, 'of': 89362, 'maryland': 79666, 'college': 40998, ... } (Abbreviated...) Sample 0 (vectorized): [0 0 0 ... 
0 0 0] Sample 0 (vectorized) length: 130107 Sample 0 (vectorized) sum: 122 To the source: [array(['15', '60s', '70s', 'addition', 'all', 'anyone', 'be', 'body', 'bricklin', 'brought', 'bumper', 'by', 'called', 'can', 'car', 'college', 'could', 'day', 'door', 'doors', 'early', 'edu', 'engine', 'enlighten', 'from', 'front', 'funky', 'have', 'history', 'host', 'if', 'il', 'in', 'info', 'is', 'it', 'know', 'late', 'lerxst', 'lines', 'looked', 'looking', 'made', 'mail', 'maryland', 'me', 'model', 'my', 'name', 'neighborhood', 'nntp', 'of', 'on', 'or', 'organization', 'other', 'out', 'park', 'please', 'posting', 'production', 'rac3', 'really', 'rest', 'saw', 'separate', 'small', 'specs', 'sports', 'subject', 'tellme', 'thanks', 'the', 'there', 'thing', 'this', 'to', 'umd', 'university', 'wam', 'was', 'were', 'what', 'whatever', 'where', 'wondering', 'years', 'you', 'your'], dtype='<U180')] Sample 0: I was wondering if anyone out there could enlighten me on this car I saw the other day. It was a 2-door sports car, looked to be from the late 60s/ early 70s. It was called a Bricklin. The doors were really small. In addition, the front bumper was separate from the rest of the body. This is all I know. If anyone can tellme a model name, engine specs, years of production, where this car is made, history, or whatever info you have on this funky looking car, please e-mail. Vocabulary: {'was': 95844, 'wondering': 97181, 'if': 48754, 'anyone': 18915, 'out': 68847, 'there': 88638, 'could': 30074, 'enlighten': 37335, 'me': 60560, 'on': 68080, 'this': 88767, 'car': 25775, 'saw': 80623, 'the': 88532, 'other': 68781, 'day': 31990, 'it': 51326, 'door': 34809, 'sports': 84538, 'looked': 57390, 'to': 89360, 'be': 21987, 'from': 41715, 'late': 55746, '60s': 9843, 'early': 35974, '70s': 11174, 'called': 25492, 'bricklin': 24160, 'doors': 34810, 'were': 96247, 'really': 76471, ... } (Abbreviated...) Sample 0 (vectorized): [0 0 0 ... 0 0 0] Sample 0 (vectorized) length: 101631 Sample 0 (vectorized) sum: 85 To the source: [array(['60s', '70s', 'addition', 'all', 'anyone', 'be', 'body', 'bricklin', 'bumper', 'called', 'can', 'car', 'could', 'day', 'door', 'doors', 'early', 'engine', 'enlighten', 'from', 'front', 'funky', 'have', 'history', 'if', 'in', 'info', 'is', 'it', 'know', 'late', 'looked', 'looking', 'made', 'mail', 'me', 'model', 'name', 'of', 'on', 'or', 'other', 'out', 'please', 'production', 'really', 'rest', 'saw', 'separate', 'small', 'specs', 'sports', 'tellme', 'the', 'there', 'this', 'to', 'was', 'were', 'whatever', 'where', 'wondering', 'years', 'you'], dtype='<U81')] Now What? So, you may be wondering what now? We know how to vectorize these things based on counts, but what can we actually do with any of this information? Well, for one, we could do a bunch of analysis. We could look at term frequency, we could remove stop words, we could visualize things, and we could try and cluster. Now that we have these numeric representations of this textual data, there is so much we can do that we couldn’t do before! But let’s make this more concrete. We’ve been using this text data from the 20 News Group dataset. Why not use it on a task? The 20 News Group dataset is a dataset of posts on a board, split up into 20 different categories. Why not use our vectorization to try and categorize this data? 
from sklearn.datasets import fetch_20newsgroups from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn import metrics # Create our vectorizer vectorizer = CountVectorizer() # All data newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes')) newsgroups_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes')) # Get the training vectors vectors = vectorizer.fit_transform(newsgroups_train.data) # Build the classifier clf = MultinomialNB(alpha=.01) # Train the classifier clf.fit(vectors, newsgroups_train.target) # Get the test vectors vectors_test = vectorizer.transform(newsgroups_test.data) # Predict and score the vectors pred = clf.predict(vectors_test) acc_score = metrics.accuracy_score(newsgroups_test.target, pred) f1_score = metrics.f1_score(newsgroups_test.target, pred, average='macro') print('Total accuracy classification score: {}'.format(acc_score)) print('Total F1 classification score: {}'.format(f1_score)) Our output: Total accuracy classification score: 0.6460435475305364 Total F1 classification score: 0.6203806145034193 Hmmm… So not super fantastic, but we are just using count vectors! A richer representation would do wonders for our scores! Wrapping Up Hopefully you feel like you learned a lot about count vectorization, how to use it, and some of the potential applications of it! If you enjoyed reading this article, drop me a comment or maybe donate to my GoFundMe to help me continue with my ML research! And stay tuned for more word embedding content coming soon!
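The post closes by noting that a richer representation would improve these scores. As one hedged follow-up (my addition, not part of the original article), TF-IDF weighting down-weights very common words; swapping scikit-learn's TfidfVectorizer into the same pipeline looks like this, and the resulting scores are best checked by running it rather than quoted here.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

# Same 20 Newsgroups setup as above, but with TF-IDF weighting instead of raw counts
newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'))
newsgroups_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'))

vectorizer = TfidfVectorizer(stop_words='english')  # also drops common English stop words
vectors = vectorizer.fit_transform(newsgroups_train.data)
vectors_test = vectorizer.transform(newsgroups_test.data)

clf = MultinomialNB(alpha=.01)
clf.fit(vectors, newsgroups_train.target)

pred = clf.predict(vectors_test)
print('Accuracy:', metrics.accuracy_score(newsgroups_test.target, pred))
print('Macro F1:', metrics.f1_score(newsgroups_test.target, pred, average='macro'))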
https://towardsdatascience.com/natural-language-processing-count-vectorization-with-scikit-learn-e7804269bb5e
['Hunter Heidenreich']
2018-08-24 13:05:36.086000+00:00
['Machine Learning', 'Python', 'NLP', 'AI', 'Data Science']
Radio #LatentVoices. Intro. 01–04
Do you know this feeling: you wake up with a melody in your ears, a beautiful, unique song. And while you are trying to capture it (with notation if you can write scores, humming if you have a recording device near your pillow), this unearthly music vanishes, overlaid by everyday sounds, thoughts, and needs. It's gone. And you realize you were the only person who could listen to this melody. Nobody else in this world. Because it was inside your dreams. Then you are deluged by the melancholy of happiness. Happiness — because you are the lucky one to experience this harmonious sensation. Melancholy — because you will never be able to share it with anybody. Entering the epoch of Artificial Intelligence, you can. Well, it isn't quite your dream, it's a dream of a machine. But you can embrace its uniqueness and share it with the world. That is what I love about JukeBox: But let's forget technology for a while. Because of the Music in the Air.
https://medium.com/merzazine/radio-latentvoices-intro-01-04-fc51aff2f717
['Vlad Alex', 'Merzmensch']
2020-11-16 22:34:58.294000+00:00
['Music', 'Culture', 'Artificial Intelligence', 'Art', 'Jukebox']
The simplest generative model you probably missed
Idea of probabilistic PCA Let’s start out with the graphical model: Pretty simple! Assume the latent distribution has the form: The visibles are sampled from the conditional distribution: Here W,\mu,\sigma^2 are the parameters to be determined from the dataset. So everything’s a Gaussian! With this in mind, probabilistic PCA can be visualized best in the following stolen graphic from Bishop’s “Pattern Recognition and Machine Learning” text: Source: Bishop’s “Pattern Recognition and Machine Learning”, chapter 12. Left panel: After sampling a variable from the latent distribution, Middle panel: The visibles are drawn from an isotropic Gaussian (diagonal covariance matrix) around W * x_h + \mu . . Right panel: The resulting marginal distribution for the observables is also a Gaussian, but not isotropic. From the last two equations we can find the marginal distribution for the visibles is: The parameters W,\mu,\sigma^2 can be determined by maximizing the log-likelihood: where S is the covariance matrix of the data. The solution is obtained using the eigendecomposition of the data covariance matrix S : Where the columns of U are the eigenvectors, and D is a diagonal matrix with the eigenvalues \lambda . Let the number of dimensions of the dataset be d (in this case, d=2 ). Let the number of latent variables be q , then the maximum likelihood solution is: and \mu are just set to the mean of the data. Here the eigenvalues \lambda are sorted from high to low, with \lambda_1 being the largest and \lambda_d the smallest. Here U_q is the matrix where columns are the corresponding eigenvectors of the q largest eigenvalues, and D_q is a diagonal matrix with the largest eigenvalues. R is an arbitrary rotation matrix — here we can simply take R=I for simplicity (see Bishop for a detailed discussion). We have discarded the dimensions beyond q — the ML variance \sigma^2 is then the average variance of these discarded dimensions. You can find more information in the original paper: “Probabilistic principal component analysis” by Tipping & Bishop. Let’s get into some code. Load some data Let’s import and plot some 2D data: where loads a 2D text file. You can see some plotting functions in use in the complete repo. Visualizing the data shows the 2D distribution: Max-likelihood parameters Here is the corresponding Python code to calculate these max-likelihood solutions: Sampling latent variables from the data After determining the ML parameters, we can sample the hidden units from the visible according to: You can implement it in Python as follows: where we have defined: The result is data which looks a lot like the standard normal distribution: Generative mode: sample new data points We can sample new data points by first drawing new samples from the hidden distribution (a standard normal): and then sample new visible samples from those: where we have defined: The result are data points that closely resemble the data distribution: Rescaling the latent distribution Finally, we can rescale the latent variables to have any Gaussian distribution: For example: We can simply transform the parameters and then still sample new valid visible samples from those: Nota that \sigma^2 is unchanged from before. In Python we can do the rescaling: and then repeat the sampling with the new weights & mean with the same function from before: Again, the samples look like they could come from the original data distribution: Final thoughts We’ve shown how probabilistic PCA is derived from a graphical model. 
This Gaussian graphical model gives us a generative perspective on PCA, letting us draw new data samples. The latent distribution must be Gaussian, but can be any Gaussian — we can simply rescale the parameters. For non-Gaussian latent distributions, see e.g. independent component analysis (ICA). In future articles, I'll try to extend this to Bayesian PCA, and to the connection with factor analysis and general Gaussian graphical models. You can find the complete code on GitHub here.
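The article's own snippets were embedded separately and do not survive in this text, so below is a hedged numpy reconstruction of the two core steps described above: the closed-form maximum-likelihood fit (taking R = I) and the generative sampling. The variable names and the toy dataset are mine, not the author's; the linked GitHub repository has the original code.

import numpy as np

def fit_ppca(X, q):
    """Maximum-likelihood probabilistic PCA (Tipping & Bishop), taking R = I.

    X: (n, d) data matrix; q: number of latent dimensions.
    Returns W (d, q), mu (d,), sigma2 (scalar).
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                         # data covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)                # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort high -> low
    sigma2 = eigvals[q:].mean() if q < d else 0.0       # average of the discarded variance
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return W, mu, sigma2

def sample_visible(W, mu, sigma2, n_samples, seed=0):
    """Generative mode: z ~ N(0, I), then x ~ N(W z + mu, sigma2 * I)."""
    rng = np.random.default_rng(seed)
    d, q = W.shape
    z = rng.standard_normal((n_samples, q))
    noise = np.sqrt(sigma2) * rng.standard_normal((n_samples, d))
    return z @ W.T + mu + noise

# Toy 2D data with one dominant direction
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 1)) @ np.array([[2.0, 1.0]]) + 0.3 * rng.standard_normal((500, 2))

W, mu, sigma2 = fit_ppca(X, q=1)
print(W, mu, sigma2)
print(sample_visible(W, mu, sigma2, n_samples=5))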
https://medium.com/practical-coding/the-simplest-generative-model-you-probably-missed-c840d68b704
['Oliver K. Ernst']
2020-12-26 22:02:21.430000+00:00
['Programming', 'Coding', 'Artificial Intelligence', 'Python', 'Machine Learning']
5 Geniuses Who Were a Little Mad
Not everyone is a genius. True genius comes around infrequently and often unnoticed but every so often people come around with a gift and take the world by storm. No, not everyone is a genius, and upon further examination, it does not seem like many people would want to be geniuses in the first place. Genius throughout history often comes hand in hand with loneliness, delusion, and madness. In fact, the great philosopher and genius himself Aristotle said: “No great mind has ever existed without a touch of madness.” That seems to be very much the case. Here are five examples of true and proven geniuses who influenced society in major ways and the telltale signs of madness that touched each of them. 1. Nikola Tesla Photograph of Telsa sitting by one of his coils. (Dickenson V. Alley / Public domain) What they’re known for In the title of his book about Nikola Tesla, author Robert Lomas pegs Tesla’s contributions to modern science perfectly by dubbing him “the man who invented the 20th century”. Nikola Tesla’s scientific work was extensive. He was involved in the famous Tesla coils, x-ray, radar, wireless radio, and more. His most well-known contribution to modern society was his championing of Alternating Current (AC) power which is still the global standard for the transmission of electricity to this day. Tesla’s mind worked like none other. His ideas were wild and creative. He believed that everything could one day be powered from the background energy found in space. He worked on ideas revolving around time travel and teleportation, things that would lead to the eventual field of quantum physics. He was a real and utter genius who also was tinged by a touch of madness. Evidence of madness In the 1930s Tesla saw the threat that aerial bombardment posed to large cities such as New York and London and began to feverishly work on the problem. Using his extensive knowledge of physics he claimed to have invented a sort of death ray which could wipe out enemy armies and sweep the skies clean of enemy aircraft. Tesla described the invention, which was called Teleforce, writing: “My apparatus projects particles which may be relatively large or of microscopic dimensions, enabling us to convey to a small area at a great distance trillions of times more energy than is possible with rays of any kind. Many thousands of horsepower can thus be transmitted by a stream thinner than a hair, so that nothing can resist.” He took this idea with him to the grave and never wavered in his claim that he had invented such machines and even became irate at the idea that his invention was any sort of “ray.” Tesla was adamant that his technology was not a ray because rays dissipated far too quickly to be the correct term for what he had created. He also claimed that he could speak to the pigeons in New York and became obsessed with the number three in his later years. 2. Sir Isaac Newton The painting “Newton” by William Blake (Public domain) What they’re known for Sir Isaac Newton is best known for his contributions to physics. He crafted the “Three Laws of Motion” which are still widely taught today and are the basics of nearly all modern physical calculations and assumptions. Newton’s theories are still tested and proven frequently. His work is so widely accepted that the term Newtonian Physics is commonplace. He also put in extensive work in the formulation of calculus as well as optics. 
His work with prisms helped to refine and craft the earliest refracting telescopes and gave his peers a better understanding of the way light worked. Evidence of madness In his study of light, Newton locked himself into a darkened room until his eyes had adjusted to the gloom. With his pupils fully dilated, he opened a window and allowed a beam of sunlight to enter and reflect off of a mirror. Newton stood and stared at the bright reflection of the sun until his eyes began to degrade. It was his attempt at trying to look directly at the sun. Naturally, it failed. He stared through the pain nearly blinding himself. His eyes hurt so badly after his attempt to stare at the sun that he locked himself away for three days. Newton was said to have often been taken by flights of manic fancy that would push him to do things such as staring at the sun, even if it might blind him. After his death, it became clear that Newton was obsessed with a great many things. One of the main pursuits of his during life was an obsession with alchemy. Unlike optics, which absorbed him for a brief time during his middle years, it appears that Newton was obsessed with alchemy for decades. He was so paranoid about his work that many of his papers were hidden away and never read by anyone but himself. Many were even written in code. Newton believed that he could unlock the secrets of alchemy and turn base metals into gold and silver. This obsession coincided with odd beliefs about religion and the occult which led him to believe that God had been leaving secret messages in ancient religious works and even tried to calculate the date for the end of days. He penned it to be in the year 2060. 3. Francis Crick Portrait of Francis Crick by Marc Lieberman (CC BY 2.5) What they’re known for Francis Crick was a luminary in the field of microbiology and molecular biology. His work with James Watson gave us our understanding of the structure of DNA including the famous double-helix shape. Crick paved the way for much of our modern understanding of DNA, RNA, and other molecular level proteins that make up life. He was also a noted neuroscientist in his later years. Crick’s contributions to modern science, especially medicine and molecular biology cannot be understated. His discoveries serve as a bedrock for most people’s basic beliefs about life, DNA, and their interactions. However, his obsession with life led him down some odd roads. Evidence of madness Beginning in the 1970s, Francis Crick would get an idea in his head that would permeate the rest of his life and his career. The idea goes by the name of Directed Panspermia. This is the idea that the so-called “building blocks of life” did not occur naturally in Earth’s ancient past but were rather seeded here by intelligent spacefaring aliens. His study into the complexities of DNA and the origins of life led him to believe that these things were so complex and uncommon that they could have only been created and placed here by extraterrestrials. In the early 1970s, this was proposed simply as an alternative idea to the prevailing theories about how the original DNA strands might have been forged in Earth’s violent geologic past. However, he continued to hold these beliefs for many decades and never came back around to the accepted way of thinking. It was also discovered in his private letters that Crick was a proponent of eugenics and expressed the belief that the only thing standing between humanity and a full-blown eugenics movement was the existence of religion in society. 
He even mentions madness in his autobiographical work What Mad Pursuit and focused much on theoretical neuroscience in his private time. 4. Bobby Fischer Bobby Fischer playing chess in 1960. (CC BY-SA 3.0) What they’re known for Bobby Fischer is noted as being one of the greatest chess players of all time — if not the greatest. During his time as an active player on the international circuit, he defeated nearly every one of his contemporaries, oftentimes handily. He was a key figure during the Cold War and played numerous games in Europe against Soviet and communist players in a perceived battle of wills between East and West. He is the only chess player to have a perfect win rate in the United States Championship, scoring eleven wins and no draws. Fischer wrote extensively on the game of chess and is one of the last chess grandmasters to be so well known by the public. He was one of the first people to play chess against a computer and for a long time having a computer beat or outplay Bobby Fischer was a staple in the development of game-playing AI (Artificial Intelligence). When Fischer won the 1972 World Chess Championship, he became the first American to do so and broke a quarter-century of dominance by Russian speaking chess players. His victory was sensational in the United States and seen as a key moment of the Cold War. Evidence of madness After his meteoric rise to fame following the 1972 World Chess Championship, Fischer cracked. Upon returning to the United States following his victory in Europe over some of the greatest chess minds in the world, the country was ready to welcome him as a hero. Bobby Fischer day was declared and he was offered millions of dollars in sponsorship opportunities and appearance fees. He declined them all. Instead of ushering in a new era of American chess dominance led by Fischer, he disappeared from the public eye. In 1975 he forfeited his world title when he refused to show up for the match. When asked to attend the event, Fischer wrote back a list of strenuous demands for the match which read as follows: “The match continues until one player wins 10 games, draws not counting. No limit to the total number of games played. In case of a 9–9 score, the champion (Fischer) retains the title, and the prize fund is split equally.” When his demands were not met, he refused to show and the title was forfeited. He wouldn’t play chess again publicly or professionally for two decades. He bounced around from place to place playing chess masters in private. He would never emerge from his self-imposed exile in any meaningful way again. One of his contemporaries, Reuben Fine, said of Fischer: “Some of Bobby’s behavior is so strange, unpredictable, odd and bizarre that even his most ardent apologists have had a hard time explaining what makes him tick.” For the remainder of his life, Bobby Fischer remained a mythical legend in the chess community but an odd pariah for the rest of the world. He was arrested numerous times, made heinous antisemitic comments, and praised the September 11th attacks in an interview conducted in the Philippines. 5. Pythagoras Illustration of Pythagoras teaching philosophy to women (The Story of the Greatest Nations / Public domain) What they’re known for Pythagoras gave us the well-known Pythagorean Theorem which gives us a way to deduce the lengths of the sides of a triangle. This is the most enduring part of Pythagoras’s legacy but he had an extensive body of work in the fields of geometry and astronomy. 
Pythagoras was one of the first people to ever propose that the Earth was a sphere rather than a flat disk. He also deduced that the Morning Star and the Evening Star were the same celestial object, Venus. These propositions, including his famous theorem, were being made 2500 years ago. He is one of the most well-read and well studied of the famous Greek philosophers. His influence is still felt today in the modern classroom but his ideas and his figure have been present in the world since his existence. Evidence of madness Pythagoras was obsessed with numerology. Aristotle once said of Pythagoras that he only studied mathematics and geometry in the pursuit of mystical goals. Pythagoras believed that everything was made up of numbers, from music to the origins of the universe. His belief in numerology was so strong and so fiery that it spawned an entirely new religion —Pythagoreanism. He invited his dedicated followers to Croton where they were invited to begin a new phase of life where the study of mathematics and mysticism existed hand in hand. Adherents were encouraged to study numerology but were also given strict guidelines about how they should live. Their lives were carefully regulated and there were rules about what you could wear, what roads you could take, what you should eat, and more. Beans were forbidden. New recruits could not meet Pythagoras until they had been involved in the “school” for at least five years. Today, Pythagoreanism has all of the telltale signs of a cult. Some compared it to a strange monastery. His contemporaries were derisive of it. Aristotle wrote in Metaphysics: “The so-called Pythagoreans, who were the first to take up mathematics, not only advanced this subject, but saturated with it, they fancied that the principles of mathematics were the principles of all things.” So, while Pythagoras spread knowledge and love of mathematics that exists to this day, he also used his position to found a sort of mystical math cult in ancient Greece.
https://medium.com/history-of-yesterday/5-geniuses-who-were-a-little-mad-bbfd07fd0715
['Grant Piper']
2020-12-15 06:02:44.453000+00:00
['Mind', 'Society', 'People', 'History', 'True Story']
Meditation and Software Engineering
Meditation to me means connecting to myself and carrying the feeling of connectedness during the day. I used to be more anxious and a little lost. Meditation brought clarity and self-reference into my life. With them I’m much more aware, present and creative. I chose a career in software because complex problem solving couples well with the standpoint of the observer often experienced in meditation. In turn I’m able to meditate deeper and I’m even more present in my daily life. What is meditation? Meditation can mean many things. From zen retreats to walking in the woods, gardening, Headspace app, mindfulness. Or retreating into a cave in a remote backcountry. By meditation, I mean sitting still with your back straight, eyes closed and internalizing your consciousness. Meditation is not relaxation. it is about raising the voltage of consciousness. It is about becoming more ourselves. Relaxation is usually a pleasant side effect. Meditation as a spiritual path is a separate topic. Here I’ll focus on practical application of a meditation practice. Meditation is good because… Our lives have become fast and information rich. We’re inseparable from smartphones, communication media, errands, family needs, work requirements, emergencies, to do lists, news. A constant stream of information flooding our mind, creating imprints that cause persistent thoughts even when we’re not doing anything. I used to compulsively check my phone every few minutes, had a verbose and judgmental internal commentator that wouldn’t stop. I wasn’t able to really focus. A constant background noise. When I would write a test plan for a new feature I’d check my phone at least fifteen times, go to the bathroom twice and catch myself wandering away from the topic at least five times. And the pandemic and shelter in place. Meditation helped me to slow down my thought processes to the point where I could separate myself (the observer) from the thoughts. Now I can literally see my thoughts in front of me coming in and coming out of my consciousness. The purpose is to strengthen the observer, to give yourself a solid place to rest on. When we are able to see the thoughts as separate to who we are, then we can choose to follow them or not. We’re in control. Otherwise it’s the opposite, a stream of thoughts that controls us. And it’s not that we take this slow meditation practice to work and meditate in front of our screens. Meditating regularly at home will strengthen the perception of the inner observer to the point that we will become better and better at separating ourselves from the thoughts. Eventually we will be able to speed up the process and apply the same principle in the fast paced work environment. For example, you are doing some deep work like coding or reviewing a PR. A thought about needing to go to an important meeting later in the afternoon appears. You may start becoming a bit anxious, asking yourself have you prepared, finding yourself chaining that thought to the list of things you need to prepare, which reminds you of a shopping list for the kids birthday celebration which leads to a thought about an appointment at the vet you forgot to reschedule. And so on. The domino effect takes place. You could say that is the ability to focus. To bring yourself back to the task, fighting distractions. Meditation helps a lot here. It is not the same as fighting thoughts, it’s deeper than that. Observing thoughts and de-identifying ourselves from them is a form of focus. But it has more profound effects. 
When you meditate you are in a deeper state of consciousness, there’s more depth than just a mental focus. That depth combined with becoming the observer is something that enriches us as human beings. You get to focus better plus you become more yourself. With practice we become aware not just of immediate thoughts, but also of more subconscious ones. Some that we didn’t even know about or have completely forgot about. The imprints of all sorts, memories, habits that we learned when growing up.Those deep behavioral patterns and conditioning from our parents and culture are a whole new level of meditation that can result in significant positive life changes. I prefer to meditate first thing in the morning while the mind is impressionable. I usually go for 45 minutes, but if you are just beginning simply sit in silence for 10 minutes. Letting the thoughts come and go and resting our awareness on the qualities of the inner space will create a separation between the sense of us and the distracting thoughts (a good resource: Meditation, Portal to Inner Worlds). There is also an external distraction management. We can limit the distracting factors such as turn off phone notifications, enable do not disturb mode, put headphones on or reevaluate how you use your smartphone. But that’s not about meditation. How does this relate to software engineers ? You will be more present. More in control of what you decide to distract you. You will become better at letting distractions pass without pulling you off your focus. Your attention span and clarity of thinking will improve. When you hit a hard computing problem, meditation can bring unexpected hints to solutions. The method of “sleeping on the problem and waking up with a solution” will become easier and more intentional because meditation is a similar process. Another technique that could be used in engineering is thinking in a packed form. Packed thinking is more a silent knowing than active processing and interpreting in your head. For example when you look at the keyboard don’t say “This is a keyboard.” in your head. Instead hold a silent knowing that that is a keyboard. You simply know it is a keyboard. Like the essence of the sentence. Stay with the essence, no need to unpack it. The knowing should be instant. Then move on. It comes handy when you hold multiple pieces of abstractions at the same time, carefully architecting the most efficient design solution for an algorithm. You need silence and it helps if you are in the zone to be able to do that. It’s cognitively intense. In that sense programming can be a dynamic form of meditation. Or perhaps a technique that may deepen your daily meditation. You know that building something you feel inspired about can be addictive, creative, beautiful, poetic. There is beauty in moving pieces of code around. If you think about it, who hasn’t spent hours or days behind the computer working on their app? Why is it so addictive? Part of the answer is a higher mode of thinking. A silent knowing. Things are just different there. Similarly in meditation. Here is how… Start with something small. Sit in silence for 15 minutes every morning and watch your breath. Attend a high quality meditation workshop. Practice regularly, daily. Best in the morning when you are most impressionable and can set yourself for the day. Find a community where you can ask questions and have guidance through the process. Regularity is key. Practice, practice, practice. 
It is more efficient to stick with one technique and push it than to keep switching between them. If sitting still is not an option, then have moments of self-reflection. Be with yourself. Take a walk, stay silent. Be with your thoughts. Conclusion Regular meditation practice is an efficient way to gain control over what we allow to distract us. It brings clarity of mind and trains us to be present in what we do. We don't fight thoughts and distractions; we get better at detaching from them, which increases focus and productivity. Packed thinking, or silent knowing, is a technique that lets you think at a higher abstraction level, where you operate at the level of silent knowing instead of spelling out thoughts in your head. It allows you to think faster and hold more than one abstraction at the same time; it brings more inspiration and is overall more satisfying. Don't think of meditation as mere stress relief. It's a tool to become more yourself, gain depth, and be more present, productive, and happier. But for that you need to practice. Start with 10 or 20 minutes per day. Sit in silence, observe your breath, let thoughts come and go. Best to attend a high-quality workshop to get you started. And most importantly — have fun!
https://medium.com/engineers-optimizely/meditation-and-software-engineering-2e438eb35fbb
['Matjaz Pirnovar']
2020-10-30 20:32:45.494000+00:00
['Engineering', 'Software Engineering', 'Consciousness', 'Software Development', 'Meditation']
Handling Spaces in Column Names During Kinesis Firehose JSON-Parquet Data Transformation
Handling Spaces in Column Names During Kinesis Firehose JSON-Parquet Data Transformation Engineering@ZenOfAI Follow Jun 24 · 7 min read Parquet is an open source file format for Hadoop. Parquet stores nested data structures in a flat columnar format. Compared to a traditional approach where data is stored in a row-oriented approach, parquet is more efficient in terms of storage and performance. A common industry standard is to use parquet files in S3 to query with Athena. As parquet format is best suited and gives optimized performance compared to other data storage formats. You can read more about it here. However Parquet doesn’t support spaces in column names, this will be an issue if you are using a Kinesis Firehose to stream log data. Typically logs are in JSON format. A common practise is to transform these JSON logs into parquet while writing to S3 so as to query log data in Athena. Json keys are mapped as column names, If your json keys have spaces in it, the transformation results in a failure. You can use the transformation lambda to handle those spaces (replace with underscores) in the keys. We shall look at a setup, where cloudwatch logs are directly streamed to kinesis firehose (follow Example 3, point 12 to create subscription). Here, the log format can be a simple json or the cloudwatch embedded metric format. Creating the transformation lambda: Create a lambda named ‘cloudwatch_logs_processor_python’ with the following code, set the runtime environment to Python 2.7, timeout at 5min 20sec. Python 2.7 because the following code is just an improved version of a Lambda blueprint. Note: This lambda will handle only data sent by Cloudwatch logs to firehose only, if the source is different you might want to tweak the code a bit. Spaces in column names will be replaced with underscores. Give the comments in code a read to understand the functionality. lambda_function.py """ For processing data sent to Firehose by Cloudwatch Logs subscription filters. Cloudwatch Logs sends to Firehose records that look like this: { "messageType": "DATA_MESSAGE", "owner": "123456789012", "logGroup": "log_group_name", "logStream": "log_stream_name", "subscriptionFilters": [ "subscription_filter_name" ], "logEvents": [ { "id": "01234567890123456789012345678901234567890123456789012345", "timestamp": 1510109208016, "message": "log message 1" }, { "id": "01234567890123456789012345678901234567890123456789012345", "timestamp": 1510109208017, "message": "log message 2" } ... ] } The data is additionally compressed with GZIP. The code below will: 1) Gunzip the data 2) Parse the json 3) Set the result to ProcessingFailed for any record whose messageType is not DATA_MESSAGE, thus redirecting them to the processing error output. Such records do not contain any log events. You can modify the code to set the result to Dropped instead to get rid of these records completely. 4) For records whose messageType is DATA_MESSAGE, extract the individual log events from the logEvents field, and pass each one to the transformLogEvent method. You can modify the transformLogEvent method to perform custom transformations on the log events. 5) Concatenate the result from (4) together and set the result as the data of the record returned to Firehose. Note that this step will not add any delimiters. Delimiters should be appended by the logic within the transformLogEvent method. 6) Any additional records which exceed 6MB will be re-ingested back into Firehose. 
""" import boto3 import StringIO import gzip import base64 import json def transform_metrics(cloudwatchmetrics): newcloudwatchmetricslist = [] for index, cloudwatchmetric in enumerate(cloudwatchmetrics): newcloudwatchmetric = { "Namespace": cloudwatchmetric['Namespace'], "Dimensions": [], "Metrics": [] } # print(cloudwatchmetric['Namespace']) for key,value in cloudwatchmetric.items(): if key == 'Dimensions': for eachlist in value: newlist = [] for eachDimension in eachlist: newlist.append(eachDimension.replace(' ', '_')) newcloudwatchmetric['Dimensions'].append(newlist) if key == 'Metrics': newmetricslist = [] for eachMetric in value: newmetric = {} newmetric['Name'] = eachMetric['Name'].replace(' ','_') newmetric['Unit'] = eachMetric['Unit'] newmetricslist.append(newmetric) newcloudwatchmetric['Metrics'] = newmetricslist newcloudwatchmetricslist.append(newcloudwatchmetric) return newcloudwatchmetricslist def transform_record(log_json): log_transformed = {} for k, v in log_json.items(): if isinstance(v, dict): v = transform_record(v) log_transformed[k.replace(' ', '_')] = v if '_aws' in log_transformed: transformedmetrics = transform_metrics(log_transformed['_aws']['CloudWatchMetrics']) log_transformed['_aws']['CloudWatchMetrics'] = transformedmetrics return log_transformed def transformLogEvent(log_event): """Transform each log event. The default implementation below just extracts the message and appends a newline to it. Args: log_event (dict): The original log event. Structure is {"id": str, "timestamp": long, "message": str} Returns: str: The transformed log event. """ log_str = log_event['message'].encode('utf-8') log_json = json.loads(log_str) log_transformed = json.dumps(transform_record(log_json)) log_unicode = unicode(log_transformed, "utf-8") return log_unicode + " " def processRecords(records): for r in records: data = base64.b64decode(r['data']) striodata = StringIO.StringIO(data) with gzip.GzipFile(fileobj=striodata, mode='r') as f: data = json.loads(f.read()) recId = r['recordId'] """ CONTROL_MESSAGE are sent by CWL to check if the subscription is reachable. They do not contain actual data. 
""" if data['messageType'] == 'CONTROL_MESSAGE': yield { 'result': 'Dropped', 'recordId': recId } elif data['messageType'] == 'DATA_MESSAGE': data = ''.join([transformLogEvent(e) for e in data['logEvents']]) data = base64.b64encode(data) yield { 'data': data, 'result': 'Ok', 'recordId': recId } else: yield { 'result': 'ProcessingFailed', 'recordId': recId } def putRecordsToFirehoseStream(streamName, records, client, attemptsMade, maxAttempts): failedRecords = [] codes = [] errMsg = '' # if put_record_batch throws for whatever reason, response['xx'] will error out, adding a check for a valid # response will prevent this response = None try: response = client.put_record_batch(DeliveryStreamName=streamName, Records=records) except Exception as e: failedRecords = records errMsg = str(e) # if there are no failedRecords (put_record_batch succeeded), iterate over the response to gather results if not failedRecords and response and response['FailedPutCount'] > 0: for idx, res in enumerate(response['RequestResponses']): # (if the result does not have a key 'ErrorCode' OR if it does and is empty) => we do not need to re-ingest if 'ErrorCode' not in res or not res['ErrorCode']: continue codes.append(res['ErrorCode']) failedRecords.append(records[idx]) errMsg = 'Individual error codes: ' + ','.join(codes) if len(failedRecords) > 0: if attemptsMade + 1 < maxAttempts: print('Some records failed while calling PutRecordBatch to Firehose stream, retrying. %s' % (errMsg)) putRecordsToFirehoseStream(streamName, failedRecords, client, attemptsMade + 1, maxAttempts) else: raise RuntimeError('Could not put records after %s attempts. %s' % (str(maxAttempts), errMsg)) def putRecordsToKinesisStream(streamName, records, client, attemptsMade, maxAttempts): failedRecords = [] codes = [] errMsg = '' # if put_records throws for whatever reason, response['xx'] will error out, adding a check for a valid # response will prevent this response = None try: response = client.put_records(StreamName=streamName, Records=records) except Exception as e: failedRecords = records errMsg = str(e) # if there are no failedRecords (put_record_batch succeeded), iterate over the response to gather results if not failedRecords and response and response['FailedRecordCount'] > 0: for idx, res in enumerate(response['Records']): # (if the result does not have a key 'ErrorCode' OR if it does and is empty) => we do not need to re-ingest if 'ErrorCode' not in res or not res['ErrorCode']: continue codes.append(res['ErrorCode']) failedRecords.append(records[idx]) errMsg = 'Individual error codes: ' + ','.join(codes) if len(failedRecords) > 0: if attemptsMade + 1 < maxAttempts: print('Some records failed while calling PutRecords to Kinesis stream, retrying. %s' % (errMsg)) putRecordsToKinesisStream(streamName, failedRecords, client, attemptsMade + 1, maxAttempts) else: raise RuntimeError('Could not put records after %s attempts. 
%s' % (str(maxAttempts), errMsg)) def createReingestionRecord(isSas, originalRecord): if isSas: return {'data': base64.b64decode(originalRecord['data']), 'partitionKey': originalRecord['kinesisRecordMetadata']['partitionKey']} else: return {'data': base64.b64decode(originalRecord['data'])} def getReingestionRecord(isSas, reIngestionRecord): if isSas: return {'Data': reIngestionRecord['data'], 'PartitionKey': reIngestionRecord['partitionKey']} else: return {'Data': reIngestionRecord['data']} def handler(event, context): isSas = 'sourceKinesisStreamArn' in event streamARN = event['sourceKinesisStreamArn'] if isSas else event['deliveryStreamArn'] region = streamARN.split(':')[3] streamName = streamARN.split('/')[1] records = list(processRecords(event['records'])) projectedSize = 0 dataByRecordId = {rec['recordId']: createReingestionRecord(isSas, rec) for rec in event['records']} putRecordBatches = [] recordsToReingest = [] totalRecordsToBeReingested = 0 for idx, rec in enumerate(records): if rec['result'] != 'Ok': continue projectedSize += len(rec['data']) + len(rec['recordId']) # 6000000 instead of 6291456 to leave ample headroom for the stuff we didn't account for if projectedSize > 6000000: totalRecordsToBeReingested += 1 recordsToReingest.append( getReingestionRecord(isSas, dataByRecordId[rec['recordId']]) ) records[idx]['result'] = 'Dropped' del(records[idx]['data']) # split out the record batches into multiple groups, 500 records at max per group if len(recordsToReingest) == 500: putRecordBatches.append(recordsToReingest) recordsToReingest = [] if len(recordsToReingest) > 0: # add the last batch putRecordBatches.append(recordsToReingest) # iterate and call putRecordBatch for each group recordsReingestedSoFar = 0 if len(putRecordBatches) > 0: client = boto3.client('kinesis', region_name=region) if isSas else boto3.client('firehose', region_name=region) for recordBatch in putRecordBatches: if isSas: putRecordsToKinesisStream(streamName, recordBatch, client, attemptsMade=0, maxAttempts=20) else: putRecordsToFirehoseStream(streamName, recordBatch, client, attemptsMade=0, maxAttempts=20) recordsReingestedSoFar += len(recordBatch) print('Reingested %d/%d records out of %d' % (recordsReingestedSoFar, totalRecordsToBeReingested, len(event['records']))) else: print('No records to be reingested') return {"records": records} Save it. Enable transform source record with AWS Lambda: On your Firehose stream, enable Transform source records with AWS lambda, and replace/use the above-created lambda function. Note: Make sure that your Athena reference table schema has the same column names that will be there after replacing the spaces with underscores in column names of the JSON log. Or else the transformed parquet files will have null value columns. After setting up, it would like this: To check if the data is transformed properly, download the transformed parquet file and use this online parquet viewer. If the data is transformed without any loss, build a table in Athena using a crawler, load partitions, and query your parquet log data. I hope it was helpful. Thank-you! This story is authored by Koushik. He is a software engineer specializing in AWS Cloud Services.
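Before wiring the lambda into Firehose, it can help to sanity-check the key-renaming idea locally. The standalone sketch below mirrors what transform_record does for plain nested JSON; it is a simplified illustration rather than the article's full lambda, and the sample payload is invented.

import json

def replace_spaces_in_keys(obj):
    """Recursively rename dict keys, replacing spaces with underscores.

    Mirrors the idea of the article's transform_record so the resulting JSON
    can be converted to Parquet (which rejects spaces in column names).
    """
    if isinstance(obj, dict):
        return {k.replace(' ', '_'): replace_spaces_in_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [replace_spaces_in_keys(item) for item in obj]
    return obj

# Invented sample log record with spaces in its keys
sample = {
    "request id": "abc-123",
    "response time ms": 42,
    "user agent": {"browser name": "firefox", "major version": 84},
    "tags": [{"tag name": "checkout"}],
}

print(json.dumps(replace_spaces_in_keys(sample), indent=2))
# Keys come out as request_id, response_time_ms, user_agent.browser_name, ...

Once the keys are underscored like this, the Firehose record-format conversion to Parquet no longer fails on column names, and the Athena reference table can declare the underscored versions.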
https://medium.com/zenofai/handling-spaces-in-column-names-during-kinesis-firehose-json-parquet-data-transformation-219014070d35
['Engineering Zenofai']
2020-08-04 03:53:24.975000+00:00
['AWS', 'Kinesis', 'Cloud Computing', 'Software Development']
How to Kill a Dragon
Photo by Massimo Negrello on Unsplash My friend Derek has a six year old grandson who came for a visit. They were exploring the basement together when the boy pointed to the door on the left and asked, “What’s in there grandpa?” “Canned foods and supplies for the kitchen,” said Derek. His grandson pointed to the door ahead of them, “What’s in there grandpa?” “Your grandmother’s files,” said Derek. “Nothing too exciting.” He pointed to the door on the right, “What’s in there grandpa?” “I don’t know,” responded Derek, entirely truthfully. “I have no idea. Maybe dragons?” “Dragons!” he said, eyes lighting up. “Get me a broomstick!” “What?” “A broomstick!” After a quick search, Derek returned with a broomstick and handed it to his grandson. “What are you going to do?” asked Derek. “You’re going to open the door,” said his grandson. “And then I’ll run in there and see what happens. Okay, go!” Derek complied and his grandson rushed headlong into the dark room, smashing everything right and left. After a few seconds of commotion, grandpa hit the light switch, hoping that nothing too valuable or dangerous had been shattered. The light revealed his grandson stomping his foot onto the floor with great enthusiasm. “What are you doing?” asked Derek. “I’m killing the dragon, grandpa!” he said. “But I don’t see it,” said Derek. “You’re silly, grandpa,” he said. “Don’t you know how to kill dragons? If you run away they get bigger and bigger and you’ll never escape. But if you charge forward and face them down they get smaller and smaller until they disappear. I just squished it.”
https://eliotpeper.medium.com/how-to-kill-a-dragon-cccbbd705922
['Eliot Peper']
2020-12-18 19:53:24.855000+00:00
['Leadership', 'Philosophy', 'Self', 'Psychology', 'Creativity']
Our onboarding process at Packlink Engineering
New employee onboarding is one of the most critical processes in a company that genuinely cares about its culture and team members. Packlink is no exception; we take it very seriously. Onboarding is an iterative and critical cultural process. What do I mean by "cultural"? Every step in the onboarding process is documented and available to the whole team. Everyone in the organization can challenge it, give feedback, or propose new ideas related to it. Our onboarding tasks are done collaboratively; they involve different profiles and roles that rotate among all team members and touch many different areas. We expect every person who joins the team to need at least one month of adaptation. We want them to get used to our processes and workflows before they are expected to be productive within a team. Everyone in the company understands this, so the process is designed with it in mind. We have three main phases in our onboarding process; we are going to look at each of them in depth in this article, so grab a coffee, get comfortable, and let's start! The pre-onboarding The goal of the pre-onboarding is to prepare everything the new person needs, tools and equipment included, in advance. This phase of the process is critical and involves several tasks that need to be completed by people in different areas inside and outside the Engineering Team. We start the pre-onboarding a month before the incorporation date. When the new team member confirms the start date, our awesome HR department opens an Epic in Jira with the necessary information about the person. Then, all the tasks and subtasks that need to be done for that person's onboarding are created automatically. We automated this with a few lines of a Groovy script that runs in a ScriptRunner listener (a simplified sketch of the idea appears at the end of this article). Although we have automated the creation of all the onboarding tasks, the process is also documented on a Confluence page as a checklist that can be printed if necessary. This is a crucial point: even if a process like this is automated, we need the documentation and an alternative for the unlikely case that the digital tool fails or is no longer available. This practice also makes all the tasks visible, so every member of the team can propose new tasks and even open a Pull Request against the Groovy script. When the tasks are created, the Engineering Managers, Helpdesk, and Human Resources representatives become the stakeholders of the Epic, and each area and department assigns a driver for its tasks. Assign the Engineering Manager and the buddy Two tasks need to be done before all the others: assigning an Engineering Manager and proposing the Buddy role to a person on the team. We will talk about the Buddy role at Packlink in detail later. When the Engineering Manager is assigned, she takes ownership of the whole onboarding process and ensures that it is done correctly and on time. This Engineering Manager will be responsible for the new person's growth in the company, her career, reviews, and retention. The hardware and licenses One of the first tasks is sending the new team member a Google Form with some questions about equipment, preferences, and other personal information. At the time of this writing, we give new members two choices for the laptop they want to use: a MacBook Pro 2019 16" Core i7 with 16GB of RAM and a 256GB SSD, or an equivalent Linux-compatible alternative.
If you are curious about how many people choose MacBooks versus Linux, we can say that the choices are very balanced, so there is no clear winner at the moment. One thing we can say is that nobody has missed a Windows option so far. We have no limit on the number of monitors, and you can choose a Mac keyboard and the weird Magic Mouse. With the laptop and hardware chosen, we register a new JetBrains IntelliJ-based IDE license (IDEA, WebStorm…) for the new software engineer. Security and access to all the tools we use At Packlink, we have a cloud mindset, and practically all of our tools are cloud-based. We use Google Suite, Atlassian tools, GitHub, CircleCI, New Relic, and many other SaaS solutions. We need to create accounts, assign the person to the appropriate security groups, and invite her to the tools. All of these tasks are done almost a week before the incorporation day. The physical location and desk Although we are not fully remote, every member of the team can work remotely eight days per month. The rest of the month we are in our Madrid office. The Engineering Management team decides where people sit when they are in the office. At the office, people work closely with each other, and if the team distribution changes, they change their seats too. When the seat is chosen, the Help Desk staff prepare the desk, taking into account the choices made by the new person. The official presentations On the onboarding day, the new person is introduced to all the Engineering teams and has a short talk with both our CTO and COO. We schedule those small meetings in the Chiefs' calendars in advance. The Tech Leads of the different areas (you know: frontend, backend, operations…) are also notified so they can prepare an onboarding meeting for each area. The 90 days plan The manager prepares the 90-day plan for the new person. The goals of the 90-day plan are to ensure the person: Gets off on the right foot and becomes familiar with Packlink, your team, how you'll contribute, and what you'll deliver in your first 90 days. Has a clear understanding of what you will need to learn and deliver in your first 90 days. Learns how we build software at Packlink and our specific practices and processes. We have a Confluence template that helps the manager prepare and customize the plan according to the person's role. There are a lot of resources about how to build a great 90-day plan for your company's onboarding; we recommend this article and template from Atlassian, which inspired ours. The welcome pack The last step is to prepare our welcome pack with some Packlink gear: a water bottle, a mug, a notebook, pens, an awesome t-shirt, and a convenient bag. As the onboarding process is iterative, we are continually working to improve it with new ideas and items to include in the welcome pack. The onboarding day This is a special day. Everyone in the company receives an email with a picture and an informal introduction of the new person. This person is introduced to a lot of people and teams. The Help Desk team helps the person configure her computer and hardware. The manager gives a brief presentation of some topics that are also documented in a getting-started guide in our Confluence for later reference. For the rest of the day, this person meets more people, talks with our CTO and COO, and starts configuring her local dev environment.
Our platform runs on containers on the Google Cloud Platform, and this makes it easy to have equivalent environments for production, pre-production, and local development. So in the first week, she will have all the microservices and web applications running on her computer thanks to a well-documented guide. This guide is provided by our Operations team and maintained by the whole Engineering team. The first month During her first month, the new person attends onboarding meetings of other departments such as finance, sales, customer service, human resources, marketing, and procurement, as well as the different tech areas. She works on low-priority tasks outside the ongoing projects that other team members are working on. The goal is to let new team members get used to our workflow and tools and get to know the product and architecture before starting on a project or a technical initiative. Many people deploy their first feature to production during their first week. The buddy role During her first month, she is paired with a person on the team who takes on the Buddy role. This role and its benefits are well known in the industry; you can find a great definition of the role on the PMI website. In our context, the buddy role is taken on by one of the engineers. That person is a helpful reference during this phase of onboarding. The buddy helps with her day-to-day questions, provides the guidelines or documentation resources she needs, and does pair programming with her. The manager She will also count on an Engineering Manager to support her growth. During the first month, she will have one or two one-on-one meetings with her manager. In those meetings, the manager asks about her impressions and talks about her expectations and next steps, before starting to define personal and professional objectives for her first year. The manager takes note of that person's impressions and feedback. We love new ideas and feedback from a fresh point of view. What's next? As we said, our onboarding process is iterative, so we are improving it continuously. For this reason, during and after the onboarding, we ask the person for feedback and take it very seriously. When the onboarding finishes, the Engineering Manager works with her to set personal and technical goals aligned with our Career Path. Her manager follows the progress of those goals and her skill improvements during the rest of the year. The manager also provides feedback and the tools the person needs to reach her goals. The person will be part of a team and of initiatives that are aligned with Packlink's goals and values. She will take part in technical decisions, interview processes, and learning initiatives. And she will also take on the Buddy role for new team members.
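The Jira automation mentioned in the pre-onboarding phase is a Groovy ScriptRunner listener that the article does not reproduce. As a rough illustration of the same idea only, here is a sketch that creates an onboarding Epic plus its checklist tasks through Jira's REST API from Python; the project key, task list, and linking fields are assumptions, and a real ScriptRunner listener would use Jira's Groovy/Java APIs instead of HTTP calls.

```python
import requests

JIRA = "https://your-company.atlassian.net"   # hypothetical site
AUTH = ("bot@example.com", "api-token")       # hypothetical credentials

ONBOARDING_TASKS = [
    "Assign Engineering Manager",
    "Propose Buddy",
    "Order laptop and peripherals",
    "Create accounts and security groups",
    "Prepare desk",
    "Schedule CTO/COO introductions",
]

def create_issue(fields):
    resp = requests.post(f"{JIRA}/rest/api/2/issue",
                         json={"fields": fields}, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]

def create_onboarding(new_hire, project_key="ONB"):
    # One Epic per new hire. Note: classic (company-managed) projects may
    # also require an "Epic Name" custom field when creating Epics via API.
    epic_key = create_issue({
        "project": {"key": project_key},
        "summary": f"Onboarding: {new_hire}",
        "issuetype": {"name": "Epic"},
    })
    # One task per checklist item. How tasks are attached to the Epic depends
    # on the Jira setup: newer projects accept a "parent" field, while classic
    # projects use an "Epic Link" custom field instead.
    for summary in ONBOARDING_TASKS:
        create_issue({
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": "Task"},
            "parent": {"key": epic_key},
        })
    return epic_key
```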
https://medium.com/packlinkeng/our-onboarding-process-at-packlink-engineering-108ad0182cd6
['Asier Marqués']
2020-04-06 10:19:59.372000+00:00
['Management', 'Tech Culture', 'Onboarding Process', 'Engineering', 'Startup']
The mysterious monolith — is it art?
The mysterious monolith — is it art? A Weird Monolith is Found in Utah Desert: “It’s probably art” officials say. [And now it’s gone] “Weird Monolith, probably art” That is the headline of a November 24 New York Times story. Update: now that the ‘mysterious monolith’ has been removed, its conventionality has been revealed. And its lack of depth (in both senses of the word) is about the only surprise. It doesn’t change what I wrote, but it is a nicely wrapped up conclusion. My only question is why leave that single steel triangle? Updated Update: this is why writing about current news is problematic…now that a number of these have appeared, disappeared, been replaced by wooden crosses or whatever ‘art criticism’ is next, the monolith(s) are becoming a lot less mysterious. They are now political/social/crowd-sourced/interactive/participatory but they are also, ironically, less ‘art’ (in my view) than when originally discovered. They are now a brawl, and very little of that brawl is artful. They also used to be fun, and funny, but now are simply a headline that won’t go away! Fortunately, this article isn’t just about the monolith(s) so it is still worth a read! To paraphrase Supreme Court Justice Potter Stewart, “I may not know what art is (or even what is art) but I know it when I see it.” [Stewart was talking about defining pornography, but it seems to work when talking about art.] Apparently we don’t know when something is art or not…This is definitely art. So why doesn’t everyone get that part right away? a) People are visually and artistically illiterate b) All they can think of is 2001: A Space Odyssey c) It’s more fun to imagine this is an alien probe deep in the earth’s crust d) They don’t think it’s art because they don’t like it e) All of the above It is an unmistakable nod to Stanley Kubrick’s black monolith, that appeared in seemingly Neolithic times, freaking out the monkeys and turning bones into weapons. So doesn’t that put it firmly in the ‘yes, art’ category? Apparently not quite. 2001: A Space Odyssey image of mysterious monolith surrounded by primates Of all filmmakers, it is incontrovertible (and an unremarkable observation) that Kubrick made art. LACMA’s landmark exhibition (and it is, you know, an ART museum) should have put any doubts to rest, but maybe the mere popularity of the films injects some doubt into the proposition. But why? Art can’t be popular? Films are not art? Inscrutable parts of films are too challenging to be art? The exhibition dedicated a set of rooms for each film he made, and some he didn’t make but planned. One planned but unmade film I discovered at the exhibition was ‘Aryan Papers’ based on Louis Begley’s book “Wartime Lies”, written while the then international lawyer was transforming himself into a respected and awarded author. He wrote the book living in a house I designed in the Hamptons (and still lives there, part time, after 40 years), something he was kind enough to write a short essay about for a book 13 years later. The exhibition, in addition to being a remarkable and remarkably dense retrospective on Kubrick’s work, included an app that I sorely miss now that it has disappeared from my phone. The app included documents, images, letters, and other resources that didn’t just distill the show into an abbreviated format, but actually became its own exhibition, continuing years after the exhibit closed. 
Kubrick’s correspondence alone form an exhibition unto itself, including letters: Kubrick just checking… Asking IBM for permission to use their brand as part of the invented computers. Not noted, after they declined, is the myth that HAL, the eventual name for the computer, is simply the letters in the alphabet that precede (H) I (A) B (L) M . (H) (A) (L) . Asking a studio for some monstrous Mitchell BNC cameras ‘for sentimental reasons’ (when he actually used them to fit the huge Zeiss Planar lenses commissioned by NASA needed to shoot Barry Lyndon scenes in candlelight.) Saul Bass sucking up to Kubrick praising 2001. Maybe that’s a bit harsh, but Saul won’t be contradicting me, so I’m going with it. Plus such gems like pics of the typewriter from The Shining, with an ‘all work and no play makes Jack a dull boy’ sheet still in the platen. And the light blue dresses the twin sisters wore in those terrifying hotel interior scenes. But enough reminiscing. [as an aside, my wife grew up with her intellectually and developmentally disabled brother Richard. One day in 1968 Richard, who watched a LOT of TV, kept announcing that Kennedy had been shot. Carin told him to “stop reminiscing”…until she realized he was reporting on RFK’s assassination, not reminiscing about JFK’s!] It is natural to make the connection to 2001: A Space Odyssey, but bizarre to discount the stainless steel pylon as not definitely art, but just ‘probably art’. It is art. Period. It’s hard to get a detailed view of the Utah piece, but what we know is that it is, in form, an extruded triangle, not a four-sided but a three-sided volume, like a prism on end (a triangular prism, to be exact). The material and shape reference the St. Louis Arch as much as Stanley Kubrick. One edge is pointing, like an arrow if you looked at it from above, to a vertical crevice in the massive rock formation behind it. It appears to be made of sheets riveted to a frame or bolted to the other panels. It is about 10’ high and its perfectly flat surface (no visible ‘oilcanning’ or buckling of the surface) belies a very carefully considered design and fabrication. It also fits perfectly, as some of the modern primates climbing around it note (not knowing how deep it goes!) into a pocket cut in the stone below, and that reinforces the idea that this was no amateur artist. John McCracken, undated photo, with mini-monolith David Zwirner, whose gallery represents John McCracken (who died in 2011) and believes this is definitely McCracken’s work. It’s not just the form, but informed by the artist’s belief in UFOs, time travel and extraterrestrial beings. That pretty much sealed it for me, but some in Zwirner’s gallery believe it is an homage to McCracken by an admiring artist. McCracken work in polished metal [n.b. a McCracken exhibition is scheduled for Spring 2021] Lt. Nick Street, a spokesman for the Department of Public Safety in Utah said the authorities were confident that “it’s somebody’s art installation, or an attempt at that” notes The New York Times. So, now Department of Public safety officials are art critics? Is there anything that Americans won’t opine on, despite their obvious ignorance on a topic? A friend, a design writer, likes nothing more than to discuss movies she hasn’t seen, and we are all in on the joke. But this is something else. 
“A resonance deflector”, a “satellite beacon” and an object “dropped by aliens” are just some of the alternative facts; as with other self-evident truths there is no lack of contrarians to firmly deny reality. A majority of Republicans believe that Trump won the 2020 election, while less than 1/3 of all Americans believe in evolution. The percentage of Americans accepting Darwin decreases the more one attends church, until only 1% of weekly church attendees believe in evolution. Evolution. A theory as sound as Newton’s laws of motion (and as subject to continual refinement) is dismissed by 2/3 of Americans while accepted by 95%-99.9% of scientists (depending on the survey). And the list of denials of reality is endless; the Holocaust, AIDS, Climate Change, GMO’s, Vaccines, Covid-19. The list of false beliefs is equally bottomless; QAnon, Big Foot, Pizzagate, Flat Earth, Chem Trails, 5G, Birtherism, et. al. Why? Much has been written on the precise mechanisms that drive the acceptance of patently false information. Confirmation bias, belief perseverance, tribal identity and algorithms that tend to push online viewers to more and more extreme inputs, all conspire to reinforce a view, no matter its relationship to reality. Pew Research graphic on international wealth vs prayer frequency The US is an outlier in the global trend that the higher the per capita GDP the lower the frequency of daily prayer; we are alone in both wealth and high incidence of religious devotion. The US population also embraces, to an astonishing degree, the notion of American Individualism; the belief that the individual is the primary locus of agency. Responsibility to a larger collective, and the collective’s responsibility to the individual, is less evident in the American psyche. I live, these days, in a small upstate New York town that is pretty evenly divided politically, and elections are routinely won and lost by fewer than 100 votes. One election in a nearby town was won by 1 vote. If anything reinforces the primacy of the individual an election won by a single vote would be it! Our town has just tipped last year, for the very first time, to a Democratic party dominated town board. On the way out of office the departing Republicans passed, in secret, a 60% tax cut, crippling the tiny town’s finances. At a contentious town board meeting my neighbor, from across our small road, stood up to ask why everyone was so upset about a tax cut! Shouldn’t we all be happy to have a bit more money in our pockets? The town taxes are so minuscule that my ‘savings’ amounted to about $185. That is 60% of my total town tax (and just so you don’t gasp, other property taxes amount to something closer to $10,000). An individual (and not a poor one by any means) was so dominated by his own greed that even the $185 that he had paid for years was too much to contribute to the common good. In a town that for most of the last 225 years has banded together to build schools, roads, recreational centers, and parks, at the direct expense of its citizens, has now devolved to the point where it takes 10 years to privately raise the money to build a new library. Town residents were once happy to assess themselves for the good of the town, but now even funding the town’s minimal services is less important than a few extra dollars of one’s own.
https://uxdesign.cc/the-mysterious-monolith-is-it-art-or-design-c71526c92dac
['James Biber']
2020-12-09 17:29:20.169000+00:00
['Ts', 'Design', 'Art', 'Creativity', 'UX']
4 Top Programming Languages to Learn in 2021
Technology has made our lives easier, with applications across many different professional fields. As programming computers became a mainstream skill, programming languages with powerful features and rich functionality were born. With so many programming languages available to software developers, picking the right one for a job can be tricky. You need to consider the simplicity of each one and its demand in the market, among other factors, which makes the choice even more confusing for beginners, who may soon realize that their choices and expectations do not align. Practical knowledge of more than one language has helped data scientists, senior developers, and driverless-vehicle engineers excel in their professions. StackOverflow, one of the largest online developer communities, shed some light on what to expect in the coming year with its 2019 survey of the most sought-after programming languages: You can see that Python tops the list and may remain there for a long time due to its versatility and the move towards open-source software. In this article, we will look at the four (4) best programming languages to learn in 2020. #1 Python Most people who intend to focus on server-side programming go with Python because of the many libraries that make it useful for writing scripts and plug-ins. The simplicity of code written in Python makes it easy to read, which is why it is often recommended for beginners who may not yet be comfortable with the more complex syntax seen in other languages. After a few lessons you can begin to write simple programs that run without errors. Python is open-source, meaning it is free to use, and it is an object-oriented language. Support for asynchronous code is another important benefit of Python: while one task waits on something slow, such as network or disk I/O, other tasks can keep making progress instead of blocking the whole program (a short example appears at the end of this article). Highlights ● Open-source. ● Used in many fields, including artificial intelligence and machine learning, as well as desktop and web applications. ● Access to a huge number of modules. ● Object-oriented language. ● Support for asynchronous code. ● Cross-platform solutions. Big tech companies have chosen Python as their primary backend programming language as they explore new possibilities in areas such as data analysis, robotics, and more. Although debugging can take effort, you can write automated tests for your code; running your tests alongside your main code saves time while developing programs that give the desired output. The likes of Instagram, Google, and Netflix use Python to develop cross-platform solutions. More such uses will appear in 2020, which is an advantage both for experienced Python developers and for those who want to learn the language. #2 JavaScript JavaScript is a popular language among web developers and has given rise to several frameworks that simplify your code. It supports data validation on the client side, which helps catch bad input before it reaches your web application. Because JavaScript is not compiled ahead of time and runs within the browser, it feels very fast. Improperly written programs can, however, be exploited by cybercriminals who inject malicious code that runs in a victim's browser.
To protect themselves from such attacks, some users turn JavaScript off in their browsers. Highlights ● Regular updates. ● Object-oriented programming. ● Access to several frameworks. ● Used for both server-side and client-side programming. ● Data validation functionality. ● Compatible with several programming languages. Following the release of ECMAScript 6 and the popularity of frameworks like Angular, Node, Express, and React, the use of JavaScript for both server-side and client-side programming has become common. Several startups now use JavaScript to create dynamic web pages that are secure and fast. Popular sites like eBay, PayPal, and Uber are built with JavaScript. Web applications developed in JavaScript can be adapted for different languages and countries using online localization services. #3 TypeScript The need to constantly improve the performance and other attributes of computer programs led to the development of TypeScript. Microsoft set out to create a language for building large applications, with a strict syntax that improves safety. Because TypeScript is well structured and compiles to JavaScript, many beginners pick it as their first programming language. Its compiler is designed to catch many errors up front, so less time is spent debugging. With an extended toolbox, you can create components that make application development a lot easier. Highlights ● Create components with an extended toolbox. ● Lower likelihood of errors. ● A strict syntax for enhanced safety. ● Object-oriented language. TypeScript is an object-oriented programming language that is constantly updated with new features and additional functions, making it easier to use. Its use in the development of Microsoft's Visual Studio Code is proof of the possibilities that can be unlocked with TypeScript. As we can see from the StackOverflow statistics, TypeScript is also gaining popularity and may overtake JavaScript by 2020. #4 Kotlin Kotlin is another great cross-platform programming language to consider adding to your arsenal in 2020. Its similarity to Java has made it possible for Android developers to switch seamlessly while still being able to use their existing frameworks. With Android gradually taking over the smartphone market, many opportunities will open up for Kotlin developers who work on both front-end and back-end code. The fact that IDEs like Android Studio and IntelliJ support Kotlin is a great advantage, giving developers the power and flexibility to write efficient code with ease. Highlights ● Object-oriented programming. ● Works with Java frameworks. ● Used for both front-end and back-end programming. ● Secure and flexible. ● Easy to debug. The use of Kotlin in the Pinterest and Evernote applications has shown how much can be achieved with fewer lines of code. Conclusion There are several other languages that may also gain popularity in 2020, as seen in the survey chart, including GoLang, which is widely used for building backend services and infrastructure tools. Your programming journey may not be an easy one, but with practical knowledge of any of the four programming languages mentioned above, your career prospects for 2020 look bright.
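The Python section above mentions its support for asynchronous code. As a concrete illustration, here is a minimal, self-contained sketch using the standard-library asyncio module; the task names and delays are made up, and the point is only that the two waits overlap instead of running back to back.

```python
import asyncio
import time

async def fetch(name, delay):
    # Pretend this is a slow network call.
    await asyncio.sleep(delay)
    return f"{name} done after {delay}s"

async def main():
    start = time.perf_counter()
    # Both "requests" wait concurrently, so this takes about 2s, not 3s.
    results = await asyncio.gather(fetch("users", 2), fetch("orders", 1))
    print(results, f"elapsed: {time.perf_counter() - start:.1f}s")

asyncio.run(main())
```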
https://medium.com/dev-genius/the-4-best-programming-languages-to-learn-in-2020-1f915a9981c6
['Haider Imtiaz']
2020-11-07 08:20:16.757000+00:00
['Python', 'JavaScript', 'Typescript', 'Coding', 'Programming']
The Brutal Truth About News Break for Writers
How to Evaluate a Writing Platform like News Break When I look at a platform to write on, I look at it from many different angles. The brutal truth is many writers jump ship from a platform without mastering it or even understanding it. This is my evaluation criteria. What sort of platform are they? News Break is a news app, not a blogging platform. They are trying to be more than that but their branding is incredibly confusing for an already exhausted reader just trying to find helpful content to assist them with their everyday lives. This is a problem for writers. The chance of them pivoting with their current branding will be tough. Does the platform understand writing? News Break isn’t built for writers or content creators. That was an afterthought. The best platforms I have found to write on are obsessed with writers. They lose sleep at night thinking about how to make writers successful. I don’t think News Break loses sleep over writers based on all the content they have published about their creator’s program and the platform itself. That’s OK. It’s nothing to be angry about; they’re a business. Look at the ethics of the platform News Break is a VC funded company from the U.S. Their goal is to make money any way they can. They don’t seem to have a secondary goal to speak up about important topics or shine a light on injustices they can help change. A goal beyond money is key as a writer. It gives a platform purpose. Without purpose, money takes over, and writers become whipping objects. Pay close attention to the incentives News Break is clear about how writers make money (see here). A minimum monthly payment is a clever idea in my opinion. It gives writers certainty about their income. This is a crucial detail many writers are missing: be careful with cash incentives. The monthly payment offered by News Break is generous and thrives on writers’ insecurities. It does come across as a little get-rich-quick, which is never good for writers. If you’re only motivated by money as a writer then you will give up. I see it every day. There is a network marketing element too. This is something I dislike. Network marketing with referral links becomes exhausting. It breeds fake influencers who try to lure readers into their trap for their own financial benefit. Many network marketing companies are giant Ponzi schemes. I hope this feature is removed. Generous incentives might seem good on the surface, but they encourage bad behavior. What is the writer’s KPIs? Writers have KPIs on News Break when it comes to followers and pageviews. I don’t think the KPIs are achievable for most writers. This could pose a problem for writers real fast. There’s no such thing as a free payday as a writer. Remember that. If you don’t deliver, you don’t get paid. So your ability to deliver good content, consistently, will always be the key to every writing platform. Is the future clear for the platform? News Break doesn’t have a clear future. They’re just trying stuff on to see what sticks which I admire. Writers are an experiment. This can end badly if you bet your life on News Break. When I assess a platform I look at the founders, where they get money from, their experience of publishing high-quality content, how long they’ve been around, and what their mission statement is. News Break is an “I don’t know” in response to pretty much all of these questions. News Break is based in Mountain View, California. They have three main investment companies that funded the business. All of them are from China. 
The founder, Jeff Zhang, is originally from China and so is the COO, Vincent Wu. In the current economic environment and with what happened to TikTok, this could become an issue in the future. Their senior leaders all have good media chops. Jeff oversaw algorithm improvements when he worked at Yahoo, so it’s no wonder News Break has grown enormously and understands viral content. What’s the writing editor like for writers? It’s passable. It has the basics you need, although nothing fancy. I like that when you copy and paste the content into the editor there is virtually nothing else you need to do. News Break content is designed to be consumed on a smartphone so the images are quite small. Stripping out links from your content is one issue with the editor. The only way to remove links is to copy your content without any formatting, and then go back and format everything from scratch. This is time-consuming. I leave links in my articles unless they contain something News Break wouldn’t be OK with — like a link to me selling my eBook. You must add a follow widget to every article too, otherwise, you won’t meet the follower count KPI. What’s their support like? I emailed support several times. They were kind, friendly, and knew their stuff. They even gave me advanced tricks on how to use the editor. What are their content guidelines like? I find them great. They don’t limit your voice or tell you how to write. There are a few topics you can’t write about like conspiracies, which is expected. Does the platform limit low-quality content? Unfortunately not. Low-quality content can end up on the platform. News Break gets around this by having an aggressive algorithm that doesn’t let low-quality content be seen. It looks as though they use clicks and reading time as data to determine whether readers find the content helpful, and therefore, whether they should promote it. I like this. The algorithm speaks on behalf of the reader, not the platform. Can you take users off the platform? This is the most important criteria of any writing platform. You want to take users off the platform so you can own them — your way. In other words, you want to be able to collect users’ email addresses so you’re not reliant on an algorithm to show your content to readers, and so you can build a deeper relationship with your audience. Followers are useless. You can have 100,000 followers and still have your story read by less than 100 people. I wish writers all understood this truth. News Break does not allow you to have a call to action in your article or to insert links that make you money. I think that’s fair. You can’t double-dip as a writer and it’s greedy to try. News Break does allow you to have a link in your bio which you can use to send readers to a landing page and capture their email address. I like this feature. It works extremely well on Instagram, as an example. How much money can you make? Writers need to eat, I get it. We’re in a global recession, I get that too. So what’s the financial opportunity of News Break? Based on writers I’ve spoken to and the publicly available data I’d say $1,000-$2,000 a month. You can’t quit your 9–5 job on this sort of money but it can help you live with less stress and pay a few bills, so it’s worth exploring.
https://medium.com/better-marketing/the-brutal-truth-about-news-break-for-writers-f4e0a35c8ebf
['Tim Denning']
2020-12-14 15:32:30.383000+00:00
['Social Media', 'Money', 'Creativity', 'News Break', 'Writing']
How Google Designers Adapt Material
Material Design provides a set of tools and guidance to help you make informed decisions about the different UX design directions you could take when creating an app. But what happens when the guidelines don’t fit your product needs? And what happens at Google when a designer is working on a product that doesn’t quite fit the guidelines? The Material guidelines adapt. In this article we’ll look at two Google apps — Keep and Inbox — to understand how they not only bend some of the rules, but how they help shape the Material Design guidelines as a whole. Inbox: Exploring the typographic grid Designing a new email web application is an extremely ambitious undertaking at Google, especially when it appears alongside an established product like Gmail. The Inbox team set out to create a denser UI as well as a unique user experience and brand identity while playing by the new Material Design rules. While the Inbox design team was putting together their initial designs, Material Design was still being developed. This presented the team with a great opportunity: The potential to establish what the Material Design web standards could be, while solving the problem of designing for dense UIs. Designing dense UIs The initial design for Inbox wasn’t flexible enough, as the grid only had space for seven emails on a 13-inch screen. This was far too small when compared to Gmail, which could show 16-20 emails. Tim Smith, Visual Design Lead for inbox explains: “If you open Gmail and Inbox side by side, there is a big difference in visual density. This also turned out to be one of the greatest challenges; finding the ideal balance of content and breathing room.” By making adjustments to the grid, row heights, and how type presented, Inbox managed to set a design standard for dense web UIs and display 12–17 emails, each inside a Material Design card. The font size and interface were also designed to change and adjust to a person’s device. For example, the subject line in an email will increase in font size depending on an increase in screen size. Inbox was designed to fit as much information as possible even at small screen sizes, setting a Material Design standard for dense web interfaces. Using colors, imagery, and icons to give context to the user Inbox’s visual distinction from Gmail was handled by their use of header images, which relate to the content within bundled emails. If someone using Inbox plans a trip to the New York City, for example, they’ll be shown an image of the Manhattan skyline. Inbox also uses a vast array of icons in a left navigation drawer that are colored according to their function in the app. For example, when a user clicks or taps the green “Done” button, the background color of the header bar also changes to green, keeping the user informed of the change and context. This use of contextual imagery is another defining characteristic of Inbox’s distinct brand experience. Inbox will add imagery to a bundle of emails to give them meaning. For instance, plane ticket and hotel reservation emails for a trip to New York show a picture of the city’s skyline. Designing a header bar for the web Another challenge for the team was the design of the app bar. The initial proposal was a variable header that didn’t stretch to fill the full browser window, but instead matched the width of the content. “We worked through about a dozen different variations of this concept until ultimately landing on the full width header bar you see today. 
We also worked through several prototypes to determine the best search field styling.” Tim Smith, Visual Design Lead, Inbox Since the cards in Inbox expand and contract, this meant having to adjust the header every time the user interacted with an email. The app bar also contains a search field and a menu that displays other Google apps. This approach lets Inbox remain responsive without complicating the interface. Keep: Adapting navigational patterns Keep is a cross-platform, note-taking application that expands and collapses Material cards on screen to focus a user’s attention while adding notes. A modified bottom navigation bar also lets people create a new note quickly with a single tap. Encouraging actions with empty states and motion Empty states typically occur when there isn’t any content to show to the user. Keep uses this design pattern by giving people a blank canvas on which to draft their thoughts. The spareness of the UI encourages the user to explore different elements in the app bar search, which expands to reveal icon filters; a sorting menu that allows users to toggle between list view and grid view; and a left navigation drawer to adjust the app’s main settings. The cards expand and contract to give users context. Keep uses empty states to encourage people to create new notes “Motion is something we have put a lot of effort into — from the way notes animate into the stream view, to the way they transition when you open and close them.” Genevieve Cuevas, Software engineer, Google Keep Using the right Material patterns for your app: bottom navigation vs. FAB When redesigning the app, Keep’s team of designers and developers pored over the Material Design patterns and ended up applying components like cards which help distinguish notes from one another, a left navigation drawer that makes settings for the app easily available, and contextual menus that change to fit the context of each note — like a note with checkboxes displaying a menu to check all items in a list. Combined, these different design patterns create a clean and functional user experience that adapts depending on the context and needs of the user, a key factor in Keep’s simplicity and easy-to-use interface. During the redesign process, the Keep team experimented with some of Material’s core navigation by testing an expandable FAB in lieu of the existing bottom navigation. For context, the bottom navigation offered a simple one-tap call to action to create new notes. The new FAB required two taps: one to expand options, and a second tap to create notes. “When we launched the FAB, some of our users complained about losing the one-tap create note behavior.” Genevieve Cuevas, Software engineer, Google Keep This change seemed regressive to people who previously used the app and were accustomed to single-tap navigation. Keep’s journey, testing out and ultimately abandoning core Material components like the FAB, stands as a great example of choosing the Material guidance that works best as opposed to shoehorning behavior that doesn’t fit the product. Guides not rules Both Inbox and Keep teams utilized the Material Design guidelines, using them to help design and develop their applications. When they came across a use case that didn’t work for their product, they adapted their designs accordingly. Material Design offers a lot of guidance, built on years of UX experience throughout Google, but it can’t cover everything. 
Hopefully these examples above show that you can adapt it to suit your needs while still conforming to the overall spirit of the guidelines.
https://medium.com/google-design/how-google-designers-adapt-material-e2818ad09d7d
['Mustafa Kurtuldu']
2017-09-12 16:37:56.380000+00:00
['Google', 'Material Design', 'UI', 'Design', 'UX']
A Business Used to be a Black Box. Now it’s a Glass Box.
In a connected world, the relationship between powerful organizations and the societies in which they operate is being redrawn. We all understand that. But we’re still catching up to the full implications. Here is one such implication. A transparent world means a radical change in the nature of brands. That’s a huge shift, because brands — business, political, individual — do much to shape the world we live in. This week we learned that Microsoft employees have been sharing stories of sexual harassment and discrimination in a long internal email chain. Many of those employees say they originally complained to HR, but got nowhere. Reports say the chain started when one female staff member emailed others to ask for advice on how to break through the glass ceiling at Microsoft. Stories of harassment, abuse and prejudice began to pour in, including one by a woman who says she was asked to perform sex acts by a senior employee of a partner company, and was ignored when she complained. As news of the email chain spread through Microsoft, HR announced an investigation. Stories of gender discrimination at big corporations are, sadly, nothing new. But the way this story surfaced — as an email chain first shared between staff and then leaked to the media — is a reminder of a powerful truth. A connected world is a more transparent world. And a transparent world is transformative. That’s because transparency reconfigures power relationships. Just look at what transparency has done to some of the most powerful organizations of our time across the last couple of years. Transparency ended Travis Kalanick’s reign as CEO of the unicorn he grew from birth. It brought us the truth about Facebook, and forced Mark Zuckerberg to do the previously unthinkable: call for regulation of social media. Credit: Shutterstock A connected world allows citizens to see deeper inside the powerful organizations that shape their lives. It also empowers them to challenge power in new ways. Not only are those who are the victims of wrongdoing now able to corroborate each other’s stories and speak out in unison, but millions around the world can insist that they be heard and that action is taken. That’s what is happening with #MeToo, a movement that signals a historic redrawing of the power relationship between men and women. We’re just at the beginning of the reconfiguration that transparency will impose on our societies. But one powerful implication is becoming clear. And its best understood as the difference between a black box and a glass box. Organizations — businesses, institutions, and so on — used to be black boxes. For the most part, no one could see inside. The brand was whatever those inside the box painted on the outward-facing walls. Now, organizations are glass boxes. People can see right inside. They can see the people, the processes, the values at work. In other words, they can see the organization’s internal culture. And once people can see that culture, they will feel something about it. That is to say, it will become part of the set of cognitive and emotional associations that they tie to the organization. It will become part of — perhaps the most important part of — the organization’s brand. In a transparent world, internal culture is brand. Increasingly, an organization can no longer paint a brand on its outward-facing walls and expect people to believe in it. Instead, because of transparency, brand will be an organic outgrowth of internal culture. That’s a massive shift. 
Indeed, by blurring the boundaries between inside and outside, this shift changes the very nature of what it is to be an organization at all. It also explodes the communications disciplines that shaped so much of traditional 20th-century mass media, consumer democracies: public relations, branding and advertising. Once, Microsoft could have sought to PR their way out of a story such as this. Today, they will be judged primarily not on what they say, but on the meaningful changes they make to their internal culture, which will be reported to the world via their employees. If they make no such changes, the brand will rightly suffer. But if they do, they have a chance to powerfully enhance the way that customers and clients feel about engaging with Microsoft. In 2019, there can be no brand as distinct from the organization as it authentically exists; there can be no marketing department as distinct from the rest of the organization. The entire culture is the brand; every department is the marketing department. The implications — for businesses, institutions of state, political parties, powerful individuals — are huge. And so is the opportunity for organizations and individuals who understand those implications. One further glimpse of this shift? A spate of 2020 presidential candidates are about to start spending millions of dollars on their campaigns. Meanwhile, Alexandria Ocasio-Cortez could be found last week broadcasting from her living room on Instagram Live, assembling the furniture she has just bought for her new apartment, drinking wine and answering questions from viewers. Ask yourself: in 2019, who is building the more powerful brand?
https://dmattin.medium.com/a-business-used-to-be-a-black-box-now-its-a-glass-box-de6fba93fd59
['David Mattin']
2019-04-05 20:16:36.202000+00:00
['Politics', 'Society', 'Business', 'Technology', 'Future']
Handy Data Visualization Functions in matplotlib & Seaborn to Speed Up Your EDA
For ease of demonstration, I’ll start out by fetching one of the datasets available via Sklearn, the California housing prices dataset. This happens to be a dataset that lends itself to a regression type problem, but the graphing functions I describe below work just as well for categoricals and classification-oriented exercises, either as they are or with minor modifications. If and when you use the plotting functions, you should, of course, use the dataframe that you have set up and ignore this particular dataset. First, the basic imports: fig. 1 … basic imports Next, fetch the demonstration dataset and stick it in a pandas dataframe. There are 20,648 observations in this Sklearn dataset of California housing prices. The eight feature columns reside in the dataset.data structure. fig. 2 … fetch dataset Finally, create the target column price (prices in $000,000) which resides in the dataset.target structure. It happens to be placed into a multilevel column index, so I will need to flatten that to one level, as below. I also verified that there were no null values, just to be sure.
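The code for these steps appears in the original post as screenshots (fig. 1 and fig. 2 above), so it cannot be copied from the text. A reconstruction of what the described steps might look like is sketched below; the seaborn/matplotlib imports and variable names are assumptions based on the prose, and the multilevel-index flattening mentioned above is omitted because it depends on how that particular frame was built.

```python
# fig. 1 ... basic imports (assumed set)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# fig. 2 ... fetch the demonstration dataset and build a DataFrame
from sklearn.datasets import fetch_california_housing

dataset = fetch_california_housing()
df = pd.DataFrame(dataset.data, columns=dataset.feature_names)

# Create the target column from dataset.target and sanity-check for nulls.
df['price'] = dataset.target
print(df.shape)            # (observations, 8 feature columns + 1 target)
print(df.isnull().sum())   # verify there are no null values
```

With a frame in this shape, the plotting helper functions described next can be pointed at any feature column together with the price target.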
https://medium.com/better-programming/handy-data-visualization-functions-in-matplotlib-seaborn-to-speed-up-your-eda-241ba0a9c47d
['Manu Kalia']
2019-06-21 19:03:36.939000+00:00
['Python', 'Eda', 'Matplotlib', 'Data Science', 'Data Visualization']
Industries AI Is Poised to Revolutionize in the Next 20 Years
Industries AI Is Poised to Revolutionize in the Next 20 Years Devinder Sarai Follow Dec 15 · 12 min read With artificial intelligence and its applications growing at an ever-increasing rate, let’s explore what this means for key industries as we approach 2040. Photo by Drew Beamer on Unsplash It’s 2016. Experts say it would be at least ten more years until AI could beat a world-class human player at Go. This ancient game has 10¹⁷⁰ possible combinations, more than the number of atoms in the known universe and is a googol — the digit 1 followed by a hundred zeros — times more complex than chess. On March 9 of that year, history was made when AlphaGo, a computer program from the company DeepMind, beat the 18-time Go World Champion Lee Sedol in the first game of a five-game series. This incredibly advanced program used search trees and deep neural networks both to select the next move and predict the winner of the game based on the current state of the board. A few days later, AlphaGo had won four matches to one. Around the world, 200 million people watched in surprise as the computer program went against conventional wisdom to play the now-renowned move 37, one of many incredibly creative winning moves AlphaGo played during the series, leaving even Lee Sedol amazed: “I thought AlphaGo was based on probability calculation and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative.” Yet, just over a year later, DeepMind released AlphaZero, “a single system that taught itself from scratch how to master the games of chess, shogi, and Go, beating a world-champion program in each case.” We’re at an inflection point in the progress of AI, where it shifts from being able to accomplish a narrow range of tasks — say, playing either chess or Go — to an increasingly broad skillset that will truly change the world. What AI’s been able to do up until now is only the beginning. The “Landscape of Human Competence” by Hans Moravec. Creative tasks and original work are highest. If we place most human tasks on terrain, elevated to represent the difficulty for computers, the capabilities of AI are a rising sea level as pictured in the image above. Already submerged are several low-lying plateaus while the water laps at the base of the foothills of driving, investment, and translation. Even so, before we can really start talking about AI, we first have to define intelligence. In his book Life 3.0, physicist and MIT professor Max Tegmark defines intelligence as not uniquely human and the “ability to accomplish complex goals”. Just as it would be futile to try and draw a line to classify intelligence as an all-or-nothing trait, we must approach intelligence as a spectrum that is determined by the degree of ability in accomplishing different goals — a system that can translate from one language to another in all its intricacies and subtleties is more intelligent that one that can play chess. Read on for three major industries that will be completely transformed by AI in the coming decades and how exactly we could get there. Photo by Javier Allegue Barros on Unsplash Healthcare Nine out of the ten leading causes of death in the United States occur from preventable causes such as heart disease, diabetes, or cancer. Globally, 43 million people are affected by medical errors and there is a pressing need for more advanced healthcare services in low-income countries, especially in the context of a global pandemic. 
With AI, we’ll be able gain unique and so-far-unseen insights into many different diseases and conditions with big data, diagnose more accurately with machine learning, and use personalized medicine to tailor treatment to the individual person along with a faster and more fruitful drug development process. What’s more, integrating with smart wearable devices will allow for real-time monitoring of health, alerting a diabetic if their blood-sugar levels surpass a certain threshold for example. Already, lots of progress has been made. At Harvard University’s teaching hospital, doctors are using AI-enhanced microscopes to scan for harmful bacterias like E. coli in blood samples quicker than manual scanning with an astounding 95% accuracy. Lung cancer results in over 1.7 million deaths per year, making it the deadliest of all cancers worldwide and the sixth most common cause of death globally. Unfortunately, it has the worst survival rate among all cancers as it is usually caught far too late when treatment is less successful. In 2019, Google Health published research that showed that by using an artificial intelligence model they could detect 5% more cancer cases while reducing false-positives by more than 11% compared to radiologists. In applying AI to breast cancer detection, pathologists were able to halve the average time they needed to spend to find small metastases in the lymph nodes. Moreover, the program was able to correctly distinguish a slide with metastatic cancer from a slide without cancer 99% of the time while being able to “accurately pinpoint the location of both cancers and other suspicious regions within each slide, some of which were too small to be consistently detected by pathologists” (source). Just recently, in November 2020, DeepMind revealed that their AlphaFold 2 AI system has effectively solved the protein folding problem — a challenge in biology that goes back 50 years. Any one protein is estimated to have 10³⁰⁰ possible conformations, meaning that it would take millions of years to model all the possibilities. Using deep learning and almost 200,000 proteins whose structure is already known, AlphaFold 2 could then compare its predictions with researchers by modelling proteins that scientists are still working to determine their structure. On a scale of 1–100, the AI achieved a remarkable median score of 92.4, considered on par with a team of human researchers, although significantly faster — days versus years. Ultimately this will lead to faster drug discovery and even specialized protein design. DeepMind’s chief executive, Demis Hassabis remarks on AlphaFold 2’s success: “I do think it’s the most significant thing we’ve done, in terms of real-world impact.” Clearly, there’s a lot of incredible development happening at the intersection of AI and healthcare which offers plenty of hope for a healthier future and millions of lives saved every year. However, there are some challenges and questions that need to be addressed before AI can reach its full potential. For one, since machine learning models require large amounts of patient data to effectively train on, hospitals and government regulators alike need to tackle the issue of data privacy and ownership. As a patient, if you receive care, are you able opt-out of having your medical record and health information uploaded to a database that can be accessed by AI models? 
What about if your genetic information puts you at an elevated risk for developing certain conditions — is your insurance company entitled to know about it? If so, is it ethical for them to raise your premiums? Another sobering challenge is the issue of responsibility; if a hospital uses an AI program to help diagnose you or an algorithm to determine your treatment, perhaps the first time that it goes wrong is at that moment — and with malpractice, where does the responsibility lie? These questions need to be answered for AI to enter the mainstream healthcare systems of the world and while we won’t discuss potential solutions here (I’ll be writing about some of those in another article), it’s important that we understand both the promise and the risk that AI holds. Photo by Sharon McCutcheon on Unsplash Finance The stock market aside, there are many opportunities for AI to change the way we interact with and manage money. Whether to promote good credit or more accurately assess under-served minority communities, tools to manage risk and detect fraud for larger companies, or to offer personalized banking to an increasingly online world, there will be a shift in the coming years toward an interconnected and accessible financial system brought about in part by AI. Using machine learning to evaluate borrowers with little to no credit information or history, Zest AI reduces loses while more accurately predicting risk. By adopting their product, banks were able to give out more loans while reducing the default rate by more than 30% — benefiting both parties and the economy as a whole. Kavout, a startup founded in 2015, uses AI to identify real-time patterns in financial markets and condenses massive amounts of unstructured data into a numerical rank for stocks. Its top-ranking stocks have outperformed the S&P 500 by almost double over the last five years. The traditional banking experience is very impersonal and new tools such as financial advice chatbots are using AI to create a better and more personalized customer experience. Money-saving assistant Trim cancels money-wasting subscriptions, finds more cost-effective services such as insurance or cell phone plans, and even negotiates bills — saving the average person almost $1,500 a year. Of course, with so much of our day-to-day financial activity happening online, fraud detection is often a time-consuming process for financial institutions. Shape Security protects more accounts from fraud than all other financial security firms in the world combined. The software used by most of the largest banks in the US was trained on billions of interactions, allowing it to distinguish between real customers and bots using machine learning. While the work that these companies — among others — have done to date is extraordinary considering the strict regulation in this space, they are just part of the transition to what the financial system will look like in the future. Centralized banks and financial institutions will gradually become less prominent as loans and even credit will become partially crowd-sourced or on the blockchain (more about this soon). Kiva is a great example, connecting over 3.5 million borrowers in 77 different countries with 1.9 million lenders, resulting in over $1.5 billion in zero-interest loans to date. Photo by Radek Kilijanek on Unsplash Transportation In the US alone, more than 38,800 people die every year in car accidents. Another 4.4 million are injured seriously enough to require medical attention. 
That’s $871 billion a year, according to the NHTSA. Solving the issue of self-driving cars is a moral imperative. In creating a fully-fledged autonomous driving system that will reduce accidents on the road, AI has a crucial part to play — taking in data from the car’s surroundings and analyzing it in real time to determine whether a situation requires a certain response or whether to keep cruising along. What’s more, the average person wastes 54 hours in traffic every year — what if that time could be used for something more productive? To put things in perspective, however, humans are really good at driving when it comes down to it. In 2018, there were about 1.22 deaths per 100 million miles driven, roughly the distance from the Earth to the Sun. For an AI-driven system to have a meaningful impact on the number of deaths caused per year by car accidents, it has to be even better. As self-driving car pioneer Sebastian Thrun put it: “To [build a self-driving car] that manages 90% of the problems encountered in everyday driving can literally be done over a weekend, to do 99% might take a month, and then there’s 1% left […] it keeps going until you get to that 0.01% and then it’s hard.” So what advancements have there been in this AI-powered technology? Heralded as the “future of driving”, Tesla’s Autopilot feature promises full self-driving capabilities in the near future thanks to a combination of cameras, ultrasonic sensors, and radar. In order to achieve this remarkable feat, Tesla uses a neural network that they’ve trained to emulate human driving at its best, using data from over three billion miles driven. Today, Autopilot enables the car to steer, accelerate, and brake automatically within its lane as well as be summoned to its driver within a parking lot. A new upgrade allows for lane changes and taking highway interchanges or exits based on the destination. Via over-the-air software updates, Tesla is able to incrementally improve its Autopilot feature over time. Google’s self-driving project, now called Waymo, takes a different approach, first building detailed maps with information such as road profiles, sidewalks, lane markers, traffic lights, and stop signs. Instead of integrating cameras and sensors into vehicles of its own, it adds them onto the cars it uses. Waymo’s software has driven billions of miles in simulations using AI-generated images and is currently operating two services in testing and limited stages: Waymo One — an autonomous vehicle taxi service — and Waymo Via, a transportation service. Other companies such as Daimler, Embark, and TuSimple have already put driverless-equipped trucks on the road for testing. Tesla plans to produce a fully electric autonomous truck called the Semi, which would provide enough fuel savings to pay back the entire cost of the truck in only two years. The autonomous transportation industry attracts perhaps the most hype, and it’s reflected in soaring stock prices and ballooning company valuations. However, the potential that exists in AI-fuelled self-driving vehicles and the impact they could have is, in all likelihood, going to be made reality in the coming decades. Photo by Aaron Burden on Unsplash What Does This Mean for Us Humans? Many people fear that the rise in AI-empowered industries will lead to a swift decline in jobs for humans.
Take for example that more than 3.5 million people work as truck drivers in the US alone — what will happen to their jobs once autonomous trucking reaches the mainstream? What about radiologists whose tumour-identifying role will be taken over by superior machine learning systems? If you foresee a future where human job prospects are grim and ever-dwindling, then you’re only looking at one side of the coin. Sure, there will be plenty of jobs that will pretty much disappear, but it won’t happen overnight. On the flip side, the number of jobs that will be created could bring about even more progress and advancements in technology — just imagine what a million more computer scientists or engineers could do for the world. The changes that will be brought about by AI in the coming years and decades only stress the need for current and future generations to be multi-skilled, having the invaluable ability to learn and then re-learn in a continuous cycle that will last a lifetime. If you’re interested in this art of reinvention, I encourage you to read this article by historian and writer Yuval Noah Harari. Photo by Benjamin Davies on Unsplash Summary In short, AI will revolutionize the world as a whole in the coming 20 years, changing not only the systems we interact with, but even our everyday lives. We had a look at three key industries to give us a glimpse of the future:
Healthcare: getting insights into different diseases with big data, diagnosing more accurately with machine learning, using personalized medicine to tailor treatment, and accelerating drug discovery
Finance: promoting good credit for individuals, managing risk and detecting fraud for larger companies, and offering personalized banking
Transportation: reducing the thousands of fatal accidents on the road and saving people time
Whatever change does happen over the next two decades, you can be sure that it’s going to be an exciting and rapidly-changing time. Humanity is going to need a new generation of problem-solvers, creative thinkers, and doers to overcome the greatest challenges in the history of our species. The only certainty about the future is change itself. I hope you’re ready. Gain Access to Expert View — Subscribe to DDI Intel
https://medium.com/datadriveninvestor/industries-ai-is-poised-to-revolutionize-in-the-next-20-years-3064cb58b1f4
['Devinder Sarai']
2020-12-26 08:28:04.556000+00:00
['Future', 'Healthcare', 'Finance', 'Transportation', 'AI']
Chilean Artist Draws from Covid-19 Hospital Bed
What happens when an artist treks the inscrutable space between life and death? Covid-19 survivor Guillermo del Valle coped the way he knows best: he drew. The Chilean visual artist nearly died, having spent 18 days in a Barcelona hospital. Between oxygen doses, del Valle took whatever supplies he could find near his hospital bed and sketched visual fragments of his coronavirus fight. “It felt like a Pac-Man was eating my insides making it difficult for me to breathe,” said del Valle about his battle with Covid-19 in March 2020 in Spain. “I had been in the ICU, my immune system was in crisis, my tired body was ready to let go, and my mind was struggling, alert, sleepless,” the 65-year-old said. At one point, sensing he might not live past the night, del Valle emailed goodbye letters to his wife and children. “He was about to cross over and came back to life,” said Doctor Ivan Pelegrin of del Valle’s illness. As Covid-19 wards fill to capacity in large swaths of the United States, del Valle’s sketches offer a unique glimpse into the mental battles unfolding amongst ICU patients. And his art conveys a simple truth: creation persists in the face of this destructive virus. “The art became a way to look at myself, at my limit, and to register my fragility, the beauty of life, and the natural part of death,” del Valle said. “I think that by stepping outside of myself in this way, it helped me to fight against the virus. It helped the force of the mind prevail over exhaustion and the tendency to surrender to the body.”
https://medium.com/art-direct/chilean-artist-draws-from-covid-19-hospital-bed-a0b5766a00a2
['Linda Freund']
2020-07-16 17:35:35.568000+00:00
['Creativity', 'Life Lessons', 'Coronavirus', 'Art', 'Covid 19']
Should You Wear Masks While Exercising?
By virtue of pure logic, the argument for wearing masks while exercising makes complete reasonable sense. Up to this point, most public health experts agree that the holy grail of Covid-19 prevention consists of physical distancing, avoiding crowds, and wearing masks when you leave home. Since most modern exercise regimens include activities in close quarters and public spaces, the safest bet to minimise transmission risk, even during exercise, is to don a mask. But is this reasonable logic? In June, the World Health Organization released an official statement in an attempt to address this issue. Now embedded within the Mythbusters page on their website, the organisation advises NOT wearing masks when exercising, lest they “reduce the ability to breathe comfortably”. We all know that oxygen is absolutely necessary for exercise, especially strenuous exercises like HIITs, long-distance running, and uphill cycling. Oxygen is required for aerobic respiration, which produces energy-storage molecules called ATP that are expended during physical exertion. Masks give your body the illusion of hypoxia, or lack of oxygen, similar to what one will experience when they’re training in altitude, where oxygen availability is naturally low. While the average healthy person can compensate for this oxygen decrease, it may be dangerous for people with preexisting health conditions, such as obesity and atherosclerosis. Atherosclerosis is a condition in which a person’s blood vessels are clogged, usually by a lipid plaque that forms from years of constant exposure to risk factors, one of which is obesity. When blood vessels of the heart are obstructed, blood flow to the organ is reduced, which can become fatal when a particular activity, such as exercise, induces a sharp and sudden increase in oxygen demand. Science journalist Lindsey Bottoms, who wrote a similar article on the topic of masks during exercising, enrolled herself in an exercise test to measure the fraction of oxygen in inspired air while running on a treadmill (a) with a minimally occluding fencing mask, and (b) with a surgical mask. Lindsey, who is a fencer, ran 10kph on the treadmill for 3 minutes to emulate the intensity of a fencing bout. With her fencing mask on, the oxygen concentration she breathed in was 19.5%, down from 21%, which is the normal oxygen fraction at sea level. While using a face mask, she breathed in just 17% of oxygen, a steep drop from 21% — equaling exercise at 1,500m above the sea level. A lower fraction of inspired oxygen often drags blood oxygen concentration down with it, and this could trigger complications from atherosclerosis to manifest. When the tissues supplied by these obstructed vessels are deprived of oxygen for a prolonged period of time — especially during moments of increased demand — an ischemic event could occur. In the heart, this is commonly known as a heart attack. In the brain, a stroke. What’s worse is that people don’t usually realise they have atherosclerosis until their first cardiac event, which often follows after a specific triggering incident. Classically, the trigger takes the form of emotional stress or physical exertion, which causes a sudden increase in metabolic demand and puts the system into temporary overload. This momentary breach in the supply-demand equilibrium is sometimes all it takes to induce a devastating event. A freshly published paper in Elsevier similarly advocated against the use of masks during exercise. 
Exercising with customised tight face-masks induces a hypercapnic hypoxia environment, which occurs as a direct result of inadequate O2-CO2 exchange in the lungs. This can lead to a dangerous build up of CO2 in the blood and subsequent acidosis, where your blood pH plunges below safe margins. The problems range from headaches, functional impairment in a slurry of bodily systems, including immune cell motility, muscle metabolism, kidney function, cognition, cardiac perfusion, and an increase in cardiac load and anaerobic respiration. The net effect of these impairments includes an ironic increase in susceptibility to infection, multi-organ damage, and acute heart failure. The decrease in resistance to infection is thought to be due to a combination of carbon dioxide induced immune system down-regulation, and the mechanical limitation of such masks, which WHO claims can trap sweat, providing mechanical resistance for oxygen entry and promoting the growth of microorganisms. But if the benefits of exercising without a mask on make mask-less exercise seem like the unanimous winner in this contention, the environmental context makes this advice seem less clear-cut. After all, there will always be circumstances that make the case for mask-less exercise less convincing. While the spacious outdoors provide a safe environment to practice WHO’s advice, the crammed indoor space of gyms and fitness studios provides a persistent threat for viral spread to occur between individuals. In this context, the case for mask-less exercise is still a topic of hot debate. Lead researcher Baskaran Chandrasekaran advises that social exercises perform low to moderate-intensity exercise, rather than vigorous exercise when they are wearing face masks. He also recommends people with known chronic disease to exercise in solitude at home, under adequate supervision, and without the use of masks. The jury is still out for mask-wearing during exercise. Even if a safe inter-individual distance protocol is instituted, the risk from fomite spread (from the communal handles of gym equipment and water dispensers) is still not fully mitigated. For now, it’s probably best to steer clear from crowded indoor areas when it comes to exercise. Baskaran underlines that social distancing and self-isolation appear to be better than wearing face-masks while exercising during this global crisis. So if you do find ample space outside that isn’t too crowded to plot an exercise course, then, by all means, ditch the mask.
https://medium.com/beingwell/should-you-wear-masks-while-exercising-774167df6986
['Jonathan Adrian']
2020-09-17 12:48:30.338000+00:00
['Sports', 'Coronavirus', 'Exercise', 'Health', 'Life']
Data Mesh Simplified: A Reflection Of My Thoughts On Data Mesh
This is a republished article from https://www.dataengineeringweekly.com/. Subscribe to Data Engineering Weekly for your weekly data engineering news in the industry. The Rise of Data Mesh Data Mesh is a set of data engineering principles coined by Zhamak Dehghani from ThoughtWorks. I highly recommend reading How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh and Data Mesh Principles and Logical Architecture. Data Mesh is influenced by domain-driven design, emphasizing the importance of data ownership and shared tooling to generate, curate, and democratize data. Data Mesh principles are getting good adoption in some leading organizations. Yelp talked about its data mesh journey, as have Netflix and Zalando. Though the literature around data mesh is maturing, I still see some confusion around Data Mesh in a few blogs. I’m not an authoritative person on data mesh; this is an attempt to convey what I have learned about it, and I’m eager to hear alternate viewpoints. The sad state of data engineering Now, the fundamental question you may ask: why Data Mesh, and why now? To understand Data Mesh, we need to understand the current state of the data engineering world. It may not directly apply to your organization, but most data infrastructure remains in this sad state. Imagine you are writing a dictionary that contains only words, with no meanings. On top of that, you shuffle the words randomly, publish the dictionary without any index, and hire highly paid data engineers and analysts to decode it. That is the current state of data infrastructure. Modern data infrastructure has sophisticated systems like Kafka and Spark, and the ability to emit and process events at petabyte scale. Yet the data generation process we follow is equivalent to writing a dictionary without meanings. Wait, Didn’t the Data Lake Solve It? A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale. You can store your data as-is, without having to structure it first. As the definition suggests, a data lake focuses on centralized data storage to break an organization’s data silos. The central repository removes entry barriers to integrating and analyzing data from the various sources in an organization. However, as the data lake grows, the complexity of data management grows with it. The producer generates data and sends it to the data lake. The consumers down the line have no domain understanding of the producer and struggle to make sense of the data in the lake. The consumers then connect with the data producer to understand the data. At that point, the producer side’s domain expertise depends on human knowledge that may or may not be available. As the data lake grows, it becomes technical debt rather than a strategic advantage. How Does Data Mesh Solve It? Data Mesh is an enterprise data platform approach that converges the principles of Distributed Domain-Driven Architecture, Self-serve Platform Design, and Thinking of Data as a Product. Data Mesh focuses on treating data as a product. The feature team that works on the product feature has the domain understanding of the data, not the data’s consumer. Data Mesh pushes data ownership responsibility to the feature team: to create the data, attach metadata, catalog it, and store it for easier consumption. Curating and cataloging the data at the data creation phase brings more visibility to the data and makes it easier to consume.
The process also eliminates human knowledge silos and truly democratizes the data. It frees data consumers from worrying about data discovery so they can focus on producing value from the data. How Is a Data Lake Different From Data Mesh? A data lake is like a reporter writing an article for the New York Times. The reporter goes and interviews the relevant people, writes the story, fact-checks it, and delivers the reporter’s narration to the readers. Data Mesh is like writing a book for O’Reilly or a similar publication. The publication provides a foundational infrastructure for all the authors. The authors write their views, add an index and a glossary to the book, and deliver their own narration to the readers. But, There Is Always A Catch Data Mesh sounds very cool, but there is always a catch. This is a great Twitter thread summarizing the challenges in adoption.
Ananth Packkildurai @ananthdurai: The data mesh approach of services owns the data is a great idea. However, there is a catch. If you don’t have data discovery and links between those domains, we will be back to square 1 of good old org data silo world. A process without proper tooling is dangerous to adopt. (December 4th 2020)
As I mentioned, if we blindly adopt Data Mesh principles without the proper tooling, it can easily lead us back to the good old org data silo problem. And, as mentioned in the tweet below, no tool can fix the problem on its own.
Kishore Gopalakrishna @KishoreBytes: You nailed it… putting in a process without tooling is not only dangerous but will create new set of problems that no amount of tooling can fix (December 4th 2020)
The following threads narrate observations from the industry and how to adopt the Data Mesh principles.
Gwen (Chen) Shapira @gwenshap: We want to allow each team that owns services to also own the data store. Maybe even use S3 or Dynamo or something else. (December 4th 2020)
Vinoth Chandar @byte_array: But the thing is, this is how most #data organizations grow organically within a company. Then they try to build out something centralized and few are able to succeed there today. So this is not as radical a shift. (December 4th 2020)
Kishore Gopalakrishna @KishoreBytes: This is definitely not new — saw this at Yahoo. I remember we had close to 8 databases at Yahoo and tons of libraries and tools doing the same thing. It has its pros — each team moves fast but highly inefficient at a company level. (December 4th 2020)
Sriram Subramanian @sriramsubram: I have seen both. Individual teams incentivized to moving fast and building lots of microservices/DBs with 0 leverage and early teams building monoliths with tightly coupled architecture that makes it hard to scale. It is always hard to move from one to the harder (December 5th 2020)
Conclusion Data Mesh is not a technology or a storage solution; instead, it is a set of principles to streamline an organization’s data management. As Gwen, Sriram, Kishore, and Vinoth pointed out, it is an invisible structure in most organizations, and it requires proper tooling to enable the Data Mesh principles. The analogy of evolving a monolithic architecture into a microservices architecture fits the Data Mesh principles well. If you are starting up, a monolithic architecture may work well for you.
As you grow, focus on building tools to label, catalog, organize, and search your data, leading to the adoption of the Data Mesh principles. Links are provided for informational purposes and do not imply endorsement. All views expressed in this blog are my own and do not represent current, former, or future employers’ opinions.
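As a concrete, if simplified, illustration of the “data as a product” idea above, here is a minimal Python sketch of what a data product descriptor and a shared catalog could look like. This is purely my own assumption for illustration — the class, field names, and example values are not part of any Data Mesh specification or of the article’s tooling.
from dataclasses import dataclass, field

# Illustrative sketch only: one possible shape for a "data product" descriptor
# that a producing feature team could publish to a shared catalog. All names
# here are hypothetical, not part of any Data Mesh standard.
@dataclass
class DataProduct:
    name: str                                   # e.g., "orders.checkout_events"
    domain: str                                 # owning domain / feature team
    owner: str                                  # contact for questions about the data
    description: str                            # what the data means, in the producer's words
    schema: dict = field(default_factory=dict)  # column name -> type
    location: str = ""                          # where consumers can read it (table, topic, path)

catalog: list[DataProduct] = []

def register(product: DataProduct) -> None:
    """Producers register their data products so consumers can discover them."""
    catalog.append(product)

register(DataProduct(
    name="orders.checkout_events",
    domain="checkout",
    owner="checkout-team@example.com",
    description="One event per completed checkout, emitted by the checkout service.",
    schema={"order_id": "string", "amount_usd": "float", "ts": "timestamp"},
    location="s3://warehouse/checkout/checkout_events/",
))
The point of the sketch is the ownership boundary: the team that produces the data also writes the description, schema, and location, so consumers can discover it without chasing down tribal knowledge.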
https://ananthdurai.medium.com/data-mesh-simplified-a-reflection-of-my-thoughts-on-data-mesh-4d4f01c37185
['Ananth Packkildurai']
2020-12-10 13:57:48.638000+00:00
['Data Mesh', 'Data Lake', 'Data Engineering', 'Data Catalog']
The Enigma Guide to Avoiding an Actual Pandas Pandemonium
By Pam Wu When you first start out using Pandas, it’s often best to just get your feet wet and deal with problems as they come up. Then, the years pass, the amazing things you’ve been able to build with it start to accumulate, but you have a vague inkling that you keep making the same kinds of mistakes and that your code is running really slowly for what seems like pretty simple operations. This is when it’s time to dig into the inner workings of Pandas and take your code to the next level. Like with any library, the best way to optimize your code is to understand what’s going on underneath the syntax. It can be hard to know where to start, though. There are tools out there that can help boost productivity — but what exactly are these tools, and where can you find them? In the spirit of learning — and sharing! — I recently culled together some of the Pandas tips and tricks I’ve come across over the years. Some of these methods I’ve learned at conferences, others I’ve picked up in books or from colleagues. After running a few tutorials at Enigma — including a session with our Software Craftsmanship Guild, an internal club that promotes the learning and practice of software engineering skills — I realized this information was worth sharing more broadly. So, here is The Enigma Guide to Avoiding an Actual Pandas Pandemonium, which digs into coding best practices, common silent failures, how to speed up your runtime, and ways to lower your memory footprint. This is a bunch of suggestions for optimizing your Pandas code, conveniently packaged together in one place. Have thoughts on the tutorial, or tips you want to share? Let us know! _______________________ Interested in joining the team? Enigma is hiring!
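As a small taste of the kind of optimization the guide is about, here is an illustrative Python sketch of one very common Pandas speed-up: replacing a row-by-row Python loop with a vectorized column operation. This example is my own, offered only as a flavor of the topic, not as an excerpt from the Enigma guide itself.
import numpy as np
import pandas as pd

# Illustrative sketch: row-wise loops are a frequent source of slow Pandas code;
# vectorized column arithmetic is usually much faster for the same result.
df = pd.DataFrame({
    "price": np.random.rand(100_000),
    "qty": np.random.randint(1, 10, 100_000),
})

# Slow pattern: a Python-level loop over rows
totals_loop = [row.price * row.qty for row in df.itertuples(index=False)]

# Faster pattern: operate on whole columns at once
df["total"] = df["price"] * df["qty"]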
https://medium.com/enigma-engineering/enigma-guide-to-avoiding-an-actual-pandas-pandemonium-a6dddeb800cc
[]
2019-02-26 18:17:43.160000+00:00
['Python', 'Software Development', 'Engineering', 'Tutorial', 'Pandas']
The Trouble with Metaphor
That’s the trouble with metaphor. Many of my students were taught it equaled art. They were trained to come up with something that represented something else. Of course, sometimes, to get to the “thing itself,” we must enlist metaphor. Take writing. It’s filled with them. Used well, crafted well, metaphor can coax out the thing itself. A problem I’ve noticed in the visual arts, especially with young students, is that metaphor can be the end goal. Therefore, all too often, the result is a cliché and ultimately a cop-out. A bad painting of a butterfly, for example, could have a moving story behind it. At a junior level critique, everyone is enthralled with the student’s story of struggle and transformation, of a new gender designation or the lack of one. How brave they are, is the consensus. But wait, it’s an awful painting! When metaphor is the objective, this important fact can be ignored. Why did they choose to paint the butterfly in the first place? What would the work be doing if they had cast the butterfly in cement? If they made it out of diaphanous paper? Performed it? Why is the craft meticulous or slapdash? An idea must be transformed through its making to have significance. “What are the materials doing? What is your artwork doing?” I’d ask. Never, “What does your artwork represent?” — to my student’s consternation. A similar dilemma can happen with political work, particularly with political painting, drawing, sculpture, where the artwork itself takes a backseat to the message. Take, for example, a student of Armenian descent who paints a picture depicting some aspect of the Armenian Genocide by the Ottoman Empire during WWI. Again, a compelling story, a righteous intention. But the painting sucks. That critical detail that it’s a shitty painting, or perhaps shouldn’t be a painting at all, can get lost or even regarded as irrelevant. Political art, almost by definition, stands against. But in his book The Aesthetic Dimension: Toward a critique of Marxist Aesthetics, the German-American philosopher Herbert Marcuse observed that what is in opposition to something else, is horizontally equal to what it opposes. It puts art on the same level as what it counters, becoming one-dimensional, straightforward, intelligible. When it comes to art with a message, the difficult part is not to mediate with righteous intention all the mystery out of it. The distinction of art is in its ability to transcend social determination and retain its poetical aesthetic nature. In this way, art is autonomous, so it may rise above opposition and conflict even when its stance is critical. To be any kind of artist, one must remain a poet, Rainer Maria Rilke taught me. When art is autonomous, it is free. Free from all authority but its own. Therein lies its radicality. This immunity from external authority includes never being contingent on it. Art, including the literary arts, must be entirely itself — other, in some ways mysterious, and true.
https://bradleywester.medium.com/the-trouble-with-metaphor-48685116b20c
['Bradley Wester']
2020-11-24 15:26:09.955000+00:00
['Nonfiction', 'Poet', 'Creativity', 'Metaphor', 'Art']
When to Write, and When Not to Write
ON WRITING When to Write, and When Not to Write Simple ideas that will put this debate to bed once and for all Photo by Mark Landman on Unsplash Writing is self-expression on steroids. It is penning beautiful thoughts that destroy painful realities. Writing is meaning multiplied. It is the miracle of taking six deep breaths in, and the magic of taking six deep breaths out. Writing is discipline dominated. It’s the delirium of dancing with your prince until midnight, then dutifully dashing back home to avoid a bitter disaster. Writing gives entrance and allows escape. It takes the burdens off our shoulders and rests them at our feet. It gives us solace in a world where there’s sometimes victory, and oftentimes defeat. Oh, dearest writing! Wisdom we dost seek. To understand when we will — when we won’t. Or when we do — and when we don’t. When to write We write when there’s nothing to write. When, like Hemingway, we “sit down at a typewriter and bleed.” We write when there’s something to write. When the words pour out and it seems like we’re writing faster than we can read. We write when we’re sick of life; when life gives us no respect. “We write to taste life twice, in the moment and in retrospect.”― Anais Nin We write when we’re scared, when we don’t know where to start. We write when we’re prepared, oozing passion and creative art. We write when we’re lonely, facing all kinds of winter blues. We write when we’re excited, brandishing our laundry list of to-dos. We write in the middle of the night, inspired by a dream. We write right after a fight, trying to find the guts to come clean. We write through the tears, desperately seeking peace. We write through our fears, praying for a release. Photo by Seven Shooter on Unsplash We write when we read, when we learn from the best. We write when we need to share great experiences with the rest. We write when we’re obsessed when it’s mind over matter. We write when we’re depressed, to quiet the inner chatter. We write when we’re in love, dopamine hits to the heart. We write when those above demand we sit six feet apart. We write to survive, to quench our thirst to tell stories. We also write to thrive, to leverage our glories. We “write to be a poet, even in prose.” — Charles Baudelaire We write for various reasons, some so private we’ll never disclose. We write to discover what we think we might know. We write to uncover what may get us to flow. Photo by Daan Mooij on Unsplash We write when life tells us we have only months left to live. We write when life yells — and on our knees we finally forgive. We write when we mourn from the depths of our soul. We write to ease the pain, to find peace is our goal. When not to Write When not to write? I admit I don’t know. Is there really a debate? It’s always a “hell yes!” and never a “hell no.”
https://medium.com/blankpage/when-to-write-and-when-not-to-write-c01acc3f13ac
['Nicole Bryan']
2020-12-21 15:52:47.429000+00:00
['Writing', 'Writing Tips', 'Creativity', 'Art', 'Writer']
How To Get Away
NCiR’ Pilin’ On How To Get Away It’s been a while since I gave thought to gore and death that monster’s wrought. Memories of being distraught by doors ajar and clowns Mom bought. The dark recesses in my room hid monsters plotting this kid’s doom. When I’m asleep, they’ll leave their tomb. Their midnight snack’s me, I assume. I used to think best stay in my bed the covers pulled-up o’er my head, better than going under the bed where the monster hides to my dread. But if the monster’s joining me, I’ll not wait here, his face to see. So he won’t know my plan to flee, I’ll say “Excuse me, got to pee!” ©2020 HHThorpe. All rights reserved.
https://medium.com/no-crime-in-rhymin/how-to-get-away-6f7f658c41fc
['Harper Thorpe']
2020-09-17 22:43:56.709000+00:00
['Music', 'Satire', 'Poetry', 'Creativity', 'Humor']
Nightingale’s First All-Visual Article
It’s the most wonderful time of the year. That’s a common refrain around the holidays, and here at Nightingale, we tend to agree. I mean, when else is there so much time to catch up on reading? And where else will you find articles about the resurgence of vinyl or maps of the Roman Empire’s road networks? Happy Holidays! This week, Nightingale published our first graphic article, Surasti Puri’s “An Illustrated Look at the Booker Prize.” In addition to its stunning visuals, the article sheds some light on the gender breakdown of the Booker prize and has some ideas to get you started on your 2020 reading list. If you haven’t quite finished your holiday shopping, don’t worry — Coleman Harris has you covered with his “Gift Guide for the Data Viz Practitioner.” These books, activities, and tools will make great gifts for anyone who loves making data visualizations. In 2019, vinyl outsold CDs for the first time since 1986. Allen Hillery wondered, “Is the Resurgence of Vinyl Driven By Nostalgia or Fad?” To help find an answer, he dove into the data. Paul Kahn continued his series on global information design with “Visualizing Overland Travel, When All Roads Led to Rome.” Learn about the design trick in these early road maps that forms the basis for many of today’s subway maps. Krishna P wrote “Getting Started With Small Multiples,” in which he explains what small multiples are as well as how you can use them. The upshot is that small multiples are a powerful and effective data viz tool that is extremely underused.
https://medium.com/nightingale/nightingales-first-all-visual-article-ba4a4c551664
['Isaac Levy-Rubinett']
2019-12-23 00:18:56.977000+00:00
['Programming', 'Data Visualization', 'History', 'Books', 'Music']
Algorithms In Python: Quicksort
Photo by Martin Adams on Unsplash Today we will not be solving any leetcode question. Instead, we will be looking at a sorting algorithm. Quicksort Quicksort is an efficient sorting algorithm and falls in the divide-and-conquer category of sorting algorithms. Quicksort is an unstable sorting algorithm, which means that if two values in a list are equal, the algorithm may still swap them. The idea behind quicksort is to pick a pivot and find its correct position in the list. Let us walk through an example; it should make the algorithm easier to understand. Suppose we have a list with the following numbers. A = [8, 1, 6, 10, 5] We need to choose a pivot. The pivot element can be chosen randomly; in this case, we take the last element in the list as the pivot. In the next step, we need two variables. Let us name them “i” and “j”. We run “j” from the start to the end of the list, comparing the value at position “j” with the pivot. If the value at “j” is smaller than or equal to the pivot value, we exchange the values at positions “i” and “j” and increment “i”. We continue this step until “j” reaches the end of the list. Once “j” reaches the end of the list, we exchange the value at position “i” with the pivot. Notice that the pivot is now in its sorted position. Next, we break the array into two parts, one before the pivot and one after it. All the elements before the pivot are smaller than the pivot, while all the elements after it are larger. We repeat the same steps on each part until no more elements need to be sorted. Let us look into the code.
class Solution:
    def partition(self, low, high, array):
        i = low
        pivot = array[high]  # pivot
        for j in range(low, high):
            # If the current element is smaller than or equal to the pivot
            if array[j] <= pivot:
                array[i], array[j] = array[j], array[i]
                # increment index of smaller element
                i = i + 1
        array[i], array[high] = array[high], array[i]
        return i

    def quicksort(self, low, high, array):
        if low < high:
            pi = self.partition(low, high, array)
            self.quicksort(low, pi - 1, array)
            self.quicksort(pi + 1, high, array)
        return array
Complexity analysis Time Complexity In the worst-case scenario, if the partitions are imbalanced, the time complexity is O(N²). In the best-case scenario, with the most balanced partitions, the time complexity is O(NlogN). In the average case, the time complexity is also O(NlogN). Space Complexity In the worst-case scenario, the space complexity is O(N) because of the recursive calls, while in the best case it is O(logN).
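For completeness, here is a small usage sketch for the Solution class above, run against the example list from the walk-through; the comment shows the output the algorithm should produce.
# Usage sketch for the Solution class defined above.
A = [8, 1, 6, 10, 5]
sorted_A = Solution().quicksort(0, len(A) - 1, A)
print(sorted_A)  # [1, 5, 6, 8, 10]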
https://medium.com/python-in-plain-english/quicksort-day-34-python-83218a1f6f62
['Annamariya Tharayil']
2020-11-29 09:08:46.168000+00:00
['Sorting Algorithms', 'Algorithms', 'Coding', 'Python', 'Software Engineering']
Android TensorFlow Lite Machine Learning Example
Using TensorFlow Lite Library For Object Detection TensorFlow Lite is TensorFlow’s lightweight solution for mobile devices. TensorFlow Lite is a good fit because: TensorFlow Lite enables on-device machine learning inference with low latency. Hence, it is fast. TensorFlow Lite has a small binary size. Hence, it is good for mobile devices. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API. TensorFlow Lite uses many techniques for achieving low latency, such as: Optimizing the kernels for mobile apps. Pre-fused activations. Quantized kernels that allow smaller and faster (fixed-point math) models. How to use TensorFlow Lite in an Android application? The trickiest part of using TensorFlow Lite is preparing the model (.tflite), which is different from a normal TensorFlow model. To run a model with TensorFlow Lite, you will have to convert it into the .tflite format that TensorFlow Lite accepts. Follow the steps from here. Now, you will have the model (.tflite) and the label file. You can then use these model and label files in your Android application to load the model and predict the output using the TensorFlow Lite library. I have created a complete running sample application using TensorFlow Lite for object detection. Check out the project here. Credit: The classifier example has been taken from the Google TensorFlow example. Sample Application — Object Detection Example. Complete Project Link. Originally published on AfterAcademy.com Check out my other articles on Machine Learning Recommended Reading Happy Learning AI :) Also, let’s become friends on Twitter, Linkedin, Github, and Facebook. Check out all the top articles at blog.mindorks.com Learn Data Structures & Algorithms By AfterAcademy from here.
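To give a sense of the conversion step described above, here is a minimal sketch of producing a .tflite file with the TensorFlow 2.x converter API. This is an assumption on my part, not the exact procedure from the linked guide (which targets an earlier TensorFlow release), and the saved_model_dir path is hypothetical.
import tensorflow as tf

# Minimal sketch: convert a TensorFlow 2.x SavedModel into a .tflite file.
# "path/to/saved_model" is a hypothetical path to your exported model.
saved_model_dir = "path/to/saved_model"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# Optional: enable default optimizations (e.g., post-training quantization)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the flatbuffer to disk; this is the file bundled into the Android app's assets.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)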
https://medium.com/mindorks/android-tensorflow-lite-machine-learning-example-b06ca29226b6
['Amit Shekhar']
2019-12-06 09:26:07.294000+00:00
['Machine Learning', 'AI', 'Android', 'Artificial Intelligence', 'TensorFlow']
Kite Launches AI-Powered JavaScript Completions — Code Faster with Kite
Let’s take a look… Now let’s break it down… Kite can complete up to multiple lines of code at a time, reducing the time you spend writing repetitive code. Kite is able to provide completions when editors like VS Code cannot understand the code. Kite shows completions in more situations, for example after a space. Kite works alongside your editor’s completions. We use carefully-designed filters to reduce noise. We’ve trained a new deep learning model on 22 million open-source JavaScript files to ensure Kite works with your favorite libraries and frameworks like React, Vue, Angular, and Node.js. Kite for JavaScript is free and works 100% locally. You can download it here . Kite makes coding faster and easier The JavaScript ecosystem continually invents new frameworks and design patterns. These inventions make it a vibrant place to be, but it also creates the need to learn an ever-changing set of code patterns and APIs. Kite’s deep learning models have learned all of these patterns, understand the context of your code, so Kite can predict chunks of code and put them in your completions. This can be useful in two ways: If you already know what you need to type, Kite helps you jump ahead to the next task. If you’re having trouble remembering an API or design pattern, Kite can remind you so you don’t need to search on Google. As a result, writing JavaScript with Kite becomes faster and more fun. We’re just getting started Kite can help you ship software faster today. And we’re just getting started. We believe that machine learning can automate away the tedious parts of writing code, and there’s so much more to do. We’re continually exploring ways ML can unlock productivity gains for developers, and we hope that you will join us. A big thanks to the 250,000 developers who use Kite We’re thrilled to share today that over 250,000 people are coding with Kite every month . We feel grateful to have reached this milestone, and we’d like to thank everyone who uses Kite. This amazing progress wouldn’t be possible without each of you giving us the encouragement and feedback that fuels the hard work — 30,000 code commits to date — that got us here. We hope you’ll join us on this journey by downloading Kite! Happy coding, The Kite Team P.S. We are also announcing Kite Pro today, which is our first paid product for Python professionals. Check out our separate blog post about Kite Pro for more info!
https://medium.com/kitepython/kite-launches-ai-powered-javascript-completions-code-faster-with-kite-c2cffc1ecf78
['The Kite Team']
2020-05-12 17:10:32.334000+00:00
['AI', 'JavaScript', 'Deep Learning', 'Python', 'Machine Learning']
When Life Feels Out of Control, Run
One day during lunch, Boy came in, ready to chat. “Mrs. Hayyyyynes, oh my God. Ha! I gotta tell you something.” He pulled up a chair at my desk. “Ok, so you know that girl I told you was my girlfriend?” “Yeah.” “Well, she’s not actually my girlfriend. We have just been acting like it to make her guy jealous.” “Haha, ok.” “And my girlfriend’s friend told me to stop it because I already have a girlfriend.” “Well, yeah, that makes sense.” “I know, right? I should’ve listened.” “Well her friend and me, we ride the same bus. And she was getting mad at me and she told me if I didn’t stop she was gonna tell my girlfriend.” “Sounds like she was serious.” “I KNOW! Anyway, the girl I had been pretending with asked me to sit with her at lunch because her boyfriend was getting mad at her.” “And she wanted that?” “Yeah, Mrs. Haynes. Keep up.” “I got it. Keep going.” “Anyway, I knew I wasn’t supposed to, but this girl wanted my help! So I was like, aight, I’ll sit with you. “And so that’s what I did. I sat with her. And as I was sitting with her, I saw my girlfriend’s friend, and I was like, ‘Oh shoot!” (Except, he said the real word.) “Boy!” “Oh my bad, my bad. I saw her, and I was like ‘Oh shoot!’ (this time, it was actually ‘shoot’). I didn’t know she had this lunch! “And she is facing me, talking to someone whose back is toward me, and she locks eyessss on me, Mrs. Hayyynes.” “Yikes.” “And that’s not it! She then nods at me and the girl she was talking with turns around. And it’s my girlfriend! And I’m like, SHE GOES TO THIS SCHOOL?!” “You didn’t know your girlfriend went to this school?” “No! How was I supposed to know that?” Before I could explain to him that usually one of the first questions you ask your love interest is where they go to school if you, in fact, did not meet them at school, Boy continues. “I don’t know what I’m going to do, Mrs. Haynes! I gotta GO! I am running away! You may not see me next block.” As he’s getting louder and speaking faster, he stands up, and to my surprise, puts his chair back. “I got to GO. Got to RUN!” He slaps the door frame and runs out. Heart stomping, thoughts swirling, pits sweating situations Most of us left behind the out-of-whack hormones and romantic confusion (also known as comedies) when we graduated high school. Poor Boy still has a few years to go. Unfortunately, if he’s anything like me, those situations that welcome a kaleidoscope of butterflies inside his stomach may shift in context, but they will keep coming long after he graduates. And I believe many would agree with me. Stories of high school students withstanding the idiosyncrasies of adolescence amuse us. This is partly due to the fact that many of us can still relate as adults. We may no longer sympathize with the situations, but many of us understand heart stomping, thoughts swirling, and pits sweating reactions to uncomfortable circumstances. These responses to stress are physical expressions known as fight or flight. The high school stories are also funny because they are simultaneously so juvenile. We think, that silly little thing caused all of that inner chaos? We run households. We work full-time jobs while making sure our children are participating in their virtual learning. We have coworkers and bosses with whom we frequently disagree. We have clients who believe they own us the moment they start paying our bosses. Good gracious, those are the things that can make a person want to get the heck out of here! 
We want to run The uptick in adrenaline that follows these stressful situations is a natural phenomenon. It evolved in humans to keep us alive and out of danger. Unlike the cavemen who bore fight or flight, in modern-day society, we cannot run away every time we find ourselves in undesirable situations. Boy could not switch schools though the thought of confronting his love life mortified him. Adults today cannot leave their homes or quit their jobs every time mirky situations appear. Running away, except in truly dire situations, is typically unacceptable. Or, is it? Don’t escape. Do run. Because teen’s brains are still developing, it would be unfair to expect them to approach anything with a clear mind. (Not to mention, it would take away from our comic relief!) They have so many things going on developmentally that affect their decision-making skills. However, those of us who are older, wiser, more responsible… Well, at least older with fully developed brains and more fragile responsibilities, we can take steps to better address our anxious situations. It turns out, our bodies are right to tell us to run. Sometimes we perceive this to mean escape. However, if we let go of the notion of escaping, and instead interpret it as physically running, then we will be able to confront the situation with a calm body and a clear mind. According to the Anxiety and Depression Association of America, “participation in aerobic exercise has been shown to decrease overall levels of tension, elevate and stabilize mood, improve sleep, and improve self-esteem.” Think about the last time life started whirling out of control. Did your mind race and your heart flutter? I bet, at that moment, you would give anything to address the situation with calmness, clarity, and confidence. Instead of yelling at yourself like Edna Mode to Mr. Incredible (“Pull yourself togetha!”), listen to your body: run. But not in the figurative sense, literally run. Once you have finished running, return to the problem. See if that circumstance is as intimidating as you initially thought. Science says you should be calmer, have a clearer mind, and feel more confident. Does that stress you out? You may be thinking, “Um, excuse me? I can’t just up-and-leave for an extended period every time I’m stressed out.” I get it. You have responsibilities! In a study that measured the reactivity of anxiety after being confronted with stressors, those who participated in physical activity were calmer than those who did not. Interestingly, when underactive participants were asked to exercise, they reacted with the least negative effects when the stressor was closer to the time that they exercised. Whereas consistently active participants had low negative effects to the stressors as long as they exercised that day. Though Quarantine may add some flexibility to our schedules, most adults cannot get up and go for a run the moment our bodies tell us to flee. So, we cannot time it in a way to be like the first group of underactive participants. We can, however, join the second group. Create routines that encourage physical activity every day In order to join the second group, we need to create routines that encourage physical activity (nearly) every day. I prefer running because it gets me outside, it is free, and it literally works my butt off. I do not have any children yet, but I will most definitely purchase a jogging stroller the moment I become pregnant. However, running is not everyone’s cup of tea. 
Here is the good news: any form of aerobic exercise on a regular basis will produce similar results. If Quarantine has brought us one good thing, it would be the plethora of online fitness instructors. Open Youtube or Instagram and search HIIT (it stands for high-intensity interval training). Zillions of personal trainers pop up with free, at home, zero equipment workout videos. (Of course, there are links to where you can pay money for even better workouts). Some of these workouts last as little as ten minutes. Everyone has ten minutes to spare in their day, even if that means waking up ten minutes earlier. In fact, ten minutes of aerobic exercise will fight fatigue and stress much better than the extra sleep. Let boy ring in your ear Sure, you will still encounter stressful situations that will make you wish your predicament involved pretending to date someone only to realize your girlfriend was watching. But if you focus on yourself and maintain healthy habits, you will be far better prepared to handle whatever chaos you encounter. So, here is to letting Boy ring in our ears: I have GOT TO RUN!
https://medium.com/curious/when-life-feels-out-of-control-run-cb1615322cf7
['Meg Haynes']
2020-12-02 20:53:28.663000+00:00
['Self', 'Mental Health', 'Health', 'Life Lessons', 'Life']
Define a Small Daily Action for Your Biggest Goal in 3 Minutes
Define a Small Daily Action for Your Biggest Goal in 3 Minutes It’s stupid math, but it works Photo by Crissy Jarvis on Unsplash If you want to achieve your biggest goal, all you have to do is transform it into a number and divide by 365. If you want to write a book, make it 365 pages and write one per day. If you want 10,000 subscribers, start manually reaching out to 27 a day. If you want to be a director at your company, email one new person each day for a year. The point of this overly simple, naïve napkin math isn’t to nail the plan for your journey. The point is to get moving. The main reason most people don’t achieve their big aspirations is that they never map out a path towards them. Even if the map is poorly drawn, not the fastest route, or flat out wrong, it’ll still help you get there because now, finally, you can start walking. Without a map, you can’t go from A to B. When you don’t know which step to take next, you won’t take a step at all. No map, no departure from A, let alone progress towards B. Your car just stays in the driveway. “I’d love to write a book someday!” That’s vague. Don’t settle for vague. If you’re satisfied with the mere idea of it, your dream is dead in the water. Don’t let your dream die in the water. What will it actually take for your book to come to fruition? How does someday become October 2nd, 2022? Break it down! Whip out a napkin. A book has lots of pages. Let’s say yours has 365. That’s one per day for a year. See how easy that was? From super-vague to ultra-specific in two seconds. Now, you have something to do tomorrow. This off-the-cuff calculation won’t answer all your questions, but it doesn’t have to in order to be valuable. The point is to get a clear-enough picture of the road from A to B. The point is to get your car out of the driveway. As soon as you sit down to write your first page tomorrow, you’ll realize you don’t know what your book is about — but for the first time, you’ll actually think about it. You’ll take notes, brainstorm ideas, and be on your way. You’ll also realize you don’t know how to write one page a day. Immediately, your simple equation breaks. That’s fine. Adjust it. New goal: Write half a page for two years instead of one. There we go! The pressure is off for today, and tomorrow? Tomorrow, the plan will adjust once more. The highest-certainty path to achieving any goal is to define a small, daily, repeatable action — and do it every day. You’ll have to change the size of the action many times, maybe even the action itself, but none of that matters once your car has left the driveway. You’ll keep driving. Smart people know that achieving goals has little to do with planning and a lot to do with doing. Plans change all the time. They must. Life happens. Obstacles occur. But objects in motion tend to stay in motion, and despite this law coming from physics, humans may be its most powerful example. If you want 1,000 followers, ask 3 people to follow you today. If you want a platinum record, write half a song today. If you want to meet someone special, reach out to one person today. Turn your biggest goal into a number, break it into 365 chunks, and pin those chunks on your calendar. This is stupid, gullible math. That math, however, acknowledges both the futility of planning and the fact that, every day, you must keep moving. Don’t let the fog stop you from achieving your dreams. It only takes minutes to clear it away. You won’t catch every bit of it. You’ll still stumble and fall. 
But, maybe for the first time, you’ll be on your way — and that feeling is hard to put in numbers.
https://ngoeke.medium.com/define-a-small-daily-action-for-your-biggest-goal-in-3-minutes-454463fa821b
['Niklas Göke']
2020-06-18 12:27:17.882000+00:00
['Habits', 'Creativity', 'Psychology', 'Goals', 'Self Improvement']
Think Designing an Effective A/B Test is Easy? Think Again
Let’s begin by considering the question again. Lookout is aiming to increase the number of users on its platform. Should it advertise on TV? Lookout wants to increase the number of users on its platform. It doesn’t seem that advertising on TV would hurt the number of acquired users. (Perhaps this is not the case, but we’ll take it as an assumption. In reality you’d need to test this assertion out.) The obvious answer, then, without any testing, is yes, advertise on TV! Regardless of the cost, Lookout should advertise on TV if it’s looking to increase users. Surely paying for exposure, at any price, shouldn’t hurt. However, this is obviously not the intention of the question. Let’s rephrase it: Lookout is aiming to increase the number of users on its platform. Is it worth it to advertise on TV? Cost is now introduced. If it costs several million dollars to run TV ads but only a small handful of users joined Lookout, it’s not worth it, even if the service did increase the number of users on its platform. But as with most things, grounding our problem in reality also made it more complicated. We now need to not only think about cost, but also add some aspect of comparison. Is running TV ads worth it, compared to what? To start, we might consider something like “cost per user” and compare it to other advertising mediums, like Google search ads or Facebook ads. Finally, we can add some more clarity to our question: Lookout is aiming to increase the number of users on its platform. If we have $x to spend on advertising, should it be spent on TV or some other ad medium? Great! So we are aiming to find the cost per user. A simple approach towards calculating this is: cost per user = total ad spend / number of users acquired from the campaign. This definition of the metric is inherently a bit flawed. It essentially treats all users the same; that is, if ad A only appeals to 30% of the audience and ad B only appeals to 50% of the audience, we must choose ad B. On the other hand, it may be true that ads A and B appeal to different segments of the audience, and that together the ads can appeal to 80% of them. Instead, we may try to come up with some model M that estimates the cost to acquire a given user from that user’s attributes and the ad medium. This more dynamic model can be used for better comparison. At the same time, we’ve made the whole situation a lot more complex. If we’re going to train a model to identify the cost to acquire a user, we need training data with one row per acquired user, containing that user’s attributes, the ad medium, and the cost to acquire them. We still need to determine how to acquire the cost of the user for this model to work. We’ve wound ourselves into a bit of a loop: to find the cost to acquire a user, we need to find the cost to acquire a user. There are probably ways around this, but they’re too complex to cover in just one article. In the meantime, let’s just return to our original simpler definition of cost to acquire a user, acknowledging that it has weaknesses. We still have an important question to consider that we’ve been pushing back a bit: how does one determine whether a user joined Lookout because of television? There’s a fair bit of obscurity around what constitutes ‘from’. For example, what if a user clicks a Facebook ad for Lookout and signs up, but only because they saw a Lookout ad on TV? In this case, the television ad was the determining factor. There are further questions of overlap that go in the other direction: even if someone goes to Lookout directly after watching a TV advertisement, what role do previous Lookout ads play in exposure and brand awareness?
Again, we’re going to step over these problems for now, although in real experimentation they need to be considered. Noise between interfering sources of variation is a real problem that exists in most A/B tests. It’s not enough to just say that variation is accounted for on both sides; it causes inaccuracies and precision errors. Regardless, we may design a test that looks something like this. We take two groups of DMAs, or Designated Marketing Areas (discrete marketing regions that can include cities and/or rural areas), that are very similar to each other. We run only TV ads in one DMA group and only Facebook ads in the other (assuming we’re only comparing these two mediums). Then, we measure the number of new users from each of these areas. If there is a substantial difference between the groups, we can attribute it to the type of advertisement. We do need to be careful, though; because this type of test has such a large error range, we likely cannot take away much unless the difference is truly significant. What’s nice about this is that it embodies the spirit of A/B tests, and it makes the problem a bit simpler. Factors other than the presence of a certain type of ad will be cancelled out (at least in theory). We can draw conclusions about the cost-per-user for TV ad and Facebook ad campaigns, and compare them. Finally, we can make our boss happy! Not so fast. The task of finding these DMA groups is extraordinarily difficult. The two DMA groups need to be representative of each other, on various levels, such that they can be properly compared. The people in the DMA groups must be similarly receptive to certain types of ad campaigns (that is, the campaigns must be similarly effective on them). For example, if the audience in group A is 30% receptive to Facebook advertisements, the audience in group B should be somewhere between 25% and 35% receptive. This way, we can properly compare the users acquired. Access to advertisement mediums must be similar across DMA groups. If few people in DMA group A turn on their TVs and many people in DMA group B don’t use Facebook, we can’t expect to compare the advertising mediums properly. (This is separate from receptiveness, which assumes equal access to advertisement and concerns user choice.) Other factors that may impact the results, like population sizes, need to be very similar. The reason we choose DMA groups is that it allows us to create two similar groups without needing any two individual DMAs to be identical, which would be pretty much impossible. Although this is difficult, it’s totally possible. Data about access to advertisement mediums is available. Regional TV stations have viewer data, and Facebook provides reach data to advertisers. We can build a probabilistic model that estimates whether a user will see an ad or not, and apply it to the entire population of a DMA to find how receptive it is overall. In the end, our data may look something like one row per DMA, with its population, its estimated receptiveness to TV ads, and its estimated receptiveness to Facebook ads. Then, we can set some constraints and write a program to explore possible splits of DMA groups. Machine learning certainly can be used for this, but it might be overkill, depending on the number of DMAs we have data for. So, finally, we’ve arrived at a solution — kind of; we’ve explored plenty of issues with it. We have generated many more questions than answers, but hopefully it illustrates just how much thought and problem-solving is required to engineer a meaningful and effective A/B test in the real world.
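To make the simple cost-per-user comparison above concrete, here is a minimal Python sketch. All the numbers are made up for illustration; in practice the spend and new-user counts would come from the DMA-group experiment, and this sketch deliberately ignores the attribution and noise issues discussed earlier.
# Minimal sketch of the simple cost-per-user comparison described above.
# The figures below are hypothetical placeholders, not real campaign data.
campaigns = {
    "tv":       {"spend": 250_000.0, "new_users": 4_100},   # DMA group A
    "facebook": {"spend": 250_000.0, "new_users": 5_300},   # DMA group B
}

def cost_per_user(spend: float, new_users: int) -> float:
    """Total ad spend divided by the number of users acquired from that campaign."""
    return spend / new_users if new_users else float("inf")

for medium, result in campaigns.items():
    cpu = cost_per_user(result["spend"], result["new_users"])
    print(f"{medium}: ${cpu:.2f} per acquired user")

# Only treat the gap between mediums as meaningful if it is large
# relative to the error range of the DMA-group test.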
https://towardsdatascience.com/think-designing-an-effective-a-b-test-is-easy-think-again-e1bcb1f48210
['Andre Ye']
2020-12-27 17:47:27.904000+00:00
['Machine Learning', 'Marketing', 'Business', 'AI', 'Data Science']
Kate, Tony and My Mom
Kate, Tony and My Mom It’s time to look at suicide more clearly My mom in better days The news about Kate Spade saddened me, but Anthony Bourdain did me in. I felt more connected to Bourdain, an irresistible truth-teller with a sharp wit. Their deaths are a terrible loss, but it was their families I thought of almost immediately. I know very well the lifetime of difficulty ahead for them. My mother shot herself when I was thirty-three. I try to tell people this as evenly as possible. As if it was an interesting fact of my life, and not a deep and central nerve. A thing woven into flesh and bone. Children left behind by suicide carry the persistent ache of a parent erased. They wanted to die (supposedly), making our grief secret and complicated. A grievous injury not only to its intended victim, but flattening everyone in its vicinity. My mother’s suicide is over for her, but lives on in us. Sometimes, it’s a blinding, searing pain. Most often, the hum of a steady sadness. It’s left me wondering about my value as a person. It’s a hell of a thing to look and sound like a woman who no longer wanted to exist. I’ve continued to get out of bed in the morning and do my best to carry on, but it has broken me in some permanent ways. The worst legacy of suicide is the corridor it opens in other’s minds. An unthinkable choice made possible. In fact, two risk factors for suicide are a family history of suicide and exposure to another person’s suicide. Pain begets pain. Suicide is a public health crisis that repeats and perpetuates. The cost of shaking our heads and believing in its inevitability is more death. I can’t state this plainly enough, it’s a terrible myth that nothing can be done for a suicidal person. Data and common sense don’t bear it out, and yet our health policies and societal behaviors support aversion and denial. It’s still taboo to examine it with clear eyes, talk about it, admit it happened in your family, or even offer condolences. When my mom died I didn’t receive a single card or call from extended family. Years later one admitted to not knowing what to say. ‘I’m sorry for your loss’ works as well for people who lost a loved one to suicide as cancer. Survivors have to deal with the death, and then in a double whammy are expected to quietly slink away with their shame. I’m angry about this treatment, and I’m not alone. Rates of suicide are rising sharply, nearly 45,000 Americans took their own lives in 2016 alone. Each year hundreds of thousands of people are left to deal with the aftermath without psychological counseling or societal support. The very people most at risk. 51% of all suicides involve a firearm. Suicide costs the US $69 billion annually. This is reality. I refuse to shame any suicide victims, nor will I live in shame. My mother had severe mental illness and took her own life. I could cry an ocean of tears and it wouldn’t be enough to purge that grief. It’s been made worse by the stigma of suicide which I am choosing to shed on this day. What can you do? Get educated about the facts that lead to suicide, destigmatize discussions of mental health, insist on gun policy that accounts for suicide, support affordable access to mental healthcare, make it ok for people in your life to be seen struggling in front of you, and for god’s sake, send a condolence card.
https://rebeccathomas.medium.com/kate-tony-and-my-mom-73f9ed06b71f
['Rebecca Thomas']
2019-06-29 17:40:21.163000+00:00
['Suicide', 'Anthony Bourdain', 'Mental Health', 'Health', 'Life']
A progressive Web application with Vue JS, Webpack & Material Design [Part 1]
[Updated 11/27/2017] Read the original article on Sicara’s blog here. Progressive web applications are the future. And more and more big companies are starting playing with them (such as Twitter: https://mobile.twitter.com/). Imagine a Web Application that you can browse in the subway, that keeps engaging its user through notifications, up-to-date data and that offers app-like navigation, and you get an overview of PWAs capabilities. A Progressive Web Application (PWA) is a web application that offers an app-like user experience. PWAs benefit from the modern Web technological innovations (Service Workers, Native APIs, JS frameworks) and raise web application quality standard. If you want to learn more about PWAs, please visit this great Google developer page. Look at the following PWA ! It looks like a native app, doesn’t it? Twitter progressive web application From the developer point of view, PWAs have huge plus on native applications. It’s basically a website, so: you can write them with any framework you want; one code to rule them all: it is cross-platform and cross-devices (the code is executed by user’s browser); easy to ship: no need to download it through a Store. However, in early 2017, PWAs still face some restrictions: Safari does not support some basic PWAs features, such as Service workers, but Apple seems to be working on it; some native functions are still not supported: for more information, see this page What web can do. Tutorial objective This tutorial aims to create a basic but complete progressive web application with VueJS and Webpack, from scratch. Our application will meet all the requirements announced in introduction: progressive, responsive, connectivity independant, etc. I want to give you an overview of what can be achieved with PWAs: fluid native-like application, offline behaviors, native features interface, push notifications. To keep things challenging, we are going to build a cat picture messaging app: CropChat! Cropchat users will be able to read a main flow of cat pictures, open them to view details and post new cat pictures (first from internet, then from device drive or camera). The tutorial will be split in several parts, that will be published successively: Basic components of our PWA Our Progressive Web Application is based on modern components you are going to like! Let’s start with part 1! [PART 1] Create a Single Page Application with VueJS, Webpack and Material Design Lite If you are not familiar with VueJS 2, I strongly recommend that you take a look at the official tutorial Build the VueJS App base We are going to use Vue-cli to scaffold our application: npm install -g vue-cli Vue-cli comes along with a few templates. We will choose pwa template. Vue-cli is going to create a dummy VueJS application with Webpack, vue-loader (hot reload!), a proper manifest file and basic offline support through service workers. Vue pwa template is built on top of Vue webpack template. Webpack is a modern and powerful module bundler for Javascript application that will process and build our assets. vue init pwa cropchat You will be asked a few questions. Here is the configuration I used: ? Project short name: fewer than 12 characters to not be truncated on homescreens (default: same as name) cropchat ? Project description A cat pictures messaging application ? Author Charles BOCHET < ? Vue build standalone ? Install vue-router? Yes ? Use ESLint to lint your code? Yes ? Pick an ESLint preset Standard ? Setup unit tests with Karma + Mocha? Yes ? 
Setup e2e tests with Nightwatch? No vue-cli · Generated "cropchat". This process creates a project folder with the following subfolders: build: contains webpack and vue-loader configuration files; config: contains our app config (environments, parameters…); src: source code of our application; static: images, css and other public assets; test: unit test files propelled by Karma & Mocha. Then run: cd cropchat npm install npm run dev
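Once the dev server is running, you already benefit from the template's basic offline support. For context, here is roughly what service worker registration looks like in a PWA. This is an illustrative sketch, not the template's exact generated code, and the '/service-worker.js' path is just an assumption for the example; the pwa template generates and registers its own worker as part of the production build.

// Illustrative only: the general shape of a service worker registration.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', function () {
    navigator.serviceWorker.register('/service-worker.js')
      .then(function (registration) {
        console.log('Service worker registered with scope:', registration.scope);
      })
      .catch(function (error) {
        console.error('Service worker registration failed:', error);
      });
  });
}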
https://medium.com/sicara/a-progressive-web-application-with-vue-js-webpack-material-design-part-1-c243e2e6e402
['Charles Bochet']
2019-12-06 14:14:48.972000+00:00
['JavaScript', 'Data Visualization', 'Vuejs', 'Development', 'Pwa']
The Fountain of Life
It’s five in the morning; I live for this time of day. Outside it’s dark and cool, unsullied by the rays of the sun. I listen to the sound of the birds. They speak with the voice of the earth. I listen and I write with no rigidity or hesitation. The page of my notebook is a canvas, mercurial in its ability to adapt and become anything I imagine. The only boundaries are the depths of my mind. A new day approaches — a page in the book of our lives to explore. I’m inspired; I can do no harm.
https://medium.com/scribe/the-fountain-of-life-2004f4a142d3
['Vincent Van Patten']
2020-05-17 23:02:10.317000+00:00
['Creativity', 'Life', 'Life Lessons', 'Inspiration', 'Writing']
Engineering visual search inside Pinterest browser extensions
Kelei Xu | Pinterest engineer, Product Engineering Pinterest is a visual discovery engine with 100B ideas saved by 150M people around the world. We recently launched three new ways to discover more ideas on Pinterest and from the world around you with Lens BETA, Shop the Look and Instant Ideas. Today we’re bringing that same visual discovery technology to the whole internet with the launch of visual search inside Pinterest browser extensions. For the first time, you can use our visual search technology outside of Pinterest across the web. Just hover over any image you see online and find related ideas and products without leaving the site you’re on. In this post, we’ll share how we built visual search into the Pinterest browser extension for Chrome. Inception The idea and initial prototypes for visual search inside Pinterest browser extensions started almost two years ago. Before we even launched visual search to Pinners, a couple engineers and a designer brainstormed product ideas where we could apply our visual search technology. The browser extension was one of the first ideas we came up with and prototyped. We were excited about the concept, but decided to prioritize launching visual search within our own app first. Since then, we’ve launched new visual search features like real-time object detection, and made significant improvements to our technology, including improving our visual model, developing new state-of-the-art visual signals and increasing the number of objects we recognize. Now, we’re launching visual search for the whole web. Serving visual search requests outside Pinterest There are two ways to visually search using the Pinterest browser extension. After you download the Pinterest browser button for Chrome, just hover over an image, click the visual search icon (magnifying glass) and get related results. You can also get results for the entire visible web page by right clicking on the page. Clicking on the visual search icon triggers a flow where we take the URL of the image and render it in our visual search overlay. When you right click on the page to search, we use Chrome’s captureVisibleTab API to screen capture the entire page. This allows us to visually search on things that aren’t static images, such as videos and GIFs. But, captureVisibleTab only works on background scripts and not the injected content scripts that handle all the UI. We use Chrome’s message passing API to send the screenshot data URI to our content script, resize it and display it as an image in our visual search overlay on the webpage. All of this happens in real-time, in a fraction of a second. To set up the visual search cropping selector interface, where you can move and resize the search box around anything in the image, we resize the image or screenshot to fit inside the available page height and be no greater than 50 percent of the available page width. We draw the resized image as the background of an HTML element and overlay it with a transparent canvas which contains the cropping selector. When we initially show the visual search overlay, we select about 90 percent of the image, animating the selector inwards from the edges so it’s apparent to the Pinner what’s going on. Backstage, we draw the original image into a hidden canvas and convert it to a data:URI using canvas.context.getImageData. In order to reduce latency, we resize the image to the minimum size necessary for our visual models. 
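To make the flow above more tangible, here is a simplified sketch of the background-script capture and the message passing back to the content script. The message names, the overlay helper and the omitted manifest permissions are assumptions for the example, not Pinterest's actual code.

// background.js: only the background script may call captureVisibleTab.
chrome.runtime.onMessage.addListener(function (message, sender) {
  if (message.type !== 'CAPTURE_PAGE') return;
  chrome.tabs.captureVisibleTab({ format: 'png' }, function (dataUri) {
    // Relay the screenshot to the content script that asked for it.
    chrome.tabs.sendMessage(sender.tab.id, { type: 'SCREENSHOT', dataUri: dataUri });
  });
});

// content-script.js: request a screenshot, then render it in the overlay.
chrome.runtime.sendMessage({ type: 'CAPTURE_PAGE' });
chrome.runtime.onMessage.addListener(function (message) {
  if (message.type !== 'SCREENSHOT') return;
  var img = new Image();
  img.onload = function () {
    // Resize to fit the page and draw into the visual search overlay
    // (showVisualSearchOverlay is a hypothetical UI helper).
    showVisualSearchOverlay(img);
  };
  img.src = message.dataUri;
});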
After the Pinner finishes making their crop selection, we send the data:URI to our background script, along with the selector’s top, left, height and width values, so we know what to search and where to look. In our background script, we convert the data:URI into a blob and send all the data to our API via an XMLHttpRequest. We always search for the initial selection on load, so there are some results (and hopefully some annotations) to work with. Search results come back from the API in the form of Pin objects. We render these as Pins in a familiar-looking Pinterest grid, which can be immediately saved or run through Search again, right there on the page. We’ve also added hovering Search buttons to images found on the page when someone clicks the browser button to help make visual search more discoverable. API layer On the API layer, we need to do two main things: upload the image from the client to a temporary S3 store and send the image to our visual search service. These used to be dependent, sequential tasks until one of our engineers parallelized them, cutting latency greatly. We temporarily store the image for performance reasons. On the initial search, we upload the raw image to the API along with crop coordinates, and the API sends back a link to the image. For second and subsequent searches we repeat this link back to the API along with new top, left, height, and width values, so we don’t have to keep sending the raw image data, which would be very wasteful. Future plans With this update, you can now use Pinterest visual discovery technology to find ideas in our app, across the web and out in the world. And this is just the beginning. Here are just a few of the things on the roadmap: We’ll bring real-time object detection to browser extensions to parallel the visual search experience in our app. This enables Pinners to simply tap on objects we identify and get results vs. manually identifying and pinpointing objects within images. We want to expand beyond visually similar search results to show you how to bring ideas to life, similar our approach with Lens’s results. For example, if the input image is an avocado, we want to show you more than other avocados, including health benefits, recipes and how to grow them. We’ll bring visual search to all our browser extensions. To start, we’re rolling out visual search inside the Pinterest browser button for Chrome to Pinners globally today. If you’re interested in solving computer vision challenges like these, join us! Acknowledgements: Albert Pereta, Andrew Zhai, Christina Lin, Dmitry Kislyuk, Kelei Xu, Kent Brewster, Naveen Gavini, Patrik Goethe, Steven Walling, Steven Ramkumar, Tiffany Chao, Tonio Alucema
https://medium.com/pinterest-engineering/engineering-visual-search-inside-pinterest-browser-extensions-90e7ed9d2b14
['Pinterest Engineering']
2017-03-07 19:55:00.021000+00:00
['Deep Learning', 'HTML', 'AI', 'Computer Vision', 'Visual Search']
The Geography of a Creative Life
The geography of a creative life is different than that of a normal one. It doesn’t follow predictable arcs, have clearly marked destinations or well-lit paths. It requires you in the words of my friend AJ Leon to “grab a machete and hack your own.” You don’t take the MCAT or pass the bar exam. No certification or diploma makes you qualified to do your work. Instead of choosing from the options in front of you, you take the scenic route and explore the possibilities that around you. False Starts I’ve had many false starts in my life. I quit Muy Thai, Bass Lessons, and Capoeira within a few weeks of starting I started a dozen writing projects, one of many digital graveyards where early incarnations of my work are buried Everyone has false starts. Entire industries thrive on false starts. We sign up for a class but never remove the shrink-wrap. Or we look for the next thing that’s going to solve our problem when we haven’t used the first one. We join the gym but never attend. We purchase an instrument but never practice A false start is better than standing still. You try something. You learn something. False starts allow you to collect data points and pay attention to what you find engaging. False starts are only a problem when they stop us in our tracks for good. Dead Ends When I was 20, I had a plan. But I quickly realized that life rarely goes according to plan, especially if you make that plan when you’re 20. I was fired on my 25th birthday, graduated into two recessions, and was near broke at 30. You couldn’t have planned such lousy timing., If you choose to pursue a life of meaning, intention, and purpose, you’ll hit dead ends. You don’t choose this kind of life if you want to get from A to B without stopping anywhere along the way. Sometimes you have to hit rock bottom to reach your peak. New beginnings are often disguised as endings, and dead ends precede significant change. Dead ends hurt. When a book doesn’t sell copies, the album is a dud, and the project fails we hit dead ends. “Artistic losses” are our miscarriages said, Julia Cameron. Layovers Every creative journey has a layover between the beginning and the final destination. Day jobs and whatever else we need to do to pay the bills are layovers in the geography of a creative life: Steven Pressfield picked fruit and drove trucks Michael Crichton went to Law School John Legend Worked for the Boston Consulting Group People on a layover know it’s temporary. They spend a small part of life they’re life doing what they need to do so they can do what they were born to do. Dyana Valentine says that you should treat your day job as the first angel investor in your dream or company. Treat your layovers accordingly. Detours In the detours of a creative life, we arrive at what elle luna calls the crossroads of should and must. We begin a part of the journey where everything is unknown, and anything is possible: When a doctor quits her practice to pursue some humanitarian effort, she takes a detour When a designer walks away from a startup to make art, she takes a detour Detours take us into uncharted territory. We can approach them like a person who drives across the country and only stops for gas. Or we can stop and look around. All it takes is one turn in a different direction to end up at a different destination. When you arrive at the detour, you’ll be encouraged to take the tried and true path, and discouraged from enduring the uncertainty of an unproven path. The detour is the call in every hero’s journey. 
Peaks and Valleys In 2013, I was on top of the world. I became a WSJ best selling author. I made more progress with my career in 6 months than I had in all the years prior. By the summer of 2014, I was in a dark valley, my crucible, or what I referred to in my previous book as the impact zone. You’re taking wave after wave on the head, and it seems like you’re never coming back up for air. We canceled an event because we didn’t sell enough tickets. A few weeks later an editor contacted me about writing a book. When you’re in a dark valley, it can be hard to find reasons to live. The wounds feel like they’re never going to close. It’s been said that cracks are how the lights get in, but in the midst of grief that light passes through you, quickly fading back into darkness. Every experience is intensified, and every emotion is heightened. You feel beyond lonely, yet you can’t be around people. You want to climb, but you can barely stand. You want to feel intimacy, but all you’re capable of is distance. But every dark chapter eventually comes to an end. Permanence is diminishing. The wounds close and we’re left with the scars of distant memories, and valuable lessons to carry us into what comes next. The clouds clear. The sun rises again. In the midst of darkness I always reflect on these words from Unmistakable Creative guest Ananta Ripa Ajmera It’s all about connecting with the sun because when we wake up before the sun, we’re able to see that transition from darkness to light. And I think there’s something so deeply healing to the human psyche about seeing that happen because it’s a reminder to us the darkness is temporary, and it passes and gives way for the light of the sun of the day. So too do these dark habits and destructive patterns and thoughts that we have only have a temporary existence in mind. All seasons of adversity eventually come to an end. In the dark valleys of a creative life, loss creates an opening. We lose what wasn’t meant to be so we can focus on what we’re destined for. False Horizons At the height of his career, when he was so famous that people recognized him on the streets, Ed Helms told Sam Jones “life is a series of false horizons.” The other day my friend David Burkus asked me “how’s it going?” This is code among writers for “how many copies has it sold.” I told him that I hadn’t checked the Amazon rankings more than once. All I knew was that the book seemed to be resonating with readers. When we’re not at the mountaintop, we think to reach it will lead to everlasting happiness. When we reach the mountaintop we understand why accomplishing our goals won’t make us happy forever. The expression of your soul’s calling is in a perpetual state of evolution. There is no I’ve made it in moment. There ’s only a craft to master and more work to be done. To be an eternal master, you must be a perpetual student.
https://medium.com/the-mission/the-geography-of-a-creative-life-23b17660ca68
['Srinivas Rao']
2018-09-04 01:15:55.078000+00:00
['Life Lessons', 'Writing', 'Art', 'Creativity']
Big Data’s Role in Creating Customer-Centric Business Intelligence
Customer experience has been one of the top focus areas for CIOs, and CMOs in recent years. A key requirement for improving customer experience is understanding the customer: their past and current interactions inside and outside the company, their preferences, demographic and behavioral information, etc… The big data phenomenon has introduced the ability to obtain a much deeper understanding of customers, especially bringing in social media data. With the volume and different types of data we have now available companies can run more sophisticated analysis, in a more granular way leading to a change in the size of customer segments. It is shrinking down to one, where each individual customer is offered a personalized experience based on their individual needs and preferences. This notion brings more relevance to the day-to-day interactions with customers and basically takes customers satisfaction and loyalty to a new level that was not possible before. Instead of relying on yesterday’s data, which may not be pertinent anymore, the solution should analyze the latest information and turn them into a deeper understanding of that customer. With that knowledge, the company can formulate real opportunities to drive higher customer satisfaction. Article source is from the book: “Unlocking Your Empire — Keys to enable your social media-powered business”, by Tullio Siragusa Know the numbers, know your business! Numbers are the fundamental language of business. The bottom line on the income statement is a number. The business plan is expressed specifically as numbers on the operating budget, numbers that may derive largely from statistical projections of revenues and costs. Decisions to invest in assets that can accelerate the growth of the business are usually based on numbers that reflect the expected profits and risks of each alternative use of invested funds. Success or failure of the business or any of its parts typically comes down to numbers. It has been well established that quality is the key to long-run growth in revenues. However, measuring quality is not enough. Controlling the quality of productions in a manufacturing plant or the quality of customer service by inspecting and measuring goods and customer satisfaction does not eliminate the need for commitment-to-excellence programs, thorough training of production and service personnel, and preventive maintenance of equipment. Regression analyses and moving average methods of time series analysis are two of the most commonly applied forecasting tools used in business, largely because they are robust yet easy to use. Other forecasting techniques range from qualitative approaches, such as juries of expert opinion, and subjective estimates of the sales staff, to highly sophisticate statistical methods of time series analysis, such as the Box-Jenkins and spectral analysis method. They are important in strategic planning to project consumer demographics that can prove critical in your ability to anticipate future consumption patterns. They are useful in marketing to estimate the effects of changes in pricing policy on sales volume and market share. No matter how you look at it, effective management is much more than just a matter of working with numbers. The successful manager relies on common sense and intuition; sensitivity to human factors that defy quantification, and creativity that transcends the numbers. When the numbers send up a red flag, the successful manager looks beneath them to find out what is going on. 
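As a small, concrete illustration of the moving-average forecasting mentioned above, the sketch below forecasts the next period as the mean of the last few observations. The revenue series and window size are invented for the example.

// Forecast the next period as the average of the last `windowSize` values.
function movingAverageForecast(series, windowSize) {
  if (series.length < windowSize) {
    throw new Error('Need at least ' + windowSize + ' observations');
  }
  const recent = series.slice(-windowSize);
  return recent.reduce((sum, value) => sum + value, 0) / windowSize;
}

// Hypothetical monthly revenue figures, in thousands.
const monthlyRevenue = [120, 132, 128, 141, 150, 158];
console.log(movingAverageForecast(monthlyRevenue, 3)); // → (141 + 150 + 158) / 3 ≈ 149.7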
Most successful managers also know that the business cannot thrive without close attention to the numbers and that tools designed to work with the numbers can be indispensable. Social Media started a revolution Today there are other numbers to consider derived by social media reach, influence, sentiments, and tone; all which need to be measured, and put into the call for action operating model. Today’s successful manager understands that quantitative methods can be powerful agents for solving the problems of human institutions and in some cases human beings. In capitalist economies such as that of the United States, Canada, and the Western European countries, managers of firms are continuously faced with numerous choices. Managers of firms are assumed to have certain objectives, such as the maximization of profits of shareholder wealth, or the minimization of the cost of producing a given level of output. Because managers and consumers are pursuing their own private interests and decisions are made in a decentralized manner, rather than by a central planner, a very important question concerning the coordination of economic activities arises. This long-debated problem has been solved today with the explosion of access to real-time data that can be derived out of social media, and big data. Yet there are a few ways to manage this further, one is microeconomics, which seeks to provide a general theory to explain how the quantities and prices of individual commodities are determined. The development of such a theory will enable us to predict the effects of various events, such as industry deregulation and oil price shocks, on the quantity and price of output. One of the most important properties of the competitive market equilibrium is that the quantity produced is the socially efficient quantity. The cost of producing the last unit of output just equals consumers’ marginal willingness to pay for it. The supply-and-demand framework enables us to analyze or predict the effects of various events and government policy changes on the price and quantity of goods. Social media sentiment allows us to tie into it, real-time consumer opinion. The other way to manage the economics, is macroeconomics, this approach is concerned with the issue of how the quantity and price of output of individual firms or industries are determined. In contrast, macroeconomics addresses the determination of the entire economy or aggregate output and price. The most widely used measure of aggregate output is the gross national product (GNP Index); the market value of all final goods and services produced in an economy within a given time period. One of the aspects of macroeconomics is the fluctuation of the existence or lack thereof, of the trade-off between unemployment and inflation. However in the 1990s the trade and investment flow between the US and other economies increased, there is another variable of great concern to business people and policymakers: the exchange rate. This can change substantially by a few percents in a single day. Then there are interest rates; one important practical implication of the interest rate parity equation is that increases in the US interest rate cause the dollar to appreciate. US macroeconomic experience of the early 1980s provides a graphic, though somewhat painful, an illustration of the link between interest rates and exchange rates. On the other hand, under fixed exchange rate regimes, one country usually assumes the role of lead country, and the other countries act as followers. 
The ability of two countries to chart their own courses, or to pursue distinct, possibly even contradictory, macroeconomic goals is much lower when those countries attempt to maintain a fixed exchange rate. This may be a blessing or a curse. When exchange rates are fixed, and investors expect them to remain so, and interest parity conditions retract, interest rates must be equal in the two countries. The central bank of the follower country relinquishes its ability to control interest rates and thereby achieve macroeconomic objectives such as reducing unemployment. In the case of social media, it only showcases the consumer’s state of being right now, but it does not help predict market conditions which will influence that consumer, derived from micro or macro-economic concerns as described above. Big-data, on the other hand, can triangulate all these moving pieces, and with the right analytics, and decision algorithms could solve the biggest question companies always face: What to sell now, and tomorrow to maintain consistent growth and profits. The evolution of marketing and the consumer, a marriage is in order Before the development of the marketing concept as a management philosophy in the 1950s, marketing was defined essentially as selling. The traditional view of marketing up to that time was that marketing was responsible for creating demand for what farms; factories, forests, fishing, and mines could produce. Marketing has also been viewed in the past as the function responsible for creating a satisfied customer and for keeping the entire organization focused on the customer. Marketing is one of the functions that must be performed by the management of any organization, amongst other functions such as manufacturing, finance, purchasing, human resources, sales, R&D and accounting. The most effective marketing concept considers more carefully how the company can match up to its distinctive competence’s with a relatively undeserved set of customer needs and offer superior value to those customers. Market segmentation, market targeting, and positioning, ideas that were developed as part of the original marketing concept, become even more important strategically under the new concept. The value proposition, matching up customer needs and wants with company capabilities, becomes the central communication device both for customers and for all members of the organization. This is all possible today due to social media, and big data. Focusing attention on the company’s strategy for the delivery of superior value to customers is crucial. Superior marketing defined as customer-focused problem solving and the delivery of superior value to customers is a more sustainable source of competitive advantage than product technology per se in the global markets of the 1990s. Marketing is not a separate management function; rather it is the process of focusing every company activity on the overriding objective of delivering superior value to customers. It is more than a philosophy; it is a way of doing business. In the final analysis, only the customer can decide whether the company has created value and whether it will survive in the hyper-competitive global marketplace; more reason to tune into what consumers want and value, a social media-enabled company, along with the adoption of big-data can do that very effectively today. 
Listening to the consumer is not enough, tracking behaviors is not enough, enabling the consumer to have a voice is not enough, enabling the consumer to be at the helm is not enough. Predictive modeling of trends among social groups to identify life values, and developing the business intelligence approach to integrating such data across all business functions to properly develop, market, and retain relationships will be the marriage that’s missing today. Big-Data is nothing without new algorithms needed to match consumers to values. Forget psychographic, demographic, and life stage data — that’s not enough. Those models work in a verticalized market and were part of the industrial revolution assembly line methodology. What’s needed is to identify consumer values, as in what matters to that individual at the core of who they are — only through social media, and big-data tied into a centralized business intelligence engine can a segment of one strategy be accomplished. Imagination (CMO) married to sustainability (CIO) During the past two decades, the general view of the role of IT in business has shifted significantly from its traditional back-office functional focus toward one that fundamentally pervades and influences the core business of an organization. However, many managers entered the 1990s with a high level of skepticism regarding the actual benefits of IT. The productivity gains from IT investments have been disappointing, hence why the CIO is constantly tasked with cutting costs, vs. driving innovation. This is because IT was primarily expected to enhance operational efficiency (blue-collar) and administrative efficiency (white-collar). More recently, the dominant business competence appears to be business flexibility with significant competence brought together within a flexible business network of inter-organizational arrangements such as joint ventures, alliances and business partners, long term contracts, technology licenses and marketing agreements; this is why the cloud is growing so fast, because it allows companies to be nimble in changing the moving parts sort of speak in an efficient way and with agility. Undoubtedly, IT functionality will have a more profound impact on businesses than its effect this far, when it begins to focus on market convergence. Nevertheless, successful businesses will not treat IT as either a driver or the magic bullet for providing a distinctive strategic advantage, until marriage can occur. This marriage is between the CIO and the CMO, its’ “imagination tied to sustainability”. CMOs are often looking for better ways to position the brand’s viability in the market, and CIOs are always tasked with making sure the approaches are sustainable, scalable, and won’t break the bank. In many ways, these two CXOs have conflicting agendas, until now. With big data, it is becoming more realistic for this partnership to work, and CEO’s not driving it, are doing their companies a huge injustice. “CEOs who don’t understand social media, and big data will become extinct within 10–15 years, and sadly so will their companies.” The management challenge is to continually adapt the organizational and technological capabilities to be a dynamic alignment with the chosen business vision and more importantly with the consumer at the helm. Hence for strategists, IT is not simply a utility like power or telephone, but rather a fundamental source of business scope reconfigurations to redefine the rules of the game through restructured business networks. 
When the efficiency-enhancing business process redesign is pursued, the boundary conditions specified by the current strategy are considered fixed and given. The most challenging thing is for managers to implement the strategy for business network redesign in a coordinated way. However using IT applications for enhanced coordination and control is both efficient, and effective for carrying out the business processes. This challenge is difficult because the choices involved in exploiting the present and building the future confront managers with a complex trade-off. The conflict between the demands of the present and the requirements of the future lies at the heart of strategic management for at least three reasons: 1) The environment in which tomorrow’s success will be earned is likely to be quite different from the environment that confronts the organization today; 2) To succeed in the new environment of tomorrow, the organization itself must undergo a significant and sometimes radical change; 3) Adapting to change in and around the marketplace during a time of significant internal change places an extremely heavy burden on the leaders of any organization. The choices made in business scope, and competitive postures, are made to achieve purposes or goals. There are two central questions that need to be answered: 1) What does the organization want to achieve in the marketplace? 2) What returns or rewards does it wish to attain for its various stakeholders, stockholders, employees, customers, suppliers, and the community at large? It is no accident that some organizations successfully adapt to an environment and initiate new ventures in a number of related product areas while others never seem able to repeat a single success. In short, what takes place within the organization makes a difference. Winning in the marketplace is heavily influenced by how well the organization makes and executes its choices of where and how to compete. It has become commonplace to note that one of the hallmarks of today is change. It is our constant. Good management and the management of change is the same thing; how to make sure that what you have in place today will meet the challenges you will face tomorrow. Flexibility and quickness will count as much as vision and patience. As economies mote to an information age complex technologies heavily influence by social media, big-data, global markets, intense competitions, and turbulent constant change, managers everywhere are struggling to cope with failing organizations. Recently, the rise in environmental complexity has accelerated with revolutionary advances in computerization, with the introduction of social computing. An explosion of knowledge, a unified global economy, the ecological crisis mounting social diversity, and other global trends, are almost certain to blossom into a far more complex world. Major corporations comprise economic systems that are as large as some national economies, yet most executives and scholars think of them as firms to be managed with centralized controls. Moving resources about like a portfolio of investments, dictating which units should sell which product at which prices and setting financial goals. Today’s and tomorrow’s corporations are becoming more and more automated and mobilized. 
Rather than the traditional organization of permanent employees working 9–5 within the fixed confines of some building, the virtual organization is a changing assembly of temporary alliances among entrepreneurs who work together from anywhere using the worldwide grid of global information networks and social media. The interface between organization structure and IT systems has become one of the most crucial issues in management, yet it is so poorly understood that we usually allow the inexorable force of IT to ramble through organizations unguided, with powerful unintended consequences. It is almost as if robust ivy were growing over a building, destroying its aging mortar and old bricks, and leaving only the vine as a supporting structure. A business will not be able to use IT effectively without a sound working model of the modern organization, and that model seems to be the market paradigm or lines of business approach aligned around consumer values. IT is the major reason for the replacement of hierarchies to enterprise models in today and tomorrow’s corporations. The challenge is enormous but the stakes are also enormous. Managers can best prepare for this coming upheaval now by learning to make a mental shift from hierarchy to an enterprise. We are witnessing not only a dramatic increase in the need for leadership but also a transformation in what we call leadership. This turbulence is, in turn, changing where leadership is practiced. For example, hierarchies collapsing into flatter pyramids to respond to faster-paced markets are pushing leadership further and further down into the organization. Today’s flatter organizations mean that most of us will have to manage across more functions and be sitting on more project teams throughout our management careers. Big-Data is enabling the business intelligence of brands of the future It’s clear there are too many moving pieces for all the decisions to sit with either the CIO, or the CMO. Both need to work together to properly leverage all the components that make up today, and tomorrow’s an ever unpredictable consumer. What will such data reveal, once properly analyzed, synthesized, and put into a decision engine? 1) First of all, how we track data must change, we are too verticalized to properly make sense of it 2) Second, the education system was built on the assembly line concept, to support verticalization, this will not serve the needs of the future brand — it’s too limiting 3) Third, companies who are not engaging with consumers at all — will cease to exit within 10–15 years, no matter how big they are today 4) Branding, marketing, and consumer engagement will need to be realigned around values Values? Yes, values, as in understanding that a group of consumers may value connecting with other people, more than a group of consumers who values escaping from reality by being entertained. A consumer who values connecting with others will want you to be able to provide him/her all that goes into that, as in telecom, transportation, social networking, and events. These groups of consumers, who value connecting with others, are being served by multiple brands today, which use multiple strategies, which are all competing for a voice with them. I am proposing that ultimately the focus needs to be around a segment of one. “The future “value-based” brands, will be able to understand, through data science, how to properly align M&A activity, people training, R&D, sales and marketing strategies around the consumer. 
Such brands will need to break out of the verticalization mold, train its own people, because the educational system will simply not be ready, and start hiring cross-industry executives who can cross-pollinate the organization beyond verticalization, embrace virtualization, and a segment of one strategy”. In order to effectively accomplish this, the first and foremost step will be a marriage between the CMO and CIO, and a closely sponsored relationship from the CEO, with goals that align around the consumer, and providing value to the consumer not by vertical, but by segmentation, specifically values based data segmentation of one. In this new world of brands, the relationship of skilled workers will change also, you will see a huge spike in freelancers, and independent contractors doing projects, and a new respect for resources who can provide aligned values, without the concerns of long term contracts, or employment issues related to outdated skills. In markets like Europe, organizations that can provide for hire staff will see a huge win due to the restrictive labor laws in those markets. The biggest winner? You and me, where our own individual values will be served with some sense of uniformity, and cohesiveness, from possibly one brand that we will build relationships with because they serve our needs better than those talking at us, as it’s done today. Tomorrow’s brand will not need to market, sell, or advertise, tomorrow’s brand will be a data science integrated company whose approach to market will be so refreshingly simple with the consumer at the helm, that cross-selling will simply be the norm, not a major effort as it is today. “Tomorrow’s brand will not be limited to verticals, or specialization, but rather provide consumers solutions based on their values, with a total focus to a segment of one strategy.” Take a company like AT&T, in this new world, they would be in the telecom, transportation, PR, advertising, and social networking business serving the consumer values of “people who like connecting with other people”, and as markets evolve that will take shape into whichever way those consumers wish to connect, perhaps even teleportation (someday). Either way, the brand of the future will shape its strategy around data science, vs. trying to fit data science into its current strategy. Follow me on Twitter and stay tuned for my next article, focused on financial services segment of one centralized big data customer intelligence.
https://medium.com/datadriveninvestor/big-datas-role-in-creating-customer-centric-business-intelligence-afa32283215e
['Tullio Siragusa']
2019-06-24 20:36:50.143000+00:00
['Marketing', 'Business', 'Data', 'Data Science', 'Big Data']
Open Submissions To The Loners’ Hideout
Hey guys! This is Laura. I started The Loners’ Hideout over a month ago as a 30-day challenge to write every single day in the most authentic place I could — from the heart. Just some background about the Hideout. The intention behind it is to create the safe space for writers to write from the heart. The hope is that if you need some “alone time” to just write and let it vacate empty space in the universe, it can and will here at The Loners’ Hideout. My sincere hope for The Hideout is not really growth, but authentic engagement. What I don’t want to see is a “I follow you, you follow me” mentality when it comes to building and fostering relationships and never reading that person’s work again. I want this to be a real place for real writers (aren’t we all ;)) to connect and not hold back on our writing. I am currently abroad, but I will check for submissions at least 5 days out of the week. Either in August or September, I plan on restarting a 30-day challenge of sorts where I will post consistently every day on The Loners’ Hideout. Feel free to join me when the writing challenge begins. :) This is a small, small publication. Don’t be afraid to reach out to other people who might have interest in the Hideout. All are welcome. Submission Guidelines: Write from the heart. Poetry, prose, short stories, personal essays, food for thought, and guides are all accepted here. I want the hideout to be the creative hub for our imaginations to take off as well as the spot to relax. (here’s an imaginary cup of coffee for you). If I see that there is a trend, I will create sub-categories for our writing. Accepting ALL Writers: I’m not a big stickler when it comes to grammar, but if you struggle with grammar, do your best to self-edit and I will do the rest. Again, I encourage all writers to submit no matter if you are just starting or are prolific with your work. If you think your ideas are too odd, join the club. If you keep the content between G and PG-13, it more than likely will be accepted. Submit anyways and don’t hesitate. This is a safe space to make mistakes. That’s how we learn. :) Create a resonating title that even you yourself would want to click and read. Choose your title image. It’s better if I leave it to you to do this. You know your story — your heart — best. Email me at [email protected] if you are interested in becoming a contributor. Tell me a little bit about yourself, your profile on Medium, and I will add you. You can also write in the comments if you would like to contribute. Just like anything in life, this Hideout will transform, but the essence will always remain the same: When the world is too loud, sometimes we just need a safe space to hide out. I look forward to reading what we come up with! With love, Laura
https://medium.com/the-loners-hideout/open-submissions-to-the-loners-hideout-519ad43b359e
['Laura Gulbranson']
2019-07-12 06:40:48.718000+00:00
['Personal Essay', 'Creativity', 'Poetry', 'The Loners Hideout', 'Writing']
JavaScript’s Magical Tips Every Developer Should Remember
JavaScript is the most popular technology for full stack development. While I have been focusing mainly on Node.js, and somewhat Angular.js, I have realised that there are always some tricks and tips involved in every programming language irrespective of its nature of existence. I have seen (so many times) that we programmers can tend to make things over complicated, which in-turn leads to problems and chaos in the developing environment. It is beautifully explained by Eric Elliott in his post. Without further ado let’s get started with the cool tips for JavaScript: Use “var” when creating a new variable. Whenever you are creating a new variable always keep in mind to use “var” in front of the variable name unless you want to create a global variable. This is so because if you create a variable without using “var” its scope will automatically be global which sometime creates issues, unless required. There is also the option to use “let” and “const” depending on the use case. → The let statement allows you to create a variable with the scope limited to the block on which it is used. Consider the following code snippet. function varDeclaration(){ let a =10; console.log(a); // output 10 if(true){ let a=20; console.log(a); // output 20 } console.log(a); // output 10 } It is almost the same behaviour we see in most language. function varDeclaration(){ let a =10; let a =20; //throws syntax error console.log(a); } Error Message: Uncaught SyntaxError: Identifier ‘a’ has already been declared. However, with var, it works fine. function varDeclaration(){ var a =10; var a =20; console.log(a); //output 20 } The scope will be well maintained with a let statement and when using an inner function the let statement makes your code clean and clear. I hope the above examples will help you better understand the var and let commands. → const statement values can be assigned once and they cannot be reassigned. The scope of const statement works similar to let statements. function varDeclaration(){ const MY_VARIABLE =10; console.log(MY_VARIABLE); //output 10 } Question: What will happen when we try to reassign the const variable? Consider the following code snippet. function varDeclaration(){ const MY_VARIABLE =10; console.log(MY_VARIABLE); //output 10 MY_VARIABLE =20; //throws type error console.log(MY_VARIABLE); } Error Message : Uncaught TypeError: Assignment to constant variable. The code will throw an error when we try to reassign the existing const variable. 2. Always use “===” as comparator instead of “==” (Strict equal) Use “===” instead of “==” because when you use “==” there is automatic type conversion involved which can lead to undesirable results. 3 == ‘3’ // true 3 === ‘3’ //false This happens because in “===” the comparison takes place among the value and type. [10] == 10 // is true [10] === 10 // is false '10' == 10 // is true '10' === 10 // is false [] == 0 // is true [] === 0 // is false '' == false // is true '' === false // is false 3. ‘undefined, null, 0, false, NaN, ‘’ (empty string)’ are all false conditions. 4. Empty an array var sampleArray = [2, 223, 54, 31]; sampleArray.length = 0; // sampleArray becomes [] 5. Rounding number to N decimal places var n = 2.4134213123; n = n.toFixed(4); // computes n = "2.4134" 6. Verify that your computation will produce finite result or not. 
isFinite(0/0); // false isFinite('foo'); // false isFinite('10'); // true isFinite(10); // true isFinite(undefined); // false isFinite(); // false isFinite(null); // true Let's take an example to understand the use case of this function. Suppose you have written a database query over a table which contains a large amount of data, and after executing the query you are not certain about all the possible result values. The result of the query changes based on values that you insert dynamically. In such a case you cannot be sure that the result is finite, and if you use it directly it can produce an infinite value, which can break your code. Hence, it is recommended to use isFinite() before any such operations so that infinite values can be handled properly. 7. Use a switch/case statement instead of a series of if/else Using switch/case is faster when there are more than 2 cases, and it is more elegant (better organised code). Avoid using it when you have more than 10 cases. 8. Use of "use strict" inside your file The string "use strict" will keep you from worrying about the variable declaration issue mentioned in the 1st point. // This is bad, since you create a global variable without anyone telling you (function () { a = 42; console.log(a); // → 42 })(); console.log(a); // → 42 "use strict" will keep you from making the above mistake. Using "use strict", you can get quite a few more errors: (function () { "use strict"; a = 42; // Error: Uncaught ReferenceError: a is not defined })(); You could be wondering why you can't put "use strict" outside the wrapping function. Well, you can, but it will be applied globally. That's not necessarily bad, but it will affect code that comes from other libraries, or cases where you bundle everything in one file. 9. Use && and || to create magic "" || "foo" // → "foo" undefined || 42 // → 42 function doSomething () { return { foo: "bar" }; } var expr = true; var res = expr && doSomething(); res && console.log(res); // → { foo: "bar" } Additionally, don't forget to use a code beautifier when coding. Use JSLint and minification (JSMin, for example) before going live. This will help you maintain a consistent coding standard in your project. The above is a mere reflection of the power of JavaScript.
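Tip 7 above is the only one without a snippet of its own, so here is a minimal example of the switch/case pattern it recommends; the function name and status codes are purely illustrative.

// A switch statement in place of a chain of if/else checks.
function httpStatusMessage(code) {
  switch (code) {
    case 200:
      return 'OK';
    case 301:
      return 'Moved Permanently';
    case 404:
      return 'Not Found';
    case 500:
      return 'Internal Server Error';
    default:
      return 'Unknown status';
  }
}

console.log(httpStatusMessage(404)); // → "Not Found"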
https://medium.com/swlh/javascripts-magical-tips-every-developer-should-remember-38c71b1cbfba
['Tarun Gupta']
2020-12-25 14:45:06.231000+00:00
['Programming', 'Technology', 'Productivity', 'JavaScript', 'Software Engineering']
The Great Vaccine Race
Big bets on long shots Expenditures by the U.S government are closing in on $10 Billion to bring a viable vaccine to market. Some candidates who received funds including, Johnson&Johnson, Pfizer, Moderna, and AstraZeneca are all industry heavyweights, while one, Novavax, awarded $1.6 Billion in grants, has never in their thirty-three year history brought a vaccine to market. Some level of optimism, as well as reservation, exists with each biotech firm and their respective clinical trials. There is a high degree of coordination between American, European and Australian trials, many of which are being conducted across borders. While not verified by global health agencies, Russia has registered a vaccine, with President Putin indicating one of his daughters was a trial subject, and has been vaccinated. As of this writing, the United Arab Emirates has approved a vaccine from Chinese state owned manufacturer Sinopharm, to be used on front line healthcare workers before Phase III is completed. The UAE has 31,000 volunteers participating in the clinical trials, and officials have reported no adverse reactions thus far. When a vaccine against Sars-Cov-2 is ready for use stateside, some critical questions remain; how many people will volunteer for an injection, will it be mandated for children prior to attending school, or travelers boarding a plane, and what are the ramifications, if any, for those choosing not to vaccinate? A Harris Poll survey in the summer of 2019, before the current pandemic, of 2,000 U.S adults is in no way encouraging. Of those polled, 45 percent noted at least one source that caused doubts about the safety of vaccination. The top three doubt-causing sources were online articles (16 percent), past secrets/wrongdoing by the pharmaceutical industry (16 percent) and information from medical experts (12 percent). The survey also asked Americans to choose a statement that best represented their feelings about vaccine safety and efficacy. While the vast majority (82 percent) chose in favor of vaccines, 8 percent selected responses expressing serious doubt. An additional 9 percent said they were unsure. More recently (August 7th 2020), Gallup found one in three Americans would not opt for a Coronavirus vaccine, this even if the vaccine was both free and FDA approved. Such resistance is not unprecedented. When Gallup in 1954 asked U.S. adults who had heard or read about the then-new polio vaccine, “Would you like to take this new polio vaccine (to keep people from getting polio) yourself?” just 60% said they would, while 31% said they would not. So far, willingness to adopt a new vaccine looks similar today. It’s one thing to answer a poll question, but another to make a real life, health balancing decision for yourself and your family. Here in Michigan, our Governor Gretchen Whitmer received a flu vaccine shortly after an address on the importance of such measures. Flu vaccines though, have a decades long record for safety. Will President Trump volunteer for a Coronavirus vaccine? Will he encourage his extended family, and supporters to do the same? That and other questions remain open and unresolved for now. Should they chose to however, China, and possibly Russia, have one available.
https://medium.com/illumination/the-great-vaccine-race-66c121305376
['Bashar Salame']
2020-09-16 20:19:32.722000+00:00
['Covid 19', 'Vaccines', 'Society', 'Health']
Making the Invisible Visible. At Square, the Developers team exposes…
At Square, the Developers team exposes APIs that allow third-party developers to build custom business-processing solutions. On October 18, we announced to developers using our APIs that webhooks would now support retries. If a webhook cannot be delivered, our system will now be able to retry multiple times until a successful delivery. Square’s webhook delivery system is a service I’ll refer to as Webhooks. Improving Webhooks’ reliability was a project I had the opportunity to dive into the first week I started at Square as a new college graduate. From the beginning, we wanted our Webhooks upgrades to require no change on the part of our external developers, in stark contrast to the amount of research, planning and work that went into the project. In fact, the announcement we sent out assured developers that they “should not need to make any changes to support webhooks with retries.” My manager remarked that it was “the most invisible change ever” — and that is how it should be. The focus of our team is to enable developers to easily integrate with Square’s APIs and provide them with a reliable and simple-to-use experience. Perhaps it was my manager’s remark about our “invisible” project that inspired me to share some insight into not only our webhooks reliability project, but also some of our “visibility” practices here at Square.
Webhooks
In web development, webhooks are defined as snippets of code (HTTP callbacks) that are triggered by specific events. As soon as an event occurs, a developer is notified of it and can handle it in real time. With traditional APIs, developers may have to constantly poll an endpoint in order to detect events. I wanted to cover how we designed Webhooks at Square and the changes we made to improve reliability. If you want more technical details about Webhooks at Square, check out our earlier blog post. The most engaging way of explaining webhooks is following the lifecycle of a single webhook notification through Square’s infrastructure. Let’s say that I own an online store called Lindy’s Laughing Llamas and that it uses Square’s APIs. The application that runs my online store receives webhook notifications from Square. When a llama-loving customer purchases a llama from Lindy’s Laughing Llamas, I receive a webhook notification informing me that someone just made a payment. Between the time of purchase and the time of notification, a lot has happened to transform this payment “event” into a webhook notification that I receive. After the customer purchases a llama, the payment event is published to a feed, which many internal services at Square read. The Webhooks service owned by our team reads this payment event from the feed, transforms the event into a webhook notification, and delivers this notification to the endpoint specified by the Lindy’s Laughing Llamas application. However, during periods of high traffic, events could pile up in the feed, and subsequent notifications could be delayed. Intuitively, we needed a way to separate the reading of events from a feed and the delivery of a notification. Although there are many approaches to solving this problem, the simplest approach is to create separate thread pools for reading events from the feed and for delivering notifications. Our solution was to shift Webhooks’ delivery mechanism to the cloud using Amazon Web Services (AWS).
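Before looking at the new delivery path, here is a minimal sketch of the subscriber side described above: the endpoint a store like Lindy's Laughing Llamas would expose so it can receive notifications instead of polling. The route, port, and payload fields are hypothetical (this is not Square's documented webhook schema), and Express is used purely for illustration.
// Illustrative only: route, port, and payload fields are hypothetical.
const express = require('express');
const app = express();
app.use(express.json()); // parse the JSON body of incoming webhook notifications
// The sender POSTs a notification here as soon as the event happens -- no polling needed.
app.post('/webhooks/square', (req, res) => {
  const event = req.body;
  if (event.type === 'payment.created') {
    console.log('A llama was just paid for:', event.data);
  }
  // Respond quickly with a 2xx so the sender knows the notification was received.
  res.sendStatus(200);
});
app.listen(3000);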
Our primary motivation for moving Webhooks to the cloud was to lower Square’s system complexity and costs (rather than having to maintain Webhooks in our own data centers). We could improve Webhooks’ reliability by using well-documented and commonly used cloud infrastructure. Using AWS allows Webhooks to decouple event reading and notification delivery. In our new webhooks system, the lifecycle of a Lindy’s Laughing Llamas payment event becoming a webhooks notification changes slightly. After a customer purchases a llama and the payment event is published to a feed, our Webhooks service reads it just as it did before. Once the event is transformed into a webhook notification, our service then sends the notification as a message to AWS. Our tools in AWS contain logic to deliver the message to the Lindy’s Laughing Llamas’ application server. If Lindy’s Laughing Llamas takes too long to respond to the message, or is unable to take a message at the time of delivery, AWS will retry. During every subsequent delivery attempt, AWS increases the time in between retries; this backoff strategy ensures that the message will continuously be retried without overwhelming the server. Additionally, AWS sends metrics about all delivery attempts back to Square. The end result? A developer of the Lindy’s Laughing Llamas application can stop polling Square’s APIs, since webhook notifications will arrive in a timely manner. If the application server is busy, a developer doesn’t have to worry about missing notifications, because they will be retried. From the perspective of the developer, no changes were necessary to get the new timely notifications and retries. Visibility and Impact What surprised me at the conclusion of this project was the amount of visibility it had within Square, as well as other projects. On the day before the launch, we sent a company wide product update email. I was impressed with the meticulousness with which my team combed through old emails, work items, and document comments for people outside of our team who contributed to our project. From code review to design advice, the contributions of these people were not forgotten. And within minutes, the email received “Reply All’s”, expressing congratulations and providing context for the impact of a better webhooks on Square’s developer platform. A “Product Updates” email sent to everyone at Square. My team also had the opportunity to share our learning experience. The Developers team has a bi-weekly “lunch and learn” meeting where we get together to present about new technologies, frameworks, and the various services and projects we are working on. These lunch and learn meetings highlight Square’s emphasis on knowledge sharing within the organization. By creating awareness of Webhooks and the technologies our team adopted, the teams we work most closely with can use our learning experience to develop their own projects. Oh, and Webhooks wasn’t merely visible — it was also audible. After a launch, it is Square tradition to ring the gong, and we let the office know loud and clear that we shipped something new to production. Ringing the gong after launching a better Webhooks. Being able to share our knowledge and learning experience, both within Square and outside of Square, is meaningful, but it was just as important to hear feedback from our developers. 
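The retry-with-backoff behaviour described above can be sketched in a few lines of JavaScript. This is an illustrative approximation, not Square's or AWS's actual implementation: deliver() is a hypothetical async function standing in for the HTTP POST to the subscriber's endpoint, and the delay and attempt count are made-up values.
// Illustrative only: deliver(notification) is a hypothetical async function
// that performs the HTTP POST to the subscriber's endpoint and throws on failure.
async function deliverWithRetries(notification, deliver, maxAttempts = 5) {
  let delayMs = 1000; // initial wait before the second attempt (made-up value)
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await deliver(notification);
      return true; // delivered successfully
    } catch (err) {
      if (attempt === maxAttempts) {
        return false; // give up after the final attempt
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // back off: double the wait between each retry
    }
  }
}
Doubling the delay after each failure is what keeps a struggling subscriber from being hammered while still guaranteeing several more chances at delivery.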
One developer appreciated our back-off retry policy because it avoided “hammer[ing] [the] server when it’s already having a rough time.” In the week after we launched the improved Webhooks, we were able to successfully deliver over 31,000 notifications that had failed on their first delivery attempt. Being able to quickly make an impact on Webhooks — as a new hire still learning the ropes — inspires me to continue improving Webhooks and start on other projects that help not only our fictional Lindy’s Laughing Llamas, but real-world developers who want to use Square to process online and in-person payments seamlessly.
https://medium.com/square-corner-blog/making-the-invisible-visible-a-look-at-building-tools-for-square-developers-bae30a212950
['Lindy Zeng']
2019-04-18 20:40:28.874000+00:00
['Engineering', 'Webhooks', 'AWS', 'Square', 'Developers']
Rise of the Smart-Up
AI companies are getting snapped up in record numbers by tech giants hungry for brains. But are they really worth such big bucks? Magic Pony Founders Rob Bishop and Zehan Wang Magic Pony is hardly a household name. Chances are you never heard of them, or at least not before they were bought for $150 million by Twitter a few weeks ago. They were a company of 14 people, and at that cost this acqui-hiring spree which sees them joining Twitter’s Cortex division priced each of them at over $10 million. But although that’s certainly at the top end of the scale as far as talent acquisition goes, it’s far from an anomaly in the world of AI these days. A report by Magister Advisors reveals that the average price per high-quality employee in these acquisitions averages $2.4m. The technology advisory firm, which specializes in Mergers and Acquisitions (M&A) tracked 26 AI-driven deals since 2014 in the US, Europe and Israel, 11 of which involved companies with less than 50 employees which were acquired largely, or entirely, for the team and capability. “A good AI engineer is worth more than many company CEOs right now,” says Victor Basta, managing director at Magister Advisors. Technology journalist Luke Dormehl agrees that the valuation of tech companies can sometimes seem baffling, particularly if you compare them with their pre-digital equivalents. A lot of “traditional” companies producing physical products, rather than ethereal software, employed tens or even hundreds of thousands of employees. Today, tech giants can disrupt industries with a fraction of that. Famously, Instagram employed just thirteen people when it was snapped up by Facebook for $1 billion, explains Dormehl. In this new market, Israel and the UK are emerging as world centres at the top tier for AI innovation, with many firms such as IBM and Intel expanding their AI footprint in Israel, and the UK capitalising on the influence of institutions such as Cambridge, Oxford, Imperial College and a crop of AI-active VCs such as White Star, Playfair and Notion. This has been reflected in the recent UK exits which — apart from Magic Pony — include DeepMind (Google), Swiftkey (Microsoft), Evi (Amazon), VocalIQ (Apple) and PredictionIO (Salesforce). Where there is a broader trend in tech towards building sustainable business models as early as possible and proving sustainability through customer numbers and revenues, in the field of Deep Machine Learning that actually seems to be irrelevant. In fact, the report indicates that acquiring companies prefer to target start-ups with no revenue at all. Dormehl explains that the reason why the way that companies are valued has changed so dramatically has to do with the potential value of data. As the Internet has continued to become more and more present in our lives, we’ve seen a rise of ostensibly “free” services. Of course, these aren’t really free at all: they’re often raking in money through advertising or, if it’s not been able to monetise in that way yet, the value of user data. That makes sense when you think about it: Platforms such as Google, Facebook and Twitter already have all the users they need. They are incredibly pervasive. What they need — very badly indeed — are ways to leverage their user data and keep those people engaging with their content. Companies such as Magic Pony don’t offer a separate product as such. They bring a layer of complexity that can be overlaid on top of existing content such as videos, posts, news, and all manner of experiences. 
Their engines and algorithms can be embedded directly into the larger company’s offering and match content automatically and intuitively with what users want, need, or might like to discover. This, in a nutshell, is the arms race we’re experiencing with AI, yet nobody has their hands on that Holy Grail just yet. The applications of AI technology — from machine learning to robotics, virtual assistants, speech and image recognition, predictive analytics and many, many more — are extremely broad. These include major sectors such as advertising, security and healthcare, where, for example, doctors will be able to use image recognition to automate medical diagnosis in the future. Because this demand is not segmented or restricted to any one particular sector, it contributes to the chronic shortage of high-quality AI engineers and the resulting price inflation for these companies. Magister’s report points to the fact that ordinarily buyers such as eBay, Twitter, Amazon and Microsoft would not compete for the same M&As. But in the area of AI their interests overlap (in visual search, for example). Although these sky-high valuations might sometimes be the result of over-hype, Dormehl concludes that there is a solid business case behind the trend: The concept of a “unicorn” economy, in which venture capitalists are always on the lookout for the next Facebook or Google, means that the potential of a company becomes wrapped up in its valuation. Is that always right? Probably not. Are these companies worth the money when they suddenly create their own version of AdWords and begin raking in billions of dollars? It’s hard to argue that they’re not. Luke Dormehl’s latest book Thinking Machines: The Inside Story of Artificial Intelligence and Our Race To Build The Future is out August 11 and available on Amazon.
https://medium.com/edtech-trends/rise-of-the-smart-up-dfc8ec248952
['Alice Bonasio']
2016-07-13 13:34:57.632000+00:00
['Acquisitions', 'Data', 'Data Science', 'Artificial Intelligence', 'AI']
How Well Do You Know Your Vagina?
With all the progress being made in the world today, sexual health is still an uncomfortable topic to talk about. Doubts lead us to friends first and then to Google and then to a cycle of information that may or may not be true. Despite what Google says — the hymen doesn’t grow back if you don’t have sex for a long time! Our culture of considering sex and sexuality as dirty and shameful creates a negative environment for young people where they cannot talk about it openly. To separate fact from fiction, here are a few things I wish I knew growing up. Vaginal Discharge is normal. Yes, everyone has it, and it is normal. Healthy discharge can be clear or white depending upon your cycle. The consistency can change throughout your menstrual cycle. Vaginal odor is normal. It is not normal for your vagina to smell like flowers so stop trying to make that happen with douches and deodorants. A slight odor that isn’t strong smelling is normal. The Vagina is self-cleansing. Yes, it is. So there is no need to wash five times a day down there. The vagina uses natural secretion and discharge to clean itself and prevent infections. It is not necessary to wear underwear 24/7. Underwear can trap excess moisture and microbes. Other than comfort, not wearing underwear can prevent the buildup of heat and moisture, which can increase the risk of infection. It is ok to go commando! Let her breathe once in a while. Always choose cotton underwear. Cotton is more breathable, making it best for body parts that tend to lock in moisture. Regularly wearing silk and lace underwear can cause irritation. Bleached patches on your underwear are normal. The vaginal discharge is naturally acidic. When exposed to air, it can stain underwear a mild yellow due to oxidation. Fungal infections are more common than you think. About 75% of all women will have a yeast infection at least once in their lifetime. Do not hesitate to consult a doctor in case of itching, soreness, burning while urinating or pain during sex. Douches and perfumes are a NO. You don’t need any special products to clean unless your doctor tells you otherwise. Douches and perfumes can cause irritation, alter the pH, and can worsen infections. And never use soaps inside the vaginal cavity. The vagina is an internal cavity, not the entire genital area. What you see outside is the vulva. Please do not use soaps inside your vagina. You can do a vagina workout. Yep. Kegels can strengthen the muscles around the vaginal opening. Stronger muscles can help to control your orgasms better and make them stronger! Always urinate after sex. This helps the urethra to cleanse itself, reducing the chances of developing a urinary tract infection. Is the G-spot real? This is still a topic for debate. While researchers search for this mythical spot, why not concentrate on the clitoris instead? It has more than 8000 nerve endings, and according to scientists, its sole purpose is sexual pleasure. You can’t lose a tampon- or anything- in your vagina. At the deep end of your vagina is a cervix. It stays closed all the time except during childbirth. So you can’t really lose anything in there. The hymen is not an indicator of virginity. The hymen can break during many activities like horse-riding, riding your bike, or playing sports. A ruptured hymen is normal. According to the World Health Organization, there is no test- including the presence of an intact hymen- that can indicate whether a woman has had sex. Your pubes have a purpose. 
Pubic hair serves as a protective barrier to genital tissues, especially the sensitive vaginal opening. It also acts as a buffer against friction. Shaving can cause tiny wounds on the skin, temporarily raising one’s risk of infection. Your vagina is set at a 130-degree angle. The vaginal canal rests at an angle in the body, which is why it is recommended to insert tampons and menstrual cups towards your back rather than up and in for easier insertion. Your vagina will let you know when you are fertile. The discharge will be clear and stretchy during ovulation. You may also notice more discharge than usual during this time. Do not rely on mucus monitoring alone for pregnancy prevention. It is not always reliable. Your vagina will also let you know if something is wrong. Any changes in the smell or color of discharge can be an indication of infection that requires a visit to a doctor. There is so much that we don’t know about our own bodies. We owe it to ourselves to learn more. There is also a lot of embarrassment around vaginas (thanks to those horrible sex-ed classes). Share the right knowledge and go be the badass woman that you are.
https://medium.com/beingwell/how-well-do-you-know-your-vagina-7491b5add7aa
['Eshal Rose']
2020-06-01 20:59:15.772000+00:00
['Health', 'Society', 'Sexual Health', 'Womens Health', 'Women']
A Rising Tide Lifts All Boats
As we’ve worked to make Chance Encounters stand out as a publication, LB and I have noticed that a few of our talented writers ran into some difficulties finding original images for their work. Rather than looking at this as a problem, we saw a solution. Our initial thought of, “How cool would it be if writers paired up with photographers to tell their stories in a unique way?” was realized faster than we could have imagined. We have already published stories (with new ones on the way) that incorporate this hybrid approach. This feels like a good moment to thank the photographers for their contribution, and the writers for being open to the idea. We know it’s not always an easy decision. Furthermore, if any other photographers out there would like to lend their talents to interesting, meaningful, and beautiful stories, let us know! This can be a great way to have your photos seen (with proper credit of course) while making friends. We couldn’t have imagined a better opportunity to create a chance collaboration! So meta of us, no? And as we bring up help and collaboration, we should also mention that Chance Encounters was created exactly with this same mindset. We want every published story here to be seen, read and shared, regardless of how many followers its author may have. These kinds of stories happen in split seconds, but we want them to live out there for a while. By creating a life cycle and some consistency, we aim to run Chance Encounters like an old-school publication, or TV/radio program, with the hopes of releasing a printed version in the future. It would be exciting to see our best work on printed pages. NOW you know why we’re ‘obsessing’ over original imagery. Lastly, I’d like to share my thoughts on how I support fellow artists. As a film school graduate, I’ve seen tons of stories and scripts that never got off the ground, and plenty of video footage that never got edited or finished. So my support doesn’t reflect my taste, nor does it depend on my critique. My support celebrates the completion of an idea, and the hard work and effort behind it. That’s why I highlight passages that resonate with me, I leave notes, and I clap the maximum I’m allowed. If there were a virtual hug option, I would be all over it. Imagine if everyone supported each other in the same way every time one of our editors published a story. Our support could push their work further, and many new readers could discover their work, not to mention the Medium staff. We all want our stories read, our voices heard, and our pictures seen — we could help each other achieve that. Especially when we tune in and engage. All I know is, as a community we can help each other tremendously just by showing up. I know all this sounds like Utopia rather than a reality, and by no means is it a requirement to participate here. But by being open to collaboration, I just feel that we can do it much differently than others. I’m genuinely looking forward to hearing your thoughts on this topic. Feel free to leave your questions and feedback below. ~Attila
https://medium.com/chance-encounters/a-rising-tide-lifts-all-boats-77004fc7e6e3
['Attila Adam']
2020-11-13 15:07:18.062000+00:00
['Storytelling', 'Community', 'Creativity', 'Photography', 'Collaboration']
Lies People Spread About Writing
As NaNoWriMo approaches, here are some misconceptions to shed Photo by Kat Stokes on Unsplash Writing is an art form. Just like painting, sculpting, gourmet cooking, or photography. And yet, writing has always had the distinction of being the most democratic art form. You don’t need special supplies or a studio space to do it. You learn the basics when you’re in school. For better or worse, then, nearly everyone wants to write a novel it seems. As a professional writer with an English degree, I’ve gone through the phases of creation and rejection. I’ve also spent plenty of time with aspiring and occasionally, successful writers. So I want to decode some of the nonsense I see spread around through popular sayings and on the internet these days about writing. Everyone Has a Novel in Them This is an old one. My father, an avid reader would always state this adage as fact, even though he himself never wrote a novel. I used to believe this was true, but I’ve come to realize it’s not. Not everyone can be a professional painter or a gourmet chef, so why do we believe that everyone can be a novelist? Everyone has a story in them. A story is not the same as a novel. A story can just be a brief anecdote that one shares around at parties, or a collection of life lessons learned hard. Novels have structure, detailed plot lines (usually more than one), dialogue, themes… you get the point. If you hated writing 5-page papers in school, you’ll probably detest writing a 300-page novel. This isn’t to discourage you, but to put it in perspective. Even if you enjoy reading or writing, it doesn’t mean you have to write a novel. You can have fun with short stories, poetry, or blogging. Novels are not the end-all-be-all of writing. And being a reader is a fulfilling hobby on its own. Write What You Know This saying is a bit double-edged. Some people strictly adhere to the idea that you should only write about your own personal experiences. If you’re an accountant from Missouri, your main character should also be an accountant from Missouri. Well, writing about only things you’ve directly experienced may get boring quickly (though it does explain why so many novels are written about writers trying to write a novel…) Instead, think in broader strokes. Have you experienced grief or heartbreak? You now know those deeply human emotions and can authentically share them in your writing. People you’ve met and stories you’ve read are also part of what you know and can inspire you. You Need to Write a Certain Amount Every Day Obviously, if you’re participating in a writing sprint like NaNoWriMo, you’ll have a target word count every day. And it can be liberating to just pump out a huge collection of words if you’re always second-guessing or self-editing as you write. But there are a lot of quantity-based discussions on forums like Twitter. People will post their daily word count or keep a tally of how long their work in progress is in their username. Perhaps it helps with accountability for some because writing is often a solo endeavor. You don’t have coworkers to chat with or to check in on your progress. However, this can lead to the feeling that there is a “right” amount of words to produce and that you really have to write every single day. Everyone is different and creative processes vary wildly when it comes to professional authors. Quite simply, not everyone works the same way. Nor should they. If every writer wrote the same way, all books would be very similar. 
Many people don’t have the luxury of having hours of free time to work quietly by themselves every day either. Writers Should Publish by a Certain Age “30 Under 30” lists would have us believe that you should make your name as a writer in your 20s or else there’s no chance of having a career as a writer. This is completely false and, quite frankly, very elitist. Publishing is a hard world to break into. Writing a novel takes hundreds of hours of unpaid labor. If you don’t have a support system that allows you to work what is basically a full-time job for free, it may take quite a long time for you to get a manuscript polished enough to submit for publication. Agents and publishers get sent thousands of manuscripts every year, so unless you have some sort of connections, yours is just another one in the pile. It can take months just to hear back after submission. It’s a slow and tedious process. Besides, writing is a craft that takes time to master. As you get older, you can keep honing your skills. Mark Twain first published at 41. Annie Proulx at 57. Toni Morrison didn’t even start writing until 39. Alan Bradley didn’t publish his first Flavia de Luce novel until he was 70 and now it’s a very successful mystery series. You Have to Have a Big Online Presence to Get a Publishing Deal While having an online following may help, ultimately when you’re writing fiction, it’s all about having a good manuscript. Even if you have 50k followers on Twitter, that’s no guarantee that your book will sell well if it’s messy and ill-conceived. An online following is actually more important when it comes to non-fiction. If you’re writing as an expert on something, it’s important to have some sort of qualifications. For things like lifestyle, cooking, and self-help, people on the internet valuing your advice and experience is proof of that qualification, so a successful blog or YouTube channel can be a big help. I’ve seen writers with virtually no online presence land a good publishing deal and only then start a Twitter account to help with promotions. Social media can be a great networking tool, but it can also be a distraction. You shouldn’t feel that you must engage in it if you don’t want to. If You Write Something, You Have to Publish It While many people may engage in other art forms just for the sake of doing it, writing seems to be an exception. You might paint a picture just for the enjoyment of expressing yourself and not seek to sell it or put it in a gallery. But for someone to write a novel just for the sake of it isn’t seen in the same way. The assumption seems to be that if you write a novel, you have to seek publication, whether through traditional or self-publishing means. Just as any creative practice, it can be as personal or as public as you like. You can write just to explore an idea or character. You can write to work through trauma or grief. Or just for fun. Maybe you want to share it with friends and family or as a blog. For a book to be ready to be published in a more professional way, it can take a huge amount of work in terms of editing and revising. Self-publishing asks you to do copy editing, layout, cover design, and marketing on your own — or to pay professionals to perform those services. Traditional publishing is a slow process that is rife with rejection along the way.
https://medium.com/from-the-library/lies-people-spread-about-writing-643f583fa5c9
['Odessa Denby']
2020-10-08 15:05:51.206000+00:00
['Creativity', 'Art', 'NaNoWriMo', 'Novel', 'Writing']