title | text | url | authors | timestamp | tags |
---|---|---|---|---|---|
Linear layers explained in a simple way | Linear layers explained in a simple way
A part of series about different types of layers in neural networks
Many people perceive Neural Networks as black magic. We all sometimes have the tendency to think that there is no rationale or logic behind a Neural Network architecture. We would like to believe that all we can do is try a random selection of layers, throw some computational power (GPUs/TPUs) at it, and just wait, lazily.
Although there is no strong formal theory on how to select the neural network layers and configuration, and although the only way to tune some hyper-parameters is by trial and error (meta-learning, for instance), there are still heuristics, guidelines, and theories that can help us reduce the search space of suitable architectures considerably. In a previous blog post, we introduced the inner mechanics of neural networks. In this series of blog posts we will talk about the basic layers, their rationale, their complexity, and their computational capabilities.
Bias layer
y = b //(Learn b)
This layer is basically learning a constant. It’s capable of learning an offset, a bias, a threshold, or a mean. If we create a neural network only from this layer and train it over a dataset, the mean square error (MSE) loss will force this layer to converge to the mean or average of the outputs.
For instance, if we have the following dataset {2,2,3,3,4,4}, and we’re forcing the neural network to compress it to a single value b, the most logical convergence will be around b=3 (the average of the dataset), since that is the value that reduces the loss the most. We can see that learning a constant is a bit like learning the DC component of an electric circuit, an offset, or a reference level to compare against. Any value above this offset will be positive, any value below it will be negative. It’s like redefining where the offset 0 should start from.
Learning a bias = learning a threshold or an average
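To make this concrete, here is a small numeric sketch (not from the original post) showing that minimizing the MSE over a single learnable constant recovers the mean — plain NumPy gradient descent stands in for an actual layer:
import numpy as np
y = np.array([2., 2., 3., 3., 4., 4.])   # the dataset from the example above
b = 0.0                                   # the single learnable constant
lr = 0.1                                  # learning rate (arbitrary choice)
for _ in range(200):
    grad = 2 * np.mean(b - y)             # derivative of MSE = mean((b - y)^2)
    b -= lr * grad
print(round(b, 3), y.mean())              # both print 3.0 -- b converged to the mean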
Linear Layer
y = w*x //(Learn w)
A linear layer without a bias is capable of learning an average rate of correlation between the output and the input: if x and y are positively correlated => w will be positive; if x and y are negatively correlated => w will be negative; if x and y are totally independent => w will be around 0.
Another way of perceiving this layer: consider a new variable A = y/x and use the “bias layer” from the previous section. As we said before, it will learn the average or the mean of A — which is the average of output/input, and thus the average rate at which the output changes relative to the input.
A linear curve without a bias = learning a rate of change
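As a quick illustrative sketch (the data here is made up), fitting y = w*x without a bias by least squares recovers exactly that rate of change:
import numpy as np
x = np.array([1., 2., 3., 4., 5.])
y = 2.0 * x                         # the output changes at a rate of 2 per unit of input
w = np.sum(x * y) / np.sum(x * x)   # closed-form least-squares solution for y = w*x
print(w)                            # 2.0 -- the learned rate of change
print(np.mean(y / x))               # also 2.0 -- the "average of A = y/x" view described above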
Linear Feed-forward layer
y = w*x + b //(Learn w, and b)
A feed-forward layer is a combination of a linear layer and a bias. It is capable of learning an offset and a rate of correlation. Mathematically speaking, it represents the equation of a line. In terms of capabilities:
This layer is able to replace both a linear layer and a bias layer.
By learning that w=0 => we can reduce this layer to a pure bias layer.
By learning that b=0 => we can reduce this layer to a pure linear layer.
A linear layer with bias can represent PCA (for dimensionality reduction), since PCA is just a linear combination of the inputs.
A linear feed-forward layer can learn scaling automatically. Both a MinMaxScaler and a StandardScaler can be modeled through a linear layer.
By learning w=1/(max-min) and b=-min/(max-min) a linear feed-forward is capable of simulating a MinMaxScaler
Learning a MinMaxScaler through a linear feed-forward layer
Similarly, by learning w=1/std and b=-avg/std, a linear feed-forward is capable of simulating a StandardScaler
Learning a StandardScaler through a linear feed-forward layer
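As a quick numeric check (my own sketch, not the author's code), these weights reproduce Scikit-learn's scalers exactly:
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler
x = np.array([[2.], [2.], [3.], [3.], [4.], [4.]])
# MinMaxScaler as a linear layer: w = 1/(max-min), b = -min/(max-min)
w_mm, b_mm = 1 / (x.max() - x.min()), -x.min() / (x.max() - x.min())
print(np.allclose(w_mm * x + b_mm, MinMaxScaler().fit_transform(x)))      # True
# StandardScaler as a linear layer: w = 1/std, b = -avg/std
w_std, b_std = 1 / x.std(), -x.mean() / x.std()
print(np.allclose(w_std * x + b_std, StandardScaler().fit_transform(x)))  # True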
So next time, if you are not sure which scaling technique to use, consider using a feed-forward linear layer as a first layer in the architecture to scale the inputs and as a last layer to scale back the output.
A linear feed-forward. Learns the rate of change and the bias. Rate =2, Bias =3 (here)
Limitations of linear layers
These three types of linear layer can only learn linear relations. They are totally incapable of learning any non-linearity (obviously).
Stacking these layers immediately one after another is totally pointless and a waste of computational resources. Here is why:
If we consider 2 consecutive linear feed-forward layers y₁ and y₂:
Stacking several linear feed-forward layers
We can re-write y₂ in the following form: y₂ = w₂y₁ + b₂ = w₂(w₁x + b₁) + b₂ = (w₂w₁)x + (w₂b₁ + b₂), which is itself just a single linear feed-forward layer with weight w₂w₁ and bias w₂b₁ + b₂.
We can apply the same reasoning to any number of consecutive linear layers: a single linear layer is capable of representing any number of consecutive linear layers. For example, scaling and PCA can be combined into one single linear feed-forward layer at the input.
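A tiny numeric check of that claim (values chosen arbitrarily):
import numpy as np
x = np.linspace(-5, 5, 11)
w1, b1 = 1.5, 0.5                               # first linear feed-forward layer
w2, b2 = -2.0, 3.0                              # second layer stacked on top
stacked = w2 * (w1 * x + b1) + b2               # two stacked layers
single = (w2 * w1) * x + (w2 * b1 + b2)         # one equivalent layer
print(np.allclose(stacked, single))             # True -- stacking added nothing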
Stacking Linear Layers is a waste of resources! Stacking them adds no benefit.
If you enjoyed reading, follow us on: Facebook, Twitter, LinkedIn | https://medium.com/datathings/linear-layers-explained-in-a-simple-way-2319a9c2d1aa | ['Assaad Moawad'] | 2020-07-28 07:43:53.359000+00:00 | ['Machine Learning', 'Neural Networks', 'Artificial Intelligence', 'Backpropagation', 'Linear Regression'] |
The Obscure Legacy of Stephen Dixon | Comics, often about movies || New York Times, The New Yorker, Hyperallergic || A House in the Jungle, Koyama Press || @gelgud || www.nathangelgud.com
| https://medium.com/spiralbound/the-obscure-legacy-of-stephen-dixon-b7e2c3fcef81 | ['Nathan Gelgud'] | 2020-01-19 02:17:43.338000+00:00 | ['Stephen Dixon', 'Olivier Assayas', 'Books', 'Comics', 'Autofiction'] |
Taiwan Data Science Meetup: Assemble! A Meetup for Programming-Language Muggles! Can Humanities Majors Do Data Work, Too? | From Marketing to Data: my career-change journey at 17Media
From Marketing to Data, My Career Roadmap in 17Media. | https://medium.com/twdsmeetup/taiwan-data-science-meetup-%E9%9B%86%E5%90%88%E5%95%A6-%E7%A8%8B%E5%BC%8F%E8%AA%9E%E8%A8%80%E9%BA%BB%E7%93%9C%E6%9C%83-%E6%96%87%E7%B5%84%E4%B9%9F%E8%83%BD%E5%81%9A%E8%B3%87%E6%96%99%E5%B7%A5%E4%BD%9C-591cef1b934f | ['Wendy Hsu'] | 2020-08-04 01:16:36.278000+00:00 | ['Data Analyst', 'Careers', 'Meetup', 'Business Intelligence'] |
Why Big Tech Companies Can’t Stop Being Evil | Rana Foroohar’s new book, ‘Don’t Be Evil,’ paints an alarming portrait of Silicon Valley tech companies that need to be reined in before they start to affect our lives in even more insidious ways.
Photo by Mitchell Luo on Unsplash
In 1998, in an academic paper titled “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Stanford University students Larry Page and Sergey Brin described how their project, which they had named Google, worked. Until Page and Brin published, their work had been shrouded in secrecy, much to the consternation of the Stanford faculty. However, their mentor was eventually able to convince Page and Brin to publish some of their work — it was an academic project, after all — and the two relented.
In the paper, Page and Brin outlined how search could be monetized — via advertising — and noted the danger of such a funding model, writing, “The goals of the advertising business model do not always correspond to providing quality search to users.” Indeed, such a model lead to what the two labeled “search engine bias” a phenomenon they identified as “particularly insidious.”
Financial Times writer Rana Foroohar includes the Google origin story in her scathing indictment of the two tech titans and, at least in her opinion, the road of moral compromise its two founders traversed to essentially take over the internet. Through stunning self-deception and adherence to a self-serving principle that “information wants to be free,” it wasn’t long before Google’s two founders succumbed to investor pressure and embraced an ideology of “data exists to be monetized.”
The Internet Problem
The title of Foroohar’s book is Don’t Be Evil: How Big Tech Betrayed Its Founding Principles — and All of Us, a pithy nod to the company’s early motto. Foroohar is not only trying to be clever, she believes that “don’t be evil” was Google’s model from the outset because Page and Brin knew the “insidious” potential of the technology they had created. Writes Foroohar,
“When Google advised its employees not to be evil, it did so because it knew full well that evil was more than a powerful temptation. Evil was baked into the business plan.”
Capitulating to the demands of investors to monetize by employing the advertising business plan they had previously denounced was merely their first sin. As Foroohar tells it, the company copied goto.com and Yelp, and cut off traffic to foundem.com, funneling users to Google’s results instead. This trend of big companies getting bigger and cutting off the ability of smaller firms to operate is at the heart of what Foroohar sees as the “the internet problem.”
Indeed, Foroohar is at her best when she is making the case against Google, Facebook, Amazon, and others based on their anticompetitive practices and showing how the subsequent market concentration is stifling competition and hurting innovation. She also compares the internet of today to railways of the late 1800s. At that time, there was significant market concentration and a number of industrialists who both owned the railroads and then did business on them, giving preference to their own services and keeping competitors off their networks. That is almost exactly the same thing Google did with both Yelp and foundem.com.
As a fix, Foroohar suggests regulations that would ban companies from conducting business on platforms they own. She also calls for a digital sovereign wealth fund to which Big Tech firms contribute and the money is used to educate a workforce that seems to be on a collision course with obsolescence — at the hands of Big Tech, of course.
While some of her proposals appear more legitimate than others — a global flat sales tax seems like a pipe dream, whereas data being valued on firm’s financial statements carries some merit — her list of proposals is as creative as it is thorough, and a good starting point for anyone serious about moving beyond merely griping about Big Tech and actually wanting to start thinking about reasonable solutions.
However, despite the urgency Foroohar brings to her writing about Big Tech and the necessity of breaking up Google, Facebook, Amazon, and others, she is perhaps a bit credulous regarding a significant aspect of the Big Tech debate. On the “addictive” nature of social media and the negative cognitive side effects that addiction can produce, two distinct schools of thought have emerged.
There are those like Foroohar, who believe the situation is bad enough that government action is necessary. This group also includes Jonathan Taplin, author of Move Fast and Break Things, former Googler Tristan Harris and his nonprofit The Center For Humane Technology, and even Republican Sem. Josh Hawley and his proposed SMART Act.
The other school of thought believes that tech “addiction” is a personal issue, so people simply need to exercise more self-control and not let themselves become addicted by Silicon Valley’s latest offerings. It could be said that Nir Eyal, author of Indestractible, and Cal Newport, author of Digital Minimalism, best represent this group. On some level this is the default position of many, especially people of faith. In this view, the onus is on individuals to moderate their tech use, not on Google and Facebook.
While there isn’t space in this article to discuss the merits of either position, it is clear from the totality of Foroohar’s argument that she believes social media platforms are addictive, intentionally designed to be so, and cause cognitive harm in the form of increased anxiety and depression. She then uses the claim that modern tech is addictive in her case for regulating Google, Facebook, and others.
In the book, Foroohar tells of her son, who racked up a $900 iTunes bill playing an online “freemium” soccer game on his iPhone after school and how, through more intentional parental discipline, her son quit the game and was no longer addicted to it. Although she tries to use the addictive nature of tech to explain why there needs to be regulation, her success in mitigating the negative effects of tech in her son’s life reveals that, on a personal level, individuals can choose to use Silicon Valley’s products more wisely. In that case, one begins to wonder whether regulation really is necessary after all.
Becoming the Villain
That inconsistency aside, Foroohar expertly and succinctly lays out the problems with Big Tech, drawing on her decades of experience covering the industry for the Financial Times. As noted before, her solutions are generally practical, varied, and for the most part modest.
And perhaps her most prescient ideas are already proving to be true. Foroohar retells Mark Zuckerberg’s testimony before Congress in the wake of the 2016 election and highlights a photograph that caught a glimpse of Zuckerberg’s notes telling him to mention China when questioned about Facebook’s monopolistic characteristics. Foroohar calls out Google and Facebook for crying “China” when pressure picks up against them:
Big Tech firms have responded to the growing public concern about privacy and anticompetitive business practices by playing to a long-standing American fear: It’s us versus China. Companies like Google and Facebook are increasingly trying to portray themselves to regulators and politicians as national champions, fighting to preserve America’s first-place standing in a video-game-like, winner-take-all battle for the future against the evil Middle Kingdom.
Recently Zuckerberg, in a speech at Georgetown University, portrayed the issue of free speech in just such a light. Said Zuckerberg, “There’s no guarantee these values [freedom of expression] will win out. A decade ago, almost all of the major internet platforms were American. Today, six of the top 10 are Chinese.”
While his speech made several good points about freedom of expression in the age of platform capitalism, the way he framed the issue — in terms of American values rather than international human rights — has caused some to wonder if his intent is genuine, or if he is doing exactly what Foroohar predicted he would do: make China into the enemy he needs them to be for Facebook to remain intact.
Foroohar is also right to identify Big Tech’s “nefarious side effects” as perhaps the major economic issue facing Congress in the next five years. Indeed, it would be hard to believe that at least a number of Big Tech’s main challenges wouldn’t be resolved in either president Trump’s second term or by whichever administration takes over in 2020. That’s especially if the next president is Elizabeth Warren, who has repeatedly voiced her concern with Big Tech in general and Facebook specifically.
If Foroohar has her way, it won’t take that long. Google, Facebook, Amazon, and other former darlings of the American economy would be broken up tomorrow. Foroohar cites management guru Peter Drucker who said, “In every major economic downturn in U.S. History the villains have been the heroes during the previous boom.” If that holds true, Big Tech’s days may be numbered.
John Thomas is a freelance writer. His writing has appeared at The Public Discourse, The American Conservative, and Christianity Today. He writes regularly at Soli Deo Gloria.
A version of this article first appeared at The Federalist. | https://medium.com/soli-deo-gloria/why-big-tech-companies-cant-stop-being-evil-b046ceb36d1f | ['John Thomas'] | 2019-12-20 10:04:39.061000+00:00 | ['Technology', 'Books', 'Social Media', 'Tech', 'Government'] |
Are you using the “Scikit-learn wrapper” in your Keras Deep Learning model? | Are you using the “Scikit-learn wrapper” in your Keras Deep Learning model?
How to use the special wrapper classes from Keras for hyperparameter tuning?
Image created by the author with open-source templates
Introduction
Keras is one of the most popular go-to Python libraries/APIs for beginners and professionals in deep learning. Although it started as a stand-alone project by François Chollet, it has been integrated natively into TensorFlow starting in Version 2.0. Read more about it here.
As the official doc says, it is “an API designed for human beings, not machines” as it “follows best practices for reducing cognitive load”.
Image source: Pixabay
One of the situations, where the cognitive load is sure to increase, is hyperparameter tuning. Although there are so many supporting libraries and frameworks for handling it, for simple grid searches, we can always rely on some built-in goodies in Keras.
In this article, we will quickly look at one such internal tool and examine what we can do with it for hyperparameter tuning and search.
Scikit-learn cross-validation and grid search
Almost every Python machine-learning practitioner is intimately familiar with the Scikit-learn library and its beautiful API with simple methods like fit, get_params, and predict.
The library also offers extremely useful methods for cross-validation, model selection, pipelining, and grid search. If you look around, you will find plenty of examples of using these API methods for classical ML problems. But how do you use the same APIs for a deep learning problem?
One of the situations, where the cognitive load is sure to increase, is hyperparameter tuning.
When Keras enmeshes with Scikit-learn
Keras offers a couple of special wrapper classes — for both regression and classification problems — to utilize the full power of these APIs that are native to Scikit-learn.
In this article, let me show you an example of using simple k-fold cross-validation and exhaustive grid search with a Keras classifier model. It utilizes an implementation of the Scikit-learn classifier API for Keras.
The Jupyter notebook demo can be found here in my Github repo.
Start with a model generating function
For this to work properly, we should create a simple function to synthesize and compile a Keras model with some tunable arguments built-in. Here is an example,
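The embedded snippet is not reproduced here, so below is a minimal sketch of what such a function could look like. The two hidden-layer sizes are illustrative choices of mine (the Pima dataset used below has 8 input features); the loss, optimizer, and metric match the compile call quoted later in the article.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
def create_model():
    # A small fully-connected network for the 8-feature Pima dataset
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))   # binary classification output
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    return model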
Data
For this demo, we are using the popular Pima Indians Diabetes dataset. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. So, it is a binary classification task.
We create the feature and target vectors — X and Y.
We scale the feature vector using a scaling API from Scikit-learn, such as MinMaxScaler. We call the result X_scaled.
That’s it for data preprocessing. We can pass X_scaled and Y directly to the special wrapper class we will build next.
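A sketch of those two steps, assuming the standard Pima CSV (8 feature columns plus a binary outcome column, no header row) is available locally as pima-indians-diabetes.csv:
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
data = pd.read_csv('pima-indians-diabetes.csv', header=None)
X = data.iloc[:, 0:8].values      # the 8 diagnostic measurements
Y = data.iloc[:, 8].values        # the binary diabetes outcome
X_scaled = MinMaxScaler().fit_transform(X)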
Keras offers a couple of special wrapper classes — for both regression and classification problems — to utilize the full power of these APIs that are native to Scikit-learn.
The KerasClassifier class
This is the special wrapper class from Keras that enmeshes the Scikit-learn classifier API with Keras parametric models. We can pass various model parameters corresponding to the create_model function, along with other hyperparameters like epochs and batch size, to this class.
Here is how we create it,
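A sketch of that construction — the epoch and batch-size values below are placeholders of my choosing, and note that in recent TensorFlow releases this wrapper has been removed in favor of the separate scikeras package:
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
model = KerasClassifier(build_fn=create_model,
                        epochs=100,      # fixed for now; tuned later via grid search
                        batch_size=10,   # placeholder value
                        verbose=0)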
Note how we pass our model creation function as the build_fn argument. This is an example of using a function as a first-class object in Python, where you can pass functions as regular parameters to other classes or functions.
For now, we have fixed the batch size and the number of epochs we want to run our model for, because we just want to run cross-validation on this model. Later, we will turn these into hyperparameters and do a grid search to find the best combination.
10-fold cross-validation
Building a 10-fold cross-validation estimator is easy with the Scikit-learn API. Here is the code. Note how we import the estimators from the model_selection module of Scikit-learn.
Then, we can simply run the model with this code, where we pass on the KerasClassifier object we built earlier along with the feature and target vectors. The important parameter here is the cv where we pass the kfold object we built above. This tells the cross_val_score estimator to run the Keras model with the data provided, in a 10-fold Stratified cross-validation setting.
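A sketch of those two steps (the shuffle and random_state settings are my additions):
from sklearn.model_selection import StratifiedKFold, cross_val_score
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
cv_results = cross_val_score(model, X_scaled, Y, cv=kfold)
print(cv_results.mean(), cv_results.std())   # average accuracy and its spread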
The output cv_results is a simple Numpy array of all the accuracy scores. Why accuracy? Because that’s what we chose as the metric in our model compiling process. We could have chosen any other classification metric like precision, recall, etc. and, in that case, that metric would have been calculated and stored in the cv_results array.
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
We can easily calculate the average and standard deviation of the 10-fold CV run to estimate the stability of the model predictions. This is one of the primary utilities of a cross-validation run.
Beefing up the model creation function for grid search
Exhaustive (or randomized) grid search is often a common practice for hyperparameter tuning or to gain insights into the working of a machine learning model. Deep learning models, being endowed with a lot of hyperparameters, are prime candidates for such a systematic search.
In this example, we will search over the following hyperparameters,
activation function
optimizer type
initialization method
batch size
number of epochs
Needless to say, we have to add the first three of these parameters to our model definition.
Then, we create the same KerasClassifier object as before,
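A sketch of how the beefed-up function and the wrapper could look — the architecture mirrors the earlier sketch, and epochs/batch_size are deliberately left out because they now go into the search grid:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
def create_model(activation='relu', optimizer='adam', init='uniform'):
    # Same architecture as before, but with tunable activation,
    # optimizer, and weight-initialization scheme
    model = Sequential()
    model.add(Dense(12, input_dim=8, kernel_initializer=init, activation=activation))
    model.add(Dense(8, kernel_initializer=init, activation=activation))
    model.add(Dense(1, kernel_initializer=init, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model, verbose=0)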
The search space
We set the size of the exhaustive hyperparameter search space to 3×3×3×3×3 = 243 combinations.
Note that the actual number of Keras runs will also depend on the number of cross-validation we choose, as cross-validation will be used for each of these combinations.
Here are the choices,
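The exact values searched in the original post are not reproduced here; the dictionary below is an illustrative stand-in with three options per hyperparameter (relu/adam appear in the initial run and tanh/rmsprop in the reported best result — the remaining entries are my guesses):
param_grid = {
    'activation': ['relu', 'tanh', 'sigmoid'],
    'optimizer': ['adam', 'rmsprop', 'sgd'],
    'init': ['uniform', 'normal', 'glorot_uniform'],
    'batch_size': [10, 20, 40],
    'epochs': [10, 50, 100],
}
# 3 x 3 x 3 x 3 x 3 = 243 combinations, as noted above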
That’s a lot of dimensions to search over!
Image source: Pixabay
Enmeshing Scikit-learn GridSearchCV with Keras
We have to create a dictionary of search parameters and pass it on to the Scikit-learn GridSearchCV estimator. Here is the code,
By default, GridSearchCV runs a 5-fold cross-validation if the cv parameter is not specified explicitly (from Scikit-learn v0.22 onwards). Here, we keep it at 3 for reducing the total number of runs.
It is advisable to set the verbosity of GridSearchCV to 2 to keep a visual track of what’s going on. Remember to keep the verbose=0 for the main KerasClassifier class though, as you probably don't want to display all the gory details of training individual epochs.
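Putting that together, the estimator could be constructed roughly like this:
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(estimator=model,
                    param_grid=param_grid,
                    cv=3,         # 3-fold CV to keep the total number of runs down
                    verbose=2)    # print progress for each individual fit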
Then, just fit!
As we all have come to appreciate the beautifully uniform API of Scikit-learn, it is time to call upon that power and just say fit to search through the whole space!
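That single call, using the grid object and scaled data sketched above, is simply:
grid_result = grid.fit(X_scaled, Y)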
Image source: Pixabay
Grab a cup of coffee because this may take a while depending on the deep learning model architecture, dataset size, search space complexity, and your hardware configuration.
In total, there will be 729 fittings of the model, 3 cross-validation runs for each of the 243 parametric combinations.
If you don’t like full grid search, you can always try the randomized search (RandomizedSearchCV) from the Scikit-learn stable!
What does the result look like? Just what you would expect from a Scikit-learn estimator, with all the goodies stored for your exploration.
What can you do with the result?
You can explore and analyze the results in a number of ways based on your research interest or business goal.
What’s the combination of the best accuracy?
This is probably on the top of your mind. Just print it using the best_score_ and best_params_ attributes from the GridSearchCV estimator.
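For example:
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))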
We did the initial 10-fold cross-validation using ReLU activation and the Adam optimizer and got an average accuracy of 0.691. After doing an exhaustive grid search, we discovered that tanh activation and the rmsprop optimizer could have been better choices for this problem. We got better accuracy!
Extract all the results in a DataFrame
Many a time, we may want to analyze the statistical nature of the performance of a deep learning model under a wide range of hyperparameters. To that end, it is extremely easy to create a Pandas DataFrame from the grid search results and analyze them further.
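One line is enough (cv_results_ is a standard attribute of the fitted GridSearchCV object):
import pandas as pd
cv_results_df = pd.DataFrame(grid_result.cv_results_)
cv_results_df.head()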
Here is the result,
Analyze visually
We can create beautiful visualizations from this dataset to examine and analyze what choice of hyperparameters improves the performance and reduces the variation.
Here is a set of violin plots of the mean accuracy created with Seaborn from the grid search dataset.
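One way such a plot could be produced from the DataFrame built above (mean_test_score and the param_* columns are standard fields of cv_results_):
import seaborn as sns
import matplotlib.pyplot as plt
sns.violinplot(x='param_activation', y='mean_test_score', data=cv_results_df)
plt.show()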
Here is another plot,
…it is extremely easy to create a Pandas DataFrame from the grid search results and analyze them further.
Summary and further thoughts
In this article, we went over how to use the powerful Scikit-learn wrapper API, provided by the Keras library, to do 10-fold cross-validation and a hyperparameter grid search for achieving the best accuracy for a binary classification problem.
Using this API, it is possible to enmesh the best tools and techniques of a Scikit-learn-based general-purpose ML pipeline with Keras models. This approach definitely has huge potential to save a practitioner a lot of time and effort otherwise spent writing custom code for cross-validation, grid search, and pipelining with Keras models.
Again, the demo code for this example can be found here. Other related deep learning tutorials can be found in the same repository. Please feel free to star and fork the repository if you like. | https://towardsdatascience.com/are-you-using-the-scikit-learn-wrapper-in-your-keras-deep-learning-model-a3005696ff38 | ['Tirthajyoti Sarkar'] | 2020-09-23 00:53:29.541000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Technology', 'Data Science'] |
Ionic Capacitor vs Apache Cordova Difference With Example | Ionic Capacitor is an open source framework innovation to help you build Progressive Native Web, Mobile and Desktop apps. On the other side Apache Cordova formerly PhoneGap > Adobe PhoneGap does the same for accessing native features of the device from mobile WebView.
So, what really makes one succeed over the other in the future?
Apache Cordova, aka PhoneGap, has been around for a while, with widespread community support, much like the growing Ionic Framework.
In developer terms, when using Cordova to build a hybrid native mobile app, you use Cordova plugin libraries, which behind the scenes build your app against the Android or iOS SDK within the Cordova framework (cordova.js/phonegap.js).
With Ionic Capacitor, you create the app without any Cordova imports — not even cordova.js. Instead, Capacitor’s own native plugin runtime is imported as @capacitor/core. Capacitor can also be used without the Ionic Framework, and it is backward compatible with Cordova.
As you know, starting with Ionic Framework v4, you can use any UI framework of your choice — Angular (the default), React, or Vue as of now. This greatly helps the software development industry, making it easier to find any web developer who can hop on and start coding hybrid native apps with minimal or zero learning curve.
How to add capacitor to your Ionic project?
ionic start democap blank
Then, cd into,
cd democap
npm run build
Now, install capacitor package,
npm install --save @capacitor/cli @capacitor/core
The above command will install two packages and save them inside package.json:
+ @capacitor/[email protected]
+ @capacitor/[email protected]
Initialize the capacitor app,
npx cap init demoappcap com.demoappcap.app
you should see below,
Your Capacitor project is ready to go! 🎉
Add platforms using "npx cap add":
npx cap add android <=== To build Android app
npx cap add ios <=== To build iOS app
npx cap add electron <=== To build Desktop app
For this example, we use Android, so run,
npx cap add android
You should see,
✔ Installing android dependencies in 26.22s
✔ Syncing Gradle in 2.55ms
✔ add in 26.31s
✔ Copying web assets from www to android/app/src/main/assets/public in 407.27ms
✔ Copying native bridge in 4.57ms
✔ Copying capacitor.config.json in 1.15ms
✔ copy in 429.06ms
✔ Updating Android plugins in 3.31ms
Found 0 Capacitor plugins for android:
✔ update android in 21.26ms
Let’s add camera functionality. Open /src/app/home/home.page.html:
<ion-header>
<ion-toolbar>
<ion-title>
IONIC CAPACITOR DEMO
</ion-title>
</ion-toolbar>
</ion-header>
<ion-content padding>
<img [src]="takenPicture">
<ion-button (click)="takePicture()">
TAKE PICTURE
</ion-button>
</ion-content>
open, /src/app/home/home.page.ts
import { Component } from '@angular/core';
import { Plugins, CameraSource, CameraResultType } from '@capacitor/core';
@Component({
selector: 'app-home',
templateUrl: 'home.page.html',
styleUrls: ['home.page.scss'],
})
export class HomePage {
takenPicture: any;
async takePicture() {
const photo = await Plugins.Camera.getPhoto({
quality: 90,
allowEditing: true,
source: CameraSource.Camera,
resultType: CameraResultType.Uri
});
this.takenPicture = photo.webPath;
}
}
Now let's save and build the web assets:
npm run build    <=== or use: ionic build
Every time you build, you need to copy the built web assets into the native project — for us that's android:
npx cap copy
Unlike Ionic with Cordova, you cannot run the app on a device directly from here; you need to open the platform's IDE:
npx cap open android
The above source is your starting point for understanding the difference between Cordova and Capacitor.
To open your code in specific IDE,
npx cap open ios // Xcode
npx cap serve // PWA - open your web app in a local web server
Migrating existing Ionic Framework app to Capacitor
Just go to the root of your project in CMD or Terminal and type below command
ionic integrations enable capacitor
That's it. Verify that a new file has been created in your project root:
capacitor.config.json <------- file added by capacitor
You can continue to add any platform you need to support
Your Capacitor project is ready to go! 🎉
Add platforms using "npx cap add":
npx cap add android <=== To build Android app
npx cap add ios <=== To build iOS app
npx cap add electron <=== To build Desktop app
To avoid iOS app submission/re-submission rejections, please make sure you upgrade your app to NOT use UIWebView and instead use WKWebView.
Discussing Beacons and What They Mean for the Retail Industry | This week, I catch up with Mike Blackmore and pick his brain on all things Beacons
There is a relatively underused, and underdeveloped tool that will soon become the norm in retail stores around the globe. We call them Beacons. These are devices that enhance customer experience, track and measure foot traffic, and could potentially put an end to face to face sales. I chat with Mike Blackmore, a Google top contributor and and advertising tech Beta-Tester on his thoughts of this new wave of digital service.
Beacons are a new tool for advertisers that combine digital offerings with en-vogue experiential marketing offerings. What are your thoughts on Beacons?
We’ve seen the media hype about beacons. We’ve read about proof of concepts and how they are the latest and sexiest devices to hit the Internet of Things. Beacons are here to stay. Now’s the time to get serious in your business about the impact of these devices. This year alone beacons are set to influence over 4 billion US dollars of retail sales, and that number is set to grow tenfold by next year. But right now, many marketers only think about using beacons to push phone notifications.
So what exactly is a Beacon? How does it work?
To start off with, what is a beacon and how do they work? I like to use the analogy of a lighthouse. A beacon has one simple purpose in life, and that is to send out a signal and say I am here. It’s completely unaware of any mobile devices that are around it. It doesn’t connect to them. It doesn’t steal their data. It doesn’t know anything. It just sends out the signal and says hey, I’m here, I’m a beacon, much in the same way a lighthouse would. It doesn’t see the ships. The ships are the mobile devices.
They are very simple devices. They have a universally unique identifier (UUID), plus major and minor values. Beacons are used in many different situations, from retail to museums, airlines and airports, and sometimes even indoor navigation. A beacon can be a dedicated hardware device whose only purpose is to be a beacon — they can even be battery powered. A beacon can also be a mobile device: an iPad, an iPhone, an Android device. They all have Bluetooth low energy capable antennas.
Pro-Tip: A beacon powered up to 70 meters might only last 6 months, whereas if you have it down to 2 meters you might get 2 whole years out of your beacon
So how will these affect advertisers and consumers? What are some typical uses for Beacons?
You can use beacons at the right time and place to give users that virtual tap on the shoulder and say: “Can we take your order. Would you like some help? Would you like a personal shopper dispatched to your location? Here’s your boarding pass for today. Would you like to get an updated priority boarding so you can skip the queue?” Also, I have seen some case studies where car dealerships are implementing beacons for after-hour lot shoppers so they are able to see the information about the specific vehicles on the lot.
This really opens up the ability for dealerships to get leads after hours with a simple push notification and the user is sent to the vehicle landing page where they can inquire.
Is implementing them difficult? What challenges do advertisers face?
There are many different models and brands of beacons out there, from Facebook to Eddystone and much more. Most just require an app and some bluetooth connections.
Beacons are very affordable. They’re practical. They’re easy to install. It’s not science fiction. These things can be put to work today in your company. When positioning your beacons, what you need to know as well is that water or human bodies absorb Bluetooth low energy signal. Don’t be afraid to take your beacon and position it up high. The ceiling’s usually pretty good.
Ok, good to know. How do consumers interact with beacons?
It’s more like the beacons interact with the consumer. For consumers, this means a frictionless shopping experience, with fewer gaps between channels. Consumers only need to have their bluetooth enabled on their device as it works on BLE (Bluetooth Low Energy) and be within range of your beacon. In some cases when you can get your hands on a Facebook beacon and the consumer is in your target radius. When they open Facebook on their device they will be shown your Facebook page and any special offers you are advertising. Basically a push notification lol.
What kind of information can a beacon deliver? How does it enhance the shopping experience?
iBeacons can only transmit one type of data: the UUID number. Beacons using the new Eddystone standard can transmit three types of data: UID (similar to UUID), URL (website addresses) and TLM (telemetry, such as temperature and beacon battery level). Standard beacons don’t have any memory to store arbitrary data, but you could work around that by using the Lightblue Bean or building a custom beacon from Raspberry Pi. Another option would be to use a backend, where you store and retrieve the data.
It enhances the shopping experience in many ways depending on what industry you are using your beacons for. A few examples I have already seen are:
Beacons can help customers find what they are looking for faster, find associates for one-on-one help and they can even streamline the checkout process. But without a beacon network a mobile app is simply just a mobile website.
As customers travel around a store the app can share relevant content about departments, feature sections and offers specific to showroom displays.
Retailers have been using various methods to monitor foot traffic around the store for a long time.
So to a consumer, a beacon is simply a push notification that does the job of a sales person. Is this the final blow to the face to face sales industry?
I think some industries should be concerned, but for the most part this could be used as a tool to send people directly to salespeople, or to have salespeople directed to them with heat mapping, etc. The dashboard is incredible because it allows you to see trends at your business — such as where people enter your dealership and where they leave — and you could offset this by placing salespeople on ups in those high-traffic areas.
That’s fair, but considering the rapid advancements of both Virtual Reality and Augmented Reality, it’s no longer just the dream of sci-fi writers that bots and artificial intelligence can make sales, demonstrate products, and aggressively price items. Is this not a step towards giving the customer what they want — a truly transparent shopping experience? The cost/benefits of self service have been on display for years. I believe this is a monumental step towards self-service everything.
Paradigm shift is another term I would use when describing how this is going to transform consumer behavior. What’s leading this is the transparency and experience beacons offer.
People more than ever want to shop and not to be sold to. The market chooses the winners and I believe the cost/benefit of self service is already ingrained into certain day to day conveniences from the grocery store to the gas pump. The need to interact with a human is becoming less and less.
Technology, especially beacons will only increase the amount of services that will be driven by AI, beacons and whatever else the boys and girls are dreaming up in Silicon Valley. I am really interested to see where this ends up in the next 5–10 years. Beacons are not a new technology but since Eddystone changed the game in 2013 they have become more of a reality for any business to utilize.
I completely agree. Lastly, if you were to build an advertising campaign that utilized beacons in retail stores, what would it look like?
I would create an advertising campaign that would send push notifications for an example at West Edmonton Mall when you walked by a store you could get a push notification to come into the store and present your mobile phone for an additional 10–20% off items in the store. The retailer would be able to track conversions in real time and the customer would get notified of specials without having to take the dreaded step inside your store unless it appealed to them first. There are many other ideas flowing through my head for car dealerships but I will get into those at a later time.
Great! Thanks a lot Mike, always awesome to get your insight on emerging tech and marketing trends! | https://medium.com/digital-landmines/discussing-beacons-and-what-they-mean-for-the-retail-industry-3e4bc75ca34f | ['Dane Traill-Forbyth'] | 2017-09-08 18:53:10.314000+00:00 | ['Tech', 'Marketing', 'Beacons', 'Digital Marketing'] |
Motor Planning | Unsplash
Motor Planning
What it’s like when half your body doesn’t work properly
From Study.com:
Motor planning is the ability to assess a motor activity, plan and organize how to carry out that motor activity, and finally implement motor skills to achieve that motor activity.
I became aware of the term motor planning at a recent physical therapy session where the therapist asked me to complete a simple task of walking up a step then placing an object on the adjacent window sill. I told her how I planned to do each step out loud. She commented this was good motor planning. It was then I realized what a big part of life this is.
Most of us don’t think about motor planning. We take it for granted because it’s automatic. I have been gifted with a disability which makes me appreciate every move I make.
The left half of my body became paralyzed when I had a stroke at 35. I slowly had to re-learn how to move. My body has never fully cooperated in the 20 years since the stroke.
This forces me to engineer every movement and activity of daily living. From getting clothes on and off, to getting groceries into the house from the car, I have to mentally map out each move considering the limitations of my body.
My fine motor skills in my left hand are nearly nonexistent. My arm and leg are weak. My foot is paralyzed. My balance is poor. I carry a cane in my functioning right hand so I can’t always use it when I need it.
I spend every waking minute planning how to get by without the help of my left hand. This often means I rest my cane somewhere so I can carry something with my right hand. I walk ok without the cane for short distances. When I finish what I needed my right hand for, I regularly forget where I left my cane.
Ingenuity is my friend. I don’t usually realize how glaring it is when people watch me that I effectively ignore my left hand.
For instance, a teller at the drive through lane at the bank once asked what was wrong with my left hand since I reach across to the transaction drawer with my right. He was shocked and embarrassed when I told him I’d had a stroke.
It is exhausting to have to put so much thought and effort into getting through the day. Things that were once simple and automatic have become inordinately complicated.
Having physical limitations doesn’t seem as bad to me as having cognitive deficits would. I am grateful for the ability to assess a motor activity, plan and organize how to carry out that motor activity and figure out a way to implement it even if it isn’t elegant.
Having a minor disability has given me an appreciation for what totally disabled people have to deal with; and I take much less for granted.
I hope my story gives you a sense of gratitude for the automatic ease with which your body functions. | https://brooksidevic.medium.com/motor-planning-3bfe11a6d66f | ['Victoria Ponte'] | 2019-10-10 20:23:24.072000+00:00 | ['This Happened To Me', 'Mental Health', 'Disability', 'Stroke', 'Gratitude'] |
The Dawn | Driving down that road,
The lanes where farms ran across
Stars and moon looking from the sky
Gently caressing the sleepy crops
The cool soothing breeze rolls
Gently washing my face
Conveying the goodbye of the moon
To wake up my eyes and soul
The curls in my locks stretch
And all aquiver danced on my forehead
The breath deeply feels the zephyr
With the rising Sun, far, on the horizon
Sky’s silver touch turns slightly rouge
Moon being painted and some stars fading through
As the sun rises from its home
So mellifluously accepts the reins from the night’s crew
The shining moon, now, quietly conceals
A few zealous stars hold on to bask in the Sun’s hues
And when the sunlight sprinkles its golden aura across
They too, follow the moon’s lead, internalizing their pause
The lingering softness of the cool breeze
Even to the warmer sun-rays, seem to enchant
The lingering darkness gradually trickles into oblivion
When the dawn gives way to the morning song
The Sun lifts up the sky, in full vigour
Waking up the last of the sleepy wings
Lending the colours to the grass beside
Warming the souls and the air alike!
© GK | https://medium.com/weeds-wildflowers/the-dawn-ff5ae297079a | [] | 2020-05-30 21:01:00.805000+00:00 | ['Poetry', 'Poems On Medium', 'Dawn', 'Nature', 'Writing'] |
Never Take Financial Advice From a Brick Maker | Image by Free-Photos from Pixabay
“Get your data from the right sources” — Grant Cardone
Advice is given out like candy on Halloween these days. Everyone has their own “two cents” to give regardless of whether or not they are qualified to give advice on a particular subject. What’s worse is that sometimes we don’t even notice that we’re getting bad advice. It is critical that one learns how to get advice from the right sources.
The Richest Man in Babylon is a book that delivers timeless financial advice through a set of parables. There’s a wonderful short story in it about the perils of taking the wrong advice. The story is that of a young man named Arkad who one day decided to take financial advice from the wrong guy: a brick maker. Arkad tells the story himself about how he came to entrust the man with his life-savings. I’ve summarised it here:
“I have given it to Azmur, the brick maker, who told me he was traveling over the far seas and in Tyre he would buy for me the rare jewels of the Phoenicians. When he returns, we shall sell these at high prices and divide the earnings. And it was as he [Arkad’s financial mentor] said. For the Phoenicians are scoundrels and sold to Azmur worthless bits of glass that looked like gems.”
The brickmaker turned out to have very little knowledge of gems (shocker!), just as Arkad’s mentor had warned him. Arkad had received bad advice from an unqualified source and paid for it with his life-savings.
The story itself may sound funny, but it is more common in today’s world than you’d initially think. People pick their stocks based on what a coworker mentioned was good one day. Teenagers in school are setting their career paths based on the advice of teachers and professors.
But how do you know that your coworker is qualified to give advice on stocks? Are they some sort of investing expert or multimillionaire? Have they done extensive research? Do they even hold the stock in their portfolio? These are important questions to ask when taking someone’s advice because the answers determine whether or not the advice is actually reliable.
The exact same thing goes for the other example. Why should a young person take the advice of a teacher to become an accountant, engineer, or doctor? The teacher is certainly qualified to give advice on whether or not the student might succeed in teaching, but totally unqualified to be giving advice on those other professions.
“Why trust the knowledge of a brick maker about jewels? Would you go to the bread maker to inquire about the stars? No, you would go to the astrologer, if you had the power to think.” — Algamish, The Richest Man in Babylon
Unqualified advice is the worst thing you can follow. If no one gives you any advice then at least you can experiment on your own to figure it out. But with unqualified advice, you could be heading in a completely wrong direction! This is really unfortunate because it leads people to have big financial losses and get stuck in dead-end careers.
Point being: don’t take advice from unqualified, non-expert sources.
Now comes the positive part of our story: you still can and certainly should get advice from experts. Expert advice offers you a tremendous advantage to get ahead and make sure you’re making the right moves.
“If you would have advice about jewels, go to the jewel merchant. If you would know the truth about sheep, go to the herdsman. Advice is one thing that is freely given away, but watch that you take only what is worth having.” — Algamish, The Richest Man in Babylon
Perhaps your coworker has no idea what they’re talking about when it comes to stocks. But maybe you know an expert through family or friends that has done well in the stock market who can give you better information. Even better, there are plenty of books written by expert investors that you can get the very best information from. For example, every year, Warren Buffett’s annual letters to shareholders get publicly published — it’s expert advice from one of the greatest investors of all-time.
For your career path, your teachers and professors aren’t going to be the best sources of advice since they have very little industry experience. But there are certainly many other people you can talk to among family and friends that are older and have been in their field for decades. Even better than that, you can reach out directly to people on LinkedIn who are doing well in their career to ask for their advice. People are usually open to offering a helping hand if you’re asking genuinely.
The main point we really want to drive home here is that you should always seek advice from the very best expert source you can find. Amateur advice is at best lucky and at worst catastrophic to your progress. But expert advice is like a shortcut. You’re quickly getting the wisdom gained by the expert through decades of experience. You make your own life easier by learning from the knowledge and experience of the very best. | https://medium.com/live-your-life-on-purpose/never-take-financial-advice-from-a-brick-maker-5293a812a59b | ['Mighty Knowledge'] | 2020-11-26 14:15:21.592000+00:00 | ['Books', 'Advice', 'Money', 'Life Lessons', 'Reading'] |
Everything a Data Scientist Should Know About Data Management* | The Rise of Unstructured Data & Big Data Tools
IBM 305 RAMAC (Source: WikiCommons)
The story of data science is really the story of data storage. In the pre-digital age, data was stored in our heads, on clay tablets, or on paper, which made aggregating and analyzing it extremely time-consuming. In 1956, IBM introduced the first commercial computer with a magnetic hard drive, the 305 RAMAC. The entire unit required 30 ft x 50 ft of physical space, weighed over a ton, and for $3,200 a month, companies could lease the unit to store up to 5 MB of data. In the 60 years since, the price per gigabyte of DRAM has dropped from a whopping $2.64 billion in 1965 to $4.9 in 2017. Besides being orders of magnitude cheaper, data storage also became much denser and smaller in size. A disk platter in the 305 RAMAC stored a hundred bits per square inch, compared to over a trillion bits per square inch in a typical disk platter today.
This combination of dramatically reduced cost and size in data storage is what makes today’s big data analytics possible. With ultra-low storage cost, building the data science infrastructure to collect and extract insights from huge amount of data became a profitable approach for businesses. And with the profusion of IoT devices that constantly generate and transmit users’ data, businesses are collecting data on an ever increasing number of activities, creating a massive amount of high-volume, high-velocity, and high-variety information assets (or the “three Vs of big data”). Most of these activities (e.g. emails, videos, audio, chat messages, social media posts) generate unstructured data, which accounts for almost 80% of total enterprise data today and is growing twice as fast as structured data in the past decade.
125 Exabytes of enterprise data was stored in 2017; 80% was unstructured data. (Source: Credit Suisse)
This massive data growth dramatically transformed the way data is stored and analyzed, as the traditional tools and approaches were not equipped to handle the “three Vs of big data.” New technologies were developed with the ability to handle the ever increasing volume and variety of data, and at a faster speed and lower cost. These new tools also have profound effects on how data scientists do their job — allowing them to monetize the massive data volume by performing analytics and building new applications that were not possible before. Below are the major big data management innovations that we think every data scientist should know about.
Relational Databases & NoSQL
Relational Database Management Systems (RDBMS) emerged in the 1970’s to store data as tables with rows and columns, using Structured Query Language (SQL) statements to query and maintain the database. A relational database is basically a collection of tables, each with a schema that rigidly defines the attributes and types of data that they store, as well as keys that identify specific columns or rows to facilitate access. The RDBMS landscape was once ruled by Oracle and IBM, but today many open source options, like MySQL, SQLite, and PostgreSQL are just as popular.
RDBMS ranked by popularity (Source: DB-Engines)
Relational databases found a home in the business world due to some very appealing properties. Data integrity is absolutely paramount in relational databases. RDBMS satisfy the requirements of Atomicity, Consistency, Isolation, and Durability (or ACID-compliant) by imposing a number of constraints to ensure that the stored data is reliable and accurate, making them ideal for tracking and storing things like account numbers, orders, and payments. But these constraints come with costly tradeoffs. Because of the schema and type constraints, RDBMS are terrible at storing unstructured or semi-structured data. The rigid schema also makes RDBMS more expensive to set up, maintain and grow. Setting up a RDBMS requires users to have specific use cases in advance; any changes to the schema are usually difficult and time-consuming. In addition, traditional RDBMS were designed to run on a single computer node, which means their speed is significantly slower when processing large volumes of data. Sharding RDBMS in order to scale horizontally while maintaining ACID compliance is also extremely challenging. All these attributes make traditional RDBMS ill-equipped to handle modern big data.
By the mid-2000’s, the existing RDBMS could no longer handle the changing needs and exponential growth of a few very successful online businesses, and many non-relational (or NoSQL) databases were developed as a result (here’s a story on how Facebook dealt with the limitations of MySQL when their data volume started to grow). Without any known solutions at the time, these online businesses invented new approaches and tools to handle the massive amount of unstructured data they collected: Google created GFS, MapReduce, and BigTable; Amazon created DynamoDB; Yahoo created Hadoop; Facebook created Cassandra and Hive; LinkedIn created Kafka. Some of these businesses open sourced their work; some published research papers detailing their designs, resulting in a proliferation of databases with the new technologies, and NoSQL databases emerged as a major player in the industry.
An explosion of database options since the 2000’s. Source: Korflatis et. al (2016)
NoSQL databases are schema agnostic and provide the flexibility needed to store and manipulate large volumes of unstructured and semi-structured data. Users don’t need to know what types of data will be stored during set-up, and the system can accommodate changes in data types and schema. Designed to distribute data across different nodes, NoSQL databases are generally more horizontally scalable and fault-tolerant. However, these performance benefits also come with a cost — NoSQL databases are not ACID compliant and data consistency is not guaranteed. They instead provide “eventual consistency”: when old data is getting overwritten, they’d return results that are a little wrong temporarily. For example, Google’s search engine index can’t overwrite its data while people are simultaneously searching a given term, so it doesn’t give us the most up-to-date results when we search, but it gives us the latest, best answer it can. While this setup won’t work in situations where data consistency is absolutely necessary (such as financial transactions); it’s just fine for tasks that require speed rather than pin-point accuracy.
There are now several different categories of NoSQL, each serving some specific purposes. Key-Value Stores, such as Redis, DynamoDB, and Cosmos DB, store only key-value pairs and provide basic functionality for retrieving the value associated with a known key. They work best with a simple database schema and when speed is important. Wide Column Stores, such as Cassandra, Scylla, and HBase, store data in column families or tables, and are built to manage petabytes of data across a massive, distributed system. Document Stores, such as MongoDB and Couchbase, store data in XML or JSON format, with the document name as key and the contents of the document as value. The documents can contain many different value types, and can be nested, making them particularly well-suited to manage semi-structured data across distributed systems. Graph Databases, such as Neo4J and Amazon Neptune, represent data as a network of related nodes or objects in order to facilitate data visualizations and graph analytics. Graph databases are particularly useful for analyzing the relationships between heterogeneous data points, such as in fraud prevention or Facebook’s friends graph.
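As a rough, in-memory caricature of how these data models differ (real systems like Redis, MongoDB, and Neo4j obviously do far more), the sketch below is purely illustrative:
# Key-value store: an opaque value looked up by key
kv_store = {"user:42": '{"name": "Ada", "plan": "pro"}'}
# Document store: the store understands the nested structure, so you can query inside it
doc_store = {"user:42": {"name": "Ada", "plan": "pro",
                         "orders": [{"id": 1, "total": 30.0}]}}
# Graph database: nodes plus explicit, first-class relationships between them
graph = {"nodes": {"ada": {"type": "user"}, "acme": {"type": "company"}},
         "edges": [("ada", "WORKS_AT", "acme")]}
print(doc_store["user:42"]["orders"][0]["total"])   # querying inside a document: 30.0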
MongoDB is currently the most popular NoSQL database, and has delivered substantial values for some businesses that have been struggling to handle their unstructured data with the traditional RDBMS approach. Here are two industry examples: after MetLife spent years trying to build a centralized customer database on a RDBMS that could handle all its insurance products, someone at an internal hackathon built one with MongoDB within hours, which went to production in 90 days. YouGov, a market research firm that collects 5 gigabits of data an hour, saved 70 percent of the storage capacity it formerly used by migrating from RDBMS to MongoDB.
Data Warehouse, Data Lake, & Data Swamp
As data sources continued to grow, performing data analytics with multiple databases became inefficient and costly. One solution, the Data Warehouse, emerged in the 1980s; it centralizes an enterprise's data from all of its databases. A Data Warehouse supports the flow of data from operational systems to analytics/decision systems by creating a single repository of data from various sources (both internal and external). In most cases, a Data Warehouse is a relational database that stores processed data optimized for gathering business insights. It collects data with a predetermined structure and schema coming from transactional systems and business applications, and the data is typically used for operational reporting and analysis.
But data that goes into data warehouses needs to be processed before it gets stored, and with today's massive amount of unstructured data, that can take significant time and resources. In response, businesses started maintaining Data Lakes in the 2010s, which store all of an enterprise's structured and unstructured data at any scale. Data Lakes store raw data, and can be set up without having to first define the data structure and schema. Data Lakes allow users to run analytics without having to move the data to a separate analytics system, enabling businesses to gain insights from new sources of data that were not available for analysis before, for instance by building machine learning models using data from log files, click-streams, social media, and IoT devices. By making all of the enterprise data readily available for analysis, data scientists can answer a new set of business questions, or tackle old questions with new data.
Data Warehouse and Data Lake Comparisons (Source: AWS)
A common challenge with the Data Lake architecture is that without the appropriate data quality and governance framework in place, when terabytes of structured and unstructured data flow into the Data Lakes, it often becomes extremely difficult to sort through their content. The Data Lakes could turn into Data Swamps as the stored data become too messy to be usable. Many organizations are now calling for more data governance and metadata management practices to prevent Data Swamps from forming.
Distributed & Parallel Processing: Hadoop, Spark, & MPP
While storage and computing needs grew by leaps and bounds in the last few decades, traditional hardware has not advanced enough to keep up. Enterprise data no longer fits neatly in standard storage, and the computation required to handle most big data analytics tasks might take weeks or months, or simply be impossible to complete on a standard computer. To overcome this deficiency, many new technologies have evolved to include multiple computers working together, distributing the database to thousands of commodity servers. When a network of computers is connected and works together to accomplish the same task, the computers form a cluster. A cluster can be thought of as a single computer, but it can dramatically improve the performance, availability, and scalability over a single, more powerful machine, and at a lower cost by using commodity hardware. Apache Hadoop is an example of a distributed data infrastructure that leverages clusters to store and process massive amounts of data, and it is what enables the Data Lake architecture.
Evolution of database technologies (Source: Business Analytic 3.0)
When you think Hadoop, think “distribution.” Hadoop consists of three main components: Hadoop Distributed File System (HDFS), a way to store and keep track of your data across multiple (distributed) physical hard drives; MapReduce, a framework for processing data across distributed processors; and Yet Another Resource Negotiator (YARN), a cluster management framework that orchestrates the distribution of things such as CPU usage, memory, and network bandwidth allocation across distributed computers. Hadoop’s processing layer is an especially notable innovation: MapReduce is a two step computational approach for processing large (multi-terabyte or greater) data sets distributed across large clusters of commodity hardware in a reliable, fault-tolerant way. The first step is to distribute your data across multiple computers (Map), with each performing a computation on its slice of the data in parallel. The next step is to combine those results in a pair-wise manner (Reduce). Google published a paper on MapReduce in 2004, which got picked up by Yahoo programmers who implemented it in the open source Apache environment in 2006, providing every business the capability to store an unprecedented volume of data using commodity hardware. Even though there are many open source implementations of the idea, the Google brand name MapReduce has stuck around, kind of like Jacuzzi or Kleenex.
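The Map and Reduce steps are easier to picture with a toy, single-machine analogy. The sketch below counts words across a few text chunks in plain Python; in a real Hadoop cluster each mapped chunk would live on a different node and the framework would handle the shuffling, so treat this only as an illustration of the two-step idea.
from collections import Counter
from functools import reduce
chunks = ['big data tools', 'big clusters store big data']  # stand-ins for file blocks spread across HDFS
# Map: each "node" counts the words in its own slice of the data, in parallel
mapped = [Counter(chunk.split()) for chunk in chunks]
# Reduce: combine the partial counts pair-wise into one final result
total = reduce(lambda left, right: left + right, mapped)
print(total)  # Counter({'big': 3, 'data': 2, ...})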
Hadoop is built for iterative computations, scanning massive amounts of data in a single operation from disk, distributing the processing across multiple nodes, and storing the results back on disk. Querying zettabytes of indexed data that would take 4 hours to run in a traditional data warehouse environment could be completed in 10–12 seconds with Hadoop and HBase. Hadoop is typically used to generate complex analytics models or high volume data storage applications such as retrospective and predictive analytics; machine learning and pattern matching; customer segmentation and churn analysis; and active archives.
But MapReduce processes data in batches and is therefore not suitable for processing real-time data. Apache Spark was built in 2012 to fill that gap. Spark is a parallel data processing tool that is optimized for speed and efficiency by processing data in-memory. It operates under the same MapReduce principle, but runs much faster by completing most of the computation in memory and only writing to disk when memory is full or the computation is complete. This in-memory computation allows Spark to "run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk." However, when the data set is so large that insufficient RAM becomes an issue (usually hundreds of gigabytes or more), Hadoop MapReduce might outperform Spark. Spark also has an extensive set of data analytics libraries covering a wide range of functions: Spark SQL for SQL and structured data, MLlib for machine learning, Spark Streaming for stream processing, and GraphX for graph analytics. Since Spark's focus is on computation, it does not come with its own storage system and instead runs on a variety of storage systems such as Amazon S3, Azure Storage, and Hadoop's HDFS.
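To give a rough feel for how that in-memory style looks in practice, here is a minimal PySpark sketch; the file path and column name are made-up placeholders, not a real dataset.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('trip-analytics').getOrCreate()
# Read once from storage, then keep the DataFrame cached in memory
trips = spark.read.csv('s3://example-bucket/trips.csv', header=True, inferSchema=True)
trips.cache()
# Later computations reuse the in-memory data instead of re-reading from disk each time
print(trips.count())
print(trips.filter(trips.passenger_count > 1).count())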
In an MPP system, all the nodes are interconnected and data could be exchanged across the network (Source: IBM)
Hadoop and Spark are not the only technologies that leverage clusters to process large volumes of data. Another popular computational approach to distributed query processing is called Massively Parallel Processing (MPP). Similar to MapReduce, MPP distributes data processing across multiple nodes, and the nodes process the data in parallel for faster speed. But unlike Hadoop, MPP is used in RDBMS and utilizes a “share-nothing” architecture — each node processes its own slice of the data with multi-core processors, making them many times faster than traditional RDBMS. Some MPP databases, like Pivotal Greenplum, have mature machine learning libraries that allow for in-database analytics. However, as with traditional RDBMS, most MPP databases do not support unstructured data, and even structured data will require some processing to fit the MPP infrastructure; therefore it takes additional time and resources to set up the data pipeline for an MPP database. Since MPP databases are ACID-compliant and deliver much faster speed than traditional RDBMS, they are usually employed in high-end enterprise data warehousing solutions such as Amazon Redshift, Pivotal Greenplum, and Snowflake. As an industry example, the New York Stock Exchange receives four to five terabytes of data daily and conducts complex analytics, market surveillance, capacity planning and monitoring. The company had been using a traditional database that couldn’t handle the workload, which took hours to load and had poor query speed. Moving to an MPP database reduced their daily analysis run time by eight hours.
Cloud Services
Another innovation that completely transformed enterprise big data analytics capabilities is the rise of cloud services. In the bad old days before cloud services were available, businesses had to buy on-premises data storage and analytics solutions from software and hardware vendors, usually paying upfront perpetual software license fees and annual hardware maintenance and service fees. On top of those are the costs of power, cooling, security, disaster protection, IT staff, etc, for building and maintaining the on-premises infrastructure. Even when it was technically possible to store and process big data, most businesses found it cost prohibitive to do so at scale. Scaling with on-premises infrastructure also require an extensive design and procurement process, which takes a long time to implement and requires substantial upfront capital. Many potentially valuable data collection and analytics possibilities were ignored as a result.
“As a Service” providers: e.g. Infrastructure as a Service (IaaS) and Storage as a Service (STaaS) (Source: IMELGRAT.ME)
The on-premises model began to lose market share quickly when cloud services were introduced in the late 2000’s — the global cloud services market has been growing 15% annually in the past decade. Cloud service platforms provide subscriptions to a variety of services (from virtual computing to storage infrastructure to databases), delivered over the internet on a pay-as-you-go basis, offering customers rapid access to flexible and low-cost storage and virtual computing resources. Cloud service providers are responsible for all of their hardware and software purchases and maintenance, and usually have a vast network of servers and support staff to provide reliable services. Many businesses discovered that they could significantly reduce costs and improve operational efficiencies with cloud services, and are able to develop and productionize their products more quickly with the out-of-the-box cloud resources and their built-in scalability. By removing the upfront costs and time commitment to build on-premises infrastructure, cloud services also lower the barriers to adopt big data tools, and effectively democratized big data analytics for small and med-size businesses.
There are several cloud services models, with public clouds being the most common. In a public cloud, all hardware, software, and other supporting infrastructure are owned and managed by the cloud service provider. Customers share the cloud infrastructure with other “cloud tenants” and access their services through a web browser. A private cloud is often used by organizations with special security needs such as government agencies and financial institutions. In a private cloud, the services and infrastructure are dedicated solely to one organization and are maintained on a private network. The private cloud can be on-premises, or hosted by a third-party service provider elsewhere. Hybrid clouds combine private clouds with public clouds, allowing organizations to reap the advantages of both. In a hybrid cloud, data and applications can move between private and public clouds for greater flexibility: e.g. the public cloud could be used for high-volume, lower-security data, and the private cloud for sensitive, business-critical data like financial reporting. The multi-cloud model involves multiple cloud platforms, each delivers a specific application service. A multi-cloud can be a combination of public, private, and hybrid clouds to achieve the organization’s goals. Organizations often choose multi-cloud to suit their particular business, locations, and timing needs, and to avoid vendor lock-in. | https://towardsdatascience.com/everything-a-data-scientist-should-know-about-data-management-6877788c6a42 | ['Phoebe Wong'] | 2020-05-31 03:03:17.843000+00:00 | ['Machine Learning', 'Database', 'Tds Narrated', 'Big Data', 'Data Science'] |
New Features in Python 3.9 | Dictionary Unions
One of my favorite new features with a sleek syntax. If we have two dictionaries a and b that we need to merge, we now use the union operators.
We have the merge operator | :
a = {1: 'a', 2: 'b', 3: 'c'}
b = {4: 'd', 5: 'e'}
c = a | b
print(c)
[Out]: {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}
And the update operator |= , which updates the original dictionary:
a = {1: 'a', 2: 'b', 3: 'c'}
b = {4: 'd', 5: 'e'}
a |= b
print(a)
[Out]: {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}
If our dictionaries share a common key, the key-value pair in the second dictionary will be used:
a = {1: 'a', 2: 'b', 3: 'c', 6: 'in both'}
b = {4: 'd', 5: 'e', 6: 'but different'}
print(a | b)
[Out]: {1: 'a', 2: 'b', 3: 'c', 6: 'but different', 4: 'd', 5: 'e'}
Dictionary Update with Iterables
Another cool behavior of the |= operator is the ability to update the dictionary with new key-value pairs using an iterable object — like a list or generator:
a = {'a': 'one', 'b': 'two'}
b = ((i, i**2) for i in range(3))
a |= b
print(a)
[Out]: {'a': 'one', 'b': 'two', 0: 0, 1: 1, 2: 4} | https://towardsdatascience.com/new-features-in-python39-2529765429fe | ['James Briggs'] | 2020-07-22 08:24:12.321000+00:00 | ['Python', 'Technology', 'Code', 'Data Science', 'Programming'] |
3 common mistakes when doing content experiments | #2: Test concepts that are too complex (or unclear)
In the early days when I was training people at Booking.com I used different sets of “innovation cards” (Mental Notes, Design with Intent Toolkit & Brains, Behavior & Design) to inspire new content ideas.
On each of these cards there is a summary of a psychological principle and how you could apply it to content. In the very first trainings I did, I used all 3 sets to inspire new ideas. I had an enormous pool of concepts and principles that people could choose from. I thought that was a good thing.
Although it wasn't difficult to apply the different concepts and principles, translating them into a clear experiment was less easy. Let's take the examples of the Endowment effect and commitment mentioned earlier. These are some of the more complex concepts you can use to create content experiments.
Actually a lot of scientific work is more complex to translate into content.
Let's return to that as an example and go back to the shopping cart. And just for fun, let's use the lean startup hypothesis template for the endowment effect:
We believe [a person who visits our website] has a problem [shopping]. We can help them with [giving him a sense of ownership before he really owns the specific product - adding a product to a shopping cart increases the sense of ownership]. We'll know we're right if [those website visitors will be less likely to abandon the things in the shopping cart]
And for commitment:
We believe [a person who visits our website] has a problem [shopping]. We can help them with [making shopping easier by cutting it in some smaller steps- adding a product to a shopping cart will ask a lower level commitment]. We'll know we're right if [those website visitors will be more likely to buy in a next step]
We can see that the shopping cart feature could be used in both hypotheses. And the hypotheses are very similar. Teasing out causality there is rather difficult. It’s a cool challenge, but maybe not something you want to start with.
Luckily for us, there are also pretty straightforward concepts. For example Social Proof: If other people do it, you are also more likely to do it. Translating that into content is rather easy, and I haven’t seen many mistakes made there. Just because it’s way less ambiguous.
Check out this:
We believe [a person who visits our website] has a problem [shopping]. We can help them with [making shopping easier by showing the most popular product (for his/her profile)]. We'll know we're right if [those website visitors will be more likely to buy in a next step]
If the most popular product also happens to be a staff pick, or on discount until the end of the week, it gets trickier. But in general I advise companies to start with those concepts that they are very sure of, and that are as unambiguous as possible.
#3: Test changes that are too subtle | https://medium.com/i-love-experiments/3-common-mistakes-when-doing-content-experiments-371d759aaaff | ['Arjan Haring'] | 2016-06-15 21:09:37.988000+00:00 | ['Marketing', 'Ecommerce', 'Experimenting'] |
Using Docker as a Data Scientist | Good day everyone!
The ship with our data science project is just arriving off the port! Get ready to unload the containers and fire up the environment. Wait… what? Wish it was that easy to go? Well, it basically is. Welcome to Docker!
In this article, we are going to learn
How to install Docker Desktop
How to docker — Docker images and Docker Hub
How to Docker — Docker Containers
Run a complete data science project with Docker
Docker is a modern breakthrough. A way in which we can save and share our projects complete with their original environments. No more incompatibility shenanigans and it's quick and easy. Anyone can take a docker image from a docker image registry, download it, run it and begin working in the same environment the project was built in. I explain what Docker is and how it works in my article What the hell is Docker?!
The ability to reproduce project results over different machines is incredibly important in data science. Docker makes this incredibly easy.
Photo by Noah Carter on Unsplash
If you’re ready, let's get going | https://medium.com/analytics-vidhya/using-docker-as-a-data-scientist-8bbb203fb6b7 | ['Aleksandar Gakovic'] | 2020-06-14 18:25:41.704000+00:00 | ['Python', 'Docker', 'Containerization', 'Containers', 'Data Science'] |
Your Essential Checklist for Polished Writing on Medium | Your Medium Basics Checklist:
1. Create a strong title
A good title is crucial. Without it, no-one will even read your story.
Take the time to study good titles, learn different successful formulas, and practice writing them. Many successful writers create 10 or more titles for each story before they settle on one.
Highlight your title and click the Large T formatting tool.
You can choose to place your title above or below the image. Most people place it above.
Use title case — each word has an uppercase letter, except small words such as “in”, “a” etc.
Do not add punctuation at the end, except a question mark if appropriate.
Medium will not curate stories with titles that are click-bait (dishonest titles that over-promise, exaggerate, or are not delivered on in the story.)
2. Create a subtitle that matches your main point or conclusion
Don’t overlook your subtitle. You are not likely to have your story curated (recommended by Medium) without it.
Place your subtitle directly below your title.
Highlight it and use the small t formatting tool to create your subtitle. It should change color to a light grey.
Use sentence case — words are only capitalized as they would be in a normal sentence. For example: “Make sure you’ve done these before you submit”
Do not punctuate the end of the subtitle, except if a question mark is needed.
If you change your title or subtitle, check that it is also changed under settings. Go to the three dots in the top corner near “publish” and select “change display title/ subtitle”.
3. Add a clear, quality image
The image is what readers will see when they are on their Medium home page, or the home page of the publication where your work is published. Stories without images at the start get lost in the crowd and don’t get curated.
Make sure to use an image which you have the rights to use. Unsplash and Pexels have many free images. There are more image sources listed in this article:
Credit the creator of the image. Click on the image and below it you will be able to paste or write the credits, e.g. Image by Freddy Frog @ Pexels.com.
Images which are not credited will stop your story being curated.
If you use your own image, make sure it’s high quality. Credit yourself below it.
Click on your image to change its size and add Alt text (a description of the image for readers with vision impairment).
4. Make sure your first 50 words are strong
You have a few seconds to engage your reader online. Spend time making sure those first few sentences hook them in.
Get to your point, cut the waffle.
5. Check for errors
Most weeks, I receive a draft from a writer that is so full of errors it’s a struggle to read. Spelling is one thing, but when whole words are missing it becomes a problem.
Read your work out loud. Listen for how the words sound together, whether you’ve made any double ups or errors, and whether any sentences sound strange to you.
Try an online editing tool, such as Grammarly or ProWritingAid.
Leave your story for a few hours (or days) so that you can read it over with fresh eyes.
Punctuation: think about using oxford commas for lists, for example: write, read, and then cut. The third comma makes a list clear and easy to read (this is up to you, it’s just my personal preference).
If you want to make an em dash — hit the dash key twice. Once, like this - is wrong.
6. Break up large text chunks
Many people are reading on their phones and long blocks of text are hard to read.
Break up your text into smaller chunks — even single sentences.
Use bullet points if appropriate to your story. Hit the Shift and * keys and then press the space bar.
Use subtitles to break up sections (highlight and use the small or large T.) Some publications have formatting preferences for subtitles within the text. It’s a good idea to check their guidelines before you submit your story.
Some writers like to place images throughout their text to break it up.
7. Publish or submit your draft
There are two ways to publish a story on Medium — by yourself or through a publication.
To publish immediately, hit “publish” and add 5 tags. The numbers next to the tags show which ones are the most popular with readers and most followed. Mental health, Self Improvement, Productivity for example are popular tags.
To submit to a publication DO NOT hit “Publish”.
Read the publication's submission guidelines. Usually you'll need to email them to be added as a writer. Click the three dots and select "Share draft link" to send them your draft. You can copy and paste the link.
You can also share your draft link with other people helping you edit your story.
Most publications do not accept stories which are already published.
Medium staff also have their own blog where you can find updates on changes, what curators are looking for, how to write headlines etc.
Medium lets you decide how you present your work. You can ignore all the rules above and do your own thing if you really want to.
If you’d like to see your work in publications though, these guidelines will get you half the way there — which leaves you with more time to focus on writing those captivating stories.
If you want to find out more about mentoring contact me, or go here for Inspired Writers Mentor Program.
For weekly inspiration, tips, and resources for new writers, join my newsletter Because You Write. | https://medium.com/inspired-writer/your-essential-checklist-for-polished-writing-on-medium-af3293c1e064 | ['Kelly Eden'] | 2020-04-28 08:45:10.852000+00:00 | ['Writing Tips', 'Writing', 'Self Improvement', 'Life Lessons', 'Writing On Medium'] |
Getting started with .NET Core API, MongoDB, and Transactions | MongoDB is a kind of NoSQL database. NoSQL is a document-oriented database that is organized as a JSON. Some points about MongoDB:
Performance : It keeps the most frequently accessed data (the working set) in RAM, so query performance is often much better than in a relational database. But it requires more RAM and well-designed indexes.
Simplicity : Some users consider the query syntax simpler than in relational databases. Installation, configuration, and execution are quick and simple, and the learning curve is shorter than for the alternatives.
: Some users consider the query syntax is simpler here than in relational databases. The installation, configuration, and execution are simple and quick to do. And the learning curve is shorter than in the others. Flexibility : It’s dynamic because it doesn’t have a predefined schema.
: It’s dynamic because it doesn’t have a predefined schema. Scalability : It uses shards for horizontal scalability, which makes it easier to increase storage capacity.
: It uses shards for horizontal scalability, which makes it easier to increase storage capacity. Transaction: v3.6 and beyond allow using the transaction concept. And v4.0 and beyond allow using the multi-document transaction concept.
Now that you know a little more about MongoDB, let’s go to our goal! This article proposes to teach you how you can build a .NET Core API connecting with MongoDB and use the Transactions with Dependency Injection.
#ShowMeTheCode!
For this example, don't worry about the structure and the architecture; it was built this way simply because I think it's easier to explain. So, I won't describe the layers, folders, and other details, except those needed for MongoDB and the transactions.
The project
In this case, I used Visual Studio 2019, Community edition. With VS installed, we select the "ASP.NET Core Web Application" template and choose the API type.
About the base structure
After creating the project, we create five folders like below:
Controllers: responsible to answer the requests
Entities: Domain classes
Interfaces: the contracts we'll use for dependency injection (DI) and IoC
Models: Classes that we’ll use to receive and return data on controllers
Repositories: Classes with methods that contain the implementation of MongoDB operations
We’ll focus on just Controllers and Repositories folders and the Startup class. If you want to see the complete code, wait for the end of this article.
Installing and configuring MongoDB
Now, we need to install the most important package for our project, which is MongoDB.Driver.
Once it is installed, we need to modify the Startup class, specifically the ConfigureServices method.
Let’s analyze this code:
The ConfigureServices method is where we set up our IoC. The first statement is where we register the MongoDB connection. Pay attention to this step: we register it as a Singleton because the MongoClient instance already maintains its own connection pool, so if you don't use a Singleton, a new connection pool will be created every time the client is resolved.
The second statement is where we register the client session handle, which we'll use to start a MongoDB transaction. Notice that, in this case, we make it Scoped, because the transaction life cycle should match the request life cycle.
Creating a base repository
So now, let's make a base repository that we will use whenever we want to perform some operations. First, the base repository class will receive a generic type parameter <T> to identify the entity in our code that represents a collection in the database, like below:
After creating the class, let's declare some attributes that will be important to us:
We have four attributes:
DATABASE: it’s the Constant that represents the name of our database.
_mongoClient: it’s the client interface to MongoDB. Using it we do some operations on the database.
_clientSessionHandle: it’s the interface handle for a client session. So, if you start a transaction, you should pass the handle when you do some operation.
_collection: it’s the name of the collection used by the Generic class.
All these attributes receive their values in the class constructor:
Look at the constructor parameters. We receive them by dependency injection, take their values, and assign them to the attributes that we created.
Another very important thing is the code below the assignments. If you work with multi-document transactions in MongoDB, the collection needs to exist before the transaction starts. So we check whether the collection exists in our database and, if not, we create it. If we tried to create a collection that already exists, the flow would break and throw an exception.
Now we will add another important piece of code, specifically a virtual property that we use to make the operations easier to write:
From the _mongoClient attribute, we retrieve the database and, from the _collection attribute, we retrieve the collection and associate it with the Generic class.
Finally, we build some base operations to change the data in the database like Insert, Update, and Delete:
Some important things:
We have to pass the _clientSessionHandle to all the methods that actually perform the operation, like the InsertOneAsync method.
To do an Update operation, we build a lambda expression using the ID property. Then we use reflection to retrieve the ID value and build the filter that will be used to actually perform the operation.
Example of a specific repository
Now I'll show you how we can use the base repository to build a specific repository. In this case, we'll build an AuthorRepository.
First, we build the class with the inheritance and the constructor:
When we inherit from the BaseRepository class, we are forced to provide a constructor as shown. Look at how we pass the collection name using the string "author".
Finally, we build some other MongoDB operations; here we build only specific query operations to get data. That's because the queries we need differ from case to case, while the operations that change the database data (like insert, update, delete) will always be the same in this example.
In this code, I show you some query operations examples:
We can recover one author searching by id.
We can recover all authors.
We can recover all books of a specific author, searching by author’s id, using the Project method.
We can recover some authors searching by name.
Business rules examples with transactions
Let's use these repositories to implement some business rules and actually use the transactions! For this, we'll put all the business rules in the classes located in the Controllers folder.
So, we build a class called BusinessController, like below:
I won't talk about the annotations and the inherited class, because they are specific to the API. Note that we use the repositories that we created before and a session handle _clientSessionHandle that we'll use to manage the transactions. All these attributes are received as constructor parameters by dependency injection.
So now, let's see how we actually use a transaction:
In this case, it's a POST method, which means that we want to insert a new record into the database. At the beginning of the method, we need to start a transaction (line 11); then we can run our business rules and, if there is nothing wrong, we can commit all the changes we made to the database. But if something breaks, an exception will be thrown and the flow will jump to the Catch statement, where we abort the whole transaction so that nothing in the database is changed.
Below you can see another example using multi-document transactions, where we use more than one operation on the database:
The logic is the same as the code shown before. | https://alexalvess.medium.com/getting-started-with-net-core-api-mongodb-and-transactions-c7a021684d01 | ['Alex Alves'] | 2020-08-14 15:17:25.664000+00:00 | ['Transactions', 'Csharp', 'Mongodb', 'Dotnet Core', 'NoSQL'] |
Learn the basics of the Ethereum JSON API in 5 minutes
Photo by Shahadat Rahman on Unsplash
The other day I got myself in a situation where I needed to communicate with the Ethereum network using python in an environment where getting web3.py to work seemed pretty much impossible. Since I still needed to talk to the network, I resorted to using the JSON-RPC API provided by Ethereum, which all web3 libraries are built on top of. Turns out, it’s pretty interesting! So, let’s get started!
Basic setup
First things first — let’s declare a few variables that will be helpful when sending requests later:
To keep things simple, we use an Infura Node to connect to the Ethereum Ropsten Testnet. You can get an API Key for that here.
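The setup snippet itself lives in the linked code at the end of this article, so here is a sketch of what it could look like; the project ID is a placeholder and the rpc_call helper is my own shorthand, not part of any library.
import requests
INFURA_URL = 'https://ropsten.infura.io/v3/<YOUR_PROJECT_ID>'  # replace with your Infura project ID
HEADERS = {'Content-Type': 'application/json'}
def rpc_call(method, params=None, call_id=1):
    # Wrap a JSON-RPC 2.0 request and return the "result" field of the response
    payload = {'jsonrpc': '2.0', 'method': method, 'params': params or [], 'id': call_id}
    return requests.post(INFURA_URL, json=payload, headers=HEADERS).json()['result']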
Your first request
Let’s get our feet wet by fetching the current gas price of the network. We can do this by simply doing:
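Using the rpc_call helper sketched above, the request could look roughly like this; the result comes back as a hex string denominated in wei.
gas_price_hex = rpc_call('eth_gasPrice')
gas_price = int(gas_price_hex, 16)  # convert the hex string to an integer number of wei
print(gas_price)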
How did I know what method to use and what parameters to send? It can all be found in the official Ethereum docs.
Getting the latest Block
Let’s try something a bit more interesting — let’s fetch the latest block and see what we can read from there!
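A sketch of that call with the same helper; the True flag asks the node to return full transaction objects instead of just their hashes.
latest_block = rpc_call('eth_getBlockByNumber', ['latest', True])
print(int(latest_block['number'], 16))  # block number, hex-encoded like most numeric fields
print(len(latest_block['transactions']))  # how many transactions the block contains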
Let’s have a closer look at one of the transactions:
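For example, assuming the block is not empty, pulling out the first transaction and a few of its fields might look like this.
tx = latest_block['transactions'][0]
print(tx['from'], tx['to'], int(tx['value'], 16))  # sender, recipient and the transferred value in wei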
You’re probably starting to get the pattern of how these calls work, so let’s try something a little more advanced:
Sending Transactions
First, let’s create a new account using the web3.py library and load it with some Ropsten ether. I like to use this faucet.
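A sketch of the account creation; note that depending on your web3.py version the private key attribute may be named key or privateKey.
from web3 import Web3
acct = Web3().eth.account.create()
print(acct.address)
private_key = acct.key  # may be acct.privateKey on older web3.py versions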
To send a transaction, we need the nonce. This too we can fetch with the RPC JSON API using the same pattern as above:
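Roughly, with the same rpc_call helper and the account created above:
nonce_hex = rpc_call('eth_getTransactionCount', [acct.address, 'latest'])
nonce = int(nonce_hex, 16)  # number of transactions already sent from this address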
Next, we create and sign our transaction and send it off using the JSON RPC API once more:
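A sketch of what that could look like; the recipient here is just the donation address from the end of this article, and helper or attribute names such as toWei and rawTransaction may differ slightly between web3.py versions.
tx = {
    'to': '0x190d71ba3738f43dc6075f5561e58ac9d4e3dfc2',  # any recipient address works here
    'value': Web3.toWei(0.001, 'ether'),
    'gas': 21000,  # enough for a plain ether transfer
    'gasPrice': gas_price,  # reusing the gas price fetched at the beginning
    'nonce': nonce,
    'chainId': 3,  # 3 = Ropsten
}
signed = Web3().eth.account.sign_transaction(tx, private_key)
tx_hash = rpc_call('eth_sendRawTransaction', [signed.rawTransaction.hex()])
print(tx_hash)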
If you’re testing on a different Ethereum (Test)net, make sure to set the chain Id accordingly.
Notice how we were able to reuse the gas price which we fetched at the beginning.
Conclusion
And that’s all! You just learned the basics of interacting with the most influential blockchain in the world using the JSON RPC Ethereum API! You can find the code for all this here.
Thanks for following along!
Nicolas
If you liked this, give this story a couple of claps so more people can see it!
If you LOVED this, buy me a coffee :)
Ethereum: 0x190d71ba3738f43dc6075f5561e58ac9d4e3dfc2
Bitcoin: 1BRucWzs2vnrkfmbfss14ZQErW74A1H1KE
Litecoin: LbNfoJfTFVCj7GFaAUtWqhitHJcJ3m8y3V | https://medium.com/datadriveninvestor/learn-the-basics-of-the-ethereum-json-api-in-5-minutes-bc52966dea97 | ['Nicolas Schapeler'] | 2020-04-06 12:21:01.448000+00:00 | ['Python', 'API', 'Blockchain', 'Ethereum', 'Programming'] |
How to Generate Typescript Types for GraphQL in React Project | GraphQL codegen office logo
GraphQL is cool: it exposes the exact contract between frontend and backend in detail.
However, when you try to use it, there are still gaps between GraphQL and the host programming language. For example, most frontend projects use TypeScript, and while GraphQL is a strongly typed contract, it is still not TypeScript.
This situation is like developing a SQL-based application in Java. There we have ORM, which stands for Object-Relational Mapping, a standard technique for mapping between object structures and SQL relations that allows developers to access a SQL database while knowing only Java or another object-oriented language. Do we have a similar tool to map between GraphQL and TypeScript or other languages?
The answer is yes: it is graphql-codegen. (Although there are other tools for this goal, graphql-codegen is definitely the most flexible and advanced one.)
In this article, I am going to tell you what is graphql-codegen, and how to use graphql-codegen to generate typescript types and react GraphQL query/mutation hooks by using an example. | https://medium.com/swlh/how-to-generate-typescript-types-for-graphql-in-react-project-ec458aac36a3 | ['Ron Liu'] | 2020-12-14 16:15:45.043000+00:00 | ['React', 'GraphQL', 'Apollo Client'] |
Humor Writers I’ve Enjoyed Reading this Year | It is the season for giving, which is why I’m giving you (another) list of 10 funny writers you should read if you want to laugh or learn to write better humor. So in no particular order here it goes.
Gracie Beaver-Kairis creates this bit of satire using very specific knowledge about a genre. Anyone who’s watched any (seriously, any) Hallmark movie will recognize at least one of these tropes. Pointing out specific things others might be thinking in the back of their head is a surefire way to get a laugh, when done right.
David B. Clear's comics are proof that something doesn't need to be complicated to get giggles. He knows when and how to show/tell through pictures vs. words. If you are considering creating comics and worry that you lack illustration skills, check out this guy. More importantly, go through his archive and see how daily practice improved his technique, allowing him to deliver more laughs through different artistic means. I mean look at those cute moving arms!
Bebe Nicholson is a long-time writer who graces whatever topic she darn-well feels like. And we are the luckier for it. Her piece below is a good one to read if you want to get an idea of how to use a well-known story and create original humor around it. She chooses to do the troll’s POV with hilarious results.
Earlier I talked about how simple illustrations work well, but I am also in awe of those who have technical skills both in art and humor. Cassie Soliday is one of those people. Her comics are short and sweet and always worth the read.
If you've been on this platform then Ryan Fan is a name you've seen over and over again in the trending area. He's a beast when it comes to putting out new content and letting his imagination run wild, no matter how bizarre the idea. Below is a perfect example. He uses the absurdity of how most anime heroes are still in school and takes it from there. When you write humor, don't be afraid to ask, "What if?" Ryan's shown us all that such bravery gets results.
irene tassy took the popular topic that springs up every year and gave it a whole new spin. I’ve seen several “tell-alls” by elves, but Irene went a completely different direction and did what every parent wants to do. Scare the living tar out of kids so they behave. Since the elves are kind of terrifying anyway, she pushed the premise farther by seeing what would happen if a truly terrifying doll was in the house.
JL Matthews is another prolific writer whose stuff can be found in nearly every humor publication on this platform. Below he’s written a form I enjoy, but do not write well: dialogue humor. This is an insane piece that made me laugh out loud because he catches the personalities so carefully which (I think) is why I like this piece whereas I find this style so hard to enjoy otherwise.
Catherine Weingarten did what you should be doing if you want to write humor that hits hard. Take that thing/person/trend you dislike and satirize the shit out of them. Obviously you can tell by the title why this story was so popular. Because haven’t we all just wanted to block that person who will not be quiet about their big day long after it’s over?
Susan Sassi writes good stuff all around, yet this one taking the form of a quiz is one of her newest stories and a good example of how to write humorous quizzes if that’s something that interests you.
The story below by Julia Wolov has the list format down, including how to build upwards through the piece so it’s not “just” a list. Also, I mean, that title. Perfection. | https://medium.com/jane-austens-wastebasket/humor-writers-ive-enjoyed-reading-this-year-e4445973b8 | ['Kyrie Gray'] | 2020-12-23 08:01:56.768000+00:00 | ['Humor', 'Comedy', 'Satire', 'Humor Writing', 'Writing'] |
Tech Startup Interviews 101 | Get to know the company
When it comes to knowing the company, ignorance is not bliss.
I can’t count the number of times I’ve asked an interviewee, “So what do you know about our company? Have you had a chance to get on our Website and start a free trial of our tool?” …
…to which she confidently replies: “Hm. No. I didn’t have the time with all those interviews I’m doing lately. But I think that you ***do something with email automation***, is that right?”
Ouch. That’s like being on a first date saying “Hey, you’re John right? Oh, Steve, sorry I didn’t have time to check your name, with all those dates I have lately… you know how it is!”
Poor Steve has no idea how it is. In fact, in preparation for this meeting, he:
Sent several well-thought-out messages beforehand to convince you that meeting him was the right thing to do.
Prepared himself for this dinner.
Had great expectations about your relationship.
Was willing to spend his precious time talking with you hoping that there’ll be a shared interest!
Yes, we’re Steve when this happens. Kind of shocked and definitely hurt. Consider your interview over at this point.
To avoid this in your next interview, take the time to get to know the company. Check their website and try their product if possible. If you can't try the product, search for screenshots of the interface or video tutorials, to have a good enough idea of what they're doing. Most SaaS startups have support resources nowadays, so you could even browse them to get a grasp of the business and the customers' needs.
Or go onto a review site like G2Crowd or Capterra to learn about the tool, read user reviews, and see who the tool’s competitors are.
Show that you care! You’ll make a wonderful first impression. | https://medium.com/agorapulse-stories/tech-startup-interviews-101-32f12b43dbc6 | ['Florian Ernoult'] | 2020-03-12 12:00:07.708000+00:00 | ['Recruiting', 'Startup', 'Technology', 'Hiring', 'Interview'] |
Exploratory Data Analysis of New York Taxi Trip Duration Dataset using Python | Data Analysis is one of the most crucial steps of the model building process. In this article I will be performing Data Analysis on the NYC Taxi Trip Duration Dataset. This dataset and problem statement are taken from the Applied Machine Learning course by Analytics Vidhya, which offers a number of such real-life projects.
Let us now discuss about the problem statement for the project.
Problem Context:
A typical taxi company faces the common problem of efficiently assigning cabs to passengers so that the service is smooth and hassle-free. One of the main issues is determining the duration of the current trip, so the company can predict when the cab will be free for the next trip.
The data set contains data about several taxi trips and their duration in New York City. I will now try to apply different techniques of Data Analysis to get insights about the data and determine how the different variables are related to the target variable Trip Duration.
Lets start!
Import Required Libraries
First we will import all the necessary libraries needed for analysis and visualization.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
sns.set()
Now that we have all the necessary libraries lets load the data set. We will load it into the pandas DataFrame df.
df=pd.read_csv('nyc_taxi_trip_duration.csv')
We read the dataset into the DataFrame df and will have a look at the shape , columns , column data types and the first 5 rows of the data. This will give a brief overview of the data at hand.
df.shape
This returns the number of rows and columns
df.columns
This returns the column names
Here’s what we know about the columns:
Demographic information of Customer & Vendor
id : a unique identifier for each trip
vendor_id : a code indicating the provider associated with the trip record
passenger_count : the number of passengers in the vehicle (driver entered value)
Information about the Trip
pickup_datetime : date and time when the meter was engaged
dropoff_datetime : date and time when the meter was disengaged
pickup_longitude : the longitude where the meter was engaged
pickup_latitude : the latitude where the meter was engaged
dropoff_longitude : the longitude where the meter was disengaged
dropoff_latitude : the latitude where the meter was disengaged
store_and_fwd_flag : This flag indicates whether the trip record was held in vehicle memory before sending to the vendor because the vehicle did not have a connection to the server (Y=store and forward; N=not a store and forward trip)
trip_duration : (target) duration of the trip in seconds
Thus we have a data set with 729322 rows and 11 columns. There are 10 features and 1 target variable which is trip_duration
df.dtypes
This returns the data type of the columns
df.head()
This returns the first 5 rows of the Data set
Thus we get a glimpse of the data set by looking at the first 5 rows returned by df.head(). Optionally we can specify the number of rows to be returned, by sending it as a parameter to the head() function.
Some observations about the data:
The columns id and vendor_id are nominal.
and are nominal. The columns pickup_datetime and dropoff_datetime are stored as object which must be converted to datetime for better analysis.
and are stored as object which must be converted to datetime for better analysis. The column store_and_fwd_flag is categorical
Lets look at the numerical columns,
df.describe()
This returns a statistical summary of the numerical columns
The returned table gives certain insights:
There are no numerical columns with missing data
The passenger count varies between 1 and 9, with the number of passengers mostly being 1 or 2
The trip duration varies from 1 s to 1939736 s (~539 hrs). There are definitely some outliers present which must be treated (a quick way to gauge them is sketched below).
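As a quick, standard way to see how extreme those outliers are (an extra illustration, not a step from the original analysis), the IQR rule can be applied to the trip_duration column:
q1, q3 = df['trip_duration'].quantile([0.25, 0.75])
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr  # the usual boxplot cutoff for outliers
print((df['trip_duration'] > upper_fence).sum(), 'trips lie above the upper fence')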
Lets have a quick look at the non-numerical columns,
non_num_cols=['id','pickup_datetime','dropoff_datetime','store_and_fwd_flag']
print(df[non_num_cols].count())
The count of the specified columns are returned
There are no missing values for the non numeric columns as well.
The 2 columns pickup_datetime and dropoff_datetime are now converted to datetime format, which makes analysis of date and time data much easier.
df['pickup_datetime']=pd.to_datetime(df['pickup_datetime'])
df['dropoff_datetime']=pd.to_datetime(df['dropoff_datetime'])
Univariate Analysis
Lets have a look at the distribution of various variables in the Data set.
Passenger Count
sns.distplot(df['passenger_count'],kde=False)
plt.title('Distribution of Passenger Count')
plt.show()
A histogram of the #of passengers in each trip
Here we see that mostly 1 or 2 passengers take the cab. Instances of large groups of people travelling together are rare.
The distribution of Pickup and Drop Off day of the week
df['pickup_datetime'].nunique()
df['dropoff_datetime'].nunique()
The returned values are 709359 and 709308. This shows that there are many different pickup and drop off dates in these 2 columns.
So its better to convert these dates into days of the week so a pattern can be found.
df['pickup_day']=df['pickup_datetime'].dt.day_name()
df['dropoff_day']=df['dropoff_datetime'].dt.day_name()
Now lets look at the distribution of the different days of week
df['pickup_day'].value_counts()
A frequency distribution of the different pickup days.
df['dropoff_day'].value_counts()
A frequency distribution of the different dropoff days.
Thus we see most trips were taken on Friday, with Monday having the fewest. The distribution of trip duration across the days of the week is something to look into as well.
The distribution of days of the week can be seen graphically as well.
figure,ax=plt.subplots(nrows=2,ncols=1,figsize=(10,10))
sns.countplot(x='pickup_day',data=df,ax=ax[0])
ax[0].set_title('Number of Pickups done on each day of the week')
sns.countplot(x='dropoff_day',data=df,ax=ax[1])
ax[1].set_title('Number of dropoffs done on each day of the week')
plt.tight_layout()
The distribution of the # of pickups and drop offs done on each day of the week
The distribution of Pickup and Drop Off hours of the day
The time part is represented by hours, minutes and seconds, which is difficult to analyze, so we divide the times into 4 time zones: morning (4 hrs to 10 hrs), midday (10 hrs to 16 hrs), evening (16 hrs to 22 hrs) and late night (22 hrs to 4 hrs)
def timezone(x):
if x>=datetime.time(4, 0, 1) and x <=datetime.time(10, 0, 0):
return 'morning'
elif x>=datetime.time(10, 0, 1) and x <=datetime.time(16, 0, 0):
return 'midday'
elif x>=datetime.time(16, 0, 1) and x <=datetime.time(22, 0, 0):
return 'evening'
elif x>=datetime.time(22, 0, 1) or x <=datetime.time(4, 0, 0):
return 'late night'
df['pickup_timezone']=df['pickup_datetime'].apply(lambda x :timezone(datetime.datetime.strptime(str(x), "%Y-%m-%d %H:%M:%S").time()) )
df['dropoff_timezone']=df['dropoff_datetime'].apply(lambda x :timezone(datetime.datetime.strptime(str(x), "%Y-%m-%d %H:%M:%S").time()) )
Lets look at the distribution of the timezones
figure,ax=plt.subplots(nrows=1,ncols=2,figsize=(10,5))
sns.countplot(x='pickup_timezone',data=df,ax=ax[0])
ax[0].set_title('The distribution of number of pickups on each part of the day')
sns.countplot(x='dropoff_timezone',data=df,ax=ax[1])
ax[1].set_title('The distribution of number of dropoffs on each part of the day')
plt.tight_layout()
The distribution of # of pickups and drop offs done on each part of the day
Thus we observe that most pickups and drop offs occur in the evening, while the fewest occur during the morning.
Lets have another column depicting the hour of the day when the pickup was done.
figure,ax=plt.subplots(nrows=1,ncols=2,figsize=(10,5))
df['pickup_hour']=df['pickup_datetime'].dt.hour
df.pickup_hour.hist(bins=24,ax=ax[0])
ax[0].set_title('Distribution of pickup hours')
df['dropoff_hour']=df['dropoff_datetime'].dt.hour
df.dropoff_hour.hist(bins=24,ax=ax[1])
ax[1].set_title('Distribution of dropoff hours')
The distribution of # of pickups and drop offs done on each hour of the day
The 2 distributions are quite similar and also align with the earlier division of the hours of the day into 4 parts and the distribution we saw there.
Distribution of the stored and forward flag
df['store_and_fwd_flag'].value_counts()
The returned frequency distribution of the Yes/No Flag
The number of N flag is much larger. We can later see whether they have any relation with the duration of the trip.
Distribution of the trip duration
sns.distplot(df['trip_duration'],kde=False)
plt.title('The distribution of of the Pick Up Duration distribution')
The distribution of the trip duration in seconds
This histogram shows extreme right skewness, hence there are outliers. Lets see the boxplot of this variable.
sns.boxplot(df['trip_duration'], orient='h')
plt.title('A boxplot depicting the pickup duration distribution')
The box plot of trip_duration
Thus we see there is only one value near 2000000 while all the others are somewhere between 0 and 100000. The one near 2000000 is definitely an outlier which must be treated.
Lets have a look at the 10 largest value of trip_duration.
print( df['trip_duration'].nlargest(10))
The returned 10 largest value in the column
The largest value is much greater than the 2nd and 3rd largest trip duration values. This might be due to errors which typically occur during data collection, or it might be legitimate data. Since the occurrence of such a huge value is unlikely, it's better to drop this row before further analysis.
The value can be replaced by the mode or median of trip duration as well.
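If you preferred that route instead of dropping the row, a minimal sketch of the median version would be:
median_duration = df['trip_duration'].median()
df.loc[df['trip_duration'] == df['trip_duration'].max(), 'trip_duration'] = median_duration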
df=df[df.trip_duration!=df.trip_duration.max()]
Lets have a look at the distribution of the trip_duration after we have dropped the outlier.
sns.distplot(df['trip_duration'])
plt.title('Distribution of the pickup ditribution after the treatment of outliers')
The distribution of the trip duration in seconds after removing the outlier
There is still an extreme right skewness. Thus we will divide the trip_duration column into intervals.
The intervals are decided as follows:
less than 5 hours
5–10 hours
10–15 hours
15–20 hours
more than 20 hours
bins=np.array([0,1800,3600,5400,7200,90000])
df['duration_time']=pd.cut(df.trip_duration,bins,labels=["< 5", "5-10", "10-15","15-20",">20"])
Distribution of pickup longitude
sns.distplot(df['pickup_longitude'])
plt.title('The distribution of Pick up Longitude')
The distribution of pick up longitude
Distribution of drop off longitude
sns.distplot(df['dropoff_longitude'])
plt.title('The distribution of Drop off Longitude')
The distribution of drop off longitude
Distribution of dropoff latitude
sns.distplot(df['dropoff_latitude'])
plt.title('The distribution of drop off Latitude')
The distribution of drop off latitude
Distribution of pickup latitude
sns.distplot(df['pickup_latitude'])
plt.title('The distribution of pick up Latitude')
The distribution of pickup latitude
We see that the pickup longitude and the dropoff longitude have almost the same kind of distribution, while the pickup latitude and the dropoff latitude have slightly different distributions.
Distribution of vendor_id
df['vendor_id'].hist(bins=2)
The distribution of the 2 vendor ids
The distribution of vendor id is not much different as expected.
Bivariate Analysis
Lets now look at the relationship between each of the variables with the target variable trip_duration.
The relationship between Trip Duration and The day of the week
sns.catplot(x="pickup_day",y="trip_duration",kind="bar",data=df,height=6,aspect=1)
plt.title('The Average Trip Duration per PickUp Day of the week')
sns.catplot(x="dropoff_day",y="trip_duration",kind="bar",data=df,height=6,aspect=1)
plt.title('The Average Trip Duration per Dropoff Day of the week')
The graphs denote the average trip duration estimate for each day of the week. The error bars provide some indication of the uncertainty around that estimate.
Thus the highest average time taken to complete a trip is on Thursday, while Monday, Saturday and Sunday take the least time.
But this is not enough. We must also take into consideration the percentage of short, medium and long trips taken on each day.
ax1=df.groupby('pickup_day')['duration_time'].value_counts(normalize=True).unstack()
ax1.plot(kind='bar', stacked='True')
plt.title('The Distribution of percentage of different duration of trips')
The graph shows a percentage distribution of the trips of different duration within each day of the week.
This does not give much insight, as the number of trips within the 0–5 hour range is much larger for all the days.
Lets look at the percentage of only longer trips (with duration time > 5 hours)
figure,ax=plt.subplots(nrows=1,ncols=3,figsize=(15,5))
ax1=df[(df.duration_time !="< 5")].groupby('pickup_day')['duration_time'].count()
ax1.plot(kind='bar',ax=ax[0])
ax[0].set_title('Distribution of trips > 5 hours')
ax2=df[(df.duration_time !="< 5")].groupby('pickup_day')['duration_time'].value_counts(normalize=True).unstack()
ax2.plot(kind='bar', stacked='True',ax=ax[1])
ax[1].set_title('Percentage distribution of trips > 5 hours')
ax3=df[(df.duration_time !="< 5")].groupby('pickup_day')['duration_time'].value_counts().unstack()
ax3.plot(kind='bar',ax=ax[2])
ax[2].set_title('A compared distribution of trips > 5 hours')
The 3 graphs present 3 types of information here: the leftmost graph shows a frequency distribution of the number of trips (> 5 hours) taken on each day of the week. The middle one shows a percentage distribution of the trips of different duration (> 5 hours) within each day of the week. The right one shows the frequency distribution of the trips of different duration (> 5 hours) within each day of the week.
Some key points :
The most trips lasting > 5 hours were taken on Thursday, followed by Friday and Wednesday. (left graph)
The most trips of duration 5–10 and 10–15 were taken on Thursday. (right graph)
But the highest percentage of trips longer than 20 hours was taken on Sunday and Saturday. (middle graph)
The relationship between Trip Duration and The time of the day
figure,(ax1,ax2)=plt.subplots(ncols=2,figsize=(20,5))
ax1.set_title('Distribution of pickup hours')
ax=sns.catplot(x="pickup_hour", y="trip_duration",kind="bar",data=df,ax=ax1)
ax2.set_title('Distribution of dropoff hours')
ax=sns.catplot(x="dropoff_hour", y="trip_duration",kind="bar",data=df,ax=ax2)
plt.show()
The highest average time taken to complete a trip is for trips started in midday (between 14 and 17 hours), and the lowest is for the ones taken in the early morning (between 6 and 7 hours).
The relationship between passenger count and duration
sns.relplot(x="passenger_count", y="trip_duration", data=df, kind="scatter")
Here we see that passenger count has no clear relationship with trip duration. But it is worth noting that there are no long trips taken by higher passenger counts like 7 or 9, while the trip duration is more or less evenly distributed only for passenger count 1.
The relationship between vendor id and duration
sns.catplot(x="vendor_id", y="trip_duration",kind="strip",data=df)
Here we see that vendor 1 mostly provides cabs for short duration trips while vendor 2 provides cabs for both short and long trips.
The relationship between store forward flag and duration
sns.catplot(x="store_and_fwd_flag", y="trip_duration",kind="strip",data=df)
Thus we see the flag was stored only for short duration trips and for long duration trips the flag was never stored.
The relationship between geographical location and duration
sns.relplot(x="pickup_latitude", y="dropoff_latitude",hue='pickup_timezone',row='duration_time',data=df);
Here’s what we see
for shorter trips (<5 hours), the pickup and dropoff latitude is more or less evenly distributed between 30 ° and 40 °
for longer trips(>5 hours ) the pickup and dropoff latitude is all concentrated between 40 ° and 42 ° degrees.
sns.relplot(x="pickup_longitude", y="dropoff_longitude",hue='pickup_timezone',row='duration_time',data=df);
Here’s what we see
for shorter trips (<5), the pickup and dropoff longitude is more or less evenly distributed between -80 ° and -65 ° with one outlier near -120 ° .
for longer trips(>5) the pickup and dropoff longitude is all concentrated near -75 °
Conclusion about Trip Duration and the data set:
Trip Duration varies a lot ranging from few seconds to more than 20 hours
Most trips are taken on Friday , Saturday and Thursday
The average duration of a trip is most on Thursday and Friday as trips longer than 5 hours are mostly taken in these days
The average duration of trips started in between 14 hours and 17 hours is the largest.
Vendor 2 mostly provides the longer trips
The long duration trips(> 5 hours) are mostly concentrated with their pickup region near (40 °,75 °) to (42°,75°)
The next part of this series can be found here | https://medium.com/analytics-vidhya/exploratory-data-analysis-of-nyc-taxi-trip-duration-dataset-using-python-257fdef2749e | ['Anuradha Das'] | 2019-09-21 07:48:50.625000+00:00 | ['Data Visualization', 'Exploratory Data Analysis', 'Data Analysis', 'Data Science', 'Analytics Vidhya'] |
Why Your iPhone is Slow | #3. Check the Memory Usage
The phone may be performing slow if the amount of free memory is low. If you’ve had the iPhone for over a year, take a lot of photos, or have a lot of apps installed, checking the memory usage periodically is good practice. Unused apps and photos that take up a lot of memory should be deleted. The iPhone storage can be managed by going to:
Settings > General > iPhone Storage
Image provided by author
Delete unused apps
As shown above, the iPhone storage screen provides a breakdown of the memory usage per app. If you choose to delete an app, select it from the list and press the Delete App option.
Image provided by author
2. Delete Messages, Photos and Music
Messages, photos and music all contribute to the overall iPhone storage usage. Selecting Messages from the iPhone Storage list gives the top conversations, photos, and GIFs/Stickers that can be deleted. Selecting Music displays all of the songs and memory size, with the option to delete. Photos will need to be deleted through the Photos app.
3. Clear “Other” | https://medium.com/macoclock/why-your-iphone-is-slow-a7edd7f23ebc | ['Julie Elise'] | 2020-12-24 05:36:17.749000+00:00 | ['Technology', 'iOS', 'Productivity', 'iPhone', 'Phone'] |
On-Premise Tribes in Shiny Caves | As I took her by the hand to school this morning my 6-year old daughter Kiara looked up at Google’s giant Lego project in progress and excitedly reminded me of the latest 3 numbers marking the tops of their latest 3 towers .… “11th floor, ehh … 13, umm .. I can’t see that one Daddy!”. This time she didn’t ask me when I’m going to join Google’s 8,000 people in Dublin. There’s a new flavour of the month. Recently I took her into an event close to us in Airbnb. It’s a beautiful cave and had colourful free stuff in the kitchen and the staff were friendly. Now that’s where Daddy “needs to get a job”. AirBnB has just leased another big cave for 20 years. Facebook, 40 metres from our apartment, will lease another massive space soon for another 5,000 employees etc.
After I leave Kiara to school, I wander back near our apartment and enter a wonderful co-working space called Workbench in a Bank of Ireland branch in the heart of Dublin’s Silicon Docks. I enjoy the energy of having a few other founders around. Sometimes I go to my alma mater Trinity College’s library if I need super levels of concentration. Or switch to a cafe if I need a change or a walk to contemplate an idea, or if I simply feel like a better coffee. No matter where I go I’m a headset away from a state of deep focus. Sometimes I just work out of my apartment if the kids are out. My co-founder and great friend Mike Quill works out of his home in San Diego. He prefers the weather and lifestyle there to his native Dublin. We are both free to do deep work wherever and whenever we want. We evolved away from large luminous caves a few years ago.
“Remote work has opened the door to a brave new world beyond the industrial-age belief in The Office … If you can’t let your employees work from home out of fear they’ll slack off without your supervision, you’re a babysitter, not a manager. Remote work is very likely the least of your problems … When you treat people like children, you get children’s work.”
Jason Fried & DHH
Richard Branson once said that in the future people will wonder why offices ever existed. Lots of big tech companies think otherwise. It’s 5 years since he said this, but companies are herding their people into towers punching ever bigger holes in the cloud. Arguments for enormous concrete, steel and glass caves with gorgeous interiors, filled with ever larger tribes of on-premise humans are being constructed all around me here in Dublin. And it’s happening in all the big tech hubs.
Remote 1st SaaS founders who evolved away from shiny-cave syndrome, Stephen Cummins
I worked in software caves in the high touch B2B SaaS world almost from the very start. One of these shiny caves eventually expanded into the 150+ on-premise people scale, with lots of free food, gorgeous furniture etc. That explosion in office numbers is one reason why I sold my allotment of Salesforce shares cheaply and left. Hyper-growth was in full gear and I believed it would become much bigger. And I had an exciting role. But I had enjoyed it so much more when they were smaller. I spent half my days abroad commuting on planes and trains and had complete control over my itinerary as the CSM covering continental Europe. Salesforce Dublin had created the first ever SaaS CSMs (evolving Vantive’s idea) and I was parachuting into companies from every industry and getting to use my languages. But for me the office in Dublin was now uncomfortably large.
More desks every month and more people and more noise and more meetings and more infernal interruptions. And all that chaotic jazz. Still loads of energy, but a slowly diminishing percentage of effective work being done as the numbers exploded. It wasn’t the fun, scrappy scale-up I had signed up for anymore. I wanted to leave before I stopped enjoying it. So that’s what I did.
Salesforce used to use high rise office blocks to explain the B2B SaaS model. We’d tell reams of sceptical prospects ad nauseam that these buildings had shared electricity, shared security, shared water etc., but tenant companies could configure their own offices any way they liked. That’s how we’d start to explain the benefits of multi-tenant architecture. We did it well enough because the utility model disrupted on-premise software far quicker than most would have predicted. The world has completely accepted that we don’t need servers in the building.
But I want to ask another question.
Jay Shaw for J.D.Ballard’s High-Rise film poster
Do we need employees in the building?
Most SaaS giants and hyper-growth scale-ups have very few remote workers. Yet we didn’t evolve to work in huge groups huddled together in a large building. Can you imagine 1,000 people sharing a giant cave in the stone age? Humans evolve slowly. It’s difficult to design a giant shiny cave that successfully overrides our evolutionary instincts.
I gave a talk a few weeks ago to open the SaaS Monster stage in the Web Summit about 100% remote SaaS companies. 100% Remote means all employees working in separate locations — using remote technologies and methodologies to communicate and collaborate. During the talk I asked how many people in the audience (of about 1,200) had worked in an office with more than 100 to 150 people? About half the audience raised arms. I then asked them to keep their hands up if they found it really challenging in that office to get stuff done. Hardly an arm dropped.
Most of us have experienced the issues around constant interruptions, someone noisy on the phone, social pressure to chat with colleagues, impromptu meetings that take 60 minutes to get 5 minutes’ work done. And that’s after ever-lengthening commutes, especially in tech-hubs. As companies scale, just being around, making the right noises, looking attentive, making political alliances, ‘managing upwards’ and other inefficient personal strategies take up far too much time and emotional disc space. It’s not always the way, but there’s a lot of toxic bullshit in a lot of giant offices.
After backpacking the planet with my better half Alicia Tejero and then doing photography for a couple of years, I went back to Salesforce.com to learn how to sell. I had decided I wanted to be an entrepreneur and this was a hole in my skillset. And in a high touch B2B SaaS world, Salesforce is a school of excellence. By then the numbers had multiplied again and it was a very different company, but I thought I could block that out with single-minded purpose — despite the big open office and a role that demanded less travel.
To circumnavigate the noisy open floor, I ended each day by booking up time in various quiet offices for the following day, making sure I had at least one scheduled call in each 2-hour slot — to justify escaping the large open-office distractions. I’d vary the offices across the whole building so people wouldn’t notice and complain. I’d head outside for a walk a few minutes before ‘town-halls’ so I could dodge the herding process and sneak back into my small nested cave. So much wasted energy in the service of rewiring my interaction with a traditional on-premise-people system to get effective work done.
“Everything evades you, everything hides, even your thoughts escape you, when you walk in a crowd.”
Edwin Way Teale
In an interview with David Darmanin on my podcast, he told me the story of how he rejected a big expanding office and left to found a 100% remote SaaS company called Hotjar (web session replay and feedback software). 4 of the 5 founders live in Malta, but David insists they do not socialise or meet with any regularity. They use their remote methodologies and technologies to communicate even though they are located close to each other. Hence they empathise with, and understand the working lives of non-founder colleagues.
David believes that one doesn’t create a company culture. Instead one devises shared values, develops ways to build alignment … and from there the culture will evolve as the company grows. Hotjar employees are deeply engaged in a culture that has evolved from shared values they helped craft. The employees have stayed and the company is approaching $20M annual recurring revenue (ARR) after just 4 years.
“The thing is, once you have a framework and a model like this which is very much built around trust and empowering people, they need to be able to have the drive to do that. We haven’t set up a company where we spoon-feed people. If they want to be spoon-fed, then it’s not going to work out.”
Dr David Darmanin, CEO and Founder of Hotjar
I also interviewed YouCanBook.Me’s Bridget Harris for14 Minutes of SaaS. Her company does online scheduling software. They built their success story with no sales, no marketing, and no funding. They eliminated marketing & sales by solving a big problem in blood red competitive water with a product that’s easy to use and easy to share. Product led marketing to produce a viral solution. They are now close to $3 million ARR and facilitate 1 million bookings a month. Profitable for the last 3 years too.
Bridget says that if on-boarding needs support, the product is broken. Now that’s not true for every business model, but that philosophy and that aim are fantastic. And in the future, when they do add on sales and marketing, Bridget maintains that the big challenges will have been worked out and they’ll be accelerating the acquisition of customers they know they’ll help make successful. The right customers. That is, of course, way better than boosting market position artificially with money and signing up quick-churn-customers they’ll piss off because they haven’t worked their stuff out properly. Bridget advises self-awareness, or understanding your own personal culture and wiring, is a key foundation for building a successful company.
“Culture eats hiring for breakfast and if you start with your culture, knowing who you are, you learn how that has to work through your processes and procedures.”
Bridget Harris, CEO and Co-founder of YouCanBook.me
Le Penseur (the Thinker), Auguste Rodin
A big part of many of the companies successfully adopting remote-working is about working in a better way and allowing people to have more meaningful and balanced lives, as opposed to chasing SaaS monster lottery tickets. It’s not about old-world carrot and stick nonsense. People who choose to tolerate being a slave to stress can easily become wired to only care about the money. That’s not good for anybody. Money is one very important part of the pie of course. Hiring in non-expensive places allows companies to reward employees generously. Hotjar gives employees 40 days planned vacation a year and YouCanBook.me pay their Spanish engineers double the local rates. And that’s the tip of a beautiful iceberg of rewards. As Bridget Harris says, it’s about jam today in some of your best years, rather than putting your life on hold.
“You don’t need to live in San Francisco to be a tech genius, or New York, or Austin. So why should you live anywhere if you don’t want to?”
Wade Foster, Co-founder and CEO at Zapier
A modest, but growing fleet of remote SaaS companies like Hotjar, YouCanBook.Me, Basecamp (project management), Zapier (iPaaS), Balsamiq (wireframing), Automattic (Wordpress ), InVision (software prototyping), Buffer (social media management), GitLab (version control hosting), Doist (task management & internal communications), Wildbit (several SaaS products) etc. are quietly landing on the shores of our business landscape.
And here’s the thing. These companies produce best in category SaaS solutions. If you don’t believe me, check their software against the X axis on G2 Crowd’s crowd sourced quadrants, the bit that’s labelled ‘Satisfaction’. G2 Crowd really needs to rename that ‘Customer Success’. These companies absolutely excel at customer success — many are number 1 in their competitive categories. The signs are that 100% remote SaaS companies consistently produce world class solutions. And unsurprisingly Glassdoor says they excel at employee success too.
For another indicator that an armada of these companies will slowly rise, look at the apparently shocking valuations for WeWork. Between 3 and 40 Billion USD depending on who you believe. Crunchbase did a meta-analysis of these disparate valuations and its guess is that the value is closer to $9B. Co-working spaces are a big enabler of remote. Hence the rise of remote is directly related to such stratospheric valuations.
Sophisticated companies like Plantronics have been producing hardware for enabling remote since long before big software enablers like Slack were born. It has known for decades that large shared work spaces are sub-optimal for effective work. And Plantronics has vast data to back that up. A Plantronics headset enabled the world to hear Neil Armstrong’s immortal words as he stepped on the moon for the first time in 1969. They’ve been doing this a long time and know a thing or two. And it was their customer, NASA engineer Jack Nilles, that coined the word telecommuting. Today about 60% of NASA’s staff work remotely.
Neil Armstrong on the moon (with a Plantronics headset)
So the next time you go to one of those open-office ‘we want to hire you’ events in a big SaaS company, and you see all those enormous cool spaces to collaborate & play, please don’t assume this is an efficient way. You might love being part of that scene. If you do, go for it. And actually, many of these companies are great places to start off in — especially if you can find a way to change role a couple of times early on.
Plantronics is a global enabler of all sorts of communication from executives on the move to remote workers to large call centres. It measures the effectiveness of workspaces across 4 ‘C’s; Communication, Collaboration, Contemplation and Concentration. And it does a lot more than headsets — it’s also a significant player in cloud software and works on solutions for many common issues with large spaces e.g. with habitat soundscaping.
Plantronics considers well designed office spaces to be optimal for collaboration. And there’s strong evidence to support that. when it comes to innovation, it’s obviously difficult to beat the likes of ServiceNow, Workday and Salesforce — the #1, #2, and #3 ranked companies for innovation according to Forbes World’s most Innovative Companies list. These are also the 3 largest pure-play B2B SaaS companies in the world. So innovation should not be underestimated. Even though some strong proponents of remote would say otherwise, I agree there’s a hit in 100% remote companies regarding face-to-face brainstorming.
Ticketmaster, London
However Plantronics also sees the strength of quieter, remote working spaces in the other three Cs, particularly concentration and contemplation, but also communication. Even in the most innovative successful companies, 99% of the work done is executing off stuff they already know. It has to be that way. Remote companies gain much more in terms of deep work with less interruptions. Otherwise it’s employees would fly around like a parliament of magpies in spring, chasing shiny ideas until they crash into some high rise windows. The vast majority of a company’s time, once a strong product-market fit and initial happy shiny paying customers have been established, is rinse and repeat. It keeps listening, learning and innovating obviously, but the ability to deliver effectively is almost everything. Deep, effective work gets done when one can concentrate and focus without fear of excessive interruptions, negatively multiplied by switch-tasking.
And even for innovation, remote can work well for some companies. It gives people time to think in advance and dismiss most shiny possibilities that don’t work before a web-enabled meeting. People who genuinely have something to say about the specific topic tend to pipe-up, rather than those who are simply good at speaking. And when communicating in asynchronous threads, team members tend to think carefully about input. They learn to value their colleagues’ time and that brings in a very positive dimension to collaboration.
“What I want inside my company is a sustainable culture where you can work for 20 years and be happy and have a life outside of work.”
Amir Salihefendic, Founder & CEO of Doist
Nicholas Bloom, Eberle Professor in the Department of Economics at Stanford, ran an excellent study and gave a tremendous talk on remote working with Trip (formerly Ctrip), China’s largest travel agency. The company has 20,000 employees and agreed to a comparative study of remote versus in in-office employees. The teams were divided with a random selection method based on whether their date of birth was an even or odd number. The results were staggering. Remote workers tended to do their full hours (which the in-office employees tended not to do for various reasons) and remote workers showed a better ability to concentrate on the task in hand. This resulted in an increase of 13% in work effectiveness and halved the annual rate at which people left the company. The upshot was that Trip made $2,000 more profit per person working from home.
100% Remote SaaS companies develop and constantly evolve effective ways of hiring. They find employees that have a grown-up attitude to work, and that embrace transparency and trust. They are forced to focus on values and culture from day 1, and to a degree that office based companies will not feel compelled to do. The employees can live wherever they want and remote companies can often afford to reward them generously by hiring outside expensive locations like San Francisco, London, New York and Dublin. 100% remote SaaS companies tend to really value the work-life balance of their employees. I suspect they don’t agree with Jason M. Lemkin’s pinned tweet.
“Once or twice in life, someone will give you a shot you don’t quite deserve.
Work 100 hours.
Do whatever it takes…”
My perspective on this tweet? To hell with that. I’m not going to mortgage my life and my relationship with my family. Long hours are rarely an efficient or sustainable way to work. In fact the Basecamp founders do a great job in deconstructing the ‘Do whatever it takes’ narrative in their new book ‘It doesn’t have to be Crazy at Work’. I highly recommend it, even if you are not interested in remote. And if you are interested in this, read all their books. Especially ‘Remote’. Unless we’re unlucky or desperate or lost, we should work to live. I know I’ve lost myself before. Never again.
Family time. What dent in the universe can be more important to me than this?
Loneliness is another challenge for remote. We are social animals. The evolution of our brains depended on interpersonal contact with other humans. To reduce the negative impact of loneliness, hire for employees that are resilient and resourceful. People who get up in the morning with a clear ikigai, and who can access coping mechanisms if they feel down. Hiring people proximate to co-working spaces or good cafes can help.
Hiring the wrong person is the most expensive thing that can happen to your company so you work hard to avoid that.”
Bridget Harris, CEO and Co-founder of YouCanBook.me
Can you imagine Serena Williams or Rafael Nadal making wholesale changes to their game every month? Re-engineering the serve one week, switching to a one-handed backhand the next. They practise what they know well and reserve a small amount of time to evolve their games and innovate. And can you imagine them being interrupted mid-match constantly? How would that influence the effectiveness of their games? The rest of the human race is no different. When we focus, we are effective.
Focus. Rafael Nadal, eye on the ball as he serves.
“Gitlab started as a remote company from day one. I didn’t meet my co-founder until a year after we started this company. We are now 300 remote people across 40 countries.”
Dmitriy Zaporozhets, Co-founder & CTO at GitLab
Of course, remote is not the only way. I also interviewed Nicolas Dessaigne in Algolia for 14 Minutes of SaaS and he’s seeded hyper-growth and amazing office based culture in Paris & San Francisco. Algolia’s culture is one where customer problems are viewed as opportunities rather than reasons to panic. Nicolas cares deeply about a culture of transparency and honesty. Intercom have also achieved big growth with offices in Dublin & San Francisco. Intercom’s culture is so cool, it makes a personal comic book to celebrate the work anniversary of every employee. Every year, every employee receives a copy of a beautifully illustrated ‘intercomic’ with themselves as protagonist and hero. This forces the company to really know it’s employees very well. And amongst the giants Salesforce is still a growth and innovation machine, and generally stands for strong values around equality and social responsibility.
So I’m not saying 100% Remote companies have a monopoly on values and employee success. And I’m not saying there’s no place for beautifully constructed caves with focussed on-premise humans. I’m saying that we seriously underestimate how big and valuable and liberating the ever-improving possibilities for remote-working will become. Even if your company is not interested in having a significant percentage of remote employees, a huge amount can be learned from the various cultures and methodologies that have evolved in remote companies. The focus on effective work, clarity of values, efficient communication, and on empowering people to live great lives are amongst the most compelling.
But there’s one other big challenge with 100% remote companies.
Scale.
100% remote companies are often bootstrapped and sacrifice hyper-growth for slower, more sustainable growth. They hire extremely carefully and typically don’t artificially accelerate their growth with huge marketing and sales spend. Hence I doubt there will be a 100% remote SaaS company as big as a Salesforce, or even a Workday or ServiceNow within the next 5 years. I do believe it will happen eventually though.
However, as DHH, the creator of Ruby on Rails and co-founder of Basecamp says …. if you set out to create a unicorn, the odds are stacked against you from the start. Why not make a highly lucrative dent in the universe rather than trying to own the fucking universe.
There’s room for thousands of SaaS companies to succeed. Not just a few dozen. These companies can massively improve the lives of their employees, and by extension their employees’ families. It’s a philosophy that favours the quality of the work and rewards us when we are effective adults. And that’s good news.
So if you’re a founder that tends to be more of an author than a reader in the narrative of your own life, why not consider building a company that attracts other women and men who tend to take control of their own lives too.
Live long and prosper!
Stephen Cummins, 17th November, 2018
— — — — — — — — — — — — — — — — — — — — — — — — — — — -
If you found this interesting, then …
1. Listen to me interview the greatest founders in the world on the14 Minutes of SaaS podcast … you can listen to it wherever you listen to podcasts:
14 Minutes of SaaS on Spotify / Apple podcasts / Google podcasts / TuneIn / Stitcher
2. Follow me on social networks you use: @Stephen_Cummins and @14MinutesOfSaaS and my LinkedIn profile
— — — — — — — — — — — — —
“There’s a crazy thing that happens as you’re growing a business … It tries to be real. It starts asking for things …The business isn’t real. We’ve invented it. We created it. It does not exist in and of itself … Our biggest challenge as founders is to tame the beast. To keep the beast from becoming the voice in your head. The beast is directly responsible for when it’s not fun anymore … It belongs to us and it needs to serve us.”
Natalie Nagele, Co-founder & CEO, Wildbit
“All that glistens is not gold —
Often have you heard that told.
Many a man his life hath sold…
Had you been as wise as bold,
Young in limbs, in judgment old,…
Fare you well. Your suit is cold —
Cold, indeed, and labour lost.”
William Shakespeare (The Merchant of Venice)
“My name is Ozymandias, king of kings;
Look on my works, ye Mighty, and despair!’
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare”
Percy Bysshe Shelley
This article is from my AppSelekt blog:
On-Premise People in Shiny Caves
— — — — — — — — — — — — — — — — — — | https://medium.com/understanding-as-a-service-uaas/on-premise-people-and-shiny-caves-remote-as-a-service-97cff86382b6 | ['Stephen Cummins'] | 2019-11-22 12:13:52.515000+00:00 | ['SaaS', 'B2B', 'Startup', 'Future Of Work', 'Remote Working'] |
DreamTeam Digest #28 | Here’s a quick DreamTeam update. Since the release of our last digest, we’ve added a new game to the platform, DreamTeam has been named the “Best Blockchain Startup”, and has been mentioned in several crypto publications.
Check out the details below.
DreamTeam adds Call of Duty: Modern Warfare to the platform
We know the gaming world has been waiting for Call of Duty: Modern Warfare to drop for a long time. And it’s finally here! With that, DreamTeam is thrilled to announce that CoD has been added to the DreamTeam family and is now live on the platform! Read exactly what players get with DreamTeam CoD here.
DreamTeam wins the EASAwards regional round
DreamTeam has been named “The Best Blockchain Startup” on the regional round of the EASAwards. In the regional round, DreamTeam competed against startups from Euroasian countries: Armenia, Azerbaijan, Belarus, Georgia, Moldova, the Russian Federation, and Turkey. See winners here.
Esports and Cryptocurrency Payments
As gamers are typically tech-savvy, crypto and esports go hand in hand. Check out CoinPoint’s view on esports, crypto, how they are made for each other, and how DreamTeam fits into the equation here.
Crypto and Esports
Can tech-savvy gamers help push crypto into mass adoption? Hedgetrade.com seems to think so. Find out what was written about DreamTeam and which other projects are helping make the push here.
Sincerely,
The DreamTeam Crew
About DreamTeam:
DreamTeam — infrastructure platform and payment gateway for esports and gaming.
Stay in touch: Token Website|Facebook |Twitter|LinkedIn|BitcoinTalk.org
If you have any questions, feel free to contact our support team any time at [email protected]. Or you can always get in touch with us via our official Telegram chat. | https://medium.com/dreamteam-gg/dreamteam-digest-28-e0f2d08cc976 | [] | 2019-12-12 14:05:30.319000+00:00 | ['Apex Legends', 'Gaming', 'Esports', 'Startup', 'Game Development'] |
The Future Is Meatless | Photo by Stijn te Strake on Unsplash
In the churches I grew up in, hypocrisy surrounded me. The lessons we learned on Wednesday and Sunday had no correlation to how we lived outside of church. Inside those walls, we were Christians; outside of them, we were regular people.
I hated being part of a group that was so unapologetically hypocritical. It has always been clear to me that it’s not okay to behave in a way that is inconsistent with your beliefs, and if you have a difficult time following your beliefs, maybe you should look for new ones.
I do my best to live an unhypocritical life. I drive the way I want others to drive; I obey all the rules of the road and am courteous to my fellow drivers. It would be unfair to expect everyone else to obey traffic laws and not adhere to them myself. Voting is a necessary, but painful, hypocrisy. There are normally only two options with a chance of winning on the ballot, and the values of the politicians I vote for rarely align with my own. This year, anyone who votes for either of the two major candidates will be voting for men accused of sexual assault.
I do my best to keep the hypocrisy in my life to a minimum, but there is one hypocrisy in my life that I don’t agonize over. I try my best not to think about it, actually. Most Americans do the same.
I normally eat meat at least once a day. If you asked me where it came from, I’d probably say Kroger. I don’t know how the meat on my plate got there, and I don’t want to know. I try to view meat as any other food: something made by people, not something that once was living and breathing. I’m not here to preach to you about eating meat, but I believe we should at least think about where our food comes from, and consider making changes when we are able.
Is eating meat moral?
It’s difficult to know exactly how many animals are capable of feeling and consciousness, but we know these traits aren’t exclusive to humans. Every time we eat meat, there is an animal that likely suffered to provide us that meal. Many of us don’t eat meat because we have to in order to survive, we eat meat because we enjoy it and it tastes good. Cows, chickens, pigs, and more suffer for our enjoyment.
If we didn’t eat meat, though, cows, chickens, and pigs wouldn’t live blissful lives in green pastures, dying of old age. They wouldn’t exist. Which brings up another question: is the short, often painful, existence of an animal better than no existence at all?
Many human lives are short and filled with suffering; are those humans better off not existing? It’s easier to say that an animal would be better off not having lived at all than suffering because we value human life more than animal life. Even if a human is in pain and suffers, that suffering is worth it because the cost, the loss of a human life, is worse than the human’s suffering.
The life of a cow or chicken or any other animal is not worth that much to begin with to us, so of course we believe it’s better for those animals to not live at all than to live and suffer. We can’t understand the meaningful life experiences of a cow. Cows don’t go to school or get a job or get married, but that doesn’t mean they can’t have a fulfilling life (which for a cow probably means eating grass and hanging out with cow friends all day).
Meat consumption is not a black and white issue. Who am I to say that a cow or a chicken would rather never live at all than be bred on an ethical and humane farm for food? I believe it’s better for an animal to have a pleasant existence than no existence at all, but also that many animals we eat for food would have been better off never coming into this world. I don’t believe eating meat is inherently wrong, and it’s possible for the animals we eat to have pleasant and enjoyable lives.
Unfortunately, not many animals we eat have a pleasant existence; over 99% of U.S.-farmed animals live on “factory farms.” Ethical, humane farms aren’t that great, either. Treating animals right is usually worse for the environment (grass-fed cow farming produces two to four times as much methane, for example, and uses more land and water), not to mention more expensive. The environmental costs of raising animals for food in a humane and ethical manner are too great. We can’t afford the enormous amount of resources required and pollution generated when raising animals for food in a humane way. This is why the future is meatless (well, meat alternative).
We can’t stop people from eating meat; even vegans and vegetarians have a hard time resisting animal products, as 84% eventually return to eating meat. Our taste for meat isn’t going anywhere, but animals are. Meat alternatives aren’t just for people that don’t eat animal meat anymore, they’re for everyone. Eventually, meat alternatives will replace traditional meat.
Meat alternatives are the best long-term solution we have. The switch won’t happen overnight, but we’ve already seen major fast-food restaurants, some of the biggest targets of PETA and similar animal rights groups, try out alternative meat products in their restaurants. Some of them, like Burger King, have embraced alternative meats and permanently added them to the menu. Eventually, alternative meat will taste identical to the real thing (some are already indistinguishable to “real” meat products), and will be cheap to produce. When that happens, expect almost all restaurants and grocery stores to offer primarily meat-free products (as long as the agricultural lobbyists don’t get in their way). Until that day comes, we need to make sure the animals we eat are treated as humanely and with as much respect as possible. | https://medium.com/anti-dote/the-future-is-meatless-c51a19b46fc4 | ['Daniel May'] | 2020-07-03 14:07:00.543000+00:00 | ['Vegan', 'Farming', 'Agriculture', 'Meat', 'Animal Rights'] |
How I Survived the Death of My Best Friend | What helped, what didn’t, and what would have.
Photo by Keegan Houser on Unsplash
How It Happened:
On August 1, 2010, I watched my best friend die. We were 17 years old, had just graduated from high school, and were soon to be heading off to different colleges. We had our whole lives ahead of us — or so we thought.
Cody fell while we were being pulled by a Jeep on our skateboards. He hit his head on the asphalt and was knocked unconscious. I let go of the Jeep and skated back to him. His nose was bleeding heavily and he wasn’t breathing. I called an ambulance but it was too late, Cody was already dead.
I left for college a few weeks later. And although I didn’t know it at the time, I was just beginning the biggest struggle of my young life.
What Helped:
Photo by Helena Lopes on Unsplash
1. Friends
All I wanted to do after Cody died was to isolate myself. Some of that alone time was healthy but it was the time spent with friends and family that helped me heal the most.
Tip: Never underestimate the value of friendship.
2. An Artistic Outlet
Everyone has an outlet that they can use to express how they are feeling. For me, it was writing songs and playing my guitar. I was able to scream and yell and strum as loud as I needed. With my guitar, I wasn’t just a crazy person screaming at nothing in an empty room, I was an artist.
Tip: Allow yourself to be as free and unfiltered as you can. You’ll be happy you did.
3. Tears
Before Cody died, I could count the number of times that I had cried on one hand: once at my grandma’s funeral, once when I took a nasty knee to the groin in my 6th grade basketball game, and once when I found out that it was actually my dad who was putting money under my pillow. Now, I couldn’t even attempt to count the amount of times that I’ve had a good cry. Shit, I could use one right now.
Tip: Don’t even bother trying to hold back the tears. Let ’em flow and soak up that sweet sweet relief.
What Didn’t Help:
Photo by Nathan Dumlao on Unsplash
1. Feeling Sorry For Myself
I spent the better part of two years moping around like I was the only person alive who had ever experienced loss. Instead of trying to achieve what I wanted out of life, I made excuses as to why it was okay for me to fail. I finished my second semester of freshman year with a 0.7 GPA. I dropped out of school. I told myself that it was okay because of what I had been through. What a load of shit. Slowly, I crawled out of my self-pity pit and enrolled in some community college classes. Eventually I enrolled back into the University and graduated with an overall GPA of 3.2. Not too shabby.
Tip: Tragedy is an unfortunate gift. Once you overcome grief, you become strong. And you can use that strength to achieve whatever it is you want in this life.
2. Alcohol
Not surprisingly, I started drinking a lot my freshman year of college. Drunk-me was brave. Drunk-me was strong. Drunk-me was cool. And that was the truth… as long as you were asking drunk-me. If you were asking sober-me, drunk-me was a loser. Drunk-me got me arrested a few times. Drunk-me ruined many of my relationships. Drunk-me used anger and violence to try and solve my problems. Drunk-me left sober-me with far too many messes to clean up in the morning.
Tip: It’s not as cool as you think to be the most fucked-up person at the party.
What I wish I would have done:
Photo by Rachel Cook on Unsplash
Found Professional Help:
I can’t be sure, but I think that if I would have gone to see a therapist early on, I would have saved myself years of struggle. I’ve learned a lot in the last nine years, but it’s no substitute for professional help. If nothing else, it probably could have saved me hundreds of dollars in court fees.
Tip: It doesn’t make you stronger to go through something alone. In fact, it only makes you weaker. | https://medium.com/swlh/how-i-survived-the-death-of-my-best-friend-85c866a90f7b | ['B. T. Swezey'] | 2019-11-22 17:41:47.758000+00:00 | ['Loss', 'Mental Health', 'Love', 'Grief', 'Self Improvement'] |
Hadoop Ecosystem — Get To Know The Hadoop Tools For Crunching Big Data | Hadoop Ecosystem - Edureka
In the previous blog on Hadoop Tutorial, we discussed Hadoop, its features and core components. Now, the next step forward is to understand Hadoop Ecosystem. It is an essential topic to understand before you start working with Hadoop. This Hadoop ecosystem blog will familiarize you with industry-wide used Big Data frameworks.
Hadoop Ecosystem is neither a programming language nor a service, it is a platform or framework which solves big data problems. You can consider it as a suite that encompasses a number of services (ingesting, storing, analyzing and maintaining) inside it. Let us discuss and get a brief idea about how the services work individually and in collaboration.
Below are the Hadoop components, that together form a Hadoop ecosystem, I will be covering each of them in this blog:
HDFS -> Hadoop Distributed File System
-> Hadoop Distributed File System YARN -> Yet Another Resource Negotiator
-> Yet Another Resource Negotiator MapReduce -> Data processing using programming
-> Data processing using programming Spark -> In-memory Data Processing
-> In-memory Data Processing PIG, HIVE -> Data Processing Services using Query (SQL-like)
-> Data Processing Services using Query (SQL-like) HBase -> NoSQL Database
-> NoSQL Database Mahout, Spark MLlib -> Machine Learning
-> Machine Learning Apache Drill -> SQL on Hadoop
-> SQL on Hadoop Zookeeper -> Managing Cluster
-> Managing Cluster Oozie -> Job Scheduling
-> Job Scheduling Flume, Sqoop -> Data Ingesting Services
-> Data Ingesting Services Solr & Lucene -> Searching & Indexing
-> Searching & Indexing Ambari -> Provision, Monitor and Maintain cluster
HDFS
Hadoop Distributed File System is the core component or you can say, the backbone of the Hadoop Ecosystem.
is the core component or you can say, the backbone of the Hadoop Ecosystem. HDFS is the one, which makes it possible to store different types of large data sets (i.e. structured, unstructured and semi-structured data).
HDFS creates a level of abstraction over the resources, from where we can see the whole HDFS as a single unit.
It helps us in storing our data across various nodes and maintaining the log file about the stored data (metadata).
HDFS has two core components, i.e. NameNode and DataNode.
The NameNode is the main node and it doesn’t store the actual data. It contains metadata, just like a log file or you can say as a table of content. Therefore, it requires less storage and high computational resources. On the other hand, all your data is stored on the DataNodes and hence it requires more storage resources. These DataNodes are commodity hardware (like your laptops and desktops) in the distributed environment. That’s the reason, why Hadoop solutions are very cost effective. You always communicate to the NameNode while writing the data. Then, it internally sends a request to the client to store and replicate data on various DataNodes.
YARN
Consider YARN as the brain of your Hadoop Ecosystem. It performs all your processing activities by allocating resources and scheduling tasks.
It has two major components, i.e. ResourceManager and NodeManager.
ResourceManager is again a main node in the processing department. It receives the processing requests and then passes the parts of requests to corresponding NodeManagers accordingly, where the actual processing takes place. NodeManagers are installed on every DataNode. It is responsible for the execution of the task on every single DataNode. Schedulers: Based on your application resource requirements, Schedulers perform scheduling algorithms and allocates the resources. ApplicationsManager: While ApplicationsManager accepts the job submission, negotiates to containers (i.e. the Data node environment where process executes) for executing the application specific ApplicationMaster and monitoring the progress. ApplicationMasters are the deamons which reside on DataNode and communicates to containers for execution of tasks on each DataNode.ResourceManager has two components, i.e. Schedulers and ApplicationsManager.
MapReduce
It is the core component of processing in a Hadoop Ecosystem as it provides the logic of processing. In other words, MapReduce is a software framework which helps in writing applications that process large data sets using distributed and parallel algorithms inside the Hadoop environment.
In a MapReduce program, Map() and Reduce() are two functions.
The Map function performs actions like filtering, grouping and sorting. While Reduce function aggregates and summarizes the result produced by map function. The result generated by the Map function is a key value pair (K, V) which acts as the input for Reduce function.
Let us take the above example to have a better understanding of a MapReduce program.
We have a sample case of students and their respective departments. We want to calculate the number of students in each department. Initially, Map program will execute and calculate the students appearing in each department, producing the key-value pair as mentioned above. This key value pair is the input to the Reduce function. The Reduce function will then aggregate each department and calculate the total number of students in each department and produce the given result.
APACHE PIG
PIG has two parts: Pig Latin , the language and the pig runtime, for the execution environment. You can better understand it as Java and JVM.
has two parts: , the language and for the execution environment. You can better understand it as Java and JVM. It supports pig latin language, which has SQL like command structure.
As everyone does not belong from a programming background. So, Apache PIG relieves them. You might be curious to know how?
Well, I will tell you an interesting fact:
10 line of pig latin = approx. 200 lines of Map-Reduce Java code
But don’t be shocked when I say that at the back end of Pig job, a map-reduce job executes.
The compiler internally converts pig latin to MapReduce. It produces a sequential set of MapReduce jobs, and that’s an abstraction (which works like black box).
PIG was initially developed by Yahoo.
It gives you a platform for building data flow for ETL (Extract, Transform and Load), processing and analyzing huge data sets.
How does Pig work?
In PIG, first, the load command loads the data. Then we perform various functions on it like grouping, filtering, joining, sorting, etc. At last, either you can dump the data on the screen or you can store the result back in HDFS.
APACHE HIVE
Facebook created HIVE for people who are fluent with SQL. Thus, HIVE makes them feel at home while working in a Hadoop Ecosystem.
Basically, HIVE is a data warehousing component which performs reading, writing and managing large data sets in a distributed environment using SQL-like interface.
HIVE + SQL = HQL
The query language of Hive is called Hive Query Language(HQL), which is very similar like SQL.
It has 2 basic components: Hive Command Line and JDBC/ODBC driver .
. The Hive Command line interface is used to execute HQL commands.
interface is used to execute HQL commands. While Java Database Connectivity ( JDBC ) and Object Database Connectivity ( ODBC ) is used to establish a connection from data storage.
) and Object Database Connectivity ( ) is used to establish a connection from data storage. Secondly, Hive is highly scalable. As, it can serve both the purposes, i.e. large data set processing (i.e. Batch query processing) and real-time processing (i.e. Interactive query processing).
It supports all primitive data types of SQL.
You can use predefined functions, or write tailored user-defined functions (UDF) also to accomplish your specific needs.
APACHE MAHOUT
Now, let us talk about Mahout which is renowned for machine learning. Mahout provides an environment for creating machine learning applications which are scalable.
So, What is machine learning?
Machine learning algorithms allow us to build self-learning machines that evolve by itself without being explicitly programmed. Based on user behavior, data patterns and past experiences it makes important future decisions. You can call it a descendant of Artificial Intelligence (AI).
What Mahout does?
It performs collaborative filtering, clustering, and classification. Some people also consider frequent item set missing as Mahout’s function. Let us understand them individually:
Collaborative filtering: Mahout mines user behaviors, their patterns and their characteristics and based on that it predicts and make recommendations to the users. The typical use case is E-commerce website. Clustering: It organizes a similar group of data together like articles can contain blogs, news, research papers etc. Classification: It means classifying and categorizing data into various sub-departments like articles can be categorized into blogs, news, essay, research papers, and other categories. Frequent itemset missing: Here Mahout checks, which objects are likely to be appearing together and make suggestions, if they are missing. For example, cell phone and cover are brought together in general. So, if you search for a cell phone, it will also recommend you the cover and cases.
Mahout provides a command line to invoke various algorithms. It has a predefined set of a library which already contains different inbuilt algorithms for different use cases.
APACHE SPARK
Apache Spark is a framework for real-time data analytics in a distributed computing environment.
is a framework for real-time data analytics in a distributed computing environment. The Spark is written in Scala and was originally developed at the University of California, Berkeley.
It executes in-memory computations to increase the speed of data processing over Map-Reduce.
It is 100x faster than Hadoop for large scale data processing by exploiting in-memory computations and other optimizations. Therefore, it requires high processing power than Map-Reduce.
As you can see, Spark comes packed with high-level libraries, including support for R, SQL, Python, Scala, Java etc. These standard libraries increase the seamless integrations in complex workflow. Over this, it also allows various sets of services to integrate with it like MLlib, GraphX, SQL + Data Frames, Streaming services etc. to increase its capabilities.
This is a very common question in everyone’s mind:
“Apache Spark: A Killer or Saviour of Apache Hadoop?” — O’Reily
The Answer to this — This is not an apple to apple comparison. Apache Spark best fits for real-time processing, whereas Hadoop was designed to store unstructured data and execute batch processing over it. When we combine, Apache Spark’s ability, i.e. high processing speed, advanced analytics and multiple integration support with Hadoop’s low-cost operation on commodity hardware, it gives the best results.
That is the reason why Spark and Hadoop are used together by many companies for processing and analyzing their Big Data stored in HDFS.
APACHE HBASE
HBase is an open source, non-relational distributed database. In other words, it is a NoSQL database.
is an open source, non-relational distributed database. In other words, it is a NoSQL database. It supports all types of data and that is why it’s capable of handling anything and everything inside a Hadoop ecosystem.
It is modeled after Google’s BigTable, which is a distributed storage system designed to cope up with large datasets.
The HBase was designed to run on top of HDFS and provides BigTable like capabilities.
It gives us a fault-tolerant way of storing sparse data, which is common in most Big Data use cases.
The HBase is written in Java, whereas HBase applications can be written in REST, Avro and Thrift APIs.
For better understanding, let us take an example. You have billions of customer emails and you need to find out the number of customers who has used the word complaint in their emails. The request needs to be processed quickly (i.e. at real time). So, here we are handling a large data set while retrieving a small amount of data. For solving these kinds of problems, HBase was designed.
APACHE DRILL
As the name suggests, Apache Drill is used to drill into any kind of data. It’s an open source application which works with the distributed environment to analyze large data sets.
It is a replica of Google Dremel.
It supports different kinds of NoSQL databases and file systems, which is a powerful feature of Drill. For example Azure Blob Storage, Google Cloud Storage, HBase, MongoDB, MapR-DB HDFS, MapR-FS, Amazon S3, Swift, NAS, and local files.
So, basically, the main aim behind Apache Drill is to provide scalability so that we can process petabytes and exabytes of data efficiently (or you can say in minutes).
The main power of Apache Drill lies in combining a variety of data stores just by using a single query.
Apache Drill basically follows the ANSI SQL.
It has a powerful scalability factor in supporting millions of users and serve their query requests over large scale data.
APACHE ZOOKEEPER
Apache Zookeeper is the coordinator of any Hadoop job which includes a combination of various services in a Hadoop Ecosystem.
Apache Zookeeper coordinates with various services in a distributed environment.
Before Zookeeper, it was very difficult and time-consuming to coordinate between different services in the Hadoop Ecosystem. The services earlier had many problems with interactions like common configuration while synchronizing data. Even if the services are configured, changes in the configurations of the services make it complex and difficult to handle. The grouping and naming was also a time-consuming factor.
Due to the above problems, Zookeeper was introduced. It saves a lot of time by performing synchronization, configuration maintenance, grouping, and naming.
Although it’s a simple service, it can be used to build powerful solutions.
Big names like Rackspace, Yahoo, eBay use this service in many of their use cases and therefore, you can have an idea about the importance of Zookeeper.
APACHE OOZIE
Consider Apache Oozie as a clock and alarm service inside Hadoop Ecosystem. For Apache jobs, Oozie has been just like a scheduler. It schedules Hadoop jobs and binds them together as one logical work.
There are two kinds of Oozie jobs:
Oozie workflow: These are a sequential set of actions to be executed. You can assume it as a relay race. Where each athlete waits for the last one to complete his part. Oozie Coordinator: These are the Oozie jobs which are triggered when the data is made available to it. Think of this as the response-stimuli system in our body. In the same manner, as we respond to an external stimulus, an Oozie coordinator responds to the availability of data and it rests otherwise.
APACHE FLUME
Ingesting data is an important part of our Hadoop Ecosystem.
The Flume is a service which helps in ingesting unstructured and semi-structured data into HDFS.
is a service which helps in ingesting unstructured and semi-structured data into HDFS. It gives us a solution which is reliable and distributed and helps us in collecting, aggregating and moving a large amount of data sets .
and . It helps us to ingest online streaming data from various sources like network traffic, social media, email messages, log files etc. in HDFS.
Now, let us understand the architecture of Flume from the below diagram:
There is a Flume agent which ingests the streaming data from various data sources to HDFS. From the diagram, you can easily understand that the web server indicates the data source. Twitter is among one of the famous sources for streaming data.
The flume agent has 3 components: source, sink, and channel.
Source: it accepts the data from the incoming streamline and stores the data in the channel. Channel: it acts as the local storage or the primary storage. A Channel is temporary storage between the source of data and persistent data in the HDFS. Sink: Then, our last component i.e. Sink, collects the data from the channel and commits or writes the data in the HDFS permanently.
APACHE SQOOP
Now, let us talk about another data ingesting service i.e. Sqoop. The major difference between Flume and Sqoop is that:
Flume only ingests unstructured data or semi-structured data into HDFS.
While Sqoop can import as well as export structured data from RDBMS or Enterprise data warehouses to HDFS or vice versa.
Let us understand how Sqoop works using the below diagram:
When we submit Sqoop command, our main task gets divided into sub tasks which is handled by individual Map Task internally. Map Task is the sub task, which imports part of data to the Hadoop Ecosystem. Collectively, all Map tasks imports the whole data.
Export also works in a similar manner.
When we submit our Job, it is mapped into Map Tasks which brings the chunk of data from HDFS. These chunks are exported to a structured data destination. Combining all these exported chunks of data, we receive the whole data at the destination, which in most of the cases is an RDBMS (MYSQL/Oracle/SQL Server).
APACHE SOLR & LUCENE
Apache Solr and Apache Lucene are the two services which are used for searching and indexing in Hadoop Ecosystem.
Apache Lucene is based on Java, which also helps in spell checking.
If Apache Lucene is the engine, Apache Solr is the car built around it. Solr is a complete application built around Lucene.
It uses the Lucene Java search library as a core for search and full indexing.
APACHE AMBARI
Ambari is an Apache Software Foundation Project which aims at making Hadoop ecosystem more manageable.
It includes software for provisioning, managing and monitoring Apache Hadoop clusters.
The Ambari provides:
Hadoop cluster provisioning:
It gives us step by step process for installing Hadoop services across a number of hosts.
It also handles configuration of Hadoop services over a cluster.
2. Hadoop cluster management:
It provides a central management service for starting, stopping and re-configuring Hadoop services across the cluster.
3. Hadoop cluster monitoring:
For monitoring health and status, Ambari provides us a dashboard.
The Amber Alert framework is an alerting service which notifies the user, whenever the attention is needed. For example, if a node goes down or low disk space on a node, etc.
At last, I would like to draw your attention on three things importantly:
Hadoop Ecosystem owes its success to the whole developer community, many big companies like Facebook, Google, Yahoo, University of California (Berkeley) etc. have contributed their part to increase Hadoop’s capabilities. Inside a Hadoop Ecosystem, knowledge about one or two tools (Hadoop components) would not help in building a solution. You need to learn a set of Hadoop components, which works together to build a solution. Based on the use cases, we can choose a set of services from the Hadoop Ecosystem and create a tailored solution for an organization.
I hope this blog is informative and added value to you.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Big data. | https://medium.com/edureka/hadoop-ecosystem-2a5fb6740177 | ['Shubham Sinha'] | 2020-09-10 10:08:25.429000+00:00 | ['Big Data', 'Hdfs', 'Yarn', 'Spark', 'Hadoop'] |
3 Advanced Pandas Methods for Data Scientists | Let us start with a simple sample DataFrame with four columns and three rows.
import pandas as pd df1 = pd.DataFrame(
[["Keras", "2015", "Python","Yes"],
["CNTK", "2016", "C++","Yes"],
["PlaidML", "2017", "Python, C++, OpenCL","No"]],
index=[1, 2, 3],
columns = ["Software", "Intial Release",
"Written In","CUDA Support" ]) print(df1)
Sample DataFrame has three different attributes viz.initial release, written in and CUDA support of three of the machine learning libraries.
DataFrame df1 — Output of the above code
We can see that the machine learning library name is one of the columns and different attributes of the libraries are in individual columns.
Let us imagine that the machine learning algorithm expects the same information in a different format. It is expecting all the attributes of the libraries in one column and respective values in another column.
We can use the “melt” function to transpose the DataFrame from the current format to the expected arrangement.
The columns from the original DataFrame which stays in the same arrangement, with the individual column, are specified in id_vars parameter.
df2=df1.melt(id_vars=["Software"],var_name='Characteristics')
print(df2)
The columns from the original DataFrame which stays in the same arrangement, with the individual column, are specified in id_vars parameter. All the remaining attributes in the original DataFrame populated under the characteristics column, and with values in a corresponding column.
DataFrame df2 after “melt” function on df1 — Output of the above code
Many times the raw data is collected is arranged with all attributes in one column with values mentioned in the corresponding column. The machine learning algorithm is expecting the input data with each attributes values in individual columns. We can use the “pivot” method in Pandas to transform the data in the required format.
df3=df2.pivot(index="Software",columns="Characteristics",
values="value") print(df3)
In essence, “melt” and “pivot” are complementary functions. ‘Melt’ transforms all attributes in one column and values in the corresponding column while “pivot” converts attributes in one column in a separate column.
Pivot function on DataFrame df2— Output of the above code
Sometimes, we have raw data spread across several files, and we need to consolidate it into one dataset. For the sake of simplicity, first, we will consider the case where data points in individual datasets are already in the right row sequence.
In a new DataFrame df4, two more attributes declared for the machine learning libraries.
df4 = pd.DataFrame(
[["François Chollet", "MIT license"],
["Microsoft Research", "MIT license"],
["Vertex.AI,Intel", "AGPL"]],
index=[1, 2,3],columns = ["Creator", "License"]) print(df4)
We can concatenate this new DataFrame with original DataFrame using the “concat” function.
df5=pd.concat([df1,df4],axis=1)
print(df5)
The output of concatenated DataFrame df1 and df4 has the original three attributes and two new attributes in one DataFrame df5
Concatenation of DataFrame df1 and df4— Output of the above code
In real life, most of the time data points in the individual datasets are not in the same row sequence, and there is a common key ( identifier) which links the data points in the datasets.
We have a DataFrame df6 storing the “creator” and “license” value of the machine learning library. “Software” is the common attribute between this and original DataFrame df1.
df6 = pd.DataFrame(
[["CNTK","Microsoft Research", "MIT license"],
["Keras","François Chollet", "MIT license"],
["PlaidML","Vertex.AI,Intel", "AGPL"]],
index=[1, 2,3],
columns = ["Software","Creator", "License"]) print(df6)
We can see that the row sequence of the Dataframe df6 is not the same as the original DataFrame df1. In df6 the attributes of “CNTK” library is stored first, whereas in DataFrame df1 the values “Keras” is first mentioned.
DataFrame df1 and df6 — Output of the above code
In such a scenario where we need to consolidate the raw data spread across several datasets and values are not in the same row sequence in individual datasets, we can consolidate the data into one DataFrame using the “merge” function.
combined=df1.merge(df6,on="Software")
print(combined)
The common key (unique identifier) to identify the right corresponding value from the different datasets is mentioned with “on” parameter of merge function.
Output of above code — consolidated DataFrame df1 and df6 based on the unique identifier “Software”
Recap
In this article, we have learnt three advanced pandas functions is quite handy during data preparation.
Melt: Unpivots the attributes/features into one column and values in the corresponding column Pivot: Splits the attributes/features from one column into an individual column Concat: Concatenate DataFrames along row or columns. Merge: Merges DataFrames with unique identifier across the DataFrames and values in different row sequences.
Conclusion
Pandas can help to prepare the raw data in the right format for machine learning algorithms. Pandas functions like “melt”, “concat”, “merge” and “pivot” discussed in the article can transform the data sets with a single line of code, which otherwise will take many nested loops and conditional statements. In my view, the difference between a professional and amateur data scientist is not the knowledge of an exotic algorithm or super optimisation hyper-parameter technique. The deep knowledge of Pandas, NumPy and Matplotlib puts someone in the professional league and enables them to achieve the results quicker with cleaner code and high performance.
You can learn data visualisation using pandas in 5 Powerful Visualisation with Pandas for Data Preprocessing.
Also, learn Advanced Visualisation for Exploratory data analysis (EDA) like a professional data scientist. | https://towardsdatascience.com/3-advanced-pandas-methods-for-data-scientist-c7935152b2ca | ['Kaushik Choudhury'] | 2020-09-08 17:52:05.676000+00:00 | ['Machine Learning', 'Python', 'Data Science', 'Programming', 'Pandas'] |
The Stock Market Is a Ponzi Scheme | The Stock Market Is a Ponzi Scheme
The stock market is built on non-dividend paying companies and this is a real problem, according to financial expert Tan Lui.
Illustration by therealdeal.com / LexiPilgrim
I am a huge believer in buying stocks. I’ve always seen the stock market as my best friend. But as I look at my Amazon stocks that have increased in price by 70% since I bought them not long ago, I’m questioning my beliefs.
Questioning your beliefs and assumptions is a powerful practice.
It pays to challenge your beliefs if you want to make more money, so you can buy back time, relax and work less.
So I’m asking myself this question:
Is the stock market one giant Ponzi Scheme? | https://timdenning.medium.com/the-stock-market-is-a-ponzi-scheme-2776f075b67b | ['Tim Denning'] | 2020-07-03 07:11:06.500000+00:00 | ['Society', 'Business', 'Life', 'Money', 'Economy'] |
Healthcare is not a right | Americans have a right to life, liberty and the pursuit of happiness.
Senator Bernie Sanders has suspended his campaign for the Democratic nomination for the President of the United States. Even so, he strongly supports having the federal government provide health insurance for every American. He says healthcare is right. He is wrong.
A right is generally defined as “ a moral or legal entitlement to have or obtain something or to act in a certain way.” Healthcare is not an entitlement, nor should it be. According to the Universal Declaration of Human Rights, there are numerous rights humans have, but the right to healthcare is not included.
The Declaration of Independence of the United States says all Americans have a right to life, liberty and the pursuit of happiness.
Nowhere in any definition of rights, is there any mention of the right to have health care. The rights mentioned above, all deal with the rights that individuals should have in their daily lives and in their pursuit of happiness. There is no mention of a right that requires other citizens to pay for something some citizens can’t afford to pay for themselves.
Most people, in most societies, tend to be compassionate. In the US, compassionate Americans are willing to give some of the income they earn to those, who for whatever reason, have not earned sufficient income to pay for the necessities of life.
Through the government, compassionate Americans will agree to have the government use some of their earnings to ensure all Americans have at least some income. There are specific government programs where income earners have agreed to allow the federal government the ability to take some of their earnings and fund a welfare program, a food stamp program, a housing program and some other programs.
These income transfer programs were never intended to imply that any and all Americans have a right to food stamps or a right to welfare or a right to Section 8 housing allowances. The programs are not rights. They exist because of the compassionate generosity of fellow Americans.
Sanders wants to go beyond that. He says that all Americans have a right to basic income, food and shelter. The truth is these are not rights. These are programs that the majority of compassionate Americans freely choose to support.
Health care is expensive. Ideally, Americans would like to see everyone covered by health insurance. But the price for that is too high. We tried the Affordable Care Act (ACA) which increased the percentage of Americans with health insurance from 85% of the population to 91%. That means 6% of the population, about 20 million people, benefited.
Meanwhile, 275 million Americans had to pay more and received less care. While about half of Americans just accepted this, the other half strongly objected. Now the ACA may be declared unconstitutional by the Supreme Court as early as this summer. That means we will have to debate health care policy again.
If healthcare was given to all Americans who couldn’t afford it, the majority would have to pay even more for their healthcare and receive even less. This is not an ideal situation. Eventually, the quality of healthcare would decline and those who do pay would object.
President Trump has a better solution. Instead of simply giving healthcare to all Americans, Trump wants to give every able-bodied American the opportunity to earn enough income to pay for their own health insurance. That’s why Trump concentrated on economic growth and providing opportunity.
His policies paid off, since, by the end of 2019, the unemployment rate was at a historical low, meaning more Americans had jobs and could pay for their own health care.
By declaring healthcare coverage to be a right, many people will not be concerned with earning enough income to pay for it. Rather people will know they can get coverage without paying, so many will simply not pay. That creates yet another burden on those of us who do pay for our healthcare.
Americans have a right to life, liberty and the pursuit of happiness. Beyond that, those in need will have to rely on the compassion of their fellow citizens. There is, however, a limit for how compassionate Americans can and should be.
Michael Busler, Ph.D. is a public policy analyst and a Professor of Finance at Stockton University where he teaches undergraduate and graduate courses in Finance and Economics. | https://micbusler.medium.com/healthcare-is-not-a-right-e219aa7a661a | ['Michael Busler'] | 2020-04-15 20:35:33.625000+00:00 | ['Technology', 'Business', 'Government', 'Politics', 'Health'] |
Using Riak as Events Storage — Part 4 | If you missed Part 3...
We strongly recommend that you read Using Riak as Events Storage - Part 3 of this blog series, to understand why MapReduce doesn't fit our needs. The previous parts explains how Booking.com collects and stores events from its backend into a central storage, and how we use it to do events analysis.
Strategy and Features
The previous parts introduced the need for data processing of the events blobs that are stored in Riak in real-time, and the strategy of bringing the code to the data:
Using MapReduce for computing on-demand data processing worked fine but didn't scale to many users (see part 3).
Finding an alternative to MapReduce for server-side real-time data processing requires listing the required features of the system and the compromises that can be made:
Real-time isolated data transformation
As seen in the previous parts of this blog series, we need to be able to perform transformation on the incoming events, with as little delay as possible. We don't want any lag induced by a large batch processing. Luckily, these transformations are usually small and fast. Moreover, they are isolated: the real-time processing may involve multiple types and subtypes of events data, but should not depend on previous events knowledge. Cross-epoch data processing can be implemented by reusing the MapReduce concept, computing a Map-like transformation on each events blobs by computing them independently, but leaving the Reduce phase up to the consumer.
Performance and scalability
The data processing should have a very limited bandwidth usage and reasonable CPU usage. However, we also need the CPU usage not to be affected by the number of clients using the processed data. This is where the previous attempt using MapReduce showed its limits. Of course, horizontal scalability has to be ensured, to be able to scale with the Riak cluster.
One way of achieving this is to perform the data processing continuously for every datum that reach Riak, upfront. That way, client requests are actually only querying the results of the processing, and not triggering computation at query time.
No back-processing
The data processing will have to be performed on real-time data, but no back-processing will be done. When a data processing implementation changes, it will be effective on future events only. If old data is changed or added (usually as a result of reprocessing), data processing will be applied, but using the latest version of processing jobs. We don't want to maintain any history of data processing, nor any migration of processed data.
Only fast transformations
To avoid putting too much pressure on the Riak cluster, we only allow data transformation that produces a small result (to limit storage and bandwidth footprint), and that runs quickly, with a strong timeout on execution time. Back-pressure management is very important, and we have a specific strategy to handle it (see "Back-pressure management strategy" below)
The solution: Substreams
With these features and compromises listed, it is now possible to describe the data processing layer that we ended up implementing at Booking.com.
This system is called Substreams. Every seconds, the list of keys of the data that has just been stored is sent to a companion app - a home-made daemon - running on every Riak node. This fetches the data, decompresses it, runs a list of data transformation code on it, and stores the results back into Riak, using the same key name but with a different namespace. Users can now fetch the processed data.
A data transformation code is called a substream because most of the time the data transformation is more about cherry-picking exactly the needed fields and values out of the full stream, rather than performing complex operations.
The companion app is actually a simple pre-forking daemon with a Rest API. It's installed on all nodes of the cluster, with around 10 forks. The Rest API is used to send it the list of keys, and wait for the process completion. The events data doesn't transit via this API; the daemon is fetching itself the key values from Riak, and stores the substreams (results of data transformation) back into Riak.
The main purpose of this system is to drastically reduce the size of data transferred to the end user by enabling the cherry-picking of specific branches or leaves of the events structures, and also to perform preliminary data processing on the events. Usually, clients are fetching these substreams to perform more complex and broader aggregations and computations (for instance as a data source for Machine Learning).
Unlike MapReduce, this system has multiple benefits:
Data decompressed only once
A given binary blob of events (at mot 500K of compressed data) is handled by one instance of the companion app, which will decompress it once, then run all the data processing jobs on the decompressed data structure in RAM. This is a big improvement compared to MapReduce, the most CPU intensive task is actually to decompress and deserialise the data, not to transform it. Here we have the guarantee that data is decompressed only once in its lifetime.
Transformation at write time, not at query time
Unlike MapReduce, once a transformation code is setup and enabled, it'll be computed for every epoch, even if nobody uses the result. However, the computation will happen only once, even if multiple users request it later on. Data transformation is already done when users want to fetch the result. That way, the cluster is protected against simultaneous requests of a big number of users. It's also easier to predict the performance of the substreams creations.
Hard timeout - open platform
Data decompression and transformation by the companion app is performed under a global timeout that would kill the processing if it takes too long. It's easy to come up with a realistic timeout value given the average size of event blobs, the number of companion instances, and the total number of nodes. The hard timeout makes sure that data processing is not using too many resources, ensuring that Riak KV works smoothly.
This mechanism allows the cluster to be an open platform: any developer in the company can create a new substream transformation and quickly get it up and running on the cluster on its own without asking for permission. There is no critical risk for the business as substreams runs are capped by a global timeout. This approach is a good illustration of the flexible and agile spirit in IT that we have at Booking.com.
Implementation using a Riak commit hook
In this diagram we can see where the Riak commit hook kicks in. We can also see that when the companion requests data from the Riak service, there is a high chance that the data is not on the current node and Riak has to get it from other nodes. This is done transparently by Riak, but it consumes bandwidth. In the next section we'll see how to reduce this bandwidth usage and have full data locality. But for now, let's focus on the commit hook.
Commit hooks are a feature of Riak that allow the Riak cluster to execute a provided callback just before or just after a value is written, using respectively pre-commit and post-commit hooks. The commit hook is executed on the node that coordinated the write.
We set up a post-commit hook on the metadata bucket (the epochs bucket). We implemented the commit hook callback, which is executed each time a key is stored to that metadata bucket. In part 2 of this series, we explained that the metadata is stored in the following way: - the key is <epoch>-<datacenter_id> , for example: 1413813813-1 - the value is the list of data keys (for instance 1413813813:2:type3::0 )
The post-commit hook callback is quite simple: for each metadata key, it gets the value (the list of data keys), and sends it over HTTP in async mode to the companion app. Proper timeouts are set so that the execution of the callback is capped and can't impact the Riak cluster performance.
Hook implementation
First, let's write the post commit hook code:
metadata_stored_hook(RiakObject) ->
Key = riak_object:key(RiakObject),
Bucket = riak_object:bucket(RiakObject),
[ Epoch, DC ] = binary:split(Key, <<"-">>),
MetaData = riak_object:get_value(RiakObject),
DataKeys = binary:split(MetaData, <<"|">>, [ global ]),
send_to_REST(Epoch, Hostname, DataKeys),
ok.
send_to_REST(Epoch, Hostname, DataKeys) ->
Method = post,
URL = "https://" ++ binary_to_list(Hostname)
++ ":5000?epoch=" ++ binary_to_list(Epoch),
HTTPOptions = [ { timeout, 4000 } ],
Options = [ { body_format, string },
{ sync, false },
{ receiver, fun(ReplyInfo) -> ok end }
],
Body = iolist_to_binary(mochijson2:encode( DataKeys )),
httpc:request(Method,
{URL, [], "application/json", Body},
HTTPOptions, Options),
ok.
These two Erlang functions (here they are simplified and would probably not compile), are the main part of the hook. The function metadata_stored_hook is going to be the entry point of the commit hook, when a metadata key is stored. It receives the key and value that was stored, via the RiakObject , uses its value to extract the list of data keys. This list is then sent to the companion damone over Http using send_to_REST .
The second step is to get the code compiled and Riak setup to be able to use it is properly. This is described in the documentation about custom code.
Enabling the Hook
Finally, the commit hook has to be added to a Riak bucket-type:
riak-admin bucket-type create metadata_with_post_commit \
'{"props":{"postcommit":["metadata_stored_hook"]}'
Then the type is activated:
riak-admin bucket-type activate metadata_with_post_commit
Now, anything sent to Riak to be stored with a key within a bucket whose bucket-type is metadata_with_post_commit will trigger our callback metadata_stored_hook .
The hook is executed on the coordinator node, that is, the node that received the write request from the client. It's not necessary the node where this metadata will be stored.
The companion app
The companion app is a Rest service, running on all Riak nodes, listening on port 5000, ready to receive a json blob, which is the list of data keys that Riak has just stored. The daemon will fetch these keys from Riak, decompress their values, deserialise them and run the data transformation code on them. The results are then stored back to Riak.
There is little point showing the code of this piece of software here, as it's trivial to write. We implemented it in Perl using a PSGI preforking web server (Starman). Using a Perl based web server allowed us to also have the data transformation code in Perl, making it easy for anyone in the IT department to write some of their own.
Optimising intra-cluster network usage
As seen saw earlier, if the commit hook simply sends the request to the local companion app on the same Riak node, additional bandwidth usage is consumed to fetch data from other Riak nodes. As the full stream of events is quite big (around 150 MB per second), this bandwidth usage is significant.
In an effort to optimise the network usage, we have changed the post-commit hook callback to group the keys by the node that is responsible for their values. The keys are then sent to the companion apps running on the associated nodes. That way, a companion app will always receive event keys for which data are on the node they are running on. Hence, fetching events value will not use any network bandwidth. We have effectively implemented 100% data locality when computing substreams.
This optimisation is implemented by using Riak's internal API that gives the list of primary nodes responsible for storing the value of a given key. More precisely, Riak's Core application API provides the preflist() function: (see the API here) that is used to map the result of the hashed key to its primary nodes.
The result is a dramatic reduction of network usage. Data processing is optimised by taking place on one of the nodes that store the given data. Only the metadata (very small footprint) and the results (a tiny fraction of the data) travel on the wire. Network usage is greatly reduced.
Back-pressure management strategy
For a fun and easy-to-read description of what back-pressure is and how to react to it, you can read this great post by Fred Hebert (@mononcqc): Queues Don't Fix Overload.
What if there are too many substreams, or one substream is buggy and performs very costly computations (especially as we allow developers to easily write their own substream), or all of a sudden the events fullstream change, one type becomes huge and a previously working substream now takes 10 times more to compute?
One way of dealing with that is to allow back-pressure: the substream creation system will inform the stream storage (Riak) that it cannot keep up, and that it should reduce the pace at which it stores events. This is however not practical here. Doing back-pressure that way will lead to the storage slowing down, and transmitting the back-pressure upward the pipeline. However, events can't be "slowed down". Applications send events at a given pace and if the pipeline can't keep up, events are simply lost. So propagating back-pressure upstream will actually lead to load-shedding of events.
The other typical alternative is applied here: doing load-shedding straight away. If a substream computation is too costly in CPU time, wallclock time, disk IO or space, the data processing is simply aborted. This protects the Riak cluster from slowing down events storage - which after all, is its main and critical job.
That leaves the substream consumers downstream with missing data. Substreams creation is not guaranteed anymore. However, we used a trick to mitigate the issue. We implemented a dedicated feature in the common consumer library code; when a substream is unavailable, the full stream is fetched instead, and the data transformation is performed on the client side.
It effectively pushes the overloading issue down to the consumer, who can react appropriately, depending on the guarantees they have to fulfill, and their properties.
Some consumers are part of a cluster of hosts that are capable of sustaining the added bandwidth and CPU usage for some time.
Some other systems are fine with delivering their results later on, so the consumers will simply be very slow and lag behind real-time.
Finally, some less critical consumers will be rendered useless because they cannot catch up with real-time.
However, this multitude of ways of dealing with the absence of substreams, concentrated at the end of the pipeline, is a very safe yet flexible approach. In practice, it is not so rare that a substream result for one epoch is missing (one blob every couple of days), and such blips have no incidence on the consumers, allowing for a very conservative behaviour of the Riak cluster regarding substreams: “when in doubt, stop processing substreams”.
Conclusion
This data processing mechanism proved to be very reliable and well-suited for our needs. The implementation required surprisingly small amount of code, leveraging features of Riak that proved to be flexible and easy to hack on.
This blog post ends the series about using Riak for event storing and processing at Booking.com. We hope you liked it ! | https://medium.com/booking-com-development/using-riak-as-events-storage-part-4-43d088f80b7c | [] | 2018-02-28 22:05:01.204000+00:00 | ['Database', 'Mapreduce', 'Big Data', 'Riak', 'Programming'] |
Have we any empathy? | As October slowly comes to an end, we can most certainly say that there is a lot more to “dignity” in mental health than what we once thought. Whether it’s through the way we physically respond to sufferers, or the way we differentiate between those in society, the most important thing that has been highlighted this month is that Allah has created us with dignity, and that gives us every right to remain dignified. Regardless of our mental and physical state. We’ve established the meaning of dignity, we’ve discovered the relationship between dignity and mental health, and last week we focused on the importance of treating those with mental health problems with dignity — but how do we do this?
It may seem self-explanatory, i.e. in order to help someone with a mental health problem, with dignity, you have to listen, give them time, be kind, and be patient etc, but that’s all a little bit cliché, isn’t it? It’s actually a little bit more complex than what we think. How often do we listen, give time, etc but in the back of our minds we’re still thinking, “Is this true? Does this even exist? Why don’t I experience it? Get real!”. These thoughts don’t just stay in the back of our minds either, they somehow creep their way to our faces and our expressions say it all. Our dipped eyebrows, tight lips and overly concerned eyes no longer have their sincerity, and it becomes obvious to the one receiving our “attention”. This doesn’t apply to everyone, of course. These thoughts might not occur because of culture, tradition and ignorance — rather because, we’ve lost a very important human character trait. We’ve lost empathy. We don’t have the ability to empathise, but we are pro’s to sympathise.
Just think for a second, when you’re going through a really rough time, and you’re sick of people looking at you differently and making you feel like an alien, would you want someone to feel sorry for you, or someone that attempts to share and understand your pain? We’ve hit the nail on the head. Respecting and honouring someone’s dignity is not achieved through “oh, you must be having a really hard time. Better days are to come”, it’s succeeded through, “I can only imagine what you’re going through, but I can see that it’s hard. I’ll never understand truly what it feels like, but it must be suffocating, right?”. Instead of burying someone with artificial ‘positivity’ and ‘optimism’, be real with them, I’m sure they would appreciate someone that is willing to take baby steps with them instead of sprinting ahead while they slug behind. There is also another classic approach, the prying, the constant questions, and the unnecessary checking up on. Does it even make sense to interrupt a person who has been longing to get something off their chest with “so, like… what does it feel like?”, or “man, are you sure you’re okay? You know, you don’t have to carry on, but are you okay?”, and the favourite, “so, when did this happen? Is that when this happened? Or did it happen because of this?” — just chill! Let the person come out with it all, then carefully think about your questions, don’t overwhelm someone who is already overwhelmed with their own questions and distorted thoughts. Remember, you’re not the one that is asking for help — hence why empathy is so important.
From my own experiences and observations, there is this ‘unofficial’ script that is said robotically to anyone who simply wants to talk, who simply wants a pair of real ears that are actually hearing what is being said rather than listening out for key words to match for a response, and the chances are, that person just wants someone to hear their silent screams. Any good doctor will tell you that even if someone receives counselling, advising, medication, social support and the best of every service out there — without dignity, respect and empathy, they are truly meaningless. As a basic human need, we want to be understood — so why is it so difficult to make that attempt with someone who has a mental health problem?
Having a mental health problem does not take away your dignity, your respect or honour. You deserve to be understood, cared for and apart of society as much as anyone else. Having a mental health problem does not take away your dignity, your respect or honour. It only adds substance to which you really are, the survivor that you are. Having a mental health problem does not take away your dignity, your respect or honour; but it makes you noble, inspirational and strong.
As always, keep your goal in focus. No matter how foggy it gets, just remember your ultimate goal. | https://medium.com/inspirited-minds/have-we-any-empathy-4245afc5cdeb | ['Inspirited Minds'] | 2015-12-06 00:26:56.999000+00:00 | ['Islam', 'Mental Health', 'Dignity'] |
Create PDF viewer in React js | So today let us see in this tutorial how to create a PDF viewer to view our internal and external .pdf files. At some point in time, we need to load some pdf files to be viewed by our visitors but we don’t know how to display all those pdf files in our React js application. So here comes a handy react module to make our life Easy. @phuocng/react-pdf-viewer is a great module to create a PDF viewer in our React project. It is very easy to implement compared to other modules available. So let us see how to implement that in our project.
Requirements
Create and open your React js project.
yarn create react-app yourprojectname
cd yourprojectname
2. Install @phuocng/react-pdf-viewer and pdfjs-dist to your project.
yarn add @phuocng/react-pdf-viewer pdfjs-dist
3. Now open your app.js file and import Viewer, Worker, and CSS file from @phuocng/react-pdf-viewer. We will first load our pdf file from an internal source (Local file) so also import that pdf file.
import '
import filePDF from './dummy.pdf' import Viewer, { Worker } from ' @phuocng/react-pdf-viewer ';import ' @phuocng/react-pdf-viewer /cjs/react-pdf-viewer.css';import filePDF from './dummy.pdf'
4. Now we will use those Viewer and Worker tags. Paste the below code in your app.js file.
Here we have loaded the pdfjs-dist worker file inside the Worker tag. This is compulsory for the viewer to work. My current version of pdfjs-dist is 2.5.207 It might change in the future so load only the latest version of the pdfjs-dist worker file. Now call our pdf file inside the Viewer tag as fileUrl prop and then start your application yarn start and refresh the browser. The file will load fine.
This was pretty simple to load our local pdf file but what if we want to load pdf file from some external source? If we try to put our external pdf file directly inside the fileUrl prop of Viewer tag we will receive a CORS error. So how do we load that file? If the external file source is coming from some backend which we can control like in node or PHP we can set up a proxy to load the file but if we don’t have access to the source of the file there is this one website which can solve our CORS error https://cors-anywhere.herokuapp.com/ All you need is to put your external pdf link after the corsanywhere link https://cors-anywhere.herokuapp.com/your-pdf-file-link.com/file.pdf In our case it’s https://cors-anywhere.herokuapp.com/http://www.africau.edu/images/default/sample.pdf
Now let us add some CSS to make the pdf viewer look more professional.
That’s it for today. Below are the live codes and GitHub repository for your reference. | https://medium.com/how-to-react/create-pdf-viewer-in-react-js-d81c1563da3 | ['Manish Mandal'] | 2020-10-11 07:55:20.779000+00:00 | ['Reactjs', 'Pdf', 'React'] |
Introducing Bitfolio 4 | It seems yesterday that we launched Bitfolio 3 for iOS, with the ability to sync exchanges.
We kept adding exchanges over the year and we started working on Bitfolio 4 when we realised we needed something more.
After months of hard work, we are very please to introduce a new, redesigned, reimagined Bitfolio! 🎉
What’s new
The big change you will notice is no more tabs.
Everything is now part of a customizable dashboard and we think you are gonna love it.
At the moment the dashboard includes tickers, portfolios, exchanges, coingecko and news, but we have plans to add more sections over the coming months.
Every section can be enabled or disabled, and you can rearrange them as you please, so that your most important information are where you want them to be.
But it’s not all, we also redesigned the portfolio screen of Bitfolio.
Now you can view your historical profit and loss on a beautiful line chart or bar chart.
You can also sort the assets so that it’s easier to keep track of them.
The iPad version is also updated and taking advantage of the screen estate.
Lastly, we decided to make everything available in the free version! There will still be some ads but we removed all the annoying full screen videos, ads will now be less intrusive (you can still remove them with an in app purchase).
What’s coming
As stated above, we are planning on adding new sections to the dashboard but that’s not all.
We will keep adding exchanges and improving the compatibility, soon you will be able to see your active orders, your past trades and (depending on the exchange) also create new orders.
We are also planning to improve the news section so that you can customize it with your favourite feed and add more insights to your portfolio transactions.
The Android version is also being updated, while we are working on it, the current Play Store version is free, with all features unlocked.
As usual, if you have any feedback or suggestions, please let us know, we are always looking for ways to improve the app.
And much more, check out the Bitfolio roadmap on trello, you can vote on your favorite feature and keep an eye on the development. 👍
Useful Links
Thank you ❤️
We’ve received a lot of feedback and we love it, please keep it going, keep sending feature requests and if you see a bug or an issue, send a complaint! (but please don’t be rude 😉).
Please consider sharing Bitfolio with your friends and coworkers, it would help us tremendously.
Again, thank you ✌️
The Bitfolio Team | https://medium.com/bitfolioapp/introducing-bitfolio-4-7f398d062c0c | ['Francesco Pretelli'] | 2020-05-03 22:19:12.008000+00:00 | ['Apple', 'iOS', 'Cryptocurrency', 'Apps', 'Bitcoin'] |
‘Becoming a humanitarian was my way of saying no to what was happening in Syria’ | “I was a kid and used to tour almost all the 16 km-long bazars of the old city and never feared getting lost, people were so kind and caring,” says Bashir. “My father never closed his shop when he wanted to pray, he used to put a chair in the entrance and went to the mosque. Aleppo was the safest place.”
When the conflict broke out, Bashir and some of his classmates rushed to help. He adds: “That was the first time in my life I saw people from Aleppo running away and they did not know where to go. We couldn’t accept that and decided to help people however we could. We helped them to carry their belongings to safety at the university campus, but we knew we wanted to do something more.”
Bashir’s friends came across a group of Syrian Arab Red Crescent (SARC) volunteers who were supporting families displaced by the conflict. A supervisor asked Bashir to volunteer — then in no time Bashir joined the tactical field intervention unit. His team was responsible for negotiating access to besieged areas in order to deliver the lifesaving assistance, evacuate people who required medical care and open safe corridors for civilians.
‘I used to communicate with UN security units, I liked the way they handled such complicated, sensitive and critical missions.’ Photo: WFP/Hussam Al Saleh
“I gained a huge amount of experience by joining this unit, I learnt how to negotiate humanitarian access, how to keep myself safe and handle radio communications,” he says. “Every time we were on a mission, we learned new techniques to help people who were in desperate situations. During this time some members of my unit were killed, and others were kidnapped for short and long periods — myself included.”
“Becoming a humanitarian was my way of saying no to what was happening in Syria”
He says he’ll never forget the day he called an ambulance to rescue two injured civilians, and it was targeted while his colleagues were rushing to the rescue.
“When anyone targets an ambulance, he is not only killing the crew and the rescued person, he is killing all of humanity,” he says. “I have seen my city that once was called the ‘Jewel of Syria’ being destroyed … poor people became displaced. Becoming a humanitarian was my way of saying no to what was happening in Syria.”
Bashir volunteered with SARC for six years and says it helped him become a more confident, open-minded and analytical person.
“I used to communicate with UN security units, and I liked the way they handled such complicated, sensitive and critical missions. I knew that I would learn a lot if I joined the UN and I especially wanted to join WFP because it was the largest organization in Syria.”
Bashir joined WFP in 2018 and participated in many ‘cross-line’ missions — large convoy of trucks loaded with humanitarian aid moving from government- to opposition-held areas. It was his job to keep his WFP colleagues safe during these stressful journeys.
“The pressure and risks are enormous,” he says. He adds: “One day the Syrian crisis will come to an end, and the Syrian people will rebuild their country. Syrians are very resilient people and I hope this day will come soon.”
Read more about WFP’s work in Syria. | https://medium.com/world-food-programme-insight/becoming-a-humanitarian-was-my-way-of-saying-no-to-what-was-happening-in-syria-10d523a238ce | ['Hussam Al Saleh'] | 2020-07-13 12:00:34.155000+00:00 | ['Humanitarian', 'Syrian Crisis', 'War', 'Conflict', 'Humanitarian Aid Worker'] |
Stuck in Slumpland? Tips for my fellow NaNos. | NaNoWriMo participants around the world, congratulations on making it to the precipice of the halfway mark. By now, your protagonist(s) should have survived many painful obstacles you’ve thrown their way, your antagonist(s) has/have emerged, even if your protagonist(s) have no idea who their true enemies are yet. Notorious for tripping up budding and seasoned writers alike, the middle of your manuscript is prime real estate for Slumpland writing (if you’re managing any at all).
Slumpland writing covers everything from poor dialogue to excessive flashbacks to the dreaded cliche dream sequence. Cliches often plague the middle, as do worn-out tropes and the sensation that your protagonist(s) is/are treading water. You know where they’re supposed to be going but they seem not to get anywhere over page after page. Other common mistakes here: detailing every action a character does or slowing the action down to tune in on every random thought they have and each time they change emotions.
The good news is, if you keep putting your butt in the chair and your hands on the keyboard, your stay in Slumpland won’t last forever! If you need some inspiration for how to escape, I have some ideas:
Watch YouTube videos on how to improve your craft. Warning: You should give yourself a time limit for this activity and stick with it, or else a rabbit hole will open and your fellow Nano participants may never see you again. A few personal favorite YouTubers to learn from include: Abbie Emmons, Merphy Napier, Alexa Donne, and Brandon Sanderson.
Even if you’re a hardcore Pantser (been there) or a Nano Rebel, consider writing an outline or adding to the one that exists. When I’m dancing around the Slumpland border, adding to my outline helps me keep my protagonists on the paths to their destinies.
Avoid falling into the yawning abyss which dominates the center of Slumpland: The Pit of Premature Editing. Its appeal is undeniable, but you can resist its charms. The best way to do so, in my opinion, is by finding a buddy or many, and by meandering over to the Nano forums at least once per day to mingle with your fellow Nano writers. Buddies and groups are great for asking the questions, offering the prompts and feedback, and giving you the support you need to stay out of Slumpland, or to help you escape if you’re already there. You can find me on the Nano site under the username GeekMamacita, if you want to connect. I’m always happy to help. We are in this together!
Happy writing, and if you like my emerging blog, please give me a clap or a follow. I’ll post again soon. | https://medium.com/nanowrimo/the-nano-slump-458bc3029101 | ['Goivanna Irelund'] | 2020-11-16 21:41:14.144000+00:00 | ['Writing Tips', 'Writing Life', 'NaNoWriMo', 'Authors', 'Writing'] |
How to Embed React Apps in WordPress Sites | Create a New React Application
React is a JavaScript library for building user interfaces. To create a new React project, we will use create-react-app. You will need to have Node.js >= 8.10 installed on your local development machine. Once you have it installed, open your command prompt/terminal, go to the directory where you would like to create the project, and run the following line of code:
cd path/to/directory
Note that you should put your own path/to/directory here (on my Mac, I will often just drag a folder from my Finder into the command line). Then we will run the following line to create the project:
npx create-react-app crossword-app
crossword-app is the name of my project, so you can go ahead and change that to whatever you like. Next, we will simply open the project directory in our terminal again using cd and run our newly created app in development mode with the following line:
npm start
If you now open http://localhost:3000, you should see a landing page with the React logo and some text with a link to learn React. If you’ve gotten this far, well done! If not, you may have some debugging to do/conflicting dependencies, so check out the create-react-app documentation. | https://medium.com/better-programming/how-to-embed-react-apps-in-wordpress-sites-96a21b995290 | ['Natasha Butt'] | 2020-06-26 14:39:01.401000+00:00 | ['WordPress', 'React', 'Nodejs', 'Programming', 'JavaScript'] |
San Francisco’s 1970 Study for Urban Design | I love a bit of design history. Whilst in the Mission yesterday I picked up this gem from 1970 at Farnsworth Mercantile on Valencia. It’s part of an urban planning study that was carried out by the city in the late 60s and early 70s, eventually being adopted in 1972. There were 8 reports and this is Preliminary Report №5 Urban Design Principles for San Francisco (July 1970) by Thomas R. Aidala. When I saw it I couldn’t resist it. Square format, that beautiful big 5 on the cover and most of all it’s about design principles.
It looks like it belonged to one of the consulting or committee architects; Henrik Bull of BSA Architects as there is a newsletter from 1999 tucked inside address to him containing an article on the study.
Here’s a few spreads from the report. I love the spiral binding (although it’s a little tight) and the heavier paper section divides. | https://medium.com/paper-posts/san-franciscos-1970-study-for-urban-design-2ca8b13999c2 | [] | 2018-08-28 17:35:17.750000+00:00 | ['1970s', 'Architecture', 'Design', 'Urban Planning', 'San Francisco'] |
People Need People | I woke up feeling sad today.
I don’t know if it is the culmination of bad news I have heard this week or the grey skies left in the wake of a recent tropical storm.
But I am sad. Mopey sad. Sit in a room alone and listen to Billie Eilish kind of sad.
One of the cruelties of mental health issues is they tend to make you isolate yourself. Since I am an introvert, even a hard day makes me want to pull away from other people and lick my wounds alone. But a dive into anxiety and depression really makes me isolate from others.
Is it the shame? The exhaustion from fighting the fear and dread? The plain fact that I feel bad?
Photo by Noah Silliman on Unsplash
Whatever the reason, my mental health struggles have typically lead me to retreat from people when I need them most.
Because yes, People do need People.
Especially when we are struggling.
This lesson in moving towards people when I am anxious and depressed has been one of the hardest in my journey towards healing. It is possibly the most challenging thing I have had to learn but also one of the changes that has helped me the most. | https://medium.com/whenanxietystrikes/people-need-people-f83665b75730 | ['Dena D Hobbs'] | 2020-09-20 14:09:17.890000+00:00 | ['Depression', 'Friendship', 'Mental Health', 'Anxiety Disorder', 'Therapy'] |
Apache Hadoop: A Review on Security Issues and Solutions for HDFS | Photo by Liam Tucker on Unsplash
I. Introduction
Big data is trending. Smart devices, Internet and technologies allowed the unlimited generation and transmission of data, and from the data, new information is gained. The big data generated are in various form, it can be structured, semi-structured or unstructured data. The traditional data processing techniques like Relational Database Management System (RDBMS) are no longer capable to store or process the big data, as it has wide variety, extremely large volume, and generated at a high speed. Here’s where Hadoop come into the loop. Hadoop (Highly Archived Distributed Object Oriented Programming) capable to process all type of data in a very fast speed which close to real time and with minimum cost [1].
Fig. 1. Hadoop Architecture. | Source: B. Saraladevi, N. Pazhaniraja, P. V. Paul, M. S. Basha and P. Dhavachelvan, “Big Data and Hadoop-A Study in Security Perspective,” Procedia Computer Science, no. 50, pp. 596–601, 2015.
Hadoop is an open source Apache framework, written in JAVA programming language. Hadoop is designed to support distributed parallel processing of large scale of datasets across clusters of computers using simple programming model. Two main components of Hadoop are Hadoop Distributed File System (HDFS) for big data storing and MapReduce for big data processing as shown in Figure 1. Both mentioned components implemented a master and slave architecture, every cluster contain of one master node and various slave nodes. In HDFS, the master node is Name Node, and the slave node is Data Node. Name Node is responsible to store the metadata, for example, the filename, file attributes and location of the blocks where the data is stored. Data nodes are responsible to store the file itself, and the file will be duplicated in the blocks located in other racks. In MapReduce, the master node is the Job Tracker, and the slave node is the Task Tracker. Job Tracker are responsible to distribute the job, while Task Tracker is responsible to perform the job [1] [2].
It is a challenge to manage big data in distributed programming framework. Various issues can arise when handling big data, for instance, management issues, processing issues, storage issues and security issues. The security issues begin when the mammoth volume of data stored in a database which are not in regular format or not encrypted. Moreover, some of the tools and technologies used for handling large dataset are not developed with a proper security and policy certificates. System hackers and data hackers can steal the data and copy it to any type of storage devices like hark disk by attacking the data storage [2]. The type of attacks can be sent by the hackers are Denial of Service (DoS), Snoofing attack, Brute Force attack and CROSS-SITE scripting (XSS) [2] [3].
The flaw or weakness in the design, implementation, internal control or system security procedures caused the distributed systems to be vulnerable. Security breaches can be caused by the intentionally exploited or accidently triggered vulnerabilities. The vulnerability caused Hadoop environment components like Storm and Flink are likely to be attacked. In [4], vulnerabilities can be separated into three categories, which are infrastructure security, data management and data privacy. Then, the three categories can be further divided into three dimensions, which are architecture dimension, data value chain dimension and life cycle of data dimension. Infrastructure security are in architecture dimension and are referring to hardware and software vulnerabilities. Then, data privacy is in data life cycle dimension, and involving the data in transit and data at rest.
In [3], vulnerabilities of Hadoop are grouped into three categories, which are software / technology, web interface / configuration and security /network policy. According to the author, Hadoop is written in Java, which is a programming language that has been exploited by the cybercriminals and compromise for various security breaches. This is known as technology or software vulnerabilities. Then, Hadoop is configuration vulnerable as it has various default settings. For example, the default ports and IP addresses which make it vulnerable and have been exploited. Then, the Hadoop web interfaces like Hue (an open source SQL Cloud Editor, licensed under the Apache License 2.0), are weak against XSS scripting attack. Moreover, the Hadoop framework contain of multiple databases, which deployed different policy. These policies that are not configured properly lead to vulnerability.
Consequences for lack of security can be critical to an organization. HDFS is the base layer of Hadoop Architecture as shown in Figure 1, HDFS main functions are data storing and data processing, and hence, its more sensitive to security issues. Without an appropriate security measure, unauthorized data access, data theft, and unwanted disclosure of information could happen. Losing profit because of the proprietary information is stolen, or losing some data stored can bring trouble to the organization, but the consequences is small, and recoverable. However, if the data theft disclosed the private information of the customers, the image of the organization can be harmed, and caused the customer losing trust to the organization. Private information leakage in Financial Agency can cause a series of unfortunate happened to the customer themselves. For example, financial scam, using the customer information to acquire the fund, or impersonating them to borrow from their acquaintances. The consequences could be worse when the affected organization holds confidential information, for example, government department, which might create a chaos even if the hacker just manipulating the data. As Hadoop Technology security level is not satisfying, most of the government department and organizations do not use Hadoop environment to store valuable data [2].
HDFS security is crucial to organization that store their valuable data in Hadoop environment. HDFS is vulnerable to various form of attack., such as the DoS attack, which accomplished by causes a crash of data or flooding the target with traffic. Name Node in HDFS is vulnerable to DoS attacks [3]. The Name Node in HDFS will be coordinating to Job Tracker in MapReduce to execute data processing tasks. The DoS attack on Name Node can stop the read-write operation of HDFS and then affect task of data processing. In [2], three approaches are proposed to secure the data in HDFS, which are Kerberos Mechanism, Bull Eye Algorithm Approach and Name Node Approach. In this paper, the three security solution mentioned in [2], and the other security solution or tools that are applicable to secure HDFS will be researched and discussed on their pros and cons. With a comparison among the solution, the HDFS user could decide on which solution to used based on their context.
II. Available Security Tools
There are various tools and solutions available to secure the HDFS environment, and each of them offers different features and different levels of effectiveness depending on the context.
The security tools and solutions can be divided into four categories: encryption, authentication, authorization, and audit. Authentication refers to the verification of a system's or user's identity for accessing the system; in other words, it is the procedure of confirming whether users are who they claim to be. Two common authentication technologies are the Lightweight Directory Access Protocol (LDAP), for directory, identity, and other services, and Kerberos [5] [6] [7].
Authorization is the process of determining a user's access rights, specifying what they can do with the system [5] [6]. Because Hadoop mixes various systems in its environment, it requires numerous authorization controls with different granularities. In Hadoop, setting up and maintaining authorization controls is simplified by dividing users into groups specified in an existing LDAP or Active Directory (AD) service. Authorization can also be set up by applying role-based access control to similar connection methods. A popular tool for authorization control is Apache Sentry [8].
Data encryption refers to the process of converting data from a readable format into an encoded format that can only be read or written after it is decrypted [9]. Encryption ensures the privacy and confidentiality of data and secures the sensitive data stored in Hadoop [5]. There are two types of data encryption: encrypting data in transit and encrypting data at rest. For HDFS, encryption of data in transit can be enabled through configuration, but Kerberos must be enabled before this configuration [10]. Transparent encryption for HDFS, introduced by Cloudera, applies transparent, end-to-end encryption of data read from and written to HDFS blocks across the cluster. HDFS transparent encryption follows a key-based architecture in which an encryption key is used to encrypt and decrypt each file [11].
Auditing refers to the periodic verification of the entire Hadoop ecosystem and the deployment of log monitoring systems. HDFS and MapReduce provide basic audit support. Because security breaches can be caused by vulnerabilities that are intentionally exploited or accidentally triggered, auditing is important for meeting security compliance requirements.
The following sections discuss the security tools and solutions available for HDFS.
A. Kerberos Protocol
Fig. 2. Kerberos Protocol | Source: P. Raj, & R. Sudipta, B. Debnath, B. Samir & K. Tai-hoon. “Large Scale Encryption in Hadoop Environment: Challenges and Solutions,” IEEE Access, pp.1–1, 2017.
The most popular authentication tool is Kerberos, a protocol developed at MIT that also serves as the primary authentication mechanism for Hadoop [3] [5]. Kerberos provides secure communication over a non-secure network by using secret-key cryptography [3]. The protocol is shown in Figure 2. The client first requests a Ticket Granting Ticket (TGT) from the Authentication Server (AS) within the Key Distribution Centre (KDC). After receiving the TGT, the client requests a Service Ticket (ST) from the Ticket Granting Server (TGS), and can then use the ST to authenticate to a Name Node. The TGT and ST are renewed for long-running jobs. A major benefit of Kerberos is that a stolen ticket cannot be renewed [2]. Kerberos thus provides strong authentication for Hadoop: instead of relying on a password alone, a cryptographic mechanism is used when requesting services [7].
B. Bull Eye Algorithm
The Bull Eye Algorithm is another approach proposed in [2], which claims to provide 360° security for sensitive data from node to node in HDFS. According to the authors, the approach is used by Dataguise's DgSecure and Amazon Elastic MapReduce. The algorithm concentrates on sensitive data only: it scans the data before it is stored into blocks by the Data Node, and then scans the blocks to determine whether the sensitive data are stored properly and free of risk. Only authorized persons are allowed to read and write the data, and during read-write operations the algorithm ensures that the relations between racks are safe. In a nutshell, the algorithm enhances the security level of the Data Node in HDFS.
C. Name Node Approach
The Name Node approach is the third solution mentioned in [2], which proposes using two Name Nodes in HDFS. A single Name Node makes HDFS more vulnerable, because the whole system goes down when that Name Node goes down. This approach provides a backup plan: the system keeps running when one of the Name Nodes is down. The two redundant Name Nodes are provided by the Name Node Security Enhance (NNSE) component, which prevents new issues from arising when two Name Nodes coexist. When both Name Nodes are alive, one acts as the Master Node and the other as the Slave Node; when the current Master Node crashes, the Slave Node covers data unavailability and time lag in a secure manner, with permission from the NNSE.
D. Apache Knox
Apache Knox Gateway ("Knox") is a single access point for one or more Hadoop clusters. Knox provides perimeter security, which allows an organization to extend Hadoop access to more users while complying with enterprise security policies. Kerberos strengthens the security of a Hadoop cluster, but its client-side configuration is complex; Knox encapsulates Kerberos, eliminating client-side configuration and simplifying the model. Furthermore, Knox can authenticate user credentials against AD/LDAP through its REST API-based perimeter security system. Knox supports multi-cluster security management and integrates with existing identity management (IdM) systems, providing SSO for Hadoop UIs such as Ranger [3] [5] [12].
E. Apache Ranger
Apache Ranger is an authorization system that allows authenticated users to access Hadoop cluster resources such as Hive tables and HDFS files. Ranger provides comprehensive security across Hadoop components. Its goals are to centralize security administration, provide standardized and fine-grained authorization, support enhanced authorization methods, and centralize auditing of security-related administrative actions and user access across all Hadoop components. For data protection, Ranger uses wire encryption [3] [5] [13].
F. Apache Sentry
Apache Sentry is an open-source project originated by Cloudera that serves as a Hadoop authorization module. It supports role-based authorization, multi-tenant administration, and fine-grained authorization, and it provides unified administration of metadata and shared data for access frameworks such as HDFS and Hive. Apache Sentry is a pluggable authorization engine for HDFS, Hive, and other Hadoop components; in other words, Sentry is used to define what users and applications can do with data, with different users holding different authorizations [3] [5] [14].
III. Comparison
Table 1 shows a comparison of the security tools/solutions mentioned in the previous section in terms of features and functionalities [2] [3] [12] [13] [14].
TABLE I. Comparison of the functions of security solutions
IV. Discussion
Six security tools/solutions are discussed in this paper: the Kerberos mechanism and Apache Knox for authentication, Apache Sentry and Apache Ranger for authorization, and the Bull Eye Algorithm and the Name Node approach for audit.
For authentication, Apache Knox is a better option than the Kerberos mechanism. Knox not only eliminates the client-side configuration that is so complex in Kerberos, but also simplifies the model. Moreover, beyond authentication, Knox is also capable of authorization control and audit, while Kerberos supports authentication only.
Apache Sentry and Apache Ranger are both capable of authorization and authentication, and Apache Ranger also supports audit. However, this information alone is not enough to judge which tool is better, so the service provider behind each is also considered. Apache Sentry is supported by Cloudera, a leading company in big data solutions [14], while Apache Ranger has no comparable formal support. Considering support availability, Apache Sentry is the better solution.
For audit, the two solutions considered in this paper are the Bull Eye Algorithm and the Name Node approach. The Bull Eye Algorithm can audit the entire HDFS to help prevent security breaches: it scans the data before it is stored into blocks and then scans the blocks to check whether the data is stored securely. The Bull Eye Algorithm enhances security for the Data Node in HDFS, while the Name Node approach secures HDFS continuity by ensuring uninterrupted Name Node service, with a second Name Node replacing the current one when it goes down. If there is only one Name Node and it goes down due to an attack, the whole HDFS is compromised; the Name Node approach is therefore the system's second chance. It would be good to have both the Bull Eye Algorithm and the Name Node approach in HDFS, but the Bull Eye Algorithm is the better of the two for auditing the whole system, because it focuses on the entire data read-write operation and on user authorization for those operations, while the Name Node approach focuses only on the Name Node itself.
Data encryption solutions are not compared in detail in this paper, since encryption can be enabled through configuration in HDFS. However, it is worth noting that the vulnerability and effectiveness of data encryption depend on the security of the keys. In most cases, keys are stored on local disk drives, where they have a high chance of being stolen. This problem can be avoided by using a key management service to distribute keys and certificates, which is even more effective when combined with HDFS encryption zones, where different keys are used for each user, application, and tenant. The combination may require extra setup steps, but it is essential [15].
Looking to the future of Hadoop security, a security tool or solution does not necessarily have to cover every security aspect of Hadoop. It can focus on one aspect only, such as authentication, and improve its security level over time. As mentioned in a previous section, part of Hadoop's vulnerability stems from improper configuration of the different policies of its different databases, so future development could focus on providing proper policy configuration across those databases.
In a business context, security tools and solutions should also improve their models and configuration by simplifying setup and maintenance so that the tools can reach more users. When the process is too complex, an organization deploying the tools may need to hire an expert for setup and maintenance, which can discourage adoption. In addition, the availability of up-to-date documentation and of hands-on technical support is critical when an organization selects a security tool. Developers of Hadoop security solutions and tools should always consider how to attract more users to their products, because a product is meaningless when nobody uses it.
V. Conclusion
Traditional databases are insufficient for handling big data, which is why Hadoop was introduced. However, the system's vulnerabilities grow as the size of the data grows. This paper described the types of Hadoop vulnerabilities: software/technology, web interface/configuration, and security/network policy vulnerabilities. The consequences of a lack of security vary with the data held. Insufficient security in the Hadoop ecosystem can lead not only to loss of data but also to unwanted disclosure of users' private data. HDFS, as the base of the Hadoop architecture, is the most sensitive to security issues, and an HDFS malfunction can cause other Hadoop components, such as MapReduce, to malfunction as well. It is therefore essential to improve the security of HDFS.
Four security aspects should be considered when setting up security solutions: authentication, authorization, audit, and data encryption. For authentication, the tools discussed in this paper are the Kerberos mechanism and Apache Knox; for authorization, Apache Ranger and Apache Sentry; and for audit, the Bull Eye Algorithm and the Name Node approach. Data encryption tools are not discussed in detail, as encryption can be enabled through configuration in HDFS and its effectiveness is directly related to the security of the keys. The tools are compared based on the four security aspects, ease of use, and support availability. Based on this comparison, Apache Knox, Apache Sentry, and the Bull Eye Algorithm are the better security solutions for authentication, authorization, and audit, respectively. Future development of Hadoop security could focus on more specialized solutions with stronger security, and on more technical support to extend those solutions to more users.
References
[1] P. Vijay and B. Keshwani, “Emergence of Big Data with Hadoop : A Review,” IOSR Journal of Engineering (IOSRJEN), vol. 06, no. 03, pp. 50–54, 2016.
[2] B. Saraladevi, N. Pazhaniraja, P. V. Paul, M. S. Basha and P. Dhavachelvan, “Big Data and Hadoop-A Study in Security Perspective,” Procedia Computer Science, no. 50, pp. 596–601, 2015.
[3] G. S. Bhathal and A. Singh, “Big Data: Hadoop framework vulnerabilities, security issues and attacks,” Array 1, p. 100002, 2019.
[4] Y. H, C. X, Y. M, X. L, G. J and C. C, “A survey of security and privacy in big data,” in 16th international symposium on communications and information technologies (ISCIT), Qingdao, 2016.
[5] P. P. Sharma and C. P. Navdeti, "Securing Big Data Hadoop: A Review of Security Issues, Threats and Solution," (IJCSIT) International Journal of Computer Science and Information Technologies, vol. 5, no. 2, pp. 2126–2131, 2014.
[6] J. Natkins, “Authorization and Authentication In Hadoop,” Cloudera Inc, 20 March 2012. [Online]. Available: https://blog.cloudera.com/authorization-and-authentication-in-hadoop/. [Accessed 1 July 2020].
[7] “Authentication,” Cloudera Inc., 2020. [Online]. Available: https://docs.cloudera.com/documentation/enterprise/latest/topics/sg_authentication.html#xd_583c10bfdbd326ba--5a52cca-1476e7473cd--7f90. [Accessed 1 July 2020].
[8] “Authorization,” Cloudera Inc, 2020. [Online]. Available: https://docs.cloudera.com/documentation/enterprise/latest/topics/sg_authorization.html. [Accessed 1 July 2020].
[9] “What is Data Encryption?,” Kaspersky Lab, 2020. [Online]. Available: https://www.kaspersky.com/resource-center/definitions/encryption. [Accessed 1 July 2020].
[10] “Configuring Encrypted Transport for HDFS,” Cloudera Inc, 2020. [Online]. Available: https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_sg_hdfs_encrypt_transport.html. [Accessed 1 July 2020].
[11] “HDFS Transparent Encryption,” Cloudera Inc, 2020. [Online]. Available: https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_sg_hdfs_encryption.html. [Accessed 2 July 2020].
[12] “Apache Knox Gateway,” Cloudera Inc, 2020. [Online]. Available: https://www.cloudera.com/products/open-source/apache-hadoop/apache-knox.html. [Accessed 1 July 2020].
[13] “Apache Ranger,” The Apache Software Foundation, 7 August 2019. [Online]. Available: https://ranger.apache.org/#:~:text=Apache%20Ranger%E2%84%A2%20is%20a,across%20the%20Apache%20Hadoop%20ecosystem.. [Accessed 1 July 2020].
[14] “Apache Sentry,” Cloudera Inc, 2020. [Online]. Available: https://www.cloudera.com/products/open-source/apache-hadoop/apache-sentry.html. [Accessed 1 July 2020].
[15] A. Lane, “Securing Hadoop: Security Recommendations for Hadoop Environments,” Securosis, L.L.C., Arizona, 2016.
[16] G. Kapil, A. Agrawal, A. Attaallah, A. Algarni, R. Kumar and R. A. Khan, “Attribute based honey encryption algorithm for securing big data: Hadoop distributed file system perspective,” PeerJ Computer Science, vol. 6, p. e259, 17 February 2020. | https://medium.com/swlh/apache-hadoop-a-review-on-security-issues-and-solutions-for-hdfs-5ba06861b7cd | ['Kahem Chu'] | 2020-12-06 12:40:14.428000+00:00 | ['Security', 'Hdfs', 'Big Data', 'Hadoop', 'Cybersecurity'] |
Silencing the Media: | Silencing the Media:
Attacks on Journalists and Reporters Grow More Open, With Women as Particular Targets
by Barbara Crossette. Read more on PassBlue.
Clarice Gargard, a Dutch columnist for a large newspaper in the Netherlands, has described receiving online threats and attacks in comments related to her work. The remarks, she says, are mostly related to her “giving a different perspective on society.”
When Reporters Without Borders recently tallied the murders of journalists across the globe in 2019, the organization found that the confirmed death toll, 49, was the lowest since 2003. That was the good news.
The rest of the findings from this and other media groups are less reassuring.
Journalists are being targeted virtually everywhere, not only by government officials or their radical supporters, who may be armed and violent, but also by better-equipped old enemies like organized crime and corrupt politicians.
New “legal” excuses have been found by authorities to surveil, detain, harass, threaten or physically assault reporters, editors and publishers in media organizations, as well as independent commentators and op-ed writers. Charges of “terrorism” are leveled widely for political reasons when opposition protests happen and are covered by reporters — now a highly dangerous media assignment. Reporters and their photographers can document, download and almost instantly circulate evidence of abuses. Social media isn’t all terrible.
The New York-based Committee to Protect Journalists has just reported that most notably in 2019, "anti-terror legislation was used against journalists in Turkey, India, Russia, Nigeria, and Nicaragua." Such legislation has also been used in the United States when violence occurs and local police or investigative agencies seek access to social media or private computer files.
Attacks on the media are increasing amid proliferating authoritarian regimes and stumbling democracies, whose officials publicly disdain professional journalists’ work and willingly ignore or override legal protections for the media.
Catch-phrases such as “fake news” and “alternative facts,” coined by the Trump administration, now echo in many other countries across the globe, emboldening critics of a free media. On Jan 20, Unesco, which maintains a database on threats to the media, issued a statement calling attention to this phenomenon. It found efforts multiplying to silence critical voices and restrict public access to information.
“Aside from the risk of murder, journalists increasingly experience verbal and physical attacks in connection with their work,” the statement said. “Over recent years there has been a marked rise in imprisonment, kidnapping and physical violence amid widespread rhetoric hostile to the media and journalists.
“Women in the media are particular targets,” according to Unesco. “They are often targets of online harassment, and face threats of gender-based violence.”
First-person reports from women in the media about physical or psychological abuse are becoming more numerous around the world. In July 2019, the independent Dutch Association of Journalists published findings of a survey of more than 350 female journalists. Over half said they had been subjected to intimidation or violence in their work.
The Committee to Protect Journalists republished several of the women’s accounts, including this one, from Clarice Gargard. In her response to the survey, she said:
“Since I often express my opinion on political and social issues, I regularly receive online attacks, threats. Mostly insulting, derogatory, racist and sexist comments on Twitter or Facebook connected to my gender and skin color, but even death threats because of my work. These attacks are sometimes connected to the issues I am covering, but mostly I got them simply because I am a visible black woman in Dutch media and also one of the first such columnists to work for one of the biggest newspapers in the country, NRC Handelsblad, giving a different perspective on society. . . .
“A lot of people seem to have a problem with women expressing their opinion in public in general, and things get worse when you add race or other identities to it. It worsens when radical right-wing politicians, media or pundits target you, as has happened to me.”
For all journalists, especially investigative reporters, working close to home continues to pose hazards. For the first time, no journalist was killed while reporting abroad, the Reporters Without Borders report said. All victims in 2019 were killed in their own countries.
“The proportion of deaths in countries not at war (59%) was greater than deaths in countries at war,” the report found. “Last year, most of the journalists killed (55%) were the victims of a war or a low-intensity conflict. These figures explain another one: 63% of the journalists killed [in 2019] were murdered or deliberately targeted. This is 2% more than in 2018.”
While organizations may differ in methods of determining numbers of deaths, detentions, trials and convictions that take place every year, some common themes emerge.
Generally, the Latin America and the Caribbean region was considered the world’s most dangerous region for journalists in 2019, with — by United Nations count — 22 journalists killed. The next most deadly regions were Asia-Pacific, with 15 deaths, and the Arab states with 10.
Globally, Mexico and Syria had the most recorded killings — 10 each — according to Reporters Without Borders research. China led the world in detentions.
To complete the picture of a journalist’s life of uncertain safety and increasing risk as a new decade begins, the organization recorded 57 media professionals still held hostage at the end of 2019, with another 389 in detention, an increase of 12 percent. | https://passblue-un.medium.com/silencing-the-media-22e84dc15e31 | [] | 2020-02-24 17:51:39.473000+00:00 | ['Womens Rights', 'Journalism', 'Media', 'Women'] |
How to Perform Fraud Detection with Personalized Page Rank | Fraud detection is a major field of interest for data science. As fraud is a rare event, the main challenge is to find a way to bring to light abnormal behavior. That is why graph analysis is a useful approach to perform fraud detection. Many algorithms exist to extract information from graphs. In this article, we will study one of them: the Personalized Page Rank algorithm. To manipulate our graphs and compute this algorithm we will use the python package Networkx.
Page Rank Algorithm
Page Rank is a well-known algorithm developed by Larry Page and Sergey Brin in 1996. They were studying at Stanford University, and it was part of a research project to develop a new kind of search engine. They then went on to found Google Inc.
This algorithm assigns a numerical weighting to every node of a connected network. This measure represents the relative importance of a node within the graph (its rank).
To compute Page Rank, a random walk is performed. This random walk is defined as follows:
The walker starts at a random node in the graph.
At each iteration, the walker follows an outgoing edge to one of the next nodes with a probability α or jumps to another random node with a probability 1-α.
The PageRank theory holds that the imaginary walker who is randomly walking on links will eventually stop. The probability, at any step, that the person will continue is called the damping factor α.
As an example, for Google, the network is composed of websites that point to each other through links. The page rank measure of each web page is then the probability that a person randomly surfing on the internet would finally arrive on this specific page.
In mathematical terms, the Page Rank of a node is the stationary measure of the Markov chain described by the random walk.
Page Rank measure of a node in the graph
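For reference, the standard formulation of this measure, with damping factor α and N nodes in the graph, is:
PR(v_i) = (1 - α) / N + α · Σ_{v_j ∈ In(v_i)} PR(v_j) / |Out(v_j)|
where In(v_i) is the set of nodes pointing to v_i and |Out(v_j)| is the number of outgoing edges of v_j.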
On the animation below you can visualize a Random Walk performed on a connected graph with a damping factor set to 0.85.
Random walk on a graph — (α = 0.85)
In the above example, one would predict that node ‘c’ is the one with the highest rank, since it is the most central node. On the contrary, node ‘h’ is more likely to have a low rank.
Page Rank measure for each node of the graph
How to compute Page Rank with Python and Networkx
The Python package NetworkX makes it possible to perform graph analysis. Many algorithms are implemented in this package (community detection, clustering…), and Page Rank is one of them.
With the Python script below, thanks to NetworkX, we will first generate a random graph and then apply the pagerank function.
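The script embedded in the original post is not reproduced here, so the snippet below is a minimal sketch of what it could look like (the graph size, edge probability and seed are arbitrary choices):
import networkx as nx

# Generate a random directed graph (arbitrary parameters for the example)
G = nx.gnp_random_graph(10, 0.3, seed=42, directed=True)

# Compute the Page Rank of every node with a damping factor of 0.85
pagerank_scores = nx.pagerank(G, alpha=0.85)

# Print the nodes sorted by decreasing Page Rank
for node, score in sorted(pagerank_scores.items(), key=lambda x: -x[1]):
    print(node, round(score, 3))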
Personalized Page Rank Algorithm
We have seen that the Page Rank is a representation of the importance of nodes within a network. Personalized Page Rank makes it possible to bring out nodes in a graph that are central from the perspective of a set of specific nodes.
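As an illustration (a sketch with an arbitrary seed node, not the code from the original post), NetworkX exposes this behaviour through the personalization argument of the same pagerank function:
import networkx as nx

G = nx.gnp_random_graph(10, 0.3, seed=42, directed=True)

# Restart the walk from node 0 only, instead of jumping to a uniformly random node
personalization = {n: (1.0 if n == 0 else 0.0) for n in G.nodes()}

ppr_scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
print(ppr_scores)
The resulting scores highlight the nodes that are most reachable from node 0, which is the kind of signal the article builds on for fraud detection.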
…
Read the full article on Sicara’s blog here. | https://medium.com/sicara/fraud-detection-personalized-page-rank-networkx-15bd52ba2bf6 | ['Antoine Moreau'] | 2020-01-30 13:53:45.733000+00:00 | ['Machine Learning', 'Data Science', 'Data Mining', 'Big Data'] |
The Rain Song | The Rain Song
A poem about the noise which awakens me
Photo by Alex wong on Unsplash
Will you believe me, if I tell you that today I collected sound?
I trapped raindrops which were pattering on the roof.
My own words knocking against the nerves and the raindrops somehow rhymed together, I blended the noise and wrote few poems.
I healed my heart.
Will you believe me if I tell you tomorrow it will be cuckoo song? | https://medium.com/scribe/the-rain-song-16abe2907273 | ['Priyanka Srivastava'] | 2020-12-11 11:21:18.343000+00:00 | ['Poetry', 'Rain', 'Muse', 'Nature', 'Writing'] |
Inverting an Image using NumPy’s Broadcasting method | In this article, we will learn how to invert an image using NumPy. To get some gist of this, let’s we have two values 0 and 1. Here 0 represents Black and 1 represents White. When we apply inversion to these values, we get:
0 → inversion → 1
1 → inversion → 0
The above only works when we have two values: 0 for low and 1 for high. If we relate the same idea to a binary image, whose pixel values are just 1’s and 0’s, the inversion flips every pixel. To put it in words, the image goes from White and Black to Black and White.
Broadcasting
With plain Python lists, if we want to add a number to the values of the list, we have to iterate through each element and add it. In NumPy, we need not iterate through each element: we can treat the whole array as a single object and add the number to it, and NumPy automatically adds that number to all the elements of the array. This technique is called broadcasting.
The broadcasting technique is applicable to both matrices and arrays. It is very fast when compared to normal loops.
>>> import numpy as np
>>> M = np.array([1, 2, 3, 4, 5, 6])
>>> M = 3 + M
>>> M
array([4, 5, 6, 7, 8, 9])
>>>
Let’s see the demonstration for a random matrix.
White — Black
A simple demonstration of the White — Black matrix and the image can be seen below.
1 is visualized as White
0 is visualized as Black
>>> import numpy as np
>>> image_b = np.array([
... [1,0,1],
... [1,1,0],
... [0,1,1]])
>>> image_b
array([[1, 0, 1],
[1, 1, 0],
[0, 1, 1]])
>>>
If we visualize the above matrix, we can see something like the below.
Image by Author
Black — White
A simple demonstration of the Black — White matrix and the image can be seen below.
1 is changed to 0 → Black
0 is changed to 1 → White
>>> # Broadcasting
>>> image_i = 1 - image_b
>>> # Alternative: image_i = np.logical_not(image_b).astype(int)
>>> image_i
array([[0, 1, 0],
[0, 0, 1],
[1, 0, 0]])
>>>
If we visualize the above matrix, we can see something like the below.
Image by Author
To invert the matrix we can also use the ‘~’ operator, but note that it only behaves this way on boolean arrays (dtype=bool). On an integer array such as image_b, ‘~’ performs a bitwise NOT and returns -2 for 1 and -1 for 0, so for integer matrices 1 - array (or np.logical_not(...).astype(int)) is the safer choice.
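A quick demonstration of that difference on the matrix above (a side note, not part of the original walkthrough):
>>> ~image_b # bitwise NOT on an integer array: not the inversion we want
array([[-2, -1, -2],
       [-2, -2, -1],
       [-1, -2, -2]])
>>> (~image_b.astype(bool)).astype(int) # invert through a boolean view
array([[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]])
>>> np.logical_not(image_b).astype(int) # same result
array([[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]])
>>>
For an 8-bit grayscale image (values 0 to 255), the analogous broadcasting trick is 255 - img, which maps every pixel to its opposite intensity.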
If you want to know specifically how to convert an image into binary, you can refer to my article where I explain the procedure for both colored and grayscale images.
The above implementation worked since the images are already binarized. What if we wanted to apply this for a colored and non-binarized image? The scenarios will be different. Let’s find out what we can do for those. | https://medium.com/analytics-vidhya/inverting-an-image-using-numpys-broadcasting-method-1f5beb7f9fa5 | ['Sameeruddin Mohammed'] | 2020-12-27 06:20:12.165000+00:00 | ['Image Processing', 'Numpy', 'Programming'] |
7 Underrated Quotes That Helped Me Become More Productive | “Pleasure is an important component of the quality of life, but by itself, it does not bring happiness.” — Mihaly Csikszentmihalyi
Mihaly Csikszentmihalyi is well-known for his work on the experience of flow. He even published a book about the topic 30 years ago.
The whole premise of his research was that we need to create more flow experiences to accomplish more. He goes into great detail about different mindset changes to make but also mentions the concept of pleasure.
He believed pleasure was important but not enough to bring happiness. Now, it’s been proven. We need pleasure as a component to reach happiness.
After I read this quote, I began controlling my urges more. I love going to coffee shops but did I need to go 6 times a week? I enjoy smoking but wasn’t it breaking my flow experiences as I had to go out to smoke? Did I really need to party 5 times a week? I chose to be more mindful of how often I indulged in pleasure.
We need pleasure but we should never make it the priority. | https://medium.com/skilluped/7-underrated-quotes-that-helped-me-become-more-productive-ddfeb629e956 | ['Mathias Barra'] | 2020-11-27 03:05:02.681000+00:00 | ['Quotes', 'Self Improvement', 'Productivity', 'Habits', 'Inspiration'] |
What Your Social Media Manager/Team Wants You to Know | Marketing Twitter is group therapy for social media/community managers and strategists. We cherish our jobs, projects, and clients, but face a lot of challenges as social media marketing is still perceived as new/easy/lowest on the food chain and easiest for other departments to demand things of.
I think the most important thing we want everyone to know and understand is that this is a real job — there are now degrees for it. A lot of us paved the way and created the best practices by suffering through trials, errors, and crises.
We are humans. We are a department. We spend every waking hour on the front lines of the business — one on one with consumers. We know what should and should not be posted on social, we create strategies, and yet, we often aren’t taken seriously.
A large portion of the marketing community helped me write this post, and the best way to fully relate to how we feel is:
I realize this post can come off pretty negative, but it’s our cry for understanding — help us help you. Understand who we are, what we do, and that we do much more than ‘play on our phones all day.’ And honestly, we’ve left a lot out.
I encourage any other profession to write something similar — we don’t know what we don’t know and we all want to be understood, respected, and appreciated.
MENTAL HEALTH
In addition to everything listed below, this job is hard. We are the first person your customers or followers come to when they are MAD about a product or service and personally attack the person behind the computer. The bacon didn’t taste right? Our fault. The shipment was delayed? Our fault. There is no grace, it is immediately our fault, we suck and we better fix it.
Please remember that we are on the front lines and no one knows your engaged audience better than we do.
We cry — physically and on the inside. We have anxiety. We never feel like we are enough for our teams, ourselves and our families.
We don’t get days off, even on ‘vacation’ we are always checking social media notifications, engaging with our fans, as well as email for any potential crisis. I was managing emails and communities while in labor at the hospital because maternity leave doesn’t start until the baby is born.
2020 has been the hardest on us. From waking up on the 1st day of the state shutdown, my birthday, to losing 3 clients, and having the remaining clients freaking out and demand the world — we’ve all been burnt out since April, and it’s not letting up, and is affecting our personal lives.
We know we do a good job, but we are rarely reassured. A simple ‘thanks’ or ‘you’re doing great’ goes a long way.
We need to be reminded/encouraged to take time for ourselves — tell us to go get a massage or cut out for a bit to go have a drink with friends. We won’t take a break because we don’t feel like we’re ever allowed to.
THIS IS A REAL JOB
We aren’t assistants.
Social media isn’t a service department.
We need tools — for listening, creating, scheduling and reporting.
Social media deserves its own contract/department/budget and shouldn’t be tacked on to an existing scope or marketing role.
Relying exclusively on part-time workers, particularly those who are students, interns or new hires to create and post your social media copy/assets is inherently exploitative. It sets your social media team up for burnout and your marketing strategy up for collapse when this happens.
You can’t schedule creativity.
Social media is an entire job/career. Many of us do the work of 2+ people, including the design team. We should be paid and valued as such, if not higher as we work 7 days a week.
We are busy, we are swamped, we aren’t extras in the office to take on whatever you feel you don’t have time for.
We need human support, not fancy project management tools.
YOUR REQUESTS
Your brand does not have to be on every social network in existence — focus on the ones where your customers are, and that makes the most sense for your brand and message.
If you want thoughts or feedback on how to execute a campaign concept on social media, send the PDF at least 24 hours BEFORE we have a phone call so I can come prepared. Otherwise, we’re going to sit on Zoom and stare at each other for an hour.
We have to write for our audience, not our personal intelligence level. The average American’s reading level is that of a 12–14-year-old.
Utilizing buzzwords in marketing offers makes my job harder to translate into human speak.
We only have a few seconds to grab peoples’ attention. Images and copy need to be clear and engaging.
We need a few days of turnaround time to properly execute social requests.
We can’t post fliers and PDFs.
You can’t make up a hashtag and expect people to use it.
Most of your demands require a budget to meet the KPIs you’re expecting (the KPIs you’ve set that don’t always make sense for social).
We can’t create a post until we have the proper creative assets, and most of the time we have to create those ourselves — rarely do we have a designer at our disposal.
The longer you take to approve a content calendar, the more likely we’ll have to reschedule posts.
Just because you want something posted doesn't mean it should be.
Links are not clickable in Instagram captions.
Hashtags aren’t cool / used on Twitter anymore.
There needs to be a healthy balance between planned content and spontaneous content. Most often, spontaneous content and engagement perform the best because it didn't go through 10 approval processes.
Not all content needs to align with some fluffy big picture mission, but it still needs to be thoughtful and have intent.
TRUST THE PROCESS
Every social media manager keeps a bank of rejected ideas (which in fact are the best ones) and they will continue to be resurfaced until they get approved.
Quality > quantity — this goes for content and followers. The number of followers is a vanity metric — we’d rather have huge engagement rates with awesome people than have 1 million unengaged followers. With that, we don’t need to post every day — we need to post when it makes sense and with really rich content!
Have patience and be realistic with follower goals. It takes time, and often budget, to build a real, engaged audience.
You don’t need to crush it on every social media channel. Own only one and break the noise consistently. You might be losing a lot of opportunities trying to conquer channels where your audience isn't.
You don’t need a social media account for every product or line of business.
If you’re going to create a social media strategy, you should probably loop in the social team first. Let’s create a plan, not just ‘throw something up’ on social.
Long term strategy can drive business impact, but one post probably won’t do anything.
It’s a lot more helpful to tell your social media manager your goal with social media and let them bring the tactics to you, versus giving them a piece of content/hashtag/activation you think they should use. Don’t make your social media manager back into your strategy.
Social content is not free. Even without paid promotion, content takes time, effort and creative resources — PAY FOR IT.
Every time you say ‘it’s just a post’ and force your team to tweet something lame or off-brand, you’re taking an ax to their metrics (and mental health).
It’s not always about KPIs — in social, quality and time matter the most. Sure, we’ll play along with the vanity metrics you need, but you know who cares about those numbers? You. You know who cares about how many followers a page has? Just you (certainly not the fans).
ALWAYS ON
Social media is fast-paced so if we are asking a question, we need an answer right away.
If you want to hop on a trend (that makes sense) we need to do it ASAP — we don’t have time to take days to plan, shoot and execute — it needs to happen NOW.
We are expected (and gladly do so) to respond to emails as soon as they come through, no matter the day or time.
We are always there for our social communities — typically responding immediately.
We don’t get nights and weekends off.
We are customer service. Consumers come to us first — they don’t want to email an info@ email or call an 800 number — they want an easy to access human at their fingertips: us.
When your institution is trending on Twitter, using the phrase “it’s just social” doesn’t justify taking 3 days to give a Pro-forma response.
Protecting and navigating the brand and supporting your user base on social media through major periods of unrest (hello, 2020) is a workstream in and of itself, and it takes careful, strategic work and emotional labor.
PLANNING
We need to be included in the planning/strategy/brainstorming meetings. We should be advising you on the best way to use social in the campaign or project, not the other way around.
A LOT of time goes into planning a monthly content calendar. We spend a lot of time laying the calendar out, sourcing and creating the content, copywriting, and perfecting before it goes to you for approval. There isn’t much room for random ideas and new campaigns once it gets to you.
The stuff you don’t think includes me does, in fact, include me.
Content should align with your product/brand/philosophy. Do not jump on a trending topic just because it’s trending.
Know your audience and the relevancy of your post to them.
GoiNg viRaL is not a strategy.
Many content strategists focus on the marketing side of things: the social, the products, the ads. That’s all good, but if you don't have great content from the start, what’s to market? Pay attention to what you’re linking to.
More posts are not the answer.
Meeting KPIs requires strategy, time, and budget.
USE SOCIAL MEDIA PERSONALLY
It’s really hard for us to continue to educate you on very simple aspects of social media. If you are a CMO or in the marketing department, please use social media in your personal time — look at what other people are posting, how they post, and so on. Most people know you can’t put a clickable link in an Instagram caption, yet we keep getting asked to.
If you don’t understand a platform or don’t personally use it, don’t shoot down ideas and strategies when it’s clear you don’t have a clue. Instead, ask questions, ask for clarification, ask for a demo or just stay quiet.
COMMUNITY MANAGEMENT
Community management isn’t for the interns. The person managing your communities is reading and responding to (or at least should be) everything your brand is mentioned in. No one knows your brand’s audience and consumers better than your community manager. Not even the agency that created that pretty persona deck.
If your brand isn’t going to engage on social media with the people talking to it — why are you on social media? Do you know how many people stop using a product or service because the brand is unresponsive? A LOT.
It takes a lot of time, passion and energy to build communities. We are talking to our followers at every waking hour, and it takes time to build trust and friendships. Personally, I have brand fans of several clients that now personally text, snap or message me because we’ve built such a strong relationship.
Social media is about being social and promotion is about taking people on a journey. Blasting your offers is unlikely to drive many conversions.
Social media is supposed to be social — it’s a conversation, not a bunch of corporate taglines strung together that nobody cares about. Keep it fresh and current and be part of the dialogue.
No, I’m not posting the same post on all platforms.
Comparison is the thief of joy (and engagement)
Reputation management matters — that means reading and responding to online reviews. Most often, consumers are leaving a review instead of contacting the brand. Word of mouth is powerful.
I truly hope you read through this entire list and will bookmark it to reference next time you may hesitate or want to demand something of your social media person. Also, please trust us when you ask for advice or strategy — just like you know what you are doing in your role and we don’t question you, we know what we are doing.
I can’t think of another career that is more underrated and less understood than working in social media. We are in this job because we are amazing at it, we know what works, but most importantly — we care. Working in social media takes a very patient and empathetic person, as well as having to be creative, always on, and really great at checking notifications and emails.
Please take us more seriously. If you don’t agree with something, pose it as a productive question or conversation starter to discuss and learn.
What would you like to learn more about when it comes to social media and working with social media freelancers, contractors, and colleagues? | https://medium.com/swlh/what-your-social-media-manager-team-wants-you-to-know-ae79df3ee047 | ['Chelsea Bradley'] | 2020-10-31 06:25:51.350000+00:00 | ['Marketing', 'Agencylife', 'Social Media', 'Freelancing', 'Social Media Marketing'] |
Use ColumnTransformer in SciKit instead of LabelEncoding and OneHotEncoding for data preprocessing in Machine Learning | In a very old post — Label Encoder vs. One Hot Encoder in Machine Learning — I had demonstrated how to use label encoding and one hot encoding to separate out categorical text data into numbers and different columns. But the SciKit library has come a long way since I wrote that post, and it has made life a lot more easier.
The developers of the library might have realised that people use LabelEncoding and OneHotEncoding very frequently. So they decided to come up with a new library called the ColumnTransformer, which will basically combine LabelEncoding and OneHotEncoding into just one line of code. And the result is exactly the same. In this post, we’ll quickly take a look at how we can do that with some code snippets.
The Code
First, as usual, we need to import the required libraries. We’ll go with the convention here for the aliasing:
import numpy as np
import pandas as pd
Next, let’s get some data into a variable and see what we’re working with:
dataset = pd.read_csv("/home/sunny/code/machine_learning/samples/sample.csv")
You can use the “Variable explorer” view in Spyder to take a look at the data. In my case, it’s something like the following:
As you can clearly make out, the data makes no sense and is clearly just for this demonstration. Anyway, the first column we have here is a text field, and is categorical in a sense. So we’ll have to label encode this and also one hot encode to be sure we’ll not be working with any hierarchy. For this, we’ll still need the OneHotEncoder library to be imported in our code. But instead of the LabelEncoder library, we’ll use the new ColumnTransformer. So let’s import these two first:
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
Next, we have to create an object of the ColumnTransformer class. But before we can do that, we need to understand the constructor signature of the class. The ColumnTransformer constructor takes quite a few arguments, but we’re only interested in two. The first argument is called transformers, which is a list of tuples. Each tuple has the following elements, in this order:
name : a name for the column transformer, which will make setting of parameters and searching of the transformer easy.
: a name for the column transformer, which will make setting of parameters and searching of the transformer easy. transformer : here we’re supposed to provide an estimator. We can also just “passthrough” or “drop” if we want. But since we’re encoding the data in this example, we’ll use the OneHotEncoder here. Remember that the estimator you use here needs to support fit and transform.
: here we’re supposed to provide an estimator. We can also just “passthrough” or “drop” if we want. But since we’re encoding the data in this example, we’ll use the OneHotEncoder here. Remember that the estimator you use here needs to support fit and transform. column(s): the list of columns which you want to be transformed. In this case, we’ll only transform the first column.
The second parameter we’re interested in is the remainder. This will tell the transformer what to do with the other columns in the dataset. By default, only the columns which are transformed will be returned by the transformer. All other columns will be dropped. But we have the option to tell the transformer what to do with the other columns. We can either drop them, pass them through unchanged, or specify another estimator if we want to do some more processing.
Now that we (somewhat) understand the signature of the constructor, let’s go ahead and create an object:
columnTransformer = ColumnTransformer([('encoder', OneHotEncoder(), [0])], remainder='passthrough')
As you can see from the snippet above, we’ll name the transformer simply “encoder.” We’re using the OneHotEncoder() constructor to provide a new instance as the estimator. And then we’re specifying that only the first column has to be transformed. We’re also making sure that the remainder columns are passed through without any changes.
Once we have constructed this columnTransformer object, we have to fit and transform the dataset to label encode and one hot encode the column. For this, we’ll use the following simple command:
dataset = np.array(columnTransformer.fit_transform(dataset), dtype = str)
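As a side note, a fully self-contained version of the steps above, using a small made-up DataFrame in place of sample.csv (which isn’t shown here), could look like this:
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# Toy data: one categorical column followed by two numerical ones
toy = pd.DataFrame({
    'Country': ['France', 'Spain', 'Germany', 'Spain'],
    'Age': [44, 27, 30, 38],
    'Salary': [72000, 48000, 54000, 61000]})

columnTransformer = ColumnTransformer([('encoder', OneHotEncoder(), [0])],
                                      remainder='passthrough')
transformed = np.array(columnTransformer.fit_transform(toy), dtype=str)
print(transformed)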
If you check your dataset now in the Variable explorer view, you should see something similar to this:
As you can see, we have easily label encoded and one hot encoded a column in our dataset using only the ColumnTransformer class. This is so much easier and cleaner than using both the LabelEncoder and OneHotEncoder classes. | https://towardsdatascience.com/columntransformer-in-scikit-for-labelencoding-and-onehotencoding-in-machine-learning-c6255952731b | ['Sunny Srinidhi'] | 2020-01-09 09:06:06.741000+00:00 | ['Machine Learning', 'Python', 'Scikit Learn', 'Technology', 'Data Science']
Data Collection: Public Opinion | In order to test respondent’s aptitude on this topic against their levels of concern for data collection, only those who answered one of the most difficult questions (“Do you understand what using a private browser does?”) correctly were filtered out and tested for their levels of concern separately.
Visually, it appears that those who answered this digital knowledge question correctly are just as concerned as the entire sample population put together. Still, after a chi squared test was done, a correlation between how people answered this question and their digital knowledge did emerge.
Cross-tabulation of levels of concern against whether the respondent answered questions about digital knowledge correctly.
The cross-tabulation pictured above makes it evident why the visual does not align with the statistics of the data. A clear statistical significance appears only among those who are “not at all concerned”: the respondents who answered this difficult digital knowledge question correctly were the least likely to say that they were “not at all concerned” about how companies are using the data they collect. When this column is dropped from the cross-tabulation, the correlation between the two variables disappears.
In an attempt to make the categorical data more pliable in a statistical coding environment, the responses were converted to ordinal values. This garnered limited results when a regression line between levels of concern and self-perceived levels of understanding was attempted. After studying the OLS regression results, it was clear this was not a useful tactic, because the survey’s random sampling did not draw equally from across social groups. For example, college graduates represent about 60% of the sample population in this data.
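For reference, this kind of check can be reproduced with pandas and SciPy; the snippet below is only a sketch, and the column names ('answered_correctly', 'concern') are hypothetical, since the survey’s actual variable names are not shown here:
import pandas as pd
from scipy.stats import chi2_contingency

# df is the survey data frame
crosstab = pd.crosstab(df['answered_correctly'], df['concern'])
chi2, p_value, dof, expected = chi2_contingency(crosstab)
print(crosstab)
print('p-value:', p_value)

# Re-run the test after dropping the "Not at all concerned" column
reduced = crosstab.drop(columns=['Not at all concerned'])
chi2, p_value, dof, expected = chi2_contingency(reduced)
print('p-value without that column:', p_value)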
Instead, it is better to test who out of the sample data is more likely to believe that the benefits of data collection outweigh the risks. Public opinion on data collection is more clear in this survey question as it makes the respondent consider their concern in a more practical and inclusive way.
While testing how level of education may predict someone’s concern for data collection, the aforementioned unequal representation of college graduates in the data had to be accounted for. To do this, the respondents who answered that the risks outweigh the benefits were isolated, and within each education level that group was divided by the total number of respondents at that level, giving the percentage of each education level that answered this way.
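In pandas, that normalization might look like the following sketch (again with hypothetical column names):
# Share of each education level saying the risks outweigh the benefits
risk_share = (
    df.assign(risk_outweighs=(df['risk_benefit'] == 'Risks outweigh benefits'))
      .groupby('education')['risk_outweighs']
      .mean() * 100)
print(risk_share.round(1))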
It is clear that a vast majority of the respondents, no matter what level of education, believe that the risks of data collection outweigh the benefits, ranging between 82–86% of respondents, varying by education level.
Contrary to the hypothesis, instead of finding a correlation between understanding/education and concern, a correlation between concern and age was found. While most respondents believe the risk outweighs the benefit, it is very clear that the group most likely to believe in the benefits of data collection is the 18–29 age bracket.
The reason behind this generational divide of opinion is not clear in the data. Armed with the original hypothesis, I tested whether levels of understanding of data collection vary between age groups and found no correlation. It is possible to assume younger generations use technology more and therefore encounter times when their data is collected and used more often. From this one could possibly infer that familiarity breeds indifference, but this is a large leap to make without any data to back it up. In the end, the data showed a statistical correlation between age and whether one thinks advertisers have successfully tailored one’s ads to their actual interests. So in theory, the more useful a private entity’s data collection is to the public, the more the public might appreciate it.
In conclusion, my hypothesis was rejected. The little data that proved there was a correlation between understanding data collection and a respondent’s concern for it, only related to the most extreme stance of not caring at all. This showed that, contrary to my initial opinion, if someone understands how companies and the government collect and use your data, they are more likely to be somewhat to very concerned. Those who have no concern at all are more likely to have less understanding on the subject. On the other hand, the unrelated correlation found was age, and the only explanation found was that younger generations find companies collecting their data to be useful. This means that the only real way to get public opinion on the side of big data, is to make sure it comes in handy for the public as well. | https://agates-28486.medium.com/data-collection-public-opinion-872082835c9 | ['Áine G'] | 2020-11-27 10:46:41.112000+00:00 | ['Data', 'Facebook', 'Data Analytics', 'Data Collection', 'Cambridge Analytica'] |
[Week 1] 5 Articles to Keep You Updated on Blockchain | [Week 1] 5 Articles to Keep You Updated on Blockchain
Weekly newsletter from Bangalore Blockchain Application Development
Word Cloud via Word Art
Nilesh Christopher via Economic Times
After barring banks from transacting with crypto-operated businesses, the Reserve Bank of India (RBI) is examining whether it is feasible to issue a digital currency backed by the Indian Rupee (INR).
via The Economist
Skepticism about blockchain has never gone away, and this article gives good insight into why that is the case!
Mark Rogowsky via The Block Crypto
Who doesn’t love good banter! This article is a response to the piece above by The Economist and its claim that Bitcoin and other cryptocurrencies are useless.
Seema Mody via CNBC
Andreessen Horowitz is perhaps the biggest name in the VC sector. This investment comes at a time when the market has been in bear mode for a few months, and it is significant for the future of blockchain.
Mohammed ElSeidy via ZK Capital
Every startup wants to ride the Blockchain hype and tries to incorporate Blockchain into their products. But do all these solutions really require blockchain? This article tries to answer when to use Blockchain, if at all. | https://medium.com/bangalore-blockchain-application-development/5-articles-to-keep-you-updated-on-blockchain-week-1-a679cf9c126e | ['Chinmay Patil'] | 2018-09-24 09:02:21.921000+00:00 | ['Venture Capital', 'Blockchain', 'Development', 'Cryptocurrency', 'Bitcoin']
Young, Gifted, @ Risk and Resilient | Mental health among college students has become a national priority. Students of color in particular experience unique circumstances, such as racial/ethnic discrimination, disparities in mental health services, and marginalization. These experiences can contribute negatively to their mental health and well-being and impede both academic performance and college satisfaction.
This video series features scholars and practitioners from across the country who provide evidence-based information for faculty, staff, and providers to foster a positive learning environment and support the mental health and well-being of students of color.
Scholars and practitioners have informed this effort through a holistic approach. Understanding the space and place in which students of color operate provides us with a lens to examine the multiple dimensions of students’ academic and social contexts on and around their campuses. This includes not only the physical components of campus climate, but also the virtual contexts that impact students’ sense of belonging.
Scholars have also highlighted the formal and informal relational contexts among students of color and their faculty, student peers, staff, families, and communities, in order to understand the strengths and contributions that relationships can have on the well-being of students of color. Considering the barriers and strengths (including structural, social, cultural and personal) of students can help advocates and supporters identify students’ assets and support them as college campuses adapt to better serve students of color. Scholars and practitioners have introduced innovative interventions to address the complex challenges facing students and institutions, and guidance on best practices to implement, scale up, and advance rigorous research on this topic.
This video series is a product of a collective effort between scholars and practitioners across the country. We hope you will join us as we increase awareness and understanding about the experiences of students of color and consider how to best support them so that they may be successful and thrive in the educational environment. | https://medium.com/national-center-for-institutional-diversity/young-gifted-risk-and-resilient-4bc84efca3c6 | ['National Center For Institutional Diversity'] | 2020-06-10 17:46:09.423000+00:00 | ['Resilience', 'Mental Health', 'Series', 'Toolkit', 'Students'] |
Reshape Pandas Data Frames | Reshape Pandas Data Frames
A walk-through example of how you can reshape pandas data frames
Image on Unsplash
We will provide some examples of how we can reshape Pandas data frames based on our needs. We want this to be a concrete and reproducible example, and for that reason we assume that we are dealing with the following scenario.
We have a data frame of three columns such as:
ID: The UserID
Type: The type of product.
Value: The value of the product, like ‘H’ for High, ‘M’ for Medium and ‘L’ for Low
The data are in a long format, where each case is one row. Let’s create the data frame:
Create the Pandas Data Frame
import pandas as pd

df = pd.DataFrame({'ID':[1,1,1,1,2,2,3,3,3,4],
                   'Type':['A','B','C','E','D','A','E','B','C','A'],
                   'Value':['L','L','M','H','H','H','L','M','M','M']})

df
Aggregate the Data by ID
Let’s say that we want to aggregate the data by ID by concatenating the text variables Type and Value respectively. We will use a lambda function with join, where our separator will be “|”, but it can be whatever you want.
# Aggregate the data by ID
df_agg = df.groupby('ID', as_index=False)[['Type','Value']].agg(lambda x: '|'.join(x))
df_agg
As we can see, we now have 4 rows, one per ID.
Reshape to a Long Format
Our goal is to convert “df_agg” back to the initial data frame. We will need a few steps to achieve this.
Convert the Columns to Lists
We will need to split the Type and Value columns and transform them into lists.
df_agg['Type'] = df_agg['Type'].apply(lambda x: x.split("|"))
df_agg['Value'] = df_agg['Value'].apply(lambda x: x.split("|"))

df_agg
Create a list of tuples from the two-column lists
We know that the elements of each list appear in order. So, we need to do a mapping between the Type and the Value list element-wise. For that reason, we will use the zip function.
df_agg['Type_Value'] = df_agg.apply(lambda x: list(zip(x.Type, x.Value)), axis=1)

df_agg
Explode the list of tuples
Now, we will “explode” the Type_Value column as follows:
df_agg = df_agg.explode('Type_Value')

df_agg
Split the tuple into two different columns
Now we want to split the tuple into two different columns, where the first one refers to the Type and the second one to the Value.
df_agg[['New_Type','New_Value']] = pd.DataFrame(df_agg['Type_Value'].tolist(), index=df_agg.index)

df_agg
Now we will keep only the columns that we want and we will rename them.
df_agg = df_agg[['ID','New_Type', 'New_Value']].\
    rename(columns={"New_Type": "Type", "New_Value": "Value"}).\
    reset_index(drop=True)

df_agg
Check if the Data Frames are identical
As we can see, we started with a long format, aggregated and reshaped the data, and then converted it back to the initial format. Let’s verify that the initial data frame is the same as the last one that we created.
df_agg.equals(df) | https://medium.com/swlh/reshape-pandas-data-frames-30c218ff07aa | ['George Pipis'] | 2020-11-26 17:39:17.845000+00:00 | ['Python', 'Data Transformation', 'Data Cleaning', 'Data Science', 'Pandas'] |
A New Way to Write Conditional Statements in Kotlin | A New Way to Write Conditional Statements in Kotlin
Make your code more readable using “when” in Kotlin
Photo by Scott Graham on Unsplash.
Ever since Kotlin was born, it has been one of the preferred languages to develop apps for Android devs around the globe. People love Kotlin for various reasons: extension functions, ease of doing things, less boilerplate code, cross-platform support, and more.
Yet many of us overlook the potential of the little things in Kotlin. For example, I still see many of my teammates occasionally use nested if and else if statements.
With Kotlin, we can have a unique yet simple solution that can deal with multiple problems. For example, the when statement can replace all the conditional statements in any Kotlin project if you’re smart enough to use it. | https://medium.com/better-programming/a-new-way-to-write-conditional-statements-in-kotlin-a3c029e417dc | ['Siva Ganesh Kantamani'] | 2020-07-10 13:58:55.121000+00:00 | ['AndroidDev', 'Kotlin', 'Android', 'Software Engineering', 'Programming'] |
Building vegetable gardens and other innovations transform lives in South Sudan | People in Mangok Amuol in South Sudan used to struggle to get enough food to eat, surviving off wild foods such as water lilies and collecting firewood to sell. Akol Deng remembers how she lived hand-to-mouth, earning barely enough to feed her children much less ever send them to school.
But since a project to change people’s lives called Building Resilience Through Asset Creation and Enhancement — Phase II (BRACE II) has strengthened the village, those tough days appear at least to be behind her and her family.
BRACE II is a multi-year project implemented by the Food and Agriculture Organization of the United Nations (FAO) and the World Food Programme (WFP), with financial support from UKaid.
For three years, targeted households receive food assistance while they create assets that build their resilience to climate shocks. FAO provides seeds and tools to food insecure families while WFP provides a monthly cash transfer during the annual hunger season that coincides with the peak agricultural period. This ensures that people have enough energy to cultivate their land.
In Mangok Amuol, BRACE II participants recognized hunger as their main problem, followed by seasonal flooding which prevents children from reaching school and sick people from being able to receive treatment. At a community meeting, they agreed to establish a vegetable garden to grow their own food and a dyke road so their children could access school all year round.
“We now have vegetables which were never here before!” | https://wfp-africa.medium.com/building-vegetable-gardens-and-other-innovations-transform-lives-in-south-sudan-3953ab08d72d | [] | 2020-11-04 12:57:12.234000+00:00 | ['South Sudan', 'Innovation', 'Agriculture', 'Development', 'Africa'] |
6 Reasons We Should Celebrate 2020 | Redundancies
A record high in redundancies has sent unemployment to 4.9% in the UK. With 35% of our total waking hours over a 50-year working life spent going to work, it is no surprise that losing the thing that we wake up for each morning creates cracks in our identity.
I was made redundant in early July and it has changed my outlook on life completely. I finally got a moment to stop in my tracks and actually consider whether what I was doing was taking me to the place I wanted to be. It forced me to understand that I wasn’t actually enjoying my work-life. It offered me the freedom that I could have never offered myself. The freedom to lose the thing that defined me between Monday and Friday, without having made the decision to step away from it myself.
It caused me to pause and reflect, resulting in numerous reevaluations. I finally came to the conclusion that I no longer wanted to be sucked into the vacuum of the industry I was working in and developed a better idea of the way I wanted my life to look.
It brought me to discover the hobby that I actually wanted to pursue as a career and allowed me the time to do online courses that would aid me in the areas I had little experience in. It takes a life-changing event to remove the clutter that’s toppled over our true desires.
Your Daily Life Is Your Life
This year, it felt like every time I opened up Instagram, there was a new celebrity couple excitedly announcing their pregnancy. Of course, with not a lot to do, many couples could have easily come to the conclusion that the long and uneventful months of 2020 could actually work as the most appropriate time to throw a bun in the oven.
However, if we take a glance at the things that ended up being important to us, the things we filled our days with, we can identify what is always there for us: the walks with our dogs, grabbing takeaway coffee, small-talk with neighbors. Perhaps it has made us more content.
More content with the idea that those around us, are enough. Being separated from loved ones made us understand our need for physical touch and close proximity. It made us acknowledge the fact that man-made short fixes such as Facetime or Zoom just don’t do it for us.
It can be easy to get caught up in a constant chase for productivity and success and as a result, leave the important things in the background. This year gave us little possibility to feel fulfilled in any other way than turning to the only thing left; people.
We’ve begun to be kinder to each other. I notice it with passersby. A greeting or a simple scurry forward to allow a stranger to pass. The effects have touched us all. Perhaps, as we get back to normality, we will choose our own versions of success over the social constructs of it — a lesser-paid job that involves passion and allows us more time for people, over greater pay for longer hours in the office, staring at a screen.
Loved Ones With Differing Views
Jumping in and out of lockdowns isn’t something any of us were prepared for. Words such as ‘furlough’ and ‘lockdown’ have casually crept into our daily vocabulary over the course of this year. We never really had to consider rules or morals as such on a daily basis previously.
The general consensus of refraining from murder or endeavoring upon a bank robbery was something that we could expect of our friends. However, when regular Government announcements began to fill our TV sets with new restrictions every few days, it made us reevaluate our priorities. It blurred lines and forced us to look at the scale that balances our freedoms, the economy, and safety, as separate issues.
We found ourselves having arguments about issues not previously explored. Those whose interests lay with the general health and safety of society became enraged by the skeptics that ended up being their friends and partners. Many of us saw new sides to those we thought we knew inside out. When we are setting a lifetime precedent, it isn’t surprising that spikes in divorces and domestic life became more stressful than ever.
Other social issues also had the chance to come to light which caused a further conflict of interests. Those who were stringent with Covid guidelines but simultaneously cared deeply about and wanted to partake in protests or Black Lives Matter marches were faced with a very difficult decision. Either way, wherever you went and whatever you did, there was scrutiny and criticism awaiting.
Perhaps, these experiences will all lead us to become more tolerant and accepting. We’re all complex beings, each with a diverse range of interests, limits, and loyalties. Doing everything right during this year’s balancing act would be utterly impossible.
Considering Our Surroundings
There was a transition in the Youtube community this year. Those who pre-pandemic were used to working remotely, would use their social lives to make up for their often isolating work lives. With co-working spaces and cafes off the table, their huge houses felt more isolating than ever.
Vlogger Tara Michelle, amongst many others, has expressed her unhappiness in spending the majority of this year in such a lonely way. This isn’t something she’s ever needed to confront before, with a pre-pandemic life offering her a world of distractions. Tara has ended up leaving Los Angeles and moving back to Toronto to be closer to the family.
I for one, took a good look at my environment and asked myself if this way of life was to remain forever, would I want to spend it in this city? In this country? Even on this continent? Where do we wish to escape to? What type of lifestyle do we crave if life was just remote work and family life?
Living in London has always appealed to me because of the hustle and grind. The morning commute, the metropolitan energy, and the smell of roasted coffee beans but without that, I have realized that all I crave is a simple life by the beach, somewhere much less rainy. What I truly enjoy about the city I live in is the lifestyle — not the land. As we get older, our priorities shift and so I must ask myself; will I still enjoy this city when it’s no longer the lifestyle that I’m after?
2020 has given us the courage to ask the big questions. Ones that are so daunting and terrifying that we most likely would have never considered them. Many remote workers are now utilizing this new way of work and Slow Travelling or moving to a place that inspires them to be the best version of themselves — for themselves.
We Found Hobbies
Even for those still working, the extra hours of commute time pocketed and plan-free weekends meant that we had time to read, explore, and venture to new lengths to find passions.
How does one use all of this newfound time? What I didn’t realize about the fact that I somehow always ended up in a museum or gallery was that my seemingly random strolls were actually a search. My soul was searching for the next challenge, the next passion, the next burning desire. And that’s when I came across a Japanese poster at the Victoria & Albert Museum in London.
I froze. It captured me. Something had ignited in me after all this time. I felt something. I didn’t know what it was, I just knew that it was right. It has now led me to a spectacularly peculiar and awfully sudden interest in Japan. I am now besotted with the thought of witnessing the Cherry Blossoms one fine April and have looked into Japanese work visas. I play Japanese string instrument music each morning and my boyfriend believes I’ve gone absolutely bonkers. Blissful insanity that has re-inspired me for the future.
Witnessing where your mind goes when you are procrastinating or feeling bored is an easy way to figure out if there is a passion brewing within you. One that you didn’t even have an inkling about. And so, for some people, in some ways, this year could be the most life-changing one yet.
Sustainability
This year has truly shown us the effects of humanity. Images of Venice canals suddenly crystal clear, global levels of nitrogen dioxide at record lows, and urban air pollution in China dramatically decreasing has left an ever-changing mark on many frequent flyers. Perhaps, we can move forward with a more conscientious attitude towards travel.
High-mileage travellers have begun putting more thought into their bucket lists. “COVID-19 has allowed me to rethink how and why I travel,” says Erick Prince of The Minority Nomad. The unpolluted blue skies are likely to have struck a chord with many, and it could be a promising moment for companies advocating a more considered approach to travel.
Carbon Offsetting Gifts have become increasingly popular for this year's Christmas season showing positive new mindset developments. We have not only become more aware of the suffering of people around the world but we are taking into account the great damage we inflict on our planet. | https://medium.com/live-your-life-on-purpose/6-reasons-we-should-celebrate-2020-dc488f695f59 | ['Sandra Michelle'] | 2020-12-28 13:02:36.349000+00:00 | ['Life', 'Mental Health', 'Self', 'Self Improvement', 'Life Lessons'] |
Finding God in Gay | Learning to find God in all things can be tricky, but it can get even more daunting to find God in the darkness of rejection, discrimination and prejudice.
I didn’t have to grapple with my gay orientation for too long. I was accepting of who I was from the beginning, which was a blessing and a grace in many ways, though I did feel the sting of rejection from a place that was supposed to be my home. I consider my brothers and sisters who have not been as fortunate and I feel compassion for those men and women who struggle with self-hatred and condemn themselves to a life of secrecy, those who are constantly bombarded with messages telling them they are freaks. I think of the kids being bullied at school who then go to mass with their parents and hear anti-gay remarks. I wonder how many teenagers who kill themselves also went to church with their parents and heard those things. And mostly, I wonder where God is in the midst of all this.
I constantly struggle with the answer to that question, but I do know that who God made me to be has been a gift. I have learned that love has many faces and styles. I’ll never forget watching Chely Wright — the country singer who came out — describe her experience. She said she remembered praying to God to make her straight when she heard a still small voice saying, “You are fine the way I made you. Proceed as you are.” Her words resonated deep within me. That is a powerful message for everyone, but especially today’s youth.
We find God in the diversity of creation and we find God in love — whatever form that takes. We can also find God when we are rejected and abandoned. Jesus was rejected because most narrow-minded people have little capacity to receive the all-encompassing love of Christ. Jesus preached of loving your neighbor and accepting everyone as brother and sister. Sadly, many Pharisees found this loving message absolutely appalling. Even more, Jesus made the human sacred. He blessed diversity. He associated with all types of people — some of whom were of questionable character.
Whenever we are being our authentic true selves — the selves that God created us to be — we will meet some level of rejection. But we can find God in the love within us and in communities that accept and affirm us. We can find God in people like Chely Wright who was a voice for all gay people who have no voice. God can be found in that still, small voice, “You are fine the way I made you, proceed as you are.”
We also find God in hope. Hope for the future — that acceptance and change will happen. It already is. There are many voices speaking out in favor of that change. I believe there is hope that we will come together as one big human family and “love one another as Christ has loved us.” I pray to be a part of that change and I pray that together we can play a part in helping to end teen suicide over one’s sexual orientation. I pray that all my brothers and sisters would know that they are loved and created in the image of God who has created diversity when it comes to love.
I believe that it will be the small communities that meet outside of the institutional church that will make the difference in this world; the lay people and the clergy who actually follow Christ that will make a difference by their efforts and sacrifices, people like Jeannine Gramick and the late Mychal Judge and many others who are standing up against injustice. It’s the quiet nuns that live in obscurity taking care of AIDS patients and the small welcoming communities who worship in the seclusion of their own home who are making the difference. The men and women who have the courage to be who they are in the face of rejection and ridicule and those who minister to the ones who cannot be themselves — they show me the Kingdom of God. These people are the true church and the face of Christ for me. | https://medium.com/reaching-out/finding-god-in-gay-9090c7f48a3 | ['Stephen Fratello'] | 2018-01-14 21:47:31.575000+00:00 | ['Christianity', 'Life', 'Storytelling', 'LGBTQ', 'Catholicism'] |
How Bitcoin solves the Blockchain Trilemma | How Bitcoin solves the Blockchain Trilemma
When, in 2017, the bitcoin community activated SegWit, thus enabling second-tier solutions such as the Lightning Network, the cryptocurrency world changed.
The protocol created on top of Bitcoin to solve the scalability problem has allowed the cryptocurrency to answer one of the questions that plague the world of Blockchain. We try to analyze the relationship between these two protocols and how one is strictly necessary for the other to create, on paper, a perfect ecosystem.
What is Lightning Network (LN)?
Lightning Network is Bitcoin’s layer-2 solution that allows almost instant transactions with very low costs, based on the idea of two-way payment channels. Like the underlying Blockchain, it relies on peer-to-peer exchanges, but here money moves across a network of channels without delegating custody of the funds. The transactions are off-chain, apart from the first and the last, which serve as on-chain “certification” of the balance for all the others.
In this way, Bitcoin has potentially found the solution to its scalability needs, with the possibility of reaching a throughput of thousands, if not millions, of transactions per second. The solution proposed by Joseph Poon and Thaddeus Dryja in 2016 with the paper “The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments” added the last piece needed to solve the famous Blockchain Trilemma.
What is the Blockchain Trilemma?
The Blockchain Trilemma concerns three fundamental properties of the technology, namely security, scalability and decentralization. Initially expressed by Vitalik Buterin, the trilemma states that a Blockchain can achieve only two of these three properties at once.
The example of Bitcoin is emblematic: until 2017, Bitcoin was a decentralized and secure currency but extremely slow at processing a large number of transactions. The low scalability of the Bitcoin Blockchain is due to its very construction, which acts as a wall protecting the cryptocurrency. Bitcoin’s consensus algorithm (Proof of Work) does not allow it to scale exponentially like others (e.g. Proof of Stake), but on its side it certainly has the security and durability that make it the safest from attacks of any kind.
To solve the Trilemma, and thus become an ideal Blockchain, the protocol created by Satoshi Nakamoto necessarily had to scale. So in 2017, when the second tier solutions were introduced, Bitcoin implicitly became a complete machine. Obviously, LN is still a project in development, but the potential here is virtually endless.
From Gold 2.0 to Retail
The cryptocurrency is currently attracting the attention of institutional investors, traders and the whole world; when it goes mainstream, it will do the same with the entire retail world. Once tested and made stable, Lightning Network is the improvement that can bring micropayments and instant BTC transactions into everyday life.
If Bitcoin’s Blockchain is the ledger for gold 2.0, and it is no longer just nerds saying this but banks too, then payment channels and LN represent the financial infrastructure necessary to scale globally.
On-chain transactions will be used exclusively for certification purposes, for transferring large amounts of money, or for direct exchanges that do not require payment channels.
Bitcoin is not a static protocol; on the contrary, it has been constantly evolving since its inception, bringing decentralized network architectures into a world that has always relied on an intermediary as guarantor. LN is the second step, from which further solutions will derive to make the protocol ready for the general public.
Conclusions
Bitcoin’s is and will be a slow and epochal change. Bitcoin is not a technology, it is a revolution. The resolution of the Trilemma is only the first of the various milestones that the cryptocurrency will reach in a few years. Don’t expect to see Lightning Network wallets in everyone’s hands tomorrow. The transition will be gradual, giving developers the opportunity to wrap the complex web of technical and non-technical rules in an age-proof UI (User Interface).
The future is here, it is evolving, and it is not in the hands of states or banks but in ours.
For the first time, you have a choice. | https://medium.com/coinmonks/how-bitcoin-solves-the-blockchain-trilemma-54ffd84c42a8 | ['Gianmarco Guazzo'] | 2020-12-28 14:08:09.701000+00:00 | ['Lightning Network', 'Startup', 'Blockchain', 'Technology', 'Bitcoin'] |
Why Your Opinion Isn’t As Valid As You Think It Is | “All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.”
― Douglas Adams, The Salmon of Doubt
We’ve all heard it nine ways from Sunday. Someone voices a half-baked or uneducated or hyperbolic opinion (often at a family gathering), which, when backed into an intellectual corner to justify their nonsensical belief, counter with this line
“My opinion is as valid as anyone’s!”
We’ve heard it before, and we feel frustrated, stymied by this truism of democratic life. Yes, everyone is entitled to an opinion, and yes, their opinion is as valid as anyone else’s. We’ve been told so ad nauseum over the course of our lives.
From Flat Earthers to Anti-Vaxxers, to Creationists, to New Agers and pandemic and climate change deniers, we feel trapped from refuting their uneducated and ignorant opinions by the basic idea that all opinions are valid, and so cannot be refuted without our sinking to the level of some kind of intellectual authoritarianism.
Nope
Except that’s not what anyone ever said. No one ever said everyone’s opinions are as valid as anyone else’s. Over time, we’ve conflated two concepts into one, and that’s our mistake.
What you are ARE entitled to is an opinion of your own. This is very true. This is referred to as Freedom of Thought, which can be summed up thusly.
Freedom of thought (also called freedom of conscience or ideas) is the freedom of an individual to hold or consider a fact, viewpoint, or thought, independent of others’ viewpoints. — Wikipedia, the great and powerful
This is a pretty fundamental foundation of any democracy. Everyone is allowed to hold an opinion of their own, and they are entitled to hold this opinion because all individuals should be free to think and feel as they see fit.
However, this does not make that opinion as “valid” as anyone else’s.
Let’s see why.
Why Your Opinion is NOT as valid as any other
So why is your opinion not as valid any anyone else’s? Simply put: Expertise.
Let’s work this through by way of an example:
You own a car, and it starts making a nasty rattling sound. So, you take your car to your mechanic, a person who has years and years of expertise in diagnosing and fixing car troubles just like yours. The mechanic tells you your radiator is shot, and needs replacing.
Along comes me. I have no real world experience in fixing cars. I don’t really know much about your make and model, and I’m not all that good with telling a radiator from a muffler. Now, you tell me what your mechanic said, and I tell you “Oh, I have a feeling it’s something to do with your brakes.”
Okay, so now we have two opinions: your mechanic’s and mine. Is my opinion just as valid as your mechanics? Sure, I’m entitled to think it’s your brakes, but that doesn’t make my opinion as valid as the mechanic’s, now does it? Why not? Because I have no expertise in fixing cars. I am simply stating an uneducated opinion, and based on all logic and reason, you’d be foolish to trust my opinion on par with that of your mechanic.
You see, opinions increase in validity as they are measured against expertise. There is a large chasm between having an opinion and the validity of the opinion in question.
The opinions of scientists on climate change, on infectious disease, on just about anything related to the sciences invariably are MORE valid than the opinions of some clown on YouTube or some Facebook group of anti-vaxxers.
Not because the scientists are better people or entitled to special treatment, but simply their expertise in their field make their opinions of infinitely more value and validity than someone who heard something and thinks it sounds about right.
Why Opinions Do Not Trump Facts
Everyone is entitled to (their) own opinion, but not (their) own facts. — Daniel Moynihan
Let’s sashay into another realm of opinion-holding. This is an increasingly troublesome one these days, in what is appearing increasingly like a world where facts no longer matter.
Let’s take another example:
You’re a school teacher. You tell me and other parents at a PTA meeting that you want to teach kids about math, that 2+2 = 4. Well, I don’t like this one bit. My religion teaches that the Lord made 2 + 2 = 9, or at least that’s what my pastor / imam / rabbi / etc has explained to me.
I don’t want you and your anti-religious beliefs being taught to my children, and bring a protest against the school by members of my congregation because what you’re teaching runs counter to my religious beliefs, my opinions about what is true and what is not.
But of course my opinion / belief is nonsensical. Religious beliefs are all well and good, but they do not make 2 + 2 = 9. 2 + 2 = 4 is a FACT. What I am upset about is that this FACT doesn’t verify or validate my beliefs.
This is the difference between an opinion and fact. Facts are verifiable. Facts are (outside of quantum mechanics, that is) immutable. Facts are objective reality, and thus are the foundation of all rational thought, for they are reality as it is.
Facts are the ultimate in expertise, for they are independent of and stand above anyone’s beliefs or opinions.
I may believe I can fly off the roof of my house, but the facts of 1.) the pull of the earth’s gravity and 2.) that humans can’t fly by flapping their arms will put my belief in its place.
When our opinions fly in the face of objective reality, it is our opinions which are spurious, not the facts themselves.
Just because I want to believe that 2 + 2 = 9 — irrespective of the reason I want to believe this — does not set my opinion on anything close to the benchmark of facts. I’m wrong, and the facts are right.
Just because I don’t like facts, or they are not aligned to my opinion-based worldview, this doesn’t make the facts wrong. It just makes me wrong for not accepting reality for what it is.
All Opinions are Based upon Bias
“The opinions that are held with passion are always those for which no good ground exists; indeed the passion is the measure of the holders lack of rational conviction. Opinions in politics and religion are almost always held passionately.”
― Bertrand Russell, Sceptical Essays
Every opinion you hold is based upon some form of bias. You may not like this notion, but it’s true.
Take any opinion or belief you or I hold dear. I’m not talking about facts, or the acceptance of facts, but opinions you hold which cannot be 100% verified by facts. You hold that opinion and/or belief because of some amount of bias, either for or against an opinion on the subject. The less it can be verified by facts, the more bias is involved.
Are dogs better than cats? Bias. Was Jesus a Christian? Bias. Is there systemic injustice in the world? Bias. Opinions can be backed up by facts, and the more that fact verifies the opinion, the less of an opinion it becomes.
Bias is our unfair or skewed preference to one argument / idea / belief / group over another. Whenever we take an opinion, we are applying some level of bias when we do so. That’s all opinions are: Our preference of one thing over another.
I don’t state that it’s my opinion that sparrows can fly. This is a fact. I might state that sparrows fly more beautifully than any other bird, and now we’ve drifted out of fact and into my aesthetic preference.
You don’t state that millions died of bubonic plague across the world in the 14th century AD, known collectively as the Black Death. This is a historical fact. You may state that it happened because of God’s wrath, and now you’re into your own religious beliefs.
The closer an opinion or belief is to verifiable fact, the less of an opinion it becomes. The further it drifts from facts, the more bias we can find in the holding of this opinion.
In The End
“It makes people feel good to say that all opinions are equal, because somehow, they correlate equality of opinions with equality of human dignity. And that’s exactly where lies one of the most disgraceful and harmful perceptual errors of the society, that leads to an unfit government.”
― Abhijit Naskar, The Constitution of The United Peoples of Earth
When all is said and done, we must respect all individuals and their inalienable right to think and believe as they wish. We have opinions we cherish, so can anyone else.
That said, it should not stop us from feeling perfectly justified in dismantling this fallacy that all opinions are equal and valid. They are not. Opinions based in little to no fact, opinions overrun in bias and ignorance are not equal to nor in any way as valid as those expressed with as little bias and as much fact as possible.
While we must respect others’ opinions as their right, this does not stop us from using rationality, logic, and, yes, argument to show how their opinion is not equal to nor in any way as valid as any other. Now more than at any recent time in history, the truth and facts must matter and be counted above all opinions or beliefs.
We have allowed this nonsensical notion of equality of opinion under any and all circumstances to subsume rational thought and educated expertise. It is well time we discard this notion, even when it means in doing so we must discard some amount of such nonsense from within ourselves.
The very best thing about opinions? You can always, always change them. | https://medium.com/intelligence-challenged/why-your-opinion-isnt-as-valid-as-you-think-it-is-5c32f3fdcad9 | ['Christopher Laine'] | 2020-08-29 17:01:38.258000+00:00 | ['Politics', 'Philosophy', 'Science', 'Religion', 'Life Lessons'] |
Corporations aren’t evil — they just might be our best allies | Sustainable development needs partners that have proven themselves capable of innovating and changing with the world.
Let’s be honest: multinational corporations are not the first actors we think of when we think of powerful positive sustainability action. More often than not, the most sweeping improvements to curb environmental harm or improve social welfare are achieved through the actions of government regulation, grassroots movements, and citizen-led action.
Corporations, on the other hand, are mostly about money… right? Revenue, profit, bottom line, dividends — corporate sustainability is often seen as an add-on to make companies look good through fluffy corporate social responsibility (CSR) agendas that might not actually do anything.
The idea that corporations can do nothing but harm runs deep, and the existence of global corporations masking unsustainable business models behind extensive CSR reporting and greenwashed marketing makes it all too easy to perpetuate that thinking. At the extreme, this ideology skirts the edge of stereotypical “corporations are evil”-thinking, where all actions of corporations are seen to have vindictive hidden agendas. But this shuts down the conversation about the role and ability of corporations to advance society’s progress towards collective sustainability goals.
By the dictionary definition of sustainability — to have the ability to be sustained — corporations are actually doing pretty well.
ExxonMobil has a history of 135 years, and this isn’t even impressive: companies with operations dating back hundreds of years still exist in the world today, adapting and changing to their environment as needed to ensure longevity.
These are the kinds of partners we need with us in the quest for a more sustainable world. Partners that have proven themselves capable of innovating and changing with the world rather than living as a blip to only serve the needs of the present moment. Corporations, due to their fiduciary duty to shareholders, need to ensure the company’s longevity, and the key to doing this is to innovate how the corporation does business, as long-standing companies historically have. Sustainable development, on the other hand, needs not only innovation but also revenue generation to fund the next round of development. Otherwise, sustainable solutions have little hope of being, well… sustainable. What better ally to engage than ones with a track record hundreds of years long?
We need corporate watchdogs. We need to call companies out when they act irresponsibly or when they don’t do enough to protect their stakeholders’ (not just their shareholders’) interests. But to look at a company’s sustainability agenda, at the real strides they make towards acting in a more sustainable way, and to merely scoff and conclude that this is corporate marketing that won’t really advance sustainability, cuts out one of the most powerful allies the fight for sustainability can have on its side.
Instead of discounting these potential partners, let’s ask how we can work together. Rather than doubt and hostility, it’s time to respond with “that’s excellent, now let’s see how we can make this positive impact even bigger”. | https://medium.com/pure-growth-innovations/corporations-arent-evil-they-just-might-be-our-best-allies-70199592e004 | ['Anna Pakkala'] | 2019-09-25 07:10:04.010000+00:00 | ['Partnerships', 'Innovation', 'Evil Corporations', 'Beyond Csr', 'Sustainability'] |
Philosophy Books for Self-Isolation | People all over the world are anxious about their health, the economy, the future. For the first time in the history of humankind we’re united with one purpose: defeating a microscopic enemy.
This unprecedented threat comes with an unprecedented opportunity.
We all have a part to play not only in staying at home and helping to save lives, but also making a better world when we come out of the other side of this.
As we sit at home either in self-isolation or during shut-downs, we can take this time to reflect, read and gain wisdom to emerge as better citizens, friends, family members and lovers.
Below I’ve picked a small selection of philosophy books that can help us cope, let us hope, and enable us to think beyond the present situation.
I’m deliberately not including links here. You can search the titles to buy from independent book shops that can deliver to your door or download them as e-books. These books needn’t cost money, either — you can find all the writings below in web or e-book form on the Project Guttenberg website.
Here’s a tip: always read the modern introduction of the book, or even the Wikipedia page about it. That will give you some basic context, commentary and explanation.
Reading philosophy isn’t easy, but can be a lot easier if you get the background and main ideas first. I have written articles on all the philosophers below, which you can find in my Medium articles index.
“The Last Days of Socrates” by Plato
“The unexamined life is not worth living.”
Philosophy really got going with Socrates. Philosophers before him — “Pre-Socratics” like Pythagoras, Zeno and Democritus — were more like what we’d describe as scientists today. They had theories about the nature of the universe, time, mathematics and change. Socrates (c. 470–399 BCE) was preoccupied with ethics.
The philosopher’s reputation and controversies hinge on his obstinate quest to understand what is good. On his quest he made a lot of powerful enemies. He was accused of not honouring the gods and corrupting the youth of Athens and subsequently put on trial. He was sentenced to death, and accepted his fate with immense courage.
Socrates never left any writing, and disdained the practice, preferring dialogue. As such he’s an enigmatic figure, known only through the writings of others.
We know Socrates mostly through Plato’s writing. While most scholars agree the character of Socrates became a mouthpiece of Plato’s ideas in Plato’s later writing, Plato’s account of Socrates’ trial and execution are deemed to reveal a more authentic Socrates.
It’s a great introduction to ancient philosophy and perfectly readable thanks to Plato’s penchant for uncomplicated dialogue. The two dialogues most worth reading are “The Apology” — Socrates’ defence at his trial (spoiler: he didn’t apologize, “Apology” is a traditional word for legal defence), and “Phaedo” — the last hours of Socrates’ life, which he spent talking with his friends before dying.
Both dialogues are surprisingly moving and full of valuable insights into goodness, courage, wisdom and self-possession.
“On the Nature of Things” by Lucretius
“The greatest wealth is to live content with little, for there is never want where the mind is satisfied.”
Epicurean philosophy is perhaps the most misrepresented in the modern world. We equate “Epicures” with being gluttons for buttery food and expensive wine, since to be Epicurean is to pursue pleasure.
The latter point is true, the goal of Epicureanism is indeed pleasure — it’s an unapologetically hedonistic philosophy.
But Epicurus (341–270 BCE) teaches us to find pleasure not by pursuing our desires, but by changing them. The man who takes pleasure in a glass of water and a bowl of lentils is a man who’ll be truly happy. To fully embrace Epicureanism is to leap off the so-called “hedonic treadmill” of unquenchable desires for things and popularity.
It’s no surprise that in the brutal madness of the ancient European world, Epicureanism became wildly popular. A personality cult developed around Epicurus long after his death in the Roman Empire. His likeness was reproduced in industrial quantities, followers of his philosophy celebrated his birthday as a holiday and the rich among them carved his ideas in stone monuments.
Sadly, despite being a prolific writer, most of Epicurus’s writing is lost. The most comprehensive explanation of Epicurean philosophy we have is On the Nature of Things, an epic poem written by a mysterious Roman Epicurean named Lucretius (c. 99–55 BCE).
Epicureanism isn’t just a moral philosophy, it’s a whole world view. The philosophy is based on Ancient Greek science and, oddly for an ancient philosophy, it is broadly materialistic and has no place for the supernatural or spirituality.
Lucretius manages to take all this materialist theory and turn it into a beautiful work of epic poetry. Some passages dazzle in their beauty. While the scientific theories of Epicurus are no longer viable, the philosophy has an internal coherence that makes it relevant — even urgent — to modern readers experiencing climate change and increasing levels of anxiety in society.
English translations of the poem can be challenging reading. If you’re not used to reading a lot of poetry, I’d recommend the prose translation by Martin Ferguson Smith and published by Hackett. The modern poetic translation by A.E. Stallings is wonderful.
“Letters from a Stoic” by Seneca the Younger
“Life is very short and anxious for those who forget the past, neglect the present, and fear for the future.”
The popularity of Stoicism has exploded in recent years thanks largely to the practical benefits of Stoicism for mental well-being. The philosophy had a good deal of influence over modern Cognitive Behavioural Therapy (CBT) in its insistence that while we can’t control what happens to us, we can control our response to those events.
Seneca the Younger (c. 4 B.C.E — 65 C.E.) is the most accessible of the Stoic philosophers. Among the celebrated Roman writers of the “Latin Silver Age”, his writing style is rich with metaphor and simile. Seneca’s letters aim to guide, console and inspire his friend Lucilius with philosophy. Seneca not only writes beautifully of Stoicism, but also draws on other philosophies of Ancient Greece, including Epicureanism. The statesman demonstrates that philosophical contemplation is key to both feeling and being good.
In his letter on the shortness of life, Seneca makes the point that “life” is but a small part of what preoccupied people really live, the rest is “just time.” Seneca makes the distinction between living and merely existing and urges us to live.
“Essays” or “The Complete Works” by Michel de Montaigne
“Man is certainly stark mad; he cannot make a worm, and yet he will be making gods by dozens.”
Michel de Montaigne (1533–1592) is often described as “the world’s first blogger”. He wrote about anything and almost everything. The retired lawyer began writing essays to get himself out of a depression after the death of his father and a good friend.
Montaigne actually coined the word “essay”, as we commonly use it in English for a written paper, from the French verb Essayer, meaning “to try”.
He was humble, a self-professed amateur who wrote of his essays, “These writings of mine are no more than the ravings of a man who has never done more than taste the outer crust of knowledge.”
But it’s his insistence that he didn’t know anything and merely wrote for his own gratification that give his writing so much philosophical vitality.
Montaigne reinvigorated interest in the Sceptic school of philosophy, which had flourished in ancient Athens. His longest essay, “An Apology for Raymond Sebond”, is a demolition of the folly of certainty and dogma.
He had the motto “what do I know?” and was among the first writers whose subject was self-reflection. His writings, occurring during the reformation — a time of great upheaval and violence in Europe, are breezy and modern but full of timeless wisdom.
“Thus Spoke Zarathustra” and the “Joyous Science” by Friedrich Nietzsche
“Man is a rope, tied between beast and overman — a rope over an abyss… What is great in man is that he is a bridge and not an end.”
Friedrich Nietzsche (1844–1900) was a man in intellectual rapture yet on the edge of despair when he wrote these two books.
Nietzsche met and fell in love with Lou Salomé in 1882, a young intellectual who’d spurn him for his close friend Paul Rée. The philosopher was physically and mentally ill, suffering from the past traumas of his military service, he was heavily sedating himself to sleep.
Because of Salomé, he had become alienated from his disapproving family. Nietzsche retreated to isolation in Rapallo in northern Italy and set to work on a philosophical novel — his masterpiece, Thus Spoke Zarathustra.
By the 1880s Nietzsche had taken a wrecking-ball to many philosophical and moral assumptions. He had picked apart western morality, finding its roots in a “slave morality” of resentment in the Roman world. His scepticism led him to reject Christianity, faith in the “progress” of civilisation, and the idealism so dominant in philosophical thinking at the time. His new ideas alienated him from friends like the composer Richard Wagner.
In his Joyous Science, Nietzsche declared the “death” of God. This is not to say that God had literally died or that God literally did not exist. Nietzsche understood that scientific and societal progress had made the traditional idea of God untenable.
“How shall we comfort ourselves, the murderers of all murderers? […] What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?”
Having shattered the final idol, the philosopher set about constructing a new understanding of the world and our place in it. Thus Spoke Zarathustra and the Joyous Science contain some initial and influential ideas to achieve the vision of a post-religious morality.
In his novel, the Persian prophet Zarathustra (also known as Zoroaster) became his mouthpiece. Since Zarathustra was among the first religious leaders to make a binary distinction between good and evil, Nietzsche uses his voice to rectify this “mistake”.
The “death of God”, “eternal recurrence” and the Übermensch are concepts that grapple with morality in a world beyond religion and metaphysics. As the world continues to outpace the rules handed down to us, Nietzsche hoped to create new ideas of what it is to be human (a “tightrope” between beast and the Übermensch) and how we ought to create our future. | https://medium.com/the-sophist/philosophy-books-for-self-isolation-613fa62c26aa | ['Steven Gambardella'] | 2020-03-30 20:33:23.561000+00:00 | ['Philosophy', 'Culture', 'Books', 'Self Improvement', 'History'] |
Implementation of Logistic Regression without using Built-In Library | Implementation from scratch
Gradient descent is an optimization technique where we minimize the error by changing the values of coefficients repeatedly. To understand how logistic regression works, we will use gradient descent.
For this implementation, we are going to use the Breast Cancer data set. By importing the datasets module of the sklearn library, we can easily load this data set. It is a binary classification data set.
Now, let me run you through the implementation details — step by step.
First, we will import the numpy library for data manipulation. We will use the datasets module of the sklearn library to import the data, and the train_test_split function of sklearn to split the data into training and testing sets. We will also use matplotlib for visualization.
Here is the github link to the implementation code in python.
Fig 4. Importing Libraries and splitting data
We will store the independent variables in x and the dependent/output variable in y. Using the train_test_split function of sklearn, we will split our data.
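The code in the figures is shown as images in the original post; a minimal sketch of this first step might look like the following (the test size and random state here are my own assumptions, not necessarily the author’s values):

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split

data = datasets.load_breast_cancer()
x, y = data.data, data.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1234)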
The logistic sigmoid function is,
Fig 5. Logistic sigmoid function
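For reference, the formula shown in the figure is the standard logistic function: sigmoid(t) = 1 / (1 + e^(-t)), where t is the linear combination t = b0 + b1*x1 + b2*x2 + … + bn*xn.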
As I already mentioned, t is an equation consisting of variables (attributes) and coefficients. Our main aim is to find the coefficients of the equation in order to obtain good classification results.
Initially, we do not know the values of the coefficients. So, we will assign zero to all weights and the bias, and a small value to the learning rate. The number of weights is equal to the number of independent variables, i.e. in the beginning all coefficient values of the equation will be zero (b0, b1, …, bn = 0).
Fig 6. Initializing weights and bias to zero
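A sketch of this initialization, continuing the snippet above (the variable names are illustrative):

n_samples, n_features = x_train.shape
weights = np.zeros(n_features)  # one weight per independent variable
bias = 0.0
learning_rate = 0.0001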
As the values of all coefficients are 0, t = 0 for every input.
Hence, sigmoid(t) = sigmoid(0) = 0.5.
Thus, irrespective of the value of x, we will receive 0.5 as an output, and instead of an s-curve we will get a straight line.
We have defined a sigmoid function, which maps our output into real numbers in the range 0 to 1.
Fig 7. Defining Sigmoid function
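A minimal version of such a helper:

def sigmoid(t):
    # squashes any real number into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-t))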
Now, we have to change the values of the coefficients so that we can get a better model with higher accuracy. After finding better values, we will create a linear model using them, pass it to the sigmoid function to get the predicted value of the output variable, and, by defining a threshold, easily do the classification.
In other words, let’s assume that by using gradient descent we got new values for the coefficients. Using these values we will create a model (an equation), pass it to the sigmoid function to get predicted values between 0 and 1, and compare these values to the threshold to perform the classification.
To change the values of the coefficients, we will take the derivative of the error with respect to the weights and the bias. Then we will update the weights and bias by adding the negative of the learning rate multiplied by the derivatives we got w.r.t. the weights and bias.
Fig 8. Loop to update the Weights and Bias
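Since the figure is an image, here is a sketch of a standard gradient-descent update loop for logistic regression; the author’s exact code may differ, but the idea is the same:

for _ in range(1000):  # the number of iterations acts as the stopping criterion
    linear_model = np.dot(x_train, weights) + bias
    y_predicted = sigmoid(linear_model)
    # derivatives of the error with respect to the weights and the bias
    dw = (1.0 / n_samples) * np.dot(x_train.T, (y_predicted - y_train))
    db = (1.0 / n_samples) * np.sum(y_predicted - y_train)
    # step against the gradient, scaled by the learning rate
    weights -= learning_rate * dw
    bias -= learning_rate * db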
The above code shows the loop used for gradient descent to change the coefficient values.
Selecting the range of the for loop is an important choice with this model. A bigger range means more iterations and may give better accuracy, but the time complexity will increase. On the other hand, a smaller range may give a poor fit, as the final error achieved may not be near the point of minima. Also, with a bigger range we might over-fit the model, which results in lower accuracy on the test data.
There are many stopping criteria for gradient descent, like fixing the number of iterations, or stopping when the change in prediction error is less than some epsilon. I will try to explain gradient descent in detail in another blog. Here we are using the range of the for loop as the stopping criterion.
Fig 9. Creating object and fitting the model
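If the steps above are wrapped in a class, as the figure suggests, creating the object and fitting the model might look like this (the class and argument names here are assumptions for illustration):

model = LogisticRegression(learning_rate=0.0001, n_iters=1000)
model.fit(x_train, y_train)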
We will initialize the learning rate as 0.0001 and the number of iterations of the loop as 1000. The learning rate is a small value between 0 and 1 which controls the speed of change of the weights. A bigger learning rate means rapid changes and fewer epochs, but it might skip over the minima.
After running the for loop, we will get values of the coefficients and bias that give better results.
Fig 10. Updated Weights and Bias
This model, with the new coefficient and bias values, will be used on the test data to check the accuracy of the model.
Fig 11. Defining Predict function
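A sketch of such a predict helper, reusing the learned weights and bias from the loop above:

def predict(x):
    linear_model = np.dot(x, weights) + bias
    probabilities = sigmoid(linear_model)
    # values above the 0.5 threshold become class 1, the rest class 0
    return np.where(probabilities > 0.5, 1, 0)

y_test_predicted = predict(x_test)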
The predict function takes input data x and gives the predicted value of y. It first creates a linear model using the new weights and bias, then passes it to the sigmoid function, which returns the predicted values in the range 0 to 1.
We will set our threshold to 0.5. We will predict the label as 1 if the sigmoid function gives a value greater than 0.5; otherwise, we will predict it as 0. We will store these labels in y_test_predicted.
Fig 12. Accuracy Function
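A sketch of the accuracy helper:

def accuracy(y_true, y_predicted):
    # fraction of labels that were predicted correctly
    return np.sum(y_true == y_predicted) / len(y_true)

print(accuracy(y_test, y_test_predicted))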
The accuracy function gives the accuracy of the model. It takes the actual and predicted values and returns the fraction that match.
This model gives more than 92% accuracy with 1000 iterations. The accuracy of your model depends on many factors like the threshold, the number of iterations, and the type of data (right-skewed, left-skewed, etc.). Different values of the threshold or the loop range may give better output. One way to find the value of the threshold and the number of iterations is the elbow method. I have plotted the number of iterations against the respective accuracy. We can select the elbow point beyond which accuracy decreases or stops changing significantly.
I have tried running this model with different numbers of iterations. I received the maximum accuracy, i.e. 92.98%, with 1000 iterations. For 2000 iterations, the accuracy of the model is 92.10%. Here we can see the accuracy decrease even after increasing the number of iterations beyond 1000.
At 500 iterations, the accuracy obtained was ~91.23%. Hence, depending on our accuracy goal, 500 iterations can also be a reasonable choice, as putting in twice the number of iterations only gives a ~0.9% increase in accuracy.
Fig 13. Number of iteration and respective Accuracy
Fig 14. Plot of number of iteration and Accuracy
In this way, we can implement the logistic regression without using built-in libraries. | https://medium.com/technology-through-the-prism/implementation-of-logistic-regression-without-using-built-in-library-90e2afffa137 | ['Adesh Dalvi'] | 2020-06-28 11:17:30.471000+00:00 | ['Machine Learning', 'Python', 'Classification Algorithms', 'Data Science', 'Logistic Regression'] |
Replacing VBA with Java in Excel | Excel is ubiquitous in nearly every workplace. From top tier investment firms and large scale engineering companies right down to individual sole traders, people get work done using Excel.
This article will look at some of the problems and advantages of using Excel, and how using Java embedded in Excel those problems can be overcome.
You don’t have to look far to find criticism of Excel and cases where its mis-use has resulted in heavy losses to companies. Over the last few years there have been many cases where bugs in Excel spreadsheets have been at least partly to blame for embarrassing and costly errors.
With such risks, why do companies still use Excel, and what can they do to prevent similar situations?
The main reason why Excel is so heavily used is worker productivity. Someone using Excel can do an incredible amount of complex work far more effectively than with any other tool.
Software developers often argue that the same can be achieved in their favourite programming language. Technically they may be correct, but that assumes that everyone has the time and appetite for learning to be a developer (which takes most of us many many years).
Most companies don’t have the resources to have developers dedicated to each business user, and even if they did then communicating what’s needed and iterating to get the desired outcome would struggle to compete with an individual ‘knocking together’ an Excel spreadsheet.
The Problems with Excel
What is it about Excel that makes it prone to errors? Nothing in itself, but there are various things that developers are used to that reduce the same sorts of risks in non-Excel based solutions. The following is a list of a few weaknesses in how Excel spreadsheets are typically developed.
Too much complexity. A spreadsheet may start off fairly simple with a few cells and formulas. Then bit by bit it grows and grows. Ranges get duplicated to handle more and more cases or multiple sets of data until it’s hard to reason about what’s going on. The task may still be reasonably simple, but because of the duplication required and the fact that each cell can only hold one unit of data, a spreadsheet can sort of explode into something way too complex.
Minimal or no testing. Excel spreadsheets are subject to very little in the way of tests. There are typically no unit tests written for VBA code, and the only functional tests are often by the author of the spreadsheet on a limited set of inputs. When another user has to use the same spreadsheet, there’s a high chance that, because they’re not familiar with the nuances of how it works, they’ll stumble across some error that was never tested for.
Stale or outdated data. Connecting Excel to external data sources can be tricky. It can connect directly to a database, but what if the data you need is produced by another system, or you don’t have direct database access to the data you need? Data is often copied and pasted from reports from other systems to Excel, often with no way of knowing when the data was last copied or even if it is complete.
Bugs spread out of control. Even if you know there is an error in a spreadsheet, and you’ve figured out how to fix it, how can you find all the instances of other spreadsheets that have copied and pasted the same bit of VBA code, or even just copies of the exact same spreadsheet? The chances are that you can’t. Spreadsheets get copied and emailed around, and there’s no separation between the spreadsheet, data and code.
Version control hell. When you’re working on a large spreadsheet and you get to the point that it’s stable and working, what do you do? Most likely the answer is you save a copy — maybe with the date added to the file name. That’s about as far as version control goes in Excel. What if a change has unintended consequences? How do you know when that change was introduced? It’s almost impossible!
How can we tackle these problems?
These problems all stem from taking something that on the surface is quite simple (a grid of related numbers), and pushing it until it’s extremely hard to reason about its behaviour and correctness.
In essence, Excel is a way of expressing relationships between things. A1 is the sum of B1 and C1, for example. Where it starts to go wrong is when those relationships become more and more complex.
If you wanted to compute "A1 is the variance of daily returns of time series X", what would that look like in Excel? If you are an experienced Excel user you might be imagining a table representing time series X with extra columns for computing the returns and a formula to compute the variance. But what if now we want to compute the returns for another N time series? Copy and paste the formulas for each new time series? This is how errors start to creep in!
Much better is to encapsulate the algorithm of computing the variance of the daily returns of a time series into a function. Then we can call that repeatedly for as many time series as we want without risk of one of the intermediate cells getting edited or not copied correctly.
Now imagine that instead of a table of data, a time series could be represented by a single cell in Excel. If that could be achieved then we’re back at a simple relationship between two cells — “A1 =daily_returns_variance_of(B1)”. Suddenly our spreadsheet starts to look a lot less complex!
We still have the problem that the time series data has to come from somewhere. Rather than copy and paste from another system or database, what if we had a function that loaded a time series from that system or database directly? That way, each time we calculated the spreadsheet we'd know that the data was up to date and complete! To continue the previous example, we might have "B1 = load_time_series(ticker, start_date, end_date)". We'll come on to how exactly we can store a whole data set in a single cell later.
It often won’t just be the person using Excel who writes the functions they use. By providing end users with a solid set of Excel functions, technology teams can support the business more effectively than if they just write applications with only shallow Excel integration, like exporting reports.
How does Java help us?
By thinking about our spreadsheet and putting algorithms into functions, and by fetching data directly instead of copying and pasting, we’ve addressed a couple of the big issues with Excel spreadsheets. We haven’t yet touched on how those functions can be written, and the problems around testing, bug fixing and version control.
If we were to decide to write all of our functions in VBA (and believe me, many people do!) then we wouldn’t be taking advantage of any of the advances in software development made in the last 20 years!
Java has kept apace with modern software development and has a lot to offer over VBA.
Testing. Java has lots of different testing frameworks, all with different strengths and weaknesses. Whichever you choose though, being able to run automated test suites across your code base gives you confidence that it's doing the right thing. This simply isn't possible with VBA.
Extensive Library Support. Writing VBA code is often a case of writing quite standard algorithms found online and converting them to VBA. Want to do something trivial like sort an array of data? In Java that's no problem, but in VBA you will be responsible for making sure your sorting algorithm works, and without any testing. Now imagine writing a complex derivative pricing model!
Keep code outside of Excel. VBA code is usually saved inside the workbook, which is why when sharing workbooks bugs become so hard to track down. If your spreadsheet instead references a compiled Java library (JAR), then that is external to all the spreadsheets that reference it and can be updated easily.
Version Control. Java source code is just text, and so can easily be checked into a version control system. Most Java IDEs have excellent support for this as it is a standard part of modern software development.
Development Environment. The VBA editor (VBE) hasn't changed in years. It offers little more than a very basic text editor with rudimentary debugging capabilities. Java on the other hand has a range of excellent IDEs to choose from.
But Java isn’t part of Excel!
That’s true, but Excel has a concept of “add-ins” that allow developers to extend Excel’s functionality. One such add-in is Jinx, the Excel Java Add-In.
Using Jinx, you can completely do away with VBA and write worksheet functions, macros and menus entirely in Java.
Writing a worksheet function in Java is as simple as adding Jinx’s @ExcelFunction annotation to a Java method:
package com.mycompany.xladdin;

import com.exceljava.jinx.ExcelFunction;

/**
 * A simple Excel function.
 */
public class ExcelFunctions {

    /**
     * Multiply two numbers and return the result.
     */
    @ExcelFunction
    public static double multiply(double x, double y) {
        return x * y;
    }
}
You can return all the basic types you’d expect, as well as 1d and 2d arrays. For more complex types, you can write your own type converters, or you can return Java objects directly to Excel as object handles to be passed in to another Java method.
Jinx is free to download. See the Jinx User Guide for more information about how you can use Java as a replacement for VBA.
What was that about returning a time series as a single cell?
Jinx functions can return all the standard types you’d expect (ints, doubles, arrays etc.), but it can also return Java objects! When a complex Java object (like a class representing a time series loaded from a database) is returned it will be returned to Excel as an object handle. That object handle can then be passed to other Jinx functions, and the Java method will be passed the original Java object returned from the first function.
This is an extremely useful technique for simplifying spreadsheets to keep the complexity of the data involved away from the spreadsheet. When needed, the object can be expanded to an Excel array using a Jinx array function.
You can read more about these object handles in the Jinx Object Cache section of the user guide.
Other Languages
These techniques aren’t unique to Java.
The same could be said for other JVM languages like Scala or Kotlin. Jinx works with all JVM languages, not just Java.
Another popular language for writing Excel add-ins is Python. This can be achieved using the PyXLL add-in for Excel. | https://towardsdatascience.com/replacing-vba-with-java-in-excel-e9f5e28d4e5c | ['Tony Roberts'] | 2019-06-13 22:24:31.694000+00:00 | ['Excel', 'Vba', 'Java', 'Programming', 'Enterprise Technology'] |
Leashed | Leashed
Sometimes it’s the same person at both ends of the leash.
Photo by engin akyurt on Unsplash
Anxiety holds a retractable leash.
Most times keeping the leash short, other times offering finite slack. It taunts by dangling freedom just out of reach.
Resisting thrashing pulls and mournful howls, always at the ready to reel and send reeling.
The leash, firmly grasped, too thin to see from afar but capable of imprisoning even the strongest men.
Hostages at the end of their rope. | https://medium.com/scrittura/leashed-8d0593f4577e | ['Laura Misener'] | 2020-12-27 20:43:54.592000+00:00 | ['Poetry', 'Mental Health', 'Poem', 'Anxiety', 'Scrittura'] |
The Art Of Flight | Rustling leaves in shady trees
Sailing in the wind-
Gliding, diving down
To settled earth
From green to red their colors spread
Like butter on bread-
Shapes of puzzle pieces
Maneuver in the air
The art of flight, a friendly fight
To gracefully display,
Wings of paper
Thin as wafers
Branches clothed in shades of gold,
Of red and white and green-
Weighing heavy
To the ground | https://medium.com/the-partnered-pen/the-art-of-flight-da640586fa5e | ['Sarah E Sturgis'] | 2020-05-23 16:21:28.073000+00:00 | ['Poetry', 'Flight', 'Leaves', 'Nature', 'Writing'] |
A Letter to Santa From A UXer | Dear Santa,
I know you’re a real busy guy this time of year, what with all that toy making, but I hope you would spare a moment to hear a little constructive feedback on how to improve the user experience of Christmas.
Every year I mail my Christmas list to you (still haven’t got that hot dog roller, by the way) and I never have any idea if you’ve received my letter. If you have the ability to see me when I’m sleeping and know if I’ve been bad or good, at least send a delivery confirmation for goodness sake. How will I know if a full year’s worth of good behavior is even worth it if I don’t know if you’re receiving my list? Without the confirmation, it feels a little like I’m just mailing letters to some mythical being who wears a lot of red and is at risk for type II diabetes.
Also, how do I know if I am on the naughty or nice list? It’s 2015 Santa. A single year-end review is pretty antiquated. Routine performance reviews are the new standard. It would be nice if there was an app where I could track my behavior and list status. Perhaps complete with tips and improvement plans? I mean, banking has been online for 15 years Santa, what’s the holdup? Even a text every few months would go a long way. For example:
“Dear Nick,
Shape up or ship out.
Best, Santa Claus”
One more thing, (and this is just a nice to have) but it would be really great if you provided a window of when you plan on arriving Christmas night. I know you won’t come unless I’m asleep but life is unpredictable and I don’t wanna miss out just because I stayed up too late watching Home Alone and eating a sicko amount of figgy pudding. My cable company at least provides me with a window of when they plan on arriving (which they NEVER miss).
I know you’re immortal and all, but a little bit of modernizing could do you a lot of good. Remember Santa, give holiday cheer with the end user in mind.
Thanks for taking the time to read this Santa, I really appreciate it.
Sincerely,
Nick, age 27
To learn more about how we define UX , read our white paper below and let us know your thoughts. You can also follow Motivate Design on Twitter here and say “Hello!” on Facebook. | https://uxdesign.cc/a-letter-to-santa-from-a-uxer-980320c7c19c | ['Motivate Design'] | 2015-12-08 16:05:20.614000+00:00 | ['Memories', 'UX', 'Design'] |
Angular Learning (Part1) | Angular is a full-fledged frontend development framework. We actually don't need to configure anything to start developing with it: it comes with its own routing, HTTP, and testing modules, already well configured, so no extra configuration is needed. Its most valuable feature is its compact CLI. I say compact because we can generate components, classes, enums, and services with a single command, and the CLI automatically wires them into the codebase. It also integrates with cool features like RxJS, so we can do reactive programming easily with Angular.
Prerequisites of Angular
1. Node.js
2. NPM / Yarn
3. Angular CLI
4. Angular Material UI
Install angular Cli
> npm install -g @angular/cli
Then create a folder on your PC named angular-playground, and inside it run:
> ng new my-app
Please select the following options when generating the application:
Angular Router -> select yes
Application wide style SCSS -> select yes
To run the project
> ng serve -o
Main Structure of the project
|--my-app
|---- e2e (end to end test folder)
|---- src (Source code folder)
|-------- app (Contains modules, services, components, and classes)
|-------- environments (environment configuration file)
|-------- assets (Static images, logos, and other asset files and folders)
|-------- index.html (Entry point html file for application)
|-------- main.ts (Entry point TypeScript file that bootstraps the app)
|-------- polyfills.ts
|-------- styles.scss (Main and Global Style file)
|-------- test.ts (Test Configuration file)
|---- tsconfig.app.json (TypeScript configuration file)
|---- tslint.json (TypeScript lint file)
|---- karma.conf.js (Karma test configuration file)
|---- package.json (Npm Package defination and Node Scripting file)
|---- .browserslistrc (Supported browser list config file)
|---- angular.json (Angular configuration file)
|---- .editorconfig (IDE formatting configuration file)
Before we start, let's discuss some features related to Angular.
Component
Components are the fundamental building blocks of an Angular application; they are reusable and composable. When we generate the project, Angular adds a root component named AppComponent, which is used to bootstrap the application.
I plan to create a components folder and add a home component to it, so the components are arranged in one place and a new developer can easily see where the components of the application live.
> ng g c components/home
After running the command, 4 files for the home component are generated in the src/app/components/home folder:
src/app/components/home/home.component.scss
src/app/components/home/home.component.html
src/app/components/home/home.component.spec.ts
src/app/components/home/home.component.ts
home.component.ts is where we add the TypeScript code: methods, reactive programming, and the handling of the component's behavior.
home.component.scss is where we add styles for this specific component. When generating the project I selected SCSS as the application style format, which is why this file is generated.
home.component.html is the HTML template with data bindings; it lets you present in the browser the data processed in the component class.
home.component.spec.ts is where we add unit tests that check the component's validity and behavior.
Interpolation
String interpolation is the most popular way to render embedded expressions and bind data in template or markup text. By default, Angular uses double curly braces, {{ and }}, for interpolation.
<h3>First Name: {{ firstName }}</h3>
Template expressions
A template expression is another way to bind values and expressions to templates, markup text, and attributes. It is used for property binding on form elements, HTML elements, components, and directives.
We can add a template expression with the following syntax: [property]="expression".
Decorator
A decorator is a function that adds extra metadata to an Angular class. Several types of decorators are introduced in Angular, such as:
A class decorator expresses the intent of the class. It allows us to define that intent without actually putting the code inside the class, attaching code and behavior without modifying the class instance itself.
For example, the @Component decorator has selector, which defines the HTML selector where the template is rendered; template, where the developer can specify the HTML template that presents the data of the Angular class; and styleUrls, which defines the external style files applied to the specific component.
import { Component, Input } from '@angular/core';

@Component({
  selector: 'home',
  template: '<div></div>',
  styleUrls: ['./home.scss']
})
export class ExampleComponent {
  @Input()
  exampleProperty: string;
}
With a property decorator we can annotate a property of the class. For example, the @Input decorator tells the Angular compiler that the property is an input: it creates an input binding with the property name and links it to the corresponding input in the template. For example:
@Input()
This lets us control, from the Angular class, how the bound template input behaves.
Method decorators will be discussed in the next part of this series.
Directives
Attribute directive
An attribute directive changes the appearance or behavior of a DOM element.
We can generate a directive with:
> ng generate directive highlight
It generates the directive below, in which we can control the rendering behavior of the host element.
import { Directive, ElementRef, HostListener } from '@angular/core';

@Directive({
  selector: '[appHighlight]'
})
export class HighlightDirective {

  constructor(private el: ElementRef) { }

  @HostListener('mouseenter') onMouseEnter() {
    this.highlight('yellow');
  }

  @HostListener('mouseleave') onMouseLeave() {
    this.highlight(null);
  }

  private highlight(color: string) {
    this.el.nativeElement.style.backgroundColor = color;
  }
}
Then you can apply the directive to a DOM element:
<p appHighlight>Highlight Me!</p>
Structural Directive
Structural directives are responsible for HTML layout; they shape and reshape the DOM structure, typically by adding, removing, and manipulating elements.
<div *ngIf="name">{{name}}</div>
The * is syntactic sugar for something a bit more complicated:
<ng-template [ngIf]="name">
<div>{{name}}</div>
</ng-template>
Built-in attribute directives listen to and modify the behavior of other HTML elements, attributes, properties, and components. You usually apply them to elements as if they were HTML attributes, hence the name. They are:
1. NgClass
2. NgStyle
3. NgModel
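As a quick illustrative sketch (not from the original article), NgClass and NgStyle can be bound from a component like this; NgModel is left out here because it requires importing FormsModule:

import { Component } from '@angular/core';

@Component({
  selector: 'app-status',
  template: `
    <p [ngClass]="{ 'active': isActive }"
       [ngStyle]="{ 'font-weight': isActive ? 'bold' : 'normal' }">
      Status
    </p>
  `
})
export class StatusComponent {
  // Toggling this flag adds/removes the 'active' class and switches the font weight.
  isActive = true;
}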
Pipe
There are lots of pipes available in Angular, and they help you keep the code in Angular templates cleaner. If you want to format a datetime or display a JSON object in a template, a pipe is the tool for that. The async pipe is special: with it we don't have to call subscribe on an Observable ourselves, because it subscribes automatically. Angular also provides the ability to add custom pipes for templates.
1. Built-in pipes
2. Async pipe
3. Custom pipes
We can apply a pipe in a template with the following structure:
<div>{{ expression or fieldname | pipename }}</div>
For example, with the date, json, and currency pipes:
<div>{{ obj.datetime | date }}</div>
<div>{{ obj.person | json }}</div>
<div>{{ obj.price | currency:'JPY' }}</div>
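Since custom pipes are mentioned but not shown, here is a minimal sketch of what one could look like (the truncate pipe below is a hypothetical example, not part of the article; remember to add it to a module's declarations):

import { Pipe, PipeTransform } from '@angular/core';

// Usage in a template: {{ post.title | truncate:30 }}
@Pipe({ name: 'truncate' })
export class TruncatePipe implements PipeTransform {
  transform(value: string, maxLength: number = 20): string {
    if (!value) {
      return '';
    }
    // Cut long strings and append an ellipsis.
    return value.length > maxLength ? value.slice(0, maxLength) + '...' : value;
  }
}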
In this part of the application we will only discuss the reactive forms approach for consuming data from a form.
HTTP module with Observables to consume data from an API endpoint using a service
First, import HttpClientModule in src/app/app.module.ts:
import {HttpClientModule} from "@angular/common/http";
Then add it to the imports section of the app module in src/app/app.module.ts:
imports: [
BrowserModule,
AppRoutingModule,
HttpClientModule
],
We use a model to represent the data consumed from the external API and shown in the Angular application.
To keep a clean structure, I create a models folder and add a post.model.ts file to it.
In the src/app/models/post/post.model.ts file, add the following code to the model class:
export class PostModel {
  constructor(
    public body: string,
    public id: number,
    public title: string,
    public userId: number) {}
}
PostModel is a class; I use TypeScript constructor parameter properties to declare the model's attributes.
Then we need to generate a service in which we consume the API data using Observables and Angular's HTTP module:
> ng g service services/Post
After running the command, the generated service file looks like this:
@Injectable({
providedIn: 'root'
})
export class PostService {
constructor() { }
}
Now we have some question in mind,
What is Injectable() decorator?
What exactly does providedIn do?
Normally, for a service to be accessible we have to add it to a module's providers section.
providedIn takes these values:
providedIn: Type<any> | 'root' | null
When we use the value 'root', we can access the service application-wide; there is no need to add it to a module or another service as a provider.
When you provide the service at the root level, Angular creates a single, shared instance of the service and injects it into any class that asks for it.
providedIn: Module
You can also specify that a service should be provided in a particular @NgModule. For example, if you don't want a service to be available to applications unless they import a module you've created, you can specify that the service should be provided in that module:
import { Injectable } from '@angular/core';
import { UserModule } from './user.module';

@Injectable({
  providedIn: UserModule,
})
export class PostService {
}
Developer can also declare a provider for the service within the module:
import { NgModule } from '@angular/core';
import { UserService } from './user.service'; @NgModule({
providers: [UserService],
})
export class UserModule {
}
For example purposes, I use JSONPlaceholder as the API endpoint.
In the following code I use the HTTP module and RxJS to consume the data in PostService:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { PostModel } from '../models/post/post.model';

@Injectable({
  providedIn: 'root'
})
export class PostService {

  private url: string = 'https://jsonplaceholder.typicode.com/posts';

  constructor(private httpClient: HttpClient) { }

  public getPosts(): Observable<PostModel[]> {
    return this.httpClient.get<PostModel[]>(this.url);
  }
}
Declare the API endpoint in the url variable as a string:
private url: string = 'https://jsonplaceholder.typicode.com/posts';
To consume the data from the API, I declare the getPosts() function, which uses the HttpClient library and returns an Observable of PostModel[]. To avoid compile-time errors, the return type is declared explicitly as Observable<PostModel[]>.
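One thing the original service does not cover is error handling. A minimal sketch, assuming the same url and httpClient fields, could catch HTTP errors and fall back to an empty list using standard RxJS operators:

import { Observable, of } from 'rxjs';
import { catchError } from 'rxjs/operators';

public getPosts(): Observable<PostModel[]> {
  return this.httpClient.get<PostModel[]>(this.url).pipe(
    catchError(err => {
      // Log the failure and keep the stream alive with an empty result.
      console.error('Failed to load posts', err);
      return of([]);
    })
  );
}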
You may have a question in mind: what is an Observable?
Observable: a representation of any set of values over any amount of time. This is the most basic building block of RxJS.
An Observable instance has a subscriber function. This is the function that is executed when a consumer calls the subscribe() method. The subscriber function defines how to obtain or generate the values or messages to be published.
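To make that concrete, here is a small standalone sketch (not from the article) of an Observable whose subscriber function emits two values and then completes:

import { Observable } from 'rxjs';

// The subscriber function below only runs when subscribe() is called.
const greetings$ = new Observable<string>(subscriber => {
  subscriber.next('Hello');
  subscriber.next('World');
  subscriber.complete();
});

greetings$.subscribe({
  next: value => console.log(value),    // logs 'Hello', then 'World'
  complete: () => console.log('done')
});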
We have to subscribe to the Observable to load the post list in the home component, in src/app/components/home/home.component.ts.
In the ngOnInit() function I subscribe to the post list Observable and keep the data for use in the template:
import { Component, OnInit } from '@angular/core';
import { PostModel } from 'src/app/models/post/post.model';
import { PostService } from 'src/app/services/post.service';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
  styleUrls: ['./home.component.scss']
})
export class HomeComponent implements OnInit {

  name: string = "Angular Application Example";
  posts = new Array<PostModel>();

  constructor(private service: PostService) { }

  ngOnInit(): void {
    this.service.getPosts().subscribe(res => {
      this.posts = res.map(item => {
        return new PostModel(
          item.body,
          item.id,
          item.title,
          item.userId
        );
      });
    });
  }
}
In the constructor, we inject PostService using the dependency injection pattern, so we can access the methods implemented in PostService from the HomeComponent class.
The getPosts function returns an Observable of a PostModel array, which we cannot use directly in the template to render the page. So we take the data returned in subscribe and turn it into PostModel instances using the JavaScript map function.
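An alternative approach, not used in the article, is to skip the manual subscribe and let the async pipe mentioned earlier handle subscription and unsubscription. A minimal sketch, assuming the same PostService and PostModel:

import { Component } from '@angular/core';
import { Observable } from 'rxjs';
import { PostModel } from 'src/app/models/post/post.model';
import { PostService } from 'src/app/services/post.service';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
  styleUrls: ['./home.component.scss']
})
export class HomeComponent {

  // The template subscribes via: <tr *ngFor="let post of posts$ | async"> ... </tr>
  posts$: Observable<PostModel[]>;

  constructor(private service: PostService) {
    this.posts$ = this.service.getPosts();
  }
}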
Then add the following code snippet to the home.component.html file to render the list of posts:
<div class="container">
<table>
<tr>
<th>ID</th>
<th>Body</th>
<th>Title</th>
<th>UserID</th>
</tr>
<tr *ngFor="let post of posts">
<td>{{ post.id }}</td>
<td>{{ post.body }}</td>
<td>{{ post.title }}</td>
<td>{{ post.userId }}</td>
</tr>
</table>
</div>
We use the *ngFor directive to iterate over the list of post elements. The *ngFor directive follows this structure:
*ngFor="let <item> of <item-list>"
The output of all the code in this section is shown in the browser at http://localhost:4200.
Angular Material UI Integration
Go to the terminal and run the command below:
> ng add @angular/material
Then select Deep Purple or Indigo/Pink as the Angular Material theme. You can also select another one if you want; my preference is Indigo/Pink.
If you’re using Angular CLI, this is as simple as including one line in your app.component.scss file:
@import '@angular/material/prebuilt-themes/indigo-pink.css';
Then import the individual Angular Material component modules in app.module.ts:
import {MatToolbarModule} from "@angular/material/toolbar"
import {MatIconModule} from "@angular/material/icon"
import {MatTableModule} from "@angular/material/table"
After importing them, you have to add them to the imports section of the app module, otherwise you won't get access to the components:
imports: [
BrowserModule,
AppRoutingModule,
HttpClientModule,
BrowserAnimationsModule,
MatToolbarModule,
MatIconModule,
MatTableModule
],
If you get an error like the example below in the command line while building the Angular app after adding the Angular Material component modules:

ERROR in node_modules/@angular/material/menu/menu-module.d.ts:14:22 - error NG6002: Appears in the NgModule.imports of AppModule, but could not be resolved to an NgModule class.
This likely means that the library (@angular/material/menu) which declares MatMenuModule has not been processed correctly by ngcc, or is not compatible with Angular Ivy. Check if a newer version of the library is available, and update if so. Also consider checking with the library's authors to see if the library is expected to be compatible with Ivy.
14 export declare class MatMenuModule {
   ~~~~~~~~~~~~~
Stop the angular app and run the following command below
> npm install && ng serve
For Windows, run the commands one after the other:
> npm install
> ng serve
To check that Angular Material was added successfully to our project, add a Material toolbar (navbar) to app.component.html.
Remove the Angular-generated markup in app.component.html, then add this code to check:
<div>
<mat-toolbar color="primary">
<mat-toolbar-row>
<span>My App</span>
<span class="example-spacer"></span>
<button mat-icon-button class="example-icon" aria-label="Example icon-button with menu icon">
<mat-icon>menu</mat-icon>
</button>
</mat-toolbar-row>
</mat-toolbar>
<app-home></app-home>
</div>
<router-outlet></router-outlet>
<app-home></app-home> is the home component. When we want to embed a component inside another component, we add it by its selector name as an element, using that syntax.
To align the toolbar elements we use flex in the app.component.scss file:
@import '@angular/material/prebuilt-themes/indigo-pink.css';
.example-spacer {
flex: 1 1 auto;
}
Change the plain table into an Angular Material table with the following markup in home.component.html:
<div class="container">
  <table mat-table [dataSource]="posts" class="mat-elevation-z8">

    <!-- Id Column -->
    <ng-container matColumnDef="id">
      <th mat-header-cell *matHeaderCellDef> No. </th>
      <td mat-cell *matCellDef="let element"> {{element.id}} </td>
    </ng-container>

    <!-- Body Column -->
    <ng-container matColumnDef="body">
      <th mat-header-cell *matHeaderCellDef> Body </th>
      <td mat-cell *matCellDef="let element"> {{element.body}} </td>
    </ng-container>

    <!-- Title Column -->
    <ng-container matColumnDef="title">
      <th mat-header-cell *matHeaderCellDef> Title </th>
      <td mat-cell *matCellDef="let element"> {{element.title}} </td>
    </ng-container>

    <!-- UserId Column -->
    <ng-container matColumnDef="userId">
      <th mat-header-cell *matHeaderCellDef> User Id </th>
      <td mat-cell *matCellDef="let element"> {{element.userId}} </td>
    </ng-container>

    <tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
    <tr mat-row *matRowDef="let row; columns: displayedColumns;"></tr>
  </table>
</div>
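The template above references a displayedColumns property that the original snippet does not define. For the table to render, the component needs something along these lines (the column names are assumed to match the matColumnDef values used above):

export class HomeComponent implements OnInit {
  // Column order for the Material table; the names must match the matColumnDef values.
  displayedColumns: string[] = ['id', 'body', 'title', 'userId'];
  // ... rest of the component as shown earlier
}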
Routing with Angular
First, generate an About Us component:
> ng g c components/about-us
Then add the following code to the app-routing.module.ts file:
import { AboutUsComponent } from "./components/about-us/about-us.component";
import { HomeComponent } from "./components/home/home.component";

const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'home', component: HomeComponent },
  { path: 'about-us', component: AboutUsComponent }
];
The path: '' route is served when there is no path in the URL; it shows the HomeComponent.
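A common addition, not in the original article, is a wildcard route that sends unknown URLs back to home. The router matches routes in order, so it has to be the last entry in the routes array:

const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'home', component: HomeComponent },
  { path: 'about-us', component: AboutUsComponent },
  // The wildcard catches anything the routes above did not match.
  { path: '**', redirectTo: '' }
];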
Remove <app-home></app-home> from app.component.html, because we don't need it anymore; the Angular router will render whichever component matches the current route inside <router-outlet>.
<div>
  <mat-toolbar color="primary">
    <mat-toolbar-row>
      <span>My App</span>
      <a href="/home" mat-raised-button color="primary">Home</a>
      <a href="/about-us" mat-raised-button color="primary">About</a>
      <span class="example-spacer"></span>
      <button mat-icon-button class="example-icon" aria-label="Example icon-button with menu icon">
        <mat-icon>menu</mat-icon>
      </button>
    </mat-toolbar-row>
  </mat-toolbar>
</div>
<router-outlet></router-outlet>
Reactive Form Modules
Add ReactiveFormsModule to the module in app.module.ts.
Import the module
import { ReactiveFormsModule } from '@angular/forms';
Add it to imports to enable the ReactiveFormsModule
imports: [
  ...
  ReactiveFormsModule
],
For example purposes, I will build a contact form in the AboutUs component. Add the form controls to the class in about-us.component.ts:
import { Component, OnInit } from '@angular/core';
import { FormControl, FormGroup } from '@angular/forms';

@Component({
  selector: 'app-about-us',
  templateUrl: './about-us.component.html',
  styleUrls: ['./about-us.component.scss']
})
export class AboutUsComponent implements OnInit {

  contactForm = new FormGroup({
    name: new FormControl(''),
    email: new FormControl(''),
    details: new FormControl(''),
  });

  constructor() { }

  ngOnInit(): void {}
}
We add the FormControls to a FormGroup so we can group the form controls used by the template. ReactiveFormsModule already includes these classes for handling the form elements of the template. We declare contactForm here and will bind it to the template later.
In the about-us.component.scss file, add the following code for a simple design:
.example-form {
min-width: 150px;
max-width: 500px;
width: 100%;
}
.example-full-width {
width: 100%;
}
In the AboutUs component template, add the following code:
<form [formGroup]="contactForm" class="example-form">
  <mat-form-field class="example-full-width">
    <mat-label>Name</mat-label>
    <input matInput placeholder="tom" formControlName="name">
  </mat-form-field>

  <mat-form-field class="example-full-width">
    <mat-label>Email</mat-label>
    <input matInput placeholder="[email protected]" formControlName="email">
  </mat-form-field>

  <mat-form-field class="example-full-width">
    <mat-label>Details</mat-label>
    <textarea matInput placeholder="Please Contact" formControlName="details"></textarea>
  </mat-form-field>

  <button mat-raised-button color="primary">Primary</button>
</form>
In this code snippet I use the Material form field components. In the first line I specify [formGroup]="contactForm", which automatically binds the contactForm FormGroup declared in the about-us.component.ts file. formControlName creates the binding between each input and the corresponding FormControl specified in the FormGroup.
Add the onSubmit function to the about-us.component.ts file to handle the form when it is submitted:
export class AboutUsComponent implements OnInit {
  ...

  onSubmit() {
    console.warn(this.contactForm.value);
  }
}
Bind the submit handler in the Angular template with ngSubmit as below; when we submit the form, this function will be triggered in the AboutUsComponent class:
<form [formGroup]="contactForm" (ngSubmit)="onSubmit()" class="example-form">
Then adjust the button so it is disabled while the form value is not valid:
<button type="submit" mat-raised-button [disabled]="!contactForm.valid" color="primary">Save</button>
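The [disabled]="!contactForm.valid" binding only has a visible effect if the controls carry validators, which the original article does not add. A minimal sketch of how the FormGroup could be declared with the built-in validators:

import { FormControl, FormGroup, Validators } from '@angular/forms';

contactForm = new FormGroup({
  // 'name' and 'email' must be filled in, and 'email' must look like an email address.
  name: new FormControl('', Validators.required),
  email: new FormControl('', [Validators.required, Validators.email]),
  details: new FormControl('')
});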
Output of the code is
You can find all source code in | https://medium.com/swlh/angular-learning-part1-df9f9f0f1337 | ['Tariqul Islam'] | 2020-10-09 06:26:49.344000+00:00 | ['Angular', 'Rxjs', 'Typescript', 'Learning And Development', 'Material Design'] |
Organizing your Python Code | Level 1: Functions and classes.
Both functions and classes are natural aggregators: functions typically deal with statements (think actions or verbs and sentences) and classes with objects (think, well, classes of things, or nouns and adjectives). Programmatically they are deep subjects on their own and the cornerstones of the language, yet you could have a complex script or program run without them, so why use them?
A well designed function will save you space and can be used as a sentence, building block or logic unit; a well designed class will dramatically expand your vocabulary, and together they will allow you to speak in paragraphs rather than yell commands willy nilly. Let's for instance rewrite the previous example with functions:
def sing(line):
    print('Ma Ma')
    if line == 1:
        print('Mia')
    else:
        print('Mia let me GO !')

sing(1)
sing(1)
sing(2)

# OUTPUT:
# Ma Ma Mia Ma Ma Mia Ma Ma Mia let me GO !
Classes (in the words of the docs) bundle data and functionality together; this allows you to start thinking in terms of more complex things. Here for instance we can create a class that stands for a chorusSinger. Sure, there is more code to contend with, but we can now create unlimited chorus singers and ask them to sing the appropriate lines:
class chorusSinger:
    """This class creates a chorus singer that
    can sing one of two lines."""

    line1 = "Ma Ma Mia"
    line2 = "let me GO !"

    def sing(self, line):
        """This function sings one of 2 lines"""
        if line == 1:
            print(self.line1)
        else:
            print(self.line2)

# Create chorus singers:
chorusSinger1 = chorusSinger()
chorusSinger2 = chorusSinger()

# Let them sing !:
chorusSinger1.sing(1)
chorusSinger2.sing(1)
chorusSinger1.sing(1)
chorusSinger2.sing(2)

# OUTPUT:
# Singer 1: Ma Ma Mia
# Singer 2: Ma Ma Mia
# Singer 1: Ma Ma Mia
# Singer 2: let me GO !
Note that we also added comments in the form of """docstrings""", which can later be used to document your code, and #inline comments, which help you re-read your own code.
We are not quite done with functions and classes: the execution, if you notice, is just left hanging there at the end of the previous example, and if you wanted to add sections or other singers it could get messy spaghetti. So we can further organize by adding a function that does that; once more, what we earn is the power of multiplicity along with readability:
class chorusSinger:
    """This class creates a chorus singer that
    can sing one of two lines."""

    line1 = "Ma Ma Mia"
    line2 = "let me GO !"

    def sing(self, line):
        """This function sings one of 2 lines"""
        if line == 1:
            print(self.line1)
        else:
            print(self.line2)

def singPart():
    """Creates Singers and tells them to sing"""
    chorusSinger1 = chorusSinger()
    chorusSinger2 = chorusSinger()
    chorusSinger1.sing(1)
    chorusSinger2.sing(1)
    chorusSinger1.sing(1)
    chorusSinger2.sing(2)

def main():
    """Plays parts"""
    singPart()
    singPart()

# Runs the program
main()

# OUTPUT:
# Singer 1: Ma Ma Mia
# Singer 2: Ma Ma Mia
# Singer 1: Ma Ma Mia
# Singer 2: let me GO !
# Singer 1: Ma Ma Mia
# Singer 2: Ma Ma Mia
# Singer 1: Ma Ma Mia
# Singer 2: let me GO !
This simple structure is fairly common, can serve as the basis of many short programs or scripts and open the door to more complex ones; here’s an overview of this pattern: | https://k3no.medium.com/organizing-your-python-code-ca5445843368 | ['Keno Leon'] | 2020-01-16 06:04:22.894000+00:00 | ['Coding', 'Programming', 'Software Development', 'Python', 'Data Science'] |
How I Designed My Own Full-Stack ML Engineering Degree | Earlier this year I came across a viral Twitter thread by Randall Kanna about how to create one’s own computer science degree with free online content. It was not only excellent for people who don’t have prior knowledge in computer science, it’s also valuable for new software engineers who didn’t major in CS in college. That’s when I thought about creating my own full-stack machine learning engineering degree. After being in the tech industry and having learned it the hard way, I believe a custom-designed curriculum is going to be valuable to myself and others who have similar goals like mine. I curated a collection of the best resources I found, wrote down a curriculum, shared it on Github and Twitter where I called it “My CS Degree” with a focus on Full-Stack ML Engineering. Since then, a lot of people have starred and forked it. I’m glad that they found it helpful.
In this article, I will explain the philosophy behind this curriculum, who can benefit from it, and how to reason about what and how to learn given your personal goal and the exploration-exploitation dilemma in the vast world of knowledge. If you are only interested in the actual curriculum, check it out on Github.
Disclaimer
This is not trying to be a one-size-fits-all machine learning degree. There are different career paths in machine learning. Everyone has their own background, career goals, and interests. The diversity in the field is healthy and should be encouraged. What shouldn’t be encouraged is the wrong organizational separation of responsibility. But more on that later.
The Target Audience
Before you read on, this question must be answered first: what is the goal of this curriculum, and who is it for?
If you want to build and ship machine learning powered applications end-to-end, from problem formulation to production, this curriculum is for you.
You can apply the skills learned here to find a job, or bootstrap your machine learning startup.
Why Should You Care
The field of data science and machine learning engineering is enormous and fast-evolving, nobody can learn everything. Traditional universities are too slow to design new curriculums. At the same time, there are tons of world-class educational resources online. The advantage of designing such a customizable online degree is easy to see. Even so, it is very hard for beginners to design such a degree for themselves because what they often see is stuff like this:
119 easy steps to becoming a data scientist
The wrong way to start is to try to cover as many points as possible in that subway map. In addition, if you want to become end-to-end, you need a similar map but for software engineering as well! A sane person would immediately see through the complexity on the surface and identify the real question:
What is your goal and how to determine relevance and priority based on it?
Having worked as a software and machine learning engineer for several years, I have developed my philosophy of learning in this field. I decided to write about my experience and share some advice with newcomers on what not to do, and what to do instead.
The Controversial Separation of Responsibility between Data Scientists and Engineers
Why the emphasis on “end-to-end” and “production”? Some data scientists build models and hand them off to engineers to productionize later. This is how data science organizations operate in many companies now and it is gradually recognized as an anti-pattern for those who need models in production. In Eugene Yan’s excellent article Unpopular Opinion — Data Scientists Should Be More End-to-End, he explained why it is the case eloquently. I completely agree and won’t repeat it here.
One major reason that this separation exists is the distribution of expertise in the talent market. Internally, it becomes perceived expertise based on job titles and descriptions. The average data scientist has a degree in statistics or other math-heavy fields with less knowledge in computer science. An engineer usually has a CS degree but lacks modeling expertise. Do you see the difference? It is a legacy issue caused by traditional universities. For most companies, there aren’t enough talents who learned both well enough in school or from previous experience. Even if the talents exist, a wrongly structured organization tends to underutilize them, or even pit data scientists and engineers against each other. It is time to rectify it with a set of more up-to-date expectations.
Nowadays, it is not unreasonable to ask for both software engineering and machine learning skills from one person. Since both are very big fields, it is unreasonable to expect deep knowledge in every domain, but the person should have a functional set of skills where they can handle end-to-end workflows in their strongest area reasonably well.
This isn’t to say we don’t have skill specialization anymore.
Think about it this way: each person has a relatively constant skill capacity that can be spread out to different topics. Traditionally, people’s skillsets form a software engineering cluster and a modeling cluster. As the space gets more complex, the boundary between these two clusters doesn’t make sense anymore. A person can be deeply specialized in a collection of topics in both clusters. The total volume they cover in this space is still roughly the same as traditional university-educated people, but the possible combinations of topics and depths increase significantly.
This increased diversity of expertise is a very good thing for creativity as well. Steve Jobs famously said:
If you’re gonna make connections which are innovative … you have to not have the same bag of experiences as everyone else does.
To be more in-demand in the current job market by being end-to-end, and not subject yourself to the limits imposed by traditional educational systems, you need to be result-driven. Recognize your personal interests and the trend in technology, create our own path that leads to value creation.
My Background
A little digression before I dive into the structure and philosophy of this degree. I have an undergraduate degree in physics and used to study machine learning in grad school before deep learning became the hottest thing. I didn’t study computer science systematically but did take courses in algorithms and data structures. So you may see me fall into the “data science” camp mentioned above in terms of education. As for real-world software engineering and web development, I was clueless when I graduated.
With a short period of intense preparation and some luck, I started my career as a software engineer at a Silicon Valley company, working on full-stack web development with some of the top engineers in the field.
Nobody had a bigger imposter syndrome than mine.
I went into the office every day feeling extremely insecure, grinding my way through each line of code, and somehow made it work without understanding the big picture how each component talked to each other.
It was initially painful but ultimately rewarding. I was lucky to have worked with an amazing team and learned a ton. During that period I also tried to make contributions to machine learning projects that were outside my team’s area. I began to realize the importance of software and data engineering. It is the most important aspect of any project, machine learning or not. Ask any machine learning professional, they will tell you that building models is just a tiny part of the job. Building something that people can actually use is almost an entirely different problem.
After that, I joined an early-stage startup to experience what it’s like to be one of the founding members of a machine learning team. This is the period when I realized that formulating the right problem based on business use cases is the hardest of all tasks. The next hardest is how you collect and engineer your dataset. In all cases, modeling is a relatively minor factor of success, which might be very counterintuitive to beginners who came from classrooms where datasets and problems are well-defined and handed to them.
The road to a full-stack machine learning engineer may not look as glamorous as becoming a rockstar researcher. The current machine learning education online and offline often don’t even mention these factors that are more important than modeling in real-world ML applications:
Computer science fundamentals and software engineering
Understanding how to formulate the problem correctly
Designing a system that works reasonably well in production with constraints
So if your goal is a career in building production-ready machine learning applications and not academic research or Kaggle competitions, the priority is to strengthen these areas and not to follow the next shiny model or tool every couple of weeks. This leads to the philosophy of how I structured my curriculum.
The Structure of the Curriculum
There are 2 main ways to categorize the courses in this degree.
1. Group by Bottom-up or Top-down Learning
There are two types of courses in this full-stack ML curriculum: general knowledge courses (bottom-up) and project-based courses (top-down).
General knowledge courses are for indexing knowledge in the brain into an organized, connected, and easy-to-search system. This is bottom-up learning and it is best when limited to the basics.
Project-based courses are where the real learning is. Come up with a small product you want to build, do some research on the necessary parts to learn, learn them on-demand as you build it up. When you are done, deploy it and try to get some users. Write an article teaching others by explaining the process in detail. It could become more useful than your resume in the future! This is top-down learning, an approach that is most effective in learning anything practical but is usually not adopted by traditional schools.
2. Group by Domain
Another way to group the courses is by knowledge domain. My degree has these major domains: computer science fundamentals, deep learning fundamentals, software engineering (including MLOps), and natural language processing.
Being end-to-end usually means you need to write production services. This requires you to understand the basics of computer science:
How does the computer work?
Courses: Computer Architecture, Operating Systems
How does the Internet work?
Courses: Computer Networking, Distributed System Design
How do databases work?
Course: Intro to Databases
How do some common algorithms and data structures work?
Course: The Coding Interview
There is no need to understand every topic in-depth, but the key ideas are needed to be effective as an engineer. After going through these courses, at least you know what relevant info to look for when facing a new problem at work.
For machine learning and deep learning, I highly recommend fast.ai. It is the best top-down teaching on the internet and it’s free! Jeremy packs rich content in the lectures, you’ll have to watch the lectures more than once to get the most out of them. It all pays off when you build your own deep learning project using fastai techniques to achieve state-of-the-art results.
For technical interviews, I recommend the book Elements of Programming Interviews. The goal is not quantity but quality. Once you understand the common algorithms, most problems are just their variants.
For production machine learning / MLOps which is the core of this degree, there is an online course currently being released (as of Nov 2020) Applied ML in Production created by MadeWithML. This is a rare gem that lets you take a close look into how a real production ML system is built from scratch. Check out other MLOps courses in the curriculum to complement your learning.
On top of these domains, you can pick your specialization. My interest is in natural language processing and sequence modeling, so I added a project course for it. You can come up with your own and use it as a capstone project, showcase your work to the world once you are done.
This is just a quick recommendation for each domain, there are a lot of other great courses listed on my Github page. The curriculum is open-source so you can fork it, trim down the ones not relevant to you, or add more as you wish.
Bottom-up or Top-down? — My Learning Philosophy
Neither bottom-up nor top-down is the “best way of learning”. They serve different purposes.
Top-down learning is usually not adopted in schools. The textbooks and lectures always have an organized flow of logic. Necessary knowledge points are introduced before more advanced topics, and it truly had an effect on me — it made me feel uncomfortable dealing with a complex task in which I had gaps in knowledge. It made me naturally want to understand everything before I’d start the task. Ironically, almost all real-world problems are complex and you often don’t understand every underlying mechanism. But you don’t have to know everything to do it! The stark contrast between school and reality shows that top-down learning needs much more attention. Without it, a lot of potentially productive people don’t even start.
On the other hand, there are cases where bottom-up is necessary. For instance, if you learned to code without learning computer science basics, you quite possibly will never encounter a problem that explicitly requires you to learn them.
You may become a somewhat competent web developer or data scientist without knowing how the computer works, why a bunch of transistors become programmable, what it takes to make the computer understand your code, or what makes the code efficient, despite it being your workhorse and the greatest machine mankind has ever created.
This is harmless to an extent. Many coders who didn’t major in CS made it in the tech industry without needing to master the basics, myself included. Yet, I can tell you that it does make a difference long term. It limits the things you can do and how good you can get. So if you are like me, taking the time to learn the basics is worth it.
Another example that favors bottom-up learning is math — the core of machine learning. Math is the subject where you work with a set of axioms, operations, and theorems as building blocks to solve new problems you’ve never seen before. The magic about math is that you can pretty much solve any problem once you master the logical building blocks, it’s just a matter of how to use the building blocks to get there (this statement has a caveat, Google “computability theory” if you are a nerd).
Algorithm is another form of math. When you learn the likes of depth-first search or breadth-first search, you can generalize and solve problems that are difficult to solve otherwise.
Enough with the philosophy, but when to use which?
When dealing with a complex system, it is not always possible to decompose it into clear building blocks that follow a strict line of logic. In this case, top-down is better. You try what works and what doesn’t in order to get a high-level feel of how to do it correctly. Examples include learning to play tennis, to swim, to recognize an object in an image (computer vision), or to build a new product (entrepreneurship).
When preparing for a coding interview, however, internalizing and generalizing the algorithms and data structures is key. It needs a more bottom-up approach.
Avoid the Traps
When there is an abundance of great resources, it’s easy to fall into the traps of the learning process. Here are some points I frequently need to remind myself of.
Trap 1. Learning it as a University Degree
Don’t do this as a university degree. You don’t need to finish one course after another religiously in order to graduate. Think about it, what are the things you can still remember from your school days? It could just be a handful of mental models and a personal way of learning new things. The reality is, you won’t remember much from this curriculum either. So do this instead:
Distill knowledge into mental models in short notes that are easy to remember. Be result-driven. Which domain is the one you most urgently need to work on? Pick one relevant general knowledge course and one project course at a time. Ignore others.
Trap 2. Lack of Positive Feedback
When you are starting out, set some humble goals to feel good about your progress. If you get discouraged too often, you won’t be able to make it through this long journey. You have to enjoy it to go on.
Trap 3. Chasing after Trendy New Things
When you are used to curating good resources, you may get distracted every now and then just to find new things to collect. In the field of machine learning, there are countless new papers, new tutorials, new tools coming out every day. If you constantly chase them, you will feel insecure forever.
Learning a new thing is an investment. It is a much better investment to first learn the fundamentals well because they stay the same and won’t be outdated for a very long time.
For example, there’s no need to feel insecure if you are not familiar with Spark when you don’t use it in your job. Instead, learn the basics of distributed computing. It will supercharge your future learning of any big data framework.
We need to ground ourselves every day, to cool our heads in the era of information overload and focus on a small set of truly important things.
Trap 4. Not Showcasing Your Work
If you are like me, we always feel insecure about the quality of the work we do. There are brilliant people out there who are more experienced and knowledgeable, what if my work is not good enough and people will laugh at me?
The truth is, people don’t care. We shouldn’t take ourselves too seriously either. Everybody is a student. If someone doesn’t think they are a student, they are not growing.
We don’t need to compare ourselves to others, only compare with our old self.
Another reason that showcasing your work is crucial is that the current ML hiring process is broken. Many companies can’t find qualified candidates, and candidates can’t stand out just with their resume. With a portfolio, you not only show that you are competent in what you do, you also show your passion and dedication in this field. All significant events in life are unpredictable. It may surprise you with exceptional opportunities someday! | https://towardsdatascience.com/how-i-designed-my-own-full-stack-ml-engineering-degree-297a31e3a3b2 | ['Logan Yang'] | 2020-12-08 16:17:09.999000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Full Stack', 'Data Science', 'Machine Learning Engineer'] |
Ansible and Google Calendar Integration for Change Management

Is anytime a good time to execute your automation workflow? The answer is probably no, for different reasons.
If you want to minimize the impact on critical business processes and reduce the risk of unintended service disruptions, you probably want to avoid simultaneous changes; that means no one else should be attempting to make changes at the same time as yours.
In some scenarios, there could be an ongoing scheduled maintenance window. Or maybe there is a big event coming up, a critical business time, a holiday, or you prefer not to make changes on a Friday night.
Whatever the reason is, we want to signal this to our automation platform and prevent the execution of periodic or ad hoc tasks during specific time slots. In Change Management jargon, these are blackout windows: periods when change activity should not occur.
Calendar in Ansible
How can we accomplish this in Ansible? There is no Calendar function per se; however, Ansible's extensibility allows us to integrate with any calendar application that exposes an API.
The goal is this: before we execute any automation or change activity, we run a pre-task that checks whether something is already scheduled on the calendar, either in progress or starting soon, to confirm we are not in the middle of a blocked timeslot.
Let’s pretend a fictitious module named calendar exists, and that it can connect to a remote calendar, like Google Calendar, to determine if the supplied time has been marked as busy. Then we could write a playbook that looks like this:
Ansible facts give us ansible_date_time, which we pass to the calendar module to verify the availability of the timeslot. We register the response (output) to use in subsequent tasks.
If our calendar had an event blocking the current timeslot, the output of this task would highlight the fact that the slot is taken (busy: true).
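A debug task can make the registered variable visible; its output might then look roughly like this (the fields shown are assumptions about what the hypothetical module would return):

    - name: Show the calendar response
      debug:
        var: output

ok: [localhost] => {
    "output": {
        "busy": true,
        "changed": false
    }
}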
Preventing tasks from running
Next, Ansible conditionals will help us prevent the execution of any further tasks. As a simple example, you could add a when statement to the next task so that it runs only when the busy field in the registered output is not true.
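A sketch of how that when clause could look, reusing the output variable registered earlier (the debug task is a stand-in for whatever change you actually want to execute):

    - name: Apply the change only if the timeslot is free
      debug:
        msg: "The timeslot is free, proceeding with the change."
      when: not output.busy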
Conclusion
As we mentioned in the previous post, Ansible is a framework for wiring things together, interconnecting different building blocks to orchestrate an end-to-end automation workflow.
In this post, we looked at how your playbooks can integrate with, or talk to, a calendar application to check availability. However, we are just scratching the surface. For example, your tasks could also block a timeslot in the calendar; the sky is the limit.
Ross Douthat on Decadence and Dynamism (Ep. 91) | Ross Douthat on Decadence and Dynamism (Ep. 91)
For Ross Douthat, decadence isn’t necessarily a moral judgement, but a technical label for a state that societies tend to enter — and one that is perhaps much more normal than the dynamism Americans have come to take for granted. In his new book, he outlines the cultural, economic, political, and demographic trends that threaten to leave us to wallow in a state of civilizational stagnation for years to come, and fuel further discontent and derangement with it.
On his second appearance on Conversations with Tyler, Ross joined Tyler to discuss why he sees Kanye as a force for anti-decadence, the innovative antiquarianism of the late Sir Roger Scruton, the mediocrity of modern architecture, why it’s no coincidence that Michel Houellebecq comes from France, his predictions for the future trajectory of American decadence — and what could throw us off of it, the question of men’s role in modernity, why he feels Christianity must embrace a kind of futurist optimism, what he sees as the influence of the “Thielian ethos” on conservatism, the plausibility of ghosts and alien UFOs, and more.
Note: This conversation was recorded on February 25, 2020.
Listen to the full conversation
Read the full transcript
TYLER COWEN: Hello, and welcome to Conversations with Tyler. I’m here once again with Ross Douthat. Ross has a new book out, which I’m a big, big fan of, called The Decadent Society: How We Became the Victims of Our Own Success.
Ross, welcome.
ROSS DOUTHAT: Thank you for having me back, Tyler. It’s an honor.
COWEN: A very simple question about decadence. I read in the New York Times this morning, which is late February, “In the past several years, Kanye West has announced so many plans. That he wants to start a church. That he plans to run for president in 2024. That he will invent a method for autocorrecting emoticons. That he aims to redesign the standard American home. That he might legally change his name to ‘Christian Genius Billionaire Kanye West’ for a year.”
Is Kanye decadent?
DOUTHAT: No.
COWEN: Why not?
DOUTHAT: Kanye is not decadent because decadence involves drift, repetition, and stalemate, and Kanye’s public persona is defined by creativity, conversion, and reinvention. Now, that’s not to say that Kanye might not participate in decadence. One of the implicit or explicit arguments of my book is that even when you’re a rebel against decadence, it’s very hard to escape these fundamental forces in our society dragging us towards stalemates and repetition.
So maybe all of Kanye’s reinventions and plans to reinvent the question mark come to nothing in the end. But the fact of his ambitions and the fact that he has actually invented a church, right? Kanye has a cult. He has Sunday services that are a unique phenomenon amongst celebrities. So even if there are limits on what he can achieve, he is a force for anti-decadence in a decadent society.
COWEN: So does the popularity of Kanye suggest we’re moving away from decadence? Or is that a kind of placebo we use to insulate ourselves from dynamism? “Oh, I like Kanye.” And then we go about being decadent.
DOUTHAT: That’s probably a little bit of both. You can see this in politics, right? When I started writing the book, a long time ago now, the populist moment hadn’t really arrived in the Western world. So one of the questions hanging over the argument is, Does the populist moment prove that decadence is coming to an end? Does it end with populism and nationalism and the return of history?
I think that’s an open question. The answer that I lean towards in the book is that it doesn’t, that these rebellions are sort of more performative and virtual than real. But their appeal suggests that people aren’t content with decadence, and that is a sign that maybe, at some point, it will come to an end.
Support for Bernie Sanders and Donald Trump partakes of decadence but also represents a desire for something else. And as long as that desire is there, there has to be some possibility of escape.
COWEN: Our mutual acquaintance, Roger Scruton, who died recently — he seemed to love fox hunting and the operas of Richard Wagner. Was that decadence, or is that innovation?
DOUTHAT: I think there are forms of antiquarianism that are sufficiently antiquarian to cease to be decadent and to become almost innovative.
So you could argue that we’ve reached a point where our culture — our mass culture, but also our elite culture — is sufficiently dominated by just repetitions of cultural forms and products from the baby boomer era. That to be an antiquarian, to reach back towards traditions and forms and aesthetic forms that were popular 150 years ago, is also an escape from decadence.
So there we go. Kanye isn’t decadent; Roger Scruton isn’t decadent. We’re whittling away at my thesis one celebrity, or late celebrity, at a time.
COWEN: Are you decadent? You wrote a column about architecture, and in that column you say, “Making American architecture a little more traditional . . . certainly wouldn’t hurt.”
Shouldn’t American architecture be more like Kanye West?
DOUTHAT: Yes, that is decadent. Absolutely. I would say that, basically, the place that modern architecture has ended up and the traditionalist alternative are both sort of decadent, and I prefer the traditionalist forms on aesthetic grounds. But I recognize that they are not dynamic and innovative, that you’re accepting that we’ve reached some dead end in architectural style making and choosing the beauties of traditional forms over some of . . .
Frankly, the less the ugliness, the more the mediocrity that, I think, a lot of post-1960s architectural forms have ended up with. So under my definition, the golden age of brutalism is not decadent. It’s bad, right? I don’t like brutalist architecture —
COWEN: I do.
DOUTHAT: But we can find common ground, at least, in that it was trying to create a distinctive style for a disenchanted age and express something about modernity and the forms. Whereas I think that — and I say this: I’m not a professional architecture critic, so take this with a grain of salt — but the last 25 years of public architecture has been less spectacularly ugly and more just mediocre imitations of more striking modernist forms. But you probably have stronger opinions on this than I do, ultimately.
COWEN: I’m more of an optimist about architecture. The new art and interiors of homes, it seems to me, have become much better. It’s hard to see them precisely because they’re interiors. But when I was a kid, it was very rare you’d see a nice interior of any home, even of a wealthy person. Now, even an upper-middle-class person, you might think, “My goodness, that looks amazing.”
I was in someone’s house in Lubbock. It was gorgeous. That’s Lubbock, Texas, right?
DOUTHAT: Yes, HGTV — the interiors suggest a certain kind of aesthetic progress, although in other ways, HGTV is decadent, so it’s complicated.
COWEN: There’s a famous quotation, with the source actually being contested or unknown, and it goes as follows: “America is the only country that went from barbarism to decadence without civilization in between.” True or false?
DOUTHAT: Was that not Clemenceau?
COWEN: People don’t agree.
DOUTHAT: People don’t —
COWEN: Probably it was someone before him, some version of that.
DOUTHAT: Yeah. I enjoy that quote without agreeing with it. I think that America, in many respects, really did represent a particular peak of civilizational progress in the years of its ascent from somewhere in the 19th century up until the moon landing. So I would defend the civilization of the Americans.
It’s particular. There are certain forms — high cultural forms — that America has not specialized in. But America has produced a lot of great mass art, and a certain amount of good elite art, and a lot of impressive technological progress. And I think that adds up to civilization, not barbarism.
COWEN: Was the moon landing, in some sense, a mistake? That after we did that, we patted ourselves on the back too much, and we weren’t sure what to do for an encore, however glorious it may have been?
DOUTHAT: No, it wasn’t. It wasn’t a mistake. You had to do it. It was the thing to be done. It was the furthest step we could take into space that was dramatic and a sort of embodiment of human beings going beyond the planet Earth.
COWEN: But why did we basically stop a little bit after that?
DOUTHAT: What I want to say, as a critic of decadence, is that there was some failure of will or of nerve and so on, but you know —
COWEN: But that’s endogenous, right?
DOUTHAT: Right. You have to acknowledge that the problem with space is that, under current technological conditions, the moon landing was what we could do, and there wasn’t that much else that we could do, certainly that had the presumable financial rewards or opportunities for people starting new lives.
All of the things that drove ages of discovery in earlier periods in human history just weren’t available and still aren’t available. And whatever the sort of psychological civilizational factors at work, that’s the dominant reality that people like me, who really want to go into space, have to acknowledge.
COWEN: But we did settle Nevada afterwards, right? Is that not the greater achievement? They’re both dry, distant.
DOUTHAT: [laughs] The settlement of Nevada is, in a sense, yes, maybe the last great blow against decadence struck, but the way we settled it was not universally . . . I think you can see Las Vegas is decadent under the standard definition of decadence which involves —
COWEN: But not yours.
DOUTHAT: No, but it’s decadent under mine, too, in the sense that it represents a kind of simulated sublimity where you are creating models of all of the great achievements of the human species in the modern world and practicing various forms of entertainment around them. So in that sense, it is under my definition too, not just the chocolates-and-bondage-dens definition. I think it is decadent.
I borrow this concept of the technological sublime from David Nye, and before him, Perry Miller, a couple of great American historians — this idea that American history is punctuated by these moments of some technological breakthrough that’s also a kind of wonder of the world.
And that is distinctively American in a way that is not — in spite of the Eiffel Tower and other things as European examples, does sometimes seem to the more sophisticated European mind, maybe mildly barbaric — our delight in our steam engines and transcontinental railways. But I think it’s been a distinctive part of the American experience that has run out since the moon landing.
And Las Vegas and the iPhone, in different ways, are imitations of that but are more focused on simulation and entertainment than the steamship and the railway and the space shuttle were.
COWEN: Of all the Western nations, given your notion of decadence, which is the least decadent?
DOUTHAT: Are we counting Israel as Western?
COWEN: Not for the purposes of this question.
DOUTHAT: Then probably the United States of America.
COWEN: And what would be number two?
DOUTHAT: I have an impulse to say France, which is sort of strange, right?
COWEN: France? Why France?
DOUTHAT: I would put it this way: I think France is, in certain ways, very advanced in decadence, but it’s also a place where a lot of forces of post-decadence — whatever that may be — are sort of in play in really interesting ways. It’s a place that has some of the most interesting political and intellectual debates about liberalism and post-liberalism, even though it hasn’t actually seen a far-right or far-left party take power.
It’s a place that’s a particular example of the uneasy confrontation between a decadent Europe and Islam and Islamic immigration.
I don’t think it’s a coincidence that the great chronicler of decadence, Michel Houellebecq, comes from France. In that sense, I think France is both further advanced in its decadence, but also, therefore, more interested in and open to whatever strange things lie beyond. But that’s speculative.
COWEN: Are you long or short France?
DOUTHAT: I am long France in the sense that I think that it is the most likely crucible for whatever forces are going to reshape Europe over the next 100 years. I am short France in the sense that that means it is the place most likely to have a civil war, maybe in the next 100 years. So, long in terms of drama, short maybe in terms of stability.
COWEN: [laughs] I think that means short.
DOUTHAT: Only if you’re an investor, not if you’re a journalist.
COWEN: [laughs] What if I were to argue Canada is, in fact, highly innovative? They seem to have completely sane governance, which is now all of a sudden a novelty, and several of their major cities have foreign-born populations of more than half the total. It works very well.
DOUTHAT: Yes.
COWEN: Also, a lot of those populations are non-Western, different religions. In world history, this is quite astonishing: sane governance and so many foreigners, and their cities are wonderful. Why isn’t that this phenomenal innovation and Canada is the least decadent country, in your sense?
DOUTHAT: I think that’s a reasonable argument, and the careful reader of my book will notice that I don’t talk a great deal about Canada, in part because it doesn’t display as fully a lot of the manifestations of decadence that I’m talking about — political gridlock and sclerosis — which I think fits much of Western Europe, fits the United States, in certain ways fits the Pacific Rim. It has some of the same issues of demographic decline as other countries, but it’s not as steep as East Asia.
And, as you say, it has managed a certain kind of immigrant assimilation in ways that other Western countries are struggling with. In that sense, Canada — if it is decadent, it has decadence without some of the more extreme difficulties associated with it.
I guess my question for you, as an observer of Canada, is, At what point does this sort of Canadian exceptionalism start to dramatically influence the world? And fundamentally, should we think of Canada as a large country, which it is, or as a small country, which it also is?
COWEN: Small country.
DOUTHAT: Right.
COWEN: In some ways, like the Nordics. I’m not sure Canadian exceptionalism will last, but I find striking the question, Why don’t more Americans actually want to move to Canada? They would take either of us, right?
DOUTHAT: That’s very flattering to us, but —
COWEN: We’re still sitting here in Northern Virginia.
DOUTHAT: My wife is descended from Canadians on one side, from Newfoundland and Ontario. And when things get particularly hot on Twitter, she will sometimes suggest that she needs to reclaim her Canadian citizenship.
But I actually think that is how certain Americans think of Canada — not as a land of opportunity, but as sort of a stable and lower-risk version of the US to which they can abscond. And to the extent that’s true, then that suggests that there is this kind of resilient dynamism in the US that comes with risks but doesn’t seem to be on offer in Canada.
But do you think that’s out of date? How dynamic do you think Canada is right now?
COWEN: I think market size matters greatly. So talented Canadians come to the US in great numbers, but not so much vice versa. So it could just be that people care about market size much more than they care about the kinds of issues you and I talk about, and that always gives me pause. Just size of the country seems to be a more important variable relative to the amount of print space it gets from public intellectuals. And that’s why we don’t want to go to Canada. Plus, it’s cold.
DOUTHAT: It is cold. And all Canadians live in a very narrow strip of Canada. The population distribution is, as you would expect, extraordinarily compressed. In effect, geographically, it functions like a very nice northern province of the US with an immense and entertaining hinterland.
COWEN: They will get mad at you for saying that. To me, it feels more like a series of independent city-states on our margins, but I think I would find it problematic to live in a city-state, even in the US. When I go to Texas, I feel comfortable. It’s like I’m in a big country. But when I go to Rhode Island, I feel claustrophobic.
DOUTHAT: But also, the city-state model historically, the most dynamic . . . If you think of, obviously, Venice as a city-state, but even quasi city-states, like you can see the Low Countries in the 17th century as quasi city-states functioning within a landscape of empire. But they have tended to be influential, in part, because of their geographic placement.
Maybe that matters less globally than it once did, but it still feels like Toronto is a terrific city, but it’s not . . . If Toronto were dropped down in a different part of the world, it might be a more influential city.
COWEN: Is decadence cyclical, or it just keeps on getting worse and worse?
DOUTHAT: I think that it can be cyclical. One way to look at it is that, under my definition, it’s a very normal thing for human societies to enter into, perhaps much more normal than the kind of dynamism that we have taken for granted in the US. And to the extent that it reflects patterns of prosperity leading to torpor and stagnation, you would expect a kind of cyclical phenomenon, right?
To take the demographic example, you have real, substantial demographic decline in the Western world. Some of that just reflects the fact that we had this huge influential generation, the baby boom generation, that didn’t have nearly as many kids as their parents did and has sort of bestridden our world for a long time.
It’s possible — it’s sort of a subtheme of the book that, as the baby boom generation passes to their reward, that decadence will ease a little bit, and there’ll be more room for young people to do creative things and attain positions of power and all these kinds of things, and the demographic landscape of Western societies will change a little bit.
So in that sense, I think you can tell a cyclical story and certainly can see cyclical stories in history. I do think, though, that if you just push existing trend lines in the Western world forward, you would say the decadence is likely to deepen, at least over the next 25 to 40 years.
Right now, just to stay with fertility, there does seem to be a kind of low fertility trap that countries get into, where you have small families. Growth and vigor slow down, and in that landscape, there’s less support for having children. So fertility rates stay low, and that drives economic growth rates lower, and those things feed on one another.
That’s why I spend a certain amount of the later parts of the book talking about scenarios that are more disjunctive, where you need something unexpected and dramatic to happen to shift things.
Maybe that’s a particular invention that we’re on the cusp of reaching, or maybe it’s a religious revival that we don’t expect. I think the decadence we have now requires some sort of disjunctive event to shift us out of it, which could very well happen. But if you just plot the course of the United States forward to 2050, I would say that we stay decadent.
COWEN: Maybe the problem with the low birth rate — it’s not that kids aren’t fun, but men are not fun. So once women have some wealth or employment opportunities at all, they don’t need men as much. Those men don’t compete as much to marry those women. You have families forming later or not at all, or there’s high rate of single motherhood of course. But you’ll just have fewer kids. What would possibly reverse the problem of men simply not being that much fun?
DOUTHAT: I think it’s a hard question.
COWEN: Your kids are a lot of fun, right?
DOUTHAT: Right, but you’re right, that’s if you track right. If you track fertility declines in not just the US but, to take a strong case, Finland, which has had, in spite of all its social supports for child-rearing that are the envy of family policy experts the world over — their fertility rate keeps falling. And it seems to be just a consequence of all the trends you said: delayed marriage, delayed family formation, and men, in particular, not seeming to have a clearly defined role.
I go back and forth on this because on the one hand, we have three kids, we’re about to have a fourth —
COWEN: Oh, congratulations.
DOUTHAT: Thank you. And it seems to me, as an observer of marriages and child-rearing, that men are very important, even in our postfeminist, postindustrial age —
COWEN: The good ones.
DOUTHAT: Right, the good ones. But that should create an incentive for cultures and societies to form good ones and to figure out, How do you form good men in this landscape? And we haven’t. It’s pretty clear that we aren’t figuring out exactly how to do that.
Instead, you have these selection effects, where you increasingly have male spaces and female spaces that aren’t single-sex spaces. They aren’t spaces where women go to be women, and men go to be men, and then they meet in the dating market. They are spaces that are heavily female but have a certain percentage of males, or heavily male but have a certain percentage of females, have their own sort of self-contained dating markets where, because of the imbalance, there ends up being hostility between the sexes.
Thus, you have the far-right incels online, complaining about how women are terrible because in their world, there aren’t very many women. So all of the incentives are for women to behave like normal human beings and take advantage of their position. And then on the other side, in left-wing academic environments, you have fewer men, so the men don’t behave as well, and the women feel like the men are terrible.
That seems like a hard cycle to break, and it does seem like you would need some sort of effectively cultural campaign around rebuilding male education and manhood. Right now, the models for that are polarized between conservative religious models that, as a conservative religious person, I’m in favor of, but they obviously have a limited purchase in the culture as a whole, and then more feminist models that are trying to remake men along the lines of female virtues that I don’t think actually work, that don’t effectively identify core ways of making men successful as men.
But I do think — to be the social conservative for a minute — I do think pornography plays a really nasty role in all of this. A society that stigmatized and limited pornography would have at least slightly better men. I think readily available, constantly available pornography pushes men away from women, pushes men away from the cultivation of masculine virtues, makes them less marriageable, makes them literally more impotent in certain ways, and is a sort of underappreciated aspect in the decline of men.
COWEN: In the theology of original sin, what percentage of men are the good ones?
DOUTHAT: Zero.
COWEN: Zero.
DOUTHAT: Well, setting aside Jesus. But the famous phrase, “the line between good and evil runs through every human heart” — we just had this horrifying thing in my own Catholic world where Jean Vanier, the founder of L’Arche — this set of communities where people care for the disabled, but really, they’re communities where people live together. It’s not hospitals for disabled people. It’s communities of people living together — who had died recently, was considered a living saint, and had done incredibly good things.
I personally know people, young men, whose lives were transformed by working in these communities or just writing about these communities. It came out that he had, not a Harvey Weinstein problem, but a milder religious version where he had pressured women, including nuns, into sexual relationships. To their credit, the community put out a report about this.
But men, Catholic men, that I know or observe on the internet were particularly devastated by this because he was this model of saintly Catholic masculinity for them, so it’s a terrible thing. But it’s also — I’m circling around back to your original question — because it doesn’t take away, it doesn’t eliminate the good things he did. He did tremendously good things. He had saintly aspects, but he wasn’t a saint. He had that line running right down the middle of his heart.
COWEN: Do the arguments of your new book lead you to admire Mormonism more?
DOUTHAT: I admired Mormonism a great deal —
COWEN: But at the margin, right?
DOUTHAT: — before I wrote the book, but yes! You asked about nondecadent spaces in the Western world. Israel is one, and you could argue reasonably, I think, that Utah is another.
There is a difficulty for Mormons in that the founding of their faith and some of the pretty obvious controversies associated with it have pushed them a little bit away from certain forms of intellectual and theological work that you would want a really successful religious community bent on evangelizing the United States to be able to do. So that is sort of my non-Mormon Christian caveat about Mormonism.
But in general, I went out to Salt Lake City when Mitt Romney was running for president. They were trying to introduce journalists to Mormondom. We didn’t get to see their Holy of Holies, but we got tours of the missionary centers and the supermarket/food banks that they run for low-income people.
Speaking as a member of a Christian church — Catholicism — that has entered into its own obvious form of decadence in the US over the last 50 years, it’s a shaming experience in certain ways to see what the Mormons can do in the most basic forms of Christianity: feed the hungry, clothe the naked, help drug addicts, help people get their lives together while also serving God. It’s a remarkable and admirable thing that, basically, every other Christian church in the US should be envious of.
And they combine this — in a way that does fit with some of my speculations in the book — with this strong interest in economic development and technological progress. There isn’t high Mormon theology and high Mormon aesthetics, but certainly, the technical side of Americanness is very much on display in Mormon culture.
COWEN: If you see Israel as relatively nondecadent, do you then infer that being under military threat all of the time is what keeps decadence away?
DOUTHAT: Certainly that is one thing, yeah, Israel — there seems to be some sort of existential issue involved in how people think about the future, and part of the book is suggesting that there is a loss of optimism in the Western world, a sense that frontiers are closed. We’re not going to go to the stars. We’re stuck here with ourselves. We’re bored. What do we do now? But there’s also a sense in which extreme pessimism or extreme concern about the future can be a spur against decadence.
And what’s so striking about Israel in demographic terms is that Israel is the one country at its level of income that has a birth rate, not just at replacement, but way above replacement. It’s dipped a little in the last couple of years. And people hear that statistic and say, “Well, it’s just the Ultra-Orthodox having big families.”
But in fact, secular Israelis have much larger than American average families as well, again, in a landscape where their children are in more geopolitical peril than children in the US, in a country that is built out of a desert on a narrow strip of land up against the Mediterranean.
Now politically — we’re recording this in the midst of the interregnum between Israeli elections, of which we’re on track to have 17 in the next year or so — I think that you can see elements of the same political decadence on display in other countries on display in Israel too. So I don’t want to suggest that they’re exempt from the trends I’m describing, but they are — like the Mormons — exceptional relative to the rich society norm.
COWEN: To get back to the theme of your own decadence — you’ve written columns skeptical of the internet. You mentioned pornography a moment ago, which is usually now consumed over the internet. Presumably when it comes to CRISPR babies and transhumanism and genetic engineering, you’re at least partly skeptical, maybe very skeptical.
But if you think those are the areas right now where we’re seeing the major advances, isn’t it the case that, to overcome decadence, you have to actually embrace the innovations that you yourself are not comfortable with? The printing press in its early days led to religious wars. The Catholic Church —
DOUTHAT: I would have certainly been against the printing press, yes. Definitely, there are places where there is a tension between my Catholic or Christian moral commitments and my desire to escape decadence, and certainly I think elements of transhumanism are one of them. And I say as much in the book — you could imagine a real transhuman revolution that would not be decadent, that would mark the end of decadence as I’m describing it, but that I would not welcome.
Decadence isn’t necessarily a moral judgment. I’m stealing my definition from Jacques Barzun, who said, “It’s not a slur, the term, it’s a technical label.” And in that sense, to the extent that it’s a technical label, you have to be able to say things could happen that ended decadence and didn’t lead to collapse or catastrophe, that led to development, change, dynamism, that from a moral perspective I might find repellent.
It’s also why I’m drawn much more to the older frontier, the idea of the space program and space as a frontier, because that’s a case that I think the idea of human beings, as they are, going exploring seems to me much more fundamentally appealing than the idea of human beings staying put and changing who we are.
I’m drawn much more to the older frontier, the idea of the space program and space as a frontier, because that’s a case that I think the idea of human beings, as they are, going exploring seems to me much more fundamentally appealing than the idea of human beings staying put and changing who we are.
I can imagine someone with a different worldview having the opposite reaction, although Silicon Valley seems to have both reactions. You have both investment in space and investment in transhumanism, so they’re playing both sides of the escape-from-decadence scenario.
COWEN: You’ve argued at times that popes should never step down. Would you feel the same way if life extension meant that popes would live to the age of 140 or 150?
DOUTHAT: In that scenario — now we’re into totally speculative terrain — I think that you would expect popes to be elected later, and I think a 50-year pontificate is generally an unwise thing for the Catholic Church in the same way that a 50-year span of governance by any really powerful figure often ends up in bad places in the end. But in the scenario you’re describing, I would imagine that you would elect popes at the ripe late middle age of 120 —
COWEN: New York Times columnists also, right?
DOUTHAT: Well, that would be —
COWEN: You’d be 117, and you’d finally get a column.
[laughter]
DOUTHAT: I think in that case, yeah, you would — maybe 93. The perfect zone for the columnist would be age 80 to 100, and then you would step down. I’ve been a columnist for 10 years, and my assumption is that I will run out of things to say at some point. I just turned 40, so maybe 50 is the point at which I want to have fully transitioned to writing fantasy novels instead.
COWEN: I’ve argued that Peter Thiel is the most influential public intellectual on the right today. Agree or disagree?
DOUTHAT: Mostly agree.
COWEN: Why?
DOUTHAT: Well, first I should say, I have to agree because anyone who reads the book will find a number of quotations from Peter Thiel throughout. And I have, with some emendations, mostly accepted his analysis of the technological and economic component of our stagnation. So I am indebted to him.
I think that he — in his own evolution — has followed, but in certain ways blazed a trail for other evolutions of younger conservative intellectuals who are, in certain ways, in search of a new fusionism — one way to put it. Modern conservatism begins with the fusionism of social conservatism and mid-century Hayekian — Hayek to Rand — that wide spectrum of libertarianism.
There’s a general sense that that kind of fusion has broken down, and you have people who imagine a new conservatism that’s just social conservatism. Then you have libertarians who imagine a libertarianism that leaves conservatism behind.
But there are a lot of people who want to put things back together again, but in a slightly different way, and to make an argument that’s maybe like the argument that I end up with in the book, that there is some interesting alchemy between a society that looks a little further back and a little further ahead. So what I was saying earlier about Scruton, right? The idea that looking back to the 19th century or the 17th century isn’t necessarily decadent because it also lets you, maybe, look a little further ahead.
I think that’s, in certain ways, there in at least some of Thiel’s stuff where he’s simultaneously sympathetic to . . . He’s an eccentric Christian of some sort, maybe. He’s, at the very least, sympathetic to religious conservatives in a way that other Silicon Valley figures are not. At the same time, he is a dynamist in a way that the most Burkean version of social conservatism isn’t.
You’ve written about the idea of state capacity libertarianism, right? I think that’s one example of ways in which people who are skeptical of wherever liberalism is right now are trying to forge something else.
So some combination of a strong state, some kind of small-c conservative social renewal, and some sort of futurism offers some kind of alchemy. Thiel — he wrote Zero to One, which has an implicit political teaching, but there isn’t a Thielian manifesto at the moment. I think his influence is in the inchoateness of his combination of ideas, sort of speaking to the inchoateness of other people’s combinations of ideas.
He wrote an essay — there was a piece very critical of him, of course, I think in New York Magazine — but that looked at this essay he wrote for First Things a little while ago that had this very particular point aimed at Christian readers where he said, “Look, the Bible begins in a garden and ends in a city.”
I’d read that essay when he wrote it, and I think it actually did have some subconscious or conscious influence on me where I think there’s a strong religious, conservative draw towards pastoralism, towards the idea — my friend Rod Dreher’s book, The Benedict Option, has this idea of the retreat into the monastery, the retreat into the Wendell Berry farming community, and so on.
And I think that has to be — for Christianity to be a plausible faith for our civilization — that has to be balanced with a certain kind of the futurist optimism that has always been part of Christian cultures. And I think that —
COWEN: Always?
DOUTHAT: Two thousand years of history probably offers a lot of counterexamples, but I think the Christian world, in general, has been hospitable to dynamism. I think that’s a fair characterization of the history of Christianity, yeah.
COWEN: Do you think there could be a Peter Thiel manifesto, whether written by him or someone else? Or does the very existence of the Bible, or possibly the church, render that impossible, and thus much of it has to exist on the Straussian plane? And if thus more powerful —
DOUTHAT: Tell me more. What do you mean by the existence of the Bible or the church?
COWEN: The Bible sets out a very definite worldview — or worldviews, of course, depending on how you read it or even what you consider to be the Bible. But if you write a manifesto, you then have to lay out, What in the Bible are you agreeing with or not? And the manifesto then becomes quite subordinate or overly rebellious, and maybe the ideas are most powerful in the Straussian realm, where notions are hinted at, and you have to put the pieces together for yourself.
There’s a certain power to all of the ideas not being fully spelled out, and they also can evolve more freely in a dynamic way, which reflects the dynamism.
DOUTHAT: That’s possible. I think it’s particularly possible for someone like Thiel, who clearly has a very heterodox relationship, whatever it may be, to Christian faith. So yeah, you’re right that any manifesto he put out would highlight more clearly his points of tension with both the religious traditions that he is in dialogue with and the different broken factions of conservatism that he’s in dialogue with.
And the Thielian ethos, to me — well, it’s a venture capitalist’s ethos in the sense that he’s invested in Christianity and invested in transhumanism, so eternal life. He’s got an investment in eternal life and an investment in physical immortality, and he’s invested in disaster preparedness but also willing to invest — which I as a pundit was not — in the candidacy of Donald Trump.
So in that sense, a specific manifesto would limit his capacity to be poking at a lot of different points of our decadence and seeing where you could push your finger through. Maybe that’s not the right metaphor.
COWEN: Other than just mentioning the pope, do you think that most young people today could answer, with any kind of specificity, what is the difference between Catholics and Protestants in the United States?
DOUTHAT: No.
COWEN: Does that concern you? Do you care?
DOUTHAT: Yes, of course.
COWEN: What’s the difference they should focus on that they’re not grasping right now?
DOUTHAT: In fairness, you don’t want to overestimate the capacities of normal human beings in times past, right? It is not the case that there was some golden age of Christian history where farmers and peasants in rural Germany could recite the anathemas of the Council of Trent. I mean, this is —
COWEN: But they would read pamphlets about the anathemas of the Council of Trent.
DOUTHAT: Or they would have an intuitive grasp. I think if you asked a lot of people prior to Vatican II, what are the differences between Catholicism and Protestantism, they wouldn’t cite the Council of Trent. They would say, if they were Protestant, that Catholics have weird superstitious rituals and spooky nuns and priests. And if they were Catholic, they would say Protestants don’t really believe in the Virgin Mary, to be crude.
A signal failure of Catholicism since the ’60s — it’s not defined necessarily in the inability of people to recite the catechism, chapter and verse, but it’s more in that cultural and liturgical distinctive area.
So if you go into a typical suburban Catholic church — and I’ve been to mass in a lot of them — it can feel like a mainline church with a tiny bit more formality and a statue of Mary. And that, I think, is a mark of Catholicism’s attempt to assimilate to what was, in the ’60s, still a Protestant mainstream.
But now that that Protestant mainstream is gone, it just leaves Catholicism as this extra mainline denomination. And that will — since we’re talking about ways out of decadence — I do expect that to change over the next 50 years because I think Catholicism, more than evangelicalism, is likely to go into steeper decline over the next generation institutionally.
What will be left behind will be a weirder and more distinctive Catholic faith that will have some clearer differences from its Protestant neighbors than exist right now. But that’s a case of shrinking in order to become distinctive and dynamic again.
COWEN: If you’re worried about some aspects of the relative decline of Catholicism, why make marriage of the priesthood such a central issue for the church? As you well know, the Orthodox Church in the East has a very different attitude toward marriage of priests, and they are, broadly, a Catholic church, historically. Why not side with them? They are still distinctive, right? No one would confuse them with modern American Protestantism.
DOUTHAT: They are still distinctive. There are some wrinkles there. A lot of the Orthodox churches don’t let married men become bishops, and it’s a little bit more complex. But in general, it’s a case of — just to analyze it in cultural terms, leaving theology out of it — it’s another case of giving up a distinctive, giving up something that separates you and distinguishes you from other churches and suggest to people that there’s something interesting and particularist going on here.
Then there are structural hurdles, too. The Catholic Church is not actually set up to provide for married pastors and their families.
COWEN: But that’d be self-financing, right? If they chose to do it?
DOUTHAT: Well, they’d have to. They could try and finance themselves. Yeah, that is what —
COWEN: But more people would become priests if they could marry.
DOUTHAT: See, I’m not completely sure. I think you would see a temporary bump in the number of people becoming priests, but in general, there’s a problem of talent recruitment for mainline Protestant denominations, too. And also — this is the more Catholic argument — there’s a dynamic relationship in a healthy Christianity between a church having strong models of celibate life and strong models of married life.
When that goes away, in fact, married life gets harder, too, because there’s this sense that everyone is supposed to get married. If you’re not married, you’re defective, and marriage is the highest form of life. So, if your marriage isn’t particularly happy, then you should get out of it and find a better marriage. In fact, I think having a commitment to celibacy at the heart of your religion is better for the diversity of human types and experiences than just making marriage the summit of all things.
And it’s also — this is more of the argument of my last book than this one, but they relate to one another, I suppose — part of what I find attractive and persuasive about Catholicism is that, not always and everywhere, but in particular ways, it has preserved commitments to the radical side of the New Testament, the nonbourgeois side. I think that’s true in the Church’s resistance to divorce, which has crumbled a bit under this pontificate, but it’s true in issues of celibacy as well.
If you read the New Testament, and especially if you read the Gospels, but Paul’s letters too, you would not come away convinced that this is a religion that’s all about Ross Douthat and his wife and four kids as the model Christian, right? The model Christian is somebody doing something much more radical. And if you drop that or downgrade it from its position in the church, then a piece of New Testament radicalism goes away.
If you read the New Testament, and especially if you read the Gospels, but Paul’s letters too, you would not come away convinced that this is a religion that’s all about Ross Douthat and his wife and four kids as the model Christian, right? The model Christian is somebody doing something much more radical.
And you know that New Testament radicalism is literally what I think God has given us in his most direct and intimate revelation. So it would be a bad idea to jettison it.
COWEN: Here’s a reader question: “I believe 95 percent of Catholic universities are Catholic in name only. Does he agree? In what direction does he hope for the future of Catholic universities? Should the Church withdraw its sanction?”
DOUTHAT: I don’t know about 95 percent, but I think, generally, Catholic universities have followed the same path of imitation and assimilation that I was describing earlier.
COWEN: But say Georgetown — that’s nominally Catholic. But if you went there, it would in no way shape your time as a student, or . . . ?
DOUTHAT: No, I don’t want to be particularly harsh on Georgetown, but I do think it’s the Catholic university that’s most assimilated to the secular model of elite education.
If you went to a school like Notre Dame, it’s possible to go through Notre Dame without having — I should say “Noter Dame,” not Notre Dame; I sound pretentious — it’s possible to go to Notre Dame and have a very mild exposure to Catholicism. But there is an intense Catholic subculture there. There’s a beautiful basilica at the heart of campus. There’s still a real Catholic culture. And that’s a very successful top-tier university.
When I talk to Catholic academics at those kinds of schools, they will often say that the thing that the university supplies — in many cases, it’s not a Catholic identity for every student, but it’s a preservation of a Catholic option and a sort of potential encounter with religion that is not available at a secular university.
This is what a professor at Boston College — which should be cited as another example of a somewhat secularized Jesuit university — said to me. He said, “Look, BC is not going to become as Catholic as it was 50 years ago overnight, but it’s a place where the administration and the president want to preserve some Catholicism within the school.”
And to the extent that schools are trying to do that, I don’t think the Church should withdraw its sanction. That said, I do think there’s a certain range of schools that are now very much quasi secularized, and it wouldn’t be a bad thing if the Church just recognized that, and they came to effective parting of the ways. But I have more hope for the Notre Dame model than maybe your correspondent does.
COWEN: Does the Vatican have too few employees? There’s a Slate article — it claimed in 2012, the Roman Curia has fewer than 3,000 employees. Walmart headquarters at the time had 12,000. If the Church is a quite significant global operation, can it be argued, in fact, that it’s not bureaucratic enough? They don’t actually have state capacity in the sense that state capacity libertarianism might approve of.
DOUTHAT: Right. State capacity libertarianism would disapprove of the Vatican model. And it reflects the reality that media coverage of the Catholic Church doesn’t always reflect, which is that in Catholic ecclesiology and the theory of the institution, bishops are really supposed to be pretty autonomous in governance. And the purpose of Rome is the promotion of missionary work and the protection of doctrine, and it’s not supposed to be micromanaging the governance of the world Church.
Now, I think what we’ve seen over the last 30 years — and it’s been thrown into sharp relief by the sex abuse crisis — is that the modern world may not allow that model to exist; that if you have this global institution that has a celebrity figure at the center of it, who is the focus of endless media attention, you can’t, in effect, get away with saying, “Well, the pope is the pope, but sex abuse is an American problem.”
And to that extent, there is a case that the Church needs more employees and a more efficient and centralized bureaucracy. But then that also coexists with the problem that the model of Catholicism is still a model that was modern in the 16th century. It’s still much more of a court model than a bureaucratic model, and pope after pope has theoretically tried to change this and has not succeeded.
Part of the reality is, as you well know, as a world traveler, the Italians are very good at running courts that exclude outsiders and prevent them from changing the way things are done. Time and again, some Anglo-Saxon or German blunderer gets put in charge of some Vatican dicastery and discovers that, in fact, the reforms he intends are just not quite possible. And you know, in certain ways, that’s a side of decadence that you can bemoan, but in certain ways, you have to respect, too.
COWEN: Abortion, presumably, is an important issue for you. Given that, why not just outright support President Trump?
DOUTHAT: That’s a good question, and the basic answer that I’ve had is twofold. One, I’ve had, throughout Trump’s ascent and well into his presidency, an expectation that the gap between his skill level and competence and the challenges of being president was large enough that over a long enough time horizon, he would lead the US into some sort of catastrophe that would have a dramatically negative effect on the political causes that I care about, even —
COWEN: But it would have to kill many millions of people to outweigh the expected value of the change in abortion policy.
DOUTHAT: But my assumption is that you don’t get a substantial and long-lasting change in abortion policy without a pro-life political coalition that’s capable of governing the country for a long period of time. Maybe I read too much into this experience, but I came of age with George W. Bush’s presidency, who was a pro-life president, who put conservative justices on the Supreme Court. Then his foreign policy mistakes and other issues led to his presidency ending in total catastrophe.
This is unprovable, but I think there is some connection between the subsequent decline of religious affiliation and the total rout of social conservatives on issues of same-sex marriage and this sense that people had in the mid-2000s that religious conservatism was associated with a totally incompetent president and a botched war and then a financial crisis.
So I’ve imagined something similar as the likely endgame for Trump, that something — to pick an example from the news this winter and likely this spring, the coronavirus — that the incapacities of his White House are more likely to lead to some catastrophic failure that dramatically discredits his party and destroys his presidency.
Now, that being said, generally the Trump era has been more stable, more sustainably decadent, if you will, than I expected. And in that sense, I can certainly see why a certain faction of Never Trump says, “Well, we overestimated the tail risks of this presidency, and things are more stable than we thought, and therefore we should welcome his judicial appointments and embrace him for a second term.”
And it could be that I do too much cultural analysis in a way. I spend too much time thinking, “Well, what do younger people in churches think of the hypocrisy involved in evangelical support for Trump? And won’t that lead to a further decline for Christianity that outweighs any gains?” Maybe that kind of analysis is too much analysis —
COWEN: Just who wins might be what matters, right?
DOUTHAT: Right, right. Maybe —
COWEN: If Bernie Sanders wins, that helps one set of ideas, and all the other complicated second-order effects will dwindle.
DOUTHAT: Exactly, and you don’t know . . . The pundit’s mistake is sometimes to try and think 14 steps ahead, and the partisan mind may have a certain advantage, where it just says, “No, we have to win this election and let the future take care of itself.” That still hasn’t brought me around to supporting Trump, but I think my arguments against supporting him are weaker than they were, again, pending the outcome of whatever happens with the coronavirus.
COWEN: We live in an America that supposedly respects religions. Yet, if you were to try to argue in public that, say, a child were possessed by demons, you would be mocked and called insane, whether or not it were true.
Where do you personally draw the line? You respect religions, but are there claims you hear that, when you hear them, you think, “That’s so implausible. It couldn’t possibly be true”? You file it in the insane category the way most people, when they would hear you talk of a child being possessed by demons, would think that’s insane and not required by their supposed respective religions. Do you see what I’m asking?
DOUTHAT: I do. I think it is quite possible for a child to become possessed by demons. And I actually mildly disagree. I think in the circles in which you and I move, that claim would be just greeted with automatic mockery. But I think in American culture writ large, there is plenty of space for at least openness to ideas of the supernatural and the demonic. Yeah, the mockery is still — even in our more secularized age — an elite phenomenon. I’ve struggled to persuade my secular friends of this view, but it’s still the old Chestertonian view that I find the improbable harder to swallow than the impossible.
We were talking about Mormonism earlier, and my objection to Mormonism is not the idea of the Angel Moroni appearing to Joseph Smith. It’s the claim that there existed these large-scale civilizations in Central America for which we have no archeological evidence. I’m much more skeptical of claims that should be amenable to real-world, scientific, archeological — what have you — testing, and don’t pan out, than I am to supernatural claims.
Again, I recognize that’s a minority view in our peer group, but I’m a pretty convinced supernaturalist. The literature on demonic possession is . . . It’s unwise to spend too much time with it because it leads to dark places, but I think it’s quite convincing that there is something going on there that is not adequately explained by existing theories of psychology and the human mind.
COWEN: What’s your point estimate of the probability that what we now call UFOs are, in fact, something interesting and mysterious and related to some kind of life from a distance? Right now.
DOUTHAT: My probability that they are something interesting and mysterious is very high. I would say 80 percent, 85 percent. That they are related to life from a distant planet is a lot lower. I would say quite, quite low, maybe 10-15 percent.
COWEN: That’s very high. [laughs]
DOUTHAT: That’s actually high. I’m going to lose all credibility, so I should go a little lower.
But I think there are two things going on with UFOs. One, there is a historical continuity that I find very persuasive between human stories of fairy encounters from the Middle Ages and the pre-modern period and stories of alien abductions, where you have similar depictions of the creatures involved, similar emphasis on trickery and people playing games with human beings, similar emphasis on sex and quasi-medical experimentation, all of these kinds of things.
What that suggests to me is, on the one hand, that you should assign some probability to the possibility that there are supernatural beings who like to mess with us, who are neither angelic nor demonic.
But leaving that aside, there is some kind of human experience that we don’t fully understand that is not just made up, that is maybe some sort of union, unconscious thing that gets interpreted as aliens in one age and as fairies in another, but it’s real and interesting even if the fairies themselves aren’t real. That’s one area.
Then you have the UFOs that we pick up on video, that we now actually have published in my own newspaper — pretty compelling videographic evidence. It could be that that’s one more example — if the fairies are real — that this is just one more way they mess with us. I think I would assign a slightly higher probability to the weird-advanced-military-technology explanation for those, that they seem a little different from the UFO abduction stories.
Put it this way: There are unidentified flying objects that we can see on videos, that pilots have seen, that are presumably not a hallucination and therefore must represent, one, the supernatural, two, advanced military technology, or three, visitors from another planet. Right?
COWEN: Yes. Three final questions —
DOUTHAT: Wait, I know it’s not my interview, but what is your assigned probability of those options?
COWEN: It’s called Conversations with Tyler, right? Not by Tyler.
[laughter]
COWEN: I said on Marginal Revolution I thought it was maybe up to a 5 percent chance it was real beings, and then I talked myself down to about 1 percent. But 1 percent is still quite high.
DOUTHAT: One percent is still a lot.
COWEN: So, we should be thinking and talking about it more.
DOUTHAT: What probability do you assign to the supernatural when you think about these? Not for this in particular, but generally. Like if I came to you, and I said, “Tyler, I want you to read the literature on hauntings and ghosts.” Going into reading that literature, what probability do you assign that ghosts are in some sense real?
COWEN: That’s a difficult question because I am so willing to entertain the notion that the true model of physics is so weird. It could be weirder than religion —
DOUTHAT: That is fair.
COWEN: So what you’re calling supernatural, I could say parallel universes.
DOUTHAT: I’ll accept it.
COWEN: So I don’t dismiss the weirdness, but I don’t know what should make me call it supernatural for almost tautological reasons.
DOUTHAT: Yeah. One of the UFO obsessives who pivoted to this fairy interpretation had basically that view. He was arguing that it is a parallel, a bizarre parallel-dimension-being effect. So I’ll allow it. You’re good.
COWEN: Three last questions. As technology advances, won’t we need to end most lives by euthanasia? Not people who fall off cliffs, but you could always hook someone up and keep them going. So, won’t euthanasia become, say, the case for 80 percent of deaths?
DOUTHAT: I understand why people are skeptical of it, but I generally buy the distinction that my own church makes between the withdrawal of care and the injection of lethal drugs. I know that there are areas where that line gets blurry, but I think, yes, over a long enough life-expectancy horizon, human beings would need to create a culture of refusing and withdrawing care. But that is still different from, at least right now, the means we have where you’re actually actively hastening death through interventions designed to do so.
COWEN: Last two questions. First, is Connecticut good?
DOUTHAT: Yes. I just did an interview with a very nice reporter for Connecticut Magazine where I was trying to explain . . . I was saying positive things about Connecticut, but then also saying that it was an example of decadence. It’s a very wealthy American state that has a lot of old institutions. Yale University, in the city that I live in, that is getting older and has trouble attracting young people.
It’s not a dynamic state, or not as dynamic as it once was. But I grew up in Connecticut, so I have that sort of partisanship, but I like living there. I like its mix of intimacy and history and the New England landscape. And I think that if you could rescue Connecticut from decadence, maybe you could rescue the whole world. So —
COWEN: Finally, is Lyme disease good?
DOUTHAT: In the sense that God uses all things for good, yes, but not in any sense besides that. And the next book I’m actually under contract for is about Lyme disease.
COWEN: In your own experience with it.
DOUTHAT: My own experience, but I do think of it, in part, as my own very small attempt to work against decadence. If I could convince readers that there are, in fact, better treatments for Lyme disease available and help people make progress against one particular disease, then maybe that’s a more effective anti-decadence effort than writing an entire book bemoaning the state of civilization.
COWEN: Ross, thank you very much. And again, I’d like to recommend his book to you all, The Decadent Society: How We Became the Victims of Our Own Success.
Thank you.
DOUTHAT: Thank you, Tyler. | https://medium.com/conversations-with-tyler/ross-douthat-tyler-cowen-decadence-religion-d2bb7eeb397b | ['Mercatus Center'] | 2020-03-25 11:55:00.725000+00:00 | ['Books', 'Decadence', 'Podcast', 'Religion', 'Authors'] |
Social Media in the Middle East: Five Trends Journalists Need To Know About
This article is authored by Damian Radcliffe, the Carolyn S. Chambers professor of journalism at the University of Oregon, and Payton Bruni, a journalism student at the University of Oregon's School of Journalism and Communication, who is also minoring in Arabic Studies.
The Middle East is a large, diverse region. The fact that one-third of the population is below the age of 15 years, and a further one in five is aged 15–24, means that the Middle East and North Africa (MENA) is one of the most youthful regions on the planet.
Since the Arab Spring, there has been increased interest in the role that media, and in particular social media, plays in the region. Our recent report, State of Social Media, Middle East: 2018 explored this topic in depth. Here we outline the implications our research has for journalists.
News consumption for Arab youth is social media-led
“Like their peers in the West, young Arabs today are digital natives,” said Sunil John, founder and CEO of ASDA’A Burson-Marsteller, which produces the annual Arab youth survey.
“Young Arabs are now getting their news first on social media, not television. This year, our survey reveals almost two thirds (63 per cent) of young Arabs say they look first to Facebook and Twitter for news. Three years ago, that was just a quarter.”
YouTube is huge. And growing
The number of YouTube channels in MENA has risen by 160 per cent in the past three years. More than 200 YouTube channels in the region have over one million subscribers. Over 30,000 channels have more than 10,000 subscribers.
In 2017, the 16 nation Arab youth survey also reported that YouTube is viewed daily by half of young Arabs.
To encourage further growth of the network, Google opened a YouTube Space at Dubai’s Studio City in March 2018, the tenth such hub to be opened by YouTube around the globe.
According to Arabian Business, content creators with more than 10,000 YouTube subscribers enjoy “free access to audio, visual and editing equipment, as well as training programmes, workshops and courses. Those with more than 1,000 subscribers will have access to workshops and events hosted at the space.”
In most countries, Facebook has yet to falter
The social network now has 164 million active monthly users in the Arab world. This is up from 56 million Facebook users just five years earlier.
Interestingly, in contrast to many other markets, 61 per cent of Arab youth say they use Facebook more frequently than a year ago, suggesting the network is still growing.
Egypt, the most populous nation in the region with a population of over 100 million, remains the biggest national market for Facebook in the region, with 24 million daily users and nearly 37 million monthly mobile users.
Saudi Arabia is a social media pioneer
“In 2018, YouTube upstaged long-time leader Facebook to become the most popular social media platform in Saudi Arabia,” reported Global Media Insight, a Dubai based digital interactive agency.
Data shared by the agency showed YouTube has 23.62 million active users in the country, with Facebook coming in second with 21.95 million users.
Alongside this, although there are about 12 million daily users of Snapchat in the Gulf region (an area comprising Saudi Arabia, Kuwait, the United Arab Emirates, Qatar, Bahrain, and Oman) a staggering 9 million of these are in Saudi Arabia (compared to 1 million in UAE).
A complicated relationship with platforms
To find out more, download the full study State of Social Media, Middle East: 2018 from the University of Oregon Scholars’ Bank, or view it online via Scribd, SlideShare, ResearchGate and Academia.Edu.
Despite YouTube’s wide popularity in the MENA region, the company faced some pushback in the past year, after the network was accused of removing online evidence of Syrian chemical attacks.
Meanwhile, YouTube suspended accounts belonging to Syria’s public international news organisation (SANA), the Ministry of Defence, and the Syrian Presidency “after a report claimed the channels were violating US sanctions and generating revenue from ads,” Al Jazeera reported.
More generally, social networks have a complicated relationship with the region: service blocks, or the banning of certain features (such as video calling), are relatively commonplace, and both news organisations and individuals can fall foul of greater levels of government oversight.
Derogatory posts have resulted in deportations of residents from UAE, while in 2018, the Egyptian government passed legislation categorising social media accounts with more than 5,000 followers as media outlets, thereby exposing them to monitoring by the authorities. | https://medium.com/damian-radcliffe/social-media-in-the-middle-east-five-trends-journalists-need-to-know-about-46b3d87d3ee6 | ['Damian Radcliffe'] | 2019-05-07 21:10:09.866000+00:00 | ['Journalism', 'Social Network', 'Social Media', 'Middle East', 'Arab World'] |
5 Ways to React When a Younger Person Leaves You In the Dust
It’s about respect, not jealousy.
Photo by Victoire Joncheray on Unsplash
“There goes Jeff Bezos,” a man said.
Comedian Kevin Hart was in a room with well-known and successful business owners and individuals.
Then, Jeff Bezos walked in.
Instantly, Kevin said to his friend, “I’m going to go say what’s up to him. I would love to pick his brain. That’s an interesting individual.”
His friend said, “Don’t do that! Why do you want to do that? Don’t look like the dude that’s thirsty.”
“Just chill. Just relax,” the friend repeated.
What do you do when you meet someone more successful than you?
Do you relax? Do you tighten up?
Kevin Hart had to choose how he would react when he was in the presence of someone massively more successful than him.
Listening to Kevin Hart describe his reaction reminded me of my own choice when I ran into someone I had not seen in a few years.
He was significantly younger than me. Yet he had achieved a level of success that I am striving for. A few years ago when we last spoke, we were pretty even in our results — similar homes, cars, and lifestyle. But he had passed me.
A flurry of emotions started to simmer. What do I say to this guy? How do I react?
I thought through each of my options. Some were not healthy. Others were much better. I thought of five ways to react. | https://medium.com/mind-cafe/5-ways-to-react-when-a-younger-person-leaves-you-in-the-dust-b035c3c4793c | ['John Mashni'] | 2020-07-25 13:01:01.246000+00:00 | ['Entrepreneurship', 'Leadership', 'Self Improvement', 'Life Lessons', 'Inspiration'] |
Are we headed for a sports BOOM….then BUST?
There are cycles in almost everything in life. Nothing ever grows exponentially forever.
This can be seen in nature, our economy, and countless other places in our lives. No bull market lasts forever. No bear market does either.
How you navigate those peaks and valleys comes down to how well you understand the factors that affect the system your industry is running on. What are the levers that increase success or drive failure?
If we can understand the system and levers, we can effectively navigate to success where others fail. We see it time & time again with companies in the market.
The sports industry is no different. We have very clear dynamics that affect attendance in our industry. Many times, we found band-aid solutions to cover them up. And somewhat..they worked.
But before the pandemic, sports had an issue with getting fans into the stands. We heard it non-stop: fans enjoyed watching at home.
We had prominent sports figures complain that the Smart Phone was the enemy, giving people access to sports content so they would stay home.
We saw 10%+ drops in attendance each year. This was a HUGE issue in sports. Then a pandemic came and our attendance went to 0. We had to get creative in keeping fans engaged and connected, and ultimately monetizing them, without live games.
As an industry, we did an awesome job of being creative to solve these issues. But, we may have only ignited a larger issue.
Today I want to jump into some factors that concern me for sports. I believe in asking ourselves the hard questions in order to create tangible solutions to solve them. This is the first step to that.
The BOOM
Look, live sports being back in our stadiums will bring a boom. Many of you, in fact, are selling on this in with your clients.
I’m sure everyone in the sponsorship industry reading this article has used the line “When we do get sports back in our stadiums, the viewership will be astronomical. You will not want to miss out on that when it happens.”
I do think there will be a boom. If we look at the return of people to restaurants and bars in states where they are opening up, there is a pretty steady return. Here in Portland, OR some restaurants have a line to eat when pre-pandemic you maybe saw a handful of people there.
I am, though, skeptical of how big of a BOOM we will see. We haven’t seen the BOOM we expected in television ratings for watching key games in the playoffs for many leagues. You can chalk it up to streaming not being counted as well as it should be and an oversaturation of all the sports happening at once.
It is important, unequivocally, to maximize whatever boom you see and be prepared. Have a plan for it and make sure you are maximizing it. You will need the cash from this boom to capitalize on the bust that I believe will follow.
The Bust
As with any boom, there is a carrying capacity for that upward trajectory. There are factors that will begin to eat away at that rise if not seen and understood properly.
I see three main factors we have to take into account as we look to navigate the next 5 years (and honestly 5 months) that will drop our attendance numbers.
The battle pre-covid for attendance wasn’t won
It is VITAL to understand that our problems of the past have not gone away. We were, for the most part, losing the attendance battle at our games.
Most of us in the industry don’t need proof of this as we lived this problem, but to put into context here is a quote from a 2019 USA Today article:
“But fans are now less inclined to go to games in person. Each league saw a decline in total attendance from 2008 to 2018. Fans are often unwilling to pay high ticket prices, and teams don’t seem to care, as an increasing amount and share of their revenue come from lucrative TV contracts as opposed to ticket sales. But not all teams are losing fans at an equal rate. Some have seen average attendance declines of more than a third over the last decade.”
Why was attendance dropping? Well, there are many answers to that question. But a few that stand out are:
-Comfort of watching at home
-Terrible Parking
-High ticket price
-High food cost
I’ve talked about it before, but our experience in the stadium ultimately wasn’t doing its job in matching the price of the ticket. This was a long-standing problem in our industry but was magnified when we gave fans the option to watch from home.
We employed terrible tactics to solve it. Instead of addressing the problems above, we tried to solve with TV blackouts and restrictions in watching to force fans to pay for an experience we knew was not up to par with what we were charging.
Ultimately the foundation of my thesis of a bust comes from the fact that we did not solve the issues we had pre-pandemic.
If we haven’t implemented major tactics to solve them, why would we expect the results to change?
To double down on that, we’ve educated and trained our fans on how to consume at home
Not only have we not solved the problem that plagued our industry pre-pandemic, but we’ve also made it a bit worse.
Before the pandemic, although fans were streaming our games, the adoption of platforms like ESPN+ had not reached the maturity stage in the tech product lifecycle. In short, while a lot of fans were using these platforms to stream…MOST fans were not.
Further than this, we’ve also digitized our gameday experience. Pre-game camera angles into warm-ups, ways to view the game from home, etc.
Don’t get me wrong, I will be the first to tell you that this is a HUGE step forward for our industry. The amount of inventory & engagement we’ve opened up here will more than double the revenue we bring in as an industry in the next 5 years.
A common saying in the business & tech world is the pandemic accelerated the adoption of tech for many industries. There is no difference in the sports industry. We now must implement these digital initiatives (digital programs, streaming, digital engagement, etc.) in order to keep up with the situation we are in with no fans.
But a side effect of this is we have now gotten fans familiar with the very platforms that were eating into our attendance. We have trained fans on how to engage in their own homes.
With an in-stadium product that has had its issues, why would fans leave the comfort of their own home now that they have every access to experience it at home?
This is a major question we have to take into account as we think through this issue. This is, in my mind, an accelerant that will only expose our issues with getting fans into the stadium further…in some cases bringing it to a tipping point for most organizations.
We can’t expect the situation to be the same. We have to understand that we will be in a bigger uphill battle than before.
Incoming recession, will we price out our fans?
However the presidential election goes this week, with the pandemic crippling our economy we can assume that our fans will have less disposable income than they have had in the past.
Much less disposable income. The honest truth is some fans will be thinking about how they will be able to pay rent. Your game experience won’t even be in the realm of any spending consideration.
Before the pandemic, in one of the strongest economies of all time, we had an issue with fans being priced out of tickets. The cost for a family of 4 to attend a professional game was astronomical.
To further add salt to the wound, the high price in many cases didn’t match the value that most fans felt the game experience held. By that, I mean many fans opted to stay home in large part because the cost was not worth the problems that they would face at the stadium (parking, traffic, high food prices, etc.).
If this was a problem before the pandemic, in a great economy, we can assume that it will be an even bigger issue when fans are allowed back into our stadiums.
Sure, at first I think we absolutely will see this affecting purchase rates less often as fans make an emotional purchase to attend the game after being deprived of it for so many months…but over a sustained amount of time it will be harder & harder for a fan to part with their money.
Ok, this sounds like a ticketing problem…how does it affect me in sponsorship?
In so many ways ticketing & sponsorship are inherently linked to each other. In sponsorship, we rely on eyeballs in our stadiums to make the high priced signage worth the money to sponsors.
If there are fewer and fewer fans in the stands you will have to subsidize the lost views with something else or lower your price.
We never want to lower our price, but if you cannot report the same value as in the past you will have to adjust or watch as your sponsors spend less and less with your team.
We all know as well that in sponsorship, perception is reality. As our prospects & sponsors see empty seats, they perceive it as a loss in influence. If fans aren’t coming to games to watch the team…do you really have enough influence to successfully vouch for a sponsor through your sponsorship packages?
It is vital that we understand that empty seats in our stadiums will do major damage to our sales in sponsorship. How your ticketing, marketing, and game ops departments solve this is absolutely critical to your success in bringing in sponsorship dollars.
How can we fight against the bust? A plan of attack
I think the first step is to realize this is coming. We are all good at scenario planning at this point in the pandemic, this is a must scenario to plan for.
Unfortunately, I am hearing and seeing in the industry the opposite. There has been a lot of selling on the idea that there will be a boom when sports come back and you will want to be a part of it.
And as I mentioned earlier, I fully expect somewhat of a boom coming back. I think the timing of how long that boom is will depend on the state of the economy, but we will see an impassioned boom.
But we have to make a game plan to combat the bust period. How can we ensure that we sustain that BOOM over the entire season?
First, we have to fix the sins of our past. Sponsors can help us do that
This is somewhat of a blanket statement. There are many teams that have solved most of these problems to create a truly seamless game experience. For those teams, I salute you.
BUT, for the majority of us, we’ve swept these problems with our game day experience under the rug.
We’ve lived a lot on the nostalgia, emotion, and power of our brand to cover-up the issues with our game days.
Luckily, our sponsors can help us solve many of these issues. Mainly through funding.
Do you have long lines at your stadium to get in? Creating a sponsored entrance is a great way to alleviate the line. For example, at Barclays Center in Brooklyn, NYC, if you have an American Express card you can skip the line with an MVP entrance.
What this does is take the pressure off the overall line, as those American Express customers use the MVP line…there are inevitably fewer people to crowd the main entrance line.
Staying on lines, is there a technology out there that can alleviate long lines at concessions & bathrooms? Absolutely…but many times we can’t afford it.
Having a sponsor “fund” that technology with specific integrations into the experience is a great way to add it to your stadium experience.
If we are strategic, we can fund the alleviation of our stadium issues with sponsorship dollars and add them to the package.
In order to have a chance to come back stronger, we have to solve the sins of our past.
Parking, long lines, outdated chairs, high food cost, and countless more issues are solvable now. We can use sponsor dollars to help fund the solutions to these issues if we can link them back to the sponsor’s goal.
Sponsorship can help drive ticket sales, and sustain them if done correctly
As much as we are a pain in our ticketing departments butts when it comes to discounting tickets etc….we do have ways that can help them drive sales through our packages and sponsorship assets.
A key example comes with bobbleheads. Sponsors love to be the presenting partner on a bobblehead. Their logo is tied to that item forever.
If you didn’t know…bobblehead giveaways are a great way to drive ticket sales. In fact…
“Bobblehead giveaways, it turns out, increase attendance by around 25 percent on weekdays and 9 percent on weekends, Graduate School of Management student Mai Nguyen found.” via UC Davis (Go Aggies).
You can increase ticket sales by 9–25% by giving away a bobblehead. That may not seem like a lot…but when you are staring down the barrel at only 50% attendance…boosting it to 75% looks a hell of a lot better.
But more important is the buying psychology it brings.
When a fan is deciding whether to buy a ticket to a game and are short on disposable income they are looking for value. They need to have the value of the game match what they can spend.
A bobblehead immediately adds value. Since they are receiving it for free, the price of the ticket in their head drops as they are receiving more for the dollars paid.
Notice that this does not drop the price of the ticket, it only adds context and value for the fan. This needs to be the gameplan. We can still preserve the value of our ticket price…it just is more easily justified to the fan with the free item.
What is even better in this example is that we have now broken the seal on that fan buying tickets. We have broken through the hardest part of the process: getting them used to and comfortable with buying tickets.
One sponsored bobblehead has just turned a one-ticket sale into multiple without dropping the price of your tickets….all the while bringing value to your sponsor. It is a win-win.
This is a prime example of how sponsorship can help during the bust. Funding the bobblehead helps your team not have to take a hit OR pass the cost onto the fan while simultaneously preserving the value of your ticket price.
A bobblehead is just one example of how this can help, there are many others. Having a sponsor subsidize a ticket price if you make a purchase is another great way to preserve the value.
There are countless other ideas, many we haven’t thought of yet, but it is up to us in sponsorship to understand the bust is coming and get creative on how we can help.
In essence, when the bust comes…don’t drop your price. Add value instead. A sponsored item will work wonders here.
Timing on both of these points will be key
WHEN you launch these campaigns will be the difference to keeping fans rolling in during the BUST phase.
Again, if we can expect a BOOM, we don’t want to waste the tactics that will help us sustain it.
Think of these tactics, like the bobblehead, as our NOS. For all you non-street racers out there, nitrous oxide is used to give the car a boost of speed for a few seconds to gain an advantage.
Press it at the wrong time and you will have wasted it as your opponent drives by.
Probably a terrible analogy, but my point is you don’t want to waste these tactics too early.
If you give away bobbleheads when the fans were already coming, you are wasting that tactic and won’t have it when you truly need it.
If you can time a bobblehead launch to when you think the drop will come, you can sustain the attendance for many weeks after most teams see a drop. You can break that seal in a fan to get them comfortable buying tickets at a moment when they are ready to cut that off their spending list.
In my mind, there will be somewhat of a feedback loop here. If your stadium is full while others around the sport start to empty…fans will psychologically link your game as something that is a “can’t miss” event.
As you plan your tactics, understand that timing is so important in this process. Save your best tactics for when things get really bad. It will pay off as others see a slump.
With whatever you do, you have to have a plan
My biggest fear for the industry is that we don’t recognize these issues and don’t have a plan to solve them.
Not that we could ever see a pandemic coming…but the results were so much worse for the sports industry because our revenue-driving assets were over-leveraged on the physical side of the game. We HAD to have butts in seats.
The fact that we were over-leveraged in the physical part of our game was predictable. I wrote about it HERE.
And really it came down to us not asking the hard questions about our industry. We didn’t want to think about the worse because…well…we were making money. We relied on a great economy, brand name, and TV revenues to sweep our real problems under the rug.
There is no excuse now to ignore them. We have to have a solution to these issues. We have to solve the reasons fans hate our game days in order to create a product that can survive the coming years.
The teams that see this. The ones that see their issues, solve them, and create a strategy to thrive while others survive will win out in the end.
In the sponsorship industry, we can…with our sponsors…be the catalyst for these solutions. Our sponsors can help fund the solutions to our game day problems.
If we define our sponsors as truly our partners, we can create assets that help both ends grow. It has never been more important than now to think about how our sponsors can help us thrive in the bust while helping them do so as well in the process.
A bust is coming…how prepared is your team when it does?
— — — —
Want more sponsorship insights delivered directly to your inbox weekly? Sign up for the Digitally Sponsored email newsletter by CLICKING HERE. | https://medium.com/sqwadblog/are-we-headed-for-a-sports-boom-then-bust-12fe53c6129 | ['Nick Lawson'] | 2020-11-12 15:55:06.254000+00:00 | ['Marketing', 'Sponsorship Activation', 'Sponsorship', 'Sportsbiz', 'Sports Business'] |
I learned how to be gay on AO3
Photo by Sharon McCutcheon from Pexels
On Sunday, I found out that the Archive of Our Own, affectionately known as AO3 to fans, had just been awarded the Hugo award for Best Related Work. For the uninitiated, AO3 is an archive of fanwork like fanfiction and fanart that is owned and run by fans, hence the name. And the Hugo awards are a huge deal within the science fiction and fantasy community — they are more famously known for awarding the best sci-fi and fantasy works every year, though there are other categories like the Best Related Work category.
My heart is bursting with so many feelings — joy, pride, and a profound sense of gratitude towards fandom in general. As a fanfic writer on the site, I want to shout it on the rooftops and tell everyone I know.
But I can’t.
Because I am gay, I write gay fanfiction, and I am not out in real life to many people in my extended family.
AO3 played a key role during a very sensitive time in my life. During my mid-twenties, fresh out of a bout of career burnout, I had a revelation — or, as I learned from works on AO3, ‘gay panic’.
I grew up middle class in a conservative Asian society, and I was a straight-A student who wanted to check every box. I threw my whole self into my studies and subsequently, my career. In the first few years after graduation, I’d always felt like I was one mistake away from financial ruin, and I had neither time nor care for romantic relationships. But after months of working past midnight and wondering if that was all my life was going to be, I realised that I wanted a relationship. And I didn’t want a relationship with a man.
I was horrified. I was in my mid-twenties, how could I not know this before? What was I supposed to do now? I had no one around me that I could talk to.
But stories — books, television, movies — have always helped me make sense of life. I found Alex Danvers’ coming out story. Alex is Supergirl’s adopted sister, and in season two of the TV show, when she was in her late twenties, she realised that she was gay. Her coming out story resonated with me so much that I must have watched those scenes a hundred times (literally, I kid you not. Thank you YouTube).
Still, I couldn’t get enough of it. There was so much about being gay that I didn’t know, and I didn’t know where to start. Googling for a list of LGBTQ resources to go through felt overwhelming and impersonal. Thus, I turned to fanfiction, where I could find more stories through characters that are already familiar to me. I had been reading fanfiction since I was fourteen, even though prior to my gay panic I had stopped reading them for years (striving to keep myself from financial ruin, remember?).
AO3 was the site I turned to, and the community of creators and readers I found there blew my mind. They were a lifeline for me during those first few months when my world was turned upside down. It wasn’t just the fact that there are amazingly good writers who tell great stories, but that so many of them are educational and inspiring.
There are stories about Alex coming out, about her learning gay culture and lingo — and I learned right alongside her. I learned about gay panics and u-hauling, meekly added ‘The L Word’ and ‘Carol’ to my playlist, and blushed at my laptop as I read erotic or ‘smut’ scenes — the latter served more or less as confirmation that I am, indeed, gay. I had never enjoyed sex scenes between a male and a female before, but reading about two women making love made my heart race, and made me think that sex might actually be beautiful.
But beyond the lingo and the facts, those stories and writers have taught me so much more. Things like what heteronormativity is, and how it was fine to figure things out later in life. Alex’s girlfriend in the show, Detective Maggie Sawyer, was often Alex’s guide, and Maggie herself is an inspirational figure. She was kicked out of her own home at fourteen after her parents found out that she was gay, and I found out to my horror that this is actually quite a common story in reality. An estimated 40% of homeless youth identified as LGBTQ. And as embarrassed as I am that I didn’t realise this about myself until later in life, part of me realised how lucky I was to discover this at a time when I could be relatively financially independent.
I learned about how chosen family is a big thing within the queer community. When I came out to my parents months later and they did not take the news well, it still hurt like hell, but I could at least take solace in the fact that I was not alone. It made me open up more quickly as I seek out a queer community of my own, instead of being held back by the shame that my own family had been horrified by who I turned out to be.
Some writers showed all these via their stories. Some shared extensive author notes about safe sex or where to find more resources, or offered encouragement that things would get better. Some even took requests of specific scenarios that readers sent in, and wrote them their happy endings.
Those writers and their stories offered much more than simple entertainment. They offered education and comfort and hope, and reading their stories had often made my day.
And you know what’s more amazing? They are not paid to do all these.
I am not exaggerating when I say they offered me a lifeline — what would I have done without such a gentle, supportive community to guide me in those early, tumultuous days?
Inspired by both what I had received and by the power of stories, I started writing my own fanfiction again too. I had written some in my teens, but for the first time I made my main characters queer. And what a difference it made — the stories felt more true to me than ever, and I stuck to it longer than I had with any other fic I started in my teens.
It was cathartic. I wrote coming out scenes that mirrored my own, coming out scenes that I wished I had, and my characters grappled with the tension between staying true to who they are and not disappointing the people they love.
The responses I received from readers were beyond my expectations. Not in terms of number of readers, but in how deeply they connected with the characters. It reminded me of why I had wanted to be a storyteller in the first place — because you could touch someone half a world away and make them feel seen.
AO3 provided that platform for me. Through its community of creators and users I learned how to be gay, and I learned what it feels like to be seen.
So yes, I am over the moon that AO3 had won a Hugo. I can’t shout this from the rooftops or any virtual rooftops where I’m using my real name, because I’m still in the closet in many parts of my life (thankfully not all). But I would really like to share how much it means to me.
Thank you to all the founders and volunteers, and all the fellow creators and readers of AO3. The formal economy may not have the tools to articulate or measure the value you bring, but know that you have saved and invigorated many souls out there.
This is what I have always known art to be, and I believe this is what Picasso alluded to when he said, ‘Art washes away from the soul the dust of everyday life’. | https://elskye.medium.com/i-learned-how-to-be-gay-on-ao3-5f0f5ab28e07 | ['E. L. Skye'] | 2019-08-23 05:10:50.479000+00:00 | ['Coming Out', 'Hugo', 'LGBTQ', 'Fanfiction', 'Writing'] |
Socrates, Reductionism, and Proper Levels of Explanation
Science — formerly known as natural philosophy — is pretty much the only game in town when it comes to understanding and explaining the world
[image: Socrates about to drink the hemlock, Wikimedia Commons; this is essay #257 in my Patreon/Medium series]
Turns out, I have something in common with Socrates. Don’t worry, this essay isn’t about self-aggrandizing. It’s just that, rather surprisingly, Socrates and I have followed a similar career, 24 centuries apart. You see, like myself, he started out as a scientist. And, like myself, he ended up a philosopher, and specifically one interested in ethics. Moreover, the two career moves were motivated in part by a similar shift in interest. Let me explain.
The Phaedo is one of the most famous Platonic dialogues, the one in which we relive the last few hours of Socrates. (Luckily, in that respect our life trajectories depart, at least at the moment!) He has been condemned by the Athenian assembly, on charges of impiety (believing in the wrong gods) and corruption of the city’s youth, charges brought against him by a trio of shady characters: Meletus, Anytus, and Lycon.
Much of the Phaedo is devoted to Socrates’ ideas about the immortality of the soul, and the dialogue is pretty forgettable in terms of modern philosophy. Except, of course, the poignant end. And the bit I’m about to focus on. Near the end of the Phaedo, before the death scene, we find this tantalizing bit:
“When I was young, Cebes, I had a prodigious desire to know that department of philosophy which is called Natural Science; this appeared to me to have lofty aims, as being the science which has to do with the causes of things, and which teaches why a thing is, and is created and destroyed.”
Socrates is telling one of his close friends, Cebes, that he began his philosophical investigations as what we would today call a scientist. Just like I did, now almost four decades ago! Indeed, Socrates is made fun of by Aristophanes in his comedy, The Clouds, in part because he is represented as the quintessential natural philosopher (and, ironically, an inveterate sophist), always thinking about abstruse matters of no interest to anyone else, walking around with his head in the clouds. Plato was not happy about such caricature, and in the Apology even suggests that Aristophanes is indirectly responsible for the death of Socrates.
But, Socrates tells his assembled friends, he then changed his mind, in part because he understood that the science of the time was entirely speculative, with no hope of actually finding support for any of the fanciful theories proposed by the pre-Socratics. Thales thought that water was the basic principle of the universe; Anaximenes thought it was air; for Heraclitus it was fire. Parmenides claimed that change was an illusion, while for Heraclitus everything is in flux. And so on. Socrates was initially excited, but eventually the pre-Socratics let him down:
“I seized the books and read them as fast as I could in my eagerness to know the better and the worse. What hopes I had formed, and how grievously was I disappointed! As I proceeded, I found my philosopher [Anaxagoras] altogether forsaking mind or any other principle of order, but having recourse to air, and ether, and water, and other eccentricities.”
Well, science has certainly improved since the time of Socrates, and scientific hypotheses about the nature of the world have become more amenable to empirical testing. Though there certainly still are untestable theories that only superficially sound better than those of the pre-Socratics, like the notion that there are infinite parallel universes. Socrates left the field of natural philosophy and turned to matters more human, so much so that Cicero famously wrote:
“Socrates was the first to call philosophy down from heaven and set her in cities and even to bring her into households and compel her to inquire about human life and customs as well as matters good and evil.” (Tusculan Disputations, V.10–11)
Socrates became interested in the human condition and how to improve it, which is precisely what turned me increasingly toward practical philosophy in general, and Stoicism (a most decidedly Socratic philosophy!) in particular. Part of the reason for Socrates’ new interest lies in the fact that he saw that the pre-Socratics sometimes were spectacularly missing the point. Here is a longish, but crucial, excerpt from the Phaedo:
“[Anaxagoras] endeavored to explain the causes of my several actions in detail, went on to show that I sit here because my body is made up of bones and muscles; and the bones, as he would say, are hard and have ligaments which divide them, and the muscles are elastic, and they cover the bones, which have also a covering or environment of flesh and skin which contains them; and as the bones are lifted at their joints by the contraction or relaxation of the muscles, I am able to bend my limbs, and this is why I am sitting here in a curved posture; that is what he would say, and he would have a similar explanation of my talking to you, which he would attribute to sound, and air, and hearing, and he would assign ten thousand other causes of the same sort, forgetting to mention the true cause, which is, that the Athenians have thought fit to condemn me, and accordingly I have thought it better and more right to remain here and undergo my sentence; for I am inclined to think that these muscles and bones of mine would have gone off to Megara or Boeotia — by the dog of Egypt they would, if they had been guided only by their own idea of what was best, and if I had not chosen as the better and nobler part, instead of playing truant and running away, to undergo any punishment which the state inflicts. There is surely a strange confusion of causes and conditions in all this.”
This is, in my opinion, simply beautiful. What Socrates is commenting on is what contemporary philosophers of science refer to as distinct levels of explanation. Anaxagoras is not wrong when he says that part of what explains Socrates sitting in the cell talking to his friends is the anatomy and physiology of the human animal called “Socrates.” But he is spectacularly missing the point if he thinks that’s the beginning and end of it. Socrates, as opposed to any other member of the species Homo sapiens, is sitting in the cell because he has been unjustly condemned by his fellow Athenians, and this second level of explanation is what really matters, even though it is certainly compatible with, and non contradictory to, the first one.
This sort of Socratic reasoning dispatches of the so-called “eliminativist” approach in vogue among some contemporary philosophers and scientists who are affected by a strange disease known as “greedy reductionism.” The term was introduced by Daniel Dennett in 1995, in his Darwin’s Dangerous Idea. Reasonable reductionism is a major tool of science, and consists in explaining phenomena in terms of their constituent components, at whatever level of explanation happens to be most appropriate for the problem at hand. Greedy reductionism, by contrast, occurs when “in their eagerness for a bargain, in their zeal to explain too much too fast, [some] scientists and philosophers … underestimate the complexities, trying to skip whole layers or levels of theory in their rush to fasten everything securely and neatly to the foundation.”
Someone whose career is founded on greedy reductionism is “neurophilosopher” Patricia Churchland who, together with her husband Paul, has proposed that a number of concepts in what she dismissively labels “folk psychology” will eventually be replaced by more precise scientific language. The folk psychological concept of “pain,” for instance, will be replaced by something like “my C-fibers fired,” since C-fibers are the kind of biological structure that, when activated, mechanistically trigger the sensation of pain. The Churchlands here are confusing levels of explanation.
“It may be said, indeed, that without bones and muscles and the other parts of the body I can not execute my purposes. But to say that I do as I do because of them, and that this is the way in which mind acts, and not from the choice of the best, is a very careless and idle mode of speaking.”
In the same way, to say that my C-fibers firing is the cause of my feeling pain is a careless and idle mode of speaking, because the explanation doesn’t feature the crucial fact that I just hit my finger with a hammer.
Science — formerly known as natural philosophy — is pretty much the only game in town when it comes to understanding and explaining the world, no matter what you may have heard about “other ways” to know. However, science itself is a tool, with its proper domains of application, and its proper instruction manual. Try to apply it where it doesn’t belong, or in the wrong fashion, and you are simply missing the point, regardless of whether your name is Anaxagoras or Patricia Churchland. | https://medium.com/socrates-cafe/socrates-reductionism-and-proper-levels-of-explanation-135872cb8a2 | ['Massimo Pigliucci'] | 2020-12-23 22:34:06.469000+00:00 | ['Philosophy', 'Epistemology', 'Reductionism', 'Socrates', 'Writing'] |
Learning Kotlin and Swift
Kotlin vs Swift: The Init Construction
Explore Kotlin and Swift together and learn their differences
Photo by Mason Kimbarovsky on Unsplash
In mobile development, the 2 major programming languages to learn are Swift and Kotlin. They look very similar, but there are subtle differences.
Let’s look at one construct called during construction: the init, which exists in both Kotlin and Swift.
The init function
In Swift, init is the constructor and is called before the parent is constructed.
In Kotlin, the init is not really a constructor, and it is called after the parent is constructed.
More details below.
In Swift
1. The init function can have parameters, so it forms a constructor for the class.
2. You cannot have more than one init() function with the same signature in the class (e.g. only one empty-parameter constructor).
3. The init function is called before the parent constructor is called, i.e. the parent object is not yet formed.
4. The init function cannot call the class’s member functions unless they are static functions. The reason is that the parent is not constructed yet, so the child object is not constructed yet.
class Base {
    private let someString: String

    init(someString: String) {
        print("In Base init")
        self.someString = someString
    }
}

class Child: Base {
    init() {
        print("In Child init")
        super.init(someString: Child.localStaticFunction())
    }

    static func localStaticFunction() -> String {
        print("In local static function")
        return "something"
    }
}
The result of the above code will be
In Child init
In local static function
In Base init
In Kotlin
1. The init scope cannot have parameters, so it does not form a constructor for the class. The constructor is the actual constructor function, and it can have parameters.
2. You can have more than one init scope in the class. Their execution order depends on the order in which they are placed.
3. The init scope is called after the parent constructor is called, i.e. the parent object is already formed. (Note: the constructor function is also called after the parent is constructed.)
4. The init scope can call the class’s member functions. The reason is that the object has already been constructed.
open class Base {
    private val something: String

    constructor(something: String) {
        this.something = something
        println("In Base constructor")
    }

    init {
        println("In Base Init")
    }
}

class Child: Base {
    constructor(something: String): super(something) {
        println("In Child Constructor")
    }

    init {
        println("In Child Init")
        localFunction()
    }

    private fun localFunction() {
        println("In Local Function")
    }
}
The result of the above code will be
In Base Init
In Base constructor
In Child Init
In Local Function
In Child Constructor
Summary
To summarize, I have put them side by side for easier comparison.
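In short, recapping the points above: in Swift, init is the constructor itself, so it takes parameters, only one init per signature is allowed, it runs before the parent is constructed, and it can only call static functions before super.init. In Kotlin, the constructor function plays that role, while init blocks take no parameters, can appear several times (running in the order they are declared), run after the parent constructor, and can freely call member functions.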
Uncomplicating
I got so used to writing these daily entries that I practically and unintentionally left aside my other publications and interests.
As I’ve been trying to get back to my technology publication, I have to admit that it’s been hard, feeling blocked and not knowing how to start or finish a story, like it has been deleted from my mind.
Today I decided to just do it, go through it, to start again, but most importantly, to make it simple. It doesn’t matter how elaborate the idea in my head might be: avoid the complications, tell the story and share the perspective, and the rest will follow with time and practice until I get back to where I used to be.
When in doubt, always keep it simple. | https://medium.com/thoughts-on-the-go-journal/uncomplicating-163d09628a2f | ['Joseph Emmi'] | 2018-08-08 22:46:28.279000+00:00 | ['Personal Growth', 'Challenge', 'Journal', 'Self Improvement', 'Writing'] |
Mortgage calculator in R Shiny
Introduction
I recently moved out and bought my first apartment. Of course, I could not pay it entirely with my own savings, so I had to borrow money from the bank. I visited a couple of banks operating in my country and asked for a mortgage.
If you already bought your house or apartment in the past, you know how it goes: the bank analyzes your financial and personal situation and make an offer based on your propensity to repay the bank. You then either accept the offer if you are satisfied with the rate and conditions, or visit another bank if you believe you could receive a better offer. Mortgages and loans are more complicated than that of course, but let’s keep it simple here.
As I kind of like to control and keep a close eye on my personal finances (sometimes a bit too close I must admit), I knew precisely how much I could spend for my monthly mortgage repayment while still being able to cover my living expenses. However, I had no clue how much I could borrow in total for my new apartment given these housing repayments.
I knew I was not the first person in this case, so I looked online to see if I could find an R script which would answer my question (and potentially also give me the total cost of the housing loan, including the loan amount and the accumulated interests). I finally found an R script created a while ago by Prof. Thomas Girke.
Mortgage calculator
The function in the script was functional and solved my main issue, but I wanted to be able to play more easily with the different settings such as the amount, the duration and the interest rate of the loan.
For this reason, I created an R Shiny app which is available here:
Mortgage calculator in R Shiny
In the meantime, I received an Excel file from a friend working in a Belgian bank which does precisely the same task. I am not an actuary nor an expert in mortgage loans, so with his file I was able to cross check the results and edit the code accordingly.
The app greatly helped me to know the maximum amount I could borrow from the bank by playing with the three main settings of a mortgage, so it gave me a precise price limit when looking for apartments online.
Note that the app can of course be used for any loan, not only for mortgages.
How to use the mortgage calculator?
First, you can find the mortgage calculator here.
I try to keep all my Shiny apps easy to use for everyone. However, here is how to use it in case it is not intuitive enough:
1. Enter the amount of the loan (i.e., the amount you would like to borrow, do not include downpayment)
2. Enter the annual interest rate in %
3. Enter the duration of the loan in years
On the right panel (or bottom if you use the app on mobile) you will see:
a summary repeating the settings you entered,
the total cost of the loan (principal and interests included), and more importantly
the amount of the monthly payments
A plot representing the percentage attributed to the repayment of the interests and the capital is also displayed. You see that (especially in the first years of the loan), the higher the interest rate and the duration of the loan, the higher the percentage of the monthly repayments is attributed to the repayments of the interests.
Finally, the amortization table showing the remaining balance month by month is displayed after the summary and the plot. You can copy, export (in PDF, CSV or Excel) or print this amortization table for further use.
Code of the app
Below is the entire code in case you would like to enhance the app (feel free to send me your app if you happen to improve it!).
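As a rough illustration of the core calculation behind the app (a standard annuity formula with monthly compounding), here is a minimal R sketch. It is only an example, not the author's actual Shiny code, and the function name and default values are assumptions:

mortgage <- function(principal = 100000, annual_rate = 2, years = 20) {
  r <- annual_rate / 100 / 12                    # monthly interest rate
  n <- years * 12                                # number of monthly payments
  payment <- principal * r / (1 - (1 + r)^(-n))  # fixed monthly payment (annuity formula)

  # month-by-month amortization: interest share, capital share, remaining balance
  interest <- capital <- balance <- numeric(n)
  remaining <- principal
  for (i in 1:n) {
    interest[i] <- remaining * r
    capital[i]  <- payment - interest[i]
    remaining   <- remaining - capital[i]
    balance[i]  <- remaining
  }

  list(
    monthly_payment = payment,
    total_cost      = payment * n,               # principal + accumulated interests
    amortization    = data.frame(month = 1:n, interest, capital, balance)
  )
}

# Example: borrowing 100,000 at an annual rate of 1.5% over 25 years
# gives a monthly payment of roughly 400.
res <- mortgage(100000, 1.5, 25)
res$monthly_payment

Playing with this sketch also shows that a higher rate or a longer duration shifts a larger share of the early payments towards interest, which is exactly what the plot in the app illustrates.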
Thanks for reading. I hope this mortgage calculator helped you to play with the different settings of a mortgage, and who knows, helped you to decide which house or apartment to buy.
As always, if you have a question or a suggestion related to the topic covered in this article, please add it as a comment so other readers can benefit from the discussion.
Disclosure: Note that this application does not include investment advice or recommendations, nor a financial analysis. This application is intended for information only and you invest at your own risks. I cannot be held liable for any decision made based on the information contained in this application, nor for its use by third parties.
The Ultimate Guide to Writing on Medium
As a frequent Medium writer and platform enthusiast, I receive a lot of questions about achieving success on Medium.
I have an entire publication on the topic. A widely read newsletter containing regular tips and tricks. My own Facebook group for Medium writers.
It’s safe to say I spend a lot of time thinking about Medium.
However, while I’ve tried to make my content as useful as possible, I’m a firm believer that organizing information for readers so that it is clear and accessible is just as valuable as publishing it.
What good is insightful content if nobody ever reads it?
That is why I created this guide.
I’m hoping that the multi-step and somewhat interactive layout will help writers find the content they need, in an efficient manner.
Instructions: select the topic that best addresses your question or area of interest, and click on one of the corresponding section tabs listed below.
The section topics cover 8 key areas relating to Medium:
1. About Medium — This section is useful for basic questions about Medium. Examples: Who founded Medium? What is Medium? What are some key stats that explain why I should use Medium over a self-hosted blog?
2. Medium Article Format — This section is useful for new writers trying to learn how to format their articles, as well as experienced users who are looking for subtle tips and tricks to help their article stand out among the crowded field of articles. Examples: How do I edit the meta description of my Medium story? How can I add custom symbols to my article? How can I create a bulleted list?
3. Medium Partner Program Earnings — This section contains all information relating to the Medium Partner Program and earning money as a writer on Medium. Examples: How much money can you make writing for Medium? How does Medium compare to a self-hosted blogging site that I create myself? How often do top earning writers publish?
4. Medium Publications — This section is useful for users who want to create (and perfect) their own publication. It is also a great way for writers to find useful publications and learn how to submit their own articles to these publications. Examples: How do I create a Medium Publication? How do I submit my content to a publication? What are the advantages and disadvantages of creating your own Medium publication?
5. Medium Curation — This section contains content aimed at demystifying the curation process. Examples: What is Medium curation and is it essential to my success on Medium? What topics are eligible for Medium curation? Are there things I should avoid in my article in order to improve my odds of curation?
6. Medium Writing Tips — This section contains answers to questions relating to Medium writing tips. Examples: What are some of your top tips for a new Medium writer? How can I improve my Medium article formatting? How can I boost my earnings on existing articles?
7. Medium Statistics & Data Analytics — This section contains answers to questions relating to Medium's internal statistics and data analytics tools that writers have access to. Examples: How does Medium calculate read time? What are fans? Do claps affect my earnings?
8. Medium FAQs — This section contains a list of some of the most frequently asked questions about Medium. Examples: How can I add a kicker to my title and subtitle? How much does Medium pay its writers? How do I get published in a publication (other than my own)?
Note: I plan to keep this guide updated regularly and will continue to add content. So, if you enjoy this guide, be sure to save or bookmark this page (CTRL + D) for future reference. | https://medium.com/blogging-guide/the-ultimate-guide-to-writing-on-medium-d1c5acbac50a | ['Casey Botticello'] | 2020-11-12 23:20:24.201000+00:00 | ['Medium', 'Blogging', 'Curation', 'Medium Partner Program', 'Writing'] |
The Friend on the Hill | Ted and his family lived up the hill from us in a medium-sized white house with a green awning and a slanted roof. The roof looms large in my memory, though I’m sure it will be anti-climactic for most of you. Ted’s father held the august title of Superintendent of the Bessemer city school system, which, as history notes, maintained its segregated status well into the mid-1960s. I don’t remember Ted’s father being vocal or instrumental in the changes occurring around us: in the attempts by many to integrate and the attempts by many others to obfuscate the court orders. Someone had to superintend our district despite its flaws, and so Ted lived under the roof of the man who did so, whether he did so well, with good intentions or bad. Also living under this roof were Ted’s mother — a fairly typical, warm, housewife of her age — and Ted’s younger sister Ellen, who lives almost as large as Ted in my memories and whom we will see more of later.
Even at the age of five, Ted was a head taller than the rest of the kids our age. I suppose his father was tall, too, though Mr. Clark’s physical stature does not linger in my memory. Many assumed that Ted and our close neighbor Mary Jane were sweethearts in this time, though once we got to first grade, I always assumed that Ted went for Pam Gassaway who was both pretty and as tall as Ted. I don’t know if I was ever right in my hopes, but I do know that I always wanted Mary Jane for my girlfriend, and for a year — from mid 5th grade to mid 6th grade — I got my wish. But that’s another story, and Mary Jane isn’t lost to me, as friends and this series go.
In Miss Carroll’s kindergarten, Ted and I were inseparable. How often I recall some teacher — Miss Garner or Miss Pierce — crying out “Ted and Terry,” or “Terry and Ted,” as if in one voice for one person. We sat by each other, stood by each other in our choir, and always huddled together at mid-morning snack time, where many of us brought peanut butter and Graham Crackers, and were served some sort of juice by Miss Carroll. I hated the days when we got apple juice — another of my strange childhood tics. Orange juice, yes. Orange pineapple, yes. Grape, yes. Even Tang. But not apple juice.
Strange kids are strange kids.
But what happened to Ted on a particular day I’m thinking of, the day of his Barbecue potato chips, was less strange than tragic.
Miss Carroll’s kindergarten was four or five miles from our neighborhood, and so our parents (mothers) arranged a standing carpool, including my would-be girlfriend Mary Jane, our neighbor Elise, Ted, me, and I also seem to remember a girl named Dale Evans riding with us, for who, back in this era, could ever forget a girl named Dale Evans?
Even counting Ted, we were small enough to cram into the back seat of all the various cars, though when Mary Jane’s mother drove, we had the expansiveness of a station wagon. We also had a maniac for a driver, so tradeoffs are always just that. I so remember seeing the speedometer, which was color-coded, shifting colors as quickly as a green chameleon caught on a woodpile.
So on this day, when the car picked me up, I slid in by Ted, who proudly displayed his bag of Barbecue Golden Flakes, offering to share with me at snack time. I’m sure we both salivated, at least until we arrived at Elise’s house. I can see it so plainly: Ted had set his bag of chips on the other side of his lap from me, and when Elise began climbing in the back seat beside him, he plainly forgot about the bag.
No one knows for sure what exactly happened next, though in his own mind, Ted, at least, was certain.
“SHE SMUSHED MY POTATO CHIPS.”
And indeed, Elise did smush them. Sat right down on top of them.
It’s possible, of course, that Elise hadn’t seen the glittering bag of chips right in front of her and just hopped on in like normal. A pure accident with no ill intent.
Except Elise had at least one historical precedent for meanness — the time she purposefully picked up handfuls of sand from my very own sandbox, and on my very own birthday, and threw that sand right in my eyes.
Talk about screaming.
She got caught, too, and I want to say she got whipped for her act.
But on this car ride, as Ted held up his bag of chips, which were indeed clearly smushed to crumbs — begging the question of not only how hard Elise sat on them, but also, did she grind into them? Because these chips never stood a chance once her five-year-old bottom reached them — and as his tears crept down the crevices of his nose and mouth, Elise said nothing. Just stared and turned beet red in her face.
To my knowledge, she never apologized to Ted, who, at snack time, despite my assurance that we could still attempt to eat the chips and that maybe they weren’t so smushed, held up the bag and cried,
“No we can’t. See? They’re still smushed!”
And so they were. | https://medium.com/weeds-wildflowers/the-friend-on-the-hill-a38ecd3503af | ['Terry Barr'] | 2020-12-02 16:28:58.420000+00:00 | ['Friendship', 'Nonfiction', 'Love', 'Weeds And Wildflowers', 'Children'] |
Simple Monte Carlo Options Pricer In Python | Today we will be pricing a vanilla call option using a monte carlo simulation in Python. Monte Carlo models are used by quantitative analysts to determine accurate and fair prices for securities. Typically, these models are implemented in a fast low level language such as C++. However, for the sake of ease, we’ll be using Python.
Pre-Requisites:
Below is a list of pre-requisite knowledge to get the most out of this tutorial.
Required:
Calculus
Probability and Statistics
Very basic programming
Recommended:
Stochastic Processes
Stochastic Calculus or an introductory asset pricing class
Understanding The Math
Before diving into the code, we’ll cover some of the basic financial mathematics necessary to understand the model.
Assumptions
In our model we will make the following assumptions.
The price of our stock will generally increase with respect to time
The expected return is a fixed rate of the current share price
The stock follows a random-walk behavior
The Stock Price Evolution Model
We will be using a stochastic differential equation as the model of our stock price evolution. It will consist of a component representing the expected return on the stock Sₜ over an infinitesimal period of time dt, represented by:
Where Sₜ is the price of our share with respect to time, and μ is our expected rate of return.
To make our model representative of real share price behavior, we must also add a stochastic element, represented by a fixed fraction σ of our share price Sₜ and a random walk process dWₜ, to construct:
Altogether, we get the equation:
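The display equations in the original post were images and are not reproduced in this text. Reconstructed from the surrounding description, the combined model is the standard geometric Brownian motion SDE, written in LaTeX notation as:

dS_t = \mu S_t \, dt + \sigma S_t \, dW_t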
Black-Scholes Option Pricing Model
The Black-Scholes option pricing model tells us that the price of a vanilla option with a compounding rate r, expiration T and payoff function f is:
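Reconstructing the missing display equation from the text above, the fair price is the discounted expected payoff under the risk-neutral measure:

C_0 = e^{-rT} \, \mathbb{E}\left[ f(S_T) \right]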
Solving for Expectation
In order to finish deriving an expression for our fair options price, we must solve the expression within the expected payoff function. To begin with, we will use a risk-neutral stochastic differential equation (see stock price evolution model) for the expectation of Sₜ:
By taking the log of both sides of the equation and using Ito’s lemma, we obtain a solution to the stochastic process:
substituting a constant coefficient for d, (a convenient model simplification) we will obtain the final expression:
Wₜ is a Brownian motion process with mean 0 and variance T, which can be written in terms of a normally distributed random variable N(0,1):
Our final expression for Sₜ is:
Exponentiating both sides, we can simplify to:
We will continuously sample random numbers for N(0,1) into the equation while maintaining a running sum:
Where K is the strike price. The average of this expression where n is the number of random samples we take will be
Final Expression
Our final expression for the fair price of the option will be the above expression multiplied by e⁻ʳᵗ, completing our original Black-Scholes model.
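Putting the pieces together (a reconstruction of the missing final formula, consistent with the code that follows), the Monte Carlo estimate of the call price is:

\hat{C}_0 = \frac{e^{-rT}}{n} \sum_{i=1}^{n} \max\left( S_0 \, e^{(r - \frac{1}{2}\sigma^2)T + \sigma\sqrt{T}\, z_i} - K,\; 0 \right), \qquad z_i \sim N(0,1)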
The Code
import math
import random
class SimpleMCPricer():
    def __init__(self, expiry, strike, spot, vol, r, paths):
        # Total variance over the life of the option: sigma^2 * T
        self.variance = vol**2 * expiry
        # sigma * sqrt(T), the factor that scales the random draw
        self.root_Variance = math.sqrt(self.variance)
        # Ito correction term: -1/2 * sigma^2 * T
        self.itoCorr = -0.5*self.variance
        # Corresponds to S0 * e^(rT - 1/2 sigma^2 T)
        self.movedSpot = spot*math.exp(r*expiry + self.itoCorr)
        self.runningSum = 0
        # Simulate all paths
        for i in range(0, paths):
            # Draw a standard normal N(0,1) sample, as the model requires
            thisGauss = random.gauss(0.0, 1.0)
            # root_Variance already contains the sqrt(T) factor
            thisSpot = self.movedSpot*math.exp(self.root_Variance*thisGauss)
            # Payoff of this specific path
            thisPayoff = thisSpot - strike
            # The option is worth zero if the final price is below the strike
            thisPayoff = thisPayoff if thisPayoff > 0 else 0
            self.runningSum += thisPayoff
        # Average the payoffs and discount back to today
        self.mean = self.runningSum/paths
        self.mean *= math.exp(-r * expiry)

    def getMean(self):
        return round(self.mean, 2)
Run the Model!
Let us run the model on an option with expiration in 2 years, with a strike price of 32 dollars, a current price of 30 dollars, a 10% volatility parameter, and a 3% rate of return. We will simulate 1,000,000 paths and determine the fair price.
model = SimpleMCPricer(2,32,30,.1,0.03,1000000)
model.getMean()
As you can see, the calculated fair price of the option comes out at roughly 1.63 dollars, in line with the analytic Black-Scholes value for these parameters. The exact output varies slightly from run to run, since the simulated paths are random.
Sources and Further Reading
[1] C++ Design Patterns and Derivatives Pricing, Mark S. Joshi
[2] The Concepts and Practices of Mathematical Finance, Mark S. Joshi
[3] Stock price modelling: Theory and Practice, Abdelmoula Dmouj | https://optimizedpran.medium.com/simple-monte-carlo-options-pricer-in-python-92050df4eeb3 | ['Pranav Ahluwalia'] | 2020-11-28 19:18:12.393000+00:00 | ['Python', 'Mathematical Modeling', 'Options', 'Quantitative Finance', 'Monte Carlo'] |
Python Map Reduce Filter Tutorial Introduction | Map, Filter And Reduce In Pure Python
The concepts of map, filter and reduce are a game changer. The usage of these methods goes way beyond Python, and they are an essential skill for the future.
Map, Filter and Reduce (Image by Author)
The Basics
Map, filter and reduce are functions that help you handle all kinds of collections. They are at the heart of modern technologies such as Spark and various other data manipulation and storage frameworks. But they can also be very powerful helpers when working with vanilla Python.
Map
Map is a function that takes as an input a collection e.g. a list [‘bacon’,’toast’,’egg’], and a function e.g. upper(). Then it will move every element of the collection through this function and produce a new collection with the same count of elements. Let’s look at an example
map_obj = map(str.upper,['bacon','toast','egg'])
print(list(map_obj))
>>['BACON', 'TOAST', 'EGG']
What we did here is use the map(some_function, some_iterable) function combined with the upper function (this function uppercases each character of a string). As we can see, we produced one element in the output list for every element in the input list. We always receive the same number of elements in the output as we put in! Here we sent 3 in and received 3 out, which is why we call it an N to N function. Let’s look at how one can use it.
def count_letters(x):
    return len(list(x))

map_obj = map(count_letters,['bacon','toast','egg'])
print(list(map_obj))
>>[5, 5, 3]
In this example we defined our own function count_letters(). The collection was passed through the function and in the output, we have the number of letters of each string! Let’s make this a little bit sexier using a lambda expression.
map_obj = map(lambda x:len(list(x)),['bacon','toast','egg'])
print(list(map_obj))
>>[5, 5, 3]
A lambda expression is basically just a shorthand notation for defining a function. If you are not familiar with them you can check out how they work here. However, it should be fairly easy to understand how they work from the following examples.
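For reference, here is a tiny sketch showing that the lambda used above is just shorthand for the named function we defined earlier; both behave the same way:

def count_letters(x):
    return len(list(x))

count_letters_lambda = lambda x: len(list(x))

print(count_letters('egg'), count_letters_lambda('egg'))
>> 3 3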
Filter
In contrast to Map, which is an N to N function, Filter is an N to M function where N≥M. What this means is that it reduces the number of elements in the collection. In other words, it filters them! As with map, the notation goes filter(some_function, some_collection). Let’s check this out with an example.
def has_the_letter_a_in_it(x):
    return 'a' in x

# Let's first check out what happens with map
map_obj = map(has_the_letter_a_in_it,['bacon','toast','egg'])
print(list(map_obj))
>>[True,True,False]

# What happens with filter?
map_obj = filter(has_the_letter_a_in_it,['bacon','toast','egg'])
print(list(map_obj))
>>['bacon', 'toast']
As we can see it reduces the number of elements in the list. It does so by calculating the return value for the function has_the_letter_a_in_it() and only returns the values for which the expression returns True.
Again this looks much sexier using our all-time favorite lambda!
map_obj = filter(lambda x: 'a' in x, ['bacon','toast','egg'])
print(list(map_obj))
>>['bacon', 'toast']
Reduce
Let’s meet the final enemy and probably the most complicated of the 3. But no worries, it is actually quite simple. It is an N to 1 relation, meaning no matter how much data we pour into it we will get one result out of it. The way it does this is by applying a chain of the function we are going to pass it. Out of the 3, it is the only one we have to import from functools. In contrast to the other two, it is most often used with three arguments: reduce(some_function, some_collection, some_starting_value). The starting value is optional, but it is usually a good idea to provide one. Let’s have a look.
from functools import reduce

map_obj = reduce(lambda x,y: x+" loves "+y, ['bacon','toast','egg'],"Everyone")
print(map_obj)
>>'Everyone loves bacon loves toast loves egg'
As we can see we had to use a lambda function which takes two arguments at a time, namely x,y. Then it chains them through the list. Let’s visualize how it goes through the list
1. x="Everyone", y="bacon": returns "Everyone loves bacon"
2. x="Everyone loves bacon", y="toast": returns "Everyone loves bacon loves toast"
3. x="Everyone loves bacon loves toast", y="egg": returns "Everyone loves bacon loves toast loves egg"
So we have our final element "Everyone loves bacon loves toast loves egg". Those are the basic concepts to move with more ease through your processing pipeline. One honorable mention here is that you cannot assume in every programming language that the reduce function will handle the elements in order, e.g. in some languages it could be 'Everyone loves egg loves toast loves bacon'.
Combine
To make sure we understood the concepts let’s use them together and build a more complex example.
from functools import reduce

vals = [0,1,2,3,4,5,6,7,8,9]
# Let's add 1 to each element >> [1,2,3,4,5,6,7,8,9,10]
map_obj = map(lambda x: x+1,vals)
# Let's only take the uneven ones >> [1, 3, 5, 7, 9]
map_obj = filter(lambda x: x%2 == 1,map_obj)
# Let's reduce them by summing them up, ((((0+1)+3)+5)+7)+9=25
map_obj = reduce(lambda x,y: x+y,map_obj,0)
print(map_obj)
>> 25
As we can see we can build pretty powerful things using the combination of the 3. Let’s move to one final example to illustrate what this might be used for in practice. To do so we load up a small subset of a dataset and will print the cities which are capitals and have more than 10 million inhabitants!
from functools import reduce

#Let's define some data
data=[['Tokyo', 35676000.0, 'primary'], ['New York', 19354922.0, 'nan'], ['Mexico City', 19028000.0, 'primary'], ['Mumbai', 18978000.0, 'admin'], ['São Paulo', 18845000.0, 'admin'], ['Delhi', 15926000.0, 'admin'], ['Shanghai', 14987000.0, 'admin'], ['Kolkata', 14787000.0, 'admin'], ['Los Angeles', 12815475.0, 'nan'], ['Dhaka', 12797394.0, 'primary'], ['Buenos Aires', 12795000.0, 'primary'], ['Karachi', 12130000.0, 'admin'], ['Cairo', 11893000.0, 'primary'], ['Rio de Janeiro', 11748000.0, 'admin'], ['Ōsaka', 11294000.0, 'admin'], ['Beijing', 11106000.0, 'primary'], ['Manila', 11100000.0, 'primary'], ['Moscow', 10452000.0, 'primary'], ['Istanbul', 10061000.0, 'admin'], ['Paris', 9904000.0, 'primary']] map_obj = filter(lambda x: x[2]=='primary' and x[1]>10000000,data)
map_obj = map(lambda x: x[0], map_obj)
map_obj = reduce(lambda x,y: x+", "+y, map_obj, 'Cities:')
print(map_obj)
>> Cities:, Tokyo, Mexico City, Dhaka, Buenos Aires, Cairo, Beijing, Manila, Moscow
If you enjoyed this article, I would be excited to connect on Twitter or LinkedIn.
Make sure to check out my YouTube channel, where I will be publishing new videos every week. | https://towardsdatascience.com/accelerate-your-python-list-handling-with-map-filter-and-reduce-d70941b19e52 | [] | 2020-12-01 14:56:19.868000+00:00 | ['Python', 'Mapreduce', 'Tutorial', 'Programming', 'Intro'] |
Parsing JSON in Dart/Flutter | Hi everybody!
There are many ways to parse JSON from a request. In this post we are going to show how to parse JSON using the http package in Flutter and Dart.
First, we need to add the http dependency in pubspec.yaml
http: ^0.12.2
We are going to make a request, and it takes a long time. That’s why we need to use asynchronous programming. Otherwise, the user interface will be frozen while waiting for the response from the request. We don’t want that!
Dart provides two different ways to do asynchronous programming: futures and the async and await keywords. We are going to use both of them to code this example.
Note: in future posts we can talk about asynchronous programming. In this post we are going to focus on parsing JSON.
Let’s code!.
Imagine we need to get Flutter repositories from Github. So, we need to create a model which represents this repository.
Repository model
We can code using async and await keywords and it looks like this:
Parse response using async and await keyword
Let’s analyse point by point:
1- get(url) is a function from the http package which returns a Future<Response>, but since we used the await keyword, it gives us just a Response .
2- Response contains body, which is a string. The JSON can be parsed by json (from the Dart package import ‘dart:convert’;) into a List<dynamic> .
3- We need to iterate over each element of the List and map it to our Repository .
Use await keyword to get completed results of an asynchronous expression. Use async keyword before a function’s body to mark it as asynchronous. | https://medium.com/swlh/parsing-json-in-dart-flutter-37c411f2707a | ['Mariano Castellano'] | 2020-08-26 06:57:17.753000+00:00 | ['Flutter', 'Json', 'Google', 'Software Development', 'Dart'] |
The Myth of the Data Scientist Shortage | By Thomas Davenport
Data scientists — people who can manage and analyze big, unstructured data — were once as scarce as vegetarian dogs. If your business wasn’t based in Silicon Valley or Boston, if you couldn’t offer massive stock options, and if you didn’t have a sexy business model, you were unlikely to be able to hire any. When I interviewed 35 of them in 2013 for an article in Harvard Business Review, my co-author (D.J. Patil, now a data scientist in the White House) and I wrote, “The shortage of data scientists is becoming a serious constraint in some sectors.” The most common educational background among the 35 data scientists I interviewed was a Ph.D. in experimental physics, and there aren’t a lot of those sitting around.
But now the world of data science has changed dramatically. There may not be a glut of data scientists, but they are much easier to find and hire than they used to be.
If you’re based in Omaha, you’ve got a good shot at finding some good ones. If you can offer only a decent salary, you’ll probably be OK. And even the most traditional business can hire them these days. In short, there is no excuse for not building a data science capability.
Here are some common excuses companies use for not employing data scientists, and why they’re no longer valid:
“Universities just aren’t turning out data scientists.” Au contraire. There are more than 100 programs at U.S. universities alone that focus on analytics or data science. Some schools, like Northwestern University, New York University, and the University of California at Berkeley, have more than one degree program in data science. These programs are already churning out thousands of graduates.
There aren’t enough quantitative Ph.D.s to go around. First of all, you probably don’t need a Ph.D. data scientist. There are plenty of Master’s degree graduates out there who will have all the skills you need. Moreover, you’d be surprised how many unemployed or underemployed Ph.D.s there are in quantitative and scientific fields. There are programs that provide data science internships to Ph.D.s like the Insight Data Science Fellows program, and programs at vendors like SAS and Microsoft can ensure that the Ph.D.s have all the latest skills.
Data scientists don’t want to work in the hinterlands where my company is based. Think again. There are universities with programs in data science or analytics in Alabama, Kansas, Nebraska, South Dakota, West Virginia, and many other states far removed from the east or west coasts. At least some of the graduates of those programs are willing to work where they went to school.
Data scientists fresh out of school won’t understand my business. That may well be true. So train your own employees in data science. Cisco Systems, for example, worked with two universities to create distance learning education and certification programs in data science. More than 200 data scientists have been trained and certified, and are now based in a variety of different functions and business units at Cisco.
My company isn’t hiring anybody, but we still need data scientists. Again, you can retrain existing employees. You could take the Cisco approach and create a custom program. Or keep in mind that many of the university programs are online and can be taken part-time.
If you make it known to your employees that your company needs and values data science skills — and that you might pay for some of the education — you will probably have some certified data scientists within a year.
Of course, it still may be difficult to find highly productive and effective data scientists, as with any sort of job. But there are now many potential candidates out there. No matter what your business is or where it’s based, chances are good that you can find someone to help with your difficult data problems.
Tom Davenport is a senior advisor, Deloitte Analytics, and distinguished professor, Babson College. He is also a Fellow of the MIT Initiative on the Digital Economy.
This blog first appeared in the WSJ.com CIO Journal on August 11, 2016, here. | https://medium.com/mit-initiative-on-the-digital-economy/the-myth-of-the-data-scientist-shortage-f1b77b10aee3 | ['Mit Ide'] | 2016-09-26 15:11:56.015000+00:00 | ['Data Science', 'Big Data', 'STEM'] |
The Secret to Making Yourself More Coachable | Learn how to become more coachable. Use this checklist to find the areas you need to work on so you can have a successful relationship with your coach.
Communication Skills
Listen closely. Pay attention to what your coach has to say when he or she asks you deep-rooted questions you may not want to answer.
Look for the truth in the questions instead of dismissing their perspective or trying to make excuses. Resist any urge to interrupt or defend yourself as they ask the questions.
Ask questions. It is okay to ask your coach to clarify a question so you are better positioned to answer it.
Ensure you understand what your coach is asking you. Paraphrase their statements in your own words, and if you need to, write the question on a sheet of paper.
Take your time. Allow yourself to absorb information fully. When your coach asks questions it is okay to think about your answer. Often the questions are hard to answer, because they cause you to look within yourself.
Focus on responding thoughtfully rather than quickly. If a situation stirs up strong feelings, give yourself an opportunity to calm down so you can think.
Welcome feedback. It is okay to ask your coach for feedback. However, as a coach we ask questions, so be ready to have a question thrown back at you regarding the feedback.
As coaches we try to stay away from “our opinionated” responses and giving advice.
Photo by Joshua Ness on Unsplash
Watch your body language. Ensure your gestures and expressions are friendly and consistent with your words. Coaches are human and we can tell when you are shutting down, and getting uncomfortable even on the phone.
Remember, respect goes both ways in a coaching relationship. It’s easier for them engage with you when they feel respected and appreciated.
Open up. Be honest with your answers. Your session with your coach is a safe place, share what you need to. | https://medium.com/the-partnered-pen/the-secret-to-making-yourself-more-coachable-d927faab9b5d | ['Marla J. Albertie'] | 2020-04-25 04:01:00.856000+00:00 | ['Life', 'Coaching', 'Productivity', 'Communication', 'Soft Skills'] |
How long should a novel be? | What’s the shortest novel ever written? What makes a work of fiction a novel and not a short story? And how do novellas fit in?
We’re examining ten extremely short novels from around the globe to find the world’s shortest novel and, while we’re at it, figure out what really makes a story a novel.
Today, we’re looking at length. Is it the best way to separate novels from short stories?
When you try to pinpoint the exact difference between a novel and a short story, one of things all the literary wonks cite is length. A novel is longer than a short story, period. And by longer, we mean a novel has more words.
The problem is, nobody seems to agree how many words a novel should have.
Ask around and you’ll hear, “Novels range from 55,000 to 300,000 words.”
And “anything above 70k but less than 115k.”
And “a novel is usually defined as anything over 40,000 words.”
And “anything more than 50,000 words is probably a full novel.”
It’s even harder to determine how many words a narrative should have if it’s to be called a “short story.”
Some say, “a short story is 1,500 to 30,000 words.”
Or maybe, “anywhere from 1,000 to 4,000 for short stories, however some have 20,000 words.”
But don’t forget about novellas! “Novellas generally run 20,000–50,000 words.” Maybe?
With this uncertainty in mind, let’s consider Wenjack by Joseph Boyden.
Set in Northern Ontario, Canada, during the mid-twentieth century, Wenjack is a short work of fiction structured in the manner of a traditional novel, with a plot that offers a classic presentation of the stages of a hero’s journey. The call to adventure, departure, ordeals, and even a literal journey are all there.
The title comes from the surname of the protagonist, a young Ojibwe boy named Chanie but called “Charlie” by the white folks who forcibly removed him from his family home and imprisoned him in a residential school for the “education” (assimilation) of Native and First Nations children.
In structure, Wenjack indeed seems to be a novel. It’s broken up into chapters, it has a protagonist and antagonist, a linear structure, a clear conflict and a definitive resolution. And the book calls itself a novel — it’s right there on the copyright page:
While incidents in the novel are based on real people, events and locations, they have been recreated fictitiously.
At first glance, even though it looks like a novel, it doesn’t seem like a good candidate for the shortest novel ever written. It’s 97 pages long (not counting the author’s note at the end), making it longer than any number of short novels including Ethan Frome, Animal Farm, and A Christmas Carol.
However, these 97 pages are tiny, printed in a large font with huge margins. The book is just four-and-a-half inches by six-and-a-quarter inches. It’s not much bigger than a U.S. passport.
Is Wenjack actually shorter than it seems?
The true length of a book depends on its word count per page. And word count per page, in turn, varies based on the “trim size” (the physical dimensions of the book), font, and layout the publisher chooses. So it’s hard to pin down a universal page-count-per-word-count for novels as a group.
However, using a formula for calculating word count in print books from The Pen & The Pad on a randomly chosen paperback novel from my book hoard (a well-thumbed copy of The Silence of the Lambs, in case you’re wondering), we get an estimated count of 360 words per page. We can round this down to 350 words per page to accommodate book design elements like partial lines of text and half-pages at the end of chapters.
Using the same formula, I estimated that Wenjack clocks in at an average of 140 words per page, giving the entire text a word count of approximately 10,220. That means, if printed as a conventional paperback, Wenjack would be just 28 pages long. On page 28 of The Silence of the Lambs, Clarice Starling had just returned from her first visit with Hannibal Lecter. The novel had barely gotten started.
So, is Wenjack the shortest novel in the world?
No.
In the final analysis, Wenjack is a short story. Here’s why.
It’s not the extremely low word count (we’re looking for the world’s shortest novel, after all). It’s the filler.
Wenjack felt like a 5,000-word short story, written from the first-person point of view of Chanie, that was padded out with an additional 5,000 words in alternating chapters that either previewed or reviewed Chanie’s chapters. These extra chapters were narrated in the unusual first-person plural (“we” instead of “I”) from the point of view of what seemed to be various species of Canadian wildlife (we learn very, very late in the story that they are actually manitou spirits).
As is typical of short stories, the narrative centers on a single plotline: Chanie’s journey. The hockey-game-style commentary by the manitous did not add to this plotline, however. It merely repeated it.
For example, on page 22 the manitou spirits inform us:
“They shake in half sleep despite holding on to a friend or a brother. When they awake, though, they will feel the shame of having touched one another, if even just for warmth.”
And then two pages later, via Chanie’s narration, we learn that, yep, that’s what happens, alright:
“My arms are wrapped around his brother with his back to me and we are curled like dogs. I watch as the older brother shakes his head in a big no.”
Chanie’s story was based on real events that Boyden read about in a magazine article written in 1967. As Boyden told The Globe and Mail just after the book was published in 2016, “Chanie’s voice came to me very quickly. And I was like, ‘he doesn’t know the bigger picture, how am I going to paint the bigger picture? This book has to open up in ways.’ That’s when the voices of the manitous started emerging. At first it was just a crow and then that crow transformed into an owl. These different voices of the different spirits following him and watching him on his journey allowed me as the writer to explain the bigger picture going on.”
This impulse — “How am I going to paint the bigger picture?” — was valid. Painting the bigger picture would have transformed the tale of Chanie’s ill-fated walk home from short story to novel. The author’s note explaining the context of the real-life incident that inspired Boyden added the outlines of the “bigger picture” he was seeking. However, the constant play-by-play from what I inelegantly called “some magic animals” in an email to a friend (in my defense, this was before their status as supernatural guardian spirits was made clear) merely painted the same picture, itself a poorer copy of the original.
And on the subject of pictures, though the animal (manitou) illustrations by Cree artist Kent Monkman that are included in the book are superb, they only added to the page count and weren’t relevant to Chanie’s story. An image of the map that Chanie uses in his attempt to find his way home would have been an appropriate addition to the text; I’m still confused about just how far north the poor kid was wandering alone in the snow, and how far he still had to go when the story ended.
Be sure to check out my extremely short novel, The Drowned Town. What do you think? Is it really a novella, a short story, or something else?
Next up: It Was a Dark and Stormy Night, Snoopy by Charles M. Schulz. | https://katherineluck.medium.com/how-long-should-a-novel-be-8c8a3ae250ac | [] | 2019-10-03 16:16:01.192000+00:00 | ['Writing Tips', 'Short Story', 'Novel Writing', 'Fiction Writing', 'Writing'] |
Home Automation With MQTT and Home Assistant — TechTalk Gist | Practical Considerations
Quality of service
QoS 0
Fire and forget approach. The MQTT broker will not care if the message is actually received or not by all the subscribers.
Eg:
DHT (digital humidity temperature) sensors that transmit climate information every 10s
Presence sensors (PIR) that check presence every 5 seconds and transmit the status
Suitable for scenarios where missing a single message or point of data is not significant, or where power is constrained and network overhead must be kept to a minimum.
QoS 1
The messages are guaranteed to be delivered at least once. This implies that messages may be delivered several times.
Suitable when you expect all the readings to be received without loss. This also assumes that duplicate data, if it occurs, can be removed by other means (by using timestamps, etc.).
Weather monitoring, where time-series information is vital. Packet losses can occur due to network issues; hence, application-layer delivery ACKs are needed.
QoS 2
Delivery is guaranteed to happen exactly once. No duplicates or lost messages.
Parking lots need to monitor the number of vehicles that have entered in order to show customers the available parking slots. Vehicles can enter in parallel from multiple locations, and no assumptions about timing are possible.
Deep ocean monitoring systems where timestamps might not be reliable. In such cases, messages must be received just as many times as the events occur.
In QoS 1 and 2, messages are queued by the clients for sending, which is often a bad choice for memory-constrained devices such as Arduinos or ESPs.
Network usage and power consumption
Network connections can be reliable, yet the IoT devices can be in states that make them incapable of maintaining a stable connection. For example, device meshes (like ZigBee or ESP) can change topology with movement. Some devices can restart periodically due to OTA (over-the-air) updates. Or the user might decide to unplug the smart bulb from the dining room and move it to the kitchen!
In such cases, there must be a means by which the communicated messages can be made persistent. In other words, if a light is switched on, and mains go off for some reason, the bulb should still be able to know that it should come on when it comes online again. We achieve this using the retain flag.
If the retain flag is turned on, the broker will keep the last message published with the retain flag indefinitely. This can be cleared using the --remove-retained argument.
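As a small illustration of the retain flag, here is a sketch using the Python paho-mqtt client (1.x API); the broker address is a made-up example, and the topic mirrors the light example in the next section. An ESP32 would do the same thing with the Arduino PubSubClient library.

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.10", 1883)  # assumed local broker address

# retain=True tells the broker to keep this as the last known message on the topic,
# so a bulb that reboots or reconnects can immediately recover its state.
client.publish("arduino-esp/light/L1/state",
               '{"state": "ON", "brightness": 255}',
               qos=1, retain=True)
client.disconnect()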
Real Example
The flow of a typical MQTT device is as follows.
Click — smart home app button for light L1.
Send the following payload to MQTT broker under topic arduino-esp/light/L1/set
{
“state”: “ON”,
“brightness”: 255
}
Now the message is sent to the subscribing bulb (or controller) under the same topic with the same payload.
Now the device turns the bulb relay to a closed position (bulb turns on) and sends the following payload to topic arduino-esp/light/L1/state.
{
“state”: “ON”,
“brightness”: 255
}
When this is received by the smart home app, it will turn the colour of the button to yellow and brightness level to the set amount.
Why is this done so?
There could be several users controlling the brightness/light. At the same time, there could be failures in this operation due to the many issues we discussed. Therefore, the bulb controller itself must share its status to the broker to be sure.
One way of doing this is publishing the state every 10 or 20 seconds. And to be more responsive, an immediate message could be sent when the state is changed. Periodic messages can be expensive and can prevent devices from going into deep sleep. We can use only the immediate acknowledgements for updates, but for that, we need a QoS above 0. Otherwise, the controlling app may be unresponsive. This comes down to a trade-off between how active the device is and its capability to be in deep sleep with WiFi or any other kind of service.
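To make the flow above concrete, here is a rough sketch of the controller side written with the Python paho-mqtt client (1.x API); a real ESP32 controller would use the Arduino PubSubClient library instead, and the broker address here is an assumption.

import json
import paho.mqtt.client as mqtt

SET_TOPIC = "arduino-esp/light/L1/set"
STATE_TOPIC = "arduino-esp/light/L1/state"

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    # ... drive the relay / brightness output here based on payload["state"] ...
    # Report back the state that was actually applied, retained for late joiners.
    client.publish(STATE_TOPIC, json.dumps(payload), qos=1, retain=True)

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)  # assumed local broker address
client.subscribe(SET_TOPIC, qos=1)
client.loop_forever()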
Practical Example with Home Assistant — Home Assistant IO
Screenshot from Reddit
Steps you need to follow before you can start implementing this example!
Install Home Assistant
Install MQTT broker plugin. Link: MQTT
ESP32 with Arduino IDE and PubSub library (MQTT library)
ESP32 MQTT Publish-Subscribe with Arduino IDE (Image source+Tutorials)
Try the following!
Image Source: Link
More elegant approach with Home Assistant! You have a nice UI and Mobile Apps. It is open source too!
Home Automation with Home Assistant and MQTT
Home Assistant can discover MQTT-enabled devices automatically and register them in the application. MQTT Discovery
Discovery message template for a light
{
“~”: “homeassistant/light/kitchen”,
“name”: “Kitchen”,
“unique_id”: “kitchen_light”,
“cmd_t”: “~/set”,
“stat_t”: “~/state”,
“schema”: “json”,
“brightness”: true
}
“~” is a placeholder to be placed at its occurrence in values of this JSON object.
This message has to be sent to the discovery topic of Home Assistant, which is;
homeassistant/light/kitchen/config
Everything in the JSON object is configurable by the device itself. The discovery topic must be strictly followed. Otherwise, some hacks are needed.
Steps:
The device (light) will publish the above JSON to the discovery topic.
Home Assistant will add a bulb controller to its UI
Rename it as you wish, say “Dining room light”
The above bulb has brightness control too and it runs with a JSON payload.
When the user clicks the button to switch the bulb, the home assistant will publish a payload to topic “~/set” (determined by cmd_t which expands to command_topic). “~” will be replaced by “homeassistant/light/kitchen”.
Then light will publish its new state to “~/state” which is determined by the “stat_t” which expands to “state_topic”.
JSON keys are shortened because some devices operate with limited memory and string-processing power. Shorter keys also reduce network packet size, bandwidth, RAM usage and internal buffer requirements. Some networks have limited packet sizes or MTUs (Maximum Transmission Units).
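A device can announce itself simply by publishing the config payload above as a retained message. A minimal sketch with the Python paho-mqtt client (1.x API), with an assumed broker address, looks like this:

import json
import paho.mqtt.client as mqtt

discovery_payload = {
    "~": "homeassistant/light/kitchen",
    "name": "Kitchen",
    "unique_id": "kitchen_light",
    "cmd_t": "~/set",
    "stat_t": "~/state",
    "schema": "json",
    "brightness": True,
}

client = mqtt.Client()
client.connect("192.168.1.10", 1883)  # assumed local broker address
# Retained, so Home Assistant re-discovers the light even after either side restarts.
client.publish("homeassistant/light/kitchen/config",
               json.dumps(discovery_payload), qos=1, retain=True)
client.disconnect()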
Advanced Automations with NodeRED and Home Assistant
Install: Home Assistant Community Add-on: Node-RED
NodeRED is a virtual wiring tool for MQTT (or any other) messages. You can transform one MQTT message to another or several others. An example as follows.
PIR sensors detect home as empty for 5 hours.
Turn off all electrical appliances except for vitals (Fridge, Cameras, etc) (state: LOCK)
Now the gate sensor detects an open message from the user
Exit the home LOCK state as follows
Turn on the Garage light
Turn on AC and cool the house
Turn the geezer on
Open blinders of the bedroom to get some sunlight
Play some music
If needed the exit LOCK routine can have a different path, say if someone returns home from the front door without a car. All this is 100% compliant with Home Assistant and can be done with Arduino IDE and ESP8266 or ESP32 MCUs.
Why should one do this? (Just for knowledge)
Privacy is a massive concern in the modern world. Companies like Ubiquiti provide cheap WiFi solutions to cover entire houses. Therefore, the limitations of devices such as Philips Hue, Google Home or Alexa will no longer be a problem; you already have WiFi. ZigBee devices are expensive and limited in variety (Philips Hue bulb: Rs 5000+ per bulb), and it is a closed standard. However, an ESP32 costs less than Rs 2000 (even cheaper on Ali) and can automate a huge chandelier with little to no effort. Zero access to the Internet is needed. Home Assistant was released as an OS to run on a Raspberry Pi. The open-source mobile app (Android and iOS) can run within the local network. The quality and stability of these applications are far better than the Google and Alexa apps.
Limitations of MQTT
It is a SPOF (Single Point of Failure).
MQTT needs to be run with a watchdog (to restart if it crashes).
Poorly configured MQTT can be vulnerable to intruders.
MQTT brokers need to be online always. This is not the case with BLE beaconing devices. Hence, the broker needs to be running on a mains-powered device.
Routing is IP based. This essentially means that only a limited number of devices can connect in a given subnet. Needs a DHCP server to assign IPs. This is not the case in ZigBee or ESP-Mesh.
Considered not suitable for IIoT (Industrial IoT) applications. One major reason is the QoS support. QoS is honoured only when both subscribers and publishers are at the same level of QoS. In MQTT the same topic can have several subscribers with different QoS levels. This makes it difficult for the publisher to conclude delivery and the overhead is high for communication.
This is an application layer messaging protocol. This makes it difficult to route data in a customised manner. For example, a faraway device without a wifi connection cannot be reached by asking a near-by device to relay data. However, there are network layer messaging applications that support WiFi mesh networks that overcome this. (Painless Mesh Arduino)
Ways to overcome these limitations
Securing with SSL (MQTTS)
Use bridging devices that connect to MQTT brokers.
The bridging device can make a mesh of other devices and relay messages between external devices and the mesh. Philips Hue bridge does something similar by connecting the IP network and ZigBee bulbs. This can be achieved fully using owagner/hue2mqtt: Gateway between a Philips Hue bridge and MQTT and official Philips Java API library.
This is indeed a bulky article. However, I believe it should be fairly complete for someone who’s looking to get into home automation with Home Assistant. | https://medium.com/swlh/home-automation-with-mqtt-and-home-assistant-techtalk-gist-734bc89b5e53 | ['Anuradha Wickramarachchi'] | 2020-11-16 22:24:47.139000+00:00 | ['Software Development', 'Engineering', 'IoT', 'Home Automation', 'Home Assistant']
How Kazakhstan Turned the New Borat Film in its Favor | How Kazakhstan Turned the New Borat Film in its Favor
A great lesson for all marketing enthusiasts.
Source: BBC
One of the most outrageous moments of the 2006 film ‘Borat: Cultural Learnings of America for Make Benefit Glorious Nation of Kazakhstan’ was a made-up version of the Kazakh national anthem. It was just one of the many stereotypes that the movie exaggerated as part of its satire on the American way of life and ideologies. However, it’s not surprising that the immediate response in Kazakhstan was not that welcoming. The government was not pleased with the way in which the country was portrayed and released a four-page ad in the New York Times to state the reality of their country. They banned the film in Kazakhstan, even threatening legal action against Sacha Baron Cohen, the creator and actor who played Borat.
The film angered many Kazakhs for its inaccurate depiction and supposed mockery of the country since it was actually shot in Romania and not in Kazakhstan. However, it cannot be denied that the film did wonders for tourism in general.
October 2020 saw the sequel of Borat hit online streaming platforms, which featured Sacha Baron Cohen in the titular role again. He played the journalist from a fictitious and over-exaggerated representation of Kazakhstan. Aware of the atrociousness of his character, he was prepared for a lawsuit from the Kazakh government — but to his surprise, that never happened.
You see, they had learnt from their previous mistakes. It’s said that the reaction of Kairat Sadvakassov, the deputy chairman of Kazakhstan’s tourism board was ‘Oh, again?’ This time they were determined not to look foolish in their response to a satirical film about the US.
So, they embraced the opportunity to take the joke on themself.
They took the bold step of changing their country’s slogan.
Among the many mannerisms of Borat was his frequent use of the phrase ‘Very Nice’ in a distinct accent. The Kazakh Tourism Board has taken de facto ownership of this phrase as their slogan for their country, featuring regular use in their latest advertisements.
This was the brainchild of Dennis Keen, an American who has settled in Almaty, the largest city of Kazakhstan, who gives walking tours and hosts a state television program.
Why it works on so many levels
Turning the catchphrase around is not only a depiction of courage but most importantly shows that the real Kazakhstan is completely opposite to the fictitious world created by Borat. The people know how to take a joke, and turn something that might have seemed negative into a positive story. And you know what, the country looks really beautiful in the commercial and a place that would be loved by tourists. It’s truly a lesson that all marketing enthusiasts should take note of, where an acknowledgment is more powerful than rebuttal. Showing your humorous side can win more people over than threats and display of anger.
When he found out the way the Kazakh government had reversed its earlier stance and embraced the joke, even Sacha Baron Cohen went to the extent of clarifying in a statement that the Kazakhstan depicted in his film was completely fictitious. He went on to say that the real place was a beautiful country and he had only chosen it as the location of the movie since Americans were completely unaware of its culture and heritage.
You know you have been successful when you can get Borat to be courteous toward you! | https://medium.com/digital-diplomacy/how-kazakhstan-turned-the-new-borat-film-in-its-favor-533144de3b6e | ['Anmol Bhotika'] | 2020-11-15 12:06:16.395000+00:00 | ['Humor', 'Marketing', 'Movies', 'Culture', 'Technology'] |
HelloGold Foundation Update #27– 21st November 2019 | Technical
Improved SmartSaver cycles
SmartSaver has benefited from backend enhancements to smoothen the transition from one cycle to the next. The modifications came to solve some glitches that happened when the first day of the cycle landed on a weekend or a bank holiday. It also changes the cancellation mechanism. Any cancellations requested by the user will now take effect once the ongoing monthly cycle ends. Upon reaching the end of the cycle, funds saved under the SmartSaver plan will be transferred to the user’s main HelloGold account.
Moving from add cash to buy gold
HelloGold users have witnessed a change in the UX of their favourite savings app. “Add Cash” has now disappeared to give way for “Buy Gold”. This means that users will no longer be able to top-up cash into the app without converting it first into gold. This change comes to solve two issues identified by the team over the past year. Firstly, many users would KYC fully and add cash but not convert it into gold. Secondly, a small number of users were “gaming” the add cash feature through the Boost mobile wallet, moving funds in and out through HelloGold to claim cashback rewards. They can now stop tapping their day away for a few cents and enjoy the security and peace of mind associated with buying gold for their savings.
No worries, you’ll thank us later ;)
Blockchain accountability phase 2
The Merkle tree data architecture chosen to store database hashes onto Ethereum mainnet has now moved to phase two (read our previous update to learn about phase one). HelloGold’s blockchain team is now building different attribute layers through which the database can be divided and hashed before being pushed into Merkle trees on-chain.
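For readers unfamiliar with the data structure, here is a generic sketch (illustrative only, not HelloGold’s actual implementation) of how individual record hashes fold into a single Merkle root:

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each record, then hash pairs upwards until a single root remains.
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"tx-record-1", b"tx-record-2", b"tx-record-3"]
print(merkle_root(records).hex())  # one hash that commits to every record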
Storing information per transaction category and per user will allow a more refined data structure, allowing greater precision and reducing the amount of data needed to be revealed, should regulators seek to inquire for a specific event. | https://medium.com/hellogold/hellogold-foundation-update-27-21st-november-2019-460e9bf6745e | ['Quentin Bignon'] | 2019-11-21 01:46:59.254000+00:00 | ['Fintech', 'Startup', 'Gold', 'Ethereum', 'Financial'] |
The 4-Step Strategy to Staying Motivated During This Pandemic | “I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.”
― Frank Herbert, Dune
The beginning of this pandemic was terrifying. Do you remember the confusion? The uncertainty? I’m honestly still feeling it all.
We had no idea what this “coronavirus” was exactly, and the fear could be felt across Greece, where I ended up living for over eight months.
I noticed my mental health starting to take a hit and wondered what I could do with the extra free time.
So I decided to take a leap of faith and start writing again, after years of thinking I just couldn’t share my words, letting them free for others to critique.
“It’s not what happens to you, but how you react to it that matters.“ — Epictetus
This year was supposed to be a year of opportunity and change. My main goal was to break out of my comfort zone and publishing my words for the world to read was something I was unsure about but knew I had to do.
Scary things are going to happen throughout our lifetimes, but you can’t let anything stop you from realizing your goals.
I did spend a few weeks wallowing in fear and scrolling through terrible news all day, but I quickly snapped out of it.
Everything, including this catastrophic event, will pass. I wanted to be in a better place than I started when it did.
Here’s how I accomplished that. | https://medium.com/age-of-awareness/the-4-step-strategy-to-staying-motivated-during-this-pandemic-fe3eeb31a619 | [] | 2020-11-21 03:19:23.848000+00:00 | ['Covid 19', 'Motivation', 'Self', 'Education', 'Self Improvement'] |
Author Spotlight — August 2020. Featuring: Mark Starlin | Interview with Mark Starlin
Q: Why do you write?
Mark: Writing is a creative outlet for me. I have learned that I need creative outlets in my life to feel fulfilled and happy. So I have several. Writing is actually my newest outlet. Before discovering Medium two and a half years ago, my writing history consisted mainly of writing humorous posts on Facebook (back when Facebook was still fun) and a few guitar method books. I never gave writing much thought. Music and playing guitar were my primary creative outlets. Then quite by accident, I discovered Medium, and the writing monster was unleashed. For better or worse [laughs].
Q: What is your earliest memory involving writing?
M: Besides learning to write my ABCs on the three-dotted lines, I had a third-grade teacher who thought math and science were a waste of time. She only taught reading and writing. We had to write a new story every day. I think that forced me to use my imagination and become a creative writer. It seemed like a difficult chore at the time, but I love writing fiction now.
Q: What is your favorite genre to write? To read?
M: My favorite genre to write is variety. Honestly, I love variety in almost everything. I would be miserable sticking to a single genre. I don’t even think about genres when I write. Although I am probably best-known (by tens of people) for writing humor, I would get bored and burned out only writing humor. I have learned to go with what interests me at the moment. If that is poetry, I write poetry. If it is fiction, I write fiction. It works for me and keeps writing interesting.
Ten years ago, I would have said my favorite genre to read was fantasy, with biography and history tied for second. But I have never been an avid reader. I was a “go outside and play” kind of kid. Then I became obsessed with music. Writing on Medium has expanded my horizons and made me into more of a reader. I read every day now.
Q: What are you currently reading?
M: I am currently reading books by some of my favorite Medium authors. The novel Introspectors by Rick Post, and a story collection, The Imposters and Other Stories, by Terrye Turpin. I think it is awesome that I can read books by writers I have interacted with on Medium. Stephen King won’t take my calls.
(Check out Introspectors by Rick Post here, and The Imposters and Other Stories by Terrye Turpin here.)
Q: Do you have any current writing projects? If so, what are they?
M: [Laughs.] Starting projects is my specialty. Finishing them is another matter. I just started on a sequel to my first novel. I have five other ideas for novels, which may turn out to be novellas or short stories. I let stories decide their own length. And I have a Scrivener file full of ideas for Medium stories.
I have tried to write a story for The Friday Fix every week since I discovered it (I have missed a few weeks) as a little bit of disciplined writing. Plus, I like the challenge of paring a story down to exactly 50 words and still make it a story.
Q: Do you have any writing rituals, routines, or quirks?
M: My writing is based on quirks. Does that count? I can’t write with television or music on (and I am a musician!) But I can block out traffic noise or nature sounds. My only writing routine is getting my best ideas when I am in the shower.
Q: Besides reading and writing, what are some of your other favorite hobbies and interests?
M: Making music (playing guitar, keyboards, bass) and songwriting. Walking and hiking. Bike riding. Making comics. Photography. Eating. Converting rotary phones into night-vision goggles. Creating murals of Shakespeare plays using Scrabble tiles. Planning my escape from planet Earth. | https://medium.com/the-friday-fix/author-spotlight-august-2020-ee7f14cc98a3 | ['Justin Deming'] | 2020-08-01 10:58:57.601000+00:00 | ['The Friday Fix', 'Authors', 'Author Spotlights', 'Newsletter', 'Writing'] |
My Mom Used to Steal My Perfume. I Didn’t Know She Was Gaslighting Me. | It’s a little bit ironic, but my mother is the person who first introduced me to the 1944 film Gaslight. She and I never had a particularly close relationship, but for several years, we watched old movies from the local library.
It was the only thing we really did together. Growing up, we were very poor. My mother never had a paying job and instead relied upon welfare and subsidized housing.
At home, she left me alone most of the time, preferring to do her own thing — cooking, studying the Bible, or sleeping — while she told me to stay in my room. I had a sister who was five years older than me, but our mom did a lot to discourage us from building a relationship. As soon as she was in high school, my sister spent as much time as possible away from home. I was a little bit afraid of my sister, too, since our mother was constantly telling me how bad and rebellious she was.
As a result, my life was incredibly lonely. I spent most of my time cooped up at home and read a great deal in my room. I welcomed any opportunity to watch movies with my mother. She was never interested in any of my movie choices, but luckily, I enjoyed enough of her selections. Sometimes, she’d watch a miniseries on television, like Masterpiece Theater on PBS.
However, watching movies with my mother isn’t something I remember too fondly. It could be awkward. Anytime there was even a hint of love, sex, or romance, I was scared that my mom might take my interest in the film the wrong way. From the time I was about eight years old, she lectured me about the evils of sex and warned me not to be “boy crazy.” She even warned me against masturbation before I understood what it was by telling me that demons would possess me if I touched myself.
If we watched a film with any kissing in it, I was careful to avert my eyes until it was over. This fear about my mom thinking I was too interested in sex hung over my whole childhood.
When she played Gaslight, I was still too young to grasp exactly what it really meant. I knew it was a movie about a man who drove his wife crazy by making her think she was going crazy. My mom had a tendency to internalize all depictions of victimhood. Any time we watched a film with abuse or toxic behavior, she had a story to tell me about how it related to her life, and how other people — everyone — had been so cruel to her.
I believed my mother and all of her stories, no matter how far-fetched they seemed. I believed her when she told me that my father tried to kill her and that he didn’t want me, or when she said my grandmother thought we all wanted to poison her food. I even believed my mother when she told me that God spoke to her. About her, about me, about anyone.
And anything.
It’s funny. Now, when I watch the movie Gaslight, I can’t help but be enraged at all of the red flags I missed when I was young. If you’re acquainted with the film, then you already know the husband was a suspicious man long before he married Ingrid Bergman’s character. As a child and teen, I couldn’t see that, however. All I saw was my poor mother who’d been victimized and abused for her whole life.
Like a lot of women, I have a complicated relationship with my mother. But although we were never really friendly, I trusted her to want the best for me.
For a long time, I saw her as the most unselfish person I’d ever known. She pointed out everything she’d ever done or “given up for me,” and I felt guilty for even being born. | https://medium.com/honestly-yours/my-mom-used-to-steal-my-perfume-i-didnt-know-she-was-gaslighting-me-50e515c6b277 | ['Shannon Ashley'] | 2020-10-10 14:52:39.162000+00:00 | ['Family', 'Mental Health', 'Self', 'Life Lessons', 'Parenting'] |
Making the Case for Accessibility | Making the Case for Accessibility
How to convince your team to invest in more accessible design
You know accessibility is important. You want to dedicate the time to research and design a product that a wide variety of users can easily use. You want your products to make life easier for a person with a disability.
How do you convince your team to invest the time and resources for accessibility research and design?
Here are five tactics centered around empathy, flexibility, incremental changes, market share, and industry standards to convince your team to make accessibility a priority.
1. Make it personal, make it stick
The importance of accessibility in design can be hard for people to grasp unless they have personally experienced a disability or know someone who has. You can show statistics of the demographic trends of people with disabilities and the growing aging population, or advocate for how much bigger your product’s market potential will be if the product is usable by people with disabilities, but unfortunately, the numbers alone won’t tell the story. Statistics don’t stick in people’s minds; stories and experiences do. Build empathy and understanding by combining your facts, charts, graphs, and statistical analyses with stories, videos, images, and live demonstrations.
A few years after communicating via email became common, a deaf friend of mine said, “Email is like water to me. I can easily communicate with people via email.” Although she can read lips in both her native German and in English, communication can be difficult, especially when she is not speaking with people face-to-face. Before email and text messaging existed, she had no way to quickly communicate with people remotely. She couldn’t make a phone call without a hearing person assisting her.
Have your team envision your product being so vital to the life of someone with a disability that it becomes like water to them.
Strategies:
Share stories about how products work or don’t work for you and people you know with accessibility challenges.
Show videos and photos of how people with disabilities use technology or various products to help them in their daily lives.
Do a live demonstration of how to use assistive technology. For example, show someone using a CaptionCall phone that provides captions for the hard-of-hearing while they are speaking on the phone.
Conduct an inclusive design exercise that demonstrates personal experiences of temporary or permanent disabilities. Challenge your colleagues to think about how they would use their product in tricky situations that may give them a temporary impairment, such as when cooking (temporary motor impairment), having screen glare (temporary low vision), or being at a loud bar (temporary deafness). Encourage them to think of accessibility changes (high contrast, voice control, and captions, etc.) that could remedy these situations.
Have your colleagues try different assistive technologies on themselves to simulate temporary and permanent disabilities. Here are some ideas of exercises to do:
Ask your coworkers to visit their favorite websites and apps on their mobile devices and see how easy or hard it is for them to use the zoom features to increase text and image sizes.
Put on a blindfold and use a screen reader such as ChromeVox or VoiceOver to navigate a website.
Look at your company’s website, app, or product images using the No Coffee Visual Simulator to see what it might look like to users with color blindness, nystagmus, low acuity, cataracts, and other visual disabilities.
Have your coworkers pretend they have a broken arm or a permanent arm disability by disabling their dominant hand. If they are right-handed, see if they’re comfortable putting their right arm in a sling or holding it behind their back; if they are left-handed, try the same with their left arm. Instruct them to type emails by turning on voice dictation. Ask your coworkers to check how accurate the voice typing seems and how much time it takes for them to correct mistakes with their non-dominant hand.
Once you’ve completed each exercise, talk about how you can improve your website, app, or product images to make them easier to see or use for people with disabilities.
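A related quick check you can fold into these exercises (my own suggestion, not part of the original article): convert a screenshot of your product to grayscale and see whether any meaning is lost. Anything that becomes ambiguous was relying on color alone. The short sketch below uses the Pillow library in Python; the file names are placeholders.

from PIL import Image  # pip install Pillow

screenshot = Image.open("homepage_screenshot.png")  # hypothetical UI screenshot
grayscale = screenshot.convert("L")  # "L" mode = single-channel luminance (grayscale)
grayscale.save("homepage_screenshot_grayscale.png")
# Review the grayscale version next to the original: links, error states, and
# charts that are indistinguishable here depend on color alone.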
Resources:
2. Make it universal and flexible
A curb cut helps people in wheelchairs and those pushing strollers, delivery carts, suitcases, and other items. Image credit: Ryan Kiley, Visual Designer
Do a group brainstorming session about how your products can be used by a variety of users, even for use cases you hadn’t originally envisaged.
There’s a concept in universal design called the Curb Cut effect: if a product is made to work better for people with disabilities (mobility, visual, hearing, and more), it will work better for everyone. A dropped curb, curb ramp, or curb cut on a sidewalk benefits not just wheelchair users, but people with strollers, suitcases, carts, and other items that would be hard or dangerous to transport from the sidewalk to the road without a ramp.
For example, closed captioning on television or online videos doesn’t only benefit the hearing impaired. Captions in noisy places such as bars, restaurants, and airports are convenient for many people. While at work or in public transit, some people might watch videos or television without earphones and rely on captions to know what is going on.
Products can be repurposed. The National Aeronautics and Space Administration (NASA) created scratch-resistant lenses, memory foam, freeze-dried foods, and other inventions for space travel that are now regularly used in everyday life. The lightweight shock-absorption technology used for space suits was repurposed to create the shock absorbers in popular sneakers.
Using voice translation, Mary’s doctor speaks into her phone about her medical concerns. Mary ignores the French translation.
Just like NASA inventions were repurposed, other products can have unintended but positive accessibility use cases. For example, Mary speaks English, has hearing loss, and uses hearing aids. She goes to a doctor’s appointment and realizes that she can barely hear her doctor because her hearing aid batteries are low and beeping as a result. She has no new batteries. The doctor hears Mary but she can’t clearly hear the doctor. She opens the Google Translate app on her phone and asks the doctor to speak into the microphone using Translate’s speech-to-text feature. For the app to work, she selects a language, French, for Google Translate to translate the doctor’s spoken English, but she ignores the French translation. Mary reads what the doctor says in English using Google Translate’s speech recognition. She verbally responds to what the doctor has said. The doctor continues to speak into Translate and Mary reads the English transcription of the doctor’s spoken words until the appointment is over. Mary and her doctor are repurposing Google Translate for an accessibility purpose — live transcription of a monolingual conversation.
The versatility of your app might make it a lifesaver or simply make lives easier.
(Good news! Google now has an app, Live Transcribe, that offers real-time live transcription in over 70 languages using speech recognition technology.)
Strategies:
Think of how the Curb Cut effect applies to your product. What changes can your team make to help not only people with disabilities, but all users?
Do a brainstorming session or sprint with your team about how they could repurpose products for accessibility use cases and how your product could benefit people in creative ways.
Resources:
3. Small changes → big impact
Incremental small changes build a better and more robust product. Your team could start with implementing closed captions and alt text, and then move on to checking the color contrast in images, and then do screen reader tests and automated audits.
The truth is that accessibility may not be easy. However, if you take things step by step and first change the most obvious aspects of your product to make them more accessible, you will build a sense of progress on your team. This pride in team progress will be valuable to keep the team motivated to make more changes and design their products with a sense of empathy for users with accessibility needs.
In the May 2011 Harvard Business Review article, “The Power of Small Wins,” business researchers, Teresa Amabile and Steven Kramer explain the progress principle: “Of all the things that can boost emotions, motivation, and perceptions during a workday, the single most important is making progress in meaningful work. And the more frequently people experience that sense of progress, the more likely they are to be creatively productive in the long run.”
Make an accessibility plan that creates a sense of progress in your team. Decide which accessibility changes or features are needed for your product. Plan to make those changes or features in various stages. Acknowledge the team members who created the features or changes and have them share about their process to the rest of the team. Group sharing is important so that the rest of the team understands how they, too, can participate in accessibility.
Here are some scenarios and strategies of small changes that can create a sense of progress:
Scenario #1
Your team dedicated months to making funny Do-It-Yourself (DIY) houseware repair videos with jokes and fun music. Add a transcript of your video. A transcript is not only important for your customers who can’t hear, but it’s also good for the search engine optimization (SEO) of your website. Search engines can’t crawl the audio and video input of your video. However, if there is a transcript, then search engines can find keywords such as “DIY dishwasher repair” or “fix the microwave” from the transcript.
Strategies:
Provide human-generated captions, or at least, edit auto-generated YouTube captions for accuracy.
Add transcripts of videos with descriptions of the images and music lyrics. Remove the time stamps from the auto-generated YouTube caption file, and edit for accuracy to create a transcript.
Scenario #2
You are making the website for a new restaurant. The restaurant owner wants to upload the JPEG images of the fancy printed menu to the website. There is gray text on a light beige background that matches the interior colors of the restaurant. The restaurant owner likes these images and colors because they represent what the restaurant has to offer. What could be a problem with this situation?
Problem #1: Screen readers and text in images
Screen reader software usually doesn’t read text in images. People with vision impairments relying on screen readers might not hear the menu. Potential customers may give up and find another restaurant with a text menu online. Just as with the example of the missing video transcript in Scenario #1, the lack of a text menu means that search engines won’t be able to crawl the restaurant’s site to know the food and prices. If the owner insists on using the images of the printed menu, consider adding alt tags to the images with the food names and prices. Screen readers will read out the text in the alt tags. Search engines can also crawl the text in alt tags.
(Using the device camera, apps such as Select to Speak for Android and Google Translate’s image translation feature can recognize text in images.)
Problem #2: Color contrast
Do (green): The text follows the color contrast ratio recommendations and is more legible against the white background. Caution (mustard): The text doesn’t meet the color contrast ratio recommendations and may be difficult to read against the white background. Image above adapted from the Crane Material study.
Gray text on a light beige background could be hard to read for someone with low vision or for a user reading in bright sunlight or low-light conditions. The Material Design color contrast guidelines, based on the World Wide Web Consortium’s (W3C’s) recommendations, suggest that small text should have a contrast ratio of at least 4.5:1 against its background.
Change the colors of the gray text and light beige background in the menu to meet the 4.5:1 contrast ratio.
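For teams who want to verify this themselves, the sketch below implements the WCAG relative-luminance and contrast-ratio formulas that the 4.5:1 guideline is based on. This is my own illustration; the two hex colors are invented stand-ins for the menu’s gray text and light beige background, not the restaurant’s actual palette.

def _linearize(channel):
    # Convert an 8-bit sRGB channel to its linear value per the WCAG definition.
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(color_a, color_b):
    lighter, darker = sorted((relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#8A8A8A", "#F5F0E1")  # hypothetical gray text on a light beige background
print(f"{ratio:.2f}:1 contrast; passes WCAG AA for small text: {ratio >= 4.5}")

Running a check like this on every text/background pair in a design is a quick way to catch combinations that look fine on a designer’s monitor but fail in bright sunlight or for users with low vision.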
Strategies:
Add alt text to images, unless they are decorative (see the audit sketch after this list).
Check for important text in images and try to reflect that text in the caption or alt text.
Test out your app or website with screen reader software.
Check for the color contrast ratios in your app or website.
Visit a TED talk video to see how the site has both subtitles on the video and a transcript.
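As a starting point for the first two strategies above, here is a rough audit sketch of my own (not from the original article) that flags img tags with missing or empty alt attributes on a page. It assumes the requests and beautifulsoup4 packages are installed; the URL is a placeholder for your own site.

import requests
from bs4 import BeautifulSoup

url = "https://example.com/menu"  # hypothetical page to audit
soup = BeautifulSoup(requests.get(url).text, "html.parser")

for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None:
        print(f"Missing alt attribute: {img.get('src')}")
    elif not alt.strip():
        print(f"Empty alt text (acceptable only for decorative images): {img.get('src')}")

Automated checks like this only catch missing alt attributes; whether the alt text actually describes the image still requires human review.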
Resources:
4. Money talks
Connect the stories with the statistics by explaining how not designing for accessibility scenarios leads to lost revenue. For example, government agencies may ask companies who want to sell products or software to the government for a Voluntary Product Accessibility Template (VPAT). If your product has not considered accessibility, then your company may lose potential sales to a large buyer such as the government.
If your competitor has successfully designed a product that people with accessibility needs can use, your competitor may dominate the market. The OXO vegetable peeler is a good example of how a product designed for an accessibility need gained wide market share. Sam Farber saw how his wife, who had arthritis, was uncomfortable holding a vegetable peeler. The peeler slipped out of her hands, especially when her hands were wet. He created the OXO vegetable peeler with an ergonomic rubber grip that wouldn’t easily slip out of the user’s hand. The OXO brand grew beyond just appealing to elderly and arthritic customers and became a popular kitchenware brand.
Strategy:
Talk with your colleagues about how making your product more accessible may lead to better market opportunities and as a result, more profits.
Resources:
5. Aim for industry standards
Look for helpful guides or industry standards to help your product stay ahead of possible complications. Your product may be subject to accessibility and disability requirements in the countries where your product is available. In the US, the Americans with Disabilities Act (ADA) was passed in 1990, before websites and apps were created. But as with many laws, new inventions and technology sometimes outpace specific rules and regulations.
In June 2017, a Florida federal court required a grocery store chain to make their website more accessible to a screen reader user who alleged that they could not use the site to order prescriptions to be picked up in the store or to access coupons to use in the store. The judge ordered the store to take several steps such as making its website comply with industry standards, instituting mandatory accessibility training for its web developers, and paying attorney fees. (The case is currently under appeal.) It’s important to consider accessibility early on, and continuously make improvements to your products, so that you can get ahead of challenges (and frustrated users). Industry guidelines and standards can help address this need.
Strategies:
Learn about and aim for industry standards, like the Web Content Accessibility Guidelines published as part of the Web Accessibility Initiative within the W3C.
Consult your own lawyer to find out what requirements may apply to your product and whether any scenarios or outcomes may force you to incorporate accessibility compliance into your product.
Resources:
Disclaimer: These materials are for your information only, and do not constitute legal advice. You should consult your attorney for advice on any particular issue. | https://medium.com/google-design/making-the-case-for-accessibility-350da9e30c84 | ['Susanna Zaraysky'] | 2020-09-02 23:07:28.859000+00:00 | ['Tools', 'Accessibility', 'UI Design', 'Design', 'UX'] |
The Lifespan of a Lie | It was late in the evening of August 16th, 1971, and twenty-two-year-old Douglas Korpi, a slim, short-statured Berkeley graduate with a mop of pale, shaggy hair, was locked in a dark closet in the basement of the Stanford psychology department, naked beneath a thin white smock bearing the number 8612, screaming his head off.
“I mean, Jesus Christ, I’m burning up inside!” he yelled, kicking furiously at the door. “Don’t you know? I want to get out! This is all fucked up inside! I can’t stand another night! I just can’t take it anymore!”
It was a defining moment in what has become perhaps the best-known psychology study of all time. Whether you learned about Philip Zimbardo’s famous “Stanford Prison Experiment” in an introductory psych class or just absorbed it from the cultural ether, you’ve probably heard the basic story.
Zimbardo, a young Stanford psychology professor, built a mock jail in the basement of Jordan Hall and stocked it with nine “prisoners,” and nine “guards,” all male, college-age respondents to a newspaper ad who were assigned their roles at random and paid a generous daily wage to participate. The senior prison “staff” consisted of Zimbardo himself and a handful of his students.
The study was supposed to last for two weeks, but after Zimbardo’s girlfriend stopped by six days in and witnessed the conditions in the “Stanford County Jail,” she convinced him to shut it down. Since then, the tale of guards run amok and terrified prisoners breaking down one by one has become world-famous, a cultural touchstone that’s been the subject of books, documentaries, and feature films — even an episode of Veronica Mars.
The SPE is often used to teach the lesson that our behavior is profoundly affected by the social roles and situations in which we find ourselves. But its deeper, more disturbing implication is that we all have a wellspring of potential sadism lurking within us, waiting to be tapped by circumstance. It has been invoked to explain the massacre at My Lai during the Vietnam War, the Armenian genocide, and the horrors of the Holocaust. And the ultimate symbol of the agony that man helplessly inflicts on his brother is Korpi’s famous breakdown, set off after only 36 hours by the cruelty of his peers.
There’s just one problem: Korpi’s breakdown was a sham.
“Anybody who is a clinician would know that I was faking,” he told me last summer, in the first extensive interview he has granted in years. “If you listen to the tape, it’s not subtle. I’m not that good at acting. I mean, I think I do a fairly good job, but I’m more hysterical than psychotic.”
Now a forensic psychologist himself, Korpi told me his dramatic performance in the SPE was indeed inspired by fear, but not of abusive guards. Instead, he was worried about failing to get into grad school.
“The reason I took the job was that I thought I’d have every day to sit around by myself and study for my GREs,” Korpi explained of the Graduate Record Exams often used to determine admissions, adding that he was scheduled to take the test just after the study concluded. Shortly after the experiment began, he asked for his study books. The prison staff refused. The next day Korpi asked again. No dice. At that point he decided there was, as he put it to me, “no point to this job.” First, Korpi tried faking a stomach-ache. When that didn’t work, he tried faking a breakdown. Far from feeling traumatized, he added, he had actually enjoyed himself for much of his short tenure in the jail, other than a tussle with the guards over his bed.
“[The first day] was really fun,” Korpi recalled. “The rebellion was fun. There were no repercussions. We knew [the guards] couldn’t hurt us, they couldn’t hit us. They were white college kids just like us, so it was a very safe situation. It was just a job. If you listen to the tape, you can hear it in my voice: I have a great job. I get to yell and scream and act all hysterical. I get to act like a prisoner. I was being a good employee. It was a great time.”
For Korpi, the most frightening thing about the experiment was being told that, regardless of his desire to quit, he truly did not have the power to leave.
“I was entirely shocked,” he said. “I mean, it was one thing to pick me up in a cop car and put me in a smock. But they’re really escalating the game by saying that I can’t leave. They’re stepping to a new level. I was just like, ‘Oh my God.’ That was my feeling.”
Another prisoner, Richard Yacco, recalled being stunned on the experiment’s second day after asking a staff-member how to quit and learning that he couldn’t. A third prisoner, Clay Ramsay, was so dismayed on discovering that he was trapped that he started a hunger strike. “I regarded it as a real prison because [in order to get out], you had to do something that made them worry about their liability,” Ramsay told me.
When I spoke to Zimbardo this past May about Korpi’s and Yacco’s claims, he initially denied that they were obligated to stay.
“It’s a lie,” he said. “That’s a lie.”
But it is no longer just a question of Zimbardo’s word against theirs. This past April, a French academic and filmmaker named Thibault Le Texier published Histoire d’un Mensonge [History of a Lie], plumbing newly-released documents from Zimbardo’s archives at Stanford University to tell a dramatically different story of the experiment. After Zimbardo told me that Korpi and Yacco’s accusations were baseless, I read him a transcript unearthed by Le Texier of a taped conversation between Zimbardo and his staff on day three of the simulation: “An interesting thing was that the guys who came in yesterday, the two guys who came in and said they wanted to leave, and I said no,” Zimbardo told his staff. “There are only two conditions under which you can leave, medical help or psychiatric… I think they really believed they can’t get out.”
“Now, okay,” Zimbardo corrected himself on the phone with me. He then acknowledged that the informed consent forms which subjects signed had included an explicit safe phrase: “I quit the experiment.” Only that precise phrase would trigger their release.
“None of them said that,” Zimbardo said. “They said, ‘I want out. I want a doctor. I want my mother,’ etc., etc. Essentially I was saying, ‘You have to say, “I quit the experiment.”’”
But the informed consent forms that Zimbardo’s subjects signed, which are available online from Zimbardo’s own website, contain no mention of the phrase “I quit the experiment.”
Zimbardo’s standard narrative of the Stanford prison experiment offers the prisoners’ emotional responses as proof of how powerfully affected they were by the guards’ mistreatment. The shock of real imprisonment provides a simpler and far less groundbreaking explanation. It may also have had legal implications, should prisoners have thought to pursue them. Korpi told me that the greatest regret of his life was failing to sue Zimbardo.
“Why didn’t we file false imprisonment charges?” Korpi asked during an interview. “It’s embarrassing! We should have done something!”
According to James Cahan, former Deputy District Attorney for Stanford University’s Santa Clara County, Korpi may well have had a case: the six hours or so after Korpi made his desire to quit the experiment plain, much of which he spent confined in the closet, appear to have met the statutory requirements for false imprisonment in California.
“If he says, ‘I don’t want to do this anymore. I want to talk to you about getting out,’” Cahan said, “and he’s then locked in a room, and he is at some point trying to or asking to get out of that room in order to communicate as a contract employee or whatever he is, and he is unable to get out of that room, then that would seem to get very close to being out of the realm of informed consent and into the realm of a violation of the penal code.”
While Zimbardo likes to begin the story of the Stanford prison experiment on Sunday, August 15th, 1971, when guards began harassing newly arrived prisoners at the “Stanford County Jail” — making it sound as if they became abusive of their own accord — a more honest telling begins a day earlier, with the orientation meeting for the guards. There, addressing the group less as experimental subjects than as collaborators, Zimbardo put a thumb on the scales, clearly indicating to the guards that their role was to help induce the desired prisoner mindset of powerlessness and fear.
“We cannot physically abuse or torture them,” Zimbardo told them, in recordings first released a decade and a half after the experiment. “We can create boredom. We can create a sense of frustration. We can create fear in them, to some degree… We have total power in the situation. They have none.”
Much of the meeting was conducted by David Jaffe, the undergraduate student serving as “Warden,” whose foundational contribution to the experiment Zimbardo has long underplayed. Jaffe and a few fellow students had actually cooked up the idea of a simulated prison themselves three months earlier, in response to an open-ended assignment in an undergraduate class taught by Zimbardo. Jaffe cast some of his dormmates in Toyon Hall as prisoners and some as guards and came up with 15 draconian prison rules for his guards to enforce, including “Prisoners must address each other by number only,” “Prisoners must never refer to their condition as an ‘experiment’ or a ‘simulation,” and “Failure to obey any of the above rules may result in punishment.” Zimbardo was so taken with the tears and drama produced by Jaffe’s two-day simulation that he decided to try it himself, this time randomly assigning guards and prisoners and dragging the action on much longer. Because Zimbardo himself had never visited a real prison, the standards of realism were defined by Jaffe’s prison research and the nightmarish recollections of Carlo Prescott, a San Quentin parolee whom Zimbardo met through Jaffe and brought in as a consultant. Jaffe was given extraordinary leeway in shaping the Stanford prison experiment in order to replicate his previous results. “Dr. Zimbardo suggested that the most difficult problem would be to get the guards to behave like guards,” Jaffe wrote in a post-experiment evaluation. “I was asked to suggest tactics based on my previous experience as master sadist. … I was given the responsibility of trying to elicit ‘tough-guard’ behavior.” Though Zimbardo has often stated that the guards devised their own rules, in fact most of them were copied directly from Jaffe’s class assignment during that Saturday orientation meeting. Jaffe also offered the guards ideas for hassling the prisoners, including forcing them to clean thorns out of dirty blankets that had been thrown in the weeds.
Once the simulation got underway, Jaffe explicitly corrected guards who weren’t acting tough enough, fostering exactly the pathological behavior that Zimbardo would later claim had arisen organically.
“The guards have to know that every guard is going to be what we call a tough guard,” Jaffe told one such guard [skip to 8:35]. “[H]opefully what will come out of this study is some very serious recommendations for reform… so that we can get on the media and into the press with it, and say ‘Now look at what this is really about.’ … [T]ry and react as you picture the pigs reacting.”
Though most guards gave lackluster performances, some even going out of their way to do small favors for the prisoners, one in particular rose to the challenge: Dave Eshleman, whom prisoners nicknamed “John Wayne” for his Southern accent and inventive cruelty. But Eshleman, who had studied acting throughout high school and college, has always admitted that his accent was just as fake as Korpi’s breakdown. His overarching goal, as he told me in an interview, was simply to help the experiment succeed.
“I took it as a kind of an improv exercise,” Eshleman said. “I believed that I was doing what the researchers wanted me to do, and I thought I’d do it better than anybody else by creating this despicable guard persona. I’d never been to the South, but I used a southern accent, which I got from Cool Hand Luke.”
Eshleman expressed regret to me for the way he mistreated prisoners, adding that at times he was calling on his own experience undergoing a brutal fraternity hazing a few months earlier. “I took it just way over the top,” he said. But Zimbardo and his staff seemed to approve. After the experiment ended, Zimbardo singled him out and thanked him.
“As I was walking down the hall,” Eshleman recalled, “he made it a point to come and let me know what a great job I’d done. I actually felt like I had accomplished something good because I had contributed in some way to the understanding of human nature.”
According to Alex Haslam and Stephen Reicher, psychologists who co-directed an attempted replication of the Stanford prison experiment in Great Britain in 2001, a critical factor in making people commit atrocities is a leader assuring them that they are acting in the service of a higher moral cause with which they identify — for instance, scientific progress or prison reform. We have been taught that guards abused prisoners in the Stanford prison experiment because of the power of their roles, but Haslam and Reicher argue that their behavior arose instead from their identification with the experimenters, which Jaffe and Zimbardo encouraged at every turn. Eshleman, who described himself on an intake questionnaire as a “scientist at heart,” may have identified more powerfully than anyone, but Jaffe himself put it well in his self-evaluation: “I am startled by the ease with which I could turn off my sensitivity and concern for others for ‘a good cause.’”
From the beginning, Zimbardo sought a high media profile for his experiment, allowing KRON, a San Francisco television station, to film his mock arrests and sending them periodic press releases as the action evolved. But Zimbardo’s prison simulation quickly garnered more press attention than he could have imagined. On August 21st, a day after the study’s premature closure, the attempt by George Jackson, radical black activist and author of the bestselling Soledad Brother, to escape from San Quentin, an hour north of Stanford, led to the deaths of three corrections officers and three inmates, including Jackson himself. In short order, KRON arranged a televised debate between Zimbardo and San Quentin’s associate warden. Three weeks later, the predominately African American prisoners at Attica State Prison in New York seized control of the facility from the nearly all-white correctional staff, demanding better treatment. Ordered by Governor Nelson Rockefeller to retake the prison by force, helicopters dumped tear gas canisters and hundreds of law-enforcement officers and armed Attica guards fired blindly into the smoke, slaughtering prisoners and hostages alike.
In an era before the mass shootings that have since become the norm in American news headlines, it was a shocking bloodbath — one of the deadliest since the Civil War, according to the New York State Special Commission on Attica. The country scrambled for answers, and Zimbardo’s experiment appeared to offer them, putting guards and prisoners on the same moral plane — mutual victims of the carceral state — though in fact nearly all the Attica killings had been committed by guards and officers. Zimbardo’s tale of guards-run-amok and terrorized prisoners first came to national attention with a twenty-minute prime-time special on NBC. Richard Yacco told NBC’s reporter that he and other prisoners had been told they couldn’t quit, but, after he failed to hew to Zimbardo’s narrative of prisoners organically “slipping into” their roles, he was edited out of the program (the recording survives here).
In his 1973 article in the New York Times Magazine, Zimbardo wrote unequivocally that Korpi’s breakdown was genuine. By the mid 1980s, when he asked Korpi to appear on the Phil Donahue show and in the documentary Quiet Rage, Korpi had long since made clear that he’d been faking, but Zimbardo still wanted to include the breakdown. Korpi went along with it.
“If he wanted to say I had a mental breakdown, it seemed a minor note,” he told me. “I didn’t really object. I thought it was an exaggeration that served Phil’s purposes.”
In Quiet Rage, Zimbardo introduced dramatic audio footage of Korpi’s “breakdown” by saying “he began to play the role of the crazy person but soon the role became too real as he went into an uncontrollable rage.” A taped segment in which Korpi admitted playacting and described how tiring it was to keep it up for so many hours was edited out. Korpi told me that Zimbardo hounded him for further media appearances long after Korpi asked him to stop, pressuring him with occasional offers of professional help.
“We unlisted the number and [Zimbardo] figured out our unlisted number,” Korpi said. “It was just bizarre. I would always tell him, ‘I don’t want to have anything to do with the experiment anymore.’ ‘But Doug, but Doug, you’re so important! And I’ll give you lots of referrals!’ ‘Yeah, I know Phil, but I testify in court now and it’s embarrassing how I was. I don’t want to have that be a big public thing anymore.’ But Phil just couldn’t hear that I didn’t want to be involved. This went on for years.”
(Zimbardo confirmed that he gave Korpi referrals, but declined to comment further.)
The Stanford prison experiment established Zimbardo as perhaps the most prominent living American psychologist. He became the primary author of one of the field’s most popular and long-running textbooks, Psychology: Core Concepts, and the host of a 1990 PBS video series, Discovering Psychology, which gained wide usage in high school and college classes and is still screened today. Both featured the Stanford prison experiment. And its popularity wasn’t limited to the United States. Polish philosopher Zygmunt Bauman’s citation of the experiment in Modernity and the Holocaust in 1989 typified a growing tradition in Eastern Europe and Germany of looking to the Stanford prison experiment for help explaining the Holocaust. In his influential 1992 book, Ordinary Men, historian Christopher Browning relied on both the Stanford prison experiment and the Milgram experiment, another social psychology touchstone, in arguing that Nazi mass killings were in part the result of situational factors (other scholars argued that subscribers to a national ideology that identified Jews as enemies of the state could hardly be described as “ordinary men”). 2001, the same year Zimbardo was elected president of the American Psychological Association, saw the release of a German-language film, Das Experiment, that was based on the SPE but amped the violence up to Nazi-worthy levels, with guards not only abusing prisoners but murdering them and each other. When prisoner abuse at Abu Ghraib came to light in 2004, Zimbardo again made the rounds on the talk show circuit, arguing that the abuse had been the result not of a few “bad apple” soldiers but of a “bad barrel” and providing expert testimony on behalf of Ivan “Chip” Frederick, the staff sergeant supervising the military policemen who committed the abuses. With the resurgence of interest in the experiment, Zimbardo published The Lucifer Effect in 2007, offering more detail about it than ever before, though framed in such a way as to avoid calling his basic findings into question. The book became a national bestseller.
All the while, however, experts had been casting doubt on Zimbardo’s work.
Despite the Stanford prison experiment’s canonical status in intro psych classes around the country today, methodological criticism of it was swift and widespread in the years after it was conducted. Deviating from scientific protocol, Zimbardo and his students had published their first article about the experiment not in an academic journal of psychology but in The New York Times Magazine, sidestepping the usual peer review. Famed psychologist Erich Fromm, unaware that guards had been explicitly instructed to be “tough,” nonetheless opined that in light of the obvious pressures to abuse, what was most surprising about the experiment was how few guards did. “The authors believe it proves that the situation alone can within a few days transform normal people into abject, submissive individuals or into ruthless sadists,” Fromm wrote. “It seems to me that the experiment proves, if anything, rather the contrary.” Some scholars have argued that it wasn’t an experiment at all. Leon Festinger, the psychologist who pioneered the concept of cognitive dissonance, dismissed it as a “happening.”
A steady trickle of critiques have continued to emerge over the years, expanding the attack on the experiment to more technical issues around its methodology, such as demand characteristics, ecological validity, and selection bias. In 2005, Carlo Prescott, the San Quentin parolee who consulted on the experiment’s design, published an Op-Ed in The Stanford Daily entitled “The Lie of the Stanford Prison Experiment,” revealing that many of the guards’ techniques for tormenting prisoners had been taken from his own experience at San Quentin rather than having been invented by the participants.
In another blow to the experiment’s scientific credibility, Haslam and Reicher’s attempted replication, in which guards received no coaching and prisoners were free to quit at any time, failed to reproduce Zimbardo’s findings. Far from breaking down under escalating abuse, prisoners banded together and won extra privileges from guards, who became increasingly passive and cowed. According to Reicher, Zimbardo did not take it well when they attempted to publish their findings in the British Journal of Social Psychology.
“We discovered that he was privately writing to editors to try to stop us getting published by claiming that we were fraudulent,” Reicher told me.
Despite Zimbardo’s intervention, the journal decided to publish Reicher and Haslam’s article, alongside a commentary by Zimbardo in which he wrote, “I believe this alleged ‘social psychology field study’ is fraudulent and does not merit acceptance by the social psychological community in Britain, the United States or anywhere except in media psychology.”
“Ultimately,” said Reicher, “what we discovered was that we weren’t in a scientific debate, which is what we thought we were in. We were in a commercial rivalry. At that point he was very keen on getting the Hollywood film out.”
Zimbardo’s decades-long effort to turn his work into a feature film finally bore fruit in 2015 with The Stanford Prison Experiment, for which he served as consultant (he is played by Billy Crudup). Though the film purports to take a critical stance toward the experiment, it hews in essential ways to Zimbardo’s narrative, neglecting to include Zimbardo’s encouragement of tough tactics in the Saturday guard orientation or to mention David Jaffe’s role at all. Beleaguered by abusive guards, the character based on Korpi (Ezra Miller) succumbs to a delusion that he is not participating in an experiment at all but has been placed in a real prison and, in the emotional turning point of the film, suffers a screaming meltdown. Gradually his delusion begins to infect the other prisoners.
Somehow, neither Prescott’s letter nor the failed replication nor the numerous academic critiques have so far lessened the grip of Zimbardo’s tale on the public imagination. The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.
For psychology professors, the Stanford prison experiment is a reliable crowd-pleaser, typically presented with lots of vividly disturbing video footage. In introductory psychology lecture halls, often filled with students from other majors, the counterintuitive assertion that students’ own belief in their inherent goodness is flatly wrong offers dramatic proof of psychology’s ability to teach them new and surprising things about themselves. Some intro psych professors I spoke to felt that it helped instill the understanding that those who do bad things are not necessarily bad people. Others pointed to the importance of teaching students in our unusually individualistic culture that their actions are profoundly influenced by external factors.
“Even if the science was quirky,” said Kenneth Carter, professor of psychology at Emory University and co-author of the textbook Learn Psychology, “or there was something that was wrong about the way that it was put together, I think at the end of the day, I still want students to be mindful that they may find themselves in powerful situations that could override how they might behave as an individual. That’s the story that’s bigger than the science.”
But if Zimbardo’s work was so profoundly unscientific, how can we trust the stories it claims to tell? Many other studies, such as Solomon Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. What is unique, and uniquely compelling, about Zimbardo’s narrative of the Stanford prison experiment is its suggestion that all it takes to make us enthusiastic sadists is a jumpsuit, a billy club, and the green light to dominate our fellow human beings.
“You have a vertigo when you look into it,” Le Texier explained. “It’s like, ‘Oh my god, I could be a Nazi myself. I thought I was a good guy, and now I discover that I could be this monster.’ And in the meantime, it’s quite reassuring, because if I become a monster, it’s not because deep inside me I am the devil, it’s because of the situation. I think that’s why the experiment was so famous in Germany and Eastern Europe. You don’t feel guilty. ‘Oh, okay, it was the situation. We are all good guys. No problem. It’s just the situation made us do it.’ So it’s shocking, but at the same time it’s reassuring. I think these two messages of the experiment made it famous.”
In surveys conducted in 2014 and 2015, Richard Griggs and Jared Bartels each found that nearly every introductory psychology textbook on the market included Zimbardo’s narrative of the experiment, most uncritically. Curious about why the field’s appointed gatekeepers, presumably well-informed about the experiment’s dubious history, would choose to include it nonetheless, I reached out. Three told me they had originally omitted the Stanford prison experiment from their first editions because of concerns about its scientific legitimacy. But even psychology professors are not immune to the forces of social influence: two added it back in under pressure from reviewers and teachers, a third because it was so much in the news after Abu Ghraib. Other authors I spoke with expressed far more critical perspectives on the experiment than appeared in their textbooks, offering an array of reasons why it nonetheless had pedagogical value.
Greg Feist, coauthor of Psychology: Perspectives and Connections, told me that his personal view of the experiment shifted some years back after he came across the 2005 Op-Ed by Carlo Prescott, which he described as “shocking.”
“Once I found out some of the ethical and scientific problems with the study, I didn’t think it was worth perpetuating, to be honest,” Feist said.
But there it is in his textbook’s third edition, published in 2014: a thoroughly conventional telling of Zimbardo’s standard narrative, with brief criticisms appearing only later in the chapter.
On October 25, 1971, a mere two months after concluding an experiment so stressful that he lost ten pounds in the span of a week, Philip Zimbardo traveled to Washington D.C. at the request of the House Committee on the Judiciary. In the hearing chamber, Zimbardo sat before the assembled Congressmen of Subcommittee 3 and told a whopper: the “guards” in his recent experiment “were simply told that they were going to go into a situation that could be serious and have perhaps some danger… They made up their own rules for maintaining law, order, and respect.” Zimbardo described a laundry list of abuses, causing “acute situational traumatic reactions” from the prisoners. Despite still never having set foot in an actual prison, he generalized freely from the unreviewed, unpublished, and largely unanalyzed results of his study: “The prison situation in our country is guaranteed to generate severe enough pathological reactions in both guards and prisoners as to debase their humanity, lower their feelings of self-worth, and make it difficult for them to be part of a society outside of their prison.” Zimbardo was a hit. As Representative Hamilton Fish, Jr. of New York put it: “You certainly helped me a great deal in clarifying some of the things we have seen the past few days and understanding them.”
In the wake of the prison uprisings at San Quentin and Attica, Zimbardo’s message was perfectly attuned to the national zeitgeist. A critique of the criminal justice system that shunted blame away from inmates and guards alike onto a “situation” defined so vaguely as to fit almost any agenda offered a seductive lens on the day’s social ills for just about everyone. Reform-minded liberals were hungry for evidence that people who committed crimes were driven to do so by the environment they’d been born into, which played into their argument that reducing urban crime would require systemic reform — a continuation of Johnson’s “war on poverty” — rather than the “war on crime” that President Richard M. Nixon had campaigned on. “When I heard of the study,” recalls Frances Cullen, one of the preeminent criminologists of the last half century, “I just thought, ‘Well of course that’s true.’ I was uncritical. Everybody was uncritical.” In Cullen’s field, the Stanford prison experiment provided handy evidence that the prison system was fundamentally broken. “It confirmed what people already believed, which was that prisons were inherently inhumane,” he said.
The racial dynamics of the Stanford prison experiment, which have never been adequately explored, should probably have given reformers pause. Carlo Prescott, who had just suffered sixteen years of imprisonment as an African American, played a pivotal role in shaping the architecture of the experiment. Frustrated in part by the lack of black experimental subjects, he intervened repeatedly in the action, seeking to bring, as he put it to me, “an air of authenticity to boys who were getting $15 a day to pretend to be prisoners — all Caucasian, as you recall. [Ed. note: one prisoner was Asian American.] Some of the genuine things that shock you as a result of having your liberty taken and your ass being controlled by people who hate you before you even get there.” Yet Zimbardo’s account of the “situation” that engendered abuse left race out of the equation. He often used the word “normal” to describe the participants in his study despite the fact that they were hardly a normal representation of the American inmate population at that time. Analyzing American prisoner abuse as a product of race-blind “situational forces” erased its deep roots in racial oppression.
Nonetheless, the Stanford prison experiment came to exert a significant influence on American criminology. Zimbardo’s first academic article about his results was published in the International Journal of Criminology and Penology rather than a psychology journal. A year later, Robert Martinson, one of a team of sociologists who had been commissioned by the state of New York to evaluate various prison programs, appeared on 60 Minutes with a dark message: when it came to rehabilitation of prisoners, Martinson said, nothing worked. Almost overnight, Martinson’s “nothing works doctrine” became accepted wisdom in America. It is often cited as the cause of the widespread abandonment in the 1970s by academics and policymakers alike of the notion that a prison could be a rehabilitative environment. Cullen believes Zimbardo’s study played a role too.
“What the Stanford Prison Experiment did,” Cullen says, “was to say: prisons are not reformable. The crux of many prison reforms, especially among academic criminologists, became that prisons were inherently inhumane, so our agenda had to be minimizing the use of prisons, emphasizing alternatives to prison, emphasizing community corrections.”
In an era of rapidly rising crime, this agenda proved politically untenable. Instead, conservative politicians who had no qualms about using imprisonment purely to punish ushered in a decades-long “get tough” era in crime that disproportionately targeted African Americans. The incarceration rate rose steadily, now standing five times higher than in comparable countries; one in three black men in America today will spend time in prison.
It would, of course, be unfair to lay mass incarceration at Zimbardo’s door. It is more accurate to say that, for all its reformist ideals, the Stanford prison experiment contributed to the polarizing intellectual currents of its time. According to a 2017 survey conducted by Cullen and his colleagues Teresa Kulig and Travis Pratt, 95% of the many criminology papers that have cited the Stanford prison experiment over the years have accepted its basic message that prisons are inherently inhumane.
“What struck me later in life was how all of us lost our scientific skepticism,” Cullen says. “We became as ideological, in our way, as the climate change deniers. Zimbardo’s and Martinson’s studies made so much intuitive sense that no one took a step back and said, ‘Well, this could be wrong.’”
Most criminologists today agree that prisons are not, in fact, as hopeless as Zimbardo and Martinson made them out to be. Some prison programs do reliably help inmates better their lives. Though international comparisons are difficult to make, Norway’s maximum-security Halden prison, where convicted murderers wear casual clothing, receive extensive job-skill training, share meals with unarmed guards, and wander at will during daylight hours through a scenic landscape of pine trees and blueberry bushes, offers a hopeful sign. Norwegian prisoners seldom get in fights and reoffend at lower rates than anywhere else in the world. To begin to ameliorate the evils of mass incarceration, Cullen argues, will require researching what makes some forms of prison management better than others, rather than, as the Stanford prison experiment did, dismissing them all as inherently abusive.
Meanwhile, the legacy of Zimbardo’s work goes well beyond its influence on our troubled criminal justice system, touching directly on how we understand our personal moral freedom.
On a sunny August afternoon in 2006, at the height of the Iraq War, a nineteen-year-old U.S. Army Ranger named Alex Blum drove a superior in the Rangers and three other men to a Bank of America branch in Tacoma, where they leapt out of his car and committed an armed takeover robbery using pistols and AK-47s. Three days later, Alex, who happens to be my cousin, was arrested in our hometown of Denver, Colorado. Alex claimed to our family to have believed he had been participating in a training exercise. After the radical conditioning of the month-long Ranger Indoctrination Program he had just undergone, he had followed his superior without questioning. For Alex’s sentencing hearing, his defense team called on a prominent expert to argue that his involvement in the robbery was due not to his own free will but to powerful “situational forces”: Dr. Philip Zimbardo. Alex received an extraordinarily lenient sentence, and Dr. Zimbardo became a family hero.
In October 2010, Zimbardo co-hosted a special episode of the Dr. Phil Show entitled When Good People Do Bad Things, using Alex’s story to spread his message that evil acts are the result of circumstance rather than character and choice. From my place in the studio audience, I heard Zimbardo describe guards abusing prisoners without any urging whatsoever — “I constrained the guards not to use any physical force, but they intuitively knew how to use psychological force,” Zimbardo said. Next he used his theories to explain the abuses at Abu Ghraib, offering up the same arguments he’d recently used to defend Ivan “Chip” Frederick. When Dr. Phil asked who in the studio audience thought they too might have tortured detainees in a similar situation, everyone in my family stood up, almost the only ones to do so. We were proud to support Alex, and we knew this was the lesson we were supposed to derive from Zimbardo’s work.
A few years later, after deciding to write a book about Alex’s story, I discovered evidence that he hadn’t told the whole truth about his involvement. When I confronted him, he confessed to me that his choice to participate in the bank robbery was freer and more informed than he had ever let on before. Accepting responsibility was transformative for him. It freed him from the aggrieved victim mindset in which he had been trapped for years. Zimbardo’s “situational forces” excuse had once appeared to give my cousin a way to believe in his fundamental goodness despite his egregious crime, but seeing the personal growth that came with deeper moral reckoning, I began to wonder if it had really done him a service.
It was only after interviewing Zimbardo at his home in San Francisco for my book about Alex that I began researching the history of his famous experiment in depth. The more I found, the more unsettled I became. Shortly after my book was published, having by now spoken to a number of former participants in the study, I approached Zimbardo for another interview. For months I didn’t hear back. Then Le Texier’s book was published, and Zimbardo suddenly agreed to speak to me, apparently eager to respond to the charges. We spoke by Skype shortly after his return from a psychology conference. His office was stacked high with books and papers, the phone constantly ringing in the background as we talked.
After hearing Zimbardo describe the experiment so many times over the years, I was not expecting to hear anything new. The first surprise came when I asked about Korpi’s and Yacco’s claims to me that they had been told they couldn’t leave. After dismissing them at first as lies and then claiming that Korpi and Yacco had simply forgotten the safe phrase “I quit the experiment,” Zimbardo astonished me by acknowledging that he had in fact instructed his staff to tell prisoners they couldn’t get out.
“If [prisoners] said, ‘I want to get out,’ and you said, ‘Okay,’ then as soon as they left, the experiment would be over,” Zimbardo explained. “All the prisoners would say, ‘I want to get out.’ There has to be a good reason now for them to get out. The mentality has to be, in their mind, ‘I am a prisoner in a prison,’ not, ‘I’m a college student in an experiment. I don’t want my money. I’m quitting the experiment.’ You don’t quit a prison. That’s the whole point of the Pirandellian prison [Ed. note: Pirandello was an Italian playwright whose plays blended fiction and reality]. At one level you’re a student in a basement in an experiment. At another level, you’re a prisoner being abused by guards in a county jail.”
Zimbardo confirmed that David Jaffe had devised the rules with the guards, but tried to argue that he hadn’t been lying when he told Congress (and, years later, insisted to Lesley Stahl on 60 Minutes) that the guards had devised the rules themselves, on the grounds that Zimbardo himself had not been present at the time. He at first denied that the experiment had had any political motive, but after I read him an excerpt from a press release disseminated on the experiment’s second day explicitly stating that it aimed to bring awareness to the need for reform, he admitted that he had probably written it himself under pressure from Carlo Prescott, with whom he had co-taught a summer school class on the psychology of imprisonment.
“During that course, I began to see that prisons are a waste of time, and money, and lives,” Zimbardo said. “So yes, I am a social activist, and prison reform was always important in my mind. It was not the reason to do the study.”
At the close of a long, tense conversation, I asked him whether he thought Le Texier’s book would change the way people saw the experiment.
“I don’t know,” he said, sounding tired. “In a sense, I don’t really care. At this point, the big problem is, I don’t want to waste any more of my time. After my talk with you, I’m not going to do any interviews about it. It’s just a waste of time. People can say whatever they want about it. It’s the most famous study in the history of psychology at this point. There’s no study that people talk about 50 years later. Ordinary people know about it. They say, ‘What do you do?’ ‘I’m a psychologist.’ It could be a cab driver in Budapest. It could be a restaurant owner in Poland. I mention I’m a psychologist, and they say, ‘Did you hear about the study?’ It’s got a life of its own now. If he wants to say it was all a hoax, that’s up to him. I’m not going to defend it anymore. The defense is its longevity.”
Zimbardo has spent much of the last fifty years answering questions about the darkest six days of his life, in some ways a prisoner of his own experiment’s success. When I asked him if he was glad, looking back, to have conducted the study, he said he had mixed feelings. He considered the shyness clinic he founded in Palo Alto in 1975 to be his most important work.
“If it was not for the prison study, that would be my legacy,” he said.
“Does some part of you wish that were your legacy?” I asked.
“Yeah, sure,” he said, “of course. That’s positive. The prison study, the negative part is I am Dr. Evil. I created this evil situation, like Svengali or something.”
In Zimbardo’s telling, he, too, is a victim of circumstance — shaped by his surroundings like everyone else.
“I gradually, with a lack of awareness, transformed myself into being the prison superintendent,” he said. “Why? My office had the sign ‘Prison Superintendent.’ David Jaffe’s office said ‘Warden.’ Then I had to deal with the parents. I had to deal with the parole board hearings. I had to deal with the priest coming in. People are dealing with me not as a researcher but as the prison superintendent, asking for help with their prisoner son.”
This excuse has served Zimbardo as well as anyone over the years, but it may no longer be enough. After reviewing some of Le Texier’s evidence, textbook author Greg Feist told me he is considering taking a firmer stand in the forthcoming edition of Psychology: Perspectives and Connections.
“I hope there does come a point, now that we know what we do, where Zimbardo’s narrative dies,” Feist said. “Unfortunately it’s not going to happen soon, but hopefully it will happen. Because I just think it’s a…”
Feist paused, searching for the appropriate word, then settled on a simple one.
“It is a lie.” | https://gen.medium.com/the-lifespan-of-a-lie-d869212b1f62 | ['Ben Blum'] | 2019-09-06 18:35:23.240000+00:00 | ['Prison Reform', 'Psychology', 'Prison', 'Trust Issues', 'Philip Zimbardo'] |
Sweetviz: Automated EDA in Python | Sweetviz: Automated EDA in Python
Exploratory Data Analysis using the Sweetviz Python library
Exploratory Data Analysis (EDA) is the process of analyzing a dataset and summarizing its main characteristics, often using visual methods. EDA is really important because if you are not familiar with the dataset you are working on, you won't be able to infer anything meaningful from that data. However, EDA generally takes a lot of time.
But what if I told you that Python can automate the process of EDA with the help of some libraries? Won't that make your work easier? So let's start learning about automated EDA.
In this article, we will work on automating EDA using Sweetviz. It is a Python library that generates beautiful, high-density visualizations to start your EDA. Let us explore Sweetviz in detail.
Installing Sweetviz
Like any other python library, we can install Sweetviz by using the pip install command given below.
pip install sweetviz
Analyzing Dataset
In this article, I have used an advertising dataset that contains 4 attributes and 200 rows. First, we need to load the data using pandas.
import pandas as pd
df = pd.read_csv('Advertising.csv')
Advertising dataset.
Sweetviz has a function named analyze() which analyzes the whole dataset and provides a detailed report with visualizations.
Let’s Analyze our dataset using the command given below.
# importing sweetviz
import sweetviz as sv

# analyzing the dataset
advert_report = sv.analyze(df)

# display the report
advert_report.show_html('Advertising.html')
EDA Report
And here we go, as you can see above our EDA report is ready and contains a lot of information for all the attributes. It’s easy to understand and is prepared in just 3 lines of code.
Other than this, Sweetviz can also be used to visualize the comparison of test and train data. For comparison, let us divide this data into 2 parts: the first 100 rows for the train dataset and the remaining 100 rows for the test dataset.
The compare() function of Sweetviz is used for comparison of datasets. The commands given below will create and compare our test and train datasets.
df1 = sv.compare(df[100:], df[:100])
df1.show_html('Compare.html')
Comparison Analysis using sweetviz
Other than this, there are many more functions that Sweetviz provides, which you can explore in its official documentation.
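For example, if I recall the API correctly, compare_intra() splits a single dataframe by a boolean condition and compares the two resulting groups. The snippet below is only a sketch: the 'TV' column and the 100 threshold are assumptions based on a typical advertising dataset, so adapt them to your own columns.
# split the same dataframe by a condition (assumed 'TV' column) and compare the two groups
intra_report = sv.compare_intra(df, df['TV'] > 100, ['High TV spend', 'Low TV spend'])
intra_report.show_html('Compare_Intra.html')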
So what do you think about this beautiful library? Go ahead, try it out, and mention your experiences in the responses section.
There are some other libraries that automate the EDA process, one of which is Pandas Profiling, which I have explained earlier in the article given below.
Before You Go
Thanks for reading! If you want to get in touch with me, feel free to reach me on [email protected] or my LinkedIn Profile. You can also view the code and data I have used here in my Github. | https://towardsdatascience.com/sweetviz-automated-eda-in-python-a97e4cabacde | ['Himanshu Sharma'] | 2020-07-06 21:42:12.043000+00:00 | ['Eda', 'Data Analysis', 'Python', 'Data Science', 'Data Visualization'] |
Essential R packages for data science projects | Quick reminder: install and use packages
The most common way is to install a package directly from CRAN using the following R command:
# this command installs tidyr package from CRAN
install.packages("tidyr")
Once the package is installed on your local machine, you don’t need to run this command again, unless you want to update the package with its latest version! If you want to check the version of a package you installed, you may use:
# returns tidyr package version
packageVersion("tidyr")
RStudio IDE also provides a convenient way to check if any update is available for installed packages in Tools/Check for packages updates…
Update all your packages in a few clicks using RStudio
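If you prefer working from the console, base R can do the same update in one line (a minimal sketch; ask = FALSE simply skips the per-package confirmation prompt):
# update every outdated package without prompting
update.packages(ask = FALSE)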
Last but not least: how to use a package now that it is installed :) You may either specify the package name in front of its included method:
stringr::str_replace("Hello world!", "Hello", "Hi")
Or run the following command to load all the package’s functions at once:
# load a package: it will throw an error if package is not installed
library(stringr)
Now you’re ready to go!
If you want to learn basically everything about R packages development, I highly recommend Hadley Wickham R packages book (free online version).
Fetching data
Fetching data is often the starting point of a data science project: data can be located in a database, an Excel spreadsheet, a comma-separated values (csv) file… it is essential to be able to read it regardless of its format, and avoid headaches before even starting to work with the data!
When data is located in a .csv file or any delimited-values file
The readr package provides functions that are up to 10 times faster than base R functions to read rectangular data.
Great R packages usually have a dedicated hex sticker: https://github.com/rstudio/hex-stickers
Convenient methods exist for reading and writing standard .csv files as well as custom files with a custom values separation symbol:
# read csv data delimited using comma (,)
input_data <- readr::read_csv("./input_data.csv")
# read csv data delimited using semi-colon (;)
input_data <- readr::read_csv2("./input_data.csv")
# read txt data delimited using whatever symbols (||)
input_data <- readr::read_delim("./input_data.txt", delim = "||")
In addition to good looking stickers, great R packages also have cheat sheets you can refer to!
When data is located in an Excel file
Microsoft Excel has its own file formats (.xls and .xlsx) and is very commonly used to store and edit data. The package readxl enables efficient reading of these files into R, you can even only read a specific spreadsheet:
# read Excel spreadsheets
input_data <- readxl::read_excel("input_data.xlsx", sheet = "page2")
When data is located in a database or in the cloud
When it comes to fetching data from databases, DBI makes it possible to connect to any server, as long as you provide the required credentials, and run SQL queries to fetch data. Because there are many different databases and ways to connect depending on your technical stack, I suggest that you refer to the complete documentation provided by RStudio to find the steps that suit your needs: Databases using R.
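As a hedged illustration only (the RPostgres driver, host, database and credentials below are placeholders, not values from this article), a typical DBI workflow looks like this:
library(DBI)

# open a connection (here with the RPostgres driver as an example)
con <- dbConnect(RPostgres::Postgres(),
                 host = "my-db-host", dbname = "analytics",
                 user = "me", password = Sys.getenv("DB_PASSWORD"))

# run a SQL query and fetch the result as a data frame
input_data <- dbGetQuery(con, "SELECT * FROM sales LIMIT 100")

# always close the connection when you are done
dbDisconnect(con)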
Make sure to check if a package exists to connect to your favorite cloud services provider! For example, bigrquery enables fetching data from Google BigQuery platform.
Wrangling data
You may have noticed that a lot of the previously mentioned packages are part of the tidyverse. This collection of packages forms a powerful toolbox that you can leverage throughout your data science projects. Mastering these packages is key to becoming super efficient with R.
The pipe operator shipped with the magrittr package is a game changer https://github.com/tidyverse/magrittr
Data wrangling is made easy using the pipe operator, which goal is simply to pipe left-hand values into right-hand expressions:
# without pipe operator
paste("Hello", "world!")
# with pipe operator
"Hello" %>% paste("world!)
It may not seem obvious in this example, but this is a life-changing trick when you need to perform several sequential operations on a given object, typically a data frame.
Data frames usually contain your input data, making them the R objects you probably work with the most. dplyr is a package that provides useful functions to edit, filter, rearrange or join data frames.
library(dplyr)
# mtcars is a toy data set shipped with base R
# create a column
mtcars <- mtcars %>% mutate(vehicle = "car")
# filter on a column
mtcars <- mtcars %>% filter(cyl >= 6)
# create a column AND filter on a column
mtcars <- mtcars %>%
  mutate(vehicle = "car") %>%
  filter(cyl >= 6)
Now you should understand my point about the power of the pipe operator :)
There is so much more to say about data wrangling that you can find entire books discussing the topic, such as Data Wrangling with R. In addition, a key work on leveraging tidyr functionalities is R for Data Science. A free online version of the latter can be found here. Please notice that these are Amazon affiliated links so I will receive a commission if you decide to buy the books.
Visualization
One of the main reasons R is a very good choice for data science projects may be ggplot2 . This package makes it easy and eventually fun to build visualizations that look good and convey a lot of information.
You may find inspirations from this Top 50 ggplot2 visualisation article : http://r-statistics.co/Top50-Ggplot2-Visualizations-MasterList-R-Code.html
ggplot2 is also part of the tidyverse collection, which is why it works perfectly with the shapes of data you typically obtain after tidyr or dplyr data wrangling operations. Managing to plot histograms and scatter plots is rather quick, and many additional elements can then be used to enhance your plots.
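As a quick sketch of the grammar-of-graphics style (using the mtcars toy data set again, so nothing here is specific to your own data):
library(ggplot2)

# scatter plot of fuel efficiency against weight, coloured by number of cylinders
ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
  geom_point(size = 2) +
  labs(title = "Fuel efficiency by weight",
       x = "Weight (1000 lbs)", y = "Miles per gallon", colour = "Cylinders") +
  theme_minimal()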
Machine learning
Another very convenient package is caret , which wraps up a lot of methods typically used in machine learning processes. From data preparation to model training and performance assessment, you will find everything you need when working on predictive analytics tasks.
I recommend reading the caret chapter about model training where this key task is discussed. Here is a very simple example of how to train a logistic regression:
library(dplyr)

# say we want to predict iris having a big petal width
observations <- iris %>%
  mutate(y = ifelse(Petal.Width >= 1.5, "big", "small")) %>%
  select(-Petal.Width)

# set up a 10-fold cross-validation
train_control <- caret::trainControl(method = "cv",
                                     number = 10,
                                     savePredictions = TRUE,
                                     classProbs = TRUE)

# make it reproducible and train the model
set.seed(123)
model <- caret::train(y ~ .,
                      data = observations,
                      method = "glm",
                      trControl = train_control,
                      metric = "Accuracy")
Final words
Thanks a lot for reading my very first article on Medium! I feel like there is so much more to say in each section, as I did not talk about other super useful packages such as boot , shiny , shinydashboard , pbapply … Please share your thoughts in the comments; I am very interested in feedback on what you would like to explore in future articles.
Useful documentations and references | https://ericbonucci.medium.com/essential-r-packages-for-data-science-projects-d79cb5698b96 | ['Eric Bonucci'] | 2020-11-12 09:37:01.238000+00:00 | ['Rstudio', 'Community', 'Productivity', 'Data Science', 'R'] |
Musings: On Pain Begetting Pain, On Healing Begetting Healing | When we see a stray, a cat or a dog, who has been living on the streets, fending violently for itself, just a creature against the world, we recognize that there are behaviors that they have acquired to adapt to harsh environments that have allowed for their survival. If we try to save this creature or better its circumstances, we may be scratched or bitten or attacked. Somehow, this knowledge does not lessen our compassion for the creature. If you’re anything like me, your heart breaks even more to see how scared and hurt a thing it is that it can’t recognize love and care for what it is, that it can’t fathom the idea of accepting tenderness. It can’t quite understand what that looks like or how to behave in response. Somehow everything is a threat. We shield ourselves against possible injury, sure, but we also persist diligently in trying to heal the little being of its pains.
We are those creatures. We are the strays. We have all acquired certain survival skills that are not well suited for domesticity. But we are also the rescuers. With animals, we show empathy. We look to help them better understand love. We look to teach them skills, patiently and caringly, to better receive and show love. But when we are the injured ones, we rarely take the time to extend the same kindnesses to ourselves. And so we scratch and bite and attack those nearest us, those who are wanting to show us the most that we are loved and safe and wanted. We scratch and bite and attack ourselves, when so very deeply, all we want is to be able to be a loving, cuddled little pet.
You are not the villain. Nor am I. You are not the hero, and likewise I am not either. We are simply injured little creatures and fierce protectors, perpetually discovering new vulnerabilities in ourselves and capable, truly, of creating loving environments to allow those vulnerabilities to mend and to resolve and to become our greatest strengths.
I have a little rescue pup. I found her on the streets when she was only three months old, and she was terrified of everything, quite literally afraid even of shadows. I spent years patiently teaching her that every little scary thing she encountered was okay, that a blinking light wasn’t dangerous, that a friendly old lady wasn’t going to hurt her. Now and again, we still find a little thing that startles her, and we slow down and we approach it quietly and safely and without harshness or frustration, and she learns that it isn’t something scary after all. And the next time she encounters it, she doesn’t show the same fear response. She doesn’t run away as frightened. She doesn’t bark as loudly. She shows more confidence.
Sprout is five years old now, and she has made so very much progress. And the thing is, she is the most compassionate little dog I’ve ever met. Every time I make the sharp inhale of pain, she comes to check in on me to see if I’m hurt. If I sigh in melancholy, she places her paw on my arm and looks up expectantly in case I start to cry. If I do cry, she is in my face, smooching away tears and cuddling into my neck. She’s come to check in on me from the other room a number of times as I’ve been writing this, hearing me start to sniffle and feeling moved to make sure I’m okay.
It sounds silly to say, but I know that as a person, I want to be like Sprout. I want to know what hurt feels like, what fear feels like, and to simultaneously be mended enough in my own struggles to be able to nurse someone else in their times of need. That is my life’s aim, to be able to care for others from a place of empathy and strength because I have been able to care for myself from that same place. | https://jacqbabb.medium.com/musings-on-pain-begetting-pain-on-healing-begetting-healing-211da939ba5 | ['Jacq Babb'] | 2019-07-18 22:08:17.021000+00:00 | ['Self Care', 'Healing', 'Mental Health', 'Love'] |
You Will Receive Criticism in the Comments — Here’s How to Handle it | You Will Receive Criticism in the Comments — Here’s How to Handle it
Writing for discourse, not dictatorship
Photo by Victoria Heath
It was a political piece. Some would say that I was asking for it. But nothing I had written was new. The only difference was that I was telling it through my eyes, embellished with anecdotes from my life. So you can imagine that I was quite surprised when my thoughts on the “Invisible America” keeping Trump in power were read and shared thousands of times.
The engagement was excellent. I received plenty of thoughtful comments that were beautifully articulated. The kinds of comments that make me fall in love with Medium time and time again. I was pointed in the direction of prose that would get me probing, theories that would get me thinking, and perspectives that promised to expand my mind. My contribution to the messy spheres of politics and society was far from perfect, but it provided a platform upon which conversations desperate to unfold could occur.
It wasn’t all daisies and rainbows, though. Peppered amongst the well-phrased, thought-provoking discourse from those treading boldly from all walks of life were aggressive notes left by those looking to start a fight, along with patronizing remarks from those who felt they knew me and my deserved place underfoot.
If you’ve never received harsh criticism before, it can feel somewhat akin to being bulldozed, flattening the foundation upon which you stand. No matter your level of expertise in the area, it’s instinctive to doubt yourself in the face of conflict about a point you believe to be true.
It’s very human of you to respond this way, but the criticism is likely to pack the most powerful of punches if it rocks the beliefs upon which you’ve modeled your life or framed your identity. Put plainly: it sucks being made to question our very beings; our egos don’t take to it particularly well.
We have a natural inclination toward loathing criticism directed at us, but this only becomes amplified amongst neurotic individuals who gravitate more toward the anxious end of the spectrum and are arguably more vulnerable. Did your parents criticize you when you were a child? That early criticism echoes later in life when it comes from peers.
But don’t let this put you off.
I live and work in academia. I wouldn’t call myself an academic per se; in fact, I detest much of the current academic structure and pretentiousness associated with it. But if there’s one thing that doing a PhD has exposed me to, it’s criticism and I’m grateful for it.
I’m a scientist and as such, I follow the scientific method. Science differs from other “belief systems”, if you will, in that it is self-correcting. As scientists, we strive to know the truth and put our hypotheses to the test with the most robust, rigorous experiments possible. We peer review each other’s work and make it our job to identify any holes, flaws, or ambiguity, aiming to present our best work for the collective good. We also welcome the evolution of ideas and innovation that, over time, expose many things as being incorrect.
This training has helped immensely in navigating negative comments on articles I’ve written over the years because it allows me to quickly distinguish between the good, the bad, and the ugly. I write primarily about science, health, writing, or personal stories, but I occasionally dabble in areas more prone to harsh critiques, such as complaints about the US, from my perspective as a British person living here.
Using a few questions I’ll share below, I’m able to discern between garbage and comments worth responding to. Whether you’re writing for academia, a platform like Medium, the news, or whatever else, here are my tips for navigating negative comments. | https://medium.com/age-of-awareness/you-will-receive-criticism-in-the-comments-heres-how-to-handle-it-968b5c4b0ec9 | ['Kat Kennedy'] | 2020-09-03 20:58:01.636000+00:00 | ['Mindfulness', 'Education', 'Self Improvement', 'Life Lessons', 'Writing'] |
Patterns Are Power: A Beginner’s Guide To Spam Detection | We’ve all gotten spam, we’ve all felt its hideous, annoying burden, and we’ve all (but Lord I hope not) had to lovingly explain to a grandparent that “No, that strange digital letter isn’t actually from a foreign king in need.” A bit played out, I know, but a poignant example, and one that highlights a key to understanding at least one of the ways we can avoid it altogether — but more on that later.
Photo by Lorie Shaull, sourced by Wikimedia Commons
Spam Detection Methods
If you do any amount of research, you’ll see that there are countless — and I barely mean that hyperbolically — ways to filter for spam, or just filter email in general. The approaches are as plentiful as there are types of phishy emails one can receive. There are networking-oriented solutions like tarpits, which selectively slow down suspected spam and allow ham messages (non-spam) to flow free. There are content-oriented solutions that check the URL’s in a given message against a database of URL’s that are commonly used for phishing or other malicious practices. And there are even solutions which depend on real-world action, such as reporting the IP from which spam originates and terminating the user’s internet service provision.
But, of course, there are other methods. And the key method I want to talk about, which was alluded to above, has to deal more with pattern recognition than anything else. I’m talking about machine learning.
Remember the grandparent example with the fake-royalty phishing scheme? Remember how you have either received an email like this or heard of someone who has? These two questions can only net agreement if there are generalizable patterns found in spam. Of course, this won’t always hold, but in many cases the language and/or syntax which spam tends to take can be meaningfully different from the nuanced communication made by human users. If this is true (surprise, it often is), then we can train a machine-learning model to tell the difference. This, my friends, is a binary classification problem.
Spam Detection And Binary Classification
For the uninitiated, binary classification in this context just means “Using machine learning to have the computer tell us what group something is in.” This logic extends to any number of categories, but here we just have the two. Before the theory rambling goes on and on, let’s get into a concrete example.
First, some samples of spam messages:
“URGENT! Your Mobile number has been awarded…” “Loan for any purpose £500 — £75,000…” “Congrats! Nokia 3650 video camera phone is yours…”
And some samples of ham messages:
“Hi Steven, your dropbox is full and stopped syncing…” “Hope you’ve been well. I attached a photo of…” “Thanks for responding! I have added you as a writer…”
There’s some patterns that pop out at first. The spam messages are typically asking something of us, or offering us something. Their purpose in not communication. Their purpose is engagement, typically monetary, that will net the spam-senders our personal information. But, that’s a fairly high-level semantic distinction to make. A simple neural network to be trained for the sake of example can’t pick out and identify these themes.
What it can instead pick out is word orders. Of course this is handled numerically, and it’s handle just like this: We take our dataset of spam and ham emails, fit a Tokenizer on the data, turn the data into sequences of numbers, and then pad the sequences to a uniform length. In Python code, this looks like:
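The import paths below assume the standalone keras package the author appears to use; on recent TensorFlow versions these utilities live under tensorflow.keras instead, so treat the exact paths as an assumption:
import numpy as np
import pandas as pd
from keras.preprocessing.text import Tokenizer            # turns raw text into integer sequences
from keras.preprocessing.sequence import pad_sequences    # pads sequences to a uniform length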
We instantiate a Keras Tokenizer:
tokenizer = Tokenizer()
We fit it on the emails we have. Here it is in the form of a Pandas Series:
tokenizer.fit_on_texts(dataframe[‘email_message’])
We convert that same Series of emails to a bunch of sequences:
email_sequences = tokenizer.texts_to_sequences(dataframe[‘email_message’])
We convert them to a NumPy Array (this part is for compatibility with our neural network):
email_array = np.array(email_sequences)
We pad these sequences to a uniform length. In common language: We take a message that is, for example, 25 words long (and therefore a sequence of 25 numbers now) and add 75 zeroes in front. The pad_sequences() function comes from keras.preprocessing.sequence, by the way.
padded_email_array = pad_sequences(email_array, maxlen=100)
An example of an email before and after this process could look like:
‘REMINDER FROM O2: To get 2.50 pounds free call credit and details of great offers pls reply 2 this text with your valid name, house no and postcode’
Turning into:
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1341 42 510 2 30 17 255 511 46 15 836 9 322 18 109 632 69 116 17 41 99 31 13 368 254 228 37 9 1342]
With all of our spam and ham data processed as such, we can feed it into a neural network, and train it to separate the one group from the other. Note: There are a few more steps both before and after this, that are viewable here in the full notebook for this example. I am simplifying some things for the sake of explanation — focusing on the concept of relating word order to numerical sequence.
Spam Detection Neural Network
There are numerous ways to do this, but for the current workflow, I’m using a Sequential model in Keras. We are less interested here in the code, which is fully explained here, than in the outcomes. For transparency sake, the code is:
# Instantiate model
base_model = Sequential()

# Add model layers based on above rationale
base_model.add(Embedding(len(total_vocab), 100))
base_model.add(LSTM(8, return_sequences=True))
base_model.add(GlobalMaxPool1D())
base_model.add(Dense(25, activation='tanh'))
base_model.add(Dense(2, activation='softmax'))

# Compile model
base_model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])

# Fit model and store results
base_results = base_model.fit(X_train, y_train, epochs=20, batch_size=30, validation_split=0.2)
What’s key here is the Dense layer with 2 nodes in it, which effectively is the final decider on whether or not “spam” or “ham” is the final decision made. Performance-wise, this model did as follows:
The neural network was able to use these numerical sequences to predict whether or not an email was spam with a 97.92% accuracy. That is exceptionally high, and only a piece of the puzzle. In practice, if you were developing a new spam-filtering technology, you would likely be using this machine-learning methodology in conjunction with some of the others mentioned in the introductory text. I encourage you to try this brief example out yourself, following along the same notebook linked here.
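To make the result concrete, here is a hedged sketch of how you could score a brand-new message with the fitted tokenizer and model from above. The index of the "spam" class depends on how the labels were encoded, so treat that part (and the example output) as an assumption.
# score a new, unseen message with the trained model
new_message = ["Congrats! You have been selected for a FREE prize, reply now"]
new_sequence = tokenizer.texts_to_sequences(new_message)
new_padded = pad_sequences(new_sequence, maxlen=100)

probabilities = base_model.predict(new_padded)  # softmax over the two classes
print(probabilities)                            # e.g. [[0.03 0.97]] would suggest spam if index 1 is 'spam'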
Reach out to me on Twitter @zych_steven with any feedback! | https://stevenzych.medium.com/patterns-are-power-a-beginners-guide-to-spam-detection-6bc0e9f4d68 | ['Steven Zych'] | 2020-12-20 21:07:51.257000+00:00 | ['Machine Learning', 'Python', 'Spam', 'Data Science', 'Keras'] |
10 Node.js Frameworks Worth Checking Out: Express, Loopback, Hapi, and Beyond | Node.js frameworks
As technology is changing at a rapid pace, developers are moving to use new technologies and adopting more convenient frameworks for their web development needs. Node.js is getting huge exposure from developers who love to use JavaScript for app development.
As a developer, you can use the same language for both client-side and server-side scripting, and this has brought huge adoption and use of Node.
Node.js frameworks are getting huge demand in the market and 2019 is bringing a lot more features and advantages. There are so many top programming languages available in the market but the best Node.js Frameworks of 2019 have drastically changed the development process.
But before we go in deep discussion, it is important to understand what a Node Framework is
Node.js is an open-source, cross-platform JavaScript runtime environment that runs JavaScript code outside the browser. You cannot ignore it when you prepare a list of JavaScript frameworks.
JavaScript is utilized chiefly for client-side scripting, in which scripts written in JavaScript are embedded in a website page’s HTML and run client-side by a JavaScript engine in the browser.
Node.js lets developers utilize JavaScript to write command line tools. For server-side scripting, it runs the required scripts server-side to generate dynamic web page content before the page is sent to the user's browser. As a result, Node.js embodies a "JavaScript everywhere" paradigm, unifying web application development around a single programming language for both server-side and client-side scripts.
Top Benefits of Node.js Frameworks
The use of Node.js frameworks is growing because they offer tremendous functionality, such as high productivity, speed, and scalability. All these features make Node.js the first choice for developing enterprise-level applications for huge companies.
Node.js allows you to use the same language for both the front-end and backend. This saves you from the stress of learning new languages and implementing them for running the whole code structure and program.
With the help of Node.js frameworks, you can use different tools, refer to different guidelines and also you can recommend practices that will ultimately save you a lot of time. With such an approach, you can become a pro in the coding field.
Here are some of the main benefits:
Functions with high speed
Supports data streaming
Works in real-time
Has a solution to all the database queries
Straightforward coding
Open Source
Cross Platform
Can act as a proxy server
Higher productivity
Handles synchronization problems well
User and community friendly
Let’s have a look at the top Node.js frameworks that will shine in 2019 and the upcoming years.
AdonisJs
AdonisJs is one of the most popular Node.js frameworks that runs on all the major operating systems. This framework has a static ecosystem for writing server-side web apps, and this way you can target your business needs and decide which package to use. It is the simplest framework and especially targets development.
Features of AdonisJs
Supports an ORM made specifically for SQL databases
Efficient SQL query creation which is based on active record idea
An easy-to-learn query builder that allows you to quickly build simple queries
It provides good support to No-SQL databases like MongoDB
Express.js
Express.js is the simplest, fastest, non-opinionated Node.js framework. It is a simple technology which is built on Node.js and acts as middleware to manage the servers and routes.
Node.js has an asynchronous nature, and Express.js builds on that capability to develop light-weight apps that can process multiple requests seamlessly.
Features of Express.js
Fully customizable
Standard for Node.js web middlewares
Low learning curve
More focus on browser
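To illustrate how little ceremony Express requires, here is a minimal hello-world sketch (port 3000 is just an arbitrary choice):
const express = require('express');
const app = express();

// a single route that answers GET requests on the root path
app.get('/', (req, res) => res.send('Hello world!'));

app.listen(3000, () => console.log('Listening on port 3000'));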
Hapi.js
Hapi.js is the best Node.js web framework that is utilized for developing application programming interfaces (APIs). This framework has a strong plug-in system which helps developers manage the whole development process.
Hapi.js comes under the top Node.js frameworks for web application development and is loved by developers as they find it easy to work with and manage the whole script.
Features of Hapi.js
Strong input validation
Configuration-based functionality
Caching implementation
Improved error handling
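A minimal Hapi server sketch looks roughly like this (it assumes the scoped @hapi/hapi package and the v17+ API, so adjust the package name and options to your version):
const Hapi = require('@hapi/hapi');

const init = async () => {
  const server = Hapi.server({ port: 3000, host: 'localhost' });

  // routes are declared through configuration objects rather than middleware chains
  server.route({ method: 'GET', path: '/', handler: () => 'Hello world!' });

  await server.start();
  console.log('Server running on %s', server.info.uri);
};

init();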
Meteor.js
Meteor.js is used for building modern web and mobile applications and is defined as a full-stack JavaScript platform. The most important feature of Meteor.js is that it provides real-time updates so that all your changes on the web will get updated on the template instantly.
The framework has a simplified platform for the whole tier of the app and it’s in the same language (JavaScript). This makes this framework work in a more efficient manner on the server side as well as the client side.
Features of Meteor.js
It has the capability to manage larger projects
Has rich and organized documentation community
It leverages the Facebook GraphQL data stack
It’s easy to understand for most developers
Sails.js
Sails.js is yet another popular Node.js framework that is used to develop custom enterprise-grade Node.js applications. It has all the capabilities needed to build the best apps with the support that modern apps require. Sails.js pairs data-driven APIs with a scalable, service-oriented architecture.
Features of Sails.js
A lot of automated generators
No need for additional routing
Awesome frontend compatibility with different frontend technologies
Clear support for Web Sockets
Compatible with all databases
Koa.js
The team that created Express.js developed Koa.js, aiming to fill the gaps of Express.js. Koa has a unique script and methods that make it work on different browsers. It helps you to work without any callbacks and provides strong error handling.
Features of Koa.js
Utilizes required generators to manage and handle callbacks
Has strong and efficient error handling processes
Building blocks based on components
Cascading middlewares and ditched callback hell
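As a sketch, a Koa application replaces callback-style middleware with async functions operating on a shared context:
const Koa = require('koa');
const app = new Koa();

// middleware is an async function; ctx bundles the request and response
app.use(async (ctx) => {
  ctx.body = 'Hello world!';
});

app.listen(3000);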
LoopBack.js
LoopBack.js is yet another famous and well-used Node.js framework having an easy-to-use CLI and an API explorer which is dynamic in nature. It helps you create different models depending on your required schema (or even if there is no requirement of a schema). It has good compatibility with different REST services and different varieties of databases that cover MySQL, MongoDB, Oracle, Postgres and more.
Features of LoopBack.js
Rapid creation of dynamic end-to-end REST APIs
Better connection amid different devices and browsers
Improved correlation between diverse data and services
Utilization of Android, iOS and Angular SDKs for creating client apps
Runs on its own premises and even in the cloud
Derby.js
Derby.js provides seamless data synchronization between the server and the client. Derby.js is well-known as a full stack Node.js framework for writing modern web apps. It provides you an opportunity to add customized code and build highly efficient web apps. Derby.js is going to get massive exposure in 2019 as it has some great features.
Features of Derby.js
MVC architecture for both client-side and server-side
Best utilized for creating mobile and web applications
It uses server rendering for fast page loading, HTML templates and search engine support
Total.js
Total.js needs very little maintenance and gives a strong performance and a flawless scaling transition. The whole team of Total.js is working hard to match user requirements and make it a lovable and highly usable Node.js framework worldwide. This indicates that the Total.js framework will likely get good exposure in the coming years.
Features of Total.js
Model-view-controller software architecture
Highly extensible and asynchronous framework
Provides full support to RESTful routing mechanism
Full support to web sockets and media streaming
Nest.js
Nest.js is a Node.js framework used for developing professional and scalable Node.js server-side applications. It supports JavaScript but is written in TypeScript, which means it comes with powerful typing and combines elements of Object Oriented Programming (OOP), Functional Programming (FP), and Functional Reactive Programming (FRP).
Features of Nest.js
It has an out of the box application architecture
Effortless creation of highly testable and scalable applications
Nest CLI is used for generating Nest.js applications
How to select a Node Framework
This can be a tough decision to make as there are so many Node.js frameworks in the market as we’ve seen here. But the decision solely depends on your project and business requirements. Different Node.js frameworks have different specialties ranging from speed, learning curve, coding structure, flexibility, configuration, and more.
Key Takeaways
Technology pervades today's digital world more deeply than ever, which means the level of competition between frameworks and different technologies keeps getting higher. There are many Node.js frameworks available in the market, but you just need to choose the best one to meet your business's demands.
The features and functions of Node.js frameworks have all the capabilities to allow you to build a strong and error-free application for your enterprise. You can also hire the best Node.js developers for that.
Choosing the best NodeJS framework is a tough task as it takes a lot of research and analysis to understand the details of each particular framework. It’s up to you to do that further research and select the framework that will help you develop top website applications. | https://medium.com/free-code-camp/10-node-js-frameworks-worth-checking-out-express-loopback-hapi-and-beyond-7b537b590f89 | ['Sanjay Ratnottar'] | 2019-01-30 19:21:20.051000+00:00 | ['Nodejs', 'Programming', 'JavaScript', 'Technology', 'Productivity'] |
Microsoft Research Open Sourced TextWorld to Train Reinforcement Learning by Playing Text Games | Microsoft Research Open Sourced TextWorld to Train Reinforcement Learning by Playing Text Games
Language games are fun. How can they be used to train reinforcement learning agents?
I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:
Conversational interfaces and natural language processing (NLP) are, arguably, the most widely adopted segment of modern artificial intelligence (AI). Despite the continuous progress in NLP research, most conversational interfaces today still feel rather primitive compared to a human equivalent. Most popular conversational AI agents such as Alexa or the Google Assistant are solid on very short dialogs but lack cognitive aspects of human dialogs such as memory, planning and even common sense. How can we establish a repeatable and quantifiable mechanism for training AI agents in sophisticated conversational capabilities? A few months ago, researchers from the Microsoft Research Montreal Lab released an open source project called TextWorld, which attempts to train reinforcement learning agents using Text-Based games. The ideas behind TextWorld were captured in a recent research paper published by Microsoft.
Text-Based Games
It might seem unusual to talk about Text-Based games in a time in which AI agents are mastering complex multi-player games such as Dota2 or Quake III. Whereas multi-player graphic environments are great to train agents on spatial and time-based planning, Text-Based games can play a similar role developing advanced conversational skills such as affordance extraction, memory and planning, exploration and several others.
Conceptually, Text-based games are complex, interactive simulations in which text describes the game state and players make progress by entering text commands. After each command, the game usually provides some feedback to inform players how that command altered the game environment. A typical text-based game poses a series of puzzles to solve, treasures to collect, and locations to reach. Take, for instance, the legendary Zork game which is shown in the figure below. The game uses natural language to describe the state of the world, to accept actions from the player, and to report subsequent changes in the environment.
The richness and complexities of Text-Based games makes them an ideal environment to train reinforcement learning agents. If we extrapolate Text-Based games to the context of reinforcement learning agents, language plays both the role of the action and observation space. In both cases, the complexity of the space is combinatorial and compositional which creates many challenges for reinforcement learning agents.
Reinforcement Learning Challenges in Text-Based Games
The only simple thing about Text-Based games is the input/output mechanism via a terminal. Other than that, the nature of Text-Based games poses numerous challenges for reinforcement learning agents:
· Partial Observability: In the context of reinforcement learning, Text-Based games can be considered partially observable environments. At any given time, only a snapshot of the current game environment is presented to the players and even many of the local details might not be apparent in the observation.
· Exploration vs. Exploitation: The infinite friction of reinforcement learning models is amplified in Text-Based games. Throughout the game, players need to balance the ability of exploring the environment further vs. capitalizing on immediate rewards.
· Long-Term Credit Assignment: Sparse rewards are inherent to Text-Based games in which the agent must generate a sequence of actions before observing a change in the environment state or getting a reward signal. Knowing which actions produced a specific reward becomes incredibly challenging for reinforcement learning agents.
Entering TextWorld
The idea of TextWorld is not to directly create reinforcement learning agents that can beat a specific Text-Based games. That has been done before. TextWorld leverages Text-Based games differently by creating simplified representations of them that can be used to train reinforcement learning agents. Having simpler versions of Text-Based games improves the evaluation and interpretability of reinforcement learning algorithms in a highly controlled game space.
From the functional standpoint, TextWorld is a Python framework for creating Text-Based games environments that can be used to train reinforcement learning agents. The framework has two main components: a game generator and a game engine. The game generator converts high-level game specifications, such as number of rooms, number of objects, game length, and winning conditions, into an executable game source code in the Inform 7 language. The game engine is a simple inference machine that ensures that each step of the generated game is valid by using simple algorithms such as one-step forward and backward chaining.
Using TextWorld, it is possible to generate combinatorial sets of Text-Based games based on a specific set of parameters. TextWorld’s game generator takes as input a high-level specification of a game and outputs the corresponding executable game with specific parameters such as the number of rooms, the number of objects, the length of the quest, the winning conditions, and options for the text generation.
Reinforcement learning models can interact with TextWorld using a simple API that can be encapsulated in a few lines of Python code:
import textworld

env = textworld.start("zork1.z5")
game_state = env.reset()  # Reset/initialize the game.

reward, done = 0, False
while not done:
    # Ask the agent for a command.
    command = agent.act(game_state, reward, done)
    # Send the command to the game and get the new state.
    game_state, reward, done = env.step(command)
Using that model, developers can create rich natural language training environments for reinforcement learning agents that will help with the development of skills such as memory, contextual analysis or long-term planning. TextWorld is available as an open source release in GitHub and is interoperable with many of the popular deep learning frameworks such as TensorFlow or PyTorch. | https://medium.com/dataseries/microsoft-research-open-sourced-textworld-to-train-reinforcement-learning-by-playing-text-games-5569f96f436f | ['Jesus Rodriguez'] | 2020-09-10 12:38:38.689000+00:00 | ['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence', 'Thesequence'] |