title
stringlengths
1
200
text
stringlengths
10
100k
url
stringlengths
32
885
authors
stringlengths
2
392
timestamp
stringlengths
19
32
tags
stringlengths
6
263
The Borrowed Breathe
The Borrowed Breathe Life after lockdown in Singapore आज बाहर के रंगो को देखकर एक अलग सी घुटन आज आज़ाद होकर भी एक अलग सी उलझन image from Author Now I am not worried that my hair has the texture of dried grass and my eyebrows have become bushy. Now I don’t miss the regular parlour appointment for a head massage or the foot reflexology. Now my priorities are different. There should be a book to read, in the morning I should do one or two hours of writing followed by home chores, the evening is for a walk if I can manage. In my home, I am happily settled like a stranger, each day I am learning to fall in love with the colours of the sky and the noise of the raindrops, the only guests I can attend at my home. image from Author This post is not about this strange life which I am living since the past few months, this post is about the outing which happened today. Now that rules are being relaxed in Singapore today we went for grocery shopping and also had coffee at my favourite place but instead of being happy I was sad. I felt suffocated, I failed to recognize my city which reminds me of colours. Today it was bleak, there was no warmth. I felt I am in a box surviving on borrowed breathe. I can’t think of travelling in this situation whenever we do, I missed my home and that freedom, for the first time I felt caged like a bird.
https://medium.com/illumination/the-borrowed-breathe-f1f9347fe133
['Priyanka Srivastava']
2020-06-20 21:20:57.165000+00:00
['Personal Essay', 'Singapore', 'Hindi', 'This Happened To Me', 'Writing']
14 Data Visualization Plots of Seaborn
Data Visualization plays a very important role in Data mining. Various data scientist spent their time exploring data through visualization. To accelerate this process we need to have a well-documentation of all the plots. Even plenty of resources can’t be transformed into valuable goods without planning and architecture. Therefore I hope this article would provide you a good architecture of all plots and their documentation. Content Introduction Know your Data Distribution Plots a. Dist-Plot b. Joint Plot c. Pair Plot d. Rug Plot Categorical Plots a. Bar Plot b. Count Plot c. Box Plot d. Violin Plot Advanced Plots a. Strip Plot b. Swarm Plot Matrix Plots a. Heat Map b. Cluster Map Grids a. Facet Grid Regression Plots Introduction Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. For the installation of Seaborn, you may run any of the following in your command line. pip install seaborn conda install seaborn To import seaborn you can run the following command.
https://towardsdatascience.com/14-data-visualization-plots-of-seaborn-14a7bdd16cd7
['Aayush Ostwal']
2020-12-22 12:26:19.189000+00:00
['AI', 'Data Science', 'Data Visualization', 'Machine Learning', 'Data Analysis']
A Basic Introduction to Separable Convolutions
Alright, but what’s the point of creating a depthwise separable convolution? Let’s calculate the number of multiplications the computer has to do in the original convolution. There are 256 5x5x3 kernels that move 8x8 times. That’s 256x3x5x5x8x8=1,228,800 multiplications. What about the separable convolution? In the depthwise convolution, we have 3 5x5x1 kernels that move 8x8 times. That’s 3x5x5x8x8 = 4,800 multiplications. In the pointwise convolution, we have 256 1x1x3 kernels that move 8x8 times. That’s 256x1x1x3x8x8=49,152 multiplications. Adding them up together, that’s 53,952 multiplications. 52,952 is a lot less than 1,228,800. With less computations, the network is able to process more in a shorter amount of time. How does that work, though? The first time I came across this explanation, it didn’t really make sense to me intuitively. Aren’t the two convolutions doing the same thing? In both cases, we pass the image through a 5x5 kernel, shrink it down to one channel, then expand it to 256 channels. How come one is more than twice as fast as the other? After pondering about it for some time, I realized that the main difference is this: in the normal convolution, we are transforming the image 256 times. And every transformation uses up 5x5x3x8x8=4800 multiplications. In the separable convolution, we only really transform the image once — in the depthwise convolution. Then, we take the transformed image and simply elongate it to 256 channels. Without having to transform the image over and over again, we can save up on computational power. It’s worth noting that in both Keras and Tensorflow, there is a argument called the “depth multiplier”. It is set to 1 at default. By changing this argument, we can change the number of output channels in the depthwise convolution. For example, if we set the depth multiplier to 2, each 5x5x1 kernel will give out an output image of 8x8x2, making the total (stacked) output of the depthwise convolution 8x8x6 instead of 8x8x3. Some may choose to manually set the depth multiplier to increase the number of parameters in their neural net for it to better learn more traits. Are the disadvantages to a depthwise separable convolution? Definitely! Because it reduces the number of parameters in a convolution, if your network is already small, you might end up with too few parameters and your network might fail to properly learn during training. If used properly, however, it manages to enhance efficiency without significantly reducing effectiveness, which makes it a quite popular choice.
https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728
['Chi-Feng Wang']
2018-08-14 15:01:43.050000+00:00
['Mobilenet', 'Neural Networks', 'Artificial Intelligence', 'Deep Learning', 'Convolutional Network']
Interpretable Machine Learning for Image Classification with LIME
Interpretable Machine Learning for Image Classification with LIME Increase confidence in your machine-learning model by understanding its predictions. The increasing trend in the use of machine learning for critical applications such as self-driving vehicles and medical diagnosis suggests an imperative need for methodologies that can help to understand and evaluate the predictions of machine-learning models. Local Interpretable Model-agnostic Explanations (LIME)[1] is a technique that explains how the input features of a machine learning model affect its predictions. For instance, for image classification tasks, LIME finds the region of an image (set of super-pixels) with the strongest association with a prediction label. This post is a step by step guide with Python code on how LIME for image classification internally works. Let’s start by reading an image and using the pre-trained InceptionV3 model available in Keras to predict the class of such image. This script loads the input image in the variable Xi and prints the top 5 classes (and probabilities) for the image as shown below: Labrador Retriever (82.2%) Golden Retriever (1.5%) American Staffordshire Terrier (0.9%) Bull Mastiff (0.8%) Great Dane (0.7%) With this information, the input image and the pre-trained InceptionV3 model, we can proceed to generate explanations with LIME. In this example we will generate explanations for the class Labrador Retriever. LIME Explanations LIME creates explanations by generating a new dataset of random perturbations (with their respective predictions) around the instance being explained and then fitting a weighted local surrogate model. This local model is usually a simpler model with intrinsic interpretability such as a linear regression model. For more details about the basics behind LIME, I recommend you to check this short tutorial. For the case of image classification, LIME generates explanations with the following steps: Step 1: Generate random perturbations for input image For the case of images, LIME generates perturbations by turning on and off some of the super-pixels in the image. The following script uses the quick-shift segmentation algorithm to compute the super-pixels in the image. In addition, it generates an array of 150 perturbations where each perturbation is a vector with zeros and ones that represent whether the super-pixel is on or off. After computing the super-pixels in the image we get this: The following are examples of perturbation vectors and perturbed images: Step 2: Predict class for perturbations The following script uses the inceptionV3_model to predict the class of each of the perturbed images. The shape of the predictions is (150,1000) which means that for each of the 150 images, we get the probability of belonging to the 1,000 classes in InceptionV3. From these 1,000 classes we will use only the Labrador class in further steps since it is the prediction we want to explain. In this example, 150 perturbations were used. However, for real applications, a larger number of perturbations will produce more reliable explanations. Now we have everything to fit a linear model using the perturbations as input features X and the predictions for Labrador predictions[labrador] as output y . However, before we fit a linear model, LIME needs to give more weight (importance) to images that are closer to the image being explained. Step 3: Compute weights (importance) for the perturbations We use a distance metric to evaluate how far is each perturbation from the original image. 
The original image is just a perturbation with all the super-pixels active (all elements in one). Given that the perturbations are multidimensional vectors, the cosine distance is a metric that can be used for this purpose. After the cosine distance has been computed, a kernel function is used to translate such distance to a value between zero and one (a weight). At the end of this process we have a weight (importance) for each perturbation in the dataset. Step 4: Fit a explainable linear model using the perturbations , predictions and weights We fit a weighted linear model using the information obtained in the previous steps. We get a coefficient for each super-pixel in the image that represents how strong is the effect of the super-pixel in the prediction of Labrador. We just need to sort these coefficients to determine what are the most important super-pixels ( top_features )for the prediction of Labrador. Even though here we used the magnitude of the coefficients to determine the most important features, other alternatives such as forward or backward elimination can be used for feature importance selection. After computing the top super-pixels we get: This is what LIME returns as explanation. The area of the image (super-pixels) that have a stronger association with the prediction of “Labrador Retriever”. This explanation suggests that the pre-trained InceptionV3 model is doing a good job predicting the labrador class for the given image. This example shows how LIME can help to increase confidence in a machine-learning model by understanding why it is returning certain predictions. A Jupyter Notebook with all the Python code used in this post can be found here. You can easily test explanations on your own images by opening this notebook in Google Colab. References [1] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “Why should I trust you? : Explaining the predictions of any classifier.” (2016) Proceedings of the 22nd ACM SIGKDD. ACM.
https://towardsdatascience.com/interpretable-machine-learning-for-image-classification-with-lime-ea947e82ca13
['Cristian Arteaga']
2019-10-22 06:03:48.223000+00:00
['Machine Learning', 'Interpretable Ai', 'Image Processing', 'Deep Learning', 'Explainable Ai']
Time series transformations selection using EDAspy
Time series transformations selection using EDAspy Example of optimisation in forecasting model In Machine Learning (ML) projects, when we work with time series (TS)in the project we usually look for the ideal time series transformations in order to improve the implemented model performance. This can be a tedious task if we work with a large dataset, and we want to try lots of different time series transformations. I have implemented an approach that can efficiently manage this task. Estimation of Distribution Algorithms (EDAs) are a type of evolutionary algorithms that reproduce the next generation using a probabilistic model based on the best individuals selected in the previous generation. Some different EDAs implementations are implemented in the Python package EDAspy (https://github.com/VicentePerezSoloviev/EDAspy ; https://pypi.org/project/EDAspy/). To install the package just do: pip install EDAspy A very easy example is shown below. Note that for such an easy example the improvement obtained is very small compared with the improvement we could obtain if a larger dataset is used. Some easy time series transformations are implemented, but feel free to try more TS transformations. First we load the needed libraries: import pandas as pd import statsmodels.api as sm from statsmodels.tsa.api import VAR import matplotlib.pyplot as plt from sklearn.metrics import mean_absolute_error # EDAspy libraries from EDAspy.timeseries import EDA_ts_fts as EDA from EDAspy.timeseries import TS_transformations Then we use a little public dataset to use as an example (available in Pandas library). Visualize the data: mdata = sm.datasets.macrodata.load_pandas().data df = mdata.iloc[:, 2:12] df.head() Image by author We list the variable without the variable we want to forecast (‘pop’), and build the dataset with the time series transformations of the rest of variables. More transformations can be added following the steps: - Add the transformation postfix - Add to the dataset the respective variable with name (name + postfix) But some available time series transformations are available in TSTransformations. variables = list(df.columns) variable_y = 'pop' # pop is the variable we want to forecast variables = list(set(variables) - {variable_y}) TSTransf = TSTransformations(df) transformations = ['detrend', 'smooth', 'log'] # postfix to variables, to denote the transformation # build the transformations for var in variables: transformation = TSTransf.de_trending(var) df[var + 'detrend'] = transformation for var in variables: transformation = TSTransf.smoothing(var, window=10) df[var + 'smooth'] = transformation for var in variables: transformation = TSTransf.log(var) df[var + 'log'] = transformation We must define a cost function. In this case is the following with hyper-parameters. 
The cost function returns a MAE and inputs a list of variables from the built dataset (with time series transformations): def cost_function(variables_list, nobs=20, maxlags=15, forecastings=10): """ variables_list: list of variables without the variable_y nobs: how many observations for validation maxlags: previous lags used to predict forecasting: number of observations to predict return: MAE of the prediction with the real validation data """ data = df[variables_list + [variable_y]] df_train, df_test = data[0:-nobs], data[-nobs:] model = VAR(df_train) results = model.fit(maxlags=maxlags, ic='aic') lag_order = results.k_ar array = results.forecast(df_train.values[-lag_order:], forecastings) variables_ = list(data.columns) position = variables_.index(variable_y) validation = [array[i][position] for i in range(len(array))] mae = mean_absolute_error(validation, df_test['pop'][-forecastings:]) return mae We take the normal variables without any time series transformation and try to forecast the y variable using the same cost function defined. This value is stored to be compared with the optimum solution found. mae_pre_eda = cost_function(variables) print('MAE without using EDA:', mae_pre_eda) # MAE without using EDA: 5.091478009948458 Initialization of the initial vector of statistics. Each variable has a 50% probability to be or not chosen vector = pd.DataFrame(columns=list(variables)) vector.loc[0] = 0.5 Run the algorithm. The code will print some further information during execution eda = EDA(max_it=50, dead_it=5, size_gen=15, alpha=0.7, vector=vector, array_transformations=transformations, cost_function=cost_function) best_ind, best_MAE = eda.run(output=True) EDA execution. We see how along the iterations the algorithm decreases the optimized MAE of the forecasting model. Image by author The results are the following. In the left side the best local costs, and in the right side the best global cost find until the respective iteration. To plot the results: hist = eda.historic_best relative_plot = [] mx = 999999999 for i in range(len(hist)): if hist[i] < mx: mx = hist[i] relative_plot.append(mx) else: relative_plot.append(mx) print('Solution:', best_ind, ' MAE post EDA: %.2f' % best_MAE, ' MAE pre EDA: %.2f' % mae_pre_eda) plt.figure(figsize = (14,6)) ax = plt.subplot(121) ax.plot(list(range(len(hist))), hist) ax.title.set_text('Local cost found') ax.set_xlabel('iteration') ax.set_ylabel('MAE') ax = plt.subplot(122) ax.plot(list(range(len(relative_plot))), relative_plot) ax.title.set_text('Best global cost found') ax.set_xlabel('iteration') ax.set_ylabel('MAE') plt.show() Image by author Hope that this is useful for your future projects. Future medium posts will share more real examples showing how to use EDAspy functionalities. Feel free to look for more examples in the notebooks section of the package (https://github.com/VicentePerezSoloviev/EDAspy/tree/master/notebooks)
https://towardsdatascience.com/time-series-transformations-selection-using-edaspy-d97e0dc1fcca
['Vicente P. Soloviev']
2020-12-23 09:06:15.619000+00:00
['Python', 'Time Series Forecasting', 'Python Packages', 'Time Series Analysis', 'Feature Engineering']
One Month With The iPhone 12 Mini
Time passes by really quickly, doesn’t it? In just a short 10 days, the year 2020 as we know it will come to an end. Whether 2021 will be better or worse, we can only speculate at the moment. I can only pray and hope that it gets better. Another thing that has passed by really quickly is the fact that I’ve now spent exactly a month on my iPhone 12 Mini. I bought the 12 Mini on the 21st of November and after getting a case and screen protector for it, I popped in my sim card and started using it as my primary phone from the 22nd of November onwards. Today is the 22nd of December and thus marks an entire month that it has been used on a daily basis. How then has it stacked up to all my previous iPhones over the years and what’s my verdict on it? Let’s find out! What I like I’ll start off with the things that I’ve really enjoyed most about the iPhone 12 Mini because let’s add positivity to our articles, why not? Screen size and design The best thing (for me, personally) has to be the 5.4-inch screen on the phone. If you’ve read my previous iPhone related articles, you’ll know by now that my favourite iPhone is the 5S in terms of size and design. That phone for me was THE perfect iPhone. So, when Apple decided to go down the route of the bigger screens with the iPhone 6 and above, I was a little disappointed. Until today, I’ve never bothered to get the Plus or Max versions of any of the iPhones. The biggest screen size I’ve owned is the iPhone XR which had the 6.1-inch. I was also never a fan of the rounded edges that Apple debuted with the iPhone 6. That’s why when they went back to the design of the flat edges with the iPhone 12 lineup and introduced the Mini, it was a no-brainer for me to get it. Ever since day one of using it, I’ve loved how it feels in my hand. The ease of being able to type with just one hand, not worrying so much about it slipping out of my hand because I’m able to really grip it, and the ease of it fitting nicely in the pocket of my jeans/trousers, I really hope Apple never does away with this size (or design) ever again because I will want to keep upgrading to a new iPhone that’s similar in size to the Mini. Honestly, if you already have an iPad or a MacBook, you really don’t need a big screen on your iPhone. Though it may not be as great for browsing the internet, watching movies or playing games, I find that the size is not that bad, to be honest. If you’ve been using a big screen before this (like I have), then you’ll need some getting used to but if you’re upgrading from an iPhone 4 to an iPhone 8, it’s actually better. And because it’s a smaller screen than that on the iPhone 12, 12 Pro and 12 Pro Max, but with the same OLED technology, everything appears so beautifully clear on it. The colours are vibrant, texts are crisp and sharp, and the details of everything that appears on its screen are just very clear. This for me is the best feature of the Mini. Small but powerful It goes without saying that every single time Apple releases a new iPhone lineup, it comes with their latest and greatest technologies and the iPhone 12 lineup isn’t any different. Packed with their latest Apple A14 Bionic chip, this phone is just super fast. And I mean SUPER fast. Of course, it helps that it’s still new but I’ve not had any issues with any app not opening up. I’ve not had any issues with the phone crashing or restarting on its own. Apps open up really quickly and immediately available to use. 
If I’m to be honest, I don’t remember the iPhone XR being this quick when I first got it after it was released. The fact that the iPad Air is also powered by the same chip, for it to be powering such a small device is just extraordinary. Double the lense, double the quality For someone who’s only had a single camera setup all these years, it’s not surprising that finally having a phone with at least a dual-lens setup excites me and that’s exactly how I feel with the 12 Mini’s camera. Equipped with two 12MP cameras (one wide and one ultra-wide), pictures are sharp and clear. It’s also now able to record videos in 4K up to 60 fps and HDR video recording with Dolby Vision of up to 60 fps. That’s cinema-quality and you’re able to do that with such a tiny device. I’ve not done much recording in 4K but the few that I’ve done (test videos) are just incredible in quality. Honestly, if I ever have a work project (or just any video recording project) that requires me to shoot videos, I’m just going to use my 12 Mini to do all the recording because the cameras are just that good. Imagine just how much better it is on the 12 Pro series. What I don’t like Just with anything, there are pros and cons and the 12 Mini is no exception. As much as I’ve loved it, there are things that I’m not too happy about it (not many but there are a few) so here they are. Battery I’ll start with the obvious. Being a smaller phone and still packing all the awesome stuff of its bigger siblings, you can’t expect it to be exactly the same. Because of its size, the battery is obviously smaller in the 12 Mini and that really shows. I won’t even compare it to its siblings. I’ll just compare it to my previous phone, the iPhone XR. Just before I swapped the XR for the 12 Mini, I was easily getting a good 8–9 hours of usage with the XR on a daily basis. I would wake up and start my day with my battery level at 100%. My average usage of my phone sees me mostly calling, messaging and some social media browsing. Very little video watching or internet browsing are done. At the end of the day (at around 8pm), I would still have around 40% with my XR while my 12 Mini would see me with around 20% which isn’t great but isn’t too bad. Honestly, I have avenues to charge my phone most of the time, especially when I’m working from home or in the office. Even when I’m out and about, I have my trusted power bank with me that easily charges my 12 Mini from almost empty to full at least 3 times with a single full charge. So, I’m pretty covered in that sense. Again, not ideal but with the size, not surprising. I honestly have not much to complain about the iPhone 12 Mini except for the battery which I can fully accept because of how small it is. Aside from that, I’ve truly loved every single day that I’ve had with it over the past month and I can’t see myself hating it. Honestly, I still think Apple should have made this phone the 2020 iPhone SE instead and just have 3 options for the iPhone 12 lineup which would be the regular iPhone 12, the 12 Pro and 12 Pro Max. But in true Apple fashion, as long as it can squeeze every single penny out of you, they will and they know people will buy whatever product they release. The fact that new stock runs out really quickly the moment it comes in, despite being over a month since it launched shows how popular and in-demand these phones are. 
However, if Apple were to ever release the 3rd-generation iPhone SE in the current iPhone 12 Mini design but with the price point of the current iPhone SE, you can bet it will be one of their best selling phones and yours truly will definitely be one of its owners. Until that day arrives though, I will just continue to enjoy the joy that has been the iPhone 12 Mini. Didn’t regret getting it and after a month of using it, still don’t regret it.
https://medium.com/macoclock/one-month-with-the-iphone-12-mini-2d27b178796b
['Benny Lim']
2020-12-23 07:17:48.940000+00:00
['Review', 'Iphone Review', 'iPhone', 'Iphone 12 Mini', 'Apple']
Alfa-Enzo FAQs : General Questions
These are answers to questions on what our project is all about. For in-depth reading, please head to the docs part of our site www.alfaenzo.io/docs. Ask more questions at www.t.me/alfaenzoio. What is the project about? We are building the world’s largest decentralized economy. What makes this project different? There is no magic in blockchain, anybody with programming experience can build a blockchain. On top of that, blockchains are made for engineers and not the people (“mere mortals” as Steve Jobs calls them) who actually drive a decentralized economy. We’re different because we’re releasing real products with valuable patented intellectual properties that can drive our thesis along. We’re looking at the equation from a true economic standpoint and seeing how can can fully disrupt decentralized ecosystems. We’re different because we’re creating end-to-end products for the blockchain that’s relatable and usable. If today’s blockchain world is like the early days of the PCs, then we are Apple. Many projects are working on this though. The difference is the fact that we’re building a real business, backed by real product, and real intellectual property. Every ICO claims to be creating a decentralized economy with little more than a whitepaper, digital tokens, and a dream. It’s a little known fact that 90% of all startups fail in the first year—and 90% the remaining fail in the 2nd year. If the rate of failure is so dismal without the complexities of blockchain implementation, the sad truth is 99.99% of them will fail when blockchain is added to it. What most people see in these offerings are wild left-field claims that build on the successes of existing blockchain techn. With flashy spin-offs and pie-in-the-sky proposals, these dreams will never materialize. ICOs use crowd-psychology like signaling and hype to create balloon value with little viability behind. A clone of Facebook that runs with blockchain? No, that will never work. Most ICOs are ignoring all the critical tenets of how tech businesses start and create endearing value. We’re all tech start-up builders so we know what it takes to build truly competitive, user-centric products that last. How will you make it happen? We’re working with a 3-stage mission where the end-game is all about decentralizing your access to the world. After we’re done, a self-regulating decentralized economy will spark all by itself. Stage I : Community We will encase the complexities of blockchain in a new software interface that runs on top of it—a “user shell’’ that will make it “usable by mere mortals.” We call this Fluid. Stage II : Devices We’ll build a simple low-cost blockchain phone that runs our Fluid OS and global dApp store. We have already formulated how this can be done. Stage III : Terminals As we move into a world powered by voice UI, face recognition, and air gestures, there is only one end-game: Terminal Computing. Imagine being able to sit down anywhere and access your personal network, or walk by a building and check your content. The personal devices will give way for public terminals or thin clients that puts the personal part securely online instead of in the device itself. Why are you going for such an ambitious project? We have the skillset to execute, the proof to show that our vision will work, and the risk-taking penchant to make it happen. When will it happen? We’re well along on Stage 1. In fact, we’ve released the final section of our baseline version call Genesis to the iOS App Store. 
Our Android and Desktop version will follow in the next 2–3 months. Stage 2 will begin in 2023 and we expect to have something ready by Christmas 2025. Stage 3 is the long term outlook, and might be 5 years out from there. Isn’t decentralizing our access with one app the same as centralizing? No, quite the reverse. We’re building a technology platform that can handle everything, so that our experience is not co-opted and centralized. Under the hood, its tokenomics and mechanisms are powered by people. In essence, we’re creating a world without end where everyone can grow and sell their own crops to everyone else. In contrast, our present experience is only looking through the lenses of the apps tech giants produce. They own our data and they own our time. We are hooked on different versions of the real world because we were born into it—into bondage. It’s time to free our data and our time. Further Reading Who are you and where are you based? We’re currently a team of twenty-two members with academic backgrounds in computer science, applied and pure mathematics, engineering, and statistics. In terms of business experience, our members have all started successful businesses, and have held leadership roles in top tier tech companies. At the moment, our engineering team is based in Ukraine, US, and Argentina, while our design and business teams are based in United States. For more info, check out our website down below. Who will own everything? Alfa, Push, Valet, EON etc. is produced by Alfa-Enzo Foundation (AEF) and the community in a meritocratic governance model. The community will own via stakes of Enzo in conjunction with the Alfa-Enzo Foundation. AEF provides free guaranteed updates and support for all releases, starting from the release date and until the release reaches its predesignated end-of-life (EOL) date. AEF generates revenue through the sale of products and services related to its protocols and platforms. Where can I learn more about Alfa-Enzo? Head over to our website at https://www.alfaenzo.io for the full white paper as well as a quick one pager highlighting the key features of our technology. Also, feel free to follow us on any of the following media channels for the latest updates and news:
https://medium.com/alfaenzo/alfa-enzo-faqs-3616bfe589cd
['Alfa']
2018-09-03 18:50:03.461000+00:00
['Blockchain', 'ICO', 'Apple', 'Crowdsale', 'FAQ']
New World Order of the AI Economy
In conjunction with other global trends, Artificial Intelligence is rapidly tearing down the old barriers to building new world orders. As a result the next century will be dominated by countries that rapidly adapt to the new AI Economy. The assumptions that the next century will be the “Chinese Century” may be proved wrong if China cannot adapt to the AI Economy. The AI Economy is built on three pillars: AI Workforce: Labor pool with broad set of skills including AI researchers, AI implementors, and AI Literate workers adept with robotics and automation AI Infrastructure: Data sharing, algorithm exchange, labor mobility, and low-cost energy will be needed to support continued AI innovation AI Social Contract: Worker displacement will create significant resistance. Retraining, guaranteed income schemes, and other transition costs will be absorbed by leading countries adopting the AI Economy Pax Britannica (1815- 1914) — Map reproduction courtesy of the Norman B. Leventhal Map & Education Center Old World Orders: Since the early 1800’s the world orders of Pax Britannica (1815–1914) , the American Century (1917–2017) and the emergence (or perhaps re-emergence) of the Chinese Century (2017 — present) have been driven by similar economic and political rules. 1. Readily available and low-cost labor, energy, and raw materials 2. Institutional and government support of industry providing access to capital, political stability, and international trade 3. Continued innovation that drives marginal production costs down, worker productivity up, supporting increased living standards Since 1800 rapid adoption of new technologies including the telegraph, steam power, electricity, assembly lines, automation and recently robotics have delivered early adopters with durable advantage and supremacy on the world stage. As China prepares to assume economic supremacy for the next century, perhaps the rules have changed. Importantly energy costs are rapidly moving towards “zero marginal cost” per Jeremy Rifkin (2015), and the increased diffusion of military and political power as Fareed Zakaria first outlined in his 2009 book “The Post American World” are reducing the barriers to new entrants to the super power game. The most fundamental rule change is the mainstream adoption of artificial intelligence (AI) and the realization of huge AI driven productivity gains in terms of automation, efficiency, and accuracy. AI and automation fundamentally change the economic foundation of the previous world orders. Digital Workers are robots that do repetitive tasks running as programs in the cloud or on desktop computers not entirely unlike the characters in the 1999 science fiction film “The Matrix”. This has been commonly known as Robotic Process Automation, but the new generation of digital workers are using computer vision, advanced machine learning to execute work at 100 times the speed of traditional human workers Physical work is increasingly being done using industrial robots to replace human workers. Computer vision and robotics advances are rapidly shrinking the circle of things that can only be done by humans. Robot adoption is racing ahead, even in markets where low-cost labor is still abundant Complex decisions made with fuzzy facts have long been the domain of human subject matter experts whose organic neural network brains could contemplate all of the data and make the best split-second decisions. 
Today this advantage is narrowing, where deep learning systems consistently outperform humans in cognitive, vision, translation, signal processing, complex system analysis and a myriad of other tasks Building the AI Economy: First Priority: AI Workforce The demand for AI and robotics savvy workers already far outstrips the current supply. As the AI Economy ramps this impedance mismatch will become a gating factor for innovation. To avoid this we need to immediately make investments in adapting our workforce. China is already creating “AI Cities” and building this AI capability with full support of the Chinese government as Kai-Fu Lee outlines in his book “AI Superpowers: China, Silicon Valley and the New World Order” . Future leaders need to build this muscle with: Recruit and retain (in country) the best AI talent to universities to pursue research and support undergraduate AI education. Fund postgraduate research in AI with grants, challenges, and industry focused co-innovation programs Broaden science, engineering and mathematics undergraduate degree requirements to include core AI education and practical experience Fund robotics and mechatronic training programs technical and 2 year colleges coupled with tight collaboration with industry robotics providers China is currently investing heavily in this area ranking #1 in AI research , #1 in AI patents, #1 in AI venture capital investment, #2 in the number of AI companies, and #2 in the largest AI talent pool per the “Center For a New American Security” paper Understanding China’s AI Strategy Second Priority: AI Infrastructure Recent news on AI has consistently focused on the privacy concerns and bias related to data gathering and usage. From personal data being gathered for training virtual assistants to AI gauging criminal intent of shoppers it is clear that any issues remain to be solved. China has taken an expedient approach to data gathering, and could use this access to leapfrog other global AI competitors. National data sharing programs are critical to building up the training sets needed to build the next generation of AI — the west will need to resolve privacy and ethics concerns quickly Open algorithm exchange as well as commercial algorithm exchange will be a critical success factor. IP laws will need to adapt to facilitate a flow of ideas and ML models Labor mobility, and low-cost energy will be needed to support continued AI innovation Already we are seeing businesses form just to close these data gaps, as detailed in this IEEE article: IEEE Spectrum Article but these models present much commercial friction Third Priority: AI Social Contract: Recent global political and economic conditions have heightened nationalism and immigration concerns around the world. This is in part due to the start of worker displacement from automation in the workplace. The leading AI Economics will address these social issues with an AI optimized social fabric or safety net to avoid political friction to change
https://towardsdatascience.com/new-world-order-of-the-ai-economy-a9fa40375ba8
['Matt Vasey']
2019-04-26 00:23:09.665000+00:00
['Artificial Intelligence', 'Robotics', 'World Economic Forum', 'Economics', 'China']
Visualize Your Kong API Gateway Clusters With KongMap
Visualize Your Kong API Gateway Clusters With KongMap yes!nteractive Follow Oct 21 · 5 min read Browser-based tool allows for visual mapping and declarative management of Kong Open Source and Enterprise API Gateway Clusters KongMap provides a view of your Kong Cluster configuration in a single pane A picture is worth a thousand… configurations. KongMap is a new free Docker based tool that allows you to quickly see your whole Kong API Gateway configuration in an interactive map through a web browser. No more combing through yaml based declarative config files or making API calls or stringing together Admin GUI clicks to make a mental map of your current gateway configuration. CLI’s are great, but sometimes it is just nice to be able visually see your gateway configuration relationships. KongMap offers an alternative way to view and manage your Kong Gateways. KongMap provides a single pane view of your Kong Cluster showing all of your gateway API Endpoints (Routes) and the upstream proxied targets (Services) they are connected with. Polices (Plugins) such as authentication, rate limiting, caching and more can be toggled in and out of view. Hovering over or clicking into any node in the map will bring up details for that node, whether it be a route, service, plugin, or workspace (workspaces are only shown when connected to a Kong Enterprise cluster). When connected to a Kong Enterprise Cluster, every node has a direct link to view itself in Kong Manager (The Kong Enterprise Edition Admin GUI). KongMap is a good compliment to what is available and not-available in the Kong Enterprise UI, Kong Manager and Open Source UI alternatives such as Konga. KongMap provides the ability to toggle between multiple Kong clusters, whether they be Kong Open Source, Kong Enterprise, DB or DB-less cluster configurations, or Kong for Kubernetes (Kong’s Ingress Controller for Kubernetes). Endpoint Analyzer A really helpful feature beyond the map view is the Endpoint Analyzer. Clicking on an endpoint node will reveal the ‘Analyze Endpoint’ button. The Analyzer will walk you through the details of any particular endpoint including detailed configuration of the Kong Route, the order of execution and details of plugins/policies as well as the full details of the attached/proxied Service. If connected to a Kong Enterprise Cluster, then a direct link to the element on Kong Manager is provided. Endpoint Analyzer showing the details of the dad_jokes service attached to /jokes2 endpoint. Declarative Configuration Interface A powerful feature of KongMap is its ability to view, export, and manage Kong Gateway configurations in a declarative manner, rather than managing configurations via API calls or DB driven Admin GUI’s. More and more frequently, Kong admins are moving towards declarative operational management of Kong Gateways for a variety of reasons: reduced number of dependencies : no need to manage or rely on a database installation for operations : no need to manage or rely on a database installation for operations makes a good fit for automation in CI/CD scenarios : configuration for entities can be kept in a single source of truth managed via a Git repository and are highly transportable : configuration for entities can be kept in a single source of truth managed via a Git repository and are highly transportable flexibility: it enables more deployment options for Kong and operational option when considering things like high availability, version control, life cycle management, etc. 
Under the hood, KongMap utilizes Kong’s declarative CLI tool decK (https://github.com/Kong/deck) to pull and push new declarative configurations to Kong. The button to launch the declarative configuration interface is displayed when the Kong Cluster node (for Kong Open Source) or the Kong workspace node (Kong Enterprise) is clicked upon. Once launched, the declarative UI displays the current gateway configuration in YAML format. Here you can view, edit, export, copy and paste your Kong declarative configurations. This feature is helpful when you want to move Kong configurations from one cluster to another (such as moving a Kong DB-less configuration to a Kong Enterprise Workspace for example). Configurations can be made read-only by KongMap configuration during startup for a particular cluster. Saving declarative configurations is supported whether your Kong cluster is DB-based or DB-less and supports both Kong Open Source and Kong Enterprise. While helpful to view the configuration, it is recommended to only view Kong for Kubernetes / Kong Ingress Controller instances in Read-Only Mode as configurations should provided by the Ingress Controller within Kubernetes itself. Give It A Try Installing and running KongMap in Docker takes seconds. Installation instructions can be found here: https://github.com/yesinteractive/kong-map Only thing you require is a running Kong Cluster to connect to. If this is your first time venturing down the declarative configuration management options within Kong, here is a sample Kong declarative file you can try with KongMap that uses the DadJokes.Online JSON service and creates a ‘/jokes’ route and employs a rate limit policy: dadjokes.yaml Even if you are not planning on managing your Kong Clusters declaratively, the ability to quickly get a visual confirmation of the current configuration of your Kong Gateway in one view or jump in and see all the working parts of an API endpoint is rather useful. Helpful Resources KongMap GitHub to learn more, ask questions or provide feedback: https://github.com/yesinteractive/kong-map Kong Declarative Configuration Reference: https://docs.konghq.com/2.1.x/db-less-and-declarative-config/ Kong API Gateway Installation: https://konghq.com/get-started/#install Dadjokes.Online Microservice: https://github.com/yesinteractive/dad-jokes_microservice
https://medium.com/swlh/visualize-your-kong-api-gateway-clusters-with-kongmap-47b7697490cc
['Yes Nteractive']
2020-10-22 21:34:53.822000+00:00
['API', 'DevOps', 'Docker', 'Kong', 'Microservices']
Six Reasons We Love Investing in Founders in Small Cities
No matter where you run a company, you’re probably making trade-offs. There are pros and cons to operating anywhere, for sure. But, I know that operating your company in a smaller city can feel really, really hard. You face a lot of challenges — anything from your commute time, to finding great talent, to landing big investments, to connecting with your target customers (fashion and beauty companies come to mind, for instance). And, if you’re an investor, funding companies in smaller cities can feel riskier than you’d like, for essentially the same reasons I mention above. Staying in business in a small city or an emerging market can be truly difficult, so what if you invest in a company that doesn’t make it? Here’s the thing, though: We know that an uneven amount of dollars go to founders in major tech hubs — the Londons and San Franciscos of the world. We also know that much of this capital flows to major tech hubs simply because there’s an inherent belief that companies coming out of them are naturally “better” than the companies starting up in places like Providence, Boston, Dublin, Albuquerque, Madrid, Montreal, and Telluride. And, three years into running our fund at GAN Ventures, with over 25 investments (almost exclusively in founders operating in cities like the ones I just mentioned), we’ve seen investment markups on a full third of our portfolio. What does that mean? When a company raises another round of financing, it comes at a higher valuation than when we invested in it. Which means that the companies are becoming more valuable to the market. And, in three years, we have seen only two companies in our entire portfolio go out of business. So, I personally believe that we should, as a startup community, continue to shift resources to founders in markets outside of major tech hubs (and, to underrepresented founders across the board). Going Small(er) Here are a few reasons we often choose to look at “everywhere else” founders, though, and why you should, too — They’re scrappy. Raising money is a lot harder in smaller cities, so you see founders watching every dollar and being smart about how they invest their time and precious capital. Plus, being in places where there are typically fewer resources means that teams in smaller cities are necessarily super creative. I know that’s a generalization but, in our experience, founders in smaller cities really do know how to make a dollar last, get really inventive with what they’re up to, and pretty ingenious with how they market to the world. Nothing costs as much. When we think about our investments in places like Nairobi and Indianapolis, the costs aren’t nearly as high as San Francisco and NYC. So, when we give a $100K investment, we know it goes much further in smaller cities than it would elsewhere. That, in turn, means that startups can hire more, have bigger offices, pay their staff more, and not have to give up more equity in their companies to reach their ultimate goals. The CEOs know how to lead. Most founders in smaller cities are already leaders in their communities. (It’s the idea of a big fish in a small pond.) Because there tend to be fewer fights for fame — no one’s trying to peek out from the shadow of Google or Apple or Facebook — and because there are simply fewer people, founders in smaller cities become some of the strongest leaders in their home communities. And, this kind of practiced leadership means these founders then take those skills and are applying them to leadership inside of their own companies. 
They’re poised to make a giant impact in their communities. One of the biggest reasons we choose to invest in startups is because we really believe startups have the ability to positively impact our world — now and into the future. They’re solving problems, giving people wages that allow their employees to save money and feed their families, and they’re leading the way with what we value and think about (meaning, they make some of the biggest impacts on culture). And, just like founders at companies in smaller cities become some of their cities’ strongest leaders, their companies become some of the strongest allies to their home towns. As I said, they not only employee a ton of people (often hiring from within the community), they also invest a lot of their revenue back into their communities, and they advocate for local business that continues to bolster the local economy. Their teams can enjoy more balanced lives. Recently, I spent some time with friends in NYC. One of our friends, though, was up against some deadlines and couldn’t get out of work to hang out until around 8pm that night. Does this happen in smaller cities? Absolutely. But it seems like more of an irregularity than the norm. Workload for any founder is usually pretty intense, but founders in smaller cities often don’t seem to feel the pressure that founders in larger places have. They’re not dealing with crazy overhead costs and watching their money so closely, so they can relax at least a little bit more. Plus, I know so many founders that choose to live in “smaller” cities because of both the access to nature but also a stronger cultural value alignment with actually spending time in it. They take the challenges that come with operating in Telluride because they’d rather trade everything else for skiing before their workdays even start. And finally, they’re resilient. Often, founders in smaller cities don’t have silver spoons handed to them. They’ve had to fight for every dollar generated in revenue and investment. So they know how to stay in business for the long-term. I remember when we invested in our first company. The managing director at their accelerator program shared that the startup’s founder was like Matt Foley, the sketch character on Saturday Night Live that slept “in a van down by the river.” He’d stay in that van until his idea came to fruition. Was he? Probably not. But, the managing director told us everything we needed to know to understand that the founder was willing do anything and everything to make the company work. Further Proof If you want to read more about the value of companies coming out of these places, I highly recommend reading this article, put out in August. It shows data on 200 exits across 17 cities in the U.S. As they say, “…outside of New York and LA, discussion about startup geographies quickly tends to degrade into civic boosterism. [So, to] help nudge the discourse onto more empirical footing, [they] wanted to quantify the various startup hubs using exit value as the key metric.” It’s a great look at real data behind considering starting up in smaller cities — and why you should consider investing in them.
https://patrickriley.medium.com/six-reasons-we-love-investing-in-founders-in-small-cities-1735ae1d1c56
['Patrick Riley']
2019-10-10 18:09:45.684000+00:00
['Investing', 'Startup', 'Cities', 'Founders']
Monitoring and Alerting on your Kubernetes Cluster with Prometheus and Grafana
Why Are Monitoring and Alerting Important? IT Teams already realize the necessity of monitoring their infrastructure. There is a long history and many products available for legacy infrastructure: tools like Nagios, Zabbix, and others are familiar players in this space. But, with the Kubernetes ecosystem, it brings many levels of abstraction and troubleshooting if you don’t have the right tools. How many DevOps engineers are faced with the familiar error: Failed scheduling No nodes are available that match all of the following predicates::Insufficient CPU Cluster resource monitoring is essential to follow in real time. In comparison with traditional infrastructure, cluster resources are constantly scaling and changing. You can never know where your pods will be launched on your cluster. For these reasons, we need to monitor both the underlying resources of the cluster and the inner cluster health. On top of that, monitoring alone is not enough if you’re not utilizing alerts. We can easily imagine that our OPS will not stay all night looking at their dashboards on critical production clusters. Why Prometheus and Grafana? With an extensive set of alerting and monitoring tools, why should we go for Prometheus and Grafana specifically? Prometheus Prometheus is an open source monitoring tool. It was initially developed at Soundcloud but is now a standalone open source project, part of the Cloud Native Computing Foundation (CNCF) in 2016. It is the second project after… Kubernetes itself. This is the first reason why both components are often associated as tightly coupled projects. In addition to that, Prometheus differs from numerous other monitoring tools as its architecture is pull-based. It continuously scrapes metrics from the monitored components. Finally, in the architecture itself, Prometheus uses a multi-dimensional data model very similar to the way Kubernetes organizes its data by labels. In opposition to a dotted data model where each metric is unique and each different parameters requires different metrics, with Prometheus, everything is stored as key/value pairs in time series: <metric name>{<label name>=<label value>, …} Prometheus architecture includes three main components : The prometheus server itself : which collect metrics and answer to queries via API A pushgateway : to expose metrics for ephemeral and short jobs An alertmanager : to enable the alerts publication as name suggests Prometheus architecture and ecosystem components. Source We will be using here a combination of the prometheus node_exporter and kube_state_metrics to publish metrics about our cluster. Grafana Grafana is a popular open source (Apache 2.0 license) visualization layer for Prometheus that supports querying Prometheus’ time-based data out of the box. In fact, the Grafana data source for Prometheus is included since Grafana 2.5.0 (2015–10–28). It is, on top of that, incredibly easy to use as it offers template functionality allowing you to create dynamic dashboards editable in real-time. Finally, there is a very good documentation and a vast community sharing, among other things, public dashboards. We will use two public dashboards specifically made for Kubernetes in this article. After all this theory let’s get our hands dirty! Prerequisites for Installation The only requirement we have for this project is a working kubernetes cluster. For the sake of simplicity, I will be using in this article a minikube installation on AWS EC2. 
Minikube is a convenient way to install a single-node Kubernetes cluster for non-production, lab, and testing purposes. It is especially effective on individual computers, as it doesn’t come with heavy resource requirements, and it supports several K8s features out-of-the box. It works normally by creating a local VM on the machine relying on a hypervisor, but being on a AWS VM myself I’ll be using vm-driver=none mode of Minikube for this demonstration. Install Prometheus and Grafana with Helm It is time to install both products. We will rely on Helm, a Kubernetes package manager which updated to version 3.0 in November 2019. Just for history, this update is very important as Helm was deeply rewritten to catch up with Kubernetes evolutions like RBAC and Custom Roles Definitions. It makes it a lot more production-compliant than previous versions. Previously, many IT experts were reluctant to use Helm for production-grade clusters due to its permissive security model and its dependence on the controversial Tiller component (now removed from 3.0). We will use charts (Helm’s packaging format) from the stable Helm repo to help getting started with monitoring Kubernetes cluster components and system metrics. Installing Helm Add the stable repo to your Helm installation: $ helm repo add stable https://kubernetes-charts.storage.googleapis.com/ $ helm repo update Then we will create a custom namespace on our K8s cluster to manage all the monitoring stack: $ kubectl create ns monitoring Installing Prometheus We can install now the Prometheus chart in the newly created monitoring namespace $ helm install prometheus stable/prometheus --namespace monitoring NAME: prometheus LAST DEPLOYED: Tue Apr 14 09:22:27 2020 NAMESPACE: monitoring STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster: prometheus-server.monitoring.svc.cluster.local [...] Then we can create a NodePort using K8s’ native imperative command which allows us to communicate directly to the pod from outside the cluster. Just know that this step is optional if you don’t intend to query Prometheus without Grafana. We can see the port 30568 was automatically allocated to map the 9090 port to the pod. I can now access the Prometheus endpoint using my public DNS and port 30568 in my browser. There, we can directly query Prometheus to get, for example, CPU consumption by namespaces using the following command: sum(rate(container_cpu_usage_seconds_total{container_name!=”POD”,namespace!=””}[5m])) by (namespace) Installing Grafana Now that Prometheus is installed, rather than querying each metric individually, it is way more convenient to use Grafana to get comprehensive dashboards aggregating multiple metrics in one place. We use helm once again to install grafana in the monitoring namespace : $ helm install grafana stable/grafana --namespace monitoring We can see that grafana pod is running along the prometheus components Here again we create a NodePort service to access Grafana from outside the cluster (this time it is mandatory) : $ kubectl -n monitoring expose pod grafana-5b74c499c6-kt4bw --type NodePort --name grafana-np service/grafana-np exposed And get the external port mapped to Grafana listening port 3000 : (here 31399) $ kubectl -n monitoring get svc grafana-np NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana-np NodePort 10.111.59.2 <none> 80:30368/TCP,3000:31399/TCP 3h8m Type in your browser : yourPublicDNS:yourPort and tada! 
Grafana has user management capabilities, and by default you must connect using the admin user. To get the admin password, type the following:

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
ADMINPASSWORD

Now you can connect. The first step is to configure Prometheus as a data source, using the internal DNS structure of Kubernetes:

http://prometheusServiceName.namespace.svc.cluster.local:port

Which in my case is: http://prometheus-server.monitoring.svc.cluster.local (the :80 is optional)

Now that the data source is added, we will import two community dashboards useful for monitoring both our workload and the cluster health. They should work out of the box. Click the import button and type the Grafana dashboard ID. Let's install 1860 and then 8685, which are complementary. Now you should have two working dashboards: we are able to easily monitor both system metrics and cluster configuration!

Alerting on Slack Channel

Now that we have a working monitoring solution, the second step is to activate alerting. An Ops team supervising several production clusters obviously cannot keep a constant eye on their dashboards. There are two ways to implement alerting in our monitoring stack: we could use the Prometheus Alertmanager component (which is installed by the Helm chart) or the built-in alerting function of Grafana. We'll go for the latter, as it's easier to implement. Grafana can send an alert to Slack, mail, a webhook or other communication channels. As I'm using Slack a lot, and I know several companies do as well, I'll go with this example.

Create the Slack notification channel

The first step is to add Slack as a notification channel. In Grafana, click on the bell on the left, select the notification channel menu, then create a new channel. You'll see the great number of tools compatible with Grafana for alerting. Select Slack and enter your Slack webhook URL (other fields are optional). If you don't have a webhook URL already, follow this tutorial: https://api.slack.com/messaging/webhooks It will help you create an endpoint for sending messages to a specific channel of your Slack workspace.

Create and test your custom alert

Now that your channel is set up, take your "K8S Cluster Summary" dashboard and click on the Cluster Pod Capacity title to edit the panel. We will set up an alert to monitor the pod capacity, which Kubernetes limits to 110 per node by default. This limit is considered the threshold of reliability: if the number of nodes in your cluster is limited and you reach this hard limit of 110, the remaining pods will enter a pending state, which could be very problematic in a production cluster, for example when scaling during high business activity. We'll set up an alarm at 90 pods:

Back on our cluster, let's create an arbitrary deployment with 100 nginx replicas:

$ kubectl create ns loadtest
namespace/loadtest created
$ kubectl create deployment nginx --image=nginx -n loadtest
deployment.apps/nginx created
$ kubectl -n loadtest scale deployment nginx --replicas=100
deployment.apps/nginx scaled

We can see the pod capacity quickly rises to 115 because of the 15 pods already present in the cluster. That's 5 pods more than the allowed 110, and indeed we can see on the dashboards that 5 pods are in a pending state. After 5 minutes the alert goes into the ALERTING state and sends our notification to the Slack channel with the custom message and the current value.
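If you prefer to double-check those pending pods from the command line rather than from the dashboard, a kubectl field selector does the job:

# List every pod stuck in Pending across the cluster
$ kubectl get pods --all-namespaces --field-selector=status.phase=Pending
# Or just count the pending replicas of our load-test deployment
$ kubectl -n loadtest get pods --field-selector=status.phase=Pending --no-headers | wc -l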
We can scale the deployment back down to 1 pod:

$ kubectl -n loadtest scale deployment nginx --replicas=1
deployment.apps/nginx scaled

The pod capacity cools down, and the alert sends a return-to-normal notification to Slack. This little exercise was very interesting, as it is fairly straightforward to implement but still reflects a real-world scenario.

There is something to note concerning the Grafana alerting feature. Both our dashboards use a feature called template variables: variables are the values you can change at the top of the dashboards, like the host, cluster or namespace. By design, Grafana does not allow alerts on template variables, for several reasons you can read here. If you try to put alerts on certain dashboard panels, you may get an error for this reason. The solution is to specify, for each alert, a specific host, cluster or node to monitor without using a variable, as the sketch below illustrates.
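As an illustration of that workaround, and assuming kube-state-metrics is exposing kube_pod_info (the Prometheus chart used here normally installs it), an alert-friendly query pins the label value instead of templating it:

# Panel query driven by a template variable (fine for dashboards, rejected for alerts):
count(kube_pod_info{node=~"$node"})

# Alert query with the label hard-coded (here the single Minikube node):
count(kube_pod_info{node="minikube"})

Any other label that identifies the host or cluster works the same way; the point is simply that the alert expression must not contain a $variable.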
https://gregoiredayet.medium.com/monitoring-and-alerting-on-your-kubernetes-cluster-with-prometheus-and-grafana-55e4b427b22d
['Grégoire Dayet']
2020-11-25 10:21:22.521000+00:00
['Technology', 'DevOps', 'Software', 'Monitoring', 'Kubernetes']
Measuring observable influence and impact of scientific research beyond academia 📄
METHODOLOGY

EDA: We conducted EDA using NumPy, Pandas and Excel on Genome BC's dataset, which lists the academic papers they have funded. We explored the downstream documentation using a web browser, collecting the citation methods each source uses, such as presenting a PMID, PMCID or DOI, or providing PDF files directly.

Data Collection: Collecting the data is the most challenging part of our project, as the downstream documents vary widely. Through EDA, we found that we can use PMIDs to represent the cited papers, since most of them are from the medical field. (If we want to extend our work to other fields, we can use DOIs instead.) The data is collected from 5 downstream document sources:

bcm: It offers a zip file that contains all the information the website hosts, so we obtain all the PDF files by downloading the zip file and unzipping it manually.

bcc, cadth and cpic: We use Scrapy to scrape all the PDF files and extract all the DOI, PMCID and PMID references.

pgkb: pgkb is a JavaScript-generated dynamic website, so we used Scrapy combined with Selenium to scrape the PDF files and extract all the DOI, PMCID and PMID references.

Web scraping using Scrapy combined with Selenium

Extracting references directly from these documents does not solve the problem, as we have to dig deeper by following each link or reference found in the previous layer of references. This is done up to a depth threshold, due to limited computing power and internet availability.

Data Cleaning and Integration: After collecting the data, we clean the DOI, PMID and PMCID references using the Python libraries NumPy and Pandas combined with the NCBI APIs, which fill in the null values and remove the duplicates. For the PDF files, extracting references seems hard at first sight, but in fact, as Chenet mentioned [2], reference information is structured information, and using regular expressions alone can achieve a satisfying result [5]. Still, the problem is not easy, as each website uses many different PDF structures and parsing them requires different approaches. We first parse the PDF files using PDF2XML to get structured data; the lxml library is then used to reach the references using XPath. The implementation of this part becomes more tedious as the number of PDF layouts increases. Then we extract reference paper titles using regular expressions combined with the NCBI APIs. The regular expression rule is: every title lies between two periods, and the number of words in the title should be greater than 4. We then verify each candidate title by querying the NCBI API: if we can find a paper whose title has an edit distance of less than 4 from the candidate, we can be confident the candidate is indeed a valid paper title, and at the same time we obtain the corresponding PMID of that paper (a condensed sketch of this check appears below). We use edit distance to determine whether two papers are the same based on the fact that each paper's title is unique; similarity measures such as Jaccard similarity would fail, because papers focused on one specific topic tend to have very similar titles. Below is our PDF parsing result: we extracted 1,246 reference paper titles from 4,379 PDF files.

Analysis: We measure the real-life impact of a Genome BC funded paper by counting how many times it is cited in the downstream documentation, both directly and indirectly. We get all the directly and indirectly cited papers for every downstream document and represent them as a tree.
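Here is the condensed, illustrative sketch of the title extraction and validation step mentioned in the cleaning step above. The function names are ours, and lookup_pubmed_title is only a placeholder for the real NCBI query used in the pipeline; the regex rule and the edit-distance threshold, however, match what we describe.

# Illustrative sketch only: names are ours, and lookup_pubmed_title() is a
# placeholder for the real NCBI API query used in the pipeline.
import re

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def candidate_titles(text):
    # rule: a title sits between two periods and contains more than 4 words
    for chunk in re.split(r"\.", text):
        chunk = chunk.strip()
        if len(chunk.split()) > 4:
            yield chunk

def lookup_pubmed_title(candidate):
    """Placeholder: search NCBI for the candidate and return (best_title, pmid)."""
    raise NotImplementedError

def validate(candidate):
    best_title, pmid = lookup_pubmed_title(candidate)
    # accept the candidate only if it is almost identical to a known paper title
    return pmid if levenshtein(candidate.lower(), best_title.lower()) < 4 else None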
Accelerate the generation of the reference graph

At first glance, this seems a trivial task: just perform BFS to build the reference tree by querying the NCBI database. But since 1,000 downstream entities refer to around 40 million papers given a depth constraint of 6, and the NCBI API only allows about 10 queries per second, this becomes an impossible mission. We came up with an accelerated solution: since we only care about the depth and not the actual structure, we can merge nodes at the same depth. As the NCBI API accepts a list of 100 IDs per query, we merge the nodes into batches of 100 and query each batch (a rough sketch of this batching idea is given at the end of this section). This method decreases the number of nodes from 40 million to around 400 thousand and reduces the running time to 25 hours given a depth constraint of 6. After the query, we merge all nodes of the same depth into one node, which further reduces the running time of the later analysis.

Finally, we measure the real-life impact of Genome BC funded papers. This part is fairly straightforward: walk through the reference map using Breadth-First Search (BFS) and count the occurrences of our Genome BC funded papers. This gives us the actual impact of the research papers in the real world.

Visualization: In order to show the relation between the downstream documentation and the Genome BC funded papers, we use igraph to construct a map with coordinates associated with every paper and downstream documentation entity, and then plot the map using Plotly. Plotting this helps both the researchers and the company move in the right direction: it provides motivation to researchers and at the same time increases revenue for the company. These figures give a better picture to the company and are understandable to a greater audience.
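Here is the rough sketch of the batched, depth-limited traversal mentioned above. fetch_references() is a placeholder for the batched NCBI call (up to roughly 100 PMIDs per request); the rest mirrors the merge-by-depth idea, at the cost of keeping only the depth and not the exact tree structure.

# Sketch of the depth-limited, batched reference traversal (names are illustrative).
BATCH_SIZE = 100      # the NCBI API accepts a list of up to ~100 ids per query
MAX_DEPTH = 6

def fetch_references(pmids):
    """Placeholder for the batched NCBI query returning all references of `pmids`."""
    raise NotImplementedError

def crawl(seed_pmids):
    counts = {}                                  # pmid -> times reached (direct + indirect)
    frontier, depth = set(seed_pmids), 0
    while frontier and depth < MAX_DEPTH:
        next_frontier = set()
        batch = list(frontier)
        for i in range(0, len(batch), BATCH_SIZE):
            for pmid in fetch_references(batch[i:i + BATCH_SIZE]):
                counts[pmid] = counts.get(pmid, 0) + 1
                next_frontier.add(pmid)
        # merging whole levels like this keeps only the depth, not the tree structure
        frontier, depth = next_frontier, depth + 1
    return counts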
https://medium.com/sfu-cspmp/measuring-observable-influence-and-impact-of-scientific-research-beyond-academia-c00372c96a76
['Chhavi Verma']
2019-04-17 20:22:09.172000+00:00
['Data Science', 'NLP', 'Web Scraping', 'Healthcare', 'Big Data']
This is What Collapse Looks Like
This is What Collapse Looks Like Facing the inevitable It’s time to normalize the word, “collapse,” to describe the ongoing conditions in the US. Some would counter it’s well past time — and I won’t argue with that — but I’d say we can no longer credibly claim that it’s too early to make this call. “Decline” has been happening for decades at this point, as manifested in trends such as increasing class inequality, decreasing wages (as relative to inflation), higher infant mortality, lower life expectancy, a disintegrating social safety net, explosive growth of the prison-industrial complex, deteriorating educational system, etc. More and more people have been feeling the squeeze in their efforts to get by, even if establishment voices make claims to the contrary about “recovery.” But “collapse” is more than “decline.” It’s when the system has lost enough integrity that it’s gone beyond the point of no return. With the multiple levels of disruption that have accompanied the COVID pandemic, we have passed that point. Unemployment is at levels not seen since the Great Depression. But we’re not going to get out of this one the way we did then. We no longer have the resources, either in raw materials or in manufacturing ability, and neither can be brought back. Eighty years ago, we were a rising power and after WWII, we took center stage when the other leads were exhausted or slain. The landscape is entirely different now. Others stand poised to step in when we lose our grip, which is an inevitability at this point. Another way to put it is that we just lack the “oomph” to avoid collapse at this point in our history. What a terrible history it’s been, too. Founded on the original sins of genocide and slavery, we were on a bad path from the beginning. Our declaration of world dominance was made by destroying two Japanese cities full of civilians. In Southeast Vietnam, we murdered three million people and left behind mines and shells that still maim and kill to this day. All over the planet, we have pillaged and raped (literally). Despite delusions to the contrary, we were never a “shining city on a hill;” more like a hellish pit of malice. For much of humanity — including not a few within our borders — the collapse of the US will be considered a blessing. That all empires end is not merely a truism but an historical fact. It doesn’t matter that many would shrug off the label of “empire;” that’s what we are, and we’re not exempt from the fate that all of them eventually face. With collapse will come much suffering, that’s certain. People will lose their homes, their health and their personal freedom, if not their lives. Given our national character as a settler colonial state, we can expect an uptick in violence, especially with so many fire arms in circulation. Suicide, too, will become more common. However, the results of collapse will undoubtedly be a mixed bag. As the old falls apart, room will open up for new. With less centralized control, locally-based initiatives will have the chance to grow. Experiments that have formerly been repressed will finally have a chance to be tried. Bad habits will be broken, both individually and culturally. We will be inspired by courageous people and powerful actions. Love will conquer in some moments, as it always has. This is all just to talk about the domestic issues of United States. Of course the world is bigger than that, and no matter how things go down here in terms of social institutions. 
economic conditions and cultural arrangements, we’re facing a much bigger challenge that’s been taking a back seat in the news during the pandemic and the uprisings, but which ultimately dwarfs these concerns, and that’s the environment. The climate’s course into increasingly chaotic territory continues unabated. Fires rage in Siberia. Arctic ice is at record low levels. Heat records are being smashed. The planet is becoming more inhospitable to human habitation with each year. Lurking in the background is the threat of abrupt climate change at a level that would wipe out agriculture and makes vast areas of the globe unlivable. According to ice cores and other evidence, drastic shifts can take place on alarmingly short timelines of just a few years. (For example, see the Younger Dryas.) Trump and elections won’t matter on a planet that’s suddenly several degrees warmer or cooler, where death tolls from starvation are in the hundreds of millions. The cohesion of the nation state model itself will unravel at that point, and political power and material wealth will be of no consequence. Just as many humans will be relieved when the US and its military machine are no more, so will many non-human creatures be when human civilization and its methodology of domination are gone. In the meantime, here we are. The future is uncertain. In and of itself that’s neither good nor bad, but merely a condition. I’m not going to say that life will be “what we make of it” because I don’t believe that. At the individual level, we will be buffeted by forces beyond our control. Luck will play as big a part as preparation. In another way of looking at it, though, “what we make of it” is exactly what we’re getting, collectively. One thing is undeniable: We can’t say we weren’t warned. Since the beginning of this misbegotten project called civilization, there have been dissidents in word and in deed. At every step of the way when we have chosen to walk with ecocide, we could have taken a different path, but didn’t. Now here’s the reckoning.
https://medium.com/age-of-awareness/this-is-what-collapse-looks-like-47eaa2cbe89a
['Kollibri Terre Sonnenblume']
2020-07-23 19:44:09.236000+00:00
['Environment', 'Politics', 'Collapse', 'Covid 19', 'Economics']
Let’s Build a Fashion-MNIST CNN, PyTorch Style
Hyperparameters

Normally, we can just handpick one set of hyperparameters and do some experiments with them. In this example, we want to do a bit more by introducing some structure. We'll build a system to generate different hyperparameter combinations and use them to carry out training 'runs'. Each 'run' uses one set of hyperparameter combinations. We then export the training data/results of each run to TensorBoard so we can directly compare and see which hyperparameter set performs best.

We store all our hyperparameters in an OrderedDict:

# put all hyper params into a OrderedDict, easily expandable
params = OrderedDict(
    lr = [.01, .001],
    batch_size = [100, 1000],
    shuffle = [True, False]
)
epochs = 3

lr: Learning rate. We want to try 0.01 and 0.001 for our models.
batch_size: Batch size to speed up the training process. We'll use 100 and 1000.
shuffle: Shuffle toggle, i.e. whether we shuffle the batch before training.

Once the parameters are defined, we use two helper classes, RunBuilder and RunManager, to manage our hyperparameters and the training process.

RunBuilder

The main purpose of the RunBuilder class is to offer a static method get_runs. It takes the OrderedDict (with all hyperparameters stored in it) as a parameter and generates a named tuple Run; each Run represents one possible combination of the hyperparameters. These named tuples are later consumed by the training loop. The code is easy to understand:

# import modules to build RunBuilder and RunManager helper classes
from collections import OrderedDict
from collections import namedtuple
from itertools import product

# Read in the hyper-parameters and return a Run namedtuple containing all the
# combinations of hyper-parameters
class RunBuilder():
    @staticmethod
    def get_runs(params):

        Run = namedtuple('Run', params.keys())

        runs = []
        for v in product(*params.values()):
            runs.append(Run(*v))

        return runs

RunManager

There are four main purposes of the RunManager class:

Calculate and record the duration of each epoch and run.
Calculate the training loss and accuracy of each epoch and run.
Record the training data (e.g. loss, accuracy, weights, gradients, computational graph, etc.) for each epoch and run, then export it into TensorBoard for further analysis.
Save all training results in csv and json for future reference or API extraction.

As you can see, it helps us take care of the logistics, which is also important for our success in training the model. Let's look at the code. It's a bit long, so bear with me:

# Helper class, help track loss, accuracy, epoch time, run time,
# hyper-parameters etc.
# Also record to TensorBoard and write into csv, json
class RunManager():

    def __init__(self):

        # tracking every epoch count, loss, accuracy, time
        self.epoch_count = 0
        self.epoch_loss = 0
        self.epoch_num_correct = 0
        self.epoch_start_time = None

        # tracking every run count, run data, hyper-params used, time
        self.run_params = None
        self.run_count = 0
        self.run_data = []
        self.run_start_time = None

        # record model, loader and TensorBoard
        self.network = None
        self.loader = None
        self.tb = None

    # record the count, hyper-param, model, loader of each run
    # record sample images and network graph to TensorBoard
    def begin_run(self, run, network, loader):

        self.run_start_time = time.time()

        self.run_params = run
        self.run_count += 1

        self.network = network
        self.loader = loader
        self.tb = SummaryWriter(comment=f'-{run}')

        images, labels = next(iter(self.loader))
        grid = torchvision.utils.make_grid(images)

        self.tb.add_image('images', grid)
        self.tb.add_graph(self.network, images)

    # when run ends, close TensorBoard, zero epoch count
    def end_run(self):
        self.tb.close()
        self.epoch_count = 0

    # zero epoch count, loss, accuracy
    def begin_epoch(self):
        self.epoch_start_time = time.time()

        self.epoch_count += 1
        self.epoch_loss = 0
        self.epoch_num_correct = 0

    def end_epoch(self):

        # calculate epoch duration and run duration (accumulate)
        epoch_duration = time.time() - self.epoch_start_time
        run_duration = time.time() - self.run_start_time

        # record epoch loss and accuracy
        loss = self.epoch_loss / len(self.loader.dataset)
        accuracy = self.epoch_num_correct / len(self.loader.dataset)

        # Record epoch loss and accuracy to TensorBoard
        self.tb.add_scalar('Loss', loss, self.epoch_count)
        self.tb.add_scalar('Accuracy', accuracy, self.epoch_count)

        # Record params to TensorBoard
        for name, param in self.network.named_parameters():
            self.tb.add_histogram(name, param, self.epoch_count)
            self.tb.add_histogram(f'{name}.grad', param.grad, self.epoch_count)

        # Write into 'results' (OrderedDict) for all run related data
        results = OrderedDict()
        results["run"] = self.run_count
        results["epoch"] = self.epoch_count
        results["loss"] = loss
        results["accuracy"] = accuracy
        results["epoch duration"] = epoch_duration
        results["run duration"] = run_duration

        # Record hyper-params into 'results'
        for k, v in self.run_params._asdict().items():
            results[k] = v
        self.run_data.append(results)
        df = pd.DataFrame.from_dict(self.run_data, orient='columns')

        # display epoch information and show progress
        clear_output(wait=True)
        display(df)

    # accumulate loss of batch into entire epoch loss
    def track_loss(self, loss):
        # multiply batch size so variety of batch sizes can be compared
        self.epoch_loss += loss.item() * self.loader.batch_size

    # accumulate number of corrects of batch into entire epoch num_correct
    def track_num_correct(self, preds, labels):
        self.epoch_num_correct += self._get_num_correct(preds, labels)

    @torch.no_grad()
    def _get_num_correct(self, preds, labels):
        return preds.argmax(dim=1).eq(labels).sum().item()

    # save end results of all runs into csv, json for further analysis
    def save(self, fileName):

        pd.DataFrame.from_dict(
            self.run_data,
            orient='columns',
        ).to_csv(f'{fileName}.csv')

        with open(f'{fileName}.json', 'w', encoding='utf-8') as f:
            json.dump(self.run_data, f, ensure_ascii=False, indent=4)

__init__: Initialize necessary attributes like count, loss, number of correct predictions, start time, etc.

begin_run: Record the run start time so that when a run is finished, the duration of the run can be calculated.
Create a SummaryWriter object to store everything we want to export into TensorBoard during the run, and write the network graph and sample images into the SummaryWriter object.

end_run: When the run is finished, close the SummaryWriter object and reset the epoch count to 0 (getting ready for the next run).

begin_epoch: Record the epoch start time so the epoch duration can be calculated when the epoch ends. Reset epoch_loss and epoch_num_correct.

end_epoch: This function is where most things happen. When an epoch ends, we calculate the epoch duration and the run duration (up to this epoch; it is not the final run duration unless this is the last epoch of the run). We calculate the total loss and accuracy for this epoch, then export the loss, accuracy, weights/biases and gradients we recorded into TensorBoard. For ease of tracking within the Jupyter Notebook, we also create an OrderedDict called results and put all our run data (loss, accuracy, run count, epoch count, run duration, epoch duration, all hyperparameters) into it. Then we use Pandas to read it in and display it in a neat table format.

track_loss, track_num_correct, _get_num_correct: These are utility functions that accumulate the loss and the number of correct predictions of each batch so the epoch loss and accuracy can be calculated later.

save: Save all run data (a list of results OrderedDict objects for all runs) into csv and json format for further analysis or API access.
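To show how RunBuilder and RunManager fit together, here is a condensed sketch of the training loop that consumes them. It assumes the Network class (the CNN itself), the Fashion-MNIST train_set, and the usual torch / torch.nn.functional (as F) imports are defined elsewhere in the article; apart from that it only uses the params dict and the two helpers above.

# Condensed sketch of the training loop driven by RunBuilder and RunManager.
# Assumes Network (the CNN) and train_set (Fashion-MNIST) are defined earlier.
m = RunManager()
for run in RunBuilder.get_runs(params):

    network = Network()
    loader = torch.utils.data.DataLoader(
        train_set, batch_size=run.batch_size, shuffle=run.shuffle)
    optimizer = torch.optim.Adam(network.parameters(), lr=run.lr)

    m.begin_run(run, network, loader)
    for epoch in range(epochs):
        m.begin_epoch()
        for batch in loader:
            images, labels = batch
            preds = network(images)            # forward pass
            loss = F.cross_entropy(preds, labels)
            optimizer.zero_grad()
            loss.backward()                    # compute gradients
            optimizer.step()                   # update weights

            m.track_loss(loss)
            m.track_num_correct(preds, labels)
        m.end_epoch()
    m.end_run()
m.save('results')

Each iteration of the outer loop is one 'run' with its own hyperparameter combination, and every metric it produces ends up both in TensorBoard and in the results csv/json files.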
https://towardsdatascience.com/build-a-fashion-mnist-cnn-pytorch-style-efb297e22582
['Michael Li']
2020-01-30 17:11:11.396000+00:00
['Machine Learning', 'Pytorch', 'Artificial Intelligence', 'Education', 'Deep Learning']
Kubernetes is better at packing suitcases
Photo by Erwan Hesry on Unsplash We are facing some social and economic turmoil caused by the COVID-19 pandemic, I say that as I sit at home in my family bubble down in New Zealand staying at home to break the chain of transmission. We are blessed down here with strong leadership steering us through these turbulent waters, Jacinda Ardern is leading us impressively with her positive action in taking on this virus. Talking of helmsman that brings me onto one of my favourite topics Kubernetes (definition), and how Kubernetes can also help us right now. At the moment we are still in level 3 lockdown pretty much around the world which means revenue has changed for every industry other than essential services. So we all need to find new ways to cut our costs so that our businesses can continue to thrive and survive, and I am aiming this article at people that currently run their IT services on VM based infrastructure. They would have been busy over the last couple of weeks shutting down non-essential VMs outside of business hours and reducing their VM instance size to save costs. There is another way they can now look at to further reduce their IT infrastructure costs, and that is to start running the workload in Kubernetes clusters on-premise or in the cloud. How Kubernetes can help to optimise the use of Infrastructure is not well known as I think it has not been well explained, but I will try my best to add some clarity. I listen to the Google Kubernetes podcast¹ to keep me up to date with what is going on in the rapidly moving world of Kubernetes. They took on the topic of Kubernetes economics back in July last year, they invited Owen Rogers from 451 Research² who has a PhD in cloud economics. He explained it really well using a suitcase analogy³ which I will use here, in fact, large parts of this are direct from Owen’s pod transcript. If you imagine you are packing your suitcase on vacation after you’ve bought some new clothes, which are in fixed-size boxes straight from the store. And rather than unpacking the boxes, you pop them straight into the suitcase. There’s lots of spare space in the boxes and in the suitcase, but you can’t use this space because everything is in a fixed-size box. Now, to take your suitcase on the plane would cost, let’s say, $100. But as you failed to pack in as much because you’ve used these fixed-size boxes, you need to take two suitcases at a total cost of $200. Now each of these fixed boxes is essentially a virtual machine. Each virtual machine requires significant overhead in the form of an operating system. And that’s why you’re getting all this waste and you have to take more suitcases because you can’t put as much in one. Now imagine you open these boxes from the store and you just take out all the clothes and squeeze them wherever they fit in every nook and cranny so that now you only need to take one suitcase. And that would cost you only $100. Essentially, you’ve lowered your unit cost per item of clothing just by packing better. Now, this concept represents a container. With less overhead due to less duplication, you can cram more in. And in theory, an application built using containers should cost less than using virtual machines, as long as you’re packing those containers into the suitcase of the server or the virtual machine in the correct way. 
To move away from the abstract but useful suitcase analogy, let's consider a typical company workload: I am running 20 EC2 instances and 4 RDS databases. My instances are sized to handle spikes, so my CPU and memory utilisation is generally very low. I have 24 boxes at fixed hourly charges, and my monthly AWS costs for this will be something like:

20 EC2 instances at $100 per month = $2,000
4 RDS instances at $250 per month = $1,000

If I spin up an AWS EKS cluster to handle this workload, it will need to run at the very least the same number of instances, 24, but in Kubernetes these become pods. AWS charges a fixed fee for the Kubernetes control plane, and then you can choose as many worker nodes as you want to run your workload (pods). The worker nodes are actually EC2 instances, and the number of pods you can run on each EC2 instance is limited by AWS according to the number of IP interfaces allowed for that instance type. So we decide to spin up a single m5.xlarge worker node, which lets me run 58 pods with plenty of CPU and memory to spare. This will cost me the following (a back-of-the-envelope check of these numbers appears at the end of the article):

1 x m5.xlarge EC2 instance at $300 per month = $300
1 x EKS monthly charge to run the control plane = $65

That is quite a considerable saving in EC2 instance cost: I can pack 58 instances into one suitcase and save myself heaps of idle EC2 cost. These are very rough figures, but I have done this for several clients and know this approach can give considerable savings, up to 60-70% of existing EC2 cloud costs, particularly when the EC2 instances are very underutilised. I think it definitely proves that Kubernetes is better at packing suitcases.

Realising the saving does rely on your team being able to containerise the workloads and spin them up in Kubernetes using your CI/CD pipelines. In my opinion, this is much easier with Kubernetes than with VMs, and it also opens the door to advanced release strategies like blue/green and canary releasing. Once you have simplified your workload into containers, it becomes easier to run on your desktop, on-premise and in the cloud, and it allows the company to run its workload on the cloud provider of choice. Saving money is, of course, important, but there are many more reasons to choose Kubernetes, and I think making your release a tagged image in a container registry makes your DevOps pipelines and releases much quicker and simpler.
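As a back-of-the-envelope check on those figures: the AWS VPC CNI pod limit is commonly computed as ENIs x (IPv4 addresses per ENI - 1) + 2, which for an m5.xlarge (4 ENIs, 15 addresses each) gives the 58 pods used above. A tiny sketch of the comparison, assuming, as the 24-pod count implies, that all the workloads including the databases can be containerised:

# Rough, illustrative numbers only; real AWS pricing varies by region and usage.
enis, ips_per_eni = 4, 15                 # m5.xlarge networking limits
max_pods = enis * (ips_per_eni - 1) + 2   # AWS VPC CNI pod limit formula
print(max_pods)                           # 58

vm_monthly = 20 * 100 + 4 * 250           # 20 EC2 instances + 4 RDS databases
eks_monthly = 300 + 65                    # 1 m5.xlarge worker + EKS control plane fee
print(vm_monthly, eks_monthly)            # 3000 365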
https://medium.com/weareservian/kubernetes-is-better-at-packing-suitcases-aee222691020
['Martin Arndt']
2020-05-22 04:39:09.095000+00:00
['Kubernetes', 'Cost Savings', 'Vm', 'Cloud']
Designing urban robots for hybrid placemaking experiences
by Marius Hoggenmueller, Luke Hespanhol & Martin Tomitsch (Design Lab, School of Architecture, Design & Planning, University of Sydney) Governments, organisational institutions and tech companies around the world are not only collecting and analysing data, but increasingly turning to urban robots for automating processes and services, with the aim to make cities more efficient and productive. However, the development of so-called smart cities also triggered heated debates among thought leaders and scholars, with some arguing that the matter of productivity should be dissociated from speed and efficiency and instead questioning how to make cities more liveable and healthier, regarding people as co-creators instead of simply consumers, and create cities for slowing down. In this exploratory design research project, we created Woodie, a slow-moving robot that draws on the ground using conventional chalk sticks. Capable of drawing various predefined designs, Woodie uses the public space as a large horizontal canvas, producing simple line drawings. Building on previous research on pervasive urban displays, we were interested how an urban robot could replicate the qualities of non-digital public displays and act as a facilitator or ‘spark’ to affect people’s engagement with public visualisations and to foster social interaction among people. As robots are increasingly being tested in real-world urban contexts — mainly to complete mundane tasks — our aim was to investigate their potential for triggering urban reflection, “getting lost” in cities and “slowing down”. This article is a summary of a full paper submission accepted to the ACM CHI Conference on Human Factors in Computing Systems (CHI’20) with the title “Stop and Smell the Chalk Flowers: A Robotic Probe for Investigating Urban Interaction with Physicalised Displays”. The conference was supposed to be held between April 25–30, 2020 in Honolulu, Hawai’i, USA, but was cancelled due to the current Covid-19 pandemic. The paper is published in the ACM library (https://dl.acm.org/doi/abs/10.1145/3313831.3376676), and a full-text is available here. Pervasive displays in the city Our study builds on and contributes to the field of pervasive displays, which has been extensively studied by the CHI community — as a field that takes computing into the urban space through digital screens that can be either interactive or non-interactive. Compared to smartphones, pervasive displays enable a push-based distribution of content that is addressed to the public instead of individuals. However, several works reported that passers-by tend to ignore digital screens. Therefore, researchers investigated how to overcome so-called display blindness, for example, through adding layers of interactivity or aiming for the design of more relevant content. Figure 2: Manually updating non-digital physical displays by a research team was reported to drive more engagement around public visualisations. Several studies also investigated and documented the use of non-digital displays, for example, through using chalk spray stencils to display urban visualisations on the pavement or deploying chalkboard-like signage for comparative energy feedback on the façades of residential houses (Figure 2). Based on their studies, the authors outlined several qualities of non-digital public visualisations: for example, they reported that textures and materiality, as well as the horizontal position of content on the pavement, attracted passer-by to approach (and touch) the visualisations. 
Further, the transient nature of the visualisations made people aware of their finite lifetime, resulting in additional appreciation of the content. The manual update process carried out by a research team created additional engagement between residents and the researchers acting as facilitators. Yet, although those studies suggest that non-digital physicalised displays can address some of the socio-technical pitfalls related to digital urban displays, their hyperlocal scale and impact remain unaddressed limitations. Design considerations From the outset, the design goal was to create a robotic device which could render digital drawings in a physicalised form, deeply integrated into the existing urban environment. One requirement was to design a ‘plug-in system’ that could be deployed “anywhere” in public space, in and out, without the need for additional infrastructural support, such as a canvas or a stationary power source. As sidewalks and public plazas are ubiquitous in cities, publicly owned and used as a stage for social interactions, we decided for a self-moving, autonomously powered platform which uses the ground as a large canvas. Figure 3. Early renderings of Woodie. Shape. In line with our goal of enticing curiosity by depriving Woodie from any obvious association with preconceptions about how a robot should look like, we decided early on not to give it anthropomorphic features (Figure 3). Instead, we opted for a circular body, thus allowing the robot to move in any direction without the constraints of having to reposition its “face”. Size. Woodie should be large enough to be noticed by passersby, yet small enough not to become an obstacle to them, and not to obstruct visibility of its own drawings. Speed. Slowness emerged as an important feature for Woodie, as it should move sufficiently slow to allow for thoughtful appreciation by the public, as well as not to compromise the free movement of pedestrians in the public space. Figure 4. Two different drawing style explorations (with the actual vector graphics in purple): geometric flower (left), sketchy flower (right). Information visualisation. The information visualised through the robot’s drawings — representing an instance of the physicalised display — had to be aligned with the environment and its physical and temporal context. Given that Woodie was located in a laneway, we aimed to use visualisations for activating the space, implementing digital placemaking principles to connect people with the space and with each other. As we deployed Woodie in the context of a public light festival, we aimed to align the visualisations with the theme of the festival, which was “love, peace and harmony”. These considerations led us to the use of simple shapes, such as flowers and hearts. An important aspect at this prototyping stage was not only to decide for the “right” type of content, but also to explore the characteristic style of the drawings. After several tests drawing on various grounds, we realised that highly geometric vector graphics convey the impression that the drawings are imperfect when “rendered” on rough terrain, while less geometric, hand-sketched drawing styles worked well with Woodie’s limited drawing accuracy (Figure 4). Information communication. In addition to creating a public display through drawing on the ground, Woodie should also have communicative features itself. 
Following recommendations from research on autonomous vehicles, suggesting the use of visual signals for communicating a vehicle’s intent to people around the vehicle, Woodie should be capable of visually representing its internal mode. We chose a low-resolution LED lighting display solution given their established aesthetic qualities. Deploying Woodie in-the-wild We deployed Woodie over the period of three weeks, during evening hours, in a quiet laneway situated within a major business and residential district. Due to the slow movement, Woodie was usually not drawing more than four individual drawings per hour.
https://medium.com/design-at-sydney/designing-urban-robots-for-hybrid-placemaking-experiences-b85574148f16
['Marius Hoggenmüller']
2020-05-11 14:56:47.095000+00:00
['Smart Cities', 'Design', 'Robotics', 'Digital Placemaking', 'Qualitative Research']
Making The Big Bucks On Medium
Making The Big Bucks On Medium Can it be done — and how do you do it? Photo by Alexander Mils from Pexels Everybody's dying to know the secret formula that will enable us to make some real money on this platform. So hypnotized by that prospect are we that several big earners here have made a lot of their cash by simply advising the rest of us on exactly how to ascend the slush pile of articles submitted for publication. I am certainly not that success story. But with a lot of work and publishing two to three stories per day, I’ve managed to enter the elite group of triple-figure monthly earners for three consecutive months. And as I understand it, that would put me in the top 5% of writers — at least earningswise. So maybe I am qualified to advise on the subject after all. Probably not. But regardless, I have read numerous articles on how to make the big bucks, and written several of my own. Maybe there’s something in the following embeds that might help somebody. It’s worth a shot. Granted, my observations are a little alt at times. But maybe that’s a good thing — as you won’t find them anywhere else from all the mainstreamers who have succeeded here.
https://medium.com/illumination/making-the-big-bucks-on-medium-fe9535883fce
['William', 'Dollar Bill']
2020-12-29 20:08:26.607000+00:00
['Freelancing', 'Advice', 'Medium', 'Opinion', 'Writing']
Inclusive Design Thinking at Microsoft
Inclusive Design Thinking at Microsoft How an inclusive process creates more accessible and ethical products How radical empathy defines our product ethos If design culture is the spirit of how we design at Microsoft, then design thinking is the manifestation of that culture. It’s the way we seek meaning in our work to make the impact we want. There are lots of ways to approach design thinking, but in the end every method comes down to the spirit of your organization — the values you honor and the way your design team shows up. “Design Thinking” sometimes get short-changed as a sticky-note session, solving the world’s problems one sketch at a time. Some have been critical of design thinking as common sense techniques — but I’m not sure that’s fair. Design thinking is a set of co-creative tools, a means to build empathy with your customers, to understand the why before you get to the what, to expand and consider different perspectives. It’s about including others in the process, converging your ideas by creating, making, and experimenting. It brings energy and clarity to the opportunity. Is it really that common? On our Microsoft Design teams, we’ve started to apply design thinking with a lens for the who. Our mission statement is to empower every person on the planet. When you design for millions, who are you really designing for? How do we design products and services inclusive to everyone? Our Inclusive Design ethos is how we create; how we approach design thinking to build products that reach the greatest number of people. The case for inclusive design thinking It’s tempting to jump right into product development. You know what you want to make, the new technology and capabilities support it, and your gut says go for it. The challenge is to slow down and consider who you’re creating for. When millions of customers depend on your products or services, you’re inevitably making assumptions about what might work for everyone. With an Inclusive Design method, we re-frame the opportunity to understand who’s being excluded and then build insights that can be used throughout the product development process (you can check out our Inclusive Design work here). Inclusive Design is a method of design thinking; one that invites more people into our experiences. It’s the core of responsible creation. We’re continuing to evolve our Inclusive Design practices. As we move deeper into a world of ambient computing, the value of design thinking is higher than ever. We interact with information all around us, through changing contexts embedded with intelligence. There’s a lot at stake. As designers, we need to think about how people will use our products and the real-world consequences of our intentions. The value of ethics in product design A key product ethos that’s emerging for us is design ethics. As designers, we need to define our obligation to the world around us and claim responsibility for the experiences we create. There’s an opportunity to bring mindfulness into our product ethos; to level-set and create designs that earn trust. We move and create at such a rapid pace, sometimes launching new features daily. How can we balance that rapid pace against the “move fast and break things” approach? Starting with an inclusive ethos can improve the long-term impact of what we create — delivering new value and new solutions with radical empathy at their core. Design thinking throughout, always human-led. 
It’s our responsibility as designers to account for the impact each feature release has on our customers’ lives. The world is fundamentally shifting, and for us, leading the industry means leading with empathy. Consider the philosophy behind the Iron Ring ritual that takes place across Canadian engineering universities (yes, I have one): The Ritual of the Calling of an Engineer has a very simple purpose: To direct the newly qualified engineer toward a consciousness of the profession and its social significance. Designers have the same weighty responsibilities. Maybe it’s time for The Ritual of the Calling of a Designer — a call for radical empathy around our design ethics. How do you approach design thinking? Talk to me here in the comments, or on Twitter.
https://medium.com/microsoft-design/inclusive-design-thinking-at-microsoft-5509da5f8ac0
['Albert Shum']
2019-08-27 17:30:20.989000+00:00
['Inclusive Design', 'Microsoft', 'UX Design', 'Design', 'Design Thinking']
Pieces and Ash
Photo by Jens Johnssonon Unsplash I was wondering what it would feel like to be this far. I’m so far now. Further than I ever thought I’d be. I wanted to live there, back there. But, violent silence and comfort pushed me here. To the brink. Fore the way back to you is finally dead. May it rest in pieces and ash.
https://medium.com/know-thyself-heal-thyself/pieces-and-ash-569a30b5a71f
['Talaya Mccain']
2020-12-24 17:37:02.287000+00:00
['Self-awareness', 'Self', 'Self Growth', 'Spirituality', 'Self Improvement']
There’s a Rumor — Marilyn writes Humor!
There’s a Rumor — Marilyn writes Humor! Don’t take my word for it, see for yourself… Photo by Jamie Street on Unsplash Taking a little break from our friends at Jesus the Lamb to catch up on some laughs. As you might guess, I’ve been burning the candle at both ends. The late-night end is the funny one — so enjoy! If you LOL, leave proof! Thanks!!! You may recognize these early rough drafts of your fav famous poems…
https://medium.com/humor-authenticity-magic/theres-a-rumor-marilyn-writes-humor-8dba4b3b24ae
['Marilyn Flower']
2020-08-29 16:22:49.512000+00:00
['Humor', 'Writing Tips', 'Poetry', 'Writing', 'Satire']
The Top 3 New JavaScript ES 2021 (ES 12) Features I’m Excited About
2. Promise.any

Promise.any accepts an array of promises and resolves as soon as any of the supplied promises becomes resolved. It sounds difficult, so here is an example: we make three requests simultaneously, and when one of the requests resolves, Promise.any also resolves and logs the first resolved request in the console (in our example, it's Google). If all of the promises are rejected, Promise.any throws a new type of error: AggregateError. What's new about it is that the AggregateError object represents an error in which several errors are wrapped into a single one. Here is how it looks:
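A minimal sketch along those lines (the URLs are just placeholders; run it in any environment where fetch is available):

// Resolves with the first fulfilled promise; "google" wins if it responds first.
const promises = [
  fetch('https://www.google.com'),
  fetch('https://www.bing.com'),
  fetch('https://duckduckgo.com'),
];

Promise.any(promises)
  .then((first) => console.log(first.url))        // e.g. https://www.google.com/
  .catch((err) => {
    // Only reached if *all* of the promises reject
    console.log(err instanceof AggregateError);   // true
    console.log(err.errors);                      // the individual rejection reasons
  });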
https://medium.com/better-programming/the-top-3-new-javascript-es-2021-es-12-features-im-excited-about-a3ac129efbb2
['Nick Bull']
2020-11-17 15:45:04.668000+00:00
['Programming', 'JavaScript', 'Software Development', 'Nodejs', 'Es2021']
Three Strategies That Make Remote Teams Successful
Three Strategies That Make Remote Teams Successful (Avoid Pitfalls when Shifting to Remote Work) Over the past 30 years I’ve worked for global companies that adopted flexible remote working policies, and have traveled frequently and worked remotely as part of small startups and virtual teams. As a consultant, I know what it feels like to be a permanent remote worker as well as what it is like managing and mentoring teams from afar. What I have learned is that figuring out the how — technology platform, communication norms, meeting cadences, core work hours and time zones — and fostering social interaction and relationship building, is only half the battle. The biggest hurdle in helping remote teams to achieve efficiency and effectiveness comes from establishing and maintaining a clear, collaborative process for goal setting and progress reporting, in a way that empowers team members to embrace it. The Next Normal Even before the coronavirus pandemic, CNBC found that 70% of professionals work remotely at least once a week. It’s not always easy to do, and few companies supported 100% work from home for all employees. However, now that most companies have gotten somewhat settled into a remote working cadence, some companies may never go back to an office regularly. What does this mean? Fewer commercial office space leases, more video chats, fewer opportunities for informal interactions throughout the day. Team relationships are harder to foster, onboarding is more difficult, and corporate culture is harder to maintain. The popular management strategy MBWA — management by walking around — isn’t feasible and a once familial atmosphere is thwarted by the impersonal nature of a digital screen. Managers are less able to observe employee dynamics, build personal relationships, and quickly spot problems before they become issues. While goal setting methodologies abound, they are not always implemented in a way that supports and motivates virtual workers. The following strategies can help facilitate clear and focused communication to create employee engagement and team building. These 3 core steps are often missed, causing companies to stumble as they shift to remote working. STRATEGY 1: Make everyone’s top work priorities visible to everyone else on the team, including yours. Visibility creates accountability, and social accountability is a powerful lever that motivates accomplishment. Not everything has to be visible, but allowing everyone to see what everyone else’s most important goals are on can be a powerful motivator. A recent Asana survey reported that 74% of employees do not know how their work impacts the bottom line. Not only can better visibility give individuals an understanding of how their efforts contribute to the overall success of the team, but constant visibility of goals and accomplishments can also help team members appreciate what each person does. While visibility creates understanding and accountability, transparency creates trust. Trust and understanding lead to respect and can also serve to inspire individuals to go above and beyond — to do what they can to ensure that the team (and the company) succeeds. Knowing that such extraordinary efforts will be recognized and rewarded by their peers can also drive more of the desired behaviors. The cadence of visibility also matters. With the advent of technology, people expect frequent updates, and companies require agile pivots to succeed. 
No longer is the annual company meeting enough to keep everyone in the loop, and team members expect more. Having a central source of truth that is readily accessible 24/7, plus a weekly/monthly/quarterly cadence for updates is important for both team leaders and members to maintain effective communication, ensure consistent ownership of goals, and allow for shifts in priorities as needed. Goal-setting* is often everyone’s last priority, but deprioritizing it can be even more harmful when a team is remote. Without clear goals, there can be an unintentional misalignment, lack of productivity, or worse, a lot of busywork that amounts to nothing. Some leaders may worry that making bold commitments visible may set everyone up for failure. However, undeclared goals are rarely achieved. As long as a goal is attainable it can serve as a powerful rallying cry to motivate a team to step up and deliver. What happens if goals change? Sometimes companies have to pivot or shift focus for a myriad of reasons. Loss of a major customer or supplier, new competitor, slowing demand, or perhaps a global pandemic. When goals are visible and future changes need to be made, it’s easier to shift the team because there is a reference point for explaining what needs to happen differently and why. Goals can also affirm cultural values (encouraging community service, or supporting personal goals). Visibility of “culture-related” goals can also be a constant reminder of what the company stands for, which can be a powerful motivator as well. *To help companies get better at goal setting, there are many methodologies that can help support the process: OKRs, Smart Goals, EOS “Rocks”, etc. STRATEGY 2: Creating a culture that rewards proactive updates means less micromanagement. No one likes to be micromanaged, but without the in-person, casual interactions that used to happen in an office environment, it is difficult to see what everyone is working on, and know when they are working. The younger generations of workers prefer frictionless (no human intervention), on-demand (when they are ready), and expect software to solve problems (there’s an app for that). Team members do not want to be asked “did you do that yet?” and managers do not want to ask, fearing that they will come across as micro-managers. Having a process that creates social accountability for completing work can help solve both problems. In addition to a clear understanding of goals that are visible for all to see as discussed in strategy #1, what’s required to make a frictionless process work? Regular cadence for individuals to report the progress of each goal (with temporary consequences for not being timely). A weekly meeting for sharing highlights of progress, and recognition of individuals for what they achieve. A weekly report that documents progress and recognizes achievement, that is visible to everyone. Goal-setting methodologies require all the above to make them work. EOS recommends creating an Accountability Chart and the Weekly Pulse Meeting. Performance or goal management software tools that track the progress of OKRs and Smart Goals, automatically reminding teams to provide updates, can also easily provide visual dashboards to support weekly meetings. Making updates visible encourages social accountability and becomes the central source of truth that all can rely on. 
A clearly defined process that holds everyone accountable for work yet enables them the freedom to work on their own time is the perfect structure for empowering teams to deliver while also ensuring that the reporting happens and goals are achieved. STRATEGY 3: Motivate teams to actively participate in the cultural shift. Intrinsic motivators are often the most powerful. Gamification advocates understand this, and earning points, badges, and prizes are often used to reward progress against a goal. Widespread use of game mechanics in education is training our next generation workforce to respond positively to streaks, leaderboards, and social praise. Change happens when people want change to happen. Shifting the culture to maintain more transparent communication with remote teams starts with motivating individuals. The use of gamification tactics is natural for younger generations of workers and will become even more prevalent in the future. Here are some of the ways gamification can encourage active participation in a cultural shift towards ownership, accountability, timely updates, and accomplishments. Earning points for achieving mini-milestones and consistently demonstrating cultural values. Group rewards based on points that individuals earn. Encouraging appreciation for each other’s work, acknowledging support when it is given (earns points and individual recognition) Adding gamification can be intrinsically motivating, whether it’s trying to earn as many points as possible (point scores) or consistently earning points (streaks) and the recognition that accompanies both (badges and leaderboards). In addition, rewards that are earned can be much more meaningful than rewards that are given, since thank you “gifts” are not guaranteed and might not be clearly tied to effort. Empowering team members to articulate their own goals around work can create buy-in and accountability. Incorporating game mechanics into everyday work can encourage consistency and extra effort as team members strive to achieve the next reward. Conclusion: In the past 5 months, 45% of companies have had to shift their goals. To do so effectively requires a central source of truth combined with infrastructure that allows for clear consistent communication. Creating a culture of accountability and transparency that is intrinsically motivating to people can lead to a winning formula. While 36% of companies use spreadsheets to help manage their goals, it’s hard to turn a spreadsheet update into an engaging, motivating process that requires teams to update progress regularly. Easy to use, gamified goal management software tools, designed specifically for small business leaders without an IT or HR department, can provide a simple, fun, and affordable way to align and motivate a remote working team. While technology can’t solve all of the challenges of goal management, software tools can support the shift and get new processes to stick. These three strategies may present cultural shifts for you and your team, but in the long run, will transform your company culture and produce a collaborative team that’s better able to operate virtually. Avoid the stumbling blocks that often accompany the shift to virtual teams and you can celebrate achievements and accomplishments together.
https://medium.com/swlh/three-strategies-that-make-remote-teams-successful-5018f345862d
['Jennifer Apy']
2020-08-26 01:52:27.898000+00:00
['Remote Working', 'Leadership Development', 'Management And Leadership', 'Small Business', 'Motivation']
2020’s Top Ten Stories: Military Aid to Civil Authorities (MACA)
1. What is Military Aid to Civil Authorities (MACA)?

Military Aid to Civil Authorities, aka MACA, is the help and support provided by the Armed Forces to authorities in the UK, like the Police, NHS or local authorities. The government can call on the military to assist at times of need, to share the burden on civil organisations. MACA may include assisting other government departments with urgent work of national importance, such as responding to emergencies and maintaining supplies and essential services. The Armed Forces also step in to provide assistance to communities for special projects or events of significance, or through the use of Reservists.

A British Army soldier supporting the Police during Operation Temperer in 2017.

2. Why the Armed Forces?

Our Servicemen and women are called upon due to the specialist skills, experience, expertise and equipment they have to offer. The military stands ready to support civil authorities when the latter's capacity is reached. Support and resources are offered up when they are not critical to the running of the military's core mission.

Servicemen are pictured providing security in London during the 2012 Olympic Games in the city.

3. When does the government call in the military?

The military only gets involved if and when:

The civil authority has all or some capability, but it may not be available immediately
The urgency of the task needs rapid external support
The civil authority lacks the capability to fulfil the task and it would be unreasonable or too expensive to expect one to be developed
There is a definite need to act and the tasks the Forces are being asked to perform are clear
Mutual aid and commercial alternatives have been discounted

4. What exactly would the military help with?

As highly trained members of the military, our soldiers, sailors and airmen and women can help with:

Planning: When needed, a team of military planners can support government departments in tackling a crisis and act as military liaisons. Read more here.
Natural disasters: Most memorably, at times of flooding and snow, the military can help protect human life and property and reduce distress. Animal disease outbreaks or public health epidemics: Provide medical and logistical expertise as required. Public service related industrial disputes that affect our safety or security, or disruption to transport or communications links: The UK Armed Forces have trained logisticians who can lend expert advice at a time of need. Criminal or terrorist activity: Providing specialist expertise, skills and experience tailored to the circumstance. Acts of terror: The government can deploy Service personnel to take the place of police officers and help protect people, landmarks and events — enabling more police to be reallocated to an ongoing crisis/emergency. Mountain rescue: The Royal Air Force Mountain Rescue Service is able to support civil authorities by helping people in danger in hard to reach places. Protecting UK waters: While the Armed Forces constantly protect our waters, the government can request their support specifically for tasks relating to energy installations, ports, immigration issues and fishery protection. A RAF Chinook helicopter deployed from RAF Odiham to Whaley Bridge assists civil authorities dealing with the emergency situation at Toddbrook Reservoir. 5. Do you have any examples of MACA? 
Flood relief during Storm Dennis — Over 140 British Army personnel were deployed in 2020 to help assist civil authorities in providing flood relief to local communities in West Yorkshire. 2012 London Olympic and Paralympic Games — In total, up to 13,500 personnel and a number of military assets were deployed, including Royal Navy warships and RAF fast jets. Whaley Bridge — In 2019 a RAF Chinook helicopter was tasked to Whaley Bridge to assist with a collapsing dam at Toddbrook Reservoir amidst fears their Derbyshire town could be flooded. Operation TEMPERER — UK Armed Forces personnel were deployed following the Manchester Arena Bombings and then again following the Parsons Green attack in London. The Covid Support Force — In March 2020, the UK Armed Forces supported the national response to coronavirus, from delivering PPE to air transport support and mobile testing units. Find out where the UK Armed Forces are deployed around the globe:
https://medium.com/voices-of-the-armed-forces/2020s-top-ten-stories-military-aid-to-civil-authorities-maca-fae8f27d8789
['Ministry Of Defence']
2020-12-22 12:12:22.725000+00:00
['Flooding', 'Military', 'UK', 'Coronavirus', 'Covid 19']
A Simple Life Isn’t About Reducing the Amount of ‘Stuff’
For some reason this week, my mind has gone to simplicity and complexity — the feeling of having nothing that I ​need​ to do and yet doing the things I want to, no matter how seemingly complex. In life and work we think that complexity is about our circumstances — how much stuff we have, how many people or projects we manage. We believe that, if we control things or people, if we reduce our stuff and keep everything in its place, we’ll be more comfortable with that complexity: that creating simplicity on the outside can create a feeling of simplicity and calm on the inside. In work, there’s talk of being able to ‘manage change’, of being agile — words that imply some kind of control of or response to what’s out there, that being on top of circumstances is a first step to being able to manage ourselves and others. In the personal development space, we’re seeing a trend towards minimalism — the premise being that the world is more complex now than ever, and the less ‘stuff’ we have around us, the less we have to think about, the clearer our minds will be, and therefore the happier and more peaceful our lives. And yes, there does seem to be a correlation between having peace of mind, and being able to ‘manage’ everything life throws at us. But which way does the causality run? Does Simplicity Create Peace of Mind? I love the ​idea​ of keeping things simple, it definitely seems as if it’s the way to create peace of mind. And yet… …I look at what’s actually going on in my life and somehow my reality contrasts sharply with this ‘ideal’. At home, I thought I was downsizing, making my physical surroundings small and modest, to give me more choices and ‘freedom’ outside. Turns out I’m moving towards something bigger, older, and more demanding of my attention than where I currently live. In business, where I thought I had a clean, straightforward way of working, it turns out I’m creating projects that involve managing others, with layers of relationships with individuals and organisations I never imagined I would have. And yet, rather than any feeling of complexity in either domain, life feels easy, it feels slow (mostly!) and it feels unpressured. I don’t take on responsibility for things that are not mine, and I don’t experience change or unforeseen events as problematic or overwhelming. Well, mostly… …we’re all human sometimes ;-) Why is this? Why is it that going against the grain of what we’re told will help us create simplicity, is the way to find a feeling of an effortless life, well lived? Apparently Not… It isn’t about the ‘stuff’. We’re moving and sometimes my husband panics about how much there is to pack, “We have so much stuff!” he cries. It looks to him as if it’s the ‘stuff’ that is causing him to feel the way he does, and that we need to do something immediately to either reduce, or manage the ‘stuff’ so that he can feel better. Mostly I ignore him. I know that it isn’t the ‘stuff’ that is causing his momentary anxiety and that responding to the feeling is likely to reinforce his misunderstanding. We don’t really have that much stuff and it will get packed when it gets packed. It’s Like Falling Off a Bike… We were out with the dog earlier this week and we saw a small boy fall off his bike. The boy’s immediate reaction was to turn around and look for his parents. His mum had seen the fall, and she smiled. And then ignored him. She wasn’t being a bad parent, she could see that he was OK, she was simply teaching him not to make a fuss when he wasn’t really injured. 
The boy picked himself up and rode off. You could almost read his mind as the thought of “oh well, I must be OK then,” flashed through his brain and any pain or desire to cry disintegrated in an instant. You see, his reaction — to cry or not to cry — didn’t come from the fall, it came from the thought ​that he might be in pain. It’s an obvious example that any parent (or pet owner!) knows to do. We want to show our children their innate resilience, so we make a judgement about the severity of the fall and our reaction is an example to the child. We might not think of it like this, but we’re teaching them which thoughts to pay attention to, and which to ignore and allow to pass on by. By ignoring the idea of pain, the child can get back on the bike and ride on without making, or feeling the need to make a fuss. In this example we’d see it as an over-reaction to take the bike away and never allow the child to experience risk. We know that the degree of pain is in the child’s head, that some risk is inevitable, and that being scared of falling is more likely to lead to an injury. And we also know that it’s better to encourage the child to develop an open and free mind to enjoy life, and be ready to respond to dangers as they arise (of course, while exercising reasonable caution.) ​ In Life, We Take Our Bikes Away… In life, though, we seem to have lost sight of the wisdom of developing this freedom of mind. We operate with the professional and personal equivalent of taking away our bike to minimise any current and future pain. We think that controlling our circumstances is the way to manage our emotions. We get so wrapped up with the notion that we must remove complexity if we want to experience peace of mind that we forget that complexity is a consequence of what we’re thinking — not of what’s ​out there. In the same way the boy can experience pain, or not, from a minor fall from his bike. Or that my husband can one day panic about the amount of ‘stuff’ we have and the next be totally at peace with how our move is progressing, so each of us is operating from our state of mind, not the state of our circumstances. Right now, I find myself creating complexity in a work project, and yet I’m experiencing it as joyful, easy and effortless. And the same is true for you: the feeling of self-assurance and certainty that you think will result from managing what’s in front of you, is available to you no matter how complex your personal or professional circumstances. And it’s the feeling that gives you the authority and the confidence to manage those circumstances to the best of your ability — just as the boy on the bike will have more enjoyment from his cycling adventures the freer his mind becomes. It’s All in the Mind This freedom of mind is the key for you to unlock an easier way to live and higher performance at work. From that state of mind, it might occur to you to create a simple life, with minimalism on the outside. Or it might not. Like me, you might pursue projects that require ways of interacting with people that might, at one stage of your career, have been considered ‘high risk’ or ‘high stakes’. And that, now, simply appear as the obvious next step in creating what you want. Or you might create more simplicity — sell your stuff, create a succession plan at work, slow down. Whatever emerges in your life, know this: complexity isn’t about the outside world. ‘Change management’ isn’t something you ‘do’, it’s an effect of knowing who you are. 
How you show up to your circumstances dictates how they turn out. Like that child falling off a bike, you can cry, and believe it’s because of the bike, or you can shrug, get back on, and continue to enjoy the ride that life’s taking you on. From that perspective complexity is irrelevant; love of life is the only thing that matters. With love, Cathy _______________ About the author Cathy Presland is an expert in personal and professional leadership and an advanced transformative coach. She has more than two decades of experience in government and international organisations and her focus as a coach is to support impact-driven individuals and organisations to improve their performance, leadership and peace of mind so they can make more of a difference with the work they do.
https://cathypresland.medium.com/managing-complexity-isnt-about-what-you-do-it-s-how-you-see-the-world-496c694ebffe
['Cathy Presland']
2019-05-01 13:31:39.299000+00:00
['Simplicity', 'Complexity', 'Leadership', 'Productivity', 'Life Lessons']
A note to the absentee parents of the world
My dad being the superstar that he is, took on the motherly role as best he could. He wasn’t much for girl talk and pillow fights, but he definitely raised some strong-willed and resilient girls. I can’t remember a time that I wished my mother was there because he did a great job of making us feel whole. Being the oldest, I kind of assumed the role of mom when it was necessary and became the confidant my dad needed when the girls required guidance. Somehow through all of that, I forgot that I also needed guidance, but having an absentee mother was the norm. So I didn’t really mind filling that void with other things. I compensated by being overambitious in everything I did and always pushing forward. Literally always looking forward, wanting more, thinking I needed to be better, and chasing the next big thing at every opportunity. It sounds like being an overachiever is a straight path to success, but it was actually pretty lonely and was filled with failures. That feeling that there is always something else, always something more, or that I could be working harder is exhausting. When I reached my twenties, about halfway through college, I realized how self-destructive that was. Hitting that realization forced me to take a break and to breathe. To reflect on the craziness that had been my life and realize that I still had a lot of time to change who I was and not be forever forged by the impacts of my absentee mother. Though those experiences will always be a part of me, they do not define me and they are not my fault. This is the hardest lesson that a child who has endured the absence of a parent has to learn and it’s a tough pill to swallow. So, what exactly are the impacts of an absentee parent? Let’s talk about a few 1.) Overcompensating There is no ample explanation to give a child about why their parent left them. It doesn’t matter if they were struggling with drug addiction, depression, having an affair, etc. All the child knows is that they left and they put the blame on themselves. We believe that the only plausible explanation is that we are not enough. That feeling sets in deep and seeps into every aspect of life, even if we are unaware. Overcompensating becomes a coping method that we often use to ignore our feelings and emotions. 2.) Emotional Turmoil You can imagine how twisted your emotions get when you are fighting feelings of being inadequate, dealing with the fact that your parent(s) left, and all the frustrations in between. Emotions become this scary, very messy, and often avoided subject. For me, all of this manifested itself in anxiety and I’m talking anxiety from hell. I became super avoidant of any emotion and basically let my anxiety rule my life. The number of anxiety attacks and mental breakdowns I had on the floor of my closet growing up is actually embarrassing. I can’t be the only one who sought clarity on the floor of a messy closet… 3.) Relationship disasters This is where we experience major trust issues, people! Trust issues and either building huge walls that take years to tear down or a very unhealthy level of dependency in relationships — and not only romantic relationships. These issues also trickle into every other type of relationship. For me, it was the wall. It feels like the wall is protecting you so you keep it up and when people try to get closer, you add a couple of layers. It worked for a good 20 years or so, but when it came crashing down I had a lot of baggage to deal with that I wasn’t — and still am not — ready to unpack. 
The only way to get through it is one day at a time. 4.) The Guilt Okay, this one gets to me the most. Anyone who grew up without a parent knows the feeling of guilt, and sometimes shame, that comes with being left by a parent. It all goes back to the feeling that it’s our fault, that we are not enough, and something is wrong with us. A completely unfair burden for a child to bear, but somehow our brains trick us into thinking these things on repeat and it doesn’t fade easily.
https://medium.com/be-unique/a-note-to-the-absentee-parents-of-the-world-58362f255c96
['Venessa Amber']
2020-12-17 07:30:43.713000+00:00
['Self-awareness', 'Family', 'Parenting', 'Relationships', 'Thoughts']
Mormon, Stoic, Taoist — Notes on Effortless Action
The Stoics taught that the difference between peace and suffering is learning to distinguish between what is and is not in your control — a state of mind called apatheia. Apatheia is like archery: you can control the type of bow and arrow you use, your form and stance, time and setting — but then you have to let the arrow fly. You’ve made your preparations, but once the arrow is loosed, you don’t decide what happens next. A crosswind might divert it; your target may fall over; an animal might leap in the way — none of which is up to us. What belongs to us, our task, is to prepare the shot. From there, whatever happens, happens. This Stoic mentality reminds me of wu wei, or “effortless action,” in Taoism. Wu wei lets nature flow like water, and joins that flow, against our tendency to freeze up and become rigid, unbending, immovable — much like ice or stone. We often work despite nature, not with it; we say we know better, so wu wei requires trust. It’s the trust of laying back in water and letting it buoy you; of a cat falling from a tree and loosening up rather than tensing — and landing on its feet as a result. In Taoism, wu wei answers ”thusness.” “Why are things as they are?” “Because they are, thus and so— now what?” Let them be, let yourself be — and everything accomplishes itself of its own accord. I often find myself saying in moments of passing stress or frustration something that might sound a bit quaint: How would I do this differently if I were water? The sentiment is enough, I think: force nothing, flow with everything; either way, things tend to work out the way they do — no other way. That’s the thought of Lao Tzu and the Tao Te Ching: the Tao forces nothing but is the space within which all this occurs, making room for everything, neither dividing nor ruling — only holding, like the empty inside of a vase, and the air that fills it. Like the present moment in which all this occurs thus and so. I’m tempted to say Mormonism has a touch of this wu wei in it. D&C 88 describes the earth as following the same “celestial law” as the gods, which is to say that the earth “filleth the measure of its creation” (88:25) — it flows as thus and so it is. Many Taoists might like that verse, seeing nature and her effortless action as the prime model for us, the ones who can’t stop suffering under our efforts. Perhaps that’s the essence of grace: to stop trying; to let be. To flow over it like water over a stone; to let your feet leave the floor as when you let the water carry you; to loosen up and let nature flow as she does like a cat falling from a tree, always landing on its feet.
https://medium.com/interfaith-now/mormon-stoic-taoist-notes-on-effortless-action-8e7e80acf379
['Nathan Smith']
2019-09-22 00:42:57.631000+00:00
['Mormon', 'Philosophy', 'Taoism', 'Stoicism', 'Religion']
The Kinky Female Mind: Why Women Fantasize About Rape and BDSM
Rape Fantasies Of course, the ultimate power fantasy is rape, which is an extremely common one among women. Around 62% of women report having one at some point in time. Just why women fantasize about an act that most of us would find extremely traumatic were it to take place has perplexed and fascinated researchers for years. Now, as many sex experts point out, most women aren’t fantasizing about being hurt or injured. Realistic images of violent rape aren’t spank bank for the majority of women. The “rape” fantasy of the average woman is primarily about aggressive seduction and has its roots in cultural norms. Women are socialized to experience our sexuality second-hand and feel desire by being desired. So, in a typical rape fantasy, the fantasizer is overpowered by a studly man who can’t control his desire for her because she’s so smokin’ hot. Women with forced sex fantasies often have rich fantasy lives replete with many sizzling storylines, ranging from the romantic to the tender to the taboo. Far from being an indicator of pathology, it’s more likely an indicator of sexual satisfaction and an uncensored imagination. And the tougher the broad the more likely she is to be drawn to rape themes. According to Hawley and Hensley’s research on female fantasies, dominant, high self-esteem women had more rape fantasies and were more attracted to dominant men than their meeker sisters. As the researchers noted in their study of 355 women: “If such fantasies reflect a masochistic desire for pain and humiliation perpetrated by a misogynistic and brutal aggressor (e.g., Baumeister, 1989), then the dominant woman should not entertain these fantasies because doing so would strip her of her power. If these fantasies instead reflect a passionate exchange with a powerful, resource-holding, and attentive suitor, then through them the dominant woman could reinforce her high standing in the group and her favorable opinion of herself.” It should be pointed out that there is a huge difference between sexual fantasies and sexual wishes. Though Lehmiller spent a good bit of time in his book discussing how to make your fondest fantasy a reality, many women don’t want to act out their rape fantasies. When a Rape Fantasy Is NOT a Sexual Fantasy Not all rape fantasies are meant to be sexual turn-ons. Sometimes, they’re a way to deal with trauma. One 2009 study separated rape fantasies into three separate categories 1. erotic 2. aversive, and 3. erotic-aversive. Erotic Rape Fantasies Erotic rape fantasies are the stuff that romance novels are made of and follow a standard plot-line: Dreamy stud undone by lust pursues reluctant heroine. She resists (sort of), he persists (definitely), then takes her in a heated frenzy (no animals were harmed in the making of this fantasy). When the dust has settled and cats have come out of hiding, they’re both smiling. These kinds of forced sex scenarios were characterized by high female arousal, little violence, and fake initial resistance. Many women starred their partner, ex-partner, or someone they knew as the perp/rapist. Aversive Rape Fantasies Aversive rape fantasies were rare in the study and shouldn’t properly be classified as “sexual fantasies” since most women were NOT aroused by them. They were characterized by genuine non-consent and often aggression at the hands of a faceless stranger, or a relative. Most women experienced feelings of guilt and shame over these thoughts and a sizable proportion of the women in the study who reported them had a sexual abuse history. 
The researchers theorized that: "Aversive rape fantasies appear to operate as attempts to deal with the fear of actual rape by gaining some sense of control over rape situations and rehearsing how one might deal with actual rape." Erotic-Aversive Rape Fantasies These fantasies were a paradox — a mixed bag of fear and arousal. Most of them starred a partner or an ex of the fantasizer. They contained elements of non-consent that both aroused and disturbed. The researchers didn't give an explanation as to why some women have these daydreams, but I imagine it might be for the same reasons aversive fantasies exist — as a way to deal with trauma and fear. Many women have experienced sexual abuse at the hands of someone they once trusted, and fantasies are often our minds' way of processing disturbing emotions, pain, and trauma. Plus, anxiety and sexual arousal can be uneasy bed-mates, as I've gone into before in my article on the psychology of the erotic mind. Autonomic nervous system arousal underlies both fear and sexual excitement, and our minds can confuse the two since they are physiologically identical states. Fear can be a turn-on under the right circumstances. Abject terror can even create "fear goggles" in some cases. In social psychology, we call this the misattribution of arousal theory. To give you a practical example, if you're looking to get laid, it's better to take your date somewhere exciting like a scary movie, out bungee jumping, or nowadays just out of the house — anywhere that gets the nervous system all revved up — and s(he) will "misattribute" the excitement to you and think you're hot stuff.
https://medium.com/sexography/the-kinky-female-mind-why-women-fantasize-about-rape-and-bdsm-8c6a6ad4dd52
['Kaye Smith Phd']
2020-05-01 02:41:26.592000+00:00
['BDSM', 'Sex', 'Psychology', 'Women', 'Sexual Fantasy']
How to Become a Python Developer
With every industry or business focusing on integrating new technologies such as AI, ML, and data science, the need for Python developers increases exponentially. For the last decade, Python has become very popular amongst Computer Science communities, and because of market demand, there is a surge in Python Developers. Python is popular among developers because it supports many IDEs, frameworks, APIs, and libraries for development in the AI and data science domains. If you are new to Python and want your career designation to be Python Developer, you are reading the right article. The article covers everything you should know about how to become a Python Developer and the required skill set. Who is a Python Developer? Before we jump to the meat of this article, let's look at a loose definition of who a Python developer is — "A Python Developer is a person who uses the Python programming language to build, deploy, implement or debug a project." This definition of a Python developer is right, but it does not cover all the aspects of being a Python developer. According to this definition, even a complete beginner who has just learned how to write simple syntax in Python is also a Python Developer. He or she can call himself or herself a Python Developer, but there is a lot in Python we need to learn before we call ourselves Python Developers. I have mentioned all the necessary skill sets (technical and non-technical) to be a Python developer. Let's read them below. Python Programming Language Python is a high-level programming language, and like other high-level programming languages, its code and syntax are human-readable. Programming in Python feels like you are writing line-by-line instructions for a program. Its syntax is so easy that even a non-programmer can understand what the code is trying to say. The object-oriented language was developed by Guido van Rossum and released in 1991 so that developers could write clear code for projects of all sizes. Python supports multi-paradigm programming, including functional, imperative, structured, reflective, and object-oriented programming. Among these five paradigms, object-oriented is the major one because, right now, developers widely use OOP concepts to solve real-world problems. OOP provides concepts like classes, objects, inheritance, polymorphism, abstraction, and encapsulation, which can be used to represent real-world entities and solve real-world problems. As I have already mentioned, Python is a high-level programming language, but computers cannot understand code written at a high level, so it requires a translator. A computer translator is a software program that converts high-level programming code to a low level so the computer can execute it. There are three types of translators: compiler, interpreter, and assembler. Python uses an interpreter as its translator tool to convert its high-level code to a low level and execute it. Why Learn Python? Python is loved by programmers and data scientists for its versatility and object-oriented features. As discussed earlier, it is in huge demand for building AI and ML-based applications; apart from this, it also serves many web and mobile application industries. Although there are a plethora of reasons to jump into learning this language, let us briefly look at them below: Features of Python Programming Here are some Python programming features, so you know what makes Python a globally popular programming language.
Easy to write Code: Python is the foremost programming language with the easiest coding syntax. You do not need any prior programming experience to write code in Python. Open-Source: It is a free-to-use programming language; you do not require any license or permission to write code in Python or use Python code in your project. Supports Object-Oriented Programming: As mentioned above, Python supports multi-paradigm programming, and Object-Oriented is one of those paradigms. Right now, developers prefer programming languages which support OOP concepts. Portable: All popular operating systems such as Windows, Unix, Linux, and Mac support Python. However, you first need to install Python on your system from its official website. Libraries: Python is one of the few programming languages with thousands of open-source libraries. And it comes with a package manager called "pip," a terminal command tool that helps us install, update and uninstall python packages in our system using a single command. Support GUI: Using Python, we can build Graphical User Interface (GUI) applications. There are many libraries present in Python which can help us build graphical applications. Python Developer Skill Sets If you use Python to write, debug, deploy, and test your program, model, or application, you will be considered a Python Developer. Python is a versatile language, and there are various job roles and domains associated with this robust programming language. For example, using Python, you can be a software developer, web developer, data science engineer, AI/ML engineer, or deep learning engineer. It does not matter in which domain you use Python; you will be designated as a Python developer. But some technical and non-technical skill sets distinguish your Python developer role in the IT industry. Let's look at all the technical and non-technical skill sets you require to become a Python Developer. Core Python Programming Web Development with Python Data Science with Python Machine learning and Artificial Intelligence with Python Deep Learning with Python Core Python Programming It does not matter what job role or domain you want to associate with in the future. You need to be an expert in basic or core Python programming. Whether you are looking for a Python web-development or Python data science career, you need to go through core Python programming.
If you miss this phase of Python or ignore some of its essential concepts, then you will be finding Python as the most daunting and complicated programming language to code with. Because it’s only the core Python, which has easy syntax, as you move further to advance concepts, Python becomes a tricky language. Only those who are well acquainted with its basic concepts can code well in Python. Technical Skills Python Variables and Data Types (int, float, str) Python Data Structures (list, tuples, dictionary, sets) Iterators Exception handling File Handling Object-Oriented Concepts (class and objects) Generators and Decorators Non-Technical Skills Basic Math Knowledge Problem Solving Skills Python Web Developer Python is well known for its powerful web-frameworks. A web-framework is a tool that is used to build a web application. Django and Flask are the two most popular Python frameworks, and with more than 52K stars on GitHub, Django has become the second most starred web-framework repository. To become a Python web developer, you should possess the knowledge of any Python web-frameworks with the front-end trinity HTML, JavaScript, and CSS. Apart from these technical skills, a web-developer is supposed to know all about servers and networking. Technical Skills Core Python Programming Python web framework (Django, Flask, Pyramid) Tkinter for GUI based Applications REST APIs (Django REST API) HTML CSS JavaScript MVC-MVT Architecture Non-Technical Skills Basic Mathematics Networking Knowledge HTTP requests working Server-Client Model working Python Data Science Engineer For the last five years, Python has been widely used for data science projects and models. With core or standard Python, we cannot create complex data science models, but luckily Python has many open-source libraries. Data Science deals with a massive set of data. Here, programming skills are not enough; one needs to be well acquainted with high-school mathematical concepts like statistics, applied math, probability, algebra, etc. Technical Skills Core Python Programming Pandas (Python Library) Matplotlib (Pandas Library) Seaborn NumPy SQL Data Visualization Data Analysis Data Wrangling Data handling tools (Hadoop, Apache Spark, etc.) Non-Technical Skills Basic Math Skills Problem Solving Skills Probability Statistics Algebra Applied Math Machine Learning and AI with Python Both Machine Learning and Artificial Intelligence are the submodules of data science. If you want to be associated with these two technologies, you have to be an expert in machine learning and data science algorithms. Fortunately, Python has a myriad of ML and AI libraries that comes with built-in ML algorithms. In ML and Artificial Intelligence Engineering, you will rely more on your Mathematical and Non-technical skills than your Programming. Technical Skills Machine Learning Algorithms Well acquainted with Important Data Science terminologies (Data Wrangling, Data Analysis, Data Report, Data Visualization, etc.) Python Libraries like Scikit Learn, TensorFlow, and all other Python Data Science Libraries. Neural Network Non-Technical Skills Expert in Basic Math Expert in Probability Expert in Statistics, Algebra, Calculus, and Applied Math. Deep Learning with Python Deep Learning is a subfield of machine learning, so to be efficient in deep learning or to become a deep learning engineer, you have to go through machine learning. Deep Learning requires you to be fluent with all the Python machine learning libraries and artificial neural networks. 
Technical Skills Expert in Python Core Programming Well Acquaintance with the Python Machine Learning Libraries Well Acquaintance with Neural Networks Video Processing Language Processing Audio Processing Non-Technical Skill Expert in Basic Math Expert in Probability Expert in Statistics, Algebra, Calculus, and Applied Math. Python Basic Data Structures Whether you want to build a web-application or a data science model with Python, you first need to learn basic concepts and syntax. And the first step of learning a programming language starts with learning its basic syntax and its built-in data types. Let’s look at all the Python primitive variables, built-in data type, and syntax and try to get an overview of the language. Variable in Python The concept of variables in Python is similar to the variables we use in mathematics. Variables in programming languages are also known as Identifiers. A variable is a container for a data value. To represent a variable in Python, we use an arbitrary variable name with a data value. Example x = 100 Here x is the variable name, and 100 is the variable data value. In the above example, we used an arbitrary variable name x, but there are 33 reserved keywords in Python, which we can not use as a variable name. Python Reserved Keywords and as assert break class continue def del elif else except False finally for from global if import in is lambda None nonlocal not or pass raise return True try while with yield Python Built-in Primitive Data types There are 4 Primitive built-in data types in Python int (integers numbers) float (decimal or floating-point numbers) str (string) bool (boolean value true and false) The int data type represents the integer numerical values with no decimal numbers. Example >>> x = 100 >>> type(x) <class ‘int’> The float data type represents the decimal points number values. Example >>> y = 100.23 >>> type(y) <class ‘float’> The str represents the string value, and all the string values can be defined using single or double-quotes. Example >>> string = “Hello World” >>> type(string) <class ‘str’> The bool data type represents the True or False value. >>> z = True >>> type(z) <class ‘bool’> Python Non-Primitive Data Structure There are four major Python inbuilt non-primitive data structures list tuples sets dictionary A list is mutable and ordered Python data structure. Example >>> my_list = [1, “two”, 3.0, False] >>> my_list [1, ‘two’, 3.0, False] A tuple is an immutable and ordered Python Data structure Example >>> my_tup = (1, “two”, 3.0, False) >>> my_tup (1, ‘two’, 3.0, False) A set is a mutable and unordered Python data structure which only store unique data. Example >>> my_set = {1,1, “two”, 3.0, False} >>> my_set {False, 1, 3.0, ‘two’} A dictionary is a mutable and unordered Data structure which store data in key-value pair. Example >>> my_dict= {1:”One”, 2:”two”} >>> my_dict {1: ‘One’, 2: ‘two’} Python Libraries and Frameworks The standard Python is a minimal language and does not offer developers many things to work with. But all thanks to Python open-source libraries and frameworks, which make Python the most significant programming language. Python comes with a built-in package manager “pip,” a terminal command capable of installing, updating, and uninstalling Python libraries. The reason behind Python’s versatility is its libraries and framework. Python has libraries for almost every IT domain. From web-development to deep learning, you can find the perfect library for your Python Project. 
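To make the pip workflow above concrete, here is a minimal sketch of how a library installed with a single command (pip install pandas) is then used in everyday code; the city names and numbers below are placeholder values chosen only for illustration.

# a small sketch: using a pip-installed library (assumes "pip install pandas" was run beforehand)
import pandas as pd

# build a tiny table of cities with made-up population figures (illustrative only)
cities = pd.DataFrame({
    "city": ["Delhi", "Mumbai", "Chennai"],
    "population_millions": [30.0, 20.0, 11.0],
})

# the library gives ready-made operations instead of hand-written loops
print(cities.describe())  # quick summary statistics of the numeric column
print(cities.sort_values("population_millions", ascending=False).head(1))  # row with the largest value

This is the pattern the skill lists above build on: install a package once with pip, import it, and lean on its functions instead of writing everything from scratch.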
Here is a list of Top Python Libraries and Frameworks TensorFlow (library) (Machine Learning and Data Science) Django (web-framework) (Web Development) Flask (web-framework) (Web Development) Scikit-learn (library) (Data Science) Numpy (library) (Data Science) Keras (library) (Machine Learning) SciPy (library) (Data Science) Pandas (library) (Data Science) Tkinter (library) (GUI Software) Pyramid (web-framework) (Web Development) Beautiful Soup (library) (Web Development) PyGame (library) (Game Development) Python Projects Learning a Programming language is one thing and building a project using what you have learned is entirely different. Many new developers focus on learning new Python syntax and libraries rather than doing a project, which is not recommended unless you master what you have known before. If you are done with core python, before moving further and learning a new Python library, build some projects to scale your learning. When you build a project or a small program on what you have learned, you find out what you have learned so far and how you can use the basic concepts and data structures to build logic. Because learning any programming language syntax is not what companies are looking for, a developer must know how to build a project and, more importantly, solve problems using coding and logic. To help you begin with, below is a list of Python projects based on levels: Core Python Projects (console-based) Hangman Game Rock Paper Scissor Dice Rolling Simulator Email Slicer Message Encoder and Decoder Alarm Clock with Tkinter Graph Algorithms Tic Tac Toe Game Scientific Calculator Currency Converter Python Web Developer Projects (Django, Flask) Sending Mail with Python Login System Text to HTML Generator ToDo App Chat App Students attendance management using Django Automatic Tweet Posting Content Management System E-Commerce Web Application Python Data Science and Machine Learning Projects Fake News Detection Model Sentiment Analysis Speech Emotion Recognition Image Recognition Uber Data Analysis AI-based Chatbot Handwritten Digit Recognition Traffic Signs Recognition Conclusion To be a Python developer, you just need to keep upgrading your Python skills with time and keep working on Python projects. It would be naïve to call yourself a Python developer only by learning basic Python concepts. Python is an ocean of unlimited opportunities and libraries, and it has something for everyone and every domain.
https://skarora0290.medium.com/how-to-become-a-python-developer-382282547e71
['Simran Arora']
2020-11-01 03:39:40.494000+00:00
['Python', 'Developer', 'Python Programming', 'Python Developers', 'Programming']
Let The Rain Fall
Let The Rain Fall Wishes of a pluviophile Image by chulmin park on Pixabay Wind baptized by heat rules the atmosphere roaming everywhere Yet it’s no longer an auspice of hope but something that tires exhausted bodies Torsos of trees let go of dry leaves surrendering to its will, hiding the remorse Provoked dust teases the strained eyes but tear glands keep a poker face Lament of beings for a drop of water fills the air replacing cheep with sound of misery while mirage is reinforcing the effect showing the wounded a false hope Sun absorbs the silent sighs of a dry and cracked earth holding on to passive intentions While profusely soaking anything with its rays unconsciously fueling the drought Uncountable eyes are towards the sky yearning to see a ray of fortune Only to get lost in the blue endless abyss hanging right above their heads A single thought sparks the nerves that lurk in their skulls Even the heaven is blind to the pain they are going through at the moment? So, dear mother earth, kindly wake up from your sleep and open your eyes, don’t let the prayers those innocents mutter in their mournings pass beyond you without getting noticed, Make your mind to allow these dethroned rivers and streams to return to their old glory, Let thunder become a music to the ears and
https://medium.com/illumination/let-the-rain-fall-39a15f221732
['Salitha Nirmana Meththasinghe']
2020-04-26 10:02:03.995000+00:00
['Mental Health', 'Nature', 'Love', 'Poetry', 'Life']
AirBnB Is the Apple of Travel
It was early 2009, and we were seated in a conference room at a Tel Aviv office tower. Leaders from Israel’s cellular carriers and top mobile startups were seated around the table. Bo Ilsoe, a corporate venture capitalist at the world’s #1 cellphone company was giving a strategic briefing (Nokia, if you’re that young). It was less than 2 years after the iPhone’s launch, but the picture was already quite clear, and for Nokia, quite grim. Apple, with less than 10% market share, was projected to collect 90% of the profit in the global smartphone market. Sure enough, within the next 3 years, iconic brands built over decades were relegated to the backbenches of the cellphone market — Nokia, Motorola, BlackBerry, and Sony Ericsson, to name a few. And Apple? Apple’s market cap soared twenty-fold since that meeting. Airbnb has followed much of Apple’s playbook to date. Doubling down on these strategies can turn it from great success to a dominant juggernaut. Here’s why. Key Tenets of Apple’s Strategy: Vertically-integrated Distribution Design-led Product Philosophy Vertically-integrated Product Visionary Storytelling and Likability AirBnB is like Apple, Booking and Expedia are like… Microsoft Since launch, AirBnB’s meteoric rise has been driven by the following strategies: Vertical Integration 101: Want an AirBnB? Book it on… AirBnB While vacation rentals always existed, and VRBO / HomeAway were both doing nice business off of this category, AirBnB allowed virtually anyone to become a host — and sell their inventory exclusively on AirBnB. From the consumer’s perspective, that meant that if you were looking for that product, there’s exactly one place you can get it — AirBnB. Historically most of the travel industry is a marketplace where the same products — the same hotel rooms, flights and rental cars — are packaged and repackaged by a host of travel agencies (on or off-line), aggregators, consolidators… You can probably book the same hotel room / flight on 50 different websites and 1,000 different travel agents. Once consumers realized that, travel search became focused on price comparison, not product comparison, which led to the rise of the meta-search players — Kayak, Trivago, TripAdvisor (for awhile) and then - the 600-lbs gorilla — Google. The product is a commodity, all the consumer cares about is where they can get it $5 cheaper, which, in fact they may not be able to due to rate-parity agreements. From a consumer perspective — that differentiates the product and defines the channel. You booked an AirBnB on your iPhone — not a shared rental on your Apple smartphone AirBnB maintained this strategy (almost) consistently. Unlike Expedia and Booking, where “private label” and API-based distribution is a significant percentage of the business, AirBnB does not distribute its inventory through 3rd parties. If you want an AirBnB, you buy it from AirBnB. This brand loyalty means AirBnB’s marketing expenses are markedly lower than everyone else’s: AirBnB 2019: $538M on revenue of $4.8B — 11% of sales Booking 2019: $4.97B on revenue of $15B — 33% of sales In travel distribution, distribution is the product (after all — neither AirBnB or Booking actually furnish you with a bed or a breakfast). 
Therefore this strategy is the equivalent to Apple’s tight control of both their distribution channels (originally — Apple Stores, Apple.com, and exclusive relationships with wireless carriers) and their platform — Apple does not license its operating systems, restricts accessory manufacturers (MFi program) and its control over the App Store and iTunes is now the subject for anti-trust cases. If you want an iPhone, you get it from a channel Apple controls, and what you do with it / connect it to is tightly controlled by Apple. Design-led Product Philosophy — Focus on the End-user AirBnB’s Founder & CEO Brian Chesky is an industrial designer by trade. Not a software engineer, not a business major. Design thinking and end-user focus (with both guests and hosts in mind) were key strategies from the get-go. Not so for the travel incumbents. Look at Booking.com or Expedia today and the sites still look like they were designed 15 years ago. Every inch of screen real estate is dedicated to eking out either slightly higher revenue (via conversions or ads), or more search-engine-optimization love. Booking and Expedia are huge optimization engines built to solve the problem of converting traffic bought from a search engine to a profitable visit. In Booking Group’s current CEO Glenn Fogel’s words — “there is no one in the Priceline group who can tell you the lifetime value of a customer — we need to be profitable on the single transaction”. And while I’m sure much has changed in the nine years since we had that conversation, not very much of it is apparent in the products. Just like Nokia and Motorola designed their products around the needs and desires of the wireless carriers, the travel incumbents design their product around the ecosystem partners — the hotel chains, the airlines, the white-label partners, the advertisers, Google and the other traffic sources. And while this helps them squeeze the last drop of the lemon, in the eyes of many consumers, it’s still, well, a lemon. As appetizing as a hotel search on an OTA site. Apple focuses its design efforts on the consumer with very little respect or heed to its ecosystem — whether it is the wireless carriers or competing tech platforms, and this is the philosophy AirBnB is following. Clean interfaces, flawless activation and onboarding, dazzling design. Apple’s devices and ecosystem are designed for optimal user experience — as long as you stay within that closed garden. Not sure? Try sending a picture from Messages to an Android user. AirBnB seems to be executing on the exact same playbook. And if there’s still room for improvement, you can bet that their recent engagement of Apple’s design super-celebrity, Jony Ive will fill those gaps. What’s Next? More Vertical Integration Apple’s end-user focus and walled garden strategies have paid back in spades, propelling it from the 100th company in the S&P 100 in 2007 to #1, consistently since 2012. Where is AirBnB destined? With the credibility of a public listing and the cheap capital it affords, expect to see further vertical integration — and adding new horizontals. AirBnB will deepen its hold over hosts by providing them more services — from real-estate financing to physical supplies, devices and services to improve the properties and streamline the AirBnB guest experience. This is similar to Apple’s support for its developer eco-system — and its iron-fist approach to controlling the content on the App Store. 
Furthermore it will deepen its relationships with independent hotels, actual B&Bs and similar “alternative” accommodation providers—a natural extension of the product inventory. The desire not to stay in cubby-hole human beehives will stay with us for years or decades after COVID-19, putting traditional hotels at a disadvantage and leading to new lodging categories and probably also other uses for some of that real estate. Expect AirBnB to enter and dominate new categories. It was already making some progress on Experiences pre-pandemic, utilizing its network of hosts to create products exclusive to its platform. Post-COVID, I will not be surprised if we will see attempts to create new categories by buying (or potentially simply financing) distressed real-estate assets — from lodging through retail to office-buildings. These may then be turned into new product revenue streams. Short / long term rentals, shared workspaces, new forms of entertainment / conference or even retail venues. And, of course, with tight control over the booking and the product experience, AirBnB’s user-focused design culture is not done changing our world. In the travel distribution business, AirBnB has been breaking the mold since day one, and its DNA coupled with public-market resources is a recipe for continued disruption and growth. In Apple words, AirBnB is Thinking Different. Apple’s iconic Think Different superbowl ad Thanks to my teacher, Prof G, for inspiring this. Nadav Gur is a serial entrepreneur who was the founder of two companies in the travel space — WorldMate (acquired by Carlson Wagonlit Travel) and Desti (acquired by HERE). He is the principal at NG Vanguard Enterprises where he works with founding teams and investors to make them 10% better.
https://medium.com/swlh/airbnb-is-the-apple-of-travel-44fa3d0ea4e5
['Nadav Gur']
2020-12-14 22:30:07.623000+00:00
['Strategy', 'IPO', 'Airbnb', 'Travel', 'Apple']
Daily Weirdo Poems & Stories
Daily Weirdo Poems & Stories Sharing some of my recent work and the stories behind the poems. I'm working on making a living from my fiction and poetry. That means I have to actually publish my work for people to read (or ignore). Every day I post three poems or micro stories on my website and on various social media channels. I am also going to be posting them to Medium. Each day I share three different short poems or micro stories. Here are the posts for Monday, November 26th and Tuesday, November 27th. This first poem is part of my upcoming collection that I will be publishing in January 2019. The collection is called How the Machines Won. It's a series of speculative fiction poems that tell the fragmented history of how humans came to be enslaved by the machines and algorithms they created to make them smarter, happier, and richer. About half of the poems will be haiku. There will also be a few micro stories in the book. There will be about 600 poems and stories in the published work. It will be released in ebook and paperback. An audiobook edition is planned, but there is not a current date for that project. This second poem is a cinquain. Any five-line poem is technically a cinquain. I've been experimenting with meters where the first line is two syllables, the second line is four syllables, the third line is six syllables, the fourth line is eight syllables, and the last line is two syllables again. This poem will also be in How the Machines Won. I wrote this haiku for How the Machines Won, but I'm not sure if it will make the final cut. I'm not sure I like the first line. I wrote this haiku back in February, and this project has undergone a change in focus since then. This is one of my favorite haiku from How the Machines Won. This poem will be next to a micro story that fleshes out the idea of what an algorithmic bounty hunter is. I wrote this haiku as I was wondering about the idea of machine learning and context. The internet is full of information taken out of context. Being smart is about more than knowing a lot of discrete facts. It's about understanding the context that those facts fit in. The success of AI and machine learning hinges not on the ability to learn, but the ability to understand context. George Orwell's 1984 is also largely about context and how an authoritarian regime succeeds by destroying context. Not every poem and story I post will be from How the Machines Won, but this project has been front and center in my mind lately. The recent revelations about Facebook's PR shenanigans on top of their data abuses have a dark humor to them when you notice the prompt "What's on your mind?" when you view your timeline. It seems that social media platforms know us too well. This poem is a tanka. This is a Japanese poetry form related to haiku. Typically, English tanka have a syllable pattern of five, seven, five, like a haiku, and then two more lines of seven syllables each.
https://medium.com/weirdo-poetry/daily-weirdo-poems-stories-cc134c393c00
['Jason Mcbride']
2019-10-01 03:45:53.459000+00:00
['Poetry', 'Science Fiction', 'Haiku', 'Fiction', 'Writing']
Techniques for Subtour Elimination in Traveling Salesman Problem: Theory and Implementation in Python
Techniques for Subtour Elimination in Traveling Salesman Problem: Theory and Implementation in Python
INTRODUCTION
In this article, I will explain and implement the well-known Traveling Salesman Problem aka TSP with a special focus on subtour elimination methods. We will use python to implement the MILP formulation. The dataset contains the coordinates of various cities of India. The aim is to find the shortest path that covers all cities. We will cover the following points in this article:
Input the data and visualize the problem
Model TSP as ILP formulation w/o Subtour constraints
Implement Subtour Elimination Method 1: MTZ's Approach
Implement Subtour Elimination Method 2: DFJ's Approach
Compare MTZ's formulation and DFJ's formulation
Conclusion
The GitHub code for this article can be found on the link: https://github.com/Ayaush/TSP-ILP
1 Input the data and problem visualization
The CSV file "tsp_city_data.csv" contains the names of cities in India with their latitude and longitude information. The first city "Delhi" is assumed to be the starting point of the trip (depot). The data input to the TSP model is the distance matrix, which stores the distance (or travel time or cost) from each city (location) to every other city. Thus, for a traveling salesman problem over N cities (locations), the distance matrix is of size N x N. The variable no_of_locs in the code is used to define the first n cities we want to include in our TSP problem data. The value is set at 6 for now. The python pandas library is used to read the CSV file and build the distance matrix "dis_mat".

#import libraries
%matplotlib inline
import pulp
import pandas as pd
from scipy.spatial import distance_matrix
from matplotlib import pyplot as plt
import time
import copy

The function "plot_fig" is used to plot the data and visualize the problem, and the function "get_plan" takes the LP solution and returns all the subtours present in the solution.

#This function takes locations as input and plots a scatter plot
def plot_fig(loc,heading="plot"):
    plt.figure(figsize=(10,10))
    for i,row in loc.iterrows():
        if i==0:
            plt.scatter(row["x"],row["y"],c='r')
            plt.text(row["x"]+0.2, row["y"]+0.2, 'DELHI (depot) ')
        else:
            plt.scatter(row["x"], row["y"], c='black')
            plt.text(row["x"] + 0.2, row["y"] + 0.2,full_data.loc[i]['CITy'])
    plt.ylim(6,36)
    plt.xlim(66,96)
    plt.title(heading)

# this function finds all the subtours in the LP solution.
def get_plan(r0):
    r=copy.copy(r0)
    route = []
    while len(r) != 0:
        plan = [r[0]]
        del (r[0])
        l = 0
        while len(plan) > l:
            l = len(plan)
            for i, j in enumerate(r):
                if plan[-1][1] == j[0]:
                    plan.append(j)
                    del (r[i])
        route.append(plan)
    return(route)

The following code reads data from the CSV file and creates the distance matrix for the TSP problem.

# set no of cities
no_of_locs=6
data=pd.read_csv("tsp_city_data.csv")
full_data=data.iloc[0:no_of_locs,:]
d=full_data[['x','y']]
dis_mat=pd.DataFrame(distance_matrix(d.values,d.values),\
                     index=d.index,columns=d.index)
print("----------data--------------")
print(full_data)
print("-----------------------------")
plot_fig(d,heading="Problem Visualization")
plt.show()

2 Model TSP in ILP without Subtour elimination constraints
The TSP problem can be modeled as an Integer Linear Program.
The LP model is explained as follows Data N= Number of location including depot (starting point) Ci,j = Edge cost from node i to node j where i,j= [1…N] Decision Variable xi,j = 1 if solution has direct path from node i to j, otherwise 0 The LP model is formulated as follows The objective (1) minimize the cost of the tour. Constraints (2) and (3) ensures that for each node, there is only one outflow and inflow edge respectively. Thus, each node is visited only once. Constraint (3) restrict outflow to one’s own node. In this article, python’s PuLP library is used for implementing MILP model in python. PuLP is an LP modeler written in Python. PuLP can call a variety of LP solvers like CBC, CPLEX, GUROBI, SCIP to solve linear problems. It can be installed from the link “https://pypi.org/project/PuLP/”. The CBC solver is preinstalled in the PuLP library while one has to install other solvers like GUROBI, CPLEX separately to use in PuLP. In this implementation, CBC is used as LP solver. model=pulp.LpProblem('tsp',pulp.LpMinimize) #define variable x=pulp.LpVariable.dicts("x",((i,j) for i in range(no_of_locs) \ for j in range(no_of_locs)),\ cat='Binary') #set objective model+=pulp.lpSum(dis_mat[i][j]* x[i,j] for i in range(no_of_locs) \ for j in range(no_of_locs)) # st constraints for i in range(no_of_locs): model+=x[i,i]==0 model+=pulp.lpSum(x[i,j] for j in range(no_of_locs))==1 model += pulp.lpSum(x[j, i] for j in range(no_of_locs)) == 1 status=model.solve() #status=model.solver() print("-----------------") print(status,pulp.LpStatus[status],pulp.value(model.objective)) route=[(i,j) for i in range(no_of_locs) \ for j in range(no_of_locs) if pulp.value(x[i,j])==1] print(get_plan(route)) plot_fig(d,heading="solution Visualization") arrowprops = dict(arrowstyle='->', connectionstyle='arc3', edgecolor='blue') for i, j in route: plt.annotate('', xy=[d.iloc[j]['x'], d.iloc[j]['y']],\ xytext=[d.iloc[i]['x'], d.iloc[i]['y']],\ arrowprops=arrowprops) The optimal solution given by the LP model has subtours i.e Tour 1 : Delhi > Nagpur > Rajkot Tour 2 : Kolkata > Dispur > Agartala The solution given by the model has 2 tours but what required is the single tour that starts with the depot (Delhi) and visits all locations one by one and ends at Delhi. To solve this problem and to get the desired single tour, the subtour elimination constraints need to be added in the LP Model. There are 2 well-known formulations DSF and MTZ (named after their authors). This article covers both the ideas and the implementation in python. 3. MTZ Method for subtour elimination This formulation was proposed by Miller, Tucker, Zemlin. To eliminate subtours, continuous decision variables representing times at which a location is visited is added. Variable for all locations except depot node is added. ti= time at which location i is visited, i =[2,…N] Finally what is required are the constraint How does constraint (5) remove subtours ? Lets takes an previous example and take the subtour Kolkata (k) > Dispur(d) > Agartala(a) So adding constraint (5) will eliminate the subtour. The complete Lp model is formulated as follows Data N= Number of location including depot (starting point) Ci,j = Edge cost from node i to node j where i,j= [1…N] Decision Variable xi,j = 1 if solution has direct path from node i to j, otherwise 0 ti = time at which location i is visited , i =[2,…N] The LP model is formulated as follows The MTZ’s formulation is implemented in python as shown below. 
start_t=time.time() model=pulp.LpProblem('tsp',pulp.LpMinimize) #define variable x=pulp.LpVariable.dicts("x",((i,j) for i in range(no_of_locs) \ for j in range(no_of_locs)),\ cat='Binary') t = pulp.LpVariable.dicts("t", (i for i in range(no_of_locs)), \ lowBound=1,upBound= no_of_locs, cat='Continuous') #set objective model+=pulp.lpSum(dis_mat[i][j]* x[i,j] for i in range(no_of_locs) \ for j in range(no_of_locs)) # st constraints for i in range(no_of_locs): model+=x[i,i]==0 model+=pulp.lpSum(x[i,j] for j in range(no_of_locs))==1 model += pulp.lpSum(x[j, i] for j in range(no_of_locs)) == 1 #eliminate subtour for i in range(no_of_locs): for j in range(no_of_locs): if i!=j and (i!=0 and j!=0): model+=t[j]>=t[i]+1 - (2*no_of_locs)*(1-x[i,j]) status=model.solve() #status=model.solver() print("-----------------") print(status,pulp.LpStatus[status],pulp.value(model.objective)) route=[(i,j) for i in range(no_of_locs) \ for j in range(no_of_locs) if pulp.value(x[i,j])==1] print(route) plot_fig(d,heading="solution Visualization") arrowprops = dict(arrowstyle='->', connectionstyle='arc3', edgecolor='blue') for i, j in route: plt.annotate('', xy=[d.iloc[j]['x'], d.iloc[j]['y']],\ xytext=[d.iloc[i]['x'], d.iloc[i]['y']],\ arrowprops=arrowprops) print("time taken by MTZ formulation = ", time.time()-start_t) 4. DFJ Method for subtour elimination This formulation was proposed by Dantzig, Fulkerson, Jhonson. To eliminate subtours, for every set S of cities, add a constraint saying that the tour leaves S at least once. How does this constraint eliminate subtours? Let's take the same example and take a set Si= {kolkata, Dispur, Agartala} and the rest of the cities be represented by s′i={ Delhi(del), Rajkot(r), Nagpur(n)} Now as per constraint (15), the new constraint added is as follows since there is no edge going to any other node in this set (due to subtour), this equation is not satisfied for the set Si= {{kolkata, Dispur, Agartala}. So, by adding constraint (15), this solution becomes infeasible and all subtours will be eliminated. Modification in DFJ Method For N cities, the number of possible sets adds up to 2^n i.e the number of constraints grows exponentially. So, instead of adding constraints for all the possible sets, only some constraints are added. Given a solution to LP model(without having subtour elimination constraints) with subtours, one can quickly find the subset for which DFJ subtour constraint is eliminated. In the example above, one needs to add only 2 constraints and not 2^5 constraints. So, the higher-level algorithm is as follows Higher-level Algorithm for DFJ step 1. Solve TSP problem with LP formulation w/o Subtour Constraints step 2. If no subtour present in the current solution, goto step 6 step 3. Add subtour constraint only for the subtours present in the current solution. step 4. Solve TSP problem with newly added constraint. step 5. goto step 2 step 6. 
Return the final TSP solution The above-mentioned algorithm is implemented as follows start_t_1=time.time() model=pulp.LpProblem('tsp',pulp.LpMinimize) #define variable x=pulp.LpVariable.dicts("x",((i,j) for i in range(no_of_locs) \ for j in range(no_of_locs)),\ cat='Binary') #set objective model+=pulp.lpSum(dis_mat[i][j]* x[i,j] for i in range(no_of_locs) \ for j in range(no_of_locs)) # st constraints for i in range(no_of_locs): model+=x[i,i]==0 model+=pulp.lpSum(x[i,j] for j in range(no_of_locs))==1 model += pulp.lpSum(x[j, i] for j in range(no_of_locs)) == 1 status=model.solve() route=[(i,j) for i in range(no_of_locs) \ for j in range(no_of_locs) if pulp.value(x[i,j])==1] route_plan=get_plan(route) subtour=[] while len(route_plan)!=1: for i in range(len(route_plan)): #print(route_plan[i]) model+=pulp.lpSum(x[route_plan[i][j][0],route_plan[i][j][1]]\ for j in range(len(route_plan[i])))<=\ len(route_plan[i])-1 status=model.solve() route = [(i, j) for i in range(no_of_locs) \ for j in range(no_of_locs) if pulp.value(x[i, j]) == 1] route_plan = get_plan(route) subtour.append(len(route_plan)) print("-----------------") print(status,pulp.LpStatus[status],pulp.value(model.objective)) print(route_plan) print("no. of times LP model is solved = ",len(subtour)) print("subtour log (no. of subtours in each solution))",subtour) print("Time taken by DFJ formulation = ", time.time()-start_t_1) plot_fig(d,heading="solution Visualization") arrowprops = dict(arrowstyle='->', connectionstyle='arc3', edgecolor='blue') for i, j in route_plan[0]: plt.annotate('', xy=[d.iloc[j]['x'], d.iloc[j]['y']],\ xytext=[d.iloc[i]['x'], d.iloc[i]['y']], arrowprops=arrowprops) plt.show() #print("total time = ",time.time()-start) Compare MTZ’s Formulation vs DFJ’s formulation Since two approaches for subtour elimination have been discussed in this article, it's time to compare the two. MTZ’s approach introduces n² constraints (one for each pair (i,j) where i, j=[1..n]) while DFJ’s approach introduces subtour constraints for all possible sets of locations i.e 2^n for n locations. Thus, MTZ’s approach adds a polynomial number of constraints while DFJ’s approach introduces an exponential number of constraints. In terms of decision variables, MTZ approach introduces n new decision variables (titi for i =[1..n]).ON the other hand, DFJ introduces no new decision variable. MTZ’s approach has to be solved only once to get an optimal solution While DFJ is generally implemented as a modified version and it is solved iteratively ( i.e LP model has to be solved multiple times with new subtour constraints added every time). There is no clear winner among the two. For some problems, DFJ gives solutions faster than MTZ and for some problems, MTZ is faster. But DFJ has an efficient branch and bound approach due to which it becomes more efficient than MTZ. Also, MTZ’s formulation is weaker i.e the feasible region has the same integer points but includes more fractional points. Conclusion In this article, MILP formulation of TSP is explained with a special focus on subtour elimination approaches. TSP problem is a special case of Vehicle Routing Problem (VRP) with no. of vehicle equal to 1. But, subtour elimination is a core issue in VRP as well which is solved by using the same techniques. In this article, the TSP problem is solved for only 6 cities to simplify the explanation of subtour elimination. The CSV file uploaded in the Github repository contains data of 27 cities. One can try to solve the problem for more number of cities.
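As a final sanity check on either formulation, the same get_plan helper and distance matrix defined earlier can be reused to confirm that a set of selected edges really forms a single tour and to report its total length. The sketch below is not part of the original repository; the example route list is hypothetical and only illustrates the (i, j) edge format used throughout the article.

def tour_length(route, dis_mat):
    # Sum the cost of every selected edge (i, j), using the same
    # dis_mat[i][j] indexing as the objective function above.
    return sum(dis_mat[i][j] for i, j in route)

def is_single_tour(route, n):
    # get_plan splits an edge list into connected chains; a valid TSP
    # solution over n locations should come back as exactly one chain
    # containing all n edges.
    plan = get_plan(list(route))
    return len(plan) == 1 and len(plan[0]) == n

# Hypothetical 6-city solution, in the same edge format printed by the models:
example_route = [(0, 2), (2, 4), (4, 5), (5, 3), (3, 1), (1, 0)]
print("single tour:", is_single_tour(example_route, 6))
print("total distance:", tour_length(example_route, dis_mat))

A check like this is handy when experimenting with the DFJ loop, since it makes explicit whether the solver is still returning more than one subtour before the next round of constraints is added.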
https://medium.com/swlh/techniques-for-subtour-elimination-in-traveling-salesman-problem-theory-and-implementation-in-71942e0baf0c
['Aayush Aggarwal']
2020-12-06 22:35:02.606000+00:00
['Data Science', 'Artificial Intelligence', 'Optimization', 'Analytics', 'Travelling Salesman']
Stratis’ C# Smart Contracts: the first smart contracts which can be deployed in C#
The value for Stratis, over all other platforms, is in using C# / .NET and being able to leverage the powerful ecosystem that already exists there: Using Visual Studio to develop, compile and debug the code Easily decompiling CIL to C# source code Strong testing frameworks that run natively inside Visual Studio C# that behaves exactly as it would inside any other .NET application and thus can be audited by countless developers Entire companies based around the security auditing of C# code which have produced extensive list of software tools that can scan for certain kinds of conditions in the code (essentially, Stratis could have “code scanners” which can be applied or adapted to smart contract code) Well established best practices and technical tools that have been developed at corporations which can be used easily with Stratis’ contracts, and will actually MEAN something because the code will behave the way developers expect it to I could keep going. All of these (and many, many more) are only possible when running the CIL / CLR rather than a custom Virtual Machine. So far, no-one else has gone down this road with C# smart contracts. In fact, no-one has gone down this road with any smart contract offering. Not because they don’t want to, but because they can’t. Stratis is the first. NEO is perhaps the best known platform currently offering coding in C# for smart contracts. It is misleading to say “NEO has C# smart contracts” because NEO compiles the syntax of the language into their custom instruction set to run on their custom Virtual Machine (the NeoVM). The smart contract code that executes on the Virtual Machine is not .NET, it is NEO bytecode. Here’s a handy guide on whether your code is real C#: Does the C# code run like it would in other environments such as .NET web, console or mobile applications? Hence, can C# developers audit the code knowing it’s going to behave EXACTLY as they would expect? Can developers debug in Visual Studio natively? Can developers use other parts of the C# ecosystem such as decompilers? If the answer to any of these questions is no, then you’d be hard-pressed to convince someone your code is truly C#. When the smart contract code that executes on the Virtual Machine is not .NET, it is virtually impossible to guarantee that the C# code will behave in the same way as a real .NET web or console application. This means that very few developers can effectively audit what has been written. For Stratis, the smart contract code that executes on the VM is the same code that would execute in a web or mobile or any other C# application. (As an aside, theoretically any language that can be translated into CIL will also be supported for Stratis. This means native C# and F#, VB, and any other language where software has been written to compile to CIL will be supported for Stratis’ contracts.) Confidence in the security of smart contracts is very important for the adoption of the technology. To this end, Stratis have ensured that auditing and testing of their contracts can be easily done by a larger group of developers than ever before. This is because their contracts can be decompiled back into the C# source code. This is SO BIG for Stratis. For every single contract, nodes can natively display the C# source for auditing, instead of just the bytecode. No-one else has this functionality yet. Even Ethereum’s Solidity compilers like Porosity are very primitive. 
Stratis’ smart contracts can be developed, compiled, debugged and unit tested all in Visual Studio without any external tools. It is just .NET code running like any other .NET application: there is no need for compilation down to a different Virtual Machine / instruction set. This means that auditing of the contracts could be done by developers from a community which is quite literally millions strong.
https://medium.com/khilone/stratis-c-smart-contracts-the-first-smart-contracts-which-can-be-deployed-in-c-96448bd8671d
[]
2018-11-26 13:48:48.847000+00:00
['Stratis', 'Smart Contracts', 'Csharp', 'Stratiscommunity', 'Bitcoin']
Exoplanet K2–18b Could Have the Right Conditions for Life
It Lends an Air of Mystery… In the autumn of 2019, two teams of researchers each reported finding water vapor in the atmosphere of K2–18b. However, details of this alien atmosphere remained unknown, and conditions underneath this hydrogen-rich blanket were a mystery. Researchers at the University of Cambridge in England developed simulations of K2–18b, determining that this world is able to house large reserves of water beneath its dense atmosphere of hydrogen. "We wanted to know the thickness of the hydrogen envelope — how deep the hydrogen goes. While this is a question with multiple solutions, we've shown that you don't need much hydrogen to explain all the observations together," said Matthew Nixon, a PhD student at the University of Cambridge. Following the discovery of K2–18b in 2015, astronomers began studying the system in more detail. Just two years later, a second super-Earth was spotted orbiting the same star. That planet, K2–18c, is closer to its parent star than its companion. The star around which these worlds orbit, K2–18, is a small, cool, red dwarf, roughly 40 percent as massive and large as our own sun. This exoplanet straddles the line between a super-Earth (a larger version of our home world) and a sub-Neptune (a largely gaseous world). To determine the nature of the planet, the team created simulations based on data provided by the High Accuracy Radial Velocity Planet Searcher (HARPS) planet-finder in Chile. Mini-Neptunes are thought to consist of a solid core of rock and iron, covered by a layer of water at high pressures, surrounded by a thick atmosphere of hydrogen-rich gas. If this layer is too dense, it would raise temperatures of the water above the point where life as we know it is likely to survive. Machine learning was used to determine the mass of K2–18b. This value, combined with the radius of the exoplanet, allows researchers to calculate the density of the planet. This figure (around 2.67 grams per cubic centimeter — about half as dense as Earth) provides vital information on the composition of this world. A small change in the amount of hydrogen in the atmosphere of K2–18b could make a huge difference, as seen here, where changing the concentration of hydrogen by just 0.4 percent altered this exoplanet from an ice-encrusted water world to having a significant atmosphere. Simulation by The Cosmic Companion/Created in Universe Sandbox "With a bulk density between Earth and Neptune, K2–18b may be expected to possess a H/He envelope. However, the extent of such an envelope and the thermodynamic conditions of the interior remain unexplored," researchers wrote in a journal article published in The Astrophysical Journal Letters. If K2–18b possesses a thick atmosphere similar to Neptune, the chances of complex molecules developing there — never mind life — would seem remote. "In a hydrogen-rich atmosphere, the temperature and pressure increase the deeper you go. By the time the rocky core is reached, the pressure is expected to be thousands of times higher than the surface of Earth, and the temperature can approach 5,000 degrees Fahrenheit," Laura Kreidberg wrote for Scientific American in September 2019. This study suggests the exoplanet likely resembles either a rocky world with an atmosphere or a water world, covered with a thick layer of ice. Haze may be present in the atmosphere, although no evidence for such a layer was conclusively found by the team.
Significant quantities of water were found in the atmosphere of K2–18b, along with lower-than-expected concentrations of methane and ammonia. This anomaly could be, but is not necessarily, the result of life on that world.
https://medium.com/the-cosmic-companion/exoplanet-k2-18b-could-have-the-right-conditions-for-life-a81ab9020aba
['James Maynard']
2020-02-27 23:37:40.424000+00:00
['Astronomy', 'Physics', 'Science', 'Space', 'Chemistry']
Write for STORIUS
Hello and thank you for thinking of STORIUS! Please check out our About page to see if our magazine is the right fit for your materials. If it sounds like it is, let's talk. What We Are Looking For We cover a broad range of storytelling subjects, and at this point are particularly interested in the following topics (stand-alone articles and multipart series): Filmmaking (movies and TV) Emerging forms (serial fiction, interactive movies, chat stories, etc.) Storytelling in business (marketing, advertisement, sales, pitching, etc.) Storytelling tech Books and publishing Visual storytelling (graphic design and photography) Content monetization models Storytelling in gaming Psychology of storytelling Interviews with successful storytellers Events coverage (screenwriter summits, film festivals, book expos, etc.) We are always open to new ideas, so please do reach out with your proposals using the process described below. We want to hear the perspectives of people in various roles, including writers, filmmakers, editors, agents, casting directors, publishers, concept artists, and others (especially if they offer practical advice to our readers). We do not publish: Unsolicited stories that have been published elsewhere. Promotional posts. You are welcome to mention your business in your articles, but in order to be published, your content must offer value to our readers regardless of whether or not they click on your links. Unsolicited fiction and poetry. Things we like: Informed perspectives. We want our readers to hear expert perspectives. You don't have to be an award-winning storyteller to have your materials published with us, but you should have some expertise in the subject you are writing about. High quality. This applies to text and images alike. Please proofread your materials before submitting and include high-quality pictures and/or graphics. Unsplash, Pexels, The Stocks, and Burst are great sites to get free stock photos. We also highly encourage original photos and illustrations. Highly quotable content. While our home base is Medium, our content model includes promotion of our articles on multiple channels, such as Instagram (@storiusmag) and LinkedIn. The more quotable your content is, the more likely we will be able to use it to promote your article elsewhere. Concise but deep. The target size for our articles is 1.5–2.5K words, although we're open to shorter and longer pieces. We strongly prefer materials that provide valuable insights, while staying on topic. We are not interested in shallow or repetitive content.
Some logistics: At this point we accept both paywalled (via Medium Partner Program) and free stories, leaving the choice up to the author. We ask that once an article is published at STORIUS, it remains in our publication and is not published elsewhere on Medium. Author bylines should be 1–2 sentences long and can include up to 2 links (no embedded forms). Our typical response time is 48 hours. If you haven’t heard from us within 3 days after submitting a story, it is safe to assume that we won’t be publishing it. We reserve the right to edit content and graphics where necessary. Typical modifications we make include adding the standard footer/kicker, light copyediting, and as needed, adjusting the display title/subtitle/tags, and replacing the header image to fit Storius’ visual style. If that all sounds good, here are some simple steps for submitting your work for consideration: Becoming a contributor In order to publish your content at STORIUS you need to have a Medium account. If you already have one, simply submit a request to contribute via the form below. If you don’t have a Medium account and want to touch base with us before creating one, send us an email and include the following: A brief pitch for an article/series you want to write or topics you want to cover Your credentials as an expert in the space you want to write about Links to your published materials representative of your writing style Submitting an article If you are approved as a contributor, you can start submitting your articles (unpublished drafts only) via Medium’s process for adding a draft to a publication. We want STORIUS to have a very distinctive voice, so not every article, even if very well written, will be a good fit. Pitching your article in advance saves you time and ensures that your article is in line with our editorial vision. To pitch, send us a short email and we will get back to you shortly. We look forward to hearing from you!
https://medium.com/storiusmag/write-for-storius-d35fa172290
['Ray N. Kuili']
2020-09-07 08:14:16.314000+00:00
['Storytelling', 'Submission', 'Submission Guidelines', 'Magazine', 'Publication']
18 Things Your Editor Wants You to Know
As a writer, I know that hitting “send” can feel like throwing your work into a frightening void. Now, as a magazine editor, I’m the one behind the curtain. Over two years reviewing essay submissions, fielding pitches, assigning reported work, and editing accepted pieces, I’ve collected scraps of insider info that I wished I’d known before I became an editor. So — I give them to you. Here are my top 18 tips for writers: 1. I want you to succeed. Every time I open a new piece or pitch, I’m rooting for it to be great. It’s not just that it makes my job easier. (Though it does, immensely.) It’s just a delight to read good work — and that’s what our readers want, too. I’m on your side. 2. When pitching, express enthusiasm. Pretend you work for the publication and want to see it succeed and do great work. (You’re on our side, too, right?) Say, “I’m fascinated by this question, and I haven’t seen it covered. It might be a fit for your readers because of x.” (You already know that you should be quite familiar with the publication, right? Of course you do.) 3. Sometimes I don’t know exactly what I’m looking for. That is, until I see it. If you can anticipate what our readers will respond to, or what our coverage is missing, you’re golden. 4. Use a conversational (but polite) tone in email. If you email like a cold fish, you’ll probably write everything else like a cold fish too. See #5. 5. Add voice and life to essays and articles. This is the number one thing I cannot edit for. If you make a typo, I can fix that. If you are clumsy in using quotes, I can fix that. If you have a tangential sentence or paragraph, I can fix that. If you are boring, I cannot fix that. If you’re bored writing it, why would anyone want to read it? 6. Find the through line. Your piece should have a point. You don’t need to hit the reader over the head with it — please don’t — but your essay or article should have a driving or unifying thread. After reading the piece, the reader should feel that they gained something — and as the writer, you should know what that something is when you’re writing. 7. Sometimes what I really want to say is, “Your writing needs work.” I don’t say that, because between strangers over the medium of email, it sounds unnecessarily cruel. But if you felt puzzled while reading #5 or #6, or you’re not getting the response from editors that you want, it might be useful to find a writing class or a writing group to provide instruction and feedback. 8. Don’t say “I followed your guidelines; why wasn’t my piece accepted?” I have actually been asked this. Look, it could be literally any reason. Although I’d like to, I just don’t have time to give individual feedback. (And do you really want to know if my answer is #7?) 9. Don’t take edits personally. Everyone needs to be edited, even editors. At least three people after me look at the pieces I edit, and they always have additional edits. And they’re usually good ones, things that I missed and that make the piece better. Editing is just part of the process. 10. Editors make mistakes. Don’t argue or assume the worst if you don’t get paid or something — just ask politely. Mistakes happen. We’re human. 11. Be enthusiastic and pleasant. You’ll get more work. Because we are human, editors like nice people. 12. Don’t be sloppy. We’ve all forgotten to attach the document to the email, and we all make typos. No worries. 
But if you’re sending a piece that is in two very obviously different fonts (has happened) or that just dumps block quotes into a document in succession (ditto), don’t send it. 13. Be careful with previously published work. Don’t submit previously published work unless the publication accepts it, and you have clearly disclosed that. (Blogs count as published work. Small updates do not make a piece new.) 14. I don’t want to micromanage you. If you’re not sure what angle I want, ask “Would you like A approach or B approach? I prefer A b/c x, but y weighs in favor of B.” Not “Here are the quotes from my experts, which ones do you like?” (Yes, I’ve been asked this.) 15. Don’t use my personal email or social media to reach me. I’m just so overwhelmed already with messages from multiple venues. (Aren’t we all?) Exception: It’s fine to ask on Twitter if you can email me at work, and I’m happy to give you my email address. 16. Keep pitching. Even if you’re an established writer that I assign work to, feel free to pitch. Pitches make my job easier. Make them specific. Make them easy to say yes to, and inspire confidence in me. 17. Observe simple care and courtesies. Such as: If you must be late (really try not to be late) then always check in with the editor. Don’t expect me to keep track of your invoicing — it’s just as easy for you to look at your emails and invoices as it is for me. If I ask for two experts, don’t ask me if one is OK. If I ask for edits, send them back to me as a redline so I can easily see what you changed. Make my life easier — and I’ll like working with you. 18. Be curious. Ask good questions. Dig a little deeper. Take a true interest in the subject matter. Find the surprises. Share them with us. It’s hard for an editor (or a reader) to resist a curious writer. Sharon Holbrook is the managing editor of Your Teen for Parents magazine. Her writing has appeared in the New York Times, Washington Post, and many other publications. Find her on Twitter @sharon_holbrook.
https://medium.com/swlh/18-things-your-editor-wants-you-to-know-8d2ebb540393
['Sharon Holbrook']
2019-05-29 14:00:55.095000+00:00
['Publishing', 'Magazine', 'Newspapers', 'Editor', 'Writing']
Modern WordPress Development: You should throw an exception when you encounter a WP_Error
WordPress methods like wp_insert_user don't throw exceptions when they fail, but instead — and in some cases only if you tell the method to — return a WP_Error object. This class has all the hallmarks of error handling — messages and status codes, and so on — but, purely because exceptions aren't thrown, I'm not sure I can argue for its use when your code really depends on it. What's more, I'm not sure its use flies for folks who are just trying to write sufficiently simple code. Whether you're playing for low CRAP scores in your unit testing, or you're just trying to do good by your friends at work, leaning hard on the WP Error class means that features you build around critical points, like new user creation, are rife with if or switch condition checking — if they're not, you're just not doing your due diligence — which adds layers of unnecessary complexity. Although I don't go so far as to say WP Error sucks, I'm not the first to bring this up. This debate as to whether WP Error should throw exceptions has already been brought to the WordPress core community, who made — I think — a sane argument for keeping WP Error as-is. Wait, really? Yup. In short: WP Error predates PHP Exception Handling, so the WordPress Community at the time created its own solution that has since become convention — part of the WordPress way. There is another debate outside the scope of this writeup about how — or whether — the WordPress way is diverging from modern application development, if you're looking for some #hotdrama. Still, for my part, where I am leading a refactor of our internal but critical WordPress apps at WhereBy.Us, we're watching for returning WP_Error objects for one purpose: to throw that exception. Take a modern but real-world example where we are using WordPress as part of a larger service-provision stack, where we need to create a WordPress user with a smorgasbord of custom fields. A lot of cool stuff depends on this user creation event, so when it works we want high confidence that something didn't screw-up along the way. And why might something screw up? Well, in WordPress, only basic properties of a user live in the wp_users table in the database, while all additional, custom properties — called usermeta — live in another. What's more, unless you're inserting users into both tables directly (which I actually like, but we're not doing just yet), user creation via wp_insert_user and update_user_meta will make a ton of database queries and are points of failure that, if they bork, return a WP_Error.
Something like this could only partially succeed without throwing-up any red flags: $wordPressUserId = wp_insert_user($anArrayOfArguments); update_user_meta($wordPressUserId, $metaName1, $metaValue1); update_user_meta($wordPressUserId, $metaName2, $metaValue2); update_user_meta($wordPressUserId, $metaName3, $metaValue3); update_user_meta($wordPressUserId, $metaName4, $metaValue4); update_user_meta($wordPressUserId, $metaName5, $metaValue5); update_user_meta($wordPressUserId, $metaName6, $metaValue6); — so, you could check for an error with is_wp_error : $wordPressUserId = wp_insert_user($anArrayOfArguments, true); if (is_wp_error($wordPressUserId){ /* do something */ } $result1 = update_user_meta($wordPressUserId, $metaName1, $metaValue1); if (is_wp_error($result1) { /* do something */ } $result2 =update_user_meta($wordPressUserId, $metaName2, $metaValue2); if (is_wp_error($result2) { /* do something */ } $result3 =update_user_meta($wordPressUserId, $metaName3, $metaValue3); if (is_wp_error($result3) { /* do something */ } $result4 =update_user_meta($wordPressUserId, $metaName4, $metaValue4); if (is_wp_error($result4) { /* do something */ } $result5 =update_user_meta($wordPressUserId, $metaName5, $metaValue5); if (is_wp_error($result5) { /* do something */ } $result6 = update_user_meta($wordPressUserId, $metaName6, $metaValue6); if (is_wp_error($result6) { /* do something */ } This becomes exponentially more complex if you want to respond to different errors in unique ways, or want to delete the user and its custom properties that were successfully created up to the point of a failure. We’ve addressed this by writing our own WordPress wrapper methods (where we need to) and throwing exceptions if the return value is a WP_Error. First, our wrapper for wp_insert_user : private function insertUser(string $email, $firstName, $lastName) : int { if (!function_exists('wp_insert_user')) { return -1; } $wordPressUserArguments= array( 'user_email' => strtolower($email), 'user_login' => strtolower($email), 'user_pass' => null, 'first_name' => $firstName, 'last_name' => $lastName ); $results = wp_insert_user($wordPressUserArguments, true); if (is_a($results, 'WP_Error')) { throw new \Exception(); } return $results; } — and our wrapper for update_user_meta : private function updateUserMeta($id, $metaName, $metaValue) : int { if (!function_exists('update_user_meta')) { return -1; } $results = update_user_meta($id, $metaName, $metaValue); if (is_a($results, 'WP_Error')) { throw new \Exception(); } return $results; } Note: if you want wp_insert_user to return any error at all, you need to pass the boolean true as its second argument. Otherwise — I think, I’m not looking it up right now — it returns just a 0 . Idea: you could totally pass the message that comes back with WP_Error as the exception message. So, now, because our insertUser and updateUserMeta methods return exceptions, we’re able to lean-in on basic PHP Exception Handling and move our creation attempt into try / catch blocks. 
public function createNewMember(MemberDto $dto) : int { $email = $dto->wordPressUserEmail; $firstName = $dto->wordPressUserFirstName; $lastName = $dto->wordPressUserLastName; $response = null; try { $response = $this->insertUser($email, $firstName, $lastName); $this->updateUserMeta($response, 'someValue1', $dto->someValue1); $this->updateUserMeta($response, 'someValue2', $dto->someValue2); $this->updateUserMeta($response, 'someValue3', $dto->someValue3); $this->updateUserMeta($response, 'someValue4', $dto->someValue4); } catch (\Exception $e) { $response = -1; /* or do something more interesting */ } return $response; }
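Following the earlier aside about passing the WP_Error message into the exception, here is a minimal sketch of that idea. It is not from the original codebase: the helper name throwIfWpError is hypothetical, and the only WordPress APIs it assumes are the core is_wp_error() function and the WP_Error methods get_error_message() and get_error_code().

private function throwIfWpError($result)
{
    if (is_wp_error($result)) {
        // Surface the original WordPress failure reason instead of an empty exception.
        $code = $result->get_error_code();
        throw new \Exception(
            $result->get_error_message(),
            is_numeric($code) ? (int) $code : 0
        );
    }

    return $result;
}

// Hypothetical usage inside a wrapper like insertUser():
// return $this->throwIfWpError(wp_insert_user($wordPressUserArguments, true));

Keeping the conversion in one place means each wrapper stays a one-liner, and the exception message you catch in createNewMember tells you which of the many database writes actually failed, which is useful for logging.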
https://medium.com/the-metric/modern-wordpress-development-you-should-throw-an-exception-when-you-encounter-a-wp-error-81fb82275cbd
['Michael Schofield']
2018-09-01 10:37:44.770000+00:00
['PHP', 'WordPress', 'Programming', 'Development']
Upright and Truthful
BY TAMIKA DUNKLEY Throughout my scholastic career, I’m praised for being a “Wonderful straight-A student.” I’m the student body president and the leader in multiple choirs and bands. I spent six to seven days at church. I attend every bible study, worship service, and benefit. From a young age, I’m taught to walk with a certain persona; there was a level of professionalism and grace I’m expected to carry myself with. No matter how I feel or who I really want to be. Who I am doesn’t matter, as long as my family appears to be perfect. So much so that people began to label us as the “Sultan Family Show.” Tamika Dunkley But now, I’m 15 years old, sitting in the car, parked behind the church. I take a deep breath. I look down at my belly and then quickly into the rearview mirror. I shift my focus from myself to the double-pane doors. Through the stained glass, I can see all the people attentively watching the pastor at the pulpit. I want to go back home. I have avoided this moment for several months, and the internal build-up is all but unbearable. I can no longer hide. I know public shaming is in store for me, but I’m praying it won’t be as bad as I expect. As I slowly climb up the steps, holding myself up on the banister. Our eyes meet. I see the look of shock and disappointment immediately sweep over his face. “How far along are you?” the Deacon asks. I say, “I’m 7 months.” I take a deep breath and brace myself for this long journey. I somehow survive the next few months and triumph over the scorn from all the people in my life. Being the daughter of a pastor or any prominent person in the community is not easy. When you grow up surrounded by the church and find yourself pregnant out of wedlock, that is shameful enough. But being a pregnant teenager makes it even worse. People feel free to comment on any of the challenges in your life. It is as though one mistake makes everyone feel they all of a sudden know your story. I am well aware of the hypocrisy. I know all of their deepest secrets, the things they struggle with, the things they go to counseling for. You see, all pastors talk to each other behind closed doors, or not so closed doors. I wonder if they know that their secrets are not so secret anymore. All that doesn’t matter anyway; they all felt they have a right to pass judgment on my life. Now, I’m “no good,” “troubled,” and “a bad influence” to their children. Now the same people my life has been built around see me as less than, incompetent, and label me as fast. “It’s an epidemic,” people say. “Why would you do that?” As if I don’t know your husband was trying to take me home with him at the gas station just a few months prior. As if I don’t know you are buying drugs right after church. As if your daughter isn’t the one who encouraged me to do what she was doing. I’m not the type to confront people. I’m not going to fight back. I’m not going to “put their business out there.” The truth doesn’t matter. I allow others to create my history as the month's pass and my belly grows. I remove myself from the line of fire; I stay to myself and, most importantly, away from them. Finally, a few months later, I walk back into the church with my baby, only a few weeks old. I take the front entrance. I’m trembling inside but do my best to muster up false confidence, prepared to confront the naysayers’ and so-called “do-gooders.” As soon as I enter, a woman I have quite literally known my whole life pulls me in. She brings me to the front of the stage and holds my hand. 
I feel every cell in my body pulsing. I can’t think and can barely stand. I know she feels the sweat on my palms; she grips my hand. The baby feels like a bag of cement in my arms. “Dear Lord, I pray for your heavenly protection for Tamika. I pray that you cover her and keep her from her path of sin. I pray that she will no longer be living as a sinner but will keep the trials of the flesh far from her. She will no longer live in disgrace from you…” She continues, but I tune her out and keep my eyes closed as I feel compelled to blurt out her truths. That her daughter, only slightly older than me, has done much worse than me and worse than she could ever imagine. The difference is I was not covering up, shying away from my mistakes. I’m facing them head-on. She’s a white woman with a Black husband, just like my parents. There was always some underlying competition that they subjected me to, and now I have provided the fuel she needs to shame me publicly. After this public berating ceases, I stare out in front of the sea of faces; over 300 people are in attendance that morning. I go to the bathroom, and I stare at my son until the only thing I see is the outlining of his head through my tears. I let him know that I am not going to be what they condemned me to be. I will rise above that. I see nothing but love and innocence, staring back at me, and I feel peace. At that moment, I decide that anyone who wants to tell me he is anything but a blessing doesn’t have my best interest at heart. My whole world becomes him. But despite my promise to him and myself, I struggle. I struggle through school. I’m still an above-average student- but who is going to watch the baby? How am I going to get to and from school? How am I going to provide food for him as he grows? Although my parents try to support me in their own way, I can no longer conform to their right idea. I can no longer be subjected to the phony understanding or perception of who God is or who I am. They have to be who they are, and I have to be myself, and as a result, I find myself homeless a little over a year later. And, because I was so young, there are no services available to me. I walk out of the DSS building, pushing my one-year-old child in the middle of winter with nowhere to go. I find a coffee shop half a mile away, and I use my last $1.25 to buy a cup of tea so I can sit there. That semester, I stay in 9 places and somehow manage to attend and pass all my classes. I know that if I’m to fulfill my promise and create any resemblance of stability in my son’s life, I have to provide financially. I study nursing, a profession I never wanted, but it would enable me to care for my son with a 2-year degree, so at 19 years old, I officially become a Registered Nurse. I excel within my studies, attaining a bachelor’s degree less than a year later and being placed in a supervisory position within that same year. I named my son Amitai —it’s Hebrew meaning is upright and truthful. I named him for the purpose I saw in his life before he was born. Now, a teen himself, an avid reader enrolled in the early college program, is living up to his powerful name. With my family's support, I have been able to rekindle my dreams and pursue my own purpose. I continue to work as a nurse from time to time, but it’s not because I have to; it’s because I want to. My husband and I own a successful food company. 
Because of this, we’ve been able to create a non-profit organization that allows us to help other people change their life narratives and pursue their dreams. Over the years and through many trials and tribulations, I have developed my own relationship with God, the universe, our ancestors, whatever name you choose. But most importantly, I have found myself.
https://medium.com/black-stories-matter/upright-and-truthful-d1211d2799b5
['Tmi Project']
2020-12-11 16:24:31.743000+00:00
['Social Justice', 'Storytelling', 'Race', 'Motherhood', 'Black Stories Matter']
What Two of the Greatest History Writers Can Teach Us About Humanity
Lesson one: humanity isn’t terrible “Behind the red façade of war and politics, misfortune and poverty, adultery and divorce, murder and suicide, were millions of orderly homes, devoted marriages, men and women kindly affectionate, troubled and happy with children.” — The Lessons of History, Ariel and Will Durant By looking through the pages of history, you probably get the idea that most people are terrible. How can you not? You can’t make it far through any book on the topic without stumbling on war and scandal. Moreover, you continuously find examples of people doing awful things to each other. But is that really the history of humanity? After fifty years of research and digging through thousands of years of history, the Durants believed it’s more complicated than that. Often the “historian” records interesting things, not the mundane. They sum it up as follows: “History as is usually written is quite different from history as usually lived. The historian records the exceptional because it is interesting.” Think about it in our world today. How often do you see good news listed in the headlines? At least for me, it’s rare. Murder, death, theft, and generally awful behavior capture the prime spot. If it bleeds, it leads. The Durants believe this isn’t a new tactic; it’s always been this way. Logic dictates that if society is really as bad as the news reports, it would have crumbled to the ground by now. The Durants feel the same about history. War, death, and bad behavior are more interesting to write about than the normal boring, cohesive functions of society. The Durants also list examples of charity and good works, but they involve well-known figures. With this in mind, they remind us to think about how many instances of benevolent behavior aren’t recorded because they were either lost or too boring. Lesson two: conservatives and radicals both serve a purpose “…The conservative who resists change is as valuable as the radical who proposes it…It is good that the old should resist the young and that the young should prod the old. Out of this tension…comes a creative tensile strength…” — The Lessons of History, Ariel and Will Durant In our modern society, the new usually catches our imagination. We always have our eyes on the next technology that will change our world. Those who question or oppose the “new way” of doing things are often looked at as backwards luddites. The Durants challenged this assumption and stated that both are necessary. Their words remind us that “out of every hundred new ideas, ninety-nine or more will probably be inferior to the traditional responses which they propose to replace.” Often, the forces that shape the parts of society we take for granted are part of many years of experiments. The Durants remind us that the world that surrounds us looks much different as a youth compared to the shape it takes when you get older. Therefore, giving in to all the early desires in your life can cause chaos for you later on, and the same goes for society. They go on to say: “No one man, however brilliant or well informed can come in one lifetime to such fullness of understanding as to safely judge and dismiss the customs or institutions of his society. For these are the wisdom of generations after centuries of experiment in the laboratory of history.” Will Durant himself in an interview reviews his personal story. He had a troubled relationship with his church, eventually being excommunicated in his early life. 
However, his thoughts on religion changed in his later years and he believed it to be an integral part of society. The Durants believed the struggle between the radical “changers” of society versus its conservative “keepers” makes the world better. The conservatives force the radicals to jump through hoops to prove their ideas to be truly worthy. Inversely, the radicals force conservatives to challenge their beliefs. Both groups are necessary and valuable. Lesson three: virtue and vice change
https://medium.com/history-of-yesterday/what-two-of-the-greatest-history-writers-teach-us-about-humanity-6fefd96505e4
['Erik Brown']
2020-10-21 14:02:53.225000+00:00
['Inspiration', 'History', 'Books', 'Life Lessons', 'Culture']
Career Opportunities For Artificial Intelligence Professionals In Hyderabad and Bangalore
India has attracted global attention because of the prevalence of Information Technology. India is contributing its engineering services to top-notch companies including Microsoft, Volvo, IBM, Cisco and many more. The IT sector contributes a major percentage of the country's gross national income. Hence, there is huge scope for the latest technologies in India. India is definitely a hub of information technology, providing world-class technological solutions for many multinational companies across the globe. The revolution created by Information Technology in India has resulted in the conversion of an agricultural economy into a knowledge-based economy. It is quite astonishing to trace the evolution of technology. No one would guess at a glance that modern ultrabooks trace back to the calculator invented by the French mathematician Blaise Pascal. Technology evolves substantially with time. It is inevitable. Graham Bell's telephone has turned into the modern smartphone. No one saw that coming. We can be certain this change will be relentless and lead to new inventions in the near future. Although Artificial Intelligence has become a buzzword these days, its roots go back to the 1950s. Data scientists have invested a significant amount of time and effort that led to advancements in artificial intelligence. Many companies are after artificial intelligence as it surpasses the performance of existing technologies. This demand appeared out of the blue. Thus, companies have been struggling to find appropriate talent for these projects. Besides, most college graduates and experienced professionals are pursuing artificial intelligence jobs. It is apparent that these jobs pay much better than existing software positions. In fact, the field of artificial intelligence offers highly paid job roles in machine learning, deep learning and other sectors. Multi-Faceted applications of Artificial Intelligence to a wide range of industries: Future in Artificial Intelligence There is a notion that the application of Artificial Intelligence is restricted to the IT sector. Contrary to popular belief, the application of artificial intelligence can be seen in a wide range of industries. Let's see how artificial intelligence is applied in multiple industries for versatile applications. 1. Banking Sector Security has become a key aspect as people are opting for online banking. It is critical for any bank to prevent fraudulent transactions to ensure consumer security and build trust. Artificial Intelligence makes it easier to prevent fraudulent transactions. 2. Health-Care Industry Applications of AI in Health-Care Artificial intelligence assists doctors in producing accurate reports about the diagnosis of a patient. Besides, artificial intelligence draws accurate information about how the human body is receiving treatment. 3. Gaming Industry It is undeniable that artificial intelligence has been used in gaming. It has been used extensively since the inception of major gaming consoles like Xbox and PlayStation. Although the applications of artificial intelligence are currently seen at a fundamental level, experts anticipate a steep decline in jobs that require manual labor as they are replaced by artificial intelligence. Although the Indian IT sector is well advanced, its accessibility is limited to cities rather than the countryside.
Bangalore and Hyderabad are two prime contributors to the flourishing Indian IT sector. The Future Scope for Artificial Intelligence and Machine Learning Courses in Bangalore Until the 1980s, Bangalore was called Retiree's Paradise. With the inception of IT in Bangalore, people migrated to the city in pursuit of good careers. Bangalore is recognized as the IT capital of the country and is also known as the Silicon Valley of India. Bangalore is home to many multinational as well as small-scale IT companies. Bangalore alone accounts for 2.5 million IT professionals, who deliver some of the highest-rated IT exports in the country. Bangalore is also the headquarters for many top-rated multinational companies such as Oracle, Infosys, Accenture, Capgemini, Cognizant, Wipro, TCS, IBM and many more. In Bangalore alone, there are around 2000 openings for various positions in Machine Learning and Artificial Intelligence. These statistics make it apparent that there is an enormous demand for artificial intelligence. Hence, taking up an Artificial Intelligence and Machine Learning course in Bangalore at this time makes good sense. Considering the increasing demand for several job roles in the field of artificial intelligence, many institutes in Bangalore are offering Artificial Intelligence courses with guaranteed placements. The Emerging Demand For Artificial Intelligence and Machine Learning Courses in Hyderabad Artificial Intelligence Demand in Hyderabad Similarly, Hyderabad is well known among college graduates. Many graduates move to Hyderabad to hunt for jobs and to learn new technologies at good institutes. As it has become obvious that artificial intelligence is the most sought-after technology, graduates are seeking out the best institutions for Artificial Intelligence in Hyderabad. Considering the IT growth in Hyderabad, people have taken to calling the city Cyberabad. The economy of Telangana has become self-reliant post the bifurcation of the state from Andhra Pradesh. K Taraka Rama Rao, the IT minister of Telangana, has established a T-Hub in Hyderabad to encourage start-ups that will aid in generating revenue for the state. This has increased the number of IT jobs in Hyderabad. It is estimated that there are more than 1500 Artificial Intelligence job vacancies available in Hyderabad. Conclusion:
https://medium.com/my-great-learning/career-opportunities-for-artificial-intelligence-professionals-in-hyderabad-and-bangalore-81424702ed9e
['Great Learning']
2019-09-19 04:43:23.737000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Career Advice', 'Future Technology']
The Lucky Ones: Betty’s Story
BY BETTY MACDONALD As an 18-year-old girl in the early fifties, I possess very little knowledge of my body or reproduction. It will be twenty years before The Supreme Court passes Roe v Wade into law, making abortion legal. Not only is abortion illegal, so is contraception in many states. Not until 1974 does contraception become legal for unmarried couples. I know where babies come from. That’s about it. For a long time, I believe I am too skinny and too anemic to get pregnant. It’s my magical thinking. Betty MacDonald, 1956 After college, writing radio copy and hosting an afternoon disk jockey show on the local NBC affiliate, I spend a year in my hometown. I have total control of what I say on-air and what I choose to play, but I’m not allowed to touch the turntable or the microphones because I’m female. My co-host Charlie operates the control board. After a year on air at the radio station and evenings working backstage and performing in the newly created theatre at the Virginia Museum, I take the night train to New York City, intent on studying acting. In the Village, I become part of a group of struggling actors, artists, writers, and musicians who hang out in the coffee houses on Bleecker and MacDougal Streets. My boyfriend Joel was celibate for six years in self-imposed penitence for impregnating his first girlfriend when he was sixteen. When we get together, I almost immediately miss my period. The theory is Joel’s years of celibacy have intensified his potency. His super post-celibate sperm has overcome my magical thinking. I am pregnant. Dr. C., my primary care provider, upon hearing my plight, places me in the care of his loyal and knowledgeable nurse. In turn, she puts me in touch with an abortion provider, a doctor who, after losing his medical license for preforming the illegal procedure, makes his living renovating apartments. Joel, always the gentleman and the only man in our crowd with a steady job foots the bill — $500 cash. The operation will take place in my third floor Greenwich Village walk-up. My friend Claudette, a young woman toughened by her childhood in Nazi-occupied France during the Second World War, offers to be with me. The doctor, having climbed the three flights, appears at my door slightly disheveled. At first, he surveys the apartment with a contractor’s eyes and offers some suggestions for possible renovations before unpacking his physician’s bag. I lie down on my green and beige enameled kitchen table. There are no stirrups, so my legs and butt are placed in a mesh harness. The doctor considers anesthesia or pain medication too risky. I undergo the procedure without them. Claudette faithfully holds my hand as promised. She doesn’t freak out when I start screaming and talks me through the excruciating process. For weeks afterward, I bleed and feel faint. I continue to date Joel and quickly get pregnant a second time. I don’t know how to prevent it. This time the abortion doctor gives me instructions to meet him at an apartment in one of the many high-rise complexes in Queens. Another friend, Lorraine, offers to drive me, but because my instructions are to arrive unaccompanied, she waits for me in the car. Before going in, I swallow a pill Dr. C. has given me to lessen pain. I take the elevator to the sixth floor. Just as I am about to press the buzzer at the designated apartment, a door on the other side of the hall pops open. The doctor pokes his head out and calls to me urgently in a hushed voice. 
Once inside and on the table, I attempt to take a second pill Dr. C. has prescribed, but my abortionist stops me. He’s not taking any chances. Fortunately, the first pill removes me from the pain’s immediacy, although it doesn’t eliminate it. I experience the agony repeatedly, but this time it’s as if it’s from a distance. When it’s over, Lorraine drives me back to West 10th Street. I break up with Joel and feel restored. I vow to be more careful in the future.
https://medium.com/tmi-project/the-lucky-ones-bettys-story-75ef16cd5e27
['Tmi Project']
2020-12-16 21:41:01.373000+00:00
['Storytelling', 'Abortion Rights', 'Reproductive Justice', 'Abortion', 'Reproductive Rights']
How My Consciousness Has Affected My Relationships With White People
I never believed racism had different levels of offensiveness. For me, either you’re racist or you’re not and any derogatory comment or action is equally problematic. A Klansman burning a cross on a Black person’s lawn carries the same weight as a co-worker telling a racist joke. This is one area of my life where I have never allowed a grey area. Over the past few months, it’s become clear that a few White people in my life do not share these views. During our conversations, some have expressed that making a “racist comment” doesn’t necessarily define someone as racist if they’re “just joking” or if they don’t use the “N” word. So as long as it’s just a joke and they don’t call me a nigger, how could they possibly be racist? Because after all, everyone knows these are the only two ways someone can express racist tendencies, right? Um…okay. The more I speak out, write, and get involved with organizations devoted to securing racial equality and promoting the humanization of Black bodies, the further the divide deepens between myself and the White people I once considered my acquaintances and friends. They stopped asking me how they could help combat injustice. They even let go of the awkward “how are you feeling” calls and text messages. Honestly, I could do without the forced inquires about my well-being, but what once seemed like genuine concern for the liberation of Black people has been exposed for what it really was: momentary White guilt. These people who knew me and some of the obstacles I’ve worked to overcome felt an obligation to pay lip service to my plight of being a Black woman in this country. They delivered award-winning performances of feigned interest for as long as they could. After all, they weren’t completely oblivious to the fact that they benefit from White supremacy, even though they wouldn’t dare say so out loud. Instead, their guilt played out not by readily accepting and acknowledging the racial inequities in this country, but by asking me if I was okay and what they could do to help. Checking on your Black friend and at least asking what could be done to advance the movement surely meant you weren’t racist, right? It was evident that I could no longer continue to live in a bubble of delusion. The more White people in my life began to distance themselves from conversations related to anti-racism work, the more I knew our relationships with each other were never as they seemed. I had been there for their life events and supported causes that mattered to them but when it was my turn, reciprocity was nowhere to be found. It didn't matter how long I had known these people or how kind they might have been in the past. The bottom line was they couldn't comprehend that as long as racism still exists, speaking, writing, and joining groups formed to address it and find solutions would never have an expiration date. If they couldn’t understand that, I could no longer be vulnerable in their company. I couldn’t be myself around them, which would mean stifling my views and beliefs. Their admission that some racism was okay simply because it wasn’t identical to the “in your face” hatred touted by right-wing extremists confirmed they had no idea of the impact microaggressions have on Black people and people of color. Ultimately it meant they were unable to step outside the comfort zone White supremacy afforded them and their families to help dismantle it for the protection of me and mine.
https://medium.com/illumination-curated/how-my-consciousness-has-affected-my-relationships-with-white-people-93af6fe1272e
['Jeanette C. Espinoza']
2020-11-03 02:55:38.253000+00:00
['Activism', 'BlackLivesMatter', 'Equality', 'Racism', 'Writing']
The anatomy of a credit card form
Paying for something online with a credit card is simple, right? Yes and no. Yes, because we’ve been doing it since the early days of the Internet (e.g. Amazon), and no, because no two credit card forms are alike. Over the past 20 years, we’ve built a mental model of paying online: I pull out a credit card from my wallet, enter the card details into a web form, and click a submit button. But getting from A to Z can be a tricky journey, riddled with questions the user has to answer. And obviously, nobody wants an instruction manual. Credit card forms from various popular websites and apps. Paying for something online is still 2–3x clunkier than paying in-person. Nothing beats tapping/swiping your card at a physical terminal. Zero typing required. You don’t even care about the information printed on your card. In a magical world, you could tap your card on your monitor to buy a swag new T-shirt from your favourite band. Or get rid of physical cards altogether, so you don’t have to pull out a card from you wallet. We’ve made significant progress in the physical world with Apple Pay. Online, we’re getting closer. Paying online will become easier, and faster. The latest HTML spec includes specifications for credit card inputs, and browsers are pushing the boundaries. Chrome 42+ supports autocomplete. Safari supports credit card autofill. But you still need a physical card to manually input your security code with every transaction. But before credit card forms become a thing of the past, we still have the present-day task of adding clarity, simplicity, and security to the credit card form. At Wave, our Invoice product enables business owners to create and send invoices to their customers, and to have those invoices paid via credit card. My job was to design the credit card form, given a set of business requirements and constraints. This post is about the design considerations our team explored to arrive at the finished product. The Wave credit card form Our goal was to make sense of all the various inputs and questions a user may have, including: What payment cards are accepted? Deciding how much to pay Name on card Card number Card type being used Expiry date Security code Why is there a ZIP code? Is this form safe and secure? What happens when I click submit? Handling card errors Designing for different screens 1. WHAT PAYMENT CARDS ARE ACCEPTED? When the customer is presented with the credit card form, one of their first questions is “Is my credit card accepted”? This behaviour is the virtual equivalent of the physical-world scenario. When you’re at a physical counter ready to pay, you look for stickers to indicate the cards supported. So the common way to answer the question is by using credit card logos. But, where do you place the logos on a web form? At first we tried to place them above the card number field. This placement reduces the height of the form, but the cards are small and look squished. Another option was to place them inside the input. This almost worked, but because we have a narrow input field, the cards took up too much space, overpowering the input. We decided to place the credit card logos at the top of the form. This placement makes them immediately visible, because they are the first element a user has to parse visually. The user doesn’t have to search for them. And they promptly call attention to themselves with a “this is where you pay” message. We felt the logos alone were sufficient, so we did not add labels for “Cards accepted”, or “Pay with”. 2. 
DECIDING HOW MUCH TO PAY One requirement we had to satisfy was allowing the user to decide how much to pay. For large invoices, a customer may need to make a partial payment (e.g. a deposit), or pay the invoice with multiple payments as the work is completed. By default, the payment amount equals the total unpaid invoice amount. In other words, if a partial payment has been made, the payment amount equals the balance owed. With web forms, we know that more inputs lead to lower completion rates, and higher bounce rates. To reduce the number of inputs, we show the payment amount in a read-only format, with a button to edit it, instead of displaying the text input by default. In edit mode, we considered having a Save or Done button, that would flip the input back to read-only. But we felt this was unnecessary, since the amount is already visible inside the input. Also, if the customer wanted to edit the amount again, they could simply change the input value, without having to deal with Edit/Done buttons. We also wanted to confirm the payment amount when the user is ready to submit the form. A confirmation reassures the user of the amount that will be charged to their credit card. We display the payment amount inside the Pay button, at the bottom of the form. This amount updates synchronously with the amount entered in the payment amount input. 3. NAME ON CARD The next thing we ask the user to provide is the name of the credit card owner. We considered several options for the label text: Name of card holder Card holder name Name on card Name (as it appears on your card) Full name on card We felt that Name on card was the shortest and clearest way to ask for this input. This asks the user to simply type exactly what’s displayed on the card, instead of thinking about the card owner’s full or abbreviated name. 4. CARD NUMBER When reaching the card number input, a common question that a user asks is: “My card number has spaces. Do I enter my card number with spaces, or without?” To solve for this, we limit the input values to numbers only, so 0–9. So if a user types a space, it does not register and it does not affect the number format. At first, we wanted to mask the card number when the user leaves the input. This was an attempt to provide the user with a sense of security, similar to how password fields are masked. But we realized that the credit card number is not a “secret”. You can’t do much with just a card number. Furthermore, when a user is ready to submit the form, they may want to double-check their inputs for accuracy. A masked field would break the visual review of the form because the user would have to put the focus back on the card input to reveal its value. 5. CARD TYPE BEING USED A helpful pattern we noticed in other payment forms is to indicate the card type being used in a visual way. This reassures the user that the card type input matches the card they are holding in their hand. We can determine the card type from the starting first number, as follows: 3 — Travel/entertainment cards (e.g. American Express and Diners Club) 4 — Visa 5 — MasterCard 6 — Discover Card After the user enters the first two numbers, we display a card logo inside the input field, floated to the right. Of course, we could’ve done this differently, based on our designs from question 1: Dim out the credit card logos at the top of the form. But because the logos are placed away from the number input, the correlation would not be clear. 
Place all the credit card logos inside the number input by default, then as the user types in the first two numbers, all the card logos disappears except for the one that corresponds to the input. 6. EXPIRY DATE Most credit cards display their expiry dates in the format MM/YY (month and year). Some may include the full year, in an YYYY format. When designing the expiry date input, we wanted to keep the user in typing mode to speed their input. The user does not have to reach for a mouse to pick a date and year from a select menu, or navigate the options via up/down arrows. The user simply has to type in the numbers as they appear on the credit card. This also prevents the user from having to think of the actual month (e.g. 08 is August), so cognitive load is minimized. Because this input requires a particular format for the date, we included placeholder text inside the input. Note that the placeholder text includes a “/”, but this is not required to be typed by the user. We limit the input value to numbers only, so if a user does type a forward slash, it is not registered. After the month is entered, the slash is automatically appended. 7. SECURITY CODE The card security code was invented to reduce credit card fraud. In other words, it’s meant to make cards more secure. The problem is that this code suffers severely from non-standardized naming. What should we call it? Every card brand has its own naming convention: MasterCard — card validation code (“CVC2”) Visa — card verification value (“CVV2”) Discover — card identification number (“CID”) American Express — “CID” or “unique card code” Debit Card — “CSC” or “card security code” And there are even more permutations: Card verification data Card verification number Card verification code Card code verification Nuts, right? Acronyms create confusion. We wanted to stay away from them, but still indicate to the user that this code is all about security. So we decided to name this input “Security code”. Next, a security code can be 4 digits (American Express, on the front of the card) or 3 digits (every other brand, on the back of the card). To help the user determine which code they need to enter, and where to find it, we included a visual tooltip. The tooltip has 3 states: Dual code: If the user has not yet entered a card number, the tooltip shows both options available. 4-digit code: If the user has entered an American Express card, the tooltip indicates a 4-digit code on the front. 3-digit code: If the user has entered any other card, the tooltip indicates a 3-digit code on the back. 8. ZIP CODE As an extra security measure, we have to ask customers for the ZIP code associated with their card. There is a trade-off here: adding extra inputs to the form can increase bounce rates, but by adding it, our business is more secure and less prone to fraud. We realized that users may enter the ZIP code associated with their personal address, instead of the code associated with their cards. To add clarity, we added a note in a tooltip, which asks for the code from the credit’s card billing address. US zip codes contain only numbers, up to a maximum of 10 (ZIP + 4 FTW). In Canada, zip codes contain letters, and spaces too. We restricted the input field to a max character count of 10. And because we had to satisfy naming conventions for both US and Canadian customers, the input label reads “ZIP/Postal code”. 9. IS THIS FORM SAFE AND SECURE? When a user first skims a credit card form, they often ask themselves “Is this form secure? 
How do I trust the website behind this form? Are they just spoofing my card details?”. There are many ways you can reinforce security through design. Some options we considered included: Place a lock icon inside the form header, next to “Pay Invoice”, but this felt weak and disconnected from the form inputs. Place a lock icon inside the card number field, but the question became “Is only this input secure, or is the entire form secure?”. Label the Pay button with text “Pay $1.00 securely”, but the text would not fit for large payment amounts. Add a security badge below the form, but we felt badges distract from a clean aesthetic, and from the overall brand of the page. Also, users can’t tell two badges apart, so we scrapped the idea. Previous A/B tests also indicated no difference in conversion. Given the existing mental model of paying with credit cards online, we felt the presence of one lock icon was sufficient. The design solution was to add a lock icon inside the Pay button. The position of the icon is key, because it reinforces security at the critical point: when you click Pay. 10. WHAT HAPPENS WHEN I CLICK PAY? Once the user is ready to pay, they click the Pay button. The button changes state to a pending/loading state, and the text reads “Sending…”. We make a server request, and assuming an error-free state, we display a success message. 11. HANDLING CARD ERRORS One of the most important, and often unloved parts of web form design, is error handling. Yes, it can be tedious at times. Yes, there are endless ways to design errors. But when done right, error handling can turn an ambiguous interaction into a clear one. There are two general categories of error validation in Internet software: (1) client-side and (2) server-side. Client-side validation Client-side errors are caught before a request is sent to to the server. These errors are typically caused by formatting errors in the data, or missing data. To make things interesting, you can validate client-side input in different ways. Luckily, Luke Wroblewski wrote a great article explaining the After, While, and Before and While validation methods. We chose the After method based on Luke’s research, and our gut feelings. The After method displays an error message after the user has indicated that she is done answering a question by moving on to the next one. In other words, validating on “blur”. Also to keep in mind, the user is not “locked” into a field if there is an error. They can tab on their keyboard and move to the next input, and come back later to fix any errors shown. These were our validation criteria for client-side errors: All inputs, except Name on card and ZIP code, must contain numbers only (i.e. no letters or special characters) Payment amount: Must be minimum $1 Card number: Length must be 16 numbers (15 for American Express), must begin with one of the four known card codes Expiry date: Length must be 2 numbers for month, and 2 for the year (i.e. MMYY). Month can only be 01 to 12, year must be minimum 15. Security code: Length must be 4 numbers if American Express, or 3 numbers for other card brands ZIP code: Length must be minimum 5 characters, maximum 10 characters To visually indicate an error, we highlight the input field containing the error with red background and red border. We don’t make the input text red. As for the error hint, we display it below the error field, in red text. Server-side validation Server-side errors are caught after a request is sent to the server. 
They can be system-specific, or specific to the object being validated. In our form, we had to account for three types of server errors: Invalid data, including card number, expiry date, security code, or ZIP code (e.g. expired card, invalid postal code) System errors, for when there is a problem with the server (e.g. timeouts, lost connection) Card errors, for when the card being used has been declined by the payment network for some reason (and there are literally hundreds of reasons) In the case of a system error, we leave the fields populated so that a user can retry the payment. When a card is declined (i.e. card error), this is usually a smell for fraud, so we clear the data entered by the user. 12. DESIGNING FOR DIFFERENT SCREENS From the start, we knew that we wanted to build one form that could be used on different screen sizes (i.e. responsive), and different screens (i.e. inside the Wave iPhone app). By using a singular form object, we only have to make changes to the form in one place. We don’t have to maintain multiple code bases. Inside the Invoice by Wave iPhone app, we initially implemented a native credit card form, that was based on the single line input design pattern. The input was functional, but slightly buggy. More importantly, we perceived a poor experience with the way this input display labels, and the way a user has to navigate between inputs if there are errors present. Now, using an HTML iFrame, we inject the new credit card form inside the app. Users have a near identical payment experience when entering their credit card details on a desktop browser, or inside the app. In the future, inside our iPhone app, we will style the form elements using CSS to match the design of other forms inside the app. And there you have it, the anatomy of a credit card form! We’ve reviewed everything from copy, form input design, error handling, and mobile. Our credit card form will definitely evolve over time, so stay tuned for news. The payments space is not exactly sexy, but learning and understanding the user interactions behind accepting a credit card payment was really fun. Now, time for a demo! Huge thanks to Nick Presta, who engineered the entire form in React (woot!), and the rest of the awesome Payments crew at Wave.
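The production form was engineered in React, so the snippet below is not Wave's implementation. It is only a minimal sketch, written in Python for illustration, of the card-type detection and client-side validation rules described above; the function names, error strings, and the sample card number are hypothetical.

```python
# Illustrative sketch only: mirrors the rules described in the article.
# The article groups "3" cards (American Express, Diners Club) together.
import re

FIRST_DIGIT_TO_BRAND = {"3": "amex", "4": "visa", "5": "mastercard", "6": "discover"}

def card_brand(number):
    """Guess the card brand from the first digit, as the form does."""
    digits = re.sub(r"\D", "", number)  # spaces and other characters are ignored
    return FIRST_DIGIT_TO_BRAND.get(digits[:1])

def client_side_errors(amount, number, expiry, security_code, zip_code):
    """Return one error hint per failing field (the 'After' validation method)."""
    errors = []
    digits = re.sub(r"\D", "", number)
    brand = card_brand(number)

    if amount < 1:
        errors.append("Payment amount must be at least $1")

    expected_length = 15 if brand == "amex" else 16
    if brand is None or len(digits) != expected_length:
        errors.append("Please check your card number")

    match = re.fullmatch(r"(\d{2})/?(\d{2})", expiry)  # MM/YY, slash optional
    if not match or not 1 <= int(match.group(1)) <= 12 or int(match.group(2)) < 15:
        errors.append("Expiry date must be a valid MM/YY")

    expected_code_length = 4 if brand == "amex" else 3
    if not (security_code.isdigit() and len(security_code) == expected_code_length):
        errors.append("Security code must be %d digits" % expected_code_length)

    if not 5 <= len(zip_code) <= 10:
        errors.append("ZIP/Postal code must be 5 to 10 characters")

    return errors

# Example: a Visa number that is one digit short fails only the card number check
print(client_side_errors(25.00, "4242 4242 4242 424", "08/21", "123", "M5V 2T6"))
```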
https://uxdesign.cc/the-anatomy-of-a-credit-card-payment-form-32ec0e5708bb
['Gabriel Tomescu']
2015-07-15 17:38:30.800000+00:00
['Payments', 'UX', 'Design']
Design Thinking 101 — The Double Diamond Approach (Part II of II)
At SEEK, in preparation for our upcoming 8th Hackathon, we introduced Upskill sessions to teach fellow seekers new skills to help their hacking endeavours. We kicked off these sessions with a 2-part Design Thinking workshop to broadly share Design Thinking with the rest of SEEK and promote a customer-centric approach in everything we do. In part 1 we gave an overview of what Design Thinking is. This week we dive into the practical exercises we taught in the workshops. The Double Diamond Approach We use the Double Diamond structure to understand customers and their problems and explore creative and innovative ways to solve their problems and delight them. Using the double diamond, you approach problems and solutions by using 2 different types of thinking: divergent and convergent. Divergent thinking — think broadly, keep an open mind, consider anything and everything — think broadly, keep an open mind, consider anything and everything Convergent thinking — think narrowly, bring back focus and identify one or two key problems and solutions The Double Diamond There are four phases to this approach: Discover customer problems Define specific customer problems Develop potential solutions to these customer problems Deliver feasible and viable solutions to these customer problems Let’s look at each phase, and the Design Thinking exercises that can help with each. Discover In this first step, we practice divergent thinking. This means we open our minds and consider everything about our customer, constrained by nothing. You need to get up from your desk and go talk to customers. Interview them, watch them use your products, listen to them and learn everything you can about them. This work will help you build empathy. Once you’ve met real users, document what you’ve learned by creating empathy maps and customer journeys. Practical Exercise: Empathy Maps Empathy maps capture what our customers are saying, doing, thinking and feeling, hearing and seeing. These should be based on what you have heard from your customer — no making it up! Empathy Map template Practical Exercise: Customer Journey Map Next we develop customer journeys to map out their experience over time, identifying relevant touchpoints. This is a timeline of the activities customers undertake with your product — note the places, tools or people they interact with. Customer Journey Map template Define Now that we understand what the customers feel and what they do, we start to use convergent thinking. This means we begin to focus on key areas now, starting to converge on specific experiences within the customer journey. As a team, we consider the delight in their experiences, but also the pain-points. We vote and agree on the key areas that need a solution. Practical Exercise: Roses & Thorns In this activity we overlay some of the pleasure and pain-points we learnt about on the customer journey. Use red post-it notes to mark the Roses (pleasures) and another colour to represent Thorns (pain-points). Roses & Thorns Practical Exercise: How might we (HMW)… Take the pain-points (thorns) and re-frame them as questions for us to try and solve. Pick a few problems to solve and be ambitious in turning the problem into a delight. For example,instead of asking “how do we stop people using their mobile phones in the cinema”, we can turn it into an ambitious HMW by asking “how might we make people using mobile phones in the cinema provide a better experience for cinema goers?”. 
Rather than stopping the use, could we solve the core problems (interruptions, distractions) in innovative ways. Develop At this point we are half-way through the double diamond and have decided on the problem(s) to try and solve. In this third step, develop, we practice divergent thinking again. This means we begin to open our mind and start ideating — generating a list of whacky, amazing, creative, and innovative ideas. As a team, we’ll consider everything, and focus on generating as many ideas as possible. There are no bad ideas — quantity over quality. Practical Exercise: Creative Matrix Take our HMW’s, and use a Creative Matrix to brainstorm heaps of possible ideas — the aim is quantity!! HMW’s go along the top, and on the left are a few guiding considerations to direct you. It’s important to have some broad themes like Technology or people. However, also use some narrower themes like social media or gaming, as this will help force you to think outside the box. Don’t forget to add a wildcard too, to catch all those stray ideas that don’t fit anywhere else. Think outside the box on this one! The creative matrix When running this exercise, consider introducing rewards for the most ideas. We gave a few chocolates to the team that generated the most ideas, and not just the free chocolates readily available around SEEK, but the good stuff. Generating lots of ideas Deliver In this last step we practice convergent thinking again, focusing on what we can actually deliver and which solutions will solve the users’ needs. We vote and agree on the key areas that need a solution. Practical Exercise: Visual voting Converge by voting on some of the better or intriguing ideas, that we’d like to hear more about. Each person in the group votes for a maximum of 2 ideas each. Visual Voting Practical Exercise: Yes, and… (Improv) A common rule in improv, is never to say no. You keep the conversation going with “Yes, and…” — keep building. In your groups, take turns in explaining your ideas. Then respond, “Yes, and…” to add to the idea and improve it. Also — no buts! There’s always one person who tries to get around “yes and…” by saying “but…”. Pitching ideas Practical Exercise: Impact Vs. Difficulty Now that we’ve explored Desirability in an unconstrained way, we’re ready to start converging on the right solution. In your groups, plot your ideas on the impact/difficulty matrix and decide which ones you want to pursue! Difficulty vs Impact Matrix Very difficult and low impact problems aren’t necessarily a write off — these are luxuries that some users may be willing to pay for. Next steps That was all we had time to cover in the workshop, but the next steps in delivering the solution involve:
https://medium.com/seek-blog/design-thinking-101-the-double-diamond-approach-ii-4c0ce62f64c7
['Kayla J Heffernan']
2019-08-20 23:47:33.436000+00:00
['User Experience', 'Hackathon', 'Design', 'UX', 'Design Thinking']
Pros and Cons of Being a Multidisciplinary Designer in a Product House and Why It Is So Satisfying
Independent Project Managers? Individual Traffic Managers? We don't need them anymore. Our multidisciplinary team of Designers and Developers does the job. After a few years spent gaining experience in creative agencies and corporations, I can fairly say that the IT industry is growing by leaps and bounds. New software is developed, new technologies are used. The evolution in how businesses are managed is visible at first glance. Managers don't waste their time and money on recruiting people who'd preserve unnecessary jobs. Teams are skilled and extremely effective these days. As a UX/UI Designer in a Product House I can say that the times of sharing responsibility among different positions in a company are almost over. Some management duties have been absorbed by other jobs. Responsibility for project and traffic management now rests with designers. How does it affect our reality? Let's take a closer look at the pros and cons of a multidisciplinary approach to the designer's job. Advantages of being a Multidisciplinary Designer at a Product House Our company's management methodology is based on the lean concept. Lean management seeks to eliminate any waste of time, effort or money by identifying each step in a business process and then revising or cutting out steps that do not create value. What does it mean for us, multidisciplinary designers? First of all, the responsibility for clients' projects lies with us. We solve the problems because we work closest to them. Taking care of the highest possible quality of products is extremely motivating. Direct contact with a client makes the feedback process quicker and more effective. We don't need to engage other team members in our tasks anymore. Since the information isn't distorted or lost somewhere along the way, this is a great improvement. Moreover, straightforward communication with a client allows me to set my own pace of work. Thanks to appropriate work-time estimations I have the ability not only to do the design jobs, but also to write this article or join some workshops and conferences. From the company's point of view, frugality is the substantial advantage of having multidisciplinary designers on board. It's economically and strategically wiser for a company to invest its resources in the development of current employees by financing courses, workshops or conferences than to spend time and money on one-off recruitment for single-skill, one-time jobs. Spreading knowledge and skills across multiple areas (not limited to design and management; front-end or back-end basics sound great too!) may lead us to more demanding positions, like product owner or startup founder, in the future. Let's take Vu Hoang Anh as an example — from a designer to the CEO of Avocode. Check out his interview on the multidisciplinary approach and building a successful startup at Design Encounters last year.
https://medium.com/elpassion/pros-and-cons-of-being-a-multidisciplinary-designer-in-a-product-house-and-why-it-is-so-satisfying-9d713c9954e8
['Matt Koziorowski']
2017-02-16 09:01:43.729000+00:00
['UI Design', 'UX Design', 'UI', 'Design', 'Product House']
Paint It Black: 7 Dark & Moody Kitchens to Enjoy All Year Long
It’s the color of choice for tuxedos, limos and chic little dresses. Black is luxurious, mysterious and it has gravitas — and in 2019 it’s been bringing all that badness to kitchens all over the world. This fascination with dark, moody kitchens is a natural reflex to the white kitchens that have been so dominant for the past decade — so are all the other colors trending now. “Everyone is looking for spaces that are moody and cozy and comforting,” says Sarah Robertson, principal designer at Studio Dearborn in Westchester County, New York. “Black makes the walls recede.” Black is also a natural foil for the light wood and stone floors that are popular right now, too. Even if you’re not thinking about taking your home kitchen to the dark side, you’ll still be inspired by these seven examples of black kitchens. Some are small, some are sleek and modern, others are cozy and comfortable. But no matter how you look at it, black is beautiful. Photo credit: Tatiana Shishkina Black Russian. This ultra compact, 24-square-meter (that’s 250 square feet) apartment designed by Tatiana Shishkina of Inroom Design in Vladivostok, Russia has just a slip of a kitchen. But with its textured walls, jewel-like appliances and white shelf and simple dining counter, it says so much. The best part may be the way she used skeletal stairs to divide the kitchen from the rest of the flat. Photo credit: Ken Fulk Black gold. Ken Fulk’s wonderfully warm black kitchen spans classic, farmhouse and midcentury. He collaborated with KitchenAid to outfit the Tribeca loft that’s his new East Coast headquarters. The matte black cabinetry features bright bin pulls and brass luggage corners. Photo credit: Nicole Hollis All black everything. California designer Nicole Hollis went all in on black to create this stunning kitchen for her San Francisco studio. We love how the shiny glass tile adds light and reflectivity. Using a range of finishes from the glossy to matte creates its own kind of palette, even though everything is black — save the salt.
https://medium.com/studio-dearborn-kitchen-design-journal/paint-it-black-7-dark-moody-kitchens-to-enjoy-all-year-long-e21fdbbdd2cf
['Maria C. Hunt']
2019-10-31 17:35:06.152000+00:00
['Kitchen Interior Design', 'Design', 'Kitchen', 'Black Kitchen Ideas', 'Interior Design']
5 Essential elements to build highly successful chatbot (Conversational AI)
Conversation should touch everything, but should concentrate itself on nothing. — Oscar Wilde Using AI to offer more personalized customer experiences at scale is one of the top priorities for companies (products, services) across the globe, and conversational AI plays a pivotal role in this space. Our interactions with technology touch points have increased exponentially in daily life. The day starts with getting morning news updates and ends with setting an alarm and a reminder for the next morning by giving a voice command to the Google assistant. There are many more such instances during the day, for example booking a cab, checking the weather, interacting with a bank, or calling someone with a voice command, all done through various mobile applications or sites. Each of these touch points is a conversation with a virtual assistant that tries to give you a personalized experience based on your usage history, the available options, and so on. The ability of a machine or application to converse as fluently and intelligently as "Jarvis", the AI assistant from the famous Marvel movie series, is still a goal for the next few years, but the phenomenon is clearly growing. Holding a conversation is a basic human quality that has developed over thousands of years of evolution. It is a basic human need to be able to converse with other people and to communicate thoughts, exchange ideas, and express emotions and information. This natural need is giving birth to many new use cases for virtual assistants. With advances in technology, virtual assistants have grown primarily in two categories: General-purpose virtual assistants such as Google Assistant, Amazon Alexa, Microsoft Cortana and Apple Siri, which can respond to a variety of inputs as well as take some actions. Domain-specific virtual assistants or chatbots, for example a bank chatbot, a restaurant or food delivery chatbot, customer service, workplace productivity (HR assistant), booking agent, gaming expert, weather forecaster, news reporter, job hunter, marketer, finance adviser, teacher, legal adviser, and so on. To understand how chatbots really work and are able to mimic human conversation, we need to focus on a few fundamental elements. In the rest of this article, I will use a domain-specific chatbot as an example to explain the process of creating a chatbot. A contextual chatbot understands the context of the conversation by observing the pattern of communication between itself and the user. It not only keeps track of the current state of the conversation, but also of what has been said before, which allows it to suggest the next response or action. This differentiates it from traditional programming, using an ML model instead of relying on a series of if/else statements. The entire functioning of a chatbot relies on the following factors: Framework Natural Language Understanding (NLU) layer and its training Pipeline selection for NLU training Dialogue management (the Core or brain of the chatbot) and its training Domain configuration Framework — For every conversational chatbot, the basic requirement is to first select a conversational framework as a foundation. There are many options available, such as Google Dialogflow, Amazon Lex, and Microsoft Azure Bot Service. In this article the steps and examples are explained with an open source framework called Rasa, used to build a restaurant search chatbot for an imaginary food delivery start-up. Every conversational framework (Rasa, Lex, Dialogflow, etc.)
has 2 main elements Natural Language Understanding, or NLU layer Dialogue Management System layer or Core When humans do conversation, primarily these body parts and senses are involved — Ear (to listen), ability to speak (to respond) and Human brain (to decide action or response, to keep track of context, history of ongoing conversation). Similarly in terms of conversational framework, the NLU layer manages the listening and speaking ability and the dialogue management system is the brain of the chatbot. In Rasa, these two components are called Rasa NLU and Rasa Core, respectively. 2. Natural Language Understanding Layer (Rasa NLU) — First layer of a conversational system. It is responsible for intent classification and entity extraction. It interprets the free text provided by the user. It basically takes an unstructured text phrase or sentence, understands what the user probably intends to say, extracts entities from the text phrase or sentence, and converts it into structured data. for ex. if a user has asked a query to a chatbot — “Search for a restaurant in Nagpur.” Intent: Restaurant search; Entities: [location=Nagpur] The NLU layer explained above will be able to perform it’s task based on the learning from training data. In general, the training data for Rasa NLU is structured into three parts : Example Data— Most important component since it contains all training examples. The performance of NLU layer depends on the volume and variety of training examples . Each example further has three components— text, intent and entities. Text : It is an example of what would be submitted for parsing. for ex. what is the average budget for 2 people at MTR JP Nagar ? : It is an example of what would be submitted for parsing. for ex. what is the average budget for 2 people at MTR JP Nagar ? Intent is the objective that should be associated with the text. for ex. in above query the intent is “restaurant search”. is the objective that should be associated with the text. for ex. in above query the intent is “restaurant search”. Entities are specific parts of the text that need to be identified. for ex. in above query following entities are identified. Restaurant — MTR, location — JP Nagar 2. Synonyms — These are helpful to add those terms which are referring to a common term and generally used as a part of a conversation. for ex. City — Bangalore is referred as Bengaluru or misspelt as Bangaluru. The synonym field allows to capture possible combination to refer to an entity. 3. Regex features — With regular expression, you can cover a broader range of values where lot of unique elements are available and follows a certain pattern. for ex. City Pin codes are unique and large in values. 3. Selecting pipeline for NLU — Rasa framework allows to customize NLU model by configuring specific pipeline. supervised_embeddings pretrained_embeddings_spacy The biggest difference is that the pretrained_embeddings_spacy pipeline, uses pre-trained word vectors whereas the supervised_embeddings pipeline feeds specifically on your data set and doesn’t use any pre-trained word vectors. It is generally recommended that you use the pretrained_embeddings_spacy pipeline if you have less than 1,000 total training examples, and there is a spaCy model for your language. You need to specify the pipeline in config.yml as shown below : 4. Dialogue Management with Rasa Core — The next important aspect of the conversational framework is the Dialogue Management Model. 
A dialogue management model will predict the response or action that the chatbot should take based on the stage of the conversation. These actions/responses could be to fetch data, send a mail to the user or simply say “Good bye”. For ex. if a user asks “What’s the current temperature in Bengaluru ?”. In this case the bot should fetch the results from a weather database and display it on the screen. Rasa Core takes structured input in the form of intents and entities (i.e., the output of Rasa NLU) and decides the next actions. It accomplishes the task of learning to take the correct action based on the stage of the conversation. However, the correct action or response is highly dependent on the training data that is supplied to Rasa Core (Dialogue management ). These inputs in training data are supplied in the form of ‘conversational stories’. A story represents one training example, and it is simply a conversation between a user and an AI assistant, expressed in a particular format. Rasa core trains a neural network model on the stories. Rasa Core uses Tensorflow, a library for training neural networks, at the back end. More specifically, it uses LSTM neural networks implemented in Keras. In the story examples, user inputs are expressed as corresponding intents (and entities where necessary), while the responses of the assistant are expressed as corresponding action names. 5. Domain.yml — The domain defines the universe in which your assistant operates. It specifies the intents, entities, slots, templates and actions that your bot should know about. Slots: These are ‘objects’ that you want to ‘keep track of’ during a conversation. Suppose a user says “Book a table for 2 at MTR Lalbagh.” Throughout this conversation, the bot needs to remember that ‘MTR’ is the value of the entity restaurant_name and ‘2’ is the value of the entity number_of_seats so it can be used to query a database such as Zomato. Slots in domain.yml Intents (NLU layer) : These are strings (such as ‘greet’, ‘restaurant_search’), which describe what the user intends to say. : These are strings (such as ‘greet’, ‘restaurant_search’), which describe what the user intends to say. Entities (NLU Layer): As part of NLP, entities are extracted using models such as CRF, spaCy, etc. In most cases, entities are stored in slots; in cases when an entity is irrelevant for the dialogue flow, you don’t need to assign it a slot. As part of NLP, entities are extracted using models such as CRF, spaCy, etc. In most cases, entities are stored in slots; in cases when an entity is irrelevant for the dialogue flow, you don’t need to assign it a slot. Templates: Templates define the way the bot will utter a statement. For example, if the bot wants to ask for cuisine preference, then a template can be defined so that bot can ask for it: “What kind of cuisine would you like?”, “Please specify your cuisine preference”, etc. All these can be specified in the template section of the domain file. A random response is picked from the template so that your bot doesn’t sound robotic or static. Templates in domain.yml Actions: The ‘actions’ component enlists all the actions that the bot can perform — uttering a text message such as “Hi”, looking up a database, making an API call, asking the user a question, etc. For example, an action named ‘action_check_cuisine’ can check validity and availability of a particular cuisine. 
Action module in domain.yml There are two types of actions a bot can take: Utterance actions: The bot just sends a text message to the user, for example replying with a greeting or asking for the location. Custom actions: Actions such as querying the database, sending emails to the user, etc. If you want the bot to perform a custom action, you need to add that action to your actions file; for example, a custom action for sending an email can be included in the actions.py file (a sketch of such an action appears at the end of this article). All of the components described above together make up a complete, working conversational chatbot. Conversation about the weather is the last refuge of the unimaginative. — Oscar Wilde When it comes to building chatbots, human imagination has no boundary. There are many possibilities that are still unexplored, and advances in AI (particularly in machine learning and NLP) are making it possible to move into those unknown territories, such as healthcare chatbots, insurance chatbots, and more. Happy Chatting !!
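As mentioned above, here is a rough sketch of what such a custom action might look like, assuming the rasa_sdk Action interface. The SMTP host, sender address, and slot names are assumptions made for illustration, not the author's actual code.

```python
# Sketch of a custom action that emails search results to the user.
import smtplib
from email.message import EmailMessage

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionSendEmail(Action):
    def name(self):
        # This string must also be listed under `actions` in domain.yml
        return "action_send_email"

    def run(self, dispatcher, tracker, domain):
        # Values gathered earlier in the conversation are read back from slots
        email_id = tracker.get_slot("email_id")
        results = tracker.get_slot("restaurant_results") or "No restaurants found."

        message = EmailMessage()
        message["Subject"] = "Your restaurant search results"
        message["From"] = "bot@example.com"            # hypothetical sender
        message["To"] = email_id
        message.set_content(results)

        with smtplib.SMTP("smtp.example.com") as smtp:  # hypothetical SMTP host
            smtp.send_message(message)

        dispatcher.utter_message(text="I have emailed the details to {}.".format(email_id))
        return []
```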
https://medium.com/analytics-vidhya/5-essential-elements-to-build-highly-successful-chatbot-conversational-ai-bd0eba4b549c
['Niwratti Kasture']
2019-12-21 15:30:47.015000+00:00
['Machine Learning', 'Chatbots', 'NLP', 'Conversational Ai', 'Artificial Intelligence']
Dimensionality Reduction Techniques
Source : HPC Now, what is this dimensionality reduction? It seems a bit complicated. With the amount of data increasing daily, it is getting quite difficult to keep track of it all. A tremendous amount of data is being generated by social networking sites, educational websites, and so on, and it is very difficult to visualize that data. That is where these techniques come in. When there is a huge number of variables, it is quite difficult to visualize the dataset and draw inferences from it. These techniques try to extract a subset of the dataset that captures as much of the information contained in the original set of variables as possible. So if we have a dataset with X dimensions, we can convert it into a subset with Y (fewer) dimensions. This is called dimensionality reduction. Why do we need these techniques? If the dimensions are reduced, the space required to store the data is also reduced. It helps us visualize the data. It helps us take care of multicollinearity by removing those features that are of no use. Computation time can be reduced, thus increasing the speed of our model. List of Dimensionality Reduction Techniques: Random Forest Principal Component Analysis (which we will discuss here) Missing Value Ratio Forward Feature Selection Backward Feature Elimination Low Variance Filter High Correlation Filter Principal Component Analysis (PCA): This technique extracts a smaller set of features from the original dataset. These extracted features/variables are called Principal Components. The main motive behind this technique is to extract a low-dimensional set of features from a high-dimensional dataset, and it is most useful when we are dealing with 3-dimensional or higher-dimensional data. To apply this technique, the matrix must contain numeric, standardized data. Each principal component is a linear combination of the original variables. The first principal component explains the maximum variance in the dataset, the second principal component explains as much of the remaining variance as possible, and so on. The components are uncorrelated. Source : Medium Z_2 = φ_12·X_1 + φ_22·X_2 + φ_32·X_3 + .... + φ_p2·X_p -> This is the general form of a principal component (here, the second one) as a weighted combination of the original variables X_1 to X_p. We will talk about the implementation in detail in upcoming posts (a short scikit-learn sketch is included below). Let's have a look at some feature engineering. Source : Amazon The success of machine learning is really a success in engineering features that a learner can understand. Feature engineering is the process of transforming raw data into features that represent the underlying problem more accurately, resulting in improved model accuracy. The steps involved in solving any machine learning problem are: Collecting data. Pre-processing data. Feature Engineering. Defining our model. Predicting output. Feature engineering can be anything from introducing a new feature that surfaces useful information and increases the accuracy of our model, to creating a new feature by combining two existing features. Sometimes, removing an unwanted feature also counts as feature engineering, because that feature was the reason for the downfall of our model. Steps involved in Feature Engineering: Deleting features. Creating new features. Checking from the start whether all the features are working well. Running our model again and again with different test cases, to check the accuracy of our model with the extracted features. That's all folks! HAPPY LEARNING !!!
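As noted above, a minimal scikit-learn sketch of the PCA workflow is shown here purely for illustration (the full implementation is left for later posts). The data is random, not a real dataset.

```python
# Standardize the numeric data first, then extract a smaller set of
# principal components and inspect how much variance each one captures.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.rand(200, 10)                # 200 rows, 10 original variables

X_std = StandardScaler().fit_transform(X)  # PCA expects standardized, numeric data

pca = PCA(n_components=3)                  # keep 3 components instead of 10
Z = pca.fit_transform(X_std)

print(Z.shape)                             # (200, 3) - the reduced dataset
print(pca.explained_variance_ratio_)       # share of variance captured by each component
```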
https://medium.com/datadriveninvestor/dimensionality-reduction-techniques-27049b5a4c55
['Vineet Maheshwari']
2019-01-19 01:28:04.948000+00:00
['Machine Learning', 'Data Science', 'Dimensionality Reduction', 'Feature Engineering']
Hide & Seek
Photo by Sharon McCutcheon on Unsplash You’re not lost. Not yet. You’re just waiting for someone else to find you. You want them to recognize what you are, and relay the message. Music and movement open you up for seconds, minutes, maybe even hours… And then you return to your oyster. Safe. Closed. Clamped shut. Your friends call it turtling, because the world is a scary place, and burying your chin into your shirt collar makes you feel safer. You are terrified. And that’s okay. You have no idea what comes next, but you expect the worst. Stuck in your ways, burrowing into your blankets with quiet despair, I see you. I want you to know, no matter how alone you feel, how small, how unsettled and unmoored, You know better. You will remember, in time, That you are worthy of all the things you don’t believe you deserve. And the only way you will ever find out exactly what you are made of, Is by making something new, and having the courage to share it. Seek openness. Lean into fear. Question judgement. You’ve got this, little one. It’s safe to come out now.
https://leighhuggins.medium.com/hide-seek-2ac13661d8b8
['Leigh Huggins']
2019-02-16 06:12:12.697000+00:00
['Personal Growth', 'Self', 'Mental Health', 'Poetry', 'Life']
What is cloud computing? A beginner’s guide
We have always stored the programs and data that we need on our computer's hard disk and accessed them whenever required. This is computing. But now technology has moved on, and the need to store everything on your physical hard disk is no longer there. This is where Cloud Computing comes into the picture. Cloud Computing is the method of computing in which data and programs are stored over the Internet and not on your hard disk. The Internet is what is referred to as the Cloud in 'Cloud Computing'. My Background: I am a Cloud and Big Data enthusiast, and I am here because I love to talk about the cloud. I am an 11x cloud-certified expert: 4x AWS Certified, 3x Oracle Cloud Certified, 3x Azure Certified, and 1x Alibaba Cloud Certified. What is cloud computing? Cloud computing is the delivery of on-demand computing services — from applications to storage and processing power — typically over the internet and on a pay-as-you-go basis. How does cloud computing work? Rather than owning their own computing infrastructure or data centers, companies can rent access to anything from applications to storage from a cloud service provider. One benefit of using cloud computing services is that firms can avoid the upfront cost and complexity of owning and maintaining their own IT infrastructure, and instead simply pay for what they use, when they use it. In turn, providers of cloud computing services can benefit from significant economies of scale by delivering the same services to a wide range of customers. What cloud computing services are available? Cloud computing services now cover a vast range of options, from the basics of storage, networking, and processing power through to natural language processing and artificial intelligence, as well as standard office applications. Pretty much any service that doesn't require you to be physically close to the computer hardware you are using can now be delivered via the cloud. Over the past three years, job searches that included keywords related to the top cloud providers, such as "Google Cloud," "Azure," or "AWS," increased by 223%, the report found. Job listings that included these terms in the description rose by 101% over the same time frame. Why is it called cloud computing? A fundamental concept behind cloud computing is that the location of the service, and many of the details such as the hardware or operating system on which it is running, are largely irrelevant to the user. It's with this in mind that the metaphor of the cloud was borrowed from old telecoms network schematics, in which the public telephone network (and later the internet) was often represented as a cloud to denote that the details just didn't matter — it was just a cloud of stuff. This is an over-simplification of course; for many customers, the location of their services and data remains a key issue. What are examples of cloud computing? Cloud computing underpins a vast number of services. That includes consumer services like Gmail or the cloud back-up of the photos on your smartphone, through to the services that allow large enterprises to host all their data and run all of their applications in the cloud. Netflix relies on cloud computing services to run its video streaming service and its other business systems too, as do a number of other organisations. Cloud computing is becoming the default option for many apps: software vendors are increasingly offering their applications as services over the internet rather than as standalone products as they try to switch to a subscription model.
However, there is a potential downside to cloud computing, in that it can also introduce new costs and new risks for companies using it. Public cloud services serve as the one bright spot in the outlook for IT spending in 2020. Cloud spending in many regions is expected to grow rapidly as economies reopen and more normal economic activity resumes, with regions such as North America expected to return to higher spending levels as early as 2022. What is Infrastructure-as-a-Service? Cloud computing can be broken down into three cloud computing models. Infrastructure-as-a-Service (IaaS) refers to the fundamental building blocks of computing that can be rented: physical or virtual servers, storage and networking. This is attractive to companies that want to build applications from the ground up and control nearly all the elements themselves, but it does require firms to have the technical skills to orchestrate services at that level. Research by Oracle found that two-thirds of IaaS users said that using online infrastructure makes it easier to innovate, had cut their time to deploy new applications and services, and had significantly cut ongoing maintenance costs. However, half said IaaS isn't secure enough for most critical data. What is Platform-as-a-Service? Platform-as-a-Service (PaaS) is the next layer up — as well as the underlying storage, networking, and virtual servers, this layer also includes the tools and software that developers need to build applications on top of: that could include middleware, database management, operating systems, and development tools. What is Software-as-a-Service? Software-as-a-Service (SaaS) is the delivery of applications-as-a-service, probably the version of cloud computing that most people are used to on a day-to-day basis. The underlying hardware and operating system are irrelevant to the end user, who accesses the service via a web browser or app; it is often bought on a per-seat or per-user basis. What is private cloud? Private cloud allows organizations to benefit from some of the advantages of public cloud — but without the concerns about relinquishing control over data and services, because it is tucked away behind the corporate firewall. Companies can control exactly where their data is being held and can build the infrastructure in the way they want — largely for IaaS or PaaS projects — to give developers access to a pool of computing power that scales on demand without putting security at risk. However, that additional security comes at a cost, as few companies will have the scale of AWS, Microsoft or Google, which means they will not be able to create the same economies of scale. Still, for companies that require additional security, private cloud may be a useful stepping stone, helping them to understand cloud services or rebuild internal applications for the cloud before shifting them into the public cloud. What is hybrid cloud? Hybrid cloud is perhaps where everyone is in reality: a bit of this, a bit of that. Some data in the public cloud, some projects in private cloud, multiple vendors and different levels of cloud usage. According to research by TechRepublic, the main reasons for choosing hybrid cloud include disaster recovery planning and the desire to avoid hardware costs when expanding an existing data center. Refer to this article for more information.
Gartner Forecasts Worldwide Public Cloud Revenue to Grow 6.3% in 2020. I hope that this guide helps you in building your career with Cloud and getting Cloud Certified. If you have any doubts or are unable to understand any concept, feel free to contact me on LinkedIn: https://www.linkedin.com/in/adit-modi-2a4362191/ Instagram: https://www.instagram.com/adit_aweesome/ Twitter: https://twitter.com/adi_12_modi Github: https://github.com/AditModi You can view my badges on: https://www.youracclaim.com/users/adit-modi/badges I am also working on various AWS Services and developing various Cloud, Big Data & DevOps projects. If you are interested in learning AWS Services then follow me on GitHub. If you liked this content then do clap and share it. Thank you.
https://medium.com/analytics-vidhya/what-is-cloud-computing-a-beginners-guide-1e89fdb8791d
['Adit Modi']
2020-12-21 17:47:56.951000+00:00
['Cloud Native', 'Cloud Services', 'Cloud Storage', 'Cloud Computing', 'Cloud']
To Hell With Hustle
Jefferson Bethke's new book puts a proper emphasis on the importance of Sabbath. Photo by Angelina Kichukova on Unsplash In his new book To Hell With Hustle, YouTube sensation and writer Jefferson Bethke identifies an "ethos of hustle injected into us all at birth," along with the noise, fame, work, and tribalism that have overtaken the lives of many, as the main problems facing our culture today. As a remedy, Bethke identifies each problem's opposite: for noise, silence; for fame, obscurity; for work, rest; and for tribalism, empathy. The book explores each of these antidotes as Bethke takes readers around the country to New Melleray Abbey in Iowa — dubbed the quietest place in the world — and back in time to Swidnik, Poland on February 5th, 1982, a day when the city's residents were carrying around their TVs in protest against the Soviet-era propaganda they were routinely subjected to — a picture of what countercultural and defiant Sabbath rest looks like today. Indeed, it is Bethke's own commitment to practicing Sabbath — one 24-hour period of rest every week — that is the most refreshing aspect of his book. While he doesn't command or prescribe, Bethke invites readers into the rhythms and rituals of his family life, how they have made "no" their default answer, and prioritized formation over achieving goals. Writes Bethke, "A couple of the small changes Alyssa [his wife] and I have pursued are honoring a family Sabbath, never allowing phones in the bedrooms, and turning off our phones once a week for a twenty-four hour period." Bethke claims that these changes have "yielded massive results." In our 24/7 hustle culture, Sabbath has never been more necessary — nor more abused. It is encouraging to see a writer like Bethke give such importance to something so vital to our health, both physical and spiritual. Of course our hustle culture wasn't developed yesterday. It's been around for over a century now, and Bethke traces its roots to the dawn of the industrial revolution. Henry Ford, the implementation of the assembly line, and the advent of the car all have a place in this tale and have each contributed in some measure to the hurry that does real violence to our souls. Bethke even examines how the industrial revolution has infiltrated the church and created a type of "assembly-line Christianity." At the expense of relationships, a pre-planned and programmed schedule is followed every Sunday in churches across the country and world. Worship, stand up, sit down, pray, sermon, repeat. Of course Bethke has a remedy for industrialized religion as well — the adoption of agrarian principles that give proper weight to the rhythms, rituals, and seasons that keep us balanced. While all of this is good and true, it seems at times as though Bethke is the one who needs this message the most. As if he is writing to himself and letting us read it. To be sure, hustle culture is real and a threat, but most Christians probably aren't in as deep as Bethke himself. It was a YouTube video that launched his career, and he and his wife have taken to social media as their full-time work. In some cases it leaves the reader wondering who Bethke is to preach to us on such matters. But the aforementioned discipline and intentionality Bethke has adopted largely restore his credibility and give the book a flavor more akin to a warning from someone who has been there. 
In that way we can learn much from Bethke, who has tried the always-on-the-go lifestyle and has declared, "to hell with hustle." John Thomas is a freelance writer. His writing has appeared at Desiring God, Christ and Pop Culture, and Christianity Today. He writes regularly at Soli Deo Gloria.
https://medium.com/soli-deo-gloria/to-hell-with-hustle-35404a92c9f2
['John Thomas']
2019-11-20 09:16:01.257000+00:00
['Books', 'Religion', 'Spirituality', 'Christianity', 'Culture']
How Do You Get More Reads on Medium? I’ll Show You.
1. A good title. A captivating title is what you need. Without it, no one is going to click on your work, and then you won't be receiving a read or a view. Could you just use clickbait titles? These are very popular and I have been dragged into a fair few. The title looks promising, I begin to read and realise that actually this isn't what I was prepared to read, but I continue anyway. Clickbait titles are also very risky, as there have also been times when I have clicked on an article and then clicked straight off when I have read the sub-title. I have seen advice from social media groups to use a headline generator. Personally, I have not done this yet but I was meant to, I just lost the site that was recommended. 2. The myth of publishing on a weekend. I have seen a few conspiracy theories flying around stating that publishing on a weekend is not really a good idea. This is true. I noticed this after writing for Medium for about 3 weeks. A vast number of people use the weekend to do things other than writing, which is fine because weekends are meant to break up the working week. I also find that there is a lot less engagement within social media groups as well, which means that if you promote your work over the weekend you are bound to get fewer views and reads. Try and publish in the week if you can. I still publish on the weekend as I work full-time and just cannot get everything out in the week. Maybe I should plan a schedule… 3. Longer reads vs shorter reads. A lot of my pieces are within the 3–5 minute range. I did experiment and publish a 7-minute long piece but it only has a 31% engagement rate, how embarrassing. Personally, I do not like to read anything that is over 6 minutes, and that's pushing it. The second I click on an article and see anything above 6, unless it's something that I want to read about, I won't read the article in its entirety. I simply just don't have all day and I'm more likely to be engaged if I know I'm not reading a mini-essay. The main reason why a lot of writers publish longer reads is because they're either writing fiction/non-fiction or they want the higher reading time, which in turn generates more dollar signs. My best pieces are shorter reads and have generated me over $6 each, which isn't that much, but it's something. 4. An interesting introduction. The key point that I read around building an introduction is that you want the reader to be able to relate to something. If they can relate to it, they'll be emotionally engaged and be inclined to continue to read. I don't actually have too much advice to give surrounding this area as it looks like I can barely write a good one myself! 5. Adding images to your article. Should you add images mid article? I have noticed that not many writers do this. I don't do this. Does adding photos mid article break the article up for the reader and make them more likely to continue to read? Let's trial it. Did it work? Will you read till the end now? The articles I do come across that include photos are usually screenshots or they are using a photo to help explain what they are describing. It is rare that writers will just add images mid article for image's sake. Although, I do find that it does make it slightly easier to read an article. It gives a nice distraction to the continuous stream of words and also adds colour and jazz to a piece of writing, that's just my opinion. 6. Breaking up paragraphs. I cannot stand lengthy paragraphs. It's a mega turn-off and it makes articles hard to read. 
PLEASE break up your paragraphs or I will break up with your writing. It makes it so much easier to continue to read. 7. The bigger, the better. The bigger the publication, the more reads you are going to have; this is obvious. I have just seen a tweet from a writer who has had four articles accepted by The Startup (one of the biggest publications on this entire platform). They claim their "worst" performing piece has done 100,000 reads and their best performing has done 250,000 reads. Many writers do self-publish. Personally, I do not do this. Many also find that publications will reach out to them and ask if they can feature their article in their publication. This is a major win! The bigger your social media platforms full of Medium Partner Programme folk, the more reads you are going to have, regardless of how captivating your title is. They're all there to support you as your fan base, and the chances are you know the minimum number of reads you are going to receive. 8. Curation/Further Distribution. Due to the recent and consistent ongoing changes from Medium, it seems as if curation has died an annoying death. They've even changed the name. Whilst this used to work wonders for your work and reads, does it really anymore? Many social media groups state that since the new changes, further distribution no longer makes a major change to their views or followers. It might do something, but not nearly as much as it used to. Still, it's a good ego boost that you're on the right track!
https://medium.com/illumination/how-do-you-get-more-reads-on-medium-ill-show-you-4c8ee22dc6e7
['Shamar M']
2020-12-14 12:19:21.660000+00:00
['Writing Tips', 'Writing Life', 'How To', 'Writing', 'Self Improvement']
Where Is Jho Low, the Billion Dollar Whale?
The Take on Billion Dollar Whale From Hollywood to Manhattan night clubs, the book takes its readers on a thrill ride; one that will leave you gobsmacked at Low’s unabashedly bold displays of wealth. While he celebrated, he was robbing the Malaysian people in broad daylight. The book pulls back the curtain on the ugly corners of global capital markets and capitalism generally. Low has an uncanny ability to pull the levers of power and convince people of his value. Numerous powerful people and celebrities fell prey, from Leonardo DiCaprio, Swizz Beats, and Paris Hilton, to Goldman Sachs bankers. Everyone has their price. Even Chris Christie is reaping the Low rewards by currently representing the international fugitive in a variety of forfeiture actions. The book does a good job at shedding light on the inner psychology of these people. They must have rationalized: “why raise alarms when we stand to benefit”? In conclusion, the book is an easy and entertaining read by Wall Street Journal investigative journalists, Tom Wright and Bradley Hope. It will shock and awe that a fraud of this scale could take place in 21st century international markets. Yet if you were like Low and grew up in the developing world, perhaps you too would construct a worldview that seeks to legitimize illegal and unethical behavior because others engage in it. Which is why the current Razak case in Malaysia is so important. Not many former heads of state in the region are held to account for abuses of power, permitting many to act with impunity. Razak’s conviction and the pursuit of justice against Low will go a long way toward restoring and building trust in democratic institutions and ultimately, the rule of law in the region. But the world should not forget that none of this would have been possible without either consent or willful blindness from scores of executives across the international financial system. Jho Low may have exposed some of the corrupt corners of the financial world. We should not forget, however, that many executives were willing to aid and abet his actions for the right price.
https://medium.com/curious/where-is-jho-low-the-billion-dollar-whale-a57180d617c0
['Sebastian Stone']
2020-08-21 00:37:47.732000+00:00
['Finance', 'Malaysia', 'Fraud', 'Books', 'Jho Low']
NumPy Fundamentals For Beginners
NumPy Fundamentals For Beginners Getting started with NumPy, a scientific computing package in Python What is NumPy? NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more. NumPy array object NumPy has a multi-dimensional array object called ndarray. It consists of two parts: The actual data Some metadata describing the data The majority of array operations leave the raw data untouched. The only aspect that changes is the metadata. The NumPy array is homogeneous — the items in the array have to be of the same type. The advantage is that, if we know that the items in the array are of the same type, then it is easy to determine the storage size required for the array. NumPy arrays are indexed just like in Python, starting from 0. Data types are represented by special objects. Let's look at the arange function. numpy.arange: Return evenly spaced values within a given interval. Values are generated within the half-open interval [start, stop) (in other words, the interval including start but excluding stop). For integer arguments, the function is equivalent to the Python built-in range function but returns an ndarray rather than a list. numpy.arange([start, ]stop, [step, ]dtype=None) For example: np.arange(3) Out: array([0, 1, 2]) np.arange(3,7,2) Out: array([3, 5]) We will create an array with the arange function again. Here's how to get the data type of an array: In: a = arange(5) In: a.dtype Out: dtype('int64') The data type of array a is int64 (at least on my machine), but you may get int32 as output if you are using 32-bit Python. In both cases, we are dealing with integers (64-bit or 32-bit). Besides the data type of an array, it is important to know its shape. The following diagram will give us a better understanding of a NumPy array object: A vector is commonly used in mathematics but, most of the time, we need higher-dimensional objects. Let's determine the shape of the vector we created a few minutes ago: In [4]: a Out[4]: array([0, 1, 2, 3, 4]) In: a.shape Out: (5,) As you can see, the vector has five elements with values ranging from 0 to 4. The shape attribute of the array is a tuple, in this case a tuple of 1 element, which contains the length in each dimension. Time for action — creating a multidimensional array Now that we know how to create a vector, we are ready to create a multidimensional NumPy array. After we create the matrix, we would again want to display its shape and data type. 1. Create a multidimensional array: In: m = array([arange(2), arange(2)]) In: m Out: array([[0, 1], [0, 1]]) 2. Show the array shape and data type: In: m.shape Out: (2, 2) What just happened? We created a 2-by-2 array with the arange function we have come to trust and love. Without any warning, the array function appeared on the stage. The array function creates an array from an object that you give to it. The object needs to be array-like, for instance, a Python list. In the preceding example, we passed in a list of arrays. The object is the only required argument of the array function. NumPy functions tend to have a lot of optional arguments with predefined defaults. 
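To make the flattened snippets above easy to try, here is a minimal runnable consolidation. It uses the explicit np. prefix rather than the book-style bare arange/array calls, and the integer dtype may show up as int32 instead of int64 depending on your platform:

import numpy as np

a = np.arange(5)                          # the vector from the example above
print(a, a.dtype, a.shape)                # values 0..4, int64 (or int32), shape (5,)

m = np.array([np.arange(2), np.arange(2)])
print(m.shape)                            # (2, 2), a 2-by-2 array

print(np.arange(3, 7, 2))                 # start, stop, step -> [3 5]
print(np.arange(7, dtype='f').dtype)      # float32, single-precision floats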
Selecting elements From time to time, we will want to select a particular element of an array. We will take a look at how to do this, but first, let’s create a 2-by-2 matrix again: In: a = array([[1,2],[3,4]]) In: a Out: array([[1, 2], [3, 4]]) The matrix was created this time by passing the array function a list of lists. We will now select each item of the matrix one-by-one. Remember, the indices are numbered starting from 0. In: a[0,0] Out: 1 In: a[0,1] Out: 2 In: a[1,0] Out: 3 In: a[1,1] Out: 4 As you can see, selecting elements of the array is pretty simple. For the array a, we just use the notation a[m,n], where m and n are the indices of the item in the array. NumPy numerical types Python has an integer type, a float type, and a complex type, however, this is not enough for scientific computing and, for this reason, NumPy has a lot more data types. In practice, we need even more types with varying precision and, therefore, the different memory size of the type. The majority of the NumPy numerical types end with a number. This number indicates the number of bits associated with the type. The following table (adapted from the NumPy user guide) gives an overview of NumPy numerical types: For each data type, there exists a corresponding conversion function: In: float64(42) Out: 42.0 In: int8(42.0) Out: 42 In: bool(42) Out: True In: bool(42.0) Out: True In: float(True) Out: 1.0 Many functions have a data type argument, which is often optional: In: arange(7, dtype=uint16) Out: array([0, 1, 2, 3, 4, 5, 6], dtype=uint16) It is important to know that you are not allowed to convert a complex number into an integer. Trying to do that triggers a TypeError. This is shown as follows: In: int(42.0 + 1.j) -------------------------------------------------------------------- TypeError Traceback (most recent call last) TypeError: can't convert complex to int; use int(abs(z)) The same goes for the conversion of a complex number into a float. By the way, the .j part is the imaginary coefficient of the complex number. See the following code: In: float(42.0 + 1.j) -------------------------------------------------------------------- TypeError Traceback (most recent call last) TypeError: can't convert complex to float; use abs(z) Data type objects Data type objects are instances of the numpy.dtype class. Once again, arrays have a data type. To be precise, every element in a NumPy array has the same data type. The data type object can tell you the size of the data in bytes. The size in bytes is given by the itemsize attribute of the dtype class: In: a.dtype.itemsize Out: 8 The following diagram gives us a better understanding of data type objects: Character codes Character codes are included for backward compatibility with Numeric. Numeric is the predecessor of NumPy. Their use is not recommended, but the codes are provided here because they pop up in several places. You should instead use dtype objects. Look at the following code to create an array of single precision floats: In: arange(7, dtype='f') Out: array([ 0., 1., 2., 3., 4., 5., 6.], dtype=float32) Likewise this creates an array of complex numbers In: arange(7, dtype='D') Out: array([ 0.+0.j, 1.+0.j, 2.+0.j, 3.+0.j, 4.+0.j, 5.+0.j,6.+0.j]) dtype constructors We have a variety of ways to create data types. Take the case of floating-point data: We can use the general Python float: In: dtype(float) Out: dtype('float64') 2. We can specify a single-precision float with a character code: In: dtype('f') Out: dtype('float32') 3. 
We can use a double-precision float character code: In: dtype('d') Out: dtype('float64') 4. We can give the data type constructor a two-character code. The first character signifies the type; the second character is a number specifying the number of bytes in the type: In: dtype('f8') Out: dtype('float64') A listing of all full data type names can be found in sctypeDict.keys() : In: dtype('Float64') Out: dtype('float64' dtype attributes The dtype class has a number of useful attributes. For example, we can get information about the character code of a data type through the attributes of dtype : In: t = dtype('Float64') In: t.char Out: 'd' The type attribute corresponds to the type of object of the array elements: In: t.type Out: <type 'numpy.float64'> The str attribute of dtype gives a string representation of the data type. It starts with a character representing endianness, if appropriate, then a character code, followed by a number corresponding to the number of bytes that each array item requires. Endianness, here, means the way bytes are ordered within a 32 or 64-bit word. In big-endian order, the most significant byte is stored first. In little-endian order, the least significant byte is stored first. In: t.str Out: '<f8' Creating a record data type The record data type is a heterogeneous data type — think of it as representing a row in a spreadsheet or a database. To give an example of a record data type, we will create a record for a shop inventory. The record contains the name of the item, a 40-character string, the number of items in the store represented by a 32-bit integer and, finally, a price represented by a 32-bit float. The following steps show how to create a record data type: Create the record: In: t = dtype([('name', str_, 40), ('numitems', int32), ('price', float32)]) In: t Out: dtype([('name', '|S40'), ('numitems', '<i4'), ('price','<f4')]) 2. View the type (we can view the type of a field as well): In: t['name'] Out: dtype('|S40') If you don’t give the array function a data type, it will assume that it is dealing with floating-point numbers. To create the array now, we really have to specify the data type; otherwise, we will get a TypeError : In: itemz = array([('Meaning of life DVD', 42, 3.14), ('Butter', 13, 2.72)], dtype=t) In: itemz[1] Out: ('Butter', 13, 2.7200000286102295) What just happened? We created a record data type, which is a heterogeneous data type. The record contained a name as a character string, a number as an integer and a price represented by a float. Summary The topics we covered in this article were: Data types, Array types, Type conversions, and Array creation. Next, we will look at: Indexing, Slicing, and Shape manipulation.
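To tie the record data type walkthrough above together, here is a short runnable sketch of the shop-inventory example. The np. prefix is added so it runs as-is, and on Python 3 the string field prints as '<U40' (unicode) rather than the legacy '|S40' shown in the older output:

import numpy as np

t = np.dtype([('name', 'U40'), ('numitems', np.int32), ('price', np.float32)])
itemz = np.array([('Meaning of life DVD', 42, 3.14),
                  ('Butter', 13, 2.72)], dtype=t)

print(t['name'])         # the 40-character string field: <U40
print(itemz[1])          # the second record: ('Butter', 13, 2.72)
print(itemz['price'])    # select the whole price column by field name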
https://medium.com/python-in-plain-english/beginning-with-numpy-fundamentals-6faebbdfaec3
['Bhanu Soni']
2020-12-21 11:45:31.971000+00:00
['Machine Learning', 'Python', 'Numpy', 'Data Science', 'Pandas']
Market trends bulletin — Insights 2.0 Vol 8
Market Analysis: Bitcoin (BTC) is currently trading at 6674 USD (as of 24 September 2018, 11.04 IST) with a fall of 0.49% in the last 24 hours. As per the daily technical analysis forecast in our previous market trends bulletin, Insights 2.0 (Vol 7), BTC started a downtrend towards Support level 1 (6250 USD — 6300 USD). It was a good opportunity for margin traders to Sell/Short and book profits. Ethereum and other altcoins also went down and bounced back with BTC. The US Securities and Exchange Commission (SEC) extended its decision on the CBOE and VanEck Bitcoin ETF proposal. This news did not make a significant impact on the Bitcoin (BTC) trend. Bitcoin (BTC): As per the daily timeframe, BTC has tested daily Resistance level 1 (6784 USD) and failed to break and close above it because of selling pressure. The previous two daily candles closed as bearish spinning tops, which indicates a lack of determination, and bears dominated the last 48-hour session. So we anticipate two scenarios here: one, Bulls come back into the game and BTC may retest Resistance level 1 (6784 USD); or else BTC manages to break and close above Resistance level 1 (6784 USD) and we see a continued uptrend till Resistance level 2 (6984 USD). Ethereum (ETH): Ethereum is currently trading at 239 USD (as of 24 September 2018, 11.06 IST) with a fall of 1.19% in the last 24 hours. The daily timeframe shows ETH has tested Resistance level 1 (250 USD) but failed to break and close above it because of selling pressure. The ETH daily candle shows a bullish spinning top pattern, which again shows that the market seems undecided and Bulls have dominated the last 24-hour session. So we might see a bullish reversal if ETH breaks and closes above Resistance level 1 (250 USD). If it succeeds then more Bulls (buyers) will fill their bags at the retest and the uptrend may continue. But if ETH fails to break this Resistance level 1 (250 USD) then it will test Support level 1 (221 USD) and Support level 2 (185 USD). Aeternity (AE): AEBTC is currently trading at 0.0001514 BTC (as of 24 September 2018, 11.08 IST) with a fall of 1.11% in the last 24 hours. The daily timeframe analysis shows AEBTC has formed an ascending triangle pattern, which has a higher probability of a breakout than a breakdown. The daily candle closed as a bullish spinning top, which shows that the market is undecided and bulls dominated the last 24-hour session.
https://medium.com/koinex-crunch/market-trends-bulletin-insights-2-0-vol-8-3f7c1fdd6e04
['Team Koinex']
2018-09-24 11:10:37.249000+00:00
['Ethereum', 'Cryptocurrency', 'Bitcoin', 'Analysis']
Kotlin vs Java: reasons to switch from Java to Kotlin today
written by Bernardo do Amaral Teodosio, developer at Wavy; In mid-2016 we, as Android developers, were already eyeing the Kotlin language. At Movile Group, and at Wavy, we have innovation in our DNA, and as soon as the opportunity arose to start using the new language, we decided to do so. If you don't know Kotlin yet, a brief description I usually use when I talk about it is: a concise, statically typed programming language created by JetBrains and which is in constant development. It is fully interoperable with Java and also allows the creation of web and native applications. The Kotlin language was created by JetBrains, which is the same company that created the best IDEs currently on the market, such as PHPStorm, CLion, PyCharm and IntelliJ, which is also the basis of Android Studio, Google's official IDE for Android application development. Today's article is about Kotlin vs Java. I'll give you a few reasons why you should switch from Java to Kotlin today. If you are interested in the subject, read on! Java interoperability Kotlin being 100% interoperable with Java means you can start using it as soon as you want, in an existing Java project (be it a server or an Android application), without the need to migrate any code, and without having to wait for the opportunity of a new project to start using the language. You can insert Kotlin code into an existing project today, call it from Java code, and vice versa, and you will have no problem with that. This is a good reason to give the language a try. Officially supported by Google In 2016, when we started using Kotlin in our projects, Google had not yet officially commented on the use of the language for Android development. However, as we have a culture of testing and learning fast, this silence was not an impediment — a fact that we are proud of today. Since the official announcement by Google of support for the Kotlin language for Android development, at Google I/O 2017, the number of Android applications written partially or entirely in Kotlin has been growing, which means that more and more developers on the market are making use of it. As of Android Studio 3.0, support for the Kotlin language was added natively in the IDE, making it even easier to develop applications using it. Simple, concise and low-verbosity syntax Kotlin has a simple and concise syntax. Code written in this language has better readability than equivalent code written in Java. Below are three different ways to write a simple function that adds two numbers together. fun sum(a: Int, b: Int): Int { return a + b } fun sum2(a: Int, b: Int) = a + b public class JavaTests { public static int sum(final int a, final int b) { return a + b; } } The first 2 examples are written in Kotlin, while the third is written in Java. We can see that the Kotlin code, in addition to being more readable, is also much less verbose. We don't need to create a class to write a simple function, as in Java. Furthermore, it goes without saying that the parameters are final in Kotlin. They already are, by default — which is another very positive point in the language, which encourages immutability. Oh, and another wonder: semicolons are optional! class KotlinClass(private val firstAttribute: String, private val secondAttribute: Int) { private var mutableIntAttribute = 0 private var mutableStringAttribute = "Hi!" 
} As explained in the examples above, the Java class constructor, which does nothing but initialize the class attributes with the values ​​of the parameters provided, was rewritten in Kotlin in a much simpler way, just after the class name. Everything in parentheses after the name of the Kotlin class are properties of the class, initialized from its primary constructor parameters. Data classes At some point, you have certainly had to make classes that simply serve to represent a set of data in the form of a model, and nothing more. They are the famous Models / POJOs classes. Generally, these classes have nothing special, just a few attributes: a constructor that initializes them, getters and setters methods, and sometimes override the equals (), hashcode () and toString () methods. Although simple, in Java they tend to get very large due to the verbosity of the language. Below is an example of a class that represents a user in Java: public class JavaUser { private String id; private String firstName; private String lastName; private int age; public JavaUser(final String id, final String firstName, final String lastName, final int age) { this.id = id; this.firstName = firstName; this.lastName = lastName; this.age = age; } public String getId() { return id; } public void setId(final String id) { this.id = id; } public String getFirstName() { return firstName; } public void setFirstName(final String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(final String lastName) { this.lastName = lastName; } public int getAge() { return age; } public void setAge(final int age) { this.age = age; } @Override public boolean equals(final Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; final JavaUser javaUser = (JavaUser) o; if (age != javaUser.age) return false; if (id != null ? !id.equals(javaUser.id) : javaUser.id != null) return false; if (firstName != null ? !firstName.equals(javaUser.firstName) : javaUser.firstName != null) return false; return lastName != null ? lastName.equals(javaUser.lastName) : javaUser.lastName == null; } @Override public int hashCode() { int result = id != null ? id.hashCode() : 0; result = 31 * result + (firstName != null ? firstName.hashCode() : 0); result = 31 * result + (lastName != null ? lastName.hashCode() : 0); result = 31 * result + age; return result; } @Override public String toString() { return "JavaUser{" + "id='" + id + '\'' + ", firstName='" + firstName + '\'' + ", lastName='" + lastName + '\'' + ", age=" + age + '}'; } } Kotlin has a special type of classes that can be used especially for these cases. These are called data classes. Using this functionality, the equivalent Kotlin code can be seen below: data class KotlinUser(var id: String, var firstName: String, var lastName: String, var age: Int) As we can see, the class written in Kotlin is much simpler, faster to write and also simpler to read, since it is not filled with unnecessary code that hinders reading. By default, data classes already have a useful implementation of equals (), hashCode () and toString (), which use the properties defined in the class constructor as parameters, so it is not necessary to re-implement these methods. In addition, data classes also have a copy () method, which is used to create copies of an instance by changing only certain attributes. 
Suppose, for example, that you want to create a new user, with the same attributes as an existing user, but only changing the age. It is simple to do this using this method: fun newUser(newAge: Int, oldUser: KotlinUser): KotlinUser { return oldUser.copy(age = newAge) } Null safety If you are a Java programmer, you may have seen the following line on your monitor: java.lang.NullPointerException The so famous and feared NullPointerException, which occurs when we try to call a method, access an attribute or do something like that in a null instance, that is, using a reference that points nowhere, is probably the most common exception — and also the most annoying — when it comes to Java. Worse still, it occurs at run time — which means that the execution of your application can be interrupted when this exception is thrown, severely harming its users. With Java 8, the concept of Optionals came into the language, which helps to avoid the number of occurrences of this exception. However, if you are an Android developer, you know how difficult it is to use features from newer versions of Java in applications. Due to the limitations of the system versions, in order to be able to use new Java features in older versions of Android, we need to look for compatibility alternatives, such as Retrolambda. Using Kotlin, we don’t have to worry about that. The language differentiates references that can be null from those that cannot be, according to the defined type. If you define a type as being “non-null”, then the language guarantees, at compile time, that null values ​​will not be assigned to the reference of the non-null type. // example 1 var notNull: String = "Hi!" // example 2 var nullableAttempt: String = null // compile time error // example 3 var nullable: String? = null // optional type must be used // example 4 var inferredType = "Inferred type here is String, non null" In the first example, we have a reference of type String, explicitly declared. This reference cannot receive null values. In the second example, we have a reference of type String, also explicitly declared. When trying to assign the value null to this reference, we get a compilation error, since the reference is of type String and not String? (the presence of the interrogation indicates a type that can receive null values). In the third example, we have a reference of type String ?, nullable, which can receive null values. In the fourth example, we have a reference of type String, not explicitly declared, but which is inferred by the compiler from the value assigned to the variable. // example function fun getLengthPlusThree(string: String): Int { return string.length + 3 } // test 1 fun testGetLength(): Boolean { val expectedResult = 6 val length = getLengthPlusThree(string = null) // compile time error return expectedResult == length } // test 2 fun testGetLength2(anotherString: String?): Boolean { val expectedLength = 4 val length = getLengthPlusThree(string = anotherString) // compile time error return expectedLength == length } // test 3 fun testGetLength3(testString: String?): Boolean { val expectedLength = 4 // if is an expression that can be used as a return return if (testString != null) { expectedLength == getLengthPlusThree(testString) } else { false } } In the tests above, we see some examples of the functionality in use. We initially have an example function, getLengthPlusThree (), which returns the length of a string (received by parameter) plus three. 
Note that the string received by parameter is non-null, which means that it is impossible to call this function by passing a null string as a parameter. In the first test, we tried to call the function by directly passing a null string as a parameter. As expected, we received a compile-time error. In the second test, we try to call the function by passing a nullable string as a parameter. Again, we have a compilation error. Although we don’t know what the value of the anotherString parameter is, we do know that it can be null. Because of this possibility, the getLengthPlusThree () function cannot be called. In the third test, we use an if to test the received parameter value. After ensuring that the parameter is not null, we can normally call the getLengthPlusThree () function, as there is no longer any possibility that the value of the passed parameter is null. In these examples, we can also notice two other interesting features of the language. The first of these is the naming of parameters. In tests 1 and 2, when calling the getLengthPlusThree () function, we explain the name of the function’s string parameter when calling it. Although optional, this feature makes the code more readable — especially when calling functions with parameters of the boolean type. The second feature is the return of the if expression. In Kotlin, we can use the if (and also other constructs) as a function return. Extension functions The famous Utils classes, any reasonably large Java project contains at least one: StringUtils, FileUtils, AndroidUtils. It is very likely that you have already used or even created one, at least once. In general, Utils classes are created to group methods that perform certain operations on objects of the same type. A FileUtils class, for example, would be full of static methods, which take a File as a parameter, and perform a certain operation with this file. Such classes, although functional (they are, in fact, useful), are usually composed of verbose code, and consequently more laborious to read and write. Kotlin has a feature that solves the problem of the Utils classes and makes the code more simple, readable and beautiful. In addition, it also facilitates its maintenance: the Extension Functions. Basically, Kotlin provides the ability to add functionality to an existing class, without having to inherit it for this. fun String.suffixedWithAmazing(): String { return this + " is amazing!" } In the example above, we created a function that extends the String class. Now, we can call this function on any string, as in the example below: fun testExtension() { println("Kotlin".suffixedWithAmazing()) // prints "Kotlin is amazing!" } If the function is used in a file other than the one in which it was declared, it is necessary to import it. Now consider the example below, in which an Extension Function was created to calculate the size in megabytes of a file. Instead of creating a FileUtils class to do this, we can write it in the following format, in Kotlin: private const val BYTES_TO_MEGABYTES_CONVERTER = 1000000.0 fun File.sizeInMegaBytes(): Double { return this.length() / BYTES_TO_MEGABYTES_CONVERTER } fun testExtensionFunction(vararg files: File) { files.forEach { file -> println(file.sizeInMegaBytes()) } } The vararg keyword indicates that the number of files that the method will receive as a parameter is undefined (variable number of arguments). If you develop Android, you’ve probably used Glide or Picasso, famous libraries for uploading images. 
We can use Extension Functions to create an abstraction around these libraries which, in addition to making the code smaller and much more readable, also facilitates an eventual replacement of libraries. The example below illustrates this situation: fun ImageView.load(url: String) { Glide.with(this.context).load(url).into(this) } private fun displayImageTest(imageView: ImageView) { imageView.load("https://kotlinisreallyawesome.org/kotlin.png") } Higher order functions and streams Kotlin supports high-order functions. It is possible to use functions like types and parameters, and even return functions from a function. With this functionality, a wide range of possibilities for functional programming using Kotlin is open. But, not only for this reason, it is possible to create a much cleaner, concise and simple code, using high-order functions than in Java, which only (partially) introduced this functionality from Java 8. If you use RxJava for Android development, you probably also use some additional library to use lambdas, as this functionality was only introduced in Java 8, and is fundamental to the functioning of RxJava. Using Kotlin, no additional library is needed, as the language natively supports this functionality. In addition, the language also has streams and operators for streams, extremely useful features that were only introduced in Java 8. Below is an example of Kotlin code that uses high-order streams and functions: import java.util.concurrent.Future import kotlin.concurrent.thread data class FetchOrderResult(val success: Boolean) fun testStreams(orderCallback: (FetchOrderResult) -> Unit) { val bigSmokeOrder = listOf( "I want two number nines", "a number nine large", "a number six with extra dip", "a number seven", "two number forty-five", "one with cheese", "and a large soda") val lightBigSmokeOrder = bigSmokeOrder .filter { it.contains("nine") } .filter { line -> !line.contains("large") } .map { it.toUpperCase() } lightBigSmokeOrder.forEach { println(it) } lightBigSmokeOrder.forEach(::println) // same thing as above thread { val order = fetchOrder(orderList = lightBigSmokeOrder).get() orderCallback(order) } } fun fetchOrder(orderList: List<String>): Future<FetchOrderResult> { TODO() // do something with orderList and create a FetchOrderResult } The testStreams () function builds a list of items for an order, filters them based on certain criteria and then sends the order, in a new thread, invoking the fetchOrder () function. When this is completed, the function invokes orderCallback (), the callback with the order’s response. Note that orderCallback () is a function that was passed as a parameter to the initial testStreams () function. Take a chance You may have noticed that when it comes to Kotlin vs. Java, Kotlin wins. In addition to all these reasons and in addition to your productivity increasing a lot, it is fun to program in Kotlin. It’s fun not having to write a bunch of boilerplate to do something simple, discovering a new way of programming and not getting stuck in a certain version of the language and failing to try new features because of that. The world spins, time passes and we are not obliged to continue in the past. Take a chance on something new and try Kotlin. In this and this link you can find parts of a live in which I participated, talking about how we use the language at Movile. Below, a presentation I made at Kotlin Night Campinas, an event that took place in 2017, in which I had the opportunity to talk a little about the language. 
The audio of the presentations of the event can be found here.
https://medium.com/wavy-engineering/kotlin-vs-java-reasons-to-switch-from-java-to-kotlin-today-4703548e0d04
['Wavy Global']
2020-10-26 13:16:19.403000+00:00
['Technology', 'Innovation', 'Kotlin', 'Java']
3 Detailed Templates for Your Next Testing with Users
Designers often underestimate the value of a full written script for usability testing. But there is no shame in always having one — in fact, an elaborate and well-thought-through script is an expert habit. Inspired by the classic template that Steve Krug composed ten years ago, I created another one to reflect my usability testing experience and adjust good old techniques to the 2020 reality. Template structure Facilitator's part: not shared with a user but essential to keep in mind. Intro for a user: an explanation of the goal and "rules of the game." Mini-interview: ideas for getting acquainted with a user. Tasks with a prototype or live product: task hierarchy and examples. Outro: session closing and the next steps. Helpful reading
https://uxdesign.cc/usability-testing-templates-9b79b40eb481
['Slava Shestopalov']
2020-09-15 19:22:28.353000+00:00
['User Experience', 'Product Management', 'Design', 'UX', 'UX Research']
How to Find Work You Love in a Recession
Risk is now a big factor When you think about it, hiring someone is always a risk. Businesses are essentially taking strangers with certain credentials and paying them a chunk of money to handle specific tasks. Many steps can go wrong with that, which is why businesses tend to have a little leeway with their hiring budget. But in recessions, many companies are going to try and mitigate risk as much as possible. When combined with the job market being a seller’s market, it means that the deck is stacked against you. There are a few things you might have seen already. Perhaps you applied to the job online, only to discover that the job doesn’t exist due to hiring freezes. Or you may have noticed that job descriptions suddenly require much more experience. What you need to understand is that they’re not doing that to be mean to you: they’re doing that so that they don’t risk a bad employee. Someone very senior who has those years of experience is less likely to screw something up versus a fresh-faced college graduate. So you have two options here: make yourself seem less risky, or embrace the risk. The value of free work In his book Recession-proof graduate, Charlie Hoehn talks about the value of free work, which bridges this gap. Setting up trial periods with a company where you do work on a project for free, with a firmly established end date, is something that removes a lot of that risk for companies. They see how you work, how you interact with the team, and whether you’re able to get the job done. And I know what you’re saying right now: why would I ever work for free? That seems like a scam. But, it could also lead to your dream job. Let me ask you a question. If you saw your dream job posting on a website, how would you stand out against 35-year-olds with a decade more experience who just lost their jobs? On paper, there might be next to nothing you can do. But by mitigating that risk, you suddenly make the choice more competitive: Do I hire the 35-year old that looks good on paper, or the 22-year old that works well with the team? Also, choosing to do free work doesn’t mean you do it for everything. Instead, if there’s a project that interests you, with a company that you like, then offering to work for free means that you get to have real-world experience working with them. And if you’re still morally opposed to free work, make a Fiverr/Upwork account and charge $10. It’s still the same basic principle: getting real-world experience with companies and showing that you can deliver. At the very least, you can learn something: practicing your skills with real-world clients, learning about team dynamics, or even adding to your portfolio. Not to mention, researching companies and fields is going to be an essential skill to learn. Disruptive technology and risk Imagine that you tried to do the ‘safe route.’ You chose a conservative major like Engineering, you did your internships and classes and then were hired by a large manufacturing firm out of college. Then the 2008 recession hit, and you were out of a job. What types of jobs would you be looking for during that time? More manufacturing companies, which were a rapidly shrinking industry at that time? Or would you look to transition your skills into another field? If you making yourself as low risk as possible doesn’t sound appealing, you’re not alone. 
Between competing with experienced job seekers and taking jobs at companies where you weren’t sure if they’d implode, not to mention being a decade behind financially in major life choices, there weren’t many good choices. If that’s the case, then the other option is to embrace the risk and try to understand emerging markets. One of the effects that the 2008 recession had was changing fields for better or for worse. While manufacturing and retail suffered immensely, disruptive technology changed the face of Silicon Valley. Startups like Slack, Uber, and Instagram rose out of the last recession, embracing the risk of turbulent markets and eventually changing the way we work, drive, and socialize. And the driving forces behind these companies? Fresh-face college graduates. It may be too early to tell for this recession, but some fields are going to shrink or expand significantly in the next few years. Nightlife and retail might shrink, while telehealth and remote work technologies may expand. Keep an eye on startup-focused job sites like Angelist to see if there’s a risk you want to take.
https://medium.com/the-post-grad-survival-guide/how-to-find-work-you-love-in-a-recession-29d13a30cbf2
['Kai Wong']
2020-06-13 07:14:00.991000+00:00
['Work', 'Recession', 'Productivity', 'Jobs', 'Job Hunting']
The Top Game Designer Apps for Mobile Creators
There are now over 2.2 billion active gamers around the world, and the market is expected to reach $143.5 billion by 2020. In such a massive market, it’s hard to stick out from the crowd. Here are a few of our favorite game designer apps to help make your great ideas just a little bit better. Unity — the Top Mobile Game Engine for a Reason Choosing the right engine is a complex decision for mobile game designers and studios. The competencies and preferences of the team, structure of the game, tech stack, community, and a host of other features all play a role. Unity is always a serious contender for mobile designers and developers. Currently, Unity powers over 50% of mobile games and 60% of augmented and virtual reality content, including 34% of the top 1000 free mobile games. It’s a great game design app for both Android and iPad, along with virtually any platform you can think of, including multiple VR platforms and console gaming systems. This looks like a really cool mobile game universe. Unity is a usually touted as a developer tool, but there’s plenty for designers to love too. It integrates with virtually any graphics or animation tool and has extremely sophisticated functionality in a designer-friendly interface. Animation, physics, lighting, and post-processing effects are gorgeous, and best of all: there’s a large community to draw on. That community keeps the asset store well-stocked with templates, textures, pre-built characters and environments, and other digital assets. With access to so much great content, Unity can speed up the design process, provide inspiration for designers, and ensure you never have to reinvent the wheel. Stencyl — A Simple Tile-Based Game Design Engine While few video game design apps offer the power of Unity, not every game requires it. For simple concepts and indie developers, sometimes simpler is better. Enter Stencyl, a development platform built with an eye to accessibility. The app is designed for creating two-dimensional games, using an intuitive drag and drop interface. A tile-based system makes level design a snap. The system also has good support for active objects, allowing you to design fairly complex character behavior, and tweak physics, animation and collision for the perfect look and gameplay. You can do all of this without coding, building games for Android or iOS, along with Mac, Windows, Linux, and even Flash (remember Flash?). For those who want more control, Stencyl does support coding via Haxe, and offers both engine and toolset SDKs, along with third-party plugins and coding-free ad integration for monetization. We love these old school styled games. Stencyl does have some hard limits. If you’re into virtual reality or traditional 3D games, it’s not the tool for you. Likewise, there are more sophisticated 2D tools out there. However, for indie developers and anyone creating simple, addictive mobile games, Stencyl is a great option. Check out their mobile showcase to see some of the games users are creating. Stencyl offers a free account for education, testing, and publishing on Flash (which obviously won’t get you mobile users). To get access to mobile platforms, you’ll need a Studio license, at $199 per year. Spine — Vivid 2D Skeletal Animation That Doesn’t Hog Resources In mobile game design, tools are more than just tools. The software you use, the workflow, benefits, and constraints can profoundly affect how your game looks, feels, and plays in its final iteration. 
As a tool optimized for 2D skeletal design, Spine can transform your approach to game animation, leading to a more efficient workflow than traditional sprite sheets. A character can be animated in multiple scenes with only a single set of images. It doesn’t matter whether the character is running uphill, jumping between platforms, or sitting down — once you get your character right, you can animate them in any situation without drawing frame by frame animation (although Spine does support frames as well). You can even reskin characters, using the same essential skeleton for multiple characters to save work. This can accelerate your workflow, making it much easier and less time consuming to animate complex movements smoothly and realistically. That in turn allows you to do more with motion, creating complex animations that might not have fit into your timeframe otherwise. It also redefines what your app can accomplish. Sprites take a lot of memory — still a limited resource in mobile apps — and too many animations could lower adoption and use. Spine can expand the limits of what’s possible in mobile game design. Definitely looks like the monsters under our beds. Spine isn’t confined to classic humanoid characters with firm textures. Mesh deformation can give softness and stretch to support realistic body textures, and bones can be used to impose path constraints, allowing you to fluidly animate moving vines, rubbery or stretchy bodies, and complex mechanical movements using the same tools you’d use for a conventional human character. Check out the Spine demos to see it in action. For pricing, Spine bucks the trend of perpetual subscription in favor of one-time, relatively affordable prices. Small businesses and private users can buy Spine Essential for $69. There are also Professional licenses (with perpetual updates) at a fixed price, along with Enterprise and Educational licenses available. Overflow.io — Get User Flow Right Mobile games are played by a diverse audience, in a wide range of situations. Your game will likely have seasoned pros who intuitively grasp gaming conventions, along with new players who might need a little hand-holding to learn the basics. Your players will enjoy it during lunch break or on their commute (assuming they’re not driving!) where they may have frequent interruptions, as well as at home, where they’re less likely to be disturbed. You don’t have to accommodate all these players and situations, but it’s to your benefit to do so. If busy gamers can’t play short sessions on the go, some of them will uninstall your game before they find out how great it is. If new gamers don’t have a tutorial and/or clearly laid out controls, they’ll never become dedicated fans. It’s not enough to make a great game, you need to build a great experience — and that requires well-designed user flow. Overflow.io helps you conceptualize and plan a great user flow early in the design process, ensuring your app will be built around user needs. Overflow.io does one thing, and it does it extremely well: creating playable user flow diagrams. Creating your user flow is incredibly important in mobile game design. Designers can quickly build vivid, realistic screens, then connect them together into a flowchart that simulates the behavior of the app very quickly (about 20 minutes to link 60 artboards together). 
Users can then click buttons to navigate through the chart — either in the diagram view, or in a rapid prototype that simulates the flow of the app (check out this example user flowchart to see how it works). Not only is this useful for designers conceptualizing and developing flow, it also helps them communicate their thought process to other stakeholders effectively. Therefore, it’s also a useful tool for entrepreneurs, developers, and anyone else who needs an easy and effective way to illustrate their app ideas. Overflow is available in a free beta for MacOS. A Windows version is already in the works. Proto.io — How to Build an App the Right Way With the power of modern video game design apps, it’s tempting to jump into building the actual game a little too quickly. Tweaking backgrounds and character appearances, playing with physics and building smooth animation is a lot of fun, and it’s the point where the game starts to become “real” for a lot of designers. But when a game is thrown together without enough planning, players know it. Inconsistent design and behavior distract users from your game, settings and characters don’t mesh as well with each other, and plot elements (if your game has any) feel tagged on. Poor planning can turn beautiful design and engaging mechanics into a game that’s not worth the space it takes up on your phone. Proto.io helps designers turn their vision into a strong game blueprint, setting their team up for success. Proto.io lets designers build realistic, playable prototypes using animations, gesture controls, and interactions to bring the idea to life — all without writing a line of code. Thanks to functional prototypes, Proto.io can help you launch a mobile game with fewer bugs. Of course, this allows you to keep working on the elements that make a game exciting without losing track of how each piece fits into the game as a whole or diverging from the vision of other team members. Add in integration with design tools like Photoshop and Sketch, it’s easy to fit in with your existing workflow. Combined with an intuitive, user-friendly interface, that makes it a lot easier to bring your vision to life. Proto.io offers a number of pricing options, making it a great option for anyone from freelancers to major game development studios. But don’t take our word for it — start your 15-day free trial to see if it’s right for you. The Right Game Designer Apps Mobile has changed gaming forever, transforming it from a niche interest to a hobby enjoyed by billions worldwide. At the same time, gaming has transformed mobile apps, challenging developers to make everyday tasks more engaging, beautiful, and enjoyable. The right game designer apps can help you stay on the cutting edge, making innovative games that challenge and delight your users. Here are a few more articles, with tips and inspiration for your next mobile design project: Proto.io lets anyone build mobile app prototypes that feel real. No coding or design skills required. Bring your ideas to life quickly! Sign up for a free 15-day trial of Proto.io today and get started on your next mobile app design. Know a great mobile game design app we left out? Let us know by tweeting us @Protoio!
https://protoio.medium.com/the-top-game-designer-apps-for-mobile-creators-843641bb6e52
[]
2018-10-10 14:01:01.895000+00:00
['Game Design', 'Mobile Apps', 'UI', 'Tools', 'Design']
Understanding GRU Networks
In this article, I will try to give a fairly simple and understandable explanation of one really fascinating type of neural network. Introduced by Cho, et al. in 2014, GRU (Gated Recurrent Unit) aims to solve the vanishing gradient problem which comes with a standard recurrent neural network. GRU can also be considered a variation on the LSTM because both are designed similarly and, in some cases, produce equally excellent results. If you are not familiar with Recurrent Neural Networks, I recommend reading my brief introduction. For a better understanding of LSTM, many people recommend Christopher Olah’s article. I would also add this paper, which gives a clear distinction between GRU and LSTM. How do GRUs work? As mentioned above, GRUs are an improved version of the standard recurrent neural network. But what makes them so special and effective? To solve the vanishing gradient problem of a standard RNN, a GRU uses the so-called update gate and reset gate. Basically, these are two vectors which decide what information should be passed to the output. The special thing about them is that they can be trained to keep information from long ago, without washing it through time, or to remove information which is irrelevant to the prediction. To explain the mathematics behind that process we will examine a single unit from the following recurrent neural network: Recurrent neural network with Gated Recurrent Unit Here is a more detailed version of that single GRU: Gated Recurrent Unit First, let’s introduce the notations: If you are not familiar with the above terminology, I recommend watching these tutorials about the “sigmoid” and “tanh” functions and the “Hadamard product” operation. #1. Update gate We start with calculating the update gate z_t for time step t using the formula: z_t = σ(W(z) x_t + U(z) h_(t-1)). When x_t is plugged into the network unit, it is multiplied by its own weight W(z). The same goes for h_(t-1), which holds the information for the previous t-1 units and is multiplied by its own weight U(z). Both results are added together and a sigmoid activation function is applied to squash the result between 0 and 1. Following the above schema, we have: The update gate helps the model to determine how much of the past information (from previous time steps) needs to be passed along to the future. That is really powerful because the model can decide to copy all the information from the past and eliminate the risk of the vanishing gradient problem. We will see the usage of the update gate later on. For now, remember the formula for z_t. #2. Reset gate Essentially, this gate is used by the model to decide how much of the past information to forget. To calculate it, we use: r_t = σ(W(r) x_t + U(r) h_(t-1)). This formula is the same as the one for the update gate. The difference comes in the weights and the gate’s usage, which we will see in a bit. The schema below shows where the reset gate is: As before, we plug in h_(t-1) — blue line and x_t — purple line, multiply them with their corresponding weights, sum the results and apply the sigmoid function. #3. Current memory content Let’s see how exactly the gates will affect the final output. First, we start with the usage of the reset gate. We introduce a new memory content h’_t which will use the reset gate to store the relevant information from the past. It is calculated as h’_t = tanh(W x_t + r_t ⊙ U h_(t-1)), in the following steps: Multiply the input x_t with a weight W and h_(t-1) with a weight U. Calculate the Hadamard (element-wise) product between the reset gate r_t and U h_(t-1). That will determine what to remove from the previous time steps. 
Let’s say we have a sentiment analysis problem for determining one’s opinion about a book from a review they wrote. The text starts with “This is a fantasy book which illustrates…” and after a couple of paragraphs ends with “I didn’t quite enjoy the book because I think it captures too many details.” To determine the overall level of satisfaction from the book we only need the last part of the review. In that case, as the neural network approaches the end of the text, it will learn to assign the r_t vector values close to 0, washing out the past and focusing only on the last sentences. Sum up the results of steps 1 and 2. Apply the nonlinear activation function tanh. You can clearly see the steps here: We do an element-wise multiplication of h_(t-1) — blue line and r_t — orange line and then sum the result — pink line with the input x_t — purple line. Finally, tanh is used to produce h’_t — bright green line. #4. Final memory at current time step As the last step, the network needs to calculate h_t — the vector which holds information for the current unit and passes it down to the network. In order to do that, the update gate is needed. It determines what to collect from the current memory content — h’_t — and what from the previous steps — h_(t-1). That is done as follows: Apply element-wise multiplication to the update gate z_t and h_(t-1). Apply element-wise multiplication to (1-z_t) and h’_t. Sum the results from steps 1 and 2, which gives h_t = z_t ⊙ h_(t-1) + (1 - z_t) ⊙ h’_t. Let’s bring up the example about the book review. This time, the most relevant information is positioned in the beginning of the text. The model can learn to set the vector z_t close to 1 and keep the majority of the previous information. Since z_t will be close to 1 at this time step, 1-z_t will be close to 0, which will ignore a big portion of the current content (in this case the last part of the review which explains the book plot) which is irrelevant for our prediction. Here is an illustration which emphasises the above equation: Following through, you can see how z_t — green line is used to calculate 1-z_t which, combined with h’_t — bright green line, produces a result in the dark red line. z_t is also used with h_(t-1) — blue line in an element-wise multiplication. Finally, h_t — blue line is a result of the summation of the outputs corresponding to the bright and dark red lines.
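To make the four steps concrete, here is a minimal sketch of a single GRU cell in NumPy. It follows the equations above exactly; the weight names (W_z, U_z, and so on), the toy dimensions, and the random initialization are invented purely for illustration, and bias terms are omitted as in the formulas above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU time step, following the four stages described above."""
    # 1. Update gate: z_t = sigmoid(W_z x_t + U_z h_prev)
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)
    # 2. Reset gate: r_t = sigmoid(W_r x_t + U_r h_prev)
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)
    # 3. Current memory content: element-wise product of r_t with U_h h_prev
    h_hat = np.tanh(W_h @ x_t + r_t * (U_h @ h_prev))
    # 4. Final memory: h_t = z_t * h_prev + (1 - z_t) * h_hat
    return z_t * h_prev + (1.0 - z_t) * h_hat

# Toy dimensions, chosen only for the example
input_size, hidden_size = 4, 3
rng = np.random.default_rng(0)
W_z, W_r, W_h = (rng.normal(size=(hidden_size, input_size)) for _ in range(3))
U_z, U_r, U_h = (rng.normal(size=(hidden_size, hidden_size)) for _ in range(3))

h = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):  # a short sequence of 5 time steps
    h = gru_cell(x, h, W_z, U_z, W_r, U_r, W_h, U_h)
print(h)  # hidden state after reading the whole sequence
```

Running the hidden state through the whole sequence like this is exactly what lets the gates decide, step by step, what to keep from the past and what to overwrite with new content.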
https://towardsdatascience.com/understanding-gru-networks-2ef37df6c9be
['Simeon Kostadinov']
2019-11-10 11:19:39.809000+00:00
['Machine Learning', 'Recurrent Neural Network', 'Lstm', 'Artificial Intelligence', 'Neural Networks']
5 Oddball Tales from History You’ve Likely Never Heard Of (Part 9)
Drunken Aviator Lands in City Centre, 1956 Bulky sedans rumbled sedately along the right-angled streets, and haggard creatures of the night here and there passed under the patchy street lighting past rows of rectilinear brownstone tenements. It was the witching hour on St Nics Avenue in New York City’s heart. Of course in the city that never sleeps life still stirred, and it was about to get a serious wake up call. Jimmy was wiping down the bar waiting for the last of his patrons to stumble out after a long night. The edge of his lips curled up with a wry smile; earlier that night one of his favourite patrons, a gung-ho flyboy named Thomas ‘Fitz’ Fitzpatrick made a bet that he could fly from New Jersey to New York City in 15 minutes. “I’ll land out there to prove it, how ‘bout that?” slurred Fitz. “OK ya crazy, drunken Irishman” laughed Jimmy “Hold my beer then.” And, with a leery grin Fitzpatrick plodded out the door. Good laughs, thought Jimmy. That was two hours ago. A barking dog out the window broke his reverie and Jimmy looked up to see a late night walker and his dog facing opposite directions; the man was pulled back by his leashed dog. The mut was staring back up the street and whined, its head tilted with that gaze of rapt concentration only a dog can do. “Come on!” the guy bawled, looking bewildered. Then Jim detected the sound of an engine, but it was no automobile; it was more of a deep buzz, and it quickly got louder. That sound was one of a small plane approaching and crazy as it sounds Fitzpatrick was making his approach to land the thing on the Avenue. One or two cars screeched to a halt as the small aircraft buzzed overhead. Bedroom lights flicked on and anyone quick enough caught a fleeting glimpse of Fitzpatrick as he zipped by. Jimmy slammed the door open in time to witness, mouth agape, the plane touchdown and whizz past his bar before coming to a stop. So Fitz won the bet after all! After leaving the bar, Fitzpatrick had hightailed it 15 miles across the state line to Teterboro Airport and there, stole an aircraft. What the wager was is unknown but he won his bet and his antics made newspaper headlines. The New York Times called the flight a “feat of aeronautics” and a “fine landing”, and a plane parked in the middle of the street made for quite a sight in the morning. For his illegal flight, he was fined $100 after the plane’s owner refused to press charges. Incredibly Fitzpatrick performed the same stunt again in 1958 because in another bar someone questioned the story. For that, he was sentenced to 6 months incarceration, blaming his antics on the “lousy drink”. Erfurt Latrine Disaster, 1184 A complex, multi-ethnic patchwork of kingdoms made up the Holy Roman Empire that was the most powerful Christian kingdom to arise from the ashes of the Dark Ages. Politically it was a kaleidoscope of shifting alliances and rivalries as it’s greatest nobles incessantly manoeuvred for power. In the year 1184 AD a feud between two of the empire’s most powerful leaders of the land was now reaching boiling point. On one side was Louis the Pious, Landgrave of Thuringia. On the other side was the Archbishop Conrad of Mainz. This political schism was in danger of wrenching apart the mighty empire from within and could be ignored no longer. On the way towards Poland on a military campaign King Heinrich VI of Germany halted in the Thuringian capital of Erfurt to call a meeting of nobles and end the dispute before it spiralled out of control. 
So it was that a hundred or so of the empire’s most important Dukes, Counts and clergymen congregated in the meeting hall of the Church of St Peter in the summer of 1184. Yet there was a hidden danger unbeknown to all and it would shortly spell the doom of many present. The noblemen took their seats whilst the King sat apart from the highborn rabble in an alcove. The floor was wooden and creaked loudly as men moved over it. They took their seats and some looked down nervously, feeling how loose it felt, yet said nothing. That floor was all that separated these dozens of men from a cellar below. The cellar however was a massive latrine filled with tonnes and tonnes of liquid smelly brown stuff, and must have been metres deep. Indeed one would imagine the stench was overwhelming the moment they entered the room; it is anyone’s guess why they chose to meet there in the first place. An ear splitting crack sounded out a split second warning before they were plunged into the dark pit below. Alarmed calls and shrieks thundered off the stone walls as men struggled and foundered in the thick liquid, fighting a losing battle to keep their heads above the surface. The survivors could only stare down breathlessly as they watched roughly 60 of their kin perish in the most humiliating manner a lord could possibly die. The Erfurt Latrine Disaster sent a shock wave through the empire as the staggering death toll included the Counts of Abinberc, Thuringia, Hesse, Kirchberg and Wartburg. King Heinrich was said to have survived only because his alcove had a stone floor. Child Drowned for an Hour Survives, 1986 On a hot June day the birds were singing, the bees buzzing, and mum’s voice on the phone wafted through the warm air, so warm after a late start to Summer. Her reassuring tones set her blond haired toddler at ease to range the backyard’s expanse and soak up its lush colours. The green foliage was offset by a beautifully painted butterfly, drifting into focus for the keen eyed child. Two and a half year old Michelle Funk’s eyes sparkled in awe and the eyes on the butterfly’s wings waved back. She lunged to grope the floating beauty to hold it. The butterfly flittered on towards the sound of gushing water. Could the intrepid infant reach the insect before the forest of grass which marked the garden boundary end the chase? Her mother’s voice was now almost drowned out by the babble of icy cold water below. She got her break; in a chance moment the butterfly dipped in time for Michelle to swing her little arms up and capture her quarry. But the ground treacherously slipped downwards; her face an instant of triumph turned to alarm as she vanished under the grass blades towards the water’s edge …Michelle’s alert older brother hared back to the house. At the Bells Canyon Creek bank Michelle tumbled down through the grass then plunged over the edge. There was no one to respond to her gurgled cries. As the warm sun rays glistened off the mountain melt water Michelle slipped under, lost. The minutes ticked by; her skin now a ghostly white and her flame barely flickering. After 66 minutes a rescuer finally hauled her blue, lifeless form from the 4 Celcius (40 Fahrenheit) water. Could she be saved at all? If there was even the smallest chance it was worth the try. They rushed her to hospital where a Dr Bolte was waiting. The extreme time Michelle had been submerged had surely drowned her. 
Many doctors, knowing how long she’d been submerged, would have declared her dead on arrival — indeed some of them thought Bolte crazy for even entertaining the notion she had a decent chance. Yet one factor was in her favour; instead of sealing her fate, the icy submersion had slowed down her metabolism to the extent her body’s oxygen needs were suspended. What’s more by happenstance, Dr. Bolte had been preparing for such an emergency for months. He and his team went straight to work. They started injecting warm fluids into Michelle’s veins and stomach and squeezed warmed air through a tube into her lungs, but three hours after the child had fallen into the creek she was still lifeless. Meanwhile Michelle’s parents and doctors feared her resuscitation would merely bring her back to a vegetative state. They persevered. However it was when her body reached 25 Celcius (77 Fahrenheit) that Bolte allowed himself to think there was hope for the poor little thing yet. She gasped; moments later she opened her eyes; then her pupils, responding to the bright lights in the operating room, narrowed — a sign of returning brain function. And then, to everyone’s cheers and high fives, a faint heartbeat was detected. Michelle was saved and made a full recovery with no lasting cognitive damage. Even the staid Journal of the American Medical Association described the case of Michelle Funk as ‘’miraculous’’. Her treatment went on to form the protocol for treating previously deadly cases of drowning. UK Town Terrible Twin, 2006 The sky above was white and seagulls could be heard in the distance being a nuisance. David Riley was wearing his best suit and his best smile and cradled a fine wooden case in one arm. He strode jauntily along the pavement, a bespectacled American with a ready smile for anyone willing to meet his eye along the way. He approached Bideford Town Hall entrance, an elizabethanesque building fronting the River Torridge. This should have been a special day for the resident of Manteo, N. Carolina. His small city of little over a thousand residents had been twinned with Bideford, England for some quarter century and announced this on large billboards to every visitor. Today Riley’s mission was to present Bideford Town Council with a commemorative clock to celebrate the link. Manteo’s town manager had emailed Bideford council heralding Riley’ visit a few weeks prior. He was scheduled to meet town clerk George McLauchlan and was a little disappointed with the secretary’s nonplussed greeting. Riley took a seat to wait. McLauchlan, a sandy haired man in a crisp white shirt and light green tie, invited his visitor in, a bemused curl on his lips. McLauchlan recalled: “He seemed like a nice guy and gave me a clock. It was a very nice clock. He said he was very proud to be twinned with us and offered a sincere thanks on behalf of the town’s population for representing them in the UK.” Yet Bideford’s officials didn’t have any idea what Riley was on about; the only town Bideford was twinned with was one in France. They’d never even heard of Mantao. “I said thank you but had to let him down gently. It seemed even more cruel not to. He seemed a little puzzled and said our name was on all their road signs. I couldn’t really offer any consolation so he said he was going home to look into it.” The only explanation for the mix-up could be that a resident of Bideford visited Manteo in the 1980s and said or did something which led the townsfolk to believe an official tie had been established. 
In 2010 Bideford officials reciprocated the affection sent forth from the good folks of Mantao by formally twinning the two towns. Model Citizen Returned to Prison, 2014 It was just another day for Rene Lima-Marin in his job helping to transform city skylines by installing glass windows into skyscrapers until an unknown caller buzzed his mobile phone. The woman on the line said she was from the Denver Public Defender’s office. As she talked Lima-Marin could feel his breathing turn shallow, his muscles tighten and his mind start to race. For the slim Latino man, with his hair shaved high on the back and sides and an immaculately groomed goatee, the day had come he feared for years would. Now all those dreams and plans lay shattered like a window pane that slipped from his grasp. The story started fourteen years before when 22 year old Lima-Marin and an accomplice were sentenced for committing robbery, burglary, and kidnapping during a series of video store robberies. These were to be served consecutively so, the US legal system being what it was, effectively locked the two up and threw away the key — The sentence was a whopping 98 years. It was basically game over for the two young men. Yet maybe Lima-Marin had an angel guardian looking out for him or something. The court clerk mistakenly wrote ‘concurrently’ not ‘consecutively’ next to his sentence and Lima-Marin discovered he only had a 9 year stint to do (not so his accomplice however). Realising someone had blundered, he kept shtum and did his time. 2008 came around and Lima-Marin heard the main gate of Colorado’s Crowley County Correctional Facility slam behind him and his life, rebooted, in front of him. Was he going to take his second chance to live a good life as a rehabilitated man or would he slip back into his old ways? He married his old girlfriend and became a father to her one-year-old son. He found a job, and then a better union job working construction on skyscrapers in the centre of Denver. The family went to church. They took older relatives in at their new, bigger house in a nice section of Aurora. They then had a child together, another boy. Lima-Marin feared that the justice system would discover its mistake and destroy what he was building. But the years passed by and the fear receded as his life entered the humdrum slipstream of work, church and football training for his sons. After six years, this was surely proof he was rehabilitated. The phone call from the Public Defender’s Office informed him that the Justice Department had discovered their mistake and, gut wrenching though it was, he was going to have to go back to serve out the rest of his life long sentence. How on earth was Lima-Marin going to break the news to his family? How were they all going to bear the heartache? From there his fortunes fluctuated like a heart monitor does for someone whose life hangs on a knife edge; he went back to prison but, after a campaign for clemency lasting years, the state Governor pardoned him. Lima-Marin’s wife’s euphoric high upon hearing this seesawed to a scream of frustration when the news was followed up with the fact her husband had to fight his case against illegal immigration in an immigration centre. The ending however was a happy one for Lima-Marin. He overcame the final hurdle by winning his case and walked away a free man, for good, from Aurora’s detention facility in 2018.
https://alee250485-65852.medium.com/5-oddball-events-in-history-youve-likely-never-heard-of-part-9-77b4b4e7b453
['Alasdair Lea']
2020-12-20 13:37:54.711000+00:00
['Storytelling', 'History', 'Medical', 'Lists', 'Law']
Still Feelin’ Alive at 25(K): My NaNoWriMo Halfway Mark
Having never participated in NaNoWriMo before, I had no clue what writing 25,000 words felt like or if I could even do it. In my last post, I had proudly survived my first week, hit 10,000 words and quickly figured out this stuff is for real not easy. And thanks to my amazing writing buddy, Martina, who has kept my ass in gear, I can happily report that I broke the 25,000 mark like a champ. Here are a few realizations I’ve learned by reaching the halfway mark of my writing journey. Character inspiration is awesome. Have a real-life nemesis that you would love to see go down in the worst kind of way? Make them your inspiration for your story’s evil villain! Need a name for a sweet, old woman who bakes cookies for the little town orphans? Name her after your gold-hearted grandmother who makes the most delicious snickerdoodles of all time. Find your inner Taylor Swift and instead of writing songs about exes or people, be sneaky and incorporate them in your story. Keep reality alive in fiction. Weave real-life events into your story, such as historical milestones, modern-day conflicts and important eras in pop culture. These will keep your story more grounded and interesting for you to craft plot lines around. It also adds a very realistic tone to your story that will help your reader connect with your fictional storyline. I’ve surprised the hell out of myself. Looking back, my story seemed so simple and not nearly as interesting back when I first started. There are days when I’ve definitely struggled and the words just haven’t flowed. Some days I would have much rather slept in a few extra hours than guzzle down a pot of coffee before 6 a.m. But I’m thanking myself for staying relatively disciplined. Now, my story has taken on a life of its own and is becoming way bigger and better than I ever could have imagined — all because I just started typing. So cheers to the next 25,000…and beyond!
https://medium.com/nanowrimo/still-feelin-alive-at-25-k-my-nanowrimo-halfway-mark-7dcd680821f6
['Megan Wagner']
2015-11-16 00:20:47.040000+00:00
['NaNoWriMo', 'Authors', 'Writing']
4 Reasons to Use Kubernetes in the Serverless Era
4 Reasons to Use Kubernetes in the Serverless Era Is serverless a replacement for Kubernetes? Photo by Paul Hanaoka on Unsplash. Serverless is cool and has taken the tech world by storm. It is evolving quite a bit and growing at a tremendous pace. The reasons for its success are its developer-friendliness, ease of running microservices, and lower infrastructure overhead. Serverless, or FaaS (Functions as a Service), is a technology that helps you run code on dynamically managed, abstracted servers. The cloud provider provisions everything needed to run your stateless, event-triggered code without you worrying about the underlying infrastructure. Some of the popular serverless offerings are AWS Lambda, Google Cloud Functions, and Azure Functions. Built for scalability, serverless allows multiple parallel invocations of your functions, and you pay for the number of executions instead of allocated resources. Kubernetes, on the other hand, is an open-source, robust, widely used container orchestration platform that allows you to run your applications at scale. It is a hybrid between IaaS and PaaS and allows you to standardise your applications to run the same everywhere by using containers. Kubernetes runs on servers, and you or the cloud platform manage the underlying infrastructure, so there is some infrastructure administration overhead. That makes serverless a great proposition. It solves a variety of problems that we never thought about before, and it has its use cases and customer base. But before you think that this is going to change everything in IT, think again. It is not a one-size-fits-all technology. Kubernetes, for its part, looks promising in its ability to cover almost everything. Let’s find out how.
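As a concrete picture of that "stateless, event-triggered" model, here is a minimal function written in the AWS Lambda Python handler style; the order payload and its field names are invented for illustration, and no infrastructure is defined anywhere in the code because the platform handles provisioning, scaling, and per-invocation billing.

```python
import json

def handler(event, context):
    """Stateless, event-triggered function: one invocation per incoming request."""
    order = json.loads(event.get("body", "{}"))  # hypothetical order payload
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"orderTotal": total}),
    }
```

With Kubernetes, by contrast, the same logic would ship as a container image plus deployment and service definitions that you (or your platform team) operate.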
https://medium.com/better-programming/4-reasons-to-use-kubernetes-in-the-serverless-era-cf77ea3b018b
['Gaurav Agarwal']
2020-08-13 15:38:03.639000+00:00
['Technology', 'DevOps', 'Programming', 'Kubernetes', 'Serverless']
Don’t Let Your Partner’s Identity Influence Your Own
Photo by Pablo Merchán Montes About a year ago, I listened to an episode of Adam Roa’s Deep Dives Podcast where he had his ex-girlfriend of 10 years come back on the show and discuss what they’ve learned in the few months since their breakup. One of the questions they asked each other was, “What is the biggest thing you learned about our relationship after our break up?” What they said next blew my mind. His ex-girlfriend told the story of how one of the hardest things she had to adjust to after their breakup was taking on the more practical side of life. She had to book her own travel, pay her own bills on time, and drive her own motorbike in Bali. All of these changes were quite difficult at first, but in the end, she reported feeling empowered through taking care of herself in this way and actually became quite good at it. Then, Adam spoke about his experience dating again. In his relationship with his ex, it was understood that he was always the practical, rational one and she was always the spontaneous, carefree, fun one. So you can imagine his surprise when one of the girls he was dating looked up at him and said, “I love how fun you are. You’re so spontaneous and carefree.” In this new relationship, he was considered the fun one. This made him reflect on all the time he spent letting this limiting belief that he was not the fun one keep him from embodying his natural sense of play and spontaneity. This happens in relationships all the time So often we mold our identities around the identities of our partner. It’s natural to do this, but its also a huge bummer because when we hire out parts of ourselves we feel our partner does “better”, we cram ourselves into limiting boxes that keep us from developing our full talents. I do this all the time in my relationships. I usually go for very high energy guys and as a result, I’ve retreated into the identity of the quiet one, the listener, the one with too-low energy and too-much going on in her head. The reality is, I am actually quite a lively person. I mean, I’m a definitely introvert, don’t get me wrong, but I can definitely get out there and party, man. There are plenty of people I would be with in which I would be the funny one, the party animal, and the entertainer. Imagining this helps me not shrink down to my comfortable role of the quiet one just because I’m not louder than my partner and instead embody this new characteristic of myself as much as it feels comfortable. You could play all the roles you think you’re “not” in your next partnership Whatever characteristic you feel like you are not, it is totally possible for you to be with someone in which you would be the one in the partnership that embodies that characteristic most. I want you to really imagine that. Feel the expansiveness that creates in your chest and really feel into all the ways you would embody this fresh new characteristic of yourself that you’ve hired out to your partner. Then, carry this limitless feeling with you in your relationship and commit to not playing small about it anymore. Challenge yourself to cultivate this part of you and strengthen these new skills. Push yourself to embody them just as you would if you were single and hadn’t morphed your identity around your partner. And then post in the comments to let me know how it goes!
https://medium.com/just-jordin/dont-let-your-partner-s-identity-influence-your-own-d1060f80eb21
['Jordin James']
2019-10-12 23:51:22.703000+00:00
['Advice', 'Relationships', 'Life Lessons', 'Love', 'Psychology']
Design / UX: Specialists vs Generalists — What’s Better? Here’s the truth
When I’m out teaching, I frequently get asked “What’s better? To be a jack of all trades designer, or someone that specializes in one specific area of UX? Should I be a full-stack designer? A Unicorn? What do all of these terms even mean?” Ignore all of the dumb terms for a moment and let’s focus on the real question: Should I focus on being good at one thing, or should I try to be good at all things? Is it even possible to be equally good at all parts of the design process? So what’s the answer? Is it better to be a specialist or generalist? The design field is a lot like the medical field: For the first few years of the path to becoming a doctor, you get “Basic training”, or a general education of the body and its functions. After this general education, many doctors decide to just stick to being generalists — also called “General Practitioners”(I’m aware that this is grossly simplified). Being a general practitioner is sufficient for most cases. However, sometimes you need deeper knowledge to solve very specific problems. For these cases, some doctors go on and study a few more years to specialize on particular parts of the body, like the brain, the heart, or the feet, or how to make people’s faces look younger. Those specialists end up working either in their own practice or at a big clinic where their services are needed often. They also likely contribute to their specific niche in the medical industry in the form of submitting new research papers to help push the field forward. But every medical student starts their education with a baseline of general knowledge about how the body functions. It’s generally not possible to be equally good at unrelated specialties like heart surgery and sports medicine, because those individual specializations are too deep to master both equally in any reasonable amount of time. By the time you’ve mastered one, you’d have already forgotten half of the other. The Truth About Design is that for the first three years of your career, you’re going to be a general design practitioner. If you go to any good design program, you’re going to learn and practice the full spectrum of UX disciplines. In your first few jobs, you’re likely going to be working at small companies that don’t have the need or budget to hire a team of specialists. In your first three years on the job, you’re also simply not going to be good enough at anything to be able to specialize. The Truth About Design is that for the first three years of your career, you’re going to be a general design practitioner. Being a generalist is great for at least 80% of cases where a designer is needed. Jack-of-all-trades generalists make great freelancers and great employees at design agencies or small-to-mid-sized companies. Most companies don’t need full-time specialists all of the time. It’s rare that a company needs enough usability testing for someone to do it full-time. Same with content strategy, or information architecture. In many real-life situations, you don’t need to be a master of one discipline of UX, you need to be competent at a number of disciplines, and be able to switch between them. This is also important from a team perspective. If multiple people have overlapping skillsets, then once you’re maxed out on the amount of work you can take on, someone else on the team can help out, and vice versa. Being a generalist is great, however, it’s not the generalists that push the industry forward. 
All of the expertise we have in the industry came from specialists who spent years getting better and better at some niche part of the profession and then spent more years talking about it at conferences and writing articles and books. Specialists draw from the deep wellspring of new ideas and spread that knowledge to the masses of generalists. I’m talking about people like Luke Wroblewski, the expert on mobile design. Or Jorge Aranjo, the eminent expert on information architecture. Or Alan Cooper, the man who invented most of the UX research techniques we use today. Without these specialized experts, the generalists would have no knowledge to draw from. This is the true meaning of the “T-shaped person”. The horizontal top line is a set of skills in which you have decent understanding and knowledge, and the vertical center line is a single skill in which you have deep expertise. For example, you might be decent at creating prototyping in code, research, visual design and usability testing, but when it comes to strategy definition and interaction design, you are a master. Specialists draw from the deep wellspring of new ideas and spread that knowledge to the masses of generalists. Once you have acquired that level of mastery, you can, if you like, go to the designer’s equivalent of a big clinic where highly specialized skills are needed: A big tech company like Google, Apple or Facebook. After a few years there, you might find the work you do at a big company like that too stifling, but that’s a story for another article. Discovering your specialty will come as a natural consequence of pushing yourself to your limits If you decide to just do WordPress redesigns for the rest of your life, sure, you’ll never need to specialize. And if that’s your thing, I won’t judge you for it. But don’t you want to know how far you can go? Where your limits are? And if you just stick to one trick, what are you going to do when the market shifts and that trick is no longer needed? For example, when I was first starting out, I did everything, including coding. Once I even wrote a basic CMS for a client. As the projects I worked on got larger and more complex, I noticed that I hit a plateau when it came to my coding skills: I just didn’t understand JavaScript, and that was clearly where the industry was going. So I gave up coding in order to focus on just the design aspect as part of a larger team. Don’t you want to know how far you can go? Where your limits are? I hit similar plateaus later on in my career with research and visual design: I am a good researcher, but I couldn’t hold a candle to the researchers at Walmart or Google, who had degrees in psychology and anthropology and did this all day. I’m a very competent visual designer, but I hit a similar plateau when I was working with the visual designers at Google, who are basically artists moonlighting as UI designers during the day. But I haven’t been able to find an interaction designer that is clearly better than me. And I noticed that those designers and researchers never really thought about strategy, markets, and context. So eventually, through the process of pushing myself into new and tougher situations, pieces of me fell away until I was left with Interaction Design and Strategy. A fortunate teaching opportunity at General Assembly made me aware of my abilities as a public speaker, presenter, and writer. And that’s my core skillset, at which I am able to play at the highest level, with no plateaus in sight. 
But I still feel like I have so much more to learn. Summary Being a generalist is great. But if you’re in the design industry, I truly hope that once you’ve mastered the basics and discovered which areas of the craft you gravitate towards the most, you decide to pick a specialty. What’s cool about the design industry versus the medical field is that it’s not nearly as complicated and the fields are more interconnected, so it actually is possible to develop deep expertise at multiple areas over the course of your career. Probably not all of them though, because you have to account for interest levels and natural inclinations. But deciding to pick a specialty is an adventure of finding out how far you are able to push yourself in life.
https://medium.com/truthaboutdesign/design-ux-specialists-vs-generalists-whats-better-here-s-the-truth-fe9e44b428cd
['Jamal Nichols']
2019-02-27 04:38:39.822000+00:00
['User Experience', 'UX Design', 'Design', 'Design Thinking', 'Product Design']
New York City Moment I Miss
Friday Nights at the Movies Photo by Marcus Herzberg from Pexels I will share with you one of the best parts of working in New York City that I miss, more than anything. Every Friday before leaving work, I would go on-line and order a movie ticket for the latest Bollywood release, showing at the AMC movie theater on 42nd Street, near 8th Avenue. I’d walk out of the back end of the office building, and in two minutes, I’d be in Times Square at 46th Street. This would be at the height of the crowds, released in freedom from their toil of the workday. Many more tourists would fill Times Square. They would take pictures of each other, or with persons dressed up as cartoon characters or the Statue of Liberty. It was not a quick walk to the movie theater. There were so many people that I’d be lucky if I made it to the movie theater in under 20 minutes. When I reached 42nd Street, I’d walk on the street, next to the curb, as there were too many people on the sidewalk. This was safest to do near Madam Tussaud’s Wax Museum, where the crowd was the greatest. There were more tourists posing in front of the museum or waiting to get fast food from the McDonald’s nearby. I loved reaching the movie theater to pick up my ticket. The lobby was a great people-watching spot. There were always couples who would be in the middle of deciding what to see, and sometimes they would base the decision on what movie was starting next. Sometimes I’d be in the middle of retrieving a ticket and I’d be asked what I was going to see. Whenever I said that name of the Bollywood movie I’d be asked, “What’s that?” or, “Really?” as if it was a surprise that a Caucasian middle-aged woman would want to see an Asian film. The blockbuster films were shown on the lower levels of the movie theater, but to reach any of the auditoriums, it was necessary to take an escalator to the first level. The escalator ride provided enough time to think about the conclusion of the workweek and the anticipation of what that evening’s movie would bring. In the middle of the escalator ride, I would think about getting a giant bucket of popcorn and a soda to go with it. Even the act of pouring the soda was a pleasure. The concession vendor provided the cup, but it was up to the customer to dispense the soda from one of several machines in the lobby. I loved to add lime flavor to the soda. The only drawback to getting the soda was in drinking a generous portion. The urge to visit the restroom at the end of the movie would be overwhelming. I’d want to remain in my seat to watch the credits roll at the end of the movie but often left before I could do so. One time my father admitted to me that he used to get the large soda at the movies and had this issue too. I remember this as one of those genetic, quirky behaviors that get passed between generations, or a silly choice on both our parts — take your pick. I can’t recall the names of all of the Bollywood films I’ve seen, but I will tell you the one which I will never forget. It was “Bajrangi Bhaijaan,” starring Salman Khan. It was thrilling to see one of the opening dance scenes, filled with color and light. Yet it was the story of the film that was most captivating. It is the story of a young girl who is separated from her family, and what happens during the course of the separation. I won’t provide any more details regarding the film, because it’s magical, and something to sit, watch and live through. 
During the evening I saw it, there were moments so quiet you could hear a pin drop, and you could feel the people around you holding their breath. There were also other moments that brought tears. At one point I turned to the person sitting next to me, and she was crying as much as I was. She smiled and nodded through her tears. In all, that describes one of the New York City moments I miss. It’s not only the activity; it’s the vibrancy and connection with others during a shared experience. I saw and experienced so many of these great moments on Friday nights in the city. I look forward to the day it’s possible to return to creating more of these memories.
https://medium.com/illumination-curated/new-york-city-moment-i-miss-4a635312adf
['Yve Laran']
2020-11-18 15:37:23.253000+00:00
['Self-awareness', 'Experience', 'Movies', 'New York City', 'Memoir']
Ask Julie: “How Much Sex Is Normal?”
Dear Julie, How do you stay positive? 2020 has derailed a lot of my year. I lost my job, my roommates moved out so I had to relocate to a studio and I gained 30 pounds. I’m in therapy but I feel like I’m whining all the time. My friends are drinking or telling me that I have it good so I shouldn’t stress. I know that I have it good, better than most people have, but I can’t seem to feel better and it’s affecting all of my relationships. I’m trying to find the upside but I’m not seeing it. How can I be optimistic? ~ Looking for the Silver Lining Dear Looking for the Silver Lining, Thanks for writing in! Truthfully, I think aspiring to always be optimistic is an impossible and unrealistic state to always live in. To do that, you would be denying yourself of a lot of the other beautiful and necessary emotions that anger, grief, sadness can inspire and give to you. Personally, I used to shy away from the depth or fullness of certain emotions because it felt so overwhelming and intense, but that doesn’t mean it goes away. It just sits underneath the surface, stored away within— invisibly, lurking, accumulating — until something external sets it off and you are forced to confront it later on. Now, I’m more interested in being a healthy witness for my emotional expression and finding peace in whatever feelings arise; being happy, and everything in between. You’re right that you do have it good in a sense (materially: a roof over your head, running water, electricity) but your feelings are still 100% valid because they are yours. From your letter, you’ve spelled out an unbelievably tough year of immense loss and transition with your break-up, move, and job loss. If you keep waving off your concerns and continually compare it to people who are going through more, it invalidates how you are feeling. You can’t begin to take the time to process and find your way out because you are too busy shaming yourself for having those feelings in the first place. This year, I was on the cusp of starting my relationship business and I was passionate about my NFP day job working in social impact + tech. I finally felt like I was in a place where I had everything figured out. After the pandemic wrecked my carefully laid plans, I was devastated and became depressed. If the crushing sadness and uncertainty wasn’t already enough, years of emotional and work burnout finally caught up to me. I felt completely unmoored and helpless to do anything, so I let myself wallow. And it was exactly what I needed. As repressed feelings tumbled to the forefront of my consciousness, I honored myself and whatever I needed in my smallest moments. I gave myself permission to rest. I didn’t punish myself for having sad emotions or for making decisions to spend the entire day in bed. I’m sharing this in the hopes that you know you’re not alone. Even though this past year has been incredibly dark, I allowed myself to go through my breakdown instead of distracting myself from it. It was super tough but I’m all the better for it. I deepened friendships and connections that welcomed radically vulnerable and transparent conversations where I didn’t have to hide anything. I got rid of the things I used to use to deflect away from my emotions. I rooted my sense of worth to myself internally, instead of attaching it to something external. I’m finally feeling happier, more grounded, and resilient enough to rebuild. When I backtrack, which still happens often, I don’t beat myself up for it. 
I let myself feel whatever comes up and then move on when I’m ready. If you see your struggles as whining or complaining, it’s not going to shift you towards growth, it’s only stopping you from being vulnerable and being there for yourself in whatever way you need it (as long as it’s healthy and in moderation, of course.) Let us not forget that all of this is happening against the backdrop of a pandemic, political angst, and potential societal collapse too! Honestly, I would be more concerned if you were just cruising along. Your friends want the best for you, but engaging in emotional avoidance with things like drinking or telling you to cheer up and only strive for positivity isn’t a good long-term solution. There is nothing to fix, your emotions are valid. Surround yourself with people who can hold space for you. Allow yourself to be messy, I promise to the right people it won’t feel like too much. Keep going to therapy, it’ll slowly start to click. Get it all out there and let your body guide you in what you need, have that be your North Star as you heal. We have to be okay being a blob on the couch or crying because we are sad and sometimes, we just really, really need it.
https://medium.com/joincurio/ask-julie-november-edition-ec6af965676f
['Julie Nguyen']
2020-11-27 15:02:32.287000+00:00
['Mental Health', 'Depression', 'Relationships', 'Sex', 'Dating Advice']
How to reward loyalty without points
When you think about loyalty programs, the very first thing that comes to mind is probably the points. It may look like an expensive investment and, ultimately, an architecture for collecting points is only the beginning; you also need to support exchanging the points and integrate a loyalty system with the distribution of rewards. In this post, I’d like to show you how you can test if loyalty programs work for your business without an extra penny spent on new infrastructure. I’m going to use regular coupons, gift cards, and customer segments to create a loyalty program. A program which is ready to compete with giant “point’s collectors”. Challenge 1: multi-level structure People are much more motivated to participate in any loyalty program when they see a clear goal. And they see it’s close. That’s the point of dividing a program into levels. In such dimensions, each customer is always close to some reward. What’s also motivating is that they can see what’s at the very end. Typically, it’s entering a VIP or Premium Group of customers. Keeping that in mind, it’s important to introduce the entire program structure and send it over to every customer with the invitation to join. How can you model a structure without points? Let’s take a look at the list below. Customer segments based on total order amount and number of purchases. The segment’s criteria defines “loyal” customer and divides them into different “loyalty levels”. Automatic distribution of personalized emails which deliver rewards such as discount coupons and gift cards. Each email contains a simple graphic which informs them about their achieved level. Rewards have their unique codes to enable customer tracking and measuring of program results. To start with, you need to define a profile of your loyal buyer. Usually, brands take into account total spent amount and total number of orders. In this post, loyal customers need to spend at least $300 to get an invitation to join a loyalty campaign. Moreover, to achieve the next levels, this initial amount needs to grow. Invitation to join with 20% discount on the next order — for the segment of customers who spent at least $300 in the last six months. Level 2: 30$ gift card — for the segment of customers who spent at least $500 in the last six months. Level 3: 50% off the next order — for the segment of customers who spent at least $600 in the last six months. Premium Group — a segment of customers who spent at least $1000. Rewards for the Premium group can be a regular 20% discount, valid for 1 year, and occasional gift cards for special dates and holidays. Challenge 2: rewards The power of coupons and gift cards lies in their flexibility and tracking possibilities. They can be tailored to the needs of a particular customer, wrapped into a personalized message, and finally, tracked through their entire lifecycle. This tracking is crucial for managing a program with dynamic levels. You need to know if customers redeem their incentives, analyze order history, and get clues for future improvements. Ultimately, every detail counts. It can be a channel, type of incentive, time limit, or even a subject line that makes one segment grow faster than others. Counting redemptions and measuring performance on each level shows which configurations work best. Challenge 3: keep the wheels turning The biggest challenge is to keep customers involved after achieving a certain level. There is a risk your client redeems a reward (or not) and then forgets about moving forward. 
This is a scenario you don’t want to repeat. The program should be a gamified journey with a narrative in the background so that the clients are kept informed about where they are, and how close it is to the next reward Partly, all of that is provided by levels. The second important thing is the incentives. They need to be worth coming back for again and justify spending more. Last but not least, customers should be kept up-to-date about progress to the top of a loyalty ladder and the distance separating them from the next reward. In this example, I’m going to put this information into personalized emails. Why emails? “Emails that make the recipient feel like you’re talking directly to them are always a good bet. The personalized approach makes your target audience feel valued and connected while still delivering content that furthers your marketing goals and eventually, sales goals.” MailMunch A simple graphic inside an email gamifies the customer experience and gives the motivation to move forward. Ultimately, the point of every game is to finish it, right? Challenge 4: automatics and scale Don’t even think about doing it manually. Automation is a must in any gamified loyalty program. In our example, customers spend money and join matching segments which makes sure all of the levels presented by customer segments are dynamic creatures. Moreover, each time someone enters one of the loyalty segments, an email with a particular reward needs to be triggered to their mailbox. That is what your system needs to cover. Summing up, you need the infrastructure ready to: Track and store customer data. Build dynamic segments based on total spent amount. Trigger automatic emails in response to joining a segment. Create personalized emails with custom design and graphics (or at least integrate with your email provider). Generate rewards with unique codes such as gift cards and coupon codes. Track every code and validate before redemption. Equip your rewards and program with budget-based rules, time limits, etc. Support more than one distribution channel to enable a true cross-channel experience for your customers. Having all that in place, you can create a truly gamified journey for your customers. It’s a myth that you have to invest in a solution dedicated to loyalty programs. The aim of running such campaigns is to distinguish your loyal and most profitable customers and turn them into faithful brand advocates. This is the very first thing that should come to your mind when thinking about loyalty programs. — - At Voucherify, we are developing coupon, referral, and loyalty solutions for businesses of every shape and size worldwide. If you’re interested in having a consultative talk to help you decide how you should implement your loyalty campaigns, let us know at [email protected] — We’re always happy to help!
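To make the segment-and-trigger flow above concrete, here is a rough Python sketch of the level logic; the spend thresholds come straight from the example tiers ($300, $500, $600, $1000), while the function names, customer record fields, and the email step are hypothetical stubs rather than any real loyalty or email API.

```python
# Levels ordered from highest spend threshold to lowest.
LEVELS = [
    (1000, {"level": "Premium", "reward": "20% off, valid for 1 year"}),
    (600,  {"level": "Level 3", "reward": "50% off the next order"}),
    (500,  {"level": "Level 2", "reward": "$30 gift card"}),
    (300,  {"level": "Level 1", "reward": "20% off the next order"}),
]

def assign_level(total_spent_last_6_months):
    """Return the highest level whose spend threshold the customer has reached."""
    for threshold, segment in LEVELS:
        if total_spent_last_6_months >= threshold:
            return segment
    return None  # not yet eligible for an invitation

def send_reward_email(address, segment):
    # Stand-in for the personalized email with a unique coupon or gift card code.
    print(f"Sending {segment['reward']} ({segment['level']}) to {address}")

def on_order_completed(customer):
    """Re-evaluate the segment after every order and trigger a reward on level-up."""
    segment = assign_level(customer["total_spent"])
    if segment and segment["level"] != customer.get("current_level"):
        customer["current_level"] = segment["level"]
        send_reward_email(customer["email"], segment)

# Example: a customer crosses the $600 threshold and gets the Level 3 reward.
alice = {"email": "[email protected]", "total_spent": 640}
on_order_completed(alice)
```

In a production setup the same checks would run automatically on every order event, with each reward code generated, tracked, and validated before redemption so that the program results stay measurable.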
https://medium.com/voucherify/how-to-reward-loyalty-without-points-79de1d19050b
['Jagoda Dworniczak']
2018-09-18 08:43:26.538000+00:00
['Sales', 'Marketing', 'Loyalty', 'Customer Experience', 'Ecommerce']
The Gift-Givers Guide to Overthinking Literally Everything
The Gift-Givers Guide to Overthinking Literally Everything Lessons from a pro. Photo by Angèle Kamp on Unsplash Are gift cards tacky and impersonal? I ask myself this question every holiday shopping season. Which is worse: Getting someone something that they hate and feel obliged to wear or use? Or handing them an envelope with a card that says “I don’t know you well enough to choose an appropriate gift. I hope you like Sephora.” If you think this kind of internal dialogue is inane, if you’ve not spent more than five minutes in your lifetime considering the social and moral consequences of gifting a gift card, if the thought of shopping for holiday presents fills you with joy and not anxiety, You should know: There’s another way. Overthinking literally everything about gift giving doesn’t come naturally to everyone, but like many other things it’s a skill that can be learned. Fortunately for you, this guide covers all the basics and will have you second guessing everything you’ve ever thought about giving in no time. How to Slowly Drive Yourself to The Brink of Insanity While You Attempt Small But Meaningful Acts of Generosity — A Beginners Guide Alternate Title: How to Suck All the Fun Out of Gifting. If one or two of the following already applies to you, you’re in luck! Waffling and second-guessing yourself is probably already second nature. All you need is a little shove to push you over the edge into complete analysis paralysis. But if you want to really excel at overthinking, you need to: 1. Be responsible for — literally anything — besides gift giving. Like holding down a job, caring for small humans, or making sure everyone in your household eats, showers, and has clean laundry. Apparently these responsibilities are meaningful and rewarding in their own right, but they’re also perfect for complicating even the simplest gifting task. 2. Be concerned about the environment. If you’ve given the environment more than a passing thought, the idea of buying even one more thing that’s going to end up in a thrift store or landfill will make you cringe. Even better, when you see the 12-pack of shrink-wrapped mystery dolls on your niece’s wish list — basically a million pieces of micro-plastic just waiting to happen — you’ll die a little bit inside. Lather. Rinse. Repeat. 3. Have a strict budget you actually need to follow. The deep certainty that overspending on gifts means you won’t be able to make rent or buy groceries will really dial up the mental calisthenics. This also applies when you have a shopping list that’s bigger than your budget. If shopping for presents requires more math than your tax return, you’re well on your way to second-guessing, and third-guessing, and fourth-guessing every item on your list. 4. Really care about shopping local and/or shopping small. This is important. But you can’t just sort of care, you need to really care. You might not have the time or budget to shop small, but you know you should. And when you know better, you do better. Or, at the very least, you feel extremely guilty for every big box purchase you make. 5. Feel weird about any of the following: Gifting chocolates, cookies, hard cheeses, or cured meat — Because saturated fat and diabetes might not be the perfect thing to give. — Because saturated fat and diabetes might not be the perfect thing to give. Gifting house plants or other living things — Is this too much pressure? What if they’re allergic? What if it dies? — Is this too much pressure? What if they’re allergic? What if it dies? 
Gifting restaurant or spa gift certificates — Typically expensive. You also don’t want to run the risk of paying for only half a meal or facial, which then is basically just giving a coupon for 50% off. Cringe. — Typically expensive. You also don’t want to run the risk of paying for only half a meal or facial, which then is basically just giving a coupon for 50% off. Cringe. Gifting any sort of generic gift card — Nothing says deep and meaningful like a heartfelt gift of retail store credit. — Nothing says deep and meaningful like a heartfelt gift of retail store credit. Gifting clothing or accessories — This is a test of how well do you really know this person that (literally) no one wants you to take. — This is a test of how well do you really know this person that (literally) no one wants you to take. Gifting subscriptions or memberships — For some reason this feels like the ultimate copout. And again, how well do you know them, really? If you fit all of the above, and still aren’t overthinking every miniscule gift giving gesture, there’s one thing that’s guaranteed to get you there. You must believe that every gift should be magical and deeply meaningful. Weighing each and every gift you give as if it is the most important present they’ll ever receive isn’t for amateurs. This is pro-level overthinking at its finest. Deep down, you know that this isn’t just a gift — it’s a metric that’s directly tied to your own sense of identity and self worth. Missing the mark in this area isn’t just a mistake, it’s an abomination. Overthinking isn’t for the faint of heart. Sure it’s easier to see gifting as a means to an end. Some people actually take pride in zipping through their list and making everyone’s dreams come true by simply getting them what they asked for, or a gift card, or cash. But if you truly believe “It’s the thought that counts,” then there’s no better way to demonstrate your love than to investing every last ounce of your sanity into finding the perfect gift. How
https://medium.com/curious/the-gift-givers-guide-to-overthinking-literally-everything-f8df314cd4c3
['Jamie Siebens']
2020-12-28 09:55:02.988000+00:00
['Humor', 'Mental Health', 'Satire', 'Relationships', 'Self Improvement']
Why People Follow Toxic Leaders
More than half of House Republicans — 106 of them — urged the Supreme Court to overturn the election, effectively trying to disenfranchise millions of voters for the heinous crime of not voting Republican. Thankfully, the Supreme Court recognized the Texas lawsuit for the baseless trash that it was and the Electoral College followed suit by endorsing the will of the American people. Score one for democracy. As well as intelligence and morality. But 106 House Republicans, along with 18 Republican Attorneys General, will forever be on the record as supporting a baseless lawsuit that attempted to obstruct the will of the people. And contrary to all logic, it doesn't seem to be over. Apparently a number of Republicans are planning to sow further division by challenging the outcome in early January. It's as though they have lifetime tickets to the wrong side of history. But this is the price of following a toxic leader. Eventually, followers always need to choose between behaviors that are unethical and immoral or risk becoming a target of the very leader they've supported. Toxic leaders don't see their supporters as people. They see them as tools. And as tools, they have little compunction about casting them aside once they're no longer useful. Everyone is expendable. Former Attorney General William Barr and Georgia Governor Brian Kemp both proved themselves to be ardent Trump supporters. They put their own reputations — and American lives — on the line in an effort to further Trump's agendas. Yet the moment they stopped following his dictates, the moment they stopped being useful, he turned on them as he has many others. After all, there's no reason to keep a tool once it's stopped being useful. This progression seems inevitable. Yet people continue to pile on the bandwagon even as it catches fire. Part of the reason, at least, lies in a six-decade-old study. Would You Please Electrocute This Person? When Stanley Milgram ran his famous experiments in 1961, he wanted to measure the influence of authority on people's behaviors. Under the guise of using electrical shocks to improve learning, he asked one group to learn word associations and recall them when asked. The other group was charged (haha, get it?) with giving the learner an electrical shock when they gave a wrong answer. The results weren't encouraging for humanity. Despite screams of pain, pleading for mercy, and people falling unconscious, a full 65% of people continued to apply the electrical shocks all the way up to the final, lethal 450-volt charge. Of course, no one actually died. There were no electrical shocks. It was all a setup to see if people would submit to an authority figure, even if they knew their actions would be harmful. While nearly all participants were reluctant to continue when the pain they were causing became obvious, the majority did so at the urging of the researcher. Milgram found that when an authority figure directed people, they were remarkably willing to do things they otherwise would not, including actions that could harm others. Milgram initially wanted to run the experiment in Germany, hoping to explain why German citizens went along with the Holocaust. However, after seeing the test results here in the US, he decided there was no reason to go to Germany after all. Apparently, submission to authority figures is a relatively universal trait. These results have been publicized and cited many times over. But a critical aspect of the test is often lost.
And it’s this detail which explains how people find themselves in these situations to begin with. Evil Begins with 15 Volts If I asked you to flip a switch and send 450 volts into someone strapped to a chair, you’d likely offer some colorful language telling me exactly what I can go do with that switch. But if I asked you to flip a switch and send 15 volts into someone, you might be more likely to comply. Fifteen volts is minor, most people would barely feel it. If it’s in your interest to take this step, it’s easy to rationalize it away. There’s no real harm in just 15 volts. Why not? Milgram recognized this and set his experiment up accordingly. He didn’t start with 450 volts. If he did, no one would know his name. He started with 15 volts and then increased the magnitude in 15-volt increments. Participants could rationalize that such a small increase wouldn’t be harmful. As Harvard psychology professor Daniel Gilbert describes in Stumbling On Happiness, “The human brain is not particularly sensitive to the absolute magnitude of stimulation, but it is extraordinarily sensitive to differences and changes — that is, to the relative magnitude of stimulation.” We notice relative changes. We don’t pay attention to absolute magnitudes. Toxic leaders build their supporters in the same way. They don’t start out asking them to support an Anti-American coup and overthrow the world’s longest standing democracy. They start by encouraging people to tacitly align through nonresistance. Just look the other way to these questionable practices. They know that once people begin to rationalize these actions, they’ll rationalize more later. From Jeff Skilling at Enron to Elizabeth Holmes at Theranos, toxic leaders don’t start off by asking people to directly clash with their values. They create situations and give them a role to play. They put people in environments that incentivize following the group and then let them rationalize their behaviors in small, incremental steps. Just as Milgram’s participants could rationalize away each 15-volt increase, followers find ways to rationalize away their own change in behaviors. Before long, they find themselves doing things that were once abhorrent to them. And looking back, it’s difficult to see how they fell into this trap. It all hinges on that first 15 volts. In Milgram’s study, this was the differentiator. If someone took that step, each subsequent shock became less of a relative change. And it was that much easier to rationalize. With toxic leaders, it’s the same principle. The time to stand up in defiance is in the beginning. It’s when you’re asked to be tolerant of intolerance. It’s when you see something that doesn’t seem right, but fear keeps you from speaking out. It’s when your gut gives you that unpleasant feeling that you’re drifting away from your core values and starting down a wrong path. Once those rationalizations begin, you begin to align to the person doing that behavior. It’s now normalized. And stopping becomes much more difficult. Be Wary of That First Step “Assholes tend to stick together, and once stuck are not easily separated,” Professor Robert Sutton wrote in his book, The No Asshole Rule. Sutton offers two main tests to identify a toxic leader: “Test One: After talking to the alleged asshole, does the ‘target’ feel oppressed, humiliated, de-energized, or belittled by the person? In particular, dos the target feel worse about him or herself? 
Test Two: Does the alleged asshole aim his or her venom at people who are less powerful rather than at those people who are more powerful?” In contrast, good leaders seek to unite people as opposed to dividing them. They don’t talk in terms of us versus them. They focus on how we can all move forward together. Good leaders are results-oriented, but they recognize the need for a broader view of the consequences. They understand that short-term victories at the expense of people’s well-being is not a victory. Good leaders treat their followers as partners, not as tools. They value diverse opinions and suggestions. They listen to their concerns. And they genuinely want to achieve the best solution — even if it isn’t their own. Most importantly, good leaders leave their team, company, or country better off than when they began. At the end of the day, they make decisions that enrich the lives of those they represent as opposed to their own. Be wary of the people you choose to follow. Once you take that first step, every subsequent one becomes a little easier. No one wakes up one day and decides to abandon their values. It’s a slow, incremental process. We chip away at our will one rationalization at a time. Recognize the danger that comes with that first 15 volts. The best time to stand up and resist is before you begin. The second best time is now. Before you have your own lifetime ticket to the wrong side of history.
https://medium.com/swlh/why-people-follow-toxic-leaders-a37d40189197
['Jake Wilder']
2020-12-15 21:06:17.766000+00:00
['Politics', 'Management', 'Self', 'Leadership', 'Psychology']
The Music Genre Cheat Sheet.
A simple idea of how Music Producers can use Data to get valuable insights. As a music producer, one common task when starting a new project is to frame the track's genre, tempo and key. Although this is by no means always the first step, sometimes defining these "ground rules" is the beginning. In this post, I would like to share the result of a simple but, I think, insightful analysis made to produce a function able to create a basic "Music Genre Data Sheet". If interested, at the end of this post I share the link to the analysis repository and where to access the data I used. Why I decided to build a Music Genre Cheat Sheet Recently, I wanted to produce a Techno track. It was not my first time with the genre, but I started to think … "I would like to produce this track in an uncommon key and a not-so-mainstream bpm" … you know, something not on the experimental side but not in the most popular setting either. Creatively speaking, I guess this is not a strange thought, but I realized that, in practical terms, it was a trickier question. When trying to get this type of information, I usually jump to Google and end up in a music forum discussion thread, or, if lucky, I happen to contact someone who I know has experience with that genre. But in both cases, the information comes with an unknown level of bias: each producer talks from their own experience and taste, so how can you be sure you are getting what you want? In the end, I decided to look for data: in particular, a historical record with as many tracks as possible, where I could have three main features: Track Tempo, Key and Genre. Lucky me, I found a useful source on Kaggle (thanks in advance): Spotify data with 13K+ track entries and all the main features I wanted to work with. What I wanted to get from the Cheat Sheet Although I by no means see this first try as a final version containing all the information a Music Producer would like to see in a genre cheat sheet, I had three basic questions I wanted to be able to answer from the beginning. Which are the most popular Keys? Which are the representative Tempos? Which are the closest or most similar genres by Key and Tempo? The Result Let's see the result for the Techno genre. cheat_sheet = genre_visual(df,'techno') Result from genre_visual() function for 'techno'. Going back to my initial question, from this simple visualization I can see that going for a track at, let's say, 122 BPM and Emin could be a setting that delivers a track within the genre experience (so not too experimental) while at the same time offering a not-so-commonly-heard sonic palette. Now, to do a quick benchmark and a sort of empirical validation that the cheat sheet works as expected, let's see the response for a genre like Reggae, where I expect to see some differences. cheat_sheet = genre_visual(df,'reggae') Result from genre_visual() function for 'reggae'. Fortunately, at least at a high level, the results make sense to me: we see a lower tempo range and a different set of popular Keys in which Reggae is reported to be produced. Features to add and current Limitations: I think the cheat sheet offers what I wanted to know, but it is easy to see its current limitations and, more importantly, the value a robust genre cheat sheet could add. About the current limitations: It might be worth bucketizing the tempo feature.
Taking the top tempos for Techno as an example, a track at 128 bpm is not so different from a track at 129 bpm; by grouping tempo ranges we might get a more direct or representative display. C sus Maj appears as a popular key; my suspicion is that those tracks could actually be C Maj tracks. If the raw data gets the key from an automated audio analysis algorithm, this could be the case. A human audit would help confirm whether that key needs to be re-labeled. Features to add: If I could add the Date and Region where the track was produced, I could add a higher level of resolution to the cheat sheet. Imagine being able to say: I want to produce a house track with a 90s London-scene vibe. Having the right data could allow that. Energy and Acousticness are features already available in the dataset. Adding a visual characterization of these by genre might be useful, maybe not so much to Music Producers as to Music Business applications. It would be worth adding that insight to the cheat sheet. Potential applications I started this exercise to make a more informed decision when producing within a genre I have some experience with, but having reliable data and a robust cheat sheet could serve other purposes: A producer could get unbiased guidance when trying to produce a track in a genre that is completely new to them. If you have a fan base or want to target a particular audience, you could build the same data using only what that group consumes, making it possible to see whether there are popular track properties within that group. A record label might identify characteristics of the tracks that are more popular among their customers or a particular community. In conclusion, it seems that, apart from me, the music business can also benefit from having robust genre cheat sheets. Get the data
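The genre_visual() function itself is not shown in this excerpt, so here is a minimal, illustrative sketch of how such a cheat-sheet function might be put together, rather than the author's actual implementation. It assumes a DataFrame with 'genre', 'key' and 'tempo' columns, which may differ from the real Kaggle/Spotify column names.

import pandas as pd
import matplotlib.pyplot as plt

def genre_cheat_sheet(df, genre, top_n=5):
    # Illustrative sketch: most popular keys and tempo distribution for one genre
    subset = df[df['genre'] == genre]
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    # Most popular keys for the genre
    subset['key'].value_counts().head(top_n).plot(kind='bar', ax=ax1)
    ax1.set_title(f'Top {top_n} keys - {genre}')

    # Distribution of tempos (a histogram buckets the tempo ranges implicitly)
    subset['tempo'].plot(kind='hist', bins=20, ax=ax2)
    ax2.set_title(f'Tempo distribution - {genre}')

    plt.tight_layout()
    return fig

# Usage, assuming df has been loaded from the Kaggle/Spotify CSV:
# genre_cheat_sheet(df, 'techno')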
https://medium.com/the-music-genre-cheat-sheet/the-music-genre-cheat-sheet-fb57abf74301
['Aratz Hernandez']
2020-06-11 06:44:10.779000+00:00
['Data Science', 'Music Business', 'Music', 'Data Visualization', 'Data']
The Design Manifesto: How agile is as much about design as it is about development.
Photo by Daria Nepriakhina on Unsplash I've been working in design and IT for around 5 years. Along the way, I have been a passenger in the fields of agile methodologies, with teams using various levels of SCRUM for IT delivery and Kanban for design work, and a fly on the wall for the development of DevOps. Emerging from the fields, a few inherent commonalities presented themselves. Overall, no matter the methodology, the discipline or the framework, they are all concerned with answering questions about how humans work together, like: How do we work effectively as a team? How do we deal with the differences in approaches and language that the collaboration of multiple disciplines presents us with? How can we test solutions as fast as possible to maximise learning and deliver value faster? Photo by Ferenc Horvath on Unsplash If you've been around tech at all in the past decade you would have heard of agile. The agile manifesto is a set of principles that was released in 2001 to guide the future of software development. These principles emerged out of the mass frustration that was developing from companies that were so focused on excessively planning and documenting their software development that they lost sight of who they were making software for. (If you want to see the original agile principles, see here: https://agilemanifesto.org/principles.html) Photo by Kelly Sikkema on Unsplash Whilst agile working originated in the software development field, it is not solely a development practice. If we remove the software-development-specific wording from the agile manifesto's set of 12 Principles, we get a set of principles that is suspiciously close to the principles that have developed over the past few years of User/Human-Centred Design and Design Thinking. I present to you… *Drum roll* … the agile manifesto. The Agile Design Manifesto 1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable services. 2. Welcome changing requirements and insights, even late in Design. Agile processes harness change for the customer's competitive advantage. 3. Deliver working insights (Learnings) frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale. 4. Business people and designers must work together daily throughout the project. 5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done. 6. The most efficient and effective method of conveying information to and within a design team is face-to-face conversation. 7. Agile processes promote sustainable development. The sponsors, designers, and users should be able to maintain a constant pace indefinitely. 8. Continuous attention to technical excellence and good design enhances agility. 9. Simplicity — the art of maximizing the amount of work not done — is essential. 10. The best insights, research, and designs emerge from self-organizing teams. 11. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly. 12. Customer behaviour (Outcomes) is the primary measure of progress.
This gives us common ground that might show how design will grow in the future and how design and development share a common purpose. For the next article (whenever that is) lets take a look at putting a design spin on DevOps and see how it matches it’s emerging sibling DesignOps. ;) Let me know what you think of this article.
https://ben-maclaren.medium.com/the-design-manifesto-a6c5fafbd135
['Ben Maclaren']
2020-12-21 23:24:06.584000+00:00
['Designops', 'Lean UX', 'Design', 'Agile']
Car Games & Conversation
As a treat on weekends when I’d accompany my mother running errands we would go to a hole-in-the-wall American-Italian restaurant in town called Franconi’s. There’s not much special about this place. From the drab color palette on the interior to the generic ‘Italian’ paintings that hung minimally on the wall, or even the gumball and toy machines near the counter that never seemed to need to be refilled, it was textbook average. It’s these habitual places that become special over time. On these outings we would grab lunch together, nearly always ordering the same thing. As a kid, I wasn’t the most adventurous eater, so naturally, I would order the very beige meal of a plain hot dog and a massive side of cheese fries. Mom ate relatively healthy, and retroactively I appreciate that she never made fun of my food choice even when I should have grown out of the nutritional crutch of anything covered in fake nacho cheese. After ordering, we would sit and catch up — talk about what was going on with each of us, anything that might have qualified for breaking news in middle school; passing the time with nothing more than conversation and doodling on paper placemats. Mom would reflexively get a pen or two from her purse and we would trace the faint embossed floral pattern on the edge of the placemat. We’d play games like tic-tac-toe or hangman, and laugh at how hard it is to write letters upside down. If I was struggling to uncover the magic phrase that would save this stick figure drawing, she would simply keep adding accessories to allow the game to go further, and for this stick figure’s final moments to be at their most fashionable. On rare occasions when there wasn’t a pen at the bottom of a purse we’d play what most people consider car games. These still hold a place in my heart, much to my husband’s dismay but he humors me on long trips. My favorite bar-none is an alphabet-style game where you fill in the sentence, “My name is [blank], I live in [blank], and I sell [blank],” with words that correspond to a letter, and then the next person the next, so on and so forth. It’s why I know there is a city in China called Xiangtan. On particularly anxious nights when my mind can’t seem to slow down hopping from one thought to the next, I still play this game silently to myself. My brain appreciates the single task to focus on, and I’m usually drowsy by V. The best memory I have at Franconi’s was playing “I’m thinking of a color, and it starts with [blank],” for lack of an actual name. Usually, one person would pose the question to the other, and soon enough I became a middle school kid with the color vocabulary of someone who holds a BFA. On this particular day, Mom must have been tired, because she started straight at me and said: “I’m thinking of a color, and it starts with orange.” I blinked, took a moment and responded, “Is it orange?” “Yeah!” Mom responded enthusiastically. I laughed, told her what she had said, and that phrase became the response whenever we would ‘brain-fart’ in the future. What I love most thinking about all that time sitting in the booths at Franconi’s is the feeling of being listened to. Even if on the same visit I wanted to complain about trivial things, ask big questions, and play silly games, she would listen and respond genuinely. As I grew older and the topics of conversation became more adult, the tone never changed. The same care and consideration was given to middle school drama, college woes, and everything in between. 
Listening and responding genuinely is something I want to give to my son. Active listening helped strengthen the bond between us and encouraged more open communication. I learned there wasn’t anything to be afraid of telling my mother. My thoughts, feelings, and opinions had value, an idea so ingrained that it made those angsty teenage years a little easier. Eventually, when I was on my own and the world considered us both adults, I knew not only did I have my mother in my corner, but I had a friend there too. I think that’s what I miss most. Let’s play some word association — go through each letter of the alphabet and think of one word that reminds you of your person. _ This post first appeared on jskennedy.net on 1/30/2020.
https://medium.com/ugh-good-grief/car-games-conversation-bedbcfb6da1a
['Julia Kennedy']
2020-02-10 14:42:14.643000+00:00
['Mental Health', 'Grief', 'Loss', 'Emotions', 'Mental']
William Moss, MD, executive director of the International Vaccine Access Center and professor of epidemiology at the Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland. Every…
“I’m hopeful that by the school term of fall 2021, we will certainly have a vaccine we can administer to children over 12, and I think we have a good shot of having a vaccine for even younger children as well.”
https://elemental.medium.com/im-hopeful-that-by-the-school-term-of-fall-2021-we-will-certainly-have-a-vaccine-we-can-312bc86c5f48
['Elemental Editors']
2020-12-23 06:32:57.857000+00:00
['Coronavirus', 'Vaccines', 'Covid 19']
NumPy Crash Course — Zero to Hero
Setting up numPy >> import numpy as np The sole reason that numpy is imported as np is convention. You are free to use another alias but it's not recommended as this is what you will find everywhere and it's better to stick to standards >> np.__version__ '1.18.1' nd-array The primary reason that numpy is fast is because of the nd-array type that it uses to store and manipulate data An ndarray is a generic multidimensional container for homogenous data. It provides vectorized arithmetic operations and sophisticated broadcasting capabilities. Every ndarray has 2 properties: shape and dtype. shape is a tuple providing the dimension of the array and dtype provides you the datatype of the array. The dtype of the array can also be explicitly specified while defining the array giving you fine-tuned control over the array. Let's create a numpy array from an array. This is possible by passing the array as input to the np.array function >> nparray = np.array([1,2,3]) array([1, 2, 3]) A numpy array has numerous properties which gives more information about them. Datatype of the array >> nparray.dtype dtype('int32') Size of the array >> nparray.size 3 Shape of the array >> nparray.shape (3,) The below 2 cells of code warrant a detailed explanation. The itemsize parameter returns the size of a single item in the array. In this case, we have an integer array taking up 32 bits of space for a single item and this is equivalent to 4 bytes(1 byte = 8 bits). The following cell showcases the nbytes parameter which returns the size in bytes of the entire array, thereby providing a value of 12 bytes(4 bytes * 3 items). To put it short: itemsize provides the size of a single item in the array while nbytes returns the size of the entire array. >> nparray.itemsize 4 >> nparray.nbytes 12 There is a reason that numpy’s nd-array is faster than Python’s native list. Let's look at this in depth in the following cells >> %timeit pythonList = [i for i in range(10000)] 545 µs ± 24 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) >> %timeit npList = np.arange(10000) 7.82 µs ± 256 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) numpy arrays are homogenous and is handled faster in memory. Now compare this to a Python List where you can put anything in; every entry in a Python list is a Python object and this causes overhead in computations. This is the primary reason why numpy arrays are significantly faster than the traditional Python lists Lets now look at some of the functions which makes numpy a flexible and handy library. Generating data with numpy arange generates a list of numbers within the range of the digit passed >> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) linspace returns a set of linearly-spaced items within the range passed as input. In linspace, the starting digit, ending digit along with the number of digits required as passed as input. Basically, it returns an array with the required number of digits in a specified interval >> np.linspace(0, 10, 5) array([ 0. , 2.5, 5. , 7.5, 10. ]) ones creates an array filled with ones. The parameter passed as input is the size of the required array >> np.ones(5) array([1., 1., 1., 1., 1.]) zeros creates an array filled with zeroes. The parameter passed as input is the size of the required array >> np.zeros(5) array([0., 0., 0., 0., 0.]) zeros_like creates an array with the same size as the array passed as input. 
The generated array will have zeros as elements >> np.zeros_like(np.arange(5)) array([0, 0, 0, 0, 0]) eye creates an identity matrix. The generated matrix will be of the dimensions of the integer passed as input >> np.eye(5) array([[1., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.], [0., 0., 0., 1., 0.], [0., 0., 0., 0., 1.]]) empty creates an array filled with garbage values, usually zeroes. The parameter passed as input is the size of the required array >> np.empty(5) array([1., 1., 1., 1., 1.]) Indexing numpy follows the usual rules of Python when it comes to indexing and slicing. I am laying out a few examples below that you can play around with and experiment with: >> nparray[1] 2 >> nparray[-1] 3 Slicing >> nparray[1:2] array([2]) >> nparray[:2] array([1, 2]) >> nparray[:] array([1, 2, 3]) >> nparray[1:] array([2, 3]) >> largeArray = np.arange(100) >> largeArray[::20] array([ 0, 20, 40, 60, 80]) >> largeArray[1:10:2] array([1, 3, 5, 7, 9]) >> largeArray[::-20] array([99, 79, 59, 39, 19]) >> largeArray[10:1:-2] array([10, 8, 6, 4, 2]) An important thing to keep in mind while using slicing in numpy is that the slices are essentially references(views) and hence, any changes that you make to the sliced data will reflect in the parent. Lets look at an example >> smallArray = largeArray[:10] >> smallArray array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >> largeArray[:10] array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >> smallArray[0] = 666 >> smallArray array([666, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >> largeArray[:10] array([666, 1, 2, 3, 4, 5, 6, 7, 8, 9]) The copy() method can be used to create copies instead of such views Axes A numpy array can be multi-dimensional. It also gives you the ability to change an existing array to a shape of your liking provided that it meets multiple constraints reshape lets you do exactly what the name says. It lets you shape the array into the dimensions passed as input. If you do not pass a dimension that the array can be reshaped into, then the function will return an error. Say, the array has 10 elements and you try to reshape it to an array of shape 3x5, then reshape will return an error >> np.arange(1, 7).reshape((2, 3)) array([[1, 2, 3], [4, 5, 6]]) >> np.arange(1, 4) array([1, 2, 3]) >> np.arange(1, 4).reshape(1,3) array([[1, 2, 3]]) newaxis is used to create a new axis in the data. It is commonly used when working on modelling techniques as models require the data to be shaped in a certain manner. As you can see below, if the newaxis parameter is in the first position, then a new row vector will be generated. If its in the second position, then a column will be created with each of the elements being a separate vector. >> np.arange(1, 4)[np.newaxis, :] array([[1, 2, 3]]) >> np.arange(1, 4)[:, np.newaxis] array([[1], [2], [3]]) Array Concatenation Arrays can be concatenated in numpy using the concatenate method. The list of arrays to be concatenated is to be passed as input to the concatenate function. 
>> np.concatenate([smallArray, largeArray]) array([666, 1, 2, 3, 4, 5, 6, 7, 8, 9, 666, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]) >> np.concatenate([smallArray, largeArray, [888, 999]]) array([666, 1, 2, 3, 4, 5, 6, 7, 8, 9, 666, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 888, 999]) numpy Ufuncs The primary purpose of Ufunc is to be able to speed up repeated operations on values in a numpy array. It can work both between a scalar value & an array and between 2 arrays >> 3 * np.arange(0, 10) array([ 0, 3, 6, 9, 12, 15, 18, 21, 24, 27]) >> np.arange(0, 10) + np.arange(20, 30) array([20, 22, 24, 26, 28, 30, 32, 34, 36, 38]) This is not just syntactically better & intuitive, Its also faster. Lets try that out below >> %timeit 3 * smallArray 1.22 µs ± 61.4 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) >> %%timeit >> for i in range(len(smallArray)): 3 * smallArray[i] 7.15 µs ± 182 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) As can be seen above, the ufunc is multitudes faster than the looped version of the code. This difference becomes more and more pronounced as the computational logic involved gets more complex Aggregation In this section, we will explore the various aggregation functions that numpy provides. numpy ships with a standard sum() function which returns the sum of all elements in the array. You may ask what the difference is, between the native Python function and the numpy function! After all, they are doing the same functionality; provide the sum of elements. Let’s check it out below >> np.sum(smallArray) 711 >> sum(smallArray) 711 >> hugeArray = np.random.randint(100000, size=1000000) >> %timeit np.sum(hugeArray) >> %timeit sum(hugeArray) 526 µs ± 9.48 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 271 ms ± 11 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) From the above code block, it's pretty evident how fast numpy is, compared to the native functions. This applies to most, if not all, aggregation functions available in numpy >> np.min(smallArray) 1 >> np.max(smallArray) 666 >> np.std(smallArray) 198.31512801599376 >> np.mean(smallArray) 71.1 np.median(smallArray) 5.5 One thing to keep in note while using the aggregation functions is that these are prone to NaN values while you are using them, i.e., if you have a NaN-value in your array, these aggregation functions will fail. In such cases, you can use their NaN-safe alternatives. Just to give you an example: nansum() is the NaN-safe alternative for the sum function >> np.nanmean(np.array([1, 2, np.nan])) 1.5 >> np.mean(np.array([1, 2, np.nan])) nan Broadcasting We are now going to look at an operation that you might have used a lot without having a conceptual understanding. In broadcasting, you perform an operation on an entity that has a different shape/dimensions. 
A simple example of this could be adding a scalar to a numpy array. You can basically think of the values being duplicated to match the dimension of the array followed by the operation that is to be performed. Broadcasting is an operation that can be elaborated on, but that is not my goal here with this post >> a = np.arange(1, 7) >> a array([1, 2, 3, 4, 5, 6]) >> a + 1 array([2, 3, 4, 5, 6, 7]) Logical Operations There will be times when you would like to perform logical checks on a piece of data. numpy provides all the normal logical operations that you would expect: greater than, less than & equal to checks. The logical functions return an array of results in Boolean indicating if they satisfied the condition or not >> x = np.array([1, 2, 3, 4]) >> x > 2 array([False, False, True, True]) >> x == 2 array([False, True, False, False]) numpy also provides useful functions such as any() and all() which are used to perform the following checks: if there is any element that fulfilled the condition or if all the elements satisfy the condition, respectively. They provide a single Boolean value as output which indicates the result >> np.any(x == 2) True >> np.all(x == 2) False From the above 2 code blocks: the following can be understood: any(x == 2): It checks if any of the elements in the array met the condition passed all(x == 2): It checks if all the elements in the array met the condition passed Say we want a count of values that satisfy this particular condition, you can use the sum() method as follows. It counts the number of True values in the array >> np.sum(x == 2) 1 This can also be compounded to check for multiple conditions. The same can be done for the any and all functions >> np.sum((x == 2) | (x == 3)) 2 >> np.any((x == 2) | (x == 3)) True Masking You might have seen the above arrays which provide values as a Boolean array and wondered how that’s helpful because you still need to provide further operations to make sense of the output. Here is where masking comes in; the True/False array can be passed into the array to provide only those values which meet the conditions >> x[x == 2] array([2]) >> x[x > 2] array([3, 4]) Fancy Indexing Fancy indexing is nothing but the ability to access multiple elements of the array at once >> x array([1, 2, 3, 4]) >> x[[0, 2, 3]] array([1, 3, 4]) As you might have guessed, we can pass in an array as the list of indices too >> indexList = [0, 2, 3] >> x[indexList] array([1, 3, 4]) The true power of fancy indexing comes out when it is combined with the likes of slicing, indexing and broadcasting Sorting The sorting algorithm provided by numpy is very efficient. There are 2 ways you can sort a numpy array; in-place and by calling the numpy sort function that returns the sorted array Let’s shuffle our array first so that we can sort it and play around with it >> np.random.shuffle(x) >> x array([1, 3, 4, 2]) Calling np.sort on the array will return a copy of the array that is sorted and does not sort the array in place as demonstrated below >> np.sort(x) array([1, 2, 3, 4]) >> x array([1, 3, 4, 2]) If you call sort on the array, then the array will be sorted in-place and there’s no need to save it to another array. Both methods perform the same functionality and are used depending on the required use-case >> x.sort() >> x array([1, 2, 3, 4]) Concluding Notes numPy has a lot more capabilities, especially with regards to higher-dimensional data. numPy can easily work with, and manipulate data of higher dimensions. 
I have not decided to go into such details as that would defeat the purpose of this post. I have compiled a Jupyter Notebook with the entire contents of this blog here. There are a lot of things I haven’t covered as its outside the scope of this blog, however, if you are interested in reading more about numpy, I would highly recommend the following resources:
https://towardsdatascience.com/numpy-crash-course-zero-to-hero-c1788a8a48ac
['Prejith Premkumar']
2020-06-09 13:56:40.370000+00:00
['Machine Learning', 'Python', 'Analytics', 'Numpy', 'Data Science']
Floating labels, high-conversion pages, prototyping smaller, and more UX links this week
What's hot in UX this week: What exactly are design principles? What are they for? Are they useful? How? What makes a good design principle? In an attempt to answer those questions, I pored over the biggest collections of design principles on the internet, and came to the following conclusion: corporate design principles are a set of shared guidelines that reflect the core design values and vision of a company. They are meant to remind teams what kind of user experience they should be striving for, and help them make decisions. Here are some examples from some well-known brands. Read full story →
https://uxdesign.cc/floating-labels-high-conversion-pages-prototyping-smaller-and-more-ux-links-this-week-3c4b2aebd7d3
['Fabricio Teixeira']
2017-06-12 01:02:56.243000+00:00
['Chatbots', 'Interaction Design', 'User Experience', 'Design', 'Hot This Week']
COVID-19 News for State, Local, and Tribal Leaders, 10/16/20
Welcome to our weekly roundup of articles and resources for state, local, and tribal leaders creating policy to help combat the COVID-19 pandemic as well as steering the social and economic recovery for their communities. Postings below do not convey endorsement of any particular organization or opinion contained in links. Many of the early hotspot cities were also at the forefront of innovation, prompting questions over how well connected cities would respond to the pandemic. Dharavi contained Covid-19 against all the odds. Now its people need to survive an economic catastrophe. By making some small but important adjustments, several cities participating in the national Love Your Block City Hall VISTA program have found ways to continue reaching residents and support community projects during the pandemic. Their efforts are relatively easy and inexpensive to implement and provide lessons for other cities as they contend with COVID-19. Syracuse is a modest sized city of just over 140,000 people in upstate New York. It sits within the heart of the Finger Lakes region, not far from Lake Ontario. Although far from the worst hit areas of New York state by COVID-19, Syracuse saw alarming increases in cases and deaths during the initial wave of the pandemic. As communities nationwide shift their focus from response to economic recovery, transit agencies are supporting local efforts by increasing the availability of service. Indiana Chief Privacy Officer Ted Cotterill said several years of transforming the state’s culture of enterprise data-sharing paid off during the health crisis. The cities and local jurisdictions that I’ve seen performing best during this crisis are those that laid the technological foundation for a streamlined, efficient and disruption-resistant workforce. One way to do so is through automation.
https://medium.com/covid-19-public-sector-resources/covid-19-news-for-state-local-and-tribal-leaders-10-16-20-724f37460214
['Harvard Ash Center']
2020-10-20 03:15:49.960000+00:00
['Covid 19', 'Local Government', 'State Government', 'Coronavirus', 'Tribal Government']
Pandas DataFrame — simple transformations in Python
Useful data transformations for practical analysis. Pandas DataFrame — simple transformations in Python A few simple pieces of code often needed while preparing your data. While coding, it seems there are a few data transformations I often need, and I am always trying to find the best possible solution. At the beginning of my Data Science "journey," I was drawn to simple answers that would make data look more representative. While learning, I had the logic of it in my head, but I wasn't experienced enough to pull code out of my sleeve. So let's dive in. Photo by Franki Chamaki on Unsplash 1. Add column based on condition This is something always needed in my code. To make better predictions or visualizations, there is undoubtedly still a need to add something extra to my data. In my opinion, the most effective way is adding a column with a function — having more control over it, and it is easily adjustable to any need. def func(data): if data['column']==condition: return 1 else: return 0 We define a function that takes the entire dataset and, in an IF statement, checks whether the selected column meets the entered condition. Conditions can have many variations: # one condition (==, >, <, >= and <= all work) if data['column']==condition: # multiple conditions - and (&), or (|) if (data['column']==condition) | (data['column']==condition2): With the return, we decide what needs to be entered in our new column if the IF statement is True; it can be an int, string, float, etc. As an example, I will use the Iris dataset: def func(data): if data['PetalWidthCm']>1.19: return 1 else: return 0 We want to create a new column based on Petal Width in cm — if Petal Width is greater than 1.19, write 1 in the new column; if it is less than or equal, write 0. Calling the function: applying the function to each row — axis=1 — using .apply. data['Is-PetalWidth_greaterThen_1.19']=data.apply(func,axis=1) Output : 2. Get dummy variables In short, a dummy variable is a numeric variable that represents a unique value of categorical data. For each categorical variable, the number of dummy variables = number of unique values in the categorical variable.
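The excerpt cuts off before showing the dummy-variable code, so here is a minimal sketch using pandas' built-in get_dummies. The small example DataFrame and the 'Species' column name are assumptions for illustration, not the article's actual dataset.

import pandas as pd

# Hypothetical Iris-style data; 'Species' is the categorical column
data = pd.DataFrame({
    'PetalWidthCm': [0.2, 1.3, 2.1, 1.8],
    'Species': ['setosa', 'versicolor', 'virginica', 'virginica']
})

# One 0/1 dummy column per unique value of 'Species'
dummies = pd.get_dummies(data['Species'], prefix='Species')
data = pd.concat([data, dummies], axis=1)
print(data)

The number of new columns equals the number of unique values in the categorical variable, matching the rule stated above.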
https://medium.com/analytics-vidhya/pandas-dataframe-simple-transformations-in-python-6582523c3d81
['Hana Šturlan']
2020-04-27 16:36:35.869000+00:00
['Python', 'Transformation', 'Simple', 'Pandas']
Practicing The Subtle Art Of Detachment
Practicing The Subtle Art Of Detachment Why taking a step back is as necessary as moving forward From everything that I recall about my life so far, I can say one thing with absolute certainty. I have been an extremely passionate person. Passionate about everything. Be it life in general, work, friendships, relationships. Bustling with energy, I have always liked to give my heart, my soul, my mind and my energy completely into things that matter to me. I take the leap and I go all in like there is no middle spot. And that always seemed to work for me. I was always on the high wave, getting things done, maintaining the happiest relationships and believing with certainty that I could achieve absolutely anything. Until, I reached a day when the things that really mattered to me were at a point of collapse and I collapsed along with them. And my story is not really unique in this sense. Mental fatigue and burnout is almost like the epidemic of the century. Some of the brightest people with immense energy and passion go through this phase of extreme exhaustion which might last for months if not years. And that’s because there is a bit of a downside of being too passionate. To put it simply, when you go about attaching your happiness, your existence and your life’s meaning too deeply with your work, your relationships or anything else for that matter, you put yourself at risk. And why is that? Because with attachment comes a very strong urge to control the circumstances. While you can exercise some amount of control over what happens in your life, that will absolutely never eliminate the possibility of things going haywire or the possibility of your plans and ambitions not quite turning into reality. You put yourself at risk because you put so much of yourself into something unwilling to believe that there is a tiny chance that it might not quite work out the way you plan. And I don’t deny that this kind of confidence is necessary. It is probably the only reason behind strong risk-taking capabilities and subsequent achievements. That’s why the problem hasn’t entirely got to do with being passionate alone. Passion is everything, after all. Defined as ‘a strong and barely controllable desire’, feeling passionate is what makes you feel alive. The problem turns out to be with delusional thinking. Remember how people say ‘Love is blind’? What they essentially imply there is that feeling too much passion and attachment towards something can skew our perception of it. It can make us unwilling to accept the possibility of things going wrong. It can make us unwilling to see the flaws in our plan. It can make us oblivious to the truth that is right in front of us. Be it in our work, in our relationships or anything else in our life that we feel strongly passionate about, we all have a tendency to look at it in a skewed manner. “Attachment is the great fabricator of illusions; reality can be obtained only by someone who is detached. ” ― Simone Weil So if the ceiling breaks and things go wrong one after the other, because sometimes they do despite your best efforts, you might find yourself really struggling to cope up. But does that mean passion is a bad thing? Should you never give yourself completely into anything? Should you not love unconditionally and whole-heartedly? Should you not embrace life fully with enthusiasm and be ready to take risks? I don’t think so. But you should always and always stick to an idea of ‘self’ that is independent of anything else in your life. 
“Remain in the world, act in the world, do whatsoever is needful, and yet remain transcendental, aloof, detached, a lotus flower in the pond.” ― Osho, The Secret of Secrets Is there anything that remains when I strip your life of your work and your deepest relationships for a while? Is there a core within you that is separate, detached and at peace irrespective of how things go in your life? Or are you constantly on a roller coaster ride based on what happens? Exhilarated because great things are happening at work, miserable because the last batch of orders didn’t get delivered on time and customers left bad reviews. Exhilarated because things are going well in your relationship, miserable because he/she suddenly stopped giving you enough time. Letting the things that you feel passionate about dictate your mood, your energy levels and your overall enthusiasm towards life is not a very healthy approach as you are relying over something external, something that is not entirely under your control to dictate your life. The only difference between people who collapse after failure/loss and those who dust themselves off and start again quickly is that the latter know and practice the art of detachment. What exactly is the art of detachment? It’s the art of withdrawing desire from lesser things, letting them fall away, so as to harness their power to reach the heights of what a human being can attain. Oxymoronic though it may sound, it’s said that you can achieve the greatest heights only through detaching yourself from the things that matter to you to a certain extent and by taking a step back. And it doesn’t mean that you should always feel detached either. It just means that you should be capable of practicing detachment when required. To be attached is to live in the fear that what you want will not materialise and traps you in a continuous state of desire. In my experience, I have found it useful to practice detachment in following forms — Detachment from Material Goals To understand this form of detachment, the best example is the story of Joshua and Ryan, the two people behind the concept The Minimalists. They said, “While approaching age 30, we had achieved everything that was supposed to make us happy: great six-figure jobs, luxury cars, oversized houses, and all the stuff to clutter every corner of our consumer-driven lifestyles. And yet with all that stuff, we weren’t satisfied with our lives. We weren’t happy. There was a gaping void, and working 70–80 hours a week just to buy more stuff didn’t fill the void: it only brought more debt, stress, anxiety, fear, loneliness, guilt, overwhelm, and depression.” There are just too many people who are too attached to the things they own and too addicted to buying and hoarding more and more things without asking this one simple question — “Is it important enough?” When you detach yourself from the compulsion of owning things just for the sake of owning them you begin to experience real freedom and joy from things that really matter. “Detachment is not that you should own nothing, but that nothing should own you.” Remember, less is more. Take a step back to understand what things add value to your lives. By clearing the clutter from life’s path, we can all make room for the most important aspects of life: health, relationships, passion, growth, and contribution. 2. Detachment in Relationships Most people struggle the most with this aspect of detachment and it’s only natural. 
Most of us misunderstand love to be all about really holding on to the other person, trying to fix them and taking care of them in all ways possible, even if it comes at the cost of neglecting your own well being. It gets even worse when we let our lives revolve around certain relationships. It might be relationship with your parents, with your spouse, with your best friends or anyone else who has a big influence in your life. In all relationships, there is a need to practice a certain amount of detachment. We might wonder why? The answers are many. Detachment is needed so that you do not take everything personally because you don’t control their reactions. Detachment is needed so that you don’t seek their validation to the extent that your own opinions start to diminish. Detachment is needed to understand that love is about acceptance and not about control. It is needed to understand that you alone are the master of your own lives and you need to draw boundaries so that others don’t control you. Detachment in love is necessary to maintain that optimum amount of distance that is most essential for growth. No lines sum up the thought about loving detachment as these lines from Kahlil Gibran’s poem “But let there be spaces in your togetherness, And let the winds of the heavens dance between you. Love one another, but make not a bond of love: Let it rather be a moving sea between the shores of your souls.” 3. Detachment from your experiences Life is meant to be lived and to not to be over-analysed. Yet, more often than not we find ourselves stuck in our head recounting experiences, mostly unpleasant ones over and over again till they bring us down. Not only that, we also tend to carry them with us around like a bad weather. They form our prejudices and biases about our view of the world. We tend to over-generalize and assume things when we hold on too tightly to our past experiences. It’s one thing to take the learnings from an experience and move further in life with new wisdom and it’s totally another thing to carry the bitterness, guilt and regret over the past experiences and letting them taint your present days. This often happens when we fail to completely accept and let go our bad experiences. When something bad happens, feel free to feel the pain, grieve and let go. Only through acceptance, you can free yourself from the weight and detach yourself from it. 4. Detachment from your work “You are not your job, you’re not how much money you have in the bank. You are not the car you drive. You’re not the contents of your wallet. You are not your fucking khakis. You are all singing, all dancing crap of the world.” ― Chuck Palahniuk, Fight Club Wallace Inmen’s Globe and Mail, “Losing Your Job, Losing Your Identity,” survey 0f 12,000 respondents on the topic reveal that more than 30% define their personal identities through their career. Does this describe you? You have few interests outside of work; you feel restless when you’re not working; you can’t carry on a conversation without referring to something at work; you make yourself available to people at work 24/7; and when you’re at home with family, your mind is back at work. If it does, you’ve defined yourself too much by your job. And that’s not good for your mental and physical health. Detachment from work means that when you leave your workplace you leave your work related worries there. 
Detachment from work means that you do not define your personal worth too closely to your performance at your workplace or to the validation that you receive at work. Detachment from work means that you do not rely on work alone to give you a feeling of completeness and to provide a meaning to your life. In fact, detachment from work can lift off the pressure to be at your best all the time, allowing you to take a step back, relax and just focus on the work without any anxieties. It can improve your overall mood, your performance and might even lead to more creative ideas. 5. Detachment from your own thoughts Out of all forms of practicing detachment, I find this one most profound in the ways it helps me grow. Most of us are too attached to our thoughts and our obsessive thinking patterns. Very few of us are able to take a step back to exercise a certain amount of control over our thoughts. It turns into a problem when we confuse our thoughts with feelings and end up taking actions on impulse. Somehow, we conclude that every thought needs to be acted upon and it doesn’t turn out very well. Detachment from thoughts, often practiced through meditation till it becomes a usual practice, allows you to look at your thoughts as an outsider, letting them come and go without allowing yourself to feel too much about them. This allows you to practice a certain amount of detachment and you begin to see that not all thoughts are important. You realise that most of them are just clouding your head and it will be best to free yourself from them. Detaching yourself from your thoughts requires an understanding of the fact that — Our thoughts are just thoughts. They are not the ultimate truth or reality. You enter a state of mind in which you witness, clearly and calmly, with good will, whatever you are seeing, hearing, thinking, enjoying, or suffering. You watch your problems, fears, and challenges as if you are not bound or preoccupied by them but viewing them calmly — a witness. With practice, your turbulent thoughts and negative emotions will lose their grip on your mind. They will not be able to drive you or distort your inner potential and well being. “Mind can be your best friend or worst enemy.” ― Kabira, Birthplace of Happiness 6. Detachment from sense of time Man alone chimes the hour. And, because of this, man alone suffers a paralysing fear that no other creature endures. A fear of time running out.” ― Mitch Albom, The Time Keeper A lot of our anxieties are caused by thoughts of not having enough time for all that we want to do. We have huge plans for months and even years whose enormity makes it difficult for us to live our present time in the best way possible today, the only time we have in hand for sure. Detachment from sense of time can help you become aware of the transient nature of our lives and help you become more and more peaceful as you understand that the only time you have control over is now, this present moment. All that has passed before and all that is coming ahead is immaterial.
https://medium.com/personal-growth/practicing-the-subtle-art-of-detachment-b3f94b91fcf2
['Shreya Dalela']
2020-04-24 11:17:11.160000+00:00
['Mindfulness', 'Mental Health', 'Spirituality', 'Life Lessons', 'Self Improvement']
Growing Self Organizing Maps (GSOM)
GSOM Algorithm 1. Initialization Phase In this particular phase the algorithm initializes the weight vectors of the starting nodes with random numbers. Generally the GSOM algorithm initiates with 4 nodes, which gives them the autonomy to grow in whichever direction they want, since all four of them are boundary nodes. A numeric variable H_err is initialized to "0" at initialization. The spread factor (SF) has to be specified in this phase. The algorithm then calculates the growth threshold (GT) for the given data set according to the specified requirements. GT will act as a threshold value for initiating node generation. Fig.2. Initial GSOM(A) Initialization phase summary: - Initialize the weight vectors of the starting nodes (usually four) with random numbers between 0 and 1. - Determine the Spread Factor (SF) - Calculate the growth threshold (GT) for the data set of dimension (D) and the already defined spread factor (SF), using the formula GT = -D x ln(SF) 2. Growing Phase We now feed the input to the network in this phase. Initially, similar to the SOM algorithm (Competition Phase), GSOM too determines the closest weight vector to the input vector as the winner (or BMU, the Best Matching Unit), based on Euclidean distance. GSOM can be considered as the 2D representation of the input space, where the input space is partitioned by weight vectors (W_i) into Voronoi regions (V_i) and each Voronoi region is represented by one neuron (i). Fig.3. Voronoi Diagram — Photo by Charles Francis in Quora "Voronoi diagram is a partition of a plane into regions close to each of a given set of objects. In the simplest case, these objects are just finitely many points in the plane. For each seed there is a corresponding region consisting of all points of the plane closer to that seed than to any other." [3]. The Cooperation and Adaptation phases of SOM follow, where weight updation occurs only in the BMU and the neighborhood of the winner. In GSOM the starting neighborhood is chosen smaller when compared to SOM. When the total error (the accumulated difference between the weight vector of a particular node and the input vectors it wins) is less than the growth threshold (GT), i.e. if H_err < GT, then the algorithm does not grow new nodes. But when the total error of the node is greater than the growth threshold (GT), i.e. H_err > GT, new nodes are generated at a boundary node. New nodes are generated on all free neighboring positions, since the computational cost of figuring out the exact best position for the new node is higher. "A boundary node is one that has at least one of its immediate neighboring positions free of a node"[1]. In GSOM, which defines that every node can have a maximum of 4 immediate neighbors, a boundary node is therefore one that still has at least one free neighboring position; for the initial four nodes, for instance, three of the four directions are still free. After generating the new nodes, it is essential to update their weights. There are four possible ways (see Fig. 4). Fig.4. Weight updation in GSOM new nodes by Damminda et al. 1. When, on one of its sides, the new node has two old nodes in a row (Fig. 4(a)) => if W2 > W1 => W_new = W1-(W2-W1) => if W1 > W2 => W_new = W1+(W1-W2) 2. When the new node is placed in between two older nodes (Fig. 4(b)) => W_new = (W1+W2)/2 3. When the new node has only one direct neighbor which is an older node (Fig. 4(c)) => if W2 > W1 => W_new = W1-(W2-W1) => if W1 > W2 => W_new = W1+(W1-W2) 4.
When the nodes are isolated due to nodes being removed after aging and generated new node has only one neighboring older node => W_new = (r_1 + r_2) /2 ; r_1, r_2 lower and upper values of the range of the weight vector of the only one neighbouring node. Growing Phase Summary : - Winner node/neuron is chosen based on the closest proximity of input vectors to that of neuron. - The difference between the winner vector and the input vector of a particular input is accumulated as error value for that neuron(i). - If the error that is contributed by a particular weight vector is highly contributing to the accumulated error, it will be decided to generate a new neuron since the higher contribution of error refers that the winner neuron is not the best representer of input vector. - New nodes will be created to grow from a boundary node . Weight upation occurs for new nodes. 3. Smoothing Phase The growing phase stops when there are less number of new nodes being created and smoothing phase begins there itself. In this Smoothing phase, algorithm reduces the learning rate and fix a small starting neighborhood.
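To make the growth mechanics above concrete, here is a minimal, illustrative Python sketch of the core loop: computing GT = -D·ln(SF), picking the BMU by Euclidean distance, accumulating error, and growing new nodes at the free neighboring positions of a boundary node. It is not a reference GSOM implementation — names such as SimpleGSOM and train_step are invented for this example, new-node weights are initialized with a single simplified averaging rule rather than the four cases of Fig. 4, and neighborhood adaptation and the smoothing phase are omitted.

```python
import numpy as np

def growth_threshold(dim, spread_factor):
    """GT = -D * ln(SF), as defined in the initialization phase."""
    return -dim * np.log(spread_factor)

class SimpleGSOM:
    """Toy GSOM on a square lattice: grid coordinates -> weight vectors."""

    def __init__(self, dim, spread_factor, learning_rate=0.1, rng=None):
        self.dim = dim
        self.gt = growth_threshold(dim, spread_factor)
        self.lr = learning_rate
        self.rng = rng or np.random.default_rng(0)
        # Start with the usual four boundary nodes, weights in [0, 1).
        self.nodes = {(x, y): self.rng.random(dim) for x in (0, 1) for y in (0, 1)}
        self.errors = {pos: 0.0 for pos in self.nodes}

    def _neighbours(self, pos):
        x, y = pos
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def find_bmu(self, x):
        """Winner = node whose weight vector is closest (Euclidean) to x."""
        return min(self.nodes, key=lambda p: np.linalg.norm(self.nodes[p] - x))

    def _grow(self, pos):
        """Spawn nodes in every free neighbouring position of a boundary node."""
        for nb in self._neighbours(pos):
            if nb not in self.nodes:
                # Simplified weight initialisation (a full implementation
                # distinguishes the four cases of Fig. 4).
                grand_mean = np.mean(list(self.nodes.values()), axis=0)
                self.nodes[nb] = (self.nodes[pos] + grand_mean) / 2
                self.errors[nb] = 0.0

    def train_step(self, x):
        bmu = self.find_bmu(x)
        # Accumulate the error H_err on the winner.
        self.errors[bmu] += np.linalg.norm(self.nodes[bmu] - x)
        # Adapt the winner (a full GSOM also adapts a small neighbourhood).
        self.nodes[bmu] += self.lr * (x - self.nodes[bmu])
        is_boundary = any(nb not in self.nodes for nb in self._neighbours(bmu))
        if self.errors[bmu] > self.gt and is_boundary:
            self._grow(bmu)
            self.errors[bmu] = 0.0

# Example: grow a map over 200 random 3-dimensional samples.
gsom = SimpleGSOM(dim=3, spread_factor=0.5)
for sample in np.random.default_rng(1).random((200, 3)):
    gsom.train_step(sample)
print(f"GT = {gsom.gt:.3f}, nodes grown: {len(gsom.nodes)}")
```

Lowering the spread factor raises GT, so the map grows fewer nodes and the clustering stays coarser — the same trade-off the algorithm exposes through SF.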
https://medium.com/datadriveninvestor/growing-self-organizing-maps-gsom-9ccf038de87b
['Vivek Vinushanth Christopher']
2020-12-11 14:18:11.217000+00:00
['Machine Learning', 'Unsupervised Learning', 'Data Science', 'Clustering', 'Data Visualization']
How Pedigree Cleverly Used a Classic Pet’s Game to Tackle Depression
How Pedigree Cleverly Used a Classic Pet’s Game to Tackle Depression A reminder that dogs can indeed be your best friends Screencapture from Brings It Back. 264 million. According to the World Health Organization, that’s the estimated number of people suffering from depression in the world. To put that into perspective, that’s equivalent to the entire population of Indonesia — the fourth largest country on the planet. What’s more, up to 800,000 people take their own lives each year. That’s a shocking statistic to even think about. Clearly, depression is both a significant mental and societal issue that needs to be dealt with seriously by everyone — and thankfully, there are some brands that graciously take on part of that incredible responsibility. In fact, one particular pet food brand’s campaign did it so well, they managed to seamlessly integrate a timeless game that you play with your dog with the overarching issue of depression. I’m talking about Pedigree Petfoods and their 2019 campaign “Brings It Back.” Let’s take a look at exactly how they pulled this off, and what we all can take away from this amazing campaign.
https://medium.com/better-marketing/how-pedigree-cleverly-used-a-classic-pets-game-to-tackle-depression-b4d3c19f531d
[]
2020-12-02 14:02:59.556000+00:00
['Marketing', 'Strategy', 'Culture', 'Social Media', 'Pets']
I’ve Kept a Daily Journal Now for a Full Year — But I Did It All Wrong
I started my diary last November, on what was almost a whim. I did it because I found a cute diary in T.K. Maxx and I liked its thick creamy pages and solid cover. I did it because I felt like all of my lived days were blending into each other and when I looked back at them, I could not separate their strands and I didn’t like that feeling. I did it because I thought it might slow the rush of time down a bit. Mostly, though, I did it because my grieving, widowed father told me one day that his absolute favourite thing to do in the world was to sit and read through the diaries his partner had kept throughout her life (before she died suddenly last year and left him, far too soon). He said “She writes about a trip to the beach and what we saw there, and I can fill all the rest of it in myself. It’s a whole memory that I forgot I had, but the diaries give me access to it.” I thought I’d like to leave that sort of record behind me. So I started writing one. I did exactly what my dad told me his partner had always done: I wrote short fact-based notes at the end of each day, detailing who I’d seen and where I’d been and the activities I’d filled my hours with. And at first, I felt like that was just right. The brief notes were all I needed. As of last month, I have kept it up for the whole year. That’s the longest I’ve managed to maintain any sort of journal routine since I was a teenager. Each day, shivering in my cold bedroom and mocked by its dead, useless radiator, I’ve made myself write with an old bitten biro in my pretty journal and within a couple of lines, rounded up the experiences of the past 24 hours, jotting their bare facts onto the page. The thing is, though, that after March swept the world into our varying degrees of quarantine, I didn’t actually do very much during my days. And it turns out that a constant litany of “worked from home; made food for everyone; went for a run; did a bit of yoga” is not only the opposite of entertaining but, predictably, serves as no sort of memory-jogger either when I leaf back through it. I could have had a rubber stamp made and saved myself the shivery effort. I wrote in this very publication during the summer that my ongoing diary was “dry as dust to read”, and that description remains accurate. But at the time, I thought that was fine. I thought it was enough. I also wrote, back then, that I’d be able to “flesh the dry facts out later”. What was I talking about? Where and when would I have done that? Oh, the naive confidence of my summer self. Yes, it’s useful to be able to note that we went to Scotland on the second weekend of June, not the first. Yes, when I read that entry and the list of the places we visited I remember afresh the ceaseless rain as we passed the border sign and the fact that (contrary to the Proclaimers’ mournful yodeling) there was no sunshine on Leith. But how was I actually feeling, that day in the breeze on Arthur’s Seat? Where was my head at? I have no idea. I have only the vaguest impression of myself. And that’s how I know I’ve been doing it all wrong. I want to remember the feelings as well as the facts. I want to look back and know that there were days on which I woke up gasping in terror from a nightmare but found that my mood was inexplicably sunny all the same, as though I’d fought and won a battle in my sleep. 
I’d like to remember the days when I argued fiercely with my husband and my children all separately grated on me and I felt that I could cheerfully turn my back on the lot of them and drive as far as my fuel tank would allow. I’d love to remember the golden days where I dared not look directly at my happiness, for fear it might burn a hole in my retinas. And I’d like to remember how I filled my mind on the colourless transparent in-between days when very little happened at all. So next year, I’ll carry on with the diary, but with one vital change: it’s going to include moods, feelings, and free-flowing opinions as well as the dry stuff. I probably won’t write it every day; I think every week would suffice, for such a general overview. No one’s going to read it, after all. I haven’t yet found the diary I’ll be using (browsing bookshop shelves isn’t exactly a frequent pastime at the moment), but I’m on the lookout. It’ll come to me. And when it does, I’m going to splurge and splurge onto those fresh pages. It feels like a revelation, of sorts. Which means that my dry diary-keeping helped me grow after all.
https://medium.com/assemblage/ive-kept-a-daily-journal-now-for-a-full-year-but-i-did-it-all-wrong-b18d975dbd83
['Em Unravelling']
2020-12-23 14:37:56.987000+00:00
['Memories', 'Development', 'Self', 'Personal Essay', 'Diary']
14 Deep and Machine Learning Uses that made 2019 a new AI Age.
9. pix2pix, face2face, DeepFake and Ctrl+Shift+Face The Deep Learning world is full of experimentation. People think outside the box, and that's the most inspiring thing about DL specifically and AI in general. Gene Cogan experimented with dynamic pix2pix: in this case, the source was not a sketch but a webcam (his face), and the target was trained on Trump photos: These experiments inspired researchers (in this case: Dat Tran) to build face2face: The principle is smart: 1. The face2face model learns facial features / landmarks. 2. It scans the webcam input for facial features. 3. It finally translates them into another face. Another frontier of the post-truth epoch is reached — now we can modify not only images but also moving pictures. Like AR applications on popular messaging apps, AI interprets the video footage and modifies it, in perfect ways. Artists like Ctrl+Shift+Face have perfected this method to an unbelievable level: he swapped in playful faces of actors in cult movies with the help of face2face. Like here: “Shining” with Jim Carrey. A side-by-side comparison shows you how well even the fine psychological play can be translated. Such implementations bear manifold possibilities: Filmmakers can experiment with actors before the audition. They can also localize movies for better lip-synchronization in various languages, as Synthesia did with David Beckham. Now imagine these possibilities for international video conferencing using AI-driven language translation and speech synthesis. Artists can produce subversive and surreal “Being John Malkovich”-like masterpieces. For example, this inspiring, bright and slightly absurd cover of “Imagine”, sung by Trump, Putin, et al.: Deceased persons can be revived. The best example is singer Hibari Misora, who performed a new song at the annual Japanese New Year TV event NHK Kōhaku Uta Gassen (NHK紅白歌合戦). She did it this week, even though she died 30 years ago. The visuals were reconstructed with the help of AI; the voice was simulated by Vocaloid: But new avenues for DeepFake are also open. Remember ZAO, the Chinese DeepFake fun app that swaps your face with celebrities' faces? Now you are Leonardo DiCaprio. Combine it with the biometric payment rolling out in China, and you have endless possibilities for fraud: My coverage of this topic (Friend-Link):
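To give a sense of what step 2 of that pipeline looks like in code, here is a small illustrative Python sketch that captures facial landmarks from a webcam using OpenCV and dlib. This is not the face2face implementation referenced above — the trained pix2pix-style generator that turns landmarks into another face is assumed to exist elsewhere and is not shown, and the 68-point predictor file named below is dlib's standard model, which has to be downloaded separately.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# dlib's standard 68-point model; download it separately from the dlib model zoo.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)  # the webcam is the source, as in the demos above
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 1):
        shape = predictor(gray, rect)
        points = np.array([(shape.part(i).x, shape.part(i).y)
                           for i in range(shape.num_parts)])
        # In a face2face-style pipeline these landmark points would be rendered
        # to an image and fed to the generator that synthesises the target face.
        for (x, y) in points:
            cv2.circle(frame, (int(x), int(y)), 2, (0, 255, 0), -1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```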
https://towardsdatascience.com/14-deep-learning-uses-that-blasted-me-away-2019-206a5271d98
['Vlad Alex', 'Merzmensch']
2020-02-11 13:46:34.516000+00:00
['Machine Learning', 'Artificial Intelligence', 'Art', 'Digital Life', 'Data Visualization']
Design-driven evaluation
Design is fun, hard work, and more than just toys A greater push to use evaluation data to make decisions and support innovation generates little value if the evaluations themselves are of little use in the first place. A design-driven approach to evaluation is the means to transform utilization into both present and future utility. I admit to being puzzled the first time I heard the term utilization-focused evaluation. What good is an evaluation if it isn't utilized, I thought? Why do an evaluation in the first place if not to have it inform some decisions, even if just to assess how past decisions turned out? Experience has taught me that this happens more often than I ever imagined and evaluation can be simply an exercise in ‘faux’ accountability; a checking off of a box to say that something was done. This is why utilization-focused evaluation (U-FE) is another invaluable contribution to the field of practice by Michael Quinn Patton. U-FE is an approach to evaluation, not a method. Its central focus is engaging the intended users in the development of the evaluation and ensuring that users are involved in decision-making about the evaluation as it moves forward. It is based on the idea (and research) that an evaluation is far more likely to be used if grounded in the expressed desires of the users and if those users are involved in the evaluation process throughout. This approach generates a participatory activity chain that can be adapted for different purposes, as we've seen in different forms of evaluation approaches and methods such as developmental evaluation, contribution analysis, and principles-focused approaches to evaluation. Beyond Utilization Design is the craft, production, and thinking associated with creating products, services, systems, or policies that have a purpose. In service of this purpose, designers will explore multiple issues associated with the ‘user’ and the ‘use’ of something — what are the needs, wants, and uses of similar products? Good designers go beyond simply asking about these things; they measure, observe, and conduct design research ahead of the actual creation of something rather than taking things at face value. They also attempt to see beyond what is right in front of them to possible uses, strategies, and futures. Design work is both an approach to a problem (a thinking & perceptual difference) and a set of techniques, tools, and strategies. Utilization can run into problems when we take the present as an example of the future. Steve Jobs didn't ask users for ‘1000 songs in their pockets’, nor was Henry Ford told he needed to invent the automobile instead of giving people faster horses (even if the oft-quoted line about this was a lie). The impact of their work was being able to see possibilities and orchestrate what was needed to make those possibilities real. Utilization of evaluation is about making what exists fit better for use by taking the user's perspective into consideration. A design-driven evaluation looks beyond this to what could be. It also considers how what we create today shapes the decisions and norms that come tomorrow.
Designing for Humans Learning is fun (and other less pleasant things, too) Among the false statements attributed to Henry Ford about people wanting faster horses is a more universal false statement said by innovators and students alike: “I love learning.” Many humans love the idea of learning or the promise of learning, but I would argue that very few love learning with the sense of absoluteness that the phrase above conveys. Much of our learning comes from painful, frustrating, prolonged experiences and is sometimes boring, covert, and confusing. It might be delayed in how it manifests itself, with its true effects not felt until long after the ‘lesson’ is taught. Learning is, however, useful. A design-driven approach seeks to work with human qualities to design for them. For example, a utilization-focused evaluation approach might yield a process that involves regular gatherings to discuss an evaluation, or reports that use a particular language, style, and layout to convey the findings. These are what the users, in this case, are asking for and what they see as making evaluation findings appealing, and thus have built into the process. Except, what if the regular gatherings don't involve the right people, are difficult to set up and thus ignored, or when those people show up they are distracted with other things to do (because this process adds another layer of activity into a schedule that is already full)? What if the reports that are generated are beautiful, but then sit on a shelf because the organization doesn't have a track record of actually drawing on reports to inform decisions despite wanting such a beautiful report? (We see this with so many organizations that claim to be ‘evidence-based’ yet use evidence haphazardly, arbitrarily, or don't actually have the time to review the evidence.) What we get is that things have been created with the best intentions for use, but are not based on the actual behaviour of those involved. Asking about this and designing for it is not just an approach, it's a way of doing an evaluation. Building Design into Evaluation Preparing for Design Loft Experience 2018 There are a couple of approaches to introducing design for evaluation. The first is to develop certain design skills — such as design thinking and applied creativity. This work is being done as part of the Design Loft Experience workshop held at the annual American Evaluation Association conference. The second is more substantive: incorporating design methods into the evaluation process from the start. Design thinking has become popular as a means of expressing aspects of design in ways that have been taken up by evaluators. Design thinking is often characterized by a playful approach to generating new ideas and then prototyping those ideas to find the best fit. Lego, play dough, markers, and sticky notes (as shown above) are some of the tools of the trade. Design thinking can be a powerful way to expand perspectives and generate something new. Specific techniques, such as those taught at the AEA Design Loft, can provide valuable ways to re-imagine what an evaluation could look like and support design thinking. However, as I've written here, there is a lot of hype, over-selling, and general bullshit being spouted in this realm, so proceed with some caution. Evaluation can help design thinking just as much as design thinking can help evaluation. What Design-Driven Evaluation Looks Like A design-driven evaluation takes as its premise a few key things: Holistic.
Design-driven evaluation is a holistic approach to evaluation and extends the thinking about utility to everything from the consultation process, engagement strategy, instrumentation, dissemination, and discussions on use. Good design isn't applied only to one part of the evaluation, but the entire thing from process to products to presentations. Systems thinking. It also utilizes systems thinking in that it expands the conversation of evaluation use beyond the immediate stakeholders involved in consideration of other potential users and their positions within the system of influence of the program. Thus, a design-driven evaluation might ask: who else might use or benefit from this evaluation? How do they see the world? What would use mean to them? Outcome and process oriented. Design-driven evaluations are directed toward an outcome (although that may be altered along the way if used in a developmental manner), but designers are agnostic to the route to the outcome. An evaluation must contain integrity in its methods, but it must also be open for adaptation as needed to ensure that the design is optimal for use. Attending to the process of design and implementation of the evaluation is an important part of this kind of evaluation. Aesthetics matter. This is not about making things pretty, but it is about making things attractive. This means creating evaluations that are not ignored. This isn't about gimmicks, tricks, or misrepresenting data, it's considering what will draw and hold attention from the outset in form and function. One of the best ways is to create a meaningful engagement strategy for participants from the outset and involving people in the process in ways that fit with their preferences, availability, skill set, and desires rather than as tokens or simply as ‘role players.’ It's about being creative about generating products that fit with what people actually use not just what they want or think a good evaluation is. This might mean doing a short video or producing a series of blog posts rather than writing a report. Kylie Hutchinson has a great book on innovative reporting for evaluation that can expand your thinking about how to do this. Inform Evaluation with Research. Research is not just meant to support the evaluation, but to guide the evaluation itself. Design research is about looking at what environments, markets, and contexts a product or service is entering. Design-driven evaluation means doing research on the evaluation itself, not just for the evaluation. Future-focused. Design-driven evaluation draws data from social trends and drivers associated with the problem, situation, and organization involved in the evaluation to not only design an evaluation that can work today but one that anticipates use needs and situations to come. Most of what constitutes use for evaluation will happen in the future, not today. By designing the entire process with that in mind, the evaluation can be set up to be used in a future context. Methods of strategic foresight can support this aspect of design research and help strategically plan for how to manage possible challenges and opportunities ahead. Principles Design-driven evaluation also works well with principles-focused evaluation. Good design is often grounded in key principles that drive its work. One of the most salient of these is accessibility — making what we do accessible to those who can benefit from it. This extends us to consider what it means to create things that are physically accessible to those with visual, hearing, or cognitive impairments (or, when doing things in physical spaces, making them available for those who have mobility issues). Accessibility is also about making information understandable (avoiding unnecessary jargon, using the appropriate language for each audience, using plain language when possible, accounting for literacy levels). It's also about designing systems of use — for inclusiveness. This means going beyond doing things like creating an executive summary for a busy CEO when that over-simplifies certain findings to designing in space within that leader's schedule and work environment to make the time to engage with the material in the manner that makes sense for them. This might be a different format of a document, a podcast, a short interactive video, or even a walking meeting presentation. There are also many principles of graphic design and presentation that can be drawn on (that will be expanded on in future posts). Principles for service design, presentations, and interactive use are all available and widely discussed.
What a design-driven evaluation does is consider what these might be and build them into the process. While design-driven evaluation is not necessarily a principles-focused one, they can be and are very close. By taking into account how we create not only our programs but their evaluation from the perspective of a designer we can change the way we think about what utilization means for evaluation and think even more about the experience it produces along the way. This is based on an original publication on Censemaking.com
https://cdnorman.medium.com/design-driven-evaluation-6ff563a63c5a
['Cameron Norman']
2019-02-21 13:30:22.136000+00:00
['Learning', 'Evaluation', 'Innovation', 'Design', 'applied creativity.']
4 Front-End Trends and 1 Loser
Photo from Svelte. Svelte According to the documentation, Svelte is a component framework — like React or Vue — but with an important difference. The difference is that Svelte runs at build time, converting your code to highly efficient JavaScript. So you can get the benefits of a component framework without the performance penalty. Svelte has been around for a few years, with the latest version (3) shipping in 2019. It has over 39K stars on GitHub, but it's not widely used in bigger projects, as it's still perceived as immature. So why will 2021 be the decisive year for Svelte's popularity? The typical problem with component frameworks is that they render client-side, so search bots get an almost empty HTML page, which is bad for SEO. To alleviate that problem, each library has its own solutions. For Svelte, it was usually Sapper. But in October, at Svelte Summit 2020, its creator decided to ditch Sapper and propose a new way of making Svelte applications. The new approach will be based on SvelteKit. Its goal is to focus on the developer experience, with fast builds, hot module reloading, error overlays, and serverless support. If Svelte can add a seamless experience and out-of-the-box SSR support, it may be a game-changer. So it seems Svelte is going to provide a first-class experience for developers. But will it be enough to convince them to use it?
https://medium.com/better-programming/4-front-end-trends-and-1-loser-5ce22c2406da
['Szymon Adamiak']
2020-12-19 07:07:11.914000+00:00
['JavaScript', 'Software Development', 'React', 'Nodejs', 'Programming']
I Used to be Suicidal. Here are my Thoughts Regarding the Rise of Suicide
Photo Credit: 3938030 via Pixabay Ask almost any of my friends and family if I was suicidal and they would probably give you a weird look. “Talin? Being suicidal? Nah. She’s a happy-go-lucky girl who is extremely tough. She’d never be that kind of person.” Wrong. Wrong. Wrong. Tough? I’d like to think so. Happy? For the most part. Suicidal? Used to be, very heavily actually. So with all of the suicides lately, especially circulating around Kate Spade and Anthony Bourdain (RIP these two) I thought I’d chime in a little since I used to feel as low and as empty as a suicidal person. When you’re actually suicidal, NOTHING helps This is a subjective feeling that I thought of, not objective by nature. What I mean is, while I was aware that there was therapy and hotlines to call, I felt so incredibly empty and sad that I felt that those things wouldn’t help. Being suicidal is probably one of the worst emotional feelings ever. You feel like you’re trapped in a hell hole of darkness with 0 light to guide you out. You feel like nothing will ever help. That nobody cares about you. And when you start thinking these things over and over again, your mind will start believing it’s true. That’s why when I see the following comments: “Wow why would they kill themselves when they had the perfect life: money, mansions, cars and a great job.” I get infuriated. You think that money, houses and fancy cars = happiness? You think that tangible objects that are attached to a high price tag = a life worth living for? Fuck you if you think that. Life is worth more than the tangibles that society has equated to “living the good life.” No. A “good life” is one that is surrounded by people you love and care about. A “good life” equates to a strong emotional attachment to these people, to your passions and to the life you live. It’s about appreciating more than what’s at face value on Instagram, Snapchat, Twitter and Facebook. Now I’m not saying suicidal people don’t have emotional attachments to friends, family, passions and all that stuff. Rather, they are too far down in the depression rabbit hole to see why that stuff matters to them (at least, this is what I thought of when I used to be suicidal). So for example, when I was suicidal I would think of the following: If I was gone, I wouldn’t eat my favorite food anymore, which is spaghetti and meatballs. Who cares. If I was gone, I wouldn’t see my friends, family or my boyfriend anymore. Who cares, they’ll eventually get over it. If I was gone, I wouldn’t play my favorite video game anymore, which is World of Warcraft. Who gives a shit. This is just to give you an idea of what I thought about at the time. It doesn’t matter what you would have said to me, it would have been like talking to a zombie who was no longer a thinking, feeling being anymore (aside from feeling sad all the time). So stop presenting suicidal people with a platter of “Oh won’t they miss XYZ” because none of it matters when you are on that level of depression. All it shows me is ignorance and a lack of understanding to this kind of mentality. I was too scared to call hotlines or to seek therapy On another note, stop arbitrarily handing out hotline numbers like it’s your actual phone number. It feels so fucking baseline and empty, like people are just doing it because “it’s the right thing to do.” How about taking the time to understand the core of suicide? 
How about doing some goddamn research on this mental illness so we can all be more well-educated and more well-researched on this rising cause of death? How about developing emotional attachments to your friends, family and coworkers instead of staring at your fucking phone all the time? Besides, people who struggle with depression and suicide aren’t idiots. They know that hotlines exist. They know that they “aren’t alone” and that many people also suffer from the same things. But it doesn’t actually help. You really think by posting “You’re not alone. Get help. Call this number” that suddenly the depressed/suicidal victim will think “Oh wow holy shit I never knew I could theoretically get help for how I feel.” On a separate note: I don’t know about other suicidal folks, but for me personally, I was too scared to actually call hotlines or seek therapy. Why? Because of imprisonment. I’ve heard far too many stories about getting arrested and tossed in some mental asylum for having suicidal thoughts. I was too scared of my phone being tracked for hotlines, too scared of my future therapist throwing me under the bus and too scared to trust anyone. Once again, even if you might think “No you’re wrong,” you have to understand that these are thoughts from an ex-suicidal person. Try seeing it from their perspective for a change. I didn’t want the risk of being thrown into some mental asylum for who knows how long. I didn’t know who to talk to and who to trust because, well, what if they call the police on me? I would have rather walked around with a smile on my face, pretending that everything is fine and dandy, rather than risk showing even a tiny glimpse of my depression to anybody and everybody. And that’s exactly what suicidal people do, for the most part. Most of them walk around like nothing is ever wrong. In my opinion, it’s extremely difficult, if not impossible, to distinguish a suicidal person from a regular person, no matter how well you know them. It’s like making 2 clones of the exact same person, with the exact same personality and saying “Guess who’s the real John Smith.” That’s why I was so great at hiding my feelings to everybody. Because I was not only ashamed, but I also didn’t trust anybody. I didn’t want to be thrown into a mental asylum. See and that’s part of the problem. Society paints hotlines and therapy like it’s Candyland for people. But we all know what goes behind closed doors. It’s practically jail for mentally-ill folks where you’re trapped in another hell hole that you can’t leave unless you prove you’re sane again. And if you’re smart, you’ll “fake” that you’re ok just to get out again. It’s no wonder people are too scared to pick up the phone or talk to a friend. So stop throwing hotlines and therapy clinics around like it’s the be-all-end-all solution. This is a real fucking problem that needs to be addressed and more importantly, understood by EVERYBODY. Stop saying suicide is selfish, it doesn’t help Here’s another aggravating comment I see all the time: “Suicide is so selfish, why would people take their own life to leave behind their friends, family and kids.” Clearly, these people have never experienced what real depression and suicidal thoughts are like. Because if they have, they would immediately understand. It kind of goes back to the first point I made. NOTHING HELPS. The will to live becomes the most mundane, physically-exhausting chore you can possibly imagine. Friends, family and even kids can’t usually change that. 
The pain of existing was so great, so paramount compared to the so-called “joys” of life, that there felt no reason to live anymore. You know what is selfish? Saying things like “Why did Anthony Bourdain have to kill himself? Now I can’t look forward to watching my favorite food-related TV shows.” “Now I can’t look forward to anymore fashion designs from Kate Spade.” “Now I can’t look forward to anymore rock music from Chester Bennington.” Being upset that your favorite icon committed suicide because it doesn’t benefit YOU anymore is pretty much the definition of selfishness. Now, everybody has their own reason for being suicidal. Mine was the following: I have too many problems. Too many bills to pay and not enough money to do so. I have too much stress with nearly no friends to talk to. Maybe it’ll just be easier if I wouldn’t have to deal with all of this anymore. And if you’re thinking “Suck it up sister, we all have bills to pay with no money, yap yap yap.” Shut the fuck up. You, whoever you are that ignorantly yaps like a damn chihuahua, have 0 idea what you’re talking about and you’re part of the problem. Let me put this into perspective: I equated life stress with the pain of existing. It was literally PAINFUL to exist. Do you understand what that means? Can you even comprehend what that feels like? When I would wake up, I felt physically sick that I was alive again. That I would have to rinse and repeat the mundane cycles of life. That I was once again presented with my life stress factors that would never, ever go away. Until you’ve walked a day in the life of a suicidal person, you’ll never truly understand their mentality, their perspective, and their own way of life.
https://hungrytally.medium.com/i-used-to-be-suicidal-here-are-my-thoughts-regarding-the-rise-of-suicide-ee46703f6010
[]
2020-04-22 01:58:37.790000+00:00
['Suicide', 'Mental Health', 'Depression', 'Life Lessons', 'Life']
5 Steps to Master Content Marketing For Your Small Business
From Main Street Hub and ContentWriters In 2017, consumers are looking online now more than ever to find a small business just like yours. In order to be competitive in your industry, you have to be visible when a customer looks for you online, and content marketing is a cost-effective way to boost your business in search results. In fact, 72% of marketers say relevant content creation is the most effective SEO tactic. Content marketing allows your business to maximize its potential to drive engagement, gain new customers, and fortify relationships with existing customers. Not sure how? Follow these 5 steps to develop and implement a comprehensive content marketing strategy, tailored to your business: Step One: Brand An effective content marketing strategy will be consistent, authentic, and tailored to provide value to your business’s audience. You’ll want to be very clear about your business’s brand so that it can inform every single piece of content you post. The most important factors to consider when developing your online brand are your identity, your audience, and your business goals. Here are some questions to ask yourself: What is the personality of my business? Who is my ideal customer? What results do I want my online content to accomplish? No one knows the ins and outs of your business better than you do! By asking yourself these questions, you’ll be able to get a sense of the authentic voice that makes sense for your business. From there, every piece of content that you create for your content marketing strategy will be inspired by your voice, your followers, and your goals. Step Two: Social Media As a business owner in today’s market, you have to integrate social media marketing into your content marketing strategy in order to be competitive. Setting up Facebook, Twitter, and Instagram profiles for your business is free and comes with many benefits for you and your customers alike! The benefits of being present, active, and engaged on social media are a boost in SEO, increased opportunities for customer service, a cost-effective means of increasing brand awareness, and visibility in local search results. If you don’t have the time or the know-how to keep up with all of these platforms at once, choose the one platform that makes the most sense for your target audience. If your business caters to a young demographic, Instagram is the most effective way to reach the youngest group of internet users. If you mostly serve or sell to customers 30 years old and above, Facebook is the best place for you to start. Twitter is the most receptive platform to non-visual content, so if you don’t have access to strong photography of your business and your products, Twitter might be the best match for you. Try to post at least once day, mixing up your content so that there is variety for your audience. Provide value by posting educational content relevant to your industry, photos of your product or service, and news about your community! From there, interact with and respond to all comments, mentions, wall posts, and reviews that your customers leave on your profiles. Think of this as good customer service. It will show your fans and followers that you are grateful for their support and engagement. Don’t have time to manage your business’ social media? Let Main Street Hub do it for you. Learn more here! 
Step Three: Email Marketing Once you’ve used social media to bring in new customers, outlining an email marketing strategy is the next step to build brand loyalty and keep those customers coming back time and time again. By sending out a weekly or biweekly email newsletter to your existing client base, you’ll stay top of mind with your customers, making it more likely that they’ll become repeat customers and loyal fans. Integrate your social media and email marketing efforts by including links to your social channels in each email you send! That way, your loyal customers have the opportunity to interact with you online in more ways than one. Step Four: Blog Content marketers who prioritize blogging are 13 times more likely to achieve a positive ROI on their efforts. And, as an added bonus, regular blog articles give you extra content to share on your social media channels! When deciding what kind of content to create for your blog, you’ll want to strategize how to provide value to your audience and stay true to the brand you developed in Step One. Show variety in your content so that your customers will want to check back, and make sure everything you post is relevant to both your industry and your target audience. From there, measure the results of each blog you create. By comparing the number of views, reads, and shares on each piece of blog content you publish, you’ll be able to determine what kind of content your audience enjoys most and allow that inform your content strategy moving forward. Don’t have the time to write blogs every week? Team up with the talented writers at ContentWriters to fill out your content! Step Five: Evergreen Content The beauty of creating content is that it lives forever on your site. Foundational content, or evergreen content, yields numerous benefits long after the minute you push that publish button on your blog. Once you publish a blog post or other form of extended written content on your site, it becomes part of the greater volume of work that establishes your business as a thought leader in your industry. A great example of evergreen content is a white paper. White papers are designed to provide value to any customer reading it by giving them real insight into the business, specifically, your company’s role in the industry. Having multiple white papers on your site increases the relevancy of your business, which helps immensely when search engines are trying to decide who should rank first. We know that developing and executing a successful content marketing strategy is a time commitment, especially on top of running your small business every day. However, by following these five important content marketing steps, you’ll be able to return on that investment with higher search results, increased brand awareness, visibility to new customers, increased loyalty with existing customers, and a better online experience for your customers. Need help using content marketing to drive new business? Learn more about how Main Street Hub and ContentWriters can help you: Main Street Hub is the only do-it-for-you, full-service online marketing platform for local businesses. Do-it-for-you is code for ‘whatever it takes.’ Using tools developed by our world-class engineers, designed with the needs of small business owners in mind, Main Street Hub brings together a mix of writers, designers, and tech experts to drive growth and take social media responsibilities off your plate — for good. 
Visit MainStreetHub.com to learn more, and head to our blog for more social media tips and best practices! ContentWriters specializes in providing expert-level written content across a diverse range of industries. From blog posts to white papers, ContentWriters has the ability to craft content that is perfectly tailored to your brand message at any scale. By tapping into our robust network of talent, our team ensures that the perfect writers are assigned to your company’s content needs.
https://medium.com/main-street-hub/5-steps-to-master-content-marketing-for-your-small-business-1e90ab5e3073
['Main Street Hub']
2017-08-23 20:32:04.764000+00:00
['Branding', 'Social Media', 'Marketing', 'Blog', 'Content Marketing']
Q&A with Sara Hines, Director of Product, Parenting at The New York Times
Q&A with Sara Hines, Director of Product, Parenting at The New York Times This week, The Idea caught up with Sara Hines to learn about how the NYT Parenting section was built, and its plans for 2020. Subscribe to our newsletter on the business of media for more interviews and weekly news and analysis. Can you give me a brief overview of your team and role? I am the director of product for New York Times Parenting. We are a website that provides evidence-based guidance for new and expecting parents as they navigate the transformational journey of welcoming, raising, and, ultimately, surviving children. We’re part of a business unit called New Products and Ventures which also oversees the Times’ cooking and crosswords offerings. Our newsletter was launched in March 2019, and our site was launched shortly thereafter in mid-May 2019. As the director of product, I lead a team of product designers, product managers, and engineers. A lot of us are parents of small children, so we have a lot of passion for this particular product. We work in close collaboration with our editorial team to create the best user experiences around the content that’s produced. Tell me more about the Times’ decision to go into the parenting space. As our audience has grown, the Times has been finding ways to add value to our readers’ lives beyond our core news business; so, that’s where Cooking and Crosswords come in. When we were considering lifestyle verticals that would add value to our readers, we looked at a broad set of areas like books, travel, and audio. Parenting was identified as an area of interest, so a team was hired to research the space and learn more about user needs, the audience, the competitive landscape, and services that we could potentially offer. They looked at everything from activity-finding services to memory keeping apps to lifestyle content. Additionally, as The New York Times, we recognize that our competitive advantage in any space is the quality of our content and the journalistic rigor. So, we’ve centered NYT Parenting’s value proposition on this premise. How many people in total work on parenting? Around 20. We operate like a start-up, but with the stability and support of a large company. How did you enter the product space? I came into this role in a rather roundabout way. I have a BFA in photography and spent the first five years of my career working in non-profit art organizations. I ended up going to business school and worked in digital advertising and digital strategy after graduating. In 2016, I joined the Times to lead the retention efforts for our subscription business. In that role, I thought a lot about what drives habit, why people engage with the Times continually, and why they’re willing to pay for The New York Times on an ongoing basis. When I came back from my first maternity leave in 2017, I heard rumors that the Times’ was interested in doing something in the parenting space. I was excited about this given that it was very germane to my existence. Ten months later, when I was pregnant with my second child, the Parenting team landed on a content premise and were ready to bring on business strategists to think about how to take the product to market. I joined the team as a business lead and growth partner. I actually had my second daughter exactly one month before we launched our site in beta. So, I spent the summer of 2019 literally inhabiting the mindset and user needs of our target audience. 
When I came back from my second maternity leave, the Parenting site was doing well and the needs of the team were more heavily weighted to figuring out how to build habit-driving experiences. As a result of this, I transitioned into more of a product role. What were some of the challenges the team faced when building this vertical? One of the biggest challenges is that we were starting from a content docket of zero, meaning that we had to do a tremendous amount of upfront investment in producing core, evergreen content from scratch around topics like child milestones, sleep training, and bottle feeding. When Cooking started, they were able to build product experiences based on the Times’ legacy of food content and archive of 15,000 recipes. With Parenting, however, our focus is to provide deeply researched and evidence-based guidance — so, we can’t just repackage content. And, because we were building content for a separate site, we built our own proprietary CMS and essentially created a parallel ecosystem to the core news operations. In just eight months, we published over 200 pieces of deeply researched content. We’re really interested in 1) informing readers on the practical applications of the best available research and 2) debunking contradictory and non-scientific information. So we’ve hired editors and writers with science writing backgrounds and have even had medical professionals write some of our guides. We also try to balance this type of content with personal stories from parents and news reports. Our editorial team is constantly assigning stories on a daily basis. One of our early pieces of reporting was during the Fisher Price Rock n’ Play recall where we revealed that many daycares were still using them despite the public health warnings. What does NYT Parenting have in store for 2020? My team has been thinking a lot about parent behaviors and how an app would fit into our potential offerings from a user experience. Parents are an interesting demographic because they’re very busy but also very exacting. Fundamentally, there are two behaviors that parents engage in that I would like NYT Parenting to displace. First, I want NYT Parenting to prevent parents from going down a Google hole. Secondly, I hope that our content dispels the myth of the perfect parent that you often see on Instagram. By elevating the voices of parents alongside our research, I want parents to actually feel seen and heard by NYT Parenting. What is the most interesting thing (product, tool, article, social channel, special project, redesign, etc.) that you’ve seen from a media outlet other than your own? I’m really interested in the rise of dad media like Fatherly and dad twitter. I feel like we’re coming off the decade of the mom-fluencer. I think there’s a problematic, cultural orientation towards dads as the third wheel, so I’m really interested in portrayals of dads as involved figures. Alexis Ohanian, the co-founder of Reddit, wrote a piece for us pushing for paternity leave, and he just launched a podcast called Business Dad that focuses on the realities of working dads. This is so important because we’re finding that paid parental leave is still not enough. It’s been a women’s issue for so long, and I don’t think the workplace can change for working parents until we even the playing field for taking leave. This Q&A was originally published in the January 21st edition of The Idea, and has been edited for length and clarity. 
For more Q&As with media movers and shakers, subscribe to The Idea, Atlantic Media’s weekly newsletter covering the latest trends and innovations in media.
https://medium.com/the-idea/q-a-with-sara-hines-director-of-product-parenting-at-the-new-york-times-7435340d8ad8
['Tesnim Zekeria']
2020-01-22 15:38:40.742000+00:00
['Product', 'Journalism', 'New York Times', 'Subscriber Spotlight']
What Can the Tech Industry Do?
What Can the Tech Industry Do? How you can help encourage more diversity and bring change to the tech industry Photo by LOGAN WEAVER on Unsplash George Floyd — a black man who will be etched into modern-day history. A man that sparked change and discussion around race relations. A man that in 2020, was murdered in broad daylight by police officers. Like many others, my hope is those involved will be brought to justice and that this will ignite a continued effort to bring equality to black people around the world. In each area of society, we need to work to dismantle systemic racism and improve the economic power that black people have. One industry — the one I work in, the one I enjoy, the one we all benefit from — is technology. Plenty has been written surrounding problems regarding diversity and inclusion in the tech industry, and with recent events, I think the case for diversity becomes even more overwhelming at this point. Efforts so far haven’t ignited huge change, but change is happening. I am part of a network called TLA Black Women in Tech. It’s part of the wider Tech London Advocates (TLA) network — both aim to amplify the voices of technology in the UK. TLA Black Women in Tech, in particular, was set up to support, mentor, and encourage diversity and inclusion in the UK. They post job, funding and network opportunities, events, and anything that helps move the industry forward in regards to black women. Meet Andy Davis, a black angel investor that is dedicated to pushing the culture forward and investing in black founders. Every Monday at 7 p.m. on his Instagram, black founders have the opportunity to pitch to him and his very engaged audience. Meet Jermaine Craig, who built Kwanda, a platform for people to invest in projects that benefit the black community. There are so many other organisations and individuals that are pushing the culture and paying it forward showcasing why we need diversity in tech. It’s us that are invested in using our technical skills to elevate our community, therefore, more of us are needed!
https://medium.com/better-programming/what-can-the-tech-industry-do-fb43f17eb311
['Theresa Semackor']
2020-06-23 11:36:37.451000+00:00
['Diversity In Tech', 'Programming', 'Diversity And Inclusion', 'Startup', 'Black Tech']
TensorFlow Quantum is an Open Source Stack that Show Us how the Future of Quantum and Machine Learning Could Look Like
TensorFlow Quantum is an Open Source Stack that Show Us how the Future of Quantum and Machine Learning Could Look Like TensorFlow Quantum allows data scientists to build machine learning models that work on quantum architectures. I recently started a new newsletter focused on AI education that already has over 50,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below: The intersection of quantum computing and artificial intelligence (AI) promises to be one of the most fascinating movements in the entire history of technology. The emergence of quantum computing is likely to force us to reimagine almost all the existing computing paradigms, and AI is not an exception. However, the computational power of quantum computers also has the potential to accelerate many areas of AI that remain impractical today. The first step for AI and quantum computing to work together is to reimagine machine learning models so that they work on quantum architectures. Recently, Google open-sourced TensorFlow Quantum, a framework for building quantum machine learning models. The core idea of TensorFlow Quantum is to interleave quantum algorithms and machine learning programs all within the TensorFlow programming model. Google refers to this approach as quantum machine learning and is able to implement it by leveraging some of its recent quantum computing frameworks, such as Google Cirq. Quantum Machine Learning The first question we need to answer when it comes to quantum computing and AI is how the latter can benefit from the emergence of quantum architectures. Quantum Machine Learning (QML) is a broad term that refers to machine learning models that can leverage quantum properties. The first QML applications focused on refactoring traditional machine learning models so they were able to perform fast linear algebra on a state space that grows exponentially with the number of qubits. However, the evolution of quantum hardware has expanded the horizons of QML toward heuristic methods which can be studied empirically due to the increased computational capability of quantum hardware. This process is analogous to how the creation of GPUs made machine learning evolve towards the deep learning paradigm. In the context of TensorFlow Quantum, QML can be defined in terms of two main components: a) Quantum Datasets b) Hybrid Quantum Models Quantum Datasets Quantum data is any data source that occurs in a natural or artificial quantum system. This can be the classical data resulting from quantum mechanical experiments, or data which is directly generated by a quantum device and then fed into an algorithm as input. There is some evidence that hybrid quantum-classical machine learning applications on “quantum data” could provide a quantum advantage over classical-only machine learning, for the reasons described below. Quantum data exhibits superposition and entanglement, leading to joint probability distributions that could require an exponential amount of classical computational resources to represent or store. Hybrid Quantum Models Just like machine learning can generalize models from training datasets, QML would be able to generalize quantum models from quantum datasets. However, because quantum processors are still fairly small and noisy, quantum models cannot generalize quantum data using quantum processors alone.
Hybrid quantum models propose a scheme in which quantum computers will be most useful as hardware accelerators, working in symbiosis with traditional computers. This model is a natural fit for TensorFlow, since it already supports heterogeneous computing across CPUs, GPUs, and TPUs. Cirq The first step in building hybrid quantum models is being able to express quantum operations. To do that, TensorFlow Quantum relies on Cirq, an open-source framework for invoking quantum circuits on near-term devices. Cirq contains the basic structures, such as qubits, gates, circuits, and measurement operators, that are required for specifying quantum computations. The idea behind Cirq is to provide a simple programming model that abstracts the fundamental building blocks of quantum applications. The current version includes the following key building blocks: · Circuits: In Cirq, a Circuit represents the most basic form of a quantum circuit. A Cirq Circuit is represented as a collection of Moments, which include operations that can be executed on Qubits during some abstract slice of time. · Schedules and Devices: A Schedule is another form of quantum circuit that includes more detailed information about the timing and duration of the gates. Conceptually, a Schedule is made up of a set of ScheduledOperations as well as a description of the Device on which the schedule is intended to be run. · Gates: In Cirq, Gates abstract operations on collections of qubits. · Simulators: Cirq includes a Python simulator that can be used to run Circuits and Schedules. The Simulator architecture can scale across multiple threads and CPUs, which allows it to run fairly sophisticated Circuits. TensorFlow Quantum TensorFlow Quantum (TFQ) is a framework for building QML applications. TFQ allows machine learning researchers to construct quantum datasets, quantum models, and classical control parameters as tensors in a single computational graph. From an architecture standpoint, TFQ provides a model that abstracts the interactions with TensorFlow, Cirq, and computational hardware. At the top of the stack is the data to be processed. Classical data is natively processed by TensorFlow; TFQ adds the ability to process quantum data, consisting of both quantum circuits and quantum operators. The next level down the stack is the Keras API in TensorFlow. Since a core principle of TFQ is native integration with core TensorFlow, in particular with Keras models and optimizers, this level spans the full width of the stack. Underneath the Keras model abstractions are the quantum layers and differentiators, which enable hybrid quantum-classical automatic differentiation when connected with classical TensorFlow layers. Underneath the layers and differentiators, TFQ relies on TensorFlow ops, which instantiate the dataflow graph. From an execution standpoint, TFQ follows these steps to build and train QML models. Prepare a quantum dataset: Quantum data is loaded as tensors, specified as a quantum circuit written in Cirq. The tensor is executed by TensorFlow on the quantum computer to generate a quantum dataset. Evaluate a quantum neural network model: In this step, the researcher can prototype a quantum neural network using Cirq that they will later embed inside of a TensorFlow compute graph. Sample or Average: This step leverages methods for averaging over several runs involving steps (1) and (2).
Evaluate a classical neural network model: This step uses classical deep neural networks to distill correlations between the measures extracted in the previous steps. Evaluate Cost Function: Similar to traditional machine learning models, TFQ uses this step to evaluate a cost function. This could be based on how accurately the model performs the classification task if the quantum data was labeled, or other criteria if the task is unsupervised. Evaluate Gradients & Update Parameters — After evaluating the cost function, the free parameters in the pipeline should be updated in a direction expected to decrease the cost. The combination of TensorFlow and Cirq provides TFQ with a rich set of capabilities, including a simpler and more familiar programming model as well as the ability to simultaneously train and execute many quantum circuits. The efforts related to bridging quantum computing and machine learning are still in very nascent stages. Certainly, TFQ represents one of the most important milestones in this area and one that leverages some of the best IP in both quantum and machine learning. More details about TFQ can be found on the project's website.
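To make the pipeline above concrete, here is a minimal sketch of what a hybrid model might look like in code. It is not taken from the article; it simply assumes the standard cirq and tensorflow_quantum APIs, with a single qubit, an illustrative data-encoding rotation, and a trainable parameter named theta:

import cirq, sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')

# Quantum dataset: Cirq circuits serialized into TensorFlow string tensors
data_circuits = tfq.convert_to_tensor([cirq.Circuit(cirq.rx(0.5)(qubit))])
labels = tf.constant([[1.0]])

# Quantum model: a parameterized circuit with a Pauli-Z readout operator
model_circuit = cirq.Circuit(cirq.ry(theta)(qubit))
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit, cirq.Z(qubit)),
])

# Cost function, gradients and parameter updates are handled by Keras as usual
model.compile(optimizer='adam', loss='mse')
model.fit(data_circuits, labels, epochs=5)

The point of the sketch is the division of labor: Cirq defines the circuits, TFQ converts them into tensors and differentiates through them, and Keras runs the classical optimization loop.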
https://medium.com/dataseries/tensorflow-quantum-is-an-open-source-stack-that-show-us-how-the-future-of-quantum-and-machine-d1435593660
['Jesus Rodriguez']
2020-12-26 10:55:55.808000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Data Science', 'Thesequence']
Maybe 2020 Is The Year to Let Certain Things Go
Even when it comes to my writing, I am so often not in the headspace I need to be to finish a story. And it’s not because I don’t like what I do, or because I try to write too much. And it’s not because I’m some blubbering baby who blames everything on the pandemic. I am struggling to get shit done because most of my coping mechanisms were taken away this year, and I was hit with heavier health challenges than ever before. Honestly, I used to keep myself sane with leisurely Target runs. It might sound silly, but it’s true. I’d drive my daughter to school, then head over to Target and sit in their Starbucks Cafe with my tablet and a flat white and write. When I got too tired of sitting in the hard chair, I’d get up and grab a few things we needed for the house. Toothpaste. Trash bags. Maybe a new pair of PJs for my daughter. Some days, I’d mix things up and window shop at another store. Maybe I’d grab groceries. On a few occasions, I’d book time for myself and get my nails done or have a massage. Afterward, I might eat lunch at a local hole-in-the-wall cafe and do some more writing. For an introvert like myself, these quiet, contemplative days went a very long way at recharging my batteries. Just having time out in the world without being on mom duty was a real saving grace. I lost those things with the pandemic and it isn’t anyone’s fault. Yes, they’re seemingly small and silly things. Yes, you can call them first world problems, and no, I haven’t made a big fuss about losing them because there are much worse things in the world to lose. Unfortunately, though, I may have underestimated the difference those little habits and luxuries made in my life. I underestimated just how much of a difference those walks around Target or wherever actually made. Where I live in Tennessee, it’s frequently too hot for me to enjoy walking outside, or too stormy with heavy rain. It was something of a shock when after a few months of staying home, I realized that my fitness had really gone downhill. My agoraphobia was also kicking into high gear. It’s not the kind of thing that most people think about, but when you suddenly don’t need to (or can’t really) go out in public, it only makes your fear of going out in public worse. I often think of agoraphobia and certain Aspie fears of mine as existing somewhere along the spectrum of obsessive-compulsive disorder (OCD). For the most part, I rely on exposure therapy to live more of the life I want. It’s very difficult to get the right amount of exposure therapy, however, in the middle of a fucking pandemic. And once again, it’s nobody’s fault. It’s literally just life happening. Beating myself up about my feelings or discomfort isn’t effective. And as a mom, I’ve got this duty not to be perfect, but to try to be healthy and whole. To do my best to respond to life’s challenges in a positive way. I take my responsibility pretty seriously and I want my daughter to know that our weaknesses don’t define us. It’s how we respond to our difficulties that matters most. So? That’s what I’m trying to do. Last night, I finally made the decision to cancel my Molly Maids appointment that’s scheduled for Tuesday. Am I bummed out about it? YES. Do I feel defeated? A bit. Honestly, I didn’t want to cancel… again. I don’t like the idea that I might not — okay, probably won’t — have a super neat and clean home for Christmas. 
But there’s this idea that’s been swimming around in my head since March, and actually, I intended to write this headline way back then when I realized that too many of us were going a bit overboard with the personal development shit in quarantine. Maybe 2020 is best seen as a time for letting go. If 2020 gave you time to hone new skills or get more work done, that’s great. Wonderful, even. But I’m willing to bet that a lot more people need to hear that it’s okay if all you were really able to do in 2020 was… abide. And if simply abiding was too much so you barely made it through? That’s still something to be proud of. Don’t get me wrong. As a long-term goal, most people want to thrive and live with purpose. But in hard times, sometimes the purpose is just making it to the other side. Sometimes, you need to do whatever it takes just to reach the shoreline.
https://medium.com/honestly-yours/maybe-2020-is-the-year-to-let-certain-things-go-6c8c8e215405
['Shannon Ashley']
2020-12-11 21:19:53.178000+00:00
['Personal Development', 'Life', 'Mental Health', 'Self', 'Life Lessons']
Environmentalism in a Pandemic: The importance of critical, empathetic, and future-driven thinking
About a month or so ago, the grumpy old lady who lives in the not-so-deep recesses of my mind was stirred by social media posts like these: “Nature is healing! Humans are trash!” “The COVID crusade is saving the planet from humans doing bullshit.” “Mother Earth is now healing. Maybe that’s the plan after all.” “Mother Earth is healing. Don’t panic. Take a break and join her.” …are you serious…? They reek of mindless, privileged, out-of-touch environmental activism that completely misses the important complexities linking humans and the environment. This kind of mindset has led to unethical (or ethically ambiguous) and ultimately counterproductive conservation efforts around the world. The humans who are suffering the most from this pandemic and its socioeconomic impacts are not the ones who are primarily responsible for “defiling” the planet — whereas those who are primarily responsible have more resources that allow them to better avoid infection and destitution. Sure, they still might get infected, but while the virus “doesn’t discriminate,” our systems of healthcare access do. To be clear: I am pro-environment. For the past 18 years (wait, what?), I have pursued education and work in conservation. I never was a particularly passionate tree- or dolphin-hugger; I simply wanted to make a positive contribution to humanity and the world, and conservation happened to be a good match for my desired lifestyle (save the financial side of things). But I am passionate about equitable, effective conservation. I’m a pragmatic, systems-thinking, social-ecologically-informed conservationist, a pro-environment person who is also pro-human. Even though humans make me grumpy, they also have rights — something that mainstream conservation still tends to overlook (though this is improving). I wanted to explore my thoughts on this topic, and to share them with friends and colleagues — especially those who work on environmental issues. Now, I am lucky enough to still be working on some freelance contracts, and I do have a number of other important duties and activities that I really should be devoting my time to. So, what follows is not a particularly polished synthesis of anything, but rather a series of considerations and information from various sources that I could cobble together in my unclaimed time. In other words, an elaborate exercise in procrastination, curiosity, and somewhat scattered exploration of my initial reactions, conveniently camouflaged as “Tara practices her non-academic writing skills.” Environmentalism and the “Humans are Trash” Narrative It is beyond dispute that humans, in our current state of being, have vastly destructive impacts to the environment. This destruction spews out on different scales — from the localized, individual level to the global, structural level, with complex linkages throughout. 
It is hard to think of any individual impact that is not at least partially linked to the larger system: the individual who tosses away single-use plastic without a care in the world is part of a system that has made this behavior the most convenient and accessible option; the individual who illegally kills a rare animal is part of a system where people who live alongside rare animals are often economically desperate and where there is a market for rare animals; the individual, say, an aspiring conservation researcher, who hops on a greenhouse gas-emitting plane to go to a conference is part of a system where attendance at such gatherings is considered vital for career advancement and funding-relevant networking. The implications of this destruction are not spread equitably. The people who drive the destruction — let’s say, corporations and consumers in rich countries — are often not the ones who have to live with the negative fallout, at least not in the short term. One could say that we export our environmental impacts to the rest of the world. This is a key reason why our environmental destruction has spiraled so far: the lack of a direct feedback loop between our own actions and choices and the negative effects of those actions. Similarly, the implications of how we try to fix that destruction are also usually not spread equitably. An example from my niche in the conservation world: marine mammal conservation in developing countries. Wide-eyed marine mammal enthusiasts in rich countries, horrified by the threats facing these majestic, cute, charismatic beings, will staunchly call for strict and immediate ends to the human activities that threaten these animals… even though these animals and the threats often live in the developing world, where livelihood options are limited and social safety nets are often nonexistent. Who will shoulder the burden of fishing communities having to fundamentally change their livelihoods to avoid accidentally catching porpoises and dolphins (a major conservation threat)? It will be the poor coastal communities who pay, not the relatively well-off conservation activists — who often do not try to also address the social, economic, and political practicalities needed to make conservation-minded changes without violating the human rights and well-being of these communities. Where conservation seems to succeed without snatching land, ocean, and rights from local people is in cases where the local communities are actively involved — and respected — in the process. Conservation is essentially a human endeavor. And marginalized communities deserve to be treated as humans. The context that I’m trying to establish here is that environmentalism and conservation have a troubling history (and ongoing habit) of viewing humans as “the problem.” Important questions of privilege, perspective, and linkages are ignored in this worldview. This potentially leads down some pretty troubling pathways. Who, precisely, is “the trash” or “the problem”? Several others have commented on the worrying parallel between this “COVID is saving the planet!” and tone deaf comments on overpopulation as the main threat to the environment. Though I greatly admire them and am inspired by their considerable and ongoing legacy, even the otherwise wonderful David Attenborough and Jane Goodall are guilty of contributing to the white-/western-centric narrative of “there are just too many people on this planet!” I am not suggesting that these two giants of conservation are actively racist. 
What I am suggesting is that this narrative often focuses on populations in developing countries in Africa and Asia, despite the fact that it’s those of us in developed countries — e.g., the US, with it’s #1 ranking (we’re #1!) in per capita greenhouse gas emissions — who actually have the most profoundly negative impact on the environment. “There are just too many people who consume too much on this planet, screwing everyone else over,” is more accurate. And also: “There are just too many corporations that benefit from unsustainable practices! There are just too many politicians skewing the system so that it promotes unsustainable industry!” In the extreme, this “humans are the problem” thinking justifies some pretty horrific ideology. Enter ecofascism, a term I’ve only recently learned, though it’s a concept I’ve known about for a while. It uses environmental concerns to justify authoritarian rule and nationalism and racial supremacy; if overpopulation is the problem, well, then, genocide could be a solution — and they’ll tell you who should be the first to be eliminated. It’s a bizarre, and horrific, marriage of environmentalism and nationalism. Here’s a nice, brief video that provides an overview of the ecofascist take on COVID19. I will say that the term is used fairly loosely on social media, but even the most strict and extreme definition of ecofascism is along the same spectrum as the “humans are the problem!” narrative. See this Current Affairs piece for more thoughts on the label of ecofascism. For most of you reading, I’m assuming that you find the idea of genocide to be horrifying. So, if you were to learn that the decimation of Native American populations due to disease and genocide in the 15th and 16th centuries led to increased carbon storage as their previously cultivated lands rewilded, would you call that a silver lining? I hope not. (Ecofascists, however, might use it as justification for their warped worldview.) As this Grist article by Sierra Garcia explains: “Environmentalism has long danced with xenophobia and eugenics.” Similarly, it is fairly obvious that mainstream conservation operates on neocolonial assumptions and approaches, where the rights of local people are ignored in the quest for environmental protection in the process of “Green Grabbing”. I saw a mind-opening talk on this by Patrick Christie, in which he refers to modern, mainstream conservation as representing the “misanthropocene,” where humans (who happen to live near the places and things the rest of us want to protect) are treated as obstacles or opponents rather than entities with rights and agency in conservation. In simple terms: these ways of thinking, whether it be about overpopulation or a pandemic, whether it be a single thoughtless Facebook post or a hate-filled manifesto, contribute to the message that the lives and suffering of others — the most marginalized — are negligible, if the outcome for our precious environment is positive. Perhaps I am hypersensitive and imagining that the slope is much more slippery than it actually is, but I have seen even moderately “anti-human” conservation mindsets bring harm to marginalized communities while generally failing to meaningfully protect the target animals. When we say “humans are the problem,” we need to be honest. Some humans are much more a part of the problem than others, but rarely face the harsh consequences of our impacts to the environment. 
By treating humanity as one uniform mass — and not instead examining the inequitable human-created systems that drive environmental destruction — we could be interpreted as saying that all of humanity (including the most marginalized) should pay for the crimes of a relative few. This mindset pitches human against human, while the harmful systems remain unquestioned and unchanged. Again, from Sierra Garcia in Grist: Broad calls for limiting population, or rejoicing in the pollution-stunting effects of the world’s economy grinding to a halt, are indirect endorsements of mass suffering for people who are already most vulnerable — and incidentally, those who contribute least to climate change. Blind applause for environmental progress without acknowledging who’s bearing the cost is simply a rebranding of white supremacist ideals. Signs of a healing Earth? Environmental Impacts of the Pandemic “Stop preaching, stop lecturing, don’t be such a downer,” you might be thinking. “Can’t we celebrate the positive impact to the environment, even as we feel sorry for those who are hurting now? I’m no Nazi! I just love trees!” I suppose that’s a personal question of etiquette; for me, it feels awfully tone deaf, but as has been covered, I do have a grumpy old lady taking up space in my brain. However, I do fully agree that we should be observing and learning from what’s happening in the environment now, and I believe that this could lead to less ethically-fraught positive outcomes in the future. To mindfully understand the environmental impacts of the pandemic, we need to think about the system — the different scales in space and time, and the connections driving human behavior and larger scale human activities. I like how this opinion piece by Kaitlyn Radde in IDS News puts it: The destruction of the earth for the massive profit of a handful of individuals and corporations is not part of the human condition, but it is a part of the capitalist condition, especially in the face of ineffective regulation. Nature is resurging because of the absence of unsustainable profit-chasing, not because of the absence of human life It’s also important to think not only of what’s happening now, but what will happen in the future: how we resume our activities after the pandemic eases, and how governments choose to regulate and support various industries and sectors, will determine whether we can (1) sustain or magnify positive impacts and mitigate negative impacts (through mindful, far-sighted decision-making), (2) return to “business as usual” with the temporary positive impacts quickly disappearing into the past, or (3) even devolve into a frenzy of rebuilding and consumption and perverse incentives that make things even worse than before. Below I’ll summarize what came up when I thought about the environmental implications of the pandemic, and did some poking around for more information: Air pollution and climate change: I will admit that it is impressive to see photos of crisp mountain views visible from cities for the first time in years. Air pollution is down, and carbon emissions are down. However, the previous dip in emissions brought about by the 2008 recession was followed by a quick bounce back, and the current stockpile of products that is building up would allow for a fast resumption of industrial activities (here and here). · The renewable energy sector is being hit hard by the pandemic’s economic slowdown and disruption to supply chains (also, see here). 
The plummeting value of oil is certainly hitting the oil and gas industry hard, but it is also an incentive for consumers to take advantage of a now super-low priced energy source. · Funding and interest in sustainable industry might diminish as the economic situation grows more dire and the rapid, urgent resumption of industrial activities is prioritized. Also, if this drastic lockdown is taken as a model for reducing environmental impact, it certainly will be broadly unappealing (to put it lightly); as Martha Henriques writes in BBC Future: “It’s safe to say that no one would have wanted for emissions to be lowered this way.” · What is being put into momentum now will shape the future of air pollution and climate change. In the US, we see the Trump administration’s rollbacks of environmental protections and commitment to bailing out industry — such that the very entities that are the worst for the environment are those that might recover most quickly. Plastic pollution: Masks, gloves, medical waste, single-use plastic to avoid contamination — obviously these are being massively consumed and disposed of. Recycling programs are on hold in some countries. The pollution itself is a negative, of course, but I also wouldn’t be surprised if this also triggers a shift back away from the “reuse!” mindset that environmentalists have worked so hard (and rightfully so!) to spread. Food production: I didn’t go so far as to look into analyses on how this has been affecting global food chains, and the linked impacts of food production on the environment (deforestation, greenhouse gas emissions, shipping traffic). However, communities in the US are starting to see the importance of having a more reliably accessible, resilient, and self-sufficient food supply; local farms in San Diego are more popular than ever, overloaded with subscriptions for their Community Supported Agriculture (CSA) boxes. This seems like an important boost to the local food movement (though what you eat might have more environmental ramifications than how far your food had to travel to you). Something similar has been noted in Kenya, where local fish markets are thriving as consumers turn away from imports from China. Biodiversity conservation: With less traffic (on land and water), it makes sense that wildlife has been spotted in formerly heavily peopled areas. It’s an encouraging indication that habitats can be made attractive for these animals after a very short period of reduced human activities — this is a good lesson to learn. Other considerations include: · Great ape researchers are concerned about the possible risk the virus could pose to endangered ape species. · The reduction in wildlife tourism, while perhaps being a positive in terms of greenhouse gas emissions, is a big negative for conservation efforts and communities who depend on it. · There are concerns that poaching might escalate, as poaching patrol teams might need to reduce their monitoring due to health concerns and reduced funding, and as rural communities become desperate due to serious economic impacts from the pandemic. Though this uptick hadn’t yet been reported in Kenya, reports from South Africa, Cambodia, and India suggest that it’s a valid concern. · Other stories on this: here, and more below under “Conservation Operations.” Communities: As mentioned above, economic desperation might drive communities to unsustainable resource extraction for their own survival.
I have seen reports that small-scale fishing communities are suffering during the pandemic. It’s a broadly touted (if not completely accurate) conclusion that desperate people are often less situated to make long-term, sustainable decisions. Once market chains start to open up again, will there be a rush of overfishing to make up for substantial economic losses? Will people be fishing more for their own subsistence needs now that available food from other sources might be less accessible? · However, in response to difficulties posed by the pandemic, fisher organizations in Brazil have set up an emergency network to exchange information and mobilize support — community-strengthening steps that could carry over to post-pandemic fisheries management (and social resilience). · Also, there are fears for what might happen to indigenous groups if the virus tears through their communities — on top of the obvious human tragedy this would pose, this holds significance for conservation because many indigenous groups are active protectors of their natural resources and hold important traditional and local ecological knowledge. Conservation operations in general: The poaching patrols mentioned above are one example of conservation efforts that might be compromised during the pandemic. A survey of conservationists showed that nearly 80% of those surveyed faced negative impacts from the pandemic. Many conservationists are under “stay at home” orders, like the rest of their communities; going into the field is simply not possible, and the momentum of many projects is stalled. However, many groups are working hard to develop online outreach materials, and I know my colleagues are taking this time to step back from our usually frantically packed schedules to reassess strategies and build up core skills among young staff through online trainings. · Zoos who do important work on endangered species research, including breeding programs, are running low on funds. ·This just-out paper touches on the impacts of coronavirus on the field of conservation · Mongabay covers diverse issues on this topic in various articles on various impacts of COVID-19 on conservation and the environment Appreciation for nature: It’s possible that concern for nature will diminish, for some, in the face of a more immediate threat to lives and economic well-being. On the other hand: from what I’ve heard of protests to re-open parts of San Diego (I cannot stomach watching the footage…), people are missing their access to green space and the ocean big-time. I’ve seen many posts from friends of flowers from their gardens, or of splendid landscapes from previous travels. Social distancing from nature seems to make the heart grow fonder, and hopefully our newly-deepened appreciation for nature will last beyond reopening and will influence our own behaviors — and how we push our decision-makers to act. Changed behaviors: Though, as mentioned above, this is not the ideal way to promote environmentally-friendly behaviors, we might develop and practice more sustainable behaviors through this experience (of course, see above for the counter to this re: plastics, but also see above re: food systems). Research shows that behavior change tends to be more effective and lasting during times of change, so behaviors we already defaulted to — e.g., reduced commuting — as well as behaviors we consciously choose during this time could set the stage for better habits (as suggested here). 
Additionally, businesses will be pushed to examine their resilience, thus opening the door to changes that could also be better for the environment. What happens from here? As alluded to in the sprawling series of ideas and resources above, the post-COVID19 pandemic environment depends on the decisions and actions that we put into place, even now. Sure, personal actions and behaviors are important, but we need structural change that embraces environmental justice and social justice, so that human lives can be protected alongside the environment. This section will feature several block quotes, because many people have already articulately and beautifully expressed these ideas, and I am falling farther and farther behind on work that others are waiting on (heh). The head of the UN Environment Programme, Inger Andersen, wrote: … as we inch from a “war-time” response to “building back better”, we need to take on board the environmental signals and what they mean for our future and wellbeing, because COVID-19 is by no means a “silver lining” for the environment… Visible, positive impacts — whether through improved air quality or reduced greenhouse gas emissions — are but temporary, because they come on the back of tragic economic slowdown and human distress. That’s more humane, substantive, and useful than the closing point from this piece from the Guardian that I read earlier today: A poster to mark the first Earth Day featured the quote: “We have met the enemy and he is us.” Fifty years on, will this be the year we collectively stop taking the planet for granted, degrading and exploiting its resources? Will we now, also, realise how vulnerable a species we actually are? This is not necessarily incorrect, but it is oversimplified to the point of being banal. Not all of “us” are the “enemy,” and deciding to “stop taking the planet for granted” very much lies in the hands of those of “us” who have the most power and inflict the most impact. Plenty of “us” do not take the planet for granted — just talk to any small-scale fisher who has seen their catch plummet over the past decades, who has witnessed the havoc wreaked by illegal industrial fishing boats, coastal development, and climate change, as well as ineffective social and economic development programs. Plenty of “us” realize all-too-well what it means to be vulnerable, on the edges of survival. A better set of questions would be: Will this be the year we step up and fight, earnestly and urgently, to push our governments to do better? Will we also realize how we need to work to protect the most vulnerable among us? It’s important to see the pandemic not as a condemnation of humanity, but of our current systems. From Jennifer Hijazi’s story in E & E News: Jacqueline Klopp, co-director of the Earth Institute’s Center for Sustainable Urban Development at Columbia University, said the virus should not be celebrated for its effects on the environment. It should serve as a warning. “It is not constructive to treat the pandemic as a policy response in terms of [thinking] it’s helping us with our structural problems around emissions,” she said. “It’s a symptom of us not addressing our serious environmental and social problems.” The pandemic is the symptom, the system is the problem, and some humans drive (and disproportionately benefit from) the system while most of humanity suffers from the symptom. To revamp our socio-political and economic systems to be more nature- and human-friendly, we need intensive, informed, and inclusive activism. 
Environmentalists cannot — morally or pragmatically — promote messages or ideas that pose human rights, lives, or dignity as disposable in the face of environmental concerns. There’s a long and troubled history of conservation sowing distrust among communities around the world, and how the sector reacts to this pandemic could shape — for better or for worse — the impression that communities and the general public have of conservation. Alasdair Harris, the Executive Director of the NGO Blue Ventures, puts it well in his commentary in Mongabay: Impacts of the coronavirus pandemic on vulnerable communities in the Global South go far beyond the looming public health emergency. The broader economic and environmental ramifications are of profound importance to biodiversity conservation. How the conservation movement responds will determine our relevance and credibility in the eyes of many communities who depend on nature for their survival. This a chance to strengthen the all-too-weak alliances that conservation makes with communities, to align environmentalism with social justice to change a system that serves neither the environment nor most humans. The importance of community action is not limited to rural villages in biodiversity hotspots — we can see it popping up in neighborhoods here in the US, for example. Communities are coming together to fill in the gaps in the government’s safety net (NB: though this is inspiring, we shouldn’t have to be doing this!). As Ruth Wilkinson writes in Global Justice Now: Around the world, people are building community solidarity and support systems, as well as organising to hold corporations and governments to account when their greed endangers the wellbeing of society. We are learning that community, health and social care are vital, whereas corporate profiteering and exploitative work practises can be done away with. When we come out the other side of this crisis, we must work to continue applying these lessons to resist the response of ‘business as usual’. And herein lies the key to a more resilient, just future for humanity on Earth. Communities working together, out of love for the environment and love for each other. We need long-term strategic planning and decisive action founded on critical thinking and informed by sound science — but also informed by empathy for each other. Sure, it sounds awfully sentimental, but it’s true; the sense of connection, understanding, and trust that empathy brings about is critical to sustained success of complex ventures with many different stakeholders. Because writing profound endings is difficult, and because he already wrote something wonderful that works perfectly here, I’ll leave you with Thich Nhat Hanh’s words from his 2014 Statement on Climate Change for the UN: Our love and admiration for the Earth has the power to unite us and remove all boundaries, separation and discrimination. Centuries of individualism and competition have brought about tremendous destruction and alienation. We need to re-establish true communication–true communion–with ourselves, with the Earth, and with one another as children of the same mother. We need more than new technology to protect the planet. We need real community and co-operation. *** Please accept as an postscript of sorts this excerpt from the eloquent IUCN Statement on the COVID-19 Pandemic, which resonated with me:
https://medium.com/age-of-awareness/environmentalism-in-a-pandemic-the-important-of-critical-empathetic-and-future-driven-thinking-fee55e405c2e
['Ts Whitty']
2020-04-29 04:41:25.982000+00:00
['Environmental Issues', 'Pandemic', 'Conservation', 'Coronavirus', 'Covid 19']
A Beginner’s Guide on Sentiment Analysis with RNN
Maximum review length and minimum review length. print('Maximum review length: {}'.format( len(max((X_train + X_test), key=len)))) Maximum review length: 2697 print('Minimum review length: {}'.format( len(min((X_test + X_test), key=len)))) Minimum review length: 14 Pad sequences In order to feed this data into our RNN, all input documents must have the same length. We will limit the maximum review length to max_words by truncating longer reviews and padding shorter reviews with a null value (0). We can accomplish this using the pad_sequences() function in Keras. For now, set max_words to 500. from keras.preprocessing import sequence max_words = 500 X_train = sequence.pad_sequences(X_train, maxlen=max_words) X_test = sequence.pad_sequences(X_test, maxlen=max_words) Design an RNN model for sentiment analysis We start building our model architecture in the code cell below. We have imported some layers from Keras that you might need but feel free to use any other layers / transformations you like. Remember that our input is a sequence of words (technically, integer word IDs) of maximum length = max_words, and our output is a binary sentiment label (0 or 1). from keras import Sequential from keras.layers import Embedding, LSTM, Dense, Dropout embedding_size=32 model=Sequential() model.add(Embedding(vocabulary_size, embedding_size, input_length=max_words)) model.add(LSTM(100)) model.add(Dense(1, activation='sigmoid')) print(model.summary()) Figure 3 To summarize, our model is a simple RNN model with 1 embedding, 1 LSTM and 1 dense layers. 213,301 parameters in total need to be trained. Train and evaluate our model We first need to compile our model by specifying the loss function and optimizer we want to use while training, as well as any evaluation metrics we’d like to measure. Specify the appropriate parameters, including at least one metric ‘accuracy’. model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) Once compiled, we can kick off the training process. There are two important training parameters that we have to specify — batch size and number of training epochs, which together with our model architecture determine the total training time. Training may take a while, so grab a cup of coffee, or better, go for a run!
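The call to fit is not shown above, so here is a minimal sketch of what it might look like; the batch size, number of epochs, and validation split are illustrative choices rather than necessarily the author's exact values:

batch_size = 64
num_epochs = 3
model.fit(X_train, y_train, validation_split=0.1, batch_size=batch_size, epochs=num_epochs)

# evaluate() returns the loss followed by the metrics given at compile time,
# so scores[1] is the test accuracy
scores = model.evaluate(X_test, y_test, verbose=0)
print('Test accuracy:', scores[1])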
https://towardsdatascience.com/a-beginners-guide-on-sentiment-analysis-with-rnn-9e100627c02e
['Susan Li']
2018-06-04 12:48:15.996000+00:00
['Machine Learning', 'Python', 'NLP', 'Recurrent Neural Network', 'Deep Learning']
One Big Reason Why iOS8 Sucks
One Big Reason Why iOS8 Sucks The bigger the display is, the less easily-accessible zone is, the more necessary is the adaptation of design to improve the user experience. Before I dive into details I should take a moment to explain a couple of things. Let’s travel back in time to the iOS7 release. What has changed in the system compared to its previous versions? Users’ content got the highest priority. No skeuomorphism in design. Lightweight UI. More focus on interaction. Almost everything according to Dieter Rams. And that was a fucking right and necessary move by Apple! But what did they not change? The pattern of usage! It could be seen from the very first iPhone up to the iPhone 4s. The dimensions of the display were the same. Hence, the patterns of user behavior and usage of the devices were the same too. This allowed Apple to transfer users to the new iOS7 almost seamlessly. And the reason for that — they instantly got familiar with the common UI elements: a navigation bar at the top, a tab bar at the bottom, a table view, etc. The lineup of Apple’s iPhones is growing in dimensions, and I think Apple has already reached the limit. No, I am not stating there could not be a bigger size. I’m just saying that if there is a bigger than iPhone 6+ display (5.5 in) then that will be an absolutely different device. It is called — a tablet. Therefore, another usage, another user behavior, etc. Apple knew and clearly understood that the increase of the display size was inevitable. Bigger screens will lead to a bigger size of every single system element. Thus, the load on the processor, network and so on… I think that’s indeed one of the main reasons why all those changes I’ve mentioned above were applied. But did they take into consideration that the inherent behavior patterns that originated with the previous iOS would be extremely uncomfortable and ergonomically incorrect on the larger devices? I suppose they did not. The basics Take a look at your hands right now. Did your thumbs, palm and hand overall become bigger? No? Hmm, mine too (Okay, if you are a teenager they could really have gotten bigger as you have! Good for you…). I would even say they have become a little bit thinner because of the adaptation to constant use of the iPhone. But in general, yeah, they are still the same. And what about your phone? Which one were you holding three years ago and which right now? The resizing of the devices inevitably involves a change in how we hold our phones, how we use them and so on. And since we are not growing over the years (the human body stops growing between the ages of 18 and 21), the behavioral patterns in the iOS interface which were laid out in previous versions have not worked properly since the iPhone 5. Here are three basic ways of how people are holding their phones: 1. Basic ways of how people are holding their phones. According to research by Steven Hoober, 85% of observed users work with their phones using one hand. This is happening mechanically, on a subconscious level. The Cradled method seems to be the perfect one. But it has obvious good and bad sides. Most of the display is in an easily-accessible area. Both hands are always busy. But I’m pretty sure that the Cradled method is a necessary measure, which appeared just because of the inconvenience of using big-sized phones. A human is a lazy creature and will not involve two hands instead of one when it’s possible. But let this be another topic called ‘User behavior on a psychophysical level’ or something like that.
The problem The following heat map shows sorts of the thumb zones applied to every iPhone display size since 2007. 2.1. Thumb zones. According to ‘How to design for thumbs in the Era of Huge Screens’ observation by Scott Hurff Have you noticed that the typical navigation bar position is in the red area starting from iPhone 5? That stumbling block of the whole system which consists of the most important actions (Send, Done, Save, Confirm, Back, Close, etc.) is there because of the pattern which has been implemented on the system level. All the previous iOS versions, as well as the current one, accustomed us to use it. Even if it’s not convenient at all. And developers are forced to follow that pattern too, because they have no choice. Now it became a stereotype for a usual user. I know a few great developers who are trying to change the situation on their own to improve user experience of their applications. But it’s not enough to destroy or even change a little such a strong stereotype. It must be changed on the system level. By Apple. In iOS. Only then it will be possible to reteach a user with a new, correct and more convenient pattern. Namely with the iOS8 release Apple had to change that and adapt their design under the new dimensions of the iPhones. That was not done. And that is a big reason why iOS8 sucks! Therefore, a conclusion: The bigger the display is, the less easily-accessible zone is, the more necessary is the adaptation of design to improve the user experience. As a true proof of the reason why do we have to care about the big-screen-devices, please check out the graph below. And that is just a beginning! 2.2. Screen size and browsing share by phone size. According to Adobe Digital Index, 2014 Now let’s run a simple test on a couple of apps I use daily. Mailbox. The whole and the only one bar with all navigation controls is on the red area. So, you have to shift your grip up to reach any element on it. Those guys launched their product 6 months after iPhone 5 has been released. That means they knew their product would be used on a bigger device and did not change anything in the native iOS pattern. And the reason is clear — a user should be familiar with the app from the first touch. But they got lost in the usability on the bigger iPhone.
https://uxdesign.cc/the-big-reason-why-ios8-sucks-73ac7925c626
[]
2019-04-19 13:44:39.579000+00:00
['UI', 'Design', 'Mobile', 'iOS', 'UX']
The Special Turkey Saved From Being Christmas Dinner
Driving past the slaughterhouse in Moroni, Utah, where countless turkeys are killed each year, Zoe Rosenberg, 18-year-old founder of Happy Hen Animal Sanctuary in California, says all you can see surrounding it are farms. “When you’re standing at the slaughterhouse, you can just look over and see the hills, sheds everywhere, for miles and mile and miles; hundreds of thousands of turkeys.” It’s from these farms, and at this slaughterhouse, where Rosenberg, along with activists from Direct Action Everywhere (DxE), have been rescuing turkeys for the past three years, in cooperation with the new slaughterhouse owner. Since then, they have saved and re-homed nearly 150 birds, otherwise destined to be holiday dinner. But it is one particular little turkey, now named Delilah, who Rosenberg says really demonstrates how a change in care can change an animal. “She’s really a special girl.” ‘A lot of them we’ll pull out of the trailer and they’ll be very scared and stressed out, and after a couple of minutes they calm down, their eyes start to relax, and they realize ‘these people aren’t going to hurt me.’’ When performing the third annual turkey rescue this past November, Rosenberg says slaughterhouse staff did not want the activists too close by. “They don’t want the activists at the slaughterhouse, so they bring the turkeys to a building a few blocks down,” she says. “But as you’re driving by you can see outside the slaughterhouse several trucks parked, all lined up, filled with turkeys who are going to be slaughtered.” Once at the meeting place, a horse trailer filled with turkeys arrived. This year, 19 were lucky enough to be pulled. Rosenberg says she finds it interesting to see how the turkeys change upon their immediate transfer from slaughterhouse staff to the rescuers. “A lot of them we’ll pull out of the trailer and they’ll be very scared and stressed out, and after a couple of minutes they calm down, their eyes start to relax, and they realize ‘these people aren’t going to hurt me.’” After taking each bird out, they are given a brief check, “to make sure they are healthy and they will be ok for their transport home to the sanctuaries they are being placed at,” Rosenberg explains. Upon quick inspection, Delilah seemed healthy enough — for a turkey coming out of a factory farm — so she was placed in her crate. But then, Rosenberg recalls, a nearby police officer who was there to ensure the sanctioned rescue went smoothly, “all of a sudden he was like, ‘Oh my god this turkey is gushing blood’.” Photo provided by Zoe Rosenberg. Turns out Delilah had a broken wing, and the bone had punctured through her skin. “She had a really bad injury that she must have acquired while the slaughterhouse was transporting her or at some point on the factory farm,” says Rosenberg; sadly not an uncommon occurrence. The rescuers immediately bandaged up her wing, departed to purchase pain medication to prevent shock, and made their way with Delilah to the sanctuary. “We have a full time on staff veterinarian here at Happy Hen,” Rosenberg adds. Ironically, Rosenberg says staff select specific turkeys to be rescued because they are considered the healthiest. ‘She knows we are trying to help her. Now she is outside, she is with the other turkeys, she is so happy.’ The partnership with the Utah slaughterhouse came about as a result of new ownership, and more specifically, following DxE activists receiving felony charges for performing an open rescue of turkeys in 2017. 
DxE was investigating one of the farms “and rescued a few baby turkeys who were really sick and dying,” Rosenberg says, resulting in the charges. Soon after, the farms and slaughterhouse were bought by a new owner who did not agree with the charges, but he had no power to have them dropped. “So in an act of compassion toward the activists he is giving us turkeys each year before Thanksgiving,” turkeys including Delilah, who Rosenberg reports would have likely been Christmas dinner. Delilah today at the sanctuary. Photo provided by Zoe Rosenberg. Today Delilah is living a much better life, with 25 other turkeys and 275 other rescued farmed animals, on 40 acres at Happy Hen Animal Sanctuary. “She is such a good patient. Every time she needs to get her wound treated she just sits there. She knows we are trying to help her,” Rosenberg says. And the special turkey is no longer in the ICU. “Now she is outside, she is with the other turkeys, she is so happy.” She also enjoys special pumpkin treats, which she would only eat out of a specific red bowl for the first two weeks, Rosenberg laughs. “She’ll be living here permanently,” she says, “where she’ll be able to wander outside in the sun with her friends,” get ongoing top veterinary care, and where this holiday season she will be celebrated, not eaten.
https://medium.com/tenderlymag/the-special-turkey-saved-from-being-christmas-dinner-f95d749441d1
['Jessica Scott-Reid']
2020-12-25 18:09:06.919000+00:00
['Christmas', 'Turkey', 'Vegan', 'Animals', 'Sanctuary Stories']
Python for Cybersecurity — Lesson 4: Network Traffic Analysis
Welcome to the fourth installment of the Python for Cybersecurity web series! In the last lesson, we discussed the importance of Machine Learning in cybersecurity and how Pandas can be used to perform data analysis in Python. In this lesson, we are going to see how we can analyze network traffic. Before we jump into it, let us brush up on our fundamentals a little bit and understand what a network is. Source: Lifewire.com What is a computer network? The formal definition according to techopedia.com is, “A computer network is a group of computer systems and other computing hardware devices that are linked together through communication channels to facilitate communication and resource-sharing among a wide range of users.” Very simply, a network is something that lets two or more users communicate with each other. In this day and age where we are heavily reliant on the Internet for almost everything that we do in our day-to-day lives, cyber threats are also on the rise. So, it is important that we understand how it functions, which differs across infrastructures. Now, what is network traffic? Ever stood at a traffic signal and thought, “Where the hell are all these people headed?” Well, that is comparable to a network too! Just as the name literally means, network traffic refers to the data that is going across the network in a specific time frame. A huge flow in one direction causes congestion here too! Does anyone really monitor all this? I recently logged into an open wifi in a cafe and walked out after an hour or so. The moment I left that radius, I received an email thanking me for using their wifi. Although it is not something that I was unaware of, it felt a little creepy that everything we do in a network is actually monitored! In addition to this, it has always intrigued me how people are able to find our location from our IP addresses. Have you seen those movies where a hacker is able to snoop into any system and is able to tell where the ‘money’ is or even find the location of where the network requests are coming from? It seems so cool to watch them do all that, which motivated me to explore further. Finding Geo-Location from IP All this stuff that seems really high tech is really simple and easy to code — thanks to Python! For finding the geo-location based on the IP address provided, we can use the package geoip2 and access the database. The mmdb file can be downloaded from the below link: Now, let us write a very simple piece of code to read from the database that is saved in the same directory as the code file. We give a sample input here of 128.101.101.101 to see if it returns the location of the IP. Code to return country name of the IP target Output of the program Voila! We can now start locating the IP addresses that access our network. (Do not attempt this in any other network and get jailed :P ) Now that we have a hold of the basics, let us try to dig a little deeper with one of the most famous packages in terms of network security in Python — Scapy! So, what does Scapy have to offer? Ever done a port scan using the nmap command or a traceroute to see what your network request looks like? I have always had a hard time remembering the exact syntax of these commands and typing them in without looking them up! This is where Scapy comes to the rescue and offers these and many other powerful operations with one import statement into your program!
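Before going further with Scapy, here is roughly what the geolocation lookup from the screenshots above might look like. This is only a sketch: the database filename and the city() reader method are assumptions based on the GeoLite2 City download, so adjust them to whichever .mmdb file you actually downloaded.

import geoip2.database

# Open the local MaxMind database (filename is an assumption; use your downloaded file)
reader = geoip2.database.Reader('GeoLite2-City.mmdb')
response = reader.city('128.101.101.101')
print(response.country.name)  # prints the country the IP resolves to
reader.close()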
Scapy allows you to decode packets of a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more. The package is so flexible that it allows you to sandwich any feature it has to offer with any other. It resonates with building newer tools where you don’t have to write anything from scratch or even write hundreds of lines of code. Let’s trace the route! To understand what Scapy has to offer, let us implement the simple traceroute command, which we use to find the path traced by a request through the network, using Scapy in Python! First of all, let us recollect what the output of the traceroute command looks like. In this example, I am trying to trace the route to www.wikipedia.org and by default it goes up to displaying the details of 30 hops. You can see the time taken to respond and the host network information as the output from the command. Now that we understand what needs to be done, let us jump to our Python implementation. The sample output should look like this: In the code, we first import the scapy package into our code, which has the IP and UDP modules. We then store the reply from the network in the reply variable. As shown, the code breaks and exits the program if there is no response, or prints the hop count until the destination is reached. A good takeaway I had from the Black Hat Python learning module, for coding the packets sent and received through Scapy, is that the methods used differ with the network layer in question (the seven layers of the OSI model). Layer 3: sr (returns the answered and unanswered packets) and sr1 (sends packets and returns only the first answer) Layer 2: srp (returns the answered and unanswered packets) and srp1 (sends packets and returns only the first answer) There are many advanced techniques that are employed to monitor traffic, of which Wireshark/Tshark is a predominantly used tool for analysis. It gives us real-time information on what is currently happening in the network. I would suggest reading about that and trying to implement a simple packet analysis code in Python to gain a thorough understanding of how packets are sent and received in a network. With this, we come to the end of this lesson on a brief introduction to network and traffic analysis. Happy Learning! Practice Create a Python script that sends and receives packets, and then displays the information pertaining to those packets. Advanced: Try to implement a network port scanner using Scapy. Learning Resources
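As a starting point for the practice exercises, here is a rough sketch of the TTL-incrementing loop the text describes; the hostname, hop limit, and UDP destination port are illustrative choices rather than necessarily the author's exact code.

from scapy.all import IP, UDP, ICMP, sr1

hostname = "www.wikipedia.org"
for ttl in range(1, 31):  # walk up to 30 hops, like the classic traceroute default
    pkt = IP(dst=hostname, ttl=ttl) / UDP(dport=33434)
    reply = sr1(pkt, verbose=0, timeout=3)  # one reply, or None on timeout
    if reply is None:
        break  # no response at this TTL; stop
    elif reply.haslayer(ICMP) and reply[ICMP].type == 3:
        print("Done!", reply.src, "reached in", ttl, "hops")  # port unreachable means we arrived
        break
    else:
        print(ttl, "hops away:", reply.src)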
https://medium.com/cyberdefendersprogram/python-for-cybersecurity-lesson-4-network-traffic-analysis-with-python-6321f4c9d3f7
['Johnsy Vineela']
2018-08-21 01:15:37.609000+00:00
['Python', 'Network Analysis', 'Network Security', 'Python4cybersecurity']
Your Complete Guide on How to Become More Productive
Expectations In your first 100 hours of learning Productivity, you should be able to: ✅ Formulate your “why” in a single, clear sentence ✅ Journal for clarity ✅ Do your Ikigai and Map of Life ✅ Delegate the urgent but not important tasks ✅ Say no to the wrong things ✅ Get better at SMART goal-setting ✅ Plan your work and life effectively ✅ Improve in using the 80/20 rule ✅ Do proper time blocking and auditing ✅ Improve your sleep hygiene further ✅ Improve on your ability to power nap ✅ Establish productive workout routines ✅ Track your habits ✅ Learn about nutrition and eat the right ingredients ✅ Increase your positivity ✅ Understand the science of timing How to go from good to great in Productivity Prepare There’s a lot going on in this phase. Like you did during the first phase, start by reading the material provided in the above skill tree. There are many new concepts you should understand at this point. The order you start with doesn’t really matter. You’ll practice everything right after anyway. You should spend about 15 minutes reading material for the first four weeks. Review what you read after one week and one month after first reading it. This will help you retain the information for longer. Overall, you will spend about 7–10 hours reading and understanding the concepts proposed in this phase. Practice There is so much to practice during this phase. The most important aspect of productivity is doing the right things. For that, nothing beats having a clear life goal — your “why”. We’ve included many resources on the topic in the skill tree above. My two favourite activities are doing your Map of Life and Ikigai. Your Map of Life should be revisited monthly, and take up to one hour to do. Ikigai can be revisited less frequently. I do it about once every year. It takes longer to do; usually around 3–4 hours for me. Once you’ve done both, it’s much easier to formula your why. I like to use the 5 whys method for that. Basically, start by asking yourself: “why do I do what I do?” To your answer, ask why again. Do that at least 5 times to get a proper answer. All the above exercises should be done in a journal. It doesn’t matter if it’s electronic or paper, as long as you write your thoughts. The paper version has the added benefit of involving more senses, and for most people, is slower than typing. This allows you more time to think before writing. There are also many other ways journaling can help you be more productive. Make sure to review the resources we’ve included on this page and in the skill tree above. One of the most underrated aspects of productivity is delegation. You should have learned more about that when you read about the Eisenhower Matrix and practiced it. We’re talking about quadrant III activities; the urgent but not important tasks. Note the “not important” here just refers to you. It doesn’t mean the task isn’t important to do. Your goal is to find who would find it important and also has the capacity to do it. If you find someone like that, they’re the ideal candidate to delegate to. So, during this phase, figure out what you can effectively delegate and how you’re going to go about it. Create a plan for this. Hire an assistant if it makes sense. Involve colleagues, friends or family otherwise. Once the plan is done, put it into practice right away. Reflect on your experience, refine your process, and improve. Another thing that’s crucial to great productivity is saying no to things. 
You only have so much time, so if you’re too nice to others with it, you’re sacrificing your productivity. Check out the resources we’ve included for that and try to find a way to make it impossible for you to say yes to the wrong things. This could mean printing a list and keeping it in front of your work environment (or on your fridge at home), so you’re always reminded of what isn’t that important to do. Get creative here and you’ll be rewarded with a lot more focus, time, and energy. During this phase, it’s also important to improve upon what you learned during the last phase, including SMART goals, the Eisenhower Matrix, improving sleep hygiene, and more. Make sure to spend at least one hour per week refining the techniques you learned. Include power napping in your routine as much as you can. When you combine walking breaks and power napping during your day, you will be fully energized the whole day. One walking break and one power nap are usually all I need. If you need more, that’s fine too. During this phase, you also want to start building routines and habits. My suggestion is to start with atomic/micro habits you can do in less than 5 minutes. You may also want to look into compound habits. For example, after you’re done brushing your teeth, do 5 push-ups. When you combine atomic and compound habits, you start doing a lot more without feeling like you need to block time for them. That’s really powerful. At some point during the first few weeks, start looking at nutrients that promote good health and provide energy. You’d be surprised how much of what you eat affects your mood and overall energy levels. We’ve included resources that show you exactly what you should be eating to increase your energy. If you can start eating them during this phase, it will help you get through the rest. And last but not least, read the books we’ve included in the “when” section of the skill tree. You’ll understand so much more about why you perform better doing certain things at certain times. This will help you plan better, meaning you’ll schedule things at the right times of the day. Ponder With every week and month of practice, reflect on the usual questions: What went right? What went wrong? How can I improve? In addition, near the end of your 100 hours, reflect on the following questions:
https://medium.com/skilluped/your-complete-guide-on-how-to-become-more-productive-cd42e055fa8b
['Danny Forest']
2020-12-10 21:21:58.761000+00:00
['Education', 'Self Improvement', 'Productivity', 'Life Lessons', 'Learning']
Why I think we’re at least two years away from mainstream blockchain adoption
I’ve spent the last three and a bit years working on blockchain / distributed ledger technology in some way, shape or form. The article below is based on my experiences and observations; if you’ve come across anything that contradicts what I’ve written, I’d love to hear about it. Things have come a long way in the ten years since Satoshi published his / her / their revolutionary whitepaper describing a new peer-to-peer electronic cash system underpinned by distributed ledger, or blockchain, technology (DLT). We’ve seen the launch of countless new protocols from Ethereum to Neo to Tezos, corporates left, right and Scentre announcing blockchain pilots almost daily, and Don Tapscott’s TED talk reaching 4m of us (and counting) around the world in three years alone. Or have they? DLT has gone through boom and bust, hard forks and soft forks, ICOs, STOs, utility tokens, altcoins, $hitcoins, stablecoins… so why has it still not gone mainstream? From spending the last three years both working on DLT products and reviewing platforms built by others, I’ve come across two major barriers that are holding the technology back: poor user experience (UX) and lack of trust. The huge community of developers, designers, engineers, regulators and everyone else involved is solving the underlying issues creating these barriers every day, but projecting forward a rough pace of change based on nothing scientific but my own experience, I think we’re at least two years away from this tech really being adopted by the masses — and by that I mean being used by a large number of people at least once a day. Large is subjective, yes, but I’m talking Dropbox / iTunes levels for B2C, and Microsoft Office / AWS levels for B2B, so not exactly trivial numbers of people. So what is it about UX and trust that is so important? Success for a new product or technology is like a secret sauce — there’s no exact recipe for getting it right, but people broadly agree on what ingredients need to go in. UX and trust are two key elements of the secret sauce, and companies have to get these two things (amongst others) absolutely nailed to go big. Look at the likes of Facebook: great UX, but massive user drop-off following the Cambridge Analytica scandal, leading to a huge erosion in trust (60% of Americans don’t trust Facebook). Similarly, Britain’s banks have been losing customers to challenger banks such as Monzo because their UX just doesn’t stack up to what the modern consumer expects. Hit that sweet spot, though, and you’re onto a winner: Amazon’s retail and cloud computing services are two of the world’s most trusted, and coupled with a great UX, that trust has delivered market leadership in both areas. Where there are challenges, there are also opportunities. Below I outline some of the biggest UX and trust issues I’ve come across in the last three years, and my perspective on the opportunities that exist to solve them.
https://michael-yorke.medium.com/why-i-think-were-at-least-two-years-away-from-mainstream-blockchain-adoption-5411607210dc
['Michael Yorke']
2019-08-04 20:28:54.382000+00:00
['Distributed Ledgers', 'User Experience', 'Smart Contracts', 'Trust', 'Blockchain']
AI Lesson for Teachers, Teens, and Everyone In Between
My goal is to outline a lesson that any teacher can use in the classroom, or that any person interested in a very high-level understanding of how AI works can walk through. This is not meant to be an exact representation of how AI truly works, but simply to give intuition as to how it works. I have been a Math, SAT, ACT, ISEE tutor for close to a decade and work in machine learning research. Pre-requisites: know what a probability is. There are 2 sub-lessons, 1 smaller one and 1 larger one. All lessons will be under the scope of computer vision problems — object detection. Supervised learning vs Unsupervised learning Training a machine learning model Machine learning problems are often broken into two categories: supervised and unsupervised problems. Supervised problems are where you give the model examples of something and then expect it to be able to predict that thing later on an unseen image. Unsupervised problems are where you have a bunch of images and you try to figure out which ones are most closely related (not based on anything except what you can see) and then group them without knowing what the final class you are trying to predict actually is. Supervised Learning I will now show you a series of shapes and a name for each shape. “zhags” These shapes above are called zhags. “flarks” These shapes above are called flarks. Now I will present you with an object and you tell me if it’s a zhag or a flark. There is a hidden rule that categorizes zhags and flarks. Your job is to learn that rule. ? This is a zhag. If you guessed that, awesome! You learned a successful model. But maybe now you get an object that doesn’t fit exactly what you thought. ? This is a flark. Little did you know, the hidden rule is that if the shape has any curve at all, it is a flark. This is why sufficient training data is so important to machine learning problems! If this were a missing training data point in an autonomous vehicle, it could cost someone their life. Unsupervised Learning Say we have a set of images and, using only the images themselves with no prior knowledge, we need to place them on the xy-plane so that the distance between them represents how different they are from one another. Here is a group of images. Images from Wikipedia Now we are meant to place these on the xy-plane. Here’s a possible iteration of this. So if I now said, “group these into two sets,” you would probably do this in one of two ways.
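If you want to see this intuition in runnable code, here is a minimal sketch in Python with scikit-learn; note that the original lesson uses no code, and the shape features, feature values, and xy-coordinates below are invented purely for illustration. The first half learns the hidden zhag/flark rule from labelled examples (supervised), and the second half groups unlabelled points only by how close they sit to each other (unsupervised).

# Hypothetical illustration of the lesson above, not the author's code.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised: each shape is described by two made-up features,
# [number_of_straight_edges, has_any_curve], plus a label.
X_train = [[3, 0], [4, 0], [5, 0],   # zhags: only straight edges
           [0, 1], [1, 1], [2, 1]]   # flarks: anything with a curve
y_train = ["zhag", "zhag", "zhag", "flark", "flark", "flark"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# An unseen shape with six straight edges and one curved side:
# with enough training data, the model recovers the hidden rule.
print(model.predict([[6, 1]]))   # expected: ['flark']

# Unsupervised: pretend each image has already been placed on the
# xy-plane; we only ask which points sit close together.
points = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4],
          [8.0, 7.5], [7.8, 8.2], [8.3, 7.9]]
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(groups)   # e.g. [0 0 0 1 1 1]

Running this needs scikit-learn installed (pip install scikit-learn). Notice that the clusters come back as bare numbers: the algorithm can tell you which points belong together, but it can never tell you what the groups mean, which is exactly the distinction the lesson is making.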
https://towardsdatascience.com/ai-lesson-for-teachers-teens-and-everyone-in-between-7df81bd343f3
['Mike Chaykowsky']
2020-06-15 21:42:04.250000+00:00
['Machine Learning', 'Education', 'Students', 'Artificial Intelligence', 'Teaching']
Social Media Is Growing Up. What Does That Mean for Your Marketing?
Social Media Is Growing Up. What Does That Mean for Your Marketing? By Heike Young In the context of the omnichannel mix, social media is at the epicenter of innovation. Not only does it have incredible capabilities for listening and publishing, but it’s also the nucleus for modern direct-response advertising for demand gen. Now with emerging products like Facebook Messenger, social is set to continue revolutionizing e-commerce and customer support. On this week’s episode of the Marketing Cloudcast — the marketing podcast from Salesforce — we’re discussing how social media is growing up and maturing into a more useful marketing tool. We brought on the expert: Luke Ball, Senior Director of Product Management at Salesforce, who knows social tools better than anyone else. If you’re not yet a subscriber, check out the Marketing Cloudcast on iTunes, Google Play Music, or Stitcher. Take a listen here: You should subscribe for the full episode, but here are six things that the maturation of social means for your marketing, from our conversation with Luke Ball. 1. Social today is the Swiss Army Knife of customer engagement. “We’ve been through the hype cycle and the trough of disillusionment, but I’m seeing a lot of maturation happening right now. Many businesses finally realize the full potential of social and are incorporating it into their processes and business strategies in a more holistic way,” says Luke when asked about the current state of social. As Luke points out, “We’re seeing new channels emerge within channels. Messenger is a great example and the one to watch this year.” 2. Social listening data is at an all-time high. “Social media is real time. It’s extremely high volume, and people are literally falling over themselves to tell you what they think of your brand, your competitors, your industry, your products. That insight is actionable, it’s scalable, and it’s something that doesn’t just affect marketing,” says Luke. With the influx of sites like Yelp and TripAdvisor, there’s truly so much you can listen to today when it comes to customer sentiment about your brand. In fact, there may be more out there than you can keep up with. 3. We need to stop pigeonholing the capabilities of social media. The evolution of social has been so rapid that many brands are struggling to keep up. When brands pigeonhole social media as only an engagement tool or only a publishing tool, Luke says “they’re missing the bigger story. I would encourage companies to take a step back from how they look at it as a marketing channel. Look at the value of the content and the insight that’s coming through, both on an individual and aggregate level, and ask: how can that inform your business?” 4. Social customer service is better than ever. Handling service cases on social costs about one-third as much as other channels and can offer a faster turnaround. Plus, these customers often tell other people about it after they’ve had a timely, helpful experience getting their question answered on social. “When you satisfy that customer, when you make them happy, it has an amplification effect that you don’t get in a private one-to-one channel like phone,” says Luke. As he also points out, “Social customer service really is the most concrete ROI because you’re talking about a cost center, you’re talking things where you want deflection and fast turnaround.” 5. Tools and data are super accessible. “We’re seeing the scaling of social. Teams of practitioners are now becoming centers of excellence. 
Their jobs are more about enabling, empowering, governing, selecting the strategies and tools for the rest of their business to be successful at social. We call this the hub and spoke model,” shares Luke. Because of this shift, Luke explains, “You need to have a different set of priorities for the tools that they use. They have to be very user-friendly, easy to learn, easy to re-learn, mobile-first, with access on the go. They need to be consolidated so they’re a one-stop solution, and they need to be connected.” “It’s a combination of making the tool more accessible but also making the data more accessible,” he says. And today, that’s actually possible — whereas in the early days of social, it was all just a dream. 6. Most companies could still do much more with social data. “I don’t think most brands are fully realizing the value of the data that’s coming in,” says Luke. Listening for life events can have a big impact. “For consumer brands, life events are huge moments in time when you can shift someone’s loyalty and awareness of your brand — which can have resonance for decades to come,” he shares. Luke’s advice to organizations looking to get the most out of social? “Instead of asking: what can my company do in this scenario? It’s about flipping the lens and saying: what does my customer want from me and how do I fit into their life? Then structuring around that,” he says. We talked about much more in the world of social with Luke. Get more insights in this full episode of the Marketing Cloudcast. Join the thousands of smart marketers who already listen. Subscribe on iTunes, Google Play Music, Stitcher, or wherever else you like to listen to podcasts. New to podcast subscriptions in iTunes? Search for “Marketing Cloudcast” in the iTunes Store and hit Subscribe. Tweet @youngheike with marketing questions or topics you’d like to see covered next on the Marketing Cloudcast.
https://medium.com/marketing-cloudcast/social-media-is-growing-up-what-does-that-mean-for-your-marketing-72e23fa189e7
[]
2016-12-29 19:14:02.529000+00:00
['Marketing', 'Social Media Marketing']