Poker at the Penalty Spot | Games are microcosms of life. They involve struggle, camaraderie, disappointment, and ecstasy. As such, games offer the inquisitive spectator a chance to reflect on social behaviour from afar. As both an aficionado of poker — a deeply strategic game — and a soccer fanatic, I’ve noticed a way in which wisdom from the former game can benefit the latter.
Soccer’s World Cup, the paramount event of Earth’s most popular sport, is the biggest game of all. The knockout stage of the tournament introduces the penalty shootout, a tie-breaking procedure in which players are given a free shot on goal from twelve yards, with only the goalkeeper to beat.
Penalties, as they’re known colloquially, are supposed to be advantageous for the shooter, who has a large goal to aim at. A well-taken penalty is nearly impossible to save — a keeper who does so becomes an instant hero — and around 3 in 4 penalties are scored, on average. The price of that advantage, however, is added pressure. The shooter is expected to score.
Reach for the sky (unless there’s a snake in your soccer boot)
The Economist magazine analyzed World Cup penalties spanning a forty-year period in search of patterns that might suggest an optimal strategy.
Figure 1. Photo credit: economist.com
Based on those data, it turns out that
Goalkeepers find high balls the hardest to deal with — just 3% of penalties aimed halfway up the goal or more are saved. Yet there is a tendency for these shots to miss the target: 18% of high shots do so, as opposed to 5% of low shots. Overall, though, allowing for misses and saves, high shots are successful 79% of the time compared with 72% for low shots.
High shots are more likely to score, yet Figure 1 shows that most penalties were aimed at the lower half of the goal. What explains the paradox?
One answer is that unlike rebounds from penalties taken during shootouts, which are immediately dead, rebounds from penalties attempted during regulation play are live balls. This means that someone taking a penalty kick during regulation play might shoot low to increase the chance of tapping in the rebound if the keeper saves the original attempt.
That explanation doesn’t apply to penalty shootouts, though, which account for the bulk of penalties taken over the years. There’s another reason for the discrepancy, and it gives insight into how people make decisions when the stakes are high. It concerns the fact that we human beings are emotional creatures who, for the most part, care about others’ opinions of us. ‘Ego bias’ affects our decision-making.
To spare your blushes
If scoring from the penalty spot is ostensibly simple, merely hitting the target should be even easier. Knowing this, nothing is more humbling for a penalty-taker than shooting high or wide. Even having your penalty saved is a better outcome, psychologically, than missing altogether. Shooters thus have some incentive to be more conservative in where they aim, because low shots are less likely to miss their mark.
Of course, that line of reasoning makes for a poor strategy. A penalty-taking robot would pay no mind to emotional considerations of potential humiliation. Focused purely on maximizing the probability of scoring, it would aim for the top corner every time. The benefit gained by straining the goalkeeper easily compensates for the increased chance of blasting the ball over the bar — the data show this strategy is nearly 10 percent more successful. A missed shot and a saved shot count the same on the scoreboard, after all. The only difference is in the shooter’s head.
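A quick check of that "nearly 10 percent" figure, using only the rates quoted from The Economist above (and assuming the miss and save percentages are shares of all attempts, which is how the decomposition adds up):

```python
# Figures quoted from The Economist's data (shares of all attempts)
HIGH_MISS, HIGH_SAVE = 0.18, 0.03   # high shots: off target, saved
LOW_SUCCESS = 0.72                  # low shots: overall success rate

high_success = 1 - HIGH_MISS - HIGH_SAVE              # 0.79, as in the article
relative_gain = (high_success - LOW_SUCCESS) / LOW_SUCCESS

print(round(high_success, 2))       # 0.79
print(f"{relative_gain:.1%}")       # 9.7% -- i.e. "nearly 10 percent"
```

So the robot's aim-high policy scores about 9.7% more often in relative terms, even though the gap is only 7 percentage points.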
Yet that difference makes all the difference, because soccer is played by humans whose emotions affect their physiology. Pressure weighs even on the best players, who miss the odd penalty despite scoring goals ruthlessly, in far more difficult situations, during open play. (Lionel Messi and Cristiano Ronaldo have each missed a penalty kick in this World Cup.)
You can often tell if a player is going to miss a penalty by watching their approach to the ball. A fast run-up immediately after the referee’s whistle — see Asamoah Gyan’s penalty in the 2010 World Cup quarterfinals (with the weight of a billion Africans’ hopes squarely on his shoulders) — suggests a subconscious desire to get the stressful situation over with. That player lacks composure.
A good penalty-taker decides where to aim first, takes a moment to visualize where the ball is going to go, then focuses all the way through the shooting motion to make sure the foot strikes the ball properly. As with any activity performed at a high level, even aspects as straightforward as an unencumbered shot at goal demand serious forethought and practice. The higher the stakes, the more nerves become a factor.
It’s all in your head
The laws of nature forbid us from living through any single moment more than once. We’re not allowed to rewind and see what could have been. This reality predisposes us to hindsight bias, where we overvalue outcomes and undervalue the actions that preceded the result.
Counterfactuals are instructive for allowing us to contemplate superior alternatives. It’s a common exercise among skilled poker players, who understand that the decision-making process is more important than any one outcome, which is subject to factors out of one’s control such as statistical variance.
Most people have a natural aversion to risk because risk sometimes leads to loss, which can be embarrassing. But losing doesn’t necessarily mean you made the wrong decision. You might have just gotten unlucky. Moreover, shame is a function of pride; it’s nothing more than a state of mind. A fear of shame discourages you from taking worthwhile risks, and it’s a bias that leads to poor decision-making. Conquering that inhibition removes a major obstacle from the road to victory.
Instead, think like a poker player. Remember that over the long run, if you factor in what you could have gained, a missed opportunity to win is the same as an equivalent loss. Discard your pride, and make the best decision. You’ll thank yourself later.
Most penalty-takers in soccer focus too much on their dread of missing the net, and think too little of the reward that awaits them, more often than not, by aiming at the top corner anyway. Taking intelligent risks in other parts of life is the same. We are always forced to live out our actual misfortunes, but we’re never confronted by the successes we forsook. If we were, we wouldn’t overlook them as often as we do, and we’d be bolder and better off for it. Those missed successes exist only in some parallel universe where a more logical and courageous version of us lives. Luckily, we each have the opportunity to make choices anew every single day. All it takes is determination. As Tottenham Hotspur’s motto goes, “To dare is to do.” | https://medium.com/age-of-awareness/poker-at-the-penalty-spot-af3a2a2f9359 | ['Brad Stollery'] | 2018-07-06 16:07:55.766000+00:00 | ['Motivation', 'Poker', 'World Cup', 'Economics', 'Soccer'] |
MVC in Flutter | The MVC design pattern in Flutter
In late October, back in 2018, I offered the package, mvc_pattern, to supply an MVC design pattern approach to your next Flutter app. It was a hit.
Like many things in life, it’s only gotten better with age. It has been improved by integrating further into, and even mirroring, Flutter, while still retaining the spirit of the MVC design pattern. Of course, the code is open-source for all to see. Eventually, a framework package called mvc_application was published that assists in the development of Flutter apps, using the package mvc_pattern as its core.
As time went on, twelve free articles on the subject were published on medium.com. They and more are all presented here now for your convenience, and because this article serves as a supplementary resource for yet another article, Your Next MVC Flutter Project. All the articles are collected in one place for readers to ‘get up to speed’ as MVC returns to the ‘mainstream.’
Model-View-Controller
The articles are in the order I would read them, but the world’s your oyster: read whatever catches your eye. Do realize, as with any software, a lot has changed in the code, and some of the older articles listed at the end are beginning to show their age, but the concepts they convey still hold up.
| https://medium.com/follow-flutter/mvc-in-flutter-1d26b86328ea | ['Greg Perry'] | 2020-11-09 22:44:16.205000+00:00 | ['Programming', 'Flutter', 'Android App Development', 'Mobile App Development', 'iOS App Development'] |
The Technology Fighting Coronavirus | As I’m sure many of you are aware, the global pandemic, COVID-19, known as the Coronavirus, has spread rapidly and many of you are probably at home in quarantine reading this now. Initially I chose not to prepare a response, given how this issue has taken over every media outlet, YouTube channel, and Facebook page. However, after a little research, it became clear that technology is being used as a very good tool. I couldn’t pass up an opportunity to recognize the men, women, and technology that is working to solve this health crisis.
Like I’ve said, technology is a tool, and its use as a very good tool couldn’t be made more clear in regard to fighting this virus. There are three primary objectives in this fight:
1. We must prevent the spread of the virus to those who are healthy.
2. We must treat those who are ill.
3. We must develop a cure for all who may contract this virus.
Technology is being used within all three objectives.
Let’s start with preventing the virus. The first step to preventing the spread of a virus is to limit individual contact. This is of course where social distancing and quarantines are used, and technology is helping make these efforts much more effective. To limit social contact more effectively, officials need to learn where cases are arising, and technology is helping there too. A Boston-based start-up, BioBot Analytics, is installing technology in sewer systems. These systems work to detect the virus and, using data analysis, determine where cases of COVID-19 are arising, how many there are, and how they are spreading (Perry, “Startups Unveil…”).
On a slightly more pleasant topic away from the sewage, researchers at the University of Southern California are working to develop an app that could determine who needs to stay home and who is probably safe to go to work or shop (Polakovic, “USC experts..”). The researchers are attempting to find a balance between preventing the spread of the virus and the economic impact we’re already seeing. The app uses anonymous data from positive COVID-19 tests to determine if an individual has been exposed to the Coronavirus and then alerts them with a suggestion to stay home and quarantine.
Of course, a review of the technology being used to prevent the spread wouldn’t be complete without mentioning the countless video platforms that allow us to connect with work, family, friends, churches, and schools. In many ways, although we are stuck in isolation, these technologies have allowed us to remain as connected as ever.
But technology can also be used to treat those who are ill. The first step is diagnosis. In order to prevent further contamination, telemedicine is growing in popularity, and developers are working to increase both the accuracy of, and the level of care provided by, teledocs. An Israeli company is working to develop apps and programs that can detect heart rate, heart rate variability, respiration, and oxygen saturation using only the cameras on a smartphone (Perry, “Startups Unveil…”). Another company is using simple audio recordings to detect the sounds within the lungs, a vital indicator of possible infection (Perry, “Startups Unveil…”).
Unfortunately, as of yet, there is no known cure for COVID-19, and therefore there is little healthcare professionals can do once a diagnosis is made. In the most severe cases, ventilators can, however, mean the difference between life and death. But as cases rise, equipment is further limited. Companies around the world are quickly building up factories to build more ventilators, but this may not be enough. Some individuals have discovered that 3D printers can make vital pieces for ventilators. Hobby groups with at-home 3D printers in Spain have even produced a ventilator prototype that is being tested by healthcare professionals and the scientific community. If their prototype succeeds, 3D printers across the world can print important components using online templates.
Of course, through this entire crisis, a cure is being developed. Researchers haven’t yet found it, but they are using technology to speed their progress. Doctors and scientists are producing virtual simulations of the virus and possible treatments. The simulations are being run through supercomputers 100 times faster than those used 10 years ago. They are testing current drugs and treatments against the virus, virtually. Big data, artificial intelligence, and machine learning are providing more capabilities to scientists than ever before (Polakovic, “USC experts..”). Also, these computers can work 24/7. A team of researchers has reported that artificial intelligence has helped them find 500 possible antibodies that could fight the Coronavirus (“Technology against…”).
Now, I’ve only scratched the surface of the uses and power of technology in the fight against COVID-19. For more information, please see the resources referenced in the Works Cited section below.
It’s clear that technology is an extremely important and powerful tool in this endeavor. As so many experts have reported, we will win this fight; it only takes time. When we do conquer this virus, it will be by the tool of technology. Well, that and washing your hands.
So stay confident, stay inside, and stay connected via the technology at your fingertips. | https://medium.com/tech-is-a-tool/the-technology-fighting-coronavirus-baa0b968625 | ['Benjamin Rhodes'] | 2020-04-23 13:42:49.580000+00:00 | ['Covid 19', 'Virus', 'Quarantine', 'Technology', 'Coronavirus'] |
Your Ultimate Data Mining & Machine Learning Cheat Sheet | Feature Importance
Feature Importance is the process of finding the features that matter most to a target. Through PCA, the feature that contains the most information can be found, but feature importance concerns a feature’s impact on the target. A change in an ‘important’ feature will have a large effect on the y-variable, whereas a change in an ‘unimportant’ feature will have little to no effect on the y-variable.
Permutation Importance is a method to evaluate how important a feature is. A single model is trained; then, one column at a time, that column’s values are randomly shuffled and the model is re-scored. The resulting decrease in model accuracy, caused by breaking that column’s relationship with the target, represents how important the column is to the model’s predictive power. The eli5 library is used for Permutation Importance.
import eli5
from eli5.sklearn import PermutationImportance

# `model` is any fitted scikit-learn estimator (e.g. a RandomForest)
perm = PermutationImportance(model, random_state=1).fit(X, y)
eli5.show_weights(perm, feature_names=X.columns.tolist())
In the data that this Permutation Importance model was trained on, the column lat has the largest impact on the target variable (in this case the house price). Permutation Importance is the best method to use when deciding which features to remove (correlated or redundant features which actually confuse the model, marked by negative permutation importance values) for the best predictive performance.
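To make the mechanism concrete, here is a from-scratch sketch of permutation importance on a toy hand-rolled ‘model’ (all data and names here are invented for illustration): shuffling a column the model relies on inflates its error, while shuffling a column it ignores changes nothing.

```python
import random

# Toy data: the target depends only on column 0, never on column 1
X = [[i / 10, random.random()] for i in range(100)]
y = [2 * row[0] for row in X]

def predict(row):                 # a perfectly "trained" model: y = 2 * x0
    return 2 * row[0]

def mse(X, y):
    return sum((predict(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, col, seed=0):
    """Shuffle one column and report the increase in error it causes."""
    shuffled = [row[:] for row in X]
    values = [row[col] for row in shuffled]
    random.Random(seed).shuffle(values)
    for row, v in zip(shuffled, values):
        row[col] = v
    return mse(shuffled, y) - mse(X, y)

print(permutation_importance(X, y, 0) > 0)    # True: column 0 matters
print(permutation_importance(X, y, 1) == 0)   # True: column 1 is ignored
```

This is exactly the signal eli5 reports, averaged over several shuffles and normalized against the baseline score.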
SHAP is another method of evaluating feature importance, borrowing from cooperative game theory to estimate how much value each ‘player’ (here, each feature) contributes to the outcome. Unlike permutation importance, SHapley Additive exPlanations use a more formulaic, calculation-based method of evaluating feature importance. The TreeExplainer used below requires a tree-based model (Decision Tree, Random Forest) and accommodates both regression and classification.
import shap

# Explain a fitted tree-based model, then plot mean |SHAP value| per feature
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")
PD(P) Plots, or partial dependence plots, are a staple in data mining and analysis, showing how certain values of one feature influence a change in the target variable. Imports required include pdpbox for the dependence plots and matplotlib to display the plots.
from pdpbox import pdp, info_plots
import matplotlib.pyplot as plt
Isolated PDPs: the following code displays the partial dependence plot, where feat_name is the feature within X that will be isolated and compared to the target variable. The second line of code saves the data, whereas the third constructs the canvas to display the plot.
feat_name = 'sqft_living'
pdp_dist = pdp.pdp_isolate(model=model,
dataset=X,
model_features=X.columns,
feature=feat_name)
pdp.pdp_plot(pdp_dist, feat_name)
plt.show()
The partial dependence plot shows the effect of certain values and changes in the number of square feet of living space on the price of a house. Shaded areas represent confidence intervals.
Contour PDPs: Partial dependence plots can also take the form of contour plots, which compare not one isolated variable but the relationship between two isolated variables. The two features that are to be compared are stored in a variable compared_features .
compared_features = ['sqft_living', 'grade']

inter = pdp.pdp_interact(model=model,
                         dataset=X,
                         model_features=X.columns,
                         features=compared_features)
pdp.pdp_interact_plot(pdp_interact_out=inter,
                      feature_names=compared_features,
                      plot_type='contour')
plt.show()
The relationship between the two features shows the corresponding price when only considering these two features. Partial dependence plots are chock-full of data analysis and findings, but be conscious of large confidence intervals. | https://medium.com/analytics-vidhya/your-ultimate-data-mining-machine-learning-cheat-sheet-9fce3fa16 | ['Andre Ye'] | 2020-06-20 15:38:50.032000+00:00 | ['Machine Learning', 'Data Science', 'AI', 'Data Analysis', 'Statistics'] |
Some musings on cryptocurrencies, ICO and blockchain | I have been thinking a lot about cryptocurrencies, ICO and blockchain these few weeks. It’s a bit hard not to when the prices of bitcoins, ether and the top few cryptocurrencies have taken the market on a really wild ride. It’s also a bit hard not to do so after I received 2 different requests for help to launch ICO for 2 startups in 2 wildly different industries.
But first, my thoughts about the crazy price fluctuations of the more popular cryptocurrencies.
Mrs Watanabe is also into bitcoins!
I was first introduced to the world of cryptocurrencies by the co-founders of TenX more than 2 years ago when they first arrived in Singapore. The way they explained it to me, cryptocurrencies represent one of the fastest, cheapest and safest way to move money around. And with the wallet they are building, users will be able to use their cryptocurrencies in the real world at any credit-card accepting stores. The cryptocurrencies in their TenX wallets will only be converted to fiat (real-world) currencies when they make a purchase with their TenX wallets. From that point on, I was hooked. Eventually, I ended up being an early investor of TenX and have stayed on as their advisor since that first meeting. In these past 2 years, the learning curve for me, a non-crypto luddite, has been steep. But the lessons learnt from observing the TenX team working has convinced me that cryptocurrency is REAL. It’s not all hype. And it’s definitely not a fraud as some bankers would like us to believe.
The true value of cryptocurrencies is really in the ease of using them to move money around. I can now send a cryptocurrency across national (and fiat currency) borders to anyone without incurring FX costs or worrying about whether the person receiving my cryptocurrency can actually use it where he/she is living. In that sense, cryptocurrency is very similar to the normal fiat currencies that we are familiar with. I have SGD which I can use in Singapore. Or I can change it to another currency if I need to use my SGD, say, in Japan. One of the most important values of the SGD is that it allows some kind of value to be transferred from me to another person (or company). The rate of exchange of SGD against some other currency really depends on the demand for SGD versus that particular currency. The demand generally results from how much SGD is being used to purchase foreign goods and services versus how much foreigners need to get their hands on SGD to purchase Singapore goods and services. If the demand for such exchanges is down (like during a worldwide economic recession, or when Singapore’s goods and services are no longer attractive), the value of the SGD goes down.
So, it’s kind of mind-boggling for me to see the wild price fluctuations of bitcoins, ethers, XRPs and a whole bunch of cryptocurrencies. Sure, thanks to the crazy run-up of their prices, the total market capitalisation of all the cryptocurrencies is now around USD500–700 billion. I personally think this number is totally meaningless. More important for me is how much these cryptocurrencies are actually being USED as a method of money transfer, as payments. If they are just sitting around in the exchanges, wallets or some digital vaults, their utilities are not being optimised.
The true value of cryptocurrencies really arises from their usage. Trading in them is just one such usage. Moving them around as a way to pay or be paid is, IMHO, a much more useful one. If there are not that many cryptocurrencies in circulation, it just means the real demand to use them is not really that high yet. The demand is only to own them as a way to generate returns on your investments. I believe in time, more use cases will be found for cryptocurrencies. But for now, we are still in the early days of cryptocurrencies. Massive and wide-spread adoption is still a couple of years away at least. Until then, the current valuation is not just speculative, it is totally insane. And if Mrs. Watanabe is also buying bitcoins, you should be afraid. Be very afraid.
But maybe there is already a good use case!?
That brings me to my second musing. Being involved with TenX from the beginning has forced me to bone up on cryptocurrency, blockchain and everything crypto. Then, in the beginning of 2017, the founders came to me with something which is even more radical, “How about we do an ICO?” I looked back at them blankly and went, “Huh?”
Needless to say, they taught me aplenty, and I had to research a whole lot more on my own to catch up on Initial Coin Offerings, or ICOs. Personally, I don’t really like the terminology as it brings to mind IPO or Initial Public Offering, which is when companies issue shares to the public. IPO is a regulated process for companies to raise funds from the public. In return for the public handing over their money, companies issue shares to the public. I also don’t really like the “Coin” in ICO as there is this notion you are somehow issuing a currency, which, needless to say, freaks out a lot of central bankers and financial regulators. I prefer to call them Initial Token Offerings. But, that’s just me the lawyer speaking. Anyhow, we will just refer to them as ICO for this post.
A token offered by a company during an ICO, in plain English, is like a computer game credit or a token you use in a game park. You can also draw parallels to loyalty rewards points you get from airlines, hotels, credit cards or even the credits you chalk up from using ride-sharing, home-sharing or food delivery apps which you can exchange for more of their services and products. These ICO tokens generally give you access to a product or service which the company has launched or is planning to launch in due course according to the plans they have published in their “White Paper” (more about white paper later). The tokens do not confer in its holder any equities in or any other claims on the company. Nor does it represent a form of debt the company owes to the token holder. The token, in other words, is no more than just an advance right to use the product or service the company will at some point offer. To make it more attractive at the ICO to buy these tokens, they are usually offered with a discount on the eventual pricing of the launched product or service. And to make sure buyers of tokens will actually use the tokens rather than just hold on to them for purely speculative reasons, token holders are given a small incentive (like 0.1% of every transaction they make with their tokens) when they actually used the token to buy or sell something.
Two other characteristics make them slightly different from, say a token for a game arcade or credits from using a ride-sharing app:
there is usually a reward (in the form of more tokens) which will be distributed to token holders based on a certain percentage of the total tokens used in the system for a particular period of time. E.g. the reward given out by TenX to its token holders is 0.5% of the entire payment volume on the TenX system on a monthly basis.
the tokens are usually denominated against an established cryptocurrency, like ether or bitcoin, and comply with the more established protocols so that they are easily traded on cryptocurrency exchanges. Thus giving them a secondary market for the tokens. And for some of the tokens that have been issued, their secondary value has gone up quite a bit together with bitcoins and ether.
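A back-of-the-envelope sketch of the reward mechanic in the first point may help. Every number here is hypothetical except the 0.5%-of-payment-volume rate, which comes from the TenX example above:

```python
# Hypothetical figures (only the 0.5% rate comes from the TenX example)
monthly_payment_volume = 100_000_000     # say, $100M spent through the system
reward_rate = 0.005                      # 0.5% of total payment volume
holder_share_of_tokens = 0.01            # a holder owning 1% of all tokens

reward_pool = monthly_payment_volume * reward_rate       # pool paid out that month
holder_reward = reward_pool * holder_share_of_tokens     # the holder's cut

print(round(reward_pool), round(holder_reward))   # 500000 5000
```

The point of the mechanic is visible even in toy numbers: the reward grows with actual usage of the system, so holders are nudged to spend their tokens rather than hoard them.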
There are people who have compared White Papers to the prospectuses published by companies doing IPOs. Again, I don’t really think it is a like-for-like comparison, as a prospectus for an IPO is subject to a lot more regulatory oversight and restrictions. The statements made in a prospectus also carry with them certain legal responsibilities. The White Paper carries with it far fewer legal responsibilities. It merely lays out the company’s product or services, the team behind it, their advisors, the market size, how the tokens will be used, how much of the money paid for the tokens at the ICO will be used to fund the development of the product, etc. More often than not, there will be plenty of disclaimers and waivers in the White Paper to warn readers that they are not making any warranties or representations of any kind. Nor does it confer any rights in the companies, etc., etc. In other words, they are saying, WE ARE NOT GUARANTEEING ANYTHING HERE! Just giving you a chance for an advance booking of our products or services.
If that’s the case, why are startup teams still so keen on ICO? And why are there still so many people lining up to buy tokens at the next hot ICO? For me, thanks to my involvement as an advisor for TenX’s ICO (I am listed as an advisor on the last page — https://www.tenx.tech/whitepaper/tenx_whitepaper_final.pdf), I have been given a front row seat to watch the drama unfold in the ICO world.
Interestingly for me, over the last few weeks, I have been getting calls to advise on other ICOs. One of the more recent ones has been most interesting as the startup is in the hardware manufacturing and distribution business. There have been plenty of ICOs last year, but there were only a couple of hardware-related ICO. And they were for hardware used for mining cryptocurrency or some business related to the crypto-world. But this particular startup team that approached me for help has nothing to do with the crypto world. My initial thoughts are that there is really nothing stopping us from tokenising the usage of the hardware. So, let’s say, 10 tokens will allow the holder to use the hardware for a fixed period of time. We issue the tokens now, you buy them and when the hardware hits the market, you can use your tokens to pay for a fixed time usage of the hardware. Your token does not entitle you to any ownership of the hardware (it’s quite expensive, so we don’t think people would really want to pay for the hardware upfront and a time-share model is better).
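That time-share idea can be sketched as a toy usage-token ledger (every name and rate here is hypothetical, purely to show how a usage token differs from equity or debt):

```python
TOKENS_PER_HOUR = 10   # hypothetical rate: 10 tokens buy one hour of use

class UsageLedger:
    """Tracks token balances that can be redeemed for hardware time."""
    def __init__(self):
        self.balances = {}

    def issue(self, holder, tokens):
        # e.g. tokens bought at the ICO
        self.balances[holder] = self.balances.get(holder, 0) + tokens

    def redeem(self, holder, hours):
        # Exchange tokens for usage time; confers no equity and no debt claim
        cost = hours * TOKENS_PER_HOUR
        if self.balances.get(holder, 0) < cost:
            raise ValueError("insufficient tokens")
        self.balances[holder] -= cost
        return hours

ledger = UsageLedger()
ledger.issue("alice", 50)          # bought 50 tokens at the ICO
ledger.redeem("alice", 3)          # 3 hours of use costs 30 tokens
print(ledger.balances["alice"])    # 20
```

Redeeming simply burns tokens in exchange for hardware time; nothing about the company’s ownership changes hands, which is exactly the distinction between a utility token and a share.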
Which brings me to my earlier point that maybe this is the use case we have been looking for. With more than USD600 billion of cryptocurrencies held in millions of crypto-wallets or with cryptocurrency exchanges, that’s a whole lot of venture capital that can be deployed to fund a whole lot of innovative startups. In the case of TenX, they managed to raise USD80 million during their ICO. Buyers of the PAY tokens can only pay for them with other cryptocurrencies like bitcoins and ether. There are already a few cryptocurrencies that are offering “ICO-in-a-Box” for startups and “ICO pre-sale” for their exchange users. So, rather than just have your cryptocurrencies sitting in a digital vault and praying that they don’t go on a roller-coaster ride any time soon, cryptocurrency holders now have a new avenue to continue their investment forays by simply turning in their cryptocurrencies in exchange for a new utility token for some product or service. They will not only buy into a new product or service, they will be buying into the growth of a new startup by getting an incentive reward if more people use the new product or service. ICO is in a way radically changing the way startups get funded these days. ICO is also a great way for more investors to get in on the startup investing game.
Looking at the many ICOs successfully completed in 2017, we can be sure there will be plenty of exciting new apps, online services, games, contents and even hardware (if the hardware startup manages to pull off their ICO) that can be exchanged with new tokens in 2018 and beyond.
Yes, there will be scams aplenty too. There have been plenty of research showing that only a very small percentage of tokens issued via ICOs in 2017 are actually in use. Most of the startups still do not have a product despite raising tens of millions of dollars from their ICOs. But that is really part and parcel of being an early adopter. It is risky. But if you are already holding onto some cryptocurrencies, you probably have a pretty healthy risk appetite.
When (not if) it all blows up, what’s left?
Billionaire investor Warren Buffett said on CNBC on Jan 10, 2018,
“In terms of cryptocurrencies, generally, I can say with almost certainty that they will come to a bad ending…” (https://www.cnbc.com/2018/01/10/buffett-says-cyrptocurrencies-will-almost-certainly-end-badly.html)
The man is right. Like I said at the beginning of my post, I think the current valuation of cryptocurrencies is insane. ICOs aside, and as a funding source for a handful of crypto-wallets (e.g. TenX), there are simply not enough use cases out there. No use cases, no real demand. So, it’s just all hype. Take away the hype, and then we will see the true value of a cryptocurrency.
The Managing Director of Monetary Authority of Singapore, Ravi Menon, made a very interesting comment recently,
“I do hope when the fever has gone away, when the crash has happened, it will not undermine the much deeper, and more meaningful technology associated with digital currencies and blockchain,” (https://www.channelnewsasia.com/news/singapore/mas-chief-ravi-menon-hopes-cryptocurrency-tech-will-survive-9861844)
Now, Ravi is no crypto-evangelist. Nor is he your typical conservative central banker that tries to ban every new innovation in fintech. This is a man who has single-handedly put Singapore on the global fintech map. He is also a well-respected central banker who has just been named Central Banker of the Year for the Asia Pacific region by The Banker magazine https://www.channelnewsasia.com/news/singapore/mas-managing-director-ravi-menon-named-asia-pacific-central-9834302. This is as good a prediction on the direction that cryptocurrency prices will go in the near future. But more interestingly, it is a great stamp of approval for the underlying technologies of cryptocurrencies and blockchain. In fact, the MAS, under Mr. Menon’s leadership, has gone so far as to conduct their own blockchain research and published a white paper on it http://www.mas.gov.sg/~/media/ProjectUbin/Project%20Ubin%20%20SGD%20on%20Distributed%20Ledger.pdf.
So, it has been a crazy few months for those of us watching the roller coaster rides of bitcoin, ether, XRP and a whole bunch of publicly traded cryptocurrencies. All the headlines are about their crazy price fluctuations and the latest bitcoin billionaires. But the insanity we are witnessing now is no different from 2000, when the dotcom bubble burst. For those of us who lived through it, we have seen this kind of insanity before. We should be able to cut through the b.s. and focus on the real technology. Remember, even after the excesses of 2000, a few tech startups did survive and thrive. My previous employer, eBay, is one of them. The company is still doing well because it got the fundamentals right: a real business built by good people using awesome technology. I would argue that blockchain and other forms of distributed ledgers, which bitcoin, Ethereum, Ripple and even TenX are built on, are a sound technological breakthrough.
Payments are only the most obvious use case for blockchain technology. There are so many other use cases out there, like trade settlement, contract execution, and even cracking the toughest online challenge of all: proving you are who you claim to be, i.e. digital identity. With the backlash against Facebook and its monopoly on our attention (and, more insidiously, its hold on our digital identity), maybe a distributed ledger system is the way to build an individual identity that we own, control and can take along with us across different applications and platforms. Instead of one single company owning our identity online (which is what has already happened), no one but ourselves would own that identity, and we could verify that identity, and anything we do with it, across millions of ledgers with no fear of centralised control by one person, one company or one government.
This next generation of innovations with blockchain will hopefully take us beyond cryptocurrencies and bring back the glory days of the early Internet, when everything was open source.
Time will tell.

(Source: https://medium.com/swlh/some-musings-on-cryptocurrencies-ico-and-blockchain-db6b632fd337, by Steven Liew, 2018-02-10. Tags: ICO, Bitcoin, Blockchain, Startup, Cryptocurrency)
Why Read? Some Reasons to Love it Again
5 lessons from a life-long passion for reading
Photo by Johnny McClung on Unsplash
Pick up a book. Read.
Such a simple act, reading. Since childhood, we were taught to turn the page, voice the words, scan the story, predict the end. In primary school, we delighted in tales told and retold: picture books, Saturday morning Coyote and Roadrunner adventures, family stories, saints’ lives, Sunday comics, Dr. Seuss and Silverstein. Those words sprang to Technicolor life as we opened our eyes and read.
In those days, life and fiction blurred. I woke in my quilted bed, conversed long and meaningfully with my dolls and animals, frightened my little sister with the three-horned demon who lurked beneath the stairwell, scanned the a.m. comics in my dad’s discarded Tribune, ran to play Escape in the Woods with my best friend, returned for goulash and my mother’s dark German songs of love, listened to bedtime books, and read by flashlight until sleep overtook my dreaming eyes.
And then I grew. Life intruded more and more. I tried to hide in books. Books offered a world I understood and longed for, a land where characters were certain and plots began and ended in tidy loops. An open book swung open a door to a waiting universe — one that multiplied my world from one to a multiplicity of many.
I dove in.
While I read, I longed. I longed for joy, adventure, victory, and love. I dreamed of justice, journeys, and jealous abandon. I think I sensed, quite early, that life was one long spell of longing, that I would always be seeking, that humanity was on one continual hunt for something outside of itself, and always would be. I began to understand Dostoyevsky’s words: “The mystery of human existence lies not in just staying alive, but in finding something to live for.” Purpose, however, eluded me.
And so I read. It stilled my hunger. It calmed my nerves. It gnawed against my adolescent angst.
In those years, I summoned worlds. Paper lives leapt into reality. Characters breathed and sighed in flesh and sin and sainthood. I felt grime beneath my fingernails while planning sweet vengeance with the Count of Monte Cristo. I ached with Heathcliff as his bitterly sweet Catherine slipped seductively away. I rolled and rumpled the grass blades as Whitman sharpened his single pencil. When Anna K let her body drop beneath the black engine, I felt the steel wheels carve trenches on my back.
There was so much to know. The Great Unknown stretched before me as I read. It was despairing to note that the more I read, the more I sensed the stretch of Unknowing unravel like a growing gulch of Ignorance.
“The fundamental problem of life,” psychologist Jordan Peterson said, “is the overwhelming complexity of being.” (“Three Forms of Meaning and the Management of Complexity” published in The Psychology of Meaning)
Good books addressed this existential issue. Great books asked questions that left life’s complexity unresolved.
A chronology of reading
When I consider my reading life, I find the underlying doubt and fear and love that engulfs and entwines us all.
As a child, I read for the joy a vivid scene could evoke, for the touch of my mother’s arm against me as she read aloud, for the castles that imagination built.
As a teen, I read to escape the gray world that confused my black and white soul, to find a love I felt I lacked, to discover galaxies that sprawled outside my reach.
As a college student, I sought out data, dove into clear pools of new-to-me information; I gathered knowledge like hoarding stray kittens.
As a young mother, I discovered developmental theories and expert reassurance. I measured maternal impulses against social and scientific norms. I attempted to reconcile my role as reader with that of parent and guide.
As a grad student, I became an academic. I learned to judge with critical eye and disdainful glance, to pepper my understanding with objective control, and to scan research like an addict.
This compilation of readerly tricks and turns is my own, yet it reflects the universality of the reading realms we all enter when we open a book, read a page, collect our thoughts, and indulge in our very human need to read.
We all were readers, once. Perhaps we should pick up and read again.
Lessons from the love of reading
1. Reading opens other worlds
Reading is an act of faith, and a sticky web of treachery. Words create and destroy, build up and tear down. Language lifts us to the pinnacle of bright hope, and dashes us down to the bleakness of Hell. Books enlighten and damage, sometimes on the same page.
In his book, In Bed with the Word: Reading, Spirituality, and Cultural Politics, author Daniel Coleman points out the potential for books to inflict doubt, uncertainty, and pain; reading that restructures the mind, heart and soul, that challenges what we thought we knew, that reveals the elegance and ugliness of the Other, that forces us to see with eyes unknown, these are books that allow us to acknowledge: “You are changed.”
2. Reading changes us as we ourselves change
Books change us, but we also change books. With every lived experience, we alter our lives as readers. With every conversation, every action, every moment of love and betrayal, each connection and relationship, we bring a new grain to our storehouse of reading. I was one reader when I first opened Where the Sidewalk Ends at the age of eight; four children, one divorce, several academic degrees and 40 years later, I am a different reader of Silverstein’s “I Cannot Go To School Today” (said little Peggy Ann McKay). Do I mean better reader? Sometimes, though not necessarily.
Reading is colored by life’s indelible Sharpies: age, time, children, career, academia, heartbreak, loss, faith, death. What I did not know in grade 3, I now have some knowledge of. The ways of the world have changed my perception — for both good and bad.
3. Reading forces us to face ambiguity
We all inhabit a world of uncertainty. Like all of us, I struggle with it all: discomfort, confusion, life’s unrelenting ambiguities. Like Alexander Pope’s hierarchy of humanity, I am ever caught betwixt the angels and the unspeaking stones. I wish to soar in lofty realms, yet I sink toward the rocky earth. I remain indelibly mid-comprehension. Lumps of rocks do not sprout wings.
I am steeped in literary esoterica. Like all readers everywhere who attain a certain measure of mastery (in the U.S., probably anyone who reads a book beyond grade 12), I have read my way through dense, obscure, and obsolete literatures, some on purpose, some by accident. Because I can passably read German, I periodically torture myself with that as well. (Echoes of Herr Professor Heine boomingly narrating Das Nibelungenlied in Middle High German in counterpoint to my faltering German-major portrayal of Kriemhild remain embedded in my dark subconscious to this very hour: “I intend to remain a virgin: I will not let my life be ruined through love of a man!”) Like the unfortunate classmate who carried a copy of Mein Kampf to German 301 one day, we all suffer from errors of judgment — though some more forgivable than others.
A young man — tall, strident, authoritative — pushed a copy of William Burroughs’s Naked Lunch into my hands one early fall afternoon on the quad.
“It’s pure genius,” he emphasized. I demurred until I opened the book and tried to read the damn thing. Incomprehensible could be construed as a compliment; profane may be a better choice. Some things I still just don’t get.
I felt the same attractive repugnance for “Howl,” and Eliot’s Prufrock — as well as Lady Chatterley’s unmannerly lover. I didn’t think much of Hamlet the Ambivalent, or that vicious, gap-toothed Wife of Bath.
And what the hell was Emma Bovary thinking? I asked myself before becoming pregnant my senior year of college and marrying my own mistake.
4. Reading transforms itself and us
Reading is more action than presence, more verb than noun. Once acquired, it remains. Reading improves, decreases, waxes and wanes. It follows close behind, a lone moon trailing our life.
Like Optimus Prime, reading transforms itself. It morphs and moves. Chimeric, it shifts from blue to green, camouflaging and concealing, and bursting forth in a rainbow of intent. Reading is muted and demure, huddling in a cavern of pillows with a cup of hot, honeyed tea. It is raging and red, burning with consumptive fire. It falls asleep curled in your arms, only to awaken, yowling with hunger. Feed me, it demands.
Reading is much like the sea: a roiling, shadowed, submersive pool that yanks us under then spits us out. We ride its moods like careful boatmen wary of the storm. Giant whales surge beneath. Overhead, an ancient albatross haunts the sky with somber song.
That albatross never abandons nor deserts. It remains close by, in pleasure and pain, in busyness and boredom. Enraged, we may strike or shoot it, hang it from the sail or about our neck. Unerring phoenix: it rises from the grave, a befeathered Lazarus. Her song haunts: she has seen both the living and the dead.
5. Reading shadows our growth — and encourages it
Like a child, reading grows. It ages as I do. I grant it the gift of my finite time, feed it my attention, grant it periodic consideration, and it becomes my playmate, my companion, and finally, my peer.
“[Reading] matters,” Harold Bloom firmly stated in How to Read and Why, “if individuals are to retain any capacity to form their own judgments and opinions, that they continue to read for themselves.”
Why Read?
Bloom asks the fundamental question: why read? This query is answered by the self and soul.
I read to squander time, and to hoard it. I read because I want to know, or am driven to find out. I read to immerse myself in the world of ideas, and to unearth how they are chained together, “each to each” (Eliot). I want to familiarize myself with the Unfamiliar, an action that unhides the Other. I read to open doors, and to discover great and feeble notions. I read to unearth both conversations and love affairs. I seek answers for my questions, and questions for my half-baked answers. I read to dive into the lake of language — those waters where we swim together in time, thought, ideas, and death.
I read to touch immortality’s hem.
I read to laugh and weep, to stumble and fall. I read to find and lose myself. I read to enter a discussion that began millennia before my birth and that will continue its whispers and howls long after my demise. I read out of urgency and laziness. I read hungry and sated, half-asleep and wide awake. I read because I am.
Daniel Coleman links reading to spirituality, and that bond seems honest and firm: “Spirituality assumes that I have something to learn and that I can learn it from many things around me that draw me out of myself.” Curiosity draws us to the larger God. It provides that deep pondering that results in immutable prayer.
Conversely, curiosity can also lead us directly away from Light (see Faust, among others). It can deceive us into illusions of knowledge’s power — and learning’s purpose. Such are the risks of reading.
Wisdom is not located on one specific page. It is not found in the Koran, Talmud, or Bible, though glimpses can be found there. Though the words and ideas of 10,000 years are collected in 10,000 texts, wisdom is not written above, below, or between their lines. It remains elusive, a moth nibbling at the parchment’s edge.
Proverbs states: “Hold on to instruction, do not let it go; guard it well, for it is your life.” Individually, we read and learn. Uniquely, we walk through life’s foibles and failings. Ubiquitously, we falter and flail. Those intent on the smallest sliver of wisdom will rise once more, re-open the book, and read the next page.
In the end, reading is a verb
Reading is action. It is a choice we make to open both mind and heart. Reading is presence — terrible or beautiful, mindful or redundant. Opening a book teaches something new. It gives voice to an unknown world. Its beauty can hold the soul.
Reading is passion. It is immersion and sin. It is prayer and redemption. It calms ambiguity and carves deep dissonance. It is life empowering and time depriving. It is magic and metaphor, word and wing.
How to read? Pick up the novel. Lift the magazine. Heft the Sunday New York Times. Turn the page. Begin.

(Source: https://medium.com/illumination-curated/why-read-some-reasons-to-love-it-again-1026940c8ee0, by Dr. Audrey, 2020-12-13. Tags: Love, Life Lessons, Self, Education, Books)
10 Best Data Science Reads for Students
Top 10 ML and stats articles for learning data concepts
It’s time for some Best Of compilations! Here are my 10 best articles for students. If you find yourself having fun with the writing, try following the links in the articles — they’re almost always from the same blog. I try to keep things unboring for your amusement.
Enjoy!
#1 Understanding Data
#2 Explaining supervised learning to a kid (or your boss)
#3 Unsupervised learning demystified
#4 A brief history of data science
#5 Machine learning — Is the emperor wearing clothes?
#6 Statistical inference in one sentence
#7 TensorFlow is dead, long live TensorFlow!
#8 Statistician proves that statistics are boring
#9 Explaining p-values with puppies
#10 What is Decision Intelligence?
Bonus: Snarky Statistics YouTube course

(Source: https://towardsdatascience.com/10-best-data-science-reads-for-students-3bae97d9bb23, by Cassie Kozyrkov, 2019-08-17. Tags: Machine Learning, Statistics, Artificial Intelligence, Data Science, Towards Data Science)
How To Make Your Apartment a Place You Actually Want To Be
For the times when, you know, you have to stay home
I used to hate being at home.
I lived in a really expensive one-bedroom apartment with barely enough room for myself, let alone my child, my belongings, and her belongings. I dreaded nights and weekends because it was cramped, uncomfortable, and there wasn’t a single spot without clutter.
After making a few changes, I’m much more comfortable being at home.
So comfortable, in fact, that I don’t mind if I have to stay home for the foreseeable future.
Here’s how you can do the same — regardless of where you live.
Get rid of things you don’t need.
This is a basic one, but it’s probably the hardest one to do. The thing is… you just have to do it. Start with the piles. You know which piles I’m talking about.
Photo by Samantha Gades on Unsplash
If something doesn’t have a home, give it a home — whether that home is on a shelf, in a closet, or in a donation bin. You’ll feel a lot better when you don’t have piles of things all over the place.
Yes, you can sell things — but if you’re trying to be expeditious and if we’re not talking about big-ticket items, just give things away or donate them. If something is literally garbage, just throw it away.
Hang things on the walls.
Nothing says “This is a home!” nearly as much as a little decoration. Can’t put nails in the wall? Get yourself some command strips and stick some decorations on the walls. A painting, a photograph, a tapestry — whatever. Just get something up on those walls so they don’t look so lifeless and barren.
Command strips can hold up to 16 pounds.
Not into traditional decorations? There are some heavy-duty hooks that stick right on the wall and can hold all of your frying pans and kitchenware. Turn your clutter into something pleasing to the eye by hanging things up.
Each hook can hold up to 13 pounds.
Spice up your lighting.
Who says you have to have plain white lights? You’re an adult. You can do what you want. You can make your bathroom lighting blue if you feel like it.
Get some LED strip lights and put them under your bed, or swap out a regular lightbulb for one that can change colors. If you want to go all out, you can get 32 feet of smart LED lighting for around $45. Then, you can just be like, “Alexa, make my living room purple,” and cuddle up in front of the TV.
Get some plants.
Look, I get it. I’ve killed a few plants in my day. I tried my best, but sometimes, fruit-bearing trees just don’t want to live indoors.
Get succulent plants that don’t require much water. They look great and they won’t die unless you REALLY mess up. Or get an AeroGarden and grow fresh herbs in your kitchen. Or, if you can manage to take care of a whole garden of plants without killing them, go crazy. You can even get one of those pink plant lights and combine “get some plants” with “spice up your lighting”.
Good luck killing THESE plants.
Move furniture.
It will feel like a new home if you move things around. Sometimes, you just need to trick yourself. Move your couch, move your bed, move your dresser, move your hamper… move whatever you can move.
Paint something.
Paint a wall if you’re allowed to. Accent walls look nice. Can’t paint a wall? Paint a desk. Paint a chair. Paint a picture and hang it on the wall. Instead of spending an exorbitant amount of money on paint, get paint samples for a few dollars. There’s more than enough paint in those little cans to do something cool.
Photo by Emily Wang on Unsplash
Use furniture for unintended purposes.
Bookshelf? You mean blanket storage. Put a picture frame on top and call it a day. Cube storage? You mean TV stand. Look around for things that can be repurposed, and repurpose them.
Organize stuff.
Not only does organizing things make it more pleasant to be at home… it’s also a great activity to do when you’re stuck indoors for a while.
These hanging shelves were $5 at Five Below.
Being home doesn’t have to be miserable.
Make a few changes. Order whatever you need online. Some things arrive the same day with Amazon Fresh (yes, they sell more than just groceries).
Worst case scenario: you’ll enjoy the results.
Best case scenario: you’ll enjoy the process.

(Source: https://kerisavoca.medium.com/how-to-make-your-apartment-a-place-you-actually-want-to-be-2400f0fe9550, by Keri Savoca, 2020-03-17. Tags: Culture, Life, Family, Cities, Productivity)
How Typography Impacts Your Business Outlook — A Primer
For any up-and-coming business, establishing a proper mode of communication with users is mission-critical to its success.
Online Businesses
If the business is online, text is the most basic yet highly effective way to reach its target audience.
Designers
For designers, typography has real potential to make or break a business. This is a particular area of interest for UI/UX design studios: to build brand identity organically, establish effective communication and streamline content delivery on different platforms through well-thought-out typefaces.
As implied, text is a medium used to inform, discuss, emphasize and encourage new visitors to interact and communicate with the company. However, the way a company presents itself through its content creates a long-lasting first impression among users, and it could be their last.
But where does the term typography fit into this whole dynamic?
Typography is, in the most literal sense, how text is presented. Each letter is stylized in a particular way, and the design pattern remains consistent across the rest of the letters.
Typography is represented by a typeface, or more popularly a ‘font-family’, showcased through titles, headings and paragraphs across a web page. Whether the company is a professional consulting service, a healthcare provider or an educational platform, the choice and style of words can make a big difference.
This write-up aims to inform businesses and startups about how typography is a game changer when it comes to first (and last) impressions.
Note: All images included in this piece are my own design. They serve the purpose of explaining a particular concept in each section.
1.0 Minimize The Number of Font-Families Being Used
Never aimlessly add fonts just because they look nice or add some flavor to your web app. It is better to keep the number of font-families to a minimum, such as two or at most three. Going beyond three will make your web app seem busy and disorganized, almost to the point of looking amateurish.
Fig 1: Using more than 3 fonts can give the impression of an unprofessional website
2.0 What Audience Are You Trying To Reach?
When choosing a set of typefaces for a web app, companies need to carry out research on their audiences. This would ideally involve specifying the following:
Age bracket
Demographic
Cultural Affiliation
Ethnicity
Gender
Looking into each factor allows companies to identify the exact pain-points faced by users and consequently formulate their goal and objectives based on the researched user persona.
Assume you are designing an educational app or a game targeting children aged 4–12; chances are you will use a font which is playful, glittery or has decoration around the letters.
Fig 2a shows one example of such a font-family, which includes colors on the even-numbered letters. The point is to excite children when they look at the sentence, and games utilize this technique to create playful learning experiences.
Fig 2a: Font-Family: AR Carter
However, when designing an online mobile-based app for a more mature crowd that may also include senior folk, the font needs to be legible, bold and relatively bigger in size, as seen in fig 2b.
Fig 2b: We need to design content for different age groups, both young and old.
Terminologies
In Design Thinking terminologies, to design an app which is wholly customer-centric, a very fundamental practice is to empathize with the users themselves, by listening to them and mapping out a complete flow from point A to B when addressing the problem.
If businesses open themselves to customers’ sentiments and woes, they can make the ‘right’ additions or modifications to their existing line of products. Showing empathy tells customers that you care, which goes a long way toward building brand loyalty.
3.0 Complement Your Selected Font-families, Smartly
Let’s say your business is a news group or blog channel that prioritizes content over other elements; your designers should choose font-families that favor easy readability.
As a rule of thumb, text should be non-intrusive for the reader and should flow smoothly; hence, we do not want the selected fonts to appear in juxtaposition to each other. Instead, complementary font-families go toward improving the reading experience of users.
For instance, I experimented with a few known Google Fonts, creating combinations of typefaces that made sense and promoted smooth readability in-line.
In fig 3a, I used the Montserrat font as my choice for the heading and Merriweather for the paragraph section. The combo can fit well in an online publishing site, both web and mobile. The mixture of a serif and a sans-serif font does well in finding the intersection between traditional and contemporary design.
Fig 3a: Heading Font — Montserrat & Paragraph Font — Merriweather
When considering the section in Fig 3b, we observe the minimalist and straightforward nature of the text. I used Fjalla One for the heading text and Noto Sans for the paragraph. Most modern web designs are employing clean format for text and this typeface pair perfectly reflects the trend we see on most landing pages today.
Fig 3b: Heading Font — Fjalla One & Paragraph Font — Noto Sans
4.0 Stick With One Typeface, Attain Mastery Before Moving On
If your business is just starting out and has a novice designer, it is best practice to stick with one typeface before experimenting with others. Modern typefaces are available in different font weights, which can be helpful in certain situations, like in a button, a label or a bold heading on the landing page.
Fig 4: Various font weights of the Roboto Typeface
There is no universal standard for weights but category names have been devised as seen in Fig 4 with words that progress from Thin, Light, Regular to Medium, Bold and Black to imply the proportional thickness of the font.
You also have the option of using Italics and underlined text but those should only be utilized when absolutely necessary.
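To make the weight categories concrete, here is a small illustrative Python sketch. The numeric values are the ones commonly used for Roboto on the web (100–900); as noted above there is no universal standard, so treat the exact numbers as an assumption.

```python
# Illustrative mapping of Fig 4's named weights to the numeric values
# commonly used for Roboto on the web. There is no universal standard,
# so these numbers are an assumption for demonstration purposes.
ROBOTO_WEIGHTS = {
    "Thin": 100,
    "Light": 300,
    "Regular": 400,
    "Medium": 500,
    "Bold": 700,
    "Black": 900,
}

def is_heavier_than_regular(name: str) -> bool:
    """Return True for named weights thicker than Regular (400)."""
    return ROBOTO_WEIGHTS[name] > 400
```

A designer's tooling could use a table like this to pick, say, Bold for buttons and Regular for body text, keeping the single typeface consistent across the app.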
5.0 Pay Attention to Contextual Sizing of Text
Once the designer is done with typeface selection, we proceed on to sizing the text. There are a number of ways and scientific tools to determine the perfect size for typefaces.
For instance, the Modular Scale system takes as input a base font-size and a ratio, and scales the text to the appropriate sizes by repeatedly multiplying the base by the ratio. This is just one tool among many for creating size guidelines.
The scaled sizes of the typefaces are then mapped onto the text list that includes different headings, a body text and caption text, as seen in fig 5.
Fig 5: Scaled sizes (factor = 1.33) of text in Roboto Font
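As a minimal sketch of how such a scale is computed, assuming a 16px base and the 1.33 factor used in Fig 5:

```python
def modular_scale(base: float, ratio: float, steps: int) -> list:
    """Generate font sizes by repeatedly multiplying the base by the ratio."""
    return [round(base * ratio ** i, 2) for i in range(steps)]

# Six steps of a 16px base with a 1.33 ratio, smallest to largest:
sizes = modular_scale(16, 1.33, 6)
print(sizes)  # [16.0, 21.28, 28.3, 37.64, 50.06, 66.59]
```

The six values can then be assigned, largest to smallest, to the headings, body and caption text in the text list, as in Fig 5.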
However, there are a number of factors that should be considered when applying the sizes on the text:
Text that is too small can strain a user’s eyes, especially when read on a mobile device. It is important that the text is large enough for users to read comfortably.
Text that is too large can be problematic as well. Big sized text can distract users while they perform a task and it calls attention to itself instead of other graphical elements.
6.0 Avoid Using Fonts With Cursive/Handwritten Scripts
When presenting your online business, it is ideal to use fonts that are clean, simple and, most importantly, legible. Fonts that involve calligraphy, or ones that are fundamentally cursive scripts, although beautiful to the naked eye, are at times insanely difficult to read or make sense of. Such scripts break the flow of scanning through the text.
Fig 6: Hard or easy to read?
7.0 Create a Design Guideline for Your Typography
In order to standardize design across your brand (your website, product line and social media campaign assets), it is imperative to set up a design guideline. One of the most important components of this guideline is typography.
In this phase, designers can modify attributes of the text such as color palette, effect, weight and size. I used Figma to draft a quick typography guideline in fig 7.
However, there are other popular tools as well, such as Adobe’s Creative Cloud or Sketch (for Apple), that can be used to construct a similar-looking guideline.

(Source: https://medium.com/startup-grind/how-typography-impacts-your-business-outlook-a-primer-3f092261d294, by Hamza Mahmood, 2020-08-30. Tags: Business Tips, Design, Web Design, Typography, UX)
Token Basket Generator Toolkit

Crypto token enthusiasts and investors are often unsure about where to invest.
They try to figure out the best tokens to invest in, considering the amount of money they have in hand for crypto investment.
Token selection for investment can be done manually by analyzing market trends for various crypto tokens, but it is cumbersome to reach logical conclusions with the limited information we can gather on our own.
New and old traders often find themselves in a flux about choosing from a myriad of token options available in the market.
Even if one knows what tokens to buy, it gets complicated to decide how much of each token to purchase.
To solve this conundrum, Token AI has launched the Token Basket Generator.
Token Basket Generator is Token AI’s flagship toolkit which helps users develop a crypto token basket for investment from scratch.
We are familiar with Mutual Funds, which are baskets of stocks curated by experts with certain parameters in mind.
In a similar manner, Token Basket Generator is powered by Juliet, Token AI’s proprietary AI-based program that analyzes historical and current trends and sentiments in the crypto market to recommend a basket of tokens to be purchased.
The Token Basket Generator Toolkit enables users to define their crypto investment preferences. Upon which, Token Basket Generator toolkit suggests customized investment baskets to users.
It empowers investors with useful information to help them derive maximum value from their crypto trades.
1. Token Universe
With so many old and new Crypto tokens in the market, it gets tricky to recognize the ones that are reliable.
Juliet creates a universe of about 500 valuable tokens. The number 500 is fluid and may increase or decrease, depending upon the availability of valuable liquid tokens.
This universe comprises tokens that are above a defined minimum threshold. The threshold considers market capitalization, availability on the Bittrex, Poloniex, Cryptopia and Binance exchanges, and desired trading volumes at desired prices.
This ensures that Token AI users invest only in liquid and valuable tokens.
2. Basket Define
To make crypto choices easy for traders, Token AI Basket Generator lets a user define what he/she wants in simple terms.
The user can define parameters like the desired number of different tokens in their basket, crypto exchange(s) he/she wants Juliet to perform analysis in, and the desired principal amount to invest in that basket.
The user also selects a trading frequency preference.
After this, the user just has to click on ‘Recommend’ to get AI-based token basket suggestions.
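The preferences described in this step can be pictured as a simple configuration object. The sketch below is purely illustrative: Token AI's actual interface is proprietary, and every name, field and default here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BasketRequest:
    """Hypothetical container for the basket preferences a user defines."""
    num_tokens: int                    # desired number of different tokens
    principal: float                   # principal amount to invest
    exchanges: list = field(default_factory=lambda: ["Binance"])
    trading_frequency: str = "weekly"  # user's trading frequency preference

# Example: a 5-token basket analyzed across two exchanges.
req = BasketRequest(num_tokens=5, principal=1000.0,
                    exchanges=["Binance", "Bittrex"])
```

Bundling the parameters this way keeps the "define" step separate from the analysis step that follows, so the same request could be re-run as market conditions change.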
3. Basket Process
After the user has defined investment preferences, Juliet does what it does best — use its AI based algorithms to analyze the Token Universe.
It runs the analysis in user-selected exchange(s), as per the parameters defined by the user, to recognize and suggest the best investment basket.
It does all this analysis in moments, analysis that would likely take an average crypto trader months to carry out.
4. Basket Recommend
After its AI-based analysis of historical and current trends and sentiments surrounding the tokens that match the basket parameters defined by the user, Juliet recommends a weighted basket of liquid tokens to be purchased.
With its proprietary tools, Token AI Basket Generator simplifies token basket selection for crypto enthusiasts and traders.
For a full video demonstrating the Token Basket Generator click here or head over to www.tokenai.io to learn more.
How to survive the democratization of content creation in 2018 and beyond

Marketing has changed.
It’s an idea that initially seems too obvious to have much impact. You’re probably thinking, “Of course marketing has changed. Marketing is always changing. There are always new channels emerging, new tools to try, and new ways to share our message with an audience.”
Let me explain what I mean.
In marketing, as in every discipline, there are micro changes and macro changes. Micro changes are small, frequent, and don’t drastically change the day-to-day work marketers do.
By contrast, macro changes happen far less often but have the potential to completely flip a discipline or industry on its head and make non-adopters obsolete.
Macro changes can happen as a result of new technology (e.g. the printing press or the internet) or of the accumulation of many micro changes over time (e.g. increasing consumer adoption of social media or gradual tweaks to Google’s search algorithm).
There’s a macro change that is currently flipping marketing on its head. I call it the democratization of content creation. (I know, it has a nice ring to it, doesn’t it?)
Here’s what that means and how it will impact you as a marketer — whether you like it or not.
What is the “democratization of content creation”?
Since the beginning of marketing time, a few select, trained experts in each company have controlled all the marketing collateral — all the marketing “stuff,” from print to television and everything in between. This has been due mostly to the reliance on mass marketing to a mass audience. (Ever seen Mad Men?)
All this “stuff” has been well thought-out to represent your organization in just the right way to help build your brand.
Today, however, more and more employees, partners, vendors, salespeople (the list goes on) create content that represents your brand whether you like it or not. (For example: the increase in employees blogging, sharing content on social media, etc.)
Not only that, but these “new marketers” are creating content across many channels and mediums for an increasingly fragmented audience. Customers don’t want mass marketing. They expect personalized messaging that is relevant to them at that time wherever they are.
Want to do a quick thought exercise? Think how much more content (i.e. marketing & sales collateral, web content, social media, print, etc.) your company created this year compared to last year. Now think of how many more people at your organization created that content compared to last year. Scary, right? That’s evidence of the democratization of content creation in action.
The problem is, not everyone knows how to create content that is on-brand.
Why does brand consistency matter?
As more people participate in the content creation process for your brand, consistent marketing really becomes a problem. No longer is all the marketing “stuff” created by a few well-trained brand experts. As a result, old logos are used, images are stretched, colors are off-hue, and horrific clip art is inserted into marketing collateral which damages your brand.
But… does an inconsistent brand really hurt your business in a tangible way?
The short answer: Yes.
A few months ago, I conducted a research study with a company called Demand Metric about the impact brand consistency can have on a business. We surveyed over 200 senior marketing leaders across a variety of company sizes and industries.
Among some of the most impactful findings were the following:
- 90% of study participants experience some level of inconsistent branding in the marketing & sales materials their company creates.
- Study participants estimated a 23% increase in revenue if they could always ensure marketing & sales materials were on-brand.
- Organizations that focus on maintaining brand consistency (and do it well) attribute 14% of their growth to doing so.
In summary, maintaining consistency in your marketing is a big challenge that most organizations face — one that, when overcome, can increase your revenue and fuel your growth.
What does this change mean for marketers?
As with every major change, the democratization of content creation will bring about both winners and losers.
The losers will be the complacent organizations that create a “wild west” mentality by allowing rogue (off-brand) content creation to continue.
Even worse are those who think the answer to brand & content consistency is to create a “brand prison” where content creation is restricted to the central marketing & design teams.
In contrast, imagine a world where employees, partners, vendors & salespeople are empowered to create customized, relevant sales & marketing materials (that are always on-brand) on their own.
Where central marketing teams can spend more time on the large, impactful projects that actually move the needle and less time drowning in an endless backlog of requests from across the organization.
And where brand managers can spend more time brand-building and less time on policing rogue content.
The winners of this change will be those organizations that recognize and embrace this change and empower employees, partners, vendors & salespeople with the tools and processes needed to create consistent, on-brand content.
There’s a better way to create on-brand content
Lucidpress is an intuitive design & brand management platform that empowers even the least design-savvy to quickly create on-brand marketing collateral through web-based, lockable templates.
Here’s how it works:
More relevant, brand-compliant content? Check.
Improved team efficiency? Check.
Stronger brand and bigger business? Check and check.
The democratization of content creation is already upon us. If you’re ready to embrace the change, Lucidpress can help. Find out how today by signing up for a free, no-obligation demo.
About Garrett Jestice
Garrett Jestice is the Product Marketing Manager for Lucidpress. When he’s not thinking about new ways to grow the Lucidpress business, he enjoys watching sports, playing Uno, and showing off his mad barbecuing skills. Follow him on Twitter @gjestice or connect with him on LinkedIn.
The Wonder of Getting Lost in a Bookshop
And why we should fight to keep them
Do you love visiting bookshops as much as I do? I love it! But, I don’t do it enough. There’s something so alluring about the convenience of online shopping, right? You think about something one minute, search for and buy it the next and voila…more often than not it’s on your doorstep the very next day. But, the thing about visiting a bookshop is that it’s a much better experience. Here’s why:
You open yourself to serendipity
One of the best things about bookshops is that you’ll pick up titles that you didn’t even know existed. You might even find a new subject of interest, surface an unknown author, or find an obscure title. It’s harder to do that online. Most of the time, when you’re online shopping it’s purpose-driven! You search for exactly what you want, conduct your business and badda-bing badda-boom you’re done.
However, in doing that you miss the opportunity to accidentally discover new things. You don’t broaden your horizons as much and you never know what you might be missing.
The romanticism of bookshops
What’s more inspiring to you? Finding a 500-square-foot bookshop that has 1000s of titles crammed into every nook and cranny, or imagining someone racing through a warehouse so big that they stopped measuring in square feet and started measuring in acres?
I know what floats my boat more. To me, there’s nothing better than discovering a bookshop. Every time I’m traveling I almost always visit a bookshop — just to see what’s going on. I love it and I could while away the hours quite easily. And, here’s a top tip…buying a book on your travels is a much better souvenir than an overpriced fridge magnet.
You have the opportunity to support your local economy
Support local, independent retailers where you can. In these shops nine times out of 10 you’ll find people who love books. Also, they usually love talking about books. This is where you can have a discussion about your recent favourite read or find out what you’ve missed in your chosen genre. They’re a wealth of information. And, what’s better is that you’re supporting the local economy and small retailers. What could feel more satisfying than that?
However, even if you visit a Barnes & Noble or a Waterstones they’re staffed by local people. They’re staffed by people who know books. They’re not staffed by people who don’t care if they’re picking a pack of dishwasher tablets, car windscreen wipers, or the latest Stephen King Novel.
Complexities in building a custom In-App Voice Assistant

Custom In-App Voice Assistants
Want to build a custom Voice Assistant for your app? Do NOT build one from scratch. It's harder than it looks
Voice as an interface is becoming more mainstream. Most likely the reader has experienced Voice interfaces when interacting with general-purpose Assistants like Alexa or Google Assistant or Siri. More and more brands are also adding custom In-App Voice Assistants to their mobile and web apps to enable their users to access their services faster and a lot easier. But building a custom In-App Voice Assistant is deceptively complex and requires multiple people working with different skill sets, working together for many months. Flipkart took 2 years to build its In-App Voice Assistant, even after acquiring a specialist Voice company, Liv.ai. ConfirmTkt took almost 18 man-months to get their In-App Voice Assistant built for the ticketing app.
Flipkart took 2 years to build its In-App Voice Assistant, even after acquiring a specialist Voice company, Liv.ai
Why does it take so long? What are the various things that one should consider when building their own custom In-App Voice Assistant? This blog tries to dissect the process of building one and argues why the world needs Voice Assistants to be delivered as a service rather than everyone needing to build out their own from scratch.
In the beginning, there were 4
At the core of any In-App Voice Assistant are 4 fundamental technology components.
Automatic Speech Recognition (ASR)
The speech to text service converts the speech captured via the microphone on the device to text in the language spoken by the user. Google is the pioneer in this service and provides both platform-specific (Android APIs) and a more powerful cloud-based service as part of its GCP offerings.
Natural Language Processing (NLP)
This service takes as input the text representing the user’s speech, classifies it based on intent, and also extracts data from it. For example, when the user says
“Book a ticket from Bangalore to Chennai for tomorrow” or “Cancel my ticket”
the app needs to understand the action that it should trigger to fulfill the user request. The NLP system, if configured correctly, can help determine this. The alternative is to perform a simplistic string pattern matching inside the app itself, but that is very fragile and very hard to maintain. Google’s Dialogflow, Amazon’s Lex, Facebook’s wit.ai, and the open-source project Rasa are some of the services that can be used for this purpose.
Translation
If the app wants to support multi-lingual input, it typically does in one of two ways. The NLP system itself can be configured for every language or the app can employ translation to get this right. If you are using translation, there are 3rd party services like Google Translate services, that do a decent job.
Text to Speech (TTS)
A Voice Assistant should ideally be a duplex system. That is, it should not just allow users to talk to the app, but also the Assistant should be able to speak back to the user at the appropriate times. One can use the platform native APIs for doing this or a 3rd party service like Amazon Polly.
Training needs
Once you have identified the providers of the 4 key components, the next thing that is needed is to ensure that you can use them in a way that is suited for your app.
ASR Domain Optimization
Typically, the service can be used out of the box with no configuration, as it comes with its own pre-trained models. But that may not be good enough for your app or domain. For example, if yours is a car company’s app and a user says “corolla”, the ASR might recognize it as “gorilla”, since it has no explicit context about your app. It uses a probabilistic model and, based on how it recognizes the speech patterns, might pick the word it thinks is best globally. Some ASR engines allow you to augment the language model used by the ASR to be “biased” towards words or phrases that are more relevant to you.
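The effect of such biasing can be illustrated with a toy rescoring step. This is a simplified sketch, not how a production ASR language model works: each candidate transcript gets a score boost for containing in-domain words, so "corolla" can win over "gorilla":

```python
# Toy illustration of ASR biasing via rescoring; scores and vocab are invented.
DOMAIN_VOCAB = {"corolla", "camry", "prius"}

def rescore(candidates, boost=0.2):
    """Pick the best transcript after boosting in-domain words.

    candidates: list of (transcript, acoustic_score) pairs.
    """
    def biased_score(item):
        text, score = item
        hits = sum(word in DOMAIN_VOCAB for word in text.lower().split())
        return score + boost * hits
    return max(candidates, key=biased_score)[0]

best = rescore([("gorilla", 0.55), ("corolla", 0.45)])  # -> "corolla"
```

Real engines expose this as phrase hints or custom vocabularies rather than a post-hoc rescoring pass, but the intuition is the same.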
NLP Intents and Slots/Entities Configurations and utterance training
As mentioned above, an NLP system needs to be configured and trained well for it to work efficiently. This is normally one of the hardest things to do and requires a specialist conversational designer to configure and train it correctly.
Intents and Entities (or Slots as they are called by some) are the basic building blocks here. Intents are used to classify the actual utterances that the user speaks (is he or she trying to book a ticket or cancel it?). And the Entities (or Slots) are used to identify/extract the data/parameters/entity inside that utterance (e.g. extracting the source, destination, and travel date when the user says “book a ticket from Bangalore to Chennai for tomorrow”)
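To make the intent/entity split concrete, here is a minimal keyword-and-regex parser. Real NLP services (Dialogflow, Lex, wit.ai, Rasa) use trained statistical models; the patterns below are invented purely for illustration:

```python
import re

def parse(utterance):
    """Classify the intent and extract entities from a ticketing utterance."""
    text = utterance.lower()
    if "cancel" in text:
        return {"intent": "cancel_ticket", "entities": {}}
    if "book" in text:
        entities = {}
        route = re.search(r"from (\w+) to (\w+)", text)
        if route:
            entities["source"], entities["destination"] = route.groups()
        date = re.search(r"for (tomorrow|today|\d{4}-\d{2}-\d{2})", text)
        if date:
            entities["travel_date"] = date.group(1)
        return {"intent": "book_ticket", "entities": entities}
    return {"intent": "unknown", "entities": {}}

result = parse("Book a ticket from Bangalore to Chennai for tomorrow")
```

This fragility (every new phrasing needs a new pattern) is exactly why a trained NLP system beats simplistic string matching.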
Translation Optimizations
Again, this mostly works out of the box, until it doesn’t :). The translations that generic 3rd-party services provide are optimized for the common case and might not fit your needs. Here are some common examples of failures for, say, a grocery search:
“Narasus Coffee” is a popular coffee brand in Tamil Nadu
“aamchur” means “dry mango”. But with a colloquial spelling, the translation might go wrong
TTS Prompts and Statements configuration
There are fundamentally 4 reasons why Voice Assistants need to speak out.
1. Speak out a greeting message (“Welcome to Big Basket”)
2. Clarify something if it’s not able to understand (“Sorry I did not understand what you are saying”)
3. Ask a question (“Which is the travel date?”)
4. Convey some information (“Your balance is 200 Rs”)
In a typical application, there could be hundreds of such sentences that need to be spoken out by the Assistant, and these need to be configured, with the ability to be changeable dynamically at runtime.
Conversation Design
This is usually the most involved part: How should the application react to the various Intents that are recognized and unrecognized? What happens when some entities are recognized and some are not? After one intent is recognized, how do you trigger the next user-journey? This, more often than not, is coded explicitly inside the app in most cases and leads to a lot of complex “if/else” and complex programming logic.
Simple code that handles a travel-related search (source, destination, and travel date)
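The snippet that caption refers to is not reproduced here, but a hypothetical sketch in the same spirit, where the helper names `ask_user` and `show_results` are invented, could look like this:

```python
# Hypothetical handler for a travel-search intent; helper names are invented.
def handle_travel_search(intent, entities, ask_user, show_results):
    if intent != "travel_search":
        return ask_user("Sorry, I did not understand what you are saying")
    if "source" not in entities:
        return ask_user("Where are you travelling from?")
    if "destination" not in entities:
        return ask_user("Where do you want to go?")
    if "travel_date" not in entities:
        return ask_user("Which is the travel date?")
    return show_results(entities["source"], entities["destination"],
                        entities["travel_date"])

prompts = []
handle_travel_search("travel_search", {"source": "Bangalore"},
                     ask_user=prompts.append,
                     show_results=lambda *args: prompts.append(args))
```

Every missing slot adds another branch, which is how the complex if/else logic creeps in.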
I am glossing over this concept, even though it's the hardest. But this is roughly equivalent to handling various UI events in Android and Web apps, after having designed the various page layouts and rendered them on-screen, and then connecting them with the business logic which triggers the actual functionality. Just that, here, instead of UI events, you deal with intents and entities as your inputs. But unlike UI events which are definitive in nature (you can only click on things that you can see on the current screen), voice is unconstrained and the user can refer to things that are outside the scope of the current screen too.
User Interface
The next big puzzle is the Visual experience that has to go along with the Assistant. Think of it like the Google Assistant or Siri like UI elements that you need to add to your app to get the feel of having a Voice Assistant. While this is slightly more straightforward than the previous points, it has a bunch of nuance to it. Since most developers are less familiar with this part of the Assistant building, I will get a bit more specific for this section.
Invocation Trigger
This is the first step of the puzzle. How will the user initiate the interaction with the Assistant? The Assistant should have a “single” point of entry (unlike traditional UI elements where different functionalities have different UI elements). This is done in one of two ways (or both in some cases) —
A microphone button is placed strategically on the screen that users can click to start the interaction. This is again kept in one of three places —
1. At the bottom of the screen, which is easier for the thumb to get to
2. At the top of the screen, typically inside the nav/action bar
3. Right next to the functionality that has been voice-enabled, if voice enablement is very localized
A hotword that triggers the Assistant. For example, when using Alexa inside the Amazon app, you can start it by just saying “Alexa”. This is a double-edged sword in my view. While it might seem quite convenient, hotwords are ideal for far-field interactions (like with an Echo device, or talking to Siri or Google Assistant when the phone is kept away from you). But as we have seen with those devices, it can have a lot of false positives, and it is also a potential battery drain and a security loophole.
Assistant Interaction Surface
Once the Assistant has been invoked, it needs its own surface where it can interact with the user. There are two ways in which apps have implemented this —
A full-screen “conversational screen” that overlays on top of the traditional app interface.
A full-screen VA interface — Erica from Bank of America
A partial screen (normally at the bottom) that allows the user to interact with both the traditional app interface or with the Assistant simultaneously
User Training
When a user is interacting with the Voice Assistant, it’s quite natural for them not to know exactly what to say, even after they have invoked the Assistant. Because voice is so open-ended, it puts a cognitive overhead on the user. We have all been through this. That’s why it’s important for the Assistant to educate the user on the various things they can say that it can reliably understand. Over time, the user becomes accustomed enough to the system that they no longer need this help, but it is important initially.
This can be done by one of two means.
Showing contextual hints when the Assistant is active. Contextual hints give clues to the user about possible answers when the Assistant is prompting them to speak. This helps train the user about the correct way to speak back to the Assistant.
Showing contextual coach-marks to inform the user when it is a good time to use the Assistant. For example, if Voice Search is enabled in the app, then immediately after a user performs a textual search and the results are shown, the app can inform them that they could have used the Voice Assistant to do the same. This contextual help message is more useful than showing a coachmark at the beginning, and we have seen higher conversions because of it.
Assistant UX
A key aspect of Assistant design is setting the amount of time it should wait for the user to speak before timing out. Wait too long and it would feel like the system is too slow. Timeout quickly and you would end up missing out on the user’s thoughts. The Assistant needs to have two different timeouts —
Initial wait timeout — This is the time the Assistant will wait for the user to begin speaking. It should be on the longer side, in the range of at least 5 to 10 seconds.
End of speech timeout — This should not be a fixed time but rather dynamic. When the user is speaking a short sentence and pauses, the Assistant can time out quickly. But when the user has been speaking for longer, it’s better to keep waiting for them to gather their thoughts. For example, if the user spoke fewer than 10 characters and there was a 1-second pause, it probably means they are done; it was most likely a short answer (e.g. “help” or “done”). But if the user spoke a longer sentence, it might be okay to wait a bit more (say, a 2-second pause).
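The two-tier pause logic described above can be sketched directly. The thresholds follow the example in the text, though real values would be tuned per app:

```python
INITIAL_WAIT_S = 8.0  # time to wait for the user to begin speaking (5-10s range)

def end_of_speech_timeout(transcript_so_far):
    """Tolerate a longer pause once the user is into a longer utterance."""
    return 1.0 if len(transcript_so_far) < 10 else 2.0

short_pause = end_of_speech_timeout("done")                         # -> 1.0
long_pause = end_of_speech_timeout("book a ticket from bangalore")  # -> 2.0
```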
Tying up the UI and the Conversational Design
The next challenge is making sure that the UI elements and the conversational design elements and the business logic are all well connected.
For example:
When the trigger is shown, the surface should not be visible at the same time.
When the user has finished speaking and the app is trying to determine the intent behind the speech, the UI should indicate it’s processing
After the user’s intent is understood and if the screen changes, the Assistant surface should inform the user about the outcome and get out of the way for the user to be ready to interact with the app
When the Assistant is asking the user for an input and if the same can be provided by just clicking on the UI, the Assistant should react to this multi-modal input and move to the next steps, instead of forcing the user to give input only by voice.
There are lots of nuances of connecting the visual experience and the conversational experience when building the In-App Assistant. This is what makes the experience smooth and pleasing to the user.
Platform Nuances
Next up. Platform specificity. Every platform has its own nuances, especially when it comes to getting access to the microphone and also access to the speaker.
For instance, in browsers and specifically in multi-page apps (like non-React apps), when you move from one page to another, the app cannot, without any user input, automatically speak out something. Also, the way to get the microphone permission is not very intuitive.
Similarly for Android and iOS. The app has to make sure every time that the permission for the microphone is available and, if not, prompt for it at the right time and in a very tactful way.
Have human-like characteristics but still don't try to be one
When a user interacts via voice, since it’s a uniquely human experience (even animals can potentially use a touch interface), we intuitively have a higher level of expectation. The most common assumption here is that the Assistant should be “intelligent” enough to understand whatever is spoken, but what we have seen in our experience is that users understand they are ultimately talking to a machine that has some predefined constructs. As long as there is some flexibility there, most users are fine using commands that sound roughly similar.
But what they cannot tolerate is a lack of responsiveness. If you say something and if the system takes a long time to respond back or even tell you it got what you spoke, it is very off-putting. It's equivalent to having a touch interface where you touched something on the screen and it does not even give you an indication of response. So a Voice Assistant has to be -
Responsive. It should show on the screen what the user spoke, as soon as they spoke it out, so that they feel reassured that the system got their input
Quick. And after the Assistant determines the user speech has ended, it should strive to respond back (do some UI transitions) typically within a second. This sub-second response time is what made Alexa so loved by everyone. It felt so natural when it responded back within a second. The bar is now high for users when it comes to performance.
Dynamic and contextual. The Assistant should not feel too monotonous in its interactions. When it comes to visual outputs, it's fine if the same sequence gets played out every time and in fact, we want that. But when it comes to voice responses, humans tend to get bored if the response from the Assistant is very repetitive. So it's important that the Assistant is dynamic. And also it should be contextual. Depending on which screen the user is interacting with the Assistant, it should change the way it drives the conversation with the user
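One simple way to keep spoken responses from feeling monotonous is to pick a random variant of each prompt. A sketch, with invented prompt keys and phrasings:

```python
import random

# Invented prompt variants; a real app would load these from configuration.
PROMPTS = {
    "greeting": [
        "Welcome back!",
        "Hi there, what can I get you?",
        "Good to see you again.",
    ],
}

def pick_prompt(key, rng=random):
    """Return one of several phrasings so repeated prompts vary."""
    return rng.choice(PROMPTS[key])

line = pick_prompt("greeting")
```

Contextual variation (different prompts per screen) can be layered on top by keying the table on both screen and prompt name.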
Oh my. That sounds like a lot of work
Building a good custom In-App Assistant has typically needed a lot of investment, both in terms of the number, kind of people needed, and also the time it would take to get it right. That is why it has taken many of the brands quite a bit of time and investment to roll out their own custom Assistants.
Mobile and web apps are the dominant channels for businesses today, and they will continue to be. But if embracing this technology takes a lot of effort and time, that is going to be a big deterrent for many brands.
But this does not have to be like this. There has to be a better way if we are to democratize this notion of Voice interfaces.
Voice Assistant as a Service
The time has now come for this technology to be made available to brands as a simple service that they can connect to and embed inside their apps. It’s time for Voice Assistants to become a service.
In our next blog, we will talk about how Slang is doing exactly that via its unique Voice Assistant as a Service (VAaaS) platform.
Free SSL Certificates With Let’s Encrypt for Grafana & Prometheus-Operator Helm Charts

Now we will install the Grafana or Prometheus-Operator Helm Chart. This example installs the Prometheus-Operator Helm Chart, but the Grafana portion of the values.yml file is the same for both.
Again the first step is to create a Kubernetes namespace for deploying the prometheus-operator Helm Chart:
kubectl create namespace prom
Now we need to configure the ingress values for Grafana in the Helm Chart’s values.yml . The complete configuration options are available on the prometheus-operator Helm Chart GitHub repository.
For the annotations, we want to specify that nginx is used for the ingress and that letsencrypt-prod is used for the cluster-issuer . Then we want to specify the host to use for the ingress and the tls host (SSL certificate common name), in most cases they will be the same. Below is an example of the grafana portion of my values.yml for the prometheus-operator Helm Chart:
## Using default values from https://github.com/helm/charts/blob/master/stable/grafana/values.yaml
##
grafana:
enabled: true
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "letsencrypt-prod"
hosts:
- grafana.example.com
tls:
- hosts:
- grafana.example.com
secretName: grafana-tls
We can install the prometheus-operator Helm Chart once the values.yml file has been configured:
helm install prom stable/prometheus-operator -f values.yaml --namespace prom
Verify the prom-grafana pod is running (it may take a few minutes to get running):
$ kubectl -n prom get po
NAME READY STATUS RESTARTS AGE
prom-grafana-798b7b89bf-rnbpt 2/2 Running 0 10s
prom-kube-state-metrics-568dc84666-z5vm6 1/1 Running 0 10s
prom-prometheus-node-exporter-88h4k 1/1 Running 0 10s
prom-prometheus-operator-operator-67d764bff6-j99jm 2/2 Running 0 56s
prometheus-prom-prometheus-operator-prometheus-0 3/3 Running 0 56s
Finally, you can view the status of the grafana-tls certificate. The ca.crt will be 0 bytes, but the tls.crt and tls.key should be greater than 0 bytes. If there is an error, the error message should show up here:
$ kubectl -n prom describe secret grafana-tls
Name: grafana-tls
Namespace: prom
Labels: <none>
Annotations: cert-manager.io/alt-names: grafana.example.com
cert-manager.io/certificate-name: grafana-tls
cert-manager.io/common-name: grafana.example.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-prod
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
ca.crt: 0 bytes
tls.crt: 3574 bytes
tls.key: 1671 bytes
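As an aside, instead of relying on the ingress annotations to request the certificate, cert-manager also lets you declare it explicitly with a Certificate resource. The sketch below reuses the names from this example; check your cert-manager version's documentation for the correct apiVersion:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grafana-tls
  namespace: prom
spec:
  secretName: grafana-tls
  dnsNames:
    - grafana.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```

Either way, the result is the same grafana-tls secret that the ingress consumes.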
Now you can go to https://grafana.example.com and it should be secured by a free, valid SSL certificate. That is it. As long as cert-manager is running you don’t have to do anything, ever! cert-manager should automatically renew Let’s Encrypt SSL certificates without any user interaction every month.
The Garbage Truck

Rachel Lindsay is a cartoonist in Vermont. Her book is RX: A Graphic Memoir and her comic strip is Rachel Lives Here Now. www.rachellivesherenow.com
Evolving MySQL Compression — Part 1

Robert Wultsch | Pinterest engineer, SRE
Pinterest Infrastructure engineers are the caretakers of more than 75 billion Pins–dynamic objects in an ever-growing database of people’s interests, ideas and intentions. A Pin is stored as a 1.2 KB JSON blob in sharded MySQL databases. A few years back, as we were growing quickly, we were running out of space on our sharded MySQL databases and had to make a change. One option was to scale up hardware (and our spend). The other option–which we chose–was using MySQL InnoDB page compression. This cost a bit of latency but saved disk space. However, we thought we could do better. As a result, we created a new form of MySQL compression which is now available to users of Percona MySQL Server 5.6.
JSON is efficient for developers, not machines
As a small start-up, Pinterest built and scaled its MySQL environment to tens of millions of Pinners without having an engineer who specialized in the care and feeding of MySQL. This was a testament to MySQL’s ease of use, but it also meant non-trivial changes were not practical. In particular, adding columns to MySQL tables was impossible without knowledge of specialized tools such as the online schema change scripts from Percona, GitHub, or (my favorite because I helped build it) Facebook.
Storing almost all Pin data in a JSON blob worked around the inability to add columns to MySQL tables. This flexibility came at the cost of storage efficiency. For example, we store a field called “uploaded_to_s3” as a boolean. If we had stored this as a boolean in MySQL, the field would have only used 1 byte. With the JSON representation below, we wrote 24 bytes to disk, largely as a result of the field name being stored in the JSON blob. About 20 percent of a Pin’s size comes from field names.
How the boolean uploaded_to_s3 is stored in JSON
, "uploaded_to_s3": true
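A quick sanity check of that arithmetic in Python (illustrative only — the Pin-like record below is made up, not Pinterest's actual schema):

```python
import json

# The JSON fragment from above: 24 bytes on disk to represent a boolean
# that a native MySQL column could store in a single byte.
fragment = ', "uploaded_to_s3": true'
print(len(fragment))  # 24

# A hypothetical Pin-like record shows how field names dominate the payload.
pin = {"uploaded_to_s3": True, "is_video": False, "board_id": 42}
blob = json.dumps(pin)
field_name_bytes = sum(len(k) + 2 for k in pin)  # names plus their quotes
print(field_name_bytes / len(blob))  # rough field-name share of the blob
```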
InnoDB page compression
As it’s normally configured, InnoDB “thinks” in 16KB pages and will attempt to compress a user-defined number of pages and push them into the space of a single page. (For a deep dive on how InnoDB page compression works, I suggest reading these fine docs.)
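On MySQL 5.6, page compression is typically enabled per table with DDL along these lines (an illustrative sketch — the table and column names are made up, not Pinterest's schema):

```sql
-- Compressed row format on 5.6 requires the Barracuda file format and
-- per-table tablespaces.
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = 1;

-- KEY_BLOCK_SIZE=8 asks InnoDB to fit each 16KB page into an 8KB block,
-- i.e. a target compression ratio of roughly 2:1.
CREATE TABLE pins (
  id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  data MEDIUMBLOB NOT NULL  -- the JSON blob
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```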
However, we found several significant downsides to InnoDB page compression:
InnoDB’s buffer pool, its in-memory cache, stores both the compressed and uncompressed pages. This is helpful if data from a page is read repeatedly in relatively quick succession since the data doesn’t need to be decompressed multiple times, but it isn’t memory-efficient. In our case, we have a significant caching layer (managed by Mcrouter) in front of MySQL, so repeated reads are somewhat rare.
The fundamental unit of work is still a 16KB page. This means that if a set of pages to be compressed don’t fit into 16KB or less, the compression fails and no savings are realized. It also means that if the table is configured for a compression ratio of 2:1, but the pages happen to compress extremely well (perhaps even all the way down to a single byte, for purposes of our thought experiment), the on-disk size is still 16KB. In other words, the compression ratio is effectively still only 2:1.
In general, latency is higher for tables that use InnoDB compression, especially those under high-concurrency workloads. Stress testing against our production workload showed significant increases in latency and a corresponding drop in throughput with more than 32 active concurrent connections. Since we had a lot of excess capacity, this wasn't a major concern.
Alternatives
We considered using a method other companies have tried, where the client compresses the JSON before sending data to MySQL. While this reduces load on the databases by moving it to the client, the cost of retrofitting middleware, particularly at the expense of new features, was too high in our case. We needed a solution that didn’t require any changes to the database clients.
We discussed at length modifying MySQL in order to allow compression at the column level. This approach would have different benefits and some tradeoffs:
We’d realize the maximum disk space savings from compression.
For pages containing compressed data, we’d only store one copy in memory, so RAM would be used more efficiently than it is with both uncompressed and page-compressed InnoDB.
For every read we’d need to decompress data, and every write would require a compression operation. This would be especially harmful if we needed to do large sequential scans with many decompression operations.
We were fortunate that Weixiang Zhai from Alibaba had posted a patch for inclusion in Percona Server that implemented this feature. We patched, compiled and tested MySQL using our production workload. The result was similar compression savings to InnoDB page compression (~50%) but with a better performance profile for our workload. This was helpful, but we had another improvement in mind.
Improving column compression
Zlib is the compression library used by InnoDB page compression and the column compression patch from Alibaba. Zlib achieves savings in part by implementing LZ77 and works by replacing occurrences of repeated strings with references to the earlier occurrences. The ability to look back at previous string occurrences would be very useful for page compression but less so for column compression since it’s unlikely field names (among other strings) would occur repeatedly in the same column in a given row.
Zlib version 1.2.7.1 was released in early 2013 and added the ability to use a predefined “dictionary” to prefill the lookback window for LZ77. This seemed promising since we could “warm up” the lookback window with field names and other common strings. We ran a few tests using the Python Zlib library with a naive predefined dictionary consisting of an arbitrary Pin JSON blob. The compression savings increased from ~50% to ~66% at what appeared to be relatively little cost.
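The effect is easy to reproduce with Python's zlib bindings, which expose the predefined dictionary as the zdict argument (the blob and dictionary below are made-up stand-ins for real Pin data):

```python
import zlib

# A made-up Pin-like JSON blob: short, and dominated by field names.
blob = (b'{"uploaded_to_s3": true, "is_video": false, '
        b'"link": "https://example.com/a", "description": "mid-century lamp"}')

# Without a dictionary, LZ77's lookback window starts empty, so the field
# names in a single row have nothing earlier to back-reference.
plain = zlib.compress(blob)

# Prefill the window with common field names and literals; the compressor
# can then emit back-references to them from the very first byte.
zdict = b'"uploaded_to_s3": "is_video": "link": "description": true false https://'
comp = zlib.compressobj(zdict=zdict)
with_dict = comp.compress(blob) + comp.flush()

print(len(blob), len(plain), len(with_dict))  # dictionary output is smaller

# Decompression must be given the exact same dictionary.
decomp = zlib.decompressobj(zdict=zdict)
assert decomp.decompress(with_dict) == blob
```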
We worked with Percona to create a specification for column compression with an optional predefined dictionary and then contracted with Percona to build the feature.
Initial testing and a road forward
Once an alpha version of column compression was ready, we benchmarked the change and found it produced the expected space savings and doubled throughput at high concurrency. The only downside was large scans (mysqldump, ETL, etc.) took a small performance hit. We presented our findings earlier this year at Percona Live. Below is a graph from our presentation which showed a read-only version of our production workload at concurrency of 256, 128, 32, 16, 8, 4 and 1 clients. TokuDB is in yellow, InnoDB page compression is in red and the other lines are column compression with a variety of dictionaries. Overall, column compression peaked at around twice the throughput on highly concurrent workloads.
In our next post, we’ll discuss how we increased the compression savings using a much less naive compression dictionary.
Acknowledgements: Thanks to Nick Taylor for suggesting the use of predefined dictionary from Zlib, Ernie Souhrada for benchmarking, Weixiang Zhai for writing the original patch and posting it to the Percona mailing list, and Percona for adding in the predefined dictionary feature and being willing to include it in their distribution. | https://medium.com/pinterest-engineering/evolving-mysql-compression-part-1-7f8b09666589 | ['Pinterest Engineering'] | 2017-02-21 20:24:21.839000+00:00 | ['Programming', 'Big Data', 'MySQL', 'Compression', 'Open Source'] |
Getting Unstuck in the Middle of NaNoWriMo | Dear November Novelists,
It’s after 2pm and I’ve done everything but add to my novel’s word count. I did my morning pages, made a lavish breakfast, talked to friends, walked my dog, cleaned the kitchen, watched Netflix (I’m watching Episodes with Matt LeBlanc, you?), walked the dog again…all the while guilt accumulated with each word I didn’t add to my novel.
As seen on my dog walk. Do fairies and sprites live here?
I forgot that in the middle of NaNoWriMo, my motivation and creativity usually diminish to the equivalent of a wrung-out kitchen sponge.
Then I read Grant Faulkner’s post on this well-known phenomenon. A nightlight in the hallway of my imagination ignited, casting a faint glow on possibility.
Do I need to write 1,667 words today? Not really. It's not like the world will stop spinning. Except I made a promise to myself. It's the same promise I've made every November for the past 9 years. This month, writing comes first.
I'll argue the promises we make to ourselves are the most important ones to keep. So when I finish penning this newsletter and share some of the fantastic posts this week, I'll start typing. | https://medium.com/nanowrimo/getting-unstuck-in-the-middle-of-nanowrimo-8e391a1c4f73 | ['Julie Russell'] | 2020-11-15 16:40:52.804000+00:00 | ['Fiction', 'NaNoWriMo', 'Nonprofit', 'Writing']
Fundamentals of MapReduce with MapReduce Example | MapReduce Tutorial - Edureka
In this MapReduce Tutorial blog, I am going to introduce you to MapReduce, which is one of the core building blocks of processing in the Hadoop framework. Before moving ahead, I would suggest you get familiar with HDFS concepts which I have covered in my previous HDFS tutorial blog. This will help you to understand the MapReduce concepts quickly and easily.
Google released a paper on MapReduce technology in December 2004. This became the genesis of the Hadoop Processing Model. So, MapReduce is a programming model that allows us to perform parallel and distributed processing on huge datasets. The topics that I have covered in this MapReduce tutorial blog are as follows:
Traditional Way for parallel and distributed processing
What is MapReduce?
MapReduce Example
MapReduce Advantages
MapReduce Program
MapReduce Program Explained
Traditional Way
Traditional Way - MapReduce Tutorial
Let us understand how parallel and distributed processing used to happen in the traditional way, before the MapReduce framework existed. Let us take an example where I have a weather log containing the daily average temperature for each year from 2000 to 2015. Here, I want to find the day with the highest temperature in each year.
So, just like in the traditional way, I will split the data into smaller parts or blocks and store them in different machines. Then, I will find the highest temperature in each part stored in the corresponding machine. At last, I will combine the results received from each of the machines to have the final output. Let us look at the challenges associated with this traditional approach:
Critical path problem: This is the amount of time taken to finish the job without delaying the next milestone or the actual completion date. If any one machine delays its part of the job, the whole work gets delayed.
Reliability problem: What if a machine working with a part of the data fails? Managing this failover becomes a challenge.
Equal split issue: How will I divide the data into smaller chunks so that each machine gets an even share of the data to work with? In other words, how do I divide the data such that no individual machine is overloaded or underutilized?
Single split may fail: If any machine fails to provide its output, I will not be able to calculate the result. There should be a mechanism to ensure the fault tolerance of the system.
Aggregation of the result: There should be a mechanism to aggregate the results generated by each machine to produce the final output.
These are the issues which I will have to take care individually while performing parallel processing of huge datasets when using traditional approaches.
To overcome these issues, we have the MapReduce framework which allows us to perform such parallel computations without bothering about the issues like reliability, fault tolerance etc. Therefore, MapReduce gives you the flexibility to write code logic without caring about the design issues of the system.
What is MapReduce?
What is MapReduce - MapReduce Tutorial
MapReduce is a programming framework that allows us to perform distributed and parallel processing on large data sets in a distributed environment.
MapReduce consists of two distinct tasks — Map and Reduce.
As the name MapReduce suggests, reducer phase takes place after the mapper phase has been completed.
So, the first is the map job, where a block of data is read and processed to produce key-value pairs as intermediate outputs.
The output of a Mapper or map job (key-value pairs) is input to the Reducer.
The reducer receives the key-value pair from multiple map jobs.
Then, the reducer aggregates those intermediate data tuples (intermediate key-value pair) into a smaller set of tuples or key-value pairs which is the final output.
A Word Count Example of MapReduce
Let us understand, how a MapReduce works by taking an example where I have a text file called example.txt whose contents are as follows:
Dear, Bear, River, Car, Car, River, Deer, Car and Bear
Now, suppose we have to perform a word count on example.txt using MapReduce. So, we will be finding the unique words and the number of occurrences of those unique words.
MapReduce Example - MapReduce Tutorial
First, we divide the input into three splits as shown in the figure. This will distribute the work among all the map nodes.
Then, we tokenize the words in each of the mappers and give a hardcoded value (1) to each of the tokens or words. The rationale behind giving a hardcoded value equal to 1 is that every word, in itself, will occur once.
Now, a list of key-value pair will be created where the key is nothing but the individual words and value is one. So, for the first line (Dear Bear River) we have 3 key-value pairs — Dear, 1; Bear, 1; River, 1. The mapping process remains the same on all the nodes.
After the mapper phase, a partition process takes place where sorting and shuffling happen so that all the tuples with the same key are sent to the corresponding reducer.
So, after the sorting and shuffling phase, each reducer will have a unique key and a list of values corresponding to that very key. For example, Bear, [1,1]; Car, [1,1,1].., etc.
Now, each Reducer counts the values which are present in that list of values. As shown in the figure, reducer gets a list of values which is [1,1] for the key Bear. Then, it counts the number of ones in the very list and gives the final output as — Bear, 2.
Finally, all the output key/value pairs are then collected and written in the output file.
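The map → shuffle → reduce flow above can be sketched as a single-process Python simulation (illustrative only — real Hadoop distributes these phases across nodes):

```python
from collections import defaultdict
from itertools import chain

# The three input splits from the walkthrough, one per map node.
splits = ["Dear Bear River", "Car Car River", "Deer Car Bear"]

# Map phase: tokenize each split and emit (word, 1) key-value pairs.
mapped = list(chain.from_iterable(((w, 1) for w in s.split()) for s in splits))

# Shuffle/sort phase: group all values under their key, so each reduce call
# sees a unique key with its full list of values, e.g. ('Bear', [1, 1]).
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: sum each key's list of ones to get the final count.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'Dear': 1, 'Bear': 2, 'River': 2, 'Car': 3, 'Deer': 1}
```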
Advantages of MapReduce
The two biggest advantages of MapReduce are:
1. Parallel Processing:
In MapReduce, we divide the job among multiple nodes, and each node works on a part of the job simultaneously. So, MapReduce is based on the divide-and-conquer paradigm, which helps us process the data using different machines. As the data is processed by multiple machines in parallel instead of by a single machine, the time taken to process it is reduced by a tremendous amount, as shown in the figure below.
Traditional Way Vs. MapReduce Way - MapReduce Tutorial
2. Data Locality:
Instead of moving data to the processing unit, we are moving the processing unit to the data in the MapReduce Framework. In the traditional system, we used to bring data to the processing unit and process it. But, as the data grew and became very huge, bringing this huge amount of data to the processing unit posed the following issues:
Moving huge data to processing is costly and deteriorates the network performance.
Processing takes time as the data is processed by a single unit which becomes the bottleneck.
Master node can get over-burdened and may fail.
Now, MapReduce allows us to overcome the above issues by bringing the processing unit to the data. So, as you can see in the above image that the data is distributed among multiple nodes where each node processes the part of the data residing on it. This allows us to have the following advantages:
It is very cost effective to move the processing unit to the data.
The processing time is reduced as all the nodes are working with their part of the data in parallel.
Every node gets a part of the data to process and therefore, there is no chance of a node getting overburdened.
MapReduce Example Program
Before jumping into the details, let us have a glance at a MapReduce example program to get a basic idea of how things work in a MapReduce environment in practice. I have taken the same word count example, where I have to find out the number of occurrences of each word. And don't worry if you don't understand the code when you look at it for the first time; just bear with me while I walk you through each part of the MapReduce code.
Source code:
package co.edureka.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                value.set(tokenizer.nextToken());
                context.write(value, new IntWritable(1));
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable x : values) {
                sum += x.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "My Word Count Program");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        Path outputPath = new Path(args[1]);
        //Configuring the input/output path from the filesystem into the job
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        //deleting the output path automatically from hdfs so that we don't have to delete it explicitly
        outputPath.getFileSystem(conf).delete(outputPath);
        //exiting the job only if the flag value becomes false
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Explanation of MapReduce Program
The entire MapReduce program can be fundamentally divided into three parts:
Mapper Phase Code
Reducer Phase Code
Driver Code
We will understand the code for each of these three parts sequentially.
Mapper code:
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            value.set(tokenizer.nextToken());
            context.write(value, new IntWritable(1));
        }
    }
}
We have created a class Map that extends the class Mapper which is already defined in the MapReduce Framework.
We define the data types of input and output key/value pair after the class declaration using angle brackets.
Both the input and output of the Mapper is a key/value pair.
Input:
The key is nothing but the offset of each line in the text file: LongWritable
The value is each individual line (as shown in the figure at the right): Text
Output:
The key is the tokenized words: Text
We have the hardcoded value in our case, which is 1: IntWritable
Example — Dear 1, Bear 1, etc.
We have written a java code where we have tokenized each word and assigned them a hardcoded value equal to 1.
Reducer Code:
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable x : values) {
            sum += x.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
We have created a class Reduce which extends class Reducer like that of Mapper.
We define the data types of input and output key/value pair after the class declaration using angle brackets as done for Mapper.
Both the input and the output of the Reducer is a key-value pair.
Input:
The key is nothing but the unique words generated after the sorting and shuffling phase: Text
The value is a list of integers corresponding to each key: IntWritable
Example — Bear, [1, 1], etc.
Output:
The key is each of the unique words present in the input text file: Text
The value is the number of occurrences of each of the unique words: IntWritable
Example — Bear, 2; Car, 3, etc.
We have aggregated the values present in each of the list corresponding to each key and produced the final answer.
In general, the reduce method is invoked once for each unique key; the number of reducer tasks themselves is configurable, for example in mapred-site.xml.
Driver Code:
Configuration conf= new Configuration();
Job job = new Job(conf,"My Word Count Program");
job.setJarByClass(WordCount.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
Path outputPath = new Path(args[1]);
//Configuring the input/output path from the filesystem into the job
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
In the driver class, we set the configuration of our MapReduce job to run in Hadoop.
We specify the name of the job, the data type of input/output of the mapper and reducer.
We also specify the names of the mapper and reducer classes.
The path of the input and output folder is also specified.
The method setInputFormatClass() is used to specify how a Mapper will read the input data, i.e., what the unit of work will be. Here, we have chosen TextInputFormat so that a single line is read by the mapper at a time from the input text file.
The main () method is the entry point for the driver. In this method, we instantiate a new Configuration object for the job.
Run the MapReduce code:
The command for running a MapReduce code is:
hadoop jar hadoop-mapreduce-example.jar WordCount /sample/input /sample/output
Now you have a basic understanding of the MapReduce framework, and you have seen how it lets us write code to process huge data stored in HDFS.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Big data. | https://medium.com/edureka/mapreduce-tutorial-3d9535ddbe7c | ['Shubham Sinha'] | 2020-09-10 09:46:51.670000+00:00 | ['Big Data', 'Mapreduce', 'Hadoop', 'Hdfs', 'Hadoop Training'] |
Deus Absconditus et Otiosa — Atheism, Modernism, Mormonism | “People readily swallow the untested claims of this, that, or the other. It’s drowning all your old rationalism and scepticism, it’s coming in like a sea; and the name of it is superstition. It’s the first effect of not believing in God that you lose your common sense and can’t see things as they are. Anything that anybody talks about, and says there’s a good deal in it, extends itself indefinitely like a vista in a nightmare. And a dog is an omen, and a cat is a mystery, and a pig is a mascot, and a beetle is a scarab, calling up all the menagerie of polytheism from Egypt and old India; Dog Anubis and great green-eyed Pasht and all the holy howling Bulls of Bashan; reeling back to the bestial gods of the beginning, escaping into elephants and snakes and crocodiles; and all because you are frightened of four words: He was made Man.” — G. K. Chesterton, The Complete Father Brown Stories (Wordsworth Editions, 2006), 394–395
Before proceeding, it is most important to define one’s terms: in this case, modernism and atheism.
In The True Life, Alain Badiou describes modernity’s dramatic evaporation of the social structures which gave order to the life of the West. The two respective events, according to Badiou, which transitioned boys to men and girls to women were military conscription for males and marriage and childbearing for women. This system held for centuries, if not longer, with obvious strains and flaws: for instance, a woman who had a child out of wedlock had no place in the symbolic order of her society and was thus either exiled, abused, or otherwise disregarded. Conscientious objectors may be something of an equivalent among men. The fact that these individuals were seen as anomalies or threats to the stability of society only serves to emphasize how strongly these institutions were rooted in the cultural soil of the West.
These structures have, by and large, since evaporated, with strange results, according to Badiou. The military and marriage were tantamount to initiation or coming-of-age rituals, signaling to not only society at large but to the participating individual that the latter had transitioned from childhood to adulthood and should thus be treated as such. Thus with the dissolution of these structures comes an odd dissolution of any concrete moment in which a boy may become a man or a girl may become a woman in secular society. Considering this neither a negative nor a positive, Badiou only observes that with the blurring of a previously defined society comes the contemporary sense of being adrift — modernism’s lack of identity, belonging, or purpose. According to Badiou, males are no longer required to enter the military, meaning there is no moment in which they are granted the status of manhood by their culture, leading to a kind of Lost Boys-like perpetual youth; likewise, for females, with the rise of sexual liberation and the decline in frequency of traditional monogamy, girls are seen as always already women, always conferred womanhood and thus never experiencing childhood. Even if one denies Badiou’s division of results among males and females, one cannot deny the modernist experience: the structures of traditional pre-modern societies have mostly collapsed, to the point that they are largely no longer viewed as mandatory, opening up a space of freedom. But not only freedom, but a freedom so absolute that it leads to a kind of existential vertigo. “Anxiety is the dizziness of freedom,” writes Søren Kierkegaard; in more modern terms, you can be absolutely anything, and thus all options are equally relevant or desirable, and thus, paralyzed by options and unable to choose, you become nothing really.
Friedrich Nietzsche described a similar phenomenon in The Gay Science, wherein a madman rushes into the city and cries out:
“God is dead. God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?” — Section 125, trans. Walter Kaufmann
The crowd laughs at the fool, until he runs into a church to pray; afterward, the crowd continues their business. Often the most focus is given to the madman who cries “God is dead,” yet little attention is given to his additional point that “we [ourselves] have killed him,” and almost no attention is paid to his conclusion that “we ourselves [must] become gods simply to appear worthy of it.” The point is not simply that traditional religion or theism has mostly died out and that some people simply cannot accept its end, but that the symbolic order which rendered the cosmos itself intelligible and thus manageable for believers has collapsed. Moderns, according to Nietzsche, have generally responded to this collapse in one of two ways: the first is that of the madman, hopeless nostalgic who can only weep at the grave of God; the second, however, is that of the crowd, who meander through the streets with no destination or purpose.
This is Nietzsche’s atheism, properly defined. Rather than to simply say “there is no God,” as if such a phrase were at all self-explanatory, atheism is pure negation — a-theism, the negation or rejection of theism, or normative religion. Atheism need not foreclose on the mythological, or even the supernatural per se, but is instead a negation of the traditional religions that have structured Western societies for the past millennium or more. This is what Nietzsche means when he insists that the world has slipped into nihilism: not that the world has simply sloughed off old superstitions, but that it has broken out of its previous pre-defined patterns of thought and behavior into a dizzying openness; and in that open void, there is no alternative structure waiting to replace the former failed system. Because atheism is simply negation, it has no positive descriptors to add; subtracting theism, it does not then become a religion of its own, but the mere absence thereof. This absence thus begs the question: what, if anything, may come after the death of God?
Modernism and atheism, far from tragedy or triumph, are purely neutral events — perhaps much like the passage from childhood to adulthood — yet that is not to say that they mean nothing. Rather, together, they are the guardians at the gate who open for twenty-first century societies and individuals a world of inexhaustible inexplicability and absolute freedom — which may be two ways of saying the same thing.
Mormonism, too, has not escaped this seismic transition. According to Jana Riess’ The Next Mormons, though the Church of Jesus Christ of Latter-day Saints (or LDS Church) has maintained a relatively higher retention rate among its youth relative to other American denominations, youths (gen-X, millennials, gen-Z) are still disaffiliating from the LDS Church at a progressively increasing rate. Opinions also continue to shift from older to younger generations within Mormon culture concerning sex, gender, and sexuality; reliance upon organized religion, authority figures, and hierarchy; certainty concerning doctrines and traditional practices — among other things.
Furthermore, similar shifts into this realm of inexplicability and freedom punctuate Mormon history. After the assassination of Joseph Smith, Mormonism’s founder, Mormonism as a whole threatened to disintegrate as Smith’s followers broke into a number of factions in the greatest succession crisis the Mormon movement has since experienced. Contrary to the views of devout members of any Mormon denomination, and though most surviving Mormons followed Brigham Young to Utah (in the Church of Jesus Christ of Latter-day Saints), the initial succession crisis which led to this fragmentation was not simply brought on by the opportunism or narcissism of a few of Smith’s surviving contemporaries. Smith’s sudden murder — foreseen (if at all) by only perhaps a few days, maybe weeks — left no opportunity for him to put into place a definite apparatus to determine who would succeed him. The event of Smith’s death led the Mormons into an undefined space, from which they emerged with a number of contradictory arrangements. This same experience has reoccurred within various Mormon denominations, such as when the Church of Jesus Christ of Latter-day Saints ceased plural marriage, when the Reorganized Church of Jesus Christ of Latter Day Saints (now Community of Christ) transitioned to leaders not directly descended from the Smith family, or when the Fundamentalist Church of Jesus Christ of Latter-Day Saints witnessed Warren Jeffs’ arrest and imprisonment. Each moment was the dissolution of a previous organizing principle, leading to an open space of possibility, from which emerged not simply one triumphant progression but numerous varying developments.
The least controversial statement one could make about the Church of Jesus Christ of Latter-day Saints (the Mormon denomination with which I identify) is that the majority of its American members err on the side of a general conservatism — politically, socially, economically, etc. Though this is by no means universal among American Mormons, and of course says nothing of the non-American majority of Latter-day Saints, this datum may nonetheless indicate how many modern Mormons have responded to the advent of modernism and atheism: opting for the role of the madman weeping at God’s grave rather than that of the aimless drifters in the streets. Among Latter-day Saints of this persuasion, much of the non-Mormon world and even Mormon culture (especially the elements which contradict or conflict with traditional Mormonism) are usually experienced as the “wickedness of the world” and the “influence of the Adversary” — a broadly conservative perspective with obvious theological effects. However, given that this general conservatism is more of a persona — or even a caricature — of the genuine diversity of Latter-day Saints, within and beyond the LDS Church, one wonders if Mormon culture might have other options in considering how to respond to the questions of modernism and atheism. More pointedly, what would come of Latter-day Saints leaning into this inexplicability and freedom rather than resisting it? What may come of a plural marriage of atheism, modernism, and Mormonism?
One may begin by stating that all human discourse and understanding is ultimately bound by perspective: one experiences nothing that is not mediated through their own nervous system, compiled and interpreted by their own brain — experience is most fundamentally subjective. With this in mind, anything one may say, even theologically, will run the risk of being a superficial reification of their own internal lives, a projection of their psychological state onto the universe. For this reason, perhaps one may attempt to reverse the causal flow; rather than speaking theologically about the universe, one may attempt to speak theologically about oneself and their own subjective experience. In this sense, to speak theologically, even in a particularly Mormon context, would be to describe back to oneself and others their present modernist, atheist experience — the experience of feeling the structures which once provided them with order, identity, purpose, and belonging whittle away against the flint wheel of impermanent reality. What is it like to be a Mormon who is, whether or not they realize it, already a modernist and an atheist?
In The Phenomenology of Spirit, G.W.F. Hegel describes this subjectivity, positing that “everything turns on grasping and expressing the True, not only as Substance, but equally as Subject” (trans. A. V. Miller). For Hegel, subjective experience — one’s conscious experience as an individual person — was central and paramount to his philosophy. However, nearly since its publication in 1807, Hegel’s Phenomenology has been — according to some of his latest interpreters — woefully misread.
To understand this misreading, one must turn to Immanuel Kant. Kant’s contribution to Western philosophy was his distinction between a “thing-as-appearance” and a “thing-in-itself”; in other words, Kant described the human subject’s inherent inability to grasp reality in itself, and therefore their reliance upon the limited representations their mind can produce of this or that narrow selection of reality. Ultimately, Kant’s distinction between these two categories is a distinction between two forms of knowing: to believe one possesses the “thing-in-itself” is to deceive oneself into believing one has perceived objective reality, that one’s knowledge is certain and unquestionable; and to understand that one possesses only a “thing-as-appearance,” or a diluted representation of the thing itself, is to recognize the limitations of one’s perceptions, and to scrutinize one’s understanding. The former naively assumes they simply see the world “as it is,” while the latter recognizes that they see the world only “as they are,” mediated by the narrow filter of their senses and mental faculties.
Arthur Schopenhauer, a passionate student of Kant, articulated this view further in The World as Will and Representation. Schopenhauer recognized that the human nervous system can only absorb so much sensory data from the world around it, which the brain then compiles into a working simulation (a virtual reality, if you will) in which the subject lives, moves, and has their very being, so to speak. In other words, Schopenhauer recognized some of the consequences of the fact that the human subject only experiences the “thing-as-appearance” (Vorstellung), never truly accessing the “thing-in-itself” (Wille). He was also a sharp critic of Hegel, the former believing the latter had either misunderstood or outright rejected Kant, describing Hegel’s work as a “pseudo-philosophy” audaciously attempting “to comprehend the history of the world as a planned whole” (Schopenhauer, World as Will and Representation, vol. 2, trans. E. F. J. Payne [Dover, 1966], 442). According to Schopenhauer, in describing “grasping and expressing the True, not only as Substance, but equally as Subject,” Hegel was subscribing to the very epistemological certainty — the naïve belief in access to the “thing-in-itself” — which Kant had critiqued. Worse yet, he claimed, Hegel had made not only an epistemological statement, but an ontological one: the human subject could perceive any and all things, even things not directly present, as in the pre-Kantian metaphysics of Reason, which could deduce the mysteries of the universe from a contemplative distance alone. And thus for the following two centuries, most commentators read a kind of “amputated” Hegel, to use Todd McGowan’s term: attempting to rescue Hegel from his own apparently embarrassing ontology, numerous commentators reduced him to a philosopher of epistemology or anthropology merely. For nearly 200 years Hegel has been read as a commentator on subjectivity alone, his statements on Being itself having been subtracted.
However, as recently as 1989, with the English publication of The Sublime Object of Ideology, Slavoj Zizek began a bold re-reading of Hegel — or, perhaps more accurately, a more faithful interpretation of Hegel’s original intent. According to Zizek, Hegel’s equation of subjectivity and substance is not a rejection of Kant, but a faithful tracing out of the consequences of Kant’s work. As Kant and Schopenhauer proposed, and as Zizek insists Hegel believed, one never experiences reality-in-itself, only narrow representations thereof; like Anish Kapoor’s Cloud Gate, the subject, a field of consciousness, is a polished surface reflecting back (in warped fashion) its immediate surroundings. In Hegel, subjectivity is experienced as a kind of contradiction between oneself and the external world. Similar to Schopenhauer, Hegel recognizes that the human mind can only represent a narrow portion of its external world back to itself and others. One may compare this limited subjectivity or consciousness to cartography: one explores a given terrain, then distills that experience into a map. The map and the terrain are not the same, and sometimes they can even contradict one another. In this sense, a conscious subject in the world is like a map of the United States hanging on a wall somewhere in the United States. And because consciousness and reality, like a map and its terrain, can be disjointed and out of sync with one another, subjectivity possesses an inherent division or internal fracture — a contradiction. Hegel’s contribution, according to Zizek, was thus not to reject Kant’s epistemology, but to tease out its consequences: if a subject or being arises from and is a part of Being, and if this subject experiences this inherent contradiction within themselves, then something about Being itself must also possess this internal contradiction.
What Hegel rejects is not Kantian epistemology but Cartesian dualism, which insists that the human mind is a foreign object in the world. Rather, Hegel proposes, the subject grows out of the world — a being from Being — like a wave from the sea, not inventing this divisive contradiction in its own subjectivity, but bringing with it the division which always already pervades Being itself all the way down. Reality at its most fundamental is fragmented, a contradiction reeling against itself as it continually unfolds into countless different beings and things.
One may read this Hegelian division theologically, as well. In Godhead and the Nothing, Thomas J.J. Altizer proposes that this is in fact the central message of the Christian epic: at the beginning of time, Being — or Godhead, as opposed to God, to use Altizer’s terms — rips itself to pieces in a “primordial sacrifice,” dividing itself in a way which creates not only good but evil, life and death, contentment and horror. Between all things, Altizer states, is an infinite abyss or nihil, the Nothing of Godhead itself, which separates all things: people from themselves, one another, the world — everything. Uniquely, this division is also transposed into God, who also becomes a fragment of the larger Godhead constantly tearing itself to pieces in creation. This primordial sacrifice, eternally ongoing, is recapitulated in various ways by human beings, notably in rituals of division and sacrifice, sometimes even involving the shedding of blood; and ultimately in their own deaths, when they themselves are “ripped to pieces” and cast to the four winds of the world which survives them. In a sense, this is a quasi-Gnostic interpretation of the human condition: while normative Judaism, Christianity, and Islam see Creation and Fall as two distinct events occurring in succession, Gnostic variants of these traditions often describe Creation and Fall as the same event. To be created is to always already be fallen, as, in order to create, Godhead must divide itself from itself.
In the closing chapter of The Puppet and the Dwarf, Zizek describes how deeply this primordial sacrifice or Hegelian contradiction goes by analyzing the narrative of Job, specifically the theological friends who come to “console” him. According to Zizek, Job’s friends represent the typical human response to this fundamental division in themselves and Being itself, and the violence and ignorance (such as their blindness to Job’s profound suffering) which result therefrom. To cope with the chaos of Being, humans construct ideologies through which to interpret the unintelligibility of life: in Job’s case, his friends approach with tightly-defined theological explanations for precisely why he is suffering, and even what he can do to change that; but when Job rejects their explanations, ostensibly “threatening” their existential security, the friends recoil and become enemies, turning on Job. According to Job’s friends, they have simply sought to make sense of the world; in reality, they have sought to explain away the world.
In the end, after Job’s steadfast rejection of his friends’ attempts to explain his suffering by fitting him neatly into their various ideologies, God descends and rejects the explanations offered by Job’s friends, instead endorsing Job himself (cf. Job 42:7). Curiously, however, Job has offered no explanation of his suffering which might compete with the explanations offered by his friends — he has simply rejected any attempt to integrate his experience into their fantasies, ideological or otherwise. However, for Zizek, the question which follows is not so much “how do we cut through fantasy to reality?” but “how does reality produce fantasy at all?”
Following Zizek’s example, one may utilize the three-part psychological structure pioneered by Jacques Lacan of the Symbolic, the Imaginary, and the Real. Imagine a universe entirely devoid of conscious beings, no one to analyze or evaluate any of its contents — this is the Real. Now plant one conscious human on a desert island in the middle of nowhere, entirely alone, and grant them the discursive thinking natural to a conscious person. They can sense external reality, represent it back to themselves (perception, memory), and respond to those representations (plans) — this is the Imaginary. Add two or three more humans, or a thousand, and together they leverage the Imaginary to create a society and culture with social norms and a status quo, so much so that these collective imaginative structures seem almost as if to take on a life of their own — this is the Symbolic. The Real precedes consciousness and its contents, while the Imaginary is a necessary and involuntary byproduct of consciousness, but the Symbolic is the active principle of consciousness, its attempt to not only perceive but alter its environment — including other conscious subjects who may enter its field of awareness.
In a manner of speaking, while the Real and Imaginary are in themselves natural, the Symbolic is a kind of recoil in response to them, an attempt to beat them back, as it were. The Symbolic is a collective attempt at holding back the trauma of the Real and the Imaginary, whether tragic circumstances or unwanted thoughts and feelings. The Symbolic is the realm of ideology — political, religious, or otherwise — a series of potentially utilitarian fantasies operating by and large unconsciously within the subjects themselves. While the Symbolic is objectively an emergent principle of the Real, the subject typically experiences things the other way around: they believe the Symbolic is an organizing principle around which the Real arranges itself, rather than something the Real itself has produced.
Taking many forms — God, State, Father, Mother; the Big Other guaranteeing meaning and intelligibility — the Symbolic is that element which has largely broken down for the West in the advent of modernity. More precisely, the Symbolic has fractured: once a mostly universal means of structuring one’s experience of the Real, the Symbolic has formed fault lines, delineating nearly irreducible competing factions. These competing factions can be cultural, political, philosophical — any interpretive system, or ideology (to retain Zizek’s term), which creates meaning, order, and manageability in one’s subjective experience. Ideology, unconscious fantasy, is the levee one erects to hold back the traumatic floodwaters of the Real — reality as such. The Symbolic is the theology of Job’s friends, who cannot see the suffering Job for the fact that they are too busy “explaining” him — and one can see myriad correlates throughout the history of Western theism and the institutions of normative religion. Or, to return to Badiou’s examples, one may consider the previously unquestioned and unquestionable customs of military conscription, monogamy, and childbearing, once unconditionally expected of Western society generally. These ideologies are the “God” whose death the madman announces in a panic and which the crowd awkwardly shrugs off. The dreadful point is that the crowd forgets that the Symbolic serves a purpose, while the madman naively presumes the Symbolic is contingent upon something which precedes it — both refuse to face the Real.
For Altizer and Zizek, this tension is illustrated in Jesus Christ, particularly his destitute cry from the cross: “My God, my God, why have you forsaken me?” (Mark 15:34, New Revised Standard Version). In this climax of the crucifixion, God withdrawing from Jesus, one may see modernity itself: the absence of a Big Other to guarantee meaning in one’s suffering, let alone their existence, leaving them in what Zizek calls “the desert of the Real.”
G.K. Chesterton describes this event well:
“When the world shook and the sun was wiped out of heaven, it was not at the crucifixion, but at the cry from the cross: the cry which confessed that God was forsaken of God. And now let the revolutionists choose a creed from all the creeds and a god from all the gods of the world, carefully weighing all the gods of inevitable recurrence and of unalterable power. They will not find another god who has himself been in revolt. Nay (the matter grows too difficult for human speech), but let the atheists themselves choose a god. They will find only one divinity who ever uttered their isolation; only one religion in which God seemed for an instant to be an atheist.” — G. K. Chesterton, Orthodoxy (Ignatius, 1995), 145
According to Altizer and Zizek, in proper Hegelian fashion, Christianity extends the contradiction within the subject to God, wherein God becomes alienated from God as any other subject alienated from themselves. In The Monstrosity of Christ, Zizek explains:
“[I]n Freudian-Lacanian terms: Christ is God’s ‘partial object,’ an autonomized organ without a body, as if God picked his eye out of his head and turned it on himself from the outside. We can guess, now, why Hegel insisted on the monstrosity of Christ. “It is therefore crucial to note how the Christian modality of ‘God seeing himself’ has nothing whatsoever to do with the harmonious closed loop of ‘seeing myself seeing,’ of an eye seeing itself and enjoying the sight in this perfect self-mirroring: the turn of the eye toward ‘its’ body presupposes the separation of the eye from the body, and what I see through my externalized/autonomized eye is a perspectival, anamorphically distorted image of myself: Christ is an anamorphosis of God.” — Zizek, Monstrosity of Christ, 82
In this divine absence, or self-division, bereft of a Big Other to guarantee a meaning to his experience (which would ultimately only dissociate him from it), Jesus enters the space of absolute freedom and chooses to lean into the inconvenient and fatal Real. With no Symbolic order gazing over his shoulder, insisting upon or even compelling him to do anything in particular, Jesus may produce an action that is entirely authentically his own — he may receive reality as it is, even in its “antagonism,” and choose for himself how to respond.
Elsewhere in The Monstrosity of Christ, Zizek articulates the ramifications of the crucifixion of Jesus:
“[W]hen people imagine all kinds of deeper meanings [to their experiences] … what really frightens them is that they will lose the transcendent God guaranteeing the meaning of the universe, God as the hidden Master pulling the strings — instead of this, we get a God who abandons this transcendent position and throws himself into his own creation, fully engaging himself in it up to dying, so that we, humans, are left with no higher Power watching over us, just with the terrible burden of freedom and responsibility for the fate of divine creation, and thus of God himself. Are we not still too frightened today to assume all these consequences of the four words [of Chesterton, ‘He was made Man’]? Do those who call themselves ‘Christians’ not prefer to stay with the comfortable image of God sitting up there, benevolently watching over our lives, sending us his son as a token of his love, or, even more comfortably, just with some depersonalized Higher Force? … “Hegel’s underlying premise is that what dies on the Cross is not only God’s earthly representative-incarnation, but the God of beyond itself … “That is to say: what dies on the Cross is precisely the ‘private’ God, the God of our ‘way of life,’ the God who grounds a particular community.” — Zizek, Monstrosity, 25, 29, 295
In a literary vein, this interpretation of the crucifixion of Jesus — of God the Symbolic — may be most clearly seen in Nikos Kazantzakis’ The Last Temptation of Christ, namely the novel’s portrayal of the execution of Jesus. Having spent his prophetic career in a wrestle between his own desires and the call of God, Jesus wishes for nothing more than domestication, to settle down with a woman and raise a family. Upon the cross, he passes out, dreaming that an angel of God has come down, helping him off the cross, telling him that God has seen his devotion and decided not to force Jesus to go through with crucifixion after all. The angel takes Jesus to Mary, the sister of Lazarus, whom he marries and with whom he fathers children; he even marries her sister Martha, expanding their family. Jesus receives everything he ever wanted. However, after a charged encounter with Paul, who preaches a Messiah who was killed for his and all others’ sake, Jesus realizes he has not had his desires fulfilled at all, but that he is only in a dream, and thus still upon the cross, unconscious. Jesus then chooses of his own volition to wake up, to return to consciousness while hanging from the cross, where he cries out in relief — where he surrenders himself to the Real — “into your hands I commend my spirit” (Luke 23:46, NRSV) — a moment Kazantzakis describes in the closing passage of The Last Temptation:
“Jesus rotated his eyes with anguish, and looked. He was alone. The yard and house, the trees, the village doors, the village itself [where he lived with a family in his dream]— all had disappeared. Nothing remained but stones beneath his feet, stones covered with blood; and lower, farther away, a crowd: thousands of heads in the darkness. “He tried with all his might to discover where he was, who he was and why he felt pain. He wanted to complete his cry, to shout LAMA SABACTHANI. . . . He attempted to move his lips but could not. He grew dizzy and was ready to faint. He seemed to be hurling downward and perishing. “But suddenly, while he was falling and perishing, someone down on the ground must have pitied him, for a reed was held out in front of him, and he felt a sponge soaked in vinegar rest against his lips and nostrils. He breathed in deeply the bitter smell, revived, swelled his breast, looked at the heavens and uttered a heart-rending cry: LAMA SABACTHANI! “Then he immediately inclined his head, exhausted. “He felt terrible pains in his hands, feet and heart. His sight cleared, he saw the crown of thorns, the blood, the cross. Two golden earrings and two rows of sharp, brilliantly white teeth flashed in the darkened sun. He heard a cool, mocking laugh, and rings and teeth vanished. Jesus remained hanging in the air, alone. “His head quivered. Suddenly he remembered where he was, who he was and why he felt pain. A wild, indomitable joy took possession of him. No, no, he was not a coward, a deserter, a traitor. No, he was nailed to the cross. He had stood his ground honorably to the very end; he had kept his word. The moment he cried ELI ELI and fainted, Temptation had captured him for a split second and led him astray. The joys, marriages and children were lies; the decrepit, degraded old men who shouted coward, deserter, traitor at him were lies. All — all were illusions sent by the Devil. His disciples were alive and thriving. 
They had gone over sea and land and were proclaiming the Good News. Everything had turned out as it should, glory be to God! “He uttered a triumphant cry: IT IS ACCOMPLISHED! “And it was as though he had said: Everything has begun.”
Interpreting Martin Scorsese’s film adaptation of Kazantzakis’ novel, Zizek focuses on Jesus’ cry of abandonment to the Big Other who has failed to make sense of his outstanding failure and horrific execution. According to Lacan, God is the Big Other of the Hebrew Bible, the great che vuoi? or “what do you want?” making traumatic yet gripping demands of the people. In Jesus’ destitution, however, he realizes that this Big Other is not the ground of Being but an emergent property of Being, arising from his own mind; not an externality which he asks che vuoi? but something internal to himself. And thus he returns to the Real. In either case, novel or film, one finds not necessarily the historical Jesus upon the cross, but certainly the literary Jesus of Gethsemane who prays “not my will, but thine, be done” (Luke 22:42, King James Version; cf. Matthew 26:39, Mark 14:36, John 6:38) — a Jesus who surrenders himself willingly to that which exceeds his control, that which is most real.
This mytheme of the self-sacrificing Jesus parallels Hegel’s own notion of reconciliation. Summarizing Zizek’s own reading of Hegel, Todd McGowan expresses this messianic-Hegelian reconciliation well:
“Hegel doesn’t stop with uncovering contradiction in being that corresponds to contradiction in thought. Instead, he contends that the contradiction in being is even more intractable than the contradiction in thought. Most philosophers view knowledge as a movement from thought to being: thought aspires to the knowledge of being. But Hegel reverses this relationship. Thought has a higher status than being and thus can tell us about the nature of being. Though being has a chronological priority — obviously being is a necessary condition for the emergence of thought — thought has a logical priority because it has a capacity for enduring and reconciling itself with contradiction that being lacks. Being simply succumbs to contradiction without gaining any purchase on it. “Reconciliation is the great achievement of thought. Through the act of reconciliation, thought adopts a different relationship to contradiction than being does. It doesn’t overcome contradiction but grasps its necessity. As Zizek puts it in Less Than Nothing, ‘what Hegel calls “reconciliation” is, at its most basic, a reconciliation with the antagonism.’ [Pg. 951.] Even though antagonism or contradiction acts as a limit or obstacle to thought, thought nonetheless has the ability to grasp this limit as what defines it rather than as what it must surmount in order to realize itself. Spirit is, for Hegel, thought’s capacity to recognize contradiction not simply as an obstacle to overcome but as i[t]s own innermost condition of possibility. Reconciliation marks a triumph through the embrace of the necessity of failure.” — McGowan, “The Insubstantiality of Substance, Or, Why We Should Read Hegel’s Philosophy of Nature,” International Journal of Žižek Studies 8:1 (2014), 13
Finally, with this Hegelian background fully formed, one may propose a radical reinterpretation of Mormonism itself, utilizing the narrative structure of the temple rituals practiced by the Church of Jesus Christ of Latter-day Saints — the initiatory and the endowment:
In the temple, one begins in the time of “pre-existence,” time before time, where they are given a unique name they are not to share with any other — an identity so fundamentally their own that it cannot be expressed to another. In parallel to this antechamber to time, one may read the Mormon narrative of a war in heaven, a primordial fragmentation of the divine family, a war “before the foundation of the world” between not opposing principles but siblings, both children like any other of a God who can only watch as the cosmos begins to fray and fragment. This God, too, joins in the disintegration: in Mormonism, God is a person, composed of “flesh and bones as tangible as” anyone’s (D&C 130:22); God knows the “pains and sicknesses” of finitude on a visceral, physical level, intellectually and experientially (Alma 7:11–13); God is not exempt from the suffering of existence, nor is God aloof from the human condition —even God can weep (Moses 7:48).
From this primeval time before time, one proceeds in the temple’s ritual narrative to the creation of the world, where they encounter a trinity substantially different than that of normative Christianity: rather than Father, Son, and Holy Spirit, one encounters Elohim (a plural noun in Hebrew), Jehovah (Christ), and Michael or Adam (humanity). Rather than a tight priesthood hierarchy organized by genealogy, one may instead read these three figures in terms of set theory: the proper class Elohim or the Real — the plurality of Godhead or nature tearing itself to pieces — from which emerge, among numerous other sets, Michael the human Imaginary and Jehovah the human Symbolic. Thereafter, Michael-as-Adam is cut off from Elohim and Jehovah in forgetfulness; he is cut off from himself in the division of male and female through sleep; they are cut off from the immortality of changelessness through eating the fruits of finitude and becoming mortal and bearing children distinct from themselves. Ultimately, they find themselves in the “lone and dreary world” of the human condition — the world of Hegelian contradiction all the way down, subject and substance.
Then begins the process of reconciliation, returning to the Elohim from which Adam has become inherently alienated. This is signified through ritual acts such as perfectly unified prayer with others, against whom one must hold no resentment; and an embrace with the Lord through the very veil which separates the Real from even itself — the dividing contradiction running all the way down; Altizer’s nihil ensuring the gaps between beings and things, Zizek’s less than nothing in excess of something and nothing — all while Adam carries the name they received before time began, the name uniquely their own and no other’s.
Additional rituals are then performed: the previously performed rituals, repeated this time in remembrance of the participant’s deceased family and loved ones — by name, with attendant birth and death dates, places of residence, all particular details to testify to this person’s irreducible individuality. Beyond this are sealings, wherein spouses, siblings, children and parents, friends and loved ones, ancestors and descendants, come together in eternal relationships — a Hegelian reconciliation which does not dissolve the differences of each person involved, but honors those differences, even in their “antagonism.”
Finally, every week, Latter-day Saints gather to renew these rituals through one central ceremony — sacrament — wherein bread is ripped to pieces, water is meted into thimble-sized cups, and each person eats and drinks in remembrance of both contradiction and reconciliation. And thus, in narrative and ritual, the fragmented universe holds itself together even in its division, a stained-glass cosmos of multicolor pieces, woven together by the wrought-iron less than nothing.
— — — — — — — — — —
Recommended Reading

— Nathan Smith, “Deus Absconditus et Otiosa: Atheism, Modernism, Mormonism,” Interfaith Now (Medium), December 8, 2019: https://medium.com/interfaith-now/deus-absconditus-et-otiosa-atheism-modernism-mormonism-5b7a9565a3cf
The Halt of a Breath

Photo by Christopher Campbell on Unsplash
Inhale, exhale, lungs fill, lungs deplete, hold, release, live, die
“When holding your breath, you can learn to change your brain activity (frequency of electrical impulses) and go into a state of flow. In flow the notion of time disappears and you become what you do.”
— Stig Severinsen
In flow, the notion of time disappears and you become what you do.
The notion of *time disappears*
and you become what you do.
I only repeat what’s so radically divine and almost incomprehensible. Might you feel the same? When you hold your breath, you are literally altering the body’s responses: oxygen consumption, heart rate, blood pressure, muscle tension, breath allowance, thoughts and imagery. Instead of wondering whether to fight or flee — the question forced on you by the rapid breathing of the so-called emergency response — your body wills itself to calm until it reaches the point of stillness, entering the parasympathetic response: you slow your breaths and then halt them altogether, holding them while your lungs are full.
But of course, our bodies aren’t always responding to emergencies, and sometimes our bodies are even confused about what warrants the emergency response. An example? Before we had grocery stores and the ease of shopping around for food, we humans had to fend for ourselves, whether we were farming, hunting, or gathering. What if a bear were to interrupt a man’s hunt? Well, the fight-or-flight response kicked in, and he was forced to make a decision, and fast! But now… now, we have it quite easy in terms of food. We go to the store, walk the aisles, try to avoid contact with other humans — lest we get the newly spread coronavirus — and try to decide which brand of oats to get. Not only which brand, but what type: rolled, steel-cut, organic, quick, whole, chopped? The options are nearly endless, and too many choices can seriously stress the body, even if it is just deciding which fucking oatmeal to buy. In either situation (a bear, or too many kinds of oatmeal), our bodies begin to enter panic mode: our breaths become more rapid, our heartbeats quicken, sweat exudes from our pores, our oxygen supply limits itself.
Most humans don’t know how to react to such involuntary (I say involuntary, because in times of emergency, our bodies respond as if on auto-pilot — sometimes at a level in which we aren’t even aware) responses, so they allow their breaths to become shallow and rapid, exposing them to the life-threatening force we like to call stress.
But have no fear, nor stress for that matter, because there is an answer, and it’s quite simple:
hold your breath when panic ensues, consuming your every thought, action, reaction, movement, response.
Holding your breath not only negates the body’s response to emergencies, but also reverses the body’s answer to that once-critical question of fight or flight. We alone have the power to calm our bodies to the point of tranquility, as if we were floating in water, devoid of thoughts, senses, emotions, awareness. But the ironic thing is… we aren’t actually devoid of any of those things; instead, we become more cognizant of our thoughts, senses, emotions, awareness. We understand our situation at hand more properly when we simply hold our breath, and not just in times of crisis. We can also train our bodies to better handle emergencies when they arise (albeit most emergencies these days aren’t actually emergencies) by building breathing techniques into our routine of life.
Before learning to properly hold your breath, it’s crucial to first learn to properly inhale and then, exhale, according to your body’s physical and even mental capacity. To learn more about the art of breath-holding, please visit Nicklas Johansson’s page as a reference and guide.
Holding the breath alters our perception of time because it opens our awareness to that of a yogi. Rather than focusing on our past, our future, our courses of action that we can no longer change, our upcoming scheduled events that are noted on the calendar, we instead focus on the now, completely wrapped up in the moment at hand, serenely soaking up all of our surroundings and experiences and thoughts and thus, perceptions of the current situation. We are no longer anything but ourselves, following the path of the words: I am. No more words to follow I am.
It’s not: I’m stressed, because you aren’t stress. It’s: I am, because you are.
No more: I’m busy.
No more: I’m in an emergency.
No more: I’m worried.
No more: I’m overwhelmed.
No more: I’m my job.
Now, it’s I am.
So we inhale, filling our lungs and holding our breaths, as we live.
Then we exhale, depleting our lungs and releasing our breaths, until we die…
until life as we know it, has stopped completely, and we no longer are.
We alter our perception of time as we hold our breaths, regaining control of our inner calmness that would otherwise be suppressed, buried to the point of almost no return, until we resuscitate our desire to live, urging us to find the awareness of the moments passing by us like the hands on the clock — never stopping, always flowing. It’s when we enter that flow state that all else seems to stop, muted in the background of our momentous now, giving us free rein over the remarkable present, to which we are attentive, appreciative, aware. We become what we’ve always been: ourselves.
And much like the human bodies that can hold their breaths to fulfill that yearning of wanting to feel utter bliss, the universe too, can learn to hold its breath and then, stop life altogether, stopping us.
“And then, our universe will be in a state of absolute equilibrium. All life and thought will cease, and with them, time itself.”
— Ted Chiang
According to Ted Chiang, the universe’s breath allowed us humans to be alive, having full range over our thoughts and even, time. But like the constant flow of air in our bodies maintaining our livelihood, the universe too must eventually allow its flow of breath to stop, ceasing the existence of humans, thoughts, and thus time.
Inhale, exhale, lungs fill, lungs deplete, hold, release, live, die.
This is the cycle of flow, of time, of perception, of experiences, of life itself.
So live fully and learn to halt your breath once in a while, because the moment we have right now is all we have. And the breaths we allow to enter our body and then, leave us, allow us the honorable awareness to appreciate time and what it has to offer us now, as we breathe in and breathe out and laugh and cry and jump and lie and play and rest and live and then, die.
The universe too, must eventually die, and with it us. But we’re alive now, and that’s quite possibly the only thing that matters in this weird time-space continuum in which we all live.
Remember: I am, because we are.
And you alone control your notion of time, and I, mine.

Source: https://medium.com/age-of-awareness/the-halt-of-a-breath-34221f187bb2 (Natalie Jeanne Maddy, 2020-05-25). Tags: Self-awareness, Time, Life, Energy, Breathing.
8 Skills that meetings can help you get better at!

Meetings are fun and productive — keep your ears and eyes open!
Meetings are perfect grounds for learning, if used well! When you start looking at meetings as learning opportunities, you can definitely notice the array of skills that meetings will help you hone.
Punctuality:
start and end meetings on time, stick to topics and time limits. Folks will have a lot to say, but eyes on the clock please!
Planning:
every meeting is to be planned. Right from who will attend, who will take notes, the discussion points to take up, the actions to remind and the decisions to be made — everything has to be done in the time frame allotted.
Note-taking:
Taking notes is an art. Summarising a 60 minute meeting into a minute’s read is an art. You would have to cover all the major points, the nuances, the actions and the decision clearly. One of my customer bosses in the UK taught me to make notes and I have been an expert minutes taker ever since! We make the mistake of delegating this to the youngest one in the room — big mistake! The minutes guy is the most powerful one in the room!
Memory:
you will be forced to remember the precedents and the various conversations. Context recollections and following up on action items are fun activities, especially when you are leading huge programs.
Meetings aren’t an alternative to work, as they are often made out to be. Rather, they are fundamental to getting people together and ensuring they work well together!
Listening skills:
immerse yourself in the multiple conversations that happen, yet keep track of the individual threads. Understand the tone used, gauge the language used and watch for any escalations or scenarios that might need intervention!
Assertiveness:
from guiding the meeting well, to enforcing meeting rules, you got to be assertive in the room. People have to have an update on their actions, timelines have to be respected and there shouldn’t be any domination in the room. Everyone should be free to speak their mind and at the same time, disciplined to not stray!
Network:
in bigger projects, especially, you will be meeting and guiding many senior individuals across the organization. You will be required to introduce people, make sure they understand the context well, are able to contribute their bits and most importantly understand their role in the bigger scheme of things.
Coordination:
you can be the person who brings it all together. From individuals to projects, you will be the one with the bigger picture finely sorted and will assist everyone to have the same level of clarity.
I love making my meetings count — get real work done in meetings! All the best!

Source: https://medium.com/growth-catalyst/8-skills-that-meetings-can-help-you-get-better-at-681aa73a0d5a (Bharath Kumar Balasubramanian, 2019-12-26). Tags: Work, Management, Meetings, Productivity.
Holiday Gifts from Prism & Pen

by Artemis Shishir
Happy holidays!
This year has been nightmarish, but with it almost over maybe we can all relax with some stories and a hot cup of tea or coffee. Unless you prefer something stronger! Whether you like fiction or non, we’ve got great gifts for you.
Liam Heitmann-Ryce offers both fiction and memoir about his travels in Germany. theoaknotes (I’m trans nonbinary, mixed black and white, queer, the child of two moms, a Gen Z, a K-pop fan, a practicing minimalist, a college student, a dancer, a reader, a philosopher, and an expert cuddler) gives us some of their favorite queer quotes of the year.
And James Finn tells a story about Grandma Finn and how she taught him that religion must never excuse bigotry — all while exposing bigotry in a supposedly charming southern events center.
There’s so much to get into this week! So, let’s get started with our Editor’s Picks!
Editor’s Picks —
Creative Nonfiction
I’ll Have Just One of Those Brownies
Ever been curious to try marijuana? Loren A Olson MD writes about how some untimely experimentation almost cost him his marriage, military career and profession. Don’t expect an anti-drug piece, but DO expect to be entertained!

Source: https://medium.com/prismnpen/holiday-gifts-from-prism-pen-14eabe7df6ab (James Finn, 2020-12-27). Tags: LGBTQ, Storytelling, Fiction, Poetry, Creative Non Fiction.
An Institution Plagiarized My Stories — Here Is How I Removed Them From Their Website

I often google search my popular content to know if there are any plagiarized stories on the web. Until last month, there were none, but when an institution plagiarized one of my recently published Better Marketing posts, I was clueless about what to do. Previously, in the Facebook group, I had heard writers talking about their content being plagiarized by many external websites, but very few explained how to solve such issues.
The website had published my article with no modifications and no credit to me. What is worse, that particular pirated post had a higher rank on google search than the original content. I informed Medium support of the issue and also the Better Marketing editors. The reply I got from Medium was:
Hi Suraj,

Sorry you are experiencing that. Unfortunately, there isn’t anything Medium can do to prevent people from copy and pasting, and in essence, “stealing”, the text from the Medium post page. We are actively working to identify and stop these sites right now.

As good citizens of the internet, Medium completely honors the DMCA and all takedowns we receive. So when we are alerted to copyright infringement that occurs on Medium, we remove it until the matter can be resolved legally. We offer a public form to initiate this process. This site that has taken your work does not appear to have that in place, nor any contact information even, and embrace anonymity above all else. That’s troubling.

So what can you do? As the copyright owner, you need to make a claim against them for copying your work. As there is no contact information on the site, you can do a Whois lookup to find any other information on the site: https://www.whois.com and https://hostingchecker.com/

You may need to start higher up the food chain, possibly by contacting their DNS registrar, as copyright violation should be against their terms.

Again, we are working to identify and stop this behavior. However, you, as the copyright owner, have much more power than us as an interested third-party, but non-copyright holder.
I received a follow up from the editors of Better Marketing too, but there was nothing that they could do. I thought it is only one article, and I can wait for editors to take the steps. In essence, I ignored it.
I got restless when one of my friends informed me about another instance of plagiarism. The same institution had copied my other post within an hour of publishing, and there were a handful of other posts also, all from Better Marketing. I knew that if I ignored the issue, they would continue to plagiarize my work.
My friend suggested that I contact the server host, so I took multiple steps without delay. As a result, the institution removed all Better Marketing plagiarized content within twenty-four hours of my intensive complaints. I’m not sure what exactly made it happen, but it worked. Let me explain to you my steps so that you can repeat the same if you ever encounter such problems.
Online support with the host-server
There was no email address to contact the institution. So, I got the host server's name, searched them on google, browsed their site, and waited for online support. I then explained my concern to their representative through chat. Although they had asked me to email my issues, I told them I have no time to follow up on their progress report. They registered my complaints and promised to resolve them.
Complain through email
To leave no stones unturned, I also emailed the host-server. Emailing them serves as proof. They responded to my email, saying they would contact the institution as soon as possible.
Reaching the institution’s guardian through social media
The only clues I had to reach the institution were a few officers mentioned on their website. So, I contacted the head of the institution on Twitter and Linkedin. I know it was a harsh and bold step, but I believed they were also responsible if their organization is doing such illegal activities. They needed to know about the issue, and they should be the ones to make sure it wouldn’t ever happen again. I expressed my discomfort through both Linkedin posts and tweets.

Source: https://medium.com/better-marketing/an-institution-plagiarized-my-stories-here-is-how-i-removed-them-from-their-website-ca31c92254f4 (Suraj Ghimire, 2020-10-27). Tags: Writing, Marketing Strategies, Blogging, This Happened To Me, Plagiarism.
Not Acknowledging Your Pain Might Be Hurting You More

The Indicator of Discomfort
As Medical News Today states, physical pain is an unpleasant sensation and emotional experience that links to tissue damage. It allows the body to react and prevent further damage.
Pain on a psychological level can be a similar indicator of emotional anguish. If you are feeling consistently low, maybe it’s an indication that something’s wrong with your surroundings — a way for your subconscious to rebel against your current situation and prompt you to get out of it.
If you turn away from it and only focus on the positive, how would you know there’s something wrong? Ignoring the symptoms and sweeping uncomfortable realizations under the carpet would only mean forcing yourself to adjust to the distress you’re in. If you don’t acknowledge the pain and try to get to the root cause, how can you reach a viable solution?
Here are some ways I’ve adopted to identify the root cause. You can adopt the same to embrace growth:
Don’t fill your head with external stimuli all the time
In other words: don't check your phone each time you’re bored. Sure, podcasts and audiobooks are a great way to improve your productivity, but they snatch away the opportunity to spend time with your thoughts.
There was a time I used to run away from being alone because I was terrified of what demons I’d have to deal with. But in 2020, if there’s one thing I’ve learned, it’s this: self-awareness comes with a price, but it’s the best (and probably the only) tool for growth.
Yes, it’s hard to listen to your thoughts, but you’ve got to do it to learn what kind of person you are. Without this, no amount of self-help content will help. Silence offers opportunities for self-reflection and daydreaming, which activates multiple parts of the brain. It gives us time to turn down the inner noise and increase awareness of what matters most.
Write what you feel and be honest with yourself
It’s difficult for me to be completely transparent during conversations with people. However, my journal is the friend I turn to for solace. It’s where I write all my thoughts out and am unabashedly honest with myself.
Journaling has cleared my head of a ton of clutter and helped me acknowledge my innermost feelings and desires. You might find some emotions too hard to acknowledge, but remember that it’s your journal. No one else is going to read it. Be as honest and open with yourself as possible. Reflective journaling can be a great tool for self-awareness and personal growth. It can help you prioritize problems and track any symptoms so you can recognize triggers and learn ways to better control them.
Allow yourself to just be
There’ve been times when something triggered me, my immediate instinct was to start binge-watching a show or lose myself in a book. But several sessions with my therapist have helped me understand that such acts will only cure the symptoms, not the root cause.
Instead, my therapist advised me to embrace the sadness. She said it was okay to wallow in self-pity for as long as I needed, but then pull myself back together and move on.
You don’t have to be happy all the time, as long as you don’t let the sadness overcome and take control. A book that helped me come to terms with this is How to Be a Movie Star by TJ Klune. The novel carries such an accurate representation of mental health, the narrative took my breath away. One of the quotes that gave me strength was:

Source: https://medium.com/mind-cafe/not-acknowledging-your-pain-might-be-hurting-you-more-50b7ac70ad9d (Anangsha Alammyan, 2020-12-27). Tags: Self Improvement, Psychology, Advice, Ideas, Inspiration.
Quorum Founder Jackie DeJesse on Using Empathy as a Product Manager & Entrepreneur

Carlee Murray · Nov 9 · 6 min read
If you look at Jackie DeJesse’s CV, you’ll see someone at the top of her game. She’s held product roles at several successful tech companies and is currently serving as a product manager for Google. On top of that, she recently started a side venture, Quorum (more on that later!).
Below, we explain how Jackie fell into product management and break down her thoughts on empathy, female role models, and the power of shared experiences.
On discovering product management
Jackie left college like many of us did: “I graduated without really knowing what I was going to do.” Armed with an entrepreneurial spirit and a degree in photography and biological sciences from the University of Southern California, she launched her own business as a photographer working with several startups in Silicon Valley.
Eventually, one of her freelance gigs turned into a full-time job. “This one company was on a mission to create a visual search engine for food.” Think: an Instagram specifically for foodies and restaurateurs. “I began working for them as a photographer, helping them develop their brand and their visual guides and was eventually hired on full time,” she says.
As the director of photography, Jackie had a unique perspective: She, like the users, was interacting with the platform every day. “It was through my work as a photographer with them that I realized, OK, these tools they’re giving to their photographers aren’t working.” Naturally, she went into problem-solving mode. “I just started brainstorming different user flows, asking ‘How can we improve this, how can we make this easier?’”
Having a bad user interface wasn’t just making life more difficult for the photographers, “it was also costing us money,” Jackie explains. “The more time the photographers had to spend navigating our content management system, the higher our costs were per shoot. It just wasn’t going to be scalable.” Working in tandem with the company’s Chief Technology Officer (CTO), they developed a solution that decreased the amount of time it took for photographers to manage their content on the platform by hours.
It wasn’t until the CTO approached her about joining his team as a product manager that Jackie ever considered going into a product role. “When I was at USC, I didn’t even know what product management was. It wasn’t something that was taught or even thought of when I went to the career office for guidance.”
Jackie began reading books on product management, going to meetups in the Bay Area, and getting exposure to the product side of business. “Because I was working at a very small startup, I had the opportunity to try things out. That’s really where I got my feet wet.” After moving to New York, her career as a product manager took off.
“The easier it is for you to connect with people, I think the more successful you can be as a PM.”
According to Jackie, the soft skills she had going into product management are a large part of what made the role such a good fit. “It’s the same set of questions regardless of whether your user is an elderly parent or a child — It’s all in understanding how folks operate and what motivates them.” But how do you crack that code? Empathy. “Something that has worked well for me is being able to empathize and build those human connections,” Jackie says. “The easier it is for you to connect with people, I think the more successful you can be as a PM.”
On strong women, being raised by them, and becoming one
Jackie believes her success is a result of having strong female role models. “I grew up surrounded by very, very strong women, who have seen many challenges throughout their personal and professional lives.” She continues, “I always felt that I could trust them to be there — they have enabled me to come into my own and feel confident in myself as an individual while being mindful of maintaining some sense of vulnerability.” Jackie has been particularly influenced by her grandmother, Consuelo.
Jackie DeJesse and her grandmother, Águila Consuelo Vargas de Menendez Crosby
Consuelo was raised by a single mother in Peru in the 1920s. As the eldest daughter, she took on a maternal role at a young age. As Jackie explains, “She was the right-hand woman to my great-grandmother, whether it was helping with the accounting of the shop that she ran or making sure her siblings were okay.”
Throughout her life, Consuelo continued to make sure her family was taken care of. “Before moving to the United States with my grandfather, she saved up enough money from her work at the British Consulate to build a house for her family in Peru so that they would never be homeless.” Eventually, Consuelo was able to reunite with her mother and siblings in the U.S., standing by them through the struggles of immigration.
Consuelo (far right) with her mother María Luisa (middle), her sister Candelaria (far left), and her youngest sister Elvira (front), fully reunited after helping her family come to the US from Peru
As Jackie grew up, she learned more about her grandmother’s life. “She wanted to open a tea shop and a Parillada, but she dedicated so much time to running my grandfather’s business and raising her family, her dream never came to fruition.” However, Consuelo’s selflessness was not in vain. “Her love and stability to her family enabled us to become independent women and go after our own dreams.”
On starting Quorum and embracing shared experiences
Jackie started her latest business venture, Quorum, a candle company, as an ode to her grandmother. “Each of these candles has a theme that groups together a bunch of different stories from women that are meant to be shared,” she says.
“With women as 51 percent of the U.S. population, and 50 percent of the entire world, we have quorum, we have the power to collectively change the way our societies are run.”
“Quorum means the representation necessary to make a decision,” Jackie explains. She wanted a name that reflected the company’s underlying mission: “At the heart of Quorum is this desire and this need to reclaim power. With women as 51 percent of the U.S. population, and 50 percent of the entire world, we have quorum, we have the power to collectively change the way our societies are run.”
Quorum’s candles, which are currently in the testing phase, will be phthalate and paraben-free and made with scented coconut soy wax. While the candles are not yet in full production, Jackie says they will feature artwork from female artists.
Quorum Candle Test Lab, featuring hand-poured coconut soy wax test candles
Jackie put a lot of thought into choosing candles as her product. “I wanted something physical that people could hold onto.” These aren’t your usual pastel-colored candles either. “Design-wise, these candles are loud, they’re making their presence known,” she explains. “It all ties back to Quorum’s mission that these are things to talk about. By having that physical object in the room, I hope it will inspire conversation.”
While Quorum is starting by highlighting women’s stories, Jackie thinks its mission is universal. “Quorum, I think, has the potential to be something that can support anyone who is underrepresented.” When explaining who these candles are made for, she continues, “You don’t have to be a woman; you can identify in a lot of different ways — there are a lot of people who are disenfranchised with our current power structure.”
There is power in numbers. “We all have these stories. We all have these experiences,” she says. “The more that we can share, the more that we can get validation from one another, the more we can reclaim that power.”
Want to share your story or nominate a friend to be featured? Reach out on Instagram to @burnquorum, email [email protected], or submit your story directly at burnquorum.com.

Source: https://medium.com/herproductlab/quorum-founder-jackie-dejesse-on-using-empathy-to-connect-with-others-as-a-product-manager-69a01c706bdc (Carlee Murray, 2020-11-10). Tags: Women In Tech, Entrepreneurship, Empowerment, Female Founders, Product Management.
Big data and Hadoop

Big Data is generally considered to be a very large amount of data for storing and processing. Data in huge volumes and different varieties can be considered Big Data. Data is changing our world and the way we live at an unprecedented rate. Big data is the new science of analyzing and predicting human and machine behavior by processing very large amounts of related data. Big data refers to the rapid growth in the volume of structured, semi-structured and unstructured data; by one estimate, around 50,000 GB of data were generated every second in 2019. Today’s enterprises are generating massive amounts of data, which essentially has 3 attributes:
Volume : — The size of the data, we are talking about GB and TBs here
Velocity : — The rate at which the data is being generated
Variety :- Data from Multiple sources and multiple kinds
This added complexity cannot be handled with traditional frameworks, hence Hadoop (not the only solution). Hadoop is a parallel-processing framework built around the MapReduce programming model. Apache Hadoop is one of the tools for working with Big Data. It is an open-source software framework that runs on commodity hardware.
Why are we talking about Hadoop?
We are talking about it because, on the Hadoop framework, we can store any type of data, whether structured, semi-structured or unstructured, in the Hadoop Distributed File System (HDFS) layer.
Apache Hadoop is open source, which means you don’t need to pay for a license to use it commercially.
Apache Hadoop runs on commodity hardware, which means you don’t need to stick to a single vendor. You can choose any vendor who offers infrastructure at low cost.
Consider a problem where you need to count the number of words in a 5-pound book. It would be very difficult for one person, but you can tear out the pages and distribute them to hundreds of people. Each will count the words on their own page, and then you simply total the counts from each person; you will have the total word count in no time.
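That divide-and-count idea can be sketched directly in Python; the "pages" here are toy strings and the thread pool plays the role of the hundreds of helpers:

```python
from concurrent.futures import ThreadPoolExecutor

# Pretend each string is one torn-out page of the book.
pages = [
    "the quick brown fox",
    "jumps over the lazy dog",
    "the dog barks",
]

def count_words(page):
    # Each "person" counts only their own page.
    return len(page.split())

# Hand every page to a worker, then total the per-page counts.
with ThreadPoolExecutor(max_workers=3) as pool:
    total = sum(pool.map(count_words, pages))

print(total)  # 12
```

Hadoop does the same split-work-merge dance, only across machines instead of threads.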
Why Hadoop, with a real-time example?
Suppose you have a very big file (a 50 GB log file, for example) and you want to parse it, do some filtering on it and see the result. What are the options?
1. If you have a computer with more than 64 GB of RAM (assuming an additional 16 GB will be used by the OS and other processes), you can write some code to get it done. Still, it will be super slow. And if the file is even larger (on the petabyte scale), it won’t even be feasible; RAM on the petabyte scale is not available yet.
2. Split the file into smaller files (maybe 10,000 files, each on the scale of megabytes) and read them sequentially.
3. Use approach 2 but with multiple threads, each thread reading a smaller file; finally merge the threads’ results and compute the answer.
Hadoop is essentially approach 3, with a distributed-computing touch to it. You have a bunch of computers. One of the computers is the master node and the rest are slave nodes. All these nodes form a cluster. This is HDFS, or the Hadoop Distributed File System. You upload a huge file to the cluster, and this huge file gets divided into small chunks of some fixed size (each X megabytes, for example). These chunks are replicated throughout the cluster following a replication factor. Then, using a programming framework called MapReduce, you do operations on the content of the file chunks and get the desired result.
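To make the chunking and replication concrete, here is a small sketch; the file size, node names and round-robin placement are made up for illustration (real HDFS defaults to 128 MB blocks and a replication factor of 3, and a real namenode also weighs racks and free space when placing replicas):

```python
import math

FILE_SIZE_MB = 500          # the "huge file" uploaded to the cluster
BLOCK_SIZE_MB = 128         # HDFS-style block size
REPLICATION = 3             # replication factor
DATANODES = ["node1", "node2", "node3", "node4", "node5"]

# Split the file into fixed-size chunks (the last one may be smaller).
num_blocks = math.ceil(FILE_SIZE_MB / BLOCK_SIZE_MB)

# Place each block's replicas on distinct datanodes, round-robin style.
placement = {}
for b in range(num_blocks):
    placement[f"block-{b}"] = [
        DATANODES[(b + r) % len(DATANODES)] for r in range(REPLICATION)
    ]

print(num_blocks)            # 4
print(placement["block-0"])  # ['node1', 'node2', 'node3']
```

If any single node dies, every block it held still has two other copies elsewhere in the cluster, which is the whole point of the replication factor.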
Hadoop Ecosystem:
It comprises the various tools that are required to perform different tasks in Hadoop.
These tools provide you a number of Hadoop services which can help you handle big data more efficiently.
These are the popular tools that are part of the Hadoop ecosystem:
HDFS: It stands for Hadoop Distributed File System and it is the storage unit of Hadoop; it governs how data is read from and written to the cluster.
YARN: It stand for Yet Another Resource Negotiator. It handles the cluster resource management. Allocates RAM, memory and other resources to different applications.
MapReduce: MapReduce processes large volumes of data in a parallel, distributed manner.
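The classic word count shows the MapReduce flow end to end. Here is a pure-Python simulation of the three phases on toy input: map emits (word, 1) pairs, shuffle groups them by key, and reduce sums each group:

```python
from collections import defaultdict

lines = ["hadoop stores data", "spark and hadoop", "data data data"]

# Map phase: every mapper emits (word, 1) for each word in its line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle phase: the framework groups all emitted values by key.
groups = defaultdict(list)
for word, one in mapped:
    groups[word].append(one)

# Reduce phase: each reducer sums the values for one key.
counts = {word: sum(ones) for word, ones in groups.items()}

print(counts["data"])    # 4
print(counts["hadoop"])  # 2
```

In real Hadoop, the mappers and reducers run on different machines and the shuffle moves data over the network, but the logic is exactly this.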
HBase: It is a column-oriented non-relational database management system that runs on top of Hadoop Distributed File System (HDFS). HBase provides a fault-tolerant way of storing sparse data sets, which are common in many big data use cases. It is well suited for real-time data processing or random read/write access to large volumes of data.
Sqoop and Flume for data collection and ingestion: Sqoop is used to transfer data between Hadoop and external datastores such as relational databases and enterprise data warehouses.
Flume is a distributed service for collecting, aggregating and moving large amounts of log data.
Pig: Pig is used to analyze data in Hadoop. It provides a high level data processing language to perform numerous operations on the data.
Hive: Hive facilitates reading, writing and managing large datasets residing in the distributed storage using SQL (Hive Query Language).
Spark: Spark is an open-source distributed computing engine for processing and analyzing huge volumes of real time data.
Mahout: Mahout is used to create scalable and distributed machine learning algorithms. It has a library that contains in-built algorithms for collaborative filtering, classification and clustering.
Ambari: Ambari is an open-source tool responsible for keeping track of running applications and their statuses.
Kafka: Kafka is a distributed streaming platform to store and process streams of records. It builds real-time streaming data pipelines that reliably get data between applications.
Storm: Storm is a processing engine that processes real-time streaming data at a very high speed. It has the ability to process over a million jobs in a fraction of a second on a node.
Oozie: Oozie is a workflow scheduler system to manage Hadoop jobs. It has 2 parts: a workflow engine and a coordinator engine.

Source: https://medium.com/analytics-vidhya/big-data-and-hadoop-918f8a13f3f0 (M S Dilli, 2020-03-27). Tags: Hadoop Training, Big Data, Hadoop, Big Data Analytics.
Plotly Python: Scatter Plots

We start by using marker=dict(), which takes all the parameters that we use to style our markers (data points). There is an additional dict within marker that corresponds to the style options for the marker border.
Note how we color the markers themselves using an RGB value, whereas we color the marker outline with a CSS color code.
Both are perfectly acceptable — you can even use RGBA to set the alpha.
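The styling described above is just nested dictionaries. Here is the shape as plain Python, with illustrative values (the keys mirror Plotly's marker options, but the specific colors and widths are made up, not taken from the article):

```python
# Illustrative values only; swap in your own colors and widths.
marker = dict(
    color="rgba(152, 0, 0, 0.8)",  # fill color: an RGB/RGBA string works
    size=12,
    line=dict(                     # the nested dict styles the marker border
        color="midnightblue",      # CSS color names work here too
        width=2,
    ),
)

print(marker["line"]["color"])  # midnightblue
```

A dict like this is what gets passed to the trace, e.g. go.Scatter(..., marker=marker).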
The marker size is adjusted via the marker_size attribute. I created a new dataframe column, ‘ratio’, holding a scaled value of positive_ratings / negative_ratings.
A larger bubble means the game had a higher ratio of positive to negative ratings; we expect these games to generally have a higher average playtime.
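The article doesn't show the exact scaling used for the 'ratio' column, so here is one plausible approach: a min-max rescale of the positive-to-negative rating ratios into a bounded marker-size range, so no bubble gets absurdly large (all numbers hypothetical):

```python
# Hypothetical ratings for four games.
positive = [12000, 800, 50000, 300]
negative = [1000, 900, 2500, 600]

ratios = [p / n for p, n in zip(positive, negative)]

# Min-max rescale the ratios into a marker-size range of 5-40 px.
lo, hi = min(ratios), max(ratios)
sizes = [5 + (r - lo) / (hi - lo) * (40 - 5) for r in ratios]

print([round(s, 1) for s in sizes])  # [25.6, 5.7, 40.0, 5.0]
```

A list like this can then be assigned to the trace's marker size so the best-loved games get the biggest bubbles.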
Right now, when we hover over the points, we only see the x and y values, which isn’t exactly useful. However, it’s easy to add proper hover text to our points, so we can see the name of the game that we’re looking at.
By including the below code in our Figure object we can take the above hover data and turn it into something much better!
hovertext=steamdf['name'],
hoverlabel=dict(namelength=0),
hovertemplate='%{hovertext}<br>Price: %{x} <br>Avg. Playtime: %{y}'
It may look a little overwhelming, but let’s break it down:
- hovertext is a variable we are defining for use in our template
- hoverlabel is mainly aesthetic in purpose. If you keep it in, you may see the trace number off to the side of the tooltip box. I don’t like it, so this code will remove it.
- hovertemplate allows you to create a template string to render whatever information you want to appear on the hoverbox.
Variables are added using the %{variable} format and you can make use of HTML tags such as <br>, <i>, <b>, etc. | https://towardsdatascience.com/plotly-python-scatter-plots-2ea1b4885c90 | ['Bryan White'] | 2020-01-13 01:16:38.860000+00:00 | ['Python', 'Data Visualization', 'Data Science', 'Data', 'Programming'] |
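To make the substitution concrete, here is a rough pure-Python illustration of how %{variable} placeholders resolve for a single point. This is only a sketch of the idea; Plotly performs the real rendering itself, and it additionally supports d3-style format specifiers such as %{x:.2f}. The game name and values below are made-up sample data:

```python
import re

# Illustrative only: resolve %{name} placeholders against one point's values.
# (Plotly's real template engine also handles format specs like %{x:.2f}.)
def render_hover(template, values):
    return re.sub(r"%\{(\w+)\}", lambda m: str(values[m.group(1)]), template)

template = "%{hovertext}<br>Price: %{x}<br>Avg. Playtime: %{y}"
point = {"hovertext": "Dota 2", "x": 0.0, "y": 23944}
print(render_hover(template, point))
# -> Dota 2<br>Price: 0.0<br>Avg. Playtime: 23944
```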
My Father Was Arrested for Living in an Airport Terminal | I remember how excited I was as a little girl when I got to ride in a taxi with my grandparents instead of taking the subway. It always felt rewarding with a dash of thrill as if it were a holiday or special occasion.
I didn’t know then that my parents couldn’t afford to take a taxi. I just assumed it was something wealthy people did and my grandparents had been blessed with wealth.
As I grew into my teen years I would spend the summers in Brooklyn with my grandparents. That’s when I began to put the pieces together. They had things we didn’t have at home such as a TV, food and more than one bedroom. What was normal for them was a dream come true for me.
By my seventeenth birthday I had the pattern of my parents' irresponsible failure down to a science. It came in waves and the waves would crash and thrash harder each storm. There was never any money, and once I started working they felt entitled to my measly $6 an hour part-time paycheck.
I knew if I continued to live with them I would never have a future. I lived by the belief that the cycle of poverty could be broken. My siblings and I packed our backpacks and set off, never to return again. We were all under the age of eighteen.
The last time I saw my father he’d been arrested for stealing a car. I went to visit him in jail. He asked my siblings and I to bail him out. We did. He skipped out on his bail. He ran. We lost our $1400.
My father had his own form of logic. If he didn't pay the rent and we were evicted, he'd claim that if the landlord could afford to send his kid to college then he didn't need the money. If we ate that morning, then why did we need to eat that night? We'd be fine. He'd never keep a job because all of his bosses were assholes. I can't even count how many asshole bosses existed during my childhood.
When we asked him why he stole the car he told us the people he took it from had more cars than they needed. It was never about the stupid, humiliating and selfish things he did. His actions were merely a reaction to other people.
More than a decade had gone by since the car theft when I received a call from the Miami-Dade Police Department. My father was homeless and living in the Miami International Airport terminal, where he was arrested by the Transportation Security Administration (TSA).
They informed me he claimed there was no one else for him to call and requested I come pick him up. It wasn’t worth it to me to travel from Arizona to Florida to bail him out. Instead, I requested he call me.
When he called I was nervous. I didn’t want to speak to him but I was curious how he got to this place in life. I had so many questions. What was he thinking? This is who he turned out to be? How did he end up homeless? Why the airport? Why did he ask them to call me? Why did he not get the help he so desperately needed? How could he not see he needed help?
I cringed at the sound of his voice. The more he spoke the more my stomach churned. I said little in response. There wasn’t much for me to say.
He was living in his mini van for a few years. It eventually died and he abandoned it. It had been towed and he had lost what little possessions he had left to his name.
The airport was a master suite to him. He was able to shower in sinks, sleep on couches, eat unwanted food out of garbage cans. It was a resolution to his homelessness situation. It was months before TSA caught on. He referred to it as if he were living in a resort. A part of me was filled with anger and humiliation. The other part of me felt compassion and devastation. He experienced mental illness and he most likely lived up to his level of function.
He asked if I would allow him to come live with me. I thought back to the memories I had growing up. I now had my own children and I needed to protect them. I had once wished someone would have protected me. I was already raising my children. I didn’t want the burden of raising my father.
My belief in breaking the cycle of poverty when I was a teenager became a reality in adulthood. It’s a reality I’ve held onto so tightly.
When I hung up the phone that day it was the last time I spoke with him. I have no knowledge of whether he’s alive or if he’s passed away. I have no closure of what may have happened to him. Was I right to choose myself over his needs?
I used to carry his stories around as if he were baggage. A burden of horrid memories I could never free myself from. I realize now that I should be grateful, because of him I’ve worked hard not to be who he was. | https://erikasauter.medium.com/my-father-was-arrested-for-living-in-an-airport-terminal-3fea41d66fe3 | ['Erika Sauter'] | 2018-07-20 18:38:26.230000+00:00 | ['This Happened To Me', 'Life Lessons', 'Mental Health', 'Life', 'Family'] |
Do High Frequency Gravitational Waves Explain Li & Podkletnov’s Experimental Results? | Scientists like Dr. Ning Li & Eugene Podkletnov have claimed to see anomalous gravitational effects for decades. Could High-Frequency Gravitational Waves provide an explanation? We join Dr. Robert Baker, Jr. to discuss international HFGW research and hypothesize about what might be causing these strange experimental effects…
Robert, I understand that there are literally dozens of physicists & engineers doing research on High Frequency Gravitational Waves, and the 2003 Mitre HFGW conference was a pivotal first event in terms of bringing them together as a community. Can you describe this for me a bit?
Dr. Robert Baker, Jr — view his bio here.
The MITRE conference was a crucial first step in bringing scientists together to discuss HFGWs from both the perspectives of theoretical physicists and practical engineers and included scientists from all over the world.
Interestingly though, scientists from China who were not able to attend the event became some of the biggest proponents of HFGW research and use the MITRE papers as background for their research.
The community that took shape at the MITRE Conference later evolved into STAIF Section-F, and this was due entirely to the tireless efforts of Paul Murad and Tony Robertson. STAIF proved to be an excellent forum for the presentation and discussion of new concepts in gravitational science.
I’ve been following a few of them recently — such as Dr. Ning Li. The last I’d heard from her was a very brief message in which she claimed to have achieved an 11 kilowatt experimental result for gravitational wave production. She calls this “AC gravity”, which is poorly understood by myself and many others.
Well, at the 2003 MITRE conference, we had participants from about nine countries and about 25 papers, indicating quite a bit of interest — particularly in Russia and China.
Ning Li presented a paper there, which was theoretical in nature and perhaps not easily understood by laymen, and in it she did indicate a theoretical output of 11 kilowatts from her system. In her mind, "AC gravity" is simply alternating current — the rest of us refer to it as high frequency gravitational waves. So we're really talking about the same thing — with frequencies on the order of a hundred kilocycles and on up to the gigacycle range and beyond.
As I understand things, Ning Li’s research uses rotating superconductors, and is similar to the experiments that Eugene Podkletnov claimed results for in the early ’90s. The biggest difference is that she’s working on a theoretical model for her experiments, right?
Well, she worked with Douglas Torr while they were at the University of Alabama and came up with a very significant paper back in 1992 which indicated that these gravitational waves could be refracted.
Essentially, she theorized that there's an index of refraction, which means you could make a gravitational wave lens out of a superconducting material. If this idea pans out, it would lead to all kinds of optics and a lot of future innovation — it would be quite an amazing feature of her studies.
Dr. Ning Li, demonstrating the superconductor she claims to have generated “AC Gravity”.
The superconductors that both Li & Podkletnov have been experimenting with are quite large — much larger than what NASA used during their experimental replication. Does that play a role in their results?
Yes, they’re both using discs up to almost 10 inches, I think. I’m not quite sure, but certainly 10 centimeters at the least. They’re quite large.
My thesis has been that high frequency gravitational waves are the basis for the effects Li & Podkletnov have claimed. In other words, the rotating disc, fields, high frequencies, harmonics and so on could be producing HFGWs that are perhaps below the limits of detection at the magnitudes they work at, but may be influencing the local gravitational field, perhaps through self-rectification or something along those lines.
That makes sense. She’s been rather quiet lately — I haven’t heard from her in some time, but did see a recent funding request that appears to have been fulfilled by the Department of Defense, so it seems they may be financing part of her research?
Well, they did, and I was a part of the oversight committee. It was done by the U S Army and the Redstone Arsenal there in Huntsville. They did indeed have her do some research, and I was on the panel that was taking a look at that research for the Army.
There was a problem, though: we couldn’t get a final report out of Ning. You know, we had to get something. She is a brilliant scientist, but like so many brilliant scientists, sometimes they’re a little less than practical.
It's not that she was hiding it or anything nefarious. In fact, I was on the panel that was talking to Ning, and I said, "well, where's the final report?" Well, she just didn't quite get around to it, you know, and that's a problem. I've talked with her since then, however, and she would play a role in the efforts that I'm trying to promote.
Back in the 1990's, Podkletnov claimed a 3% loss in weight in a rotating superconductor, which is similar to Ning Li’s experiments. However, in the early 2000’s he co-authored a paper with Dr. Giovanni Modanese claiming that a spark discharge onto a superconductor was generating over 20 pounds of force. Are you familiar with this second paper?
Podkletnov’s rotating superconductor was claimed to produce a 3% loss in weight.
Well that’s interesting, and I would love to see a replication of it. Some of their work I have seen, and it had interesting correlation with frequency. In other words, the higher the frequency, the greater the effect, or rather with the square of the frequency.
That caught my attention, because high frequency gravitational wave influence also increases with the square of its frequency, so again I can’t help but wonder if there’s an HFGW effect that might explain their claims.
Please keep in mind, however, that I’m an engineer — not a theoretician. In this particular case, you’d want to follow up the experimental work with a detailed theoretical analysis.
That’s not always the case. Often you have a theory and then develop an experiment to you know, to test it, but since they already claim to have experimental results I think they should work it the other way around.
The Podkletnov & Modanese claims are significant, as is this 11 kilowatt effect that Ning Li claimed to generate — how is that measured? What kind of force could we anticipate from future experimental devices?
Well it’s hard to tell. I mentioned Landau and Lifshitz, but they do not give any particular specifications. Now another scientist that you might get in touch with is Professor Giorgio Fontana over in Italy.
Fontana has come to some of the HFGW meetings and presented papers on gravitational wave beams, interactions and so forth. He’s one of several researchers in Italy looking at gravitational wave beams — what happens when they intersect, and how they might change the gravitational field.
I think Giorgio’s work is good, and he’d be a good person to talk with, but I really don’t think anybody’s theoretically come up with the magnitude.
Do you have any ideas on how conservation of energy is preserved in these experimental claims? I ask because these are claims for pretty substantial results, and I wonder how input power might be coupled to output effect.
I'm not sure and I don't think anybody's quite sure. Again, that's why an experiment is valuable. I've not seen any of these purported gravitational change experiments myself, and of the people who are reporting claims, none of them have reported any kind of permanent effect or easily replicable effects that would allow for detailed coupling measurements.
In some of the work that I’m doing with the Chinese, you can get up to 10²⁰ watts per square meter for a small area, and that might really open things up. After all, when Marconi developed his ship to shore radio telegraphy, I don’t think he ever thought about microwave oven applications for his work. So we just won’t know until we actually do more experimental research.
Podkletnov’s “impulse generator” was claimed to generate up to 20 lbs of force.
A lot of that research is currently being done in China, from what I understand. Can you tell us a bit about their efforts, and how many people have become involved in their research projects?
Yes, the Chinese sponsored me on a month-long lecture tour of Universities and Institutes in China on the subject of HFGWs. Right now there are probably more scientists in China working on HFGW research than in the whole of the rest of the world.
I’m also working on a project with professor Fangyu Li at Chongqing University in China on an experiment attempting to emulate the gravitational waves produced by a double star system. Fangu Li has a detector that would be well suited to detecting these gravitational waves, and we’re excited to see what kind of results we can produce & detect.
What is the anticipated timeframe before we can anticipate seeing scientific applications for High Frequency Gravitation Wave research?
It's like anything, it really boils down to money. If the A-bomb project during World War II had proceeded at a normal pace, it probably would have taken 50 years to develop — but when the Manhattan Project funded it and removed all the stops, then it happened very rapidly. You could make the same case for the rapid development of computer processors.
Basically as I see it, there’s a tipping point in terms of experimental results, and once that’s been reached then mainstream “big science” gets involved and further research will move rapidly. The question is how fast we’ll reach that tipping point — it could be a couple of years, or it may be a decade. It’s difficult to say. | https://medium.com/discourse/do-high-frequence-gravitational-waves-explain-li-podkletnovs-experimental-results-5d9f9560e1a6 | ['Tim Ventura'] | 2019-12-24 01:00:21.732000+00:00 | ['Physics', 'Futurism', 'Science', 'Technology', 'Gravity'] |
Simplifying Sentiment Analysis using VADER in Python (on Social Media Text) | What is Sentiment Analysis?
Sentiment Analysis, or Opinion Mining, is a sub-field of Natural Language Processing (NLP) that tries to identify and extract opinions within a given text. The aim of sentiment analysis is to gauge the attitudes, sentiments, evaluations, and emotions of a speaker or writer based on the computational treatment of subjectivity in a text.
Why is sentiment analysis so important?
Businesses today are heavily dependent on data. The majority of this data, however, is unstructured text coming from sources like emails, chats, social media, surveys, articles, and documents. The micro-blogging content coming from Twitter and Facebook poses serious challenges, not only because of the amount of data involved, but also because of the kind of language used in it to express sentiments, i.e., short forms, memes and emoticons.
Sifting through huge volumes of this text data is difficult as well as time-consuming. Also, it requires a great deal of expertise and resources to analyze all of that. Not an easy task, in short.
Sentiment Analysis is also useful for practitioners and researchers, especially in fields like sociology, marketing, advertising, psychology, economics, and political science, which rely a lot on human-computer interaction data.
Sentiment Analysis enables companies to make sense out of data by being able to automate this entire process! Thus they are able to elicit vital insights from a vast unstructured dataset without having to manually indulge with it.
Why is Sentiment Analysis a Hard Task to Perform?
Though it may seem easy on paper, Sentiment Analysis is actually a tricky subject. There are various reasons for that:
Understanding emotions through text are not always easy. Sometimes even humans can get misled, so expecting a 100% accuracy from a computer is like asking for the Moon!
A text may contain multiple sentiments all at once. For instance,
“The intent behind the movie was great, but it could have been better”.
The above sentence consists of two polarities, i.e., Positive as well as Negative. So how do we conclude whether the review was Positive or Negative?
Computers aren't too comfortable comprehending figurative speech. Figurative language uses words in a way that deviates from their conventionally accepted definitions in order to convey a more complicated meaning or heightened effect. The use of similes, metaphors, hyperbole, etc. qualifies as figurative speech. Let us understand it better with an example.
“The best I can say about the movie is that it was interesting.”
Here, the word 'interesting' does not necessarily convey positive sentiment and can be confusing for algorithms.
Heavy use of emoticons and slang with sentiment values in social media texts like those of Twitter and Facebook also makes text analysis difficult. For example, a ":)" denotes a smiley and generally refers to positive sentiment, while ":(" denotes a negative sentiment. Also, acronyms like "LOL" and "OMG" and commonly used slang like "nah", "meh" and "giggly" are strong indicators of some sort of sentiment in a sentence.
These are few of the problems encountered not only with sentiment analysis but with NLP as a whole. In fact, these are some of the Open-ended problems of the Natural Language Processing field.
VADER Sentiment Analysis
VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media. VADER uses a combination of a sentiment lexicon and a set of grammatical and syntactical rules. A sentiment lexicon is a list of lexical features (e.g., words) which are generally labelled according to their semantic orientation as either positive or negative.
VADER has been found to be quite successful when dealing with social media texts, NY Times editorials, movie reviews, and product reviews. This is because VADER not only tells us whether a sentiment is positive or negative but also how positive or negative it is.
It is fully open-sourced under the MIT License. The developers of VADER used Amazon's Mechanical Turk to get most of their ratings. You can find complete details on their Github page.
Advantages of using VADER
VADER has a lot of advantages over traditional methods of Sentiment Analysis, including:
It works exceedingly well on social media type text, yet readily generalizes to multiple domains
It doesn’t require any training data but is constructed from a generalizable, valence-based, human-curated gold standard sentiment lexicon
It is fast enough to be used online with streaming data, and
It does not severely suffer from a speed-performance tradeoff.
The source of this article is a very easy-to-read paper published by the creators of the VADER library. You can read the paper here.
Enough talking. Let us now see practically how VADER analysis works, for which we will have to install the library first.
Installation
The simplest way is to use the command line to do an installation from PyPI using pip. Check their Github repository for a detailed explanation.
> pip install vaderSentiment
Once VADER is installed, let us create the SentimentIntensityAnalyzer object:
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyser = SentimentIntensityAnalyzer()
Working & Scoring
Let us test our first sentiment using VADER now. We will use the polarity_scores() method to obtain the polarity indices for the given sentence.
def sentiment_analyzer_scores(sentence):
    score = analyser.polarity_scores(sentence)
    print("{:-<40} {}".format(sentence, str(score)))
Let us check how VADER performs on a given review:
sentiment_analyzer_scores("The phone is super cool.")

The phone is super cool----------------- {'neg': 0.0, 'neu': 0.326, 'pos': 0.674, 'compound': 0.7351}
Putting in a Tabular form:
The Positive, Negative and Neutral scores represent the proportion of text that falls in these categories. This means our sentence was rated as 67% Positive, 33% Neutral and 0% Negative. Hence all these should add up to 1.
The Compound score is a metric that calculates the sum of all the lexicon ratings and normalizes it to lie between -1 (most extreme negative) and +1 (most extreme positive). In the case above, the lexicon ratings for 'super' and 'cool' are 2.9 and 1.3 respectively. The compound score turns out to be 0.7351, denoting a very high positive sentiment.
compound score metric
Read here for more details on VADER's scoring methodology.
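As a sanity check, the normalization VADER applies to the raw valence sum can be sketched in a few lines. The constant alpha = 15 is the value published in the VADER paper and source code; treat this as an illustration of the formula rather than the library's full pipeline:

```python
import math

# Sketch of VADER's compound-score normalization: the raw sum of lexicon
# valences is squashed into [-1, 1]. alpha = 15 is the published constant.
def normalize(valence_sum, alpha=15):
    return valence_sum / math.sqrt(valence_sum ** 2 + alpha)

# 'super' (2.9) + 'cool' (1.3) = 4.2, which reproduces the 0.7351 above.
print(round(normalize(2.9 + 1.3), 4))  # 0.7351
```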
VADER analyses sentiments primarily based on certain key points:
Punctuation: The use of an exclamation mark(!), increases the magnitude of the intensity without modifying the semantic orientation. For example, “The food here is good!” is more intense than “The food here is good.” and an increase in the number of (!), increases the magnitude accordingly.
See how the overall compound score is increasing with the increase in exclamation marks.
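The punctuation rule itself can be sketched in a couple of lines. The per-mark boost (0.292) and the cap (four marks) are taken from VADER's published source code; the function is a simplified illustration, not the library's implementation:

```python
# Illustrative sketch of VADER's exclamation-mark amplification. The boost
# per "!" (0.292) and the cap (4 marks) follow the library's source code.
def exclamation_boost(text, per_mark=0.292, cap=4):
    return min(text.count("!"), cap) * per_mark

print(round(exclamation_boost("The food here is good"), 3))    # 0.0
print(round(exclamation_boost("The food here is good!"), 3))   # 0.292
print(round(exclamation_boost("The food here is good!!!"), 3)) # 0.876
```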
Capitalization: Using upper case letters to emphasize a sentiment-relevant word in the presence of other non-capitalized words, increases the magnitude of the sentiment intensity. For example, “The food here is GREAT!” conveys more intensity than “The food here is great!”
Degree modifiers: Also called intensifiers, they impact the sentiment intensity by either increasing or decreasing the intensity. For example, “The service here is extremely good” is more intense than “The service here is good”, whereas “The service here is marginally good” reduces the intensity.
Conjunctions: Use of conjunctions like “but” signals a shift in sentiment polarity, with the sentiment of the text following the conjunction being dominant. “The food here is great, but the service is horrible” has mixed sentiment, with the latter half dictating the overall rating.
Preceding Tri-gram: By examining the tri-gram preceding a sentiment-laden lexical feature, we catch nearly 90% of cases where negation flips the polarity of the text. A negated sentence would be “The food here isn’t really all that great”.
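The negation rule can likewise be sketched in a few lines. The -0.74 scalar and the three-word lookback mirror VADER's source code, and the valence of 'great' (about 3.1) comes from its lexicon; everything else here, including the tiny negator set, is a simplified illustration:

```python
# Simplified illustration of VADER-style negation: a negator within the
# three words before a sentiment word flips and dampens its valence.
NEGATORS = {"not", "isn't", "never", "no"}
N_SCALAR = -0.74  # dampened flip factor from VADER's source

def negated_valence(tokens, idx, valence):
    window = tokens[max(0, idx - 3):idx]
    if any(t.lower() in NEGATORS for t in window):
        return valence * N_SCALAR
    return valence

tokens = "The food is not great".split()
print(round(negated_valence(tokens, tokens.index("great"), 3.1), 3))  # -2.294
```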
Handling Emojis, Slangs, and Emoticons.
VADER performs very well with emojis, slangs, and acronyms in sentences. Let us see each with an example.
Emojis
sentiment_analyzer_scores('I am 😄 today')
sentiment_analyzer_scores('😊')
sentiment_analyzer_scores('😥')
sentiment_analyzer_scores('☹️')
sentiment_analyzer_scores('💘')

#Output
I am 😄 today---------------------------- {'neg': 0.0, 'neu': 0.476, 'pos': 0.524, 'compound': 0.6705}
😊--------------------------------------- {'neg': 0.0, 'neu': 0.333, 'pos': 0.667, 'compound': 0.7184}
😥--------------------------------------- {'neg': 0.275, 'neu': 0.268, 'pos': 0.456, 'compound': 0.3291}
☹️-------------------------------------- {'neg': 0.706, 'neu': 0.294, 'pos': 0.0, 'compound': -0.34}
💘--------------------------------------- {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
Slangs
sentiment_analyzer_scores("Today SUX!")
sentiment_analyzer_scores("Today only kinda sux! But I'll get by, lol")

#Output
Today SUX!------------------------------ {'neg': 0.779, 'neu': 0.221, 'pos': 0.0, 'compound': -0.5461}
Today only kinda sux! But I'll get by, lol {'neg': 0.127, 'neu': 0.556, 'pos': 0.317, 'compound': 0.5249}
Emoticons
sentiment_analyzer_scores("Make sure you :) or :D today!")

#Output
Make sure you :) or :D today!----------- {'neg': 0.0, 'neu': 0.294, 'pos': 0.706, 'compound': 0.8633}
We saw how VADER can easily detect sentiment from emojis and slang, which form an important component of the social media environment.
Conclusion
The results of VADER analysis are not only remarkable but also very encouraging. The outcomes highlight the tremendous benefits that can be attained by using VADER on micro-blogging sites, where the text data is a complex mix of short forms, slang and emoticons.
Two Significant Enhancements Were Made To IBM Watson Assistant | Different Date Entries Normalized
You need to manually enable the new system entities to take advantage of improvements that were made to the number-based system entities provided by IBM.
The list of supported languages can be found here.
The new system entities can recognize more nuanced mentions in user input. For example, system date can calculate the date of a national holiday when it is mentioned by name. This is obviously very country specific.
System date can also recognize when a year is specified as part of a date mentioned in the user's input. The improvements also make it easier for your assistant to distinguish among the many number-based system entities.
For example, a date mention, such as April 15, that is recognized to be System Date is not also identified as a System Number mention.
Day of Interest Recognized and Normalized To Date
This example shows how Christmas day is translated or normalized to the year, month and day.
This significantly simplify the process of date normalization from various natural language formats.
Two: Automatic Irrelevance detection
Conversational Interfaces, also known as chatbots are usually used to address a very narrow domain. But then a lot of time needs to be spend on handling conversations which is irrelevant to the domain of implementation.
Irrelevance detection helps the chatbot to recognize when a user touches topics which it is not designed to answer, and with confidence earlier in the development process.
This feature helps your chatbot to recognize subjects which you did not designed and developed for, even if you haven’t explicitly taught it about what to ignore, by marking specific user utterances as irrelevant.
The algorithmic models that help your chatbot understand what the users say are built from two key pieces of information:
Domains you want your chatbot to address. For example, queries about deliveries to a chatbot designed to take payments.
You train your chatbot on these subjects by defining intents and providing lots of example user utterances that articulate the intents so your chatbot can recognize them.
Then you need to present counterexamples and often there are false positives where an utterance is erroneously assigned to an entity.
Irrelevance detection is designed in order to navigate any vulnerability there might be in counterexample data as you start your chatbot development.
When you enable it enabled, an alternative method for evaluating the relevance of a newly submitted utterance is triggered in addition to the standard method.
Irrelevance Detection Enabled
From the Skills page, open your skill. From the skill menu, click Options. On the Irrelevance detection page, choose Enhanced.
The supplemental method examines the structure of the new utterance and compares it to the structure of the user example utterances in your training data.
This alternate approach helps chatbots that have few or no counterexamples, recognize irrelevant utterances.
Marking User Input as Irrelevant To Build a Counterexample Model
Note that the new method relies on structural information that is based on data from outside your skill. So, while the new method can be useful as you are starting out, to build a chatbot that provides a more customized experience, you want it to use information from data that is derived from within the application’s domain.
The way to ensure that your assistant does so is by adding your own counterexamples. If only a few.
In the chatobt development process, you provide example user utterances or sentences which are grouped into distinct topics that someone might ask the assistant about — these are called “intents.”
Utterances Are Detected As Irrelevant with No False Intent Assignments
The bad news…users do not stick to the script. The chatbot usually gets a variety of unexpected questions that the person building the assistant didn’t plan for it to handle initially.
In these cases an escalation to a human is triggered, or a knowledge base search is triggered. Bot not the most effective response. How about the chatbot saying, I cannot help you with that.
Negate False Intent Assignment
Often, instead of stating the intent is out of scope, in a desperate attempt to field the utterance, the chatbot assigns the best fit option to the user; often wrong.
Or the chatbot continues to inform the user it does not understand; and having the user continuously rephrasing the input. Instead of the chatbot merely stating the question is not part of its domain.
The traditional approaches are:
Many “out-of-scope” examples are dreamed up and entered
The NLU model automatically select a set of irrelevant examples different to the training data
We could easily build a very accurate irrelevant-question-detector by tagging most of the questions from assistant users as irrelevant… but this would be a pretty bad experience because the assistant would have low coverage on in-domain questions. Since none of these approaches are perfect, especially when an assistant is new and doesn’t have a lot of training data, we decided to come up with an approach that is more human-like.
If we want to know if an utterance from a user is irrelevant, IBM Watson Assistant initially check if it is similar to the set of relevant examples, and if not, it is tagged.
IBM combine this with another algorithm which gauges the dissimilarity of the incoming question from concepts that are not close to the assistant’s domain.
In Conclusion
It is very encouraging to see an environment develop and grow in functionality. The IBM Watson Assistant team has been effective in augmenting the building out the Watson Assistant environment with tools and functionality which make a huge difference. | https://cobusgreyling.medium.com/two-significant-enhancements-were-made-to-ibm-watson-assistant-7d4461dedced | ['Cobus Greyling'] | 2019-12-04 11:49:34.891000+00:00 | ['NLP', 'Artificial Intelligence', 'Chatbots', 'Nlu', 'Machine Learning'] |
Next Time I Brush My Hair | Photo by 42 North on Unsplash
Next time I brush my hair,
wrapping the strands around my fingers,
I will walk to the backyard apple tree
and drape among its battered branches
threads of silver, soft yet strong
much-sought for by robins and wrens;
a perfect carpet, safe and warm
for the bellies of naked nestlings. | https://medium.com/crows-feet/next-time-i-brush-my-hair-90eda8ad6b76 | ['Deborah Barchi'] | 2020-07-28 23:56:05.719000+00:00 | ['Self-awareness', 'Nature', 'Life', 'Poetry', 'Mindfulness'] |
Finding The Right Leader: A Primer On ‘A-P’ Leadership Framework | Having spent a large portion of my career helping, running or building small to medium sized businesses, I have often had to develop simplistic tools to problem solve or coach management teams on various topics. One among them that has surfaced most frequently is the topic of leadership. I was reminded of this yet again when Adam, a CEO of a digital printing company and a close friend, recently reached out to me for a coffee meeting to get counsel on accelerating the growth of his business.
After problem solving for a bit, it was clear that Adam was in serious need of managerial talent. I asked him whether he needed an “A” or “P” type leader for his business. Looking at his “what the dickens” expression, I realized I was speaking in code, even for a software guy, and needed to expand.
As an investor, and consistent with my previous operating and advisory roles, I fundamentally believe that while mathematically we underwrite markets, products or a business model, inherently we are also underwriting the current or future team’s ability to deliver against those growth and return goals. Over the years, having made both bad and good investing decisions, one thing is clear to me that agnostic of what my math says, putting the right leaders throughout the organization, be it at executive, functional or even at project level, significantly improves the probability of getting outsized returns.
Hence, whether it’s value, growth or turnaround type investment, I evaluate the type of leadership needed within an organization and then pair it with one of two leadership styles, “A” or “P,” based on my A-P Leadership framework. Let’s break the two styles down further.
An “A” type leader gives their teams significant Autonomy allowing the use of their own methods to get the job done. This kind of leader gets energized by investing time and energy in coaching their employees driving a sense of Appreciation within their teams. Employees will find these leaders Approachable. In return for the above attributes, “A” type leaders expect a high degree of Accountability providing loads of clarity on consequences of key actions or inactions.
“P” type leaders are energized by a well-defined repeatable Process to keep their teams focused and sticking to the plan. They enjoy a high level of Predictability in their organization’s end-product and are driven by consistent and repeatable results. To that effect, they will outline a Prescriptive approach detailing tactical steps to minimize sources of ambiguity. They love using first Principles to solve problems ensuring the deliverables are always rooted in fundamentals.
Here is a deeper look at each attribute for both leadership types.
Autonomy: These leaders don’t micromanage but rather align on the end goals and are comfortable letting their teams stray from pre-defined processes to foster individual ownership and creativity to finish a job. For example, the leader of your new product innovations team probably needs this trait to be comfortable with ambiguity and to promote unconstrained creative thinking within their team.
Appreciation: Teams working for “A” leaders enjoy a high level of professional appreciation from their bosses. These leaders exhibit appreciation both in the form of commendations but also through proactive and impromptu coaching of employees on their developmental needs. Another way they show appreciation is by seeking advice from others, including junior members of their team.
Approachability: “A” type leaders try to create an environment where teams feel comfortable approaching their boss for problem solving or seeking advice without feeling judged or afraid. Given high degrees of autonomy allowed by these leaders, this attribute helps develop a self-regulated counterbalance system for their employees to seek advice as needed. This style is particularly helpful in companies with rapid growth or when you have a young team, like in a startup.
Accountability: In return for the above, “A” leaders expect a high level of accountability from their teams and use it as a key instrument to motivate and to calibrate strengths or gaps in their teams. They leave little ambiguity on the consequences of delivering or failing against set goals.
For a “P” leadership style, the attributes translate as the following:
Process: They are extremely process oriented and like to stick to tried-and-tested approaches to completing tasks. Often precise about each step, they want to ensure repeatability in the approach and outcomes. Think about a large software implementation; in that situation, you want your leader to be a process maniac to ensure timely success.
Predictability: “P” leaders enjoy having a high level of predictability and accuracy in their team’s output. They wish to avoid gray zones and work hard to minimize variances. For example, when hiring a production manager, you want a “P” leader so accurate throughputs can be achieved consistently.
Prescriptive: These folks can be extremely prescriptive and prefer to spell out each step in the process as well as the expected outcomes. If good at communicating, they leave little doubt on what, how and when their teams should deliver results. This style is essential in heads of functions such as compliance or safety.
Principles: Reliance on first principles for problem solving is a key attribute of “P” leaders. It is also a way for them to measure the skills of younger or newer employees and to ensure the work product is grounded in fundamentals. For example, the head of engineering design or head of the legal team should have this trait in abundance to avoid any missteps.
When I explained all this to my friend, he still had a few questions regarding whether one style is better than the other, whether a leader can be a blend of both styles and whether he should be assessing his company’s needs for now or the future.
The above is just a framework. But I believe both styles are useful in an organization. For most people, one style is at their core; however, with tenure or changing roles and responsibilities, they may need to demonstrate traits of the other type. It is like while introverts can exhibit extroverted behaviors when needed and extroverts may need their own time and space to think, at the core, people are either introverted or extroverted. Similarly, people are primarily “A” or “P” type leaders, but with practice and awareness, some, but not all, can bounce in and out of the other as needed. Feel free to check out related articles on “A-P Leadership Framework” as part of my series on Forbes.com. This series merely begins to touch the surface of this topic and deserves more content and discussions through various channels. | https://medium.com/swlh/finding-the-right-leader-a-primer-on-a-p-leadership-framework-3d50b2611e77 | ['Rohit Bassi'] | 2019-08-21 07:56:10.413000+00:00 | ['Business', 'People', 'Finance', 'Leadership', 'Money'] |
Day Twenty-Nine: My Journey to Zen | Day Twenty-Nine: My Journey to Zen
Slowly, slowly…
Photo by Daria Tumanova on Unsplash
Haste makes waste is an ancient cliché. Some say that it goes back more than two thousand years. Big surprise. I am not going to digress to research its origin. I do remember reading a while back that it is that old, and since I am writing about the concept rather than the history of the phrase, the origin is not pertinent at the moment.
My 30 day journey is almost over and there is something I must say. Thirty days of being mindful and examining the way I live is a good start. This is not some sort of quick fix. It is not like a crash diet, which is fairly certain of being a failure.
The price of freedom is eternal vigilance has been attributed to Thomas Jefferson, Thomas Paine, Abraham Lincoln, John Philpot Curran — to name a few. Again, I am not going to digress into researching these words. I am going to apply them to changing your life, your habits, your way of being.
Whatever you want to manifest in your life requires a focus on what will bring about the necessary changes. Making that habit a part of your life has a price…the price of eternal vigilance.
You cannot lose 100 pounds in a week, prepare to run a marathon in a week, become a millionaire in a week (unless you win a lottery), or entrench any of these big changes in a week.
Every day is a new day in the journey and being mindful is at the core of building and using your new habit. If you are tired, worried, angry, frustrated, trapped, stressed, or any of these negative things, being mindful is going to be difficult.
Take it slow. Take your biggest impediment and spend a few minutes with it. When did this start? What is the source? If it were your best friend’s problem, what advice would you give her?
Don’t expect the answer to suddenly burst out of the sky. Be patient. Go slow. Examine the issue thoroughly and list the possible solutions. List the potential things you could let go that might alleviate that negative impact on your life.
Be prepared for some tough moments as you come to terms with the situation. But do not let your past hold you captive and don’t wait for some miracle solution to arrive. This is your life and you can learn how to live it in a much more fulfilling manner.
For me it was a matter of stepping back from drinks with negative friends where we would sit and gripe about how unfair life was to us. A couple of hours of a pity party fueled by alcohol a few times a week did not solve much for me.
In truth, it made matters worse. It allowed me to label myself as a victim. Hah! That’s not an energizing label. It painted big black clouds in the sunshine of my days. It also used money that would have been better spent elsewhere and added extra calories to my already laden day.
When I walked away from these frequent grumble sessions, the odd thing is that not one of my gripe buddies missed me. We faded out of each other’s lives without a ripple.
When you make a change, often you find yourself feeling off kilter. You are out of your comfort zone. Take it easy. Think about yourself and your life and what you might like to do with the newly discovered time in your life.
Haste does make waste. There is no need to hurry. You cannot recover lost time. You start where you are and move forward. | https://medium.com/narrative/day-twenty-nine-my-journey-to-zen-a0d68a3ea22f | ['Joanne Reid'] | 2020-08-24 11:27:16.890000+00:00 | ['Self-awareness', 'Change', 'Mindset', 'Mindfulness', 'Mindset Shift'] |
Sad Dragon, Postmodernism, and Chubby Mary | But you can find not just old buildings in Tegel — it also houses the oldest tree in Berlin, so-called Chubby Mary (Dicke Marie). This is a natural monument, although the tree didn’t look quite healthy at first glance.
Dicke Marie
It’s believed this English oak grew from a seed in 1107, which would make it over 900 years old — older than the city of Berlin! Sadly, there are clues that its age has been exaggerated roughly twofold. There are not many 800–900-year-old trees in Germany, and all of them are far thicker and have darker bark. But even if it started growing in the middle of the 17th century, it’s a respectably old oak anyway.
Dicke Marie
The tree received its name, Chubby Marie, after the cook of the brothers Alexander and Wilhelm von Humboldt. They spent their youth not far from this site, in a grand white manor, Tegel Castle. Another famous guy, Johann Wolfgang von Goethe, visited the tree during his travels in 1778 and relaxed in its shade. I’m a bit skeptical about such facts — not a single huge tree seems to exist without some famous person having visited it.
Already tired of history? I’ve got a dose of relaxation. On the square with cannons, Kanonenplatz, you can enjoy lying on a wooden sunbed and gaze at numerous water birds: ducks, geese, and swans. | https://medium.com/5-a-m/alt-tegel-82324680a803 | ['Slava Shestopalov'] | 2020-12-19 12:14:16.609000+00:00 | ['Berlin', 'Architecture', 'Design', 'Travel', 'Photography'] |
Java vs Kotlin | Now Let’s discuss the Basic Differences
1. Intro & Release
•Java was developed by Sun Microsystems (now the property of Oracle) in 1995.
•It supports almost all platforms and operating systems, be it Android, Windows, Linux, or macOS.
•Kotlin was introduced by JetBrains in 2011, open-sourced in 2012, officially supported in Google I/O (annual event of Google developers) in 2017.
•Google claimed that 70% of the top 1,000 Android apps are written in Kotlin now.
•Some apps are being migrated from Java to Kotlin; for instance, the Google Home app isn’t completely written in Kotlin yet,
•but as of June 2020 about 30% of the code base had been rewritten in Kotlin from the legacy Java code.
•Other popular Kotlin apps examples from Google include Maps, Play, Drive.
•There are plenty of android apps by other companies written in Kotlin now.
•In a nutshell, Android development is now backed by a “Kotlin-first” policy. It is similar to iOS app development, which shifted from Objective-C to Swift.
2. Version
•As of November 2020, the Kotlin version is v1.4.0,
•while Java 15 has now been released, but Java 8 (aka 1.8) is still the most popular.
3. Speed
•Java does beat Kotlin by 12–15% for clean builds. That means Kotlin compiles a little slower than Java for full builds.
•However, for partial builds with incremental compilation enabled i.e. only building with small changes, Kotlin compiles as fast or slightly faster than Java.
4. Lines of code
•Code written in Kotlin is much smaller compared to Java.
There is nearly 30–40% less code in Kotlin, so apps have the potential to lose a third of their weight.
•Java is detailed while Kotlin is concise and modern.
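To make the conciseness claim concrete, here is a hypothetical example (the `User` class is made up, not from the article): a single Kotlin data class generates the getters, `equals()`, `hashCode()`, `toString()`, and `copy()` that a Java POJO would need dozens of lines to implement by hand.

```kotlin
// One line replaces the ~50-line Java POJO equivalent
// (fields + constructor + getters + equals/hashCode/toString).
data class User(val name: String, val age: Int)

fun main() {
    val u = User("Amir", 30)
    println(u)                 // prints: User(name=Amir, age=30)
    println(u.copy(age = 31))  // generated copy(): User(name=Amir, age=31)
}
```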
5. Market share
•Kotlin developers are 1/5th of Java developers as per surveys.
•Surveys report 7.8% of developers using Kotlin against more than 40% using Java, but these surveys also suggest that Kotlin is more loved than Java and expanding fast.
•References -
https://www.statista.com/statistics/793628/worldwide-developer-survey-most-used-languages/
https://insights.stackoverflow.com/survey/2020#most-popular-technologies
https://insights.stackoverflow.com/survey/2020#most-loved-dreaded-and-wanted
6. Null Safety
•Kotlin is safe against NullPointerException. This type of error is the largest cause of app crashes on Google Play.
•Java lets developers assign a null value to any variable.
•Unlike Java, all types are non-nullable in Kotlin by default. Assigning or returning a null will give compile time error.
•In order to assign a null value to a variable in Kotlin, it is required to explicitly mark that variable as nullable.
val number: Int? = null // Nullable type
val name: String = null // Compile-time error: cannot assign null to a non-nullable type
Nullable types are used with the safe-call operator.
name?.getLength()
So even if name is null, the whole expression evaluates to null without a NullPointerException.
7. Hybrid apps
•Kotlin can be used to write native code for Android and iOS apps
•Kotlin Multiplatform Mobile (KMM) works on both Android and iOS.
•Java is not used for iOS app development till now. | https://amir-ansari.medium.com/java-vs-kotlin-1a119beb43b8 | ['Amir Ansari'] | 2020-11-06 06:31:22.030000+00:00 | ['Kotlin', 'Comparison', 'Java']
Christmas | Christmas
Wish
Photo by loly galina on Unsplash
One of my fondest memories was when my mother would decorate for Christmas. She would buy a can of snow, and we would spray it on the window and make a shape of a mountain on half of the windowpane. Then we would take the Christmas stencils and fill them in with glass wax on the rest of the window. My mother loved decorating for the holidays.
I distinctly remember the silver Christmas tree with the silver palm fronds at the end of each branch. The Christmas lights we placed on the tree glistened in multi-colors. Watching the lights made my body tingle. It was the happiest time of my childhood that I could remember.
I would walk to Steinway Street with my mom and look at the dolls in the small toy store window. We both admired the beautiful ice-skating doll, which was dressed in a red velvet ice skating dress and a red velvet hood with white fur around her perfect face. She also wore white ice-skating boots. I wanted to magically touch her through the glass store window. She was so beautiful, and my eyes instantly filled with adoration — wishing I could have her.
When Christmas came that year, she was wrapped in a gaily-decorated box under my Christmas tree — with a beautiful faux fur coat for her from my older brother, Frankie. I received only two Christmas gifts, but to me it was more than enough. How far we have come from those simple days.
It was a time when I had my family all together before death came upon them. Because of my innocence, there was no thought that I would be the only one left behind on planet Earth from my immediate family.
But if I had only one wish, I would like to take a dream ride back in time as the movie. It’s a Wonderful Life. And have Christmas with my family that once lived. To be able to kiss my mother and father and my two brothers once more. That would be Christmas.
In response to the Spiritual Secrets Weekly Prompt: “Christmas” by Darshak Rana | https://medium.com/spiritual-secrets/christmas-2b69275955ec | ['Bernadette Decarlo'] | 2020-12-26 23:57:28.662000+00:00 | ['Christmas', 'Nonfiction', 'Family', 'Spiritual Secrets', 'Weekly Prompts'] |
#5: A Dot Com Crash | Featuring Usman Majeed
Listen Now
Show Notes
In today’s conversation with Usman Majeed we compare and contrast the dot com bubble of then and the valuations of cryptocurrencies.
He may have missed the boat with bitcoin, but he didn’t make that same mistake with ethereum and it’s landed him in great position, recently launching Mutual Coin Fund.
Reach this weeks guest:
Transcript
Coming Soon. | https://medium.com/coloringcrypto/episode-5-how-crypto-looks-when-you-never-experienced-the-dot-com-crash-78d385a51ae5 | ['Kelly Mcquade-W.'] | 2018-06-28 22:40:01.908000+00:00 | ['Startup'] |
Bringing Core Values to Life: A Look at "Stay Curious" | In today’s professional landscape, 51% of workers are actively looking to leave their current jobs and 73% are open to hearing about new opportunities, which means it’s more important than ever to engage your workforce if you want to win the talent war — and that includes keeping the talent already within the walls of your organization.
A key element to engaging your workforce is ensuring strength of culture through your company core values. Today’s workforce wants to work somewhere with strong moral fiber. They want to be filled with a sense of purpose beyond their projects and daily tasks. In the current talent landscape, candidates consider core values when vetting organizations. They want to know if they exist, what they are, how they are encouraged, shared, and promoted, and ultimately if they are truly part of your company’s DNA. If you do core values well, you’re guaranteed a more engaged and productive workforce.
80% of employees feel more engaged when their work is consistent with the core values and mission of their organization (IBM).
93% of workers at companies with recognition programs tied to core values agree the work they do has meaning and purpose (Globoforce).
Needless to say, core values are important. But you can’t just have them, you have to live them.
First Things First, Identify Your Core Values
At Element Three, our core values serve as the fundamental beliefs that guide our behaviors and decisions as an organization. Each core value was intentionally crafted to serve a unique purpose based on pivotal learning moments from the early days of Element Three. The history behind our core values makes them individually authentic and collectively powerful. Despite the evolution of the agency, our values have remained static and relevant since their inception because the decisions we make as a company are grounded in our core values.
E3 Core Values
Awesome Comes Standard
Business First
Emotional Intelligence
Stay Curious
Transparency
Creative Swagger
Own Selflessly
Defining core values is another lesson for another day. But assuming you’ve identified them for your own organization, the question becomes, “how do I create space within my organization to recognize, promote, and encourage said values?”
Living Your Core Values: Stay Curious
While I could go on and on about all of our core values, I’m going to dive deep into one in particular — Stay Curious — and elaborate on how this value is lived out within the walls of Element Three every day.
Stay Curious: Ask why. Search more. Participate. Create. Don’t ever rest in the belief that you have it all figured out — always be looking forward to what is next.
Questions are powerful, and if you don’t believe me, Forbes thinks so too. In an article titled The Power of Questions, Forbes contributor Jeff Boss writes, “Nothing has such power to cause a complete mental turnaround as that of a question. Questions spark curiosity, curiosity creates ideas and ideas (well, good ones) lead to innovation and dollar signs — ideally.”
At Element Three, employees at every level are encouraged to stay curious. We want folks to challenge the status quo, get to the root cause of an issue, and push others to think differently and dig deeper, all through the power of curiosity and questions.
So how do we ensure our employees embody “Stay Curious?” We infuse it into seemingly small but extremely important parts of our business — onboarding and peer-to-peer recognition.
Onboarding
As part of our employee onboarding, each new member of the Herd must learn and share the core values with someone in the office during their third week of work. And by the following week, they must recite the values by heart. It is a seemingly simple ask of new employees, but it is something we take seriously. Within your first month at Element Three you need to know our values, because you will only encounter them more and more as you move through the organization.
So, are you bringing your core values into your onboarding experience? Or are you simply listing them out, without any specific action to take?
Another part of our onboarding is to schedule meetings with other E3ers to get to know them better. You’d better believe they’re asking questions or that gets really awkward really fast. And finally, every new E3er has to ask Tiffany — the president of our agency — a question.
To live out your core values, you can’t just list them — you need to find clever or creative ways to have your people practice them. And what better time than when someone is fresh to the organization?
Peer-to-peer Recognition
Another place that lends itself to your core values is any peer-to-peer recognition program you might have in place.
At E3, we have a peer-to-peer recognition program in place that is meant to educate, encourage, recognize, and reward behaviors that go above and beyond in representing our core values specifically. It provides a simple platform for peers and supervisors to recognize the positive contributions of the E3 Herd. We’re at about 75 full-time employees currently, and roughly 100 Awesome Blocks (the program name) are awarded each month.
Not only does this keep core values front of mind for employees who honor their peers, but if you can make it visible, others begin to see real examples of the core values lived out in real life (some of which I’ve conveniently included below).
It could be a longtime Creative Director who “is always asking and figuring out how to lead the team and push our clients further and it’s never lost on me how much he cares. I’m doing my best to learn from his leadership style so I can hopefully be a version of how calm, yet powerful he is.”
Or it might be a relatively new employee who stepped into a project management role and “has shown mass amounts of curiosity in her first month at E3 — she is constantly seeking to understand the ‘why’ behind everything we do.”
Or maybe it’s a Senior Digital Marketing Manager who jumped into a project late in the game to provide some subject matter expertise: “His ability to question how and why previous programs were built, and then improve upon those things has been huge for our team.”
Allow a Safe Space for Employees to Practice Core Values
If your core value is to “Think Big” or “Take Risks,” you’d better make sure your team members know it’s okay to fail every once in a while. And if your core value is to “Stay Curious” (like ours), you’d better give your employees a safe space to ask hard questions.
Here’s what it looks like for us: every month our organization gets together for a Business Review meeting. One component of the meeting includes a Q&A session — anyone from the organization can submit a question, and it will be addressed by the appropriate leader. Questions are all across the board, ranging from “What is the meaning of life?” to “Which service areas are we pushing right now?” Seriously.
If your employees fear practicing your core values, they’ll never truly embody them. And as a leader in the organization, you want to help them get there, not scare them away.
Live Your Core Values in All That You Do
In the end, the most simple (and potentially most effective) way to really get buy-in for your core values is to truly live up to them, every day. After all, great leaders don’t just lead with words, they lead by their actions. Show your team what living your core values looks like, and they’re more likely to follow you. Think about your core values when you make decisions, and practice what you preach. Because if you don’t, it’s unlikely anyone else is going to.
Perspective from Maggie Campbell, Organizational Development Strategist at Element Three | https://medium.com/element-three/bringing-core-values-to-life-a-look-at-stay-curious-b89a4018b027 | ['Element Three'] | 2018-06-15 14:49:58.433000+00:00 | ['Culture', 'Core Values', 'Startup', 'Employer Branding', 'Agencylife'] |
Drum Patterns from Latent Space | Drum Patterns from Latent Space
Percussion Beats And Where To Find Them
TL;DR: I collected a large dataset of drum patterns, then used a neural network approach to map them into a latent explorable space with some recognizable genre areas. Try the interactive exploration tool or download several thousand of unique generated beats.
Context Overview
In recent years there have been many projects dedicated to neural-network-generated music (including drum patterns). Some of these projects use an explicit construction of a latent space in which each point corresponds to a melody. This space can then be used both to study and classify musical structures, as well as to generate new melodies with specified characteristics. Others used less complex techniques, such as “language model” approaches. However, I was unable to find an overall representation of typical beat patterns mapped to 2D space, so I decided to create one myself.
Below, I listed relevant projects that I managed to found and analyze before I started working on my own:
LSTMetallica — author used a language model approach to predict the next step in a beat.
neuralbeats — another language model project, very similar to LSTMetallica.
Magenta VAE— Google’s Magenta is a great source of interesting models and projects on music generation and augmentation. Particularly, in 2016 they released the drum_rnn model, and in March 2018 they published the music_vae model. Since then a lot of projects used these models.
For example, last year Tero Parviainen created a really great online drum beat generator based on drum_rnn+Magenta.js+TensorFlow.js+Tone.js.
Beat Blender was another Magenta-based project (first presented at the NIPS 2017). It’s quite similar to what I wanted to do, however, the authors hadn’t built the overview map of different genres, but only an interpolation space between pairs of patterns.
Last but not least, there was my other project, Neural Network RaspberryPi Music box, which used the VAE space to generate an endless stream of piano-like music.
Dataset Building
Most of the projects I found used small datasets of manually selected and cleaned beat patterns, e.g. GrooveMonkee free loop pack, free drum loops collection or aq-Hip-Hop-Beats-60–110-bpm.
It wasn’t enough for my goals, so I decided to automatically extract beat patterns from huge MIDI collections available online. In total I collected ~200K MIDI files, then kept only those with a nontrivial 9th channel (the percussion channel according to the MIDI standard), so there were approximately 90K tracks left. Next, I did some additional filtering, basically in the same fashion as implemented in the neuralbeats and LSTMetallica projects (I used a 4/4 time signature, and applied quantization and simplification to the subset of instruments).
Then I split tracks into separate chunks based on long pauses, and searched for patterns of length 32 steps that were repeated at least 3 times in a row — in order to speed up the process I used hashing and a few simple greedy heuristics. Finally, I discarded trivial patterns with too low entropy and checked the uniqueness of each pattern in all possible phase shifts. Ultimately, I ended up with 33K unique patterns in my collection.
some examples of distilled patterns
I used a simple scheme to encode each pattern: a pattern has 32 time ticks and there are 14 possible percussion instruments (after simplification), so each pattern could be described by 32 integers in the range from 0 to 16383.
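Given that description, the encoding is straightforward: with 14 instruments mapped to bits 0–13, the set of instruments active at each tick becomes a 14-bit mask, and a pattern is 32 such integers in [0, 16383]. A minimal sketch (the instrument-to-bit mapping below is my assumption; the article doesn't specify the order):

```python
N_TICKS, N_INSTRUMENTS = 32, 14

def encode_pattern(hits):
    """hits: list of N_TICKS sets of instrument indices (0..13) active at each tick."""
    return [sum(1 << i for i in tick) for tick in hits]

def decode_pattern(codes):
    """Inverse: recover the set of active instruments per tick from each 14-bit mask."""
    return [{i for i in range(N_INSTRUMENTS) if code >> i & 1} for code in codes]

# A toy pattern: instruments 0 and 2 hit together every 4th tick.
pattern = [{0, 2} if t % 4 == 0 else set() for t in range(N_TICKS)]
codes = encode_pattern(pattern)
print(codes[:4])  # [5, 0, 0, 0] — 5 = (1 << 0) | (1 << 2)
```

The round trip is lossless, which is what makes the compact 32-integer representation in the TSV dataset possible.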
You can download the dataset here in TSV format:
First column holds the pattern code (32 comma-separated integers).
Second column is the point of this pattern in the latent 4D space (4 comma-separated float values), see details below.
Third column is the t-SNE mapping from the latent space into 2D projection (2 comma-separated float values), see details below.
Neural Network
I used the pytorch framework to build a network with a 3-layer FCN encoder mapping the beat matrix (32×14 bits) into a 4D latent space, and a decoder with the same size as the encoder. The first hidden layer has 64 neurons, the second one 32. I used ReLU between the layers and a sigmoid to map the decoder output back into the bit mask.
How Ingress Works in Kubernetes | Setting Up Ingress With minikube
For this example, we are going to go through the process of setting up Ingress, using the Ingress Controller type. The implementation of the Ingress Controller is going to be nginx, but it is just as straightforward to use other ingress controllers, such as Traefik.
First of all, make sure minikube is started and working.
➜ minikube start
😄 minikube v1.14.2 on Darwin 10.15.7
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" by default
We also need to enable ingress support for minikube, which can be done with the following command
➜ minikube addons enable ingress
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
If this does not work, delete your minikube instance and run using this command.
minikube delete && minikube start --vm=true
We are going to create a namespace, as its good practice to logically separate things from each other.
➜ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
name: minikube-example
labels:
name: minikube-example
EOF
namespace/minikube-example created
This will create a namespace called minikube-example .
➜ kubectl get namespaces
NAME STATUS AGE
default Active 7m29s
kube-node-lease Active 7m30s
kube-public Active 7m30s
kube-system Active 7m31s
minikube-example Active 39s
Now we should deploy a pod that contains a web application. One I use a lot for testing is one of the Google samples, Hello App.
➜ cat <<EOF | kubectl apply -n minikube-example -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
labels:
app: test-app
spec:
replicas: 3
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
spec:
containers:
- name: test-app
image: gcr.io/google-samples/hello-app:1.0
ports:
- containerPort: 8080
EOF
deployment.apps/test-app created
I can see the pods running now (I use k9s, which is an awesome tool for managing your k8s cluster: https://github.com/derailed/k9s).
Viewing the cluster in k9s
Hitting the web app after using a port forward
I manually added a port-forwarding rule to check the app works as expected.
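If you prefer not to use k9s, kubectl can do the same quick check directly. A minimal sketch (assumes the deployment above is running; the local port 8080 is an arbitrary choice, and these commands need a live cluster):

```
# Forward local port 8080 to the pods behind the test-app deployment
kubectl port-forward -n minikube-example deployment/test-app 8080:8080

# In a second terminal, hit the app
curl http://localhost:8080/
```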
Underneath, it seems to set up a NodePort for as long as k9s is open, but if you wanted to do this manually, it is possible by running a command like this:
➜ cat <<EOF | kubectl apply -n minikube-example -f -
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
name: test-app
spec:
type: NodePort
ports:
- port: 8080
name: http
selector:
app: test-app
EOF
service/test-app created
➜ minikube service --url test-app -n minikube-example
🏃 Starting tunnel for service test-app.
|------------------|----------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|------------------|----------|-------------|------------------------|
| minikube-example | test-app | | http://127.0.0.1:53732 |
|------------------|----------|-------------|------------------------|
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
➜ curl http://127.0.0.1:53732/
Hello, world!
Version: 1.0.0
Hostname: test-app-df4854fbc-z9zq9
(Note that the selector uses the label rather than the name, since the deployment will create pod names with a unique identifier appended. Also remember to delete the service afterward, as we are going to use the Ingress Controller instead.)
Ok cool, so now the deployment and pods exist, we can focus on the Ingress Controller.
➜ cat <<EOF | kubectl apply -n minikube-example -f -
---
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
name: test-app
spec:
type: NodePort
ports:
- port: 8080
name: http
selector:
app: test-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minikube-example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: minikube-example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: test-app
port:
number: 8080
EOF
service/test-app created
ingress.networking.k8s.io/minikube-example-ingress created
What this will do is create an Ingress resource, which tells the ingress controller to proxy all requests with the host header minikube-example.com to the service named test-app.
To check this is working as expected, run the following command. (It could take a couple of minutes for the IP to appear.)
➜ kubectl get ingress -n minikube-example
NAME CLASS HOSTS ADDRESS PORTS AGE
minikube-example-ingress <none> minikube-example.com 192.168.64.4 80 56s
You should then add this IP to your hosts file, like so:
➜ echo '192.168.64.4 minikube-example.com' | sudo tee -a /etc/hosts
Now, time to try it out!
➜ curl minikube-example.com
Hello, world!
Version: 1.0.0
Hostname: test-app-df4854fbc-6mjnw
The ingress controller hits each pod in turn
You now have the nginx ingress controller, running on minikube!
To change this up and test some things out, we are going to try two back ends and update the Ingress to include them both.
The easiest way to get rid of what we have done so far is to delete minikube and start it back up again, or use a different namespace.
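If you would rather not rebuild the whole VM, deleting just the namespace removes everything we created inside it. A sketch of that alternative (needs a live cluster):

```
# Tear down only our resources; the cluster and the ingress addon stay up
kubectl delete namespace minikube-example
```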
➜ minikube delete && minikube start --vm=true && minikube addons enable ingress
🔥 Deleting "minikube" in hyperkit ...
💀 Removed all traces of the "minikube" cluster.
😄 minikube v1.14.2 on Darwin 10.15.7
✨ Automatically selected the hyperkit driver
👍 Starting control plane node minikube in cluster minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube" by default
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
And then run in a config, similar to this:
➜ cat <<EOF | kubectl apply -n minikube-example -f -
---
apiVersion: v1
kind: Namespace
metadata:
name: minikube-example
labels:
name: minikube-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-app
labels:
app: hello-app
spec:
replicas: 3
selector:
matchLabels:
app: hello-app
template:
metadata:
labels:
app: hello-app
spec:
containers:
- name: hello-app
image: gcr.io/google-samples/hello-app:1.0
ports:
- containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: insights-app
labels:
app: insights-app
spec:
replicas: 3
selector:
matchLabels:
app: insights-app
template:
metadata:
labels:
app: insights-app
spec:
containers:
- name: insights-app
image: yeasy/simple-web:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: hello-app
labels:
name: hello-app
spec:
type: NodePort
ports:
- port: 8080
name: http
selector:
app: hello-app
---
apiVersion: v1
kind: Service
metadata:
name: insights-app
labels:
name: insights-app
spec:
type: NodePort
ports:
- port: 80
name: http
selector:
app: insights-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minikube-example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: minikube-example.com
http:
paths:
- path: /hello
pathType: Prefix
backend:
service:
name: hello-app
port:
number: 8080
- path: /insights
pathType: Prefix
backend:
service:
name: insights-app
port:
number: 80
EOF
namespace/minikube-example created
deployment.apps/hello-app created
deployment.apps/insights-app created
service/hello-app created
service/insights-app created
ingress.networking.k8s.io/minikube-example-ingress created
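Conceptually, the controller evaluates these rules as a first-match lookup on the Host header and a path prefix. A toy Python sketch of that matching logic — this is just an illustration, not the nginx controller's actual implementation:

```python
def route(host, path, rules):
    """Pick a backend for a request by first matching rule (host + path prefix),
    mimicking how the Ingress rules above are evaluated."""
    for rule in rules:
        if rule["host"] == host and path.startswith(rule["path"]):
            return rule["service"]
    return None  # no match -> the controller's default backend (404)

# The two rules from the Ingress manifest above
rules = [
    {"host": "minikube-example.com", "path": "/hello", "service": "hello-app"},
    {"host": "minikube-example.com", "path": "/insights", "service": "insights-app"},
]

print(route("minikube-example.com", "/hello", rules))    # hello-app
print(route("minikube-example.com", "/insights", rules)) # insights-app
print(route("other.com", "/hello", rules))               # None
```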
Now wait for the pods to enter a running state.
Viewing the cluster in k9s
Wait for the ingress controller to become ready.
➜ kubectl get ingress -n minikube-example
NAME CLASS HOSTS ADDRESS PORTS AGE
minikube-example-ingress <none> minikube-example.com 192.168.64.7 80 51s
And update the hosts file to reflect it.
sudo vim /etc/hosts
Now check that both paths can be reached from Ingress.
➜ curl minikube-example.com/hello
Hello, world!
Version: 1.0.0
Hostname: hello-app-5b844b975f-ddrxc
➜ curl minikube-example.com/insights
<!DOCTYPE html> <html> <body><center><h1><font color="blue" face="Georgia, Arial" size=8><em>Real</em></font> Visit Results</h1></center><p style="font-size:150%" >#2020-11-11 12:38:29: <font color="red">1</font> requests from <<font color="blue">LOCAL: 172-17-0-2.ingress-nginx-controller-admission.kube-system.svc.cluster.local</font>> to WebServer <<font color="blue">172.17.0.6</font>></p></body> </html>
Awesome, now we have the Ingress controller routing two different paths to two different back ends. | https://medium.com/better-programming/how-ingress-works-in-kubernetes-and-how-to-set-it-up-in-minikube-bb23d9086b1c | ['Craig Godden-Payne'] | 2020-11-16 20:31:29.977000+00:00 | ['Programming', 'Minikube', 'Kubernetes', 'Docker', 'Containers'] |
How name-calling alienates supporters | How name-calling alienates supporters
I wrote this story just before Covid-19 took our attention from climate change and other topics. It discusses how the use of language can alienate supporters of social movements and how avoiding such language makes for inclusive movements.
This is the story of how cliches and names can alienate people. It is about the contemporary use of gender or ageist wording and how this has the opposite effect to that intended, how it turns away supporters.
When right-wing adults and media shock-jocks made ad hominem attacks on the age, character, gender and politics of Swedish climate activist Greta Thunberg, an illustrator produced a work calling them and the economic elite at the head of the oil and gas industry who oppose moves to ameliorate the warming climate, 'white dudes'.
We should be clear that most of the prominent critics of Greta Thunberg are white, are of a middle or older age bracket and are ‘dudes’ when that term is understood to refer to males. Some are media commentators and so have more speaking rights than other people. Together, the most vocal are a political and economic elite.
Well intentioned but self-defeating
The illustrator’s allegation was right. That is not the point of this story. The overuse of terms which highlight ethnicity, gender and other characteristics is. The point is important to people working in, or vocally supportive, of progressive social movements.
‘White dudes’, one of those terms, is similar to other Americanisms creeping into our Australian vocabulary. Terms like ‘old white males’, ‘middle-aged white guys’ and ‘fragile white dudes’ have been adopted mainly by Australians who view things through an ethnocultural, a gender or an ageist lens and who often share some form of vaguely-leftist political attitude.
Although their descriptions may be true, the unfortunate thing in using them is to stigmatise the types of people they describe — old, white, male and so on. 'Feminist' is used in similar ways as a putdown to categorise people, mainly by the right wing. Bounced around the social media echo chambers of like-thinking groups and out to a wider demographic, people read the posts and assume that, somehow, all who are old, white, male or whatever the accusation is, are guilty. Unnuanced use of such terms ascribes universal guilt, whether that is the intention or not. It is obvious that this is an injustice.
When people stigmatise, deliberately or unintentionally, through the use of such terms without saying that there are a great many exceptions, they come across as arrogant and morally superior. They also alienate supporters who may be old, white, male, feminist or whatever the chosen putdown might be. This comes at a cost to social movements because, tired of being lumped with the guilty when they might have worked for decades towards making things better, they walk.
We have probably seen this happen on social media. How many insults does a person have to endure before they have had enough? And what do we see when this happens? More putdown terms like describing the accused as ‘fragile white males’ or similar. It should be obvious that using these terms is a tactical blunder.
Terms of the culture wars
White, old, male, feminist, politically correct. When combined, these word strings imply a particular meaning. They are the terms of the culture wars.
They are also cliches. Like other cliches there is an element of truth to them; however, that is diluted through the loose use of the terms and because they are overused. Meaning and impact leach from overused terms until they become largely meaningless, what have been called 'throwaway' words. They sometimes become terms of reverse-pride in which a person adopts the term to describe themselves. This dilutes their impact.
How language is weaponised
The use of words of accusation are relevant to a book I read years ago by the educator in thinking strategies, Edward de Bono. He said naming is a powerful tool. Naming gives meaning. By naming something you imply something about it. Naming can be used to support or disparage.
‘White dudes’, ‘old white men’, ‘fragile white males’, ‘feminist’ and similar idioms are good examples when used in an accusing manner to single out some individual or group. When it comes to the culture wars, naming weaponises language. Both the left and right make use of it.
The illustrator I mention is well respected, and deservedly so. When I saw the comment I understood it was not meant to apply to all white males. But that implied exclusion was not made clear. That's the danger with assumptions, such as assuming that males will realise not all of them were meant to be targets. They are assumptions to the person making them but not necessarily to anyone else.
Social media readers did not understand that the terms applied only to the particular subset of older, white males attacking Greta Thunberg. They took offence and responded in the comments on the Facebook group.
“I’m a supporter of Greta Thunberg and her ilk; it’s offensive how you’re lumping ‘white dudes’ together as being opposed to her and her message. I can understand that it may appear to be a ‘white dude’ thing when you have people like ScoMo, Alan Jones and Andrew Bolt all effectively on the same side, but in making such gross generalisations you’re falling into the same trap as this other ‘side’ when they make generalisations about ‘young people’ or ‘African gangs’ too”.
…
“Many climate change deniers throughout the world would not be white, and many would not be male”.
…
“If you’re going to generalise at all here, I think the generational aspect of this is more relevant — but, again, people like David Attenborough remind us that we can only generalise so far in this regard too”.
…
“‘White men’ are not a homogenous group — some are poor, some are rich; some have power, some are powerless. Some are part of the problem; others are part of the solution”.
…
“Most of the world’s climate scientists, who have worked diligently to bring climate change to the attention of the world, are white males”.
…
“Yes, many white males are denying climate change and supporting vested power groups; but many others are supporting and creating change”.
…
“I read the comments carefully, and the post, despite some later qualification, gives the impression of lumping white males together, with no qualification in the main text box”.
…
“If all white males were to band together, conspiracy-style, and sue (the illustrator), I’m confident that the argument that (the person) is not defaming us would fail to hold up in court”.
…
“And (the illustrator) seems to label anyone who takes issue with her argument to be a ‘troll’. We can’t win, apparently!”.
…
“Tired of the skin tone/gender bias… Totally pollutes the message.”, wrote another, a non-Anglo.
Kick-back
Here we see people kicking back against the assumption that they would realise the comment was not aimed at them. In defence of the illustrator, some commentators to the social media post said as much. However, the point is that those whom the item was not directed at were left to realise that for themselves.
The incident is not the only example of this. White people, males, the old and young have taken offence at similar comments with racial, gender or ageist wording that puts them in the same bag as the guilty fitting those descriptions.
It is the way the terms are used as generalisations. The responses show how it comes across as accusing all older white men of being guilty of some transgression. Predictably, that triggers a hostile reaction. In the example we are looking at, hostile reactions make gender, ethnicity and the illustrator the main thing, taking the focus away from the illustrator’s intention.
There are always exceptions
The point about not generalising on the basis of gender, sexuality or age is important to citizen journalists. There are always exceptions to generalisations, and people will point that out. They will be compelled to defend themselves. That is why journalists sometimes take the precaution of starting a sentence with 'generally,…' when they make a generalisation.
When the portion of exceptions is likely to be large it might be better to make our point without generalising and by being more specific.
Let’s not alienate supporters
For influential people in social movements and citizen journalists working with them or writing about them, using inclusive language is important because numbers are important to the impact of the movements. It is best not to alienate even when tempted to use gender or age-related terms. Avoidance of accusations around gender, age and ethnicity is a good tactic.
Viewing any issue through ethnicity and skin colour, gender or an ageist lens is potentially divisive of social movements and potentially alienates supporters who fit that description. Supportive people are turned away. Language based around those single issues, important that they are, exclude people.
Using the language of gender, age and ethnicity risks fracturing movements. This is not to say that issues around those characteristics are not important, however wrongful use of those terms can turn people away.
Better to focus on what people have in common and build the movement on that. | https://medium.com/citizen-journalism/how-name-calling-alienates-supporters-8f3a011d1246 | ['Russ Grayson'] | 2020-08-10 00:50:35.286000+00:00 | ['Social Movements', 'Language', 'Citizen Journalism', 'Citizen Journalist', 'Journalism'] |
The Future Vision of Microsoft 365 | Productivity is personal.
Who we are as human beings deeply influences productivity as both a process and an outcome. Our values and beliefs, the needs of our families, our personalities and preferences, how energized or deflated we feel when seeing our Calendars or To Dos — these are key facets of productivity.
And because nobody knows your external circumstances or inner emotional state better than you, achievement needs to happen on your terms to be sustainable. This is perhaps truer than ever because, for many, 2020 swallowed whole the proverbial work/life divide. Understanding this propels us to craft Microsoft 365 experiences that support our lives in all their unique complexity.
We’re also crafting experiences that acknowledge the broader ecosystem that Microsoft 365 lives within. As the world wakes up to racial injustice, the economic gap between “essential” and “non-essential” workers, and the long-term cost of a digital learning divide, we’re intentionally and ethically designing digital spaces to support a diversity of lived experiences.
As product makers, we go where human need takes us and strive to navigate the nexus of timeless needs and current realities. Three years ago, we launched Microsoft 365 and began holistically rethinking how our products could work together as an intelligent and connected suite of services. We implemented flexible designs and over time evolved our ecosystem to facilitate not just modern work, but modern life.
That ecosystem increasingly decouples app capabilities from the apps themselves, leaving you free to use functionality whenever, however, and wherever you need it. New generations’ embrace of mobile devices for their ease, simplicity, and joy has inspired us to create cross-platform Microsoft 365 experiences that scale gracefully and feel natural to whatever device you choose.
Today, the future of Microsoft 365 blends our planned trajectory with real-time changes based on the remarkable complexities that 2020 dropped at the world’s feet. We center our efforts around four key experience pillars, and the work that you’ll read about below is itself a hybrid. While some of these changes will roll out within a year or two, others are still very much exploratory. | https://medium.com/microsoft-design/m365future-815cf30a8be | ['Jon Friedman'] | 2020-07-21 17:43:13.178000+00:00 | ['Microsoft', 'Design', 'User Experience', 'UX', 'Future Of Work'] |
Caviar: A Usability Case Study. It’s arguable whether a good meal is a… | Caviar is a popular app that offers food delivery from local restaurants.
Disclaimer: I do not work for Caviar. This case study was purely used as a tool for learning and making me a better designer.
I conducted a 2-week case study to challenge myself to improve Caviar's iOS ordering flow. As an ardent food lover, I use Caviar a lot, if not the most of all my apps. Through research, I was able to identify a few pain points that users were experiencing. I then prototyped some solutions and validated them with user data.
Research
To begin my research, I started to look at a few competitors or similar platforms, analyzing UI, UX, User flow, key features and reviews to identify the common problems.
While many restaurants certainly offer their own pickup and delivery services, and there are a handful of smaller players in the marketplace, there are currently four other main competitors with Caviar Delivery: Uber Eats, Postmates, Grubhub, and DoorDash.
User reviews
There are nearly 9,000 combined reviews between iTunes and Google Play, and the Caviar app held an average rating of about 4.4 stars. I identified potential trends from user reviews that had not previously been highlighted as issues about usability or functionality. I've categorized the complaints into 4 categories:
Tabular view + filters | Every project starts simple, with few models and not so many records to take into consideration. But either by necessity or simply by adding more content through time, having a grasp of every piece of content created should be easy to access and in a fast way.
To reach this accessibility goal, DatoCMS has two important features that work perfectly together, the tabular view and the filters.
Why Tabular view?
Tabular view is the perfect solution for complex projects.
If you want to make a project for an e-commerce site, you'll have a model for a product with many different fields.
A product could have even more fields than this
As your project grows and more records are added to your product model, it becomes hard to keep track of every single record when you have to edit them.
The Compact view seems to be unfit to manage this kind of complexity, you can’t sort your products and it’s very hard to search for them.
Now let’s see instead how the Tabular view shows a collection of many records.
With different columns it’s much way easier to discern your records
Tabular view allows editors to configure (add, remove, reorder and resize) columns; each column related to a field inherits that field's name, making them highly customizable too.
Each editor can manage their preferences, adding or moving columns according to their needs, then merge different filters to create a powerful search experience through many records.
With Tabular view you’ll be able to put some logic in the visualization of your list of records.
The columns selection with default values and customized fields
How to use it
By default the visualization for your models is now set on tabular view, let’s see exactly where it is located in your model settings.
In your DatoCMS dashboard choose a project and then go to its settings and select “Models”, under “STRUCTURE” in the left sidebar, and then click on the “SETTINGS” of the model you want to edit.
The settings for the model are next to its name on the right
Once you clicked this will open the settings modal, select the “Additional settings” tab.
This is where you can set your records visualization mode to tabular view
With Tabular view DatoCMS allows you to select what columns you want to see and then arrange them as you like.
Select and remove the columns and then move them the way you like
Tabular view + Filters, a match made in heaven
So it’s a given that tabular view for your model is awesome but it would be an empty shell without a proper way to search for your content.
Enter the new filters: with them you can search any of the values of the columns, and if you don't remember the name of a record you can narrow it down by time.
you can use more than one filter for a detailed search
Evolving
Tabular views and filters are a couple of features designed to improve content management for anyone using DatoCMS.
DatoCMS has many other new features incoming and it’s constantly evolving thanks to the suggestions and feedbacks of who’s using it.
Moving forward is the only way for DatoCMS. | https://medium.com/datocms/tabular-view-filters-a0193be2dd8a | ['Francesco Falchi'] | 2018-10-05 15:08:59.828000+00:00 | ['Headless Cms', 'Features', 'Visualization', 'Organization', 'Web Development'] |
Talk less… Do more… Let your actions show what you’re against and what you’re for… | “What you do speaks so loudly that I cannot hear what you say.” Ralph Waldo Emerson
We are all familiar with the old saying "actions speak louder than words," and I am certain we all know people who consistently say one thing and then do another. Without a doubt we have all been guilty of doing this ourselves at times, because it can be challenging to live a life of 100% alignment and consistency with our stated words and values.
Why do we say anything at all? Why do we create a disconnect between our words and our actions? Is it because we are desperate to be heard? Is it because words are so very easy to say and we believe by simply saying them we might have done enough?
As I write this I keep hearing the line from Hamilton in which Aaron Burr advises Alexander Hamilton to "Talk less… Smile more… Don't let them know what you're against or what you're for…" What if we flipped this and lived by a credo that said "Talk less… Do more… Let your actions show what you're against and what you're for…"
This isn’t new perspective by any means. James wrote about the need to let your works be the representation of your faith in one of my favorite books of the bible. ‘What good is it, my brothers, if someone says he has faith but does not have works? Can that faith save him? If a brother or sister is poorly clothed and lacking in daily food, and one of you says to them, “Go in peace, be warmed and filled,” without giving them the things needed for the body, what good is that? So also faith by itself, if it does not have works, is dead.’ James 2:14–17
If you choose to put the effort and energy into your actions instead of your words, then you won't have to worry about the things you do overpowering the things you say…
Recognizing Burnout & How to Beat It | In this fast-paced world, we’re all busy bodies with our own work and study commitments, relationships to uphold, a controlling conscience, and many more. The accumulation of these tasks can be overwhelming and lead to stress, which when unaddressed, can lead to burnout.
Burnout essentially is the accumulation of prolonged stress, which is ultimately emotionally, physically, and mentally draining. Because we can easily tire ourselves, both mentally and physically, with work and unhealthy thoughts, it is inevitable that we reach the point of burnout from time to time.
In many cases, burnout can lead to unfavorable physical changes. It can lead us to withdraw from work and relationships, and to neglect self-care. On top of that, burnout oftentimes manifests as a physical ailment as well, such as headaches, muscle pains, or a weakened immune system. Such physical symptoms go to show the interconnectedness of the mind and the body: when we put in our all, we often forget about ourselves. And when we forget about ourselves, we withdraw from the world around us.
Now although you may typically associate productivity marathons with burnout, it turns out there is a lot more behind it than you think and it’s important to explore the various signs and symptoms. Thus, when you begin to find yourself being overwhelmed and/or demoralized, consider these following tips to recognize, deal with, and prevent burnout.
Recognizing Burnout
There are three types of burnout, each with distinctive causes:
Overload Burnout
Overload Burnout is the form of burnout that most commonly comes to mind as it is associated with working excessively and frantically. Such dedication to completing tasks and securing accomplishments is oftentimes done with little regard for your own health.
Having an unmanageable workload and high expectations from either oneself, or a superior, frequently initiates an intense, yet simultaneously draining race towards completion. While we embark on our tiresome endeavors towards success and satisfaction, we oftentimes neglect ourselves during this surge of energy and productivity.
Thus, with our eyes glued to our work, we oftentimes disregard the passing of the day, we skip meals in exchange for productivity, and continue to type away at our computers– all the cost of some quality time with friends, family, and ourselves.
Under-Challenged Burnout
Another subtype of burnout is called “Under-Challenged Burnout.” Under-challenged burnout refers to the disengagement of individuals who find that their job does not spark motivation or excitement.
According to performance coach Melody Wilding, “Because under-challenged people find no passion or enjoyment in their work, they cope by distancing themselves from their job. This indifference leads to cynicism, avoidance of responsibility, and overall disengagement with their work.”
As humans, we thrive off of stimulation and excitement– even in the most temporary of things. But without the flame of passion, the more prolonged version of excitement, it’s no wonder that we may end up feeling drained and detached.
In fact, those experiencing under-challenged burnout oftentimes gauge the significance and satisfaction they'll get from getting the work done. If something is deemed engaging, the task will be done with ease, but if it's not, it will likely be put off to the side. Before you know it, there will be a hefty pile of things to do.
Neglect Burnout
The last form of burnout is Neglect– feeling helpless and incompetent– which can lead to reduced performance.
Neglect in itself can be attributed to a cynical mindset. Those of us who suffer from neglect burnout are oftentimes held back by imposter syndrome and the related believe that our “incompetence” renders us useless. Many of us believe that our effort is not good enough and thus it is not worth it to try at all. As a result, individuals experiencing neglect burnout oftentimes refrain from carrying out tasks out of the fear of failure, thus contributing to passivity and lack of motivation.
While neglect burnout may appear to be the polar opposite of overload burnout, the same mental and emotional fatigue is present. In this case, it's the tiresome thoughts of incompetence and fears of failure that tempt us to withdraw rather than try.
Dealing with Burnout
The first step to dealing with burnout is to recenter your attention from all the stressors and distractions in your life and funnel it towards yourself. After identifying the type of burnout that you are experiencing, ask yourself ‘What do I need right now?’ and consider the fixes suggested below:
With overload burnout, it is extremely important to know your limits and that you set up your work routine around that. Rather than trying to maintain peak, obsessive productivity at the cost of the progressive decline of your mental health, try to develop a consistent work schedule instead that integrates breaks and self-care into your daily routine.
“Burnout is the result of too much energy output and not enough energy self-invested. In other words, it’s burning too much fuel than you’ve put in your tank.” — Melissa Steginus
Take into consideration these quick and simple self-care activities that can be easily integrated into any busy lifestyle:
Go for a walk → Helps give your mind a break from the heavy workload.
Talk to someone for personal enjoyment → Just because you're a busy person does not mean it should cost you your social life.
Take a power nap → Allows you to re-energize for enhanced productivity and gives your brain the chance to process new ideas.
Treat yourself to a snack → Reward yourself with something small while making sure you get the energy and nutrients you need to keep working.
Because under-challenged burnout stems from one's attitude towards their work and studies, it is highly recommended that individuals self-reflect and consider what truly sparks their interest. To break free of the cynical mindset associated with under-challenged burnout, try exploring opportunities that genuinely engage you. Since seeking out a job change may not be the most feasible option for many, try to find ways to tailor your current occupation around your interests and skills through job crafting.
Thus, if you feel “bored” with whatever tasks are at hand, make it fun for yourself. See how you can integrate your own interests into your duties so that you can tackle your responsibilities with much more zeal and enthusiasm than before.
For those dealing with neglect burnout, it is essential that you address the underlying fears and emotions that deter you from getting work done. If you’re worried about incompetence, remind yourself of your past accomplishments and that there’s always the chance to grow.
Mistakes are the little hills that we must climb over in order to get to the finish line; they're embedded in the rocky terrain of life. Just because we see a daunting mountain ahead of us does not mean we should throw in the towel and turn back. Instead, we should trudge forward with the endless possibilities in mind. With small steps, we can eventually see what great things await at the top of the mountain, but we must first put ourselves out there and take that first step.
“When we commit to action, to actually doing something rather than feeling trapped by events, the stress in our life becomes manageable.” — Greg Anderson
While it is very easy to fall into this negative thought cycle, catch yourself when you begin thinking this way and remember: things are better done than not done at all, though it is important to recognize your limits as well.
Preventing Burnout
Now that you know the types of burnout and ways to deal with them, make sure you stay in tune with your emotions so that you can catch yourself before you burn out.
If you find yourself working tirelessly without time for yourself, or neglecting your responsibilities out of fear of boredom, take a step back. Always take the time to check up on your mental state and your needs. With that being said, never deny yourself a well-deserved break, and take every chance you can get to evaluate your work ethic and your emotions.
Even when you’re not feeling burnt out, it’s still important to remember to take breaks for self-care, do the things you love, and develop a sense of confidence in what you do.
Remember, burnout is a common occurrence that can happen to virtually anyone, students and adults alike. All this marathon running we do in life is tiring, but we must always take the time to recharge; never let your battery completely run out. Take this advice and use it wisely to ensure that you develop a healthy and resilient lifestyle as a busy individual, and finish that marathon with a greater sense of mental, emotional, and physical ease.
Computational Creativity: The Role of the Transformer | Over the years, computers have become increasingly sophisticated in their ability to identify more and more complex patterns. The field of computational creativity, a multidisciplinary endeavour to build software that can assist humans in a variety of tasks in the arts, science and the humanities, has seen much progress since the early days of computers where instructions had to be explicitly programmed.
In this article, we will attempt to unravel some of the recent developments in generative modelling that have shown significant improvements in computers’ ability to generate useful patterns that appeal to human observers. In particular, one type of neural network architecture, the Transformer, will be discussed in detail with regard to its ability to capture longer-term dependencies in text, music and images. Some future directions that this technology could lead to are also discussed.
What is creativity, anyway?
In her illuminating essay, Margaret Boden describes creativity as "the ability to come up with ideas or artifacts that are new, surprising, and valuable". Historically, this has been viewed as a unique aspect of human intelligence. As described by Boden, for something to be deemed creative, it must deliver both novelty and value.
A common criticism about modelling creativity is that machines (including the most recent deep learning methods) just use “brute-force pattern matching” to achieve an end goal — if the machine is simply identifying existing patterns in the data that it was trained on, then what “novelty” is it truly adding? In addition, “values” are known to be highly variable, and “any arguments about creativity are rooted in disagreements about value”.
While it may be true that computers might "just be pattern-matching" on a large scale, it is important to realize that even great scientists and mathematicians are, in essence, using their finely tuned pattern-matching abilities (albeit in a highly sophisticated form) to discover patterns in reality that nobody else thought of before. In the sections below, we will look at how transformers could help address both the issues of novelty and value, and how they might combine with other tools to aid humans in combinatorial creativity.
The Transformer
The 2017 Google AI paper “Attention is All You Need” first introduced the concept of the transformer neural network architecture. In essence, a transformer is a stack of encoders that can process a sequence of arbitrary length, which connects to a stack of decoders that outputs another sequence. Below is an example of such a structure used for machine translation.
Image source: Illustrated Transformer by Jay Alammar
In recent times, there have been a number of extensions and modifications made to the vanilla transformer that allow it to generalize to other tasks by using encoder-only stacks (BERT), or decoder-only stacks (OpenAI GPT), resulting in large-scale improvements to downstream language modelling tasks.
A deep technical description of the transformer is beyond the scope of this article; however, there are excellent visual explanations (Illustrated Transformer) and line-by-line descriptions of the implementation (Annotated Transformer) that supplement the original paper.
Self-Attention
At the core of the transformer is its self-attention mechanism — i.e. the ability to model relationships between parts of a sequence (for example, words in a sentence), regardless of their respective positions in the sequence.
Image source: Google AI blog post on transformers
The above image shows two examples where the meaning of the word "it" could be ambiguous to a machine: in the example on the left, "it" refers to the word "animal", whereas on the right, "it" refers to "street". The term self-attention used in this context means that the encoder, while inspecting its input sequence, is able to focus on different parts of the input in relation to itself, allowing it to "attend" to the right representation based on observed patterns in the training data.
Self-attention is incorporated into an encoder/decoder structure as follows.
Image source: Illustrated Transformer by Jay Alammar
Via this design, the encoder’s input tokens first flow through the self-attention layer, which allows it to focus on different parts of the sequence in relation to itself. The output of the self-attention layer is then passed to a feed-forward neural network, through which the model “learns” the representation of each word.
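To make the flow above concrete, here is a minimal NumPy sketch of single-head scaled dot-product self-attention. The dimensions and random weights are purely illustrative stand-ins; a real transformer learns the query, key, and value matrices during training.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X holds one embedding per token, shape (seq_len, d_model)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # pairwise token relevance
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights               # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape, w.shape)  # (5, 8) (5, 5)
```

Each row of `w` is how strongly one token attends to every token in the sequence, which is exactly the kind of pattern visualized in the "animal"/"street" example above.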
Multiple Attention Heads
The transformer expands on the idea of self-attention by incorporating multiple “attention heads”, i.e. instances of self-attention applied to different parts of the sequence simultaneously. In Google AI’s transformer, there are eight attention heads focusing on different positions at the same time step, applied to each encoder.
Image source: Illustrated Transformer by Jay Alammar
In essence, having multiple attention heads (eight, in the above image) makes parallelizing the computation relatively trivial, since each head can identify relationships between words independently of the others. This greatly reduces the amount of training effort, allowing the transformer to scale efficiently to very large training sets, unlike more sequential architectures such as Recurrent Neural Networks (RNNs).
Over each training epoch, the individual attention heads update their vector representations, with the final representation being calculated as the weighted average of attentions.
Image source: Self-Attention for Generative Models, By Ashish Vaswani and Anna Huang
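The per-head computation described above can be sketched as follows. This is a simplified illustration: the head count and dimensions are placeholders, the outputs here are combined by concatenation plus a learned output projection (as in the original paper), and real implementations fuse these loops into batched tensor operations for speed.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention for one head
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def multi_head(X, heads, Wo):
    # Each head attends independently (and thus in parallel); outputs are
    # concatenated and mixed by an output projection Wo.
    outs = [attention(X @ Wq, X @ Wk, X @ Wv) for Wq, Wk, Wv in heads]
    return np.concatenate(outs, axis=-1) @ Wo

rng = np.random.default_rng(1)
seq_len, d_model, n_heads = 5, 32, 8
d_head = d_model // n_heads
X = rng.normal(size=(seq_len, d_model))
heads = [tuple(rng.normal(size=(d_model, d_head)) for _ in range(3))
         for _ in range(n_heads)]
Wo = rng.normal(size=(n_heads * d_head, d_model))
out = multi_head(X, heads, Wo)
print(out.shape)  # (5, 32)
```

Because the list comprehension over heads has no cross-head dependencies, each iteration can run on separate hardware, which is the parallelism advantage discussed above.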
The decoder side works in much the same way, and the final output vector representation is fed through a linear layer (which projects the vector into a much larger "vocabulary space"), followed by a softmax layer (which converts those raw scores into a probability distribution over the complete vocabulary).
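That final projection step can be illustrated in a few lines. The vocabulary size and weights below are arbitrary stand-ins; in a trained model the linear layer's weights are learned.

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, vocab_size = 8, 10_000
decoder_out = rng.normal(size=d_model)             # final decoder vector for one position
W_vocab = rng.normal(size=(d_model, vocab_size))   # linear layer into "vocabulary space"
logits = decoder_out @ W_vocab                     # one raw score per vocabulary word
probs = np.exp(logits - logits.max())
probs /= probs.sum()                               # softmax: scores -> probabilities
print(probs.shape)  # (10000,)
```

The word with the highest probability (or a word sampled from this distribution) becomes the next output token.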
How Venture Works: Term Sheets and Napkins. | I remember when I started my first company, I really didn’t understand how financings worked, and why they were the way they were. Over the last 7 years and 300 deals I have done, I have some concept of why venture works the way it does, and some concept of why. I’m not going to claim to understand everything. However, I figured every few weeks, I could take a concept that is generally accepted in Venture Capital and actually break down what it means. I might even have a story behind it.
My Dad is a VC (Tim Draper), my Grandfather is a VC (Bill Draper), and his Dad was a VC (William H. Draper II). I'm not sure how many VCs can actually claim that they are 4th generation; this outlook influences my understanding of the industry.
I remember when I was a teenager, sitting in my Dad's office as he would look through term sheets — he would point at terms in contracts that were supposed to mean something to me: Prorata, Preference, Board Seats… He would point and explain each point to me, and I would nod along and repeat the terms back to him as he cited them. Teenagers are so good at short-term memorization.
To this day, my dad’s teachings stick in my head. However, I didn’t have context. I didn’t understand what constitutes a deal. I didn’t understand what a good exit felt like, or really what made a great partnership. What I truly did not understand was why it was so complicated. Why do investors try to protect themselves from entrepreneurs with such intense terms?
Venture capital was founded on napkins. My grandfather is the most charming man anyone could ever meet, always dressed immaculately. He’s 90 now and still sharper than everyone. He was around for the beginning of Venture Capital, and he has some amazing knowledge bombs to drop on people.
He used to have to cruise up and down “The Orchards” of Silicon Valley trying to find a building that said “technology” on it. He would go inside and explain to the founders what Venture Capital was, and then describe the value add:
“Well, you put in the blood sweat and tears, and I’m putting up the money, so why don’t we go 50/50?”
This is what a term sheet used to be. It used to be written on the back of a napkin, in ink, forged by the investor and founder in blood. Well not blood, but you can imagine it being a Huckleberry Finn type contract. Now we have 3x preferences, prorata, the annexation of Puerto Rico (Watch the Little Giants to understand the reference) and board seats. The industry got more competitive, and decided to create more standards and protections. It became about protection rather than parallel partnership.
The “Light Bulb” moment for me came a couple years ago when Henry Ward, the founder of Carta came to speak at Boost VC with a presentation. His goal was to give the founders the knowledge around early stage contracts, so they at least understood what was worth negotiating for.
I think it was during his 3rd slide, a line graph, when he said,
“In the case of early stage investing, you are negotiating for an ‘ok’ exit.” (Paraphrase) Basically, the 5-year, 2–4x exit.
If the exit is $0, none of the terms matter. Because everyone gets zero.
If the exit is $10B, no one cares about the terms, as long as they own their shares.
At the end of the day, it’s about control and protections and it’s about an average outcome. In an industry truly driven by power law though, investors shouldn’t care as much about complex terms created by lawyers to continue to be relevant. They should care about aligning incentives with founders for the long term.
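The point about terms mattering only in middling outcomes can be made concrete with a back-of-the-napkin model (fitting, given the history above) of a non-participating 1x liquidation preference. This is a hedged simplification: it ignores participation, dividends, option pools, and multiple rounds, and the dollar figures are hypothetical.

```python
def investor_payout(exit_value, invested, ownership, preference_multiple=1.0):
    # Non-participating preferred: take the liquidation preference OR convert
    # to common and take your ownership percentage, whichever pays more.
    preference = min(preference_multiple * invested, exit_value)
    as_common = ownership * exit_value
    return max(preference, as_common)

# Hypothetical deal: $5M invested for 20% ownership, across three exits
for exit_value in (0.0, 20e6, 10e9):
    print(exit_value, investor_payout(exit_value, invested=5e6, ownership=0.20))
# 0.0   -> 0.0    (everyone gets zero; terms are irrelevant)
# 20e6  -> 5e6    (the "ok" exit: the 1x preference beats 20% of $20M)
# 10e9  -> 2e9    (huge exit: only the ownership percentage matters)
```

Only in the middle case does the preference change anyone's payout, which is exactly why the negotiation energy spent on such terms is really about average outcomes.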
This is a macro-level thought. So it's important to understand that our lawyer is important to the Boost VC process. Our term sheets are not written on napkins, and we have had experiences where we learned to need a specific term. But at the end of the day, the returns of Boost VC will be driven by 10000x wins, and whether those shares are common, preferred, or alien, they will end up making the investors money.
Influence Live Recap: The influencers in fashion are no longer the writers in Vogue | By: Jared Augustine
Last Wednesday, we held the first event of our Influence Live series in partnership with The Drum. I had the pleasure of chatting about the world of influencer marketing with Kyle O’Brien of The Drum, Amy Tunick of Grey New York, Ian Schafer of Deep Focus, and Shan Lui of Superfly.
For insights from the event, check out the recap article on The Drum by Lisa Lacy or the highlight video below.
Supporting Dynamic Type at Airbnb | Background
Since iOS 7, Dynamic Type has allowed users to choose a prefered font size for their phone. At Airbnb, we try to build an app that our entire community can use — since Dynamic Type is a critical accessibility feature, we knew supporting it would make more people able to effectively use our app, some of them probably for the first time. To validate the importance of this feature, we examined the data and saw as much as 30% of people using our app had a preferred font size that was not the default. This usage is not skewed towards particular sizes, but evenly spread across larger and smaller than default.
30% of people using the app had a preferred font size that was not the default.
It turns out, supporting this preference creates a consistent experience across the OS that users will notice. Experimenting with Dynamic Type on individual features in the Airbnb app resulted in a significant increase in engagement, helping move our bottom line metrics. If you spend the time to support Dynamic Type in your app too, users will surely thank you for it!
Font size selection in iOS
Why is it important?
Going beyond the metrics, supporting Dynamic Type holds your UI components to a higher level of quality. Layout will need to be robust enough to handle a wide range of preferences, similar to variations created by localization and device screen size. Since much of our development time is spent on single devices and languages, bugs only reproducible in certain configurations will too often slip through. Fortunately, many of these are now being caught during Dynamic Type testing. If you already support varying screen sizes through UITraitCollection and translations with various length strings, there’s a good chance you’ve done most of the work to support Dynamic Type.
Design
The majority of bugs we encountered when large font sizes were used had to do with text not fitting its container. To resolve these, we created a few design recommendations. First, widths and heights became flexible, allowing text to expand to multiple lines. In many cases this should have already been done, since some languages can be much longer than the English words we include in design mocks.
Second, we had to make sure fonts scale the correct amount. This is done by assigning every text a corresponding UIFont.TextStyle. Using a larger TextStyle indicates your font is already big, so it doesn’t need to increase in size as much.
Third, we had to fix some labels that were changing size even though they shouldn't be eligible to scale. Our recommendation is that everything in the scrollable area of the screen should scale, and everything else should be left static. However, anyone with large Dynamic Type enabled still needs a way to view text in smaller containers such as tab bars. If you use all standard UIKit elements, this is handled by the Large Content Viewer.
We filed a bug report with Apple, requesting a new API for presenting these popups from custom views. This capability has since been included in the iOS 13 beta, so you’ll be able to see it in the Airbnb app soon.
Engineering
iOS provides mostly automatic Dynamic Type APIs for system fonts, but the Airbnb app uses a custom typeface, Cereal. To support Dynamic Type, we rely on UIFontMetrics. This class handles scaling our font size, line height, and tracking.
Each of these attributes exists in two forms:

1. The unscaled units, which are shown to users with the default font size, and are the values we set when creating font attributes.
2. The scaled units, which fit the preferredContentSizeCategory and are what we read at runtime.
Internally, features will request a font using attributes expressed as unscaled units, and will receive an object containing various functions we use to display text which always returns values in scaled units. This ensures any calculation, such as bounding box of text, will use the scaled units. Some of the most common bugs we saw were caused by using unscaled units for layout calculations instead of scaled units.
There are two ways to use UIFontMetrics to convert from unscaled to scaled units.
Method #1:
func scaledFont(for font: UIFont, compatibleWith traitCollection: UITraitCollection?) -> UIFont
Method #2:
func scaledValue(for value: CGFloat, compatibleWith traitCollection: UITraitCollection?) -> CGFloat
There are subtle differences in these approaches. Consider the following examples, each using a different UIFontMetrics method:
Depending on the device you run on, we observed results like this:
The results aren’t quite consistent, but since we customize line height with NSParagraphStyle we need to use the CGFloat scale function. The UIFont with unscaled point size is scaled directly to get an adjusted UIFont. Here’s a full example to scale an NSAttributedString:
The last step to fully supporting Dynamic Type is to encourage validation across all features, including ones in development. We know developer time is limited, so we automated support for Dynamic Type as much as possible. Happo, the tool we use for UI regression detection, already snapshots existing components. We added an additional step to render with the accessibilityExtraExtraExtraLarge size.
There are no APIs available to programmatically change the simulator's Dynamic Type settings, so avoid relying on UIApplication.shared.preferredContentSizeCategory. A more testable approach is to query the trait collection of a UIView. In the UIWindow for our Happo tests, the traitCollection is configured to include a custom content size. The end result is snapshots like these:
Default (large) preferred content size
AccessibilityExtraExtraExtraLarge preferredContentSize
With snapshots generated on every code change to the iOS app, developers have a hassle-free way to know new features support Dynamic Type, and easily detect regressions.
Thanks to Amie Kweon, Dylan Harris, Bryn Bodayle, Tyler Hedrick, and Kieraj Mumick for their support on this project! | https://medium.com/airbnb-engineering/supporting-dynamic-type-at-airbnb-b47c68b0c998 | ['Noah Martin'] | 2019-07-25 06:35:13.475000+00:00 | ['Dynamic Type', 'Design', 'Airbnb', 'iOS', 'Mobile'] |
Jobs 2.0: Troubleshooting the Adoption of Jobs to be Done | When I first started helping companies deploy Jobs to be Done within their organizations, people were largely unfamiliar with the concept. Maybe a few people — likely the project sponsors — had read an article about Jobs or seen the famous milkshake video in which the late Harvard Business School professor Clayton Christensen describes the theory. But not a single company I worked with had a detailed understanding of the theory and a framework for turning the theory into product ideas, service offerings, or business model innovations.
If you fast forward to today, Jobs to be Done is far more widespread. Hiring managers seek applicants who have experience conducting Jobs research. Companies as diverse as Clorox, Home Depot, and Facebook are giving talks on the theory at conferences. The question I get from companies is no longer “What is Jobs to be Done?” but rather “How can we improve the way we’re using Jobs to be Done?” Those who are farthest along in the journey are wondering how they can take the concept to new levels, such as using Jobs to be Done to size markets and compare vastly different ideas. Others, however, have encountered hurdles as their organizations begin using Jobs to be Done for the first time. For those in the latter group, I’ve gathered five of the most common challenges that companies have when they adopt Jobs to be Done, as well as some solutions for avoiding those struggles.
Issue 1: “We say we’re applying Jobs to be Done, but really we’re just paying it lip service.”
At a recent innovation conference, someone who works at a large fast food company explained to me that although her organization uses Jobs language, they simply apply it to problems they want to solve rather than problems articulated by customers. She told me, for example, about a colleague who wanted to work on a job that was essentially “having a burger I can handle easily with one hand coming out of the drive-thru.” An opportunity for innovation, perhaps, but hardly a customer’s job to be done.
This kind of problem occurs for a few reasons. First, organizations that introduce the idea of Jobs to be Done typically do so in a theoretical way, and they often fail to address the other parts of the framework. Having only learned about jobs, people try to force fit other elements such as “pain points” and “success criteria” into the form of a job to be done. Second, because the training is theoretical, it tends to get tied to ideas and pet projects already floating around the organization. Jobs insights need to be derived from real customer research, and the jobs that come out of that research should be solution agnostic.
People who are getting used to Jobs to be Done need to be exposed to it regularly, not just as a one-and-done training. A large tech company that I work with achieved this by orienting the strategies of each of its business units around a small number of jobs. Projects within each business always focus on one of those core jobs. New initiatives are consistently being led by those who have built up some expertise in applying Jobs to be Done, and their work — which uses Jobs terminology correctly — is shared company-wide via short presentations and podcasts to get everyone familiar with using Jobs the right way.
Issue 2: “Our research uncovered too many jobs, and we don’t know how to prioritize dozens or hundreds of jobs to be done.”
Another issue that many companies have shared with me is that their first attempt to roll out Jobs to be Done resulted in the organization uncovering more jobs than they knew what to do with. They interviewed dozens of customers, only to realize that the more interviews they did, the more jobs they uncovered. More importantly, there was nothing to explain which jobs were the most important to the customer, or which jobs the organization should focus on in developing new solutions.
When doing qualitative Jobs to be Done research, it’s important to ask questions that go beyond uncovering jobs. Your discussions need to dig deeper to understand how jobs relate to one another and how particular customers prioritize one job as compared to another. Ultimately, this allows you to build hierarchies of jobs that ladder into a relatively small number of North Star jobs that a team or organization is better equipped to focus on. Those lower-level jobs may be important as you get into the details of new product design, but they’re rarely a good focal point for beginning your ideation and prioritization efforts.
A healthcare organization I once worked with had developed a marketing campaign — based on primary research — around the idea of giving patients adequate attention during office visits. While patients had certainly mentioned the need for attention, it was a lower level job. Other North Star jobs were shared by larger numbers of patients and were more top-of-mind. In particular, one higher-level job was that they wanted to be sure their doctors had the collective expertise necessary to treat complex ailments. Given the organization’s breadth of specialists, focusing on that higher-level job allowed the organization to develop a marketing plan that leveraged one of its biggest assets, create marketing collateral that was more differentiated than broad promises about how its physicians really cared, and craft messaging about something that was far more important to patients.
While asking the right questions in a qualitative research interview is important, there are times when qualitative research may not be enough. Quantitative surveys are a good way to understand which jobs are most important, as well as which ones customers struggle most to get done with the solutions currently available to them. Moreover, quantitative research allows you to segment customers, make determinations about which segments the company is best suited to serve, and identify the jobs that are particularly relevant to the segments you’ll be focusing on.
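One common way to turn such survey data into a ranking, borrowed from outcome-driven innovation rather than prescribed by this article, is an "opportunity score" that rewards jobs that are important but poorly satisfied by current solutions. The jobs and ratings below are hypothetical.

```python
def opportunity_score(importance, satisfaction):
    # Classic opportunity formula, both inputs on a 0-10 scale from surveys:
    # important jobs that customers struggle to get done score highest.
    return importance + max(importance - satisfaction, 0)

# Hypothetical survey averages of (importance, satisfaction) for three jobs
jobs = {
    "access collective medical expertise": (9.1, 4.0),
    "get adequate attention during visits": (6.5, 6.0),
    "schedule appointments quickly": (7.8, 7.5),
}
ranked = sorted(jobs, key=lambda j: opportunity_score(*jobs[j]), reverse=True)
print(ranked[0])  # access collective medical expertise
```

A scoring rule like this makes prioritization discussions concrete: the healthcare example above surfaces the underserved North Star job rather than the lower-level one the marketing team had fixated on.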
Issue 3: “I don’t know how Jobs to be Done should fit in with Design Thinking and the other methodologies we use.”
People often ask me how Jobs to be Done differs from Design Thinking. The reason many people ask is that companies have a habit of providing regular trainings without giving much thought to how new skills fit with those that are already being deployed. It’s often unclear whether a particular methodology replaces another or complements it. Employees struggle to figure out the contexts in which they use one tool versus another. The investments in past trainings seem wasted as everyone rushes to use the new tool in every situation they encounter.
One of the most important pieces of advice that I give clients is never to do Jobs to be Done training simply for the sake of learning about a new framework. Jobs to be Done training needs to be tied to real business objectives and real projects that the organization plans on carrying out. That gives me an opportunity to understand what approaches teams are currently taking, where they struggle, and where Jobs to be Done has a place. It lets me guide the teams on when to choose Jobs to be Done, as well as when they already have a good solution in place. It’s not a substitute for Design Thinking, but rather a component of it. The Jobs to be Done framework lets teams understand why customers make the decisions they make, ensuring that you ask the right questions and fully understand the decision-making process. It’s also a common language for discussing customer insights and a prompt for ideation sessions. But it’s simply one piece of the toolkit, and there’s value in determining how it best fits with the other innovation tools and processes that may already be in place.
Issue 4: “We understand Jobs to be Done, but we fall back on our traditional ways of thinking when it comes time for a real project.”
One thing that’s particularly appealing about Jobs to be Done is that it’s easy to understand. While it may be a different way of thinking about challenges, the logic behind it is pretty intuitive. That’s also a challenge. Because people catch on quickly, they don’t spend a lot of time digging into the nuances and understanding how it applies in real situations. Then, when time is short and an idea is needed, people fall back on their typical ways of gathering insights and innovating.
For market researchers, I encourage those getting used to Jobs to be Done to color-code their discussion guides. By matching each question to an element of the Jobs to be Done framework, you can ensure that you’re covering all the important elements of how customers make decisions. This also safeguards against scope creep, making sure you’re not asking questions that aren’t tied to your project’s research objectives.
On the innovation side, ideation sessions need structure. Rather than just generating ideas that respond to a problem or sound like they’d be good, it’s important to keep the focus on the jobs that are being addressed. I make teams identify the job to be done that their idea responds to, and I also force them to think about the customer types the idea is targeted at and the success criteria that those customers use to determine whether they’re getting the job done.
Issue 5: “We understand our customers’ jobs, but we don’t know how to translate insights into products.”
Companies don’t generally do research for the sake of doing research. Not successful ones anyway. While there are a lot of best practices out there on customer-centric product development, I’ll focus on three lessons that address areas where I often see companies get it wrong. First, I encourage teams to do a diagnostic to determine how their existing products and their competitors’ products address the Jobs to be Done they’ve uncovered with their research. This does two things. It highlights where there may be gaps in the market, based on the customer’s perspective. It also forces people to start thinking about how features relate to jobs, hopefully minimizing the temptation to simply mimic or upgrade features they’re already familiar with.
Second, I urge those responsible for innovation to get early buy-in from those who will be responsible for commercialization. That may be a Product team or a particular business unit. A senior leader at a large financial services company once shared a story with me about how its innovation team had invested hundreds of thousands of dollars and months of work in generating a truly customer-centric solution only to find out that it wasn’t going to be a priority for the business that would have to sell the solution.
Third, teams often benefit from frameworks and templates that ensure they’re covering all the key elements of product development. The Jobs Atlas I use, for example, ensures that I think not just about customers’ jobs, but also about how customers view the competition and what obstacles will stand in the way of them adopting and using new solutions. Further, Jobs to be Done helps you understand the desirability component of a new solution. Other resources may still be necessary for thinking about whether it’s feasible for the company to offer a new solution and whether there’s a financially viable business model that supports the solution.
Jobs to be Done is a valuable tool for ensuring that innovation responds to the needs of your customers. While it’s an easy concept to understand, it can take some time to make sure your organization is using it correctly. Hopefully, the struggles of early adopters provide some guidance on how others can ensure they’re getting the best experience. And if you have encountered other difficulties in adopting Jobs to be Done, I’d love to hear about them.
Dave Farber is a strategy and innovation consultant at New Markets Advisors. He helps companies understand customer needs, build innovation capabilities, and develop plans for growth. He is a co-author of the award-winning book Jobs to be Done: A Roadmap for Customer-Centered Innovation. | https://medium.com/new-markets-insights/jobs-2-0-troubleshooting-the-adoption-of-jobs-to-be-done-47cb9f163168 | ['Dave Farber'] | 2020-07-23 16:38:02.407000+00:00 | ['Innovation', 'Jobs To Be Done', 'Market Research', 'Marketing', 'Strategy'] |
Automatically Find & Re-post Popular Instagram Content with Python

What it does?
This script takes a keyword as input from the user and, using it as a hashtag, retrieves public Instagram posts. It then sorts those posts by number of likes. The post with the most likes is downloaded to be reposted later. The script then pulls any hashtags from that post’s caption, finds other hashtags commonly used alongside them on Twitter and Instagram, and uses those in the new caption, along with credit to the original poster. Finally, the script opens Instagram, logs into the user’s account, and uploads the picture along with the caption.
Importing the required libraries and setting up the variables:
import requests
import urllib.request
import urllib.parse
import urllib.error
from bs4 import BeautifulSoup
import ssl
import json
from IPython.display import Image
import re
import time
import autoit
from selenium import webdriver
from selenium.webdriver.chrome.options import *
from selenium.webdriver.common.keys import Keys
import operator
import tweepy as tw
import bs4

consumer_key = 'your-consumer-key'
consumer_secret = 'your-consumer-secret'
access_token = 'your-access-token'
access_token_secret = 'your-access-token-secret'

auth = tw.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tw.API(auth, wait_on_rate_limit=True)
Get input from user:
Here we will ask the user for the key word. This key word will be used as a hashtag to find relevant Instagram posts.
key_word = input('Please enter your key word:')
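Since the keyword later becomes part of a URL, it helps to normalize it first. The make_post function further down calls a clean_input helper that the article never shows, so here is one hypothetical sketch of what it might do (the exact cleanup rules are my assumption):

```python
def clean_input(raw_key_word):
    # Hypothetical helper (make_post() below calls clean_input(), but the
    # article never defines it): strip surrounding whitespace, drop a
    # leading '#', lowercase, and remove internal spaces so the result is
    # safe to embed in a hashtag URL.
    tag = raw_key_word.strip().lstrip('#').lower()
    return ''.join(tag.split())
```

For example, `clean_input(' #Sunset Beach ')` would return `'sunsetbeach'`.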
Finding the Instagram photo for re-posting and then downloading it:
The following function will use the Instagram explore option and retrieve posts using the key word provided by the user.
def get_posts(key_word):
    url = 'https://www.instagram.com/explore/tags/' + key_word + '/'
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, 'html.parser')
    script = soup.find('script', text=lambda t:
                       t.startswith('window._sharedData'))
    page_json = script.text.split(' = ', 1)[1].rstrip(';')
    posts = json.loads(page_json)
    b = posts['entry_data']['TagPage'][0]['graphql']['hashtag']['edge_hashtag_to_top_posts']['edges']
    return b
The following function will use the above function to get a list of Instagram posts, and then find and return the one with the most likes.
def get_top_post(key_word):
    posts = get_posts(key_word)
    l = []
    for item in posts:
        d = {}
        d['likes'] = item['node']['edge_liked_by']['count']
        d['url'] = item['node']['display_url']
        d['urlcode'] = item['node']['shortcode']
        d['owner'] = item['node']['owner']['id']
        d['caption'] = item['node']['edge_media_to_caption']['edges'][0]['node']['text']
        l.append(d)
    l.sort(key=operator.itemgetter('likes'), reverse=True)
    return l[0]
This function returns a dictionary containing attributes of the top post.
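The sorting step is plain Python, so it can be sanity-checked with made-up data. The list below is just a stand-in for what get_top_post builds; only the 'likes' key matters for the sort:

```python
import operator

# Made-up stand-in for the list of post dictionaries.
posts = [
    {'likes': 120, 'url': 'https://example.com/a.jpg'},
    {'likes': 450, 'url': 'https://example.com/b.jpg'},
    {'likes': 45, 'url': 'https://example.com/c.jpg'},
]

# Same call used in get_top_post(): sort by likes, highest first.
posts.sort(key=operator.itemgetter('likes'), reverse=True)
print(posts[0]['url'])  # → https://example.com/b.jpg
```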
Download the photo:
We write a function that takes a URL, downloads the photo at that URL, and saves it as download.jpg.
def download_image(url):
    f = open('download.jpg', 'wb')
    f.write(requests.get(url).content)
    f.close()
The URL for this function will come from the dictionary containing attributes of the top post.
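Note that requests.get(url).content loads the whole photo into memory and silently ignores HTTP errors. A slightly more defensive variant, built on the urllib imports the script already has, might look like this (the function name and timeout value are my own choices):

```python
import urllib.request

def download_image_safe(url, path='download.jpg'):
    # Hypothetical variant of download_image(): reads the photo through
    # urllib with a timeout, writes it to `path`, and returns the path so
    # the upload step can reuse the same value.
    with urllib.request.urlopen(url, timeout=10) as response:
        data = response.read()
    with open(path, 'wb') as f:
        f.write(data)
    return path
```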
Building the caption:
I want to credit the original uploader, so I need their username. Our post dictionary contains a 'urlcode' entry (the post’s shortcode). The following function takes that shortcode and returns the username of the original uploader.
def get_owner(shortcode):
    url = 'https://www.instagram.com/p/' + shortcode + '/'
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, 'html.parser')
    script = soup.find('script', text=lambda t:
                       t.startswith('window._sharedData'))
    page_json = script.text.split(' = ', 1)[1].rstrip(';')
    post = json.loads(page_json)
    return post['entry_data']['PostPage'][0]['graphql']['shortcode_media']['owner']['username']
Another thing we want to add to our caption is relevant hashtags. Following are the functions to do that. I have written detailed tutorials on how this code works (Part 1, Part 2).
def return_all_hashtags(tweets, key_word):
    all_hashtags = []
    for tweet in tweets:
        for word in tweet.split():
            if word.startswith('#') and word.lower() != '#' + key_word.lower():
                all_hashtags.append(word.lower())
    return all_hashtags

def extract_shared_data(doc):
    for script_tag in doc.find_all("script"):
        if script_tag.text.startswith("window._sharedData ="):
            shared_data = re.sub("^window\._sharedData = ", "", script_tag.text)
            shared_data = re.sub(";$", "", shared_data)
            shared_data = json.loads(shared_data)
            return shared_data

def get_hashtags(key_word):
    tweets = tw.Cursor(api.search,
                       q='#' + key_word,
                       lang="en").items(200)
    tweets_list = []
    for tweet in tweets:
        tweets_list.append(tweet.text)

    url_string = "https://www.instagram.com/explore/tags/%s/" % key_word
    response = bs4.BeautifulSoup(requests.get(url_string).text, "html.parser")
    shared_data = extract_shared_data(response)
    media = shared_data['entry_data']['TagPage'][0]['graphql']['hashtag']['edge_hashtag_to_media']['edges']

    captions = []
    for post in media:
        if post['node']['edge_media_to_caption']['edges'] != []:
            captions.append(post['node']['edge_media_to_caption']['edges'][0]['node']['text'])

    all_tags = return_all_hashtags(tweets_list + captions, key_word)
    frequency = {}
    for item in set(all_tags):
        frequency[item] = all_tags.count(item)
    return {k: v for k, v in sorted(frequency.items(), key=lambda item: item[1], reverse=True)}
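The frequency-counting loop at the end of get_hashtags can also be written with the standard library's collections.Counter. This is a drop-in alternative rather than what the article uses:

```python
from collections import Counter

def rank_hashtags(all_tags):
    # Counter tallies each tag, and most_common() returns the tags
    # sorted by descending count, matching the sorted dict above.
    return [tag for tag, count in Counter(all_tags).most_common()]

print(rank_hashtags(['#surf', '#beach', '#surf', '#sun', '#surf', '#beach']))
# → ['#surf', '#beach', '#sun']
```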
Binding it all in one function:
Now that we have written all the helper functions, we can write one function that makes a post using all of them. This function will find the top post, download the image, build the caption (including credit to the original uploader and relevant hashtags), and return the caption.
def make_post(key_word):
    tag = clean_input(key_word)
    top_post = get_top_post(tag)
    download_image(top_post['url'])
    caption = 'Repost from @' + get_owner(top_post['urlcode']) + '\n' + \
        ', '.join(list(get_hashtags(tag).keys())[:10])
    return caption
Now that we have downloaded the picture that we want to re-post and have built a caption, we need the script/code to upload it to Instagram.
Autogram:
The Autogram class contains all the code required for automatically loading Instagram, logging in, and then uploading the picture. The Autogram class code can be found here. I did not write this code, so I decided not to go over it. However, the whole code is available here, as mentioned above.
The following code opens a new chrome window and logs into Instagram.
ig = Autogram('your-instagram-username', 'your-instagram-password')
ig.open_instagram()
ig.login()
ig.popup_close_save_login_info()
ig.popup_close_turn_on_notifications()
ig.popup_close_add_to_home_screen()
I usually like to watch it load and log in. Instagram can be unpredictable in giving you pop-ups on logging in. Therefore the script might miss an unexpected pop-up and you might have to make one or two clicks manually.
Once it is logged in, you can execute the following script to upload the picture.
import os  # needed for os.path.normpath
ig.upload_image(os.path.normpath(r'full-path-to-folder\download.jpg'), description=make_post(key_word))
ig.popup_close_turn_on_notifications()
Improvements:
There is definitely a lot that can be improved in this script. The first thing I can think of: right now, we are using a fixed path and name for the file that is downloaded and then uploaded (download.jpg). To make it more universal, the path could be programmatically extracted and then re-used during the upload.
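As a sketch of that first improvement, the path could be resolved once and reused for both the download and the upload. The helper name is mine, and joining against the current working directory is just one possible convention:

```python
import os

def downloaded_file_path(filename='download.jpg'):
    # Hypothetical helper: build an absolute path for the downloaded
    # photo once, so download_image() and ig.upload_image() always
    # point at the same file regardless of where the script runs.
    return os.path.normpath(os.path.join(os.getcwd(), filename))
```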
Another improvement would be completely automating the Selenium driver to close all the pop-ups that Instagram throws at you. If that can be achieved, the whole script can run “headless”, meaning you will not see the Chrome window loading Instagram and everything will happen in the background. However, I do enjoy watching Chrome go on its own. | https://medium.com/the-innovation/automatically-find-and-re-post-popular-instagram-content-a3a27d28b72b | ['K. Nawab'] | 2020-11-17 04:35:13.288000+00:00 | ['Instagram Marketing', 'Instagram', 'Automation', 'Python', 'Social Media']
Independence And Happiness. Independence Or Happiness — Choice Is Yours, Stakes Are High!
Real independence is more about peace of mind than happiness.
Image from Quotemaster
This comes from a writing prompt titled “Gather Your Strength” by Samantha Lazar. The theme is Susan B. Anthony’s quote, “Independence is happiness.”
It makes me think that what has independence got to do with strength, especially if independence is happiness.
Then it dawns upon me that perhaps both have everything to do with each other. The equation is simple. You are happy when you are free. Being free stems from being strong. So, happiness and strength are directly proportional to each other.
Or are they?
And is the equation that simple?
I am not talking of the independence which comes with travel backpacks and earning your own money. I am talking about the independence you trade off for a non-controversial existence when you don’t write about what you really want to shout aloud or when you don’t unfriend people who have walked out of your heart long ago. I said heart, so I am talking only about the ones who once were there in it.
The clutches all this keeps you tied in are daunting. What if tomorrow your dream-like life comes true? How will you justify that revealing patch created by your words of yesterday, written in an emotional outburst? Those friends without friendship stand on the very rope which ties you to many other precious relationships. How can you drop them and risk breaking the rope? You will then subject yourself to mockery or embarrassment. Loneliness and blame won’t be far away.
There, my friend, the real strength comes in. Sneaking in with a quiet, timid, and unsure demeanor. Whispering in your ears that it’s enough now. Half of your lifetime has already passed, let the story play out and let the rope break, if it must. Then see what remains. Maybe nothing. Maybe a lot. But either way, all of what you have now is also not all yours, anyway.
Happiness is a state of mind. Easier said than experienced. Eventually you will find happiness anyhow — that is the survival instinct less talked about. But peace of mind is what you earn. By giving up some relationships, some unrealistic goals, and some deep-rooted self-images. Also, there’s a whole package with an “items cannot be sold individually” tag. You get some angst and heartbreaks for free.
Come, let’s talk of the freedom equation again. If you have read till now, I will take the liberty to say that the real equation is a little different from the one above.
Once you earn your peace of mind, it’s then that you get free in actuality. But to earn peace of mind, you have to gather strength. Remember the timid, unsure but consistent voice within? You always knew what was right and wrong for you. What really holds value to you and what would actually put you off worries. You just could not figure out if it’s worth putting something out to bring that in.
But hey, in all this, what happened to happiness? “The popular Happiness,” you know. Don’t we find it in the small joys of everyday life? In all honesty, isn’t it something you know can originate from within you anytime? Because it is just a state of mind.
Neither of the above seem to be mutually exclusive though.
So, at least from where I see, being free or independence is actually more about peace of mind than happiness. Especially as you age and learn to create happiness but struggle to find peace of mind amidst pretending to have it all together.
Concluding with this: | https://medium.com/sky-collection/independence-and-happiness-independence-or-happiness-choice-is-yours-stakes-are-high-88b61634aa75 | ['Payal Khare Bhatnagar'] | 2020-08-26 22:17:49.255000+00:00 | ['Happiness', 'Nonfiction', 'Strength', 'Freedom', 'Peace Of Mind'] |
5 Really Subtle Signs That Tell You a Lot About a Person

You don’t have to be a psychological expert in reading people’s behavior. We all have that inner power of discernment that can tell us a lot about people’s personalities.
If not, we can always develop that skill.
As a child, I was always intrigued by people’s dual-faced behavior. Sometimes their actions and words did not match. I didn’t get it.
Though humans had developed a science of communicating thoughts, expressions, and feelings through speech, they often used body language to display their aversions, love, liking, hate, etc. And I wanted to learn that subtle language.
An American novelist Carol Plum-Ucci once quoted in her interviews:
“If you can understand human behavior, it can’t hurt you nearly as much.”
This quote stuck in my head forever. And being a voracious reader by nature, I started reading books about human behavior when I became a teenager. The more I read, the better understanding I began to gain.
I realized that if I could master the art of reading body language, I could know everything about a person in and out. In short, I desired to become a live lie-detecting machine.
Besides learning about these subtle signs from books and videos, I became more conscious of my public behavior.
Did you know that hunched shoulders, crossed arms, touching our face and neck, and similar gestures are signs of low self-confidence?
And that people can perceive these signs as weaknesses to use against us?
So, by learning this skill of reading human behavior, we can feel confident about trusting others. Not everyone is trustworthy material in this world, so we can find the right people of great character to work with by staying aware of these subtle signs.
Hence, this knowledge can help us find genuine people to work with and improve our public image.
So, let's get started. | https://medium.com/the-innovation/5-really-subtle-signs-that-tell-you-a-lot-about-a-person-5dc173f2608e | ['Darshak Rana'] | 2020-11-13 21:32:27.877000+00:00 | ['Advice', 'Spirituality', 'Self Improvement', 'Psychology', 'Integrity'] |
It’s The End Of The Year As We Know It!

By RYAN PERSAUD,
Director of IT and Innovation at International School of Curitiba
I have had the privilege of walking into many, many schools over my career, all over the world, and in a variety of capacities. If there is one constant that I know to be true, it is the end-of-year burnout and the overwhelming feeling that educators experience. No matter what the role, I have heard educators say “I am feeling so tired” when it comes to the last few weeks. From moving up ceremonies, to celebrations, to graduations, to summatives, to grading, to reports, just writing this is enough to send me burrowing into my shell. I will admit, I am no expert myself at pacing. I often come in at the start of the year firing on all cylinders, and by the end am barely puttering across the finish line. I recently asked a teacher from our school who is retiring for some advice around this, and she commented to me that in over 30 years in education, she did not have a magic solution, and always felt tired at this time of year. So upon reflection, I have come to determine that yes, we may always have the feeling of being tired at this time of year, but there are definitely ways to make the end of year a positive and rewarding experience. Here are some of my tips:
Relationships, Relationships, Relationships
Many great writers on education and leadership speak to relationships. George Couros dedicates an entire chapter to this in his book The Innovator’s Mindset. Even though he is a huge proponent of technology and innovation, he knows that it all starts with relationships. Another leadership guru that I admire is Simon Sinek, who states: “We need to build more organizations that prioritize the care of human beings. As leaders, it is our sole responsibility to protect our people and, in turn, our people will protect each other and advance the organization together.”
Connection is such a deep part of relationships. Being able to feel heard and valued is so important when we are feeling exhausted, and just want someone to truly empathize with us, not just listen. Relationships drive everything in schools, especially at this time of year.
Move!
We all know that exercise is good for us. At this time of year, we probably don’t feel like getting up for a run, or going to the gym, or heading to yoga class. But exercising can be one of the best things for us to do, in order to alleviate stress and actually give us more energy to make it through. If you need convincing, here is a short article from the Mayo Clinic on Exercise and Stress Management. If you read this and are still having a hard time getting there, why not invite someone to exercise with? You can work on your relationship with that person and exercise! And keep in mind, it doesn’t have to be long. Even a short walk everyday is great for the body, mind, and spirit.
Accept Responsibility
This is the tough love part of the blog post. So there are certain responsibilities that we all have and must be completed before year end. As I mentioned above, these may include report cards, comments, grading, attending certain year end events, and others. When I say accept responsibility, I mean just that. Get yourself a prioritized list and get yourself organized. This can go a long way in terms of accomplishing all of your tasks. There are lots of great apps out there to help you keep focused and organized. Here are some of my favorites: yes, I am a Google fanboy, so I encourage you to use your Google Calendar or Google Keep to keep organized. Another couple of tools that I have found useful when working in teams are Trello and Slack. Trello is like using digital boards, cards, and lists, and can really help with personal or team organization. I personally use it with my team, to keep us on track with projects. Slack is great for collaboration and can act as a hub for projects and tasks. Slack integrates well with Google too. I recently used it for a team project with group members spread around the world.
Teamwork Makes The Dreamwork
So we used to say this in elementary school right? Well it still applies. What are you responsible for or working on at the end of the year that you can collaborate on? We teach our students to be good collaborators and teammates and we should do the same. Not only does this reduce our workload, but it gives us an opportunity to work on relationships. In case you still don’t believe me, here is a great article from the Washington Post entitled Why Collaboration is Vital to Creating Effective Schools. This article cites some solid research, and I love the way it connects teamwork, trust, and relationships.
Remember Why You Are Here
It is easy to forget when we are run down and stressed why we are in the building in the first place, especially for those of us that are not in classrooms! Remember the kids! Take a moment, stop, and breathe. If you must check emails, do it on the playground, and absorb the sounds of the kids. Read to an elementary class for 10 minutes, see them, hear them, remember, we are there for them. Have a conversation with some high schoolers about a summative they are working on, or just chat with some awkward middle schoolers!
Think Creatively About Your Events
This one might seem like a no-brainer, but when you are in the midst of it all, it is sometimes difficult to take the balcony view. Move from the actual work, and step back and think about what is really going on. Here is a great article from the Harvard Business Review, speaking about this concept, entitled, A Survival Guide for Leaders. We know you are going to have moving up ceremonies, can you place them on the same day, at different times, so parents with multiple children only have to come once? Are there multiple musical showcases that you can combine? Are there any events that can be pushed to the beginning of the following year?
Consider how you can use technology to engage parents at this time of year. Like you, they are tired and overwhelmed too. What are the ways we can leverage technology to support those who just can’t get to school for an event? Consider social media accounts that your school may have and the ways events could be posted and shown there. Student blogs that showcase learning can be broadcast out, or videos highlighting student work. If your school’s social media presence isn’t quite where it should be, here are five tips to increase it:
1. Automate your posts
2. Be real
3. Make it interesting
4. Be aware and spontaneous
5. Don’t forget your brand
Social media can be a powerful tool if utilized correctly. It can increase your brand, engage parents, and act as a storytelling device.
This in itself may all feel overwhelming to enact. So why not choose one idea, as an area for improvement, as we head into these last few weeks. For me, my goal is to make the last few weeks the best weeks that I have had this school year. I would love to hear from others as to how you deal with the feeling of being overwhelmed at this time of year.
© copyright 1995–2019, Community Works Institute (CWI)
All rights reserved. CWI is a non-profit educational organization dedicated to engaging students and teachers with their local communities through integrated learning projects. We work with educators and schools across the U.S. and internationally.
| https://medium.com/communityworksjournal/its-the-end-of-the-year-as-we-know-it-c1353d342b44 | ['Joe Brooks'] | 2019-06-08 23:54:39.760000+00:00 | ['Schools', 'Teaching', 'Wellness', 'Learning', 'Education']
The Literally Literary Weekly Update #7

Wounded Mother by Cassius Corbin (Poetry)
“Earth’s children going back
to be held by The Mother
then running back home
at the first sign of thunder”
All Passages by Water Lead Home by Jerry Windley-Daoust (Fiction)
“ It was as forlorn as a desert, except that the upside-down part of this watery desert harbored a whole world of strange living things, and unexplored mountains and plains, and the shipwrecks of people who hadn’t made it to the other end of the ocean.”
How Democracies Die by Dale Biron (Poetry)
“When middle-class dreams shatter,
rarely do the jagged shards fall upward,”
The Lessons from the Sea Mist by Sylvia Clare MSc. Psychol (Poetry)
“Can we learn
there is always a mist around us,
even when it is unseen?”
Do you remember? by Taiwo Adesina (Fiction)
“Do you remember on your wedding day, when I promised to write you every month? How you smiled and looked away? Did you even get the other letters?”
A Cruel and Swift Instrument of Nature by Edward Punales (Fiction)
“Every world is a collection of many worlds, each one conceived, used up, and excreted by nature to make room for the next.”
What’s the Role of Fiction in Social Change? by Aline Müller (Society)
“We don’t create things from nowhere, we imagine them first and then we work on it. Fiction is a rich pot of concrete technological developments and images to take inspiration from.” | https://medium.com/literally-literary/the-literally-literary-weekly-update-7-e607d389ac8d | ['Jonathan Greene'] | 2020-02-05 14:30:40.291000+00:00 | ['Fiction', 'Nonfiction', 'Ll Letters', 'Essay', 'Poetry'] |
The 5-Bullet-Log: A Note-Taking System to Increase Self-Awareness and Learn More From Life

I choose to keep my log in my journal because handwriting helps retain information, because I can create my own symbols and layouts, and because that’s where I keep almost everything else.
But if you prefer, you can also use digital apps such as Evernote, which allow you to move things around and organize thoughts by tags.
You can also keep it analog but use separate cards instead of a notebook, or use a calendar — or whatever feels best for you. You will notice that your log will keep changing with time, and this is a reflection of your own ever-changing nature as a human being.
You’re the one who’s going to be using the log, so adapt it to your own personality and preferences. No matter which format you choose, just make sure it’s easy and fun to use; otherwise, you won’t stick with the habit.
Before You Start…
Here’s a little secret I realized after using this system for a while: it’s not actually my future self who benefits the most from this practice — it’s me, right now.
Of course, it’s useful to have access to all this data. But it’s even more useful to have a mental framework that helps me think, consolidate, and summarize information.
Because I know that in the evening I will have to fill out the log, I live my days in a different way. I am constantly on the lookout for signs and opportunities to improve. I recognize good ideas when they surface, and I memorize them. I ask more questions. I am aware of things I had never seen before.
Sometimes the evening comes and I feel that I have nothing really important to write down on my log. Nevertheless, I include whatever comes to mind — even if the most relevant things I can come up with that day are about grocery shopping or house cleaning.
Don’t worry if at first, it seems like you have nothing to write about, or if you have so much in your head that it’s hard to choose what to keep in your log. It’s all a part of the process, and your practice will change with time, as will your approach to life, memory, and note-taking.
For me, the benefits of keeping a log are invaluable, and I keep observing new ones as time goes by. Here are some of them:
I base my goal-setting methodologies on solid data, and they are more effective.
Every week I set weekly and monthly goals. Before, I used to base it solely on feelings, thoughts, or other people’s approaches to goal setting. Now, I base it on real evidence.
I can’t escape from my own problems.
And that’s great. A while ago I noticed that every day I was writing in my log that I was not reading enough, not learning enough, not making time for studying. I became so frustrated with writing the same thing every day on my log that I eventually created a morning slot for studying, and the problem was solved. The same happened with emotional eating and improving my communication skills.
I am quicker at coming back from emotional lows.
Looking back through my achievements and lessons makes me see how much I have actually accomplished. It shows me that everything is temporary, and it reminds me of the skills and lessons I need to overcome any problem.
I am more consistent.
At its very core, the 5-Bullet-Log is a daily practice that requires consistency. Once I created this routine and made it non-negotiable, I started becoming more consistent with other habits as well, such as taking notes after conversations and events, habit tracking, and journaling in general.
But most of all, it helps me separate the wheat from the chaff. I have plenty of ideas and observations, but having to choose the most important ones every day tells me a lot about what I value in life, what my current focus is, and what I need to let go of. | https://medium.com/better-humans/the-5-bullet-log-a-note-taking-system-to-increase-self-awareness-and-learn-more-from-life-8150b8d2b322 | ['Sílvia Bastos'] | 2019-05-06 16:55:08.076000+00:00 | ['Quantified Self', 'Writing', 'Self Improvement', 'Self', 'Journaling'] |
It’s Much Easier Than You Think to Live the Life You Want

Maybe not “easy”, but entirely possible.
I recently listened to an episode of The World Wanderers Podcast where the host discussed working at a cafe in a great city that a lot of people would love to live in. She mentioned how, had she not moved to this cool, exciting city, the job she had would have made her feel like a loser. In your hometown working retail after getting an expensive degree seems pretty lame. Up and moving to a destination city and working retail to support the lifestyle seems kind of adventurous.
Back home, she would have dreaded seeing an old friend come in. “Oh, so you’re working here?” In the new city when someone she knew came in the question was more like, “Wow, so you’re living here?”
Just a few days ago I talked to a guy who’s biking across the country and loving it. He spent several months in beautiful Missoula, Montana waiting for the weather to improve so he could continue his journey. He worked at a grocery store while there and it provided everything he needed to live the lifestyle he wanted and get back on the road in time. What would his resume look like when, several years out of college, he had “Grocery bagger” listed? Not great, except when put in the context of, “Spent two years biking across the U.S., paying my way through with odd jobs and blogging about the adventure.”
I thought about this phenomenon more in Mompiche, Ecuador a few weeks ago. We found a little place with a sign for American-style pancakes. A welcome breakfast after days of fruit and cereal. The breakfast nook was run by a twentysomething woman from the Ukraine. She fried up pancakes on a small griddle and served them with coffee for breakfast and lunch in the tiny Bohemian surfing village. She lived in a neat little house right above the pancake joint and spent the rest of the day as she pleased.
Imagine this ambitious young woman back home responding to the common, “So, what do you do?” with, “I make pancakes for a living.” Likely her friends and family would be a little worried and ashamed and think something wrong with her.
Contrast that with the same answer to the same question but with a change in geography. “I moved across the world to a tropical surfing village in Ecuador where I opened my own business.” Wow. What an enviable life, right?
There’s something weird about staying in your hometown. It severely limits the definitions you accept for what makes you successful. Oddly, most of the hometown definitions of success have nothing to do with happiness. They have to do with becoming what everyone in your past expects or desires given who you used to be. It’s a sort of tether to a past self that no longer exists.
When the expectations of back home no longer apply you can ask better questions and make clearer connections. What kind of person do you want to be (vs. what job title do you want)? What kind of people and surroundings do you want to be immersed in (vs. where do you want to work or live)?
Many people would probably love to be the master of their own schedule, be in a beautiful outdoor setting with interesting people from around the world, seriously pursue a hobby with lots of their time, and be challenged in new ways daily. Yet most of those same people would be horrified at the idea of playing guitar on the street for money, flipping pancakes, or doing freelance odd-jobs online, any of which might be the very means to achieve the life described.
Most people have this idea that you have to work a boring job in a boring house in a boring city for a few decades, and then if you play your cards right and all kinds of things totally out of your control (like the stock market or real estate prices) do the right thing, you can have some kind of two week vacation cruise or retire in a place where you enjoy good weather and leisure. The weird thing is, all those “someday” goals are available right now with relatively little difficulty. You can afford to live in a cool bamboo house in a beach town just by making pancakes for lunch and breakfast. You can (as one guy I met did) travel the length of South America living entirely off the cash you make playing guitar outside of restaurants.
I’m not claiming this kind of life is for everyone. Not at all. There is nothing wrong with a 9–5 job and life in the suburbs if that’s what really resonates with you. There’s nothing inherently noble about traveling or working some low wage odd job. The point is that it’s too easy to choose things based on an artificially limited option set. It’s too easy to define your life by stupid things like college majors or giant industry labels or titles that will make Aunt Bessie proud at the family reunion or salary levels.
The last one is especially dangerous.
It’s a weird habit to measure your success in life only by the revenue side of the equation. Who cares if you bring in $100k a year if it only buys you a crappy apartment that you hate in a city that stresses you out with friends that don’t inspire you and a daily existence you mostly daydream about escaping from? Your costs exceed your revenues and you’re actually going backward. You very well could get twice the lifestyle you desire at half the annual income. Like any business, the health of your personal life should be measured using both revenues and costs. On the personal level, neither is just monetary.
Only you can know what kind of life you want. But getting off the conveyor belt of the education system, getting out of the home town expectations trap, and opening your mind to measures of progress beyond salary will give you a much better chance of crafting a life you love.
Isaac Morehouse is the founder and CEO of Praxis, a year-long entrepreneurial apprenticeship program for young people who want more than college. His company’s mission and his life mission is to help people awaken their dreams and live free.
Here are a few other articles to chew on:
Why You Should Move Away from Your Home Town
Why You Should Get Off the Conveyor Belt
Why “Escapism” Isn’t a Bad Thing
Why It’s So Hard to Exit a Bad Situation
Do You Need to Do Work You Love to Be Happy?
Stop Doing Stuff You Hate
Focus on What You Don’t Want
Do What You Love, or Have it Easy? | https://medium.com/the-mission/it-s-much-easier-than-you-think-to-live-the-life-you-want-41e8356660bc | ['Isaac Morehouse'] | 2019-09-24 13:05:06.328000+00:00 | ['Travel', 'Education', 'Entrepreneurship'] |
Potential | Potential
Email Refrigerator :: 25
Hey friend!
What Will Become of Us?
“Hello, Elmo,” Golda says politely as she tosses her red plush friend on top of her doll, already buckled into the tiny stroller. My daughter keeps adding her toys one by one. An oversized plastic butterfly sits on the table. She picks that up, too. “Come here, Par-par!” which she recently learned means butterfly in Hebrew. And finally, the bath toys. “Alimango!” (Tagalog for crab).
Ok. I’m sharing this story partially to brag about my daughter. I mean, she’s barely two. Three languages? She’s brilliant, right? Some child prodigy genius.
But I’m also bringing it up because it’s revealing something to me about me. Assuming she’s a genius raises my expectations of her, projecting visions of tutors and extra-curricular classes. College opportunities and career achievements ahead for her. And then I can’t help but match those expectations with how I treat her and guide her learning.
But we do that for anything with potential. A promising stock investment, a job offer at a startup, the fixer-upper house, an up-and-coming neighborhood, an exciting new relationship that could be “the one.” We mentally assign expectations and behave differently in order to guide each one, hoping it lives up to those expectations.
Is that normal? Is that healthy? Is there another way?
So I’m thinking a lot about potential this month. How does projecting these expectations on people and ideas in our lives affect how we treat them, and what is to come of them?
Let’s step up to the plate and see what happens.
“Bags and Boxes” by Ellen Porteus
Falling Short
About 15 years ago, I interned at an ad agency, my first real job out of college. After a particular meeting, one of the senior creative guys told me “someday we’ll work for you.” At the time, he meant it as a compliment. And I took it that way for almost a decade.
I used that vision to motivate me and my career trajectory. But over time, that compliment became unnecessary pressure. I was carrying this potential and it was weighing me down. The compliment was turning into a curse, like I had to somehow pay back this gift I’d been given and meet everyone’s expectations of me.
Potential can be paralyzing.
It’s one of the challenges of making things. Doesn’t really matter what, just the idea of being creative. A song, painting, homemade cookies, starting a new company, a pasta dinner.
In the process of making that thing, one of the earliest and most exciting steps is envisioning what it could be. Imagining the crowd cheering, that first bite right out of the oven, framing and hanging the masterpiece we just started, standing on a TED stage.
We can imagine perfection, but because we are human, we are incapable of making it. It will always be better in our heads than it is when it comes out.
And then we finish. And inevitably, it falls short.
When so much is expected of us and the work we make, it becomes nearly impossible to meet those expectations. So why should we continue to pursue audacious career goals, have kids, or make ambitious work, knowing they’re likely going to fall short of what we can imagine them to be? | https://medium.com/email-refrigerator/potential-f88a77e10673 | ['Jake Kahana'] | 2020-12-28 03:23:30.044000+00:00 | ['Self-awareness', 'Life Advice', 'Expectations', 'Worldviews', 'Potential'] |
How to become a Hadoop Developer?- Job Trends and Salary | Hadoop Developer is one of the most aspired-to and highly paid roles in the current IT industry. This high-caliber profile requires a superior skill set to tackle gigantic volumes of data with remarkable accuracy. In this article, we will walk through the job description of a Hadoop Developer.
Who is a Hadoop Developer?
How to become a Hadoop Developer?
Skills Required by a Hadoop Developer
Salary Trends
Job Trends
Top Companies Hiring
Future of a Hadoop Developer
Roles and Responsibilities
Who is a Hadoop Developer?
Hadoop Developer is a professional programmer, with sophisticated knowledge of Hadoop components and tools. A Hadoop Developer, basically designs, develops and deploys Hadoop applications with strong documentation skills.
How to become a Hadoop Developer?
To become a Hadoop Developer, you have to go through the road map described below.
A strong grip on SQL basics and distributed systems is mandatory.
Strong programming skills in languages such as Java, Python, JavaScript, and NodeJS.
Build your own Hadoop projects in order to understand the terminology of Hadoop.
Being comfortable with Java is a must, because Hadoop was developed using Java.
A Bachelor's or a Master's degree in Computer Science.
Skills Required by a Hadoop Developer
Hadoop Development involves multiple technologies and programming languages. The important skills needed to become a successful Hadoop Developer are listed below.
Basic knowledge of Hadoop and its ecosystem
Able to work with Linux and execute some of the basic commands
Hands-on experience with Hadoop core components
Experience with Hadoop technologies like MapReduce, Pig, Hive, and HBase
Ability to handle multi-threading and concurrency in the ecosystem
Familiarity with ETL and data-loading tools like Flume and Sqoop
Should be able to work with back-end programming
Experienced with scripting languages like Pig Latin
Good knowledge of query languages like HiveQL
Salary Trends
Hadoop Developer is one of the most highly rewarded profiles in the IT industry. Salary estimates based on the most recent figures shared on social media suggest that the average salary of a Hadoop Developer is higher than that of most other IT professionals.
Let us now discuss the salary trends for a Hadoop Developer in different countries based on experience. First, consider the United States of America, where big data professionals are offered salaries based on experience as described below.
Entry-level salaries start at 75,000 US$ to 80,000 US$, while candidates with 20-plus years of experience are offered 125,000 US$ to 150,000 US$ per annum.
Followed by the United States of America, we will now discuss the salary trends for Hadoop Developers in the United Kingdom.
In the United Kingdom, the salary for an entry-level Hadoop Developer starts at 25,000 to 30,000 pounds, while an experienced candidate is offered 80,000 to 90,000 pounds.
Followed by the United Kingdom, we will now discuss the Hadoop Developer Salary Trends in India.
In India, the salary for an entry-level Hadoop Developer starts at 400,000 INR to 500,000 INR, while an experienced candidate is offered 4,500,000 INR to 5,000,000 INR.
Job Trends
The number of Hadoop jobs increased at a sharp rate from 2014 to 2019.
It almost doubled between April 2016 and April 2019.
50,000 vacancies related to Big Data are currently available in business sectors of India.
India contributes 12% of Hadoop Developer jobs in the worldwide market.
The number of offshore jobs in India is likely to increase at a rapid pace due to outsourcing.
Almost all big MNCs in India are offering handsome salaries for Hadoop Developers.
80% of market employers are looking for Big Data experts from engineering and management domains.
Top Companies Hiring
The Top ten Companies hiring Hadoop Developers are,
Facebook
Twitter
Linkedin
Yahoo
eBay
Medium
Adobe
Infosys
Cognizant
Accenture
Future of a Hadoop Developer
Hadoop is a technology that the future relies on. Major large-scale enterprises need Hadoop for storing, processing and analysing their big data. The amount of data is increasing exponentially and so is the need for this software.
In the year 2018, the global Big Data and business analytics market was valued at US$ 169 billion, and by 2022 it is predicted to grow to US$ 274 billion. Moreover, a PwC report predicts that by 2020 there will be around 2.7 million job postings in Data Science and Analytics in the US alone.
If you are thinking of learning Hadoop, then it's the perfect time.
Roles and Responsibilities
Different companies have different issues with their data, so developers need a varied skill set to be capable of handling multiple situations with instantaneous solutions. Some of the major and general roles and responsibilities of a Hadoop Developer are:
Developing Hadoop and implementing it with optimum performance
Ability to load data from different data sources
Design, build, install, configure, and support Hadoop systems
Ability to translate complex technical requirements into a detailed design
Analyse vast data stores and uncover insights
Maintain security and data privacy
Design scalable, high-performance web services for data tracking
High-speed data querying
Loading, deploying, and managing data in HBase
Defining job flows using schedulers like Zookeeper
Cluster coordination services through Zookeeper
With this, we come to the end of this article. I hope I have shed some light on what a Hadoop Developer does, along with the skills required, roles and responsibilities, job trends, and salary trends.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Big data. | https://medium.com/edureka/hadoop-developer-cc3afc54962c | ['Shubham Sinha'] | 2020-09-11 06:23:19.072000+00:00 | ['Big Data', 'Hadoop', 'Hadoop Developer', 'Big Data Analytics', 'Hadoop Training'] |
Parametric vs non-parametric statistical tests in Python | Once one has a good understanding of the data they have to work with, they next need to decide what they aim to answer with this information. Understanding the problem at hand is part of the Business Understanding step in the Data Science Process.
The Data Science Process
A business question with a data solution can often be posed as a hypothesis. For example “Is there a difference in the customer conversion rate between our old website design and a proposed new layout?” Having a hypothesis to test is a must-have before statistical testing can occur.
Two types of hypotheses are exploratory and confirmatory; as the names might suggest, exploratory analysis seeks to uncover the “why” and dig into the data while confirmatory hypotheses are more applicable when you have a pretty good idea of what is going on with the data and need evidence to support thinking. It is important to decide a priori which of your hypotheses belong to these categories. It has been argued that limiting exploratory hypothesis testing can help to increase certainty in results.
Once the hypothesis has been determined, the next question to answer is “am I comparing the mean or the median of two groups?”. Parametric tests will compare group means, while non-parametric tests compare group medians. A common misconception is that the decision rests solely on whether the data is normally distributed or not, especially when there is a smaller sample size and distribution of the data can matter significantly. Other factors should also be considered.
Parametric tests are widely regarded as handling normally distributed data (data with a Gaussian distribution) well. However, parametric tests also:
Work well with skewed and non-normal distributions.
Perform well when the spread of each group is different or the groups have different amounts of variability.
Typically have more statistical power than non-parametric tests.
If sample size is sufficiently large and group mean is the preferred measure of central tendency, parametric tests are the way to go.
If group median is the preferred measure of central tendency for the data, go with non-parametric tests regardless of sample size. Non-parametric tests are great for comparing data that is prone to outliers, like salary. They are also useful for data with a small sample size and/or a non-normal distribution, and are especially useful for working with ordinal or ranked data.
Some of the most commonly used statistical parametric tests and their non-parametric counterparts are as follows:
Where n = sample size
There are also tests which compare correlation — looking for associations between variables e.g. Pearson, Spearman, Chi-Squared — and regression tests — seeing if a change in one or more independent variables will predict the change in a dependent variable e.g. simple & multiple regression.
A quick overview of when you might use each of the above tests:
The Paired t test is used when you are looking at one population sample with a before and after score or result. This could be comparing a classroom of students beginning of year proficiency on reading to their end of year proficiency to determine if there was growth or decrease in understanding. The non-parametric counterpart is the Wilcoxon Signed Rank test, which can be used to determine whether two dependent samples were selected from populations having the same distribution and takes into account the magnitude and direction of the difference.
The Unpaired t test, also widely known as the 2-sample or independent t test, is used to compare two samples from different, unrelated groups to determine if there is a difference in the group means. The Mann-Whitney U test, also known as the Wilcoxon rank-sum test, is similar to the Wilcoxon Signed Rank test but measures the magnitude and direction of the difference between independent samples.
Finally, the One-way ANalysis Of VAriance (ANOVA) is used to determine difference in group means for two or more groups where there is one independent variable with at least two distinct levels. An example of this would be predicting the weight of a dog based on breed given a set of dogs of different breeds. The Kruskal Wallis test, an extension of the Mann-Whitney U test for comparing two groups, can be used to compare medians of multiple groups where the distribution of residuals is assumed to not be normal.
There are certain assumptions that are made for data that is to be analyzed using parametric tests. The four assumptions are that 1) the data is normally distributed (or, for a paired test, that the difference between the samples is normally distributed), 2) there is similarity in variance in the data, 3) sample values are numeric and continuous, and 4) sample observations are independent of each other. The following functions from the statsmodels.api module allow us to explore these assumptions during data exploration.
statsmodels.api.graphics.plot_regress_exog()
statsmodels.api.graphics.qqplot()
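As a complement to those visual diagnostics, the first two assumptions can also be spot-checked numerically. The sketch below uses SciPy's Shapiro-Wilk and Levene tests; these particular tests, and the simulated data, are illustrative choices rather than something prescribed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=5, size=40)  # simulated, roughly normal sample
group_b = rng.normal(loc=52, scale=5, size=40)

# Assumption 1: normality of each sample (Shapiro-Wilk; H0 = sample is normal)
stat_a, p_a = stats.shapiro(group_a)
stat_b, p_b = stats.shapiro(group_b)
print(f"Shapiro p-values: {p_a:.3f}, {p_b:.3f}")

# Assumption 2: similarity of variance (Levene's test; H0 = equal variances)
stat_l, p_l = stats.levene(group_a, group_b)
print(f"Levene p-value: {p_l:.3f}")
```

A large p-value here simply means no evidence against the assumption, not proof that the assumption holds.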
Let’s examine how to call up these tests in Python 3. First, the parametric data:
The stats module is a great resource for statistical tests.
Paired t test is
scipy.stats.ttest_rel
Unpaired t test is
scipy.stats.ttest_ind
For ttest_rel and ttest_ind, the P-value in the output measures an alternative hypothesis that 𝜇0 != 𝜇1; for one-sided hypothesis, e.g. 𝜇0 > 𝜇1, divide p by 2 and if p/2 < alpha (usually 0.05).
One-way ANOVA is
scipy.stats.f_oneway
A significant P-value signals that there is a difference between some of the groups, but additional testing is needed to determine where the difference lies.
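A sketch of the dog-breed example with invented weights; the small p-value only rejects "all group means are equal," so a post-hoc test such as Tukey's HSD (not shown here) would still be needed to locate the difference:

```python
from scipy import stats

# Invented dog weights (kg) for three breeds
beagle    = [10.2, 11.0, 9.8, 10.5, 11.3]
labrador  = [29.5, 31.2, 30.1, 28.8, 32.0]
chihuahua = [2.1, 2.4, 1.9, 2.6, 2.2]

f_stat, p_value = stats.f_oneway(beagle, labrador, chihuahua)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
# A small p rejects "all breed means are equal", but does not say which breeds differ
```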
For the non-parametric data:
Wilcoxon Signed Rank is
scipy.stats.wilcoxon
Wilcoxon Rank-Sum is
scipy.stats.ranksums
Signed rank and rank-sum tests should be used for continuous distributions.
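A brief sketch of both Wilcoxon variants; the paired scores and the salary figures (including the deliberate outlier) are invented for illustration:

```python
from scipy import stats

# Wilcoxon signed-rank: dependent (paired) samples
before = [72, 80, 83, 90, 66, 74, 88, 79]  # invented paired scores
after  = [75, 84, 82, 95, 70, 79, 91, 80]
w_stat, p_signed = stats.wilcoxon(after, before)

# Wilcoxon rank-sum: independent samples; ranking blunts the outlier's influence
group1 = [48_000, 52_000, 50_500, 47_200, 51_300]  # invented salaries
group2 = [55_000, 60_200, 58_400, 250_000, 57_100]  # note the extreme outlier
z_stat, p_ranksum = stats.ranksums(group1, group2)
print(f"signed-rank p = {p_signed:.4f}, rank-sum p = {p_ranksum:.4f}")
```

Because the tests operate on ranks, replacing the 250,000 outlier with 60,000 would leave the rank-sum result unchanged.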
Kruskal Wallis is:
scipy.stats.kruskal(group1, group2, group3)
Similar to ANOVA, rejection of the null hypothesis does not tell us which of the groups is different, so additional post hoc group comparison is necessary.
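A sketch of the Kruskal-Wallis call on invented groups, followed by one common post-hoc approach (pairwise rank-sum tests at a Bonferroni-corrected alpha; this is an illustrative choice, not the only valid one):

```python
from itertools import combinations
from scipy import stats

group1 = [27, 30, 25, 29, 31]  # invented data
group2 = [40, 44, 38, 42, 45]
group3 = [26, 28, 24, 30, 27]

h_stat, p_value = stats.kruskal(group1, group2, group3)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Post hoc: pairwise rank-sum tests at a Bonferroni-corrected alpha
    groups = {"group1": group1, "group2": group2, "group3": group3}
    pairs = list(combinations(groups, 2))
    alpha = 0.05 / len(pairs)
    for a, b in pairs:
        _, p_pair = stats.ranksums(groups[a], groups[b])
        verdict = "differ" if p_pair < alpha else "no evidence of difference"
        print(f"{a} vs {b}: {verdict} (p = {p_pair:.4f})")
```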
In terms of takeaways, it is never good practice to conclude on the results of one test, but significant findings should lead to additional investigation. Bonferroni corrections, a topic for another time, can be used to reduce spurious positives. | https://zachary-a-zazueta.medium.com/parametric-vs-non-parametric-statistical-tests-in-python-9c7ab48e954a | ['Zach Zazueta'] | 2020-09-07 16:09:37.346000+00:00 | ['Parametric Tests', 'Hypothesis Testing', 'Statistical Test', 'Python'] |
B2B Marketplaces Will Be The Next Billion-Dollar E-Commerce Startups | Originally published in TechCrunch on November 4, 2020.
Startups involved in B2B e-commerce such as Faire and Mirakl have burst out of the gates in 2020. Almost overnight, these startups transformed into consequential platforms, earning billion-dollar valuations along the way. The B2B e-commerce industry has broad reach, encompassing everything from commerce infrastructure and payments technology to procurement and supply-chain solutions. But one area of the B2B e-commerce sector holds outsized promise: marketplaces.
These venues for buyers and sellers of business-related products are exploding in popularity, fueled by better infrastructure, payments and security on the back-end and companies’ increased need to conduct business online during the pandemic.
Even before the pandemic, B2B marketplaces were expected to generate $3.6 trillion in sales by 2024, up from an estimated $680 billion in 2018, according to payments research firm iBe TSD. They were already growing more quickly than most B2C marketplaces that predated them, and when COVID shutdowns hit, many companies scrambled to shift all purchasing online. A survey of business buyers conducted by Digital Commerce 360 found that 20% of purchasing managers spent more on marketplaces, and 22% spent significantly more, during the pandemic.
For many entrepreneurs running B2B marketplaces, the pandemic created new demand for their platforms. Yet to convince businesses to make a permanent shift to online purchasing, B2B marketplaces cannot simply remain stagnant, serving as simple transactional platforms. Those that innovate now to introduce adjacent services will emerge as winners in the next few years, with some inevitably becoming billion-dollar companies.
As a venture capital investor in B2B e-commerce companies, I’m carefully watching the industry and have seen several forward-thinking business models emerge for B2B marketplaces. The predominant revenue model of B2C marketplaces, the gross merchandise value (GMV) take rate, or percentage of each transaction, doesn’t always translate well in the B2B world. Instead, B2B marketplaces are discovering creative new ways to monetize their networks, ensuring their approach is tailored to the complex and nuanced world of B2B e-commerce. I’ll delve into each of these models below, providing examples of marketplaces that have successfully begun implementing them.
What makes B2B transactions unique? Before discussing how B2B marketplaces can deploy new business models, it’s important to think about how B2B transactions typically work.
Payment methods: There are four main ways to make a B2B payment: paper check, ACH transfer, electronic fund transfer (wires), and credit/debit cards. Nearly half of B2B payments are still made by paper check, but digital payment solutions are quickly gaining.
Financing: It is customary in B2B transactions to pay “with terms,” such as net 30 or net 60, effectively giving a line of credit to the business buyer that enables them to send payment after delivery of the good or service. Supply-chain financing and dynamic discounting are two mechanisms business buyers use to settle invoices with suppliers on preferred timelines.
Bulk discounts: Business buyers often expect and receive discounts in return for placing high-volume orders. While not a concept unique to B2B, negotiated or custom volume discounts can complicate the checkout process.
Contractual pricing: Businesses often enter into enterprise-level pricing agreements with their suppliers. In some B2B verticals, such as the veterinary supplies market, there is little consistency and transparency regarding the market price of any given item; instead, each buyer pays a bespoke price tied to contractual agreements. This dynamic typically benefits suppliers, which can price discriminate based on buyers’ ability and willingness to pay.
Delivery method and timing: Unlike consumers, businesses may place orders for goods but delay delivery for weeks or months. This is particularly common in the commodities market, where futures contracts specify a commodity to be delivered on a certain date in the future. B2B transactions typically include a negotiation on delivery method and timing.
Insurance: Business buyers frequently purchase insurance as part of their transactions, particularly in high-value verticals such as jewelry. Insurance is designed to protect against damage to the goods in transit or theft.
Compliance: In some verticals, particularly those related to healthcare and chemicals, there is a heavy compliance burden to ensure goods are properly sourced and transported. Is the seller legally registered to sell and transport sensitive goods such as medical equipment or pharmaceuticals?
With all of these considerations, it’s no wonder B2B e-commerce has been slower to digitize than B2C. From product discovery through the checkout process, a consumer buying a bag of licorice looks nothing like a retailer buying 100,000 bags of licorice from a distributor. The good news for B2B marketplace founders is that, based on the parameters above, there are many creative ways to extract value from transactions that go beyond the GMV take rate. Let’s explore some of the creative ways to monetize a B2B marketplace.
Sampling fees
In most B2B verticals, individual transactions are so large that charging fees on a percentage basis means scaring potential customers away. In high-value markets with infrequent orders, charging a take rate on purchase orders will be perceived as unfair, especially when suppliers and buyers know each other already. But the fee-per-sample model is a unique wedge to aggregate suppliers and buyers, who often sample supplies before placing large orders.
One of our portfolio companies, Material Bank, has used this monetization strategy with success. Material Bank is a B2B marketplace for construction and interior design materials that warehouses samples (fabric swatches, paint chips, flooring materials, wall coverings, etc.) from hundreds of brands. Architects and interior designers can order free samples from Material Bank and receive them the next morning, and then ship samples back for free when they’re no longer needed. Material Bank charges the manufacturers a fee every time one of their samples is shipped out. Manufacturers receive new customer leads that require no effort to generate and are happy to outsource sample fulfillment, which was historically a cost center and not a core competency. Other B2B markets where sampling is well-established include chemicals, apparel and packaging materials.
Data monetization
In mature markets, reliable longitudinal sales data exists at the SKU-level, enabling manufacturers to understand exactly how their products perform relative to peers. The best example is the role Nielsen and IRI play in the consumer packaged goods (CPG) industry. These two companies have exclusive point-of-sale integrations with major retailers. They collect, cleanse, package and analyze sales data, which they then sell back to CPG brands and manufacturers via data services portals.
Brands rely on this information for product development, marketing, distribution and strategy decisions. In most B2B verticals, sales data isn’t centralized to the same degree, so suppliers often have no idea what their precise market share is, or how a certain product performs against its closest competitors. B2B marketplaces have an opportunity to change that. Those that achieve scale will be able to monetize insights from the transaction activity on their platforms, playing a key role in data capture and analytics.
Companies that have successfully monetized B2B commerce data include Panjiva, a global trade data company, and BroadJump, an expense management platform for the healthcare industry.
Embedded financial services
Embedded financial services is the premise that fintech is an ingredient within a broader product suite as opposed to a standalone business model. An oft-cited example is Shopify, which started as virtual storefront software, subsequently monetized payments volume and later introduced Shopify Capital, a small business financing program. Another example is Toast, a software platform for restaurants that started as a point-of-sale system and gradually expanded into other financial services over time, including lending and payroll management. There are three prongs of embedded financial services: integrated payments, lending and insurance. B2B marketplaces are well-positioned to offer all three.
LeafLink, a wholesale marketplace for the cannabis industry, is a pioneer of embedded fintech. The company has amassed a significant share of cannabis wholesale buyers (retail stores) and growers on its platform, a marketplace with embedded inventory management and CRM tools, and is now facilitating noncash payments and supply-chain financing. Due to a lack of federal legalization, many banks are constrained in serving cannabis customers even in states where trade is legal, effectively leaving buyers and sellers unbanked (or you might say high and dry).
Cash on delivery (COD) is the most common payment method, making it difficult for cannabis companies to access financing to fuel growth. LeafLink is solving these vertical-specific challenges with its embedded fintech offerings.
Targeted advertising
Advertising on B2B marketplaces can take several forms, the most common being sponsored listings, similar to Google Adwords. If someone searches for certain products on the website, a supplier can pay to have their products show up at the top of the list. Advertising can take a more traditional form, too, such as printed marketing materials included in packages delivered to buyers. With an understanding of buyers’ profiles and order history, marketplaces can function as originators of direct mail to targeted, qualified buyers.
As a marketplace, wading into advertising can be tricky because neutrality is key, but giving suppliers the option to selectively market to buyers can be a powerful tool. One B2B marketplace that has successfully monetized through advertising is Construct Connect, a bidding platform for construction projects.
Subscription fees
B2C marketplaces rarely charge subscription fees. Dating apps like Tinder, OKCupid and Raya are notable exceptions, since they provide users with access to a valuable curated network, but there are no monetizable transactions (at least, we hope not). In the B2B world, however, monetization through subscription often makes sense. Suppliers are likely to pay for access to high-quality buyers in vertical markets because business buyers tend to be repeat customers making large purchases.
A useful way to attack the “chicken or egg” problem is to aggregate a group of buyers on a platform without charging them. When you reach a critical mass of coveted buyers on a platform, sellers may then be willing to pay a subscription fee to access them. A few examples of B2B marketplaces that have succeeded with subscription fees include Bamboo Rose, a global supply chain management platform, and Cvent, a platform for event management professionals.
Private-label products
Drugstores and department stores have sold merchandise under their own brands for decades. E-commerce sites like Revolve, the multibrand women’s apparel retailer, launched private-label brands over the years without customers realizing it. But in B2B commerce, the notion of private-label products is less common. It wasn’t until 2019 that Amazon launched its first B2B private-label brand, a product line of bulk toilet paper, tissues and paper towels (and subsequently got in hot water with regulators for leveraging its sales data to develop its own brands). We should expect to see more private-label products from vertical B2B marketplaces.
Analyzing sales data on their platforms, B2B marketplaces can not only create private-label products based on those which are the most popular and/or where gaps exist, they can also drive high sell-through on their own items. There’s nothing stopping vertical marketplaces from assuming the role of manufacturer, marketer and distributor of their own products.
The pandemic placed a spotlight on the importance of digitizing B2B transactions in every sector, from pharmaceuticals and CPG, to construction materials, food, supplies, manufacturing and beyond. B2B marketplaces stand to profit from this move to the digital realm, but only those that offer far more than just a place to transact will become billion-dollar businesses. | https://medium.com/ideas-from-bain-capital-ventures/b2b-marketplaces-will-be-the-next-billion-dollar-e-commerce-startups-6fb42c5be50f | ['Merritt Hummer'] | 2020-12-02 15:15:19.156000+00:00 | ['Monetization', 'Insights', 'B2B', 'Startup', 'Marketplaces'] |
How do you use JavaScript (Preload, Prefetch)? Why use it, and what does it help with? | Written by
Frontend Developer At Central JD FinTech Co., Ltd | https://medium.com/23perspective/javascript-preload-prefetch-%E0%B9%83%E0%B8%8A%E0%B9%89%E0%B8%A2%E0%B8%B1%E0%B8%87%E0%B9%84%E0%B8%87-%E0%B8%97%E0%B8%B3%E0%B9%84%E0%B8%A1%E0%B8%95%E0%B9%89%E0%B8%AD%E0%B8%87%E0%B9%83%E0%B8%8A%E0%B9%89-%E0%B8%8A%E0%B9%88%E0%B8%A7%E0%B8%A2%E0%B8%AD%E0%B8%B0%E0%B9%84%E0%B8%A3-d9d510a7d451 | ['Tuanrit Sahapaet'] | 2019-05-23 10:57:08.285000+00:00 | ['Engineering', 'JavaScript', 'Developer', 'Web Development', 'Code'] |
3 Stories to Make the World Small | 1. Superman’s Island
In elementary school, a young Clark Kent first discovers he’s hypersensitive. He can hear faint noises miles away and see right through walls, doors, even people.
Since he doesn’t know how to control these powers yet, all the impressions overwhelm him and trigger a seizure. Clark runs away and hides in a closet.
Eventually, the teacher calls his mom to the scene, and she starts speaking to Clark through the locked door:
“Sweetie, how can I help you if you won’t let me in?”
“The world’s too big, Mom.”
“Then make it small. Just focus on my voice. Pretend it’s an island, out in the ocean. Can you see it?”
“I see it.”
“Then swim towards it, honey.”
Once he hones in on the one thing right in front of him — his mom — Clark calms down and leaves the closet.
Social media, the internet, our state of constant connection — Clark Kent isn’t the only one who’s hypersensitive. It’s all of us. We share and communicate so much, we too can see other people’s insides; their thoughts, wishes, feelings. It creates a lot of noise too, and we can hear it, even if it’s made far away.
Cheaper entertainment, travel, remote work. As consumers, workers, experiencers, we hold more power to choose than ever. It scares us. There’s too much of everything, and it puts a grave responsibility on our shoulders: What do we do with our limited time? Because we’ll never get to it all.
Unfortunately, we can’t replay our lives like movie scenes. Our moms won’t always be there when we want to run and hide. But we can still make an effort to find our island.
Let the universe be big. You hone your senses. Quiet your mind. Calm down. Focus. Zoom in on the next thing that matters. And then swim towards it.
2. The Wall
In the 1970s, there was an electrician in Philadelphia. The man’s job was to install freezing cases in supermarkets. You know, the long ones with glass doors, from which you pick up your milk and frozen pizza. To set up his own little workshop, the man bought an old bakery.
One summer, he decided to rebuild the front wall. It was made of bricks, about 16 feet high, and 30 feet long. After he had torn down the old one, he called his two sons to the site. They were twelve and nine years old. He told them that they were now in charge of building a new wall.
The boys’ first task was to dig a six-foot hole for the foundation. Then, they filled it with concrete, which they had to mix first — by hand. Clearly, this wasn’t just a job for the summer holidays. For the next year and a half, every day after school, they went to their father’s shop to build that wall. To the young brothers, it felt like forever. But eventually, they laid the final brick.
When their dad came to audit what they had done, the three of them stood back and looked at the result. There it was. A brand new, magnificent, 16 by 30 feet wall. The man looked at his sons and said, “Don’t y’all never tell me that you can’t do something” — and then he walked into the shop.
The electrician’s name was Willard Carrol Smith. It’s the same name he gave his oldest son. Today, we know the 12-year-old as Will Smith.
When Will recounted this story on Charlie Rose in 2002, he said:
You don’t try to build a wall. You don’t set out to build a wall. You don’t say, “I’m gonna build the biggest, baddest, greatest wall that’s ever been built.” You don’t start there. You say, “I’m gonna lay this brick as perfectly as a brick can be laid. There will not be one brick on the face of the earth that’s gonna be laid better than this brick that I’m gonna lay in these next 10 minutes.” And you do that every single day, and soon, you have a wall.
“Brick by Brick,” as we might call it, is a story about the value of hard work. But it’s also a story about happiness. Because what Will also said is this:
I think, psychologically, the advantage that that gives me over a lot of people that I’ve been in competition with in different situations is: It’s difficult to take the first step when you look at how big the task is. The task is never huge to me. It’s always one brick.
That’s more than a competitive advantage. It’s a philosophy of relief. By choosing to focus on the next step, the next brick, not the end result, Will never feels overwhelmed. That’s happiness and, if we make that same decision, it’s available to us all.
3. The Bowl
A monk told Joshu, “I have just entered the monastery. Please teach me.” Joshu asked, “Have you eaten your rice porridge?” The monk replied, “I have eaten.” Joshu said, “Then you had better wash your bowl.” At that moment, the monk was enlightened.
I eat a lot of cereal. Every time I do the dishes is a chance to remember this story. Leo Babauta shared it years ago. He added:
Remembering to do these things when we’re done with the activity isn’t just about neatness. It’s about mindfulness, about completing what we started, about being present in all we do instead of rushing to the next activity. Don’t get your head caught up in all this thinking about the meaning of life … instead, just do. Just wash your bowl. And in the washing, you’ll find all you need.
Some tasks feel inherently comforting, but all tasks offer comfort if we let them. Enough-ness is transferable. You can bring it to all your activities. Whatever life demands of you, if you do it with intention, the outcome won’t matter so much because, simply by being there, you gave it your best.
Life is big, but it’s made of small moments. Every event is a tiny piece of an infinite puzzle. Washing your bowl is choosing to enjoy the shape and detail of each one. And since the puzzle will never be complete, you might as well start doing that today. | https://ngoeke.medium.com/3-stories-to-make-the-world-small-31be8544a66c | ['Niklas Göke'] | 2019-10-17 12:09:59.095000+00:00 | ['Happiness', 'Stories', 'Self', 'Psychology', 'Mindfulness'] |
We Must Have Training to Address the Trauma Our Students Bring to School | Photo by Chris Benson on Unsplash
By Austin Hawk
Screaming, shrieking, and begging were not the sounds I expected to hear while reading a story to my class. I was teaching in San Antonio, in one of the poorest zip codes in Texas, and Lily, a third grader new to our school, was continually challenging my knowledge and skills as an educator. She had experienced trauma before entering my classroom, and I lacked the necessary knowledge and skills to help her.
My school served students from Haven for Hope, an organization whose mission is to provide care and support to those affected by homelessness in Bexar County. Many students supported and housed by Haven for Hope came through my classroom and Lily was one of them. On her first day, I heard Lily from a distance pleading for her mom not to leave. She had previously been put into the foster system and was now terrified to go to school. Lily was scared that if her mom left, she would not see her again. This stress made Lily’s coming to school and leaving her mom agonizing each day.
The need for school-based mental health services to ensure our students are safe and ready to learn has become even more evident in recent years, following major events like Hurricane Harvey and the tragedy in Santa Fe. Teach Plus Texas Fellows, a cohort of teacher leaders from across the state of Texas who advocate on education policy issues, conducted a statewide survey of teachers to identify the level of need for mental health awareness in schools. The Fellows asked teachers in schools that spanned the socioeconomic spectrum if they had students in their classes who experienced Adverse Childhood Experiences (ACEs). Ninety-one percent of teachers said yes, they had students in their classes who had experienced these types of trauma.
Over time, Lily became more comfortable in my classroom; however, I quickly exhausted the skills I had to support her beyond academics. Lily also missed many days of school because she did not want to leave her mother. She needed additional coping strategies and techniques that I was unable to provide due to a lack of training. The school counselor did everything he could to help me while also supporting 600 other elementary school students, completing administrative duties, and leading state programs such as STAAR testing. In the end, we failed Lily by not being able to provide her with the best resources to be a successful student. She came to my classroom below grade level and finished the year in a similar situation. Lily needed to make significant academic growth quickly and unfortunately, that did not happen. As a result, she continued to fall further behind her peers.
This year, Governor Abbott signed into law HB 18, the legislation that provides multiple pathways for all stakeholders in schools to increase awareness about mental health and trauma-informed instruction. As teachers, we must continue to lead and encourage our schools and districts to take advantage of this legislation.
With the passage of HB 18, there are new opportunities to partner with mental health professionals outside of the educational system, and we must advocate for our school leaders to access this resource. In addition, access to professional development opportunities related to mental health, trauma-informed instruction, and suicide prevention now must be available at the district and state level. School districts across Texas should be reaching out to experts to share their knowledge with a variety of school personnel; in my own districts, I recently watched experts present on ACEs and trauma. In addition, school districts should make online learning options available to meet the diverse needs of their employees. As teachers, we must advocate for opportunities to learn about these crucial topics. If districts throughout the state of Texas engage in this work, our state can become a model that will encourage other states to better support their students who have experienced trauma.
Every student deserves a safe and healthy school that includes teachers, administrators, and other school personnel who are prepared and trained to meet their needs. Lily, and many other students like her, will greatly benefit from current and future educators engaging in professional development related to mental health, trauma, and suicide. Their future depends on it. | https://medium.com/whats-the-plus/we-must-have-training-to-address-the-trauma-our-students-bring-to-school-1d1126a21f0a | ['Teach Plus'] | 2019-08-22 18:30:24.052000+00:00 | ['Trauma Informed Teaching', 'Mental Health', 'K 12 Education', 'Teachers Leaning In', 'Texas'] |
3 Simple Reasons Why I Didn’t Replace My Old AirPods With New | Credit: Suganth on Unsplash
By Tim Schröder · Jun 11
I remember when I first used AirPods. It was in the office at work. I was chatting with a colleague, and he mentioned that he was thinking about selling his pair of AirPods. He had bought them for working out, but they fell out every time.
Eager to try them, I borrowed his pair, and I was amazed. I felt every single positive thing, little and big, that others say about the AirPods. They are light and comfortable. Besides, they are easy to wear, easy to pair, and quick to charge. I was sold.
The next day, I came back with the money and bought them from my colleague. From then on, I took them with me wherever I went.
For over two years, I’ve used my pair of original AirPods in the craziest surroundings possible.
In planes over the Indian and Atlantic Ocean. In the hustle and bustle of Bangkok and Kuala Lumpur. And while working out in Bali, amid a humidity which would make every sauna jealous.
However, just like with my iPad, the AirPods and I weren’t a true love story after all. I had worn them down. Over time, they lost their charge, appeal, and sound quality. I stumbled across their shortcomings, and we started yelling at each other. Okay, admittedly, it was more me yelling at them, because their volume had faded…
As it was time to part, I didn’t replace my old AirPods with new. Here’s why: | https://medium.com/macoclock/3-simple-reasons-why-i-didnt-replace-my-old-airpods-with-new-346465d49957 | ['Tim Schröder'] | 2020-06-11 05:11:40.007000+00:00 | ['Airpods', 'Experience', 'Wearables', 'Tech', 'Apple'] |
Splunk Architecture — Forwarder, Indexer & Search Head Tutorial | The demand for Splunk Certified professionals has seen a tremendous rise, mainly due to the ever-increasing machine-generated log data from almost every advanced technology that is shaping our world today. If you want to implement Splunk in your infrastructure, then it is important that you know how Splunk works internally. I have written this article to help you understand the Splunk architecture and tell you how different Splunk components interact with one another.
Before I talk about how the different Splunk components function, let me mention the various stages of the data pipeline that each component falls under.
Different Stages In Data Pipeline
There are primarily 3 different stages in Splunk:
Data Input stage
Data Storage stage
Data Searching stage
Data Input Stage
In this stage, Splunk software consumes the raw data stream from its source, breaks it into 64K blocks, and annotates each block with metadata keys. The metadata keys include hostname, source, and source type of the data. The keys can also include values that are used internally, such as character encoding of the data stream and values that control the processing of data during the indexing stage, such as the index into which the events should be stored.
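As a rough mental model only (Splunk's real block handling is internal and more involved), the annotation step can be sketched in a few lines; the 64 KB block size and the metadata keys come straight from the description above.

```python
# Illustrative sketch only: Splunk's real block handling is internal.
# Slice a raw byte stream into 64 KB blocks and annotate each block
# with the metadata keys named above (host, source, sourcetype).
def annotate_blocks(raw, host, source, sourcetype, block_size=64 * 1024):
    blocks = []
    for offset in range(0, len(raw), block_size):
        blocks.append({
            "data": raw[offset:offset + block_size],
            "meta": {"host": host, "source": source, "sourcetype": sourcetype},
        })
    return blocks

blocks = annotate_blocks(b"x" * 150_000, "web01", "/var/log/access.log", "access_combined")
print(len(blocks))  # 3: two full 64 KB blocks plus a remainder
```

The point of the annotation is that every downstream stage can read the keys without re-deriving them from the raw bytes.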
Data Storage Stage
Data storage consists of two phases: Parsing and Indexing.
1. In the Parsing phase, Splunk software examines, analyzes, and transforms the data to extract only the relevant information. This is also known as event processing. It is during this phase that Splunk software breaks the data stream into individual events. The parsing phase has many sub-phases:
a) Breaking the stream of data into individual lines
b) Identifying, parsing, and setting timestamps
c) Annotating individual events with metadata copied from the source-wide keys
d) Transforming event data and metadata according to regex transform rules
2. In the Indexing phase, Splunk software writes parsed events to the index on disk. It writes both the compressed raw data and the corresponding index file. The benefit of indexing is that the data can be easily accessed during searching.
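The parsing sub-phases can be sketched in miniature. This toy Python version does line breaking, timestamp extraction, and metadata annotation; real Splunk line-breaking and timestamp rules are configurable and far richer.

```python
import re
from datetime import datetime

# Toy version of the parsing phase: break the stream into line-based
# events, extract a timestamp where one is found, and copy the
# source-wide metadata keys onto every event.
TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def parse_stream(stream, meta):
    events = []
    for line in stream.splitlines():
        if not line.strip():
            continue  # skip blank lines
        event = {"_raw": line, "_time": None, **meta}
        match = TIMESTAMP.search(line)
        if match:
            event["_time"] = datetime.strptime(match.group(), "%Y-%m-%d %H:%M:%S")
        events.append(event)
    return events

sample = "2020-01-01 10:00:00 ERROR disk full\n2020-01-01 10:00:05 INFO recovered\n"
for event in parse_stream(sample, {"host": "web01", "sourcetype": "syslog"}):
    print(event["_time"], event["_raw"])
```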
Data Searching Stage
This stage controls how the user accesses, views, and uses the indexed data. As part of the search function, Splunk software stores user-created knowledge objects, such as reports, event types, dashboards, alerts and field extractions. The search function also manages the search process.
Splunk Components
If you look at the below image, you will understand the different data pipeline stages under which various Splunk components fall under.
There are 3 main components in Splunk:
Splunk Forwarder, used for data forwarding
Splunk Indexer, used for Parsing and Indexing the data
Search Head is a GUI used for searching, analyzing and reporting
Splunk Forwarder
Splunk Forwarder is the component which you have to use for collecting the logs. Suppose, you want to collect logs from a remote machine, then you can accomplish that by using Splunk’s remote forwarders which are independent of the main Splunk instance.
In fact, you can install several such forwarders in multiple machines, which will forward the log data to a Splunk Indexer for processing and storage. What if you want to do real-time analysis of the data? Splunk forwarders can be used for that purpose too. You can configure the forwarders to send data to Splunk indexers in real-time. You can install them in multiple systems and collect the data simultaneously from different machines in real time.
Compared to other traditional monitoring tools, Splunk Forwarder consumes very less CPU ~1–2%. You can scale them up to tens of thousands of remote systems easily, and collect terabytes of data with minimal impact on performance.
Now, let us understand the different types of Splunk forwarders.
Universal Forwarder
You can opt for a universal forwarder if you want to forward the raw data collected at the source. It is a simple component which performs minimal processing on the incoming data streams before forwarding them to an indexer.
Data transfer is a major problem with almost every tool in the market. Since there is minimal processing on the data before it is forwarded, a lot of unnecessary data is also forwarded to the indexer, resulting in performance overheads.
Why go through the trouble of transferring all the data to the Indexers and then filter out only the relevant data? Wouldn’t it be better to only send the relevant data to the Indexer and save on bandwidth, time and money? This can be solved by using Heavy forwarders which I have explained below.
Heavy Forwarder
You can use a Heavy forwarder and eliminate half your problems because one level of data processing happens at the source itself before forwarding data to the indexer. Heavy Forwarder typically does parsing and indexing at the source and also intelligently routes the data to the Indexer saving on bandwidth and storage space. So when a heavy forwarder parses the data, the indexer only needs to handle the indexing segment.
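To see why source-side processing saves bandwidth, here is a hypothetical filter-and-route step. The specific rules (drop DEBUG events, send sshd lines to a "security" index) are invented for this sketch, not taken from any real forwarder configuration.

```python
# Hypothetical illustration of source-side processing in a heavy
# forwarder: drop unwanted events and route the rest to an index
# before anything crosses the wire. The rules here (drop DEBUG lines,
# send sshd lines to a "security" index) are invented for this sketch.
def filter_and_route(raw_events):
    routed = []
    for event in raw_events:
        if "DEBUG" in event:
            continue  # filtered out at the source, never forwarded
        index = "security" if "sshd" in event else "main"
        routed.append({"index": index, "event": event})
    return routed

lines = ["sshd: failed login", "DEBUG: cache miss", "nginx: 200 OK"]
print([r["index"] for r in filter_and_route(lines)])  # ['security', 'main']
```

Only two of the three events ever reach the indexer, and each arrives already tagged with its destination index.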
Splunk Indexer
An indexer is the Splunk component which you will have to use for indexing and storing the data coming from the forwarder. The Splunk instance transforms the incoming data into events and stores them in indexes for performing search operations efficiently. If you are receiving the data from a Universal forwarder, the indexer will first parse the data and then index it. Parsing of data is done to eliminate the unwanted data. But if you are receiving the data from a Heavy forwarder, the indexer will only index the data.
As the Splunk instance indexes your data, it creates a number of files. These files contain one of the below:
Raw data in compressed form
Indexes that point to raw data (index files, also referred to as tsidx files), plus some metadata files
These files reside in sets of directories called buckets.
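A greatly simplified stand-in shows what the index files buy you: an inverted map from search term to event positions, so a query never has to scan every raw event. Real tsidx files are far more elaborate than this.

```python
from collections import defaultdict

# Greatly simplified stand-in for an index file: an inverted map from
# search term to the positions of the raw events containing it, so a
# search never has to scan every compressed raw event. Real tsidx
# files are far more elaborate.
def build_index(raw_events):
    index = defaultdict(set)
    for position, event in enumerate(raw_events):
        for term in event.lower().split():
            index[term].add(position)
    return index

raw = ["ERROR disk full", "INFO disk ok", "ERROR net down"]
index = build_index(raw)
print(sorted(index["error"]))  # [0, 2] -> only these events need fetching
```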
Let me now tell you how Indexing works.
Splunk processes the incoming data to enable fast search and analysis. It enhances the data in various ways like:
Separating the data stream into individual, searchable events
Creating or identifying timestamps
Extracting fields such as host, source, and source type
Performing user-defined actions on the incoming data, such as identifying custom fields, masking sensitive data, writing new or modified keys, applying breaking rules for multi-line events, filtering unwanted events, and routing events to specified indexes or servers
This indexing process is also known as event processing.
Another benefit of Splunk Indexer is data replication. You need not to worry about the loss of data because Splunk keeps multiple copies of indexed data. This process is called Index replication or Indexer clustering. This is achieved with the help of an Indexer cluster, which is a group of indexers configured to replicate each other’s’ data.
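As a toy model of that replication idea: each bucket is copied to a configured number of distinct peers, so losing one indexer never loses data. Real clusters are coordinated by a manager node; this round-robin placement is only illustrative.

```python
# Toy model of index replication: copy each bucket to
# `replication_factor` distinct peers so that losing one indexer never
# loses data. Real clusters are coordinated by a manager node; this
# round-robin placement is only illustrative.
def place_copies(buckets, peers, replication_factor=2):
    placement = {}
    for i, bucket in enumerate(buckets):
        placement[bucket] = [peers[(i + k) % len(peers)]
                             for k in range(replication_factor)]
    return placement

print(place_copies(["b0", "b1"], ["idx1", "idx2", "idx3"]))
# {'b0': ['idx1', 'idx2'], 'b1': ['idx2', 'idx3']}
```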
Splunk Search Head
Search head is the component used for interacting with Splunk. It provides a graphical user interface to users for performing various operations. You can search and query the data stored in the Indexer by entering search words and you will get the expected result.
You can install the search head on separate servers or with other Splunk components on the same server. There is no separate installation file for the search head; you just have to enable the splunkweb service on the Splunk server.
A Splunk instance can function both as a search head and a search peer. A search head that performs only searching and not indexing is referred to as a dedicated search head. Whereas, a search peer performs indexing and responds to search requests from other search heads.
In a Splunk instance, a search head can send search requests to a group of indexers, or search peers, which perform the actual searches on their indexes. The search head then merges the results and sends them back to the user. This is a faster technique to search for data called distributed searching.
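The scatter-and-merge idea behind distributed search can be sketched with peers modeled as plain lists of events: each peer filters its own slice, and the search head merges the partial results into one list, newest first.

```python
import heapq

# Sketch of the scatter-and-merge idea behind distributed search: each
# peer filters its own events, and the search head merges the partial
# results into one list ordered newest-first. Peers are plain lists here.
def distributed_search(peers, term):
    partials = [
        sorted((e for e in peer if term in e["_raw"]),
               key=lambda e: e["_time"], reverse=True)
        for peer in peers
    ]
    return list(heapq.merge(*partials, key=lambda e: e["_time"], reverse=True))

peer_a = [{"_time": 3, "_raw": "ERROR a"}, {"_time": 1, "_raw": "INFO b"}]
peer_b = [{"_time": 2, "_raw": "ERROR c"}]
hits = distributed_search([peer_a, peer_b], "ERROR")
print([h["_time"] for h in hits])  # [3, 2]
```

The filtering work stays on the peers; the search head only pays for the final merge.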
Search head clusters are groups of search heads that coordinate the search activities. The cluster coordinates the activity of the search heads, allocates jobs based on the current loads, and ensures that all the search heads have access to the same set of knowledge objects.
Advanced Splunk Architecture With A Deployment Server / Management Console Host
Look at the above image to understand the end-to-end working of Splunk. The image shows a few remote Forwarders that send the data to the Indexers. Based on the data present in the Indexer, you can use the Search Head to perform functions like searching, analyzing, visualizing and creating knowledge objects for Operational Intelligence.
The Management Console Host acts as a centralized configuration manager responsible for distributing configurations, app updates and content updates to the Deployment Clients. The Deployment Clients are Forwarders, Indexers and Search Heads.
Splunk Architecture
If you have understood the concepts explained above, you can easily relate to the Splunk architecture. Look at the image below to get a consolidated view of the various components involved in the process and their functionalities.
You can receive data from various network ports by running scripts for automating data forwarding
You can monitor the files coming in and detect the changes in real time
The forwarder has the capability to intelligently route the data, clone the data and do load balancing on that data before it reaches the indexer. Cloning is done to create multiple copies of an event right at the data source, whereas load balancing is done so that even if one instance fails, the data can be forwarded to another instance which is hosting the indexer
As I mentioned earlier, the deployment server is used for managing the entire deployment, configurations and policies
When this data is received, it is stored in an Indexer. The indexer is then broken down into different logical data stores, and at each data store you can set permissions which will control what each user views, accesses and uses
Once the data is in, you can search the indexed data and also distribute searches to other search peers; the results will be merged and sent back to the Search head
Apart from that, you can also do scheduled searches and create alerts, which will be triggered when certain conditions match saved searches
You can use saved searches to create reports and perform analysis using Visualization dashboards
Finally, you can use Knowledge objects to enrich the existing unstructured data
Search heads and Knowledge objects can be accessed from a Splunk CLI or a Splunk Web Interface. This communication happens over a REST API connection
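The failover half of that load-balancing behaviour can be sketched as trying the configured indexers in order; this is a minimal illustration, not the forwarder's actual algorithm.

```python
# Minimal illustration of forwarder failover: deliver to the first
# reachable indexer in the configured list, so one indexer outage does
# not lose the event. Cloning would instead deliver a copy to every
# reachable host. This is not the forwarder's actual algorithm.
def forward(event, indexers, is_up):
    for host in indexers:
        if is_up(host):
            return host, event
    raise RuntimeError("no indexer reachable")

status = {"idx1": False, "idx2": True}
print(forward("ERROR disk full", ["idx1", "idx2"], status.get))
# ('idx2', 'ERROR disk full')
```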
I hope you enjoyed reading this article on Splunk Architecture, which talks about the various Splunk components and their working. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Splunk. | https://medium.com/edureka/splunk-architecture-c9910b34c745 | ['Aayushi Johari'] | 2019-06-06 08:24:16.283000+00:00 | ['Big Data', 'Log Monitoring', 'Splunk Administration', 'Log Analysis', 'Splunk'] |
Pipeline Politics: The Appalling Silence of Virginia Governor Ralph Northam | It has been said that where you stand often depends on where you sit. Such is the case with the term “old plantation.”
People who want to distort the cruel history of slavery in America use the term “old plantation” to conjure false and comforting (to some) memories of happy slaves and their benevolent owners. This historic lie was perpetrated by historians in the early 20th century and beyond with the active support of Hollywood. The narrative came to be known in film and literature as the “plantation tradition,” meaning “works that look back nostalgically to the times before the Civil War, before the ‘Lost Cause’ of the Southern Confederacy was lost, as a time when an idealized, well-ordered agrarian world and its people held certain values in common.”
For African Americans and all honest students of history, however, “old plantation” means something quite different. As Paul Robeson, the great actor, singer and human rights activist put it simply when referring to Hollywood’s tradition of depicting the African American slave as “solving his problem by singing his way to glory”, the “old plantation tradition” is “very offensive to my people.”
All of which makes this little known fact about two members of Virginia Governor Ralph Northam’s inner circle a little more than curious: In 2010, Clark Mercer, who is now Northam’s Chief of Staff, created an oyster business on the Eastern Shore of Virginia and called it “Old Plantation Oyster Company, LLC.” Mercer’s business partner in Old Plantation was his childhood friend, a then largely unknown 29 year old aide to then State Senator Northam. The aide’s name was Matt Strickler, and the headquarters for their new business was the “country estate” of Strickler’s grandparents. Strickler now serves as Northam’s Secretary of Natural Resources, which puts him at the center of the fierce battle over two proposed massive fracked methane gas projects, the Atlantic Coast and Mountain Valley Pipelines. More on that below.
When Mercer and Strickler created their new “Old Plantation” company, they also started a website, where they boasted that “we think you’ll love our Old Plantations.” And they had big plans, looking to grow 150,000 oysters by the spring of 2011. They even tested their “Old Plantations” at a June 2011 Democratic fundraising dinner.
But all did not go well for the “Old Plantations,” and by 2013, Mercer and Strickler failed to pay their annual LLC registration fee and the State Corporation Commission revoked their privilege to do business. But the Old Plantation Oyster Company website stayed active until as late as June 2018, when this writer started tweeting questions about the “Old Plantation.”
Mercer and Strickler were born and raised in Virginia and each have undergraduate and graduate degrees from fine universities, so they can be presumed to be aware of the history and meaning of the term “Old Plantation.” There is no reason to believe that they are adherents of the offensive “plantation tradition.” More likely, they are just plain tone deaf. In any case, the pair clearly were willing to seek profit from the fact that others might hold an idealized version of history and thus would love their “old plantations.”
But elsewhere in Virginia, plantation politics is alive and well, shining a bright and distasteful light on the pipeline battle over which Matt Strickler and Governor Northam are presiding. And the epicenter is Buckingham County, the geographic heart of the Commonwealth.
Variety Shade Landowners of Virginia is an organization of descendants of a slave owning family in Buckingham County who still own 1,400 acres from the former tobacco plantation of the same name. Unlike, perhaps, others, the Variety Shade descendants clearly are enamored of the false “plantation tradition” mythology. Indeed, their website notes the demise of their ancestors’ slave plantation and laments that “now nothing remains of this lovely plantation except in the memory of those who loved and admired it.”
It is doubtful that the slaves who worked the Variety Shades plantation either “loved” or “admired” the plantation. What we do know is that the freedman who worked on that plantation founded what is now an historic 85% African American community named Union Hill.
Several years ago, Dominion Energy bought 68 acres in Union Hill from the descendants of the Variety Shade plantation owners, at ten times the market rate. Dominion bought the land in order to build a massive compressor station to power the Atlantic Coast Pipeline.
Never mind that the compressor station would create respiratory and other health problems for the many elderly residents of Union Hill.
Never mind that two historic African American churches are within one mile of the proposed compressor station, an area known as the “incineration zone,” in case of an explosion.
And never mind that Governor Northam’s own 15-member Advisory Council on Environmental Justice wrote Northam on August 16, pleading with him to do something to save Union Hill. Their recommendations were unequivocal:
The Governor’s Advisory Council on Environmental Justice (ACEJ) recommends that the 401 Clean Water Act certifications for the Atlantic Coast Pipeline (ACP) and the Mountain Valley Pipeline (MVP) be rescinded immediately. Likewise ACEJ recommends that the Governor direct DEQ to suspend the permitting decision for the air permit for the Buckingham compressor station pending further review of the station’s impacts on the heath and the lives of those living in close proximity. We also recommend that a review of permitting policies and procedures take place and that the governor direct the Air Pollution Control Board, DEQ, and DMME to stay all further permits for ACP and MVP to ensure that predominately poor, indigenous, brown and/or black communities do not bear an unequal burden of environmental pollutants and life altering disruptions. These actions would ensure that environmental justice has meaningful influence in all current and future energy projects.
In making these recommendations, the Advisory Council was acting within its mandate, which was “to provide advice and recommendations to the Governor to improve equity in decision-making and improve public health in marginalized communities, among other goals listed in Executive Order 73 (EO 73) from October of 2017.” The Council’s letter on Union Hill and the pipelines was “our first formal set of environmental justice concerns to the Executive Branch since our inauguration.”
You would expect a Democratic governor, elected in 2017 with overwhelming African American support, to respond positively to such recommendations from his own Environmental Justice appointees.
You would be wrong.
Instead, Northam’s spokesperson dismissed the August 16 letter as a “draft” that had not been fully approved by the Council. When the Council met 12 days later and unanimously reaffirmed that their August 16 letter was indeed final, Northam’s office dodged again, telling the Washington Post that the Governor would “review the letter carefully and respond to the Council.”
Dominion Energy, on the other hand, made it crystal clear where it stood: “We strongly disagree with the Advisory Council’s recommendations.”
That all leaves Ralph Northam with a simple choice: which side is he on? Does he stand with his own appointees — and with Union Hill? Or does he stand with Dominion Energy?
Northam’s refusal to discuss Union Hill is nothing new. In fact, he has been stone cold silent on Union Hill for four years, despite the fact that the story has been told again and again in protest, in song, on film, on the front page of the Washington Post, and here, here, here, here and here, among many other places.
In marked contrast to his silence on Union Hill, in June, Northam directly addressed Dominion’s plans to build a different compressor station in Maryland (for a different project) that would be visible from George Washington’s Mount Vernon plantation. Northam could have mentioned the fact that that compressor station was to be situated in Accokeek, Maryland, a predominantly African American community. But he did not. Instead, he said, “If it’s going to impact their view, if it’s going to contribute to environmental detriment, then it’s something I’m concerned about.”
In any event, Northam’s statement was a game changer. One week later, Dominion surrendered and said it would move the compressor station so as not to interfere with the view from Mount Vernon.
But while Mount Vernon is now safe, Union Hill remains on the chopping block. Northam previously ignored pleas from the Virginia State Chapter of the NAACP, which in July called for a halt to all construction on both the Atlantic Coast and Mountain Valley Pipelines and drew particular attention to the environmental racism inherent in Dominion’s planned compressor station in Union Hill.
He even stayed silent when the U.S. Justice Department came to Union Hill to investigate.
Now Northam leaves his own Advisory Council twisting in the wind.
Meanwhile, something else caught Northam’s attention on August 16 — the same day that the Advisory Council on Environmental Justice issued its plea to stop the environmental racism at Union Hill and to halt construction of the Atlantic Coast and Mountain Valley Pipelines.
That same day, another executive level council was formed to deal with a much different issue: oysters.
Yes, Oysters.
On August 16, Northam announced the formation of an “Aquaculture Working Group” “to develop consensus-based recommendations to promote the sustainable growth of Virginia’s clam and oyster aquaculture industries.” Northam even attended the working group’s first meeting — also on August 16 — pledging that “my Administration is committed to working with all stakeholders to finally resolve user conflicts.”
And who did Northam appoint to lead his new Oyster Council? Secretary of Natural Resources Matt Strickler.
Yes, that Matt Strickler.
Apparently Strickler’s experience selling the “Old Plantations” came in handy.
In April 1963, Dr. Martin Luther King, Jr. wrote a letter from his jail cell in Birmingham, Alabama that would become a bedrock document of the Civil Rights Movement. Speaking to leaders who, despite good intentions, failed to speak up against injustice, King famously wrote: “We will have to repent in this generation not merely for the hateful words and actions of the bad people but for the appalling silence of the good people.”
Ralph Northam seems to have found the time and motivation to speak out on everything from the view from Mount Vernon to his views on oysters. Meanwhile, two of those closest to Northam, his Chief of Staff Clark Mercer and his Secretary of Natural Resources Matt Strickler, have demonstrated not only tone deafness but little inclination to do anything for the people of Union Hill and many other front-line communities.
Thousands of people stand to have their lives, water, land and future devastated for generations to come by these proposed pipelines. All for two massive and unnecessary fracked gas pipelines that together represent more than $10 billion in new investment in fossil fuel in Virginia.
These pipelines come at exactly the wrong time, when climate change continues apace and is becoming an existential threat to our entire planet. Also to be harmed by these pipelines: Northam’s beloved Chesapeake Bay, including, by the way, the oysters.
Northam’s silence is more than just embarrassing.
His failure to listen to his own appointees is more than just insulting.
One might say his silence is appalling.
It needs to stop now. | https://jonsokolow.medium.com/pipeline-politics-the-appalling-silence-of-virginia-governor-ralph-northam-5a5d0cef240 | ['Jonathan Sokolow'] | 2018-09-04 11:25:03.723000+00:00 | ['Politics', 'Politics And Protest', 'Environment', 'Virginia', 'Natural Gas'] |
Linear Regression…Huh? | If you're new to Artificial Intelligence, chances are you’ve heard the term “Linear Regression” being thrown around. Jeez… could they have picked a more intimidating name? Anyway, I’m here to tell you that it is in no way as complex as the name makes it out to be. Simply put, it is the equation for the slope of a line…and then some.
First of all, what is Linear Regression? This concept simply aims to answer whether a line-of-best-fit can be drawn through a scattered set of data points to determine if a linear relation lies within. Essentially: does the x (independent) variable have a direct impact on the y (dependent) variable?
When I was in 9th grade, the math course I took essentially revolved around the formula that forms the basis for much higher mathematics. This legendary equation:
y = mx + b. Here “b” is the y-intercept and “m” represents the slope of the line. We aim to find a quantitative relation between “x” and “y”.
The only major difference between this formula and the one used for linear regression is the symbol epsilon (ε). Within the context of Artificial Intelligence, this symbol is used to represent the vertical distance of any data point to the line-of-best-fit. The point of these programs is to reduce this value as much as possible across all the points, so that you end up with the most accurate line possible.
Now that you’ve hopefully gotten your head around this concept, let’s see what this looks like in code. First things first, ✨import statements✨. Let’s bring in all the libraries we will need:
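The article’s original import cell isn’t reproduced here, so the following is an assumption based on the libraries the walkthrough goes on to use (NumPy for arrays, Matplotlib for plotting, and scikit-learn for the model):

```python
import numpy as np                                  # arrays and reshaping
import matplotlib.pyplot as plt                     # plotting the data and the line
from sklearn.linear_model import LinearRegression   # the regression model itself
```

If any of these are missing, `pip install numpy matplotlib scikit-learn` pulls them all in.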
Since we’ve imported everything that we will need, let's proceed to store the data within our code. Although you may have a .csv file containing your data, I just made up some data for the sake of this example and stored it in an array:
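A minimal sketch of that step; the article’s exact numbers aren’t shown here, so these nine (x, y) pairs are made up for illustration:

```python
import numpy as np

# Hypothetical sample data: nine points with a roughly linear upward trend.
x = np.array([5, 15, 25, 35, 45, 55, 60, 70, 85])
y = np.array([15, 11, 2, 8, 25, 32, 30, 38, 43])

print(x.shape)  # (9,) -- a flat, one-dimensional array for now
```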
Now we have the data (Woohoo 🎉), but we need to reshape it into a form the model can work with. But what is shaping data anyway?
Here we see a block of data that would be of shape (3, 2). From this diagram we can see that our block is 3x2 blocks of data. However, to make it a line we can reshape the data into (6, 1). We still have the same data, but we’ve simply changed the way it is stored. So back to our example:
HOLD ON, I know the -1 doesn’t make sense…jeez. But the “-1” is used as a placeholder when we do not explicitly know how “long” our data is and want the computer to figure it out (since the next parameter is 1, and I have 9 pieces of data, it does not take a whole lotta processing power to figure out that the -1 is actually a 9, but whatever).
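To make the reshaping concrete, here is a small sketch (the variable names are mine):

```python
import numpy as np

block = np.arange(6).reshape(3, 2)  # shape (3, 2): 3 rows, 2 columns
line = block.reshape(6, 1)          # same six values, now 6 rows of 1 column
auto = block.reshape(-1, 1)         # -1: "work this dimension out for me"

print(block.shape, line.shape, auto.shape)  # (3, 2) (6, 1) (6, 1)
```

The values themselves never change; only the way they are laid out does.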
Next, I’ll make an object of the LinearRegression class and fit my data. This will essentially train my model to figure out the equation of my best fit line:
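That step might look something like this; the data here is made up for illustration, and note that scikit-learn expects the x values as a two-dimensional column, hence the reshape:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data for illustration.
x = np.array([5, 15, 25, 35, 45, 55, 60, 70, 85]).reshape(-1, 1)
y = np.array([15, 11, 2, 8, 25, 32, 30, 38, 43])

model = LinearRegression()
model.fit(x, y)  # "training": finds the slope m and intercept b of the best-fit line

print("slope m:", model.coef_[0])
print("intercept b:", model.intercept_)
```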
The final few steps are to use the prediction function to make a “y” value for each “x” value that I have. Then I will plot the original data points and draw the predicted values over them.
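A sketch of those final steps, again with made-up data. The Agg backend is forced so the script also runs on a machine without a display; in a notebook you would just call plt.show() instead of saving a file:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; not needed in a notebook
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

x = np.array([5, 15, 25, 35, 45, 55, 60, 70, 85]).reshape(-1, 1)
y = np.array([15, 11, 2, 8, 25, 32, 30, 38, 43])

model = LinearRegression().fit(x, y)
y_pred = model.predict(x)  # one predicted y for every x we have

plt.scatter(x, y, label="data")                     # the raw points
plt.plot(x, y_pred, color="red", label="best fit")  # the regression line
plt.legend()
plt.savefig("regression.png")
```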
After all this setup, you should get a result similar to this:
BOOM! A linear regression graph! Congrats on completing your first steps to becoming an A.I. developer. You have just learned the bare essentials of linear regression 🧑‍💻.
However, something is still missing…how exactly does the computer take into account the vertical distance for each point and make sure the line-of-best-fit is as accurate as possible? In future articles, I will cover the Least Square Method to first find the optimum line-of-best-fit, the R Squared Method to check how well the line fits the given set of data, and how to optimize this line using Gradient Descent. All coming soon…hopefully.
Thanks for reading this article. My name is Rohan, I am a 16-year-old high school student learning about disruptive technologies and I’ve chosen to start with A.I. To reach me, contact me with my email or through my LinkedIn. I’d be more than glad to provide any insight or to learn insights that you may have. Additionally, I would appreciate it if you could join my monthly newsletter. Until the next article 👋! | https://medium.com/swlh/linear-regression-huh-b3489f13a75a | ['Rohan Jagtap'] | 2020-12-19 09:59:36.658000+00:00 | ['Linear Regression', 'Artificial Intelligence', 'Technology'] |
Shaping the Future of Cloud Native Computing Foundation | I am happy to announce some great news for Intuit and our participation in the technology community at large. The Cloud Native Computing Foundation, the community behind Kubernetes and other related open source technologies, held an election for its Technical Oversight Committee, and I have been elected as the representative for the end-user community. With a seat on this committee, I can help shape the future of the CNCF projects and represent the end-user community that consumes these technologies. It is important to Intuit that we change the way we develop and run software as we transition to consuming the public cloud away from our private data centers. While Intuit relies heavily on AWS to provide our public cloud infrastructure, the cloud native community drives vendor-neutral innovation and standards that make possible the very best solutions and developer experiences.
Intuit has come a long way in leveraging cloud native technologies. Just in the last year alone, we have operationalized more than a hundred Kubernetes clusters that run more than 250 services in production and pre-production, advancing Intuit’s mission of powering prosperity around the world. Along the way, we have solved many issues that provide value to anyone who is deploying Kubernetes and related technologies to increase developer productivity. To this end, Intuit is committed to contributing back to open source in the CNCF community. Argoproj, developed by the Intuit Developer Platform group, is a widely used workflow engine and continuous deployment tool for Kubernetes. I hope that these will become official CNCF projects in the near future.
At Intuit, Kubernetes has served us well so far with an extensible base technology for developing, deploying and operating software at scale. Observability, monitoring, service mesh, tracing and many other areas of the cloud native landscape are well-solved by the community and the CNCF projects. Going forward, I am particularly interested in the cloud native serverless solutions that will allow developers to create and deploy serverless functions (there are many good use cases at Intuit) with the same amazing cloud native developer experience, and I hope to bring the firsthand experiences from Intuit to bear in shaping discussions at the TOC.
I will be attending a twice-monthly TOC committee meeting that is open to anyone in the community. I will attend the Linux Foundation Open Source Leadership Summit in March, where we will have our first in-person gathering of the TOC. And I will be attending KubeCon in Barcelona in May (I need to brush up on my Spanish)!
The TOC consists of 9 members responsible for leading the technical direction of the CNCF community. Here is the full roster of the TOC. I would like to congratulate the 6 new members that were just elected. | https://medium.com/intuit-engineering/jeff-brewer-intuit-elected-to-cncf-technical-oversight-committee-aea2866f07af | ['Jeff Brewer'] | 2019-02-06 20:29:00.054000+00:00 | ['Cloud Native', 'Cloud Computing', 'Infrastructure', 'Cloud', 'Digital Transformation'] |
80/20 — Six. A winning campaign strategy | Many different strategies can stand behind an excellent advertising campaign. Today, I’m going to talk about one that I’ve found to be particularly effective in the online world.
Before I dive in, allow me to state two basic assumptions.
Assumption #1: Always on
Your advertising campaign is always-on. You are always, or almost always, spending money to promote your message. Even if it’s just a minimal investment behind paid search advertising, or promoted social media posts.
Why? Because this gives you the chance to leverage ongoing learning and data to optimize your campaign. If you’re looking to boost your KPIs in a crowded media space, you need to be in-market and in front of the right people at the right time, which requires constant experimentation and adaptation.
Assumption #2: Agility
Your team is agile enough to react to the feedback from your advertising in a timely manner. Think your organization is too large to be nimble? Reconsider your team structure. Being able to ditch what doesn’t work and improve on what does is key to success.
80/20
Now for the 80/20 strategy. It’s a framework for allocating budget across two buckets of channels: 80% to the first and 20% to the second. Depending on your product and your target market, you should be able to easily determine which channel mix will tell your story most effectively to your target audience:
Sure thing budget. Say you have determined that being on Google’s SERP (search engine results page) and on Facebook’s timeline are good places to find people who are interested in your product or service, based on past experience or your audience profile. You allocate about 80% of your media budget to those channels.
Experiments budget. In any campaign you allocate a certain budget towards experimentation. This might be towards new channels, creative formats, or new approaches. Here I’m suggesting a 20% budget allocation.
Now, how often should you experiment?
Six
That’s where the six comes in. I recommend a six-week cycle of experimentation. With digital media specifically, six weeks is a good timeframe to capture feedback and incorporate changes.
The fine print
This framework comes with a few caveats:
Your media mix may not allow for an 80–20 split. Some channels are more expensive than others, so you may have to reserve a small chunk of the pie for experimentation.
Six weeks may not be enough time to collect data — if your geographic reach is small you may not be able to generate critical mass in six weeks, and need to experiment for longer (although if you’re testing online advertising for more than 10 weeks, it’s not really a test any longer).
Your internal business reporting cycle may conflict with this model.
The critical takeaways are:
Use the bulk of your advertising dollars for what you know are the sure things.
Always use a small percentage to try new tactics.
Repeatedly test and refine the new tactics, moving them into the ‘sure thing’ bucket as you see success.
How do you manage your advertising campaigns?
Do you set aside budget dollars for experimentation and improvement? | https://medium.com/empathyinc/80-20-six-27c0d65d15c3 | ['Mo Dezyanian'] | 2018-10-22 13:46:17.611000+00:00 | ['Digital Marketing', 'Marketing', 'Advertising'] |
The Revolution of the Soul | Perhaps the first known depiction of the crucifixion — probably from around the year 200 — is a blasphemous graffito that was scratched on a wall in Rome. It shows Jesus of Nazareth on the cross with a donkey’s head, and a cartoonish figure praying beneath him. The inscription reads: “Alexamenos worships his god”. Alexamenos is the butt of a joke, or perhaps the victim of a deadly accusation.
Inverted Values
Christian values were an inversion of Roman values. Christian virtues included embracing poverty, altruism, and pacifism (“turning the other cheek”); these would have been absurd to the Romans. To be a good Roman was to understand the importance of authority, to be loyal to the state and the emperor, to strive for wealth and pay taxes to fuel the Roman war machine.
For the early church, there were no specially-built churches and certainly none of the riches that you’d find in western churches today. Christians were mostly drawn from the poor and the marginalised: slaves, immigrants and the lowest classes of Roman citizens.
Some rich Romans, like Perpetua, converted to Christianity. Archeologists have found burial sites with Christian symbology. But the archaeological evidence is always going to be weighted to the rich, who owned property and therefore leave a bigger archaeological footprint. We can assume that most Christians were poor.
Despite their poverty, the history of early Christianity is replete with astonishing stories of selflessness. Christians gave what alms they could, and shared food and shelter.
In Roman society, where there were very little means of contraception or abortion, it was acceptable (though not encouraged) to “expose” unwanted newborn children — in effect, leaving them in the wild or in rubbish tips to be dismembered by wild animals.
Christians would collect exposed babies and raise them as their own as an act of altruism. To Christians, human life has a supreme dignity that only the choice to sin can tarnish. To Romans, life was cheap.
The atheist philosopher Friedrich Nietzsche, writing in the nineteenth century, described Christian ethics as “slave morality”. To Nietzsche, Christianity was a successful revolution for the slaves and the poor. The resentment towards the ruling Roman classes, who followed the principles of a pagan “noble morality”, manifested itself in the moral system of Christianity, which slowly supplanted paganism.
For Nietzsche, Christian morals were essentially “sour-grapes” made into a system of values. Where the Romans prized sensuousness and power, Christianity prized bodily-denial and pity. A religion that rose from below had inverted noble Roman values.
St Paul is said to have been struck blind on his way to persecute Christians in Damascus. When he regained his sight, he converted to Christianity himself. Painting: Conversion on the Road to Damascus by Michelangelo Caravaggio, 1601 (source: Wikipedia)
Saint Paul the Revolutionary
Early Christianity was initially a cult only open to Jews. It was a new and revolutionary understanding of the providence of Yahweh, the one and only God of the Israelites. Christians claimed that a working-class preacher from Galilee, a backwater province in Roman-conquered Judea, was the Jewish messiah — “anointed one” — prophesised to bring independence to the Jewish people and rule by the authority of God.
This was a controversial and deadly claim to make. Jesus himself had been executed for sedition in the most humiliating way the Romans had at their disposal — crucifixion. How could a dead man — a dead criminal — be the savior of the Jewish people?
One of the Jews who had converted to Christianity was well-heeled and had the privilege — rare among his people— of being a Roman citizen. Paul (formerly Saul) of Tarsus took the revolution a step further.
Saint Paul was formerly a persecutor of Christians but claimed to have converted when he saw a vision of Christ, who commanded him to spread Christianity.
Paul asserted that anybody in the world could be a Christian, not just Jews. Jesus Christ was not just the savior of Jews, Paul stated, but the savior of everybody in the world that believed in His message.
This doesn’t seem revolutionary now, but at the time it was astonishing. Saint Paul, as we now know him, had the idea that people were the same. No matter how rich you were, where you were from or what culture you were a part of, you were the same in the eyes of God as everyone else.
In an ancient world obsessed with particularities and social hierarchies, Saint Paul evoked not only generality but the universal.
This radical leap of reason reveals the philosophical heritage of Christianity, which originates with the ancient Greeks. While more materialist Greek philosophies like Epicureanism and Stoicism were popular among the Romans at the time of St Paul, the Christians took inspiration from Platonism.
Plato was a follower of Socrates, and both philosophers believed in the idea that a soul within us would survive our physical death. The soul (psychē in Greek) was not a new idea; there is evidence of belief in the soul dating back to ancient Egypt, and it was likely pervasive in prehistoric times.
But the immortality of the human soul was by no means as common an idea in the ancient world as it is now. Christianity absorbed the idea of the eternally living soul from the Hellenistic (Greek-speaking) world from which it emerged.
There’s no explicit evidence that Platonism inspired Jesus himself or the earliest Christians, but Platonic ideas were pervasive in the Roman Empire from Judea to Britain. Because of the conquests of Alexander the Great, first century B.C.E. Judaism was heavily influenced by Greek culture and customs. Historians refer to “Hellenistic Judaism” as the dominant spiritual and intellectual climate among Jews at the time.
Whether or not the soul lived on after death was a matter of debate between the Sadducee and Pharisee sects of Judaism. Jesus of Nazareth preached that the soul would survive death and that those who followed him would live on in an eternal “Kingdom of God”.
It was the idea of the soul, unconstrained by material things, that gave Saint Paul the notion that all people are the same. Our particularities such as our status and our appearance, health, and origin are all material: just a messy vesture on a pure soul. He wrote:
“We now have this light shining in our hearts, but we ourselves are like fragile clay jars containing this great treasure. This makes it clear that our great power is from God, not from ourselves.”
In philosophical terms, this is known as dualism: the idea that the body (all your purely physical traits) and your mind/spirit/soul (your personality, your reason and judgement) are separate.
When we strip away all those things, when the soul is considered apart from the body, Saint Paul believed it will be the same as all other souls.
He wrote in a letter to the Galatian Christians:
“There is neither Jew nor Greek, there is neither slave nor free man,
there is neither male nor female; for you are all one in Christ Jesus.”
We would be judged by God not by status or wealth but the decisions we make. Even private thoughts could be judged by God. Morality was disembodied. Those who were good would live in an eternal paradise, and those who were bad would be tormented forever.
Socrates reasoned that the soul is alive before birth and would live on after death. Plato believed that the soul transmigrates into other bodies after death. Both believed in a perfect world of pure forms that was transcendent over (but existing parallel to) our imperfect world.
Saint Paul had a different idea. He wrote that a perfect world was coming to triumph over the corrupted, imperfect world and that those who did good works would be resurrected in that world. A person’s soul is not alive before the person but has the chance to be resurrected on a perfect Earth after the last judgement.
Saint Paul’s idea was the catalyst Christianity needed to survive. His vision of a universal religion open to anybody to convert to allowed Christianity the flexibility to grow fast throughout the well connected Roman Empire.
That idea, combined with the promise of a better life after death (on the condition of believing), and the warning that the Roman Empire would soon be shattered, meant that Christianity spread like wildfire, particularly among the poor and downtrodden. Christian churches sprang up all over the Mediterranean preaching the coming end of the world.
Official Religion
By the fourth century, the religion had converted many in the ranks of the Roman army. Constantine, a pagan emperor fighting a civil war, was supposedly inspired by a vision and a dream on the eve of the decisive Battle of Milvian Bridge in 312. The sun, he claimed, formed a cross shape and the words “in this sign you will conquer” came to him.
He ordered his soldiers to paint the Chi-Rho sign (the first two Greek letters of “Christ” superimposed) on their shields. With Constantine’s victory secured, Christianity had been validated. The Edict of Milan followed in 313, which promised that Christians would be protected from persecution by pagans.
Whether Constantine had a real vision, or that he simply took a pragmatic measure to motivate his Christian troops, it seemed that the religion had finally become tolerated as equal to other religions.
Constantine himself only converted to the religion close to the end of his life. Every Roman emperor after him (apart from Julian, who lasted a mere two years on the throne) followed him into the faith. Christianity became the official religion of the Roman Empire.
Roman civic buildings — basilicas — where many Christians had previously been sentenced to death, were converted into churches. Pagan holidays morphed over time into Christian holidays: Saturnalia, Rome’s sacred gift-giving festival held at the time of the winter solstice, became Christmas. Paganism slowly died out, supplanted by the monotheistic religion.
The global march of Christianity was underway. While the religion faltered in the Medieval period as Islam ascended, the economic and military rise of post-Renaissance Europe fuelled the religion’s growth. Missionaries spread the Gospel — meaning “good news” in Greek — to all corners of the world as European nations built their empires.
Christianity has simply become the norm, but in the process has lost the most radical aspects of early Christianity. The “official” status of Christian churches, with their vast reserves of property and money, sit uncomfortably with the revolutionary values espoused by Jesus of Nazareth and the early Christians.
The belief in the immortal human soul is common in our age but does not inspire the same levels of reckless self-sacrifice that the early Christians embraced, a recklessness that astonished and frightened Roman society.
Despite the officialdom of Christianity in the west, it remains the most persecuted religion in the world in real numbers. While Christianity is safely practiced in Europe and America, there are thousands of Christians in jail around the world. Those people are both condemned and comforted by their belief in their immortal soul.
Thank you for reading. I hope you learned something new. | https://medium.com/the-sophist/the-revolution-of-the-soul-dac7ca5753d7 | ['Steven Gambardella'] | 2020-10-18 18:35:07.437000+00:00 | ['Christianity', 'Philosophy', 'History', 'Self', 'Psychology'] |
Is Social Media Changing How We Write? | originally published in Writer’s Digest
We’ve all heard it said that readers have increasingly short attention spans. Spending time with a print medium isn’t as engaging as watching things move or interacting with content on a screen. Does this mean that we should write stories that can be eaten up in one sitting, or novels with chapters that are short, punchy, to the point, and don’t wander too much?
Have we been infected by Twitter? Though you can now write a message 280 characters long, double the initial limit of 140 characters, you can’t say a lot in that space. You have to be very brief, totally succinct. And Instagram? You can’t post anything without posting a photograph, first. On that platform, the image is the focal point. Any text almost an afterthought.
Book reviews on Instagram are really collected impressions. There’s not a lot of depth. If you’ve ever read the pieces in London Review of Books, the difference is startling. Granted, the latter is a print medium, and those take their time quite a bit more, but London’s essays are graduate school caliber. In depth and thoroughly researched, each one is like a crash course in the subjects and themes of the books reviewed.
To be fair, we Yanks have good print reviews, too. I read the New York Times Book Review every Sunday. My hunger for detail is well-satisfied there, to be sure. What’s featured in The New Yorker can be very compelling. But, I digress by speaking about the experience on the other side of the desk as it were, that of reader, and possible book buyer.
So, back to writing. I’ll be honest. Over the last several years, I’ve noticed that my sentences have gotten shorter. Not always, certainly not in a paragraph or segment that is expanding on someone’s memory, drawing the reader further and further back in time. But, in general, I’ve gotten pretty concise.
Has my own attention span gotten short? I confess, I do watch quite a bit of television. The dark dramas I prefer have men and women of few words. The interior spaces, into which the viewer propels herself, are more conversant, but nothing compared to The Age of Innocence, directed by Martin Scorsese. Granted, that film drew directly from Edith Wharton’s novel of the same name, and one would be hard-pressed to trim her down. Today’s fare seems to be heavy on imagery, unless you’ve put on a rerun of Upstairs Downstairs, or Downton Abbey. Since both series are British, is this an across-the-pond trait?
My father, a professor of English at Cornell University, said what he appreciated about the British most was how well they used their language. Their authors aren’t afraid of the esoteric word when it suits. Perhaps we Americans are still hung up on plain-speaking, channeling our prairie homesteading forebears, assuming we have them. Maybe we don’t want to sound fancy, or snobbish. I, for one, use many multi-syllable words because I love the rhythm they bring to the line.
Another point I could make is that the two television series I referenced are not only British, but set in a time before television, and in the early episodes of both, before radio. People had to talk to each other after dinner because there wasn’t a lot else to do. Does this mean that we’ve lost the art of conversation?
We talk to each other a lot over Facebook, but unless one is in a chat session, it’s a matter of making a statement and waiting a while for a reply. Even in Messenger, we’re relying only on words, nothing else. No background noise, no imagery. Everything is by necessity both truncated and stripped down.
What if, in relying so much on social media platforms not only to maintain relationships but to promote our work, we’ve become impatient during the genesis of it? Creating will always take a lot of time, but perhaps now we want it to appear dashed off, rather than deeply probed, considered, and weighed.
I don’t know, but I think about it a lot as I push my novels out into the world. My ideal reader would be someone with a lot of gracious time on her hands, who loves to sit and allow herself to become lost in my fictional world, who doesn’t need to look at the clock all the time.
There are ideals, and there is reality. I just keep wondering, considering, comparing notes with other writers and readers. And, to that end, I invite you to share your thoughts. | https://annelparrish.medium.com/is-social-media-changing-how-we-write-8d2f5bb01332 | ['Anne Leigh Parrish'] | 2018-09-11 15:16:31.035000+00:00 | ['Attention Span', 'Social Media', 'Writing'] |
The Literally Literary Weekly Update #5 | One Last Note
Our most-read story was read only 150 times. We have 27,612 followers. What this tells us is that algorithms and user preferences play a huge role in what gets seen and that publications, in and of themselves, don’t carry a ton of weight.
Most people don’t read publications on Medium, they read writers. That’s why it is important for all writers on the platform to be consistent with their frequency and quality.
We at Literally Literary are doing our part to make sure your works are published in a timely manner and that we are only publishing high-level submissions. However, this is a partnership: one between publication, writer, and reader. The more we continue to work together as a community, the more our views will grow back to where they should be.
Slack Clone with React | Semantic UI | GraphQL | PostgreSQL (PART 1)
Introduction
Hey all, this project will be a series. I don’t know how long the series will be as I’m still working on the project as I write these articles. I’ve wanted to build a chat app for quite some time. I came across an older tutorial (3 years ago) by Ben Awad (awesome YouTuber) doing a Slack clone, which was perfect for me, so I’m following his approaches and making mine an updated version (a lot has changed in 3 years).
I wanted to practice building more complex projects. I’m learning a lot so far, like working with the PostgreSQL database, using Sequelize for the ORM, and connecting it with GraphQL. So I’m hoping you guys can learn something too :) But that’s enough of the intro, let’s dive into the first part.
Installation for Database
Before we get to the good stuff, we need to install the things we need for this project. I’ll be using a Mac throughout this series.
1. Nodejs of course :) (if you haven’t already => https://nodejs.org/en/download/)
2. PostgreSQL (for Windows and Mac: https://www.postgresql.org/download/)
Installation videos
Mac video: https://www.youtube.com/watch?v=EZAa0LSxPPU
Windows video: https://www.youtube.com/watch?v=RAFZleZYxsc
3. Postico (https://eggerapps.at/postico/) *optional* if you’re more visual like me :) this is a GUI for your database. (for Mac)
That is all you need to get the database portion set up using Postgres (not that much). In the next one, we’ll work on folder setup and installing the packages we need for the backend. Until then folks :)
Positionality and Identity | Positionality and Identity
Becoming me!
I aim to reflect on and reflexively explore factors within my personal and professional development which have influenced not only who I am, but also my positionality and world view (Takacs, 2003). In doing this, I intend to draw forth the suppositions and presumptions influencing my subjectivity and core values.
Photo by Jason Zhao on Unsplash
Having been born in and spending all of my primary school years in newly independent Zimbabwe, I had a middle-class upbringing, even attending private school. Even from a young age, the significant economic divide within the nation was evident; there was a disproportionate number of young people to whom the quality of education I had received was inaccessible (UNESCO Institute for Statistics, 2018). This bred in me an appreciation of the role education could play in keeping me from becoming a victim of the socio-economic climate, and allowing me to be the one to shape my future. The teaching within my primary school seemed didactic and traditional, contextually speaking. Whilst differing from the traditional approaches to teaching in the UK, it provided a curriculum which drew on the economic, political and cultural needs of the society within which it was set (Higgs, 2012). Although the government had initially made a significant investment in education during this time, rising from 12 per cent of the nation’s gross domestic product in 1990 to 44 per cent by 1994, low attendance levels of 50 per cent at secondary school nationally fuelled an understanding that education had become a commodity for those who could afford it and was not a right for all. Academic success within this society, which had commodified education, was significantly revered. Corporal punishment, as a socially acceptable method of discipline, was utilised within schools to reinforce the importance of educational compliance. There was a bid to instil behaviour and educational prowess which could be deemed socially acceptable; the regimental approaches used by educators here seemed to align significantly with Foucauldian concepts of thought (Foucault, 2012).
Arguably, my primary school experience was a crucial factor in actively shaping my ontological security whilst seemingly being at odds with the nature of physical security (Mitzen, 2006). Reflecting on the use of negative reinforcement methods for discipline and getting students to learn, factual recall was highly regarded, though seemingly this was not the case for knowledge understanding and application. This was particularly noticeable when deadlines came around, as there was an understanding that missing deadlines would mean sanctions. The relationship between students and teachers seemed unilateral, aligning with the banking concept in which Freire (1970) explains professional authority as being mistakenly taken as the authority of knowledge. Retrospectively, the power dynamic above suggested the teacher to be an oppressor, who would impose his worldview on the students, further utilising corporal punishment to exert conformity. Whilst the primary schooling and its use of corporal punishment did not initially instil an intrinsic love for learning and education, had they approached the teaching and discipline differently, I may have developed a passion for learning much earlier in my life.
When my family relocated to England in 2001, this profoundly affected my epistemology, in that it provided a contrasting educational and socio-economic context to that which I had experienced to this point. During 2001, black Africans made up a mere 0.8 per cent of the United Kingdom’s (UK) population, and Bhattacharyya, Ison and Blair (2003) outline the poor academic attainment of black African males during these times. My academic achievements, once I had started education in the UK, differed from the findings of their study. While I had barely been above average as a primary school student, I found myself one of the highest attainers during my first few years at the West Midlands secondary school I attended. A growing passion for sports overshadowed my academic success; this came particularly as I had never really had the opportunity to pursue sport when I was still back home.
Photo by Alyssa Ledesma on Unsplash
My foci as I developed as a teen started shifting towards sports and my social interactions (social acceptance). These passions had a somewhat detrimental effect on my education. I believed I had been successfully attaining acceptable grades; this came in spite of neglecting studies to focus on other priorities. I feel that individuals such as Osbourne (2002) articulated quite well the identity struggles young black males had been facing within the educational system. Osbourne (2002) had shown how, like me, many other young black males had become disidentified with the educational system though they identified well with sport. My perception at the time was that the society within which I was living held a differing divide to that of my preteens: political instability had been replaced by an ethnocultural divide.
Tensions within my life at this time had been shifting away from education towards sports. It took the completion of my GCSEs to realise that whilst the grades I had attained were significantly above the national average, I had not performed to my full potential. This epiphany prompted me to focus more on my education. The attainment of good grades stopped being the main aim of my studies; rather, it became the development of myself as an individual. This aligned with the notion portrayed by Maslow (1954) regarding the importance of basic gratification in self-actualisation.
Social research, and in this case educational research, utilises reflective practice as an essential facet. Inquirers such as Dewey, Schön, Brookfield and Freire have spent some time exploring these notions. Reflective practice provides the possibility not only to learn how a role or task could be done better based on past and present events, but also a means by which to explore where I may position myself reflexively. Further to his discussions regarding the contrast between purposeful reflection and causal thought, Dewey (1938) explains the role experience plays in shaping my understanding of the world. The suggestion from Dewey is that while I must take time to reflect actively in and on action, I can never know truth, instead only interpretations of my experiences. In alignment with this, Lynch (2017, quoted in Shieber, J. H., 2019) expresses,
‘There just is no way of escaping your perspective or biases. Every time you try to get outside of your own perspective, you just get more information filtered through your own perspective.’ (Pg. 14).
Understanding that it would be particularly challenging to unburden myself of the biases which I hold, an essential factor in embarking on any research effort is a greater understanding of what these biases might be. The purpose, as mentioned above, is not only for myself but also for those who would explore research which I may produce, so that they have a clear understanding of the influences underpinning my thoughts during the research. The development of my ontological and epistemological position hinged initially on my interpretation of experiences which had transpired during my early childhood, especially those pertaining to my social and educational upbringing.
Conclusion
Photo by Victor Garcia on Unsplash
The path into teaching was not a straightforward one for me; however, it was one which depicted a development in my positioning and the values I regarded highly. Though I had studied a degree in architectural visualisation, sport had drawn me in as a socio-economic bridge. I initially embarked on working in varying social and economic settings: as a sports coach in local schools; a project worker in a gang deterrent unit; and a safeguarding ambassador, with a particular focus on gangs and youth violence, before finally becoming a teacher. This decision had been inspired by my own experiences within the educational system, and I have a great deal of appreciation for the incalculable influence that my educational experience has played on my life. There were vast amounts of skills and a great depth of understanding which I gained while at school and have gone on to use throughout life. In a similar fashion to that expressed by Osterman and Kottkamp (1993), in order to develop a higher level of self-awareness within my teaching capacity, I have had to engage in reflective practice. In a bid to aid young people to better understand, as well as overcome, social and cultural barriers, I have strived to be a mentor to them. Having risen through some adversity, I believe I provide, for some students, a source of inspiration. I work towards supporting students in unlocking the potential within their young minds to have a love for learning through a creative and understanding outlook on the world.
My experiences and the exploration of literature have consolidated my belief that the works of some theorists resonate with me better than others. Particularly Foucault (2012), who challenges us to consider how power operates through dominant ideologies and how truth is often significantly influenced by this. Regimes of truth, as expressed in Foucauldian philosophy, concern the implications of authority in determining what is deemed to be ‘truth’. There are several dominant ideologies in education. These belief systems influence a range of practices, including those of policymakers, governing bodies, institutional leadership and educators. The dominant ideologies influence an assortment of elements within education, including policy, pedagogical approach and practical approaches. I believe that, as with other industries, education is not immune to ideologies and policymakers. In my own settings, I have observed pedagogical approaches and ‘best practice’ being blindly implemented without any exploration into how they fit our unique sector. I would, therefore, like to conduct an investigation into how technologies available within my setting might be utilised to mitigate barriers to learning.
As part of my proposed study, I intend to employ a range of methodological viewpoints. This will be done in order to explore the complexities found in schools, along with those surrounding technology use. The study will find its grounding in notions surrounding digital sociology, which approaches technologies as being problematic, especially: the potential imbalances which may be influenced by power differentials; ideologies advocating the use of digital technologies in education as a precursor to future societal and technological developments; and the role the ‘human experience’ plays in shaping schools and digital technologies (Selwyn, Nemorin, Bulfin and Johnson, 2016). The study will seek to draw meaning from the experience of the participant, thereby utilising socio-cultural tenets (Asch, 1952; Vygotsky, 1978). Methodological approaches that may be used within this research will be those best cohering with connectivism, along with interpretivism or transformative theories (Siemens, 2004). The study may utilise transformative worldview principles in order to draw on the thread of political entanglement through which the research may be influenced (Creswell, 2014).
I have, through this paper, attempted to express the notion that all knowledge is underpinned by a set of beliefs through which it may be demonstrated during the enquiry. Interestingly, as with other social research, we are all influenced by experiences which shape our values and beliefs. Therefore, articulating your positioning prior to the enquiry helps in inferring the influence your viewpoint will have on its findings. Shieber posits the notion that
Predicting House Prices With Machine Learning | Predicting House Prices With Machine Learning
If you’re going to sell a house, you need to know what price tag to put on it. For this purpose, I wrote a regression algorithm to predict home prices.
Introduction
The average sales price of new homes sold in the U.S. is US$388,000. The house price depends mainly on location, size, and condition … these factors influence a home’s value.
The goal of this article is to predict house prices using two basic machine learning models, Linear Regression and Random Forest. We will mainly look at data exploration and data cleaning. I used data from Kaggle with 79 explanatory variables describing every aspect of residential homes, such as area, space, materials used …
Business Understanding
We are interested in answering the following questions:

Q1: Get an overview of our target variable and find out what distribution it follows.

Q2: What are the most important variables for our target?

Q3: Which model gives us the best result?

Q4: Can we improve the accuracy of the model?
Data Cleaning and Exploration
First, we want to see if we have missing values in our data.
It looks like we have a few columns with a lot of missing values.
From the above, we can safely drop the first 5 columns since they have approximately 50% missing values.
Since we don't know much about our data, we can replace the missing values in our categorical variables with the mode. Similarly, for the continuous variables, we will replace NaN with the mean.
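To make the idea concrete, here is a minimal, hypothetical sketch of that imputation rule using only the standard library (the column names and values are invented for illustration; the article's actual code presumably operates on the full Kaggle dataframe):

```python
from statistics import mean, mode

def impute(column, kind):
    """Fill missing entries (None): mode for categorical columns, mean for continuous ones."""
    present = [v for v in column if v is not None]
    fill = mode(present) if kind == "categorical" else mean(present)
    return [fill if v is None else v for v in column]

# Toy columns standing in for the real data
garage_type = impute(["Attchd", None, "Detchd", "Attchd"], "categorical")
lot_area = impute([8450.0, 9600.0, None, 11250.0], "continuous")
print(garage_type)  # → ['Attchd', 'Attchd', 'Detchd', 'Attchd']
```

With pandas, the same idea collapses to `df[col].fillna(df[col].mode()[0])` for categorical columns and `df[col].fillna(df[col].mean())` for continuous ones.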
Explore data
We want to get an overview of our target variable and find out what distribution it follows.
As we see, the target variable SalePrice is not normally distributed.
This can reduce the performance of ML regression models because some of them assume a normal distribution.
We will apply a log transformation; the resulting distribution looks much better.
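Why the log helps can be seen by measuring skewness before and after. The sketch below uses made-up prices and only the standard library (on the real data you would compute this with pandas' `Series.skew()` or scipy):

```python
import math
from statistics import mean, stdev

def skewness(xs):
    """Sample skewness: 0 means symmetric, large positive means a long right tail."""
    m, s, n = mean(xs), stdev(xs), len(xs)
    return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in xs)

prices = [34_900, 87_000, 129_500, 163_000, 214_000, 335_000, 755_000]
logged = [math.log(p) for p in prices]

print(round(skewness(prices), 2))  # strongly positive: raw prices are right-skewed
print(round(skewness(logged), 2))  # near 0: roughly symmetric after the log transform
```

A right-skewed target inflates the influence of a few expensive houses on a least-squares fit; training on `log(SalePrice)` and exponentiating the predictions sidesteps that.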
Q2: What are the most important variables for our target?
Our target variable is SalePrice, so we want to see which variables have a strong relationship with our response.
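A common first pass is a Pearson correlation check against SalePrice. Here is a hypothetical, stdlib-only sketch with invented values (in practice, `df.corr()['SalePrice'].sort_values()` in pandas does this across all numeric columns at once):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation: +1 = perfect positive linear relationship, 0 = none."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented quality ratings and sale prices for illustration
overall_qual = [5, 6, 7, 7, 8, 9]
sale_price = [118_000, 140_000, 175_000, 186_000, 230_000, 290_000]
print(round(pearson(overall_qual, sale_price), 2))  # high positive correlation
```

Variables with correlations near +1 or -1 are the strongest candidate predictors; ones near 0 can usually be dropped without hurting the model.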
About Me — Robert Trakofler. Nice to meet you | A torchier I made from unsold lamps by Robert Trakofler
antiques into other objects. I repurpose furniture into more practical uses. I have even made a 15-foot mobile out of antique meat hooks and broken mannequins, which I suspended from the ceiling of the old soda pop factory here in Pittsburgh that is my store, called Zenith.
I don’t waste anything. All of the leftover food I give away at the end of the week, and I give all of my food preparation clippings to a worm farm and compost facility. I recycle what I don’t use in my projects at one of my favorite places, the Warhola scrap yard (Andy Warhol’s uncle’s place), so you might guess at this point that I am very concerned about the environment.
Silent Scream by Robert Trakofler
As a person who has neurological issues and experienced rape as a young man, I am also passionate about and write about human rights, equality, healing and survival. I have been on the other side of life: I had a drinking problem, was homeless and was even arrested on a few occasions. Artful expression for me (and many others) was indeed a way to cope and gain a better understanding through introspection. It helped me to survive and transcend many difficulties and obstacles, and to appreciate, even revel in, the beauty that surrounds us all. It was these revelations that sparked the inspiration for me to want to share them in my art.
Robert Trakofler
I often say that at the end of every work of art, be it a poem, a painting, a photograph or a written article, is a hidden postscript that says… Take this, make it yours… and then take it further! It is in this spirit that I write and wish to share with you my humble scrawls in the hope that somehow I can touch a witness’s heart and mind to a grand furtherance…
This is a song I worked on…
An example of my poetry…
I often close my writings with this statement… A poem or any form of artistic expression is nothing without a witness, and for yours… I am always grateful. Thank you for reading!
No Contact: The Brutal Way To End Heartbreak | No Contact: The Brutal Way To End Heartbreak
Love addiction is rough.
Photo by Sage Friedman on Unsplash
I’m shocked when people comment on how they appreciate and follow my shitshow of a love life. Truthfully, I wish I could write about political foreign policy or editing tips in Google Docs. But we write about what we know best, and I know my internal rollercoaster quite well right now.
Everything went to hell last week after I had an effing dream about Jon, a guy I fell in love with while I was married.
We ended things shortly after the pandemic hit because we knew there was no chance of success in any relationship until we had divorced our spouses and had a few flings. Well, he immediately started a new, serious relationship before even moving out of his marital home or filing for divorce. That’s his choice, but it was a rough blow, worsened by the few times we still hooked up or texted.
After the traumatic dream and a necessary-but-painful phone call, I’m vowing to finally stick to the No Contact Rule. Granted, I only broke the rule once in 6 months (the rest of the times were all him), but I’m the one who did it last. If we communicate ever again, it will be him. Not me.
No Contact is the human equivalent of drug abstinence. The withdrawals from a relationship mimic the effects of heroin. You can’t do a little bit of heroin to take the edge off. It’s all or nothing. Since Jon was the first person I fell in love with for over a decade, I experienced (and still am experiencing) heartbreak, unlike anything I’ve felt since I was in my twenties.
I did a lot of research to understand the why of the pain when we ended things but I still hurt months later. Dudes fall for me. I don’t fall for dudes. Genuinely missing a person and the relationship is foreign to me. Even when I separated and moved out from my husband 11 years ago, I didn’t feel this.
I will never understand how anyone can be in love with two people at once. I understand loving, but not being in love with two people. I go balls-in when I’m in love; there’s no chance I could fall in love with someone else at the same time.
I learned a lot about love addiction. While I’m not codependent, everyone experiences love addiction to a certain degree. Unfortunately, like a drug, withdrawals make you go out-of-your-mind berserk. Like straight up, batshit crazy, crying in pain like you’ve been stabbed but your body refuses to go numb.
Surprisingly, I also learned that I’m a hell of a lot stronger than I’ve ever been. Perhaps because of all the changes I’ve made in 2020, I’m able to rationalize the pain. Even when it hits me out of the blue and I start crying, there’s another part of me that can step back and say, “you’re grieving the dream of you two. While you may think you’ll never feel that way again, you will. Trust the process.”
I think maintaining a level of rational thought, no matter how tiny, is critical after a breakup. I’ve had quite a few epiphanies about him and our relationship. These were things that I wouldn’t have seen had we stayed together; things that would have hurt me down the road. Clinging to rational thought allows you to separate the dream from reality.
In my research, I also learned that the one who focuses on working on themselves long after a breakup is usually the one who was most loyal. Also, women tend to focus on reassessing themselves and healing whereas guys are apt to just move on without processing anything. For them, their patterns keep repeating.
This year, especially in the last six months, I’ve transformed in ways I didn’t think possible.
I’m like the latest version of Tony Stark’s armor. While his first Iron Man suit was impressive, each iteration makes him more powerful and adept to take on adversity. This analogy not only shows that I’ve watched way too many Marvel movies during quarantine, but that I never gave myself credit in the past for being a pretty good model of a human.
My knee jerk reaction was to start dating everyone and anyone right away. Being on a fetish site, I have an endless list of guys to bang at my disposal. I had all sorts of threesomes and crazy shit lined up. Plus a lot of dinner dates.
Until I realized: I don’t want any of that. I need to stop doing what seems like a solution on the outside. Deep down, I believed the quicker I got tons of post-marriage dating and fucking done, then Jon and I could be together. Or at least, I could move on as quickly as he did.
It’s not a competition. This is me running my own marathon where I make up the rules and selectively choose who runs with me — if anyone at all.
I bailed on all the prior plans with other guys (coronavirus is a great excuse). It’s been a relief to focus on what I want instead of pushing myself outside of my comfort zone for the sake of getting over someone.
It’s time to do the tough work the right way. The healthy way. I’m not burying the pain or redirecting it elsewhere. Time to suck it the fuck up and face it head-on. If I don’t, I risk hurting others.
Only once have I immediately rebounded after a relationship and it was downright awful how it impacted the other guy. In turn, my rebound relationship was filled with four years of drama because I didn’t take the time to reassess and regroup before moving on with another long-term love.
My old habits would have had me playing the past with Jon on loop. And love addiction would have only played the good parts to make me hurt. Love addiction would also have played the bad parts to also make me hurt. This is where the No Contact order comes in.
No Contact isn’t just not contacting them as the name implies. This means nuking all texts, emails, and anything else you might have. I had a few things that were reminders of him; I tossed them out. I almost got rid of a particular dress that he ripped taking off of me before sex; then I remembered how much I like that dress and I’m not wasting a perfectly good afternoon date dress that another guy would appreciate.
I hated deleting Jon’s texts and emails. I even forwarded him one before I deleted it from my inbox and sent folder. The trick is to do it all when you’re in a frenzied mood. Ironically, it’s when you’re most crazy is when deleting things is easiest because you don’t second guess anything.
I’m finally beginning a true No Contact endeavor. While there will always be a part of me that wishes the phone notification is a text from him, I’m assuming by January 2021 I’ll be a blip in his memory. I don’t say that with self-pity. It’s just the nature of what happens when you fall in love with someone else.
This is my accountability. I’m on Day 2 of For-Real-This-Time-I’m-Not-Going-To-Contact-Him No Contact. There’s the edgy shakes, the bouncing of the knee, and the moments where I’m not breathing. I blame the dream for ruining my No Contact because it was so vividly real. Prior to that, I was doing quite well.
For anyone else going through this, here are some nuggets of wisdom I’ve learned in my research. They don’t all apply fully to my situation but I felt like putting them here as a reminder to anyone else who can use them.
It wouldn’t have gone any other way. You can replay something in your mind in different ways but the results would have been the same.
There was a time when you didn’t even know that person. And you were just fine. They weren’t even a thought in your mind.
If you’re not their “person”, then you’re just not their person. It’s not that you’re not prettier, funnier, or more engaging…it’s just that you’re not their person. You are not their person if they chose to walk away. In my case, it’s not so much the walking away that’s an issue. It’s how he could fall in love with someone else while still engaging in behaviors with me.
People who move from relationship to relationship are extremely insecure or scared to be alone and can’t handle it. Or, they’re narcissists who don’t think they need to reflect and change; they jump into the next relationship thinking they’re perfect.
Sometimes you need to let that shit hurt so you remember what it felt like when you pulled yourself up off the fucking ground.
“You’re going to miss the person you created them in your head to be. And you’re afraid to let go not because you’re going to lose them but because you’re going to lose that dream of them turning into Prince Charming somewhere down the line. You fell for the potential of what you wanted them to achieve. If you look back, you’ll realize that they were so bad that when they were doing the bare minimum you were jumping for joy, thinking they were going above and beyond. But that’s okay because once you separate the dream from the reality, that’s when you can fully let go of that person.” (This is verbatim from Katie Florence, the full video is hilarious.)
He may love you. He probably does. He probably thinks about you all the time. But that isn’t what matters. What matters is what he’s doing about it, and what he’s doing about it is nothing. And if he’s doing nothing, you most certainly shouldn’t be doing anything. You deserve someone who goes out of their way to make it obvious that they want you in their life. Straight forward healthy love.
Never, ever beg someone to be in your life.
The person who stays single the longest after a breakup tends to be the most loyal of the two. It takes an average of 3 to 5 months to heal from a breakup. If that person moved on before that time, then they were fake with you or they’re fake with the relationship they’re in now.
I do want to make something clear: the breakup was mutual (if anything, he says I’m the one that dumped him). I wasn’t blindsided. Also, I’m not worried about ending up single or lonely. I trust that once I can enter the world of the living after vaccination, I’ll find guys who will give their left nut to spend time with me. I’m concerned that I won’t fall for them and it will take years before I have this feeling again.
My hurt stems from how quickly he fell in love with someone else while I not only allowed myself to feel hope each time he messaged me, but I also let it delay my post-breakup self-growth. Never lose yourself trying to hold on to someone who doesn’t care about losing you.
No Contact. Say it with me, Mandalorian style: this is the way. | https://medium.com/heart-affairs/no-contact-the-brutal-way-to-end-heartbreak-42e296eb06a6 | ['Jennifer M. Wilson'] | 2020-12-24 16:59:57.437000+00:00 | ['Relationships', 'Love', 'Mental Health', 'Sex', 'Self Improvement'] |
A Love Tale | I want to delve into your corners,
to live inside your binds.
I long to feel the sensation of
your edges, trace along your spine.
To live a thousand lives with you,
and travel back through time.
Find the rhythm of each word
page by page
line by line | https://medium.com/literally-literary/a-love-tale-c4c9b845f602 | ['Jess Kaisk'] | 2017-03-31 18:42:07.355000+00:00 | ['Literally Literary', 'Stories', 'Bibliophile', 'Poetry', 'Books'] |
Could Videos Make Writing a Thing of the Past | Could Videos Make Writing a Thing of the Past
More people are watching YouTube videos than ever before
By Cmichel67
Videos grab attention quickly and keep people engaged.
Almost everybody from Gen Z can record, edit, and publish to YouTube. Their channels are monetized and most of them like to learn by watching a video instead of finding some book or an article.
Why are they choosing videos over written text? Around 90% of information reaching your brain is visual. You can process visual information 60,000 times faster than text. The written word evolved much later than the spoken word. We can learn and remember more if we watch a video.
Recently, Evan Williams began a question forum where Medium users can ask questions. One guy, Kevin, asked him this question:
“The Future of Blogging in a Multimedia World. Gen Z is increasingly consuming more videos. How does Medium stay relevant?”
Evan Williams chose to answer this question in two parts:
Part 1: Reading and writing are going to be around for a long time.
He says writing is older than videos. When writing was invented, it was an incredible technology. Since its invention, it has been becoming more and more popular. Its value is not going down — it’s going up.
Writing is easy to do: people can record whatever is on their mind or what they are feeling, and anybody can read it without a special device. Writing is efficient to consume and lightweight to create.
Evan Williams calls the book an incredible device for storing and disseminating ideas and information. He says it has the highest accessibility-to-information ratio: you can learn what you want to learn at your own pace.
Do people — not just Gen Z — watch more videos today?
People are consuming more videos today, but they are reading more text as well. Reading brings slower but deeper gratification, and books remain better for learning and retaining new information.
All the search engines, newspapers, blogs, lectures, PDFs, and books use text. Everyone searches a new word on Google and then clicks on the democratically promoted articles. People read more these days — if you add up printed books, digital books, blogs, messages, posts, and audiobooks:
Pew Research Center
Gen Z succeeds Millennials and precedes Generation Alpha. People born in the late 1990s to early 2010s are included in Gen Z.
Gen Z plays more games and watches more videos. Evan Williams says he, too, played more games and watched more videos as a teenager than he read books. But as he aged, his focus shifted to the written word.
He said YouTube is a phenomenon. People watch videos all the time, but most of the videos include written text. Lectures, slides, translated content, tutorials, and silent videos overlaid with text are a huge part of video consumption.
He said podcasts and audiobooks are becoming popular as well. These fill the time when someone's hands or eyes are not free. People listen to audiobooks and podcasts during a walk or jog, or on the daily commute to the office.
All this means the written word is driving videos, audiobooks, and podcasts from behind the scenes. There is no chance people will quit reading any time soon.
Part 2: There’s no reason for Medium to be limited to reading and writing.
According to Evan Williams, Medium will always remain relevant. YouTube provides written transcripts for most of its videos. Similarly, writers could provide audio or video versions of their written words; Medium is already producing audio versions of popular articles.
me·di·um: a means by which something is communicated or expressed
At Medium, the goal is to create a deeper understanding of issues and spread ideas that matter. As TED puts it, some ideas are worth spreading. On an increasingly noisy internet, one voice is not enough. As more writers write about an idea whose time has come, it creates a social impact.
Medium is trying to build tools — the infrastructure and network — for spreading ideas and deep insights. The writers — the people with deep insights — may choose to use videos to broadcast their words in the future. In contrast to YouTube, Medium will remain a place where people discuss important issues and personal experiences.
Medium’s focus has been the written expression of thoughts and ideas in the past, but it doesn’t have to be limited to only text. Evan Williams said, “… we’ve also dabbled in audio, tappable stories (words and pictures), video, and even events.”
He believes the internet allows all forms of expression — audio, video, written words, and audiobooks — ‘to co-exist and coalesce.’ For Evan Williams, the most important thing is that quality thinking and great ideas can move from one brain to another that may be able to make better use of those thoughts and ideas.
He said he’ll keep on spreading the thoughts that may change the world someday.
Final Thoughts
Book reading and reading for pleasure are declining, and have been since the 1980s — long before YouTube and Facebook grabbed people's attention.
In the US, reading for pleasure has fallen by 30 percent since 2004, as estimated by the American Time Use Survey from the Bureau of Labor Statistics.
According to a report, ‘To Read or Not To Read,’ reading is becoming less popular. The share of grown-ups reading a novel, short story, poem or play fell from 57% in 1982 to 43% in 2015.
In the Netherlands, one study about reading trends concluded that from 1955 to 1995, time spent watching TV grew while reading time declined: “Competition from television turned out to be the most evident cause of the decline in reading.”
In the end, I would like to share two stories:
When J.K. Rowling’s series of novels was released, during the 1990s, children started reading more. In the age of TV, long queues outside the bookshops were a new thing. In 2011, Fifty Shades of Grey, written by E. L. James, hit the shops. Everybody wanted to read erotic details of the affair between Anastasia Steele — a college student — and Christian Grey — a young billionaire.
We live in a competitive world. When the written word offers more than TV and movies, people read. In another article, Evan Williams said he wanted Medium writers to produce content as good as Game of Thrones.
Part Three: Reader and philanthropically-funded Black-owned media (June 22, 2020) | Part Three: Reader and philanthropically-funded Black-owned media (June 22, 2020)
Subscribe to The Idea, a weekly newsletter on the business of media, for more news, analysis, and interviews.
DIGITAL STARTUPS 2.0
Declining ad revenue has posed a challenge for the Black press, legacy and new publications alike. While ad revenue enticed a number of mainstream, white-owned companies to begin producing content specifically for Black audiences, more recent trends and tools have opened pathways for Black media entrepreneurs. This "second wave" has seen Black creators using podcasts, newsletters, and messaging services to reach audiences while tapping into reader and philanthropic revenue.
Some Black outlets are leveraging both reader revenue and philanthropy. The TRiiBE, a digital site covering the Black community in Chicago, is funded mostly by brand sponsorships, reader donations (including an Indiegogo campaign that raised $20,000 in 2018), and philanthropic support. Push Black, which reaches its audience of four million primarily through text and Facebook Messenger, also relies on donations from its subscribers as well as philanthropic support.
Few have turned to paywalls and subscription revenue. The TRiiBE explicitly eschewed the subscription route: One of its founders, Morgan Johnson, said, “We know, as Black millennials, we don’t want to pay for news. We made a conscious decision not to be subscription-based.”
The PLUG, a platform built from a successful newsletter covering the Black tech sector, has found a niche that has made a subscription-based model viable. It has thousands of free newsletter subscribers and hundreds of members paying $100 annually. Sherrell Dorsey, its founder, told The Idea that although revenue first came from advertising partnerships with companies such as Goldman Sachs, she quickly decided in favor of a subscription-based model in the interest of sustainability. Dorsey has accomplished this by turning The PLUG into an insights platform in addition to a news one, providing data indexes and reports on topics such as Black-owned venture capital firms and a Tech D&I Executive Index.
Subscriptions have not been the only challenge, as the distribution of funds by philanthropy remains concentrated and unequal. A 2019 report by Borealis Philanthropy found that 40% of foundation giving went to just three newsrooms: ProPublica, Center for Public Integrity, and the Center for Investigative Reporting. The Racial Equity in Journalism Fund, which Borealis launched last September, aims to address this disparity by supporting news organizations run by people of color. More general funds, like the Facebook Journalism Project, have also prioritized outlets owned by people of color at times. In May, Facebook gave out $16 million in COVID relief grants; more than half of the outlets that were beneficiaries are published by or for communities of color.
However, these philanthropic funds do not necessarily solve the issue of start-up costs. Although MLK50, a Memphis-based nonprofit, is now supported by both of the aforementioned funds, its founder initially self-funded the publication with credit cards. The TRiiBE, too, started with no financial backing. Dorsey told The Idea that although she initially tried to pursue VC money for The PLUG, she received pushback, as many did not see the point in investing in multiple Black media outlets.
NOW WHAT?
Platforms like Medium, Patreon, and Substack have decreased the barriers to publishing. For example, MLK50 is hosted on Medium, while Coronavirus News for Black Folks, a newsletter written by Patrice Peck, uses Substack and aims to address the lack of coverage of the disproportionate impact of the COVID-19 pandemic on Black Americans.
Whether these platforms can facilitate sustained support for creators of color or end up reflecting the racial and economic inequities in the rest of media remains to be seen. As of this writing, according to Substack’s ranking of paid publications, there is one newsletter written by a Black writer on its list of top 25 paid newsletters: Roll Call. Written by Austin Channing Brown, a New York Times-bestselling author, Roll Call started in February and costs $7/month.
Subscribe to The Idea, a weekly newsletter on the business of media, for more news, analysis, and interviews. | https://medium.com/the-idea/part-three-reader-and-philanthropically-funded-black-owned-media-june-15-2020-d7dd7673dd0a | ['Saanya Jain'] | 2020-06-23 21:10:26.504000+00:00 | ['The Latest', 'Media', 'Journalism', 'Philanthropy'] |
How To Publish Your Content On UX Planet | We’re looking for articles, insights, tutorials and case studies on Inspiration, User Experience, User Interface Design, Usability, Interaction Design, Prototyping, Product Design, and any other topic that relates to designing and building digital products.
How To Send Your Article
Send your draft or link to the published article to [email protected] via email, with a short description (one-two sentences) of what your article is about.
We will review the article and get back to you within 1–2 business days.
Information To Keep In Mind
Your article should be for all Medium audience, not just Medium Members.
Medium only allows articles to be published to one publication at a time. If your article has already been published with a different publication, we aren't able to add it to ours.
Once your article is published with us, we ask you to not remove it from our publication for at least 6 months.
In order to maintain the quality of our channel, we reserve the right to decline articles that don't align with our goals and convictions.
We’re always glad to make new contacts and explore new possibilities. We would be very happy to welcome you on board of our UX Planet! | https://uxplanet.org/how-to-publish-your-content-on-ux-planet-fd9dc99756db | [] | 2020-06-28 13:33:40.772000+00:00 | ['UX', 'User Experience', 'Writing', 'Ux Writing', 'UX Design'] |
How to create strong hierarchy in digital design | Hierarchy is a key element in design, creating a good user experience, and achieving the business goals of a website or app. It’s also one of the biggest mistakes new designers make. When there is no clear hierarchy on a website or an app, you risk frustrating the user and losing them as a potential customer. Let’s dive into what visual hierarchy is, tips for how to achieve it, and how you can use it to improve your websites and app designs.
What is visual hierarchy in design?
There are many definitions of hierarchy. Most refer to a system of organized rankings: an order or power structure that helps everyone know who is on top and most important. For example, at a company, this would be the CEO. The CEO is at the top of the pyramid, the top of the chain of command, with various leaders and individual contributors underneath in the organization chart.
Visual hierarchy, on the other hand, is the principle of arranging elements to show their order of importance. If everything in your design is the same size, the same weight, or the same color, then nothing is important. This is even more critical for creating a good user experience in web design. You want to capture the attention of your user and guide them through an experience. Whether through a large headline, a bright color, or placement, there are many ways to create strong hierarchy in web design.
Why is hierarchy important?
Now that we understand what hierarchy is, why is it so important specifically for web design? Whenever our brains look at something, we need to understand what we should be looking at first.
If there are too many things to look at on a landing page, our eyes have no idea where to look first or how to process the information. If the hierarchy on a website is too confusing, a user will get frustrated and exit the site altogether. This creates a bad user experience. You've lost a potential customer forever; they're not coming back.
In order to achieve a goal for your user, you need to use hierarchy to guide them through the website experience. Are they there to log in and make a purchase? Do they need to pay a bill? Are they looking for information to reach out and hire a company? Or is it more for pleasure, maybe a browsing experience like YouTube or Netflix? Whatever the goal, you must work with the client and use your design expertise to achieve it.
What does good hierarchy look like?
Let’s take a look at some examples of websites using good hierarchy best practices. Take a look at these designs and see if you can spot why the hierarchy works well.
Example of good hierarchy in web design (source: squarespace.com)
In this example of Squarespace’s homepage, there’s a balance of hierarchy. The largest element is a glimpse into the Squarespace product, how you can use their tools to customize your own website.
Next, we read the headline “Everything you need to grow online.” which reinforces the mission behind Squarespace, a tool that helps you develop an online presence. There are clear calls to action with “Get started” buttons and also a balanced flow to the whitespace surrounding these elements.
Example of good hierarchy in web design (source: onemedical.com)
In this example, a large headline with clear contrast tells us immediately what One Medical is and how they help. Next, we notice the lifestyle photography, reinforcing the idea that you can get medical care from the comfort of your own home.
Example of good hierarchy in web design (source: squareup.com)
The centered layout in this example draws our eyes to the headline “Tools to run your business”. Followed by the group of photos underneath, we get a better sense of how Square can help your business. We’re also not bombarded with too many actions, one clear “Get started” CTA button or, in a secondary CTA style, the option to view more solutions. | https://uxplanet.org/how-to-create-strong-hierarchy-in-digital-design-1f605f04d1fc | ['Monica Galvan'] | 2020-12-17 22:23:30.374000+00:00 | ['UX', 'UI', 'Visual Design', 'Web Design', 'Design'] |
Careless citations don’t just spread scientific myths — they can make them stronger | Science, in theory, is self-correcting. But, as a new study demonstrates, some scientific ideas appear immune to criticism. Striking them down only seems to make them more powerful.
The study, published in PLoS ONE by Kåre Letrud and Sigbjørn Hernes of the University of Applied Sciences in Lillehammer, Norway, looks at citations and mis-citations of three articles critiquing the so-called Hawthorne effect. It’s the first in a series of planned investigations of what Letrud refers to as “tenacious scientific myths”.
Continue reading at Nature Index | https://medium.com/dr-jon-brock/careless-citations-dont-just-spread-scientific-myths-they-can-make-them-stronger-d2d2a386c8cf | ['Jon Brock'] | 2019-10-21 07:26:43.761000+00:00 | ['Science', 'Science Publishing', 'Hawthorne Effect'] |
Finding the Meaning Of Life In Suffering | “If you say that all this, too, can be calculated and tabulated — chaos and darkness and curses, so that the mere possibility of calculating it all beforehand would stop it all, and reason would reassert itself, then man would purposely go mad in order to be rid of reason and gain his point! I believe in it, I answer for it, for the whole work of man really seems to consist in nothing but proving to himself every minute that he is a man and not a piano-key!” Chapter eight, Notes from the Underground
Fyodor Dostoevsky uses the character of the Underground Man to celebrate liberty. The Underground Man embodies the freedom to choose — to choose suffering over health, horror over delight and immorality over morality. He has devoted his life to a perverse idea of rebellion whereby he continuously throws himself into bonfire in the hope that someone will notice him. He commits violence upon himself to show others of the violence they are doing to themselves.
It is the existence of the irrational that the Underground Man is trying to expose. His life is a performance; he is acting as a mirror, presenting to his audience the side to them that is illogical. For man is far too unpredictable and inconsistent to reduce itself to that which is rational, moral, reasonable and logical.
To purposely be offensive, foul, rude and arrogant, to be as irrational as one can be, is an act of rebellion against a system that believes people are mere straight lines.
He strives, against the disbelief of those around him, to become ever more so angry, isolated and unhappy — he is committing mutiny only to demonstrate to his audience what it is like to be chucked off the ship and into the ocean.
He hopes to insult his audience, to show them how pathetic they are for trying to control the little world around them.
The people have a great many schemes and plans that they hope will end their suffering, yet, once heaven appears on the horizon, they hunch over again and discover something else to be miserable about. When they are in Naples, they dream they are in Rome, as Emerson wrote. The Underground Man understands this, he has seen it before many times, over and over again:
“But if he is not stupid, he is monstrously ungrateful! Phenomenally ungrateful. In fact, I believe that the best definition of man is the ungrateful biped.” Chapter eight, Notes from the Underground.
This is the nature of man. But, if you believe in the rationality of man, then how do you justify the history of the world? For history is a great catastrophe, an endless cycle of betrayal and misery where there are no victories or triumphs.
A person of modern-day rationality must either look away or squeal in horror. Because history is not rational, it is not even sensible, but, instead, a chaotic mess defined by extraordinary acts of horror and cowardice. There is no end to the immense amount of suffering we have caused ourselves. Why, then, do people continue to believe that the future will be different?
Notes from Underground is an attack on the ideologies that seek to end suffering, namely Marxism and utilitarianism. Dostoevsky argued that despite humanity’s attempt to create the “Crystal Palace,” an all-inclusive utopia, one cannot avoid the truth that people do not always want to act in their own self-interest; the attraction to protest the rational is a part of our natural energy even if it is harmful.
People are always struggling for freedom, and for a chance to declare themselves as independent from the platoon. Indeed, reason and comfort have attracted the horde. But, sometimes, someone will step aside from the crowd just to hear their own voice again even if that means crime or drudgery.
Humans do intend to be good, but they do not wish to be perfect. Sometimes they want to see what it is like to be terribly bad, to feel the vibrations of chaos and watch the way the cards flutter as they fall.
No one deserves to be a saint, but no one wants to be one either. And, here lies, Dostoevsky wrote, the problem with progressing towards a utopia — nobody truly wants what they seem to seek. Dostoevsky argues this brilliantly in Notes from the Underground:
“Shower upon him every earthly blessing, drown him in a sea of happiness, so that nothing but bubbles of bliss can be seen on the surface; give him economic prosperity, such that he should have nothing else to do but sleep, eat cakes and busy himself with the continuation of his species, and even then out of sheer ingratitude, sheer spite, man would play you some nasty trick. He would even risk his cakes and would deliberately desire the most fatal rubbish, the most uneconomical absurdity, simply to introduce into all this positive good sense his fatal fantastic element.” Note from the Underground, Chapter eight
Give people exactly what they want, the rationalist argues, and the pain suffered because of the shortage will disappear. But, there is no reason to believe this to be true. Solve all the difficulties in the world, and we will not simply put our hands together and admire our shiny new world, but only create, from thin air, more problems, more worries, more uncertainties.
People want to feel alive, they want to discover who they truly are, what makes them gasp, what makes them bleed. Sometimes they tear the hairs from their arms just to feel the pain. Sometimes they swear at those they love just to feel the grief.
But, they commit these crimes only to prove that they are man and not machine, that they are powerful, and that they do indeed have a choice, a great ability to choose between an infinite number of possibilities. And, how they adore the shaking tension before making such a decision!
This irrationality, this darkness cannot be calculated — it is born from the madness of free will. No amount of reason and logic will ever straighten the molecules of our genes. And, even if we were to succeed in our schemes to reinstate reason and logic, it would not be long before we would lose our minds only for the purpose of discovering that we have one. | https://medium.com/personal-growth/finding-the-meaning-of-life-in-suffering-604182cf11cd | ['Harry J. Stead'] | 2018-12-27 00:29:53.976000+00:00 | ['Life Lessons', 'Culture', 'Philosophy', 'Life', 'Psychology'] |
Persephone Rises | Persephone Rises
A poem
Art by Moga Alexandru
I remember; The crispy grass under my soles,
The tall lavender kissing my sore knees and my hands
Dancing around trees; Dirt beneath my fingertips and
A feeling of urgency towards stillness.
The voices of the woods would call me and I would answer
In sober desperation; The world is dying, I don’t want to spend
More than one minute in the same place because there is
So much to see and much earth to greet; Soon everything will collapse,
Starting by my lungs, one against the other, so I’ve been saving my breath
For the words that matter.
The Fae asked me to have faith in the cycles, in the seeds about to sprout
Yet I feel I’ve been cropped, harvested and soon, the soil will follow,
Eating down my legs and my arms; swallowing my limbs whole, engulfing
All that once was into a new beginning; An imploding volcano,
Erupting itself.
I see stones made out of empty shells and webs abandoned, evicted;
Long-gone spiders left circular notes, which I study and memorize by heart.
It is the lore of the forest, the words of a song I forgot the rhythm yet my
Syncopated heart tries to retrieve it; I thought I could sing the world anew.
I thought I could mend the roots and the branches with saliva and tears,
But they keep on breaking; crippled jungles seek shelter beneath my arms,
The toasted wings of a weary phoenix, tired of the chthonian act of rebirth,
Pushed from the legs of Gaia more than once; beaking the primordial egg
Until it cracks open and cuts my skin, plucking my feathers.
But I resist.
The colossus of trees resist.
The agitated animals resist, clawing the fields, standing the sacred ground.
My eyes lighten the flames, and the ashes fall heavy and agglutinated
As dunes of sand of a dying desert.
A forbidden idea of possibility enters the pores of the forest, each hole,
Each cave wants to be more, to extort life from every leaf and dewdrop.
I twist stalagmites with my bare hands to extract the juice of persistence
And it rains all over the burnt land, the wounded seeds.
The tiniest creatures seem to celebrate this second chance, this existence.
I think of the many times I wanted to end mine, desiring to be
Shoved down into the cold, wet earth, stretching my whole body;
Finding new heartbeats beneath the steps of new ancestors,
Drinking new blood from the gems of a pomegranate.
Soothed, remade, strong enough to germinate again. | https://medium.com/giulia-listo/persephone-rises-1ab79252b817 | ['Giulia De Gregorio Listo'] | 2020-01-24 20:12:55.204000+00:00 | ['Poetry', 'Nature', 'Love', 'Writing'] |
Song Lyrics Without the Suck | Over the past few years I’ve been involved with a number of music-related web projects, including Loudwerkz, DMOD (RIP), and Ink19. Although I make my living on the web and love what I do, I suppose I’ve always been secretly jealous of those snobby record store employees who hear the latest cuts first, troll naive customers, and argue with their coworkers about song meanings :).
Fortunately, the web has lowered the bar to all of that stuff. What would we do without Pandora, Last.fm, and (more recently) Turntable? The one missing piece for me has always been a good song lyrics site. Sure, there are lots of lyrics engines on the web, but they’re all dreadfully designed, spam-centric, and just scuzzy feeling in general. Search for lyrics for your favorite song on Google, and you’re bound to end up looking at popup ads for diet pills.
If you’re like me, this makes you sad. And irritated. So there I was, bitching about this on Twitter one evening, when Seth Banks proposed that we actually do something about it. So we did.
Last week, Seth and I launched Lyricful, which aims to be the first classy lyrics site. We started with a nice clean design and paired it with a sizable lyrics database (growing every day), an intuitive search interface, and some SEO know-how. We’ve also added a few other things we felt fans would find useful, like song previews, concert information, and easy sharing features.
Most importantly, Lyricful isn’t running any invasive eyeball-bleeding advertising. The only ads on the site are in the form of song preview links and concert ticket referrals, relevant to the artist you’re currently browsing (which directly benefits the artist). We built this for ourselves, as music lovers and fans, which means that we wanted to make it as easy as possible to get right to transcribing, discussing, and sharing your favorite lyrics.
Since the soft launch, we’ve started working with a number of artists who were interested in Lyricful and its sister site, MusicNewsHQ. To start with, we’ve added featured / verified artist spots, which will help promote up and coming artists and ensure accuracy of the site contents. Double win. I’m really looking forward to seeing how this project evolves, having both artists and their fans involved in the process. Got feedback for us? We’d love to hear it! | https://medium.com/zerosum-dot-org/song-lyrics-without-the-suck-58603e599528 | ['Nick Plante'] | 2017-11-04 17:35:43.101000+00:00 | ['Startup', 'Song Lyrics', 'Side Project'] |
Random Numbers in Python | We make numerical calculations when the analytical solutions are not available. For example, if we flip a fair coin a large number of times, then we know “analytically” that the outcome will be heads or tails roughly equal the number of times (50% each). For complex systems, a numerical solution might be needed. This article covers a few ways to generate random numbers in Python for the purpose of numerical solutions to differential equations or Monte-Carlo simulations for forecasting.
Image by Clker-Free-Vector-Images from Pixabay
What are Random Numbers
Random numbers are numbers chosen from a certain distribution. Depending on the distribution, some numbers are more likely to be chosen than others.
If the distribution is uniform, then all numbers are equally likely to be chosen.
If the distribution is Gaussian, then the numbers closer to the mean of the distribution have a higher probability of getting chosen. This is particularly useful when working with Brownian motion such as in particle diffusion.
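To make the Brownian-motion connection concrete, here is a minimal sketch of a 1-D random walk built from Gaussian increments. The function name `brownian_path` and its parameters are illustrative, not from any particular diffusion model:

```python
import random

def brownian_path(steps, dt=1.0, sigma=1.0, seed=0):
    """1-D Brownian-style path: each increment is drawn from a
    Gaussian with mean 0 and standard deviation sigma * sqrt(dt)."""
    rng = random.Random(seed)  # private, seedable generator for reproducibility
    position = 0.0
    path = [position]
    for _ in range(steps):
        position += rng.gauss(0.0, sigma * dt ** 0.5)
        path.append(position)
    return path

path = brownian_path(1000)
print(len(path))  # 1001: the starting position plus 1000 steps
```

Because the generator is seeded, running the same simulation twice reproduces the same path, which is handy when debugging a diffusion model.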
Sometimes the distribution is not available, and a random entity needs to be drawn from a list with each entity (or number) having an equal probability of getting drawn. A modeler may choose to allow the same entity to be selected again in subsequent draws (with replacement) or configure the code so the same entity cannot be drawn again (without replacement).
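In Python's standard library, that distinction maps onto two functions: `random.choices` draws with replacement, while `random.sample` draws without. A small sketch (the ball names are made up for illustration):

```python
import random

rng = random.Random(42)
balls = ['green', 'red', 'blue', 'yellow']

with_replacement = rng.choices(balls, k=3)    # the same ball may repeat
without_replacement = rng.sample(balls, k=3)  # every drawn ball is unique

print(with_replacement, without_replacement)
```

Note that `random.sample` raises a `ValueError` if you ask for more items than the list contains, whereas `random.choices` happily keeps drawing.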
Random Library in Python
All the information you need: random numbers on python.org and on Analytics Vidhya. I am summarizing a few statements that are used more often.
import random # Random number between 0 and 1
random.random() # Seed so you get the same random numbers every time
random.seed(1) # Random number between a and b with uniform distribution
random.uniform(1, 2) # Random number with normal (Gaussian) distribution
random.gauss(1, 2) # mu = 1 and sigma = 2
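One detail worth demonstrating: `random.seed` makes a run reproducible, because resetting the seed replays exactly the same sequence of numbers:

```python
import random

random.seed(1)
first = [random.random() for _ in range(3)]

random.seed(1)  # resetting the seed replays the same sequence
second = [random.random() for _ in range(3)]

print(first == second)  # True
```

This is why simulations are usually seeded during development: identical inputs give identical outputs, so bugs can be reproduced.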
Quick Test 1: Red and Green balls
In some cases, random entities need to be picked from a list. For example, consider a typical interview question: what is the probability of drawing a green ball from a bag of 1 green and 4 red balls? The answer is 1/5 = 0.20. In other words, if I draw balls (with replacement) from the bag a large number of times (say, 100), then roughly 20 of those should be green. Let's test it programmatically.
import random

green_ball_count = 0
for i in range(0, 100):
    new_ball = random.choice(['g', 'r', 'r', 'r', 'r'])
    if new_ball == 'g':
        green_ball_count = green_ball_count + 1

print(green_ball_count)
# output is 19 which is close to 20
Quick Test 2: Unfair Coin
Let’s assume that we only know the probabilities of the occurrence of the events. For example, in the case of an unfair coin, the head (H) occurs less than the tail (T). The probability(H) = 0.2 and probability(T) = 0.8.
import random

H_count = 0
for i in range(0, 10000):
    new_flip = random.choices(['H', 'T'], weights=[0.2, 0.8])
    if new_flip == ['H']:  # notice that this function returns a list
        H_count = H_count + 1

print(H_count)
# output is 1973 which is close to 2000
Law of Large Numbers
In both the above examples, we see that as the number of coin flips or balls drawn from the bag increases, the results from the numerical solution (simulation) come closer to the analytical solution. Let's test this programmatically.
In the following code, a fair coin is flipped 1000 times. For the first few flips, the numerical probability fluctuates, but it settles toward the stable value of 0.5 as the number of flips grows.
import random
import matplotlib.pyplot as plt

H_count = 0
total_flips = 0
numerical_probability = []

for i in range(0, 1000):
    new_flip = random.choices(['H', 'T'], weights=[0.5, 0.5])
    total_flips = total_flips + 1
    if new_flip == ['H']:  # notice that this function returns a list
        H_count = H_count + 1
    numerical_probability.append(H_count / total_flips)

plt.plot(numerical_probability)
plt.xlabel("flip counts")
plt.ylabel("numerical probability")
Numerical probability approaches the theoretical value of 0.5 for a large number of experiments
Real-World Applications
Safety Stock Calculations. Like an unfair coin, the demand for various items is not evenly distributed. In other words, there are more chances that the demand is below or above the forecast. Here is a research paper on this topic. Estimating Film Porosity and Thickness for Sensors: Brownian motion plays an important role in how aerosol particles get deposited on surfaces. Here is a research paper on this topic. Check out the “Predictions by the Numbers” documentary from Nova PBS.
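To make the safety-stock idea concrete, here is a hedged Monte-Carlo sketch. The normal-demand assumption, the numbers, and the function name are illustrative, not taken from the cited paper: draw many possible demand outcomes and read off the stock level that covers, say, 95% of them.

```python
import random

def safety_stock(mean_demand, sd_demand, service_level=0.95,
                 trials=100_000, seed=7):
    """Extra stock above the mean forecast needed to cover demand
    in `service_level` of simulated periods (normal demand assumed)."""
    rng = random.Random(seed)
    demands = sorted(rng.gauss(mean_demand, sd_demand)
                     for _ in range(trials))
    cutoff = demands[int(service_level * trials)]  # empirical 95th percentile
    return cutoff - mean_demand

extra = safety_stock(mean_demand=100, sd_demand=20)
print(round(extra, 1))  # close to the analytical 1.645 * 20 ≈ 32.9
```

The Monte-Carlo answer converges to the analytical z-score formula here, but the same code keeps working if you swap the Gaussian for an empirical demand history where no closed form exists.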
Advanced Topics
Please feel free to reach out to me if you have any questions, suggestions, code requests, or if you found any errors in the above code.
Contact: https://www.linkedin.com/in/anshumanlall/ | https://medium.com/predmatic/random-numbers-in-python-c897d719169b | ['Anshuman Lall'] | 2020-12-28 18:28:25.832000+00:00 | ['Monte Carlo Simulation', 'Python', 'Data Science', 'Probability', 'Random Numbers'] |
If I Had to Ring In the New Year with Anyone | Let’s be perfectly honest. I am the single mom to one happy five year old, and I am rather blissfully self-partnered these days. It’s a little bit funny because I used to be a very social butterfly. I’m the sort of INFP who can easily mask as an ENFP whenever she is truly comfortable, and in her “element.”
But motherhood has admittedly changed me. Even though it took a few years for me to adjust to being a parent and to finally get over the loneliness and isolation of single motherhood, I am now remarkably happy living my life on my own.
At this point, my New Year’s Eve is going to involve some Thai takeout and Frozen games with my daughter. But do you know what? I’m looking forward to it. My life is currently on my own terms, and I love that.
Years ago, I would have been especially sad and lonely on New Year’s Eve. I would have felt that I definitely needed other people to make the holiday a good one. People as in plenty of peers and a boyfriend.
These days I feel much differently about what makes a good holiday. And I’m not bothered by being alone. That’s why, when folks first began asking me to share my dream guest list for New Year’s Eve, I froze.
It took a long time for me to figure out who I would really want to connect with if “anything was possible.” And for the most part, I realized my list would be all about women who inspire me.
Stevie Nicks
“When you grow up as a girl, the world tells you the things that you are supposed to be: emotional, loving, beautiful, wanted. And then when you are those things, the world tells you they are inferior: illogical, weak, vain, empty.”
In a lot of ways, Stevie Nicks feels like my spirit animal. I wish I could be as cool as she is at 71 years old.
I’ve written about Stevie before, and I’ve always planned to write about her again. In Stevie, I don’t just see an incredible artist, but a woman who made her own path without worrying so much about what the world expected her to be.
Stevie has been very vocal about her choice to be child-free and her overall singleness despite having had many lovers… and even being something of a hopeless romantic at heart.
She hasn’t lived an easy life, but she’s been through some shit (like drug addiction and a three month marriage) and lived to tell the tales. I would love to spend time with her and soak up some of that ageless wisdom.
Helena Bonham Carter
“I think everything in life is art. What you do. How you dress. The way you love someone, and how you talk. Your smile and your personality. What you believe in, and all your dreams. The way you drink your tea. How you decorate your home. Or party. Your grocery list. The food you make. How your writing looks. And the way you feel. Life is art.”
For as long as I can remember, I have always been a little bit obsessed with Helena Bonham Carter. I love the roles she’s played in countless movies, and the way she’s been able to make each character so different despite her own distinctive appearance.
Helena is a small woman (only 5′ 2″), yet something about her always seems larger than life. She’s been romantically connected to powerhouse men like Kenneth Branagh and Tim Burton, but she’s never appeared to be overshadowed by them.
I respect her fierce independence and I’ve loved watching her age gracefully over the years without catering to Hollywood’s whims. A lot like Stevie, I sort of want to be Helena when I grow up, at least in certain ways.
bell hooks
“Genuine love is rarely an emotional space where needs are instantly gratified. To know love we have to invest time and commitment... dreaming that love will save us, solve all our problems or provide a steady state of bliss or security only keeps us stuck in wishful fantasy, undermining the real power of the love--which is to transform us. Many people want love to function like a drug, giving them an immediate and sustained high. They want to do nothing, just passively receive the good feeling.”
When it comes to bell hooks, I’m embarrassed to say that I am late to the party. I only began reading her this year, but I love the way she tackles really tough subjects like love, feminism, culture, and politics.
Her commonsense approach and honesty are refreshing and I identify so much with some of her revelations about life and love, though I’ve only just scratched the surface in reading her work.
I know that I could learn a great deal from bell and would welcome any time to chat with her.
Amy Tan
“It’s a luxury being a writer, because all you ever think about is life.”
Ever since grade school, Amy Tan has been one of my favorite writers in the world. I can’t adequately express how much I love her work and wish my writing had just a fraction of her beauty and sensibility.
Most folks are familiar with Amy for her novel The Joy Luck Club. That was my first experience with her writing as a child, and I understand that while we are very different writers, she has still made an enormous impact upon my own work and what I dare to reveal.
Back when I used to write copy for a social media marketing agency, I often thought a lot about Amy and the fact that her writing career began with mundane copy too. She has often inspired me to press onward.
The cast of Frozen
“I’m here. What you need?”
Just in case you missed it, I love Frozen. Especially Frozen 2. And I love how it’s such a feminist fairytale, honestly. One of my favorite parts in the film happens so quickly you might miss it. When Kristoff reunites with Anna he simply says, “I’m here. What do you need?”
You know, instead of taking over and trying to do everything for her, he asks and actually listens. Then he does what she needs, without an argument.
I’ve written before that Frozen 2 is a love letter to single women everywhere. And that holds true for me even when looking at Anna and Kristoff’s relationship. Disney has done a great job with the franchise and I hope to see more positive and women-driven Disney films.
So, as long as we’re talking about a fantasy New Year’s party, I’d love to party with the cast of Frozen because I find each actor hilarious and down to earth. Idina and Kristen especially inspire me as women who are thriving in Hollywood.
My real New Year’s Eve might seem sort of boring.
Again, I plan to grab my favorite Thai takeout and play some games with my kid. I’ll have a glass or two of moscato and maybe I’ll stay up to watch the ball drop, but maybe not.
What I love most about my life right now is that it’s my choice. And while I can’t ring in the New Year with any of the fantasy guests on this list, that doesn’t mean I can’t include them in my life throughout 2020.
Like it or not, the New Year is a natural time to look ahead and live with more purpose. There’s no reason why the work of these artists can’t continue to inspire me and my own work moving forward.
And who knows? Perhaps one of these days I will get to cross off a few of these names on my bucket list of folks whom I’d like to meet.
Stranger things have happened. | https://medium.com/honestly-yours/if-i-had-to-ring-in-the-new-year-with-anyone-c362d4aaa6fd | ['Shannon Ashley'] | 2019-12-29 15:59:51.283000+00:00 | ['Inspiration', 'Women', 'Writing', 'Holidays', 'Life'] |
The Last Prompt before the Drawing in December is…. | Dec. 1st to Dec. 14th prompt
Photo by Oskars Sylwan on Unsplash
In these dark times…we need a little light. I would like to hear a piece on what the writer imagines a New Year's Eve party for 2021 will be like…but with a positive spin to it. I want to uplift people and give them hope for the holidays, for Pagans and Christians alike. I would prefer it to be from the perspective of the writer as well: that the writer is at this party and will tell me this grand story.
I want to hear of this legendary party…where even the Gods might look down with envy. Now it doesn’t all have to be campy. There could be that loud drunk girl…who throws up on so-and-so…etc. There could be drama. BUT, I do want to hear that it mostly resolves itself and is happy in the end.
And it doesn’t have to be a regular party…nor even on this planet. Use your imagination, for your only limitation is your inhibitions. Let go. Go ID on this shit. Dig deep. Maybe there is a lesson in what you will write. Maybe it’s just a feel-good story or a comic strip. Or you write about a secret BDSM club scene that escalates past midnight. Explore your hard limits. But stay within the Submission Guidelines rules, of course.
So I’d like my writers to please put in your subtitles: A Rebel’s Prompt: A Grand Party. The submissions must be no less than 50 words and no more than 2,500 words.
Anyways, on Monday, December 14th, at midnight the prompt will end. Then on Tuesday, December 15th, 2020 I will announce the winner of this prompt. Finally, on Friday, Dec. 18th, I will announce the winner of the Grand Advertising Prize!!! Good Luck!
Amanda Dalmas
J.D. Harms, Mimi Bordeaux, Crystalclearcandace, Scott Leonardi, Ann Marie Steele, Iva Hotko, Robert Milby, Ben Kassoy, Lizzie Finn, Terri Seddon, Zach J. Payne, Kino McFarland, Regina, Calluna, Ed Newman, Shalini C, Tathy M Ntumba, Katy Madgwick, Lavender Nightmares, Gun Roswell, Ngang God’swill N., Penni Livingston, Charlotte Ivan, Dionne Charlet, Lauren Tolbert, J M Mantium, Michael Ritoch, Background Noise Comics, rstevens, marcialiss17, Pablo Stanley, Andy Anderson, Rhonda Skinner, Erica N, David Heatley, Lovely Daye, Wolfie Bain, Ora, Markmalady, Barry Dawson IV, Chelsea Cristoffor, Ravyne Hawke, Aaron Quist, Denis Adair, Jeff Suwak, Sivasai Yadav Mudugandla, Mohan Boone, Ema Dumitru, Suntonu Bhadra, Josie Elbiry | https://medium.com/the-rebel-poets-society/the-last-prompt-before-the-drawing-in-december-is-b24bf1b16edd | ['Amanda Dalmas'] | 2020-11-29 22:18:05.659000+00:00 | ['Partying', 'Contests', 'A Rebels Prompt', 'The Rebel Poets Society', 'Storytelling'] |
Everything You Need to Know About AirPods Max | The headphones also have a digital crown and button straight from an Apple Watch. The crown allows you to change volume, control ANC, use Siri, etc.
Features
These headphones of course have their whole list of features and gimmicks for marketing. Let’s quickly go through them:
The headphones have an array of 6 different microphones for adaptive EQ (including inside your ear). This is a marketing thing we saw on the HomePod. Was it groundbreaking? No, not really.
They will auto-pause when you take them off and put them around your neck (which is nice).
20 hours of battery life is good. Really happy to hear that.
It was rumored that the headphones would be bi-directional (meaning you could put them on either way and they’d adapt) but it turns out they are not. It does, however, say the left and right inside the ear cups. Similar to what Bose has done.
Spatial audio is something new we saw with AirPods Pro, and it’s coming to the Max as well. It’s a really cool feature for watching movies and one you won’t find in competitors’ headphones.
They also have that AirPods magic which means that they pair with your phone and work amazing. It also of course has Active Noise Cancellation and Siri.
Accessories
It wouldn’t be a modern-day Apple product if they didn’t price gouge you on accessories, Best Buy style.
Apple has two main add ons (kinda) for the Max. The first is a bi-directional lightning to eighth-inch jack for $35. It will let you use the headphones with a wired connection.
The other add on isn’t as much a necessary purchase right now as it is a future one. Apple has confirmed that replacement cups will cost $69 each. I’ve never been so proud and disappointed with Apple at the same time.
Thankfully, the case which can put the headphones in a special lower power mode to save battery comes with the headphones.
But is it Worth the Price?
While the AirPods Max look really cool, it brings up the question of whether it is worth the price tag.
For $550 you can get really nice sounding headphones. And while those headphones don’t have the AirPods “magic”, they do have features the Max doesn’t have. For example, you can’t use an external amp to power the headphones.
But the real competitors to these are Sony’s and Bose’s headphones. The Sony WH-1000XM4s (yep, worst name on the planet) only cost in the $300 range. They have really good noise canceling and are highly acclaimed. Sure, they won’t have that AirPods touch or the spatial audio, but is that worth the cost? We’ll have to see how good they sound once reviewers get their hands on them.
So Should You Buy Them?
No matter what, don’t buy them right now. Even if you’re interested in picking them up, you should wait till reviewers can try them. The headphones are already sold out so you won’t be getting them before Christmas.
There are also rumors that Apple will release a “sports-edition” version which will only cost $350. It might be worth waiting out a month or two to see how these really turn out. | https://medium.com/macoclock/everything-you-need-to-know-about-airpods-max-7f529ec6d3a | ['Henry Gruett'] | 2020-12-15 17:40:18.383000+00:00 | ['Technews', 'Technology', 'Technology News', 'Tech', 'Apple'] |
Live Discussion 29.05.20 update! | Last Friday we had a live discussion with Eugeny Ponomarenko and Peter Sharia about wipe and concept prospects!
Let’s check out together, what’s new!
Eugen, so the actual game will not be updated and you aim directly to a new concept?
That’s right. We had a difficult choice about continuing to maintain the existing game, which isn’t good, so we decided to focus on new, promising gameplay.
With the new direction of Worldopo, what will happen with my WPT, Hexagons, Mining Farms, Buildings, Resources, etc?
Wipe is inevitable, but I can assure you that we will be careful of dropping players assets and, for sure, your WPT will stay with you
As for hexagons, we will provide some kind of assessment: for instance, there will be a minimal amount of hexagons left, enough to construct a building, which will remain in the player’s possession. As for single hexagons, players will get a choice between tokenizing enough nearby hexes to reach that minimal amount, or compensation after the wipe.
Thank you, Eugene. Will a player have the option to decide if we can remove all of our Hexagons from the map and trade them back in for WPT? I know a lot of us have made many mistakes and would like the option to make a fresh start now that we are not guessing as much where to place the Hexagons like we were in the beginning.
Our plan is to make a fully functional in-game stock market where you can trade your tokenized assets, resources, and so on.
There will be a lot of us who are here from the very beginning when rules are totally far from now and honestly I hope you will come back with 2 rules instead of 100500 rules … you know, less is more!
I can’t agree more. The less the rules, the better the game. We trying to reach that
I have a lot of inventory slots that I don’t use, can I sell it back or you see use needed all those slots? In what manner they will be used?
No, they will not be used in new games in this manner.
We want to cut all unnecessary items and assets, which add a lot of noise and nothing else. For example, we are dropping drones, because their main purpose, for now, is to collect resources, and making players build waypoints doesn’t improve the gameplay. That’s why we want to simplify the game process: there will be no scissors and no wires.
So does it make sense to play actual version or wait for new which will come in days? Weeks? Months?
Several months for sure. Can’t be more specific, though we would be happy to mark the date. It’s a long way to release; our team is currently working on a new scalable platform. When it’s ready, we can add assets and features and show you something!
Will you be asking for alpha testers?
Probably yes. They helped us a lot last time, too.
I would like to hear more about the new ways of earning WPT through the new resource tree. Examples of low end versus high end?
So, it’s the most exciting part of the new concept. Farms will be only one of the many options for players to earn WPT. There will be a looped model of WPT circulation inside the game economy, where the player with farms depends on a player with Power Stations, and all of them need skilled workers provided by players with Universities. And they can trade all of it for WPT. A stock exchange is included.
What will happen to farms? Will there be WPT return so for all levels also?
Farms will be removed from players and refunded with WPT, for all levels of farm. After wipe players could buy new farms that will be tokenized and limited.
Any updates on PvP?
PvP isn’t our top priority, and it’s a hell of a task to make it real-time on a real map. Let’s say there will be a simplified version of PvP; can’t say more for now.
Can we reach a high level on multiple branches? Or we have to pick one?
You can get as much as you want, but it will take you a crazy amount of time and resources.
What do you recommend we do in Worldopo while waiting for the new game? With the inevitable demise of most related gameplay, should we keep placing Hexagons? Building Resource buildings? Collecting Resources? Or just concentrate our gameplay on mining Gems (WPT) only?
For now, we recommend building farms and earning more WPT while you can, so you can take it with you and rebuild a brand new world after the wipe. You can also try playing RPS in the meantime!
PlayMarket: https://play.google.com/store/apps/details?id=de.qubit.rpsgame
AppStore: https://apps.apple.com/ru/app/rps-cryptolord-worldopo-icq/id1507803151
Do you have some things to show already?
We will share when we have. At the moment there is more work on code, then on polishing design. You will see new buildings and so on, next weeks, in our posts so follow the updates!
Btw, Design. There will be a major change. We move from Landscape to Portrait.
Will there still be a ranking system in-game? In-game chat?
Yes, of course.
What about dead or inactive hexes?
Those would be wiped, but we will refund the WPT used for standalone hexes.
So I should not bother producing any more resources, apart from Qubits for repairs? No need for them moving forward?
Qubits, brains, and cash have no future and no place in the bright new world.
Stay tuned for the next live discussion at 12.06.20! | https://medium.com/worldopo/live-discussion-29-05-20-update-e75ac3841c01 | [] | 2020-06-01 18:28:29.777000+00:00 | ['Discussion', 'Development', 'Games', 'Updates', 'Crypto'] |
Forced to Throw Out Their Old College Admissions Standards, Higher Ed Institutions Should Seize on the Crisis to Create Better Ones for the Future | Forced to Throw Out Their Old College Admissions Standards, Higher Ed Institutions Should Seize on the Crisis to Create Better Ones for the Future The 74 Apr 17 · 4 min read
Commentary by Conor Williams
American higher education has always been enamored with longevity. Student tour guides on campuses across the country tout their institutions’ origin stories, quirky campus traditions and historical successes as they strut backwards past various stately, be-columned buildings with cornerstones announcing their decades — or centuries — of academic tenure.
It’s understandable enough: Many colleges and universities bill themselves as unique clubs with particular characteristics that they’ve developed organically over time. More seriously, the academy has an instinctive reverence for intellectual history; campuses have long served a key role as repositories of human wisdom.
And yet, framing the academic life in terms of links to the past can also create institutional calcification. It can make today’s status quo seem like the only possible way of organizing a campus.
Take colleges’ admissions processes. Several months ago, before the pandemic, California civil rights activists filed suit in an effort to force the system to drop standardized college admission test scores as an admissions requirement. Now, suddenly, unexpectedly, they’ve gotten their wish. Earlier this month, the University of California (UC) system announced that, among other things, its schools would not require students to include standardized test scores with their applications — starting with the 2021 freshman class.
The global coronavirus pandemic is shaking the very foundations of that chunk of the higher education universe. Indeed, public health concerns canceled spring administrations of both the SAT and ACT. For now, UC’s change is temporary, but the system’s size and scope — it enrolls more than 280,000 students across 10 campuses across the state — should open a broader conversation about how to make college admissions fairer. It’s worth asking: If we can do without mandating standardized assessments in college admissions now and the sky doesn’t fall, why bring them back at all?
The UC wouldn’t actually have to blaze a trail. As it happens, my alma mater, Bowdoin College, was the first college in the country to go test-optional (more than 50 years ago). The college has tracked the academic trajectories of students who submit test scores against those who don’t for decades, and it says that it doesn’t find any real impact. (In case it matters: I did submit my scores.) Their findings aren’t unique: When it comes to predicting students’ likelihood of graduating from college, high school grade point averages appear to be significantly better than scores on standardized college admissions assessments. Major university systems in Indiana and North Carolina — part of more than 1,000 test-optional campuses across the country — have followed Bowdoin’s example.
Advocates for making these admissions tests optional also cite studies that flag possible biases in their design. One study has found evidence of bias against African-American test takers in the verbal section of the SAT, arguing that it “favors one ethnic group over another.” There’s also a socioeconomic angle. “The highest predictive value of an SAT isn’t in how well a student will do in school, but how well they were able to avail themselves of prep material,” John A. Pérez, chairman of University of California’s Board of Regents, told the Los Angeles Times last fall. “And access to that prep material is still disproportionately tied to family income.”
At present, there’s ample reason to believe that these assessments have come to serve less as an upward mobility path for historically underserved students and more as an amplifier of existing social inequities. And yet, the tests’ critics shouldn’t overplay their hand (and the research behind it). While college admissions tests have sometimes served as an unfair hurdle for students, particularly those from diverse backgrounds, a good test-optional policy would convert them into an alternative pathway.
“Scores are best used not as the basis for a rank ordering of individuals,” wrote professor Danielle Allen in a 2014 Century Foundation book, “but as thresholds, dividing an applicant pool into those above and those below a line that is roughly predictive of likelihood of success.” That is, they could allow students with lower GPAs to prove their mettle — and earn admission — with higher scores.
What’s more, generally because of inequitable allocation of public education resources, many students from historically underserved communities attend schools that do not offer the full range of advanced coursework and extracurricular activities prized in competitive college admissions. Assessments like the SAT and ACT, however imperfect, could be used to give high-scoring students from under-resourced schools a chance to boost their admissions chances.
Does such a test-optional policy seem impossibly radical? Would it be such an unconscionable departure from college admissions’ meritocratic aspirations? It certainly doesn’t seem so. Suddenly, admissions offices in California are capable of breezing away the core metrics of prestige higher education — things the university system was preparing to defend in court just months ago. Perhaps this status quo was overdue for an update.
Those admissions policies aren’t a pile of natural, set-in-stone definitions of a quality applicant. They’re just the ones we’d gotten accustomed to. As colleges pause on mandating college’s admissions assessments, a decision made for them by the pandemic, it’s an opportunity to think seriously about what’s lost and what would be gained by making this change permanent — not just in our chaotic present, but as part of building a better, fairer future.
This article is part of The 74’s ongoing coverage of how the coronavirus pandemic is affecting schools. See more and sign up for The 74’s daily morning newsletter to receive the latest in your inbox. | https://the74million.medium.com/forced-to-throw-out-their-old-college-admissions-standards-higher-ed-institutions-should-seize-on-3b89e09fea95 | [] | 2020-04-17 18:56:29.616000+00:00 | ['College Admissions', 'Higher Education', 'Coronavirus']
Book Reviews From the Library | Becoming by Michelle Obama
In a life filled with meaning and accomplishment, Michelle Obama has emerged as one of the most iconic and compelling women of our era. As First Lady of the United States of America — the first African American to serve in that role — she helped create the most welcoming and inclusive White House in history, while also establishing herself as a powerful advocate for women and girls in the U.S. and around the world, dramatically changing the ways that families pursue healthier and more active lives, and standing with her husband as he led America through some of its most harrowing moments. Along the way, she showed us a few dance moves, crushed Carpool Karaoke, and raised two down-to-earth daughters under an unforgiving media glare. (Good Reads)
Sandcastles by Sana Rose
Sana was a poet and story-writer long before she decided to become a Homoeopathic Physician. Her first collection of poetry, ‘The Torrent from My Soul: Poems of A Born Dreamer’, was originally published in 2011. Her second poetry collection, ‘The Room of Mirrors: Reflections in Verse’, is soon to be published by Roman Books (UK & India). Currently, she is working on her second novel, a psychological mystery. (Good Reads)
The First Free Women by Matty Weingast
Matty is co-editor of Awake at the Bedside and former editor of the Insight Journal at Barre Center for Buddhist Studies. With almost two decades of meditation experience, Matty is currently a resident at Aloka Vihara, a nuns’ monastery in northern California (From the Forward)
Pride and Prejudice by Jane Austen
Jane was an English novelist known primarily for her six major novels, which interpret, critique and comment upon the British landed gentry at the end of the 18th century. Austen’s plots often explore the dependence of women on marriage in the pursuit of favorable social standing and economic security. (Wikipedia)
Jane Eyre by Charlotte Bronte
Charlotte was an English novelist and poet, the eldest of the three Bronte sisters. She and her sisters, Emily & Anne, originally published under the names Currer, Ellis, and Acton Bell. (Wikipedia)
Secret History of Witches by Louisa Morgan
Louisa lives and writes and rambles with her familiar, Oscar the Border Terrier, on the beautiful Olympic Peninsula in Washington State. A musician and a yogini, she finds inspiration in the artistic environment where she makes her home.
Under the name Louise Marley, she has written a number of other historical fiction novels, as well as fantasy and science fiction. (Amazon)
The Red Pencil by Andrea Davis Pinkney
Andrea is the New York Times bestselling and award-winning author of numerous books for children and young adults, including picture books, novels, works of historical fiction and nonfiction.
She was named one of the “25 Most Influential Black Women in Business” by The Network Journal and is among “The 25 Most Influential People in Our Children’s Lives” cited by Children’s Health Magazine.
Andrea was selected to deliver the 2014 May Hill Arbuthnot Lecture. This distinguished honor recognizes her significant contributions to literature for young people provided through a body of work that brings a deeper understanding of literacy for children and young adults. (Author Site) | https://medium.com/from-the-library/book-reviews-from-the-library-6729fd044cc0 | ['Laura Manipura'] | 2020-03-25 16:19:09.161000+00:00 | ['Books And Authors', 'Book Review', 'Ftl Letter', 'Writers On Medium', 'Books'] |
Stateful Kubernetes Applications Made Easier: PSO and FlashBlade | For stateful applications on Kubernetes, Pure Service Orchestrator automates and productionizes those services with FlashBlade as shared storage. This repository presents three illustrative example applications: a simplistic shared scratch space as well as file-sharing services NextCloud and OwnCloud.
I assume the reader has only a basic understanding of Kubernetes and is interested in how to use FlashBlade with Kubernetes.
For a more general introduction and guide to PSO for both FlashArray and FlashBlade, see this explanation of Kubernetes Storage, Kubernetes documentation, and Containers-as-a-service architecture.
Introduction
The Pure Service Orchestrator (PSO) automates the process of creating filesystems on FlashBlade and attaching them to the running applications in Kubernetes.
But first, why run applications in Kubernetes? There is a generic set of problems that almost all applications need to solve: recovering from hardware failures, scaling up or down resources, orchestrating connectivity between inter-linked services, securing and isolating environments, and provisioning resources for running applications. Kubernetes helps solve these problems, simplifying the operation of production services.
Why Use PSO and FlashBlade
In Kubernetes, applications run inside a set of pods, which are ephemeral and not tied to physical compute or storage resources. If the pod needs persistent storage for any reason (and there are many!), then PersistentVolumes are needed. But manual storage administration is complicated: creating volumes, attaching them to running pods regardless of protocol, and finally deleting volumes. In a fully automated system, none of these steps should involve a human!
PSO automates the process of provisioning storage and hides the details of storage creation and attachment to each pod. The result is self-service storage that matches cloud-native workflows and applications.
FlashBlade provides shared filesystems for Read-Write-Many (RWX) volumes. These are critical for scale-out applications that spawn multiple pods; each additional pod is able to automatically share access to a common data store. FlashBlade can also support Read-Write-Once (RWO) volumes.
While I focus here on FlashBlade filesystems as volumes, PSO also serves FlashArray and block devices. The details of storage administration are different for block (FlashArray) and file (FlashBlade) volumes, and PSO hides all of these differences from the end-user. For example, a developer creating an application that uses a ReadWriteOnce volume can choose between latency sensitive or bandwidth sensitive performance simply by changing the text between “pure-file” and “pure-block.” All of the annoying differences between iSCSI, Fibre Channel, and NFS mounts are hidden automatically by PSO.
How to Use PSO with FlashBlade
Compared to traditional storage workflows, PSO automates most of the requests and interactions between users and administrators for file system creation and mounting. The result is a self-service storage infrastructure; no direct interaction is necessary between administrators and users.
Administrators install and occasionally upgrade the PSO software and users (developers) interact only via standard Kubernetes PersistentVolume mechanisms.
For the Kubernetes Administrator
PSO is a software layer for Kubernetes that utilizes the public REST API provided by FlashBlade to automate storage-related tasks and is installed via helm.
The one-time PSO configuration requires a FlashBlade management IP, a data VIP, and the corresponding API token. The API token proves that PSO has permission to create and delete volumes and filesystems, so safeguard it appropriately.
To get the API token, use the following command from the FlashBlade CLI:
pureuser@flashblade-ch1-fm2> pureadmin list --api-token --expose
Name API Token Created Expires
pureuser T-c4925090-c9bf-4033-8537-d24ee5669135 2017-09-12 13:26:31 PDT -
Then, add the following to the PSO values.yaml (example) file to add a FlashBlade under management:
FlashBlades:
- MgmtEndPoint: "10.62.64.20"
  NFSEndPoint: "10.62.64.200"
  APIToken: "T-ab0ed3e0-5438-4485-9503-863e6a9c1434"
PSO works by watching for PersistentVolumeClaims that specify the corresponding StorageClass, “pure-file,” and fulfilling each one by creating a matching filesystem.
Update the default storageClass to use the “pure-file” class if you want FlashBlade to be the default for PVCs that do not specify StorageClass.
> kubectl patch storageclass pure-file -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
After this change, view the configured StorageClasses and their associated provisioner with the following command:
> kubectl get storageclass
NAME PROVISIONER AGE
pure-block pure-provisioner 24d
pure-file (default) pure-provisioner 24d
Finally, to view the PersistentVolumes in the system which were automatically created by PSO:
> kubectl get persistentvolume
...
The PersistentVolumes reported here should match the filesystems created on the FlashBlade, as seen with the following CLI command in the FlashBlade:
> purefs list --filter "name = 'k8s-*'"
Name Size Used Hard Limit Created Protocols
k8s-pvc-98bb10ca-c037-11e8-9ada-525402501103 1T 0.00 False 2018-09-24 13:22:46 PDT nfs
k8s-pvc-98c51436-c037-11e8-9ada-525402501103 800G 52.50K False 2018-09-24 13:22:46 PDT nfs
k8s-pvc-98d6147c-c037-11e8-9ada-525402501103 20T 450.00K False 2018-09-24 13:22:48 PDT nfs
k8s-pvc-9fcfdf49-beb3-11e8-9ada-525402501103 5T 0.00 False 2018-09-22 15:05:33 PDT nfs
Adding a StorageQuota (example) allows the administrator to limit the number of claims or the total amount of storage requested. These limits cause graceful failures in the case of buggy jobs that consume too much storage.
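Kubernetes implements these limits with a ResourceQuota object. The sketch below is illustrative rather than taken from the linked example; the namespace name and limit values are assumptions:

```yaml
# Illustrative ResourceQuota; namespace and values are assumptions.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: dev-team                 # hypothetical namespace
spec:
  hard:
    persistentvolumeclaims: "10"      # at most 10 claims in this namespace
    requests.storage: 50Ti            # at most 50Ti of total requested storage
    # Optional: cap a single StorageClass separately.
    pure-file.storageclass.storage.k8s.io/requests.storage: 40Ti
```

A claim that would exceed a quota is rejected at admission time, before any filesystem is created on the FlashBlade.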
For the Kubernetes User
The following example creates a PersistentVolumeClaim that PSO will satisfy because it uses a storageClassName of “pure-file”. The StorageClass signals to Kubernetes how to satisfy the claim request. Switching to a FlashArray block device instead is as simple as specifying “pure-block” as the StorageClass.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: file-claim
spec:
  storageClassName: pure-file # This line is all that is required to use PSO
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Ti
A PersistentVolumeClaim is a request for a PersistentVolume of a certain type and size. If a plugin is registered for the specified StorageClass, it creates a matching PersistentVolume. To attach this storage to a container at a specific path, refer back to the claim as follows:
volumes:
- name: myvol
  persistentVolumeClaim:
    claimName: file-claim
containers:
- ...container config...
  volumeMounts:
  - name: myvol # must match the volume name above, not the claim name
    mountPath: /mountpoint
What did you NOT need to do?
Create a filesystem via a UI of any kind (GUI or CLI)
Log in to the node and issue the commands to mount the filesystem
Think about what happens if the physical node fails and you need to move the app
Track who uses which filesystem so you know when to cleanup
In return, the volume automatically follows your container if Kubernetes restarts it or moves it to a different physical host.
Example 1: Shared Scratch Space
The first example to illustrate PSO is a simple Deployment and PersistentVolumeClaim to create a shared scratch workspace. In other words, the application is a standard Linux shell and command line tools.
A shared scratch workspace is useful when a small team needs to work together, for example for forensics investigation of log files. The goal is to automatically create a shared scratch space to download necessary data to work with and produce derivative datasets. PSO automates the creation, connection, and cleanup of the workspace on FlashBlade.
scratch-space.yaml.
The yaml file creates two pods that both mount a shared “/scratch” directory for collaborative work. This illustrates the core steps necessary to use a persistentVolumeClaim to attach a volume to pods in a Deployment.
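The linked scratch-space.yaml is not reproduced in this post, so the following is a minimal reconstruction based on the description above; the image, labels, and resource names are assumptions:

```yaml
# Reconstructed sketch of scratch-space.yaml; names, image, and sizes are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-claim
spec:
  storageClassName: pure-file        # provisioned by PSO on FlashBlade
  accessModes:
    - ReadWriteMany                  # RWX so every replica shares the volume
  resources:
    requests:
      storage: 1Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scratchspace
spec:
  replicas: 2                        # two pods, one per collaborator
  selector:
    matchLabels:
      app: scratchspace
  template:
    metadata:
      labels:
        app: scratchspace
    spec:
      containers:
        - name: shell
          image: ubuntu:18.04        # a plain Linux shell and command-line tools
          command: ['sleep', 'infinity']
          volumeMounts:
            - name: scratch
              mountPath: /scratch
      volumes:
        - name: scratch
          persistentVolumeClaim:
            claimName: scratch-claim
```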
To use this shared scratch space, each user connects to a pod and spawns a shell by running ‘exec’ on one of the pod instances:
> kubectl exec -it scratchspace-7877bc68f9-s8nxw /bin/bash
Once this shell has been started, the user can collaborate using the ‘/scratch’ directory and all results will be visible to users in other pods.
What if a third user wants to also collaborate? Adding additional workspaces that share the same filesystem is as simple as:
> kubectl scale deployment scratchspace --replicas 3
Each additional replica pod is automatically connected to the filesystem, i.e., the mounting and unmounting of the filesystem is automatically handled by PSO.
There are many directions to make this example even more useful: 1) add a unique name to the resources based on a ticket number, 2) automate cleanup of resources when finished, and 3) use a prebuilt container image with already installed tools.
Example 2: NextCloud
NextCloud
NextCloud and OwnCloud are both open-source file-sharing applications providing an alternative to cloud services like Dropbox. They deliver similar file-sharing services across multiple client platforms while keeping the data in-house instead of in an offsite third-party cloud; performance, cost, or data governance may all drive the preference for an in-house service. Both applications rely upon persistent storage for the user’s data and metadata.
The NextCloud configuration borrows heavily from existing documentation for NextCloud. This configuration is kept simple in order to highlight the usage of persistent storage.
The config file combines three major elements:
Service ‘nextcloud’ that exposes the application externally
Deployment ‘nextcloud’ that starts the application server and mounts a filesystem
PersistentVolumeClaim that PSO will use to create a matching filesystem on the FlashBlade
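As a hedged sketch of how these three elements fit together (the ports, image, and sizes below are illustrative assumptions, not the post's actual values):

```yaml
# Skeleton of the three elements; all values are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: nextcloud
spec:
  type: NodePort                     # exposes the app externally (not for production)
  selector:
    app: nextcloud
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud           # official image; tag is an assumption
          volumeMounts:
            - name: nc-files
              mountPath: /var/www/html/data
      volumes:
        - name: nc-files
          persistentVolumeClaim:
            claimName: nextcloud-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-claim
spec:
  storageClassName: pure-file        # PSO creates a matching FlashBlade filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Ti
```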
Create the nextcloud service as follows:
> kubectl apply -f nextcloud-pure.yaml
To make this application accessible externally, I used a NodePort service, which is not recommended in production. To access the NextCloud application, find the port assigned by Kubernetes:
> kubectl get svc nextcloud | grep NodePort
Connect to the NextCloud server by using that port number and the ip address of any Kubernetes node. Production environments should configure load balancers instead.
Using InitContainers for Setup
The NextCloud example utilizes an initContainer to solve a common problem: configuring the volume to meet the conditions expected by the application. An initContainer is a container that is run to completion before the application container is started and must complete successfully for the application to begin. InitContainers encode the steps necessary to satisfy pre-conditions or dependencies that the user does not want to or cannot build into the main application container. As an example, an initContainer can download a dataset to the persistent volume for the primary application to use.
The NextCloud server expects the “data/” directory to be owned by the www-data user and to have permissions of 770; otherwise the application fails to start. An initContainer achieves this precondition with two simple commands:
initContainers:
- name: install
  image: busybox
  command:
  - sh
  - '-c'
  - 'chmod 770 /var/www/html/data && chown www-data /var/www/html/data'
  volumeMounts:
  - name: nc-files
    mountPath: /var/www/html/data
Many applications, especially legacy applications not originally built for containers, expect preconditions on the volumes and initContainers are the easiest way to achieve this. Beyond the simple example here with busybox, different container images can be used to place data or install necessary software on the volume.
Example 3: OwnCloud
OwnCloud
The OwnCloud deployment involves multiple applications working together: MariaDB and Redis alongside the OwnCloud server. These applications use a mix of block devices and filesystems; for example, MariaDB is an RDBMS service that requires a block-device-backed volume, whereas OwnCloud stores user data on a filesystem-backed volume. The configuration is derived from the Docker documentation for OwnCloud.
With PSO, multiple applications and volume types can be mixed and modified easily. For example, switching the Redis application between “pure-file” and “pure-block” is as simple as changing the value in the yaml config before deploying.
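As a hedged illustration (the claim name and size are assumptions), the Redis claim differs from a file-backed claim by a single line:

```yaml
# Illustrative claim; only storageClassName decides file vs. block backing.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: redis-data
spec:
  storageClassName: pure-block       # change to "pure-file" for an NFS-backed volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```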
Running multiple different applications on FlashBlade works well because the system is 1) built to support high concurrency and many simultaneous applications, 2) scales-out seamlessly and non-disruptively, and 3) is architected natively to leverage the mixed IO performance of NAND flash.
Deploy this set of applications and volumes together with the following command:
> kubectl apply -f owncloud-pure.yaml
Note that there is a helm chart for OwnCloud here but it is currently not working due to a failed connection between OwnCloud and mariadb. With the default storageClass set to “pure-file,” this helm chart would automatically use FlashBlade as the backing store.
Quick Tips for Usability
Below are some quick-tips that I found useful in running the OwnCloud application.
Run OwnCloud reporting commands internally:
kubectl exec oc-owncloud-0 occ user:report
Log in to the server and look around at the contents of the filesystem.
kubectl exec -it oc-owncloud-0 /bin/bash
Summary
A good software engineer always looks to automate tedious and error-prone tasks in order to increase the team’s productivity. Running Kubernetes and the Pure Service Orchestrator automates almost all storage related tasks: filesystem creation, mounting, and deletion. The result is simple and agile infrastructure that just works.
Legal stuff — Please note: I am an employee of Pure Storage. My statements and opinions on this site are my own and do not necessarily represent those of Pure Storage.

Source: https://joshua-robinson.medium.com/stateful-kubernetes-applications-made-easier-pso-and-flashblade-aa3e2ebb0248 (Joshua Robinson, 2019-03-26). Tags: Kubernetes, Pure Storage
10 Game-changing AI Breakthroughs Worth Knowing About

From the beginning of my AI journey, I found several ideas and concepts promising unparalleled potential; pieces of research and development that were absolutely mind-blowing; and breakthroughs that pushed this field forward, leaving their mark on its glorious history.
Also, in the last few years, the number of people pointing to the “Skynet-terminator” scenario has increased exponentially … ^_^!
So today, I decided to curate a list of some of the most interesting ideas and concepts (from my own experience) that kept me going all these years.
I hope they’ll inspire you too, as they did to me.
Several objectives, possibilities, and “new ways of thinking” have emerged out of these ideas, so don’t take any of these lightly. We never know what will happen next.
A year spent in Artificial Intelligence is enough to make one believe in God — Alan Perlis
So let’s get started with the very first love of an AI Enthusiast.

Source: https://medium.com/towards-artificial-intelligence/10-game-changing-ai-breakthroughs-worth-knowing-about-b2076afc4930 (Nishu Jain, 2020-12-03). Tags: Artificial Intelligence, Machine Learning, Deep Learning, Research, Technology
How to Get Your Retro Gaming Backlog Under Control | How to Get Your Retro Gaming Backlog Under Control
It’s time to stop procrastinating and start gaming
Most gamers have a massive backlog. If you’re a retro gamer like me, you own — or want to own — multiple consoles, which only compounds the problem.
If your backlog is an overwhelming nightmare like mine once was, I’ve got good news for you: there’s a way around it. In this article, I’m going to show you how you can completely eliminate it.
After many misadventures in gaming, here’s how I learned to deal with my backlog.
1. Examine your console collection
Take a look around your TV stand. Chances are, you have more games for some consoles than others.
Make downsizing your backlog easier by starting with the console that has the least amount of games.
Only have two games to complete on your GameCube, but have twenty for your PS1? Then start with your GameCube.
The only time this advice doesn’t apply is if one of those games takes an insanely long time to complete.
Some games (*cough cough* Rayman) are almost impossible to finish in a short period of time because there are absolutely no save points throughout the levels, which brings me to my next point.
Image by Francis on Dribbble.
2. Play your easiest games first
Certain games are a lot less demanding than others. I completed Mario Kart: Double Dash eons ago because it’s not that hard. Portal Runner for the PS2 is another great example. It can be completed in a couple of hours, give or take.
I’ve played them multiple times. The objectives are clear for both games, so they’re easy to play. It’s nice to know that they’re sitting in my collection just for fun.
Imagine if your entire game library was like that.
The moral of the story? Knock out the less challenging games first, and you’ll be flying through levels and your backlog before you know it.
3. Stop spending like a banshee
You might be dying to get your hands on a game series from a manga you love, like Ranma 1/2.
The problem? You don’t own the console for it.
Prioritize by putting these games on your to-buy list — but don’t buy them. Doing this will save you time and money.
Complete the games you already own before further cluttering up your retro library by stocking up on more consoles and games.
You can always reevaluate later and see if the games you might want to play are still important to you.
Image by Francis on Dribbble.
4. Take charge of your time
Setting a schedule may take the spontaneity out of gaming, but it’s also one of the best ways to get rid of your backlog.
It doesn’t mean that you never play other games randomly or for fun. But maybe you set every Saturday aside for a month straight so that you can finally complete Mother or Crash Bandicoot: Warped.
Now that you’ve narrowed down which games to play, spend a couple of weekends just grinding it out.
Rinse and repeat, and watch in awe as your to-play list magically grows smaller.
5. Get yourself a level book
Can’t remember which games you completed? Instead of scrolling through your memory cards, start recording them in your new level book.
Engineers use level books to record field notes for surveying. It turns out they also make the perfect video game notebook: they’re small, lightweight, and easy to use.
Your level book doesn’t have to be fancy; you could even use a plain notebook. But make sure to keep it with your game collection so that it doesn’t get lost in the abyss or mixed up with your office supplies.
Don’t underestimate having a level book — it’s a secret power-up that you definitely need in your arsenal. It will prevent you from losing track of games you’ve completed or are currently working on.
Image by Francis on Dribbble.
There will always be more games
And gamers wouldn’t have it any other way. It’s time to clear out your backlog and make way for the shiny and new.
There are going to be a lot of games coming out for the PS5 and the latest Xbox in 2021. This is the moment that we’ve all been waiting for.
It’s time to demolish your backlog, once and for all.
Are you ready to knock it out of the park?

Source: https://medium.com/super-jump/how-to-get-your-retro-gaming-backlog-under-control-b6e788caa22d (Ilene Kuehl, 2020-12-16). Tags: Features, Gaming, Retro, Videogames, Productivity
Understanding customer mindsets will save insurance companies

In an age of ever-increasing technological disruption, insurance companies need something much more human: a deep understanding of their customers.
The insurance industry is at a critical inflection point. Finding itself increasingly disconnected from mistrustful customers and unaware of their true needs, traditional insurers are at risk of losing out to a combination of new entrants and incumbents who are quicker to adapt to changing expectations. To survive and succeed, insurers can’t just move faster, they need a clear idea of where to go and which opportunities to pursue.
We recently completed an extensive ethnographic study to truly understand the unique fears, hopes and motivations our customers have and alongside this developed a behavioural framework that provides clear direction for brands and stimulates new ways of creating value for both businesses and their customers.
A difficult climate
The insurance industry faces challenges on all fronts: the speed of technological change and shifting consumer behaviour leave insurers ever more vulnerable to new entrants. Existing markets are becoming more and more commoditised, with margins being squeezed in a race to the bottom. All the while, companies race to keep up with changes to regulation in an already highly complex and heavily regulated sector.
In this perfect storm, it’s easy to miss new opportunities for value creation based on addressing unmet customer needs. Whilst the fundamental needs to plan for a rainy day, retirement or an unforeseen life event are somewhat timeless, the way we live our lives and the scenarios that keep us awake at night have evolved and multiplied. Alongside this, what we look for in products and services has changed profoundly in recent decades, as forward-thinking companies re-define customer expectations.
The traditional insurance industry hasn’t kept up, relegating customer insight to an afterthought, leading with incremental improvements to tired propositions and segmenting customers by socio-demographics.
Changing demographics, falling trust
The foundational shifts in Western Europe and US society are well documented: on average young people are buying cars, raising families, and getting on the property ladder later than ever (if at all).¹ This delayed ‘adulting’ is accompanied with a decline in traditional support networks and safety nets, such as organised religion and trade union membership, and a rise in lower paid, less-secure employment.² All of which creates an underlying atmosphere of uncertainty and anxiety for many, with most Americans and Europeans believing today’s children will grow up to be worse off than their parents.
This uncertainty about what the future holds should make insurance the perfect product for these fluid times. Yet trust is at an all time low. The financial services sector is the least trusted sector globally, behind automotive, manufacturing, and, perhaps surprisingly given recent privacy and fake news stories, the technology industry.
Going beyond segmentation with behavioural mindsets
To understand why trust is so low, and how to unearth new opportunities for value creation, you need to go beyond traditional approaches that cluster customers based on socio-demographic factors or digital maturity (or even worse, meaningless generational groupings such as ‘Millennials’ and ‘Gen Z’). These ways of grouping customers ignore context, behaviour and psychology, and are therefore poor springboards for innovation.
Instead, Fjord uses design research to develop behavioural mindsets that are rooted in a deep understanding of where the category sits in the lives of customers and their subsequent goals, needs, feelings and beliefs.
Rather than focus groups that foster respondent bias and surveys that skim the surface, Fjord spent three weeks with a cohort of insurance customers from all walks of life, combining diary studies, mobile ethnography and in-depth interviews to understand their aspirations, motivations and mental models used to understand the category.
Meet today’s insurance customer mindsets
Our findings reveal shocking levels of mistrust among customers, but the biggest mistake insurance companies make is assuming customers think about the category in a similar way. Our behavioural mindsets framework reveals this couldn’t be further from the truth.
These mindsets not only allow clients to look at their customers in a new light, but they make sense of existing trends in B2C innovation we’re seeing in the market and point towards new strategies and propositions that can unlock new value and revenue streams.
We uncovered four broad behavioural mindsets, each of which has implications for new propositions:
1. “Optimizers”
Optimizers strongly believe in the concept of insurance and are willing to go above and beyond to find the best cover for them, i.e. invest more time or pay a bit more money. Because of this investment in a better fitting product, they tend to be ‘brand sticky’ rather than brand loyal, staying with a provider because they’ve put the effort in to get the right policy, rather than affinity with the brand.
The opportunity: Optimizers can be won over with bespoke services tailored to their needs. A great example is InsurTech start-up Wrisk, which provides dynamic risk scores and bespoke multi-product policies, and adapts to customers’ changing lives. However, whether large numbers of cautious Optimizers are willing to trade big-brand security for tailored services from unproven start-ups remains to be seen.
2. “Satisfiers”
Satisfiers just want the basics done at the right price and see insurance as a necessary evil. They rely on trusted third-parties and price comparison sites to get the job done swiftly, using rules-of-thumb that combine brand and price, e.g. quickly sorting policies by price and choosing the cheapest policy from a brand they recognise. Again, this is not brand affinity but big brand familiarity.
The opportunity: Offer Satisfiers clear, simple and transparent policies that address familiar anxieties as well as new fears — such as mental health and caring for elderly parents — whilst helping them avoid the familiar pitfalls of price hikes and loopholes. MoneySavingExpert is one brand well placed to address this opportunity, and already caters to this mindset through its Cheap Energy Club. It’s easy to imagine how this proposition could be replicated across insurance products.
3. “Experiencers”
Experiencers are focused on getting the most out of today rather than planning for tomorrow, and as such see insurance as more hassle than it’s worth. When they are legally required to buy insurance, the minimal viable product does just fine.
The opportunity: Help them buy ‘thinner slices’ of insurance when they absolutely need it, rather than forcing them to choose off-the-shelf products with a 12-month lifespan. On-demand insurance providers such as Trov do this well, providing insurance for specific items during specific times of the day. FinTech start-up Revolut also plays in this space, selling travel insurance as a bank account bolt-on and activating it when the customer leaves the country to ensure you only ever pay for it when you really need it.
4. “Seekers”
Seekers have passions or parts of their life that need optimal cover, whilst for everything else a basic policy (or no policy) will suffice. We met a 38-year-old with three health plans to ensure she had access to the best medical treatments available, including alternative treatments not available through traditional policies, and a twenty-something insurance cynic who made an exception for first-class ski cover for his action-packed winter breaks.
The opportunity: Win Seekers over through policies and services tailored to their passions and built around (and for) their communities. For instance, Bought By Many works with insurers to develop policies and negotiate discounts for underserved customers or those with niche interests, e.g. exotic pet owners or beauty salon owners. It has also recently started rolling out its own unique policies, such as Fixed for Life, a pet insurance policy which never increases in price.
Customer discontent as brand opportunity
In his 2018 letter to shareholders, Amazon founder Jeff Bezos rejoiced in the fact customers are “divinely discontent,” as these “ever-rising” expectations provide ever-increasing ways of creating customer (and ultimately shareholder) value and staying one step ahead of their competitors. Our research has shown that customers can be discontent in meaningfully different ways, which presents opportunities for brands willing to think differently and develop new propositions that solve for these unmet needs.
If you’d like to find out more about all the findings behind Insurance Mindsets, including the product strategies and value propositions aligned to these four mindsets, please get in touch.
This initiative was a collaboration between Fjord London and Accenture Insurance Strategy.
[1] For US data on the delayed marriages, births and ‘adulting’ see ‘The Changing Economics and Demographics of Young Adulthood: 1975–2016’. The UK Office for National Statistics has data on the increasing age at which people get married, as well as the declining rate of marriage overall, as well as the rising age at which people have their first child.
For insightful analysis of why young people drive less, and the long-term factors contributing to a delayed transition into ‘adulthood’ see ‘Young people’s travel — what’s changed and why?’ by Department for Transport.
[2] UK trade union data is taken from ‘Trade union statistics and bulletin for 2017’ by Department for Business, Energy & Industrial Strategy.
For the decline in UK organised religion, particularly among young people, see the Centre for Religion & Society reports on Young People and the ‘No Religion’ population.
For the rise in self-employment, and the fact that 45% of self-employed people between 35–54 years old don’t have a pension, see ONS ‘Trends in self-employment in the UK’.
Research from Resolution Foundation shows that typical self-employed person earns 40% less than an employee.
Tax Research UK (using HMRC data) shows almost 80% of self-employed people in the UK are living in poverty.
See TUC data showing that half of self-employed over 25 yr olds (approx. 2 million UK adults) earn less than the minimum wage.

Source: https://medium.com/design-voices/understanding-customer-mindsets-will-save-insurance-companies-2d021063f73 (2019-05-07). Tags: Insurance, Design, Research, Mindset, Customer Experience