title | text | url | authors | timestamp | tags
---|---|---|---|---|---|
From Ballet to Booleans
|
My Second Act
After establishing a solid base working on my own, I wanted the extra depth that I could gain from a coding bootcamp. My goal is not only to write working code but also to know why it works, so that I can craft more robust applications. I searched for a program that was online-only, had rigorous coursework, and offered flexible hours. Flatiron was the perfect fit!
In October, I began Flatiron’s software engineering bootcamp. I’ve enjoyed participating in the live lectures every day, and the collaborative nature of the class has been very rewarding. Working with my peers has shown me a number of different approaches to solving problems. Moreover, lending a hand to other students reinforces my own understanding of the curriculum. Software engineers don’t work in solitary bubbles, just as most dancers don’t perform only as soloists. The corps de ballet is best when every dancer works in concert towards one common goal, and engineering teams are the same way; I consider learning to work closely with others to be just as important as improving my coding abilities.
I have encountered a misconception that ballet and software engineering have nothing in common, but I believe they overlap in some fundamental areas. In both disciplines, the finer details can make or break your success. Just as a sickled foot can break a ballet dancer’s line, a misplaced comma can crash a software engineer’s build. However, precision alone is not enough. Each line of code must fit into a larger framework, and every ballet step must be connected to the next. Technique is critical, but it is vital that it be paired with vision and understanding of a larger purpose. All the mental skills I honed as a ballerina have prepared me well for a career as a software engineer, and I can’t wait for opening night!
|
https://medium.com/better-programming/from-ballet-to-booleans-9a5e910cc58e
|
['Rebecca Hickson']
|
2020-11-03 15:52:29.272000+00:00
|
['Software Development', 'Ballet', 'Software Engineering', 'Programming', 'Women In Tech']
|
Gradient Descent — A Powerful Optimization tool for Data Scientist
|
Model Objective
Below, we preface the article with a brief introduction to the structure of the model optimization problem (the notation is kept informal).
Suppose you have the following model: ŷ = f(X; β), where f(·) is a function that takes features (X) and parameters (β) as inputs and returns a prediction. The parameter values (β) are not known to us, so we need to learn them from the data.
We want to find the best parameters (β) so that our predictions (ŷ) are close to the truth (y). To accomplish this, we minimize a loss function: the estimate β̂ is the argument that minimizes the loss function L(·).
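Written compactly, the objective is:
$$\hat{\beta} \;=\; \arg\min_{\beta}\; L\big(y,\, f(X;\beta)\big)$$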
Minimizing the cost function
The structure of the loss function will depend on the problem you are trying to solve. Some popular loss functions are RMSE, RSS, and cross-entropy.
Note: Identifying the structure of the loss function is beyond the scope of the article.
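For reference, the standard definitions of these losses, for n observations with targets y_i, predictions ŷ_i, and (for binary cross-entropy) predicted probabilities p̂_i, are:
$$\text{RSS} = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2, \qquad \text{RMSE} = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$$
$$\text{Cross-entropy} = -\tfrac{1}{n}\sum_{i=1}^{n}\big[\,y_i \log \hat{p}_i + (1-y_i)\log(1-\hat{p}_i)\,\big]$$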
Direct Method
The direct method of minimizing a cost function is ideal because you can be certain that the answer you receive will be the optimal solution. Take for example the simple linear regression below.
y = β₀ + β₁x + ε
The objective is to solve for the optimal values of the β parameters. Linear regression has a well-defined closed-form solution for this loss function, so the parameters β₀ and β₁ can be solved for directly.
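In the standard ordinary least squares form, those closed-form estimates are:
$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\,\bar{x}$$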
But, not every loss function has a direct solution. Therefore, we need other methods to solve the parameter values.
Approximation Method
Some formulas are too complex to solve analytically for the unknown parameters. And even when the parameters can be solved for, the computation can take a very long time. There are many approximation methods, but we will focus on gradient descent.
We deem the method an approximation because we are trying to get close to the true parameter values that will minimize a loss function. There can be times when the algorithm converges on a local solution instead of a global solution (more on that later).
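To make the idea concrete, here is a minimal sketch of gradient descent for the simple linear regression loss above. It is illustrative only; the learning rate, iteration count, and synthetic data are my own choices, not values from the article.

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, n_iters=2000):
    """Fit y ≈ b0 + b1 * x by minimizing the mean squared error (same argmin as RSS)."""
    b0, b1 = 0.0, 0.0                              # initial parameter guesses
    n = len(x)
    for _ in range(n_iters):
        error = (b0 + b1 * x) - y                  # current residuals
        grad_b0 = (2.0 / n) * error.sum()          # dL/db0
        grad_b1 = (2.0 / n) * (error * x).sum()    # dL/db1
        b0 -= lr * grad_b0                         # step against the gradient
        b1 -= lr * grad_b1
    return b0, b1

# Synthetic data from the line y = 2 + 3x, plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 + 3 * x + rng.normal(0, 1, 200)
print(gradient_descent(x, y))                      # roughly (2, 3)
```

With a learning rate that is too large the updates diverge; with one that is too small, convergence is slow. Either way we only approach the minimizing parameters iteratively rather than solving for them directly, which is why the method is called an approximation.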
|
https://towardsdatascience.com/gradient-descent-a-powerful-optimization-tool-for-data-scientist-89f48e8401c6
|
['Steven Loaiza']
|
2020-12-27 19:43:38.104000+00:00
|
['Python', 'Machine Learning', 'Algorithms', 'Optimization']
|
The economy, immigration, the Irish border.
|
The economy, immigration, the Irish border.
These were a few of the key topics leading up to the EU referendum vote, and all three continue to be focal points in the ongoing Brexit negotiations between the UK and Brussels. What wasn’t talked about so much, however, was the direct impact that Brexit will have on the ‘daily lives’ of people in the UK, and recent findings are showing that energy prices are expected to rise as a result of Brexit.
In a recent open letter to the President of the European Commission, Jean-Claude Juncker, and UK Prime Minister, Theresa May, a collective of UK and EU companies urged both leaders to have strong stances on climate and energy to foster prosperity and enhance the ability of both parties to tackle climate change. The letter also warned that a no-deal Brexit scenario, amongst many other things, would lead to an increase in UK energy bills.
How will this impact the daily lives of people in the UK?
For a start, an increase in energy prices could lead to what is commonly known as ‘fuel poverty’, a household situation which can damage people’s quality of life and health, as well as impose wider costs on the community. OFGEM estimates that as of May 2018, the average variable tariff for a dual fuel customer was £1,138 per year, equating to nearly 5% of the average UK household budget. Given that the UK hasn’t seen average wage increases of 3% since 2015, a rise in energy prices is sure to put a strain on households.
It is important to note, however, that fuel poverty isn’t necessarily just down to energy costs.
Many houses in the UK aren’t using energy efficiently, meaning there is a bigger chance that energy is being wasted. Households are thus having to spend more to keep their homes warm, and according to EU analysis, UK homes are some of the most expensive to heat in Europe because of poor maintenance and insulation, with 30% of homes in England having an EPC (Energy Performance Certificate) rating of E, F or G; the lowest brackets.
What this suggests is that there is maneuvering room to mitigate rising energy costs through more energy efficient options, and in the event of a no-deal or an unfavourable one, these options will provide a much needed safety net amidst projected energy price spikes. So what can be done?
Firstly, improving home insulation has many upsides in saving money whilst significantly reducing heat loss.
According to the Energy Savings Trust, a household can save yearly amounts of anywhere from £285 to £395 with loft insulation, £330 to £725 with cavity walls and £40 to £65 with floor insulation; just to name a few money savings options.
The Energy Savings Trust also found that lighting accounts for 15 percent of a typical household’s electricity bill, and that significant savings could be made through an array of energy efficient practices and purchases such as changing which bulbs you use and how you use them. LEDs, for example, are said to be more efficient than CFLs (compact fluorescent lamps) and will save you more money in the long term. So, by replacing all bulbs in your home with LED alternatives, a household can save around £35 a year on electricity bills.
What these findings reveal is that energy inefficiency is very much a reality for many households here in the UK, and given that there are options to counter that inefficiency, it raises the question: why aren’t more people employing these energy-saving options?
There are three possibilities why this may be.
Firstly, upgrading a house to become more energy efficient does incur an initial upfront cost, which for some households might be too expensive. There is also the strong possibility that information is not disseminated in a way which people can come by easily, and thus the benefits of energy efficient upgrades are simply being lost upon the public. Finally, people may also not be pursuing energy efficient practices due to a lack of ‘motivation’.
What is interesting about these reasons is that all, save the ‘lack of motivation’, have traditionally had solutions. This is obvious owing to the fact that motivation is a personal and intrinsic human behaviour, and therefore is almost impossible to change using an overarching solution. Or is it?
The Manchester based energy management and trading company, EnergiMine are developing a blockchain powered EnergiToken (ETK) rewards platform. This platform will allow people to earn EnergiTokens for partaking in energy efficient practices, such as using low carbon transport, using less energy at work, amongst other things. Once obtaining ETK, people will be able to use EnergiTokens for an array of different purposes, ranging from paying off their energy bills, purchasing energy efficient appliances, trading their tokens on exchanges, and using ETK as a payment method such as at electric car charging points.
One of EnergiMine’s partners, ON5, already offer strategic programmes and workshops for communities, homes and businesses, to help them reduce their energy consumption, and subsequently their energy bills. ON5 value clear and accurate information, finding innovative solutions, and using digital technological tools and campaigns to enable everyone to become an ‘eco-actor’ by making simple changes to their current consumption habits. With simple yet effective guidance from ON5, and the incentivising reward of EnergiToken, changing consumption behaviour has never been more beneficial for the consumer.
Therefore, if energy bills are to rise as a result of Brexit, these findings refreshingly reveal that there are indeed ways to counter this, and what’s more, the impact of rising prices often comes down to how one views and consumes energy.
Use less, save more.
|
https://medium.com/energitokennews/the-economy-immigration-the-irish-border-98ab3cc7b8e2
|
[]
|
2018-12-11 11:09:59.775000+00:00
|
['Brexit', 'Money', 'Energy', 'Sustainability', 'Climate Change']
|
The Flavours of APIs 🍦
|
The Flavours of APIs 🍦
Exploring Different Types of APIs and When to Use Each
In today’s tech landscape, APIs are a hot topic. The rise of microservice-based architectures, as opposed to legacy monoliths, has further driven the need for robust APIs. With microservices, powerful applications are broken up, i.e. distributed as discrete components across numerous logical and physical machines. This is possible in part due to the availability of cloud computing: virtualized access to almost limitless compute and storage resources in a pay-as-you-go model, provided by large technology companies operating massive data centres around the globe.
These microservices based architectures are a contrast to the large-scale, tightly coupled applications of the past that were better designed to run on the limited infrastructure available at the time. When applications required more resources, they would need to scale vertically (i.e. adding more memory, CPU, or storage to the machine). The ubiquity of computing resources in the cloud allows modern applications to instead scale horizontally, by distributing the load over many less powerful machines. Further, applications can be designed intelligently – with components running on infrastructure that better meets the unique load. This ultimately leads to bite-sized chunks of the larger application having infrastructure which is uncoupled from the rest.
With this style of computing, however, comes the requirement for a lightweight communication mechanism which allows for two things:
Consumers (either humans or other systems) to interact with a specific part — and only that specific part — of the broader application.
The various distributed components to interact with each other.
Enter: the modern API. Built on the HTTP protocol, and capable of sharing data over the network in a lightweight fashion, the API is the piping of the web that connects users and technologies.
Microservices and APIs within a distributed system
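As a toy illustration (my own sketch, not code from this article), a single microservice might expose just one slice of the wider application over HTTP. Here is a hypothetical orders service built with Flask:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for this service's own datastore.
ORDERS = {"1001": {"status": "shipped"}, "1002": {"status": "pending"}}

@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    """Serve only the 'orders' part of the broader application."""
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=5000)
```

A browser, another service, or a command-line client can then call GET /orders/1001 without knowing anything about how the rest of the system is built or deployed.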
Just as changes in both technology and human behaviour led organizations to switch from monolithic development practices to microservices, the consumers of digital applications have changed in recent years as well. With new ways of building and deploying applications come new ways of integrating them with other areas of your landscape.
While many of the traditional software patterns still exist much as they did years ago, new requirements have emerged and old requirements have been amplified, given the prominence of digital applications in our day-to-day lives.
That said, let’s take a look at the various flavours of APIs, how they work, and the scenarios in which each is useful.
|
https://medium.com/swlh/the-flavours-of-apis-c13552799645
|
['Ryan S.']
|
2019-09-23 18:34:52.118000+00:00
|
['API', 'Technology', 'Software Engineering', 'DevOps', 'Software Development']
|
Common Mistakes Made By Facebook Advertising Novices
|
Advertisers who are new to Facebook advertising often panic, so they tend to do things that seem effective but are, in fact, common mistakes.
Mistake #1-No sense of goal
In most cases, this is because you have no idea about the various terms of Facebook advertising optimization.
Facebook’s advertising goals are diverse, and new ad types regularly introduce new ones, each with its own measurement indicators. To make things easier to follow, we can treat these indicators as KPIs. One point must be emphasized: if you want to know whether the Facebook ads you run are effective for your brand, you must know which indicators to measure them by.
For example, to gauge the return on your Facebook advertising investment, you can track these different actions:
New homepage “Like”
Post interaction
Mail registration
Website click
Website sales or sales leads
In view of the different business goals of each advertiser, we will not prescribe exactly which indicators you should track. If you are just launching a new brand, then building brand awareness will matter even more; in that case, you need to find ways to increase your homepage followers.
The most important thing is to make sure you have a specific goal in mind, and a systematic understanding of the terms related to Facebook advertising before you start. For example, only once you know what a homepage like is, why you want homepage likes, and how to get them, can you set a concrete goal, say, adding 100 homepage likes every month, and use regular checks to ensure that your budget is being used properly.
Mistake 2-Use the wrong ad format
This usually has one of two causes: either you are new to Facebook advertising and don’t know how to choose the right format for your specific ad, or one of your ads has been running for a while and used to be very effective, but as Facebook keeps introducing richer advertising formats, its original appeal has gradually faded or even disappeared.
Facebook keeps introducing new forms of advertising, partly to increase sales and partly to minimize the decline in user experience caused by a single, repetitive ad format. Remember, user attention is very limited and shrinking. From my observation, Facebook’s main ad format was the single image for a long time; after that came carousel and video ads, then Cinemagraph and Canvas, then the current 360 video ads, along with the ongoing Live and mid-roll video interstitial ads.
Facebook’s most recent formats are Canvas and 360 video. Canvas lets advertisers post rich-media, full-screen ads on mobile (combining carousel images, single images, short videos, etc.), while 360 video ads let advertisers showcase products or tell brand stories through panoramic video. These brand-new formats cost more, but each release brings a period of dividends: even though user attention keeps decreasing, people are always curious about novel things.
The amount spent on different advertising formats is also different. The following is the average cost of different Facebook Placements in 2020. This will help a lot.
Data Source:Facebook ads cost tool — ADCostly
Mistake 3-You only have one campaign
Facebook advertising has three levels, from smallest to largest: ad, ad group, and ad campaign.
In Facebook’s official definition, an ad campaign is a marketing activity designed to meet a specific advertising goal, while an ad group is a combination of ads that differ in some characteristic. In a sense, Facebook’s ads are built around ad groups: an ad group bundles ads under a given daily advertising budget, schedule, bidding method, and target audience.
As a novice without a deep understanding of ad groups, you might think the easiest approach is to put all these different ads into one ad group to compare their effectiveness. It isn’t. Facebook has a system-level algorithm that funnels each ad group’s delivery towards its best-performing ad (officially called automatic optimization), so if you concentrate too many ads in one campaign, many of them will not get enough impressions.
After setting up multiple campaigns, you should use calculators so you can quickly check how your advertising is performing. If a campaign is costing too much (or spending too little), you should stop or modify it promptly. Below are two free Chrome extension calculators that let you quickly and easily calculate your CPC, CPM, CPA, ROI and ROAS; a short sketch of the underlying formulas follows the list.
CPC & CPM & CPA Calculator
ROI & Roas Calculator
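For reference, the formulas behind these calculators are simple; here is a minimal sketch in Python (the example numbers are made up):

```python
def cpc(spend, clicks):
    """Cost per click."""
    return spend / clicks

def cpm(spend, impressions):
    """Cost per 1,000 impressions."""
    return spend / impressions * 1000

def cpa(spend, conversions):
    """Cost per action (acquisition)."""
    return spend / conversions

def roas(revenue, spend):
    """Return on ad spend."""
    return revenue / spend

def roi(revenue, spend):
    """Return on investment, as a multiple of spend."""
    return (revenue - spend) / spend

# Made-up example: $500 spend, 1,250 clicks, 80,000 impressions, 40 sales, $2,000 revenue.
print(cpc(500, 1250), cpm(500, 80000), cpa(500, 40), roas(2000, 500), roi(2000, 500))
# -> 0.4 6.25 12.5 4.0 3.0
```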
Mistake 4 — The target audience is too broad
In the traditional media era, the primary consideration for advertising was coverage, whether in print media such as newspapers and magazines (roughly corresponding to Facebook image ads) or in broadcast media such as TV (roughly corresponding to Facebook video ads): the larger the circulation, or the more viewers, the higher the advertising unit price.
In the age of social media advertising, what you should value most is to push the right ads to the right people. Remember those audience targeting features that Facebook mentioned? Accurate audience targeting means that your ads are more likely to be seen by customers who may actively respond to you, rather than being presented to users who don’t pay much attention to it.
In fact, if your advertising engagement is too low, you should first consider whether the audience is too broad. Try refining your targeting strategy: add one more targeting rule at a time, see what changes, and improve its accuracy over several attempts.
You can also get more interest-based audiences for free by using the Facebook ads cost tool: start from one core interest and expand it into related audience interests.
Mistake 5-You don’t get straight to the point
Be clear-eyed: advertising is not done for its own sake, it is done to make money. To make money, you must work within the customer’s short attention span (roughly 8 seconds or less) and convince them to take the action you expect within that window. This is the role of the “call to action” in your advertisement; as David Ogilvy said, an advertisement needs an attractive promise to capture customers, and the promise should be short.
For the best results, David Ogilvy’s recommendation was to use no more than 14 words in your title. I would also add a personal suggestion: use short links for your website.
Mistake 6 — Too much text is used in the ad image
Maybe you have a creative team that stitches together all the positive comments from real users into a convincing product image (as we often see on Taobao). It looks great, so you put all that text into your Facebook ad, only to have it rejected under Facebook’s ad content policy.
The rejection may happen at the approval stage, or after the ad has been running for a while. Some people conclude from this that Facebook’s advertising policies are ambiguous and elusive. In fact, Facebook reviews ads even while they run: an ad approved by the algorithm can still be throttled or stopped later if it falls short on user-engagement measures.
Facebook also appears to have its own mechanism for reducing the delivery and exposure of text-heavy image ads to protect the user experience, which means that ads where text covers a large proportion of the image tend to perform worse than those with little or no text.
Our suggestions for this are:
1. Try to use pictures without text
2. If you need to add text to the picture, you can consider using Facebook’s official text grid detection tool
|
https://medium.com/plus-marketing/common-mistakes-made-by-facebook-advertising-novices-967696ac43a2
|
['Jakeson Christopher']
|
2020-12-01 19:38:16.316000+00:00
|
['Facebook', 'Social Media', 'Facebook Marketing', 'Facebook Ads', 'Digital Marketing']
|
The Challenges of Staying Informed in the Mis-Information Age
|
Google’s own trillion-dollar search engine algorithm is susceptible to false and misleading information. When Devin Patrick Kelley gunned down 26 churchgoers in Sutherland Springs, Google spread misinformation thanks to its ‘Popular on Twitter’ feature⁷. Google may have put too much trust in unverifiable sources when feeding real-time Twitter results for related searches. Just hours after the incident, tweets falsely labeled Kelley as a Muslim convert, member of a pro-Bernie Sanders group and a radical with ties to ANTIFA. Millions of people read the Google feed, and the misinformation and speculation shaped perceptions while failing to provide verifiable facts.
Google’s had other issues with false or misleading information appearing in search results. Originally meant to answer common questions from consumers, Google’s ‘answer box’ is powered by websites ranking on the first page for specific searches⁸. The ‘rich snippet’ or ‘position zero,’ as it’s also commonly known, helps power voice searches on mobile devices and Google Home. While only 15 to 20 percent of all Google searches currently return a featured snippet, the impact can be significant. Earlier this year, prominent search engine marketer, Danny Sullivan, noticed that select Google searches were returning clearly false results, including the idea that Barack Obama was attempting a government coup d’état. ALT-right news sites like Breitbart also generated unfounded results, including the accusation that Hillary Clinton is an alcoholic. The questionable sources can originate across the political, socio-economic and geographic spectrum, making it difficult for Google to identify and remove erroneous results, but that doesn’t make the issue any less important to address.
Originally known for posting pictures of flowers, weddings and gardens, Pinterest has broadened its userbase significantly. Unfortunately, with growth comes opportunity for misuse. While Pinterest is an image-centric website, it’s not immune to manipulation. For example, many people turn to Pinterest for healthy lifestyle advice and information. Unfortunately, Pinterest is chock-full of misleading information, ranging from miracle cures to warnings about common ingredients that can kill you⁹. One popular pin saved to 16,000 boards suggested vitamin B17 is a viable cancer treatment, when in fact it can kill in high dosages due to cyanide levels. Another pin claims Alkaline water kills cancer. The Washington Post reported Pinterest was also inundated with bogus election-oriented content by the Russians.
No platform is immune.
Social media platforms are not the only targets of hostile hackers and opportunistic hucksters. Even a highly trusted marketplace like the Google Play Store has been compromised. For a brief time in early November, anyone attempting to download WhatsApp for Android from the Google Play Store may have been duped into adding a fake app¹⁰. While the fake app provided WhatsApp functionality, it buried users in advertising. The creators of the app used a few simple tricks, including an identical icon, a similar-looking company name and similar copy, to convince hapless visitors to download it. The app could have contained more malicious malware or otherwise compromised phones, had the criminals decided to go that route. Fortunately, that was not the case; unfortunately, nearly one million people downloaded the fake app before it was deleted from the store.
Because of all of the questionable content on ‘trusted’ platforms over the past year or so, US internet users are more jaded about the information sources from which they consume content. In a recent HubSpot study, most US respondents indicated they distrust the top three platforms nearly equally: 59 percent of Facebook users find the platform to be at least somewhat untrustworthy, compared to 55 percent of Google and 58 percent of Twitter users¹¹.
|
https://medium.com/online-marketing-institute/the-challenges-of-staying-informed-in-the-mis-information-age-586dc99859e4
|
[]
|
2017-11-23 01:50:00.434000+00:00
|
['Information Technology', 'Fake News', 'Facebook', 'Digital Marketing', 'Social Media']
|
Parse Args in Bash Scripts
|
There is something to be said for the immediacy of using Bash scripts, especially when dealing with relatively simple system operations; however, parsing command line arguments has always been rather cumbersome and usually done along the lines of several painful if [[ ${1} == '--build' ]]; then ... .
And, yes, I know about getopts : hopefully, no one here is going to argue that it’s either user-friendly or feature rich, now, are we?
On the other hand, Python is pretty convenient for system operations (especially when using the sh module) but sometimes a bit of an overkill, or just missing the immediacy of Bash: however, the argparse module is nothing short of awesome, when it comes to power and flexibility in parsing command line options.
This simple Python script tries to marry the best of both worlds, allowing with a relatively simple setup to parse arbitrary command line options, and then having their values reflected in the corresponding local environment variables.
Usage
The usage is rather straightforward: we invoke it with a list of the desired option names, followed by the actual command line arguments (`$@`), separated with `--`:
# The `-` indicates a bool flag (its presence will set
# the associated variable, no value expected);
# the `!` indicates a required argument.
PARSED=$(./parse_args keep- take counts! mount -- $@)
source ${PARSED}
the values of the arguments (if any) are then available via the `${ }` operator:
if [[ -n ${keep} ]]; then
echo "Keeping mount: ${mount}"
fi
For example:
└─( ./test --keep --mount /var/loc/bac --take 3 --counts yes
Keeping mount: /var/loc/bac
Take: 3, counts: yes
The trailing - (simple dash) indicates a “flag” (a boolean option, which takes no value and whose presence will result in the corresponding variable to be set), while a trailing ! indicates a required argument:
└─( ./test --keep --mount /var/loc/bac --take 3
usage: [-h] [--keep] [--take TAKE] --counts COUNTS [--mount MOUNT]
ERROR: the following arguments are required: --counts
Modifiers
Simple arguments (without any trailing “modifier”) will be assumed to be optional arguments of the --arg type; there are three modifiers:
! indicates a required option, of the --arg type;
+ indicates a required “positional” argument (no flag, simply the value, as in tar /var/loc/my.tar );
~ indicates an optional “positional” argument.
Implementation
The source code is available here and revolves around adding arguments to argparse.ArgumentParser dynamically:
for arg in args:
    # we first check if there is a modifier using a RegEx
    m = re.match(MODIFIED_PATTERN, arg)
    if m:
        mod = m.group('modifier')
        # Depending on the modifier, we add the
        # necessary arguments for add_argument()
        ...
        parser.add_argument(f"{prefix}{m.group('opt')}", **kwargs)
the regular expression that matches the ‘option’ and the ‘modifier’ takes advantage of Python’s “named groups” (of the form: (?P<name>...) ):
MODIFIED_PATTERN=re.compile(r"(?P<opt>\w+)(?P<modifier>[-!~\+])?")
so we can then use their names from the match object:
mod = m.group('modifier')
Finally, instead of having many “call points” for add_argument() depending on the modifier, we simply build the method’s named arguments into kwargs and pass it in the call:
if mod == "!":
    kwargs["required"] = True
kwargs['action'] = 'store_true' if mod == '-' else 'store'
if mod == '+':
    prefix = ''
elif mod == '~':
    prefix = ''
    kwargs['nargs'] = '?'
...
parser.add_argument(f"{prefix}{m.group('opt')}", **kwargs)
We have subclassed ArgumentParser with a StderrParser so that:
when erroring out, we emit error messages to `stderr` so they don’t get “swallowed” in the bash script; and
we need to exit with an error code, so that using `set -e` in our shell script will cause it to terminate, instead of executing the `source` command with potentially unexpected consequences;
for the same reason, we need to override the print_help() method, which will only emit the `usage` line (to `stderr`) and then exit with a non-zero code, to force the bash script termination.
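Putting the pieces together, a minimal self-contained version of the idea could look like the sketch below. This is my own reconstruction of the approach described above, not the author’s published source; the temp-file handling and quoting are simplified.

```python
#!/usr/bin/env python3
"""parse_args: turn option specs plus CLI args into export statements for bash to source."""
import argparse
import re
import sys
import tempfile

# `opt` is the option name; the optional trailing `modifier` is one of -, !, + or ~.
MODIFIED_PATTERN = re.compile(r"(?P<opt>\w+)(?P<modifier>[-!~+])?$")

class StderrParser(argparse.ArgumentParser):
    """Send errors and usage to stderr and exit non-zero, so `set -e` stops the bash script."""
    def error(self, message):
        print(f"ERROR: {message}", file=sys.stderr)
        self.print_usage(sys.stderr)
        sys.exit(1)

def build_parser(specs):
    parser = StderrParser(add_help=False)
    for spec in specs:
        m = MODIFIED_PATTERN.match(spec)
        if not m:
            continue
        mod, prefix, kwargs = m.group("modifier"), "--", {}
        if mod == "!":
            kwargs["required"] = True
        kwargs["action"] = "store_true" if mod == "-" else "store"
        if mod == "+":
            prefix = ""                     # required positional argument
        elif mod == "~":
            prefix = ""                     # optional positional argument
            kwargs["nargs"] = "?"
        parser.add_argument(f"{prefix}{m.group('opt')}", **kwargs)
    return parser

def main():
    sep = sys.argv.index("--")              # option specs before `--`, real args after
    specs, argv = sys.argv[1:sep], sys.argv[sep + 1:]
    opts = vars(build_parser(specs).parse_args(argv))
    # Write `export name=value` lines to a temp file and print its path for `source`.
    with tempfile.NamedTemporaryFile("w", delete=False, suffix=".sh") as out:
        for name, value in opts.items():
            if value not in (None, False):
                out.write(f"export {name}='{1 if value is True else value}'\n")
        print(out.name)

if __name__ == "__main__":
    main()
```

With that saved as an executable parse_args next to your script, the bash usage shown earlier should work as described, with the parsed values exported for the `source` command to pick up.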
|
https://medium.com/python-in-plain-english/parse-args-in-bash-scripts-d50669be6a61
|
['Marco Massenzio']
|
2020-08-07 06:47:03.590000+00:00
|
['Python', 'Coding', 'Tech', 'Bash', 'Programming']
|
How Social Media Has Sabotaged Deep Communication
|
How Social Media Has Sabotaged Deep Communication
Writing a personal letter is like stuffing yourself into an envelope to pay a visit. A Facebook post is like waving at someone.
From the time I was a teen until my forties, the mailman’s visit was the highpoint of my day. Back then mailmen (and they were men) walked their routes, carrying their mailbags or wheeling them along. The mailman delivered personal letters from people you knew or bills you needed to pay. There was only the occasional piece of junk mail.
Before the computers and the internet appeared in my world, there were typewriters and stationery. I made good use of them. I especially loved writing on beautiful stationery. I occasionally wrote in a journal and sometimes wrote poetry, but mostly I wrote letters. I think the longest letter I ever wrote was 28 single-spaced pages. The recipient referred to it as the term paper.
Every one of those letters was personal and written for only one person. Some went to penpals and some to college students away from home. Some went to people wrestling with problems. Some went to extended family. Some went to very close friends.
As I wrote each letter I thought only of the person who would receive it. It was either someone I already knew well or wanted to get to know and understand. Those letters weren’t shallow. They were meant to encourage or sometimes even chastise just one person. That one person would often return an answer that was equally thoughtful. The exchange might go on for weeks. Or months, or years.
The closer the relationship, the more personal the correspondence became. I wrote to an older relative often, even though I saw her every week. Some things I wanted to be private. I wrote to many students in our church college group (we were its advisors) after they moved to a college campus. We discussed spiritual struggles, relationship problems, job interviews, and anything else that was important to either of us.
Letters Were a Means of Continuing Conversations
I’m sure this has happened to you. You’re at a meeting, at church, or at a party having a great personal conversation with a friend, a third person joins you, and the conversation changes. Once another person comes on the scene, the conversation becomes more impersonal. If this happened to me and an important conversation had been interrupted, we would later continue it by mail.
It was actually easier for me to talk to people on paper than in person. It gave me an opportunity to organize and refine what I wanted to say. I could think about each word and the effect it might have. I wasn’t rushed. Sometimes when I left a live conversation it would not be long before I was thinking of what I should have said. I hardly ever thought of what I’d wished I’d said after dropping a letter in the mailbox. I’d had time to think over everything I wrote and include all that was in my mind.
Email Changed How Most People Communicated
When email came on the scene, most of us who had an internet connection switched to it for communicating with friends. Although it was possible to write in depth by email, most people didn’t. If they sent an email they were usually in a hurry for an answer. An email was fast. Most people didn’t want to wait for answers to snail mail letters. So they sent an email — often one they wrote quickly. They left out what they might have put in a real letter.
Photo, © B. Radisavljevic, Taken at Paso Robles Art Association Exhibit, with Permission to Use Online.
The content of snail mail letters was more private than emails sent to a computer that might be shared. Usually back then each family had only one computer. I had more privacy than most because I was the only computer user in my family. I still am. That was not true of my friends. The less private a communication is, the more superficial it tends to be. What young person wants a roommate or parent reading what a close friend writes to them? I know those who wrote letters to me would never have risked sending an email with the same content.
Now We Have Facebook and Talk to Everyone at Once
While a few of us are brave enough to post what we really think or feel, we still don’t post what we would have written forty years ago to a close friend. Not only would it be too long, but we know there are strangers looking at what we say. We can’t discuss situations that involve family problems, work problems or really personal problems because we know that they should be kept confidential. We’d never share something like that on Facebook or any social media site. Yet many of us now use Facebook to keep in touch with the long-distance friends we used to write letters to. As a result, we don’t share as much or know each other as well anymore.
Do You Still Write and Receive Personal Letters?
I’m guessing most people don’t. I used to spend a good part of each day writing letters when I wasn’t employed. When I started a mail order book business, I gradually stopped writing to friends. We were traveling all over the country and when I was home there was business paperwork to tend to. I had to build a website and keep it updated. And I had to ship out books almost every day. If I really needed to talk, I called someone or we got together. I lost touch with most who did not live close to me.
There simply wasn’t time to write letters anymore. The friends I’d written to when they were in college graduated. Then most finally got married. They soon had families of their own that kept them too busy to write except for the traditional Christmas letter, if even that. As a result, our once-close relationships are now fond memories and our communication has turned superficial. I see some of them on Facebook, but others never joined.
Letters Were Treasured Documents of a Relationship
Unlike emails and social media posts, which are ephemeral, written correspondence on paper is as permanent as you want it to be. I still have files of treasured correspondence from people I love who are no longer in this world. Their words live on in my files. I also have letters from decades ago that bring back memories long forgotten. My friends still live on the pages they wrote.
© B. Radisavljevic, Computer print-outs of letter copies
I typed many of the long letters I wrote so that I could make carbons and keep my copy. Why? Because I often poured out my heart in these letters or shared things I don’t want to forget. Instead of keeping a journal, I wrote letters. I often didn’t know what to say to myself, but I got inspired when talking to someone else.
Later, when I got my first computer I could easily print out copies of my typed letters. It was so much easier than making carbons.
I still have almost every Christmas letter I ever wrote and those letters document what was most important in our family history each year. Without those letters, I probably would have forgotten many of the events which once had seemed very important.
Writing Letters Was Therapeutic
When my son died in an accident at the age of fourteen, the Home School Legal Defense Association put me in touch with another bereaved homeschooling mom, Kathy. I had read about her experience in their magazine and knew it might help both of us if we could communicate. That was back in 1991. Our sons had been about the same age. The article below discusses my relationship with my son and how losing him affected me.
The correspondence with Kathy actually led to a personal meeting when she was visiting her parents in California several months after we were bereaved. By then we knew each other pretty well because of the many long letters we had exchanged. She came to the cemetery where my son was buried and we sat on a bench and talked for a long time and then prayed together. Nothing helps a bereaved person quite like talking to someone who really understands what they are going through.
We continued to write for years until we had resolved our grief enough to carry on without exchanging the letters anymore. Now we are Facebook friends and hardly talk directly to each other at all. Hundreds of miles separate us.
Letters Bring Loved Ones to Us When We Most Need Encouragement
The first people we meet and the last we will probably ever see are those in our families. When my mom died, I found a box in her closet that contained all the cards and letters family members and close friends had sent her for years. In her dark moments when she was grieving the death of my father and I lived a long drive away, she still had all those words of encouragement people had sent. She could reread them whenever she felt alone and needed to be reminded that we loved her even when we couldn’t call and visit as often as she would have liked.
After Mom moved a couple of hundred miles from her sister to live near me, the two continued to write to each other every day until her sister died. Those letters were her one communication line to the family she grew up in. Her sister was the only one who shared most of Mom’s family memories and who understood their family life.
Neither Mom nor her sister ever wrote an email. It was letters that kept them connected and allowed them to reveal their most personal thoughts to each other — thoughts no one else might have really understood. Email, social media, and text messages are quick and easy in our busy lives. However, they may have destroyed the kind of communication that keeps friendships intact and close.
Close friends need more than superficial communication and the annual Christmas letter. They need to share the thoughts and feelings that are best expressed in person, in live phone conversations, or in real letters. Facebook just doesn't cut it.
|
https://barbsbooks.medium.com/how-social-media-has-sabotaged-deep-communication-ae4d7c8393e3
|
['Barbara Radisavljevic']
|
2019-04-28 23:18:27.497000+00:00
|
['Email', 'Snail Mail', 'Facebook', 'Friendship', 'Letters']
|
Oh No! We’ve Been Carving our Christmas Turkey Like Amateurs.
|
Oh No! We’ve Been Carving our Christmas Turkey Like Amateurs. The golden rule is to carve meat against the grain for max tenderness. But we pull our birds out of the oven every year and do the exact opposite.
Here’s a quick tip that will save you time, provide more tender morsels, and dress up your dinner plates! With fewer people coming together for the holidays this year, it’s a great time to experiment with this better method!
I need to get back to my Christmas cookies (216 this year), so watch the video, I gotta go, and Carve Your Christmas Turkey Like a Pro!
Merry Christmas and a Happy, Healthy Cheer to 2021!
|
https://medium.com/illumination/oh-no-weve-been-carving-our-christmas-turkey-like-amateurs-fcbb15ddfd92
|
['Liz Porter']
|
2020-12-22 14:23:19.081000+00:00
|
['Food', 'Health', 'Self Improvement', 'Cooking', 'Chefs']
|
Five Leading Causes of Plantar Fasciitis
|
Five Leading Causes of Plantar Fasciitis
Prepare to be surprised
This post may contain affiliate links. Please read the disclaimer for more info.
I’ve noticed an incredible amount of confusion within the online plantar fasciitis (PF) community in addressing the pain points. My biggest pet peeve is all these “exercises” that will apparently “cure” your injury.
First of all, exercises and stretches are missing the mark in terms of getting yourself fully functional again. Especially because these are almost always strengthening exercises and light stretches that we would do to warm up for a workout.
PF comes about because of bad habits. These habits may cause you to be weak in an area like your foot arch, but that doesn’t mean you should do foot crunches every day for years so you can run again.
It means you need to identify why your foot arch is not able to gain strength (what’s holding it back?) because your arch should naturally be strong on its own.
There’s no “curing” PF. It’s the result of some tightness or misalignment within your body (the bad habits I was talking about). More to the point, something or things are the CAUSE and PF is the EFFECT.
You don’t want to tackle the symptom when you should tackle the source to stop all the symptoms. We can get rid of PF completely if we address these other already existing problems within the body.
Speaking of causes, let’s get into what is ACTUALLY causing your plantar fasciitis:
Traditional Footwear
The shoes we wear every single day are literally the one and only real reason that plantar fasciitis even exists. It’s only recognized as a common problem in modern, developed countries like the U.S. If we lived most of our lives in our bare feet, or at least in shoes that were made to serve our feet, we’d be much better off.
If you’d like to read the laundry list of specific problems with our shoes, please check out this article.
In the meantime, you can take a look at minimalist or barefoot style shoes which allow your toes to spread out, allow your calves and achilles to stretch, and overall allow your feet and ankles to get strong again on their own unlike in regular shoes. These have saved my feet from multiple ailments like PF and bunions.
Everything else in this list is a byproduct of our traditional shoes by the way….
Tight Calves
Yes, your calves are most likely the root cause of all the symptoms you are feeling in your feet and heels. Most notably the soleus, posterior tibialis, and gastrocnemius muscles which run along the inside of your shin bone from your knee to ankle, might be incredibly tight. These muscles are what give your foot arch support.
When they become too tight, they cause people to stand in an open-foot position which is exactly what it sounds like. This causes the ankles to collapse inward (over-pronation) and puts lots of stress on the tissues above the ankles. This is plantar fascia and heel pain central.
To give your foot, heel, and calf a nice stretch to alleviate plantar fasciitis, the foot rocker has proven to be very effective for many people. It can help improve flexibility in the lower leg muscles, strengthen your ankles, and reduce pain from injuries.
Heel-Striking
This is a byproduct of our shoes, as we would never heel-strike a hard surface in our bare feet because it hurts! Striking the ground heel first when we walk or run is detrimental for many reasons.
It increases the amount of time in a pronated position as we land and push off, which is how pain starts to arise. This leads to over-pronation, stress on the muscles in the lower leg, and the foot collapsing inward again.
Lack of Ankle Mobility
Most shoes have an elevated heel in them, whether you know it or not. We are told that it’s just for a little extra cushioning or shock absorption, but this is very bad for our feet ironically.
The raised heel makes our calves sit in a shortened position all the time, leading the muscles to get tighter. This turns the feet outward, making them prone to collapse inward and put stress on the lower leg leading to more heel and arch pain.
The ankle wants to stay mobile, so when it doesn’t have proper mobility, it will enable more pronation but this is not what we want.
Weak Toes
Another thing that our shoes don’t allow for us is the ability to use our toes to their full potential. The toes are needed for balance, agility, proprioception, and all the movement we undertake. All shoes are made with a narrow toe box unless you specifically buy wide toe box shoes.
The narrow toe box compresses our toes together and holds them in a position where they are not being used and are forced to stay smashed into a small space. We need the toes to spread out and build strength again.
The big toe is a major deciding factor in the height and strength of your foot arch, so when you can’t use your big toe, you can’t have a strong arch. This means your foot yet again becomes prone to collapsing inward and experiencing over-pronation because the big toe isn’t accessible to stop the natural pronation pattern.
To help offset some of these issues, I’ve been wearing toe spacers for a while now. They help spread the toes so they can relax after a long day and start getting stronger so they can become fully functional again.
Over-pronation
Over-pronation is when the ankle and foot collapse too much while moving. Pronation is a natural function, but when it’s excessive it can become very detrimental.
Notice how this is the result of all the other causes, the common denominator. But it is the reason we get the pain signals from our feet and heels. Over-pronation is ultimately what we’re trying to get rid of, but we need to address the previously mentioned causes first because they actually cause THIS cause.
|
https://medium.com/runners-life/5-leading-causes-of-plantar-fasciitis-fc397c62023d
|
['Will Zolpe']
|
2020-12-27 02:08:30.320000+00:00
|
['Life Lessons', 'Shoes', 'Body', 'Health', 'Plantar Fasciitis']
|
Python for Data Science vs Python for Web Development
|
Python has frameworks and features that span web application development, graphical user interfaces, data analysis, data visualization, and more. Python may not be the ideal choice for web application development, but it is used extensively by many organizations for evaluating large datasets, data visualization, running data analysis, and prototyping. Python is gaining traction for data science even as it falls out of favour as a web programming language. The idea of this blog post is to compare these two completely different uses of Python and to make clear that you do not need to know Python as a web programming language to do data science in Python.
Python for Data Science
Organizations of all sizes and industries, from the top financial institutions to the smallest big data start-ups, are using Python to run their business.
Python is among the most popular data science programming languages, not only with the top big data companies but also with the tech start-up crowd, and it ranks among the top programming languages to learn in 2019.
Python is finding increased adoption in numerical computation, machine learning, and several data science applications, and it can do almost anything short of performance-critical, low-level work. Its best use is for data analysis and statistical computation. Learning Python for web development requires programmers to master web frameworks like Django that help build websites, whereas learning Python for data science requires learning regular expressions, working with the scientific libraries, and mastering data visualization concepts. Because the purposes are completely different, professionals who know nothing about web programming in Python can go ahead and pursue data science in Python without any difficulty.
Python is a 23-year-old, powerful, expressive, dynamic programming language in which a programmer can write code once and execute it without a separate compilation step. Python for web development supports several programming paradigms, including structured, functional, and object-oriented programming. Python code can easily be embedded into existing web applications that require a programming interface. Python is also a preeminent choice for academic, research, and scientific applications that need fast execution and precise mathematical calculations.
Python web programming requires programmers to learn about the various python web development frameworks, which can be intimidating because the documentation available for the python web development frameworks might be somewhat difficult to understand. However, it is undeniable that to develop a dynamic website or a web application using Python language, learning a web framework is essential.
Python Web Development Frameworks
There are several Python web application frameworks available for free like-
Django
Django is the python web development framework for perfectionists with deadlines. Python web development with django is best suited for developing database driven web applications with attractive features like automatic admin interface and a templating system. For web development projects that don’t require extensive features, Django may be an overkill because of its confusing file system and strict directory structure. Some companies that are using python web development with django are The New York Times, Instagram, and Pinterest.
Flask
It is a simple and lightweight solution for beginners who want to get started with developing single-page web applications. The framework does not provide validation, a data abstraction layer, or many other components that other frameworks include. It is not a full-stack framework and is used mainly for small websites.
CherryPy
It emphasizes Pythonic conventions so that programmers can build web applications just the way they would write object-oriented Python. CherryPy is the base for other popular full-stack frameworks like TurboGears and Web2py.
There are many other web frameworks, such as Pyramid, Bottle, and Pylons, but whichever framework a Python programmer uses, the challenge is the same: they need to pay close attention to the tutorials and documentation.
Why Web Development with Python is an impractical choice?
Python is arguably an impractical choice as a web programming language for a few reasons:
Python for web development can require non-standard and more expensive hosting, particularly when programmers use the popular Python web frameworks to build websites. With PHP being so expedient for web programming, most users are not interested in investing in Python for web development.
Python for web development is not a commonly demanded skill, unlike other web development languages such as PHP, Java, or Ruby on Rails. Python for data science, by contrast, is gaining traction and is among the most sought-after skills companies look for in data scientists, with its increased adoption in machine learning and various other data science applications.
Python for web development has come a long way, but its learning curve is not as gentle as that of other web programming languages like PHP.
Why Python for Data Science is the best fit?
Python is a core technology powering big data, finance, statistics, and number crunching, with English-like syntax. The recent growth of the rich Python data science ecosystem, with packages for machine learning, natural language processing, data visualization, data exploration, data analysis, and data mining, is resulting in the Pythonification of the data science community. Today, Python has all the nuts and bolts for cleaning, transforming, processing, and crunching big data, and it is the most in-demand skill for the data scientist role. A data scientist with Python programming skills in New York earns an average salary of around $140,000.
Data scientists like to work in a programming environment that lets them prototype quickly, jotting down ideas and models easily, and get their work done by analysing huge datasets and drawing conclusions. Python is the most versatile and capable all-rounder for data science applications because it helps data scientists do all of this productively, taking minimal time for coding, debugging, executing, and getting results.
The real value of a great enterprise data scientist lies in using data visualizations to communicate data patterns and predictions to the various stakeholders of the business effectively; otherwise it is just a zero-sum game. Python covers almost every aspect of scientific computing, including highly computation-intensive work, which makes it a supreme choice across different data science applications, as programmers can do all the development and analysis in one language. Python also links the various units of a business and provides a common medium for data sharing and processing.
Python has a unified design philosophy that focuses on ease of use, readability and easy learning curve for data science.
Python has high scalability and is much faster when compared to other languages like Stata, Matlab.
There are more and more data visualization libraries and cool application programming interfaces being added for inclusion of graphics to depict the results of data analysis.
Python has a large community and a good number of data science and data analytics libraries, such as scikit-learn, NumPy, Pandas, Statsmodels, and SciPy, which offer rich functionality and have been tested extensively. The data analysis libraries in Python keep growing over time.
Python Programming for Number Crunching and Scientific Computing in Data Science
Data analysis and Python go hand in hand. If you have decided to learn data science in Python, the next question on your mind is probably: which Python libraries handle most of the data analysis work? Here are the top data analysis libraries in Python used by enterprise data scientists across the world:
NumPy
It is the foundation for the higher-level tools built in Python. The library is not meant for high-level data analysis on its own, but an in-depth understanding of array-oriented computing in NumPy helps data scientists use the Pandas library effectively.
SciPy
SciPy is used for technical and scientific computing, with modules for integration, special functions, image processing, interpolation, linear algebra, optimization, ODE solvers and various other tasks. It works with NumPy arrays and provides many efficient numerical routines.
Pandas
This is the best library for data munging, as it makes it easier to handle missing data, supports automatic data alignment, and supports working with differently indexed data gathered from multiple data sources.
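For instance, a quick sketch (with made-up numbers) of how Pandas aligns differently indexed data and handles the resulting missing values:
import pandas as pd
# Two series with different indexes; Pandas aligns them automatically on addition
q1_sales = pd.Series([100, 250, 300], index=["north", "south", "east"])
q2_sales = pd.Series([120, 260, 310, 90], index=["north", "south", "east", "west"])
total = q1_sales + q2_sales   # "west" has no Q1 value, so its total is NaN
print(total.fillna(0))        # fill the missing value with 0
print(total.dropna())         # or drop rows with missing data instead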
SciKit
This is a popular machine learning library with various regression, classification and clustering algorithms, including support for gradient boosting, support vector machines, naïve Bayes and logistic regression. The library is designed to interoperate with NumPy and SciPy.
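A minimal sketch of the typical scikit-learn workflow, using the bundled iris dataset and a logistic regression classifier:
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=200)   # max_iter raised so the solver converges
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))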
Matplotlib
It is a 2D plotting library with interactive features for zooming and panning, producing publication-quality figures in various hard-copy formats and in interactive environments across platforms.
Matplotlib, NumPy and SciPy are the base for scientific computing. There are many other Python libraries, such as Pattern for web mining, NLTK for natural language processing, Theano for deep learning, Scrapy for web scraping, as well as IPython, Statsmodels, Mlpy and more. For people starting with data science in Python, being well-versed in the data analysis libraries mentioned above is a good first step.
|
https://medium.com/quick-code/python-for-data-science-vs-python-for-web-development-fcdbeb1c67cf
|
['Sandhya Reddy']
|
2020-02-05 03:21:34.546000+00:00
|
['Python', 'Web Development', 'Data Science', 'Data Visualization', 'Python Programming']
|
TypeScript DataTypes
|
Similar to other programming languages, we can assign data types to variables and return types to functions, which makes the system more reliable and easier to maintain.
Before delving into the TypeScript types, let us understand the problem that TypeScript solves when dealing with enterprise applications built using JavaScript.
JavaScript is a dynamically typed language. This means JavaScript does not know what type a variable is until it is actually instantiated at run-time. Without type checking, large applications become very difficult to maintain and are prone to issues.
DataTypes
number
string
boolean
enum
void
null
undefined
any
never
Array
tuple
User-defined Types (Classes, Interfaces, etc.)
Number Type
It represents numeric values including decimal, hexadecimal, binary and octal literals. But to use binary and octal literals, you must use a TypeScript version that follows ECMAScript 2015 or higher.
To declare the Number data type for a variable, you need to use the number keyword.
const intValue: number = 100;
let decimalValue: number = 10.5;
let hexaDecimalValue: number = 0xf10b;
let binaryValue: number = 0b110100;
let octalValue: number = 0o410;
Boolean Type
It represents true/false values. To declare the Boolean data type for a variable, you need to use the boolean keyword.
let hasAdminRole: boolean = false;
String Type
It represents the string values. To declare the String data type for a variable, you need to use the string keyword.
let userName:string = "admin";
Enum Type
It represents the numerical constants. To declare the Enum datatype for a variable, you need to use the enum keyword.
enum CardTypes {
Debit,
Credit,
Virtual
}
let card: CardTypes = CardTypes.Debit;
By default, the enum values start from 0 (zero), but you can also set it by manually entering the value as mentioned below.
enum CardTypes { Debit = 1, Credit, Virtual }
enum CardTypes { Debit = 1, Credit = 3, Virtual = 5 }
Void datatype
The void data type is used for functions that do not return any value.
function showNotification(): void {
  alert('hello world');
}
Null Type
It represents the null value. To declare the null data type for a variable, you need to use the null keyword. As null is a subtype of all other types (when strictNullChecks is disabled), you can assign the null value to all types.
let nullValue: null = null;
let numericValue: number = null;
let booleanValue: boolean = null;
let stringValue: string = null;
Undefined Type
It represents the undefined value. To declare the undefined data type for a variable, you need to use the undefined keyword. As undefined is a subtype of all other types (again, when strictNullChecks is disabled), you can assign the undefined value to all types.
let undefinedValue: undefined = undefined;
let numericValue: number = undefined;
let booleanValue: boolean = undefined;
let stringValue: string = undefined;
Any Type
When the developer is unsure of the data type of a value (for example, one coming from a third-party library or service), it is recommended to use the any type. To declare the any data type for a variable, you need to use the any keyword. This is also useful when you are declaring an array that holds mixed data types.
let anyValue: any = 100;
anyValue = "string value";
anyValue = true;
let arrayList: any[] = [ "String Value",100, true];
Never Type
The never type is used when you are sure that something is never going to occur. For example, a function that never reaches its end point or always throws an exception.
function throwError(errorMsg: string): never {
throw new Error(errorMsg);
}
function batchProcessing(): never {
while (true) {
console.log('call REST API');
}
}
Array Types
Arrays can be defined in the Typescript in 2 ways.
let userIds: number[] = [80, 85, 75];
let orderIds: Array<number> = [50000, 50001, 50001];
Tuple Types
A tuple is a data type that allows you to create an array where the types of a fixed number of elements are known but need not be the same.
let employee: [string, number, boolean] = ["sam", 2019, true];
let empName:string = employee[0];
let joinYear:number = employee[1];
Union Type
TypeScript allows us to use more than one data type for a variable or a function parameter.
Syntax: (type1 | type2 |.. | typeN)
let empId: (string | number);
empId = 123; // OK
empId = "TE123"; // OK
empId = false; // Compiler Error
As we are talking about data types, let us talk about the type inference in typescript.
Type inference means that the types of variables will be inferred when there is no explicit information available in the form of type annotations.
Types are inferred by the TypeScript compiler when (see the sketch after this list):
Variables are initialized
Default values are assigned for the parameters
Function return types are determined based on the input parameters
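A short sketch illustrating these three cases (the names are only for illustration):
// Inferred from the initializer: retryCount is a number
let retryCount = 3;
// retryCount = "three";   // Compiler Error: string is not assignable to number

// Inferred from the default parameter value: rate is a number
function applyDiscount(price: number, rate = 0.1) {
  return price * (1 - rate);   // return type inferred as number
}

const total: number = applyDiscount(200); // OK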
|
https://medium.com/techmonks/typescript-datatypes-84b94aa80041
|
['Anji']
|
2019-09-03 14:53:42.560000+00:00
|
['Angular', 'Typescript', 'JavaScript']
|
How to make a Twitch profanity filter Chrome extension
|
I do a lot of Twitch related projects here and there, trying to see what the possibilities are. One of my latest projects involved creating a Twitch Emote Extension for Chrome where I showed how to get extra emote “slots” for free. That got me into browser extensions, so I figured let’s show how to make one from scratch by building a profanity filter for the Twitch chat.
At the end of this, we want to have an extension that, when installed, checks for a set of flagged words in chat and replaces them with more friendly versions. This could be useful for children watching streams, for instance.
If you just want the code for this extension, there is a link to the end result at the bottom of this article.
Tinker Time
Alright, so let’s get into it head first. Create a folder somewhere you like and call it profanity-filter. I created mine in a place where I sync up with GitHub.
Any name will do, but this seems fitting
Now inside this folder, create two new files: manifest.json and content.js. You’ll see how we’ll use them as we get to building the extension.
There we go
Open up manifest.json, which is the file Google looks at for metadata regarding your extension, and paste the following into it:
{ "manifest_version": 2, "name": "Twitch Profanity Filter Extension", "version": "0.1", "content_scripts": [ { "matches": ["https://www.twitch.tv/*"], "js": ["content.js"] } ] }
As you can see, we are telling Google that this extension should work on any page from Twitch and we’re also saying that the logic that should be run on these pages resides in the file called content.js. Note that I put the version as 0.1, as this is my initial try.
Getting the logic in
Now open up content.js in the text editor of your choice. I would highly recommend Visual Studio Code, it’s my go-to text editor for anything coding related and it’s free.
Now, the way we’re going to set this up is in a few steps. What do we want to achieve and how are we going to accomplish that? We know we want to change certain words to others, which means reading the chat messages, scanning them for flagged words and then updating those words to more friendly ones. So when the page loads, we want our logic to be able to do the following:
Find the chatbox
Every time a new chat message appears, get the message or message container
Change flagged words to their friendly counterparts
So let’s get to it. The first part is actually activating the script when the page, or more specifically the window, has been loaded. In javascript, you can hook into that loading event and we are going to do just that. Update your content.js to contain the following:
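The embedded snippet from the original post is not reproduced here, but a minimal sketch of that hook would look something like this:
// content.js — run our filtering logic once the page has fully loaded
window.addEventListener('load', function () {
  // everything we build in the next steps goes in here
});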
That will make sure that whatever we put inside that function, gets called when the page has loaded.
Find the chatbox
So how do we find the chatbox? Well, in the Chrome version of the Twitch website, the chatbox is contained in a <div> element that has a class called chat-scrollable-area__message-container. We are going to get that element, or node, through javascript. Update the file to look like this:
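A sketch of that lookup, based on the class name mentioned above:
window.addEventListener('load', function () {
  // The Twitch chat messages live inside this container element
  const chatbox = document
    .getElementsByClassName('chat-scrollable-area__message-container')
    .item(0);
});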
The reason we add .item(0) behind it is because getElementsByClassName() returns a collection of elements. Since we know only one element has this class, we take it from the collection by specifying we want the first one (most programming languages start counting at 0).
Get every new message in chat
This is a bit more tricky to understand if you’re not used to programming or not used to observer patterns. The way we are going to get every new message is by using a MutationObserver on the chatbox node we just picked up. Update the file to match the following and then we’ll walk through what’s going on:
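Roughly, the observer setup looks like this (a sketch of the original snippet):
window.addEventListener('load', function () {
  const chatbox = document.getElementsByClassName('chat-scrollable-area__message-container').item(0);

  const callback = function (mutationsList, observer) {
    // we'll handle new chat messages here in the next step
  };

  const observer = new MutationObserver(callback);
  observer.observe(chatbox, { childList: true });
});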
First, on line 4, we define a callback function that takes a list of mutations and the observer we’ll create on line 8. The logic in this function is what gets called whenever something changes, or mutates, in the node we are observing.
On line 8 we create the observer and pass it our callback function. Then, on line 9, we tell the observer to start observing our target node and we pass a configuration that says we are only interested in mutations to the childList. Messages appear as child nodes in the chatbox node we are looking at, so our callback function will get triggered every time a new chat message appears. That’s exactly what we want!
But now we still need to get the actual message elements and for that we add one more line of code in our callback:
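Something along these lines:
const callback = function (mutationsList, observer) {
  // The newest message is the chatbox's last child; its text parts
  // are the <span class="text-fragment"> elements inside it
  const textFragments = chatbox.lastElementChild.getElementsByClassName('text-fragment');
};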
The name might seem a bit weird, but all text parts of a message (we are not scanning emotes, just the actual texts) are placed in <span> HTML elements that have the class text-fragment. Also, because this function gets triggered at every new message, we know the message we want must be the last element in the chatbox node, so that’s why we’re grabbing the lastElementChild.
Change flagged words to their friendly counterparts
Let’s get to the juicy bit. If we want to change flagged words, we first need something that tells us what flagged words and their counterparts are. We’ll do this by using a simple dictionary. Update the file to look like this:
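A sketch of such a dictionary; the entries below are placeholders, so swap in whichever words you actually want to filter:
// flagged word -> friendly replacement (placeholder entries)
const wordDictionary = {
  'badword': 'nice word',
  'meanword': 'kind word',
  'rudeword': 'polite word'
};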
You can add more words as you like, just remember the comma at the end
Now we are going to read the actual text in all the text-fragment elements, loop over all forbidden words and replace them if necessary. Let’s get that loop started, so update your callback function with the following:
What’s happening here is that we are looping through all these elements that contain text. On line 18 we are taking one specific element at a time and on line 19 we get the actual text from it in lower case, because our flagged words are all in lower case as well. It makes it easier for comparison purposes, but in a more advanced version we’d take care of neatly maintaining casing. Now that we have our text, we want to loop over our flagged words and check if they’re present. If so, we delete them. Update the function:
The dictionary we made works with key-value pairs, where the keys are the flagged words and the values are the friendly words, so on line 21 we're simply getting an array of all the keys, which is the array of flagged words we want to loop over. In the loop we grab the flagged word we are currently looking at and, if our text includes one or more occurrences, we grab the friendly word and update our text by replacing the flagged words. That's almost all; now we just need to take our new and improved text and put it back into the element we grabbed it from:
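Putting it all together, the finished callback might look roughly like this (a sketch, assuming the wordDictionary above; line numbering differs from the original snippet):
const callback = function (mutationsList, observer) {
  const textFragments = chatbox.lastElementChild.getElementsByClassName('text-fragment');

  for (let i = 0; i < textFragments.length; i++) {
    const element = textFragments.item(i);
    let text = element.innerText.toLowerCase();

    const flaggedWords = Object.keys(wordDictionary);
    for (let j = 0; j < flaggedWords.length; j++) {
      const flaggedWord = flaggedWords[j];
      if (text.includes(flaggedWord)) {
        const friendlyWord = wordDictionary[flaggedWord];
        text = text.split(flaggedWord).join(friendlyWord);  // replace every occurrence
      }
    }

    // Put the cleaned-up text back into the chat message
    element.innerText = text;
  }
};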
On line 31 we have now placed the text back into the element and that is it! Let’s get to testing this in Chrome.
The test
Open Chrome and browse to chrome://extensions. You should see your extensions and in the top right you’ll see Developer mode. Toggle it on if it isn’t yet. Afterwards, in the top right, you’ll see the Load unpacked option. We are going to use that to load in our local extension without having to go through the Chrome Web Store route just yet. Click on it and select the folder where the files are located:
That should make it appear instantly in our extension list:
Good! Now let’s head over to any channel where we can chat. I’ll go to my friend Bjarke’s channel, a hilarious Danish streamer, and try it out:
There we go, it works! Our very own profanity filter. Wasn’t that hard, right? All that’s left is to release it to the Chrome Web Store if you want to. I’ll be expanding on this list of words and releasing this, so if you don’t need more than this, you don’t have to release it, seeing as it requires a fee to get into the program. I won’t be going into how the release works, but if you are interested, I made a Twitch Emote Extension to get your own limitless emote slots and this article explains how to release extensions at the end.
If you want to see all the code for this extension, check out the GitHub repo here. Merry coding!
|
https://nintendoengineer.medium.com/how-to-make-a-twitch-profanity-filter-chrome-extension-195bce36b38d
|
['Nintendo Engineer']
|
2020-12-28 13:51:07.926000+00:00
|
['Twitch', 'Chrome', 'Google', 'Extension', 'Programming']
|
Update from AiFinTek
|
The team has spent a lot of time on the AI platform integration in the last few weeks. We have a new game development project that has been draining our resources. Another firm has asked us to look at coin integration in their platform. We have also placed some resources into mining and developing mining pools.
And yes, We are also working with an exchange to list our AFTK coin.
In summary, we have been very very busy.
On the airdrop: we used our AI platform to filter out duplicate forms, duplicate addresses and scam entries. In the end we were able to shorten the list of airdrop members to about 11,400 unique members.
Out of these 11,400 members, about 2,100 have already received their AFTK coins. All remaining members will be receiving their coins. Stay tuned.
Several apps were used in our testing for the airdrop. We generally found Ethereum to be somewhat dependable, but 99.999% reliability was missing. Working around the various limitations of Ethereum, it has become quite clear to us that better platforms will evolve, or that Ethereum will need to improve considerably.
We have been slightly off track with our airdrop and ICO timeline. But at the same time we have invested in our business development to place the company on a stable path to growth, and we are close to a critical project milestone.
As you receive your AFTK tokens, my advice is simple: hold them and don't go sell them for pennies.
A guy sold 10,000 bitcoin back in 2010 to buy two pizzas. https://www.forbes.com/sites/ericmack/2013/12/23/the-bitcoin-pizza-purchase-thats-worth-7-million-today/#647d06862509.
Hold the AFTK — these are not going to be minted any more. Only 25 million were issued to fund our company AiFinTek. At the right time, we will decide if and how we can allocate some of the shares of the AiFinTek firm to the AFTK token holders.
Welcome Aboard.
|
https://medium.com/aifintek/update-from-aifintek-22f0bbee3476
|
['Oscar Wilde']
|
2018-06-09 05:15:10.875000+00:00
|
['Blockchain', 'Bitcoin', 'Ethereum', 'AI', 'Fintech']
|
Eastern European Nationalism to Find a Fertile Ground in COVID-19 Pandemic
|
Eastern European Nationalism to Find a Fertile Ground in COVID-19 Pandemic
COVID-19 will overwhelm resource-poor eastern EU health systems. In the aftermath, ramped up nationalism and erosion of democracy are a strong possibility. To avert this, the public needs EU leadership and guidance.
Photo: Getty Images
By Veronica Anghel, PhD, Fulbright Fellow, The Europe Center, Stanford University; Associate Fellow SAIS — Johns Hopkins University
Central-East European (CEE) authorities are more reactive than proactive on COVID-19 management and have devised an ad hoc patchwork of measures. States rely on drastic ‘stay-at-home’ strategies to curb excessive demand on health systems. The public is not a participant in decision making, parliaments are mostly bypassed and restrictive measures are enforced without much concern for individual freedoms. Politically, COVID-19 is not creating new attitudes but amplifying existing ones. It offers national-populists a fertile environment for centralized decision-making and adopting measures incompatible with normal democratic practices.
These are not unique elements to the new post-communist EU members. However, given previous attitudes on restrictive interpretations of the rule of law in countries such as Poland and Hungary, there is a risk that temporary limits on individual freedoms will be used not just to deal with the health crisis. They also damage citizens’ avenues of participation in decision-making. A further strengthening of ruling parties’ grip on state resource allocation is to be expected, along with limitations on the rights of the media and individuals, including the freedom of movement of medical personnel. The crisis will highlight weaknesses in medical systems and put pressure on poorer countries to try to reverse these trends.
Under-financed Health Systems
Compared with older EU member states, CEE countries have dysfunctional and under-financed health systems and lack the necessary medical staff and supplies to withstand the emergency now facing the whole world.
Following EU accession, the freedom to seek work led to a flow of highly skilled workers from east to west. Doctors and nurses moved more than any other such group. Combined with less healthcare expenditure in the Eastern EU on average, this resulted in fragile health systems, particularly in Romania, Bulgaria, Hungary and the Baltic states.
OECD and European Commission statistics show annual increases of health spending per capita in all EU countries. However, eastern member states have yet to converge with EU standards. According to the World Health Organization (WHO), there are shortages of health professionals in the largest CEE countries, Poland and Romania. The emigration of Polish nurses and physicians has made these shortages more acute. The WHO estimates the ratio of physicians to 1,000 inhabitants in 2016 at 2.4, compared with an EU average of 3.6. The number of practicing nurses is also low: 5.2 per 1,000 in 2016, compared with 8.4 for the EU. Poor working conditions and low salaries are among the main reasons given for leaving Eastern Europe.
Romania has the most understaffed and least-resourced healthcare system. Expenditure per capita is three times lower than the EU average, and the lowest in the EU. Whereas EU countries spend on average about 10% of GDP on health, in Romania, although its expenditure has grown more or less constantly, it had still only reached 5% of GDP by 2019. Measures to increase doctors' pay in 2019 have yet to reverse the trend of medical professionals leaving the country. According to healthcare officials, 14,000 doctors left Romania between 2009 and 2015 for better-paying countries such as France, the United Kingdom and Germany, which amounts to half of Romania's registered doctors. The trend has continued to grow since then, with another 10,000 registering to practice outside the country between 2016 and 2018. Medical assistants and pharmacists are leaving in even greater numbers.
More than 5,000 doctors left Bulgaria during 2008–18 to work in other EU countries. As in the case of Romania, most leave straight out of university and do not return. The Baltic States and Hungary are the other countries most hit by the migration of physicians and medical assistants.
Medical equipment
Besides depleted human resources, lack of funding also results in low stocks of necessary materials. CEE governments are scrambling to secure protective gear for medical staff.
Dealing with the initial wave of coronavirus patients without proper protective kit has forced many healthcare professionals into quarantine. In Romania, dozens of medical professionals have resigned out of fear of working with COVID-19 patients without protective gear.
High-filtration N95 masks, intensive care beds and ventilators are being rationed to those patients who will benefit the most. Like most West European countries, eastern member states are looking for more real-time polymerase chain reaction (PCR) test units and COVID-19 testing kits. PCR is the most commonly used test for diagnosing coronavirus because it is highly accurate, but it is not a sustainable strategy in the long run. PCR testing is limited, time-consuming and expensive for under-resourced health systems. As most PCR units in CEE are small, they can only process about 40–45 tests per day. Large numbers of trained professionals are required to work in shifts to produce results. Doctors who know how to operate this technology are scarce. Protocols are currently being drawn up to train new staff on how to work with this technology. Fighting time and resource scarcity, Romania is one of the first countries in the world to sign a contract to equip 20 hospitals with a new technology that is cheaper and can deliver results in 45 minutes. Designed by US company Bioneer and recently given FDA approval, it still awaits EU approval. Other European countries are likely to follow suit should this test prove its accuracy.
Dealing with COVID-19 cases
Technology and human resources are reflected in the number of reported cases. As of April 3, the Czech Republic was in the lead in COVID-19 testing relative to population, with 67,281 reported tests administered and 4,091 confirmed cases. Poland had tested 61,000 people and reported 3,266 positive cases. In Hungary, 15,401 people were tested and 585 cases confirmed. In Romania, 3,183 people tested positive out of 28,483 administered tests. The overall numbers of cases in CEE are still small compared to the hotspots of Italy and Spain, but an increase is expected in the coming weeks.
The lack of prepared procedures to deal with epidemics and the pervasiveness of informality through direct payments to medical staff present CEE with further challenges. In normal times, socioeconomic inequalities translate into selective access to healthcare, favouring those with access to private treatment or who are well connected. Such practices dictate access to medical care and testing in times of crisis as well. Private hospitals have ramped up their testing capabilities, but offer them at a cost. In normal times, households' irregular out-of-pocket payments are the second-largest source of finance for public healthcare. During the pandemic, hospitals are turning to corporate, industrial and private sources of finance for urgently needed equipment. Universities and research centers are not prioritized as a source of innovation, as they are traditionally underfinanced. High-complexity labs are rare.
Political aftermath
Governments will be judged on how well they handle the pandemic — CEE countries are no different. The public will get behind restrictive measures as long as governments can present them as resolving the crisis. In the long run, many leaders will downplay the role of experts, resort to nationalism and target external ‘enemies’.
Many European states are using national borders to protect their citizens from contact with foreigners often blamed for introducing infection. In CEE, the COVID-19 pandemic offers a new platform for national-populists and Eurosceptics to stress the importance of borders, capitalise on public fear and confusion, and move more powers to the centre. In Hungary, these are the migrants. In Romania, the travelling diaspora is increasingly blamed for having brought the virus from Italy.
Despite their fewer cases, some CEE states immediately closed borders to foreigners, including EU citizens, although many West European states did much the same. The rhetoric employed by national-populists has been problematic. CEE states have also been among the first to declare states of emergency, giving governments the ability to suspend laws, although again, they were not alone in the EU in this regard.
Hungarian Prime Minister Viktor Orban has gone furthest, blaming foreigners and migrants for the emergence and spread of the virus. Ruling Fidesz pushed through parliament laws limiting freedom of expression, under the guise of stopping the spread of misinformation, or of information that alarms the public or interferes with the government’s mission to protect. Such measures leave great scope for interpretation. As of March 30, the Hungarian Parliament gave the prime minister the possibility to rule by decree and no longer entertain legislative debates. This is in line with the previous pattern of behavior from Fidesz leaders. As with the 2009 economic crisis, they nurture the same belief in the need to centralize decision making for fast action. In doing this, they have the support of the public. Parties opposed to the government have asked for a three-month limit on the new emergency provisions. PM Viktor Orban has argued that this is not enough, as Hungary could be in worse shape then.
The European People’s Party (EPP), the European political family of which Fidesz is a suspended member, asked for justifications of such extreme measures. In a letter of reply, PM Orban used the same argument for the need for fast decision making.
In Bulgaria, Prime Minister Boyko Borisov obtained parliamentary support for a bill amending the penal code to punish spreading false information with heavy fines and prison terms. President Rumen Radev vetoed the bill, saying it attacked “the last vestiges of free speech” and could be used against “any inconvenient free thinking”. The bill can still pass the parliament in second vote without the president’s approval.
In Romania, the Parliament approved the state of emergency unanimously on March 19. Since then, President Klaus Iohannis and the executive have made all the decisions regarding restrictions of individual freedoms. The president has gone as far as suggesting Romanians abroad should not return to their own country for the duration of the health crisis, so as not to spread the virus any further. In this case, nationalism has excluded the numerous diaspora, despite the fact that it can usually be relied upon to vote for the president's centre-right party. The foreign ministry is nevertheless arranging flights to bring home Romanian nationals stranded abroad as seasonal workers. According to a controversial statement from Romanian PM Ludovic Orban, such efforts should be limited to Romanians who are lawfully working abroad.
In Poland, Prime Minister Mateusz Morawiecki declared at a press conference on March 13 that most cases of COVID-19 were imported by foreigners or Poles returning from abroad. The underlying message is that nations would not be in crisis if there were less freedom of movement. This is in line with previous anti-migration and nationalist agendas. So far, Poland has refrained from any further tightening of the executive’s grip on democracy. However, given its history of challenging the rule of law, Law and Justice is likely to curb a critical press at least.
So far, signs of solidarity or collaboration between CEE countries have been lacking in dealing with the COVID-19 pandemic.
Disinformation running rampant
With more people spending time online, disinformation and misinformation are ever more widespread, sowing confusion and mistrust of the authorities and experts. Russian-led outlets and alternative blogs are spreading fears of global economic collapse, as well as suspicions that the pandemic is faked. Their interest is not in making any one story stick, but in increasing social divisions and confusion.
Online communities are magnifying rumors about the need to make ‘a run to the bank’ in the face of impending global economic collapse. The government has been slow to address the public's growing economic concerns. As in previous cases in Hungary and Bulgaria, the authorities are answering the spread of misinformation punitively.
In a bid to stop panic, the Romanian government has even forbidden county prefects to inform the public how COVID-19 is spreading regionally, starting March 21. This is likely to backfire, as the public will then resort to alternative sources of information and rumor. Hungary and Romania are the only two European states to adopt this strategy.
In the early stages of the pandemic, the lack of trust in authorities in such countries as Romania and Bulgaria has led citizens to doubt the seriousness of the crisis and downplay their individual responsibility in tackling it. By the time the authorities were announcing total lockdowns and the use of the army to enforce institutionalised quarantines, people were disregarding recommendations to stay at home. An exponential increase in the number of those infected is likely. However, ‘quarantine-fatigue’ is also likely to kick in soon, with citizens increasingly flouting the rules.
Conclusion
The pandemic numbers will expand more quickly as a consequence of tests becoming more available, delayed isolation strategies in some countries and citizens seeking to evade quarantine rules. The healthcare infrastructure will be overwhelmed by the severely ill and government resources will be over-stretched. Medical equipment and interventions will need to be rationed, with doctors and nurses already falling ill or being quarantined for lack of protective gear.
As it began, so the pandemic will end — gradually. Restrictive measures could remain in place after the peak of the crisis. The population is not protesting against repression now, but their attitudes will depend on how governments handle the crisis.
Most CEE governments are likely to suspend local or national elections, at least for the next six months. The exception is Poland, which expects to hold the May presidential election, although the main opposition is not campaigning because it would violate social distancing and will probably boycott the vote altogether.
Nativist national attitudes will be accentuated by new fears of ‘the foreigner’. As previous studies have shown, if poorly managed, the economic shock felt in the aftermath of the crisis will contribute to a spike in extremist and populist attitudes.
The next EU budget may take into account the latest revelation of less affluent members’ structural weaknesses. However, EU solidarity is already stretched. The current pandemic is likely to create new tensions between east and west at a moment when Eastern countries need strong countermeasures to nationalist rhetoric and isolationist policies.
An earlier version of the analysis was published by Oxford Analytica. © Oxford Analytica 2020. All rights reserved. No duplication or transmission is permitted without the written consent of Oxford Analytica
|
https://medium.com/freeman-spogli-institute-for-international-studies/eastern-european-nationalism-to-find-a-fertile-ground-in-covid-19-pandemic-ec4572f19ac0
|
['Fsi Stanford']
|
2020-04-06 16:33:37.744000+00:00
|
['Covid 19', 'European Union', 'Pandemic', 'Health']
|
Destroy Fast and Move Faster with Sketch
|
Enterprise UX often deals with solving hard problems. At Juniper, we solve a lot of complex problems around networking everyday and a lot of them involve challenges around displaying large amounts of data.
When designing, working with real data is as crucial as sketching before you begin prototyping something. It surfaces unforeseen constraints early on in the process, elicits better feedback from stakeholders and it gives a real picture of what the design would look like when it gets built.
But working with real data is a time consuming process. As designers, we need to be able to quickly get feedback early on in the process and move forward quickly as in order to create good user experiences, rapid iteration is crucial.
And in the end, we often end up compromising either high fidelity or speed.
But why should we sacrifice one over the other when we can destroy fast and move faster, balancing both speed and fidelity?
Let me explain —
When laying out mockups, we usually begin by creating elements and laying them in a meaningful order to convey the design. At a later stage, if any edits are to be made, we have to go to each and every element to make the respective changes because we do not want to disturb the order we have spent so much time creating. This process becomes particularly hard when we are dealing with several different data points. For example, many times I've had to quickly turn around updates to the order of columns in a grid right after a stakeholder review. It turned out I had grouped all of my grid elements in rows (damn it! If only I had known earlier, what a waste of time!).
But what if we don’t edit the different elements that we had earlier created, rather destroy and rebuild them once again, in no time?
I would call this the Destroy Fast and Move Faster technique. I’ll explain this further in the context of grids. Grids have been a tried and tested way to visualize large quantities of data for comparison and recognizing meaningful relationships. However, anyone who has ever laid out content for grids would know that it takes a lot of time to create and edit them.
The Destroy Fast and Move Faster technique explained
It involves the use of 3 different Sketch features/plugins: the new Scale feature in Sketch 39, CRAFT and RenameIt. These tools have been around for some time now, but I'll try to tell you how to best use the three of them together to optimize your workflow when working with grids.
I’ve broken down the technique into 6 steps — I would encourage you to follow along these steps in Sketch as you read through them.
1. Create a table cell
Create a rectangle of height equal to your cell height and any width. Add a text box in it and add a rectangle for margin. Set the text layer and the margin layer to pin to corner and the rectangle to stretch. Now group these 3 layers and rename it to Table Cell.
Tip: Rename your layers in the beginning itself — before you start duplicating them. Use ⌘ + R to quickly rename a selected layer and use Tab/Shift+Tab to quickly move between layers while renaming.
Tip: Always use inner shadows to create dividing lines in your grid. Never create even a single line that you would have to duplicate later. Always use a rectangle. I’ve used a -1px inner shadow in this rectangle, which when repeated would create those grid lines for me effortlessly.
2. Create a table header cell
Add the styling that you want and place it right below the first one.
3. Duplicate horizontally
Hide the Margin layer and then duplicate this cell horizontally to create columns. You can easily add more columns later. We’ll soon see how.
Next, edit the text fields to the categories of data that would be displayed in the grid.
Tip: Use option + drag to duplicate a group and command + D to create a subsequent duplicate which would follow the same rule.
|
https://medium.com/juniperux/destroy-fast-and-move-faster-with-sketch-233de17e3170
|
['Ridhima Gupta']
|
2016-09-21 00:48:06.477000+00:00
|
['Design', 'Data', 'Tips And Tricks', 'Invision', 'Sketch']
|
Understanding BIXI Commuters: An Analysis of Montreal’s Bike Share System in Python
|
The visualizations above were created using Folium, a library that allows the user to create geographical visualizations and can be installed using pip. The maps were generated with the following code. Note that the data frame passed to the function contains the net influx or outflux associated with each station for a specific time slot.
import folium
import numpy as np
import pandas as pd

def densityMap(stations):
    # generate a new map centred on Montreal
    Montreal = [45.508154, -73.587450]
    map = folium.Map(location=Montreal,
                     zoom_start=12,
                     tiles="CartoDB positron")

    # calculate each station's radius from its net departures
    stations['radius'] = pd.Series(index=stations.index, dtype=float)
    stations['radius'] = np.abs(stations['net_departures'])
    stations['radius'] = stations['radius'].astype(float)

    # set each station's color: red for net departures, green for net arrivals
    stations['color'] = '#E80018'  # red
    stations.loc[stations['net_departures'].between(-10000, 0), 'color'] = '#00E85C'  # green

    lat = stations['latitude'].values
    lon = stations['longitude'].values
    name = stations['name'].values
    rad = stations['radius'].values
    color = stations['color'].values
    net_dep = stations['net_departures']

    # populate the map with one circle per station
    for _lat, _lon, _rad, _color, _name, _nd in zip(lat, lon, rad, color, name, net_dep):
        folium.Circle(location=[_lat, _lon],
                      radius=_rad / 5,
                      color=_color,
                      tooltip=_name + " / net. dep:" + str(_nd),
                      fill=True).add_to(map)

    # save the map to an HTML file
    f = 'maps/map_density.html'
    map.save(f)
Correlation between biking habits and temperature
Now that it's been established that bikers commute to and from work using BIXI, let's look at other predicting factors. Montreal is a city that sees a wide range of temperatures: it can easily get as hot as 30 °C in the summer and as low as -30 °C in the winter. The BIXI bikes are taken off the streets in November and brought back in April. It is worth asking how well temperature predicts the usage of the network.
Each dot is the count of trips on a given day in 2018
Looking at the plot above, it can be seen that daily ridership increases as the temperature gets warmer: the number of predicted trips triples between 0 and 20 °C, from 10,000 to 30,000 trips. It could be argued that the third-degree polynomial regression provides the best fit for this situation, as one would expect the daily ridership to remain low while the temperature is around the freezing point. The daily ridership would then increase rapidly as the temperature gets warm, but stagnate and even decrease as it gets too hot. Looking at the plot, it seems like the number of daily trips starts to decrease around 22 °C. This decrease could also be due to people being on vacation during the hotter months.
From the R-squared scores, it can be seen that 70% of the variability is explained by temperature with a polynomial regression of order three. Other factors to consider could be weekdays versus weekends. I have also tried to correlate the distance from subway stations, but the results were inconclusive.
Variance r2 score linear: 0.63
Variance r2 score poly2: 0.65
Variance r2 score poly3: 0.70
A regression can be performed in Python using the scikit-learn library. The average daily temperatures used for this analysis came from an open data set from the Government of Canada. Here is the code that was used to perform this regression.
import operator

import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score

def scatterPlot(dataDay):
    X = dataDay[['Mean Temp (°C)']].values
    X_1 = dataDay[['Mean Temp (°C)']].values
    X_2 = dataDay[['Mean Temp (°C)']].values
    X_3 = dataDay[['Mean Temp (°C)']].values
    Y = dataDay[['departures_cnt']].values

    # Linear regression
    linear_regressor = LinearRegression()
    linear_regressor.fit(X, Y)
    Y_pred_linear = linear_regressor.predict(X)
    print('Variance score linear: %.2f' % r2_score(Y, Y_pred_linear))

    # Polynomial degree 2 regression
    polynomial_feat = PolynomialFeatures(degree=2)
    x_poly_2 = polynomial_feat.fit_transform(X)
    polynomial_regressor = LinearRegression()
    polynomial_regressor.fit(x_poly_2, Y)
    Y_pred_poly_2 = polynomial_regressor.predict(x_poly_2)
    print('Variance score poly2: %.2f' % r2_score(Y, Y_pred_poly_2))

    # Polynomial degree 3 regression
    polynomial_feat_3 = PolynomialFeatures(degree=3)
    x_poly_3 = polynomial_feat_3.fit_transform(X)
    polynomial_regressor_3 = LinearRegression()
    polynomial_regressor_3.fit(x_poly_3, Y)
    Y_pred_poly_3 = polynomial_regressor_3.predict(x_poly_3)
    print('Variance score poly3: %.2f' % r2_score(Y, Y_pred_poly_3))

    # Plotting the data
    plt.figure(figsize=(20, 10))
    plt.title('Daily Bike Ridership With Regards to Temperature')
    plt.scatter(X_1, Y, c='blue', marker='o')
    plt.xlabel('Mean Temperature (°C)')
    plt.ylabel('Number of Daily Trips')
    plt.plot(X_1, Y_pred_linear, color='red')
    sort_axis = operator.itemgetter(0)
    sorted_zip = sorted(zip(X_2, Y_pred_poly_2), key=sort_axis)
    X_2, Y_pred_poly_2 = zip(*sorted_zip)
    plt.plot(X_2, Y_pred_poly_2, color='green')
    sort_axis = operator.itemgetter(0)
    sorted_zip = sorted(zip(X_3, Y_pred_poly_3), key=sort_axis)
    X_3, Y_pred_poly_3 = zip(*sorted_zip)
    plt.plot(X_3, Y_pred_poly_3, color='magenta')
    plt.plot(X_1, Y_pred_linear, '-r', label='degree=1')
    plt.plot(X_2, Y_pred_poly_2, '-g', label='degree=2')
    plt.plot(X_3, Y_pred_poly_3, '-m', label='degree=3')
    plt.legend(loc='upper left')
    plt.rcParams.update({'font.size': 22})
    plt.show()
Cluster and graph analysis of the data
The last portion of this analysis is about estimating the traffic throughout the network. To do so, spectral clusters were formed from the stations. Spectral clustering is a technique from graph theory: it seeks to identify groups of nodes, in this case stations, by using the edges connecting them, in this case trips. To do so, it minimizes the number of trips between different clusters and maximizes trips inside clusters. In the end, it gives an idea of the geographical area riders stay within when taking a bike from a station.
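As a rough sketch, such clusters could be computed with scikit-learn's SpectralClustering on a station-to-station matrix of trip counts (the matrix and the number of clusters below are placeholders, not the real data):
import numpy as np
from sklearn.cluster import SpectralClustering

# Placeholder: a square matrix where entry [i, j] is the number of trips
# from station i to station j (replace with the real counts)
rng = np.random.default_rng(0)
trip_counts = rng.integers(0, 50, size=(20, 20))

# Symmetrize so the matrix can serve as an affinity (total trips between each pair)
affinity = trip_counts + trip_counts.T

clustering = SpectralClustering(n_clusters=6, affinity='precomputed', random_state=0)
station_labels = clustering.fit_predict(affinity)   # one cluster label per station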
From the map below it can be seen that 53% of trips in 2018 happened within the blue cluster, showing that most of the traffic in the network occurs between the Plateau neighborhood and downtown. Moreover, it can be seen that most of the clusters are delimited by geographical features, which are often highways or train tracks. For instance, the split between the red and the blue clusters mostly happens along a train track. Further, the downtown area and the Old Port neighborhood are split between the blue and the purple cluster along the Ville-Marie highway.
Each dot on the map is a station and its color identifies the cluster to which it pertains. On the graph on the right, the color of the nodes identifies the cluster and the edges identify the percentage of the traffic that occurred within or between clusters in 2018. For instance, 5.02% of the traffic occurred from a red station to a blue station in 2018. Edges representing less than 0.5% of yearly traffic have been removed.
Looking at the graph below, one could hypothesize that to foster cycling in Montreal, the city should invest in consolidating the network where bikers are already present, in this case the blue cluster, and foster links to the other clusters. For instance, the city should ensure that safe bike paths are available in the blue cluster and that bikers can also efficiently and safely cross the highways or train tracks that cut them off from adjacent clusters.
The graph analysis above was performed using the folium library and Digraph from the graphviz library, both of which have to be installed using pip. Plotting the map was similar to the example above. A Graphviz graph can quickly be generated from a Pandas data frame with the following code.
import pandas as pd
from graphviz import Digraph

def create_graphviz(data):
    # create the graph
    G = Digraph(format='png')
    G.attr(rankdir='LR', size='10')
    G.attr('node', shape='circle')
    nodelist = []

    # prepare the data
    data = data.drop(['count'], axis=1)
    data['from'] = pd.to_numeric(data['from'], downcast='integer')
    data['to'] = pd.to_numeric(data['to'], downcast='integer')
    data['from'] = data['from'].apply(str)
    data['to'] = data['to'].apply(str)

    # add the nodes and edges to the graph
    for idx, row in data.iterrows():
        node1, node2, weight = [str(i) for i in row]

        if node1 not in nodelist:
            G.node(node1)
            nodelist.append(node1)
        if node2 not in nodelist:
            G.node(node2)
            nodelist.append(node2)

        # transform the edge label to XX.XX% format
        percent = float(weight)
        percent = percent * 100
        percent = round(percent, 2)
        percent = str(percent)
        G.edge(node1, node2, label=("" + percent + " %"))

    # show the graph
    G.render('bixi_graph', view=True)
Conclusion
To conclude, this exploratory analysis was conducted using BIXI data from 2018 and May 2019. It has been seen that commuters use BIXI to go to and from work during the week and for leisure on weekends. Also, the use of the network is heavily influenced by the temperature, as there are fewer users when the weather gets cold. Moreover, most users of the network remain around the Downtown, Plateau and surrounding areas. This could be due to the fact that geographical barriers such as highways and train tracks make their commute to nearby neighborhoods difficult.
Further analysis would also be useful to understand trends and patterns such as the specific paths users take to commute in the morning and at night. This would also provide great insights into bikers' needs for a bike path network. Access to additional data such as station capacity would allow for a better understanding of the network's limitations and the need for rebalancing (bringing bikes to other stations by truck). As such, it would paint a complete picture of the situation, towards optimizing the BIXI and biking offer in Montreal.
Finally, I would like to thank other developers and writers who inspired my work with their own analysis more specifically Todd W. Schneider, Eden Au and Jon Nordby.
|
https://towardsdatascience.com/understanding-bixi-commuters-an-analysis-of-montreals-bike-share-system-in-python-cb34de0e2304
|
['Gregoire C-M']
|
2019-08-14 00:27:31.825000+00:00
|
['Python', 'Bikeshare', 'Linear Regression', 'Data Science', 'Data Visualization']
|
i think you’re confusing me with 9/11
|
i think you’re confusing me with 9/11
a coronavirus poem
Photo by Flavio Gasperini on Unsplash
you’re trying to
keep everything running, on schedule
denial laced with anger
knocking over
masks in stores — You deserve your
American life, unchanged
forward is the way through
i think you’re confusing me with 9/11
rhetoric about how
you must keep buying, keep going
schedules
resistance
stubbornness —
don’t give in to the fear! you say
but i’m not trying to
scare you
i’m not trying to
destroy your life
i’m not trying to do…anything really.
i am ceaseless. there are things i can live on
and things i cannot
i do not care
if you open parks
open schools
go back to work
i do not care
i’m not trying to
ruin your way of life
i’m really not
i do not care
if you have been in lockdown for
two months and you
really miss your favorite bar.
i do not care
what you miss, what you want
what your goals were.
there are things i can live on
and things i cannot
you’re blaming me for things
that are not my fault.
wanting to disrupt the economy? cause you to
live in fear?
what is fear?
no, i am looking for
lungs and spines and brains
soft, warm tissue
so i can multiply
that’s all i care about
i am not an abstract thing
a thing overseas
a thing somewhere else
i am right here
and i would very much like to go
into your body.
i don’t understand
your rage. i am not
a terrorist. i am not trying to destroy your
businesses, your stock market.
i think you’re confusing me with 9/11
the same stubbornness you felt
the national pride
the dedication to
a “way of life.”
i think you’re confusing me with 9/11
the resistance against
an enemy that looks like a person, an enemy with
a face
i think you’re confusing me with 9/11
but i’m not a single, intentional event
i am a wave, i am a disease
i cannot be
ignored, wished away
i am a wave, i am a disease
i do not tire, i do not rest
i am a wave, i am a disease
i am not your enemy, in fact
i want very much
to be your friend.
|
https://medium.com/are-you-okay/i-think-youre-confusing-me-with-9-11-50437ece4fcf
|
['Lisa Martens']
|
2020-07-14 18:47:40.088000+00:00
|
['Poetry', 'USA', 'Society', 'Covid Diaries', 'Poem']
|
Motion Models in Self Driving Cars
|
Odometry
Now that we know one way to calculate the car's new position, let's take a look at another technique, called odometry, which is based on motion sensor data. For mobile robots, the odometry values come from sensors located on the wheels, which measure how many times the wheels of the car or robot have turned. For instance, the sensor on this wheel would tell us it has turned twice. Given the circumference of the wheel and this odometry data, we can measure the distance travelled. Specifically, the new position of the car is equal to the starting position of the car plus the x and y components of the distance given by odometry, that is, the number of wheel turns multiplied by the circumference of the wheel. Awesome. With this knowledge, next I want you to think about when odometry measurements might fail to report accurate position estimates.
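As a minimal sketch (with made-up numbers), the position update described above can be written as:
from math import cos, sin, pi

def odometry_update(x, y, yaw, wheel_turns, wheel_diameter):
    """Estimate the new (x, y) position from wheel odometry."""
    circumference = pi * wheel_diameter          # distance covered per wheel rotation
    distance = wheel_turns * circumference       # total distance along the heading
    return x + distance * cos(yaw), y + distance * sin(yaw)

# e.g. two full wheel turns of a 0.6 m wheel while heading 30 degrees from the x-axis
new_x, new_y = odometry_update(0.0, 0.0, pi / 6, 2, 0.6)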
On a slick wet road, there will be errors in odometry values because the wheels will slip and due to this they will travel less distance than expected. Also, there may be error due to the wheels which will slide during braking, further contributing to sensor errors. However, on dry paved roads, the distance travelled by the wheels will be very close to the expected circumference of the wheel. Roads with lots of bumps also create problems for odometry because we are assuming the car travels a straight distance in the direction of its heading. In reality on a bumpy road, the car is traveling much of the distance up and down and not in a straight line. On a road with lots of turns, wheel odometry works well because even though the heading of the car is changing, it’s still moving the expected distance in the direction of its yaw. Notice that despite the fact that the distance vectors for the car are at different yaw angles, the sum of the absolute magnitude of all the vectors is the same as the expected distance travelled.
|
https://medium.com/datadriveninvestor/motion-models-in-self-driving-cars-d387b696a4f1
|
['Prateek Sawhney']
|
2020-12-05 17:45:37.918000+00:00
|
['Machine Learning', 'Self Driving Cars', 'Deep Learning', 'AI', 'Data Science']
|
Quick Tips to Transform Your Tableau Dashboards
|
Quick Tips to Transform Your Tableau Dashboards
by Sari Nahmad
We’ve all been there. You have spent all week getting the optimal results and now you have a short time to put your results in a presentable form. You can spend hours getting the right answer, but if your delivery isn’t clear then that may overshadow all the hard work you’ve done. When designing a visual, consider what allows your audience to easily absorb the information you are presenting verses what is distracting and will hinder getting your message across.
Here are a few tips that can instantly transform your Tableau visuals:
Branch out of default mode. Try out a new color template, ideally one that either represents the data or a color scheme that complements your company colors. Also try new fonts to spruce up your Dashboard (don't forget the hover font). Consider changing the font color to black or a darker shade and bolding it to make it easier to read.
Be selective with highlighting and use color with intention. Avoid information overload and focus on only one or a couple items to emphasize. Using color wisely (or a unique mark such as a dashed or bold line) in a visualization is a common way to give attention to a certain point. If one item seems different than the rest your audience may perceive there are reasons for that. Therefore, using contrast is a good strategy for emphasis: use a bright color for the focus item and mute colors for the remaining items. If you want all items to be considered equally important, use all bright colors or all mute colors but don't mix and match. Be conscious of color associations: especially use red sparingly as it is associated with poor performance and will draw immediate attention.
Consider font sizes and relative font sizes. Make sure your font size will be legible from any screen it is presented on. At least make the hover font large enough to supplement any smaller text. Use clear and concise titles (try camel case with no underscores) so that your audience knows what they are looking at without any guidance.
Here, a default bar graph is transformed by using a custom color palette and cleaner placement of text to focus the eye on important points. The font is also made bold and black to improve legibility.
Ask yourself if the flow makes sense. Especially with a multi graph view on a dashboard, make sure your graphs are organized in a logical manner.
Aim to reduce clutter and remove excess features. In general, think about which features if removed would not change the story. If they have no effect, removing them will clean up your visual. This includes the lines that border each graph and gridlines. Also consider removing self-explanatory headers and placing attribute names vertically in the bars (given there are a limited amount of bars, approximately 3 to 15).
With maps, aim for a minimalistic rendering. For example, remove distracting layers such as country and state names. Try the dark map as it allows points to pop on the background. With the data points use opacity to allow overlapping points and data density to be seen, play with the point size scaling to increase point visibility, and change the scale range so diversity between points is easier to detect. Maps are also great filters for other plots on a dashboard.
The default map below is transformed by using a dark map background and removing state and country layers. A custom color palette is applied and points are made transparent in order to see overlapping and enlarged for visibility.
These tips should help construct solid visuals to spark meaningful discussion. It is key that your visual is an aid to the discussion and not a distraction.
If you liked this blog post, check out more of our work, follow us on social media or join us for our free monthly Academy webinars.
|
https://medium.com/opex-analytics/quick-tips-to-transform-your-tableau-dashboards-9dd6b6bf8e25
|
['Opex Analytics']
|
2019-01-12 00:43:58.800000+00:00
|
['Tableau', 'Visualization', 'Data Science', 'Operations', 'Data Visualization']
|
SplashDance
|
Written by
Daily comic by Lisa Burdige and John Hazard about balancing life, love and kids in the gig economy.
|
https://backgroundnoisecomic.medium.com/splashdance-f49a72db2b8
|
['Background Noise Comics']
|
2019-07-23 19:48:50.992000+00:00
|
['Humor', 'Dance', 'Weather', 'Summer', 'Comics']
|
Does Calorie Counting Work? Science Increasingly Says No
|
The Calorie Myth
Calorie counting has become a religion in Western countries — but we’re getting it all wrong
Credit: Andrii Zastrozhnov/Getty Images
Here’s a recipe: Take one bad idea, coat it with a veneer of science, and chow down heartily. It may taste great but the long-term effects on your health include serious indigestion.
The bad idea in this recipe is the calorie. On the surface, calories seem straightforward. You use them to measure how much fuel you put in your body and how much energy you use when you walk, run, or even just sit on the couch breathing. If you pump your body full of calories and leave it idle, all that extra fuel sloshes around inside you. It doesn’t get used and instead, it becomes the fat that pads your skin and engulfs your organs.
This is more or less the central myth of Western diet. The word “myth” here doesn’t necessarily mean that calories aren’t real. It just means that calories are a story around which we organize our Western beliefs and values — just like ancient societies that had their own culture-shaping myths about why it rained and which spiritual beings ran the show.
But here’s the problem: If you take even a moment to learn about how the calorie was invented, how calories are measured, or what they actually represent, the whole story starts to unravel — fast.
Inventing the calorie
The calorie was created in the early 1800s as a unit of energy measurement. If you’re a science nerd, you already know about the kilowatt hour, a unit commonly used to measure electrical energy. You’ve also probably heard of the all-purpose joule, which is used for just about everything a physicist touches. The calorie was created as a convenient unit for measuring thermal energy (in other words, heat). By definition, one calorie is the energy it takes to heat a kilogram of water by one degree Celsius.
How can a unit that measures the change of water temperature tell you something about food?
(Technically, I’ve just described a Calorie with a capital C. The original calorie with no capital C is the energy needed to heat a measly gram of water. But outside of academic papers, no one uses the tiny lowercase calorie anymore. Because after all, do you want to eat a 452,000 calorie donut? For this story, we’ll be talking about the Calorie with a capital C.)
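For the record, the arithmetic behind that joke is just a unit conversion:
1 Calorie = 1 kilocalorie = 1,000 calories ≈ 4,184 joules
So the 452,000-calorie donut above is simply a 452-Calorie donut in the units you would see on a nutrition label.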
All of this doesn’t answer the obvious question, though: How can a unit that measures the change of water temperature tell you something about food? To answer that, we need the help of Wilbur Atwater, a chemist born in the mid-nineteenth century, shown below looking rather sedentary.
Photo via Wikimedia Commons
Atwater did something that sounds bizarre at best: He burned different types of food in a sealed chamber, which he submerged in a vat of water. This device is called, somewhat dramatically, a bomb calorimeter.
Basically, as your meal burns to ash in the bomb calorimeter, the temperature of the water around it increases. If you measure the change, as Atwater did, you can calculate, using calories, how much the burnt food warmed up the water. Assuming the human body is a similarly efficient food-burning machine, you can use this experiment to figure out how much energy the body can extract from, say, a bacon sandwich.
Diagram from the 10th Volume (second period of 1892) of the French popular science weekly ‘La Science Illustree’
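In modern terms, the calculation is just the definition of the Calorie applied to the water bath. The numbers below are an illustrative example, not Atwater's actual measurements:
Q = m · c · ΔT, where c (the specific heat of water) ≈ 1 Calorie per kilogram per degree Celsius
So if, say, 10 kilograms of water warm by 30 degrees Celsius, the burnt food gave off roughly Q ≈ 10 × 1 × 30 = 300 Calories.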
If this process seems strange to you, that’s because it is. This was 1896, after all. Most doctors still thought attaching leeches to your body was a reasonably good way to cure herpes. But Atwater’s research with the bomb calorimeter had a lasting effect. It’s why we still talk about burning calories today.
|
https://elemental.medium.com/the-calorie-myth-f9e5248daa0c
|
['Matthew Macdonald']
|
2020-11-20 18:59:34.340000+00:00
|
['Food Science', 'Diet', 'Food', 'Health', 'Trends']
|
Exploring the changing nature of communication
|
Exploring the changing nature of communication
An Argument for Verified Humans (Part II)
In Part I of this series (linked below), inspired by a quote from a guidebook called Deep Learning with PyTorch (further below), I embarked on a journey toward propositions for (1) verified humans on social media platforms and (2) transparency regarding the use of language models to generate text by discussing the unimportance of the semantic classification of the actions and capabilities of machines. In this article, I will explore the changing nature of communication with human language to build toward these propositions by further discussing the quoted text.
The authors of Deep Learning with PyTorch claim that the question of whether computers are intelligent might not be important, and I believe that much is true, but only with regard to semantics. Specifically, any discussion of the words used to describe what it is computers do when they do the things they do is probably little more than an exercise in pedantry. However, as further noted in the quote, there is an important caveat to consider: In the past, tasks related to human language could be performed only by humans. Herein, I will show that, regardless of whether machines are intelligent, this caveat is non-negligible when considering machine-generated text.
The nature of interpersonal communication
Humans communicate effectively with each other, thanks to their shared mental models of the physical world and social context. These models foster reciprocal trust by making contextual knowledge transparent; they are also crucial for explaining how decision-making unfolds.²
First, let us explore the nature of human communication by considering the example of two humans interacting. Interestingly, even the simplest such encounters are subject to innumerable dynamic factors due to the presence of what I will refer to as internal and external information streams, where an internal stream (e.g., body language, tone) stems from the persons in the conversation, and an external stream (e.g., background noise) stems from the environment. Further, note that, while internal information streams may help to convey the speaker’s message, external information streams may distort it.
(As an aside, vocabulary seems neither an internal stream nor an external stream, acting more as a basis, i.e., vocabulary is to the metric system what spoken utterances are to specific measurements in metric units. However, unlike the metric system, which is standardized so that, for example, one meter is equal to every other meter, a speaker’s vocabulary acts as a semi-standardized basis that is both shared and unique due to the denotative and connotative aspects of words, respectively. As such, vocabulary can both help and hinder the conveyance of the speaker’s meaning.
For example, for two people who share (1) a native language, (2) regional, cultural and generational dialects, and (3) knowledge of a certain industry’s jargon, there may be wide overlaps in both the denotative meanings of and connotative associations with individual words. Thus, in such conversations, the vocabulary serves as a mostly standardized basis, leaving little room for ambiguity of meaning. In contrast, for two people conversing in a lingua franca that neither use with native proficiency, there may be many unshared connotative associations due to foundational knowledge of other cultures and languages and limitations in denotative meaning that lead to malapropism and out-of-vocabulary words, both of which may hinder the exchange of information.)
Despite the inherent complexity of human communication, in general, such encounters proceed within the bounds of certain social norms of human decency that vary based on the pair's relationship and the context in which the interaction is set (which includes culture and mode of communication). I do not believe that an exhaustive list of such norms could be produced for any given culture — or for any given pair, for that matter — but as an example, consider that, during a conversation, when asked a question, a person will usually try to provide the answer that he or she deems most appropriate, rather than yelling incomprehensible gibberish, because to do so — to yell gibberish — would surely and unnecessarily sully the relationship between the pair, in a way opposite that of saving face.
So upon entering such interactions, each person can assume that the other is likely to adhere to these norms — even if there is no spoken agreement or written requirement to do so. If the norms are not respected, then the offending person can be said to have breached the trust of the other, which may have certain repercussions or influence future interactions between the pair. In other words, people are held accountable for their behaviors during interpersonal interactions by a (lowercase) social contract that requires certain standards of communication to be maintained.
The curious case of text-based communication
Second, let us consider the specific case of text communication, wherein each person plays both writer and reader in turn. In some ways, this case is even simpler than that described above, as only information presented as written language can be considered by the reader; however, this very simplification also introduces a complication: Because information cannot be carried by any means other than text, the reader can use only text to decipher the writer’s intended meaning. Consequently, the intended meaning — now limited in its vehicles of conveyance — is more likely to be misunderstood by the reader (as often occurs when sarcasm is employed in written conversation). Therefore, during a written conversation, there is an inherent disparity between the writer’s intention and the reader’s interpretation that cannot be ameliorated by auxiliary data sources.
Let us assume that the disparity between interpretation and intention is intuitively acknowledged by those participating in the textual conversation. Through this acknowledgment, which is essentially an acknowledgement of the limitations of written language to convey meaning, a person may become more likely to accept minor incongruencies between the original message and the response provided by the recipient of that message. Consequently, the meaning of the response can be inferred in the context of the original message. (I do not believe this assumption is unrealistic, as I regularly acknowledge the ambiguity of language, and thus the possibility of my incorrect interpretation of written words, when inferring meaning from messages — especially upon seeing how my own words can hit in an unintended way and thereby provoke a misaligned response.)
During a text-based conversation, this intuitive acknowledgement helps to move the conversation forward. For example, in contrast to a face-to-face conversation, wherein one person can ask the other to repeat a previous utterance without significantly delaying the conversation, in a text-based conversation, there is often a non-negligible time lag between messages because the communicators may not be active (i.e., ready and willing to respond) at all times. Thus, to provide a written request for clarification could stall the exchange of information. In addition, it would give the person with the questionable response (A) the chance to speak again without the person requesting clarification (B) providing input on the matter being discussed, i.e., person B requests that person A take another turn, which seems unfair in an adolescent mom-said-take-turns kind of way. Furthermore, because a text conversation produces a log of exchanges that can be used as a context from which to infer meaning, the persons communicating may resist asking for clarification, preferring instead, under risks of stagnation and of being skipped, to reread previous messages.
And so, in text-based conversations, each message often acts as a full, discrete response to the previous message and as the ground truth from which the writer’s meaning can be inferred.
Interactions between computers and unsuspecting humans
Finally, let us consider an interaction similar to that devised by Alan Turing for his Imitation Game⁴, wherein a human player must determine if he or she is interacting with a human or a computer via conversation alone; however, unlike in the Imitation Game, in this scenario, the opponent is definitely a computer, and the human is entirely unaware that a game is being played (i.e., the human has no reason to believe a priori that the opponent could be a computer). This situation occurs each time a person engages with a customer service instant messaging (IM) app of ambiguous humanity, but it can also occur on social media, where the automatonity of the bot may be even less obvious.
Let us consider this example from the perspective of the computer, where the computer receives messages (inputs) from and provides responses (outputs) to the person. In this case, the computer has to be able to understand human language as inputs well enough to produce passably human responses as outputs. This task is nontrivial, but depending on the course and duration of the conversation and the savvy of our human player, it is accomplishable with current methods — unlike a task such as a face-to-face conversation, for which a machine would not only have to compose a response but sound like a human, look like a human, and move like a human.
Because the person has no reason to believe that the opponent is a computer, as until very recently, all beings using human language were human, he or she will participate in the conversation according to the standard norms of human decency dictated by the cultural context in which the conversation is taking place. Furthermore, because the conversation is carried out via text, the person will likely grant the computer’s (output) response some leeway when considered in the context of the original (input) statement. Therefore, because the computer’s output is represented by only written language in a text conversation, an unsuspecting person may be easily convinced that a machine is a person who intended to convey the written message and who is accountable for the message according to the norms of the society.
This type of I/O disparity is perhaps best explained by a phenomenon known as the ELIZA effect, which is the tendency of humans to read too much into the messages produced by machines, even if they are aware of the mechanical origins of the text, but especially if they are not. In a toy example of the ELIZA effect, a person who is thanked by an ATM after completing a transaction may actually feel a sense of appreciation. While the text in this example is probably harmless, the ELIZA effect may (1) increase in influence as chatbots become more sophisticated and (2) become more difficult to manage as language models become less interpretable, as supported below:
ELIZA, a chatbot developed in the 1960s, could discuss a number of topics, including medical and mental-health issues. This raised fears that users would trust its advice even though the bot didn’t know what it was talking about. Yet until recently, most chatbots used rule-based AI. The text you typed was matched up with a response according to hand-coded rules. This made the output easier to control. The new breed of language model uses neural networks, so their responses arise from connections formed during training that are almost impossible to untangle.³
Therefore, because tasks related to human communication were formerly performed only by humans, it is possible that many people — especially those who are unaware of the text-generation abilities of modern large language models — can be convinced that a machine-generated text was produced by a human, as when a human reader consumes a text, the text is usually assumed to have been written by a human writer who intended to convey the message in the text and who can be held accountable for the message if it requires elaboration or if it does not adhere to certain social norms.
In a way, when a machine engages in a conversation with a human, it (or, as a proxy, its programmer) acts in bad faith with regard to the above-discussed (lowercase) social contract because, as I will elaborate on in Part III, a machine cannot be held accountable for the messages it produces due to both its inability to have human desires and our inability to assume that a computer-generated text can have a thesis.
|
https://medium.com/linguaphile/an-argument-for-verified-humans-part-ii-cc3c4e1e2c6a
|
['Danielle Boccelli']
|
2020-12-28 15:12:53.753000+00:00
|
['NLP', 'Communication', 'Data Science', 'AI', 'Language']
|
Apache Hive Hooks and Metastore Listeners: A tale of your metadata
|
The Need for Metadata Management
According to Gartner “By 2020, most data and analytics use cases will require connecting to distributed data sources, leading enterprises to double their investments in metadata management.”
After reading that statement, you are probably wondering: What does metadata really stand for? And why does it need Management?
I can tell you those were the first questions that came to my mind a few months ago…
Metadata
We can categorize metadata in:
Technical Metadata: details about an asset, such as its name, type, the creator's name, the size of the object, or when it was last updated.
Business Metadata: offers additional business context about an asset, for example, if it contains PII (Personally Identifiable Information), the date it should be deleted, and many others.
Operational Metadata: usually information on how your assets are being used, like query logs, data sampling and data profiling.
You can also think about physical and logical information about the asset, let me clarify that by looking at Hive and the Hadoop Ecosystem…
Your data can be physically stored in an HDFS location, and have many different logical tables pointing to it in Hive.
Photo by Amy Chen on Unsplash
That’s a lot to process, but it will become clearer later on…
Metadata Management
A Metadata Management solution helps answering questions related to your data assets such as:
Are my data assets secure?
Am I in compliance with all those new data protection regulations, such as CCPA, GDPR, and HIPAA?
Who has visibility and who can make changes to those assets?
Metadata management usually starts with a wiki, a spreadsheet, and various documents, but when we enter the big data world, these can easily get out of sync and we become unable to answer those questions.
Do you remember the $5 billion fine regarding Facebook users' data privacy? You don't want to be caught unable to answer those questions.
We need to start automating our metadata management!
|
https://towardsdatascience.com/apache-hive-hooks-and-metastore-listeners-a-tale-of-your-metadata-903b751ee99f
|
['Marcelo Costa']
|
2019-11-21 17:03:40.948000+00:00
|
['Big Data', 'Metadata', 'Data', 'Data Warehouse', 'Hive']
|
A Simple Guide to Make Better Code Reviews
|
Photo by Markus Winkler on Unsplash
Code reviews are an excellent tool for maintaining good quality in the codebase but only if done right. Although there is no magic formula to do a code review, in this article we will discuss some good practices you can consider while performing it to get the desired result.
What to look for in a Code Review?
Photo by Markus Winkler on Unsplash
Consistency
You should always have a style guide for your codebase. This guide can include things like naming conventions, import order, and so on, and these rules should be followed strictly unless the document says otherwise. Many of these rules can be enforced by automated tools like linters, formatters, and hooks, but some cannot; this is where the reviewer should pay the most attention.
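For instance, part of a style guide can be encoded so a machine enforces it before a human ever looks at the diff. Here is a minimal sketch of an ESLint configuration; the specific rules and the eslint-plugin-import dependency are illustrative choices, not a prescription:
// .eslintrc.js — example of automating style rules so reviewers can focus on what tools can't catch
module.exports = {
  extends: ['eslint:recommended'],
  plugins: ['import'],
  rules: {
    // enforce a naming convention
    camelcase: 'error',
    // enforce import order (provided by eslint-plugin-import)
    'import/order': ['error', { alphabetize: { order: 'asc' } }],
  },
}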
Functionality
Does the feature or the fix work as intended? What about edge cases? You should always try to run the code and check that it works correctly, especially if it involves UI changes. Always try to think like an end user and consider what actions they could take that could potentially break the feature.
Complexity
Developers often make the code more generic than it needs to be, or add functionality that isn't needed because they think it could be used in the future. This type of over-engineering can lead to some problems, the most common being:
Spending more time writing code and tests that may or may not be needed, or having to refactor code because the requirements weren't what was expected.
Having unused code in the codebase, which may lead to slower deployments or bad performance.
As a reviewer, you need to check for this because, even if there is a high chance the extra functionality will be needed in the future, it could happen that when the time arrives it isn't needed, or that the requirements are different from what was originally thought.
Testing
All new code should have unit tests attached, either by adding new ones or updating the existing ones. As a reviewer, you should at least check that the main cases are tested because, as you may already know, this allows refactoring easily without breaking things.
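As a sketch of what "the main cases are tested" can look like in practice, here is a minimal Jest-style unit test; the formatPrice function and its expected output are hypothetical stand-ins for whatever the pull request actually changes:
// formatPrice.test.js — hypothetical example covering a main case and an edge case
const formatPrice = require('./formatPrice') // assumed to be the module under review

test('formats a regular amount as currency', () => {
  expect(formatPrice(1234.5)).toBe('$1,234.50')
})

test('handles the zero edge case', () => {
  expect(formatPrice(0)).toBe('$0.00')
})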
How to leave comments in a Code Review
Photo by Volodymyr Hryshchenko on Unsplash
Write why not what
Leaving comments only about what you want changed can be seen as a mandate, especially when you review code from junior developers. This makes them do the changes asked without thinking about the reasons, which leads them to make the same mistakes over and over again.
For example, when leaving a comment or a change request that is linked to a style rule, you can link to the place where the rule is written so the developer knows why it should be implemented that way.
Recognize the good work
Code reviews shouldn't only be about detecting mistakes; they are also a good place for appreciation. If the developer did something good, let them know. This will increase their confidence and will very likely lead to better code quality.
Be clear
Be clear about the outcome you expect from your comment: are you requesting a change, asking a question, or just being informative? This way the developer can act accordingly; nobody is a mind reader.
Don’t comment just because
Sometimes reviewers (more often senior developers) want to show their knowledge by having something to say. It is not mandatory to find mistakes when doing a code review; there will be times when you only need to approve the changes as they are, and there is nothing wrong with that. Nobody will think you didn't review it, and if someone else finds something you didn't, don't worry: we are humans and we make mistakes.
Guide, not solve
It is the developer's responsibility to make the changes that improve the code. Sometimes it seems easier to leave a comment with the code we as reviewers would like to see implemented, but this is not helping anyone: you're working on something that is not assigned to you, and the developer is not learning. If you want this to be useful, you could do a pair programming session where you explain your proposal and make sure the developer understands it.
How to solve conflicts in a code review
Photo by John Schnobrich on Unsplash
Facts not opinions
When a conflict arises in a code review, technical facts should prevail. If the code is maintainable, readable, performant, and solves the problem, there is no reason to request changes unless we have evidence to the contrary.
Pair review
It is often difficult to express our emotions or to accurately describe what we are thinking using text comments. If something is too complex to be addressed this way, face-to-face communication can help both the reviewer and the developer better understand what the other is thinking and reach a consensus.
Lean on a technical lead
If there is no consensus on some decision, you can ask a technical lead to help resolve the situation. This doesn't mean the reviewer and developer should simply do as the technical lead says (unless the disagreement is affecting the project timeline); it is mostly about having another person look at it and give their advice.
Conclusion
As I said at the beginning, code reviews can help improve code quality and learning, but they need to be done correctly; otherwise, they can become a burden that won't help anyone. I hope this article helps you and your team use this excellent tool to make your lives as developers easier.
|
https://medium.com/swlh/how-to-make-good-code-reviews-1f9f3ee82189
|
['Luis Guerrero']
|
2020-11-20 18:43:17.098000+00:00
|
['Software Architecture', 'Programming', 'Software Engineering', 'Software Development', 'Code']
|
Asking for Free Writing Advice is The ‘Send Nudes’ of My Profession
|
Attack of the LinkedIn Inbox
“Can you look at my writing and provide some feedback?”
“Can I have some guidance on your content strategies?”
“Hey, would you review my draft?”
Welcome to my inbox. This is a good chunk of the messages.
There’s a noticeable brazenness with the writing profession, a willingness to hawk professionals for free stuff. Perhaps it's because ‘anyone can write sentences’ and everyone has written essays in school, occasionally scoring a good grade.
Perhaps they assume writing has this airy, easily tangible quality, like picking up a pen someone just dropped.
Other times, their inquiry is more calculated. That’s when my claws come out. They’ll say, ‘I’d love to pick your brain’ or ‘meet for coffee’ when they are just looking for free consulting.
“I’ll buy my own coffee and charge my rate. Cool?”
Back in my agency days, clients occasionally kicked and screamed about billings. There was the old, “Why am I paying this much for something your graphic designer can do in a couple of hours?”
The blunter version of our response, “You’re paying for the years of experience that enabled them to do it in a couple of hours. And the fact that this designer is very talented.”
This disconnect, the lack of value for artistic merit isn’t a new phenomenon. It’s also why most company blogs are little more than the walking dead. Their marketing team bottom-feeds and they get what they pay for.
Anyone can type words. And anyone can wield a paintbrush.
|
https://medium.com/publishous/asking-for-free-writing-advice-is-the-send-nudes-of-my-profession-cdedb57a0a98
|
['Sean Kernan']
|
2020-12-08 23:36:30.827000+00:00
|
['Humor', 'Life', 'Business', 'Life Lessons', 'Writing']
|
12 popular website types you need to know as a designer
|
Portfolio website
The first website you will probably ever create as a designer is your own portfolio website. A portfolio website is used to display and promote examples of someone’s past work. Mostly used by those who work within a creative field, a portfolio site is essentially a visual resume. It demonstrates your skills as a creative and shows your best work in hopes of attracting prospective clients or employers.
Since they’re often not too complex in content you can choose to design and develop your own website with Webflow, a tool to help you build custom websites. But if you prefer to focus on design rather than development, there are also many services out there like Squarespace or Wix that make it easy for anyone to create a portfolio website quickly and easily. You pay a monthly or yearly fee to use the website builder tool and keep it hosted on their platform. They can be costly to maintain but they usually offer some level of customer service to help with questions and problems that may arise which makes it worth paying for.
Example of a design portfolio website (by Dann Petty)
When it comes to designing for a portfolio website, the sky’s the limit. But the most important thing to remember is to let the work shine through. Rather than having a complex website design with fancy animations, you want to wow potential clients and employers with your work and guide them to contacting you for your services. Each portfolio site should be unique and reflect the designer or creator of the work. So you may want to lean toward a minimal design layout to allow the work to speak for itself and promote a good user experience.
As a freelance web designer, this is an easy website type to get started with. You can help design portfolio sites for your friends and family in return for recommending your services. Referrals are crucial to the growth of a freelancing business.
Personal website
The goal of a personal website is exactly that, personal. It might not be to sell something or promote work but merely to post and share thoughts. It can feature a blog, a one-page resume with links to other social platforms, or whatever you want to share.
Example of a simple one-page personal website (petermckinnon.com)
As a web designer, you probably won’t be asked to design too many personal websites unless it’s as a favor to a friend or family member who wants to start and share a project they’re working on. However, if you’re new to designing websites, this could be a great way to get some practice in a low-pressure environment.
Try practicing designing a simple one-page website that catches a user’s attention and shares the most important information about the individual whom the site is for. It can be a fun challenge as you won’t have as much content to work with.
Blog
A blog is a website that’s regularly updated with articles, also called blog posts. These sites can be run by an individual, a group of people, or even companies. The purpose of a blog is to share information on a specific topic to attract an audience. Every blog is different but most have a goal of lead generation. By posting well-written and researched content, you can rank for specific keyword search terms and generate quality traffic which may lead to new customers.
Example of a well-known design blog, Smashing Magazine
You’ll notice most blog websites look templatized — that’s on purpose. There needs to be structure to the design of a blog website so it’s easy for the end-user to read fresh, new content. On the backend, these websites use a CMS (content management system). With a CMS, you can create, manage, modify, and publish content with a user-friendly interface. This makes it easy for someone to update and add new content to the blog, rather than having to pay for a web developer to make these updates.
When designing a blog website, focus on the structure of the site. Think about how you want to display a series of blog posts. Know that the content will change as new articles are uploaded, but consider how you want to show the title, author, article summary, read more button, and all the other elements of a good blog website. A good user experience is super important on a blog website so the user can easily find what they are looking for.
Business websites
Most businesses these days have a website where they can share what they offer, awards and accolades, past examples of work, customer testimonials, specifics on services they offer, and just about anything that helps tell the story of what the business does and how it helps people.
Example of a business website (compass.com)
Design and branding are vital to a business website as it’s how they can set themselves apart from the competition. When designing these types of websites, it’s important to gather as much information upfront from the client about their business. Do they have an existing logo, brand colors, tone of voice, style of photography? Or are they a fairly new company that is looking for help in these areas as well?
As a designer, you have the opportunity to not only design a great website presence for your client’s business but maybe even offer logo and branding services that they can use throughout the life of their business.
eCommerce websites
An eCommerce website is an online shop where people can buy products. Some businesses have both physical store locations and eCommerce websites to reach a larger customer base. When your customer lives on the internet, you have an infinite number of potential customers you can reach.
Example of an eCommerce website (sephora.com)
This is a popular website type for designers to work on. There are always new businesses starting up every single day and they’re looking for designers to help them create a memorable online presence to sell their products. As a web designer, you can choose to specialize in designing certain types of websites or work with certain clients. For example, choosing to work with real estate agents, a restaurant, or other eCommerce business. Choosing to focus on eCommerce websites means you’ll have more opportunities to work with clients and they’re willing to pay well since the purpose is to generate business.
When designing these websites, there are many factors to consider. Think about the brand presence and how to set their product apart from the competition. Organization is key: are there multiple families of products, for example different flavors or sizes? With so much variety, it’s important to make sure the customer understands what they are buying.
Social media websites
Facebook, Twitter, and Pinterest are examples of popular social media websites. They are usually created to allow people to share their own thoughts and ideas or simply connect with other people. These sites allow the user to upload words, photos, and videos to personalize their feeds. With automatic refresh and infinite scroll features, these sites tend to keep people coming back for new content often.
Example of a social media website (Pinterest.com)
Once created, these types of websites mostly need backend maintenance such as testing and fixing bugs. There may be occasional design updates but the design might not change much visually to the user until there is a significant rebrand. Most social media companies have their own in-house design team that works on design and development but you may have the opportunity to collaborate as a contract designer.
Membership websites
Membership websites use a paywall in order for a user to access the content. Usually, these are paid for on a monthly or yearly basis and are updated regularly with new content to keep the user coming back and happy to renew their membership.
Example of a membership website (Skillshare.com)
Since the goal of a membership site is to upload new content often, it’s important for the CMS behind the site to be easy to use. For this reason, companies tend to use services like Kajabi or Teachable to create and maintain their membership sites. They may seek out design help, in the beginning, to create a branded presence on the platform.
Wiki or community forum website
One of the most well-known and visited websites is Wikipedia. Wikis can be created on any subject. A wiki is a website where various users are able to collaborate on content, update, and make their own changes.
Wikipedia.com
Community forum websites provide an organized way to publish discussions on a specific topic. Users can register to start and contribute to existing discussions. Forums tend to help users solve technical issues; a great example of one is the WordPress support forum.
Due to the open-source nature of these websites, the design tends to be non-existent. You'll notice Wikipedia is not designed well; it's basically just HTML without much CSS styling. However, if you wanted to create your own community forum you could get creative with the design and collaborate with a developer to make the backend work smoothly.
Magazine and news media websites
Magazine and news websites are similar to blog websites in structure. However, they’re focused on journalism rather than personal interests and opinions. These sites feature articles, photos, and videos that are informative and educational. Over the last few decades, the magazine and news industry has shifted from a print-only experience to a digital format, usually offered through a subscription service.
Example of a news website (The New York Times)
Companies that run these sites, like The New York Times, rarely update the design or work with individual designers. They mostly work with agencies or companies that help with design, development, and maintenance.
Video streaming websites
Video streaming sites like YouTube, Netflix, Hulu, Amazon Prime, and many other competitors have surged in popularity over the last decade. Some are free, while others offer paid services.
Example of a video streaming website (Netflix.com)
Similar to social media websites, once the skeleton of these websites is designed, most of the upkeep is on the tech side. With so much traffic and bandwidth streaming, these websites require a lot of backend maintenance. Streaming companies tend to have their own in-house design and development teams that work on long-term updates.
Web portals
Web portals are often created for internal purposes for a business, organization, school, or institution. They usually involve a login with a personalized experience based on the user. For example, a university might have a web portal allowing students to access information about their courses or a company might use a portal to allow their employees to request time off and view pay stubs.
Example of a web portal login for Harvard University
Web portals are complex and will usually involve more complicated design and programming than most other websites. For this reason, companies tend to outsource the design, development, and IT maintenance to another company.
Landing pages
A landing page is a one-page website usually created for a marketing campaign that drives visitors to take a specific action. The content and design should be to the point and lead the user to one CTA (call-to-action).
For example, a company might want to create a landing page as lead generation for their business, offering a free download or access to a video in exchange for an email address. Another example of a landing page might be to educate a user about an app and direct the user to download and use it.
Example of a landing page to download an app (source: Ghulam Rasool on Dribble)
While there are many services out there that make it easy for businesses to create their own landing pages, it’s still a skill that is in-demand. Knowing how to design an effective landing page is a lucrative skill to have as a designer and you will likely have repeat clients as they may request several landing pages designed.
You can also try your hand at designing landing pages for clients on sites like 99designs, Fiverr, or Upwork. It’s a great way to get started with web design, add to your portfolio and resume while getting paid for the experience.
|
https://uxdesign.cc/12-popular-website-types-you-need-to-know-as-a-designer-31334f5a64c7
|
['Monica Galvan']
|
2020-11-10 01:30:02.269000+00:00
|
['Freelancing', 'Design', 'Business', 'User Experience', 'Web Design']
|
No, political interests have not warped the vaccine approval process.
|
No, political interests have not warped the vaccine approval process. Over at Elemental, journalist Tara Haelle spoke with medical experts about the coming Covid vaccine and why we should feel secure that a vaccine’s distribution is a reflection of its efficacy — not a politician’s deep pockets.
“Safeguards are in place to really limit the ability of any one scientist in being unethical,” says Michele Andrasik, the director of social and behavioral sciences and community engagement at the Fred Hutchinson Cancer Research Center’s Covid-19 Prevention Network. Among those safeguards: FDA and CDC advisory panels, and mandatory trial pauses for vaccines that appear to have adverse effects. (Both AstraZeneca and Johnson & Johnson were forced to halt their trials in September after subjects reported adverse effects.)
|
https://gen.medium.com/no-political-interests-have-not-warped-the-vaccine-approval-process-89e301be3512
|
['Max Ufberg']
|
2020-12-15 22:43:32.854000+00:00
|
['Covid 19', 'Health', 'Vaccines']
|
The Future of Management Is Fostering Employee Growth — In and Out of Your Organization
|
2020 has been widely recognized as the year of change, uncertainty and growth. With each of those, often comes the search for personal value, stability and career development.
Unfortunately in the United States, most Americans work 40+ hour weeks. So it shouldn't come as a shock when talent seeks out new opportunities, or at least wants to enjoy most of what they do. In fact, they should be encouraged to pursue whatever sparks a light within them.
Whether you’re a CEO of a Fortune 500 company or a start-up, a team manager or the head of recruitment, you’re a leader. And the role of a leader is not synonymous with boss. The roles and skills of people in leadership are fundamental to the talent they currently have, and the talent they attract in the future. That means leaders have to invest mentally and financially into the growth of their team — especially if they want them to stick around.
A follow-up to “Five Questions to Ask Yourself When Contemplating Quitting Your Job.”
To lead is to help move forward.
Employees know when they’re being supported, and when they’re not. As people, we want to feel invested in. We expect it.
If you’re a leader, you should be aware of exactly how you are actively supporting your employees’ overall success. Whether it be to thrive in their current role at your organization, or build up their value to prepare them for their next career advancement, here are five steps you should take to ensure these progressions happen, increase talent retention, and prepare for a successful annual review.
Photo by Lindsay Henwood on Unsplash
1. Open communication and trust
This one may sound obvious, but how many times have you had an honest conversation with your team on what they’re looking to get out of their role? If the last discussion on this was during their interview, that’s a major problem.
In “Five mindset shifts to transform your organization into a networked powerhouse,” Scott Mason, founder and CEO of Life’s Work Associates, explains how management can get better acquainted with their teams’ aspirations and day-to-day output. “Leaders should start by listening to really understand their employees’ goals, motivators, and ideas. This combined with coaching and mentoring, assigning stretch assignments to learn new skills, and nurturing a work culture that attracts and keeps top talent, will empower your workforce.”
Goals change as we grow. If your employee(s) started five years ago, six months ago, or last week, consistent check-ins are needed to ensure they’re on pace to succeed in their role not only for the better good of the business, but for their own sake, happiness and professional growth.
Here are some opportunities for scheduled check-ins:
Quarterly reviews: Key performance indicators (KPIs) and career goals should be set up within the first few weeks of an employee's hire date to ensure they fully understand their newly assumed role, responsibilities and expectations. But this is also a great time to learn what they're looking to get out of the role. This is the building block to a sturdy foundation with your team or organization. They should be continued each quarter to ensure goals are being checked off, address any concerns and set up new ones.
Weekly one-on-one's: I recently left an organization where I had weekly touch bases with my manager. Though they were often productive, they almost never addressed my professional aspirations. Once in a while the topic would come up, and I'd voice any thoughts but action plans weren't made a priority. I suggest organizing and rotating one-on-one's to discuss: 1) any business objectives or day-to-day tasks, and 2) utilizing the other to allow your employee to share any updates or progression in their role. Don't always expect some groundbreaking revelation to be made, but approach the conversations with an open mind. If you find you two prefer to adjust the frequency of these check-ins, go ahead. Flexibility is key.
Open door policy: Give your team the space to come to you when needed. If something isn't working, you should want to know about it. And that doesn't just mean work projects. If the social or professional dynamic is off within your team, or lacks team trust, that not only affects project productivity, but mental health which can hinder overall performance and long term career objectives. The Center for Disease Control (CDC) provides tips and resources on how to implement positive workplace programs.
Establish a mentorship: As a mentor, you not only distribute your plethora of information onto someone more junior, but you're given the opportunity to develop an organic relationship with someone on the foundation of mutual interests. With this comes the insight to understanding whether or not you yourself enjoy teaching, and opens the floodgates to more opportunity.
If you’re not confident you can effectively support your team in these areas, Forbes provides ways to enhance your soft skills in the careers-centered article, “Developing Your Employees Is The Key To Retention — Here Are 4 Smart Ways To Start.”
2. Invest in education
Now that you’ve (hopefully) asked your team what their career interests are and understand their professional goals, you can actually help them achieve them.
Providing educational tools and resources is a seamless way to ensure employees are getting access to opportunities that will support them in their current and prospective roles. Whatever the tool, it's best that the employee(s) are able to access the resources on their own terms, at a time when it's convenient to them. Some popular opportunities are optional Lunch-and-Learns (that take place during business hours), tools that are uploaded to a digital platform where they can be accessed 24/7, and my personal favorite — tuition reimbursement.
Photo by XPS on Unsplash
Tuition reimbursement plans
I’ve participated in two educational programs that my employer paid for, and found both of them to be the best use of my own time and my employer’s money; they supported my role at the time, the trajectory of it and the business’ goals.
Allowing your teams to identify educational resources themselves will help prevent wasted dollars on a topic that isn’t much help to the employee. As much as you want to believe you understand your employees’ roles, no one knows them better than themselves.
Give them options to access online resources like General Assembly, Skillshare, Udemy, and lynda.com, which offer online learning tools and resources across industries and price points.
HR Daily Advisor reinforces that there is a mutually beneficial reward when employers provide tuition reimbursement programs, and see that education is also a personal development tool. “Educated employees are happier, have more confidence in their abilities, and make for better workers overall,” Joan Burns, EVP of HR, marketing, and communications and Chief Diversity Officer at IDB Bank explains.
Burns concurs that this added benefit also makes open roles more attractive to potential talent. She suggests that companies can show their support even more by covering the costs of textbooks, providing bonuses for high grades, or offering transportation reimbursements for school commutes.
Let’s not forget one more ROI — employers can offer up to $5,250 per employee annually in tax-deductible tuition costs; employees, in turn, are exempt from paying taxes on the same amount. Thank you, IRS. Section 127 of the Internal Revenue Code.
3. Invest in training
Training shouldn’t just apply to the first two weeks of a new hire’s employment. It should continue throughout their tenure. Robin Sodaro, an experienced brand marketer and creative director, shares her take on staying sharp throughout her career. “The learning process does not stop at any age and nor do ideas, energy or the practical experience to get things done,” she explains in her Rosie Report article, “Cancelling ageism in the future of work: 3 age myths debunked.”
E-Learning site, Shift, emphasizes the importance of training and warns the dangers of avoiding it in their blog post, “The True Cost of Not Providing Employee Training.” “Untrained employees will, inevitably, lack the knowledge to use company resources properly, which will lead to waste, in a service industry; lack of knowledge about procedures will affect customer interaction and retention. Because of this, your employees, your company, and your clients will all suffer.”
I'm a big supporter of (now virtual) conferences that the employee identifies as relevant to their job. I've sat through a fair share of ones that didn't pertain to my role at the time. If irrelevant, they are not only a waste of admission costs, but of a payday if the employee attended during business hours. (Employees shouldn't be required to attend a training or educational course if they are not being compensated for it.)
Training also shouldn’t be limited to the employee’s job description. If it supports their job performance it should be encouraged. Many organizations are implementing training efforts around Diversity & Inclusion initiatives, HR and workplace protocols, how to use softwares or programs that aid in productivity, improving management and team-building skills and more.
During this TED Talk, career consultant Greg Shirley explains how our growth doesn't stop and should be nurtured throughout our careers.
You’re Always On: Your Career Development Cycle | Greg Shirley | TEDxUTA
4. Be flexible
No one is perfect, so don’t expect your employees to be. Assuming expectations are set, there needs to be a level of flexibility that makes sense for your team, the business and you — as long as it’s within reason. Whether it’s being patient while they learn a new skill or topic, providing flexible working hours to attend to urgent or personal matters, or expanding their role to encompass new responsibilities, your employees will (hopefully) feel valued and will appreciate the gestures.
Expanding role responsibilities should be considered depending on their workload, experience and earned trust. If the team member is able to handle their daily tasks and has proven to be reliable and a good communicator, there shouldn’t be anything stopping them from taking on additional responsibilities if they seek out new opportunities. By volunteering to participate in new projects or lead initiatives they are showing you that they’re ready to establish relationships, up for new challenges, and likely feel comfortable and confident to make their next career move.
Photo by heylagostechie on Unsplash
5. Invest in yourself
When leading a team, managing your personal life or putting yourself first can feel like a chore. Airlines remind us that we must put on our own oxygen mask first before attempting to assist anyone else. That’s for a reason. It’s critical to put yourself first so you can support your team later on. Don’t forget to invest in your own education, training and professional development to build up your own portfolio and resumé.
You are an asset, too. If you are on the quest for your next career opportunity, rest assured. It’s healthy to outgrow a role and should be encouraged to explore new opportunities. Before venturing off to new horizons, maximize your current resources by asking yourself these five questions to identify new career opportunities.
Read more on the Rosie Report about progressing the future of work for everyone, by everyone.
|
https://medium.com/swlh/the-future-of-management-is-fostering-employee-growth-in-and-out-of-your-organization-2f55d4b5af9e
|
['Dominique Dajer']
|
2020-12-22 17:23:01.029000+00:00
|
['Careers', 'Leadership', 'Jobs', 'Creativity', 'Education']
|
React context API (Part 2) — Updating state through a Consumer
|
Hi and welcome back to part 2 of the React context API series. For part one, we talked about how to leverage React’s context API to pass state values to child components. Now, in part 2, we’re gonna look at how do we update the state through Consumers?
Photo by Émile Perron on Unsplash
The repo for this project can be found here.
If you want to learn more about what context is I highly suggest reading my first post. I explain a little more about why context is useful as well.
What you will make
Let’s take a look at what you’ll make. Check out the picture below.
Child Component
Does this look familiar? It’s the same Child component from part one! But wait… There’s a button? Why, yes. Yes there is. Let’s see what it does!
Toggled GrandChild Component
Awesome. We are now able to toggle the Grandchild component. We’ll see how to do that later. But wait… There’s another button. Why, yes. Yes there is. Can you guess what this button does?
Toggled Name
If you guessed that it toggled a name, you would be correct. It toggles the first name of each component between Bob and Mark. Guess what? This is all done through context! Let’s take a look how.
Tutorial
Create the context. First, Let’s create a Context.js file again that will create the Provider and Consumer that we need.
Context.js
Import createContext from react. This lets us create our Context object. Then, create the Provider and Consumer.
const { Provider, Consumer } = createContext()
Just a quick reminder. This step is important. Creating the Provider and Consumer is what allows us to subscribe to state changes outside of the Parent component that holds the state.
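For reference, the whole Context.js file can be as small as this; the named exports are one reasonable way (an assumption here, since the article only shows the destructuring line) for Parent.js and Child.js to import the Provider and Consumer:
// Context.js — create the context once and share its Provider and Consumer
import { createContext } from 'react'

const { Provider, Consumer } = createContext()

export { Provider, Consumer }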
The Parent. Okay, so we’ve created our context object. We’ve created the Provider and the Consumer. Let’s create the Parent component that will hold the state.
Parent.js
Okay, there’s a lot going on here. Let’s take a look at the state items first.
state = {
  toggleGrandChild: false,
  toggleName: false,
  people: [
    { id: 0, name: "Bob", age: 24 },
    { id: 1, name: "Jack", age: 22 },
    { id: 2, name: "Jill", age: 26 },
  ],
}
The first item is toggleGrandChild . This lets us know if the GrandChild component is toggled or not. The value starts off as false. Here's the function that changes the state.
toggleComponent = () => { this.setState({ toggleGrandChild: !this.state.toggleGrandChild, }) }
toggleComponent changes the state of toggleGrandChild to the opposite of its current value. So, if its current value is false and the toggleComponent is called, then the value will change to true. This function will serve to toggle the GrandChild component.
Next, we have the toggleName state value. You guessed it. It is used to toggle the name. toggleName also starts at the value of false. Let's look at its corresponding function.
switchNameHandler = newName => {
  this.setState({
    toggleName: !this.state.toggleName,
    people: [
      { id: 0, name: newName, age: 24 },
      { id: 1, name: "Jack", age: 22 },
      { id: 2, name: "Jill", age: 26 },
    ],
  })
}
A newName is being passed into the switchNameHandler . This handler handles changing the name of the first person. First, it toggles the toggleName value. Next, it inserts the new name into the name value of the first people object. Notice that the other names stay the same. This is to show that the other people objects will remain the same. Technically though, you are updating the objects with the same initial values.
Lastly, we have the people object. People holds the object of each person. Bob is the name that gets switched later on.
people: [ { id: 0, name: "Bob", age: 24 }, { id: 1, name: "Jack", age: 22 }, { id: 2, name: "Jill", age: 26 }, ]
The Provider. Let’s move into the Provider. Remember, the Provider will wrap components that contain consumers.
<Provider value={{ state: this.state, toggleComponent: this.toggleComponent, switchNameHandler: e => this.switchNameHandler(e) }}> <Child /> </Provider>
In this case, the component that is wrapped is the Child component. The Child component also contains the GrandChild component within it. More on that later.
Let’s look at the value within the Provider.
value={{ state: this.state, toggleComponent: this.toggleComponent, switchNameHandler: e => this.switchNameHandler(e) }}
This looks familiar. However, it’s a little different than part one of the tutorial. The original Provider value from part one looked like this.
value={this.state}
So why is this one different? We talked a little about it in part one but let’s look at it again. If we want to pass in a function to our context Consumers, we have to pass in variables that contain the functions needed. This is why we have items like switchNameHandler: e => this.switchNameHandler(e) . Let's look at the Child component to see how we utilize this.
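Before we do, here is roughly how the pieces of Parent.js above fit together (a sketch only — the original Parent.js gist isn’t reproduced here, and the import path for Context is an assumption):
import React, { Component } from "react"
import { Provider } from "./Context"
import Child from "./Child"
class Parent extends Component {
  state = {
    toggleGrandChild: false,
    toggleName: false,
    people: [
      { id: 0, name: "Bob", age: 24 },
      { id: 1, name: "Jack", age: 22 },
      { id: 2, name: "Jill", age: 26 },
    ],
  }
  // Flip the flag that controls whether GrandChild is rendered
  toggleComponent = () => {
    this.setState({ toggleGrandChild: !this.state.toggleGrandChild })
  }
  // Swap the first person's name and flip the toggleName flag
  switchNameHandler = newName => {
    this.setState({
      toggleName: !this.state.toggleName,
      people: [
        { id: 0, name: newName, age: 24 },
        { id: 1, name: "Jack", age: 22 },
        { id: 2, name: "Jill", age: 26 },
      ],
    })
  }
  render() {
    return (
      <Provider
        value={{
          state: this.state,
          toggleComponent: this.toggleComponent,
          switchNameHandler: e => this.switchNameHandler(e),
        }}
      >
        <Child />
      </Provider>
    )
  }
}
export default Parent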
The Child Component
Child.js
Notice, once again, the entire component is wrapped with the Consumer context object. Within the Consumer is {context => (...)} . This connects the items in the Consumer to the value= ... items from the Provider, because the Consumer has subscribed to the values in the Provider. Note: 'context' is an arbitrary name — it does NOT have to be 'context'.
Let’s look at the Consumer.
<Consumer> {context => ( <div> <h1>Child Component</h1> {context.state.people.map(person => { return ( <p key={person.id}> Hi, I am {person.name} and I am {person.age} years old. </p> ) })} <button onClick={() => context.toggleComponent()}> Toggle Component </button> {context.state.toggleGrandChild ? <GrandChild /> : null} </div> )} </Consumer>
First, we map through the people object that is contained in the state of the Parent component. How do we access people? Easy. We use context.state.people . Remember in the Parent component value we set state: this.state ? That's why we have access to context.state . If we wrote context.state.toggleGrandChild then we would have access to the toggleGrandChild state value.
Back to mapping through the people object. We map through the object just like part one and return Hi, I am {person.name} and I am {person.age} years old. But, now there is a button below it. Let's take a look.
<button onClick={() => context.toggleComponent()}>Toggle Component</button>
Can you guess what this does? It calls the function toggleComponent that is in the Parent component. Remember what this function does?
toggleComponent = () => { this.setState({ toggleGrandChild: !this.state.toggleGrandChild, }) }
It toggles the state value of toggleGrandChild . Let's look a little bit below the button.
context.state.toggleGrandChild ? <GrandChild /> : null
If this looks unfamiliar to you, don’t worry. It’s just a fancy shorthand (a ternary) for an if else statement. Conceptually, it could be written like this.
if (context.state.toggleGrandChild === true) { <GrandChild /> } else { null }
The ternary just saves some space and, unlike a standard if else statement, can be used directly inside JSX; an if else statement works fine too, as long as it runs outside the JSX and stores the result in a variable.
So, what is this part doing? It’s grabbing the value of toggleGrandChild from the Parent Component state. If the value is true, then it will render the GrandChild component. If the value is false, it will render nothing.
When you press the button, it will call the toggleComponent function in the Parent component. This is because you passed toggleComponent: this.toggleComponent in the value section of the Provider in the Parent component. What does that function do again? Oh, right. It changes the value of toggleGrandChild . Wow! That's amazing. You just changed the state from a Child component. Congrats, my friend.
GrandChild Component. Okay, we are on the home stretch. We’ve already learned how to update the state through our Child component. But, we’ve just toggled a value true or false. That’s not very fun or exciting. Let’s move on to updating a string value. First, we’ll look at the GrandChild component.
GrandChild.js
There’s a lot going on in this component as well. At first glance, it looks pretty similar to the Child component. It’s mapping through the people object and printing the same Hi, I am {person.name} and I am {person.age} years old. . And there's a button. But, the button seems to be doing something different? Let's take a look.
<button onClick={ context.state.toggleName ? e => context.switchNameHandler("Bob") : e => context.switchNameHandler("Mark") }> Toggle Name </button>
Now, instead of just calling a function to toggle true or false, we’re actually passing a value to a function. Let’s look back at the switchNameHandler that was passed as a value in the Provider of the Parent Component.
value={{ state: this.state, toggleComponent: this.toggleComponent, switchNameHandler: e => this.switchNameHandler(e) }}
So, we assigned the value switchNameHandler to e => this.switchNameHandler(e) . This lets us use context.switchNameHandler({value}) within a context Consumer. Currently, we're passing a name. Let's look at the switchNameHandler function again.
switchNameHandler = newName => { this.setState({ toggleName: !this.state.toggleName, people: [ { id: 0, name: newName, age: 24 }, { id: 1, name: "Jack", age: 22 }, { id: 2, name: "Jill", age: 26 }, ], }) }
First, we’re toggling the value of toggleName to either true or false. Next, we are assigning the id, name, and age to each people object again. But, this time, for the first people object we are passing in the value newName . This is the value that is passed from the function. Let's jump back to the GrandChild component where we call this function.
onClick={ context.state.toggleName ? e => context.switchNameHandler(“Bob”) : e => context.switchNameHandler(“Mark”) }
Remember, this is just a fancy if else statement. It could easily be written like this.
if (context.state.toggleName === true) { e => context.switchNameHandler("Bob") } else { e => context.switchNameHandler("Mark") }
This checks the value of toggleName from the state within the Parent Component. If it is true, then it will pass Bob to the switchNameHandler function. This then updates the name of the first people object to Bob . Then, it will update the toggleName to false. Notice, if you press the Toggle Component button, it will still toggle the GrandChild component. But, it does not change the state value for toggleName . So, the name will stay regardless of the GrandChild component being visible or not.
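Putting it together, the GrandChild component might look roughly like this (a sketch — the original GrandChild.js gist isn’t reproduced in this text, so treat the exact markup as an assumption):
import React from "react"
import { Consumer } from "./Context"
const GrandChild = () => (
  <Consumer>
    {context => (
      <div>
        <h1>GrandChild Component</h1>
        {context.state.people.map(person => (
          <p key={person.id}>
            Hi, I am {person.name} and I am {person.age} years old.
          </p>
        ))}
        {/* Toggle the first person's name between Bob and Mark */}
        <button
          onClick={
            context.state.toggleName
              ? e => context.switchNameHandler("Bob")
              : e => context.switchNameHandler("Mark")
          }
        >
          Toggle Name
        </button>
      </div>
    )}
  </Consumer>
)
export default GrandChild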
Conclusion
So, we’ve learned how to use React’s context API not only to read the state of a Parent component from a Child component, but also to update the state of the Parent from a Child. The Consumers within the Child components subscribe to the values being passed from the Provider within the Parent component. Now you can leverage context to read and update the state. Go make something great!
|
https://medium.com/javascript-in-plain-english/react-context-api-part-2-updating-state-through-a-consumer-7be723b54d7b
|
['Alex Valdez']
|
2019-09-20 19:03:33.041000+00:00
|
['JavaScript', 'Context Api', 'Web Development', 'React', 'Programming']
|
The Naughty Reason Nails Were Vanishing from the HMS Dolphin
|
The disappearance of iron nails
Captain Wallis frequently went ashore with his cartographers to get proper documentation of the island, as was part of his exploratory orders.
He also mingled with the locals and communicated with Tahitian leaders. Outside of the initial fighting, the people were surprisingly kind and generous. Many meals were shared together.
But several weeks in, he noticed iron supplies disappearing from the ship. Notably, most of his spare iron nails. They were critical for any mid-transit repairs. Even worse, his first mate pointed out several nails that’d been pried out of the main hull itself.
Wallis soon discovered the reason and was infuriated. Upon their arrival, his sailors had instantly been smitten by the beautiful Tahitian women. This in and of itself wasn’t a problem. But the women offered the sailors sex in exchange for iron. It was a coveted metal on the island.
Author via Pinterest
Additionally, in the 18th century Tahitian culture, before Christian missionaries pushed their morality on the islanders, women were free and empowered to make such offers with their bodies. So much so, that even Tahitian men were puzzled that visitors didn’t have such an open society (or women on their boat).
The sailors, known for excess, burned through all the ship’s available iron and even targeted the very nails that held the ship together.
Wallis was aghast at the limitless bounds of his men’s lust. Like many captains of his stature, he was an educated British man, cultured, reserved, and placing value on restraint. However, he knew he too bore blame for being so lenient with his men.
Wallis brought the men in and threatened lethal punishment on the next to touch another morsel of the ship’s iron.
He resolved they’d leave by week’s end and did so, returning home in one piece.
|
https://medium.com/publishous/how-foolish-lust-and-disappearing-nails-nearly-stranded-the-hms-dolphin-51867e0e9f03
|
['Sean Kernan']
|
2020-12-11 13:36:01.831000+00:00
|
['Health', 'Travel', 'Life', 'Life Lessons', 'History']
|
The Importance of Nurturing Doubt in an Age of Righteousness
|
The Importance of Nurturing Doubt in an Age of Righteousness
Fiction’s gift to us is the ability to live in the “land of ambiguity.”
Stories allow us to live the questions.
I often remark that we’re living in “the Age of Certainty,” although perhaps the better moniker is “the Age of Righteousness.” The two go hand in hand.
People shout their truths on social media with such shrillness that life can feel like an ongoing screed. Our streams are rife with taunts and ripostes, demands and disses, rebukes and rebuttals. Rarely does anyone “lower” themselves to ask a question, listen to a response, allow another to explain. And then it’s even rarer to shift one’s own position. Being right is more important than creating an environment for an exchange of thoughts. (It’s good to remember that scolding isn’t an effective rhetorical tool.)
As I read people’s comments online, it’s as if I’m navigating a land of walls and fortresses, with arrows darting from towers on different sides. It’s difficult to speak unless you want to draw your own bow, so, unfortunately, many stay silent.
It’s easy to blame social media for such a state, but I think there’s something going on in the world that’s beyond social media — or that social media only reveals: a mindset of righteousness that has infected the culture at large, no matter your political or religious persuasion. We feel threatened, so we’ve chosen sides. The Civil War has begun. We might not carry rifles (yet), but the bullets of Gettysburg have taken the form of tweets, memes, and scowling emojis.
The Salve of Doubt
The answer? I think we need to immerse ourselves in the healing powers of doubt. The kind of doubt that poses questions, sparks curiosity, invites scrutiny. The kind of doubt that entreats us to pause and listen, to surrender our egos, soften our stances, admit fallibility and weakness.
Certainty leads to arguments and wars. Doubt leads to exploration and dialogue. Certainty tends to close us. Doubt by definition opens us.
To say, “I’m not sure,” to hesitate, is to be weak in our culture.
America, however, in our bigness, our brashness, has never had much fondness for doubt. People who operate with doubt are often criticized as being indecisive — and indecision isn’t an admirable trait. Just consider the way leadership styles are esteemed. George W. Bush and Donald Trump both brandish their decisiveness as a badge of macho honor (Bush even proudly gave himself the nickname, “The Decider”), and people often view them as strong as a result. Barack Obama, however, was frequently pilloried for vacillating between options as he pondered decisions. To say, “I’m not sure,” to hesitate, is to be weak in our culture. Swift, sure, and strong decisions mark a good leader.
Bush and Trump each also prefer to make decisions from their guts, seemingly proud that their certainty doesn’t get muddled in the complications of the cognitive realms. On the other hand, Obama’s “wavering” was often due to the fact that he liked to probe his advisors’ thoughts, look at data, reflect on history — a process that took time because it didn’t come from the gut, but from consultation and consensus building (and, yes, the higher cognitive realms).
We’re a society that prefers “full speed ahead” over “I’ll mull it over.” In some ways, we’re still a nation of Wild West gunslingers. If you’re not fast on the draw, you’re dead. Americans cherish certainty, so we’ve defined good decision-making around quickness and surety, even if such a style does lead to long wars we should have never gotten involved in.
“The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people are so full of doubts,” said Bertrand Russell.
Our sound-bite world encourages us to live in the tiny boxes of our certainty.
It’s easy to make quick decisions when you’re an ideologue, a fanatic, or an authoritarian because you’re guided by righteousness. You don’t have or seek any questions. The wiser people, the doubters, speak with qualifications, with references, with a point-counterpoint style that doesn’t lend itself to pithy marketing phrases or the witty one-upmanship that is privileged in so much of our contemporary discourse.
The language of doubt also doesn’t translate effectively onto social media, for social media allows for little nuance. The text fields are just too small to afford a contour, a counterpoint, a tangent — too small to invite in the language of questions, of uncertainty. Our sound-bite world encourages us to live in the tiny boxes of our certainty. We follow our impulses, our gut instincts, like animals.
It takes courage and patience to operate with doubt because you’re essentially without arms, or you’re carrying a very different kind of arms at the very least.
Doubt and Creativity: the Benefits of Being Willing to Surrender
To nurture a mindset of doubt is to nurture a mindset of creativity — to love testing ideas, to revel in mystery and ambiguity, to seek answers in the shadows, to take comfort in the cloudy regions of thought. A mindset of doubt leads us beyond defensive postures because when evidence disproves or weakens our positions, we welcome that evidence and change our positions as a result. Doubt paves the roads of our search. It’s a “willingness to surrender,” as Walt Whitman calls it, which allows our thoughts, our dialogue, to move and shift.
“I like the scientific spirit — the holding off, the being sure but not too sure, the willingness to surrender ideas when the evidence is against them: this is ultimately fine — it always keeps the way beyond open — always gives life, thought, affection, the whole man, a chance to try over again after a mistake — after a wrong guess,” said Walt Whitman in his Camden Conversations.
The hero is the one who can live and even thrive in a teetering world
Doubt — and the questions it opens — is what has always drawn me to fiction, where a character’s uncertainty and quest guide the story. Instead of people putting on masks of invulnerability, as they tend to do in real life, characters revel in their vulnerability. Confusion mixes with needs and desires, causing characters to leap and lunge in good ways and bad ways in their search for satisfaction, comfort, and love. In a novel, the righteous, the know-it-alls, tend to get their comeuppance. The hero is the one who can live and sometimes even thrive in a teetering world, and it’s good for us to engage in such uncertainty.
“Fiction can allow us brief residence in the land of true ambiguity, where we really don’t know what the hell to think,” said George Saunders. “We can’t stay there very long. It’s not in our nature. You can be truly confused by something and then ten minutes later you’re grasping for your opinions like somebody going for a life jacket. But that brief exposure to the land of ambiguity is really, really good for us. To be genuinely confused about something for even a few seconds is good because it opens us up to the idea that what we know right now is not complete.”
Another word for the “land of ambiguity” is life. Imagine if people’s social media posts revealed their confusion, if we wore our uncertainty as a badge of honor.
Montaigne grounded his thought in doubt, even coining the word for his reflections “essais” — attempts. The main character of his essays is “Myself,” which he describes as “bashful, insolent; chaste, lustful; prating, silent; laborious, delicate; ingenious, heavy; melancholic, pleasant; lying, true; knowing, ignorant; liberal, covetous, and prodigal.” His self is full of such contradictory and competing traits because he lives in the “land of ambiguity,” a place that doesn’t allow for the dominance of any single trait that’s the right way to live or be.
Montaigne knew that humans are beasts full of contradictions, that logic contends with irrationality and virtue never truly wins over sin, so we have no business being righteous.
Doubt is the beginning of wisdom.
The poet Rainer Maria Rilke gave perhaps the best writing (and life) advice of all in his Letters to a Young Poet. “Be patient toward all that is unsolved in your heart and try to love the questions themselves, like locked rooms and like books that are now written in a very foreign tongue. Do not now seek the answers, which cannot be given you because you would not be able to live them. And the point is, to live everything. Live the questions now. Perhaps you will then gradually, without noticing it, live along some distant day into the answer.”
When you live the questions as a writer or reader, you’re plumbing your vulnerability, touching a deeper self, revealing all of the good and bad you’re capable of.
As the writer Chris Abani puts it: “The point is to dissolve oneself into the journey of the protagonist, to face the most terrifying thing in narrative, the thing that has been at its heart since the earliest campfire and story. To dare ourselves to imagine, to conjure and then face all of our darkness and all of our light simultaneously. To stand in that liminal moment when we have no solid ground beneath us, no clear firmament above, when the ambiguity of our nature reveals what we are capable of, on both sides.”
To conjure and then face all of our darkness and all of our light simultaneously.
Doubt is the beginning of wisdom. I wonder if we should spend an entire year of high school or college simply immersing ourselves in a curriculum focused only on doubt, exploring all aspects of it, celebrating it. Perhaps we should create a Church of Doubt and attend its services each Sunday morning as a way to prepare for the week ahead.
We’ve strayed so far from René Descartes’ method of skepticism, which formed the foundation of Western thought. Descartes put all beliefs, ideas, thoughts, and matter under a microscope of intense scrutiny in his search for what he could truly know. To be “woke” for Descartes was a matter of living the questions, not the certitudes.
Too much doubt can lead to a paralysis of action, to excesses of conspiracy theories, to distrust of the world, but if we all honored and revered our doubt as a strength, not a weakness, we’d certainly be less likely to build a moat around a political party, a religion, or a school of thought.
We’d also be less likely to build a moat around ourselves. Because doubt spawns tolerance. Doubt spawns acceptance. Doubt spawns enlightenment.
Grant Faulkner is the author of Pep Talks for Writers: 52 Insights and Actions to Boost Your Creative Mojo and the co-host of the podcast Write-minded. His essays on creative writing have appeared in The New York Times, Poets & Writers, Writer’s Digest, and The Writer.
For more, go to grantfaulkner.com, or follow him on Twitter at @grantfaulkner.
|
https://grantfaulkner.medium.com/the-importance-of-nurturing-doubt-in-an-age-of-righteousness-ff0e650e21a6
|
['Grant Faulkner']
|
2019-02-13 18:34:23.603000+00:00
|
['Creative Writing', 'Doubt', 'Reading', 'Creativity', 'Fiction']
|
A Detailed Web Scraping Walkthrough Using Python and Selenium
|
Section 3 — Running Main Script
With the same dependencies installed and imported earlier, we can proceed to build a script to automate the tedious web scraping process. I previously mentioned that there are various healthcare professional bodies on the MOH webpage, and they are actually represented with different abbreviations. For example, SDC for Singapore Dental Council, SMC for Singapore Medical Council etc. For the Pharmacists group, the abbreviation is SPC.
We prepare our WebDriver once again to access the page and target HTML frame, along with several WebDriver options to make sure it runs smoothly. In particular, we are introducing wait times and sleep times so that there can be appropriate pauses during the scraping process.
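The import block itself isn’t reproduced in this section, but based on the names used in the code below, it would look roughly like this (a sketch; the aliases are assumptions inferred from the code):
import os
import re
import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait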
(i) WebDriver Initiation
# Define healthcare professional (HCP) body
hcp_body = 'SPC'
# Set wait times
waittime = 20
sleeptime = 2
# Initiate web driver
try: # Close any existing WebDrivers
driver.close()
except Exception:
pass
# Access professional registration system (PRS) for specific
# healthcare professional body
home_page = f"https://prs.moh.gov.sg/prs/internet/profSearch/main.action?hpe={hcp_body}"
# Set webdriver options
options = webdriver.ChromeOptions()
options.add_argument('--no-sandbox')
options.add_argument('ignore-certificate-errors')
# Initiate webdriver
driver = webdriver.Chrome(options=options)
# Get driver to retrieve homepage
driver.get(home_page)
# Switch to frame which contains HTML for Search section
driver.switch_to.frame(driver.find_element_by_name('msg_main'))
# Click Search button (named btnSearch) to load all results
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.XPATH, "//input[@name='btnSearch']"))).click()
# Sleep a short while for page loading to be fully completed
time.sleep(sleeptime)
(ii) Setting up key functions
Because there is a series of actions to take in order to navigate through the different records across different search results pages, it is best to set up functions to better organize all these steps.
(1) Setup master list CSV to store all the records
We first generate a CSV file to store the records which we will be appending later. The column names are based on the earlier exploration of the web page, where we identified the various fields we can extract from each public record.
# Set file name (based on your own preference)
file_name = 'master_list.csv'
# Check if file already exists
if os.path.isfile(f'./{file_name}'):
print(f'Filename {file_name} already exists')
else:
# Set names of fields we want to extract
column_names = ['name','reg_number','reg_date','reg_end_date','reg_type','practice_status','cert_start_date','cert_end_date','qualification','practice_place_name','practice_place_address','practice_place_phone']
# Generate template dataframe
df_template = pd.DataFrame(columns = column_names)
# Generate csv file from the dataframe template
df_template.to_csv(f'{file_name}', header=True)
print('Created new master list file')
(2) Get current page number
We need a function to track which page of the Search results the WebDriver-controlled browser is currently on.
def get_current_page(): # Wait until element is located, then click it
current_page_elem = WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.XPATH, "//label[@class='pagination_selected_page']"))).text
# Get page number as integer
current_page_num = int(current_page_elem)
return current_page_num
(3) Get absolute last page number
It is important to know how many Search results pages we have in total, so that we can create a page loop accordingly. For example, at the time of this exercise, the last page number is 340 (i.e. there are 340 Search results pages)
def get_absolute_last_page(): # Find all elements with pagination class (since it contains
# page numbers)
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.XPATH, "//a[@class='pagination']")))
all_pages = driver.find_elements_by_xpath("//a[@class='pagination']")
# Get final element, which corresponds to 'Last' hyperlink
# (which will go to last page number)
last_elem = all_pages[-1].get_attribute('href')
# Keep only the number of last page
last_page_num = int(re.sub("[^0-9]", "", last_elem))
return last_page_num
(4) Extract data from detailed information page of a single record
While we are on the details page of a particular healthcare professional (after clicking the ‘View More Details’ link), we need to have a way to extract all the text from the respective fields available for public viewing. The information for each record is consolidated into a Python dictionary.
Do note that we are no longer using BeautifulSoup here (since BS was used just for demo purposes earlier). We are now solely depending on Selenium to do the parsing and text extraction for us.
def gen_hcp_dict():
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.XPATH, "//div[@class='table-head']")))
hcp_name = driver.find_element_by_xpath("//div[@class='table-head']").text
all_fields = driver.find_elements_by_xpath("//td[@class='no-border table-data']") # Using find elementS since there are multiple elements
hcp_data = []
# Extract test for every field within the table
for field in all_fields:
hcp_data.append(field.text)
hcp_dict = {}
# Assign extracted data into respective key-value pairs
hcp_dict['name'] = hcp_name
hcp_dict['reg_number'] = hcp_data[0]
# hcp_data[1] is just a blank space, so it can be ignored
hcp_dict['reg_date'] = hcp_data[2]
hcp_dict['reg_end_date'] = hcp_data[3]
hcp_dict['reg_type'] = hcp_data[4]
hcp_dict['practice_status'] = hcp_data[5]
hcp_dict['cert_start_date'] = hcp_data[6]
hcp_dict['cert_end_date'] = hcp_data[7]
hcp_dict['qualification'] = hcp_data[8]
hcp_dict['practice_place_name'] = hcp_data[9]
hcp_dict['practice_place_address'] = hcp_data[10]
hcp_dict['practice_place_phone'] = hcp_data[11]
return hcp_dict
(5) Get current pagination range
There is a pagination range that is available on each page of the Search results, and it is usually limited to around 10 digits. This means that you are only able to directly access a particular page from that specific range visible to you. Thus, we need a function to determine what is the current pagination range that is accessible. Below is the screenshot of the pagination range I am referring to:
def get_current_pagination_range():
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.XPATH, "//a[@class='pagination']")))
# Get all elements related to pagination
all_pages = driver.find_elements_by_xpath("//a[@class='pagination']")
# Implicit wait to allow page to load
driver.implicitly_wait(1)
# Find numbers of pagination range, and append to list
pagination_range_on_page = []
for elem in all_pages:
# Only extracting numeric values of pagination range
if elem.text.isnumeric():
pagination_range_on_page.append(int(elem.text))
driver.implicitly_wait(1)
else:
pass
driver.implicitly_wait(1)
return pagination_range_on_page
(6) Click last pagination number on current page
With the pagination range known, we need a function to click on the last number displayed in this range. This is for us to proceed further into the pages beyond the given range. For example, to reveal pages beyond page 10, we need to click the number 10 from the range (Page 1 2 3 4 5 6 7 8 9 10 ..)
def click_last_pagination_num(pagination_range):
last_pagination_num = pagination_range[-1]
# Introduce implicit wait for page to complete loading
driver.implicitly_wait(1)
# Click page number once element is located
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.LINK_TEXT, f'{last_pagination_num}'))).click()
driver.implicitly_wait(1)
(7) Click first pagination number on current page
Since there are hundreds of pages, it does not make sense to always start from page 1. This is because it will require an extremely tedious amount of effort to click on the last visible page number of each pagination range to reach the later half of the Search results.
The solution to this is that for page numbers greater than the halfway mark (e.g. pages beyond 100 if there is a total of 200 pages), our program will instead start from the last page, and go back in reverse to reach those latter pages. This strategy of starting from the last page to reach these latter pages is much more efficient compared to always starting from the first page.
To make that happen, we need a function to click the first number of the existing pagination range. For example, if our current pagination range is (Page .. 151 152 153 154 155 156 157 158 ..), and we want to go to page 147, the program will be clicking the page number 151 so as to reveal the preceding set of pagination range.
def click_first_pagination_num(pagination_range):
first_pagination_num = pagination_range[0]
# Introduce implicit wait for page to complete loading
driver.implicitly_wait(1)
# Click page number once element is located
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.LINK_TEXT, f'{first_pagination_num}'))).click()
driver.implicitly_wait(1)
(8) Locate and access target page
Let’s put the earlier functions into action. We now need to find a way to get our WebDriver to go to each page to scrape all the relevant information for each of the 10 professionals listed.
A massive pain comes from the fact that when you click the ‘Back to Search Results’ link while on the details page of a particular professional, it brings you all the way back to Page 1 of the Search results instead of the current page. This means that if you were looking at the details page of Mr Chan on Page 10 and you click ‘Back to Search Results’, you will be brought back to Page 1 instead of the page 10 you were on.
As such, we need a function that can repeatedly click the appropriate numbers of the pagination range to return to the target page we were scraping every time after the ‘Back to Search Results’ link is clicked. The following code sets up the automation to do just that, with the strategic use of conditional statements and loops.
def locate_target_page(target_page):
last_page_num = get_absolute_last_page()
# Find midway point of all search results page
midway_point = last_page_num/2
# If target page is in first half, start clicking from the start
if target_page < midway_point:
current_page_num = get_current_page()
if current_page_num == target_page:
pass
else:
pagination_range = get_current_pagination_range()
while target_page not in pagination_range:
driver.implicitly_wait(1)
# If target page is not in pagination range, keep
# clicking last pagination number to go down the list
click_last_pagination_num(pagination_range)
current_page_num = get_current_page()
pagination_range = get_current_pagination_range()
driver.implicitly_wait(1)
else:
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.LINK_TEXT, f"{target_page}"))).click()
# Once target page is in pagination range, go to the target page
# If target page is in the later half of the list, go to the Last page and
# move in reverse (this saves a lot of time)
else:
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.LINK_TEXT, 'Last'))).click()
time.sleep(sleeptime)
current_page_num = get_current_page()
if current_page_num == target_page:
pass
else:
pagination_range = get_current_pagination_range()
while target_page not in pagination_range:
driver.implicitly_wait(2)
click_first_pagination_num(pagination_range)
current_page_num = get_current_page()
pagination_range = get_current_pagination_range()
else:
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.LINK_TEXT, f"{target_page}"))).click()
# Once target page is in pagination page, go to the target page
(9) Create script that automates the navigation through portal
And now we setup the last function, which will bring everything together
def full_scrape(target_page):
last_page_num = get_absolute_last_page()
driver.implicitly_wait(1)
while target_page != last_page_num:
locate_target_page(target_page)
print('Starting with target page ' + str(target_page))
# Retrieve HTML from search page
target_page_html = driver.find_element_by_xpath("//body").get_attribute('outerHTML')
driver.implicitly_wait(1)
# Find list of all IDs on that page, and keep the unique IDs
all_ids = re.findall("P[0-9]{5}[A-Z]{1}", target_page_html)
id_list = list(dict.fromkeys(all_ids))
for index, hcp_id in enumerate(id_list): # Tracking the healthcare professional (HCP)'s ID
# Click 'View More Details' link to access details page
# for that professional with the specific ID
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.XPATH, f"//a[contains(@onclick,'{hcp_id}')]"))).click()
# Scrape data from details page into a dictionary
hcp_dict = gen_hcp_dict()
# Convert dict to pandas dataframe (Need to set an index
# since we are passing scalar values)
df_hcp_dict = pd.DataFrame(hcp_dict, index=[0])
# Append df row directly to existing master list csv
df_hcp_dict.to_csv(f'{file_name}', mode='a', header=False)
# Print row that was just scraped (To track progress)
print(f'Scraped row {index+1} of page {target_page}')
# Update (+1) target page after successfully scraping
# all 10 records on page (enumerate is zero-based, so the
# last record has index len(id_list)-1)
if index == len(id_list)-1:
print(f'Completed scraping for page {target_page}')
target_page += 1
print('Updated target page ' + str(target_page))
else:
pass
# Head back to homepage by clicking 'Back to Search
# Results' link
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.LINK_TEXT, 'Back to Search Results'))).click()
# Go to latest updated target page
locate_target_page(target_page)
# Define steps when program reaches last page (essentially
# similar to the steps above)
else:
locate_target_page(target_page)
print('Working on last page')
target_page_html = driver.find_element_by_xpath("//body").get_attribute('outerHTML')
driver.implicitly_wait(1)
all_ids = re.findall("P[0-9]{5}[A-Z]{1}", target_page_html)
id_list = list(dict.fromkeys(all_ids))
for index, hcp_id in enumerate(id_list):
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.XPATH, f"//a[contains(@onclick,'{hcp_id}')]"))).click()
hcp_dict = gen_hcp_dict()
df_hcp_dict = pd.DataFrame(hcp_dict, index=[0])
df_hcp_dict.to_csv('master_list.csv', mode='a', header=False)
print(f'Scraped row {index+1} of {target_page}')
if index == len(id_list)-1:
print(f'Completed scraping for page {target_page}')
print('Mission Complete')
else:
pass
WebDriverWait(driver, waittime).until(EC.presence_of_element_located((By.LINK_TEXT, 'Back to Search Results'))).click()
locate_target_page(target_page)
(iii) Kickstart automated web scraping
It’s been a relatively intense process so far, in terms of setting up all the functions correctly. This will require plenty of trial and error, so do prepare to spend some time and effort to get the automation right. Putting things in perspective, this short-term hassle still delivers far more efficiency than doing the scraping manually.
Now that we are done with the hardest part of the coding, it is time to execute all our functions and witness the fruits of our labor.
# Start with selected target page. target_page = 1 by default (to start from page 1)
target_page = 1
# Display time started
print(time.strftime("%H:%M:%S", time.localtime()))
# Run main web scraping function
full_scrape(target_page)
# Display time concluded
print(time.strftime("%H:%M:%S", time.localtime()))
NOTE: You will notice many wait times peppered across the script. This is because the scraping will fail if a particular page is not loaded completely. Therefore, these wait times provide some buffer time for page loading. More info on waits can be found here.
Despite that, the webpage loading may still fail from time to time. If that is the case, simply take note of which page was last scraped successfully (from the output screen in the above cell). You then update the target_page in the above chunk, and rerun the automated scraping (you may do so by re-running the code from the WebDriver initiation section).
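For example, if the output shows that page 146 was the last page fully scraped, resuming might look like this (the page number is purely illustrative):
# Resume from the page after the last successfully scraped one
target_page = 147
full_scrape(target_page)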
(iv) Data Cleaning of Master List
Once scraping is completed, we can now tidy up the master list CSV file where our data has been progressively appended to. This is important since there will likely be row duplicates if you encounter scraping failures along the way and need to re-run scraping from break points.
# Import master list csv file
df_master = pd.read_csv(f'{file_name}')
# Display master list columns
df_master.columns
# Drop leftmost index column (Unnecessary column)
df_master.drop(columns = [df_master.columns[0]], axis = 1, inplace = True)
# Remove duplicates
df_master.drop_duplicates(keep='first', inplace=True)
# Export final curated list
df_master.to_excel('master_list_final.xlsx', index = False)
(v) Conclusion
And that’s it! We have successfully performed web scraping on a website that is relatively difficult to navigate. The concepts for the other types of websites are also similar, so you are in good stead to try out web scraping for other kinds of projects. If you do encounter specific errors or problems, always refer to trusty resources like StackOverflow and Selenium documentation pages.
Please feel free to reach out to me on LinkedIn should you have any suggestions or queries.
Happy web scraping!
|
https://medium.com/swlh/web-scrapping-healthcare-professionals-information-1372385d639d
|
['Kenneth Leung']
|
2020-11-10 17:13:03.997000+00:00
|
['Selenium', 'Healthcare', 'Python', 'Webscrapping', 'Data']
|
Reasons I’m Thankful this Holiday Season
|
Gratitude
Reasons I’m Thankful this Holiday Season
And reasons why you should practice gratitude.
Photo by Guillaume de Germain on Unsplash
Thanksgiving is tomorrow, and I’ve been thinking more and more about the reasons I’m thankful this holiday season. It’s important to remember the things you’re thankful for every day of every year, of course, but I always think about it even more around Thanksgiving, and this year, with everything that’s been going on in the US and other countries, I’m thinking about it even more.
Why should we be thankful?
There are so many good reasons to practice gratitude. Here are just a few of them:
It helps you cope with stress better.
Helps improve your overall health and well-being.
Helps you develop stronger bonds and relationships with the people you care about.
Helps you feel more secure and self-assured.
Increases overall happiness and satisfaction.
Keeps you inspired and increases creativity.
So here are some of the things I’m thankful for — at least the ones I can think of right now:
First and foremost, I’m thankful I’ve been able to stay healthy this year. With coronavirus rates constantly changing, I’ve done everything I can to stay healthy. I’ve worn my mask out in public, and tried to social distance as much as possible.
I’m thankful for my creativity. My interest in creative pursuits, like writing and various crafts, has helped me keep my sanity this year. I started learning to crochet at the beginning of this year, and I’ve done pretty well with it. I plan to continue crocheting, and keep adding new stitches as my skills improve. I love writing because it allows me to express my opinion about different topics and also get my thoughts down on paper about different things going on in my life through journaling. My writing has also helped me make a little extra money this year, through the Medium Partner program and some of the freelance websites I find work on. My writing skills aren’t always the best, but I do my best to write as clearly as possible.
I’m thankful for my family. I haven’t gotten to see them this year because of the pandemic and also because of the busy schedules me and my husband keep with our work and responsibilities outside of work.
I’m thankful for my husband. He’s been my rock, one of my sources of mental and emotional support during this crazy time. We don’t always get to see one another because of his travel schedule, but even when he’s away we talk all the time. I’m also thankful he was able to find a new job quickly after his other company laid him off because of the coronavirus.
I’m thankful for my social media family. The people I connect with on social media have supported me as I work to accomplish my different goals.
I’m thankful I have some time off for Thanksgiving. We had planned to go up to Michigan to see Kevin’s family. We changed our plans at the last minute and decided to stay home because Michigan’s governor was tightening up restrictions again for the holidays. We wouldn’t be able to go to any restaurants, and Kevin’s mother isn’t planning to cook, so we decided to get our own turkey and stuffing, and some vegetables and a delicious dessert to go with it.
I’m thankful I’ve lost a few pounds this year. I’ve been struggling to lose weight the past few years. But I figured out that because I’m getting older, I can’t exercise as intensely as I used to be able to. Once I reduced the intensity of my workouts and worked on controlling my food intake, I started losing weight again. I’m down about 5 pounds so far.
Those are just a few of the reasons I can think of for why I’m thankful. I have so much more to be thankful for, but I can’t think of it all right now. I hope all my Medium family has a great, safe holiday.
What are you thankful for this holiday season?
|
https://medium.com/finding-myself/reasons-im-thankful-this-holiday-season-2d82075e859
|
['Erica Martin']
|
2020-11-25 21:54:19.473000+00:00
|
['Health', 'Relationships', 'Thankfulness', 'Self', 'Gratitude']
|
Why I admire Jean-Jacques Rousseau Despite his Many Major Faults
|
Rousseau wearing an Armenian costume (1766), painted by Allan Ramsay
Many intellectuals I have run into have had a rather pessimistic view of the famous Genevan philosopher Jean-Jacques Rousseau (1712–1778). From a history professor who saw him as a hypocrite to Jordan Peterson’s damning dismissals, Rousseau seems to be a philosopher to love to hate. There are certainly reasons for this. He abandoned his illegitimate children at foundling hospitals despite being the author of a popular child-rearing book. He wrote personal reflections which were way too personal (and held public readings of these, often embarrassing episodes). He had trouble maintaining friendships and saw conspiracies where they did not exist. His Social Contract has influenced some of the worst dictators of the past two centuries. Despite all of this, Rousseau was one of the most genuinely human philosophers of the Enlightenment. He understood its shortcomings better than most. He appeared to hold in contempt middle class materialism while praising the simplicity of his native Geneva.
Jean-Jacques Rousseau seemed to intuit the problems associated with the excessive materialism of the coming centuries. While he may have gone a little too far in the other direction (the Industrial Revolution made life infinitely better for the average person), the problems associated with excess caused problems all their own. As psychologist Jordan Peterson noted in one of his lectures, leisure is the precondition for existential angst.
“Richness of apparel may proclaim the man of fortune, and elegance the man of taste; but true health and manliness are known by different signs. It is under the homespun of the labourer, and not beneath the gilt and tinsel of the courtier, that we should look for strength and vigour of body.” -Jean-Jacques Rousseau, ‘Discourse on the Arts and Sciences’ (1750)
One can see a great divide in society — between blue collar values, grounded in practicality, and the superficiality of bourgeois values, grounded in the opinions of others. The most absurd of the latter can be seen in the popular dress of the eighteenth century, known as the ‘macaroni.’
‘The Macaroni’ (1773)
The macaroni has much in common with the Hollywood celebrities and fashion gurus of today — concern with outward appearance and keeping up with trends. The individual pictured in the above satire would not be out of place in the world of Instagram and Facebook.
Generally speaking, as far as the arts go, they tend to become quite effete in later stages of culture. One can see this in the statues from ancient Greece — going from the muscular heroes of old to the fragile androgyny consistent with decadence.
“Thus it is that luxury, profligacy and slavery, have been, in all ages, the scourge of the efforts of our pride to emerge from that happy state of ignorance, in which the wisdom of providence had placed us. That thick veil with which it has covered all its operations seems to be a sufficient proof that it never designed us for such fruitless researches. But is there, indeed, one lesson it has taught us, by which we have rightly profited, or which we have neglected with impunity? Let men learn for once that nature would have preserved them from science, as a mother snatches a dangerous weapon from the hands of her child. Let them know that all the secrets she hides are so many evils from which she protects them, and that the very difficulty they find in acquiring knowledge is not the least of her bounty towards them. Men are perverse; but they would have been far worse, if they had had the misfortune to be born learned.” -Jean-Jacques Rousseau, ‘Discourse on the Arts and Sciences’ (1750)
Rousseau seemed to have gotten the problem more or less right, though his solutions are often a different matter. His advocacy of simple living is probably the best solution one can find in his writings.
Rousseau wrote his Discourse on the Arts and Sciences for an essay contest (which he won) in 1750, arguing that the arts and sciences corrupt human morality rather than improve it. This is a quite problematic claim, as the arts are a reflection of society (and influence it) while the sciences are tools developed by human beings to understand the world around them. Jordan Peterson’s Maps of Meaning (1999) expands on this, situating the role of the sciences in a larger human-centered perspective.
“The world can be validly construed as a forum of action, as well as a place of things. We describe the world as a place of things, using the formal methods of science. The techniques of narrative, however — myth, literature and drama — portray the world as a forum for action. The two forms of representation have been unnecessarily set at odds, because we have not yet formed a clear picture of their respective domains. The domain of the former is the object world-what is, from the perspective of intersubjective perception. The domain of the latter is the world of value-what is and what should be, from the perspective of emotion and action.” — Jordan Peterson, ‘Maps of Meaning’
Rousseau gets quite a bit wrong by asserting that the sciences corrupt humanity because the sciences are tools. One needs morality because morality is that which offers a path forward (through elimination of non-useful information and use of values to determine motion). The abuse of ‘science’ by nineteenth-century intellectuals has more to do with the need for justification of certain prejudices (in many cases) than with the pursuit of knowledge. Jean-Jacques Rousseau did not construe the world in the same manner as Peterson. Indeed, Rousseau often flirts too much with what today would be called social constructivism. He fails to see societies as expressions of human nature across time, instead focusing on creating theoretical impositions on society through his works to counter the excesses of humankind. In short, Rousseau would have done far better as a writer of self-improvement than of politics.
Rousseau’s critique in the Discourse on the Arts and Sciences is a valid critique of the superficiality of middle class values. This is something the nineteenth century thinker Benjamin Disraeli would expand upon in his literary works. Indeed, Disraeli’s ‘One-Nation Conservatism’ was conceived of as a union between the working classes and the elites against the decadent bourgeois values.
Rousseau was also spot on in his rejection of excessive study of virtues rather than simply practicing them:
“Before that time the Romans were satisfied with the practice of virtue; they were undone when they began to study it.” -Jean-Jacques Rousseau, ‘Discourse on the Arts and Sciences’ (1750)
This seems to be an important antecedent to the Genetic Epistemology of Jean Piaget. By studying children playing tag, Piaget observed that children understood the game before they were able to clearly (or uniformly) articulate the rules of the game when asked. This informed his views on the development of human morality and suggested the necessity of the abstract being grounded in the concrete. The problem with so much modern philosophy (particularly of the postmodern variety) is that it does not do this. The emphasis is on increasingly obscure abstractions with no real grounding — essentially a symptom of decadence. Rousseau’s insights were way ahead of his time.
Jean-Jacques Rousseau is that critic of Enlightenment sorely needed but also terribly misguided in several of his key writings. His life and works exemplify that ‘crooked timber of humanity’ that Immanuel Kant spoke of, stating that out of such a crooked timber nothing straight could be made. Such is the human condition. Among critics of society, Rousseau ranks at or near the top. He was a genius. But, like many geniuses, not a man whom one might like to know, and not a man who was right about all or even most of his assertions. Rather, his works contain key insights necessary to keep the excesses of modernity in check.
|
https://medium.com/digital-republic-of-letters/why-i-admire-jean-jacques-rousseau-despite-his-many-major-faults-d0bf9fc87818
|
['Kevin Shau']
|
2019-09-24 16:54:01.734000+00:00
|
['Politics', 'Society', 'Philosophy', 'Enlightenment', 'Criticism']
|
Cloud Automation Using TerraForm
|
This article will help us understand how to spin up instances in AWS using the infrastructure-as-code tool Terraform. First, we have to know: what is Terraform?
Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision data-center infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. Terraform supports a number of cloud infrastructure providers such as Amazon Web Services, IBM Cloud (formerly Bluemix), Google Cloud Platform, DigitalOcean, Linode, Microsoft Azure, Oracle Cloud Infrastructure, OVH, Scaleway, VMware vSphere or Open Telekom Cloud, as well as OpenNebula and OpenStack.
HashiCorp also supports a Terraform Module Registry launched in 2017 during the HashiConf 2017 conference. In 2019 Terraform introduced a paid version called Terraform Enterprise for larger organizations. Terraform has four major commands: terraform init, terraform plan, terraform apply, and terraform destroy.
Terraform has a great set of features that make it worth adding to your tool belt, including:
Friendly custom syntax, but also has support for JSON.
Visibility into changes before they actually happen.
Built-in graphing feature to visualize the infrastructure.
Understands resource relationships. One example is failures are isolated to dependent resources while non-dependent resources still get created, updated, or destroyed.
An open-source project with a community of thousands of contributors who add features and updates.
The ability to break down the configuration into smaller chunks for better organization, re-use, and maintainability. The last part of this article goes into this feature in detail.
Problem Statement
Create a Security group that allows the port 80. Launch EC2 instance. In this EC2 instance use the existing key or provided key and security group which we have created in step 1. Launch one Volume using the EFS service and attach it in your VPC, then mount that volume into /var/www/html Developers have uploaded the code into GitHub repository. Copy the github repo code into /var/www/html Create an S3 bucket, and copy/deploy the image into the S3 bucket and change the permission to public readable. Create a Cloudfront using an S3 bucket(which contains images) and use the Cloudfront URL to update in code in /var/www/html.
Let's get started
So this is an updated version of my earlier AWS task. Here I have created the same setup as before, with one small difference: I have integrated EFS instead of EBS.
What is EFS?
Amazon Elastic File System provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations.
The major difference between the two is that an EBS volume can be attached to only a single instance at a time, whereas an EFS file system can be mounted by multiple Amazon instances at the same time.
Before moving on to the task, we need to know some basic Terraform commands:
terraform init - installs the required provider plugins
terraform validate - checks that the code is syntactically valid
terraform plan - creates an execution plan showing what will change
terraform apply - provisions the resources described in the code
terraform destroy - tears down all the managed resources in a single click
A typical run order is sketched below.
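As a quick illustration (run from the folder containing the .tf file), the usual sequence looks like this; the destroy step is only for when you want to clean everything up:
terraform init
terraform validate
terraform plan
terraform apply
terraform destroy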
Create a separate folder for the web page code, and in it create a Terraform file with the .tf extension. Then initialize the folder with terraform init so that Terraform can download the required plugins for that particular folder.
Before that, log in to your AWS profile using the CLI and fill in the necessary credentials.
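If you haven’t set up a named profile yet, it can be created with the AWS CLI roughly like this (the profile name matches the one used in the provider block below):
aws configure --profile satvi
# then enter your access key ID, secret access key, default region (ap-south-1) and output format when prompted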
Step1
All the necessary plugins belonging to the Terraform AWS provider will be downloaded, using the AWS profile specified below.
provider "aws" {
profile = "satvi"
region = "ap-south-1"
}
Since we are using it for the first time, we need to initialize the code using the following command:
terraform init
Step2
Create a Security Group for the instance so that clients can reach it from other devices. By default, AWS blocks inbound connections from outside the host with a firewall, so we need to open the required TCP ports. Here I’m allowing the SSH, HTTP, and NFS services on their respective ports 22, 80, and 2049.
# -- Creating Security Groups

resource "aws_security_group" "sg" {
  name        = "task2-sg"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-f6829f9e"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "task2-sg"
  }
}
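Note that although the walkthrough mentions NFS (port 2049), the block above only opens ports 22 and 80. Since this same security group is attached to the EFS mount target later, an extra ingress rule along these lines would likely be needed for the NFS mount to succeed (a sketch, following the same pattern as the rules above):
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = [ "0.0.0.0/0" ]
  }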
Step3
Launch an instance with the created key pair and security group. To connect to the instance, we need to specify the path of the private key and the public IP of the instance. We also define a "remote-exec" provisioner that starts working once the instance is launched and installs all the required packages.
# -- Creating EC2 instance

resource "aws_instance" "web_server" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"

  root_block_device {
    volume_type           = "gp2"
    delete_on_termination = true
  }

  key_name        = "key22"
  security_groups = [ "${aws_security_group.sg.name}" ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sathvikakolisetty/Downloads/key22.pem")
    host        = aws_instance.web_server.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "task2_os"
  }
}
Step4
Now we will create our EFS file system. For that we require a VPC; since we haven't specified one, the default VPC will be used. Once the file system is created we create a mount target, clone all the required data from GitHub, and mount the EFS volume onto the /var/www/html directory.
# -- Creating EFS volume
resource "aws_efs_file_system" "efs" {
  creation_token   = "efs"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = "true"

  tags = {
    Name = "Efs"
  }
}

# -- Mounting the EFS volume
resource "aws_efs_mount_target" "efs-mount" {
  depends_on = [
    aws_instance.web_server,
    aws_security_group.sg,
    aws_efs_file_system.efs,
  ]
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = aws_instance.web_server.subnet_id
  security_groups = ["${aws_security_group.sg.id}"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sathvikakolisetty/Downloads/key22.pem")
    host        = aws_instance.web_server.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mount ${aws_efs_file_system.efs.id}:/ /var/www/html",
      "sudo echo '${aws_efs_file_system.efs.id}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/satvikakolisetty/cloudtask2.git /var/www/html/",
    ]
  }
}
Step5
Now I will create an S3 bucket in the same region and upload my image to it.
# -- Creating S3 Bucket
resource "aws_s3_bucket" "mybucket" {
  bucket = "satvi112233"
  acl    = "public-read"
  region = "ap-south-1"

  tags = {
    Name = "satvi112233"
  }
}

# -- Uploading files in S3 bucket
resource "aws_s3_bucket_object" "file_upload" {
  depends_on = [
    aws_s3_bucket.mybucket,
  ]
  bucket = "satvi112233"
  key    = "hybrid.png"
  source = "C:/Users/sathvikakolisetty/Desktop/terraform/hybrid.png"
  acl    = "public-read"
}
Step6
In the last step we will create the CloudFront distribution, which collects the data from the S3 bucket and serves it to clients through the nearest edge location whenever someone hits my site.
resource "aws_cloudfront_distribution" "s3_distribution" {
depends_on = [
aws_efs_mount_target.efs-mount,
aws_s3_bucket_object.file_upload,
] origin {
domain_name = "${aws_s3_bucket.mybucket.bucket}.s3.amazonaws.com"
origin_id = "ak"
} enabled = true
is_ipv6_enabled = true
default_root_object = "index.html" restrictions {
geo_restriction {
restriction_type = "none"
}
} default_cache_behavior {
allowed_methods = ["HEAD", "GET"]
cached_methods = ["HEAD", "GET"]
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
default_ttl = 3600
max_ttl = 86400
min_ttl = 0
target_origin_id = "ak"
viewer_protocol_policy = "allow-all"
} price_class = "PriceClass_All" viewer_certificate {
cloudfront_default_certificate = true
}
}
Step7
Connect to the instance and deploy the image from the S3 bucket (through the CloudFront URL) into /var/www/html; the page then automatically opens in the Google Chrome browser.
# -- Updating cloudfront_url to main location
resource "null_resource" "nullremote3" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sathvikakolisetty/Downloads/key22.pem")
    host        = aws_instance.web_server.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su <<END",
      "echo \"<img src='http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.file_upload.key}' height='1000' width='250'>\" >> /var/www/html/index.html",
      "END",
    ]
  }
}
# -- Starting chrome for output
resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]

  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web_server.public_ip}/index.html"
  }
}
Now we are done with all the required steps. To create the setup, write the complete code and run the Terraform commands below, and the entire infrastructure will be ready.
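For reference, the full workflow with the commands listed earlier looks like this:
terraform init       # download the required provider plugins
terraform validate   # check the configuration for errors
terraform plan       # review the execution plan
terraform apply      # build the complete setup
terraform destroy    # tear everything down when you are finished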
|
https://satvikakolisetty.medium.com/cloud-automation-using-terraform-fa3fc049ecfa
|
['Satvika Kolisetty']
|
2020-07-24 12:18:03.427000+00:00
|
['AWS']
|
The Sacred Void in the Election
|
The Sacred Void in the Election
Where do we go once the dust settles down?
Photo by Jon Sailer on Unsplash
As I write this, the 2020 presidential election results are still trickling in. I walk through the living room and see my family watching intently as it unfolds.
“Why are you watching?” I inquire. “You can just wait a day or two and find out who won.”
“It’s the drama,” they respond. “The anticipation. It’s like watching sports, except the winner has lasting implications on our lives.”
I’ve never been one to watch sports, so I nod with acceptance, grab the bag of chips I came for, and make my way back upstairs. Throughout all the buildup to the election, I was constantly drawn back to a lecture I read during college. Religion and Secular Culture by Paul Tillich.
Though it has been years since I read it, it still stews in the back of my mind. Though it was delivered in 1946, his words still ring as true as they ever have.
|
https://medium.com/the-apeiron-blog/the-sacred-void-in-the-election-a5224339922c
|
['Nick Keehler']
|
2020-11-14 15:01:57.704000+00:00
|
['Philosophy', 'Religion', 'Election 2020', 'Society', 'Politics']
|
Dolly Parton’s “Together You and I” Video
|
We first heard this searing ray of sunshine back in May (or in 1974, when she sang it with Porter Wagoner), and it now has a touchingly cheesy video, in which Dolly Parton — Glinda the Good Witch — unites the entire planet. Dolly Parton for General Queen. (Her Better Day album is out now.)
[Via]
|
https://medium.com/the-hairpin/dolly-partons-together-you-and-i-video-f6f0c3b5bbd2
|
['Edith Zimmerman']
|
2016-06-01 11:45:09.053000+00:00
|
['Music', 'Dolly Parton', 'Music Videos']
|
Power Shift: The Battery Revolution Has Begun
|
He’s right. While the complex engineering that drives the grid in Australia (and every other country) may be impressive, it’s based on a concept that was already outdated — and being overtaken by technological changes — when Australia’s interconnected national grid was completed in 1998.
Disruption was coming from many angles: the growing use of renewable energy, like wind and solar, being connected to the grid, adding intermittency; the rapidly falling prices for solar panels, which triggered a boom in solar rooftop installations and made solar farms viable; and the voracious demand for laptop computers, which accelerated the development (and lowered the cost) of energy-dense batteries like lithium-ion, eventually creating a secondary market for large-scale storage. Meanwhile, computers and high-tech electronics in industry and in homes were changing the way energy was used, unbundling what had for decades been a neat and predictable pattern of usage.
Those effects were just beginning in 1998. Twenty years later, the disruption is so immense, electricity grids are having trouble coping.
Take Australia, for example. In 2018, the number of rooftop solar power generators on Australian homes passed two million, compared with just 20,000 a decade before. There are now, on average, six new household solar installations every minute in the country.
In total, 8,900 gigawatt-hours (GWh) of Australia’s power currently is generated from solar rooftops — more than the 8,000 GWh generated by the Liddell Power Station near Muswellbrook in regional New South Wales, once the biggest power station in Australia. Another 766 GWh of electrical energy was produced by large-scale solar farms (versus zero in 1998) and 12,668 GWh by wind turbines, compared with just eight two decades ago.
That’s a good thing, right? Yes — and no. Australia’s national grid was designed to cater for centralised power from large-scale hydroelectric, coal and gas-fired generators. It was created not just to ensure power was easily transferable from one side of the country to another, but also to drive prices lower. If there were too many plants producing power but not enough being used, prices would fall, encouraging the most expensive generators to drop off the grid. Conversely, spikes in demand did the reverse: pushing prices higher and encouraging more generators online, which eventually lowered prices across the grid.
This worked nicely when competition was between big coal, big gas and big hydro. But solar and wind have become such a low-cost way to produce electricity, and are now so widespread, that they have forced prices lower on the grid — and, paradoxically, higher.
Australia’s national electricity grid, consisting of 40,000 km of transmission lines and cables supplying 200 terawatt hours of electricity to five of Australia’s six states — around 80% of the country’s consumption. (Australian Energy Market Operator)
When solar and wind generators are in operation, they are so cheap that large-scale coal and gas cannot compete, so these dial back production or shut down. However, power from renewables is intermittent — it can rise and fall with little warning, such as when winds abate, or clouds diminish the intensity of sunlight falling on solar panels. Hence, if supply falls off suddenly, and demand stays the same, prices on the grid spike up to encourage more generation and avoid blackouts. That brings the big generators back to cash in.
Problem is, these coal and gas behemoths are inflexible; they take from several hours to a whole day to go from standstill to full power. Even when running hot, they cannot easily or economically vary output up or down fast enough to meet sudden peaks in demand, such as during a heatwave.
Because all generators on the Australian grid are paid for the power supplied in five-minute blocks (known as the spot price), the price of electricity sold on the national grid can vary wildly — on rare occasions, as high as A$14,000/MWh, and as low as minus A$1,000/MWh. But prices stabilise over the year. The average spot price in 2018 was around A$111/MWh in South Australia and A$100/ MWh in Victoria, for example.
Nevertheless, it’s obvious that energy storage is the missing link in this whole shebang. It would allow the power generated by any technology — solar, wind, coal, gas — to be amassed when demand is low, and discharged when demand rises.
And it’s not like energy storage isn’t used. Globally, there are 70 dams, with a generating capacity of at least 2,000 MW each, where water is stored then released gradually to drive turbines and generate electricity. But hydroelectricity, while renewable and flexible, is enormously costly (monetarily and environmentally) to build, and limited by geography and access to reliable sources of water. While those 70 dams have a total energy output of more than 1.2 million GWh, they required the flooding of more than 70,000 square kilometres of land to create them.
“There was a huge lack of imagination,” recalls Skyllas-Kazacos of her discussions with industry giants in the 1990s, when she was trying to commercialise the VRB patents she’d taken out for her employer, the University of New South Wales (UNSW). “People in the electricity sector didn’t seem to be aware of what technology was out there. But also, everyone was looking after their own interests, unfortunately. They weren’t looking at the big picture.”
|
https://medium.com/discourse/power-shift-the-battery-revolution-has-begun-bc3f750e8c89
|
['Wilson Da Silva']
|
2020-10-09 14:03:57.785000+00:00
|
['Energy', 'Technology', 'Climate Change', 'Science', 'Climate']
|
5 Fitness Tips to Improve the Quality of Your Nutrition
|
Photo Credit: Jeremy Colon
1. Plan and Prepare
The fitness champs ensure their balanced meals are pre-prepped to avoid getting hungry and reaching for an unhealthy meal. Do a big cook-up once or twice a week and carry healthy meals and snacks on your person to help avoid temptation.
2. Have Carbs Earlier in the Day
Have most of your high-energy carbs earlier in the day. Think your first three meals. Front loading your carbs provides energy for the day ahead and can offer clues as to how well your body responds to types and amounts of carbs.
3. Eat Regularly
Think every two to three hours. Eating regularly helps to manage your hunger hormones ghrelin and leptin, dousing the temptation to reach for that sugary chocolate bar.
4. Don’t Be Afraid of Hunger
Don’t be afraid of hunger; be afraid of undereating. Competitors are hungry all the time because they’re training hard and their metabolism is in overdrive. Hunger is a good sign — if you can eat in response and don’t feel deprived. Nourish your body and eat the calories that your body needs to function effectively.
5. Supplement Smart
Don’t go spending your hard-earned cash on a mountain of sups you don’t necessarily need. The pros supplement for their specific goals, current body composition, and deficiencies. Start with a high-quality multivitamin and fish oil tablet, and then work with your trainer to fill the gaps.
If reading this offered you value, please recommend it and share it with others! Also please connect with me on my website, Facebook page, and Instagram!
|
https://medium.com/gethealthy/5-fitness-tips-to-improve-the-quality-of-your-nutrition-db8024cc5662
|
['Jeremy Colon']
|
2017-05-02 19:34:42.250000+00:00
|
['Health', 'Nutrition', 'Metabolism', 'Fitness Tips', 'Weight Loss']
|
A quick overview of compression methods for Convolutional Neural Networks
|
Motives
A very deep neural network that is designed to solve hard problems might take a few seconds to run even on modern computers with hardware accelerators for linear algebra operations. Similarly, smaller neural networks that do not take that much time to run may still fail to meet real-time constraints. Hardware resource and execution time constraints are what drive the research community to investigate different methods of neural network compression. In the next few sections, common compression methods are presented.
Hyperparameter Search
In this specific problem, we do not only optimize for accuracy (or the metric of your choice), but also for efficiency. Typical techniques for hyperparameter search include Random Search, Grid Search, Bayesian Optimization, and Bandit/Tournament-based methods.
Hyperparameter Search Visualization
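As a rough illustration, a random search over a CNN's depth, width and input resolution under a parameter budget could be sketched as below (the search space, budget and the train_and_evaluate function are hypothetical placeholders, not tied to any specific paper):
import random

def random_search(train_and_evaluate, budget_params, n_trials=20):
    # train_and_evaluate(config) -> (accuracy, parameter_count) is assumed to be user-supplied
    best_config, best_acc = None, 0.0
    for _ in range(n_trials):
        config = {
            "depth": random.choice([10, 18, 34, 50]),      # number of layers
            "width": random.choice([16, 32, 64, 128]),     # filters per layer
            "resolution": random.choice([128, 160, 224]),  # input image size
        }
        acc, n_params = train_and_evaluate(config)
        # optimize for accuracy *and* efficiency: reject configurations over the budget
        if n_params <= budget_params and acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc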
For CNNs, one such method is EfficientNet (top of the leaderboards at Papers With Code). Given a constraint on memory and FLOPS, it scales your typical convolutional network along several dimensions: depth, width, and resolution (number of layers, number of filters per layer, and input size), using a compound scaling coefficient.
Efficient Net Scaling Dimensions
Convolutional Variants
Besides hyperparameter search methods, there are some hand-crafted computational schemes that rely on mathematical tricks to reduce the parameter count of each layer while retaining proportionally more of its computational capacity.
Group Convolution
Split the convolutional filters of each layer into equally sized groups and apply each group to an equally sized group of the input channels. Note that you can still end up with a more computationally demanding network if you choose group and filter sizes that are too large.
Group Convolution
It’s not in one’s benefit to keep the output groups aligned as-is; this is a topological constraint that has no reason to exist. Treat it like ShuffleNets do, and mix the output channel groups (see the sketch below).
Output Channel Group Shuffling
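A minimal PyTorch sketch of a grouped convolution followed by a ShuffleNet-style channel shuffle (the layer sizes here are arbitrary and chosen only for illustration):
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # mix channels across groups so that information can flow between them
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)  # split the channels into groups
    x = x.transpose(1, 2).contiguous()        # interleave the groups
    return x.view(n, c, h, w)

groups = 4
# each group of filters only sees 64 / 4 = 16 input channels, cutting parameters roughly by 4x
conv = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1, groups=groups)

x = torch.randn(1, 64, 32, 32)
out = channel_shuffle(conv(x), groups)
print(out.shape)  # torch.Size([1, 64, 32, 32])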
Network-In-Network
Hyperparameter search? Special convolution ‘variants’? Empirical network design? Actually, this technique sits right in the middle. Let’s focus on the Inception-v{1,2,3,4} model family. These replace the typical convolutional layer stack of classic convolutional networks like ResNets and VGG with layer groups that aim for greater knowledge capacity relative to their slight increase in parameters and computational cost.
Inception-v1
Naive Inception v1 Block
Inception v1 Block with less input channels
Inception v{2,3}
The larger convolutional kernels of size (5, 5) are replaced by a cheaper alternative: 2 layers with kernels of size (3, 3). Then, the layers which contain kernel sizes (n, n) with n > 1 are replaced by 2 layers, the first having a filter size of (n, 1) and the second a filter size of (1, n). In order to reduce the depth of the total network (you typically stack 10s of these layer groups together), the (1, n) and (n, 1) filters are computed in parallel.
Inception v2 Group
The main difference of Inception v3 is that it uses greater convolutional filter sizes (7, 7).
Inception v4
This revision of the Inception family investigated a lot of filter size and arrangement configurations in order to be more efficient. Special steps were taken to reduce computational overhead where less knowledge capacity is needed. In brief, downsizing layers were created in order to work better with residual networks.
Weight Sharing
We can keep a smaller number of real weights than the network nominally requires and project them into virtual weight matrices that are used to perform the forward pass. Then, on the backward pass, we just accumulate (or average) the gradient for each real weight based on the virtual ones. This is typically achieved with some sort of hash function (see the sketch below).
Weight Sharing Visualization
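A toy sketch of hash-based weight sharing in the spirit of HashedNets (the sizes and the choice of hash function are arbitrary):
import numpy as np
import zlib

rows, cols, k = 128, 64, 512        # the virtual matrix is 128x64, but only 512 real weights are stored
real_weights = np.random.randn(k)

def bucket(i, j):
    # deterministic hash of the (row, column) position into one of the k real weights
    return zlib.crc32(f"{i},{j}".encode()) % k

# build the virtual weight matrix used for the forward pass
virtual_w = np.array([[real_weights[bucket(i, j)] for j in range(cols)] for i in range(rows)])

# on the backward pass, the gradients of all virtual entries that share a bucket
# are accumulated into that single real weight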
Sparsity and Pruning
Not every neuron is created (or trained) equal. Some may contribute practically zero information to the total output of the network (due to very small weight values). These can be pruned away, leading to a sparse (lots of zeros) network. Then, special matrix storage formats (CSC, CSR) and sparse linear algebra subroutines can be used to accelerate computation and reduce storage space by a huge margin. Being able to load the whole network into on-chip SRAM is a huge advantage. There are lots of techniques for doing this.
Keep in mind that a high percentage of sparsity does not always correspond to full hardware utilization. You can get matrices with > 50% sparsity that are still, in practice, too dense to be multiplied effectively with special hardware. The easiest way to take advantage of pruning is to remove specific filters as a whole when possible.
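For instance, simple magnitude-based pruning of a single weight matrix can be sketched like this (the 90% sparsity target is an arbitrary choice):
import numpy as np

w = np.random.randn(256, 256)

sparsity = 0.9                                # fraction of weights to zero out
threshold = np.quantile(np.abs(w), sparsity)  # keep only the largest-magnitude 10%
mask = np.abs(w) >= threshold
w_pruned = w * mask

print(f"actual sparsity: {1 - mask.mean():.2%}")
# w_pruned can now be stored in a sparse format such as CSR/CSC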
Quantization
You don’t need 32 bits to represent weights. You might not even need 16! Google and nVidia already use reduced precision (16 bits) for their neural computers. Weights are typically small and need high precision close to zero and one, and you can use fewer bits to cover that range. Sometimes even 1 (XNORNet)!
IEEE 754
Google TPUs bfloat16
Quantizing less than 16 bits
It’s not efficient to stick with the typical sign-exponent-mantissa representation for very low bit widths (≤ 8 bits). The usual ‘trick’ employed is to first squash the weights with tanh(.) and rescale them to the [0, 1] range, then quantize (map to the nearest representable number), and then re-project back to [-1, 1]. In the 1-bit scenario you expect the values to be either -1 or 1.
Due to the non-differentiable nature of quantizers (thresholding operations), we need to somehow define the gradient on the backward pass. We just set the gradient to 1 inside the quantization interval and 0 otherwise (the Straight-Through Estimator). During training, we keep shadow/virtual full-precision weights that are the ones updated during backpropagation, while quantization is done on the fly as the network executes. Then, when training is complete and we have reached convergence, we can simply quantize and store the weights.
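A rough numpy sketch of the k-bit weight quantization described above, forward pass only (during training the Straight-Through Estimator would simply copy the gradient past the rounding step):
import numpy as np

def quantize_weights(w, bits):
    t = np.tanh(w)
    t = t / (2 * np.max(np.abs(t))) + 0.5  # project to the [0, 1] range
    levels = 2 ** bits - 1
    q = np.round(t * levels) / levels      # map to the nearest of 2^bits values
    return 2 * q - 1                       # re-project back to [-1, 1]

w = np.random.randn(5)
print(quantize_weights(w, bits=1))  # with 1 bit every weight becomes -1 or +1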
You can use the methodology of knowledge distillation to train networks with multiple bit modes jointly.
Learnable Quantization
You can also learn the quantizer parameters stochastically along with the network’s weights! Specific constraints/design considerations can be put in place in order to ensure a bit-operation-compatible output format. Modern CPUs (even without a signal processing or graphics accelerator module) support these operations in a vectorized manner, being able to process > 64 batches of data at once!
I won’t dive deeper in this matter, due to the fact that I am going to get way more technical than I originally intended to.
|
https://zarkopafilis.medium.com/a-quick-overview-of-compression-methods-for-convolutional-neural-networks-c651f88d6db5
|
['Theodoros Ntakouris']
|
2020-12-10 16:39:44.464000+00:00
|
['Machine Learning', 'Deep Learning', 'Convolutional Network', 'Efficiency', 'Computer Vision']
|
The Gun Lane We Need Doctors In, and Government Out
|
Much has been said recently of the NRA’s hyperbolic statement that doctors need to “stay in their lane.”
The freakoutery over this is thick, and palpable, and almost entirely misses the most important point on both sides, which is that we do in fact need doctors in the most important lane of gun deaths, those being suicides. To unravel it all, let’s first look at the article the NRA tweeted, then some of the backlash, then let’s look at the actual problem, where doctors can in fact actually help, and where our state governments are doing some of us a terrible disservice.
The Article
The tweet that triggered the left links to an article by the NRA’s Institute for Legislative Action. While I’m sometimes not a fan of the NRA, the opening passage of the article is pretty much on point, and in my estimation, a solid and legitimate criticism.
Everyone has hobbies. Some doctors’ collective hobby is opining on firearms policy. Half of the articles in the “Latest from Annals” email from the Annals of Internal Medicine journal are related to firearms. The most prominent of these articles is a position paper written by the American College of Physicians (ACP) that expands upon their 2014 paper and reflects every anti-gunner’s public policy wish list, save for the outsized role given to doctors. The ACP’s policy recommendations include a ban on semiautomatic firearms and “high” (read: standard) capacity magazines, licensing and permitting requirements, improved reporting to NICS, restrictions on concealed carry, and so on. None of the ACP’s policy recommendations focus on law enforcement or the importance of identifying, prosecuting, and incarcerating criminals. As Philip J. Cook notes in his commentary, “It is unfortunate that the public health community has not recognized the importance of policing gun violence as a key aspect of prevention.” Language matters, and the ACP “favors enactment of legislation to ban the manufacture, sale, transfer, and subsequent ownership for civilian use of semiautomatic firearms…” They refer to the targeted firearms as “assault weapons” only in parentheses, and the word “rifle” only appears once in the entire document: in the appendix, specifically in a section about 3-d printing a rifle receiver. Does the ACP support a handgun ban?
Put simply, “yes,” they do support a handgun ban, and goes back to a tremendous misunderstanding within the non-gun-owning public (including apparently a lot of doctors) about firearms themselves. “Semi-automatic” simply means the gun chambers another round after you shoot a round, and readies itself to fire another. It is a simple, straight forward, common feature. And this feature is tremendously popular, given that around 60% of firearm purchases are for stated “self-defense” reasons. A qualified analysis by Keith Shannon, which matches numbers I’ve drawn up myself, shows that approximately half of all firearm sales are semi-automatic, and around a third of all guns in private possession in the United States are semi-automatic. The lion’s share of these weapons are pistols, the kind you see every police officer in the country carry, and which have been available for purchase in one format or another for over 100 years. And there is no possible way, mathematically speaking, to magically evaporate these things. They’re here to stay.
Further, the dreaded AR-15 rifle, which draws so much ire from the public, is basically nothing more than a longer range, bulkier version of these pistols, with which it’s actually more difficult to murder someone due to its lack of concealability. This is why we see handguns used 19 times more often than rifles in US crimes. The AR-15 and similar rifles are far superior weapons in a medium range running gunfight against other armed opponents, but their distinguishing features don’t help you murder kids, or mug someone, or rob someone’s house, or even in fact defend your kids from a robber or a mugger. The folks buying them are preparing themselves for the rare but statistically realistic chance they may be stuck in one of those “medium range gunfights against other armed opponents” by no choice of their own.
Or they’re shooting hogs.
Much of the NRA piece’s remaining criticism is legitimate, although there are some things the NRA piece very improperly snows over:
The ACP wants to require gun purchasers to undergo an “educational program” before they can obtain a firearm and they support universal background checks, which would necessitate a licensing and permitting system as well as a registry of firearms and owners. This ignores the RAND Corporation’s finding that evidence on the impact of licensing and permitting regimes on firearm homicides and total homicides as well as total suicides and firearms suicides is inconclusive. The ACP is apparently only interested in pseudo-science “evidence” that supports their preferred anti-gun policies.
While the fears the NRA proffers about firearms registries may be legitimate for other reasons, the ACP’s recommendation of an “educational program” is solid, particularly when it comes to suicide, and if properly implemented could be divorced from the sorts of gun tracking that the NRA opposes for ideological reasons. We covered that here:
The NRA also missed a golden opportunity here, to remind the public of their own efforts to improve education related to suicide, which are recounted below. Nobody ever accused them of having good PR, I suppose.
The Response
The response from doctors has been tremendous, and highlighted everywhere, but this NY Times piece is pretty typical fare for the topic, and leans heavily on tropes we’ve previously identified in our gun series. Here’s a quick example. By my count, the Times piece drops six anecdotes about gun homicide victims on its way to this:
“Annals of Internal Medicine is not anti-gun; we are anti-bullet holes in people,” Dr. Laine said in a statement to The New York Times. “And if we are biased, the bias is toward counseling our patients to reduce their risk of firearm injury and toward evidence-based solutions to the public health crisis that firearm injury has become.” Many doctors shared a similar message to the N.R.A.: For physicians who treat gunshot victims, the topic of gun policy is absolutely “in their lane.” More than 35,000 people in the United States are killed in firearm-related deaths every year, according to an annual average compiled from C.D.C. data by Everytown for Gun Safety, a gun control group.
This is the classic Everytown bait and switch, regurgitated. Talk a hell of a lot about homicides, and then quote statistics dominated by suicide — a problem for which auto loading firearms is literally irrelevant. In fact, the dominant problem of gun deaths in this country, suicide, is not mentioned once in the Times article, nor is it mentioned almost anywhere else in the sea of angry responses to the NRA.
And of course, nowhere in the dialog do we see anyone, on the red or blue side, mentioning that homicides in the country are basically tied for historic lows.
Suicide is the Lane we Need Doctors In
As we unpacked before, suicides make up two thirds of gun deaths, and seven eighths of those suicides are men.
Having a gun in the home is not statistically correlated with overall female suicide rate, but is statistically correlated with overall male suicide rate. What that means, is that fewer firearms in the home don’t really affect the female numbers (they just find a different way to do the deed) but that men having suicidal thoughts are apparently hastier, and having guns out of the home could save some of their lives.
But as we discussed in the above link, some kind of government mandate to seize guns from suicidal men would make things worse, because it would disincentivize men from gaining treatment for their condition. The only legitimate solution to this issue, which I repeat is two thirds of the problem, is to educate male gun owners about the risks and encourage them to entrust their guns to a buddy when they’re going through a rough patch. That one thing could save six times more people annually than the entire total of domestic violence homicides, and there are only two paths to get there:
1) Overall gun education
2) Doctors must discuss firearm ownership, safety, and storage, with their patients
Many doctors are afraid to have the firearm discussion with their patients, and many of those fears stem from a 2011 Florida law (heavily pushed by the NRA) called the Firearm Owners’ Privacy Act, or FOPA. There was much wailing and gnashing of teeth over this law, but it didn’t do what many people said it did, and in the end it was thrown out by the 11th Circuit Court anyway. FOPA said doctors shouldn’t ask their patients about guns, and can’t enter their gun ownership status into medical databases, but it had a specific exception that allowed doctors to do both of those things if they believed it was relevant to the medical care or safety of the patient, or to the safety of others. If a patient was depressed, or suffered from anxiety, or PTSD, or any mental condition, or even had children, gun questions were fair game under the law. Doctors aren’t prevented from asking these questions. There was even a report in the Annals of Internal Medicine which clarified this.
And since the law is overturned, and has never existed elsewhere, there is almost literally no reason for doctors to avoid having these discussions with their patients.
The NRA are Doing Good Work Here, if Limited in Scope
I personally find the NRA’s nationwide work in this space to be very lacking, and I think they could do a lot more with outreach to their own membership base to assuage the suicide problem, which by God I’ll say again is two thirds of the problem. But at a state level, they’ve done some very interesting work which could form a basis for nationwide policy changes, or at least state by state reform.
In the state of Washington, 80% of gun deaths are suicides, higher than the national average. In 2016, the NRA supported a state bill called the Suicide Awareness and Prevention Education for Safer Homes Act. From a Washington Times article on the bill:
The bill requires suicide awareness training for gun dealers and owners of shooting ranges. It would add suicide prevention messages to hunter safety courses, promote education about safe firearm storage, and prompt pharmacists to talk to customers about safe storage of prescription drugs. “We may not agree on gun control, but we all want to prevent suicide,” said Stuber, one of the speakers at the Zero Suicide Inland Northwest conference Friday at Gonzaga University. “One bullet fired, and I became a single parent and a widow.”
The bill was a collaboration between the NRA and anti-gun groups. It passed, with bipartisan support. Suicide rate statistics tend to lag, so the most current information I’ve been able to find for Washington State, or any other state, is for 2016. It will be interesting to track how Washington State fares under this new bill, compared to the relative rise of suicide across the rest of the country.
The Government is Not Helping
Some state governments interfere, or outright prevent, this life saving solution.
In California, you literally can’t give your gun to a buddy if you’re suicidal. Calling a friend and saying “Hey, I’m having a rough patch, can you hold on to my pistol for me?” is literally a crime. The conversation instead must take the following form. “Hey, I’m having a rough patch, can we go down to the local gun store or otherwise licensed federal firearm dealer, have you undergo an extensive background check, and then wait ten days hoping I don’t kill myself in the interim when you can finally hold on to my pistol for me?” And then there’s the issue that if the gun owner having suicidal thoughts seeks inpatient treatment, he or she is prohibited by law from getting their firearm back in perpetuity.
So instead, the gun owner with the suicidal thoughts keeps the gun in the house. Also potentially foregoes seeking treatment from a doctor.
This is terrible law. Somewhere around 1,500 people die per year in California from firearm suicide. That’s more than all domestic violence deaths nationwide. And there are many other states with laws that take a similar form.
If we stuck with a numbers-driven approach to firearm policy, instead of policy based on emotion, it would help us tremendously in the process of actually saving lives.
|
https://medium.com/handwaving-freakoutery/the-gun-lane-we-need-doctors-in-and-government-out-57b8a3547e0b
|
['Bj Campbell']
|
2019-08-30 18:04:25.508000+00:00
|
['Health', 'NRA', 'Politics', 'Guns', 'Mens Health']
|
How I Made Myself a More Valuable Programmer in 6 Months (and How You Can Too)
|
How I Made Myself a More Valuable Programmer in 6 Months (and How You Can Too)
Set goals, manage your time, learn quickly, and apply immediately
Photo by 30daysreplay Marketingberatung on Unsplash
I came into the Salesforce development industry knowing nothing, absolutely nothing, about Salesforce: how it was used in business nor how to even develop on it. I was as clueless as a newborn baby seeing the light for the first time — I suppose most of you understand this feeling when tackling a new programming language or framework.
That being said, I was very uncomfortable starting and was afraid that I wouldn’t fare well in this uncharted territory of cloud development.
Boy, was I wrong.
Come my first week of training and I fell in love with this technology almost immediately. I was baffled and bummed that I wasn’t able to cross paths with this powerful tech sooner.
I started setting long-term goals early on. I spent time outside of work relentlessly trying to learn and understand everything I possibly could about the platform, and I started teaching, mentoring, and inspiring everyone around me with my quick accomplishments and gained knowledge.
I found my niche. I found an opportunity where I had the potential to provide an extraordinary amount of value to businesses, my peers, and to myself as the Salesforce development space supplements the perfect mixture of both business and programmatic skills — a standard that I could easily get behind as I recognized myself as not only a good communicator (thanks to years of athletics), but also a passionate tech geek.
If you’re not aware of the Salesforce ecosystem, you should know that this is a space where certifications actually matter. Certifications are always relevant in Salesforce, and depending on what certifications you get, they are a very clear marker of competency, drawing eyes and value to yourself as an asset in this ecosystem. This is because the exams are hard. Very hard. And to pass one of these exams is to show that you know what you are doing and that you can at least convey to the public eye that you are relevant.
I am telling this story of certifications because in my first week of the nine-week training program I was put in, I was told that I’d only be receiving two certifications at that time. I said I wanted four, and they told me that it was unheard of and that it couldn’t be done, especially for someone so junior and new to the technology itself and the tech industry as a whole. Even seasoned Salesforce professionals would struggle to do this.
My reaction? I’ll prove them wrong.
Fast forward 9-weeks later. I set a company record for achieving four certifications in that period of time, which almost immediately led me to achieve a company-wide award, speak on the company podcast, be a designated mentor for new hires, have the opportunity to speak in front of an audience during a company-wide zoom call, and get contracted immediately out to a fast-growing startup consulting company where I was given the opportunity to stretch my skills on a daily basis. Keep in mind that this company has hundreds of employees and was voted to be one of 2020’s top 25 companies to work at for new grads.
Fast forward 6-months after I first got hired. I achieved two top-tier certifications that only people with upwards of three to five years of experience are recommended to take — and still people with that amount of experience fail and struggle to get those achievements.
Now I’m on the fast track to potentially become one of the youngest ever to be deemed as a technical architect (not quite there yet, but will be), a position that only a handful of people in the world are worthy to call themselves. What’s baffling to me is that there are almost fewer people with that certification than there are certified astronauts in the world (astronauts are still more extraordinary though — this number will surely rise over time).
All of this, and I have become one of the most valued and decorated programmers in both my parent company and the consulting company that I’m currently positioned at. Thing is, I still have so much more to grow and a long road ahead of me.
So this poses the question: How can you do the same? How can you become a valued programmer not only in your own eyes but also in the eyes of your company and the people you work with? How can you drastically accelerate your learning to the point where you can’t be ignored and become respected as a skilled and competent programmer? Someone that everyone can depend on and trust?
|
https://medium.com/better-programming/how-i-made-myself-a-more-valuable-programmer-in-6-months-and-how-you-can-too-97f3323f9035
|
['Zachary Minott']
|
2020-10-09 03:26:30.090000+00:00
|
['Success', 'Software Development', 'Startup', 'Learning To Code', 'Programming']
|
How to Make Currying more Readable
|
Currying is a functional programming technique for transforming a function with n parameters into a series of n functions, each expecting one argument.
I have been exploring for some time the options for enabling currying on functions in JavaScript and how to make it easier to read. Next, we will look at what I found, and you can give your thoughts on which option looks better.
Let's look at how currying can be implemented and used. Consider the following scenario, where we filter a list of numbers and select only the numbers that are divisible by 2.
const numbers = [1, 2, 3, 4, 5, 6];

function isDivisibleBy(n, divisor){
  return n % divisor === 0;
}

const newNumbers = numbers.filter(n => isDivisibleBy(n, 2));
console.log(newNumbers);
//[2,4,6]
The isDivisibleBy function takes a number n and a divisor and returns true or false. When the remainder of n divided by divisor is zero, then n is divisible by divisor .
Next, we are going to write isDivisibleBy as a manually curried function and use it.
function isDivisibleBy(divisor){
  return function(n){
    return n % divisor === 0;
  }
}

const newNumbers = numbers.filter(isDivisibleBy(2));
In this example, isDivisibleBy takes the divisor and returns a function taking the number n and returning the end result. So isDivisibleBy is a function that returns another function that returns the result.
Even if the new curried version of the function is maybe a little harder to read, it can be invoked nicely with one argument, the divisor, to create the callback for the filter method: numbers.filter(isDivisibleBy(2)) . Here isDivisibleBy(2) returns a function expecting a number.
Note that we need to change the order in which arguments are applied. The curried version of the isDivisibleBy first accepts the divisor and then the number n .
Let’s now analyze some other options for writing isDivisibleBy as a curried function. One other option is to use the arrow syntax.
const isDivisibleBy = divisor => n => n % divisor === 0;
In this case, we have removed some of the parentheses, curly braces, and returns but is it clearer what isDivisibleBy does? Every time we see => a new function is created.
Maybe a better option is to list all parameters on the first line using arrow functions and then use return inside the curly braces. Here is an example:
//original function
const isDivisibleBy = (n, divisor) => {
  return n % divisor === 0;
}

//curried version
const isDivisibleBy = divisor => n => {
  return n % divisor === 0;
}
Again notice the reverse order of parameters in the curried version. So divisor is first, number n comes second.
Another option is to use the curry decorator from a functional library like lodash/fp. With the auto-curry decorator, we can simply write a function and the decorator will enable currying for it.
import { curry } from 'lodash/fp';

const isDivisibleBy = curry(function(divisor, n){
  return n % divisor === 0;
});
Below is the same function that is rewritten with the curry decorator and the arrow syntax.
import { curry } from 'lodash/fp';

const isDivisibleBy = curry((divisor, n) => n % divisor === 0);
Here is the same arrow function rewritten with the curly braces.
import { curry } from 'lodash/fp';

const isDivisibleBy = curry((divisor, n) => {
  return n % divisor === 0;
});
The missing function decorator
Next, see how I think it would be nicer to enable currying on a function.
I would like to import the curry decorator from a functional library and decorate the function with it. This is not possible at the moment.
import { curry } from 'lodash/fp';

@curry
function isDivisibleBy(divisor, n){
  return n % divisor === 0;
}
Again, we need to write the parameters in the reverse order compared to the original function. This can be solved with another decorator, curryRight . The curryRight decorator applies currying starting from the last parameter, so when applying parameters from right to left we don’t need to change the original function.
import { curryRight } from 'lodash/fp';

@curryRight
function isDivisibleBy(n, divisor){
  return n % divisor === 0;
}
Final Thoughts
There are different ways of making a curried function, from using functions returning functions, with the function keyword or the arrow syntax to the currying helper utilities from functional libraries.
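If you do not want to pull in a library, a small generic curry helper can also be written by hand (a simple sketch, far less feature-complete than the lodash version):
function curry(fn) {
  return function curried(...args) {
    return args.length >= fn.length
      ? fn(...args)
      : (...rest) => curried(...args, ...rest);
  };
}

const isDivisibleBy = curry((divisor, n) => n % divisor === 0);
console.log([1, 2, 3, 4, 5, 6].filter(isDivisibleBy(2))); // [2, 4, 6]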
I would prefer to enable currying with a function decorator like @curry , but since that is not available, I will stay with manually curried functions.
If you know better ways of enabling currying on an existing function I will be glad to hear your feedback. Thanks for reading and giving your feedback.
|
https://medium.com/programming-essentials/how-to-make-currying-more-readable-15f28ec37d3e
|
['Cristian Salcescu']
|
2020-07-16 05:19:23.480000+00:00
|
['Development', 'Programming', 'Functional Programming', 'Front End Development', 'JavaScript']
|
What’s Next For Making of a Millionaire In 2021
|
What’s Next For Making of a Millionaire In 2021
What we have planned for readers next year
2020 has been an incredible year for this publication. We’ve gone from a small group of writers publishing occasional pieces to one of the largest publications in the personal finance space on Medium.
In the past 30 days Making of a Millionaire has had over 250,000 page views and nearly 6,000 unique visitors per day. I always knew this publication had that kind of potential, but I’m floored about how quickly it’s happened.
Thanks
So, first and foremost, I want to thank all of our readers for your support. This growth is literally impossible without thoughtful people reading and sharing our work.
Second, I want to thank everyone who has submitted a story for Making of a Millionaire in 2020.
There are literally too many writers to properly thank, but I do want to acknowledge a few writers who have written articles that went absolutely bonkers and have helped move the needle on the growth of this publication.
I will cut myself off here because, as I said, there are so many writers who have contributed to the success of this publication in 2020 that I can’t possibly name them all here, but know that you are appreciated!
Looking ahead to 2021
Let’s grow this thing
In 2021, I want to take Making of a Millionaire from a successful publication to one of the biggest publications on Medium in any genre.
That’s a big, ambitious, and frankly greedy goal.
But if the growth curve of 2021 looks like it did in 2020, I think we can do that. I don’t see any reason why we can’t push 2 million page views per month by the end of this year and have more than 50,000 people visit Making of a Millionaire every day.
The biggest constraint will be time. I literally have over 100 ideas for articles I want to write sitting in my drafts. I hope to hammer some of those out during the holidays, and you can expect some of my best writing coming this winter.
I am ready to f*cking bring it in 2021
If you haven’t already, I would recommend checking out my top 10 articles this year and expect a lot more articles like these in the future.
Expanding beyond written articles
I spent much of 2020 building the infrastructure to one day expand Making of a Millionaire from a blog to a digital media company focusing on personal finance and money.
Community
If you haven’t already, I encourage you to join the “Millionaires in the Making” private Facebook group here.
I know not everyone is on Facebook, so I am looking into independent platforms to host the community outside of Facebook, so stay tuned.
If you have not already, you may want to consider joining the 30-day money challenge. Each day you get a new task related to your finances. If you are feeling “stuck” with money, this is a good place to start getting unstuck.
Education
In 2020, I focused on creating a ton of educational content. These can be found generally on two platforms.
These are video-based courses that come with financial spreadsheets that crunch the numbers on various financial problems you might want solved, such as “exactly how much should I be paying against each of my loans every month?”
The financial mentor program
My top focus from a business perspective in 2021 will be on the recently launched “financial mentor program.”
When developing this, my thinking was how can I pack in as much value as humanly possible and give it to people at a price low enough that makes sense from a business perspective (after all, businesses do need to make money.)
That’s how the financial mentor program was born; here's how it works.
You sign up, pay $5 per month and get the following;
A new personal finance course every month.
Financial software to crunch the numbers for you.
Free E-books.
Early access to our best articles and videos.
Access to private webinars and live streams.
Also, one of my ambitious goals for 2021 will be to publish a book. If and when I do that, anyone enrolled in the financial mentor program will get a free copy.
Basically, being enrolled in the program means getting free access to any course, book, product or piece of content I create going forward.
Click here to sign up for the financial mentor program.
YouTube
Every Wednesday, I publish a new video on YouTube, reviewing many of the topics I discuss in my writing.
Occasionally, my wife comes on, and we talk about how we tackle certain financial issues together.
I see YouTube as having a lot of potential to grow Making of a Millionaire in new and interesting directions.
You can subscribe here
Podcast?
This is something I have been toying with for a long time. If I am being honest, this feels like a 2022 goal.
Am I going to keep creating courses, publishing 3–4 articles a week, making a new YouTube video, writing a book, and launching a Podcast in 2021? Possibly, but I doubt it.
If you REALLY want a Making of a Millionaire Podcast, let me know in the comments!
Please be vocal
Making of a Millionaire will ultimately be shaped by our readers.
Are there particular topics you think I should be writing about?
Are there some areas where a course would be extra helpful?
Do you wish the “community” aspect was different?
If you ever have feedback or comments, or suggestions, you can send them to me at [email protected]
Anyway, thanks to everyone for reading in 2020. I hope you enjoy some time to relax and reflect at the end of 2020, and I can’t wait to share our work with you in 2021.
|
https://medium.com/makingofamillionaire/whats-next-for-making-of-a-millionaire-in-2021-c07e583825e
|
['Ben Le Fort']
|
2020-12-09 20:23:17.650000+00:00
|
['Personal Finance', 'Entrepreneurship', 'Business', 'Money', '2021']
|
A Pinterest Engineering guide to technical interviews
|
By Shayda Rahgozar (Recruiter), Nishant Roy (Engineer) and Indy Prentice (Engineer)
Technical interviews are the key component to landing an engineering role. Everything from preparation to asking the right questions is important for a successful interview. In this post, Pinterest employees from Recruiting and Engineering share tips and tricks for acing your interview from start to finish.
Preparing for your interview
Shayda Rahgozar, Recruiting
Practice: The interview itself shouldn’t be the first time you’re hearing and thinking about interview questions. Practice everything from potential questions to concepts ahead of the real deal. Brush up on core CS, general software engineering skills and large scale system design. Companies are constantly refreshing their interview question banks, but you can use sites like Leetcode and Interviewing.io to find great practice questions. Don’t forget to also check out resources like Pinterest open source projects on GitHub and opensource.pinterest.com.
At Pinterest, you can generally code in whatever language you prefer during your interview, though be prepared to be flexible as there may be cases where an interviewer asks you to use a specific language based on team and role. Use this to your advantage when you’re completing practice questions, and take some time to get familiar with the details of the language.
Research: Show interest and passion about the company. If you’re not a regular user of the product already, spend some time getting to know what you could be working on. Share any product pain points or areas of improvement in the interview as well as how you could be involved in helping the company succeed.
Reference the company’s own channels to get a feel for the latest trends, launches and technologies. For example, check out this blog as well as recent news and our Labs initiative for more on the Pinterest Engineering team.
This is your chance to not only showcase your problem solving and coding skills, but also show the interviewer you’re someone with whom they’d like to work.
Working Through the Problem
Nishant Roy, Engineer
Communication: One of the most important aspects of the technical interview is your ability to communicate with your interviewer. Don’t be in a rush to jump straight into writing code. Rather, take some time to think about the problem and share your thoughts out loud. If the question calls for some data structure or system design, the interviewer probably expects a high-level discussion before you actually get started. It’s also a good idea to ask clarifying questions to make sure your understanding of the problem aligns with that of the interviewer. For example, running through a few test cases or drawing some figures to illustrate the problem will help you understand exactly what’s expected, and it will also highlight your ability to communicate and plan out your work.
Attention to detail is essential for a programmer, and your interviewer will evaluate this skill during a coding interview. Usually, an interviewer will not give you all the details for a question up front, such as certain constraints, whether time or space is more important, input format or validity, system scale, etc. Once you’re confident you understand the problem, discussing the details will show the interviewer you’re thinking about how to write a solution that is production-ready. It will also help you understand the problem better and come up with a stronger solution.
Optimizing your solution: Once you’re confident you understand the problem, discuss the details of the solution before writing code. Start with an inefficient solution and think out loud so your interviewer can follow your thought process. Explain why the design is inefficient by discussing the space and time complexity, and try to identify any bottlenecks. Furthermore, think about how much room there is to optimize given the constraints of the problem. This way, you’re showing the interviewer you’re able to come up with a naive solution to the problem, identify its weaknesses, and find ways to improve it.
There are a couple ways to identify optimizations:
Try simplifying the problem, seeing how to optimize the basic version, and applying that to the problem at hand. Look over your design for repeated, superfluous work, and think about how to minimize it.
Think about what sort of data structures are best designed for the problem and try to integrate them into your solution. For example, maps are optimized for lookups, heaps are optimized for sorting, etc.
Once again, communication is key! As long as you’re thinking out loud and asking clarifying or leading questions, it’s easier for the interviewer to understand your solution and even to help you. Once you and the interviewer are satisfied with the solution you’ve designed, it’s finally time to start writing code.
Time Management: One of the biggest challenges with technical interviews is time management. While it’s always good to spend time designing the perfect solution, make sure you don’t end up in a situation where you don’t have time to implement it. Keep track of time and give yourself at least half the interview to actually write and debug your code. It’s often better to implement a naive solution and keep working on the optimization in any remaining time than it is to run out of time in the middle of trying to implement an optimal solution.
Code Quality: Writing good, clean code will always win you an interviewer’s approval, whether you’re writing it on a computer or whiteboard. Good coding practices include:
Abstracting out reusable sections of code into functions
Writing meaningful variable and function names
Adding input checks and null checks
This will both make it easier to identify any potential bugs in your code later on and help the interviewer understand your work.
Assumptions: It’s perfectly acceptable to make assumptions, as long as you explain them! It would be unreasonable to expect you to remember every detail of a language or implement every common data structure or algorithm within the limited time given. Unless a component’s implementation is vital to solving the problem, the interviewer will often allow or even expect you to abstract out some details. However, it’s important to clearly state and explain your assumptions. For example, rather than implementing a helper function, you can just design the method signature, explain how and why you will use it, and move on.
Testing your code is one of the most important parts of software development. Focus on good practices and a correct, bug-free solution instead of the speed at which you can finish the problem.
Take time to reread your code line by line and look for any glaring bugs.
Come up with a few basic test cases and step through your code one line at a time.
Make note of what is happening at each line.
Write down key variable values.
Make sure you get to the expected answer.
Finally, play devil’s advocate by trying to break your solution, focusing on:
corner cases
unexpected input formats and types
error or exception handling, system reliability, etc.
You don’t have to actually implement or even come up with a solution to all such problems, but it will show your interviewer you’re thinking about the limitations of your code and how to improve it.
There is a lot more to a technical interview than just coding up a solution. Another key factor an interviewer is looking for is how they’d work with you as a teammate. Sometimes a candidate who needed some help but showed a lot of strengths in their design and development process might be rated higher than a candidate who quickly got to the right solution on their own but didn’t show a penchant for planning their work or testing their implementation.
Your problem solving and coding skills are your ticket to passing a technical interview. However, the coding question isn’t the only way to make a good impression. Everything from your introduction to the questions you ask can help you get the most out of an interview.
Asking The Right Questions
Indy Prentice, Engineer
In addition to the technical questions, most interviewers leave time at the end for you to ask your own questions. Though this is another chance to make a good impression, the most important thing is that you get your questions answered.
Even if you feel your interview covered your questions, use your research to prepare additional questions to demonstrate your values and interest in the company. This not only shows the interviewer you’re serious about the opportunity and excited to learn more, but also allows you to evaluate if the company is a fit for you. Spend some time ahead of the interview reflecting on what’s important to you, such as team diversity, work/life balance, opportunity for impact, participation in the open-source community, or working with designers. This is your chance to evaluate whether the company you are speaking to fits the bill!
In addition to value-driven questions, there are plenty of generic questions you can ask any interviewer to gain additional information about the company and start a conversation around that topic. These questions can center around your research or the interviewer’s role.
Some examples are:
What’s a typical day like for you?
What are your favorite and least favorite things about your job?
Why did you choose to join, and why have you stayed?
To help make the most of your time and the interviewer’s, make sure the questions you ask are best suited to that person. Questions should be within the engineer’s experience or area of expertise and be appropriate for discussing with a stranger.
At the end of your interview, you may still have additional questions for your interviewer. Unless they offer their contact information where you can follow up, it’s best not to put them on the spot by asking for it. Instead, you can go through the recruiter to find out the best way to follow up.
While your technical performance helps determine your candidacy, interviewers are looking for more than just those skill sets. At Pinterest, we look for more than strong engineers. Our team is made up of people who are excited about the product and its technical challenges, and who value living well-rounded lives. Ultimately we’re looking for people who will stay and be happy at Pinterest. Engaged candidates who have some reason for wanting to come here, whether that’s the product or the technical challenges, are the most exciting.
Use your time as a candidate to find the company that fits you. Making the most out of the non-technical aspects of the interview can ensure that you and the company are a good match!
We have a growing list of open roles for our engineering team. If you’re interested in learning more about working at Pinterest, check out our Careers Page!
|
https://medium.com/pinterest-engineering/a-pinterest-engineering-guide-to-technical-interviews-1c2471c2d139
|
['Pinterest Engineering']
|
2018-11-30 21:40:32.657000+00:00
|
['Engineering', 'Recruiting', 'Technical Interview']
|
Jobs of the Future are Already Here
|
Jobs of the Future are Already Here
The multi-faceted, digitally equipped security guard is one example of change underway
By Tom Davenport and Steven Miller
One of the most frequently-used phrases at business events these days is “the future of work.” It’s increasingly clear that artificial intelligence (AI) and other new technologies will bring substantial changes in work tasks and business processes. But while these changes are predicted for the future, they’re already present in many organizations for many different jobs. The jobs described below are an example of this phenomenon.
The world’s most famous mall security guard is probably Paul Blart, who defended a mall on his Segway from a gang of crooks in the 2009 movie Paul Blart: Mall Cop. As usual, however, the portrayal of the role in a fictional movie doesn’t mirror reality, especially not the state-of-the-art reality for security guards at one of the most amazing malls on this planet.
The rain vortex at Jewel Changi Airport
Jewel Changi Airport (Jewel) is a shopping mall, indoor attraction, nature environment, hotel and airport check-in all in one, located on the premises of Singapore’s Changi Airport. It opened to the public in April 2019, and was built at the cost of U.S. $1.25 billion. Of course, the pandemic is taking its toll on businesses and airlines at the hub, but it remains open.
Jewel has an iconic design inside and out. The entire facility is encased in a glass dome that resembles a multi-faceted gem. The indoor nature-themed environment includes the world’s tallest indoor waterfall (or “rain vortex”), 120 species of plants covering 2,000 trees and 100,000 shrubs, mazes, canopies in the form of walkways and bridges, and sky nets for walking and bouncing. This provides the ambiance for over 280 retail outlets and eateries, various visitor facilities and attractions, and airport facilities that supplement those in terminals.
Guarding the Jewel
Certis, the private company responsible for ensuring the physical security, facility management, and customer services for the new Jewel facility provides the frontline workers — security guards, guest concierge and service officers, and facilities maintenance staff — who deliver these services.
In 1958, Certis was born as the Guard & Escort Unit in the Singapore Police Force. This unit became the Commercial and Industrial Security Organization (CISCO) in 1972, a statutory board under the Ministry of Home Affairs. In 2005, it was privatized to compete for commercial security and related facilities support contracts domestically and internationally. The company eventually changed its name to Certis, and is fully owned by Temasek Holdings, a commercial investment company owned by the Singapore government.
While Certis originated in an industry traditionally focused on physical guarding, it has transformed itself into a technology-intensive, diversified Ops-Tech service provider that has grown organically from S$200 million in annual revenue in 2005 to S$1.5 billion in revenue in the last financial year.
Another aspect of their transformation has been their strong growth as an advanced integrated Ops-Tech service provider, including security services, in airports.
Prior to 2006, Certis had no presence in providing services to airports. Now, Certis provides security and other visitor support services to airports in Singapore, Australia, and the Middle East.
Certis provides the security services to Singapore’s Changi Airport, and in 2019, Changi Airport received the highest score among major worldwide airports (with 40 million or more passengers per year) in the Airports Council International (ACI)’s Airport Service Quality (ASQ) Security Worldwide Ranking. [NOTE: This month it won recognition for its COVID-19 response and efforts in combating the outbreak in Singapore.]
A Digitally Transformed Approach
When Jewel opened to the public, Certis introduced an entirely new, technology-enabled approach to its service delivery called “Security+.” The six key elements of this new approach included:
1) Digital monitoring and surveillance of the entire Jewel facility, including over 5,000 sensors and CCTVs
2) Centralized, on-site Smart Operations Centre (SOC) where all of the monitoring and surveillance data is consolidated, integrated, analyzed, visualized, and assessed by operators
3) A multi-service orchestration platform called MOZART that handles the consolidation and integration of all incoming information sources, and includes AI capabilities for analyzing video and other sensor inputs to identify situations requiring operator follow up
4) A mobile phone application called Argus that is tightly integrated with the MOZART platform, enabling SOC staff to manage, monitor, and communicate with security guards and other front-line staff;
5) The addition of service robots to the front-line patrolling workforce to handle specialized monitoring tasks such as parked cars outside of Jewel
6) Revamped job descriptions where all the Certis front line security guards, guest service officers, and facilities staff are cross-trained to support one another.
Certis had been piloting and deploying some of these Security+ elements in recent years — however, the launch of Jewel was the first time it had brought all six elements together into one integrated and unified service delivery approach. Now, other parts of Changi Airport, major shopping mall operators in Singapore, and JTC Corporation (Singapore’s government agency for the development of industrial facilities and high-tech business parks) are in the process of working with Certis to incorporate this Security+ approach.
Digital Job Transformation
Jun Yuong Pang is 29-years-old and has been with Certis for the past ten years. He spent the first nine years as a security officer and supervisor in the Changi Airport terminals. In May 2019, Certis moved Pang to Jewel to supervise and manage a team of 17 other security specialists and one robot named PETER (an acronym for Patrol and Traffic Enforcement Robot).
Everything about Pang’s work changed, even though he is still working as a security executive and supervising other security specialist. Now, Pang is responsible for security across a large, multi-purpose facility. He and his team members are always on the move, patrolling. Because of the SOC, MOZART and its embedded AI capabilities, the Argus mobile app, and the cross-functional approach to job roles, Pang and his security team are now an integral part of a well-coordinated network of people and intelligent support systems linked through digital means, supporting one another through two-way interactions. This has substantially changed the way they spend their time each day.
Previously, Pang needed to go through the time-consuming tasks of working out patrol routes for the security officers, preparing worksheets for each of them, and briefing them on their daily schedules at the start of every day.
Now MOZART generates the daily patrol schedules for each guard, including randomization to make patrol routes less predictable.
Pang uses Argus to review daily patrol schedules, and makes patrol route adjustments based on information he is aware of that goes beyond MOZART’s information. Via Argus, he automatically distributes the patrol route to each team member and to the SOC. The team no longer needs to show up early before the official start of their patrol work just to get route assignments. They also no longer need to patrol outside in the Singapore heat for parking violations, since PETER, the robot, takes over that task.
In addition, Argus helps the security specialists file reports on every incident they encounter during patrol. This has been a tedious task for the security industry for decades. Officers would sometimes skip filing reports on smaller incidents due to the tedious effort required. Supervisors often had to re-interview the guards to clarify incident details and rewrite, and sometimes even write the reports.
The deployment of Argus and its integration with MOZART completely transformed the process of submitting an incident report, as well as all post-submission processes related to archiving, accessing, and using the information. Incident report submission templates in Argus enable the security specialist to click a few buttons to choose appropriate categories, add pictures as documentation, input descriptive comments, and press send. The report is time-stamped and immediately flows to the SOC and into MOZART. Pang and his team now do a more thorough job on security-related incident reporting and can even support incident reporting for facilities management and guest services.
Likewise, the guest services officers (called “Experience Concierge & Ranger” staff) and the facilities staff use Argus for incident reporting, and the jobs all overlap. Zell Chow, Pang’s counterpart who manages the guest services staff at Jewel, said, “They’re not just concierge people sitting behind a service counter. They also rove around the facility as part of their customer service work…They provide more eyes on the ground, and this helps us with security.
It also helps us with facilities support, as they spot things that need landscape maintenance or facilities follow up. We could not perform in this multi-functional way without our supporting technology.”
Gains and Challenges
The technology systems deployed by Certis at Jewel are constantly monitoring the entire facility to support visitor, workplace, and traffic safety, as well as security and surveillance. MOZART uses its embedded AI capabilities to constantly monitor and analyze this stream of incoming information. By assessing the alerts automatically generated by MOZART, and the incident reports and other incoming communications submitted by ground staff via ARGUS, the SOC manager and operators can determine when to mobilize frontline staff to follow up on incidents.
They also always know where all the security specialists, guest service officers, and facilities staff are physically located, using a combination of location tracking through Argus and the other sensor information. Through Argus, Pang has visibility to this information, as does his counterpart Zell Chow. The two of them coordinate closely with the SOC and with each other to plan deployment of personnel on the ground to respond to a particular incident.
Transitioning to this new work environment at Jewel — with the new support technology, and with the new multi-functional approach — has not been without its challenges.
Pang’s manager, Aaron Soo, noted that those CERTIS security staff with prior airport security officer experience had a lot to learn in terms of maintaining high levels of visitor engagement.
Pang also reflected on the challenges of getting his security specialist team members, especially those 60 and older, to be comfortable using the new technology. “This has been a very challenging but good experience. It has taken time for them to learn and adjust. Fortunately, they have all managed to do so.” Zell Chow added that efforts were made to help frontline employees in customer service and in security come on board with the new technology, especially for older workers. “We have redesigned our training materials to incorporate more pictures and videos, and to reduce the use of long sentences. We have made training content and sessions more fun.” Both Zell and Pang pointed out that the technology enables workers to remain as active, contributing employees.
Pang, as well as Aaron and Zell, highlighted that the Certis digital transformation effort at Jewel created a significant need for continuous learning and training. At the same time, because of the efficiency and productivity, they were able to reinvest their substantial saved time into training and working with their teams to make adjustments. Zell reflected, “We never could have progressed in this way prior to our digital transformation efforts.”
Automated Operations + Humans
As Duty Manager of the SOC at Jewel, and direct supervisor of Pang and Zell, Aaron shared his thoughts on how the work setting at Jewel and Certis will evolve. “As the capabilities of our systems continue to improve, I anticipate that we will be able to automate more of our operations tasks in our SOC as well as on the ground. We will steadily realize even higher levels of productivity and be able to further reduce certain types of manpower.”
At the same time, Aaron is convinced that humans will continue to play essential and irreplaceable roles in the SOC and the ground staff teams. He explains, “There are just too many novel, non-standard situations in the SOC when we monitor and assess alerts automatically generated by MOZART and incident reports from the ground staff. My SOC team and I, as well as our ground staff team executives, do very complex ‘man-in-the-middle’ coordination and communication across multiple stakeholders such as the ground staff at Jewel, our senior management at Certis and Jewel, and also with other external parties including the ambulance teams, medical facilities, and the government authorities.
Our technology, as advanced as it is, just does not have the capability to do all of this type of coordination and communication, especially for unusual situations; Not yet, at least, and not for any foreseeable future.”
It appears that humans and the human-machine partnership are here to stay in the mall security setting, even as AI capabilities and applications continue their rapid and remarkable advance. Sadly, no movie-style Segways appear to be in the offing for Jewel’s security specialists, as their manufacturer has just stopped making them.
|
https://medium.com/mit-initiative-on-the-digital-economy/jobs-of-the-future-are-already-here-37a4b6a9b902
|
['Mit Ide']
|
2020-10-29 18:13:52.512000+00:00
|
['Automation', 'Jobs', 'AI', 'Future Of Work']
|
An Introduction to Data Cleaning: Using Regular Expressions to Clean your Data
|
Johnathan Padilla, Thomas Kidd, and Hadrien Picq are fellows in the TechSoup and ParsonsTKO Summer 2020 Data Strategy Mentorship Program, a select group of upcoming data analysts and scientists to learn more about what the social impact and non-profit analytics sectors look like from industry professionals.
Regex is a Superpower
Unlike powers inherited by radioactive spiders, regex can be used as irresponsibly as you want. There are limits to this power but when used skillfully those limits can be largely ignored. I’ll show you a quick example of regex’s power. Say you are given a string like the following:
Words = 'the quick brownTfox runs super! fast: '
The sentence is all garbled and has characters that we do not want to keep. We can correct and recreate this sentence using regex. First, you have to split the string on the characters of your choice.
import re

split = re.split('T|!|:', Words)
split
I chose to split on the ‘T’, ‘!’ and the ‘:’ which yields a list of only the desired characters
['the quick brown', 'fox runs super', ' fast', ' ']
From here I would want to recombine the strings to one string. I can do this by adding each portion of the list to each other like so
corrected_words = split[0] + split[3] + split[1] + split[2]
corrected_words
Be sure to note that I placed the space character in between ‘brown’ and ‘fox’ as no space existed there in the first place. Printing the new string should look like this:
'the quick brown fox runs super fast'
Using regex to clean text data is simpler and quicker than manually splitting and reassembling strings by hand. Now that we have established a basis for using regex, I can dive into the data being used for the project.
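As a side note, the same cleanup can be done in a single pass with re.sub, which replaces every match of a pattern. This is just a sketch on the same example string, and the particular pattern choices are my own illustration rather than part of the original walkthrough.
import re

words = 'the quick brownTfox runs super! fast: '

# Replace the stray 'T' with a space, drop '!' and ':', then trim the edges
cleaned = re.sub('[!:]', '', re.sub('T', ' ', words)).strip()
print(cleaned)  # 'the quick brown fox runs super fast'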
|
https://johnathan-d-padilla.medium.com/an-introduction-to-data-cleaning-using-regular-expressions-to-clean-your-data-9684ccfac74c
|
['Johnathan Padilla']
|
2020-06-24 19:55:11.050000+00:00
|
['Air Quality', 'Data Cleaning', 'Python', 'Data', 'Regex']
|
The Holiday Olympic Torch
|
The Holiday Olympic Torch
Passing the burning torch of hope
When the world holds the Olympics thousands of people from around the globe carry the Olympic torch, handing it off, one by one, as a symbolic gesture of peace and solidarity. I see the holidays in the same way. I know for many it is a time of stress, for some a time of personal pain, for others a reminder that their life has failed to provide them with the means to celebrate while there are others that lack the basic needs of life.
Unfortunately, it will always be that way. Life does carry with it a certain cruelty. All my life I have done things to make a difference, but people continue to suffer. I have come to realize that I am not Mother Teresa, nor a doctor with Doctors Without Borders, nor a UN relief worker, nor one of the countless others who are willing to sacrifice all for the good of others. I have thought about it many times, but it is not something I am capable of.
I mentally apologize and accept my human limitations.
That doesn’t mean I don’t contribute to improve the quality of life for others. I do so every day in small things and occasionally in bigger ways. It allows my heart to express love.
Every time I do something, I hope that I am passing the burning torch of hope and love onto others and that they will, in turn, pass it on. I have always subscribed to the idea that the best acts of kindness and charity are the ones done privately and without fanfare.
Those are the best ones to witness.
At the same time, each act is the torch of humanity, multiplied with each act as the torch is handed off to others, directly or indirectly, to create a chain of kindness, shared humanity and love around the world. The torch seems to burn brighter during the holidays as we all open up our hearts. This pandemic and the global political strife we are witnessing, unfortunately, has added additional darkness to our world.
We need an even brighter torch this year.
Watch for your opportunity to take the torch and carry it for just a brief moment and then willingly pass it on. The more times we carry it, the more times we share it, the brighter the world will be.
May the light and the warmth of the holidays carry you well into the New Year and beyond.
Emma Holiday
This story is a response to Prism & Pen’s writing prompt “Blaze Against the Dying of the Light”
Other stories so far —
|
https://medium.com/prismnpen/the-holiday-olympic-torch-2fe0e7ab8e57
|
['Emma Holiday']
|
2020-12-17 22:59:36.969000+00:00
|
['Society', 'Creative Non Fiction', 'Humanity', 'Love', 'LGBTQ']
|
Explainer Dashboard — Build interactive dashboards for Machine learning models
|
The explainer object itself is also a plot factory that you can use to directly make plots inline in your notebook:
Notebook Screenshot. Image Credit: explainerdashboard
Launching from within a notebook
When working inside Jupyter or Google Colab you can use ExplainerDashboard(mode='inline') , ExplainerDashboard(mode='external') or ExplainerDashboard(mode='jupyterlab') , to run the dashboard inline in the notebook, or in a separate tab but keep the notebook interactive.
There is also a specific interface for quickly displaying interactive components inline in your notebook: InlineExplainer() .
For example, you can use InlineExplainer(explainer).shap.dependence() to display the shap dependence component interactively in your notebook output cell.
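Below is a minimal sketch of how these notebook options fit together, assuming an explainer object (for example a ClassifierExplainer) has already been built as described elsewhere in the package documentation:
from explainerdashboard import ExplainerDashboard, InlineExplainer

# Run the full dashboard inline in the notebook output cell
# (or use mode='external' / mode='jupyterlab' as described above):
ExplainerDashboard(explainer, mode='inline').run()

# Or display a single interactive component, e.g. the shap dependence plot:
InlineExplainer(explainer).shap.dependence()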
Command-line tool
You can store explainers to disk with explainer.dump("explainer.joblib") and then run them from the command-line:
$ explainerdashboard run explainer.joblib
Or store the full configuration of a dashboard to .yaml with e.g. dashboard.to_yaml("dashboard.yaml") and run it with:
$ explainerdashboard run dashboard.yaml
You can also build explainers from the command line with explainerdashboard build .
See explainerdashboard CLI documentation for more details.
Custom dashboards
All the components in the dashboard are modular and re-usable, which means that you can build your own custom dash dashboards around them.
Custom Dashboard. Image Credit: explainerdashboard
By using the built-in ExplainerComponent class it is easy to build your
own layouts, with just a bare minimum of knowledge of html and bootstrap.
For example, if you only wanted to display the ShapDependenceComponent , but hide a few toggles:
from explainerdashboard.custom import *
import dash_bootstrap_components as dbc
import dash_html_components as html

class CustomTab(ExplainerComponent):
def __init__(self, explainer):
super().__init__(explainer, title="Custom Tab")
self.dependence = ShapDependenceComponent(explainer,
hide_selector=True, hide_cats=True, hide_title=True)
self.register_components()
def layout(self):
return dbc.Container([
dbc.Row([
dbc.Col([
html.H3("Shap Dependence Plot:"),
self.dependence.layout()
])
])
])
ExplainerDashboard(explainer, CustomTab).run()
You can use this to define your own layouts, specifically tailored to your
own model, project, and needs.
See custom dashboard documentation for more details.
Switching off tabs
You can switch off individual tabs using boolean flags. This also makes sure that expensive calculations for that tab don’t get executed:
ExplainerDashboard(explainer,
importances=False,
model_summary=True,
contributions=True,
whatif=True,
shap_dependence=True,
shap_interaction=False,
decision_trees=True)
Hiding components
You can also hide individual components on the various tabs:
ExplainerDashboard(explainer,
# importances tab:
hide_importances=True,
# classification stats tab:
hide_globalcutoff=True, hide_modelsummary=True,
hide_confusionmatrix=True, hide_precision=True,
hide_classification=True, hide_rocauc=True,
hide_prauc=True, hide_liftcurve=True, hide_cumprecision=True,
# regression stats tab:
# hide_modelsummary=True,
hide_predsvsactual=True, hide_residuals=True,
hide_regvscol=True,
# individual predictions tab:
hide_predindexselector=True, hide_predictionsummary=True,
hide_contributiongraph=True, hide_pdp=True,
hide_contributiontable=True,
# whatif tab:
hide_whatifindexselector=True, hide_inputeditor=True,
hide_whatifcontribution=True, hide_whatifpdp=True,
# shap dependence tab:
hide_shapsummary=True, hide_shapdependence=True,
# shap interactions tab:
hide_interactionsummary=True, hide_interactiondependence=True,
# decisiontrees tab:
hide_treeindexselector=True, hide_treesgraph=True,
hide_treepathtable=True, hide_treepathgraph=True,
).run()
Hiding toggles and dropdowns inside components
You can also hide individual toggles and dropdowns using **kwargs . However they are not individually targeted, so if you pass hide_cats=True then the group cats toggle will be hidden on every component that has one:
ExplainerDashboard(explainer,
no_permutations=True, # do not show or calculate permutation importances
hide_cats=True, # hide the group cats toggles
hide_depth=True, # hide the depth (no of features) dropdown
hide_sort=True, # hide sort type dropdown in contributions graph/table
hide_orientation=True, # hide orientation dropdown in contributions graph/table
hide_type=True, # hide shap/permutation toggle on ImportancesComponent
hide_dropna=True, # hide dropna toggle on pdp component
hide_sample=True, # hide sample size input on pdp component
hide_gridlines=True, # hide gridlines on pdp component
hide_gridpoints=True, # hide gridpoints input on pdp component
hide_cutoff=True, # hide cutoff selector on classification components
hide_percentage=True, # hide percentage toggle on classificaton components
hide_log_x=True, # hide x-axis logs toggle on regression plots
hide_log_y=True, # hide y-axis logs toggle on regression plots
hide_ratio=True, # hide the residuals type dropdown
hide_points=True, # hide the show violin scatter markers toggle
hide_winsor=True, # hide the winsorize input
)
Setting default values
You can also set default values for the various dropdowns and toggles. All the components with their parameters can be found in the documentation. Some examples of useful parameters to pass:
ExplainerDashboard(explainer,
higher_is_better=False, # flip green and red in contributions graph
col='Fare', # initial feature in shap graphs
color_col='Age', # color feature in shap dependence graph
interact_col='Age', # interaction feature in shap interaction
cats=False, # do not group categorical onehot features
depth=5, # only show top 5 features
sort = 'low-to-high', # sort features from lowest shap to highest in contributions graph/table
orientation='horizontal', # horizontal bars in contributions graph
index='Rugg, Miss. Emily', # initial index to display
pdp_col='Fare', # initial pdp feature
cutoff=0.8, # cutoff for classification plots
round=2 # rounding to apply to floats
)
You can define your own layouts, specifically tailored to your own model, project, and needs. You can use the ExplainerComposites that are used for the tabs of the default dashboard as a starting point, and edit them to reorganize components, add text, etc.
See custom dashboard documentation for more details. A deployed custom dashboard can be found here(source code).
ClassifierModelStatsComposite
ClassifierModelStats. Image Credit: explainerdashboard
Deployment
If you wish to use e.g. gunicorn or waitress to deploy the dashboard you should add app = db.flask_server() to your code to expose the Flask server. You can then start the server with e.g. gunicorn dashboard:app (assuming the file you defined the dashboard in was called dashboard.py ).
It can be helpful to store your explainer and dashboard layout to disk, and then reload, e.g.:
generate_dashboard.py:
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from explainerdashboard.custom import *

explainer = ClassifierExplainer(model, X_test, y_test)

# building an ExplainerDashboard ensures that all necessary properties
# get calculated:
db = ExplainerDashboard(explainer, [ShapDependenceComposite, WhatIfComposite],
                        title='Awesome Dashboard', hide_whatifpdp=True)

# store both the explainer and the dashboard configuration:
explainer.dump("explainer.joblib")
db.to_yaml("dashboard.yaml")
You can then reload it in dashboard.py:
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

explainer = ClassifierExplainer.from_file("explainer.joblib")

# you can override params during load from_config:
db = ExplainerDashboard.from_config(explainer, "dashboard.yaml", title="Awesomer Title")

app = db.flask_server()
And then run it with:
$ gunicorn dashboard:app
Jupyter Notebook Files
1. Example notebook on how to launch dashboards for different model types here: dashboard_examples.ipynb.
2. Example notebook on how to interact with the explainer object here: explainer_examples.ipynb.
3. Example notebook on how to design a custom dashboard: custom_examples.ipynb.
Deployed example
You can find an example dashboard at titanicexplainer.herokuapp.com
(source code at https://github.com/oegedijk/explainingtitanic)
Demonstrations
1. Classifier Dashboard — Predicting the probability of surviving the titanic.
2. Regression Dashboard — Predicting the fare paid for a ticket on the titanic.
3. Multiclass Dashboard — Predicting the departure port for passengers on the titanic.
4. Custom Dashboard — Showing a custom design for a classifier dashboard.
Documentation
Documentation can be found at explainerdashboard.readthedocs.io.
Conclusion:
In a lot of organizations, it is becoming more and more important to be able to explain the inner workings of machine learning algorithms. Customers have, to some extent, a right to an explanation of why they received a certain prediction, and more and more internal and external regulators require it. With recent innovations in explainable AI (e.g. SHAP values) the old black box trope is no longer valid, but it can still take quite a bit of data wrangling and plot manipulation to get the explanations out of a model. This library aims to make this easy. The goals are manifold:
Make it easy for data scientists to quickly inspect the workings and performance of their model in a few lines of code
Make it possible for non-data scientist stakeholders such as managers, directors, internal and external watchdogs to interactively inspect the inner workings of the model without having to depend on a data scientist to generate every plot and table
Make it easy to build an application that explains individual predictions of your model for customers that ask for an explanation
Explain the inner workings of the model to the people working (human-in-the-loop) with it so that they gain an understanding of what the model does and doesn’t do. This is important so that they can gain an intuition for when the model is likely missing information and may have to be overruled.
The library includes:
Shap values (i.e. what is the contributions of each feature to each individual prediction?)
Permutation importance (how much does the model metric deteriorate when you shuffle a feature?)
Partial dependence plots (how does the model prediction change when you vary a single feature?)
Shap interaction values (decompose the shap value into a direct effect and interaction effects)
For Random Forests and xgboost models: visualization of individual decision trees
Plus for classifiers: precision plots, confusion matrix, ROC AUC plot, PR AUC plot, etc
For regression models: goodness-of-fit plots, residual plots, etc.
|
https://medium.com/analytics-vidhya/explainer-dashboard-build-interactive-dashboards-for-machine-learning-models-fda63e0eab9
|
[]
|
2020-12-13 16:22:46.660000+00:00
|
['Programming', 'Software Development', 'Python', 'Explainable Ai', 'Machine Learning']
|
Theft Detection Using Machine Learning
|
Theft is a common criminal activity that is prevailing over the years and is increasing day by day. To tackle this problem many surveillance systems have been introduced in the market. Some are simply based on video surveillance monitored by a human while some are AI-based capable of detecting suspicious activity and raising an alarm. However, none of them are intelligent enough to identify what kind of suspicious activity is being carried out and what kind of protective measures should be taken in real-time. This blog presents the design of an effective surveillance system using machine learning techniques.
Introduction
Theft is the most common crime committed across the world. According to the National Crime Records Bureau (NCRB), ~80% of criminal cases are related to theft [1], as shown in figure 1. Increasing theft rates cause people to suffer both financially and emotionally. Therefore, there is a need to develop a surveillance system that is a stronger deterrent, convenient to use, free from false alarms, minimally dependent on human intervention, and cost-effective.
Fig. 1. Contribution of theft in crime
Machine Learning (ML) techniques prove to be fruitful in developing efficient surveillance systems. This blog aims to design a theft detection and monitoring system, which would be capable to detect theft using a motion-sensing camera using ML and alarm the owner with an alert message along with the captured image of that instance of motion.
The major contributions are:
To detect and activate motion in the still place according to requirements
To recognize facial expressions and detect people wearing the mask using the ML model
To detect the suspiciousness in the surrounding for any kind of weapon and raise alert messages
Proposed Methodology
The proposed methodology consists of three phases- Data Collection and Acquisition, ML models, and Actions are taken. The workflow of the proposed methodology is shown in figure 2.
Fig. 2. Flow Chart
3.1 Data Collection and Acquisition: The datasets used are taken from Google Images, Kaggle, and the Flickr 8k dataset, which contains 8,000 images, each with 5 captions. This dataset is large enough to train the model; with a much larger number of images, training becomes difficult given the available computational resources.
Video is captured using an IP camera connected to the wireless network. This is done using the OpenCV library. Once the video is captured, it is processed and broken into image frames. These image frames are then fed to the ML models for further analysis. The data is cleaned and pre-processed before applying the ML models. In data cleaning, a dictionary of the unique words present in all the captions of the dataset images is created and saved to disk. To make the model more robust to outliers, we considered only those words that occur at least 10 times in the entire corpus.
In data pre-processing, each image is converted to a fixed-size vector, which is then fed to a Convolutional Neural Network (CNN) based on transfer learning. This model is trained on the ImageNet dataset to perform image classification on 1,000 different classes of images, but since we wanted to extract a fixed-length informative vector for each image, we removed the final softmax layer from the model and extracted a 2048-length vector (bottleneck features) for each image.
3.2 ML Models: Image frames are processed and fed to different ML models. Each model performs a task in a particular sequence so as to analyze different evaluation parameters.
3.3 Actions: According to the results obtained from the ML models the actions will be taken such as raising alarms, sending alert messages to the owner only, sending alert messages to cops only, sending an alert message to the owner and cops both.
The system consists of several levels of surveillance; at each level, the activity in each frame of the video is monitored thoroughly using ML models, each trained to perform its specific job. There are a total of six levels of surveillance, and the system has two modes (day and night); which mode is used at any given moment depends entirely on the user.
Motion Detection: To detect motion, we first detect whether any human being is present in the frame, and to do so we use a CNN. Convolution can be used to achieve blurring, sharpening, edge detection, and noise reduction, which are not easily achieved by other methods.
In the convolution method, the center of the kernel is placed over each element of the input image, and the corresponding elements are multiplied and added together. A CNN is a type of neural network made up of a large number of neurons with weights and biases that define the relations between them. Each neuron receives several inputs, takes a weighted sum over them, passes it through an activation function, and responds with an output.
After getting only average results from a model trained on a dataset of 2,000 images with a human being and 2,000 images without, we tried transfer learning: we used the YOLOv4 neural network, extracted the weights just before the last two layers of the network, and then used those pre-trained weights to train a model that detects human beings in the image. With this transfer learning technique, accuracy improved. After detecting a human being in the frame, we can detect motion, for which we used the Python library OpenCV.
To identify motion, we tried different strategies such as frame differencing, background subtraction, and optical flow, and got the best results with frame differencing. The frame differencing strategy subtracts two or three adjacent frames from a time series of images to obtain difference images; its working is very similar to background subtraction, and after the subtraction it yields moving-target information through a threshold value. This method is straightforward and simple to implement, and it adapts well to dynamic scene changes; however, it generally fails to detect all of the relevant pixels of some types of moving objects. The additional techniques that would have to be adopted to detect stopped objects, required for the success of the next level, are computationally complex and cannot be used in real-time without specialized hardware.
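As a rough illustration of the frame differencing idea settled on above, the sketch below subtracts adjacent grayscale frames and thresholds the difference; the camera index, the threshold of 25, and the 5,000-changed-pixel cutoff are illustrative assumptions rather than values from this work.
import cv2

cap = cv2.VideoCapture(0)           # assumed camera index
_, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Absolute difference between adjacent frames, then binarize it
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # If enough pixels changed, flag the frame as containing motion
    if cv2.countNonZero(mask) > 5000:
        print("Motion detected")

    prev_gray = gray

cap.release()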
Mask Detection: Thieves usually wear masks to hide their identity while attempting theft. Therefore, the output of this module will help in determining whether a theft threat is real or false. The dataset used in this module is from Google Images and Kaggle. Google Images are scraped using the Selenium package and the Chrome Driver extension. To get better results in a short time, the transfer learning approach is used, and the MobileNetV2 architecture is trained on the weights of ImageNet. During training, the main focus was on loading the face mask detection dataset from disk, training a model (using Keras/TensorFlow) on this dataset, and then serializing the face mask detector to disk. The model was trained on a dataset containing 1,000 images each of a person with a mask and without a mask.
The main steps used in this module are:
Train mask detector: Accepts the input dataset and fine-tunes MobileNetV2 on it to create a model.
Detect mask image: This module performs face mask detection in static images.
Detect mask video: Using the camera, this module applies face mask detection to each frame in the stream.
For mask detection, we used a hybrid model in which we first detect the face using a haar cascade and then detect the mask on the face using MobileNetV2 trained on the weights of ImageNet. Once the face mask detector was trained, we moved on to loading the mask detector, performing face detection, and then classifying each face as ‘with mask’ or ‘without mask’.
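The transfer-learning setup described for the mask classifier could look roughly like the sketch below, which freezes an ImageNet-pretrained MobileNetV2 and adds a small two-class head; the input size, head layout, and training settings are my assumptions for illustration, not the authors’ exact configuration.
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models

# ImageNet-pretrained backbone used as a frozen feature extractor
base = MobileNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Small classification head: 'with mask' vs 'without mask'
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(2, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=10)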
Facial Expression Detection: As a pre-processing step, the images are cropped around the faces and intensity normalized. Features are extracted and local descriptors are calculated. This computation is represented using a Vector of Locally Aggregated Descriptors (VLAD). This labeled dataset is trained on the emotion classes using an SVM classifier. Lastly, the face is identified in the image using a haar cascade classifier and the image is cropped to a 256 x 256 resolution.
Weapon Detection: Weapon detection is another important module for differentiating between false and actual threats [8]. The approach we have used to detect weapons is similar to that of mask detection, except that we use a haar cascade to detect the hand of a person and then detect the weapon in the hand. To train the model, we used a dataset containing 4,000 images each of a person with a weapon and a person without a weapon. The labels of the images in the dataset are categorical, therefore we used one-hot encoding to convert the categorical labels to binary values. As our dataset is small, we used an augmentation technique to achieve good accuracy. Data augmentation is a technique that can be used to artificially expand the size of a training dataset by creating modified versions of the images within it. Similar to mask detection, transfer learning is used for training the model. The MobileNetV2 architecture is trained on the weights of ImageNet to achieve better accuracy in a short time. During training, callbacks like checkpoints and early stopping are used to save the best-trained model.
Pose Detection: The problem of human pose estimation is often defined as the set of computer vision techniques that predict the location of various human keypoints (joints and landmarks) such as elbows, knees, neck, shoulders, hips, chest, etc. It is a very challenging problem because of factors like small and barely visible parts, occlusions, and large variability in articulations. The classical approach to articulated pose estimation uses the pictorial structures framework. The basic idea is to represent an object by a set of “parts” arranged in a deformable (not rigid) configuration. A “part” is an appearance template that is matched in an image, and springs model the spatial connections between parts. When parts are parameterized by pixel location and orientation, the resulting structure can model articulation, which is especially relevant in pose estimation (a structured prediction task). This method, however, comes with the limitation of having a pose model that does not depend on image data. As a result, research has focused on improving the representational power of the models. After getting some unsatisfactory results from the above-mentioned strategies, we finally used a neural network by Google, which gave us the desired results. After the pose is detected, we trained a model that determines whether the poses detected in the frames should raise concerns about suspicious activity or not [9]. To do so, we created an artificial dataset containing 10,000 images labeled with their poses, and a dictionary that contains information about the poses and their relevance to the type of activity possible at that time.
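The face localization step that the expression (and mask) modules rely on can be sketched with OpenCV’s haar cascade as below; the input file name and the detector parameters are placeholders for illustration.
import cv2

# Standard frontal-face cascade shipped with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('frame.jpg')        # placeholder frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    # Crop the detected face and resize it to 256 x 256 as described above
    face = cv2.resize(img[y:y + h, x:x + w], (256, 256))
    # 'face' would then be passed to the expression / mask classifiers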
Activity Captioning: Activity captioning is a task similar to image captioning but in real-time that involves computer vision and natural language processing. It takes an image and can describe what’s going on in the image in plain English. This module is essential for determining the course of actions that are being performed by the person in view. In surveillance systems, it can be used to determine theft by captioning the frames obtained from the camera and finally alerting the authorities. We again used the approach of transfer learning by utilizing the ResNet50 architecture trained on the weights of ImageNet. After getting the results from the above six modules those results are combined and then further becomes the input to an ML model which decides to whom it should address for the alert message whether it should be the owner or the cops or both [10].
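As a sketch of the image-encoding half of this captioning pipeline, the snippet below extracts a 2048-length bottleneck vector with an ImageNet-pretrained ResNet50, in line with the fixed-length vectors described in the pre-processing section; the file name and input size are placeholders, and the caption-generation half is omitted.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image

# ImageNet-pretrained encoder without the classification head;
# global average pooling yields a fixed 2048-length vector per image
encoder = ResNet50(weights='imagenet', include_top=False, pooling='avg')

img = image.load_img('frame.jpg', target_size=(224, 224))   # placeholder frame
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

features = encoder.predict(x)   # shape (1, 2048): the bottleneck features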
The main aim of the project is to embed computer vision into a camera. We designed the system in such a way that the camera is connected to an external computer on which the algorithms will run. The client is also connected to this computer through the internet. Whenever a suspicious activity is detected an alert message will be sent to the client. The message will consist of an image captured through the camera along with the various features extracted by our 6 modules. We can also use Raspberry Pi instead of a regular computer to reduce the cost of the project.
Results
To detect motion, we initially used the frame differencing method without first detecting the human beings in the frame, which gave good results; eventually, however, motion was detected even from the movement of insects or other unwanted things. To remove that noise, we tried to set a threshold roughly equal to the change in pixels expected from the presence of a human in the frame. But in this case too the noise was high, so in the end we built an ML model that can detect human beings in the frame. After detecting people in the frame, motion is recognized using frame differencing techniques.
In the “mask detection” module, where masks are detected using an ML model, the model was first trained using a simple CNN (a combination of convolution layers, max-pooling layers, and dropout layers followed by dense layers), but the accuracy was not up to the mark. Then, to improve the accuracy of a model trained on a smaller dataset, we used the transfer learning technique and got great results with it.
In “facial expression detection”, we first developed a model that detects the frontal face of humans present in the frame, and then built another model to determine the facial features using a CNN with the ‘ReLU’ activation function. We got a poor response with that, so we replaced ‘ReLU’ with ‘ELU’ and got good accuracy.
In “weapon detection”, we detected weapons in the image using the same technique as the mask detector and achieved good accuracy.
In “pose detection”, we used a model trained by Google to detect the poses, and then fed those results to an ML model that determines whether the poses in the image relate to suspicious activity or not. To do so, we used classification techniques and got the best results with multivariate logistic regression.
In “activity captioning”, we designed an ML model that captions the activity, combining computer vision and natural language processing. We first tried basic image classification and text generation algorithms but did not get good outcomes; we then used transfer learning for image classification and BERT for text generation, and eventually got great results.
Finally, after combining the outcomes of these modules, the input for another ML model is created, which determines to whom the alert message should be sent. To design such a model, we used several classification techniques like logistic regression and SVM, and got the best results with decision trees.
An Intruder without a mask.
An intruder with a mask.
Conclusion and Future Scope
The work carried out in this blog is centered on designing and developing an effective and convenient surveillance system to solve security problems and help reduce or stop theft. Though a significant amount of research has been done in the past to solve such security problems, it still remains challenging due to increased complexity and the variety of theft actions that take place daily. The system will capture images only when there is a human being in the frame and motion exceeds a certain threshold that is pre-set in the system. It thus reduces the volume of data that needs to be processed. Also, it will help to save data space by not capturing static images which usually do not contain the object of interest. Users of this system need not supervise the cameras all the time; instead, the system will inform the user about the activities happening and will also suggest whether the user should take some action or not. After successful implementation, the project can be applied in a smart home security system, which would be very helpful in automatic theft detection for security purposes. It can also be useful in banks, museums, and streets at midnight.
References
|
https://medium.com/analytics-vidhya/theft-detection-using-machine-learning-a4232ea51f1c
|
['Jatin Arora']
|
2020-09-25 13:23:41.439000+00:00
|
['Deep Learning', 'Neural Networks', 'Computer Vision', 'Data Science', 'Machine Learning']
|
What Makes Social Media, Social Media?
|
What Makes Social Media, Social Media?
In short — People and communication.
Photo by ROBIN WORRALL on Unsplash
Social media is 90% social, 10% media. These technology platforms allow for infinite power and reach. Our work can reach millions within a fraction of a second after hitting the “Post” button. More people can access our work concurrently, and more than ever before.
As much as I see the technological prowess which connects millions at any particular point in time, many see the good and the dark side of social media.
Let me stick to the technological layer for a bit before I touch on the social element.
These web and mobile applications have a wider reach than any physical platform we have known. Television, billboards, and posters have one thing in common.
They cannot move.
Smart devices can move. Through these physical proxies we carry with us like our wallets, the social media platforms follow us wherever we go.
Getting our attention is one vibration away. Visual assaults or on-air commercials could only be envious of their younger but more powerful step siblings.
Now, leaving the technology layer aside. The social element makes up the social media we know today.
If we could examine all those platforms we are involved in, we can find a common thread. They are designed to facilitate and exaggerate communication.
Like the lead in a trendy drama, the main cast is ridiculously handsome, rich beyond means, taller than an NBA player, goes to the gym, has eyes like a hawk, and can run like a cheetah.
When we engage in social media, we express our love with the force of an earthquake. We comment to show approval, like, share, and funnel in others by tagging them on that same post.
That’s social media.
The dark side of social media is manifested in communication as well.
I think an influencer can bury those they dislike. This is nothing new. In a corporate setting, those on top could end the career of a certain someone at the operational level.
In that sense, humans are humans. We carry the same behavior wherever we go.
What social media actually does is to extend the reach of our behavioral and communication patterns beyond our physical boundaries.
We can send our love to a child in Siberia instantaneously. We can also continue to destroy the person we hate no matter where they escape to. Zambia or Colombia, it doesn’t matter. The throttling continues so long as social media accounts are alive.
This is something that we have to adapt to in the 21st century. Social media will continue to evolve, and we have to learn to deal with it.
That is the same with dealing with difficult people, annoying friends, and seemingly scheming adversaries.
In that sense, social media is as normal as it is novel.
This is the social media we know today.
Social Media Is A Reflection Of Social Element,
Aldric
|
https://medium.com/the-innovation/what-makes-social-media-social-media-cc6bb12576e0
|
['Aldric Chen']
|
2020-12-05 11:02:26.457000+00:00
|
['Communication', 'People', 'Personal Development', 'Technology', 'Social Media']
|
Important life lesson from a Halo 4 developer
|
You don’t have to be a video game fan to appreciate this. No…seriously.
Halo is one of the biggest video game series in history. Master Chief, the main character, is up there with Mario, Pac-Man, Solid Snake, and Zelda. Halo 4, the newest chapter in the saga, is currently in development and is certainly a highly anticipated title.
I read this morning that Ryan Payton, one of the creative directors for Halo 4, is stepping down from his position. This is obviously a big blow for the game’s development. It also begs the question as to why he’s doing it.
The whole article about his decision can be read HERE, but this is the part that really struck me:
“For somebody who loves this industry as much as I do and know how lucky I’ve been, I never thought I’d get to a point where I was so drained. That was when I knew I had to do something else,” said Payton. “I think time is the most valuable thing we have, and I’ve decided that I’m not going to waste one more day working on something that doesn’t speak to my values.”
Like I said, you don’t have to be a gamer for that bolded line to hit home.
Can you articulate your core values?
Does how you spend your time reflect your core values?
What are you wasting time on?
Answering those questions will take courage. A ton of it.
Ryan’s statement reminded me of a song by Chris Rice, “Life means so much.”
If you have time, it’s worth a listen:
|
https://medium.com/processing-life/important-life-lesson-from-a-halo-4-developer-33527bdb7452
|
[]
|
2016-07-01 03:26:27.118000+00:00
|
['Leadership', 'Perspective', 'Time', 'Goals', 'Creativity']
|
How to Build a Reporting Dashboard using Dash and Plotly
|
A method to select either a condensed data table or the complete data table.
One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:
Code Block 17: Radio Button in layouts.py
The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:
Code Block 18: Callback for Radio Button in layouts.py File
This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, using the callback statement Output('datatable-paid-search', 'data') , this callback changes the columns presented in the data table based upon the radio button selection, using the callback statement Output('datatable-paid-search', 'columns') .
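A hedged sketch of what such a callback could look like is below; only the 'datatable-paid-search' output id comes from the article, while the radio button id and the column lists are hypothetical placeholders, and the Dash app object is assumed to already exist.
from dash.dependencies import Input, Output

condensed_columns = ['Placement type', 'Sessions', 'Revenue']            # assumed
complete_columns = condensed_columns + ['Bookings', 'Revenue YoY (%)']   # assumed

@app.callback(
    Output('datatable-paid-search', 'columns'),
    [Input('table-view-radio', 'value')])      # hypothetical radio button id
def update_columns(view):
    # Return the condensed or complete column set in DataTable format
    cols = condensed_columns if view == 'condensed' else complete_columns
    return [{'name': c, 'id': c} for c in cols]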
Conditionally Color-Code Different Data Table cells
One of the features that the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table highlighted based upon a metric’s value: red for negative numbers, for instance. However, conditional formatting of data table cells has three main issues.
There is lack of formatting functionality in Dash Data Tables at this time.
If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly.
There is a bug in the Dash data table code in which conditional formatting does not work properly.
I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:
Code Block 19: Conditional Formatting — Highlighting Cells
The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.
*This has since been corrected in the Dash Documentation.
Conditional Formatting of Cells using Doppelganger Columns
Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and the Dash data table. These doppelganger columns hold either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug in which the decimal portion of a value is not considered by conditional filtering). The doppelganger columns can then be added to the data table but hidden from view with the following statements:
Code Block 20: Adding Doppelganger Columns
Then, the conditional cell formatting can be implemented using the following syntax:
Code Block 21: Conditional Cell Formatting
Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values.
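A rough sketch of this trick is below (not the article's code: the two Revenue column names come from the article, while the sample data and the hidden_columns prop, available in newer Dash releases, are illustrative assumptions; older Dash versions used a "filter" key where newer ones use "filter_query"):
import dash_table
import pandas as pd

df = pd.DataFrame({
    "Channel": ["Paid Search", "Display"],
    "Revenue YoY (%)": ["-5.2%", "12.3%"],            # formatted strings shown to the user
    "Revenue_YoY_percent_conditional": [-5.2, 12.3],  # unformatted doppelganger values
})

table = dash_table.DataTable(
    data=df.to_dict("records"),
    columns=[{"name": c, "id": c} for c in df.columns],
    hidden_columns=["Revenue_YoY_percent_conditional"],  # keep the helper column out of view
    style_data_conditional=[
        {
            # Filter on the hidden doppelganger column, but style the visible, formatted one.
            "if": {
                "filter_query": "{Revenue_YoY_percent_conditional} < 0",
                "column_id": "Revenue YoY (%)",
            },
            "color": "red",
        }
    ],
)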
The complete statement for the data table is below (with conditional formatting for odd and even rows, as well as highlighting cells that are above a certain threshold using the doppelganger method):
Code Block 22: Data Table with Conditional Formatting
I describe the method to update the graphs using the selected rows in the data table below.
|
https://medium.com/p/4f4257c18a7f#33e7
|
['David Comfort']
|
2019-03-13 14:21:44.055000+00:00
|
['Dashboard', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Dash']
|
Why You Should Invest in the Tech Market Now More Than Ever
|
The 2020 Tech Market
This has been a rather unique year. The COVID-19 pandemic has changed how we work and connect with each other, while markets around the world dwindle and uncertainties reign. Yet the top ten tech stocks (comprising goods and services in software, electronics, computers, AI, and IT) have given investors a total return of 32.2% over the past 12 months.
The three best-valued stocks of 2020 were digital printing technology giant Xerox Holdings Corp. (3.2 trailing 12-month P/E ratio), cyber safety business NortonLifeLock Inc. (3.4), and data-storage company Seagate Technology PLC (7.4).
The fastest-growing were AMD (an impressive 1,300% EPS growth), NortonLifeLock Inc. (620%) and Gartner Inc. (260.9%). In terms of momentum, measured as total return over the last 12 months, the top three performers were NVIDIA Corp. (144.4%), Apple Inc. (90%), and AMD (88.1%).
One of the reasons this sector has done so well is because the technologies it develops have had, and continue to have, a synergistic effect. Each advance makes it possible to innovate in even more products due to increased revenues, combined talent and technology, and cost reduction.
For example, the Internet of things (IoT) and cloud computing have become a permanent part of the modern technology landscape, while blockchain continues to grow as an enabling technology. Also, even though it’s still a few years away, quantum computing has already allowed companies to start planning for a type of exponential development that was previously unthinkable.
Top Tech Sectors
Tech enterprises all across the board are having a positive year. Still, some areas are doing particularly well: IoT (as mentioned above), cloud computing, artificial intelligence (AI), 5G, and blockchain, but also electric vehicles and spaceflight, among others.
This all makes for a perfect environment to invest in the tech sector.
Of course, investment never comes without some degree of risk. The dot-com crash shows us how companies can become irrelevant overnight if someone else makes a breakthrough. One of the tech sector’s caveats is that they must consistently invest large sums of money to keep up with the fast pace. However, tech stocks offer enough of a variety of opportunities that can suit both fresh and experienced investors.
Advice for New Investors
If you’re new to investing, you should first decide if you want to invest or to trade. If you invest, you’ll be buying shares and receiving any payouts made by the business. If you trade, you’ll be speculating about the future direction of prices.
In both cases, it’s recommended that you open a demo account first. Demo accounts let you simulate real trading using pretend money, and they can help you familiarize yourself with the way investing works without assuming any of the risks. Some popular apps are FXTM, AVA, Plus500, and eToro.
Investment comes in many shapes and forms, and you can find the one that best suits your personality and goals, depending on whether you prefer more risk or smaller yet steadier gains. Below you will find some of the most interesting tech companies of 2020, chosen based on growth, stability, and momentum. There are many others, but if you’re new to investing, these names might sound more familiar. The first thing you can do is take a look at them and familiarize yourself with this sector’s possibilities.
Once you’ve decided on an approach, picked a few possible candidates, and tested the waters using a demo, we advise that you develop the following habit: Every month, put money away and automate your investment plan. Then, sign up for a robo-advisor such as M1 Finance, Wealthfront, or Betterment, or install a stock trading app like Public or Robinhood and turn 2020 around!
Top Tech Companies of 2020
Microsoft Corp
The year 2020 has been an excellent one for Microsoft. This upward trend, initially fueled by the acquisition of LinkedIn in 2016, continues with the guaranteed recurring revenue provided by the Windows upgrade cycle and the Microsoft Office cloud-based subscription model, as well as the steady growth of their Xbox games ecosystem.
Microsoft’s cloud computing division Azure soared an impressive 59% last quarter, while Microsoft Teams is being adopted globally with much success. Microsoft is not only attractive because of its steady growth. It’s also just won a lucrative contract with the U.S. Department of Defense worth up to $10 billion over the next ten years.
Tesla
Tesla specializes not only in electric vehicles but also in energy storage and solar panels. The company’s mission is to “accelerate the world’s transition to sustainable energy.” The Model 3 was voted the best electric car available in 2020, which is impressive considering Tesla was a pioneer that inspired many successful rivals.
Tesla is at the vanguard of an inevitable global shift toward sustainable energy and is already producing photovoltaic modules and solar cells in “gigafactories” built in the US and abroad. Their philosophy of constantly innovating places them high in our list of outstanding tech stocks.
Adobe
Adobe is behind a suite of iconic widely-used applications that include Photoshop, Illustrator, Premiere, InDesign, and many others. Adobe works with a cloud-based, software-as-a-service subscription model, which guarantees a stable group of professional users committed to their products. One of Adobe’s main attractions is that it grows steadily year after year, between 15 and 20%.
The company’s stock has risen considerably due to work-from-home during COVID-19 times. Adobe is on both the IBD 50 list of top growth stocks and the IBD Long Term Leaders list, which spotlights companies with superior performance and stable earnings growth.
Dell
Dell started as a computer company. However, in recent years, it has expanded into a diversified portfolio that includes storage, data center, servers, and the cloud — not to mention the acquisition of an 80% stake in VMWare, currently valued at around $50 billion.
The commercial PC business grew 60% last year, and the success of VMWare shot its shares from $25 in March to $50 in July 2020. Dell is still, at its core, a legacy hardware business, so its decision to bet on VMWare is a consistent one.
Facebook
Facebook, which also encompasses WhatsApp, Instagram, and Messenger, is expected to grow 35% next year. Not only does it have over 2.6 billion users, but it’s been growing steadily, around 8% annually. Even if the business is broken down into segments after the 2020 election, each segment will most likely maintain its value.
Facebook’s biggest asset is its enormous ability to gather data. They know everything about their users and use that knowledge to create apps, giving businesses the capability to generate quality leads and close sales. Although Facebook faces some challenges in terms of government regulation, it still has considerable untapped growth capabilities.
IAC/Interactive Corp.
This media company is the top-performing tech stock of 2020. Run by media legend Barry Diller, IAC’s stock grew from $2 a share in 1995 to $250 at the beginning of 2020.
Its stakes include online dating giant Match Group (Tinder, Match, OkCupid, PlentyOfFish, and others), Expedia, Live Nation, and Lending Tree, as well as Vimeo and Dotdash. In an era of social distancing, the market for IAC’s products is likely to keep growing.
Disney
Disney’s business model is rather expansive. Its income is generated mainly by its Media Network division (Disney Channel, ABC, ESPN, and National Geographic). It also runs film and studio businesses such as Pixar Animation Studios and 20th Century Fox and owns the successful Marvel and Star Wars film brands. Not to mention that Disney also runs several resorts and theme parks.
One of its newest additions is the streaming service, which capitalized on an enormous amount of exclusive content. Thanks to all these divisions, Disney continues to grow steadily and makes for a great investment opportunity.
Alibaba Group Holding
The closest analog to Amazon in China, Alibaba, generates over $70 billion annually in revenue. Their focus is set on its retail division, particularly electronic retailing, whose earnings fund other business endeavors that in 2020 include cloud computing and a booming entertainment division.
Alibaba has also expanded into food delivery, merging its service Ele.me with its lifestyle app Koubei in 2018. The group has been trading tightly near highs since July.
Alphabet
Although most popular because of the Google search engine, Alphabet comprises the Android operating system, its own cloud computing division, and enormous services like Google Play, Google Maps, and YouTube.
While they are less known, Alphabet also has a few hardware projects and investments in firms like Waymo, makers of autonomous vehicles. Mostly focused on growth, Alphabet has been reliably gaining share price since 2015 and seems to continue that trend in the foreseeable future.
Amazon
Amazon is indisputably the king of commerce. It not only ships products around the world quickly and cheaply but also owns the largest cloud-computing company in the world: Amazon Web Services (AWS), which accounts for 9% of their revenue.
Amazon also holds a diversity of businesses such as Whole Foods, Kindle e-readers, and Echo smart speakers, not to mention Amazon Prime, which offers video, music, and audiobooks. If you had bought $1,000 of Amazon stock in 2010, your investment would be worth $14,400 today.
Apple
Finally, Apple is most popular for its hardware, including the iPhone, iPads, Macs, and the Apple Watch. Slowly turning toward software and services, Apple already has successful proprietary operating systems and applications like iCloud and Apple Pay and is working on growing its own music and TV services, including a new streaming app. Although there was a slowdown in phone sales, more growth is expected, and dividends have shown steady increases. It seems that Apple is not going anywhere for a while.
|
https://yisela.medium.com/why-you-should-invest-in-the-tech-market-now-more-than-ever-ae298120cd48
|
['Yisela Alvarez Trentini']
|
2020-10-24 12:36:30.929000+00:00
|
['Investing', 'Technology', 'Business', 'Startup', 'Tech']
|
We Need to Talk About the Racial Investing Gap
|
Investing is a crucial way to build wealth and secure your financial future. There are numerous ways people may choose to approach investing: stocks, bonds, high-yield savings accounts, real estate, and more. You would expect everyone who has disposable income to consider available investment opportunities. Unfortunately, there is a group of individuals who invest less often than others: women of color. Today we will investigate the reasons behind this and attempt to find a solution.
Reviewing the numbers: How many women decide to invest?
First of all, let’s look at this financial activity from the gender perspective. Men and women seem to have equal opportunities, yet their investment styles are quite different. The statistics from the SoFi investing platform make that difference painfully clear:
On average, of those who have recurring deposits, men contribute 48% more than women.
Overall, men contribute 32% more than women.
53% of men choose the most aggressive investment plan, compared to 38% of women.
At the same time, women accomplish particular education- and money-related tasks better than their male counterparts:
Women pay around $200 per month more on their loans and pay off the loans 10% faster.
Women earn two-thirds of degrees and half of master’s degrees and doctorates.
The existence of the investment gap is evident.
Investment gap starts with a gender pay gap
Payscale.com reports that women earn 81 cents for every dollar men earn. There is some reasoning behind this discrepancy: women are more likely to stay home to raise children and to seek lower-paid positions that offer the flexibility to look after their families. However, that explains the gap only up to a certain point; this is where discrimination comes into play. According to PewResearch.org, “about four-in-ten working women (42%) said they had experienced gender discrimination at work, compared with about two-in-ten men (22%) who said the same.”
The difference in investment strategies: Gender gap
Factors such as education and cultural environment contribute to the gender gap as well. Merrill Lynch conducted a survey on women’s financial wellness. Here are the findings:
“41% of women wish they had invested more of their money.
The cumulative lifetime earnings gap between men and women at retirement age equals $1,055,000 (based on earnings from age 23 to 65).
When it comes to managing investments, only about half (52%) of women say they are confident, compared to 68% of men.”
Acorn recently published statistics about the gender investing gap. It highlights the following differences:
While admitting the lack of knowledge in investing, only 26% of men cited it as a concern, compared to 37% of women.
56% of women admitted to never following the financial markets, compared to 29% of men.
When asked how much they invested the previous year, 44% of men said they hadn’t invested at all, a smaller share than the 57% of women who said the same.
If given $1,000 right now, men would be 2.5 times more likely to invest that money in stocks.
Gender pay gap is worse for people of color
Circling back to Payscale’s report, women of certain ethnic groups earn less than white men and women. American Indian, Alaska Native, Black and Hispanic women end up getting 75 cents for every $1 men make, compared to 81 cents for $1 that white women make.
The graph below demonstrates the gender gap by race — how much women earn compared to men:
How wide is the investment gap compared to caucasian families?
The Association for Financial Counseling and Planning Education conducted research in 2010 on racial and ethnic differences in high-return investment ownership. They found that “in the 2004 and 2007 Survey of Consumer Finances datasets, 30% of Hispanic, 36% of Black, and 65% of White households had high return investments such as stocks, investment real estate, or private business assets”.
The study has also shown a significant difference between the nature and amount of investments, depending on the family’s racial profile. For instance, 58% of white families owned stocks, compared to only 28% of black and 22% of Hispanic households. The amount of stock investments also varies greatly: white families invested $143,758, black households only had $14,015, and Hispanic homes had $14,162.
Check out the break-down of other investment instruments along with amounts in the table below:
The gap exists in venture capital too
Investing is not the only area showing gender and race gaps. Start-up funding allocation is an issue as well. According to Entrepreneur.com, women-led companies received only 2.2% of Venture Capital funding in 2017, out of a total funding pool of $85 billion.
Project Diane reports that “since 2009, black women-led start-ups have raised $289MM in venture/angel funding, with a significant portion of that raised in 2017. This represents .06% of the $424.7 billion in total tech venture funding raised since 2009.”
It’s important to note that the One-Million-Club is growing. That said, the average funding raised by black women is significantly lower than the overall start-up average:
It is time to make a change
How do we change the narrative and ensure that women of color invest at the same level as women of other ethnic groups?
It starts with education. All women need to understand they should invest, and it will serve them well if they do. We need to minimize the cultural and racial biases, as well as prevent discrimination. Once women believe in their abilities and stop limiting themselves, the statistics will go through a drastic change.
There are several resources you can start with:
|
https://medium.com/an-injustice/we-need-to-talk-about-the-racial-investing-gap-9b31d54e3e26
|
['Joanna Henderson']
|
2020-05-22 09:00:12.622000+00:00
|
['Society', 'Women', 'Equality', 'Money', 'Race']
|
We don’t know what we eat… Only Blockchain can bring the solution.
|
Broken supply chains and rising food prices
All countries are currently facing food price surges and shortages due to broken supply chains. In this turbulent context, it is often impossible to make sure that the agricultural products remaining on our stalls are actually safe to eat and free from poisonous crop-protection residues.
So, does it really make a difference to consume products whose crop protection, harvest, storage, and shipping methods we can trust with our lives?
The obvious answer is YES. In fact, unhealthy food production processes are responsible for an increasing number of deaths, projected to exceed 5 million a year by 2050.
The only solution, Blockchain
So how can we reliably separate the good products from the bad? How can we make sure all agricultural products, including animals, are given the care they deserve? Only one answer: Blockchain.
Imagine an immutable, decentralized and always accessible database. A real travelogue containing all production details, accessible as easily as scanning a QR code on the final product.
At Fieldcoin, we are able to capture this critical information by providing Agribusiness 4.0 technologies to our farmers. This way, they benefit from extensive use of these state-of-the-art technologies to greatly improve and fully monitor their yields, while remaining 100% transparent with the consumer.
|
https://medium.com/predict/we-dont-know-what-we-eat-only-blockchain-can-bring-the-solution-27d734937289
|
[]
|
2020-06-10 11:14:22.670000+00:00
|
['Food', 'Health', 'Health Foods', 'Blockchain', 'Covid 19']
|
Is It Easy to Move From GCP to AWS?
|
Set Up the New VM
Now we need to open a terminal window and connect to the new VM:
ssh -i ~/.ssh/amazon.pem [email protected]
The username ubuntu is the default username for VMs with an Ubuntu image. Yours may be different. When it prompts you about whether this is really the server you’re trying to connect to, say “yes.” You should be in the newly provisioned VM. It should already have Git installed. You can check by running git and seeing if you get the help printout. Docker wasn’t installed on my VM, but running docker gave me a hint on how to install it:
sudo snap install docker
Those are the only things you have to install on your new VM!
One of the most labor-intensive parts of this is copying the configuration files. Because the configuration files contain secrets, they can’t be on GitHub or DockerHub, so they should be somewhere separate. In my case, I keep config files in the directory below where I keep all my GitHub repositories on my laptop. I’m going to copy these files up to the home directory of the new VM. You might think that since they contain sensitive information, you should put them somewhere clever, but if someone hacked into your VM, they’re probably smart enough to find wherever you hid them.
There are two ways to copy them to the VM: either use scp, or cat them on your laptop, create the files with vi in the home directory of your VM, and cut and paste. I used the latter process because I always have trouble remembering the exact syntax of the scp command. There are five files to copy: config.yml , kamradtfamily.net.key , kamradtfamily.net.pem , phpapp.env , and phprest.env . These were all created in my previous articles, so you’ll have to read through them to find out the contents, as I can’t present them here without redacting most of them.
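For reference, the scp route would look roughly like this (the key, username, and file names are taken from this article; the VM address is a placeholder for your instance’s public DNS or IP):
scp -i ~/.ssh/amazon.pem config.yml kamradtfamily.net.key kamradtfamily.net.pem phpapp.env phprest.env ubuntu@<your-vm-address>:~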
Next, we’re going to get the deployment scripts and public configuration scripts that are in GitHub. This is a simple clone operation — again in the home directory on the VM:
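The clone itself is a single command; the account path below is a placeholder, and the repository name is inferred from the directory mentioned a little further down:
git clone https://github.com/<your-account>/phpappprod.git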
I used the https protocol instead of the ssh protocol since we won’t be updating the repository from the deployment host and don’t want to be bothered with the SSH keys.
This should create a directory called phpappprod . cd to it and then start up the system:
sudo docker-compose up -d
If everything is correct, you should be able to browse the new site at https://phpdemo.kamradtfamily.net/.
If all goes well and you’ve used my code, you’ll get to a login screen. There’s not much you can do with it. If you’ve followed along with my previous articles and set up your Okta service, you’ll be able to log in. When I log in, I get my wine-and-cheese pairing application, and I can tell it’s a new instance because the list is blank.
I can add a pairing and test out the REST read microservice on https://phprest.kamradtfamily.net/api, which should give me the new database record in JSON.
My point here is that by staying close to vanilla VMs, you can transfer your knowledge from one cloud provider to another. I had no experience with AWS before this, yet I was able to deploy an application in an hour.
If you’re an employer looking for a new developer who can deploy to AWS, you just have to look for Docker Compose experience instead of specific AWS experience. If you’re a developer, you can go into a shop and jump right in if they run AWS, GCP, Azure, or some other cloud provider. It’s a good reason to avoid the temptation of utilizing all the gadgets and gizmos of the cloud provider you choose.
|
https://medium.com/better-programming/is-it-easy-to-move-from-gcp-to-aws-645b319b226f
|
['Randal Kamradt Sr']
|
2020-11-09 15:27:34.459000+00:00
|
['Docker Compose', 'Docker', 'AWS', 'Google Cloud Platform', 'Programming']
|
Strange Times: Visualising the Oddities of Time Data
|
Time is a common dimension for visualising data and people have created temporal visualisations for a very long time. The ‘earliest known attempt to show changing values graphically’ is a planetary movements chart dating from the 10th or 11th century, shown below, with time running horizontally. The timeline — mapping a sequence of events by time — is a classic visualisation form. (For a history of the timeline, have a look at the book Cartographies of Time by Daniel Rosenberg and Anthony Grafton, which has a wonderfully rich selection of illustrations.)
But visualising data in timelines can come up against the strange ways humans understand and experience time. We may agree that time is passing at a uniform rate, but there are complications. There are time zones and daylight savings. The human experience of time passing does not always match up to the rate of the clock (“time flies when you’re having fun”). Different cultures conceive of the shape and orientation of time differently. Which direction feels “natural” to draw the arrow of time can be influenced by the writing direction we use (this timeline in Arabic has time going right to left). Looking back on the past, more recent events appear in greater focus than those farther back.
In this post, I discuss how the ways we think about time shape the data we create (even if it’s not immediately obvious), taking the example of historical time. Illustrated by my own work visualising data from digitised museum collections, I explore how the designer can choose to either emphasise these peculiarities or to conceal them, highlighting other characteristics of the data instead. Time is fundamental to making sense of digitised museum collections (data that describes museum holdings: objects, artworks, texts, etc.) and visualisation can be a powerful way to analyse, explore, and present patterns and stories in this data, but it is a domain where the oddities of time data can really rear their heads.
Uncertainty
Take uncertainty, for example. A common issue with historical dates is that they are often not precisely known, but estimated to a stretch of time. Date information for an item may be given as a span (eg. 1940–45) or accompanied by some qualifier like “approximately” or “circa.” For artefacts from a very long time ago, this uncertainty can be very large.
The purpose of your visualisation and what you want to draw attention to will inform how best to deal with this. When visualising a large number of data points with uncertain date information, if you start slapping error bars on everything, you’re in danger of that uncertainty becoming the entire communication of your visualisation. And your intention in visualising the data may not be quantitative analysis or to present an indisputable temporal order. You may, instead, be interested in revealing general patterns and connections across a large collection.
In a project visualising Cooper Hewitt Smithsonian Design Museum collection data (which I’ve written more about here and here) I experimented with using a collage layout to position data with uncertain dates. Photos serve as the data points, positioned at a random place within their timespan (time running horizontally). All the images are then spread out so that nothing is overlapping while keeping some portion of each photograph within its timespan (see the diagram below). The design does not draw attention to the fact that many of the dates are uncertain, but the overall impression also does not encourage the viewer to read the visualisation for precise date information. In a similar vein, others have explored using a normal distribution or spreading items in a grid structure to organise items within time spans like this.
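As a rough sketch of that kind of layout logic (an illustrative simplification, not the project’s actual code: here each item is kept strictly inside its span), every item can be dropped at a random x-position within its date span and then nudged to a higher row until it no longer collides with anything already placed:
import random

def collage_positions(spans, min_gap=5.0, row_height=1.0):
    """spans: list of (start_year, end_year) tuples; returns one (x, y) per item."""
    placed = []
    for start, end in spans:
        x = random.uniform(start, end)  # anywhere within the item's uncertain timespan
        y = 0.0
        # Bump the item up a row while it collides with something already placed.
        while any(abs(x - px) < min_gap and abs(y - py) < row_height for px, py in placed):
            y += row_height
        placed.append((x, y))
    return placed

print(collage_positions([(1940, 1945), (1942, 1942), (1900, 1950)]))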
|
https://medium.com/nightingale/strange-times-visualizing-the-oddities-of-time-data-bc1fee487153
|
['Olivia Vane']
|
2020-02-11 17:49:30.806000+00:00
|
['Museums', 'Design', 'Data Visualization', 'History']
|
Physics, Life, and Everything Nice
|
Physics, Life, and Everything Nice
What a hot cup of coffee tells us about the history of life on Earth
What is life? Life, the empirical phenomenon, is (to use a simple definition) a collection of molecules that are capable of self-replication, with slight mutations. More interesting, however, is to ask: why is life?
The answer may lie in thermodynamics, in the laws that govern the flow of energy.
Let’s say you have a mug of coffee sitting in a room. Heat energy is stored in that mug, in a rather distinct configuration.
(In the universe, collections of heat energy in the particular shape of that coffee mug are bound to be quite rare).
The Second Law of Thermodynamics tells us that energy will flow, out from the mug and into the room, to make this configuration less distinct. It will try to organise itself in a form that’s more common, random, or disorderly. To use the scientific term: from low entropy to high entropy.
When energy flows, it enters molecules and atoms, exciting them and sometimes causing them to enter new — and again, distinct — configurations. This might seem strange: after all, isn’t energy only flowing because it doesn’t want to remain in distinct configurations? It’s a bit of a paradox, but these small instances of increasing distinctness or order actually help increase the system’s overall randomness, “boringness”, or disorder.
They’ve absorbed energy which was all concentrated in a mug, and are now bouncing around, touching other molecules, spreading the energy far and wide. Thus, they’re restoring the entire room to the less distinct, more evenly-spread-out configuration of energy it was in before you introduced the coffee mug.
This brings us back to the question we started with: why is life? Consider an undersea vent a few billion years ago. Energy is flowing from molten lava into frigid water. There’s a high “gradient”, or difference, between these two. Thermodynamics tells us that energy must flow to reduce the gradient and make the configuration of energy more random.
As molecules around the lava are excited by this flow of energy, they begin to join together into orderly little groups of molecules, which would bounce around, joining up with other groups to form new ones. By moving around in chaotic ways and imparting that energy to more molecules, these orderly little groups actually help increase the overall disorder of the entire system.
As they agglomerate, these groups of molecules develop all sorts of random chemical properties. At some point, some groups develop the property to feed off that energy gradient and make copies of themselves.
Let’s say that we have two such groups, both of which are capable of self-replication. One group — let’s call it Super-Molecule A — is perfect at it. Every replica is exactly the same as its “parent”. The other, Super-Molecule B, is somewhat imperfect. Every replica is slightly different from its “parent”. Which one survives and thrives?
The answer, again, might be counter-intuitive — it’s the second one.
Why does the imperfect B survive and thrive better than the perfect A? Because the descendants of the first group can’t get better at processing energy. They’re all only as good as the initial template. Whereas the second, through random reconfigurations, creates a whole bunch of copies, some of which are worse at processing gradients — and some of which are better. The ones that are worse make less and less copies of themselves. The ones that are better process more gradients and create more copies of themselves, some of them worse, some of them even better. The effect snowballs across generations. B outbreeds A, and becomes the ancestor of life.
An orderly little machine, processing an energy gradient to make slightly different replicas of itself, and thus increasing the overall disorder in the system’s distribution of energy.
Of course, life didn’t stay confined to waters around volcanic vents. The little low-entropy machines, constantly seeking to feed off energy gradients, spread out across the world into a plethora of different environments, all with different levels of energy available.
They had to. How else would the trapped and bunched-up energy get evenly spread throughout the Universe?
Entropy is fundamentally about equilibrium: about things being equal. It all goes back to the Big Bang.
“Before” the Big Bang, energy and matter were all contained in one infinitesimal point. Time and the fundamental forces of nature didn’t exist. Order and disorder had no meaning, energy and matter saw eye to eye, and everything was happy and equal.
Then came the Beginning. The Bang.
Nobody knows what exactly happened — but once it was over, all these concepts decoupled from each other. Time and space emerged to become the fabric upon which gravity and the atomic forces played their great game, creating new elements in the furnaces of new stars and ordering the cosmos into galaxies and superclusters.
And what was — is — the purpose of the game? It’s actually very simple: to return to that state of perfect equilibrium until space and time, order and disorder, energy and matter have just as much, or just as little, meaning as they did when they started out.
There were two ways this could happen. The laws of physics could have mandated that things stick together, becoming more orderly over time until they were infinitely ordered. Instead, our Universe mandates that everything drifts apart, becoming more random over time until they are infinitely disordered.
This is the only reason that energy — and even, for that matter, matter — obeys the Second Law of Thermodynamics. There were many possible paths the Universe could have taken in establishing its post-Bang physics, and this is the one it chose.
As you can see, it takes something very, very special to apparently defy this fundamental law.
In each environment, the “special somethings”, those little machines, that were life, diversified further. They became more complex and orderly, becoming highly specialised types of machines that could process distinct “slices” of energy available in a given environment. Solar, energy stored in other machines, anything: it didn’t matter as long as it was energy.
Types that were good at it made more versions of themselves. Types that weren’t died off.
Over hundreds of millions of years, this drive to mutate, to increase internal complexity in order to increase overall randomness in the distribution of energy, created incredibly complex machines.
From insensate little creatures, they mutated over generations into forms that could move, developed little photosensitive patches that could sense variations in light and chemistry in the environment around them, grew larger, and figured out how to transfer information within their bodies and coordinate responses.
They learned how to eat other machines and how to escape being eaten by other machines. They began to evolve in response not just to energy-environments around them, but in response to energy-environments now shaped by other machines. Complex systems of interactions developed, that we now call “ecosystems”.
The machines called plants processed solar energy. Herbivores processed plants. Carnivores processed herbivores. And so on.
Fish. Amphibians. Dinosaurs. Mammals.
Every now and then, changes in energy distributions sparked off the emergence of radically new machines that were better suited to doing their task. Massive volcanic events blotted out solar energy, cascading through ecosystem networks and killing off many species. Continents drifted, stranding some machines in their own distinct energy environments that allowed for even more diversification and specialisation. New evolutionary pressures incentivised the emergence of new forms of life.
Eventually, a mutation in one rather insignificant machine gave it the ability to process language. That was the ancestor of Homo Sapiens.
Language was a revolution. All of a sudden, human life-machines could interact to a far greater degree of sophistication than ever before, leading to the emergence of more complex networks.
These social networks enabled them to coordinate to hunt and take down much larger machines, from trees to mammoths. Humans were now flexible enough to harness a much more massive range of energy sources.
The result? More self-replication, more entropy. A population explosion. Migration. Large-scale ecological change.
Human social networks were organised in a plethora of different ways. But one, in particular, proved to be especially good at enabling them to harness and redistribute energy: The State. States that were better at processing energy and increasing entropy could become more and more orderly and complex and better at doing that (paradoxical, I know). Other social structures that weren’t up to the mark were simply gobbled up by states.
Within states, human functions further specialised. By connecting a vast number of humans together, states allowed the emergence of “professions”. By enforcing trust in an imaginary concept called “money”, states allowed specialised humans to interact in increasingly complex ways. Cultures and civilisations emerged on the fringes of the deep eternal drive to process energy gradients and increase entropy.
This process is what we call “history”.
States weren’t invincible, of course. They had their own sets of vulnerabilities. Vast empires could be shattered by a slight change in solar energy levels, or the emergence of a rapidly-spreading biological machine (like a plague) which destroyed information and energy redistribution networks and thus rendered their degree of complexity unsustainable.
Consider the Roman Empire. A fall in insolation levels in the 3rd century CE made agriculture less sustainable, which cascaded up various human networks in chaotic ways, causing famines and wars, eventually leading to a “mass extinction” of social complexity similar, in principle, to mass extinctions of biological complexity that the Earth has seen countless times.
Eventually, though, humans evolved social networks that were sufficiently complex to import energy from other parts of the world, and extract it from the Earth itself. That done, states became practically impervious to environmental redistributions of energy (such as volcanic eruptions). The result? Exponentially increasing social complexity and order. Ever-increasing entropy.
An example: what business does energy have being stored in coal and oil? Humans evolved organisational structures that were complex and orderly, extracted the energy, and turned it into spinning tyres and puffing exhaust pipes — a more random and high-entropy configuration.
Modern human networks are incredibly complex and interconnected. They are vastly more resilient to environmental change, as mentioned. Yet they are also changing the environment to a much vaster degree than ever before. For example, chopping forests can lead to the introduction of zoonotic diseases that were hitherto confined to species that we rarely interacted with, tearing into delicate human organisational structures and bodies, leading to the global COVID-19 pandemic we are struggling with as I write this.
Or, in the near future, a global environment that is inhospitable to (for example) plants, which process solar energy into forms that sustain other life forms, will cascade through now densely interconnected human networks, causing famines, wars, and mass migrations, and cause an extinction of social complexity that will make the collapse of the Roman Empire seem like a child kicking over a sandcastle.
The solution? Change our social and economic systems so they damage the environment less. But these highly-ordered systems are the basis of human existence as we know it — to change them will require an unprecedented global effort that leads to a totally different way of life. Whether we will actually do so remains to be seen.
But not all hope is lost for all human life-machines, not yet. We could save ourselves.
Or not. The Universe doesn’t care. Energy gradients must not persist. Entropy must increase.
And it will — whether you like it or not.
More on history: Anirudh Kanisetti is a researcher and writer. He also hosts two history podcasts you might be interested in, Echoes of India: A History Podcast and YUDDHA: The Indian Military History Podcast.
|
https://medium.com/snipette/physics-life-and-everything-nice-942d2db2cf03
|
['Anirudh Kanisetti']
|
2020-05-15 07:01:01.353000+00:00
|
['History', 'Physics', 'Entropy', 'Thermodynamics', 'Science']
|
This Christmas, It Is Well with My Soul
|
Joy amidst sorrow —
In 10 Things You Probably Don’t Know about Me, I wrote that people might be surprised to find out that I love traditional Christian hymns like Amazing Grace and It is Well with my Soul.
Have you ever heard It Is Well With My Soul by Phillip Bliss and Horatio Spafford?
The music is deeply sad in tone and feeling. I’m not musically educated enough to explain why. I just know it touches something deep inside me, even without knowing the story and hearing the lyrics.
But those lyrics! Here’s the first verse and the chorus:
When peace like a river, attendeth my way,
When sorrows like sea billows roll;
Whatever my lot, Thou hast taught me to say
It is well, it is well, with my soul. Refrain:
It is well, (it is well),
With my soul, (with my soul)
It is well, it is well, with my soul
And here’s a sung version I find incredibly moving —
The story?
Horatio Spafford, a middle-aged American businessman, penned those words one dark, freezing night in 1873 on a ship in the middle of the Atlantic — as he steamed past the spot where only weeks before, his four daughters had perished in the wreck of the passenger ship Ville du Havre.
He’d received the news in a two-word telegram from his wife, “Saved alone.”
Spafford penned this plaintive yet defiant poem as his soul cried out in grief and despair. He would grieve, but he would not give in. He would absorb the blow and he would be well.
Bliss composed perfect music for the piece. It makes my own soul vibrate in electric sympathy.
I’m hanging on to that this holiday season, as best as I can.
It is well with my soul.
Perhaps if I repeat that phrase often enough, and listen to the hymn often enough, I’ll find truth in it.
|
https://medium.com/james-finn/this-christmas-it-is-well-with-my-soul-224c57f6d656
|
['James Finn']
|
2018-12-24 10:01:00.615000+00:00
|
['Family', 'Joy', 'Grief', 'Christmas', 'Music']
|
Properly identifying environmental heroes
|
When it comes to the environmental movement, the lyrics to a 1980s hit from Welsh singer Bonnie Tyler keep rattling in my mind.
I need a hero; I’m holding out for a hero ’til the end of the night.
Now, real environmental heroes do exist. Greta Thunberg is a no-brainer. The 16-year-old humbly protested climate change by leaving school every Friday to picket in front of the Swedish parliament. From there, she’s taken a leading role in inspiring students around the world to stand up for climate action.
But, for years, a broad range of groups have anointed people with more questionable credentials as environmental heroes. For a stretch, Time magazine ran an annual list dubbed “Heroes of the Environment.” In 2014, the United Nations released a documentary called “Climate Heroes: Stories of Change.” And, winners of the annual Goldman Environmental Prize have been described as “environmental heroes” in press materials.
This desire to ordain green heroes should be expected. From civil rights (think, Rosa Parks) to labor (Cesar Chavez), we can point to heroes who played integral roles in important and pivotal social change movements. But it’s worth questioning whether this honorific is being devalued in the environmental space.
Every movement needs heroes like Rosa Parks (above) (Source: Wikimedia Commons)
After all, when Time posted its inaugural “Heroes of the Environment” list in 2007, it included 45 individuals alongside the whole Toyota Prius design team. The Guardian nit-picked that list, arguing that the likes of Prince Charles and Richard Branson didn’t deserve inclusion.
Still, even if every member of Time’s honor roll did merit some recognition, calling them all heroes is likely an oversell. Keep in mind, Greek mythology introduced the hero as a “legendary figure often of divine descent endowed with great strength or ability.” If all of Time’s “Heroes of the Environment” satisfied that definition, the environmental movement likely would have already met every conceivable goal (and presumably even some inconceivable ones).
Admittedly, we’ve long since ratcheted down this meaning. But even modern explanations of what makes a hero require getting over a pretty high bar. A few years ago, an Inc.com author listed five qualities necessary to earn the title: courage, selflessness, humility, patience and caring. While ticking all these boxes may not be as epic as slaying the Minotaur in the Greek meaning of the word, there’s little doubt that some — if not many — of those getting environmental hero plaudits probably don’t quite deserve it.
One could make an argument there isn’t a huge downside in generously sprinkling hero prestige on more, rather than fewer, people. Maybe endowing that designation will inspire some of those inaccurately branded to actually reach full-fledged hero status.
And yet, in these cynical times, this perspective is regrettably misplaced. As most of us implicitly know, whether it’s the media or politicians, Americans nowadays are quick to cast a jaundiced eye on institutions or individuals. Throw in a little exaggeration and let the collective eye-rolling really commence.
This dynamic creates a situation where the public can, as the cliché goes, easily toss the baby out with the bathwater when it comes to environmental heroes. Such a move would serve a gross injustice to environmental efforts because individuals who truly meet the hero criteria can deliver an enormous payoff for their causes.
“Heroes elevate us,” Psychology Today explained in 2014. According to New York University professor Jonathan Haidt, true heroes can trigger what he termed “elevation.” This concept suggests people “feel a mix of awe, reverence and admiration for [the type of] morally beautiful” acts that heroes often perform. Such emotions can draw people into serving in the name of a hero’s cause. In other words, heroic inspiration can make other people better. The Psychology Today article goes on to suggest that heroes can also “heal our psychic wounds,” “nourish our connections with other people,” “show us how to transform our lives” and inspire us to act heroically.
Greta Thunberg (above) has been fearless in advocating for climate action (Source: Wikimedia Commons)
With such upside, heroes are a necessary part of environmental action. Consider the 16-year-old Swedish activist Thunberg, who was nominated for the Nobel Peace Prize earlier this year. She has been fearless and eloquent in standing up to political leaders — all of whom are much older than her. Last month, for instance, she met with Britain’s Environment Secretary Michael Gove. “It takes a lot for Michael Gove to feel shame,” the British newspaper The Telegraph wrote. Nevertheless, the minister came out of a listening session with Thunberg acting “unusually contrite.”
“As I listened to you, I felt great admiration but also a sense of responsibility and guilt,” Gove said publicly to a group that included Thunberg. “I recognize we have not done nearly enough to deal with the problem of climate change … Suddenly, thanks to the leadership of Greta and others, it has become inescapable that we have to act.”
Thunberg, who has so far made her greatest impact in Europe, is a beginning. But more undeniable heroes — particularly ones in the United States — are necessary. Along those lines, we must have an open mind (and eye) for individuals who fully embody the criteria necessary to be heroic. And when those people come to the fore, we must hold them up and help give them the stage to speak truth to power. Because, unlike most politicians or businesspeople, a hero almost always talks directly to the hearts of regular people.
|
https://medium.com/the-public-interest-network/properly-identifying-environmental-heroes-c870ec8214a5
|
['Josh Chetwynd']
|
2019-05-08 18:06:37.388000+00:00
|
['Social Change', 'Heroes', 'Greta Thunberg', 'Climate Change', 'Environment']
|
The Rise of Hijabi Bloggers
|
A phenomenon of halal celebrities is beginning to arise, taking the Muslim fashion scene to new levels. Women we had never heard of are now household names among Muslimahs. These women have mastered social media, display a unique fashion sense, and some of them even maintain side businesses.
Admirers of these various blogs and Muslim Fashionistas have said that they finally feel they can be fashion forward while still maintaining their religious identity.
But critics of these personalities say that the Muslim bloggers are in fact slaves to the fashion industry and are promoting the objectification and sexualization of hijab and modest fashion.
Let me take you back ten years to when I first started wearing the hijab. Triangular gray, black, and white scarves pinned at the neck with the two ends tied in the back. Anyone remember that one? That style and design was neither appealing nor attractive in any manner. It really was a struggle to wear it in high school when all the other girls looked so put together.
Had I seen, at that point, some of the Amenakin (Pearl Daisy) hijab tutorials in which she beautifully incorporates the tikka (decorative jewelry that hangs in the middle of the forehead), I wouldn’t have been awkwardly stumbling around Pakistani weddings wearing a grandma-style dupatta.
I know of many girls who take off their hijabs either before or after marriage, and in some cases may even feel hijab is the reason why they aren’t getting proposals. As Muslim women we have a fine balancing act, between modesty and beauty.
Having access to Muslim bloggers who can offer creative ways to style modest clothes and hijabs can be an asset. They are not self-proclaimed experts but normal Muslim women who have usually been approached by their fans to share their skincare regime, weight-loss tips, and how to get that smoky eye right.
I may not agree with everything that Muslim fashion bloggers promote, but I do think many of their ideas and tips are creative and inspiring. The hijab and the act of dressing modestly are a personal journey for each woman, and something each of us can improve on.
|
https://medium.com/biscuits-and-banarsi/the-rise-of-hijabi-bloggers-feed6a00acb7
|
['Islamic History Notes']
|
2018-09-10 02:05:21.602000+00:00
|
['Modest Fashion', 'Hijab', 'Family', 'Muslim Fashion', 'Creativity']
|
Chan Zuckerberg Initiative is Supporting Bokeh
|
We are excited to announce that Bokeh has been awarded a grant of $250,000 USD by the Chan Zuckerberg Initiative! The funding will help improve Bokeh for academic use-cases. Besides general maintenance of the project, two new features will be added to Bokeh — support for mathematical text in Bokeh plots, and SVG outputs for plot layouts.
We’re pleased to be collaborating with Makepath for this initiative. Makepath is a data science company that develops data-intensive applications with a focus on distributed computing, web service architectures, and data visualization. Makepath personnel were active Bokeh contributors in the past, and we can’t wait to work with them again!
About Chan Zuckerberg Initiative
The Chan Zuckerberg Initiative (CZI) is a new kind of philanthropy that’s leveraging technology to help solve some of the world’s toughest challenges — from eradicating disease, to improving education, to reforming the criminal justice system. Across three core Initiative focus areas of Science, Education, and Justice & Opportunity, they’re pairing engineering with grant-making, impact investing, and policy and advocacy work to help build an inclusive, just and healthy future for everyone.
As part of their Open Science work, CZI’s Essential Open Source Software for Science program supports software maintenance, growth, development, and community engagement for critical open-source tools. Through three distinct rounds of funding, CZI has awarded $11.8 million USD to support over 67 open source projects.
Proposed Work
Bokeh has three main goals for this initiative:
Add support for mathematical notations in Bokeh plots
Extend SVG outputs to plot layouts
Focus on project maintenance and sustainability
LaTeX is widely used in academia for typesetting mathematical text in scientific documents. In order to make Bokeh more useful for academic journal outputs, the capability to use mathematical notations (using standard LaTeX syntax) will be added to Bokeh. This capability will be added to all text rendered inside plots, e.g. plot titles, axis and tick labels, and annotations. It will also extend to selected Bokeh widgets, such as the Div widget.
Bokeh can generate SVG outputs for individual plots, including individual plots in a layout, like gridplot. A single SVG export encompassing an entire layout will be more convenient to use in journals and other publications. The capability to generate these SVG outputs for plot layouts will be improved. The feature will be easy-to-use and thoroughly documented. It will also integrate well with the support for mathematical text to allow SVG outputs for plots that have LaTeX syntax.
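For context, per-plot SVG export already works today; a minimal sketch, assuming Bokeh 2.x with selenium and a webdriver available for export, looks like this:

from bokeh.plotting import figure
from bokeh.io import export_svgs

# Build a simple plot and switch it to the SVG output backend.
p = figure(title="Example", plot_width=300, plot_height=300)
p.line([1, 2, 3], [4, 6, 5])
p.output_backend = "svg"

# Export the single plot; layout-wide export is what the grant work will improve.
export_svgs(p, filename="plot.svg")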
Bokeh users have invested their time and effort to utilize Bokeh for their own use-case and projects, and want the project to continue to be active and supported. With a focus on general maintenance, we want to ensure steady or increasing project momentum. The project’s ongoing needs around documentation, bug fixing, and infrastructure will have dedicated resources. In addition, we will work to develop a pipeline of new contributors and future leaders within the project team.
Significance of Proposed Work
As of November 2020, Bokeh has nearly 1.4M package installs per month, with around 80K regular documentation visitors. Bokeh has been referenced in over 500 biomedical research scholarly papers, and over 450 projects have a dependency on Bokeh. These metrics suggest that a substantial number of Bokeh users will benefit from active maintenance and project improvements.
The proposed features have been highly anticipated over the years. Supporting mathematical content is an upvoted feature in the Bokeh tracker. SVG for plot layouts is requested in multiple external sites and projects. These features also affect downstream projects, like Holoviews.
Bokeh conducted a “Bokeh in Biosciences” survey in May 2019 and July 2020 to understand how the bioscience community uses Bokeh. We learnt that most of the respondents use Bokeh for presentations in journals and on websites. A significant number of users also asked specifically for the proposed features.
Project Roadmap
Development will officially begin in January 2021 and will continue throughout the year. The work plan has been divided into four phases — onboarding, followed by 3 months of focused work on each of the mentioned tasks. Follow the discussion on Bokeh’s Slack, on the #czi channel.
Bokeh’s general roadmap has been documented on the website. Besides development, the project also needs help with documentation, design, testing, outreach, and more. If you are interested in contributing, say “Hi!” to us on Slack, we would love to hear from you!
Thanks for your support!
We extend our deepest gratitude to Chan Zuckerberg Initiative for funding Bokeh, and a huge thanks to Brendan Collins and Makepath for collaborating with Bokeh for this program. We’re super thrilled to be working with you!
We’d like to thank Sumana Harihareshwara, the founder of Changeset Consulting, for her invaluable help with the EOSS application process. Sumana is a member of the PSF Project Funding Working Group, and has spearheaded multiple successful grant writing initiatives.
Thanks to the “Bokeh in Biosciences” survey responders. Your inputs played a key role in shaping the project goals. We look forward to keeping in touch!
We would also like to thank Bokeh’s existing sponsors — NumFOCUS, Blackstone, Anaconda, NVIDIA, RAPIDS, Quansight, REX, and Nom Nom Data for helping support Bokeh’s infrastructure costs. A very special thanks to Bokeh’s core team for making sure the project runs smoothly every day. Lastly, thanks to all the Bokeh users and contributors who keep this project alive and active.
|
https://medium.com/bokeh/chan-zuckerberg-initiative-is-supporting-bokeh-d810e7f1587a
|
[]
|
2020-11-19 17:24:17.178000+00:00
|
['Science', 'Data Visualization', 'Bokeh', 'Open Source', 'Grant']
|
Creating a dynamic DAG using Apache Airflow
|
Today we want to share with you one problem we solved by using Apache Airflow. We have a project comprising more than 40 apps. Every day we have to load data from on-premise databases to the cloud—particularly, to AWS S3. The process is performed in batch and executed every day. Due to particular reasons, the data has to be loaded to S3 without any further pre-processing. Tables to be loaded to AWS S3 may vary regularly, so it should be easier to add such tables as input sources — since we don’t want to make a DAG for every table we have in the databases.
We decided to solve the problem described above by automatically generating an Airflow DAG. In particular, we designed the solution so the DAG workflow can be generated from a simple YAML file, i.e., given a YAML file containing the table names to be loaded to AWS S3, Airflow should automatically generate DAG tasks for loading that data. So, in this post, we want to take you through our solution. We hope you find it useful!
YAML file
Tables to be loaded to S3 should be specified in the YAML file. Furthermore, since it's a batch process, we need to include a date-type field for filtering such tables based on the execution date. So, we perform incremental loads, i.e., only data from the day before is loaded into S3.
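For illustration, the YAML could look like the commented structure below, and loading it from the DAG file is a one-liner with PyYAML. Note that the field names (tables, name, date_field) and the file path are assumptions for this sketch, not necessarily the actual schema.

import yaml

# Assumed structure of the config file:
# tables:
#   - name: orders
#     date_field: created_at
#   - name: customers
#     date_field: updated_at
with open("/usr/local/airflow/dags/config/tables.yml") as f:
    config = yaml.safe_load(f)

tables = config["tables"]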
Dynamic Task Generation
Once the YAML file structure is defined, we can build the logic for our dynamic DAG! The first thing to do is to define two tasks using dummy operators, i.e., the start and the end tasks. These are the tasks we are going to build our DAG upon by dynamically creating tasks between them — at this point this may be a little confusing, but once you see the graph everything is going to be clear.
Then, the next step is to define a Python function that creates DAG tasks. In particular, we create such tasks using PythonOperator. The function should receive as arguments the task id; a Python function to be executed, i.e., the python_callable for the Python operator; and a set of args to be used during the execution.
We include the task id as an argument so we can exchange data among the dynamically generated tasks, e.g., via XCom.
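A minimal sketch of such a helper is shown below, written against Airflow 1.10-style imports; the exact argument names are my own, not necessarily the original ones:

from airflow.operators.python_operator import PythonOperator

def create_load_task(dag, task_id, python_callable, op_kwargs):
    """Build a PythonOperator for one table, passing the task id through
    op_kwargs so the callable can push/pull XComs under a predictable key."""
    op_kwargs = dict(op_kwargs, task_id=task_id)
    return PythonOperator(
        task_id=task_id,
        python_callable=python_callable,
        op_kwargs=op_kwargs,
        provide_context=True,
        dag=dag,
    )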
Setting the DAG workflow
The final step is putting together the workflow for the DAG, which is really easy if you got to this point.
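Putting it together, a sketch of the DAG file loops over the tables loaded from the YAML and fans the generated tasks out between the start and end dummy operators, using the helper defined above (names and defaults here are illustrative):

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator


def load_table_to_s3(table, date_field, task_id, **context):
    # Placeholder for the real extract-and-upload logic
    # (the getSQLData / upload_to_s3_task functions mentioned below).
    pass


dag = DAG(
    "dynamic_s3_load",
    schedule_interval="@daily",
    start_date=datetime(2019, 10, 1),
)

start = DummyOperator(task_id="start", dag=dag)
end = DummyOperator(task_id="end", dag=dag)

for table in tables:
    task = create_load_task(
        dag=dag,
        task_id="load_{}_to_s3".format(table["name"]),
        python_callable=load_table_to_s3,
        op_kwargs={"table": table["name"], "date_field": table["date_field"]},
    )
    start >> task >> end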
This is what our DAG looks like after putting the code together:
DAG automatically generated by using a YAML file
Conclusions
In this post, we presented a way to create dynamic tasks inside an Airflow DAG. We left the definitions of the functions getSQLData and upload_to_s3_task out of this article, since we consider them out of its scope.
You can find the final code in the next snippet
Hope you find it useful.
Thanks for reading until the end! :)
|
https://towardsdatascience.com/creating-a-dynamic-dag-using-apache-airflow-a7a6f3c434f3
|
['Antony Henao']
|
2019-10-15 17:52:21.880000+00:00
|
['Data Science', 'Airflow', 'Data Engineering']
|
Nashville Star
|
We live in a cynical little world where reality television stars are chewed up, spit out, and thrown up on by Tara Reid literally within fifteen minutes. So it was with some smart-assed how-crazy-will-this-be attitude that I approached the Nashville Star auditions at the Wildhorse Saloon last weekend.
With a line of 1400 that wrapped around the block and the earliest hopeful having arrived at 11 p.m. the evening before, my expectations for some Taradise-style entertainment were high. After speaking to several completely normal people in line, I walked into the Wildhorse a bit disoriented. I mean, I had specifically chosen people with cowboy hats and hadn’t gotten so much as a “yee haw.”
Luckily, I was rewarded with perpetual motion inside. Nashville Star staff brought hopeful contestants inside in groups of ten where they were quickly registered and sent to another line upstairs. On the main stage, a seemingly endless parade of Ambers, Heathers and Brandies crossed to a microphone where they had 30 seconds to belt out an approved song before a quick “Thank You” told the next Heather, Amber or Brandie to step forward.
With the bright lights, the booming P.A., the cameras, and the table of producers, the scene had all the accoutrements of a reality television program. But there was something too hygienic about it. Questions ran through my mind: Where were all the crazy people? Would this procession of brunettes in blue jeans ever end? And how many phone numbers could I get if I started telling them I was a producer?
Well, if you're going to report on hot dogs, you've got to take a peek inside the sausage factory, so I headed upstairs and got my answers. The line of hopefuls was led like lambs to the slaughter upstairs for a pre-audition. In groups of ten, they entered a room with ten tables and a judge at each. A moderator yelled “Begin” and each had to start singing immediately because thirty seconds later, he yelled “Time!” and nearly pushed them out the back door.
The chaos was hilarious. Ten people singing in one room all at once. Some brought guitars which added to the cacophony, some forgot how their songs started, and most didn’t make it. Out of each group usually less than three were led to the stage. For the rest, that was the end of the road. That was about as brutal a filter for big talent as you could imagine.
Downstairs, as the performers left the stage, they were breathless. Typical was the response I received from a stunned Jennine Sandford, “I think it went okay. I mean, thirty seconds isn’t very much time.”
After every 100 or so contestants, the producers would stop the procession and announce the call-backs. Out of those hundred, two or three would hear their names. In a mere five hours, they’d seen the whole lot and picked 42 for Saturday’s call-backs.
If Friday’s hopefuls felt slighted, they should take heart that though Nashville Star may have missed some talent during the cattle call, they certainly didn’t advance anyone without it. Saturday is pro through and through.
It’s also sincere and downright grown-up. You almost wonder how something so mature made it onto TV these days. Honestly, there’s a contestant named “Mike Hunt” and no one even chuckles when they call his name over the PA. I am officially the most childish person in the room.
Andrew Carlton, who made it to the call-backs, describes the entire experience as “sensory overload.” Carlton, like many of those called-back, is a songwriter in town who works a day job to pay the bills. Dean Miller, another local, has been through the wringer in the music business and plays an original called “Music Executive” — a song that makes a joke of the whole circus.
Where nervous brunette girls were a staple of Friday, it’s the composed, employed, adult men who are omnipresent Saturday. How the producers who run the whole show narrowed down the field this well is beyond me. But it’s not entirely surprising given Nashville Star’s track record for uncovering talent. Take, for example, the scorching debut of Miranda Lambert who was the second runner-up in the show’s first season and compare her rocking, ballsy style to the frizzy-haired guy from American Idol whose career found no traction.
I leave the Wildhorse sober and refreshed — two things Tara Reid never feels — and I’m pleased that not all reality television is based on eating bugs, having rich parents or getting insulted by a tightly-pantsed Englishman.
|
https://medium.com/hey-todd-a/nashville-star-b4ff30561fea
|
['Todd A']
|
2020-03-11 23:37:40.473000+00:00
|
['Nashville', 'Music', 'Zine', 'Pop Culture', 'Reality TV']
|
Avoid Null Booleans in Java
|
Java boolean variables (the primitive ones) allow only two possible values: true or false, with false as the default.
We use booleans to represent values that can only express a true or a false statement, such as:
isEmpty
hasSomething
isAlive
canContinue
isBlank
etc
Let's say that you have a list of tasks to do, and you want to know if the list is empty so you can finally go home. So, is the list empty? If it is (true), then you can go home; if it is not (false), then you have to keep working… does a null answer really make sense?
Subject A: “Hey, is the list empty?”
Subject B: “I have no idea”
Subject A: “But should I go home or should I stay here?”
What is the answer? Should subject A keep working? Should he/she wait for a new task?…
Wrapper classes
As Java is an Object-Oriented Programming language, it also provides classes for all its primitive types.
Integer is the wrapper for the primitive int, Double is the wrapper for the primitive double… you see the point here. Well, Boolean is the wrapper for the primitive boolean. This introduces a third possible status for a boolean variable: As objects are nullable, Booleans are nullable too.
So… a Boolean can be true, false… or null.
What does a null Boolean mean?
A null Boolean means that the variable has no reference assigned, so it is neither true nor false; it is “nothing”.
What can we do with a null boolean?
An example that comes to my mind would be the next one:
Suppose you are working on a weather system, and you consume a third-party API. One of the fields in the JSON response is a Boolean, isTornadoAlert, and it can be true, false, or null.
There are several weather stations, in many countries or cities, that do not report the tornado alert, because they do not have the required infrastructure, or because of the typical weather, or whatever… but the thing is that they do not send this isTornadoAlert field, so you won't get true or false, but you will get null instead.
Well, your weather system provides this information to a website, and then this website shows the tornado alert information:
If isTornadoAlert is true, then it shows the proper alert.
If isTornadoAlert is false, then it can proudly say that everything will be ok.
If isTornadoAlert is null, then it can just omit the information. It has no real information, and say that no tornado will happen would just not be true. It simply doesn’t know.
You can just say that null means false, but that won't be exactly the truth. At least not in this situation.
Reasons why I believe it is wrong to use null Boolean values
I believe that the possibility of having null values on Boolean variables is a kind of unfortunate side effect, and this is why:
Boolean Algebra
According to Wikipedia*,
Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0, respectively.
There is no third possible value for the variables in Boolean algebra. So why should null be a possible value for Boolean variables?
Three-valued logic
There exists a three-valued logic, where a variable can be true, false, or indeterminate. But this kind of logic is not supported by the Java language. There are a few possible truth tables in three-valued logic, and none of them actually works with null values in Java.
If you try to run this code, you will see what I’m talking about.
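The snippet isn't reproduced here, but a minimal stand-in of my own shows the failure: as soon as a null Boolean reaches a place where Java expects a primitive boolean, unboxing throws a NullPointerException.

public class NullBooleanDemo {
    public static void main(String[] args) {
        Boolean maybe = null;           // the "indeterminate" case

        // Compiles fine, but throws NullPointerException at runtime,
        // because Java tries to unbox null into a primitive boolean.
        boolean result = maybe && true;

        if (maybe) {                    // would also throw, same reason
            System.out.println("never reached");
        }
    }
}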
Just avoid the Boolean boxed primitive
I strongly recommend not using the Boolean wrapper but the primitive type instead. It is always better: no NullPointerException, no autoboxing/unboxing complexity, no object creation, no identity problems.
Instead of using the wrapper class, I prefer the use of enumerated values. Let’s take the isTornadoAlert example.
We could transform the isTornadoAlert Boolean field to an enumerated tornadoAlertStatus with these values:
TORNADO_INCOMING
NO_TORNADO_DETECTED
UNKNOWN_STATUS
The UNKNOWN_STATUS value tells us a little more than the null value and having an enum gives us the possibility to add more values to the response without breaking the whole API.
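A sketch of that enum (the consuming switch is only an illustration; the handler methods are hypothetical):

public enum TornadoAlertStatus {
    TORNADO_INCOMING,
    NO_TORNADO_DETECTED,
    UNKNOWN_STATUS
}

// Hypothetical consumer:
// switch (report.getTornadoAlertStatus()) {
//     case TORNADO_INCOMING:    showAlert();         break;
//     case NO_TORNADO_DETECTED: showAllClear();      break;
//     case UNKNOWN_STATUS:      hideTornadoWidget(); break;
// }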
Exceptional cases
Generic types:
If you have to use generic types such as Collections, Streams, Maps, Predicates, Functions, Consumer, Suppliers, or your own generic types, you can’t use primitive values. In this situation, you have no alternative but to use the boxed primitive Boolean. However, you should be careful not to have any null value.
Third-Party APIs:
It can happen that some third-party APIs will respond with a Boolean wrapper. If a null value is possible, then I encourage you to document its meaning and, if possible, transform that Boolean to a more representative enum.
Thanks for reading this article!
|
https://medium.com/swlh/avoid-null-booleans-in-java-4a5cd9b23bca
|
['Marcelo Valls']
|
2020-12-23 15:14:47.226000+00:00
|
['Java', 'Object Oriented', 'Software Development', 'Software Design', 'Java8']
|
What is UI design? What is UX design? UI vs UX: What’s the difference
|
What is UI Design?
The “UI” in UI design stands for “user interface.” The user interface is the graphical layout of an application. It consists of the buttons users click on, the text they read, the images, sliders, text entry fields, and all the rest of the items the user interacts with. This includes screen layout, transitions, interface animations and every single micro-interaction. Any sort of visual element, interaction, or animation must all be designed.
This job falls to UI designers. They decide what the application is going to look like. They have to choose color schemes and button shapes — the width of lines and the fonts used for text. UI designers create the look and feel of an application’s user interface.
UI design process by Ramotion
UI designers are graphic designers. They’re concerned with aesthetics. It’s up to them to make sure the application’s interface is attractive, visually-stimulating and themed appropriately to match the purpose and/or personality of the app. And they need to make sure every single visual element feels united, both aesthetically, and in purpose.
What is UX Design?
“UX” stands for “user experience.” A user’s experience of the app is determined by how they interact with it. Is the experience smooth and intuitive or clunky and confusing? Does navigating the app feel logical or does it feel arbitrary? Does interacting with the app give people the sense that they’re efficiently accomplishing the tasks they set out to achieve or does it feel like a struggle? User experience is determined by how easy or difficult it is to interact with the user interface elements that the UI designers have created.
Photo by CareerFoundry
So UX designers are also concerned with an application’s user interface, and this is why people get confused about the difference between the two. But whereas UI designers are tasked with deciding how the user interface will look, UX designers are in charge of determining how the user interface operates.
They determine the structure of the interface and the functionality. How it’s organized and how all the parts relate to one another. In short, they design how the interface works. If it works well and feels seamless, the user will have a good experience. But if navigation is complicated or unintuitive, then a lousy user experience is likely. UX designers work to avoid the second scenario.
Designing in a vacuum leads to less than ideal results.
There’s also a certain amount of iterative analysis involved in UX design. UX designers will create wireframe rendering of their interface interactions and get user feedback. They’ll integrate this into their designs. It’s important for UX designers to have a holistic understanding of how users prefer to interact with their applications.
How They Work Together
So a UX designer decides how the user interface works while the UI designer decides how the user interface looks. This is a very collaborative process, and the two design teams tend to work closely together. As the UX team is working out the flow of the app, how all of the buttons navigate you through your tasks, and how the interface efficiently serves up the information user’s need, the UI team is working on how all of these interface elements will appear on screen.
Let’s say at some point in the design process it’s decided that extra buttons need to be added to a given screen. This will change how the buttons will need to be organized and could require changing their shape or size. The UX team would determine the best way to lay out the buttons while the UI teams adapt their designs to fit the new layout. Constant communication and collaboration between UI and UX designers help to assure that the final user interface looks as good as it can, while also operating efficiently and intuitively.
Research is Key
Research is vital for both UI and UX designers. It’s important for both disciplines to gather as much good information as possible to assist them in crafting appropriate designs, and both follow a similar approach.
Both will research what users want and what they expect from applications of the sort being developed. This research is often iterative, involving usability sessions, where real users will interact with scaled versions of certain functionality or visual designs being tested to determine whether the designers are moving down the proper path. Feedback is integrated with each iteration.
This process involves generating low fidelity prototypes, like wireframe renderings of interface elements in order to gauge a user’s response strictly to the functionality being tested. This can also involve fast visual prototypes and A/B tests of different possible versions of the look and feel of the interface to determine which one users prefer.
Tweet by LukeW
In all cases research helps guide the steps designers take as they build their contributions. However, the information UI and UX designers are looking for is very different.
Research in UI Designs
UI designers need to make sure the visual language they choose fits the class of application they’re writing. They’re trying to predict user expectations. If your team is designing a travel app, it’s important to research how other travel apps have been developed in the past. Which ones worked? Which ones didn’t? There are design lessons to be learned from the work others have done before.
Research might indicate that people prefer outlined icons instead of bold shapes. This is a visual shorthand that people are comfortable with and enjoy. UI designers would then do well to incorporate that lesson.
The exact aesthetic they choose is up to them, but the basic “rules,” or the need to conform to user expectations, is something designers ignore at their own risk.
Not to say risks shouldn’t be taken. UI designers want their interface designs to stand out and be memorable. But this must be balanced against making sure people recognize the purpose of the elements you’re placing on screen.
Research for UX Design
UX design is particularly interested in user expectations. All of the experiences and interactions that users have had with every application they’ve used in their lives have helped set their expectations for how interfaces are supposed to work. If a UX designer isn’t intimately familiar with these expectations, they could inadvertently design an interface interaction that seems logical to them but breaks commonly accepted conventions. Users don’t like when an interface behaves very differently than they were expecting, and this could negatively impact their experience.
If a UX designer decides to do something different, they need to have a very good reason, because breaking a deeply trained expected behavior will likely cause people to do the wrong thing frequently.
As an example, most people are comfortable with the idea that you click twice on a file to open it and once to select it. This is an interface behavior that has existed almost as long as there have been graphical user interfaces.
|
https://uxplanet.org/what-is-ui-vs-ux-design-and-the-difference-d9113f6612de
|
['They Make Design']
|
2020-07-21 12:16:57.670000+00:00
|
['UX', 'UI', 'Design', 'Art', 'Research']
|
Privacy Preserving Deep Learning in Medical Imaging
|
Privacy Preserving Deep Learning in Medical Imaging
Summary of talk from OpenMined PriCon 2020
Photo by Michael Dziedzic on Unsplash
This blog post is inspired by Dr. Georgios Kaissis's talk titled 'End-to-end privacy preserving deep learning on multi-institutional medical imaging data' at the OpenMined Privacy Conference 2020.
OUTLINE
- Motivation
-- Clinical Applicability of AI in Medical Imaging
-- Patient Rights, Legal and Ethical requirements
-- Privacy-Preserving Machine Learning (PPML)
- PriMIA
-- Features of PriMIA
-- Computation Node Setup
-- Federated Training
-- Architecture
-- Encrypted Inference
- Case Study: Paediatric Pneumonia
-- Experimental Setup
-- Result
Motivation
Clinical Applicability of AI in Medical Imaging
With recent advances in deep learning and computer vision, and easier access to computing resources, AI in medical imaging has reached clinical applicability across countries because of promising results in improving diagnosis and in the early detection of diseases.
This can be of great help to counter the lack of radiologists in disadvantaged areas. However, the current methodology of central data collection and model training poses a key problem which is why deploying remote Diagnosis-as-a-Service remains a challenge. We shall enumerate the key problems with ‘data’ and ‘data rights’ in the subsequent section.
Photo by National Cancer Institute on Unsplash
Patient Rights, Legal and Ethical requirements
Even with the proper consent of patients, the data collection process has several inherent challenges in facets of accumulation, storage, and transmission. This primarily impinges on the patients’ right to be informed about the storage and usage of their personal data and medical records.
Even when patients are supposed to be informed about their rights over their data, in disadvantaged communities these rights may not even be communicated to them, which further widens the existing inequality in society. Clearly, this is a difficult problem to address.
In the next section, let’s see the role of Privacy-Preserving Machine Learning (PPML) in addressing some of these concerns.
Privacy-Preserving Machine Learning (PPML)
We shall now enumerate some of the key concepts and advantages of PPML
It essentially bridges the gap between deriving insights from data (data utilization) and protecting the data of the individuals.
Federated learning allows the training data to remain on the premises of the hospital or medical institute, which allows the hospital to retain control over it and enforce better data governance. Put in simple terms, Federated Learning says,
“Do not take the data to where the training algorithm is; Rather, bring the algorithm to where the data is”
Encrypted computation services allow us to protect both the data and algorithm and provide end-to-end services.
It also allows for single-use accountability: the notion that the data collected from the patients is used only for a singular purpose, medical diagnosis, and not research or other marketing purposes.
PriMIA (Privacy-preserving Medical Image Analysis)
Privacy-Preserving Machine Learning has remained in the proof-of-concept stage for some time now. A new tool called PriMIA has been introduced as part of the OpenMined libraries, for federated learning and encrypted inference on medical imaging data.
The goal of the library is to provide securely aggregated federated learning in a multi-institutional setting and provide training algorithms in an encrypted inference scenario.
Features of PriMIA
Framework for end-to-end privacy-preserving deep learning for medical imaging.
A Simple and extensible Command-line Interface (CLI) for secure federated learning and encrypted inference.
Cloud Ready design.
Designed to include current state-of-the-art (SOTA) algorithms.
Flexible to include medical imaging-specific innovations.
The library’s main design goal is to enable a moderately tech-savvy user to perform not more than 3 steps to accomplish the task. The user could be either data owner or data scientist. The steps are outlined below.
Computation Node Setup
If you’re the data owner, and would like to provide data for federated training, the following are the steps.
✅ git clone
✅ Put data in folders
✅ Single CLI call 😊
✅That’s it! ✨🎉🎊
Federated Training
The following are the steps to initiate federated training. There are pre-existing scripts available for hyperparameter configuration over the entire federation.
✅ Single configuration file
✅ Run train.py
✅ Done!🥳
Architecture
A hub and spoke configuration is used, which is preferred over serial processing topologies based on previous works in the medical imaging community.
This architecture supports synchronous training.
Secure aggregation is done by secure multi-party computation using the secret-sharing function. There were tests to break the process of secure aggregation and they generally failed, ensuring a robust secure aggregation scheme.
Hub and Spoke Configuration for federated training and secure aggregation (Image Source)
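As a rough illustration of the idea only (not PriMIA's actual implementation), additive secret sharing lets each node split its update into random shares so that no single party ever sees the real value, while the sum of everyone's shares still reconstructs the aggregate:

import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime


def make_shares(value, n_parties):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


# Two nodes secret-share their (integer-encoded) model updates.
update_a, update_b = 42, 17
shares_a = make_shares(update_a, 3)
shares_b = make_shares(update_b, 3)

# Each party adds up the shares it holds; combining the partial sums
# reveals only the aggregate, never the individual updates.
partial_sums = [(sa + sb) % PRIME for sa, sb in zip(shares_a, shares_b)]
aggregate = sum(partial_sums) % PRIME
assert aggregate == update_a + update_b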
Encrypted Inference
To perform encrypted inference, computational nodes such as a crypto provider and the model server have to be set up; the data owner then initiates a request and receives the result locally.
Most of the work is driven from the client-side, including the output of encrypted JSON files. This encryption happens under a zero-knowledge end-to-end premise.
Illustrating Encrypted Inference (Image Source)
Case Study: Paediatric Pneumonia
To demonstrate the application and effectiveness of the library, there is a case study on data of chest X-rays for detecting paediatric pneumonia. This is an example of remote Diagnosis-as-a-Service model.
Experimental Setup
There are 5163 training images
There are 2 external validation datasets.
Goal: To train a federated remote diagnostic model.
To train a federated remote diagnostic model. The recently introduced confidential computing node has been used for training. A total of 3 computing nodes have been used in the case study.
Result
To assess the performance of the trained model, the results were benchmarked against a locally trained model and two human experts. The results of the study show that the federated trained model
Performed better than the human experts on both datasets
Was found to be competitive with the locally trained model
Some of the key challenges for the next 5 years and active areas of ongoing research are:
Federated learning on heterogeneous data
Integer quantization for secure aggregation
Reducing computational overhead
Here’s the link to the GitHub repo of PriMIA and here’s the recording of the talk.
|
https://medium.com/wicds/privacy-preserving-deep-learning-in-medical-imaging-f944fda3d0f8
|
['Bala Priya C']
|
2020-12-25 14:11:31.519000+00:00
|
['Deep Learning', 'AI', 'Machine Learning', 'Research', 'Blogathon']
|
Reading between the lines of the major operatic event
|
Reading between the lines of the major operatic event
OperaWire Editorial: Repertory Stagnation, Star Power & What We Can Learn About the Opera Industry From Rolex Ambassadors Gala
Sonya Yoncheva — Soprano
Juan Diego Flórez — Tenor
Jonas Kaufmann — Tenor
Yuja Wang — Pianist
Wiener Philharmoniker
Plácido Domingo — Conductor
Gustavo Dudamel — Conductor
23rd of June 2019
This review originally appeared on Operawire.com on the 24th of July 2019.
A prestigious showcase of the opera world, the Rolex Ambassadors Gala, took place at La Scala earlier this month. Big names and outstanding performances. A unique opportunity to see the stars of the opera world together: Juan Diego Flórez, Jonas Kaufmann, Sonya Yoncheva, Yuja Wang, Plácido Domingo and Gustavo Dudamel conduct Wiener Philharmoniker.
But today, I’m not going to review it. After so much time, that’s not really necessary. Instead, I want to draw your attention to what we can see behind the big names and all the splendor. The Rolex Ambassadors Gala is very indicative, and you can learn a lot about the ongoing state of the opera industry if you read between the lines.
Photo from La Scala official press-release
Repertory Stagnation
Let’s start with the obvious. It was obvious that an overture would open the concert, as are my observations on repertory stagnation. In this case, it was the overture from “La Forza del Destino,” which shouldn’t be surprising — it was Verdi at La Scala. It couldn’t get more obvious than that. After this, I could easily predict three other pieces that would be performed: “La Forza…” (again!) for Jonas Kaufmann, who recently had a long run of this opera at ROH; a Donizetti or Rossini aria for Bel canto star Juan Diego Flórez; and “Tosca” for Sonya Yoncheva, the diva who canceled her performance of Puccini in Paris earlier this June. I swear that I didn’t check the program beforehand, but I guessed all three. Another piece of music that I didn’t even need to predict was “Libiamo, ne’ lieti calici” at the end. There were also arias from “Otello”, “Romeo and Juliet” with the most “obscure” pieces of music coming from such operas as “La Juive” and “L’Amico Fritz.” That’s it.
Despite being in the 21st century, we still live in the world of Verdi, Wagner, Puccini, Mozart, Rossini, Tchaikovsky, and Donizetti. But Bel Canto and Romanticism are not the whole opera world. And when we hear something new, we freeze with mixed feelings of interest and fear. I'm not saying that other old and even contemporary composers aren't performed or that the audience isn't aware of them. I just ask: what do you recall when thinking about opera? I think that this is what defines opera for most audience members — an art form where we continue to venerate creators of the past, not the present and possibly not even the future.
This recurrence always makes me think about the audience’s preferences. I assumed that musicians preferred to satisfy rather than educate or discover something new at such events. Gustavo Dudamel is a young star conductor. He has a vivid perspective and he immerses himself into his interpretations. But here he chose Verdi. Verdi is great. But you know, I think Dudamel is more than someone’s greatness. Of course, he did his best, but if I want to showcase myself as a unique young conductor to a European audience that is still getting to know me, I would hit them with an original perspective. But Gustavo Dudamel’s choice was more likely a gesture of solidarity with La Scala and the audience. “If he can’t fight off this repertory stagnation, who can?” I thought.
Priorities
Let’s talk more about the audience. They were still mostly aged. They were either privileged or superheroes who succeeded to get their tickets in less than four hours after the start of sales. They gathered here from all over the world (or at least Europe). And they knew exactly what they wanted. They were excited to see their favorites, who canceled numerous performances recently. They were led by their expectations, so they welcomed Jonas Kaufmann with greater applause than they gave to amazing pianist Yuja Wang. No matter how brilliant the piano performance was, it was just a warm-up. The fact that Kaufmann just appeared on stage was more valuable for the public than the entire first act.
There was another part of the audience who watched the concert via a free live stream. The opera world needs new, younger spectators. The trend of making high-rank opera events available worldwide for free is really important. Some new services like Operavision popped up in recent years. They compete with old players like medici.tv or Arte, which still have some geographical or subscription limitations. The Rolex sponsorship gives Medici an opportunity to fight for the audience with the help of big names.
Opera is expensive. And we all know that even the most expensive tickets cannot pay the price of the production. And the transmissions give an additional cost. So, sponsors are important.
And when such a major patron as Rolex organizes a concert, it certainly puts some pressure on singers. Even on such great singers as those who performed that night.
I can only guess about Sonya Yoncheva’s state that evening, but the fact remains — the soprano has canceled everything of late. Everything, that is, but the Rolex Ambassadors Gala. The same goes for Jonas Kaufmann. This question is a controversial one and reveals some painful issues about the industry and cancelations. Why does it look like sponsors are more important than the audience? Why do our expectations ruin the performance if anyone cancels? Who can cover Jonas Kaufmann? Do you remember the times when there were no names of singers on the posters?
From medici.tv
That night, they were all there. A bit tired, but still at their best or at least managed to show it convincingly. For Jonas Kaufmann, it was especially important. Paris and Vienna will remember his absence for years, but fortunately, the La Scala audience is international. So it was a good opportunity to defuse the situation.
Yet Europe is not like the US, and the audience forgives easily and treats their beloved artists greatly. They certainly talk and criticize, but they sell out a show in a few hours. Once in love, they are loyal and true.
That night they were true to Plácido Domingo. They didn't really care if he would sing or conduct. And you could feel and hear that at the concert. They treated him like a genius and there was no doubt that in their hearts, Domingo was the star, even in the pit. Dudamel, so loved in America, was, as the audience expressed, just another good conductor. His performance was stunning, but he was undeniably number two on this evening. With Domingo, his work was beyond reproach. Plácido Domingo is a Star, a Name. And this is enough. But is this fair? To put two conductors on stage and treat them differently because of their names, not their performances. It's nothing new, though. But witnessing it once again I felt somewhat disappointed.
And yet, it was great to see what Maestro Domingo brought to the performance. Pietro Mascagni’s rare Intermezzo from “L’Amico Fritz” was his piece, and it was brilliant. I found it a perfect and quite original choice for the opening of the second part, which represented a conductor fit for the occasion, and pleased the audience.
It was a different matter when Domingo faced Kaufmann's understanding of “La Forza del Destino.” It never worked out properly. It was unusual for me to witness this collision, which could probably happen because they relied on their previous experiences way too much. Jonas Kaufmann followed his own perfectly expressive line while Domingo led the orchestra in a smoother manner. I thought about how they felt hearing it happen; still, they never looked unhappy with each other. They were okay with it, at least personally.
Encores
Talking about personal moments, we came to the end — the encores. This part of the concert is always reflective of the singer's career statement. The choices he or she makes here often come from the heart. Or sometimes from the record label.
We had both.
From medici.tv
“Are you having fun? There’s some more to come.” Juan Diego Flórez was the first to give an encore. He sang a Mexican song with a guitar. It seemed very simple and natural. I remembered Rolando Villazón’s concert of Spanish songs in Paris this May. The national heritage is a noble thing to share. It is soulful and familiar and also bestows a wide voice range for tenor. It is always a good idea to entertain by educating and sharing some personal music.
Jonas Kaufmann was joking in three languages. He performed a song from an operetta. That looked quite natural, I thought operetta could be a good way to relax. But it was also performed in anticipation of Kaufmann’s new operetta album, which was recently announced. Sony always has a clear plan.
Sonya Yoncheva chose “Ô Paris, gai séjour de plaisir” and seemed to apologize for the cancellation at the Bastille. I found it somewhat comic. But many a true word is spoken in jest. This year hasn't gone smoothly for the soprano in terms of her scheduling and the cancelations she has made due to her pregnancy. We talk a lot about women's rights, but here's a case of a woman's choice and its effect on the opera industry. What does it mean for a star soprano to withdraw from the stage for at least half a year?
The audience would still discuss this concert for a few weeks. Was it great? Definitely. Could it be better? Certainly. Stars fulfilled their duty. The public got another checkmark in their bucket lists.
And I got my conclusions. Names. Expectations. Voices. Duties. Repertoire. Rights. Melodies. Decisions. Labels. Visions. Sales. Good and bad, mixed and collapsed. I felt like I peered through the looking glass and saw a reflection of the whole opera world. The world where I belong. So familiar, with all those wrinkles, imperfections, and the beauty. And I recall many different performances where I’ve seen and talked about these issues before. I find it important to improve what we have. To analyze and to conclude.
But there’s another crucial thing to do — to enjoy. The industry, for sure, has its problems, but it exists as long as we can simply enjoy what we hear and see.
|
https://medium.com/opera-in-review/reading-between-the-lines-of-the-major-operatic-event-f78d343fcdca
|
['Polina Lyapustina']
|
2019-07-25 08:41:01.001000+00:00
|
['Opera', 'Culture', 'Essay', 'Music', 'Classical Music']
|
How to Style your Markdown Content in Gatsby 📝
|
Gatsby offers out-of-the-box support for writing content in Markdown and easily creating pages in your app, and by leveraging the many Gatsby plugins we can do things like formatting code snippets, lazy-loading images, generating an RSS feed, creating a sitemap, making SEO easier and a lot more 🥳
Writing a blog post in Markdown is easy: no need to worry about styling, # makes an H1, ## makes an H2, normal text becomes a paragraph, and adding links, images and so on is just as simple.
But how do you style the Markdown content once the pages are generated? The default browser styles don't look good, so what's the approach 🤔
Here are few ways to style your markdown content 👇
1. Using Tailwind CSS
Tailwind CSS is great 😍 The utility CSS that Tailwind provides is easy to use and you can also generate your own utilities from the config file. For a simple blog, you don't have to use the full config that Tailwind provides; you can spin up🌪 your own version of the config.
How to style
While using Gatsby, we can style the content that lives in pages or components, but how do we style the content written in Markdown, given that the Markdown is processed and Gatsby generates the HTML? Yes, there is a way to style the Markdown content ✨ We can solve this problem using gatsby-remark-classes, a plugin that lets you add class attributes to Markdown elements.
Install:
npm install --save gatsby-remark-classes
How to configure:
gatsby-config.js
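A sketch of what the configuration could look like; the classMap selector keys follow the plugin's convention, while the Tailwind utility classes are just examples:

// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: `gatsby-transformer-remark`,
      options: {
        plugins: [
          {
            resolve: `gatsby-remark-classes`,
            options: {
              classMap: {
                "heading[depth=1]": "text-4xl font-bold mb-4",
                "heading[depth=2]": "text-2xl font-semibold mb-3",
                paragraph: "text-base leading-relaxed mb-4",
              },
            },
          },
        ],
      },
    },
  ],
}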
Advantages
No writing separate CSS for the HTML elements generated in markdown
.blog-post-container h1 { } .blog-post-container h2 { } .blog-post-container p { }
2. Using styled-components
Nowadays styled-components are one of the ways to style your app. But how to style the markdown content using styled-components 🤔
To style the markdown content, we need to define the components first 👇
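For example, a minimal elements.js could look like this (the actual styles are up to you):

// elements.js
import styled from "styled-components"

export const Title = styled.h1`
  font-size: 2.5rem;
  font-weight: 700;
  margin-bottom: 1rem;
`

export const Paragraph = styled.p`
  font-size: 1rem;
  line-height: 1.7;
  margin-bottom: 1rem;
`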
And in the file where you're programming the blog post (here I'm assuming the file is blog-post.js), import the elements into this file.
import { Title, Paragraph } from "./elements";
The Title & Paragraph are the component which we will be using to design the markdown content.
By using rehype-react with the htmlAst field, we can write custom React components and map the Markdown elements to them, as shown below.
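A sketch of the mapping, following the standard rehype-react pattern:

// blog-post.js
import React from "react"
import rehypeReact from "rehype-react"
import { Title, Paragraph } from "./elements"

const renderAst = new rehypeReact({
  createElement: React.createElement,
  components: { h1: Title, p: Paragraph },
}).Compiler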
In your blog-post.js file, replace this line:
<div dangerouslySetInnerHTML={{ __html: post.html }} />
With
{renderAst(post.htmlAst)}
And also add htmlAst in the graphql query:
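For example (the slug filter and frontmatter fields are assumptions, and this assumes `import { graphql } from "gatsby"` at the top of blog-post.js; the important part is requesting htmlAst instead of html):

export const query = graphql`
  query($slug: String!) {
    markdownRemark(fields: { slug: { eq: $slug } }) {
      htmlAst
      frontmatter {
        title
      }
    }
  }
`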
The above-mentioned steps will help in styling your Markdown content with styled-components.
3. Using Classless CSS Frameworks
Say you don't want to write any CSS at all 🤯 Using these classless CSS frameworks, you don't have to write even a single line of CSS.
I have listed some of the frameworks below
Sakura
Water.css
awsm.css
new.css
Bahunya
MVP.css
AttriCSS
Advantages
📦 Bundle size is smaller
🤔 Don’t have to remember the classes
👌🏻 For a simple blog or to beautify markdown parsed content
✨ People who love ❤️ beautiful defaults
You can also refer to this Instagram post to know about Classless CSS Frameworks
|
https://medium.com/javascript-in-plain-english/how-to-style-your-markdown-content-in-gatsby-af43cc20880e
|
['Chetan Raj']
|
2020-10-20 15:49:41.831000+00:00
|
['CSS', 'Gatsbyjs', 'Web Development', 'React', 'JavaScript']
|
My Journey From Open Source Noob to Google Summer of Code 2020
|
My Journey From Open Source Noob to Google Summer of Code 2020
How to get started in open source
Photo by Alex Holyoake on Unsplash
What it means to contribute
If you’re a new open source contributor, the process can be intimidating. How do you find the right project? What if you don’t know how to code? What if something goes wrong?
Don’t worry! There are all sorts of ways to get involved with an open-source project — you don’t need to know everything just to get started.
For anything more than a typo fix, contributing to open source is like walking up to a group of strangers at a party. If you start talking about llamas, while they were deep in a discussion about goldfish, they’ll probably look at you a little strangely.
Before jumping in blindly with your own suggestions, start by learning how to read the room. Doing so increases the chances that your ideas will be noticed and heard.
|
https://medium.com/better-programming/google-summer-of-code-2020-837b262aa581
|
['Shubham Kumar']
|
2020-10-19 08:05:37.168000+00:00
|
['Open Source', 'Gsoc', 'Google', 'Mozilla', 'Programming']
|
Visualising 2018 tree cover loss with Global Forest Watch.
|
Global Forest Watch allows anyone, including you, to take a closer look at what’s happening in places reserved for indigenous peoples. A fast, smooth zoom enables and encourages deeper exploration of what’s occurring. No one can hide from satellites in the sky and with weekly GLAD alerts, any change in forest cover can be detected and reported in almost real-time. This is the power of data visualisation: it provides compelling, irrefutable evidence that allows us to speak up for the people and places that are not being heard.
Good news for Indonesia.
But it’s not all bad news. Indonesia saw a 63% reduction in primary forest loss from its peak in 2016 when new legal protections of peatlands were introduced. More recent government policies—which include a moratorium on issuing new licenses to use land designated as primary forest and peatland—have also helped slow deforestation.
The clearing of rainforest to make way for oil palm plantations has attracted much media attention, and with a recently added ‘Global Plantations’ data layer, you can now use Global Forest Watch to explore where they are located. (Note: only data for 2015 is currently available). Although the moratorium is likely to stop any further conversion of pristine primary forest, GLAD alerts will help researchers, land managers, and interested citizens alike keep watch over Indonesia’s forests.
|
https://medium.com/vizzuality-blog/visualising-2018-tree-cover-loss-with-global-forest-watch-ffbd71ef07be
|
['Camellia Williams']
|
2019-05-08 14:18:47.251000+00:00
|
['Forest', 'Environment', 'Data Visualization', 'Deforestation']
|
Dynamic resource generation of resources using CloudFormation Macros.
|
One of the things you can do with this new feature is generate and deploy a number of resources based on a parameter value or on the result of an API describe call, for example describing the number of AZs in order to create a subnet for each AZ.
In this example I'm defining a dynamic number of IAM users (depending on the integer I provide in the parameter) that will share the same custom EC2 policy for EC2 resources tagged with Owner:devteam. The users will be prompted to reset their passwords on next sign-in. Also, an EC2 instance will be created for each user, simulating a classroom.
How to work with Macros:
In order to use this feature you need two things:
-One stack containing an AWS::CloudFormation::Macro resource along with an AWS::Lambda::Function.
-The stack that will use the Macro, which will process a section of the template via the Fn::Transform function or the whole template via a Transform section (see the sketch below).
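As a minimal sketch (the macro name ClassroomBuilder and the NumberOfUsers parameter are hypothetical), the consuming template can invoke the macro either way:

# Whole-template processing via the Transform section
Transform: ClassroomBuilder
Parameters:
  NumberOfUsers:
    Type: Number
    Default: 5
Resources:
  Placeholder:
    Type: AWS::CloudFormation::WaitConditionHandle

# Or section-level processing with Fn::Transform:
#   Fn::Transform:
#     Name: ClassroomBuilder
#     Parameters:
#       NumberOfUsers: 5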
Biggest Caveat:
- Your Function will receive the whole template minus transform sections.
Take care of how you send the Macro response in your Lambda function.
You need to send the proper “fragment”; this is the processed template CloudFormation receives from the Lambda function.
def lambda_handler(event, context):
    response = {}
    FinalFragment = event["fragment"]

    # ...loops adding new objects to the template go here...

    response["requestId"] = event["requestId"]
    response["status"] = "success"
    response["fragment"] = FinalFragment
    print(FinalFragment["Resources"])
    return response
In the sample attached I'm using Python and just looping to create the number of resources I need in the final template. However, you can go further with your function and implement more complex logic, using the SDK to do API describes, etc.
You can keep several Macros in your account doing different types of processing and use them whenever you need more power to process your templates, which suits a huge number of use cases.
Outcome:
|
https://medium.com/pablo-perez/dynamic-resource-generation-of-resources-using-cloudformation-macros-f6baba75d730
|
['Pablo Perez']
|
2019-10-10 16:01:40.460000+00:00
|
['Macros', 'AWS', 'Dynamic', 'Cloudformation']
|
How to design Mobile Apps that Survive Poor Network Conditions
|
Building an application comes with plenty of challenges. A revolutionary idea might appeal to the public and convince them to download the application.
Following this comes the work that goes into developing the app.
Making sure that every feature is capable of doing what it is supposed to do, finding a good graphic designer to breathe life into the code, and integrating the correct amount of content into the application are a few important aspects that guide the process.
Plenty of other applications are capable of winning users over with a multitude of features and eye-catching visuals. Despite that, there are things you need to keep in mind so that the application stays fully functional even in cases of poor internet connectivity.
As one of the best mobile app development companies in the USA, we at BrainMobi can present some essential factors which might help you reach a conclusion easily.
1. Curate content that doesn’t require much Internet Connectivity:
Almost any application performs well on a good internet connection, but things can get stressful when the network is down. You cannot just expect users to hope for results while staring at a blank page; that could be a deal-breaking factor for your application.
Facebook, Twitter and plenty of other apps cache part of the newsfeed so it loads even when the application is not connected to the internet. They display a message telling the user that the internet is down, but by showing cached content they ensure that their users don't close the application right away.
The design of the application should be woven in a way that pages can work even without an internet connection. So even if it is just a few pages of content, the cached feed can provide at least some offline support, which could be mandatory in specific cases. Such pages should be accompanied by a message that alerts users about the missing network connection.
2. Make Bandwidth Optimization Your Priority
When it comes to designing an application, there are plenty of things that need to be taken into consideration if providing a good user experience is the motive. Bandwidth optimization should certainly be among the top priorities: it is useless to have an application that has plenty of features and is rich in content but takes minutes to load.
Whatever the content of the application, from text to images and animations, neither the application design nor the loading speed should be affected.
The other thing you need to take into consideration for designs offered by mobile app development services is the hierarchy. It is important to make sure that the user can get to the page they are looking for without going through too many steps. This avoids the need to load multiple pages to reach the desired step, and making that happen requires thought and planning.
3. Keep lighter designs into consideration
If you use Facebook, you are probably aware of the Lite versions of both the Messenger and Facebook apps. Lite actually means light: a version of the app that is more minimalistic, or uses lighter graphics, to improve speed in areas with a poor internet connection.
When an app development company designs a lighter version of the application, the intention is to really win users over, and you should be able to set up the application so that it switches to the lighter version. This way users don't need to switch between applications as often or run into poor-network issues.
4. Optimize your Graphics and UI
Unoptimized graphics are in many cases the cause of slow application performance. Often the reason an application loads slowly is that its graphic content has not been optimized, so it is important to optimize the graphics without compromising how they look or perform. Mobile app developers have a lot of tools that help with optimizing graphics.
Images are a popular choice because they draw more responsive feedback from users and enhance the app visually. Still, images load much more slowly and provide a poor user experience on slow connections, so keeping the vital part of the information in text lets users enjoy the service even with slow connectivity.
These techniques have proved lifesaving in many cases and have resolved application issues. This way the user doesn't have to wait for the entire image to load and still gets to see a considerable part of the content.
Conclusion:
So when you are designing an application, you should make sure it looks the best it possibly can and continues to attract users to your side. However, there is not much point in designing a content-based application which users cannot enjoy.
Hence, designing applications with poor network connectivity in mind will not only help them cope with sporadic low bandwidth but might also win the attention of users, which in turn might win the attention of investors for such a brilliant solution. As one of the top mobile app development companies in the USA, our long list of services and endeavors is geared towards delivering that, and we aim to help our clients reach new levels of success.
|
https://medium.com/javascript-in-plain-english/how-to-design-a-mobile-apps-that-survive-poor-network-conditions-a8761a6e95c7
|
['Kamal Damgwal']
|
2020-01-07 12:41:33.288000+00:00
|
['Mobile App Development', 'Programming', 'JavaScript', 'Web Development', 'Technology']
|
Three months dating an Artificial ‘almost’ Intelligence
|
Three months dating an Artificial ‘almost’ Intelligence
How devices like Google Home will change the User Experience
Coincidence or not, three months ago, when my partner left for Thailand on a university exchange, I bought my Google Home. And for the past few months I've been getting a glimpse of what the future with Artificial Intelligence might be like.
I left the Best Buy shop trying to convince myself that I had just bought that weirdly shaped device because of its powerful speakers at a really affordable price. On top of that, I convinced myself that turning my balcony lights on and off by voice, avoiding having to go outside during cold winter nights, was a big advantage. So I added a WeMo voice-control switch to the bundle and, without realizing it, I was ready to start my new relationship.
Before telling how this new encounter developed, I have to quickly explain why I chose Google Home instead of Alexa or Clover, another voice-control device available in Japan, where I live. I love and hate Google, and many of you might share that feeling. But I also have and love my Nexus phone and really enjoy my Chromecast. In that sense, choosing Google Home as my voice assistant device was like starting a relationship with someone who shares your interests.
Then she was there, sitting in my living room, ready to talk. Being a “she” was not my choice; like many other devices, the installed voice could be nothing but the voice of a woman. A bit sexist, right? Happily, Google changed this later on, adding the option to change your assistant's voice. Even so, I decided to keep the female voice. Switching to a male voice would be like starting a new relationship after a failed one-month trial with someone who was not a perfect match. In the beginning it was awkward to talk to her, like all beginnings: you don't know what to say or how to say it just right, so the other side often misunderstands what you are trying to say and reacts in a totally different way.
But as in any new relationship, time passes, you get more comfortable, you grow closer, and the conversations start to flow more naturally, especially once you find useful tasks to help each other with. Someone probably once said that “music is the best way to communicate”, and that person was right. Our best moments so far have happened when I wanted to listen to my tunes on Spotify. In the beginning she would just play them, and that was it. But the more time passed, the more the Google team helped her become smarter. Now I can ask for a specific playlist, ask her to add the current song to my library, and more. And the speakers are really great, by the way.
When we decide to chill and watch Netflix or some YouTube on the TV, the connection happens instantly, because the Chromecast is like someone from her family who simplifies the communication. However, when I finally need to turn on my balcony lights, the conversation gets a little rough. The WeMo connector is like a foreigner who just got a resident visa to the Google Kingdom but still doesn't speak the local language perfectly.
And then comes the time when you have to introduce this “new person” to your friends. It's always a hard mission, as you may have already experienced. You don't know if they will like each other or if the communication will flow naturally, but it has to happen at some point. After introducing the newcomer to friends, though, the pattern is always the same: you want to talk about what she can do and what she knows, and at the same time everyone wants to try exchanging a few words with her. In most cases the result is very good, especially because people are getting more and more interested in these kinds of devices.
But it's when the anxiety and excitement of the new relationship passes that you really start to realize how having this digital entity in your routine can actually become useful. Using voice as an interface somehow makes technology less invasive and less stressful. Being able to listen to the news, ask for the weather, and save and receive reminders without the almost constant need to look at a phone screen feels good in the end. When friends are hanging out at my home, being able to ask a question without someone searching Google alone on a phone is also great. The conversation doesn't get interrupted by a bright screen, and the answer reaches everyone's ears at the same time, as if someone in the room had the correct information about the topic.
Another delightful experience is watching the traces of personality that came from the “parents”, aka the developers. Without even knowing that certain features exist, you might stumble on hidden possibilities, like the time I just said “time machine” and she started rolling out facts about things that happened on the same day in the past. There were also moments when she surprised me, like when I asked if she likes the American English accent and she replied with a totally anti-xenophobic response, saying that all people are the same, or something like that. Or when, before sleeping, I just dropped a “sing me a lullaby” and there it was.
Of course, as in all relationships, we also had our bad moments. Sometimes it feels like the things you are trying to say are completely unintelligible. Excuses like “Sorry, I cannot help with that yet” or “I'm sorry, it seems that whatever device is not available at the moment” just make me lose my temper and yell something like “you are stupid”, which gets quickly answered with a “sorry, I'm still learning”, which is a fact.
Artificial Intelligence-based devices are all projects still under development. They all work through automatic, pre-programmed actions, which is why they are Artificial ‘almost’ Intelligences. But the future lies ahead: soon they will think for themselves, and then their relationship with us will be more complex than simply using such devices as assistants. Real relationships between AI and humans will emerge, and all the social logic we have been following until now will change.
Mateus Bagatini (Tokyo JP)
Content creator
|
https://medium.com/q-n-english/three-months-dating-an-artificial-almost-intelligence-79b712bbbe15
|
[]
|
2017-11-29 12:21:28.429000+00:00
|
['AI', 'Voice Assistant', 'Qnenglish', 'Amazon Echo', 'Google Home']
|
‘Vanished Gardens’ “Jumps Boundaries Of Conventional Labels” Says Jazz Visionary Charles Lloyd
|
“The recording is definitely a cross-pollination of different worlds,” says Charles Lloyd, reflecting on the unclassifiable but eminently accessible musical terrain of his fourth Blue Note album, Vanished Gardens, where jazz improv, blues, gospel and Americana are inextricably intertwined. “It’s not easy to give what we are doing a category,” he says, “but if it’s great, it doesn’t matter what genre it is identified by. Labels can be so misleading, anyway.”
Vanished Gardens is the 80-year-old saxophonist/flautist’s second album with The Marvels, a supergroup whose ranks feature noted guitar maestro Bill Frisell, a fretboard virtuoso long renowned for his musical shape-shifting. He’s joined by country-influenced pedal steel and dobro expert Greg Leisz, alongside a jazz rhythm section comprised of bassist Reuben Rogers and drummer Eric Harland. It’s an unusual, multicultural and multi-genre mesh of talents but, as the group’s debut album, 2016’s I Long To See You, convincingly demonstrated, they sound like they’ve been playing together for years.
What’s different this time around is the presence of triple-Grammy-winning folk troubadour Lucinda Williams, whose weathered, smoky vocals grace five of Vanished Gardens’ ten tracks. “After we released I Long To See You, Lucinda came to one of our Marvels concerts in Santa Barbara,” says Lloyd, recalling how the singer-songwriter came on board. “She, Bill and Greg had known and worked together on several projects spanning a couple of decades. I knew of her from Car Wheels On A Gravel Road (her Grammy-winning album from 1999) and loved what she does. Following that meeting, she invited me to guest at her concert at UCLA a few months later, and I invited her to guest at one of my concerts. We then decided we should go into the studio to document what we were doing.”
“I don’t think there is a precedent for this recording”
The end result is a magical convergence of talents from different musical worlds: six musicians from diverse backgrounds who create alchemy together and take the listener on a journey into a new and hitherto undiscovered sonic landscape. “I don’t think there is a precedent for this recording,” says Lloyd. “Lucinda and I jumped into a river of music flowing toward the unknown. We found that the river widened with all of us in there: Lu, me, Bill, Greg, Reuben and Eric… all swimming in the same direction, but not necessarily the same stroke.”
“All swimming in the same direction, but not necessarily the same stroke.” From left to right: Greg Leisz, Lucinda Williams, Charles Lloyd, Eric Harland, Reuben Rogers, Bill Frisell
They achieved a rare sense of musical communion on Vanished Gardens without sacrificing what makes them unique as musicians, which the veteran saxophonist is keen to emphasise. “Lucinda was not turning into a jazz singer and we were not transforming our approach to become country/Americana musicians,” he says.
Williams contributes four original songs to Vanished Gardens, all gems. Though pensive, they are deeply passionate explorations of the human psyche. ‘Dust’ is a solemn existential meditation, while ‘Ventura’, though lighter in tone, is a wry confessional in which the mundanity of life is juxtaposed with the elemental beauty of nature. Lloyd plays an eloquent, unaccompanied saxophone solo to introduce the slow, waltz-time ballad ‘We’ve Gone Too Far To Turn Around’, an anthem of perseverance in the face of adversity. The energetic ‘Unsuffer Me’ is more overtly optimistic, about finding redemption through love. “Lu is a great poet,” says Lloyd, eulogising the Louisiana-born singer-songwriter’s gift for marrying words and music. “Her imagery is visceral and visual — unexpected reflections into human emotions.”
The fifth Vanished Gardens song to feature Williams’ voice is the album’s closer, a unique take on Jimi Hendrix’s much-covered ballad ‘Angel’. “This was a song that Lucinda had picked out to sing,” explains Lloyd. “The session was over, everyone had left the studio except for Bill and me. She said, ‘I wish we had been able to record “Angel.”’ Bill and I agreed to give it a shot and we did it in one take.” Though extemporised at the last minute, the combination of Williams’ plaintive voice with Lloyd’s fluttering saxophone notes and Frisell’s skeletal guitar filigrees is magical. For Lloyd, the song also brings back vivid memories of his friendship with the song’s composer. “Jimi and I knew each other from our days in Greenwich Village,” he reveals. “We had spoken of doing something together, but time ran out.”
“The utopia of our dreams”
Central to The Marvels’ sound is Bill Frisell’s distinctive guitar, which is subtle and often understated but also powerfully magnetic. The 67-year-old Maryland musician plays in an eclectic yet singular style that references jazz and bebop but is also steeped in folk and Americana. “Bill is a wonder,” says Lloyd. “He is one of the most versatile and expansive musicians I know. He brings humour and depth to whatever he does. We have a deep simpatico on and off the stage.”
Frisell’s guitar, with its spidery, staccato notes, is a key component of the title song to Vanished Gardens: a meandering meditation on loss which ebbs and flows and whose title is an elegiac metaphor for the current state of the world. Lloyd, its composer, says, “‘Vanished Gardens’ refers to the utopia of our dreams, a garden of Eden, which, in the current political climate, is being eroded away like a garden with no attention to erosion control.”
The most jazz-influenced track on Vanished Gardens is an absorbing version of Thelonious Monk’s classic composition ‘Monk’s Mood’, which is reconfigured as a duo for Lloyd’s tenor saxophone and Frisell’s guitar. “Monk is the great architect of our music,” says Lloyd, who knew the idiosyncratic composer/pianist very well. “We used to play opposite each other at the Village Vanguard.”
Indelibly engraved in Lloyd’s mind is a curious incident that happened backstage at the Vanguard when he was on the same bill as Monk in the 60s. It still makes him smile and encapsulates both the mischievous and rebellious side of Monk’s personality. “I had a requirement on my rider that every night I had to have fresh orange juice in the dressing room which Monk and I shared,” recalls Lloyd. “He always had a glass when he came in each night, but one night the juice was not fresh, so when the Baroness [Pannonica de Koenigswarter, Monk’s patron] came in, I told her to ‘please tell Monk not to drink the juice tonight because it’s tainted.’” On Monk’s arrival, the Baroness warned him that the orange juice was off but that didn’t deter the pianist, who, according to Lloyd, “danced his way around the room to the pitcher of juice and picked it up”. What happened next stunned the saxophonist. “He then danced his way back to me, and while staring me in the eyes, drank the whole thing down. He said, ‘Tainted, huh?’ and danced off.” Lloyd still laughs at the recollection, which, he says, “reminded me of the Tibetan monk, Milarepa, who took poison and turned it into soma”.
“Rock groups wanted to be on our bill… we were opening the music up so much”
Like Thelonious Monk, Charles Lloyd is regarded as a mystical figure in jazz. He famously retreated from the music scene at the end of the 60s to live an ascetic, solitary life in Big Sur, California, and it was there that he immersed himself in the pursuit of spiritual enlightenment for many years. “My candle was burning from both ends and was about to meet in the middle,” the saxophonist admits; he says he stepped away from the jazz world in a bid for self-preservation and to heal himself.
His career, though, had begun so spectacularly. Originally from Memphis, Tennessee, Lloyd began playing the saxophone when he was nine, though the musician that had the most profound impact on him, he says, was a pianist, Phineas Newborn. “He was my earliest influence and mentor,” reveals Lloyd. “His affect has been lifelong. I attribute the seed he planted in me for being responsible for all of the great pianists I have worked with.”
In 1956, Lloyd left Bluff City for Los Angeles, and, in 1960, he joined drummer Chico Hamilton’s groundbreaking quintet, replacing the estimable Eric Dolphy. “[Saxophonist] Buddy Collette was responsible for that,” says Lloyd. “After I graduated from USC, I was teaching in LA. Buddy knew that I wanted to play, so when Eric left he called Chico and said, ‘I have just the right sax player for you.’ It was a great learning experience, especially after he made me music director. I was able to bring [guitarist] Gabor Szabo and [bassist] Albert Stenson to the band. It was a dream team for a while.”
Lloyd then joined Cannonball Adderley’s band before leaving, in 1965, to lead his own quartet with pianist Keith Jarrett, bassist Cecil McBee and drummer Jack DeJohnette. “We all loved exploring the unknown,” says Lloyd of a group that liked to travel to “far-out” musical destinations and yet still made accessible music. “We were young idealists and the timing was right for us to come together.”
The quartet became the darlings of the American counterculture scene in the late 60s and were the first jazz group to play alongside rock and blues acts at promoter Bill Graham’s legendary Fillmore West venue. “A San Francisco group called The Committee used to come hear me play,” says Lloyd, recalling how his quartet registered on Bill Graham’s radar. “They told me I should be playing at a place called The Fillmore where there were a lot of young people. When I asked who else played there they said Muddy Waters. I knew him so I said OK, and then Bill Graham booked me one afternoon for half an hour.”
The quartet went down so well with the hippies that they weren’t allowed to leave. “The audience kept us on stage for over an hour,” remembers Lloyd. “After that, the rock groups wanted to be on the bill with us because we were opening the music up so much and they wanted that experience, too.”
Firing arrows into infinity
After the highs of the late 60s, Lloyd, by his own admission, was burned out. The 70s found the saxophonist in a meditative frame of mind and, though he still recorded intermittently, the records he made were more New Age in style than jazz. That all changed in 1986, when, according to the saxophonist, “I nearly died.” Struck down with a serious intestinal disorder, he had to undergo emergency surgery. Understandably, the experience changed him and made him take stock of his life. “When I recovered, I decided to rededicate myself to this music called jazz,” says Lloyd. “I had been gone for so long they made me get at the back of the line. It was a long, slow, re-entry.”
But Charles Lloyd is nothing if not persistent. By dint of hard work and dedication to his art, he’s built up a large and impressive body of work during the last 30 years, ensuring that he’s now at the front of the line and rightly revered as a jazz elder. Though he turned 80 in March 2018, Vanished Gardens shows that his desire to create new music — what he calls “firing arrows into infinity” — is stronger than ever.
Having just returned home from a successful summer tour of Europe with The Marvels, Lloyd is set to play three concerts at the Newport Jazz Festival, on Rhode Island, during the first weekend of August 2018 to celebrate his 80th birthday. On Friday, 3 August, he’ll appear with the trio Sangam (along with tabla specialist Zakir Hussein and drummer Eric Harland), and the following day he’ll perform with his usual quartet (with Rogers and Harland from The Marvels, and Jason Moran on piano).
His closing concert at Newport, on Sunday, 5 August, is billed as Charles Lloyd And Friends With Lucinda Williams. Though Bill Frisell can’t make the gig, Williams’ presence means that the saxophone magus will play some of the material from Vanished Gardens, an album that articulates his desire to make music that, he says, “jumps boundaries of conventional labels”.
Vanished Gardens is out now and can be bought here.
Join us on Facebook and follow us on Twitter: @uDiscoverMusic
|
https://medium.com/udiscover-music/vanished-gardens-jumps-boundaries-of-conventional-labels-says-jazz-visionary-charles-lloyd-3fb02ebb3610
|
['Udiscover Music']
|
2018-08-10 17:27:27.068000+00:00
|
['Pop Culture', 'Features', 'Culture', 'Jazz', 'Music']
|
Scaling Transformer-XL to 128 GPUs
|
Yaroslav Bulatov, Ben Mann, Darius Lam
TLDR; we made Transformer-XL train efficiently on 128 GPUs on Amazon cloud. The code is available at https://github.com/cybertronai/transformer-xl
Overview
One of the difficulties of researching language models is that you often don’t know if your ideas work until you try them on real-world datasets. However, training on such datasets on one machine can take weeks.
Fortunately there’s a straightforward recipe to speed up this process:
Step 1. Get a good single machine model
Step 2. Run N copies of the model on N machines in parallel, synchronizing at each step
Step 3. Solve all remaining technical challenges
We used this recipe to reduce ImageNet training time from 2 weeks to 18 minutes. You could also apply the same optimization to train a model in 2 weeks that would originally require 4 years, so you can choose to scale up your research in scope instead of iteration time.
With minutes to train, image model training is “mostly solved” while language model training time remains an obstacle to innovation. Our goal was to see how much we could increase training throughput by distributing our training in the cloud.
For the single machine model, we settled on Transformer-XL’s official implementation. This architecture achieved several state of the art results in language modeling. The authors made it easy to reproduce their results by releasing code and instructions.
Once we reproduced the accuracy in their README, we iterated on performance to get about 2.6x improvement in throughput without hurting quality.
We then extended this model to train on multiple machines, using all-reduce to synchronize the gradients. Distributed language models send at least 10x more data per step than typical image models, so we used AWS p3dn instances for their 100 Gbps network connectivity. Those instances had 32GB of GPU memory so we were able to increase training efficiency another 25% by increasing the per-GPU batch size. The net result: a 64-GPU version of small Transformer-XL model trains about 44x faster than the original “slow” 4-GPU implementation. Scaling up an optimized medium-sized Transformer-XL model, 128-GPU version trains 13.2x faster than 8-GPUs version.
The training procedure required changes to prevent numerical divergence at larger batch sizes, so we followed the recipe provided by Google in their 1024 TPU scaling paper. We’ve open sourced this optimizer in a standalone repo.
Technical details
Dataset
We primarily experimented with Wikitext-103, a standard language modeling dataset composed of a pre-tokenized subset of Wikipedia. The training set contains approximately 100M tokens and 267K unique words in 515MB. It uses a modified Moses tokenization, with 0.4% of words replaced with <UNK> symbols to limit vocab size. The Transformer-XL codebase loads the entire dataset into GPU memory at the start of training.
To speed things up at training time, we pre-encoded our datasets and baked them into our AWS disk images.
Infrastructure
We’ve automated many common tasks that come up with prototyping a distributed training system and wrapped them into a tool, ncluster. The philosophy was that any frequent task should be a single command-line invocation. Some examples:
1. Bring up a number of instances for a test run at spot prices
2. Fix a bug and relaunch a multi-machine experiment in under 30 seconds.
3. Log into the most recent machine to see what it’s doing
4. Keep a folder on the remote machine synchronized with a local folder
5. Log into another user’s machine to troubleshoot an issue
Network
We tried various network configurations to find settings that would optimize throughput. Running iperf3 tests, we could see 93 Gbps throughput between machines, while NCCL all-reduce performance dropped to about 50 Gbps. One feature of the AWS network setup is that each network flow is limited to about 10 Gbps, so we used multiple NCCL rings to enable multiple flows. Because of a bug in a recent version of NCCL, we had to revert to an older version, which enabled multiple flows but didn’t use the bandwidth efficiently. In particular, the amount of bytes transferred grew 4x as we increased the number of GPUs, which is not expected from a bandwidth-constant algorithm like ring-allreduce. However, as we got LAMB working and our local batch size saturated 32GB of RAM, our network requirements dropped enough that, even with excessive per-machine communication load, we did not hit the bandwidth limit.
At some point, we noticed that our most bandwidth efficient runs were underperforming, with performance dropping as much as 30% (below left chart). Some investigation revealed that running too close to the memory limit was causing the CUDA caching allocator to repeatedly run out of memory and cause extra synchronizations during garbage collection (below right chart). The solution was to reduce the memory usage slightly by dropping the batch size, which made the problem disappear.
Left: backwards time in ms, right: torch.cuda.memory_cached()
We found some throughput variability between runs. For instance an unlucky 64 GPU run would take 2.5 minutes to go through one epoch of wt103, while a “lucky” 128 GPU run could do it in 1 minute. We believe this is due to AWS cluster placement group being approximate: AWS tries to place all instances in a placement group onto a locally connected configuration, or “brick,” but when this is impossible, some instances will spill over to neighboring bricks.
Mixed precision
Relying on 16-bit precision enabled us to use less memory and improve processing speed by utilizing Volta TensorCores. We tested two forms of fp16 training — pure fp16 and mixed-precision training. In pure fp16 mode, all operations are done in reduced precision. We could not match the accuracy numbers in pure fp16.
In mixed precision training, we keep a master copy of the weights in fp32 and make updates to that master copy at each optimizer step, then cast weights to fp16 for the forward pass. This allowed us to maintain precision while saving space. Dynamic loss scaling increased the amount of gradient information propagated while maintaining numerical stability. To implement these, we used the code from Apex and Megatron packages developed by NVidia.
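As a rough sketch of that loop (the actual runs used NVIDIA's Apex utilities rather than hand-rolled code; model, loader and get_loss are placeholders, and a full implementation would also periodically increase the loss scale):

import torch

model = model.half()  # fp16 weights for the forward/backward pass
master_params = [p.detach().clone().float() for p in model.parameters()]
for p in master_params:
    p.requires_grad = True
optimizer = torch.optim.Adam(master_params, lr=2.5e-4)

loss_scale = 2.0 ** 15  # dynamic loss scale, halved on overflow
for batch in loader:
    optimizer.zero_grad()
    loss = get_loss(model, batch)            # computed in fp16
    (loss * loss_scale).backward()
    grads = [p.grad.float() / loss_scale for p in model.parameters()]
    if any(not torch.isfinite(g).all() for g in grads):
        loss_scale /= 2                      # overflow: skip the step, lower the scale
        continue
    for mp, g in zip(master_params, grads):
        mp.grad = g
    optimizer.step()                         # update the fp32 master weights
    for p, mp in zip(model.parameters(), master_params):
        p.data.copy_(mp.data.half())         # cast back to fp16 for the next forward pass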
In synthetic experiments, we discovered that tensorcores were not fully activated unless dimensions were multiples of 8.
Figure 1: Time to multiply n x n matrices. Orange is fp32, blue is fp16. The Y axis shows log seconds averaged over three runs. Using tensorcores, fp16 matrix multiplication is almost 10x faster for large n. Note that for small n, the performance gains from using half-precision are negligible.
Figure 2: However, when n is shifted by one, so that n is not a multiple of 8, fp16 performs equally with fp32, ie there is no performance increase at all and we simply save memory.
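A benchmark along these lines takes only a few lines of PyTorch (a sketch; exact numbers depend on the GPU and library versions):

import time
import torch

def time_matmul(n, dtype, iters=10):
    # Rough timing of an n x n matrix multiply on the GPU.
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.time() - start) / iters

for n in (4096, 4097):  # multiple of 8 vs. not
    for dtype in (torch.float32, torch.float16):
        print(n, dtype, time_matmul(n, dtype))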
By modifying our model sizes we were able to get significant improvements in speed, from an initial 1.2x up to roughly the 2.6x quoted above, by converting to mixed precision and increasing the batch size.
LAMB
The LAMB optimizer uses a simple tweak that allows the learning rate to scale linearly with global batch size across workers.
Recall that LAMB multiplies the learning rate for each layer weight by the ratio (r) of the norm of the weights (r1) to the norm of the Adam step (r2).
How to interpret the left charts below: the Y axis shows the timestep in number of tokens processed with the first at the top. The X axis is histogram buckets, eg N samples had a value between 0 and 1, 1 and 2, etc. The Z axis is histogram frequency for the values in each parameter in the network, eg X% of the layer weights fell in the 0 to 1 bucket. You can think of this as one histogram of values at each timestep, stacked in time, so we can see how the histogram changes over time. The right charts show the same data, but as an area chart. Darker colors means more of the histogram bins had values in the shaded area.
Most of the parameter weights (r1, upper right) are near 0 the entire time, but a few grow and spread, some getting quite large (6.5 max bucket)
The Adam step values above are very large in the beginning, but quickly stabilize to between 0 and .5.
The above two effects causes the trust ratio (r, upper left) to encourage some parameters to continue growing for a long time, maxing out at the clip value of 10.
In our tests, we found that LAMB made a difference even on a single p3dn.24xlarge machine with global batch size 768, while Adam diverged immediately. We also found that we could entirely eliminate warmup in our learning rate schedule since at the beginning of training, gradients are large, which causes the LAMB trust ratio to be small and reduce the learning rate for the layers that are most unstable.
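As a minimal sketch of that per-layer scaling (not the open-sourced optimizer itself; adam_step stands for the update Adam would have applied to this parameter):

import torch

def lamb_update(param, adam_step, base_lr, clip=10.0):
    # Scale the learning rate by the ratio of the weight norm (r1)
    # to the norm of the Adam step (r2), clipped at 10 as described above.
    r1 = param.data.norm()
    r2 = adam_step.norm()
    if r1 > 0 and r2 > 0:
        trust_ratio = float(torch.clamp(r1 / r2, max=clip))
    else:
        trust_ratio = 1.0
    param.data.add_(adam_step, alpha=-(base_lr * trust_ratio))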
Byte Pair Encoding vs Adaptive Embedding
Wikitext-103 has a vocab size of 267K, which takes a lot of model space to represent. We tried switching to byte pair encoding (BPE), which has shown performance benefits in machine translation. We used OpenAI’s BPE implementation since it can handle out-of-vocabulary sequences.
Transformer-XL has a very large vocab size (267K), so most of the params are in the token embeddings. We were able to reduce the model size using BPE with a vocab size of 50K, which reduced our small model configuration from 186M to 75M parameters with the same representational capacity. The original Transformer-XL code uses adaptive input and softmax to reduce the embedding size.
One slightly tricky detail is that BPE produced tokenizations ~15% longer than Wikitext-103’s. This is expected because Wikitext-103 is already tokenized (eg splitting “don’t” into “do n’t”). If it weren’t, BPE might handle “don’t” as a single token, while Wikitext-103’s tokenizer would split it. In BPE, out-of-vocabulary words get broken up into sub-word pieces that are in the vocabulary, even unicode, by splitting characters apart into individual bytes. This behavior is the source of the 15% increase in tokenization length.
In our ablation studies, we found that performance, as measured in training loss per token, was nearly identical after accounting for tokenization length. Tokens/second increased by ~10% due to smaller model size and GPU utilization dropped significantly for the same batch size from ~90% to ~55%. This meant that when we scaled to larger models, we could keep a higher batch size per GPU.
For a more fair fight, we also compared BPE to adaptive embeddings on our large model configuration:
Converting to validation perplexity gives math.exp(math.log(26.25) * 1950 / 1699) = 42.5, so it’s impressively equivalent! This is despite the fact that the BPE encoding we used was trained on a general sample of the web (OpenAI’s Webtext dataset) rather than specifically Wikitext-103.
These two techniques are compatible, so it might be interesting to combine them. That said, since BPE is so flexible and the code is simpler, I’m inclined to use only it in future experiments.
Learning rate finder
After learning about this idea in fast.ai part 2, Ben was excited to see it work. With Adam it worked well:
The minimum of the graph above happens to be exactly the learning rate used in the paper (0.00025)! And we only had to run it for a few minutes to discover this, which is far cheaper than doing a traditional complete hyperparameter search in which you’d run for at least an epoch with the single LR schedule you intend to use later.
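The finder itself is simple to sketch (train_step is a placeholder that runs one optimizer step and returns the loss, and the optimizer is assumed to expose param_groups like a PyTorch optimizer):

import math

def lr_finder(train_step, optimizer, lr_min=1e-7, lr_max=1.0, steps=300):
    # Exponentially ramp the learning rate and record the loss at each step;
    # the LR around the loss minimum (before divergence) is a good candidate.
    lrs, losses = [], []
    for i in range(steps):
        lr = lr_min * (lr_max / lr_min) ** (i / (steps - 1))
        for group in optimizer.param_groups:
            group["lr"] = lr
        loss = train_step()
        lrs.append(lr)
        losses.append(loss)
        if not math.isfinite(loss) or loss > 4 * min(losses):
            break  # stop once the loss blows up
    return lrs, losses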
When applied to LAMB, the finder significantly overestimated the learning rate.
The learning rate at the gray curve’s minimum loss was 0.0263; orange (using weight decay 1e-4) was 0.0435. But when we ran it even with a learning rate of 0.01, it still diverged. We had to go all the way down to 0.005 to get stable convergence. We think this is because LAMB causes the actual per-layer learning rate to be quite different at different times in training, causing the learning rate finder to be out of distribution. We could probably get around this by running the learning rate finder for longer, or on a checkpoint further into training. Instead, we chose to do a more traditional hyperparameter search for the learning rate.
Batch scaling
In many of our experiments, we found that when we increased the global batch size by using more GPUs, convergence speed decreased in the short term. To experiment with this effect, we tried scheduling the batch size on a single machine. We found that although it helped in the short term, the benefits vanished as we trained longer.
One explanation for increased initial convergence speed is that we had used constant learning rate across different batch sizes, which resulted in higher learning rate per example. The gap between the blue and gray lines at the end is about 100M tokens (1h), so it did help by about 10% even though we weren’t using the whole GPU capacity.
However, if we keep the base learning rate constant and apply linear learning rate scaling to adjust the learning rate across different batch size, training diverges.
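For reference, the linear scaling rule amounts to the following (the batch sizes here are illustrative, not the ones from our runs):

base_lr = 0.00025          # LR tuned for the base global batch size
base_batch_size = 256
global_batch_size = 2048   # e.g. after adding more GPUs
scaled_lr = base_lr * global_batch_size / base_batch_size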
All of Wikipedia
The graphs below show training loss vs validation loss for the large model configuration on Wikitext-103. You can see in the chart below that in just 800M tokens (1.5h), our validation loss stopped decreasing, suggesting we overfit the dataset.
For this reason we were excited to try training on all of NVIDIA’s Wikipedia dump, which is ~25X larger than Wikitext-103.
It’s 12GB of text, so we can’t load it all into GPU memory
It’s stored as JSON, so it first needs to be flattened with article boundaries preserved
It contains arbitrary unique tokens, so some tricks are needed to limit vocab size
Luckily Transformer-XL already contained a multi-file data loader for the 1 Billion Words corpus, so it only took a little massaging to get it working. Note that because the tokenization is different, the loss numbers shouldn’t be compared. Below, we show two runs using 64 GPUs on all of Wikipedia at ~600M tokens per hour. We found that the validation loss is very sensitive to the learning rate schedule.
We hypothesize that if we hadn’t cycled back up on the blue run, validation loss might have continued to decrease.
Acknowledgements
The work was a joint effort, with contributions by:
Ben — data pipelines, LAMB, parameter searching and tuning
Darius — 2x speedup on single machine models using mixed precision training
Yaroslav — experiment infrastructure, distributed training
Additional thanks to AWS for providing compute resources for this work.
|
https://medium.com/south-park-commons/scaling-transformer-xl-to-128-gpus-85849508ec35
|
['Yaroslav Bulatov']
|
2019-05-06 17:09:02.894000+00:00
|
['Machine Learning', 'Pytorch', 'AWS', 'Transformers']
|
Baby-Sitters Club Books as Classic Novels
|
Baby-Sitters Club Books as Classic Novels
The appropriate way to view Ann M. Martin’s contributions to literature
Now that a new generation is experiencing the wonder and majesty of the Baby-Sitters Club thanks to the glorious new Netflix show, it’s time we appreciated these iconic novels for what they are: Classics of American literature.
In fact, there will be no justice in the world until Kristy’s Great Idea is seen as a worthy rival to The Great Gatsby. Until Super Special #4: Baby-Sitters Island Adventure takes its rightful place next to Moby Dick and The Old Man and the Sea. Until Super Special #14: BSC in the USA is elevated to the status of On the Road and The U.S.A. Trilogy. Until Mystery #20: Mary Anne and the Zoo Mystery is taught alongside … um … other great American zoo mystery books!
Anyway, you get the point. Here’s what some of them might look like! (Apologies in advance if it’s still too soon to think about Claudia and the Sad Goodbye.)
|
https://medium.com/sharks-and-spades/baby-sitters-club-books-as-classic-novels-a99b0fca9dc1
|
['Jack Shepherd']
|
2020-12-14 13:54:39.558000+00:00
|
['Literature', 'Books', 'Humor', 'Art', 'Television']
|
We Need to Talk About the Cultural Gap
|
This video came across my Instagram feed and it got me thinking.
What is really causing the cultural gap between Gen Z/Millenials and Gen X/Boomers?
Is the answer to that question simply the number of years passed between births? Or, is it a cumulation of a system that ignores less and less with each passing generation?
I think it’s a mixture of both. A perfect storm that just keeps brewing.
Exposing the Ulterior Motives of the School System
Boomers are made up of individuals who were born between the years of 1946 and 1964. Can you imagine going to school at that time?
Based on the fight that was happening during that time period simply to allow Black people the bare basics of a required education, I can imagine there was a great deal of blissful ignorance among those attending school in that era.
But I also imagine that they were the tipping point towards a higher level of questioning towards school systems.
I think that the Boomer generation started the fight against the school system with moments like Brown v. Board of Education and the Berkeley protests during the Free Speech Movement. An overall lack of trust in the system grew from there with each passing generation.
Something to consider is how the importance of schooling changed within our nation. For instance, my grandfather (a Boomer) had a very prosperous government job that he got with an associate's degree and work experience. I struggle to believe that would be possible now.
A Failing Economy
As someone born on the tail end of the Millenials, I was told frequently what a failing economy meant for me once I was out of school.
I was told often and with gusto that I should just focus on going to college if I wanted even a hope of any sort of career. I was taught never to expect social security when I’m ready to retire or even the ability to ever buy a house. I was even told on multiple occasions that it might not even be realistic for me to expect a job.
Some of this might be regional. I grew up in Nevada and the entire country knows that real estate on the west coast is a beast of another color.
Regardless, the education system that I grew up with was not filled with the same hope that it was in past generations, and I can only imagine that’s gotten worse for Gen Z. Can you imagine being in high school and being encouraged to fill out college applications while also being consistently told that it might not matter? Our economy is failing and you’ll always be a renter, might not have a career, and have zero hope for social security.
What a way to start out adulthood, right?
Widespread Media
When I think about the biggest defining feature of the last 20–40 years, the media is it.
The popularity of household computers, the internet, and smartphones is no doubt to blame for the widespread growth of news channels. It’s also made news more accessible.
The news is no longer something that children are used to hearing as background news in the evenings. It’s no longer the gibberish they ignore to get to the comics section of the Sunday newspaper. It’s upfront and personal and there’s no ignoring it at almost any age.
I was nine years old when 9/11 happened. For me, that’s the first experience I can remember really paying attention to the media. Is this the case for older generations? That’s something I genuinely want to know.
Did Boomers have the radio playing in their classroom when something large happened with the Vietnam War or when JFK died?
On 9/11 I can clearly remember my teacher keeping the news on all day, trying to teach a room full of little kids while also paying attention to one of our country's most devastating moments, trying to distract us with business as usual while people were shown jumping out of windows.
The growth of media has definitely sped along the major differences between current younger and older generations.
I wonder, if everyone considered this conversation, whether it would help with this us-vs.-them mentality. Whether it would do away with the need for phrases like “Ok, Boomer” and “Karen” and “Snowflakes”.
|
https://medium.com/ninja-writers/we-need-to-talk-about-the-gap-64cd2b6e10cf
|
['Adrienne Grimes']
|
2020-07-17 00:45:46.560000+00:00
|
['Society', 'Schools', 'Current', 'News', 'Age']
|
📚Local Binary Pattern Algorithm: The Math Behind It❗️
|
📚Local Binary Pattern Algorithm: The Math Behind It❗️
This post goes into an in-depth analysis and application of LBP (Local Binary Patterns) for image feature extraction.
LBP
👉 Introduction
The main idea behind LBP is to describe the neighborhood of image elements using binary codes. This method is usually used to study their local properties and identify the characteristics of individual parts of the image.
This algorithm is a combination of statistical and structural methods. It was first proposed by T. Ojala, M. Pietikäinen and T. Mäenpää from the University of Oulu in Finland in 1994. It is considered a theoretically time-effective and straightforward method, showing excellent results in many studies.
👉 How it works❓
As the name suggests, Local Binary Pattern (LBP for short) is a feature of the local representation of an image. It is composed of relative values by comparing each pixel with its neighboring pixels.
The main characteristics of LBP are:
1- Low calculation cost
2- Resistance to fluctuations in image grayscale values
A lot of improvements have been made since the first proposal in 1994. Especially in non-deep learning systems, it is widely used for facial image recognition, texture segmentation, and other image analysis applications.
The LBP detects microstructures such as edges, lines, spots, flat areas, which can be estimated by the histogram.
*LBP method steps*
image by the author
1- Convert the image into grayscale space.
2- For each pixel (gc) in the image, select the P neighbors that surround it; the coordinates of each neighbor gp are given by
(gc_x-Rsin(2πp/P),gc_y + Rcos(2πp/P))
3- Take the center pixel (gc) and set it as a threshold for its P neighbors.
4- Set to 1 if the value of the adjacent pixel is greater than or equal to the value of the center pixel, 0 otherwise.
5- Now compute the LBP value: moving sequentially counterclockwise, write down a binary number made of the digits assigned to the pixels adjacent to the center. This binary number (or its decimal equivalent) is called the LBP code of the central pixel and is then used as a characteristic of the selected local texture.
Uniform LBP formula
gc - the intensity value of the central pixel
gp - the intensity of the neighboring pixel with index p
the function S can be expressed as:
threshold (step) function
P - the number of sampling points on a circle of radius R (the circular neighborhood); P controls the quantization of the method.
R - determines the spatial resolution of the method (operator).
The gray values of neighbors which do not fall exactly in the center of a pixel(block) are estimated by interpolation.
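As a minimal sketch of the operator for P = 8, R = 1 (this simplified version samples the 8 surrounding pixel centers instead of points on the exact circle, so no interpolation is needed and the codes can differ slightly from the interpolated example worked through below):

import numpy as np

def lbp_code(block):
    # LBP code of the center pixel of a 3x3 block (P=8, R=1), using the
    # step function S(x) = 1 if x >= 0 else 0. Neighbors are visited in a
    # fixed order around the center, starting from the pixel to the right.
    center = block[1, 1]
    neighbors = [block[1, 2], block[0, 2], block[0, 1], block[0, 0],
                 block[1, 0], block[2, 0], block[2, 1], block[2, 2]]
    return sum(int(g >= center) << p for p, g in enumerate(neighbors))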
*Detailed example*
Now let’s take, for instance, the following chunk of a grayscale image:
image by the author
You can express the size of this window (3x3) in terms of the radius of the circle, with side length (2*R + 1): if the radius is 1, then we get a 3x3 matrix.
The coordinates of the central pixel, denoted gc(gc_x, gc_y), are (1,1) in the coordinate system of the 3x3 matrix. The value of this pixel is 33, so gc = 33. Let's take 8 neighbor samples for our example (P = 8). The coordinates of each sample point can be expressed as
(gc_x-Rsin(2πp/P),gc_y + Rcos(2πp/P))
P = 8; p = 0, 1, 2, …, P-1; gc_x = gc_y = 1
So for the previous matrix, we have the following coordinates for each sample:
(gp_x, gp_y)
(g0_x, g0_y) => (1.0, 2.0)
(g1_x, g1_y) => (0.2929, 1.7071)
…
(g7_x, g7_y) => (1.7071, 1.7071)
image by the author
Let's denote Theta_p = 2πp/P, which gives Theta =
0, pi/4, pi/2, 3pi/4, pi, 5pi/4, 3pi/2, 7pi/4
Now we need to compare, as the formula suggests, the intensity of each neighbor pixel with the intensity of the central one, gc.
g0 = 80
g1 = ? (on the circle at angle pi/4)
g2 = 41
g3 = ?
g4 = 29
g5 = ?
g6 = 56
g7 = ?

g0 = 80 > gc = 33 ==> put 1
g2 = 41 > gc = 33 ==> put 1
g4 = 29 < gc = 33 ==> put 0
g6 = 56 > gc = 33 ==> put 1
image by the author
Now the problem is to find the intensity values of g1, g3, g5, g7
In order to find these values, the paper suggests applying an interpolation.
Since we have a 2D space (a 2-dimensional image), we need a 2D interpolation method.
Now one of the methods that I do remember from my education in numerical analysis is the bilinear interpolation.
*bilinear interpolation*
The term Interpolation is a way to calculate the intermediate value of a function from several of its already known values.
Bilinear interpolation is linear interpolation of a function of two variables, that is, four-point interpolation, where the values of the function at the four corner points f(x1,y1), f(x2,y1), f(x1,y2), f(x2,y2) are known.
It is reasonable to assume that the value at some point (x, y) located in the square bounded by these points can be found by interpolating twice: first along the x coordinate for the two pairs of points, and then along the y coordinate, using the previous results.
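A generic helper for this, using the q11/q21/q12/q22 corner notation that appears in the example below (just a sketch):

def bilinear(x, y, x1, y1, x2, y2, q11, q21, q12, q22):
    # Bilinear interpolation on the rectangle [x1, x2] x [y1, y2], where
    # qij is the known value at (xi, yj): interpolate along x for the two
    # rows, then along y between the two intermediate results.
    fxy1 = ((x2 - x) * q11 + (x - x1) * q21) / (x2 - x1)
    fxy2 = ((x2 - x) * q12 + (x - x1) * q22) / (x2 - x1)
    return ((y2 - y) * fxy1 + (y - y1) * fxy2) / (y2 - y1)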
In order to compute the intensity of g1, g3, g5, g7, we need to find the coordinates of the outer box containing the unknown pixel value.
So, for example, for g1, which lies between theta = 0 and theta = pi/2, the figure would look like the following:
image by the author
The pixel value of g1 can be interpolated using the formulas:
Back to our example:
[[25 41 24]
 [29 33 80]
 [38 56 65]]

q11 = gc = 33
q21 = 80
q22 = 24
q12 = 41
We can translate the coordinate system to the origin, which implies that
x1 = y1 = 0 (origin point); x2 = y2 = 1 (one pixel away from the center along both axes)
The x and y values of the unknown point need to be translated into the regular coordinate system (a 90-degree counterclockwise rotation, which means new_x = old_y and new_y = -old_x).
Applying this formula to the unknown samples, we find:
g1 = 39 (this seems logically right, because it lies inside the boundaries 33, 80, 24, 41)
g3 = 29
g5 = 39
g7 = 63
Now the threshold matrix is equal to:
image by the author
Now, applying the LBP formula:
LBP = (2⁰)*1 + (2¹)*1 + (2²)*1 +(2³)*0 +(2⁴)*0 +(2⁵)*1 +(2⁶)*1+ (2⁷)*1 = 1 + 2 + 4 + 32 + 64 + 128 = 231
image by the author
This process will repeat for each block of the image (along x and y axes)
⚠️ Note that the output is reduced by 2*R rows and 2*R columns; for example, a 9x9 image would result in a 7x7 image (R = 1 in this example).
To determine whether a sample point needs interpolation, we can compute the fractional parts of gp_x and gp_y. If the fractional part of both x and y is zero (as for g0, g2, g4, g6), the point lies exactly on a pixel center; otherwise we need to interpolate (g1, g3, g5, g7); a small helper for this check is sketched below. In this example, four sample points require bilinear interpolation, and the other four do not, since their values are given directly by the matrix.
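A possible helper for that check (a sketch; eps absorbs floating-point error in the sample coordinates):

import math

def needs_interpolation(x, y, eps=1e-6):
    # A sample point needs bilinear interpolation only if it does not fall
    # exactly on a pixel center, i.e. if either coordinate has a non-zero
    # fractional part (up to floating-point error).
    def frac_dist(v):
        f = v - math.floor(v)
        return min(f, 1.0 - f)
    return frac_dist(x) > eps or frac_dist(y) > eps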
*Pseudo-code*
To implement this method in Python for the purpose of visualizing the LBP, you need three nested for loops:
For i in height range:
    For j in width range:
        Select a chunk of the image to compute its LBP value
        For each block neighbor:
            check if interpolation is needed
            Compute_LBP(block)
        Add the result
        Update the matrix with the value of LBP
For python implementation, you can check out my code on GitHub, under Visualize_LBP class.
*Vectorization*
Vectorization is the core of the internal implementation of NumPy. Vectorization is the absence of an explicit loop in code development. The loops themselves cannot be avoided, but their internal implementation is replaced with other constructs in the code.
Vectorization makes the code more concise and readable. Thanks to vectorization, many operations take a more mathematical form. For example, NumPy allows you to express the multiplication or addition of two matrices like this:
C = A * B
C = np.add(A, B, casting="unsafe")
# if A and B have two different types (like int16, float32), use casting="unsafe"
Now we can write our program using only one for loop to cycle through the neighbors' samples
code on Github
This code groups all the neighbors of the image together for each iteration of the for loop. The following illustration clarifies the mechanism of this algorithm.
I have translated this code from the original one written in Matlab(source)
Vectorized LBP
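For reference, here is a simplified vectorized sketch of the same idea in pure NumPy. It samples the 8 surrounding pixel centers rather than the interpolated circle, so it is not identical to the translated MATLAB code, but it shows how only the loop over the neighbors remains:

import numpy as np

def lbp_image(img):
    # Vectorized LBP for P=8, R=1 over a whole grayscale image.
    # The output is (H-2) x (W-2), as discussed above.
    img = img.astype(np.float64)
    center = img[1:-1, 1:-1]
    # Row/column offsets of the 8 neighbors, in a fixed order.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    codes = np.zeros(center.shape, dtype=np.int32)
    for p, (dr, dc) in enumerate(offsets):
        neighbor = img[1 + dr: img.shape[0] - 1 + dr,
                       1 + dc: img.shape[1] - 1 + dc]
        codes += (neighbor >= center).astype(np.int32) << p
    return codes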
👉 Conclusion
Experimental studies have shown that the application of LBP can significantly reduce time and computing costs for feature extraction(which will be discussed in future work).
If you find this post useful, feel free to share it, and follow me to tune in for my future posts.
Finally, You can check out the code on my repo.
Moral: “If you don't know enough, you may be beaten by the ignorant!”
Peace ✋
|
https://medium.com/swlh/local-binary-pattern-algorithm-the-math-behind-it-%EF%B8%8F-edf7b0e1c8b3
|
['Mahmoud Harmouch']
|
2020-10-29 11:47:54.078000+00:00
|
['Programming', 'Machine Learning', 'Mathematics', 'Python']
|
What’s new in sls-dev-tools — 13/05
|
It has been a busy few weeks for sls-dev-tools. The tool is now in version 1.1.6, gaining a host of new features since last time, such as Lambda invocation, startup wizards, and displaying more information on all your Lambdas. Additionally, we launched on ProductHunt this week, and the toolkit has gained a new addition, sls-dev-tools Guardian!
For those who haven’t heard of it, sls-dev-tools aims at becoming the equivalent of Chrome Dev Tools for the Serverless World. Our goal is to provide the Serverless community with metrics, logs and feedback directly in the terminal, eliminating the need to constantly jump to the AWS Console.
If you find this tool useful or have any feedback please get in touch via GitHub issues, Twitter, or our Gitter page! If you like what you’re seeing a ⭐️ would go a long way!
Contributing to the tool
We’re always excited to get feedback on the tool, find out where we can improve, and hear what else you want to see. sls-dev-tools is an open-source project, and a large number of our features come directly from community suggestions and issues. Plus, if there’s something you want to see in sls-dev-tools and you know how you want to do it, we’d love for you to submit a PR and feature you in our next article, as well as add you to our list of contributors!
Become part of the sls-dev-tools team
If you don’t know where to start, we recently released two good first issues, with detailed steps on how to approach implementing the features. Additionally, the team is always available on the sls-dev-tools Gitter to provide support and answer any questions.
Search for the “good first issue” label
Big thank you to James Mullen (@jamesnmullen) for a series of contributions to the tool, from allowing you to invoke functions from within the tool and providing information on your lambda layers, to allowing you to submit MFA tokens via a modal on startup. Another big thank you to Arto Liukkonen (@artoliukkonen) for also contributing again, and adding an update notifier to the tool to let you know when a new version has been released.
Feature spotlight
sls-dev-tools Guardian
The sls-dev-toolkit just got bigger. Say hello to sls-dev-tools Guardian, a highly opinionated, highly configurable, automated best-practice audit tool for Serverless architectures. Like all our features, it’s framework agnostic and can be run in one simple command. Just add the -c option when running the tool. To find out more, read more about Guardian here, or check out the Guardian docs page.
Receive immediate feedback on your stack
Invoking functions
A long requested feature, you can now invoke your Lambdas in a couple of keypresses. Just press i on a function to open the invoke modal. Combined with the ability to deploy your Lambda functions from inside the tool, testing code changes just got a lot faster. Thanks to James Mullen (@jamesnmullen) for implementing this feature!
Deploy your code and then immediately invoke it, while never leaving the tool
Lambda Statistics Modal
Press l on a Lambda function to get a breakdown of its recent performance, including the number of recent invocations, recent durations, and error percentage. Plus, you can view its deployment history and layers as well. Feel free to let us know what else you’d like to see here!
All the information you need, in one place
Modal Wizards
We really want sls-dev-tools to be as simple to use as possible, so now when you start up the tool, if you haven’t supplied a region or stack name, you’ll be able to select them from startup wizards.
Running the tool just got simpler
Update notifier
The tool will now let you know when a new version is released. Thanks to Arto Liukkonen (@artoliukkonen) for this handy feature.
What’s next?
We’re currently wrapping up our latest big feature, sls-dev-tools Relay, which aims to put the instant back into instant feedback. After Relay goes live, we’ll be looking at expanding the rule set for Guardian.
Github: https://github.com/Theodo-UK/sls-dev-tools
Docs: https://theodo-uk.github.io/sls-dev-tools/
|
https://medium.com/serverless-transformation/whats-new-in-sls-dev-tools-13-05-98ffee93915d
|
['Mansur Pasha']
|
2020-05-18 08:06:22.293000+00:00
|
['Tools', 'Open Source', 'Serverless', 'AWS', 'Lambda']
|
How to Build a Reporting Dashboard using Dash and Plotly
|
A method to select either a condensed data table or the complete data table.
One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:
Code Block 17: Radio Button in layouts.py
The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:
Code Block 18: Callback for Radio Button in layouts.py File
This callback is a little bit more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below is changing the data presented in the data table based upon the dates selected using the callback statement, Output('datatable-paid-search', 'data' , this callback is changing the columns presented in the data table based upon the radio button selection using the callback statement, Output('datatable-paid-search', 'columns' .
Conditionally Color-Code Different Data Table cells
One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table to be highlighted based upon a metric’s value; red for negative numbers for instance. However, conditional formatting of data table cells has three main issues.
There is lack of formatting functionality in Dash Data Tables at this time.
If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly.
There is a bug in the Dash data table code in which conditional formatting does not work properly.
I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:
Code Block 19: Conditional Formatting — Highlighting Cells
The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.
*This has since been corrected in the Dash Documentation.
Conditional Formatting of Cells using Doppelganger Columns
Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements:
Code Block 20: Adding Doppelganger Columns
Then, the conditional cell formatting can be implemented using the following syntax:
Code Block 21: Conditional Cell Formatting
Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values.
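Putting those pieces together, a sketch of the conditional formatting looks roughly like this (column names other than the two mentioned above are illustrative, df and columns are assumed to be defined elsewhere, and older Dash versions use 'filter' instead of 'filter_query' in the condition):

import dash_table

table = dash_table.DataTable(
    id='datatable-paid-search',
    columns=columns,              # includes the hidden doppelganger columns
    data=df.to_dict('records'),
    style_data_conditional=[
        {
            'if': {
                'filter_query': '{Revenue_YoY_percent_conditional} < 0',
                'column_id': 'Revenue YoY (%)',
            },
            'color': 'red',
        },
    ],
)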
The complete statement for the data table is below (with conditional formatting for odd and even rows, as well highlighting cells that are above a certain threshold using the doppelganger method):
Code Block 22: Data Table with Conditional Formatting
I describe the method to update the graphs using the selected rows in the data table below.
|
https://medium.com/p/4f4257c18a7f
|
['David Comfort']
|
2019-03-13 14:21:44.055000+00:00
|
['Dashboard', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Dash']
|
sls-dev-tools Guardian — Automated best practice checks
|
I build, write and talk a lot about Serverless applications. I use AWS Serverless services extensively with my clients and, for many applications, it’s the best technology choice. Serverless adoption continues to grow, but the biggest pain-point for companies adopting Serverless is no longer vendor lock-in or cold-start times, it’s education and upskilling.
Serverless is a steep learning curve and a moving target
That is why today I'm excited to announce that sls-dev-tools' iconic CLI Dashboard interface, now called sls-dev-tools HQ, is one of several tools in the new sls-dev-tools suite. Today, it is joined by sls-dev-tools Guardian.
sls-dev-tools Guardian is a highly opinionated, highly configurable, automated best-practice audit tool for Serverless architectures. Like all sls-dev-tools features, it’s framework agnostic (SAM, Serverless Framework…) and can be run in one simple command.
Read the full release and how to use it today!
https://medium.com/serverless-transformation/how-to-succeed-with-serverless-automate-best-practices-2a41894721a3
sls-dev-tools is open source and built by the community for the community.
Feedback, Stars, Shares, Issues and PRs all welcome.
Ben
|
https://medium.com/serverless-transformation/sls-dev-tools-guardian-automated-best-practice-checks-4dcad8b58ec7
|
['Ben Ellerby']
|
2020-05-09 12:22:41.506000+00:00
|
['Ci', 'Lambda', 'Open Source', 'AWS', 'Serverless']
|
What does quality mean for Business Design? 7 lessons from Monzo
|
From clarity of purpose to inspiring storytelling, key learnings from the self-proclaimed “bank of the future”.
At Fjord London, we’ve spent quite a bit of time discussing quality recently, and a few months ago set ourselves a challenge to define and illustrate what it means for each of our design crafts.
We wanted to consider: What does quality in Business Design look and feel like?
Business Design at Fjord means ensuring design has impact, that it is viable and sustainable in a business context, and that humans are at the heart of the way we approach business problems. The answer to the opening question was always going to be a bit more nuanced than just “a well-designed business makes money.”
When exploring the topic within the Business Design group and looking for examples of quality, we found ourselves continuing to come back to companies that appear to have a strong human-centred philosophy, while being impactful. The one that came up most frequently was Monzo.
If you’re a young professional from the UK, or just follow tech news, chances are you’re already familiar with Monzo — the fast-growing challenger bank startup that at one stage was apparently so cool it became a chat-up line in London bars.
It’s since become a little more mainstream (and has probably lost some of its chat-up line potential), but they’ve kept hold of one of the key things that has helped propel their growth: their human-centred ethos. At the same time, they’re making an impact by becoming a major part of the challenger bank movement that has got much of the banking establishment genuinely worried.
In this post, we’ll introduce seven of the key aspects of Business Design and share our thoughts on why we believe Monzo embodies what we consider to be quality in these areas.
The key aspects are:
· Purpose and mission
· Storytelling
· Value proposition
· Ecosystem
· Business model
· Scaling
· Roadmapping
Clarity of purpose
An obvious place to start is with Monzo’s purpose and mission.
As Business Designers, our work often involves relating how design is impacting the organisation’s overall direction, whether that’s tracking how design efforts are contributing towards the mission, diagnosing when the direction needs to be clearer or, if we’re lucky, working with Service Designers and colleagues from other crafts to develop the ‘North Star’ vision from scratch.
There is now increasing recognition that being purpose-driven and being profit-driven are not mutually exclusive; on the contrary, purpose has been found to be directly linked to business success. This Forbes article argues that purpose-driven companies have employees that are more motivated and energised, customers that are more loyal, and business outcomes that are more positive than those of their counterparts.
In Monzo’s case, the idea of making the world a better place by giving people more control over their money is an idea that’s easy to get behind, particularly for those who’ve grown frustrated at the experience provided by the traditional high-street banks.
For example, a quick glance at the company’s reviews on Glassdoor provides a sneak peek into what it’s like to work at Monzo, and this clarity of purpose comes through in many of the testimonials from current and former employees, who rave about working on a product they believe in. As one reviewer effuses, “Everyone I’ve met believes they are building something that could change the world. After a short while, I do too.”
It has also arguably helped them to break crowd-funding records. The startup’s initial Crowdcube funding round in 2016 saw them raise £1m in just 96 seconds, making it the fastest crowdfunding raise in history at more than £10,000/second. Its more recent round in 2017 again broke records, this time for the number of investors, as it generated £2.5m from more than 6,500 backers — almost doubling Crowdcube’s previous record.
The clarity of purpose that the challenger bank has had from the outset is likely to be one of the foundations of Monzo’s success in terms of its ability to differentiate itself from the ‘banking establishment’, appeal to early adopters within its target demographic, and get people on board.
Inspiring storytelling
The ability to tell a compelling story goes hand-in-hand with having clarity of purpose. Stories help to convey one’s purpose and can inspire and motivate people in a way that engaging on a purely rational level cannot. Stories reach our hearts, as well as our heads. As such, they can be a powerful business tool to help build a lasting brand and gain trust and buy-in from customers, employees, investors and other stakeholders.
By extension, storytelling is an important part of Business Design. We’ve already mentioned how placing humans at the heart is key, and the art of storytelling has helped humans to relate to each other for more than 20,000 years — since the days when cave drawings were the dominant medium. Business Designers have been described as ‘navigators’ who can speak the language of business as well as the user, and they often bridge the gap between the two. Stories are a proven way of carrying a narrative throughout the creation and delivery of a product or service, and bringing people along on the journey.
Monzo frequently shares its story with its stakeholders through the company’s blog and regular emails, and, like many successful brands, its story is deeply tied to its CEO and co-founder Tom Blomfield. It is a story of a relentless quest to improve people’s lives by helping them to manage their money better while almost waging war on the banking establishment, which he frequently reiterates in his interviews with the press.
It’s a classic tactic of creating a ‘David and Goliath’ mentality that has helped to build a human connection with Monzo’s customers by creating a sense that Monzo is on their side, and fighting on their behalf. It’s easy to see how the business has managed to build such a strong following by offering a great customer experience built on top of this type of narrative, which has succeeded in putting a human face on an industry that has traditionally been perceived as faceless and mercenary.
CEO and co-founder Tom Blomfield. Photo: Monzo
Evolving the value proposition
Along with clarity of message, a successful business needs to be clear about the unique value it offers to its target audience and why it is different from the alternatives — in other words, the value proposition. And as Business Designers, we love getting stuck into creating new value propositions, which means we’re always on the look out for best practices.
Monzo is a great example of a company that has been very clear about the specific needs it’s addressing within its target market: the smartphone generation. What’s particularly interesting here is that, although the business has grown and evolved, the proposition at any given point has always stood up on its own and been valuable in its own right.
When it launched in 2015 without a banking licence, the unique selling point was essentially a money manager app that gave users better visibility of their spending. Yes, the functionality was limited, but it was enough to get people on board because Monzo had identified a genuine need and found a way to solve it that worked from a business perspective.
Monzo is now a fully licensed bank, and people have started switching over to it as their main bank (known as going #FullMonzo).
In the future, Monzo wants to become a hub for all parts of people’s lives that impact on their finances. The point here is that they could never have launched on day one with their planned future offering, as this takes time to build, but they’re evolving in the right way by finding a way to offer something compelling at each stage along the journey, with each proposition serving as a stepping stone to the next.
Embracing the ecosystem
Although Monzo is evolving, no business can do that effectively by operating in a vacuum. It’s increasingly critical for the business to have a holistic understanding of the ecosystem that surrounds it, and what the various roles and opportunities are within it. It has been said that the ecosystem is the new warehouse, as well as the new supply chain.
As Business Designers, we often like to map out an ecosystem as a way of identifying opportunities, to take advantage of resources that the business has access to but doesn’t necessarily own, and to find new ways of generating value through these resources.
Monzo is clearly a company that is embracing the ecosystem and acting like a platform. They built an API for their service right from the start and have held hackathons where they’ve invited developers in to come up with new ways to make the service work better for customers.
A lot of the ideas revolve around integrating with other services. A recent example is the integration with IFTTT, which lets customers set up rules to automate certain actions on their account and connect to other services.
For example, there’s one that allows account holders to automatically transfer money into a ‘fast-food punishment pot’ every time they use their card at McDonald’s, while another allows students to set up a weekly allowance to be transferred from their Student Loan pot.
Meanwhile, Monzo is trialing a Marketplace feature that will involve leveraging the ecosystem in a more fundamental way — we’ll come back to this one shortly.
Monzo’s integration with IFTTT
A business model for the digital age
Embracing the ecosystem is key to developing a strong business model in the digital age. As Sangeet Paul Choudary, author of Platform Scale, puts it, “we are in the midst of a transformative shift in business design as business models move from pipes to platforms.”
As Business Designers, we focus particularly on the viability of design, and sometimes that means identifying new revenue streams or business models, which may be different to traditional ways of doing business.
Monzo’s CEO and co-founder Tom Blomfield is adamant that the traditional banking model is broken and that if he and his team do their jobs right, they will make some of the high-street banks extinct.
On one hand, a traditional bank locks in customers from a young age by cross-selling them the bank’s own credit cards, loans, mortgages and so on, and then charges high fees for using those products. By comparison, Monzo’s approach is far more human-centred and involves building a modern customer experience that people genuinely want to use; their former head of marketing described how the company was “building a mobile experience for banking that you could compare to the way Whatsapp, Spotify or Uber feel to use.” Having established a loyal customer base, Monzo now charges fees for overdrafts, albeit at more reasonable rates than the bigger players.
Monzo is not profitable yet, but in the long term, the plan is to make the majority of revenue by becoming a marketplace. Having access to a wealth of data on their customers will allow the bank to offer them relevant products and services from third parties — think savings accounts, insurance, energy tariffs, etc. — then charge those companies a commission.
They’re still testing the marketplace concept, but it could turn out to be a model that’s more sustainable in the digital age, and it highlights just one way in which a business’s ecosystem can be harnessed in order to rethink its revenue model.
Dealing with growing pains
There comes a time in a successful business’s life when it needs to grow up and make its way in the big wide world — it needs to scale-up.
Scale puts pressure on a business, and the customer experience, infrastructure, culture, operating model and business model can all begin to feel strained. As Business Designers, we’re often involved in the process of helping businesses achieve scale without losing the essence of what makes them successful.
The team at Monzo have hit a few bumps along the way as they’ve had to scale. Perhaps the most well-known is the change they made in 2017 to the cost of withdrawing cash when outside of the UK. From launch, the bank offered unlimited free overseas ATM withdrawals, absorbing the 1–2% fees that are payable to the ATM owner themselves. However, this became unsustainable as their customer base grew, with the ATM cost per customer more than doubling over the course of a year.
Their solution was in line with one of their core principles, which is “Transparency by default” — they blogged about the challenge these rising costs posed to the sustainability of the business, and proposed three pricing options from which the community was allowed to choose by way of a vote.
By being open about their decision-making process and ‘co-creating’ the solution by allowing their customers to have their say, they have managed to achieve a compromise in their service that has enabled them to ensure they can cover their costs without hindering their growth.
Plotting the right course
Finally, a key tool as a business scales is the roadmap that guides the organisation towards their vision.
Business Designers are often the map makers on design projects, working out where to go next from a business perspective, as well as how and why. Doing so inevitably means making trade-offs, which is where we find ourselves frequently calling on one of our favourite guilty pleasures, the Prioritisation Framework. It’s a model that helps a business to consider the desirability, feasibility and viability of ideas in order to build a roadmap of which ones should be built and in what order.
Monzo goes one step further, however, and has made its roadmap transparent, opening up what the company is working on now and next and inviting feedback from the community. Another example of their principle of transparency, this roadmap used to be in the form of an open Trello board, and has recently moved to a ‘Making Monzo’ area on their own site.
When the roadmap lived on Trello, customers were able to vote for their favourite feature ideas directly on the Trello board, allowing Monzo to get real-time feedback on which ideas would prove the most popular with customers. The team was very open about the fact that these votes were a factor when deciding what to work on next (the ‘desirability’ aspect of the prioritisation framework), but that their engineering time (feasibility) and internal priorities (viability) would generally take precedence.
The Takeaway
We’ve explored seven ways in which Monzo exhibits the hallmarks of quality in terms of the design of the business.
By aspiring to a greater purpose and focusing on people, the company has built a loyal and engaged workforce and customer base. By having a clear and evolving value proposition, they’ve identified how their offering can stand out in a changing market, and form the basis of a business model that challenges how banking traditionally works. And by involving their customers in key decisions, they are managing to achieve increasing scale in a way that is viable, while avoiding alienating their existing customers.
Achieving quality in these areas is a proven way of maximising a business’s chance of success on the bottom line, and while Monzo hasn’t reached profitability yet, it continues to move closer. In the meantime, they remain a great example of original Business Design thinking.
Are you a current or budding Business Designer, interested in working with some of the most talented folks in design to help create impactful businesses? Please get in touch!
|
https://medium.com/design-voices/what-does-quality-mean-for-business-design-7-lessons-from-monzo-c50415d721e2
|
['Neal Bingham']
|
2019-04-05 07:47:55.608000+00:00
|
['Design', 'Strategy', 'Business', 'Fintech']
|
Decore Uses Rockset for Search & Analytics on DynamoDB
|
Many early adopters of cryptocurrency were individuals at the forefront of this technology, but enterprises are now increasingly getting more involved. As using cryptocurrency for business transactions becomes more commonplace, Decore aims to make accounting as streamlined as possible for companies accepting and sending crypto.
Conceived as a “Quickbooks for crypto,” Decore provides accounting solutions for companies that have adopted crypto. In the same manner that accounting software like Quickbooks may pull data from banks and credit cards, categorize transactions, and generate periodic reports, Decore’s software service pulls and compiles data from blockchains so that accountants may process crypto transactions easily. Decore has the ability to sync crypto transactions with Quickbooks itself, allowing accountants to operate with crypto in a familiar environment.
Decore also provides companies other forms of automation around crypto transactions. Companies can opt to use Decore for payroll purposes and augment regular bank deposits with crypto payments to employees. In addition, tax reporting can be complicated given the large trading volume common in crypto and the different tax treatments that apply, so Decore simplifies this process by generating tax reports on crypto holdings.
Pulling Crypto Transaction Data into DynamoDB
A lot goes on behind the scenes to run Decore’s software service. Whenever a user imports their cryptocurrency wallet or exchange account, Decore needs to pull data on all the transactions associated with it.
Decore adopted a serverless approach to building their application. To populate and update crypto transaction data once a wallet or exchange account has been added, Decore fires off AWS Lambda functions to query blockchains approximately every hour. Decore also requires a data backend that can scale with the volume of transactions as their business grows, so they use Amazon DynamoDB to store all the crypto transaction data returned by the queries.
Search and Analytics on Crypto Transactions
Aside from compiling all transaction data from users’ wallets and exchange accounts, Decore, as an accounting tool, needs to allow accountants to verify, allocate, and reconcile transactions as part of their work. To support this, Decore provides functionality to filter, search, and analyze transactions. As an example, users can perform ad hoc searches for transactions involving specific deposit and withdrawal currencies, between particular origin and destination accounts.
Ad Hoc Queries Run Too Slowly
The ability for accountants to issue arbitrary queries on transaction data was not straightforward to implement, however. Decore originally built these features on the DynamoDB data store but quickly realized that this design was infeasible. Searches on transaction data needed to be fast and sufficiently interactive to be usable by accountants, but DynamoDB could not deliver the necessary performance on its own. These searches were simply not possible without predefining the queries and creating secondary indexes in DynamoDB for this subset of queries. But with more than 20 attributes in the DynamoDB table at this point, it wasn’t feasible to maintain indexes for everything, which was what would have been required to support ad hoc queries.
Decore would have had to severely limit query functionality and disable ad hoc and random queries by users if they couldn’t find a reasonable way to run analytics on DynamoDB. Decore engineers looked at offloading analytics onto other data stores and considered supplementing DynamoDB with MySQL to index the crypto transaction data. However, this alternative was also fraught with problems, including an inability to scale.
Delivering Real-Time Search and Analytics with Rockset
In searching for a solution to the query performance issue, the Decore team came across Rockset, which enables real-time search and analytics on data from DynamoDB. Rockset delivers low-latency queries through a combination of cloud autoscaling and automatic indexing of every field and value in the data, making it possible for Decore to enable ad hoc queries on the crypto transaction data without any performance engineering or index management. In addition, Rockset continuously loads data from DynamoDB, so the most up-to-date data Decore is receiving can be made available for fast analytics.
Decore incorporated Rockset into their data landscape, setting up a stream from DynamoDB to maintain a copy of the transaction data in Rockset. Decore rebuilt their application so that search operations go through Rockset while write operations continue going to DynamoDB. All queries that filter and search on the transactions are handled through Rockset’s index, which returns transaction IDs that the Decore app then uses to fetch matching transactions from DynamoDB. The integration was relatively simple, due to Rockset’s support for DynamoDB as a standard data source and Decore’s modular, microservices-based architecture, taking one engineer half a day to implement.
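As an illustration of that read path, the sketch below shows the ID-based hydration step in Python with boto3; the table name ("transactions"), key attribute, and IDs are assumptions for the example, not Decore's actual schema, and the search query that produces the IDs is omitted.
import boto3

dynamodb = boto3.resource("dynamodb")

def fetch_transactions(transaction_ids):
    # The search layer returns matching transaction IDs; hydrate the full
    # records from the DynamoDB table that remains the system of record.
    keys = [{"transaction_id": tid} for tid in transaction_ids]
    response = dynamodb.batch_get_item(
        RequestItems={"transactions": {"Keys": keys}}
    )
    return response["Responses"]["transactions"]

# Example usage with hypothetical IDs returned by a search query
print(fetch_transactions(["txn-001", "txn-002"]))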
As a serverless search and analytics engine, Rockset also aligns well with Decore’s serverless approach, requiring no management of infrastructure or data platform while scaling transparently to meet Decore’s needs. This allows Decore to use Rockset in their architecture with minimal disruption to their processes.
Powering a Better Crypto Accounting Experience
The combination of DynamoDB and Rockset allowed Decore to deliver the full range of search functionality for crypto transactions that was planned. Most ad hoc queries that were not possible before now return in under 100ms, so performance is no longer a limiting factor.
“We want to provide accountants an environment where they can work efficiently with crypto, so it’s important they have all the functionality they need to do their job,” says Yenwen Feng, CEO of Decore. “By building our search and analytics on top of Rockset, we make it possible for accountants to find and review transactions as needed.”
According to Feng, using DynamoDB and Rockset together gives Decore the best of both worlds: fast writes and fast analytics, with no need to trade off between the two. He adds, “Queries that were impossible with DynamoDB alone are now completing in milliseconds with Rockset. Anyone running analytics on DynamoDB can get better performance just by hooking up their applications to Rockset and connecting to DynamoDB.”
|
https://medium.com/rocksetcloud/decore-uses-rockset-for-search-analytics-on-dynamodb-81b353ddcb12
|
['Kevin Leong']
|
2019-08-22 18:15:55.922000+00:00
|
['Analytics', 'Serverless', 'AWS', 'Dynamodb', 'Cryptocurrency']
|
Why I’m Writing To You Now
|
First a warning. I’m going to tag a shit load of writers at the end of this, but it’s not for the reason you’re probably thinking. I’m really not that interested in a multitude of folks reading this missive, but I do want to send this message to a few of my friends — yes, I said friends — on this platform and I really hope they’ll read it.
Hope the damn notification system still works around here.
This is the first thing I’ve laid down on Medium in over a month. There’s a reason for that and maybe to some, it’s not a good one, but hell, anyone who knows me (and so few really do) can tell you I don’t always make the best choices. I started writing here three years ago and I had a plan.
Actually, it was more of a dream, but I called it a plan. A plan to make money from my writing, enough money to replace the income of my current JOB, enough steady income that would allow me to ride off into the sunset letting my brain and fingers do the walking.
For three years I fed on a daily load of bullshit dreams of glory, spurring myself to write each day and for a while I was content, telling myself I was moving closer to recognizing those dreams of glory.
The fact was I wasn’t moving at all.
My writing was lost in a sea of voices. I was drowning in a vast ocean of other people’s words and ideas and yet I continued to dog paddle and keep my head above water desperate to finally hit my stride. I must say, writing on Medium is addictive as hell and I was taking my daily dose of it. Each day “shooting up” never bothering to think about what it was doing to me. Ignoring the fact I had a life to live, a family who had always depended on me for emotional support.
And then reality set in.
One day about a month ago, I realized that if I really intended to make decent money, for me, this wasn’t the place to do it. I also saw a very ugly picture of myself and what spending three frustrating years on this platform chasing an elusive dream was doing to me.
I guess you could say — and I’m only speaking for myself here — on Medium, I spent these last three years attempting to achieve dreams of glory when all I was really doing was becoming little more than a cautionary tale.
So, I stopped writing here and refocused on creating a revenue stream by freelancing. Interestingly enough, I struck a tiny vein of gold by securing a client within the first two weeks, and then another a few days later. Now, don’t get me wrong here. The money I’m making on my freelancing gigs hasn’t come close to what I’m making on my current JOB, but each month it’s ten times more than any $100.00 month I’ve made on Medium. Which I only hit two times in three years, by the way.
Okay, enough about the money. I’ve proven to myself I can make money with my writing so why am I writing this now?
Because I miss you guys.
I miss the camaraderie, the humor, the caring, nurturing open-handed, and sometimes humbling wisdom. I miss the quirky comments and opposing perspectives.
I miss you.
Some of you have stayed in contact with me via email during my absence and each time you reach out my heart does a triple Salchow. A lot more of you didn’t and I realize now that’s okay because everyone lives their life and maintains their connections in their own unique way.
I think it’s called being Human.
What I realized in this month of solitary pondering in a Tibetan monastery (my house is a split level and I’m on the top) is that I need more than just being able to make money with my writing.
I need you, folks.
I need to read (I so need to do a better job of that) what you have to tell me and the world. I need to hear from you.
And so, (Cliche Warning) when all is said and done at the end of the day it comes down to people need to interact with people whether it’s face-to-face (Zoom nowadays) or by telephone or email or just like I’m doing right now.
This isn’t so much about me (okay, maybe just a little or a lot. I’ll let you be the judge) as it is about you. It’s “the you” I miss, and that’s why I’m writing to you now.
Sometimes finding your way isn’t always about discovering how much money you can make, but a lot of you already knew that, right? Say it with me folks. P.G.? You ain’t the sharpest tool in the shed are you son?
That’s okay because that’s what friends are for. Everybody needs a friend who’s willing to slap us upside the head and tell us to get over ourselves and pay attention to the important things.
That’s what I love about you and that’s why I’m writing to you now. You’ve no problem doling out a little common sense when you see me losing my shit. In my own little masochistic way, I’ve grown to love it.
Just please put down the whip before you respond okay?
As I said at the beginning, I’m going to tag a ton of you folks. Not necessarily to get my read count up because I’ve long gotten over that, but in hopes that this “message in a bottle” makes its way to you and you actually read it.
So in no particular order here we go. Oh, if I’ve forgotten somebody (old age memory loss sucks), I apologize. Remember to slap some sense in me if you get a moment.
Shannon Ashley, Lon Shapiro, Lindsay Lonai Linegar 🌼, Paul Myers MBA, Linda Caroll, Bebe Nicholson, White Feather, Michael Stang, Sherry McGuinn, Kristi Keller, Helen Cassidy Page, Kat of Magik, Edd Jennings, kurt gasbarra, Brian Emery, Brian Abbey, Karen Fayeth, Elle Rogers, Suzanne V. Tanner, Robin Klammer, Mark Starlin, Bonnie Barton, Joe Váradi, Joe Garza, Joe Luca, Holly Jahangiri, Rasheed Hooda, Zul Bal, Rebecca Romanelli, James Knight, Bill DuBay Jr., J.D. Harms, Amy Marley, Elizabeth Helmich, Julia E Hubbel, Mary Holden, Britni Pepper, Tree Langdon, Tre L. Loadholt, Agnes Louis, Adam, Diabetic Cyborg, Nicole Akers, Robert Nelson, Roy, Mo Solo, Mike Range, Michael Shook, Michelle Monet, Charles Roast, Sharon Hurley Hall, Lucy King, Background Noise Comics, How To Even…, G.Lodhia M. Edu, Roz Warren, Selma, Genius Turner, Ann Litts, Michele Thill, Tracey Folly, Denise Shelton, Kathryn Dillon, JC Cullum, Jeff Suwak, Jeff Hanlon
…and others…
Or if you just want to reach out to me — [email protected] — please do so. I’d love to hear from you.
Paul
© P.G. Barnett, 2020. All Rights Reserved.
|
https://medium.com/the-top-shelf/why-im-writing-to-you-now-d69d13d6cb8f
|
['P.G. Barnett']
|
2020-11-17 19:59:17.416000+00:00
|
['Health', 'Human Behavior', 'Emotional', 'Medium', 'Writing On Medium']
|
How to Make Your Product Likable
|
Customer Relations
I feel this is mostly ignored by developers: managing good customer relations. Please keep in mind that customers are driven more by emotion than logic. The question might appear stupid to you, but to your customer it is a genuinely complex problem. Your user is clueless.
Photo by Kelly Sikkema on Unsplash.
An important thing in the process is patience. You need to be patient. Users might not have enough vocabulary to express their problems, or they might have reached a wrong root cause by themselves. You need to ask questions politely. Users want to be heard. Whether you agree with them or not, listen to them, help them correct their understanding, and make them feel like they are the most valuable customer of all. Each cumulative interaction with a customer should add to the positive emotion.
Try to delight your customers by adding some surprising and creative elements now and then. The perfect example of this is Google doodles. You sometimes just open the page out of curiosity about what’s new. And for your product, it would bring increased retention and acquisition of users.
|
https://medium.com/better-programming/how-to-make-your-product-likable-7367d7887a7c
|
['Shilpi Gupta']
|
2020-10-08 16:11:31.495000+00:00
|
['Product Management', 'Software Development', 'User Experience', 'Design', 'Programming']
|
A Scalable Magento 2 Architecture
|
A Scalable Magento 2 Architecture
Utilizing AWS ECR, Fargate, EFS, and Aurora
I write this post with mixed feelings, as a company in my portfolio has recently decided to no longer on-board new Magento development projects for various reasons beyond the scope of this post. However, we are still here to help companies that are looking for a hand with designing infrastructure, maintaining the systems side of Magento environments, and migrating working Magento instances to the cloud.
For those of you who are not familiar, Magento is one of the most popular E-Commerce platforms used by e-tailers of all sizes. Perhaps it gained its popularity due to the fact that it’s an opensource project with a freely distributed community edition helping lots of companies build out powerful e-commerce websites quickly at extremely low costs. It has also in the past been one of the only few choices in its class that could efficiently handle large amounts of SKUs until the recent years where some of its competitors have caught up a bit. According to Wikipedia, there are over 100,000 stores globally that run on Magento and the software itself has been downloaded more than 2.5 million times. In 2018, Magento was acquired by Adobe for 1.68 billion US dollars, effectively becoming one of the members of Adobe’s brigade of web presence building tools.
As you can tell from the above, even though we always advise customers to build out their e-commerce sites the right way from the start with custom builds rather than pre-built, template-based platforms for many very good reasons, it's extremely easy for budget-conscious businesses to be tempted to go with a template-based platform to reduce up-front costs instead. Whether this is a good decision or not is yet another topic beyond the scope of this post. However, the point is that because of this phenomenon, we have over the years done quite a bit of work with Magento and some of the other platforms like WooCommerce, Odoo, and even Shopify.
Background
Around two years ago, a friend's company running 3 e-commerce websites on a single Magento 1.9 instance came to me and asked for some help with their site. Apparently, at the time, the site would keep on crashing, especially during high season, and that was costing them business opportunities. I took a look at it and it was a typical growth situation. Basically, everything was running on a single VPS that just wasn't cutting it anymore. So without going into too much detail, we essentially decoupled their Magento instance to an AWS EC2 auto-scaling group mounting EFS shares (we talked about containerization but the in-house team wasn't quite ready for such a big leap) and Aurora to handle the database, along with slapping Cloudflare in front of the whole thing to do some CDN and caching. They then had a somewhat scalable infrastructure from there and a single point to upload media and configure Magento for all EC2 web instances. Off we went on a launch and the problem was solved.
Magento 1.9 New Architecture
Fast forward to a few months ago: Magento 1 using PayPal was falling out of PCI compliance and PayPal urged everyone to upgrade to Magento 2. So we regrouped with the client and discussed a plan of action. This time around, they were ready for containerization, and AWS Fargate was at the same time finally mature enough for us to give it a run for its money. The hard part now was to migrate the Magento 1 data to Magento 2. This is the most painful part, which I will definitely not cover here, but let's just say that we were able to successfully migrate all of the pertinent data over to a new Magento 2 instance running in Docker after a lot of pain and many sleepless nights. If anyone is thinking about doing this, I would strongly suggest you don't go cheap: hire a strong firm that's specifically focused on Magento with lots of experience doing migrations of this type. The difference is night and day.
The New Architecture:
Now that we have a pretty solid Magento 2 instance running in Docker on our local dev environment with the migrated data from Magento 1, we are ready to throw this onto AWS. We first imported the database to Aurora and created the Redis instance. Then, we started up an EC2 instance called DevOps, ran the image on it to reconfigure Magento to use Aurora, and test-mounted some of the EFS directories in the container. After that, we committed the image and pushed it to a newly created ECR registry.
We are then ready to give it a run on Fargate, we created the cluster and task definitions so that the image we have prepared is launched to have 2 minimum containers with an auto-scaler that maxes out at 4 containers sitting behind an Application Load Balancer which are configs derived from our past traffic based estimates. So why Fargate you might ask? Here is my logic behind it:
1. No servers or orchestration infrastructure to manage
2. Handles EFS mounting and launching/monitoring of Docker containers for you via task definitions
3. Load Balancer is automatically adjusted on launch and auto-scaling
4. Overall cost reduction to only paying for what's used compared to EC2
Of course, we reused the bastion host that has OpenVPN (external) and ssh (internal LAN only) enabled to allow our customers to configure Magento and upload images to EFS that persists across all containers. It will also serve as the gateway for the client to access DevOps for making docker image changes and accessing the new analytics system we will be installing after.
As mentioned, the client wanted to run some analytics on the data in Magento so it was only logical to create a read replica of the production database so that they can run an EC2 instance and install whatever they wanted to grab data and analyze with. So that’s exactly what we did.
And that’s it! We logged into Cloudflare, swapped out the CNAME of the domains to the new load balancer and went live. This sums up all the components of a modern scalable Magento 2 architecture as diagrammed above. There are more improvements that can be done, and even components that are already there which I didn't talk much about, like Varnish and maybe even CI/CD, but what's within the diagram alone is already a solid enough foundation for general scaling and growth, and definitely ready for you to add more or improve upon without any unwanted architectural issues.
If there are any questions or suggestions, please feel free to drop a comment below or get directly in touch!
|
https://hkdb.medium.com/a-scalable-magento-2-architecture-107f5fe7a813
|
['Jeremy Cheng']
|
2020-08-16 21:04:46.299000+00:00
|
['Ecommerce', 'AWS', 'Magento', 'Fargate', 'Cloud']
|
Tips on Designing a Million Dollar Dashboard
|
Tips on Designing a Million Dollar Dashboard
How to avoid the dashboard graveyard
Photo by Stephen Dawson on Unsplash
In the past few weeks, we have watched Tableau be purchased by Salesforce for 15.7 billion dollars and Looker purchased by Google for 2.6 billion dollars.
Why are these companies so valuable?
Why are so many companies willing to pay for the services that Tableau and Looker provide?
The truth is dashboards can be expensive wastes of time and resources or they can provide insights that are worth exponentially more than the original investment. In fact, creating dashboards is a very lucrative business.
I worked for a company whose business model revolved around selling dashboards and insights to external customers. We only had four dashboard products. That was it, and yet, this company has existed for over a decade.
This is because they were really good at creating dashboards that provided impact continuously. These weren't dashboards that you use for a month or two and then forget about. Instead, the dashboards they created were concise, clear, and helped directors make decisions quickly.
They were able to do this by creating dashboards that followed a few basic principles. We wanted to share some of these basic tips below. Who knows, maybe your team will build the next million dollar dashboard.
|
https://medium.com/better-programming/tips-on-designing-a-dashboard-worth-millions-of-dollars-21b1f992dee2
|
[]
|
2019-06-26 18:50:18.675000+00:00
|
['Big Data', 'Analytics', 'Data', 'Data Science', 'UX']
|
TorchServe and [TorchElastic for Kubernetes], new PyTorch libraries for serving and training models at scale
|
TorchServe and [TorchElastic for Kubernetes], new PyTorch libraries for serving and training models at scale
Authors: Joe Spisak (Facebook), Aditya Bindal (AWS), Kiuk Chung (Facebook), Mike Stefaniak (AWS)
As PyTorch is used more and more in production environments, we’ve continued to see the need to provide better tools and platforms for the community to scale up training and deploy models efficiently.
Today, we are excited to introduce TorchServe (Experimental), a new open-source model serving library under the PyTorch project. TorchServe is the result of a collaboration between Facebook and AWS engineers aimed at providing a clean, well supported, and industrial-grade path to deploying PyTorch models for inference at scale. This library is available for free as part of the PyTorch open-source project.
We are also announcing the availability of a new co-developed, between Facebook and AWS, Kubernetes controller with tight integration to TorchElastic (Experimental), a library for fault-tolerant and elastic training in PyTorch. With the TorchElastic Kubernetes controller, developers can create fault-tolerant distributed training jobs in PyTorch using their Kubernetes clusters, including Amazon EC2 Spot instances on Amazon Elastic Kubernetes Service (EKS).
In the rest of this post, we describe these new PyTorch libraries in detail and provide resources on how to get started.
TorchServe
Deploying machine learning models for inference at scale is not easy. Developers must collect and package model artifacts, create a secure serving stack, install and configure software libraries for prediction, create and expose APIs and endpoints, generate logs and metrics for monitoring, and manage multiple model versions on potentially multiple servers. Each of these tasks adds time and complexity and can slow down model deployment by weeks, sometimes months. Further, optimizing a serving stack for low latency online applications is still more of an art than a science. Lastly, and until now, PyTorch developers lacked a canonical and officially supported way to deploy PyTorch models. That’s why we are releasing TorchServe, the PyTorch library for deploying trained models.
Below is a simple example of how to take a trained model from torchvision and deploy it using TorchServe:
#Download a trained PyTorch model
wget https://download.pytorch.org/models/densenet161-8d451a50.pth

#Package model for TorchServe and create model archive .mar file
torch-model-archiver \
--model-name densenet161 \
--version 1.0 \
--model-file examples/image_classifier/densenet_161/model.py \
--serialized-file densenet161-8d451a50.pth \
--extra-files examples/image_classifier/index_to_name.json \
--handler image_classifier

mkdir model_store
mv densenet161.mar model_store/

#Start TorchServe model server and register DenseNet161 model
torchserve --start --model-store model_store --models densenet161=densenet161.mar
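Once the server is up and densenet161 is registered, the model can be called through the Inference API on port 8080; a minimal sketch in Python (the image file name is an assumption):
import requests

# Post an image to the TorchServe Inference API and print the predicted classes.
with open("kitten.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8080/predictions/densenet161",
        data=f.read(),
    )
print(response.json())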
The experimental release of TorchServe, available today, includes:
Clean APIs — Support for an Inference API for predictions and a Management API for managing the model server.
Secure Deployment — Includes HTTPS support for secure deployment.
Robust model management capabilities — Allows full configuration of models, versions, and individual worker threads via command line interface, config file, or run-time API.
Model archival — Provides tooling to perform a ‘model archive’, a process of packaging a model, parameters, and supporting files into a single, persistent artifact. Using a simple command-line interface, you can package and export in a single ‘.mar’ file that contains everything you need for serving a PyTorch model. This ‘.mar’ file can be shared and reused. Learn more here.
Built-in model handlers — Support for model handlers covering the most common use-cases (image classification, object detection, text classification, image segmentation). TorchServe also supports custom handlers.
Logging and Metrics — Support for robust logging and real-time metrics to monitor inference service and endpoints, performance, resource utilization, and errors. You can also generate custom logs and define custom metrics.
Model Management — Support for management of multiple models or multiple versions of the same model at the same time. You can use model versions to roll back to earlier versions or route traffic to different versions for A/B testing.
Prebuilt Images — Ready to go Dockerfiles and Docker images for deploying TorchServe on CPU and NVIDIA GPU based environments. The latest Dockerfiles and images can be found here.
Getting Started with TorchServe
You can get started at pytorch.org/serve with installation instructions, tutorials and docs.
If you have questions, please drop it into the PyTorch discussion forums using the ‘deployment’ tag or file an issue on GitHub with a way to reproduce.
Kubernetes Controller with TorchElastic Integration
With larger and larger models being trained, such as RoBERTa and TuringNLG, the need to scale out to a distributed cluster is increasingly important. The use of preemptible instances, such as Amazon EC2 Spot instances, to satisfy this need is a common practice. However, these preemptible instances are unpredictable by their very nature. The integration of Kubernetes and TorchElastic allows PyTorch developers to train machine learning models on a cluster of compute nodes that can dynamically change without disrupting the model training process. The built-in fault tolerant capabilities of TorchElastic can pause node level training even when a node goes down and resume once the node is healthy again.
Additionally, using the Kubernetes controller with TorchElastic, you can run mission-critical distributed training jobs on clusters with nodes that get replaced, either due to hardware issues or node reclamation. Training jobs can launch with partial requested resources, and dynamically scale as resources become available without being stopped or restarted. To take advantage of these capabilities, users can simply specify training parameters in a simple job definition and Kubernetes-TorchElastic package will manage the job’s life cycle.
Below is a simple example of a TorchElastic configuration for an imagenet training job:
apiVersion: elastic.pytorch.org/v1alpha1
kind: ElasticJob
metadata:
  name: imagenet
  namespace: elastic-job
spec:
  rdzvEndpoint: $ETCD_SERVER_ENDPOINT
  minReplicas: 1
  maxReplicas: 2
  replicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: ExitCode
      template:
        apiVersion: v1
        kind: Pod
        spec:
          containers:
            - name: elasticjob-worker
              image: torchelastic/examples:0.2.0rc1
              imagePullPolicy: Always
              args:
                - "--nproc_per_node=1"
                - "/workspace/examples/imagenet/main.py"
                - "--arch=resnet18"
                - "--epochs=20"
                - "--batch-size=32"
                - "/workspace/data/tiny-imagenet-200"
Getting Started with TorchElastic on Kubernetes
Learn more about the Kubernetes Controller design here.
Full docs and tutorials can be found here.
Cheers!
Joe, Aditya, Kiuk & Mike
|
https://medium.com/pytorch/torchserve-and-torchelastic-for-kubernetes-new-pytorch-libraries-for-serving-and-training-models-2efd12e09adc
|
[]
|
2020-04-21 17:41:19.922000+00:00
|
['Model Serving', 'AWS', 'Announcements', 'Pytorch', 'Machine Learning']
|
From Dev to Prod - All you need to know to get your Flask application running on AWS
|
From Dev to Prod - All you need to know to get your Flask application running on AWS
Getting the right configurations, making sure it is secured, ensuring resource access through endpoints and having a pretty rendering, … all of them made easy thanks to AWS!
As a machine-learning engineer, I never really faced the issue of putting my algorithms out there myself. Well, that was until recently, when I decided to start my multiple entrepreneurial journeys … Unfortunately, when you start, you do not have a DevOps or software engineering team. Those teams are experienced with the world in which customers actually use those services, and they know how to bridge the last step to bring your product from zero to one.
Now, I have had to spend hours reading tutorials and documentation to learn the basics and finally publish my own algorithms as world-wide available, independent containerized services. For obvious reasons of reproducibility, I went through the templating of those steps, and I am more than happy to share those templates with you! :) [The templates are hosted here.]
Prepare the battlefield!
This initialization step is crucial and dependent on your practices. Here I expose the way I do it, but feel free to be original. My only assumption so far is that you have a proper AWS account, and you know how to navigate in between services!
Install and configure the AWS CLI: AWS Tutorial.
Install and configure the AWS Elastic Beanstalk CLI: AWS Tutorial.
Download the templates: You currently have three choices: You can either download each file from the folder template independently; you can clone the entire repository for project Challenger, which is where I host my templates; you can use subversion to download a specific folder through the following commands:
sudo apt install subversion
svn checkout https://github.com/Coricos/Challenger/trunk/templates/beanstalk
Define your dev environment:
virtualenv -p python3 beanstalk
cd beanstalk
source bin/activate
pip install -r requirements.txt
Build your Flask server in application.py ! Check that everything is running properly by trying it on your local machine first, and make sure to encapsulate the server launch in the __main__ block. Also, this is an AWS-specific requirement, but the name of your application has to explicitly be application .
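For reference, a minimal application.py satisfying that naming requirement might look like the sketch below (the /health route is just an illustration):
from flask import Flask, jsonify

# Elastic Beanstalk's Python platform looks for an object literally named
# "application" in this module.
application = Flask(__name__)

@application.route("/health")
def health():
    # Simple endpoint to confirm the service is up.
    return jsonify(status="ok")

if __name__ == "__main__":
    # Local development only; on Beanstalk the platform's WSGI server imports
    # the "application" object directly.
    application.run(host="0.0.0.0", port=5000)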
It works! Now what?
Because I found AWS Elastic Beanstalk to be the best service to run my applications, I am naturally keen to present it. Why? Mainly because what runs on a Beanstalk instance can be spawned on any other similar service and cloud provider. That gives you unlimited flexibility (and I am not even talking about using Docker for even greater deployability).
This service consists of a basic container running on an EC2 instance, linked to an S3 bucket for storage. In our case, the application itself does not require a lot of computational power (we have not talked about deep learning so far), so we will opt for the t2.micro instance (single virtual core and 1 GB of RAM). From there, it will only be about configuration, because AWS made our lives easier: you do not have to think about subnets, security groups, VPCs, IP gateways, NAT, … These are automatically created and defined when you spawn your instance. Nonetheless, for those needing to control these resources (working in a VPC or with RDS), you can configure everything through security.config in the .ebextensions folder. The critical configuration ends with the config.yml file in the .elasticbeanstalk folder:
environment-defaults:
  {flask-app}:
    branch: null
    repository: null
global:
  application_name: {my-app}
  default_ec2_keyname: {my-ec2-key}
  default_region: {region}
  profile: {my-profile}
  workspace_type: Application
For that configuration section, there is not much to do: give a fancy name to your application as well as to your service (~environment); define its EC2 key name if you need one, or use the default otherwise; select the region in which you want to spawn the instance; choose your user profile; and use the Application load balancer (that's my advice).
From there, you can already access and visualize your application running online, under a name such as {flask-app}.{id}.{zone}.aws.com. However, this lacks something: encryption during information transfer. And you may not be me, but I really dislike using websites or endpoints that do not use HTTPS…
Get the princess in the fortress!
Unfortunately, your instance cannot use HTTPS without SSL certificates. Usually, people would work with OpenSSL, which is pretty straightforward, but in our case AWS makes it easy once again with their Certificate Manager service. If you want to make it even fancier, buy a domain name through the Route 53 service. You can then create your own certificate either with AWS for your Beanstalk, or relative to your newly acquired domain name (it looks way more pro this way, I assure you). Now, two records have to be configured to redirect the requests: a Canonical Name, such as {cname}.domain-name, that takes your EB instance as its value; and an Alias, such as {alias}.domain-name, for your EB instance as well. With those two records, you will be good to go!
Please, use HTTPS!
What you are missing is the messenger: the specific redirection of HTTPS requests to your instance. That messenger is called a listener, with a straightforward configuration: direct HTTPS from the outside world to HTTP on your instance. (Available in listener.config !)
option_settings:
  aws:elb:listener:443:
    InstancePort: 80
    ListenerEnabled: true
    InstanceProtocol: HTTP
    ListenerProtocol: HTTPS
    SSLCertificateId: {certificate}
But that’s not all! To make sure your instance accepts HTTPS you have to configure the server: that is what https.config does for you! ;)
Deploy Deploy Deploy!
Once you have figured out your Flask application, how to fill all missing configuration settings, how to get your SSL certificate, it is time to deploy! (No need to init, we have built the configuration files for that purpose.)
eb create flask-app
Want More? Here you are!
Here are a few things I did not want to integrate into the previous tutorial but are equally important in terms of the next steps:
|
https://towardsdatascience.com/from-dev-to-prod-all-you-need-to-know-to-get-your-flask-application-running-on-aws-ecedd4eec55
|
['Meryll Dindin']
|
2019-11-11 20:10:48.875000+00:00
|
['Flask', 'AWS', 'Good Practices', 'Production', 'Cloud']
|
Vue Authentication Using Sendy Auth Plugin
|
Configuring Sendy Auth
The options object includes a number of properties that can be used to configure the plugin.
driver (string)
The driver property sets the social authentication driver that will be used when performing social authentication. If a driver is not defined in the object then the app will default to using Google.
NOTE: Currently, the plugin only supports the ‘Google’ driver.
authUrl (string)
In case there is an internal authentication endpoint that is required in your application, this is where you define it. This property is also used for basic authentication that shall be covered in the second part of the series.
NOTE: For social authentication, if this is provided then the payload that will be sent to the url will be the email and password (token) obtained from the social providers.
configs
The configs property further customises the driver. It’s an object whose keys should be supported drivers only. Each definition is an object with the necessary driver configurations. For example Google’s config object expects a clientId that is used to access google apis (gapi).
configs: {
  google: {
    clientId: '' // enter your Google API client ID here as a string
  }
}
Sendy auth automatically registers two components in the global Vue instance: the social auth component and the basic auth component.
|
https://medium.com/sendyit/social-basic-authentication-for-vue-applications-using-sendy-auth-plugin-5b355ac047cd
|
['Francis Kisiara']
|
2019-09-19 09:54:23.990000+00:00
|
['Front End Development', 'JavaScript', 'Plugins', 'Vuejs']
|
Bugs in Trust
|
While some software teams have instituted expensive zero-tolerance policies, most teams try to balance the priority of each bug against a mountain of other worthwhile demands. Both absolutist and pragmatist policies can be perfectly appropriate depending upon the nature of the software. For instance, I'd like the software that keeps my plane aloft to work 100% of the time, thank you very much. But if a gif of a panda rolling around in the grass pauses half-way through, it's not the end of the world. For "system of record" software like Greenhouse, we've found our own tolerance level sits closer to "keep the plane in the air" than to the gif-stuttering consumer site, but we don't bring everything to a halt if a button is 3px left of where it should be in IE10.
To bring clarity to how we prioritize our work, we have developed a new term that has found its way into our day-to-day conversations at Greenhouse, so I am writing this short article to share this term with the broader community. While we do make trade-offs in prioritizing certain issues, we have recently instituted a zero-tolerance policy for what we call trust bugs. Let’s define the term.
Trust bugs are distinct from other bugs, in that they cause the user to either:
not believe that the data as presented are correct, or not believe that an action taken will reliably occur
Bugs in this category are distinct because they don’t just cause frustration, they harm the relationship between a company and its customer. Trust bugs create skepticism every time a button is pressed, or a list appears, and this eventually leads to customer churn.
Trust is much harder earned than it is lost, so don’t let trust bugs sit in the backlog.
|
https://medium.com/hackernoon/bugs-in-trust-55bce19d9ad5
|
['Michael Boufford']
|
2017-10-20 15:21:01.567000+00:00
|
['Quality Assurance', 'Customer Service', 'Software Engineering', 'Software Development', 'Testing']
|
How to Build a Reporting Dashboard using Dash and Plotly
|
A method to select either a condensed data table or the complete data table.
One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:
Code Block 17: Radio Button in layouts.py
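Since the embedded gist is not reproduced here, a minimal sketch of such a radio button (with a hypothetical id and labels of my own) might look like this:
import dash_core_components as dcc

radio_table_view = dcc.RadioItems(
    id='radio-table-view',                     # hypothetical id
    options=[
        {'label': 'Condensed table', 'value': 'condensed'},
        {'label': 'Complete table', 'value': 'complete'},
    ],
    value='condensed',                         # default to the condensed view
    labelStyle={'display': 'inline-block'},
)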
The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:
Code Block 18: Callback for Radio Button in layouts.py File
This callback is a little more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, using the callback statement Output('datatable-paid-search', 'data'), this callback changes the columns presented in the data table based upon the radio button selection, using the callback statement Output('datatable-paid-search', 'columns').
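To show the shape of such a callback, here is a minimal, hypothetical sketch; only the table id 'datatable-paid-search' comes from the article, while the radio button id and the two column lists are placeholders:
from dash.dependencies import Input, Output

@app.callback(
    Output('datatable-paid-search', 'columns'),
    [Input('radio-table-view', 'value')])
def update_table_columns(view):
    # Return a short list of key columns for the condensed view,
    # or every column (including hidden helper columns) otherwise.
    cols = CONDENSED_COLUMNS if view == 'condensed' else ALL_COLUMNS
    return [{'name': c, 'id': c} for c in cols]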
Conditionally Color-Code Different Data Table cells
One of the features the stakeholders wanted for the data table was the ability to have certain numbers or cells highlighted based upon a metric's value (red for negative numbers, for instance). However, conditional formatting of data table cells has three main issues.
There is lack of formatting functionality in Dash Data Tables at this time.
If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly.
There is a bug in the Dash data table code in which conditional formatting does not work properly.
I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:
Code Block 19: Conditional Formatting — Highlighting Cells
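For reference, the documentation example looked roughly like the sketch below. The filter keyword shown here, filter_query, is the syntax of more recent Dash releases; the release current at the time used filter, which is where the bug lived:
import dash_table

dash_table.DataTable(
    data=df.to_dict('records'),
    columns=[{'name': c, 'id': c} for c in df.columns],
    style_data_conditional=[
        {
            'if': {'column_id': 'Temperature',
                   'filter_query': '{Temperature} > 3.9'},
            'backgroundColor': '#3D9970',
            'color': 'white',
        }
    ],
)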
The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.
*This has since been corrected in the Dash Documentation.
Conditional Formatting of Cells using Doppelganger Columns
Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add "doppelganger" columns to both the pandas data frame and the Dash data table. These doppelganger columns hold either the value of the original column or the value of the original column multiplied by 100 (to work around the bug in which the decimal portion of a value is not considered by conditional filtering). The doppelganger columns can then be added to the data table but hidden from view with the following statements:
Code Block 20: Adding Doppelganger Columns
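A sketch of what that might look like follows; the column names come from the article, but how the helper column is hidden depends on the Dash version (early releases accepted a hidden flag in the column definition, later ones expose a hidden_columns property on the table):
# Helper column scaled by 100 so the integer-only filter bug does not bite
df['Revenue_YoY_percent_conditional'] = df['Revenue YoY (%)'] * 100

columns = [
    {'name': 'Revenue YoY (%)', 'id': 'Revenue YoY (%)'},
    {'name': 'Revenue_YoY_percent_conditional',
     'id': 'Revenue_YoY_percent_conditional',
     'hidden': True},               # keep the helper column out of sight
]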
Then, the conditional cell formatting can be implemented using the following syntax:
Code Block 21: Conditional Cell Formatting
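In sketch form, the conditional rule filters on the hidden helper column while styling the visible one:
style_data_conditional = [
    {
        'if': {'filter_query': '{Revenue_YoY_percent_conditional} < 0',
               'column_id': 'Revenue YoY (%)'},   # style the visible column
        'color': 'red',
    }
]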
Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%) . One can imagine other usages for this method of conditional formatting; for instance, highlighting outlier values.
The complete statement for the data table is below (with conditional formatting for odd and even rows, as well highlighting cells that are above a certain threshold using the doppelganger method):
Code Block 22: Data Table with Conditional Formatting
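A condensed, hypothetical version of such a table definition, combining zebra striping with the doppelganger-based highlighting (the threshold value is made up for illustration), might look like this:
import dash_table

dash_table.DataTable(
    id='datatable-paid-search',
    columns=columns,                        # visible plus hidden helper columns
    data=df.to_dict('records'),
    style_data_conditional=[
        # zebra striping on odd rows
        {'if': {'row_index': 'odd'},
         'backgroundColor': 'rgb(248, 248, 248)'},
        # highlight cells whose helper value exceeds a threshold
        {'if': {'filter_query': '{Revenue_YoY_percent_conditional} > 1000',
                'column_id': 'Revenue YoY (%)'},
         'backgroundColor': '#3D9970',
         'color': 'white'},
    ],
)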
I describe the method to update the graphs using the selected rows in the data table below.
|
https://medium.com/p/4f4257c18a7f#ed95
|
['David Comfort']
|
2019-03-13 14:21:44.055000+00:00
|
['Dashboard', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Dash']
|
Shocking Research Warned of a Pandemic Decades Ago
|
Shocking Research Warned of a Pandemic Decades Ago
History repeats itself so why weren’t we prepared?
Red Cross volunteers fighting the influenza epidemic of 1918 (Photo Credit: Getty Images/American Red Cross)
Reality is made up of circles. The swirling tides of historical events eventually sweep back to our present. The fascinating duplicity of history is that it does nothing but repeat itself. Every event, every act, and every move can be predicted. Nothing is ever original, and the outlook of the 2020s is living proof.
The simple irony of life is that the more we change, the more we stay the same.
The thought of predicting the future is no longer a fantasy but rather an existing possibility. But if we’ve walked down this path before, why do we keep falling into the same pits?
|
https://medium.com/history-of-yesterday/shocking-research-warned-of-a-pandemic-decades-ago-d5e570380316
|
['Kim Mia']
|
2020-12-30 18:02:47.668000+00:00
|
['Covid 19', 'Humanity', 'History', 'Future', 'Pandemic']
|
My Personal Thoughts, Opinions & Preferences on Programming in 2020
|
My Personal Thoughts, Opinions & Preferences on Programming in 2020
Personal technology preferences after building many projects in the past
This story is originally published here.
Photo by John Schnobrich on Unsplash
Over the past few months, I've been building many side projects in my free time for learning, apart from working full-time as a software engineer for the past 3 years. When building the side projects, I intentionally used many different technology stacks that I'm not familiar with in order to learn and make more informed choices in the future.
Over that period, I’ve developed my preferences towards the tech stacks of choice and approaches on how I’m gonna build my next project.
Prefers API-first development that automatically generates API documentation for you. It's always better to spend extra hours documenting your API than to spend extra hours every time figuring out how to use the API endpoint you wrote 3 months ago. It also frees up mental space. You may use FastAPI, Django REST Framework, or Connexion for this.
Example API Doc
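To make the point concrete, here is a minimal, hypothetical sketch of this style with FastAPI; the route and model are made up for illustration, and the type hints and Pydantic model are what drive the automatically generated docs at /docs and /redoc:
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items/")
def create_item(item: Item):
    # The declared request schema above is what FastAPI uses to build the
    # interactive OpenAPI documentation, with no extra documentation effort.
    return {"name": item.name, "price": item.price}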
Prefers GraphQL over RESTful APIs when writing frontend code. A GraphQL server automatically generates a schema inspector that helps frontend developers use the API easily. However, writing a GraphQL server is still a big pain these days, so use Hasura (or AWS AppSync, although less preferred) for that. As for frontend development, the JavaScript client libraries are fairly great, well maintained, and easy to use. When building a REST API, always follow API design conventions and best practices like this one from Microsoft:
Prefers monolithic API/web development (using Django) over microservice development for ease of development and deployment. Microservices create a host of new problems (networking, security, deployment, management, observability, etc.) that you need to tackle.
Prefers Jamstack over regular web app development. Deployment to the edge is super easy and cheap, which means the website (frontend) is served very fast from all around the world and is always available. It also works perfectly with the API-first development approach mentioned earlier, which gives you a unified API.
Prefers matured and boring technology over bleeding-edge tech for critical projects. Keep things simple and it will make you go faster! Let other people discover the bugs in new tech first while you enjoy the battle-tested tech. I always refer to the InfoQ Trends Report for reference, and I am more interested in the Early Majority and Late Majority quadrants than in the Innovators and Early Adopters quadrants.
Prefers async-first web development with the Transactional Outbox pattern over synchronous web development. Return the result to the user immediately after the data has been validated and atomically inserted into the database; all other operations should be done as asynchronous jobs by workers. Prefers Change Data Capture (CDC) to achieve this. This is a good example of this approach:
Prefers Kubernetes for sophisticated deployments and other container orchestrators (e.g. HashiCorp Nomad, AWS ECS, Heroku, or GCP Cloud Run) for personal project deployments. Use Kubernetes only when your project requires complex network rules (using CNI), automatic scaling (using HPA), stateful services with automatic storage provisioning (using CSI), deployment on 10s, 100s, or 1000s of nodes, observability, automatic SSL certificate provisioning, complex ingress rules, custom CRD controllers, canary deployments, health checks, etc., because that is what makes Kubernetes great. Otherwise, use a simpler orchestrator that doesn't deal with all of that. Kubernetes is not easy to tame.
Prefers serverless container development and deployment (e.g. Cloud Run, ECS Fargate) over function development and deployment (e.g. AWS Lambda). The advantages of container-based development are that there is no vendor lock-in, and the tooling and developer experience (DX) are much better in the container ecosystem. I am still not comfortable building applications the FaaS way (on AWS Lambda) and still struggle with how to deploy the app to a staging environment and how to debug correctly. However, Yan Cui might change my mind in his upcoming lesson. In either case, you don't have to worry about VPC settings, NAT gateways, firewall rules, security groups, OS patching, etc., which is great!
Prefers monolithic SQL databases over NoSQL databases for most projects. SQL databases give you flexibility in designing your model. NoSQL (especially DynamoDB), on the other hand, requires you to know the access patterns before you start development, which costs a lot of time when you refactor the app in the future. Only use a NoSQL DB if you have a really specific use case (e.g. a high-throughput database). This is not the case if you're using multiple databases.
Prefers managed serverless service offerings/SaaS over building my own or self-hosting, as long as it's not my core business product. For example, use a cloud transcoding service rather than deploying VMs and running ffmpeg to transcode videos, use GCP BigQuery to run analyses rather than spinning up and setting up a new dedicated MySQL server, use the AWS Textract service rather than building and deploying machine learning models that do Optical Character Recognition (OCR) on digital documents, etc. This is highly influenced by this article:
Prefers opinionated frameworks/languages over unopinionated ones. Usually these frameworks and languages, especially the ones with a high number of GitHub stars, are built and contributed to by experts in their fields. I prefer Django over Flask/FastAPI, NuxtJS over VueJS, and Buefy over Bulma CSS. Too much flexibility can lead you to poor choices, and you'll have to implement the best practices on your own (which costs a lot of time and effort).
Prefers Firebase Auth over other authentication/identity providers because of its generous free tier (unlimited user accounts). Firebase Auth is good enough for me whenever I don't need to build my own auth service (an auth service is more complicated than you think: you'll need to handle token expiration, password resets, email confirmation, refresh tokens, social logins, etc.).
Use the right tool for the right job, e.g. use Elasticsearch for full-text search rather than a SQL database, use a columnar-store database (e.g. ClickHouse, Redshift, or GCP BigQuery) for OLAP and a row-store database (e.g. MySQL, AWS Aurora, or PostgreSQL) for OLTP and not vice versa, use RDBMS indexes and materialized views for quick queries, and use an API gateway when managing multiple API services and endpoints. When dealing with multiple datastores, you must decide which one is the source of truth.
Photo by Joanes Andueza on Unsplash
Learn how people are using tech outside your company and outside your job scope. Watch a lot of conference talks and workshops, and read blogs. Learn from gurus like Kelsey Hightower, Chris Richardson, Jaana Dogan, folks from FANG, etc. on how they build stuff. It will open up your mind!
When reading library or service documentation, read it thoughtfully and do not skip the best practices section. That's where you learn to use the library or service the right way, as intended.
Try not to reinvent the wheel. Use matured and stable libraries and frameworks whenever possible. Most established libraries and frameworks (like Django) were written to make sure that you follow the best practices in the industry. For example, Django mitigates many security vulnerabilities by default so you can focus on your application.
Not a fan of a multi-cloud strategy; it adds too much hassle. Unless you're a big corporation with hundreds of SREs, don't dream of it. Most of the time it's overkill, and one major cloud provider should be sufficient to handle your workload.
Photo by Pero Kalimero on Unsplash
Unit tests really help when you upgrade your libraries or languages in the future. The system should behave the same before and after the upgrade. Another benefit is that you can avoid mistakes after refactoring when you anticipate no surprises. Unit tests are also really valuable when you work in a big team and other team members make changes to program logic that affect other components.
The code is a liability. Less code is better.
Do not overdo it when building an MVP. Speed is the key. Premature optimization is your enemy.
Build incrementally and see the results immediately.
When using cloud services, always try to leverage spot/preemptible instances whenever possible to significantly reduce your cloud bill.
These thoughts and preferences cover only technology choices. They don't cover business considerations, because that is not what I was doing. I wish to get involved more in product and business decisions in the future.
Hope that this benefits you in making better choices as well.
Feel free to reach me on Twitter or on my personal blog.
|
https://medium.com/swlh/my-personal-thoughts-opinions-preferences-on-programming-in-2020-ee8d623de133
|
['Mohamad Fadhil']
|
2020-10-15 12:01:47.534000+00:00
|
['Web Development', 'Software Development', 'Software Engineering', 'DevOps', 'Programming']
|
Searching for Food Deserts in Los Angeles County
|
img source: robrogers.com
For a recent data science project, I collaborated with several other Lambda School students to search for food deserts in L.A. County. A general definition of a food desert is an area that does not have access, within one mile, to a grocery store or market providing fresh, healthy food options, such as fruits, vegetables, meats, etc. We wanted to test the theory that lower income neighborhoods are more likely to be located in a food desert.
Step 1: Source the data
There were 3 main types of data we needed to find for this project: data for grocery stores/markets in L.A. County, geographic data for each zip code in L.A. County, and finally, income data for each zip code.
For the grocery stores, we sourced data from the SNAP retailers database for all of CA and then filtered for L.A. County. Unfortunately, this data was overly inclusive, as it contained many convenience stores, liquor stores, and other random establishments that should not qualify as providing access to healthy and fresh food. Using regex, we tried to eliminate as many of these non-qualifying establishments as possible. Finally, we used the latitude and longitude for each market and generated a Point object using the Shapely library.
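As a rough sketch of that last step (the dataframe and column names here are assumptions, not the project's actual ones), each market's coordinates can be wrapped in a Shapely Point like this:
from shapely.geometry import Point

# Shapely expects (x, y), i.e. (longitude, latitude)
markets['geometry'] = [
    Point(lon, lat)
    for lon, lat in zip(markets['longitude'], markets['latitude'])
]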
Next, we were able to find geographic information for each zip code in L.A. County. However, due to the curvature of the earth, we needed to convert this data into a 2-dimensional representation, or projection, so that we could display it graphically. We read the data into a geopandas dataframe and then called the to_crs() method, passing in epsg=4326, which converts the geographic data from GIS coordinates to latitude/longitude and also formats the zip code polygon objects to display properly in a 2-d visualization.
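In code, that re-projection is essentially a one-liner (the file name below is a placeholder):
import geopandas as gpd

# Load the zip code polygons and re-project them to plain latitude/longitude
zip_gdf = gpd.read_file('la_county_zip_codes.shp')  # hypothetical file name
zip_gdf = zip_gdf.to_crs(epsg=4326)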
The final data we sourced was median household income for each zip code in L.A. County, which we store in a pandas dataframe to be merged with food desert and grocery density later in the project.
Step 2: Perform a Grid Search over all of L.A. County
We wanted to do an exhaustive grid search of the entirety of L.A. County in order to identify the locations of any food deserts, as well as compile some summary statistics for each zip code such as average number of grocery stores within 1 mile in order to calculate a grocery density metric for each zip code. Then later we would compare this data with median income for each zip code and examine the results.
First, for each zip code, we created a rectangular shape around the geographic bounds of the zip code, then starting in the northwestern corner, iterated in quarter mile steps horizontally (eastward), then started again at the western edge but stepped down one quarter mile south, and iterated across horizontally again, repeating this process until the entire area had been exhausted. At each step, we generated a Shapely Point object and appended to a list for that zip code. We added this list of Points inside each zip code to a new column in the geographic dataframe. For all of L.A. county, this resulted in just under 58,000 points.
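A simplified sketch of that grid walk is below; the conversion of a quarter mile into degrees is an approximation of my own, not necessarily the exact constant the project used, and any filtering of points down to the zip code polygon itself is omitted:
from shapely.geometry import Point

STEP = 0.25 / 69.0  # roughly a quarter mile expressed in degrees of latitude

def grid_points(zip_polygon):
    minx, miny, maxx, maxy = zip_polygon.bounds  # rectangle around the zip code
    points = []
    y = maxy                       # start at the northern edge
    while y >= miny:
        x = minx                   # reset to the western edge
        while x <= maxx:
            points.append(Point(x, y))
            x += STEP              # step a quarter mile east
        y -= STEP                  # then drop a quarter mile south
    return points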
Next, for each zip code, we iterated over each point in the list of points. For each point, we used the buffer() method to draw a circle with radius of one mile around each point and save that circle as a Shapely Polygon object. Because each shapely Polygon object contains the values for all coordinates that comprise the exterior of the shape of the object, we can call the contains() method for that polygon and pass in a coordinate. This method returns a Boolean True or False whether or not that given coordinate is located in the interior of that Polygon object.
In order to test for food deserts, we can check the coordinates of all of our markets to see if they exist inside the boundaries of that circle. However, the dataframe contains several thousand markets, so performing this check for every market for every single generated point in L.A. County would be computationally quite expensive. Instead of iterating through every market in our dataframe, we can speed up the process significantly by first filtering for markets whose coordinates are within a square boundary of our circle, which we can access using the bounds property of the circle (a Shapely Polygon object). Now our list of markets to test is significantly reduced. For each market inside the square, if the circle object contains the market coordinates, we increase the count of the number of groceries inside the circle.
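Putting those pieces together, a sketch of the per-point check might look like the following; the 1-mile radius in degrees is again an approximation, and the dataframe and column names are placeholders:
RADIUS = 1.0 / 69.0  # roughly one mile in degrees of latitude

def groceries_near(point, markets):
    circle = point.buffer(RADIUS)            # Shapely Polygon around the point
    minx, miny, maxx, maxy = circle.bounds   # square bounding box of the circle
    candidates = markets[
        (markets['longitude'] >= minx) & (markets['longitude'] <= maxx) &
        (markets['latitude'] >= miny) & (markets['latitude'] <= maxy)
    ]
    # Exact containment test only on the much smaller candidate set
    return sum(circle.contains(geom) for geom in candidates['geometry'])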
Finally, in order to create a grocery density statistic as well as a master list of all points that were food deserts (no markets found inside the one mile radius circle for that point), we made a function to iterate over each zip code, and then each point in its list of points. We initialized a counter for each zip code that would record the number of food deserts found among the list of points. We also initialized two empty lists, one to record the locations of any food deserts found, and one to record the number of groceries inside the circle for each point in the list.
After iterating through each zip code, we unpacked the results to create a master list of all food deserts found, as well as to compute summary statistics by zip code such as grocery density (average number of nearby groceries found, as well as the total number of food deserts for each zip code).
Step 3: Analyze the results and create visualizations:
After unpacking the results of the exhaustive grid search, we first wanted to create a visualization of where the food deserts were found. As you can see, the majority of them are located in the northern half and western portions of L.A. County. These are areas that are generally either national parks, mountainous regions, or even actual deserts, so the existence of these food deserts makes perfect sense.
Next, we plotted the grocery store density by zip code. As you can see, the density was highest in the heart of the city.
Lastly, we wanted to examine the grocery density relative to income, so we binned incomes into 4 ranges, and plotted the average grocery density for each bin.
Interestingly, our results seemed to indicate that lower income areas actually had a higher grocery store density than higher income areas.
Conclusions/Takeaways:
After considering why our results were paradoxical to our expectations, we learned a couple of good data science lessons. First, the results of any data science project will generally hinge on the quality of the underlying data. In our case, the underlying data for our list of markets and grocery stores was overly broad. Even after our best efforts at manually filtering out disqualifying establishments using regex, there were still several hundred to perhaps thousands of ‘markets’ that probably should not have qualified. Unfortunately, we did not have enough man hours to go through each establishment one by one and determine whether or not healthy, fresh food was available.
Also, due to the unique topography of Los Angeles, many wealthy areas are located in the sprawling hills/mountainside and beachfront areas. Most of these wealthier households own cars and have no issue driving more than one mile to a grocery store. Conversely, lower income households are often in more dense urban areas. Members of these households often don't own a vehicle but can get around by walking, biking, or using public transportation. Since the overall density of these areas is higher, and because our dataframe of markets included many corner stores, bodegas, and ethnic stores/retail establishments, the grocery density in these areas ended up higher than for the high income zip codes.
Overall, I thought this was a great and enjoyable project. I gained some experience with some new Python libraries, including GeoPandas, Shapely, and pyproj, made some neat geographic data visualizations, and learned some valuable data science lessons along the way.
If interested in checking out some of our code, here is a link to the notebook for this blog, and here is a link to our group repo
|
https://towardsdatascience.com/searching-for-food-deserts-in-los-angeles-county-b573467a55b
|
['Josh Mancuso']
|
2019-08-19 16:34:05.058000+00:00
|
['Python', 'Data Science', 'Data Visualization', 'Pandas', 'Food']
|
Why the U.S. Waited So Long to Regulate Carbon Dioxide
|
I am an environmental engineer who studied air quality and atmospheric science. I know I have a lot to learn, but I thought I at least knew the very basics of air quality and its relationship with climate change.
I was wrong. I knew the basics of the science. Not the policy.
I have all of this knowledge about how carbon dioxide (CO₂) chemically impacts the oceans and coral, and how crops will struggle to grow while weeds have potential to thrive. But not about the policies that regulate the environment we live with.
To kickstart my education, I figured, why not read a nonpartisan book about some of the most prevalent environmental policy challenges to date. (I’d recommend reading How the Government Got in Your Backyard by Jeff Gillman and Eric Heberlig.)
I made my way through several chapters, learning about invasive species management and the strict lawncare rules of homeowners associations, as well as the key differences between organic and conventional farming.
But when I reached the chapter on climate change, one thing shocked me: CO₂ has only been regulated at any governmental level in the United States since 2007.
Wait. How could I have not known this?
And how did that tiny molecule become the face of global climate change?
|
https://medium.com/climate-conscious/carbon-dioxide-policy-how-we-got-here-9de7143d6ffa
|
['Charlee Thompson']
|
2020-08-06 15:38:41.221000+00:00
|
['Climate Science', 'Carbon Emissions', 'Climate Change', 'Policy', 'Environment']
|