Speech Recognition with TensorFlow.js

Deploying a sample model with TensorFlow.js
As we said, TensorFlow.js is a powerful library that lets us work on many different tasks, such as image classification, video manipulation, and speech recognition, among others. Today I decided to work on a basic speech recognition example.
Our code will be able to listen through the microphone and identify what the user is saying, at least for a few words, since the sample model I’m using has some limitations. But rather than explaining, I think it’s cooler to see it in action first:
Unfortunately, I can’t run the code on Medium, but you can access the live demo here.
Pretty cool, right? I know it can be a bit erratic, and it’s limited to a few words, but if you use the right model, the possibilities are endless. Enough talking, let’s start coding.
The first thing we need to do is install the library and get our model. There are a few options for installing TensorFlow.js that can be reviewed here; in our case, to keep it simple, we will import it from a CDN.
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf.min.js"></script>
<script src="https://unpkg.com/@tensorflow-models/speech-commands"></script>
Then we use some HTML to show the list of words:
<div class="demo">
  <div>
    <label class="form-switch">
      <input type="checkbox" id="audio-switch">
      Microphone
    </label>
    <div id="demo-loading" class="hidden">Loading...</div>
  </div>
  <div id="sp-cmd-wrapper" class="grid"></div>
</div>
So far nothing strange: we have our checkbox, a loading element, and a wrapper element that we will use to render the list of words. Let’s do that next:
const wrapperElement = document.getElementById('sp-cmd-wrapper');

// `wordList` is assumed to be defined elsewhere with the words the model
// knows (see `words = recognizer.wordLabels()` further below).
for (let word of wordList) {
  wrapperElement.innerHTML += `<div id='word-${word}'>${word}</div>`;
}
In order for the demo to start working, we need to click on the Microphone checkbox, so let’s set an event listener there to trigger the loading and listening processes.
document.getElementById("audio-switch").addEventListener('change', (event) => {
  if (event.target.checked) {
    if (modelLoaded) {
      startListening();
    } else {
      loadModel();
    }
  } else {
    stopListening();
  }
});
When the checkbox changes its value, we have three different possibilities. If the user enabled the checkbox and the model is not loaded, we use the loadModel() function; if the model was already loaded, we trigger the listening process directly. If the user disabled the checkbox, we stop accessing the microphone.
Let’s review each function implementation:
loadModel()
loadModel() is responsible for creating the recognizer instance and loading the model. When the model is loaded, we will be able to get the list of labels the model was trained on with recognizer.wordLabels(). This will be helpful later when evaluating the model.
async function loadModel() {
  // Show the loading element
  const loadingElement = document.getElementById('demo-loading');
  loadingElement.classList.remove('hidden');

  // When calling `create()`, you must provide the type of the audio input.
  // - BROWSER_FFT uses the browser's native Fourier transform.
  recognizer = speechCommands.create("BROWSER_FFT");
  await recognizer.ensureModelLoaded();
  words = recognizer.wordLabels();
  modelLoaded = true;

  // Hide the loading element
  loadingElement.classList.add('hidden');
  startListening();
}
startListening()
startListening() will be called after the model loads or when the user enables the microphone, and it is responsible for accessing the microphone API and evaluating the model to see which word we were able to identify. This sounds complicated, but thanks to TensorFlow.js it’s just a few lines of code.
function startListening() {
  recognizer.listen(({scores}) => {
    // Every time the model evaluates a result it will return the scores array.
    // Based on this data we will build a new array with each word and its corresponding score.
    scores = Array.from(scores).map((s, i) => ({score: s, word: words[i]}));
    // After that we sort the array by score, descending
    scores.sort((s1, s2) => s2.score - s1.score);
    // And we highlight the word with the highest score
    const elementId = `word-${scores[0].word}`;
    document.getElementById(elementId).classList.add('active');
    // This is just for removing the highlight after 2.5 seconds
    setTimeout(() => {
      document.getElementById(elementId).classList.remove('active');
    }, 2500);
  },
  {
    probabilityThreshold: 0.70
  });
}
Super easy! Now, the last function.
stopListening()
stopListening() will stop accessing the microphone and stop the evaluation.
function stopListening() {
  recognizer.stopListening();
}
That’s it, that’s all that you need to build your first example of speech recognition on the web.

Author: Juan Cruz Martinez. Published: 2020-06-23. Tags: JavaScript, TensorFlow, Data Science, AI, Nodejs. Source: https://towardsdatascience.com/speech-recognition-with-tensorflow-js-66608355376e
Understanding Android Adaptive Icons

Android O introduces a new format for app icons called adaptive icons. To better understand the motivation and potential of this feature it’s useful to take a look at what it’s replacing.
While Android’s icon guidelines have evolved over time, they have always promoted using unique shapes. I was a huge fan of this! I held that it really helped users to locate the app they wanted to launch. If you want to get nostalgic you can listen to Roman Nurik and me talk about this for 6 whole minutes in an old video we made.
Here’s the ‘traditional’ icon (created by Roman) from Plaid, an app I work on. I believed that the distinct shape helped it to stand out, making it easier to find:
Plaid’s icon. How I used to think distinct shapes helped stand out.
But it’s not all sunshine and rainbows in distinct-shape-icon-land. The flipside of this near-complete creative freedom is lack of consistency. When each individual app is responsible for shape, size and drop-shadow (which is baked into the icon) then the inevitable consequence is that they vary widely. Here’s an example of icons just from Google showing how they at one time varied:
Now, admittedly, the above image is from 2012 and things have improved a lot in the meantime, especially with the extra guidance in the material guidelines. Nonetheless, I’ve come to believe that the current system places too much responsibility on app developers, giving us too much scope to detract from the overall experience.
When we’re working on an app, we can become laser-focused on it. We rightly spend huge amounts of time poring over the details that make it unique. We think about it in isolation. But that’s not how users see it; no app is an island and we need to recognize that it exists alongside many other apps on a device. As such it needs to get along. This is true for your entire app, but it’s all the more important with elements like app icons, which appear side by side. With this framing we can see how, instead of our idealized situation, the reality often ends up being more like this:
Idea vs reality: when everything is unique, nothing is unique
In response to this problem, a whole cottage industry has sprung up: custom launchers offering icon packs to replace apps’ icons or normalize their size. Devices also started shipping with launchers that add backgrounds to app icons to enforce consistency & brand their platform.
Samsung’s launcher which places icons on a squircle background. Image source
Indeed Google’s launcher will start placing icons of apps which target Android-O but do not supply an adaptive icon onto a background (scaling down their non-adaptive icon).
Icons & pinned shortcuts of apps targeting Android-O but not supplying adaptive icons.
While normalizing icon shapes or sizes is understandable, altering an icon without input from the app developer can’t lead to the best outcome.
Android 7.1 introduced roundIcon as an attempt to bring some consistency here, but this was pretty restrictive to OEMs looking to differentiate their devices (i.e. only supporting circular icons) and lacked any kind of validation (developers could supply any shaped icon and pinky-swear that it was round!).
I’d characterize the situation as lacking a well-defined contract between app icons and the launchers which will display them. Balancing the complete freedom of icon design against a desire for consistent display currently places responsibilities in the wrong camps. Launchers try to resize icons but don’t understand the content, like which elements are critical and shouldn’t be touched. App icons need to keep up with any guideline changes to ensure they bake in correct sizing/padding or shadow information. I see adaptive icons as making this contract clearer: becoming more explicit about what an app must supply and how a launcher will consume and display it.
For icon makers, it’s easy to see this as losing some freedom. I think this is actually more of a shift than a reduction. Adaptive icons introduce new and interesting constraints that open up new creative possibilities. Join me in part 2: designing adaptive icons to explore these.

Author: Nick Butcher. Published: 2017-07-25. Tags: Android, Iconography, Design. Source: https://medium.com/google-design/understanding-android-adaptive-icons-cee8a9de93e2
Catalyst Programme Week 1

Who are you? What do you want? What can you do?
To successfully integrate into a startup, we must all be able to articulate what we are seeking and what we can offer. Therefore, our group began by developing answers to these critical questions. Two days of group discussion, written reflection, and a variety of tools helped articulate the fine points of our intentions beyond gaining employment in the Finnish startup scene. One exercise we did, for example, was Tim Ferriss’ fear-setting framework, which is designed to show that inaction born of fear may have worse consequences than the perceived fear itself; another, the Purpose 15, provided a framework for identifying the people, causes, and motivations that instill our work with purpose.
A template for step 1 of the fear setting exercise made famous by Tim Ferriss. Credit to: mindfulambition.net
The self-reflection period culminated in personal goals that will guide and measure success in the program according to our unique motivations and intentions. This reflects that, while we all have the same ultimate goal, the paths we take will be different. I, for instance, want to understand the roles existing at the intersection of technology and people that require considering the balance between optimal technical efficiency and consequences on us as individuals and community members.
Equipped with this understanding, we spent the second half of the week branding ourselves. Through CV workshops and a session from Marko Oksanen, a LinkedIn expert, we improved our existing application materials to better reflect our skills and motivations, especially as they suit the startup world. Tips ranged from the highly specific — don’t use Times New Roman — to broader discussions of best practices. The takeaway: in a multimedia world, a one-page summary should only be one of the tools in an arsenal of materials showcasing competency, creativity, and personality.
We head into Week 2 with a greater understanding of our professional selves. Familiarization with the varied startup ecosystem in the capital region and an “Entrepreneurial Challenge” are on the agenda. It promises to be another week packed with learning, stay tuned to find out how it went.
**If you are interested in joining the Catalyst Programme the next batch starts on January 28th and you can apply by January 18th here!**

Author: Thomas Rocca. Published: 2018-12-19. Tags: Professional Development, Events, Entrepreneurship, Shortcut Lab. Source: https://medium.com/the-shortcut/catalyst-programme-week-1-fcb3e6a41f6e
When Data visualization and Art Collide With the Humble Org Chart

“The only thing that is constant is change” — Heraclitus
Today in the business world, we talk about change. The advent of AI is transforming our entire society. Tech companies heavily invest in R&D to stay on top of the game while older industries are trying to catch up on their digital transformation.
There’s a widespread belief that companies which are successful in digital achieve their goals because they have the right internal structures in place. I am intrigued by how you can visualize these structures beyond simple org charts, and whether or not a tangible representation of them reveals insights into the way a company is managed.
As you would imagine, few corporations were willing to give my own data art studio access to their organizational chart. My only option was to work as a collaborator in a large group to test this idea (among other things). As soon as I joined the Havas group, I got started on this pet project and was eager to discover how each business unit relates to another.
Spoiler: it turns out that the knowledge extracted from this analysis is extremely valuable and can drastically help to make informed data-driven decisions.
Getting the data
Havas recently migrated to WorkDay, a finance/HR ERP SaaS for large organizations. One of WorkDay’s key features is TalentSpace, which acts as an internal LinkedIn-type system through which people can expose their title, physical location, company name, ways of reaching them, and their boss: an ideal dataset for us!
Personal private view for each Havas employee
For those who fear leakage of private info, rest assured that as a regular employee, I could only access anonymized, GDPR-compliant (no names) information with the consent of the HR department. Note that when I started working on the project, not all the subsidiaries had migrated to the platform (hence the lower number of data points than might be expected).
My own personal hierarchy chain to the top!
As with all new data-driven projects, I started by agglomerating information. One point worth noticing is the gender balance of the organization, with more women in all four branches of the group, 56% to be precise.
Regarding the distribution of managerial functions, we can observe that top management is slightly skewed in favor of male executives, with a turning point at hierarchical level four (0 being the CEO, Yannick Bolloré), which roughly corresponds to the executive committee of each individual digital agency that the group manages.
Of course, there is much more confidential information to be shown that I can’t share here, such as the distribution of job titles, the number of managers per agency and at which levels, the location of each agency’s subsidiaries, and how they interact, for instance.
When you start adding job titles into the mix, you can also anticipate the talent shortages and the recruitment needed to achieve your strategic goals.
These are just a few examples, but as you can imagine, this is extremely valuable and actionable information when your job is to manage a company and set its strategic vision.
From data visualization to data art
Visualizing the hierarchy
So far we have talked about the value of collecting organizational info on a company. Let’s remember that we extracted a network connecting all employees to their direct superior. As you know, this simple one-to-one relationship does not necessarily reflect the real chain of command but it is already a good start to build a visualization.
Here is an attempt at revealing the hierarchical levels of the group using a radial tree layout.
A classic and very useful data visualization used to explore the hierarchy of the company from a central node (the root of the tree), here being the CEO. By counting the number of circles we can assess the depth of the tree: how many hierarchical positions we have from the lowest entry point (the interns) to the CEO. Each dot is an employee, colored according to the group sub-branches such as Havas Health, Havas Creative, BETC or ekino.
The dataset is too complex to be able to visualize all its dimensions in a 2D representation. As a general rule in a network visualization, proximity on the image does not necessarily imply proximity in the data. In our radial tree, we can clearly see the different hierarchical layers, but we miss a sense of quantity. Indeed, most nodes are drawn almost on top of each other, making it difficult to assess the volume of employees per branch.
When dataviz is not enough
To solve this problem, we have basically two options: adding a companion data visualization (like the bar chart on top) to indicate the volume per branch or per sub-company, or improving/modifying our radial tree layout.
First, instead of having a circular shape, we could put everything in a line, like a file-system folder view. The problem with this one is that the list becomes very long and boring to look at: you probably need to scroll to see it correctly, or zoom out so far that you won’t see anything.
Coming back to our radial chart, we could also put more nodes on each ring with no overlap and zoom out even further. The issue in this case, as with all angle-based visualizations, is that the human visual system has a hard time evaluating angles precisely, and thus has difficulty counting the number of employees. For the curious reader and avid dataviz designer, this observation led to the famous blog post: “Death to Pie charts”.
This problem is also the perfect opportunity to get creative in the dataviz itself and gradually enter the realm of data art. Because it is already a compromise between different visual features, we might as well introduce aesthetics and emotions into the mix!
To iterate over a design, the dataviz practitioner must ask him/herself questions that criticize the current implementation. For instance:
How would the audience judge its interpretability compared to the previous one?
Is the color palette appropriate for the data at hand?
Is the data-ink ratio adequate?
Does it look more emotionally engaging, if so why?
On top of that, other artistic concerns must be addressed such as:
What is the concept behind this piece?
Is there a sense of harmony, symmetry in it?
How does it compare to previous pieces, too much similar? Coherent with the series?
Is the shape supposed to be figurative or totally abstract?
Do the colors resonate with the tone and idea behind the piece?
Data art to the rescue
These questions led me to create this “fuzzy” radial tree.
A fuzzy radial tree displaying the hierarchy of the Havas group. While the hierarchical levels are harder to discern, this data viz shows the total number of employees more accurately. Subjectively, it also offers a more interesting artistic shape, like a multicolored iris.
A zoom on ekino France, where I currently work. Notice how ekino’s blue is mixed with orange. This is due to the different email aliases belonging to its former organization, Fullsix. Hence, some points are still labeled with Fullsix’s orange instead of ekino’s blue.
While it is harder to count the number of layers, we have a better sense of the total number of employees, and something which stands alone as an image for branding purposes. If you are interested in the technical details, it was created by using the previous radial tree and by applying a bit of a force-directed layout to it.
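The article doesn’t publish its layout code, but a minimal sketch of the idea (lay the tree out radially, then relax it with a few force-directed iterations) might look like the following. The edge list here is invented for illustration, and spring_layout is only one of several possible relaxation choices:

import networkx as nx
import numpy as np

# Hypothetical reporting edges: (employee, manager); node 0 is the CEO.
edges = [(1, 0), (2, 0), (3, 1), (4, 1), (5, 2), (6, 2), (7, 3)]
G = nx.Graph(edges)

# Radial tree: each hierarchical level sits on its own ring around the CEO.
depth = nx.single_source_shortest_path_length(G, 0)
rings = {}
for node, d in depth.items():
    rings.setdefault(d, []).append(node)

pos = {}
for d, nodes in rings.items():
    for i, node in enumerate(nodes):
        angle = 2 * np.pi * i / len(nodes)
        pos[node] = (d * np.cos(angle), d * np.sin(angle))

# "Fuzzy" variant: relax the radial positions with a few force-directed steps.
fuzzy_pos = nx.spring_layout(G, pos=pos, iterations=15)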
“Art is never finished, only abandoned” — Leonardo Da Vinci
Once I “finished” this fuzzy radial data artwork, I set it as my wallpaper and moved on to other projects for a few months. I knew it was not in its final stage, yet I had no idea how to drastically improve it. Improving one aspect, such as visual compactness, would cause a regression in another, like interpretability. In addition, I was so used to seeing a circular representation for these types of datasets that I had a hard time imagining something else.
I had to start fresh, a blank slate so to speak, to be able to come up with something new.
Breaking the circle of death
If you are used to working with graph datasets, you know that most layout algorithms have a tendency to create circular-like shapes. It comes from the fact that they try to minimize the distance from the center of mass of the network without overlapping too much.
In layman’s terms, it means that most of the mathematical formulas researchers use to draw networks on screen are based on the same physics model, and that this model often yields circular shapes.
Despite my love for these circular shapes, I wanted to create something radically different and kickstart creativity without doing a ring.
My first tries were leaning towards isometric shapes, but since this is a 2D artwork (to be printed), introducing a false sense of perspective didn’t really work for me here. I gradually arrived at this triangular shape, which I find compelling because it is so unusual in the network visualization world. It clearly shows each individual employee of the group, organized into each sub-organization as we had before.

Author: Kirell Benzi. Published: 2020-09-15. Tags: Data Science, Design Process, Organizational Culture, Data Art, Dataviz. Source: https://medium.com/nightingale/when-data-visualization-and-art-collide-with-the-humble-org-chart-647a2df46c5c
Stereotyping the Guys | Stereotyping the Guys
A view from underneath
Photo by Karolina Grabowska from Pexels
While mainstream society certainly has its well-known stereotypical attitude toward sex workers, few in the square world even consider the preconceptions upon which “providers” labor when dealing with customers who pay them for their services.
Servicing five to ten individuals a day on a very intimate basis, the girls build their wall to safeguard not just their personal safety — but their emotional safety as well. Quick judgments as well as ingrained attitudes all seemingly serve the working girl in this pursuit.
The first stereotype applies to black men. Squares may argue whether black men are in actuality larger, more long-winded, more dangerous, and less likely to leave a gratuity than their white or Asian counterparts. But working girls certainly don’t. The mythology applies. And given that a pro in the business seeks to exercise conservation while ensuring her safety, many just won’t see black guys — or especially — black men of the thug variety.
But beyond this obvious stereotype, the professional working girl has a certain attitude about anybody who walks through her door. And that is — he’s a trick. The derivation of that word comes from the idea that a girl is supposed to trick her customer into blowing his load before he has a chance to enter her vagina.
The program is to use her hands, mouth, or whatever to get the guy off so she can save wear and tear on her precious vaginal orifice. So right off — the hustle is on. Get the guys’ money with as little effort as possible. It’s The Battle of the Sexes played out in a professional arena.
Working girls are pragmatic in that they basically judge a guy by the size of his dick, his wallet, and the cleanliness of his body. They’ll always ask him what he does for a living when he calls. But they don’t really give a shit about who or what he is beyond the safety issue. Just so he isn’t a cop or a freak who’s going to hurt her, his personal info is of very little interest (unless he’s famous — that’s a big turn-on).
They view Indians, Pakis, and Arabs as dirty and aren’t fond of allowing them entry. They view Asians as clean, small, civilized, and worthy as customers. While they respect a guy for having a big dick, they often don’t want to see him because he’s likely to put a lot of mileage on the old chassis — or even put it out of commission for the day!
Attitudes and preconceptions vary as to the ethnicity of the working girl as well. Americans, regardless of color, generally have the worst attitude about the people who buy them their food, shoes, drugs, bling etc. They’re dirty tricks until proven otherwise. Asians are a little more tolerant. They view their chosen path as a craft and are a little more appreciative of guys who indulge in paying money for sex.
In general, girls from other countries have a better attitude than Americans — or at least that’s the case within the borders of the USA. Maybe in Europe, it’s the European girls who are the trash — and the Americans the class. I’m sure the rule has its exceptions but on balance, the fundamentals basically apply.
In closing, judgments and preconceptions born of stereotypes are ubiquitous in our global society and aren’t likely to go anywhere anytime soon. So why wouldn’t working girls judge their customers as well? It’s just human nature.

Authors: William, Dollar Bill. Published: 2020-12-19. Tags: Nonfiction, Prostitution, Culture, Sex Work, Education. Source: https://medium.com/everything-you-wanted-to-know-about-escorts-but/stereotyping-the-guys-23f113880f6e
State of the Art #2: Tayve Neese of Trio House Press

The Nonconformist: What prompted you to become a publisher?
Tayve Neese: Books. I love writing them, reading them, and making them. It’s such an intimate experience to be part of the creative process of bringing a book out into the world. I think most writers experience that Zen-spot of being completely in the moment when they write. I experience this when I edit as well. What I find so wonderful about being a publisher of full-length collections of poems is that it’s one of my greatest pleasures to sit down and read a whole collection by one single poet. I’m the kind of reader that, if a poem makes me shudder, I want to get my hands on everything that person has written and learn what their obsessions are, learn how they craft their lines, and understand how they do what they do and how their usage of language somehow makes me feel more whole and human. It’s amazing that I get to work with poets and hold up that lantern so that they can better see their work. That’s my job — to help support the vision of the poet.
It’s amazing that I get to work with poets and hold up that lantern so that they can better see their work. That’s my job — to help support the vision of the poet.
NC: You’re the Executive Editor and Co-founder of Trio House Press. What were its origins?
TN: In 2011 I was part of a great group of poets who exchanged poems for critique. This included Dorinda Wegener, Lisa Sisler, Sara Lefsyk, and Terry Lucas. We were diligent about sending out our work, and we lamented that while there were so many new literary journals, there just weren’t as many new presses. I called Dorinda from Florida one morning and said, “Want to start a press?” She and I spent a good part of the year researching and building the press. I wrote to the executors of Louise Bogan’s estate to inquire about naming one of our awards after her, and after being given permission, off we went to the 2012 AWP in Chicago to find poets who would consider THP as a publisher, even though we’d never published a thing.
Terry Lucas led the way with our marketing strategies, and the press was so fortunate that Ross Gay and Michael Waters agreed to act as our first judges. Our table at AWP that year was covered with their books, Elizabeth Frank’s Pulitzer Prize-winning biography, Louise Bogan: A Portrait, and candy. We handed out a lot of candy, and we talked to anyone who would listen to us about our mission. We met so many poets and writers from all over the country, and when those manuscripts came in that first year, it was wonderful. That year David Groff’s Clay was selected by Michael Waters as the Bogan Award winner, Iris Dunkle’s Gold Passage was selected by Ross Gay as the Trio Award winner, and we editors selected Matt Mauch’s If You’re Lucky is a Theory of Mine for our open reading period. David Groff’s book was a finalist for the Lambda Award that year, and we were over the moon. I’m so happy David, Iris, and Matt took a gamble on THP. Those three poets set the tone for all that’s followed. Just this past month, Matt Mauch came aboard as a Co-executive editor. Matt, Sara, and I have edited work together in the past, and he has a keen editing instinct. Sara Lefsyk and I both agreed that if we’re going to grow the press, Matt’s the person best suited to help make that happen.
I think most writers experience that Zen-spot of being completely in the moment when they write. I experience this when I edit as well.
NC: How would you describe your mission as a publisher?
TN: When we founded the press, the mission of Trio House Press was to publish distinct and innovative voices in American poetry. What’s changed for THP is that we’re now borderless. It’s not just American voices we’re looking for anymore. If a collection is written in English and it’s powerful and beautiful and raw, we want a chance to consider it. The current political climate was a catalyst for us to ask ourselves, what is our role when our government is rejecting our interconnectedness? The best of what language and poetry do is to connect us to one another as human beings. So, we want to consider work from all poets writing in English and dissolve the idea of boundaries. We want new voices, overlooked voices, and even established voices. We want well-crafted collections, no matter the aesthetic leaning. There are such vast differences between our titles! Sandy Longhorn’s The Alchemy of My Mortal Form is so different from Carolyn Hembree’s Rigging a Chevy into a Time Machine and Other Ways to Escape a Plague, but both are so expertly written that you tremble. That’s the whole reason why we bring in judges from such diverse aesthetic backgrounds to help select collections for our Lousie Bogan Award and Trio Award. It keeps our press fresh and vibrant rather than on one stale note.
NC: What is it like to be a small press publisher? How do you find a balance between your everyday life and publishing duties?
TN: I don’t think I have ever found that balance. When we first started the press, it was an eat, sleep, dream it full-time gig. I was all adrenaline while learning the ropes. As life zigged and zagged, and since I’m not in higher ed, I’ve worked at a number of different jobs to keep things ticking along. The administration and business side of the press has become an early morning or late-night venture with the serious work of editing happening on weekends. My youngest daughter is about to launch, and I’m considering dismantling my life and heading to Micanopy or back to the St. John’s River where I can recede, run the press, and write for a while. Maybe then I’d find that balance.
Social media has pros and cons just like any other invention. I’ve found poets online who I never would have found any other way.
NC: Let’s talk about your influences. Who inspires you as a poet and who is your role model as a publisher?
TN: Emily Dickinson is essential, and I admire Louise Bogan and her work. Other poets I go back to over and over again are Carol Frost, Joan Houlihan, James Dickey, and Galway Kinnell. That’s way more than one poet, but it takes a village. As far as publishers go, Salmon Poetry in Ireland is a press I deeply admire. They’ve been around since the early ’80s, have their own bookstore, and Jessie Lendennie and Siobhan Hutson have done so much for poetry in Ireland, the States, and globally. I’m so honored that I have a book, Locust, forthcoming from Salmon. They’ve been my dream-press for so many years, and my hope is that THP has the same longevity and gravitas. Having a bookstore wouldn’t be bad either.
NC: How would you describe the current state of the publishing industry from the perspective of a small press publisher?
TN: While I try to keep my finger on the pulse of what other publishers are doing, I live on an island — literally. I often rely on social media to keep me in the loop. So much in the small publishing is currently in flux. I’ve heard about a number of really strong presses or journals that closed their doors recently. That’s heartbreaking. With all the government cuts for funding to the arts and education, it’s hard for those presses tied to universities or grant money to keep afloat. The bottom line is that being a small press publisher can be challenging.
As an independent press, we’ve had to flex and bend when sales just didn’t happen as we’d projected, and I’m always exceptionally cautious about overextending ourselves. Most people know that the average book of poetry sells only between 250–500 copies and that most poetry publishers are not in it for the money. But the industry isn’t all doom and gloom. Just like social media has its place in this morphing literary world, so do new models of printing and distribution. Print-on-demand used to be a dirty word, but if you look and inquire, you’ll see that so many long-time presses and journals have made the switch silently. At first, THP printed hundreds of books for a first press run, and when I moved across the country from Colorado back to Florida, I took over a thousand Trio House Press books with me. Now, we still use the same printer and distributor, but there is absolutely no need to bury a press financially with unsustainable print runs of books that may never sell. It’s an unnecessary financial risk for a press, and to boot, it’s environmentally wasteful. Why produce something that may not be consumed or appreciated?
As a working poet, I know the drill of opening up emails every single day waiting for an editor’s reply.
NC: Optimists claim that poetry is enjoying an unexpected renaissance these days, and that both Facebook and Twitter are perfect for publishing and sharing poems. What’s your opinion on this?
TN: A renaissance of poetry, no matter the medium of dispersing well-crafted work, should be celebrated. While I love the whole tactile experience of paper, from its lingering between the fingers before you flip the page, to that crisp scent of paper, I’ve had amazing reading experiences via social media. Why pretend to be so high-brow when this is just the way we human beings communicate now? Social media has pros and cons just like any other invention. I’ve found poets online who I never would have found any other way. Manglesh Dabral springs to mind. I learned of his work by way of Facebook when someone shared one of his published poems, and then I had to track down This Number Does Not Exist published by BOA. The book is gorgeous, with both English and Hindi versions of poems. Regarding posting unpublished work on social media, if someone is comfortable with this venue, then that’s their bag of tricks. After all, we built THP through grassroots marketing by way of nationwide social media pushes. We never could have achieved our vision without these platforms, and our poets have more exposure and better opportunity to get their work into people’s hands because of social media. It’s a new frontier for everyone in publishing, and that’s exciting!
NC: Which current and forthcoming titles from your catalog should find their way to our bookcases?
TN: I always like to shout-out our newest titles. Poet Jeff Friedman selected Waiting for the Wreck to Burn by Michele Battiste as the most recent Louise Bogan Award winner. Battiste’s work explores borders in the town of Ruination and borders that evolve between spouses as a marriage collapses. I love this book! Artress Bethany White’s My America was the recent Trio Award winner, and her collection is a chisel and hammer that uncovers racism in our society and within families. Last year we also solicited the work of Tamara J. Madison, Threed, This Road Not Damascus, in which a mythic three-breasted woman takes on King James. Her work is rich, melodic, and fierce! I also adore Darren C. Demaree’s Two Towns Over, selected by Campbell McGrath as another recent Bogan winner. Demaree takes on the opioid crisis from a personal and sociological perspective in these poetic vignettes.
As far as our forthcoming titles, every year the editors at THP participate in selecting a title for our open reading period. It’s a democratic process and we all discuss, pitch, and vote for a title we’d love to bring on board. I’m thrilled that we are able to bring on the voice of Madeleine Barnes. Her forthcoming title, You Do Not Have to Be Good, well, we don’t have another voice like hers. Her imagery and lines — Technicolor, rhythmic, stellar. Our poets are amazing. Their books, damn!
If you start a small press, make it about the people. Make it about the poets.
NC: If it’s not a secret, what are your upcoming projects and plans for the future?
TN: We’d love to publish more titles each year. It’s painful to turn away manuscripts for lack of funding and resources, especially manuscripts that have been semi-finalists or finalists. Each year after we’ve selected someone to publish, we celebrate when the contract is secured. Then, I walk around for a few days with a knot in my stomach having to send out those rejections to folks whose work is so deserving of publication. As a working poet, I know the drill of opening up emails every single day waiting for an editor’s reply. It’s not a sexy topic, but fundraising for THP is a high priority. We haven’t quite sunk our teeth into this apple yet, but it’s so necessary so that we have the resources to publish more great work.
NC: What advice can you give to those dreaming of setting up their own small press?
TN: Every morning before sun-up I sit down with my coffee at my desk. Right in front of me on my bookshelf, I see this rainbow of the Trio House Press spines and titles, and every morning I think about the poets as much as the contents of their books. I hope their families are well, that their writing lives are thriving, and that their poems are doing the good work that they were written to do. If you start a small press, make it about the people. Make it about the poets. | https://medium.com/the-nonconformist/meet-the-publisher-tayve-neese-153d30fc059b | ['The Nonconformist Magazine'] | 2020-12-28 11:51:04.675000+00:00 | ['Indie Publishing', 'Interview', 'Small Press', 'Writing', 'Interview Questions'] |
When Peace Summons Loneliness

This past week, I shared my space with my doggy niece, Nala. March was a hard month. We lost her brother, Reese, after 13 years. With her, Jernee, and Reese, I used to call them the Triple Threat; now they’re just Double Trouble. They were tons of energy running about, playing with each other, always content enough to enjoy their simple lives. Nala is motherly. She is kind. She is the sweetest Boston Terrier and she has always mothered Jernee. She is protective of her, shielding her when necessary, and Jernee eats it all up. As long as all attention is on her, she is satisfied.
Nala, being two years older has afforded some rights in this area of expertise, the motherly category. She is a burst of excitement that takes the wind out of you and watching her lanky yet, hefty frame leap towards you is a sight to behold. I am never prepared.
Ooh! Is this my closeup, Human Auntie? Yes, is this good? Okay, thank you.
For six days, she hopped, gulped, ran, played, rested, kissed, and strutted around my home like she lived here full time. Having both her and Jernee together was a reminder that I am out of shape.
Majorly!
One is full of energy and can go on for hours on end. The other needs more breaks in time and requires more attention to certain things. Age is a bad mutha “shut your mouth.” I walked them separately. I live on the third floor. Let me repeat that, I live on the third floor. Thus, I received double the exercise and double the fun. Why did you walk them separately, Tre? Please see the details above regarding their differences. Jernee and I walk with a purpose. There is an eager bop in our steps, we flow, and our walks can get lengthy. Nala needs subtle steps. A smooth cruise ensures that her breathing will not become labored. We take our time. We savor the walk. We become the breeze.
Jernee: Yes, listen human. I did not do anything wrong. You gotta believe me. Nala: Um, that’s a puppy-faced lie, Human Auntie!
When we settled into the evening, the girls liked to get that last bit of energy out by running back and forth from the living room to my bedroom. Usually, I could be found reading and enjoying quiet time. One night, Jernee (because it is always Jernee. ALWAYS!) had gotten into some mischief. What that mischief was, I do not know. I have not retrieved the evidence. I can only assume she ate Nala’s treats and Nala was coming to tell me; however, Jernee beat her to it. Notice in the photo above how my little one has a look of innocence plastered on her face, but also one that says, “Human! Do not believe a bark she says!”

Author: Tre L. Loadholt. Published: 2017-05-07. Tags: Dogs, Is Always Needed, Feel Good Music, Peace, Writing. Source: https://medium.com/a-cornered-gurl/when-peace-summons-loneliness-82a5a200c2f1
The Physics of the Graphene Revolution

The Graphene lattice. Image from: https://singularityhub.com/2018/08/05/beyond-graphene-the-promise-of-2d-materials/
It is a poorly kept secret that graphene is a material with the potential to revolutionise the world of electronics. Alongside its optical transparency, its flexibility and its unparalleled physical strength, it is among the most efficient conductors of electricity known to science.
In some ways, it is remarkable that graphene conducts electricity at all. It is, after all, a lattice of carbon, which occupies the “non-metal” bin of high-school chemistry, so we might expect that it would be a poor conductor of electricity like, say, crystalline sulfur or phosphorus. And indeed diamond, a crystal of carbon, is an extremely good insulator. So why is graphene any different?
The electrons around any atom occupy distinct regions of space known as orbitals. These can come in a wide range of different shapes and sizes, but for carbon only two types of orbital are occupied: two spherical s-orbitals, each holding two of the carbon atom’s six electrons, and the dumbbell-shaped p-orbitals, which share the remaining two.
Examples of s and p atomic orbitals. The p orbital shown is in the z direction, which is the orbital left unhybridised in the carbon atoms of the graphene lattice. Image from the Encylcopaedia Brittanica https://www.britannica.com/science/orbital/images-videos
More exciting is what happens when atoms bond. To do this, orbitals combine in a process known as hybridisation, and this can happen in a few different ways. In diamond, the outer s-orbital and the three p-orbitals hybridise to form 4 sp³ hybrid orbitals, each containing one electron which can then be shared with a neighbouring carbon atom to form an atomic bond. This process utilises all of carbon’s 4 valence electrons, meaning that there are no electrons free to move to neighbouring carbon atoms and carry a current through the material.
In graphene, and its layered counterpart graphite, the hybridisation is different. Instead of forming 4 sp³ orbitals, the orbitals hybridise into 3 sp² hybrid orbitals, leaving one electron alone in an unhybridised p-orbital. It is the interactions between these orbitals of atoms in adjacent layers which hold crystals of graphite together. More importantly for our purposes, these orbitals are not involved in atomic bonding, so electrons can freely move between the unhybridised orbitals of neighbouring atoms, allowing a current to be carried through the crystal.
Ok, so graphene can conduct electricity. But why is it so good at it? Unsurprisingly, the best measure of how well a material conducts is its conductivity. This is the ratio between the current generated by a voltage applied to a material and the magnitude of that voltage, with some factors related to the size of the sample to allow comparison between materials. So, if a small voltage applied to a material generates a large current, then the material has a large conductivity and is a good conductor of electricity.
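In symbols, this is just the standard textbook definition (nothing graphene-specific): for a uniform sample of length L and cross-sectional area A carrying a current I under an applied voltage V,

\sigma = \frac{I/A}{V/L} = \frac{I L}{V A}, \qquad \text{or equivalently} \qquad \mathbf{J} = \sigma \mathbf{E}

where J is the current density and E is the electric field inside the sample.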
In the simplest terms, a voltage corresponds to extra energy given to electrons within a material. If that extra energy isn’t large enough to break electrons free of their atoms, then no current can flow at all, and we have an insulator. Conversely, if it is easy to break an electron free of its atom, most of the extra energy goes to its kinetic energy — the energy associated with its movement through the material — and the electron can move quickly. Current is the rate of flow of charge, so electrons moving quickly through a material corresponds to a large current.
The electrical conductivity of some common metals. Notice that group 11 elements occupy the top 3 most conductive metals. Image from http://wiki.robotz.com/index.php?title=File:Conductivitymetalchart0.jpg
For example, in the most conductive metals, group 11 elements such as gold, silver and copper, the outermost valence electron is on its own in a higher energy subshell than the rest of the atom’s electrons, meaning it takes comparatively little energy for it to break free from its atom. This means that when that electron receives energy from a voltage, most goes to kinetic energy meaning that it can move quickly through the metal. Combined with these outermost electrons of neighbouring sites all moving similarly fast, the current generated by the voltage is large, and these metals have extremely high conductivities.
So what about graphene? Graphene is a honeycomb lattice: its atoms form hexagons such as those one might find in the local beehive. We find that the dispersion relation, which describes how the energy of an electron varies with its momentum, has two bands: an upper band known as the conduction band, and a lower band known as the valence band. The gradient of the dispersion relation gives the electron group velocity, which can be broadly thought of as the velocity of the electrons as they move through the lattice.
The dispersion relation for a honeycomb lattice such as graphene. Here, momentum is normalised against the magnitude of the momentum at the K and K’ points, so the two bands meet at k=1 and k=-1.
Crucially, the bands touch at two momenta, which we refer to as the K and K’ points. Near these points, the dispersion relation is linear, which means that the momentum of the electrons increases at a constant rate as energy of those electrons increases. This is the behaviour of a massless particle!
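For reference, the standard low-energy form of this dispersion near the K point, with momentum q measured relative to K, is the textbook Dirac-cone result (stated here for context rather than derived in this article):

E_{\pm}(\mathbf{q}) = \pm \hbar v_F \lvert \mathbf{q} \rvert, \qquad v_g = \frac{1}{\hbar} \frac{\partial E}{\partial \lvert \mathbf{q} \rvert} = v_F \approx 10^6 \,\mathrm{m/s}

so the group velocity is the same at every energy: exactly the massless behaviour described in the next paragraph.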
When you increase the energy of a photon, for example, its momentum increases at a constant rate, set by the speed of light c. Just like photons can’t travel at any speed other than the speed of light, a conduction electron in graphene cannot travel at any speed other than around 1 million m/s. So, even for extremely small energies, the electrons move at very high speeds. This means that even small voltages generate a large current, which means graphene has a large conductivity.
The dispersion relation of a honeycomb lattice close to the K point, which has been defined as q=0. Colour has been used to differentiate the conduction band (red) and the valence band (blue). Notice that the dispersion relation is linear, such as would be expected for a massless particle.
However, for fast electrons to be generating a large current, they must, for the most part, be moving in the same direction through the material. Ideally, the voltage ensures this by setting up an electric field in the material, attracting the negatively charged electrons to the positive terminal, but the universe is not all sunshine and rainbows. Electrons can be deflected, or scattered, by numerous impediments. Indeed, a wire is an ever-changing assault course of impurities, vibrating atoms and other electrons, all of which can disrupt the progress of any one electron moving under the influence of the voltage.
We thus have another factor which affects conductivity: how often electrons are scattered as they move through a material. If the electrons are scattered often, then most will be prevented from moving in the direction of the electric field associated with the applied voltage, and the current associated with that voltage will therefore be small. This leads to low, poor conductivity.
A diagram of the generic honeycomb lattice, indicating the inequivalent A and B sites, as well as the primitive lattice vectors and nearest-neighbour vectors. Image from Gilles Montambaux. Artificial graphenes: Dirac matter beyond condensed matter. Comptes Rendus Physique, Elsevier Masson, 2018, 19 (5), pp.285–305. ff10.1016/j.crhy.2018.10.010f
Does this crush our dreams for graphene? Thankfully not. On a honeycomb lattice such as graphene, there are two “types” of lattice sites, which we call the A and B sublattices. Any electron state can be thought of as a superposition of being located “on” the A sites and the B sites, and we can treat this behaviour, known as a pseudospin, in the same way that we do the spin of the electron. For example, an electron on an A site can be thought of as being spin up and an electron on a B site can be thought of as being spin down.
Importantly, for the electrons which carry the current in graphene, this pseudospin depends on the direction of the group velocity. Near the K point, an electron with a positive group velocity is spin up, while an electron with a negative group velocity is spin down. This coupling of spin and velocity is known as chirality, and we can think of it as electrons moving in a positive direction on the A sites and in a negative direction on the B sites.
The dispersion relation near the K point of a honeycomb lattice with the pseudospin in each band, here represented as left and right as opposed to up and down, superimposed. Note that any backscattering, represented by a horizontal translation on the graph, requires a flip of pseudospin direction.
So, what happens when a chiral electron encounters a scatterer? For it to scatter backwards, it would require a flip of this pseudospin since the group velocity changes sign. In other words, to backscatter would require a move from the A sublattice to the B sublattice, which is not possible in most cases. We therefore see complete suppression of backscattering, and an overall suppression of all scattering.
As such, electrons in graphene can travel for up to micrometre distances without scattering at all. This behaviour is known as ballistic transport, and this reduced scattering ensures that the full power of the speed of the conduction electrons in graphene can be brought to bear.
The final factor which affects the conductivity of a material is simply the density of conduction electrons within a material, known as charge carrier density. Electrons are what carry the charge and so the more of them that move, the greater the associated current. In graphene, we can control this quantity by doping. This is where atoms such as phosphorus or nitrogen, which can donate two electrons for conduction compared to carbon’s one, replace some carbon atoms in the lattice to increase the number of available electrons for carrying a current.
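The three ingredients discussed in this article (carrier speed, scattering, and carrier density) are tied together by the standard Drude-style relation, again a textbook expression rather than anything specific to graphene:

\sigma = n e \mu

where n is the carrier density, e is the elementary charge, and \mu is the mobility, which grows with the carriers' velocity and with the average time between scattering events.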
Even while the carrier density of graphene could never be as high as in most metals, its conductivity is still remarkable. The theoretical limit of its conductivity is up to a million times better than that of silver, the best known metallic conductor. Indeed, the conductivity of graphene samples is actually often limited by interactions with the material on which they are mounted, rather than the properties of the graphene itself.
Even so, for a material as light as graphene to exhibit such impressive conductivity is extremely exciting. For example, one of the major hurdles in the world of electric transport is the sheer weight of lithium-ion batteries, which dramatically limits the range of such vehicles. Current graphene-based cells are an order of magnitude more efficient, with an energy-to-mass ratio of 1000 Wh/kg, compared to 180 Wh/kg for Li-ion batteries.
Energy efficiency is as important a goal as any as our global society hurtles headlong into an increasingly inevitable climate crisis. More efficient conductors such as graphene, and potentially even the long-elusive room-temperature superconductors, have the power to reduce the energy lost simply in transporting energy around our power grids. This could provide one of the many necessary spears in the phalanx of efforts to combat the encroaching climate disaster.

Author: Jason Segall. Published: 2020-12-23. Tags: Electronics, Technology, Physics, Graphene, Science. Source: https://medium.com/predict/the-physics-of-the-graphene-revolution-3feef2b090b5
Audio signal feature extraction and clustering

Code
We first import all the necessary libraries in our Jupyter notebook. This is mine.
Follow along.
import tensorflow as tf
import numpy as np
import pandas as pd
from pyAudioAnalysis import audioBasicIO #A
from pyAudioAnalysis import audioFeatureExtraction #B
import matplotlib.pyplot as plt
import os #C
#A — This function is used to extract audio data like the frame rate and the sample data of the audio signal.
#B — This function is responsible for extracting all the features from the audio signal that we talked about earlier.
#C — This is basically used to iterate through the music files in the file-system.
This is where we get our hands dirty. Let us start with the first point in our objective — Extraction. We start by defining the utility functions.
def preProcess( fileName ):
    [ Fs, x ] = audioBasicIO.readAudioFile( fileName ) #A
    #B: collapse stereo to mono by averaging the two channels
    if( len( x.shape ) > 1 and x.shape[1] == 2 ):
        x = np.mean( x, axis = 1, keepdims = True )
    else:
        x = x.reshape( x.shape[0], 1 )
    #C: short-term feature extraction, 50 msec windows with a 25 msec step
    F, f_names = audioFeatureExtraction.stFeatureExtraction(
        x[ :, 0 ],
        Fs,
        0.050 * Fs,
        0.025 * Fs
    )
    return ( f_names, F )
This function takes a file name as an argument.
#A — We call the function provided by the library to read the audio data. This returns Fs and x. Fs is the frame rate (sampling rate) of the audio sample, and x is a numpy array representing the sample data that you see in any music editing software.
A visualization of an audio file in Audacity
#B — Did you notice that the picture I provided above has two sample tracks (the yellow wave-like thing) for the same song? This is because it has two channels, one for the right and one for the left (stereo). But we can only deal with a single channel (mono). So we basically check whether the sample data in fact has data for two channels, and if it does, we take the mean of the two channels.
#C — This function extracts all the 11 features that I listed earlier and returns the feature names and their values in a numpy matrix. The f_names (feature names) will not be really useful, as we know which row of the F matrix holds which data. The first argument is the sample data itself. The second and third arguments are the window size (50 msec) and step amount (25 msec), respectively.
Lastly, we return the data from this function.
def getChromagram( audioData ):
    # A: the first chroma feature sits in row 21 of the feature matrix
    temp_data = audioData[ 21 ].reshape( 1, audioData[ 21 ].shape[0] )
    chromagram = temp_data
    # B: stack the remaining 11 chroma features on top of the first one
    for i in range( 22, 33 ):
        temp_data = audioData[ i ].reshape( 1, audioData[ i ].shape[0] )
        chromagram = np.vstack( [ chromagram, temp_data ] )
    return chromagram
Like we discussed earlier, we are planning to use only the chromagram feature of the audio signal, hence we will separate out that data from the rest of the features.
#A — We pick the “chroma_1” feature from the audioData numpy matrix and create a new numpy matrix.
#B — We loop through the remaining 11 chroma features and stack them vertically on top of each other.
This is what the final matrix looks like
def getNoteFrequency( chromagram ):
    numberOfWindows = chromagram.shape[1] #A
    freqVal = chromagram.argmax( axis = 0 ) #B
    # C: fixing the bin range to (0, 12) gives each of the 12 notes its own
    # bin, even when some notes never appear in the clip
    histogram, bin_edges = np.histogram( freqVal, bins = 12, range = ( 0, 12 ) ) #C
    normalized_hist = histogram.reshape( 1, 12 ).astype( float ) / numberOfWindows #D
    return normalized_hist
Like we discussed earlier, now we need to find the most prominent note in each window, and then we wish to find the frequency with which each of the 12 notes is hit.
#A — The total number of time frames, aka windows. This will be useful in the normalization step.
#B — Find the index of the most prominent note along the vertical axis (axis = 0). This gives us a number between 0 and 11. The output might look something like this:
[0, 0, 0, 1, 1, 3, 9, 9, 11, 0 …number of windows]
#C — Now we basically get the count of each note using the inbuilt function.
#D — Finally we normalize the data for the reason we mentioned above. This is the final feature vector which will define our data point. This will be a (1x12) vector — the relative frequency for each of the 12 notes.
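As a quick sanity check, here is the same logic run on the toy output shown above (a hypothetical 10-window clip, not real data):

freqVal = np.array([0, 0, 0, 1, 1, 3, 9, 9, 11, 0])
hist, _ = np.histogram(freqVal, bins=12, range=(0, 12))
print(hist / len(freqVal))
# [0.4 0.2 0.  0.1 0.  0.  0.  0.  0.  0.2 0.  0.1]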
Note: The plotting functions are trivial and can be directly copied from matplotlib examples page. You can play around with these functions and different files to see how the chromagram and frequency plot changes with different genre of music.
fileList = []

def getDataset( filePath ):
    X = pd.DataFrame()
    columns = [ "G#", "G", "F#", "F", "E", "D#", "D", "C#", "C", "B", "A#", "A" ]
    for root, dirs, filenames in os.walk( filePath ):
        for file in filenames:
            fileList.append( file )
            feature_name, features = preProcess( filePath + file )
            chromagram = getChromagram( features )
            noteFrequency = getNoteFrequency( chromagram )
            x_new = pd.Series( noteFrequency[ 0, : ] )
            X = pd.concat( [ X, x_new ], axis = 1 )
    data = X.T.copy()
    data.columns = columns
    data.index = [ i for i in range( 0, data.shape[ 0 ] ) ]
    return data
Nothing special here: we just iterate over all the files in the filePath directory. We have finally created a pandas DataFrame out of our feature vectors. The reason I prefer a pandas DataFrame over a numpy matrix is that it makes the data look beautiful, with column names and indices. I believe in making my work look pleasing, so it is totally fine if you disagree and stick to numpy. Here is my dataframe. | https://medium.com/heuristics/audio-signal-feature-extraction-and-clustering-935319d2225 | ['Aakash Mallik'] | 2020-04-10 05:44:51.023000+00:00 | ['Machine Learning', 'Clustering', 'Unsupervised Learning', 'TensorFlow', 'Coding']
It’s 2030 and I’m Growing Fake Pigs | Photo by Kenneth Schipper Vera on Unsplash
All my life I’ve farmed real mammals. I’m an expert in the snuffling, snouty, somewhat hairless kind. For 72 years tofu didn’t enter the district let alone the house. Then, in 2022, I woke up.
I woke up a tired man. I guess I was getting old, or something, but mostly I think it was a psychosomatic effect of being sick of chasing escaped pigs. When you’ve pursued rogue porkers all your life, you get sick of livestock bloody escaping. And fixing fences. When I retire, I’ll make sure I have a terrace outlook on a good, solid, pig-proof fence.
But I had one more venture in me. Enough pig farming was enough pig farming, but I reckoned there was a real opportunity in plant-based meat production. I sensed a gap in the market.
I dove into research. I tried it all: Quorn, Lightlife, Field Roast, Tofurky, Gardein. I tried seitan, soy, and mycelium proteins. I drove into town for vegan Thai and pulled jackfruit. I crossed borders for five-star teriyaki tofu and flew to Switzerland for the world’s leading no-pork pork belly. The look of sorrow on my wife’s face when she ate Sydney’s best-rated vegan pork vindaloo was what clinched it.
You don’t muck around with pork vindaloo. Not unless you know what you’re doing.
The plant-based community needed my help.
No more ‘facon’. It was time to get real. It was time to grow pigs. For my wife. For the children of tomorrow.
I won’t tell you how I did it. Let’s just say I convinced a certain mushroom’s DNA to replicate the appearance and texture of a prime porker (sans hair). A mate in Newcastle University’s bioengineering department gave me a hand.
Five years on, my fields are greenhouses and I grow pig shrooms out of used coffee grounds trucked in from city cafés. Apparently, the coffee grounds stop the greenhouses from releasing harmful gases into the atmosphere. Something to do with a circular economy.
Gas aside, Shroompork is a hit. It mimics pork perfectly, from French cuisine or curry to sizzling on the barbie.
That’s why I’m in the running for Australian of the year and the government wants to buy my recipe. Jamie Oliver messages me often and Prince Harry’s Instagram shows Archie munching on my signature pork patty.
At this rate I could retire and commission the world’s best fence. But you know, once a farmer, always a farmer, and I really do feel twenty years younger on all this plant food. A whole new world is open to me. I’m looking at growing prawns.
Then again, things have gotten complicated recently. I haven’t told anyone yet, because half the country’s restaurants are relying on my stock. But now it’s urgent.
You see, I recently rescued two real pigs to guard the greenhouses and I believe their snorts and snuffles encourage robust shroom growth. I’ve been known to grunt a bit at the shrooms myself over the years, but since Rocky and Delilah got here the pig shrooms have been growing twice as fast.
And they’ve started grunting.
I can’t pretend any longer. That really was the sound of glass breaking, and it really did come from the greenhouse.
Hundreds of mushroom piglets are streaming into the yard and down the driveway.
I bet they’ll get through the fence. | https://elliebirdseer.medium.com/its-2030-and-i-m-growing-fake-pigs-7fb24ccd208c | ['Ellie Baker'] | 2030-12-30 00:00:00 | ['Comedy', 'Humor', 'Vegan', 'Animals', 'Humour'] |
Anxiety Doesn’t Have to Be Your New Normal | Anxiety Doesn’t Have to Be Your New Normal
Tips for staying positive during the coronavirus.
Photo by Pille-Riin Priske on Unsplash
My hometown (Melbourne) plunged back into lockdown this week in an effort to curb fast-rising COVID-19 infections. As stage three restrictions loom, I ponder the dull reality of being stuck at home for six weeks with reduced contact with friends and loved ones.
“Get a grip,” I tell myself.
Staying positive is harder than it used to be, even for the most optimistic people. Coronavirus has affected people everywhere. Entire countries are in quarantine. People are sick. Health services are overwhelmed. If you are feeling negative, don't worry — you are not alone.
Together, we’ll get through this.
While it is true we need to take the pandemic seriously, it does not have to break us. Yes, the coronavirus has harmed our economy and caused illness worldwide. And yes, we see, hear and read about it all day long. Yet feeling constantly concerned or anxious doesn’t have to be our new normal.
Keep social, even when social distancing
During lockdown some of us experience loneliness while others relish the solitude. Yet the desire for connectedness is real, and it matters especially now, as we go through a crisis. To make it through in the most resilient way possible, it is important to rely on other people for support. Scheduling time with friends and family helps to keep feelings of isolation in check.
Happy people are socially connected.
Social distancing means we can’t physically spend time with the people we care about. That’s why connecting up with people in real time can be a powerful antidote to loneliness. Zoom or FaceTime is better than a phone call with loved ones because you see facial expressions. You’re able to connect with people and all the emotions they are experiencing in the moment.
Help others
We tend to have this idea of focusing on self-care above all else during a crisis. But that's not exactly what happy people are doing. Yes, it is important to ensure your own needs get met. But at the same time, happy people are giving money to charity. Happy people are spending their time volunteering. Doing things for others is a great way to shake off the blues during the coronavirus.
Research shows helping others can also help protect your mental and physical health. It can reduce stress, combat depression, keep you mentally stimulated, and provide a sense of purpose.
Random acts of kindness can be incredibly powerful. And that’s for a couple of reasons. First, being kind to others is a simple way to improve your own wellbeing. Second, doing nice things for other people at a time where lots of people are suffering and vulnerable is simply the right thing to do.
Exercise — even if you can’t leave your house
A lot of us are moving less because we're not getting our steps in walking to work and so on. But in some ways, that's on us, right? Just because you are not walking to work doesn't mean you can't exercise at all. Just because your gym is closed doesn't mean you can't do a workout at home. There are so many free classes online and so many different types of exercise classes.
Despite being housebound, you can still find ways to incorporate movement into your day. Even a small amount of movement can make a difference. It can have a huge impact on any anxiety you are feeling due to the coronavirus and help ease stress and depression.
Here is how I stay active while stuck at home during lockdown.
Schedule it. Scheduling workouts may prevent you from procrastinating or avoiding them all together. I find it is easiest to exercise first thing in the morning. Others may find it easier in the afternoons or at night. It doesn’t matter when you do it as long as you commit to being active a couple of hours each day.
Try something new (or rekindle an old passion). During lockdown I’ve rediscovered my yoga practice. It’s been 10 years since I’ve been this flexible and bendy! I do yoga stretches soon after I wake up which sets me up with a feeling of calm and serenity for rest of my day.
Move around the house more. I take an extra lap or two around the house if I have to put something away. If you have stairs, go up and down them a few times throughout the day.
Write standing up. I set up my laptop on my kitchen counter table top and write standing up. This is a no brainer for writers and others who spend hours tapping away on keyboards.
Mop those floorboards. Household tasks like sweeping, dusting, and vacuuming all add up when done at a brisk pace. They also work the muscles in your arms and legs. The same goes for gardening too.
DIY home gym. My home gym consists of a yoga mat, foam roller, resistance bands etc. To lift weights I fill up water bottles. To tone my abs I tuck my feet under the sofa and do sit ups. I skip rope outside on my balcony. There really is no reason to skip a workout just because you are stuck at home.
It’s normal to feel less motivated to exercise when your routine turns upside down. But even a small amount of movement can make a difference. Treat it as a challenge or a game to get it done more easily.
Savor everyday pleasures
Research shows happy people are more mindful. They tend to be in the present moment noticing what’s happening to them. Right now many of us are in fight or flight mode. Our chests are wound tight. Our muscles are tensed up and so on. The act of slowing down — taking a deep breath — puts you in the present moment. Being in the moment allows you to put some resources into managing your state of mind and health, which is something we all need to do right now to build up resilience.
Think about how you want to look back on this time
The coronavirus is a historic world event. In the future, when we look back on all this time we had, we may wonder how it all went. Did we make anything positive out of the situation we found ourselves in? Try to think about how you want to remember COVID-19. What are the stories you want to tell people about this time in your life?
Do you want to tell people you binged on Netflix and hoarded toilet paper and copious amounts of dry goods? Or would you rather share how you made the best of the situation by helping the people around you through a difficult time? Or how you worked on improving yourself in some way?
Making a positive story starts and ends with you.
Be the glue that keeps your family together. Make productive use of all the free time at your disposal. Enjoy all the precious moments during lockdown. For when you appreciate the ordinary, each and every day is extraordinary!
Thank you for reading. | https://medium.com/mindset-matters/anxiety-doesnt-have-to-be-your-new-normal-937486662e1f | ['Lucy King'] | 2020-07-09 12:30:44.754000+00:00 | ['Mindfulness', 'Inspiration', 'Quarantine', 'Lockdown', 'Coronavirus'] |
The Mormon Sabbatical: New Thoughts on Death | I’ve heard many a Mormon say, “I don’t know how I’d live if I didn’t believe in an after-life” or about someone who has left Mormonism, “How can they keep going on day after day, knowing that this life is all there is?” It was definitely one of the hardest things about my original transition away from traditional religious beliefs to let go of the idea that I would live again, that death wasn’t really something to worry about, that all the people I loved would be with me again in the next life in the same way that they are now. I clung so hard to the idea that nothing would ever be lost. And you know what? The best thing about my life now is my realization that this is all I have. If that makes no sense at all to you, keep reading.
When I believed that I’d have my children, my parents, my faithful friends, everyone I loved, with me again in the next life — I didn’t value them as much as I do now. If that sounds ridiculous, let me explain a little more. It’s not that I loved them less. I mean if you’d asked me, I’d have said that I loved my children as much as I could love anyone. It was a revelation to me when they were born that I could love so completely. It felt like there was no other part of me than the part that loved these tiny creatures that needed me so absolutely.
And then sleep deprivation hit. And the terrible twos. And potty training. And saying “No” to everything. And running around naked at a family party. And refusing to walk to school. And hitting the other children for no good reason. And getting them to practice the piano. And eat the food that I made even if it wasn’t their favorite. And grades. And church activities. And on and on. (I liked the teenage years by and large, so I won’t go there.)
I yelled at my kids. More than I wish were true, even if I’m being kind to myself about having five kids in eight years and trying simultaneously to jumpstart a career as a young adult fantasy writer with a national press and a national agent from the wilds of Utah when I rarely could leave the house. I punished my kids when they stepped out of my very strict lines. I can’t say I really enjoyed them. I was too busy trying to make sure they did what I said so that we could all be in heaven together again.
It’s true that I had a sense of safety that God was protecting us and that nothing bad would happen as long as I followed all the “rules.” I was blind to the real threat of death that was on every street corner. I panicked about that loss of control after my youngest daughter died at birth in 2005 and I spent several years in a deep suicidal depression because I felt at fault for her death (though that was actually ridiculous in the circumstances).
I’ve come a long way since then. I no longer spend a lot of time worrying about death. It seems like I had to do all that teenage work about thirty years late, but I’ve dealt with it now. It’s not that I think I have control over life and death. I don’t. But what death means is that life is so much more precious to me now. This moment is all that I have.
I know that this statement would have baffled the old me. How can you like something that is surrounded by terror and sadness? Well, it’s hard to explain if you haven’t really faced death before, and I don’t think I ever did as a Mormon. People died, but then I waved a hand and told myself I’d see them again and there’s no real sorrow there. You’re just waiting until later and it will all be fine.
The cure to my suicidal depression was partly realizing that death was coming for me soon enough. I didn’t need for it to be right now. I could wait for the final sleep, the final release of all my troubles. Because I would only have one chance to live, however miserable that chance is.
I think about this a lot in the midst of situations I once would have called terrible and painful.
Kidney stones.
One child’s suicidal depression.
Another child’s cutting.
My marital problems.
The painful process of leaving Mormonism.
Illness.
Exhaustion (more and more often now that I'm getting older).
Waiting in line.
Sitting for two hours in the pouring rain, waiting for my son’s commencement ceremony to, well, commence.
Hoping for something for a child or other loved one that I have no control over.
Instead of wanting to push these away so I can work toward perfection or making it to the next life, where everything will be perfect, what I feel now is the sweetness that tinges everything that is temporary. And everything in this life is temporary. Everything lasts for only this moment and then it will be gone. Pushing away the bad means pushing away all the good that is inextricably intertwined with the bad.
During my kidney stones in the hospital, my husband held my hand and kept me sane.
My son’s graduation, however physically unpleasant, was one of the most joyous days of my life. I was well aware that it would only last for a few hours and I clung to those while they were mine, then tried to let them go. Because I have no guarantee I will have more hours like those with him, and at the same time, it’s likely that there will be other events, good and bad, that I will share with him and that this one will seem then like it doesn’t matter anymore.
Listening to my teen/grown children weep is one of the greatest privileges of parenthood. I’m not sure I have words to express the painful pleasure that comes with realizing that this person trusts you deeply enough to be vulnerable with you, combined with the realization that there is absolutely nothing you can do to fix their pain. Or that you should do to fix their pain, because this is their life and their journey and you are just here to witness it.
That’s all we have. The chance to witness. The chance to breathe a little while together. And then it’s gone.
I remember a few months after my daughter died, I watched my other children walk up the hill in our backyard in their snowsuits, rope pulling their sleds in hand, and they screamed out in delight as they zoomed down the hill. I was inexpressibly angry that they got to be alive and my youngest daughter didn’t. They got to enjoy going down that hill and she never would. She also got to skip all the bad parts of being alive, but I didn’t feel any comfort at that. Because how could I? That is what living is.
I’m not a fan of saying that bad things happen for a reason, or that God wanted this to happen so I learned a lesson. But I did learn a lesson. And if this bad thing happened for a reason, to teach me that life is short and that we don’t get to pick and choose, I embrace that lesson. I embrace all of it. Life to me is only so sweet and so precious because it ends. I can’t know when it will end. I don’t get to choose. I say “I love you” to my dearest ones whenever I have a chance because you never know. If that sounds morbid, yeah, it probably is. It’s also true, as true as anything I’ve found in this side of my life.
(If you liked this post, you might like the first episode of my new podcast, The Mormon Sabbatical: http://mormonsabbatical.libsyn.com/the-mormon-sabbatical-1-1-my-faith-crisis) | https://metteharrison.medium.com/new-thoughts-on-death-c1524992bb24 | ['Mette Harrison'] | 2019-07-08 16:07:31.701000+00:00 | ['Mormon', 'Death And Dying', 'Mormonism', 'Life', 'Death'] |
Everything you need to know about Min-Max normalization: A Python tutorial | Introduction
This is my second post about the normalization techniques that are often used prior to machine learning (ML) model fitting. In my first post, I covered the Standardization technique using scikit-learn’s StandardScaler function. If you are not familiar with the standardization technique, you can learn the essentials in only 3 min by clicking here.
In the present post, I will explain the second most famous normalization method i.e. Min-Max Scaling using scikit-learn (function name: MinMaxScaler ).
Core of the method
Another way to normalize the input features/variables (apart from standardization, which scales the features so that they have μ=0 and σ=1) is the Min-Max scaler. By doing so, all features will be transformed into the range [0,1], meaning that the minimum and maximum value of a feature/variable will be 0 and 1, respectively.
Why to normalize prior to model fitting?
The main idea behind normalization/standardization is always the same. Variables measured at different scales do not contribute equally to the model fit and the learned function, and may end up creating a bias. To deal with this potential problem, feature-wise normalization such as Min-Max scaling is usually applied prior to model fitting.
This can be very useful for some ML models like the Multi-layer Perceptrons (MLP), where the back-propagation can be more stable and even faster when input features are min-max scaled (or in general scaled) compared to using the original unscaled data.
Note: Tree-based models are usually not dependent on scaling, but non-tree models such as SVM, LDA, etc. are often hugely dependent on it.
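To make the scale differences concrete, here is a quick illustrative check of the raw feature ranges in the iris dataset used in the next section:

from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
print(X.min(axis=0))  # [4.3 2.  1.  0.1]
print(X.max(axis=0))  # [7.9 4.4 6.9 2.5]
# The four features live on clearly different ranges before scaling.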
The mathematical formulation
The mathematical formulation for the min-max scaling. Image created by the author. Here, x represents a single feature/variable vector.
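In case the embedded image does not render here, the formula it depicts is the standard min-max rule (in LaTeX):

x_{scaled} = \frac{x - \min(x)}{\max(x) - \min(x)}

That is: subtract the feature's minimum and divide by its range, so every value lands in [0, 1].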
Python working example
Here we will use the famous iris dataset that is available through scikit-learn.
Reminder: scikit-learn functions expect as input a numpy array X with dimensions [samples, features/variables].
from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler
import numpy as np

# use the iris dataset
X, y = load_iris(return_X_y=True)
print(X.shape)
# (150, 4): 150 samples (rows) with 4 features/variables (columns)

# build the scaler model
scaler = MinMaxScaler()

# fit the scaler on the data
scaler.fit(X)

# transform the data
X_scaled = scaler.transform(X)

# Verify minimum value of all features
X_scaled.min(axis=0)
# array([0., 0., 0., 0.])

# Verify maximum value of all features
X_scaled.max(axis=0)
# array([1., 1., 1., 1.])

# Manually normalize without using scikit-learn
X_manual_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Verify manual VS scikit-learn estimation
print(np.allclose(X_scaled, X_manual_scaled))
# True
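To preview the effect visually, one minimal sketch (not the author's original figure; it reuses X, X_scaled, and y from the snippet above) is:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(X[:, 0], X[:, 1], c=y)
axes[0].set_title("Original iris features")
axes[1].scatter(X_scaled[:, 0], X_scaled[:, 1], c=y)
axes[1].set_title("Min-Max scaled (both axes now in [0, 1])")
plt.show()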
The effect of the transform in a visual example | https://towardsdatascience.com/everything-you-need-to-know-about-min-max-normalization-in-python-b79592732b79 | ['Serafeim Loukas'] | 2020-06-14 14:52:55.761000+00:00 | ['Machine Learning', 'Python', 'Scikit Learn', 'Normalization', 'Feature Engineering'] |
Why a Design System Is So Important 🕺 | “Design Systems is about how to approach your design process more systematically, and ensure your design system helps to achieve the purpose of your product and fits with the culture of your team.” | https://medium.com/penggiat-desain/kenapa-design-system-itu-sangat-penting-3e83a3983a0d | ['Rizki Mardita'] | 2019-06-17 04:55:01.527000+00:00 | ['Design Systems', 'Design', 'Gojek', 'Design Language System', 'Product Design']
What is a Comms Planner vs. a Media Planner vs. a Brand Planner vs. a Digital Strategist? | Comms Planner
A comms planner provides strategic rigor to the implementation of the idea, ensuring integration of the work between creative and media. Their key outputs include messaging frameworks, comms tasks, tactical briefs, and ecosystems.
Previously (and less frequently) known as Engagement Planners, Channel Planners, or Connections Planners.
Brand Planner
Also called “Account Planner” or in some smaller advertising agencies, simply the “Planner” or “Strategist.” Confusing, no?
The Brand Planner is considered the "voice of the consumer" in the creative process. They figure out what messages need to be communicated to reach target consumers in the right way, with the right message. To do this, they use consumer research and trends to write creative briefs and evaluate work.
Media Planners
Called “Comms Planners” in some media agencies.
Within a media agency, media planners calculate how to best and most efficiently reach target consumers. They do consumer media research similar to an agency’s comms planner and use this research to develop a media plan and a tactical channel plan.
Digital Strategist/ Social Strategist/ Digital Guru….
May just be called “Strategist” especially at digital or social agencies.
From my experience, these roles vary the most from agency to agency. Generally, these people have deep knowledge of a specific field such as social media, search strategy, website strategy, etc. They use these precise skills to develop digital or social solutions to a brand's problems. In addition, they think about how to optimize each piece of creative for the channel, similar to a Comms Planner.
How Do Comms Planners and Media Planners Compare? How do they Overlap?
Here is a simple Venn Diagram that outlines how BBDO Comms Planning compares to our counterparts at a media agency.
How Do We Keep The Planners Straight?
In summary, we’ve found at BBDO that a simple way to talk about the different types of planners is to think of the Who, What, When, How, and Where of communications. | https://medium.com/comms-planning/what-is-a-comms-planner-vs-a-media-planner-a-brand-planner-a-digital-planner-c2a20634e2ca | ['Larissa Hayden'] | 2017-02-16 22:01:47.448000+00:00 | ['Marketing', 'Digital Strategy', 'Advertising Agency', 'Brand Strategy', 'Advertising'] |
Right-sizing and the teeter-totter: how one CEO built a culture of balance | Ryan Vanni is the CEO of Bukwild, an award-winning digital ad agency in Sacramento. He started his company at just 21 and didn’t have an understanding of the trends in business or what company culture really was. He’s quick to admit he had no idea what he was doing for most of the time.
Since launching in 2001, they’ve worked with clients amongst the likes of Amazon, Netflix, and Pandora. They just claimed a people’s choice Webby award for their work on Coachella’s website.
Vanni works particularly hard to craft the culture of Bukwild. He strives for a place where employees can be themselves at work, do good work, and take time to go on adventures and be with their families. As a father of four, Vanni recognizes the importance of family time.
As part of our Culture Changers series, we sat down with Vanni to see how he built the culture of Bukwild to balance the wants and needs of his employees and the demands of a high performing advertising agency.
How do you describe Bukwild’s culture?
We are very transparent and familial. We work to live, we do not live to work. We strive for excellence while viewing ourselves right-sized. It's easy in our professional careers — in my estimation, at least — to think that we're doing more or accomplishing more than we really are and lose track of our values.
What do you mean by “right-sized?”
Right-sized-ness is viewing the importance of our work relative to the balance we need in life. It’s so easy to fall into self-importance. Often the things I desire for a culture are in direct opposition to each other.
For example, I mentioned we’re a work-to-live culture, so we have uncapped paid time off. This means we have people traveling constantly, and that comes at a cost. But to have our own adventures in our own lives pays dividends in the culture we keep. But we have great work we need to accomplish.
Balance is the key…a scale or a teeter-totter, two opposing things hoping to be at an equal weight.
A lot of people are talking about balance right now because it’s a millennial value. When in actuality “balance” is a human value. Millennials just happen to be the generation to question so boldly the absurd standards we’ve become accustomed to.
They’ve hired companies like us to find out millennials care about balance. So they allow you to bring your dog to work or have a happy hour or something.
But that’s not real balance. How do you be a company in a capitalist environment while balancing the actual lives of the people who work for you?
That’s my job. To try and find all those things that will equal the same weight on the teeter-totter.
What levers have you pulled to balance the teeter-totter?
Well, the uncapped paid time off, we’ve offered it since 2010.
Another local business owner told me it would backfire. But only one person has ever abused it, and we let that person go. We encourage you to go be with your family and take your adventures. When work needs to get done, we just figure it out. People take time off and we work around it.
One way we handle all the time off is that our work isn't siloed. There's a democratic way we get things done. It doesn't have to go through one person; it's a shared responsibility.
You started your company at age 21, what got you interested in building a transparent company culture?
I don’t read a lot. I specifically don’t read a lot about what other companies do, it clouds my judgment. I want to come from a very earnest place. I spend time listening to my own intuition and writing about it and thinking about how this value can materialize in a reasonable way. The whole authenticity thing, I am by my very nature a transparent person. I am uncomfortable in small talk.
How have you cultivated authenticity at Bukwild?
“Bring your whole self to work” is part of our vision statement, it’s in our manifesto. We’ll do great work by giving people space and time to be themselves. In a small group, culture is set by the leader.
For example, alcoholism. I went to rehab.
My whole team knows this. It’s as much a heavy subject as it is a joking subject. We have a lot of alcohol clients, and it comes up as an ironic joke. But, as soon as I became comfortable sharing my struggle, which was about a year in, it opened up doors for other people to share. Some incredible things have happened because I go to work as me. I don’t go to work as the boss.
The old school mentality would say, “Keep that shit to yourself. Of course, you don’t bring it to work.” But it’s in everybody’s lives. Everybody has some strife and struggles with something. Bringing your full self to work doesn’t require you to word vomit all your stuff. But if you want to share, we won’t shun you.
What lessons have you learned about culture building in the 17 years you’ve been running your own agency?
It’s a lot harder than it looks.
I think it takes constant mining and gardening. Last year, I took six weeks off. Before I took off, I sat down with each employee and asked for input. I got all kinds of critical feedback — good but critical.
The biggest lesson was I need to have a constant dialogue with every employee. One is not enough.
The initial sit down was so helpful that we did another a couple months later, and we’re planning on doing them twice a year.
Between the first and second sit down did you see a culture shift?
Absolutely! I got great feedback, and we put that into action. There’s a tremendous amount of work, and we haven’t put all the stuff into action.
I realized directors were making decisions without looping in the rest of the teammates. The farther down the org chart you go, the more insight you have. So now we have Agency Alignment meetings (or AA meetings) and happy hour (yes, like I said, it’s a running joke about a heavy subject). These meetings are opportunities to share what we’re thinking at the director level and get feedback. It’s a full group discussion, it’s casual, and you can say whatever.
What would you do differently?
I have allowed fear to become a part of my operating system…We’ve spent too much time being all things to lots of groups and not enough time being very convicted about what our value is and who we shouldn’t be working with.
I’m still working on how to work out of intuition and inspiration and keep fear out of my business. You could write books and books and books about that, but I wouldn’t read them anyway.
Join the discussion on Atlassian community about how your team creates a transparent culture! | https://medium.com/smells-like-team-spirit/right-sizing-and-the-teeter-totter-how-one-ceo-built-a-culture-of-balance-63788b56d438 | ['Melody White'] | 2018-07-19 22:46:15.191000+00:00 | ['Teamwork', 'Leadership', 'Business Strategy', 'Startup', 'Company Culture'] |
So You Want to Fund Black Founders | Photo taken at the Kapor Center in Oakland, CA
I was one of the first Black investors to be promoted to Partner at a venture capital firm. At the time (2015), I was also one of the youngest, having just turned 31 years old.
I have been in the venture industry since 2011, beginning as an intern in Kapor Capital’s inaugural Summer Associates program. I have never seen a wave of pro-Black commentary and actions like the one we are all witnessing today.
Over the past few weeks, more people than ever before have aligned themselves to the #BlackLivesMatter movement, including people in tech. Specifically, many venture capital firms are now opening their doors to fund more Black founders. Many of these firms have reached out to me, asking what ways they can be helpful during these times. Rather than focus on the history of venture and its deep-rooted exclusionary practices, I am writing this piece as a guide. After nine years as a venture capitalist (which is considered “veteran” status in the Black investment community), I have learned some things that I think can help, and I’m going to keep it simple.
Here are 3 immediate actions VC firms can take:
1) Hire Black Investors
Most firms have reached out to Black founders directly on Twitter to take pitch meetings and offer office hours. Instead, start with diversity at your own firm. This benefits you in two ways:
Adding Black investors at your firm will bring more diverse deal flow and perspectives.
81% of all VC firms do not have a single Black investor. 33% of the Kapor Capital investment team is Black. 34% of our first-time investments have a founder of a racially underrepresented background. Diverse investors have access to diverse founders.
The Associates and Principals of today are the Partners and Managing Directors of tomorrow.
Investing in talent today means investing in the leadership of tomorrow. At Kapor Capital, we have run a Summer Associate Program since 2011. (I was in the first class). We have an incredibly diverse, scrappy, and well-educated pipeline of trained investors that are ready to be hired. Also, BLCK VC and HBCU.VC are other organizations working to advance the success of Black investors.
But let me be clear.
If you do not publicize the jobs that are available at your venture firm, then you are intentionally being exclusionary.
It does not matter how much Black talent is available for hire. People can’t get a job that they don’t know exists.
At Kapor Capital, our diversity is by design, our inclusion is intentional, and it starts with publicly posting available job openings.
You can learn more here about how we do it.
Awesome, you’ve got your house in order. Now, let’s talk about Black founders.
2) Fund Black Founders
Kapor Center has covered in great detail the leaky tech pipeline and how it shows up in venture funding. The problems start in early childhood education, continue through higher education, and result in a tech workforce that lacks diversity, inclusion, and equity.
As much as founders appreciate office hours, mentorship, and free advice, what they really need are investor checks to get to work.
Only 1% of venture backed companies are led by Black founders. Yup, out of 100 funded founders in a room, only 1 is Black. Let that sink in.
As venture capitalists, we are consistently advising our portfolio companies not to spend too much time fundraising, and closing as soon as they identify the right lead for their round. The longer the fundraising process, the more time a CEO must spend away from their core business activities.
If you are not seriously interested in writing checks to Black founders, then save the lip service and skip the PR stunts.
But if you are serious, the answer is straight-forward: put your money where your mouth is.
3) Hold Your Firm Accountable
In 2015, Kapor Capital publicly launched a $40M initiative focused on diversity in technology, with $25M earmarked for venture capital. Soon after, we created the Founders Commitment, a first-of-its kind initiative where portfolio founders set diversity and inclusion goals for their individual companies. In 2019, we released the Kapor Capital Impact Report, highlighting our learnings over the last eight years of investing in gap-closing social impact companies and underrepresented founders. In the report, we outlined why financial returns must not be the only measure of a company’s success, and why diverse backgrounds give founders a competitive edge.
Your public commitment does not have to look like ours, but it does need to be something you are serious about being held accountable to. State your goals, measure your progress, and publish your results. | https://medium.com/kapor-the-bridge/so-you-want-to-fund-black-founders-fc58e3f93972 | ['Brian Dixon'] | 2020-06-08 22:34:10.266000+00:00 | ['Diversity In Tech', 'Investors', 'Tech', 'Startup', 'Venture Capital'] |
Web App for Healthcare: Case From Practice | Since the healthcare topic is quite popular now, I decided to share with you our exciting experience in developing a web application for patients. It was a large and severe project which IntexSoft’s (the company I work in) team was working on for several years.
I have to omit all names and some details for certain reasons: the screenshots in the article also serve a demonstration purpose only — the real application looks a little different, but I hope you will still enjoy this article.
Background
The client urgently needed to extend the team to develop the application. And since we already had a positive experience of cooperation, they turned to us.
The product itself is a large-scale, multifunctional web application that allows you to quickly find a doctor of any specialization, make an appointment, or receive an online consultation.
Our team worked only on a part of the functionality, and particularly these features we will discuss in this article.
Key tasks and solution
Below I will describe the main tasks that were set for our team and the way we solved them.
1. Facilitate the process of posting video content for content managers
The application has a partner information section that contains a lot of video files. To accomplish the task, we used the Brightcove service, which allowed us to compress video files to the necessary parameters and assign a specific ID to each video. This speeds up the loading of video content and makes the process of placing files easier.
2. Customize the search for the needed specialist by location
The application has a separate unit allowing users to quickly find the doctor they need: the user can go to the page, choose a specific state/district, see the locations of partner doctors on the map with all the necessary additional information, and make an appointment or contact a specialist for online consultations. To implement this feature, we used Google Maps.
We also developed a special React component that allows users to filter locations and display only those regions where partner doctors are available.
3. Develop a COVID-19 info block
The client requested a section containing the most relevant information about the virus and the epidemiological situation. By following the link from the pop-up banner, the patient can make an appointment or request an online consultation.
4. Make the app accessible for people with disabilities
One of the most important tasks of the project was to adapt the application for people with visual impairments. For this task, we applied HTML accessibility practices on all pages, which allow users to listen to and navigate the content of the app. Voicing is carried out via screen readers — VoiceOver for macOS and iOS, TalkBack for Android, and NVDA (by NV Access) for Windows.
By the way, if you are interested in the topic of Accessibility, then you might also like the interview with our Accessibility testing specialist.
5. Simplify content management for admins
The application contains a section of diseases that are studied and treated at the medical center. This section contains detailed information about diseases and real cases.
When creating this section, we used CKEditor to facilitate the work of content managers. For the detailed display of specific objects — blocks with quotes, galleries, blocks with relevant stories, etc. — we developed additional plugins. When creating an object, a content manager can select a ready-made template for a certain block, then insert text, and everything will be displayed on the screen in the required format.
For the block of diseases, we also developed a plugin that allows content managers to add videos from other services to CKEditor.
6. Set up localization
To configure localization, we developed a separate React component that allows users to switch between 10 different languages.
7. Speed up page loading time
The application has a complex menu, which could slow down the loading of pages. Based on the client’s requirements, we created HTML documents with a menu item title and the necessary data, and then these documents were added to the project. When you first enter the website, the menu is loaded and cached, which speeds up the work of the application.
The app also contains many other static elements. To avoid overloading the servers, different levels of caching are used. The first level is performed via Symfony, which acts as a link between the frontend and Drupal, and when the page is fully formed, everything is cached via Varnish.
8. Make the app responsive
Our team faced the task of adapting the application interface to the main types of modern devices — computers, tablets, and smartphones. In this regard, the client’s designers created a separate design for each type of device. To implement this, we used a grid-based layout system. Thus, depending on the device screen parameters, the application interface is displayed in the relevant size.
Since the app has a lot of graphic content, it also had to be adapted so that the images uploaded in the original size would not slow down the page loading on mobile devices. For this, our experts developed a special module for Drupal. Now, when adding the content, it is necessary to upload only one picture of the original size, and the module creates 3 copies of different quality for certain devices and loads what is needed.
Final functionality of the application
For users:
search for a doctor by region/competency;
making an appointment or an online consultation;
multilingualism (10 languages);
accessibility for people with visual impairments;
blocks with useful information for patients and doctors;
adaptability for the main types of modern devices.
For administrators:
ready-made templates for certain types of content;
simplified management of text, graphic, and video content.
Technologies used
Frontend: HTML5, scss, React, Lodash, jQuery, Babel
Backend: PHP, Symfony, Nginx
CMS: Drupal
Database: MySQL
CI: Jenkins
Other: Varnish, CKEditor, Brightcove, Susy
Summarizing
As a result, the client received a multifunctional web application that greatly simplifies the process of finding the necessary medical specialist for end users, as well as simplifies the content management process for site administrators.
P.S. If you are interested in this kind of article, write in the comments below or just clap. I will try to share such real cases from practice more often.
How to Find The Best Mobile App Developers for Your Project | There are nearly 2.3 million professionals working as mobile app developers across the globe. The number is enormous and increasing every day, making it impossible for you to find the right kind. To save you the trouble and billion dollars we’ve found you the right strategies on how to employ the best resources and brightest minds? If you choose your resource without putting much thought into it, there are chances that you will make a wrong choice. You really can’t afford to go wrong when hunting for mobile app developers for your project. The write-up is going to offer you insightful suggestions on how to find the best resource.
Tips to Find the Best Mobile App Developers
Here is a series of tips that will help you find the best mobile app development resources. While making the choice, try to get as much information as you can about the developer. This will help you find the best mobile app development company for your project. Here are the suggestions to keep in mind.
Select A Destination
There are now analyst firms that let you access developers from all around the world. However, the top locations continue to be a few preferred destinations like India, the U.S., or the UK. Depending upon the location, the cost of the resource will vary prominently. India is a preferred destination to hire mobile app development companies, and the cost of a basic mobile application can range anywhere between $5,000 and $8,000. This amount can reach up to $40,000 an app depending on its complexity. So, the idea is to do thorough research to find a company that offers the best services within your preferred budget.
Choose: a Company VS an Individual
The next thing that plays an important role in finding a perfect match is deciding whether you want a mobile app development company to work for you, or whether an individual developer can excel at the same job. Freelance mobile app developers are getting the attention they have always deserved. The best thing about hiring freelancers is that you get a resource working on your project only, unlike a mobile app development company that manages a number of projects simultaneously.
This article was originally published here. | https://medium.com/appdexa/how-to-find-the-best-mobile-app-developers-for-your-project-d6b06c305561 | [] | 2017-09-12 13:47:28.002000+00:00 | ['Mobile Apps', 'Mobile App Development', 'Technology', 'Top Mobile App Developers']
If Virtual Reality Is Reality, Virtual Abuse Is Just Abuse. | If Virtual Reality Is Reality, Virtual Abuse Is Just Abuse.
As more of us embrace virtual space, how should we deal with aggressive, abusive, or indecent acts that occur in a parallel world?
“If you’ve got something that is independent of your mind, which has causal powers, which you can perceive in all these ways, to me you’re a long way toward being real”, the philosopher David Chalmers recently told Prashanth Ramakrishna in an interview for the New York Times. Chalmers invoked remarks by fellow Australian philosopher Samuel Alexander who said that: “To be real is to have causal powers”, and science fiction writer Philip K. Dick who said that, “a real thing is something that doesn’t go away when you stop believing in it.”
Professor Chalmers’ comments were made in reference to the new and increasingly sophisticated world of virtual reality; something he believes has the status of a “subreality” (or similar) within our known physical reality. A place that still exists independent of our imaginations, where actions have consequences.
Chalmers draws parallels with our trusted physical reality, which is already so illusory on many levels. After all, the brain has no direct contact with the world and is reliant upon the mediation of our senses. As the mathematician-turned-philosopher points out, science tells us that vivid experiences like color are “just a bunch of wavelengths arising from the physical reflectance properties of objects that produce a certain kind of experience in us.”
He is certainly not alone in his observations. There is an established tradition of asking whether our worldly experiences are anything more than a deceit — a virtual dupe for the amusement of some higher power (see here and here). Within that scenario, a VR world would just be another virtual experience within an even larger virtual world.
Whatever the status of our more familiar reality, we may soon have problems distinguishing these so-called physical experiences from their virtual counterparts. Even in cases where the two are visually distinct, the salience of the acts committed in one may reverberate in the other — particularly when it comes to acts that provoke an emotional response.
How should we respond to distressing, manipulatory, or abusive behavior in an immersive and interactive environment? Particularly when it graduates from “content” to something much more like a lived experience?
In truth, these problems are already well on their way to becoming manifest.
Take online simulation environments like Second Life, where virtual simulations of child abuse are played out by consenting adults without legal implication. In 2018, a research report about “sexual ageplay” examined this phenomenon — something that would become more widespread as more fluid and anonymous virtual environments arise — noting that in most jurisdictions there is no illegality as there is no direct harm to a child involved. This is in spite of other users sharing a “common understanding of sexual ageplay online as a form of virtual paedophilia rather than as a form of sexual fetish between consenting adults.”
Though this activity is characterized as “victimless”, Second Life's creator Linden Lab has sought to police it with their community rules. But with ever-multiplying virtual worlds, there's every likelihood new ones will spring up with “ageplay” as their specific purpose. They're playing a game of whack-a-mole.
The backlash by other Second Life users is interesting. Especially in light of the fact that the same audience seem to view other “edgeplay” differently. The report tells us that: “…behaviors such as rape-play, murder-play, incest-play are broadly culturally accepted because they are constructed as only consensual fantasy and so not as reflecting or affecting the RL [real life] resident and their future behaviors.”
Though these users were accepting when it came to consensual examples, it’s more than clear that there are innumerable instances when sexual harassment and assault in virtual environments do not involve mutual consent.
In 2016, a woman named Jordan Belamire wrote about her experience of being sexually assaulted in a virtual reality game called QuiVR, where another (male) player started to virtually rub her chest, proceeding to give chase with “grabbing and pinching movements” when she cried out for him to stop (the game allows verbal communication). Understandably, Jordan felt extremely violated by the encounter even though no physical touching actually occurred.
This assault was both real and virtual. Using Chalmers’ criteria for reality — it was independent of Belamire’s imagination, and it absolutely had a causal effect.
This intimidating situation is not an isolated event. Female users who regularly step into VR environments often have a string of similar examples to share. As avatars and self-expressions become more photorealistic, it’s still unclear how platforms and laws will adapt to this kind of behavior. As law professor Mary Anne Franks of the University of Miami has cautioned:
“We are nearing a situation where inputting a person’s body type with scary accuracy into scenarios where they can be raped, assaulted and even killed. You’ll never know if the guy in the cubicle next to you or the guy sitting across from you on the train isn’t doing exactly that on his phone.”
Dauntingly, this permissive VR environment is burgeoning in parallel with another phenomenon that allows bad actors to take control of another person’s body and identity: deepfakes. We have already seen this medium weaponized in ways that violate individuals — and particularly females — while escaping true accountability by playing along the tricky reality-simulation boundary.
Are we headed towards a virtual bedlam in which predators can recreate and operate our near-precise human likenesses in the deeper, darker recesses of another reality? And perhaps even sell that access on? The burden of proof falls on those who would deny this potential.
There are, of course, those who say that this is still a victimless endeavor. After all, this would not be you, or even your actual body. It’s an avatar or a likeness and there is no physical contact. Any violation is just a perceived one. Offensive, perhaps, but not an offense.
However, this diminishes the very real psycho-physical connection we feel with our likenesses — a connection that is only likely to grow stronger in the near future. In 2010, researchers demonstrated that a body ownership effect (comparable with the rubber hand illusion) can be triggered in virtual reality by tracking the subject's body movements in microfine detail, down to the level of hand and finger movements, and then reproducing these movements exactly in the virtual body. Most recently, another set of researchers found that this ownership effect was even reproducible in avatars of non-human animals, like spiders and bats. The MIT Technology Review is quick to point out that this effect has "inevitable applications in the world of pornography".
So, if we have a perspective of our virtual body that countenances it as an owned form of sorts, isn’t any imposition on this form an imposition on the person that momentarily inhabits it? It certainly seems so, and there’s little doubt that there would be real psychological effects from its assault.
But now we have a problem. Even if we agree that the harassment or assault of a user in a virtual environment constitutes a real life harm, this does nothing to condemn the users involved in morally dubious ageplay posing as minors and abusers. After all, there is no complainant, so is there any real harm? We cannot assume, as Second Life community members did in the 2018 study, that behaviors in virtual reality are indicative of behaviors in real life. Nor can we make nebulous generalizations about behaviors it might promote that ultimately create real life harms to children further down some perceived causal chain. If we were to consider this was an issue of virtual bestiality — rather than virtual child abuse — undertaken by two consenting adults, it’s unlikely that we would use a “cruelty to animals” argument to lobby for its elimination.
And yet, just intuitively, virtual child abuse feels like something that should be objectionable to any right-minded person. So, how can we safeguard our new VR environments without falling prey to accusations of needlessly censoring victimless fantasy?
Perhaps the answer lies in examining precisely why certain acts of abuse are almost universally criminalized in the real world. Does it always rest on the harm to the victim? It strikes me that it does not. There are instances where we will condemn an act even when we cannot easily define its bad consequences. Indeed, some acts — like adultery, habitual lying, or dishonoring the dead — are often considered intrinsically wrong, regardless of whether there is any specific fallout.
This is undoubtedly a more complex scenario, but there is some precedent. In 2008, the United Kingdom outlawed cartoons of child sexual abuse. As in virtual worlds, no children are directly harmed in the creation and distribution of these illustrations, so it makes a useful comparator. It’s worth noting the comments of the then Parliamentary under Secretary of State for Justice, Maria Eagle that:
“This is not about criminalising art or pornographic cartoons more generally, but about targeting obscene, and often very realistic, images of child sexual abuse which have no place in our society.”
It is a broad and definite condemnation of the acts depicted. It’s about not wanting to inhabit a world in which such grave abuses can be trivialized and openly festishized by adults. An attempt to eliminate something that is toxic in its very depiction.
So, if this type of victimless imagery is enough to make us, at the very least, extremely uncomfortable — if we can conceive of it as a kind of abuse in a civilized society — we should also consider performative sexual abuse in realistic virtual worlds just as troubling, if not considerably more so. And of course, many do. But as this new wild west opens up in front of us, it’s important we achieve some clarity on what constitutes an acceptable and profitable use, as well as what kinds of behaviors could act as contaminants. Particularly if, as Professor Chalmers would have it, virtual reality is actually another reality, not some lawless playground where anything that we can imagine can be enacted.
This is a slippery slope, of course. Gamers would be appalled if real world rules prevented them from enthusiastically shooting at their opponents in apocalyptic virtual worlds. Similarly, there are a great number of actions that are illegal in real life on safety grounds — like driving at great speed — for which VR could be the perfect outlet. We also know that it is a fantastic place to learn and practice special skills, like surgery, to actively reduce real harm. So, it appears the blanket application of laws from physical reality to virtual reality could undermine the best use of the medium and quickly deteriorate into farce.
Nevertheless, that doesn’t mean we shouldn’t give due consideration to which rules we can and should import. Where an action has true causal effect — on an individual or on broader society — it is appropriate that we take measures to protect those precious things. Not just in real life, but in all possible worlds. | https://towardsdatascience.com/if-virtual-reality-is-reality-virtual-abuse-is-just-abuse-34f09f1007ef | ['Fiona J Mcevoy'] | 2019-12-11 03:54:19.245000+00:00 | ['Virtual Reality', 'Ethics', 'Technology', 'Abuse', 'Society'] |
The Absolute Reasons of Having a Mobile App for Business | The global number of mobile phone users, as per Statistica, by the end of 2017 is going to be 2.32 billion. The fact also denotes that one-third US population is likely to have smartphones of their own by the end of this year. The figures are persuasive enough to every mobile app development company to have an app for the business.
A business seems closer to the clients when there is an app to leverage a direct connection to the users. Additionally, user’s dependency on mobile phones has made it essential for companies to create an app for their businesses. Moreover, the ease of connecting with the clients helps to gain more clients in the business. Here are added absolute reasons to invest in mobile app development.
Reasons to Invest in Mobile App Development
24*7 Availability to the Clients
There are a set of pre-defined perks to having a mobile app to every mobile app development company. And, the best one is to offer continuous connectivity to the clients. A business app makes it quite easy for clients to interact with the business. Modern apps have also made it a cakewalk to offer a way to the clients in order to interact with the business at any point of time. This is one of the most exclusive reasons to invest in mobile application.
Enhancement to Brand Recognition
Developing a business centric application is taking a leap towards improving the brand recognition. A brand is the first line of introduction of a business to every user. It needs to make an impact on the users in order to entice them towards the services. Even if the users are not interacting with the business directly, an enticing mobile app development company’s logo can attract them towards the website, which will always count for the success of the business.
Providing Value-Centric Services to the Clients
Apps actually help mobile app development companies to offer value-centric services to the clients. For example, there are a number of applications that users can browse through either for getting the directions, knowing the nearest food outlet or for navigation purposes in order to reach a place. In the context of mobile app development business, the apps can really be helpful for clients to offer them knowledge about the latest app making technology.
This Article is Originally Published here. | https://medium.com/appdexa/the-absolute-reasons-of-having-a-mobile-app-for-business-2dacd58e75e3 | [] | 2017-09-15 14:43:27.748000+00:00 | ['Mobile App Development', 'Technology', 'Apps'] |
Hi Santa. | Hi Santa.
I am a child living on the equator. I never had any Christmas wishes growing up. This year, I do. Would you hear me?
I am not asking for toys. They are irrelevant to me this year.
My parents have been retrenched and are staying at home. I used to complain that they never had time for me. Now, they have the time, but they look at me with sadness in their eyes.
I don’t want that. I want my parents to be happy.
How can I help them? How can I find a job when I am too young?
Maybe I should try becoming a social media influencer. I see many people doing that on my Instagram and YouTube accounts.
Can you teach me how to do that?
Hugs and Kisses,
Kiddo
Chuttersnap on Unsplash
Liam Ireland | https://medium.com/illumination/hi-santa-f801e6d27833 | ['Aldric Chen'] | 2020-12-18 02:13:23.867000+00:00 | ['Christmas', 'Self-awareness', 'Thinking', 'Short Story', 'Santa Claus'] |
Intent vs. Outcome | De Blasio probably waited too long before running for the White House. At the moment he had decided to run, he was the 23rd candidate to put his hat in the ring. De Blasio has proven time and time that he is a great progressive, middle-class fighter democrat candidate that would have done well against Trump and during his short campaign, he was remembered for calling the president “Con Don” on multiple occasions.
Bill de Blasio and his wife Chirlane McCray on “The View” on August 2- ABC News
During the campaign, de Blasio raised around $1.1 million- far behind the lead candidates. He didn’t qualify for the third democrat debate in September. He was also too low in the polls to get a seat to the fourth debate next month.
I find it quite surprising he wasn’t able to get support from a large number of democrat members, taking into account that he is in command of the most populated city in the United States. Pete Buttigieg was able to attract much more attention and support even if he is only mayor of South Bend.
Being in charge of a population totaling over 8M people since 2014 isn’t an easy task. Before becoming mayor, de Blasio worked in City Hall and then was elected as a municipal councillor. He managed Hillary Clinton’s 2000 Senate campaign and later criticized her for not having a progressive enough platform during the 2016 general election.
With the Party in mind he has indirectly attacked Biden, he said: “ We don’t have to worry about lack of unity, we do need to worry about lack of passion.”
Bill de Blasio’s campaign slogan for his bid, Working People First, didn’t resonate with democrats.
Some say he isn’t charismatic enough. I am not a New Yorker, nor an American, but I think that he brought forward good policies for the Democratic Party. Policy-wise he was probably in the best position to possibly obtain support from some republicans in 2020. Everyone agrees that for the democrats to beat Trump next year, it is going to take a big voter turnout.
It will be interesting to watch who the mayor supports as a candidate. Joe Biden will probably not be the one he supports as de Blasio has called out Joe Biden the last few months for being too much of a moderate.
The Big Apple on the Map
New York City has been doing well compared to other cities in the U.S.A. It has been a witness to many bold local reforms over the past few years. This will push positive consequences for changes at the other levels of government.
Because of gang activities and illegal trade, shootings have increased in 2019, compared to 2018. The crime rate is now lower. Serious crime, a category that includes burglary, rape and felony assault, is down nearly 4% this year compared with the same period in 2018, according to the New York Police Department (NYPD).
The rate is the lowest for the first six months of the year since the NYPD started tracking major crimes in 1994. “There is always more to be done, and there are some areas of real concern, but the big picture is very, very positive”, said de Blasio earlier this year.
Under de Blasio’s watch universal pre-k for all children was implemented. Because of this new initiative, about 70,000 NYC children are now enrolled in the system.
Earlier this year, New York was home to an impressive rise of the minimum wage- that is taking place gradually throughout 2019. Now, all businesses with more than 10 employees must pay their staff $15 an hour. This policy improves the lives of 1.5 million workers, according to the New York City comptroller’s office.
New York City on a sunny day- Tony Cenicola/The New York Times
Excluding 29 States and Washington, D.C., at the national level, the minimum wage remains at a low number of $7.25 an hour.
In comparison, in Canada, the minimum wage varies from province to province. Saskatchewan is where the minimum salary is the lowest ($11.06 an hour). The minimum salary is the highest in Alberta ($15 an hour)- in part because of the oil industry.
The next democrat presidential debate will be held on October 15 in Westerville, Ohio, moderated by CNN and The Times. | https://pilonolivier.medium.com/intent-vs-outcome-fc9e857f16a0 | ['Olivier Pilon'] | 2019-09-22 23:57:26.855000+00:00 | ['Bill De Blasio', 'New York', '2020 Presidential Race', 'Democrats', 'Politics'] |
Too Many Stories | Arguably, story is in our souls.
People have told stories since they first walked the Earth. We find stories painted on rock walls, woven into tapestries, and carved into buildings and monuments. Stories have been handed down from generation to generation, written, published, recorded as voice and music, acted out on stage, filmed, and posted on the Internet.
But there are stories and there are stories. Stories can relate history, teach science, provide moral education and guidance. They can entertain, relax, offer welcome distraction, even bind us together. Or they can mislead and manipulate. Like every tool we create, stories are morally neutral. They can be used for good or ill.
In the political realm, stories convey messages about people, organizations, policies, or programs. Here, they have basically one purpose: to align people with the storyteller. While the practice is ancient, in recent times a word has emerged for it: narrative.
Before the early 2000’s, you didn’t hear “narrative” used that way. You were more likely to hear it with regard to fictional storytelling, where it refers to storytelling passages distinct from dialogue. It only became a common term for political storytelling about ten years ago. In either sense, narrative is basically the same: the organized telling of a connected series of details and events.
In every narrative, material is selected and arranged with a goal in mind: to engage the audience’s attention, draw them into the story, and align them with designated characters. In other words, narrative manipulates the thoughts and feelings of its audience. In a work of fiction, such manipulation is essential to achieving the goal of the story. It hooks you, draws you in, and keeps you entertained until the climax and denouement. You welcome that manipulation. Without it, boredom would drive you off to something else!
But in the broad realm of politics, manipulation by narrative has a very different purpose. Far from entertaining, it seeks to gain the audience’s allegiance. From the company proclaiming its good intentions to the candidate running for office to the party hammering its agenda through a legislature, political narrative is about amassing support. Put another way, it’s about power, about who gets their way.
As an author of fictional tales, I can — I hope — engage your interest in people who do not exist while they “do” things not playing out in the real world. I make you care about what happens to these nonexistent people, even though it’s all in your head and you know it. That’s the power of narrative. If blatant fiction can so manipulate us, how much more can narratives claiming to be factual.
And that’s why we’re drowning in them. Corporate and political narratives battle daily for supremacy on television, radio, the Internet, and print media. So much power and money are at stake that fact and truth take a back seat to persuasion. Not that that’s news. How often have devoted fact checkers demonstrated the spin and outright lies embedded in nearly every modern political narrative? Even when based in fact, these narratives gleefully depart from reality as needed to build a compelling story. In an era of fake news, the real news may be that to varying degrees it’s all fake.
Do you care? Maybe not. I know people who don’t. But if you do, if you think at least sometimes it’s important to distinguish truth from falsehood, fact from fiction, then here’s a principle you need on your side:
Independent investigation of truth.
Yes, we’ve all encountered this in some form. It underlies all scientific advancement and ideally is how we make up our minds about matters of importance. It’s even stated or implied in most major religious systems. (The above wording is taken from the Baha’i Faith, my own religion.) Still, while not shockingly new or different, it’s worth stating explicitly. In simplest terms, it holds that each of us inherently has both the right and the responsibility to investigate matters for ourselves and come to our own conclusions about them.
Independent investigation of truth holds that we shouldn’t take the stories told by others at face value. We should look into the claims underlying them and evaluate them with justice. While none of us can directly investigate everything, we can draw upon the collective knowledge of humanity, find the facts that underlie the stories, and consider how various people interpret those facts.
Narratives, particularly those motivated by political aims, should never be given full credence without such investigation. They are too often corrupted by the drive for power. Once truthfulness is sacrificed, all other virtue begins to erode, and the worst form of erosion is the division caused by, indeed deliberately fostered by, competing political narratives. United we stand. Divided we fall. Divisive political narrative works not to our advancement but to our destruction.
If we really wanted a better world, we would resolve to be unified. We would consult together on facts and seek agreement on how to address the problems before us. Whether we can do that remains to be seen. In the meantime, we’re smothering under the press of way too many stories. | https://lehket.medium.com/too-many-stories-10c5524b7799 | ['Dale E. Lehman'] | 2018-10-10 21:04:10.472000+00:00 | ['Narrative', 'Investigation', 'Politics', 'Storytelling', 'Fact'] |
Training Considerations for Multiple Skills Orchestration | When you are building a Watson Assistant chatbot that must handle a variety of knowledge domains, you may begin looking into a concept known as “multiple skills orchestration”. Natural language classifiers often perform best when the classifier is focused on a single domain. Some use cases will perform just fine when mixing domains in a single skill (such as a banking use case that also handles light chit-chat). However, if your chatbot needs to demonstrate a broad range of knowledge and is struggling with accuracy and confidence, the solution may benefit from isolating each domain as a separate skill. This allows you to deliver a robust natural language experience where the chatbot can cover a wide range of topics and also go deep into any single domain.
The three main approaches for implementing a multiple skill solution are:
Router
Spray
Waterfall
There are a number of factors to consider when selecting an approach. (Check out this article to dive into the technical implementation aspects.) Once an approach is selected, there are a couple of options for training your skills. This article focuses on those training considerations.
Note: This article includes the results of several experiments that demonstrate various machine learning/training concepts. These experiments were conducted using data available in the Watson Assistant Content Catalog. Consider replicating these experiments with your own data if you are struggling to identify the best approach for your use case.
The Router Approach
The router pattern is a hierarchical configuration in which an utterance is initially classified to the most likely topic or domain, and then “routed” to a sub-skill for refined intent classification (and usually to deliver the associated response). | https://medium.com/ibm-watson/training-considerations-for-multiple-skills-orchestration-5d33ef7e7936 | ['Cari Jacobs'] | 2020-06-12 03:07:22.017000+00:00 | ['Editorials And Comments', 'Artificial Intelligence', 'Chatbots', 'Data Science', 'Watson Assistant'] |
Scheduled serverless dbt + BigQuery service | My colleague Felipe Hoffa recently published a blog post titled Get started with BigQuery and dbt, the easy way. More specifically, he showed how to install dbt in Google Cloud Shell, configure it and manually run it to create a temporary dataset in BigQuery. This is great for testing dbt + BigQuery but how do you run this kind of setup in production?
dbt documentation states that Running dbt in production simply means setting up a system to run a dbt job on a schedule, rather than running dbt commands manually from the command line.
Cloud Shell is just a temporary VM in the cloud and not suitable for production workloads. One obvious solution is to create a dedicated VM, install dbt and have some kind of cron job on that VM to run dbt on a schedule. This will work but who wants to maintain a VM? Not to mention, you need to pay for the VM per second even when dbt is not running. It’s wasteful. We can do better.
A better solution is to use Cloud Run. Cloud Run is fully managed, no VMs to setup or maintain. Its pricing model is based on request time, you’ll only get charged when the dbt service is running.
In this blog post, I want to show you how to take Felipe’s sample and make it more production-ready by running it as a serverless Cloud Run service on a schedule.
Challenges
Running dbt as a Cloud Run service has a few challenges, namely:
dbt is mainly a command line tool whereas Cloud Run expects HTTP requests. How do you call dbt command from a Cloud Run service? Cloud Run runs containers. How do you run dbt in a container? How do you authenticate dbt with BigQuery? OAuth works for end users but for services running in the cloud, it’s probably not the right solution.
Let’s tackle them in that order.
Running shell commands from Cloud Run
Cloud Run has an example on how to run a shell command from an HTTP Server deployed to Cloud Run. It involves setting up a Go based HTTP server that simply calls a shell script upon receiving a GET request. You can take a look the details in invoke.go.
In our case, the shell script, script.sh simply calls dbt with the profile folder:
#!/bin/sh dbt run --profiles-dir .
Container image for dbt
dbt has some base images that you can rely on (although the documentation is pretty much non-existent). In the container, we want to include the HTTP Server with the script.sh . We also want to include dbt runtime. This is a sample Dockerfile that works:
FROM golang:1.13 as builder
WORKDIR /app
COPY invoke.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -v -o server FROM fishtownanalytics/dbt:0.17.0
USER root
WORKDIR /dbt
COPY --from=builder /app/server ./
COPY script.sh ./
COPY dbt_project ./ ENTRYPOINT "./server"
Build the container using gcloud :
export SERVICE_NAME=dbt-service gcloud builds submit \
--tag gcr.io/$(gcloud config get-value project)/${SERVICE_NAME}
Authentication
By default, Cloud Run uses the Compute Engine default service account and that should be able to make BigQuery calls. However, it’s best practice to assign a more granular permission to your Cloud Run service by assigning a dedicated service account with more restricted IAM roles.
In our case, the Cloud Run service will only talk to BigQuery, so let’s create a service account with bigquery.admin role. You probably want to use even a finer grained role in production:
export SERVICE_ACCOUNT=dbt-sa gcloud iam service-accounts create ${SERVICE_ACCOUNT} \
--display-name "DBT BigQuery Service Account" gcloud projects add-iam-policy-binding \
$(gcloud config get-value project) \
--member=serviceAccount:${SERVICE_ACCOUNT}@$(gcloud config get-value project).iam.gserviceaccount.com \
--role=roles/bigquery.admin
We will use this service account when we deploy the Cloud Run service.
Deploy Cloud Run service
Now that we have all the pieces assembled, deploy to Cloud Run with the service account created earlier and also no-allow-unauthenticated flag to make it a private service:
gcloud run deploy ${SERVICE_NAME} \
--image gcr.io/$(gcloud config get-value project)/${SERVICE_NAME} \
--service-account ${SERVICE_ACCOUNT}@$(gcloud config get-value project).iam.gserviceaccount.com \
--no-allow-unauthenticated
After a few seconds, you should see the service deployed and running:
You should also see that the service is private:
Scheduling
The final step is to call the Cloud Run service on a schedule. You can do this with Cloud Scheduler.
First, make sure the Cloud Scheduler API is enabled:
gcloud services enable cloudscheduler.googleapis.com
Create a service account for Cloud Scheduler with run.invoker role:
export SERVICE_ACCOUNT=dbt-scheduler-sa gcloud iam service-accounts create ${SERVICE_ACCOUNT} \
--display-name "DBT Scheduler Service Account"
gcloud run services add-iam-policy-binding ${SERVICE_NAME} \
--member=serviceAccount:${SERVICE_ACCOUNT}@$(gcloud config get-value project).iam.gserviceaccount.com \
--role=roles/run.invoker
Create a Cloud Scheduler job with the service account to call the Cloud Run service every 5 minutes:
export SERVICE_URL="$(gcloud run services list --platform managed --filter=${SERVICE_NAME} --format='value(URL)')" gcloud scheduler jobs create http ${SERVICE_NAME}-job --schedule "*/5 * * * *" \
--http-method=GET \
--uri=${SERVICE_URL} \
--oidc-service-account-email=${SERVICE_ACCOUNT}@$(gcloud config get-value project).iam.gserviceaccount.com \
--oidc-token-audience=${SERVICE_URL}
To test that the service gets called and the temporary BigQuery dataset gets created, you can manually trigger the job:
gcloud scheduler jobs run ${SERVICE_NAME}-job | https://medium.com/google-cloud/scheduled-serverless-dbt-bigquery-service-8cdca03ff238 | ['Mete Atamel'] | 2020-07-29 15:35:59.220000+00:00 | ['Data Science', 'Google Cloud Platform', 'Software Engineering', 'Bigquery', 'Serverless'] |
World Humanitarian Day: Veterans of WFP’s Ebola response talk coronavirus | Meet Modesta Tuturu and Joseph Makumbe. As part of the World Food Programme (WFP) Zimbabwe’s Supply Chain team they are no strangers to emergency response during a pandemic — such as the one pushing Zimbabwe to the brink. Modesta and Joseph were both on the frontlines in 2014. No strangers to quarantine or lockdown, they understand the ins and outs of social distancing and personal protective equipment (PPE). Below, they talk about their experiences of lockdown — then and now.
Tell me about where you were during the Ebola response
Joseph: I was deployed to Freetown, Sierra Leone in September 2014. I was responsible for monitoring stock levels and overseeing the movement of incoming and outgoing goods. When dealing with a pandemic, you always have to be ahead of the game.
Both Modesta and Joseph agreed that the most important part of their job is the last mile distribution. Seeing the food going home with smiling faces. WFP/Tatenda Macheka.
This is a lesson I learnt well in Sierra Leone. My role there was to make sure food and PPE were transported seamlessly, and WFP was the UN Agency taking the lead on this. I was making sure everything arrived in time, and that we were ahead of Ebola.
Modesta: I was deployed to Vonjama in Lofa County [the most affected city in Liberia] in October 2014. I was responsible for the last mile of distribution, for setting up logistics hubs. When I arrived in Liberia there was nothing — and as a woman I had to demonstrate that I could lead. We didn’t have much time when we were there, so we needed to be as fast as we could. Speed and efficiency are key. There is no room for error when dealing with such pandemics.
Modesta says balancing family responsibilities against humanitarian work is tricky. Photo: WFP/Tatenda Macheka
Were you prepared to be at the frontlines fighting Ebola?
Modesta: I had to leave my family, including my two-year-old baby, at home to venture into a male-dominated field. I was emotionally torn between my family and my career, but the passion for humanitarian work kept calling me. Some people didn’t want me to go, and even tried to discourage me from going. But for me, it was a once-in-a-lifetime opportunity that I couldn’t pass up.
Joseph: I realised that it’s real when I arrived at Lungi International Airport. I was hit hard by culture shock when I was thrown in to work while trying to adjust to the new food and climate. I remember watching the news on Ebola and being afraid to go to Sierra Leone, but, thanks to the support and training I got from WFP, I felt prepared.
What sticks in your mind as a lasting memory of that time?
Modesta: There was a time when I had a high temperature and I was asked to visit the clinic. I thought I was sick and everyone was afraid. It turned out to just be tonsillitis! | https://medium.com/world-food-programme-insight/the-passion-for-humanitarian-work-kept-calling-me-515bc0bf31e1 | ['Tatenda Rodney Macheka'] | 2020-08-19 08:39:47.812000+00:00 | ['Humanitarian', 'Zimbabwe', 'Ebola', 'Coronavirus', 'Covid 19'] |
Editor’s Picks — Top 10: Writer’s Block and How to Use Its Power to Be Prolific | Here is a list of our top 10 writers who managed to calm their subconscious today:
10. Why There Is No Right Way To Use Credit Cards
Shubham Pathania is a coder turned writer. Today he is trying to help us to seek financial freedom.
Many finance gurus already preach to avoid credit cards. You might have heard this bazillion times on the internet. Have you ever thought about what could be the possible reason that so many people advice to stay away from these cards? Let me first tell you it has nothing to do with your spending habits. People often say that if you plan a budget and stick to it, then credit cards are beneficial. In fact, many financially literate people regularly use it.
9. The Weird Egg-Laying Mammal that Glows Under Blacklight
Simon Spichak is a neuroscientist and science communicator. First I read this story and then I went on a half-hour YouTube research to know more about this weird reptile+bird+mammal creature. Don’t miss this one.
Have you ever felt something was too strange to be real? Over 200 years ago, biologist George Shaw received a specimen from New South Wales that left him perplexed. He even tried to snip off some fur around the beak, expecting to see the evidence of stitches and fraud. Despite how strange this specimen was, he did not find any evidence it was a fraud. Describing it in a scientific journal, he wrote: Of all the Mammalia yet known it seems the most extraordinary in its conformation; exhibiting the perfect resemblance of the beak of a Duck engrafted on the head of a quadruped.
8. You Don’t Know What You Don’t Know
Akarsh Nalawade is all these things: Talkative. Intrepid reader. Easy-goer. Globetrotter. Quixotic. Gooner. Polemic. Opinionated. Tea Drinker. Nerd.
He is an excellent writer as well. His style is simple, direct, and engaging. I hope you like his article as well as the book he is talking about. Accept diversity. Don’t miss it.
“She told me, shaking her head, how painful it was to see the company hire all these great college kids — all sorts of backgrounds; all sorts of ideas brimming in their heads — only to watch them gradually remoulded to ‘fit’ the culture of the organisation. They came with unique insights and voices. She heard those voices fade, unless it was to echo the company’s ‘accepted’ way of thinking.” ~Matthew Syed, Rebel Ideas As brave as it’s brainy and as holistic as it’s honest, this book gives the reader a new lens to view the world. Incorporating insight from psychology, economics, nutrition, academia and biology Syed makes a compelling case for more diversity. He forces us to question why diversity in business, culture and society is a feature, not a bug. How hiring an ethnic employee in your multinational company is not “diversity” but fostering a culture of constructive dissent, is. Why intelligent individuals come together to form stupid groups that make witless decisions. Why conforming to your peers at work is likely costing the company millions in unrealised revenue. Why being social leads to 100x more innovation than being smart ever will.
7. A Japanese Mother Hires a Father for Her Daughter
Zul Bal writes to collect, capture, and curate ordinary beautiful ideas. She is an exceptional writer. Her charming style is conversational and highly engaging.
It is stories like these where I use these words: Well-written stories take less time to read.
This 4-minute read seems like a 1-minute read. Don’t miss it. And don’t forget to see her other work.
Growing up without a father was difficult enough for Manna. Being bullied for not having one made life more unbearable for her. At the age of ten, she became withdrawn and didn’t even want to talk with her mother about her troubles with school bullies. Her behaviour worried her mother, Hiromi. By talking to her daughter’s teachers, she found out that Manna was ostracized by her friends for not having a father and had no one to play with. Soon, she refused to go to school.
6. How Not To Deal With Writer’s Block
Tosin Sanusi loves to write about chaos in our lives. She is facing the block I was talking about in the intro.
Her style is honest, direct, and engaging. If you want to see unpretentious writing, her writing would please you. Do check her other work.
Reading and writing on this platform along with thousands of talented creatives, I’ve learned that the majority of us have to work hard to maintain our inspiration. Each day brings unique challenges that can block creativity but when the career you’re pursuing relies on your ability to pump out quality content, you just have to find a way. I will not pretend I have a formulaic cure for writer’s block. I believe I’m still suffering from a mild case of the affliction right now. Since my lack of inspiration is all I can think about at the moment, I thought I’d write about it and perhaps help you dodge the traps I’ve fallen into. Basically, all you have to do is to avoid all the things I’ve done this December and you’ll probably be just fine.
5. How Much Is One Year of Living Worth to You?
Oliver Brunchmann is working tirelessly at increasing his odds of a happy life and to help others reach their potential.
He is an excellent writer. His style is inquisitive, direct, and engaging. The headline is perfect — it captures your attention without reading a single word. If you started reading this story, you would probably not click away without finishing it.
Don’t forget to check his other work.
I always wanted to write but never had the time for it! About one year ago I did a critical examination of my life. I didn’t think I was moving forward. I did a horrifying analysis of my time and why I didn’t have enough time to fulfill my dreams. Life is limited. You will approximately have 80 years to live, some of them are already behind you. For the sake of this article. I am close to 40 so living an average life, I am halfway through. But enough about me, let’s assume you have 60 years left to live. How are you spending those?
4. Tips on Saving Money From a Former Shopaholic
Liberty Ann is a brilliant writer. Her style is simple, direct, and highly engaging. She is sharing her personal experiences here to help you save some money.
Not so long ago I was a textbook shopaholic (as I’ve been told by friends and family). I’d feel the incredible rush of instant gratification and there was nothing else like it in the world for me. The bad part was afterwards I’d soon realize I didn’t even need it, or want it that much anymore. Sometimes it would take less than a day — or just hours — after purchasing for me to feel the comedown that’s known as buyer’s remorse.
3. Are Your Meetings a Meaningless Gathering of People?
Paul Myers MBA is a top writer in Business, Leadership, Entrepreneurship, Startups, and Innovation. His style is simple, direct, and engaging.
If you are interested in his writing topics, follow him immediately. His advice would help you in your business and life. Don’t forget to check his other work.
Few meetings are dialogues. By that, I mean that they do not invite openness or untethered contributions from workers on the front-line. In my 20 years of experience, most meetings are ill-structured. They fail to capture individual opinion. Nor are they a forum to collate collective insights. Too many meetings follow the same form regardless of function or output. Today we have presentations, Zoom, flip charts, white-walls, coloured pens, post-its, cookies, coffee, tea, and buffet prawn sandwiches. None of which substitute real progress, the true output of a collective opinion.
2. 9 Small Gestures That Can Make You A Joy To Live With
Erin King is a writer and a musician. She is an outstanding writer. If you are not already following her, it is your opportunity to keep track of her future stories.
Many of her stories have reached thousands of views and earned her a reputation for writing well. Do check her other work.
The biggest culprit for domestic conflict is housework and some people genuinely don’t understand what it means to be a good housemate. Whether you’re a roommate or a romantic partner, living with someone else requires a certain amount of consideration in order for it to work. If you want to be easier to live with, why not try some of these little gestures because when you’re living together, it’s the little things that make the most impact.
1. Lead With Kindness
At number one, it is Randy Wolken. He writes to educate and inspire. He is the President & CEO of MACNY — The Manufacturers Association with over 300 company members in New York State.
He is also an outstanding writer. His writing style is pleasing, direct, and extraordinarily engaging. If you read this story, you would definitely like to check his other work as well. | https://medium.com/illumination-curated/editors-picks-top-10-writer-s-block-and-how-to-use-its-power-to-be-prolific-25629ea824a0 | ['Dew Langrial'] | 2020-12-20 23:49:42.313000+00:00 | ['Writing Tips', 'Readinglist', 'Self Improvement', 'Reading', 'Writing'] |
Top 3 Best Medium Stories I Have Read | Top 3 Best Medium Stories I Have Read
One of these is the best short fiction I have read anywhere, and the others are two of the 3 best Medium stories I have read.
Semper Opera House Image by S Scholz from Pixabay
What Is Top 3?
Top 3 is a publication where Medium writers support other Medium writers by promoting each other’s work. Medium members are encouraged to post three stories from other writers that they enjoyed reading.
If you want to join, please read the Write for Us Guidelines.
Before You Start
I usually read self-improvement, How-Tos, and writing essays here on Medium. Though I am not choosy about the topic if it is written by one of my favorites. And typically, I don’t read stories more than four or five minutes long.
One of these is the best short fiction I have read anywhere, and the others are two of the three best Medium stories I have read.
Before you begin this journey of three, you should probably get a box of tissue. You just might need it.
Risen
Luke Beling writes about Doris, a housemaid in South Africa, and, more importantly, a boy’s friend.
He tells about the fall and how one rises above it:
“There was an escape in these: an escape from what she knew as the fall. Since before she could walk, her father began preaching to her about it: ‘We used to be rulers in this land, Doris. Then the white man came and with him, the fall of black South Africa’.”
Luke weaves the tale so well, and finally lowers a boom that you might not have expected. You’re going to love it, I promise. Once again, keep that tissue handy.
What I Learned Loving My Husband’s Best Friend
Janie Emaus wrote about her husband’s cancer and then her husband’s best friend’s cancer. This is a very moving story that will tug at your heartstrings.
Her story starts, “A few years ago, I lived with two men, my husband, and his best friend. These guys were closer than any two people I’ve ever known, sharing everything in their lives from bourbon to bluegrass.”
This story will give you a glimpse into the “unconditional love” that is seldom found between three friends. It’s a short, short story, which is good because you might have to get another box of tissue were it any longer.
Broken Arrow
And now, the Pièce de résistance!
Dean Middleburgh wrote this story of a hitman who never missed. It is not unlike a Greek epic such as Iliad, an Indian epic such as Ramayana, an epic poem, such as Beowulf or Gilgamesh, or a tragic play like Romeo and Juliet by William Shakespeare. I shit you not! As I read this, I felt many emotions and could see it playing out in my mind as if I were in an opera house.
Hence, the feature image of Semper Opera House, the only opera I ever saw, Freischütz‘ by Carl Maria von Weber in the summer of 1985. I didn’t understand a word.
Sorry for the interruption, back to the story.
I looked at the 15-minute reading time estimated by Medium and thought, “There’s no way in hell I am going to finish this.” But the first two lines hooked me in, and I was off to the races.
Just try to pass up this stirring start:
“The thought of retirement has always terrified me. The idea that one day I would be old, withered, and useless, left on the shelf to gather dust and slowly rot, was an outcome that happened to someone else, not me.”
With just a few more words of introduction, he tells us of the profession from which the protagonist will resign, “The name of my target handed to me on a piece of ivory paper folded in four. With my goal in hand, I would stalk my prey, waiting for that precise moment before landing the fateful blow. Call me old-fashioned, but I never really enjoyed using a gun — it felt too easy, ugly and clumsy.”
I’ll leave the rest up to the worthy bard, for his words are much more delectable than mine. I’m only here to introduce this piece and let you know, it is IMHO, a must-read! You’ll likely find no better method to spend 15 minutes on Medium or anywhere else. Well, with the possible exception of coitus.
My Top Three About Books
Subscribe to My Newsletter
About Me
Stephen Dalton is a retired US Army First Sergeant with a degree in journalism from the University of Maryland and a Certified US English Chicago Manual of Style Editor. He is a freelance journalist currently living in the Philippines.
You can see his portfolio here. Email [email protected]
Website | Facebook | Twitter | Instagram | Reddit | https://medium.com/top-3/top-3-best-medium-stories-i-have-read-c0a6c5e4587 | ['Stephen Dalton'] | 2020-03-27 07:02:31.754000+00:00 | ['Reading', 'Fiction', 'Short Story', 'Top 3', 'Writing'] |
What do you want to read? | Photo by Roman Kraft on Unsplash
What do you want to read?
We have been bombarded with thousands of pieces of information all the time, from the special diet to the crisis of existence of the film actor. Cake recipes, tutorials on how to care for plants or raise pets.
Internet, television, radio, books, magazines, egg cart and billboard. Selling images, ideas, ideals, goals, commitments, real commitments in relation to supply and demand. I suggest that we return our thinking to the ends, what would be the purpose of buying a new car before the old one falls into disuse? New clothes before the ones we have are torn? Make new friends without knowing old friends in depth? Having sex with several people without having experienced odd intimacy with at least one before? To seek God without being surprised by an act of humanity? Looking for eternity after life without having heard or spoken something that echoes and answers for eternity itself?
I have become a double of my past, just as many others have become, reflections of a human capacity so complex, so powerful, so we are, memory itself, a character in a dead text, immutable, indigestible, but totally assimilable. Assimilable to the point of becoming ghosts of ghosts. For those who did not understand, I will become clearer, we are reflections of our past which is also a reflection, I admire myself and at the same time I find it an intoxicating grace, when people say they are meeting, when they say they are looking for the real “I” , I have the decency and duty to write to you that life itself is a lie and that if there is a real self, for the maintenance of the common good it must remain inaccessible and silent.
I confess that I am extremely tired and distressed to read, watch and listen to what I expect and to write what is expected of me, my next text will be a meditation guide to find balance, or a tutorial with investment tips, perhaps giving tips about tantric sex, tell me what you think you want to read. | https://medium.com/never-fear/what-do-you-want-to-read-7f4a59d965d9 | ['William Pardo'] | 2020-09-15 10:54:57.107000+00:00 | ['Philosophy', 'News', 'Friendship', 'Love', 'Psychology'] |
Bill Gates, Voodoo Dolls, Conspiracies and Covid-19 Vaccines | The dangers of predicting the future
Why has Microsoft’s founder, one of the wealthiest people on the planet and a huge benefactor of the global health industry suddenly achieved pariah status, almost overnight?
It all stems back to a Ted Talk Mr. Gates gave in 2015. He made a prediction, one that proved more than a little prescient and a prediction which has now come back to haunt him in a way he couldn’t possibly have envisaged.
During the Ted Talk, he discusses the greatest threat to mankind over the coming decades as being a viral threat, likely to kill millions and one that we needed to prepare for. In his own words;
“If anything kills over 10 million people over the next few decades, it is likely to be a highly infectious virus rather than war,”
His statement drew only lukewarm attention from the global media at the time. Many saw it simply as a scaremongering and the talk was mostly forgotten.
Then Covid arrived.
If you haven’t watched the Ted Talk it’s worth a few minutes of your time. 64 million people who flocked to the video after the outbreak would seem to agree. Of course, as people are inclined to do, rather than credit Gates with foresight, the conspiracy crowd got stuck in.
By stuck in, I really do mean stuck in.
According to a recent BBC article, theories falsely linking Bill Gates to the coronavirus were mentioned 1.2 million times on television or social media between February and April. This was according to a study by The New York Times and Zignal Labs and I’m pretty sure they missed a few.
Gates has been at odds with Trump on the Covid virus and has become a high profile target of right-wing misinformation.
Little surprise considering his vocal criticism of Trumps handling of the pandemic.
“There’s no question the United States missed the opportunity to get ahead of the novel coronavirus,” he wrote in an opinion column in The Washington Post on March 31. “The choices we and our leaders make now will have an enormous impact on how soon case numbers start to go down, how long the economy remains shut down and how many Americans will have to bury a loved one because of Covid-19.”
The misinformation surrounding him includes more than 16,000 posts on Facebook this year about Mr. Gates and the virus that were liked and commented on nearly 900,000 times, according to a New York Times analysis.
On YouTube, the 10 most popular videos spreading lies about Mr. Gates posted in March and April were viewed almost five million times.
TikTok and Facebook have primarily been the platforms used to spread the conspiracies and wild theories. Twitter has also helped out, but Facebook has played the largest role.
What are the claims?
Screenshot by author
Here are a few of the examples of some of the wild claims posted on Facebook groups and shared millions of times.
Claims that the Bill and Melinda Gates Foundation has tested vaccines on children in Africa and India, leading to thousands of deaths and irreversible injuries. One post even suggested he is facing trial in India.
He is accused of rolling out a tetanus vaccine in Kenya that includes abortion drugs
A video on the website of The New American Magazine’s Facebook page continues with the theme of mass depopulation via vaccines and abortion, and also links Mr. Gates to China’s Communist Party. It was shared 6,500 times and viewed 200,000 times.
A video accusing Mr. Gates of wanting to microchip people has garnered nearly two million views on YouTube.
Screenshot by Author
More than a quarter of all Americans and 44% of Republicans believe that Bill Gates wants to use a Covid-19 vaccine to implant microchips under people’s skin, according to a survey from Yahoo News and YouGov. | https://medium.com/illumination/bill-gates-voodoo-dolls-conspiracies-and-covid-19-vaccines-b23f0df88174 | ['Robert Turner'] | 2020-06-07 04:16:44.325000+00:00 | ['Conspiracy Theories', 'Bill Gates', 'Social Media', 'Fake News', 'Coronavirus'] |
Crypto Tales: Success story of Vikash | Hey everyone,
I am Vikash and you’re reading my crypto tale. I have been into cryptocurrency trading for about 4 years now and I think I have built quite a stable portfolio that is able to withstand even the harshest of market crashes. And that has become possible due to the liquid nature of my trading strategy. Most of the traders usually lose money while trading along daily trends, but that is where I make profit. I came to know about Bitcoin in 2012 and I did not invest back then because I thought it is a scam.
However, things took a different turn for me when I purchased Bitcoin in 2015 through Zebpay, which was an Indian cryptocurrency exchange that recently shut down. Back then the price of BTC was Rs. 2,00,000 and I invested around Rs. 8,000, which quickly sore to about Rs. 36,000 when the price of BTC crossed Rs. 12,00,000. Greed got me, and I did not sell my BTC at that time; instead looked for ways to make even more profit.
I think this second phase was where I went wrong. I joined some social channels where people would predict the market for different cryptocurrencies and sometimes, I’d be surprised to see their predictions were right, due to which I made some more profit in the market. But then I joined the pump and dump groups, incurred some losses due to the false wisdom circulated on the channel, and finally figured it’s not the right way to trade. Now on a quest for some genuine ways of trading, I learned about technical and fundamental analysis, which are two different ways of predicting the market. YouTube was my place to go for learning these two prediction techniques. Even though the profits that I made after learning these techniques were not consistent due to improper money management on my side. But still, I had learned the concepts of trading and I just needed a proper trading platform with sufficient trading tools and a fair market.
In the beginning of 2018, I got to know about Bitbns, which I found suitable for arbitrage with many more amazing features. I never stopped making decent profit after that. | https://medium.com/bitbns/crypto-tales-success-story-of-vikash-72d2fba1a692 | ['Vaibhav Seth'] | 2019-09-04 14:43:24.683000+00:00 | ['Storytelling', 'Cryptocurrency', 'Bitcoin', 'Bitbns'] |
How Convenience and Comfort Caused the Downfall of Personal Responsibility | Plastic: The material we know is bad but continue to abuse
It’s the year 2018, and everyone knows that plastic is bad for the environment. The images we see on our screens of overfilling landfills, polluted ecosystems and strangled animals has made this much abundantly clear.
A universally recognised and pressing issue, yet an issue that nonetheless remains unresolved.
If you showed our hunter-gatherer ancestors a plastic bottle, they would surely have lost their minds. A cheaply produced, light weight, malleable, and non-degradable material, plastic plays a critical role within all industries and facets of modern life.
But the utility of plastic is intrinsically tied its inefficacy, and we have allowed it to pervade the very fabric of our existence.
I can say with 100% certainty that you are staring at some plastic right now. It coats the device on which you read this article. It sits on your skin within the clothes that you wear. No doubt, for many of us it was the very first thing that we touched when synthetic latex wrapped hands ushered us into a hospital operating room.
It’s widely recognised that the first step in reducing human output of plastic is to reduce production. This is an effort that we can each greatly contribute to and one that would have little to no negative impact on the way we live our lives.
Take the plastic bag phenomenon. If we each committed to changing our behaviour, to simply keep our own shopping bags in the car, by the door to take along with us, or god forbid — to carry items in our arms — the issue would be resolved in a day.
And yet we not only refuse to do so, but complain bitterly when forced to reflect on the implications of our actions.
The individual has come to rely on macro level processes and systems to dictate behaviour and guide their actions. Thankfully, in many places the gears of policy are finally starting to turn and with painstaking slowness the plastic bag is being phased out around the globe.
But the issue cannot and must not be solely appraised as the responsibility of corporations or governments.
Macro level policy must be buttressed by micro level responsibility.
Personal Responsibility in the 21st Century
The plastic bag phenomenon is reflective of a broader, more contagious, and inherently erroneous way of thinking. One that pervades the collective psyche of human society in the 21st century.
We make things too convenient for people and it destroys their sense of personal responsibility. It encourages unconscious behaviour and steeps people in inflexible modalities of thought.
We sit and wait idle for the next technological invention that takes responsibility away from us, that moves the spotlight elsewhere, that lets us off the hook.
The newly found bacteria that consumes plastic to survive, a machine that captures the continents of waste that now float in the ocean, or the one whose input is our garbage and output new useable products.
The advance of science and the myriad benefits that have resulted since have served to obscure the duty that we owe to ourselves, each other and the planet on which we reside.
Convenience and comfort have caused the down fall of personal responsibility.
It’s a hard truth, one that’s painful to swallow because it forces us to critique our own behaviour and practices.
But it’s better than the lie that most of us subscribe to.
That my actions, that of one person out of seven billion others, make no difference.
But they make all the difference.
The world starts with you.
If everyone recognised this truth and took upon themselves the commitment to uphold a degree of civic responsibility, we as a collective would be more empowered and capable of dealing with problems that affect us all.
Comfort and convenience seduce and sedate. The lure of comfort and an easy way of going about life enabled by technology now arguably causes more problems than it solves.
Our species underwent most of its evolution in the form of hunter-gatherer. As a result, we have coded into our biological hardware an intrinsic need for struggle, dealing with discomfort and overcoming adversity.
We’re good at it too.
The fact that we’ve come as far as we have is testimony to this fact. But we’re now trapped in a cycle of attrition, one of our own making.
Our desire for improvement and progress has led many to ignore our natural inclinations, and push responsibility onto anyone but ourselves.
In an era where information is not only semi-infinite and free but also at our very fingertips, how can we not reflect on and strive to change our actions when we know they do harm?
It’s not because we’re evil people. It’s because we’ve become disillusioned by comfort and convenience and coaxed into believing that personal responsibility is inconsequential.
But it’s not — and we ought to be doing better.
Alex Goik is a Media Analyst. He commonly writes for Mogul News and at Foreign Affairs Navigatorwhere he strives to offer fresh perspectives on foreign affairs, tech and China (coupled with the odd analysis of human nature).
Disclaimer: The views expressed in this text belong solely to the author and not those of their employer. | https://medium.com/the-ascent/how-convenience-and-comfort-caused-the-downfall-of-personal-responsibility-feebbd6dabed | ['Alex Goik'] | 2018-12-19 23:54:06.192000+00:00 | ['Environment', 'Responsibility', 'The Ascent', 'Life', 'Earth'] |
Talking data user demographics | Problem:
In a nutshell, User mobile usage data is provided using which we have to predict the age and gender of the user. Please check the Kaggle site for the complete problem statement.
So, the Final Prediction should be the group which comprises of gender and age range.
eg: M32–38,F24–26 etc.
Data:
Let's focus on “mobile usage data” here.
As shown in the above diagram,7 files are provided where gender_age is the train data and the other 5 files have the data of train and test devices. The 7th file is similar to gender_age, containing only device ids and skipping the rest.
We have device_id as our primary key indicating the device, which is linked with events data. app_events, labels, and label categories are linked through event_id,app_id, and label_id.
Analysis:
Key observations I had during analysis were:
There are 12 groups, where male users are more than female users.
Male age range -> 22–39+
Female age range -> 23–43+
Age-wise we have more data in females and count wise male data is more.
25 and 75 Percentile Age values for Both Male and Female are similar
MAD is the same for both Male and female
Even though the count of males is higher than females, the distribution of data in both is similar.
The majority (68.77%) of Train data has events
The majority (68.80%) of Test data has events
Data is from 30th April 11.52 PM to 8th May 12 AM 2016
M32–38 has a significant number of people who spent the phone between 11 PM and 3 AM.
And comparatively, Males use the phone at night more than women.
Assuming more the count of events means more the user is using the device. But in our dataset, we can say that there is no relationship between the age and gender of users and the amount of time the user is using the device.
In each event, the user has used multiple apps, and these apps belong to multiple categories. And among those top used categories are:
Here all registered apps are installed but only 39.21% are actively used(Train).
Duplicates are present in phone data which need to be removed
The top 3 brands consist of 58.78% of phones and hence they are dominating the mobile industry.
Features:
After Analysis and a few references, I came up with the below features.
mode/median of longitude(of all events and timestamps).
mode/median of latitude(of all events and timestamps).
TFIDF approach of apps used(active) by that particular device.
BOW approach of labels of apps used(installed) by that particular device.
BOW approach of phone brand(one-hot encoding, since these consist of different languages).
BOW approach of phone model(one-hot encoding, since these consist of different languages).
TFIDF weekday the event has occurred.
TFIDF Hour of the Event.
BOW Hour bin of the Event.
The ratio of active apps and installed apps.
Clustering of location into 10 clusters using latitude and longitude.
Prediction:
Since 32% of the data doesn’t have events data I have divided the prediction into 2 parts:
With Events
Without Events
We will have individual prediction models for each dataset and finally, concatenate the results.
I have tried Logistic regression, XGBoost, and Neural networks, so the base of the model is an ensembling of the methods I tried. What I have done is, after obtaining the predictions of each algorithm, weightage has been given to results based on their CV_logloss(cross-validation).
Again, here since we have variance in Neural Networks i.e. they have different predictions every time ( even with the same parameters), I have implemented ensembling for each Neural network I had. It's more like taking an average of all the results obtained by the network.
In this way, I have trained 2 NNs for each dataset 10 times and have taken the average of the results, and then also applied Logistic Regression and XGBoost.You have to find the right combination of features you want to apply to get the best results.
And then finally concatenate the end results of both the datasets.
In my final model, I have used only the neural networks for my prediction and was able to reach the top 10% rank in the leaderboard, with a Private Score of 2.24054 and a public score of 2.23523.
Further Improvements:
I have seen many other participants get better scores by giving very low weightage to the other algorithms like 0.1 for LR,lightGBM, etc. You can try that too, In the end, it's all about the right features with the right set of methods to get the best results.
Please check my code on GitHub for the detailed implementation, and let me know if you have a better result or a different approach. We can connect through LinkedIn.
References: | https://akhil-dasari.medium.com/talking-data-user-demographics-f97dea332bde | ['Akhil Dasari'] | 2020-11-11 04:16:40.862000+00:00 | ['Machine Learning', 'Kaggle Competition', 'Python', 'Neural Networks', 'Solutions'] |
The Rhetorical Minefield of Cyberpunk 2077 | With Cyberpunk 2077 set to release in a few days, the busy cogs of discourse have already begun turning at full speed — between reports of crunch, the game's controversial marketing, and the massive build-up of hype since its debut trailer seven years ago, the game's launch is already shaping up to be a moment of satisfaction to no one. Delays under the pretense of further refinement have been so numerous that expectations can only be fallen short of — if the game misses its lofty commercial targets, CD Projekt investors will make sure there is hell to pay; and whether journalists or players bear the brunt of the game's potential woes is up to the future months (and potentially years) of its lifecycle to decide.
The seeds for all-out warfare over the critical reception of this game were planted a long time ago — Gamergate made it so that any journalist going against the grain is bound to have the "ethics in games journalism" card leveraged against them until something gives, and Cyberpunk 2077 is no exception here. The Last of Us Part II earlier this year elicited a similar reaction, where fans felt so at odds with Neil Druckmann's treatment of beloved characters in the story (especially in light of leaks prior to release) that praise could only be conceived as an act of collusion with Naughty Dog — CDPR's latest RPG is likely to swing things in the other direction, imparting upon any criticism of it a malevolence that could only be described as purely conspiratorial.
But to claim otherwise wouldn't be entirely incorrect — a long-standing issue with games media is its tendency to flatten nuanced narratives and cast itself as the hero of whatever story it has fallen on the bad side of, poised to make the game's critical reception even harder to assess. If fans attack journalists for unwarranted bad-faith criticism — which there may very well be — they'll immediately be deemed too emotionally invested, their concerns hardly considered; conversely, players who hold a less charitable view of CDPR will be quick to dub praise of the game a sign of corporate patronage, an accusation further complicated by the studio's relationship with crunch culture. The situation is ripe for exploitation by malicious parties on all sides, and it's not so clear what the solution would be in the face of such great volatility.
Gaming outlet Polygon has already staked its flag in the ground and is bracing for what might be its worst nightmare to date — contributor Stacey Henley expressed concerns about the game's marketing and how it could potentially be used as leverage against criticism if Cyberpunk 2077 doesn't quite deliver. The flipside of that conversation, however, is that games media isn't without fault either — it has given print space to controversies that could've been ignored entirely, which would have denied the game's not-so-stellar marketing any oxygen, but that decision was not made, and so CDPR got to play the "our game is hated by journalists but it will succeed regardless" card to great effect.
In an era where tensions between players and professional games journalists seem to be at an eternal simmer, it may be helpful to treat these inflection points as an opportunity to re-establish trust rather than rekindle age-old conflicts — it’s understandable that both feel like the other party is solely responsible for turning discourse into highly-flammable rhetorical poison, but there hasn’t been a moment more opportune to mutually agree that honesty is of the utmost essence, especially when either party is so frequently eager to accuse the other of dishonesty. It may sound ridiculous to propose a moment of diplomacy when it looks like conflict is inevitable, but a disaster of Gamergate’s amplitude isn’t in urgent need of reproduction — there ought to be a better outcome than making Cyberpunk 2077 yet another item of the never-ending culture war.
The media is a for-profit business, so it makes sense that games journalism would indulge a bit of controversy even if it's losing legitimacy in the process — to question the very nature of newsworthiness is an alien concept to media, but perhaps it isn't too soon for games media to ponder why it is so often seemingly the mechanism for generating controversy, while at the same time complaining that it exists at all. The trade's high-and-mighty nature necessitates that writers would never claim such a thing, but to deny that media itself has been complicit in perpetuating that which it cautions against is wishful thinking at best — there has to be accountability, and the cycle of highly-anticipated releases causing all sorts of unfathomable mayhem has to eventually be broken.
As I’m writing this piece, the review embargo has just been lifted, and the consensus seems to so far be that the game has done a competent-enough job of achieving what it set out to do, but not without a myriad of technical issues to spare — that’s hardly the slam dunk CDPR was looking for, but it’ll suffice for most as a healthy diversion from our current hellish plague-ridden reality. The game’s release would have been trivial under any other circumstance, but as demand for home entertainment is at an all-time high, Cyberpunk 2077 will undoubtedly deliver for that experience — it may be less so for journalists who’ve gotten used to an abundance of games available at their fingertips, but for your average player, the game is an adequate addition to their library.
Only time will tell if the game can manage to stay relevant after its release as it did prior, but given that it's largely uncontested in its category, it's likely to remain the talk of the town throughout next year. With a slate of post-launch content yet to be revealed as well as an online mode on the horizon, the tale of Cyberpunk 2077 has yet to be recited in its entirety — the question is whether the game can successfully maintain a cult following like The Witcher 3 did before it, or slide into oblivion like every other blockbuster release eventually does. The latter might mean that the game's cultural presence would've been uneventful enough that it quickly died down, but considering past entries in the 'fierce discourse' saga, that might not be such a bad thing after all.
Why I Worry About Venture-Backed Mental Health & Addiction Startups | Why I Worry About Venture-Backed Mental Health & Addiction Startups
And My Ask Of Investors In These Companies
It’s frustrating if you’re a customer of an expense report SaaS startup and the company goes out of business, but it’s potentially devastating if your tele-therapist or addiction counselor suddenly disappears because the platform that employed them ran out of money. This is my most significant concern about the wave of mental wellness startups being funded with venture dollars — what happens to the clients of the ones which fail?
Photo by Matthew Waring on Unsplash
Traditional venture capital models lean into what’s called ‘power laws.’ Basically the idea that you are backing risky new ventures, many of which will stumble along the way, but one or two of the companies you back will be such outsized successes that the investment gains from those will more than offset the others.
Venture capital is a great instrument for high growth companies, or those who are very early in their development but intend to pursue a high growth strategy. If a normal small business must optimize for unit economics and profitability early in its lifecycle, a venture-backed business seeks product-market fit in a big industry and then trades near-term profit-taking for long-term marketshare, with the idea that profits can be extracted later. I'll pause for a moment now to emphasize that I don't believe there's anything fundamentally wrong with this tradeoff, which shouldn't surprise you since I am a venture capitalist. If you're reading this post because you think capitalism is a fundamentally broken system or that venture itself is evil, I'm sorry to share that I don't agree. But I will absolutely acknowledge that companies which take any outside capital implicitly and explicitly incorporate the needs and expectations of that capital into their business planning. And for venture-backed startups this tends to be "get them customers."
Which leads us to the fundamental difference between, say, a small self-funded online therapy practice and one that has taken millions of dollars in seed capital: the latter can acquire a larger number of patients much faster, using investment dollars both for customer acquisition and to subsidize the economics of serving those clients. That's what always gives me a little bit of pause in this particular area — the scale running ahead of the sustainability.
This post is an open question, not a conclusion, because there are plenty of startups which are trying to grow this market using technology and new approaches. Their success will mean that many more people can access mental wellness and addiction services than were potentially able to do so before. And hopefully the efficacy of these programs is even higher when software can be used to support provider matching, behavioral nudges and other extensions to what counselors themselves can do with patients. If we don’t have mission-driven entrepreneurs believing there are opportunities to dramatically improve the service and outcomes in these areas then we tragically don’t move forward. And if 2020 taught us anything it’s how important mental health is to our lives and how many more people who suffer from loneliness, depression, anxiety could benefit from proactively engaging around their health beyond pharmaceuticals.
So when a founder pitches me a business (and please do! [email protected]) in this market I’m simultaneously excited and conflicted. This is personal for me. Since 2011 I’ve been in therapy and seen great benefits in my life. I want others to have similar access ongoing or as needed and know that it’s difficult for many because of economics, time and access limitations. Startups can help fix these problems and we’ve seen a number who are solving infrastructure problems for therapists and clients (aka picks and shovels).
Whether you’re the platform providing the therapy or the software powering the therapist, entrepreneurs in this area should have their own version of the Hippocratic Oath. What I’d ask the investors in these companies is that they share the same values. Push for responsible growth and make sure patients are well-served. Realize that when you look at stats that involve quality of customer interactions, drug prescriptions, etc you’re talking about real people, not just percentages. And perhaps most essential, have a plan for what happens if the company doesn’t succeed. What does client offboarding look like, how long would it take and how much would it cost? The answer might be that in a failure-case you don’t use the remaining capital for one last growth hack but instead have a responsibility to get patients to a new provider. We, as investors, have to be very careful about unknowing exposing vulnerable populations to venture-risk.
Update: Coincidentally The Atlantic had an article out today about troubles at one such startup.
Notes and More
I’ve historically not been a “New Year’s Resolutions” type of guy, but have found a certain mindfulness in at least taking stock of what’s working well for me and what’s less useful. And then deciding whether it’s the recognition of these that matters or there’s work to be done to change my perspectives and outcomes.
📦 Things I’m Enjoying
Miley Cyrus’ new album Plastic Hearts, Haus’ sampler pack of delicious, low ABV spirits, and OMG these ImmuneSchein Ginger Elixirs are so good — you just mix in some hot water and yum.
🏗 Highlighted Homebrew Portfolio Jobs
Arthur.ai is a software startup making it easier for companies to manage and monitor their AI/ML models. This includes not just observability and explainability but fairness. A great, inclusive culture and team plus a brand new $15m Series A round means it could be your next job. They’re hiring lots of folks across engineering, product, design, marketing, sales and such! | https://hunterwalk.medium.com/why-i-worry-about-venture-backed-mental-health-addiction-startups-cb57ec146536 | ['Hunter Walk'] | 2020-12-29 04:20:02.057000+00:00 | ['Technology', 'Tech', 'Addiction', 'Therapy', 'Startup'] |
Searching For The Centre | Searching For The Centre
Skirting the edges, most of us blindly play the game, never finding the truth of the matter, if, indeed, there is one.
On the edge, just inside the edge, there is the mainstream media, culture, society, a glut of data. Fabricated, manipulated, molded, spun, practiced and polished. It’s where most of us get our information.
It’s the nature of surface level reality, but rather than thinking of this information distribution in a two-dimensional circle, consider it three-dimensional, with no centre and no edge. Particles, waves, wavicles in constant motion, combining, separating, mutating, dipping, diving, pulsating, expanding, contracting, growing.
Immensely complex and self-organised.
It’s not ours — we inherit it, then adopt and contribute to it.
We come into the world, Aristotle proposed, a tabula rasa. Then the world of things and people, rules and regulations, impose its will on us.
We must conform with it not merely to live successfully, but to become that which we call a human being.
So here we are, all grown up and at the mercy of the narrative.
So which one will you choose?
The more we learn and experience, and the more open we are to change and development, the closer we may move toward, or further from, the centre. It is the source of all information — the singularity. It is not just reading what someone else has reported and taking it as gospel.
No.
It is the product of true first hand experience. The application of some prior information and putting it to the test. It is to be the guinea pig in your own experiment.
And as we experience change, we get closer to the source of information; we have the opportunity to move closer to the centre.
Then maybe one day we get there and rather than finding some old bloke with a white beard, the idol, standing there smiling, we find we are alone.
There’s nobody there. | https://medium.com/the-reflectionist/searching-for-the-centre-9d644fd8a627 | ['Larry G. Maguire'] | 2020-11-12 01:10:43.356000+00:00 | ['Work', 'Philosophy', 'Psychology', 'Reality', 'Life'] |
Dealing With My Anxiety Because…Life | I have the day off work today and you’d think I’d be happy and content looking forward to my day.
I am not. I have an underlying sense of deep anxiety and trepidation.
I can feel it in my gut. It's a buzzing in my body and some nausea. It's a clenching in my throat. I'm all too familiar with these feelings. I recognize my own cues of anxiety, which warn me something isn't right in my life.
Usually I don’t initially know why I’m having an anxiety attack. I have to make an effort to figure it out, which I don’t like doing, even if it’s in my best interest.
If I want to know why I’m having anxiety, I need to look at what I’m avoiding.
If I want to relieve my anxiety, I do certain activities to take off the pressure; I write, I exercise. I strategize about moving on with my day because I still need to live even if I don’t feel right inside. I refuse to let my anxiety disorder take over and become debilitated from it. I’ve done that in the past and it’s never a good outcome.
So I will continue on with the tasks of my day. I’ll apply for that car loan and go car shopping. I’ll have dinner with my mother. I’ll get over my anxiety attack eventually and be proud that I still adulted regardless. | https://medium.com/the-virago/dealing-with-my-anxiety-because-life-a2a668145f62 | ['Michelle Jaqua'] | 2020-05-26 02:20:08.562000+00:00 | ['Self', 'Mental Health', 'Life Lessons', 'Women', 'Anxiety'] |
Why Do Men Think It’s Cute When Women Get Mad? | My anger is not your entertainment.
Photo by Joshua Newton
About a month ago, I went on a date with a man who tickled and touched me when I didn’t want him to. Rather than stopping, he demanded an explanation why he should. “I don’t like that” and “stop” were not sufficient reasons for him. This made me angry. My anger made him laugh. I have not reckoned with this for myself yet. I am trying.
It is not the first time my anger, discomfort, or frustration has made a man laugh or further provoke me. I am not the only woman this has happened to. We’ve all been there, poked into anger, a completely normal reaction, only to be laughed at and made fun of. “Look, she’s angry! How adorable and silly. Dance, little girl, dance!”
I don’t know why this happens, but my thinking is that if I can establish some reasoning, I might be able to calm my rage and release the incident’s grasp on me. At first I felt like an idiot, discussing this topic with myself in 2018, in the era of #MeToo. We’re past this simple stuff and onto much bigger issues, right? Maybe 2018’s greatest irony is that it’s been showing us simultaneously how far we’ve come, and how far we haven’t. So I’m going to try to figure out why a woman’s anger amuses a man so that I don’t drive my fist into a wall.
Maybe it’s guilt. Maybe a man feels guilty for making a woman angry, and tries to laugh and minimize both the incident and a woman’s reaction to it in an attempt to make it go away. This of course has the opposite effect, but they’ve always been a little slow on the uptake.
Along the same lines, maybe it's ignorance. Maybe a man genuinely doesn't comprehend why a woman is angry in any given moment, and it's easy to mistake the situation as funny.
I don’t really think it’s guilt or ignorance, I’m just trying to be a little bit fair.
Maybe it’s because women exist for men. That old chestnut, right? Every part of us is meant to serve their needs. My body for pleasure, my emotions for taking care of his children. My mind to challenge and delight him, lest he get bored. Even my anger is not my own, it is nothing more than entertainment, worth a good chuckle, at least. What else could a woman’s anger be for? It doesn’t seem to serve much purpose.
Maybe it’s because I’m a possession, not a presence. Women belong to men, they’ll feel the way they’re given permission to. Are you sad? There, there, I’ll hold you because I like fixing things. Are you happy? I must have made you happy, I feel accomplished. Are you angry with me? How can that be? Stop being that. It isn’t what I want, so you must be joking.
Maybe it’s because we’re pretty. It’s really hard to take window dressing seriously. If we’re nothing more than eye candy to a man, of course it’s hilarious when we get mad. I’d laugh my ass off if my wallpaper got pissy.
Maybe it’s because we’re wrong. It’s my word against his, my reaction against his, and who has historically been the one in the right? Our responses to human behavior are wholly ineffective, and so adorable when compared to a man’s, because they carry the assumption that they’re wrong. If I laugh at a man that doesn’t want to be laughed at, I make him feel small. If a man laughs at a woman who doesn’t want to be laughed at, she’s being overly emotional and irrational. If a man gets angry, everyone around should be scared. “Don’t make me tell your father.” If a woman gets angry, she’s assumed to be having the wrong reaction. “Oh come on, it was just a joke.”
I can’t define the reason for everyone, but I can define it for me. I think the real reason men think it’s cute when women get mad is that it helps them. It helps them make us smaller. It helps them make our motivations invalid. They haven’t done anything wrong, it’s just the foolish woman being foolishly angry, there isn’t a thing wrong with men, you see. It’s not you, it’s her. Laughter is the best medicine for being a degrading piece of shit.
My gender does not invalidate my feelings. It does not invalidate my reactions to others. My gender does not put my emotions on a different plane than a man’s. But some men don’t know that, and so this will probably happen again. When it does, I probably won’t be ready, I won’t be prepared, and I won’t be collected enough to respond how I might like to. I won’t be any of the things I’d like to be, but I do know that I won’t be cute, and for now that’s all I need to know or be for sure. | https://shanisilver.medium.com/why-do-men-think-its-cute-when-women-get-mad-8bc168338ee5 | ['Shani Silver'] | 2018-07-11 13:22:17.845000+00:00 | ['Life', 'Feminism', 'Culture', 'Anger', 'Writing'] |
Exploring the Trump Twitter Archive with NLTK | For this project, we’ll be using pandas and numpy for data manipulation, matplotlib for visualizations, datetime for working with timestamps, unicodedata and regex for processing strings, and finally, nltk for natural language processing.
Let’s get started by firing up a Jupyter notebook!
Environment
We’re going to import pandas and matplotlib, and also set the display options for Jupyter so that the rows and columns are not truncated.
# for manipulating data
import pandas as pd
import numpy as np

# for visualizations
%matplotlib inline
import matplotlib.pyplot as plt

# to print out all the outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

# set display options
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', -1)
Getting the Data
Let’s read the data into a dataframe. If you want to follow along, you can download the dataset here. This dataset contains President Trump’s tweets from the moment he took office on January 20, 2017 to May 30, 2020.
df = pd.read_csv('trump_20200530.csv')
Let’s look at the first five rows and see the number of records (rows) and fields (columns).
df.head()
df.shape
Let’s do a quick renaming of the columns to make it easier for us later.
df.columns=['source', 'tweet', 'date_time', 'retweets', 'favorites', 'is_retweet', 'id']
Let’s drop the id column since it’s not really relevant right now.
df = df.drop(columns=['id'])
Let’s do a quick sanity check, this time let’s also check the dtypes of the columns.
df.head()
df.info()
Working with Timestamps
We can see from the previous screenshot that the ‘date_time’ column is a string. Let’s parse it to a timestamp.
# for working with timestamps
from datetime import datetime
from dateutil.parser import parse

dt = []
for ts in df.date_time:
    dt.append(parse(ts))

dt[:5]
Let’s add a column with ‘datetime’ that contains the timestamp information.
df['datetime'] = df.apply(lambda row: parse(row.date_time), axis=1)
Let’s double-check the data range of our dataset.
df.datetime.min()
df.datetime.max()
Trimming the Data
Let’s see how many sources there are for the tweets.
df.source.value_counts()
Let’s only keep the ones that were made using the ‘Twitter for iPhone’ app.
df = df.loc[df.source == 'Twitter for iPhone']
We should drop the old ‘date_time’ column and the ‘source’ column as well.
df = df.drop(columns=['date_time', 'source'])
Separating the Retweets
Let’s see how many are retweets.
df.is_retweet.value_counts()
Let’s make another dataframe that contains only retweets and drop the ‘is_retweet’ column.
df_retweets = df.loc[df.is_retweet == True]
df_retweets = df_retweets.drop(columns=['is_retweet'])
Sanity check:
df_retweets.head()
df_retweets.shape
Back on the original dataframe, let’s remove the retweets from the dataset and drop the ‘is_retweet’ column altogether.
df = df.loc[df.is_retweet == False]
df = df.drop(columns=['is_retweet'])
Another sanity check:
df.head()
df.shape
Exploring the Data
Let’s explore both of the dataframes and answer a few questions.
What time does the President tweet the most? What time does he tweet the least?
The graph below shows that the President most frequently tweets around 12pm. He also tweets the least around 8am.
title = 'Number of Tweets by Hour'
df.tweet.groupby(df.datetime.dt.hour).count().plot(figsize=(12,8), fontsize=14, kind='bar', rot=0, title=title)
plt.xlabel('Hour')
plt.ylabel('Number of Tweets')
What day does the President tweet the most? What day does he tweet the least?
The graph below shows that the President most frequently tweets on Wednesday. He also tweets the least on Thursday.
title = 'Number of Tweets by Day of the Week'
df.tweet.groupby(df.datetime.dt.dayofweek).count().plot(figsize=(12,8), fontsize=14, kind='bar', rot=0, title=title)
plt.xlabel('Day of the Week')
plt.ylabel('Number of Tweets')
plt.xticks(np.arange(7),['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'])
Isolating Twitter Handles from the Retweets
Let’s import regex so we can use it to parse the text and isolate the Twitter handles of the original tweets. In the code below, we add another column that contains the Twitter handle.
import re

pattern = re.compile('(?<=RT @).*?(?=:)')
df_retweets['original'] = [re.search(pattern, tweet).group(0) for tweet in df_retweets.tweet]
Let’s create another dataframe that will contain only the original Twitter handles and their associated number of retweets.
df_originals = df_retweets.groupby(['original']).sum().sort_values('retweets').reset_index().sort_values('retweets', ascending=False)
Let’s check the data real quick:
df_originals.head()
df_originals.shape
Let’s visualize the results real quick so we can get an idea if the data is disproportionate or not.
df_originals = df_retweets.groupby(['original']).sum().sort_values('retweets').reset_index().sort_values('retweets', ascending=False)[:10].sort_values('retweets')
df_originals.plot.barh(x='original', y='retweets', figsize=(16,10), fontsize=16)
plt.xlabel("Originating Tweet's Username")
plt.xticks([])
Which Twitter user does the President like to retweet the most?
The graph below shows that the President likes to retweet the tweets from ‘@realDonaldTrump’.
The Top 5 Retweets
Let’s look at the top 5 tweets that were retweeted the most by others based on the original Twitter handle.
Let’s start with the ones with ‘@realDonaldTrump’.
df_retweets.loc[df_retweets.original == 'realDonaldTrump'].sort_values('retweets', ascending=False)[:5]
And another one with ‘@charliekirk11’.
df_retweets.loc[df_retweets.original == 'charliekirk11'].sort_values('retweets', ascending=False)[:5]
Examining Retweets’ Favorites count
Let’s find out how many of the retweets are favorited by others.
df_retweets.favorites.value_counts()
Surprisingly, none of the retweets seemed to have been favorited by anybody. Weird.
We should drop it.
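The drop itself isn't shown in the original, but it would presumably look like this:
# Drop the empty 'favorites' column from the retweets dataframe.
df_retweets = df_retweets.drop(columns=['favorites'])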
Counting N-Grams
To do some n-gram ranking, we need to import unicodedata and nltk. We also need to specify additional stopwords that we may need to exclude from our analysis.
# for cleaning and natural language processing
import unicodedata
import nltk

# add appropriate words that will be ignored in the analysis
ADDITIONAL_STOPWORDS = ['rt']
Here are a few of my favorite functions for natural language processing:
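The original gist with these helpers isn't embedded here, so what follows is a minimal reconstruction of what basic_clean, get_bigrams, get_trigrams, and the viz functions might look like; the bodies are assumptions built from standard nltk and pandas idioms (they rely on the imports above, plus a one-time nltk.download('stopwords')).
# Reconstruction (assumed, not the author's exact code) of the helpers used below.
def basic_clean(text):
    """Normalize to ASCII lowercase and drop stopwords and punctuation."""
    stopword_list = nltk.corpus.stopwords.words('english') + ADDITIONAL_STOPWORDS
    text = (unicodedata.normalize('NFKD', text)
            .encode('ascii', 'ignore')
            .decode('utf-8', 'ignore')
            .lower())
    words = re.sub(r'[^\w\s]', '', text).split()
    return [word for word in words if word not in stopword_list]

def get_bigrams(df, column, top_n=10):
    """Return the top_n most frequent bigrams in a dataframe column."""
    words = basic_clean(' '.join(df[column]))
    return pd.Series(nltk.ngrams(words, 2)).value_counts()[:top_n]

def get_trigrams(df, column, top_n=10):
    """Return the top_n most frequent trigrams in a dataframe column."""
    words = basic_clean(' '.join(df[column]))
    return pd.Series(nltk.ngrams(words, 3)).value_counts()[:top_n]

def viz_bigrams(df, column):
    """Horizontal bar chart of the 20 most frequent bigrams."""
    get_bigrams(df, column, top_n=20).sort_values().plot.barh(figsize=(13, 8))
    plt.title('20 Most Frequently Occurring Bigrams')

def viz_trigrams(df, column):
    """Horizontal bar chart of the 20 most frequent trigrams."""
    get_trigrams(df, column, top_n=20).sort_values().plot.barh(figsize=(13, 8))
    plt.title('20 Most Frequently Occurring Trigrams')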
Let’s look at the top 10 bigrams in the df dataframe using the ‘tweet’ column.
get_bigrams(df, 'tweet')
And now, for the top 10 trigrams:
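Presumably via the trigram counterpart of the call above (the original snippet isn't shown):
get_trigrams(df, 'tweet')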
Let’s use the viz_bigrams() function and visualize the bigrams.
viz_bigrams(df, ‘tweet’)
Similarly, let’s use the viz_trigrams() function and visualize the trigrams.
viz_trigrams(df, 'tweet')
And there we have it!
Conclusion
Using basic Python and the nltk library, we explored the dataset from the Trump Twitter Archive and did some n-gram ranking on it.
When the Doors to the House are Locked | When the Doors to the House are Locked
Accepting Christ’s Peace when We are Afraid
Photo by Denny Müller on Unsplash
and the doors of the house where the disciples had met were locked for fear…Jesus came and stood among them and said, “Peace be with you.” After he said this, he showed them his hands and his side. Then the disciples rejoiced when they saw the Lord. Jesus said to them again, “Peace be with you.” John 20:19–21(NRSV)
Quarantine, social distancing, and spending so much time at home were not listed on the church calendar this year. That our social, emotional, financial, and faithful elements of living would change so drastically were not predicted. There is some solace in knowing the church has dealt with deadly disease in its past, but we didn’t expect it to find us now.
There are resources in our time as we love our neighbors by staying apart — technology, digital bulletins, virtual hangouts, and online chats as we watch a service from home or gather in community groups. Many of us may have the gift and curse of living with our families, biological and created. Others may be alone in close quarters, controlling what we can in our surroundings, but knowing the outside is different, more measured and reserved as we keep our distance.
This includes our churches too. The Christian tradition is embodied in the act of showing up, being present, and asking what the Lord will do. What if God shows up for you as Christ does in the short passage above?
Disciples, alone and afraid, locked away, suffering from a shared fear after the crucifixion of Jesus. They wonder if they will be harmed as well.
Christ comes to them, resurrected, showing the wounds on his new body. A testament to suffering and love for those gathered there. The door is locked, but Christ is present.
Gather with your churches in ways that love your neighbor well. Many services come through the same device you are reading this article on now. If your faith is still floundering in this new posture, that’s okay. Acknowledge that. Grief is also an offering. Don’t ask yourself what’s enough if it’s all you can offer. God will find you. Follow Christ’s model as he greets the disciples in their fear and loneliness, to yourself and others — offer peace. | https://medium.com/new-body/when-the-doors-to-the-house-are-locked-640f9af5a3eb | ['Presley Thomas'] | 2020-04-19 13:14:32.509000+00:00 | ['Essay', 'Quarantine', 'Religion', 'Christianity', 'Coronavirus'] |
Crypto Tales: Success story of Sanchit Jain | I became aware of the cryptocurrency in 2017, but I never paid much attention to it. People everywhere said that it was a fraud, that it was some kind of trap, and so on and so forth. So I thought I’d stay away from it and I did — for sometime. But around the time of Diwali I came to know that BTC was cheap, so I thought I’d give it a try. I thought that even if I lost it wouldn’t be much.
I asked my dad to lend me some money to purchase BTC, but he refused. I wasn't surprised, of course. I saved some money on my own, and on 9th December I invested 2K in BTC; it soon went up to 15K. During those days I wasn't aware of any alt coins or any other kind of crypto. Learning about them didn't take long, and I invested in XRP. It went up to thrice the amount I'd invested, and I didn't sell it. That's the mistake I made. I got greedy and expected it to go up further. My investment of 35K turned into just about 8K.
I was heartbroken, but not without realizing that just holding crypto was a dumb move. I also realized that even if BTC pumped up to, say, $20K, I'd still book a loss if I just held on to my coins. I'd also booked some losses on Zebpay due to some mismanagement from their side. That's when I discovered Bitbns, and my profits have soared ever since.
I started investing in cheap coins like DOGE, DGB, etc. in search of some profit, but the market was still going down by the day. Then the Bitbns team introduced me to arbitrage and taught me how it worked. I started arbitrage with a very small amount, about 2K, that I had gotten from liquidating all my BTC. Once I started arbitrage, I started earning for the first time and soon began recovering my losses.
All this was possible because of Bitbns. Bitbns is the only exchange in India that has a variety of coins and the best customer support. I have made profits I wouldn’t have dreamt of making if not for Bitbns.
Thanks, Bitbns, for helping me recover my losses. | https://medium.com/bitbns/crypto-tales-success-story-of-sanchit-jain-44b9987687 | [] | 2019-06-04 15:11:00.733000+00:00 | ['Storytelling', 'Cryptocurrency', 'Blockchain'] |
Five New Year’s Resolutions That Resolve Nothing | Celebrating the new year is a way to acknowledge the promise of our own potential — to shake off any lingering failure and disappointment from the current year and get a fresh start. Sure, it may be 95% psychological. January is just a month, after all. Change can happen any time. But there’s something about fireworks, cheering crowds, sparkling Champagne and the collective determination of millions of people that make it feel real and important.
2020 will be the third new year since my daughter died and the first new decade without her. My ability (and interest) in looking ahead has diminished. The last few years of the current decade have been dominated by despair and grief and I think I’m ready for this year — and this decade — to end, but I’m not sure I’m ready for the next decade.
The end of a year offers the promise of a fresh start. I want to look forward again, to believe that the ache of missing my daughter will be a bit easier to bear. But I'm ready to let more joy into my life. So, even though I've still got a long way to go, I'll at least participate in a tried and true New Year's tradition — a list of resolutions for 2020. (Well, sort of.)
In 2020, I resolve to:
Leave the lights on. I’m not worried about saving money on my electricity. I’m worried about getting lost in the dark. My main resolution for the new year is let as much light in as possible, particularly during the long winter months. I will open the curtains, light candles, turn on all the lights and let the Christmas tree twinkle for an extra day or two. I will accept the comfort, companionship, and love of my family and friends as beacons of light that are here to guide me.
Breathe (deeply). I tend to stop breathing when things get really hard. Grief takes my breath away. In 2020, I’m going to try to remember to take deep, slow breaths and then release them, so that I can fill my body with oxygen and have strength enough to bolster myself and the people I love, especially when they’re holding their own breath.
Lie Down. Grief is exhausting. I use a lot of energy trying to mitigate it or suppress it. I keep thinking I should be as focused and efficient as I was before Ana died, but the truth is that I move through life much more slowly these days. When I ignore this truth and try to push forward, the grief rushes in and I stop functioning completely. So, in 2020, I plan to honor my limitations. I’m going to take more naps, get real sleep at night, and give myself more time off. I will give myself the space I need to grieve and recharge so that I can better meet my needs and the needs of my family.
Give in. Four years was a long time to walk the road of cancer with my child. Three years is an eternity without her. I am not the same person I was before this tragedy hit my family. I’m tired of worrying. I have no interest in controlling anything or anyone. In 2020, I’m going to give in to the many forces I can’t control. Life is random and it can be ruthless. I’ve been looking at what happened to Ana like it’s a puzzle to solve — Why her? Why us? I’ve been wasting precious energy on regret and longing. As the new year approaches, I resolve to give in to what I can’t change. At the very least, I resolve to try.
Stay present. I’ve spent a huge part of my life rushing forward, trying to accomplish goals, change flaws and become successful (whatever that means). I’ve also spent time obsessing over the past — when the kids were little, when my daughter was healthy, the few years I was in the best physical shape of my life. But if I’m always looking forward or back, then I’m not really existing here, in the present. I’ve missed so much, wasted time as though I had an endless supply of it. So, 2020 will start the decade of “now.”
A Tibetan sand mandala is a painting made from millions of grains of sand that can take carefully trained monks days or weeks to complete. Once done, the mandala is dismantled, a ritual meant to symbolize that life is transitory.
Like a mandala, my resolutions are characterized by their impermanence. They are meant to keep me focused on what’s important in the moment and empower me to ignore what’s not. If I break a resolution, then I can simply apply it at the very next opportunity — again and again — to infinity.
This is the only promise I can make to myself in the coming year — focus on the moment, forget everything else. My ultimate resolution is to try to live every day of 2020 as though it were New Year’s Day. | https://jacquelinedooley.medium.com/five-new-years-resolutions-that-resolve-nothing-8ba8ccf79261 | ['Jacqueline Dooley'] | 2019-12-17 12:36:46.395000+00:00 | ['Family', 'Mental Health', 'Self', 'Grief', 'Parenting'] |
Recovering a dormant obsession | On March 2 of this year, the 36 days of type project started, and I took that as a final nudge to get this going. I began, of course, with that uppercase A. From the outset, I wanted to explore the typeface in a couple of different realms — in one, I stayed close to the original outlines and trimmed the contours; in another, I accentuated the stroke contrast. Lastly, I added a stencil on some studies that took even broader geometric liberties. Would the spirals taper or not? Would I keep the slabs? Would the quirky internal architecture of the original lettering hold up for legibility within an entire alphabet? Those are things I wanted to dig into as the project got off the ground.
Over time, the forms that were closer to the original began to lose their appeal, as it felt like an exercise in cleaning up vernacular type without adding anything else. A specter of pointlessness began to fall over those letters, particularly in the middle of the alphabet, where the task felt more like a production assignment, not a years-long What If project. I loved how the A and the B and the C turned out, and I continued drawing them to see what would happen once the R or the X or the Z came around, but I did so without the same interest.
Drawing tapering spirals also tested my patience a little too much, especially at the daily pace the project required. The high contrast version stayed around a little longer (in fact, it is still an intriguing idea to me and perhaps something I will revisit down the line). Still, by the end of the alphabet, I was most satisfied with the stencil / geometric approach. I kept drawing high contrast versions until the end, but as the numerals started to count down, I knew the outcome would be a cleaned-up version of that stencil set.
Little rules emerged here & there. The original wordmark had no diagonals, so I stuck to that as much as possible, which made things like the N and the forward slash and the accents somewhat of a challenge, but I felt there had to be some idiosyncratic remnant to the alphabet to keep the spirit of the original letterforms. I had to sacrifice the original S as it just started to read like a 5 in most cases (I kept it as an alternate), and, though I've always loved the NE relationship, I removed the curved stem of the N to make it work better in the entire system.
Securing Web-REST-APIs with JWT | The right way to balance out security and convenience on the web
source: the Independent: the US to call for a worldwide tightening of airport security measures over fears of ‘new generation’ of Syrian terror attacks
JSON Web Token (JWT) is an open standard that defines a compact and self-contained way for securely transmitting information between parties as a JSON object.
This information can be verified and trusted because it is digitally signed.
JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA.
JWT is robust and can carry a lot of information, yet it remains simple to use and relatively small in size.
Like any other token, JWT can be used to pass the identity of authenticated users between an identity provider and a service provider
(which are not necessarily the same system). It can also carry all the user's claims, such as authorization data, so the service provider does not need to go into the database or an external system to verify the user's roles and permissions for each request; that data is extracted from the token.
JWT Authentication Flow
source: toptal blog: Spring Security
The client logs in by sending its credentials to the identity provider.
The identity provider verifies the credentials; if all is OK, it retrieves the user data, generates a JWT containing user details and permissions that will be used to access the services, and it also sets the expiration of the JWT (which might be unlimited).
The client stores the JWT for a limited or unlimited amount of time, depending on the expiration set by the identity provider.
The client sends the stored JWT in an Authorization header for every request to the service provider.
For each request, the service provider takes the JWT from the Authorization header and decrypts it if needed, validates the signature, and if everything is OK, extracts the user data and permissions. Based on this data alone, and again without looking up further details in the database or contacting the identity provider, it can accept or deny the client request. The only requirement is that the identity and service providers agree on encryption, so that the service can verify the signature (or even decrypt the token, if it was encrypted).
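A minimal sketch of this issue-and-verify cycle using the PyJWT library; the secret, claims, and expiry below are illustrative assumptions, not a production setup.
import datetime
import jwt  # pip install pyjwt

SECRET = 'change-me'  # shared HS256 signing key

def issue_token(user_id, roles):
    payload = {
        'sub': user_id,                                                   # identity
        'roles': roles,                                                   # permissions
        'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1),  # expiration
    }
    return jwt.encode(payload, SECRET, algorithm='HS256')

def verify_token(token):
    # Raises jwt.InvalidTokenError (e.g. ExpiredSignatureError) if invalid.
    return jwt.decode(token, SECRET, algorithms=['HS256'])

token = issue_token('alice', ['admin'])
claims = verify_token(token)  # {'sub': 'alice', 'roles': ['admin'], 'exp': ...}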
The main difference between JWT and other arbitrary tokens is the standardization of the token’s content.
Another recommended approach is to send the JWT token in the Authorization header using the Bearer scheme.
The content of the header should look like this;
Authorization: Bearer <token>
When should you use JSON Web Tokens?
Authorization:
This is the most common scenario for using JWT. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. Single sign-on is a feature that widely uses JWT nowadays, because of its small overhead and its ability to easily be used across different domains.
Information Exchange:
JSON Web Tokens are a good way of securely transmitting information between parties. Because JWTs can be signed (for example, using public/private key pairs), you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.
What is the JSON Web Token Structure?
In its compact form, a JSON Web Token consists of three parts separated by dots (.)
which are:
1. Header
2. Payload
3. Signature
Therefore, a JWT looks like the following:
xxxxx.yyyyy.zzzzz
Let's break down the different parts.
a. Header
The header typically consists of two parts: the type of the token, which is JWT, and the signing algorithm being used, such as HMAC SHA256 or RSA.
For example:
{
  "alg": "HS256",
  "typ": "JWT"
}
Then this JSON is Base64Url-encoded to form the first part of the JWT.
b. Payload
The second part of the token is the payload, which contains the claims. Claims are statements about an entity (typically, the user) and additional data. There are three types of claims: registered, public, and private claims.
c. Signature
To create the signature part you have to take the encoded header, the encoded payload, a secret, the algorithm specified in the header and sign that.
The result is three Base64-URL strings separated by dots that can be easily passed in HTML and HTTP environments, while being more compact when compared to XML-based standards such as SAML.
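To see those three parts take shape, here is a hand-rolled HS256 example in Python, purely for illustration; in practice a vetted library should produce and verify tokens, and the payload and secret here are placeholders.
import base64, hashlib, hmac, json

def b64url(data):
    # Base64-URL encode without padding, as JWT requires.
    return base64.urlsafe_b64encode(data).rstrip(b'=')

header = b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
payload = b64url(json.dumps({'sub': '1234567890', 'name': 'John Doe'}).encode())
signature = b64url(hmac.new(b'secret', header + b'.' + payload,
                            hashlib.sha256).digest())
token = b'.'.join([header, payload, signature]).decode()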
How do JSON Web Tokens work? — Client-Side
In authentication, when the user successfully logs in using their credentials, a JSON Web Token will be returned. Since tokens are credentials, great care must be taken to prevent security issues. In general, you should not keep tokens longer than required.
Whenever the user wants to access a protected route or resource, the user agent should send the JWT, typically in the Authorization header using the Bearer schema. The content of the header should look like the following:
Authorization: Bearer <token>
The server’s protected routes will check for a valid JWT in the Authorization header, and if its present, the user will be allowed to access the protected resource.
If the token is sent in the Authorization header, Cross-Origin Resource Sharing (CORS) won’t be an issue as it doesn’t use cookies.
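On the client side, attaching the token to each request is a one-liner; a sketch with the requests library and a hypothetical endpoint:
import requests

token = '<jwt-received-at-login>'  # placeholder for the stored JWT

resp = requests.get('https://api.example.com/protected',
                    headers={'Authorization': f'Bearer {token}'})
resp.raise_for_status()
print(resp.json())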
Why use JSON Web Tokens?
JWT is more compact than SAML (Security Assertion Markup Language) tokens, which are based on XML.
SWT (Simple Web Token) can only be symmetrically signed by a shared secret using the HMAC algorithm. JWT and SAML tokens can use public/private key pairs for signing.
JWT's main strength is handling user authentication in a stateless, and therefore scalable, way while keeping everything secure with up-to-date cryptography standards.
Breaking Out of Prison May Be Easier Than Breaking Into Publishing | Breaking Out of Prison May Be Easier Than Breaking Into Publishing
7 tips to keep you chipping away
Photo by Rumman Amin on Unsplash
Five years ago, I put the finishing touches on my debut novel and set out to do what I’ve come to consider nearly impossible: finding a literary agent to represent my work and land me a book deal. At the time, I underestimated how difficult the quest for a traditional publishing contract could be. Flash forward to present day, and I have a more elaborate set of adjectives to describe the process, including competitive, subjective, exhausting, and extremely frustrating.
The good news is that I’m five years older and arguably a bit wiser. And even though I’ve written two more novels since the first, you still can’t find my books on Amazon (or anywhere else for that matter). However, I’ve learned a lot throughout my journey — enough that I feel it’s worth sharing with other aspiring authors out there.
If you share my dream of becoming a published author someday, here’s some insight you might find helpful:
1) Learn the Lingo.
There’s a lot of terminology used in the publishing industry that I simply hadn’t heard of before. For instance, a query letter is the first step in pitching to an agent, somewhat like a cover letter that accompanies a resume. Each literary agent has his/her own set of requirements for a query. Some want just the letter with a short pitch, others ask for an accompanying section from your manuscript (e.g., five to ten pages, or the first few chapters). It’s critical that you read each agent’s website carefully to know exactly what they’re looking for and follow their submission guidelines exactly. Otherwise, you risk having that agent pass before ever reading your first page. (I once received a rejection less than five minutes after sending my query. My guess is that I hadn’t paid close enough attention to the genre he represented, or the requisite word count posted on his website.) Moral of the story: DO YOUR HOMEWORK!
You’ll also hear about beta readers — people willing to read your manuscript and give feedback. They differ from critique partners in that betas typically aren’t writers — they’re reviewing your work as readers only. Terms or phrases like trope, head-hopping, comp titles, multiple POV, synopsis, own voices and submissions (partial or full) all have specific meaning when it comes to trying to sell your work. Familiarize yourself with them all and be sure you’re using them in the right context.
2) Tap Into Available Resources.
There are endless resources to help you get published. WARNING: It can be overwhelming! Relevant topics include: How to find the right literary agent…How to improve your craft, your query or your synopsis…Understanding pacing, plot, story, structure and high concept… Is your head spinning yet?
Once you dig into the available material, you'll begin identifying trends and understanding your specific needs. I write fiction novels, and some books I've found helpful include Stephen King's On Writing: A Memoir of the Craft, Lisa Cron's Story Genius and Jessica Brody's Save the Cat! Writes a Novel.
Writer’s Digest is a great resource for all-things-writing. They host useful conferences and seminars, sponsor writing contests and publish an annual list of literary agents sorted by genre. Check out websites like ManuscriptWishList and QueryTracker, and if you follow your favorite authors, be sure to see whether they offer educational opportunities. I’ve participated in several in-person and online writers’ workshops, which are great ways to hear from the experts and meet other writers. Many conferences include the chance to pitch your book to agents (for an added fee). It’s worth the money (in my humble opinion), as it’s much easier to connect with an agent face-to-face. Not all of these opportunities cost money, so peruse the Internet and see what fits your budget and your learning style best. There really is something for everyone!
3) Read, Read, Read.
One of the best pieces of advice I’ve received is to read as much in your genre as possible. This might seem obvious, but for years I was trying to hold down a full-time job while writing during the wee hours of the morning and late at night. The last thing I had time to do was read other authors. But the best way to know what’s selling is to familiarize yourself with the market. Once I actually listened to this advice, I learned to read like a writer, evaluating what’s ‘making it to the shelves’ to ensure there’s a place for my work. Besides, most agents want comp titles — books you’d describe as similar to the one you’re pitching. How would you know what those are if you’re only reading your own stuff?
4) Write Often.
Like anything, the only way to get better at something is through practice. It’s important to establish a writing routine, even if it’s only for an hour every day. That may seem like an impossible feat. We all have busy lives and for me, sometimes taking time away from the chaos seemed selfish. But if you want to succeed in this business, it’s a necessity (Don’t worry. I see the irony in this advice coming from someone who hasn’t reached that level of success yet. Just wait.)
One common excuse I used to make is that since I was working on a novel, it wasn’t always productive to write or revise my manuscript daily. If you’re waiting for feedback from a critique partner or an editor, or waiting to hear back from some queries, sometimes that manuscript needs to breathe. My advice here is to find something to fill the gap, maybe starting your next project. It doesn’t have to be another 80,000-word novel. A short story or a magazine article might do the trick. Anything to keep those creative juices flowing and those fingers banging on the keyboard.
Here’s where I’ve had some success and it was completely unintentional. I’d given my manuscript to a few beta readers and decided to set it aside while I was waiting for their feedback. I had been in a good groove of writing daily and didn’t want to disrupt my mojo, so I decided to write an essay for one of the websites I follow. And guess what? That little side trip resulted in my very first paid writing gig! Let me say that again — MY FIRST PAID WRITING GIG! (Damn, that felt good). I can’t tell you what that did for my confidence. It’s the first time someone other than my husband and my closest friends told me that my writing was worth something. That instant validation lifted me up and motivated me to write a few more essays — several of which have also been published. If I hadn’t been looking for a way to keep writing while my novel was percolating, I never would have stumbled upon another outlet for my writing. Cool, right?
5) Develop Your Platform.
Before trying to get published, I didn’t even know what that meant. And if you Google “platform,” you’re bound to find a variety of interpretations. While the definition seems to change as rapidly as technology, my current understanding of platform is not only your online presence, but an all-encompassing ability to reach a potential audience for your book.
Platform is not just your number of Twitter followers — but that’s definitely part of it. I’ve found that lots of authors hang out on Twitter, using hashtags like #amwriting, #writingcommunity and #amquerying. It’s fun to follow other writers on their journeys, and to share stories, ask questions, maybe even meet critique partners. I’ve participated in a few pitch contests on Twitter (two of which resulted in requests for submission from literary agents — short-lived, but requests nevertheless).
Author websites and blogs are an important part of platform too. At first, I didn’t understand why I needed a website without any books to sell. But it’s good to be prepared before you get published. That way, you’ll hopefully have a following prior to your first book hitting the market. Many agents ask about your platform, especially those using QueryManager for their submissions. Just a heads up so you’re better prepared than I was.
6) Seek Feedback.
One of the mistakes I made with my first book is that I never shared it with anyone. And I mean no one. Once I was satisfied it was finished, I simply started querying agents and was devastated when I couldn’t generate any interest. I’ve gone back and reread that manuscript and it’s embarrassing. I was nowhere near ready to have a professional literary agent look at my work, let alone send it out to publishers for review.
I learned the hard way that it's important to have several sets of eyes on your work before it's worthy of being sent out into literary circles. Friends and family are great for the once-over, but don't limit yourself to people to whom you're emotionally attached. They may be too concerned with hurting your feelings so it's not likely that they'll give you a fully honest opinion. Be sure to connect with people who can give you a completely objective assessment. Again, you can find them at conferences, in writing groups, or online.
Hiring a professional editor is also an option. I’ve heard some industry experts say that this isn’t necessary, but it’s not a bad option if you have the means and need a truly objective opinion. With any feedback, always keep in mind that it’s just one person’s opinion. You could have three different editors review your work and get three entirely different opinions. If you’ve ever been in a book club, you know exactly what I mean. Authors need to have thick skin, enough to resist caving to every piece of constructive criticism. It’s your book. Be flexible and open to feedback, but confident enough to know how much you’re willing to change.
7) Persevere.
After I finished my first book, I was lucky enough to sit down one-on-one with an award-winning romance novelist. She was very generous with her time and expertise, and she told me that if I took one thing away from our conversation it was that perseverance is the key.
I’m more hopeful about the possibility of my own success today than I was last year. And even if it still doesn’t happen for me soon, I’m confident that if I keep at it, I’ll eventually reach my goal.
All that being said, there may come a time that I decide the traditional publishing route is not for me. I’ve done some research on self-publishing, and that’s definitely another option — just a discussion for another day! | https://medium.com/swlh/breaking-out-of-prison-may-be-easier-than-breaking-into-publishing-4d7e91bb8e0b | ['Susan Poole'] | 2020-08-19 19:37:40.954000+00:00 | ['Writing', 'Publishing', 'Writing Tips', 'Writers Life', 'Writer'] |
January Summary | January Summary
By Adrià Navarro Martínez on The Capital
This weekend concluded my first month of trading in the TopStepTrader evaluation, and here’s my story about it.
It has been a roller coaster.
Not only with monetary results, but with emotions. It all started with a drawdown, as you could read in my posts, but it ended with profits.
Here is my trade report for this concluded month: around $900 in profit, close to 1/3 of the first step on TST.
Continue reading to see and understand what happened over the course of this month.
Let’s start with the emotions that appeared during this month and how they evolved towards its end:
1. Boredom: yes, the first days of my trading month were negative, but I was still trading in the market. I didn’t see any opportunity within my plan for a week and a half. It was really boring to be in front of the screen and not take any trade. Then the need for more trading strategies took a place in my mind. Something positive about this was that I didn’t enter anything that wasn’t in my plan.

2. Despair: once boredom took hold of my head, it evolved into despair. I couldn’t find any profitable trade in the market, and I didn’t set up or organize my time to develop more strategies, even taking the weekend into consideration; I couldn’t manage to do it.

3. Breaking the rules: a crazy day. My mind stopped working in discipline mode and started thinking of being in the market under any condition; it took just one opportunity to trigger that mechanism in my mind. “I wanted to take a trade and make a profit from it.” That day I won some money in the market, but the rules were broken, so the next trade just like it made the profits disappear.

4. Recovery: as this was the only bad day in my trading month, I decided it would be the last. I would never break the rules again, because I started to feel worried, afraid and not confident in my trading, and I don’t want any of these emotions while I trade. I just want to execute profitable setups and wait for time to let my edge work.

5. Confidence: in the last week of January and the first of February something triggered in my mind. It was the end of doing stupid things; I am here to be professional and to take the best opportunities in place. Plus, our mentor sharing his ideas on the market day by day gave me the boost to feel confident and to feel that this is possible again, not only in my mind but in my body.
This is a marathon and not a sprint.
I started January with a somewhat nervous body and nervous emotions around trading, but I remembered why I’m doing this, and taking action in my trading cleared that away.
The fear of failing isn’t here anymore; my actions deleted that fear day by day. That’s the only way to get out of the comfort zone and grow.
At the end of the month I gained confidence and I know what my next steps are. I’m reviewing my trades every weekend and learning from my errors. And the last one: I decided to open a real personal CFD account to bring back the strategies I developed myself and execute them in real time, day by day.
Let’s see how this February month ends.
Have a nice week people! Cy@ | https://medium.com/the-capital/january-summary-55f4eef82d55 | ['Adrià Navarro Martínez'] | 2020-02-11 01:56:45.352000+00:00 | ['Development', 'Money', 'Trading', 'Investing', 'Freedom'] |
Ban the Sagging Pants-profiling veteran from Walmart | Ban the Sagging Pants-profiling veteran from Walmart
What Walmart can learn from their own employees’ patience
Photo credit: Fabio Bracht/Unsplash
“I ain’t a fighter, but don’t push me.” I live by this Tupac line (from “Hail Mary,” but one word is edited). I’ve also worked at Walmart for a whopping eight months after undergrad, surprising myself at how long I lasted there. It takes a special level of patience to work in any retail store, just from tolerating and being nice to customers alone. It’s the reason I try my absolute best to be pleasant to cashiers and sales clerks. I know what it’s like on the other side of the counter.
But there’s a level of entitlement that has increased to a disturbing amount in Trump’s America, and this latest incident is a prime example of what it’s like to be a minority employee in retail. A 36-year-old army vet took it upon himself to decide a Walmart employee’s pants were too low. Instead of just doing what a customer should do in this situation if it bothers him this much, he decided to physically assault the Hispanic worker, who looks to be about half his age.
In a non-coronavirus-filled world — with 3.4 million U.S. people infected with coronavirus and almost 136K who have died from it — one would expect that another customer or employee would stand in the way of this fight. But customers’ hands are tied with a worldwide health outbreak on their hands. To absolutely no one’s surprise, the veteran is also not wearing a face mask.
While the comment section is flooded with one group blaming this attack on white people, the other group is cheering for the white woman with the shopping cart who defends the worker immediately. I, for one, am elated to see someone stand up to him, and a white woman at that — because it surely frustrated the white male veteran even more. How dare she go against him? There is absolutely no way one can convince me the veteran’s issue is the store’s wardrobe policies. Are the Hispanic young man’s pants too low? Absolutely. Is it unprofessional? Yes. But is it really about the pants? You’ll never convince me it is. No one’s pants should ever make a sane person this irate. And the woman who stood up to him knew it.
But what happens now? Does Walmart shrug off this behavior and hope its employees just tolerate people physically assaulting them through a worldwide health outbreak? Or, does this moment result in a mediocre lecture to one worker about pulling up one’s pants to avoid confrontation again?
I, for one, believe Walmart needs to ban the customer who cannot keep his hands to himself. This isn’t just a matter of physical assault; it’s also deadly in a world where being within 6 feet of another person can put that same person 6 feet under. For the Walmart employee to keep his cool, and for his fellow colleague to patiently pull him away, is a teachable moment — that I don’t think my waist-wearing caprid self would have ever learned. The next time they want to show someone customer service training videos, run this video by new employees — sagging pants and all. And give both employees a raise for not sinking to the level of this absolutely abhorrent customer.
The ball is in Walmart’s court. And right now, this veteran needs to be in someone’s court instead of someone’s shopping aisle. | https://medium.com/i-do-see-color/ban-the-sagging-pants-veteran-from-walmart-e3c596da6b8 | ['Shamontiel L. Vaughn'] | 2020-07-16 20:08:55.900000+00:00 | ['Walmart', 'Racial Profiling', 'Bullying', 'Coronavirus', 'White Privilege'] |
How to Make Better Decisions by Overcoming the 5 Obstacles of the Mind | Master Shi Heng Yi is as interesting as his name suggests. He has an MBA, two university degrees, and a bunch of other diplomas and certificates. Yet, he has also 30 years of practice as a Shaolin monk. If I had to ask someone about how to overcome all the obstacles of our modern world, he’d be high up on the list.
His answer is in line with what Brendon Burchard, author of the NYT #1 bestseller High Performance Habits advocates: First, seek clarity. Yi says self-mastery and living a valuable and meaningful life is all about seeing clearly. However, this isn’t as easy as wiping your windshield — there are five major obstacles you have to overcome.
The modern world aims at making things easy and convenient. Google Maps tells you where to go, Starbucks has your coffee needs covered, and if you’re in the mood for diarrhea you seek out the nearest McDonald’s. But in your personal life, decisions are more complicated than ever.
Should you take that job or wait for another offer? Should you tell her you’re unsatisfied with your relationship? Should you read that book or watch Netflix? Should you move cities? Should you sign a gym membership? Should you get a dog — and if yes, which one, and should you get pet insurance? Whether you want to adopt a canine or not, you get the point.
In today’s world, there are almost endless possibilities — and all these options make it hard to choose. You start sweating from all the heavy thinking, your windshield gets foggy, and you can’t see clearly. You take the wrong turns and end up in the wrong places.
The ancient advice of the Shaolin has been around for millennia, yet it’s still invaluable wisdom for your modern life. By mastering yourself first before you take on the outside world, you’ll free yourself from distractions, bad decisions, and layers of overwhelming uncertainty.
When you apply it, your life’s windshield clears up. You can set the trajectory for the life you want, make better decisions, and take the action that will get you exactly where you want to be. | https://medium.com/mind-cafe/how-to-make-better-decisions-by-overcoming-the-5-obstacles-of-the-mind-90414e0dfee9 | ['Moreno Zugaro'] | 2020-12-18 15:49:48.857000+00:00 | ['Decision Making', 'Buddhism', 'Self-awareness', 'Self Improvement', 'Advice'] |
Under a Fig Tree and My Connection with Nature | Under a Fig Tree and My Connection with Nature
A relationship otherwise unknown by many
Source: T. Miranda 2020
Why should I sit on a park bench if I have a tree?
Not a normal tree though. A type of tree that provides free park benches. The ones that once raised from the ground it resembles a curved stonewall, with fissures similar to a badly done plastering job. The kind of bench that kids cannot wait to let go of their parent’s hands to climb on it.
The design looks like a huge octopus overextending its tentacles, with swirling shapes and coarse texture. This natural seat is nothing more than a buttress root. There are several types of buttress roots, or root flares (another similar name amongst tree species). But this one is a large fig tree (Ficus macrophylla).
It is so big that if you lie down on the root flare, you cannot even see where the canopy finishes. Humongous. It is such a surreal experience when you feel embraced by a big tree, resembling at once your mom’s lap; I cannot even describe it in words. That is why the most adventurous ones do not need park benches.
As I sat appreciating the noises surrounding the park, I could not tell whether I was close to the city or not. Yet it is so close that, as soon as you leave the canopy cover where I was standing, cars and trucks take over your ears.
For one minute, I felt contemplated by nature amidst the urban area. Trees are intercalated at a respectful distance in this park. Most of them are mature figs, London planes, Norfolk pines and eucalyptus. By mature I mean tall and large: the kind of large that would make even vicious tree huggers walk away in desolation.
After a while, my bum feels like a heavy square plate and the natural seat becomes unpleasant. A good sign that it is time to move on. Though, as soon as I stood on both feet, a crackling noise of leaf litter caught my attention.
Do you remember the feeling of stepping on leaves when you were a kid? I am aware that for most of us it is hard to recall, but for those with an easy memory, I felt exactly the same thing.
My attention revolved towards the ground.
The Nonstop Moving Sphere
You can tell the stages of the decomposing process while looking at the green and brown leaves on the floor. Normally, the old ones are underneath, where they begin their progression into organic matter. A quick sweep of the leaves is worthwhile: you can see the wet black soil and notice that it is much warmer than at the surface, especially in winter.
The sound of dropping fruit is a common noise whilst standing under a flowering tree.
Source: T. Miranda 2020
Meanwhile, the wind plays with the tree crown, squeezing through the stems, slapping its leaves and rotating like a tiny windmill. The leaves produce peculiar sounds, similar to an instrument of African origin, with a sharp progressive noise that sticks in your mind for at least a few minutes.
This sound of flapping leaves in the wind contrasts interestingly with another kind of company: birds. Parrots and honeyeaters are the common visitors, including red wattlebirds, lorikeets, rosellas, galahs, cockatoos and much more of what South Australia has to offer.
Coming back to the ground before I get distracted by the natural symphony above me, I decided to poke my finger into the soil. Amazingly enough, it is more alive than I thought.
With a bit of digging effort, it did not take long to find earthworms — one of the essential decomposers of our ecosystems. I was stunned.
I used to play with them when I was a kid, yet I have not had a close encounter like this one in years. If I put a soil sample under the microscope, I might see millions of other decomposers, such as bacteria and fungi. This is nature’s interrelationship at your fingertips.
Source: T. Miranda 2020
I think we sort of assume that the only way to understand nature is by delving into a graduate course and becoming doctors. I believe there are other doors available to be opened by the most ordinary individual.
There are only two prerequisites to understanding science: respect and curiosity. Then questions may arise, but do not be afraid. These questions are your friends, and with them, you might reach a deeper exploration of our diverse ecosystems.
Without them, you may feel empty, dubious, solitary and desperate to find sense before you fall into frustration. The common end for someone deluded about the unknown is the old “God” search.
It is a search that may not necessarily take you away from appreciating nature, but it may stop you from being curious and sceptical: the two most important attitudes towards something bigger than us.
No, I am not talking about a deity.
I am talking about the Universe, and its functioning process instead.
Even after reading this text you are still not convinced that you could be a citizen scientist, please bear with me in the next few paragraphs then.
Source: T. Miranda 2020
An Experiment
Walk to your nearest park, find a bench (ideally under a tree, or on a root flare), bring your lunch and just close your eyes. Notice all the sounds: the different call of each bird, the wind, the leaves. Feel the wind on your face and listen to the old leaves landing on the floor.
And then, open your eyes again, and invite yourself to join in this conversation without words. But remember that they are talking amongst themselves in a different language.
It is a language that pushes you to explore through physical signs, in which you may identify a certain type of communication. You are just a spectator. That is when you start realising there is much more going on than a designer, a deity behind the curtains.
This natural exploration is the “drug” that Charles Darwin, Bertrand Russell, Richard Dawkins, Thomas Jefferson, Christopher Hitchens, Stephen Fry, Neil deGrasse Tyson, Carl Sagan and many more got hooked on for the rest of their lives. You might be the next one, just like me.
This “drug” is the profound accumulation of natural knowledge that forever recedes without a return ticket. I do recommend letting it blow your mind for once. Let humbleness take over and you will then realise, fellow citizen scientist, that the world is full of surprises. | https://medium.com/age-of-awareness/under-a-fig-tree-and-my-connection-with-nature-95bdfe098cc2 | ['Tiago Miranda'] | 2020-12-17 16:40:44.817000+00:00 | ['Vision', 'Philosophy', 'Nature', 'Science', 'Awareness'] |
Discrimination/Gender Bias Against Women: A Girl Who Asked why? | Discrimination/Gender Bias Against Women: The Girl Who Asked Why?
Story Written by Shon Mehta
© Sheetal Mehata.
Part One
This story happened a really long time ago, but it is still very relevant.
Girls were taught to cook, to take care of the family, and then married off. Studying was off-limits to girls.
In those times, there lived a girl. She was a little different. She always had lots of questions in her mind.
When she was little, her mother wanted her to learn cooking.
The girl asked her mother, “Why should I learn to cook?”
Mother said, “So that you can feed yourself when required.”
The girl said “Fair enough”, and learned to cook.
After some time, her mother wanted to teach her household work.
The girl again asked, “Why?”
Mother said, “So that you can be self-dependent.”
The girl said “Fair enough!”, and she learned the household chores.
Then one day, her parents told her that they will be marrying her off soon.
She asked, “Why?”
“Because all girls get married at this age,” said the parents.
“Everyone does, and so should I? That’s not a good reason. I am not going to marry.”
The girl’s determination surprised her parents. Other parents could have forced the girl into marriage, but her parents didn’t.
So, now the girl had plenty of time on her hands. As her father was a teacher, she joined her father’s academy. There she learned several hymns and their meanings. She asked her questions and learned even more. Soon, she surpassed her father in knowledge.
Part Two
One day, an invitation arrived. It was from the king. The invitation was for the brightest scholar in the academy. As it happened, the king wanted to compile all the knowledge in the universe into books. To get the inputs, he had invited scholars and philosophers from all over the world.
There was a discussion in the academy about who to send to this conference. After a lot of thought, they all agreed that the girl was the brightest scholar in the academy. So, the girl was sent to the conference.
When the girl reached the conference venue, she was taken aback by the grandeur. She noticed a large number of men, but hardly any women among the delegates.
She climbed the dais to take her seat. Suddenly, there was a huge uproar — people in the audience were staring at her.
“A woman, who thinks she can sit on the scholars’ panel?”
“Preposterous!” screamed someone.
“Look at her clothes, so provoking. I don’t think she is female of good reputation.” declared another.
“Stop her! It’s a sin against god.”
Everybody looked at the king for a solution.
The king pondered for a moment.
“Girl, there is some misunderstanding. A woman can’t sit on the scholars’ panel, unless she is accompanied by a man.”
“Pardon me, Your Grace! But I was invited to join the discussion,” said the girl.
“I don’t remember inviting you,” said the king.
“You sent the invitation for the brightest scholar in my academy. I am the brightest in my academy. On the invitation there was nothing about only male scholars being allowed,” answered the girl.
The king gave little chuckle.
“You have made a good point. I have no objection.” Said the king.
“But I don’t think a woman can join the discussion,” murmured one of the women in the audience.
“Why?” Asked the girl.
“You will not feel comfortable around so many men” answered another woman.
“I have no problem — my focus is on my work, not men”.
“You don’t have to do this. You are not bad looking, you can marry some wealthy gentleman,” advised one elderly man.
The girl ignored him.
“Let us have a discussion. If the scholars have objections, they can debate with her. If she wins, she can join the panel.” Said the king.
Several liked the solution. They were sure that the girl will be humiliated by scholars.
The scholars on the dais discussed among themselves, and selected an elderly scholar as their representative.
“So, by joining the discussion, what do you want to prove? That women are better than men?” asked the elderly scholar.
“No, sir. I don’t want to prove anything. I am here to join the discussion, to quench my thirst for knowledge. Like all of you,” said the girl, fearlessly.
“But greater knowledge is not for women,” said the elderly scholar.
“I beg your pardon, sir, but why?” asked the girl.
“Because female intellect is weaker than men’s,” said the elderly scholar.
“Says who, sir?”
“It is written in the hymns.”
“May I ask, who wrote those hymns?” asked the girl.
“The hymns were written by our forefathers,” said the elderly scholar.
“By forefathers you mean our male ancestors?” asked the girl again.
“Yes, of course. By our male ancestors,” said the elderly scholar.
“How did our forefathers know that women have weaker intellect?”
“They noticed,” said the elderly scholar, irritated.
“But how, my lord? Give me an example, how did they notice?” asked the girl again.
“I don’t remember,” said the elderly scholar.
“Doesn’t matter. Why don’t any of you scholars ask me questions to prove my weaker intellect.”
Many scholars thought of asking her questions, but held back upon seeing her immense confidence.
“You ask too many questions, girl!” shouted the elderly scholar. He was furious.
The atmosphere was tense.
“Sir, answer her. Why is a female’s intellect weaker than a male’s?” said the king.
“I need to study, Your Grace, to come up with an example,” said the elderly scholar.
“Then I can’t stop her from joining the scholars’ panel. She has come here on her own merit. I will allow her to sit on the panel until you come up with a convincing example” said the king.
People were still doubtful about girl’s worthiness. But as the discussion progressed, all doubts vanished.
Days passed. The girl took part in several discussions, asked many questions and answered many others. Other scholars were astonished by her brilliance.
When the final draft of the book was compiled, many hymns which were composed by the girl were included.
Nobody knows for sure what happened to the girl thereafter.
Some say she composed a book of her own hymns. Some say she opened an academy for girls. Different people, different stories. But everybody agrees that the girl “who asked why” became the first female scholar. | https://medium.com/shon-mehta/discrimination-gender-bias-against-women-a-girl-who-asked-why-ca599824c305 | ['Shon Mehta'] | 2020-03-05 13:13:55.856000+00:00 | ['Storytelling', 'Women Empowerment', 'Discrimination', 'Women', 'Gender Equality'] |
Svandis Development Progress | 2018 has been a very fruitful year for Svandis team. We have completed a lot of work and achieved great results in developing the ecosystem that will be beneficial for all crypto-market participants. We would like to share what has been done so far with our community, and will keep you updated on our plans for the future.
Q1 2018
Data Mining Worker
Broke ground on the worker application for distributed users to contribute their computing resources. The worker is used to actively query websites that have been tagged by the Svandis Internal team.
When new information is found by a worker, a socket-server confirms the findings and is able to forward them onto the content extractor.
With outside users contributing computational resources, more web sources online can be queried by the system.
Svandis Frontend
The frontend for Svandis is an angular application created to act as a dashboard for the many features included within the system.
It was created in Material Design with friendly login options. It will serve as a platform for the newsfeed, the tokensale screeners, alt-coin screeners and up to date information, and blockchain connected modules that serve to keep the platform transparent.
Research & Development
The team researched originally the data structures that token offerings have in common to create a standardized fact reporting system for blockchain.
News aggregation and the scraping of data sources was of particular interest in research.
Q2 2018
Data Content Extraction
Works to extract content from URLs programmed by the Svandis Internal Team. The content extracted is later scraped with machine learning to be relevant in the platform’s news feed.
The content extractor is meant to integrate natural-language machine learning to improve the quality of the tags that matter to users in the newsfeed.
Svandis Token Sale Smart Contracts
Started working on the Ethereum Based Smart Contracts for Svandis.
In this quarter, the Svandis token sale protocol was created. The token sale will be tiered and feature a Svandis ERC20 token.
Token Sale Website
The Svandis main website was designed and created to show off the most important information on the Svandis platform.
Svandis key features from the whitepaper are presented on the website. The prospective token sale information has also been listed. Finally, the team and advisors are listed within the website.
Q3 2018
ICO Screeners
Started UI and Schematics For Tokensale Screeners. Created the detail sets for the Tokensale screeners. Internally drafting the schematics for different types of Svandis data available.
Tokensale Screeners are available through the front end, they show important information such as price, industry, country, team members.
This information is meant to be crowdsourced in the future and improved by decentralized Svandis users acting for the benefit of the ecosystem, in return for Svandis rewards.
Alt Coins Info and Screeners
Such as with the Tokensale Screeners, there is also up to date Alt Coin information available on the platform. This information will later be partially crowdsourced for factual information.
Technical information such as the price, price changes and volume are available through the platform.
Technical Whitepaper
This whitepaper was created in the purpose of offering an extended explanation for the goals of the Svandis platform. By concisely breaking down the different components of the full system, we are able to pinpoint why Svandis is useful.
Svandis’ technical architecture allows for the system to take on great loads of data, scrape it and present it to interested blockchain users. Those users have different ways to contribute to the ecosystem, and can furthermore influence the actual content of the Svandis offering.
Svandis offers a news feed for opinion based information through the use of the content extractor and worker. Svandis offers a database of factual cryptocurrency information through crowdsourced data. Users are incentivized to submit factual information about token offerings and active tokens for example.
Svandis Helpdesk
Established the Svandis helpdesk to gain users feedback, offering technical support with the demo, and offer information on the Svandis upcoming token sale.
Customizable Interface
Steps were taken to make the frontend more customizable in appearance for the end user. The UI was significantly improved with new features and settings.
Token Sale Administration Module
In order to help administer the token sale, the team built a typescript angular based administration module for internal use.
To help the token sale move smoothly, the smart contract admin panel will allow the internal team to be able to execute a potential tokensale from their browser in metamask.
Beta Testing
The front end and worker applications were qualified as minimum viable offerings for a demo. Beta users joined Svandis to start building feedback on the application.
Q4 2018
Frontend Newsfeed
In this quarter the frontend had the UI for the newsfeed complete, now users can see newsfeeds from multiple sites around the world.
Globally distributed workers run remotely and provide processing power to scrape set web sources, the content is then extracted through the system and the end result is displayed to the user.
Users can filter the news feed with parameters such as date, categories and sentiment.
MVP for Ethereum User Onboarding tour
Logged-in users on the platform are now able to onboard onto the blockchain. The MVP has been set up for both crypto beginners and expert users. Blockchain keys are set up for a user within the browser, and the user is then able to save the key.
Expert users have the ability to manage their own decentralized identity. They can save their key, and if it is lost they are the only ones who can recover it.
Beginner users are able to manage their own decentralized identity as well, however we introduce centralization as we give Svandis a key. This way, although the user can save their key, Svandis is able to recover their blockchain identity if they lose the key stored in their browser.
The strategy here is to let users gain a decentralized identity so they can submit signed data to the platform. This verifies users’ identities and provides a way for Svandis to assign reputation to a user. Unlike centralized token sale platforms, the user’s reputation is linked to their ability to submit factual, unbiased information about projects.
Users do not require MetaMask to interact; everything is done through the browser. Fees are collected from the user to cover the cost of backend processes: Svandis submits the user’s decentralized identity to the blockchain on their behalf, using a signed message to confirm it was them. Svandis covers gas fees.
This feature was inspired by Ethereum Improvement Protocol EIP 1078 for Universal Login.
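As an illustration of how a relayer like this could check a user’s signed message before spending gas on their behalf, here is a minimal sketch using the eth-account Python library. The function name, message format and parameters are hypothetical, not Svandis’ actual backend code:

from eth_account import Account
from eth_account.messages import encode_defunct

def verify_signed_registration(payload, signature, claimed_address):
    # payload: the text the user signed in their browser (hypothetical format)
    # signature: the signature produced by the key held in the user's browser
    message = encode_defunct(text=payload)
    recovered = Account.recover_message(message, signature=signature)
    # only submit the identity on-chain if the signature matches the claimed address
    return recovered.lower() == claimed_address.lower()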
Svandis Smart Contract Backend | https://medium.com/svandis/svandis-development-progress-d56ff372cd5b | [] | 2019-05-14 18:38:13.019000+00:00 | ['Data Mining', 'Development', 'Crypto', 'Blockchain'] |
How to Write a High-Quality Article in 1 Hour or Less | How to Write a High-Quality Article in 1 Hour or Less
So that you can write more and make more money. Quality doesn’t lie in the effort
Photo by Nick Morrison on Unsplash
I used to think the more effort you put in and for longer, the better the article or piece of content will be. Combine this belief with ever-lasting perfectionism (like in my case), and you get a deadly match.
After thousands of hours spent working as a freelance writer, I found out how shallow and dangerous this mindset really is. It can deplete your energies and waste your best hours of focus — leaving you to burn out every few weeks or months.
I proved this idea wrong when I got my first semi-viral article here on Medium. I was experimenting with shortening my writing time, and that article took me exactly one hour to write, edit and publish. I definitely didn’t think it was a good article, but I decided to publish it anyway just because.
Not only did it get accepted, but it received a ton of attention, too. I published it months ago and it still receives comments and highlights on a daily or weekly basis. I would have probably thrown that article away because I didn’t “put enough effort into it,” and there it was, receiving comments about how useful people found it.
At that point, I asked myself what the real goal of writing was for me: to spend hours reading every paragraph 50 times, or to be helpful?
I started fighting my perfectionism habits relentlessly, so I could write more and help more. And this is why we’ll talk about how it’s absolutely possible to write a perfectly-good, high-quality article in one hour only, including everything, from writing to publishing. Let’s start. | https://medium.com/better-marketing/how-to-write-a-high-quality-article-in-1-hour-or-less-2eb5263ceb39 | ['Celeste Galizia'] | 2020-03-11 10:49:29.156000+00:00 | ['Writing Tips', 'Content Writing', 'Article Writing', 'Writing', 'Self Improvement'] |
Stanford among Best Feeder Universities to Top Graduate Programs | by Autumn Carter
In September 2003, The Wall Street Journal published an article called “Want to Go to Harvard Law?” along with its rankings of the nation’s top 50 feeder schools to the top Business, Law, and Medical programs. A feeder university is a university whose undergraduate students often attend a given Professional or Graduate School. In its “Behind the Rankings” portion, the article states, “Traditionally, college rankings have focused on test scores and grade averages of kids coming in the door. But we wanted to find out what happens after they leave — and try to get into prestigious grad schools.”
Appearing in 2003, just in time for the start of undergraduate and graduate school application season, the rankings immediately incited controversy. At the top, the list looked much like the standard annual undergraduate rankings. Our own Stanford University came in fourth ahead of Williams College (№5), but behind Harvard (№1), Yale (№2), and Princeton (№3). But some of the traditional undergraduate powerhouses found themselves lower-ranked than usual. The University of Pennsylvania found itself at number 16, Georgetown at 17, Cornell at 25, and Berkeley at 41.
Lower-ranked universities immediately criticized the ranking methodology. For instance, The Journal only considered their own top 5 Business, top 5 Law, and top 5 Medical schools in their sample of elite destination schools. But surprisingly, the Journal included none of Stanford’s professional schools in those lists. Furthermore, the admissions data gathered constituted only one year’s worth of data, meaning that the results were very susceptible to reflecting admission trends unique to that year.
Nonetheless, the reaction to the rankings then and their frequent mention today in articles and on forums shows that the information had power and still has it today. For example, try this: Google “Business School Feeders.” Then Google “Law School Feeders” and “Med School Feeders” and “Grad School Feeders.”
Unfortunate as it may be, inside and outside the realm of higher education, people often see an individual’s educational pedigree more than they see the individual. Thus, it should come as no surprise that Googling those search terms returns online forum threads galore of frantic undergrads scurrying to determine what their prospects are for being admitted into the nation’s top-ranked Advanced-Degree Programs. And the high schoolers plotting their undergrad plans of entry into Stanford and the like are just as frantic.
It appears that feeder universities may soon become the new fodder for publications like U.S. News and World Report, Business Weekly, and Princeton Review that generate major revenue by ranking universities each year. With undergrads looking to locate their own universities on the lists and high schoolers looking to discover which university will give them the best shot at an MBA, JD, MD, MA, or Ph.D, these ranking publications can potentially tap a market that could be incredibly viable. And furthermore, as universities face ever-more shrewd applicants, impressive feeder university rankings will only generate more interest in individual universities, which will only continue to fuel the obsession with university rankings.
Still, while one’s educational pedigree may matter to many, we know that it is simply not enough to guarantee placement anywhere. Michael Robinson, ’09, is currently a student at UC Davis’s School of Medicine and he says, “Having a feed-forward type of mind really worked for me. […While an undergraduate,] I was more concerned with my progress after I completed my undergraduate degree, meaning I was already preparing for graduate/medical school. I took this mindset with me into my more difficult classes and it turned out great.”
Ultimately, attending Stanford as a feeder university will mean less than attending Stanford and feeding forward one’s perspective. The responsibility and power truly rests with the individual who sees his future in graduate school approaching and chooses to prepare for it. That undergraduate should emerge from Stanford well-fed. | https://medium.com/stanfordreview/stanford-among-best-feeder-universities-to-top-graduate-programs-830950948123 | [] | 2016-12-11 10:22:47.508000+00:00 | ['Education', 'Startup'] |
The Art of Unifying Broken Pieces | With a desire to connect faces to this insidious disease, people where asked to participate in a project. Everyone who participated were asked the same questions and their responses developed into poems sharing their experiences.
https://medium.com/faces-of-coronavirus/the-art-of-unifying-broken-pieces-e37f1e27962d | ['Brenda Mahler'] | 2020-12-27 21:11:47.412000+00:00 | ['Faces Of Covid', 'Reflections', 'Poetry', 'Coronavirus', 'Covid 19'] |
Listening to the Founding Masters of UX Talk About UX! Observations from Taking Nielsen Norman Group Courses
UX Designer / Information Architect from Taiwan, now living in Seattle. | https://medium.com/as-a-product-designer/nielsen-norman-group-conference-experience-5b4ae7de795a | ['Jasmine Lin'] | 2020-07-13 03:50:28.508000+00:00 | ['學習', '設計', 'Design', '設計思考', '中文'] |
Don’t Let COVID-19 Doom Your Holidays | Don’t Let COVID-19 Doom Your Holidays
You just have to get a little creative.
This year has been challenging, to say the least. From the way we work to the way we socialize, we have all ultimately hit the ‘reset’ button. But perhaps there are still ways to keep our spirits up during this time.
As Christmas season approaches, we all have to remember that there is still an ongoing pandemic and that we need to spend it safely and responsibly.
Depending on where you live, you may still be able to see a small bubble of friends from a distance or perhaps another household other than your own.
Regardless of your situation, there are still ways to keep your holiday merry and bright. Maybe this year is the year to really cherish those around us. | https://medium.com/the-partnered-pen/dont-let-covid-19-doom-your-holidays-32635ae8ff8f | ['Katy Velvet'] | 2020-12-08 05:11:33.126000+00:00 | ['Christmas', 'Mental Health', 'Family', 'Relationships', 'Friendship'] |
Kafka-Python explained in 10 lines of code | Although it’s not the newest library Python has to offer, it’s hard to find a comprehensive tutorial on how to use Apache Kafka with Python. By means of approximately ten lines of code, I will explain the foundations of Kafka and it’s interaction with Kafka-Python.
Setting up the environment
First of all you want to have installed Kafka and Zookeeper on your machine. For Windows there is an excellent guide by Shahrukh Aslam, and they definitely exist for other OS’s as well.
Next install Kafka-Python. You can do this using pip or conda, if you’re using an Anaconda distribution.
pip install kafka-python
conda install -c conda-forge kafka-python
Don’t forget to start your Zookeeper server and Kafka broker before executing the example code below. In this example we assume that Zookeeper is running on its default localhost:2181 and Kafka on localhost:9092.
We are also using a topic called numtest in this example; you can create a new topic by opening a new command prompt, navigating to …/kafka/bin/windows and executing:
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic numtest
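If you want to double-check the topic before wiring up any Python, you can list the existing topics or tail the new topic with the console consumer that ships with Kafka (Windows commands shown; note that on recent Kafka versions the topic tool takes --bootstrap-server localhost:9092 instead of --zookeeper):

kafka-topics.bat --list --zookeeper localhost:2181
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic numtest --from-beginning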
What is Kafka?
Simply put, Kafka is a distributed publish-subscribe messaging system that maintains feeds of messages in partitioned and replicated topics. In the simplest way there are three players in the Kafka ecosystem: producers, topics (run by brokers) and consumers.
Producers produce messages to a topic of their choice. It is possible to attach a key to each message, in which case the producer guarantees that all messages with the same key will arrive to the same partition.
Topics are logs that receive data from the producers and store them across their partitions. Producers always write new messages at the end of the log. In our example we can make abstraction of the partitions, since we’re working locally.
Consumers read the messages of a set of partitions of a topic of their choice at their own pace. If the consumer is part of a consumer group, i.e. a group of consumers subscribed to the same topic, they can commit their offset. This can be important if you want to consume a topic in parallel with different consumers.
The offset is the position in the log where the consumer last consumed or read a message. The consumer can then commit this offset to make the reading ‘official’. Offset committing can be done automatically in the background or explicitly. In our example we will commit automatically in the background.
Let’s code
In our example we’ll create a producer that emits 1,000 numbers (0 through 999) and sends them to our Kafka broker. Then a consumer will read the data from the broker and store it in a MongoDb collection.
The advantage of using Kafka is that, if our consumer breaks down, the new or fixed consumer will pick up reading where the previous one stopped. This is a great way to make sure all the data is fed into the database without duplicates or missing data.
Create a new Python script named producer.py and start with importing json, time.sleep and KafkaProducer from our brand new Kafka-Python library.
from time import sleep
from json import dumps
from kafka import KafkaProducer
Then initialize a new Kafka producer. Note the following arguments:
bootstrap_servers=[‘localhost:9092’]: sets the host and port the producer should contact to bootstrap initial cluster metadata. It is not necessary to set this here, since the default is localhost:9092.
value_serializer=lambda x: dumps(x).encode(‘utf-8’): function of how the data should be serialized before sending to the broker. Here, we convert the data to a json file and encode it to utf-8.
producer = KafkaProducer(bootstrap_servers=['localhost:9092'],
                         value_serializer=lambda x: dumps(x).encode('utf-8'))
Now, we want to generate the numbers 0 through 999. This can be done with a for-loop where we feed each number as the value into a dictionary with one key: number. This is not the topic key, but just a key of our data. Within the same loop we will also send our data to the broker.
This can be done by calling the send method on the producer and specifying the topic and the data. Note that our value serializer will automatically convert and encode the data. To conclude our iteration, we take a 5-second break. If you want to make sure the message is received by the broker, it’s advised to include a callback.
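As a minimal sketch of what such a callback could look like (the handler names here are mine, not part of kafka-python), you can chain handlers onto the future object that send returns. Inside the loop below, the plain send line could become:

def on_send_success(record_metadata):
    # record_metadata carries the topic, partition and offset of the delivered message
    print('delivered to {} [partition {}]'.format(record_metadata.topic,
                                                  record_metadata.partition))

def on_send_error(excp):
    print('delivery failed: {}'.format(excp))

producer.send('numtest', value=data).add_callback(on_send_success).add_errback(on_send_error)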
for e in range(1000):
    data = {'number': e}
    producer.send('numtest', value=data)
    sleep(5)
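One caveat: send is asynchronous and only queues the message in a buffer. If your script could exit right after the loop, it is safer to block until everything has actually reached the broker:

producer.flush()  # block until all buffered messages have been delivered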
If you want to test the code, it’s advised to create a new topic and send the data to this new topic. This way, you’ll avoid duplicates and possible confusion in the numtest topic when we’re later testing the producer and consumer together.
Consuming the data
Before we start coding our consumer, create a new file consumer.py and import json.loads, the KafkaConsumer class and MongoClient from pymongo. I won’t dig any deeper in the PyMongo code, since that’s outside the scope of this article.
Furthermore, you can replace the mongo code with any other code. This can be code to feed the data into another database, code to process the data or anything else you can think of. For more information about PyMongo and MongoDb, please consult the documentation.
from kafka import KafkaConsumer
from pymongo import MongoClient
from json import loads
Let’s create our KafkaConsumer and take a closer look at the arguments.
The first argument is the topic, numtest in our case.
bootstrap_servers=[‘localhost:9092’]: same as our producer
auto_offset_reset=’earliest’: one of the most important arguments. It determines where the consumer starts reading when it has no valid committed offset (for example, the first time this consumer group runs) and can be set either to earliest or latest. When set to latest, the consumer starts reading at the end of the log; when set to earliest, it starts at the beginning. Once an offset has been committed, the consumer simply resumes from it after breaking down or being turned off, which is exactly what we want here.
enable_auto_commit=True: makes sure the consumer commits its read offset every interval.
auto_commit_interval_ms=1000: sets the interval (in milliseconds) between two commits. Since messages are coming in every five seconds, committing every second seems fair.
group_id=’my-group’: this is the consumer group to which the consumer belongs (matching the code below). Remember from the introduction that a consumer needs to be part of a consumer group to make the auto commit work.
The value deserializer converts the data back from json into a Python dictionary, the inverse of what our value serializer was doing.
consumer = KafkaConsumer(
    'numtest',
    bootstrap_servers=['localhost:9092'],
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    group_id='my-group',
    value_deserializer=lambda x: loads(x.decode('utf-8')))
The code below connects to the numtest collection (a collection is similar to a table in a relational database) of our MongoDb database.
client = MongoClient('localhost:27017')
collection = client.numtest.numtest
We can extract the data from our consumer by looping through it (the consumer is an iterable). The consumer will keep listening until the broker doesn’t respond anymore. A value of a message can be accessed with the value attribute. Here, we overwrite the message with the message value.
The next line inserts the data into our database collection. The last line prints a confirmation that the message was added to our collection. Note that it is possible to add callbacks to all the actions in this loop.
for message in consumer:
    message = message.value
    collection.insert_one(message)
    print('{} added to {}'.format(message, collection))
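As a side note, if you want the consumer to shut down cleanly when you interrupt it (for instance with Ctrl+C), one possible pattern is to catch the interrupt and close the consumer, which leaves the consumer group gracefully and, with auto commit enabled, commits the final offsets:

try:
    for message in consumer:
        collection.insert_one(message.value)
except KeyboardInterrupt:
    pass
finally:
    consumer.close()  # leaves the group and commits final offsets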
Testing
Let’s test our two scripts. Open a command prompt and go to the directory where you saved producer.py and consumer.py. Execute producer.py and open a new command prompt. Launch consumer.py and look how it reads all the messages, including the new ones.
Now interrupt the consumer, remember at which number it was (or check it in the database) and restart the consumer. Notice that the consumer picks up all the missed messages and then continues listening for new ones.
Note that if you turn off the consumer within 1 second after reading the message, the message will be retrieved again upon restart. Why? Because our auto_commit_interval is set to 1 second, remember that if the offset is not committed, the consumer will read the message again (if auto_offset_reset is set to earliest).
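If that one-second replay window matters for your use case, a stricter variant (a sketch of an alternative, not what the code above does) is to disable auto commit and commit explicitly only after the document is safely in MongoDB:

consumer = KafkaConsumer(
    'numtest',
    bootstrap_servers=['localhost:9092'],
    auto_offset_reset='earliest',
    enable_auto_commit=False,  # we commit by hand below
    group_id='my-group',
    value_deserializer=lambda x: loads(x.decode('utf-8')))

for message in consumer:
    collection.insert_one(message.value)
    consumer.commit()  # mark the message as processed only after the insert succeeds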
— Please feel free to bring any inconsistencies or mistakes to my attention in the comments or by leaving a private note. —
Acknowledgements
This article is by no means a complete guide to Kafka or Kafka-Python, but rather a comprehensive teaser that will familiarize you with essential Kafka concepts and how to transform these in useful Python code.
For more advanced topics reading the documentation is advised. If you want to deploy code, it is probably a good idea to take a look at Confluent-Kafka and this post by Russell Jurney.
Sources
Kafka-Python documentation
Consume JSON Messages From Kafka using Kafka-Python’s Deserializer
Apache Kafka documentation
Cloudera Kafka documentation
Putting Apache Kafka To Use: A Practical Guide to Building a Streaming Platform
Introducing the Kafka Consumer: Getting Started with the New Apache Kafka 0.9 Consumer Client | https://towardsdatascience.com/kafka-python-explained-in-10-lines-of-code-800e3e07dad1 | ['Steven Van Dorpe'] | 2018-08-13 13:56:11.917000+00:00 | ['Python', 'Kafka Python', 'Consumer', 'Producer', 'Kafka'] |
Predicting the leaps of Schrödinger’s Cat | Predicting the leaps of Schrödinger’s Cat
Researchers have deciphered one of the key mysteries of quantum mechanics — predicting sudden ‘leaps’ in a system’s state. In doing so, they have devised a method to finally rescue the most famous moggy in science history.
Yale researchers have figured out how to catch and save Schrödinger’s famous cat, the symbol of quantum superposition and unpredictability, by anticipating its jumps and acting in real time to save it from proverbial doom. In the process, they overturn years of cornerstone dogma in quantum physics.
The discovery enables researchers to set up an early warning system for imminent jumps of artificial atoms containing quantum information.
Yale researchers have found a way to catch and save Schrödinger’s famous cat, the symbol of quantum superposition and unpredictability. (Kat Stockton)
Schrödinger’s cat is a well-known and paradoxical analogy used to illustrate the concept of superposition — the ability for two opposite states to exist simultaneously — and unpredictability in quantum physics.
The idea as presented by Erwin Schrödinger is that a cat is placed in a sealed box with a radioactive source and a poison that will be triggered if an atom of the radioactive substance decays. The superposition theory of quantum physics suggests that until someone opens the box, the cat is both alive and dead — a superposition of states. Opening the box to observe the cat causes it to abruptly change its quantum state randomly.
This forces our hypothetical feline to be either dead or alive.
Don’t jump! Predicting quantum leaps
The quantum jump or leap refers to a discrete — non-continuous — and random change in the state when it is observed.
This new experiment — performed in the lab of Yale professor Michel Devoret and proposed by lead author Zlatko Minev — peers into the actual workings of a quantum jump for the first time. A study announcing the discovery appears in the June 3rd online edition of the journal Nature.
The results reveal a surprising finding that contradicts Danish physicist Niels Bohr’s established view — these jumps, say the researchers, are neither abrupt nor as random as previously thought.
For a tiny object such as an electron, molecule, or an artificial atom containing quantum information (known as a qubit), a quantum jump is a sudden transition from one discrete energy state to another. A key element of developing quantum computers is dealing with the jumps of the qubits — which are the manifestations of errors in calculations.
The enigmatic quantum jumps were theorized by Bohr a century ago, but not observed until the 1980s, in atoms.
Devoret, the F.W. Beinecke Professor of Applied Physics and Physics at Yale and member of the Yale Quantum Institute, explains: “These jumps occur every time we measure a qubit.
“Quantum jumps are known to be unpredictable in the long run.”
Minev continues: “We wanted to know if it would be possible to get an advance warning signal that a jump is about to occur imminently.”
The experiment was inspired by a theoretical prediction by professor Howard Carmichael of the University of Auckland, a pioneer of quantum trajectory theory and a co-author of the study.
Researchers say reliably managing quantum data and correcting errors as they occur is a key challenge in the development of fully useful quantum computers.
The Yale team used a special approach to indirectly monitor a superconducting artificial atom — three microwave generators irradiating the atom enclosed in a 3D cavity made of aluminium. This doubly indirect monitoring method — developed by Minev for superconducting circuits — allows the researchers to observe the atom with unprecedented efficiency.
Microwave radiation stirs the artificial atom as it is simultaneously being observed — resulting in quantum jumps — the tiny quantum signal which results can be amplified without loss to room temperature. Thus allowing the signal to be monitored in real time.
This enables the researchers to see a sudden absence of detection photons; this tiny absence alerts them to an imminent quantum jump.
Devoret continues: “The beautiful effect displayed by this experiment is the increase of coherence during the jump, despite its observation.
“You can leverage this to not only catch the jump — but also reverse it.”
Why is this so significant?
The crucial point, the researchers say, is that while quantum jumps appear discrete and random in the long run, reversing a quantum jump means the evolution of the quantum state possesses, in part, a deterministic and not random character; the jump always occurs in the same, predictable manner from its random starting point.
Minev says: “Quantum jumps of an atom are somewhat analogous to the eruption of a volcano.
“They are completely unpredictable in the long term. Nonetheless, with the correct monitoring, we can with certainty detect an advance warning of an imminent disaster and act on it before it has occurred.”
In addition to its fundamental impact, the discovery is a potentially major advance in understanding and controlling quantum information. One of the major hurdles with controlling quantum systems is their inherent randomness.
Whilst this development doesn’t remove that non-deterministic nature — nothing can, it’s intrinsic — the ability to predict this randomness is invaluable.
Original research: DOI: 10.1038/s41586-019-1287-z | https://medium.com/swlh/predicting-the-leaps-of-schr%C3%B6dingers-cat-advanced-warning-of-randomness-in-quantum-mechanics-c8071ca3a662 | ['Robert Lea'] | 2019-06-03 15:22:52.708000+00:00 | ['Physics', 'Quantum Physics', 'Science', 'Quantum Computing', 'Quantum Mechanics'] |
Setting the stage for the machine intelligence era in marine science | A new themed set published in ICES Journal of Marine Science explores the application of artificial intelligence and machine learning in marine science. The following excerpt is from the introduction to the themed set:
Artificial intelligence (AI) is increasingly being applied to all kinds of data. Some applications of AI are face recognition systems, natural language processing (e.g. speech recognition, language understanding, language generation, and language translation), disease detection systems, video surveillance, quality inspection in manufacturing, product design and creation, robotics, and self-driving cars (Dargan et al., 2019). It is accurate to say that AI is now everywhere, from our smartphones, to web-browser, to cars.
Machine learning (ML), which is a subfield of AI, implements dynamic models resulting in data-driven decisions. ML techniques can be applied to high-dimensional (Fan et al., 2009), nonlinear, complex, and big data. Further, the ML approach is effective even in cases where the data are noisy (e.g. Frenay and Verleysen, 2014; Xiao et al., 2015) or some identification labels are missing (McKnight et al., 2007; Aste et al., 2015). ML is also able to address the small sample size problem: so-called zero or few-shot learning (Huo et al., 2019). What makes ML most appealing is its capacity to handle problems that are impossible or too challenging for traditional approaches, which require many people and considerable time and resources to produce the desired accuracy. In other words, ML provides not only effective solutions, robustness, and accuracy but also efficiency as it can rapidly process huge amounts of data.
Deep learning (DL), inspired by the structure and function of the human brain, is a subfield of ML that involves the use of artificial neural networks (ANNs). ANN can take several forms, including recurrent neural networks (Hochreiter and Schmidhuber, 1997) and convolutional neural networks (CNNs) (Krizhevsky et al., 2012). Although ANNs are not new, their wide use only became practical after the development of massively parallel graphical processing units (GPUs). GPUs provide computation power and fast processing so that DL architectures running on GPUs can analyse huge amounts of data quickly and efficiently. In 2012, Krizhevsky et al.(2012) proved that CNNs can achieve a high level of accuracy in image classification. The success of CNNs has been extended to other computer vision tasks, for example object localization (Ren et al., 2015; Redmon et al., 2016), semantic segmentation (Long et al., 2015; Badrinarayanan et al., 2017), natural language processing for speech recognition (Hinton et al., 2012), machine translation (Sutskever et al., 2014), optical character recognition (Goodfellow et al., 2014), face recognition and verification (Taigman et al., 2014), object recognition (Xiao et al., 2015), and so forth. All of these will, in due course, be applied to data analysis in many branches of research, including marine science.
Studying and sustainably managing marine ecosystems presents special challenges because they are three dimensional, expansive, very dynamic, and complex. These characteristics require data collection over a wide range of spatiotemporal scales, which has been a major challenge (Godø et al., 2014; Janzen et al., 2019). Rapid progress in sensors, information, and communication technologies now allows marine scientists to collect large volumes of data at ever lower cost.
Studying and sustainably managing marine ecosystems presents special challenges because they are three dimensional, expansive, very dynamic, and complex.
Moored buoys support long-term monitoring and high resolution measurements of physical, chemical, and biological variables, as well as acoustics, at fixed locations and transmit their data in real-time via satellite uplink or cabled connection to shore (e.g. Aguzzi et al., 2015; Van Engeland et al., 2019). However, they are limited to monitoring depths from the seabed to the ocean surface. Sophisticated and heavily instrumented towed observation platforms, and autonomous drones, are collecting large volumes of data of many types (e.g. De Robertis et al., 2019; Lombard et al., 2019; Verfuss et al., 2019). However, the capacity of human experts to filter, curate, and analyse all of these data is limited. This is where ML and AI will be making greater-and-greater contributions as methods improve and are implemented more broadly.
ML can be applied to automate various routine tasks in marine science. The prediction of ocean weather, for example detecting sea surface temperature (Tanaka et al., 2004; Wu et al., 2006), habitat modelling (Krasnopolsky, 2009; Thessen, 2016), modelling monsoons (Cavazos et al., 2002), forecasting sea level fluctuations (Makarynskyy et al., 2004), wind and wave modelling (Forget et al., 2015; James et al., 2018), and the detection of acute situations, for example oil spill and other point sources of pollution (Kubat et al., 1998) are just some of the applications. Continuous underwater video and acoustic surveillance systems are rapidly developing tools to monitor marine life while computer vision and ML techniques contribute by automatically analysing the massive data streams from these platforms (e.g. Fisher et al., 2016). These data can already be used to extract higher-level interpretations by automatically detecting and tracking fish underwater (Spampinato et al., 2008), identifying fish species (Joly et al., 2015; Siddiqui et al., 2018; Villon et al., 2018; Allken et al., 2019), and estimating swimming trajectories and speeds (Beyan et al., 2018). Eventually, it will be possible to use time series of these data streams to assess changes in species abundance and distribution, environmental change, predator–prey relationships, and more (Fisher et al., 2016). Baited cameras and camera traps allow data to be collected without disturbing animals, which produces high volumes of images that can be analysed by using DL techniques (e.g. Tabak et al., 2019). There is also great potential to apply DL to automatic fish identification, counting, and sizing on fishing vessels (e.g. Bartholomew et al., 2018).
In this context, the objective of this themed set of articles was to bring together contributions on the broad theme of the applications of AI, ML, DL, and advanced data systems (e.g. block chains) to research, monitoring and management of marine organisms and ecosystems.
The articles that appear in this themed set, and the many relevant articles that they cite, demonstrate that AI is already a very helpful tool in a wide variety of applications in marine science. | https://medium.com/science-uncovered/machine-intelligence-in-marine-science-649204092ce2 | ['Oxford Academic'] | 2020-08-04 11:01:01.595000+00:00 | ['Oxford University Press', 'Marine Science', 'Artificial Intelligence', 'Machine Learning'] |
Software Engineer. What to learn or practice after graduation? | Hello everyone. I hope you are doing great and have wonderful time these days. I know some of you have just graduated or just want to find some more good information what to do next or just a new idea… Anyway, you will find something for yourself. After graduation you can feel different emotions(good and bad), you can be lost in your tech path and you can still feel sad that you don’t know something. I would like to tell you that’s alright. Everyday you have challenges, we have to move forward to our goals and dreams and they always are bittersweet. One day you can feel you know everything, on the another day you understand that it was just small part of a huge universe of knowledge. It’s alright. Unfortunately, we can’t have all knowledge just in one day. Just small steps will bring you to success. I have collected all good experiences from different people who are working right now in different companies and also included my own. So, what the next step?
1. Learn new technology(coding language) or keep coding on what you know?
I guess this is one of the most important questions that you might have. Certainly, it depends how do you feel about knowing the language. Okay, just ask truly yourself if you ready to right full app by yourself on this on technology? If yes, I would recommend you just still practicing and develop a new skills in this technology, to make it an ideal. Don’t rush yourself to learn everything(If you know React you can also learn how deploy your project, learn something new about graphs, how to do better styling in your projects). If the answer is no, I would honestly suggest you to go through one more course online and build very small projects by your own again. For example, you studied at bootcamp or college JavaScript, React and Ruby on Rails and you still want to become as a Software Engineer, to my point of you, if you don’t understand JavaScript and React or Ruby. It’s not because you are bad student that you don’t understand. Maybe for you, teacher didn’t explain in that way that you would understand. Everything was super fast that you couldn’t understand in the first time. Just go to Udemy.com they have a tons of information and nice courses. You can buy with discount for 10$-15$ a good course from that will explain you everything from the beginning and to the end. Remember, if you want to have a good teacher and good courses you have to pay. Of course, you can find some information on Youtube or just read tutorials from official websites of technologies. To my opinion, it’s also good practice when you follow someone’s instructions and step by step in monkey way you write code with a person who knows. After all, you will understand more things then you can write by your own sometimes. There are a lot of more sources like Coursera, Codeacademy, …
2. Learn algorithms and data structures.
I know a lot of people hate them, but we have to know the logic how everything works in coding and of course the second or third interview when you are applying for a job, definitely will start with some logical tasks. So for my recommendation, just one algorithm per day will help you with a future interview and also you improve your skills in your language. IT’S REALLY IMPORTANT. There are some good resources: Geeksforgeeks, Udemy. Also to practice more HackerRank.
3. Write your whole project by yourself?
Exactly, people want to see your knowledge after you learned something. People want to see the results. And the result will be your project. Even a small one. It’s hard and it is a lot for beginner software engineer. We are demanded to do more things day by day. The world is growing fast so that we have do the same. Create a new project, write in your new or favorite language, push to your Github every time you did something new. People who are interested in you will check your Github and they will see what you have done and what you didn’t. If you never worked on Github, I insist on learning that, if you are still not good in it. Learn how to deploy your own projects with Github Pages, Firebase or Heroku. Write your own portfolio and show to everybody what you did.
4.How to get work experience or how to feel work in team?
Well there are a lot of good ideas. First of all, find some people who can be with you during your first experience. Find someone who will be happy to work with you on the same project. You can have your own idea and implement this idea in real world. After that you can participate in hackathons and find someone who will be happy to give money for growing your idea. You never know. But other side is to participate in different open source projects(volunteering jobs). I understand that it’s for free, but you will understand what to be a part of the team. Also from the tech side, you will understand others programmers code and add your own one.
To sum up everything, you are boss of your future. Nobody knows what is better for you, but I am happy to share with you some experience that I have. Meet more people, read more information everywhere and try everything that you are passion about. | https://medium.com/dev-genius/software-engineer-what-to-learn-or-practice-after-graduation-d675a5feed68 | ['Yuriy Berezskyy'] | 2020-06-16 08:02:04.746000+00:00 | ['JavaScript', 'Software Development', 'Coding', 'Software Engineering', 'Programming'] |
Why, Yes, I Do Feel Like an Enormous Fraud | I don’t really like to talk about imposter syndrome. It’s not something I typically get into when I write about writing because, for me, it’s up there with writer’s block — everybody talks about it, everybody has an opinion about it, but neither of those things means my thoughts about it will make a difference.
For a long time, I’ve sort of just taken it for granted that every writer feels painfully inadequate some days. Don’t they?
Toward the beginning of October, I wrote a story that’s had about 360,000 views. It was a nice enough ride until the traffic began to sputter, and I wound up feeling a bit like a failure.
Maybe the story wasn’t good enough. Maybe other writers would have done much better. Maybe it was weird that the story quit spiking when it did. On and on it went with my mind making up imaginary problems just because my traffic went down again. Back to normal. Suddenly, my normal felt like a real deficit.
Everybody who deals with imposter syndrome knows that it doesn’t actually matter what you do or how well you do it. It follows you around wherever you go and beats you down just because it can. Imposter syndrome is opportunistic, and lately, it seems to find a lot more faults in me.
The truth is that my writing’s changed a lot in 2020. I suspect there’s a lot of that going around this year. A lot of general uneasiness among creatives.
For more than two-and-a-half years, I’ve been known for writing deeply vulnerable stories about my life and all the gory blunders. My best writing feels a lot like opening my veins — it’s cathartic — not just for me, but for certain readers too. For the readers who get it, who need it, there’s nothing tawdry or low-brow about bleeding on the page.
Even so, I find I do a lot less bleeding on the page these days. It feels like 2020 has chewed up and spit out more than enough of me. Much of it has to do with being a single working mother in the middle of a freaking pandemic. Some of it has to do with various health problems during a viral pandemic. A lot of it has to do with hitting perimenopause before I’m even forty… in the middle of a fucking viral pandemic.
And then, you know, there’s the election.
I mean, I’m tired and exhausted. Emotionally, mentally, and physically depleted. It feels a helluva lot better —or even safer? — to lean into cultural commentaries. Slightly less personal politics. Unpacking, reliving, and learning from my past? God, that’s so pre-corona.
For the past seven months or so, I’ve lived with so much heightened stress that I can’t get too excited about any deeply personal topic. And hello, imposter syndrome. I see how much you like to take advantage of that. | https://medium.com/honestly-yours/why-yes-i-do-feel-like-an-enormous-fraud-d70c06058972 | ['Shannon Ashley'] | 2020-10-22 07:59:18.584000+00:00 | ['Self', 'Life Lessons', 'Blogging', 'Success', 'Writing'] |
Subscription Boxes I Hereby Will To Loved Ones | Our generation’s legacy.
Photo by Erda Estremera on Unsplash
In the event of my untimely death, untimely being any point during which the reaction is, “What? How!?” and before it evolves to, “Oh no, that’s so sad,” I hereby will to my loved ones all of the following incredibly well-branded subscription boxes. While I had intended to leave a much larger bequest, the convenience and perpetual Instagram marketing of these services was relentless. As a digital consumer, I succumbed, as many do. I’m certain you tracked the delivery of your organic bathtub disinfectant just this morning.
Therefore the full list of my subscription boxes are itemized below, to be distributed in equal parts to anyone I care about who remains alive and still shopping for things in an actual store and even then only when they actually need them at the time of my demise.
Makeup Samples: The original. Who among us can say no to teeny mascara?I’ve received monthly boxes of small beauty products since early 2012 so this gift is, in some ways, a cherished antique. May this subscription bring you joy and far better quality lotion to smear on your hands and face in-flight.
Books: By default, you have just inherited my memorial library. By now the stockpile of books should extend well into the garage and the plastic Home Depot shed I bought to catch the overflow out back. Lend books to anyone who comes to visit or even walks past the property. Do not specify a return date or impose late fees. I do ask that you stamp each one with the “please recycle” ink pad I keep near the coffee pot.
Deodorant: I know what you’re thinking. How much deodorant can one person use? In truth, far less than these people want to send you. Given that they email me regarding new deodorant twice a month, I have enough stockpiled in the foyer for you to use to grease an amusement park waterslide. It’s all natural. You’ll love it.
Skincare: My skincare subscription is tailored entirely to both the physical properties of my skin as well as my own personal concerns, but nevertheless I hope you enjoy the posh packaging.
Tampons: Honestly I don’t even want these I just can’t figure out how to get them to stop sending them. Maybe you’ll have better luck.
CBD Gummies: I thought about just leaving this to the person among you I like the most, but I don’t want to cause postmortem infighting. Somewhere in the last decade I began utilizing CBD gummies as needed for anxiety or just when I wanted a little something fruity. They are delicious and they might even work.
New Moon Worship: Prior to each month’s new moon, you’ll receive a box containing intention-setting instructions, crystals, and a ritual sacrifice provided it’s survived the FedEx truck. Only open it in the northwest corner of your home and even then only while utilizing extreme caution and waving lit sage overhead. Repurpose the collectible box prior to the following full moon, this is of the utmost importance, though to be fair I’ve never looked into why.
Tea: My monthly tea subscription box is one of my absolute favorites. It is also why you can’t open the far left cupboard in my kitchen without wearing protective gear, I make no claims about what will or will not spill out atop your person with extreme velocity. I’m sorry, there’s only so much tea I can drink and it’s so lovely I could never bring myself to give it away. If you all prefer coffee just box it up and leave it outside.
Plants: I’ve left a machete by the back door that you can use to hack through any existing deliveries taking root in the yard, in order to make room for new additions. None of the prior deliveries have been in any way ecologically compatible with one another, so there’s a very unnatural and quite frankly toxic ecosystem currently pulsating beneath the soil, but it’s so nice to get a package once a month.
Wine: In 2022 I turned the basement into a wine cellar for both storage and as a hosting space for my thrice-yearly tasting events. Previous guest lists are hanging on a rusty nail toward the back. Send each guest home with a case, I find it’s quite fun to remove the labels prior, always makes for fun texts throughout the year. We have found though that the 2017 merlot goes very well with trail mix from Aldi.
Cat Stuff: I never had the heart to cancel this following Mittens’ death, I think we all know what it’s like to have difficulty letting go. I’ve taken to leaving all food items on the porch for local vermin and all feather toys have gone toward a duvet I’ve been stuffing in the guest room. The quirky t-shirts are great to keep around in case you ever want to paint a room.
Coffee: Please don’t fight over this one, I assure you there’s plenty. I’ve included measuring spoons and reusable jars for you all to disperse each delivery in equal shares. You should also check the freezers in the kitchen, laundry room, and car port, as I find that it keeps fresher, longer at cool temperatures.
Pickles: This is one box that makes me sad I’ve died. Anyone who’s a fan of these crunchy, briny, preserved delights will absolutely marvel at the things they can manage to pickle these days. Given the naturally lengthy shelf life of these snacks, you should all find plenty to divide amongst yourselves in both the pantry as well as in the crawl space beneath the dining room. You won’t find any spicy dills however, as they were my favorite. July’s box is rumored to have pickled scotch bonnets, I’m so excited for you.
Socks: I wore a ladies size 8.5 shoe during my lifetime, so whoever has feet of similar width and length is welcome to this one. I never did care much for slippers, I always found it so charming to look down and see a whimsical pattern of frogs or comic book heroes instead. If you tie them end to end I’ve also found they make a surprisingly sturdy escape rope which you can toss out the second story window in the event of an emergency. There are seven in the closet in the hall.
Fragrance: There’s a hazmat suit hanging outside the guest bath, I suggest employing it prior to entry. While I used to pour unwanted scent down the drain, this caused eventual corrosion to the point of property devaluation, so now I just pour them all into the tub and leave them there.
Spices: You’ll want to use the gloves provided to open these, and it’s best if those with seasonal allergies avoid this subscription altogether. Due to a concern regarding fumes, prior boxes have been hermetically sealed and placed in that giant drawer under the oven people use for pots. I’ve also found that when mixed with good quality vodka, these can be used as fuel for the common automotive engine.
Jewelry: See those wind chimes on the porch? I’ve been fashioning them from these boxes and selling them on Etsy. It’s good pocket money.
Flowers: The fiscally responsible thing would be to simply use the next delivery as the floral arrangement for my service, but if the UPS guy is late (Gary always is), just use a bubble machine or something. There’s a closet in the hall I’ve been using to store all of the DIY potpourri I’ve made from prior deliveries, you’re also welcome to use it as attic insulation.
Boxes: At first blush, I’ll admit a monthly delivery of boxes isn’t nearly as exciting as the others, certainly not as welcome as the cheese. (Which I discontinued last year following a physician’s advice.) But I’ve found them surprisingly useful! Please feel free to exercise the “coffin” option for this month’s delivery, it’s a premium feature but I’ve left money in the empty coffee can on the windowsill above the sink to cover the difference. | https://shanisilver.medium.com/subscription-boxes-i-hereby-will-to-loved-ones-ca416063270b | ['Shani Silver'] | 2020-01-08 13:19:40.401000+00:00 | ['Humor', 'Life', 'Fiction', 'Culture', 'Writing'] |
Significance of the derivative | CALCULUS DERIVATIVES
Significance of the derivative
The process of finding critical values
After derivative theory posts, we will start to see some of the applications that make this technique one of the most important in mathematics and therefore at machine learning.
Maximum and Minimum of a function
The theorem of the local maximum that we already introduced, can be extrapolated to the whole function domain.
Let f be a function and A a set of numbers contained in the domain of f. A point x in A is a maximum point for f on A if f(x)≥f(y) for every y in A. The number f(x) itself is called the maximum value of f on A.
Notice that there can be multiple maximum points for the same function at distinct x values. The minimum point definition is obtained inverting the previous definition, ergo changing f(x)≥f(y) for f(x)≤f(y).
Derivatives at maximum and minimum points
As you can expect, maximum and minimum points will always be a change in the derivative of the function, that allows us to demonstrate that:
Let f be any function defined on (a,b). If f is a maximum or a minimum point for f on (a,b), and f is differentiable at x, then f’(x)=0.
Local maximums and minimums
Let f be a function, and A a set of numbers contained in the domain of f. A point x in A is a local maximum[minumum] point for f on A of there is some δ > 0 such that x is a maximum[minumum] point for f on A ⋂ (x-δ,x+δ ).
Critical points
Not all x that makes f’(x) = 0 will be maximums or minimums, they are types of critical points:
A critical point of a function f is a number x such that f’(x)=0. The number f(x) is called a critical value of f.
In order to find the maximum and minimum of f, we have to check the following values:
The critical points of f in [a,b] .
. The endpoints, a and b .
and . Points x in [a,b] such that f is not differentiable at.
Some important theorems for critical point detection
The Rolle’s Theorem
If f is continuous on [a,b] and differentiable on (a,b), and f(a)=f(b), then there is a number x in (a,b) such that f’(x)=0.
This can lead to a constant function or to a function that changes the gradient between both values, so f’(x)=0.
The mean value theorem
Rolle theorem allows us to demonstrate the following:
If f is continuous on [a,b] and differentiable on (a,b), then there is a number x in (a,b) such that
Mean value theorem, self-generated.
Classifying critical points
Increasing and decreasing functions
A function f is increasing on an interval if f(a)<f(b) whenever a and be are two numbers in the interval with a<b. The function f is decreasing on an interval if f(a) > f(b) for all a and b in the interval with a<b.
Second derivatives for critical point classification
Now we know how to find a critical point with the first derivative and check the type of them using the left and right derivative values, we can use the second derivative to skip the lateral limit calculations.
Suppose f’(a)=0. If f’’(a)>0, then f has a local minimum at a; if f’’(a)<0 the nf has a local maximum at a.
Two strong theorems for simplification
The Cauchy Mean value theorem
If f and g are continuous on [a,b] and diffrentiable on (a,b), then there is a number x such that [f(b)-f(a)]g’(x)=[g(b)-g(a)]f’(x). If g(b)≠g(a), and g’(x)≠0, this can be written as:
Cauchy mean value theorem, self-generated.
L’Hôpital’s rule
L’hôpital Rule, self-generated.
Conclusion
In this post, we introduced how to use derivatives to find local maximums and minimums, they allow us to find possible solutions to our cost function optimization. This will allow us to determine the gradient easier. | https://medium.com/ai-in-plain-english/significance-of-the-derivative-4c1f505e9b88 | ['Adrià Serra'] | 2020-09-22 18:55:36.715000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Calculus', 'Data Science'] |
The Undeniable Power of the Pause | Let’s take the humble number one.
Seemingly, it’s not worth much on its own.
Add a zero and its value goes up.
Add a few more zeroes, and its might increases even further.
Can we achieve the same effect by adding blank space to our days?
We’ve all heard about famous people whose success is partly attributed to secret power-naps but somehow feel that this does not apply to us.
We live in an age where taking a time-out from work and chores is optional. Struggling to fit in respite is normal, even something to be lauded. Even if you haven’t chosen to worship at the altar of Maximum Efficiency at All Times, there is often a niggling sensation at the back of your mind that you should be doing something.
All the same, you’ve read the research — you know that taking breaks will benefit your health, your sanity and yes, even your productivity.
So why then, don’t you take them?
It may be because:
a) You don’t know how.
Most breaks appear to be pointless not simply in terms of what they allow you to accomplish, but also in terms of how they fail to make your life better.
Let’s face it, if you’d found a way to take a breather that was truly refreshing and restorative, your daily schedule would have more breaks in it than a teenage boys’ choir session.
b) Part of you is scared.
Perhaps you’re scared of what you’d discover if you stopped for a while. A break might make you question the value of what you’re doing. Maybe you’d feel you simply couldn’t go back to your task or your job or the country you live in.
These are valid concerns. Let’s address them.
a) You can re-learn how to take breaks that you actually enjoy.
No matter how frazzled your brain is, their is a form of down-time that you will enjoy, if you can find the right one.
During a recent few days spent in a quiet village with a population of 276 inhabitants (a mere 30,000 times smaller than the population of London, my current home town) a few things became started to become clear, not the least of which was my tumultuous mind.
Changing your environment can go a long way in helping your brain to relax. As highlighted by an episode of The Unmistakable Creative, one of my favourite podcasts, your ‘9 environments’ play a vital role in helping you feel inspired (or not). These include, by the way, things like your body and financial situation.
If you experiment a little, you might find a surprising variety of things that help you to feel rested. Here’s a few things that worked for me recently:
Getting together at a friends’ home for a karaoke session, watching them belt their hearts out to ‘Hakuna Matata’ and joining in
Eating lunch on a grassy lawn while having a chat
Spending time browsing at the library
Enjoying a home-cooked meal
Writing a letter to a friend
Crafting this article
Breaks don’t have to feel bland and lifeless.
Let’s revisit the powerful little zeroes from before:
Each one of these ‘breaks’ is starting to take its own shape and colour in my mind.
One of them is glowing sun, warming me up and helping me to feel energized.
Another is a hot air balloon, lifting me far up above a harried world so that I don’t lose sight of the big picture.
The last one is my favourite — a warm cookie with molten chocolate chips that makes life a lot sweeter.
Introducing the new donut cookie plus cookie hole
Experiment with ways to make your breaks fun. They might involve curling up on a sofa and calling an old friend, or going for a walk with your favourite podcast, or spreading out your limbs somewhere comfy while listening to something soothing (thanks Kristi Keller for the suggestion).
Whatever you choose, make sure your breaks are long enough. The measly three minutes you give yourself to go to the bathroom may not be enough to help your brain to switch off.
b) Taking a step back can bring to light uncomfortable truths. What then?
There is a risk associated with creating more space in your life.
Putting aside your tasks and other distractions and taking a few deep breaths can uncover any dissatisfaction and angst you feel about your current situation.
Why am I still doing this job? Should I move to another country? Do I need to worry about this constant feeling of tiredness?
Is life getting away from me without my having much of an impact on the world?
Most of us have to face such questions at some point, and it can be very disquieting. If there is an easy way to answer them, I haven’t come across it yet.
Interestingly though, while giving yourself some breathing room allows these difficult questions to bubble up in your consciousness, giving yourself more of that very same breathing room can help you answer them, albeit in unexpected ways.
What if, when facing a conundrum of any sort, instead of marching straight into doing mode, we simply stopped for a while and did something relaxing?
The ideal answer would not necessarily flash straight into our awareness, but over time, giving ourselves space can act as a gentle cleansing fluid for the lens through which we see the world. Helpfully, it can allow fresh perspectives to come into focus.
This approach requires patience, an asset that find myself in short supply of these days. I would very much like a quick, easy and painless resolution of my doubts and anxieties (who wouldn’t?) but I’m beginning to experience short intervals of relative tranquillity during which it suddenly seems like letting answers unfold in their own time would not be the worst thing in the world. It’s no surprise that these tend to occur when I’m letting my mind rest in some way and they are often aided by music, movement, nature and flowing water.
For an ever-growing portion of the world’s population, the reason for most of life’s conundrums appears to be not doing the right thing or simply not doing enough. In an attempt to solve our problems, we repeatedly search for a step we can take that will make things better.
Action can be incredibly helpful, but it may be time we took less of it, and instead turned our attention to its largely ignored and underrated cousin, Restfulness.
If it brings us nothing more than a little more peace and joy, that’s probably enough, don’t you think? | https://medium.com/the-maths-and-magic-of-being-human/the-undeniable-power-of-the-pause-907e7c1933bf | ['Roshan Daryanani'] | 2019-09-30 08:15:06.515000+00:00 | ['Rest', 'Peace', 'Happiness', 'Relaxation', 'Productivity'] |
2 Reasons Why “They” is the Best Animal Pronoun | Perspective Piece
2 Reasons Why “They” is the Best Animal Pronoun
Every species of animal is astonishingly unique — including their sexes
Photo by Mylon Ollila on Unsplash
There’s a simple reason I don’t think of nonhumans animals as he’s and she’s.
The sexual dimorphism of a cat or a seahorse is unique. It’s different from the sexual dimorphism of a human.
Human biology & culture form a vast database of what gender means to us. When we label someone a she or a he, these pronouns carry a library of connotations — connotations that don’t make sense for a dolphin.
Of course, he and she don’t tell the full story of Homo sapiens either. Up to 2% of our bodies have intersex traits. Some of us are trans, non-binary, or behave atypically of our sex. Don’t forget Finnish! This language uses a single pronoun, hän, instead of dividing us by gender.
As education increases, we English speakers have options: she, they, he… If you don’t know a person’s pronoun, it’s great to just say they.
But what does this have to do with nonhuman animals?
A fish or pig won’t be offended by the “wrong” pronoun. They speak their own languages. They won’t know or care what we’re saying.
Besides, people are more used to saying she and he. These pronouns successfully convey that an animal is a sentient being — not an object or an it — and that’s tremendously important.
Nevertheless, there are 2 reasons why calling an animal they/them makes more sense from my point of view.
He/she makes it seem like sex is what defines us. They affirms that we are unique and complex beings, not reducible to genitals. He/she erases animals’ diversity. Female and male function very differently depending on whether you’re an ostrich, ant, or octopus.
Let’s explore both of those points in more detail.
Sex & reproduction do not define us
We are more than our private parts. But for centuries, people have been forced into roles they might not choose (such as child-rearing, or war) just because of what’s between their legs.
Meanwhile, we equate animals with how their reproductive organs can serve us. For example, “dairy cows” have their babies taken. That way, humans can have the milk. This happens when the cow and calf are still intensely bonded and they mourn the separation.
“Egg-laying hens,” too, suffer abuses and slaughter that would shock us if done to a dog. Male chicks can’t lay eggs, of course, so they get immediately put in a grinder or otherwise killed. Again, just because of their sex.
Sex caused us to exist. Now that we’re here, we’re unique and complex sentient beings who seek happiness and health. “M or F” is a contributing factor to our experiences, but it doesn’t define who we are.
Every species of animal is astonishingly unique — including their sexes
We dimorphic mammals have things in common. Adult males shoot sperm. Adult females gestate and lactate.
It’s exciting to see ourselves in animals. We might relate to how a female gorilla nurses her young, or how a male lion acts all high-testosterone and dominant over his pride.
Still, sex traits are massively diverse. Let’s not project human gender differences onto various nonhumans who are not all the same.
To start with ourselves, look how unique it is to be a male or female human. Apart from our primate cousins, only spiny mice, elephant shrews, and some bats are known to menstruate.
Our forward-facing intercourse, and making out, are rare in other species.
It remains a scientific mystery why humans have permanent boobs; breasts aren’t limited to when we’re pregnant or lactating.
Here are some ways in which other species, too, are incredibly individual in how their sexes operate.
Size contrast
Gotta love this National Geographic headline: This Octopus is 40,000 Times Heavier Than Her Mate.
Genital contrast
Most birds have cloaca. They release either sperm or eggs, but both sexes have what we would compare to a vagina. Peregrine falcon privates are hard to tell apart.
By contrast, there are animals with matching phallic organs. Elephants and spotted hyenas have either a penis or “penile-clitoris.”
Color
Imagine if “men and women” implied different body colors. Like the bright feathers donned by male peafowl and mallard ducks.
Intersex traits
Some 1 in 2,000 humans are born with ambiguous genitalia, but true hermaphroditism does not exist with us. Banana slugs, on the other hand, are fully stocked with both female and male sex organs.
Clownfish are born male. The largest male in a group becomes female. This is an example of sequential hermaphroditism.
In cows, when twins are of opposite sex, the female comes out a freemartin. The freemartin has a bigger clitoris, different innards, and is usually sterile.
You could say the spaying and neutering of domestic animals further distinguishes their sex from what most humans know.
Parenting
It’s the male seahorse who gets pregnant, not the female.
Male emu birds are the “stay-home parents” who sit on the eggs and look after the chicks when they hatch. The females are not involved.
Mating and pair bonds
When we humans mate, we often form bonded pairs. Most of us are heterosexual.
Pair bonds are unheard of in some species. There are also at least 70 species of vertebrates that are asexual. Check out the all-female whiptail lizards.
One study showed 94% of giraffes’ sexual acts were male-on-male. In other words, most giraffe sex is gay. However, females only mounted females 1% of the time.
Gender would have a different meaning if we humans did not form pair bonds… or if our sexual behavior were distributed like giraffes.
That’s why I don’t think of animals as he’s and she’s. Female and male mean something completely different depending on species. Animals are in leagues of their own.
Photo by Catherine Merlin on Unsplash
What really matters: Animals exist for their own reasons
Women, men, female cows, and male bulls are like 4 distinct sexes.
Oops, I almost forgot freemartins. That’s 5. Don’t forget intersex, agender, and transgender humans like me!
Jokes aside, the reason I’m passionate about this has nothing to do with pronouns.
I just want to say how unique animals are — both as groups, and as individuals. Creatures exist for their own reasons. They certainly don’t fit into our human concepts of gender (which he and she tend to evoke).
She/he will probably prevail as the main animal pronouns we use. I’m at peace with it. It’s so much better than calling an animal it!
Still, I invite you to explore this they/them perspective. See if it enhances your appreciation of our animal neighbors.
Around the globe, we’re embracing human gender diversity like never before. Education and acceptance are on the rise. As a transgender woman, I couldn’t be more thankful.
I pray every day animal protection will see a similar worldwide tipping point. Let’s get to know nonhumans’ uniqueness, just as we celebrate our own. Each animal is a being, with an amazing biology, unique sex, and beautiful species they’re a part of. Let’s stop treating them as food objects, “pests,” or scientific sacrifices to serve our interests. With human creativity and compassion, we can keep finding ways to be kinder.
Thank you. | https://medium.com/creatures/2-reasons-why-they-is-the-best-animal-pronoun-9a1c4ae9df6 | ['Phoenix Huber'] | 2020-12-09 16:02:18.974000+00:00 | ['Creative', 'LGBTQ', 'Animals', 'Psychology', 'Ideas'] |
Never Forget: The 90s Products We Loved | Just like the decades before it, the nineties was an era that produced notable events that would shape the world for generations to come. From the fear of computers crashing with the impending Y2K, to Bill Clinton’s scandalous behavior in the oval office, there was no shortage of exciting news leading up to the new millennium.
Perhaps the most exciting news, however, was not the anxiety of computers failing, but rather the developments that were occurring in the technological sector.
In the years leading up to 2000, companies were turning out innovative software and digital hardware products at a rapid pace. While these products would probably induce nostalgic laughter in most of us at this point, many of them were the driving force behind our technological advancements that contemporary society is all but addicted to.
With products such as Napster, AOL, and RealNetworks representing the starting point for many modern-day iterations, it is important to reflect on the influence these have on the current state of Product Management.
What is Google?
Google was not always around and that is strangely hard to imagine. Before, when people wanted to browse the vast networks of the internet, they had to rely on a different service: Netscape Navigator.
Of course, there were competitors, but Netscape was the superior browser. They were so dominant at one point that Microsoft was working tirelessly to develop a competitive alternative in order to gain market share.
Does this cyber dual sound familiar? It should! The battle for supreme internet juggernaut has not subsided in the current era. In reality, the desire to be top dog has only been increasing. Google, Bing, Yahoo, and all the other browsing services that were born from similar visions as Netscape, are jockeying for dominance in the global market.
Because of Netscape, the innovative nature of internet searching was brought into reality. Google, therefore, did not have to invent the wheel (so to speak), but instead they had to optimize it.
The Grandaddy of Instant Messaging
Before Slack, WhatsApp, or iMessaging, there was AOL. While AOL was a trailblazer in a wide variety of internet services, their instant messaging product was a groundbreaking feature that would define Product Management work for decades.
In the late nineties, the iconic dial-up service was almost synonymous with peer-to-peer online messaging. Even while they were battling competitors like Microsoft, AOL was able to define a delightful experience that customers were flocking to.
Like many businesses/products from the nineties, AOL was not able to continue its dominance in the tech industry. Despite their decline in status, their pioneering work was one of the first roadmaps for modern-day predecessors.
Of course, technology has been making huge strides in general since the days of AOL’s popularity. Still, Product Managers for AOL had similar challenges to face, and were carrying out the same work in order to bring solutions to customers.
To illustrate this more clearly, let’s take the example of AOL adjusting to a world that was moving away from desktop computers. As mobile cellphones and other mobile communication devices were increasing in popularity, AOL was faced with adapting to this changing landscape.
PMs were facing the challenge of creating a vision that matched the evolving technology. Just like in modern day product management, bringing this vision to life meant working horizontally across departments, managing stakeholders, along with a host of other tasks that PMs still perform today.
Napster VS. Metallica
If you were around in the nineties and early 2000s, you probably remember the infamous court battle between the heavy metal icons of Metallica and the young P2P music service.
While the music industry was up in arms over this free file sharing service, little did they realize that Napster was laying the groundwork for scores of music streaming apps that are wildly popular today.
By making music available online, Napster was one of the leaders in antiquating CDs and portable disc players (RIP Tower Records). This revolutionary shift — although legally precarious — was beginning to reposition the focus of music consumption by bringing consumers immediate gratification.
In large part due to this shift, Product Management, in modern music streaming apps, is defined by Napster. For example, hitting metrics like the number of streams, downloads, and shares is one of the directly attributable crossovers between the early streaming service and current day iterations.
The Quiet Birth of Streaming
Streaming media is not as modern as you may think. While us contemporary folk consume streaming media at an astonishing rate, many people in the nineties were doing the exact same thing.
Before Youtube, Netflix, Hulu, or Vimeo was able to break into the streaming scene, RealNetworks was serving up streamable content to millions. More specifically, according to their numbers, RealNetworks had over 100 million active users at its peak.
Another interesting detail was that RealNetworks was able to provide streaming for both audio broadcasts as well as video. In an era of VHS and radio, one could consider this feat a technological marvel.
Considering that most of the population is constantly in a state of video or audio consumption, RealNetworks’ impact on current software in undebatable. It is also safe to say that PMs at modern-day equivalents are building the vision that PMs at RealNetworks were hoping to establish.
To put it another way, Product Managers of RealNetworks saw the writing on the wall. They built what could be described as the first minimal viable product for streaming businesses.
A Lasting Impact
Almost all technology today has an ancestral counterpart that was born in the nineties. The cushy nature that apps and products provide now is largely due to the vision that Product Managers in the previous decades were building.
Product Management practices can even be attributed to the work structure of the pre-millennial decade. Creating software that requires quick updates and continuous change was a novel task. Now, it simply is the way Product Managers go about their business.
So while it the clunkiness of products of the past may bring about patronizing chuckles, we should all be saluting these foundation laying achievements.
What were your favorite products from the nineties? Have they had a lasting impact on today’s technology? Let us know! Drop us a line on our Slack Channel. | https://medium.com/hackernoon/never-forget-the-90s-products-we-loved-8f3ce3274cb0 | ['Carlos G De Villaumbrosia'] | 2019-11-22 19:45:31.080000+00:00 | ['Tech Industry', 'Product Management', 'Apple', 'Hackernoon Top Story', 'Products'] |
Philosophical Foundations of AI, Part II: Semantics and Pragmatics, (Social) Reality and Representation | I call you ‘baby Travis.” Photo by Alex Hockett on Unsplash
Persons and Names, Semantics and Pragmatics
Human beings have names by which they are called. Many languages with Indo-European origins make explicit use of this in their grammatical structures: me llamo Travis, je m'appelle Travis, ich heisse Travis, and so on. We are called so that we may be addressed (as, say, du or Sie; tu or usted, etc.) and respond to other humans in communication. Note, too, how languages often reserve a distinction between knowing persons and knowing general facts (roughly: knowledge by acquaintance vs. description). For example, conocer/saber in Spanish, kennen/wissen in German, 認識/知道 in Chinese and รู้จัก/รู้ in Thai. So this is not strictly a "Western" social phenomenon.
The idea is that ethical commitments derive from dialogical and social ones. It's not a coincidence that in Chinese and Thai, for instance, honorific pronouns are family terms such as "uncle," "aunt," "big brother," etc., implying that filial piety/responsibility goes along with such titles. We don't just respond to those who call out for us; we have a responsibility to them, as Soviet literary theorist Mikhail Bakhtin and French philosopher Emmanuel Levinas have claimed. See what they did there?
The notion of ethical responsibility is derived from one's ability to communicate dialogically with others — to be called on by, and to respond to, others. This suggests there are communicative norms guiding our dialogical behavior (see the work of German philosopher Jürgen Habermas for details).
At this level, we have gone beyond semantics to the level of pragmatics. We have moved from an analysis of how words relate to things in the world, to now an analysis of how humans use such linguistic expressions in their dealings with one another as they create social and cultural structures. What is fascinating and impressive is the way in which these philosophers have used this metaphysical analysis of self and other, identity and difference, to ground an ethics of dialogue.
Wittgenstein might argue AI will never capture meaning because it has no access to our dynamic, shared forms of life which ground the meaning of our behaviors and words we use in language games. Photo by Daniel Lonn on Unsplash
From Language to Ethics
Here’s how we go from language to ethics. Recognition of identity is a presupposition of ethical responsibility. You cannot respond to what you cannot first identify. You cannot re-cognize what you cannot first cognize. We can now understand why the recognition of identity is so crucial to modern political discourse, especially as relates to “identity politics.”
The failure of some modern computer vision systems to recognize the faces of minorities is a symptom of a lack of responsibility, a lack of recognition of identity, towards people that look different from those who have designed facial recognition systems. In our modern, democratic society, marginalized groups and individuals implicitly call out for recognition of similarity — as ultimately belonging to one and the same society of equals — but the engineers of such systems do not respond. Unable to engage in dialogue, to share in perspective and understanding, the engineers behind facial recognition systems assume no responsibility towards those viewed as different. The result is a false-positive rate up to 100X higher for Asian and Black people, as the article linked above describes.
What responsibility do you have towards that which is un-identifiable and thus un-namable? What would you call or how would you address that which has no name? I am reminded of the Dalit or “untouchables” in the Indian caste system. Ask yourself,
Have you ever known a person without a name?
After you die, your tombstone is a sign or symbol of your prior existence. Photo by Gary Meulemans on Unsplash
Name as Symbol of Presence, Death and Representation
Our identities spring from and are shaped by the names given to us by others from the moment we leave our mothers’ bodies. Conversely, when we die, a tombstone with a name is all that’s left behind as a symbol — a trace — of our existence. Your name stands in for your identity, your presence as an actually existing thing. Remember: you don’t exist until you have been recognized as such by another cognitive agent. The word “exist” (Latin: existere), as Heidegger points out, originally meant something like “to stand out or stand forth.” To exist is to be a difference that makes a difference, in the words of polymath cyberneticist/anthropologist Gregory Bateson. In other words, to exist is to, at the very least, comprise one bit of information. The question is whether there is anything there sensitive enough to detect this difference.
Here is one rather sketchy psychological and theological implication of this idea. A belief in an omniscient God is one way to ensure recognition of one’s existence, regardless of whether one’s identity is recognized (politically or socially) on Earth. One can then go forward in life knowing one’s actions and thoughts are recognized, and thus have meaning to at least one cognitive agent, God. This explains why God is often the final bastion of comfort for those politically or socially oppressed. Faith in God can be viewed as a psychological strategy for coping with worldly torment, for ensuring the recognition of one’s existential bit — exist (1), not exist (0).
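In Shannon's terms, that existential bit can be made exact: a distinction between two equally likely states, exist (1) or not exist (0), carries precisely one bit of information.

\[
H(X) = -\sum_{x \in \{0,1\}} p(x)\,\log_2 p(x) = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1 \ \text{bit}.
\]

Less than one bit, and there is no difference left for any receiver, divine or otherwise, to detect.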
Let’s get back on track, though. Did you ever write your name on a desk in school? That signature served to stand in, or re-present, your presence in your absence to any cognitive agent capable of reading it. It’s a sign, a signal, re-presenting the original and authentic Being of something at a later point in time. Representation is transitive: we represent something to someone. We must always consider that. A representation can carry information, but this information is interpretable only by those who recognize it as such. The receiver must be able to decode the Being or presence encoded in the representation.
In lieu of Being, signals re-present presence to receiver. Source: Wikipedia.
Signal detection and processing theory, information theory, etc., are mathematical formalizations of Being invented in order to extract, manipulate and derive useful patterns from the experience of any cognitive agent (yes, even a sensor) capable of imposing a boundary between itself and an other, an inner and an outer — an observer and a system. Remember that existence means to stand out, or stand forth. But stand out or forth from what? From the system itself. Philosophers such as Derrida and Heidegger ask whether redundancy is essential to identity or whether compression is.
Is the true signal the one which is revealed after maximal compression (ideal), or is ever-present noise part and parcel of its identity (actual)?
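As a minimal, hedged sketch of that question (the strings and numbers below are invented for illustration; only Python's standard zlib is used): redundancy compresses away, while random noise does not.

```python
import os
import zlib

# A highly redundant "signal": one motif repeated many times.
signal = b"presence-" * 1000

# The same bytes corrupted by incompressible random "noise".
noise = os.urandom(len(signal))
noisy_signal = bytes(b ^ n for b, n in zip(signal, noise))

for name, data in [("redundant signal", signal), ("noisy signal", noisy_signal)]:
    compressed = zlib.compress(data, 9)  # maximal compression level
    print(f"{name}: {len(data)} -> {len(compressed)} bytes")

# Typically the redundant signal shrinks to a tiny fraction of its size,
# while the noisy one barely shrinks at all. If identity is what survives
# maximal compression, noise is inessential; if the noise accompanies the
# signal everywhere, perhaps it belongs to the signal's identity.
```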
Evolutionary Epistemology
Not so long ago, philosophers such as Donald Campbell, and Nobel Prize-winning biologists such as Konrad Lorenz, made Kantian (i.e., transcendental) arguments to the effect that the mechanism of evolution could not function were it not “truth-tracking” to some extent. In philosophy this view was called Evolutionary Epistemology. Organisms whose faculties of perception were not aligned, to some degree, with reality, would eventually fail to reproduce or die off. Or so they thought.
Recently, cognitive scientist Donald Hoffman claims to have shown mathematically (his "fitness beats truth" theorem or FBT) that there is no necessary connection between what we perceive and reality as it is, "out there." Our concepts of time and space, as we know them, are mere representations — good enough for survival and reproduction — of some deeper underlying reality. What that is, we have no idea. Students of Kant will recognize this as old hat. It's simply the distinction between Erscheinungen (phenomena, things as they appear) and Dinge an sich (noumena, things in themselves) rehashed in the language of modern science and computer simulation.
What does all this have to do with data? Well, eventually, we all die and will be absent forever to everyone; when that happens, only our names will be evidence of our existence. Death forecloses any possibility of presence, leaving mere representation in its wake. Never again can your face be compared with a digital picture of your face, or your voice compared with a digital recording of your voice. Similarity (and distance) will then be defined by a relation of copy with copy.
Image Recognition Performance Evaluation: How to Interpret?
Compare this with modern AI. We evaluate the performance of a CNN or GAN by how well it classifies a set of test images, for example, and we blindly assume this says something useful about its performance in the "real world." But test images are not real: they are digital representations of the reality we live in. To the extent these digital representations capture what we believe is important, the performance measures are meaningful. Put another way, if all that mattered were digital reality, then 99% accuracy would indeed be impressive and useful in reality. Conflating digital reality with analog reality is one easy way to overstate the achievements of current AI. I, for one, would like to keep the distinction because I live in analog reality.
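To make the worry concrete, here is a hedged sketch on scikit-learn's toy digits data (the Gaussian noise is an illustrative stand-in for a messier analog channel, not a claim about any particular system): a classifier can look excellent on a curated digital test set and degrade once the same scenes arrive through a noisier channel.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 digit images, pixel values 0..16
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Accuracy on the curated digital test set.
clean_acc = clf.score(X_test, y_test)

# The "same" digits as they might arrive from a noisier analog channel.
rng = np.random.default_rng(0)
X_noisy = np.clip(X_test + rng.normal(0, 4, X_test.shape), 0, 16)
noisy_acc = clf.score(X_noisy, y_test)

print(f"accuracy on clean test set:   {clean_acc:.2f}")  # typically ~0.96
print(f"accuracy under channel noise: {noisy_acc:.2f}")  # noticeably lower
```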
A representation by convention. I hereby claim this picture denotes Sadness. Photo by Steve Johnson on Unsplash
Representation is Convention, not Resemblance
If we take the view of Nelson Goodman (Conventionalism), then anything can represent anything through an arbitrary act of denotation. We simply define it into existence, as we might with a logical axiom. Why? Because a resemblance theory of representation and reference is too restrictive, a vestige of Western science’s striving towards a perfectly transparent mirror of nature. (Post)modern art aims to free itself from the restrictive metaphysics of a resemblance theory of representation. Representation, unlike similarity, is not a symmetric relationship!
A heart represents love, but love doesn’t represent a heart.
This is important for machine learning because a metric space is defined using a notion of symmetry. See axiom (ii) below.
Look at axiom (ii) which defines a metric in terms of symmetry. Source: analysiswebnotes.com.
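In symbols, the standard textbook definition: a metric on a set X is a function d : X × X → ℝ such that, for all x, y, z in X,

\[
\begin{aligned}
&\text{(i)}\quad d(x,y) \ge 0, \ \text{and} \ d(x,y) = 0 \iff x = y,\\
&\text{(ii)}\quad d(x,y) = d(y,x) \quad \text{(symmetry)},\\
&\text{(iii)}\quad d(x,z) \le d(x,y) + d(y,z) \quad \text{(triangle inequality)}.
\end{aligned}
\]

Some "distances" popular in machine learning already fail axiom (ii): the Kullback–Leibler divergence, for instance, generally has KL(p||q) ≠ KL(q||p). A divergence, like representation, points in one direction.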
There is no necessary connection between performance on a representation of reality and performance in reality, though Pythagoras may have disagreed.
The obsession with test set performance in image recognition tasks is a good example of failing to realize we are dealing with what Baudrillard calls the hyperreal. We have replaced the formerly real with a copy of the real and forgotten that such a replacement has occurred. We have conflated reality with appearance. If Plato had to banish the dramatists and poets from his ideal Republic because they dealt with shadows of shadows, then he would also have to banish the data scientists and engineers because they deal in representations of reality without caring about the detailed relationship between reality and its representation.
Data is dead when unreflectively used to re-present the real.
A picture of a cat is not a cat. People are not pixels, and pixels are not numbers. But we can represent them as if they were. How? By number.
But only when certain conditions hold will performance of our CNN be good and useful in the real world (the place we humans live in and actually care about). A good model is isomorphic to the part of reality we wish to model — it’s one to one (each thing in reality is mapped to one thing, and only one thing, in the model) and maintains the relations among its parts. For example, if our source system has three objects, (a,b,c), we might call the relation R the set of ordered pairs (a,b) and (b,c). In this case, the relation R holds for objects a and b, and b and c, but not a and c. If these relations exist in the source system, then they must be preserved in the model system.
When this happens, we call this method of mapping numbers to properties of things in the world measurement, and the view that the defining relation between a model and its target system (reality) should be structural isomorphism is called the semantic view of scientific theories.
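As a minimal sketch (the objects and relation are the toy example from the preceding paragraph; the successor relation on numbers is my own choice of model relation for illustration), whether a mapping preserves structure can be checked mechanically:

```python
# Source system: three objects and a relation R that holds for
# (a, b) and (b, c), but not for (a, c).
objects = ["a", "b", "c"]
R = {("a", "b"), ("b", "c")}

def preserves_structure(mapping, model_relation):
    """True if pairs are related in the model exactly when they are in R."""
    for x in objects:
        for y in objects:
            if ((x, y) in R) != model_relation(mapping[x], mapping[y]):
                return False
    return True

# Model system: numbers, related when the second immediately succeeds the first.
successor = lambda m, n: n == m + 1

good_map = {"a": 1, "b": 2, "c": 3}  # one-to-one and order-preserving
bad_map = {"a": 1, "b": 3, "c": 2}   # one-to-one, but scrambles the ordering

print(preserves_structure(good_map, successor))  # True: a faithful model
print(preserves_structure(bad_map, successor))   # False: relations not preserved
```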
But we cannot simply assume a priori that all objects possess quantitative structure. According to controversial historian of psychology Joel Michell, a fundamental task of science, then, is to investigate whether the properties of objects have quantitative structure that can be isomorphically mapped to numbers, thus preserving their inherent relations. If so, measurements of them are meaningful. If not, they are meaningless. Michell argues on this basis that much of what modern psychology is doing is “pathological” because it has not first demonstrated that psychological attributes have quantitative structure appropriate for quantification.
Meaning, Algebra, and Arbitrary Orderings
Psychometric theorists Keith Markus and Denny Borsboom, in their book The Frontiers of Test Validity Theory, give a nice example of how meaning, algebra and order (logos) are related. They point out that we can develop algebraic operations by positing a few basic axioms, a sequence of numbers, and a successor function that returns the next number in the sequence. Here's how it relates to the interpretation of numbers.
Normally, most of us assume a sequence of numbers where 1 is the successor of 0, and so x + 1 equals the successor of x. But if we start with different orderings, we get different algebras. In ordering (A) we start with 0, 1, 2, 3, 4, 5, 6, … and then 2 + 2 = 4. But in ordering (B) we start with 0, 1, 2, 3, 5, 4, 6, … and then 2 + 2 = 5. Both answers are correct applications of the successor function, and both '4' and '5' represent the same quantity, though couched in a different system of ordering.
Now suppose we tell you these numbers represent something. Maybe the second ordering is someone's preference for numbers, where 1 is his favorite. Then in both cases '4' represents the fifth favorite number and '5' represents the fourth favorite. Markus and Borsboom point out that the meanings of the algebraic operations differ in the resulting two algebras. They may share theorems, such as x + 1 = x + 2 − 1, but differ in others. For example, 4x > 5x. Which is right? You could create two algebras which share many of the same theorems by ordering the numbers in the same way but for different reasons.
As Markus and Borsboom explain:
Most people learn algebraic operations without first learning their basis in assumed orderings and successor functions. People depend upon experts for that. The result is that the meaning of mathematical statements depends on the social context. (pg. 257)
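Their point can be sketched in a few lines of code. The two orderings are the ones above; the implementation detail of treating the second addend as a plain repeat count is a simplification of my own:

def make_add(ordering):
    # addition defined by repeatedly applying a successor function over an assumed ordering
    successor = {ordering[i]: ordering[i + 1] for i in range(len(ordering) - 1)}
    def add(x, y):
        for _ in range(y):
            x = successor[x]
        return x
    return add

add_a = make_add([0, 1, 2, 3, 4, 5, 6])  # ordering (A)
add_b = make_add([0, 1, 2, 3, 5, 4, 6])  # ordering (B)
print(add_a(2, 2))  # 4
print(add_b(2, 2))  # 5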
Dialogue and Perspective Taking
So we might think social dialogue and logic are opposing concepts. We'd be wrong: dia-logue derives from dia (through, across) and logos (reason). Language and logic are thus inseparable from sociality. To be named is to be recognized as having a particular kind of identity which confers membership into a moral community. Naming probably occurs in all human cultures due to its performative role in creating symbolic moral communities of equals. Once addressed with names, we can be called into dialogue with others.
Dialogical communication requires the ability to imagine the perspective of our interlocutors and have a rudimentary grasp of theory of mind, which helps us to interpret the words and deeds of others by reference to our own inner dialogue and behavior. It goes the other way around too. We understand our own behavior and inner dialogue by watching and observing the words and deeds of those around us. When I see someone smash his finger with a hammer and say “ouch!” I know he must also experience some inner feeling we collectively refer to as pain. A philosophical chestnut consists in trying to verify, solely on the basis of observable behavior, how we know my pain and your pain are referring to the same kind of thing.
This kind of intersubjective perspective-taking apparently takes vast cognitive resources, but is crucial to human development. If we are to make joint plans and coordinate our actions with other human beings we must be able to imagine what it would be like, to some extent, to take the perspective of the other.
The influential Soviet psychologist Lev Vygotsky, himself a fan of Hegel, claimed that human development occurred first socially through inter-mental categorization, and then later this process was mirrored within the individual child as an intra-mental one. On this view, human individuality is derivative of human sociality. In a section below we will explore why this idea is somewhat radical in Western thought. Hint: it has to do with some intellectual baggage we inherited from Descartes. For now, I want to mention one thing with relevance to the collection of behavioral big data (BBD).
Can I haz join ur moral community? k thx. Photo by Cong H on Unsplash
Dehumanization, Pets, and Numeric Reference
We can also put this curious fact of human sociality to other, less morally pure uses. The philosopher Charles Taylor explains that when we wish to dehumanize a person, for instance, we do not call her by her name. Numbers have historically been used for “easy” reference. As just one silly example, in Star Wars the robots’ names were C3PO and R2D2, but the Wookie was called Chewbacca. Chewy mattered more.
We also do the converse when we give animals names. We call these animals “pets.” Giving an animal a name is a performative act conferring membership into our moral community. Objectively, nothing about the animal has changed. Yet, upon receiving a name, the creature’s identity has changed symbolically. We feel obligated now to consider its interests, perspective, and well-being and we implicitly recognize its capacity for emotions, such as pleasure and pain, as we go about our lives. Have you ever cut short an activity you were enjoying because you had to get home to feed the cat?
Finally, membership into our moral community confers another practical bonus: we tend not to kill and eat animals we have given a name to. In short, names are implicit evidence of a certain kind of essence and identity that is valuable (i.e., has dignity) to us as humans. But what does this strange behavior centered around names reveal about who and what counts as part of our moral community?
People deemed "criminals" through enactment of performative legal processes are removed from the social community and often referred to using numeric identifiers. Why? Photo by Damir Spanic on Unsplash
Think back to the most gruesome periods of recent human history. The use of numbers to refer to persons has a sordid past. There’s a reason prisoners have them. It is something best avoided unless absolutely necessary, ID numbers and drivers’ licenses notwithstanding. It is only a small step to barbarism.
Here’s why. The number “5” could represent five apples, five dollars, or a single person. We’re indifferent to which it is. This power of numeric abstraction is also a weakness. We treat all instances of “5” the same, as objects to be manipulated according to the rules of algebra. This is the level of syntax. Algebra doesn’t care about which side of the equals sign variables appear on. But in many (social) sciences we make semantic distinctions between numbers in equations: we call them IVs and DVs, exogenous and endogenous variables in systems of equations and thus specify how such variables relate to one another, according to theory (etymologically theory just means a perspective, a view). Causal modeling provides an example of why we should care about questions of syntax and semantics. From the space of all possible predictive models we might build, only a fraction of them might obey the laws of physics or conform to our best theories.
So syntax and semantics matter when applied to persons. They have ethical implications. Referring to a person by using a number is to be indifferent to one’s identity as a person. It is arguably the ultimate indignity as it places one outside the norms of the human moral community (Worse, even, than killing — as that would imply recognition of your humanity and potential for death. Numbers, after all, don’t even receive burial rites.). As Charles Taylor insightfully notes in Sources of the Self,
Beings who are just referents and not also addressees are ipso facto classed as non-human, without identity.
Source: Stackoverflow.com
This suggests one connection with data science. Care must be taken when we deal with the personal data of human persons. By referring to persons by things like "user_id" we risk overlooking our potential to enter into dialogue with another human being. user_ids do not make plans, laugh, or tell jokes. People, however, do. user_ids only exist because we needed a convenient means of reference to an object, but ideally (e.g., with enough time, energy, and money) we would engage people face to face to understand them. user_ids are not reality, but representations of reality (real persons). We must be careful to distinguish the two. A later installment will have to treat this issue in greater depth.
Protecting audio and music assets with Node and JavaScript
In my previous post I discussed my latest small project of building an external music player for Bandcamp. What I realized is that many similar sites and services can easily be abused for pirating content, in particular copyrighted audio, music and video. In this post I will discuss several strategies for protecting such content.
Obtaining mp3 files (and other digital content) can usually be done by looking at the HTTP requests that are made upon playing/using that particular content. In Bandcamp's case I only had to look at the network traffic and spot the "mpeg" data type of 5.37MB in size; by copy-pasting the GET URL you can then download the corresponding mp3 file.
Today it’s nearly impossible to fully secure digital content, there’s always some way of obtaining it. But the purpose of security systems is to make the hacker’s / pirate’s life very painful. Either by making the process very long and/or complex, in the hope of them giving up.
A very basic, yet quite effective method is to encrypt the sensitive assets. In Bandcamp's case, they can encrypt the mp3 contents server-side using some key, send it to the client, and let the client's JavaScript code decrypt and play it. The client can still download the encrypted mp3 file, but without the proper decryption algorithm it's a useless file. This method is only as effective as our ability to hide and obfuscate the decryption function.
In the code below I show my prototype for doing all of this.
NodeJS server code
"use strict";
const express = require("express")
const app = express()
const { Readable } = require('stream')
const fs = require('fs') app.get("/audio", function (req, res) {
res.setHeader('Access-Control-Allow-Origin','*')
xor_encrypt(res)
}) function xor_encrypt(res) {
// read audio file to buffer
let buff = fs.readFileSync('./audio.mp3') // determine encryption key
let key = buff[buff.length-1] // encrypt buffer contents
buff = buff.map(x => x ^ key).map(x => ~x) // store the encryption key as last element
buff[buff.length-1] = key // transform buffer to stream
let readStream = Readable.from(buff) // send stream to client
readStream.pipe(res) readStream.on('end', () => {
res.status(200).send()
})
} app.use(express.static('.')) const serverHost = "localhost"
const serverPort = 3007
app.listen(serverPort)
JS client code
let curr_track = document.createElement('audio')

var oReq = new XMLHttpRequest()
oReq.open("GET", 'http://localhost:3007/audio', true)
oReq.responseType = "arraybuffer"
oReq.onload = function (oEvent) {
  xor()
}
oReq.send()

function xor() {
  // view the ArrayBuffer response as a typed byte array
  const arr = oReq.response
  var byteArray = new Uint8Array(arr)

  // obtain the encryption key (stored in the last byte)
  let key = byteArray[byteArray.length - 1]

  // use the key to decrypt the contents (inverse of the server's xor/not combination)
  byteArray = byteArray.map(x => x ^ key).map(x => ~x)

  // restore the original last byte (which equals the key)
  byteArray[byteArray.length - 1] = key

  // convert byteArray to a Blob
  const blob = new Blob([byteArray], { type: 'audio/mp3' })

  // create a playable URL from the Blob object
  const url = URL.createObjectURL(blob) // memory leak possible!

  curr_track.src = url
  curr_track.load()
}

// now you can bind 'curr_track.play()' to some click-event
The code above contains comments for each step, so it should be self-explanatory. The encryption method relies on simple yet highly efficient bitwise operators (XOR and NOT).
In the client code, the url variable points to a temporary in-memory Blob object representing the mp3 file. If you print this url to the console you will get something like this:
blob:http://localhost:3007/9a2ffb47-72af-4c58-a0f9-08b9a63b81d0
If you then copy-paste this into a new tab you'll be able to play/download the decrypted mp3 track. This Blob object exists in memory as long as your website window remains open; otherwise it gets garbage collected. This also means that creating many Blobs can lead to memory leaks, but there is a method, URL.revokeObjectURL, for cleaning them up manually.
This encryption strategy works fine; we made it harder for users to download mp3 files. It's still breakable: once a user figures out how the decrypt function works, they can automate it, or by debugging/editing the JavaScript code they can similarly obtain the mp3 file.
Alternatively, instead of using a Blob object, you could use base64 encoding, but decoding and downloading the binary contents is just as trivial there as it is with Blobs.
A further improvement is to use many different encryption/decryption methods (instead of one) at random, but then some kind of identifier will be needed to determine which method should be used client-side. Once again, the hacker/pirate can figure this out.
A Times For Thanks And Blessings
A Thoughts And Ideas Newsletter
The pandemic seems to rage on here where I am from in New Jersey, USA. And this isn't the only place where it's happening. The election chaos has come and gone; however, there is still plenty of drama going on in the political world.
Thankfully, I have my writing and editing, and my commitment to my publishing community. I am quite pleased that it keeps me rather busy. It continues to be a wonderful gift, because it distracts me from the stress of the world, and it helps countless others, whether it is someone looking for something good to read, or an up-and-coming writer getting their words out to the world on and off Medium.
This continues to be a good path for me, especially my involvement with this Thoughts and Ideas publication. We have had a very busy month here, and we are quite close to reaching 23,000 followers/readers. The number of contributors and writers is over 300 right now, and it seems like everyone loves Thoughts and Ideas for different reasons. The more reasons, the merrier I believe.
I receive emails almost every day from writers requesting to be added to this publication. It remains constant and consistent.
While it is originally an India-based publication, it has long since opened its arms to the entire world. The Indian community has played a great role in making this publication global in its outreach. People from every continent want to be a part of this, and I continue to enjoy embracing professional relationships with one of the most diverse followings that there can be.
Recently, articles about the celebration of Diwali have been submitted to me for consideration and publication, and it ended up being a wonderful opportunity for me to learn more about that incredible holiday and joyous celebration. After publishing those pieces, I immediately found myself doing my own research on the holiday. It was a great learning experience for me, and I believe it was also vital for me to learn about such a huge holiday that touches so many thousands of my readers.
I am always glad to see that happiness and celebration can still exist and prevail even when a global pandemic still presents itself in many places around the world. It is a shining example that we can still find blessings and love, if we look further in life, than just in the negative and the pain.
I am quite thankful to have all of you, because this publication is one that prospers and succeeds only when we can all work together as a publishing family. Most of the readers and writers for Thoughts and Ideas speak so happily about being able to be part of what we have here. When I read the messages from all of you telling me how blessed you feel to be part of this, it makes me feel fulfilled, as I cannot feel my true sense of success as a Publisher and Editor, if I can’t be among the very best readers and writers here on Medium.
Until next time, I bid you all peace and prosperity.
Michael Patanella is a Trenton, New Jersey Author, Publisher, Columnist, Editor, Advocate, and recovering addict, covering topics of mental health, addiction, sobriety, mindfulness, self-help, faith, spirituality, Smart Recovery, social advocacy, and countless other nonfiction topics. His articles, publications, memoirs, and stories are geared towards being a voice for the voiceless, hoping to reach others out there still struggling.
Comprehensive Guide To Optimize Your Pandas Code
Avoid Loops ♾
Pandas is designed for vector manipulations. Vectorization is the process of executing operations on entire arrays, which makes explicit loops inefficient by comparison.
Bad Option 😈
A rookie mistake in pandas is to just loop over all the rows, either by using iterrows or regular loops.
In the following snippet, we are calculating the original meal price (without the tip) by subtracting the tip from the meal price itself.
def iterrows_original_meal_price(df):
    for i, row in df.iterrows():
        # iterrows yields copies, so write back through the DataFrame itself
        df.loc[i, "orig_meal_price"] = row["meal_price"] - row["meal_tip"]
    return df

%%timeit -r 1 -n 1
iterrows_original_meal_price(df)

35min 13s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
As you can see, the execution time is around 35 minutes, an unsatisfying result indeed. But don't worry; as I said, using iterrows is a rookie mistake, and I am here to show you a much better approach.
Better Option 🤵
Fortunately, there is a much nicer way, using apply. Apply accepts any user-defined function and applies it as a transformation/aggregation to a DataFrame, row by row (still iterative under the hood).
def calc_orig_meal_price(row):
    return row['meal_price'] - row['meal_tip']

def apply_original_meal_price(df):
    df["orig_meal_price"] = df.apply(calc_orig_meal_price, axis=1)
    return df

%%timeit
apply_original_meal_price(df)

22.5 s ± 170 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
As we can see, the performance boost here is insane: instead of 35 minutes, the same program took us 22.5 seconds, which is much better. I will gladly take the roughly 100x improvement in execution time over iterrows ⌛.
So the lesson is that iterrows is pure evil 😈.
But is that all we can do? Can't we make the same simple code extremely fast? Well, it can be done, and now I am going to show you the best way, aka vectorization.
Best Option 👼
As a reminder, vectorization is the process of executing operations on entire arrays. Pandas/NumPy/SciPy include a generous collection of vectorized functions, from mathematical operations to aggregations, and you can create new ones with np.vectorize .
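For illustration, here is a hedged sketch of wrapping a scalar function with np.vectorize, using the meal columns from the snippets below; note that np.vectorize is mainly a convenience wrapper, not a true performance optimization:

import numpy as np

def tip_ratio(price, tip):
    # plain scalar function; np.vectorize lets us call it on whole columns
    return tip / price if price else 0.0

vectorized_tip_ratio = np.vectorize(tip_ratio)
df["tip_ratio"] = vectorized_tip_ratio(df["meal_price"], df["meal_tip"])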
In the following snippets, we are going to subtract the entire meal_tip column from the entire meal_price column.
def vectorized_original_meal_price(df):
    df["orig_meal_price"] = df["meal_price"] - df["meal_tip"]
    return df

%%timeit
vectorized_original_meal_price(df)

2.46 ms ± 18.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
That’s insane. We can see the benefit of the vectorized function right away. We got to 2.5 milliseconds. And about 100,00. 8000x Improvement In Execution Time over the apply method⌛. So the lesson is that vectorized operations rules 😇. | https://medium.com/towards-artificial-intelligence/comprehensive-guide-to-optimize-your-pandas-code-62980f8c0e64 | ['Eyal Trabelsi'] | 2020-11-19 14:59:33.970000+00:00 | ['Machine Learning', 'Python', 'Performance', 'Data Science', 'Pandas'] |
As news deserts expand, new approaches to local news are taking root
By Karen Rundlet
If news and information are part of the fabric of democracy, then the fabric of U.S. democracy is in tatters. That’s the conclusion that leaps off the map in the 2018 The Expanding News Deserts report, which shows that 171 U.S. counties do not have a local newspaper, and nearly half all counties — 1,449 — have only one newspaper, usually a weekly.
The report by Penelope Muse Abernathy, Knight Chair in Journalism and Digital Media Economics at the University of North Carolina, shines the light on a silent phenomenon, the disappearance of 1,800 newspapers since 2004, and drop by half of the number of reporters covering local news.
“The historic role of newspapers — informing, nurturing and improving communities, both large and small — is vitally important in the digital age,” Abernathy writes on the website of UNC’s Center for Innovation and Sustainability in Local Media. The belief that informed and engaged citizens are vital to healthy democracy is also deeply held by Knight Foundation, a supporter of the Center.
The broad story of the collapse of the business model for local news is well known. Print advertising revenues have plummeted, while proportional gains in digital revenues have been captured mostly by Facebook and Google.
This report, which builds on the first News Deserts report in 2016, delves much deeper into the story. It explores the impact of hedge funds and private equity investors in hollowing out newspaper staffs, and the impact of consolidation on local coverage.
If there is good news, it's that more than 500 digital news outlets have sprung up across the country, filling part of the void. Many of these news outlets were founded during the recession in 2008, by investigative reporters who wanted the public service mission of journalism to continue. These journalists grew into their role as publishers, but they and their fledgling organizations now need to take the next steps toward maturity. They need the resources and skills that will help them establish a permanent presence within their local news ecosystems.
Building their capacity and sustainability is a major area of focus for Knight, which is among several partners that supports NewsMatch. This matching grants program for nonprofit news sites, many of them local, works with partners such as the Institute for Nonprofit News and the News Revenue Hub. NewsMatch also works to raise awareness on the need for communities to support news, journalism and civic information, in the same way it would support any other public good.
Recently, we teamed with Democracy Fund to provide a planning grant to the American Journalism Project, an ambitious plan to raise a venture-like fund to invest in revenue-generating capacity of nonprofit news sites to provide civic news on the local and statewide level.
Knight Foundation staff recently returned from LION Publishers Summit 2018, held in Chicago this past weekend. This event brings hundreds of locally focused independent digital news organizations together to focus on training, education, and peer-to-peer learning around sustainability, excellence in journalism and building community. Workshops focused on topics like advertising rates, events production, libel law, and audience engagement.
In addition, the Knight-Lenfest Newsroom Initiative is helping dozens of legacy newspaper brands strengthen their local coverage by sharing best practices for digital, as well as elevating innovation by local and regional TV stations, a key source of local news. Knight Foundation also supports Investigative Reporters and Editors in producing regional trainings to strengthen broadcast and digital journalists in television newsrooms. The next one will be held in Philadelphia in early 2019.
It is true that digital platforms have, in large part, been responsible for swallowing much of the ad market; however, they can also be a vital part of the solution for local news organizations. To this end, the Facebook Journalism Project and Google News Lab are increasingly focusing on tools to help local news organizations get their reporting out to the public, which in turn helps them attract more funding.
But as the report points out, much of this news innovation is taking place in metro hubs and wealthier communities, while information dries up in rural areas.
“The residents of America’s emerging news deserts are often its most vulnerable citizens. They are generally poorer, older and less educated than the average American,” the report notes.
Local news sits at the heart of democratic engagement, providing people with the information they need to contribute and shape their communities. Without it the future of our democracy is in peril. That’s a call to action for us all.
Karen Rundlet is director/journalism at Knight Foundation. Follow her on Twitter @kbmiami.
Market Segment Analysis Part 1 — Analyzing and Clustering Data
An application of k-means algorithm together with dimensionality reduction to credit card data
Photo by Mark OFlynn on Unsplash
This is a practice project I did to learn about clustering and dimensionality reduction. Cluster algorithms have a broad list of applications in terms of Data Science studies. One of the main applications is in market research and an example of this will be shown in this project.
Because the analysis is a bit long, it will be split into two posts:
Part 1: Data Analysis and Clustering Part 2: Dimensionality Reduction and final evaluation
The dataset contains credit card data with 18 variables. The variables describe consumer usage behaviours of about 9000 credit card holders during a period of 6 months.
The original dataset on Kaggle can be found here. The code for this project can be found here.
The data science skills applied in this project are:
Exploratory data analysis
Unsupervised learning (k-means algorithm)
Principal component analysis (for dimensionality reduction)
Autoencoders (for dimensionality reduction)
Why consider dimensionality reduction?
As you will see later, there are just too many features in this dataset. When implementing machine learning solutions, one thing to keep in mind is that we must choose the best features to represent the data. Having too many features may result in overfitting the model. When a model is overfit, it has adjusted too well to the training data, which makes it difficult to generalize beyond the examples of the training set.
Our goal as data scientists is to create models that represent well the data and can be applied to new data fed to the model, once it is put into production.
1 — Loading the dataset and getting to know the data
See below a description of all the 18 variables:
CUST_ID: Identification of Credit Card holder
BALANCE: Balance amount left in their account to make purchases
BALANCE_FREQUENCY: How frequently the balance is updated, score between 0 and 1 (1 = frequently updated, 0 = not frequently updated)
PURCHASES: Amount of purchases made from account
ONEOFF_PURCHASES: Maximum purchase amount done in one-go
INSTALLMENTS_PURCHASES: Amount of purchases done in installments
CASH_ADVANCE: Cash in advance given by the user
PURCHASES_FREQUENCY: How frequently the purchases are being made, score between 0 and 1 (1 = frequently purchased, 0 = not frequently purchased)
ONEOFF_PURCHASES_FREQUENCY: How frequently purchases are happening in one-go (1 = frequently purchased, 0 = not frequently purchased)
PURCHASES_INSTALLMENTS_FREQUENCY: How frequently purchases in installments are being done (1 = frequently done, 0 = not frequently done)
CASH_ADVANCE_FREQUENCY: How frequently the cash in advance is being paid
CASH_ADVANCE_TRX: Number of transactions made with "Cash in Advance"
PURCHASES_TRX: Number of purchase transactions made
CREDIT_LIMIT: Limit of credit card for user
PAYMENTS: Amount of payments done by user
MINIMUM_PAYMENTS: Minimum amount of payments made by user
PRC_FULL_PAYMENT: Percent of full payment paid by user
TENURE: Tenure of credit card service for user
Getting the statistics for all variables with the .describe() command we can note the following:
On average, clients maintain 1564 dollars in the bank account for use with the debit card.
About the purchase mode, on average clients spend 592 dollars on one-off purchases and 411 dollars on purchases with installments.
Good news for the bank: clients, on average, take 978 dollars as cash advances. One must keep in mind that, in general, the fees on cash advances are higher than regular credit card fees.
In regards to frequency, clients more frequently make purchases with installments (mean = 0.364) than one-off (mean = 0.202).
Regarding credit limits on the credit card, the maximum limit is 30,000 dollars with the minimum being 50 dollars. On average, clients have a credit card limit of 4494 dollars.
2 — Exploratory Data Analysis
Before visualization, we need to clean the data and make sure it has no missing values.
Checking for null values
Running the command below:
credit_data.isnull().sum()
We can check that there are 2 variables with null values: CREDIT_LIMIT (1 null value) and MINIMUM_PAYMENTS (313 null values).
The null values will be replaced with the mean, as both variables are continuous variables.
credit_data.loc[credit_data['MINIMUM_PAYMENTS'].isnull(), 'MINIMUM_PAYMENTS'] = credit_data['MINIMUM_PAYMENTS'].mean()
credit_data.loc[credit_data['CREDIT_LIMIT'].isnull(), 'CREDIT_LIMIT'] = credit_data['CREDIT_LIMIT'].mean()
Checking for duplicated data
credit_data.duplicated().sum()
There are no duplicated data.
Deleting unnecessary data
The customer ID (CUST_ID) has no value for the behavioral analysis that will be carried out, so this column can be dropped.
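A one-line sketch of that step (assuming the column is named CUST_ID, as in the Kaggle file):

credit_data = credit_data.drop(columns=['CUST_ID'])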
Visualizing the frequency distributions of all variables by plotting histograms:
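One possible way to generate them (the figure settings here are just illustrative):

import matplotlib.pyplot as plt

credit_data.hist(bins=30, figsize=(20, 15))
plt.tight_layout()
plt.show()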
From the plots, we can extract some insights:
BALANCE left in the account is most frequently around 1,000 dollars.
PURCHASES values concentrate below 5,000 dollars.
BALANCE_FREQUENCY — we can see that clients frequently update the balance in their accounts.
ONEOFF_PURCHASES and INSTALLMENTS_PURCHASES — looking at the scale of the graphs, we notice that purchases with installments are more frequent for values no greater than 5,000 dollars, and one-off purchases are more frequent for values no greater than 10,000 dollars.
PURCHASES_FREQUENCY shows a segmentation of clients: one group makes purchases very frequently, while the other group rarely makes purchases.
MINIMUM_PAYMENTS and PRC_FULL_PAYMENT — these variables show us that many clients opt for paying the minimum of their credit card bill. Very few clients pay the full bill. This is also good for the bank, as interest charges are high on credit card bills.
TENURE shows that most of the clients are long-term clients (more than 11 years)
Getting correlations between variables
Instead of just printing a table with the coefficient of correlation values for the variables, we can make it easier to analyze by plotting on a heatmap.
import matplotlib.pyplot as plt
import seaborn as sns

correlations = credit_data.corr()
f, ax = plt.subplots(figsize=(20, 15))
sns.heatmap(correlations, annot=True);
Some variables seem to have a strong correlation, for example PURCHASES and ONEOFF_PURCHASES, with a coefficient of correlation of 0.92. On the other hand, CREDIT_LIMIT does not seem to be strongly correlated with any variable.
As there are many features and some seem to be redundant in the dataset, clustering the data may enable us to draw a deeper analysis and find similarities in the data.
3 — Clustering the data with K-means
Scaling the data
There are 17 numerical variables in this dataset. Some of them are within a wide range (for example BALANCE in the accounts) others are within a smaller range, from 0 to 1 (frequency data). Using Standard Scaler from sklearn preprocessing to scale the data:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
credit_data_scaled = scaler.fit_transform(credit_data)
Standard Scaler centers the input value by subtracting the mean and dividing the results by the standard deviation. The result is that the distribution of values for that variable will have mean of zero and standard deviation of 1.
Determining the optimum number of clusters
To choose the best number of clusters the elbow method was implemented. This is one of the most popular methods to determine the number of clusters.
The objective of the elbow method is to minimize WCSS, which measures the within cluster sum of squares. This is the sum of squares of the distances of each data point in all clusters to their respective centroids. When WCSS is minimum, you have less variability of the data inside the cluster.
So, considering that the optimum number of clusters is between 1 and 20, we can run the code below and plot the results:
from sklearn.cluster import KMeans

wcss = []
range_values = range(1, 20)
for i in range_values:
    kmeans = KMeans(n_clusters=i)
    kmeans.fit(credit_data_scaled)
    wcss.append(kmeans.inertia_)
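And a short sketch of the plotting step (assuming matplotlib is already imported; labels are illustrative):

plt.plot(range_values, wcss, marker='o')
plt.xlabel('Number of clusters (k)')
plt.ylabel('WCSS')
plt.show()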
Using the elbow method, it seems that the optimum number of clusters is between 7 and 10; this is the range that indicates the formation of an 'elbow' on the graph.
Running k-means and getting insights on the data
Running k-means algorithm considering the number of clusters equal to 8, as a first iteration:
kmeans = KMeans(n_clusters=8)
kmeans.fit(credit_data_scaled)
labels = kmeans.labels_
Checking the number of clients assigned to each cluster (or group):
np.unique(labels, return_counts=True)
Out[27]:
(array([0, 1, 2, 3, 4, 5, 6, 7], dtype=int32),
array([1044, 82, 2596, 1241, 1178, 200, 623, 1986]))
Getting the centroids:
cluster_centers = pd.DataFrame(data=kmeans.cluster_centers_, columns=credit_data.columns)
cluster_centers = scaler.inverse_transform(cluster_centers)
cluster_centers = pd.DataFrame(data=cluster_centers, columns=credit_data.columns)
cluster_centers
Exploring the 8 groups from the table we can draw some conclusions:
GROUP 0: This seems to be an intermediate group — it is the fourth on the list for the amount of money spent on purchases (923 dollars on average). Most of the purchases were paid in installments. The credit limit is around 3,000 dollars, which is low considering the 8 clusters.
GROUP 1: This also seems to be an intermediate group. At some points it presents a behaviour similar to group 0, but they spent more money on purchases, being the third on the list in terms of money spent on purchases in general. Most of the purchases were one-off purchases, and this is the second group on the list most likely to pay the credit card bill in full. Maybe this could be a group of clients who could be offered a credit limit increase.
GROUP 2: This group has the highest frequency of asking for money in advance (0.52) and is not very likely to pay the credit card bill in full (PRC_FULL_PAYMENT = 0.039, second last on the list). This does not seem a target group for a credit limit increase, having one of the lowest purchase frequencies of all (0.302) and being less likely to pay the credit card bill in full.
GROUP 3: Spends a lot on purchases (24,957 dollars), being the group that spends the most on the credit card. Most of the purchases are one-off purchases, which means this is a group of clients with high potential for consuming. Not surprisingly, this is the group with one of the highest purchase frequencies (0.91). This is also the group most likely to pay the credit card bill in full. It could be a target group for marketing campaigns since they buy a lot and are "good payers".
GROUP 4: Presents one of the lowest purchase frequencies on the list (0.267), and the lowest balance and balance frequency.
GROUP 5: This is the group with the highest balance and purchase frequencies. Their credit limit is the second highest on the list, and most of their purchases were, on average, one-off purchases.
GROUP 6: This is the group that spent the least money on purchases, with, as expected, the lowest purchase frequency. This is also the group least likely to pay the credit card bill in full, which may indicate a group of clients that may have difficulties in their financial situation.
GROUP 7: This seems to be another intermediate group, closer to the group of clients with less probability of spending money on purchases. A reduction of dimensionality is necessary to make this separation clearer. This is the group with the lowest credit limit and one of the lowest purchase frequencies. On the other hand, they asked for a considerable amount of cash in advance (1,122 dollars), which was more than the money they spent on purchases with the credit card over the period.
Below are the plots representing some of the variables discussed above for each of the 8 groups:
Balance variations for the 8 groups
Balance Frequency variations for the 8 groups
Overall expenditure on one-off purchases for the 8 groups
Overall expenditure on installment purchases for the 8 groups
Purchase frequency for the 8 groups
Cash advance frequency for the 8 groups
When Fiction Might Become Reality
And we are still not reading enough
Photo by Darwin Vegher on Unsplash
The world is full of cultures. Cultures that are sometimes different or similar in several ways. Within these cultures, sub-cultures emerge as a result of different habits adapted over time.
We are living amongst habits that are constantly changing around us. Habits that are causing our world to consider different priorities, different life values, different learning mechanisms and different ways to educate the young. As this article may specify a bit more, culture shapes our minds after all.
Today’s situation is completely different than it was fifty years ago. Today’s information flies like a fighter aircraft during war: so fast that it could deploy a bomb at any time.
But the youngsters are the ones to suffer through this process.
The high flux of information, normally available online, is one of the burdens for storytellers. It is so easy to pick the smartphone and find on Google what anything means and where they are located.
The culture of reading and discussing is long gone. Though it does not need to be unless we create new habits. Habits that were once old and now can become new.
Do books stand strong along this fight on short reads and lack of attention?
I think they do, as long as we bring back those habits of reading them more often. The habit of spending time quietly in your room, with or without music, and read a long story that takes you to another reality. Books can have superpowers. And you can feel it.
Different people are motivated to read by different things. It could be non-fiction, fiction, a short story, a real story, a good book cover, a book recommendation from someone, the librarian's enthusiasm about a piece, a good title and so forth.
We can tell that books made history possible. Books were once read as part of a formation process, part of a life's dedication, and simply to understand the outside world.
Sitting down and flicking through a few pages was once a thing of the most privileged: the ones who could read, write and had time for it. The illiterate did not have a chance to access books. Books were like gold in the hands of the most powerful.
Reading books then was like reading the newspaper nowadays: they were most often read to acquire information and to conquer accordingly. The references were minimal and the writers were selected by a minority.
Not everyone could sit down and write. Only those in power, the literate, had access to it, along with the ones responsible for solving problems. Intellectuals were paid minds meant to take hold of progress within an empire. Without them, kings would not have been able to conquer.
“Privilege yields opportunity, and opportunities confers responsibilities” — Noam Chomsky on “The Responsibility of Intellectuals”.
The art of reading is not something taught at schools today. Amongst so many technological devices, reading has no space within the classroom unless it is done through a screen. And even then, few youngsters would be keen to dive into a long reading story anymore.
There are exceptions though, and I might be one of them. Perhaps you are too?
I cannot find any better way than books to create a conversation with yourself. The one that takes your attention away from a confusing, complex and mind-boggling world. The one that instead makes you dream and talk to yourself about how the world could be according to that point of view; the writer's point of view.
Reading a book makes you humble. It demonstrates that you do not know as much as you think you do, because that writer showed you another way of knowing. Even through fiction, different ideas about the world can be adapted into real ones. Ideas that open your mind to the possibility of innovating the self. Ideas that sometimes can only be told by words and nothing else.
That is why libraries are sanctuaries of knowledge, where words become weapons to shoot your brain with creativity, wit, innovation, emotions, curiosity, friendship, love and self-identification.
A time may yet come when books disappear. Completely. It is sad to think about, but the Kindle is a perfect example of it. Do not get me wrong. I love the Kindle, but it does not smell like a book. It does not look like a book that bears the remains of handwriting. It does not feel like it has any soul.
Still, this is the new generation, and we are happy to have one and continue to read several masterpieces on it. I am happy to have one anyway.
Speaking of masterpieces, burning books could become a habit. A habit of the future that would stop humans from accessing knowledge through words.
The fireman could become the most important professional for keeping order in society, treating books as a threat to the mind of the individual. A threat to a system in which numbness is the new civil law. People would not be able to touch pages because it could cause a revolution. The elite could eliminate access to books, and this is scary.
Photo by Jonny Caspari on Unsplash
You probably think I am crazy, but I am just drawing on a novel that, nonetheless, is close to becoming a reality. Ray Bradbury was not so crazy after all in creating a story where books and knowledge are stored only on hard disks, away from the population's reach.
When he wrote "Fahrenheit 451", he made it clear that the firemen were the ones responsible for keeping society from knowing deeply about its past, its diseases and the problems that had once occurred.
According to the novel, burning books was a priority activity.
Now, can you imagine for one second this happening in the future? It could be, but better not to think too much about it.
Reading every day might be a solution to combat the loss of books, the loss of knowledge. If we are to rediscover the art of reading, to identify the avid reader and pinpoint the essentials, we must preserve books.
And also, we must read them. A little bit every day, for the rest of our lives. Only then can we be at peace with our minds and continue to be human — the only animal capable of reading so far.
Avoid Security Loopholes Using @JsonView in Spring Boot
Don’t expose more than you think needs exposing
Spring Boot logo
If a certain property on an object is internal to your business and not useful to a consumer, then don’t return it.
Let’s say we’re using the controller to query information and return it to the front end in the JSON data format. Often, some username and password queries are involved in the JSON data, but for security reasons, we may not need all of the User object user information (for example, username and password ) to be returned to the front end. | https://medium.com/better-programming/avoid-security-loopholes-using-jsonview-in-spring-boot-3b1230f1ae30 | ['Parul Dhingra'] | 2020-10-15 01:13:13.898000+00:00 | ['Spring Boot', 'Web Development', 'API', 'Java', 'Programming'] |
London Calling
Photo Credit — Me — St Paul's Cathedral — London
Freedom's what unites us
You want to test this, come to my yard and fight us?
Telling you mate you best think twice cos
We hear London calling
Best believe that London it ain’t falling
We pick ourselves up, stand tall and
The man dem from Big Ben to Wren's St Paul's dem
All stand together
Our city's our tether
Keeps us bonded from now to forever
Here, let me give you some advice
We Landoners mate, you thought once?
Best think twice.
For those that fell. 22/03/17
Data Visualization, let them wonder
2 — The Island of Knowledge
The concept of “the island of knowledge” is simple, and it’s stated by Ralph W. Sockman as “The larger the island of knowledge, the longer the shoreline of wonder”. Our understanding of a subject is like the size of an island, and its shoreline represents how far we can think, ponder and question about it.
Marcelo Gleiser wrote a book entitled “The Island of Knowledge. The Limits of Science and the Search for Meaning”, and he gives this definition:
“The knowledge that we have defines the knowledge that we can have. As knowledge shifts, we ask new kinds of questions that we couldn’t have anticipated”².
Data visualization is a powerful way to increase our island of knowledge. It can give us those different perspectives that will increase the size of our island and consequently enlarge the shoreline of wonder. It will enable us to think of questions we couldn’t before. Alberto Cairo uses the island of knowledge to teach about data visualization in his book “the truthful art”³ in a very insightful way. I recommend reading it.
At the beginning of 2019, I worked on a visualization that represents this concept. I was looking into the stocks of Petrobras (PBR, a Brazilian company) to decide if it was a good moment to buy it. I was staring at the following chart:
Petrobras Stocks
My eyes went straight to the period of 2013–2016. I wondered what happened there to make those shares fall so drastically. I then realized that within those years, a huge operation was in progress in Brazil to investigate corruption between the government and Petrobras. I studied more about that and put together some important events in the government and in the operation against corruption to find any correlation with PBR stocks. The image below is the result of my research (click here for the full page):
Since Dilma Rousseff, elected president of Brazil in 2010, changed the leadership of Petrobras in 2012, the stocks started falling, hitting their lowest prices in the past 13 years. The red area shows the period where nearly 30 people were arrested, accused of corruption, and it is followed by the lowest price level. The first plot made me ask new questions, enlarge my island of knowledge, and find new perspectives. Now my shoreline of wonder is larger and the island can grow bigger. How has corruption in government affected other companies or other parts of the economy? Can I use the government's corruption to predict stock prices?
Data visualization is powerful: it can change, improve and save the world. It is key to bringing new perspectives and innovation. Next time you go to work on your data collection or visualization designs, remember the island of knowledge and do your exploration with these concepts in mind. It will lead you to provide new faces of the data and knowledge for your readers, so that they can wonder about questions they never had before. Let them wonder!
Field Notes from the Land of Big Data
Many of us have heard of the fashionable term "big data analysis", and some of us have a day-to-day relationship with it. As a data scientist with many years of statistics training in a (relatively) smaller data world, I didn't start to build my connection with big data until recently. Sometimes this new romance is a thrill; other times it can be fairly frustrating. If you are like me, then maybe some of the "fights" we've had can be inspiring (or at least relatable) to you.
1. What makes big data different from small data?
As the name suggests, big data is big. But how big should it be to differentiate it from small data? If you google questions like that, I’m sure there will be tons of articles trying to define this term. But based on my own experience (remember I was born in the small data world), here are 5 points that in my opinion define “big data”. Most of them were born from the question, “Why is it so hard for me to finish this job?!”:
The size of the data is at least tens of GB and can easily exceed a hundred GB;
The data is saved in partitions rather than a single file;
Any code that runs on this data is likely to get stuck somewhere, even on a Mac that you spent $4000 to configure. To get around this, you have to rely on clever pieces of code and/or powerful machines (e.g., a computer with hundreds of GB of memory or a cluster of such computers);
It’s a pain to visualize the data;
It’s a bigger pain to debug your code.
If three out of the above five points are satisfied, then congratulations, you are in the big data world!
2. Schema is a major trigger
Big data can give you big headaches. So what can we do to make our lives easier? Most of the time, the problems I encounter during data analysis or model building do not come from the analysis or modeling itself, but instead from data that was mishandled. And most data mishandling is caused by a wrong schema specification. Don’t underestimate this mistake! It can easily happen: when we modify our code to reproduce existing data, when we backfill new data with additional records or attributes, when we merge datasets that are across a range of time periods, each of which has a different schema, etc. Therefore, it is always a good habit to check the schema first.
There are many types of schema errors, but the most common mistakes I’ve come across, no matter what language or platform, are the following:
Wrong specification of the `header` argument (e.g., the partitioned data contain headers but we forget to skip the first row);
Data type misspecification (e.g., our schema file thinks a column is integer-typed while it’s actually double);
Column misalignment (e.g., I appended metadata for a new attribute at the end of the schema file while the column was actually added in the middle of the data).
If your program yells at you when importing data, check whether it’s because of the above reasons, as they can rule out a large number of errors. If your program remains silent, then taking a small subset of the data and summarizing the distribution of the columns (especially new columns and the columns before and after them) is a convenient and effective sanity check. Automatic tools such as data type inference tools or schema generators can be good helpers, but do not blindly rely on them, since there could be data issues that they (or their default settings) cannot catch.
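As a sketch, a quick pandas-style sanity check on a single partition might look like this; the file name and settings are hypothetical:

import pandas as pd

sample = pd.read_csv('part-00000.csv', nrows=10000)
print(sample.dtypes)                   # catches data type misspecification
print(sample.head())                   # catches header rows left in the data
print(sample.describe(include='all'))  # eyeball distributions, especially of new columns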
3. Pre-programmed defaults could be mysterious helpers (or saboteurs)
Because of the uniqueness of big data analysis, we often need to shift to new languages or platforms that are specialized for these tasks. Spark is one such platform. When I first started to learn it, I didn’t anticipate the scope of the pre-set defaults. Sometimes they can be surprising saviors for code that appears buggy, while other times they can be unwelcome saboteurs for code that looks correct.
One example of the former is the order of applying `select` and `filter` on a DataFrame. In my “buggy” code, I first selected a few columns from a DataFrame to create a new DataFrame. I then filtered the new DataFrame by another column that was in the original DataFrame but no longer in the new DataFrame. Spark ran the code without complaints. An example of the latter situation was to generate predictions using a fitted model on a test dataset without persistent memory (persistent memory is where we cache the data from disk into memory so that it can be reused without copying from disk again). Spark generated very different predictions when I ran the same piece of the code at different times.
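For the first case, the pattern looked roughly like the sketch below; the column names are hypothetical, and whether it runs silently depends on Spark's analyzer and optimizer:

from pyspark.sql import functions as F

selected = df.select("user_id", "amount")            # "status" is no longer selected...
filtered = selected.filter(F.col("status") == "ok")  # ...yet Spark may still accept this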
Not limited to Spark, these mysterious behaviors can occur with any new technology that we learn, and require a long period of practice and experience to uncover. But one golden rule is to always run sanity checks whenever a major step of the job is done. Don’t skip these sanity checks to save time — better safe than sorry.
4. A series of sanity checks within an organized workflow is the ultimate savior
As mentioned in the first section, big data is much harder to visualize and debug than small data. By “visualize”, I mean seeing the data table or summarizing the distribution effortlessly (like running one line of code). By “harder” I don’t mean impossible: we can still print a few rows or print the schema to peek at the data from a few angles. However, if the project is big (and thus has more scope for error), then we need to check our procedures carefully from as many angles as possible. In that case, an organized workflow which embeds a series of sanity checks at different stages of the job is the key to success. It may take a long time to build such a workflow, but once it’s built it will save us a ton of time in the long run.
To achieve it, we can either create our own workflow or learn existing workflow management tools. A good workflow is made up of small tasks, each of which has its own sanity checkpoint. For example, we can break down a model building task into individual steps such as data preparation pipeline, model training pipeline, model evaluation and production pipelines, etc. Whenever we add a new task to the workflow, we should add its sanity checkpoints accordingly. Also, keep in mind that our jobs are evolving and tasks can become more complicated. Therefore our workflow needs to be adaptable to new situations as well as reproducible for past results. It is worthwhile to spend some time thinking about a good versioning plan in the beginning of building the workflow. Last but not least, it is highly cost-effective and a good idea to include a “debug mode” that can test the functionalities on a small subset of data before the full run. | https://medium.com/upstart-tech/field-notes-from-the-land-of-big-data-5af9140fa28b | ['Feihan Lu'] | 2020-09-28 17:59:50.591000+00:00 | ['Machine Learning', 'Data Science', 'Big Data'] |
The Curious Alchemy of Cheesemaking | ”Milk’s leap into immortality” — Clifton Fadiman
The ancient Swiss Alps method of cheesemaking required 100 days of manual labor. The summer, so hot in the valley, was the perfect season to take the herds into the mountains. There was plentiful wildflowers, grasses, and herbs to forage. The cheesemaker could work with the herd to capture the nutrients of the season and provide for the community through the winter. They did this by turning milk into one of mankind’s oldest foods: cheese!
Milk contains all the nutrition you need to stay alive, and turning that milk into cheese is a way of preserving that nutrition. Mountain cheeses are a bit of sun on a winter’s day.
All around the country, such old time cheesemaking traditions are being fostered. Curious about the process and how hard it really was to make your own cheese, I headed over to the home of a friend who had recently begun the care of a small herd of goats.
It takes a while to make cheese, so we start early. We were going to make feta from scratch, as well as some chevre.
To start our feta from scratch, Ann took out two jugs of her goat’s milk, unadulterated from her herd. This means raw and unpasteurized.
All good cheese starts with good quality milk, and what makes good milk depends on good feed, good health, good environment, and lack of stress on the animals.
The first step is to warm the milk to 86 degrees and add in a starter culture and lipase.
Once the milk is poured, microbial culture is added. Starter culture it’s called. The cultures are microorganisms that are eating the sugar lactose in the milk and producing lactic acid, in a “lacto” fermentation.
In old world European methods, the barrels were made of wood, the better to hold in microbial culture. Or, one might use cultured whey from a previous batch. Much like continuing a sourdough bread culture, cultured milk contains live probiotics, especially if it’s raw. Today we are using starter culture specifically designed for feta.
We wait 45 minutes for the starter culture to work it’s magic.
At this point, rennet is added, an enzyme that causes the milk to coagulate.
Rennet for cheesemaking was made for a long time from the stomach of suckling calves. Since digestion turns milk into cheese, consumption of milk technically predates humanity, starting with the first mammals. The stomach acid of a prehistoric baby mammal nursing separated the whey and curds of milk. How did we ever find this out? It is believed that ancient humans carried milk in the dried stomach of animals, and that on a hot day it curdled, and they found that it both was edible and preserved longer.
Rennet is a protease enzyme that causes the proteins in the milk to coagulate, causing it to separate into solids (curds) and liquid (whey). Don’t worry, nowadays, rennet can be extracted from lots of different, including vegan, sources. Even non-GMO fungal sources, or vegetarian options like nettle and yarrow.
When we peer under the lid 45 minutes later, it’s an amazingly different consistency.
The rennet has done it’s job. The milk has turned from pure liquid into curds and whey.
The next step is to cut the curd: this separates the curd from the whey. After cutting the curd, you must mix it up every ten minutes while alternately letting it sit still.
Every ten minutes we give it another swirl.
Next, the curds must be scooped out and strained through a cheesecloth.
The curds are pressed into small molds. For feta, we use small plastic molds, though anything could be used. The remaining whey can be cultured as well, into ricotta cheese.
In this feta method, the weight of the curds themselves is what ends up pressing out more of the whey and pressing them into each other; it knits itself together. It works surprisingly well, and quickly. An hour later they were ready to flip, and had already formed their own unique masses.
Before air drying, they need to be salted. If you didn’t live near the sea and had access to salt, such as in old time chèvre making in France, you’d use ash.
They can age for as long as you desire. Traditionally, as cheese ages in caves, it begins to change pH, and as it ages, new microorganisms take root to imbue it with even more and subtler flavor profiles. Cheese, it turns out, is very much alive. It’s its own ecosystem. As many as 2,000 species of organisms can be thriving on cheese. This battalion of probiotic bacteria keeps the bad ones at bay.
Aging the cheese brings together pollens, molds, and yeast in the air, and incorporates them into the flavor of the cheese. This is where the countless variations of texture, flavor, and consistency arise; from the terroir not just of the grasses, animals, and milk, but from a wild microbial array growing on the cheese.
The final feta of the process is tangy, salty, and delicious. It seems to taste all the better when you’re sitting on the same farm it came from. Whether terroir is detectable or not could be debated, but the knowledge of the place, animals, and creative labor that goes into the cheese seems to add a sweet flavor indeed.
It’s a curious narrative nowadays — people are heading back to the land to produce artisanal food products. Books adorn shelves with titles like: “One city-dweller’s radical move to the country!” “Couple drops all to invest in small family farm! ” Artisanal cheesemakers are on the rise. Real cheese is making a comeback.
One cheesemaker calls it America’s cheese revolution. An artisan revival of dairy heritage. A resurgence of connection to people and place.
Historically, traditional local cheese said quite a bit about the person who made it; their identity. If they produced Cheddar cheese it meant they were from Somerset County in the United Kingdom, nearby Cheddar Gorge, that they were milking cows, and had access to salt.
Your regional cheese spoke of where you were from: your geography, kin and home. Your belonging in the word.
In the same way, making local cheese could be a connection of people and place. Working with its textures and flavors, coaxing something delicious out of your environment with your ingenuity and perseverance, food in context becomes a story. Deeply satisfying; cheese for the soul.
A well earned breakfast: goat milk chevre drizzled with local honey. Yum! | https://tarnaska.medium.com/the-curious-alchemy-of-cheesemaking-b8266d8ac1fa | ['Jennifer Tarnacki'] | 2019-11-23 10:29:02.375000+00:00 | ['Environment', 'Resilience', 'Cheese', 'Homesteading', 'Food'] |
The Impact of the Pandemic on College Running Teams | Looking back in 2020 with the Miami Redhawks
It was March 2020. Director and head coach of the Miami Redhawks track and cross country team, Thomas Chorny checked in at the hotel in Albuquerque, New Mexico. They were there for the NCAA Indoor Track and Field Championships. At the last minute, they were told they had to go home. The entire meet was canceled.
They packed their bags and arranged flights home immediately. By the time they made it back to Ohio, the remaining team members who were not attending the championships already left the campus. There was no team meeting or a chance to say goodbye.
This is when the reality for Chorny hit. The indoor track and field season ended abruptly, and it looked as though the outdoor season would suffer the same fate. Reflecting on how hard they trained to get where they were, ending in that manner was difficult. Adding to this sinking feeling was the reality of the effects it would have on his graduating seniors.
The seniors in 2020 were a special group for Chorny since they were the first of his recruiting class. He knew these seniors would be unable to finish their college running careers. It not only impacted their remaining track and field season but potentially their entire running careers ahead of them. How many of these athletes would spend a lifetime wondering how their careers may have been different had they had the chance to finish off the year strong?
Returning Athletes in fall 2020
For the runners returning to the Redhawk campus in fall 2020, the season was anything but normal. In-school-learning didn’t happen until October. Once it began, team practices started up again. It was the first time they came together since March. The team looked different, however. Several athletes, both walk-ons and runners with scholarships, opted to stay home rather than risk being on campus.
The Redhawk athletes who did return had a wider range of fitness than normal. Some of Chorny’s returning athletes found the extra time off as an opportunity to continue training and working on their fitness. Other athletes suffered from a lack of motivation due to the uncertainty around what kind of season would be available to them when they returned. The collegiate level runners work at a very high intensity. Pushing themselves and managing injuries along the way is a high price to pay if races end up canceled in the end.
Chorny, along with other college running coaches, had no experience dealing with these new challenges. Many unprecedented changes arose as the NCAA pieced together the remainder of the cross country and track seasons. To add to the mix were concerns over protecting athletes from the virus itself, which included physical distancing and masks during training.
Despite the difficulties, the Miami Redhawk runners trained hard and were able to participate in a last-minute season. The Mid-American Conference (MAC) approved three races and a conference championship. The Redhawks ended strong with a second-place finish for their men’s team and fourth for the women.
A strange Spring 2021 season
At the end of September, the NCAA Division 1 Council approved a cross country championship race on March 15th, 2021 in Stillwater, Oklahoma. While many celebrated the fact that a cross country meet would take place, it poses strategic difficulties for college runners participating in both track and cross country.
When collegiate athletes return from Christmas break, those who also run track start preparing for their upcoming indoor season. Since track races are shorter and faster, the training shifts to speed-oriented sessions. Overlapping the two sports is tricky from a training perspective.
Combining these two seasons have an additional challenge. To fit both championship races into a spring timeline means the races are a little close for comfort. One of Chorny’s track athletes is also a top 8k runner who was selected to participate in the NCAA Championships in March. If the indoor season isn’t canceled, this athlete will participate in an 800-meter race on Saturday and an 8k two days later — too soon for a runner to be fully recovered.
Looking forward
Although the coming college spring races will lead to some interesting challenges, athletes and coaches are relieved there is a season at all. Perhaps the senior class of 2021 will not suffer the same fate as the class of 2020.
However, as things begin to head back to normal, there is a group of collegiate athletes that will continue being impacted by the chaos created by the COVID-19 pandemic — incoming freshman.
To account for lost seasons, the NCAA expanded eligibility to current athletes. In a normal college term, athletes are allowed to compete for four years, with one additional year where athletes “redshirt,” which means they can train, but not compete. To prevent college athletes from losing out on a competition year, the NCAA offered eligibility relief for missed seasons due to the pandemic.
With the NCAA eligibility changes, this means students can now train and compete for a total of six years instead of five (four years plus one “redshirt” year). For incoming freshmen, the repercussions mean there will be fewer spots available for scholarships and in some colleges for spots on the team altogether.
The extra year of eligibility will continue to be a factor for another four years as freshmen move through the system. Current high school seniors will have less chance of ending up on these teams, which also means a decreased chance of ending up on elite teams.
Recruiting
High school seniors are also finding it difficult to gain attention for college recruitment. These challenges are covered in more depth in the following article:
Chorny is navigating this new world of recruiting as well. The inability to see potential recruits in a racing environment makes it riskier to commit to incoming athletes. Not all runners thrive under pressure, so although seeing filmed time trials is helpful, it isn’t the most reliable way to recruit an athlete.
At the same time, these athletes are going out of their way to organize the filming of their time trials. Their continued training and motivation show coaches they have initiative. These traits are imperative for a successful running career. Perhaps this poses a rare opportunity for coaches to understand which athletes are committed at a higher level than their peers. | https://medium.com/runners-life/the-impact-of-the-pandemic-on-college-running-teams-2aecf9d09055 | ['Amy J. Wall'] | 2020-12-24 15:37:25.897000+00:00 | ['Sports', 'Education', 'Running', 'Fitness', 'Coronavirus'] |
5 (Re)forms of Sacrament Meeting: How the LDS Worship Service could Better Accommodate the Diverse Spiritualities of Its Attendees | Photo by Josh Applegate on Unsplash
One of the benefits of being in a heavily centralized church is its worship services are the same everywhere you go. That might be beneficial for the well-heeled traveller Latter-day Saint who wants the familiarity of American culture in a chapel in the middle of nowhere, but that isn’t really the point, is it?
Sacrament meeting is supposed to be the most important and doctrinally momentous part of the sabbath experience. It incorporates the sacrament or eucharist ordinance and provides an hour of communal prayer, sacred music and sermonizing. I daresay (the heavens forbid my admitting) that it is supposed to feed its attendees spiritually.
I daresay further that quite a few people find the weekly event…well, let’s be charitable…underwhelming.
The sacrament meeting’s liturgy is heavily formulaic. Church headquarters prescribes the order of worship, giving local leaders discretion only over the choice of hymns (from a hymnal of only 300 songs, with what I’d define as ‘usable’ being closer to 200) and vague allowance for ‘music and gospel messages’. Church tradition has dictated that this means three ‘talks’ by lay members, the last the longest (oh and ‘youth, woman, adult male’ the customary casting), maybe with an intermediate hymn thrown in the middle when everyone stands up to get the blood flowing so they can attack the last talk without rubbing their eyes screaming at the bishop for letting Brother So-and-So exceed time limits again. Church faithful sit through the thing once a week. Many sleep or snooze or stare ahead in reverent but unspiritual reverie. Still others cheerfully peruse their phones to distract from the pain of droning sermons. I doubt many remember the talks much beyond Sunday lunch.
A Variety of Service Forms
Some Christian churches, in response to the varying and sometimes irreconcilable needs of their congregants, have moved to holding different types of worship services. A given weekend may, for example, have a Saturday evening service, a Sunday morning ‘traditional’ eucharist, an afternoon youth service (complete with modern Christian music and electric guitars), an evensong (a magnificent experience if you’ve never been), guided meditation or prayer, a Sunday School and so on. Members attend whichever services feed their souls. I, for example, would find little spiritual feasting while dancing to punk rock, but some people find deep spirituality in that energy.
What if the Church of Jesus Christ of Latter-day Saints did the same?
Here are some ideas of different forms of sacrament meeting. All of these suggestions incorporate the sacrament ordinance because, obviously, that is doctrinally necessary.
Photo by Ben White on Unsplash
The Silent Sacrament
So many Sundays, all I want is time to ponder and just feel spiritual, to meditate and ruminate. We all have sabbaths when the last thing we need is the racket of crying babies and hit-or-miss pulpit jokes and jabbering homilies and sobbing testimonies and droned announcements and all that…well, noise…that sacrament meetings just tend to attract sometimes. Sometimes people just need quiet. Reprieve. Prayer time.
Mormonism actually makes space for this exact kind of worship: in the celestial room of the temple and in the chapel area where members wait to begin an endowment session. These rooms are designed as sacred spaces set apart for quiet contemplation, prayer and silent scripture study. And Saints are very good at doing this in the temple. I know many members who go the there sometimes because they have a nagging question they need to pray out.
Now, what about a sacrament meeting that provides that too?
Imagine the chapel being open for an hour, say, with the doors closed to the bustle of the foyer. No music (or maybe very soft music). No speaking. No announcements. No spontaneous testifying. The bread and water are blessed prior to the doors opening for the service. Members come in whenever they want and sit wherever they want and stay however long they want. They can pray or read scriptures or just sit. Whenever they’re ready, they approach the sacrament table and a priest offers bread and water. They can stay afterwards or leave. It’s up to them.
This form of sacrament meeting would feed my soul. It would bring a temple-level of reverence to our chapels which, frankly, is so often lacking.
The Unsung Service
We all know there are plenty of congregants who wince as the organ intones the intro for a hymn. They hate singing, are inclined to tone-deafness anyway and mouth the odd word here or there to allay suspicion of their not participating heartily. And they certainly would rather avoid the Primary’s most recent squawking musical item to which everyone is supposed to go ‘aaaah how sweet!’ and give each other the eye that says, cohootingly, ‘What a sickly sweet song that sanitizes the gospel into a showtune that would make religion look like Barney the Dinosaur prancing through a field of daisies!’ (I might be betraying my feelings about the Children’s Songbook. Moving on….)
Some people just want words. Good ol’ sermons and prayers. No frills, no hymns, no compulsion to participate if they just don’t want to. Is there really anything wrong with wanting to be a bit passive sometimes? We manage fine without singing in lessons and in the temple (which is far more spiritually maintained). We’d just be continuing that.
I think most members would admit that a 100% music-free service would likely feel a bit dry and that music does play a liturgical role. That doesn’t mean everyone has to sing along. I’ve always wondered what the aversion is to having choirs sing the sacrament hymn. If its purpose is to prepare the congregation spiritually for the sacrament ordinance, surely we can admit that that preparation can sometimes be enhanced by listening to a choir or a soloist or an instrumental presentation? I mean, we sing the same 20-odd sacrament hymns in cycles. How many people can pay much attention to the words anymore? Variation can be the prompt for vigorous spiritual experience.
And while I’m on this point: LDS English teachers (myself included) are indebted to the Church for the public speaking exposure sacrament meetings offer lay members, but would it kill to let the ordained bishops and Relief Society presidents speak more often? They have the weight of ecclesiastical office. Let’s hear their thoughts. (And note that I said RS presidents too. Mormon men need to get used to hearing women speak with spiritual authority.)
The Morningsong or Musical Service
Photo by Michael Maasen on Unsplash
The resources required to pull of the evensong services of the Anglican Church are considerable and prohibitive for most LDS congregations. But there is something sublime and divine about them. For those members for whom music is the medium of spiritual encounter, the salve to the soul, the trigger of sacred feelings, a musical sacrament would be so nourishing and transportive!
Evensong is obviously held in the evenings, but I see no reason why Mormons can’t do some kind of ‘morningsong’ or musical service. If there are resources (good musicians, choirs, rehearsals), a sacrament meeting could begin with carefully selected sacred music—some congregational hymns, some choral performance, some instrumental pieces—and musically build up to the sacrament as the service’s aesthetic crescendo. Following this, the music could soften towards a reverent, prayerful, musical benediction.
With some thumping organ postlude that thrills and invigorates. I had to put that in there.
And on that point, it’s high time we recognize that what counts as ‘sacred music’ in middle-class Mormon Utah, informed as it is by white, conservative American protestantism, does not necessarily transfer to other cultures. I know there are moves in the Church’s current revision of the hymnal to explore the world’s religious repertoires, but the fear is that they’ll revert to the genre with which they’re most comfortable. We would be surprised at the depths of spiritual experience available outside our narrow musical range.
Some might bristle at the rituality of this kind of service, but we do ritual very well in the temple. Some cathedrals have seen an uptick in evensong attendance among millennials (while other services see a decline), which suggests that my generation yearns for church experiences that are aesthetically and symbolically rich. We want authentic moments that let us step out of the superficial world and into something more beautiful and stirring.
The EFY Sacrament
Photo by Josh Rocklage on Unsplash
There is a genre of music in the Church that hearkens towards contemporary Christian music but retains some Mormonness. EFY is the closest I can come to describing it. Many bishops freely allow this kind of music in sacrament meetings already. Otherwise, they litter youth devotionals and YSA conferences.
While a rowdy sacrament service probably wouldn’t gel with most Mormons’ sense of spiritual appropriateness (right?), I think there are many who would engage with and get more out of a worship service if everything were just less stuffy and US protestant-conservative in style. They want talks that are lessons and discussions more than sermons. They want songs that resonate with their cultural expressions.
The EFY Sacrament would feel more like a devotional. It would feel loose enough not to be rigidly formal, but still retain the reverence needed for the sacrament to be holy. | https://medium.com/interfaith-now/5-re-forms-of-sacrament-meeting-how-lds-services-might-better-accommodate-diverse-spiritualities-d7e80ab8d823 | ['Michael Mcleod'] | 2019-10-25 03:08:08.244000+00:00 | ['Mormon', 'Lds', 'Church Music', 'Lds Church', 'Sacrament'] |
Euclidean Distance Matrix | Step by step explanation on how EDM is represented in linear algebra and how to code it as a function in Python in just one line.
Pitagora : Euclide = Triangle : Geometry (drawing by : Andrea Grianti)
Hi everybody, in this post I want to explain my experience in figuring out how, a rather intuitive concept like that of the Euclidean Distance Matrix (EDM), could become a challenge if you decide to improve your (in my case Python) programming skills crossing the chasm from classical “for…loops” type of code toward the beauty of a single line of code using linear algebra concepts.
Why ? Because if you can solve a problem in a more efficient way with one line of code but you don’t understand how to do it because you do not have linear algebra skills, well it’s time to learn and try.
The result is amazing and elegant but best of all opens up your mind in thinking with vectors and matrices which is important if you want to move later to other data science topics.
Hi everybody my name is Andrea Grianti, I spent my professionial life in IT and Data Warehousing but I became later more and more passionate with data science and analytics topics.
What is Euclidean Distance Matrix (EDM)
There are so many articles and wikis on EDM that I don’t want to repeat things you can find around. The basic concept is that it represents a table in which the rows are “source” entities and columns are “target” entities upon which you want to calculate a distance (in euclidean way). I know, it’s a bit generic, but ‘entities’ can be a lot of things for example planets described by their properties (radius, mass, etc.) or basket players described by their properties (age, height, weight etc….).
An easy “distance table” to imagine is that in a map with cities in both rows and columns with miles in the crossing cells representing a distance concept. In that case an EDM is often seen as a squared table (matrix) where the diagonal is zero (i.e. distance between same city is zero) and the rest of the table is symmetrical (distance from a to b is the same as distance from b to a).
Even the ‘distance’ concept is also not trivial as there are many different ways to define a ‘distance’. Here I consider distance the euclidean one which is usually represented by something like this:
formula to calculate distance in two dimensions from point A to point B. elements of the formula are the projections of the vectora A and B over the two dimensions axis.
But even this simple formula could easily become a lot more complex if you think data tables with thousands of rows on one side and hundreds of “features” (n-dimensional space). It’s here that becomes evident the gap between thinking “code” and thinking “algebra”.
Rows and Columns mean for most of us something like this in code:
But this is exactly what we want to avoid and we want to find a smarter way to operate with tables minimizing the code to write.
The starting point
If you will follow my story here till the end you will be able to understand the formulas that you can find on algebra books or papers about EDM and most importantly the reason why the formulas are written that way, so you understand the logic behind those and use that for coding your EDM function in one line!
To keep it simple suppose you have a matrix A (4,2) and matrix B (3,2). A is made of 4 rows (think about players) with 2 features, B is 3 rows and same 2 features.
Matrix A(4,2), each row can be observations of two distinct features
Matrix B(3,2). A and B share the same dimensional space. In this case 2. So the dimensions of A and B are the same.
We want to calculate the euclidean distance matrix between the 4 rows of Matrix A from the 3 rows of Matrix B and obtain a 4x3 matrix D where each cell represents the distance between a vector in matrix A and a vector in matrix B. I call for example a1 the vector with elements (a11,a12); b1 is the vector (b11,b12)etc. etc.:
From now on we work on squared values and we leave the ‘square root’ as a last passage of our procedure
It’s important to note that nothing change if you instead have a unique matrix (let’s say A) and you want to calculate the distances between each point of that matrix. It’s the same problem with A=B.
In our case we can define each cell element of the D matrix as:
each cell is nothing difficult as it’s the classical squared distance between points in a n-dimensional space
if we analyze in our analysis just the first element (d11) we find out the following :
In equation (1) we have just developed the polynomial and we discover interesting things I colored in red, blue and black to better highlight. In red: that sum is just what the mathematicians call the norm-2 (here is squared) of the vector 1 (let’s call it a1) of our matrix A. If you don’t know what the norm-2 is just look at equations number 2 and 3. It’s written in the books most of the times in that way and it’s simply the sum of the squares of the elements of a vector. In this case our first vector a1. Don’t forget that the norm formula operates also a square root function on the sum of squares. But in order to proceed we leave it out for the moment and the reason is : we can better see the diagonal property of the dot product between two matrices.
3. In blue: we are still analyzing equation1 and this is similar to the red part but for the vector 1 (call it b1) of our matrix B.
4. In black: see equation 4. We recognize here the classical dot product of two vectors (you remember the elementary [rows] by [columns] operation) so we can write in that way because it’s the dot product between row vector a1 and column (it was a row but we transposed it) vector b1. T is to mean that you must transpose the vector b1 in order to make it possible multiply it with the vector a1.
If we rewrite our Full Euclidean Distance Matrix with all the cells “exploded” and written as we did for the first element we have the following:
Take a moment to see the patterns and try to focus on how to generalize them.
You see the patterns in the matrix ? First of all we can check that D is actually the sum of three parts as follow:
the red columns repeat themselves vertically =>therefore we can isolate that apart as (for now) a vertical vector made of all the norms of each horizontal row of the A matrix. the blue rows are conceptually similar to the red columns even if in this case you will notice that they repeat themselves horizontally top-down the black is also intersting because it’s the dot product of our full original matrices A and B (transposed) multiplied by the scalar -2. Matrix A was (4 by 2) and B was (3 by 2) so B transposed is (2 by 3) and -2 times A dot product B is (4 by 3).
So we can say that matrix D is the sum of three parts but we have to tell more about how to calculate the norm-2 of the first two components: the red and blue parts. We have said that the red part is the sum of the squares of the row vectors of the matrix A, while the blue part is the sum of the squares of the row vectors of matrix B (before transposing).
In matrix operations terms you expect to obtain 2 columns. One column of values for A and one for B. You can check that the result we are looking for comes from the main diagonal values of dot product of matrix A by matrix A transposed.
Highlights of the elements of the diagonal . The same applies to the B matrix.
so we can say that:
if you think in python you can obtain the same result by squaring and summing A. This is the typical case when you could decide to take a shortcut and leave the tracks of algebra for something easier.
One more thing: in order to make it possible summing the three elements we must be sure they can be summed. This means that we have a little work to do on the red and blue parts to make them compatibles with the result of the dot product which is 4 by 3.
To make the two norms columns like rectangular matrices so that they can be summed all together we know that the produt of a column (in this case 4 by 1) and a row of an identity matrix (here 1 by 3) “replicates” the original column creating a matrix of 3 identical columns .
the concept of broadcasting in matrix literature is achieved through the multiplication of a vector with a right sized identity vector and the result is a “repeat†of the column m times. The same concept applies the other way round to a row vector multiplied by a column identity vector. This is another situation where following algebra with coding becomes inefficient. So think with an end in mind and follow your intuition. In Python for example read about the concept of ‘broadcasting’ and you can forget to code identity matrices.
The similar thing happens with the column vector of B which is 3 rows by 1 column. In this case we first transpose the column and multiply it for a identity vector of 4 rows by 1.
This looks complex but it’s not because when we think in Python we can leverage the broadcasting feature and so we can avoid this approach alltogether, but if you want to understand algebra here we are.
Well in the end of the end we have all the parts which are compatible for being summed and found out the solution can be generalized like this which is often reported in many web sites :
The general formula for EDM. Square root operation can be done as a very last passage.
How to translate that in Python:
The first 2 parts of the D² equation for the general formula are matrices where the norms have been “repeated” in order to make it possible summing them with -2AB^T. You can calc those matices in several ways. The point is that if you code it exactly following the algebra formulas => with identity matrix and such, the code is not efficient because you have probably to transpose, reshape, calculate the diagonals with functions like np.diag, np.reshape(…) etc.,. If you want to do it as an excercise, it works.
Instead the following code leverages Python features to do the same and can be squeezed in a single line of code (!). It accepts as input 2 matrices and returns a matrix with the distances. No for…loops or similar things.
if you like you can actually put all the code in the return statement. depending on the needs I decided to round the results with 2 decimals, but you can take that out if you don’t like it.
That’s all. I hope you liked it and your comments are very welcome. | https://medium.com/swlh/euclidean-distance-matrix-4c3e1378d87f | ['Andrea Grianti'] | 2020-05-11 22:54:22.702000+00:00 | ['Python', 'Data Science', 'Euclidean Distance', 'Vector', 'Matrix'] |
Young Emanuel Swedenborg | God Is The Absolute Reality Underlying Everything
Up to now, this series has been marked by a pool of Masters of Many that rarely explored or embraced their spiritual side. Emanuel Swedenborg, the fourteenth entrant, breaks this mold by publicly devoting his life’s work to both the assumed-opposites of science & religion.
From diverse scientist to mechanical inventor to religious philosopher, Emanuel Swedenborg is tragically overlooked as one of the greatest thinkers of the previous century. Maintaining the same focus as previous submissions, we ask again — what was he like in his twenties?
Note-Worthy Accomplishments
— Famed scientific author that evolved mineralogy & established Sweden’s first scientific journal, Daedalus Hyperboreus
— Prolific mechanical inventor that sketched out detailed volumes of future inventions rivaling that of Da Vinci
— Religious philosopher that published 18 theological volumes & inspired a religious church (New Church / Church of the New Jerusalem)
20s To 30s (1708–1718)
It’s incontrovertible that our childhood environment has lasting impacts on our psyche & persona — Swedenborg is no exception to this rule. His father, Jesper, a professor of theology at the University of Uppsala, was perfectly placed to mesmerize an impressionable Emanuel on the riches of both science & religion; challenged consistently by Aristotelians, Jesper frequently brought home spirited these debates for Emanuel to absorb. At the age of eleven, Swedenborg entered the University of Uppsala (which included Primary & Secondary school).
Uppsala University, Sweden
In 1709, at twenty-one, Swedenborg attained his PhD in science, by defending a Latin dissertation covering selected works of Seneca (worth noting that this literature analysis counted towards a doctorate in physical sciences). Instead of immediately seeking employment, he decides to travel through the great cities of Europe, embarking on a Grand Tour to seek out leading academics in all sciences.
The next summer, at age twenty-two, he finally sailed for England. Between multiple near-fatal boating mishaps & a short jail stint for technically-illegal entry, to say the trip started tumultuously is putting it lightly (appropriately enough, London was on lockdown for the plague). Once in, however, he was warmly embraced into the buzzing Newtonian metropolis. A magnet for adventurous minds, he lodged with multiple craftsmen, living cheaply & absorbing from fields ranging from watchmaking to brass-instrument making. He eventually crossed paths with the eminent John Flamsteed, the first Royal Astronomer famous for cataloguing over 3,000 stars, & had the pleasure of being his assistant for a handful of months.
At First I Was A Watchmaker, Afterwards A Cabinetmaker, & Now I Am A Mathematical Instrument Maker — From Them I Steal Their Trades, Which Some Day Will Be Of Use To Me
Following his social-academic nomad compass, & a second allowance of money from home, twenty-three year-old Swedenborg settled in Oxford. Here, he crossed paths with a second influential astronomer — the great Sir Edmund Halley. Ahead of himself with his short-lived astronomy experience, Swedenborg claimed to have originated a method for finding the longitude at sea & attempted to present to the Royal Academy. Rejected, he temporarily withdrew from the scientific world to express his discouragement through a compilation of poems later released.
Mid-way through 1712 , at twenty-four, Swedenborg begins to feel financial pressure; indignant because his father left him without money, he remarks:
It Is Difficult To Live Like The Maid In Schone, Without Either Food Or Drink
Utrecht, Netherlands
To decrease his expenses & dive back into his scientific nature, he headed out to Utrecht, Netherlands. Here, he spends most of his time with Sven Palmqvist, the Sweden Ambassador at the Hague with a penchant for mathematics.
Midway through 1713, twenty-five year-old Swedenborg arrives in Paris; exhausted from non-stop studying & apprenticing, he falls seriously ill for six weeks. Upon recovering, he again vows to make an acquaintance of the most learned men in the area — quite successfully, he befriends De la Hire, Varignon, Bignon & a young Voltaire.
Next, in 1714, he headed to Rostock Germany. Turning his focus internally, his work proliferated. For one, he thought-through & sketched out fourteen mechanical inventions, among these a submarine & a flying machine:
In addition, he published another series of poems, entitled, “The Northern Muse Sporting With The Deeds Of Heroes & Heroines.”
The year he turned twenty-seven, 1715, he finally returned home to Sweden. True to his lifelong pattern, he immediately sought out & successfully came in contact with Christopher Polhem, the most famous Swedish inventor of that time; shortly afterward, Polhem made Swedenborg his assistant. Next, within a few months of networking within the top scientific circles, he came to the notice of none other than King Charles XII of Sweden; impressed by his knowledge & mechanical skills, King Charless XII appointed Swedenborg as Assessor-Extraordinary to the Swedish Board of Mines (Bergskollegium).
The following year, twenty-eight year-old Swedenborg founded the scientific journal, Daedalus Hyperboreus. Notably, the accepted scientific language of that time was Latin, yet Hyperboreus was published in Swedish — this made it the very first Swedish scientific journal, which heightened his celebrity.
In a frenzy of productivity, he spent his twenty-ninth year balancing three monumental projects (including Hyperboreus). It’s unclear if the motivation was curiosity or state orders, but Emanuel spent a lot of his time & energy engineering a machine for transporting Swedish battleships overland (from Strömstad to Idefjorden). Additionally, a derivative of his appointed occupation, he published a study of blast furnace construction and methods for iron smelting: “Beskrivning över svenska masugnar och deras blåsningar” (“Description of Swedish Blast Furnaces and Their Methods of Blasting Air”).
The following & final year in this mini-bio, thirty year-old Swedenborg continued his flurry of productivity. Previewing the full pivot into spirituality & religion, he published an article that attempted to explain spiritual & mental events in terms of minute vibrations, or “tremulations.”
Quirks, Rumors & Controversies
Time & time we’ve seen again — people with enormous strengths tend to balance this out with tragic weaknesses. Every bright light casts a large shadow; so, the question follows — what shadows followed Emanuel Swedenborg?
When it comes to negative characteristics, it’s must be noted that Swedenborg was financially-privileged & clearly spoiled. This is evident by his ill-feelings towards his father during this Grand Tour, when he capriciously complained about “living like a maid.”
When it comes to rumors & controversies, well, they were second-to-none in this series (perhaps except Sir Francis Bacon). A disclaimer here: I claim zero experience as a psychologist & I certainly don’t want to cast an opinion on religious views; therefore, I’ll refrain from interpreting his actions & simply convey historic opinions. For starters, it’s safe to say that the transition in his mid-50’s was far from smooth; he undoubtedly withstood heavy criticism from both communities (scientific & religious).
On the one hand, the Vatican & Church repeatedly blasted him for any teaching that deviated from their interpretation. Quite committed to his causes & faith, this hardly bothered Swedenborg who flat out rejects nearly one-half of the New Testament.
Similarly, the scientific community that he so diligently networked turned it’s back on him (one of the reasons for his lack of popularity relative to others in this series). For example, one of his theological publications argued for the piety behind using concubines; clearly (& rightfully so) a controversial topic for a revered scientific authority.
In Closing
Who Was Emanuel Swedenborg In His 20s? A highly-ambitious, curious, scientific student of life that networked & absorbed from the greatest minds he could find.
Was He Accomplished In His 20s? Yes —evident by his PhD, publications/sketches, founding of Daedalus Hyperboreus & personal appointment by King Charles XII, it’s hard to argue otherwise. It’s worth noting, however, that he had yet to produce his best, deeply original work.
A perfect product of his environment, Emanuel Swedenborg was the right polymath at the right time; a reflection of his time period, he grew up during era when religion’s dominance in the world of thought was quickly being challenged by science( Age of Enlightenment). Willing to peak over the edge of both cliffs, he nonetheless sustained a lifelong, immoderate appetite for useful knowledge; this love of utility outlines the golden thread which runs through all of his work, scientific or theological.
Additional Entries
Part I — Benjamin Franklin
Part II — Bertrand Russell
Part III — Leonardo Da Vinci
Part IV — Thomas Young
Part V — Mary Somerville
Part VI — Richard Feynman
Part VII — Sir Francis Bacon
Part VIII — Jacques Cousteau
Part IX — Nikola Tesla
Part X — Isaac Newton
Part XI — Thomas Jefferson
Part XII — Sir Jagadish Chandra Bose
Part XIII — Charles Babbage | https://medium.com/young-polymaths/young-emanuel-swedenborg-6833cb74831b | ['Jesus Najera'] | 2020-06-21 01:44:13.344000+00:00 | ['Biography', 'Sweden', 'Science', 'Religion', 'History'] |
Can I complain for a minute? | More from rstevens Follow
I make cartoons and t-shirts at www.dieselsweeties.com & @rstevens. Send me coffee beans. | https://rstevens.medium.com/can-i-complain-for-a-minute-7748ffe04613 | [] | 2019-09-11 03:28:20.988000+00:00 | ['Pedantry', 'Friendship', 'Psychology', 'Comics'] |
Which Python Package Manager Should You Use? | Nowadays Python is everywhere - academics, data science, machine learning, enterprise application, web application, scripting… you name it python is everywhere. Whatever you do, python is there either to help you or give you a headache.
Let’s say, you have learned python programming and ready to use to develop applications, surely, as that sounds great, you jump into coding python scripts and eventually start installing python packages. From there one follows a dangerous path into a developer’s nightmare.
Package installation may lead to having incompatibility issues or make other applications unworkable. And you may discover that your code does not work on some machines while it just works flawlessly on your local machine. Why??? It’s because of the Python environment.
To save yourself from incompatibility issues, a separate virtual python environment needs to be created for a project.
A virtual environment is a bunch of scripts and directories that can run python isolated. By using a virtual environment, each python project can have its own dependencies regardless of other projects and system python environments.
In this blog post, I would like to share with you my environment for working with data and doing machine learning. You most definitely do not need to copy anyone’s setup but perhaps use the one that best fit for you.
Every programmer has different preferences when it comes to their programming environment vim versus emacs , tabs versus spaces, virtualenv versus anaconda.
To start with, we need to talk about pip . A python person {what :O} knows that pip is Python’s package manager. It has come built into Python for quite a while now, so if you have Python, you likely have pip already.
pip installs packages like tensorflow and numpy , pandas and jupyter , and many more along with their dependencies. Many Python resources are delivered in some form of pip packages. Sometimes you may see a file called requirements.txt in someone’s folder of Python scripts. Typically, that file outlines all of the pip packages that the project uses, so you can easily install everything needed by using
pip install -r requirements.txt
As part of this ecosystem, there’s a whole world of version numbers and dependencies. You sometimes need to use different versions of a given library for different projects that you are working on.
So you need a way to organize groups of packages into different isolated environments. Otherwise, looking at the version errors you would want to bang your head against the wall.
There are two popular options currently for taking care of managing your different pip packages virtualenv and anaconda .
1) Virtualenv
Virtualenv is a package that allows you to create named virtual environments where you can install pip packages in an isolated manner. This tool is great if you want to have detailed control over which packages you install for each environment you create.
For example, you could create an environment for web development with one set of libraries, and a different environment for data science. This way, you won’t need to have unrelated libraries interacting with each other, and it allows you to create environments dedicated to specific purposes.
# install
pip install virtualenv # create environment
virtualenv venv # activate environment
source venv/bin/activate
2) Anaconda
Now, if you’re primarily doing data science work, Anaconda is also a great option. Anaconda is created by Continuum Analytics, and it is a Python distribution that comes preinstalled with lots of useful Python libraries
for data science. Anaconda is popular because it brings many of the tools used in data science and machine learning with just one install, so it’s great for having a short and simple setup.
Like Virtualenv, Anaconda also uses the concept of creating environments so as to isolate different libraries and versions. Anaconda also introduces its own package manager called conda from where you can install libraries.
Additionally, Anaconda still has a useful interaction with pip that allows you to install any additional libraries which
are not available in the Anaconda package manager.
Follow the instructions to download and install anaconda from here
# create environment
conda create — name test-env # activate environment
conda activate test-env # install additional packages
conda install tensorflow
To add more you have a nice UI to manage your projects and environment
So… which one to use, virtualenv or anaconda ?
Well, it’s nice to try out different libraries on both virtualenv and anaconda, but sometimes those two package managers don’t necessarily play nicely with each other on one system.
In my case, I have opted to use both, but I manage the whole thing using a library called pyenv
Conceptually, pyenv sits on top of both virtualenv and anaconda and it can be used to control not only which virtualenv environment or Anaconda environment is in use, but it also easily controls whether I’m running Python 2 or Python 3.
pyenv local 2.7.10
pyenv activate py27_tf12 pyenv local 3.5.2
pyenv activate py35_tf12
One final aspect of pyenv that it has an ability to set a default environment for a given directory. This causes that desired environment to be automatically activated when you enter a directory.
~/code $ cd myproject
(py35_tf12) ~/code/myproject $
I find this to be way easier than trying to remember which environment I want to use every time I work on a project.
So which package manager do you use?
It really comes down to your workflow and preferences. If you typically just use the core data science tools and are not concerned with having some extra libraries installed that you don’t use, Anaconda can be a great choice since it leads to a simpler workflow for your needs and preferences.
But if you are someone who loves to customize your environment and make it exactly like how you want it, then perhaps something like virtualen or even pyenv maybe more to your liking.
There’s no one right way to manage Python libraries, and there’s certainly more out there than the options that I just presented.
As different tools come and go, it’s important to remember that everyone has different needs and preferences, so choose for yourself the best one that fits your needs.
That’s it for this post, my name is Vivek Amilkanthwar. See you soon with one of such next time; until then, Happy Learning :) | https://medium.com/in-pursuit-of-artificial-intelligence/which-python-package-manager-should-you-use-150d9696f9db | ['Vivek Amilkanthawar'] | 2019-06-14 02:42:52.596000+00:00 | ['Machine Learning', 'Python', 'Virtual Environment', 'Python Development', 'Package Manager'] |
Respawn VMs like an RPG with Autohealing and Autoupdates | Respawn VMs like an RPG with Autohealing and Autoupdates
Season of Scale
Season of Scale
“Season of Scale” is a blog and video series to help enterprises and developers build scale and resilience into your design patterns. In this series we plan on walking you through some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises.
In Season 1, we’re covering Infrastructure Automation and High Availability:
In this article I’ll walk you through how to use autohealing and autoupdates to create health checks and maintain HA for GCP Compute Engine instances.
Check out the video
Review
So far we have looked at how Critter Junction launched and globally scaled a their gaming app on Compute Engine. With their growing daily active users, we helped them set up auto scaling and global load balancing to handle globally distributed and constantly rising traffic. Today let’s learn how they can make this social critter app more scalable by gracefully replacing failed instances.
A gaming nightmare
To keep their users from risking their daily game streaks, Critter Junction need to make sure their app is available all the time without interruptions.
One way to do that is to set up High Availability or HA at all layers of the stack. Though that can mean distributed databases, networks, and application servers, we’re focusing on their game servers running on Compute Engine.
We know that managed instance groups provide features such as autoscaling, regional (multiple zone) deployments, autohealing and auto-updating. Two features that can be tacked onto your configuration of Compute Engine are autohealing and autoupdates.
Autohealing helps proactively identify and replace the unhealthy instances (that are not responding) with healthy ones. Auto-updates help update the instances without disrupting the service
Autohealing
Let’s focus on Autohealing for a bit.
The first step is to create a health check, which not only detects whether the machine is running or not but also detects application-specific issues such as freezing, crashing, or overloading. If an instance is deemed unhealthy, new instances are created by the managed instance group.
We’re building on the instance configuration we created in the previous article.
First, create a health check in Compute Engine and give it a name. Set the protocol to HTTP. You can set the health check on any path, but let’s say the path is /health.
In our demo app we added code that ensures that /health returns 200 OK response when healthy, and HTTP 500 internal server error when unhealthy.
Set up the health criterion
Set Check interval to 10, which means every 10 seconds the service will be probed for health. Set timeout to 5. which means we wait for max 5 seconds for a response to a probe. Set a Healthy threshold to 2, which defines the number of sequential probes that must succeed for the instance to be considered healthy. And finally, set an unhealthy threshold to 3, which defines the number of sequential probes that must fail for the instance to be considered unhealthy. And then create.
As a best practice, you want the health check to be conservative so you don’t preemptively delete and recreate instances.
Add a health check to an existing instance
Now, let’s go to our instance group we created in the last episodes and add a health check to it.
Select the health check with an initial delay of 90 seconds.
Ideally this initial delay should be long enough for the instance to be fully running and ready to respond as healthy.
Simulate failures
Let’s have some fun with this and simulate failures now.
For that, we go to the VM instance and click on external IP and make it unhealthy. Wait for the autohealer to take action and you’ll see that the green checkmark next to the instance turns into a spinner, indicating that the autohealer has started rebooting that instance.
What about when you update an instance?
One of the other concerns when it comes to HA is applying updates to instances without impacting the service. Managed instances groups allow you to control the speed and scope of the update rollout to minimize disruptions to your application. You can also perform partial rollouts, which allows for canary testing.
Let’s see that in action now!
On our instance group click on the rolling update button. Rolling means it’s used for gradual updates. Add a second template for canary testing and select target size as 20%.
This means we want to send 20% of the traffic to the new instances for canary testing
3. Now, update mode is by default proactive which means Compute Engine actively schedules actions to apply the requested updates to instances as necessary. In many cases, this often means deleting and recreating instances proactively.
You can choose to perform an opportunistic update if a proactive update is potentially too disruptive. An opportunistic update is only applied when you manually initiate the update on selected instances or when new instances are created by the managed instance group.
4. Max surge means how many more instances you are willing to spin up as part of this update. A higher value here speeds up the update, but costs more for new instances. So you face a tradeoff between cost and speed.
5. Max unavailable and min wait time: keep them as zero. These parameters are used to control how disruptive the update is to your service and the rate at which the update is deployed.
And that’s it!
With our help setting up two high availability features within managed instance groups, Critter Junction has a much more resilient architecture. Autohealing proactively identifies unhealthy instances and heals them, while auto updates roll out new versions without disrupting the service. Stay tuned to find out what more is in store for Critter Junction.
And remember, always be architecting.
Next steps and references: | https://medium.com/google-cloud/give-your-vms-a-steady-pulse-with-autohealing-and-autoupdates-ae2c0828ecc9 | ['Stephanie Wong'] | 2020-09-01 23:26:12.210000+00:00 | ['Google Cloud Platform', 'High Availability', 'Computer Science', 'Software', 'Cloud Computing'] |
Time Series Forecasting With ARIMA Model in Python for Temperature Prediction | Time Series Forecasting With ARIMA Model in Python for Temperature Prediction Nachiketa Hebbar Sep 18 · 8 min read
Time Series forecasting is one of the most in-demand techniques of data science, be it in stock trading, predicting business sales or weather forecasting. It is clearly a very handy skill to have and I am gonna equip you with just that by the end of this article.
In this tutorial, we are gonna build an ARIMA model(don’t worry if you do not exactly know how this works yet) to predict the future temperature values of a particular city using python. GitHub link for the code and data set can be found at the end of this blog. I have also attached my YouTube video at the end, in case you are interested in a video explanation. So without wasting any time let’s get started.
Reading Your Data
The first step in any time series is to read your data and see how it looks like. The following code snippet demonstrates how to do that.
import pandas as pd
df = pd.read_csv('/content/MaunaLoaDailyTemps.csv', index_col='DATE', parse_dates=True)
df = df.dropna()
print('Shape of data', df.shape)
df.head()
The code is pretty straightforward. We read the data using pd.read_csv, and writing parse_dates=True makes sure that pandas understands that it is dealing with date values and not string values.
Next we drop any missing values and print the shape of the data. df.head() prints the first 5 rows of the dataset. Here is the output you should see for this:
Plot Your data
The next step is to plot out your data. This gives you an idea of whether the data is stationary or not. For those who don't know what stationarity means, let me give you a gist of it. Although I have made several videos on this topic, it all boils down to this:
Any time series data that has to be modeled needs to be stationary. Stationary means that it’s statistical properties are more or less constant with time. Makes sense, right? How else are you supposed to make predictions if the statistical properties are varying with time? These are the following properties that any stationarity model will have:
1. Constant Mean
2. Constant Variance (there can be variations, but the variations shouldn't be irregular)
3. No Seasonality (no repeating patterns in the data set)
So the first step is to check for stationarity. If your data set is not stationary, you'll have to convert it to a stationary series. Now before you start worrying about all of this, relax! We have a fixed, easy test to check for stationarity called the ADF (Augmented Dickey-Fuller) test. But before showing that, let's plot the data first.
Since I am only interested in predicting the average temperature, that is the only column I will be plotting.
df['AvgTemp'].plot(figsize=(12,5))
Checking For Stationarity
Right off the bat, we can see that it seems to have somewhat of a constant mean around 45. And the fluctuations also seem to be more or less the same. However, to be sure whether the data is stationary or not, we run a fixed statistical test using the following code:
from statsmodels.tsa.stattools import adfuller

def adf_test(dataset):
    dftest = adfuller(dataset, autolag='AIC')
    print("1. ADF : ", dftest[0])
    print("2. P-Value : ", dftest[1])
    print("3. Num Of Lags : ", dftest[2])
    print("4. Num Of Observations Used For ADF Regression:", dftest[3])
    print("5. Critical Values :")
    for key, val in dftest[4].items():
        print("\t", key, ": ", val)

adf_test(df['AvgTemp'])
You will get the output as follows:
You don’t need to worry about all the complex statistics. To interpret the test results, you only need to look at the p value. And you use the following simple method:
If p< 0.05 ; Data is stationary
if p>0.05; Data is not stationary
It's not a hard and fast rule, but a stationary data set should have a small p value. A larger p value could indicate the presence of certain trends (varying mean) or seasonality as well.
Finally, Decide your ARIMA Model
Now, although I have made several YouTube videos on this topic, if you do not fully understand what an ARIMA model is, allow me to present an easy overview:
ARIMA is composed of 3 terms (Auto-Regression + Integrated + Moving-Average)
Auto-Regression:
This basically means that you are using the previous values of the time series in order to predict the future. How many past values you use determines the order of the AR model. Here's what an AR(1) model looks like:
Y(t) = Some_Constant * Y(t-1) + Another_Constant + Error(t)
Simple enough, right?
2. Integrated:
So, remember our talk on stationarity, and how it's extremely important? Well, if your data set is not stationary, you most often need to perform some sort of differencing operation to make it stationary. If you difference with the previous value, it's order 1, and so on. Here's an example of that:
Forgive my bad drawing. But as you can see, the series Y(t) was not stationary, because of an increasing trend resulting in a varying mean. We simply subtract the previous value from each value and voila! It becomes stationary. Depending on your data, you might have to repeat the differencing to get a second order differencing, third order and so on.
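The article doesn't show code for this step, but a quick sketch of first- and second-order differencing with pandas, using our AvgTemp column, would look like this:

# First-order differencing: subtract each value from the previous one,
# then drop the leading NaN that the shift creates.
diff_1 = df['AvgTemp'].diff().dropna()

# Still not stationary? Difference the differenced series again (second order).
diff_2 = diff_1.diff().dropna()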
3. Moving Average:
This basically means that you are using previous errors to make the future prediction. Also makes sense, right? By seeing how wrong you were in your prediction, you take that into account to make a better prediction. And just like in an AR model, the number of previous errors (also called the number of lags) you use determines the order of the model.
Here's what the MA(1) equation looks like:
Y(t) = Mean + Some_Constant * Error(t-1) + Error(t)
So our main job is to decide the order of the AR, I, and MA parts, which are denoted by (p,d,q) respectively.
And before you start worrying, let me tell you everything is gonna be done automatically. The pmdarima library comes to our rescue! It does the job of figuring out the order of the ARIMA all by itself. Here's how the code snippet looks:
from pmdarima import auto_arima
stepwise_fit = auto_arima(df['AvgTemp'], trace=True,
suppress_warnings=True)
(Make sure to install the pmdarima library first using pip install pmdarima)
The code is pretty self explanatory. We simply supply our data to the auto_arima function. The function basically uses something called the AIC score to judge how good a particular order of the model is. It simply tries to minimize the AIC score, and here's how the output looks:
Model performance for different combination of orders
We can see the best ARIMA model seems to be of the order (1,0,5) with the minimum AIC score = 8294.785. With this knowledge we can finally proceed to train and fit the model to start making predictions!
Split Your Dataset
Before we actually train the model, we have to split the data set into a training and testing section. We do this because we first train the model on the data and keep the testing section hidden from the model. Once the model is ready, we ask it to make predictions on the test data and see how well it performs.
The following code snippet illustrates how to do that:
print(df.shape)
train=df.iloc[:-30]
test=df.iloc[-30:]
print(train.shape,test.shape)
So as you can probably tell, we are reserving the last 30 days of the data as the testing section. You can see the shapes of the actual data, and the testing and training sections in the output.
Shape of training and testing section
Finally, We get to the Juicy Stuff!
Surprisingly, creating the ARIMA model is actually one of the easiest steps once you have done all the prerequisite steps. It’s as simple as shown in the code snippet below:
from statsmodels.tsa.arima_model import ARIMA
model=ARIMA(train['AvgTemp'],order=(1,0,5))
model=model.fit()
model.summary()
As you can see we simply call the ARIMA function, supply it our data set and mention the order of the ARIMA model we want. You will be able to see the summary of the model in your output as well.
Model Summary
You can see a whole lot of information about your model over here. Also you will be able to see the coefficients of each AR and MA term. These are nothing but the value of the variables that you saw in the previous AR/MA model equation which were labelled as ‘Some_Constant’. Generally a higher magnitude of this variable means that it has a larger impact on the output.
Check How Good Your Model Is
Here’s where our test data comes in. We first make prediction for temperature on the test data. Then we plot out to see how our predictions compared to the actual data.
start=len(train)
end=len(train)+len(test)-1
pred=model.predict(start=start,end=end,typ='levels').rename('ARIMA Predictions')
pred.plot(legend=True)
test['AvgTemp'].plot(legend=True)
To actually make predictions, we need to use the model.predict function and tell it the starting and ending index in which we want to make the predictions.
Since we want to start making predictions where the training data ends, that is what I have written in the start variable. We want to stop making predictions when the data set ends, which explains the end variable. If you want to make future predictions as well, you can just change the start and end variables to the indexes you want. Your output plot should look like this:
Test values vs Predictions Plot
As you can see, the predictions do a pretty good job of matching the actual trend, although there is a certain acceptable lag.
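Earlier we noted that you can change the start and end variables to forecast past the data set. Mirroring the article's own predict call, a sketch of a future forecast would be (the 30-day horizon is just an illustration):

# Indexes beyond the end of df give genuine future predictions.
future_pred = model.predict(start=len(df), end=len(df) + 30,
                            typ='levels').rename('ARIMA Forecast')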
Check your Accuracy Metric
To actually ascertain how good or bad your model is we find the root mean squared error for it. The following code snippet shows that:
from sklearn.metrics import mean_squared_error
from math import sqrt
test['AvgTemp'].mean()
rmse=sqrt(mean_squared_error(pred,test['AvgTemp']))
print(rmse)
First we check the mean value of the data set, which comes out to be 45. And the root mean squared error for this particular model should come to around 2.3. What you should also care about is that your root mean squared error should be much smaller than the mean value of the test set. In this case we can see the average error is gonna be roughly 2.3/45 * 100 ≈ 5.1% of the actual value.
So with that your ARIMA model is ready to go! In future blogs I am gonna talk about different models and how you can increase the accuracy of the model further.
If you are interested in the video explanation of the same, head over to my YouTube channel for more such content! You can find the GitHub link for the code and data set here: https://github.com/nachi-hebbar/ARIMA-Temperature_Forecasting | https://medium.com/swlh/temperature-forecasting-with-arima-model-in-python-427b2d3bcb53 | ['Nachiketa Hebbar'] | 2020-09-21 10:56:06.473000+00:00 | ['Python', 'Time Series Forecasting', 'Time Series Analysis', 'Arima', 'Statsmodels'] |
Let the Chosen Ones Choose | I hear the goddesses speaking.
Haughty, scornful Isabeau derides and dismisses her.
“We go from a beautiful, Amazon princess warrior to this joke, a dysfunctional misfit who wants to be a hero and a librarian. A librarian!”
She considers her unworthy and takes the child’s death as a certainty. | https://medium.com/genius-in-a-bottle/let-the-chosen-ones-choose-4a98b4b24be5 | ['Susannah Mackinnie'] | 2020-11-10 01:09:34.800000+00:00 | ['Storytelling', 'Childhood', 'Fantasy', 'Poetry', 'Fate'] |
Using emotions to drive conversions | Using emotions to drive conversions
This blog post is based on Cugelman Emotion Map and draws heavily from his course on “Digital Psychology & Behavioral Design” at CXL Institute.
Photo by Tengyart on Unsplash
An emotion is a complex set of physiological changes in response to a perceived threat or opportunity. They’re automatic and mostly unconscious, which is why we’re never fully aware of all the changes we’re experiencing.
Emotions drive behavior or put another way motivation is an emotion that facilitates action. If you have read my previous post on Using Neuromarketing to Drive Conversions, you came across a fascinating study done by researchers at UT Austin where participants were shown photos of two chickens. The research concluded that we are ruled by our emotions first, and then we build justifications for our response since we want to be considered scientific and rational.
So understanding emotions and designing with that in mind is key to giving users the right experience for them.
Cugelman’s Emotion Map
Cugelman’s emotion map is based on the “dimensional approach” wherein every emotion either boosts or lowers three dimensions: (1) arousal, (2) pleasure, and (3) control. Arousal is the level of physical and cognitive energy experienced which can span from feeling energized and focused on one hand to feeling lethargic and unfocused on the other hand.
The pleasure dimension describes how pleasurable or painful emotions feel. And finally, control describes how much power someone feels that they hold in any situation. When people possess more power or control, they generally feel calmer and more confident. With less power and control, people can’t fully predict the outcome of a situation, which can lead them to feel higher levels of stress.
Dr. Cugelman essentially divided these dimensions into four emotional quadrants as follows:
Optimistic: This quadrant of highly-arousing and pleasurable emotions is where people feel control, motivation and pleasure. These emotions compel users to act in anticipation of a reward.
Pessimistic: The quadrant with low-arousal, negative emotions and a lack of control is where people experience feelings of powerlessness, helplessness, shame, humiliation, pessimism, and lethargy which demotivate any action as users eventually give up.
Insecure: This quadrant with high-arousal and painful emotions is where users react to any threat of losing control, and experience emotions such as urgency, suspicion, vigilance, fear, stress, and anxiety. People describe these strategies as pressure tactics, and their primary drive comes from the emotions that underpin loss aversion.
Secure: The low-arousal positive emotions are where people let their guard down, within a context where they feel secure, grateful, trusting and generally content. This is where our target audiences or users trust us so much, that they feel secure and trusting interacting with our organization, and shift into a long-term trusting relationship which is known as loyalty. These are the emotions where we want our customers to reside, as these are the emotions tied to loyalty, and complacency. This is where you form long-term relationships with the people who matter to your organization.
What causes people to act is usually loss aversion (insecure emotions) and achievement (optimistic emotions). When people feel either very helpless or very secure, they are less likely to act.
Example
Let’s look at how a company like Dollar Shave Club uses emotions on their homepage. | https://medium.com/design-bootcamp/using-emotions-to-drive-conversions-9588de9c569e | ['Bithika Mehra'] | 2020-11-10 19:38:41.023000+00:00 | ['Experiment', 'A B Testing', 'User Research', 'Conversion Optimization', 'Psychology'] |
The Bank Robber Jericho Brown | My name is Henry James and I’m a writer for Dark Sides of the Truth Magazine.
Part I, Part II, Part III, Conclusion
Six weeks ago I woke up in a hospital with a massive headache and my left arm in a cast. I also met Baxter Huntley, a doctor who’d been dead for twenty five years.
Yeah, I know. In my line of work strange things are the rule rather than the exception. I’ve gotten used to the weird.
My personal doctor, after removing the cast, prescribed another eight weeks of physical therapy.
I took it as more of a suggestion.
For those of you who read my stories you know I have an aversion to exercise of any kind. Just not going to happen.
Unless you’re talking about walking from the car to the inside of a fast food joint.
Speaking of cars, I was forced to buy another steel stallion. The eighteen wheeler which smashed into my ass did a real number on my old ride. But the silver lining was the settlement check from the trucking company paid all my hospital bills and the cost of my new car.
So all in all, things were looking pretty good.
But as you folks should know by now my situation changes faster than Texas weather.
It all started with a trip to my local bank to withdraw a little traveling money from what was left of the settlement.
I’d been a customer at the bank so long I knew most of the people there on a first name basis. It was always a “hey Henry, hello Mr. James” moment when I walked in.
But not today.
Nobody was saying a damned word.
You know that feeling you get when something’s not quite right? You’re looking at what should be a normal situation and something seems out of place, but you just can’t put your finger on it?
Yeah, that feeling.
For some reason, my spidey sense started jangling big time. I should have listened to my brain yelling at me to get the hell out of there.
And…I didn’t.
I watched as two drive in tellers walked through the opening behind the counter followed by a skinny kid with stringy bleached hair beneath a ball cap.
He was wearing sunglasses and brandishing a shotgun.
Okay, so I’d just walked my way into the middle of a bank robbery. Now it was time to shift into reverse and walk my way out.
The minute he saw me he vaulted the counter and aimed the shotgun in my direction.
“Get on the fucking floor man! Get down or I swear to God I’ll cut your fat ass in half!”
As I struggled to comply I must say I was a little pissed.
I may be a few pounds on the heavy side, but come on fat ass?
Really?
“Everybody do what I tell you and nobody gets hurt. Ladies pop your tills, then come out from behind the counter slowly. Do it now. You people in the offices get your asses out here with old man fatty and sit your asses down. Everybody move it now!”
I tell you folks if he hadn’t had that shotgun I would have gotten up and punched him in the face. The nerve of that scrawny ass punk calling me fat two times in less than thirty seconds.
“Who has the keys to the front door?”
We all looked at Robert Sanford. Bob was the bank’s branch manager for as long as I could remember. I knew Bob pretty well. Hell, we’d even spent time in a deer stand together freezing our asses off. As cold as that day was, I was guessing Bob would have traded it for the situation we were all in now. Slowly, he raised his hand.
“Okay sir, the keys are in my pocket. I need to pull them out.”
“Get the fuck up and get your ass to the door. If you so much as flinch and I don’t like it I’ll cut you in half. You understand?”
“Yes, sir.”
We all watched Bob inch his way to the front door, lock it then return and sit down. Despite the chilled air from the units on the roof all of us were sweating.
“Give me the damned keys.”
Bob held up the keys and the man snatched them and jammed them in his pocket.
“Listen up people. I’m just here for the money. I don’t want to hurt nobody. Once I’m done with the tills, I’m going to take whatever money you got then I’m outta here. Nobody tries to be a hero and everybody stays alive. Got it?”
I’m guessing in the punk’s brain he was thinking everything was going as planned. A few more minutes and he would be out the door with a butt load of money.
When the flashing lights of several police cars bounced and flickered their way into the bank’s lobby the shit started to get real. My guess is one of the bank tellers must have triggered a silent alarm.
I believe Robert Burns said it best.
The best laid schemes o’ mice and men gang aft a-gley.
The Texas version however, is almost as poetic.
Boy you done bit off way more’n you can chew.
I can tell you what was running through all of our minds right then. We’d all heard about hostage situations and as skittish as our well armed captor was it was quite possible one of us could get hurt.
Like dead hurt.
“Shit, shit, shit, SHIT, SHIT!!”
The punk kept repeating the one word as he darted back and forth from the edge of the double glass doors to where we sat.
“SHIT, SHIT!”
Okay somebody had to try and calm the kid down. I was thinking we wouldn’t get a chance to draw straws or pick a spokesperson with a rousing game of rock, paper, scissors so I took a deep breath, let it out then craned my neck to lock eyes with the guy.
“Son, I think you need to breathe a bit, and try to calm down.”
Whoa, that sure put a cap on it.
He stopped pacing and aimed the shotgun at my head.
“I don’t think I want to hear a Goddamned word from you grandpa. Why don’t you just shut the fuck up?”
“First off you little piece of shit I ain’t your granddaddy. If I was I’d be kicking your ass right about now. Second off, you need to hear what your fucking options are before you kill somebody or the police start busting caps in your ass. How about we start with an introduction. My name’s Henry, Henry James. What’s yours?”
“Henry James? The Henry James what writes all that weird shit in the Dark Sides? That Henry James?”
I’m thinking my answer might bring about two reactions. One, this punk was a loyal fan who read my work and loved me. Two, this guy hated my shit so bad he’d as soon shoot my ass as look at me.
Oh well. There’s days when you just gotta lay it out there.
“One and the same.”
“Well ain’t this a kick in the head? I read that last story you and that chick wrote.”
“Sunny Alexander?”
“Yeah that story was totally whack.”
“Whack like in good?”
“Oh dude, that shit was crazy good. So did you guys really talk to a ghost? Like for real?”
“Yeah, except he wasn’t pointing a shotgun at my ass and trying to steal my money. Okay so you know who I am. Now it’s your turn.”
Before the young man could answer one of the phones began to ring, then another, and another. In seconds every phone in each of the five offices was warbling.
“I believe that’s for you son.”
READ ON — THE BANK ROBBER JERICHO BROWN PART II
Let’s keep in touch: [email protected] | https://medium.com/dark-sides-of-the-truth/the-bank-robber-jericho-brown-7d34cdce4e5c | ['P.G. Barnett'] | 2019-08-23 19:12:48.020000+00:00 | ['Storytelling', 'Fiction', 'Short Story', 'Fiction Series', 'Henry And Sunny'] |
3 Easy Tricks to Get Started with Python (and Ditch Excel!) | So you’ve taken the mental leap and want to learn Python — that’s awesome! But where to start?
Let me guide you through tasks you already know how to do in Excel and how to do them in Python!
You’ll be using Pandas, the primary data analysis library in Python.
If you need to install Pandas, you can do this using pip or conda:
pip install pandas
#or
conda install pandas
Pandas loads data into dataframes, which you can think of as Excel sheets.
Let's load a dataset into a dataframe. You can accomplish this by using the following code. We'll also explore the first five rows by using the Pandas head method:
Generating our dataframe with Pandas. Source: Nik Piepenbreier
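The load step is shown as an image in the article; a sketch of what it describes (the file name and format are assumptions) looks like this:

import pandas as pd

# Read the sample workbook and peek at the first five rows.
df = pd.read_excel('sample_data.xlsx')
df.head()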
To follow along with the tutorial in Excel as well, the file can be found here.
There are a number of differences between Excel sheets and Pandas dataframes. Let’s take a quick look!
Comparing Excel worksheets and Pandas dataframes. Source: Nik Piepenbreier
Ok, now let’s get started!
Filtering, Sort, & Reorder Columns
Excel is a much more visual tool, which makes it easy to click a button that abstracts the function behind what you want to accomplish.
Sort by a Single Column
Sorting a single columns in Excel. Source: Nik Piepenbreier
For example, if you wanted to sort a column in Excel, you could simply:
Select the Data tab,
Highlight the column you want to sort, and
Click Sort A to Z or Sort Z to A.
To do this in Pandas, you would write:
Sorting by a single column in Pandas. Source: Nik Piepenbreier
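Since the code above is shown as an image, here's a sketch of the call it depicts (sorting by the Sales column is an assumption):

# Ascending by default; pass ascending=False for Z-to-A style sorting.
df.sort_values(by='Sales')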
Sort by Multiple Columns
Sometimes you might want to sort by multiple columns.
Sorting by multiple columns in Excel. Source: Nik Piepenbreier
Again, Excel makes this easy:
Click on the Data tab
Click on Sort
Enter the columns you want to sort by
To do this with Pandas, simply add a list to the “by” argument:
Sorting by multiple columns in Pandas. Source: Nik Piepenbreier
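Again, the image boils down to a call like this sketch (the column names are assumptions):

# Sorts by Region first, then breaks ties by Sales.
df.sort_values(by=['Region', 'Sales'])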
Filtering Columns
Filtering columns is an easy task in Excel! Simply click on the Data tab, then Filter. This creates arrows on all the column headers. When you click on these, simply fill in your selection:
Filtering columns in Excel. Source: Nik Piepenbreier
In Pandas, this is just as easy:
Filtering a column in Pandas. Source: Nik Piepenbreier
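As a sketch of what the image shows (the column and value are assumptions):

# Keep only the rows where the condition is True.
df[df['Region'] == 'East']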
What's great about this is that you can also use comparison operators to select more than 10 units, or select based on multiple conditions:
Comparison : You can use > (greater than), < (less than), == (equal to), >= (greater than or equal to), and <= (less than or equal to),
: You can use > (greater than), < (less than), == (equal to), >= (greater than or equal to), and <= (less than or equal to), ‘And’ Conditions : wrap each condition in brackets and separate the brackets with an ampersand (&)
: wrap each condition in brackets and separate the brackets with an ampersand (&) ‘Or’ Conditions: wrap each condition in brackets and separate the brackets with a pipe (|)
Different types of filters in Pandas. Source: Nik Piepenbreier
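Sketches of the three styles described above (the 'Units' threshold comes from the text; the 'Region' value is an assumption):

df[df['Units'] > 10]                               # comparison
df[(df['Units'] > 10) & (df['Region'] == 'East')]  # 'and' condition
df[(df['Units'] > 10) | (df['Region'] == 'East')]  # 'or' condition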
Reordering Columns
Reordering columns is more of a visual cue for yourself.
To drag a reorder a column in Excel, you’d select the column by click its header, hover on the side until the cursor changes to a four-pointed arrow, then hold the SHIFT key and drag the column to a new position:
Moving a column in Excel. Source: Nik Piepenbreier
To accomplish the same thing in Pandas, you simply write the columns you want, in the order you want, into a double set of square brackets:
Re-ordering columns in Pandas. Source: Nik Piepenbreier
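In sketch form (the column names are assumptions), that double set of square brackets looks like this:

# The outer brackets index the dataframe; the inner list sets the new order.
df = df[['Region', 'Type', 'Units', 'Sales']]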
Pivot Tables (with Percentages)
Pivot tables are one of those things that take you to the next level in Excel.
They allow you to easily summarize data quickly, without needing to rely on complex formulas.
Say you wanted to know the total value of sales in each region, you would:
Select your data and click PivotTable on the Insert Tab and click OK to create your table. Drag Region into the Rows box, and Sales into the values tab. Excel automatically assumes we want to add up the values.
Creating a pivot table in Excel. Source: Nik Piepenbreier
To accomplish the same thing in Pandas, you could simply use the pivot_table function:
Creating a pivot table in Pandas. Source: Nik Piepenbreier
Let’s break this down a little bit:
We create a new variable called pivot.
We use the pandas pivot_table function. Our first argument is the dataframe df.
The index argument is ‘region’, which tells Pandas to create rows based on the ‘Region’ column.
We assign the argument values the field ‘Sales’, to let Pandas know we want to calculate the Sales column.
Finally, we use the aggfunc (‘aggregation function’) argument to tell Pandas to sum up the values. The default value is ‘mean’.
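Putting the breakdown above together, the call is a one-liner:

pivot = pd.pivot_table(df, index='Region', values='Sales', aggfunc='sum')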
For a deep dive into the Pandas Pivot Table function, check out my other post on Pivot Tables in Pandas.
Show Pivot Table Values as Percentages
You may want to show your values as percentages of the column total. Again, Excel makes this very easy:
Calculating percentages in pivot tables in Excel. Source: Nik Piepenbreier
Simply right-click on a value,
Select Show Values as → % of column total
It’s just as easy to do this in Pandas. The easiest way is to create a new column for this:
Calculating pivot table percentages in Pandas. Source: Nik Piepenbreier
Let’s take a look at what’s happening:
We declare a new column by using pivot[‘% of column total’] — this assigns the name ‘% of column total’ to the column
We then divide each value in the row (pivot['Sales']) by the sum of the entire column (pivot['Sales'].sum()) and multiply it by 100
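As code, the two steps above are:

pivot['% of column total'] = pivot['Sales'] / pivot['Sales'].sum() * 100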
Creating Charts
Creating a chart in Excel. Source: Nik Piepenbreier
Now, what if you wanted to create some charts, this is incredibly easy in both Excel and in Python.
Let’s look at Excel first. If you want to plot this pivot table out as a column graph:
Place your pointer on one of the cells in the table,
Go to Insert → 2-D Column Chart
In Pandas, this is even easier. Pandas comes with one of Python’s top data visualization libraries functionality built-in. It’s as easy as adding .plot(kind = ‘bar’) to the end of your code:
Creating a chart in Pandas and Matplotlib. Source: Nik Piepenbreier
This might look a little daunting. Let’s break it down:
Import pyplot from matplotlib as plt
Re-write your earlier code (lines 2–3)
Now plot out the ‘Sales’ column and assign a kind = ‘bar’ as an argument
Finally, save the file by using the savefig method.
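Assembled into one sketch, those four steps look like this (the output file name is an assumption):

import matplotlib.pyplot as plt

pivot = pd.pivot_table(df, index='Region', values='Sales', aggfunc='sum')
pivot['Sales'].plot(kind='bar')
plt.savefig('sales_by_region.png')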
Note: if you’re using Jupyter notebooks, you can display the chart inline by writing this code after importing your library:
%matplotlib inline
Bonus Tip: Format Values Properly
While you’re working with data, it may be helpful to format your values properly.
Formatting your data in Excel. Source: Nik Piepenbreier
For example, formatting currencies as dollars (etc.) or percentages as percentages.
To do this in Excel, you would:
Select the values you want to format,
Under the Home tab in the Number section, select your desired type
Pandas hides this away a little bit, which is something that can stump newcomers quite a bit.
An easy way to accomplish this would be using the apply() function. What this function does is take a series and apply another function to it. The function being applied would be the one formatting the values.
If you wanted to format the Sales column in the pivot dataframe, you could write:
Formatting values in Pandas. Source: Nik Piepenbreier
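The image boils down to a sketch like this (note that naming the function format shadows Python's built-in of the same name, but it matches the breakdown below):

def format(x):
    # Wrap the number as dollars with thousands separators and two decimals.
    return "${:,.2f}".format(x)

pivot['Sales'] = pivot['Sales'].apply(format)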
This might seem a little roundabout (and it is), but it does give you a ton of flexibility in terms of how you style your data. Let's take a closer look:
We define a function called format() that takes one argument (x)
The function only serves to return a formatted value using string formats in a particular format.
The ${:,.2f} part represents the actual formatting. The colon (:) is used to signify the beginning of the formatting, the comma (,) is used to signal comma separators for thousands, and the .2 signals two decimal places.
This notation can be a little tricky to get used to and I tend to google the style I want and copy and paste it.
Similarly, if you wanted to style percentages, you could write:
Formatting percentages in Pandas. Source: Nik Piepenbreier
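A sketch of that formatter; the exact format string is an assumption, but {:.2%} both multiplies by 100 and appends the % sign, which matches the note below:

def format_percent(x):
    return "{:.2%}".format(x)

pivot['% of column total'] = pivot['% of column total'].apply(format_percent)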
In this code, we created another function called format_percent() and went through similar steps as above.
Note: The ‘% of column total’ column has been amended to not multiply the value by 100. This is because the formatter does this automatically.
Where You Can Learn More
Thank you so much for reading this article! I hope you found it useful!
I have written a number of articles that explain how to take on common and advanced Excel tasks in Python, if you want to check them out!
If you’re ready to take the plunge into Python and Pandas, I have also written an eBook that provides a complete introduction to Python, Pandas, and Matplotlib and will have you up and running in no time!
You can find it by clicking this link.
Have a great day! | https://towardsdatascience.com/get-started-with-python-and-ditch-excel-85e7f67318b | ['Nik Piepenbreier'] | 2020-05-25 04:12:33.753000+00:00 | ['Data Science', 'Python', 'Software Development', 'Coding', 'Data'] |
The importance of branding for nonprofits: How to tell a story that wins you followers | Branding for nonprofits is not just a fundraising tactic. In fact, it’s not a tactic at all. Branding is a strategy that serves as the compass for your different initiatives.
A strong brand builds your trustworthiness and credibility. And the best way to grow trust and connect people to your nonprofit is through brand storytelling.
Your story creates a connection between your organization and individuals. It invites them to get involved and shape a future narrative.
We’re not just telling a story to entertain. The goal is to tell a story that draws people in and inspires them to share it with their own connections. You want to engage your audience in a powerful way that elicits emotional connection that translates to taking action.
Brand storytelling framework
You may doubt your ability to tell a captivating story. You may even doubt that your nonprofit has a story worth telling. But I say, baloney.
You don’t lack the ability and you’re not without a story. What you’re missing is a framework to weave your nonprofit’s brand into a story that wins you followers.
For this, we’ll turn to Joseph Campbell.
Campbell, a mythological researcher, was best known for his work in comparative mythology and religion. Through his studies he discovered a common narrative pattern called the hero’s journey.
The hero’s journey provides a foundational framework that can be applied to crafting a captivating brand story that has emotional appeal, is relatable, and will draw people in.
We’ll roughly follow Campbell’s concept to illustrate how you can use the hero’s journey to tell your nonprofit’s brand story. A quick case study on the nonprofit charity:water will provide a concrete example of how to put the framework into action.
Step one: Identify your main character and their trigger
Every story needs a main character or protagonist. This character (or hero) is someone or something that your audience can identify with and have empathy for.
Your main character is likely your founder, but it could also be your nonprofit as a whole, your product, or service.
Once you’ve identified your main character, list out their main personality traits and attributes. These attributes can help you connect with your audience in a humanistic way.
Charity:water’s main character is its founder Scott Harrison. Years of promoting nightclubs and fashion events in New York left him financially successful but spiritually bankrupt.
This tension and frustration was his trigger. Harrison asked himself, “What would the opposite of my life look like?”
He leaned in to the trigger and signed up for an eight-month volunteer position on Mercy Ships: hospital ships that provide free medical service to the world’s poorest nations.
Step two: Address the conflict
Your hero’s conflict doesn’t need to be absolutely gut-wrenching to be inspiring or believable. But without conflict, your story is a lullaby.
Since you’re a nonprofit, the conflict to include in your story is likely found in your mission statement. Your conflict is the issue or problem you are determined to solve.
For Cornerstone Associates, a client of the MAC, the main conflict it addresses is the barriers people with mental and physical disabilities face in becoming integrated into the community.
For Harrison, when he was volunteering with Mercy Ships, he encountered a level of poverty and disease that he didn’t know existed. This experience conflicted with what he believed to be an acceptable standard of living and became the impetus for starting his nonprofit.
When speaking to their followers, charity:water invites people to imagine what life would be like without water. This stirs up an inner conflict with the goal of encouraging people to get involved with the nonprofit’s cause.
Step three: The revelation
The struggle with conflict leads to the eventual revelation that the issue or problem can be addressed.
For your nonprofit’s story, the revelation could be how your founder discovered a solution to a specific problem or how a service you offer addresses a specific need.
When Harrison returned to New York, he was determined to address the medical problems related to inadequate access to clean drinking water.
On his 31st birthday, he launched charity:water by asking for donations of $31 instead of gifts. This first step to addressing the identified problem brought in $15,000 and helped build the nonprofit’s first wells.
For followers of charity:water, they understand how their financial contributions or sweat equity contribute to improved health and quality of life.
Step four: Leading the transformation
This is the part of the story where you show how your nonprofit is solving an issue in a unique way. Through the transformation, you illustrate the value you provide. This transformation invites your followers to to connect with you on an emotional and logical level.
The brand story for charity:water contains two transformations. Harrison was personally transformed, and through his nonprofit he is transforming the lives of others.
Finding your nonprofit’s brand story
Now that you have a foundational framework, get out there and look for your brand stories. Stories are developing every day; you just have to be willing to look.
Look for small, specific stories that bring your brand to life. How did a project help one person? Was a volunteer transformed by working with your nonprofit?
Win followers
By working with the hero’s journey framework and being on constant alert for intriguing stories, you’re well armed to craft a story that solidifies your nonprofit brand in people’s minds and wins you followers.
When you tell a captivating story, you humanize your nonprofit and invite connection. When you strike a chord, your story will inspire people to follow along and actively help you achieve your mission. | https://medium.com/madison-ave-collective/the-importance-of-branding-for-nonprofits-how-to-tell-a-story-that-wins-you-followers-1359227aaafc | ['Hanna Knowles'] | 2018-01-16 18:32:38.631000+00:00 | ['Storytelling', 'Nonprofit', 'Branding'] |
SQLZoo: The Best Way to Practice SQL | SQL is a useful skill to have for many roles. No matter the industry, there’s going to be data stored in databases and SQL is the best way to get to it. And Data Scientists, in particular, need to be experts for quick access to high quality data. While most of us in tech have a decent grasp on the basics, we may lack the opportunities to push those skills further in our day-to-day work.
In comes SQLZoo — a great place to test your skills and rebuild rusty ones. You can use it for interview prep, or to stay sharp on the job and impress your boss. Here, I’ll introduce SQLZoo and why you should check it out, as well as a useful link to some SQLZoo answers for double-checking!
SQLZoo is a well established online platform (since 1999) for writing and running SQL queries against a live database. This means you can see the actual result of your query without having to scrupulously check your query matches a solution — it’s the result that matters. This is important because there are often many approaches to difficult questions, with one not necessarily being the best.
They have an educational section, but what you’re looking for are the “Assessments”. These contain more involved examples that allow you to deep-dive into a database at varying levels of difficulty. My favourite problems were under the White Christmas challenge which doubled as a good learning experience for the history of the famous “White Christmas”. Other good ones are Help Desk and Guest House which have detailed diagrams explaining the database as well as some more challenging problems.
At some point you might want to check that your SQL looks good, and for that, you can check my solutions for some of the problems on Github (contributions encouraged!). Writing good quality SQL queries is not so straightforward, as you need to consider readability, speed, efficiency, robustness — all of which matter for businesses. While you're trying out the problems, think about other ways you could have approached them. What would have been a more concise way to write it? How could you have been more efficient? What would happen if some of the columns contained NULL values?
It’s worth noting that SQLZoo is built with a MariaDB Server supporting MySQL. For someone like myself who works mostly with BigQuery’s StandardSQL or PostgreSQL this meant some of the techniques I would normally apply wouldn’t work. This was frustrating at first, but at the same time it’s a good chance to practice other techniques that you might not think of when using the variant of SQL you regularly work with.
Finally, there are other platforms out there with similar services. A small list:
w3resource — another great free resource for writing queries.
The SQL Murder Mystery — another one of my favourites thanks to its fun, interactive environment that has you feeling like a top secret agent.
Interview Query — a platform dedicated to data scientists to practice their SQL. If you’re serious worth looking into, but it’s a paid service.
TestDome — another platform for interview practice.
For practicing your general coding skills, there are many great, modern platforms such as Leetcode but SQL is a skill which tends to get less appreciation. Use SQLZoo to practice, test and improve your skills to bring your SQL to the next level.
PS. Let me know if you found any great problems or found some better solutions to mine — happy SQLing! | https://towardsdatascience.com/sqlzoo-the-best-way-to-practice-sql-66b7ccb1f17a | ['Jye Sawtell-Rickson'] | 2020-09-05 15:20:31.065000+00:00 | ['Data Science', 'Sql', 'Data Analysis', 'Data Engineering', 'Data'] |
Deep-dive into Spark internals and architecture | Apache Spark is an open-source distributed general-purpose cluster-computing framework. A Spark application is a JVM process that's running user code using Spark as a 3rd party library.
As part of this blog, I will be showing the way Spark works on Yarn architecture with an example and the various underlying background processes that are involved such as:
Spark Context
Yarn Resource Manager, Application Master & launching of executors (containers).
Setting up environment variables, job resources.
CoarseGrainedExecutorBackend & Netty-based RPC.
SparkListeners.
Execution of a job (Logical plan, Physical plan).
Spark-WebUI.
Spark Context
Spark context is the first level of entry point and the heart of any Spark application. Spark-shell is nothing but a Scala-based REPL with Spark binaries, which will create an object called sc, the spark context.
We can launch the spark shell as shown below:
spark-shell --master yarn \
--conf spark.ui.port=12345 \
--num-executors 3 \
--executor-cores 2 \
--executor-memory 500M
As part of the spark-shell command, we have specified the number of executors. This indicates how many executors are to be used, and the number of cores each of these executors gets to execute tasks in parallel.
Or you can launch spark shell using the default configuration.
spark-shell --master yarn
The configurations are present as part of spark-env.sh
Our Driver program is executed on the Gateway node which is nothing but a spark-shell. It will create a spark context and launch an application.
The spark context object can be accessed using sc.
After the Spark context is created it waits for the resources. Once the resources are available, Spark context sets up internal services and establishes a connection to a Spark execution environment.
Yarn Resource Manager, Application Master & launching of executors (containers).
Once the Spark context is created it will check with the Cluster Manager and launch the Application Master i.e, launches a container and registers signal handlers.
Once the Application Master is started it establishes a connection with the Driver.
Next, the ApplicationMasterEndPoint triggers a proxy application to connect to the resource manager.
Now, the Yarn Container will perform the below operations as shown in the diagram.
ii) YarnRMClient will register with the Application Master.
iii) YarnAllocator: Will request 3 executor containers, each with 2 cores and 884 MB memory including 384 MB overhead
iv) AM starts the Reporter Thread
Now the Yarn Allocator receives tokens from Driver to launch the Executor nodes and start the containers.
Setting up environment variables, job resources & launching containers.
Every time a container is launched, it does the following 3 things.
Setting up env variables
Spark Runtime Environment (SparkEnv) is the runtime environment with Spark’s services that are used to interact with each other in order to establish a distributed computing platform for a Spark application.
Setting up job resources
Launching container
YARN executor launch context assigns each executor with an executor id to identify the corresponding executor (via Spark WebUI) and starts a CoarseGrainedExecutorBackend.
CoarseGrainedExecutorBackend & Netty-based RPC.
After obtaining resources from Resource Manager, we will see the executor starting up
CoarseGrainedExecutorBackend is an ExecutorBackend that controls the lifecycle of a single executor. It sends the executor’s status to the driver.
When ExecutorRunnable is started, CoarseGrainedExecutorBackend registers the Executor RPC endpoint and signal handlers to communicate with the driver (i.e. with CoarseGrainedScheduler RPC endpoint) and to inform that it is ready to launch tasks.
Netty-based RPC - It is used to communicate between worker nodes, spark context, executors.
NettyRPCEndPoint is used to track the result status of the worker node.
RpcEndpointAddress is the logical address for an endpoint registered to an RPC Environment, with RpcAddress and name.
It is in the format as shown below:
This is the first moment when CoarseGrainedExecutorBackend initiates communication with the driver available at driverUrl through RpcEnv.
SparkListeners
SparkListener (Scheduler listener) is a class that listens to execution events from Spark's DAGScheduler and logs all the event information of an application, such as executor and driver allocation details, along with jobs, stages, tasks, and changes to environment properties.
SparkContext starts the LiveListenerBus that resides inside the driver. It registers JobProgressListener with LiveListenerBus which collects all the data to show the statistics in spark UI.
By default, only the listener for WebUI would be enabled but if we want to add any other listeners then we can use spark.extraListeners.
Spark comes with two listeners that showcase most of the activities
i) StatsReportListener
ii) EventLoggingListener
EventLoggingListener: If you want to further analyze the performance of your applications beyond what is available as part of the Spark history server, you can process the event log data. The Spark Event Log records info on processed jobs/stages/tasks. It can be enabled as shown below...
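The enabling step is shown as an image in the article; a PySpark-flavored sketch of the same two settings (the log directory path is an assumption) would be:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.eventLog.enabled", "true")
         .config("spark.eventLog.dir", "hdfs:///spark-logs")
         .getOrCreate())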
The event log file can be read as shown below
The Spark driver logs job workload/perf metrics into the spark.eventLog.dir directory as JSON files.
There is one file per application, the file names contain the application id (therefore including a timestamp) application_1540458187951_38909.
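One way to inspect such a file with plain Python, as a sketch (not the article's exact code), using the application id above as the file name:

import json
from collections import Counter

with open("application_1540458187951_38909") as f:
    events = [json.loads(line) for line in f]

# Tally how many entries each event type has.
print(Counter(e["Event"] for e in events))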
It shows the type of events and the number of entries for each.
Now, let’s add StatsReportListener to the spark.extraListeners and check the status of the job.
Enable INFO logging level for org.apache.spark.scheduler.StatsReportListener logger to see Spark events.
To enable the listener, you register it to SparkContext. It can be done in two ways.
i) Using SparkContext.addSparkListener(listener: SparkListener) method inside your Spark application.
Click on the link to implement custom listeners - CustomListener
ii) Using the conf command-line option
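As a sketch, the same thing can be expressed as a config entry (StatsReportListener is Spark's built-in class, so the name is real; the builder style mirrors PySpark):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.extraListeners",
                 "org.apache.spark.scheduler.StatsReportListener")
         .getOrCreate())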
Let’s read a sample file and perform a count operation to see the StatsReportListener.
Execution of a job (Logical plan, Physical plan).
In Spark, RDD (resilient distributed dataset) is the first level of the abstraction layer. It is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. RDDs can be created in 2 ways.
i) Parallelizing an existing collection in your driver program
ii) Referencing a dataset in an external storage system
RDDs are created either by using a file in the Hadoop file system, or an existing Scala collection in the driver program, and transforming it.
Let’s take a sample snippet as shown below
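The snippet itself is an image in the article; a PySpark equivalent consistent with the transformations described below (the file path is an assumption) would be the classic word count:

# sc is the spark context provided by the shell.
rdd = sc.textFile("hdfs:///sample.txt")           # narrow: read, then flatMap/map
counts = (rdd.flatMap(lambda line: line.split(" "))
             .map(lambda word: (word, 1))
             .reduceByKey(lambda a, b: a + b))    # wide: triggers a shuffle
counts.collect()                                  # action: triggers the job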
The execution of the above snippet takes place in 2 phases.
6.1 Logical Plan: In this phase, an RDD is created using a set of transformations. It keeps track of those transformations in the driver program by building a computing chain (a series of RDDs) as a graph of transformations to produce one RDD, called a Lineage Graph.
Transformations can further be divided into 2 types
Narrow transformation: A pipeline of operations that can be executed as one stage and does not require the data to be shuffled across the partitions — for example, Map, filter, etc..
Now the data will be read into the driver using the broadcast variable.
Wide transformation: Here each operation requires the data to be shuffled; hence, for each wide transformation a new stage will be created — for example, reduceByKey, etc.
We can view the lineage graph by using toDebugString
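The output is shown as an image in the article; the call itself, as a sketch against the word-count RDD above:

# PySpark returns bytes here, hence the decode.
print(counts.toDebugString().decode("utf-8"))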
6.2 Physical Plan: In this phase, once we trigger an action on the RDD, the DAG Scheduler looks at the RDD lineage and comes up with the best execution plan with stages and tasks, and together with TaskSchedulerImpl executes the job as a set of tasks in parallel.
Once we perform an action operation, the SparkContext triggers a job and registers the RDD until the first stage (i.e, before any wide transformations) as part of the DAGScheduler.
Now before moving onto the next stage (Wide transformations), it will check if there are any partition data that is to be shuffled and if it has any missing parent operation results on which it depends, if any such stage is missing then it re-executes that part of the operation by making use of the DAG( Directed Acyclic Graph) which makes it Fault tolerant.
In the case of missing tasks, it assigns tasks to executors.
Each task is assigned to CoarseGrainedExecutorBackend of the executor.
It gets the block info from the Namenode.
now, it performs the computation and returns the result.
Next, the DAGScheduler looks for the newly runnable stages and triggers the next stage (reduceByKey) operation.
The ShuffleBlockFetcherIterator gets the blocks to be shuffled.
Now the reduce operation is divided into 2 tasks and executed.
On completion of each task, the executor returns the result back to the driver.
Once the Job is finished the result is displayed.
Spark-WebUI
Spark-UI helps in understanding the code execution flow and the time taken to complete a particular job. The visualization helps in finding out any underlying problems that take place during the execution and optimizing the spark application further.
We will see the Spark-UI visualization as part of the previous step 6.
Once the job is completed you can see the job details such as the number of stages, the number of tasks that were scheduled during the job execution of a Job.
On clicking the completed jobs we can view the DAG visualization i.e, the different wide and narrow transformations as part of it.
You can see the execution time taken by each stage.
On clicking on a Particular stage as part of the job, it will show the complete details as to where the data blocks are residing, data size, the executor used, memory utilized and the time taken to complete a particular task. It also shows the number of shuffles that take place.
Further, we can click on the Executors tab to view the Executor and driver used.
Now that we have seen how Spark works internally, you can determine the flow of execution by making use of the Spark UI and logs, and by tweaking the Spark EventListeners, to find the optimal setup for submitting a Spark job.
Note: The commands that were executed related to this post are added as part of my GIT account.
Similarly, you can also read more here:
If you would like too, you can connect with me on LinkedIn — Jayvardhan Reddy. | https://medium.com/free-code-camp/deep-dive-into-spark-internals-and-architecture-f6e32045393b | ['Jayvardhan Reddy'] | 2019-05-14 20:18:53.299000+00:00 | ['Data Science', 'Technology', 'Artificial Intelligence', 'Programming', 'Spark'] |
The What, Why, and How of TypeScript for JavaScript Developers | Using Types With TypeScript
Basic types
TypeScript has a number of basic types that are predefined. Number, string, boolean, and array are a few examples of them.
You can find the complete list of basic types in the TypeScript documentation .
Here are a few examples:
Note how any type reverts TypeScript to behave the same way as JavaScript. Since our purpose of using TypeScript is to give a better structure to our code, avoid using any type whenever possible.
Similarly, try to avoid using a union of types, but if it is unavoidable, limit the number of types allowed in the union as much as possible.
Declaring custom types
Remember how I used a type called Person in a previous code example? But Person is not a basic data type in TypeScript. I created the Person type according to my requirements to use it as the type of parameter accepted by the given function.
We use interfaces to define the basic structure of a new type we are introducing to the application.
interface Person {
name: string;
age: number;
}
Now, if we create a new object of type Person , it should have the field’s name and age within it. If not, TypeScript throws an error.
VS Code alerting of missing properties in custom types
You can also define optional fields inside an interface.
You can then use a custom type as the type of a field when defining another type.
interface Person{
name: string;
age: number;
address: Address;
}
Extending interfaces
In TypeScript, you can inherit the properties of another type by extending its interface.
Assume that your application needs two different types, Person and Employee . Since an employee is also a person, it makes sense to inherit the Person type’s properties when creating the Employee interface. It prevents code repetition.
You can quickly achieve this by extending the Person interface.
Function parameter types and return types
Similar to variable types, you can define types for function parameters and return values. While the parameter type is declared next to the parameter name, the return type is declared just before the curly braces.
With the type of the parameter and return value defined, we can guarantee that you or anyone else using this function won’t accidentally pass an object that doesn’t have the characteristics of the Car type.
You can also guarantee that the field sold in any object passed won’t be undefined or null. And it eliminates a number of scenarios that could throw an error during the runtime. If you were using JavaScript, you would have to write more code to prevent the possibility of such an error occurring during runtime.
Similar to variables, you can define the return and parameter types as a union of several types.
function buyCar(car : Car): Car | boolean {
if (car.sold === true){
return false;
}
return car;
}
When you declare the accepted parameter or return type, objects of types that extend the initial type’s interface are also accepted as the argument or the return value. | https://medium.com/better-programming/the-what-why-and-how-of-typescript-for-javascript-developers-a2177675f6d2 | ['Juan Cruz Martinez'] | 2020-11-30 18:01:05.702000+00:00 | ['Programming', 'Software Development', 'Nodejs', 'JavaScript', 'Typescript'] |
Basic Steps In Natural Language Processing Pipeline | Source: Unsplash
Natural Language Processing (NLP) deals with text data. The applied research in NLP is motivated to design the technology that understands the human language more effectively. The research in NLP is more demanding and challenging as it is difficult to understand how the human brain understands the secrets of language and its communication methods.
Many laboratories, researchers from all around the world are working their best to synchronize between technology and human language with machine learning and deep learning frameworks. But before feeding the text data to different machine learning models, there are some basic steps that are needed to be implemented for any raw text data.
This blog is intended to explain the basic NLP pipeline that converts raw text into the desired format, which is then given as input to machine learning or deep learning algorithms.
Basic NLP Pipeline
Steps in NLP
As shown in the above figure, there are 5 basic steps. There are certainly many others, like Named-Entity Recognition (NER) tagging and coreference resolution, but at an initial stage, or as a beginner, the steps mentioned above are the important ones to know and understand. These steps are easily implemented in Python with either the NLTK or spaCy libraries. I will be demonstrating them here using the NLTK library.
Tokenization:
Tokenization is a way of separating a piece of text into smaller units called tokens. This can be done at the sentence level or the word level. The following example shows word tokenization in practice.
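Below is a minimal sketch (the input string is my reconstruction, inferred from the output that follows):

import nltk
nltk.download('punkt')  # one-time download of the tokenizer models
from nltk.tokenize import word_tokenize

text = "['The Plateau coffee tables, designed by Büro Famos, comprises just two simple wooden components a top"
tokens = word_tokenize(text)
print(tokens)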
The output will be as follows:
['[',
"'The",
'Plateau',
'coffee',
'tables',
',',
'designed',
'by',
'Büro',
'Famos',
',',
'comprises',
'just',
'two',
'simple',
'wooden',
'components',
'a',
'top'
Text Cleaning:
This phase deletes words and items from a corpus of text data to help enhance a machine learning model's efficiency. Numbers, capitalization, punctuation, stopwords, and single quotes will be removed from the text data. The text cleaning process is done using regular expressions, as in the snippet below.
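Here is a sketch of one possible implementation (the exact regular expressions in the original snippet may differ):

import re

def clean_text(text):
    text = text.lower()                       # remove capitalization
    text = re.sub(r"\d+", "", text)           # remove numbers
    text = re.sub(r"'", "", text)             # remove single quotes
    text = re.sub(r"[^\w\s]", "", text)       # remove remaining punctuation
    return re.sub(r"\s+", " ", text).strip()  # collapse extra whitespace

print(clean_text("The 2 Plateau coffee tables, aren't they great?"))
# -> the plateau coffee tables arent they great

Stopword removal, listed above as part of cleaning, is covered in its own step below.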
POS tagging:
POS tagging is the task of labelling each word in a sentence with its appropriate part of speech. Parts of speech include nouns, verbs, adverbs, adjectives, pronouns, conjunctions, and their sub-categories. POS tags are a useful prerequisite that simplifies many downstream problems in NLP.
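A small sketch of how this is done in NLTK (the input sentence is inferred from the output below):

import nltk
nltk.download('punkt')                       # tokenizer models
nltk.download('averaged_perceptron_tagger')  # POS tagger model
from nltk import word_tokenize, pos_tag

tokens = word_tokenize("You just gave me a scare")
print(pos_tag(tokens))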
The output for POS tagging is as follows:
[('You', 'PRP'),
('just', 'RB'),
('gave', 'VBD'),
('me', 'PRP'),
('a', 'DT'),
('scare', 'NN')]
Stopwords:
These are words that do not add much value to the meaning of the document. Stopwords are common in every language. Removal of stopwords must be done after tokenization. NLTK provides stopword lists in many different languages, like Danish, German, English, and so on. Some of the benefits of removing stopwords are:
The dataset size decreases when stopwords are removed, and the time to train the model also decreases. Stopword elimination will, in theory, help boost performance, as fewer and only relevant tokens remain. This could thus improve the accuracy of the classification.
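A sketch of stopword removal using NLTK's built-in English list (the token list reuses the tagged sentence above):

import nltk
nltk.download('stopwords')  # one-time download of the stopword lists
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))
tokens = ['you', 'just', 'gave', 'me', 'a', 'scare']
filtered = [t for t in tokens if t not in stop_words]
print(filtered)  # ['gave', 'scare']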
Lemmatization:
Lemmatization reduces inflected words to their root form while ensuring that the root word (the lemma) belongs to the language. It helps to extract necessary and valid words. NLTK provides a WordNet lemmatizer that uses the WordNet database to look up the lemmas of words.
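A sketch using NLTK's WordNetLemmatizer (the sample words are my own):

import nltk
nltk.download('wordnet')  # one-time download of the WordNet database
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('tables'))         # table
print(lemmatizer.lemmatize('gave', pos='v'))  # give (pos='v' treats the word as a verb)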
After performing these steps, the text will be in the desired format that can be given as input to an ML model.
Nicholas Bloom on Management, Productivity, and Scientific Progress (Ep. 102)
What might the electrification of factories teach us about how quickly we’ll adapt to remote work? What gives American companies an edge over their competitors on the international stage? What value do management consultants really provide? Stanford professor Nick Bloom’s research studies how management practices, productivity techniques, and uncertainty shape outcomes across companies and countries.
He joined Tyler for a conversation about which areas of science are making progress, the factors that have made research more expensive, why government should invest more in R&D, how lean management transformed manufacturing, how India’s congested legal system inhibits economic development, the effects of technology on Scottish football hooliganism, why firms thrive in China, how weak legal systems incentivize nepotism, why he’s not worried about the effects of remote work on American productivity (in the short-term), the drawbacks of elite graduate programs, how his first “academic love” shapes his work today, the benefits of working with co-authors, why he prefers periodicals and podcasts to reading books, and more.
Listen to the full conversation
You can also watch a video of the conversation here.
Read the full transcript
TYLER COWEN: Hello, everyone. Today I am honored to be chatting with Nick Bloom, who is professor of economics at Stanford.
I sometimes put it this way: If I read a new and interesting article, whether it be on productivity in science, the productivity of firms, how effective it is to work from home, the effect of uncertainty on economic output, then I think, “Well, who’s the most likely economist to be a coauthor or author of this article?” That one person is Nick Bloom. Nick, welcome.
NICHOLAS BLOOM: Thanks very much for having me on, Tyler. It’s great to be here.
COWEN: Let’s start with your piece with coauthors on whether progress in science has slowed down, and you argue that it has. I would ask, which are the areas where progress in science has not slowed down?
BLOOM: Ooh, that’s a good question. That’s not what I’m normally asked about. Where has it not slowed down? I’m not doing any deep personal insight here, but just looking at the valuations of firms in exciting areas, it’s going to be things like AI. I’m going to guess social media, genetic medicine.
I think of what’s going on at Stanford. I know there’s a huge explosion of work in Stanford. Several of my friends and colleagues on campus are working on genetic medicine. All kinds of amazing things there, actually.
Wearables — one of my friends here is actually working with Apple on getting devices in the Apple Watch so it can check your heart rate and tell you in advance if there’s complications in your heart and basically pre-warn you.
I don’t want to be too super pessimistic that all science is dying, and you’re right. You’re exactly right. In the research we looked at, we showed, for example, progress on cancer had an acceleration in the ’80s and ’90s. It seems that field after field eventually starts to decline. And there aren’t enough new fields that are growing to offset the bulk of the current fields that are declining.
COWEN: So if progress in Moore’s Law is slowing down, progress in crop yields is slowing down, cross-sectionally, what is different about the areas where progress in science is speeding up?
BLOOM: In some instances, they’re new. It seems pretty obvious. This is why it’s useful to have economists to look seriously at the data. In a sense, it seems pretty obvious that individual areas are going to slow down. So, the wheel was a fantastic innovation, but at some point, as progress slows down, and the horse and cart, corn yields. You can just go through innovation after innovation. They’re incredibly important, but at some point, of course, progress in those areas slows down.
You mentioned Moore’s Law. The number of transistors you can pack onto a silicon chip has been roughly doubling every two years. That was kind of Moore’s Law, and that’s been roughly held constant, actually, for about 50 years. It’s just, we’ve been pouring way more scientists into that. We estimate since the ’70s, there are 18 times more scientists just to hold that constant.
In that sense, if you’re putting in a lot more scientists to generate the same increase in compute power, you’d say that progress is slowing down. Now, it seems obvious that each field is slowing down. The question is, are there enough new fields that are coming into being to offset that? It just appears that, at least since the 1950s in the US, the answer is no. There are new fields coming on board, but just not fast enough to offset the decline. So right now —
COWEN: How do we know what counts as a new field? You mentioned progress in genetics, but Mendel was some time ago. You mentioned the wheel, but Tesla now has a phenomenal valuation. That’s the wheel plus electricity. Electricity is another old sector, right?
BLOOM: Yep.
COWEN: Aren’t some of the old sectors currently the most dynamic?
BLOOM: Well, Tesla is the electric motor. I mean, you’re right. The electric motor — again, it’s not my expertise, but I think the first cars were, in fact, electric cars back in, like, 1900. Whether you call that a new field or an old one, the progress is driven on batteries.
So batteries — we looked at this, actually. There are several areas we looked at in our paper. I had a paper with Chad Jones, John Van Reenen, and Michael Webb looking at whether innovation and productivity is slowing down. We looked at several sectors to try and evaluate this. Some of them lacked complete data in either inputs or outputs, but one of them is batteries, and batteries have made slow but steady progress. For example, lithium-ion batteries are much more effective.
Recently, batteries have gotten to the stage where electric cars are feasible because you need to, obviously, store enough energy. It’s not so much the electric motor is a new idea — it’s that batteries make it possible.
If you want to ask what areas are new, I would practically look at, say, patents. There’s an enormous amount of new companies floating on the stock market. Patents are a very simple way to look at what technologies are new, in the sense that they add new fields. You look at patents that don’t seem to patent or cite much that’s gone before them. They’re truly radical. And there’s a huge research literature on exactly this.
COWEN: If I understand your estimates correctly, efficacy per researcher, as you measure it, is falling by about 5 percent a year. That seems phenomenally high. What’s the mechanism that could account for such a rapid decline?
BLOOM: The big picture — just to make sure everyone’s on the same page — is, if you look in the US, productivity growth . . . In fact, I could go back a lot further. It’s interesting — you go much further, and you think of European and North American history. In the UK that has better data, there was very, very little productivity growth until the Industrial Revolution. Literally, from the time the Romans left in whatever, roughly 100 AD, until 1750, technological progress was very slow.
Sure, the British were more advanced at that point, but not dramatically. The estimates were like 0.1 percent a year, so very low. Then the Industrial Revolution starts, and it starts to speed up and speed up and speed up. And technological progress, in terms of productivity growth, peaks in the 1950s at something like 3 to 4 percent a year, and then it’s been falling ever since.
Then you ask that rate of fall — it’s 5 percent, roughly. It would have fallen if we held inputs constant. The one thing that’s been offsetting that fall in the rate of progress is we’ve put more and more resources into it. Again, if you think of the US, the number of research universities has exploded, the number of firms having research labs.
Thomas Edison, for example, was the first lab about 100 years ago, but post–World War II, most large American companies have been pushing huge amounts of cash into R&D. But despite all of that increase in inputs, actually, productivity growth has been slowing over the last 50 years. That’s the sense in which it’s harder and harder to find new ideas. We’re putting more inputs into labs, but actually productivity growth is falling.
COWEN: Let’s say paperwork for researchers is increasing, bureaucratization is increasing. How do we get that to be negative 5 percent a year as an effect? Is it that we’re throwing kryptonite at our top people? Your productivity is not declining 5 percent a year, or is it? COVID aside.
BLOOM: COVID aside. Yeah, it’s hard to tell your own productivity. Oddly enough, I always feel like, “Ah, you know, the stuff that I did before was better research ideas.” And then something comes along. I’d say personally, it’s very stochastic. I find it very hard to predict it. Increasingly, it comes from working with basically great, and often younger, coauthors.
Why is it happening at the aggregate level? I think there are three reasons going on. One is actually come back to Ben Jones, who had an important paper, which is called, I believe, “[Death of the] Renaissance Man.” This came out 15 years ago or something. The idea was, it takes longer and longer for us to train.
Just in economics — when I first started in economics, it was standard to do a four-year PhD. It’s now a six-year PhD, plus many of the PhD students have done a pre-doc, so they’ve done an extra two years. We’re taking three or four years longer just to get to the research frontier. There’s so much more knowledge before us, it just takes longer to train up. That’s one story.
A second story I’ve heard is, research is getting more complicated. I remember I sat down with a former CEO of SRI, Stanford Research Institute, which is a big research lab out here that’s done many things. For example, Siri came out of SRI. He said, “Increasingly it’s interdisciplinary teams now.”
It used to be you’d have one or two scientists could come up with great ideas. Now, you’re having to combine a couple. I can’t remember if he said for Siri, but he said there are three or four different research groups in SRI that were being pulled together to do that. That of course makes it more expensive. And when you think of biogenetics, combining biology and genetics, or bioengineering, there’s many more cross-field areas.
Then finally, as you say, I suspect regulation costs, various other factors are making it harder to undertake research. A lot of that’s probably good. I’d have to look at individual regulations. Health and safety, for example, is probably a good idea, but in the same way, that is almost certainly making it more expensive to run labs.
In fact, COVID is a huge pushback. I was talking just before the shutdown to a good friend of mine, and she said she has a big lab that has a number of animals and longer-running experiments going on. In fact, the shutdown has been extremely expensive. When we reopen with social distancing, of course, the costs are going to go up again. These are all factors pushing on your point of regulation. It’s just expensive running research.
COWEN: What if I argued none of those are the central factors because, if those were true as the central factors, you would expect the wages of scientists, especially in the private sector, to be declining, say by 5 percent a year. But they’re not declining. They’re mostly going up.
Doesn’t the explanation have to be that scientific efforts used to be devoted to public goods much more, and now they’re being devoted to private goods? That’s the only explanation that’s consistent with rising wages for science but a declining social output from her research, her scientific productivity.
BLOOM: Great question. There are two responses before I lose track in it. First is, I’m embarrassed to say, I forgot the fourth reason, so you’re right. A fourth factor on this could be, just as a simple empirical fact, the share of R&D in the US and Europe — which we have best figures on this — funded by the government has been declining over time. In fact, in the US, when you go back to the ’60s, roughly two-thirds of it is funded by the government and one-third by private firms. Now it’s the reverse.
In fact, it always made me wonder — when I first pulled up this data six, seven years ago — it made me wonder about the story of Stanford. Because when I arrived at Stanford, I was told that Stanford prewar really was like a finishing university. It wasn’t really a big deal. Postwar, Stanford got its big break because of lots of research from NASA dollars.
I was thinking, “Well, government R&D is not such a big factor anymore. It’s mainly private firms.” That’s because postwar, it was the big driver, and the government pulled back from R&D, and private firms have taken over.
The reason that’s the fullest possible driver of the decline in productivity, as you point out, is government R&D tends to be more focused on the R, the research, and private R&D more on the D, the development. The R, you may think, has more spillovers, longer-run benefits, and is what’s going to drive long-term growth.
So yes, that will be another role for policy, in fact. I’m embarrassed I left it out, but a big driver would be . . . It feels hard to be saying this, being in a university, but the government should fund more public R&D. I can get on the economic-status-on-wages question if you want, but I can see you have a question because we’re looking at each other on Zoom.
COWEN: But if you assign the blame to government, ideas are a global public good. Isn’t it true that global governmental expenditure on R&D in absolute terms is up, even if it may be down as a percentage of budgets for total R&D? Thus, scientific progress in the United States, which can draw upon governmental support in China, Japan, India, UK, Switzerland, should still be going up.
It has to be within private scientific progress that there’s a diversion of effort away from public goods and toward more private goods. Or no?
BLOOM: No, it’s a good question. It’s certainly true, our paper only focused on the US. The puzzle gets much harder if you include global R&D. You see that productivity per researcher or research dollar is falling, in the sense of the rate of progress per dollar we’re spending.
Just to be clear, this is not a new thesis. You have your book, The Great Stagnation, and Patrick Collison, for example, has worked on this more recently. There’s that book, The Death of Science. I’m embarrassed I’ve forgotten who the author was.
COWEN: John Horgan.
BLOOM: Yeah, that’s it. The puzzle gets even more extreme if you look globally. Sure, it’s tricky because Europe has become slightly less of a powerhouse, but obviously Asia has completely taken off in the amount of R&D being spent in, say, India, and China has exploded. Has that offset the reduction in US publicly funded R&D, certainly as a share of GDP?
It’s not obvious. One reason is, there’s plenty of evidence on knowledge spillovers being localized. There’s a lot of evidence, for example, that you’re more likely to coauthor with your colleagues in your own university or the same firm. I guess the same firm is more obvious, but if that was true, you may think the transmission of ideas from China to the US is less effective than within the US.
I also don’t know if the increase in Chinese and Indian R&D by their government sectors is enough to offset the reduction by the US, and whether it’s in the right areas. It may be that a lot of developing countries’ R&D is more, say, defense and national security focused, which I suspect has lower tradeoffs.
The nice thing about the US and things like the National Science Foundation and the National Institute for Health is they would put huge amounts of funding on very basic research that had broad value. An MIT researcher goes to the NSF, gets funding for research. They tend to be focused on very basic things that are of interest to broad science, and that has, I suspect, the largest value in.
COWEN: Apart from possibly giving them more money, how should we improve the NSF and the NIH? How can we raise their productivity?
BLOOM: More money seems the most obvious part of it. There’s obviously, secondly, how you distribute it, and, again, I remember seeing a couple of papers on how exactly you evaluate research proposals, and it’s hard. Do you get insiders within the field to evaluate, who tend to be more informed but tend to be more biased towards their own field, or outsiders?
I’m not aware of any huge criticisms of how the research agencies hand out their money. I’m sure there’s lots of quibbling around the edges. I’ve been involved in refereeing for the NSF, for example. I’ve always been very impressed the way they’ve run it. And the ESRC, for example, in the UK.
I think the big issue is just their budgets. Their budgets, sure, are growing, but they’re not growing nearly as fast as GDP. As a result, government is basically pulling back from R&D. It’s being, to some extent, replaced by universities. If you think of these elite universities — their enormous endowments are partly being funneled into early-stage R&D.
Another fascinating observation on the US is, increasingly, growth is being driven by knowledge flows out of elite research universities, even in the stock market. The stock market, over the last 10, 15 years, has almost entirely been driven by high tech. In many ways, you can think of it as clustering around elite US research universities. They are stepping in, to some extent, to fill the gap left by government.
COWEN: How much of the measured productivity edge of American multinationals is just tax arbitrage and where profits get assigned to?
BLOOM: I never really thought a huge amount of it was that. My personal view — I guess this is, again, biased by my research — is American firms are particularly just fantastically well managed. I’ve done a lot of work for many years, looking at management practices, trying to collect data in cross-country surveys.
To explain what I mean, management practices — the basics are, do you collect information and use it to improve yourself? Think of lean, collecting information all the time, and having improvement processes. Secondly, do you train and promote employees, try to promote the best people, trying to avoid things like promoting family, friends, or long-serving employees? So meritocratic HR systems.
I’m not going to say American firms are perfect. They’re definitely not perfect. There are many management scandals. But on average, American firms are much better managed, and they take that with them abroad. There’s this whole literature that has been sometimes called dark matter, or explaining why we seem to have this endless negative balance of trade but positive balance on our investments abroad. American companies seem to make huge profits abroad.
One big explanation is they’re just exploiting lots of this intangible capital, which we think of as good management. American multinationals around the world are well managed, and they make a lot of profits where they’re located in the UK and France, in Ghana, in Thailand — wherever they are. That’s helping keep the US economy afloat, that return profits. I don’t see that as being related — transfer pricing and offshore tax manipulation. That is a factor, but I think American firms are primarily driven, actually, by better innovation and better management.
COWEN: Why hasn’t information technology boosted productivity more? Productivity is sluggish. IT has been taking off like crazy. Companies, big business is what uses IT. How do we fit the whole picture together?
BLOOM: [laughs] Another age-old debate goes back to Robert Solow’s quip in the New York Times. He wrote, I think, in ’86, “You see computers everywhere except in the productivity figures.”
So you’re right. Paul David and Tim Bresnahan at Stanford, my colleagues, have had various . . . Again, there’s an old literature about general-purpose technologies, these technologies that change society, and at least two previous ones were the steam engine and the electric motor. The big question is, why haven’t computers done that? They seem as transformational as the previous two.
As we discussed, productivity growth rates in the US have been declining since the ’50s and don’t seem to have picked up much, anyway, with computers. I think the primary reason people argue for this is, you need to change society in order to exploit this. In fact, in an odd way, COVID, the pandemic, and working from home is one example of this — all the technology necessary for working from home.
Just to be clear, the internet and email, cheap personal computers and video calls have all been around since the late 2000s. The last piece, Skype, came out in 2003. But it isn’t until the pandemic that we actually massively embraced working from home. Why is that? I think it’s just social norms and firm organizational practices were slow to change. I think something holding back the impact of ICT is firms and society don’t change that rapidly.
A good example: Paul David mentions about electricity that when electricity came in — which I believe is in the 1910s, 1920s — factories were slow to adopt it. The reason was, in the older factories where you had a big steam engine or even a waterwheel, it made sense to have the building very vertical. You’d have four stories around this one central shaft, which belts would connect to, which drove all your mechanical power.
With electricity, instead, you can have lots of little localized electric motors, which is a large flat building. That explains, if you look at really old-fashioned factories in the center of Manhattan and places where they were built 200 years ago, they’re very tall buildings. Modern factories are low-slung, massive sheds. But of course, when electricity came in, it’s very hard to reshape all those buildings, and it takes decades.
It’s kind of like that with reshaping the management, organizational structures of society. I think that’s one reason why it’s taken so long for IT to affect productivity.
COWEN: Italy has had almost no per capita income growth for about 20 years now. Is that because of the deficiencies of Italian firms? Italy hasn’t changed enough?
BLOOM: Italy is just a productivity basket case. When I talk to Raffaella Sadun, for example, my long-term coauthor — when I talk to her about Italian productivity, a lot of the issues you hear about are regulations, political instability, challenges in the education system, migration.
Another thing for Italy — it’s even more so for Greece, actually — is that a lot of Southern Europe has suffered from a large negative brain drain. I know lots of highly able Italians, but many of them I know are in the US and the UK, and they’ve left the country because of its poor economic prospects.
Italy is almost a laundry list of what’s gone wrong and what not to do, but I think a lot of it comes down to poor government that then feeds through into all these policies that make it hard for firms to innovate. Italy’s R&D performance isn’t great. It’s very uncertain. It drives a lot of people abroad. The education system is poor.
On the value of management consultants
COWEN: What exactly is the value of management consultants? Because to many outsiders, it appears absurd that these not-so-well-trained young people come in. They tell companies what to do. Sometimes it’s even called fraudulent if they command high returns. How does this work? What’s the value added?
BLOOM: I don’t know if everyone knows, but I worked at McKinsey for about a year and a half. I should state that I no longer work for them. I don’t take any money from them anymore. That was a long time ago. It was almost 20 years ago. Just from that and from my research, there’s two or three things they do.
It is true on the negatives. To start with the negative side, the critique that’s often thrown at them is they tell you the obvious. They ask to borrow your watch and tell you the time. Or they tell you things that the CEO normally knew, but she or he basically didn’t want to ’fess up to tell the workers.
Now, it’s true that I felt there was some element of that. There was one project in particular that I was involved in. I remember it seemed to us to be reasonably clear what to do. I think it seemed to be reasonably clear to the division head what to do, but it was hard for her to tell the whole group. McKinsey came in, the project was highly successful, the division improved dramatically, but it was, partly, we were there to bolster evidence.
The third element, I think, is generally useful, and I’ve seen this. When I think of the randomized controlled trial we did in India, where we hired Essentia to work in a number of firms, is a lot of management improvements aren’t that obvious to people on the ground.
Just to give you one example, dating back in history — after World War II, the big movement in the US was what’s called mass production. Henry Ford had the production line, and the idea is you just scale up, get bigger and bigger and bigger and make more and more Fords and roll it off in a massive factory setup.
Toyota and the Japanese car manufacturing sector at that time in the 1950s — because they were obviously so devastated by the war — didn’t have access to capital and had to produce things on a small scale. They went for an alternative system called lean. The whole idea of lean is that you try and spot mistakes and immediately stop the line. It’s very painful in the slow run. If you see a problem in the car, you stop it, you go through it, you figure out, and then you restart.
It takes time to start off, and it’s a slow burn. By the 1980s, the Japanese car factories were clearly starting to dominate. They had lower costs and higher quality. In fact, there’s a great MIT book, called The Machine That Changed the World, that documents that. Now, if you think of the way consultants were operating in the ’90s, 2000s, it isn’t obvious to many firms that lean was a far better way to run your factory, that you really want to introduce these Kaizen production processes, et cetera, and consultants come in and help you adopt them.
It’s not just factories. Healthcare — there’s been a huge transformation in lean health whereby when you go in and see a doctor, you really, really don’t want there to be process mistakes. Lean is actually very good at reducing quality defects and improving productivity. That’s the area where consultants are great.
I remember when I was at McKinsey, and one of the projects I did with a retailer — we had someone that used to work at Toyota. This Toyota guy had been there for three, four years and was just fantastic. He went around the retailer and said, “Here’s the kind of tools we use in Toyota, and just apply them.” And that was extremely valuable. That’s the positive side of managing consulting — highlighting things that maybe, ex post after the event, are obvious why it works, but in advance just aren’t.
COWEN: Given the high returns to management advice to India and other emerging economies, what’s the main constraint that prevents that from being scaled up much more? Why don’t those consultants just transform those management practices and productivity levels?
BLOOM: Yeah, it’s a great question. I’ve long thought about this. India I know best because I’ve been out there a lot. One of the huge constraints there is the legal system. I’ll just go through it. In India, the actual law as it’s written down in the statute book is good. There’s no obvious issues with it, at least as people I talk to.
But the big constraint is processing cases through the courts. The courts are dramatically undersupplied in terms of judges, so what happens is, it’s very slow to process cases through court. As a result, when you talk to Indian firms, they are very skeptical of taking any issues through the court system.
Just to be clear, if you’re in the US and you’re a manager and you discover somebody stealing stuff from you, you’re pretty likely to report it to the police. Then it goes to the court system. That manager faces potential prison. They clearly lose their job. They have a big loss of career earnings, so in the first place, they probably won’t do it. In India, you can’t —
COWEN: If the courts are the binding constraint, why doesn’t that make all management advice for India worth less? Why is that particularly an issue with respect to scaling? Because they all live under that court system.
BLOOM: I was going to say, I use India, but it’s basically all developing countries and including, honestly, large parts of Southern Europe, is private equity. Look, you see all these badly run firms. Why doesn’t PE come in, buy out firms, and turn them around?
The problem is, the legal environment is not great. Blackstone came in, a big PE firm, bought a large apparel manufacturer in India, and really struggled because, sure, they can improve management practices, but profit wasn’t going up. There was a lot of money, basically, illegally leaking out of the company. Because the legal system is weak, it was hard to turn this around.
Then you’re right. The alternative is, look, even if private equity doesn’t come in, why can’t they do it organically? And they are. To be clear, management practices in India, which I know best, have been improving over time. There are some very successful Indian multinationals like Tata and [Reliance].
The issue is that it isn't scaling. If you think of it, the frontier of management practice is improving every year. We're getting better at managing firms in the US. Below that frontier, there are countries that are closest — say, Northern Europe — that are further — say, Southern Europe — and even further below — the developing world.
They’re improving too. It’s just there’s a big gap, and it takes time. It’s like innovation. It takes time to diffuse, and a better legal system would accelerate that. If you could have ruthless private equity backed by tough laws, I think it will be painful economically and socially, but the growth rate would improve because you’d have much more transfer of management practices.
COWEN: You mentioned The Machine That Changed the World — also a favorite book of mine. What’s another book on management you find especially rewarding?
BLOOM: Another book on management — I’m not a huge book reader. Having said that, I’ve been recently reading Hillbilly Elegy, which is fantastic. Oddly enough, I tend to be a huge reader of news like the Times, the Wall Street Journal, the Economist, the FT. I’m trying to think — management books. I’m sure as soon as the interview is over, I’ll kick myself and think there were some fantastic books.
COWEN: Let me re-point the question: Why are management books so bad? If I asked myself, if I had to go into a big Barnes & Noble and had to read all the books in one section, management might be the last section I would pick, even though I’m an economist and, to a more modest extent, a manager. Why is there so much junk in that area? It’s endogenous that you don’t read more of it, correct?
BLOOM: Yeah. There are great books, I’m sure. I don’t mean to imply they’re all terrible. I’m sure there are, but yeah.
COWEN: You go to the history section — most of the books are at least pretty good.
BLOOM: Yeah. One issue that struck me — why I got into management in the first place — just to explain what I’ve been doing for years, I’ve been working with a huge coalition of people. I mentioned Raffaella Sadun, John Van Reenen, Renata Lemos, Daniela Scur, Erik Brynjolfsson, Lucia Foster. There’s a huge group of us that have been trying to measure management practices across firms and countries — just very methodically and, in some ways, very boringly.
Oh, we must have surveyed several million organizations by now to create a big dataset. We take populations of firms, run these surveys, collect data, and compare them.
Most of the books that I read that are popular — Good to Great and Built to Last and things like these — are generally based on individual anecdotes and case studies, and I think that’s great for teaching. I use case studies in the Harvard Business School all the time to teach because it’s very inspirational. [laughs] They’re always positive stories of how Mrs. X or Mr. Y turned the firm around, but they’re not great for research.
The reason is, I know from past experience, having to write one case study. I wrote a case study on a firm that was owned by someone that was in my Stanford MBA course called Gokaldas, and it’s called The Challenge of Change.
The problem was, we wrote this case study — it was a fascinating company that actually eventually got taken over by a private equity in India. They were a huge, very successful apparel firm. We had to get legal sign-off from everyone involved. We interviewed six or seven people, and they all had to legally sign off and say they were fine with us using the material.
You can imagine what that does for selection effects. It means that, basically, these books — it’s very hard for them to get proper information on firms that do badly because they refuse. They threaten lawsuits.
I think a lot of management research is correct. Most of these books are probably saying the right thing. The problem is that every story you want to come up with, every theory, there’s a book supporting it. It’s kind of hard to know where to look.
What we really need, ideally, is what we’re trying to build. I wouldn’t say it’s our research, but more our data that, hopefully, people will use — because it’s publicly available data — to say, “Look, here are five hypotheses of management. Is there support in large-scale data?” I think that will put more discipline on it and, therefore, put more credibility on these books.
COWEN: If teaching management techniques to companies is so effective, can we expect similarly large gains teaching personal productivity techniques to individuals who, if anything, should absorb it more rapidly, right? No collective action problem. But it seems, overall, self-help books, life coaching — they seem pretty ineffective. How do we square that larger picture?
BLOOM: I’m not sure they’re pretty ineffective. I don’t know how to evaluate it. I could flip it around. I’ll tell you the economist’s take. I’m going to take your line on this, which is, there is an enormous volume of self-help books and podcasts and newsreels, et cetera. The fact that they exist means people are spending a lot of time reading them, I presume. If you assume that people are rational, it means they get value out of it.
I actually find these things quite useful. I’m not sure I absorb most of the tips. I don’t tend to listen to self-help podcasts, but I read a lot. I read something — there may be 10 tips in there. In fact, before the podcast started, I was talking to Dallas, your producer. She had sent me this whole list of things on what to do with your microphone and video. I had read it all. In fact, she included a link that I went onto. I found a couple of them really useful. I’d say 90 percent of them I’d seen, or maybe it wasn’t applicable, but 10 percent were great.
I actually think that, potentially, they are quite helpful. The issue is maybe on the evidence base. Again, as an economist, ideally, you’d have an RCT. How you’d execute it is not obvious, but you may take a thousand Americans or a thousand Spaniards or something, some sample, and then give 500 of them intensive self-help coaching for a month and see what happens. Quite possibly, somebody’s done this. It may exist, but that would be my way to evaluate these kinds of interventions.
COWEN: Then you must think people are remarkably productive and effective because self-help books are very cheap. The advice Dallas sent you — that was for free. If you think of it in terms of marginal value, given the low price, the marginal gains to being more productive personally — well, you must be very close to the frontier.
But that strikes me as counterintuitive. I see people screwing up all the time, not realizing their potential. I think the market for talent is remarkably inefficient and that people don’t do their very best.
BLOOM: Well, there’s two issues, I think. One is, you’ve got to consider what it would be like without it. Humanity is dramatically more productive than it was, and some of it could be self-help. The other issue is, there’s unknown unknowns. The problem is you don’t know what you don’t know.
Again, as a personal anecdote, I was recently given an energy efficiency. My brother came around a couple of years ago and pointed out, I should be using LED bulbs throughout the entire house rather than the old halogen or CFL fluorescent ones. My brother’s an engineer, and I sat down and went through the numbers, and it paid off within two to three years.
That’s clearly a fantastic rate of return. That’s pretty rapid. I switched every single bulb in my house to LED. But I didn’t know it until someone had pointed it out. Ex post, it seems kind of obvious. I could have easily gone to Amazon and worked out the cost of it, worked the electricity uses and done the calculation. I just never thought of it.
A lot of this is like Dallas’s recommendations. She said, using this microphone, turn the gain thing down to zero. I didn’t realize that. There was a knob at the back of the microphone I’d never even looked at. I then looked at it. And, “Oh yeah, there is that.” Turned it down to zero and, hopefully, it sounds okay.
Honestly, you see this in firms all the time, and when we were out in India or when I was in McKinsey, you’d often give pieces of advice, and ex post, it was really useful. For example, as another concrete piece of analysis, when I was out in India, a big issue in a lot of modern management is quality defects. These companies were large companies making, say, fabric, that goes into making shirts and trousers and upholstery coverings.
A lot of the learnings coming out originally from Japan is, you should zero in on quality defects and fix them instantly. Essentially, I said, “Look, your factory of a hundred looms — we’re going to take six looms at the back row and have a quality defect index and have a quality control process, a Kaizen process.”
After two to four weeks, it was so effective in spotting repeated issues that the factory owner said, “This is great. It’s worth the effort setting up this QDI index and this committee. We’re just going to roll it out to the whole factory.” But in advance, they were skeptical. I think that — as with so many things in life, unfortunately — we just don’t know what we don’t know, and so we’re skeptical on advice.
COWEN: How bullish are you on Chinese management?
BLOOM: I don’t have fantastic recent data, so I’ll give you my best data. We surveyed them last at scale in 2005. At that point, they were roughly in line with GDP. They were okay. They weren’t fantastic. I’ve had some other surveys, but not internationally comparable.
More recently, they’re pretty good. I have to say, manufacturing — a lot of what drives good management practices is being large, being around for a while, being open to competition, and having educated employees. And China has those inputs. A lot of their manufacturing firms, in particular, are big. They’re competing ferociously with other companies.
Actually, Chinese education system is churning out vast numbers of engineers, and they’ve been operating for quite a while. I suspect at this point now, Chinese manufacturing management is pretty good, actually. It’s harder to tell in other sectors, particularly those that are not internationally comparable. Their financial services — who knows as much? That’s harder to evaluate.
Typically, if you want to look for well-run companies, it’s size, high levels of competition, open to trade, educated employees, no family firms where it’s handed down by primogeniture — the eldest son inherits it. If you go into the sectors that don’t have these issues, then you tend to see very good management. In China, typically, tick most of those boxes in manufacturing.
COWEN: Over 20 years ago, your Stanford colleague Frank Fukuyama wrote a book on trust. He basically said, “Well, China will never have successful large firms in the way that Japan does because there’s not enough trust in Chinese society.” That seemed plausible at the time. Yet, obviously it’s turned out to be wrong. What did we miss about China?
Since you emphasize trust and corruption and ability to delegate authority without too many bureaucratic checks and balances, ex ante, China seemed bad on all those things. Yet Chinese big business has done pretty phenomenally well.
BLOOM: A lot of trust, I think, derives from rule of law. In China — again, this is getting sensitive into politics, but there’s rule of law around political systems, which I really don’t want to comment on. But there’s rule of law around things like contractual enforcement, which turns out to be important for trust between firms.
If you, Tyler Cowen, set up a company and give me a contract for three years for providing ball bearings, I’m going to go and put a bit of money into R&D in improving and set up a process. If you then say after six months, “I’ve changed my mind. Can I sue you and get the money back? If I can effectively do it through the court system, I can trust you.” That’s maybe a kind of odd concept of trust.
It’s not based in some cultural, religious thing. It’s based on the fact that the legal system works. If you look in, for example, the World Value Survey, which measures interpersonal trust, trust measured there is highly correlated with effectiveness of the legal system. Some of the lowest countries in the world, in terms of trust, are some of the African countries whereby the legal system’s in chaos because they’re undergoing civil war, and the highest countries are like Norway, Sweden, North America.
Currently in China, the rule of law as applied to commercial contracts, I think, is reasonable. I am not an expert, but you don’t hear endless stories of scandals and corruption, at least as commercial contracts go. I think that’s what enables these large firms to grow. When we’ve collected survey evidence in reverse, we definitely don’t hear endless stories of managers ripping off firms and stealing ideas, which is a big problem.
Just to reverse it around, what happens in countries with very weak legal systems where you can’t trust anyone is, you hire your family members. If I want to set up a company and I can’t trust any outsiders, I start to stuff it full of sons, daughters, brothers, brothers-in-law, sisters, sisters-in-law, aunts, uncles, et cetera. Now, that’s good because I can trust them, but the problem is, these people aren’t naturally the best managers to run the place.
Of course, as I get bigger and bigger, I’m running out of good family members. Do I appoint a second cousin, or that pretty incompetent youngest son of mine? You can imagine the tradeoffs that are going on, but it means that, unless you have a proper legal system which generates trust, it’s very hard to grow large firms without professional managers.
On Scottish management
COWEN: How do you think about trust and management in England versus trust and management in Scotland?
BLOOM: [laughs] I don’t know if you know, my wife is a Scot.
COWEN: Yes, of course.
BLOOM: My mum is Scottish. I don’t think they’re that different, actually. Having now lived in the US, even the US-UK difference I don’t think is enormous. Increasingly, as I travel around the world, you realize that there are huge differences. There are huge differences between Northern and Southern Europe. It strikes me as quite striking, actually. England and Scotland are very similar. We effectively have the same legal system, the same educational standards.
My mother-in-law who’s in Glasgow — I should send her this podcast. She will probably kill me for saying that. The Scots, I should point out, have had some of the most successful members of the British government, like Gordon Brown and various prime ministers. They’re overrepresented. I don’t think they’re very different. I think, in fact, in reverse — they’re really pretty similar.
COWEN: But the Scots have done much better fighting against the pandemic in the public sector. If you look at globally known brands, I know England has a greater population, but it seems to do disproportionately better than Scotland does. So, it seems to me, the two cultures are not that similar across critical margins. Maybe there are small differences in an absolute sense, but those compound into large differences in final outcomes.
BLOOM: It’s an interesting point. The Scots also voted what I would say is the right way on Brexit. They’re against Brexit. I’m going to be very open here. I was against Brexit because Britain leaving the European Union, I think, is bad economically for the UK. I think this whole concept of being a little England — they’re looking inwards.
Scotland voted against Brexit quite resoundingly, and it’s true that they’ve handled the pandemic much better. Why that is, is not clear. I regularly talk to my mother-in-law in Scotland. In some ways, they seem to be more educated, at least as far as I’d say in the way they vote. Their OECD-measured levels of education are not higher. I’m not aware of any other striking differences.
I like Nicola Sturgeon, who is, I think what’s called the first minister. She’s effectively the prime minister of Scotland. She’s done a very good job. She locked down faster in Scotland. I think that’s why they dealt with the pandemic sooner. Again, on the pandemic, I’m not enough up in the news on England versus Scotland, living in the US, to give more of an answer, but I am aware the Scots have done better on that. And they certainly did better on Brexit.
COWEN: Does Scotland have a different cultural notion of hooliganism?
BLOOM: If you know about the famous Old Firm rivalry, Celtic and Rangers, you probably think, “No.” The two Glaswegian teams. Again, in Scotland, I really spent the vast amount of my time in Glasgow. I wasn’t expecting, Tyler, to be asked about Scottish football hooliganism.
[laughter]
But as far as I’m aware, no.
The interesting thing, by the way, on technology — one of the issues that afflicted the UK was hooliganism. There’s various elements of it. One was just fighting and violence, but another one is racism, and in both of them, technology has been fantastic at combating. On both of them — cameras in the grounds, ID cards, online.
There was an incident just over the weekend — I was looking just this morning — about the racial comments made against a Crystal Palace football player that the police checked through online, and turned out to be a 12-year-old boy in the West Midlands making this stuff.
Just in terms of the ground, technology’s improved attendance at sports games. Because of this, we can stamp it out. That’s something that doesn’t show up in productivity figures. Another concern you could have — and is being a big debate in terms of productivity — is the case of the quality of life has risen in ways that we’re not measuring. I could get into that, but I think the answer is, primarily, no. You could make that claim, and hooliganism has been pushed back a lot by technology.
COWEN: If policy uncertainty is so important for the macroeconomy pre-COVID, why was the reign of Donald Trump just fine for the American economy? Because there was high uncertainty. I woke up every morning not knowing what would happen or what would be said. I’m not sure, ex post, that uncertainty was realized until COVID, but in fact, it was realized on a massive scale. Yet ex ante, the uncertainty didn’t seem to have much of a negative drag.
BLOOM: Donald Trump, in terms of economic performance — how would you assess it? Before COVID, it was fine. Again, I’m not a Trump supporter, so definitely don’t get me wrong. You could be mildly positive on it, saying, “Look, he took the Obama boom and continued it.” As expansions go on, maybe you think it’s harder and harder to keep the expansion going. Growth didn’t pick up, but it also didn’t slow down on the Trump. That would be a passing grade. It wouldn’t be fantastic. It wouldn’t be terrible either.
One thing that aided growth on the Trump was the corporate tax cuts. There’s another political uncertainty in changing his mind all the time. Honestly, a lot of bad policy with reduced growth on the Trump.
It seems to have netted out to about zero. It was no higher or no lower than in Obama’s second term. The policy uncertainty was a negative, but there were other things he did that were positive. It is also true that under Obama, there was considerable policy uncertainty because of things like the debt ceiling debate and the fiscal cliff.
Who you blame is less obvious there. Congress was fighting the president. Obama wanted to pass various pieces of legislation and couldn’t. The same thing is true now, of course. We have mixed control of Congress. I think Trump made it a lot worse, to fault him quite explicitly. He just changed his mind, and he also didn’t listen to advisers.
When he talked to firms, it’s very hard to predict which way a policy was going to go, because a lot of decisions didn’t seem entirely thought out, rational, predictable — I don’t know what words you’d use. Firms who complain about, “We didn’t see this coming.” He changed his mind and the tweets.
By the way, US physical investment, even before COVID, was not great.
COWEN: You talk about the intangibles, right?
BLOOM: Yes.
COWEN: The stock market is doing well.
BLOOM: Yes, but the stock market does not reflect the US economy. The stock market, for example — right now it’s 30 percent high tech, which has only 7 percent of US jobs. Also, when interest rates drop because the economy slows, it makes the stock market go up because it’s suddenly a relatively better investment. I think the stock market and the state of the US economy are only weakly linked.
COWEN: Say we take the 1960s, which is one of the golden eras for macroeconomic growth — many wonderful things about it. It seems policy uncertainty was quite high. There was the Cold War. There was the Vietnam War. It was the civil rights movement — not clear how it would turn out. There were riots in cities all the time. We were on the verge of major changes in regulatory policy, like the environmental movement. Anecdotally, very high policy uncertainty. Things proceeded just great, it seems. Or no?
BLOOM: This is why long-run measures are actually useful. It’s very hard — when you talk to people, they often raise different eras as particularly more or less uncertain. Often, it’s driven by their own personal experiences. There’s actually a phenomenon. It’s interesting you raised the ’60s. It’s actually phenomenal to think the past was more certain than the present because you see the past having happened. You forget all the alternative scenarios that could have been.
Just on data, the ’60s, in terms of stock market volatility, were quite low. In terms of macro volatility, were moderately low. There was the whole Great Moderation, and the ’70s and ’80s were very volatile macro growth, but the ’60s were reasonably calm. In terms of our index, Economic Policy Uncertainty Index, where we scraped newspapers, it didn’t appear to be particularly high levels of uncertainty.
You could argue newspapers in that era — it wasn’t clear how completely open they were. Watergate was opening the floodgates of being more transparent. But I don’t see in the evidence I’ve seen of the ’60s as a period of particularly high policy uncertainty. You’re right, those incidents happened, but in other areas, like domestic economic policy — again, I’m going off newspapers and stock market reactions — it doesn’t seem to be particularly high. The two great spikes in the stock market volatility by the end of the ’60s were the Cuban Missile Crisis and the assassination of JFK.
On how long working from home can work
COWEN: We’re speaking in July 2020. Given that there’s so much working at home going on right now, how long will it take before a tech company productivity declines as people grow frustrated or disconnected, or they become too restless? It’s too hard to bring on board new hires. How much time do we have before things really start to fray?
BLOOM: Great question. Just to be clear, my thoughts on work environment in the short run, working from home for those of us that can — only something like 40 percent of Americans can work from home, but that accounts for something like 50, 60 percent of GDP, because they tend to be higher-earning individuals.
For those of us that can work from home, the evidence looks like, in the short run, that increases productivity, as long as you’ve got reasonable conditions, like proper internet and a room, your own exclusive room to work in. I had an old paper looking at China, and it showed very large increases in workers’ short-run productivity from people working in call centers.
The big unknown I’m working in — I know other people are looking at this too — which is, what’s the impact on longer-run productivity, which is the concept — coming back to the beginning of the podcast — about creation and innovation. Lots of claims, including Steve Jobs, before he passed away, made several comments about he wanted people to be in the office. You have to be there for the new ideas that come up from water-cooler discussions and meetings and one-on-one stuff.
Obviously, under COVID, that's all stalled; none of that is really happening right now. You can probably get away with three to six, maybe even nine, months of not radically creating new things. But in the long run, I fear there'd be a drop in, say, patenting in 2021, 2022 because of this. The question is how firms respond. My guess, from talking to a lot of US companies, is they will return partly to the office.
I think in the long run, working from home will be fine because we'll be in the office three days a week and at home two days a week. That's the best of both worlds. I don't think you need to be in the office five days a week to be creative, but you do need some time each week with colleagues. I'm not too worried right now. What I think would be problematic is if, in late 2021, we're still all 100 percent working from home. Then I would really worry about the impacts on productivity.
COWEN: Your long-term coauthors should be those who are at Stanford or Berkeley, but your short-term coauthors can be anywhere.
BLOOM: [laughs] I know, my coauthors are just all over the place. I was going to say, one of the things I really miss while working from home is going to seminars and conferences, particularly the last two conferences we went to before lockdown. One was in Mexico, at ITAM, and one was at Monash University in Melbourne. They were both fantastic because they were small, and I got to, basically, talk to everyone there.
That's the kind of thing that generates coauthors for me: talking to someone, a quirky idea comes up, and it turns into something. Oddly enough, most of my coauthors are not at Stanford, which seems to disobey my own rule. I don't know why that is.
Mostly I have overlapped with them physically at one point or another. They're former students or former colleagues, like when I was at UCL or LSE. Steve Davis in Chicago, I've worked with a lot, though I never physically overlapped with him. Two others are Ivan Alfaro and Xiaoji Lin, whom I met at Ohio State University. It is harder —
COWEN: If you can do it, why can’t tech companies do the same?
BLOOM: Let’s take Ivan and Xiaoji Lin from Ohio State University. I first met them physically. I went to give a seminar at Ohio State University. I sat in Xiaoji’s office for half an hour. We kind of got excited about a research idea. That was the critical meeting point. I’m not sure it would have happened if we’d done it remotely. After talking to him, I thought, “This guy seems great. There’s a really interesting idea.” We continue to communicate by email.
My thought, and it roughly matches what a lot of Silicon Valley types say, is that the initial spark or idea is much more effectively generated in person. Often, it's over lunch or over coffee. This is the sense in which productivity now . . . I've been running masses of surveys on working from home to try and get a sense of how people are feeling, and both firms and workers are overwhelmingly positive about working from home.
Now, again, to be clear, that’s July, and we’re three to four months into the lockdown. My theory is, if it were full-time working from home five days a week for another six to nine months, there’s going to be much more discontentment. In fact, I saw that in China when we did the Ctrip study. People were working from home for nine months. Towards the end of it, it started to really grind and drag on. That was more about loneliness, but the other issue is in terms of being productive and being creative.
On the Nick Bloom production function
COWEN: For our final section of the conversation, I have a number of questions about your own productivity. This is called the Nick Bloom production function. Are you ready?
BLOOM: [laughs] Go ahead, thank you.
COWEN: Now, most people at top-five schools in economics, as you know, also have PhDs from other top-five schools, but Nathan Nunn has a PhD from University of Toronto, and your PhD is from University College London. What made you an outlier in this regard? And what do you think has been its advantages and disadvantages for you?
BLOOM: For me, doing my PhD at UCL was extremely fortunate. Oddly enough, I've had this discussion with a lot of people who are applying to Stanford as PhD students. I'm not sure if I effectively sell or undersell Stanford, but there are tradeoffs when thinking about grad school. It's true that if you go to an elite grad school, you're surrounded by a fantastic cohort and have great faculty. On the other hand, it's hard to get time with the faculty because there are so many other good students around.
At the time I was at UCL, doing my PhD in the late '90s, the number of other PhD students was very thin. It wasn't a big program, and many of the students were not ultimately interested in going into academia. I was one of the few who was focused. There were a few others, don't get me wrong, about five or six in my year, but it was a much smaller cohort compared to, say, Stanford, where there are 25 a year.
As a result, it was much easier for me to work with faculty, not just faculty at UCL, but others through the IFS: people like Richard Blundell, John Van Reenen, Rachel Griffith, Frank Windmeijer, Steve Bond. These people were sitting around. Lucy Chennells, I remember, was sitting right on the other side of the desk from me. I'd speak to Lucy as a grad student. It was fantastic; this was somebody who had been working with Rachel for 5, 10 years.
Having that exposure is great. If I’d been in an enormous cohort of 25 of us per year, over six years, I never would’ve got that.
COWEN: So are the top five schools overrated for economics graduate study?
BLOOM: I think the question to ask is, what's the value added? Remember, the top five schools recruit, by far, the best students. I know Stanford ranks the students. We often make offers to those at the top of the list, and we do pretty well at that; we typically only get pipped by MIT. The question is, what's the value added? It's never been obvious to me what that is. I suspect it's positive, but I'm not certain. It's definitely not uniformly positive.
For me, almost certainly, I was better off having gone to UCL. It was a fantastic outcome for me versus anywhere else because I got to work with these people early on. I also have to say I’m very lucky because the IFS in that era was big into what they called micro econometrics, which is basically using panel data, which turned out to be exactly the way to go. So I was clearly fortunate. I just happened to be in a university when it was on the rise at the time.
COWEN: You began your career at the British Institute for Fiscal Studies. How did that shape your subsequent research and how you think? Was that a mistake? Was that a wonderful start to have? It’s, again, highly unusual. Yes?
BLOOM: Yeah, the IFS was great. I did a master's at Oxford. I wasn't intending to go into research at all, actually. I applied to a lot of investment banks, and I applied for IT jobs. I remember getting an offer from VZW, a now long-closed British investment bank, to go work in the IT department, and I thought about it very seriously. So, all over the place. I took this job at the IFS, and it turned out to be fantastic.
One reason is that it really inspired me to get interested in economics. They answered what I would call pub economics questions. What I mean, in the British sense, is the questions you can talk to your friends in the pub about, which are the same ones, frankly, the New York Times or anyone . . . They're not abstruse things like, “What happens in this model when alpha goes to seven,” but more like, “How would you increase growth rates?”
The IFS was very much about inspiring me to do this stuff, and it’s also entirely empirically focused. Again, that was in an era when empirical economics wasn’t so dominant. It is much more dominant now.
So, I basically focused on data. And I was lucky at the IFS in that I could do a part-time PhD. Just to be clear, when I started there, I was not a PhD student. They had a program encouraging people to do part-time PhDs at UCL, and I started my PhD at UCL about nine months after joining the IFS. I was, oddly, an accidental PhD student. It was not something I ever had in mind.
COWEN: What do you think it is, in either your personality or your background, that led you to take these unusual paths? Because again, they’re somewhat atypical, as you know.
BLOOM: The IFS — at some point I left and went to work at McKinsey. I also went to the UK Treasury.
COWEN: Also atypical, right? Most people just go straight through — research, research, research.
BLOOM: I was clearly very lucky, so I wouldn't advise, probably, my . . . Certainly going to work at McKinsey, as in leaving a PhD for a nonacademic job, is probably, on average, not a good path. I was just extremely fortunate that I managed to get back into academia afterwards. I wasn't there that long, under two years, and I was fortunate that the people I had worked for before were running a research center. John Van Reenen, in particular, at the CEP, took me back. I was called a research officer; I was like a souped-up RA.
Then I started working in two areas. One was management. One was uncertainty. The management one turned out to be a fertile area to look in just because there’s not much data.
Uncertainty — I honestly was, again, fortunate on timing, because when I started to look into it, it was during the period of the Great Moderation. When I was working on uncertainty, I was looking at things like 9/11 as an enormous uncertainty shock, and I started to get into the topic. Business cycles were kind of quiet, and people weren't working on them that much. Then suddenly, of course, '08, '09 happened, and then COVID.
In hindsight, I wouldn't advise that path. The issue is, it's like first-order versus second-order stochastic dominance. On average, the path I took was probably a less good path to take. It turned out that, for me individually, due to circumstance and good luck, it worked out well.
COWEN: Now, your dissertation was on the topic of adjustment costs. Is there a lens through which I can read a lot of your subsequent major topics as actually all being about adjustment costs — speeding up progress in science, copying management productivity techniques and why it’s so hard, the effects of uncertainty — it’s hard to adjust to it. Are you still working on adjustment costs?
BLOOM: Yeah, it’s like my first academic love was adjustment costs. It seems strange to say that. I remember Bob Hall saying — he went to some MBR event, saying — there was a huge shouting match about adjustment costs, and he said, “How can anyone get so excited about” — you know, Bob Hall has some famous papers on adjustment costs, so it’s kind of funny — “How can anyone get so animated and excited about something so boring?”
Bob and I and many others have worked in it. I realized halfway through my PhD, it was hard to excite other people about adjustment costs. I honestly stopped talking to people. Again, coming back to the public economics thing. My friends in particular — their eyelids would start drooping. I was just boring them to tears. That’s how I ended up morphing to looking at uncertainty.
I realized if you have high adjustment costs — as in, it’s expensive to hire someone and fire, invest and disinvest — uncertainty is really costly because you can’t change your mind. But yeah, it has colored my thinking a lot.
I was thinking about working from home. Just to be clear, under COVID, with social distancing, I think working from home is going to last for another, let's say, a year. It's hard to know. If, after a year or more, we are still social distancing and working from home, we will have been in that regime for up to 18 months, and a lot of firms and individuals are going to have adjusted to that process.
You can call it inertia. You’d also think of it as adjustment costs, but this is why I think a lot of what’s happening now is going to stick, because of that. Yes, and in some senses, that has colored my thinking.
COWEN: Just you personally, relative to your level of talent — are you a person of high or low adjustment costs when you need to adjust?
BLOOM: As we get older and older, it feels like our adjustment costs become higher and higher. I have these three areas I'm working on: innovation, which I got interested in first, and management and uncertainty, the two I started working on more recently. Innovation — again, this is a random thing.
I don’t know how long ago it was; I had a summer internship, an unpaid internship — there’s a ministry long gone in the UK called the Department of Trade and Industry — to do a project looking at patents. This is 30 years ago. I remember putting up all the data on patents and that kind of interest in innovation stuff. I tend to think I’ve built up so much knowledge and interest in, particularly, management and uncertainty and innovation, I tend to mostly focus on that.
Although recently, through fortuitous luck, I was working with another couple of coauthors — again, I've never overlapped with Fatih Guvenen and Sergio Salgado — looking at inequality and firms and skewness and other topics.
For me, I really like to read broadly rather than deeply, which sounds an odd thing to say. Every Monday, for example, or Sunday night, the National Bureau of Economic Research sends out this vast email of all the recent papers. I try to scan every title and abstract, and I read the papers. I like the Economist magazine. It's good, and it's often been a source of ideas, actually.
We were talking before the call — I listen to your podcast. I actually listen to a lot of podcasts because I try and go out for a walk or a run for about an hour every day. I mostly listen to podcasts. [laughs] If I’m getting too tired, I have to switch to music. For me, that’s been helpful for coming up with new research ideas.
COWEN: What do you think will be the next different thing that you do, not just an extension of current work?
BLOOM: Geez, that's hard to say. My best guess — as you said, the other thing that's really helpful for me is working with coauthors — is that it will be some bright, sparky coauthor, a grad student, who will suggest, “We should look at X.” Maybe they're not that interested in it, but I say, “Oh, that's a great idea,” and maybe at some point it turns into a collaboration.
Often, I'm giving a seminar. A lot of great ideas come from . . . For those who don't go on the academic seminar circuit, the way academic seminars work — I was at GMU not long ago — is that you go and give a talk, and then normally you get meetings in the morning and the afternoon.
A classic day will be, you turn up at 10:00 AM. You have half-hour meetings and then lunch, and there’s a talk in the afternoon and then dinner. What I really like is those one-on-one meetings because you’re talking to lots of people for half an hour. I find them fundamentally really tiring because you’re fully on.
Actually, whenever I meet people, I go to their website, look them up for half an hour, 20 minutes beforehand, and really try and learn about what they work on. It takes a lot of time, but I find it really valuable. That’s the great source of ideas.
I’m personally also suffering in the sense of productivity — as I mentioned, I think the US economy is — from working at home full time because those one-on-one meetings have stopped. My own production function, in some ways, of continuing current projects is fine. I can do that.
But I do feel that if this carries on for another year, the US economy is going to suffer a little bit in terms of struggling to come up with new ideas because there’s not so much one-on-one discussion. I’m not randomly meeting people. I can easily Zoom current people I know, but it’s much harder to come up with random people at seminars you would’ve gone to, but clearly aren’t.
COWEN: Nick Bloom, thank you very much.
BLOOM: Tyler, thanks so much for having me. That was great. | https://medium.com/conversations-with-tyler/nicholas-bloom-tyler-cowen-productivity-economics-b5714b05fc2b | ['Mercatus Center'] | 2020-08-12 12:13:24.781000+00:00 | ['Efficiency', 'Podcast', 'Progress', 'Productivity', 'Economics'] |
Three Questions With RJ Andrews | I am a fan of pictorial diagrams, but a pictorial diagram is not enough for me. I want some sort of statistical insight as well. You can do that by arranging pictures according to some order.
I did that with the cathedral spread at the end of the Info We Trust book. I was intentional about arranging these. Each one’s an individual map and it’s on polar coordinates — it’s also a polar bar chart.
Source: RJ Andrews, Oriented Cathedrals: ‘Praying Toward the Sun’ reveals a hidden code in Gothic architecture, from Info We Trust.
2. If you were stuck on a desert island, what viz would you want to create and what would you use to make it?
If you’re on a desert island, it’s sort of like a prison, right? So, the natural thing is that you would want to track time. That might keep you from going crazy. Also, how big is this isle? Is this a New Yorker cartoon desert isle? Or, is it a Robert Louis Stevenson desert isle?
I have to eat, so we’re going to assume access to fresh shellfish. I’m making my visualization out of shells. And, I’m not going to be optimistic about my survival. I’m going to clean the beach and make it perfectly smooth. And, the thing that you can do on a desert island better than anywhere else in the world is you can look at the stars. I’m going to take all the shells from the shellfish I’ve been eating and I’m going to bleach them in the sun. Then, I’m going to make the most beautiful visualization of the night sky and reflect it back. That will probably entertain me. And, it’ll keep me out of the sun during the day when it’s hot. It will give me something to do at night. And, I’ll probably do that until I wither away from exhaustion.
3. What is one visualization that has inspired you?
It feels appropriate to pick one from the exhibition. I’ve written about this before, but the Paris Theater Review is my favorite thematic map. I love it. It may be my favorite piece of data visualization of all time. It’s not just perfect, it’s incredibly delightful. When you look at the exhibit write-up, I gave it a second paragraph. There’s nobody who knows how to do what they did here. The color on top is just so vibrant, but you can still see the base map below. It’s an incredible work. | https://medium.com/nightingale/three-questions-with-rj-andrews-6cb4dab43c51 | ['Mary Aviles'] | 2020-09-25 13:25:50.292000+00:00 | ['Datavis', 'Topicsindv', 'Interview', 'Exhibit', 'Dataviz'] |
Building a REST API using Python, FastAPI, and Heroku | First of all, who has never had to build a REST API in a few days or even hours? I bet you have. I had exactly that job: create a geolocation API that works as a microservice.
I have been using Python recently, so my first idea was to use Flask for this job, but my boss asked me to look for new approaches since I already had experience with Flask. In my research, I found some options, and the top two were Django and FastAPI.
First, I tested whether Django, the one I had already met before, fit our needs. It turned out to be a very powerful framework, but one that does a lot of things that would not be necessary here, so let's see how FastAPI worked for me.
So, let's begin by learning how to install it. In this post, I will be using pipenv, but you can use any package manager you like:
pipenv install fastapi
pipenv install uvicorn
We’ll use uvicorn as an ASGI server.
Now that you have uvicorn and FastAPI installed, you can begin to code.
Let's create the architecture:
├── main.py
├── manager.py
├── app
│ ├── router.py
│ ├── configs.py
│ └── geolocation
│ ├── services.py
│ ├── controller.py
│ ├── models.py
│ └── routes.py
└── pipfile
Note that every folder needs an __init__.py file to be considered a Python module.
Now, let's start coding. The main.py file is responsible for running the server and setting everything up.
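Here is a minimal sketch of main.py, assuming the folder layout above. The app.router import refers to the router module we create in a later step, and the port is just one reasonable default:
import uvicorn
from fastapi import FastAPI
from app.router import router

app = FastAPI(title="Geolocation API")

# Mount every module's routes under a common version prefix.
app.include_router(router, prefix="/v1")

if __name__ == "__main__":
    # Run the ASGI server directly; in production you would call
    # "uvicorn main:app" from the command line instead.
    uvicorn.run("main:app", host="0.0.0.0", port=5000)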
We set the prefix to “v1” so that we can have multiple API versions.
Setting up the manager.py now:
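Since manager.py is only a helper, its exact contents are up to you; a minimal sketch that scaffolds the module files used below could be:
import os
import sys

MODULE_FILES = ["routes.py", "controller.py", "services.py", "models.py"]

def create_module(name):
    base = os.path.join("app", name)
    os.makedirs(base, exist_ok=True)
    # Every folder needs an __init__.py to be treated as a module.
    open(os.path.join(base, "__init__.py"), "w").close()
    for file_name in MODULE_FILES:
        open(os.path.join(base, file_name), "w").close()

if __name__ == "__main__":
    if len(sys.argv) == 3 and sys.argv[1] == "module":
        create_module(sys.argv[2])
    else:
        print("Usage: python manager.py module MODULE_NAME")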
This is just a helper file I created to scaffold new modules. To use it, run the following command:
python manager.py module MODULE_NAME
In the router.py file, you need to set up all the primary routes, one for each module. We use APIRouter so we can group all the routes together in the same place.
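A sketch of router.py with the geolocation module wired in (the prefix and tag names are my choices):
from fastapi import APIRouter
from app.geolocation.routes import router as geolocation_router

router = APIRouter()

# One include_router call per module keeps all primary routes in one place.
router.include_router(geolocation_router, prefix="/geolocation", tags=["geolocation"])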
Now, let's create the file responsible for the geolocation module's endpoints: routes.py, inside the geolocation folder.
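A sketch of routes.py, where the endpoint takes an optional ip query parameter and falls back to the caller's own address:
from typing import Optional
from fastapi import APIRouter, Request
from app.geolocation.controller import get_geolocation

router = APIRouter()

@router.get("/")
async def geolocation(request: Request, ip: Optional[str] = None):
    # Fall back to the client's own IP when none is passed explicitly.
    return get_geolocation(ip or request.client.host)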
Remember, this is only one way to handle the endpoint's parameters; you can find other approaches HERE.
Next step, the controller. Here you take the request body or parameters and organize the data to pass along to the services:
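The controller stays thin: it normalizes the input and delegates to the service layer. A sketch of controller.py:
from app.geolocation import services

def get_geolocation(ip: str) -> dict:
    # Basic normalization before handing the value to the service layer.
    ip = ip.strip()
    return services.fetch_geolocation(ip)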
Now, the services: the layer responsible for all the magic. Here you can do all the data processing, external requests, and DB management; anything you need. We are going to make a request to a geolocation-by-IP provider.
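A sketch of services.py. The provider here is my choice (ipinfo.io returns a "loc" field formatted as "lat,long"), and it needs the requests package (pipenv install requests):
import requests
from app.configs import API_TOKEN

def fetch_geolocation(ip: str) -> dict:
    # Query the provider and normalize its "loc" field into lat/long keys.
    response = requests.get(
        f"https://ipinfo.io/{ip}/json", params={"token": API_TOKEN}, timeout=5
    )
    response.raise_for_status()
    lat, long = response.json().get("loc", "0,0").split(",")
    return {"lat": lat, "long": long}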
You can see that we used a file named configs.py; in this file we can store any static data (an API token, for example):
API_TOKEN = "YOUR_API_TOKEN_HERE"
Now, after all this coding, we can deploy this project to Heroku or any other cloud platform. After creating a Heroku app, you need to deploy using a Procfile.
web: uvicorn main:app --host=0.0.0.0 --port=${PORT:-5000}
The Procfile should look like this. Now, open the URL of your project on Heroku: https://YOUR-PROJECT-NAME.herokuapp.com/v1/geolocation/
Resulting in something like this:
{"lat":"-8.0539","long":"-34.8811"}
Now, you can improve this service by saving the requests to build a database cache, and you can create other services using manager.py and adding them to router.py.
BERT Text Classification Using Pytorch | Getting Started
Huggingface is the most well-known library for implementing state-of-the-art transformers in Python. It offers clear documentation and tutorials on implementing dozens of different transformers for a wide variety of tasks. We will be using PyTorch, so make sure it is installed. After ensuring the relevant libraries are installed, you can install the transformers library by:
pip install transformers
For the dataset, we will be using the REAL and FAKE News Dataset from Kaggle.
Step 1: Importing Libraries
The most important imports to note here are BertTokenizer and BertForSequenceClassification, which we use to construct the tokenizer and model later on.
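A sketch of the imports this tutorial relies on (note that in newer TorchText releases, Field and friends moved to torchtext.legacy.data):
import torch
import torch.nn as nn
from torch.optim import Adam
from torchtext.data import Field, TabularDataset, BucketIterator, Iterator
from transformers import BertTokenizer, BertForSequenceClassification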
Step 2: Preprocess and Prepare Dataset
In the original dataset, we added an additional TitleText column which is the concatenation of title and text. We want to test whether an article is fake using both the title and the text.
For the tokenizer, we use the “bert-base-uncased” version of BertTokenizer. Using TorchText, we first create the Text Field and the Label Field. The Text Field will contain the news articles, and the Label Field will hold the true target. We limit each article to the first 128 tokens for BERT input. Then, we create a TabularDataset from our dataset CSV files using the two Fields to produce the train, validation, and test sets, and create Iterators to prepare them in batches.
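A sketch of this preprocessing; the file names, data folder, and batch size are placeholders you would adapt to your own split of the Kaggle dataset:
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

MAX_SEQ_LEN = 128
PAD_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
UNK_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.unk_token)

# Labels are plain numbers; the text is tokenized by BERT's own tokenizer.
label_field = Field(sequential=False, use_vocab=False, batch_first=True, dtype=torch.float)
text_field = Field(use_vocab=False, tokenize=tokenizer.encode, lower=False,
                   batch_first=True, fix_length=MAX_SEQ_LEN,
                   pad_token=PAD_INDEX, unk_token=UNK_INDEX)
fields = [("label", label_field), ("titletext", text_field)]

train, valid, test = TabularDataset.splits(
    path="data", train="train.csv", validation="valid.csv", test="test.csv",
    format="CSV", fields=fields, skip_header=True)

train_iter = BucketIterator(train, batch_size=16, train=True,
                            sort_key=lambda x: len(x.titletext),
                            sort=True, sort_within_batch=True)
valid_iter = BucketIterator(valid, batch_size=16, train=True,
                            sort_key=lambda x: len(x.titletext),
                            sort=True, sort_within_batch=True)
test_iter = Iterator(test, batch_size=16, train=False, shuffle=False, sort=False)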
Note: In order to use BERT tokenizer with TorchText, we have to set use_vocab=False and tokenize=tokenizer.encode . This will let TorchText know that we will not be building our own vocabulary using our dataset from scratch, but instead, use the pre-trained BERT tokenizer and its corresponding word-to-index mapping.
Step 3: Build Model
We are using the “bert-base-uncased” version of BERT, which is the smaller model trained on lower-cased English text (with 12-layer, 768-hidden, 12-heads, 110M parameters). Check out Huggingface’s documentation for other versions of BERT or other transformer models.
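One way to wrap the pre-trained model. Here I use a single output unit so that the sigmoid and binary cross-entropy setup described in the next step applies directly; using num_labels=2 with the model's built-in cross-entropy loss would work just as well:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class BertClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # 12-layer, 768-hidden, 12-heads, 110M parameters.
        self.encoder = BertForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=1)

    def forward(self, text):
        logits = self.encoder(text)[0]            # shape: (batch, 1)
        # Probability of the positive class; which class is "fake" depends
        # on your label encoding, so treat this mapping as an assumption.
        return torch.sigmoid(logits).squeeze(-1)

model = BertClassifier().to(device)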
Step 4: Training
We write save and load functions for model checkpoints and training metrics, respectively. Note that the save function for model checkpoint does not save the optimizer. We do not save the optimizer because the optimizer normally takes very large storage space and we assume no training from a previous checkpoint is needed. The training metric stores the training loss, validation loss, and global steps so that visualizations regarding the training process can be made later.
We use Adam optimizer and a suitable learning rate to tune BERT for 5 epochs.
We use binary cross-entropy as the loss function since fake news detection is a two-class problem. Make sure the output is passed through a sigmoid before calculating the loss between the prediction and the target.
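A condensed sketch of the training loop; the full version would also call the save functions for checkpoints and metrics described above:
optimizer = Adam(model.parameters(), lr=2e-5)
criterion = nn.BCELoss()

def evaluate(model, iterator):
    model.eval()
    total_loss = 0.0
    with torch.no_grad():
        for batch in iterator:
            preds = model(batch.titletext.to(device))
            total_loss += criterion(preds, batch.label.to(device)).item()
    return total_loss / len(iterator)

best_valid_loss = float("inf")
for epoch in range(5):
    model.train()
    for batch in train_iter:
        optimizer.zero_grad()
        preds = model(batch.titletext.to(device))
        loss = criterion(preds, batch.label.to(device))
        loss.backward()
        optimizer.step()
    # Checkpoint only when the validation loss improves.
    valid_loss = evaluate(model, valid_iter)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), "model.pt")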
During training, we evaluate our model parameters against the validation set. We save the model each time the validation loss decreases so that we end up with the model with the lowest validation loss, which can be considered as the best model. Here are the outputs during training:
Image by author
After training, we can plot a diagram using the code below:
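A sketch of that plotting code, assuming the training metrics (train loss, validation loss, global steps) were saved to metrics.pt by the helper from Step 4:
import matplotlib.pyplot as plt

metrics = torch.load("metrics.pt")
plt.plot(metrics["global_steps"], metrics["train_loss"], label="Train")
plt.plot(metrics["global_steps"], metrics["valid_loss"], label="Valid")
plt.xlabel("Global Steps")
plt.ylabel("Loss")
plt.legend()
plt.show()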
Image by author
Step 5: Evaluation
For evaluation, we predict on the test articles using our trained model and evaluate against the true labels. We print out a classification report, which includes test accuracy, precision, recall, and F1-score. We also print out the confusion matrix to see how much data our model predicts correctly and incorrectly for each class.
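A sketch of the evaluation loop, using scikit-learn for the report and confusion matrix; the 0.5 threshold and the REAL/FAKE label order are assumptions:
from sklearn.metrics import classification_report, confusion_matrix

model.load_state_dict(torch.load("model.pt"))
model.eval()
y_true, y_pred = [], []
with torch.no_grad():
    for batch in test_iter:
        probs = model(batch.titletext.to(device))
        y_pred.extend((probs > 0.5).long().cpu().tolist())
        y_true.extend(batch.label.long().cpu().tolist())

print(classification_report(y_true, y_pred, target_names=["REAL", "FAKE"], digits=4))
print(confusion_matrix(y_true, y_pred))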
Image by author
After evaluating our model, we find that our model achieves an impressive accuracy of 96.99%!
Conclusion
We find that fine-tuning BERT performs extremely well on our dataset and is really simple to implement thanks to the open-source Huggingface Transformers library. This can be extended to any text classification dataset without any hassle.
Here are other articles I wrote, if interested 😊: | https://towardsdatascience.com/bert-text-classification-using-pytorch-723dfb8b6b5b | ['Raymond Cheng'] | 2020-07-22 10:51:01.068000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Technology', 'Data Science'] |