title | text | url | authors | timestamp | tags
---|---|---|---|---|---
How To Write Functions In TypeScript | How To Write Functions In TypeScript
Beginner-friendly series to future proof your app
Hey There
The other parts of this series are below for your convenience. 🥳
Part 1 of future-proofing your app
Part 2 of future-proofing your app
We previously talked about prop-types and then moved on to an introduction to TypeScript. Still, that introduction was very brief and focused on the type system for primitive data types. This post shows how a function can check its argument types before executing the function body. We will also look at some more complex types after exploring how TypeScript works with functions.
Photo by James Harrison on Unsplash
JavaScript Kinks
First, let’s take a look at the amount of extra code we need to handle type errors or unexpected behavior in JavaScript, and this is only when you actually have the experience and foresight to expect these sorts of things. JavaScript can’t really check the types of arguments passed in. A JavaScript function can take in any data type, and it is in the function body where we specify logic for handling unexpected data types.
JS code:
const findSum = (numOne, numTwo) => numOne + numTwo

findSum(5, '6') // outputs '56'

Notice the '6' is a string in this case, and the integer 5 gets coerced into a string as well, resulting in a concatenated string value. Definitely not what we want in this case. Sure, we can write logic inside the function body like below.

const findSum = (numOne, numTwo) => {
  if (typeof numOne !== 'number' || typeof numTwo !== 'number') {
    throw new Error('passed in args are not integers')
  }
  return numOne + numTwo
}
The second JavaScript code is definitely valid, and it works. However, there is a much simpler way to write this in TypeScript. The less code we write usually means the less code we have to read and maintain. Everybody ‘liked’ that!
TS code:
const findSum = (numOne: number, numTwo: number) => numOne + numTwo

findSum(5, '6')

This will result in an error that looks like the line below.

Error - Argument of type 'string' is not assignable to parameter of type 'number'.
We can take advantage of type annotations with the same syntax used when declaring variables — <variable/parameter: desired data-type>. The type system within TypeScript will catch the unexpected data type and report the error at compile time, without us writing the extra code. Note that a type of ‘any’ will be inferred if no type is provided next to the parameter.
Functions, Parameters, Arguments…Battlestar Galactica?
Remember how JavaScript likes to silently fail most of the time and tends to produce undefined whenever there’s a small bug somewhere? There are times when it is due to missing arguments, and that won’t fly the same way with TypeScript. Let’s see how the TypeScript error will look with the code below.
JS code:
const findSum = function (numOne, numTwo) {
  console.log(numOne, numTwo) // 6, undefined
  return numOne + numTwo
}

console.log(findSum(6)) // NaN

TS code:

const findSum = (numOne: number, numTwo: number) => numOne + numTwo

findSum(5)

Error - Expected 2 arguments, but got 1.
- An argument for 'numTwo' was not provided.
The errors are beneficial, and there’s definitely less flexibility in TypeScript unless you explicitly opt into it with optional parameters.
JS code:
const greeting = function (name) {
  return "hello " + (name || 'stranger')
}

console.log(greeting()) // hello stranger

TS code:

const greeting = (name?: string) => { // <-- notice the question mark?
  return `hello ${name || 'stranger'}`
}

The question mark after the parameter allows developers to make a given parameter optional.
Assigning default values to parameters looks the same as JavaScript.
TS code:
const greeting = (name = 'stranger') => {
return `hello ${name}`
}

console.log(greeting()) // outputs 'hello stranger'
TypeScript will perform type inference here based on the default value’s type. So, in this case, the default value is of type ‘string’, and passing any other data type will result in an error. Since we are on the topic of type inference, it’s also worth mentioning that TypeScript infers the return type by looking at the value after a return statement. In the code above, the return type of the greeting function will be a ‘string.’ If for some reason you store the returned value in a variable annotated with a different data type, it will result in a type error. Now let’s also look at explicitly stating return types.
TS code:
const myAge = (age = '18'): string => {
  // the explicit annotation after the closing parenthesis states the return type
  return Number(age)
  // returning any other data type will cause an error:
  // Type 'number' is not assignable to type 'string'.
}
If a function returns undefined because of a missing argument or forgetting to return a value, TypeScript will produce a helpful error because undefined is also a data type. Super neat!
Conclusion
Thank you for joining me on my journey to learn TypeScript. I personally see the usefulness of TypeScript to safeguard our code. It certainly helps with readability and takes away the guesswork of what a function is supposed to receive and return when working in a big engineering environment.
Hopefully, I was able to be clear about my thoughts, and we all learned a lot from this article. Now we know how to read and write type annotations on parameters, give default values just like in JavaScript, and determine or explicitly state a return type for a function. This is probably more than half the battle already because functions are treated as first-class citizens able to be passed around to other functions.
The next post on TypeScript will cover applying types to more complex data structures like arrays and objects. Always be learning, am I right?! See you all on the next one.
References | https://medium.com/javascript-in-plain-english/how-to-write-functions-in-typescript-4fe5cea4c9d | ['Wilson Ng'] | 2020-12-21 15:48:34.690000+00:00 | ['Programming', 'Web Development', 'React', 'JavaScript', 'Typescript'] |
10 Habits I Borrowed From Python That I Use in JavaScript (Part 2) | Read Your Code a Week From Now
Can you tell how it works at a glance? Sparse is better than dense. Readability counts.
One way to make your code very, very readable is to have lots of little parts with very clear functionality. This is known as the single responsibility principle. If your code base is big, there’s going to be a lot of logic to write. So what is easier to work with? Lots of small but well-structured pieces of functionality, or one very big piece of code doing a lot of things? If you’ve learned anything from programming, you’ll know the first option is your friend.
The reason for that is that code is not written to be static forever. You want to add new features, modify some behavior, fix some bugs, test your code, change designs, etc. So having functionalities as isolated as possible makes it really easy to swap out the parts you don’t need or to figure out what to change for different parts with the desired behavior, given that the responsibilities are well-defined. Most important, it makes debugging a breeze because you can more easily pinpoint the culprit part of your code.
You can do this in Python by wrapping your data structures in classes, assigning them the simplest responsibilities you can, and then giving those classes the required methods to perform their tasks. In React, the same principle can apply. Break your big, bulky components into smaller parts and give those parts the job of rendering a specific part of the DOM. Or refactor logic so that a custom Hook deals with a specific part of the logic of a component (a specific subset of the state plus memoized callbacks used to manipulate that state, for instance). This makes your component much easier to read, and if a task needs to be redesigned or performed differently, the reader knows exactly where to look.
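As a rough Python sketch of that idea (a hypothetical example, not taken from the original series), a small class that owns exactly one responsibility might look like this:

# Hypothetical sketch: one class, one clearly defined responsibility.
class ShoppingCart:
    """Keeps track of cart items and nothing else: no payment or shipping logic."""

    def __init__(self):
        self._items = []

    def add_item(self, name, price):
        # The only state this class manages is its list of items.
        self._items.append((name, price))

    def total(self):
        # A value derived purely from the data this class owns.
        return sum(price for _, price in self._items)

Swapping out how totals are computed, or how items are stored, then touches only this one small unit.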
A good example of a small building block like that would be to write a Hook that manages a boolean state, with three memoized callbacks to set the boolean to true , false , and to toggle the value, respectively. This is the useBoolean Hook I displayed in Part I! (I’m a big fan of it. I want more of those!) | https://medium.com/better-programming/10-habits-i-borrowed-from-python-that-i-use-in-javascript-part-ii-53b405d31f1b | ['Patrick Da Silva'] | 2020-12-01 17:57:39.617000+00:00 | ['JavaScript', 'Software Development', 'Programming', 'React', 'Python'] |
We Can All Go Back to Kansas Now | We Can All Go Back to Kansas Now
People have the memory of a gnat. We don’t have to cancel history, because 99 percent of it is already forgotten. How much do you remember yesterday? Last week? Last month? How different is your point of view, relative to those around you? Multiply that by billions.
That’s the nature of time. We have this grandiose sense of it as the present marching firmly from past to future, but the reality is that change turns future to past. Tomorrow becomes yesterday, because the earth turns. There is no dimension of time, because the past is consumed by the present. Cause becomes effect.
Consequently time is this very blunt funnel, where the nearly infinite future sorts into this very finite present, then becomes the crumbling residue of the past.
While we exist in this moment, we dream of all the future possibilities and how to make them happen. Yet so is every other living being reaching for the future.
One primary way to leverage our efforts is to work together. Which requires some collective sense of destiny. Religious, national, economic, tribal, familial. Then we have to layer and sort which are more important, long term, short term, etc.
The more complicated the arrangements, the better we can calibrate them to the specific details, yet the more fragile and delicate they become.
So we often have to deal with very crude and simplistic relationships. Which wield enormous power and minimal nuance. Such as our current political duopoly and the forces pushing it to open conflict.
The same laws of nature apply on all scales though, so the lack of responsiveness is also shortsighted. Thus the extremes come to resemble chest thumping apes.
The bull is power, the matador is art.
To step back a little, social organisms form governments for the same reason individuals have central nervous systems: to guide and regulate the appetite and survival driven impulses of the body. This process entails cycles of expansion and consolidation, consumption and digestion. We grow, but we also have to regulate that growth. Our cultural dichotomy reflects this, as liberalism is the side of growing and expanding, while conservatism is the organization and structure of civics and culture, giving form to society.
There was a time when government was a private function, but as monarchies proved insufficiently adaptable, government shifted, with enormous conflict, to becoming a public function.
There is another system serving the entire body and that is blood and the circulation system. Its communal analogy is money and banking. Which is still very much in private hands, but appears to be having its own, “Let them eat cake.” moment.
So the government we have has become increasingly subservient to this banking establishment, which can starve anyone questioning, or reward anyone serving its interests.
As the expression of conservatism, the Republican Party has always been on the side of those forces running society, while the Democratic Party has been traditionally on the side of those on the bottom, the ones really providing the raw energy propelling the community. Even if this energy is guided by those on top.
Yet when the economy slowed in the 70’s and Jimmy Carter had the audacity to tell people to put on a sweater and suck it up, Ronald Reagan came along and said we should put it on the credit card and keep partying.
After Reagan's greed-is-good, trickle-down economics became the law of the land, Clinton beat Bush by playing the same game with a faster beat, and we had neoliberalism and the Third Way. As Deep Throat told Woodward, "If you want to know what's happening, follow the money."
The many and varied problems this has created have been pretty effectively swept under the rug, such as burgeoning public debt to back surplus private sector investment capital and a financial sector turned into a casino, as metastatic gobs of money are held in suspension, by being wagered in enormous feedback loops. The medium of the markets has become the message of capitalism. The tool has become the god.
So the last couple of generations have come of age on the surface of history's largest financial bubble, which is turning large sectors of the actual economy into bloat and calling it growth, while preying on other important, but financially vulnerable, areas.
What their lack of historical perspective misses is that when we suddenly have a bankruptcy skating con artist wiggling his way to the top, after such refined and educated, but evidently sociopathic leadership as the last several presidents, there are deeper reasons than that half the country are racist homophobes.
If you were one of those shipped off to the endless wars, or had your home taken through blatant white collar crime, you might view such a clown as a breath of fresh air. A monkey wrench in the machine.
For the sorts of people who have been through this, the current media whitewashing and crowning of Biden is a disappointment, but not a surprise. It's like one of those arguments where the other guy totally ignores your points, beats up on some strawman of his own, then runs around pumping his fist in the air, assuming he won.
Yes, there are all sorts of races, ethnicities, cultures, sexual orientations, etc, but that’s multiculturalism! The question is what causes tension between groups. Which is either bottom up economic competition, exploitation, etc. Or a top down absolutist ideology, that insists on its own monoculture.
When the powers that be are suddenly on the side of the most downtrodden, against much of the middle class, it does look like divide and conquer, which is another form of exploitation.
Now that the Orange Clown is being pushed off stage, so the same old creeps, kleptomaniacs, war criminals, charlatans, moochers and other slithery creatures can get back to business as usual, all is good again.
If you were one of those promised a rainbow colored sparkle pony, for throwing stones at the Bad Man, well, the check is in the mail, but don’t hold your breath, or you really will turn blue.
If you are one of those currently cashing a big fat check, be careful of whom you’ve sold your soul. The lesser evil is still evil. It will be much more expensive to buy it back, should you change your mind and will be well used and abused in the meantime.
Though maybe the bling is worth more than a clear conscience. I don’t know though. Ask Hunter Biden.
As it is, these elections are like the wildebeest having to chose between the lions and the hyenas. Who would you prefer running the government? The Mafia and the Hell’s Angels, or the Crips and the Bloods?
Personally I wrote in, Assange/Manning. They had the guts to look the Beast in the eyes and not blink. Just because the mob has nailed you up on a cross, doesn’t mean you’ve lost. The war for the integrity of this country has only just begun.
Respect and responsibility hold society together. Fear and greed tear it apart. | https://medium.com/predict/ding-dong-the-witch-is-dead-6f19fd291b9d | ['John Brodix Merryman Jr.'] | 2020-11-18 21:36:40.877000+00:00 | ['Society', 'Culture', 'Politics', 'Philosophy', 'Future'] |
Humans Are Awkward Creatures | Anthropologists and sociologists have been studying the emotion — or feeling, rather — of awkwardness for quite some time, but, despite this, the majority of what the average person knows about it is often exclusively limited to their own personal experience.
For the most part, we’re all vastly familiar with what it’s like to feel awkward. We are experts at identifying it, and even more well practiced in trying to vehemently avoid it. Whether it stemmed directly from our own actions, or we were caught red-faced in an awkward moment we bear no personal responsibility for — it’s true we’ve encountered these scenarios for as long as we can retrospectively recall, after all.
Described as the feeling that arises in situations where our innate desire to be liked and accepted by others is threatened, awkwardness largely concerns a violation of perceived social norms.
Naturally, in circumstances where we have familiarity, there is protection against the occurrence of awkward moments because we are already aware of the accepted (and not accepted) parameters of the interactions. By examining our evolutionary relationship with the concept of awkwardness, it becomes a little bit clearer why it might have benefitted us to have this emotional response stick around over all these years.
Living in small communities and groups for the majority of human history, we would have been raised with the rules of the village as they were initially outlined to us, with no additional tasks other than to uphold them and fulfill our social roles within them. Because everyone would have been on the same page since birth, any sign of awkwardness, or a violation of these established social expectations, would signify a problem, or perhaps, the arrival of an outsider.
Awkwardness, then, evolved with us to become an indicator of emotional unfamiliarity, and the risk of rejection by the group. It stems from not adhering to, or perhaps not being aware of to begin with, the norms that exist for any given situation.

The trouble with norms is that they're not exactly set in stone — or even scrawled in pencil on an old receipt in the city hall, for that matter. They exist in every culture you can think of, but the difference in norms across and between them can be immense. Norms can differ even between the individuals within a particular community, with any one person viewing the 'right' way to conduct themselves in a situation completely differently from the next.
It becomes clear to understand, now, why this is quite a lot to ask of a human being — not only are we somehow tasked with effortlessly assessing the social norms of any given interaction, but we’re also socially expected to live up to them. | https://medium.com/curious/humans-are-awkward-creatures-1435e4affa7a | ['Alexandra Walker-Jones'] | 2020-11-13 15:47:31.999000+00:00 | ['Culture', 'Psychology', 'Society', 'Human Behavior', 'Evolution'] |
How people wade through one of the top trending technology (artificial intelligence) | Nowadays, people are showing more and more interest in learning new technologies. Among the many top trending technologies, artificial intelligence is one of the most prominent. For those who wade into artificial intelligence, is it good for their career?
Artificial intelligence is one of the most important technologies in this world. Today the field of artificial intelligence is more alive than ever, and some believe that we are on the threshold of discoveries that could shift human society permanently, for better or worse.
What is Artificial Intelligence?
Artificial intelligence is an important technology and one of the branches of computer science concerned with the creation of intelligent machines that can work like humans. It has become an essential part of the technology industry. AI systems can perform specific tasks by processing large amounts of data.
What are artificial intelligence platforms?
AI platforms use machines to perform tasks that would otherwise be performed by human beings. These platforms mimic capabilities of the human mind such as learning, problem solving, reasoning, and social intelligence. AI is classified either as narrow AI (weak AI), which is generally meant for particular tasks, or strong AI, also known as artificial general intelligence, which can find solutions for many different tasks.
There are different kinds of AI platforms.
1. Machine learning
2. Automation
3. Natural language processing and natural language understanding
4. Cloud infrastructure
1. Machine learning: It is one of the roots of artificial intelligence. For machines to take care of your problems, we first require good and reliable data so that the machines can work well. With that in place, you can build what you want: machine learning uses this data to learn complicated decision systems.

2. Automation: Automation is everywhere in technology, and in artificial intelligence it is a must-have feature. It basically means creating software or hardware that is capable of doing tasks automatically, without human intervention. Artificial intelligence is all about trying to make machines or software imitate, and ultimately supersede, human behavior and intelligence. With the right approach, you can automate processes such as invoicing, marketing, and job documents with ease.
3. Natural language processing and natural language understanding: Natural language processing (NLP) deals with the interaction between human (natural) language and computers, and it refers to communicating with an intelligent system.

The field of natural language understanding (NLU) is an important and challenging subset of natural language processing (NLP). Using algorithms, it aims to turn human speech into a structured representation.
4. Cloud infrastructure: Cloud infrastructure provides the scalability and access to resources needed to deploy even complex artificial intelligence and machine learning solutions.

Artificial intelligence is a trending technology with the potential not only to improve existing cloud platform capabilities but also to power a new generation of cloud computing technology.
Current forms of Artificial intelligence:
Voice assistants: Calling every technology that makes our lives easier by a single name is almost impossible. A voice assistant is a digital assistant that uses voice recognition, speech, and natural language processing to provide a service through a special application. They differ essentially based on how we interact with the technology, the app, or a combination of both.

Translation: Translation is not just about translating languages. It is also about translating objects, pictures, and sounds into data that can be used in various algorithms.

Predictive systems: Artificial intelligence looks at statistical data and forms valuable conclusions for investors, doctors, meteorologists, and nearly every other field where statistics and event prediction prove valuable.
Top artificial intelligence platforms:
Below are some of the top AI platforms and software.
Microsoft Azure machine learning.
Google cloud prediction API.
Tensor flow.
Infosys Nia.
Wipro HOLMES.
API.AI.
Premonition.
Rainbird.
Vital.AI.
MindMeld.
Is AI dangerous?
Artificial intelligences are long series of programmed replies and collections of data right now, and they don't have the capability to make truly independent decisions. If an AI saw humanity as useless for its purposes, it could eliminate us from the equation by using our existing stockpiles of biological weapons, or by turning existing viral agents into weapons to bring our cities to a pause.

Artificial intelligence will probably get a little smarter and kill a lot more people in the process before we figure out how to make it actually clever. Clever AI in the future, the super-intelligent sentient sort, is probably going to see us either as tools to manipulate, as toys to play with, or as pets to protect.

Humans have empathy built in because we evolved to be social animals. Artificial intelligence built from the ground up needn't come with empathy. If we don't make sure to build empathy into such artificial intelligence at the outset, it could be dangerous for us. | https://medium.com/quick-code/how-people-wade-through-one-of-the-top-trending-technology-artificial-intelligence-6f59f9bb55a1 | [] | 2019-05-22 12:21:00.874000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Software Development', 'Technology', 'AI'] |
What is Machine Listening? (Part 3) | What is Machine Listening? (Part 3)
“I saw the news about security camera with scream detection 10 years ago.”
It’s like speech recognition in the early 2000s.
When people first encounter machine listening technology, some remember the news articles about smart security cameras with scream and gunshot detection functionality, published more than 10 years ago. We can indeed find quite a few such articles in the media, but it is very hard to find an actual working one in real life.

Many technologies get polished over time, but sometimes a new technology is based on a fundamentally different methodology with much more potential, coverage, advantage, and functionality, and we call it "next-generation". It's like speech recognition in the early 2000s. We all remember that speech recognition tech has been around for a long time, but it was only able to understand a few words or very simple sentences, with low accuracy. Rather than the machine adapting to the human, the human needed to talk like a robot to get a better recognition rate. This could create a wow-effect on people at that time, but it is hard to say that it was widely used in our daily lives.
However, speech recognition is now a part of our daily life. It now understands much more natural real-world pronunciations, integrated into many smart devices, transforms how we interact with machines. I wouldn’t say it is perfect because it still has many points to be improved. For instance, you can just try “Hey, woo-woo” instead of “Hey, Google”. Or you can try “Aleka” instead of “Alexa”. I’m sure you will get what I mean by many points to be improved.
Then, next-generation is end-to-end deep learning.
Similar things are happening to modern machine listening technology. Conventional sound recognition is limited to identifying a few sounds, with frequent false alarms from similar sounds. It is based on rule-based methods that use handcrafted spectral features, fully engineered by hand from human knowledge and observation. This doesn't mean that Machine Learning (ML) techniques are totally new in this field. ML has been widely used, and the state-of-the-art algorithm in this field until 2016 was actually a combination of handcrafted features with ML classifiers.

Modern machine listening can now do much more. First, it provides significantly better performance in the real world. Previous approaches were not adaptive to various environments and situations, whereas end-to-end deep learning (DL) can be trained on varied real-world cases, which significantly improves robustness and reduces the false-positive rate.

Second, it has great potential to support a huge number of target sounds. For the conventional method, a performance drop is more or less unavoidable because all the rules have to be changed for new target classes. But for a deep learning-based algorithm, more target sounds means it can extract better features from the input data, so performance can improve even further and false alarms caused by similar sounds can be removed. This is easy to see if you consider that detecting a certain sound is equivalent to classifying all other sounds as negative.

Third, and most importantly, it can be generalised much more than before. If we use deep learning for a specific device or microphone only, it is worth little compared to conventional approaches and only introduces heavy computation (though it probably has some worth from a marketing perspective).

Most importantly, with the proper use of DL and audio signal processing techniques, we can achieve an extremely generalised system which can be used with any kind of device, microphone, and environment. This is highly critical for the future of machine listening, because we will soon be surrounded by thousands of IoT devices, all running in different environments with various microphones, and we can never obtain the recording information of all videos on YouTube.
What makes it so difficult?
Making a generalised solution requires a deep understanding of audio signal processing. To name just a few factors, there are sampling rate, codec, type of microphone, analysis window size, and so on. It is not simply about applying deep learning techniques to the data, because that will only work well as a demo.

Next, there are millions of different sounds. For instance, dog barking might look simple, but there are hundreds of different breeds at different ages. Also, describing sounds in human language can sometimes be ambiguous, or the same sound can be described in different words. A key jingling sound can be described as key jingling, but it is not wrong to say metal jingling or keychain sound. And there are many different sounds in different cultures: the acoustic scene of the metro in Seoul would not be the same as the Underground in London. Plus, some sounds are really difficult to collect. We might not be able to collect real-world car crash sounds in a short time, because it is hard to guess when and where they will happen.
A common myth about the amount of data
The amount of data is indeed highly important for deep learning. That’s why many people say data is the new oil. However, I would like to emphasise that more data does not necessarily mean better accuracy. In fact, there are lots of other things that are related to the final performance of the system.
The amount of data needed differs for different sounds. For instance, a machine-generated sound like a siren is always the same, so we wouldn't need lots of data for it. On the other hand, an ambulance siren recorded in a tunnel at 60 km/h would be really difficult to obtain. In such cases, simulating it with signal processing techniques is a much better way.
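As a loose illustration only (a minimal NumPy sketch under the assumption of a mono waveform stored as a float array, not our actual pipeline), such simulation can be as simple as shifting the gain of a clean recording and mixing in background noise at a chosen signal-to-noise ratio:

import numpy as np

def simulate_variant(clean, background, gain_db=-6.0, snr_db=10.0):
    # Apply a gain change to the clean recording.
    sig = clean * (10.0 ** (gain_db / 20.0))
    # Trim the background noise to the same length as the signal.
    noise = background[: len(sig)]
    # Scale the noise so the mix hits the requested SNR, then add it.
    sig_power = np.mean(sig ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(sig_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return sig + scale * noise

Looping over gains, noise types, and SNRs turns a single clean siren recording into many plausible training variants.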
Various quality rather than high, correct data rather than more
Also, performance saturates once a certain amount of input data from the same audio source is reached. It depends on the target sound, but it is clear that performance does not increase just because more data is used for training. A more important factor for performance, especially for a generalised system, is the diversity of the data. Seeing patterns from various conditions allows the DL model to learn the unique pattern of the target sound better, rather than overfitting to the recording device or environment.

Finally, the quality of the data is extremely important. High quality here doesn't mean studio-recorded clean audio, because we will never face a studio-like clean environment in real life. It means appropriate audio samples with correct labels. If there is a lot of wrong or noisy data in the training set, it will be harmful to system performance.

To summarise, simply pouring time and money into data is not the way to make a DL system stronger. Building an efficient, appropriate, and sustainable data collection pipeline associated with the DL architecture and data augmentation techniques, and focusing on the quality of data rather than quantity, will be key to a successful machine listening system.
Where it is heading towards
Deep learning-based machine listening is already powerful thanks to its superior performance. But apart from performance, it has even greater potential to make our lives much more convenient by providing a generalised sound cognition ability. This will allow audio information to be used not only for simple trigger-actions but also for context-aware autonomous systems and humanoid robots.
In my opinion, modern machine listening will begin with replacing existing sound recognition systems first, then move towards supporting more target sounds (wider width) with more detailed information (deeper depth). Now, the term machine listening is used mostly for environmental sound detection, but it will gradually evolve to integrate speech analysis and music information retrieval as well under one umbrella to ultimately achieve a human-like understanding of acoustic information. | https://medium.com/cochl/what-is-machine-listening-part-3-eb72c844d30e | ['Yoonchang Han'] | 2020-12-04 14:03:59.373000+00:00 | ['Machine Learning', 'Audio', 'Machine Listening', 'Music', 'Artificial Intelligence'] |
How to Develop a Custom CRM using only Low-Code Platforms | After that, we’re going to have to build a relationship between the Deals and TouchPoints table. This is because many TouchPoints can happen during the life of a Deal, and we’re going to want a has_many relationship to model that.
All you’re going to need to do is drag the Deals table over the TouchPoints table’s schema and drop it into the “Add New Field” input. This will wire up most of what’s needed automatically. What you’ll want to set manually are the following items.
1. Name the relationship "Deal" on the TouchPoints schema.
2. Confirm that Allow multiple TouchPoints per Deal is checked in the relationship settings, and not Allow multiple Deals per TouchPoint.
3. Check the Mandatory validation for the Deal field.
Building a relational data-model for custom CRM
Nice work so far! We’re going to speed it up now since you’ve already learned how to create tables, add fields, and build relationships between tables.
Our Deals table is going to be storing much more information. It’s serving as the main record in our CRM. Click over into its schema and try to add the following fields with the right types and settings!
Deals Table
email (type = Text, Field Size = 30, Mandatory = true)
stage (type = Switch, Format = Custom, Options = lead/opportunity/customer, Mandatory = true, Default Value = lead)
amount (type = Number, Decimal places = 2)
deal_name (type = Text, Field Size = 100, Mandatory = true)
department (type = Text, Field Size = 100)
phoneNumber (type = Text, Field Size = 12)
TouchPoints (type = Table)
At this point, our data model is set up and our API is actually ready to use! Just for fun, let’s go ahead and add our first deal to the database using the GraphQL API.
Move over into the workspace API Explorer. This is a great environment for writing and testing your GraphQL queries, mutations, and subscriptions. However, you can always use another GraphQL client if you want, like GraphiQL or Postman. That said, if you set your tables up correctly, go ahead and run the following mutation to add a deal that already has 2 touch points.
mutation {
dealCreate(
data: {
deal_name: "Big opportunity at Big.co"
department: "Marketing"
email: "[email protected]"
amount: 100000.00
stage: "lead"
touchPoints: {
create: [{ contactMedium: "Phone" }, { contactMedium: "Email" }]
}
}
) {
id
deal_name
createdAt
}
}
Since the API Explorer is embedded in the 8base console, it’s able to handle authentication for you. That said, to allow our Retool frontend to talk to 8base, we’re going to generate an API Token and assign it the necessary role.
Navigate to Settings > API Tokens and click the plus button. Name the token whatever you like, however make sure that you assign it the Administrator role! This will allow the Retool full access to our workspace tables.
Copy and paste the token value somewhere safe once you create it! It’s only visible once.
Create API Token for the custom CRM
Awesome! Our backend for the application is set up. It’s not time to go ahead and retrofit our frontend CRM template.
Setting up Retool with a GraphQL Resource and Custom CRM template
Retool allows you to connect to a crazy number of data-sources — both APIs and databases. Since 8base exposes a GraphQL endpoint, we’re first going to be adding a GraphQL Resource to our account.
Starting at the home screen of your Retool account, move to the Resources tab and click on the Create New button. You’re going to then scroll down to the APIS section and select the GraphQL option. Once open, it will prompt you for some information that we’ll add from our 8base workspace.
Name — “8base GraphQL Backend”
Base URL — “YOUR_8BASE_WORKSPACE_ENDPOINT”
Headers — key = “Authorization”, value = “Bearer YOUR_API_TOKEN” | https://sebscholl.medium.com/hot-to-develop-a-custom-crm-using-only-low-code-platforms-67da41bbe9b1 | ['Sebastian Scholl'] | 2020-08-14 17:06:58.413000+00:00 | ['Development', 'Apps', 'Software Development', 'Productivity', 'Low Code'] |
Pitching Your Manuscript On Twitter’s PitMad Event | Pitching Your Manuscript On Twitter’s PitMad Event
Is It Worth The Time And Effort to Pitch Your Work During This Quarterly Event?
This December was the first time I took part in Twitter's PitMad. Since I got my manuscript as polished as I could get it in time for the event, I thought I should give it a go.
But was it worth the time and effort?
What Is PitMad?
PitMad is an opportunity for writers to pitch their unpublished manuscripts to agents. The event takes place four times a year and is open to all genres. There are other pitching events throughout the year, but unlike PitMad, they focus on specific genres.
During the day (8am — 8pm EST), writers can pitch a previously unpublished manuscript. You can write three tweets per book. Your book must be polished and ready to go. If you have more than one manuscript ready, you can tweet a pitch three times for each one.
Agents can show they are interested by liking your tweeted pitch. Once you get a tweet from an agent, you can submit your manuscript by following their submission guidelines.
What Are The Positives?
If you want to pitch your manuscript, it needs to be ready to send to any agents that request it by liking your tweet. Why is that a good thing? Because it gives you a deadline. I have been procrastinating with the final edits to my manuscript, but knowing that I wanted to take part in PitMad, meant I had to get a move on. I now have a manuscript that is ready to go.
You get to practise your book pitch. I came away from the event empty handed. But that’s OK. It simply means I’ve got to work on my pitch. Writing a pitch that catches someone’s attention in 280 characters is tough, and creating three different ones was good practice.
Reading through the pitches from other writers gives you a good idea of what works and what doesn’t. I read some great ones that got me instantly intrigued. But I also read some terrible ones that shouldn’t have seen the light of day.
Mine fell somewhere in the middle. This one got the most retweets by other writers.
After 20 years, Kati returns to her hometown hoping to find solace in her new home. Instead, she discovers her home comes with its own dark secrets. Will Kati be able to slay the demons of the past to find peace and happiness again?
The biggest positive I took away from the event was the support of the Twitter writing community. It’s uplifting to have other writers give positive feedback on your pitch.
What Were The Negatives?
One negative is the sheer amount of tweeted pitches. I tried my best to show my support to fellow writers and retweet the ones I liked. But with new tweets posted almost every second, I’m sure I missed lots of great pitches.
I’m sure the agents miss some of them, too. You can narrow your category down by selecting age and genre hashtags, but even then, the agents must scroll through hundreds of pitches. There comes a point when they need a break.
And when the agent gets back to the tweets, hundreds more might have been tweeted while they weren’t looking. What if yours just fell in the gap when the agent wasn’t looking?
Timing your pitches right is a challenge. How do you know what is the best time to tweet? I posted my first one a few minutes past 8am EST. So did several hundred others.
I know this might sound like I’m finding excuses for why agents did not favourite my tweeted pitches. I promise I’m not looking for excuses. I’m the first to admit that my pitch wasn’t as great as some others I read.
My Conclusion
I enjoyed the experience. It was an opportunity to learn and improve.
Would I do it again? Definitely. It was fun and so what if I didn’t get an agent liking my tweets? That doesn’t mean my book is not good. It just means my pitching is not yet good enough.
Would I rely on PitMad to get my manuscript noticed? Definitely not. Because of the numbers taking part, even the greatest of pitches can slip by unnoticed. Which is why the hard road of querying is still a more reliable way to get your manuscript in front of the right agent. | https://medium.com/inspired-writer/pitching-your-manuscript-on-twitters-pitmad-event-a48c50ff4202 | ['Reija Sillanpaa'] | 2020-12-08 19:09:46.612000+00:00 | ['Writing', 'Pitmad', 'Writing Tips', 'Creativity', 'Writer'] |
How to Automate Excel Files Received on Email - Increase your Productivity | Step-1: Project Set-up
Ideally, it is good practice to set up the bare skeleton of the project first. As I have mentioned in my previous stories, I prefer to store all my projects in a folder named Work.

1. Input - This is where we will download our statements.
2. Output - This is where we will store our output file.
3. Main.kjb - The PDI job file, where we will configure and download files from email.
4. Process-Bank-1-File.ktr & Process-Bank-2-File.ktr - These are the transformation files where we will process the downloaded CSV files.
Project Folder Structure
If you have any difficulty understanding the above steps or Spoon (the desktop app), then I request you to go through the link below.
Step - 2: Set up of Application Password
Let's assume that we receive bank statements in our Gmail account. This can be any account, and we can read the emails as long as we have Open Authentication (OAuth) details for it.

We need to set up an application password to read Gmail via an application without two-factor authentication. We need to follow the guide provided by Google. Below is the screenshot guide. We need to click on 'Manage your Google Account' in your profile icon.
Click on Security Tab and then on App password
Select ‘Mail’ in Select app option and Select device Other (custom name)
You will receive a pop-up for an app password. Copy and store it somewhere safe
Please note, we cannot use our Gmail password to authenticate or access emails via PDI or any other application. We will have to create an App password for the same.
Step - 3: Let’s read Gmail and Download File
Once the application password is set-up, we can download the files from Gmail and perform analysis on the same. Below are the steps that we need to observe.
Drag Steps/Plugins: In PDI main.kjb job file, we need to search for ‘Start’, ‘Get mails (POP3/IMAP)’, ‘Transformation’ and ‘Success’ Steps.
Main job flow
Edit ‘Get mails’ step properties: We need to double click on ‘Get mails (POP3/IMAP) step and edit the properties. Below are the properties that we need to edit.
1. We need to use either POP3 or IMAP to read and fetch files. You can go through the following link to understand more about the same.
2. We need to rename the step to 'download_bank_statements'.
3. In Source Host, put imap.gmail.com.
4. We need to check the Use SSL checkbox.
5. We need to use Port 993; this is specific to IMAP and SSL. You can refer to the point #1 link to understand the various ports.
6. Fill in your Username and app Password (do not use your Gmail password; it will not work). Please note, [email protected] is a dummy address.
7. Set the Target directory to the Input path. Make sure Get mail attachment and Different Folder are checked.
8. In the Settings tab, we need to change Retrieve to the Get all messages option. This is because we might receive files with the same naming convention daily. Optionally, we can also perform activities like moving the email to some other directory like 'processed', deleting it, etc.
9. In the Filters tab, we need to add a filter using the Received date filter. Again, I have hard-coded the date here. However, we can easily use a system date variable there. I will write a separate blog explaining the variables in PDI. We can remove this filter as well; this will fetch all the emails.
General tab properties
Settings tab properties
Filters tab properties
Sample Inbox Screenshot with Emails
Now that we have configured the email download process, let's collate both the files.
Step - 4: Collate Information
There are various ways of collating information from multiple workbooks. We can create a generalized flow which caters to multiple types of structure, or create a simple case-specific transformation. We will be using the latter.
If you want to generalize multiple workbooks processing, then I will recommend you to go through the below guide.
The Process-Bank1-Files.ktr and Process-Bank2-Files.ktr transformations will be identical; the only difference will be the input file name.
Open Process-Bank1-Files.ktr and drag ‘Text file input’ and ‘Microsoft Excel Writer’.
1. In the Text file input step, we will change the name to 'input'.
2. In the File tab, browse for 'D:\Work\AutomateEmail\Input\Bank-1 - Transaction.csv', assuming the bank will observe the same naming conventions.
3. In the Content tab, change the Separator to ',' (comma) and Format to mixed.
4. In the Fields tab, click on the Get Fields button and change the Name column to Column1, Column2, Column3, and so on, as shown in the image below.
File tab properties
Fields tab properties
In the 'Microsoft Excel writer' plugin, in the File & Sheet tab, add the file path of the output file (D:\Work\AutomateEmail\Output\ConsolidatedStatement), change the extension to xlsx [Excel 2007 and above] in the Extension field, select 'Use existing file for writing' in the 'If output file exists' field, and select 'write to existing sheet' in the 'If sheet exists in output file' field.

In the 'Microsoft Excel writer' plugin, in the Content tab, select 'shift existing cells down' in 'When writing rows', check 'Start writing at end of the sheet (appending lines)', tick the 'Omit header' field, and click on the Get Fields button.
File & Sheet tab Properties
Content tab properties
Now, we need to duplicate the same transformation and only change the input file name to D:\Work\AutomateEmail\Input\Bank-2 - Transaction.csv
We need to create an empty excel (xlsx) file in the output folder and name it ConsolidatedStatement.xlsx with below mentioned headers. Here, the idea is to create a template and overwrite the same regularly.
That’s it, let’s check if it’s working.
Step - 5: Connect, Execute Job and Test
In the Main.kjb job file, we need to connect the two transformations and browse for each transformation file sequentially.

All our efforts would ideally get realized in this step, where we execute the job and see the output. Let's see if the pipeline created by us works as per our expectation.
Success
Here, we can perform our test cases defined earlier and analyze the file as per our requirements.
Conclusion
There are little nuances of the above data pipeline which can be tweaked and probably can be generalized to cater to multiple structures and files. Since I am cherry-picking the use case as per my convenience, this may or may not be the real-world situation. However, you can build using this as a platform. We can perform a lot of similar activities using PDI. We will take a case on how to send email notifications using PDI next.
Please feel free to ask questions in the comment section.
See you in the next post. Happy ETL | https://towardsdatascience.com/how-to-automate-excels-received-on-email-increase-your-productivity-3d2f50ddc958 | ['Shravankumar Suvarna'] | 2020-05-31 21:00:33.975000+00:00 | ['Productivity', 'Data Science', 'Data', 'Programming', 'Artificial Intelligence'] |
Exploratory Data Analysis (EDA) and Data Visualization with Python | Exploratory Data Analysis — EDA — plays a critical role in understanding the what, why, and how of a problem statement.
Originally published at kite.com.
Table of Contents
1. Introduction
2. Defining Exploratory Data Analysis
3. Overview
4. A detailed explanation of EDA
5. Quick peek at functions
6. Univariate and bivariate analysis
7. Missing value analysis
8. Outlier detection analysis
9. Percentile based outlier removal
10. The correlation matrix
11. Conclusions
Introduction
There is so much data in today's world. Modern businesses and academics alike collect vast amounts of data on myriad processes and phenomena. While much of the world's data is processed using Excel (or manually!), new data analysis and visualization programs allow for reaching even deeper understanding. The programming language Python, with its English commands and easy-to-follow syntax, offers an amazingly powerful (and free!) open-source alternative to traditional techniques and applications.
Data analytics allow businesses to understand their efficiency and performance, and ultimately helps the business make more informed decisions. For example, an e-commerce company might be interested in analyzing customer attributes in order to display targeted ads for improving sales. Data analysis can be applied to almost any aspect of a business if one understands the tools available to process information.
Defining Exploratory Data Analysis
Exploratory Data Analysis — EDA — plays a critical role in understanding the what, why, and how of the problem statement. It’s first in the order of operations that a data analyst will perform when handed a new data source and problem statement.
Here’s a direct definition: exploratory data analysis is an approach to analyzing data sets by summarizing their main characteristics with visualizations. The EDA process is a crucial step prior to building a model in order to unravel various insights that later become important in developing a robust algorithmic model.
Let’s try to break down this definition and understand different operations where EDA comes into play:
First and foremost, EDA provides a stage for breaking down problem statements into smaller experiments which can help understand the dataset
EDA provides relevant insights which help analysts make key business decisions
The EDA step provides a platform to run all thought experiments and ultimately guides us towards making a critical decision
Overview
This post introduces key components of Exploratory Data Analysis along with a few examples to get you started on analyzing your own data. We’ll cover a few relevant theoretical explanations, as well as use sample code as an example so ultimately, you can apply these techniques to your own data set.
The main objective of the introductory article is to cover how to:
Read and examine a dataset and classify variables by their type: quantitative vs. categorical
Handle categorical variables with numerically coded values
Perform univariate and bivariate analysis and derive meaningful insights about the dataset
Identify and treat missing values and remove dataset outliers
Build a correlation matrix to identify relevant variables
Above all, we’ll learn about the important APIs of the python packages that will help us perform various EDA techniques.
A detailed explanation of an EDA on sales data
In this section, we’ll look into some code and learn to interpret key insights from the different operations that we perform.
Before we get started, let’s install and import all the relevant python packages which we would use for performing our analysis. Our requirements include the pandas, numpy, seaborn, and matplotlib python packages.
Python’s package management system called Pip makes things easier when it comes to tasks like installing dependencies, maintaining and shipping Python projects. Fire up your terminal and run the command below:
python -m pip install --user numpy scipy matplotlib ipython pandas sympy nose statsmodels patsy seaborn

Note that you need to have Python and Pip already installed on your system for the above command to work. The packages whose names look alien to you are the internal dependencies of the main packages that we intend to use; for now, you can ignore them.
Having performed this step, we’re ready to install all our required Python dependencies. Next, we need to set up an environment where we can conduct our analysis — feel free to fire up your favorite text editing tool for Python and start with loading the following packages:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
from matplotlib import pyplot as plt
For reading data and performing EDA operations, we’ll primarily use the numpy and pandas Python packages, which offer simple API’s that allow us to plug our data sources and perform our desired operation. For the output, we’ll be using the Seaborn package which is a Python-based data visualization library built on Matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. Data visualization is an important part of analysis since it allows even non-programmers to be able to decipher trends and patterns.
Let’s get started by reading the dataset we’ll be working with and deciphering its variables. For this blog post, we’ll be analyzing a Kaggle data set on a company’s sales and inventory patterns. Kaggle is a great community of data scientists analyzing data together — it’s a great place to find data to practice the skills covered in this post.
The dataset contains a detailed set of products in an inventory and the main problem statement here is to determine the products that should continue to sell, and which products to remove from the inventory. The file contains the observations of both historical sales and active inventory data. The end solution here is to create a model that will predict which products to keep and which to remove from the inventory — we’ll perform EDA on this data to understand the data better. You can follow along with a companion Kaggle notepad here.
Quick peek at functions: an example
Let’s analyze the dataset and take a closer look at its content. The aim here is to find details like the number of columns and other metadata which will help us to gauge size and other properties such as the range of values in the columns of the dataset.
sales_data = pd.read_csv("../input/SalesKaggle3.csv")
sales_data.head()
The read_csv function loads the entire data file into the Python environment as a Pandas dataframe, and the default delimiter is ',' for a csv file.

The head() function returns the first 5 entries of the dataset, and if you want to increase the number of rows displayed, you can specify the desired number in the head() function as an argument, for example: sales_data.head(10). Similarly, we can see the bottom rows of the Pandas dataframe with the command sales_data.tail().
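For example, following the same pattern as above:

sales_data.head(10)   # first 10 rows instead of the default 5
sales_data.tail()     # last 5 rows of the dataframe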
Types of variables and descriptive statistics
Once we have loaded the dataset into the Python environment, our next step is understanding what these columns actually contain with respect to the range of values, learning which ones are categorical in nature, and so on.
To get a little more context about the data it’s necessary to understand what the columns mean with respect to the context of the business — this helps establish rules for the potential transformations that can be applied to the column values.
Here are the definitions for a few of the columns:
File_Type: The value “Active” means that the particular product needs investigation
The value “Active” means that the particular product needs investigation SoldFlag: The value 1 = sale, 0 = no sale in past six months
The value 1 = sale, 0 = no sale in past six months SKU_number: This is the unique identifier for each product.
This is the unique identifier for each product. Order: Just a sequential counter. Can be ignored.
Just a sequential counter. Can be ignored. SoldFlag: 1 = sold in past 6 mos. 0 = Not sold
1 = sold in past 6 mos. 0 = Not sold MarketingType: Two categories of how we market the product.
Two categories of how we market the product. New_Release_Flag: Any product that has had a future release (i.e., Release Number > 1)
sales_data.describe()
The describe function returns a pandas dataframe that provides descriptive statistics which summarize the central tendency, dispersion, and shape of a dataset's distribution, excluding NaN values. The three main numerical measures for the center of a distribution are the mode, mean (µ), and the median (M). The mode is the most frequently occurring value. The mean is the average value, while the median is the middle value.
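To inspect these measures for a single column directly, a small illustrative snippet (using one of the dataset's numeric columns) could look like this:

sales_data['ItemCount'].mean()     # average value
sales_data['ItemCount'].median()   # middle value
sales_data['ItemCount'].mode()     # most frequently occurring value(s)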
sales_data.describe(include='all')
When we call the describe function with include=’all’ argument it displays the descriptive statistics for all the columns, which includes the categorical columns as well.
Next, we address some of the fundamental questions:
The number of entries in the dataset:
print(sales_data.shape)
We have 198917 rows and 14 columns.
Total number of products & unique values of the columns:
print(sales_data.nunique())
nunique() returns the number of unique elements in each column.
Count of the historical and active states (we only need to analyze the active-state products):
print(sales_data[sales_data['File_Type'] == 'Historical']['SKU_number'].count())
print(sales_data[sales_data['File_Type'] == 'Active']['SKU_number'].count())
We use the count function to find the number of active and historical cases: we have 122921 active cases which need to be analyzed. We then split the dataset into two parts based on the flag type. To do this, we pass the required condition in square brackets to the sales_data object, which examines all the entries against the condition mentioned and creates a new object with only the required values.
sales_data_hist = sales_data[sales_data['File_Type'] == 'Historical']
sales_data_act = sales_data[sales_data['File_Type'] == 'Active']
To summarize all the operations so far:
The dataset contains 198,917 rows and 14 columns with 12 numerical and 2 categorical columns. There are 122,921 actively sold products in the dataset, which is where we’ll focus our analysis.
Univariate and bivariate analysis
The data associated with each attribute includes a long list of values (both numeric and not), and having these values as a long series is not particularly useful yet — they don’t provide any standalone insight. In order to convert the raw data into information we can actually use, we need to summarize and then examine the variable’s distribution.
Univariate distribution plots are graphs where we plot histograms along with the estimated probability density function over the data. It's one of the simplest techniques: we consider a single variable and observe its spread and statistical properties. The univariate analysis for numerical and categorical attributes is different.
For categorical columns, we use the value_counts() and plot.bar() functions to draw a bar plot, which is commonly used for representing categorical data with rectangular bars sized by the value counts of the categorical values. In this case, we have two marketing types, S and D. The bar plot shows comparisons among these discrete categories, with the x-axis showing the specific categories and the y-axis the measured value.
sales_data['MarketingType'].value_counts().plot.bar(title="Freq dist of Marketing Type")
Similarly, by changing the column name in the code above, we can analyze every categorical column.
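Rather than editing the column name by hand each time, the same plot can be produced in a loop. A small sketch, where the list of categorical columns is an assumption based on the column definitions above and plt is matplotlib.pyplot, as used elsewhere in this post:
categorical_cols = ['MarketingType', 'File_Type']  # assumed categorical columns of interest

for col in categorical_cols:
    sales_data[col].value_counts().plot.bar(title='Freq dist of ' + col)
    plt.show()  # render one bar plot per categorical column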
Below is the code to plot the univariate distribution of the numerical columns, which contains the histograms and the estimated PDF. We use distplot from the seaborn library to plot these graphs:
col_names = ['StrengthFactor','PriceReg', 'ReleaseYear', 'ItemCount', 'LowUserPrice', 'LowNetPrice']

fig, ax = plt.subplots(len(col_names), figsize=(16,12))

for i, col_val in enumerate(col_names):
    sns.distplot(sales_data_hist[col_val], hist=True, ax=ax[i])
    ax[i].set_title('Freq dist '+col_val, fontsize=10)
    ax[i].set_xlabel(col_val, fontsize=8)
    ax[i].set_ylabel('Count', fontsize=8)

plt.show()
We can see that, apart from the ReleaseYear column, every other column has most of its values concentrated at the lower end of its range with a long tail toward higher values, while the ReleaseYear attribute shows the opposite pattern.
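That visual impression can be checked numerically with Pandas' built-in skew() method. A quick sketch over the same columns; positive values indicate a long right tail, negative values a long left tail:
num_cols = ['StrengthFactor', 'PriceReg', 'ReleaseYear', 'ItemCount', 'LowUserPrice', 'LowNetPrice']

print(sales_data_hist[num_cols].skew())  # skewness coefficient for each column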
The bivariate distribution plots help us to study the relationship between two variables by analyzing the scatter plot, and we use the pairplot() function of the seaborn package to plot the bivariate distributions:
sales_data_hist = sales_data_hist.drop([
'Order', 'File_Type','SKU_number','SoldFlag','MarketingType','ReleaseNumber','New_Release_Flag'
], axis=1)
sns.pairplot(sales_data_hist)
We often look for scatter plots that follow a clear linear pattern with either an increasing or decreasing slope so that we can draw conclusions, but we don't see such patterns in this particular dataset. That said, there's always room to derive other useful insights by comparing the nature of the plots between the variables of interest.
Missing value analysis
Missing values in the dataset refer to fields which are empty or have no value assigned to them. They usually occur due to data entry errors or faults in the data collection process, and they often appear when joining columns from different tables. There are numerous ways to treat missing values: the easiest are to replace the missing value with the mean, median, mode, or a constant value (chosen based on domain knowledge); another alternative is to remove the entry from the dataset itself.
Our dataset doesn't have missing values, so we won't perform any of these operations on it. That said, here are a few sample code snippets that will help you perform missing value treatment in Python.
To check if there are any null values in the dataset
data_frame.isnull().values.any()
If the above snippet returns True, there are null values in the dataset; False means there are none.
data_frame.isnull().sum()
The above snippet returns the total number of missing values across different columns
To replace the missing values, we use the fillna function of Pandas to replace NA values with a value of our choice; passing inplace=True makes the change permanent in that dataframe.
data_frame['col_name'].fillna(0, inplace=True)
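For completeness, here are sketches of the other treatments mentioned above: filling with the mean or the median, and dropping incomplete rows. As in the snippet above, data_frame and col_name are placeholders rather than columns from this dataset.
# replace missing values with the column mean
data_frame['col_name'].fillna(data_frame['col_name'].mean(), inplace=True)

# or with the median, which is more robust to outliers
data_frame['col_name'].fillna(data_frame['col_name'].median(), inplace=True)

# or drop every row that still contains a missing value
data_frame = data_frame.dropna()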
Outlier detection analysis
An outlier might indicate a mistake in the data (a typo, a measuring error, seasonal effects, etc.), in which case it should be corrected or removed before calculating summary statistics or deriving insights from the data; failing to do so will lead to incorrect analysis.
Below is the code to plot the box plot of all the column names mentioned in the list col_names . The box plot allows us to visually analyze the outliers in the dataset.
The key terminology to note here are as follows:
The range of the data provides us with a measure of spread and is equal to the difference between the largest data point (max) and the smallest one (min).
The interquartile range (IQR), which is the range covered by the middle 50% of the data.
IQR = Q3 - Q1, the difference between the third and first quartiles. The first quartile (Q1) is the value such that one quarter (25%) of the data points fall below it, or the median of the bottom half of the data. The third quartile (Q3) is the value such that three quarters (75%) of the data points fall below it, or the median of the top half of the data.
The IQR can be used to detect outliers using the 1.5(IQR) criterion. Outliers are observations that fall below Q1 - 1.5(IQR) or above Q3 + 1.5(IQR).
col_names = ['StrengthFactor','PriceReg', 'ReleaseYear', 'ItemCount', 'LowUserPrice', 'LowNetPrice']

fig, ax = plt.subplots(len(col_names), figsize=(8,40))

for i, col_val in enumerate(col_names):
    sns.boxplot(y=sales_data_hist[col_val], ax=ax[i])
    ax[i].set_title('Box plot - {}'.format(col_val), fontsize=10)
    ax[i].set_xlabel(col_val, fontsize=8)

plt.show()
Based on the above definition of how we identify outliers, the black dots are outliers in the StrengthFactor attribute, and the red-colored box is the IQR range.
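The same 1.5(IQR) rule can also be applied numerically, without a plot. Here is a short sketch for a single column, using StrengthFactor purely as an example:
q1 = sales_data_hist['StrengthFactor'].quantile(0.25)
q3 = sales_data_hist['StrengthFactor'].quantile(0.75)
iqr = q3 - q1

lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = sales_data_hist[(sales_data_hist['StrengthFactor'] < lower) |
                           (sales_data_hist['StrengthFactor'] > upper)]

print(len(outliers))  # number of rows flagged by the IQR criterion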
Percentile based outlier removal
The next step is to consider ways to remove these outliers. One of the most popular techniques is percentile-based outlier removal, where we filter out outliers based on fixed percentile values. Other techniques in this category include removal based on z-scores, constant values, etc.
def percentile_based_outlier(data, threshold=95):
    diff = (100 - threshold) / 2
    minval, maxval = np.percentile(data, [diff, 100 - diff])
    return (data < minval) | (data > maxval)

col_names = ['StrengthFactor','PriceReg', 'ReleaseYear', 'ItemCount', 'LowUserPrice', 'LowNetPrice']

fig, ax = plt.subplots(len(col_names), figsize=(8,40))

for i, col_val in enumerate(col_names):
    x = sales_data_hist[col_val][:1000]
    sns.distplot(x, ax=ax[i], rug=True, hist=False)
    outliers = x[percentile_based_outlier(x)]
    ax[i].plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)
    ax[i].set_title('Outlier detection - {}'.format(col_val), fontsize=10)
    ax[i].set_xlabel(col_val, fontsize=8)

plt.show()
The values marked with a dot along the x-axis of the graph are the ones removed from the column based on the chosen threshold percentile (95 in our case, which is also the default value in the function above).
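If you decide to actually drop the flagged values rather than just visualize them, the same helper can be reused as a boolean mask. A sketch for one column, with LowNetPrice chosen only as an example:
col = 'LowNetPrice'
mask = percentile_based_outlier(sales_data_hist[col], threshold=95)

sales_data_trimmed = sales_data_hist[~mask]  # keep only the rows inside the percentile band
print(len(sales_data_hist), '->', len(sales_data_trimmed))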
The correlation matrix
A correlation matrix is a table showing the correlation coefficient (a statistical measure of how strong the relationship between two variables is) for each pair of variables. Each attribute of the dataset is compared with the other attributes to find the correlation coefficient. This analysis lets you see which pairs have the highest correlation; highly correlated pairs represent the same variance in the dataset, so we can analyze them further to understand which attribute of each pair is most significant for building the model.
f, ax = plt.subplots(figsize=(10, 8))
corr = sales_data_hist.corr()
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values)
Above you can see the correlation heatmap of all the variables selected. Correlation values lie between -1 and +1: strongly positively correlated variables have values close to +1, strongly negatively correlated variables have values close to -1, and uncorrelated variables have values close to 0.
In this dataset, we don't see any strongly correlated attributes. The diagonal elements of the matrix are always 1, since there we are computing the correlation of a column with itself. The inference here is that all the numerical attributes are important and need to be considered for building the model.
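To read the matrix numerically instead of by color, the strongest pairs can be listed directly. A short sketch over the same corr dataframe computed above:
corr_pairs = corr.unstack()               # flatten the matrix into (column_a, column_b) -> value
corr_pairs = corr_pairs[corr_pairs < 1.0]  # drop the diagonal of perfect self-correlations

# the ten pairs with the largest absolute correlation (each pair appears twice, once per ordering)
print(corr_pairs.abs().sort_values(ascending=False).head(10))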
Conclusions
Ultimately, there’s no limit to the number of experiments one can perform in the EDA process — it completely depends on what you’re analyzing, as well as the knowledge of packages such as Pandas and matplotlib our job becomes easier.
The code from our example is also available here. The code is pretty straightforward, and you can clone the kernel and apply it to a dataset of your choice. If you're interested in expanding your EDA toolkit even further, you may want to look into more advanced techniques such as advanced missing-value treatments that use regression-based techniques, or even consider exploring multivariate factor and cluster analysis.
These techniques are usually used when there are many attributes to analyze, and many of them represent the same information, often containing hundreds of variables — depending on the domain. Usually for model building, we consider 30–40 odd variables, in which case performing more advanced techniques is necessary to come up with factor variables that better represent the variance in the dataset.
Once you practice the example in this post, go forth and analyze your own data! Pretty much any process that generates data would benefit from the analysis techniques we used here, so there are a lot of opportunities to put your new skills to work. I’d love to help if needed and hear about your experiences!
Vigneshwer is a data scientist at Epsilon, where he crunches real-time data and builds state-of-the-art AI algorithms for complex business problems. He believes that technology needs to have a human-centric design to cater solutions to a diverse audience. He’s an official Mozilla TechSpeaker and is also the author of Rust Cookbook.
This post is a part of Kite’s new series on Python. You can check out the code from this and other posts on our GitHub repository. | https://medium.com/kitepython/data-analysis-visualization-python-c3a76597722f | ['Vigneshwer Dhinakaran'] | 2019-01-30 21:40:00.185000+00:00 | ['Data Visualization', 'Exploratory Data Analysis', 'Data Analysis', 'Data Science', 'Python'] |
Roundtable With US Regional Environment Officer, Jonathan Kelsey on Environmental Awareness. | Moses Eboigbe was the first to speak. He highlighted the work ICCDI does and what has been achieved this year, covering training and sensitization of secondary school students of Minas Academy on proper waste management and climate change. He also mentioned the SDGs for Universities intervention currently driven by ICCDI across tertiary institutions in Nigeria. Again Ayibatonye further highlighted other initiatives of ICCDI such as the hashtag Climate Wednesday tweet chat, stating how ICCDI drive environmental advocacy on social media and thereby making it part of the social discussion.
Also speaking, Temitope Okunnu highlighted her organization’s activities in advocacy and upcycling of waste, how they reach out to schools and train kids on how to make products from waste. Although this is done at a small scale now, she said they hope to expand their reach as soon as they have the capacity to do so. Olawale Adebiyi of WeCyclers listed his organization’s achievement in recycling plastic waste and pet bottles in Lagos state. Bilikiss Adebiyi-Abiola, General Manager of Lagos State Parks and Gardens Agency (LASPARK) also highlighted their achievements in developing new parks and gardens in Lagos state and implementing waste segregation activities in such parks in partnership with recycling organizations such as WeCyclers. Dr. Dedeke Gabriel also spoke about Wildlife for Africa Conservation Initiative (WACI), how they sensitize and advocate for wildlife conservation in Nigeria using filmography i.e. production of documentaries on endangered species and the likes, and how they need partnership to increase capacity and the reach of their message.
Discussions then shifted to the economics and business of waste management. Isaiah Tuolienuo, the US Regional Environment Specialist for West and Central Africa asked, ‘what is the legal framework looking like for waste management in Nigeria? if there is a policy or specific law for waste management?’. Responding, Olawale Adebiyi mentioned the fact that there is currently a draft policy document on waste management that is not operational yet, but highlighted the fact that the draft document is nothing to write home about as it doesn’t address the pressing issues in the waste management industry now, insisting that the right consultations must be made and new inputs from industry experts added before the final document is published. Further probing of Olawale Adebiyi by Jonathan Kelsey about the presence of a waste exchange platform for collectors, recyclers and up-cyclers revealed that while there is no such platform now, there is an association of waste recyclers in Lagos state, although it hasn’t been formalized by law yet. | https://medium.com/climatewed/roundtable-with-us-regional-environment-officer-jonathan-kelsey-on-environmental-awareness-1c9620942fd6 | ['Iccdi Africa'] | 2019-07-22 23:18:57.036000+00:00 | ['Climate Change', 'Environment', 'Health', 'Lagos', 'Knowledge'] |
Visualization and Miracles | As it goes, we were doing an internal conference, and one of the speakers had no time to do a visual wrap-up for his presentation. So, he went with the slides that only had the portraits of the authors to whose work he was referring. The presentation was about the basics of lean, and the authors were Deming and Taylor. While everyone seemed to get bored with the listening, as there was no nice visual stuff such as the one we’d all become used to, I suddenly caught myself visualizing the talk of the boss and the worker in my head (it was in the presentation: the boss was intent on having his workers transport 47 tons of cast iron in one shift instead of 12 tons, or something of that nature).
This got me thinking that sometimes visualizations are doing a lip service to us. As we sit back comfortably, watching videos or live presentations with cute data and/or concept visuals, we are actually stifling our ability to create visualizations by ourselves! The picture is served to us, like a plate dish at a restaurant, so we make no use of the imagination “muscles”, and they get weak.
I’m not saying that everyone who watches animated drawings such as these ones is doomed to a life of zero creativity. My point is: we want to be watchful about keeping this balance — yes, again, it’s about a balance — between perceiving someone else’s visualizations — as well as visions — and creating our own. The power of our inner creator should not be shy about itself. As we watch others speak, and write, and sketch their stories , we might get all humble and subdued about what we are capable of bringing to the table. No influencer, no matter how many times “liked” they are, will draw a vision of our business, or of our product, or of some function of the app/software we’re working on. There’s always a time to look at someone’s visions/visualizations, and time to create our own. Each and every “how-to” about creativity includes this magic word called “vision”. Have you ever thought why any business starts with a vision? It’s exactly for this reason.
There’s a paradox: on one hand, the visual media is everywhere. Lots of visual channels deliver the dish to any, even to the laziest of the recipients. The problem is that the legions of those watching produces too few creators.
We’ve got 1 more month left in the year 2019. And one more week left before Thanksgiving. It’s this time of the year that we collectively allocate to miracles. So, I suggest that we devote this window to our creativity and to cleaning the debris of anything we don’t want or need any more, as we visualize:
Image by Gerd Altmann from Pixabay
Related:
Visualization: Understated or Overrated?
My Favourite Ways to Visualize Ideas
The 18’s and the Thanks
Better The Devil That You Don’t Know
A Christmas Tale of a Software Developer and Santa | https://medium.com/quandoo/visualization-and-miracles-261fd8642cc5 | ['Olga Kouzina'] | 2019-11-21 11:30:08.706000+00:00 | ['Learning', 'Software Development', 'Creativity', 'Insights', 'Visualization'] |
Choosing the best AutoML Framework | Choosing the best AutoML Framework
A head to head comparison of four automatic machine learning frameworks on 87 datasets.
Adithya Balaji and Alexander Allen
Introduction
Automatic Machine Learning (AutoML) could bring AI within reach for a much larger audience. It provides a set of tools to help data science teams with varying levels of experience expedite the data science process. That’s why AutoML is being heralded as the solution to democratize AI. Even with an experienced team, you might be able to use AutoML to get the most out of limited resources. While there are proprietary solutions that provide machine learning as a service, it’s worth looking at the current open source solutions that address this need.
In our previous piece, we explored the AutoML landscape and highlighted some packages that might work for data science teams. In this piece we will explore the four “full pipeline” solutions mentioned: auto_ml, auto-sklearn, TPOT, and H2O’s AutoML solution.
Each package’s strengths and weaknesses are detailed in our full paper, “Benchmarking Automatic Machine Learning Frameworks”. The paper also contains additional information about the methodology and some additional results.
Methodology
In order to provide an accurate and fair assessment, a selection of 87 open datasets, 30 regression and 57 classification, were chosen from OpenML, an online repository of standard machine learning datasets exposed through a REST API in a consistent manner. The split of datasets provides a broad sample of tabular datasets that may be found in a business machine learning problem. A lot of consideration was given to the choice of datasets to prevent contamination of the validation sets. For example, auto-sklearn uses a warm start that is already trained on a set of OpenML datasets. Datasets such as these were avoided.
Each of the four frameworks, auto_ml, auto-sklearn, TPOT, and H2O were tested with their suggested parameters, across 10 random seeds per dataset. F1 score (weighted) and mean squared error were selected as evaluation criteria for classification and regression problems, respectively.
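For reference, both metrics are one-liners in scikit-learn. The toy arrays below are purely illustrative and are not taken from the benchmark itself; they only show the calls involved:
from sklearn.metrics import f1_score, mean_squared_error

# classification: weighted F1 averages per-class F1 scores, weighted by class support
y_true_cls = [0, 1, 1, 2, 2, 2]
y_pred_cls = [0, 1, 2, 2, 2, 1]
print(f1_score(y_true_cls, y_pred_cls, average='weighted'))

# regression: mean squared error (lower is better)
y_true_reg = [3.0, 5.5, 2.1]
y_pred_reg = [2.8, 5.0, 2.5]
print(mean_squared_error(y_true_reg, y_pred_reg))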
A constraint of 3 hours was used to limit each AutoML method to a timespan that reflects an initial exploratory search performed by many data science teams. This results in an estimated compute time of 10,440 hours. As a result, we decided to evaluate the models using AWS's Batch service to handle the parallelization of this task, using C4 compute-optimized EC2 instances and allocating 2 vCPUs and 4 GB of memory per run.
We used a best-effort approach to ensure all tests completed and that all tests had at least 3 chances to succeed within the 3 hour limit. In some cases, AWS Batch’s compute environments and docker-based resource management resulted in unpredictable behavior. To overcome this, we developed a custom “bare-metal” approach to replicate AWS Batch on EC2 instances with more fine-grained control over per process memory management. Specifically, the docker memory manager was sending a kill signal to the benchmarking process if the amount of memory used by the process exceeded the amount allocated by Batch. This hard limit cannot be changed without greatly increasing instance size per run. Using the same computational constraints, we tested the runs that failed under these very specific conditions on our custom “bare-metal” implementation.
Also during the process of running these tests, we fixed a few bugs in the open source frameworks which are described in our full paper. After these fixes, none of the datasets outright failed. These failures were usually obscured from daily use but showed up during the scale of testing that was performed.
Results
Figure 1 describes the diversity of our chosen datasets. You can see that classification is typically binary and the regression row count is relatively uniform while the classification row count is skewed towards datasets around 1000 rows. The feature count for both regression and classification centers around 10 features with classification skewed slightly towards 100. We believe that this data group is a representative sample of general data science problems that many data scientists would encounter.
Figure 1: Raw dataset characteristics split between classification and regression problems
Some frameworks ran out of time on specific seeds and datasets. A total of 29 run combinations (dataset and seed) were dropped. These run combinations were dropped across all frameworks in order to maintain the comparability of individual frameworks. This process resulted in a total of 116 data points (29 * 4) being dropped, which is about ~3% overall (116 / 3480 runs).
Figure 2: Framework head to head mean performance across classification datasets
Figure 3: Framework head to head mean performance across regression datasets
Each framework was evaluated on both regression and classification datasets mentioned above. Their performance was calculated by aggregating the weighted F1 score and MSE scores across datasets by framework. Each metric was standardized on a per dataset basis across frameworks and scaled from 0 to 1. In the case of MSE, these values were inverted meaning higher values represent better results, so that the graphs would remain consistent between classification and regression visualizations. The mean across the 10 evaluated seeds represents a framework’s performance on a specific dataset. In figures 2 and 3, darker shades indicate greater performance differences.
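As a sketch of that standardization step (with made-up numbers, not the actual benchmark results), the per-dataset scaling and MSE inversion look roughly like this:
import pandas as pd

# hypothetical raw MSE scores for one regression dataset: one value per framework
scores = pd.Series({'auto_ml': 12.4, 'auto-sklearn': 9.8, 'TPOT': 8.1, 'H2O': 10.9})

scaled = (scores - scores.min()) / (scores.max() - scores.min())  # scale to [0, 1] within the dataset
scaled = 1 - scaled  # invert MSE so that higher always means better, matching the F1 plots

print(scaled.sort_values(ascending=False))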
Figure 4: Framework performance across all classification datasets
Figure 5: Framework performance across all regression datasets
We used boxplots to demonstrate framework performance here in figures 4 and 5. The notches in the box plots represent the confidence interval of the medians. The means and standard deviations in table 1 show the precise differences.
Table 1: Precise per framework results
Conclusion and Interpretation
Overall, each visualization and interpretation presents the same picture. Auto-sklearn performs the best on the classification datasets and TPOT performs the best on the regression datasets. It's important to note that the quantitative results from this experiment have extremely high variances, so it is likely more important to consider the state of the code base, continuing development, feature set, and goals of these individual frameworks rather than the standalone performance. We recommend both TPOT and auto-sklearn due to these factors and due to our interactions with each of their communities through the time we worked on this analysis.
Each of the packages (Auto-sklearn, TPOT, H2O, Auto_ml), the full paper, and the implementation of the benchmarking are linked here. | https://medium.com/georgian-impact-blog/choosing-the-best-automl-framework-4f2a90cb1826 | [] | 2018-09-11 14:12:48.417000+00:00 | ['Machine Learning', 'AI', 'Artificial Intelligence', 'Software Development', 'Automl'] |
Listening and co-creation: five insights that will strengthen your engaged journalism | In the past year, our Engaged Journalism Accelerator programme has supported news organisations across Europe to kickstart and evolve promising models of engaged journalism — journalism that puts community engagement at its core, empowering communities and their conversations.
Last week in Berlin, we gathered 140 practitioners from more than 20 countries for Engagement Explained Live, a one-day event aimed at hearing about their experiences with community-driven journalism, building connections, celebrating their successes and learning from their mistakes.
Here is what we learned:
1. Plan your resources — engaged journalism is not a small feat
Don’t underestimate the effort it takes to make engaged journalism an integral part of your work. Facilitating conversations with your community is about planning, logistics and ongoing dialogue as much as it is about the editorial work.
Oliver Fuchs, deputy editor-in-chief for Swiss organisation Republik, shared this: “We underestimated how many resources are needed. We thought we could sit down once a week and chat a bit. But it is a lot of back and forth, a lot of explaining and much asking. Don’t underestimate how much time it takes.”
Catalina Albeanu, digital editor at DoR (Decât o Revistă), and her colleagues travelled across Romania to host pop-up newsrooms, collaborative events for journalists and readers to come up with story ideas together. During the process, they found that they needed more support than they had anticipated — mostly for non-editorial tasks such as driving or photographing.
Examples and resources:
2. You will need to work hard to earn (and keep) people’s trust
If you are reaching out to a community to involve them in your reporting, it’s very possible you will be met with scepticism, regardless of how good your intentions are. Like in any relationship, you need to listen to people’s concerns, get to know them and allow them to find out more about your work, before expecting them to trust you.
In his workshop on deep listening, Cole Goins, engagement lead for Journalism+Design at The New School, explained that not all communities want to be listened to by journalists. That does not mean that you cannot cooperate with them — but you need to earn their trust first. Local partnerships are essential, as well as having people in your staff that are representative of the communities you are looking to work with.
“At the moment, many newsrooms aren’t going the extra mile to reach people who are totally disengaged and to understand their stories,” said Paul Myles, head of editorial, On Our Radar. To increase trust and to avoid “parachuting” into communities, On Our Radar is working with local reporter networks based in the areas they are covering, and people in those networks don’t necessarily have a journalism background. On Our Radar is aiming to train people who are often unable to share stories on their own terms. To do so, they have developed a framework that enhances the community’s connectivity, capacity, confidence and conviction.
When you report with — and for — a community, you need to remember that you are accountable to them, said City Bureau’s co-founder Andrea Hart. Journalism needs to be treated like a human relationship, where trust builds from a demonstration of accountability and transparency. Ask yourself whether you are willing to give as much as you are asking from your community, and whether your goal is to cultivate a relationship or whether you are asking for one-way transactions.
Examples and resources:
3. Find a common language and way of working with your community
Inclusive, engaged journalism often requires tapping into communities that may not be as media literate, or who do not have a solid understanding of journalistic mechanisms.
“There is a huge wealth of people that are passionate about socially responsible journalism, but who don’t have the language to talk about this,” said Anna Merryfield, community media director at Social Spider, and UK representative of the Accelerator Ambassador Network.
“Not everyone shares a journalist’s love for the written word,” added Amanda Eleftheriades-Sherry, co-founder of Clydesider. To lower the barrier for community members to participate in the editorial process, Clydesider is organising creative workshops where participants contribute through arts and crafts, music, discussions and exercises.
Ties Gijzel, co-founder, Are We Europe, presented a way to set up a collaboration process in a short amount of time, enabling everyone to have a say and contribute expertise to the process and the outcome of the story. Are We Europe uses an adapted version of the Google Design Sprint to create multimedia stories and reporting with local experts, in only five days. Using this process, they recently produced The Drums of Democracy in Moldova (shortlisted for the 2019 European Press Prize) and a reportage on polarisation in Greece.
Examples and resources:
4. Make good use of your comments section
While they are not new, comments sections have proven to be a useful — yet often underutilised — tool for engaging in discussion with users.
At De Correspondent, the comments section is a central point of the reporters’ work. “We don’t call it a comments section, and we want to make sure that you contribute to our stories,” explained Gwen Martel, conversation editor at De Correspondent. The section is only visible to paying members, and contributions are accompanied by a person’s full name and verified expertise title. Instead of just hoping constructive conversations would take place, Martel actively reaches out to members to bring them into the discussion in the areas they have expertise in.
People want to help. Many people like to talk about their work, people want to contribute to good journalism. — Gwen Martel
Oliver Fuchs from Republik agreed: ‘‘[Asking members] ‘what do you think?’ does not cut it. You need to ask good, constructive questions.”
German organisation Krautreporter has created a whole strategy around its comments section, said editor-in-chief Rico Grimm. Every reporter has to be active in the comments section and respond to a comment within 30–60 minutes to show users that they are present. “When you publish you have to think about how to continue the conversation. You need to know what you ask for.”
Instead of asking for opinions, Grimm advises asking people for knowledge, experiences or reasonings. For example, for a recent story, Krautreporter asked its members why they eat meat even though they know animals are suffering. Almost 200 people answered and their replies were very nuanced. The resulting story was one of the most discussed articles on the platform, attracting a number of new members.
Examples and resources:
5. Understand your community’s expertise and use it smartly
To successfully co-create with your community you have to understand what they know and where their contributions can add value.
But how can you find out what knowledge your users have? Spanish fact-checking platform Maldita.es created a simple and appealing survey that asked members to share their “superpowers” with them. Membership coordinator Beatriz Lara explained the idea behind it: “We have many malditos [members] speaking different languages — scientists, computer experts, doctors, and they collaborate with us. We journalists cannot know everything, and by doing it together, our work is way more professional.”
The result of simply asking was striking: Maldita.es received answers from 2,500 existing members and 500 new users who became members of the Maldita.es community.
Krautreporter has implemented qualitative reader surveys into its subscription model. When their users register, they are asked five questions about their education, their field of expertise and their contacts. Answers feed into a database that reporters can use to find sources and support for stories.
Examples and resources: | https://medium.com/we-are-the-european-journalism-centre/listening-and-co-creation-5-insights-that-will-strengthen-your-engaged-journalism-889c33e7986 | ['Stella Volkenand'] | 2019-07-18 10:11:14.746000+00:00 | ['Media', 'Insights', 'Community Engagement', 'Journalism', 'Community'] |
A Conversational Interface with Algolia’s Nicolas Dessaigne | “There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.”
Mark Weiser
In 2012, Nicolas Dessaigne founded Algolia, a company that provides ‘Search as a Service’. Algolia works with companies that build apps. It provides those companies with the building blocks to make their apps easily searchable. Algolia is also a star player in the emergence of the SaaS based API business — along with scale-ups like Twilio (telco infrastructure), Stripe (payments), Clearbit (Business Intelligence) and Contentful (Content Management Systems). Amongst the giants, Salesforce built the traditional business to business SaaS model, but it has also evolved its own hugely successful SaaS API play. But how is Algolia the wunderkind for search in the SaaS API world? Why don’t companies just use Google’s technology to make their products searchable?
Algolia v Google
Google has turned us into impatient humans.
Google Cloud Search
Nicolas told me in our 14 Minutes of SaaS interview that Google has made life hard for developers building in-house search functions. They have raised the bar so high, companies struggle to get near it. And we’re all used to using Google, so we all know how high that bar is positioned. However Google can’t be all things to all companies.
Enter Algolia. It addresses a widening hole in the market created by the explosion of software product development and the implosion of user patience. It can’t match Google for generic search, but Algolia is a much better fit for many companies because of its ability to be flexible and responsive to the particular needs of an individual client. The company gets much closer to its clients and can help them build creative solutions that involve highly accurate, responsive and extremely fast search solutions.
Building context-specific search solutions results in a more authentic search experience. For example Stripe uses Algolia for its support function. Developers can more efficiently search for the tools required to integrate Stripe’s payments technology into their sites and apps. Periscope uses Algolia in a completely different way - allowing real-time visualisation of livestream videos on a global map.
The more specific the use case, the bigger the delta between Algolia and Google in terms of quality of delivery. That’s powerful because use cases are getting more specific over time. Google is continuously building Algolia’s market. Moreover Algolia is not just about helping users to search — they want to guide the user, making intelligent suggestions with immediacy as the user types in the query.
Nicolas feels that as Algolia progresses, it becomes better at predicting the future. “Every 6 months it’s a new company!”
The Emergence and Future of the Conversational Interface:
What is a conversational interface anyway? It’s an enabler of verbal and written interactions between artificially intelligent software and a human. They are sometimes text based chatbots, and sometimes they are voice recognition interfaces empowered with natural language processing capabilities. The virtual assistant Amazon Echo is probably the most well-known voice-based conversational interface.
Advances in natural language processing and the fact that we all have smartphones with mics connected to the web are the reasons why conversational interfaces are suddenly surging as an area of interest. These interfaces (particularly voice driven ones) will change our lives and Algolia is well placed to play a role in the future of this technological evolution.
This new wave of interfaces is likely to enhance the human experience very positively overall. I don’t think we realise how tired, antisocial and distracted screens are making us and how inefficient GUIs really are. This will all change over the next decade. And by then we’ll be fully cognisant of the dark side of our bright screens. We evolved to be social animals and we converse primarily using our voices. The more you mess with that, the more likely we are to create psychological, social, cognitive and even physical problems. The next wave of innovation will start to undo some of that damage for the next generation.
Nicolas says that conversational interfaces will be huge in 4 or 5 years. Algolia is currently working on understanding how they can integrate their technology to augment the power of virtual assistants like Siri and Alexa. He believes Amazon, Apple and Google will educate the market to have a very high expectation for conversational interfaces. I’m delighted to see sophisticated companies like Algolia focused on this space because I feel that over-exposure to screens is a ticking time-bomb — especially for our children.
“Tomorrow’s search is going to be invisible. People won’t realise they are using a search technology anymore. Everything is about the context.”
Nicolas Desaigne, Co-founder and CEO, Algolia
Apple, Facebook and Slack are embracing conversational interfaces.
Big organisations with vast data and more than a passing interest in artificial intelligence are diving into conversational interface design and technology. Any UI that pretends to 'chat' with a human is a conversational interface. It needs to be able to 'understand' and respond somewhat convincingly to the natural language of a human. Conversational interfaces will irreversibly weave themselves into the fabric of everyday life.
“If our goal is to understand intelligent behaviour, we had better understand the difference between making it and faking it.”
Hector J. Levesque
When we talk about conversational interfaces, we’re talking about faking it in terms of replicating human conversation, but making it in terms of engaging in a valuable way. That value can be delivering your goal e.g. solving a problem directly. Alternatively it can be assisting you to progress towards your goal e.g. transferring you to a human capable of answering your question. These are some of the reasons why text based chatbots and, much more profoundly, voice interfaces will change our world:
· They are platforms agnostic and do not always need screens.
· They integrate with the guardians of vast data e.g. Facebook, Google, Amazon and Twitter.
· They lower barriers between humans and computers. Anatomically modern humans, an advanced form of homo sapiens, evolved around 200,000 years ago — give or take 20,000 years either side. We started to speak human languages about 100,000 years ago, give or take a lot of long arguments. We started to speak GUI only 45 years ago in the brief age of Xerox Parc, but GUIs only reached the masses in the 80s. It comes as no surprise that we still communicate infinitely more efficiently in human languages.
· The omnipresence of screens is a big problem. We spend more time awake watching a screen than we do sleeping. And we spend more time sleeping than awake without watching a screen. We have a big problem talking about this, because nobody in the tech industry wants to say just how destructive screens can be for a child’s development. We just didn’t evolve to spend huge amounts of time reading, writing and interacting with GUIs. Behaving in such a way from an early age represents a huge danger to the cognitive and emotional development of children. I wrote about this in 2014 in a popular 4-minute article titled Evolving Away from Boxes of Tricks and Screens. The rise of the voice interface cannot happen soon enough in my book.
Stephen Cummins, Evolving Away from Boxes of Tricks and Screens
“Pretty much any website or app can be turned into a bot.”
Larry Kim
Why is Algolia so interested in voice interfaces and why might it be so well placed to become a major player?
According to a BI Intelligence survey of 950 users of Voice assistants, the top 2 uses were to ‘Ask Questions’ (75%) and to ‘Search’ (58%). And, after all, a search query and return is simply a Q & A. Hence it’s no surprise that Google are the #2 company in this space. Only a currently dominant Amazon sits above them for now.
If voice interfaces are most useful for asking questions and searching, then Algolia can play a big part. Nicolas talks all the time about wanting to be more than a search engine API — he talks about guiding the user. When one thinks of guidance, one thinks of holding the users’ hand a little more, one thinks of a friendlier Q & A, one thinks of conversations.
The conversational interface is evolving from chatbot to voice assistant to something I call a ‘voice friend’. All of these overlap of course, but the endgame is this voice friend. I think Algolia’s current technology, it’s stated interest in conversational interfaces, it’s huge focus on a user friendly experience and it’s obsession with evolving a culture that supports all of these things, combine to suggest a very special opportunity for them in this space.
“The best of life is conversation.”
Ralph Waldo Emerson
A Healthy Obsession with Culture
Nicolas is extremely proud of maintaining a relatively horizontal culture of integrity and mutual respect within and between his teams. He loves seeing developers and sales people working together on discovery calls with customers. I know from experience that this is something rare in the traditional SaaS space. You might have a sales person with a semi-technical pre-sales engineer on the line. However, Algolia is a leading light in the emergent domain of the SaaS API. It’s dealing with developers that are allergic to spin. They want detail and they want their hands on the product yesterday. This API-centric sales process is usually much more complex, high touch and interactive than a traditional SaaS sales process.
Nicolas takes customer obsession to an extreme level. If a premium customer is down for 5 hours, Algolia will refund up to 6 months. Essentially they refund up to 1,000 times the downtime. This sort of self-imposed pressure has proven to be very healthy in the past. One of the best decisions Salesforce ever made was to launch trust.salesforce.com. Such transparency was great for the company’s brand equity because transparency looks great, but the far greater benefit was that it forced Salesforce to hold its own feet to the coals. It realised that if it’s servers were not up virtually continuously, it was going to fail as a business. This transparency kept that uptime front of mind 24–7.
Algolia also understands how critical it is to have trust as a core business value. You get the feeling with Nicolas that this is not just a smart strategy, it’s a core part of values he builds into the company’s culture. Developers, product managers and heads of IT suffer very few fools gladly. Algolia makes an internal culture focused on optimising the end-user experience and the employee experience part of its external brand. It wears its heart on its sleeve like a badge of honour. And why not? A strong focus on positive human experiences is always something to shout about.
“It’s a bit counter-intuitive but sometimes when you have a problem or a bug it’s an occasion to shine, it’s an occasion to show how much you care.”
Nicolas Desaigne, Co-founder and CEO, Algolia
“You want people walking away from the conversation with some kernel of wisdom or some kind of impact.”
Harry Dean Stanton
Live long and prosper!
Stephen Cummins, 27th March, 2018
CEO & Founder, AppSelekt
— — — — — — — —
If you found this interesting, then please press the applause symbol for as many claps as you feel it deserves! And …
1. Listen to me interview the greatest founders in the world on the14 Minutes of SaaS podcast … you can listen to it wherever you listen to podcasts:
14 Minutes of SaaS on Spotify / Apple podcasts / Google podcasts / TuneIn / Stitcher
2. Follow me on social networks you use: @Stephen_Cummins and @14MinutesOfSaaS and my LinkedIn profile
— — — — — — — — — — — — — — — | https://medium.com/understanding-as-a-service-uaas/a-conversational-interface-with-algolias-nicolas-dessaigne-f17221ba8973 | ['Stephen Cummins'] | 2019-11-22 12:17:39.028000+00:00 | ['Podcast', 'Artificial Intelligence', 'Conversational UI', 'SaaS', 'API'] |
App Notifications If They Were Written By Classic Poets | Once upon a midnight dreary, while you pondered, drunk and sleepy,
over many a quaint and creative course that Skillshare owned.
Eventually you signed up for an art class, no thought that you might fail to pass,
Or even take the time to amass, the hours needed to master the skill at last.
“I’ll finish it later,” you said to no one.
It waits in your queue, yet undone. | https://medium.com/jane-austens-wastebasket/app-notifications-if-they-were-written-by-classic-poets-e601a05368c9 | ['Kyrie Gray'] | 2020-11-16 05:58:39.970000+00:00 | ['Apps', 'Poetry', 'Productivity', 'Technology', 'Humor'] |
I Turned My Life Around Using These Keystone Habits | Exercise Regularly
Regular exercise is considered a keystone habit because of the results that come along with it: better eating habits, lower blood pressure, increased focus, and better sleep.
Track Your Spending
Track your spending if you want to change your financial future. Tracking your spending is considered a keystone habit because it will influence how you look at money, and you’ll likely follow through on a budget.
Track Your Eating
Like tracking your spending, tracking your eating is a valuable keystone habit to adopt because it will influence your eating choices.
Are you used to eating tons of fast food?
Track it, all of it, and then look back two, three weeks from now, and feel the disgust. Tracking your eating helps because it holds you accountable when it comes to poor eating choices.
Write In A Journal
Journaling is an excellent keystone habit because it lets you observe your life with a sense of clarity. Journaling is useful for tracking your priorities, thoughts, wants, and needs. Transparency is vital when it comes to self-improvement, and nothings more black and white than a journal.
Meditate
Meditation will help you develop keystone habits because it forces you to take a step back and observe your thoughts and actions. Meditation has helped me identify many unnecessary patterns that were holding me back. Now, I’m free to choose what habits suit me without mindlessly jabbing away at my goals.
Create a Daily Routine
Consistency is key! So why not start a morning routine? Having a routine is probably the best keystone habit you could have because it keeps you consistent with the things that matter most.
I’ve spent years working on developing a proper morning routine, generally consisting of brushing my teeth, sitting in silence, drinking some coffee, writing, and reviewing my journal.
Get Regular Sleep
Last but not least, get regular sleep! Everything else won’t matter if you can’t get at least your required six hours of sleep.
Try going to bed around the same time each night, and wake up around the same time each morning. Make sleep a routine, and soon it’ll become a significant keystone habit. | https://medium.com/illumination-curated/i-turned-my-life-around-using-these-keystone-habits-eb70bdc00b01 | ['Jazz Parks'] | 2020-10-16 03:06:54.640000+00:00 | ['Life', 'Life Lessons', 'Self Improvement', 'Inspiration', 'Motivation'] |
Phone Calls With Dead Authors: William Shakespeare | Phone Calls With Dead Authors: William Shakespeare
Once I got him on the phone, he spilled the beans
Shakespeare, in a surprisingly revealing phone call. Image: Mercopress —edited by the author
Recently, I found a dusty antique phone in my great-grandmother’s attic. When I polished it the phone began to glow. Intrigued, I dialed a random number. Wow! The phone was a direct connection to an ethereal network of dead authors. The following conversation will benefit academic scholars and devoted students of literature. Any casual reader, who may have tired of watching Seinfeld or Game of Thrones re-runs, is also welcome to have a look.
TRANSCRIPT FOLLOWS:
ME: Hello — I’m trying to reach a Mr. William Shakespeare.
WS: (DISTINCT BROOKLYN ACCENT) Yeah, that’d be me.
ME: Ah, yes, but I was hoping to speak to the William Shakespeare who once lived in Stratford-upon-Avon. From the mid-1500s into the 1600s.
WS: I’m your boy — whazzup?
ME: Well, sir, I must say I’m a bit taken aback. I rather thought your style of speaking would be more Elizabethan in manner. Are you indeed the revered Bard who once penned the words, “ . . . a poor player, that struts and frets his hour upon the stage, and then is heard no more: it is a tale told by an idiot, full of sound and fury, signifying nothing.”
WS: Yeah, dat’s my stuff. But, y’know, that uppity lingo sucks. Those frilly words made the actors sound pompous — like dey wuz walkin’ around with sticks up their asses.
ME: So, what’s with the Brooklyn accent?
WS: See, here’s da thing. For the past twenty years, my best buddy here has been a Brooklyn comedian named Henny Youngman. Y’know him?
ME: Oh, sure, he’s famous for his zinger: “Take my wife — please!”
WS: (WISTFULLY) I’d give two Hamlets and a King Lear to have written that line. So, anyway, Henny’s been teaching me Brooklynese.
ME: Actually, according to Wikipedia, Henny was born in London. His family moved to Brooklyn when he was a child.
WS: You coulda fooled me — that suckah sure don’t have no limey accent when he’s doin his schtick. And I friggin’ love his stuff! How about the line where he goes: “So my wife sez to me, ‘For our anniversary I wanna go somewhere I’ve never been before.’ And then Henny says, ‘Try the kitchen!’”
ME: Yeah, I guess that’s funny. But does Henny like your writing?
WS: I did ask him once if he’d read any of my plays. He said he might have glanced at a couple but, well — y’know that comment about actors having “sticks up their asses?” That sorta came from Henny, but he said it in a nice way.
ME: Well, sir, the reason I called is to get the answer to a question Shakespearean scholars would like to settle. Did you write all your plays by yourself?
WS: Are you fucking kidding me? My name is on 39 plays, 154 sonnets, and a bunch of scripts for porn shows that ran in Whitechapel. I had a whole TEAM of writers working for me. Sort of like that guy you’ve got who knocks out a dozen detective potboilers a year. Pat Jammerson or something like that.
ME: You must mean James Patterson. He works with several co-authors and has more than 300 million books in print.
WS: Love that guy! With his stable of scribes I’ll bet he hasn’t lifted a quill pen in 30 years. The dude sits back and watches the big bucks roll in.
ME: Yeah, and Hollywood keeps optioning his books, too.
WS: That reminds me — I’ve got a beef with your Hollywood people. You got any connections there?
ME: Sorry, I’m just a minor author who lives in New York City.
WS: New York City? Nevah hoid of it. Anyway, I wanna know why the hotshots at Netflix or Amazon aren’t producing a flock of miniseries using my plays? For example, how about Macbeth?
ME: Oh, you mean the play that actors won’t speak the name of?
WS: Huh?! What’s that all about?
ME: Actors think if they say Macbeth out loud it brings a curse of bad luck. So they refer to it in hushed tones as “the Scottish play.”
WS: That’s fakakta!!! If I wanted to name it that I would have named it that. The play’s name is Macbeth — MACBETH! — maybe you could pass the word.
ME: I’ll call Actors Equity and ask if they’ll mention it in their next newsletter.
WS: Thanks, pal — I owe you one. Now I’m off for brunch with Henny. He’s serving original Brooklyn bagels and they’re fantastic!
ME: Having yours with a schmear?
WS: What else, bubeleh?
CLICK. BUZZ. | https://medium.com/slackjaw/phone-calls-with-dead-authors-william-shakespeare-e7513bb06e4d | ['John Emmerling'] | 2020-10-21 16:35:43.752000+00:00 | ['Writing', 'Shakespeare', 'Literature', 'Brooklyn', 'Humor'] |
A student perspective on the NCSS Challenge | The NCSS Challenge is an online coding competition which runs 4 times per year and is open to school students from around the world. The July round teaches the Python programming language.
Thinking of joining the NCSS challenge this July? Hear from Lily who’s a seasoned NCSS veteran about why she loves the competition and how it’s helped her put options on the table for her future!
How many times have you done the NCSS Challenge?
I did it twice — once in Primary and once in High School. The NCSS Challenge is actually included as part of our Technology subject.
Even though it was compulsory at your school, were you encouraged to do things like the NCSS Challenge?
My mum works in IT and studied Maths at Uni, so she always encouraged me to do coding.
Would you consider doing another NCSS Challenge?
I would consider doing another Challenge, but it would have to be in my own time.
Some people might say that coding is a ‘boy thing’. What would you say to that?
Well, I go to a girls’ school, so everyone does coding there. But in any case, a girl can do whatever a boy can — probably better!
What would you say to other girls who aren’t sure about doing the Challenge?
I would say give it a try! Why not? You might find that you can do it, and like it.
What did you like about the NCSS Challenge and what didn’t grab you as much?
It was really fun… but it’s also frustrating when you get stuck on a problem that you can’t solve. The teachers know how to help you and when you figure it out and solve the problem you do get a sense of achievement. It’s also motivating to solve the Challenge because you get high marks in your assessment.
Would you say you’re a technical person?
I wouldn’t say that. I’m actually into a lot of things: I love history, music and coding — so I’m a bit of an all-rounder.
So would you be considering a career in tech?
I’m not really sure yet. My mum always tells me the “money’s in STEM”, but I do think that whatever I end up doing, having learned coding will set me up for whatever degree or job I choose.
What kind of skills did you learn from the NCSS Challenge?
I think the main one is problem-solving. Obviously this is most relevant for maths, but I also think it applies to all subjects — even History, which is more discussion based and where there isn’t a clear-cut ‘right’ answer. It helps you to think through different ideas and support your point of view. I also did sewing as a subject. You wouldn’t think that sewing has much to do with problem-solving or maths skills. But you need to measure, count, make sure that your patterns match. All of these elements need some problem-solving skills.
How do you apply problem-solving to wider issues apart from school subjects?
I find myself thinking about how the world could be a better place. I think about Government systems, or how education and the curriculum could be improved.
And what would you do differently in terms of the curriculum?
Well my year was the first year at our school to do coding as part of the curriculum. It should have been taught much earlier — just like Maths and English. I think that in general I would introduce more choice, more flexibility in the subjects we are offered. For example, Sewing was a compulsory subject. I didn’t like it or understand why it should be mandatory. Students have such a range of interests — whether it’s music, design, coding… school should be a place where students get to try many different things, so they can work out where their talents lie.
And finally, what was your most memorable moment out of the two Challenges you did?
In Year 6, I remember that in the ‘Newbie Stream’ you get a lot of green ticks as you go through the steps successfully. When you finally solve the Challenge there is a confetti animation! I thought that was really cool.
This is NOT collaboration

Clients making “changes”
We normally work in Sketch, but some projects are also being done in Figma, if the customer started them that way.
One of our clients recently requested the ability to edit the Figma file we created and we reluctantly complied. What happened next was a typical designer's nightmare.
First of all, they realised they can follow the cursor of our designer working in the file, so they logged in in the morning and followed his work for a couple of hours until I simply kicked them out of the file.
This is the micromanagement version of the boss standing behind you all day and making sure you only do work. Imagine a proverbial whip behind your back lashing at you each time you open Twitter or Instagram.
This is so awesome! We can see him working in real-time!
That particular client also didn’t know Figma well enough to actually work in it. The goal was so that they can update the copy on the landing pages themselves. Makes sense, right?
In reality, they mostly managed to accidentally move whole sections in and out of the Artboards and break things.
The best part of collaboration is when it’s off?
The quick solution was only to enable viewing of the file when we were done working on it. But this nightmare scenario is not that uncommon.
I asked around and that micromanagement technique of “looking at your every move” is frustrating designers all around.
There’s also the problem of potential chaos when many people work on the same set of screens, especially remotely. There’s absolutely no way this could work, given everyone is different and will push the design their own way.
Imagine Amy changing the background color for everyone to one that she likes.
How do people really collaborate?
The way people are really using the collaboration features is in fact completely different. First of all, the best feature is having one “source of truth” file that’s always synced. In Sketch we used Dropbox for this, but we had to always make sure the file was closed and saved by the other person before jumping in.
Obviously removing the out-of-sync issue is a great strength of good collaboration that should be highlighted. You are sure that the file is always up to date, even if the original author went on holiday or quit the company.
Someone can simply jump in and take over. The entire design stays in sync at all times. No wonder Sketch is also introducing this, as to me this part is the most groundbreaking part of the entire “collaboration” thing.
A larger organization I spoke to said they usually have up to five designers inside one project, but they are rarely even on the same page. And if they are, each person gets assigned an artboard to fill with previously agreed design library components.
Without that library — again — it would cause chaos and confusion with everyone making something a little bit their own way — either deliberately or by forgetting some key guidelines. After all — we’re only human. People forget.
A Deep Conceptual Guide to Mutual Information
Embracing the “Correlation of the 21st Century.”
The logical relations between various concepts underlying Mutual Information.
Dependency
Causality is a central concept in our lives. It refers to the idea that one event influences another event. Our perception of causality underlies many of our personal opinions. If I believe the internet makes people dumber, or that the President has made things worse, I am suggesting a causal connection; real or not.
In business we look to understand what makes a good hire, a good decision, a good product. In government we create new legislation and policies based on evidence for social and economic causes. In short, causality has much to do with how we try to make sense of the world.
Of course science rests largely on the idea of causality. We attempt to interpret our observations by making causal statements. If we believe we know the event, process or state that contributes to another event, process or state we say we know something about the underlying phenomenon. We also often make predictions with whatever causal structure has been uncovered by the models we build.
The statistical concept for dealing with causality is called dependence, more commonly called correlation. Correlation represents any statistical association between 2 or more variables. Of course we all know correlation alone is not some guarantee of causal relationship but it can act as a signpost of a potential relationship between 2 things.
The most common approach to quantifying correlation between variables is the Pearson product-moment correlation coefficient (PPMCC). I will simply refer to this approach as Pearson’s correlation. Pearson’s correlation is the workhorse of dependence and considered an industry standard, inside and out of academia.
The ubiquity of Pearson’s correlation means it should be subject to the highest amount of scrutiny compared to other methods. Unfortunately the amount of criticism put to Pearson’s correlation is far from that deserved. The reason is its ease of use and extremely intuitive interpretation. Pearson’s correlation tells a straightforward story about 2 things “moving” together and can make that story look reasonably technical. A researcher hoping to support some narrative around how things are related can call upon Pearson’s correlation to appear “scientific.”
But Pearson’s correlation makes some very simple assumptions about how 2 things might be related. Most real-world situations are nontrivial and don’t lend themselves to such simplistic descriptions of dependence. It’s too easy to promote false narratives with techniques like Pearson’s correlation since data can always be “tortured” into submission. A scientific justification for how prevalent Pearson’s correlation is appears scant at best.
The popularity of Pearson’s correlation does however present us with the best basis of comparison for contrasting other methods. By looking at more scientifically valid techniques we can better understand where Pearson’s correlation falls short. Specifically, we can see how Pearson’s correlation is a very weak proxy to a more rigorous approach to understanding how 2 or more variables share information.
We will unpack just how important the concept of information is throughout this article. Our discussion will center around what some have called a “correlation for the 21st Century.” The starring role is played by a measure known as Mutual Information; the topic of this article. Mutual Information digs much deeper into the notion of dependence by focusing directly on information itself.
Pearson’s Shortcomings
The core equations behind Pearson’s correlation are as follows:
Figure 1 The math behind Pearson’s correlation.
Pearson’s correlation is the 2nd equation at the bottom, and is calculated using the normalized covariance shown at the top. Normalization is done whenever we wish to make data comparable, by adjusting values that are on different scales to a common scale. In Figure 1 we are dividing the covariance of 2 random variables by the product of both their standard deviations to achieve our normalization. Think of this step as “squashing” the possible correlation values between -1 and 1, which makes the coefficient easier to work with.
A more intuitive way to think of Pearson’s correlation is in terms of its geometric interpretation. Any variable (column in a dataset) can be thought of as a vector in vector space. Correlation is then simply calculating the cosine of the angle θ between two observed vectors:
Figure 2 An intuitive explanation of Pearson’s correlation is the cosine angle between 2 vectors representing each variable.
Figure 3 If we increase the angle between 2 vectors (variables) we decrease Pearson’s correlation. Note the change in variance in the upper left.
While covariance gives us the direction of the relationship, correlation gives both direction and strength. This is because the magnitude of covariance is arbitrary (it depends on the units), meaning if we change the units we change the magnitude for the exact same phenomenon. Thus correlation normalizes the covariance as discussed above to make the strength of the relationship non-arbitrary.
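To make the two views concrete, here is a minimal Python sketch (the data is randomly generated for illustration, not drawn from any real dataset) that computes Pearson’s correlation both from the covariance definition and as the cosine of the angle between the mean-centered vectors:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.6 * x + rng.normal(scale=0.8, size=1000)   # linearly related, with noise

# Definition: covariance normalized by the product of standard deviations
r_formula = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# Geometric view: cosine of the angle between the mean-centered vectors
xc, yc = x - x.mean(), y - y.mean()
r_cosine = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

print(r_formula, r_cosine, np.corrcoef(x, y)[0, 1])   # all three values agree

All three numbers coincide, which is exactly the point: Pearson’s correlation is nothing more than this normalized, angle-based summary.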
Correlation is best suited to continuous, normally distributed data and is thus easily swayed by extreme values. As such, correlation will misrepresent relationships that are not linear. This occurs VERY OFTEN in practice since much of our world is nonlinear and not normally distributed (your stats class notwithstanding).
Despite being the go-to measure for association in practice, Pearson’s correlation is not a general measure of dependence. The information given by a correlation coefficient is not enough to define the dependence structure between random variables. Because correlation only looks for linear dependence, a coefficient of 0 does not mean there is no relationship at all; a correlation of 0 does not imply the variables are independent.
Only in situations where things are simple (e.g. linear) will Pearson’s correlation offer some potential insight into causality (although there are issues with this idea as well). It relies on covariance, which suggests that greater values of one variable mainly correspond with greater values of another variable. While this sounds obvious, it is a simplistic notion of things being related. Variables can be related in all kinds of ways.
To tease out the dependence structure between variables requires we do more than just compare vectors of data. We need to somehow tap directly into the idea of information. After all, that is what we’re doing when we make a measurement. The outstanding question in any measurement is how much information is being shared between the 2 or more things.
Mutual information, as its name suggests, looks to find how much information is shared between 2 variables rather than just noting their commensurate “movement.” To grasp such a technique requires we understand information itself, and this brings us to the bulk of this article.
Information
Information is defined as the resolution of uncertainty. This means an appropriate approach would account for the uncertainty inherent in any measurement, and this demands probability.
The most obvious way forward would be to use the multiplication rule for independent events:
IF P(A)*P(B) = P(A and B) THEN A and B are independent events, otherwise, they are dependent events.
This tells us that variables A and B are to be considered independent if the product of their marginals equals their joint probability.
Recall that a marginal distribution gives the probabilities of different values of a variable contained within some subset, without reference to values of any other variables. A joint distribution gives the probability that 2 or more variables fall within a particular range.
Why would 2 variables be independent if the product of their marginals equal their joint probability? Imagine a square matrix filled with values that show the combination of rows and columns as shown in Figure 4. A casual look at these values reveals that almost half of them are redundant; they are symmetric about the diagonal.
Figure 4 The sharing of information between 2 variables, visualized as a matrix of values. Symmetry in data signifies redundancy.
This symmetry points to a sharing of information, since we see the same outputs produced by different axes.
Of course this can occur by coincidence, but only to a point. Any 2 variables from a reasonably sized phenomenon will not produce the same outputs by coincidence. Compare this to what Pearson’s correlation does: it simply compares variances, meaning as long as 2 variables grow or shrink in unison the variables are deemed dependent.
In other words, whereas Pearson’s uses a well-defined moment (variance) of an assumed distribution (Gaussian), Mutual Information instead tallies up actual values between all variables considered. The chances of coincidence are far lower with MI than Pearson’s.
While the product of 2 distributions contains all the information brought by both variables (everything inside the matrix), the joint distribution is more like the pink area above. If outcomes are shared between 2 variables then there is less information in the joint than in the product of marginals. This is why any deviation between the joint distribution and the product of marginals indicates a sharing of information, and thus dependence.
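A small sketch makes this tangible. The two joint tables below are invented purely for illustration: one factorizes exactly into its marginals (independence), the other concentrates probability mass on matching outcomes (dependence).

import numpy as np

def product_of_marginals(joint):
    # Outer product of the row marginal and the column marginal
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    return np.outer(px, py)

independent = np.outer([0.7, 0.3], [0.4, 0.6])   # joint built from its marginals
dependent = np.array([[0.45, 0.05],
                      [0.05, 0.45]])             # mass piled on matching outcomes

for name, joint in [("independent", independent), ("dependent", dependent)]:
    gap = np.abs(joint - product_of_marginals(joint)).sum()
    print(name, "deviation from product of marginals:", round(gap, 3))

The independent table shows zero deviation, while the dependent table does not, and it is exactly this kind of deviation that signals shared information.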
If we wanted to create a method that detects when information is shared we would look to leverage the above concept, and this is precisely what Mutual Information does. Let’s look at how Mutual Information is constructed:
Figure 5 The definition of Mutual Information.
Mutual Information uses something known as Kullback–Leibler divergence (KL divergence), which is what we see on the right-hand side of the equation. KL divergence is a measure of how one probability distribution is different from a second, reference probability distribution.
Let’s look more closely at how KL-divergence is defined mathematically using 2 ordinary distributions:
Figure 6 KL-Divergence for discrete variables.
We can see that KL-divergence is the expectation of the logarithmic difference between the probabilities P and Q. The expectation just means we’re calculating the average value, hence the summation sign summing over all values. We can see the expression is also logarithmic (more on this later). A critical piece is the use of a ratio between probability distributions. This is how the “difference” between distributions is taken into account.
Figure 6 shows the discrete form, but we can just as easily express KL-divergence for continuous variables (where p and q are now probability densities instead of mass functions).
Let’s plug values into the KL-divergence formula to see how it works. Figure 7 shows 2 distribution with their respective values.
Figure 7 2 different probability distributions and their tabulated values. Adapted from Wikipedia.
Plugging these values into the KL-divergence equation we get:
Figure 8 Calculating the KL-Divergence from the tabulated values of 2 distributions.
A KL-divergence of 0 would indicate that the two distributions in question are identical. We can see in our example that the distributions diverge from one another (a non-zero result).
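For the curious, here is a rough Python version of that calculation. The two distributions below are a standard three-outcome example of the kind the figure was adapted from; treat them as placeholders and substitute your own values.

import numpy as np

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i p_i * log2(p_i / q_i), reported in bits
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                      # terms with p_i = 0 contribute nothing
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

p = [9/25, 12/25, 4/25]   # "reality"
q = [1/3, 1/3, 1/3]       # our model / approximation of reality

print(kl_divergence(p, q))   # positive: the distributions diverge
print(kl_divergence(p, p))   # 0.0: identical distributions
print(kl_divergence(q, p))   # a different number: KL is asymmetric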
Think of P as representing the true distribution of the data (“reality”) while Q is our theory, model, or approximation of P. The intuition behind KL-divergence is that it looks at how different our model of reality is from reality. In many applications we are thus looking to minimize the KL-divergence between P and Q in order to find the distribution Q (model) that is closest to the distribution P (reality).
We often say KL divergence represents the divergence between P and Q, but this isn’t quite right. There is a fundamental asymmetry in the relation. We should actually say it describes the divergence of P from Q (equivalently, the divergence from Q to P). While this sounds pedantic it better reflects the asymmetry in Bayesian inference; we start from a prior (Q), which updates to the posterior (P).
There are various ways to use KL-divergence. When talking in terms of Bayesian inference we can think of it as a measure of the information gained by revising one’s beliefs from the prior probability distribution (the amount of information lost when Q is used to approximate P). In machine learning we call this information gain, while in coding theory it refers to the extra bits needed to code samples from P using a code optimized for Q.
We can see that the general definition of KL-divergence doesn’t quite look like the one used in Mutual Information. This is because instead of using plain distributions Mutual Information uses the joint and the product of marginals. This is valid since a joint distribution is itself a single distribution, and the product of marginals is also itself a single distribution. In other words, we can look for the KL-divergence between a single joint distribution and a single product distribution to measure the dependence between 2 variables.
Since we are looking for how different the joint is from the product of marginals we are doing what we saw in Figure 4. We already know that if the joint differs from the product of marginals there is some dependence between the variables in question.
So we understand how KL-divergence can be formulated in a way that captures shared information. But to truly understand Mutual Information we need to grasp what information is. A hint towards doing this is the fact that KL-divergence is also called relative entropy. If entropy is being used in the formulation of Mutual Information then it must have something to do with information itself.
Entropy and Information
When we observe something we are being exposed to some source of information, and a model is exploiting that information to explain and/or predict something more general.
Figure 9 shows the path from an observation to the use of a model. Making an observation is done for the sake of being surprised; we are not studying things to observe the obvious rather we wish to notice something we have not seen before. The word “surprise” might seem too colloquial to be useful in a scientific context but it actually has a specific meaning tied to information theory. Information can be thought of as the amount of surprise contained in an observation. We call this surprisal.
Figure 9 The path from an observation to the use of a model. Entropy oversees all these steps since they all relate back to the idea of surprisal. Die icon from icons8.
The more surprisal associated with a variable the more information that variable contains. This makes intuitive sense. If I find out that the details of my observation are things I already knew then it isn’t useful to consider these details as information. So we can see the connection between an observation, surprisal and information.
But calling something information doesn’t lend our endeavor to measurement. We need to quantify the amount of information in the variables we observe. This is possible by interpreting the observation — surprisal — information connection as uncertainty. Uncertainty connects us to probability since probability counts things in terms of likely and unlikely outcomes. Probability is what gives us the mathematical framework to quantify what we observe. Finally, if we are in the realm of probability we are squarely aligned with science since probability theory can be considered the “logic of science.”
But how does entropy relate all these concepts together? To answer this question let’s do a simple experiment with a coin flip and use the equation of entropy to quantify the outcome we observe.
A common expression for entropy looks like this:
It is a logarithmic measure of the total microscopic states that correspond to a particular macroscopic state. We’ll understand exactly what that means shortly but for now, let’s see the difference in entropy between a fair and biased coin toss.
Figure 10 shows how entropy is calculated for a fair coin. We know the probabilities of heads and tails are both 0.5. A fair coin has uniform probability since it is equally likely to get either outcome (heads or tails). Uniform probability leads to an entropy representing the most we can get from a binary event.
Figure 10 Calculating the entropy of a fair coin toss, using the base 2 logarithm.
We are using a logarithm to the base 2, which is standard practice when dealing with calculations involving information. There is nothing stopping you from using other bases, however this would give numbers that are less easy to work with for things like coin flips.
What happens if we bias the coin? Let’s bend the coin and assume⁶ this leads to a nonuniform probability in outcome, plugging these probabilities into the entropy equation. We’ll say bending the coin gives heads a probability of 0.7 and tails a probability of 0.3:
Figure 11 Biasing a coin by bending it, leading to nonuniform probabilities.
Biasing gave us nonuniform probabilities since there is now a higher chance of getting heads than tails. This affected the calculated result of entropy by lowering the value with respect to what we saw with a fair coin:
Figure 12 Calculating entropy for a fair and biased coin toss.
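The calculation is short enough to check directly. Here is a minimal sketch, with probabilities chosen to mirror the fair and bent coins above:

import numpy as np

def entropy(probs):
    # Shannon entropy in bits: H = -sum_i p_i * log2(p_i)
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]            # 0 * log(0) is treated as 0
    return -np.sum(probs * np.log2(probs))

print(entropy([0.5, 0.5]))    # fair coin          -> 1.0 bit
print(entropy([0.7, 0.3]))    # bent coin          -> ~0.881 bits
print(entropy([0.99, 0.01]))  # heavily bent coin  -> ~0.081 bits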
What does the lower entropy value with the biased coin mean intuitively? It means there is less uncertainty in the outcome. Plotting the amount of entropy associated with a single coin flip versus the amount of bias shows the following:
Figure 13 Change in entropy based on the bias applied to a coin. (information and face icons from icons8)
Notice the relationship between bias and surprisal. The more we bend the coin, the less entropy, the less surprised we will be of the outcome.
Let’s walk through the steps of Figure 9 to relate entropy to our observation. We can see that a fair coin toss must have the most surprise, whereas a biased coin toss has less surprise. Since there is no “favoritism” when the coin is fair we have no idea to which side the coin will land, but if we bias one side there will be less surprisal.
We can also say that observing the result of a fair coin toss provides the maximum amount of information. We had no idea of the outcome prior to flipping the fair coin, so we learned more from this outcome than we would from a biased coin.
It becomes obvious that fair coins also contain the most amount of uncertainty. We are more uncertain about a fair coin than a biased coin. Biasing a coin reduces the amount of uncertainty in proportion to the amount of bias applied.
Remember that probability is how we bring math to uncertainty. We saw above that the fair coin toss has equal probabilities between heads and tails. It is these probabilities that drive the entropy value we calculate.
The result of a fair coin toss is also the least predictable, since there is no increased likelihood of having one event over the other. But once we introduce bias to the coin we are creating a situation of nonuniform probabilities between the potential outcomes. We can see that a prerequisite for being able to make a prediction is the existence of nonuniform probabilities that underly the possible events.
To wrap up our tour through Figure 9, the explanation or interpretation of a measurement is when we attempt to give reasons to our observation (“why do we see this”), which means finding one or more causal agents that contribute to what we observe. Bias underlies interpretation as much as it does prediction. A fair coin toss has nothing to interpret since both outcomes are equiprobable. With bias however there is the potential to explain our observation since there is a reason for the event (the predominance of one outcome over the other).
2 things worth noting with respect to probability and entropy:
1. nonuniform probabilities decrease the surprise/information/uncertainty;
2. smaller probabilities attached to each possible outcome increase the entropy.
The first point is apparent with the coin toss. We can understand the second point by comparing a rolled die to a tossed coin. A rolled die has higher entropy than a tossed coin since each outcome of a die toss has a smaller probability (p=1/6) than each outcome of a coin toss (p=1/2). Remember that all probabilities must sum to 1 (the outcome was realized) thus smaller probabilities attached to each outcome means more possible outcomes and thus more surprise upon learning the actual outcome.
Figure 14 Difference in entropies between tossing a fair coin and rolling a fair die.
Our coin toss example showed how each of the topics in Figure 9 (surprisal, information, uncertainty, probability, prediction, and explanation) fall under the purview of entropy.
What we just covered was an information-theoretic view of entropy. While this is the most general way to think about entropy it isn’t the only way. The original take on entropy was thermodynamic. To truly grasp what entropy (and information) is it’s worth thinking about it both in thermodynamic and information-theoretic terms.
Thermodynamic Entropy
In 1803 Lazare Carnot realized that any natural process has an inherent tendency towards the dissipation of useful energy. Lazare’s son Sadi Carnot showed that work can be produced via a temperature difference. In the 1850s and 60s physicist Rudolf Clausius provided a mathematical interpretation for the change that occurs with temperature difference, attempting to formulate how usable heat was lost whenever work was performed. Clausius named this idea entropy and considered it “the differential of a quantity which depends on the configuration of the system.”
But it was scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell who gave entropy its statistical foundation. Importantly, Boltzmann visualized a way to measure the entropy of an ensemble of ideal gas particles using probability.
Figure 15 shows how we interpret an observation of a physical system probabilistically. At any given instant in time a system will have a configuration, specified by the positions and momenta of its components. We can imagine the 7 components (small blue balls) of the system in Figure 15 tumbling about every which way. Now imagine we wanted to measure some macroscopic property of the system such as its temperature. Temperature is a measure of the average kinetic energy of all the molecules in a gas. The temperature we measure is expected to be the peak of the probability distribution.
Figure 15 Any physical system is composed of many possible configurations, with some most probable set of configurations determining what we observe.
Keep in mind, anything we observe comes from some underlying random process. This means whatever we observe when taking a measurement comes from a distribution rather than a specific value. The most probable outcome corresponds to the largest number of configurations that can be “stacked up.” In Figure 15 there are 3 configurations that lead to the most probable temperature we measure. Adding heat to a system increases its entropy because it increases the number of possible microscopic states that are consistent with the macrostate.
The statistical mechanical interpretation of entropy concerns itself with the relationship between microstates and macrostates. A microstate is a specific microscopic configuration of a thermodynamic system that the system may occupy in the course of its thermal fluctuations. A macrostate is defined by the macroscopic properties of the system, such as temperature, pressure, volume, etc. In Figure 15 there are 3 microstates that correspond to the most probable macrostate.
Entropy measures the degree to which the probability of the system is spread out over different possible microstates. With respect to Figure 15 entropy would attempt to quantify how many possible microstates make up the total distribution.
Back to Information-Theoretic Entropy
We first introduced entropy in terms of information, looking at coin flips as our example. There is a striking similarity between the information and thermodynamic interpretations of entropy. Both quantify the uncertainty by considering all possible outcomes that could lead to the event.
We can think of our coin flipping example in a similar way to thermodynamic entropy, where the microstates are all the possible outcomes and the macrostate is the event we observe. This is more obvious when we flip at least 2 coins as shown in Figure 16.
Figure 16 Thinking of coins in terms of microstates and macrostates.
If you were asked to guess which combination of heads and tails is most likely in a fair coin toss (with 2 coins) you could confidently answer “1 heads and 1 tails.” The reason is there are 2 microstates that lead to the (H,T) macrostate, whereas (H,H) and (T,T) macrostates have only 1 each. This is directly analogous to what we saw with thermodynamic entropy. It’s merely a matter of realizing that the most likely outcome for an event is the largest set of configurations that lead to the same observation.
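We can let the computer do the counting. The sketch below enumerates every microstate of n fair coin flips and groups them by macrostate (the number of heads); the value of n is arbitrary.

from collections import Counter
from itertools import product

n = 2                                             # number of coins flipped
microstates = list(product("HT", repeat=n))       # every possible configuration
macrostates = Counter(state.count("H") for state in microstates)

for heads, count in sorted(macrostates.items()):
    print(heads, "heads:", count, "microstate(s), p =", count / len(microstates))
# 0 heads: 1, 1 heads: 2, 2 heads: 1 -> the mixed macrostate is the most likely one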
Our examples in Figures 15 and 16 are simple. But imagine a weather system with pressure gradients, temperature fluctuations, precipitation, etc. We are talking orders of magnitude more complexity than our examples above. But weather is still some set of possible configurations between matter and energy, and what we experience is expected to be the largest set of configurations that lead to the same observation.
Let’s look at an example more inline with how we think about information. Compression is a technology used to reduce the memory footprint of a file. Our compression is “lossless” if we can always recover the entire original message by decompression. The following image shows the well-known example of communicating over a channel with a message passed between source and destination. Encoders are used to convert the original message into another, usually shorter representation to be transmitted. A decoder can then convert the encoded message into its original form.
Figure 17 Communicating a message over a channel.
Entropy rears its head again in this situation. In our previous example we saw how entropy quantifies the amount of uncertainty involved in the value of a random variable (outcome of a random process). This idea was made concrete in 1948 by Claude Shannon in his paper “A Mathematical Theory of Communication.” Shannon was interested in finding a way to send messages over a noisy channel such that they could be reconstructed at the other end with a low probability of error.
Let’s look at 2 different scenarios of data compression and compare them with respect to entropy.
Say we want to know the smallest amount of information that will convey our message ABADDCAB assuming we can only use 1s and 0s. Let’s come up with 2 possible encodings. One option is to use 2 bits for each letter, so A is 00, B is 01, C is 10, and D is 11. This would work. Another option would be to have A coded as 0, B as 10, C as 110 and D as 111. This would also work.
If every letter were equally likely, the first option would look better: it always spends 2 bits per letter, while the second averages 2.25 bits per letter across the four codes (1 + 2 + 3 + 3 bits over A, B, C and D). But what happens if the letter A in our message occurs with 70% probability, B with 26%, and C and D with 2% each? In this case assigning the single bit to A lowers the average number of bits required to send (compress) the message, because 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time three bits. The second scenario has lower entropy (about 1.09 bits per letter, versus the full 2 bits of a uniform source), and the variable-length code exploits that: it needs only about 1.34 bits per letter on average, whereas the fixed-length code always needs 2.
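A quick sketch confirms the arithmetic (the probabilities are the assumed ones from the example above):

import numpy as np

probs = {"A": 0.70, "B": 0.26, "C": 0.02, "D": 0.02}

fixed  = {"A": "00", "B": "01", "C": "110", "D": "11"}    # placeholder, corrected below
fixed  = {"A": "00", "B": "01", "C": "10",  "D": "11"}    # 2 bits for every letter
varlen = {"A": "0",  "B": "10", "C": "110", "D": "111"}   # shorter codes for likelier letters

def expected_bits(code):
    # Average number of bits per letter under the given letter probabilities
    return sum(p * len(code[letter]) for letter, p in probs.items())

entropy = -sum(p * np.log2(p) for p in probs.values())

print("fixed-length code    :", expected_bits(fixed), "bits per letter")
print("variable-length code :", expected_bits(varlen), "bits per letter")
print("entropy (lower bound):", round(entropy, 3), "bits per letter")

The entropy, roughly 1.09 bits per letter, is the theoretical floor; the variable-length code gets much closer to it than the fixed-length one.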
So we can see how the existence of nonuniform probabilities determines how well we can compress a message since it changes its overall entropy. This is just another version of what we stated previously with respect to the relationship between probability and entropy; nonuniform probabilities decrease the surprise/information/uncertainty. If A occurs 70% of the time there is obviously less surprise in the message, which we now know means less information.
We have to be careful here with what we mean when we say less information. We are not removing information from the original message. Rather we are determining how much actual information was in the original message and transmitting that using less information. If we treated each letter in the message as equally probable we would be adding redundancy into the transmitted message and transmitting it with more information than necessary.
To be clear, our compressed message has the same quantity of information as the original just communicated in fewer characters. It has more information (higher entropy) per character. Data compression algorithms find ways to store the original amount of information using less bits, and they do so by removing redundancies from the original message.
Going back to our path from observation to prediction (Figure 9) we can again see how each step plays out under entropy. A message with nonuniform probabilities in outcomes (A, B, C, D) decreases the uncertainty, which means less surprisal, which means less information required to transmit the message. Imagine trying to predict the next letter of the message. Obviously we could predict better under the 2nd scenario since A occurs 70% of the time. And just as a biased coin allowed for an explanation of what was observed, so too does the bias present in communication with a preponderance of one letter over the others.
The Deeper Connection
A big question regarding entropy is whether its thermodynamic and information-theoretic interpretations are different versions of the same thing. The parallels between the two are undeniable, and many regard the information-theoretic version of entropy as the more general case, with the thermodynamic version being the special case.
Information-theoretic entropy is thought of as more general since it applies to any probability distribution whereas thermodynamic entropy applies only to thermodynamic probabilities. It can be argued that information-theoretic entropy is much more universal since any probability distribution can be approximated arbitrarily closely by a thermodynamic system².
Regardless where you stand on the debate there is an advantage to thinking of all entropy in terms of its information-theoretic interpretation. It allows us to see any physical system as having information content, and any physical process as the transfer of information. This interpretation will help us understand how Mutual Information gets to the heart of dependency and what it means for information to be shared.
To begin our deeper dive into this connection let’s look at the original statistical mechanical equation found by Boltzmann, shown in Figure 18.
Figure 18 The original thermodynamic expression for entropy.
The first equation is the one on Boltzmann’s tombstone, except above written with the natural logarithm. The type of logarithm used just determines the units entropy will be reported in, while Boltzmann’s constant k simply relates everything to the conventional units of temperature. The important part is the W, which is the number of microstates that lead to the macrostate. In thermodynamic terms the microstates might be the individual atoms or molecules that make up some physical system. This form assumes all the microstates are equiprobable (a microcanonical ensemble), where W is the number of microstates.
Gibbs extends the simple equation of Boltzmann that uses equiprobable probabilities to account for nonuniform probabilities. You can see how this is accounted for in the Gibbs entropy equation in the bottom right. By using individual probabilities and summing them we allow for distinct contributions from different microstates.
The use of the logarithm is fundamental to understanding why Boltzmann came up with this expression, so let’s explore that now.
The Intuition Behind Logarithms
Entropy is routinely measured in the laboratory and is simply heat divided by temperature. It has a well-known one-way behavior corresponding to the 2nd Law of Thermodynamics, which states that the total entropy of an isolated system can never decrease over time. This was all known prior to Boltzmann as practitioners had a strong intuitive appreciation of entropy’s behavior⁵.
But Boltzmann wasn’t satisfied with the definition of entropy as simply heat over temperature since this doesn’t tell us what entropy is. Some kind of model was needed that aligned with what was known about the motion of atoms and molecules.
If you fill a container with gas and connect it to another container the gas will fill both containers. The only thing changed here is the arrangement of matter (not temperature, pressure, etc.). But the system still exhibits the one-way behavior associated with entropy, so something is still increasing. Boltzmann realized that entropy must have something to do with the arrangement of matter rather than some usual thermodynamic quantity.
Boltzmann realized that in any random process the final arrangement of matter was more likely than the initial arrangement. This thinking allowed Boltzmann to connect the known concept of entropy to probability. This was a major shift in science. It meant the usual approach to understanding reality by defining properties like position, speed, weight, size, etc. of physical things could now be recast purely in terms of uncertainty, and thus information.
But how does the arrangement of matter get formalized in an equation? We already saw from Figure 15 that a physical system is composed of many possible configurations with some most probable set of configurations determining what we observe. Another way of saying this is that the probability of finding a particular arrangement is related to the number of ways a system can be prepared⁵.
If I asked you which outcome from rolling a pair of dice was more likely, a 12 or a 7, what would you say? There is only 1 way to roll a 12 (1 in 36 chance) but 6 ways to roll a 7 (6 in 36 chance), so obviously we are more likely to roll a 7. This is the same reasoning we applied above to the most likely outcome between heads and tails in a 2-coin flip. This property gave Boltzmann a path towards describing the entropy of a physical system in terms of the number of ways it can be arranged.
There is a problem though. Whereas the entropy used by scientists is additive, the number of ways we can arrange things is multiplicative. For example, a single die can fall 6 ways (6 sides) but 2 dice can fall 36 ways (6 X 6). How can entropy, which is additive (a system twice as large has double the entropy), be described by a multiplicative process?
This is where the logarithm comes in. Logarithms convert multiplicative quantities into additive ones. Entropy is simply the log of the number of ways a system can be arranged, converting an underlying multiplicative phenomenon (things combining) into an additive tool. Of course the number of ways we can arrange physical systems is astronomically large, since everyday amounts of matter contain on the order of 10²³ particles. To bring entropy back into the kind of magnitudes we’re used to using in everyday situations Boltzmann multiplied the expression by a “fudge factor”, which we now call Boltzmann’s constant (1.38064852 × 10⁻²³).
Figure 19 shows the difference between exponential growth viewed on a linear scale and the same growth viewed on a logarithmic scale. Whereas the linear scale shows explosive behavior the logarithmic scale “tames” this into a simple straight line.
A common approach to detecting exponential growth is to see if it forms a straight line on a logarithmic scale.
Figure 19 Logarithms allow us to view an exponential process as if it were an additive (linear) process.
Exponential growth is associated with multiplicative processes. Graphically, think of the logarithm as offloading the explosive growth from a multiplicative process onto the axes such that the function itself no longer has to bear that growth. If you prefer algebraic interpretations, recall that the logarithm is the inverse function to exponentiation. This means the logarithm of a given number x is the exponent to which another fixed number (the base) must be raised to produce x. If we keep taking the logarithm of a massive number we will get back a smaller number (the exponent) that counts by 1.
Figure 20 depicts the use of a logarithm as the representation of some combinatorial explosion in terms of simple linear growth. Just as 2 dice “explode” in the number of possible combinations, so too do the various configurations a system can take on. Logarithms are what make working with massive numbers (like the number of ways atoms can be arranged in a material) more manageable.
Figure 20 The relationship between a combinatorial explosion and the logarithm used to represent the process as simple linear growth.
A critical realization here is that the use of logarithms takes into account the complexity of the problem. In other words, we don’t toss away the combinatorial behavior of systems when we simplify them with logarithms. They are “brought along for the ride” despite our simplification of the problem. Contrast this to something like Pearson’s correlation. Pearson’s correlation also simplifies a system by depicting it as simple vectors pointing in similar directions, BUT it does so by ignoring the complexity of the problem.
The goal in science is not to reduce phenomena down to the simplest description, it is to reduce phenomena down to the simplest description that retains the core properties of the system.
Is Information Physical?
By showing the fundamental connection between the physical arrangement of matter and information we can better understand Mutual Information. To build this case let’s look at some well-known thought experiments.
The first is Maxwell’s demon. Maxwell’s demon was put forward by physicist James Clerk Maxwell to suggest how the second law of thermodynamics might be violated. He argued it might be possible to create a temperature difference in a gas without expending work.
In this thought experiment Maxwell imagines a demon who operates a trap door between 2 compartments of a closed box. While average particle velocities are fairly constant, individual gas molecules travel at fast (red) and slow (blue) speeds. When a given molecule approaches the trap door the demon opens and shuts the door such that all fast molecules end up in one chamber and slow molecules in the other. Note that the shutter is frictionless and thus no work is performed by the demon.
This scenario leads to one of the chambers being hotter than the other. We now have a situation where we have decreased entropy (since there is more order). This results in a temperature difference that could be exploited in a heat engine, and thus we have apparently violated the second law of thermodynamics (entropy only increases).
Figure 21 Maxwell’s Demon, showing how thermodynamics might hypothetically be violated by demonstrating the possibility of a gas evolving from a higher to a lower entropy state. (demon and arm icons from icons8)
Maxwell’s thought experiment was a reasonable challenge to the idea that thermodynamics depends fundamentally on atomic physics (and as such could be treated probabilistically). After all, if it was just a matter of knowing more details about a system (individual particle velocities) then isn’t it possible we could arrange for a decrease in entropy to occur?
This all suggests that small particles might be open to exploitation, making Maxwell’s Demon realizable. Importantly, Maxwell’s argument suggests entropy has a subjective quality to it. The value of entropy one measures depends on the amount of information one has about the system.
We don’t regard this thought experiment as a true demonstration of the violation of the 2nd law of thermodynamics, since it’s expected the demon would in fact increase entropy by segregating the molecules. But the argument stands; theoretically a sufficiently intelligent being could arrange for a perpetual motion machine to be constructed, and so more insight is needed to bring the 2nd law back into focus.
We now know there is some fundamental connection between obtained knowledge and entropy. For non-demon entities like us the acquisition of information means measurement. Given Maxwell’s argument we can say that measurement is required for entropy reduction to take place (to gain knowledge of the system’s configuration). The outstanding question is this: assuming the 2nd law cannot be violated, what is the compensating entropic cost of this measurement?
This brings us to Leo Szilárd, who had his own version of Maxwell’s thought experiment using a single molecule. Imagine a box containing a single molecule, with no partition, in contact with a heat bath (e.g. a surrounding container of water). A partition can then be inserted into the box, which divides the container into 2 volumes. The partition is also capable of sliding without friction, left or right. As soon as the partition is added to the container, collisions between the molecule and the partition exert a pressure on the partition. Since the partition moves after being inserted we could theoretically add a pulley to the partition that lifts a weight and thus extracts work (molecule hits partition -> exerts pressure -> partition moves -> pulley moves -> extract work).
If we added the partition to the middle of the container we then create a situation where the single molecule would be on either side with equal probability.
Now imagine we knew which side of the box the molecule was on and we then introduced a partition in the middle. We could hook up a pulley and weight system and extract work as shown in Figure 22. The work extracted is drawn from the heat bath due to thermal contact. When the partition reaches all the way to one side of the container it can be removed, representing one full cycle. This process can be repeated indefinitely with work continually extracted.
The above description is known as Szilard’s Engine:
Figure 22 Szilard’s Engine is a modification of Maxwell’s Demon thought experiment showing how work could theoretically be extracted using only the acquisition of information.
In Szilard’s single molecule experiment the possession of a single bit of information corresponds to a reduction in the entropy of the physical system (remember, if something moves towards equilibrium then it must have less order, more uncertainty, more entropy). Szilard’s engine is thus a case of information to free (useful) energy conversion since we had to have possession of information (where the particle was) in order to position a piston such that work could be extracted. Without possession of information we wouldn’t know how to hook up the pulley system to extract work. This means that in order for the 2nd law to not be violated the acquisition of knowledge must be accompanied by some entropic cost.
Szilárd solidifies the relationship of Maxwell’s demon to information by connecting the observer via measurement. Rather than some supernatural demon being in control of the information, we are in control of the information, and can use this to affect the entropy situation. The take-home message here is that Szilárd showed how the possession of information can have thermodynamic consequences.
The work of Maxwell and Szilard opens up a new set of questions regarding the physical limitations of computation. Between Szilard and some other key players (Brillouin, Gabor, Rothstein) it became apparent that the acquisition of information, via measurement, required a dissipation of energy for every bit of information gathered. More generally, as suggested by von Neumann, every act of information processing was necessarily accompanied by this level of energy dissipation².
This brings us to Rolf Landauer, who in 1961 put forward his own version of the thought experiment. Imagine Szilard’s engine starting in a thermalized state (equilibrium). There would be a reduction in entropy if this were re-set to a known state. The only way this is possible under information-preserving, microscopically deterministic dynamics is if the uncertainty gets “dumped” somewhere else. This “somewhere else” could be the surrounding environment or other degrees of freedom that are non information-bearing. In other words, the environment would increase in heat, and again, the 2nd law of thermodynamics is preserved.
There must be some physical substrate on which computation occurs. Landauer argued that any physical system designed to implement logical operations must have physical states that correspond to the logical states.
To understand this we need to appreciate the distinction between logically reversible and logically irreversible operations.
An operation is logically reversible if the input state can be uniquely identified from the output state².
Figure 23 The difference between logically reversible and logically irreversible operations.
The NOT operation is an example of a logically reversible operation. This is because if the output is 1 then the input must be 0, and vice versa. The AND operation is an example of a logically irreversible operation since if the output is 0 there are multiple possible inputs (3 of them in this case), as shown in the above figure. Note that logically reversible operations must have a 1-to-1 mapping.
Reversible operations do not compress the physical state space (the set of all possible configurations of a system). On the other hand, irreversible operations compress the logical state space (look at the table in the above figure; 3 combinations of input led to a single output) and thus do compress the physical state space. Landauer argued that a compression of the physical state space must be accompanied by a corresponding entropy increase in the environment (via heat dissipation).
It turns out most logical operations are irreversible and thus, according to Landauer, must generate heat². The most basic logically irreversible operation is resetting a bit. Landauer used this idea to quantify the heat generation described above.
Take as an example 2 input states (0 and 1) that outputs to 0 (a reduction in logical state space) as shown in Figure 24. Similar to the Szilard engine we have a container in thermal contact with a heat bath, with a single molecule and a partition in the middle. We will say that the molecule on the left is logical state 0 and the molecule on the right is logical state 1.
Now remove the partition and allow the single molecule to roam freely throughout the container. Now reintroduce the partition on the far right side and push it towards the center. We know from our previous discussion that the single molecule will exert a pressure on the partition, which requires work to be performed (again, the energy from this work is transferred to the heat bath).
Figure 24 Landauer’s thought experiment showing an irreversible operation that moves from 2 possible inputs to one definite output.
We have arrived at something called Landauer’s principle, which states that no physical implementation of the resetting operation can do better than this: resetting a bit to zero must convert at least a certain minimum amount of work into heat. That minimum turns out to be kT ln 2.
This is known as resetting, also called erasure.
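Plugging numbers into Landauer’s bound is a one-liner. Assuming room temperature (300 K is my assumption here, not something specified above):

import math

k_B = 1.380649e-23                 # Boltzmann's constant, in joules per kelvin
T = 300                            # assumed room temperature, in kelvin

landauer_limit = k_B * T * math.log(2)
print(landauer_limit)              # roughly 2.9e-21 joules to erase a single bit

That is an extraordinarily small amount of heat, many orders of magnitude below what today’s hardware dissipates per bit, but it is not zero, and that is the point.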
Landauer argued that by only referring to the abstract properties of a logical operation we can deduce a thermodynamic constraint upon any physical system that performs a logical operation. There is a definite connection between abstract logical functions and their physical implementation. Landauer’s Principle makes a strong case for the idea that information is physical.
Information as the Number of Yes-No Questions
This is where the real connection between the information-theoretic and thermodynamic views of entropy come into play. We saw from the thought experiments of Maxwell, Szilard and Landauer that entropy can be viewed as the amount of information needed to define the detailed microscopic state of the system, given what is known at the macroscale. Specifically, it is the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.
At this point we realize that whatever we observe at the macroscopic level does not communicate the detailed information about a microscopic state. And from Landauer we realize that if information has a physical representation then the information must somehow be embedded in the statistical mechanical degrees of freedom of that physical system.
What Exactly is Dependency?
Our tour through the physical interpretation of entropy wasn’t for historical interest or for forcing some intuitive “analogy.” The physical is needed to grasp what dependency is. We already know the physical biasing of a probability distribution changes the entropy, and thus the amount of information contained in a variable. We first saw this when we bent the coin, and then with Szilard’s engine, placing the partition at some arbitrary location (rather than the middle). What we are left with is the following realization:
Dependency, at its core, is the coincidence of physical configurations between 2 or more things being measured.
To shirk the physical underpinnings of information allows one to get away with any kind of statistical measure of correlation. But when a physical substrate is accounted for such simplistic notions of dependence are not possible. Pearson’s correlation doesn’t look at the configurational difference between variables. It instead uses a summary (variance) of an assumed distribution (Gaussian) and compares these using vectors of data points.
But mutual information does look at the physical configurational coincidence between variables since it looks at how physical outcomes of a random event differ. These outcomes are captured by a probability measure untethered to any specific distribution, and this is what makes MI a general measure of dependence.
Mutual Information in the Wild
Let’s bring our conceptual tour full circle by revisiting the definition of mutual information, originally displayed in Figure 5:
We know MI uses KL-divergence, meaning it uses the discrepancy between a joint distribution and the product of marginals to measure information. Let’s build our intuition around joint and marginal probabilities using the classic urn example.
The following figure shows blue and red marbles being drawn from 2 urns:
Figure 25 Independent draws from 2 urns.
Consider A and B as discrete random variables associated with the outcomes of the draw from each urn. The probability of drawing a red marble from either of the urns is 2/3 since there are 8 red marbles out of 12 total marbles (in each urn).
In probability we typically write out the possible counts in a table as shown below:
Figure 26 Table showing the various probabilities resulting from 2 independent draws (discrete random variables).
This shows the various probabilities depending on the outcome (the possible combinations from the 2 draws). We have drawing a red marble from both urns, drawing a blue marble from both urns, and drawing one red and one blue marble. The center (green) cells are the joint probabilities. The red cells are the marginal probabilities.
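Here is the same table built in Python from the urn example (8 red and 4 blue marbles per urn, draws assumed independent):

import numpy as np

p_red, p_blue = 8 / 12, 4 / 12                    # probabilities for a single urn

# Joint probability table for two independent draws (rows: urn A, columns: urn B)
joint = np.outer([p_red, p_blue], [p_red, p_blue])

marginal_A = joint.sum(axis=1)                    # row sums: the marginal cells
marginal_B = joint.sum(axis=0)                    # column sums: the marginal cells

print(np.round(joint, 3))        # [[0.444 0.222] [0.222 0.111]] -> the joint cells
print(marginal_A, marginal_B)    # both [0.667 0.333]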
If we were using machine learning to classify cats in images containing either cats or dogs we would be comparing a vector of predicted labels to a vector of actual labels in a test set. Both of these vectors have distributions, which are either similar to each other or not. Thus a practical use of KL-divergence would be to use it as a loss function in a machine learning algorithm, where internal model parameters are adjusted until the delta between predicted and actual labelling is minimized.
Figure 27 Supervised machine learning involves comparing a predicted vector to a test vector using a loss function as a measure of “distance.”
To connect this to the concept of joint and marginal probabilities let’s reframe the comparison between predicted and actual vectors in terms of the urn example we looked at previously.
Figure 28 Looking at the similarity between vector comparison (in ML) and independent draws from 2 urns.
Each predicted label is compared to an actual label, which is analogous to a single draw from 2 urns. The learning algorithm is “observing” the draws from the “urns” as 1 of 4 possibilities: cat-cat, cat-dog, dog-dog, dog-cat. Just as with the urn example we can use the number of each of these possibilities to fill a probability table.
Figure 29 The confusion matrix is nothing more than a probability table counted as independent “draws” from labelled vectors.
Anyone familiar with machine learning will immediately recognize this as a confusion matrix. Confusion matrices are used to assess how well our classifier is performing. It is a matrix of the 4 possible outcomes between 2 vectors containing 2 labels. We can thus view the confusion matrix in terms of probability by realizing that the center cells hold the joint probabilities, while the outer row and column hold the marginal probabilities.
Now we can see where the 2 distributions that are being compared in MI come from:
Let’s wrap up by looking at an alternative expression for MI, compared to what we saw previously in Figure 5.
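That alternative expression is the standard identity

$$ I(X;Y) = H(X) + H(Y) - H(X,Y) $$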
Here MI is depicted in terms of the additive and subtractive relationships of various information measures associated with variables X and Y. H(X) and H(Y) are the marginal entropies, while H(X,Y) is the joint entropy of X and Y. This tells us that mutual information is the joint entropy subtracted from the sum of the marginal entropies. We can see this visually in the following Venn diagram:
Figure 30 Venn diagram showing Mutual Information as the additive and subtractive relationships of information measures associated with correlated variables X and Y. (adapted from Wikipedia)
The area contained by both circles is the joint entropy H(X,Y). Joint entropy is a measure of the uncertainty associated with a set of variables.
Figure 31 Joint Entropy
Recall the thermodynamic entropy we saw in Figure 18 (Gibbs entropy). Notice the similarity between that equation and the equations above. The approach to capturing the uncertainty (entropy) of a system is to sandwich probabilities around a log.
In the case of Gibbs entropy we took the probabilities of the individual microstates and summed them. By using individual probabilities and summing them we allowed for distinct contributions from different microstates. Here, we use individual joint probabilities and sum them, so as to allow for distinct contributions from different random variables.
In other words, if entropy is calculated by sandwiching individual probabilities around a log, then it makes sense that joint entropy would be the same thing but with joint probabilities. It thus conceptually makes sense that joint entropy is a measure of the uncertainty associated with a set of variables since entropy is how we calculate uncertainty, and we are taking into account how 2 variables probabilistically act together.
Of course we can generalize this out to more than just 2 variables. Imagine more summation signs and more variables in the above equation. Still the same approach. We are merely measuring the uncertainty associated with many variables.
What about the individual entropies (full circles on the left and right)? These are the entropies we already know from our discussion on information-theoretic entropy:
Figure 32 Individual entropies are the self-information contained within a variable.
We can think of these individual entropies as the expected value of the “self-information” contained within the variable. In other words, each variable has some level of “surprisal” (recall earlier) contained within it that we have yet to access. Upon learning the outcome (of the event) we will gain access to this information.
What about the conditional entropies, which are the parts of the circles that are not overlapping in Figure 30? Conditional entropy is defined as follows:
Figure 33 Conditional entropy of a random variable.
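One common way to write it, consistent with the entropies above, is

$$ H(Y \mid X) = -\sum_{x,y} p(x,y) \log p(y \mid x) = H(X,Y) - H(X) $$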
It should be noted that the Venn diagram is considered somewhat misleading. Refer here https://www.inference.org.uk/itprnn/book.pdf
Thinking of mutual information in terms of adding and subtracting entropies gives us an additional view on how MI is calculated.
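Continuing the confusion-matrix sketch from earlier, the same MI value can be recovered by computing the marginal and joint entropies and combining them, which makes the Venn-diagram view concrete:

```python
import numpy as np

counts = np.array([[40.0, 10.0],
                   [ 5.0, 45.0]])
p_joint = counts / counts.sum()
p_actual, p_pred = p_joint.sum(axis=1), p_joint.sum(axis=0)

def entropy(p):
    """Shannon entropy (in bits) of a probability array, ignoring zero cells."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

H_X = entropy(p_actual)               # marginal entropy of the actual labels
H_Y = entropy(p_pred)                 # marginal entropy of the predicted labels
H_XY = entropy(p_joint.flatten())     # joint entropy

print(f"I(X;Y) = H(X) + H(Y) - H(X,Y) = {H_X + H_Y - H_XY:.4f} bits")
```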
Summary
Causality is a central concept in our lives, underlying virtually every decision we make. The industry standard approach to uncovering causality is Pearson’s correlation. There are a number of severe limitations baked into Pearson’s, relating to its assumptions regarding how random variables might be related. Mutual Information digs much deeper into the notion of dependence by focusing directly on information itself. Further, by understanding information in terms of its “physicality” we can see how both thermodynamic and information-theoretic interpretations of entropy make any entropic measure of dependency, like MI, truly fundamental.
Further Reading | https://medium.com/swlh/a-deep-conceptual-guide-to-mutual-information-a5021031fad0 | ['Sean Mcclure'] | 2020-11-08 05:29:23.715000+00:00 | ['Science', 'Machine Learning', 'Correlation', 'Probability', 'Statistics'] |
Lawyers need to get their stories straight… | Lawyers need to get their stories straight…
…Or risk missing out on growth
The legal industry has got a flame under it. While the news value of investment banking is in a down cycle, the fizz in the law firm world is becoming audible.
Domestic U.S. firms are merging to strengthen regional and international positions and international firms are merging to strengthen global positions. In the UK, there have been 108 law firm mergers since 2011 — with the world’s now-largest law firm by number of lawyers, Dentons, leading the way. It seems that the pain of the financial crisis has shocked everyone into doing all they can to promote stability — meaning smarter and better controls, more pre-warning and tighter regulation. And many believe that increased stability is a direct result of increased size.
It wasn’t always like this. Just after the crisis, when we worked with firms like Osborne Clarke, Allen & Overy and the legal world’s equivalent of the Star Alliance, Lex Mundi, firms were loosely grouped in three clusters — predominantly U.S. domestic firms with not that much interest in global business; major U.S., Asian and European firms with a lot of global interest; and ones in between, kind of bridging firms. So what’s changed? Is there still room for firms of all shapes and sizes? If so, what role does brand have to play?
I’m certain that we will still have groupings who decide to zag while others zig.
The current legal commentary is focused on the question of confidence in your brand. It’s hard to imagine for example, Quinn Emanuel taking a risk in merging with a player that could in any way divert its meaning and direction or detract from its existing reputation.
Equally, while all around are tying knots, there are many firms who would rather concentrate on what has produced their long-lasting client relationships and strong profitability. For instance, we all know that when clients are asked about cross-border transactions and who they’d rather handle them, they always say pre-existing relationships, lawyers they know. The brands they already trust. So for some, it was no surprise to read recently that the firm leading VW’s defence in the U.S. has only 66 lawyers. So this begs the question about the size thing… is it a sure-fire winner to go out there and rapidly build market share through mergers?
Felix Oberholzer-Gee from the Strategy Unit at Harvard Business School thinks possibly not. Rapid growth can be accompanied by a slide into a commoditised position as pricing is used to ensure market share gain feeds the larger machine. This puts all kinds of pressures onto the firm, perhaps the worst being that it could kill the motivation of associates to drive as hard for equity partnership as there’s not enough profit around for them to make it worthwhile. Theorists point to the current era in law as one step away from a different business model, closer to that of the accountants.
But all of that is fine and easy to understand. When it comes to brand (or reputation), the difficult bit is finding the right way to talk about your strategy, even to be able to define your strategy in such fluid circumstances.
From our experience with firms of all sizes, it’s not a question so much for people running the practice where this matter is gaining importance, but a question for the larger groups of partners who are getting swept along in the current. The irritating thing for many of these partners and their colleagues is the distraction of seeing the merger after the event, as somehow being the cause of some kind of limitation to growth because of cultural difficulties.
Indeed, sometimes these mergers fail the even more basic test — ‘what do my clients think?’ That’s when it can get tough.
It’s at that point that people in these firms are bound to ask, what are we doing, where are we trying to get to, what’s the organising idea that we use for our work, and how do we, all of the lawyers and the professionals in this new vision, get behind this and make it work?
So I’d suggest that whether you’re big or small, merged or independent, there’s never been more pressure to get your story straight — for your clients, your future trainees and your partners.
It’s not just the story you decide to tell but how one starts to think about the story that matters. And never forget, your brand — the external perception of your business — IS your story.
Lawyers are brilliant at reaching the positive by identifying and eliminating the negatives. But reputations and the cultures that create them are sometimes softer and more nebulous, they are harder to probe and also quite probably harder to empathise with when all around you is a race to get bigger.
Straight stories go down well. They should also be the easiest to tell. | https://medium.com/dragon-rouge/lawyers-need-to-get-their-stories-straight-or-risk-missing-out-on-growth-2907863ec787 | ['Dragon Rouge'] | 2016-04-19 16:06:46.847000+00:00 | ['Storytelling', 'Law', 'Branding'] |
AI-Generated Review Detection in Book Reviews | Abstract
Online market stability is predicated upon consumer trust. Most online platforms provide an open and semi-anonymous platform for posting reviews for a wide range of products. Due to the open and anonymous nature of these platforms, they’re vulnerable to reviews being faked, and the most efficient way to do this is to generate reviews using one of the many advanced Natural Language Processing (NLP) pre-trained models. These models are generally offered as free and open-source research projects that utilize advanced research into language processing and semantic understanding. We have developed a Machine Learning product that utilizes, in some cases, the same technology that these blackhat users work with in order to detect these AI-generated fake reviews.
We have utilized transfer learning techniques in order to design a robust detection system. Transfer learning is the simple act of storing knowledge gained solving one problem and applying it to another problem. We have achieved this by utilizing two cutting-edge NLP model architectures, OpenAI’s GPT2, and Google AI’s BERT. These models utilize an advanced Neural Network concept known as Transformer architecture which utilizes stacks of encoders and/or decoders to process text data in a way that can draw context from surrounding words in a sequence.
We leveraged GPT2’s specialization in text generation and BERT’s ability to classify text. Using a set of 50,000 Amazon book reviews sampled from 51 million reviews, we fine-tuned GPT2 to generate book reviews. We then combined the real reviews with the generated fakes into a labeled dataset of 100,000 reviews on which to train our BERT classifier. We designed an architecture that, layered on top of BERT, allows for greater classification ability, and with this architecture combined with a BERT base layer we achieved an 80% success rate in detecting our AI-generated reviews.
Consumer Trust
One of the most important consumer metrics for shopping in general, but even more so for online shopping, is almost immeasurable.
Trust is an invaluable tool that, wielded correctly, can build a platform, product, or service to must-have status in our have-it-now society. The flip side of that coin is that a single viral video can destroy your brand or product overnight. These are both exciting and scary prospects for any business that is facing them. The entire world is peer-reviewed now and one of the ways that this is most obvious is through customer interaction via reviews. Customer reviews sell products.
According to Insider Intelligence, a market leader in marketing and business insights:
In yet another sign that online reviews can make or break the path to purchase, June 2019 research from Trustpilot found that consumers would lose trust in a brand not only if they saw negative reviews, but also if the brand went one step further and deleted them. For a majority (95.0%) of the digital shoppers surveyed worldwide, that type of behavior played a big role in their distrust of a company, as did not having any reviews at all (cited by 81.0%). When asked what factors would lead to an increase in brand trust, three of the top 10 factors centered around reviews. Nearly all respondents said positive customer reviews increased their trust in a brand, while 80.1% said they trusted companies that have a lot of customer reviews. Interestingly, if a company responded to negative customer comments, that would drive up trust for 79.9% of those surveyed.
(emphasis added)
As the pandemic has further transitioned our economy into an online and digital economy these consumer reviews hold even more weight. We can see the rapid growth of online shopping visits in just the first half of 2020 in this chart on Statista:
The Issue with Online Reviews
Online reviews are one of the many semi-anonymous means of communication available on the internet. As we’ve seen with the numbers provided by TrustPilot and Insider Intelligence, 95% of individuals are influenced by a positive online reputation and 93% are influenced by positive reviews. The inverse is also true, with 95% of respondents being influenced by negative reviews or comments. This system and its influence can be greatly abused by our advancing technology.
There is a dark side to the exponential advances in machine learning that we have seen in relation to things such as Natural Language Processing (NLP). These techniques are being applied to the manipulation of customers.
The Scientific American covered this in an article in late 2017:
When Hillary Clinton’s new book What Happened debuted on Amazon’s Web site last month, the response was incredible. So incredible, that of the 1,600 reviews posted on the book’s Amazon page in just a few hours, the company soon deleted 900 it suspected of being bogus: written by people who said they loved or hated the book but had neither purchased nor likely even read it. Fake product reviews-prompted by payola or more nefarious motives-are nothing new, but they are set to become a bigger problem as tricksters find new ways of automating online misinformation campaigns launched to sway public opinion.
Consider that 3 years is a long time in tech development: as research advances at an incredible pace, generating these fake reviews has become easier and easier. Generating fake reviews, once the domain of university research labs, is now available to anyone with enough technical acumen and seed money to rent GPU power. It has become easier and easier to cheat the system, and while the opposing technology advances at a similar rate, edge cases will always fall through the cracks.
It isn’t difficult to imagine a full pipeline of ML assisted tools that could be deployed from front to back to assist with anything from fake review generation to political Twitter bots.
Simple web-interaction scripts to create bogus accounts and email addresses.
Use of open datasets to train cloud-based ML solutions through one of the widely available frameworks.
Dashboard deployment for use of the product as a service (SaaS).
The Solution
Fight fire with fire.
Push has come to shove in this end-user manipulation fight. We can use the same tools that the black-hat users leverage to beat them at their own game. Fortune 500s are using their resources to combat this issue, but the nefarious users abusing these review systems are more agile in most cases. Large-scale deployment and corporate bureaucracy are slow processes that will always be behind the curve. It is important to employ a faster and more efficient method.
That faster method is to use one of the robust, pre-trained models as the foundation of our fake-detection bots. Making use of deeply researched and efficiently trained models allows for a quicker turnaround and a more fine-tuned approach to modeling.
Data
Initially, we attempted to load all 51 million book reviews, as provided by Julian McAuley, into a single dataframe. Because dataframes are memory inefficient and unable to stream data, we had to engineer a workaround. After further research, we decided upon MongoDB.
MongoDB
MongoDB is a simple document-based database system that provides great flexibility in its expandability and extensibility. It does this by:
Offering JSON-like document storage, meaning data structure can change document to document
Easily map objects to application code
Ad hoc queries, indexing, and real-time aggregation.
Distributed at its core.
Free to use under the SSPL v1 license
For our uses, we were able to load a 20+ GB JSON file of 51 million reviews in as a database. We were then able to aggregate and further sample the data so that we could feed a selection of the reviews into our model for fine-tuning.
Thus we ended up with a corpus of 50,000 reviews on which to fine-tune our GPT2 model for review text generation. We chose not to push the number further due to a lack of compute resources. Were we working with a distributed network architecture, we could easily have expanded the corpus size.
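As a rough sketch of that sampling workflow (the database, collection, and field names here are assumptions rather than the project's actual schema), streaming a random subset of reviews out of MongoDB might look like this:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")   # assumes a local MongoDB instance
reviews = client["amazon"]["book_reviews"]           # hypothetical database/collection names

# $sample draws a random subset server-side, so the full 51M documents never sit in memory
pipeline = [
    {"$sample": {"size": 50_000}},
    {"$project": {"_id": 0, "reviewText": 1, "overall": 1}},  # keep only the fields we need
]

sampled = [doc["reviewText"] for doc in reviews.aggregate(pipeline, allowDiskUse=True)]
print(len(sampled), "reviews sampled")
```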
PyTorch Dataset
PyTorch has an inheritable Dataset class that can be used with the rest of the PyTorch framework. A Dataset represents a collection of data samples and comes in two styles, map-style and iterable-style (a minimal map-style example is sketched after the list below).
Map-Style — Represents a map of Key-Value pairs to data samples within the dataset.
Iterable-Style — Represents an iterable dataset like that which could be streamed from a database, remote server, or even generated in real-time.
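Here is a minimal map-style Dataset wrapping review strings and real/fake labels. The field names and example labels are placeholders; in the real pipeline the text would also be tokenized before reaching the model.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ReviewDataset(Dataset):
    """Map-style dataset: an indexable collection of (review text, label) pairs."""

    def __init__(self, texts, labels):
        self.texts = texts
        self.labels = labels

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Tokenization could happen here (or in a collate_fn) before batching
        return self.texts[idx], torch.tensor(self.labels[idx], dtype=torch.long)

# Usage: 0 = real review, 1 = AI-generated (labels here are illustrative)
ds = ReviewDataset(["great book", "loved the plot twists"], [0, 1])
loader = DataLoader(ds, batch_size=2, shuffle=True)
```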
Computers do not natively understand words and that is an issue because a lot of the tasks we wish to automate involve processing language. There is a process that has been developed to change the text into a medium that computers can ingest.
Tokenization
Tokenization is the process by which words are converted to tokens that our models can process.
Tokens are one of a few things: words, characters, or subwords.
The most common way of separating tokens is by space, assuming that the strings being fed into the tokenizer are delimited by spaces.
Tokenization is the most important step in text preprocessing as it converts words into data that our models can take in. It does this by transforming the words, characters, or subwords into tokens and then these tokens are used to generate vocabularies. These vocabularies can then be used by our models to operate on text data.
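With the Hugging Face transformers library (one common way to get BERT's tokenizer, shown here as an illustration rather than the project's exact code), tokenizing a review looks roughly like this:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

encoded = tokenizer(
    "This book completely changed how I think about probability.",
    padding="max_length",   # pad/truncate to a fixed sequence length
    truncation=True,
    max_length=80,          # matches the 80-token sequences mentioned in the conclusion
    return_tensors="pt",
)

print(encoded["input_ids"].shape)       # the token IDs the model ingests
print(encoded["attention_mask"].shape)  # 1s for real tokens, 0s for padding
```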
Bidirectional Encoder Representations from Transformers (BERT) Model
Bidirectional Encoder Representations from Transformers (BERT) was presented in a white paper by Google’s AI Language team in late 2018, and it made waves in the NLP community for its wide variety of uses for everything from question answering to inference. As the name suggests, its advancement is the bidirectional training of the Transformer architecture.
Before the paper, NLP training involved parsing text left-to-right, or combining left-to-right and right-to-left training. The paper showed that a deeper context for language could be drawn from bidirectional traversal of sentences, which allowed the model to draw deeper context and flow than a single-direction model allowed.
BERT is based on a novel mechanism named Transformer architecture. Transformers are models that are, at a very base level, a set of encoder cells and decoder cells that use context from input sequences to produce output sequences. An encoder takes in a sequence and compiles it into a vector called context. The context is then passed to the decoder, and the decoder uses this context vector to produce a sequence token by token. In earlier sequence-to-sequence models, these encoders and decoders tended to be Recurrent Neural Networks.
Transformer Flow (image courtesy)
Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are directed-graph networks that work on data through a series of temporal steps. A basic RNN takes N input vectors and will output N output vectors based on the input, and it does this by remembering the context of the sequence and training based on past decisions. RNNs remember past decisions in hidden state vectors that influence the outputs of the network, which lets them consider the context of the sequence at a given time (temporal) step. RNNs give a past step the ability to influence a future step, and their depth can be increased by adding additional hidden states or additional nonlinear layers between inputs and hidden states.
Encoders, Decoders and Context
Context is a vector, of a size you define, that is generated during token sequence encoding in a sequence-to-sequence model; its size is the number of hidden units used by the encoder RNN. RNNs take two inputs at each step: the current element of the input sequence and the hidden state. These context vectors need to be generated somehow, and that is through word embeddings. These embeddings generally turn words into vectors that capture the meaning and information of the words. These encodings are generated to be passed to the decoder in order for it to ‘understand’ the context of the token sequence.
Attention Mechanism
Though these context vectors provided context, the approach was supremely inefficient, and therefore new mechanisms were developed to combat this while maintaining the use of meaning and semantic information in the outputs. Several landmark papers introduced a method known as Attention, which improved the efficiency of machine translation systems.
A traditional sequence-to-sequence model only passes the final hidden state from the encoder to the decoder. An attention-based network works differently:
Encoder:
An encoder in an attention-based network instead passes all of the hidden states of the encoding sequence.
Decoder:
The decoder also does significantly more work. It receives all of the hidden states, and therefore the hidden states provide context for each token in the sequence instead of just the final state. Next, all hidden states are scored. Finally, the scores are softmaxed. Softmax takes a vector of K real numbers and transforms them so that they sum to one, thereby minimizing the low-scoring states and amplifying the high-scoring ones.
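A stripped-down numeric sketch of that scoring-and-softmax step (with invented dimensions and random values) looks like this:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

hidden_size, src_len = 8, 5
encoder_states = np.random.randn(src_len, hidden_size)  # one hidden state per source token
decoder_state = np.random.randn(hidden_size)             # current decoder hidden state

scores = encoder_states @ decoder_state    # score every encoder hidden state (dot-product scoring)
weights = softmax(scores)                  # amplify the relevant states, suppress the rest
context = weights @ encoder_states         # weighted sum = the context vector for this step

print(weights.round(2), context.shape)
```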
Self-Attention
Self-attention is another branch of the attention mechanism. It relates the positions of tokens in a sequence to one another in order to compute a representation of that same sequence.
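Self-attention follows the same score-softmax-weight pattern, except that the queries, keys, and values all come from the same sequence. A bare-bones scaled dot-product version, with dimensions invented for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model = 6, 16
X = np.random.randn(seq_len, d_model)                  # token embeddings for one sequence
Wq, Wk, Wv = (np.random.randn(d_model, d_model) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
attn = softmax(Q @ K.T / np.sqrt(d_model))             # each token scores every token in the same sequence
output = attn @ V                                       # context-aware representation of the sequence

print(output.shape)  # (6, 16): one updated vector per token
```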
BERT not only builds upon several years of research as a foundation it also adds its own twist to the mix.
Masked LM
Before feeding token sequences into BERT, ~15% of the words are replaced by a masking token ([MASK] by default). The model then attempts to predict the masked words based on the context provided by the other words in the sequence. This is done by placing a classification layer on the output of the transformer, multiplying the output vectors by the embedding matrix, and finally predicting the probability of each candidate word using a softmax.
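To make the masking step concrete, here is a toy version of randomly replacing roughly 15% of tokens with the mask token (real BERT pre-training also sometimes swaps in random tokens or leaves the chosen tokens unchanged, which is omitted here):

```python
import random

tokens = "the quick brown fox jumps over the lazy dog".split()
MASK = "[MASK]"

masked, targets = [], {}
for i, tok in enumerate(tokens):
    if random.random() < 0.15:    # ~15% of positions get masked
        masked.append(MASK)
        targets[i] = tok           # the model must predict the original token at this position
    else:
        masked.append(tok)

print(masked)
print(targets)
```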
Next Sentence Prediction
BERT receives pairs of sentences as part of the training input. It can learn to predict if the second sentence is subsequent to the first sentence. The model is trained on half subsequent sentence pairs and half random pairs from the corpus thereby learning subsequent context between sentences. It is aided in this task by special tokens:
[CLS], which is inserted at the beginning of the first sentence
[SEP], which is inserted at the end of each sentence.
BERT Metrics
Model size is important. BERT_large, with 345 million parameters, performs significantly better than BERT_base with 110 million parameters. More training equates to more accuracy: the longer a model can fine-tune on the chosen corpus, the more accurate it becomes. BERT converges more slowly than a sequence-to-sequence model since there are several extra layers stacked on top of the transformer architecture, but it outperforms other models at similar numbers of training steps.
Model Architecture
Using transfer learning is the heart of our model. The BERT model is already trained with 110 million parameters. It is very important that we do not train the base model further, so we freeze the parameters of our base model by disallowing gradient calculations.
We inherit from PyTorch’s nn.Module so that we can start building our neural network. In our __init__ all we take in is the BERT model that was instantiated earlier. With the BERT model in place, we pass the outputs from the pre-trained model into our LSTM layer. The outputs are then passed through a series of fully connected layers, with the layers after the LSTM separated by batchnorm layers. Finally the outputs are softmaxed to return a vector on a 0-to-1 scale, with lower values minimized and larger values amplified.
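A sketch of what this could look like in PyTorch is below. The layer sizes and the exact number of fully connected layers are assumptions (the description above does not pin them down), but the overall shape (frozen BERT, then an LSTM, then fully connected layers with batchnorm, then a softmax) follows the text:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class FakeReviewClassifier(nn.Module):
    def __init__(self, bert: BertModel, lstm_hidden: int = 256, num_classes: int = 2):
        super().__init__()
        self.bert = bert
        for param in self.bert.parameters():   # freeze the pre-trained base: no gradient updates
            param.requires_grad = False

        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden, batch_first=True)
        self.fc1 = nn.Linear(lstm_hidden, 64)
        self.bn1 = nn.BatchNorm1d(64)
        self.fc2 = nn.Linear(64, num_classes)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                  # BERT is frozen, so skip gradient bookkeeping
            bert_out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        seq_output = bert_out.last_hidden_state          # (batch, seq_len, 768)

        _, (h_n, _) = self.lstm(seq_output)              # final LSTM hidden state summarizes the review
        x = self.bn1(torch.relu(self.fc1(h_n[-1])))
        return self.softmax(self.fc2(x))                 # probabilities: real vs AI-generated

# Usage sketch
bert = BertModel.from_pretrained("bert-base-uncased")
model = FakeReviewClassifier(bert)
```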
LSTM
The Long Short Term Memory architecture was motivated by an analysis of error flow in existing RNNs which found that long time lags were inaccessible to existing architectures because backpropagated error either blows up or decays exponentially. An LSTM layer consists of a set of recurrently connected blocks, known as memory blocks. These blocks can be thought of as a differentiable version of the memory chips in a digital computer. Each one contains one or more recurrently connected memory cells and three multiplicative units — the input, output and forget gates — that provide continuous analogues of write, read and reset operations for the cells. … The net can only interact with the cells via the gates. - Alex Graves, et al., Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures, 2005.
Batch Normalization
The issue with deep neural networks is that model weights are updated backward from outputs to inputs, but in reality all of the model’s layers are updated simultaneously during the updating process; therefore each layer is always chasing a moving target.
Batch normalization has taken up this issue and seeks to solve it:
Batch normalization provides an elegant way of parametrizing almost any deep network. The reparametrization significantly reduces the problem of coordinating updates across many layers. - Page 318, Deep Learning, 2016.
It does this by scaling the output of the layers, standardizing the activations from a previous layer during each mini-batch. This means that the assumptions the subsequent layer makes about the spread of its inputs will not change dramatically. This stabilizes and speeds up the training process.
AdamW
AdamW is the newest generation of adaptive optimizers that are paving the way for super-fast convergence and improving generalization performance.
AdamW works by essentially decoupling the weight decay and the optimization step. This allows the two to optimize separately and therefore find an optimal rate for both. This results in faster convergence and better overall generalization for the models.
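Wiring that optimizer up to the classifier sketched above (model refers to that FakeReviewClassifier instance, and the learning rate and weight decay are arbitrary illustrative values):

```python
import torch
import torch.nn as nn

# Only the unfrozen parameters (LSTM + fully connected layers) are optimized
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=2e-5,
    weight_decay=0.01,
)
criterion = nn.NLLLoss()  # the model outputs probabilities, so we take their log before NLLLoss

def train_step(input_ids, attention_mask, labels):
    optimizer.zero_grad()
    probs = model(input_ids, attention_mask)
    loss = criterion(torch.log(probs + 1e-9), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```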
GPT2 Model
GPT2 is a distinct model architecture that was developed by OpenAI and based upon their original GPT model.
GPT2 is based on Transformer architecture.
GPT2 differs from something like BERT in that it only uses the decoder side of the Encoder-Decoder part of Transformer architecture.
It differs greatly from BERT in that it doesn’t actually change the chosen tokens to [MASK] but instead interferes with the self-attention calculation for the tokens to the right of the position currently being calculated. This is Masked Self-Attention, as opposed to BERT’s Self-Attention.
The Decoder Stack that makes up the GPT2 transformer architecture contains decoders that are cells of masked self-attention layers and then a feed-forward neural network layer. These are stacked to produce the GPT2 architecture.
Before token sequences are passed to the decoder stack, they are first turned into token embeddings (via the vocabulary) and then positional embeddings are added. These embeddings are then passed up the decoder stack.
Text generation is one of the main purposes of the GPT2 model. We take a random sample of the reviews on which we trained the initial model. These are then broken down into a start-of-sentence token followed by a 4-token sequence. This sequence is the prompt, which the model requires: the model takes the prompt and uses its context to generate new text. The 4-token (5, if the start-of-sentence token is counted) sequence can then be tokenized and fed into the model so that it can generate responses to the prompts.
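Using the Hugging Face implementation of GPT-2 as an illustration (the project’s fine-tuned weights, special start-of-sentence token, and sampling settings are not reproduced here), prompt-based generation looks roughly like this:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # the project would load its fine-tuned checkpoint instead

prompt = "This book was honestly"                 # a short prompt of a few tokens
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,       # sample rather than greedily decode, for more varied reviews
    top_k=50,
    top_p=0.95,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```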
Conclusion
Consumer trust is of the utmost importance, and much like fiat currency, trust is what drives the market. Without customer trust, a platform is doomed. It is important to implement some sort of review-sorting and bot countermeasures in order to ensure the stability of your open, semi-anonymous reviews.
Here are the key take-aways:
Transfer Learning
Transfer learning is a must in this sort of boots-on-the-ground style deployment. Large and well-funded research labs and think-tanks have developed and open-sourced these large models for this very purpose. It is always better to use a tool instead of trying to re-invent the wheel because:
Cost efficiency
Time efficiency
Resource pipelines
Pretrained model layers within these neural networks will give back time, be significantly more resource-efficient, and, most importantly, cost less overall.
Training Time
Training length is important for BERT. We have chosen to only train for 50 epochs and have already achieved a testing accuracy, precision, and recall of 80% (+/-5%).
At this batch-size (128 sequences of 80 tokens each) we are training and validating at approximately 220 seconds per epoch. At 3.6 minutes per epoch, we could theoretically train 200 epochs in 12 hours.
Market Backend or Customer Frontend
We have spent most of the notebook referring to this as a project directed towards marketplaces policing their own platforms. However, another valid and recommended use of this technology would be to offer it as a consumer-facing product for any review. While this product is currently a proof of concept, it is easily extensible in several forms in this manner.
Future Works
Work with smaller models, such as DistilBERT, that can provide a functionally smaller footprint while not sacrificing quality. (DistilBERT retains 97% of the language understanding capabilities of BERT) This will allow a much more mobile and deployable model that could function on light-weight devices and even via webapps.
As discussed in several places already BERT greatly benefits from increased training epochs. This would allow us to train for more time and thus further tune the model towards correct predictions. | https://talkdatatome.medium.com/ai-generated-review-detection-in-book-reviews-986a9762ef68 | ['Samuel Middleton'] | 2020-12-04 19:48:29.338000+00:00 | ['Python', 'Pytorch', 'NLP', 'Gpt 2', 'Review'] |
How I Reinvented What It Means for Me to be a Writer | I haven’t been consistent in my writing blog since Jan 2019. I haven’t been a WRITER (in my own eyes) since before then. This is clear from my weekly posts dropping down to 18 in 2019, and only 8 this year.
But I am a Reinventor by nature: I craft my life, created my identity, and follow my inner fire. And at my heart, I’m still a writer underneath it all. But I had, at least in my actions and my thoughts, given up on this piece of myself.
Unbecoming a Writer
I have 16 novels.
To just give you the context, of those 16, 13 are “complete” drafts. Of those 16, 10 are “viable” for the future. Interestingly, the 3 unfinished are in that viable category, alongside my trilogy (all 3 books are completed drafts but need edits), and 4 stand-alone stories.
Aside from those ‘challenge’ NaNoWriMo projects I do not plan to continue working on (i.e. they were dead by the end or I never planned to revisit them), the last long-term creative writing projects I worked on out of joy and the pull to write… were in 2016. Looking at my annual writing stats, I fell off the wagon by the end of 2017.
I began Skeletal in 2016. It’s only 10k done. I wrote the final book of my trilogy in November 2016. I then wrote and had published two short stories in anthologies.
And then aside from the odd moment I picked up The Felled Gods (first drafted in 2014) because I sent it to a beta reader who was interested.
Photo by Sincerely Media on Unsplash
A Decade of Annual Wordcounts:
2009: 50,138 — 2010: 55,300
2011: 50,131 — 2012: 80,052
2013: 81,210 — 2014: 150,263
2015: 104,912 — 2016: 90,244
2017: 100,002 — 2018: 62,001
2019: 51,173
My 11-yr total: 875,426 words of fiction
I had planned not to do NaNoWriMo this year. It would be my 11th year. The last 4 years have left me with 50,000 words of rubbish I’ve hated by the end. I’ve felt good about “ticking the box” but I’m not Being a Writer in them.
I haven’t been a writer in so long.
And yet, somewhere at the end of November 2020 I felt the spark to want to work on stories again. I’ve taken a couple of weeks off between day-job transitions and (thanks to my life coach for stopping me from throwing myself into DO ALL THE THINGS) am taking my own damn advice.
Photo by Laurenz Kleinheider on Unsplash
The Importance of Reflection
I started my self-development business as a baby idea in 2017 while burnt out and depressed. In hindsight, this took the place of my writing time, of my creativity outlet, and although I love the work and am still working on things (having reinvented it to work for me and be clearer for those seeking support), it’s the first week of December and I’m feeling the GAP in my path.
If you’ve known me a while, or follow me on the biz side, you know I identify one of the massive issues with self help is Not Pausing to Reflect.
Like I’ve apparently not truly done for 5 years, hah. >_> <_< #HumanBeing #NotPerfect
Post-Reflection Planning
I’d love to say I took my own worksheets, realised the block, worked through it in my planner and decided to become a writer again. That’s what the reinvention teacher part of me wants to say happened. That this was obvious and simple and I just fixed the problem.
In reality, I had to force myself to finish reading a fiction book (my 3rd fiction book in 2 damn years: again, really might have noticed the signs of writerly-avoidance if I’d paid attention but hindsight is grand and all that) and then spent 4 damn hours crying on the sofa as I grieved for the part of my identity that I no longer share with the world.
Thankfully, she’s still here, and just making the decision to let her write again (without any steps beyond that) has reignited the fire in me in a matter of days.
But the true pivot for me, was in considering giving up ‘writer’ from my identity. I recognised that I have too much going on in my life, and considered all of the things that take my time and attention.
The places I spend my mental energy.
“If I have a few hour to spend on A or B, which would I pick for now?”
In asking this, I considered putting my novels away for a solid period of time.
And I realised that I can’t. It doesn’t even feel like an option. It just felt wrong in my body. Giving up other things did not.
And that was it. I re-opened my writing blog up and began this post. I opened up Scrivener and my manuscript, and here I am, feeling more alive in the last 16 hours than I have in over a year.
Feeling like myself again.
Photo by Halacious on Unsplash
Moving Forward
So for December my current plan is to just let myself write and read, whenever and whatever. I want to explore how just making that decision impacted my wellbeing and sense of identity, even before I opened up my manuscript and story binder.
Because my biggest reinvention story was in shaping my life as a writer.
Want to watch my journey as I reintegrate writing into my life? Here are the best places to connect with me:
I worked full-time the whole way through all lockdowns so this is my first 10-days-off-in-a-row since my honeymoon in 2016. But I’m aware a lot of people have had time-off this year with the pandemic:
Have you had any revelations this year? | https://medium.com/change-your-mind/how-i-reinvented-what-it-means-for-me-to-be-a-writer-d7496ba25ac8 | ['Katy-Rose'] | 2020-12-15 10:32:39.498000+00:00 | ['Writing', 'Self Improvement', 'Reinvention', 'National Novel Writing', 'Identity'] |
2019 is coming to an end. What’s next for AI? | 2019 is coming to an end. What’s next for AI?
The limits of Deep Learning and AI ethics monopolize discussions on the future of AI
AI Now and AI Index 2019 Reports
2019 is ending and two of the most well-known reports on the state of the art in AI — the AI Now 2019 Report and the AI Index 2019 — are out. What are they saying about the state of the art? Where is AI moving next? I will try to summarize it in this short post.
Let’s start by AI Index 2019, developed by the Human-Centered Artificial Institute at Stanford University. As a novelty this year, the AI Index includes for the first time two powerful Data visualization tools:
The Global AI Vibrancy Tool, which allows you to compare countries’ activities around AI across three metrics: R&D, Economy, and Inclusion.
AI Vibrancy Dashboard example
The ArXiv Monitor, a full paper search engine to track metrics from papers on AI published on ArXiv.
ArXiv Monitor search engine
By using those tools, anyone can deep dive into detailed information about the state of AI by country or discipline, but I will try to list here some of the report highlights that I found more interesting while crossing them to some personal thoughts, or thoughts I have shared with some experts in the field:
China now publishes as many papers on AI per year as Europe, having passed the US in 2006.
Attendance of AI conferences continues to increase significantly. As an example, NeurIPS 2019 had over 13,000 attendees, up 41% over 2018 (and 800% relative to 2012).
Post 2012, AI compute power is doubling every 3.4 months. That being said, the report also mentions “progress on some broad sets of natural-language processing classification tasks, as captured in the SuperGLUE and SQuAD2.0 benchmarks, has been remarkably rapid; performance is still lower on some NLP tasks requiring reasoning, such as the AI2 Reasoning Challenge, or human-level concept learning task, such as the Omniglot Challenge.”
While discussing these figures with some experts, the feeling they get is that:
Paper originality has definitely decreased.
A large share of papers now focuses on just presenting minor improvements over previous work by tweaking models, or even by applying brute force (whether in dataset scale or computing power).
It seems that the experts I know personally are not the only ones pointing in that direction, as you can see in this article in which Yoshua Bengio warns that “progress is slowing, big challenges remain, and simply throwing more computers at a problem isn’t sustainable”. Other articles, like this one from MIT Technology Review, go further by suggesting the era of deep learning may come to an end.
Also, as I wrote on my article “Is Deep Learning too big too fail?” it seems that Deep Learning models are becoming massive while accuracy at scale is not benefiting that much from it.
Another interesting field to look for progress is education. While enrollment in AI training (both in traditional universities and online) is growing, I still find two worrying areas. One of them is mentioned in the AI Index 2019 report highlights, while the other does not:
Diversity in AI is still an issue. In particular gender diversity, with women comprising less than 20% of new faculty hires in 2018 or of AI PhD recipients.
While the report focuses a lot on AI talent, another relevant topic is missing, which is how governments and companies are training their non-technical talent to prepare for AI. Actually, the executive summary of the AI Now 2019 Report clearly states that “The spread of algorithmic management technology in the workplace is increasing the power asymmetry between workers and employers. AI threatens not only to disproportionately displace lower-wage earners, but also to reduce wages, job security, and other protections for those who need it most.”
Finally, the AI Index Report Highlights point out a trend that has been quite noticeable during the last months: AI ethics is becoming very relevant, with interpretability and explainability as the most frequently mentioned ethical challenges. This leads me to the second report, the AI Now 2019 Report, which focuses a lot on ethics. Let me try to summarize what I think are some of the most relevant takeaways of this second report. First of all, some executive takeaways:
AI Ethics pressure comes primarily from communities, journalists and researchers, not companies themselves.
While there are efforts underway to regulate AI, government adoption of it for surveillance outpaces them.
AI investment has profound implications for climate change (note the computing power increase rate mentioned before) as well as for geopolitics and the reinforcement of inequities.
Secondly, regarding the recommendations, the authors make it clear that techniques like affect recognition or facial recognition should be banned or not used while not regulated in sensitive environments that could impact people’s lives and access to opportunities. Regarding bias, the authors point out that research should move beyond technical fixes to address broader political consequences.
While I agree, I honestly think that there is another important thing to consider, and that is that bias control should move from research to business implementation, linking this to another recommendation of the paper, namely making Data Scientists accountable for potential risks and harms associated with their models and data.
Of course, the reports deal with a lot more topics, and I would be more than happy to discuss any relevant aspect you find interesting. Which are for you the highlights on these two reports? | https://towardsdatascience.com/2019-is-coming-to-an-end-whats-next-for-ai-3a4cd45f70aa | ['David Pereira'] | 2019-12-17 15:11:22.108000+00:00 | ['Deep Learning', 'Ai Ethics', 'Artificial Intelligence'] |
Long Short Term Memory Network Maths — Part 1 | This is my fourth article in the series of learning basic maths behind neural networks. You can check out my previous articles here:
Coming to LSTM, this is the most daunting of all. Don’t believe me! Just have a look at it..
My heart just skipped a beat when I saw it for the first time (in a bad way of course :P ). But, turns out it’s not that difficult to understand. As usual, I’ll try to explain it in simple words and will just focus on the intuition behind it.
Longer Sequence!
So, from the previous article, we developed the intuition that whenever we have a sequence, we go for RNNs. An LSTM works a bit differently in the sense that it has a global state which is maintained across all the inputs. The context of all previous inputs is basically transferred to future inputs through this global state. And because of this nature, it doesn’t suffer from vanishing and exploding gradient problems.
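To make the idea of a state carried across inputs concrete, here’s a tiny PyTorch sketch (the dimensions are arbitrary):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)

x = torch.randn(1, 5, 10)     # one sequence of 5 timesteps, 10 features each
h0 = torch.zeros(1, 1, 20)    # initial hidden state
c0 = torch.zeros(1, 1, 20)    # initial cell state: the "global" state carried across inputs

out, (h_n, c_n) = lstm(x, (h0, c0))
print(out.shape)              # (1, 5, 20): an output for every timestep
print(c_n.shape)              # (1, 1, 20): the final cell state, carrying context forward
```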
State of Blockchains Q2 2019 | State of Blockchains Q2 2019
Blockchain Founders Raise over $822 million by Q2
Despite several strong indicators in both network activity and user growth, price continues to lag well behind the highs of 2017–18. The ecosystem is more robust and becoming increasingly professionalised, with several new institutional participants.
All on-chain indicators hint at what is about to come
Markets and the weather hold one common attribute — they function in cycles. The token economy has suffered a prolonged winter from 2018 through Spring 2019. However, things have been looking increasingly positive throughout the second quarter of 2019. Whilst the price of tokens is already up roughly 3 times, analysis of on-chain activity indicates that a full recovery may be imminent. Metrics such as transactions, hash power (dedicated to mining) and the number of new wallets opened have begun showing recovery signs similar to highs witnessed during the bull run of 2017. While Bitcoin’s rapid surge from a low of $3,200 to $11,000 surprised many analysts, it could be noted that price is simply following other indicators driven by actual demand in comparison to the pure hype of the past.
Venture dealflow is distributed across stages
The Netscape browser or Hotmail led wider Web adoption, but the token ecosystem is yet to find its gateway to mass adoption. That said, Brave Browser continues to enjoy strong growth with over 5.5 million users, and a number of decentralised finance applications such as Synthetix and Nuo show promise. Investor appetite for risk-taking does not seem to have diminished by the lack of early stage traction either. A cumulative $822 million was raised across 279 deals during this time period; 159 of these were seed stage deals, indicating that new entrepreneurial talent is still flowing into the space. Exchange issued offerings going by the name of ‘IEOs’ have been taking off, indicating an interest from the retail market in still backing new token based projects. Bitfinex claims to have raised $1 billion from a token sale for their exchange even as concerns around Tether’s redeemability loom in the market. IEOs have proven instrumental in helping startups with discovery, listings, and liquidity.
Enterprise Blockchains Embrace Going Open-source
Enterprise focus on the space has evolved from merely testing blockchains in walled environments to now contributing open-source code. JP Morgan and EY have both released open code towards making transactions on Ethereum private. Bank consortiums have been collectively working towards the issuance of a digital currency much like stable tokens. However, the biggest development has been Facebook’s plans to launch a global blockchain initiative focused on payments and remittances with a consortium of partners including the likes of Uber and Spotify. It could indicate that social networks are evolving from an advertisement-driven model to potential social financial service marketplaces that derive revenue from transactions through them instead of advertisements. Facebook could also match social data with financial data and develop further efficiency in how products are advertised to their users.
Governments involvement continues to rise
Where governments were ambiguously trailing market activity in 2017, they have become active contributors to the industry’s evolution in 2019. The SEC and CFTC have captured much of the globe’s attention with their proactive involvement in the industry. In addition to pursuing legal action against what may have been unlawful issuances of tokens in 2017, the SEC has clarified its stance on certain late stage networks such as Ethereum and Kin. Similarly, regions like Hong Kong and Switzerland have clearly issued guidelines for the issuance, trading, and ownership of digital assets. Economies like those of Japan have gone from a fully restrictive, license-based regime to a self-regulated industry over the past 18 months. One indication of how closely governments are tracking the space was on display when Facebook’s Libra launched. The Bank of England suggested it will keep an open mind to the initiative. Meanwhile, governments in the EU and the US asked for a halt to it. We may receive more clarity around token ownership in the upcoming G20 summit. However, if trends point to anything — it is that regulators are no longer learning about tokens. On the contrary, they are active market participants much like they have been with more traditional asset classes such as equities and currencies.
Market Cycles Repeat With Fundamentals In Place
We had reasons to believe a bottom was in when we issued the Q2 report for 2018 and Bitcoin was trading at $6,000. Markets slid further downwards amidst fear and uncertainty. With the market capitalisation of the industry now hitting $336 billion at the time of writing this, we have reason to believe that price is simply trailing other activities within the industry due to a hangover from the ICO mania of 2017.
Unlike the hubris of 2017, the tokens that are seeing traction today have better-defined ecosystems associated with them, trading infrastructure has evolved substantially and regulatory compliance has become easier for institutional giants. With this in context, it is safe to suggest winter is finally over and we may witness a full-blown summer around Bitcoin’s halvening next year.
To find out more read our report below: | https://medium.com/outlier-ventures-io/state-of-blockchains-q2-2019-f2b753906122 | ['Joel John'] | 2019-07-23 13:54:04.609000+00:00 | ['Blockchain', 'Startup', 'Bitcoin', 'Venture Capital'] |
How to Build a Community Lending Library | Photo by Daniel Funes Fuentes on Unsplash
Lending libraries are for more than just books…
You may have seen a lending library in your own neighborhood. You know, those cute little boxes stuffed full of books, free for the taking. Some communities are taking the idea a few steps further by pooling together by creating lending libraries for other items. Ever borrow a cup of sugar from a neighbor? Imagine that on a much larger scale.
Best Items to Use for Community Lending Libraries
The only limit to building a community lending library is your own imagination. The best items to lend are those that consume a lot of resources to own individually. A lot of resources are consumed when we take a quick trip to the store for a simple item. Some of these may include:
Gardening Equipment — Shears, Lawnmowers, Leaf-Blowers, Gloves. Why have 15 lawnmowers in the neighborhood?
Kitchen Equipment — Bread Machines, Blenders, Extra Dishes & Cutlery for Events. These items often sit in cupboards for long periods of time.
Car Maintenance Equipment — Ice Scrapers, Car Washing Tools, Vacuums. Why drive across town to the automated car wash?
Home Repair Equipment — Screwdrivers, Drills, Spackle, Glue. Sometimes, you need these tools for a few minutes to make minor repairs.
First-Aid & Medical Equipment — Wheelchairs, Crutches, First-Aid. This is especially helpful for senior living.
Non-Perishable Food — Canned & Dry Items. Save your neighbors a trip to the store for some simple items.
Forming a Community Lending Library Committee
There will need to be structure when forming a lending library of this magnitude. Think of it as a neighborhood watch for sustainability. Depending on the cost of the equipment and the group’s size, you may opt to have members sign agreements outlining circumstances where a damaged item will need to be replaced by the person who borrowed it.
You will also need to decide whether people will pay a small per-use fee or all-inclusive dues. Funds should be set aside to pay for the replacement, maintenance/repairs, or equipment upgrading. To maximize your community lending library’s success, funds should be deposited into a trust and interest-bearing savings account. In time, this will help ensure your library will stay well-maintained and even grow! | https://medium.com/new-world-optimist/how-to-build-a-community-lending-library-4d1bb20c1f25 | ['Kimberly Forsythe'] | 2020-10-28 21:31:45.478000+00:00 | ['Lending Libraries', 'Green Living', 'Sustainability', 'Green Living Tips', 'Community'] |
10 Digital Small Business Trends for 2018 | The world of small business in the online space is full of rapid change and here-today-gone-tomorrow opportunities.
If you can stay ahead of the trends, you’re more likely to keep yourself focused, build stronger systems into your business, and be more intentional about what you’re creating. It’s not a race — but it is a call for minding what’s going on around you.
Here, I’ve gathered 10 trends for which I see momentum building quickly in the digital small business world. I’ve chosen them based on my observation of hundreds of business owners at CoCommercial — the social network for digital small businesses that I run, over 45 business owner interviews from the past year for my podcast, Profit. Power. Pursuit., and conversations with movers & shakers in this space.
I’m defining digital small business as any small business which is powered (in whole or in part) by digital tools like social media, websites, online learning, video conferencing, etc… This includes coaching, consulting, design, development, online education, wellness, maker businesses, and more.
I stayed away from trends toward specific marketing or sales techniques, technology, or product development and, instead, focused on the structural shifts that are happening in our marketplace. I’ve ordered the trends by how confident I am that we will see them hit the mainstream by the end of 2018 — from most confident to least confident (with the ones I’m least confident in being the ones I’m most hopeful will come about).
The overarching trend — the one that ties all of these trends together — is one that many digital small business owners echoed as I gathered their input:
The market — and necessarily the businesses operating in it — is maturing.
As the market matures, it creates growing pains. Some business owners will realize their all-but-get-rich-quick-style businesses were never built to withstand changing market currents. Some will be forced out of their comfort zone to become much stronger leaders, executives, and managers. Others will close up shop.
These trends reflect the areas maturing business owners will need to contemplate in their year-end review and planning.
1. Transparency
The world of small business online has been a fairly opaque one.
You get someone to sign up for an enticing free gift. You generously provide them with mountains of free content. You take them on a meandering journey that inches them closer and closer to wanting to buy.
Then, pounce! Activate full-throttled sales plan.
There is actually a lot of good in this way of marketing. You create value before you ever ask for a sale. You educate, inspire, and entertain. You answer questions and explore new possibilities.
But we’re starting to see the market diverge into two camps: those who want everything for free and those who just want to know what you’re selling so they can cut to the chase and buy it.
I believe we’re likely to see sophisticated marketers using the best of launch marketing, affiliate marketing, and content marketing to both provide immense value for free and being very clear up front about what they’re selling. We saw glimmers of this over 2017 (i.e. Danielle LaPorte announced that she was promoting Marie Forleo’s B-School course at the beginning of the promotional period and offered her audience a way to opt-out of that promotion while remaining subscribed) but I expect it to really explode in 2018.
Transparency is not just a marketing trend, though. We’re likely to see much more transparency in branding in 2018, as well. Copywriter Hillary Weiss puts it like this:
I feel like in some ways Martin Luther reincarnated has posted his grievances to the door of the Church of Marketing and people are nodding their heads. I foresee a “reformation,” a step away from super opulent branding and shiny “saints” of industry and into more grassroots, in-the-trenches-with-you type of experiences. Less shiny, more transparency.
In order for transparent marketing to be in integrity, brands will have to get real, too.
Live video and the evolution of social platforms (like Instagram Stories) give brands a real opportunity to drop the pristine posts and share more of the behind-the-scenes.
The lack of transparency in this space has often come from an underlying belief that people don’t really want what’s being sold — and so a complicated dance is needed to woo them whether that’s with marketing, branding, or a sales conversation. As digital small business becomes more mainstream, founders need to get clear about the value they’re providing and be able to clearly communicate that to customers. If the customer doesn’t want to buy based on that value, the product is at fault — not the marketing.
Review: Does your potential customer actually know what you’re selling? Do they start paying attention with the intent to solve a problem (and buy your product)? How can your marketing and brand become more straightforward and transparent in 2018?
2. Personalization
Customers are tired of one-size-fits-all solutions. They are actively looking for ways to customize what you’re offering to fit their unique needs.
While they might have valued price and conformity in the past, they are increasingly opting for more adaptable, higher-priced options.
Of course, personalization doesn’t need to be high-priced in 2018. It can be incredibly scaleable and accessible. One place I’m seeing this is in the rise of communities and membership sites. These products put the customer in the driver’s seat and allow them to adapt the experience to suit their needs. They take what they want and leave what they don’t need.
Gina Bianchini and the team at Mighty Networks have created a software platform that allows business owners is to create highly personalized experiences in the form of deep interest networks. At CoCommercial (built on Mighty Networks), our goal is to provide a steady stream of exclusive content, events, and conversations so that members can create a customized experience of the platform. As they do, their own realizations and questions bubble up in the form of member-generated content and conversations (their own posts) and they customize their experience even further.
Online courses and workshops are becoming less about loads of content and more about what students can do with that content. Mastery and application is taking over for learning and understanding.
Review: How could you create a more personalized experience for your customers? What opportunities are there to guide a customized application or experience instead of forcing conformity?
3. High-touch Service
This might just be the year where founders figure out how to utilize the best of digital and the best of real human interactions. While there is a trend toward done-for-you or 1:1 services (more on that in a bit), High-touch Service doesn’t have to mean selling services or service packages.
Business coach Racheal Cook says, “I see a lot of people returning to in-person events and real conversations instead of just information products. People are overwhelmed by information — they want a real human to help them.”
You can sell an information product, SaaS app, or membership community and still provide a guide.
You can offer workshops and still provide a personalized experience. You can have an automated welcome sequence and still send personal follow-ups to new customers.
I believe 2018 will be the year where business owners are not just providing High-touch Service — but investing in it. That’s what we’ve done. We’re allocating more of our marketing budget to customer service and member experience so that our retention is higher and word of mouth marketing is stronger.
Similarly, Beautiful You Coaching Academy founder Julie Parker cites investing in her staff, Racheal cites taking the time for personalized welcome videos, copywriter Jamie Jensen cites going deeper and creating more intimate experiences. Each of these examples of High-touch Service require an investment of time, energy, and/or money. Budget for your own High-touch Service on your calendar or in your financial budget for 2018.
Review: How could you allocate a greater portion of your budget to High-touch Service? How can you provide a greater level of guidance while maintaining a light and lean approach? Where do you customers often get stuck and how could High-touch Service keep them moving forward?
4. Consolidation
With digital small business hitting the puberty stage, it’s time to turn gangly limbs and immature frames into mature, adult businesses. This will occur in a number of ways over 2018.
First, we’ll see more and more businesses eschewing the sort of “junk drawer business models” — willing to sell anything and be anything to accommodate their customers — they’ve been using and the insane marketing calendars they’ve been tied to. Content strategist Lacy Boggs says, “Less but better in all things. By this I mean, launching less but doing more with what you do launch; writing fewer articles but making sure they really have an impact; running less advertising but getting super strategic with retargeting, etc…”
Instead, they’ll consolidate their offers into a core product and get crystal clear on the key value proposition they offer to their customers.
They’ll better define their boundaries and operate within them to improve their brands, positioning, and profitability. They’ll better understand why people buy and use that to their advantage to create more strategic — and less spray & pray — marketing.
Second, we’ll see more business owners and freelancers coming together in one consolidated company or offer. Charlie Gilkey believes we might even start to see small-scale “acquihires” — where one company buys another with the purpose of acquiring the talent as much as the technology or intellectual property.
I’ve seen this — and even participated in it! — as well in the small business space. There’s a huge opportunity to acquire the services of a subcontractor you work with frequently or a client who loves your mission and brings with them complementary skills. This allows for more hires on the value creation or delivery sides, not just on the administrative or financial sides, which frees you up to truly take the helm on your growing company.
Review: Where has your business gotten overly complicated or convoluted? How can you simplify to become more profitable in 2018? How could you strengthen your company by acquiring the skills of another business owner or freelancer?
5. Conflagration
Unfortunately, 2018 will be the year when a lot of small business owners try to burn it all down. Either they will close up shop or they’ll pivot away from something that’s working because the work becomes optimizing and tweaking instead of designing new things.
There’s growing worry in the digital small business world that the opportunity is over and it’s time to abandon ship.
Of course, this isn’t true.
Anywhere people gather, ask questions, and look for solutions there is an opportunity to do business. The cause of the distress is largely due to businesses being built with little to no foundation under them. They were able to capitalize on a trend or fad but, when faced with the prospect of creating more sustainable systems, they feel stuck and left behind.
Breanne Dyck, founder of MNIB Consulting, says, “those who have built a cash cow with no underlying business structures will be forced to either grow up and start acting like a real business, or face collapse.” Burning everything down is not the only recourse when things stop working the way you’re used to. You can also decide to create a more intentional, mature, and foundational business that can weather whatever storm it faces.
On the other hand, some business owners have built great products, solid systems, and reliable revenue engines. Unfortunately, the next steps can be mind-numbing. Jennifer Kem, a marketing & brand strategist shared:
As a wise mentor has said to me: “Making money is boring. But that’s how you make money.” I’m seeing people abandon sales funnels because they “didn’t work” — when what they need to do is optimize it more and get it back to their core offers.
Denise Duffield-Thomas, founder of Lucky Bitch and a veteran of the internet marketing industry, echoed this sentiment, “I’m definitely seeing a lot of businesses throw the baby out with the bathwater, or ditch awesome programs because they are bored, or because their list is stagnant they think ‘everyone’ has seen it.”
I’d go so far to say that many entrepreneurs — not strictly limited to online small business — create problems for themselves to solve which can amount to sabotaging products, team relationships, sales systems, and marketing engines all in the name of having something new and exciting to work on.
So while Conflagration is definitely a trend for 2018, I hope that, as a community, we look to put out the fires as quickly as possible. There are more options than abandoning a great idea or your baby business. Just because things get more challenging and demand a more mature approach doesn’t mean there isn’t a huge opportunity for you to create value and reap the rewards.
Review: If you’re feeling the need to burn things down, what would make you excited about your business again? What’s your favorite part of running the business (as opposed delivering your product)? What do you find creatively fulfilling about growing your company? When are you most likely to self-sabotage on the way to success?
6. Done For You
One place Conflagration has been valuable, though, is with business owners burning down group programs and online courses they never really loved. They realized that the best results — and most valuable outcomes — for their clients came when there was a personal guide and a well-managed process rather than a half-hearted attempt for customers to do it themselves. These small business owners are opting to go back to the individualized work they love and build out scaleable systems and teams — rather than solutions designed to scale infinitely.
Back in September, I talked with Dr. Michelle Mazur, a speech coach and the founder of Communication Rebel, about her choice to stop offering group programs for her speech coaching services and instead focus on clients who were willing to pay more to work with her now and work with her individually. She said business has never been better!
Laura Roeder, who successfully self-funded social media scheduling startup MeetEdgar after running a profitable training company, predicted that even software companies would get into the Done For You game more often. She said, “we just piloted a complete ‘done for you’ set-up package for MeetEdgar and it was a huge success. People are willing to pay thousands for the entire solution instead of just a piece of it.” Even back in 2015, Nathan Barry’s ConvertKit got me to switch email marketing providers with a Done For You offer.
When you combine the Done For You trend, the Consolidation trend, and the Reorganization (next) trend, you get a big move towards forming specialized agencies that walk clients from start to finish through the messy journey of web design, branding, marketing, and other services. The rise of more sophisticated and mature agencies also benefits from the Personalization and High-touch Service trends, too.
Review: When has a DIY approach really worked for your customers and when has it left them stuck? Where could you or your team be most useful with some hands-on help?
7. Reorganization
As the digital small business industry begins to mature, more founders are going to be looking to make their own organizations more mature as well. Instead of allowing themselves to be the linchpin that desperately holds everything together, they’ll look to hire a team that can truly support them and be devoted to the mission of the company. I wrote about my own experience with this in my end-of-year review.
Look for more businesses investing in full-time teams or part-time employees.
They’ll still be hiring specialists and contractors but only to complete particular projects or get the team up to speed on a new initiative.
With this reorganization comes a real need to spend time on establishing company culture. Business strategist Charlie Gilkey says to look for “more discussion of culture, mission, and values as they apply to micro businesses.”
As more small business owners creep towards burnout, the discussion around culture is going to feel less corporate and much more enticing. They’ll be dialing in how they work, how things get done, and how the team works together at a whole new level.
Charlie said in a recent podcast interview with me, “Everyone on our team knows how we work. That’s just how we do things here. … Showing up in the morning and knowing how we do things takes a lot of the meta work out of the process.” While you might feel a negative knee jerk reaction to setting policies and crafting procedures, Charlie knows it makes things easier on everyone. Part of Reorganization in 2018 might be sitting down to get the business intentionally organized for the first time.
Review: How is your company culture defined and communicated to team members? Does everyone on your team know what the unique strengths of your company are? Are policies and procedures clearly defined for recurring tasks?
8. Pop-up Learning
Education has been a huge opportunity for digital small business for at least the last 5 years. Over that time, it’s become more and more slick, polished, and professional. In keeping with the trends of Transparency & Personalization, I believe we’ll see a trend toward “pop-up” education in 2018.
Education and training companies will necessarily become more attune to the in-the-moment needs of their customers and create new ways to accommodate their questions.
Think half-day workshops, live courses, and in-person events designed to quickly immerse the learner in a new subject and give them what they need to move forward.
Plus, instead of lecture-style courses, these learning opportunities will be heavily application-oriented. The curricula will be light and flexible to give students plenty of time to play and experiment with new concepts. These experiences will also be highly interactive — with students either engaging directly with the instructor in smaller groups (think 25 instead 2500) or engaging with each other in an intentional structure (think a mastermind group or class section).
Review: Where do you see an opportunity to help your customers without the expense of developing a full-blown product? What questions have they been coming to you with that you can answer quickly with a hands-on learning session?
9. Inclusion
The past year has been one where many of us realized just how segregated our social and professional circles were both online and offline. Further, we are starting to realize how much the shiny personal brands both men and women have used to get ahead in the digital small business space have tapped into patriarchal norms and conventional white/straight/cis-gendered beauty standards.
While I don’t believe any (or, at least, many) of these brands have intentionally created hostile environments for LGBTQ, minority, or feminist followers nor used structural racism, sexism, or gender norms to their advantage, the reality is that they have.
Luckily, the conversation around true Inclusion is starting to happen in this space and I’ll readily admit that I am no expert or saint on it. But I do believe that every step towards seeing the problem and taking action on it is a step in the right direction.
True Inclusion means, of course, that social media posts of support are not enough.
The privileged of the entrepreneurial class — speaking as a white, straight, cis-gendered small business owner — need to step up and step out of our comfortable social circles and seek out colleagues, interviewees, employees, and mastermind buddies who come from different backgrounds and who look differently than we do. Diversity is an asset, as Desiree Adaway, Ericka Hines, and Jessica Fish would say.
I feel confident that the conversation around Inclusion will continue into 2018 and beyond. But I’m hopeful that a real trend toward doing something about it starts, too. My personal goal is to continue to seek out minority voices and experiences to include in articles, in our community, and in the events we host. I also plan to make a strategy for finding more minority candidates the next time we’re hiring.
Review: Is your own professional network only full of people who look like you? Where do you go to seek out people with different experiences and backgrounds? Is your brand inclusive of different backgrounds? What’s your plan for making people from different backgrounds feel comfortable and valued in your community?
10. Profitability
We’ve been bombarded with monthly income reports, inflated revenue numbers, and ludicrous status symbols for far too long. As digital small businesses mature in 2018, so will their owners’ understanding of the financial matters of their businesses.
Instead of blindly chasing revenue, they’ll get critical about what is profitable instead. The trend toward Profitability will create even more momentum behind Consolidation, Done For You, High-touch Service, and Reorganization. It’ll probably lead to a fair amount of Conflagration, too.
Amanda M Steinberg, founder of DailyWorth and WorthFM, predicts more business owners will be looking for “less revenue, more margin.” Sure, you can spend $3 to make $4 but there are often far simpler, more profitable ways to make $4 if you’re willing to settle for less top line revenue and more in your personal bank account.
Digital small business has largely been a culture of vanity metrics — likes, followers, email subscribers, members, and revenue — while arguably much more important metrics like profit get the short shrift. The focus has been on whatever makes you look good, not on what means you have a healthy, sustainable, mature business. Brenda Wilkins, a leadership and business consultant, says, “This is my #1 concern with much of online business dialogue — too much talk about revenue, launch numbers, ‘$____figure business’ etc… and no talk about margin, profit, cost of goods sold, debt ratios, etc…”
As the market matures and business owners become more sophisticated, the profile of Profitability will rise. We’ll see less talk of vanity metrics and more talk about what makes a business really work. We’ll see less business owners trying to build platforms with flashy numbers and more concrete value propositions. And if we don’t? We won’t be around long.
Review: Where have you focused on metrics that don’t lead to long-term sustainability? What could change about your business to make it more profitable? Where are you expending more energy than necessary for the returns you’re getting? | https://medium.com/help-yourself/10-digital-small-business-trends-for-2018-71eccccdc7a6 | ['Tara Mcmullin'] | 2018-01-19 21:21:08.440000+00:00 | ['Freelancing', 'Entrepreneurship', 'Trends', 'Small Business', 'Digital Marketing'] |
In store or online — what’s the environmentally friendliest way to shop? | In store or online — what’s the environmentally friendliest way to shop?
Drones, robots, crowd-shipping and more offer new options for solving the sticky “last-mile” problem of bringing our purchases home
Photo © iStockphoto.com/baranozdemir
By Fred Pearce for Ensia | @ensiamedia
Is cyber-shopping terrible for the environment?
Some say yes, with all those trucks heading out into suburbia to deliver your latest gadget, fashion garment or book. But online retailers insist theirs is the greener delivery route — much better than you driving to the store.
So, who is right? And are there even better ways?
This really matters for the climate. Online shopping makes up one in seven retail purchases worldwide. Its value in 2019 will be a staggering US$3.5 trillion, a figure that is rising by more than a fifth every year.
How much of the total carbon footprint of what you buy is attributable to delivery varies hugely. But wherever your latest purchase comes from — whether a Chinese factory or a field in your home state — transport from the store or warehouse to home likely dominates the delivery footprint, says Alan McKinnon, a professor of logistics at the Kühne Logistics University in Hamburg, Germany, and author of a new book Decarbonizing Logistics.
What logistics folks call the “last mile” is usually the most energy-intensive stage, McKinnon and colleague Julia Edwards have pointed out, “and typically generates more CO2 emissions than all the upstream logistical activities.”
Crowded parking lots suggest in-person shopping has a massive carbon footprint — but the devil is in the details. Photo © iStockphoto.com/Konstantin Aksenov
It is also where the difference between online and in-store shopping is greatest — and McKinnon says most times delivery is best. A typical home delivery round of online purchases in Britain consists of 120 drops on a 50-mile (80 kilometer) round. That round produces some 50 pounds (20 kilograms) of CO2, or just over 6 ounces (170 grams) per individual delivery. If you went to the store, the typical drive would be around 13 miles (21 kilometers) there and back, which would generate 24 times more CO2. So you’d have to pick up 24 items to break even, he says.
Theory vs. Real World
That’s the theory. In the real world, the difference is much less, says manufacturing technology specialist Dimitri Weideli, who did an environmental analysis of online shopping while a research associate at MIT in 2013. For instance, 12% to 60% of home deliveries have been reported to fail first time. Either the van has to make a second and even third run, or customers end up driving to an out-of-town warehouse to pick up the product. Also, typically, one-fifth of products are returned, for whatever reason. Every false move increases the carbon footprint.
Just as bad, our growing love of speed deliveries almost triples the footprint of online delivery, says Weideli. That is because your supplier no longer has the flexibility to bundle multiple orders into a single delivery, and because it sends out vans less full and to travel farther per delivery than they would if you were willing to wait a bit longer for your purchase to arrive.
Click here to subscribe!
Weidel says such “impatient” cyber-shoppers have the worst carbon footprint of all. But even allowing for them, in general, whether buying laptops or Barbie dolls or T-shirts, he wrote in an analysis he did as a research associate at the MIT Center for Transportation and Logistics in 2013, “online shopping is the most environmentally friendly option.”
Of course, this assumes the comparison is with conventional shoppers who make special trips to the store for single purchases. Many don’t do that. We walk, bike or take the bus. Or buy many items on a single shopping trip.
In a bus ride, you share the emissions. On a typically half-empty bus, your share may still be greater than the emissions for a home delivery — seven times more if you are only buying one item, says Patricia van Loon, based on her research at Heriot-Watt University in Edinburgh, Scotland. But since the bus would have been on the route regardless, you haven’t added to the actual emissions.
EVs, Drones and Robots
If we can shop better, can online retailers deliver better too? That last mile is still a source of great (and costly) inefficiency for them, say logistics analysts. It’s where both dollars and carbon emissions can be saved.
So they are trying. Amazon wants half its shipments to be “net zero carbon” by 2030. But how?
Electric vehicles are one possibility. With no tailpipe emissions, they reduce transport’s contribution to urban smog. But their carbon footprint depends on how their electricity is generated. Right now, an electric vehicle is a lot greener in Vermont than in coal-burning West Virginia.
How about drones? They would mostly deliver one package at a time. But even flitting back and forth from the depot, drones could sometimes still reduce carbon emissions relative to delivery trucks, according to Anne Goodchild of the University of Washington. They are likely to work best with light, urgent deliveries, such as medicines, food or mail, and in confined high-demand areas such as university campuses.
Both FedEx and Amazon have plans to deploy ground-based robots as members of their online-purchase delivery teams. Photo courtesy of Elvert Barnes from Flickr, licensed under CC BY-SA 2.0
But staying aloft for long with a heavy load is energy-intensive. Drones could be combined with trucks that drive to local transport nodes, and then hand over to drones for the last mile.
Or perhaps ground-based robots? This year, both FedEx and Amazon announced plans to deploy these smart, autonomous hampers-on-wheels along our sidewalks, dodging pedestrians and crossing at the lights. Lowe’s already has plans to deploy with FedEx, and FedEx says it is talking to Pizza Hut and Walmart about doing their deliveries as well.
Low-Tech Options
Some say low-tech is still the best route to low carbon. Many European cities have companies such as Deliveroo using bicycle couriers for fast, zero-emission meal deliveries from local restaurants. The system could be extended for other goods. Ford recently developed software that could summon bike couriers to take parcels in a suburban London neighborhood the last mile from truck to front door.
Lockers in shopping malls also get around the last mile problem for online retailers. Customers are given a code and pick up their own package. But if you drive there, the carbon gain is lost.
Bicycle-based delivery now used for restaurant meals could easily be adapted to online purchases. Photo courtesy of TaylorHerring from Flickr, licensed under CC BY-NC-ND 2.0
The new kid on the block is crowd-shipping — hitchhiking for parcels. Start-ups like Roadie promise to “connect people who have stuff with driver already heading that way.” Drivers make bids to deliver. Right now in some places, half of all crowd-shipping trips are made specially for the delivery, while another third take long detours. So the potential carbon saving disappears. But the more people join in, the more efficient it could be.
The Bottom Line
The bottom line? Online shopping can be greener than driving to the store. Novel last-mile alternatives to conventional delivery trucks stand to make it even more environmentally friendly.
But the devil is in the details. If we bundle our orders, and avoid the speedy delivery option, we boost the environmentally friendly quotient. (Imagine if we were offered a “green shipping” button when choosing dispatch options?) Other tips for reducing delivery’s environmental impact: Do be in when the courier calls. Don’t buy on a whim and then take up the “free return” option.
Oh, and don’t binge on stuff. Some say the real danger from online shopping is it encourages us to buy stuff we wouldn’t otherwise. The purchase that doesn’t happen has the lowest delivery carbon footprint of all.
Editor’s note: GreenBiz will webcast a program on decarbonizing e-commerce Tuesday, June 11, at 1 pm ET. For more information, see Decarbonizing E-Commerce: A Path to Low-Carbon Shipping.
Originally published at ensia.com on May 23, 2019. | https://medium.com/ensia/in-store-or-online-whats-the-environmentally-friendliest-way-to-shop-60110e2561c | [] | 2019-06-24 14:16:01.158000+00:00 | ['Environment', 'Drones', 'Shopping', 'Online Shopping', 'Ecommerce'] |
ParseMyCF — a codeforces contest scraper | I wrote an useful python script- ParseCF
It will parse all the Codeforces contests that a particular user has participated in till date, make separate folders for every contest and every folder would contain all the user’s submissions for that particular contest , ordered by the problem names.
For a guy like me who likes to keep all my codeforces submissions organised , this script saves a lot of unnecessary time in organising codes in folders .At last, I can push all my codes to Github in one click.
GOALS of this scraper:
Ask the user for the username and the number of recent contests to parse Set up a proper directory tree under the parent directory with each Folder having the (contest-name + username) format Each Folder contains all the solutions of the user having separate files for separate questions (A,B,C..) and a contest-info .txt file that encompasses all the miscellaneous details of the user w.r.t the current contest.
All the detailed instructions on how to run this script is available on my Github .
Now enough of theory ! Let me show you a demo.
Let us say I want to parse the 5 recent contests of legendary tourist i.e When the scipt will prompt me to enter a username, I’ll type in tourist
Then the scraper will
show me the total no of contests tourist has participated in till date
ask me how many recent contests I want to parse: let us say 3
As soon as I type in 3 and press Enter , the parsing will start.
To make your wait less boring, I have made the script spit out log texts as it goes on parsing contests.
Notice the parent directory has no other folder right now except cf py file right now.
Let us start the parsing.
As you can see, instantly, the folder for the first contest has been created and the log is saying the same. It is sleeping for 5s :) and then it will begin scraping the corresponding solutions for this round.
It is a coincidence that the first scraped solution of tourist is WA on pretest 10 .
Haha :)
So it continues scraping and doing it’s job. At this point, files are being added inside the first folder.
Hurray!!
The first contest(most recent one) has been parsed and it has begun scraping the second contest as you can see the folder for Manthan-Codefest has been created !
It is working hard :)
Moving on to the 3rd and the last contest for us..
Now after a lot of hard work, let us see what happens!
The above picture shows that the parsing has ended!
Now let me show you the directory structure inside each folder.
So 3 folders…
Let us enter the first folder-:
As you can see, all the problems have been scraped successfully with the proper extensions. An extra contest-info.txt file is also there. Let us explore its contents.
As you can see, it contains all miscellaneous information about the tourist’s performance in this round like Rank, Rating change etc.
Let’s go inside the 2nd one
Let’s go inside the 3rd contest folder.
So this ends the successful scraping!
Bonus:
If you place the cf py file inside a git initialised repository , it will automatically record the added files and you can just make a single commit and upload them to github.
, it will automatically record the added files and you can just You can compare the performance and code of your cf friends in the recent contests you both participated in. Just run the script again and mention your friend’s username and you’re good to go.
in the recent contests you both participated in. Just run the script again and and you’re good to go. As mentioned above , the ParseCF repository is available under my JanaSabuj/ParseMyCF-contest github. You can make improvements or put forward bugs you encounter via a pull request or fork or star.
Bonus2:
Here is a short video where I parse my favourite coder’s contests.
Username: Ashishgup
This is my first python script and I hope you will find it useful ! | https://medium.com/sabuj-jana/parsemycf-a-codeforces-contest-scraper-13dc0e9d3872 | ['Sabuj Jana'] | 2019-09-21 12:37:16.693000+00:00 | ['Script', 'Codeforces', 'Programming', 'Python', 'Github'] |
plt.xxx(), or ax.xxx(), That Is The Question In Matplotlib | Difference between plt.xxx() and ax.xxx()
As shown in Figure 1, there are three main layers in matplotlib architecture. From top to bottom, they are Scripting layer ( matplotlib.pyplot module), Artist layer ( matplotlib.artist module), and Backend layer ( matplotlib.backend_bases module), respectively.
Figure 1, Matplotlib architecture
Let’s start from the bottom, the Backend layer handles all the heavy works via communicating to the toolkits like wxPython or drawing languages like PostScript in your machine. It is the most complex layer. Within this layer, FigureCanvas is the area onto which the figure is drawn and Renderer is the object which knows how to draw on the FigureCanvas . A regular user like you and me barely need to deal with this layer.
Then the middle layer, Artist layer, where ax.xxx() derives from. As the name implies, using this layer, you can control and fine-tune as many elements (e.g. spines, tick direction, tick label size, tick label font, tick colour etc.) as possible in the figure just like an artist paints on the canvas. This layer allows you to do more customisation compare to Scripting layer (see below) and more convenient for advanced plots. Especially when handling multiple figures/axes, you will not get confused as to which one is currently active since every subplot is assign to an ax . This is why ax.xxx() is sometimes referred to object-based plotting. We definitely will use this layer more often when writing a web application, or a UI application, or perhaps a script to be shared with other developers.
The top layer, Scripting layer, where plt.xxx() resident is designed to make matplotlib work like MATLAB script. In other words, this layer is considered as the lightest scripting interface among all three layers, which comprises a collection of command style functions for a quick and easy generation of graphics and plots. This is why many matplotlib tutorials prefer to introduce from this layer. It is the easiest part to start with and use, you basically add up objects (e.g. line, text, rectangle) on top of the figure . Scripting layer plotting is sometimes also called procedural plotting.
Figure 2, Scripting layer plotting
‘figure’ and ‘axes’ in Matplotlib
In matplotlib , figure and axes are layers of a figure (please note that I do not quote this “figure” as a script). Here let’s use a figure from matplotlib website to explain the concepts.
Figure 3, Parts of a figure in Matplotlib
As we can see from Figure 3, the whole figure (marked as the outer red box) is the base of a figure. The layer above it is the axes (marked as the inner blue box). A figure can at least have one axes . From here we know that, axes refers to a part of the figure and is not a plural word for more than one axis. For instance, if you have one plot on a figure , then that plot is the axes . If you have multiple subplots on a figure , then each subplot is one axes . To be able to make a plot, we normally call fig = plt.figure() at the beginning. We create one axes object in the figure by calling ax1 = fig.add_subplot(2, 1, 1) . This created the first subplot within a 2-row by 1-column figure . Therefore, all ax1.xxx(…) are functions specifically for ax1 . For example, to access x-axis and y-axis in the subplot ax1 , we call ax1.xaxis(…) and ax1.yaxis(…) . Likewise, we can add another subplot by calling ax2 = fig.add_subplot(2, 1, 2) and manipulating its elements by calling ax2.xxx(…) . In this way, we have a clear idea about which subplot we are working on without messing up the code (of course, there are many other ways to call two axes , for instance, fig, ax = plt.subplots(2) , then each axes can be accessed by calling ax[0] and ax[1] ).
A example plot with two methods
Alright, after clarifying the concepts of plt.xxx() and ax.xxx() , let’s use a simple example adapted from matplotlib document to demonstrate their differences when plotting figure with subplots.
Scripting layer plotting
Artist layer plotting
If everything goes right, you will get the following figure.
Figure 4, A example figure
As you can see from these two scripts (Scripting layer plotting vs. Artist layer plotting), although the code of artist layer plotting is more verbose than that of scripting layer plotting, it is easier to read. This is a very important practice to let you produce quality code and increase the readability of your code. When the plots getting complicated, the power of artist layer plotting will become more and more apparent.
Taken together, we may use plt.xxx() to quickly get a plot for exploratory data analysis, however, ax.xxx() is a go-to style when your code is part of a serious project and need to be shared with others. In addition, as a learner of matplotlib , I strongly advise starting from artist layer plotting, from which you will have a more comprehensive understanding about matplotlib plotting and definitely benefit more for your long-term development in data visualisation.
Here are materials I found very useful (continually updated list) | https://towardsdatascience.com/plt-xxx-or-ax-xxx-that-is-the-question-in-matplotlib-8580acf42f44 | [] | 2020-02-01 22:42:48.301000+00:00 | ['Data Science', 'Matplotlib', 'Data Visualization'] |
We’re All In This Together? | We’re All In This Together?
No. Clearly we’re not.
“All animals are equal, but some animals are more equal than others.” A proclamation by the pigs who control the government in the novel Animal Farm, by George Orwell.
It was only back in February that home secretary for the UK, Priti Patel labelled any person earning less than £25,000 a year a “low skilled” or “unskilled” worker. Now, amid the chaos of the coronavirus pandemic, it is those workers who’re helping to prop up the country in lockdown.
Coronavirus is not some grand leveller: it is an amplifier of existing inequalities, injustices and insecurities.
The trite slogan ‘We’re All In This Together’ is simply bollocks. There’s no equality on display. The virus isn’t some grand leveler affecting everybody in society in the same way. In case you haven’t heard, old people are more susceptible than young. The weak are more likely to be infected. And those on the low-end of the poor scale, the social-economically deprived people feeding the trough of the rich, are even more likely to be exposed to the virus.
Are the super-rich, holed up in their bunker, really in it together? Is Gwyneth Paltrow really concerned for my well-being as her in-house chef whips up another batch of smashed avocado smoothie?
It’s time to debunk the myth.
‘Be Kind’ is what the governments are preaching. They need the populace to stay calm. They need the ‘workers’ now to be in the front-line in order to protect the system. They need you now more than ever.
Boris Johnson lavishes praise on a Kiwi worker. A woman who stayed by his bed all night long ensuring he didn’t succumb to the virus. By her side was a Portugese health worker. Two vital women that helps the National Health Service to tick over. The irony here being that both would have difficulty getting visas in the Uk’s Brexit future. Migrants help the system tick over.
Photograph: Guy Bell/REX/Shutterstock
The day Dominic Raab encouraged us all in the UK to clap for the workers who’re risking their lives to keep society going, the government restated that some of those same people won’t be allowed in the country come January 2021.
“Low-skilled” people would not be able to apply for a UK work visa.
In It Together until you’re no longer useful or needed. Who’ll take their place? Who among the wealthy are willing to step into their shoes? What will become of the the migrant care workers, hospital porters, bus drivers and cleaners who are keeping us safe and keeping society functioning.
Are the cleaners essential during a pandemic but non-essential when the pandemic clears? If recognising our common humanity is something we can do when Boris Johnson is admitted to intensive care, the same should be possible for all people, regardless of their immigration status. All migrants’ rights should matter. Not just in a crisis, but all the time. | https://medium.com/the-bad-influence/were-all-in-this-together-29ec713c7c69 | ['Reuben Salsa'] | 2020-04-13 22:59:44.711000+00:00 | ['Covid 19', 'Salsa', 'Equality', 'Coronavirus', 'The Bad Influence'] |
My Stories of Failing to Make Money Online | Medium
As we are on Medium, I can’t talk about it. Some people make money here. Big bucks! Not me.
In all seriousness, I like writing on Medium. The user interface is nice and clean. Easy to write and easy to read. It has a huge visitor base and you can even get paid for writing. It has all I want from a platform for writers.
You have to give a few compliments when they’re deserved. Of course, Medium has its drawbacks, nothing is perfect. You know about them.
They had to add the monthly subscription to make more money, but it’s only $5/month. Not much for most people. I am not subscribed. I read some free stories and write from time to time, and I don’t really need the subscription. Yes, it will be cool to read some more, but that’s life.
How much do they pay writers is the big question?
Not much is the short answer. They have a complicated model of claps linked to your stories that are published behind a paywall. Writers can choose whether to publish them for free or not.
Some claps can give you more money, depending on how often the person who is subscribed claps. You can find some articles here describing it in detail.
Medium is my favorite place to write and I would be super happy to earn some extra cash for a beer or two, but I don’t even publish my stories behind a paywall.
First, I don’t think I offer such high-quality writing that people should pay to read it.
Second, I get almost no views anyway.
Third, I actually made a few stories eligible to earn money, which contradicts my first point, but I saw how little Medium pays when you don’t have any followers and your stories get 30 views at most.
Fourth, writers should first get people to read them, a couple of thousands of followers under their account. Start getting a good amount of views and reads every day, and then try to earn money from it.
I’ve read stories of writers making a few thousand per month here, but it’s less than 1% of the total amount of writers that regularly post.
Earnings: $0,35 | https://medium.com/better-marketing/my-failure-stories-of-trying-to-make-money-online-c74526a6775f | ['Atanas Shorgov'] | 2019-09-11 13:24:05.116000+00:00 | ['Money', 'Personal Story', 'Marketing', 'Blogging', 'Freelancing'] |
How I Overcame Insomnia | Having dealt with insomnia since the age of 16, I was no stranger to the inconveniences brought on by the lack of sleep, but pursuing a career in software development alongside being unable to sleep was debilitating.
On most days I would start my work-day around 6 AM, which meant that for me to get a good night’s sleep I would have to fall asleep at the latest, around 12 AM. For the majority of the workweek, this was not the case, I would lay in bed till 3 or 4 in the morning unable to fall asleep.
The following morning, I would wake up to the sound of my alarm with a burning sensation in my eyes and a raging headache. I would start my workday, hoping that tonight might be different, it hardly ever was.
I tried a lot of things, ranging from complaining to my doctor to taking part in a sleep study (which ironically was inconclusive because I was unable to achieve any level of sleep for them to gather data).
Here are some of the strategies that helped me overcome my onset insomnia and while I am not 100% cured, these tips help me get restful sleep for at least 5 days a week.
1. Practice and master sleep hygiene
Like many people, I had a bad habit of winding down after a long day in bed by playing on my phone or reading a book. Avoid engaging in anything other than sleeping and sex in bed to build a strong association between being in bed and falling asleep.
2. Enable blue-light filtering on your devices
I noticed a boost in overall sleep quality and a decrease in the time it takes for me to fall asleep by enabling this feature on my devices. Warm colors during nighttime hours alleviated eye strain as well for me.
3. Come up with a sleep/wake routine that you can stick to 7 days a week
For me, going to bed at 10 PM and waking up at 6 AM is sustainable all week. The key is to come up with a realistic routine that you will be able to stick to regardless of it being a workday or not.
4. Restrict screen-time 1.5 hours before bedtime & have a wind-down ritual
Get into the habit of dropping stopping highly stimulating activities at a certain time before bed. Having a designated ‘wind-down period’ prevents my mind from being overly restless at night and inhibiting sleep. And if you are suffering from stress or anxiety about falling asleep, this might be an effective ritual to follow.
5. Read and or write before bed
This ties in with my previous point of a screen-time cut off before bed. You can still get things done, but only with pen and paper. I find it relaxing to draft ideas for medium articles before bed or read up on software engineering topics (I’m currently reading about iOS development with Swift). The key here is to refrain from reading something overly stimulating that might cause you to ponder critically at night (stay away from mystery fiction). Writing a list of things you accomplished on the current day can help you go to bed feeling satisfied and that might bring restful sleep.
6. Intermittent fasting and a ketogenic diet
This might be specific to me, and please consult your physician before taking this advice — but this lifestyle choice helped my particular case of insomnia quite a bit. Restricting eating after 7 PM enabled me to go to bed without feeling bloated and on a ‘settled’ stomach.
Following a ketogenic diet helped my overall sleep quality once I became fat dependent. Take a look at this article if you’re curious about the science behind this.
7. Maximize sleep duration by restricting fluids at night
I would restrict fluids after 8 PM or so to make sure that when I do fall asleep I am not waking up to use the bathroom at night. Knowing that I would be fluid restricted at night, I started tracking my water intake and ensured that I would consume at least 3 liters of water during the day.
8. Workout in the morning, not at night
The first thing I would do at 6 AM would be to workout, I found that it takes me longer to wind-down after working out in the evening/at night. Be sure to focus on cardio and weight-lifting and don’t neglect one for the other.
9. Take melatonin
Consult your physician and consider taking a melatonin supplement. You won’t reap the benefits of this in the short term, but once it builds up in your system — it could help you fall asleep.
10. Listen to pink noise
This is a recent habit of mine — I play a pink noise loop (consult youtube for this) through a soundbar near my bed. I have noticed that when my mind is restless, I can use the sound of the pink noise to visualize sleep-friendly thoughts. In a way, the sound helps ground me to my sleep. Your mileage may vary, but if you have racing thoughts at night — this may help you reach a quasi-meditative state that can lull you into sleep.
11. Try and be as productive as possible during the day
As Leonardo Da-Vinci once said “As a well-spent day brings happy sleep…”, try being as productive as possible during your waking hours. Working hard during the day helped me by lowering over-all stress levels so I’m not laying in bed thinking about all the stuff I need to power through the next day. Being present, and staying productive is the name of this game. | https://medium.com/curious/how-i-overcame-insomnia-ba63d624fd85 | ['Don Restarone'] | 2020-07-26 11:29:29.195000+00:00 | ['Lifestyle', 'Productivity', 'Lifestyle Change', 'Insomnia', 'Self'] |
3 Lines Story — She’s a Magic! | “The way she laughs is classic,
When she is near him everything look cinematic,
He looks her like may be she is a magic.” | https://medium.com/3-lines-story/3-lines-story-shes-a-magic-dd3c0f4db1a0 | ['Pawan Kumar'] | 2017-03-18 07:09:40.339000+00:00 | ['Poetry', 'Writing', 'Relationships', 'Love', 'Poem'] |
How to Survive (and Thrive) in a Sexless Marriage | Avoid comparisons.
Social media can be a terrible thing when it comes to mental health and wellness: for those struggling with the illness and for those who are dealing with the repercussions.
So often, I find myself on Facebook or Instagram, looking at all of the cutesy, lovey-dovey pictures of my friends, and I catch myself in the comparison trap. I find myself wishing I had a partner or a spouse that did this, this, and this differently.
The problem is that when I do this, I am actually changing my partner in my mind into someone he’s not.
Mental illnesses affect people in different ways, particularly if they are long-term, lifetime struggles. My husband is the way he is because he struggles with depression and anxiety. Some of his personality quirks and the things I love about him are a direct result of his struggles.
When I catch myself comparing, I have to go back to those early days when I fell in love with him. I have to remember the things I love about him and the things that are so endearing about his personality and the things he finds important.
Remember why you married your spouse or began a relationship with your partner. Love them beyond the labels they carry. Every relationship is different.
Recognize that you can’t fix the problem.
This might be one that has been the most difficult for me to navigate. I am a fixer. I love to fix people’s problems. However, it’s very frustrating when I feel like I can’t figure out a solution.
With my husband, I find myself constantly trying to “fix” him.
I am constantly giving him suggestions for things he “should” do, parroting things I’ve learned in my own therapy sessions for anxiety, not realizing that I’m not helping, but actually hindering the process. My husband sees an amazing therapist, but when I try to “fix” him, I take that role of “therapist” onto myself. It’s really not the way it should be.
My role is to be a spouse, a wife, a lover, and a support. That’s it. Leaving the “fixing” to the professionals.
There is nothing sexy about being your partner’s therapist.
Take this time to focus on YOU.
It’s crucially important to make sure you take time for self-care, not only when things are good, but especially when things get challenging. Burn-out and caregiver fatigue is a real thing. Consider seeing a therapist for yourself if time and money allow.
In addition, make sure to do things for yourself. Do things that give you life and bring you joy. Go to the gym. Get a manicure or pedicure. Go for a walk with the dog. Take a mental health day away from work (and preferably spend it away from home or away from the source of the fatigue). Read an entire book in a day.
Even if you have other responsibilities, like children or pets, and don’t have time for a full day to just focus on yourself, even small amounts of time can bring levity and lift.
During this time, if you are feeling like your sexual needs are not being met or fulfilled, turn the focus onto yourself. Explore your own body and your own interests. Develop your own sexual identity if that’s something you’ve never done before.
A healthy solo sexual identity is a crucial part of a healthy sexual partnered relationship.
When you understand what feels good to you, it’s easier to communicate that to a partner. It also is a chance to really show yourself some self-love and self-care.
Discover what makes you tick during this time! It’ll make your partnered experience that much richer later!
Support your significant other in both tangible and intangible ways.
There’s a reason those vows you said probably included something involving “in sickness and in health.” Life happens. Things don’t go the way we plan. In these weeks and months, find ways to support your partner.
Validate their feelings. Acknowledge the work they are doing in the therapy office. Do things around the house to help ease the burden. Snowblow the driveway. Mow the lawn. Do the laundry. Bring them flowers. Empty the dishwasher.
Do it without resentment, knowing that what you are doing can bring a little levity to their life. Little things can make a big difference.
Know your role. Be the best lover, supporter and sympathizer you can during this process.
Remember this is not a permanent state you are living in.
Unlike what the media portrays as a “typical” marriage/relationship, real life experience is full of ups and downs. Just like those weeks and months where sex was the only thing on either of your brains, there will be weeks and months where it is the last thing one or both of you are interested in. | https://medium.com/midwest-confessions/how-to-survive-and-thrive-in-a-sexless-marriage-360dcc0e1796 | ['Mallory Joy'] | 2019-03-18 20:00:37.859000+00:00 | ['Anxiety', 'Sex', 'Family', 'Mental Health', 'Depression'] |
Apple iCar: Specs and Rumors | The news that Apple made cars has become a hot topic in the industry. The CNBC said that Apple has never admitted to making a car plan, but Cook acknowledged that Apple is developing an “Autonomous system” that can be used for self-driving cars or other purposes.
DigiTimes predicts that the Apple Car will be unveiled between 2024 and 2025. According to Bloomberg News, Apple has also handed the autonomous driving unit of its "Project Titan" over to John Giannandrea, the senior director of its artificial intelligence division.
His main task is to develop an autonomous driving system that can eventually be used in Apple's electric cars
Taiwan's Economic Daily News further reported that Apple has recently placed stocking requests with Taiwanese auto parts makers such as Heda, BizLink, Heqin, and Tomita.
Apple is expected to announce the Apple Car in September next year, and its prototype vehicles have already been road-tested in California.
Judging from these media reports, Apple building a car already looks like a certainty.
Why is Apple building cars?
As early as 2016, Musk said: “Tesla’s biggest competitor is not Google, but Apple.”
Apple's stock price has hovered between US$100 and US$120 for a long time, and its market value has held at roughly US$2 trillion.
However, a future breakthrough in market value will be hard to achieve on the existing track. iPhone hardware sales are clearly hitting a ceiling, and Apple's services business has once again fallen into antitrust controversy.
Apple urgently needs to find a new business to break new ground
The automotive industry is entering a stage of rapid growth, making this a good time for Apple to enter the market.
From January to November this year, 17.79 million passenger vehicles were sold in the Chinese market, with electric vehicles accounting for 5.4% (roughly 960,000 units). The industry predicts that electric vehicle sales will exceed 1.5 million next year.
As a rule of thumb in economics, when a new product category's market share rises to about 10%, it often marks an inflection point of explosive growth.
This pattern has already played out in the smartphone industry
In this year's electric vehicle market, products such as the Tesla Model 3 have attracted enormous attention, and the inflection point is faintly coming into view.
Photo by zhang kaiyv on Unsplash
Apple's main products are consumer electronics, represented by the iPhone, Mac, and iPad
Everything Apple has laid out in its industrial chain so far forms a complete supply-chain system around these consumer electronics products. But in the automotive field,
Apple is still a novice
Yet Apple has been planning its automotive move for a long time. Since the project code-named "Titan" was established in 2014, Apple's automotive patent applications have never stopped.
In January 2016, Titan went through its first major reorganization. According to employees who declined to be named at the time, Apple was still a few years away from developing "a truly differentiated electric vehicle."
Lacking a clear goal, the Titan project ran into challenges
Despite this, Apple's investment in the automotive field has not stopped. By incomplete counts, Apple has applied for nearly 100 automotive patents since 2017, covering charging, autonomous driving, AR navigation, biometrics, and smart hardware.
Together with filings on exterior interaction, electric-vehicle on-board systems, body structure optimization, and safety assurance, these can be said to cover the core areas of car manufacturing.
In 2018, Apple obtained several patents related to lidar. According to Apple insiders, the core of Apple’s car strategy is a new battery design.
The design reduces the number of modules holding battery materials and frees up space inside the battery pack, giving the car a longer driving range.
From the perspective of patent reserves, in 2020 alone Apple obtained at least 10 autonomous-driving patents, spanning the three major areas of sensors, communication, and control.
It is worth mentioning that the sensor patents cover linkage and data sharing between sensors.
Different sensors cross-check each other as they process data, making autonomous driving decisions more accurate. Another notable filing is a patent titled "Charging station with an automatic alignment device."
Image by Gerd Altmann from Pixabay
In addition, Apple has filed patents for lidar obstacle detection, hidden lidar, recognition of traffic police gestures, and scene recognition in severe weather.
We have seen that Lidar has been used on the iPhone 12, and Apple can still observe passengers, The psychological changes adjust the vehicle driving mode and other related patents.
Apple’s patents in the smart cockpit include voice and gesture operation of the car
Cars use gestures (AR recognition) or eye movements (eye-tracking technology) to determine the driver’s intention to perform related operations.
This technology is further upgraded and may be extended to passenger interaction.
The current mainstream model of new energy vehicles is to first develop a set of highly integrated chassis that includes power systems, battery systems, vehicle controllers, and automatic driving execution, such as braking and steering, which is called “Skateboard”.
For this part of the work, Apple’s current accumulation and layout in the automotive field should not be a big problem
It can be seen from these patent applications that Apple’s performance in the field of central control car machines and autonomous driving is worth looking forward to.
Whether it is an operating system or a chip, Apple’s technical depth and human-computer interaction understanding should exceed that of traditional car companies.
Apple’s patent accumulation in smart cockpits is even more exciting
Although some patents have been applied in some car companies, Apple’s ability to integrate technology and products is worth looking forward to.
In general, Apple wants to build a self-driving electric car with a high degree of intelligence, connectivity, and integration of software and hardware.
If the car is launched, it can be expected that it will surpass the existing car interactive experience and innovate user experience
The experience in the car scene is a high probability event.
It can be found from Apple’s patents that Apple still has more thoughts on the future of intelligent electric vehicles.
On the whole, it is based on the perspective of future user experience. It aims at various pain points in-vehicle use and future autonomous driving and The evolution trend of related technologies combined with AR.
Can Apple make cars?
Before the iPhone came out in the past, Apple was also a novice in the mobile phone industry. However, mobile phones and PCs are actually products that are closely related to each other.
The iPhone made by Apple is not considered a cross-border, but a homeopathic extension based on past technological accumulation and industrial chain.
Mobile phones and cars are actually two complete products.
In different product fields, the complexity and difficulty of the automotive field are also greater
For Apple, the capital investment is large, the production line and component suppliers have to start planning from the beginning, and the talents for carmakers need to be accumulated.
Therefore, it is more difficult for Apple to bring subversive changes and breakthroughs in the automotive field than to make a subversive mobile phone.
Judging from the current car manufacturing by new forces, various OEMs are focusing on car-machine systems and smart cockpits, focusing on intelligent networking, human-computer interaction, and a more networked user experience.
This is actually Apple’s home field
Advantages, Apple’s iOS-based software ecosystem has been unable to make progress in the automotive field in the past. It can build cars on its own.
Based on iOS, it can cut into the Internet of Vehicles and open up the seamless connection between mobile phones and smart cockpits in the car. This will undoubtedly allow other new power cars. Advantages envied by enterprises.
Building a car is a matter of burning money
At present, Tesla, the most successful car company in the field of new energy vehicles, has been losing money almost every year during its 17 years of establishment and has burned more than 5 billion US dollars in accumulated money.
Despite the huge investment in building a car, Apple is not bad for the money.
In 2019, Apple has 1.5 trillion in cash reserves. All of Tesla’s losses in 17 years are less than a fraction of Apple’s cash reserves.
If Apple wants to burn money to compete with Tesla, then Tesla is not a dimension and magnitude at all
In general, after the entry of new forces such as Apple and Tesla, the importance of core technologies in the automotive industry will become more and more prominent.
The competitors of traditional car companies are not limited to Audi, Mercedes-Benz, BMW, and Volkswagen, but Tech giants such as Apple, Google, and Tesla.
According to Goldman Sachs, Apple can use cars as a hardware platform to support its services. However, the low level of profit in the auto industry may mean that investors can see a limited contribution to profitability from this move.
But in fact, it is speculated from the focus of Apple’s patents that Apple is rushing to control the technological commanding heights of autonomous driving.
It does not pay attention to all the details of the vehicle but focuses on the most profitable intelligent driving and intelligent cockpit system links.
The other parts can actually be outsourced
If Apple has the ability to control some parts, it is not difficult to make a car that surpasses the standard of traditional car companies, and Apple’s model is still to hold the majority of profits in its own hands.
This is actually a continuation of Apple’s consistent style and model
In the mobile phone field, Apple only controls the core of the chip and operating system. Other parts are handed over to a group of suppliers, and OEMs are handed over to Foxconn, Wistron, and Luxshare Precision, etc.
In the automotive industry, Apple also wants to control only the parts that represent high technological manufacturing and high profits in the future.
After all, the industrial chain of the automotive industry is much more complicated than that of the mobile phone and PC industries.
A car has 30,000 parts and components, and the extremely large industrial chain places extremely high requirements on Apple’s integration capabilities.
According to Taiwan media sources, Apple has put forward stocking requirements to Taiwan’s auto parts manufacturers such as Heda, BizLink, Heqin, and Tomita.
From the perspective of the industrial chain
The current auto industry’s parts supply operation model has been relatively mature, auto market companies are highly dispersed, and the years of development of traditional auto companies have built their own community of interests.
Suppliers are also highly dispersed, and Apple is involved
It is basically impossible to achieve control of the mobile phone field in a short time.
From the experience of Tesla and other traditional automakers, it is very difficult to have a global electric vehicle or car production network.
Therefore, based on Apple’s inherent model to build cars, the integration capability of the supply chain is one of Apple’s biggest difficulties facing the auto market.
Besides, another big problem for Apple to make cars is that the early adopters are difficult to reproduce. The automobile category and the mobile phone category are two different user groups.
Photo by Romain Tordo on Unsplash
In the past, the launch of new products such as AirPods and Apple Watch first attracted the early adopters of the original fruit powder group and then popularized it among the mass consumer groups.
This made Apple’s new products a very smooth transition to the initial cold start stage of sales.
But in the automotive industry, this fan effect may no longer be useful. It is basically impossible for users in the mobile phone market and PC market to transfer to the car market.
On the one hand, mobile phones and cars are completely different products, with different product attributes, user needs, and interests, and there is no continuity and continuity at the demand level.
Secondly, cars are consumer category that focuses on decision-making. Apple cars are on the market and the price is not cheap.
Ordinary consumers can afford Apple mobile phones, but they need to buy Apple cars is based on a certain economic foundation
Apple’s inherent fan base The proportion of China Energy’s affordable consumption is not large.
Besides, it also needs to establish market trust. For automotive products, especially self-driving smart cars, safety is the most important.
It will take some time to verify the safety and handling experience. After all, a novice will end up making Consumers may be more willing to take a look at the car first.
Consumers may have a long wait-and-see period in the early stage of Apple’s introduction to the market
Although Apple is not bad for the money, it is difficult for Apple to tolerate long-term losses based on catering to the capital market’s expectations of investment returns.
From the analyst’s point of view: “If fully autonomous driving becomes the mainstream, then the time the driver spends on the road will again become the focus of competition for technology giants.”
In this regard, Apple’s full-scene ecosystem based on iOS services has given automotive hardware to it, and it will grow into a new service business.
Apple’s product ecosystem will achieve a closed-loop of overall travel from home, mobile travel to the company, and realize a unified lifestyle. Station-style “smart service”.
Therefore, in this sense, it depends on whether Apple is willing to sacrifice short-term returns and profits and use it as a core long track in the future for continuous investment.
Because it is too easy for Apple to make money, it may have underestimated the fierce competition and profitability of making cars and is unwilling to do long-term money-burning projects.
From the point of view of its revealed strategic purpose, Apple may make cars different from Tesla. Apple is more for profit, and Tesla still has the idea of changing the auto industry.
However, if Apple is determined to make a difference in the auto market, Tesla’s market value may be cut by half if there is a small success in a few years. | https://medium.com/datadriveninvestor/apple-icar-specs-and-rumors-299a41cfdedb | ['Arslan Mirza'] | 2020-12-28 13:14:20.721000+00:00 | ['Apple', 'Electric Car', 'Technology', 'Self Driving Cars', 'Tesla'] |
Unpaid UX Work Disguised As “Design Exercises”: How To Handle It | After a very enjoyable initial call where I had a great rapport with the interviewer, she explained that the next round of interviews was a design exercise: I was to redesign a section of their existing product. If the CEO liked it, I’d move on to the next round of interviews.
This is a request I hadn’t heard in years. They were seriously expecting me to redesign a part of their product for free. All those traumatic memories and emotions I experienced in my first years of working (and being exploited) as a designer came welling back up. I was livid. How could they be so exploitative? And this was a company making software for churches, no less! They should know better!
Many people have written about why you shouldn’t do this kind of work. In this article, I want to cover how to elegantly handle these requests.
How To Handle Requests Like This
Ask For Details About The Exercise
There is a fine line between ethical and unethical design exercises. You want to make sure you keep calm and ask for more clarification around the design exercise and what it entails before deciding.
An ethical design exercise:
Takes less than a day to complete
Is unrelated to their current company needs
Pays you for your time (ideally)
However, if it’s unrelated to their company needs, you have the problem of not fully understanding what it is like to work together. The hiring manager often will not have a design background, so they don't understand how to transfer over the outcomes of the design exercise to their company needs.
The compromise here is to do a one-hour brainstorming/whiteboarding session together on a current problem. Many people disagree with me on this, but I think a one-hour consultative session about their problems is time and energy you can invest in the right situations, for the right company.
The goal of design exercises should be to get a feel for your chemistry when working together, not to assess if you can design or not. By the time you are in this step of the hiring process, a good interviewer ought to know whether you have the hard skills.
The goal of design exercises should be to get a feel for your chemistry when working together, not to assess if you can design or not.
Try To Steer Them Away From This Request
After I heard the rough scope of this design exercise request, my first instinct was to tell them to fuck right off. But that was the years of pent up rage from being treated poorly early in my career flaring up.
You want to assume goodwill, or at least cluelessness, on behalf of the company making such a request. Try to educate them about what exactly they are asking for.
During our call, after hearing their request about redesigning a section of their product as a design exercise, I told them they are walking a fine line here, and that this is something you rarely see asked for in the design industry. She brushed me off and said, “Ah don’t worry, this won’t take too long, let me send you over the full exercise and you can decide then”.
Scope The Work, Then Send Them A Proposal
I let her send over the design exercise. It was even worse than I thought it would be.
It wasn’t one design challenge; it was two.
Exercise 1: redesign their availability tool for volunteers. Here is the deck they sent over for that. The expected deliverable is an InVision prototype.
Exercise 2: Redesign their blog. Deliverable is a set of high fidelity mockups.
So many emotions welled up inside me when I read this. It’s a hilariously bad request, but also sickeningly exploitative. My first instinct was to write up a profanity-laden email to the hiring manager. I already knew by this point that these are not the kinds of people you can have a good working relationship with, so why not tell them off? “No, I’m an adult now, I need to handle this differently”, I told myself.
I sent them this message back:
Hi *name*, thanks for sending this over. I’d really love to work with you guys. These two exercises would take me 8–16 hours to complete. It’s also spec work, which is considered highly problematic (some even call it unethical). I’d be happy to take this on as a small paid project. Let me know if that works for you. J
This way, you are giving them one last opportunity to steer the relationship into a more productive direction, while also setting boundaries and explaining why their behavior is not okay.
Of course, a company that behaves like this is likely not going to suddenly change course and behave ethically. She politely made clear that other candidates will do the exercise and that she will move forward with them.
What To Take Away From This
Most companies will try to achieve the highest possible financial outcomes for themselves, even if this means putting their workers at a disadvantage. It’s very unfortunate and frustrating that many people are in a situation where they have no other choice but to enter into such a exploitative relationship: 4 out of 5 american workers are living paycheck to paycheck.
I recently encountered a similarly exploitative situation: a company approached me about creating a curriculum for a 10-week UX design course. They gave me 4 weeks to do it and offered me $5000. This is a course that costs $4000 per person, and they will profit from it for years. I’d have to work 80-hour weeks to even have a chance to hit such an unrealistic deadline, and I’d walk away with not even enough money to cover the cost of living in San Francisco for a month. I made them a counteroffer asking for 50k plus 10% of course sales for 2 years. They declined(lol).
However, when I checked their website the other day, I saw that a very skilled and well-respected former coworker of mine ended up taking that terrible offer. That’s also why I can’t name this company — I don’t want to put this colleague in the spotlight. But it really pissed me off to see that this company not only found someone to take that terrible offer, but they also got someone who is really good.
You can’t stop companies from behaving like this, but you can decide to not play that game. You need to know your (self-)worth and have the courage to step away from toxic opportunities, even if you need the money. There are other opportunities out there where you don’t have to lose your self-respect. If everyone in the design industry were unified in saying “no” to these types of requests, companies would have to change their behavior. | https://medium.com/truthaboutdesign/unpaid-ux-work-disguised-as-design-exercises-how-to-handle-it-462925fea00c | ['Jamal Nichols'] | 2019-05-07 21:32:52.931000+00:00 | ['Work', 'Design', 'UX Design', 'User Experience', 'UX'] |
Spark: Aggregating your data the fast way | This article is about when you want to aggregate some data by a key within the data, like a sql group by + aggregate function, but you want the whole row of data. It’s easy to do it the right way, but Spark provides lots of wrong ways. I’m going to go over the problem, and the right solution, then cover some ways that didn’t work out and cover why.
This article assumes that you understand how Spark lays out data in datasets and partitions, and that partition skewing is bad. I’ve included links in the various sections to resources that explain the issues in more depth.
Even though I do understand the above, let’s be clear: it took experiments with every method that didn’t work to realise that they weren’t doing what I expected, and some pretty focused reading of docs, source code, and Jacek Laskowski’s indispensable The Internals of Apache Spark to find a solution that does work as I expect.
The problem: user level
Consider data like these, but imagine millions of rows spread over thousands of dates:
Key Date Numeric Text
-------- ------------ --------- -----------
ham 2019-01-01 3 Yah
cheese 2018-12-31 4 Woo
fish 2019-01-02 5 Hah
grain 2019-01-01 6 Community
grain 2019-01-02 7 Community
ham 2019-01-04 3 jamón
And what you want is latest (or earliest, or any criterion relative to the set of rows) entry for each key, like so:
Key Date Numeric Text
-------- ------------ --------- -----------
cheese 2018-12-31 4 Woo
fish 2019-01-02 5 Hah
grain 2019-01-02 7 Community
ham 2019-01-04 3 jamón
The problem: Spark level
The problem with doing this for a very large dataset in Spark is that grouping by key requires a shuffle, which (a) is the enemy of Spark performance (see also)(b) expands the amount of data that needs to be held (because shuffle data is generally bigger than input data), which tends to make tuning your job for your cluster parameters (or vice versa) much more important. With big shuffles, you can have slow applications with tasks that fail repeatedly and need to be retried.
So, given this problem, what you want to do is shuffle the minimum amount of data. The way to do this is to reduce the amount of data going into the shuffle. In the next section, I’ll talk about how.
As an aside, if you can perform this kind of task incrementally, you can do so faster and with less latency; but sometimes you want to do this as a batch, either because you’re recovering from data loss, you’re ensuring that your stream processing worked (or recovering from it losing some records), or you just don’t want to operate stream infrastructure (and you don’t need low latency).
The solution: Aggregators
Aggregators (and UDAFs, their untyped cousins) are the solution here because they allow Spark to partially perform the aggregation as it maps over the data getting ready to shuffle it (“the map side”) (see code), so that the data actually going into the shuffle is already reduced; then on the receiving side where data are actually grouped by key (by the action of the groupByKey), the same reduce operation can happen again.
Another part of what makes this work well is that if you’re selecting a fixed number of records per key (e.g. the n latest), you will also remove partition skew, which again makes the reduce side operations much more reliable.
Aggregator: example code
In this specific example, I’ve chosen to focus on aggregating whole rows. Aggregators and UDAFs can be used to also aggregate part of the data in various ways; and again the more you do cut down on the width of your data going into a shuffle the faster it will be.
See this notebook and the API docs for more examples
Full example of using Aggregator
Using real data, this took 1.2 hours over 1 billion rows.
The solution to a different problem: Aggregate Functions, UDAFs, and Sketches
UDAFs are the untyped equivalent to Aggregators, and I won’t say much more about them except that if you’re using a custom function that extracts exactly what you need, instead you’re going to get functionality very similar to sql groupby: you can get the original columns in the groupby clause, and then the rest of the columns in your results are aggregates. It’s as fast as an Aggregator, for the same reasons, including that you narrow your data going into the shuffle.
The code looks a little bit like this:
foods.groupBy('key). agg(max("date"), sum("numeric")).show()
Aggregate functions are simply built in (as above), and UDAFs are used in the same way.
Sketches are probabilistic (i.e. not fully accurate) but fast ways of producing certain types of results. Spark has limited support for sketches, but you can read more at Apache Data Sketches and ZetaSketches.
Non-Solution: groupByKey + reduceGroups
For some reason, this takes forever, and doesn’t do the map-side aggregation you’d expect:
in.
// version 2
in.groupByKey(_.key).reduceGroups((a: Food, b: Food) => Seq(a,b).maxBy(_.date)).map(_._2)
// version 3 - as above but with rdd
in.rdd.keyBy(_.key). // version 1in. groupByKey (_.key). reduceGroups ((a: Food, b: Food) => Seq(a,b).maxBy(_.date)).rdd.values// version 2in.groupByKey(_.key).reduceGroups((a: Food, b: Food) => Seq(a,b).maxBy(_.date)).map(_._2)// version 3 - as above but with rddin.rdd.keyBy(_.key). reduceByKey ((a: Food, b: Food) => Seq(a,b).maxBy(_.date)).values.toDS
I haven’t reproduced the query plan diagrams for any of these solutions, largely because none of them look distinctively crazy. Unfortunately, I also have retained the statistics from failed runs.
Using real data, over 1 billion rows, version 1 took 4.4 hours; version 2 took 4.9 hours, and version 3 failed after 4.9 hours.
Non-Solution: mapPartitions + groupByKey + reduceGroups
This ought to work. Maybe it can even be made to work. The idea is to do the map-side aggregation oneself before the grouping and reducing. This is what I tried, and it didn’t work for me. I suspect that I would have needed to not accumulate the whole map first before returning the iterator (maybe yield options, then flatMap the Option away)?
val latestRows = HashMap.empty[String, Food]
i.forEach((r: Food) => {
val latestFood = Seq(
latestRows.get(r.key), Some(r)).flatMap(x=>x).maxBy(_.date)
latestRows.put(r.key, latestFood)
}
latestRows.iterator
}).groupByKey(_.key).reduceGroups((a: Food, b: Food) => Seq(a,b).maxBy(_.date)).rdd.values in. mapPartitions ((i: Iterator[Food]) => {val latestRows = HashMap.empty[String, Food]i.forEach((r: Food) => {val latestFood = Seq(latestRows.get(r.key), Some(r)).flatMap(x=>x).maxBy(_.date)latestRows.put(r.key, latestFood)latestRows.iterator}).groupByKey(_.key).reduceGroups((a: Food, b: Food) => Seq(a,b).maxBy(_.date)).rdd.values
No timing result, as this fell over almost immediately.
Non-Solution: combineByKey
This one is kind of disappointing, because it has all the same elements as Aggregator , it just didn’t work well. I tried variants with salting the keys and such in order to reduce skew, but no luck. Fell over after 7.2 hours. | https://medium.com/build-and-learn/spark-aggregating-your-data-the-fast-way-e37b53314fad | ['Marcin Tustin'] | 2019-08-17 21:43:33.135000+00:00 | ['Spark', 'Apache Spark', 'Big Data', 'High Performance', 'Performance'] |
Creating a React Calendar Component: Part 3 | Section 1: Styling the calendar header using Flexbox
Before we begin, let’s add some CSS to the main BaeCalendar component so we can see it on the browser and normalize some HTML element styles.
.bae-calendar-container {
box-sizing: border-box;
box-shadow: 0 0 3px rgba(0, 0, 0, 0.5);
height: 100%;
width: 100%; h1,
h2,
h3,
h4,
h5,
h6,
p {
padding: 0;
margin: 0;
line-height: 1;
font-family: 'Varela Round', sans-serif;
}
}
As you can see, elements with a class called bae-calendar-container is given a 100% height and width of its parent element. This allows any users importing the component to wrap it in a different container so they can specify the height and width themselves. For now, it will take on the body element's height and width property so we can see it on the browser.
Aside from this, you’ll notice that we are taking advantage of SASS’s capability of nesting to style the h1, h2, h3... elements to normalize its default styles. Without going into great detail, nesting will translate this into a CSS code that will look like this:
.bae-calendar-container h1, .bae-calendar-container h2 {
padding: 0;
margin: 0;
line-height: 1;
font-family: 'Varela Round', sans-serif;
}
// And so on...
Above is the component with no styling that we will transform into the following.
Let’s take a moment to look at the HTML layout of the CalendarHeader component.
<div className="bae-calendar-header">
<div className="left-container">
<h1>{getReadableWeekday(selectDate)}</h1>
<h1>{getReadableMonthDate(selectDate)}</h1>
</div>
<div className="right-container">
<h3>{getYear(selectDate)}</h3>
</div>
</div>
All of the elements we see here are block elements which by definition is an element that starts a new line. So how do we end up with 2 columns? Thankfully, flexbox is a very useful CSS property we can take advantage of.
.bae-calendar-header {
display: flex;
}
The display: flex property defines an element to act like a flex element. A flex element acts differently. Essentially, the outer element is still a block element, but any inner elements take on a more fluid form. Unless display: flex is specified, we're not able to apply any other flex based CSS properties.
Here is an example of the component with display: flex .
Notice how the inner elements are no longer acting as block elements? The container itself wrapping the contents will maintain its block behavior, but this gives us freedom to control the layout of the inner elements. Let's make it so that the readable dates and the years end up on opposite sides by adding justify-content: space-between .
.bae-calendar-header {
display: flex;
justify-content: space-between;
}
Self-explanatory right? When flex is in a row format (e.g. left-container and right-container ), justify-content modifies the horizontal layout. Keep in mind, that if you are working with a flex element in a column format, justify-content will follow the new change and affect the vertical layout. The option we provide space-between does exactly what the name states. It provides an even spacing between all elements, but no spacing on the edges.
We’re making progress, but let’s see if we can provide some space to the edges and a border to show where the CalendarHeader component ends.
.bae-calendar-header {
padding: 15px;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
display: flex;
justify-content: space-between;
} | https://medium.com/dev-genius/creating-a-react-calendar-component-part-3-a69740dd8d43 | ['Elbert Bae'] | 2020-07-12 22:09:59.780000+00:00 | ['Web Development', 'CSS', 'React', 'Sass'] |
Implementing Payments in Your Web or Mobile App as a Startup in India | Choose wisely
The most fundamental factor in the life-cycle of a start-up: Money. 💰
Not just in the general scheme of fund-raising or making profits but especially if your company relies on online payments for your product to sell. For eg. An event ticketing portal, selling pickle online etc.
Having a smooth payment interface is a basic necessity in such cases.
Think of it, you’ve spent all your money developing a killer product that your potential customer gets really excited about buying only to turn them off at the last step: Paying for it. 😿 So let’s dig into how you can effectively implement digital payments.
Here’s what you should be thinking about.
Universal Instrument Support
Broadly ensure that your payment gateway supports all common existing methods such as UPI, Net Banking, Debit Cards, Credit Cards (including International) as well as a majority of Indian wallets, namely Paytm, PhonePe, Ola Money, MobiKwik and Freecharge.
Well, maybe not all.
Two important things to especially ensure:
Your gateway supports UPI, and has direct connectivity to Google Pay . This is because UPI has a large share of online payments in India due to it’s easy usability and these two are the kings of the UPI market in India. 👑
. This is because UPI has a large share of online payments in India due to it’s easy usability and these two are the kings of the UPI market in India. 👑 Your gateway supports wallets (especially Paytm) since it’s become an important part of the Indian Payment Ecosystem. In case your target audience is those from Tier 2–3–4 cities and beyond look for direct Paytm support since these audiences love using Paytm.
Costs and Transaction Fees
Now that you’ve narrowed down your choices on the basis of instrument support you must filter with respect to your budget. Categorising by fee plans there are basically two types of gateways:
Those that charge a set-up fee + a transaction fee.
Those that charge only a transaction fee.
Little costs add up at scale!
Start-ups should prefer the latter option as it’s pointless to invest in a high one-time set up/maintenance fee due to the uncertainties of usage and scale.
Most gateways charge you a percentage of the transaction (1%–4%, depending on the instrument) and / or a fixed fee per transaction.
I’ve collected links to the pricing of a trusted payment gateways in India :
Features That Matter!
You should look out for features such as
Payment Links
Subscription / Recurring Payments
Easy Payouts & Refunds
Settlements for multiple vendors
The key feature that you as a startup need to have in place is Good Customer Support.
Said no one ever
To avoid major losses in terms of money as well as reputation you need excellent customer support to prevent sudden issues from turning into disputes. Users hate pending payments and often dispute such transactions if there is no immediate resolution. 😰
This is what happens if they do that:-
When your customer disputes a transaction, the customer’s bank allows them to file a complaint and demand for a refund from you (the merchant).
Your bank will debit the disputed funds from your account and place these funds on hold, pending the outcome of the case.
Your customer’s bank will then initiate an investigation into the transaction and if the dispute is found to be legitimate, the held funds will be credited to your customer’s account.
If the customer’s claims are discovered to be without merit, the held funds will be reversed to the merchant’s account.The process of filing a dispute is called a chargeback.
You can imagine how damaging chargebacks can be not just in terms of time and money but also your startup’s reputation, hence the emphasis on good customer support.
Update: Do check for Gateway’s Merchant Bank and make sure it is backed by a trusted Banking Partner to avoid any loss of users and business. Recently, A major Payment App in India (I don’t want to name it) was down for 24 hours because Reserve Bank Of India had placed their Merchant Bank in a Moratorium period.
I hope you’ve gained some clarity from this article and are now better equipped to choose your ideal gateway. I will be covering new developments in the Indian payments space (LazyPay!) in upcoming posts, stay tuned! 😄
Gunish Matta [LinkedIn Facebook Github]
Thanks to Mahima Samant, Sumukha Nadig, Mashayikh Shaikh for edits and reviews of the draft | https://medium.com/novasemita/implementing-payments-in-your-web-or-mobile-app-as-a-startup-69d143ec0ef0 | ['Gunish Matta'] | 2020-03-27 13:16:03.058000+00:00 | ['Payments', 'Startup', 'Technology', 'Full Stack Development'] |
How we’re building computers out of DNA and proteins | You read the title correctly. We’ve heard of classical and quantum computing, but promising research and work suggest that future supercomputers will consist of biological components.
Biocomputing, or organic computing, is an emerging field of computation that’s completely different from anything we’ve seen before the 21st century — and the area is still rapidly emerging.
So what specifically is biocomputing, and how does it work?
Biocomputing refers to computation using DNA or other organic structures.
The word biocomputing is pretty self-explanatory — computing systems that use biological materials. While computational biology means modeling biology on computers, with biocomputing, biology becomes building blocks for computers. We’re now entering the world of wetware.
This probably sounds pretty abstract and implausible, and while the technology is experimental and theoretical, we know it can work.
Three main sectors of biocomputing right now:
Nano-biological motors: natural/synthetic living materials in parallel computational circuits 🔌
DNA computing: design computational wetware from the genome 🧬
DNA-based data storage 🤯
Before we dive into each of these points, it’s important to internalize that computation has always existed on a cellular level. DNA stores the core data of who we are in the form of base pairs -> RNA inputs data -> ribosomes perform logic operations -> outputs are in the form of synthesized proteins.
In a way, our bodies function as computers — with biocomputing we extrapolate and model these processes physically, outside of nature.
Nano-biological motors are being leveraged for parallel computation.
Through leveraging molecular machines at the nanoscale, past experiments and research have shown how these ‘nano-biological’ motors are the future of parallel computation.
With ordinary computers (like the one you’re using now 😉), tasks are done sequentially. Multiple tasks done simultaneously are actually lightning-fast switches between tasks within your processor. To exponentially increase the capabilities of our computers, parallel computing is necessary — multiple tasks truly being done simultaneously.
Parallel computing through nano-biological motors could be our new approach. Not only is there the potential for more complex computation, but cost and energy efficiency are also possible. In fact, nano-biological motors use 1% of the energy consumed by modern electronic transistors.
So how do they actually work?
The key ingredients: proteins and artificial labyrinthine structures.
Extremely small maze structures are built out of artificial components, with pathways and exits representing answers to computational problems/tasks. Myosin guides protein filaments alongside artificial pathways — thus motors moving through a maze, with their solution representing the answer to a computational problem.
These biocomputers are the size of a book, with the same computational power for mathematical problems as a supercomputer. Additionally, this model was able to solve the ‘Subset Sum Problem’ faster than classical, sequential computers. The advantages are clear — all because of parallel computing!
Check it out!
finding multiple solutions in parallel
DNA also has the potential to solve computational tasks and problems.
Nano-biological molecular machines that move through computational mazes are cool, but DNA (deoxyribonucleic acid) has also shown high potential for computation. With DNA computing, silicon chips are replaced with strands of DNA.
Quick refresh: in strands of DNA, the building blocks of all organisms, information is represented using A’s, G’s, C’s, and T’s -> Adenine, Guanine, Cytosine, and Thymine. These are called nucleotides or bases. Pairs of letters in double-helical DNA are referred to as base pairs.
DNA Computing with Algorithms
In DNA Computing, an algorithm’s input is represented as a sequence of DNA. Instructions are then carried out by lab procedures on DNA, like gene editing. The result (answer to the computational problem) is some property of the final DNA molecule.
The most prevalent example of DNA computing in action is Adleman’s Experiment — following the aforementioned process, DNA molecules were used to solve the Travelling Salesman problem.
Adleman’s experiment opened up the possibility of programmed biochemical reactions. However, this was only on a small scale -> as problem complexity increases, so will DNA volume. But most importantly, the idea of DNA for parallel computation checks out — biological components can act on DNA strands simultaneously, enabling parallel computing.
Self-Assembly and Programmability
Strands of DNA also show promise to become programmable in terms of self-assembly, structure, and behavior — like computer-based robotic systems. Programmable biochemical systems that can sense surroundings, act on decisions and more are being developed. That being said, this isn’t necessarily artificial intelligence. Instead, DNA molecules execute these functions based on reactions from stimuli/interaction.
A huge area in this field is DNA Origami -> the ability for single (1D) strands of DNA to form into 2D shapes and sheets, and then self-assemble into 3D scaffolds.
Biochips — DNA Computing through Self-Multiplication
In short, silicon transistors and chips are becoming obsolete. They’ve reached their maximum potential for optimal size and computational capabilities. Functionally, our silicon tech has met their atomic limit. Biocomputing to the rescue!
DNA sequences are the building blocks of ‘biochips’ — demonstrating their direct potential to replace silicon chips. Millions of DNA strands multiply themselves in number iteratively to perform calculations. With biochips, as opposed to Adleman’s experiment, DNA computes through self-multiplication instead of editing within a lab.
Think of a hydra — when you cut off one of their heads, two more grow back. By the same token, these genetic sequences expand as more computation is performed. Therefore, DNA computers expand as they solve computational problems and tasks.
In summary — DNA has the potential to pioneer small-scale and effective computation.
DNA is the future of digital data storage.
In my opinion, this is the cream of the crop when it comes to biocomputing. Not just because of coolness, but because of its potentially massive implications.
Technology is at the forefront of society — whether for education, finances, entertainment, or dating. While obviously providing several benefits, an issue is exponentially growing on a daily basis — data generation. With over 80% of America owning a smartphone (not to even mention the rest of the world), our tech usage rapidly generates and creates data. Despite controversy around topics of data collection, a key problem lies with this phenomenon — we don’t have enough storage.
In fact, conservative estimates predict that by 2025, data storage systems will only be able to store half of our generated data.
Twist Bioscience
Building more hard drives and expanding AWS aren’t the best solutions to the problem, economically and feasibly. Ideally, data needs to be densely stored at an extremely small scale.
If DNA already stores data in the form of nucleotides in organisms… and is very small… what better way to tackle this issue? Introducing DNA Data Storage.
DNA is extremely small, stable, and will never be obsolete — they’ve existed since the beginning of life.
A single nucleotide can take up four values, so they’re analogous to two binary bits:
A -> 01
G -> 10
C -> 00
T -> 11
A typical human cell has 6 billion base pairs in the form of a double helix, organized in chromosomes. Scientists estimate that a single cell of DNA can encode 1.6 gigabytes — which scales to 100 zettabytes in the entire human body. For context, 100 zettabytes are more than humans have collectively generated throughout time.
Meaning, by leveraging DNA — we could store all the data the world has generated throughout history. And it would take up the amount of space as a human body.
Digital DNA Data Storage is a 6-step Process
Encoding: the binary data that needs to be stored on the DNA sequence is converted to nucleotide values.
Synthesis: our encoded DNA sequence is actually designed and made through synthetic biology/gene engineering
Storage: the encoded DNA is stored for later usage
Retrieval: when it’s needed, DNA is retrieved
Sequencing: the DNA is sequenced or ‘read’, reading and writing the molecule’s nucleotide sequence
Decoding: the sequenced DNA (list of A’s, G’s, C’s and T’s) is converted back into binary and is readable by a classical computer.
The process for data storage may seem complex, but as sequencing and synthesis technologies continue to improve over time, this framework will eventually become very simple. Right now, Twist Bioscience uses silicon as a substrate for DNA synthesis. They own a novel platform for manufacturing synthetic DNA on a massive, parallel scale — leaving little boundaries for effective and efficient DNA data storage.
Most importantly, the advantages of DNA are clear and powerful:
high storage density
can be 3D, as opposed to 2D disks or chips
can last centuries/millennia before maintenance is required
large demand and economic impact
Material costs of less than a fraction of a penny per gigabyte of stored data, is estimated for the encoding and decoding for DNA. Compared to average USB costs of $3/gb, the overwhelming benefits are clear.
And that’s Biocomputing!
The race has just begun to discover and identify ways DNA + other molecular structure can revolutionize computation. Experimentation and research has been done, so we know ‘it works’ — but to truly build super-bio-computers we need exponentially more awareness and people working in this field.
While the theory in itself is awesome, there’s genuine massive potential economically and computationally through biology.
Biocomputing suggests several advantages over quantum and classical computation.
While the idea of futuristic computers made from cellular components is extremely interesting, there’s a genuine opportunity for biocomputing in the world of computation. DNA is highly stable and in contrast to quantum computing, doesn’t need to be stored at unnatural temperatures for functionality. In the case of digital DNA storage, we anticipate centuries before error-correction processes are necessary.
The overwhelming advantage is how better computation can be achieved on a radically smaller scale. As opposed to bulky classical supercomputers and quantum computers, at the size of a book or human body, we can satisfy the computational and storage needs of the entire world’s population.
Crazy. | https://medium.com/swlh/how-were-building-computers-out-of-dna-and-proteins-6d3f1e160fe8 | ['Joshua Payne'] | 2020-03-05 03:09:22.412000+00:00 | ['Data', 'Biocomputer', 'Computing', 'Future'] |
Write for Analytics Vidhya | Analytics Vidhya is on a mission to create the next generation data science ecosystem. We aim to make data science knowledge accessible to as many people as possible. We aim to play an active role in enabling people to create products enabled by Data Science.
So, if you can create pieces of work which make data science easier to apply, we promise to put it in front of the world.
What can you expect by writing for Analytics Vidhya?
Visibility in front of our community — We have an awesome community passionate about data science. Millions of people across the globe use Analytics Vidhya as their source of knowledge. Your work will be put in front of this community.
— We have an awesome community passionate about data science. Millions of people across the globe use Analytics Vidhya as their source of knowledge. Your work will be put in front of this community. Recognition as a subject matter expert — Many authors have got unparalleled recognition through their posts and work on Analytics Vidhya. There is no reason why you will not get it.
— Many authors have got unparalleled recognition through their posts and work on Analytics Vidhya. There is no reason why you will not get it. Get feedback on your content from our Editors — We know how to make content which people love. We will provide you with feedback based on this experience.
What do we expect from you?
High-quality content- Our focus is on creating high-quality content as it helps the community effectively. We maintain this bar across all pieces of work and are looking who share the same passion.
Our focus is on creating high-quality content as it helps the community effectively. We maintain this bar across all pieces of work and are looking who share the same passion. Consideration of any feedback provided on your writing. We will provide you feedback through private notes on Medium.
provided on your writing. We will provide you feedback through private notes on Medium. We need the content to be free from any plagiarism. Any plagiarism, if detected would lead to a ban of the author and deletion of all the articles.
Here is what you need to do next?
Kindly go ahead and fill this form for us. We will add you as a writer on our publication and you can go ahead and share your stories with us!
Who owns the work?
Any content which is published on the Analytics Vidhya Medium channel would continue to be owned by the Original Author completely.
Can you earn money by publishing on Analytics Vidhya?
The aim of Analytics Vidhya is to make data science knowledge accessible to everyone. In order to do this — we need a healthy mix of free articles and paid articles. We encourage people to share as much as they can with the community for free, however, authors are free to put their articles behind the Medium payment wall.
Got more questions?
Please feel free to share your questions through comments below — we will be happy to talk/discuss. | https://medium.com/analytics-vidhya/why-write-for-analytics-vidhya-6c7ea8f0aeef | ['Team Av'] | 2019-06-19 12:14:35.325000+00:00 | ['Analytics Vidhya', 'Machine Learning', 'Writing', 'Data Science', 'Contribute'] |
Modern CI/CD Pipeline: Github Actions with AWS Lambda Serverless Python Functions and API Gateway | Modern CI/CD Pipeline: Github Actions with AWS Lambda Serverless Python Functions and API Gateway
Modernizing web application development and deployment
Photo by Morning Brew on Unsplash
Cloud is here to stay and more and more developers are seeking ways to effectively incorporate the cloud. Whether you are a startup recognizing limitations of your on-premise hardware and local machines or a large enterprise curious about how to slowly offload on-prem workloads, this tutorial will be insightful. I describe a phase 1 AWS architecture including Github, API Gateway, and AWS Lamba python functions. This represents an initial tutorial exposing developers to the AWS cloud adoption learning curve.
Outline:
Notional Architecture Purpose and Goals How to Setup Limitations and Problems of AWS Lambda Thoughts and Conclusion
Notional Architecture:
Architecture Diagram
The architecture above describes the basic CI/CD pipeline for deploying a python function with AWS Lambda. The developer represented above can pull and push their git repository to github using git. We configured the github actions YAML file to automatically update the AWS Lambda function once a pull request is merged to the master branch. The API Gateway represents a trigger which runs the AWS Lambda python function and returns a result. In this way, a Data Scientist (or analyst, frontend developer, or another developer) can trigger and access results in a quick and succinct fashion.
N ote: AWS Lambda Functions exists in AWS Lambda, a compute resource inaccessible by the developer.
Purpose and Goals
This architecture represents an effective way for teams to build a robust CI/CD pipeline for application development and deployment. Though this architecture is “incomplete” with respect to a full-blown web application, this design can represent the first phase of building a web application. For anyone interested in offloading local compute resources, AWS Lambda serverless functions can be an effective way to leverage cloud in a cost-effective manner (AWS Lambda Functions are part of AWS always free tier). So many times, development teams design a lofty cloud based-architecture for application deployment (or migration) and fail. Conducting Proof-Of-Concepts and slowly incorporating cloud is more prudent.
How to Setup
The primary challenge is understanding the YAML file and correctly formatting the “main.py” file which is executed in AWS Lambda. To setup a new workflow where developers can configure YAML file for function deployment, click “actions” in the github repository. This will provide instructions for creating a new workflow or selecting from a pre-existing workflow consistent with the deployment architecture. I created a default workflow and then searched other workflows to find a template that deployed to AWS Lambda (but there is probably an easier way leveraging a pre-configured workflow for AWS Lambda).
Note: line 57 is actually zipping github repo into a zip file called “test-dep-dev”. Ideally, to organize a Lambda function deployment, I recommend creating a folder in your repo and zipping that folder for deployment. For example, you might create a folder called “Lambda_Function_Scrapping_Data” which contains all the dependencies needed to deploy your function. Line 57 would look something like “zip -r test-dep-dev.zip ./Lambda_Funtions_Scrapping_Data”
Once you have configured the YAML file, check the AWS Lambda Function page to see if the function has been updated. To troubleshoot, create a test case within the AWS Lambda Function. The issue I first faced was a malformed syntax for the python return value (here is some documentation that might help). Below is a simple example of what a AWS Lambda Python function should look like. The response format is most important.
Note: in AWS Lambda function, the handler, is defined as the file_name.function_name. For example, if the file name is main.py and the function is my_serverless_function, the handle should be defined as main.my_serverless_function.
Instead of recreating the wheel, this is a great video demonstrating how to create an API Gateway for the AWS Lambda Function:
Limitations and Problems of AWS Lambda and possible alternatives
Before deciding to use AWS Lambda its important to consider the limitations. AWS Lambda functions will timeout after 15 minutes which is fairly generous, but for more involved enterprise level workloads, this might not be enough. More importantly your packaged functions are limited to 250mb unzipped and 50mb zipped which include the size of the packages, coded functions, and other dependencies. These are just some of the limitations that were especially applicable to the use case I am addressing. You can find other limitations here.
Given that AWS Lambda is essentially a shared compute instance running a containerized function, for more flexibility you can provision an EC2 instance. Obviously this results in higher costs. If you are worried about high availability, I am not sure what the SLA offers are for AWS Lambda, but generally, AWS compute instance users should assume 90% availability.
Conclusion
There is a bit of learning curve to AWS Lambda and serverless functions, but given their general applicability in most modern web application development, I consider it a worthwhile investment of time. If you are working on a project and want to incorporate some cloud, AWS Lambda is fairly worthwhile from a process and cost standpoint. End users can easily call the endpoint and retrieve data. There seems to be plenty of documentation and tutorials to address nearly any use case. Overall, I would recommend AWS Lambda. For more flexibility with a compute instance, I recommend Oracle Cloud Compute Instances. They are free (less than 1 OCPU), but documentation might be difficult to find and understand. | https://towardsdatascience.com/modern-ci-cd-pipeline-git-actions-with-aws-lambda-serverless-python-functions-and-api-gateway-9ef20b3ef64a | ['Ary Sharifian'] | 2020-08-03 03:54:53.679000+00:00 | ['API', 'Python', 'DevOps', 'Data Science', 'AWS Lambda'] |
3 Ways To Get A Little More Life Out Of Your Computer | 3 Ways To Get A Little More Life Out Of Your Computer
Putting a little structure around the tech in your business can have a huge impact on productivity and profitability.
TECH STRETCH ARMSTRONG
There are many reasons why you might want to get a little more life out of your computer.
You’re in a cash crunch.
You’re trying to be nice to the environment.
Or, you’re just not ready to upgrade your computer.
Whatever your reason, here are 3 ways to get a little more life out of that computer of yours.
CLEAN THAT DISK
One way to bring some life back into your computer is to clean up your hard drive.
It’s a simple do-it-yourself process that doesn’t require you to downloading any weird apps from the internet.
Safety tip — Don’t ever download a cleaner app from the internet. (never ever!)
Now on Microsoft Windows 10, there’s a built-in tool named Disk Cleanup.
You can run it as is or jump to the More Options section and do more stuff like deleting old restore points and remove applications.
Next, take a look at the list of programs you have installed on your computer.
If it’s an app you don’t use anymore or just used once.
I’d recommend that you get rid of it.
Now on to the next step.
CLEAN THAT COMPUTER
Yes, even computers need cleaning.
Dust build-up can do things like cause your computer to run hotter.
And running hotter means running slower.
Depending on your skills you may want to take your computer to shop to clean or if you’re a DIY kind of person, grab yourself a can of compressed air from your local hardware store and while you’re at it, grab a vacuum cleaner.
If you have a desktop computer, remove the case cover and carefully vacuum out any dust and build-up you see.
Then finish it off with some compressed air.
Make you’re doing this in a well-ventilated area — like outside.
If you have a laptop, grab that compressed air and blow out the blow-holes.
And while you’re at it, blow out all that dead skin under the keyboard.
And last but not least.
KICK IT UP A NOTCH
I hesitate to recommend this last step but what the heck.
Another way to get a little more life out of your computer is to upgrade it.
The 2 things you should consider upgrading are the hard drive and memory.
If your computer has an older hard drive you’ll want to upgrade to a solid-state hard drive.
Obviously, there’s a cost involved in upgrading the hard drive but if it gets you another 12 to 18 months of life that would help.
The other item would be to add some memory.
Memory is fairly inexpensive and can be popped in or swapped out fairly quickly.
THE LAST MILE
If these 3 options don’t work for you then it’s likely time to suck it up and buy a new computer.
One last thought for you and this is the Big Think.
Ideally, if you’re managing the tech in your business effectively you’ll have a Life Cycle Replacement Plan in place.
With a Life Cycle Replacement plan in place, you’ll ideally have new expenses budgeted.
And if all of the stars are aligned, you’ll be replacing your computers as they round that last mile of life.
Of course, all this takes a little planning and forethought.
Putting a little structure around the tech in your business can have a huge impact on productivity and profitability.
The key is having the right technologist in your corner.
Happy computing!
🌴 | https://medium.com/getyourtechright/3-ways-to-get-a-little-more-life-out-of-your-computer-677efaf249ef | ['Rob Leon'] | 2020-11-30 18:02:41.383000+00:00 | ['Startup', 'Computers', 'Get Your Tech Right', 'Small Business', 'Managed Service Provider'] |
A Complete Breakdown Of Donald Trump’s 72nd Unpresidented Week As POTUS | Before we dive into what we’ve learned about the detention centers immigrant children are being held in, let’s make one thing clear: The inhumane policy of separating immigrant children from their parents at the border (who are systematically prosecuted) is a Trump administration policy, not Democratic legislation. The President could end this right now, but instead, he is lying about who is responsible for it.
Now, let’s begin. NBC News reported that the U.S. is running out of room to house the children who are being separated from their parents at the border, and they are being placed into holding cells that don’t have adequate medical resources:
Border agents and child welfare workers are running out of space to shelter children who have been separated from their parents at the U.S. border as part of the Trump administration’s new “zero tolerance” policy, according to two U.S. officials and a document obtained by NBC News. As of Sunday, nearly 300 of the 550 children currently in custody at U.S. border stations had spent more than 72 hours there, the time limit for immigrants of any age to be held in the government’s temporary facilities. Almost half of those 300 children are younger than 12, according to the document, meaning they are classified by the Department of Homeland Security as “tender age children.”
The report goes on to say:
The overstays at border stations are a result of a backlog at U.S. Health and Human Services (HHS), the agency responsible for sheltering migrant children longer term and matching them with relatives or foster parents in the U.S. The agency’s Administration for Children and Families has 11,200 unaccompanied children in its care and takes 45 days on average to place a child with a sponsor, according to a spokesperson.
There are reports of children as young as 53 weeks old being taken. Once the kids are placed with sponsors, they are sometimes moved to different states, leaving the parents in the dark about their whereabouts.
The administration implemented this policy as a deterrent. On May 7, Attorney General Jeff Sessions issued an order, which DHS Secretary Kirstjen Nielsen implemented, that requires all undocumented immigrants crossing the border be referred for criminal prosecution…including migrants seeking asylum from violence.
The move would also mean that even if immigrants caught at the border illegally have valid asylum claims, they could still end up with federal criminal convictions on their record regardless of whether a judge eventually finds they have a right to live and stay in the US.
Many of these migrants are asylum seekers, fleeing violence and cruelty from Central America only to be welcomed by more cruelty from the country that has “Give me your tired, your poor, your huddled masses yearning to breathe free” enshrined on its Statue of Liberty. NBC News reported:
From October 2017 to mid-April, before the new prosecution strategy officially went into effect, more than 700 children were reportedly separated from their parents at the border.
In spite of the fact this is his administration’s policy, President Trump continues to falsely blame Democrats.
While Trump tries to cast blame on them, Democrats are trying to expose this horrific policy. Senator Merkley (D-OR), who wasn’t allowed to enter a Texas detention facility and had the police called on him, spoke about what he witnessed.
The United Nations has condemned this as an illegal human rights violation.
This policy doesn’t exist in a vacuum; it comes from a President who has made the dehumanization of Latino immigrants central to his political platform. Last month, President Trump said of unaccompanied minors who are crossing the border: “They look so innocent. They’re not innocent.” Trump said this in spite of the fact that only 56 out of 250,000 unaccompanied minors apprehended by border patrol were suspected or confirmed to have gang ties.
Also last month, President Trump once again conflated MS-13 with Latino immigrants, calling them “animals.” This fear-mongering rhetoric goes back years. But as we can see, this dehumanization has moved far beyond rhetoric and has gone even further than the inhumane ICE raids we’ve seen.
Imagine for a moment you are a young immigrant mother fleeing Guatemala. As you join a caravan of asylum seekers heading towards the United States, you begin to hear about the President of the United States tweeting negative things about your initiative. You keep marching onward. You then hear the President call you an animal. You keep marching onward because you and your child’s safety are too important. You finally arrive at the border, and rather than being welcomed and treated with dignity, your child is ripped from your arms without explanation and you are put in shackles.
And if you put yourself in the child’s shoes, you are put into a detention facility and then flown to a different state, still with no explanation as to what is happening to your parent.
This is not America.
America has done unconscionable things in the past from slavery to Japanese internment camps to unjustified wars, but we must learn from that historical indecency, not embrace it.
As I’ve said before, the beauty of America is that despite who we were in the past or who we are today, we as a people have the power to choose who we will be tomorrow.
We will not stop reporting on this story until this inhumane policy ends.
Meanwhile…
President Vladimir Putin bragged about his close relationship with President Trump and said: “we regularly talk over the phone.” Important to note, Putin just made his first big trip abroad since “winning” re-election and has been trying to assure Russians that he and Trump do in fact have a good relationship in spite of Trump sending weapons to Ukraine and the new sanctions.
As we know, President Trump blames much of his Russia investigation woes not on the fact his campaign had contacts with Russians, but on Attorney General Jeff Sessions’ recusal. He criticized him again on Twitter.
Now, let’s put Trump’s above statement in context.
Jeff Sessions was nominated for Attorney General in November 2016…that was before the public was aware the Russia investigation even existed and before FBI Director James Comey revealed the Trump campaign was a subject of the investigation.
So, what President Trump is essentially saying is that he wouldn’t have nominated Sessions for Attorney General if he knew he wouldn’t protect him from an investigation the public didn’t know existed yet?
If President Trump’s claim is true, it appears that Trump was aware there was underlying wrongdoing in his campaign that he expected Sessions to cover up. Otherwise, his claim wouldn’t make sense.
Read our rundown of the Sessions-Trump feud here.
Politico reported:
Mitch McConnell is canceling all but a week of the Senate’s traditional August recess, hoping to keep vulnerable Democrats off the campaign trail and confirm as many of President Donald Trump’s judicial and executive branch nominees as possible.
Buzzfeed News reported:
All parties involved in Summer Zervos’s defamation lawsuit against Donald Trump — including the president himself — should be deposed by Jan. 31, 2019, a New York judge said Tuesday. Summer Zervos, a former contestant on The Apprentice who had made public accusations against Trump during the campaign about “unwanted sexual misconduct,” sued him after he said “these allegations are 100% false” and began calling her and others “phony people coming up with phony allegations,” among other statements.
Historical Context: It was the Paula Jones sexual harassment case that garnered Bill Clinton’s deposition, which ended in a perjury/obstruction of justice impeachment referral from Independent Counsel Ken Starr.
President Trump canceled the Super Bowl Champion Philadelphia Eagles White House visit after he realized there were only a few players that agreed to attend. He falsely blamed it on the national anthem protests. The problem is, the Eagles never protested the anthem.
This didn’t stop the Trump propaganda network Fox News from using photos of players praying and conflating them with the protests. They later apologized.
Secretary of Education Betsy DeVos made a ridiculous claim that they are not looking at the #1 cause of school deaths when it comes to school safety.
Day 503: Wednesday, June 6
The Swamp Thing | https://medium.com/rantt/a-complete-breakdown-of-donald-trumps-72nd-unpresidented-week-as-potus-2cf82aa52bde | ['Ahmed Baba'] | 2018-06-11 00:51:45.895000+00:00 | ['Politics', 'Unpresidented', 'Donald Trump', 'Government', 'Journalism'] |
Bulldozer Brain vs. Rabbit Brain in Meetings | There are countless articles on the Internet teaching people how to speak up more in meetings. A few of them are not complete bullshit.
What if we think about it the other way around? Instead of focusing on individuals who tend to be quietly thinking in meetings, how about we try to make meetings more effective and inclusive in the first place?
Awareness
First and foremost, we should recognize that people think in different ways. We have bulldozer brains, rabbit brains, and anywhere in between. This awareness on its own is fundamental for us to think about meetings differently.
Structure
Structure is critical. Before the meeting, prepare an agenda and share it with the attendees in advance. Even better, share written documents that frame the meeting discussion and give people time to read through them. In meetings, make sure everyone has a chance to speak up. After meetings, share notes and encourage follow-ups.
I love our meeting structure at Medium, which reflects our inclusive culture. The check-in and tension rounds are specifically designed to give everyone an opportunity to speak up. Participating in check-in rounds makes it easier to speak up later in the meeting. I recently learned from a post about check-in rounds that “pre-meeting talk” is actually a psychological research topic. Academic literature suggests that pre-meeting talk is a strong indicator of meeting effectiveness.
Attention
Pay extra attention to folks who are quiet. Are they deep in thought? Do they need a few more seconds? Do they have enough context? Do they show any signs that they want to speak up but can’t find an opening to start talking? Call them out if they are comfortable being called out.
Pause
From time to time, we can collaboratively pause for a few seconds for everyone to take some time to think and an opportunity for everyone to jump into the discussion. Taking a pause is especially important before we switch topics because once the topic is switched, it pretty much cuts off the opportunities for bulldozer brains to express anything.
Follow-up
Follow up with people after meetings. Give them more opportunities to share their perspectives outside of meetings, either in person or in writing. Show them that you want to listen to them and value their thoughts. It will encourage them to speak up more in the future.
At Medium, we have an internal version of the site called Hatch. It is for everyone in the company to share their thoughts no matter if they are slow or fast thinkers in meetings. It became a unique part of Medium’s culture. | https://medium.com/open-sourced-thoughts/bulldozer-brain-vs-rabbit-brain-in-meetings-bf143ab1e83 | ['Xiao Ma'] | 2018-10-27 03:47:51.931000+00:00 | ['Meetings', 'Inclusion', 'Work', 'Psychology', 'Random Thoughts'] |
God Is Not Santa | God Is Not Santa
“He wants to move us from being consumers to contributors,” Gray notes.
Photo by Mike Arney on Unsplash
Lately, I’ve been so lost. And I know it’s because I’m clinging to things that will never make me whole. Even though my life looks like it’s going great on the outside, on the inside, I’ve felt like everything is falling apart. I know it’s not valid materially, but it is valid spiritually.
And that’s because I need God now more than ever. And lately, I’ve felt this existential angst because, as ashamed as I am to admit it, I haven’t been putting God first. I’ve put other things that aren’t God before Him: work, success, money, approval from my friends and parents, and more. I haven’t put people first either, which leads me to be even more ashamed.
I know I can’t keep up like this — not because I can’t, but because it’s a spiritually empty way of pressing forward. It’s unsustainable. There’s always going to points where there’s simply nothing you can do. It’s not like I want to lose hope, but these days, it’s hard to find hope in the world. I find hope in God, and by extension, children of God like my family, friends, students, and neighbors.
In Colossians 3:2–3 (ESV), Paul writes:
“Set your minds on things that are above, not things that are on Earth. For you have died, and your life is hidden with Christ in God.”
For me, putting God first and everything else second is key. Since COVID, I’ve given all sorts of excuses for why I haven’t engaged with Scripture and church the way I used to — Zoom church was too distracting, worship sounded worse online, and I was simply growing into more and more of a cynic. According to Luna Greenstein at the National Alliance on Mental Illness, religion and spirituality have a positive impact on mental health. They help people tolerate stress better by generating peace, purpose, and forgiveness.
Of course, I know that everyone has their own walk with their faith. I certainly did — I didn’t believe in God until I was in college and came to the faith after a lot of Christian people helped me while I was in crisis in college. I found God after wondering what was up with this whole Christianity thing that led some campus ministers and friends to be so kind when I didn’t deserve it.
But other people who were forced into faith when they grew up certainly have a different story, and I know many people who are hostile to any mention of faith at all. According to Greenstein, religion also gives people a group of people to connect with over familiar beliefs, which research shows leads to reduced suicide rates, alcoholism, and drug use.
For me, I don’t feel like I’ve been a good person, let alone a good Christian, the last couple of months. I’ve bickered way too much about politics. I’ve let my pride and self-righteousness wash over me. I simply have not loved others more than myself or my need to prove how right I am.
“A new commandment I give to you, that you love one another: just as I have loved you, you also are to love one another.” Jesus says in John 13:34.
I know, however, that my struggle in my faith is not isolated. Many of us hold the wrong perception of God as someone who can grant all our wishes. I recently watched a TV show about figure skating, and one figure skater gets a hip injury that essentially ends her skating career. She struggles with her faith as a result and prays to God every day to make her be able to skate again.
Her brother comes into her room, knowing something is wrong. The two of them are very culturally Christian — they came of age in a Chinese church and a strong church community. The girl elaborates on what she’s praying to God about, and says she will doubt God if He doesn’t grant her wish. But the brother talks to her about how that mindset is misguided.
“God is not Santa,” he tells her.
Every day, he used to pray to God to make him not gay, but that isn’t how God works. Instead, he rethought the way he thought about God. Prayer isn’t so you can change God or your circumstances. Prayer is how God changes us. The rest of the show is about the main character’s struggle with bipolar disorder, but it also illustrates how God restores even the most dysfunctional of relationships.
Along the same analogy of God not being Santa Claus, Darwin L. Gray at Christianity Today attributes our equating of God with Santa as a result of our consumerist society. Instead:
“He wants to move us from being consumers to contributors,” Gray notes.
As Christians, it’s important to ask ourselves if what we ask God for are to advance Jesus’s kingdom or just make us more comfortable with life. Of course, we should be honest with God if we want anything, but hold the key phrase in the Lord’s prayer first: “thy will be done.” For me, I want to be more successful, be a better teacher, advance in my career, become a better writer, and make more money. I want to become a better runner and a better athlete. And while it’s not wrong to want material things, God always comes first, and we have to love God even if we don’t have the things we want.
“You see, we exist for God. God does not exist for us,” Gray says.
For a Christian, the greatest risk is feeling like you don’t need God. There are a lot of times I ascribe the successes in my life to my own merit, whether my savviness or my hard work; it’s easy to feel like I worked hard and deserve success. However, even those tools were given by God. Every success and even every failure is a gift from God to grow closer to Him. Feeling like you don’t need God is always a reckoning moment for me that I’m growing prideful and arrogant, and need to be put in check.
This Christmas is one of those moments, where I’m growing close to God as my faith matures again. In this moment like anyone, I need God more than ever. The less I think of God, the less I pray to thank God, the more spiritually empty I am, much like I was before I became a Christian. This Christmas, loving God and loving others before anything else is the path forward, which is more gratifying than material riches any day of the week. As Paul writes in 1 Timothy 6:17: | https://medium.com/this-shall-be-our-story/god-is-not-santa-7bb46ddb5e5b | ['Ryan Fan'] | 2020-12-25 17:37:31.623000+00:00 | ['Christianity', 'Religion', 'Spirituality', 'Self', 'Nonfiction'] |
Porque o Double Diamond não é o suficiente | Why the double diamond isn’t enough
The design process doesn’t end with prototyping. Unfortunately, designers are more often than not pulled off a… | https://brasil.uxdesign.cc/porque-o-double-diamond-nao-e-suficiente-f0a587b95be2 | ['João Traini'] | 2020-09-29 14:26:02.008000+00:00 | ['Design Process', 'Ux Translations', 'Startup', 'Product Management', 'UX']
12 Quick Tips For Success on Medium | Sometimes you just want some quick and dirty tips without all the extra commentary. Hey, I get it. Hopefully these tips will help you take your work on Medium even further.
Decide first and foremost what Medium success looks like for you.
Want to earn $500 a month? Or reach 20,000 followers? Pick a goal--or a series of goals--to help you focus on what you’re actually trying to do.
Tag each of your stories with five tags.
No, I don’t mean your responses to different stories. Just be sure to give every standalone story that you write five tags to help your work catch more eyes.
Run your story through Grammarly before hitting publish.
I didn’t always do this, and yes, I did make a ton of swypos. Awkward. Grammarly won’t catch every error, and sometimes it will amuse you with unnecessary suggestions. But it’s a helpful line of defense at any rate.
Give your stories more obvious titles.
Flowery and creative headlines may not be your friend if you write a lot of essays about life and the issues that matter to you. My top performing headlines are so obvious they’re almost boring. Like “We Don’t Really Know Our Parents Until We Grow Up.”
Don’t fear the use of strong statements or metaphors.
If you’re writing an essay on Medium with strong emotion or emphasis, using metaphors can work for you. Some writers shy away from these for fear that blanket statements might turn readers away. But the right audience will understand nuance. “The Fragile Male Ego Has Ruined Online Dating" doesn’t mean that all men have fragile egos. My target readers get that. So take some risks with your phrasing.
Accept that Medium swings left.
Sorry not sorry? Quality, narrative-driven stories and essays do well here, but I’d say they swing left for a reason.
Business Insider said this about Ev Williams:
He once said of President Trump’s use of Twitter, "It’s a very bad thing, Twitter’s role in that ... If it’s true that he wouldn’t be president if it weren’t for Twitter, then yeah, I’m sorry." He had believed that Twitter’s ability to let anyone say anything, to a wide audience, would mean that "the world is automatically going to be a better place." But, "I was wrong about that," he said.
Now you might not be on the liberal spectrum of things, and I don’t think you have to be. But you likely do need to accept that your writing may very well be good for a specific niche--and not Medium members at large.
Give your stories a hopeful ending.
It’s no secret that my work on Medium hasn’t always been so positive. I have written in the midst of great upheaval in my personal and professional life. I’ve opened up a lot about my mental health. But as time has passed, I’ve gotten better at improving my tone. Guess what? Hopeful endings fare better. Readers get more from positive stories than downtrodden ones.
Don’t try to be an expert here. Unless you’re actually, you know, an expert.
There’s nothing wrong with writing about the issues that matter to you, even if you aren’t an expert. The problem is when you pretend to be one. It’s okay to be honest about who you are and why you write what you do. Not everybody cares if you’re an expert, but most readers will care if you’re pretending to be something you’re not.
Quit worrying about your views.
Seriously. Write great content that means something to you. Write great content that other people legitimately want to read because you’re using solid images, clear headlines, and a unique voice. 100,000 views means nothing if those viewers aren’t reading and engaging with your work.
Engage in the community here.
Some top writers on Medium are able to engage a little, and some are able to engage a lot. It might ebb and flow for you, and I think most readers get that. The point is that it helps to take some part in the community that makes Medium so damn special. Foster a little give and take whenever possible.
Learn how Medium works, and then work WITH it.
If you want to be successful on Medium, but you’re constantly complaining about the rules and platform itself, you’ve got to question what you’re really trying to do. It’s much more effective to learn how Medium works and then work with the system instead of complaining that the system doesn’t work your way.
Don’t underestimate the power of curation.
Lindy recently wrote a great piece about everything she’s learned regarding curation through various topics on Medium. Go read her story. Frequent curation can change your life here. It changed mine:
Before frequent curation
3 Months after frequent curation
It pays to familiarize yourself with the curation guidelines and terms. I am not spending much time marketing my work at all. Curation helps my stories reach more readers who are already interested in my topics.
In my experience over the past (nearly) nine-ish months, a person can set an intention for success on Medium and go after it in a strategic way. You’ve just got to be willing to put in the work and the research. My friend Glenna Gill has compared it to AA--if you work the system, it works! | https://medium.com/awkwardly-honest/12-quick-tips-for-success-on-medium-765800d8a3b0 | ['Shannon Ashley'] | 2019-11-23 20:31:22.879000+00:00 | ['Medium', 'Writing', 'Success', 'Writing Tips', 'Goals'] |
AI Has Become So Human, That You Can’t Tell the Difference | AI Has Become So Human, That You Can’t Tell the Difference
The truth about machines, and their ability to blend with us
You might be wondering if machines are a threat to the world we live in, or if they’re just another tool in our quest to improve ourselves. If you think that AI is just another tool, you might be surprised to hear that some of the biggest names in technology have a clear concern for it. As Mark Ralston wrote, “The great fear of machine intelligence is that it may take over our jobs, our economies, and our governments”.
If you disagree with this idea, that’s OK, because I didn’t write the previous paragraph. An Artificial Intelligence (AI) solution did. I used a tool called GPT-2 to synthetically generate that text, just by feeding it with the subtitle of this article. Looks pretty human, doesn’t it?
Using GPT-2, you can synthetically generate text (highlighted in blue) just by providing an initial input (marked in red). Source: Transformer Hugging Face
GPT-2 is a text-generation system launched by OpenAI (an AI company founded by Elon Musk) that has the ability to generate coherent text from minimal prompts: feed it a title, and it will write a story, give it the first line of a poem and it’ll supply a whole verse. To explore some of its capabilities, take a look at Fake Fake News, a site that uses GPT-2 to generate satire news articles in categories like politics, sports or entertainment.
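If you want to try this kind of prompt completion yourself, the open-source transformers library exposes the public GPT-2 checkpoint in a few lines of Python. This is a minimal sketch, assuming the released "gpt2" model and an arbitrary length limit; the exact settings behind the paragraph quoted above aren't known:

from transformers import pipeline, set_seed

set_seed(42)  # make the otherwise random sampling repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "The truth about machines, and their ability to blend with us"
result = generator(prompt, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])  # the prompt followed by the model's continuation

Each run produces a different continuation unless the seed is fixed, which is part of why the output can feel eerily human one moment and incoherent the next.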
But the big breakthrough happened this year when OpenAI launched the next generation of GPT-2 (called GPT-3): a tool so capable that it can figure out how concepts relate to each other and discern context. From an architecture perspective, GPT-3 is not an innovation at all: it simply takes a well-known approach from machine learning, artificial neural networks, and trains them with data from the internet. The real novelty comes from its massive size: with 175 billion parameters, it’s the largest language model ever created, trained on the largest dataset of any language model.
Example of GPT-3 to create an email message. Source: WT.Social
Having the ability to be ‘re-programmed’ for general tasks with very little fine-tuning, GPT-3 seems to be able to do just about anything when conditioned with a few examples: you can ask it to be a translator, a programmer, a poet, or a famous author, and it can do it with fewer than 10 training examples. If you’re curious about its performance, The Guardian showed it could synthetically write a whole news article based on an initial statement, one that took less time to edit than many human-written articles.
More than words
Take a look at the image below. What do you think of this apartment? Would you consider renting it?:
Looks good, right? Well, there’s one minor detail, and it’s that the place doesn’t exist. The whole publication was made by an AI and is not real. None of the pictures, nor the text, came directly from the real world. The listing titles, the descriptions, the picture of the host, even the pictures of the rooms. Trained with millions of pictures of bedrooms, millions of pictures of people, and hundreds of thousands of Airbnb listings, the AI solution from thisrentaldoesnotexist.com was able to create this result. You can try it yourself if you want.
These fake images were produced using Generative Adversarial Networks (GANs for short), which are artificial neural networks capable of producing new content. GANs are an exciting innovation in AI which can create new data instances that resemble the training data, widely used in image, video and voice generation.
Some examples of edits performed by GANPaint Studio over the yellow areas
GANs contain a “generator” neural network and a “discriminator” neural network which interact in the following way:
The generator produces fake data samples to mislead the discriminator, while the discriminator tries to determine the difference between the fake and real data, evaluating them for authenticity.
By iterating through this cycle, both networks get better and better until the generator produces realistic, plausible outputs. Because the discriminator “competes” against the generator, the system as a whole is described as “adversarial”.
Example of a GAN training process. At the end, the distributions of the real (in green) and fake samples (in purple) converge. Source: GAN Lab
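To make that cycle concrete, here is a minimal, hedged sketch of the alternating updates in PyTorch, trained on a toy two-dimensional distribution; the network sizes, learning rates, and "real" data are invented purely for illustration and have nothing to do with the image models mentioned above:

import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

# Tiny generator: noise in, fake sample out
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Tiny discriminator: sample in, probability of being real out
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(5000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # stand-in for real training samples
    fake = generator(torch.randn(batch, latent_dim))  # the generator's forgeries

    # 1) The discriminator learns to label real samples 1 and fakes 0
    d_loss = loss_fn(discriminator(real), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) The generator learns to make the discriminator call its fakes real
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

The two loss terms pull in opposite directions, which is exactly the "competition" that gives the method its name, and also the source of the training instability discussed below.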
Also, GANs have some special capabilities: the data used for training them doesn’t need to be labelled (as the discriminator can judge the output of the generator based entirely on the training data itself), and adversarial networks can be used to efficiently create training datasets for other AI applications. GANs are definitely one of the most interesting concepts in modern AI, and we will see more exciting applications in the coming years.
Final thoughts
I know it’s shocking, but there’s no reason to be scared (at least not yet) by these technologies. None of the examples provided are magical, and they are the result of scientific research that can be explained and understood. Above all, although some AI outputs can give all the appearance of being “intelligent”, they are still very far away from any human cognition process.
GPT-3 possesses no internal representation of what words actually mean, and lacks the ability to reason abstractly. Also, it can lose coherence over sufficiently long passages, contradict itself, and occasionally produce non-sequitur sentences or paragraphs. GPT-3 is a revolutionary text predictor, but not a threat to humankind.
On the other hand, GANs need a wealth of training data to get started: without enough pictures of human faces, a GAN won’t be able to come up with new faces. They also frequently fail to converge and can be really unstable, since a good synchronization is required between the generator and the discriminator; and once a model is generated, it lacks the generalization capabilities to tackle different types of problems. They can also have problems counting, understanding perspective and recognizing global structures.
No single breakthrough will completely change the world we live in, but we’re witnessing such a massive change in the way we interact with technology that we should prepare ourselves for the world to come. My suggestion is: learn about these technologies. It will ease your way across these extraordinary times. | https://medium.com/ai-in-plain-english/ai-has-become-so-human-that-you-cant-tell-the-difference-d62ed2f22775 | ['Diego Lopez Yse'] | 2020-09-18 21:03:15.534000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'NLP', 'Data Science', 'Digital'] |
The Co-op Close-up: Data Science at ICBC | SFU’s professional master’s program in computer science trains computational specialists in big data and visual computing. All students complete a paid co-op work placement as part of their degree. In this feature, we examine the co-op experiences of some of our big data students.
Sagar Parikh completed his bachelor’s degree in information technology at Gujarat Technological University, India, before joining the professional master’s program at SFU. During his undergraduate studies, he completed internships working with Java, Python, cloud services, and graphic design.
Can you tell us about ICBC? What is it like working there?
The Insurance Corporation of British Columbia (ICBC) has been the sole provider of mandatory, basic auto insurance in the Canadian province of British Columbia since its creation in 1973. ICBC also sells optional insurance and provides driver licensing and vehicle registration and licensing. The crown corporation has multiple offices across BC with its head office in North Vancouver. I am working as a part of the strategic analytics team, which falls under the finance division. My primary work is on historical claims data at ICBC.
ICBC is committed to creating a workplace where employees know they are valued and where employee experience is a priority. Its core values are at the heart of the organization, and ICBC’s employee commitment guides it in promoting growth, well-being, and a culture of caring — for the customers and the community.
Can you tell us a bit about the project(s) you are working on in your co-op position?
One of the main responsibilities of the data science team is partnering with ICBC’s claims division to improve the claim handling and settlement process. ICBC receives more than 600,000 claims every year, with more than 10% of these having an injury component. One way analytics can help is by identifying claims requiring more attention, and thus ensuring fair settlement. The information required to do so is not always present in a structured format in the relational databases, but oftentimes it is present in the notes — unstructured data — taken by staff.
One of my main tasks during this internship is to establish a benchmark, or proof of concept, for utilizing NLP techniques to extract useful information from this huge untapped source along with generating various predictive models. Such models would be able to assist in real-time analysis and identification of claims requiring more attention. The approach was to start simple, with bag-of-words methods, to establish benchmarks of performance, and later move to more complex techniques, such as word embeddings and language models.
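To give a flavour of what such a bag-of-words benchmark typically looks like, here is a simplified sketch in scikit-learn. The file, column names, and label are made-up placeholders; I obviously can't share ICBC's actual data, features, or models.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

notes = pd.read_csv("claim_notes.csv")  # hypothetical extract: one row of free-text notes per claim
X_train, X_test, y_train, y_test = train_test_split(
    notes["note_text"], notes["needs_attention"], test_size=0.2, random_state=42)

model = make_pipeline(
    TfidfVectorizer(max_features=50000, ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(max_iter=1000, class_weight="balanced"))

model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

A simple pipeline like this establishes the performance floor that more complex approaches, such as word embeddings or fine-tuned language models, have to beat to justify their extra cost.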
How did the big data program prepare you for your co-op?
While working on multiple projects here at ICBC, I have interacted with and used multiple big data and machine learning tools and technologies such as Pyspark, Spark NLP, Hadoop, Sklearn, Elephas, Keras, and so on. The applied knowledge gained from the big data courses at SFU combined with my background in computer science and my interest for data science came in more than handy during this work term. Working with big data tools such as Hadoop and Spark was especially smooth because I worked with them previously in my big data lab courses. These courses were particularly useful to me for this internship since I’m working on huge volumes of data fairly regularly. The datasets that I work on sometimes have anywhere from 3 to 30 million rows! The assignments and project work in the big data lab courses prepared me to handle such huge quantities of data and cluster resources efficiently.
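As a small illustration, a routine exploratory query over a table of that size in PySpark might look like the sketch below; the path and column names are made up for the example.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-profiling").getOrCreate()

claims = spark.read.parquet("/data/claims_history")  # hypothetical path to a multi-million-row table
(claims
    .filter(F.col("loss_year") >= 2015)
    .groupBy("claim_type")
    .agg(F.count(F.lit(1)).alias("n_claims"),
         F.avg("settlement_amount").alias("avg_settlement"))
    .orderBy(F.desc("n_claims"))
    .show(20))

Because Spark distributes the work across the cluster, the same few lines scale from a laptop-sized sample to the full table.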
What are your most valuable takeaways from this co-op experience?
The highlight of the work term has been the opportunity for constant learning and feedback. Positive feedback and appreciation have helped me alter my path whenever needed. In terms of technical skills, the best takeaway for me was the practical NLP knowledge I gained during this term. I learned a lot and am still learning about the overall flow of NLP projects, and the intersection of machine learning and deep learning with NLP. Armed with this knowledge, I am now more confident than before in pursuing a career in data science, especially in the NLP domain.
Do you have any advice for other co-op students?
The ability to quickly absorb and adapt to the transition from coursework to co-op played a part in my success. The key to this ability I believe is effective communication. A lot of my success has been due to the constant check-ins and bi-weekly meetings with my supervisor and manager. They answer my questions patiently and point me in the right direction when I’m stuck. So don’t be afraid to ask questions.
The role of a data scientist requires one to truly understand the data before generating insights or building predictive models. In my experience, understanding the business is the first step towards understanding the data, so my advice is to try to learn as much as possible about the company you are working for, and in particular, understand the data flow within the organization. | https://medium.com/sfu-cspmp/the-co-op-close-up-data-science-at-icbc-876b052af4d4 | ['Kathrin Knorr'] | 2019-11-26 17:26:13.228000+00:00 | ['Data Science', 'Big Data', 'NLP', 'Machine Learning', 'Co Op'] |
How Today’s Marketers Are the Architects of Consumers’ Choice | Framing Good Default Choices
This may have already happened to you: you take out a new insurance plan, but one year later the plan changes its price. Instead of switching, you don’t bother and keep the same plan.
Economists Samuelson and Zeckhauser have described this human tendency as status quo bias. Individuals prefer to choose and keep the default option rather than make the effort to select other options.
For example, they studied how university professors used their pension plans. They found that more than half did not change the way their contributions were allocated, even though their circumstances, such as marital status, may have changed.
From the marketer’s point of view, this psychological bias is sometimes used to take advantage of consumers’ weaknesses, as shown by automatically renewed subscriptions that the consumer makes no effort to change. But it can also be used to help consumers make simpler and more effective choices.
For example, in French restaurants, there is always the option of a menu of the day that the customer can choose if they don’t want to think about it. In good restaurants, the chef then serves the customer with the dish that they feel is most appropriate at the time and season. This allows them to discover new things and often even eat better.
In many administrative, banking, and insurance services, there is usually no default choice, or the default choice is not the most optimal for the consumer. The consumer thus gets lost in the range of choices available and, if they don’t have a lot of time, they often make bad choices.
A good idea for the marketer would be to invent a good default choice, one the consumer can fall back on without overthinking. | https://medium.com/better-marketing/how-todays-marketers-are-the-architects-of-consumers-choice-b59a4974a3d5 | ['Jean-Marc Buchert'] | 2020-11-19 18:46:05.648000+00:00 | ['Customer Experience', 'Decision Making', 'Marketing', 'Consumer Behavior', 'Customer Service']
5 Things You Should Know About Sex Writers | 5 Things You Should Know About Sex Writers
Clearing up common misconceptions
Photo by Richard Jaimes on Unsplash
I got an email some days ago from a fellow writer. She asked what it was like to be a writer who specializes in writing about sex. It was a sensitive question because it felt like she was honest — she wrote on how she had been wanting to fix her niche on sexuality but didn’t have the courage to. It was expected because being a sex writer is not an easy task especially when it comes to criticism.
In the past, I’ve gotten emails that screamed criticism — hurtful ones that stabbed me in the bones, heart, and brain, but for some reason, I kept writing. I simply put out my email because I wanted to know what people had to say about my part-time career. I write about sex because I like it.
But, that’s not what everyone thinks. They think sex writers, all including erotica writers write because we’re obsessed and addicted psychos who do nothing but have sex all day. As expected, because of this thought, rude advancements are made.
Yes, I have gotten roasted for loving what I do, both on Medium and social media, I have been called a whore writer a couple of times by people who discovered my personal blog. I’m well aware that I’m not the only writer who faces such slurs. At the same time, I have been commended for my works, and have continued to love what I do regardless.
However, I wrote this to clear up some misconceptions about sex writers:
1. Sex writers are not sex-addicted psychos
This particular disrespectful phrase was the subject of an email I received regarding an article I wrote about anal sex and prostate massage. In the minds of a few people, sex writers are freaks who have sex all day. It’s a wrong thought.
Yes, sex writers write about sex for a living — many of which make a decent amount because of the demand rate. Still, before sex writers are writers, they are people first. This means they have lives, and jobs and have sex when they want to. Not because sex writers are accustomed to having sex or are sex addicts, but because they are normal people who live normal lives regardless of their career.
I have self-published books in the past where I loved my characters so much that I created them to be perfect. Both erotica writers and writers who deal solely with sex and relationship advice have that word in common — love. So, sex writers are not sex-addicted freaks who write about sex for some cheap reason like being horny; they do it because they love it.
2. Sex writers have families and relationships
Sex writers are not any different from writers. They go about their daily activities, write in the study room, or at work, and come back home to meet their spouse. Stereotypes that go around convincing people that sex writers are so much into sex that it is so hard to have a family or relationship are totally disrespectful.
Many sex writers including myself have relationships, not situation-ships. They date for a long time frame and coexist happily. Many are married with kids, some are divorced, and some are single parents. In other words, just the way we see it for normal people, it’s the same way for sex writers.
Although some sex writers don’t get supported by their acquaintances, some are blessed with people who support them. Those who are influencers and affiliate marketers of sex products like vibrators do have the opportunity of testing them out with their partners before review.
3. Sex writers aren’t always kinky
Kinky sex and BDSM are sensitive topics, and because most sex writers write about them, people think kink is what the writer is all about. There are times when sex writers are completely okay with kinkiness, trying it out, and connecting with a partner who shares the same views.
There are also times when sex is just sex. Not all sex writers are kinky. Yet this assumption has gotten to the point where people think it’s okay to mess with a writer during one-night stands or even relationships by being rough without permission.
4. Sex writers are not always down for sex
There is a certain woman in the Sex Education series: Jean Milburn, Otis’s mom. She’s usually open to talking about sex and giving honest advice about it — you can call her a sex therapist. Most sex writers have this character’s traits in their daily lives.
I’ve had friends compliment me for being utterly open about sex discussions. To me, I feel comfortable talking about sex because, to be honest, it’s not a forbidden topic. When I talk, I educate.
Nonetheless, it has come to a point where people don’t respect boundaries. They try to pry into my privacy, assuming I’m always down, ready, dirty for sex. That mentality is outright wrong. I know a lot of writer friends who have gotten calls, emails, and messages on social media making sexual advances and sometimes even threats.
The truth is, talking about sex doesn’t make sex writers available for sex. And assuming so is outright disrespectful.
5. Sex writers don’t always orgasm during sex
Advice on how to take sex to a better level is one of the most-written-about topics because of the high demand. High demand means there are a lot of people out there who have problems with their sex life and are willing to see what they can do to improve it. A lot of people, including sex writers.
Sex writers don’t always have perfect relationships and sex partners. They also aren’t perfect when it comes to sex, and a sex writer isn’t always a sexologist either. Sex writers are people who put their words out there to help both themselves and others when it comes to sex.
Positively writing about sex and orgasm has made a fair number of people think all sex writers are so good with sex that it’s impossible to not have an orgasm. No, sex writers don’t always orgasm during sex. It’s just the same for them as it is for people. Sometimes, sex is good, sometimes, it isn’t. | https://medium.com/be-unique/5-things-you-should-know-about-sex-writers-a016daf65e9f | ['Jada Mikel'] | 2020-11-12 12:07:55.757000+00:00 | ['Writing', 'Relationships', 'Sexuality', 'Advice', 'Work'] |
5 Scientifically-Proven Ways Happy Couples Help Their Relationships Thrive | 5 Scientifically-Proven Ways Happy Couples Help Their Relationships Thrive
And that you can implement in your own relationship today.
Photo by Rosie Ann from Pexels
Anyone who’s in a relationship wants it to be a happy one. We’re not masochists here; joy seems to be the elusive part of life that everyone wants but sometimes has trouble getting.
And the answer isn’t always fancy dinner dates or expensive couples therapy. You don’t need something physically outside of yourselves to be happy in your relationship.
Very little is needed to make a happy life; it is all within yourself, in your way of thinking. — Marcus Aurelius
In fact, turning to people who already are in happy relationships is probably your best bet. Who better to learn from than the people who already have what you want?
When I first ventured into the world of dating and relationship writing, I realized that relationships don’t come naturally to everyone. Unless you had an excellent set of parents to show you the way, most of us end up learning through trial and error.
Yet many people don’t. They get into a relationship and think that’s where all their efforts end. But they are sadly mistaken.
Through research and writing about the world of love, I’ve realized that a relationship without intention is a recipe for disaster. But a relationship with some effort and awareness? Now those are the kinds that last a lifetime.
So how do happy couples act differently than other people? Well, there are a few ways that you can model in your own relationship today:
Actively listen to each other.
Open any relationship book, and you’ll find communication at the top of the list of ways to improve your relationship. And while people want to feel understood and heard, is simply listening enough?
According to a 2014 study on listening between couples, listening just to hear what someone says isn’t enough. You have to take things a step further.
Active listening is done by hearing what someone says, rephrasing what they said back to them, and showing non-verbal attentiveness (aka, eye contact and not looking at your phone). This helps your partner know you understand the ideas and feelings they’re trying to explain.
Seeing as feeling understood and accepted is a basic human need, it’s no surprise that happy couples actively listen to each other, from small complaints to bigger issues.
They keep being playful with each other.
Something that helped me realize my current relationship has what it takes to stand the test of time is how we can joke around. We’re just as playful today as we were a year and a half ago when we started dating.
And this is an anomaly for me because my relationships in the past lacked this quality. What’s worse is, I’m not the only one. Research shows that adults become less playful over their lifetime due to stress, work, and responsibilities.
And maybe you’re even reading this and thinking playing in a relationship is silly; that it’s not constructive or has no significant effects.
Well, you’d be wrong.
The University of Halle’s René Proyer asked couples to reflect on why playfulness helps their relationship. They had multiple reasons.
Couples feel more connected and happier when they play. They feel it helps them have better sex and communicate more effectively. Point blank: their research showed it’s a great skill for happiness between couples.
They focus on solutions rather than problems.
Every couple is going to have issues. No two people will see eye-to-eye on every problem. And you know what? That’s perfectly OK because arguing is healthy.
Now you might be thinking, “The arguments I have with my partner are far from healthy.” That’s a fair point. Arguments that feel like they suck the life from you and leave distance in your relationship aren’t the healthy arguments I just referred to.
I’m talking about the kind of arguing that happy couples do: solution-oriented arguing.
When there’s an issue, rather than complain, the person comes to the discussion with solutions about how things can be improved. Doing this feels less like an attack on the other person and more like you’re on the same team.
That’s right. What you argue about doesn’t matter nearly as much as how you argue.
They look for opportunities to be happier.
Do you believe that your mindset affects your reality? That simply changing the way you think changes the world around you?
“We suffer more in imagination than in reality” — Seneca
If not, you might want to think twice about that. Because happy couples inadvertently live by this motto, and it’s one of the reasons why they’re so damn happy.
To prove this point, let’s talk about the game Tetris. More specifically, the time a Harvard professor had students play it for several hours and, after the study, the students began seeing Tetris everywhere, from the grocery store to their dreams.
Their brains were primed for seeing objects that fit perfectly together. But this phenomenon goes beyond computer games. It explains why, once you’re having a crappy day, you more easily come across negative people and experiences.
But the opposite is true. When you think positively, you become more grateful and optimistic. Apply this to your relationship, and all of a sudden, you’re appreciative and noticing things in your relationship that you love.
They don’t strive to be a perfect couple.
As a result of growing up in a media-laden world, you might have ideas of how “perfect” couples act: they never argue or go to bed angry. They eat dinner together and enjoy talking about their feelings. After three years of dating, they get married and have kids.
Essentially, there’s a template that people believe exists for a happy relationship. But truly happy couples know that a load of BS.
Happiness isn’t a one-size-fits-all construct. What works for one couple won’t work for others. In fact, there are only several aspects to a relationship that truly matter; the rest is just personal choice.
Cornell professor Robert Sternberg suggests that lasting relationships come down to three components: intimacy, passion, and commitment. Happy couples have a mixture of all three.
The rest of their relationship is molded to whatever makes them happy. If that includes marriage, then so be it. But if it doesn’t, that’s perfectly OK. They find what works best for them rather than trying to fit into a societal mold. | https://medium.com/hello-love/5-scientifically-proven-ways-happy-couples-help-their-relationships-thrive-832bd0a93364 | ['Kirstie Taylor'] | 2020-12-20 00:33:10.085000+00:00 | ['Life Lessons', 'Psychology', 'Life', 'Relationships', 'Love'] |
Survivor’s Guilt: The Psychological War After My Daughter’s Traumatic Birth | Photo by Colton Jones on Unsplash
My daughter did not come into this world in a normal fashion. The typical build up to the birth of a child in America — the smiles, Lamaze classes, the swerving through traffic when the contractions start, the push, the scream, the glow, and taking the baby home all bundled up a couple of days later — none of that happened for us when my daughter was born.
Nothing that anyone would consider normal happened at all.
Survivor’s guilt is something that people experience when they’ve survived a life-threatening situation and others might not have. It is commonly seen among Holocaust survivors, war veterans, lung-transplant recipients, airplane-crash survivors, and those who have lived through natural disasters such as earthquakes, fires, tornadoes, and floods. — Dr. Diana Raab, Psychology Today
Six weeks into the pregnancy we were told my wife had a miscarriage. We were encouraged to schedule a D & C to get rid of “it.” A week later they found a heartbeat.
At that same trip to the emergency room, they discovered my wife had a tumor.
20 weeks in, my wife’s water broke. We were told that if we could hold on for another four weeks my wife could be checked into the hospital because then our daughter would be “viable”. We were consulted by one doctor to go ahead and “terminate” because most babies in our daughter’s situation have a hard time surviving. The doctor said statistically babies in our daughter’s situation had a 1% chance of survival.
At 26 weeks, after two weeks in the hospital, there were more complications. The doctor made the call to prep for delivery around midnight. I’m calling people on the phone to let them know, but no one is picking up. I’m all by myself as the doctor starts giving me a rundown of both real and hypothetical scenarios, some of which involve the deaths of my wife and daughter.
They cut my wife open. Pulled my daughter out, then I followed them back into a room where they stuffed my daughter full of tubes and hooked her up to a ventilator. She weighed 2 pounds, 2 ounces.
In the room, the doctor says there’s a good chance she’ll make it past the first week. They’ll re-evaluate after that. If she makes it.
I don’t get to hold my daughter for 8 days. I only get to stare at her through the glass and trust she’s going to be okay.
And the days progress. 149 of them. Teams of doctors. Countless nurses and nurse-practitioners. Countless plans, evaluations, theories, hours, sitting, waiting in limbo.
Finally, we get to come home.
Why did I get to come home? | https://medium.com/invisible-illness/survivors-guilt-the-psychological-war-after-my-daughter-s-traumatic-birth-db49e045f9d2 | ['Real Coach Monk'] | 2020-10-19 18:59:13.347000+00:00 | ['Hospital', 'Mental Health', 'Parenting', 'Mental Illness', 'Mental Health Awareness'] |
A Crack at the Edges | A Crack at the Edges
Microsoft’s new take on designing for accessibility
In eighth grade, two days before my year-end math test, I slipped while playing soccer and broke my wrist. I was demonstrably bad at the game, and as a thirteen-year-old I tried to compensate for it by kicking the ball with all my strength. Usually, the ball went careening off, my teammates groaned, and that was it. But on that day the wind picked up and the ground turned suddenly on its axis. Then, smack! An excruciating pain rippled through my arm.
Two hours later, I was in the emergency room, and the orthopedic surgeon was pulling my arm apart at its ends so my dislocated bones could snap back into place. I was given painkillers, and my arm was placed in a cast from elbow to palm. Only the fingers twiddled free. The next morning, I appeared at the office of my school’s vice-principal — a hawkish Jesuit priest with a cane — and asked for special treatment.
“What do you need, exactly?” Father Boris wondered aloud.
At first, I was unable to answer him. Extra time? Mid-exam breaks? A transcriber? An exemption from having to take the test at all? I was as yet unaware of the limitations of my new disability. We eventually settled on twenty extra minutes. My parents bought me a special pen with a curved nib. I wrote the exam leaning into my desk in an awkward spinal twist. The experience left me exhausted and painfully sore. I hadn’t forgotten any math since I broke my wrist, but my grade was significantly lower than usual. Mobile dexterity shouldn’t have much to do with mathematical ability, yet there I was.
In the world of design, there are two broad perspectives on disability. The first is the medical model, which explains disability as a personal inadequacy, i.e., a failing on the part of the disabled person. According to the medical model, an individual who does not have the use of their legs is inherently less able than an individual with working legs. The second is the social model, which explains disability as a social inadequacy, i.e., a failing on the part of society to support and empower its citizens. According to the social model, a successful society must enable citizens with various abilities to reach their potential. It’s a subtle distinction, but it makes all the difference in the world.
This past week, at the Interaction Design Association’s conference in Lyon, France, I attended a workshop led by Margaret Price, who is part of the Inclusive Design team at Microsoft. Margaret is petite and sprightly, with large owl eyes and the kind of voice that might interrupt a conversation mid-sentence without seeming rude. She is trained in philosophy, and she considered becoming an academic for many years. Then, during a moment of luminous clarity, she realized that her ability to impact the world was magnified by the tangible problems faced by industry. She decided to become a consultant. In her first project, she was approached by a condom seller that wanted to expand their sales and reach more customers. She convinced the company to move away from traditional ads and instead focus on promoting sexual health awareness among women. The campaign was a wild success. “I realized that my talent lay in reframing problems outside of the context in which they arise,” Price told me over dessert at the repurposed sugar factory where the conference was being held.
Microsoft hired Price to create a new paradigm for guiding design at the company. In some ways, the move is an attempt to atone for an industry-wide failure to make technology accessible for everyone. Graphical user interfaces famously exclude people who are blind, most devices cannot be used by people with mobile dexterity impairment, and aesthetic appeal often trumps the need to make websites and mobile apps accessible to differentially abled users. Price represents a new wave of professionals who seem determined to rein in some of the indulgent, self-referential tangents that currently splinter the field. She is on a mission to change the world; you can see it in the way her eyes wander off during frivolous conversations.
During the three days I spent in Price’s company, she radically altered my perspective on designing for accessibility. The product of Price and her team’s work is the Inclusive Design Toolkit, which provides a radical new framework to design for accessibility in the digital space. Inclusive design may be contrasted with universal design, which comes from the world of architecture and professes a commitment to the “one size fits all” philosophy— most buildings are designed for everyone and must therefore be used as they are by all people. Inclusive design, on the other hand, espouses the philosophy of “one size fits one.” It derives from the social model of disability. According to the Toolkit, disability is not a personal health condition but a “mismatched human interaction.”
The traditional approach to designing for accessibility is to create a product and then tweak it to make it more accessible. Inclusive design proposes a markedly different approach — if the hallmark of good design is that it is user-centered, why not create products that are specific to users with particular disabilities? According to inclusive design, products should not be designed for everyone, but for specific edge cases — a mobile phone for the deaf, a navigational app for the blind, an intelligent math tutor for children with dyslexia.
Beyond the moral case, Microsoft’s Toolkit also makes a market case for inclusive design. It proposes the concept of a persona spectrum, i.e., a spectrum of analogous users who might benefit from a product designed for the edge. Here are four examples:
A product meant for people with a missing limb might benefit people with a broken arm as well as new parents. A product for people who cannot speak might also be used by people who have laryngitis or a heavy accent. A product for the blind might benefit people with cataracts as well as distracted drivers. A product for the deaf might benefit people with ear infections or those who work in loud places (such as bartenders).
Examples of persona spectra from the Inclusive Design Toolkit.
The market case for designing a product for people without an arm (26,000 people in the U.S.) is strengthened immeasurably when you realize that you’re also designing for people with broken arms (13 million) and new parents (8 million). Suddenly, your potential customer base is more than 20 million people in the U.S. alone. Disability isn’t an edge case any more.
At Lyon, I participated in the Student Design Challenge, a hackathon that runs alongside the Interaction Design Association conference. This year’s Challenge was sponsored by Microsoft’s Inclusive Design team. We were put in groups of three and asked to design a product for people with a specific disability. My team’s target users were children who are visually impaired, and we expanded our persona spectrum to include adults who are new to blindness as well as children with dyslexia.
Our final design was a musical instrument, called Mockingbird, that converts three-dimensional objects into sound. Mockingbird has a tangible board with two axes: time and pitch. When an object is placed on the board, the board uses the object’s weight and position to determine what note should be played (watch our concept video here).
A top view of Mockingbird. The shapes represent 3D objects of different weights. A quarter pie is a quarter note, a half pie is a half note, and so on. The red line progresses from left to right as an arrangement is played.
Mockingbird takes a playful approach to music. It is a learning tool with emergent properties: at face value, it is an inclusive way for students to create simple rhythms; for adventurous users, Mockingbird enables open-ended exploration of objects and their properties. We enjoyed incorporating inclusive design into the process of creating Mockingbird. The Toolkit’s principles ultimately led to a stronger, more accessible product.
Microsoft’s Inclusive Design Toolkit makes a compelling case for placing accessibility issues at the forefront of design. But even as I celebrate the work of Price and her team, I am cognizant of the limitations of the Toolkit. Like any framework, it is only as effective as the teams that use it and the environments in which those teams operate. A design produced at a hackathon is under very different constraints than a product made by a company. The inclusive design webpage has a long list of Microsoft products that have incorporated the Toolkit’s ideas, but it remains to be seen how much of an impact the movement will have outside the company. I wish them good luck!
Can Science Be Taught Online? | As universities move classes online, some science disciplines can make the transition from the physical to the virtual classroom more easily than others. Biology is challenging, especially in the courses I teach that focus on cells, molecules and genes. Here are some ‘out of the laboratory’ options for educators to consider.
Virtual laboratories
These are often used when the real experience is not possible due to restrictions on access to equipment, costs or time. Virtual laboratories can provide a quality learning experience — remember pilots do some practice in flight simulators! Some options, such as Labster, have a cost. Others are free.
My students have used virtual laboratories for a number of molecular biology experiments, to rapidly breed virtual fruit flies when investigating how traits are inherited, and to simulate large numbers of organisms for population genetics. These options allow students to obtain large data sets and perform experiments not possible in real life. Free options for studying Drosophila genetics and population genetics also exist.
A virtual oxygen electrode laboratory enables students to use equipment not otherwise available at scale. Image author’s own.
Other free virtual laboratories I use include a karyotype activity where students look at stained human chromosomes and a bacterial ID activity that teaches the steps used to identify an organism using molecular methods.
Experiments at home
There are some experiments that students can conduct at home. My students have isolated strawberry DNA in their online classes. And a class experiment on the properties of enzymes using rennin (found in Junket tablets) or bromelain (from pineapples) works similarly when performed at home with plastic cups replacing laboratory glassware.
Making fermented foods, such as sauerkraut, can complement discussion on how microbial communities change over time. Students can even make a pH indicator from red cabbage to test the pH at different time points. Similarly, many resources demonstrate how to prepare a Winogradsky column at home, and time-lapse videos can prompt discussion.
Citizen science
A variety of citizen science projects exist for students to participate in that may complement the curriculum. Foldit gamifies protein folding, with a current challenge aiming to identify possible antivirals that may be used for COVID-19 therapy. A range of other projects provide context for practicing observation skills, including collecting information from historical records and wildlife cameras, or taking an iPhone outside to record pollinators. A list of current projects is available from the Australian Citizen Science Association.
Change the emphasis
Laboratory work frequently focuses on methodologies and recording results, with less time for analysis and discussion. Students may encounter new techniques or use specialised equipment as they practice the scientific method. However, not every practical technique needs to be mastered to become a scientist.
Without access to the laboratory, students may learn about a technique through resources such as JoVE. This peer-reviewed and PubMed-indexed video journal publishes high-quality demonstrations of methodologies. Students can be provided with experimental data, with more time spent on its analysis. Learning to use bioinformatic tools is an obvious alternative to some in-class experiments.
There is often a call to improve the communication skills of scientists. Outside of the traditional laboratory, students can learn by taking their knowledge of a concept and communicating it through a different medium, such as by producing a model or infographic. In one of my classes, a student learnt about the protein RNA polymerase by accessing its 3D structure online and creating a space-filling model that could be 3D printed.
A 3D printed model of RNA polymerase. Photo author’s own.
Laboratory replacements need not be lesser substitutes
Not all scientists work in research laboratories. Many options for teaching students how to practice science exist outside the classroom laboratory. Replacements for in-class experiments need not be lesser substitutes, and as COVID-19 restrictions force our transition to online science teaching, I expect we will identify many quality alternatives. | https://medium.com/age-of-awareness/can-science-be-taught-online-143603efbd83 | ['Rebecca Lebard'] | 2020-11-24 02:39:53.162000+00:00 | ['Higher Education', 'Education', 'Teaching', 'Learning', 'Science'] |
The Subtle Art of Naming the Smells That Shape You | The Subtle Art of Naming the Smells That Shape You
On a nose-first investigation into the world.
Photo by Brooke Cagle on Unsplash
Petrichor
Summer smells like fresh rain on hot asphalt. It smells like staying out twenty minutes past curfew, of storms moving in, of running home under newspapers, giggling and laughing and telling the boy next to me to run faster, we’re getting wet!
It smells like my first car, of driving too fast with the windows down, of conflating the feeling of artificial speed with my sense of immortality, of invincibility, knowing that I would never grow tired and old. Petrichor smells like home, where the winters are mild and the summers are scorching, and the humidity is so high that walking out of an air-conditioned building feels like walking right into a brick wall.
Green leaf volatiles
Such a violent name for a gentle summer scent — but the summer I turned sixteen smells like sharing secrets with my friends, lying on the lawn trying to tan our pale forearms, sun baking the piles of cut grass next to us. We’d learned in biology that year that when a blade of grass is cut, it shouts into the world that it’s been hurt, that it’s been damaged. It tells all the blades of grass next to it to start pumping out toxic defense compounds to defend themselves.
The lawnmower mows them down regardless, but the scent still lingers, permeating my skin and leaving smears of green on my jean shorts.
This year smells like the tears my friend wept about the boy who hurt her, who went on to hurt the other girls in my friend circle who didn’t heed her warning. The rest of us grew tough in time, turning bristling exteriors to his advances. We’re no better than the grass.
Maillard reactions
This year I learned to bake, learned to take disparate ingredients and turn them, like magic, into something greater than the sum of the individual parts. This summer smells like burnt sugar and rising bread, of thick chocolate mousse infused with dark rum, of blueberries caramelizing on the pan to be put into a tart.
It smells like the knowledge that when everything else is going wrong, when everything else in my life is out of my control and rapidly spiraling into a meltdown, I can still make a perfect loaf of bread simply by using careful measurements and strict timing.
Chloramines
This year I dedicate myself, body and soul, to the swimming pool. I have tan lines marking the straps of my competition swimsuit, and my muscles move my body faster than I’ve ever experienced.
Eight times a week, sometimes twice a day, this is the year I swim so much it feels more like I’m fitting in my life around the pool. I leave, water dripping down my hair to soak the back of my t-shirt to ruin it.
There’s something about the smell of my sweat combining with the chlorine of the pool, something about the way it clings to my skin even when I’ve showered, that makes it hard to forget swimming.
The Proust effect
There’s a power in naming things, in placing a word to an experience, and there’s nothing stronger than a smell to shape a memory. And so I learned one more name, to better understand why it is that a smell — a single whiff of a cut blade of grass, a wind that reeks of chlorine, the combination of water and flour, heat rising from a wet road — can evoke such powerful memories.
Every scent is stamped into your brain, caught between its folds, pinpointing coordinates in space and time where that memory was formed. And I think it’s one of the loveliest things in the world that with the right word, we can communicate the slightest nuance of our shared experience. There is a subtle art to defining an experience so sharply you can find its name, but when you do, there is nothing more rewarding. | https://zulie.medium.com/the-subtle-art-of-naming-the-smells-that-shape-you-5a8588d0745c | ['Zulie Rane'] | 2019-09-12 16:29:33.398000+00:00 | ['Life Lessons', 'Lifestyle', 'Human Prompt', 'Science', 'Self'] |
How the internet fuels polarization | By Tyler Sonnemaker
Americans are becoming increasingly intolerant of those with opposing views, and our increasingly digital lives are part of the problem.
Between 1975 and 2017, animosity between Democrats and Republicans nearly doubled, according to researchers from Stanford University. And that survey was conducted before most of Donald Trump’s presidency, before the COVID-19 pandemic, before George Floyd’s death, and before the 2020 US presidential election.
But polarization is a complicated topic, and the ways that technology influences our politics and democracy aren’t as simple as we’re often led to believe. There are multiple ways to measure polarization, for example, with each explaining a different way we’re divided. And “filter bubbles” vastly oversimplify the forces that shape how we interact online.
Documentaries like “The Social Dilemma” and “The Great Hack” have awakened many Americans to some of the powerful ways tech companies can, intentionally or unintentionally, influence our behavior in the “real” world. Yet these explain only some of the economic, psychological, technological, and political forces at play.
Of course, the tech industry is only partly to blame — we play a significant role as users, consumers, and citizens as well. But that requires a deeper understanding of how the internet is designed so that we can navigate both our online and offline worlds in ways that humanize others, make us better informed, and help us find common ground.
To pull back the curtain on some of the internet’s invisible hands, Business Insider spoke with 11 experts whose backgrounds include, among other things, ethnography, misinformation, political science, cognitive psychology, media, and mental health.
Here’s what they had to say:
Individuals, communities, and the digital village
Jolynna Sinanan, University of Sydney
So far, that conversation has been “all about the technology,” according to Jolynna Sinanan, a digital ethnographer at the University of Sydney in Australia, “whereas it should be the other way around.” The internet and social media have only become widely used in the past decade or so, meaning we’re just beginning to establish norms around how people should or shouldn’t behave online.
Instead, she said, we should be asking “what does being a person in a community mean?”
In Trinidad, where Sinanan has spent the last few years researching social media usage, there are stronger desires to fit in with one’s community, while in America there is a stronger sense of individualism. Those values play out in people’s online behavior, she said.
“All the sorts of extremes we’ve seen this year [in America] is very much the externalization of the ‘I matter as an individual,” she said, adding that Americans’ tendency to engage in political conversations with complete strangers or share conspiracy theories is partly because they place more value in their individual identity.
Interestingly, young people, Sinanan said, are “the first group to figure out” how social and cultural norms map to online spaces like TikTok and Snapchat, and “they learn about privacy, community, and the village, and how to negotiate that very, very early on.”
The internet, it turns out, may just need to grow up a bit.
Economic incentives
Samantha Bradshaw, Stanford University
No conversation about social media’s impact on our politics and democracy is complete without talking about their business models.
“One of the main tensions,” according to Samantha Bradshaw, a researcher at Stanford who studies that exact intersection, “is this tension between the economic incentives of platforms, and then democracy.”
When Facebook designs its news feed, Twitter identifies trending topics, and YouTube recommends videos, their first goal is keeping us online longer because it helps them sell more ads and make more money, Bradshaw said, which can “conflict with more democratic design choices.”
As criticism of that business model has grown, companies have reframed that goal slightly. Instead of prioritizing content that keeps us online longer, they’re now boosting posts and videos that drive “engagement,” or in Facebook’s case, “spark conversations” and lead to “meaningful social interactions.”
However, as Bradshaw pointed out: “things that are meaningful for conversation and for Facebook might not actually be meaningful for democracy.”
Research shows that people are more likely to engage with content when they’re angry and scared, and as a result, Bradshaw said, prioritizing engaging content reinforces “affective polarization.”
Affective polarization
Dr. Karin Tamerius, Smart Politics
The word “polarized” probably makes many Americans think about political polarization — that is, a wide gap between our political beliefs and preferences.
But in the US, it’s not our policy preferences that are growing farther apart, according to Dr. Karin Tamerius, a political and social psychologist who started Smart Politics, a progressive group focused on changing how people talk about politics.
“It’s emotional polarization, it’s what political scientists call ‘affective polarization,’” she said. “Most of all, it’s negative feelings about each other, so people on the left don’t like people on the right and people on the right don’t like people on the left, even though they’re not actually that much farther apart on policy than they were in the past.”
Social media has helped fuel that animosity by creating a space without a “clear set of norms,” which has in turn brought out people’s worst behavior, Tamerius said. As hard as it is to have political conversations with people offline, she added, having them online without those norms “can really explode.”
“And if that’s the only interaction that people have with someone who thinks differently from them, it’s going to feed these perceptions that the other is bad or evil in some way,” she said.
Shades of gray
Joel Benenson, Benenson Strategy Group
Joel Benenson, a pollster who consulted for Barack Obama’s presidential campaigns in 2008 and 2012 as well as Hillary Clinton’s in 2016, said that surveys his polling firm has conducted support the idea that Americans’ political beliefs actually haven’t shifted that dramatically in the past decade.
While those nuances often get lost on social media, Benenson said that’s one advantage of the survey methods that pollsters use — including online surveys, phone calls, text, and in-person focus groups — to learn about people’s beliefs and attitudes.
“What you have to do is continually ask questions that are provocative in ways that allow you to look at the answers not always as black-and-white questions,” he said, because “there are few attitudes or values that people bring to the table that don’t have shades of gray… they are not absolutist.”
Cognitive echo chambers
Mohsen Mosleh, Massachusetts Institute of Technology
One dimension where Americans are particularly divided, at least in terms of their social media habits, is their personality traits.
Mohsen Mosleh, a data scientist and cognitive psychologist at MIT, said that his research has identified “cognitive echo chambers.”
“People who rely more on their intuitions,” Mosleh said, tend to follow more promotional accounts and get-rich-quick scams. “Those who are analytical thinkers tend to avoid” those types of accounts, he said, instead favoring weightier topics such as politics.
Personality traits, like the “big five” (often referred to as OCEAN), are often more predictive of how we use social media than our political ideologies, Mosleh said.
Perceptual filtering
David Sabin-Miller, Northwestern University
People may perceive the same political content differently based on how they view the world to begin with, and that can shift how they react to it.
David Sabin-Miller, a graduate student at Northwestern University, built a mathematical model to help explain how those subjective responses — a psychological phenomenon known as “perceptual filtering” — can fuel polarization online.
“Perceptual filtering is how we’re all participating in constructing our own distribution of content that is either comfortable to us or enticingly uncomfortable,” Sabin-Miller said, referring to content we disagree with but may enjoy consuming because it gives us a “sense of righteousness.”
As a result, even if society itself isn’t becoming more polarized, Sabin-Miller said, “individuals have a sort of feedback with the environment where they can push themselves farther and farther to one side of the other just because they’re fed different information.”
Conspiracy entrepreneurs
Russell Muirhead, Dartmouth University
Before social media, most Americans got the bulk of their news from a handful of cable news stations, radio shows, and print newspapers or magazines. For economic reasons, those outlets often catered their content to broad audiences, meaning there was a larger common set of facts on which people based their opinions.
But social media platforms have created “information flows that fit our preferences pretty precisely,” according to Russell Muirhead, a political science professor at Dartmouth University.
In doing so, he said, they’ve created “conspiracy entrepreneurs.”
Facebook, Twitter, and YouTube have made it possible for creators to make money by attracting a much smaller number of people to their page or channel, even if they’re peddling dubious ideas or products, Muirhead said.
“If people can sell QAnon to a fairly narrow audience, they can make money doing that,” he said. “That didn’t used to exist, there was no occasion for that kind of entrepreneurial activity.”
Selective thinking
Helen Lee Bouygues, Reboot Foundation
Fake news and misinformation has undeniably been on the rise in recent years. But our susceptibility to it is, in part, actually a symptom of a lack of critical thinking skills suitable for the digital age, argues Helen Lee Bouygues, who launched her organization, Reboot Foundation, to tackle that exact problem.
Social media platforms and search engines use algorithms and design choices that promote “selective thinking” — where we gravitate toward information that confirms our existing beliefs — rather than critical thinking, Bouygues said.
For example, Facebook makes it difficult to distinguish between a link from a reliable news source or government agency versus a random blogger, while Google surfaces sites you’ve viewed in the past and designs its results page so people rarely make it past the first few results.
Bouygues said we need more tools and skills to help us “fight against the challenges of digital learning and gathering information through visual media.”
“One of the biggest liberties is liberty of thinking,” she said. “If we can’t do our own metacognition and thinking about our own thinking, which is what critical thinking helps you do, then we’re just a little bit like the number in ‘Men In Black.’”
User-driven filter bubbles
Francesca Tripodi, University of North Carolina
Algorithms don’t just influence us, however, we also influence them.
“Users drive these filter bubbles as well,” according to Francesca Tripodi, a professor of sociology and media at the University of North Carolina.
Many people think of Google — which accounts for 90% of all online searches — and other search engines as neutral providers of information, but really they’re designed to return the results that are most relevant based on our search queries, Tripodi said.
“The keywords that we enter are driven by us, not by the search engine that we choose,” she said, giving the example of how searching for “undocumented workers” versus “illegal aliens” will return wildly different results about the topic of immigration.
“Because we come to these search engines with such drastically different ways of seeing the world,” Tripodi said, we’re essentially “keeping ourselves bubbled in information that only reaffirms what we already think we know about a topic.”
There’s a lot of focus on assessing the bias or credibility of an information source, but we also need to assess our starting point, Tripodi said.
Performative activism
Tina Harris, Louisiana State University
After George Floyd’s killing sparked nationwide protests against police brutality and systemic racism in America, social media became a major outlet for people to express their views on the topics.
That has exposed a lot of explicit racism, but also subtler — and sometimes more harmful — racism, according to Tina Harris, a professor of race, media, and literacy at Louisiana State University.
Harris described a phenomenon of “performative activism,” where people say they support social justice on public social media profiles, but their words and actions in their private, social, and professional lives at times have the opposite impact.
“‘It’s not just, what are they presenting on social media, but what happens when the camera is away,” she said. “Their public face and their private face — do they actually match up?”
Harris said the Kardashians come to mind as one example of this because, while they’ve used their social media followings to push for things like prison reform and protest hate speech, they’ve also engaged in lots of cultural appropriation.
Offline recovery
Dr. Jonathan Jenkins, Massachusetts General Hospital
There’s a growing body of research showing how social media networks exploit our psychology and emotions to keep us online longer. But just as important is what that keeps us from doing instead.
Jenkins, who helps everyone from athletes to first responders to executives develop mental strategies to cope with stress and anxiety, said that a key focus of his coaching is mental and physical recovery. Addiction to social media, he said, can also keep us from recovering properly.
“It takes away time that people could be resting and recovering and building their mental health or their emotional health and resilience,” he said. An hour on social media could be spent meditating, taking a nap, reconnecting with family and friends, planning for the next day so it’s not so stressful, or just relaxing.
Fortunately, Jenkins added, “you don’t need it as much as you think you do. You existed completely in a healthy way before social media.”
Test Driven Development: Developers Magic Wand | Let us go through a story.
“Not in the distant past, I provided the tester with a build of my application that was about to go live. I had made some fixes to it which, as per my dev sanity testing, had made the build production-ready. But after the testing, the situation was completely different. I got bugs in some cases, and they were caused by my earlier fixes. What the hell??? Instead of making my build stable, I had made it more unstable.”
I wasn’t the only one to confront this situation; many developers face it too. So what can we do to prevent it? The answer is Test Driven Development (TDD).
What are we gonna learn?
There are many blogs on TDD. I will list some of the best references that you can find out there. We are not gonna discuss theoretical TDD concepts; rather, we will focus on how we can plan out implementing TDD from a developer’s perspective.
What is TDD?
Why TDD?
How to plan out TDD?
Caveats of TDD.
Do you need TDD?
Does that seem like too many? Don’t worry, we will make this fast and short. ;)
What is TDD?
Test Driven Development is a development procedure proposed by American software engineer Kent Beck in which we write our test cases alongside, and ahead of, the code they exercise. This allows us to test our code continuously as we keep developing. The following image describes the difference between traditional development and TDD.
In traditional development, we develop our code first, complete the functionality, and go for manual testing. But in TDD, we write our test cases first and then develop our code accordingly. This helps us minimize the chances of code failure or bugs. Any new functionality should also respect the existing test cases, unless those old tests themselves need to be updated.
TDD is based on the concept of RGR i.e Red, Green and Refactor.
Red: We write the failing test case first.
Green: We write the minimal code required to pass the test.
Refactor: We refactor our code and, if required, our test code as well.
Why TDD?
Minimises the chance of code failure.
Minimises the chance of errors by any new developer in the team working on the code base.
Less chance of breaking existing functionalities due to the introduction of new features or bug fixes.
Improves product knowledge.
Improves coding standards.
How to plan out TDD?
Here comes the million-dollar question. We all know what TDD is.
But how are we gonna plan out TDD? How are we gonna decide which test cases we need to write? Am I gonna write and develop the functionality first and then write tests for it, or how is it gonna be? Confused?
Properly understand the functionality to be implemented.
Let us take an example of the following requirement:
Requirements:
Build a program that takes a city’s population as input from the user and returns the category to which the city belongs. Following are the city types with their population ranges:
Small (5,000 to 10,000)
Medium (10,000 to 50,000)
Large (More than 50,000)
Break down the functionality into sub-functions. Consider this as breaking each piece of functionality into methods, in the programming sense.
We break down the whole implementation into the following sub-functions:
Take the population input from the user.
Prepare a method that categorizes the population count and returns the city type, customizing the output for the user.
Sketch the UML diagram for the functionality. Consider the classes that you will build, and the properties and methods that you will use to achieve the requirement.
We can have the following set of classes, with a rough estimate of their associated properties and methods.
determineCityType() -> Holds the logic for determining the city type based on the population.
configureOutput() -> Holds the logic for configuring the output displayed to the user.
Write down test cases for each piece of functionality. Each sub-function can be broken down further into smaller functions.
Now we should write down test cases that anticipate the above class diagram. It’s a good practice to write tests for each class and its functionality. Referring to the above UML diagram, we would like to test the attributes of City and CityCategorizer and also test the determineCityType() and configureOutput() methods.
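To make this concrete, here is a minimal sketch in TypeScript with Jest-style tests. The class and method names mirror the UML sketch above; the exact boundary handling and the wording of the output message are my own assumptions rather than part of the original requirement.

```ts
// Sketch only: Jest is assumed as the test runner, and the population
// boundaries below are one interpretation of the ranges in the requirement.

class City {
  constructor(public readonly population: number) {}
}

class CityCategorizer {
  determineCityType(city: City): "Small" | "Medium" | "Large" {
    if (city.population > 50000) return "Large";
    if (city.population >= 10000) return "Medium";
    return "Small";
  }

  configureOutput(city: City): string {
    return `Your city is a ${this.determineCityType(city)} city.`;
  }
}

describe("CityCategorizer", () => {
  const categorizer = new CityCategorizer();

  it("categorizes a population of 7,000 as Small", () => {
    expect(categorizer.determineCityType(new City(7000))).toBe("Small");
  });

  it("categorizes a population of 25,000 as Medium", () => {
    expect(categorizer.determineCityType(new City(25000))).toBe("Medium");
  });

  it("mentions the category in the user-facing output", () => {
    expect(categorizer.configureOutput(new City(80000))).toContain("Large");
  });
});
```

In a strict Red-Green-Refactor flow, the describe/it block comes first and fails (Red); the CityCategorizer above is the minimal code that turns it green, after which both the code and the tests can be refactored.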
Caveats of TDD:
Slows down the development process.
When initial test cases are written, the complexities of future enhancements are not predicted.
If requirements change frequently, you will end up in wasting much of your time reconfiguring your test cases.
Do you need TDD?
If your project is short-term based and is not gonna be there for a long time, you might want to avoid TDD.
If your project is not scalable, you should avoid TDD.
TDD works great if your project is scalable and has a huge code base.
References:
Wikipedia
Introduction To Test Driven Development
RGR
UML Diagram
I would love to hear from you
You can reach me for any query, feedback, or just want to have a discussion by the following channels:
Twitter — @gabhisek_dev
LinkedIn
Gmail: [email protected]
Please feel free to share with your fellow developers. | https://medium.com/swift-india/test-driven-development-developers-magic-wand-b81cfbfeee99 | ['G. Abhisek'] | 2020-06-28 04:48:33.375000+00:00 | ['iOS App Development', 'Software Design', 'Test Driven Development', 'Software Engineering', 'Tdd'] |
Distributed Orchestration with Camunda BPM, Part 2 | The term Orchestration in a microservice context might be ambiguous. To make it clearer, I would like to propose the following classification:
SOA-like orchestration
SOA focuses on remote communication between services, built around business capabilities. A central process engine synchronously calls distributed services remotely. The integration is performed between the state-handling process engine and the stateless service.
I’m over-simplifying it a little here and describing a “bad-design/misunderstood-SOA”, since in essence SOA was NOT about stateless services, but was sometimes implemented this way.
Synchronous orchestration
There are two different implementation styles of this class of systems.
The Connector integration pattern is used if the process engine calls the service (S1, S2, S3) directly using the selected protocol (usually HTTP). The RPC integration pattern is used if the engine calls a local delegate and this delegate invokes a remote service (S1, S2, S3) via the selected protocol (HTTP, Java RMI or any other synchronous protocol).
In both cases, the integration requires the engine and the services to be online simultaneously. The engine might know the location of the services or use a registry or a broker (remember the Webservice triangle) to resolve it, and the services use an invocation-oriented implementation to execute work on behalf of the process engine.
Message-driven orchestration
Instead of synchronous invocation, the central engine might send messages to queues or topics, and the stateless services subscribe to those. The simultaneous availability of the engine and the services is not required. As a result, the services use a subscription-oriented implementation to execute work on behalf of the process engine.
Asynchronous orchestration
There are two types of implementation depending on the messaging abstraction in use:
The messaging infrastructure might be middleware (for example, a central messaging bus) offering the concept of queues (Q1, Q2, Q3). The engine sends asynchronous messages to services (S1, S2, S3) using queues. Instead of using queues, the process engine may publish the information to pre-defined topics (T1, T2, T3). The topic subscription may be a part of the process engine (aka the External Task Pattern, as displayed above) or live on the centralized messaging middleware.
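As an illustration of the External Task Pattern, here is a minimal worker sketch in TypeScript (Node 18+, so the global fetch is available). It polls the engine over the Camunda BPM 7 REST API using the fetchAndLock and complete endpoints; the base URL, worker id and topic name are made-up values for the example, and error handling is kept to a bare minimum.

```ts
const BASE_URL = "http://localhost:8080/engine-rest"; // assumed default REST path
const WORKER_ID = "invoice-worker-1";                 // made-up worker id

async function pollOnce(): Promise<void> {
  // Long-poll for work published on the (made-up) "send-invoice" topic.
  const response = await fetch(`${BASE_URL}/external-task/fetchAndLock`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      workerId: WORKER_ID,
      maxTasks: 5,
      asyncResponseTimeout: 30000,
      topics: [{ topicName: "send-invoice", lockDuration: 60000 }],
    }),
  });
  const tasks: Array<{ id: string }> = await response.json();

  for (const task of tasks) {
    // ...do the actual service work here...
    await fetch(`${BASE_URL}/external-task/${task.id}/complete`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ workerId: WORKER_ID, variables: {} }),
    });
  }
}

// The worker pulls work; the engine never needs to reach the service directly,
// which is exactly what decouples their availability in this pattern.
setInterval(() => pollOnce().catch(console.error), 5000);
```

The same subscription can also be implemented with one of the official external task client libraries instead of raw HTTP calls; the sketch above only spells out what such a client does under the hood.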
Distributed orchestration
The orchestration itself is distributed. Instead of a separation between a stateful engine and stateless services, the services become stateful (and get their own means of handling state, e.g. using orchestration), and the integration takes place between business processes (e.g. running in process engines PE1, PE2, PE3).
Distributed orchestration between process engines
This style of orchestration has been introduced in the last article (see Part 1 of this series), in which I shared my thoughts about the decomposition patterns of orchestration. In this part, I focus on more patterns and implementation strategies using the External Task Pattern. | https://medium.com/holisticon-consultants/distributed-orchestration-with-camunda-bpm-part-2-9a6d54389184 | ['Simon Zambrovski'] | 2019-11-20 08:04:29.556000+00:00 | ['Microservices', 'Orchestration', 'Camunda', 'Bpm', 'External Task Pattern'] |
Daily blog of the first-time founder #26 | As I said in a previous post, I decided to focus on mobile apps. I have figured out what my previous issue in Flutter was.
I spent the whole evening trying to connect the API and the mobile app. I’m nearly there. I think a basic working prototype will be ready tomorrow.
Primary metric
DAU: 4
Users
Users/prospective users talked to today? 1
What have you learned from them?
UI needs improvement
Goals
What are your top 1–3 goals for the next 2–3 days?
Flutter prototype
What most improved your primary metric?
Talking to users
The Biggest obstacle?
Frontend
Morale
On a scale of 1–10, what is your morale? | https://medium.com/startup-everyday/daily-blog-of-the-first-time-founder-26-6b8ff59ecf4d | ['Alex Kuznetsov'] | 2020-02-20 12:53:51.501000+00:00 | ['Startup', 'Flutter', 'Dart', 'Mobile', 'Programming'] |
The trouble with 16:9 | This tweet by a highly competent designer flashed by:
https://twitter.com/johnmaeda/status/839284888075448323?ref_src=twsrc%5Etfw
It is work in progress on a presentation. You see what direction he is taking: the big headline on the left, rather than across the slide, and a paragraph of very small text.
I think this might be a format that many presentations will use:
More and more display devices are now wide screen (which is a great format for movies)
Headlines that stretch all across very wide screens are unreadable.
The best visual compositions / layouts are not very wide ones
Increasingly, we send presentations beforehand to be read without actual presenting/verbal explanation, hence the need for explanatory text
In my presentation app SlideMagic, I stuck to the 4:3 aspect ratio for slides, enabling you to put the headline across the slide, and added an optional slide out panel for plain text that turns the 4:3 composition into a 16:9 one. | https://medium.com/slidemagic/the-trouble-with-16-9-92e71a14e7cf | ['Jan Schultink'] | 2017-03-09 08:05:31.555000+00:00 | ['Design', 'Presentations'] |
Simpler Financial Management with 5 Expense Groupings | Simpler Financial Management with 5 Expense Groupings
Photo by Pixabay on Pexels
It is fairly common to see expenses listed in alphabetical order or in some hard to follow default order within financial statements. This means your Rent expense and Telephone expense may have Salary expense wedged in between. Play this out across the many expense categories that show up on your financial statements and finding any meaning in the numbers is difficult.
It’s just a bunch of words and numbers.
Expense groupings can help better organize your expenses to make the review more intuitive.
A great starting point is 5 expense groups into which all other [expense categories] are grouped.
Salary & Services [Payroll, Accounting Services, Contracted Labor]
[Payroll, Accounting Services, Contracted Labor] Office & Occupancy [Rent, Property Expense, Repairs, Internet]
[Rent, Property Expense, Repairs, Internet] Branding & Marketing [Advertising, Travel, Events]
[Advertising, Travel, Events] Business Administration [Supplies, Software, Bank Fees, Taxes, Interest]
[Supplies, Software, Bank Fees, Taxes, Interest] Discretionary [Meals, Entertainment, Miscellaneous]
Now when you select the appropriate expense category for each transaction, that category will roll up into the group, and the financial statement is broken down into smaller, more manageable, pieces. Financial analysis just became simpler.
If you need further guidance on how to create expense groupings feel free to schedule a complimentary advisory session.
5 Signs It’s Time to Raise Venture Capital For Your B2B SaaS Company | 5 Signs It’s Time to Raise Venture Capital For Your B2B SaaS Company
For founders building software startups, money can be a major hurdle. How can you be sure, though, that venture capital is the right choice for you? After all, crowdfunding, bootstrapping, and alternative funding sources have only grown in popularity as fundraising mechanisms and show no signs of slowing down. Angel investors have become easier to find thanks to technology and growing networks, and as a result, more companies are looking to startup investment platforms like SeedInvest, Republic, FundersClub, and others.
These guidelines can be applied generally to the entrepreneurial crowd, but they’re especially pertinent for founders raising B2B SaaS venture capital.
Photo via Unsplash
You need domain expertise.
One of the most obvious and immediate benefits of working with a software venture capital firm is access to experts in your field. Top SaaS venture firms like Emergence Capital, Bessemer Venture Partners, Battery Ventures, and, of course, High Alpha, have helped some of the world’s most successful software brands (think: Zoom, Box, Gainsight) to scale because they’ve worked with hundreds of other companies. If they don’t have the answers, they’ll connect you with others in their network that do. At the end of the day, they want you to be successful so that they can generate a return on their investment.
You want to grow your network.
A VC firm’s value should extend beyond the check they write. If you’re a SaaS company, it’s likely that you’ll pitch many SaaS venture capital firms during your fundraising journey. As you receive term sheets, think about the firm’s value from a 1000-foot view. Can they connect you with other entrepreneurs in their portfolio? Do they have access to the talent you need to achieve your next big milestone? Will they introduce you to potential mentors, or future investors? Your VC could even introduce you to potential customers, so be sure to ask about your investors’ connections before you finalize a deal.
You want to scale your startup. Fast.
Companies that win move quickly. “Move fast” is a core value at High Alpha, and it’s a value that we share with our portfolio companies. Founders who want to beat the competition know that they need to move fast, grow quickly, and test often. Speed is particularly important if your unit economics depend on your company reaching a certain scale. Venture funding can help companies hire the right talent to help them move faster — which is a priority, since VCs hope for a 10x return on their investment.
You want to acquire, sell, or (some day) go public.
VC-backed technology companies are traditionally in a better position for an IPO or acquisition, which can help accelerate growth and create liquidity for the founding team and early employees. With the support of venture capital funding, founders are often better equipped to acquire companies than their bootstrapped counterparts. VC funding also helps to create credibility when attracting talent and selling to customers. Further down the road, public stock can also make it easier for companies to attract talent, as stock-based compensation has clear value for employees.
You’re prepared to face rejection.
On average, a founder raising $2 million in seed funding has 27 meetings with investors before completing their funding round. That’s a lot of pitching — which means a lot of rejection. VCs reject deals for a number of reasons. Timing might be off, an investor might not believe your company can reach venture scale, or they could have companies in their portfolio that represent a conflict of interest. Founders should be prepared to hear “no,” and they should remember that the reasoning doesn’t always have to do with their business.
While venture capital isn’t the only option to grow a software business, it can be a valuable tool in an entrepreneur’s toolkit. If you’ve raised venture capital — or if you invest in SaaS — tell us: what would you add to the list? | https://medium.com/high-alpha/5-signs-its-time-to-raise-venture-capital-for-your-b2b-saas-company-7788f27fdb5b | ['Emily Brungard'] | 2020-11-02 20:05:07.847000+00:00 | ['Venture', 'SaaS', 'Fundraising', 'Startup', 'Venture Capital'] |
As Major Outlets Ignore Assange Extradition Hearing, Ai Weiwei Demands Freedom for WikiLeaks Founder | CORPORATE MEDIA & FREEDOM OF THE PRESS
As Major Outlets Ignore Assange Extradition Hearing, Ai Weiwei Demands Freedom for WikiLeaks Founder
“He truly represents a core value of why we are free — because we have freedom of the press,” Weiwei said.
Artist and Chinese dissident Ai Weiwei staged a silent protest Monday outside the Old Bailey Court in London as critics pan the media for largely ignoring the extradition hearing of WikiLeaks founder Julian Assange, whose trial enters its fourth week of witness testimony.
“He truly represents a core value of why we are free — because we have freedom of the press,” Weiwei, a longtime supporter of Assange, said outside the courtroom.
“[Assange] is prepared to fight, but this is not fair to him,” he continued. “Free him, let him be a free man.”
Weiwei urged more civil action to bring attention to the trial, a call that came as media watchdogs point out an alarming lack of coverage of the hearing.
“The next time you see a mainstream media talking-head fawn over Bob Woodward, just remember that if they had any backbone, any moral core, they would be fawning over Julian Assange instead,” Lee Camp, a progressive political critic, wrote in Consortium News last week.
Camp pointed to the stark contrast in the deluge of mainstream media coverage of veteran journalist Bob Woodward’s recent book, and revelations about President Donald Trump’s lying to the American public about the severity of the impending Covid-19 pandemic last winter and the relative silence on Assange’s trial.
Video journalist and commentator Matt Orfalea this month also drew comparisons between Woodward and Assange’s treatment by the United States government and global media.
“Bob Woodward… has made his career publishing government secrets,” Orfalea said in a video posted earlier this month. “But today, he could go to jail for publishing government secrets, because the Trump administration has issued the first indictment in history charging a publisher for publishing government secrets.”
U.S. prosecutors have indicted the 49-year-old Assange on 17 espionage charges and one charge of computer misuse over WikiLeaks’ publication of secret American military documents in 2010. The charges carry a maximum sentence of 175 years in prison.
“Journalists do not need to care about Assange or like him,” Jonathan Cook, a U.K.-based reporter wrote as the trial began in early September. “They have to speak out in protest because approval of his extradition will mark the official death of journalism. It will mean that any journalist in the world who unearths embarrassing truths about the U.S., who discovers its darkest secrets, will need to keep quiet or risk being jailed for the rest of their lives.”
“That ought to terrify every journalist,” Cook added. “But it has had no such effect.”
Explaining the vested interests of corporate media in siding with western governments on which they report, Cook continued:
There were two goals the U.S. and U.K. set out to achieve through the visible persecution, confinement, and torture of Assange. First, he and WikiLeaks, the transparency organization he co-founded, needed to be disabled. Engaging with WikiLeaks had to be made too risky to contemplate for potential whistleblowers. That is why Chelsea Manning (the U.S. soldier who passed on documents relating to U.S. war crimes in Iraq and Afghanistan for which Assange now faces extradition) was similarly subjected to harsh imprisonment. She later faced punitive daily fines while in jail to pressure her into testifying against Assange. The aim has been to discredit WikiLeaks and similar organizations and stop them from publishing additional revelatory documents, of the kind that show western governments are not the “good guys” managing world affairs for the benefit of mankind, but are in fact highly militarized, global bullies advancing the same ruthless colonial policies of war, destruction, and pillage they always pursued. And second, Assange had to be made to suffer horribly and in public, to be made an example of, to deter other journalists from ever following in his footsteps. He is the modern equivalent of a severed head on a pike displayed at the city gates. The very obvious fact, confirmed by the media coverage of his case, is that this strategy, advanced chiefly by the U.S. and U.K. (with Sweden playing a lesser role), has been wildly successful. Most corporate media journalists are still enthusiastically colluding in the vilification of Assange, mainly at this stage by ignoring his awful plight.
U.S. lawmakers have largely condemned Assange, despite what columnist Alan MacLeod argued last week in a column for Fairness & Accuracy In Reporting (FAIR) is the “incendiary precedent” Assange’s case would set for the media in the U.S. in particular.
Both President Donald Trump and Democratic presidential nominee Joe Biden have condemned Assange. In 2010, Biden reportedly compared the WikiLeaks founder to “a high-tech terrorist.”
Progressive journalists have noted the media’s missing coverage.
Shadowproof’s Kevin Gosztola—who has been providing comprehensive coverage of the trial since its start—reported earlier this month that Daniel Ellsberg, the Pentagon Papers whistleblower, testified in Assange’s defense and poked holes in the U.S. government’s argument that in publishing the secret documents on WikiLeaks, Assange endangered lives.
Ellsberg noted Assange withheld 15,000 files from the release of the Afghanistan War Logs. He also requested assistance from the State Department and the Defense Department on redacting names, but they refused to help WikiLeaks redact a single document, even though it is a standard journalistic practice to consult officials to minimize harm. “I have no doubt that Julian would have removed those names,” Ellsberg declared. Both the State and Defense Departments could have helped WikiLeaks remove the names of individuals, who prosecutors insist were negatively impacted. Yet, rather than take steps to protect individuals, Ellsberg suggested these government agencies chose to “preserve the possibility of charging Mr. Assange with precisely the charges” he faces now. Not a single person has been identified by the U.S. government when they talk about deaths, physical harm, or incarceration that were linked to the WikiLeaks publications.
As Assange’s trial continues, advocates fear corporate media is failing not only the public but the future of press freedom.
Cook noted that access journalism has weakened corporate media’s willingness to challenge sources they rely on regularly, including the U.S. government, even if that means not quite holding power to account. He wrote:
Assange did not just expose the political class, he exposed the media class too: for their feebleness, for their hypocrisy, for their dependence on the centers of power, for their inability to criticize a corporate system in which they were embedded. Few of them can forgive Assange that crime. Which is why they will be there cheering on his extradition, if only through their silence. A few liberal writers will wait till it is too late for Assange, till he has been packaged up for rendition, to voice half-hearted, mealy-mouthed or agonized columns arguing that, unpleasant as Assange supposedly is, he did not deserve the treatment the U.S. has in store for him. But that will be far too little, far too late. Assange needed solidarity from journalists and their media organizations long ago, as well as full-throated denunciations of his oppressors. He and WikiLeaks were on the front line of a war to remake journalism, to rebuild it as a true check on the runaway power of our governments. Journalists had a chance to join him in that struggle. Instead, they fled the battlefield, leaving him as a sacrificial offering to their corporate masters.
On Monday Rebecca Vincent, director of International Campaigns for Reporters Without Borders, confirming reporting from Gosztola, tweeted news from the trial that medical experts are now concerned Assange could attempt to take his own life while in detention.
“Even as their house is burning down, media are insisting it is just the Northern Lights,” MacLeod wrote. | https://medium.com/discourse/as-major-outlets-ignore-assange-extradition-hearing-ai-weiwei-demands-freedom-for-wikileaks-2263b39ca0c | ['Lisa Newcomb'] | 2020-09-28 21:12:55.839000+00:00 | ['Julian Assange', 'Ai Weiwei', 'Wikileaks', 'Journalism', 'Freedom Of The Press'] |
14 Beneficial Tips to Write Cleaner Code in React Apps | 4. Avoid the Boolean Trap
When deciding your output, be extra careful when it comes to primitive booleans that are used to determine the output value of something.
It’s known to be a code smell, and it forces the developer to look at the source code/implementation of the component to be able to accurately predict the result.
For example, let’s say we declared a typography component that takes these available options: 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'title', 'subheading'.
How would you figure out how they’ll be applied when they’re passed in like this?
const App = () => (
<Typography color="primary" align="center" subheading title>
Welcome to my bio
</Typography>
)
Those who are more experienced with React (or more appropriately, JavaScript) might already guess that title will take precedence over subheading. The way the ordering works, the last one overwrites the previous ones.
But the problem is that we won’t be able to truly tell to what extent title or subheading will be applied without looking at the source code.
For example:
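A sketch of the kind of implementation the next paragraph describes, assuming styled-components (the original snippet is not reproduced here); the font sizes are invented, but the competing text-transform rules are the point:

```tsx
import styled, { css } from "styled-components";

interface TypographyProps {
  title?: boolean;
  subheading?: boolean;
  align?: string;
}

// Each boolean prop toggles its own block of styles. Note the !important on
// the subheading block: it beats title's text-transform even when title "wins".
const Typography = styled.p<TypographyProps>`
  text-align: ${(props) => props.align || "left"};
  ${(props) =>
    props.subheading &&
    css`
      font-size: 1.25rem;
      text-transform: none !important;
    `}
  ${(props) =>
    props.title &&
    css`
      font-size: 2rem;
      text-transform: uppercase;
    `}
`;
```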
Even though title wins in this case, the text-transform: uppercase CSS line still won't be applied. This is because subheading declares higher specificity with text-transform: none !important; in its implementation.
If we aren’t careful enough, it might become really difficult to debug a styling issue, especially when it won’t show any warnings/errors to the console. This can complicate the component’s signature.
Here’s just one example of a cleaner alternative to re-implement the Typography component that solves the issue:
const App = () =>
<Typography variant="title">Welcome to my bio</Typography>
Typography:
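A plausible variant-driven implementation, again assuming styled-components and not claiming to be the original code (only two variants shown; the styles are illustrative):

```tsx
import styled, { css } from "styled-components";

const variants = {
  title: css`
    font-size: 2rem;
    text-transform: uppercase;
  `,
  subheading: css`
    font-size: 1.25rem;
  `,
  // 'h1' through 'h6' would follow the same pattern.
};

interface TypographyProps {
  variant: keyof typeof variants;
}

// Exactly one block is applied: the one named by the variant prop.
const Typography = styled.p<TypographyProps>`
  ${(props) => variants[props.variant]}
`;
```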
Now, when we pass variant="title" in the App component, we will be assured that only title will be applied. It saves us the trouble of having to look at the source code to determine the outcome.
You can also just do a simple if/else to compute the prop:
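One way such an if/else could look, sketched here with plain inline styles for brevity (the original implementation may well have used styled-components instead):

```tsx
import * as React from "react";

interface TypographyProps {
  variant?: "title" | "subheading";
  children?: React.ReactNode;
}

// Compute the style from the variant prop with a plain if/else.
const getVariantStyle = (variant?: string): React.CSSProperties => {
  if (variant === "title") {
    return { fontSize: "2rem", textTransform: "uppercase" };
  } else if (variant === "subheading") {
    return { fontSize: "1.25rem" };
  }
  return {};
};

const Typography = ({ variant, children }: TypographyProps) => (
  <p style={getVariantStyle(variant)}>{children}</p>
);
```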
But the biggest benefit from this is that you can just do this simple, clean one-liner and call it a day: | https://medium.com/better-programming/14-beneficial-tips-to-write-cleaner-code-in-react-apps-a167798fa1ba | [] | 2019-08-18 22:40:42.379000+00:00 | ['Programming', 'Nodejs', 'React', 'JavaScript', 'Frontend Development'] |
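The exact one-liner is not preserved here, but one plausible version collapses the whole if/else into a single lookup against a style map (the names and the default variant below are assumptions):

```tsx
import * as React from "react";

const variantStyles: Record<string, React.CSSProperties> = {
  title: { fontSize: "2rem", textTransform: "uppercase" },
  subheading: { fontSize: "1.25rem" },
};

interface TypographyProps {
  variant?: string;
  children?: React.ReactNode;
}

// The "one-liner": pick the style straight from the map while rendering.
const Typography = ({ variant = "subheading", children }: TypographyProps) => (
  <p style={variantStyles[variant]}>{children}</p>
);
```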
Your Ultimate Data Manipulation & Cleaning Cheat Sheet | X-Variable Cleaning Methods
Applying a function to a column is often needed to clean it. In the case where cleaning cannot be done by a built-in function, you may need to write your own function or pass in an external built-in function. For example, say that all values of column b below 2 are invalid. A function to be applied can then act as a filter, returning NaN values for column elements that fail to pass the filter:
import numpy as np

def filter_b(value):
    # Values below 2 are treated as invalid and marked as missing
    if value < 2:
        return np.nan
    else:
        return value
A new cleaned column, ‘cleaned_b’, can then be created by applying the filter using pandas’ .apply() function:
data['cleaned_b'] = data['b'].apply(filter_b)
Another common use case is converting data types. For instance, converting a string column into a numerical column could be done with data[‘target’].apply(float) using the Python built-in function float .
Removing duplicates is a common task in data cleaning. This can be done with data.drop_duplicates() , which removes rows that have the exact same values. Be cautious when using this — when the number of features is small, duplicate rows may not be errors in data collection. However, with large datasets and mostly continuous variables, the chance that duplicates are not errors is small.
Sampling data points is common when a dataset is too large (or for another purpose) and data points need to be randomly sampled. This can be done with data.sample(number_of_samples) .
Renaming columns is done with .rename , where a dictionary is passed to the columns parameter: each key is an original column name and each value is the new name. For example, data.rename(columns={'a':1, 'b':3}) would rename the column 'a' to 1 and the column 'b' to 3. (Without columns= or axis=1 , pandas applies the mapping to the row index instead of the columns.)
Replacing values within the data can be done with data.replace() , which takes in two parameters to_replace and value , which represent values within the DataFrame that will be replaced by other values. This is helpful for the next section, imputing missing values, which can replace certain variables with np.nan so imputing algorithms can recognize them.
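A small, self-contained illustration of these last two methods (the column names and the -999 sentinel value are invented for the example):

import numpy as np
import pandas as pd

data = pd.DataFrame({'a': [1, -999, 3], 'b': ['x', 'y', 'z']})

# Rename both columns, then mark the -999 placeholder as missing so imputing algorithms can recognize it
cleaned = data.rename(columns={'a': 'feature_1', 'b': 'label'}).replace(-999, np.nan)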
More handy pandas functions specifically for data manipulation can be found here: | https://towardsdatascience.com/your-ultimate-data-manipulation-cleaning-cheat-sheet-731f3b14a0be | ['Andre Ye'] | 2020-07-04 16:55:44.812000+00:00 | ['Machine Learning', 'Data Analysis', 'AI', 'Data Science', 'Statistics'] |
The Insanity of Mind While Hunting for a New Job | I can never forget the day I was told by my manager to start looking for a new job, it was my father’s birthday. I was happy until 6.30 in the evening because for the first time I had gifted my father something expensive out of my pocket and he was so joyous when the gift presented him as a surprise. But all of my happiness faded in a matter of minutes when I was told to begin my job hunt. In the midst of this chaotic event, I was given an assurance that I have some time with me to find a new job and that gave me some sense of security and mental peace that at least I have some time to fight my way to a new job.
After this frenzy of an event, I started studying and preparing for the job hunt using the standard template known to us software developers. I was somewhat calm because I knew from my conversation with my boss that I had time to search for a new job. But a turn of events happened in the coming weeks: due to some internal reasons that I cannot disclose, I was told to start serving my notice period. At that moment, it felt like the floor beneath my feet was suddenly gone.
Here I was, planning my job search on the sole assumption of having enough time to work my way out of the situation. I was psychologically prepared to go out and search for a new job, but one heavy blow was being told that I would not be receiving my performance bonus, which was due at the end of the month, because of the escalation of my notice period. I felt cheated. Coming from a middle-class family, I had parents who took out a loan for my education, and these kinds of bonuses help a fella like me get rid of those student loans as soon as possible. Like every other individual with a student loan, I was planning on putting my performance bonus into my student loans to reduce them by a significant amount in a single strike. But I was denied that opportunity.
Nothing hurt me more than this not even when I was asked to search for a new job.
Given the escalation of the situation, I had to revamp my strategy to give me rapid results. I consulted my friends and family and presented them my situation. After hearing their advice and opinions, I formulated a path ahead for me and it worked wonders. I am going to share my strategy with you and I hope that it helps all of you out there searching for a new job or may need to search sometime in future. | https://medium.com/swlh/the-insanity-of-mind-while-hunting-for-a-new-job-a1597380b4bd | ['Tarun Gupta'] | 2020-12-25 14:44:21.706000+00:00 | ['Interview', 'Mental Health', 'Job Search', 'Job Hunting', 'Jobs'] |
NumPy: Slicing and Indexing | NumPy: Slicing and Indexing
One-dimensional slicing and indexing with NumPy, a Python package
NumPy is the fundamental package for scientific computing in Python.
One-dimensional slicing and indexing
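One note before the examples: the In:/Out: snippets below follow the book-style interactive session and assume NumPy's names have already been imported into the namespace, for example:

In: from numpy import *   # or: import numpy as np and prefix the calls with np.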
Slicing of one-dimensional NumPy arrays works just like the slicing of Python lists. We can select a piece of an array from index 3 to 7 that extracts the elements 3 through 6:
In: a = arange(9)
In: a[3:7]
Out: array([3, 4, 5, 6])
We can select elements from index 0 to 7 with a step of 2:
In: a[:7:2]
Out: array([0, 2, 4, 6])
Similarly as in Python, we can use negative indices and reverse the array:
In: a[::-1]
Out: array([8, 7, 6, 5, 4, 3, 2, 1, 0])
Time for action — slicing and indexing multidimensional arrays
A ndarray supports slicing over multiple dimensions. For convenience, we refer to many dimensions at once, with an ellipsis.
Create an array and reshape it: To illustrate, we will create an array with the arange function and reshape it:
In: b = arange(24).reshape(2,3,4)
In: b.shape
Out: (2, 3, 4)
In: b
Out:
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
The array b has 24 elements with values 0 to 23 and we reshaped it to be a 2-by-3-by-4, three-dimensional array.
We can visualize this as a two-story building with 12 rooms on each floor, 3 rows and 4 columns. As you have probably guessed, the reshape function changes the shape of an array. You give it a tuple of integers, corresponding to the new shape. If the dimensions are not compatible with the data, an exception is thrown.
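For example, asking for a shape whose sizes don't multiply out to the 24 available elements fails (the exact wording of the error message varies between NumPy versions):

In: arange(24).reshape(5, 5)
ValueError: cannot reshape array of size 24 into shape (5,5)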
2. Selecting a single cell: We can select a single room by using its three coordinates, namely, the floor, column, and row. For example, the room on the first floor, in the first row, and in the first column (you can have floor 0 and room 0 — it’s just a matter of convention) can be represented by:
In: b[0,0,0]
Out: 0
3. Selecting slices: If we don’t care about the floor, but still want the first column and row, we replace the first index by a : (colon) because we just need to specify the floor number and omit the other indices:
In: b[:,0,0]
Out: array([0, 12])
This selects the first floor:
In: b[0]
Out:
array([[ 0,1, 2, 3],
[ 4,5, 6, 7],
[ 8,9, 10, 11]])
We could also have written:
In: b[0,:, :]
Out:
array([[0,1, 2, 3],
[4,5, 6, 7],
[8,9, 10, 11]])
An ellipsis replaces multiple colons, so, the preceding code is equivalent to:
In: b[0,...]
Out:
array([[0,1, 2, 3],
[4,5, 6, 7],
[8,9, 10, 11]])
Further, we get the second row on the first floor with:
In: b[0,1]
Out: array([4, 5, 6, 7])
4. Using steps to slice: Furthermore, we can also select each second element of this selection:
In: b[0,1,::2]
Out: array([4, 6])
5. Using ellipsis to slice: If we want to select all the rooms on both floors that are in the second column, regardless of the row, we will type the following code snippet:
In: b[...,1]
Out:
array([[ 1, 5, 9],
[13, 17, 21]])
Similarly, we can select all the rooms on the second row, regardless of floor and column, by writing the following code snippet:
In: b[:,1]
Out:
array([[ 4, 5, 6, 7],
[16, 17, 18, 19]])
If we want to select rooms on the ground floor, second column, then type the following code snippet:
In: b[0,:,1]
Out: array([1, 5, 9])
6. Using negative indices: If we want to select the first floor, last column, then type the following code snippet:
In: b[0,:,-1]
Out: array([ 3,7, 11])
If we want to select rooms on the ground floor, last column reversed, then type the following code snippet:
In: b[0,::-1, -1]
Out: array([11, 7,3])
Every second element of that slice:
In: b[0,::2,-1]
Out: array([ 3, 11])
The command that reverses a one-dimensional array puts the top floor following the ground floor:
In: b[::-1]
Out:
array([[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]],
[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]])
What just happened?
We sliced a multidimensional NumPy array using several different methods.
Time for action — manipulating array shapes
We already learned about the reshape function. Another recurring task is flattening of arrays.
Ravel: Return a contiguous flattened array. A 1-D array, containing the elements of the input, is returned. A copy is made only if needed. As of NumPy 1.10, the returned array will have the same type as the input array. (for example, a masked array will be returned for a masked array input). We can accomplish this with the ravel function:
In: b
Out:
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])

In: b.ravel()
Out:
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23])
2. Flatten: The appropriately named function, flatten, does the same as ravel, but flatten always allocates new memory, whereas ravel might return a view of the array. It returns a copy of the array collapsed into one dimension.
In: b.flatten()
Out:
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23])
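A quick way to see that difference in practice (b is contiguous in memory here, so ravel does return a view):

In: r = b.ravel()
In: r[0] = 100       # writes through to b, because r is a view
In: b[0,0,0]
Out: 100
In: f = b.flatten()
In: f[0] = -1        # does not touch b, because f is a copy
In: b[0,0,0]
Out: 100
In: b[0,0,0] = 0     # undo the change before moving on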
3. Setting the shape with a tuple: Besides the reshape function, we can also set the shape directly with a tuple, which is shown as follows:
In: b.shape = (6,4)
In: b
Out:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [16, 17, 18, 19],
       [20, 21, 22, 23]])
As you can see, this changes the array directly. Now, we have a 6-by-4 array.
4. Transpose: Reverse or permute the axes of an array; returns the modified array. For an array, a with two axes, transpose(a) gives the matrix transpose. In linear algebra, it is common to transpose matrices. We can do that too, by using the following code:
In: b.transpose()
Out:
array([[ 0,  4,  8, 12, 16, 20],
       [ 1,  5,  9, 13, 17, 21],
       [ 2,  6, 10, 14, 18, 22],
       [ 3,  7, 11, 15, 19, 23]])
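For two-dimensional arrays, the .T attribute is a convenient shorthand for the same operation:

In: (b.T == b.transpose()).all()
Out: True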
5. Resize: Return a new array with the specified shape. If the new array is larger than the original array, then the new array is filled with repeated copies of a. Note that this behaviour is different from a.resize(new_shape) which fills with zeros instead of repeated copies of a. The resize method works just like the reshape method but modifies the array it operates on:
In: b.resize((2,12))
In: b
Out:
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]])
What just happened?
We manipulated the shapes of NumPy arrays using the ravel function, function flatten , the reshape function, and the resize method. | https://medium.com/python-in-plain-english/numpy-slicing-and-indexing-264f6d3a438b | ['Bhanu Soni'] | 2020-12-21 11:47:43.115000+00:00 | ['Numpy', 'Python', 'Machine Learning', 'Data Science', 'Programming'] |
There’s Nothing New About The New-Age | 1. There’s nothing new about: “Easy Orientalism”
New-Age beliefs, concepts, and practices are primarily borrowed (and by borrowed, I mean stolen) from varying sub-sects of Hindu, Buddhist, old Pagan, and Native-American schools of thought.
Note: As I personally come from a Hindu background and have no personal identity rooted in Indigenous or Paganist systems, I will only speak on behalf of Eastern religious traditions within this article.
Concepts like reincarnation, the ever-lasting soul (“atma”), chakras, yoga, mind-body-spirit connectivity, and karma are all central within our ancient Vedic texts. There isn't even one singular way of understanding them, as Hinduism itself has varying branches. Now these concepts may be easy to grasp on a cognitive, intellectualized level, but truly conquering these concepts beyond the surface-level was never meant to be a walk in the park. It requires tremendous self-discipline, study (under appropriate supervision), and selfless service. At the core, it meant cultivating one's bhakti (devotional love) for the Divine. Historically, India's greatest ascetics have supposedly meditated over numerous lifetimes. Lifetimes. And Buddhist monks were asked to adhere to a strict structure for everyday living.
In other words — to become enlightened, you had to work for it. There’s no fast-pass lane to gaining spiritual wisdom. It isn’t reading a few books by Deepak Chopra and discussing it over a weed session, going on a retreat, and experiencing a “Kundalini awakening” within a month. And also, where’s the talk of doing service? Social responsibility? Duty? Self-awareness? Bhakti?
But here’s the thing: That’s not going to sell. There’s nothing appealing nor glamorous about rigorous meditation under a tree for years, and having to forego all material riches. But hey! Meditating every now and then, in a hot-pilates studio with a $7 matcha latte sure sounds like it’s enough! In his critical read “Trickster and Tricked”, Erik Davis calls it “Easy Orientalism”: a knock-off copy on Eastern ideologies, that's made easier to swallow for Western audiences. It’s not too Asian to turn you off, but it’s exotically Asian enough. Convenient.
Easy Orientalism: a knock-off copy of Eastern ideologies, that’s made easier to swallow for Western audiences. It’s not too Asian to be a turn-off, but it’s Asian enough.
2. There’s nothing new about: Colonialism
Ever heard of colonialism? Well think of the New-Age market as colonialist plagiarism, where teachings that were once belittled are now sold back to us as a watered-down, Walmart version of the original concepts.
And aside from the obvious, why is that a problem you ask? Because when people think of concepts like karma, yoga, cosmic interconnectedness and everything else — they roll their eyes. Eastern thought sounds like a bunch of ungrounded fluff, because it's automatically associated with the Western’s poorly-packaged narrative for it. But these ideologies have been taken out of their textual context, to become more convenient for modern-day consumerism (and to sell Tibetan singing bowls for $100 apiece).
I also think of all the other kids from an Asian immigrant home, growing up out of their home country. Most (if not all) of us battled a turbulent relationship with our Eastern religions. As children, it was always hard to explain the bindi on our foreheads in sixth grade, or how we have an elephant-headed God named Ganesh. But now as an adult, it’s a newer issue: I cringe, guiltily, every time I attempt to read a Hindu text that speaks of the universe, or karma, or anything else that’s on a New-Age YouTube channel. Why? Because all I can think of are the hyper-commercialized, hardly-accurate explanations plastered everywhere in mainstream spiritualism. How key terms like “shakti” and “consciousness” have been incorrectly regurgitated over and over again like an overplayed song on the radio — and the tune is completely off.
You know, there’s a monstrous guilt in cringing before your own people’s texts. Quite the heartbreak.
3. There’s nothing new about: Snake-oil salesmen
Just because they’re selling a MasterClass on it, doesn’t mean they’re a master at it.
Products can’t be sold without their seller. The New-Age has really taken off with the help of social-media superstars and self-proclaimed “spiritual gurus” (influencers). Now don’t get me wrong, real gurus do exist. But the spiritual teachers that I’m thinking of don’t typically run an Instagram business page, with their bio reading “Entrepreneur 💪💯🔥”.
Course sold by spiritual teacher and guru, Bentinho Massaro.
The Sanskrit term “guru” implies a spiritual teacher who “dispels darkness” within the student. They bring us light through humble guidance, wisdom, and instigating the pursuit of knowledge.
To be honest, when it comes to speaking about these spiritual gurus on social media, I don’t even know where to begin. I really don’t. But in all fairness, perhaps their intentions really are pure: to genuinely share what’s worked for them, and to watch others bloom too. In fact, many of these influencers’ followers do profess that they’ve become happier, better, and kinder.
But it doesn’t change the fact that there are still underlying dangers in following one — the dangers of cult dynamics, spiritual narcissism and spiritual materialism:
There’s nothing new about cults
We see this time and time again. You’ve got a charismatic, god-like leader at the front of the room, with a large following of worshippers drawn to their “progressive” ideologies and promises. Today, New-Age idealists build digital cults through their online communities and platforms. These cults are trickier to pin down. On the surface, it seems light, empowering, and all in the name of personal development. But it sure gets questionable when these spiritual leaders curate a parasocial relationship with you: you obsessively spend your energy, time and money on them, but they barely know you exist. You’re just a fan.
It also gets real concerning when these all-knowing wise ones proclaim to have special insider knowledge and powers that you never will, and your only way of “evolving” is through them. They’re your only hope — you need them.
Let me share what very little I do know on real gurus: They’ll never disempower you like this.
An actual guru will remind you that they are just that — A guide. Ultimately, everything and everyone outside of ourselves (gurus, tarot cards, psychics, astrology, life coaches, etc) are just external nods. But your answers are always to be found internally, within you. Gurus may advise you, they may bless you, but they’ll gently remind you to always follow your own lead too.
There’s nothing new about spiritual narcissism
I’m going to tell you the one red-flag that you really need to look out for: Spiritual narcissism. In Douglas Todd’s “Spiritual Narcissism Works in Subtle Ways”, psychiatrist Gerald May defines it as “the unconscious use of spiritual practice, experience, and insight to increase rather than decrease self-importance”.
“The unconscious use of spiritual practice, experience, and insight to increase self-importance”.
The point of spiritual pursuit in Eastern religious traditions is to decrease the sense of self and ego — not inflate it. But when I watch the moves of these so-called spiritual masters, I see no humility. Spiritual narcissism can get the best of all of us, but these influencers insist on being “the master” figure. They want to feel seen and applauded for being oh-so spiritually inclined.
So if you’re feeling mighty proud of your “spiritual progress”, humble-bragging your special abilities on Insta, or feel like you’re slightly above your non-spiritual peers in ANY way — please, sit down.
There’s nothing new about spiritual materialism
(Also known as hypocrisy.)
Originally coined by Chogyam Trungpa Rinpoche, spiritual materialism is when we use spiritual knowledge and insights for primarily materialistic pursuits. (For example, learning the law of attraction to get a pretty car). And although there’s nothing wrong with desiring a home, a car, or a particular career, it warns us from becoming too externally focused.
So when a spiritual leader lectures about detachment from all possessions, but makes it on the Forbes 2020 billionaires list… I raise an eyebrow. Seriously, you’re allowed to be skeptical. And again, there’s nothing wrong with wanting success, having goals, and making it big. But if you’re going to make a living off of preaching Eastern theological ideas like material detachment, humility, groundedness and minimalism in your Buddhist monk robes — I expect you to walk your talk. | https://medium.com/an-injustice/theres-nothing-new-about-the-new-age-45274e51a62f | ['Vishali N.'] | 2020-05-10 08:15:36.459000+00:00 | ['Spirituality', 'Wellness', 'Social Justice', 'Culture', 'Colonialism'] |
Vision Statement | As I type this story, America is getting ready for bed. Our Operations Chief, Q, is heading out to lunch with investors in Hanoi and somewhere between the darkness and the light, is the blur.
We see it twice with each complete cycle because the blur is not the end. Dawn is the beginning of light and dusk is the beginning of dark. The blur is always the beginning, and that’s where we’re at with cryptocurrency today. I’m thankful for this because it means that the future is nearly upon us. I can almost see it ahead, and my vision is crystal clear.
I see that we will one day wake up and go to bed as a unified global mesh. Our activities will feed into an endless value chain and each person is fairly compensated for every second that they give to the network. The value they provide would come in many forms, from answering questions, to hosting experiences, to verifying transactions, to creating content. Fair compensation would come in the form of Enzo, a stable cryptocurrency based on the intrinsic properties of Time, to deliver a vast digital economy of equal opportunity.
I see developers hard at work, creating thousands of digital places where people can provide value for each other. Instead of extracting value from consumers, developers are compensated through the Samaritan Protocol for facilitating the exchange of value between them.
I see a day where there are thousands of different places where people can go to trade with the world, and thousands of virtual places where people can go to connect with others. At these Crossroads, the Enzo people earn is blind to their age, country, and race. Instead, it will be based solely on their contributions to the network.
EON Bridges the Gap
We want to create a digital economy that doesn’t exist today. It is a digital economy that blurs the line between the physical and the digital. This is an ambitious vision, but it’s one that is well within reach and blockchain technology can fast-track our pursuit.
Billions of people are already providing value to society in digital places — they just aren't getting paid. They're on message boards and in games, in social apps, and chat rooms. They're transacting offline, dealing online, and everywhere in between. Then there are many billions more who aren't even tapped into any network. They are the bank-less and the under-served who hold our world up. For the first time, we can include them all.
This is the vision for EON (Enzo Open Network): a digital data economy of equal opportunity.
It would be a world where people are compensated for their time, and a world where developers are compensated for creating novel pathways that let consumers provide value for each other. It would be a world where people win together by working together. It would be a digital world built on Enzo — secured, private, and accessible via our GGUI (Gateway Graphical User Interface), Alfa.
We, the world, can move forward as a whole — the way we always thought we could. The way we always thought we would. In this world, everybody wins. | https://medium.com/alfaenzo/vision-statement-cc732f21e708 | ['Tony Tran'] | 2018-06-29 09:25:34.423000+00:00 | ['Social Media', 'Blockchain', 'ICO', 'Startup', 'Bitcoin'] |
Arshile Gorky & Mark Rothko’s Abstract Expressionism | Arshile Gorky & Mark Rothko’s Abstract Expressionism
The Eastern European creators of American Color Field Painting.
Several shifts in the intellectual, social and political climate of post World War II America gave way to the rise of Abstract Expressionism. The artistic style gained popularity as a weapon against Totalitarianism, which had plagued Europe during the war, and as a method of art-making that championed individuality and self-expression.
Critics such as Harold Rosenberg and Clement Greenberg explored the motives and meaning of the contemporary American art scene through scholarly writings and critical reviews of gallery exhibitions. Many of these art critics and scholars deemed Abstract Expressionism a uniquely American art movement; after all, it took root and blossomed almost exclusively in New York City.
Although the profound explorations and developments of the Abstract Expressionists are largely accredited to American-born figures like Jackson Pollock and Barnett Newman, one simply cannot ignore the profound ways in which European immigrants and war refugees shaped the American art world. Fleeing from violence and persecution in their native countries, millions of Europeans arrived to Ellis Island in the 1930s and 1940s. With them, they brought European art theory, philosophy, psychoanalysis, and other intellectual values that greatly influenced the birth of the Abstract Expressionist movement.
The works of Arshile Gorky and Mark Rothko, two influential artists in the Abstract Expressionist schools of Action Painting and Color Field Painting respectively, exemplify this Euro-American exchange of ideas. Although the formalist approaches of Gorky and Rothko differ greatly, both artists share life stories threaded with tragedy.
Born in Turkish Armenia in 1904, Gorky and his family fell victim to persecution and exile during World War I. One by one, members of Gorky’s family fled to the U.S. but he remained in Armenia for several years with his mother and his sister, Vartoosh. In 1915, the Armenian people were forced on a death march across the Caucasus mountains, during which Gorky’s mother died of starvation in his arms. Gorky and Vartoosh eventually escaped to the United States but Gorky’s traumatizing experiences would always weigh heavily on him.
Rothko faced a very similar threat of persecution in Lithuania. Born Marcus Rothkowitz in 1903, he spent a childhood plagued by mob violence against Jews in Russia. When he was ten years old, Rothko fled with his mother and sisters to meet his father and brothers in Portland, Oregon, where his father died just seven months later (Findberg, 62–64, 106).
As their artistic endeavors progressed, both Gorky and Rothko channeled their childhood experiences into artwork that was fraught with raw emotion. Gorky’s art was “the vehicle through which he experienced everything,” (62). As educated and self-taught intellectuals, both men were familiar with the principles of surrealism and the European Avant-Garde.
Gorky took an interest in cubism as well, emulating the geometric style in his earlier works. He had a habit of copying the styles of the great modern masters believing that good painting is good painting regardless of originality. However, as his style progressed, his paintings increasingly relied on surrealist automatism.
Arshile Gorky, Garden in Sochi, № 3, c. 1948. Oil on canvas, 31 x 39in. The Museum of Modern Art, New York.
Garden in Sochi, №3, finished in 1948, qualifies as Abstract Expressionist in its own right. Considered one of his later paintings, Garden in Sochi stylistically mirrors the works of Spanish surrealist Joan Miro but manages to showcase a technique that is recognizably Gorky (68).
The artist wove curvilinear brushstrokes into shapely masses, distinguishing them from the crème-colored background by filling the carved-out forms with bold splotches of color. Meant to depict an abstracted landscape, Gorky contradicts preconceived illusions of three-dimensional space and applications of earth tones by painting his landscape on a flat plane marked by outlines and blocks of bold red, yellow, green and black hues.
His objective in depicting the Garden in Sochi in this manner was not to illustrate its physical appearance but to capture the positive emotional associations the artist ties to the place. According to his sister, the painting refers to their own family garden back in Europe: “It was a custom in our family at the birth of a son to plant a poplar tree which would later have the birth date and name carved into it. Gorky as a child loved his tree and took great pride in caring for it,” (68).
Like Gorky, Mark Rothko also believed capturing emotion was essential to the art-making process. He even described the composition of his mature works as the ideal format for “dealing with human emotion [and] with the human drama as much as I can possibly experience it,” (105).
In the early years of his career, Rothko drew influence from surrealism and contemporary psychoanalysis. Post-war artists in New York City art schools studied the psychological and philosophical works of Carl Jung, Friedrich Nietzsche, Albert Camus and Jean-Paul Sartre, often attempting thereafter to visually explore these ideologies. Rothko, like many Abstract Expressionists, developed an interested in Jungian archetypes and the concept of the “collective unconscious.” From about 1940–1946, Rothko explored Jungian notions through symbolic and figurative representation.
In the late forties, he began experimenting with more simplistic block forms of color. He sketched new ideas and developed an admiration for the work of Clyfford Still, a color field painter who belittled automatist painters as mere “scribblers,” (109). Rothko eventually met Still in California and drew inspiration from Still’s expressive use of color blocks.
Mark Rothko, Number 22, 1949. Oil on canvas, 9ft 9in x 8ft 11 1/8in. The Museum of Modern Art, New York.
In 1949, Rothko finalized his new style of color field painting, exemplified in Number 22. The artist himself described this composition as “the elimination of all obstacles between the painter and the idea, and between the idea and the observer,” (110). Rothko’s new style contrasts the monumental idea with the simplistic visual, the stable composition with the emotional turmoil it illustrates.
The artist’s fiery color palette recalls Gorky’s Garden in Sochi but conveys a different mood entirely. While Gorky’s painting immortalizes a fond childhood memory, Rothko’s canvas conveys a raw anger that almost assaults the viewer. He was a deeply depressed man whose friends described him as a short-tempered, often irrational person. When discussing his own visual aesthetic, Rothko described his work as possessing “a clear preoccupation with death,” an apparent characteristic of his mature color field works (111).
Although representation was not an aim of Number 22, the distinct yellow lines etched into the central red bar strongly suggest a horizon line. The warm yellow and orange tones of the color clouds could represent sunlight permeating the sky and blanketing the Earth. By this interpretation, it can be argued that Gorky and Rothko arrested the emotional essence of place.
Influenced by early 20th century European art trends and the teachings of European intellectuals, both Gorky and Rothko created visual images that addressed both the grandiose psychoanalytic principles of Carl Jung and the intimate, internal struggles of the individual artist. Placing a foot in the European intellectual world and the New York City art scene, Gorky and Rothko also channeled their shared early childhood traumas and their internal emotional struggles. Ultimately, they created works that achieved transcendence through simplistic forms. | https://medium.com/the-curiosity-cabinet/arshile-gorky-mark-rothkos-abstract-expressionism-56b4fcd78fc3 | ['Polina Rosewood'] | 2020-12-03 21:48:19.363000+00:00 | ['Painting', 'Artist', 'History', 'Art', 'Creativity'] |
How O’Reilly Media Learned To Reinvent Its Business Model And Built A New Future | When Laura Baldwin arrived at O’Reilly Media in 2001 as Chief Financial Officer, she was well equipped for the job. She had previously held the same position at Chronicle books and then spent a few years as a consultant for BMR & Associates, a firm that specializes in helping media companies.
The challenges at O’Reilly, however, were somewhat unique. Over the years the company had become something akin to the official publisher of Silicon Valley and had ridden the dotcom boom to prominence. Now that the boom had turned to bust, O’Reilly was in dire straits and banks were calling in loans.
So Baldwin did what a good CFO does, she instilled better financial management and within a few years the company had returned to profitability. Yet she and the firm’s founder, Tim O’Reilly, began to see an opportunity to truly transform the business. Today, as President, she focuses not just on publishing books, but on everything that comes after.
Humble Beginnings
Tim O’Reilly is an unusual tech icon. Soft spoken and understated, he’s about as far from Steve Jobs or Elon Musk as you can imagine. In college he didn’t major in engineering or physics, but classics. Nevertheless, he began working as a technical writer in the late 70s and within a few years started his business publishing manuals in a converted barn outside of Boston.
As the center of gravity in the technology industry shifted from Boston’s Route 128 to the San Francisco Bay area in the late 80s, O’Reilly moved with it. The company soon became known for having a knack for identifying important technologies that were emerging and getting out reliable guides before anybody else.
So when the dotcom era began, O’Reilly and his company were at the center of it. Its products practically leapt off the shelves as eager entrepreneurs raced to keep up with the latest technologies and bring their products to market. Tim O’Reilly himself became something of a Silicon Valley oracle, with an almost uncanny ability to sense and articulate emerging trends.
That’s what all came crashing down in 2000. With once high-flying startups dropping back down to earth, demand for O’Reilly books plummeted. In a painful restructuring, the company would lay off about a quarter of its staff. However, within a few years it would emerge again to help power the next technology boom.
O’Reilly 2.0
By 2003, O’Reilly saw a new era of technology emerging. Called Web 2.0, a term that he and the company did much to popularize, it represented a shift from the Internet as mostly a collection of informational pages to a truly interactive software platform that would shift power from publishers to users.
Baldwin, named the company’s Chief Operating Officer in 2004, saw the opportunity to shift the business along similar lines. With the company profitable again, it began to focus on building up its conference business with sponsorship emerging as a key revenue stream.
Now, rather than just merely publishing instruction manuals that explained how to use new technologies, O’Reilly emerged as a platform to help bring those technologies to the fore. At events with names like Web 2.0 Summit and OSCON, thought leaders would gather to meet and exchange ideas. Much like the web itself, the audience was becoming the product.
The success of the conference business would lead to an even greater shift that would not only lead to greater profits, but would redefine how it saw its role as a publisher.
From A Publishing Company To A Knowledge Business
As O’Reilly saw its conference business grow, it also began to change the way it saw its business. Previously, its conferences focused on tech luminaries, like Jeff Bezos, Mary Meeker and Marc Andreessen, to get people excited about new technology trends. Yet it saw even greater opportunity in using conferences to help people leverage those new technologies.
“That shift in focus to the end user and asking ourselves ‘What did they need to learn?’ ‘What tasks did they need to accomplish?’ — that really drove our growth.” Baldwin, now President of O'Reilly, told me. By 2014, the business was growing at the rapid rate of 45%.
The company also saw another opportunity. Back in 1999, Tim O’Reilly had the foresight to partner with Pearson in a digital library. Baldwin saw the opportunity to transform that platform from what was essentially a reference library into a true learning platform. (Disclosure: I’ve been paid to appear on the O’Reilly’s learning platform as an expert).
So O’Reilly bought out Pearson’s share of the business and switched emphasis from one of selling individual books to a subscription model. The platform was also expanded to include online training courses, video events and other resources to help its customers meet the challenges of an ever-changing world. With 2.5 million users, it’s now the fastest growing part of O’Reilly’s business.
Gearing Up For The Next Great Disruption
Today, as the rest of the publishing industry struggles, O’Reilly is thriving. What’s key to its success is that while other firms in the industry focus exclusively on sourcing and distributing books, O’Reilly’s business is increasingly focused on what happens after the book is published by leveraging conferences and additional learning opportunities on the digital platform.
“We don’t see ourselves as a book publisher,” Baldwin says. “The book is just a container. What we’re focused on now is packaging the knowledge and know-how of thought leading talent to help our customers make an impact on the world.” That creates more value for O’Reilly’s customers and also provides additional revenue for itself and its authors.
“So, for example, we might contract an author to write a book, but that relationship goes far beyond the publishing date,” Baldwin explains. “They speak and do trainings at our conferences. They do events on our learning platform. They design online training curriculums. Anything that helps us spread knowledge so that it can be used productively.”
Baldwin also feels confident that O’Reilly is well positioned for the next era of technology. “Our history has been rooted in the digital revolution, yet our future may not be,” she says. “There are a number of technologies that are nascent today, from genomics to nanotechnology to new computing architectures like quantum. We see our job as spreading that knowledge, making it actionable and helping people navigate what’s happening.”
What could have been another cautionary tale of the excesses of the dotcom era has turned into an inspiring story of rebirth. The truth is that value never really disappears, it just shifts to another place.
An earlier version of this article first appeared on Inc.com
Previously published at www.digitaltonto.com. | https://greg-satell.medium.com/how-oreilly-media-learned-to-reinvent-its-business-model-and-built-a-new-future-9ecdf8e91550 | ['Greg Satell'] | 2019-04-07 11:15:03.758000+00:00 | ['Startup'] |
Finally, You Can Start Understanding Machine Learning Papers | Immediately, the authors of the Batch Normalization paper begin by initializing variables.
Tip: Machine learning papers are notorious for creating dozens of variables and expecting the reader to know what they mean when they are referenced later. Take a highlighter and highlight where a variable is ‘initialized’ and where it is used henceforth. This will make reading much easier.
The authors present the formula for Stochastic Gradient Descent, initializing several variables, such as the parameters and the maximum number of training examples in the set.
G lossary: “arg min f(x)” refers to the arguments, or inputs, which minimize the following function, in this case f(x).
In English, the statement reads, “the parameters of the network Θ are equal to the values in which [the average of all values outlined by a function l which takes in an individual training point and the current parameters] is minimized.” This is a mathematically rigorous definition of the goal of a neural network. Just for the purpose of being rigorous, often mathematical equations are written to be more complex than they actually are. It’s helpful to write out what an equation means in a human language.
The authors note that with SGD, training proceeds in steps, outlining an additional variable m which represents the size of each “mini-batch”. By using mini batches instead of one example at a time, the gradient of the loss is a better estimate for the gradient over the entire set.
The highlighted statement reads, “the average of [the change in the loss function, which takes in the current training example and the parameters, with respect to the parameters] for all values i in the training mini-batch”. This is the definition of a ‘gradient’, which calculates the error landscape of the loss function to provide insight on how the parameters should be updated. The use of [fraction][sigma] is almost always a complex way of meaning the average.
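In symbols, the mini-batch estimate being described is (same caveat about reconstruction):

\frac{1}{m} \sum_{i=1}^{m} \frac{\partial \ell(x_i, \Theta)}{\partial \Theta}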
Glossary: ∂ is universally used to represent a partial derivative, or a change in a function with two or more variables. A derivative can be thought of, in a simple context, as “a small change in one variable’s effect on another variable.”
While SGD works well for a variety of reasons, the authors write that a change in the distributions of one layer’s inputs causes difficulties in the following layers because they need to adapt to the changing distributions, a phenomenon they call covariate shift.
While traditionally this was handled by domain adaptation, the authors believe that the idea can be extended to a sub-network or a layer.
The first highlighted statement reads “the loss is equal to an arbitrary transformation on [a parameter and an arbitrary transformation on (another parameter and an input u)]”. In this statement, the authors are setting up a hypothetical network to support their ideas. The authors simplify the initial statement by replacing a component with x to represent an input from the previous function. Θ[1] and Θ[2] are the parameters that are learned to minimize the loss l. These premises are identical to a complete neural network, simply built at a smaller scale.
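The hypothetical two-layer loss being set up reads (reconstructed in the paper's notation):

\ell = F_2(F_1(u, \Theta_1), \Theta_2), \qquad x = F_1(u, \Theta_1)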
The highlighted equation demonstrates the mechanics behind gradient descent, which computes partial derivatives to calculate the gradient, with progress determined by a learning rate. The change, which may be positive or negative, is subtracted from the parameter Θ[2], and is intended to steer the parameters in a direction to minimize loss/F2.
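With batch size m and learning rate α, that gradient descent step on Θ[2] is:

\Theta_2 \leftarrow \Theta_2 - \frac{\alpha}{m} \sum_{i=1}^{m} \frac{\partial F_2(x_i, \Theta_2)}{\partial \Theta_2}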
The authors write that this gradient descent step is the same for a single network F2 with the input x to establish the legitimacy of comparison between a real neural network and the one hypothesized. Because it Θ[2] does not need to readjust to compensate for a change in the distribution of x, it must be advantageous to keep x’s distribution fixed.
Note: This is a common theme in machine learning papers. Because machine learning deals with systems that involve so many more variables and with so much more complexity than other fields, its papers will often follow a three-step process to demonstrating how a thesis works: 1. Create a hypothetical and simple system.
2. Establish the identical natures between it and a real neural network.
3. Draw conclusions by making operations on the simple system. Of course, in more modern papers one will see a section entirely devoted to displaying accuracies and how the method works on various common benchmark datasets like ImageNet, with comparisons to other methods.
Besides, the authors write, a fixed distribution of inputs would be beneficial for inputs in the entire network. They bring up the standard equation
z = g(Wu + b), with W representing the weights and b the bias. g(x) is defined to be the sigmoid function. The authors point out that as x’s distance from 0 increases, its derivative — or its gradient — tends ever closer to 0.
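For reference, the sigmoid and its derivative (the standard definitions, not anything specific to this paper) are:

g(x) = \frac{1}{1 + e^{-x}}, \qquad g'(x) = g(x)\,(1 - g(x))

which is why the gradient shrinks toward zero as |x| grows.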
Because the derivative slopes away for extreme values of x, when the distribution shifts, less information (gradient) is given because of the nature of the sigmoid function.
Glossary: g’(x) is another notation for the derivative of g(x).
Hence, the authors conclude, useful information propagated by the gradient will slowly vanish as it reaches the back of the network because changes in distributions cause cumulative decay in information. This is also known as the vanishing gradient problem. (Its opposite, the exploding gradient problem, is when massive gradients cause weights to fluctuate wildly, causing instability in learning.)
As a proposal to address this issue, they consider a layer that adds a learnable bias to the input, then normalizes the result. If the changes of the normalization caused by the bias are ignored (the bias’s gradient is calculated and updated independently), the combination of updating b and the corresponding change in the normalization yielded no change in the output layer.
Glossary: E[x] is often used to represent the mean of x, where “E” represents “expected value”. This is later defined with the formal summation definition later on. ∝ means “proportional to’ — the delta (change) in b is proportional to the standard formula for gradient descent.
This is proven mathematically: since x[hat] is equal to x − E[x], and x is equal to u + b, these statements are combined to form u + b − E[u + b]. However, the changes to b, represented by Δb, cancel out inside the normalization, so the output is equal to what it was before the update. Hence, b will grow indefinitely because of the faulty gradient while the loss remains fixed.
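Spelled out (reconstructed from the description above), the normalization is \hat{x} = x - E[x] with x = u + b and E[x] = \frac{1}{N}\sum_{i=1}^{N} x_i, and after the update b \leftarrow b + \Delta b the output satisfies:

u + (b + \Delta b) - E[u + (b + \Delta b)] = u + b - E[u + b]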
Tip: Often, papers will set up various statements and suddenly combine them together. How the authors arrive at a conclusion may be puzzling; try to underlying various relevant equations and see how they fit together. More importantly, however, it’s important to understand what the equation means.
With these considerations, the authors slightly adjust their batch normalization formula to normalize each scalar feature independently, each with zero mean and a unit variance. With the removal of an unnecessary bias, the layer transforms all inputs into a normally distributed output.
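That per-dimension normalization, for the k-th feature of an input x (again written in the paper's notation), is:

\hat{x}^{(k)} = \frac{x^{(k)} - E[x^{(k)}]}{\sqrt{\mathrm{Var}[x^{(k)}]}}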
There’s plenty more within the Batch Normalization paper to be read and understood. Be aware, however, that the conclusions these authors come to has been proven to be slightly flawed — specifically, that internal covariate shift was the reason why batch normalization worked so well. | https://towardsdatascience.com/finally-you-can-start-understanding-machine-learning-papers-85183fe6734f | ['Andre Ye'] | 2020-08-06 00:58:52.836000+00:00 | ['Data Analysis', 'Machine Learning', 'Research', 'AI', 'Data Science'] |
I have something against the new Nike campaign, and it isn’t about Kaepernick | Once again, be ready for sacrifice — as long as it serves your little ego at the expense of everything else.
I am tired.
Ever since I came to this world, whether through schools relying on grades and competition, or more perniciously by being faced constantly with ads and commercials urging me to become a better-self by buying bullshit, I have been confronted to the idea that I should try to be “THE BEST”. That I should try to be on top. And that only that mattered. That there was no need in being kind, in helping others, or even in being smart, as long as it did not serve the purpose of climbing up the very subjective ladder of human success and recognition.
I am going to talk about the words first, because I am a writer, and words DO have their importance. Looking at the words of Nike’s new campaign shows another symptom of our society’s sickness with “being the best, whatever the price” but first of all, it is dangerously evasive regarding the philosophy it propagates. This works like a mantra, and this is why it is dangerous: because it’s been around for way longer than the ad.
The importance of words
Featuring Colin Kaepernick, the ad is made up of two very elliptical sentences: “Believe in something. Even if it means sacrificing everything.” Nothing is said about what that something could be — it might as well be that it's OKAY to kill your neighbor for his beliefs, or that women don't deserve equal pay — or worse, that they deserve to die because they won't have sex with you. When looking at the ad, it is impossible to know what that “belief” you're ready to “sacrifice everything for” should be. When I first saw the ad, I actually thought, in a pretty extreme manner, that this could actually totally work for an ISIS ad too, which is pretty ironic from an occidental brand and company. See, I am pretty sure people in ISIS actually DO believe in something, strongly enough so that they are actually sacrificing EVERYTHING for it, even their own body. This is the principle at the very heart of any suicide bombing. And I think these extreme kinds of beliefs have proven to be dangerous depending on what you choose to believe in.
However, this is just me giving meticulous attention to words. Even though I do believe that words used in commercials, because of the importance marketing and advertising have taken in our consumption-driven societies, actually do tend to act more like brainwashing devices and gurus than what we actually want to acknowledge. Philosophy, literature, sadly, have massively been replaced by ads and mottos used to sell shit all around the world. Just compare the number of available positions for copywriters to the ones open for actual journalists and writers.
I also have a problem with the second part of the ad. “Even if it means sacrificing everything”. Once again, what is that everything? Family life? Your friends? Your health? This is so stupid it’s outrageous. Nike doesn’t even want to do bad, but this is terrible. What kind of troubled person would agree to “sacrifice everything” just in order to be the “fastest runner ever”? (Because when looking at the commercial accompanying the ad, that’s what you learn: that the “belief” is individualistically sports related, of course). How about a more humble “prioritize your desires and needs”? Would that sound so terribly human?
Don’t be fooled: Nike’s real intentions
Of course, the fact that Nike used Kaepernick shows their intention to convey meaning, that this “something” could actually be very political, and that bravery could be shown by sacrificing your career for something you truly believe in — not only for yourself, but for others too. Still. I refuse to be fooled. Nike does not want us to stand up politically because, obviously, that would not work out so well for them. Nike represents everything that's wrong with mass consumption and globalization, a company long-known for using child labor abroad and selling shoes at a hundred times what they actually cost to make.
Here’s about the commercial now, that urges us to make sure we’re on the “best team”, or that we’re working hard to become the “fastest ever”. The whole political statement that can be grasped on the Kaepernick ad has totally disappeared and has been replaced by selfish dreams of performance. This just shows that if we are under any dictatorship right now, it is the one of our sick sick sick egos. A dictatorship that benefits from the brands’ propaganda, and, in sports, from the glorification of made-up gods that are paid millions. The soccer industry around the world is one of the most convincing examples of that. | https://efenaughty.medium.com/i-have-something-against-the-new-nike-campaign-and-it-isnt-about-kaepernick-86750b9451ef | ['Emilie Fenaughty'] | 2018-09-12 08:28:43.171000+00:00 | ['Environment', 'Nike', 'Opinion', 'Advertising', 'Sports'] |
A Letter to Students Who Just Started (or Are About to Start) Coding Bootcamps | A Letter to Students Who Just Started (or Are About to Start) Coding Bootcamps
My experience in coding bootcamp can hopefully help you overcome anxiety and nervousness before your first day
Photo by Tonik on Unsplash.
Hi, my name is Megan. First off, I want to congratulate you on starting/being admitted to the bootcamp of your choice! You may wonder, “Who is this stranger writing a letter to us about a coding bootcamp?” I am currently nearing the end of a coding bootcamp with Flatiron. Yeah, I know. I am not done with the program yet, but it does not mean I cannot write an article for you. As I am preparing my final project, I wanted to write this not only for myself but for anyone who is reading this today.
Are you feeling nervous? Anxious? Maybe some imposter syndrome? It’s normal to feel this way. It’s a new chapter of your life. You either:
1. Probably just left your old job behind and are about to make a major career change that will change your life forever.
2. Are currently working part-time so you don't have to worry about your expenses while taking classes at night and gaining new skills in the meantime to prepare for a career change.
3. Simply want to learn some new skills and/or brush up on your web development skills.
4. Are like me, who just graduated from college (or high school and decided not to go the traditional college route) and are hoping to look for better job opportunities and skills.
Feel free to comment if I have left some of you out! I want you all to feel included while reading this.
A little about myself: I am currently in the Immersive Software Engineering program with Flatiron. There are so many categories out there, but I hope I could include you in this letter — even if you are attending other types of coding bootcamps.
Every company sets their programs differently. I had to complete 100+ hours of pre-work before my first day of bootcamp. Before that, I had to go through a behavioral interview and then 80+ hours of prep work before the tech interview. Was it intense? Quite. But I enjoyed it a lot — especially the tech interview. Admittedly, I did postpone my tech interview several times just because I was so nervous and felt like I was not ready for it at all. I hope you all don’t make my mistake because I should’ve done the interview earlier. Once again, congratulations! You made it!
I remember feeling super excited when I received a call from my admission specialist that I got into the bootcamp I had been eyeing for months. He told me all about the prep work I had to do and I thought, “Yes! I love more coding!” But then sometimes I felt a bit unmotivated and my code would go untouched for days.
And guess what? On the last two weeks before my first day of bootcamp, I had to rush to finish all my work so I could catch up and start with my intended cohorts. *Show of hands if you’re also a procrastinator.*
These issues may sound familiar and resonate with you. | https://medium.com/better-programming/a-letter-to-students-who-just-started-or-about-to-start-coding-bootcamps-e469551eb0e9 | ['Megan Lo'] | 2020-11-23 15:55:30.280000+00:00 | ['Software Development', 'Coding', 'Startup', 'Learning To Code', 'Programming'] |
The Evolution of View Linking in Android | Data Binding
The Data Binding Library is a support library that allows you to bind UI components in your layouts to data sources in your app using a declarative format, rather than programmatically.
The Data Binding Library generates binding classes for the layouts, which we can use in Android components like Activity and Fragment . Data binding is a sort of declarative solution with type and null safety.
To make a layout support data binding, we have to wrap the content of the file with the layout tag. This will generate a binding file of that layout with a reference to all of the views in it. Have a look:
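For example, a layout prepared for data binding could look like this (the view ID and the contents of the layout are just for illustration):

<layout xmlns:android="http://schemas.android.com/apk/res/android">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">

        <!-- Every view with an ID becomes a field on the generated binding class -->
        <TextView
            android:id="@+id/titleText"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content" />

    </LinearLayout>
</layout>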
Wrapping the root content inside a ‘layout’ tag
Once we’ve finished that part in the XML, a binding class will be generated, and we need to use it in the View class. This binding-file instance contains all of the views in the layout. Have a look:
Data binding in the ‘View’ class
This seems to be a reasonable solution, but data binding is created to solve more complicated problems than just linking views, such as the layout file accessing Data classes to publish data from the XML itself and loading remote images using binding adapters.
These sorts of things make Data Binding a complicated library to use just for the view-linking problem. Because it is built to solve those more complicated issues, it comes at a cost: longer compile times and errors that are hard to understand.
7 Stories Behind the World’s Most Popular Machine Learning Algorithms | The world of AI/Machine Learning is evolving fast. If you’re like me, keeping up with the latest developments can seem like trying to reach a destination while walking on a treadmill. Sometimes it’s worth it to just step off, pause, and look back at the origins of many of the paradigms and algorithms that got us to where AI is today.
We are very fortunate that many but, sadly, not all, of the inventors who shaped AI are alive today. It can be inspiring (if sometimes intimidating) to hear about pivotal moments in the field from the very people who made them so significant. To that end, I’ve included the following seven videos (taken from past interviews and talks) because of what we can learn from these luminaries of our profession.
Together, the videos shine a light on the history of these algorithms, particularly on specific problems these researchers were trying to solve. Their solutions eventually led to the invention of the algorithms themselves. This glance at the past provides a deeper understanding of the methods and suitability of the algorithms for different applications.
The videos also give us a glimpse into the thought processes behind these inventions. An understanding of these mental processes might, in turn, help us apply these similar processes to solve the problems our field currently faces.
Finally, the videos provide an entertaining history of the development of the algorithms, analogous to the way the “origin stories” in comic books help readers understand the “back story” of popular heroes and heroines.
The Seven Stories | https://medium.com/bcggamma/7-stories-behind-the-worlds-most-popular-machine-learning-algorithms-51472939d14b | ['Sithan Kanna'] | 2018-09-13 11:56:14.557000+00:00 | ['Machine Learning', 'Algorithms', 'Data Science', 'Creativity', 'Management'] |
Ultimate Guide to Python's Matplotlib: A Library Used to Plot Charts | Ultimate Guide to Python's Matplotlib: A Library Used to Plot Charts
A simple guide to draw Bar Charts, Line charts, and Pie charts in Python
Photo by Cookie the Pom on Unsplash
Data visualization refers to the graphical or visual representation of data and information using elements like charts, graphs and maps, etc. Over the years, data visualization has gained immense popularity as it provides an easy interpretation of even massive amounts of data by displaying it in the form of patterns, trends and so on. By following these patterns and trends a user can facilitate his decision making.
Python uses the Matplotlib library's pyplot for data visualization. Pyplot is a collection of methods within the Matplotlib library, which can be used to create 2D charts, graphs and represent data interactively and effectively. The Matplotlib library is preinstalled with Anaconda distribution or can also be installed easily from the internet.
Installing Matplotlib
1. If you have Anaconda navigator, open the navigator window, click environments and scroll down to find Matplotlib library. It is preinstalled on your computer.
2. If you don't have Anaconda navigator, that isn't a problem. Just go to https://pypi.org/project/matplotlib/#files
Here you will find the library. Download and install it, and you are ready to create wonderful charts and graphs in python itself.
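If you already have Python and pip set up, you can usually install it straight from the command line instead (a typical command, not from the original article; your environment may differ):

python -m pip install matplotlib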
Types of charts offered by Matplotlib
It offers a wide range of charts of which the most prominently used ones are listed below:
1. Line Chart
It connects the data points, called 'markers', with straight-line segments. These points represent the data that you enter while making the chart.
2. Bar Chart
It uses bars to represent the data. The height of the bars is varied to depict the differences in the given data. Bar charts can be plotted horizontally as well as vertically, depending upon the need of the user.
3. Pie Chart
Slices of a circular area are used to depict the data. A slice with a larger area represents a higher value, whereas a smaller value is represented by a smaller slice.
4. Scatter plot
A scatter chart plots the data as individual dots. It differs from the line chart in that the dots are not joined by straight lines.
Now, let's move on to the steps to create these charts.
Note: You will have to give the command to import Matplotlib before you set out to create charts. For this just type the below-mentioned command in your Jupyter or python window:
import matplotlib.pyplot as pl
This imports pyplot under the alias 'pl', so you only have to type 'pl' instead of the longer 'matplotlib.pyplot' every time you create a chart.
Line Charts
To create a line chart you must assign some data beforehand. This data can be given in the form of lists, or dictionaries in python. Here I will use lists to create charts:
import matplotlib.pyplot as pl

a = [1, 2, 3, 4]
b = [2, 4, 6, 8]

pl.plot(a, b)
pl.show()
Here a and b are lists containing the values 1, 2, 3, 4 and 2, 4, 6, 8 respectively. The command pl.plot(a,b) plots a line chart using the values in 'a' for the x-axis and the values in 'b' for the y-axis.
Here is the plotted chart:
Image source: Author
You can also give names to the x and y-axis as follows:
pl.xlabel("values in a")
pl.ylabel("values in b")
pl.plot(a, b)
pl.show()
Here the x-axis will be named as ‘values in a' and y-axis as ‘values in b'.
Bar Charts
Let's move on to drawing bar charts. Bar charts require the same steps as the line chart. The only difference arises in the command used to plot the data.
import matplotlib.pyplot as pl

a = [1, 2, 3, 4]
b = [2, 4, 6, 8]

pl.bar(a, b)
pl.show()
While giving the command to plot a bar chart we need to specify ‘bar’ for the same, as we have done above.
Image source: Author
To name the x and y-axis the same procedure can be followed.
pl.xlabel("values in a")
pl.ylabel("values in b")
pl.bar(a, b)
pl.show()
The width of the bars can also be altered using the 'width' argument. The value given for width should be numeric; otherwise, Python will raise an error.
pl.bar(a,b, width=<value>)
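For instance, with an illustrative value of 0.5 (the number is not from the original post):

pl.bar(a, b, width=0.5)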
Scatter Charts
Scatter charts allow you to change the way their data points, or markers, look by specifying the marker type and marker size. For this type of chart, you must specify at least the marker type (or both) when giving the command to create it.
a = [1, 2, 3, 4]
b = [2, 4, 6, 8]

pl.plot(a, b, "o", markersize=10)
pl.show()
Here the data points will look like the letter 'o' and will have a marker size equal to 10. If we don't specify a marker, a line chart will be plotted instead of a scatter chart.
Image source: Author
Changing the x label and y label would remain the same for the scatter charts as well.
pl.xlabel("values in a")
pl.ylabel("values in b")
pl.plot(a, b, "o", markersize=10)
pl.show()
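As a side note (not covered in the original article), pyplot also provides a dedicated scatter function that draws the same kind of chart without the marker-string syntax; here the s argument sets the marker area in points squared:

pl.scatter(a, b, s=100)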
Pie charts
Unlike the other charts, a pie chart can work with just one list. But for clarity, for both readers and users, we specify labels for each slice, which requires the use of a second list.
a = ["Sam", "Tina", "Joe", "Mark"]
b = [100, 200, 300, 400]

pl.pie(b, labels=a)
pl.show()
The lists represent the contributions made by four members to organize a party. Each member is depicted by a different colour, as follows:
Image source: Author
You can also give a title to your pie chart: | https://medium.com/datadriveninvestor/ultimate-guide-to-pythons-matplotlib-a-library-used-to-plot-charts-3d2210ccb04c | ['Niyati Jain'] | 2020-12-14 18:04:54.883000+00:00 | ['Technology', 'Digital Life', 'Design', 'Programming', 'Computer Science'] |
AI is making CAPTCHA increasingly cruel for disabled users | Written by Robin Christopherson MBE, Head of Digital Inclusion at AbilityNet
A CAPTCHA, (an acronym for “completely automated public Turing test to tell computers and humans apart”), is a test used in computing to determine whether or not the user is human. You’ve all seen those distorted codes or image-selection challenges that you need to pass to sign up for a site or buy that bargain. Well, improvements in AI means that a crisis is coming … and disabled people are suffering the most.
CAPTCHAs are evil
Whatever the test — whether it’s a distorted code, having to pick the odd-one-out from a series of images, or listen to a garbled recording — CAPTCHAs have always been evil and they’re getting worse. The reason is explained in an excellent recent article from The Verge; Why CAPTCHAs have gotten so difficult. Increasingly smart artificial intelligence (AI) is the reason why these challenges are becoming tougher and tougher. As the ability of machine learning algorithms to recognise text, objects within images, the answers to random questions or a garbled spoken phrase improve month on month, the challenges must become ever-more difficult for humans to crack.
Jason Polakis, a computer science professor at the University of Illinois at Chicago, claims partial responsibility. In 2016 he published a paper showing that Google’s own image and speech recognition tools could be used to crack their own CAPTCHA challenges. “Machine learning is now about as good as humans at basic text, image, and voice recognition tasks,” Polakis says. In fact, algorithms are probably better at it: “We’re at a point where making it harder for software ends up making it too hard for many people. We need some alternative, but there’s not a concrete plan yet.”
We've all seen the 'I am not a robot' checkboxes that use clever algorithms to decide if the user's behaviour navigating the website is random enough to be human. These used to work well — letting us through with that simple checking of the box — but increasingly the bots are able to mimic a human's mouse or keyboard use, and we get the same old challenge of a selection of images popping up as an additional test of our humanity.
The Verge article quite rightly bemoans the place we’ve arrived at — highlighting how difficult these ever-more-obscure challenges are for people with normal levels of vision, hearing and cognitive abilities. We just can’t compete with the robots at this game.
Don’t forget the disabled — we’re people too
But what about all those people who don’t have ‘normal’ abilities? People with a vision or hearing impairment or a learning disability are well and truly thwarted when it comes to CAPTCHAs that test the vast majority of humans to the very limit and beyond. After reading the article, I came away feeling that this very significant group (a fifth of the population and rising) deserve a mention at the very least — after all, they’ve been suffering in the face of these challenges far, far longer than those who do not have a disability or dyslexia (and have been locked out of many an online service as a result).
At the very heart of inclusive design is the ability to translate content from one format into another. For example, if a blind person can’t see text on-screen, it should allow the ability to be converted into speech (that’s how I’m writing this article). If someone can’t easily read a certain text size or font style or in certain colours, then it should allow for resizing or the changing of fonts and colours — this is all basic stuff that most websites accommodate quite well. Images should be clear and their subject easy to understand — and they should include a text description for those who can’t see it at all. Audio should be clear. All aspects of ‘Web Accessibility 101’.
The whole point of CAPTCHA challenges is to allow for none of these. No part of the challenge can be machine-readable or the bots will get in. Text can’t be plain text that can be spoken out by a screenreader for the blind — it has to be pictures of characters so excruciatingly garbled that no text-recognition software can crack it. Ditto with an audio challenge. Pictorial challenges must be so obscure that object recognition software can’t spot the distant traffic lights amongst the foliage etc, etc. It has ever been thus.
Today the road signs need to be obscured by leaves because the bots are better than ever at recognising them — but five years ago the images were still chosen to be just complex enough so as to thwart the bots of the day. And because the bots are using the same machine-learning AI as the assistive software used by disabled people to convert content into a form that is understandable to them, they were locked out too.
Did I mention? — CAPTCHAs are evil
So long as websites want to keep the bots from registering spam accounts or posting bogus comments, there will need to be some way for developers to detect and deflect their attempts. The use of CAPTCHA challenges, however, is not and has never been a fair (or even legal) one. It discriminates and disenfranchises millions of users every day.
So, whilst the article neglects to mention the significant segment of users most egregiously affected by CAPTCHAs, I’m hopeful that its main message — namely that this arms-race is rapidly reaching a point where the bots consistently beat humans at their own game — is a herald of better times to come.
As CAPTCHAs actually begin to capture the humans and let the bots in, then they begin to serve the opposite objective to that intended. They should then disappear faster than a disillusioned disabled customer with money to spend but wholly unable to access your services.
So what’s the alternative?
Companies like Google, who have long provided commonly-used CAPTCHA services, have been working hard on a next-generation approach that combines a broader analysis of user behaviour on a website. Called reCAPTCHA v3, it is likely to use a mix of cookies, browser attributes, traffic patterns, and other factors to evaluate ‘normal’ human behaviour — although Google are understandably being cagey about the details.
So hopefully by now you get the bigger picture. Hopefully you’re saying to yourself, “Ah, but will the clever analysis cater for users who aren’t so average or will they once again be excluded by not being ‘normal’ enough?” Excellent question — I’m glad you’re on your game and on-board.
For example, will I, as a blind keyboard-only user of a website, be flagged as a bot and banished? Will a similar fate befall switch users (like the late and much missed Prof Stephen Hawking) who use certain software settings to methodically scan through a page. Dragon users issue voice commands that instantly move the mouse from one position to another in a very non-human way. I could go on.
I hope you get the picture. Moreover, I hope that Google and other clever types working on the issue elsewhere get the picture too. They certainly haven’t to date.
More thought leadership | https://medium.com/digital-leaders-uk/ai-is-making-captcha-increasingly-cruel-for-disabled-users-1c0c994934ef | ['Digital Leaders'] | 2019-02-22 16:01:23.231000+00:00 | ['Artificial Intelligence', 'Online', 'Technology', 'Captcha', 'Accessibility'] |
These Are 5 Ways to Check Your Progress on Medium | Medium writers often try to check their progress using their earnings dashboards. But the earnings dashboard is not a good way to measure your progress, because Medium is not all about money. On Medium, every writer can share knowledge with the world and, in doing so, change people's lives.
Medium earnings do not reflect a writer's progress; progress depends on many other factors.
1. Are readers commenting on your stories
Are readers commenting on your stories? If they are, your work is being appreciated. Medium is a social media platform, and interaction between you and the reader is what creates engagement.
To build a relationship with readers, create engagement with them. Try to write content that entertains and teaches; readers will interact with your work.
The more comments you get from readers, the more your work is being appreciated and the better you are doing. Record the number of comments in a spreadsheet; this will help you track your engagement.
2. Pay attention towards readings to clap ratio | https://medium.com/illumination/these-are-5-ways-to-check-your-progress-on-medium-262c0834e852 | ['Mike Ortega'] | 2020-10-26 13:17:38.797000+00:00 | ['Writing', 'Self Improvement', 'Writer', 'Success', 'Writing Tips'] |
Community FAQ’s at WePower: November 2018 | The year may be drawing to a close but the WePower team is feeling more motivated than ever! We’ve just had a very successful live AMA session on Telegram where our community raised lots of questions and shared their interest in the WePower story. We are thrilled to have so many people supporting our journey.
Thank you, dear community!
For those of you who missed it, or wanted to revise the answers to all of the questions, we have put together our first Community FAQ’s at WePower blog…happy reading!
Questions from Live AMA on Telegram on Monday 19 November:
With the new business model applied how has the token use case changed, if it did change at all?
The use case of WPR did not change; it provides two features at this stage: access to priority auctions and, most importantly, access to the donation pool.
What happened with Arturas moving to a more passive role? Everybody is scared that there’s something bad happening in the team…could you explain?
Good question, nothing bad has happened. With around 60 contributors to WePower today, internal organisations need to change to be able to grow. It is very common that organisations outgrow positions and people and Arturas has moved to a different role. He is a co-founder of WePower and part of WePower but in a less active role.
How will private people have the access to the auctions if its an B2B model?
The goal has not changed and the platform will be open for all the participants (individuals will be able to participate), from the business perspective and to provide most benefits possible we refocused where most of the traction is today and where the results can be achieved this year with highest benefits to the community.
Hi Nick, you wrote that the use case did not change at this stage it provides 2 features. Should we be expecting any changes regarding these 2 features in the future?
The features will be expanded with additional functionality coming to the platform. These are focused on all of the customers, as the energy world is transitioning from an old utility-driven sector to a buyer-driven sector.
Will businesses that will have priority access to the platform B2B be required to hold WPR tokens as well?
Absolutely! If you do not hold WPR, you are unable to enter into the priority stage of the auction.
Have many business who intend to purchase electricity (ie users) signed up?
We are currently working with some of the top 100 energy consumers in Australia who are very keen to purchase energy through us and see the value we provide.
Is it correct that priority access is for big companies first and later in some years also for private people? Any specific market rules applies?
Actually the limitations for purchasing energy for your home as a small consumer are still the same as in the whitepaper — you need to be grid connected, there needs to be a way to deliver that energy to you. So if we are starting in Australia, then grid connected small users will be able to participate very soon — for other markets — at scaling speed. Also, KYC and credit scoring requirements will apply, like with any financial purchase.
Will businesses (B2B option) in regards to WPR token be subjected to exact same rules as individual investors? When you say “If you do not have WPR you are unable to enter in to priority stage of the auction” it does not exactly refer to corporates that will be engaging in the B2B (as opposed to individual investors)?
You are absolutely right, all rules are the same.
Is the contribution to the donation pool still at 0.9%?
Yes, donations are 0.9%, however the size of the auction has increased as not 20% but 100% are being sold right now through the platform. Initial model was based on the fact that we were able to technically sell only 20% of the energy through the platform. With the work the team has done we were able to move to selling 100% of energy from any renewable power plant that is ready to be built. 0.9% donation from the amount of energy the renewable power plant is selling through WePower.
Is my thinking right: if projections in white paper was prepared for only Spain and Italy, so with Australia can we reach our goals faster?
You are right, initial projections do not represent Australia.
Have many businesses who intend to purchase electricity (ie users) signed up?
Around 10 companies will be able to participate in the first WePower auction. We are oversubscribed by a dozen. And when we think about large companies, then we need to understand that the volumes and patterns of energy consumption are different and at scales which are difficult to imagine for normal people.
Is it possible to update the projections?
When the quiet season hits, we will look at updating it, for now the focus is on business development.
You started hiring in Germany and other EU countries. Can we expect WePower as an energy deliverer in EU in 2019?
Yes we have plans for Europe in 2019. But availability as an energy deliverer in a specific region will depend on a few things, for example having a smart meter installed.
Are you partnered with AGL and Origin?
We have partnerships in place that were announced during this year which have provided a lot of support and value to us but we are happy to engage with all possible market players.
What happened with plans on launching business in Spain?
During Summer time when Spain went on vacation there was a choice for us in either do nothing until Spain comes back from holidays or focus on Australia as a priority as it was winter time here. We refocused on launching the Australian market first and we are happy about it as it allowed us to expand the business case as well. A lot of work has been done in Spain and we are working in the background to launch Spain ASAP.
How will the Buyer companies get access WPR? I presume that there is an arrangement with them ? They will not come exchanges to buy the coin like most of us.
Buying of tokens will be done through exchanges.
In essence, is the main change in the business model who actually can purchase the electricity (businesses instead of retail/individual)?
The business model has not essentially changed, its only a change in the market entry strategy. Pieces of the puzzle are the same, they are just put together in a slightly different order. Companies have large volumes, and they are a good place to start simplifying the power purchasing process. The buy-process for companies and individuals is a bit different, but both will be developed.
When will you guys update officially the projections and the white paper with this amazing new information?
We are changing format, and will keep it in a form of business focus around the market and potential of the market we are in.
When do you think you will be in a position to share the timeline for the first auction?
We are working to make it happen as soon as possible, and we will happily share this news as soon as possible.
How was it on Utility Week in Vienna? Was it instructive?
EUW in Vienna was intense. WePower co-founder Kaspar has been going there every year for 5 years giving talks as a former energy grid executive. However this year, with WePower, both Nick and Kaspar had the the opportunity to share their bold vision for the energy transition and that was really exciting to see it resonate. We will be talking more about that vision as we move forward.
May I ask about Peter Diamandis? Why is he giving so little about WePower? Is he still with the team?
Peter is advising us on the long-term aspects of WePower’s vision for energy digitalization — in scaling exponentially beyond our first target markets — this is his specialty. As we move forward, this will be more visible. | https://medium.com/wepower/community-faqs-at-wepower-november-2018-ce4190c8daf0 | ['Greta Jonaitytė'] | 2018-11-23 10:03:09.965000+00:00 | ['Energy', 'Blockchain', 'Renewables', 'Renewable Energy'] |
Publish a Python Package like a Pro | If you are a Pythoneer/Pythonista, or at least familiar with Python, you will have come across pip, the vital tool we use to install a package before importing it in a program. Have you ever thought of creating your own package, or wanted to know how this process works? Then you have come to the perfect place. This article demonstrates the step-by-step process so that you can publish your own package right after reading it.
Steps for publishing a Python package on PyPI:
1. First, you need to create the Python script you want to publish as a package. In my case, for example, I am creating a simple script that returns a welcome message. The script I want to publish is the following.
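The snippet itself is not reproduced in this text, so here is a minimal sketch of what such a script might look like, consistent with the welcome() call used in step 12 (the exact message is an illustrative assumption):

def welcome():
    # Return a simple welcome message when the package is used.
    return "Welcome! Thanks for trying the welcomesample package."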
2. This file should be saved as <packagename>.py. In my case, my package name is welcomesample so the above script should be saved as welcomesample.py.
3. Now create a new folder and give it the same name as the package.
4. Now create a subfolder in the project folder named "src" and move all the programs and files required by the script above into it.
5. Now create a setup.py file with the following script below.
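The setup.py contents are not reproduced in this text, so here is a minimal sketch consistent with the attributes described below; the author name, email, URL and exact values are illustrative assumptions, not taken from the original post:

import setuptools

# Use the README as the long description that will be shown on PyPI.
with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

setuptools.setup(
    name="welcomesample",                      # distribution name on PyPI
    version="0.0.1",                           # first release
    author="Your Name",                        # illustrative
    author_email="you@example.com",            # illustrative
    description="A small package that returns a welcome message",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/your-username/welcomesample",  # illustrative source URL
    packages=setuptools.find_packages(where="src"),        # discover sub-packages, if any
    py_modules=["welcomesample"],              # the single module that lives in src/
    package_dir={"": "src"},                   # tell setuptools the code is under src/
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires=">=3.6",
)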
In the above script, the setup() call has the following attributes:
name is the distribution name of your package. It can be any name, as long as it contains only letters, numbers, _ and -, and it must not already be taken on pypi.org, so that you don't try to upload a package with the same name as one that already exists. In my case, the name attribute value is welcomesample.
is the distribution name of your package. This can be any name as long as only contain letters, numbers, , and . It also must not already be taken on pypi.org. as this ensures you won’t try to upload a package with the same name as one which already exists when you upload the package. In my case, the name attribute value is version is the version number of the package. If it is the first version you can specify 0.0.1. It should update for each update in the package.
is the version number of the package. If it is the first version you can specify 0.0.1. It should update for each update in the package. author refers to the name of the author who published the package.
refers to the name of the author who published the package. author_email refers to the mail of the author. This field is optional but important.
refers to the mail of the author. This field is optional but important. description refers to a short description of the package which is usually 10 to 25 words long.
refers to a short description of the package which is usually 10 to 25 words long. long_description refers to the documentation of the package which will be usually given in form of a markdown file which will be discussed in-depth in the next step.
refers to the documentation of the package which will be usually given in form of a markdown file which will be discussed in-depth in the next step. url refers to the source code of the package. It can be optional but important. Usually, we can put the Github repository URL of the package.
refers to the source code of the package. It can be optional but important. Usually, we can put the Github repository URL of the package. packages refer to a list of all Python import packages that should be included in the Distribution Package. Instead of listing each package manually, we can use find_packages() it to automatically discover all packages and sub-packages.
refer to a list of all Python import packages that should be included in the Distribution Package. Instead of listing each package manually, we can use it to automatically discover all packages and sub-packages. classifiers are used to classify our project in the PyPI filter field. If you want to know more about classifiers you can go to this link.
are used to classify our project in the PyPI filter field. If you want to know more about classifiers you can go to this link. python_requires is used to specify which python version your package works. For this, you need to check manually and fill this field.
is used to specify which python version your package works. For this, you need to check manually and fill this field. py_module refers to the package name. It is mandatory.
refers to the package name. It is mandatory. package_dir refers to the src file path. It is mandatory.
6. After you create the setup.py file then you need to create a README.md markdown file to write the documentation of your package. It should contain details regarding the short description of the package, how to use this package etc. For example, your README.md can look like this:-
# WelcomeSample This is a welcome package which prints a welcome message
The output of the README.md file looks similarly as shown below.
7. Create a Licence file named LICENCE. It’s important for every package uploaded to the Python Package Index to include a license. The best way for finding license templates is by picking a LICENCE Template from GitHub or by choosing from https://choosealicense.com/. For example, if you choose the MIT License format then your license will look like below:-
8. After the above steps are done then the directory structure looks similar to the below shown one. Please check whether your package directory looks similar.
tutorial
├── LICENSE
├── README.md
├── WelcomeSample
│   └── src
│       └── welcomesample.py
└── setup.py
9. Now you need to generate distribution archives for the package you have created. These archives are what allow the package to be installed on any OS. For this, run the following command in CMD to install wheel and setuptools.
python -m pip install --user --upgrade setuptools wheel
10. After successfully installing the above packages, run the following command to generate the source archive (.gz) and wheel (.whl) files for the package you want to publish.
python setup.py sdist bdist_wheel
The above command will generate two directories in your working package directory, namely dist and build. If you check the dist folder, you will be able to see two files: the wheel (.whl) and the source archive (.tar.gz). Both are very important for publishing your package. The directory structure now looks like this:
tutorial
├── build
├── dist
│   ├── welcomesample-0.0.1.tar.gz
│   └── welcomesample-0.0.1-py3-none-any.whl
├── LICENSE
├── README.md
├── WelcomeSample
│   └── src
│       └── welcomesample.py
└── setup.py
The reason we need both the wheel and the source archive is that newer pip versions prefer wheel (built) distributions but will fall back to the source archive when needed. Including both files allows our package to be installed on any platform.
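As a side note (not from the original article), newer packaging workflows often generate both files with the build frontend instead of invoking setup.py directly; the roughly equivalent commands are:

python -m pip install --upgrade build
python -m build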
11. Now we can install the package locally in the system by using the following command.
pip install -e .
12. We can test the package locally in your IDE by writing the following code and running it.
import welcomesample
print(welcomesample.welcome())
The output of the above code is shown below.
Hurray! It works fine 😎.
13. Now it's time to ship the code to PyPI. Before that, you should first ship your code to Test PyPI, a platform similar to PyPI that is used for experimenting and testing. You need to register an account on both Test PyPI and PyPI to proceed further.
14. After creating accounts on both platforms, we need to install twine, a package that helps with uploading. For this, run the following command in CMD.
python -m pip install --user --upgrade twine
Once twine is installed, run the following command to ship the code to Test PyPI.
python -m twine upload --repository testpypi dist/*
When we run the above command then it asks you for a username and password. Enter the credentials by which we registered an account in Test PyPI. Once we enter these credentials your package is successfully installed in Test PyPI. The output of the above command looks like this shown below. | https://medium.com/analytics-vidhya/publish-a-python-package-like-a-pro-bbda96e68bfb | ['Sai Durga Kamesh Kota'] | 2020-12-22 16:31:43.739000+00:00 | ['Programming', 'Python3', 'Python', 'Package'] |
3 Simple Wealth Building Mindsets for a Pandemic | Catalytic events are historical occurrences that cause important changes. How can you build wealth in the middle of one?
Photo by JESHOOTS.COM on Unsplash
Life is filled with uncertainty, especially in times like this!
We find ourselves in the middle of a pandemic that has triggered an economic recession, escalated racial tensions and increased political instability. All of these factors have amplified financial anxiety which has inevitably taken a toll on our collective physical and mental health.
Analysis by the Institute for Policy Studies shows that since the pandemic started, a staggering $6.5 trillion in household wealth has disappeared while the collective wealth of billionaires has surged by more than $584 billion! Federal Reserve Chair Jerome Powell noted that "This is the biggest economic shock in the U.S. and in the world in living memory!".
When catalytic events like a pandemic happen, they level the playing field and expose hidden opportunities, for those who can identify them.
As a career consultant, I have spoken to hundreds of unemployed and underemployed job seekers in the last few months who are desperate to provide for their families, put food on the table and have some measure of financial security. As a new dad myself, working towards the financial security and stability of my young family has been my biggest priority.
In this article, I will share with you my mindset, reasoning, and strategies to successfully build wealth in these strange times based on my personal experience and what has worked for me.
Catalytic Events & Timing
Catalytic events are historical occurrences that cause important changes.
The 9/11 terrorist attacks of 2001, the financial crisis of 2008 or the current COVID 19 pandemic are classic examples of catalytic events that are sudden, catastrophic and one for history books. Unfortunately, this is how history moves, never smoothly but always in bumps. When catalytic events like a pandemic happen, they level the playing field and expose hidden opportunities, for those who can identify them.
The natural reaction that most people have to a catalytic event is fear.
Fear is a natural emotion induced by perceived dangers or threats, this emotion causes physiological and ultimately behavioural changes aimed at fleeing, hiding or protecting one’s self from the perceived danger. Fear has its evolutionary role in protecting us, but giving in to fear is far from a winning strategy if you want to maximize hidden opportunities to build wealth.
To overcome fear, you have to visualize this pandemic as an opportunity to evolve and an excellent occasion to take initiative and “force the issue”.
We are in a crisis; a time of intense difficulty, mortal danger, confusion, anxiety and change.
“Moments of crisis show us that the ways we’ve been doing things actually hinder our existence,” says Michele Moody-Adams, a professor of political philosophy and legal theory at Columbia University. Think back to a pre-coronavirus world, when you’d ask why something was done in a certain way, the answer was often, “Because that’s the way it’s always been done”.
The rules are now changing, today we are now being forced to deeply consider everything and redefine what is normal. Nothing is no longer taken for granted, people are suddenly open to new ways of doing things from how we work, communicate and commute to how we shop and socialize. The rules are changing and this new reality ultimately creates an abundance of hidden opportunities for anyone who is looking.
Crisis creates new problems that you can solve.
During this pandemic, I have identified needs in areas of career and business coaching that has proven lucrative for me. People are willing to pay me to coach and guide them as they try to get a new or better job in this recession. Small business owners are happy to pay me to coach them as they try to grow their businesses and increase their revenue in an ever-competitive market.
Don’t give in to fear, choose to see this crisis as an opportunity to create a solution for one of the many problems that this pandemic has created.
Trust me, there are many problems out there in your local community, your industry, your country, and even across the world that you can offer solutions to. All you have to do is to observe what people around you are struggling with.
Photo by Sharon McCutcheon on Unsplash
The law of Value
Your true worth is determined by how much more you give in value than what you take in payment — Bob Burg
In The Go-Giver by Bob Burg, Burg shares the simple yet mind-blowing concept of giving more in value than what you take in payment. The secret to getting ahead, building wealth and maximizing opportunities is by putting other’s interests first and continually adding value to others.
Once you have gotten over your fear of the pandemic and identified potential opportunities to create solutions, you have to focus on adding more value to others than what you ask in payment. This doesn’t mean that you are providing free services; rather, it means that your solution adds much more value in comparison to the payment you are asking. This is how to build loyal fans and customers that will support your solution and be an advocate.
For example, In the early days of my coaching business, I routinely reviewed many job seekers’ resumes at no cost and often provided free career consulting and access to my online course for those who truly needed it. I only asked for their honest feedback on the effectiveness of my methods and for them to share their experience with other friends who could use some coaching. They received more in value in comparison to the payment I requested.
I also did something similar when it came to consulting with small business owners, I focused on adding value to their business by helping them to increase their revenue and performance before asking for payment. This created true superfans who not only believed in my coaching but also became evangelists for me.
Furthermore, your compensation is directly proportional to the number of lives you touch. Your income is determined by how many people your solution serves and how well you serve them. It is not just enough to create solutions, you have to find ways to offer your solution to the most amount of people to become truly successful. The more the number of people you can help, the higher your earnings.
Taking Action & Managing Risk
Maslow’s hierarchy of needs is a motivational theory in psychology comprising a five-tier model of human needs, often depicted as hierarchical levels within a pyramid. Needs lower down in the hierarchy must be satisfied before individuals can attend to needs higher up. From the bottom of the hierarchy upwards, the needs are physiological, safety, love and belonging, self-esteem and self-actualization.
Using this concept, we can apply it to finances and wealth creation. As you work towards overcoming your fear of this pandemic and you start to identify problems around you that you can provide solutions to, you have to manage your risk. Starting a side-hustle outside of your regular job is the best way to begin this journey and if there is any time to start, it is now! To learn more about side hustles, read Zulie Rane’s powerful article on the 3 steps she took to earning 6 figures at 25 years old!
Your priority as it relates to finances should be to make enough money through your side hustles that you can save for emergencies. Emergency funds cover unexpected expenses and the essentials (rent, food, etc.) in case of a loss of income. Once enough savings for 3–6 months has been saved, you should then focus on buying insurance products to secure you against the loss of health, property, employment or family. There are different types of insurance products for this.
Your next priority should then be paying off high-interest rate debts and digging yourself out of any holes. Debts like this include credit card debts, lines of credit etc. After this, you can then focus on saving and investing for retirement, education and any other essentials. This is how to achieve financial security and peace of mind. At this point, your side hustle(s) will help you achieve financial self-actualization and freedom.
In conclusion, the pandemic is a leveller and is changing the rules of what we define as normal. Don’t be afraid of this change, embrace it. Look for problems you can provide solutions to, it doesn’t have to be rocket science, it could be as simple as putting your woodworking or gardening skills to work, freelancing or teaching an online class. Then, grow your side hustle. | https://medium.com/the-innovation/3-simple-wealth-building-strategies-for-a-pandemic-82af9444dae9 | ['David Owasi'] | 2020-10-13 16:45:52.888000+00:00 | ['Work', 'Entrepreneurship', 'Self', 'Money', 'Wealth'] |
Creating and mounting an EBS volume to a Windows Amazon EC2 Instance | If you’re reading this post you probably already know about Amazon Elastic Block Store, aka Amazon EBS, one of the many services provided by the Amazon Web Services (AWS) ecosystem.
An EBS volume can range from 1GB to 1TB and can be mounted to an Amazon EC2 instance as a device. Each EBS volume can be attached to only one instance at a time, but multiple volumes can be attached to the same instance. To mount an EBS volume to an instance you have to first log into the AWS Management Console and follow a few simple steps.
Creating a new volume
1. Once inside the AWS Management Console, click the Volumes link under Elastic Block Store in the Navigation panel:
2. Click the ‘Create Volume’ button:
3. In the pop-up form, choose the size of the EBS volume you want to create and select the availability zone (remember, the EBS volume needs to be in the same availability zone as the instance it needs to be attached to):
Click “Create” to create a new EBS volume.
Attaching an EBS Volume to an Instance
1. Once the EBS volume is created and its status changed to ‘available’, select it, and click the “Attach Volume” button:
2. In the following pop-up, choose the instance you want to attach the volume to:
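If you prefer to script the create-and-attach steps instead of clicking through the console, the boto3 Python SDK can do roughly the same thing. This is a minimal sketch, not part of the original post; the region, availability zone, instance ID and device name are placeholder assumptions you would replace with your own values:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Create a 10 GiB volume in the same availability zone as the target instance.
volume = ec2.create_volume(Size=10, AvailabilityZone="us-east-1a")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the new volume to the instance as device xvdf.
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",  # placeholder instance ID
                  Device="xvdf")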
Mount the EBS Volume as a drive
1. Now that the EBS volume is attached to the instance, remote connect to the instance, go to Start –> Run, and enter “diskmgmt.msc” to start the Disk Management tool:
2. In the Disk Management tool, right click on the new disk, which is “Offline” right now and bring it “Online”:
3. The disk then needs to be initialized, right click again and select “Initialize Disk”:
4. Choose whether to initialize the disk with MBR or GPT:
5. You’re almost there! Right click on the unallocated space in the initialized disk and create a “New Simple Volume”:
and follow the steps in the wizard.
6. A 10GB volume will probably take 5 ~ 10 mins to format, and once it’s done you’ll be able to see the drive:
Parting thoughts…
As mentioned in Membase’s documentation here, when using an Amazon EC2 instance as a Membase cache server you can use an EBS volume to alleviate the risk of losing your data when an instance fails because the EBS volume can be attached to another instance to effectively restore the node.
Another fun fact about EBS and Amazon EC2 is that Windows 2008 instances are backed and run directly from an EBS volume because of the sheer size of the images themselves! Which is why for every Windows Server 2008 instance you are running you’ll find a 30GB EBS volume attached to that instance, and every AMI you create will have a matching EBS snapshot. And the implication of this is that, if you’re using a Windows instance to run Membase, you won’t need to create additional EBS volumes to store the Membase sqlite files. BUT, there’s a catch, if you terminate the instance manually or through a scaling down event the EBS volume will be deleted too, so be sure to take this into consideration when deriving your strategy for deploying and scaling your Membase cache clusters. | https://medium.com/theburningmonk-com/creating-and-mounting-an-ebs-volume-to-a-windows-amazon-ec2-instance-e8897cb6bfa4 | ['Yan Cui'] | 2017-07-03 20:55:15.678000+00:00 | ['Cloud Computing', 'Programming'] |
Stashed. | What’s the difference between self-preservation and hiding?
Between private and public?
Today, it seems hard to tell.
One person’s safety will end up another’s hell.
Stashing.
Is it damaging?
Or is it a shield?
Is it an infliction?
Or is it an accommodation?
Being Stashed.
Is it mistreatment?
Or is it protection?
Is it wrong?
Or is it right?
Deliberate or unintended, it’s complicated.
If one missed the other, would they tell?
If one wanted the other, would they call?
If one needed the other, would they ask?
The economy of love can lead to an abundance of pain. | https://medium.com/etc-magazine/stashed-7f10c0c9bc15 | ['Keith R. Higgons'] | 2020-11-29 17:37:39.915000+00:00 | ['100 Naked Words', 'Fiction', 'Writing', 'Relationships', 'Love'] |
Hoopers: Website Redesign | Creating a new design for a basketball community platform
After intense weeks of hard work at the Ironhack UX/UI Design Bootcamp, it was time to prepare for the final project.
The students were introduced to real clients facing real problems.
Among all the business, Hoopers got my attention. They are a basketball community that works as an athlete-centered media platform. They also map and transform basketball courts, as well as creating specialized products for the community. | https://medium.com/swlh/hoopers-website-redesign-b8ee353b12e4 | ['Guilherme Torres'] | 2020-07-01 21:10:27.888000+00:00 | ['UX Design', 'Design', 'Ironhack', 'UI Design', 'UX'] |
How to Budget as an Emotional Spender | How to Budget as an Emotional Spender
Keeping track of your finances without pulling teeth
Photo by Diane Helentjaris on Unsplash
Did you know that 26% of Americans would rather get a cavity filled than create a budget?
Creating a budget is an unpleasant emotional rollercoaster for many, but for many right now, it’s a necessity. Between the recession, unemployment, and the job market, many people are forced to re-evaluate their finances.
But for many people, budgeting isn't hard because they have to look at numbers on a page: it's hard because there's an emotional aspect to their spending. So to create a budget without it feeling like pulling teeth, you need to do an emotional audit as well as a financial one.
And to start with that, let’s talk about a budget meme.
Emotional spenders
This meme seems like a dumb conversation, but it shines the light on the truth about money for many people.
It seems super easy to budget for this person: there’s something ‘non-essential’ on the budget, and it should be more than easy enough to reduce spending on it.
Except that’s not how everyone thinks.
Have you ever met someone who spends far too much money on seemingly random things? They spend $200 on new clothing. Or $2000 on new electronics.
Or, perhaps, $3.12 million on a baseball card.
For these people, these purchases are not merely a transactional thing: there may be a deep emotional attachment to the things that they buy.
To give an example, imagine you had a virtual night out with your friends. You all ordered food from the same place, had a couple of beers, and maybe each rented the same movie online multiple times.
The logical spender would see the cost as maybe $40 and could easily see a couple of places to cut spending.
But the emotional spender? They see a (virtual) night out with friends. To cut costs on this event might mean to worsen this experience for a couple of bucks.
For many people, money is not about finances: it’s about the emotional states that it’s associated with.
As a result, money conversations are rarely about the numbers. For many people, it seems wrong that numbers lined up on a page represent dreams, goals, or even safety.
For them, dreams and goals aren’t meant to fit on a spreadsheet, so conversations involving what matters are not about logic.
So how do you budget if you have that mindset? The trick is to do an emotional audit.
Evaluating your emotional attachment
Let’s say that you’ve calculated that you have $75 to spend on a specific day (learn how to calculate your daily number here).
Would you be willing to spend $10 on groceries? Most people would answer yes: after all, they would need to eat.
On the other hand, would you be willing to pay $20 for new books? Some people might, while others might not.
This is the process of emotional auditing, a concept brought up in Living Rich by Spending Smart, by Gregory Karp.
He recommends going through your expenses and evaluating the emotional aspects of how each expense makes you feel, with a 1–3 rating.
1 means that it’s either crucial to you somehow or would be a significant blow to your emotional state
2 means that it’s nice to have, but it’s something that can be put on pause
3 means that you have no emotional attachment to the expense
You should start by looking at all of your annual, quarterly, or monthly subscriptions and memberships.
If you’re unsure how to keep track of all of these things, it’s very likely that your bank or an application like Mint has taken your credit card transactions and organized them into specific spending categories.
As you go through these charges, don’t decide what to cut at the moment: instead, just mark how these memberships make you feel on an emotional level.
After looking at your recurring expenses, then look at spending categories that have been automatically organized to see what you likely spend in a month for that category.
After that, you can finish off by looking at individual transactions.
Once you’re done with that, then look over what score you gave each re-occurring expense and figure out ways to cut back on things that you don’t care much about.
For example, if you rated your monthly gym membership as a 2, perhaps you can either put it on pause or switch to a pay-per-visit plan.
These can save you money but it doesn’t put as big an emotional strain as cutting it off entirely.
Sorting out your emotions and money
According to Dave Ramsey, financial management involves 20% head knowledge and 80% behavioral change.
This is especially important for emotional spenders: for many, these numbers can bring joy, comfort, or a host of other emotions.
So rather than trying to force your behavior to change drastically and suddenly, bringing negative emotions such as bitterness, or depression into the mix, figure out what expenses affect you the most emotionally.
If you’re able to come in $200 under budget, does it matter, number-wise, what you spend on each category?
Not really.
All of the advice that is posted online about guidelines is what the average consumer spends on a category if they don’t feel very strongly about something.
So don’t cause yourself more undue stress and anxiety on the rollercoaster that is budgeting. Make sure to take yourself and your needs into account.
I write about Productivity, Psychology, and UX every week. If you would like to become better at UX, you can check out my online courses about UX Research Planning and Design Communication. | https://kaijzen.medium.com/how-to-budget-as-an-emotional-spender-69b30066fbf6 | ['Kai Wong'] | 2020-07-07 11:12:09.406000+00:00 | ['Money', 'Budget', 'Emotions', 'Anxiety', 'Productivity'] |
The Coup in Caracas | The Coup in Caracas
How I reported on days of anarchy in Venezuela as a foreign correspondent.
Image by 272447 from Pixabay
Wednesday, April 10, 2002
The general, the breast of his uniform jacket striped with ribbons and medals, stood ramrod stiff on the TV screen as he declared he was withdrawing his allegiance to the commander-in-chief of the Venezuelan Armed Forces, President Hugo Rafael Chávez Frías.
A chill pattered down my spine. I dashed to my laptop and punched out a quick email to the foreign editor at the Miami Herald. Given the growing political instability in Venezuela, including a recent string of high ranking military officers denouncing the controversial leftist president, the long-rumored coup d’état against Chávez had to be under way.
Based in Caracas, I was a freelance foreign correspondent, reporting from around Latin America for Time, Business Week, The New York Times, Financial Times, Houston Chronicle and many other media outlets.
Venezuela was a sleepy country when I arrived in 1995, ruled by a doddering octogenarian president, Rafael Caldera. Then Chávez exploded onto the national scene. As an Army lieutenant colonel, he’d led a failed coup attempt in 1992 and had been imprisoned for treason. After Caldera pardoned him, Chávez had spent several years crisscrossing the country, spreading his message of social justice for the 58 percent of the population that was poor in one of the world’s biggest oil-producing nations.
Image by OpenClipart-Vectors on Pixabay
Elected by a landslide, Chávez took over as president in 1999 and embarked on a radical project to stamp out corruption and transform the country into a workers' paradise. Venezuelans either adored him or despised him. The poor masses saw him as a savior, while the middle class and the elite, who held the reins to the country's money and power, viewed him as a threat to their privileged lives.
While talk of a coup had been circulating for months, with business leaders even holding a press conference to declare their plan for a transition government, Chávez had pooh-poohed the prospect of it actually happening. “Who plans a coup drop by drop?” he said.
But the opposition had grown increasingly emboldened.
Whenever Chávez appeared on television to give a speech, often interrupting the primetime telenovelas that Venezuelans were addicted to, people would lean out of their windows and balconies in the tall apartment buildings in east Caracas, the city’s affluent side, and bang pots and pans. It was called a cacerolazo, a form of popular protest.
As the clanging cacophony bounced off the concrete, I would hang out of my window and watch, fascinated and moved by this show of profound political discontent.
The day before the general’s televised defection, the opposition had called a general strike to protest Chávez’s firing two days earlier of top executives at the state-owned oil company Petróleos de Venezuela, S.A., known as PDVSA. Business owners, who were largely against Chávez, refused to open their stores, offices, schools and factories. Oil workers slowed down oil production, the nation’s lifeblood. Their aim was to paralyze and destabilize the country to the point where the masses, unable to withstand the loss of their paychecks, turned against Chávez, forcing him to resign. It had worked before in 1958, forcing dictator Gen. Marcos Pérez Jiménez to step down and flee the country.
I made sure to stock up on food and water at my local supermarket, Excelsior Gama in Los Palos Grandes, before the strike. No one knew how long it would last. It would depend on how much of a loss business owners could stomach before reopening.
On the first day of the strike, I walked the streets to get a feel for how it was going.
The normally clogged artery of Avenida Francisco de Miranda was deserted. The bustling commercial district of Chacao was sleepier than a Sunday. Bakeries, usually chockfull of people getting their morning negritos or marrones (espresso coffee without or with milk) and cachitos (croissants), were shuttered with their iron grilles padlocked to the ground. A TV helicopter showed a growing number of tankers like dots in the sea off the coast, unable to load their cargoes of oil as refinery and dock workers honored the strike.
But in the city’s center and west side, a Chávez stronghold of blue-collar neighborhoods and slums, it was much like a normal day. Street vendors hawked their ware and buses roared along the streets, snorting great plumes of black exhaust and riders hanging out the doors.
The opposition declared the strike a success and ordered it to continue. Riding the crest of swelling anti-Chávez sentiment, the opposition scheduled a massive protest march through Caracas that Thursday, the third day of the strike, and the day after the general announced denounced his defection.
That bright, sunny morning of April 11, hundreds of thousands of people turned out to march against Chávez, forming a human river stretching at least a mile. It was an incredible sight of popular dissent. Again I felt moved by the show of solidarity in dissent, of people fighting for their country. I went to the march to report on it and walked with it for a while.
In true fun-loving Venezuelan style, it was a protest-cum-party. The air was full of jubilation, a celebration of unity in opposition to Chávez with everyone decked out in red, yellow and blue, the colors of the flag. After I’d gathered enough quotes, I went home to file my story for the Miami Herald and followed the march on TV.
A couple hours later, the marchers reached the presidential palace, and snipers fired shots into the crowd and people dropped to the ground. I watched in amazement as pandemonium broke out. Protesters ran for cover. Then Chávez supporters, who were stationed on an overpass above the march, started firing on the crowd, thinking that the demonstrators were attacking them.
I was transfixed to the disaster unfolding on live TV.
I tore myself away to hurriedly file an update to my earlier story. National Guard troops arrived on scene, clashing with the protesters and clouds of tear gas soon enveloped the street. The TV screen suddenly split — one half showed Chávez, who was giving one of his longwinded speeches about a totally unrelated subject, while the other showed the chaos right outside the wall of Miraflores, the presidential palace. The contrast effectively created the picture of an oblivious, out-of-touch president.
Several hours later, after Chávez had finally been informed what was going on and called for calm, the fighting tapered off as dusk fell. Nineteen people were dead and hundreds injured. Then small tanks rumbled out of an Army base just south of Caracas and surrounded the palace.
TV cameras showed a cohort of generals entering, and then the nation waited to see what was going to happen. I planted myself in front of the TV and dozed on and off, waiting for developments as the night wore on.
About two a.m., a general loyal to Chávez announced that the president had resigned and a civilian-military junta was taking over the government.
There wasn’t much to do at that hour, so I got a few hours’ sleep, then in the early morning I ventured out onto the street. It was surreally still. The collective shock over the previous day’s tragic events hung palpably in the air, like an eerie, ghostly pall.
As the nation awoke, we became aware of several key facts via TV: Chávez had been arrested. No one knew where he was or even if he was still alive. Borders were sealed. Airports closed. A very ugly side of human nature reared up as anarchy took over in the absence of rule of law. Lynch mobs formed and hauled members of the Chávez government out of their homes. Stores were looted. Cars were set on fire. Streets were blocked with piles of tires set ablaze by hooligans wearing T-shirts over their heads to disguise their identities.
My phone was ringing nonstop from media wanting on the ground reporting so I had to get out on the volatile streets. I found an enraged mob surrounding the Cuban Embassy. Chávez opponents hated Chávez’s affinity for Castro and all things Cuban. People were climbing trees to get over the embassy’s high wall and removing manhole covers in the sidewalk to cut off electricity and water to the building. They wanted to force out the Cuban diplomats but their ultimate goal was unclear — to arrest them, beat them? It was simply vengeance of the winners.
Political events were moving with stunning fluidity. A civilian government was set up, comprising largely wealthy business leaders, and a new president, Pedro Carmona, the head of the business federation, swore himself in. Carmona’s first act was to dissolve congress and the Supreme Court. We now had, in effect, a dictatorship, and I had another front-page story to write. That night I stayed up late writing stories and filing updates. I finally fell asleep on the couch in front of the TV.
The next day, Saturday, however, the tide turned. Chávez supporters stormed the streets, demanding to see his resignation letter. The new government, recognized by the United States, had to admit it didn’t exist. The head of Congress noted that without that letter, the opposition had simply staged a coup and called for constitutional order to be followed: if the president was absent, the vice president was legally bound to take over the presidency. In the absence of the vice president, the head of Congress was next in line as president. Other Latin American nations backed that stance.
Chávez supporters retook control of the government-owned TV station and urged more people to flood the streets demanding to know where Chávez was. By the end of that day, as domestic and international opposition to the illegal power grab mounted, the new government had crumbled and the de facto president Carmona had escaped to Colombia. The vice president came out of hiding and was sworn in as president.
Thanks to some loyalist troops, Chávez was discovered being held prisoner on a small island off the coast. By the wee hours of Sunday morning, Chávez returned by helicopter to the presidential palace, where thousands of his supporters had gathered.
He emerged victorious onto a balcony to resounding cheers.
I was drained. It had been five days of functioning on sheer adrenalin, filing updates upon updates to keep up with events. I had slept just a few hours every night. I lost a couple pounds because I simply hadn’t had time to eat.
Things calmed down over the next few weeks as order was restored, and then the truth of the overthrow attempt filtered out: a group of hardline right-wing military officers had placed snipers on the rooftops to massacre the anti-Chávez marchers.
They knew that the president would be blamed for the deaths and that would push the military high command, sensitive to any allegation of human rights violations, to force Chávez out. It was pursuit of power at its most cold-blooded.
Today, as we confront the global scourge of COVID-19, the events in Venezuela serve as a powerful reminder of what can happen in a society that becomes deeply divided by political ideology, where blame is sought over unity, and where sacrificing lives takes precedence over preserving lives. It is a reminder that governments must serve all people. | https://medium.com/illumination/the-coup-in-caracas-963192be69e3 | ['Christina Hoag'] | 2020-05-04 21:25:09.618000+00:00 | ['Venezuela', 'Journalism', 'Modern History', 'Coup', 'Chavez'] |
The Three Phrases of a Money-Hungry Medium Writer | This is how you can become the writer you want to be without selling your soul
Photo by NeONBRAND on Unsplash
Lately, I’ve been taking a break from Medium to finish writing my book and try ghostwriting. This has given me some time to think about the habits of other writers on here, particularly new writers.
This is not a hit-and-run on all new writers. If the shoe fits (i.e., it pisses you off), I guess it’s about you.
There are many new writers who are very talented and I really enjoy them personally and love reading their writing. This is because they are awesome people, who are humble and actually care about our craft.
To them, our craft is not just another way to move up the ladder of life: It’s not a get-rich-quick scheme.
The craft of writing is their life.
I have been writing on my own for a while and I’ve been on Medium since 2019. I still consider myself a newer writer with a lot to learn about this platform.
There are many secret inner operations on this platform. You never know what the curators will like, what the Medium publications will accept, or who the next new “star” on the platform will be.
Honestly, most of us who have been on here for a while gave up fairly quickly on finding that information out because we care about writing, not about visibility and making money. | https://medium.com/the-innovation/the-three-phrases-of-a-money-hungry-medium-writer-775b86571a59 | ['Amy The Maritimer'] | 2020-08-28 06:49:23.071000+00:00 | ['Life Lessons', 'Writing', 'Writing Tips', 'Writers On Writing', 'Personal Development'] |
How I Earn Over $350 a Day with a 4yo on My Lap | My laptop is a bit on the slow side. It's this pretty (as in cute) but cheap HP thing I bought off QVC back in 2012 when I told myself I was going to become a writer and change my whole life.
Okay, so it took a few years for me to actually get started.
Back then, I thought I was going to write a novel, and I don't know--maybe I'll still finish it one day. But ever since I became a single mom in 2014, my creative brain seems to work much differently.
So I've built my writing career upon personal essays right here on Medium--beginning last April.
Prior to going all-in on Medium this past December, I was a writer for a social media management company for nearly 4 years. For the first several months of that gig, I worked from my phone and my laptop.
But I found laptop usage with an infant to be rather cumbersome, and figured I would need to get a tablet instead. Not that I'd ever used one before or anything like that. Still, I was fortunate enough to receive a Samsung Galaxy Tab A as a gift, and I quickly did develop a preference for the tablet over my laptop.
I'm still using that same old tablet, and I don't think I've dug out my laptop for at least six months now. Once in a while, I do need to use it, but that's a rare occasion indeed.
Several Medium writers have been surprised to hear that I write on the platform using my tablet and even my phone, and they have expressed the desire to learn how I do it. Particularly since I've begun making good money here.
Over the past two weeks, I've earned slightly more than $350 a day writing from the comfort of my phone and tablet. Keep in mind it took months of (sustained) hard work to get to this point.
Estimate for the first 2 weeks of March
It's actually pretty simple and there are no true secrets. Keep in mind, though, these are Android tips.
I utilize BOTH the (desktop) website and the app... on my phone.
Personally, I’d say that both the app and website are pretty straightforward. What you see is what you get.
On my Android phone, writing on the app looks like this:
And on my tablet, writing on the website looks like this:
Why yes, I DO have about 100 tabs open at all times....
When I use the website, it is always the desktop site.
From a mobile device, the default is always the mobile website. You’ve got to override that default for more functionality.
I use a Chrome browser on both of my mobile devices. To turn on the desktop site, click the three dots at the right side of your browser.
Clicking on those dots will reveal a drop-down menu. Check the box that says “desktop site.”
It's easier to remove an image with the app.
All you have to do is click the picture, and then the X that appears in the upper right corner. Easy.
But it’s easiest to input an image caption from the web.
At this time, the app doesn’t allow you to put in a caption, but the website does.
BTW, there's no need for ugly links.
So... I see a lot of writers here who copy and paste links in their stories without actually hyperlinking them. It makes me sad.
If you're on the mobile app, you only need to highlight the word(s) you wish to hyperlink. Then click the little link icon.
But if you’re looking for a prettier link? Then use the website. All you have to do is hit “Enter” to make a new line, paste the link, and hit “Enter” again.
You’ll end up with something pretty like this:
Here are a couple of other formatting tricks you'll need for the desktop site.
When you write on Medium from the website, there's a little pop-up box that only comes up when you select a word. You can still access that pop-up when you're using a mobile device. You just need to hit another button like the symbols or uppercase arrow after you select the text.
There are the beautiful formatting options you've been looking for...
Also, when you begin a new line by hitting "Enter," you'll see a plus sign inside a circle. Like this:
Go ahead and click on it. You'll find options to add video, search for Unsplash images, insert a page break and more.
I don’t type, so much as Swype.
Both my tablet and phone keyboards are set to “Swype.” That means I trace (or drag) my finger across the keyboard to spell each word.
I’d say that “swyping” goes quicker for me than typing.
I only use my tablet when I’m at home in my “work mode.”
That means the bulk of my stories are written with a stylus from my tablet. But anytime I’m on the go or trying to write a story from bed, you can bet I’m using my phone and swyping with my finger.
Writing in bed either after my daughter goes to sleep or before she wakes up happens at least 4 times a week. Sometimes I wake up in the middle of the night with a story idea, or the mojo to Swype something out for an hour or two.
If you're not sure about something on Medium, don't be afraid to experiment.
The truth is that I got myself set up on Medium last April without any help. I'm no super duper techie or even web savvy. Yet, I figured out how to write on Medium from my phone and tablet all by myself. That means you can do this too--you've just got to be able to do a little trial and error.
When I first started up on Medium, publishing from the app didn't work if I wanted to put a story behind the paywall and earn money through the Partner Program. So I only published locked stories through the website.
These days, you can publish locked stories from the app. Like most things on Medium, everything is subject to change.
The hardest part of getting my work done is just the personal stuff.
Honestly, Medium itself makes writing and publishing pretty damn easy. It's a helluva lot easier for me to write on Medium, build an audience, and earn a decent living than anything else I've done.
And as a single mom, having the ability to work from my phone or tablet is a huge win. My daughter is turning 5 soon, but she is still very much attached to me. It's not unusual for me to work from my tablet or phone with my daughter on my lap.
As you might expect, the interruptions are plentiful (right now, she's playing with dolls but stopping every few minutes to tell me all about it), and the hours are often long. I'm alright with that, and I believe in the career I'm building here on Medium.
Besides, I'm proud to show my daughter the value of going after your dreams.
What about the rest of you--do you use mobile devices to write on Medium? Do you have any questions about using the app or accessing the website from a phone or tablet?
Much thanks to Luke Rowley, Nate Miller, Nick Wignall, and Danny Forest on this one! | https://medium.com/awkwardly-honest/how-i-earn-over-350-a-day-with-a-4yo-on-my-lap-458a5b8052 | ['Shannon Ashley'] | 2019-03-19 19:53:34.575000+00:00 | ['Medium', 'Parenting', 'Writing', 'Partner Program', 'Success'] |
Bragging about plastic bottles for “all natural” beverages. Really? | Which of these three phrases does not belong with the others on the label of a Snapple “Takes 2 to Mango Tea” my daughter recently bought?
“All Natural”
“Made from the BEST stuff on EARTH!”
“Plastic Bottle!”
By the way, I didn’t add the exclamation mark to the last phrase. For some inscrutable reason, the company that has long marketed itself as ecologically conscious has decided to brag about switching from glass to plastic containers.
Perhaps anticipating that plastic bottles showing up in the intestines of dead whales isn’t a good look, Snapple assures the world that their containers are “made from a non-BPA (bisphenol A) material and are 100 percent recyclable, just like our glass ones.”
May I please take this opportunity to nominate the word “recyclable” for the 2019 Most Misleading Word In The English Language award. Saying a plastic bottle is “recyclable” is not that different from declaring I’m “eligible” to win the Nobel Prize for Literature.
The recycling rate for glass containers is bad enough in the United States: 33 percent overall, though much higher in states with bottle bills. (For comparison, it’s around 90 percent in Switzerland, Germany, and some other European countries.)
But the plastic bottle rate is even lower … and it’s getting worse. According to the Association of Plastic Recyclers and the American Chemistry Council, their recycling rate was 29.3 percent in 2017, down from 29.8 percent the year before. And that’s before Malaysia became the latest developing country to turn back shipments of plastic waste, saying last week, in the words of its environment minister, it would “not be a dumping ground to the world.”
The real reason for Snapple’s shift is hidden in plain sight on their “Real Facts” web page: Plastic bottles are “easier for distributors and retailers to handle.”
I get it. Plastic bottles bounce instead of crack when dropped from a forklift, and that’s good for Snapple’s bottom line. It even allows the company to charge less — not that I’ve noticed a price change since the plastic bottles were introduced last year.
But there are certain short-term financial costs that we all need to bear for our own long-term good. And that goes double for a company that has staked its reputation on being “natural.”
So, Environmental Action’s Thneed Trophy for June 2019 is dubiously awarded to the Snapple Beverage Corp. for its very-unnatural plastic bottles . . . and, perhaps more disappointing, for bragging about them. Exclamation points matter.
* * * * *
The Thneed Trophy is awarded monthly by Environmental Action to a product that exemplifies the spirit of The Lorax’s “thneed”. It’s the thing that everyone wants but nobody needs, for which all of the Truffula Trees were cut down. In other words, bad for the environment, with little or no redeeming social value.
This message is not associated with or endorsed by the creators or the publishers of “The Lorax.” | https://medium.com/the-public-interest-network/bragging-about-plastic-bottles-for-all-natural-beverages-really-a6c718476219 | ['Kirk Weinert'] | 2019-06-10 23:16:46.249000+00:00 | ['Zero Waste', 'Drinks', 'Advertising', 'Environment', 'Plastic Pollution'] |
NumPy Illustrated: The Visual Guide to NumPy | 2. Matrices, the 2D Arrays
There used to be a dedicated matrix class in NumPy, but it is deprecated now, so I’ll use the words matrix and 2D array interchangeably.
Matrix initialization syntax is similar to that of vectors:
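A rough sketch of what this looks like (the shapes and values here are arbitrary examples):

import numpy as np

np.array([[1., 2., 3.], [4., 5., 6.]])   # 2x3 matrix from a nested list
np.zeros((3, 2))                         # 3x2 matrix of zeros
np.ones((3, 2), dtype=int)               # 3x2 matrix of integer ones
np.full((3, 2), 7.)                      # 3x2 matrix filled with 7.
np.empty((3, 2))                         # uninitialized, arbitrary contents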
Double parentheses are necessary here because the second positional parameter is reserved for the (optional) dtype (which also accepts integers).
Random matrix generation is also similar to that of vectors:
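For example (again, the shapes are arbitrary):

import numpy as np

np.random.randint(0, 10, (3, 2))   # random integers in [0, 10)
np.random.rand(3, 2)               # uniform floats in [0, 1)
np.random.uniform(1, 10, (3, 2))   # uniform floats in [1, 10)
np.random.randn(3, 2)              # samples from the standard normal distribution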
Two-dimensional indexing syntax is more convenient than that of nested lists:
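A small sketch, using a made-up 3x3 array:

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
a[1, 2]         # 6 -- row 1, column 2, no nested brackets needed
a[1, :]         # row 1 as a 1D array
a[:, 2]         # column 2 as a 1D array
b = a[:2, :2]   # the top-left 2x2 block -- a view, not a copy
a[0, 0] = 10
b[0, 0]         # 10 -- the change shows through the view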
Slices are views: no copying is actually done when slicing an array. When the array is modified, the changes are reflected in the slice as well.
The axis argument
In many operations (e.g., sum ) you need to tell NumPy if you want to operate across rows or columns. To have a universal notation that works for an arbitrary number of dimensions, NumPy introduces a notion of axis: The value of the axis argument is, as a matter of fact, the number of the index in question: The first index is axis=0 , the second one is axis=1 , and so on. So in 2D axis=0 is column-wise and axis=1 means row-wise.
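For instance, with a small example array:

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])
a.sum(axis=0)   # array([5, 7, 9]) -- one sum per column
a.sum(axis=1)   # array([ 6, 15])  -- one sum per row
a.sum()         # 21               -- no axis: everything summed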
Matrix arithmetic
In addition to ordinary operators (like +,-,*,/,// and **) which work element-wise, there’s a @ operator that calculates a matrix product:
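A quick illustration with two made-up 2x2 matrices:

import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])
a * b   # element-wise: [[ 5, 12], [21, 32]]
a @ b   # matrix product: [[19, 22], [43, 50]]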
As a generalization of broadcasting from scalar that we’ve seen already in the first part, NumPy allows mixed operations between a vector and a matrix, and even between two vectors:
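For example (the arrays here are arbitrary):

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])
v = np.array([10, 20, 30])
a + v    # the vector is broadcast to every row of the matrix
u = np.array([1, 2, 3])
u * v    # two vectors: per-element product, array([10, 40, 90])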
Note that in the last example it is a symmetric per-element multiplication. To calculate the outer product using an asymmetric linear algebra matrix multiplication, the order of the operands should be reversed:
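One possible sketch, with np.outer as an equivalent check:

import numpy as np

u = np.array([1, 2, 3])
v = np.array([10, 20, 30])
u[:, None] @ v[None, :]   # column times row: the (3, 3) outer product
np.outer(u, v)            # the same result via the dedicated helper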
Row vectors and column vectors
As seen from the example above, in the 2D context, the row and column vectors are treated differently. This contrasts with the usual NumPy practice of having one type of 1D arrays wherever possible (e.g., a[:,j] — the j-th column of a 2D array a — is a 1D array). By default 1D arrays are treated as row vectors in 2D operations, so when multiplying a matrix by a row vector, you can use either shape (n,) or (1, n) — the result will be the same. If you need a column vector, there are a couple of ways to cook it from a 1D array, but surprisingly transpose is not one of them:
Two operations that are capable of making a 2D column vector out of a 1D array are reshaping and indexing with newaxis :
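A short sketch:

import numpy as np

a = np.array([1, 2, 3])   # 1D, shape (3,)
a.T                       # still shape (3,) -- transposing a 1D array does nothing
a.reshape(-1, 1)          # shape (3, 1) column vector
a[:, None]                # shape (3, 1), the np.newaxis shortcut
a[None, :]                # shape (1, 3) explicit row vector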
Here the -1 argument tells reshape to calculate one of the dimension sizes automatically and None in the square brackets serves as a shortcut for np.newaxis , which adds an empty axis at the designated place.
So, there’s a total of three types of vectors in NumPy: 1D arrays, 2D row vectors, and 2D column vectors. Here’s a diagram of explicit conversions between those:
By the rules of broadcasting, 1D arrays are implicitly interpreted as 2D row vectors, so it is generally not necessary to convert between those two — thus the corresponding area is shaded.
Matrix manipulations
There are two main functions for joining the arrays:
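For example, with two small example blocks:

import numpy as np

a = np.zeros((2, 3))
b = np.ones((2, 3))
np.vstack((a, b)).shape   # (4, 3) -- stacked on top of each other
np.hstack((a, b)).shape   # (2, 6) -- stacked side by side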
Those two work fine with stacking matrices only or vectors only, but when it comes to mixed stacking of 1D arrays and matrices, only the vstack works as expected: The hstack generates a dimensions-mismatch error because as described above, the 1D array is interpreted as a row vector, not a column vector. The workaround is either to convert it to a row vector or to use a specialized column_stack function which does it automatically:
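A sketch of the mismatch and the two workarounds (the failing call is left commented out):

import numpy as np

a = np.zeros((2, 3))
v = np.array([7., 8.])
# np.hstack((a, v))            # ValueError: v is treated as a row, shapes don't match
np.hstack((a, v[:, None]))     # works: v reshaped into a (2, 1) column
np.column_stack((a, v))        # works: column_stack does the reshape for you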
The inverse of stacking is splitting:
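For example:

import numpy as np

a = np.arange(24).reshape(4, 6)
left, right = np.hsplit(a, 2)   # two (4, 3) pieces, split along columns
top, bottom = np.vsplit(a, 2)   # two (2, 6) pieces, split along rows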
Matrix replication can be done in two ways: tile acts like copy-pasting and repeat like collated printing:
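A small illustration:

import numpy as np

a = np.array([[1, 2],
              [3, 4]])
np.tile(a, (1, 2))        # [[1, 2, 1, 2], [3, 4, 3, 4]] -- the whole block copy-pasted
np.repeat(a, 2, axis=1)   # [[1, 1, 2, 2], [3, 3, 4, 4]] -- each column repeated in place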
Specific columns and rows can be deleted like that:
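For instance:

import numpy as np

a = np.arange(12).reshape(3, 4)
np.delete(a, 1, axis=1)        # drop column 1
np.delete(a, [0, 2], axis=0)   # drop rows 0 and 2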
The inverse operation is insert :
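For instance:

import numpy as np

a = np.arange(12).reshape(3, 4)
np.insert(a, 1, 99, axis=1)                 # a column of 99s before column 1
np.insert(a, 2, [10, 20, 30, 40], axis=0)   # a whole new row before row 2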
The append function, just like hstack , is unable to automatically transpose 1D arrays, so once again, either the vector needs to be reshaped or a dimension added, or column_stack needs to be used instead:
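A sketch (the failing call is commented out):

import numpy as np

a = np.zeros((2, 3))
v = np.array([7., 8.])
# np.append(a, v, axis=1)          # ValueError: the same 1D-vs-2D mismatch as hstack
np.append(a, v[:, None], axis=1)   # works: shape (2, 4)
np.column_stack((a, v))            # works as well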
Actually, if all you need to do is add constant values to the border(s) of the array, the (slightly overcomplicated) pad function should suffice:
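For example:

import numpy as np

a = np.ones((2, 3), dtype=int)
np.pad(a, 1)                                     # a border of zeros, result shape (4, 5)
np.pad(a, ((0, 1), (2, 0)), constant_values=9)   # one row below, two columns of 9s on the left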
Meshgrids
The broadcasting rules make it simpler to work with meshgrids. Suppose, you need the following matrix (but of a very large size):
Two obvious approaches are slow, as they use Python loops. The MATLAB way of dealing with such problems is to create a meshgrid:
The meshgrid function accepts an arbitrary set of indices; mgrid accepts just slices; and indices can only generate the complete index ranges. fromfunction calls the provided function just once, with the I and J arguments as described above.
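As an illustration, assume the goal is the matrix C with C[i, j] = i + j on a 5x5 grid (any other function of i and j works the same way):

import numpy as np

I, J = np.meshgrid(np.arange(5), np.arange(5), indexing='ij')
C = I + J
I, J = np.mgrid[0:5, 0:5]                          # the same full grids from slice syntax
C = I + J
C = np.fromfunction(lambda i, j: i + j, (5, 5))    # the function is called once with I and J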
But actually, there is a better way to do it in NumPy. There’s no need to spend memory on the whole I and J matrices (even though meshgrid is smart enough to only store references to the original vectors if possible). It is sufficient to store only vectors of the correct shape, and the broadcasting rules take care of the rest:
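The same example with broadcasting only:

import numpy as np

i = np.arange(5)
j = np.arange(5)
C = i[:, None] + j[None, :]   # same 5x5 result, but only two small 1D vectors are stored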
Without the indexing=’ij’ argument, meshgrid will change the order of the arguments: J, I= np.meshgrid(j, i) — it is an ‘xy’ mode, useful for visualizing 3D plots (see the example from the docs).
Aside from initializing functions over a two- or three-dimensional grid, the meshes can be useful for indexing arrays:
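A sketch, picking a block of rows and columns out of a made-up 6x6 array:

import numpy as np

a = np.arange(36).reshape(6, 6)
rows = np.array([1, 3, 5])
cols = np.array([0, 2])
I, J = np.meshgrid(rows, cols, indexing='ij')
a[I, J]                  # the 3x2 block at the crossings of those rows and columns
a[rows[:, None], cols]   # the same, using a sparse (broadcast) mesh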
Works with sparse meshgrids, too
Matrix statistics
Just like sum , all the other stats functions ( min/max , argmin/argmax , mean/median/percentile , std/var ) accept the axis parameter and act accordingly:
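For example:

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])
a.max(axis=0)                  # array([4, 5, 6]) -- the max of every column
a.mean(axis=1)                 # array([2., 5.])  -- the mean of every row
np.percentile(a, 50, axis=0)   # the median of every column: array([2.5, 3.5, 4.5])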
np.amin is just an alias of np.min to avoid shadowing the Python min when you write 'from numpy import *'
The argmin and argmax functions in 2D and above have an annoyance of returning the flattened index (of the first instance of the min and max value). To convert it to two coordinates, an unravel_index function is required:
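For instance:

import numpy as np

a = np.array([[3, 7, 1],
              [9, 2, 5]])
flat = np.argmax(a)               # 3 -- an index into the flattened array
np.unravel_index(flat, a.shape)   # (1, 0) -- the row and column of the maximum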
The quantifiers all and any are also aware of the axis argument:
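For example:

import numpy as np

a = np.array([[1, 0, 3],
              [4, 5, 6]])
(a > 0).all(axis=0)   # array([ True, False,  True]) -- per column
(a > 0).any(axis=1)   # array([ True,  True])        -- per row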
Matrix sorting
As helpful as the axis argument is for the functions listed above, it is as unhelpful for the 2D sorting:
It is just not what you would usually want from sorting a matrix or a spreadsheet: axis is in no way a replacement for the key argument. But luckily, NumPy has several helper functions which allow sorting by a column — or by several columns, if required:
1. a[a[:,0].argsort()] sorts the array by the first column:
Here argsort returns an array of indices of the original array after sorting.
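A small illustration of the difference:

import numpy as np

a = np.array([[3, 10],
              [1, 30],
              [2, 20]])
np.sort(a, axis=0)     # sorts every column on its own -- the rows get scrambled
a[a[:, 0].argsort()]   # reorders whole rows by the first column: [[1, 30], [2, 20], [3, 10]]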
This trick can be repeated, but care must be taken so that the next sort does not mess up the results of the previous one:
a = a[a[:,2].argsort()]
a = a[a[:,1].argsort(kind='stable')]
a = a[a[:,0].argsort(kind='stable')]
2. There’s a helper function lexsort which sorts in the way described above by all available columns, but it always performs row-wise, and the order of rows to be sorted is inverted (i.e., from bottom to top) so its usage is a bit contrived, e.g.
– a[np.lexsort(np.flipud(a[:,[2,5]].T))] sorts by column 2 first and then (where the values in column 2 are equal) by column 5.
– a[np.lexsort(np.flipud(a.T))] sorts by all columns in left-to-right order.
Here flipud flips the matrix in the up-down direction (to be precise, in the axis=0 direction, same as a[::-1,...] , where three dots mean “all other dimensions”) — so it’s all of a sudden flipud , not fliplr , that flips the 1D arrays.
3. There also is an order argument to sort , but it is neither fast nor easy to use if you start with an ordinary (unstructured) array. | https://medium.com/better-programming/numpy-illustrated-the-visual-guide-to-numpy-3b1d4976de1d | ['Lev Maximov'] | 2020-12-28 12:29:25.255000+00:00 | ['Programming', 'Python', 'Numpy', 'Numpy Array', 'Data Science'] |
Blessings | Blessings
Blessings are everywhere,
It is up to us
To recognize them. | https://medium.com/spiritual-secrets/blessings-85195049d9b1 | ['Ivette Cruz'] | 2020-12-14 15:39:43.123000+00:00 | ['Poetry', 'Self-awareness', 'Spirituality', 'Life Lessons', 'Spiritual Secrets'] |
Why Apple didn’t acquire Tesla | Photo by Andy Wang on Unsplash
Rumors surrounding a potential Apple Car began circulating online in 2014, and last week they began to stir back up. According to Reuters, an electric Apple Car could enter production as soon as 2024, with significant battery tech advancements. The article caught the attention of Tesla CEO Elon Musk — who shared an interesting story via tweet.
The ‘darkest days’ referred to 2017 when Tesla was struggling with producing the Model 3 sedan. So, just three years after Apple begins exploring electric vehicles, the industry’s leading company reaches out to discuss an acquisition.
So why did Tim Cook refuse the meeting?
An acquisition of Tesla would effectively catapult Apple into the market overnight, skipping years of Research and Development (R&D). The deal sounds great on paper, which is why Tim Cook’s decision is so interesting. Maybe he didn’t want Apple to seem monopolistic, running the risk of a tech break up similar to what Microsoft suffered in the 90s.
Photo by Scott Graham on Unsplash
Or maybe… Apple doesn’t need Tesla
Apple’s R&D budget was reported at $18.75 billion for 2020’s fiscal year. Compare that to Tesla’s 2019 R&D budget of $1.34 billion, and things start to get interesting. A large chunk of Apple’s R&D budget does go toward its current product lineup. Still, even a smaller fraction of the whopping $18.75 billion could put them on the fast track for an Apple Car.
Years of technological advancement can happen in months if you have enough money and a skilled workforce.
Apple has the money, but what about the workforce? A car presents engineering complications that don’t exist on a phone. Apple has been poaching Tesla employees since as early as 2015, reportedly offering signing bonuses up to $250,000 and a 60% salary increase. So, they can afford to buy their workforce.
OK, so Apple can afford to build a compelling vehicle from the ground up. But the deal with Tesla still would have made financial sense — setting them even further ahead. So, again, why would Tim Cook refuse a meeting?
The Apple Philosophy
When Apple announced that they would be transitioning their Mac lineup to their in-house ARM processors, many people were skeptical. To their surprise (or dismay), Apple delivered a top product at its price point. What Apple achieved with their M1 processors is due to its complete control over the hardware and software. It is a synergy that is, and will continue to be, unrivaled by other manufacturers.
Apple has their own Maps software, and they’ve been acquiring AI startups like Drive.ai, a company that specializes in autonomous driving. The implementation of AI systems with the company’s own Maps software on vehicle hardware they control could revolutionize autonomous driving in a way that other manufacturers cannot.
So, why did Tim Cook refuse the meeting? It’s all about control and making a cohesive product. It’s what Apple does, and it’s what they’re going to do with the Apple Car. | https://medium.com/macoclock/why-apple-didnt-acquire-tesla-7579016fa395 | ['J.P. Scott'] | 2020-12-29 06:25:41.303000+00:00 | ['Cars', 'Gadgets', 'Tesla', 'Technology', 'Apple'] |
4 Social Media Tips for Your Small Business | 4 Social Media Tips for Your Small Business
Resources for business owners during Small Business Week
We’re extremely proud to sponsor Small Business Week and celebrate the small businesses that do so much to help their communities and local economies thrive.
We’re honored to hit the road to meet small business owners all over the country and provide educational resources this week (and every week!) for small businesses.
Our Local Outreach Manager, Emma Vaughn, was in Washington, D.C. with our team to kick off Small Business Week with the U.S. Small Business Administration. Along with other experts, Emma spoke on a panel to give small business owners the tips they need to be successful on social media. Watch the full video here.
Interested in using your social media to get new customers and keep them coming back?
Check out our blog post below for 4 tips small business owners can use to post the right content at the right time on Facebook, Instagram, and Twitter, hashtag effectively on Twitter and Instagram, and finally, extend their customer service on social media.
1. Post the Right Content at the Right Time
Engaging with users across Facebook, Twitter, and Instagram gets more eyes on your customers’ pages, making it possible for them to discover their online community, spread word-of-mouth, and create conversations with current and potential customers. But what’s the best time of day to post, and what types of content work best?
Your content on these three platforms in particular has the power to get your followers familiar with and excited about what your business offers, so here are a few tips on when and what to post to keep your followers engaged and keep new customers coming in the door.
Pay attention to time of day/month/year
Think about the kinds of content your followers would want to see and when they want to see it.
Here are some examples: If you own a gym, let your customers know when you get new coaches and introduce them later in the day so you might draw people in after work — like you see here with CrossFit DC on Twitter.
In this Instagram post by Adams House B&B, they remind their fans and followers that spring is coming up as well as SXSW, which is a very popular festival in Austin in March, to remind potential guests to book their stay with them (over their competition) and before rooms get all booked up:
2. Vary Your Content
Your customers want to know the story of your business — so tell it! Give them interesting content they’ll come back to again and again — like engaging photos and videos, trivia, tips, and local love.
Check out this post by M2 Chiropractic with the theme, “Health Tip Tuesday.” If they are consistent with these tips on Tuesdays, you can see how their followers would be more likely to return to their Facebook page every Tuesday for more helpful health-related information. Every week is a good cadence for these kinds of tips, not every day. You don’t want to become too predictable to your followers.
Here are some additional great content ideas to fill your feed and keep your content fresh:
Post photos of the products you have in store, photos of the interior and exterior of your business, and photos of your team members on Twitter, Facebook, and Instagram. Show your followers what’s happening in your salon, pet store, yoga studio, or restaurant!
By showcasing what your business is all about on these platforms, you’ll entice your fans and followers to come by for a visit. But don’t forget to track and test. What works for one brand or business might not necessarily work for your posting strategy, so be sure to test, track, and test again. The best way to track your post performance is to use the Insights feature on Facebook and Twitter, and keep track of your engagement rate on Instagram to make sure that the number of followers you have syncs with the number of likes you’re getting per post.
3. Hashtag Effectively on Twitter and Instagram
Using hashtags on social media, especially on Instagram and Twitter, can be an effective way to enter new conversations and get your content in front of new eyes. But if you use them incorrectly, they can disrupt your messaging.
Users click on hashtags to find content that’s relevant to what they just looked at or read about. From a user standpoint, hashtags are used to categorize content, making the discovery of new or related articles and insights easy to do. But when hashtags are used incorrectly, people looking for new content have to dig through irrelevant, miscategorized content in order to get to what they’re actually looking for. This can disrupt the experience that you meant your customers to have with your content.
Don’t miss our full blog post on using hashtags on Twitter and Instagram here!
Instagram
Many users and businesses use hashtags on Instagram to increase their post reach and get their content in front of new users.
Using relevant hashtags in your caption (like you see here with Sur Restaurant) will get new eyes on your business’ Instagram content and help you boost your engagement.
What are relevant hashtags? Anything your followers would find interesting, or anything you think might help your post reach more people on the platform — that can be hashtags specific to your local area, or hashtags that are used widely on the platform.
You can add up to 30 hashtags per post, but when you go over 10, engagement starts to drop, according to TrackMaven.
Twitter
Tweets with hashtags are 33% more likely to get retweeted than tweets without hashtags, however, engagement will drop on Tweets with more than two hashtags, according to TrackMaven.
So, what hashtags should you use? The same rules that apply on Instagram apply on Twitter as well. Use hashtags that are relevant and interesting to your customers, local area, and industry. If your business’ account is public, any users who search for a hashtag you’re using just might find your tweet in their search results. You can also start conversations on Twitter by searching popular hashtags or keywords that relate to your business. Once you find tweets from people using those hashtags, then, you can start chatting!
4. Extend Your Customer Service Online
As a small business owner, you know how important it is to show your guests excellent customer service when they come into your local business. The same philosophy applies to your followers on social media. Consumers have come to expect quick responses from businesses and brands to their questions, comments, and requests on social media — which means that each and every message, mention, and wall post that comes in for your business on social media has the power to influence that consumer’s opinion of your business and their likelihood to visit.
Get a system in place to monitor all wall posts, direct messages, mentions, reviews, and tags that come in for your business on platforms like Facebook, Twitter, and Yelp.
Many of these online reach outs from your current and potential customers will be requests for business information, questions about your products and services, real time notifications of their visit to your business, or description of previous experiences. You want to respond to all of these communications in order to seem accessible, appreciative, and responsive to your customers.
Customers who receive a service request response through social media spend 20–40% more, so providing a quick and helpful response to every online interaction will positively impact your business’ social media ROI. | https://medium.com/main-street-hub/4-social-media-tips-for-your-small-business-5790e7daeb4 | ['Main Street Hub'] | 2018-05-01 13:00:57.376000+00:00 | ['Content Marketing', 'Marketing', 'Small Business', 'Social Media', 'Social Media Marketing'] |
I’ve Been Meditating for 48 Years | I’ve Been Meditating for 48 Years
I’ve learned an incredible amount about myself and the world around me
Illustration: Benjavisa/Getty Images
In the fall of 1972, I was beginning my last year in college. The Watergate scandal was unfolding. The war in Vietnam was beginning to wind down, but unrest and turmoil still plagued college campuses across the country — and plenty of people were smoking weed and washing psychedelics down with cheap wine.
I was done with psychedelics and weed. I wanted to believe in something more but had no idea what that was or how to find it. When I wasn’t rebelling against some form of authority — parents, university leaders, police, the government, big business — I wanted to see a more peaceful world, and that meant finding an inner truth, a higher truth, a truth I didn’t see around me.
The summer before my final semester, I learned about a young Indian boy, a teacher, who was revealing some sort of inner truth. I felt an awakening deep within me, a recognition of something familiar, a touch of déjà vu. I wanted whatever this young teacher had to offer.
Within a few months, I was sitting cross-legged in a cramped attic in Columbus, Ohio, with 30 other hippies and seekers listening to a bald, middle-aged Indian guy in flowing saffron robes. He was a follower of the teacher, a mahatma (“great soul”). I was mesmerized and learned about Guru Maharaji, whose birth name was Prem Rawat.
After another evening, I was selected to return and receive the “knowledge.” I was taught four techniques of meditation. The session drew to a close, and his parting words were these: “Try it. If you like it, continue. If you don’t like it, then don’t do it.”
I thought to myself, “Cool. That works.”
The next day, I sat down to meditate, closing my eyes and going within. There are many types of meditation. Some use mantras, visualization, and points of focus within the body. The meditation I learned is called raj yoga (“royal union”). The four techniques — ancient practices going back hundreds of years — help take our outwardly focused awareness and turn it inward.
After about 30 minutes, I started to sink into a calm, alert, relaxed place. My breathing was deep and slow as my attention went inside. I stayed in that experience another 15 minutes or so, and when I opened my eyes, I felt a sense of peace and openness. At the time, I was used to getting a jolt from whatever drug or ecstatic experience I was on. This was something else — something much subtler. But there was enough there to keep me going.
I graduated in June 1973 and moved to Boston to join a community of followers of Prem Rawat. At that time, all the full-time roles in the organization (known as Elan Vital) required you to live communally and be monastic, which meant no sex, no drinking or drugs, and no salary. I wanted to immerse myself in the experience, and I did so for 10 years.
Photo of the author by Mariclaire
After holding various roles in the organization, I was asked by Prem Rawat to become a meditation instructor. I accepted and traveled for several years throughout the world teaching the techniques to people in small villages in Ghana and Cameroon, then in Lagos, Nairobi, Cairo, Alexandria, Turino, Vienna, Athens, Belgrade, Sarajevo, Amsterdam, London, and throughout Australia, New Zealand, Canada, and the United States.
I finished my time with Elan Vital in Miami in 1984. I got married, raised a family, and began my career in leadership consulting, sales, and management. I’ve continued to meditate virtually every day of my life since then. Here is what I’ve learned from my years of meditation:
There is an inner experience of light, tranquility, energy, and peace
When I connect deeply, I feel enveloped in a universal essence. A sense of fulfillment seems to well up from within, and I can feel a gentle euphoria vibrating throughout my body. I become an observer of my thoughts, and I know the brilliant swirling light I see pulsating like a galaxy of stars before my closed eyes is my home. I came from there, and I will return there—the life force.
When I finish my meditation practice, I feel grateful for being alive and knowing my own essence. My mind is clear and focused, the mental chatter is quiet, and my entire being feels full of joy, happiness, and love. I start most of my days like this and do my best to stay present to the experience, no matter what happens. Some days I’m more successful than others. The purpose of meditation is not to control the mind. A quiet mind happens as a result of a connection to an inner experience of peace.
In my meditation, there is no mantra or visualization. The primary technique involves awareness of breath, something common in many forms of meditation and yoga. The active mind needs something to help still it, and the breath is the most natural thing available to us.
Our respiratory process is the only bodily system that functions consciously and unconsciously, so the act of focusing on the breath helps bring our awareness in the present moment. The more our breathing and our mind’s racing thoughts slow down, the more noticeable a rich inner experience becomes.
After initially trying too hard to concentrate on my breath, I finally learned to relax into a state of doing nothing, of just letting thoughts go and returning to the gentle inflow and outflow of the breath. The more I did this, the more I discovered that meditation is actually doing nothing, which opens the door to a vast pool of energy — my life force within me.
We become what we put our attention on
Meditation was not easy for me in the beginning. I often found it challenging to sit quietly for 30–60 minutes. My body was uncomfortable, my to-do list kept appearing, noises were distracting, and I often felt impatient. Where is that inner light? Why is it taking so long?
Over the years, it’s become more manageable, and like anything in life, desire and thirst are motivators for action. I may want something, but if I’m not willing to make an effort toward it, nothing happens. If you want that inner experience and make the smallest step toward it, you will have it. If I want peace and joy, I need to do my part. Some days it goes better than others.
Don’t listen to any mental bullshit. The talkative mind/ego will attempt, in any possible way, to disrupt your state of equilibrium by throwing up doubt, fear, guilt, shame, and ridiculous thoughts like: “You can’t meditate.” “You can’t sit still.” “All the stuff you have to do is more important than this.” “Nothing is happening.” “This is a waste of time.”
When thoughts like these appear, remember that we choose every second where to put our attention. When I meditate, I see my thoughts floating in and out. I don’t resist or fight them. I put my focus back on my breath. Soon enough, the thoughts dissipate. In the present moment, doubt, fear, worries, and concerns over what just happened or what might happen don’t exist. They are vapor-like ghosts vying for our attention. They are false prophets.
When we meditate, we can see our thoughts as separate from ourselves. In that place of observing, we’re more connected to our consciousness, our inner self.
Consistency matters
Regular practice is essential. Even if your goal is 30 minutes, doing 10 minutes is better than nothing at all. Just because I’ve meditated for 48 years doesn’t mean I can start taking days off. It’s like hooking yourself up to a battery charger. If you want the charge, you have to plug into the source regularly.
Find the right time and place. For me, the right time is in the morning, and the right place is a comfortable chair in dim light or no light at all. Some people like soft, ambient New Age music, but I find it distracting. If there are other people in the house, I let them know I need some quiet time.
Having a breath-based practice is practical. I do an eyes-open version throughout the day by keeping my awareness on my breath. When I walk, sit in meetings, speak with someone, play tennis, work out at the gym, or stand in line at the grocery store, I simply pay attention to my breathing. I feel centered, focused, calm, and much more likely to be intentional in my actions rather than reactive.
I don’t beat myself up for drifting off into my thoughts or missing a day of meditation. If I struggle one day getting centered or my mind races out of control, I don’t panic. I just let it go. Years ago, an older man shared with me what he’d learned in his life: “We are our own judge, jury, and jailer.” It’s so true. I’m the one that passes judgment on myself, creating my private jail.
Forget about enlightenment. For a short time, I thought enlightenment was like the last stop on a subway, a place the spiritually diligent arrived at because of all the meditation they did. It’s not. There is no destination. The journey and the experience take place in every moment.
Meditation can build mental toughness
I have learned a great deal about my attitude, my ability to choose my response, and how to remain steadfast amid the negative thoughts created by my mind. Not only could I observe them, but I could also feel them trying to overtake me and upset my equilibrium. Sometimes my thoughts screamed at me to stop meditating or to walk away from everything. At times, they were relentless. I practiced flexing, not breaking. I imagined myself as a tree, rooted in the soil, letting hurricane-force winds blow over me, my branches bending. There were times during the first 10 years of meditating when I almost broke entirely.
When I left my full-time role with Elan Vital, started a family, and entered into corporate America, I began to more fully appreciate that I’d taken a real-time course on mental toughness. I learned patience, prayer, faith in myself, and the importance of being calm and centered. I’d learned to never give up on what my heart knows and wants.
Meditation alone won’t make us peaceful
We need to live a peaceful life. For me, that means living my life in alignment with my highest values and being proud of how I behave. I have four guiding values now: integrity, responsibility, humility, and respect for others.
When I was in my forties and fifties, despite having a sound meditative practice for many years, I made some abysmal choices that didn’t reflect my highest values. I paid the price and experienced significant stress, trauma, sorrow, and depression. I could meditate during all this and experience some degree of inner peace, but when I opened my eyes, I still faced chaos. Mine was not exactly a peaceful life. I learned that the experience of inner contentment is always there, but I need to invite it into my life by making good choices.
We are spiritual, emotional, rational, physical, mental beings. I realize now — perhaps more than ever before — the need to work on all aspects of myself to have a vibrant, abundant, healthy, peaceful life. Meditation is just one part of the work.
After my 10 years of living communally as a monastic, meditating morning and night for hours, I emerged spiritually rich, but my psychological health was poor. I had put all my resources into meditation at the expense of other parts of myself. It took years of study, therapy, and life experience to become more balanced.
Over the last 20 years, I’ve given significant attention to understanding my own family of origin, my psychological strengths and weaknesses, my childhood wounds, my unconscious biases, and the belief systems I have. As a result of this work, I feel more complete as a human being now. And, as I continue to discover, the work never ends either.
We’re all connected
The way I treat myself is the way I treat others. I project discontentment on others if I am discontent. If I am loving and kind to myself, I will be to others. I know that I must be kind, loving, and forgiving of myself if I am to have any chance of living a peaceful life. I need to understand and accept my darkness, my shadow. If I deny it and push it away, I know it will play out subconsciously, undermining me.
Suffering, war, hate, and fear come from the false belief of separation. We’ve forgotten that a thread of life connects us. Meditation can unlock the experience that will show you that thread.
When I was in a small village in Ghana, I taught meditation techniques to an older man, a bricklayer. He was completely uneducated, his hands rough and scarred, his body toughened by years of working in the sun. He wanted to know his inner self. He wanted to experience inner joy and love. I showed him the techniques, and he sat quietly to meditate. When he opened his eyes, he beamed with joy, and tears streamed down his face.
He understood.
I understood too.
Every human being, no matter where they live or who they are, has the spark of life in them. Knowing that spark opens up our hearts, and we experience more connection to every human being and to this beautiful planet we live on. | https://humanparts.medium.com/ive-been-meditating-for-48-years-6fc284071c90 | ['Don Johnson'] | 2020-05-08 19:06:12.621000+00:00 | ['Self Improvement', 'Life', 'Mindfulness', 'Meditation', 'Mental Health'] |